Category Archives: .net

Developing an ASP.NET MVC app with Azure Active Directory: an ordeal

Regular readers will know that I am working on a simple (I thought) ASP.NET MVC application which is hosted on Azure and uses Azure Blob Storage.

So far so good; but since this business uses Office 365 it seemed to me logical to have users log in using Azure Active Directory (AD). Visual Studio 2013, with the latest update, has a nice wizard to set this up. Just complete the following dialog when starting your new project:

[screenshot: the Visual Studio new-project authentication dialog]

This worked fairly well, and users can log in successfully using Azure AD and their normal Office 365 credentials.

I love this level of integration and it seems to me key and strategic for the Microsoft platform. If an employee leaves, or changes role, just update Active Directory and all application access comes into line automatically, whether on premise or in the cloud.

The next stage though was to define some user types; to keep things simple, let us say we have an AppAdmin role for users with full access to the application, and an AppUser role for users with limited access. Other users in the organisation do not need access at all and should not be able to log in.

The obvious way to do this is with AD groups, but I was surprised to discover that there is no easy way to find out to which groups an AD user belongs. The Azure AD integration which the wizard generates is only half done. Users can log in, and you can programmatically retrieve basic information including first name, last name, User Principal Name and object ID, but nothing further.

Fair enough, I thought, there will be some libraries out there that fill the gap; and this is how the nightmare begins. The problem is that this is the cutting edge of .NET cloud development and is an area of rapid change. Yes there are samples out there, but each one (including the official ones on MSDN) seems to be written at a different time, with a different approach, with different .NET assembly dependencies, and varying levels of alpha/beta/experimental status.

The one common thread is that to get the AD group information you need to use the Graph API, a REST API for querying and even writing to Azure Active Directory. In January 2013, identity expert Vittorio Bertocci (Principal Program Manager in the Windows Azure Active Directory team at Microsoft) wrote a helpful post about how to restore IsInRole() and [Authorize] in ASP.NET apps using Azure AD – exactly what I wanted to do. He describes essentially a manual approach, though he does make use of a library called Azure Authentication Library (AAL) which you can find on NuGet (the package manager for .NET libraries used by Visual Studio) described as a Beta.

That would probably work, but AAL is last year’s thing and you are meant to use ADAL (Active Directory Authentication Library) instead. ADAL is available in various versions ranging from 1.0.3 which is a finished release, to 2.6.2 which is an alpha release. Of course Bertocci has not updated his post so you can use the obsolete AAL beta if you dare, or use ADAL if you can figure out how to amend the code and which version is the best/safest to employ. Or you can write your own wrapper for the Graph API and bypass all the NuGet packages.

I searched for a better sample, but it gets worse. If you browse around MSDN you will probably come across this article along with this sample which is a Task Tracker application using Azure AD, though note the warnings:

NOTE: This sample is outdated. Its technology, methods, and/or user interface instructions have been replaced by newer features. To see an updated sample that builds a similar application, see WebApp-GraphAPI-DotNet.

Despite the warnings, the older sample is widely referenced in Microsoft posts like this one by Rick Anderson.

OK then, let’s look at the shiny new sample, even though it is less well documented. It is called WebApp-GraphAPI-DotNet and includes code to get the user profile, roles, contacts and groups from Azure AD using the latest Graph API client: Microsoft.Azure.ActiveDirectory.GraphClient. This replaces an older effort called the GraphHelper which you will find widely used elsewhere.

If you dig into this new sample though, you will find a ton of dependencies on pre-release assemblies. You are not just dealing with the Graph API, but also with OWIN (Open Web Interface for .NET), which seems to be Microsoft’s current direction for communication between web applications.

After messing around with NuGet packages and trying to get WebApp-GraphAPI-DotNet working I realised that I was not happy with all this preview code which is likely to break as further updates come along. Further, it does far more than I want. All I need is actually contained in Bertocci’s January 2013 post about getting back IsInRole.

I ended up patching together some code using the older GraphHelper (as found in the obsolete Task Tracker application) and it is working. I can now use IsInRole based on AD groups.
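For illustration, here is the general shape of that solution – a rough sketch rather than my exact code, and GetGroupNamesFromGraph is a hypothetical helper standing in for the Graph API calls. After sign-in, you fetch the user’s group memberships and surface them as Role claims on the ClaimsIdentity, after which IsInRole and [Authorize(Roles = "AppAdmin")] behave as expected:

using System.Collections.Generic;
using System.Security.Claims;

public static class GroupsToRoles
{
    public static void AddRoleClaims(ClaimsIdentity identity, string userObjectId)
    {
        // Hypothetical helper: returns the display names of the user's AD groups,
        // e.g. via a REST call to https://graph.windows.net/{tenant}/users/{id}/memberOf
        IEnumerable<string> groups = GetGroupNamesFromGraph(userObjectId);

        foreach (string group in groups)
        {
            // Surface each group as a Role claim so IsInRole() and [Authorize] see it
            identity.AddClaim(new Claim(ClaimTypes.Role, group));
        }
    }

    private static IEnumerable<string> GetGroupNamesFromGraph(string userObjectId)
    {
        // The Graph API call is the part the various samples all implement differently
        throw new System.NotImplementedException();
    }
}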

This is a mess. It is a simple requirement and it should not be necessary to plough through all these complicated and conflicting documents and samples to achieve it.

Notes from the field: putting Azure Blob storage into practice

I rashly agreed to create a small web application that uploads files into Azure storage. Azure Blob storage is Microsoft’s equivalent to Amazon’s S3 (Simple Storage Service), a cloud service for storing files of up to 200GB.

File upload performance can be an issue, though if you want to test how fast your application can go, try it from an Azure VM: performance is fantastic, as you would expect from an Azure to Azure connection in the same region.

I am using ASP.NET MVC and thought a sample like this official one, Uploading large files using ASP.NET Web API and Azure Blob Storage, would be all I needed. It is a start, but the method used only works for small files. What it does is:

1. Receive a file via HTTP POST.

2. Once the file has been received by the web server, call CloudBlob.UploadFile to upload the file to Azure blob storage.

What’s the problem? Leaving aside the fact that CloudBlob is deprecated (you are meant to use CloudBlockBlob), there are obvious problems with files that are more than a few MB in size. The expectation today is that users see some sort of progress bar when uploading, and a well-written application will be resistant to brief connection breaks. Many users have asymmetric internet connections (such as ADSL) with slow upload speeds; large files will take a long time and something can easily go wrong. The sample is not resilient at all.

Another issue is that web servers do not appreciate receiving huge files in one operation. Imagine you are uploading the ISO for a DVD, perhaps a 3GB file. The simple approach of posting the file and having the web server upload it to Azure blob storage introduces obvious strain and probably will not work, even if you do mess around with maxRequestLength and maxAllowedContentLength in ASP.NET and IIS. I would not mind so much if the sample were not called “Uploading large files”; the author perhaps has a different idea of what is a large file.

Worth noting too that one developer hit a bug with blobs greater than 5.5MB when uploaded over HTTPS, which most real-world businesses will require.

What then are you meant to do? The correct approach, as far as I can tell, is to send your large files in small chunks called blocks. These are uploaded to Azure using CloudBlockBlob.PutBlock. You identify each block with an ID string, and when all the blocks are uploaded, call CloudBlockBlob.PutBlockList with a list of IDs in the correct order.
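As a minimal sketch of that sequence – my illustration using the storage client library of the time, with error handling and retries omitted – assuming you already have a CloudBlockBlob reference:

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlockUploader
{
    public static void UploadInBlocks(CloudBlockBlob blob, string path, int blockLength)
    {
        var blockIds = new List<string>();
        using (FileStream fs = File.OpenRead(path))
        {
            var buffer = new byte[blockLength];
            int blockNum = 0;
            int bytesRead;
            while ((bytesRead = fs.Read(buffer, 0, blockLength)) > 0)
            {
                // Block IDs must be base64-encoded and all the same length
                string blockId = Convert.ToBase64String(
                    Encoding.UTF8.GetBytes(blockNum.ToString("d6")));
                blob.PutBlock(blockId, new MemoryStream(buffer, 0, bytesRead), null);
                blockIds.Add(blockId);
                blockNum++;
            }
        }
        // Committing the block list is what turns the blocks into a blob
        blob.PutBlockList(blockIds);
    }
}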

This is the approach taken by Suprotim Agarwal in his example of uploading big files, which works and is a great deal better than the Microsoft sample. It even has a progress bar and some retry logic. I tried this approach, with a few tweaks. Using a 35MB file, I got about 80 KB/s with my ADSL broadband, a bit worse than the performance I usually get with FTP.

Can performance be improved? I wondered what benefit you get from uploading blocks in parallel. Azure Storage does not mind in what order the blocks are uploaded. I adapted Agarwal’s sample to use multiple AJAX calls, each uploading a block, experimenting with up to 8 simultaneous uploads from the browser.

The initial results were disappointing. Eventually I figured out that I was not actually achieving parallel uploads at all. The reason is that the application uses ASP.NET session state, and IIS will block multiple connections in the same session unless you mark your ASP.NET MVC controller class with the [SessionState(SessionStateBehavior.ReadOnly)] attribute.
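In code, that looks something like this (the controller name is my example):

using System.Web.Mvc;
using System.Web.SessionState;

// Read-only session state lets IIS process concurrent requests in the same session
[SessionState(SessionStateBehavior.ReadOnly)]
public class UploadController : Controller
{
    // upload actions go here
}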

I fixed that, and now I do get multiple parallel uploads. Performance improved to around 105 KB/s, worthwhile though not dramatic.

What about using a Windows desktop application to upload large files? I was surprised to find little improvement. But can parallel uploading help here too? The answer is that it should happen anyway, handled by the .NET client library, according to this document:

If you are writing a block blob that is no more than 64 MB in size, you can upload it in its entirety with a single write operation. Storage clients default to a 32 MB maximum single block upload, settable using the SingleBlobUploadThresholdInBytes property. When a block blob upload is larger than the value in this property, storage clients break the file into blocks. You can set the number of threads used to upload the blocks in parallel using the ParallelOperationThreadCount property.
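To illustrate – my sketch, not from the document, and the values are arbitrary – those properties live on BlobRequestOptions:

using System.IO;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ClientManagedUpload
{
    public static void Upload(CloudBlockBlob blob, string path)
    {
        var options = new BlobRequestOptions
        {
            // Uploads larger than this are split into blocks automatically
            SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024,
            // How many blocks the client library uploads in parallel
            ParallelOperationThreadCount = 4
        };
        blob.UploadFromFile(path, FileMode.Open, accessCondition: null, options: options);
    }
}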

It sounds as if there is little advantage in writing your own chunking code, except that if you just call the UploadFromFile or UploadFromStream methods of CloudBlockBlob, you do not get any progress notification event (though you can get a retry notification from an OperationContext object passed to the method). Therefore I looked around for a sample using parallel uploads, and found this one from Microsoft MVP Tyler Doerksen, using C#’s Parallel.For.

Be warned: it does not work! Doerksen’s approach is to upload the entire file into memory (not great, but not as bad as on a web server), send it in chunks using CloudBlockBlob.PutBlock, adding each block ID to a collection at the same time, and then to call CloudBlockBlob.PutBlockList. The reason it does not work is that the order in which Parallel.For iterations execute is indeterminate, so the block IDs are unlikely to end up in the right order.

I fixed this, it tested OK, and then I decided to further improve it by reading each chunk from the file within the loop, rather than loading the entire file into memory. I then puzzled over why my code was broken. The files uploaded, but they were corrupt. I worked it out. In the following code, fs is a FileStream object:

fs.Position = x * blockLength;
bytesread = fs.Read(chunk, 0, currentLength);

Spot the problem? Since fs is a variable declared outside the loop, other threads were setting its position during the read operation, with random results. I fixed it like this:

lock (fs)
{
    fs.Position = x * blockLength;
    bytesread = fs.Read(chunk, 0, currentLength);
}

and the file corruption disappeared.
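Putting the pieces together, the corrected parallel version looks roughly like this – a sketch under the same assumptions as before, with retries and progress reporting omitted. The two fixes are indexing the block IDs by block number, so the commit order is right regardless of execution order, and locking the shared FileStream:

using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ParallelBlockUploader
{
    public static void Upload(CloudBlockBlob blob, string path, int blockLength)
    {
        long fileLength = new FileInfo(path).Length;
        int blockCount = (int)((fileLength + blockLength - 1) / blockLength);
        // Index by block number so the IDs stay in order however the loop runs
        var blockIds = new string[blockCount];

        using (FileStream fs = File.OpenRead(path))
        {
            Parallel.For(0, blockCount, x =>
            {
                int currentLength = (int)Math.Min(blockLength, fileLength - (long)x * blockLength);
                var chunk = new byte[currentLength];
                lock (fs) // fs.Position is shared state, so serialise the seek-plus-read
                {
                    fs.Position = (long)x * blockLength;
                    fs.Read(chunk, 0, currentLength);
                }
                string blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(x.ToString("d6")));
                blob.PutBlock(blockId, new MemoryStream(chunk), null);
                blockIds[x] = blockId;
            });
        }
        blob.PutBlockList(blockIds);
    }
}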

I am not sure why, but the manually coded parallel uploads seem to improve performance slightly, though not dramatically, to around 100-105 KB/s – almost exactly what my ASP.NET MVC application achieves over my broadband connection.


There is another approach worth mentioning. It is possible to bypass the web server and upload directly from the browser to Azure storage. To do this, you need to allow cross-origin resource sharing (CORS) as explained here. You also need to issue a Shared Access Signature, a temporary key that allows read-write access to Azure storage. A guy called Blair Chen seems to have this all figured out, as you can see from his Azure speed test and jazure JavaScript library, which makes it easy to upload a blob from the browser.
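Switching on CORS for the storage account is itself done through the storage API. A sketch with the .NET client library (the origin and settings are placeholders):

using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

public static class CorsSetup
{
    public static void EnableCors(CloudStorageAccount account)
    {
        CloudBlobClient client = account.CreateCloudBlobClient();
        ServiceProperties props = client.GetServiceProperties();
        props.Cors.CorsRules.Add(new CorsRule
        {
            AllowedOrigins = new List<string> { "https://www.example.com" },
            AllowedMethods = CorsHttpMethods.Put | CorsHttpMethods.Get,
            AllowedHeaders = new List<string> { "*" },
            ExposedHeaders = new List<string> { "*" },
            MaxAgeInSeconds = 3600
        });
        client.SetServiceProperties(props);
    }
}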

I was contemplating going that route, but it seems that performance is no better (judging by the Test Upload Big Files section of Chen’s speed test), so I should probably be content with the parallel JavaScript upload solution, which avoids fiddling with CORS.

Overall, has my experience with the Blob storage API been good? I have not found any issues with the service itself so far, but the documentation and samples could be better. This page should be the jumping off point for all you need to know for a basic application like mine, but I did not find it easy to find good samples or documentation for what I thought would be a common scenario, uploading large files with ASP.NET MVC.

Update: since writing this post I have come across this post by Rob Gillen which addresses the performance issue in detail (and links to working Parallel.For code); however I suspect that since the post is four years old the conclusions are no longer valid, because of improvements to the Azure storage client library.

Visual Studio “14” announced, preview available with “Roslyn” open source compiler

Microsoft’s Soma Somasegar has announced the next version of Visual Studio, currently known as Visual Studio 14, but likely to be fully released in 2015 (and, I am guessing, likely to be called Visual Studio 2015).

This is a major release. It includes a new VB and C# compiler which is itself written in managed code, codenamed Roslyn. The open source Roslyn project provides new APIs that enable more powerful IDE features. Visual Basic is getting refactoring support for the first time.

The preview also includes a major update to ASP.NET that unifies ASP.NET MVC and the ASP.NET Web API, and has a new deployment model and developer experience:

Thanks to the Roslyn compiler, if you change “.cs” files or the project.json file and want to see the change in the browser, you don’t need to build the project any more. Just refresh the browser.

There is no IIS Express, nor IIS, involved when you run from the command line. It means that you can publish your website to a USB drive, and run it by double-clicking the web.cmd file!

On the C++ side, there is improved C++11 support and more features from C++14:

The Visual Studio "14" CTP includes support for user-defined literals, noexcept, alignof and alignas, and inheriting constructors from C++11, generalized lambda capture, auto function return type deduction, and generic lambdas from C++14, as well as many more new C++ features.

says Somasegar. There is also a refactored C Runtime (CRT):

msvcr140.dll no longer exists. It is replaced by a trio of DLLs: vcruntime140.dll, appcrt140.dll, and desktopcrt140.dll.

If you install the CTP (mine is downloading) use a spare machine or VM; it is an early preview that does not work side-by-side with other versions and the only uninstall may be to flatten the machine:

Installing a CTP release will place a computer in an unsupported state. For that reason, we recommend only installing CTP releases in a virtual machine, or on a computer that is available for reformatting.

Xamarin 3.0 brings iOS visual design to Visual Studio, cross-platform XAML, F#, NuGet and more

Xamarin has announced the third version of its cross-platform tools, which use C# and .NET to target multiple platforms, including iOS, Android and Mac OS X.

Xamarin 3.0 is a big release. In summary:

Xamarin Designer for iOS

Using a visual designer for iOS Storyboard projects, you can create and modify a GUI in both Visual Studio and Xamarin Studio (Xamarin’s own IDE). The designer uses the native Storyboard format, so you can open and modify existing files created in Xcode on the Mac. The technology here is amazing, since your iOS controls are rendered remotely on a Mac and transmitted to the designer on Windows. See here for a quick hands-on.

Xamarin Forms

Xamarin has created the cross-platform GUI framework that it said it did not believe in. It is based on XAML though not compatible with Microsoft’s existing XAML implementations. There is no visual designer yet.

Why has Xamarin changed its mind? Pressure from enterprise customers, according to what I heard from CEO Nat Friedman. They want to make internal mobile apps with many forms, and do not want to rewrite the GUI code for every mobile platform they support.

Friedman made the point that Xamarin Forms apps still render as native controls. There is no drawing code in Xamarin Forms.

“The challenge for us in building Xamarin Forms was to give people enhanced productivity without compromising the native approach. The mix and match approach, where you can mix in native code at any point, you can get a handle for the native control – we think we’ve got the right compromise. And we’re not forcing Xamarin Forms on you, this is just an option,”

he told me.

Again, there is a quick hands-on here.

F# support

F# is now officially supported in Xamarin projects. This brings functional programming to Xamarin, and will be warmly welcomed by the small but enthusiastic F# community (including, as I understand it, key .NET users in the financial world).

Portable Class Libraries

Xamarin now supports Microsoft’s Portable Class Libraries, which let you state what targets you want to support, and have Visual Studio ensure that you write compatible code. This also means that library vendors can easily support Xamarin if they choose to do so.

NuGet Packages

The NuGet package manager has transformed the business of getting hold of new libraries for use in Visual Studio. Now you can use it with Xamarin in both Visual Studio and Xamarin Studio.

Microsoft partnership

Perhaps the most interesting part of my interview with Nat Friedman was what he said about the company’s partnership with Microsoft. Apparently this is now close both from a technical perspective, and for business, with Microsoft inviting Xamarin for briefings with key customers.

Hands on with Xamarin 3.0: a cross-platform breakthrough for Visual Studio

Today Xamarin announced version 3.0 of its cross-platform mobile development tools, which let you target Android and iOS with C# and .NET. I have been trying a late beta preview.

In order to use Xamarin 3.0 with iOS support you do need a Mac. However, you can do essentially all of your development in Visual Studio, and just use the Mac for debugging.

To get started, I installed Xamarin 3.0 on both Windows (with Visual Studio 2013 installed) and on a Mac Mini on the same network.


Unfortunately I was not able to sit back and relax. I got an error installing Xamarin Studio, following which the installer would not proceed further. My solution was to download the full DMG (Mac virtual disk image) for Xamarin Studio and run that separately. This worked, and I was able to complete the install with the combined installer.

When you start a Visual Studio iOS project, you are prompted to pair with a Mac. To do this, you run a utility on the Mac called Xamarin.iOS Build Host, which generates a PIN. You enter the PIN in Visual Studio and then pairing is active.


Once paired, you can create or open iOS Storyboard projects in Visual Studio, and use Xamarin’s amazing visual designer.

[screenshot: an iOS Storyboard file open in the Visual Studio 2013 designer]

What you are seeing is a native iOS Storyboard file open in Visual Studio 2013 and rendering the iOS controls. On the left is a palette of visual components I can add to the Storyboard. On the right is the normal Visual Studio solution explorer and property inspector.

The way this works, according to what Xamarin CEO Nat Friedman told me, is that the controls are rendered using the iOS simulator on the Mac, and then transmitted to the Windows designer. Thus, what you see is exactly what the simulator will render at runtime. Friedman says it is better than the Xcode designer.

“The way we do event handling is far more intuitive than Xcode. It supports the new iOS 7 auto-layout feature. It allows you to live preview custom controls. Instead of getting a grey rectangle you can see it live rendered inside the canvas. We use the iOS native format for Storyboard files so you can open existing Storyboard files and edit them.”

I made a trivial change to the project, configured the project to debug on the iOS simulator, and hit Start. On the Mac side, the app opened in the simulator. On the Windows side, I have breakpoint debugging.


Now, I will not pretend that everything ran smoothly in the short time I have had the preview. I have had problems with the pairing after switching projects in Visual Studio. I also had to quit and restart the iOS Simulator in order to get rendering working again. This is an amazing experience though, combining remote debugging with a visual designer on Visual Studio in Windows that remote-renders design-time controls.

Still, time to look at another key new feature in Xamarin 3: Xamarin Forms. This is none other than our old friend XAML, implemented for iOS and Android. The Mono team has some experience implementing XAML on Linux, thanks to the Moonlight project which did Silverlight on Linux, but this is rather different. Xamarin Forms does not do any custom drawing, but wraps native controls. In other words, it is like the Eclipse SWT approach for Java, and not like the Swing approach which does its own drawing. This is in keeping with Xamarin’s philosophy of keeping apps as native as possible, even though the very existence of a cross-platform GUI framework is something of a compromise.

I have not had long to play with this. I did create a new Xamarin Forms project, and copy a few lines of XAML from a sample into a shared XAML file. Note that Xamarin Forms uses Shared Projects in Visual Studio, the same approach used by Microsoft’s Universal Apps. However, Xamarin Forms apps are NOT Universal Apps, since they do not support Windows 8 (yet).


In a Shared Project, you have some code that is shared, and other code that is target-specific. By default hardly any code is shared, but you can move code to the shared node, or create new items there. I created XamFormsExample.xaml in the shared node, and amended App.cs so that it loads automatically. Then I ran the project in the Android emulator.
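The wiring is minimal. Here is roughly what it looks like – a sketch with my own file and class names, and note that the early Xamarin Forms template exposes a static GetMainPage method on the App class:

using Xamarin.Forms;

// Code-behind for the shared XamFormsExample.xaml page;
// InitializeComponent is generated from the XAML at build time
public partial class XamFormsExample : ContentPage
{
    public XamFormsExample()
    {
        InitializeComponent();
    }
}

public class App
{
    // Each platform-specific project calls this to get the page to display
    public static Page GetMainPage()
    {
        return new XamFormsExample();
    }
}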


I was also able to run this on iOS using the remote connection.

I noticed a few things about the XAML. The namespace is:

xmlns="http://xamarin.com/schemas/2014/forms"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"

I have not seen this before. Microsoft’s XAML always seems to have a “2006” namespace. For example, this is for a Universal App:

xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"

However, XAML 2009 does exist and apparently can be used in limited circumstances:

In WPF, you can use XAML 2009 features, but only for XAML that is not WPF markup-compiled. Markup-compiled XAML and the BAML form of XAML do not currently support the XAML 2009 language keywords and features.

It’s odd, because of course Xamarin’s XAML is cut-down compared to Microsoft’s XAML. That said, I am not sure of the exact specification of XAML in Xamarin Forms. I have a draft reference but it is incomplete. I am not sure that styles are supported, which would be a major omission. However you do get layout managers including AbsoluteLayout, Grid, RelativeLayout and StackLayout. You also get controls (called Views) including Button, DatePicker, Editor, Entry (single line editor), Image, Label, ListView, OpenGLView, ProgressBar, SearchBar, Slider, TableView and WebView.
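The same views can also be created in C# rather than XAML. A trivial sketch (my example, using a few of the views and layouts listed above):

using Xamarin.Forms;

public class DemoPage : ContentPage
{
    public DemoPage()
    {
        var label = new Label { Text = "Hello Xamarin Forms" };
        var entry = new Entry { Placeholder = "Type something" };
        var button = new Button { Text = "Go" };
        button.Clicked += (sender, e) => { label.Text = entry.Text; };

        // StackLayout is one of the layout managers listed above
        Content = new StackLayout
        {
            Padding = 20,
            Children = { label, entry, button }
        };
    }
}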

Xamarin is not making any claims for compatibility in its XAML implementation. There is no visual designer, and you cannot port from existing XAML code. The commitment to wrapping native controls may limit prospects for compatibility. However, Friedman did say that Xamarin hopes to support Universal Apps, i.e. to run on Windows 8 as well as Windows Phone, iOS and Android. He said:

I think it is the right strategy, and if it does take off, which I think it will, we will support it.

Friedman says the partnership with Microsoft (which began in November 2013) is now close, and it would be reasonable to assume that greater compatibility with Microsoft XAML is a future goal. Note that Xamarin 3 also supports Portable Class Libraries, so on the non-visual side sharing code with Microsoft projects should be straightforward.

Personally I think both Xamarin Forms and the iOS visual designer (which, note, does NOT support Xamarin Forms) are significant features. The iOS designer matters because you can now do almost all of your cross-platform mobile development within Visual Studio, even if you want to follow the old Xamarin model of a different, native user interface for each platform; and Xamarin Forms because it enables a new level of code sharing for Xamarin projects, as well as making XAML into a GUI language that you can use across all the most popular platforms. Note that I do have reservations about XAML; but it does tick the boxes for scaling to multiple form factors and for enormous flexibility.

Office, Azure Active Directory, and mobile: the three pillars of Microsoft’s cloud

When Microsoft first announced Azure, at its PDC Conference in October 2008, I was not impressed. Here is the press release, if you fancy a look back. It was not so much the technology – though with hindsight Microsoft’s failure to offer plain old Windows VMs from the beginning was a mistake – but rather, the body language that was all wrong. After all, here is a company whose fortunes are built on supplying server and client operating systems and applications to businesses, and on a partner ecosystem that has grown up around reselling, installing and servicing those systems. How can it transition to a cloud model without cannibalising its own business and disrupting its own partners? In 2008 the message I heard was, “we’re doing this cloud thing because it is expected of us, but really we’d like you to keep buying Windows Server, SQL Server, Office and all the rest.”

Take-up was small, as far as anyone could tell, and the scene was set for Microsoft to be outflanked by Amazon for IaaS (Infrastructure as a Service) and Google for cloud-based email and documents.

Those companies are formidable competitors; but Microsoft’s cloud story is working out better than I had expected. Although Azure sputtered in its early years, the company had some success with BPOS (Business Productivity Online Suite), which launched in the UK in 2009: hosted Exchange and SharePoint, mainly aimed at education and small businesses. In 2011 BPOS was reshaped into Office 365 and marketed strongly. Anyone who has managed Exchange, SharePoint and Active Directory knows that it can be arduous, thanks to complex installation, occasional tricky problems, and the challenge of backup and recovery in the event of disaster. Office 365 makes huge sense for many organisations, and is growing fast – “the fastest growing business in the history of the company,” according to Corporate VP of Windows Server and System Center Brad Anderson, speaking to the press last week.

[photo: Brad Anderson, Corporate VP for Windows Server and System Center]

The attraction of Office 365 is that you can move users from on-premise Exchange almost seamlessly.

Then Azure changed. I date this from May 2011, when Scott Guthrie and others moved to work on Azure, which a year later offered a new user-friendly portal written in HTML5, plus Windows Azure VMs and web sites. From that moment in 2012, Azure became a real competitor in cloud computing.

That is only two years ago, but Microsoft’s progress has been remarkable. Azure has been adding features almost as fast as Amazon Web Services (AWS – and I have not attempted to count), and although it is still behind AWS in some areas, it compensates with its excellent portal and integration with Visual Studio.

Now at TechEd Microsoft has made another wave of Azure announcements. A quick summary of the main ones:

  • Azure Files: SMB shared storage for Azure VMs, also accessible over the internet via a REST API. Think of it as a shared folder for VMs, simplifying things like having multiple web servers serve the same web site. Based on Azure storage.
  • Azure Site Recovery: based on Hyper-V Recovery Manager, which orchestrates replication and recovery across two datacenters, the new service adds the rather important feature of letting you use Azure itself as your spare datacenter. This means anyone could use it, from small businesses to the big guys, provided all your servers are virtualised.
  • Azure RemoteApp: Remote Desktop Services in Azure, though currently only for individual apps, not full desktops
  • Antimalware for Azure: System Center Endpoint Protection for Azure VMs. There is also a partnership with Trend Micro for protecting Azure services.
  • Public IPs for individual VMs. If you are happy to handle the firewall aspect, you can now give a VM a public IP and access it without setting up an Azure endpoint.
  • IP Reservations: you get up to five IP addresses per subscription to assign to Azure services, ensuring that they stay the same even if you delete a service and add a new one back.
  • MSDN subscribers can use Windows 7 or 8.1 on Azure VMs, for development and test, the first time Microsoft has allowed client Windows on Azure
  • General availability of ExpressRoute: fast network link to Azure without going over the internet
  • General availability of multiple site-to-site virtual network links, and inter-region virtual networks.
  • General availability of compute-intensive VMs, up to 16 cores and 112GB RAM
  • General availability of import/export service (ship data on physical storage to and from Azure)

There is more though. Those above are just a bunch of features, not a strategy. The strategy is based around Azure Active Directory (which everyone gets if they use Office 365, or you can set up separately), Office, and mobile.

Here is how this works. Azure Active Directory (AD), typically synchronised with on-premise active directory, is Microsoft’s cloud identity system which you can use for single sign-on and single point of control for Office 365, applications running on Azure, and cloud apps run by third-parties. Over 1200 software as a service apps support Azure AD, including Dropbox, Salesforce, Box, and even Google apps.

Azure AD is one of three components in what Microsoft calls its Enterprise Mobility Suite. The other two are Intune, cloud-based PC and device management, and Azure Rights Management.

Intune first. This is stepping up a gear in mobile device management, gaining the ability to deploy managed apps. A managed app is an app that is wrapped so it supports policy, such as the requirement that data can only be saved to a specified secure location. Think of it as a mobile container. iOS and Android will be supported first, with Office managed apps including Word, Excel, PowerPoint and Mobile OWA (kind-of Outlook for iOS and Android, based on Outlook Web Access but delivered as a native app with offline support).

Businesses will be able to wrap their own applications as managed apps.

Microsoft is also adding Cordova support to Visual Studio. Cordova is the open source part of PhoneGap, for wrapping HTML and JavaScript apps as native. In other words, Visual Studio is now a cross-platform development tool, even without Xamarin. I have not seen details yet, but I imagine the WinJS library, also used for Windows 8 apps, will be part of the support; yes it works on other platforms.

Next, Azure Rights Management (RMS). This is a service which lets you encrypt and control usage of documents based on Azure AD users. It is not foolproof, but since the protection travels in the document itself, it offers some protection against data leaking out of the company when it finds its way onto mobile devices or pen drives and the like. Only a few applications are fully “enlightened”, which means they have native support for Azure RMS, but apparently 70% or more of business documents are Office or PDF, which means if you cover them, then you have good coverage already. Office for iOS is not yet “enlightened”, but apparently will be soon.

This gives Microsoft a three-point plan for mobile device management, covering the device, the applications, and the files themselves.

Which devices? iOS, Android and Windows; and my sense is that Microsoft is now serious about full support for iOS and Android (it has little choice).

Another announcement at TechEd today concerns SharePoint in Office 365 and OneDrive for Business (the client), which is getting file encryption.

What does this add up to? For businesses happy to continue in the Microsoft world, it seems to me a compelling offering for cloud and mobile.

Microsoft’s new open source direction for C# and .NET (and native compilation too): Anders Hejlsberg explains

At the April 2014 Build conference Microsoft made some far-reaching announcements about its .NET platform and the C# programming language. Yes, there was talk of C# 6.0, the next version, but the real changes are more profound. Specifically:

C# and Visual Basic have a new compiler, itself written in C#, code-named Roslyn. Roslyn is not just a new compiler; Microsoft now calls it the “.NET Compiler Platform”.

There is a new commitment to open source for .NET projects. Microsoft formed the .NET Foundation to oversee existing open source projects, including ASP.NET, Entity Framework, the Azure .NET SDK, and now Roslyn as well. “When it comes to development projects we are going to operate from the premise that open source is the default. Unless there are reasons why it does not work,” said C# lead architect Anders Hejlsberg.


Note that open source does not mean chaos. It does mean that you can fork the project if you want – the Roslyn license is Apache 2.0 – but getting Microsoft to accept new features you have contributed will not be trivial. Hejlsberg makes the point that language features are easy to add, but impossible to take away, so extreme care is necessary.

Microsoft is also supporting cross-platform C# to a greater extent than it has done in the past. The most obvious sign of this is its cooperation with Xamarin, which provides C# compilers for iOS and Android. Xamarin’s Miguel de Icaza got a top billing at Build, and is also involved in the .NET Foundation.

There is more though. The idea of standardised C# is re-emerging:

“The last ECMA standard was C# 2.0. There wasn’t a lot of demand for it, but that demand has recently risen and we have re-engaged with the ECMA community to produce a standard for C# 5.0,” said Hejlsberg.

This bears some unpacking. Why was there little demand for ECMA C#? Partly, I would guess, from the assumption that C# was firmly in Microsoft’s grip, with Java the obvious choice for cross-platform development. The main interest was from the Mono folk (Miguel de Icaza again), who implemented .NET for Linux and the Mac with some success, but nothing to disturb Java’s momentum.

The focus now though is on mobile, and interest in C# is stronger, mainly from Microsoft-platform developers reaching beyond Windows. There is also Unity, which uses C# as a scripting language for developing games for multiple platforms, including iOS, Android, Windows, Mac, Linux, Xbox, PS3 and Wii – PS4 is coming very soon.

Microsoft has now consciously embraced multiple platforms, as evidenced by Office for iOS as well as the Xamarin collaboration. “We want C# developers to build great applications across different form factors and different device platforms,” said Jay Schmelzer, Director of Program Management for Visual Studio.

You might observe that this position has been forced on the company by the rise of iOS and Android, a view which likely has some merit, but the impact it has on C# and .NET itself is still real.

I asked Hejlsberg to unpack the difference between the Roslyn project and C# 6.0, bearing in mind that both are covered on the Roslyn open source site; you can see the current status of C# 6.0 and the next Visual Basic here.

“Roslyn is the name for the project that encompasses the new C# compiler and the new VB compiler and the new language services that they share. C# 6.0 is the name of the next version of the C# language, which will have a specification and which will have an implementation. We are implementing C# 6.0 on the Roslyn platform. We are not going to continue to evolve our old C++ C# compiler – the C# compiler was originally written in C++ and has been evolved up through C# 5.0. That is where we are going to retire that code base, and going forward versions of C# will be built on Roslyn and therefore will be built open source. Unlike previously where, boom, C# came down from the sky with a set of features, it is going to happen more organically now, people will submit pull requests, open up issues, and you will see us work on these features. You will see them from inception to fruition.

“The C# team, the Roslyn team, the VB team, their day to day workplace now is the open source site. That is where they check in code. It is a community in the making.”

Even that is not all. At Build, Microsoft also announced .NET Native, which is a native compiler for C# and Visual Basic, now in preview for x64 Store apps. What is the difference between .NET Native and the existing NGen native compiler for .NET? Over to Hejlsberg:

NGen is the native feature that we currently support. NGen is really, “I’m going to JIT [Just in time compile] your code and then snapshot all the data structures and dump them in a file so that I can quickly rebuild that file later when you run this particular application”. But it is the same code generator and all the same features, and JIT is still there. NGen is really a way to pre-cache the JIT output and therefore get better performance, but it adds to the size of your app because you still have all the assemblies and metadata and then the NGen image as well.

.NET Native is a completely different approach. Instead of the JIT we use the backend from the C++ compiler. You can think of it as a linker that takes as input assemblies, and as output produces a PE [Portable Executable] executable. In the process this linker or code generator will analyse all the IL [Intermediate Language] that goes into the application and it will apply a thing known as tree-shaking where it eliminates all of the code that will never execute based on known execution roots.

In other words, the public static main of your program and also whatever pieces of your app that you designate as reflectable, they also become roots. Based on that we produce an optimised exe, and into that exe we link the pieces of the framework that you are referencing. We link in a garbage collector [GC], and it looks to the operating system just like an exe. When you run it, it runs a local GC in there and it is as efficient really as C++ code.

There are some restrictions associated with .NET Native, mainly that you can’t just willy-nilly reflect on the whole world. You can’t just generate new code and ask for that to be jitted because there may not be a JIT compiler. We are considering allowing you to link in a JIT compiler, but there are certain execution environments which don’t permit jitting, like Xbox. If you use reflection in your app you have to tell us what to keep reflectable, because otherwise we will optimise it away.

According to Schmelzer:

The preview out today is scoped to Store app x64 and ARM. We haven’t run into any technical limitation that shows it can’t be done across the breadth, it is just a matter of request and need.

Open source, native code compilation, and an innovative compiler: it adds up to huge changes for C# and .NET, positive ones as far as I can tell.

The Xamarin connection is intriguing though. Developers in general admire the technology as far as I can tell, but it is expensive, and paying out for a Xamarin subscription on top of maybe MSDN for Visual Studio is too much for some smaller organisations and does not encourage experimentation. Might Microsoft acquire Xamarin and build Visual Studio into an IDE targeting all the major mobile platforms, but with special hooks to Azure-hosted services?

That prospect makes sense to me, though it would be a shame if the energetic Xamarin culture became bogged down in big-company bureaucracy. Currently though: no news to report.

Hands on: SQL Server 2014 with data files in Azure Blob Storage

One intriguing new feature in Microsoft’s SQL Server 2014 is the ability to create or attach databases whose files are in Azure blob storage. This sounds like something that would not work at all well: why would you want a database engine to mount files located hundreds or thousands of miles away? However, the feature is apparently baked deeply into SQL Server, according to this white paper (which is essential reading if you want to know more):

SQL Server 2014 integration with Windows Azure blob storage occurs at a deep level, directly into the SQL Server Storage Engine; SQL Server Data Files in Windows Azure is more than a simple adapter mechanism built on top of an existing software layer.

· The Manager Layer includes a new component called XFCB Credential Manager, which manages the security credentials necessary to access the Windows Azure blob containers and provides the necessary security interface; secrets are maintained encrypted and secured in the SQL Server built-in security repository in the master system database.

· The File Control Layer contains a new object called XFCB, which is the Windows Azure extension to the file control block (FCB) used to manage I/O against each single SQL Server data or log file on the NTFS file system; it implements all the APIs that are required for I/O against Windows Azure blob storage.

· At the Storage Layer, the SQL Server I/O Manager is now able to natively generate REST API calls to Windows Azure blob storage with minimal overhead and great efficiency; in addition, this component can generate information about performance counters and extended events (xEvents).

It also seems that the main target usage is SQL Server running on Azure VMs in the same region as the blob storage, removing latency concerns, though the wording of the explanation is curious, implying almost that an on-premise connection is supported but should not be used:

Although it is theoretically possible and officially supported, using an on-premises SQL Server 2014 installation and database files in Windows Azure blob storage is not recommended due to high network latency, which would hurt performance; for this reason, the main target scenario for this white paper is SQL Server 2014 installed in Windows Azure Virtual Machines (IaaS). This scenario provides immediate benefits for performance, data movement and portability, data virtualization, high availability and disaster recovery, and scalability limits.

If you use blob storage in this way on an Azure VM, then I/O goes through the Virtual Network Driver, whereas an Azure data disk uses the Virtual Disk Driver. This nicety may be the main reason to consider the feature.

I tried both scenarios: on-premise and from an Azure VM. I had some difficulty getting started, despite this seemingly exhaustive tutorial. I followed it, I thought, to the letter, but got either the error:

Unable to open the physical file "https://myaccount.blob.core.windows.net/sqldata/azuredb.mdf". Operating system error 86: "86(The specified network password is not correct.)".

or else

CREATE FILE encountered operating system error 1117(The request could not be performed because of an I/O device error.) while attempting to open or create the physical file https://myaccount.blob.core.windows.net/sqldata/azuredb.mdf

The problem turned out to relate to the Shared Access Signature required. The supposedly exhaustive tutorial merely refers you to the CloudBlobContainer.GetSharedAccessSignature method in the Azure SDK and offers an incomplete code snippet. I wrote C# code for this and was able to generate a Shared Access Signature but it did not work (see above). I found myself in the depths of the Azure SDK, wondering if I should use version 2.1 or 3.0, and whether I should use Microsoft.WindowsAzure.StorageClient.CloudBlobClient or Microsoft.WindowsAzure.Storage.Blob.CloudBlobClient. The tutorial is also not clear about exactly which part of the Shared Access Signature you should store in the SQL Server Credential Manager; it is a multipart string separated by ampersands.

I have still not fully worked it out, but discovered the very helpful Azure Storage Explorer on CodePlex. If you follow the instructions in the white paper referenced above, and use the Azure Storage Explorer to generate the Shared Access Signature, then it works. The project is open source, so with a little effort it should be possible to find and document the exact requirements.
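For what it is worth, here is the sort of code involved, as best I can make out: a sketch using the 3.0 storage client library, where the account name, key and policy values are placeholders, and where the secret to store in the SQL Server credential appears to be the SAS query string without the leading question mark:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;

public static class SasGenerator
{
    public static string GetContainerSas()
    {
        var account = new CloudStorageAccount(
            new StorageCredentials("myaccount", "storage-account-key"), useHttps: true);
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("sqldata");

        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read
                        | SharedAccessBlobPermissions.Write
                        | SharedAccessBlobPermissions.Delete
                        | SharedAccessBlobPermissions.List,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMonths(1)
        };

        // Returns something like "?sv=...&sr=c&sig=...&se=...&sp=rwdl"
        string sas = container.GetSharedAccessSignature(policy);

        // The multipart string to store as the credential secret
        return sas.TrimStart('?');
    }
}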


I tried creating and using a database from my on-premise SQL Server 2014 and I find the performance remarkably good, considering. There is no doubt some smart caching going on under the covers. Selecting 1000 rows from a table took 11 seconds the first time, and was instant the second time. It seems to me viable, on my brief look, though I am not sure why you would want to do this. However it is a good demonstration of how cloud and on-premise are coming ever-closer.


Running from an Azure VM in the same region is a different case, though I would suggest detailed and intensive testing before going into production.

A quick hands-on with native code compilation for .NET

I had a quick look at the .NET Native Preview. I am interested to see what the benefits might be. Note that currently the preview only supports 64-bit Windows Store apps.

Here is what is promised:

For users of your apps, .NET Native offers these advantages:

  • Fast execution times
  • Consistently speedy startup times
  • Low deployment and update costs
  • Optimized app memory usage

I created a small C# app that counts prime numbers using a simple approach. I created it first as a Universal App that does not use .NET Native, and then as a second app that does use .NET Native.
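Something like this, to give the idea (a reconstruction rather than the exact app code):

using System;
using System.Diagnostics;

public static class PrimeBench
{
    static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int i = 2; (long)i * i <= n; i++)
        {
            if (n % i == 0) return false;
        }
        return true;
    }

    public static void Run()
    {
        Stopwatch sw = Stopwatch.StartNew();
        int count = 0;
        for (int n = 2; n < 1000000; n++)
        {
            if (IsPrime(n)) count++;
        }
        sw.Stop();
        Console.WriteLine("{0} primes found in {1} ms", count, sw.ElapsedMilliseconds);
    }
}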


My initial results are disappointing. The time taken to count prime numbers is similar in both apps.


As a further test, I added code that adds the prime numbers found to a ListView control using an async task. The idea was to see if GUI interaction and multi-threading would be more revealing than simply crunching numbers in a tight loop. Both apps take much longer with this enabled, but again, there is nothing to choose between the two.

Did I really succeed in compiling the app to native code? I think so. Here are the contents of the AppX folder for the standard .NET executable:

[screenshot: AppX folder contents for the standard .NET executable]

and here for the native compiled executable – note the additional DLLs:

[screenshot: AppX folder contents for the native compiled executable, showing additional DLLs]

I am not actually surprised that there is no performance benefit in my example. JIT (just-in-time) compilation means that any .NET application is native code at runtime, though optimization might be different.

I have ended up with a much larger app to deploy, though the relative difference would be less, I imagine, with an app that contains more code.

I can also readily believe that start-up time will be better for a native compiled app, since there is no need for the JIT compiler. However my app is so small that this is not significant.

My question though is what kind of app will benefit most from native compilation? I would be interested to see guidance on this.

Of course it is also possible that later iterations of the technology will yield different results.

Microsoft Build 2014: what happened

It’s curious. Microsoft’s new CEO Satya Nadella has been in place for only a month which means that almost everything announced at Build, Microsoft’s developer conference which took place last week in San Francisco, must have been set before he was appointed; yet there was a sense of “all things new” at the event, as if he had overseen a wave of changes.

The wave began the previous week, with the simultaneous announcement and delivery of Office for iPad. The significance of this is threefold:

  • It demonstrated Microsoft’s decision to give first-class support to mobile platforms other than Windows
  • It demonstrated that Office can be redesigned to work nicely on a tablet
  • The quality of the product exceeded expectations, showing that in the right circumstances Microsoft can do excellent non-Windows software


Next came Build itself. It was a tale of two keynotes. The first was all about Windows client – both Phone and PC. The core news is the arrival of the Windows Runtime (WinRT, the engine behind Metro/Store Apps) on Windows Phone 8.1. This means that WinRT is now the runtime that developers should target for apps that run across phone and desktop – and even, we were shown, Xbox One, which will support WinRT apps written in HTML and WinJS (Microsoft’s JavaScript library for Windows apps).

In support of this, Microsoft announced a new Universal App project for Visual Studio, which lets you share both visual and non-visual code across multiple targets. How much is shared is a developer choice.

There is more. A Universal App is now (kind-of) a desktop app as well as a Store app, since in a future free update to Windows 8, it will run on the desktop within a window, as well as appearing in the Start menu on the desktop. We were even shown this; apparently it is a mock-up. This was the biggest surprise at Build.


What did Executive VP Terry Myerson say about this? Here is the exact quote:

We are going all in with this desktop experience, to make sure your applications can be accessed and loved by people that love the Windows desktop. We’re going to enable your Universal Windows applications to run in a window. We’re going to enable your users to find, discover and run your Windows applications with the new Start menu. We have Live Tiles coming together with the familiar experience customers are looking for to start and run their applications and we’ll be making this available to all Windows 8.1 users as an update. I think there will be a lot of happy people out there.

This is significant. When Myerson says, “we are going all in with this desktop experience”, he does not mean backtracking on Windows Store apps, to return to desktop Windows apps (Win32 or WPF) as the future of Windows development. Rather, he means Windows Store apps integrated into the desktop.

There is a further twist to this. Windows Store apps are sandboxed and cannot communicate with each other or with the operating system other than via carefully designed and secured paths. This is in general a good thing, but restrictive for businesses designing line of business apps. It also means that legacy code cannot be carried over into a Store app, other than by full porting.

In the just-released Windows 8.1 Update this has changed. Side-loaded apps (in other words, not deployed from the Windows Store) can now escape the sandbox thanks to Brokered Windows Runtime Components. There are some limitations (32-bit only on the desktop side, for example) but this will make it possible to implement business applications as Store apps even if they need to interact with existing desktop applications or services.

There is still a huge blocker to Store apps from a business perspective, which is that you need Windows 8. Still, my guess is that once the update with the restored Start menu appears, most of the objections to Windows 8 will melt away.

We also saw Office for the Windows Runtime, which will run on both Phone and PC. It is written, I discovered later, in XAML, DirectX and C++ (“Blazingly fast”, we were told). Corporate VP Kirk Koenigsbauer introduced a preview of this, or at least PowerPoint.


No detail yet, and several references to “early code” suggest to me that this is a year or more away from full release (giving Office on iPad a big head start); but it will come. Koenigsbauer did not call it cut-down; in fact, it was instanced as proof that WinRT is suitable for large-scale apps, so I would expect something more complete than Office on iPad; yet it is hard to imagine things like the VBA macro language appearing here in its current form (VBA is based on the ancient Visual Basic 6.0 runtime), so there will be some major differences.

We also saw Windows Phone 8.1, including the Cortana virtual personal assistant who responds to voice input. For me other things in Windows Phone 8.1 are more significant, including a new swipe-style keyboard for fast text input, VPN, S/MIME secure email, and a new notification centre. Unlike touch Office, Windows Phone 8.1 is coming soon; Nokia’s Stephen Elop (soon to be in charge of Windows Phone at Microsoft) said that the first 8.1 Lumia devices could be out from May, depending on territory, and that all Lumia Windows Phone 8 devices will get the update in the summer.

On to day two, which was Cloud day, though we also got significant .NET developer news.

Executive VP Scott Guthrie introduced a new portal for Microsoft Azure, the cloud platform. This is not just a new look, but integrates with Visual Studio online so you can easily view and edit the code and track team projects. There are also new monitoring and analytics features so you can check page views, page load time, browser usage and more. Guthrie also announced integration with Puppet and Chef for deployment automation.


Language designer Anders Hejlsberg also came on stage. He announced the release version of TypeScript, a “typed superset of JavaScript” which is suitable for large applications. He also announced a new preview release of the compiler project code-named Roslyn, and on stage pushed the button that published it as open source. What is Roslyn? It is the next generation compiler for C# and VB, and is itself written in C#. This enables compiler and workspace APIs, which in turn enable rich editor features:

The transition to compilers as platforms dramatically lowers the barrier to entry for creating code focused tools and applications. It creates many opportunities for innovation in areas such as meta-programming, code generation and transformation, interactive use of the C# and VB languages, and embedding of C# and VB in domain specific languages.

Roslyn will be fully released in the next version of Visual Studio, for which we do not yet have a date. Roslyn will be delivered alongside C# 6.0.

There is also a new .NET Foundation which will oversee open source projects for .NET, with backing from folk including Xamarin’s Miguel de Icaza and Umbraco’s Niels Hartvig. It is all a bit vague at the moment:

In the upcoming months, the .NET Foundation will be inviting many companies and community leaders to join the foundation, including its Board of Directors and will then finalize its operational details, including governance models for its open source initiatives, membership structure and industry and community engagement.

Another significant event in the .NET story is the arrival of true native code compilation for .NET, although currently only for 64-bit Store apps. More on this soon.

A couple of events during Build caught my eye. One was de Icaza’s session on using C# to build for iOS and Android, not so much for the content itself (though there was nothing wrong with it), but rather for the huge attendance it drew.


The session was moved to the Build keynote room, and while there were spare seats, the room felt well filled. This speaks loudly about the importance of those platforms even to Microsoft platform developers, as well as of Microsoft’s support of Xamarin’s work.

Another was the appearance of John Gruber, author of the Daring Fireball blog and an Apple enthusiast. He appeared in a video during the keynote, explaining how a project in which he is involved uses Azure for back-end services, and then in person at another session, interviewing journalist Ed Bott about what is changing at Microsoft.


Gruber seems to me representative of a group of smart observers who have not in general been impressed with Microsoft’s endeavours over the past few years; but he for one is now more positive on the subject. Windows Phone is much better than its market share suggests, he said. This alongside Azure and a new openness to supporting third-party clients has made him look more favourably on the company.

My summary is this. On the Windows client side, Microsoft is taking its unpopular Windows release and its minority Phone platform and making them better and more compatible with each other, making sense of the client platform in a way that should result in growth of the app ecosystem both on Phone and PC/Tablet. On the cloud side, the company is building Azure and Office 365 (two platforms united by Azure Active Directory) into a one-stop platform that is increasingly compelling. The result was a conference and a direction that was largely welcomed by those in attendance, as far as I could tell.

That does not mean that the PC will stop declining, or that iOS and Android will become less dominant in mobile. There is progress though, and more clarity about the direction of Microsoft’s platform than we have seen for some years.

For the official news from Build, see the Build Newsroom.