All posts by onlyconnect

A note on Azure storage and downloading large files

I have written a simple ASP.NET MVC application for upload and download of files to/from Azure storage.

Getting large file upload to work was the first exercise, described here. That is working well; but what about download?

If your files in Azure storage are public, you can simply serve a URL to the file. If they are not public though, you have a couple of choices:

1. Download the file under application control, by writing to Response.OutputStream or using a FileResult action.

2. Issue a Shared Access Signature (SAS) to the client which enables it to retrieve the file directly from Azure storage. The SAS is sent as a URL argument which tells Azure storage that the request is authorised. The browser downloads the file directly, so it makes no difference to your web application if the file is large.

Note that if you use the first option, it will not work with large files if you simply call DownloadToStream or similar:

container.GetBlockBlobReference(FileName).DownloadToStream(Response.OutputStream);

Why not? Well, the way this code works is that it downloads the large file to the web server, then sends it to the browser. What if your large file is 5GB? The browser will wait a long time for the first byte to be served (giving the user an unresponsive page); but before that happens, the web application will probably throw an exception because it does not like downloading such a large file.

This means the SAS option is a good one, though note that you have to specify an expiry time which could cause problems for users on a slow connection.
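For what it is worth, here is a minimal sketch of issuing a SAS URL with the storage client library. The container variable, the method name and the four-hour expiry are my own illustrative choices rather than anything from an official sample:

public string GetDownloadUrl(string FileName)
{
    // container is an already-initialised CloudBlobContainer
    CloudBlockBlob blob = container.GetBlockBlobReference(FileName);

    SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        // generous expiry so that users on slow connections can finish
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(4)
    };

    string sas = blob.GetSharedAccessSignature(policy);
    return blob.Uri.AbsoluteUri + sas; // hand this URL to the browser
}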

Another option is to serve the file in chunks. Use CloudBlockBlob.DownloadRangeToStream to write to Response.OutputStream in a loop until the download is complete. Call Response.Flush() after each chunk to send the chunk to the browser immediately.
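In outline the code looks something like this. It is a sketch only, assuming an MVC controller action and the same container variable as before; the 4MB chunk size is an arbitrary choice:

public void Download(string FileName)
{
    CloudBlockBlob blob = container.GetBlockBlobReference(FileName);
    blob.FetchAttributes(); // populates blob.Properties.Length

    const int chunkSize = 4 * 1024 * 1024; // 4MB per chunk
    long remaining = blob.Properties.Length;
    long offset = 0;

    Response.ContentType = "application/octet-stream";
    Response.AddHeader("Content-Disposition", "attachment; filename=" + FileName);
    Response.AddHeader("Content-Length", blob.Properties.Length.ToString());

    while (remaining > 0 && Response.IsClientConnected)
    {
        long length = Math.Min(chunkSize, remaining);
        blob.DownloadRangeToStream(Response.OutputStream, offset, length);
        Response.Flush(); // send this chunk to the browser immediately
        offset += length;
        remaining -= length;
    }
}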

This gives the user a nice responsive download experience complete with a cancel option as provided by the browser, and does not crash the application on the server. It seems to me a reasonable approach if the web application is also hosted on Azure and therefore has a fast connection to Azure storage.

What about resuming a failed download? The SAS approach should work, since Azure supports it. You could also support this in your application with some additional work, since resuming means honouring the Range header in the GET request. I have not tried doing this but you might find some clues here.

Microsoft StorSimple brings hybrid cloud storage to the enterprise, but what about the rest of us?

Microsoft has released details of its StorSimple 8000 Series, the first major new release since it acquired the hybrid cloud storage appliance business back in late 2012.

I first came across StorSimple at what proved to be the last MMS (Microsoft Management Summit) event last year. The concept is brilliant: present the network with infinitely expandable storage (in reality limited to 100TB – 500TB depending on model), storing new and hot data locally for fast performance, and seamlessly migrating cold (ie rarely used) data to cloud storage. The appliance includes SSD as well as hard drive storage, so you get a magical combination of low latency and huge capacity. Storage is presented using iSCSI. Data deduplication and compression increase effective capacity, and cloud connectivity also enables value-add services including cloud snapshots and disaster recovery.


The two new models are the 8100 and the 8600:

                                             8100         8600
Usable local capacity                        15TB         40TB
Usable SSD capacity                          800GB        2TB
Effective local capacity                     15-75TB      40-200TB
Maximum capacity (including cloud storage)   200TB        500TB
Price                                        $100,000     $170,000

Of course there is more to the new models than bumped-up specs. The earlier StorSimple models supported both Amazon S3 (Simple Storage Service) and Microsoft Azure; the new models support only Azure blob storage. VMware VAAI (vStorage APIs for Array Integration) is still supported.

On the positive side, StorSimple is now backed by additional Azure services – note that these only work with the new 8000 series models, not with existing appliances.

The Azure StorSimple Manager lets you manage any number of StorSimple appliances from the Azure portal – note this is in the old Azure portal, not the new preview portal, which intrigues me.


Backup snapshots mean you can go back in time in the event of corrupted or mistakenly deleted data.


The Azure StorSimple Virtual Appliance has several roles. You can use it as a kind of reverse StorSimple; the virtual device is created in Azure at which point you can use it on-premise in the same way as other StorSimple-backed storage. Data is uploaded to Azure automatically. An advantage of this approach is if the on-premise StorSimple becomes unavailable, you can recreate the disk volume based on the same virtual device and point an application at it for near-instant recovery. Only a 5MB file needs to be downloaded to make all the data available; the actual data is then downloaded on demand. This is faster than other forms of recovery which rely on recovering all the data before applications can resume.


The alarming check box “I understand that Microsoft can access the data stored on my virtual device” was explained by Microsoft technical product manager Megan Liese as meaning simply that data is in Azure rather than on-premise but I have not seen similar warnings for other Azure data services, which is odd. Further to this topic, another journalist asked Marc Farley, also on the StorSimple team, whether you can mark data in standard StorSimple volumes not to be copied to Azure, for compliance or security reasons. “Not right now” was the answer, though it sounds as if this is under consideration. I am not sure how this would work within a volume, since it would break backup and data recovery, but it would make sense to be able to specify volumes that must remain always on-premise.

All data transfer between Azure and on-premise is encrypted, and the data is also encrypted at rest, using a service data encryption key which, according to Farley, is not stored by or accessible to Microsoft.


Another way to use a virtual appliance is to make a clone of on-premise data available, for tasks such as analysing historical data. The clone volume is based on the backup snapshot you select, and is disconnected from the live volume on which it is based.


StorSimple uses Azure blob storage but the pricing structure is different from standard blob storage; unfortunately I do not have details of this. You can access the data only through StorSimple volumes, since the data is stored using internal data objects that are StorSimple-specific. Data stored in Azure is redundant using the usual Azure “three copies” principle; I believe this includes geo-redundancy though this may be a customer option.

StorSimple appliances are made by Xyratex (which is being acquired by Seagate) and you can find specifications and price details on the Seagate StorSimple site, though we were also told that customers should contact their Microsoft account manager for details of complete packages. I also recommend the semi-official blog by a Microsoft technical solutions professional based in Sydney which has a ton of detailed information here.

StorSimple makes huge sense, but with 6 figure pricing this is an enterprise-only solution. How would it be, I muse, if the StorSimple software were adapted to run as a Windows service rather than only in an appliance, so that you could create volumes in Windows Server that use similar techniques to offer local storage that expands seamlessly into Azure? That also makes sense to me, though when I asked at a Microsoft Azure workshop about the possibility I was rewarded with blank looks; but who knows, they may know more than is currently being revealed.

Amazon Mobile SDK adds login, data sync, analytics for iOS and Android apps

Amazon Web Services has announced an updated AWS Mobile SDK, which provides libraries for mobile apps using Amazon’s cloud services as a back end. Version 2.0 of the SDK, supporting iOS and Android (including Amazon Fire), is now in preview, adding several new features:

Amazon Cognito lets users log in with Amazon, Facebook or Google and then synchronize data across devices. The data is limited to 20MB, stored as up to 20 datasets of key/value pairs. All data is stored as strings, though binary data can be encoded as a base64 string up to 1MB. The intent seems to be storage of things like configuration or game state data, rather than documents.

Amazon Mobile Analytics collects data on how users are engaging with your app. You can get data on metrics including daily and monthly active users, session count and average daily sessions per active user, revenue per active user, retention statistics, and custom events defined in your app.

Other services in the SDK, but which were already supported in version 1.7, include push messaging for Apple, Google, Fire OS and Windows devices; Amazon S3 storage (suitable for any amount of data, unlike the Cognito sync service); the SimpleDB and DynamoDB NoSQL database services; the email service; and SQS (Simple Queue Service) messaging.

Windows Phone developers, or those using cross-platform tools to build mobile apps, cannot use Amazon’s mobile SDK, though all the services are exposed as REST APIs, so you could call them from languages other than Objective-C or Java by writing your own wrapper.

The list of supported identity providers for Cognito is short though, with notable exclusions being Microsoft accounts and Azure Active Directory. Getting round this is harder since the federated identity services are baked into the server-side API.


Microsoft repositions for a post-Windows client world

Microsoft CEO Satya Nadella has penned a rather long public letter which sets out his ambitions for the company. It is not full of surprises for those who have been paying attention, but confirms what we are already seeing in projects such as Office for iPad: Microsoft is positioning itself for a world in which the Windows client does not dominate.

The statement that stands out most to me is this one (the highlighting is mine):

Apps will be designed as dual use with the intelligence to partition data between work and life and with the respect for each person’s privacy choices. All of these apps will be explicitly engineered so anybody can find, try and then buy them in friction-free ways. They will be built for other ecosystems so as people move from device to device, so will their content and the richness of their services

Microsoft is saying that it will build work/personal data partitioning into its applications, particularly one would imagine Office, and that it will write them for ecosystems other than its own, particularly one would imagine iOS and Android.

This is a big change from the Windows company, and one that I expect to see reflected in the tools it offers to developers. If Microsoft is not trying to acquire Xamarin, you would wonder why not. It has to make Visual Studio a premier tool for writing cross-platform mobile applications. It also has to address the problem that an increasingly large proportion of developers now use Macs (I do not know the figures, but observe at some developer conferences that Windows machines are a rarity), perhaps via improved online developer tools or new tools that themselves run cross-platform.

Nadella is careful to avoid giving the impression that Microsoft is abandoning its first-party device efforts, making specific mention of Windows Phone, Surface, Cortana and Xbox, for example.

Our first-party devices will light up digital work and life. Surface Pro 3 is a great example – it is the world’s best productivity tablet. In addition, we will build first-party hardware to stimulate more demand for the entire Windows ecosystem. That means at times we’ll develop new categories like we did with Surface. It also means we will responsibly make the market for Windows Phone, which is our goal with the Nokia devices and services acquisition.

Here is another statement that caught my eye:

We will increase the fluidity of information and ideas by taking actions to flatten the organization and develop leaner business processes.

The company has become increasingly bureaucratic over the years, and that is holding back its ability to be agile (though some teams seem to move at high speed regardless; I would instance the Azure team as an example).

Nadella’s letter has too many flowery passages of uncertain meaning – “We will reinvent productivity for people who are swimming in a growing sea of devices, apps, data and social networks. We will build the solutions that address the productivity needs of groups and entire organizations as well as individuals by putting them at the center of their computing experiences.” – but I do not doubt that major change is under way.

Supporting developers: how could Microsoft improve?

Microsoft invests substantial resources in supporting developers; yet the last two topics I have explored in earnest – the Azure blob storage service, and ASP.NET MVC with Azure Active Directory integration – have been frustrating and difficult. Admittedly I am only an occasional developer, but I suspect my experience is common. What is going wrong, and how could Microsoft improve?

Among the problems I have encountered:

  • Abundant documentation of simple first steps with a vacuum for anything more advanced
  • Samples that do not run without tweaking
  • Samples designed for old versions of Visual Studio
  • Samples which use obsolete or deprecated libraries
  • Samples which are poor solutions for the problem they are supposed to address
  • Documentation or samples which use preview, beta or even alpha libraries. Microsoft sometimes seems to make more effort documenting what is in preview than what is fully released.
  • Posts on a topic which are out of date, but for which it is hard to find something current
  • Circular links – click here for more information – you get another article which links back to the first one, perhaps with an intermediate step
  • Poor quality responses to questions on official Microsoft forums

On the positive side, the reference documentation is not too bad. StackOverflow is a great resource and seems to attract higher quality responses (even sometimes from Microsoft staff) than the company’s own forums.

Here then are some of the improvements I would like to see:

1. A sharper distinction between what is in preview and what is production-ready. For any given problem, it would be great to find a clear statement of how you should address it for production now, with fully released and supported libraries, and another statement showing how you will be able to address it with the latest and greatest (but perhaps less stable) technology which is in preview.

2. For key teams in Microsoft to maintain sites which offer clearly delineated production and preview sections and which are kept rigorously up to date.

3. More short samples and fewer “this demonstrates everything” samples. Large samples are more difficult to install and study and have more complex dependencies.

4. Posts and their accompanying code inevitably go out of date. I do not favour removing them, since that causes more difficulties than it solves (broken links). However it seems to me reasonable for teams to maintain a number of key samples for their product area and keep them up to date.

What am I missing – or am I complaining too much about what is normal in software development? As ever, I welcome your views.

Developing an ASP.NET MVC app with Azure Active Directory: an ordeal

Regular readers will know that I am working on a simple (I thought) ASP.NET MVC application which is hosted on Azure and uses Azure Blob Storage.

So far so good; but since this business uses Office 365 it seemed to me logical to have users log in using Azure Active Directory (AD). Visual Studio 2013, with the latest update, has a nice wizard to set this up. Just complete the following dialog when starting your new project:

[Screenshot: the new project dialog]

This worked fairly well, and users can log in successfully using Azure AD and their normal Office 365 credentials.

I love this level of integration and it seems to me key and strategic for the Microsoft platform. If an employee leaves, or changes role, just update Active Directory and all application access comes into line automatically, whether on premise or in the cloud.

The next stage though was to define some user types; to keep things simple, let us say we have an AppAdmin role for users with full access to the application, and an AppUser role for users with limited access. Other users in the organisation do not need access at all and should not be able to log in.

The obvious way to do this is with AD groups, but I was surprised to discover that there is no easy way to discover to which groups an AD user belongs. The Azure AD integration which the wizard generates is only half done. Users can log in, and you can programmatically retrieve basic information including the first name, last name, User Principal Name and object ID, but nothing further.
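For reference, this is roughly how you read that basic information from the wizard-generated sign-in. The claim type URI for the object ID is the one Azure AD normally issues, but treat the details as assumptions and inspect the actual token in your own application:

// inside a controller action; needs using System.Security.Claims
ClaimsPrincipal user = ClaimsPrincipal.Current;

string upn = user.Identity.Name; // typically the User Principal Name
Claim givenName = user.FindFirst(ClaimTypes.GivenName);
Claim surname = user.FindFirst(ClaimTypes.Surname);
Claim objectId = user.FindFirst(
    "http://schemas.microsoft.com/identity/claims/objectidentifier");
// each FindFirst call returns null if the token did not include that claim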

Fair enough, I thought, there will be some libraries out there that fill the gap; and this is how the nightmare begins. The problem is that this is the cutting edge of .NET cloud development and is an area of rapid change. Yes there are samples out there, but each one (including the official ones on MSDN) seems to be written at a different time, with a different approach, with different .NET assembly dependencies, and varying levels of alpha/beta/experimental status.

The one common thread is that to get the AD group information you need to use the Graph API, a REST API for querying and even writing to Azure Active Directory. In January 2013, Microsoft identity expert Vittorio Bertocci (Principal Program Manager in the Windows Azure Active Directory team at Microsoft) wrote a helpful post about how to restore IsInRole() and [Authorize] in ASP.NET apps using Azure AD – exactly what I wanted to do. He describes essentially a manual approach, though he does make use of a library called Azure Authentication Library (AAL) which you can find on Nuget (the package manager for .NET libraries used by Visual Studio) described as a Beta.

That would probably work, but AAL is last year’s thing and you are meant to use ADAL (Active Directory Authentication Library) instead. ADAL is available in various versions ranging from 1.0.3 which is a finished release, to 2.6.2 which is an alpha release. Of course Bertocci has not updated his post so you can use the obsolete AAL beta if you dare, or use ADAL if you can figure out how to amend the code and which version is the best/safest to employ. Or you can write your own wrapper for the Graph API and bypass all the Nuget packages.

I searched for a better sample, but it gets worse. If you browse around MSDN you will probably come across this article along with this sample which is a Task Tracker application using Azure AD, though note the warnings:

NOTE: This sample is outdated. Its technology, methods, and/or user interface instructions have been replaced by newer features. To see an updated sample that builds a similar application, see WebApp-GraphAPI-DotNet.

Despite the warnings, the older sample is widely referenced in Microsoft posts like this one by Rick Anderson.

OK then, let’s look at the shiny new sample, even though it is less well documented. It is called WebApp-GraphAPI-DotNet and includes code to get the user profile, roles, contacts and groups from Azure AD using the latest Graph API client: Microsoft.Azure.ActiveDirectory.GraphClient. This replaces an older effort called the GraphHelper which you will find widely used elsewhere.

If you dig into this new sample though, you will find a ton of dependencies on pre-release assemblies. You are not just dealing with the Graph API, but also with OWIN (Open Web Interface for .NET), which seems to be Microsoft’s current direction for the interface between web servers and web applications.

After messing around with Nuget packages and trying to get WebApp-GraphAPI-DotNet working I realised that I was not happy with all this preview code which is likely to break as further updates come along. Further, it does far more than I want. All I need is actually contained in Bertocci’s January 2013 post about getting back IsInRole.

I ended up patching together some code using the older GraphHelper (as found in the obsolete Task Tracker application) and it is working. I can now use IsInRole based on AD groups.
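With that in place, the standard ASP.NET authorisation features work as expected. A trivial usage sketch, using the role names I made up for this application:

// restrict an action to members of the AppAdmin group
[Authorize(Roles = "AppAdmin")]
public ActionResult Admin()
{
    return View();
}

// or check imperatively within an action
public ActionResult Documents()
{
    if (!User.IsInRole("AppUser") && !User.IsInRole("AppAdmin"))
    {
        return new HttpUnauthorizedResult();
    }
    return View();
}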

This is a mess. It is a simple requirement and it should not be necessary to plough through all these complicated and conflicting documents and samples to achieve it.

Notes from the field: putting Azure Blob storage into practice

I rashly agreed to create a small web application that uploads files into Azure storage. Azure Blob storage is Microsoft’s equivalent to Amazon’s S3 (Simple Storage Service), a cloud service for storing files of up to 200GB.

File upload performance can be an issue, though if you want to test how fast your application can go, try it from an Azure VM: performance is fantastic, as you would expect from an Azure to Azure connection in the same region.

I am using ASP.NET MVC and thought a sample like this official one, Uploading large files using ASP.NET Web API and Azure Blob Storage, would be all I needed. It is a start, but the method used only works for small files. What it does is:

1. Receive a file via HTTP Post.

2. Once the file has been received by the web server, call CloudBlob.UploadFile to upload the file to Azure blob storage.

What’s the problem? Leaving aside the fact that CloudBlob is deprecated (you are meant to use CloudBlockBlob), there are obvious problems with files that are more than a few MB in size. The expectation today is that users see some sort of progress bar when uploading, and a well-written application will be resilient to brief connection breaks. Many users have asymmetric internet connections (such as ADSL) with slow upload speeds; large files will take a long time and something can easily go wrong. The sample is not resilient at all.

Another issue is that web servers do not appreciate receiving huge files in one operation. Imagine you are uploading the ISO for a DVD, perhaps a 3GB file. The simple approach of posting the file and having the web server upload it to Azure blob storage introduces obvious strain and probably will not work, even if you do mess around with maxRequestLength and maxAllowedContentLength in ASP.NET and IIS. I would not mind so much if the sample were not called “Uploading large files”; the author perhaps has a different idea of what is a large file.

Worth noting too that one developer hit a bug with blobs greater than 5.5MB when uploaded over HTTPS, which most real-world businesses will require.

What then are you meant to do? The correct approach, as far as I can tell, is to send your large files in small chunks called blocks. These are uploaded to Azure using CloudBlockBlob.PutBlock. You identify each block with an ID string, and when all the blocks are uploaded, call CloudBlockBlob.PutBlockList with a list of IDs in the correct order.
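In outline, the server-side code looks something like this. It is a sketch only, assuming blob is a CloudBlockBlob and fs is a stream over the data to upload; the 1MB block size is an arbitrary choice:

const int blockSize = 1024 * 1024; // 1MB per block
var blockIds = new List<string>();
var buffer = new byte[blockSize];
int bytesRead;
long blockNumber = 0;

while ((bytesRead = fs.Read(buffer, 0, blockSize)) > 0)
{
    // block IDs must be base64 strings, all of the same length
    string blockId = Convert.ToBase64String(BitConverter.GetBytes(blockNumber++));
    using (var ms = new MemoryStream(buffer, 0, bytesRead))
    {
        blob.PutBlock(blockId, ms, null); // null means no MD5 check
    }
    blockIds.Add(blockId);
}

blob.PutBlockList(blockIds); // commit the blocks in this order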

This is the approach taken by Suprotim Agarwal in his example of uploading big files, which works and is a great deal better than the Microsoft sample. It even has a progress bar and some retry logic. I tried this approach, with a few tweaks. Using a 35MB file, I got about 80 KB/s with my ADSL broadband, a bit worse than the performance I usually get with FTP.

Can performance be improved? I wondered what benefit you get from uploading blocks in parallel. Azure Storage does not mind what order the blocks are uploaded. I adapted Agarwal’s sample to use multiple AJAX calls each uploading a block, experimenting with up to 8 simultaneous uploads from the browser.

The initial results were disappointing. Eventually I figured out that I was not actually achieving parallel uploads at all. The reason is that the application uses ASP.NET session state, and ASP.NET will serialise multiple requests in the same session unless you mark your ASP.NET MVC controller class with a SessionState attribute set to SessionStateBehavior.ReadOnly.

I fixed that, and now I do get multiple parallel uploads. Performance improved to around 105 KB/s, worthwhile though not dramatic.
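For anyone else who hits this, the fix is a single attribute on the controller; a sketch, with an illustrative controller name:

using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.ReadOnly)]
public class UploadController : Controller
{
    // upload actions go here; IIS will now allow parallel requests
    // from the same session instead of serialising them
}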

What about using a Windows desktop application to upload large files? I was surprised to find little improvement. But can parallel uploading help here too? The answer is that it should happen anyway, handled by the .NET client library, according to this document:

If you are writing a block blob that is no more than 64 MB in size, you can upload it in its entirety with a single write operation. Storage clients default to a 32 MB maximum single block upload, settable using the SingleBlobUploadThresholdInBytes property. When a block blob upload is larger than the value in this property, storage clients break the file into blocks. You can set the number of threads used to upload the blocks in parallel using the ParallelOperationThreadCount property.
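In other words, you can let the library do the chunking and parallelism by setting a couple of properties on BlobRequestOptions. A sketch based on my reading of that document; the numbers are arbitrary and blob is a CloudBlockBlob:

var options = new BlobRequestOptions
{
    // anything larger than 4MB is split into blocks rather than sent in one go
    SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024,
    // number of blocks to upload in parallel
    ParallelOperationThreadCount = 4
};

using (var fileStream = File.OpenRead(filePath))
{
    blob.UploadFromStream(fileStream, null, options, null);
}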

It sounds as if there is little advantage in writing your own chunking code, except that if you just call the UploadFromFile or UploadFromStream methods of CloudBlockBlob, you do not get any progress notification event (though you can get a retry notification from an OperationContext object passed to the method). Therefore I looked around for a sample using parallel uploads, and found this one from Microsoft MVP Tyler Doerksen, using C#’s Parallel.For.

Be warned: it does not work! Doerksen’s approach is to upload the entire file into memory (not great, but not as bad as on a web server), send it in chunks using CloudBlockBlob.PutBlock, adding the block ID to a collection at the same time, and then to call CloudBlockBlob.PutBlockList. The reason it does not work is that the order of the loops in Parallel.For is indeterminate, so the block IDs are unlikely to be in the right order.

I fixed this, it tested OK, and then I decided to further improve it by reading each chunk from the file within the loop, rather than loading the entire file into memory. I then puzzled over why my code was broken. The files uploaded, but they were corrupt. I worked it out. In the following code, fs is a FileStream object:

fs.Position = x * blockLength;
bytesread = fs.Read(chunk, 0, currentLength);

Spot the problem? Since fs is a variable declared outside the loop, other threads were setting its position during the read operation, with random results. I fixed it like this:

lock (fs)
{
    fs.Position = x * blockLength;
    bytesread = fs.Read(chunk, 0, currentLength);
}

and the file corruption disappeared.
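Putting the two fixes together (block IDs assigned by index up front, and a lock around the shared FileStream), the parallel upload ends up looking roughly like this. It is a sketch under the same assumptions as before: blob is a CloudBlockBlob and the block size is arbitrary.

int blockLength = 1024 * 1024; // 1MB per block
long fileLength = new FileInfo(filePath).Length;
int blockCount = (int)Math.Ceiling((double)fileLength / blockLength);

// pre-compute the IDs by index, so the commit order does not depend
// on which Parallel.For iteration happens to finish first
List<string> blockIds = Enumerable.Range(0, blockCount)
    .Select(i => Convert.ToBase64String(BitConverter.GetBytes((long)i)))
    .ToList();

using (FileStream fs = File.OpenRead(filePath))
{
    Parallel.For(0, blockCount, x =>
    {
        int currentLength = (int)Math.Min(blockLength, fileLength - (long)x * blockLength);
        byte[] chunk = new byte[currentLength];

        lock (fs) // the FileStream position is shared state across threads
        {
            fs.Position = (long)x * blockLength;
            fs.Read(chunk, 0, currentLength);
        }

        using (var ms = new MemoryStream(chunk))
        {
            blob.PutBlock(blockIds[x], ms, null);
        }
    });
}

blob.PutBlockList(blockIds); // commit in index order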

I am not sure why, but the manually coded parallel uploads seem to slightly but not dramatically improve performance, to around 100-105 KB/s, almost exactly what my ASP.NET MVC application achieves over my broadband connection.


There is another approach worth mentioning. It is possible to bypass the web server and upload directly from the browser to Azure storage. To do this, you need to allow cross-origin resource sharing (CORS) as explained here. You also need to issue a Shared Access Signature, a temporary key that allows read-write access to Azure storage. A guy called Blair Chen seems to have this all figured out, as you can see from his Azure speed test and jazure JavaScript library, which makes it easy to upload a blob from the browser.

I was contemplating going that route, but it seems that performance is no better (judging by the Test Upload Big Files section of Chen’s speed test), so I should probably be content with the parallel JavaScript upload solution, which avoids fiddling with CORS.

Overall, has my experience with the Blob storage API been good? I have not found any issues with the service itself so far, but the documentation and samples could be better. This page should be the jumping off point for all you need to know for a basic application like mine, but I did not find it easy to find good samples or documentation for what I thought would be a common scenario, uploading large files with ASP.NET MVC.

Update: since writing this post I have come across this post by Rob Gillen which addresses the performance issue in detail (and links to working Parallel.For code); however I suspect that since the post is four years old the conclusions are no longer valid, because of improvements to the Azure storage client library.

Google I/O 2014: impressive momentum, no wow moments

I am not in San Francisco but attended Google I/O Extended in London yesterday, to hear the keynote and a couple of sessions from Google’s annual developer conference.


I found the demographics different from most IT events I attend: a younger crowd, and plenty of start-ups and very small businesses, not at all enterprisey (is that a word?).


The main announcements:

A new version of Android, known as Android L (I don’t know if this will eventually expand to Lollipop or Liquorice or some such). It is a big release, with over 5,000 new APIs, we were told (when does Android start being called bloated, I wonder?). Themes include a new visual style called Material Design (which extends also to the web and to Chrome), and suitability for more device types including Android TV, Android Wear (smart watches) and Android Auto. There is also a new hardware-accelerated graphics API called the Android Extension Pack, which extends OpenGL ES for better game performance, with support from NVIDIA Tegra. Android graphics performance will be good enough for a considerable subset of the gaming community, and we saw Unreal Engine demoed.

Android L does not use Dalvik, the virtual machine that runs Java code. In its place is ART (Android Runtime). This is 64-bit, so while Java code will run fine, native code will need updating.

Google is working hard to keep Android under its control, putting more features into its Play Services, the closed part of Android available only from Google and which is updated every 6 weeks, bypassing the operator obstacle to OS updates. There is also a new reference design including both hardware and software which is designed for affordable smartphones in the developing world: third parties can take this and build a decent Android mobile which should sell for under $100 as I understood it. I imagine this is designed to ward off fractured Android efforts like Microsoft’s Nokia X, aimed at the same kind of market but without Play Services.

There are new Android smart watches on the way, and we saw the inevitable demonstration of a user talking to the watch to order taxis or pizzas, get notifications, and send simple messages.

Voice control demos always seem to be nervous moments for presenters – will they be understood? Unfortunately that uncertainty remains for real users too, as evidenced by Xbox One Kinect which is amazing in that it often works, but fails often enough to be irritating. Voice recognition is a hard problem, not only in respect of correctly translating the command, but also in correctly detecting what is a command (if the person standing next to me shouts “Taxi please” I do not want my watch to order one for me).

The smart watch problem also parallels the TV problem. The appeal of the watch is that it is a simple glanceable device for telling the time. The appeal of the TV is that it is a simple sit-back screen where you only have to select a channel. Putting more smarts into these devices seems to make sense, but at the same time damages that core feature, unless done with extreme care.

Android TV puts the OS into your television, though Google’s messaging here is somewhat confusing in that, on the one hand, Chromecast (also known as Google Cast) means that you can use your Google device (Android or Chromebook) as the computer and the TV as the display and audio system, while on the other hand you can use Android on the TV itself as an all-in-one.

We are inching towards unified home entertainment, but with Google, Microsoft (Xbox One), Sony (PlayStation) and Apple all jostling for position it is too early to call a winner.

Material Design – Metro for Android?

We heard a lot about Material Design, which is Google’s new design style. Google borrowed plenty of buzzwords from Microsoft’s “Metro” playbook, and I heard expressions like “fast and fluid”, clean typography, signposting, and content-first. Like Metro, it also seems to have a blocky theme (we will know when the next design wave kicks in, as it will have rounded corners).


Material Design is not just for Android. You can also implement the concept in Polymer, which is a web presentation framework built on Web Components, a standard in draft at the W3C. Support for Web Components (and therefore Polymer) is already in Chrome, advancing rapidly in Mozilla Firefox, probably coming in Apple Safari, and maybe coming in Microsoft IE. However, JavaScript polyfills mean that Polymer will run to some extent in any modern browser.

Whenever IE was mentioned by a presenter at Google I/O there was an awkward/knowing laugh from the audience. Think about what that means.

One of the ideas here is that with a common design concept across Android and web, developers can make web apps (and therefore Chrome apps) look and behave more like Android apps (or vice versa). Again, there is a similar concept at Microsoft, where the WinJS library lets you implement a Metro look and feel in a web app.

Microsoft may have been ahead of Google in this, but it has done the company little good in that adoption for Metro has been weak, for well-rehearsed reasons connected with the smartphone wars, legacy Windows desktop and so on. Google has less legacy weighing it down.

How good is Material Design though? Apple’s Steve Jobs once said of a new OS X design update that it was so good you want to lick it. Metro lacks that kind of appeal, and judging from yesterday’s brief samples, so does Material Design, whatever its other merits in terms of clarity and usability. It is early days though.

Business features: Samsung Knox, Office support, unlimited storage

Google announced a couple of features aimed at business users. One is that Samsung Knox, which provides app sandboxing and data security for business users, has been donated to Google for integration into Android. Another is that Google Docs will get the ability to edit Microsoft Office documents in their native format, removing an annoyance for users who previously had to convert documents to and from Google’s own format when exchanging them with Microsoft Office users.

This seems to be an admission that Microsoft Office is the business standard for documents, and you can take it either way – good for Google because compatibility is better, or good for Microsoft because it cements Office as the standard. There will be ifs and buts of course.

Google is also offering unlimited online storage for business users, called Drive for Work, at $10 per user per month, upping the ante for everyone in the online storage game – Microsoft, Dropbox, Box and so on.

Google’s Cloud Platform

Google showed new features in its cloud platform, with a focus on big data analytics using an approach called Cloud Dataflow. “We don’t use MapReduce any more”, said the presenter, explaining that Cloud Dataflow enables all of us to use the same technology Google uses to analyse big data.

Greg DeMichille, a director of product management for the cloud platform, appeared on stage to show features for in-browser tracing and debugging of cloud applications. I recall DeMichille being much involved in Microsoft’s version of Java back in the days of the battle with Sun; he also had a spell at Adobe getting behind Flash and Flex for developers.

No Wow moments

The Google I/O 2014 keynote impressed in terms of numbers – Android growth continues unabated – and in terms of partners lining up behind initiatives like Android TV and Android Auto. The momentum seems unstoppable and the mass market for mobile and embedded devices is Google’s to lose.

On the other hand, I did not notice any game-changing moments such as I experienced when first seeing the Chromebook, or the Google Now personalisation service. Both of those still exist, of course, but if Android is really going to change our lives for the better, Google could have done a better job of conveying that message.

Embarcadero AppMethod: another route to cross-platform mobile, now with C++ support

Embarcadero has updated AppMethod, its IDE for cross-platform mobile and desktop applications. The IDE now supports C++, and as a special offer, you can develop for the Android phone “free forever”, according to the web site.

AppMethod is none other than our old friend Delphi, combined with the FireMonkey cross-platform framework. The difference between AppMethod and the older RAD Studio product line (current version is XE6) is twofold:

1. AppMethod does not include the VCL, the Delphi framework for Windows applications. It does let you develop for Windows or Mac OS X using FireMonkey.

2. You can buy RAD Studio outright with a perpetual license, from £1342.00 plus VAT for a new user (RAD Studio Professional). AppMethod is only available on subscription.

AppMethod pricing is per developer per platform per year. Currently this is £179.83 plus VAT for individuals (very small businesses up to a maximum of 5 employees in the entire organisation) or £600 for larger businesses (a rather large premium).

C++ support is new in AppMethod 1.14 and supports all target platforms except the iOS Simulator (an annoying limitation). It supports ARC (Automatic Reference Counting) on Android as well as iOS. Mac OS X is supported from 10.8 (Mountain Lion) and up.

There are also a few changes in FireMonkey. You can load HTML into the TWebBrowser component using LoadFromStrings. There is a new date picker component.

Another new feature is in the RTL (run time library). Called App Tethering, it lets applications communicate with each other, for example using TCP. These can be apps on the same device or remote apps. Once paired, apps can run remote actions and share standard data types and streams.

There are also updates to push notifications for iOS and Android, Google Glass support, updated OpenGL and DirectX support on Windows, and more: see here for the complete documentation of what is new.

A Quick Hands-on

I installed the latest AppMethod on Windows 8. The install warns that AppMethod cannot co-exist with RAD Studio XE6, presumably because it is essentially the same thing re-wrapped. The product name is relatively new, but there is plenty of old stuff under the covers. AppMethod still has a dependency on JSharp, Microsoft’s Java implementation for .NET – Java code in the IDE dating back to who knows when?


There is a 10-field dialog confirming paths for Android tools, which is a reminder of how many moving parts there are here. It is more complex than most Android development environments because it uses the NDK (Native Development Kit) as well as the usual SDK.


Once up and running, you can start a new project such as a FireMonkey mobile application:

[Screenshot: creating a FireMonkey mobile application]

and then you are in an IDE which would not be entirely unfamiliar to a Delphi user in 1995 (or I suppose, a C++ Builder user in 1997) – I am not saying this is a bad thing, though the IDE feels dated in comparison to Microsoft’s Visual Studio.


After coming from a spell of development with XAML it feels odd to have a form builder that defaults to xy layout, but layout managers are available:

[Screenshot: layout options in the form designer]

Compile and run, and after the usual slow initialization of the Android emulator, the app appeared.


Why AppMethod?

In the crowded world of cross-platform mobile development, why use AppMethod?

Embarcadero makes a big play of its native development, though it is “native” in respect of code execution but not in GUI fidelity since by default visual controls are custom-drawn by the framework. This is in contrast to Xamarin (the obvious alternative for developers from a Windows background) which does no custom drawing but only uses native controls; however for raw performance AppMethod may have the edge (I have not done comparisons).

Delphi developers should also look at RemObjects Oxygene which also uses a Delphi-like language but is hosted in Visual Studio and, like Xamarin, uses native UI components.

The AppMethod approach does make sense if you prioritise maximum code-sharing over getting exactly the right look and feel for each supported platform, and need better performance or more capability than HTML and JavaScript can get you. There is no support for Windows Phone though; if that is in your plans, Xamarin or HTML and JavaScript development is a better fit.

Microsoft Azure: growing but still has image problems

I attended a Microsoft Cloud Day in London organised by the Azure User Group; I booked this when Technical Fellow Mark Russinovich was set to attend, but regrettably he cancelled at a late stage. I skipped the substitute keynote by UK Microsoftie Dave Coplin as I heard the very same talk earlier this month, so arrived mid-morning at the venue in Whitechapel; not that easy to find amid the stalls of Whitechapel Market (well, not quite), but if you seek out the Whitechapel branch of the Foxcroft and Ginger cafe (not known to Here Maps on Windows Phone, incidentally) then you will find premises upstairs with logos for Barclays Accelerator and Microsoft Ventures; something to do with assisting the flow of cash from corporate giants desperate for community engagement to business start-ups desperate for cash.

Giving technical presentations is hard, and while I admired Richard Conway’s efforts at showing how, with some PowerShell, he could transform a large dataset into rows of numbers using the magic of Azure HDInsight, I didn’t think it quite worked. Beat Schwegler dived into code to explain the how and why of Azure Notification Hubs, a service which delivers push notifications to mobile apps; useful material, but it could have been compressed. Then there was Richard Astbury of software development company two10degrees, who talked about Project Orleans, high-scale applications via “an Actor Model framework of programmable in-memory objects”; we learned about grains and silos (or their software equivalents) in a session that was mostly new to me.

At the break I chatted with a somewhat bemused attendee who had come in the hope of learning about whether he should migrate some or all of his small company’s server requirements to Azure. I explained about Office 365 and Azure Active Directory which he said was more relevant to him than the intricacies of software development. It turns out that the Azure User Group is really about software development using Azure services, which is only one perspective on Microsoft’s cloud platform.

For me the most intriguing presentation was from Michael Delaney at ElevateDirect, a young business which has a web application to assist businesses in finding employees directly rather than via recruitment agencies. His company picked Amazon Web Services (AWS) over Azure two and a half years ago, but is now moving to Microsoft’s cloud.

[Photo: Michael Delaney, CTO and co-founder, ElevateDirect]

Why did he pick AWS? He is not a typical Microsoft-platform person, preferring open source products including Linux, Apache Solr, Python and MySQL. When he chose AWS, Azure was not a suitable platform for a mainly Linux-based application. However, he does prefer C# to Java. According to Delaney, AWS is a Java-first platform and he found this getting in the way of development.

Azure today, says Delaney, has the first-class support for Linux that it lacked a few years back, and is a better platform for C# applications than AWS even though AWS does support Windows servers.

Migrating the application was relatively straightforward, he said, with the biggest issue being the move from Amazon S3 (Simple Storage Service) to Azure Storage, though he overcame this by abstracting the storage API behind his own wrapper code.
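That kind of wrapper need not be elaborate. Something along these lines is enough, with one implementation calling the S3 client and another calling the Azure storage client; this is my own illustrative interface, not ElevateDirect’s code:

// needs using System.IO for Stream
public interface IFileStore
{
    void Save(string key, Stream content);
    Stream Open(string key);
    void Delete(string key);
}
// application code depends only on IFileStore, so switching providers
// means swapping the implementation rather than touching business logic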

Azure is not all the way there though. Delaney is disappointed with the relational database options on offer, essentially SQL Server or third-party managed MySQL from ClearDB. He would like to see options for PostgreSQL and others. He would also like the open source Elastic Search to be offered as an Azure service.

There was a panel discussion later at which the question of Azure’s market perception was discussed. Most businesses, according to one attendee, think of AWS as the only option for cloud, even if they are Microsoft-platform businesses for whom Azure might be more suitable. It is a branding problem caused by the AWS first-mover advantage and market dominance, said Microsoft’s Steve Plank.

I would add that Azure is relatively new, at least in its new incarnation offering full IaaS (infrastructure as a service). AWS is also ahead on the number and variety of services on offer, and has not really messed up, which means there is little incentive for existing users to move unless, like Delaney, they find some aspect of Microsoft’s platform (in his case C#) particularly compelling.

This leads me back to the bemused attendee. It seems to me that Azure’s biggest advantage is Azure Active Directory and seamless integration with Office 365. Having said that, it is not difficult to host an application on AWS that uses Azure Active Directory, but there may be some advantage in working with a single cloud provider (and you can expect fast low-latency networking between Azure and Office 365).