Category Archives: microsoft

Asus bets on everything with new UK product launches for Android, Google Chromebook and Microsoft Windows

Asus unveiled its Winter 2014 UK range at an event in London yesterday. It is an extensive range covering most bases, including Android tablets, Windows 8 hybrids, Google Chromebooks, and Android smartphones.

Asus never fails to impress with its innovative ideas – like the Padfone, a phone which docks into a tablet – though not all the ideas win over the public, and we did not hear about any new Padfones yesterday.

The company’s other strength though is to crank out well-made products at a competitive price, and this aspect remains prominent. There was nothing cutting-edge on show last night, but plenty of designs that score favourably in terms of what you get for the money.

At a glance:

  • Chromebook C200 dual-core Intel N2830 laptop 12″ display £199.99 and C300 13″ display £239.99
  • MeMO Pad Android tablets ME176C 7″ £119 and 8″ ME181 (with faster Z3580 2.3 GHz quad-core processor) £169
  • Transformer Pad TF103C Android tablet with mobile keyboard dock (ie a tear-off keyboard) £239
  • Two FonePad 7″ Android phablets: tablets with phone functionality, LTE in the ME372CL at £129.99  and 3G in the ME175CG at £199.99.
  • Three Zenfone 3G Android phones, 4″ at £99.99, 5″ at £149.99 and 6″ at £249.99.
  • Transformer Book T200 and T300 joining the T100 (10.1″ display) as Windows 8 hybrids with tear-off keyboards. The T200 has an 11.6″ display and the T300 a 13.3″ display and processors from Core i3 to Core i7 – no longer just a budget range. The T200 starts at £349.
  • Transformer Book Flip Windows 8.1 laptops with fold-back touch screens so you can use them as fat tablets. 13.3″ or 15.6″ screens, various prices according to configuration starting with a Core i3 at £449.
  • G750 gaming laptops from £999.99 to £1799.99 with Core i7 processors and NVIDIA GeForce GTX 800M GPUs.
  • G550JK Gaming Notebook with Core i7 and GTX 850M GPU from £899.99.

Unfortunately the press event was held in a darkened room useless for photography or close inspection of the devices. A few points to note though.

The T100 is, according to Asus, the world’s bestselling Windows hybrid. This does not surprise me since with 11 hr battery life and full Windows 8 with Office pre-installed it ticks a lot of boxes. I prefer the tear-off keyboard concept to complex flip designs that never make satisfactory tablets. The T100 now seems to be the base model in a full range of Windows hybrids.

On the phone side, it is odd that Asus did not announce any operator deals and seems to be focused on the sim-free market.

How good are the Zenfones? This is not a review, but I had a quick play with the models on display. They are not high-end devices, but nor do they feel cheap. IPS+ (in-plane switching) displays give a wide viewing angle. Gorilla Glass 3 protects the screen; the promo video talks about a 30m drop test which I do not believe for a moment*. The touch screens are meant to be responsive when wearing gloves. The camera has a five-element lens with F/2.0 aperture, a low-light mode, and “time rewind” which records images before you tap. A “Smart remove” feature removes moving objects from your picture. You also get “Zen UI” on top of Android; I generally prefer stock Android but the vendors want to differentiate and it seems not to get in the way too much.

Just another phone then; but looks good value.

As it happens, I saw another Asus display as I arrived in London, at St Pancras station.

The stand, devoted mainly to the T100, was far from bustling. This might be related to the profile of Windows these days; or it might reflect the fact that the Asus brand, for all the company’s efforts, is associated more with good honest value than something you stop to look at on the way to work.

For more details see the Asus site or have a look in the likes of John Lewis or Currys/PC World.

*On the drop test, Asus says: “This is a drop test for the Gorilla glass, and is dropping a metal ball on to a pane of it that is clamped down, not actually a drop of the phone itself.”

Developing an app on Microsoft Azure: a few quick reflections

I have recently completed (if applications are ever completed) an application which runs on Microsoft’s Azure platform. I used lots of Microsoft technology:

  • Visual Studio 2013
  • Visual Studio Online with Team Foundation version control
  • ASP.NET MVC 4.0
  • Entity Framework 4.0
  • Azure SQL
  • Azure Active Directory
  • Azure Web Sites
  • Azure Blob Storage
  • Microsoft .NET 4.5 with C#

The good news: the app works well and performance is good. The application handles the upload and download of large files by authorised users, and replaces a previous solution using a public file sending service. We were pleased to find that the new application is a little faster for upload and download, as well as offering better control over user access and a more professional appearance.

There were some complications though. The requirement was for internal users to log in with their Office 365 (Azure Active Directory) credentials, but for external users (the company’s customers) to log in with credentials stored in a SQL Server database – in other words, hybrid authentication. It turns out you can do this reasonably seamlessly by implementing IPrincipal in a custom class to support the database login. This is largely uncharted territory though in terms of official documentation and took some effort.
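
You end up writing something along these lines (a simplified sketch with hypothetical names, not the project’s actual code): a principal wrapping a user validated against the SQL Server login table, so the rest of the application can treat database logins and Azure AD logins through the same IPrincipal interface.

using System;
using System.Linq;
using System.Security.Principal;

// Hypothetical sketch: represents a user authenticated against the
// application's own SQL Server user table.
public class DatabasePrincipal : IPrincipal
{
    private readonly string[] roles;

    public DatabasePrincipal(string userName, string[] roles)
    {
        Identity = new GenericIdentity(userName, "DatabaseLogin");
        this.roles = roles ?? new string[0];
    }

    public IIdentity Identity { get; private set; }

    public bool IsInRole(string role)
    {
        return roles.Contains(role);
    }
}

// After validating the credentials in the login action, you set an
// authentication cookie and assign the principal to HttpContext.User on
// each request (for example in Application_PostAuthenticateRequest).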

Second, Microsoft’s Azure Active Directory support for custom applications is half-baked. You can create an application that supports Azure AD login in a few moments with Visual Studio, but it does not give you access to metadata such as which security groups the user belongs to. I have posted about this in more detail here. There is an API of course, but it is currently a moving target: be prepared for some hassle if you try this.

Third, while Azure Blob Storage itself seems to work well, most of the resources for developers seem to have little idea of what a large file is. Since a primary use case for cloud storage is to cover scenarios where email attachments are not good enough, it seems to me that handling large files (by which I mean multiple GB) should be considered normal rather than exceptional. By way of mitigation, the API itself has been written with large files in mind, so it all works fine once you figure it out. More on this here.

What about Visual Studio? The experience has been good overall. Once you have configured the project correctly, you can update the site on Azure simply by hitting Publish and clicking Next a few times. There is some awkwardness over configuration for local debugging versus deployment. You probably want to connect to a local SQL Server and the Azure storage emulator when debugging, and to the Azure-hosted versions after publishing. Visual Studio has a Web.Debug.Config and a Web.Release.Config which let you apply a transformation to your main Web.Config when publishing – though note that these do not have any effect when you simply run your project in Release mode. The correct usage is to set Web.Config to what you want for debugging, and apply the deployment configuration in Web.Release.Config; then it all works.

The piece that caused me most grief was a setting for <wsFederation>. When a user logs in with Azure AD, they get redirected to a Microsoft site to log in, and then back to the application. Applications have to be registered in Azure AD for this to work. There is some uncertainty though about whether the reply attribute, which specifies the redirection back to the app, needs to be set explicitly or not. In practice I found that it does need to be explicit, otherwise you get redirected to the deployed site even when debugging locally – not good.

I have mixed feelings about Team Foundation version control. It works, and I like having a web-based repository for my code. On the other hand, it is slow, and Visual Studio sulks from time to time and requires you to re-enter credentials (Microsoft seems to love making you do that). If you have a less than stellar internet connection (or even a good one), Visual Studio freezes from time to time since the source control stuff is not good at working in the background. It usually unfreezes eventually.

As an experiment, I set the project to require a successful build before check-in. The idea is that you cannot check in a broken build. However, this build has to take place on the server, not locally. So you try to check in, Visual Studio says a build is required, and prompts you to initiate it. You do so, and a build is queued. Some time later (5-10 minutes) the build completes and a dialog appears behind the IDE saying that you need to reconcile changes – even if there are none. Confusing.

What about Entity Framework? I have mixed feelings here too, and have posted separately on the subject. I used code-first: just create your classes and add them to your DbContext and all the data access code is handled for you, kind-of. It makes sense to use EF in an ASP.NET MVC project since the framework expects it, though it is not compulsory. I do miss the control you get from writing your own SQL though; and found myself using the SqlQuery method on occasion to recover some of that control.
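
For illustration, the pattern looks something like this (a sketch with made-up names, not the application’s actual code); the SqlQuery call at the end is the escape hatch I mean.

using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// Hypothetical code-first entity and context
public class StoredFile
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Uploaded { get; set; }
}

public class FilesContext : DbContext
{
    public DbSet<StoredFile> Files { get; set; }
}

public static class FileQueries
{
    public static List<StoredFile> UploadedSince(DateTime cutoff)
    {
        using (var db = new FilesContext())
        {
            // The idiomatic EF route is a LINQ query:
            // return db.Files.Where(f => f.Uploaded > cutoff).ToList();

            // SqlQuery recovers some control: hand-written SQL, with the
            // results mapped back onto the entity type.
            return db.Database.SqlQuery<StoredFile>(
                "SELECT * FROM StoredFiles WHERE Uploaded > @p0",
                cutoff).ToList();
        }
    }
}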

Finally, a few notes on ASP.NET MVC. I mostly like it; the separation between Razor views (essentially HTML templates into which you pour your data at runtime) and the code which implements your business logic and data access is excellent. The code can get convoluted though. Have a look at this useful piece on the ASP.NET MVC WebGrid and this remark:

grid.Column("Name",
  format: @<text>@Html.ActionLink((string)item.Name,
  "Details", "Product", new { id = item.ProductId }, null)</text>),

The format parameter is actually a Func, but the Razor view engine hides that from us. But you’re free to pass a Func—for example, you could use a lambda expression.

The code works fine but is it natural and intuitive? Why, for example, do you have to cast the first argument to ActionLink to a string for it to work (I can confirm that it is necessary), and would you have worked this out without help?

I also hit a problem restyling the pages generated by Visual Studio, which use the Twitter Bootstrap framework. The problem is that bootstrap.css is a generated file and it does not make sense to edit it directly. Rather, you should edit some variables and use them as input to regenerate it. I came up with a solution which I posted on Stack Overflow but there are no comments yet – perhaps this post will stimulate some, as I am not sure whether I found the best approach.

My sense is that while ASP.NET MVC is largely a thing of beauty, it has left behind more casual developers who want a quick and easy way to write business applications. Put another way, the framework is somewhat challenging for newcomers and that in turn affects the breadth of its adoption.

Developing on Azure and using Azure AD makes perfect sense for businesses which are using the Microsoft platform, especially if they use Office 365, and the level of integration on offer, together with the convenience of cloud hosting and anywhere access, is outstanding. There remain some issues with the maturity of the frameworks, ever-changing libraries, and poor or confusing documentation.

Since this area is strategic for Microsoft, I suggest that it would benefit the company to work hard on pulling it all together more effectively.

Should you use Entity Framework for .NET applications?

I have been working on a project which I thought would be simpler than it turned out to be – nothing new there, most software projects are like that.

The project involves upload and download of large files from Azure storage. There is a database as part of the application, nothing too demanding, but requiring some typical CRUD (Create, Retrieve, Update, Delete) functionality. I had to decide how to implement this.

First, a confession. I am comfortable using SQL and my normal approach to a database application is to use ADO.NET DataReaders to read data. They are brilliant; you just send some SQL to the database and back comes the data in a format that is easy to read back in C# code.

When I need to update the data, I use SqlCommand.ExecuteNonQuery which executes arbitrary SQL. It is easy to use parameters and transactions, and I get full control over how many connections are open and so on.

This approach has always worked well for me and I get excellent performance and complete flexibility.
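
To illustrate what I mean, here is a minimal sketch of that approach, with a hypothetical Customers table and connection string name (not code from this project):

using System.Configuration;
using System.Data.SqlClient;

public static class CustomerData
{
    private static readonly string ConnStr =
        ConfigurationManager.ConnectionStrings["Default"].ConnectionString;

    // Reading with a DataReader: send SQL, read the rows back
    public static string GetCustomerName(int id)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT Name FROM Customers WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                return reader.Read() ? reader.GetString(0) : null;
            }
        }
    }

    // Updating with ExecuteNonQuery and parameters
    public static void RenameCustomer(int id, string newName)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "UPDATE Customers SET Name = @name WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@name", newName);
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}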

However, when coding in ASP.NET MVC and Visual Studio you are now steered firmly towards Entity Framework (EF), Microsoft’s object-relational mapping library. You can use a code-first approach. Simply create a C# class for the object you want to store, and EF handles all the drudgery of creating tables and building SQL queries, letting you concentrate on the unique features of your application.
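
For example, the code-first equivalent of the hypothetical Customers table above looks like this (hypothetical names again, but this is the shape of it):

using System.Data.Entity;

// A plain class describes the data...
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Phone { get; set; }
}

// ...and a DbContext exposes it; EF creates the Customers table for you.
public class AppDbContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}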

In addition, you can right-click in the Solution Explorer, choose Add Controller, and a wizard will generate all the code for listing, creating, editing and deleting those objects.

Well, that is the idea, and it does work, but I soon ran into issues that made me wonder if I had made the right decision.

One of the issues is what happens when you change your mind. Maybe that field should be an Int rather than a String. Maybe you need a second phone number field. Maybe you need to create new tables. How do you keep the database in sync with your classes?

This is called Code First Migrations and involves running commands that work out how the database needs to change and generate code to update it. It’s clever stuff, but the downside is that I now have a bunch of generated classes and a generated _MigrationHistory table which I did not need before. In addition, something went slightly wrong in my case and I ended up having to comment out some of the generated code in order to make the migration work.
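
The commands run from the Package Manager Console (Enable-Migrations, Add-Migration, Update-Database), and each migration is a generated class along these lines (a representative sketch based on adding a phone field to the hypothetical Customer class, not my actual migration):

namespace MyApp.Migrations
{
    using System.Data.Entity.Migrations;

    public partial class AddCustomerPhone : DbMigration
    {
        // Generated from the difference between the model and the database
        public override void Up()
        {
            AddColumn("dbo.Customers", "Phone", c => c.String());
        }

        public override void Down()
        {
            DropColumn("dbo.Customers", "Phone");
        }
    }
}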

At this point EF is creating work for me, rather than saving it.

Another issue I encountered was puzzling out how to do stuff beyond the most trivial. How do you replace an HTML edit box with a dropdown list? How do you exclude fields from being saved when you call dbContext.SaveChanges? What is the correct way to retrieve and modify data in pure code, without data binding?
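
For the record, the last of those turns out to look something like this (a sketch using the hypothetical classes above; the IsModified line is also the answer to the question about excluding fields), though none of it is obvious until you know it:

public void UpdatePhone(int customerId, string newPhone)
{
    using (var db = new AppDbContext())
    {
        // Retrieve by primary key, modify, save; EF generates the UPDATE
        Customer customer = db.Customers.Find(customerId);
        if (customer == null) return;

        customer.Phone = newPhone;

        // The way to stop a particular property being written back on SaveChanges
        db.Entry(customer).Property(c => c.Name).IsModified = false;

        db.SaveChanges();
    }
}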

I am not the first to have questions. I came across this documentation: an article promisingly entitled How to: Add, Modify, and Delete Objects which tells you nothing of value; the page’s own “found this helpful” count suggests few others got much from it either.

You should probably start here instead. Still, be aware that EF is by no means straightforward. Instead of having to know SQL and the basics of ADO.NET commands and DataReaders, you now have to know EF, and I am not sure it is any less intricate. You also need to be comfortable with data binding and LINQ (Language Integrated Query) to make sense of it all, though I will add that strong data binding support is one reason why EF is a good fit for ASP.NET MVC.

Should you use Entity Framework? It remains, as far as I can tell, the strategic direction for data access on Microsoft’s platform, and once you have worked out the basics you should be able to put together simple database applications more quickly and more naturally than with manually coded SQL.

I am not sure it makes sense for heavy-duty data access, since it is harder to fine-tune performance and if you hit subtle bugs, you may end up in the depths of EF rather than debugging your own code.

I would be interested in hearing from other developers. Do you love EF, avoid it, or is it just about OK?

The UK government is adopting Open Document: some observations

The UK government is adopting the Open Document Format for Office Applications, for documents that are editable (read-only documents will be PDF or HTML). You can read Mike Bracken’s (Government Digital Service) blog on the subject here, and the details of the new requirements here. If you want to see the actual standards, they are on the OASIS site here.

I followed the XML document standards wars in some detail back in 2006-2008. The origins of ODF go back to Sun Microsystems (a staunch opponent of Microsoft) which acquired an Office suite called StarOffice, made it open source, and supported OpenOffice.org. My impression was that Sun’s intentions were in part to disrupt the market for Microsoft Office, and in part to promote a useful open standard out of conviction. OpenOffice eventually found its way to the Apache Foundation after Oracle’s acquisition of Sun. You can find it here.

During that time, Microsoft responded by shifting Office to use XML formats by default – these are the formats we know as .docx, .xlsx etc. It also made the formats an open standard via ECMA and ISO, to the indignation of ODF advocates who found every possible fault in the standards and the process. There were and are faults; but it has always seemed to me that an open XML standard for Microsoft Office documents was a real step forward from the wholly proprietary (but reverse engineered) binary formats.

The standards wars are to some extent a proxy for the effort to shift Microsoft from its dominance of business document authoring. Microsoft charges a lot for Office, particularly for businesses, and arguably this is an unnecessary burden. On the other hand, it is a good product which I personally prefer to the alternatives on Windows (on the Mac I am not so sure), and considering the amount of use Office gets during the working day even a small improvement in productivity is worth paying for.

As a further precaution, Microsoft added ODF support into its own Office suite. This was poor at first, though it has no doubt improved since 2007. However I would not advise anyone to set Microsoft Office to use ODF by default, unless mandated by some requirement such as government regulation. It is not the native format and I would expect a greater likelihood that something could go slightly wrong in formatting or metadata.

Bracken does not mention Microsoft Office in his blog; but as ever, the interesting part of this decision is how it will impact Office users in government, or working with government. If it is a matter of switching defaults in Office, that is no big deal, but if it means replacing Microsoft Office with OpenOffice or its fork, LibreOffice, that will have more impact.

The problem with abandoning Microsoft Office is not only that the alternatives may fall short, but also that the ecosystem around Microsoft Office and its document formats is richer – in other words, tools that consume or generate Office documents, add-ins for Office, and so on.

This also means that Microsoft Office documents are, in my experience, more interoperable (not less) than ODF documents.

That does not in itself make the UK government’s decision a bad one, because in making the decision it is helping to promote an alternative ecosystem. On the other hand, it does mean that the decision could be costly in constraining the choice of tools while the ODF ecosystem catches up (if it does).

How does the move towards cloud services like Office 365 and Google Docs impact on all this? Microsoft says it supports ODF in SharePoint; but for sure it is better to use Microsoft’s own formats there. For example, check the specifications for Office Online. You can edit docx in the browser, but not odt (Open Document Text); it is the same story with spreadsheets and presentations.

Google has recently added native support for the Microsoft formats to Google Docs.

Amazon’s Zocalo service, which I have just reviewed for the Register, can preview Microsoft’s formats in the browser, but while it also supports odt for preview, it does not support ods (Open Document Spreadsheet).

A good decision then by the UK government? Your answer may be partly ideological, but as a UK taxpayer, my feelings are mixed.

For more information on this and other government IT matters, I recommend Bryan Glick’s pieces over on Computer Weekly, like this one.

Microsoft Financials show cloud growth, Nokia loss

Microsoft has announced its financial results for the quarter ending June 30th 2014. How is it doing?

Quarterly revenue is up to $23.38 billion from $20.49 billion year on year, though $1.98 billion of that is phone hardware – Nokia, in other words. Operating income is up to $6.48 billion from $6.07 billion. Net income is down to $4.61 billion from $4.96 billion because of tax adjustments.

I am more interested in the segment breakdown, though Microsoft’s segments are not particularly clear:

Quarter ending June 30th 2014 vs quarter ending June 30th 2013, $millions

Segment | Revenue | Change | Gross margin | Change
Devices and Consumer Licensing | 4694 | +406 | 4407 | +526
Computing and Gaming Hardware | 1441 | +274 | 18 | +665
Phone Hardware | 1985 | N/A | 54 | N/A
Devices and Consumer Other | 1880 | +317 | 446 | +78
Commercial Licensing | 11222 | +595 | 10296 | +345
Commercial Other | 2262 | +688 | 691 | +355

Revenue is actually up year on year in all segments. Windows has benefited from the end of XP support driving upgrades. Products Microsoft wants to talk about are Azure, SQL Server and System Center, which are all growing revenue. “Commercial cloud revenue” – in other words Office 365, CRM Online and Azure – grew 147% and is now a $4.4 billion business at the current rate of sale.

The bad news is that Nokia contributed a $692 million loss (diminishment of operating income). Microsoft says it sold 5.8 million Lumia (Windows) phones and 30.3 million non-Lumia phones, with the majority of Lumia sales being low-cost devices.

Bing search grew revenue by 40% and US search share is up to 19.2% according to Microsoft.

Microsoft CEO Satya Nadella promises “One Windows” in place of three, but should that be two?

Microsoft released its latest financial results yesterday, on which I will post separately. However, this remark from the earnings call transcript (Q&A with financial analysts) caught my eye:

In the year ahead, we are investing in ways that will ensure our Device OS and first party hardware align to our core. We will streamline the next version of Windows from three Operating Systems into one, single converged Operating System for screens of all sizes. We will unify our stores, commerce and developer platforms to drive a more coherent user experiences and a broader developer opportunity. We look forward to sharing more about our next major wave of Windows enhancements in the coming months.

What are the three versions of Windows today? I guess, Windows x86, Windows RT (Windows on ARM), and Windows Phone. On the other hand, there is little difference between Windows x86 and Windows RT other than that Windows RT runs on ARM and is locked down so that you cannot install desktop apps. The latter is a configuration decision, which does not make it a different operating system; and if you count running on ARM as being a different OS, then Windows Phone will always be a different OS unless Microsoft makes the unlikely decision to standardise on x86 on the phone (a longstanding relationship with Qualcomm makes this a stretch).

Might Nadella have meant PC Windows, Windows Phone and Xbox? It is possible, but the vibes from yesterday are that Xbox will be refocused on gaming, making it more distinct from PC and phone:

We made the decision to manage Xbox to maximize enterprise value with a focus on gaming. Gaming is the largest digital life category in a mobile first, cloud first world. It’s also the place where our past success, revered brand and passionate fan base present us a special opportunity.

With our decision to specifically focus on gaming we expect to close Xbox Entertainment Studios and streamline our investments in Music and Video. We will invest in our core console gaming and Xbox Live with a view towards the broader PC and mobile opportunity.

said Nadella.

As a further aside, what does it mean to “manage Xbox to maximize enterprise value”? It is not a misprint, but perhaps Nadella meant to say entertainment? Or perhaps the enterprise he has in mind is Microsoft?

Never mind; the real issue is about the development platform and making it easier to build applications for PC, phone and tablets without rewriting all your code. That is the promise of the Universal App announced earlier this year at the Build conference.

That sounds good; but remember that Windows 8.x is two operating systems in one. There is the desktop side which is what most of us use most of the time, and the tablet side (“Metro”) which is struggling. Universal Apps run on the tablet side. The desktop side has different frameworks and different capabilities, making it in effect a separate platform for developers.

“One Windows” then is not coming soon. But we might be settling on two.

Farewell Nokia X? Not quite, but the signs are clear as Microsoft bets on Universal Apps

I could never make sense of Nokia X, the Android-with-Microsoft-services device which Nokia announced less than a year ago at Mobile World Congress in Barcelona:

If Nokia X is a worse Android than Android, and a worse Windows Phone than Windows Phone, what is the point of it and why will anyone buy?

Nokia X is Android without Google’s Play Store; if Amazon struggles to persuade developers to port apps to Kindle Fire (another non-Google Android) then the task for Nokia, lacking Amazon’s ecosystem, is even harder. Now, following Microsoft’s acquisition, it makes even less sense: how can Microsoft simultaneously evangelise both Windows Phone and an Android fork with its own incompatible platform and store?

Nokia X was meant to be a smartphone at feature phone prices, or something like that, but since Windows Phone runs well on low-end hardware, that argument does not stand up either.

Now Nokia X is all but dead. Microsoft CEO Satya Nadella:

Second, we are working to integrate the Nokia Devices and Services teams into Microsoft. We will realize the synergies to which we committed when we announced the acquisition last September. The first-party phone portfolio will align to Microsoft’s strategic direction. To win in the higher price tiers, we will focus on breakthrough innovation that expresses and enlivens Microsoft’s digital work and digital life experiences. In addition, we plan to shift select Nokia X product designs to become Lumia products running Windows. This builds on our success in the affordable smartphone space and aligns with our focus on Windows Universal Apps.

and former Nokia CEO Stephen Elop, now in charge of Microsoft devices:

In addition to the portfolio already planned, we plan to deliver additional lower-cost Lumia devices by shifting select future Nokia X designs and products to Windows Phone devices. We expect to make this shift immediately while continuing to sell and support existing Nokia X products.

Nadella has also announced a huge round of job cuts, mainly of former Nokia employees: around 12,500, which is roughly 50% of those who came over. Nokia’s mobile phone business is not all Windows Phone (Lumia) and Nokia X. In addition, it sells really low-end phones, the kind you can pick up for £10 at a supermarket, and the Asha range of budget smartphones. Does Microsoft have any interest in Asha? Elop does not even mention it.

It seems then that Microsoft is focusing on what it considers strategic: Windows Phone at every price point, and Universal Apps which let developers create apps for both Windows Phone and full Windows (8 and higher) from a single code base.

Microsoft does also intend to support Android and iOS with apps, but has no need to make its own Android phones in order to do so.

My view is that Nokia did a good job with Windows Phone within the constraints of a difficult market; not perfect (the early Lumia 800 devices were buggy, for example), but better by far than Microsoft managed with any other OEM partner. I currently use a Lumia 1020 which I regard as something of a classic, with its excellent camera and general high quality.

It seems to me reassuring (from a Windows Phone perspective) that Microsoft is keeping Windows Phone engineering in Finland:

Our phone engineering efforts are expected to be concentrated in Salo, Finland (for future, high-end Lumia products) and Tampere, Finland (for more affordable devices). We plan to develop the supporting technologies in both locations.

says Elop, who also notes that Surface and Xbox teams will be little touched by today’s announcements.

Incidentally, I wrote recently about Universal Apps here (free registration required) and expressed the view that Microsoft cannot afford yet another abrupt shift in its developer platform; the continuing support for Universal Apps in the Nadella era makes that less likely.

Speculating a little, it also would not surprise me if Universal Apps were extended via Xamarin support to include Android and iOS – now that is really a universal app.

Will Microsoft add some kind of Android support to Windows Phone itself? This is rumoured, though it could be counter-productive in terms of winning over developers: why bother to create a Windows Phone app if your Android app will kind-of run?

Further clarification of Microsoft’s strategy is promised in the public earnings call on July 22nd.

A note on Azure storage and downloading large files

I have written a simple ASP.NET MVC application for upload and download of files to/from Azure storage.

Getting large file upload to work was the first exercise, described here. That is working well; but what about download?

If a file in Azure storage is public, you can simply serve a URL to it. If it is not public though, you have a couple of choices:

1. Download the file under application control, by writing to Response.OutputStream or using a FileResult action.

2. Issue a Shared Access Signature (SAS) to the client which enables it to retrieve the file directly from Azure storage. The SAS is sent as an URL argument which tells Azure storage that the request is authorised. The browser downloads the file directly, so it makes no difference to your web application if the file is large.

Note that if you use the first option, it will not work with large files if you simply call DownloadToStream or similar:

container.GetBlockBlobReference(FileName).DownloadToStream(Response.OutputStream);

Why not? Well, the way this code works is that it downloads the large file to the web server, then sends it to the browser. What if your large file is 5GB? The browser will wait a long time for the first byte to be served (giving the user an unresponsive page); but before that happens, the web application will probably throw an exception because it does not like downloading such a large file.

This means the SAS option is a good one, though note that you have to specify an expiry time which could cause problems for users on a slow connection.
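
A sketch of generating the SAS, using the Azure storage client library (container is a CloudBlobContainer as in the snippet above; the two-hour expiry is an arbitrary choice):

// Build a time-limited, read-only URL for a private blob
CloudBlockBlob blob = container.GetBlockBlobReference(FileName);

var policy = new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(2)
};

// The SAS is a query string; append it to the blob URL and hand that to the browser
string sas = blob.GetSharedAccessSignature(policy);
string downloadUrl = blob.Uri + sas;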

Another option is to serve the file in chunks. Use CloudBlockBlob.DownloadRangeToStream to write to Response.OutputStream in a loop until the download is complete. Call Response.Flush() after each chunk to send the chunk to the browser immediately.
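
A sketch of that loop, under the same assumptions as the snippet above (the 4MB chunk size is an arbitrary choice):

CloudBlockBlob blob = container.GetBlockBlobReference(FileName);
blob.FetchAttributes(); // populates blob.Properties.Length

Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=" + FileName);
Response.AddHeader("Content-Length", blob.Properties.Length.ToString());

long offset = 0;
long remaining = blob.Properties.Length;
const long chunkSize = 4 * 1024 * 1024;

while (remaining > 0 && Response.IsClientConnected)
{
    long length = Math.Min(chunkSize, remaining);
    blob.DownloadRangeToStream(Response.OutputStream, offset, length);
    Response.Flush(); // send this chunk to the browser straight away
    offset += length;
    remaining -= length;
}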

This gives the user a nice responsive download experience complete with a cancel option as provided by the browser, and does not crash the application on the server. It seems to me a reasonable approach if the web application is also hosted on Azure and therefore has a fast connection to Azure storage.

What about resuming a failed download? The SAS approach should work as Azure supports it. You could also support this in your app with some additional work, since resuming means reading the Range header in a GET request. I have not tried doing this but you might find some clues here.

Microsoft StorSimple brings hybrid cloud storage to the enterprise, but what about the rest of us?

Microsoft has released details of its StorSimple 8000 Series, the first major new release since it acquired the hybrid cloud storage appliance business back in late 2012.

I first came across StorSimple at what proved to be the last MMS (Microsoft Management Summit) event last year. The concept is brilliant: present the network with infinitely expandable storage (in reality limited to 100TB – 500TB depending on model), storing the new and hot data locally for fast performance, and seamlessly migrating cold (ie rarely used) data to cloud storage. The appliance includes SSD as well as hard drive storage so you get a magical combination of low latency and huge capacity. Storage is presented using iSCSI. Data deduplication and compression increase effective capacity, and cloud connectivity also enables value-add services including cloud snapshots and disaster recovery.

The two new models are the 8100 and the 8600:

Specification | 8100 | 8600
Usable local capacity | 15TB | 40TB
Usable SSD capacity | 800GB | 2TB
Effective local capacity | 15-75TB | 40-200TB
Maximum capacity including cloud storage | 200TB | 500TB
Price | $100,000 | $170,000

Of course there is more to the new models than bumped-up specs. The earlier StorSimple models supported both Amazon S3 (Simple Storage Service) and Microsoft Azure; the new models support only Azure blob storage. VMware VAAI (vStorage APIs for Array Integration) is still supported.

On the positive side, StorSimple is now backed by additional Azure services – note that these only work with the new 8000 series models, not with existing appliances.

The Azure StorSimple Manager lets you manage any number of StorSimple appliances from the Azure portal – note this is in the old Azure portal, not the new preview portal, which intrigues me.

Backup snapshots mean you can go back in time in the event of corrupted or mistakenly deleted data.

The Azure StorSimple Virtual Appliance has several roles. You can use it as a kind of reverse StorSimple; the virtual device is created in Azure, at which point you can use it on-premise in the same way as other StorSimple-backed storage. Data is uploaded to Azure automatically. An advantage of this approach is that if the on-premise StorSimple becomes unavailable, you can recreate the disk volume based on the same virtual device and point an application at it for near-instant recovery. Only a 5MB file needs to be downloaded to make all the data available; the actual data is then downloaded on demand. This is faster than other forms of recovery which rely on recovering all the data before applications can resume.

The alarming check box “I understand that Microsoft can access the data stored on my virtual device” was explained by Microsoft technical product manager Megan Liese as meaning simply that data is in Azure rather than on-premise, but I have not seen similar warnings for other Azure data services, which is odd. Further to this topic, another journalist asked Marc Farley, also on the StorSimple team, whether you can mark data in standard StorSimple volumes not to be copied to Azure, for compliance or security reasons. “Not right now” was the answer, though it sounds as if this is under consideration. I am not sure how this would work within a volume, since it would break backup and data recovery, but it would make sense to be able to specify volumes that must remain always on-premise.

All data transfer between Azure and on-premise is encrypted, and the data is also encrypted at rest, using a service data encryption key which according to Farley is not stored or accessible by Microsoft.

Another way to use a virtual appliance is to make a clone of on-premise data available, for tasks such as analysing historical data. The clone volume is based on the backup snapshot you select, and is disconnected from the live volume on which it is based.

StorSimple uses Azure blob storage but the pricing structure is different from that of standard blob storage; unfortunately I do not have details of this. You can access the data only through StorSimple volumes, since the data is stored using internal data objects that are StorSimple-specific. Data stored in Azure is redundant using the usual Azure “three copies” principle; I believe this includes geo-redundancy though this may be a customer option.

StorSimple appliances are made by Xyratex (which is being acquired by Seagate) and you can find specifications and price details on the Seagate StorSimple site, though we were also told that customers should contact their Microsoft account manager for details of complete packages. I also recommend the semi-official blog by a Microsoft technical solutions professional based in Sydney which has a ton of detailed information here.

StorSimple makes huge sense, but with 6 figure pricing this is an enterprise-only solution. How would it be, I muse, if the StorSimple software were adapted to run as a Windows service rather than only in an appliance, so that you could create volumes in Windows Server that use similar techniques to offer local storage that expands seamlessly into Azure? That also makes sense to me, though when I asked at a Microsoft Azure workshop about the possibility I was rewarded with blank looks; but who knows, they may know more than is currently being revealed.

Amazon Mobile SDK adds login, data sync, analytics for iOS and Android apps

Amazon Web Services has announced an updated AWS Mobile SDK, which provides libraries for mobile apps using Amazon’s cloud services as a back end. Version 2.0 of the SDK, supporting iOS and Android (including Amazon Fire), is now in preview, adding several new features:

Amazon Cognito lets users log in with Amazon, Facebook or Google and then synchronize data across devices. The data is limited to 20MB, stored as up to 20 datasets of key/value pairs. All data is stored as strings, though binary data can be encoded as a base64 string up to 1MB. It seems geared towards things like configuration or game state data, rather than documents.

Amazon Mobile Analytics collects data on how users are engaging with your app. You can get data on metrics including daily and monthly active users, session count and average daily sessions per active user, revenue per active user, retention statistics, and custom events defined in your app.

Other services in the SDK, but which were already supported in version 1.7, include push messaging for Apple, Google, Fire OS and Windows devices; Amazon S3 storage (suitable for any amount of data, unlike the Cognito sync service), SimpleDB and DynamoDB NoSQL database services, email service, and SQS (Simple Queue Service) messaging.

Windows Phone developers or those using cross-platform tools to build mobile apps cannot use Amazon’s mobile SDK, though all the services are published as a REST API so you could use it from languages other than Objective-C or Java by writing your own wrapper.

The list of supported identity providers for Cognito is short though, with notable exclusions being Microsoft accounts and Azure Active Directory. Getting round this is harder since the federated identity services are baked into the server-side API.
