Category Archives: cloud computing

The Microsoft Azure VM role and why you might not want to use it

I’ve spent the morning talking to Microsoft’s Steve Plank – whose blog you should follow if you have an interest in Azure – about Azure roles and virtual machines, among other things.

Windows Azure applications are deployed to one of three roles, where each role is in fact a Windows Server virtual machine instance. The three roles are the web role for IIS (Internet Information Services) applications, the worker role for general applications, and newly announced at the recent PDC, the VM role, which you can configure any way you like. The normal route to deploying a VM role is to build a VM on your local system and upload it, though in future you will be able to configure and deploy a VM role entirely online.

It’s obvious that the VM role is the most flexible. You will even be able to use 64-bit Windows Server 2003 if necessary. However, there is a critical distinction between the VM role and the other two. With the web and worker roles, Microsoft will patch and update the operating system for you, but with the VM role it is up to you.

That does not sound too bad, but it gets worse. To understand why, you need to think in terms of a golden image for each role, that is stored somewhere safe in Azure and gets deployed to your instance as required.

In the case of the web and worker roles, that golden image is constantly updated as the system gets patched. In addition, Microsoft takes responsibility for backing up the system state of your instance and restoring it if necessary.

In the case of the VM role, the golden image is formed by your upload and only changes if you update it.

The reason this is important is that Azure might at any time replace your running VM (whichever role it is running) with the golden image. For example, if the VM crashes, or the machine hosting it suffers a power failure, then it will be restarted from the golden image.

Now imagine that Windows server needs an emergency patch because of a newly-discovered security issue. If you use the web or worker role, Microsoft takes responsibility for applying it. If you use the VM role, you have to make sure it is applied not only to the running VM, but also to the golden image. Otherwise, you might apply the patch, and then Azure might replace the VM with the unpatched golden image.

Therefore, to maintain a VM role properly you need to keep a local copy patched and refresh the uploaded golden image with your local copy, as well as updating the running instance. Apparently there is a differential upload, to reduce the upload time.
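To make that maintenance discipline concrete, here is a sketch in Python – none of these objects correspond to a real Azure API, and the patch identifier is invented – of the ordering a VM role update needs. The point is that a patch must land in three places; miss the golden image and a crash recovery silently re-deploys the unpatched state.

```python
class Image:
    """Stand-in for a VM image or running instance (illustration only)."""
    def __init__(self):
        self.patches = set()
    def apply(self, patch):
        self.patches.add(patch)
    def sync_from(self, other):
        self.patches = set(other.patches)

def patch_vm_role(patch, local_image, golden_image, running_instance):
    local_image.apply(patch)             # 1. patch the local master copy
    golden_image.sync_from(local_image)  # 2. refresh the uploaded golden image
                                         #    (the differential upload helps here)
    running_instance.apply(patch)        # 3. patch the live instance too

def is_safe_after_reimage(golden_image, patch):
    # If the VM crashes, Azure restores from the golden image; the patch
    # survives only if it made it into that image.
    return patch in golden_image.patches
```

Patching only the running instance leaves `is_safe_after_reimage` false, which is exactly the trap described above.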

The same logic applies to any other changes you make to the VM. It is actually more complex than managing VMs in other scenarios, such as the Linux VM on which this blog is hosted.

Another feature which all Azure developers must understand is that you cannot safely store data on your Azure instance, whichever role it is running. Microsoft does not guarantee the safety of this data, and it might get zapped if, for example, the VM crashes and gets reverted to the golden image. You must store data in Azure database or blob storage instead.
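A toy illustration of the same point – a plain dict stands in for Azure blob or table storage, and nothing here is real Azure code: anything kept only on the instance's disk vanishes on a revert, while state written to durable storage survives.

```python
class AzureInstance:
    """Illustrative only: an instance whose local disk can be reverted."""
    def __init__(self, durable_store):
        self.local_disk = {}          # wiped when reverted to the golden image
        self.durable = durable_store  # lives outside the instance

    def save(self, key, value):
        self.local_disk[key] = value  # unsafe on its own
        self.durable[key] = value     # safe: survives a re-image

    def reimage(self):
        self.local_disk = {}          # the golden image has none of your data
```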

This also impacts the extent to which you can customize the web and worker VMs. Microsoft will be allowing full administrative access to the VMs if you require it, but it is no good making extensive changes to an individual instance, since it could get reverted back to the golden image. The guidance is that if manual changes take more than 5 minutes to do, you are better off using the VM role.

A further implication is that you cannot realistically use an Azure VM role to run Active Directory, since Active Directory does not take kindly to being reverted to an earlier state. Plank says that third-parties may come up with solutions that involve persisting Active Directory data to Azure storage.

Although I’ve talked about golden images above, I’m not sure exactly how Azure implements them. However, if I have understood Plank correctly, it is conceptually accurate.

The bottom line is that the best scenario is to live with a standard Azure web or worker role, as configured by you and by Azure when you created it. The VM role is a compromise that carries a significant additional administrative burden.

Now you can rent GPU computing from Amazon

I wrote back in September about why programming the GPU is going mainstream. That’s even more the case today, with Amazon’s announcement of a Cluster GPU instance for the Elastic Compute Cloud. It is also a vote of confidence for NVIDIA’s CUDA architecture. Each Cluster GPU instance has two NVIDIA Tesla M2050 GPUs installed and costs $2.10 per hour. If one GPU instance is not enough, you can use up to 8 by default, with more available on request.
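For a rough sense of the economics, a few lines of Python based on the published price – $2.10 per instance-hour, two GPUs per instance, and a default cap of eight instances:

```python
RATE_PER_INSTANCE_HOUR = 2.10   # Amazon's published Cluster GPU price
GPUS_PER_INSTANCE = 2           # two Tesla M2050s per instance
MAX_DEFAULT_INSTANCES = 8       # more requires a request to Amazon

def cluster_cost(instances, hours):
    """Cost in dollars of running a GPU cluster for a given number of hours."""
    assert instances <= MAX_DEFAULT_INSTANCES, "beyond the default limit"
    return instances * hours * RATE_PER_INSTANCE_HOUR

# The full default cluster (16 GPUs) for a 3-hour job:
# cluster_cost(8, 3) -> 50.40 dollars
```

Supercomputer-class hardware for the price of a restaurant meal, provided your job really does finish in hours.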

GPU programming in the cloud makes sense in cases where you need the performance of a super-computer, but not very often. It could also enable some powerful mobile applications, maybe in financial analysis, or image manipulation, where you use a mobile device to input data and view the results, but cloud processing to do the heavy lifting.

One of the ideas I discussed with someone from Adobe at the NVIDIA GPU conference was to integrate a cloud processing service with Photoshop, so you could send an image to the cloud, have some transformative magic done, and receive the processed image back.

The snag with this approach is that in many cases you have to shift a lot of data back and forth, which means you need a lot of bandwidth available before it makes sense. Still, Amazon has now provided the infrastructure to make processing as a service easy to offer. It is now over to the rest of us to find interesting ways to use it.

UK business applications stagger towards the cloud

I spent today evaluating several competing vertical applications for a small business working in a particular niche – I am not going to identify it or the vendors involved. The market is formed by a number of companies which have been serving the market for some years, and which have Windows applications born in the desktop era and still being maintained and enhanced, plus some newer companies which have entered the market more recently with web-based solutions.

Several things interested me. The desktop applications seemed to suffer from all the bad habits of application development before design for usability became fashionable, and I saw forms with a myriad of fields and controls, each one no doubt satisfying a feature request, but forming a confusing and ugly user interface when put together. The web applications were not great, but seemed more usable, because a web UI encourages a simpler page-based approach.

Next, I noticed that the companies providing desktop applications talking to on-premise servers had found a significant number of their customers asking for a web-hosted option, but were having difficulty fulfilling the request. Typically they adopted a remote application approach using something like Citrix XenApp, so that they could continue to use their desktop software. In this type of solution, a desktop application runs on a remote machine but its user interface is displayed on the user’s desktop. It is a clever solution, but it is really a desktop/web hybrid and tends to be less convenient than a true web application. I felt that they needed to discard their desktop legacy and start again, but of course that is easier said than done when you have an existing application widely deployed, and limited development resources.

Even so, my instinct is to be wary of vendors who call desktop applications served by XenApp or the like cloud computing.

Finally, there was friction around integrating with Outlook and Exchange. Most users have Microsoft Office and use Outlook and Exchange for email, calendar and tasks. The vendors with web applications found their users demanding integration, but it is not easy to do this seamlessly and we saw a number of imperfect attempts at synchronisation. The vendors with desktop applications had an easier task, except when these were repurposed as remote applications on a hosted service. In that scenario the vendors insisted that customers also use their hosted Exchange, so they could make it work. In other words, customers have to build almost their entire IT infrastructure around the requirements of this single application.

It was all rather unsatisfactory. The move towards the cloud is real, but in this particular small industry sector it seems slow and painful.

The cloud permeates Microsoft’s business more than we may realise

I’m in the habit of summarising Microsoft’s financial results in a simple table. Here is how it looks for the recently announced figures.

Quarter ending September 30 2010 vs quarter ending September 30 2009, $millions

Segment                     Revenue   Change   Profit   Change
Client (Windows + Live)        4785     1905     3323     1840
Server and Tools               3959      409     1630      393
Online                          527       40     -560      -83
Business (Office)              5126      612     3388      561
Entertainment and devices      1795      383      382      122

The Windows figures are excellent, mostly reflecting Microsoft’s success in delivering a successor to Windows XP that is good enough to drive upgrades.

I’m more impressed though with the Server and tools performance – which I assume is mostly Server – though noting that it now includes Windows Azure. Microsoft does not break out the Azure figures but said that it grew 40% over the previous quarter; not especially impressive given that Azure has not been out long and will have grown from a small base.

The Office figures, also good, include Sharepoint, Exchange and BPOS (Business Productivity Online Suite), which is to become Office 365. Microsoft reported a “tripled number of business customers using cloud services.”

Online, essentially the search and advertising business, is poor as ever, though Microsoft says Bing gained market share in the USA. Entertainment and devices grew despite poor sales for Windows Mobile, caught between the decline of the old mobile OS and the launch of Windows Phone 7.

What can we conclude about the health of the company? The simple fact is that despite Apple, Google, and mis-steps in Windows, Mobile, and online, Microsoft is still a powerful money-making machine and performing well in many parts of its business. In my experience the company actually does a poor job of communicating its achievements; the rather dull keynote at TechEd Berlin yesterday was a case in point.

Of course Microsoft’s business is still largely dependent on an on-premise software model that many of us feel will inevitably decline. Still, my other reflection on these figures is that the cloud permeates Microsoft’s business more than a casual glance reveals.

The “Online” business is mainly Bing and advertising as far as I can tell; and despite CTO Ray Ozzie telling us back in 2005 of the importance of services financed by advertising, that business revolution has not come to pass as he imagined. I assume that Windows Live is no more successful than Online.

What is more important is that we are seeing Server and tools growing Azure and cloud-hosted virtualisation business, and Office growing hosted Exchange and SharePoint business. I’d expect both businesses to continue to grow, as Microsoft finally starts helping both itself and its customers with cloud migration.

That said, since the hosted business is not separated from the on-premise business, and since some is in the hands of partners, it is hard to judge its real significance.

Reflections on Microsoft PDC 2010

I’m in Seattle airport waiting to head home – so here are some quick reflections on Microsoft’s Professional Developers Conference 2010.

Let’s start with the content. There was a clear focus on two things: Windows Azure, and Windows Phone 7.

On the Azure front, Microsoft’s cloud platform impressed. Features are being added rapidly, and it looks solid and interesting. The announcements at PDC mean that Azure provides pretty much the complete Windows Server platform, should you want it. You will get elevated privileges for complete control over a server instance, and full IIS functionality including support for multiple web sites and the ability to install modules. You will also be able to remote desktop into your Azure servers, which is going to make Windows admins feel more comfortable with Azure.

The new virtual machine role is also a big deal, even though in some ways it goes against the multi-tenanted philosophy by leaving the customer responsible for patches and updates. Businesses with existing virtual servers can simply move them to Azure if they no longer wish to run their own hardware. There are also existing tools for migrating physical servers to virtual.

I asked Bob Muglia, president of server and tools at Microsoft, whether having all these VMs maintained by customers and potentially compromised with malware posed a security threat to the platform. He assured me that they are fully isolated, and that the main danger is to the customer who might consume unexpected amounts of bandwidth.

Simply running on an Azure VM does not take full advantage of the platform though. It makes more sense to hook into Azure services such as SQL Azure, or the non-relational storage services, and deploy to Azure web or worker roles where Microsoft takes care of maintenance. There is also a range of middleware services called AppFabric; see here for a few notes on these.

If there was one gap in the Azure story at PDC, it was a lack of partner announcements. Microsoft says there are more than 20,000 applications running on Azure, but we did not hear much about them, or about notable large customers embracing Azure. There is still a lot of resistance to the cloud among customers. I asked some attendees at lunch whether they expect to use Azure; the answer was “no, we have our own datacenter”.

I think the partner announcements will come. Microsoft is firmly behind Azure now, and it makes sense for its customers. I expect Azure to succeed; but whether it will do well enough to counter-balance the cost to Microsoft of migration away from on-premise servers is an open question.

Alongside Azure, though hardly mentioned at PDC, is the hosted application business originally called BPOS and now called Office 365. This is not currently hosted on Azure, though Muglia told me that most of it will in time move there. There are some potential synergies here, for example in Azure workflow applications that handle SharePoint forms or documents.

Microsoft’s business is primarily based on partners selling Windows hardware and licenses for on-premise or client software. Another open question is how easily the company can re-orient itself to be a cloud platform and services company. It is a massive shift.

What about Windows Phone? Microsoft has some problems here, and they are not primarily to do with the phone itself, which is decent. There are a few issues over the design of the launch devices, and features that are lacking initially. Further, while the Silverlight and XNA SDK forms a strong development platform, there is a need for a native code SDK and I expect this will follow at some point.

The key issue though is that outside the Microsoft bubble there is not much interest in the phone. Google Android meets the needs of the OEM hardware and operator partners, being open and easily customised. Apple owns the market for high-end devices with the design quality and ease of use that comes from single-vendor control of the whole stack. The momentum behind these platforms is such that it will not be easy for Microsoft to grab much market share, or attention from third-party app developers. It deserves to do well; but I will not be surprised if it under-performs relative to its quality.

There was also some good material to be found on the PDC sidelines, as it were. Anders Hejlsberg presented on new asynchronous features coming in C# 5.0, which look like a breakthrough in making concurrent programming safer and easier. He also showed a bit of Microsoft’s work on compiler as a service, which has huge potential. Patrick Smacchia has an enthusiastic report on the C# presentation. Herb Sutter gave a brilliant talk on lambdas.

The PDC site lets you stream pretty much all the sessions and seems to work very well. The player application is written in Silverlight. Note that there are twice as many sessions as appear in the schedule, since many were pre-recorded and only show in the full session list.

Why did Microsoft run such a small event, with only around 1000 attendees? I asked a couple of people about this; the answer seems to be partly as a cost-saving measure – it is much cheaper to run an event on the Microsoft campus than to hire an external venue and pay transport and expenses for all the speakers and staff – and partly to emphasise the virtual aspect of PDC, with a global audience tuning in.

This does not altogether make sense to me. Microsoft is still generating a ton of cash, as we heard in the earnings call at the event, and PDC is a key opportunity to market its platform to developers and influencers, so it should not worry too much about the cost. Besides, you can do virtual as well as physical; they are not alternatives. You get more engagement from people who are actually present.

One of the features of the player is that you see how many are currently streaming the content. I tuned into Mark Russinovich’s excellent session on Azure – he says he has “drunk the cloud kool-aid” – while it was being streamed live, and was surprised to see only around 300 virtual attendees. If that figure is accurate, it is disappointing, though I am sure there will be thousands of further views after the event.

Finally, what about all the IE9/HTML 5 vs Silverlight discussion generated at PDC? Clearly Microsoft’s messaging went badly awry here, and frankly the company has only itself to blame. Having made a huge noise about how IE9 forms a great client for web applications, standards-based and integrated with Windows, it cannot be surprised that people question what sort of role is envisaged for Silverlight. It did not help that a planned session on Silverlight futures was apparently cancelled, probably for innocent reasons such as not being quite ready to show, but this increased speculation that Silverlight is now being downplayed.

Microsoft chose to say nothing on the subject, other than some remarks by Bob Muglia to freelance journalist Mary-Jo Foley which seem to confirm that yes, Silverlight is no longer Microsoft’s key technology for cross-platform web applications.

If that was not quite the message Microsoft intended, then why not clarify the matter to press, myself included, as we sat in the press room on Microsoft’s campus?

My take is that while Silverlight is by no means dead, it seems destined for a lesser role than was once envisaged – a shame, as it is an excellent cross-platform .NET client.

AppFabric – Microsoft’s new middleware

I took the opportunity here at Microsoft PDC to find out what Microsoft means by AppFabric. Is it a product? a brand? a platform?

The explanation I was given is that AppFabric is Microsoft’s middleware brand. You will normally see the word in conjunction with something more specific, as in “AppFabric Caching” (once known as Project Velocity) or “AppFabric Composition Runtime” (once known as Project Dublin); a chart shown at a PDC AppFabric session set out the various components.

Of course if you add in the Windows Azure prefix you get a typical Microsoft mouthful such as “Windows Azure AppFabric Access Control Service.”

Various AppFabric pieces run on Microsoft’s on-premise servers, though the emphasis here at PDC is on AppFabric as part of the Windows Azure cloud platform. On the AppFabric stand in the PDC exhibition room, I was told that AppFabric in Azure is now likely to get new features ahead of the on-premise versions. The interesting reflection is that cloud customers may be getting a stronger and more up-to-date platform than those on traditional on-premise servers.

Microsoft PDC big on Azure, quiet on Silverlight

I’m at Microsoft PDC in Seattle. The keynote, introduced by CEO Steve Ballmer, started with a recap of the company’s success with Windows 7 – 240 million sold, we were told, and adoption plans among 88% of businesses – and showing off Windows Phone 7 (all attendees will receive a device) and Internet Explorer 9.

IE9 guy Dean Hachamovitch demonstrated the new browser’s hardware acceleration, and made an intriguing comment. When highlighting IE9’s embrace of web standards, he noted that “accelerating only pieces of the browser holds back the web.” It sounded like a jab at plug-ins, but what about Microsoft’s own plug-in, Silverlight? A good question. You could put this together with Ballmer’s comment that “We’ve tried to make the web feel more like native applications” as evidence that Microsoft sees HTML 5 rather than Silverlight as its primary web application platform.

Then again you can argue that it just happens Microsoft had nothing to say about Silverlight, other than in the context of Windows Phone 7 development, and that its turn will come. The new Azure portal is actually built in Silverlight.

The messaging is tricky, and I found it intriguing, especially coming after the Adobe MAX conference, where there were public sessions on Flash vs HTML and a day two keynote emphasising the importance of both. All of which shows that Adobe has a tricky messaging problem as well; but it is at least addressing it, whereas Microsoft so far is not.

The keynote moved on to Windows Azure, and this is where the real news was centered. Bob Muglia, president of the Server and Tools business, gave a host of announcements on the subject. Azure is getting a Virtual Machine role, which will allow you to upload server images to run on Microsoft’s cloud platform, and to create new virtual machines with full control over how they are configured. Server 2008 R2 is the only supported OS initially, but Server 2003 will follow.

Remote Desktop is also coming to Azure, which will mean instant familiarity for Windows admins and developers.

Another key announcement was Windows Azure Marketplace, where third parties will be able to sell “building block components, training, services, and finished services and applications.” This includes DataMarket, the new name for the Dallas project, which is for delivering live data as a service using the OData protocol. An OData library has been added to the Windows Phone 7 SDK, making the two a natural fit.
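For those unfamiliar with OData, queries are just URLs built from conventional query options. A hedged sketch – the service address is invented, but `$filter`, `$top` and `$orderby` are the standard OData system query options:

```python
def odata_query(service_root, entity_set, **options):
    """Build an OData query URL.

    Keyword args like filter="Year eq 2010" or top=3 become the standard
    OData query options $filter, $top, etc.
    (URL-encoding of spaces is omitted here for clarity.)
    """
    query = "&".join("$%s=%s" % (k, v) for k, v in sorted(options.items()))
    url = "%s/%s" % (service_root.rstrip("/"), entity_set)
    return (url + "?" + query) if query else url

# e.g. odata_query("https://api.example.com/data.svc", "Stats",
#                  filter="Year eq 2010", top=3)
```

The appeal for Windows Phone is that a generated client library turns URLs like these into typed objects.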

Microsoft is also migrating Team Foundation Server (TFS) to Azure, interesting both as a case study in moving a complex application, and as a future option for development teams who would rather not wrestle with the complexities of deploying this product.

Next came Windows Azure AppFabric Access Control, which despite its boring name has huge potential. This is about federated identity – both with Active Directory and other identity services. In the example we saw, Facebook was used as an identity provider alongside Microsoft’s own Active Directory, and users got different access rights according to the login they used.
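Conceptually – and this is my illustration, not Microsoft code; the providers and rules are invented – the application trusts a claims transformation step rather than each identity provider directly:

```python
ACCESS_RULES = {
    # (identity provider, incoming claim) -> application role
    ("ActiveDirectory", "group:Staff"):   "editor",
    ("Facebook",        "authenticated"): "viewer",
}

def map_claims(provider, claims):
    """Return the application roles granted for claims from one provider."""
    return {role for (p, claim), role in ACCESS_RULES.items()
            if p == provider and claim in claims}
```

The application only ever sees roles like “editor” or “viewer”; which provider vouched for the user is handled one layer down, which is what makes adding a new provider cheap.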

In another guise, Azure AppFabric – among the most confusing Microsoft product names ever – is a platform for hosting composite workflow applications.

Java support is improving and Microsoft says that you will be able to run the Java environment of your choice from 2011.

Finally, there is a new “Extra small” option for Azure instances, aimed at developers, priced at $0.05 per compute hour. This is meant to make the platform more affordable for small developers, though if you calculate the cost over a year it still amounts to over $400; not too much perhaps, but still significant.
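The arithmetic, for the record:

```python
# The "Extra small" Azure instance at $0.05 per compute hour,
# running continuously for a year.
HOURS_PER_YEAR = 24 * 365            # 8760
annual_cost = 0.05 * HOURS_PER_YEAR  # 438.0 dollars, i.e. "over $400"
```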

Attendees were left in no doubt about Microsoft’s commitment to Azure. As for Silverlight, watch this space.

Microsoft unveils Office 365, wins vs Google in California. What are the implications for its future?

Today Microsoft announced Office 365, though it is not really a new product. Rather, it pulls together a bunch of existing ones: Business Productivity Online Suite (BPOS), Office Live Small Business, and Live@edu. It also impacts the desktop Office business, in that with at least some varieties of Office 365 subscriptions, users get the right to download and install Office 2010 Pro Plus edition.

This rebranding is a smart move. I have long been mystified by the myriad brands Microsoft uses for its online offerings. I hope this will all integrate nicely with the new Small Business Server “Aurora”, a forthcoming version of SBS designed to bridge the cloud and the local network. If it does, this will be attractive for small businesses – who will pay $6.00 per user per month, we were told today – as well as for larger organisations.

Enterprises will pay between $2.00 and $27.00 per user depending on which services they buy, and can get extra features such as unlimited space for email archiving.

I also find it interesting that Microsoft has won what sounds like a bitter battle with Google for the migration of the State of California to online services.

Why would anyone choose Microsoft rather than Google for cloud services? Google was born in the web era, has no desktop legacy weighing it down, has helped to drive browser standards forward with HTML 5 and lightning-fast JavaScript, promotes open standards, and has a great free offering as well as subscriptions. Further, with Android Google has a fast-growing mobile platform which it can integrate with its services.

No doubt Microsoft can make a case for its cloud offerings, but I suspect a lot of it is the power of the familiar. If you already run on Office documents and Exchange email, moving to online versions of the same applications will seem a smoother transition. There is also the document format issue: you can import Office documents into Google Apps, but not with 100% fidelity, and the online editors are basic compared with Microsoft Office.

When Microsoft seemingly had no idea what the cloud was about, it was easier for Google to win customers. Now Microsoft is slowly but surely getting the idea, and the value of its long-standing hold over business computing is being felt.

Google is also winning customers, of course, and even if you accept that Office 365 is the future for many existing Microsoft-platform businesses – and, Microsoft will hope, some new ones – there are still a host of interesting questions about the company’s future.

One is how the numbers stack up. Can Microsoft as cloud provider be as profitable as Microsoft has been with the old locally installed model?

Second, what are the implications for its partners? In today’s press announcement we were told that customers migrating to BPOS report a 10%-50% cost saving. The implication is that these companies are spending less money on IT than before – so who is losing out? It could be Microsoft, it could be hardware suppliers, it could be integration partners. Microsoft does leave room for partners to profit from Office 365 migrations, presuming it follows the BPOS model, but partners could still be worse off.

For example, if support requests diminish because cloud services are more reliable, and if Microsoft handles some support directly, there is less opportunity for partners’ support services.

Finally, what are the implications for developers? The main one is this. Organisations that migrate to online services will have little enthusiasm for locally installed custom applications, and will also want to reduce their dependence on local servers. In other words, custom applications will also need to live in the cloud.

Ray Ozzie no longer to be Microsoft’s Chief Software Architect

A press release, in the form of a memo from CEO Steve Ballmer, tells us that Ray Ozzie is to step down from his role as Chief Software Architect. He is not leaving the company:

Ray and I are announcing today Ray’s intention to step down from his role as chief software architect. He will remain with the company as he transitions the teams and ongoing strategic projects within his organization … Ray will be focusing his efforts in the broader area of entertainment where Microsoft has many ongoing investments.

It is possible that I have not seen the best of Ozzie. His early Internet Services Disruption memo was impressive, but the public appearances I have seen at events like PDC have been less inspiring. He championed Live Mesh, which I thought had promise but proved disappointing on further investigation, and was later merged with Live Sync, becoming a smaller initiative than was once envisaged. Ballmer says Ozzie was also responsible for “conceiving, incubating and shepherding” Windows Azure, in which case he deserves credit for what seems to be a solid platform.

Ozzie may have done great work out of public view; but my impression is that Microsoft lacks the ability to articulate its strategy effectively, with neither Ozzie nor Ballmer succeeding in this. Admittedly it is a difficult task for such a diffuse company; but it is a critical one. Ballmer says he won’t refill the CSA role, which is a shame in some ways. A gifted strategist and communicator in that role could bring the company considerable benefit.

Salesforce.com is the wrong kind of cloud says Oracle’s Larry Ellison

Oracle CEO Larry Ellison took multiple jabs at Salesforce.com in the welcome keynote at OpenWorld yesterday.

He said it was old, not fault tolerant, not elastic, and built on a bad security model since all customers share the same application. “Elastic” in this context means able to scale on demand.

Ellison was introducing Oracle’s new cloud-in-a-box, the Exalogic Elastic Cloud. This features 30 servers and 360 cores packaged in a single cabinet. It is both a hardware and software product, using Oracle’s InfiniBand networking internally for fast communication and the Oracle VM for hosting virtual machines running either Oracle Linux or Solaris. Oracle is positioning Exalogic as the ideal machine for Java applications, especially if they use the Oracle WebLogic application server, and as a natural partner for the Exadata Database Machine.

Perhaps the most interesting aspect of Exalogic is that it uses the Amazon EC2 (Elastic Compute Cloud) API. This is also used by Eucalyptus, the open source cloud infrastructure adopted by Canonical for its Ubuntu Enterprise Cloud. With these major players adopting the Amazon API, you could almost call it a standard.

Ellison’s Exalogic cloud is a private cloud, of course, and although he described it as low maintenance it is nevertheless the customer’s responsibility to provide the site, the physical security and to take responsibility for keeping it up and running. Its elasticity is also open to question. It is elastic from the perspective of an application running on the system, presuming that there is spare capacity to run up some more VMs as needed. It is not elastic if you think of it as a single powerful server that will be eye-wateringly expensive; you pay for all of it even though you might not need all of it, and if your needs grow to exceed its capacity you have to buy another one – though Ellison claimed you could run the entire Facebook web layer on just a couple of Exalogics.

In terms of elasticity, there is actually an advantage in the Salesforce.com approach. If you share a single multi-tenanted application with others, then elasticity is measured by the ability of that application to scale on demand. Behind the scenes, new servers or virtual servers may come into play, but that is not something that need concern you. The Amazon approach is more hands-on, in that you have to work out how to spin up (or down) VMs as needed. In addition, running separate application instances for each customer means a larger burden of maintenance falling on the customer – which with a private cloud might mean an internal customer – rather than on the cloud provider.
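To illustrate what “hands-on” means, here is a sketch of the sort of scale-out/scale-in decision the customer’s own tooling must make under the Amazon model. The thresholds and limits are invented; in a multi-tenanted service like Salesforce.com this logic is simply not your problem.

```python
def target_instances(current, load_per_instance,
                     low=0.3, high=0.8, min_instances=1, max_instances=8):
    """Decide how many VM instances to run, given average load (0.0-1.0).

    Scale out one at a time when load is high; scale in when it is low.
    With the Amazon model, acting on this decision (spinning VMs up or
    down) is also the customer's responsibility.
    """
    if load_per_instance > high:
        return min(current + 1, max_instances)
    if load_per_instance < low and current > min_instances:
        return current - 1
    return current
```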

In the end it is not a matter of right and wrong, more that the question of what is the best kind of cloud is multi-faceted. Do not believe all that you hear, whether the speaker is Oracle’s Ellison or Marc Benioff from Salesforce.com.

Incidentally, Salesforce.com runs on Oracle and Benioff is a former Oracle VP.

Postscript: as Dennis Howlett observes, the high capacity of Exalogic is actually a problem – he estimates that only 5% at most of Oracle’s customers could make use of such an expensive box. Oracle will address this by offering public cloud services, presumably sharing some of the same technology.