Tag Archives: microsoft

C# and .NET: good news and bad as Python rises

Two pieces of .NET news recently:

Microsoft has published a .NET Core 2.1 roadmap and says:

We intend to start shipping .NET Core 2.1 previews on a monthly basis starting this month, leading to a final release in the first half of 2018.

.NET Core is the cross-platform, open source implementation of the .NET Framework. It provides a future for C# and .NET even if Windows declines.

Then again, Stack Overflow has just published a report on the most sought-after programming languages in the UK and Ireland, based on the tags on job advertisements on its site. C# has declined to fourth place, below Python and with around half the demand of JavaScript.


To be fair, this is more about increased demand for Python, probably driven by interest in AI, than about a decline in C#. If you look at traffic on the Stack Overflow site, C# is steady while Python is growing fast.


The point that interests me, though, is the extent to which Microsoft can establish .NET Core beyond the Microsoft-platform community. Personally I like C# and would like to see it have a strong future.

There is plenty of goodness in .NET Core. Performance seems to be better in many cases, and cross-platform support is a big advantage.

That said, there is plenty of confusion too. Microsoft has three major implementations of .NET: the .NET Framework for Windows, Xamarin/Mono for cross-platform, and .NET Core for, umm, cross-platform. If you want cross-platform ASP.NET you will use .NET Core. If you want cross-platform Windows/iOS/macOS/Android, then it’s Xamarin/Mono.

The official line is that by targeting a specification (a version of .NET Standard), you can get cross-platform irrespective of the implementation. It’s still rather opaque:

The specification is not singular, but an incrementally growing and linearly versioned set of APIs. The first version of the standard establishes a baseline set of APIs. Subsequent versions add APIs and inherit APIs defined by previous versions. There is no established provision for removing APIs from the standard.

.NET Standard is not specific to any one .NET implementation, nor does it match the versioning scheme of any of those runtimes.

APIs added to any of the implementations (such as, .NET Framework, .NET Core and Mono) can be considered as candidates to add to the specification, particularly if they are thought to be fundamental in nature.
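To make that concrete, here is a minimal sketch (my own example, not from Microsoft’s documentation) of a class library targeting .NET Standard 2.0. Because it uses only APIs in the standard’s baseline, the same compiled assembly can be consumed by the .NET Framework, .NET Core or Xamarin/Mono:

// Hypothetical example. The project file declares
// <TargetFramework>netstandard2.0</TargetFramework>; nothing here is
// specific to any one .NET implementation.
using System;

namespace StandardDemo
{
    public static class Greeter
    {
        // Environment.OSVersion is part of the .NET Standard 2.0 baseline,
        // so any conforming runtime can supply it.
        public static string Greet(string name) =>
            $"Hello {name}, running on {Environment.OSVersion.Platform}";
    }
}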

Microsoft also says that plenty of code is shared between the various implementations. True, but it still strikes me that having both Xamarin/Mono and .NET Core is one cross-platform implementation too many.

Strong financial results from Microsoft as it aims for breadth of services

Microsoft reported a big quarter (in terms of revenue) for the three months ending December 31st, with revenue of $28,918 million.

What’s notable? Mainly the big jump in Microsoft’s recent success stories: year on year Office 365 up by 41%, Azure up by 98%, Dynamics 365 up by 67%.

Windows is flat/weak as you would expect, and Surface hardware is standing still. Xbox grew a bit following the launch of Xbox One X.

LinkedIn is growing: revenue of $1.3 billion and “sessions growth of over 20%” in the quarter. In the earnings webcast, Microsoft’s Amy Hood said that the LinkedIn acquisition has both performed better, and seems more strategic, now than it did at the time.

Hood also made reference to the company’s ability to up-sell cloud users to higher-margin services. “Office 365 commercial revenue increased 41 percent from installed base growth across all customer segments, and ARPU [Average Revenue per User] expansion from continued customer migration to higher value offers in the E3 and E5 workloads.”

This point is key and is the answer (from the provider’s point of view) to the lower margins implicit in moving from software to services. When Microsoft sells a licence for you to use Windows or Office, the margin is huge because reproducing the software, or providing it for download, costs almost nothing; whereas with a subscription there is significant cost to providing the service. However, the subscription has offsetting advantages, in particular the continuing interaction with the customer: it provides data which both customer and provider can mine (subject to appropriate privacy controls), and it gives the provider the opportunity to extend the relationship into new or upgraded services.

CEO Satya Nadella fielded a good question about Microsoft losing out to Sony in gaming and to Alexa and Google Home in voice devices. On gaming, Nadella referred to the PC alongside Xbox as a strategic asset. “PC gaming is a growth market,” he said, and pointed to software such as Minecraft, now on mobile devices, as giving the company broad reach. He also remarked on Azure as a gaming back end.

As for Cortana in the home (or absence from), Nadella said that the focus is on the server-side cognitive services. He also talked about voice input and control of Office 365. The key point though was that Microsoft wants to work both with its own and other voice assistant devices so it can win on services even when competitor devices are in use. “One-turn dialogs on one speaker in one home, that’s just not our vision,” he said.

Nadella made another key point in the webcast, in answer to a question about how Azure Stack (a packaged version of Azure for installation on-premises) will impact Azure. “Computing is becoming more distributed, not less distributed,” he said. IoT and sensors play a large part in this. Everything goes to the cloud but computing on the edge (the new buzzword for local processing) is important for efficiency.

It is easy to see ways in which Microsoft could stumble. The PC will decline as the number of users who need a desktop or laptop computer diminishes. Microsoft’s failure in mobile could prove costly as competitors use synergy with their own applications and cloud services to steer customers away. There are opportunities such as home automation and payments which seem closed to the company now.

Then again, strong results such as these show how the company can succeed by continuing to migrate its business users to cloud services. It remains deeply embedded in business computing.

Here is my chart summarising Microsoft’s performance:   

Quarter ending December 31st 2017 vs quarter ending December 31st 2016, $millions

Segment                               Revenue   Change   Operating income   Change
Productivity and Business Processes      8953    +1774               3337     +284
Intelligent Cloud                        7795    +1037               2832     +541
More Personal Computing                 12170     +281               2510      -51

The segments break down as:

Productivity and Business Processes: Office, Office 365, Dynamics 365 and on-premises Dynamics, LinkedIn

Intelligent Cloud: Server products, Azure cloud services

More Personal Computing: Consumer including Windows, Xbox; Bing search; Surface hardware

Office 2016 now “built out of one codebase for all platforms” says Microsoft engineer

Microsoft’s Erik Schweibert, principal engineer in the Apple Productivity Experiences group, says that with the release of Office 2016 version 16 for the Mac, the productivity suite is now “for the first time in 20 years, built out of one codebase for all platforms (Windows, Mac, iOS, Android).”


This is not the first time I have heard of substantial code-sharing between the various versions of Office, but this claim goes beyond that. Of course there is still platform-specific code, and it is worth reading the Twitter thread for more background.

“The shared code is all C++. Each platform has native code interfacing with the OS (ie, Objective C for Mac and iOS, Java for Android, C/C++ for Windows, etc),” says Schweibert.
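The shape he describes — cross-platform logic in a shared core, with a thin native shim per platform — looks roughly like the following sketch. This is purely my own illustration of the pattern (and in C#, for readability; Office’s real shared code is C++):

// Illustration only, not Office code. Each platform implements the shim
// against its native OS (Objective-C on Mac/iOS, Java on Android, Win32
// on Windows); the shared engine never calls the OS directly.
public interface IPlatformShim
{
    string DocumentsFolder { get; }
    void ShowAlert(string message);
}

public class SharedDocumentEngine
{
    private readonly IPlatformShim _platform;

    public SharedDocumentEngine(IPlatformShim platform) => _platform = platform;

    // Shared, cross-platform behaviour, expressed entirely via the shim.
    public void NotifySaved(string fileName) =>
        _platform.ShowAlert($"Saved {fileName} to {_platform.DocumentsFolder}");
}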

Does this mean that there is exact feature parity? No. The mobile versions remain cut-down, and some features remain platform-specific. “We’re not trying to provide uniform “lowest common denominator” support across all platforms so there will always be disparate feature gaps,” he says.

Even the online version of Office shares much of the code. “Web components share some code (backend server is shared C++ compiled code, front end is HTML and script)”, Schweibert says.

There is more news on what is new in Office for the Mac here. The big feature is real-time collaborative editing in Word, Excel and PowerPoint. 

What about 20 years ago? Schweibert is thinking about Word 6 for the Mac in 1994, a terrible release about which you can read more here:

“Shipping a crappy product is a lot like beating your head against the wall.  It really does feel good when you ship a great product as a follow-up, and it really does motivate you to spend some time trying to figure out how not to ship a crappy product again.

Mac Word 6.0 was a crappy product.  And, we spent some time trying to figure out how not to do that again.  In the process, we learned a few things, not the least of which was the meaning of the term “Mac-like.”

Word 6.0 for the Mac was poor for all sorts of reasons, as explained by Rick Schaut in the post above. The performance was poor, and the look and feel was too much like the Windows version – because it was the Windows code, recompiled. “Dialog boxes had "OK" and "Cancel" exactly reversed compared to the way they were in virtually every other Mac application — because that was the convention under Windows,” says one comment.

This is not the case today. Thanks to its lack of a mobile platform, Microsoft has a strong incentive to create excellent cross-platform applications.


Windows Mixed Reality: Acer headset review and Microsoft’s (lack of) content problem

Acer kindly loaned me a Windows Mixed Reality headset to review, which I have been trying over the holiday period.

First, an aside. I had a couple of sessions with Windows Mixed Reality before doing this review. One was at IFA in Berlin at the end of August 2017, where the hardware and especially the software was described as late preview. The second was at the Future Decoded event in London, early November. On both occasions, I was guided through a session either by the hardware vendor or by Microsoft. Those sessions were useful for getting a hands-on experience; but an extended review at home has given me a different understanding of the strengths and weaknesses of the product. Readers beware: those rushed “reviews” based on hands-on sessions at vendor events are poor guides to what a product is really like.

A second observation: I wandered into a few computer game shops before Christmas and Windows Mixed Reality hardware was nowhere to be seen. That is partly because PC gaming has hardly any bricks and mortar presence now. Retailers focus on console gaming, where there is still some money to be made before all the software becomes download-only. PC game sales are now mainly Steam-powered, with a little bit of competition from other download stores including GOG and Microsoft’s Windows Store. That Steam and download dominance has many implications, one of which is invisibility on the High Street.

What about those people (and there must be some) who did unwrap a Windows Mixed Reality headset on Christmas morning? Well, unless they knew exactly what they were getting and enjoy being on the bleeding edge I’m guessing they will have been a little perplexed and disappointed. The problem is not the hardware, nor even Microsoft’s implementation of virtual reality. The problem is the lack of great games (or other virtual reality experiences).

This may improve, provided Microsoft sustains enough momentum to make Windows Mixed Reality worth supporting. The key here is the relationship with Steam. Microsoft cheerfully told the press that Steam VR is supported. The reality is that Steam VR support comes via preview software which you get via Steam and which states that it “is not complete and may or may not change further.” It will probably all be fine eventually, but that is not reassuring for early adopters.


My experience so far is that native Windows MR apps (from the Microsoft Store) work more smoothly, but the best content is on Steam VR. The current Steam preview does work, though with a few limitations (no haptic feedback) and other issues, depending on how much effort the game developers have put into supporting Windows MR.

I tried Windows MR on a well-specified gaming PC: Core i7 with NVIDIA’s superb GTX 1080 GPU. Games in general run super smoothly on this hardware.

Getting started

A Windows Mixed Reality headset has a wired connection to a PC, broken out into an HDMI and a USB 3.0 connection. You need Windows 10 Fall Creators Update installed, and setup should be a matter of plugging in your headset, whereupon the hardware is detected and a setup wizard starts, downloading additional software as required.


In my case it did not go well. Setup started OK but went into a spin, giving me a corrupt screen and never completing. The problem, it turned out, was that my GPU has only one HDMI port, which I was already using for the main display. I had the headset plugged into a DisplayPort socket via an adapter. I switched this around, so that the headset uses the real HDMI port, and the display uses the adapter. Everything then worked perfectly.

The controllers use Bluetooth. I was wary, because in my previous demos the controllers had been problematic, dropping their connection from time to time, but these work fine.


They are perhaps a bit bulky, thanks to their illuminated rings which are presumably a key part of the tracking system. They also chew batteries.

The Acer headsets are slightly cheaper than average, but I’ve enjoyed my time with this one. I wear glasses but the headset fits comfortably over them.

A big selling point of the Windows system is that no external tracking sensors are required. This is called inside-out tracking. It is a great feature and makes it easier just to plug in and go. That said, you have to choose between a stationary position, or free movement; and if you choose free movement, you have to set up a virtual boundary so that you do not walk into physical objects while immersed in a VR experience.


The boundary is an important feature but also illustrates an inherent issue with full VR immersion: you really are isolated from the real world. Motion sickness and disorientation can also be a problem, the reason being that the images your brain perceives do not match the physical movement your body feels.

Once set up, you are in Microsoft’s virtual house, which serves as a kind of customizable Start menu for your VR experiences.


The house is OK though it seems to me over-elaborate for its function, which is to launch games and apps.

I must state at this point that yes, a virtual reality experience is amazing and a new kind of computing. The ability to look all around is extraordinary when you first encounter it, and adds a level of realism which you cannot otherwise achieve. That said, there is some frustration when you discover that the virtual world is not really as extensive as it first appears, just as you get in an adventure game when you find that not all doors open and there are invisible barriers everywhere. I am pretty sure though that a must-have VR game will come along at some point and drive many new sales – though not necessarily for Windows Mixed Reality of course.

I looked for content in the Windows Store. It is slim pickings. There’s Minecraft, which is stunning in VR, until you realise that the controls do not work quite so well as they do in the conventional version. There is Space Pirate, an old-school arcade game which is a lot of fun. There is Arizona Sunshine, which is fine if you like shooting zombies.

I headed over to Steam. The way this works is that you install the Steam app, then launch Windows Mixed Reality, then launch a VR game from your Steam library. You can access the Windows Desktop from within the Windows MR world, though it is not much fun. Although the VR headset offers two 1440 x 1440 displays I found it impossible to keep everything in sharp focus all the time. This does not matter all that much in the context of a VR game or experience, but makes the desktop and desktop applications difficult to use.

I did find lots of goodies in the Steam VR store though. There is Google Earth VR, which is not marked as supporting Windows MR but works. There is also The Lab, a Steam VR demo which does a great job of showing what the platform can do, with several mini-games and other experiences – including a fab archery game called Longbow where you defend your castle from approaching hordes. You can even fire flaming arrows.

Another find is Asteroids! VR, a short, wordless VR film which is nice to watch once. It’s free though!

Mainstream VR?

Irrespective of who provides the hardware, VR has some issues. Even with inside-out tracking, a Windows Mixed Reality setup is somewhat bulky and makes the wearer look silly. The kit will become lighter, as well as integrating audio. HTC’s Vive Pro, just announced at CES, offers built-in headphones and has a wireless option, using Intel’s WiGig technology.

Even so, there are inherent issues with a fully immersive environment. You are vulnerable in various ways. Having people around wearing earbuds and staring at a screen is bad enough, but VR takes anti-social to another level.

The added expense of creating the content is another issue, though the right tools can do an amazing job of simplifying and accelerating the process.

It is worth noting that VR has been around for a long time. Check out the history here. Virtual Reality arcade machines in 1991. Sega VR Glasses in 1993. Why has this stuff taken so long to take off, and why does it remain in its early stages? It is partly about technology catching up to the point of real usability and affordability, but there is also an open question about how much VR we want and need.

Why patching to protect against Spectre and Meltdown is challenging

The tech world has been buzzing with news of bugs (or design flaws, take your pick) in mainly Intel CPUs, going way back, which enable malware to access memory in the computer that should be inaccessible.

How do you protect against this risk? The industry has done a poor job in communicating what users (or even system admins) should do.

A key reason why this problem is so serious is that it risks a nightmare scenario for public cloud vendors, or any hosting company. This is where software running in a virtual machine is able to access memory, and potentially introduce malware, in either the host server or other virtual machines running on the same server. The nature of public cloud is that anyone can run up a virtual machine and do what they will, so protecting against this issue is essential. The biggest providers, including AWS, Microsoft and Google, appear to have moved quickly to protect their public cloud platforms. For example:

The majority of Azure infrastructure has already been updated to address this vulnerability. Some aspects of Azure are still being updated and require a reboot of customer VMs for the security update to take effect. Many of you have received notification in recent weeks of a planned maintenance on Azure and have already rebooted your VMs to apply the fix, and no further action by you is required.

With the public disclosure of the security vulnerability today, we are accelerating the planned maintenance timing and will begin automatically rebooting the remaining impacted VMs starting at 3:30pm PST on January 3, 2018. The self-service maintenance window that was available for some customers has now ended, in order to begin this accelerated update.

Note that this fix is at the hypervisor, host level. It does not patch your VMs on Azure. So do you also need to patch your VM? Yes, you should; and your client PCs as well. For example, KB4056890 (for Windows Server 2016 and Windows 10 1607), KB4056891 (for Windows 10 1703), or KB4056892 (for Windows 10 1709). This is where it gets complex though, for two main reasons:

1. The update will not be applied unless your antivirus vendor has set a special registry key. The reason is that the update may crash your computer if the antivirus software accesses memory in a certain way, which it may do. So you have to wait for your antivirus vendor to do this, or remove your third-party anti-virus and use the built-in Windows Defender. (There is a small sketch below, after point 2, for checking whether the flag is set.)

2. The software patch is not complete protection. You also need to update your BIOS, if an update is available; and whether one is available may be hard to discover. For example, I am pretty sure that I found the right update for my HP PC, based on the following clues:

– The update was released on December 20 2017

– The description of the update is “Provides improved security”
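Back to point 1: the antivirus gate is, as I understand it from Microsoft’s advisory, a QualityCompat value under HKLM which vendors set once their product is compatible. Here is a small sketch of my own (not a Microsoft tool) that checks for it:

// My own sketch, based on Microsoft's published guidance: the January 2018
// update is offered only when this QualityCompat value exists and is 0.
using System;
using Microsoft.Win32;

class QualityCompatCheck
{
    static void Main()
    {
        const string path = @"SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat";
        using (var key = Registry.LocalMachine.OpenSubKey(path))
        {
            object value = key?.GetValue("cadca5fe-87d3-4b96-b7fb-a231484277cc");
            Console.WriteLine(value is int data && data == 0
                ? "Compatibility flag set: the update can install"
                : "Flag not set: Windows Update will withhold the patch");
        }
    }
}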


So now is the time, if you have not done so already, to go to the support sites for your servers and PCs (or your motherboard vendor if you assembled your own), see if there is a BIOS update, try to figure out whether it addresses Spectre and Meltdown, and apply it.

If you cannot find an update, you are not fully protected.

It is not an easy process and realistically many PCs will never be updated, especially older ones.

What is most disappointing is the lack of clarity or alerts from vendors about the problem. I visited the HPE support site yesterday in the hope of finding up-to-date information on HP’s server patches, only to find a maze of twisty little link passages, all alike, none of which led to the information I sought. The only thing you can do is to trace the driver downloads for your server in the hope of finding a BIOS update.

Common sense suggests that PCs and laptops will be a bigger risk than private servers, since unlike public cloud vendors you do not allow anyone out there to run up VMs.

At this point it is hard to tell how big a problem this will be. Best practice though suggests updating all your PCs and servers immediately, as well as checking that your hosting company has done the same. In this particular case, achieving this is challenging.

PS kudos to BleepingComputer for this nice article and links; the kind of practical help that hard-pressed users and admins need.

There is also a great list of fixes and mitigations for various platforms here:

https://github.com/hannob/meltdownspectre-patches

PPS see also Microsoft’s guidance on patching servers here:

https://support.microsoft.com/en-us/help/4072698/windows-server-guidance-to-protect-against-the-speculative-execution

and PCs here:

https://support.microsoft.com/en-us/help/4073119/protect-against-speculative-execution-side-channel-vulnerabilities-in

There is a handy PowerShell module called SpeculationControl which you can install and run to check status. I was able to confirm that the HP BIOS update mentioned above is the right one. Just run PowerShell with admin rights and type:

Install-Module SpeculationControl

then type

Get-SpeculationControlSettings


Thanks to @teroalhonen on Twitter for the tip.

Quick thoughts on Salesforce and Google Cloud Platform alliance


Yesterday Salesforce and Google announced a strategic partnership:

1. Salesforce named Google Cloud as “a preferred public cloud provider”. Salesforce says it “continues to invest in its own data centers”. However it will use public cloud infrastructure “for its core services” as well, especially in “select international markets.” Why is Google Cloud Platform (GCP) just a preferred partner and not the preferred partner? Well, “AWS is a great partner”, as the release also notes.

2. New integrations will be introduced between Salesforce and G Suite (Gmail, Docs, Google Drive and Calendar for business), and there is a promotional offer of one year’s free G Suite for Salesforce customers. Note that the release also says “restrictions apply, see here”, with the see here link currently inactive.

3. Salesforce will integrate with Google Analytics.

Google has also posted about the partnership but adds little of substance to the above.

Why this alliance? On Google’s side, it is keen to build momentum for its cloud platform and to catch up a little with AWS and Microsoft Azure. Getting public support from a major cloud player like Salesforce is helpful. On the Salesforce side, it is an obvious alliance following the public love-in between Adobe and Microsoft Azure. Adobe competes with Salesforce in marketing tools, and Microsoft competes with Salesforce in CRM.

Google will also hope to win customers from Microsoft Exchange, Office and Office 365. However Salesforce knows it has to integrate nicely with Microsoft’s email and productivity tools as well as with G Suite. The analytics integration is a bigger deal here, thanks to the huge reach of Google’s cloud data and tools.

A quick look at Surface Book 2: powerful but heavy

Microsoft’s Surface range is now extensive. There is the Surface Pro (tablet with keyboard cover), the Surface Laptop (laptop with thin keyboard), and the Surface Book (detachable tablet). And the Surface Studio, an all-in-one desktop. Just announced, and on display here at Microsoft’s Future Decoded event in London, is Surface Book 2.


The device feels very solid and the one I saw has an impressive spec: an 8th Gen Intel Core i7 with 16GB RAM and NVIDIA GeForce GTX 1050 discrete GPU. And up to 17 hours battery life.

All good stuff; but I have a couple of reservations. One is the weight: “from 3.38 lbs (1.534 kg)”, according to the spec. By contrast, the Surface Laptop starts at 1.69 lbs (0.767 kg).

That makes the Book 2 heavy in today’s terms. I am used to ultrabook-style laptops now.

Of course you can lighten your load by just using the tablet. Will you though? I rarely see Windows convertible or detachable devices used other than like laptops, with the keyboard attached. The Surface is more likely to be used like a tablet, since you can simply fold the keyboard cover back, but with the Book you either leave the keyboard at home, and put up with short battery life, or have it at least in your bag.

Microsoft announces Office 2019, Exchange Server 2019 and SharePoint Server 2019

This was not one of Microsoft’s most surprising announcements, but even so, it confirms that some of the company’s most significant products will receive updates a year or so from now. The announcement was made at the SharePoint and OneDrive session at the Ignite event here in Orlando.


If you have an hour or so spare, you can view the session here.

Note that fewer people now use these products; that is, increasing numbers of users are on Exchange Online and Office 365. These are the same but not the same, and get updates earlier than the on-premises equivalents. Still, we may well see a makeover for Office 365 at around the time Office 2019 is released.

Either way, we should not expect a radical departure from the current Office. Rather, we can expect improvements in the area of collaboration and deeper integration with cloud services.

You will also need to get used to a new consent dialog, if you have not already (the exact wording will vary according to the context). The deal is that you send your document content to Microsoft in order to get AI-driven features.

Microsoft Ignite: where next for Microsoft’s cloud? The Facebook of business?


Microsoft has futuristic domes as part of its Envision event, running alongside Ignite here in Orlando. Ignite is the company’s main technical event of the year, focusing mainly on IT Pros but embracing pretty much the whole spectrum of Microsoft’s products and services (maybe not much Xbox!). With the decline of the PC and retreat from mobile, and a server guy at the helm, the company’s focus has shifted towards cloud and enterprise, making Ignite all the more important.

This year sees around 25-30,000 attendees according to a quick estimate from one of the PRs here; a little bigger than last year’s event in Atlanta.

Microsoft will present itself as an innovative company doing great things in the cloud but the truth is more complex, much though I respect the extent to which the business has been transformed. This is a company with a huge amount of legacy technology, designed for a previous era, and its challenge has been, and still is, how to make that a springboard for moving to a new way of working as opposed to a selling opportunity for cloud-born competitors, primarily Amazon Web Services (AWS) and Google, but also the likes of Salesforce and Dropbox.

If there is one product that has saved Microsoft, it is probably Exchange, always a solid email server and basic collaboration tool. Hosted Exchange is the heart of Office 365 (and BPOS before it), making it an easy sell to numerous businesses already equipped with Office and Outlook. Email servers are horrible things to manage, so hosted has great appeal, and it has driven huge uptake. A side-effect is that it has kept customers using Office and to some extent Windows. A further side-effect is that it has migrated businesses onto Azure Active Directory, the directory behind Exchange Online.

Alongside Office 365, the Azure cloud has matured into a credible competitor to AWS. There are still shortcomings (a few of which you can expect to be addressed by announcements here at Ignite), but it works, providing the company with the opportunity to upsell customers from users of cloud infrastructure to consumers of cloud services, such as Azure IoT, a suite of tools for gathering and analysing data.

The weakness of Microsoft’s cloud efforts has been the moving parts between hosted services and Windows PCs, and legacy pieces that do not work as you would expect. OneDrive has been a persistent annoyance, with issues over reliable document sync and limitations over things like the number of documents in a folder and the total length of a path. And where are my Exchange Public Folders, or any shared folders, in Outlook for iOS and Android? And why does a PC installation of Office now and again collapse with activation or other issues, so that the only solution is removal and reinstall?

At Ignite we will not hear of such things. Instead, Microsoft will be presenting its vision of AI-informed business collaboration. Think “Facebook of business”, powered by the “Microsoft graph”, the sum of data held on each user and their files and activity, now combined with LinkedIn. The possibilities for better-informed business activity, and systems that know what you need before you ask, are enticing. Open questions are how well it will work, and old issues of privacy and surveillance.

Such things also can only work if businesses do in fact commit more of their data to Microsoft’s cloud. The business case for this is by no means as simple as the company would have us think.

VMware Cloud on AWS: a game changer? What about Microsoft’s Azure Stack?

The biggest announcement from VMworld in Las Vegas and then Barcelona was VMware Cloud on AWS; essentially VMware hosts on AWS servers.


A key point is that this really is VMware on AWS infrastructure; the release states “Run VMware software stack directly on metal, without nested virtualization”.

Why would you use this? Because it is hybrid cloud, allowing you to plan or move workloads between on-premises and public cloud infrastructure easily, using the same familiar tools (vCenter, vSphere, PowerCLI) as you do now, presuming you use VMware.

You also get low-latency connections to other AWS services, of which there are far too many to mention.

This strikes me as significant for VMware customers; and let’s not forget that the company dominates virtualisation in business computing.

Why would you not use VMware Cloud on AWS? Price is one consideration. Each host has 2 CPUs, 36 cores, 512GB RAM, 10.71TB local flash storage. You need a minimum of 4 hosts. Each host costs from $4.1616 to $8.3681 per hour, with the lowest price if you pay up front for a 3-year subscription (a substantial investment).

Price comparisons are always difficult. A big VM of a similar spec to one of these hosts will likely cost less. Maybe the best comparison is an EC2 Dedicated Host (where you buy a host on which you can run up VM instances without extra charge). An i3 dedicated host has 2 sockets and 36 cores, similar to a VMware host. It can run 16 xlarge VMs, each with 950GB SSD storage. Cost is from $2.323 to $5.491 per hour. Again, the lowest cost is for a 3-year subscription with payment upfront.
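As a rough worked example (my own back-of-envelope arithmetic, using the lowest 3-year-upfront hourly rates quoted above, and assuming four comparable hosts on each side):

// Back-of-envelope only; list prices as quoted above.
using System;

class CostSketch
{
    static void Main()
    {
        const double hoursPerYear = 24 * 365; // 8,760
        double vmwareCluster = 4 * 4.1616 * hoursPerYear; // minimum 4-host VMware cluster
        double ec2Hosts = 4 * 2.323 * hoursPerYear;       // 4 comparable i3 dedicated hosts

        Console.WriteLine($"VMware Cloud on AWS: ~${vmwareCluster:n0} per year"); // ~$145,822
        Console.WriteLine($"EC2 dedicated hosts: ~${ec2Hosts:n0} per year");      // ~$81,398
        Console.WriteLine($"VMware/EC2 ratio: {vmwareCluster / ec2Hosts:f2}x");   // ~1.79x
    }
}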

I may have this hasty calculation wrong; but there is clearly a premium to pay for VMware, and customers are used to that. The way the setup is designed (a 4-host cluster minimum) also makes it hard to be as flexible with costs as you can be when running up individual VMs.

A few more observations. EC2 is the native citizen of AWS. By going for VMware on AWS instead of EC2 you are interposing a third party between you and AWS which intuitively seems to me a compromise. What you are getting though is smoother hybrid cloud which is no small thing.

What about Microsoft, previously the king of hybrid cloud? Microsoft’s hypervisor is Hyper-V and while there are a few features in VMware ESXi that Hyper-V lacks, they are not all that significant in my opinion. As a hypervisor, Hyper-V is solid. The pain points with Microsoft’s solution though are Cluster Shared Volumes, for high availability Hyper-V deployments, and System Center Virtual Machine Manager; VMware has better tools. There is a reason Azure uses Hyper-V but not SCVMM.

Hyper-V will always be cheaper than VMware (other than for small, free deployments) because it is a feature of Windows and not an add-on. Windows Server licenses are not cheap at all but that is another matter, and you have to suffer these anyway if you run Windows on VMware.

Thus far, Hyper-V has not been all that attractive to VMware shops, not only because of the cost of changing course, but also because of the shortcomings mentioned above.

Microsoft’s own game-changer here is Azure Stack, pre-packaged hardware which uses Azure rather than System Center technology, relieving admins of the burden of managing Cluster Shared Volumes and so forth. It is a great solution for hybrid since it really is the same as Microsoft’s public cloud (albeit with some missing features, and some lag in implementing features that arrive first in the public version).

Azure Stack, like VMware on AWS, is new. Further, there is much more friction in migrating an existing datacenter to use Azure Stack, than in extending an existing VMware operation to use VMware Cloud on AWS.

But there is more. Is cloud computing really about running up VMs and moving them about? Arguably, not. Containers are another approach with some obvious advantages. Serverless is a big deal, and abstracts away both VMs and containers. Further, as you shift the balance of applications away from code you write and more towards use of cloud services (database, ML, BI, queuing and so on), the importance of VMs and containers lessens.

Azure Stack has an advantage here, since it gives an on-premises implementation of some Azure services, though far short of what is in Microsoft’s cloud. And VMware, of course, is not just about VMs.

Overall it seems to me that while VMware Cloud on AWS is great for VMware customers migrating towards hybrid cloud, it is unlikely to be optimal, either for cost or features, especially when you take a long view.

It remains a smart move and one that I would expect to have a rapid and significant take-up.