Category Archives: cloud computing

Would you consider running PHP on Azure? Microsoft faces uphill battle to convince customers.

Yesterday Microsoft announced Windows Azure SDK for PHP version 3.0, an update to its open source SDK for PHP on Windows Azure. The SDK wraps Azure storage, diagnostics and management services with a PHP API.
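
As a rough illustration of what the SDK offers, here is a minimal sketch of creating a blob container and uploading a file. It is based on the blob storage API as documented for earlier releases (the Microsoft_WindowsAzure_Storage_Blob class with createContainer and putBlob methods), so treat the exact class and method names as indicative rather than definitive for version 3.0; the account name, key and file path are placeholders.

    <?php
    // Load the SDK's blob storage client (the include path depends on where the SDK is installed)
    require_once 'Microsoft/WindowsAzure/Storage/Blob.php';

    // Connect to an Azure storage account; replace the account name and key with your own
    $blobClient = new Microsoft_WindowsAzure_Storage_Blob(
        'blob.core.windows.net',
        'mystorageaccount',
        'storage-account-key'
    );

    // Create a container and upload a local file as a blob
    $blobClient->createContainer('backups');
    $blobClient->putBlob('backups', 'report.pdf', '/path/to/report.pdf');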

Microsoft has been working for years on making IIS a good platform for PHP. FastCGI for IIS was introduced partly, I guess, with PHP in mind; and Microsoft runs a dedicated site for PHP on IIS. The Web Platform Installer installs a number of PHP applications including WordPress, Joomla and Drupal.

It is good to see Microsoft making an effort to support this important open source platform, and I am sure it has been welcomed by Microsoft-platform organisations who want to run WordPress, say, on their existing infrastructure.

Attracting PHP developers to Azure may be harder though. I asked Nick Hines, CTO for Innovation at Thoughtworks, a global IT consultancy and developer, what he thought of the idea.

I’d struggle to see any reason. Even if you had it in your datacentre, I certainly wouldn’t advise a client, unless there was some corporate mandate to the contrary, and especially if they wanted scale, to be running a Java or a PHP application on Windows.

Microsoft’s scaling and availability story around Windows hasn’t had the penetration of the datacentre that Java and Linux has. If you look at some of the heavy users of all kinds of technology that we come across, such as some of the investment banks, what they’re tending to do is to build front and middle tier applications using C# and taking advantage of things like Silverlight to get the fancy front ends that they want, but the back end services and heavy lifting and number crunching predominantly is Java or some sort of Java variant running on Linux.

Hines also said that he had not realised running PHP on Azure was something Microsoft was promoting, and voiced his suspicion that PHP would be at a disadvantage to C# and .NET when it came to calling Azure APIs.

His remarks do not surprise me, and Microsoft will have to work hard to persuade a broad range of customers that Azure is as good a platform for PHP as Linux and Apache – even leaving aside the question of whether that is the case.

The new PHP SDK is on CodePlex and developed partly by a third party, RealDolmen, sponsored by Microsoft. While I understand why Microsoft is using a third party, this kind of approach troubles me in that you have to ask: what will happen to the project if Microsoft stops sponsoring it? It is not an organic open source project driven by its users, and there are examples of similar exercises that have turned out to be more to do with PR than with real commitment.

I was trying to think of important open source projects from Microsoft and the best I could come up with was ASP.NET MVC. This is also made available on CodePlex, and is clearly a critical and popular project.

However the two are not really comparable. The SDK for PHP is licensed under the New BSD License; whereas ASP.NET MVC has the restrictive Microsoft Source License for ASP.NET Pre-Release Components (even though it is now RTM – Released to manufacturing). ASP.NET MVC 1.0 was licensed under the Microsoft Public License, but I do not know if this will eventually also be the case for ASP.NET MVC 3.0.

Further, ASP.NET MVC is developed by Microsoft itself, and has its own web site as part of the official ASP.NET site. Many users may not realise that the source is published.

My reasoning, then, is that if Microsoft really wants to make PHP a first-class citizen on Azure, it should hire a crack PHP team and develop its own supporting libraries, as well as come up with some solid evidence for its merits versus, say, Linux on Amazon EC2, that might persuade someone like Nick Hines that it is worth a look.

Three questions about Microsoft’s cloud play at TechEd 2011

This year’s Microsoft TechEd is subtitled Cloud Power: Delivered, and sky blue is the theme colour. Microsoft seems to be serious about its cloud play, based on Windows Azure.

Then again, Microsoft is busy redefining its on-premise solutions in terms of cloud as well. A bunch of Windows Servers on virtual machines managed by System Center Virtual Machine Manager (SCVMM) is now called a private cloud – note that the forthcoming SCVMM 2012 can manage VMWare and Citrix XenServer as well as Microsoft’s own Hyper-V. If everything is cloud then nothing is cloud, and the sceptical might wonder whether this is rebranding rather than true cloud computing.

I think there is a measure of that, but also that Microsoft really is pushing Azure heavily, as well as hosted applications like Office 365, and does intend to be a cloud computing company. Here are three side-questions which I have been mulling over; I would be interested in comments.


Microsoft gets Azure – but does its community?

At lunch today I sat next to a delegate and asked what she thought of all the Azure push at TechEd. She said it was interesting, but irrelevant to her as her organisation looks after its own IT. She then added, unprompted, that they have a 7,000-strong IT department.

How much of Microsoft’s community will actually buy into Azure?

Is Microsoft over-complicating the cloud?

One of the big announcements here at TechEd is about new features in AppFabric, the middleware part of Windows Azure. When I read about new features in the Azure service bus I think how this shows maturity in Azure; but with the niggling question of whether Microsoft is now replicating all the complexity of on-premise software in a new cloud environment, rather than bringing radical new simplicity to enterprise computing. Is Microsoft over-complicating the cloud, or is it more that the same necessity for complex solutions exists wherever you deploy your applications?

What are the implications of cloud for Microsoft partners?

TechEd 2011 has a huge exhibition and of course every stand has contrived to find some aspect of cloud that it supports or enables. However, Windows Azure is meant to shift the burden of maintenance from customers to Microsoft. If Azure succeeds, will there be room for so many third-party vendors? What about the whole IT support industry, internal and external – are their jobs at risk? It seems to me that if moving to a multi-tenanted platform really does reduce cost, there must be implications for IT jobs as well.

The stock answer for internal staff is that reducing infrastructure cost is an opportunity for new uses of IT that are beneficial to the business. Staff currently engaged in keeping the wheels turning can now deliver more and better applications. That seems to me a rose-tinted view, but there may be something in it.

Cloud is identity management says Kim Cameron, now ex-Microsoft

Kim Cameron, formerly chief identity architect at Microsoft, has confirmed that he has left the company.

In an interview at the European Identity Conference in Munich he discusses the state of play in identity management, but does not explain what interests me most: why he left. He was respected across the industry and to my mind was a tremendous asset to Microsoft; his presence went a long way to undoing the damage of Hailstorm, an abandoned project from 2001 which sought to place Microsoft at the centre of digital life and failed largely because of industry mistrust. He formulated laws of identity which express good identity practice, things like minimal disclosure, justifiable parties, and user control and consent.

Identity is a complex and, to most people, an unexciting topic; yet it has never been more important. It is a central issue around Google’s recently announced Chromebook, for example; yet we tend to be distracted by other issues, like hardware features or software quality, and to miss the identity implications. Vendors are careful never to spell these out, so we need individuals like Cameron who get it.

“Cloud is identity management,” he says in the interview.

Cameron stands by his laws of identity, which he says are still “essentially correct”. However, events like the recent Sony data loss show how little the wider industry respects them.

So what happened at Microsoft? Although he puts a brave face on it, I am sure he must have been disappointed by the failure of Cardspace, a user interface and infrastructure for identity management that was recently abandoned. It was not successful, he says, because “it was not adopted by the large players,” but what he does not say is that Microsoft itself could have done much more to support it.

That may have been a point of tension; or maybe there were other disagreements. Cameron does not talk down his former company though. “There are a lot of people there who share the ideas that I was expressing, and my hope is that those ideas will continue to be put in practice,” he says, though the carefully chosen words leave space for the possibility that another well-represented internal group does not share them. He adds, though, that products like SharePoint do have his ideas about claims-based identity management baked into them.

Leaving aside Microsoft, Cameron makes what seems to me an important point about advocacy. “We’re at the beginning of a tremendously complex and deep technological change,” he says, and is worried by the fact that with vendors chasing immediate advantage there may be “no advocates for user-centric, user in control experience.”

Fortunately for us, Cameron is not bowing out altogether. “How can I stop? It is so interesting,” he says.

Chromebook: web applications put to the test, and by the way no Java

Yesterday Google announced the availability of the first commercial Chromebook, a Linux computer running the Chrome browser and not much else. There are machines from Acer and Samsung which are traditional laptop/netbook clamshell designs, with an Intel Atom dual core processor, 16GB solid state storage, and a 12.1” screen. Price will be a bit less than $400, or organisations can subscribe from $28 or €21 per month in which case they get full support and hardware replacement. There are wi-fi and 3G options. Nobody is going to be excited about the hardware.


The Chromebook may be the most secure computer available, if Google has got it right. The OS is inaccessible to the user and protected from the browser, and system patching is automatic.

The strength and weakness of the Chromebook is that it only runs web applications – the only exception being utilities that Google itself supplies. Are we ready for a computer that is of little use offline? I am not sure; but this will be an interesting experiment.

The Chromebook is a compelling alternative to a traditional PC, which is susceptible to malware and dependent on locally installed applications and data. If you lose your PC, getting a new one up and running can be a considerable hassle, though large businesses have almost cracked the problem with system images and standard builds. Lose a Chromebook, and you just get another one and sign in.

You sign into Google of course, and that is a worry if you would rather not be dependent on a single corporation for your digital identity and a large chunk of your data.

The problem for the Chromebook is that Apple’s iPad and numerous Google Android tablets and netbooks offer security that is nearly as good, and local applications as well as web applications, for a not dissimilar cost. These devices are also easy to restore if they break or go missing – slightly less easily than a Chromebook, but not by much.

The choice looks a bit like this:

  1. Chromebook: Web applications only
  2. iPad/Android: Web applications and local apps

Put like that, it is difficult to see the advantage of the Chromebook. The subscription scheme is interesting though; it is a new business computing model that brings the cloud computing principle of operating expenditure instead of capital expenditure to the desktop.

The offline issue may be the worst thing about a Chromebook. When I travel, I frequently find myself without a good internet connection. The word “offline” does not feature in either the consumer or business frequently asked questions – a question Google would rather you did not ask?

Yet there is 16GB storage on board. That is a lot. In theory, HTML 5 local storage should solve the offline problem, but few web apps, including Google’s own, make this seamless yet.

A few other observations. While there are no user-installable client apps, Google is adding some utilities.

VPN is coming:

We’ve heard from our pilot customers that VPN is an important feature for businesses and schools, and we’re working very hard to bake this into Chromebooks soon. Support for some VPN implementation is already in the product and we’ll both extend support for more VPNs and get these features to stable soon.

Remote desktop access is coming:

we are developing a free service called Chromoting that will enable Chrome notebook users to remotely access their existing PCs and Macs.

Apparently this is based on Citrix Receiver.

There is a bias towards Adobe Flash:

Chromebooks have Flash support built-in, but they do not support Java or Silverlight.

Another blow for Java on the client.

Microsoft’s Azure toolkit for Apple iOS and Android is a start, but nothing like enough

Microsoft’s Jamin Spitzer has announced toolkits for Apple iOS, Google Android and Windows Phone, to support its Azure cloud computing platform.

I downloaded the toolkit for iOS and took a look. It is a start, but it is really only a toolkit for Azure storage, excluding SQL Azure.


What would I hope for from an iOS toolkit for Azure? Access to SQL Server in Azure would be useful, as would a client for WCF (Windows Communication Foundation). In fact, I would suggest that the WCF RIA Services which Microsoft has built for Silverlight and other .NET clients has a more useful scope than the Azure toolkit; I realise it is not exactly comparing like with like, but most applications built on Azure will be .NET applications and iOS lacks the handy .NET libraries.

A few other observations. The rich documentation for WCF RIA Services is quite a contrast to the Doxyfile docs for the iOS toolkit and its few samples, though Wade Wegner has a walkthrough. One comment asks reasonably enough why the toolkit does not use a two or three letter prefix for its classes, as Apple recommends for third-party developers, in order to avoid naming conflicts caused by Objective-C’s lack of namespace support.

The development tool for Azure is Visual Studio, which does not run on a Mac. Microsoft offers a workaround: a Cloud Ready Package which is a pre-baked Azure application; you just have to amend the configuration in a text editor to point to your own storage account, so developers without Visual Studio can get started. That is all very well; but I cannot imagine that many developers will deploy Azure services on this basis.

I never know quite what to make of these little open source projects that Microsoft comes up with from time to time. It looks like a great start, but what is its long-term future? Will it be frozen if its advocate within Microsoft happens to move on?

In other words, this looks like a project, not a strategy.

The Windows Azure Tools for Eclipse, developed by Soyatec and funded by Microsoft, is another example. I love the FAQ:

[Screenshot: the FAQ page for the Windows Azure Tools for Eclipse]

This sort of presentation says to developers: Microsoft is not serious about this, avoid.

That is a shame, because a strategy for making Azure useful across a broad range of Windows and non-Windows clients and devices is exactly what Microsoft should be working on, in order to compete effectively with other cloud platforms out there. A strategy means proper resources, a roadmap, and integration into the official Microsoft site rather than quasi-independent sites strewn over the web.

Microsoft’s Scott Guthrie moving to Windows Azure

According to an internal memo leaked to ZDNet’s Mary Jo Foley, Microsoft’s Scott Guthrie, who is currently Corporate VP of the .NET Developer Platform, is moving to lead the Azure Application Platform team. This means he will report to Ted Kummert, who is in charge of the Business Platform Division, instead of S. Somasegar, who runs the Developer Division; however both divisions are part of the overall Server and Tools business. Server and Tools is the division from which Bob Muglia was ousted as president in January; the reason for this is still not clear to me, though I would guess at some significant strategy disagreement with CEO Steve Ballmer.

Guthrie was co-inventor of ASP.NET and is one of the most approachable of senior Microsoft execs; he is popular and respected by developers and his blog is one of the first places I look for in-depth and hands-on explanations of new features in Microsoft’s developer platform, such as ASP.NET MVC and Entity Framework.

I have spent a lot of time researching and using Visual Studio 2010, and while not perfect it is among the most impressive developer products I know, from the detail of the editor and debug features right through to ALM (Application Lifecycle Management) aspects like Team Foundation Server, testing in various forms, and build management. Some of that quality is likely due to Guthrie’s influence. The successful evolution of ASP.NET from web forms towards the leaner and more flexible ASP.NET MVC is another achievement in which I am sure he played a significant role.

Is it wise to take Guthrie away from his first love and over to the Azure platform? Only Microsoft can answer that, and of course he will still be responsible for an ASP.NET platform. I’d guess that we will see further improvement in the Visual Studio tools for Azure as well.

Still, it is a bold move and one that underlines the importance of Azure to the company. In my own research I have gained increasing respect for Azure and I would expect Guthrie’s arrival there to be successful in winning attention from the Microsoft platform developer community.

Implications of Amazon’s cloud failure

Amazon is into day three of a major failure of its Elastic Compute Cloud at its Northern Virginia datacentre, and at the time of writing it is still not fully recovered.


I am reminded of a prescient remark by Tony Lucas at Flexiant, a UK cloud provider, who told me a couple of years ago (with commendable honesty) that cloud failures will be rare, but when they occur will be on a grand scale.

It seems that it is hard to engineer around the possibility of cascading failure. I am not sure what happened in Northern Virginia, but Amazon says on its status page that:

A networking event early this morning triggered a large amount of re-mirroring of EBS volumes in US-EAST-1. This re-mirroring created a shortage of capacity in one of the US-EAST-1 Availability Zones, which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes. Additionally, one of our internal control planes for EBS has become inundated such that it’s difficult to create new EBS volumes and EBS backed instances.

It sounds like an automated recovery system built into the compute cloud actually became the problem, as a large number of volumes tried to fix themselves at the same time.

This is not the first Amazon outage, but I believe it is the most severe; though it could have been worse and I have not heard that any data was lost. What are the implications?

Any computer system can fail. There will be a lot of companies reflecting on this though, both those directly affected and others, and realising that the cloud can be a single point of failure, despite the scale and expertise which a company like Amazon invests in high availability.

Is Amazon EC2 more or less likely to fail for an extended period than Salesforce.com? Or Microsoft Azure? Or Google App Engine, or Gmail, or IBM’s evolving SmartCloud? Clearly an excellent question; but I am not sure how we go about answering it other than by reviewing historical performance. I do not expect any of these companies to take advantage of Amazon’s problems to proclaim their own superior resiliency; they will all be worrying too much about the same thing happening on their platforms.

My guess is that the industry will get better at this, and that at some unspecified future moment the chance of one of these cloud platforms failing for three days will become exceedingly small – of course risk can never be eliminated, only reduced.

It seems that the risk is not exceedingly small on Amazon’s cloud today; and we should probably assume that the same applies to other providers.

That is something we have always known, so in one sense nothing has changed. This outage is a sharp reminder though; and planning for failure is a hidden cost of cloud computing that has now been brought into the light.

Getting started with VMWare Cloud Foundry

I have been meaning to post about VMWare’s Cloud Foundry, a new cloud platform for Spring Java, Ruby on Rails and Node.js applications. Good, incidentally, to see Node.js getting increasing attention – see my post from December when I heard Ryan Dahl present on the subject. I signed up for Cloud Foundry when it was announced, since it has free participation in the current beta.

Today my account went live and I spent just a few minutes creating a first application using the steps here. Well, my first app is indistinguishable from a static web page; but having been immersed in Windows Azure for the last few days, with its accounts and subscriptions and certificates and the portal and the ServiceConfiguration.cscfg and the ServiceDefinition.csdef and all the rest of it, I admit to having a pleasant sense of “is that it?” when deploying the Cloud Foundry app.

I typed a few lines of Ruby code, saved the file to a directory, and then typed vmc push.

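To spell out the client-side steps, the sequence in the current beta looks roughly like this; the target URL and commands are from memory, so treat them as indicative rather than a definitive reference.

    # point the vmc client at the hosted Cloud Foundry service and log in
    vmc target api.cloudfoundry.com
    vmc login

    # deploy the app in the current directory; vmc prompts for a name, URL and memory allocation
    vmc push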

Let me add that I like what Microsoft is doing with Azure, and that in combination with Visual Studio it makes an impressive cloud development platform that, from what I have seen, works very well; I think it is of major importance. I also like the fact that Microsoft has pushed down the cost of entry, with its Extra Small instances and more generous offers to MSDN subscribers, partners and others. It is easier to experiment a bit with Azure than it used to be.

Nevertheless, Azure lacks the delightful simplicity of getting started that you can see in Cloud Foundry. After doing my Hello World app I look forward to trying something more advanced; and while I am sure that the delightful simplicity soon disappears as the lines of Ruby code increase, and as you try to make your application scalable and transactional and secure and all those good things, the fact that it draws you in is an advantage.

Hands on with Office 365 – great service, some hassles

I have been trying Microsoft’s Office 365 which has recently gone into public beta, and is expected to go live later this year.

This cloud service bundles Exchange 2010, SharePoint 2010 with Office Web Apps, and Lync Server to provide a complete collaboration service for organisations who prefer not to run these servers themselves – which is understandable given their cost and complexity.

Trying the beta is a little complex when you already have a working email and collaboration infrastructure. I chose to use a virtual machine running Windows 7 Professional. I also pre-installed Office 2010 Professional in an attempt to get the best experience.

Initial sign-up is easy and I was soon online looking at the admin screen. I could also sign into Outlook Web Access and view my SharePoint site.


Hassles started when I clicked to set up desktop applications. This is done by a helper application which configures and updates Outlook, SharePoint and Lync on your desktop PC. At this point I had not configured my own domain; I was simply username@username.onmicrosoft.com.


The wizard successfully configured SharePoint and Lync, but not Outlook.


There was a “Learn more” link; but I was in a maze of twisty passages, all alike, none of which seemed to lead to the information I needed.

Part of the problem – and I have noticed this with BPOS as well – is that the style of the online help is masterful at telling you things you know already, while neglecting to tell you what you need to know. It also has a patronising style that I find infuriating, and a habit of showing you marketing videos at every opportunity.

I did eventually find instructions for configuring Outlook manually for Office 365, but they did not work. I also noticed discrepancies in the instructions. For example, this document says that the Exchange server is ch1prd0201.mailbox.outlook.com and that the proxy server for Outlook over HTTP is pod51004.outlook.com. That did not match with the server given in my online account for IMAP, POP3 and SMTP use, which was a different podnnnnn.outlook.com. I tried all sorts of combinations and none worked.

Eventually I found this comment in another help document:

Currently, the only supported scenario for configuring Outlook to work with Office 365 is a fully migrated environment.

I am not sure if this is true, but it did seem to explain my problems. Of course it would be easy for Microsoft to surface this information in a more obvious place, such as building it into the setup wizard. Anyway, I decided to go for the full Office 365 experience and to set up a domain.

Fortunately I have a domain which I obtained for a bright idea that I have yet to find time for. I added it to Office 365. This is a process which involves first adding a CNAME record to the DNS in order to prove ownership, and then making Office 365 the authoritative nameserver for the domain. I was not impressed by the process, because when Microsoft took over the nameserver role it threw away existing settings. This means that if you have a web site or blog at that domain, for example, it will disappear from the internet after the transfer. Once transferred, you can reinstate custom records.

Still, I had chosen an unused domain so that I did not care about this. I set up a new user with an email address at the new domain, and I amended the default SharePoint web site address to use the domain as well.


That all worked fine; but what about Outlook? The bad news was that the setup wizard still failed to configure Outlook, and I still did not know the correct server settings.

I could have contacted support; but I had one last try. I went into the mail applet in control panel and deleted the Outlook profile, so Outlook had no profile at all. Then I ran Outlook, went through the setup wizard, and it all worked, using autodiscover. Out of interest, I then checked the server settings that the wizard had found, which were indeed different in every case from those in the various help documents I had seen.

A few hassles then, and I am not happy with the way this stuff is documented, but nevertheless it all looks good once set up. The latest Exchange and SharePoint make a capable collaboration platform, the storage limits are generous – up to 25GB per Exchange mailbox – and I think it makes a lot of sense. I expect Microsoft’s online services to win huge amounts of business that is currently going to Small Business Server, and some business from larger organisations too. Migration from existing Microsoft-platform servers should be smooth.

The biggest disappointment so far is that in Lync online the Enterprise Voice feature is disabled. This means no general-purpose voice over IP, though you can call PC to PC. To get this you have to install Lync on-premise:

Organizations that want to leverage the full benefits of Microsoft Unified Communications can purchase and deploy Microsoft Lync Server 2010 on their premises as part of Microsoft Office 365. Lync Server 2010 on-premises delivers full enterprise voice and premises-based, dial-in audio conferencing, enabling customers to reduce costs and increase productivity by replacing or enhancing traditional PBX systems.

though it is confusing since Enterprise Voice is listed as a feature of the high-end E4 edition; I believe this implies an on-premise server alongside Office 365 in the cloud.

Perhaps the biggest question is the unknown: will Office 365 live up to its promised 99.9% scheduled uptime SLA, and how will its reliability compare to that of Google Apps?

Office 365 is priced at $10 per user per month for the basic service (E1), $16 to add Office Web Apps (E2), $24 to add licenses for Office Professional, archiving for Exchange and voicemail (E3), and $27 to add Enterprise Voice (E4). The version in beta is E3.

Trying out Remote Desktop to a Microsoft Azure virtual machine

I have been trying out Visual Studio LightSwitch, which has an option to deploy apps to Windows Azure.

Of course I wanted to try this, and after a certain amount of hassle generating certificates and switching between Visual Studio LightSwitch and the Azure management portal I succeeded.


I doubt I would have made it without this step by step guide by Andy Kung. The article begins:

One of the many features introduced in Visual Studio LightSwitch Beta 2 is the ability to publish your app directly to Windows Azure with storage in SQL Azure. We have condensed many steps one would typically have to go through to deploy an application to the cloud manually.

Somewhere between 30 and 40 screens later he writes:

The last step shows you a summary of what you’re about to publish. FINALLY! Click Publish.

We just have to imagine how many screens there would have been if Microsoft had not condensed the “many steps”. The result is also not quite right, because it uses self-signed certificates that will present security warnings when you use the app. For a product supposedly aimed at non-developers it is all hopelessly difficult; but I guess techies are used to this kind of thing.

I was not content though. First, I wanted to use an Extra Small instance, and LightSwitch defaults to a Small instance with no obvious way to change it. I cracked that one: you switch the view in Solution Explorer to the File view, then find the file ServiceDefinition.csdef and edit the vmsize attribute.

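From memory, the attribute sits on the role element that LightSwitch generates, so the edit is a one-liner along these lines; the role name below is just a placeholder and the exact element will vary by project.

    <WebRole name="MyLightSwitchApp.Web" vmsize="ExtraSmall">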

It worked and I had an Extra Small instance.

I was still not satisfied though. I wanted to use Remote Desktop so I could check out the VM Azure had created for me. I could not see any easy way to do this in the LightSwitch project, so I created another Azure project and configured it for Remote Desktop access using the guide on MSDN. More certificate fun, more passwords. I then started to publish the project, but bailed out when it warned me that I was overwriting a previous deployment.

Then I copied the likely looking parts of ServiceDefinition.csdef and ServiceConfiguration.cscfg from the standard Azure project to the LightSwitch project. In ServiceDefinition.csdef that was the Imports section and the Certificates section. In ServiceConfiguration.cscfg it was all the settings starting Microsoft.WindowsAzure.Plugins.Remote…; and again the Certificates section. I think that was it.
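
For the record, the copied sections looked roughly like the following, plus the Certificates sections which are not shown here. I am reconstructing this from memory rather than from the project files, so the module and setting names (RemoteAccess, RemoteForwarder and the Microsoft.WindowsAzure.Plugins.RemoteAccess.* settings) should be checked against the MSDN guide; the values are placeholders.

    <!-- ServiceDefinition.csdef: inside the role element -->
    <Imports>
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>

    <!-- ServiceConfiguration.cscfg: Remote Desktop settings for the role -->
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="username" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="encrypted-password" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="expiry-date" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />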

It worked. I published the LightSwitch app, went to the Azure management portal, selected the instance, and clicked Connect.


What I found was a virtual Quad-Core Opteron with 767MB RAM and running Windows Server 2008 Enterprise SP2. It seems Azure does not use Server 2008 R2 yet – at least, not for Extra Small instances.


767MB of RAM is less than I would normally consider for Server 2008 – this is Extra Small, remember – but I tried using my simple LightSwitch app and it seemed to cope OK, though memory is definitely tight.


This VM is actually not that small in relation to many Linux VMs out there, happily running Apache, PHP, MySQL and numerous web applications. Note that my Azure VM is not running SQL Server; SQL Azure runs on separate servers. I am not 100% sure why Azure does not use Server Core for VMs like this. It may be because Server Core is usually used in conjunction with GUI tools running remotely, and setting up all the permissions for this to work is a hassle.

I took a look at the Event Viewer. I have never seen a Windows event log without at least a few errors, and I was interested to see if a Microsoft-managed VM would be the first. It was not, though a mere 16 "Administrative events" is pretty good, even if the VM has only been running for an hour or so. There were a bunch of boot-start drivers which failed to load.

There was also a typical obscure and probably-unimportant-but-who-knows Windows error.

The Azure VM is not domain-joined, but is in a workgroup. It is also not activated; I presume it will become activated if I leave it running for more than 14 days.

Internet Explorer is installed but I was unable to browse the web, and attempting to ping out gave me “Request timed out”. Possibly strict firewall rules prevent this. It must be carefully balanced, since applications will need to connect out.

The DNS suffix is reddog.microsoft.com – a remnant of the Red Dog code name which was originally used for Azure.

As I understand it, the main purpose of remote desktop access is for troubleshooting, not so that you can install all sorts of extra stuff on your VM. But what if you did install all sorts of extra stuff? It would not be a good idea, since – again as I understand it – the VM could be zapped by Azure at any time, and replaced with a new one that had reverted to the original configuration. You are not meant to keep any data that matters on the VM itself; that is what the Azure storage services are for.