
How will online services impact Microsoft’s partner business?

2010 is the year Microsoft got serious about cloud services. Windows Azure opened for real business in November 2009 – OK, just before 2010 – and CEO Steve Ballmer took to telling the world how Microsoft is “all in” for cloud computing whenever he got up to speak. Office and SharePoint 2010 launched in May 2010, complete with the ability to create and edit Office documents from a web browser. Microsoft announced Office 365, essentially an upgrade of its existing BPOS suite, offering hosted Exchange, SharePoint and Lync (Office Communicator). It also announced Small Business Server 2011, including an Essentials edition, formerly codenamed “Aurora”, which is little more than Windows Home Server plus Active Directory and points small businesses towards cloud services for email and document collaboration.

I’d guess that Microsoft’s cloud conversion is driven in part by the progress Google, Salesforce.com and others have made in persuading businesses that hosted internet services make more sense than maintaining your own servers and server applications in many cases.

But what is the impact on Microsoft partners, who have been kept busy supplying and configuring servers, implementing backup, keeping systems running, and then upgrading them as they become obsolete? On the face of it they have less to do in a hosted world, and although Microsoft offers commission on the sale of online subscriptions, that might not compensate for lost business.

Then again, cloud services offer new opportunities, still need configuring, and look likely to be a source of new business for partners particularly at a time when the majority of businesses have not yet made the transition.

I’m researching a further piece on the subject and would love to hear honest views from partners such as resellers and solution providers about how Microsoft’s online services are affecting partner business now and in the future. Or maybe you think this cloud thing is overdone and it will be business as usual for a while yet. You can contact me by email – tim(at)itwriting.com – or of course comment below.

The Microsoft Azure VM role and why you might not want to use it

I’ve spent the morning talking to Microsoft’s Steve Plank – whose blog you should follow if you have an interest in Azure – about Azure roles and virtual machines, among other things.

Windows Azure applications are deployed to one of three roles, where each role is in fact a Windows Server virtual machine instance. The three roles are the web role for IIS (Internet Information Services) applications, the worker role for general applications, and newly announced at the recent PDC, the VM role, which you can configure any way you like. The normal route to deploying a VM role is to build a VM on your local system and upload it, though in future you will be able to configure and deploy a VM role entirely online.
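
For context, a worker role is essentially a .NET class that Azure hosts and calls into. Here is a minimal sketch, along the lines of the standard Visual Studio template of the time; the placeholder loop is my own:

```csharp
// Minimal worker role sketch. Azure instantiates this class inside the
// role's VM and calls the overrides; the "work" here is a placeholder.
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // One-off initialisation before the role starts handling work
        return base.OnStart();
    }

    public override void Run()
    {
        // The instance is recycled if this method returns, so loop forever
        while (true)
        {
            Thread.Sleep(10000);
            // ... pull work from a queue and process it here ...
        }
    }
}
```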

It’s obvious that the VM role is the most flexible. You will even be able to use 64-bit Windows Server 2003 if necessary. However, there is a critical distinction between the VM role and the other two. With the web and worker roles, Microsoft will patch and update the operating system for you, but with the VM role it is up to you.

That does not sound too bad, but it gets worse. To understand why, you need to think in terms of a golden image for each role, that is stored somewhere safe in Azure and gets deployed to your instance as required.

In the case of the web and worker roles, that golden image is constantly updated as the system gets patched. In addition, Microsoft takes responsibility for backing up the system state of your instance and restoring it if necessary.

In the case of the VM role, the golden image is formed by your upload and only changes if you update it.

The reason this is important is that Azure might at any time replace your running VM (whichever role it is running) with the golden image. For example, if the VM crashes, or the machine hosting it suffers a power failure, then it will be restarted from the golden image.

Now imagine that Windows Server needs an emergency patch because of a newly-discovered security issue. If you use the web or worker role, Microsoft takes responsibility for applying it. If you use the VM role, you have to make sure it is applied not only to the running VM, but also to the golden image. Otherwise, you might apply the patch, and then Azure might replace the VM with the unpatched golden image.

Therefore, to maintain a VM role properly you need to keep a local copy patched and refresh the uploaded golden image with your local copy, as well as updating the running instance. Apparently there is a differential upload, to reduce the upload time.

The same logic applies to any other changes you make to the VM. It is actually more complex than managing VMs in other scenarios, such as the Linux VM on which this blog is hosted.

Another feature which all Azure developers must understand is that you cannot safely store data on your Azure instance, whichever role it is running. Microsoft does not guarantee the safety of this data, and it might get zapped if, for example, the VM crashes and gets reverted to the golden image. You must store data in SQL Azure or Azure storage (blobs and tables) instead.
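
As an illustration, here is a minimal sketch of writing data to blob storage so that it survives an instance being reverted. It assumes the StorageClient library from the Azure SDK of the time; the container and method names are invented for illustration:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class DurableStore
{
    // Write a string to blob storage; unlike the VM's local disk, the blob
    // survives the instance being replaced by its golden image.
    static void Save(string connectionString, string blobName, string contents)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();

        // A container is roughly a folder; create it on first use
        CloudBlobContainer container = client.GetContainerReference("appdata");
        container.CreateIfNotExist();

        CloudBlob blob = container.GetBlobReference(blobName);
        blob.UploadText(contents);
    }
}
```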

The same risk also limits the extent to which you can customize the web and worker VMs. Microsoft will be allowing full administrative access to the VMs if you require it, but it is no good making extensive changes to an individual instance, since it could be reverted to the golden image. The guidance is that if manual changes take more than 5 minutes to do, you are better off using the VM role.

A further implication is that you cannot realistically use an Azure VM role to run Active Directory, since Active Directory does not take kindly to being reverted to an earlier state. Plank says that third-parties may come up with solutions that involve persisting Active Directory data to Azure storage.

Although I’ve talked about golden images above, I’m not sure exactly how Azure implements them. However, if I have understood Plank correctly, it is conceptually accurate.

The bottom line is that the best scenario is to live with a standard Azure web or worker role, as configured by you and by Azure when you created it. The VM role is a compromise that carries a significant additional administrative burden.

25 years of Windows: triumph and tragedy

I wrote a (very) short history of Windows for the Register, focusing on the launch of Windows 1.0 25 years ago.


I used Oracle VirtualBox to run Windows 1.0 under emulation since it more or less works. I found an old floppy with DOS 3.3 since Windows 1.0 does not run on DOS 6.2, the only version offered by MSDN. In the course of my experimentation I discovered that Virtual PC still supports floppy drives but no longer surfaces this in the UI. You have to use a script. Program Manager Ben Armstrong says:

Most users of Windows Virtual PC do not need to use floppy disks with their virtual machines, as general usage of floppy disks has become rarer and rarer.

An odd remark in the context of an application designed for legacy software.

What of Windows itself? Its huge success is a matter of record, but it is hard to review its history without thinking how much better it could have been. Even in version 1.0 you can see the intermingling of applications, data and system files that proved so costly later on. It is also depressing to see how mistakes in the DOS/Windows era went on to infect the NT range.

Another observation. It took Microsoft 8 years to release a replacement for DOS/Windows – Windows NT in 1993 – and another 8 years to bring Windows NT to the mainstream on desktop and server with Windows XP in 2001. It is now 9 years later; will there ever be another ground-up rewrite, or do we just get gradual improvements/bloat from now on?

I don’t count 64-bit Windows as a ground-up rewrite since it is really a port of the 32-bit version.

Finally, lest I be accused of being overly negative, it is also amazing to look at Windows 1.0, implemented in fewer than 100 files in a single directory, and Windows 7/Server 2008 R2, a platform on which you can run your entire business.

WS-I closes its doors – the end of WS-* web services?

The Web Services Interoperability Organization has announced [pdf] the “completion” of its work:

After nearly a decade of work and industry cooperation, the Web Services Interoperability Organization (WS-I; http://www.ws-i.org) has successfully concluded its charter to document best practices for Web services interoperability across multiple platforms, operating systems and programming languages.

In the wacky world of software though, completion is not a good thing when it means, as it seems to here, an end to active development. The WS-I is closing its doors and handing maintenance of the WS interoperability profiles to OASIS:

Stewardship over WS-I’s assets, operations and mission will transition to OASIS (Organization for the Advancement of Structured Information Standards), a group of technology vendors and customers that drive development and adoption of open standards.

Simon Phipps blogs about the passing of WS-I and concludes:

Fine work, and many lessons learned, but sadly irrelevant to most of us. Goodbye, WS-I. I know and respect many of your participants, but I won’t mourn your passing.

Phipps worked for Sun when the WS-* activity was at its height and WS-I was set up, and describes its formation thus:

Formed in the name of "preventing lock-in" mainly as a competitive action by IBM and Microsoft in the midst of unseemly political knife-play with Sun, they went on to create massively complex layered specifications for conducting transactions across the Internet. Sadly, that was the last thing the Internet really needed.

However, Phipps links to this post by Mike Champion at Microsoft which represents a more nuanced view:

It might be tempting to believe that the lessons of the WS-I experience apply only to the Web Services standards stack, and not the REST and Cloud technologies that have gained so much mindshare in the last few years. Please think again: First, the WS-* standards have not in any sense gone away, they’ve been built deep into the infrastructure of many enterprise middleware products from both commercial vendors and open source projects. Likewise, the challenges of WS-I had much more to do with the intrinsic complexity of the problems it addressed than with the WS-* technologies that addressed them. William Vambenepe made this point succinctly in his blog recently.

It is also important to distinguish between the work of the WS-I, which was about creating profiles and testing tools for web service standards, and the work of other groups such as the W3C and OASIS which specify the standards themselves. While work on the WS-* specifications seems much reduced, there is still work going on. See for example the W3C’s Web Services Resource Access Working Group.

I partly disagree with Phipps about the work of the WS-I being “sadly irrelevant to most of us”. It depends who he means by “most of us”. Granted, all this stuff is meaningless to the world at large; but there are a significant number of developers who use SOAP and WS-* at least to some extent, and interoperability is key to the usefulness of those standards.

The Salesforce.com API is mainly SOAP based, for example, and although there is a REST API in preview it is not yet supported for production use. I have been told that a large proportion of the transactions on Salesforce.com are made programmatically through the API, so here is one place at least where SOAP is heavily used.

WS-* web services are also built into Microsoft’s Visual Studio and .NET Framework, and are widely used in my experience. Visual Studio does a good job of wrapping them so that developers do not have to edit WSDL or SOAP requests and responses by hand. I’d also suggest that web services in .NET are more robust than DCOM (Distributed COM) ever was, and work successfully over the internet as well as on a local network, so the technology is not a failure.
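
As a flavour of what that wrapping looks like, here is a minimal WCF sketch – names such as IStockQuote are invented for illustration – in which the developer writes a .NET contract and the SOAP and WSDL plumbing is generated:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IStockQuote
{
    [OperationContract]
    decimal GetQuote(string symbol);
}

public class StockQuoteService : IStockQuote
{
    public decimal GetQuote(string symbol)
    {
        return 42.0m; // placeholder: a real service would look the price up
    }
}

class Host
{
    static void Main()
    {
        // WSHttpBinding speaks the WS-* protocols (WS-Security, WS-Addressing
        // and so on); clients consume the generated WSDL with
        // "Add Service Reference" rather than hand-editing SOAP.
        using (var host = new ServiceHost(typeof(StockQuoteService),
            new Uri("http://localhost:8080/quotes")))
        {
            host.AddServiceEndpoint(typeof(IStockQuote), new WSHttpBinding(), "");
            host.Open();
            Console.WriteLine("Service running; press Enter to stop");
            Console.ReadLine();
        }
    }
}
```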

That said, I am sure it is true that only a small subset of the WS-* specifications are widely used, which implies a large amount of wasted effort.

Are SOAP and WS-* dying, and is REST the future? The evidence points that way to me, but I would be interested in other opinions.

The Java crisis and what it means for developers

What is happening with the Java language and runtime? Since Java passed into the hands of Oracle, following its acquisition of Sun, there has been a succession of bad news. To recap:

  • The JavaOne conference in September 2010 was held in the shadow of Oracle OpenWorld, making it a less significant event than in previous years.
  • Oracle is suing Google, claiming that Java as used in the Android SDK breaches its copyright.
  • IBM has abandoned the Apache open source Harmony project and is committing to the Oracle-supported OpenJDK. Although IBM’s Bob Sutor claims that this move will “help unify open source Java efforts”, it seems to have been done without consultation with Apache and is as much divisive as unifying.
  • Apple is deprecating Java and ceasing to develop a Mac-specific JVM. This should be seen in context. Apple is averse to runtimes of any kind – note its war against Adobe Flash – and seems to look forward to a day when all or most applications delivered to Apple devices come via the Apple-curated and taxed app store. In mitigation, Apple is cooperating with the OpenJDK and OpenJDK for Mac OS X has been announced.
  • Apache has written a strongly-worded blog post claiming that Oracle is “violating their contractual obligation as set forth under the rules of the JCP”, where JCP is the Java Community Process, a multi-vendor group responsible for the Java specification but in which Oracle/Sun has special powers of veto. Apache’s complaint is that Oracle stymies the progress of Harmony by refusing to supply the test kit for Java (TCK) under a free software license. Without the test kit, Harmony’s Java conformance cannot be officially verified.
  • The JCP has been unhappy with Oracle’s handling of Java for some time. Many members disagree with the Google litigation and feel that Oracle has not communicated well with the JCP. JCP member Doug Lea stood down, claiming that “the JCP is no longer a credible specification and standards body”. Another member, Stephen Colebourne, has a series of blog posts in which he discusses the great war of Java and what he calls the “unravelling of the JCP”, and recently expressed his view that Oracle was trying to manipulate the recent JCP elections.

To set this bad news in context, Java was not really in a good way even before the acquisition. While Sun was more friendly towards open source and collaboration, the JCP has long been perceived as too slow to evolve Java, and unrepresentative of the wider Java community. Further, Java’s pre-eminence as a pervasive cross-platform runtime has been reduced. As a browser plug-in it has fallen behind Adobe Flash, the JavaFX initiative failed to win wide developer support, and on mobile it has also lost ground. Java’s advance as a language has been too slow to keep up with Microsoft’s C#.

There are a couple of ways to look at this.

One is to argue that bad news followed by more bad news means Java will become a kind of COBOL, widely used forever but not at the cutting edge of anything.

The other is to argue that since Java was already falling behind, radical change to the way it is managed may actually improve matters.

Mike Milinkovich at the Eclipse Foundation takes a pragmatic view in a recent post. He concedes that Oracle has no idea how to communicate with the Java community, and that the JCP is not vendor-neutral, but says that Java can nevertheless flourish:

I believe that many people are confusing the JCP’s vendor neutrality with its effectiveness as a specifications organization. The JCP has never and will never be a vendor-neutral organization (a la Apache and Eclipse), and anyone who thought it so was fooling themselves. But it has been effective, and I believe that it will be effective again.

It seems to me Java will be managed differently after it emerges from its crisis, and that on the scale between “open” and “proprietary” it will have moved towards proprietary but not in a way that destroys the basic Java proposition of a free development kit and runtime. It is also possible, even likely, that Java language and technology will advance more rapidly than before.

For developers wondering what will happen to Java at a technical level, the best guide currently is still the JDK Roadmap, published in September. Some of its key points:

  • The open source OpenJDK is the basis for the Oracle JDK.
  • The Oracle JDK and Java Runtime Environment (JRE) will continue to be available as free downloads, with no changes to the existing licensing models.
  • New features proposed for JDK 7 include better support for dynamic languages and concurrent programming. JDK 8 will get lambda expressions.

While I cannot predict the outcome of Oracle vs Google or even Apache vs Oracle, my guess is that there will be a settlement and that Android’s momentum will not be disrupted.

That said, there is little evidence that Oracle has the vision that Sun once had, to make Java truly pervasive and a defence against lock-in to proprietary operating systems. Microsoft seems to have lost that vision for .NET and Silverlight as well – though the Mono folk have it. Adobe still has it for Flash, though like Oracle it seems if anything to be retreating from open source.

There is therefore some sense in which the problems facing Java (and Silverlight) are good for .NET, for Mono and for Adobe. Nevertheless, 2010 has been a bad year for write once – run anywhere.

Update: Oracle has posted a statement saying:

The recently released statement by the ASF Board with regard to their participation in the JCP calling for EC members to vote against SE7 is a call for continued delay and stagnation of the past several years. We would encourage Apache to reconsider their position and work together with Oracle and the community at large to collectively move Java forward. Oracle provides TCK licenses under fair, reasonable, and non-discriminatory terms consistent with its obligations under the JSPA. Oracle believes that with EC approval to initiate the SE7 and SE8 JSRs, the Java community can get on with the important work of driving forward Java SE and other standards in open, transparent, consensus-driven expert groups. This is the priority. Now is the time for positive action. Now is the time to move Java forward.

to which Apache replies succinctly:

The ball is in your court. Honor the agreement.

First impressions of Microsoft Kinect – great hardware waiting for great software

The moment of magic comes when someone walks through the gaming area and Xbox flashes up the message that they have signed in. No button was pressed; this was face recognition working in the background during gameplay.

So Kinect is amazing. And it is amazing: controller-less video gaming that works well enough to be a lot of fun. That said – and these are first impressions only – I have reservations about the device, and feel it is let down in a big way by the games currently available.

My device arrived on the UK launch day, November 10th. It is a relatively compact affair, around 28 cm wide on a stubby stand. The first task is positioning it, which can be a challenge. You are meant to place it above or below your TV screen, at a height of between 0.6m and 1.8m. I was lucky, in that our TV is on a stand that has space for it; the height is fractionally below 0.6m but it seems to be happy. Alternatively, you can purchase a free-standing support or a bracket that clips to the top of a TV. I imagine there are some frustrated first-day purchasers who received a device but cannot satisfactorily position it.

You also need free space in front of the set. Our coffee table got moved when the Nintendo Wii arrived, so the 6ft required for one-player play is not a problem.  Two-player is more difficult; we can do it but it means moving furniture, which is a nuisance. Overall it is more intrusive than the Wii, but less than Rock Band or Guitar Hero with the drum kit, so not a deal-breaker.

Microsoft takes full advantage of over-the-wire updates with Kinect. After I connected it, the Xbox, the device firmware, and the bundled Kinect Adventures game all received patches; but the procedure went smoothly.

Kinect is a sophisticated device, a lot more than just a camera. There are three major subsystems in Kinect: optical, audio and motor.

  • Motor is the simplest – the stubby stand also contains a motor assembly that swivels the device up and down, letting it adjust for different mounting positions and find the optimal angle for players of different heights.
  • The optical subsystem includes two cameras and an infra-red projector. The projector overlays a pattern on the field of view. This allows the first camera, a depth sensor, to map the position of the players in three dimensions. This lets the system detect hand movements, for example, which are usually closer to the camera than the rest of the body. The second camera is a colour device more like the one in your webcam, and enables Kinect to take pictures of your gaming antics which you can share with the world if you feel so inclined, as well as presumably feeding into the positioning system.
  • The audio subsystem includes no fewer than four microphones. The reason is that Kinect does voice recognition at a distance, so it needs to be able to compensate for both the sounds of the video game and other background noise. Using multiple microphones enables the audio processor to calculate the position of sounds, since each microphone receives a sound at a fractionally different time – the sketch after this list shows the idea.
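
To make the arrival-time idea concrete, here is a minimal sketch – my own illustration, not Kinect’s actual algorithm – of computing the bearing of a sound source from the delay between two microphones:

```csharp
using System;

class SoundBearing
{
    const double SpeedOfSound = 343.0; // metres per second, in air at ~20°C

    // micSpacing: distance between the two microphones in metres
    // delaySeconds: how much later the sound reached the far microphone
    static double BearingDegrees(double micSpacing, double delaySeconds)
    {
        // The extra path length to the far microphone is c * Δt;
        // dividing by the spacing gives the sine of the bearing angle.
        double sine = (SpeedOfSound * delaySeconds) / micSpacing;
        if (Math.Abs(sine) > 1.0)
            throw new ArgumentException("Delay too large for this spacing");
        return Math.Asin(sine) * 180.0 / Math.PI;
    }

    static void Main()
    {
        // Mics 0.2m apart; sound arrives 0.25ms later at one of them
        Console.WriteLine(BearingDegrees(0.2, 0.00025)); // roughly 25 degrees
    }
}
```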

These sensor systems are backed by considerable processing power – necessary because the Xbox itself devotes most of its processing to the game being played. The trade-off in systems like this is that more processing means more accurate interpretation of voice and gestures, but taking too much time introduces lag. As I saw at the NVIDIA GPU conference in September – see here and here for posts – very rapid processing enables magic like robotic pinhole surgery on a beating heart – and like Kinect, that magic is based on real-time interpretation of physical movement. Kinect is not at that level, but has audio and image processor chips and 512MB RAM, along with other components including, for some reason, an accelerometer, mounted on three circuit boards squashed into the slim plastic container. See for yourself in the ifixit teardown.

But how is it in practice? It certainly works, and we had a good and energetic time playing Kinect Adventures and a little bit of Joy Ride. Playing without a controller is a liberating experience. That said, there were some annoyances:

  • Kinect play is more vulnerable to interference than controller gaming. If someone walks across the play area, for example, it will interfere.
  • In the Kinect system, there is no such thing as a click. Therefore, to activate an option you have to hover over it for a short period while a progress circle fills; when the circle is full, the system decides that you have “clicked”. It is slower and less reliable than clicking a button – see the sketch after this list.
  • The audio system enables voice control which seems to work well when available, but most of the time it seems not to be available. Considering the amount of hardware dedicated to this, it seems rather a waste; but presumably more is to come. Controlling Sky player by voice, for example, would be great; no more hunting for the remote.
  • The Kinect seems to work best when you are standing. For something like a driving game, that is not what you want. Apparently seated gameplay is supported, but does not work properly with the launch games; so watch this space.
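
Here is a sketch of that dwell-based “clicking” – my own illustration of the concept, not code from the Kinect system itself:

```csharp
using System;

// Hovering over a target fills a progress value; when it reaches 1.0 the
// system treats it as a click. Timings and names are assumptions.
class DwellButton
{
    const double DwellSeconds = 2.0;  // how long the hand must hover
    double progress;                  // 0.0 = empty circle, 1.0 = activated

    public event Action Clicked;

    // Call once per frame with the elapsed time and whether the tracked
    // hand position is currently over this button.
    public void Update(double deltaSeconds, bool handOverButton)
    {
        if (!handOverButton)
        {
            progress = 0.0;           // moving away resets the circle
            return;
        }
        progress += deltaSeconds / DwellSeconds;
        if (progress >= 1.0)
        {
            progress = 0.0;
            if (Clicked != null) Clicked();  // the "click" finally registers
        }
    }
}
```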

Launching stuff before it is really ready seems to be ingrained in Microsoft’s culture. Is Kinect another example? To some extent I suspect it is. I recall the early days with the Nintendo Wii as exciting moments of discovery: the system worked well from the get-go, and the bundled Wii Sports game is a masterpiece. The Kinect games so far are less impressive.

In fact, my overwhelming impression so far is that this is great hardware waiting for software to show what it can do. The 20,000 Leaks mini-game in Adventures is not very good – you are in a glass cage underwater and have to cover leaks to stem them – but it is interesting because you have to use head, hands and feet to play it. It could not be duplicated with a conventional controller, because a conventional controller does not allow you to move one thing this way, and another thing that way, at the same time.

It follows that Kinect should enable some brilliant new gaming concepts. I’d love to see a stealth adventure done for Kinect, for example; there are new possibilities for realism and excitement.

As it is, the Kinect launch games show little imagination and seem to be heavily Wii-influenced – and if you compare Kinect with Wii on that basis, you might well conclude that the Wii is better in some ways, worse in others, but cheaper and with better games, and without the friction of Kinect’s somewhat fussy requirements.

Such a comparison is not fair to Kinect, which in concept and hardware is a generation ahead of Wii or PlayStation Move. It now awaits software to take advantage.

The cloud permeates Microsoft’s business more than we may realise

I’m in the habit of summarising Microsoft’s financial results in a simple table. Here is how it looks for the recently announced figures.

Quarter ending September 30 2010 vs quarter ending September 30 2009, $millions

Segment                      Revenue   Change   Profit   Change
Client (Windows + Live)         4785     1905     3323     1840
Server and Tools                3959      409     1630      393
Online                           527       40     -560      -83
Business (Office)               5126      612     3388      561
Entertainment and devices       1795      383      382      122

The Windows figures are excellent, mostly reflecting Microsoft’s success in delivering a successor to Windows XP that is good enough to drive upgrades.

I’m more impressed, though, with the Server and Tools performance – which I assume is mostly Server – while noting that it now includes Windows Azure. Microsoft does not break out the Azure figures, but said that Azure grew 40% over the previous quarter; not especially impressive given that Azure has not been out long and will have grown from a small base.

The Office figures, also good, include SharePoint, Exchange and BPOS (Business Productivity Online Suite), which is to become Office 365. Microsoft reported a “tripled number of business customers using cloud services.”

Online, essentially the search and advertising business, is poor as ever, though Microsoft says Bing gained market share in the USA. Entertainment and devices grew despite poor sales for Windows Mobile, caught between the decline of the old mobile OS and the launch of Windows Phone 7.

What can we conclude about the health of the company? The simple fact is that despite Apple, Google, and mis-steps in Windows, Mobile, and online, Microsoft is still a powerful money-making machine and performing well in many parts of its business. The company actually does a poor job of communicating its achievements in my experience. For example, the rather dull keynote from TechEd Berlin yesterday.

Of course Microsoft’s business is still largely dependent on an on-premise software model that many of us feel will inevitably decline. Still, my other reflection on these figures is that the cloud permeates Microsoft’s business more than a casual glance reveals.

The “Online” business is mainly Bing and advertising as far as I can tell; and despite CTO Ray Ozzie telling us back in 2005 of the importance of services financed by advertising, that business revolution has not come to pass as he imagined. I assume that Windows Live is no more successful than Online.

What is more important is that we are seeing Server and tools growing Azure and cloud-hosted virtualisation business, and Office growing hosted Exchange and SharePoint business. I’d expect both businesses to continue to grow, as Microsoft finally starts helping both itself and its customers with cloud migration.

That said, since the hosted business is not separated from the on-premise business, and since some is in the hands of partners, it is hard to judge its real significance.

Adobe MAX 2010 – it’s all about the partners

Last week was all conferences – Adobe MAX 2010 followed by Microsoft PDC – which left me with plenty of input but too little time to write it up. It is not too late though; and one advantage of attending these two events back-to-back was to highlight the tale of two runtimes, Adobe Flash and Microsoft Silverlight. MAX was a good event for Flash, and PDC a bad one for Silverlight, though the tale has a long way yet to run.

The key difference at this point is not technical, but all about partners. At MAX we saw how the Flash runtime is integral to the Blackberry PlayBook, with RIM founder Mike Lazaridis coming on stage to tell us so. Flash is also built into Google TV, and Andres Ferrate and Daniels Lee from Google Developer Relations presented a session on creating web apps for the platform – worth watching as it brings out the difference between developing for a TV “lean back” environment and traditional mouse or touch user interfaces – and we also heard from Samsung about its Flash-enabled TVs coming in 2011. In each case, it is not just Flash but AIR, for applications that run outside the browser, which is supported. Google TV runs Android; and AIR for Android in general drew attention at MAX, encouraged by free Motorola Droid 2 smartphones handed out to attendees.

If the task was to convince Flash developers – and those on the fence – that the platform has a future, MAX delivered in spades; and Adobe can only benefit from the uncertainty surrounding the most obvious runtime rivals to Flash, Java and Silverlight.

But what about that other platform, HTML? Well, Adobe made a bit of noise about projects like EDGE, which exports animations and transitions to SVG and JavaScript using an extended jQuery library, as well as showing a “sneak peek” of a tool to export a Flash animation (but not application) to HTML. Outside the Adobe fan club there is still considerable aversion to Flash, stoked by Apple; in one of the sessions at MAX we were told that Steve Jobs’ open memo Thoughts on Flash has done real damage.

My impression though is that Adobe still has a Flash-first philosophy. The Solution Accelerators announced for LiveCycle 2.5, for example, all seem to be based on Flash clients, which could prove difficult if Apple’s iPad continues to take off in the enterprise. Adobe could do more to provide JavaScript libraries for LiveCycle clients, and tools for creating HTML applications. If you came to MAX looking for evidence that Adobe is moving towards web standard HTML clients, you would have been largely disappointed; though seeing jQuery creator John Resig in the day two keynote would give you some comfort.

Some other MAX highlights:

  • Round-tripping between Catalyst and Flash Builder at last. This makes Catalyst more useful, though I still find myself thinking that the Catalyst features could be rolled into one of the other products, either as a designer personality for Flash Builder, or maybe in Flash Professional. The former would be easier as both Catalyst and Flash Builder are built on Eclipse.
  • Enhancements in the Flash Player – I am writing a separate piece on this, but it is great to see the 3D extensions codenamed Molehill, which together with game controller support lay the foundations for Flash games that compete more closely with console games.
  • Analytics – Adobe’s acquisition of Omniture a year ago was a far-sighted move, and the company talked about analytics in the context of applications as well as web sites. Despite unsettling privacy implications, the ability for developers to drill down into exactly how an application is used, and which parts are hardly used, has great potential for improving usability.
  • Digital publishing – it was fascinating to hear from publisher Condé Nast about its plans for digital publishing, using Adobe’s Digital Publishing Suite to create files targeting Adobe’s content viewer on iOS and eventually AIR. As a web enthusiast I have mixed feelings, and there was some foot-shuffling when I asked about SEO (Search Engine Optimisation); but as someone with a professional interest in a flourishing media industry I also hope this becomes a solid and profitable platform.

Disappointments? I was sorry to hear that Adobe is closing down contributions and reducing transparency in the open source Flex SDK, though it is said to be temporary. It also seems that plans to enhance ActionScript are not well advanced; Silverlight remains well ahead in this respect with its C# and .NET support.

What about Adobe’s enterprise ambitions? Klint Finley’s post on the Adobe Stack and what it means for Enterprise Development is a good read. The pieces are almost in place, but the focus on document processing at the back end, and Flash and Acrobat on the front end, makes this a specialist rather than a generic application platform.

Overall though it was a strong MAX. I appreciate Adobe for not being Google or Apple or Microsoft or IBM, and hope that takeover rumours remain as rumours.

See also my earlier post Adobe aims to fill mobile vacuum with AIR.

Microsoft pledges commitment to Silverlight – but is it enough?

Microsoft’s president of Server and Tools Bob Muglia has posted a response to the widespread perception that the company is backing off its commitment to Silverlight, a cross-browser, cross-platform runtime for rich internet applications. He is the right person to do so, since it was his remark that “Our strategy with Silverlight has shifted” which seemed to confirm a strategy change that had already been implied by the strong focus in the keynote on HTML 5 as an application platform.

Muglia says Silverlight is in fact “very important and strategic to Microsoft”. He confirms that a new release is in development, notes that Silverlight is the development platform for Windows Phone 7, and affirms Silverlight both as a media client and as “the richest way to build web-delivered client apps.”

So what is the strategy change? It is this:

When we started Silverlight, the number of unique/different Internet-connected devices in the world was relatively small, and our goal was to provide the most consistent, richest experience across those devices.  But the world has changed.  As a result, getting a single runtime implementation installed on every potential device is practically impossible.  We think HTML will provide the broadest, cross-platform reach across all these devices.  At Microsoft, we’re committed to building the world’s best implementation of HTML 5 for devices running Windows, and at the PDC, we showed the great progress we’re making on this with IE 9.

The key problem here is Apple’s iOS, which Muglia mentioned specifically in his earlier interview:

HTML is the only true cross platform solution for everything, including (Apple’s) iOS platform.

Muglia’s words are somewhat reassuring to Silverlight developers; but not, I think, all that much. Silverlight will continue on Windows, Mac and on Windows Phone; but there are many more devices which developers want to target, and it sounds as if Microsoft does not intend to broaden Silverlight’s reach.

Faced with the same issues, Adobe has brought Flash to device platforms including Android, MeeGo, Blackberry and Google TV; and come up with a packager that compiles Flash applications to native iOS code. There is still no Flash or AIR (out of browser Flash) on Apple iOS; but Adobe has done everything possible to make Flash a broad cross-platform runtime.

Microsoft by contrast has not really entered the fight. It has been left to Novell’s Mono team to show what can be done with cross-platform .NET, including MonoTouch for iOS and MonoDroid for Google’s Android platform.

Microsoft could have done more to bring Silverlight to further platforms, but has chosen instead to focus on HTML 5 – just as Muglia said in his earlier interview.

Whether Microsoft is right or wrong in this is a matter for debate. From what I have seen, Microsoft’s de-emphasis of Silverlight at PDC has worried .NET developers, but has been mostly cheered elsewhere.

The problem is that HTML 5 is not ready, nor is it capable of everything that can be done in Silverlight or Flash. There is a gap to be filled; and it looks as if Microsoft is leaving that task to Adobe.

It does seem to me inevitable that if Microsoft really gets behind HTML 5, by supporting it with tools and libraries to make it a strong and productive client for Microsoft’s server applications, then Silverlight will slip further behind.

Reflections on Microsoft PDC 2010

I’m in Seattle airport waiting to head home – so here are some quick reflections on Microsoft’s Professional Developers Conference 2010.

Let’s start with the content. There was a clear focus on two things: Windows Azure, and Windows Phone 7.

On Windows Azure, its cloud platform, Microsoft impressed. Features are being added rapidly, and it looks solid and interesting. The announcements at PDC mean that Azure provides pretty much the complete Windows Server platform, should you want it. You will get elevated privileges for complete control over a server instance; and full IIS functionality including support for multiple web sites and the ability to install modules. You will also be able to remote desktop into your Azure servers, which is going to make Windows admins feel more comfortable with Azure.

The new virtual machine role is also a big deal, even though in some ways it goes against the multi-tenanted philosophy by leaving the customer responsible for patches and updates. Businesses with existing virtual servers can simply move them to Azure if they no longer wish to run their own hardware. There are also existing tools for migrating physical servers to virtual.

I asked Bob Muglia, president of server and tools at Microsoft, whether having all these VMs maintained by customers and potentially compromised with malware posed a security threat to the platform. He assured me that they are fully isolated, and that the main danger is to the customer who might consume unexpected amounts of bandwidth.

Simply running on an Azure VM does not take full advantage of the platform though. It makes more sense to hook into Azure services such as SQL Azure, or the non-relational storage services, and deploy to Azure web or worker roles where Microsoft takes care of maintenance. There is also a range of middleware services called AppFabric; see here for a few notes on these.

If there was one gap in the Azure story at PDC, it was a lack of partner announcements. Microsoft says there are more than 20,000 applications running on Azure, but we did not hear much about them, or about notable large customers embracing Azure. There is still a lot of resistance to the cloud among customers. I asked some attendees at lunch whether they expect to use Azure; the answer was “no, we have our own datacenter”.

I think the partner announcements will come. Microsoft is firmly behind Azure now, and it makes sense for its customers. I expect Azure to succeed; but whether it will do well enough to counter-balance the cost to Microsoft of migration away from on-premise servers is an open question.

Alongside Azure, though hardly mentioned at PDC, is the hosted application business originally called BPOS and now called Office 365. This is not currently hosted on Azure, though Muglia told me that most of it will in time move there. There are some potential synergies here, for example in Azure workflow applications that handle SharePoint forms or documents.

Microsoft’s business is primarily based on partners selling Windows hardware and licenses for on-premise or client software. Another open question is how easily the company can re-orient itself to be a cloud platform and services company. It is a massive shift.

What about Windows Phone? Microsoft has some problems here, and they are not primarily to do with the phone itself, which is decent. There are a few issues over the design of the launch devices, and features that are lacking initially. Further, while the Silverlight and XNA SDK forms a strong development platform, there is a need for a native code SDK and I expect this will follow at some point.

The key issue though is that outside the Microsoft bubble there is not much interest in the phone. Google Android meets the needs of the OEM hardware and operator partners, being open and easily customised. Apple owns the market for high-end devices with the design quality and ease of use that comes from single-vendor control of the whole stack. The momentum behind these platforms is such that it will not be easy for Microsoft to grab much market share, or attention from third-party app developers. It deserves to do well; but I will not be surprised if it under-performs relative to its quality.

There was also some good material to be found on the PDC sidelines, as it were. Anders Hejlsberg presented on new asynchronous features coming in C# 5.0, which look like a breakthrough in making concurrent programming safer and easier. He also showed a bit of Microsoft’s work on compiler as a service, which has huge potential. Patrick Smacchia has an enthusiastic report on the C# presentation. Herb Sutter gave a brilliant talk on lambdas.
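
To give a flavour of the async style Hejlsberg described, here is a minimal sketch of the await pattern; the URL and method names are illustrative, and I use later .NET APIs for brevity rather than the CTP-era helpers from the demo:

```csharp
using System;
using System.Net;
using System.Threading.Tasks;

class AsyncSketch
{
    // Download a page without blocking the calling thread: "await" yields
    // control while the I/O completes, and the compiler rewrites the rest
    // of the method as a continuation – no explicit callbacks.
    static async Task<int> PageLengthAsync(string url)
    {
        using (var client = new WebClient())
        {
            // Execution returns to the caller at this point; the code
            // below runs when the download finishes.
            string body = await client.DownloadStringTaskAsync(url);
            return body.Length;
        }
    }

    static void Main()
    {
        Console.WriteLine(PageLengthAsync("http://example.com").Result);
    }
}
```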

The PDC site lets you stream pretty much all the sessions and seems to work very well. The player application is written in Silverlight. Note that there are twice as many sessions as appear in the schedule, since many were pre-recorded and only show in the full session list.

Why did Microsoft run such a small event, with only around 1000 attendees? I asked a couple of people about this; the answer seems to be partly as a cost-saving measure – it is much cheaper to run an event on the Microsoft campus than to hire an external venue and pay transport and expenses for all the speakers and staff – and partly to emphasise the virtual aspect of PDC, with a global audience tuning in.

This does not altogether make sense to me. Microsoft is still generating a ton of cash, as we heard in the earnings call at the event, and PDC is a key opportunity to market its platform to developers and influencers, so it should not worry too much about the cost. Second, you can do virtual as well as physical; they are not alternatives. You get more engagement from people who are actually present.

One of the features of the player is that you see how many are currently streaming the content. I tuned into Mark Russinovich’s excellent session on Azure – he says he has “drunk the cloud kool-aid” – while it was being streamed live, and was surprised to see only around 300 virtual attendees. If that figure is accurate, it is disappointing, though I am sure there will be thousands of further views after the event.

Finally, what about all the IE9/HTML 5 vs Silverlight discussion generated at PDC? Clearly Microsoft’s messaging went badly awry here, and frankly the company has only itself to blame. It cannot be surprised if, after making a huge noise about how IE9 forms a great client for web applications, standards-based and integrated with Windows, people question what sort of role is envisaged for Silverlight. It did not help that a planned session on Silverlight futures was apparently cancelled, probably for innocent reasons such as not being quite ready to show, but increasing speculation that Silverlight is now getting downplayed.

Microsoft chose to say nothing on the subject, other than some remarks by Bob Muglia to freelance journalist Mary Jo Foley which seem to confirm that yes, Silverlight is no longer Microsoft’s key technology for cross-platform web applications.

If that was not quite the message Microsoft intended, then why not clarify the matter to press, myself included, as we sat in the press room on Microsoft’s campus?

My take is that while Silverlight is by no means dead, it seems destined for a lesser role than was once envisaged – a shame, as it is an excellent cross-platform .NET client.