Category Archives: software

Apple’s Mac App Store – and the forgotten Windows Marketplace

Apple launched the Mac App Store yesterday and I had a look this morning. It is only available to users of Mac OS X Snow Leopard, where it comes with the latest system update.

[screenshot: the Mac App Store]

It is interesting that Apple has not used iTunes for the App Store, but has developed new client software. Maybe it is coming round to the opinion that iTunes has become bloated; it is only for historical reasons that a music player has become an all-purpose app installer.

The store itself worked well for me. I picked a free app, TextWrangler, and signed in with my Apple ID. The UI showed Installing, then Installed, and I was done.

[screenshot: TextWrangler installing from the Mac App Store]

The TextWrangler icon appeared in the Dock so I could start the app easily.

What counts is what I did not have to do – reboot, select from setup options, or deal with perplexing error messages.

Users will also like the common-sense licensing, which lets you download and install a purchased app on any Mac you use, controlled by your App Store log-in. I am not sure what happens if you install your app on your friend’s Mac, then sign out of the App Store. There is some link between the app and your Apple ID, because if you copy the application to another Mac it will ask for your sign-in details when you first run it, but I am not clear whether this is checked on every run to deter piracy.

Most important, there is an attractive range of apps at good prices. In the UK, Angry Birds is £2.99, Pinball HD £1.79, and Apple Pages or Keynote £11.99 each. That is less than typical Apple Store shrink-wrap prices. The prices for Pages and Keynote make the price Microsoft charges for Office look impossibly expensive. Good for customers; but worrying for independent software vendors who want to make a living.

Developers pay $99.00 per year to join the Mac Developer Program and then 30% commission to Apple on every sale. And, as with the iPhone App Store, apps are subject to Apple’s approval.

Lest you think it is clever of Apple to invent an app store for the desktop, it is worth noting that the concept is an old one. Linux has delivered free software like this for years, and some distributions have also featured paid app installers integrated into the OS.

So has Microsoft, which has run various incarnations of Windows Marketplace over the years, for mobile and desktop applications. Windows Vista shipped with a built-in app store for both Microsoft and third-party apps. It was on the Start menu:

[screenshot: Windows Marketplace on the Vista Start menu]

as well as in Control Panel:

[screenshot: Windows Marketplace in Control Panel]

On November 1st 2008 Microsoft shut down Windows Marketplace and “transitioned” it to a referral site. There was some angst at the time about the closing of the digital locker, which proved insecure against the threat of corporate mind-changing. It still runs the online Microsoft Store, but this is for Microsoft-only products. For example, you can download Microsoft Songsmith for £25.00:

[screenshot: Microsoft Songsmith in the online Microsoft Store]

Why did Windows Marketplace fail? Well, the user experience was poor, the store was insufficiently prominent in the Vista user interface, and setup could be troublesome. Major Windows app vendors figured out that they would be better off drawing potential customers to their own web sites, where they have full control. As is often the case, Microsoft was conflicted over whether it wanted to drive customers to the online store, to partner retailers, or to app vendor sites; and the OEMs would have their say as well, when customising Windows for their own PCs.

Another factor is that Windows apps are often not well isolated. Silverlight actually solves this problem – out-of-browser apps are well isolated and secure – but Microsoft does not even ship Silverlight by default with Windows.

The indications are that Microsoft will have another go in Windows 8. Documents leaked last year show an app store. From my post at the time:

There’s a pattern here. Microsoft gets bright idea – Tablet, Windows Marketplace, Passport. Does half-baked implementation which flops. Apple or Google works out how to do it right. Microsoft copies them.

Adobe declares glittering results as CEO says Apple’s Flash ban has no impact on its revenue

Adobe has proudly declared its first billion-dollar quarter: $1,008m in the quarter ending Dec 3 2010, versus $757.3m in the same quarter of 2009.

I am not a financial analyst, but a few things leap out from the figures. The first is that Omniture, the analytics company Adobe acquired at the end of 2009, is doing well and contributing significantly to Adobe’s revenue – $98.4m in Q4 2010. The billion-dollar quarter would not have happened without it. The second is that Creative Suite 5 is selling well – better than Creative Suite 4.

Creative Suite 4 was released in October 2008, and Creative Suite 5 in April 2010. The comparison is not perfect, but the following table sets the Creative Solutions segment (mainly Creative Suite) revenue for the two releases side by side, quarter by quarter from their respective release dates, in $ millions:

Quarters after release   1st     2nd     3rd     4th     5th     6th
Creative Suite 4         508.7   460.7   411.7   400.4   429.3   432.0
Creative Suite 5         532.7   549.7   542.1   –       –       –

CS4 drops off noticeably following an initial surge, whereas CS5 has kept on selling. It is a good product and a de-facto industry standard, but not every user is persuaded to upgrade every time a new release appears. My guess is that things like better 64-bit support – which makes a huge difference in the production tools – and new tricks in Photoshop have been successful in driving upgrades to CS5. Further, the explosion of premium mobile devices led by Apple’s iPhone and iPad has not been bad for Adobe, despite Apple CEO Steve Jobs doing his best to put down Flash. Publishers creating media for the iPad, for example, will most likely use Adobe’s tools to do so. CEO Shantanu Narayen said in the earnings call, “We have not seen any impact on our revenue from Apple’s choice [to not support Flash]”, though I am sure he would make a big deal of it if Apple were to change its mind.

Before getting too carried away though, I note that Creative Suite 3, released in March 2007, did just as well as CS5. Here are the figures:

Quarters after release   1st     2nd     3rd     4th     5th     6th
Creative Suite 3         436.6   545.5   570.5   543.5   527.2   493.6

In fact, Q4 2007 at $570.5 m is still a record for Adobe’s Creative Solutions segment. So maybe CS4 was an unfortunate blip. Then again, not quite all the revenue in Creative Solutions is the suite; it also includes Flash Platform services such as media streaming. Further, the economy looked rosier in 2007.

Here is the quarter vs quarter comparison across the whole company, in $ millions:

Segment                Q4 2009   Q4 2010
Creative Solutions     429.3     542.1
Digital Enterprise     211.8     274.1
Omniture               26.3      98.4
Platform               47        46.1
Print and Publishing   42.9      47.3

In this table, Creative Solutions has already been mentioned. Digital Enterprise, formerly called Business Productivity, includes Acrobat, LiveCycle and Connect web conferencing. Platform is confusing; according to the Q4 09 datasheet it includes the developer tools, Flash Platform Services and ColdFusion. However, the Q4 10 datasheet omits any list of products for Platform, though it includes them for the other segments, and lists ColdFusion under Print and Publishing along with Director, Contribute, PostScript, eLearning Suite and some other older products. According to this document [pdf], InDesign – which is huge in print publishing – is not included in Print and Publishing, so I guess it is in Creative Solutions.

In the earnings call, Adobe’s Mark Garrett did mention Platform, and attributed its growth (compared to Q3 2010) to “higher toolbar distribution revenue driven primarily by the release of the new Adobe Reader version 10 in the quarter.” This refers to the vile practice of foisting a third-party toolbar (unless they opt out) on people forced to download Adobe Reader because they have been sent a PDF. Perhaps in the light of these good results Adobe could be persuaded to stop doing so?

I am not sure how much this breakdown can be trusted as it makes little sense to me. Do not take the segment names too seriously then; but they are all we have when it comes to trying to compare like with like.

Still, clearly Adobe is doing well and has successfully steered around some nasty rocks that Apple threw in its way. I imagine that Microsoft’s decision to retreat from its efforts to establish Silverlight as a cross-platform rival to Flash has also helped build confidence in Adobe’s platform. The company’s point of vulnerability is its dependence on shrink-wrap software for the majority of its revenue. Projects like the abandoned Rome show that Adobe knows how to move towards cloud-deployed, subscription-based software; but with business booming under its current model, and little sign of success for cloud projects like Acrobat.com, you can understand why the company is in no hurry to change.

Why Windows Installer pops up when you run an application

Warning: this post is about old Windows hassles. I’ve written it partly because some of us still need to run old versions of Windows and their applications, and partly because it reminds me that Windows has in fact improved, so that this sort of thing is less common – though there is still immense complexity under the surface, which can leak out to cause you grief, especially for people like reviewers and developers who install lots of stuff.

I’ve been retreating to Windows XP recently, in order to tweak an old Visual Basic 6 application. VB6 can be persuaded to run on later versions of Windows, but it is not really happy there. I have an old XP installation that I migrated from a physical machine to a VM on Hyper-V.

I was annoyed to find that when I fired up VB 6, the Windows Installer would pop up – not for VB 6, but for Visual Studio 2005, which was also installed.

[screenshot: Windows Installer preparing to configure Visual Studio 2005]

Worse still, after thrashing away for a bit it decided that it needed the original DVD:

[screenshot: Windows Installer prompting for the Visual Studio 2005 DVD]

I actually found the DVD and stuck it in. The installer ground away for ages with its deceptive progress bars – “20 seconds remaining” sitting there for 10 minutes – repeated what looked like a loop several times, then finally let me into VB. All was well for the rest of that session; but after restarting the machine, if I started VB 6 the very same thing would happen again.

This annoyance is not confined to VB 6; it used to happen a lot in XP days, though in my experience it is much less common with Vista and Windows 7.

I investigated further. This article explains what happens:

What you see is the auto-repair feature of Windows Installer. When an application is launched, Windows Installer performs a health check in order to restore files or registry entries that may have been deleted. Such a health check is not only triggered by clicking a shortcut but also by other events, such as activation of a COM server. The events triggering a health check depend on the operating system.

When you see this auto-repair problem this means that Windows Installer came to the conclusion that some application is broken and needs to be repaired.

A good concept, but in practice one that often fails and causes frustration. The worst part of it is the lack of information. Look at the dialog above, which refers to “the feature you are trying to use”. But which feature? In my case, how can my VB 6 depend on a feature of Visual Studio 2005, which came later and does not include VB 6? In any case, it is a lie, since VB 6 works fine even after the installer fails to fix its missing feature.

Fortunately, the article explains how to troubleshoot. Go to Event Viewer and open the Application log; the MsiInstaller entries there tell you which product and component raised the repair attempt. Unfortunately the component is identified by a GUID. What is it?

To find out, you can try Google, or you can use a utility that queries the Windows Installer database. The best I’ve found is a tool called msiinv; the script mentioned in the post above did not work. You can find msiinv described by Aaron Stebner here, with a download link. Note how Stebner had to change the download locations because they kept breaking; a constant frustration with troubleshooting Windows, as Microsoft regularly moves or removes articles and downloads even when they are still useful.

Running msiinv with its verbose option (which you will need) pretty much dumps the entire Windows Installer database to a text file. You can then search for these GUIDs and find out what they are. You may even find products listed that do not appear in Control Panel’s Add/Remove Programs. You can remove these from the command line like this:

msiexec /x {GUID}

where GUID identifies the product to remove.
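
If you would rather script the GUID-to-name lookup than trawl through msiinv’s text dump, the Windows Installer API can enumerate installed products directly. Here is a minimal sketch in Python using ctypes – it calls the same MsiEnumProducts/MsiGetProductInfo functions that inventory tools rely on, and assumes Python is available on the affected Windows machine:

    # Enumerate installed products via the Windows Installer API (msi.dll),
    # printing each product GUID with its name, so that GUIDs from
    # MsiInstaller event log entries can be matched to products.
    import ctypes
    from ctypes import windll, create_string_buffer, byref

    msi = windll.msi
    ERROR_SUCCESS = 0

    index = 0
    guid = create_string_buffer(39)       # a product GUID is 38 chars + null
    while msi.MsiEnumProductsA(index, guid) == ERROR_SUCCESS:
        size = ctypes.c_ulong(256)
        name = create_string_buffer(size.value)
        msi.MsiGetProductInfoA(guid.value, b"ProductName", name, byref(size))
        print(guid.value.decode(), name.value.decode(errors="replace"))
        index += 1

Matching the GUIDs from the MsiInstaller event log entries against this list identifies the culprit product.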

In my case I found beta versions of WinFX (which became .NET 3.0). I said this was old stuff! I removed them, restarted Windows, and VB6 started cleanly.

That still does not explain how they got hooked to VB6; the answer is probably somewhere in the msiinv output, but having fixed the issue I’m not inclined to spend more time on it.

HTML 5 Canvas: the only plugin you need?

The answer is no, of course. And Canvas is not a plugin. That said, here is an interesting proof of concept blog and video from Alexander Larsson: a GTK3 application running in Firefox without any plugin.

[screenshot: a GTK3 application running in the Firefox browser]

GTK is an open source cross-platform GUI framework written in C but with bindings to other languages including Python and C#.

So how does C native code run in the browser without a plugin? The answer is that the HTML 5 Canvas element, already widely implemented and coming to Internet Explorer in version 9, has a rich drawing API that goes right down to pixel manipulation if you need it. In Larsson’s example, the native code actually runs on a remote server. His code receives the latest image of the application from the server and transmits mouse and keyboard operations back, creating the illusion that the application is running in the browser. The client only needs to know what is different in the image as it changes, so although sending screen images sounds heavyweight, it is amenable to optimisation and compression.
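
To make that concrete, here is a minimal sketch of the server-side optimisation in Python – not Larsson’s actual code – assuming frames arrive as raw RGB byte strings of known dimensions; the tile size and function names are illustrative:

    # Dirty-rectangle optimisation: rather than resending the whole frame,
    # find which tiles changed between two consecutive frames. The transport
    # for sending the changed tiles to the browser is omitted.
    TILE = 32  # tile edge in pixels

    def changed_tiles(prev: bytes, curr: bytes, width: int, height: int):
        """Yield the (x, y) origin of each TILE x TILE block that differs."""
        stride = width * 3                      # 3 bytes per RGB pixel
        for ty in range(0, height, TILE):
            for tx in range(0, width, TILE):
                for row in range(ty, min(ty + TILE, height)):
                    start = row * stride + tx * 3
                    end = start + min(TILE, width - tx) * 3
                    if prev[start:end] != curr[start:end]:
                        yield (tx, ty)          # tile differs; next tile
                        break

The client then fetches and paints only the changed tiles onto the canvas, with putImageData or drawImage, which is what keeps the bandwidth manageable.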

It is the same concept as Windows remote desktop and terminal services, or remote access using VNC, but translated to a browser application that requires no additional client or setup.

There are downsides to this approach. First, it puts a heavy burden on the server, which is executing the application code as well as supplying the images, especially when there are many simultaneous users. Second, there are tricky issues when the user expects the application to interact with the local machine, such as playing sounds, copying to the clipboard or printing: everything is an image, so there is no real text to select or copy, for example. Third, it is not well suited to graphics that change rapidly, as in a game with fast-paced action.

On the other hand, it solves an immense problem: getting your application running on platforms which do not support the runtime you are using. Native applications, Flash and Silverlight on Apple’s iPad and iPhone, for example. I recall seeing a proof of concept for Flash at an Adobe MAX conference (not the most recent one) as part of the company’s research on how to break into Apple’s walled garden.

It is not as good as a true local application in most cases, but it is better than nothing.

Now, if Microsoft were to do something like this for Silverlight, enabling users to run Silverlight apps on their Apple and Linux devices, I suspect attitudes to the viability of Silverlight in the browser would change considerably.

WS-I closes its doors – the end of WS-* web services?

The Web Services Interoperability Organization has announced [pdf] the “completion” of its work:

After nearly a decade of work and industry cooperation, the Web Services Interoperability Organization (WS-I; http://www.ws-i.org) has successfully concluded its charter to document best practices for Web services interoperability across multiple platforms, operating systems and programming languages.

In the whacky world of software though, completion is not a good thing when it means, as it seems to here, an end to active development. The WS-I is closing its doors and handing maintenance of the WS interoperability profiles to OASIS:

Stewardship over WS-I’s assets, operations and mission will transition to OASIS (Organization for the Advancement of Structured Information Standards), a group of technology vendors and customers that drive development and adoption of open standards.

Simon Phipps blogs about the passing of WS-I and concludes:

Fine work, and many lessons learned, but sadly irrelevant to most of us. Goodbye, WS-I. I know and respect many of your participants, but I won’t mourn your passing.

Phipps worked for Sun when the WS-* activity was at its height and WS-I was set up, and describes its formation thus:

Formed in the name of "preventing lock-in" mainly as a competitive action by IBM and Microsoft in the midst of unseemly political knife-play with Sun, they went on to create massively complex layered specifications for conducting transactions across the Internet. Sadly, that was the last thing the Internet really needed.

However, Phipps links to this post by Mike Champion at Microsoft which represents a more nuanced view:

It might be tempting to believe that the lessons of the WS-I experience apply only to the Web Services standards stack, and not the REST and Cloud technologies that have gained so much mindshare in the last few years. Please think again: First, the WS-* standards have not in any sense gone away, they’ve been built deep into the infrastructure of many enterprise middleware products from both commercial vendors and open source projects. Likewise, the challenges of WS-I had much more to do with the intrinsic complexity of the problems it addressed than with the WS-* technologies that addressed them. William Vambenepe made this point succinctly in his blog recently.

It is also important to distinguish between the work of the WS-I, which was about creating profiles and testing tools for web service standards, and the work of other groups such as the W3C and OASIS which specify the standards themselves. While work on the WS-* specifications seems much reduced, there is still work going on. See for example the W3C’s Web Services Resource Access Working Group.

I partly disagree with Phipps about the work of the WS-I being “sadly irrelevant to most of us”. It depends who he means by “most of us”. Granted, all this stuff is meaningless to the world at large; but there are a significant number of developers who use SOAP and WS-* at least to some extent, and interoperability is key to the usefulness of those standards.

The Salesforce.com API is mainly SOAP-based, for example, and although there is a REST API in preview it is not yet supported for production use. I have been told that a large proportion of the transactions on Salesforce.com are made programmatically through the API, so here is one place at least where SOAP is heavily used.
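
For a flavour of what SOAP development looks like outside Visual Studio, here is a minimal sketch using the Python suds library against the Salesforce partner WSDL; the WSDL path and credentials are placeholders:

    # Calling the Salesforce SOAP API with suds. Salesforce expects your
    # security token appended to the password for API logins.
    from suds.client import Client

    # partner.wsdl is downloaded from the Salesforce Setup pages
    client = Client("file:///path/to/partner.wsdl")
    result = client.service.login("user@example.com", "passwordTOKEN")
    print(result.sessionId)   # session id for subsequent API calls

A real client would go on to call the endpoint given in result.serverUrl, passing the session id in a SOAP header; the point is that the WSDL-driven proxy spares the developer from composing the XML by hand.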

WS-* web services are also built into Microsoft’s Visual Studio and .NET Framework, and are widely used in my experience. Visual Studio does a good job of wrapping them so that developers do not have to edit WSDL or SOAP requests and responses by hand. I’d also suggest that web services in .NET are more robust than DCOM (Distributed COM) ever was, and work successfully over the internet as well as on a local network, so the technology is not a failure.

That said, I am sure it is true that only a small subset of the WS-* specifications are widely used, which implies a large amount of wasted effort.

Are SOAP and WS-* dying, and is REST the future? The evidence points that way to me, but I would be interested in other opinions.

Now you can rent GPU computing from Amazon

I wrote back in September about why programming the GPU is going mainstream. That’s even more the case today, with Amazon’s announcement of a Cluster GPU instance for the Elastic Compute Cloud. It is also a vote of confidence for NVIDIA’s CUDA architecture. Each Cluster GPU instance has two NVIDIA Tesla M2050 GPUs installed and costs $2.10 per hour. If one GPU instance is not enough, you can use up to 8 by default, with more available on request.
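
Renting one takes only a few lines of code. Here is a minimal sketch using the Python boto library; the AMI id and keys are placeholders, and cg1.4xlarge is the API name Amazon gives the Cluster GPU instance type:

    # Launch a Cluster GPU instance on EC2 with boto.
    from boto.ec2.connection import EC2Connection

    conn = EC2Connection("ACCESS_KEY", "SECRET_KEY")
    reservation = conn.run_instances(
        "ami-00000000",               # a CUDA-equipped AMI id goes here
        instance_type="cg1.4xlarge",  # two Tesla M2050s per instance
        min_count=1,
        max_count=1,
    )
    print(reservation.instances[0].id)  # billed at $2.10 per instance-hour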

GPU programming in the cloud makes sense when you need the performance of a supercomputer, but only occasionally. It could also enable some powerful mobile applications, maybe in financial analysis or image manipulation, where you use a mobile device to input data and view the results, while cloud processing does the heavy lifting.

One of the ideas I discussed with someone from Adobe at the NVIDIA GPU conference was to integrate a cloud processing service with Photoshop, so you could send an image to the cloud, have some transformative magic done, and receive the processed image back.

The snag with this approach is that in many cases you have to shift a lot of data back and forth, which means you need a lot of bandwidth available before it makes sense. Still, Amazon has now provided the infrastructure to make processing as a service easy to offer. It is now over to the rest of us to find interesting ways to use it.

First impressions of Microsoft Kinect – great hardware waiting for great software

The moment of magic comes when someone walks through the gaming area and Xbox flashes up the message that they have signed in. No button was pressed; this was face recognition working in the background during gameplay.

So Kinect is amazing. And it is amazing: it is controller-less video gaming that works well enough to be a lot of fun. That said, I also have reservations about the device – these are first impressions only – and I feel it is let down in a big way by the games currently available.

My device arrived on the UK launch day, November 10th. It is a relatively compact affair, around 28 cm wide on a stubby stand. The first task is positioning it, which can be a challenge. You are meant to place it above or below your TV screen, at a height of between 0.6m and 1.8m. I was lucky, in that our TV is on a stand that has space for it; the height is fractionally below 0.6m but it seems to be happy. Alternatively, you can purchase a free-standing support or a bracket that clips to the top of a TV. I imagine there are some frustrated first-day purchasers who received a device but cannot satisfactorily position it.

You also need free space in front of the set. Our coffee table got moved when the Nintendo Wii arrived, so the 6ft required for one-player play is not a problem.  Two-player is more difficult; we can do it but it means moving furniture, which is a nuisance. Overall it is more intrusive than the Wii, but less than Rock Band or Guitar Hero with the drum kit, so not a deal-breaker.

Microsoft takes full advantage of over-the-wire updates with Kinect. After connecting, the Xbox, the device firmware, and the bundled Kinect Adventures game all received patches; but the procedure went smoothly.

Kinect is a sophisticated device, a lot more than just a camera. There are three major subsystems in Kinect: optical, audio and motor.

  • Motor is the simplest – the stubby stand also contains a motor assembly that swivels the device up and down, enabling it to adjust for different positions and to find the optimal angle for players of different heights.
  • The optical subsystem includes two cameras and an infra-red projector. The projector overlays a pattern on the field of view. This allows the first camera, a depth sensor, to map the position of the players in three dimensions. This lets the system detect hand movements, for example, which are usually closer to the camera than the rest of the body (see the sketch after this list). The second camera is a colour device more like the one in your webcam, and enables Kinect to take pictures of your gaming antics which you can share with the world if you feel so inclined, as well as presumably feeding into the positioning system.
  • The audio subsystem includes no fewer than four microphones. The reason is that Kinect does voice recognition at a distance, so it needs to compensate for both the sounds of the video game and other background noise. Using multiple microphones enables the audio processor to calculate the position of sounds, since each microphone receives a sound at a fractionally different time.
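
To illustrate the depth idea, here is a toy Python sketch – not Kinect’s actual algorithm – of segmenting the pixels nearest the camera, which is roughly how hands can be separated from the body; the depth map format and the margin are assumptions:

    # Crude depth-based segmentation: given a depth map in millimetres,
    # keep the pixels within `margin` mm of the closest reading - a rough
    # mask for the hands, which are usually nearest the camera.
    def nearest_region(depth, margin=150):
        """depth: 2D list of valid distances in mm; returns [(x, y), ...]"""
        closest = min(min(row) for row in depth)
        return [(x, y)
                for y, row in enumerate(depth)
                for x, d in enumerate(row)
                if d <= closest + margin]

    # Example: a tiny 3x3 depth map where the centre pixel is a hand
    frame = [[2000, 2000, 2000],
             [2000,  800, 2000],
             [2000, 2000, 2000]]
    print(nearest_region(frame))  # [(1, 1)]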

These sensor systems are backed by considerable processing power – necessary because the Xbox itself devotes most of its processing to the game being played. The trade-off in systems like this is that more processing means more accurate interpretation of voice and gestures, but taking too much time introduces lag. As I saw at the NVIDIA GPU conference in September – see here and here for posts – very rapid processing enables magic like robotic keyhole surgery on a beating heart – and like Kinect, that magic is based on real-time interpretation of physical movement. Kinect is not at that level, but has audio and image processor chips and 512MB RAM, along with other components including, for some reason, an accelerometer, mounted on three circuit boards squashed into the slim plastic container. See for yourself in the ifixit teardown.

But how is it in practice? It certainly works, and we had a good and energetic time playing Kinect Adventures and a little bit of Joy Ride. Playing without a controller is a liberating experience. That said, there were some annoyances:

  • Kinect play is more vulnerable to interference than controller gaming. If someone walks across the play area, for example, it will interfere.
  • In the Kinect system, there is no such thing as a click. Therefore, to activate an option you have to hover over it for a short period while a progress circle fills; when the circle is filled, the system decides that you have “clicked” (a sketch of this dwell mechanic follows after this list). It is slower and less reliable than clicking a button.
  • The audio system enables voice control, which seems to work well when available – but most of the time it is not available. Considering the amount of hardware dedicated to this, it seems rather a waste; but presumably more is to come. Controlling Sky Player by voice, for example, would be great: no more hunting for the remote.
  • The Kinect seems to work best when you are standing. For something like a driving game, that is not what you want. Apparently seated gameplay is supported, but does not work properly with the launch games; so watch this space.
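
For interest, here is what the dwell mechanic in the second bullet amounts to in code – a toy Python sketch with illustrative names and timings, not Microsoft’s implementation:

    # "Hover to click": an option activates only once the hand has hovered
    # over it long enough to fill the progress circle; drifting away resets.
    DWELL_SECONDS = 2.0

    class DwellButton:
        def __init__(self):
            self.progress = 0.0          # 0.0 = empty circle, 1.0 = full

        def update(self, hovered: bool, dt: float) -> bool:
            """Call once per frame with the frame time; True = 'click' fired."""
            if hovered:
                self.progress = min(1.0, self.progress + dt / DWELL_SECONDS)
            else:
                self.progress = 0.0      # any drift away starts over
            return self.progress >= 1.0

The reset-on-drift behaviour is exactly what makes it slower and less reliable than a physical button.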

Launching stuff before it is really ready seems to be ingrained in Microsoft’s culture. Is Kinect another example? To some extent I suspect it is. I recall the early days with the Nintendo Wii as exciting moments of discovery: the system worked well from the get-go, and the bundled Wii Sports game is a masterpiece. The Kinect games so far are less impressive.

In fact, my overwhelming impression so far is that this is great hardware waiting for software to show what it can do. The 20,000 Leaks mini-game in Adventures is not very good – you are in a glass cage underwater and have to cover leaks to stem them – but it is interesting because you have to use head, hands and feet to play it. It could not be duplicated with a conventional controller, because a conventional controller does not allow you to move one thing this way, and another thing that way, at the same time.

It follows that Kinect should enable some brilliant new gaming concepts. I’d love to see a stealth adventure done for Kinect, for example; there are new possibilities for realism and excitement.

As it is, the Kinect launch games show little imagination and seem to be heavily Wii-influenced – and if you compare Kinect with Wii on that basis, you might well conclude that the Wii is better in some ways, worse in others, but cheaper and with better games, and without the friction of Kinect’s somewhat fussy requirements.

Such a comparison is not fair to Kinect, which in concept and hardware is a generation ahead of Wii or PlayStation Move. It now awaits software to take advantage.

UK business applications stagger towards the cloud

I spent today evaluating several competing vertical applications for a small business working in a particular niche – I am not going to identify it or the vendors involved. The market is served by a number of companies which have been in it for some years, and which have Windows applications born in the desktop era and still being maintained and enhanced, plus some newer entrants with web-based solutions.

Several things interested me. The desktop applications seemed to suffer from all the bad habits of application development before design for usability became fashionable, and I saw forms with a myriad of fields and controls, each one no doubt satisfying a feature request, but forming a confusing and ugly user interface when put together. The web applications were not great, but seemed more usable, because a web UI encourages a simpler page-based approach.

Next, I noticed that the companies providing desktop applications talking to on-premise servers had found a significant number of their customers asking for a web-hosted option, but were having difficulty fulfilling the request. Typically they adopted a remote application approach using something like Citrix XenApp, so that they could continue to use their desktop software. In this type of solution, a desktop application runs on a remote machine but its user interface is displayed on the user’s desktop. It is a clever solution, but it is really a desktop/web hybrid and tends to be less convenient than a true web application. I felt that they needed to discard their desktop legacy and start again, but of course that is easier said than done when you have an existing application widely deployed, and limited development resources.

Even so, my instinct is to be wary of vendors who describe desktop applications served by XenApp or the like as “cloud computing”.

Finally, there was friction around integrating with Outlook and Exchange. Most users have Microsoft Office and use Outlook and Exchange for email, calendar and tasks. The vendors with web applications found their users demanding integration, but it is not easy to do this seamlessly, and we saw a number of imperfect attempts at synchronisation. The vendors with desktop applications had an easier task, except when these were repurposed as remote applications on a hosted service. In that scenario the vendors insisted that customers also use their hosted Exchange, so that they could make it work. In other words, customers have to build almost their entire IT infrastructure around the requirements of this single application.

It was all rather unsatisfactory. The move towards the cloud is real, but in this particular small industry sector it seems slow and painful.

The cloud permeates Microsoft’s business more than we may realise

I’m in the habit of summarising Microsoft’s financial results in a simple table. Here is how it looks for the recently announced figures.

Quarter ending September 30 2010 vs quarter ending September 30 2009, $millions

Segment                     Revenue   Change   Profit   Change
Client (Windows + Live)     4785      1905     3323     1840
Server and Tools            3959      409      1630     393
Online                      527       40       -560     -83
Business (Office)           5126      612      3388     561
Entertainment and devices   1795      383      382      122

The Windows figures are excellent, mostly reflecting Microsoft’s success in delivering a successor to Windows XP that is good enough to drive upgrades.

I’m more impressed though with the Server and Tools performance – which I assume is mostly server products – noting that it now includes Windows Azure. Microsoft does not break out the Azure figures, but said that it grew 40% over the previous quarter; not especially impressive given that Azure has not been out long and is growing from a small base.

The Office figures, also good, include SharePoint, Exchange and BPOS (Business Productivity Online Suite), which is to become Office 365. Microsoft reported a “tripled number of business customers using cloud services.”

Online, essentially the search and advertising business, is poor as ever, though Microsoft says Bing gained market share in the USA. Entertainment and devices grew despite poor sales for Windows Mobile, caught between the decline of the old mobile OS and the launch of Windows Phone 7.

What can we conclude about the health of the company? The simple fact is that despite Apple, Google, and mis-steps in Windows, mobile, and online, Microsoft is still a powerful money-making machine and performing well in many parts of its business. The company actually does a poor job of communicating its achievements, in my experience – witness the rather dull keynote at TechEd Berlin yesterday.

Of course Microsoft’s business is still largely dependent on an on-premise software model that many of us feel will inevitably decline. Still, my other reflection on these figures is that the cloud permeates Microsoft’s business more than a casual glance reveals.

The “Online” business is mainly Bing and advertising as far as I can tell; and despite CTO Ray Ozzie telling us back in 2005 of the importance of services financed by advertising, that business revolution has not come to pass as he imagined. I assume that Windows Live is no more successful than Online.

What is more important is that we are seeing Server and tools growing Azure and cloud-hosted virtualisation business, and Office growing hosted Exchange and SharePoint business. I’d expect both businesses to continue to grow, as Microsoft finally starts helping both itself and its customers with cloud migration.

That said, since the hosted business is not separated from the on-premise business, and since some is in the hands of partners, it is hard to judge its real significance.

Understanding the Silverlight controversy

There has been much discussion of the future of Microsoft’s Silverlight plugin since Server and Tools President Bob Muglia stated in a PDC interview that “Our strategy with Silverlight has shifted”, and spoke of HTML as the “only true cross platform solution”.

The debate was even reported on the BBC’s web site under the headline Coders decry Silverlight change.

It is unfortunate that headlines tend to think in binary: alive or dead. In other words, if Microsoft is repositioning Silverlight then it must be killing it.

That is not the case. Muglia did not say that Silverlight has no future, nor that it was unimportant. He affirmed that there will be another version of Silverlight for Windows and Mac, as well as highlighting that it is the development platform for Windows Phone.

Speaking personally for a moment, I have reviewed Silverlight favourably in the past and still regard it as a great achievement by Microsoft: the power of the .NET runtime, the elegance of C#, the flexible layout capabilities of XAML, integrated with a capable multimedia player, and wrapped in a lightweight package that in my experience installs quickly and easily.

Silverlight forms an excellent client for cloud services such as those delivered by the Azure platform which we heard about at PDC.

Perhaps it is the case that IE9 maestro Dean Hachamovitch tended towards the gleeful as he demonstrated features in HTML and JavaScript that previously would have required Silverlight or Flash. At the same time, IE9 is not yet released, and even when it is, will not match the capabilities or the tooling and libraries available for Silverlight.

The Silverlight press generated by PDC must have been disappointing and frustrating for Microsoft’s Silverlight team. I am reading reports of Developer VP Scott Guthrie’s remarks at the DevConnections conference this week.

The reports of my death are greatly exaggerated … I have more people working on Silverlight now than any time in Silverlight history … don’t believe everything you read on the internet.

I have great respect for Guthrie; you need only see the speed and manner with which he reacted to the recent ASP.NET security scare – not trying to diminish its importance, delivering practical advice, answering comments, and working with his team to come up with workarounds and a proper solution as quickly as possible – to appreciate his commitment and that he understands the needs of developers.

So were posts like my own Silverlight dream is over unfair and inaccurate? Well, there is always a risk of being misunderstood; but the problem, as I perceive it, is not primarily about Silverlight’s progress on Windows and Mac. The problem is that those two desktop platforms no longer have sufficient reach; or rather, even if they have sufficient reach today, they will not tomorrow. We have the rise of iOS and Android; an explosion of non-Windows tablets in the wings; we have a man like James Gardner, CTO at the UK’s Department for Work and Pensions, writing of Windows 7 that:

Personally, I think it likely this is the last version of Windows anyone ever widely deploys

See also Cliff Saran’s comments at Computer Weekly.

In other words, Guthrie’s team can do a cracking job with Silverlight 5 for Windows and Mac – it could even merge Silverlight with WPF and make it the primary application platform for Windows – but that would still not address the concerns raised by what happened at PDC. If Silverlight remains imprisoned in Windows and Mac, it cannot deliver on its original promise.

What could Microsoft do to restore confidence in Silverlight? Something along these lines would make me change my mind:

  1. Announce Silverlight for Android.
  2. Nurture Silverlight for Symbian.
  3. Follow through on commitments for Silverlight on Moblin/MeeGo.
  4. Either implement Silverlight for Linux, or enter a deeper partnership with Novell’s Mono so that Microsoft-certified Silverlight runtimes appear on Linux in a timely manner alongside Microsoft’s releases.
  5. Come up with a solution for Silverlight on iOS. One idea is to follow Adobe with a native code compiler from Silverlight to iOS. Another would be a way of compiling XAML and C# to SVG and JavaScript. Neither would be perfect; but as it is, every company that starts deploying iPads or their successors is a customer that cannot use Silverlight.

Do I think Microsoft will implement the above? I doubt it. My interpretation of Muglia’s remarks is that Microsoft has decided not to go down that path, but to reserve Silverlight for Windows, Mac, and Windows Phone, and to invest in HTML for broad-reach applications.

That may well be the right decision; it is one that makes sense, though Microsoft was perhaps unwise to highlight it before IE9 is released. Further, cross-platform is not in Microsoft’s blood, and the path that Silverlight has taken is in line with what you would expect from a company built on Windows.

Silverlight is not dead, and for developers targeting Windows, Mac and Windows Phone it is as good as ever, and no doubt will be even better in its next version. But failing another change of heart, it will never now be WPF Everywhere; and PDC 2010 was when that truth sank home.

Update: this is pretty much what Guthrie says in his latest post:

Where our strategy has shifted since we first started working on Silverlight is that the number of Internet connected devices out there in the world has increased significantly in the last 2 years (not just with phones, but also with embedded devices like TVs), and trying to get a single implementation of a runtime across all of them is no longer really practical (many of the devices are closed platforms that do not allow extensibility).  This is true for any single runtime implementation – whether it is Silverlight, Flash, Java, Cocoa, a specific HTML5 implementation, or something else.  If people want to have maximum reach across *all* devices then HTML will provide the broadest reach (this is true with HTML4 today – and will eventually be true with HTML5 in the future).  One of the things we as a company are working hard on is making sure we have the best browser and HTML5 implementation on Windows devices through the great work we are doing with IE9.