Category Archives: .net

Microsoft Expression Blend is too hard to learn

Expression Blend is the design tool for Windows Presentation Foundation (WPF) and Silverlight, and thus a key tool for building applications for the current generation of Microsoft’s platform. How good is it? There is a shortage of in-depth reviews, if my quick Google search is anything to go by, though there are plenty of quick write-ups saying that it is not as good as Adobe Flash. Blend got a bit of attention following the 2009 Mix conference thanks to SketchFlow, the prototyping feature built into Blend 3, which has been well received.

One reason for Blend’s relatively low profile is that it is aimed at designers, whereas Microsoft’s community is more developer-focused. WPF developers can avoid Blend to a large extent by using the designer built into Visual Studio, which is fine for laying out typical business applications. Now, with Visual Studio 2010, the same is true for Silverlight. Another option is to write your own XAML code, which works for laying out controls, though it is impractical for drawings, for which XAML is simply too verbose.

It is just as well you can avoid it, because although Blend is very capable, it is not easy to learn. I’m guessing there are quite a few developers who have opened it up, clicked around a bit, and retreated gratefully back to Visual Studio. This was a problem for Adobe Flash Professional as well, and one of the reasons for the creation of Flex and Flex Builder, a code-centric IDE for the Flash runtime.

You can argue that a design tool does not need to be easy for developers to use; it needs to be good enough for designers to create great designs. That’s true; but the developer/designer divide is not an absolute one, and ideally Blend should be something a developer can dip into easily, to create or enhance a simple layout, without too much stress.

Maybe some developers can; but I have not found Blend particularly intuitive. The user interface is busy, and finding what you want or getting focus on the right object can be a challenge.

As evidence of this, take a look at Adam Kinney’s Through the Eyes of Expression Blend tutorial, which is among the best I have found. Try lesson 9, Styling and working with design-time data. Then ask yourself how easy it would be to discover the way to do this without the step-by-step instructions. Would you have known to right-click the StackPanel and choose Change Layout Type > Grid? What about step 8, right-clicking the ListBox, and selecting Edit Additional Templates > Edit Layout of Items (Items Panel) > Create Empty?


And notice how in step 9 you have to click the “small grey square next to the Source property” – that’s the one called Advanced Options.


Overall it is a nice tutorial, but you might need an evening or two with a couple of fat books, one on XAML and one on Blend itself (if you can find a good one), in order to understand the features you have been using.

It is probably worth it, if you intend to work with Silverlight or WPF. Blend has one great advantage over Flash Professional: it authors XAML, and you can open it up in Visual Studio and continue working on it there. Microsoft has no need for something like Adobe Catalyst to bridge the designer/developer divide.

Still, Microsoft had a clean slate with Blend, which is only a few years old, and it is a shame it could not come up with something a bit more user-friendly.

Another implication is that the new visual designer in Visual Studio 2010 makes Silverlight applications a great deal easier to create. You can use Blend if you want to; but the Visual Studio effort is far more approachable.

How do you find Blend? I’d be interested in other perspectives.

Silverlight 4 vs Silverlight 3: a little bit faster?

Microsoft’s Scott Guthrie spoke of “twice as fast performance” in the newly-released Silverlight 4, thanks to a new just-in-time compiler.

Performance is a hard thing to nail down. Maybe he meant that compilation is twice as fast? I’m not sure; but I tried a couple of quick tests.

First, I looked at my Primes test. Version 3 running in Windows Vista took around 0.40 seconds (the exact figure varies on each run, thanks to background processes or other factors). I then upgraded to version 4.0. No significant difference, on average over several runs. I used Vista because I’d already upgraded my Windows 7 install.
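For context, the Primes test simply times a CPU-bound loop. A minimal sketch of that kind of timing harness in C# – my own illustration, with a hypothetical CountPrimes method standing in for the real benchmark code, and a console host rather than a Silverlight page:

    using System;
    using System.Diagnostics;

    class PrimesTimer
    {
        // Hypothetical stand-in for the benchmark's real prime-counting loop.
        static int CountPrimes(int max)
        {
            int count = 0;
            for (int n = 2; n <= max; n++)
            {
                bool isPrime = true;
                for (int d = 2; d * d <= n; d++)
                {
                    if (n % d == 0) { isPrime = false; break; }
                }
                if (isPrime) count++;
            }
            return count;
        }

        static void Main()
        {
            // Time several runs and report each, since background activity
            // makes individual timings vary.
            for (int run = 0; run < 5; run++)
            {
                var sw = Stopwatch.StartNew();
                int primes = CountPrimes(500000);
                sw.Stop();
                Console.WriteLine("Run {0}: {1} primes in {2:0.00} seconds",
                    run + 1, primes, sw.Elapsed.TotalSeconds);
            }
        }
    }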

Next I tried Bubblemark. I maxed it out at 128 bubbles. On Vista with Silverlight 3 I got about 240 fps; on the same machine with Silverlight 4 about 260 fps – an improvement of roughly 8%.


Next I tried on an Apple Mac. My Mac Mini is less powerful, though not that bad: an Intel 1.83 GHz Core Duo. On the Primes test I got 0.54 seconds before and 0.50 seconds after the upgrade to 4.0, an improvement of about 7.5%. On Bubblemark, it was 24 fps both before and after.

I guess the vast difference in graphics performance is also interesting. It is not just Mac vs Windows; the Nvidia GeForce 6800 on the PC is more powerful than whatever is in the Mac Mini.

If anyone can tell me in what respect version 4.0 is twice as fast, I’d be grateful.

Update: prompted by the comment from David Heffernan below, I also tried the Encog Silverlight Benchmark. I used an older core duo laptop, since I am running out of machines to upgrade. I ran the test twice before upgrading, and twice after. Lower is better:

Silverlight 3.0: 22.0

Silverlight 4.0: 12.7

That is about a 42% reduction in the time taken, where “twice as fast” would mean a 50% reduction – much closer to Guthrie’s claim. I guess it depends what you measure.

Silverlight 4.0 released to the web; tools still not final

Microsoft released the Silverlight 4.0 runtime yesterday. Developers can also download the Silverlight 4 Tools; but they are not yet done:

Note that this is a second Release Candidate (RC2) for the tools; the final release will be announced in the coming weeks.

Although it is not stated explicitly, I assume it is fine to use these tools for production work.

Another product needed for Silverlight development but still not final is Expression Blend 4.0. This is the designer-focused IDE for Silverlight and Windows Presentation Foundation. Microsoft has made the release candidate available, but it looks as if the final version will be even later than that for Silverlight 4 Tools.

Disappointing in the context of the launch of Visual Studio 2010; but bear in mind that Silverlight has been developed remarkably fast overall. There are huge new features in version 4, which was first announced at the PDC last November; and that followed only a few months after the release of version 3 last summer.

Why all this energy behind Silverlight? It’s partly Adobe Flash catch-up, I guess, with Silverlight 4 competing more closely with Adobe AIR; and partly a realisation that Silverlight can be the unifying technology that brings together web and client, mobile and desktop for Microsoft. It’s a patchy story of course – not only is the appearance of Silverlight on Apple iPhone or iPad vanishingly unlikely, but more worrying for Microsoft, I hear few people even asking for it.

Even so, Silverlight 4.0 plus Visual Studio 2010 is a capable platform; it will be interesting to see how well it is taken up by developers. If version 4.0 is still not enough to drive mainstream adoption, then I doubt whether any version will do it.

That also raises the question: how can we measure Silverlight take-up? The riastats charts tell us about browser deployment, but while that is important, it only shows how many users have hit some Silverlight content and allowed the plug-in to install. I look at things like activity in the Silverlight forums:

Our forums have 217,426 threads and 247,562 posts, contributed by 77,034 members from around the world. In the past day, we had 108 new threads, 529 new posts, and 70 new users.

it says currently – substantial, but not yet indicative of a major platform shift. Or job stats – 309 UK vacancies right now, according to itjobswatch, putting it behind WPF at 662 vacancies and Adobe Flash at 740. C# on the other hand has 5349; Java 6023.

Book Reviews: Programming F# and Beginning F#

I’ve been working with Microsoft’s new language F# recently and enjoying it. If it catches your interest, you might turn to a book in order to familiarise yourself with the basics. Here are two which I’ve looked at. They are both aimed at experienced developers who are new to F#.

Programming F# by Chris Smith

Programming F# comes from O’Reilly. It kicks off with Hello World and an introduction to the interactive console in Visual Studio 2010 – a great way to try out F#. Next we get a summary of types, and a brief explanation of how to write an F# program – stuff you have to know.

Chapter 3 is an introduction to functional programming, and also mentions type inference, an important F# feature. The following chapter explains mutable programming in F# – yes, you can do it; it is just not the default behaviour – and also covers exceptions. Chapter 5 turns to object-oriented programming in F#, another distinctive feature, while chapter 6 covers aspects specific to .NET such as garbage collection. That’s about half the book, and gets you up and running with the language.

The second half of the book is more interesting, looking at ways of using F#. Smith looks at applied functional programming, including pattern matching, recursion, continuations and closures. Next, there’s a look at applied object orientation. Chapter 9 covers scripting with F# – an interesting use case – and includes welcome examples for things like copying files and automating Microsoft Office. Chapter 10 is a key one, explaining computation expressions that let you create workflows and all-but extend the language. It’s “a very advanced concept that even expert developers can have a hard time grasping,” admits the author, though he presents it clearly.

Next we get a section on a likely reason for picking up F#: asynchronous and parallel programming. There’s a wide-ranging chapter explaining both .NET and F# asynchronous techniques, including the .NET Parallel Extensions. It’s a little confusing, especially since Smith observes that F# asynchronous workflows are sub-optimal for CPU parallelism; he recommends the .NET Parallel Extensions because of the better thread management they offer.
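For readers who have not met them, the .NET Parallel Extensions mentioned here are the Task Parallel Library and PLINQ that ship with .NET 4. A minimal C# sketch of the kind of data parallelism they provide – my example, not one from the book:

    using System;
    using System.Threading.Tasks;

    class ParallelDemo
    {
        static void Main()
        {
            var results = new double[1000000];

            // Parallel.For partitions the iterations across the available cores;
            // thread creation and scheduling are handled by the runtime.
            Parallel.For(0, results.Length, i =>
            {
                results[i] = Math.Sqrt(i) * Math.Sin(i);
            });

            Console.WriteLine("Computed {0} values in parallel", results.Length);
        }
    }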

The last chapters cover .NET Reflection, code analysis and generation with Quotations – “deep wizardry”, says the author. An appendix summarises .NET Libraries and F# interop with COM and native code.

While this is an excellent language introduction, and thorough within the topics it covers, some aspects disappointed me. I cannot find any mention of F# agents, based on the MailboxProcessor class (nothing to do with email), which is a surprising omission; F# designer Don Syme sees it as a critical feature for scalable web development. Nor is there anything on graphics processing, another common usage, or any hint about how you might use F# for financial analysis. I also found it rather dry overall – hard to avoid with so much plumbing to cover, but I feel the author could have conveyed a little more excitement about what F# enables. Don’t make this the only F# book you read.

Beginning F# by Robert Pickering

This Apress title covers similar ground to the O’Reilly book, but with a slightly more hands-on and informal style, which on the whole I enjoyed. It starts with an introduction including a quote from Ralf Herbrich at Microsoft Research describing how he converted 1000 lines of C# into 100 lines of F# which performed just as well – this is the kind of real-world touch that makes you want to read on. The second chapter explains how to install F#, including different versions of Visual Studio, the open source SharpDevelop, and Mono on Linux – excellent diversity. Chapter 3 introduces functional programming, in effect a brisk overview of the core of F#. Read it slowly!

The author goes on to look at imperative programming and mutability in F#, and then object orientation, just as Smith did in his book. Chapter 6 is a useful overview of how code is organised into modules and namespaces, and also covers comment annotations. Next, Pickering looks at F# libraries, including a brief look at math programming. Chapter 8 covers user interface coding – completely lacking in Smith’s book – complete with a quick look at GTK#, which works on Linux. There’s also a quick look at ASP.NET, Microsoft’s web server platform. Although these little introductions are too brief to be really useful, they do spark ideas about how you might use F#.

Data Access is next, another important real-world topic, covering XML, LINQ (Language Integrated Query) and ADO.NET.

Chapter 10 is when we get to parallel programming and reactive programming – code that waits for an external event before running. Pickering introduces F# agents and the MailboxProcessor class. Next comes a look at distributed applications using sockets, HTTP requests, web services, and Windows Communication Foundation.

Chapter 12 has the intriguing title Language-Oriented Programming, and should be taken together with the next chapter on parsing text. This is where we look at using data structures as little languages, parsing and interpreting text, and extending F# syntax. Finally, chapter 14 is the interop chapter, covering interop with C# as well as platform invocation for COM and native code.

Of the two books, this is the more lively and wide-ranging read, and the more likely to enthuse you about the possibilities F# offers, though it skims the surface in places, with many topics receiving only shallow coverage.

Anders Hejlsberg on functional programming, programming futures

At TechDays in Belgium Microsoft’s C# designer and Technical Fellow Anders Hejlsberg spoke on trends in programming languages; you can watch the video here.

I recommend it highly, not so much because of any new or surprising content, but because in his low-key way Hejlsberg is a great communicator. The talk is mostly not about the far future, and much of what he covers relates directly to C# 4.0 and F# as found in Visual Studio 2010. Despite his personal investment in C#, Hejlsberg talks cheerfully about the benefits of F# and gives perhaps the best overview of functional programming I have heard, explaining what it is and why it is well suited to concurrency.


I will not try and summarise the whole talk here, but will bring out its unifying thought, which is that programming is moving towards a style that emphasises the “what” rather than the “how” of the tasks it encodes. This fits with a number of other ideas: greater abstraction, more declarative code, more use of DSLs (domain-specific languages).

The example he gives early on describes how to get a count of groups in a set of data. You can do this using a somewhat manual approach, iterating through the data, identifying the groups, storing them somewhere, and incrementing the count as items belonging to each group are discovered.

Alternatively you can code it in one shot with a grouped count in LINQ or SQL (though Hejlsberg talked about LINQ). This is an example of using a DSL (Domain Specific Language), and also demonstrates a “what” rather than “how” approach to code. It is easier for another programmer to see your intention, as there is no need to analyse a set of loops and variables to discover what they do.
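Hejlsberg showed the contrast on a slide; a rough C# equivalent of the two styles, using a made-up list of words, might look like this:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class GroupCountDemo
    {
        static void Main()
        {
            var words = new List<string> { "apple", "pear", "apple", "plum", "pear", "apple" };

            // The "how": iterate, track each group by hand and increment its count.
            var manualCounts = new Dictionary<string, int>();
            foreach (var word in words)
            {
                int count;
                manualCounts.TryGetValue(word, out count);
                manualCounts[word] = count + 1;
            }

            // The "what": a declarative LINQ query that states the intention
            // and leaves the implementation to the library.
            var queryCounts = words.GroupBy(w => w)
                                   .Select(g => new { Word = g.Key, Count = g.Count() });

            foreach (var item in queryCounts)
                Console.WriteLine("{0}: {1}", item.Word, item.Count);
        }
    }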

There is another reason to prefer this approach. Since the implementation is not specified, the compiler can more easily optimise your code; you do not care how it is done, provided the result is correct. This becomes hugely important when it comes to concurrency, where we want the compiler or runtime to utilise many CPU cores if they are available. He showed a screenshot of Task Manager running on a 128-core machine which apparently exists in Redmond (I could not quite read the figure for total RAM, but think it may be 128GB).
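The same point can be made concrete with PLINQ, part of the .NET 4 Parallel Extensions: because the query states only what is wanted, adding AsParallel() is enough to let the runtime spread the work across cores. A small sketch of my own, not from the talk:

    using System;
    using System.Linq;

    class ParallelQueryDemo
    {
        static void Main()
        {
            var numbers = Enumerable.Range(0, 1000000);

            // Count the numbers in each remainder-mod-10 group. The query only
            // states what is wanted, so AsParallel() lets the runtime spread
            // the grouping work across the available cores.
            var counts = numbers.AsParallel()
                                .GroupBy(n => n % 10)
                                .Select(g => new { Remainder = g.Key, Count = g.Count() })
                                .OrderBy(x => x.Remainder);

            foreach (var item in counts)
                Console.WriteLine("Remainder {0}: {1} values", item.Remainder, item.Count);
        }
    }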


Hejlsberg says language development was in the doldrums between 1995 and 2005, when many assumed that Java was the be-all and end-all. I wonder if this is a tacit admission that C#, which he was working on during that period, is not that different in philosophy from Java? The doldrums are over and we now have an explosion of new and revived languages: Ruby, Groovy, Python, Clojure, Boo, Erlang, F#, PowerShell and more. However, Hejlsberg says it makes sense for these to run on an existing framework – in practice either the Java or .NET runtime – since the benefits are so great.

Hejlsberg also predicts that distinctions such as dynamic versus static languages will disappear as each language absorbs the best features from other languages. “Traditional taxonomies of languages are breaking down as languages pick paradigms from each other,” he says. The new language paradigm is multi-paradigm.

Just as C# has now acquired dynamic features, we can expect it to get better support for immutability in future (borrowed from functional languages).
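As a small illustration of what that convergence already looks like in C# 4.0 – my own sketch, not Hejlsberg’s – the dynamic keyword defers member binding to runtime, while immutability is still mostly a matter of discipline with readonly fields rather than something the language enforces:

    using System;

    // Immutability by convention: readonly fields set once in the constructor.
    sealed class Point
    {
        public readonly double X;
        public readonly double Y;
        public Point(double x, double y) { X = x; Y = y; }
    }

    class ConvergenceDemo
    {
        static void Main()
        {
            // C# 4.0's dynamic keyword defers member resolution to runtime,
            // a feature borrowed from dynamic languages.
            dynamic value = "hello";
            Console.WriteLine(value.Length);   // binds to string.Length at runtime

            value = new Point(3, 4);
            Console.WriteLine(value.X);        // binds to Point.X at runtime

            // Point instances cannot be modified after construction.
            var p = new Point(1, 2);
            Console.WriteLine("{0},{1}", p.X, p.Y);
        }
    }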

Building for multiple mobile platforms with one codebase

Individuals may have strong opinions about the merits of Apple iPhone versus Google Android versus the struggling Palm WebOS versus the not-yet Windows Phone 7; but sit them round a table to discuss app strategy and those diverse platforms change from a debating point to a problem. Presuming a web app won’t cut it, how do you target all those devices without the unreasonable expense and complication of managing multiple projects? The native languages are all different: Objective-C for iPhone and iPad, Java for Android and RIM BlackBerry, JavaScript for WebOS, C# for Windows Phone 7.

There are three possibilities that come to mind. One is that all the platforms will eventually allow you to write in C or C++, making this the unifying language, though you still have some fancy footwork to do overcoming library differences. Android now allows this via the NDK, and Palm via the PDK. There is currently no alternative to Java for Blackberry, and Microsoft says native code won’t be possible on Windows Phone 7, but well, you never know.

The second is Adobe Flash. This is an interesting one, because Apple prohibits Flash on the iPhone, but Adobe has a Packager for iPhone that builds native iPhone apps from Flash projects. Another issue is that although Flash is available or promised for all the main non-Apple devices – Apple’s gift of a selling point to its rivals – it is not Flash alone that does what is needed, but AIR, the “desktop” or out-of-browser runtime. This has been previewed for Android and promised for other devices including BlackBerry. AIR for Windows Phone 7? Maybe, though I’ve not seen it mentioned.

A third idea is a clever framework that does the necessary cross-compilation under the covers. This cannot depend on deploying a runtime, nor compiling to native code, because these approaches are blocked by some mobile platforms. Rhomobile has the Rhodes framework, where you code your app in HTML and Ruby and compile for devices including iPhone, Windows Mobile, RIM BlackBerry, Symbian, and Android. Rhodes includes an MVC (Model View Controller) framework and an ORM (Object Relational Mapper) to wrap database access. There is also a RhoSync server component to enable offline data with synchronisation back to the server when reconnected; and the RhoHub hosted IDE for building apps with a web browser.

Rhomobile tells me that Palm WebOS support is in the works. They also promise Windows Phone 7 support, which intrigued me because Rhomobile says Rhodes compiles to “true native device applications”. Has Rhomobile got round Microsoft’s opposition to native code? Apparently not; VP Rob McMillen eventually told me that this will mean a .NET IL (intermediate language) implementation.

The Rhomobile approach reminds me of AppForge, a company which produced the well-regarded Crossfire add-on for Visual Studio and compiled Visual Basic to a wide variety of mobile platforms. Unfortunately AppForge was acquired by Oracle, and its new owners showed callous disregard for existing customers. Not only did development cease; it also became impossible to renew existing licenses. Thanks to an activation component, that also blocked new deployment of existing applications – every developer’s nightmare.

That said, there is no activation requirement for Rhodes that I know of, and the framework is open source, so I don’t mean to suggest it will suffer a similar fate.

What about Java? On the face of it, Java should be ideal, since multi-device support is what it was designed for. It is a measure of how far Java has fallen that we hear far more about the lack of Flash on the iPhone than the lack of Java. Microsoft says yes to Flash on Windows Phone 7, though not on first release, but nothing about Java.

Java as a mobile runtime needs a strong dose of lobbying and evangelism from its new stewards Oracle if it is not to fall by the wayside in this context. Hmm, AppForge.

QCon London 2010 report: fix your code, adopt simplicity, cool .NET things

I’m just back from QCon London, a software development conference with an agile flavour that I enjoy because it is not vendor-specific. Conferences like this are energising; they make you re-examine what you are doing and may kick you into a better place. Here’s what I noticed this year.

Robert C Martin from Object Mentor gave the opening keynote, on software craftsmanship. His point is that code should not just work; it should be good. He is delightfully opinionated. Certification, he says, provides value only to certification bodies. If you want to know whether someone has the skills you want, talk to them.

Martin also came up with a bunch of tips for how to write good code, things like not having more than two arguments to a function and never a boolean. I’ve written these up elsewhere.


Next I looked into the non-relational database track and heard Geir Magnusson explain why he needed Project Voldemort, a distributed key-value storage system, to get his ecommerce site to scale. Non-relational storage, or NoSQL, is a big theme these days; database managers like CouchDB and MongoDB are getting a lot of attention. I would like to have spent more time on this track, but there was too much else on – a problem with QCon.

I therefore headed for the functional programming track, where Don Syme from Microsoft Research gave an inspiring talk on F#, Microsoft’s new functional language. He has a series of hilarious slides showing F# code alongside its equivalent in C#. Here is an example:

[Slide: the F# code fits in a small white panel, while the equivalent C# fills the rest of the slide]

Seeing a slide like this makes you wonder why we use C# at all, though of course Syme has chosen tasks like asynchronous IO and concurrent programming for which F# is well suited. Syme also observed that F# is ideal for working with immutable data, which is common in internet programming. I grabbed a copy of Programming F# for further reading.

Over on the Architecture track, Andres Kütt spoke on Five Years as a Skype Architect. His main theme: most of a software architect’s job is communication, not poring over diagrams and devising code structures. This is a consistent theme at QCon and in the Agile movement; get the communication right and all else follows. I was also interested in the technical side though. Skype started with SOAP but switched to a REST model for web services. Kütt also told us about the languages Skype uses: PHP for the web site, C or C++ for heavy lifting and peer-to-peer networking; Delphi for the Windows interface; PostgreSQL for the database.

Day two of QCon was even better. I’ve written up Martin Fowler’s talk on the ethics of software development in a separate post. Following that, I heard Canonical’s Simon Wardley speak about cloud computing. Canonical is making a big push for Ubuntu’s cloud package, available both for private use and hosted on Amazon’s servers; and attendees at the QCon CloudCamp later on were given a lavish, pointless cardboard box with promotional details. To be fair to Wardley, he did not talk much about Ubuntu’s cloud solution, though he did make the point that open source makes transitions between providers much cheaper.

Wardley’s most striking point, repeated perhaps too many times, is that we have no choice about whether to adopt cloud computing, since we will be at too much of a disadvantage if we reject it. He says it is now more a management issue than a technical one.

Dan North from ThoughtWorks gave a funny and excellent session on simplicity in architecture. He used pseudo-biblical language to describe the progress of software architecture for distributed systems, finishing with

On the seventh day God created REST

Very good; but his serious point is that the shortest, simplest route to solving a problem is often the best one, and that we constantly make the mistake of using over-generalised solutions which add a counter-productive burden of complexity.

North talked about techniques for lateral thinking, finding solutions from which we are mentally blocked, by chunking up, which means merging details into bigger ideas, ending up with “what is this thing for anyway”; and chunking down, the reverse process, which breaks a problem down into blocks small enough to comprehend. Another idea is to articulate a problem to a colleague, which exercises different parts of the brain and often stimulates a solution – one of the reasons pair programming can be effective.

A common mistake, he said, is to keep using the same old products or systems or architectures because we always do, or because the organisation is already heavily invested in it, meaning that better alternatives do not get considered. He also talked about simple tools: a whiteboard rather than a CASE tool, for example.

Much of North’s talk was a variant of YAGNI – you ain’t gonna need it – an agile principle of not implementing something until/unless you actually need it.

I’d like to put this together with something from later in the day, a talk on cool things in the .NET platform. One of these was Guerrilla SOA, though it is not really specific to .NET. To get the idea, read this blog post by Jim Webber, another from the ThoughtWorks team (yes, there are a lot of them at QCon). Here’s a couple of quotes:

Prior to our first project starting, that client had already undertaken some analysis of their future architecture (which needs scalability of 1 billion transactions per month) using a blue-chip consultancy. The conclusion from that consultancy was to deploy a bus to patch together the existing systems, and everything else would then come together. The upfront cost of the middleware was around £10 million. Not big money in the grand scheme of things, but this £10 million didn’t provide a working solution, it was just the first step in the process that would some day, perhaps, deliver value back to the business, with little empirical data to back up that assertion.

My (small) team … took the time to understand how to incrementally alter the enterprise architecture to release value early, and we proposed doing this using commodity HTTP servers at £0 cost for middleware. Importantly we backed up our architectural approach with numbers: we measured the throughput and latency characteristics of a representative spike (a piece of code used to answer a question) through our high level design, and showed that both HTTP and our chosen Web server were suitable for the volumes of traffic that the system would have to support … We performance tested the solution every single day to ensure that we would always be able to meet the SLAs imposed on us by the business. We were able to do that because we were not tightly coupled to some overarching middleware, and as a consequence we delivered our first service quickly and had great confidence in its ability to handle large loads. With middleware in the mix, we wouldn’t have been so successful at rapidly validating our service’s performance. Our performance testing would have been hampered by intricate installations, licensing, ops and admin, difficulties in starting from a clean state, to name but a few issues … The last I heard a few weeks back, the system as a whole was dealing with several hundred percent more transactions per second than before we started. But what’s particularly interesting, coming back to the cost of people versus cost of middleware argument, is this: we spent nothing on middleware. Instead we spent around £1 million on people, which compares favourably to the £10 million up front gamble originally proposed.

This strikes me as an example of the kind of approach North advocates.

You may be wondering what other cool .NET things were presented. This session, called State of the Art .NET, was given by Amanda Laucher and Josh Graham. They offered a dozen items which they consider .NET folk should be using or learning about:

  1. F# (again)
  2. M – modelling/DSL language
  3. Boo – static Python for .NET
  4. NUnit – unit testing. Little regard for Microsoft’s test framework in Team System, which is seen as a wasted and inferior effort.
  5. RhinoMocks – mocking library
  6. Moq – another mocking library
  7. NHibernate – object-relational mapping
  8. Windsor – dependency injection, part of Castle project. Controversial; some attendees thought it too complex.
  9. NVelocity – .NET template engine
  10. Guerrilla SOA – see above
  11. Azure – Microsoft’s cloud platform – surprisingly good thanks to David Cutler’s involvement, we were told
  12. MEF – Managed Extensibility Framework as found in Visual Studio 2010, won high praise from those who have tried it

That was my last session (I missed Friday) though I did attend the first part of CloudCamp, an unconference for cloud early adopters. I am not sure there is much point in these now. The cloud is no longer subversive and the next new thing; all the big enterprise vendors are onto it. Look at the CloudCamp sponsor list if you doubt me. There are of course still plenty of issues to talk about, but maybe not like this; I stayed for the first hour but it was dull.

For more on QCon you might also want to read back through my Twitter feed or search the entire #qcon tag for what everyone else thought.

Microsoft maybe gets the cloud – maybe too late

Microsoft CEO Steve Ballmer gave a talk on the company’s cloud strategy at the University of Washington yesterday. Although a small event, the webcast was widely publicised and coincides with a leaked internal memo on “how cloud computing will change the way people and businesses use technology”, a new Cloud website, and a Cloud Computing press portal, so it is fair to assume that this represents a significant strategy shift.

According to Ballmer:

about 70 percent of our folks are doing things that are entirely cloud-based, or cloud inspired. And by a year from now that will be 90 percent

I watched the webcast, and it struck me as significant that Ballmer kicked off with a vox pop video in which various passers-by were asked what they thought about cloud computing. Naturally they had no idea, the implication being, I suppose, that the cloud is some new thing that most people are not yet aware of. Ballmer did not spell out why Microsoft made the video, but I suspect he was trying to reassure himself and others that his company is not too late.

I thought the vox pop was misconceived. Cloud computing is a technical concept. What if you did a vox pop on the graphical user interface? Or concurrency? Or Unix? Or SQL? You would get equally baffled responses.

It was an interesting contrast with Google’s Eric Schmidt who gave a talk at last month’s Mobile World Congress that was also a big strategy talk; I posted about it here. Schmidt takes the cloud for granted. He does not treat it as the next big thing, but as something that is already here. His talk was both inspiring and chilling. It was inspiring in the sense of what is now possible – for example, that you can go into a restaurant, point your mobile at a foreign-language menu, and get back an instant translation, thanks to Google’s ability to mine its database of human activity. It was chilling with its implications for privacy and Schmidt’s seeming disregard for them.

Ballmer on the other hand is focused on how to transition a company whose business is primarily desktop operating systems and software to one that can prosper in the cloud era:

If you think about where we grew up, other than Windows, we grew up with this product called Microsoft Office. And it’s all about expressing yourself. It’s e-mail, it’s Word, it’s PowerPoint. It’s expression, and interaction, and collaboration. And so really taking Microsoft Office to the cloud, letting it run in the cloud, letting it run from the cloud, helping it let people connect and communicate, and express themselves. That’s one of the core kind of technical ambitions behind the next release of our Office product, which you’ll see coming to market this June.

Really? That’s not my impression of Office 2010. It’s the same old desktop suite, with a dollop of new features and a heavily cut-down online version called Office Web Apps. The problem is not only that Office Web Apps is designed to keep you dependent on offline Office. The problem is that the whole model is wrong. The business model is still based on the three-year upgrade cycle. The real transition comes when the Web Apps are the main version, to which we subscribe, which get constant incremental updates and have an API that lets them participate in mash-ups across the internet.

That said, there are parallels between Ballmer’s talk and that of Schmidt. Ballmer spoke of 5 dimensions:

  • The cloud creates opportunities and responsibilities
  • The cloud learns and helps you learn, decide and take action
  • The cloud enhances your social and professional interactions
  • The cloud wants smarter devices
  • The cloud drives server advances

In the most general sense, those are similar themes. I can even believe that Ballmer, and by implication Microsoft, now realises the necessity of a deep transition, not just adding a few features to Office and Windows. I am not sure though that it is possible for Microsoft as we know it, which is based on Windows, Office and Partners.

Someone asks if Microsoft is just reacting to others. Ballmer says:

You know, if I take a look and say, hey, look, where am I proud of where we are relative to other guys, I’d point to Azure. I think Azure is very different than anything else on the market. I don’t think anybody else is trying to redefine the programming model. I think Amazon has done a nice job of helping you take the server-based programming model, the programming model of yesterday that is not scale agnostic, and then bringing it into the cloud. They’ve done a great job; I give them credit for that. On the other hand, what we’re trying to do with Azure is let you write a different kind of application, and I think we’re more forward-looking in our design point than on a lot of things that we’re doing, and at least right now I don’t see the other guy out there who’s doing the equivalent.

Sorry, I don’t buy this either. Azure does have distinct advantages, mainly to do with porting your existing ASP.NET application and integrating with existing Windows infrastructure. I don’t believe it is “scale agnostic”; something like Google App Engine is better in that respect. With Azure you have to think about how many virtual machines you want to purchase. Nor do I think Azure lets you write “a different kind of application”: there is too little multi-tenancy, and too much of the old Windows server model remains in Azure.

Finally, I am surprised how poor Microsoft has become at articulating its message. Azure was badly presented at last year’s PDC, which Ballmer did not attend. It is not an attractive platform for small-scale developers, which makes it hard to get started.

Windows Phone 7 incompatibility may drive developers elsewhere

Microsoft’s Charlie Kindel has blogged about the Windows Phone 7 development platform.

As widely leaked, the new mobile device supports Silverlight and XNA; Kindel also mentions .NET, but since both Silverlight and XNA are .NET platforms, that might not mean anything additional.

The big story is about compatibility:

To deliver what developers expect in the developer platform we’ve had to change how phone apps were written. One result of this is previous Windows mobile applications will not run on Windows Phone 7 Series.

This puts Microsoft in an awkward position. Support for custom business apps has been one of the better aspects of Windows Mobile. What Microsoft should do is to have some way of continuing to run those old apps on the new devices. Instead, Kindel adds:

To be clear, we will continue to work with our partners to deliver new devices based on Windows Mobile 6.5 and will support those products for many years to come, so it’s not as though one line ends as soon as the other begins.

I would not take much account of this. No doubt there will be some devices, but demand for Windows Mobile will dive through the floor (if it has not already) once Phone 7 is available, making it an unattractive proposition for hardware partners.

The danger for Microsoft is that after this let-down, those with existing Windows Mobile apps that are now forced to choose a new development platform might choose one from a competitor.

The mitigation is that apps which use the Compact Framework will likely be easier to port to Windows Phone 7, because the language is the same. Native code apps are a different matter. Of course it will be technically possible to write native code apps for Windows Phone 7, but probably locked down and restricted to special cases, such as perhaps the Adobe Flash runtime (I am speculating here).

PS – I see that developer Thomas Amberg has articulated exactly these concerns in a comment to Kindel’s post:

Platform continuity was the single most important feature of Windows Mobile. Being able to run code from 2003 on a current phone is more important to our customers than a fancy UI (which Microsoft seems not able to get right anyway). Further, the ability to access hardware specific APIs through P/Invoke has been vital in many of our projects (e.g. to use Bluetooth in the early days). Those advantages have now gone. You just rendered useless years of development work and many thousands of lines of code.

"we will continue to work with our partners to deliver new devices based on Windows Mobile 6.5 and will support those products for many years to come"

You will, I bet. But which device manufacturer will produce such "dead-end" devices?

Time to switch to another mobile OS.

Microsoft .NET gotchas revealed by Visual Studio team

The Visual Studio Blog makes great reading for .NET developers, and not only because of the product it describes. Visual Studio 2010 is one of the few Microsoft products that has made a transition from native C++ code to .NET managed code – the transition is partial, in that parts of Visual Studio remain in native code, but this is true of the shell and the editor, two of the core components. Visual Studio is also a complex application, and one that is extensible by third parties. Overall the development team has put the .NET platform under real stress, which is good for the rest of us, because the developers are in a strong position both to understand problems and to get them fixed, even if that means changes to the .NET Framework.

Two recent posts interested me. One is Marshal.ReleaseComObject Considered Dangerous. I have some familiarity with this obscure-sounding topic, thanks to work on embedding Internet Explorer components. It relates to a fundamental feature of .NET: the ability to interact with the older COM component model, which is still widely used. In fact, Microsoft still uses COM for new Windows 7 APIs; but I digress. A strong feature of .NET from its first release is that it can easily consume COM objects, and also expose .NET objects to COM.

The .NET platform manages memory using garbage collection, where the runtime detects objects that are no longer referenced by active code and deletes them. COM on the other hand uses reference counting, maintaining a count of the number of references to an object and deleting the object when it reaches zero.

Visual Studio 2008 and earlier versions have lots of COM APIs. Some of these were called from .NET code which, for the sake of efficiency, called the method mentioned above, Marshal.ReleaseComObject, to reduce the reference count immediately so that the COM object would be deleted.

Now here comes Visual Studio 2010, and some of those COM APIs are re-implemented as .NET code. For compatibility with existing code, the new .NET code is also exposed as a COM API. Some of that existing code is actually .NET code which wraps the COM API as .NET code. Yes, we have .NET to COM to .NET, a double wrapper. Everything still works though, until you call Marshal.ReleaseComObject on the doubly-wrapped object. At this point the .NET runtime throws up its hands and says it cannot decrement the reference count, because it isn’t really a COM object. Oops.

The post goes on to observe that Marshal.ReleaseComObject is dangerous in any case, because it leaves you with an invalid .NET wrapper. This means you should only call it when the .NET instance is definitely not going to be used again. Obvious, really.
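A minimal defensive sketch of that advice – my own illustration, not code from the post – using only the standard Marshal APIs:

    using System;
    using System.Runtime.InteropServices;

    static class ComHelper
    {
        // Release a COM wrapper only if it really wraps a COM object and the
        // caller is certain it will not be used again afterwards.
        public static void ReleaseIfComObject(object wrapper)
        {
            if (wrapper != null && Marshal.IsComObject(wrapper))
            {
                // Decrements the runtime callable wrapper's reference count;
                // after this the .NET wrapper must not be touched again.
                Marshal.ReleaseComObject(wrapper);
            }
            // If the object is actually a managed implementation exposed via COM
            // (the double-wrapped case described above), IsComObject returns
            // false and we simply leave it to the garbage collector.
        }
    }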

Once you’ve digested that, try this illuminating post on WPF in Visual Studio 2010 – Part 2 : Performance tuning. WPF, or Windows Presentation Foundation, is the .NET API for rich graphical user interfaces on desktop Windows applications. Here is an example of why you should read the post, if you work with WPF. Many of us frequently use Remote Desktop to run applications on remote PCs or PCs that do not have a screen and keyboard attached. This is really a bad scenario for WPF, which is designed to take advantage of local accelerated graphics. Here’s the key statement:

Over a remote desktop connection, all WPF content is rendered as a bitmap. This is in contrast to GDI rendering, where primitives such as rectangles and text are sent over the wire for reconstruction on the client.

It’s a bad scenario, but mitigated if you use graphics that are amenable to compression, like solid colours. There are also some tweaks introduced in WPF 4.0, like the ability to scroll an area on the remote client, which saves having to re-send the entire bitmap if it has moved.
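One practical way to act on this – a sketch of my own, using the standard SystemParameters.IsRemoteSession property and a flag I have invented for illustration – is to detect the remote session at startup and let views choose flatter, more compressible visuals:

    using System.Windows;

    public partial class App : Application
    {
        // Views can consult this flag to prefer solid colours and simpler
        // visuals, which compress far better when sent as bitmaps over RDP.
        public static bool RunningRemotely { get; private set; }

        protected override void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);

            // SystemParameters.IsRemoteSession reports whether the application
            // is running over a Remote Desktop connection, where WPF content is
            // rendered as bitmaps rather than sent as GDI-style primitives.
            RunningRemotely = SystemParameters.IsRemoteSession;
        }
    }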