Category Archives: software development

First baby steps for Moonlight 2.0: Silverlight for Linux

Miguel de Icaza has announced the first preview release of Moonlight 2.0. This is the one that counts, in that it brings the .NET runtime to Silverlight applets running on Linux:

This is the ECMA VM running inside the browser and powering C# and any other CIL-compatible languages like Ruby, Python, Boo and others. You can use Moonlight/Silverlight as a GUI (this is what most folks do) or you can use it as the engine to power your Python/Ruby scripting in the browser.

The download page has plenty of health warnings:

Keep in mind this preview release is not feature complete. Most importantly not all security features are present or fully enabled in this release. Even existing security features have, at this stage, received only minimal testing and no security audit of the source code (mono or moonlight) has yet been done.

Undeterred, I installed it into Firefox 3.0, running on Ubuntu Linux. The download is under 9 MB. My first effort was unsuccessful; the plug-in appeared to load OK, but no Silverlight apps displayed. My second attempt, in a VM, worked. Naturally I went along to my Silverlight database example, which as it happens runs on Mono. Here it is:

This is what it should look like (Silverlight on Windows):

Well, it is only an alpha preview, and it shows. On the plus side, the data is displayed, the search works, and the buttons operate. It is a considerable achievement. But don’t plan to move your users onto Moonlight applications just yet.

Flex Builder for Linux on hold: another sign of financial stress at Adobe?

On 21st April Adobe’s Ben Forta told a user group that Flex Builder 3 for Linux is on hold, citing lack of requisition, which is corp-speak for lack of demand.

Note that the Flex SDK does run on Linux. It is just the official IDE that is in question.

Linux is a free operating system, and this could be evidence that users of a free OS are less likely to purchase software than users of a paid-for OS. Or it could simply reflect poor market share for Linux outside servers. Even if it has just hit 1%, as hitslink reports, it is still barely more than 10% of the Mac share and a little over 1% of the Windows share. Some of those Linux machines will also be netbooks – secondary systems for users with a Windows or Mac for serious work such as design and development.

Nevertheless, I suspect there is more to it than that. Adobe would presumably like to support Linux, because it wants to portray Flex as an open platform – the SDK is open source, though managed by Adobe, but the runtime engine is closed-source and proprietary. This may be another sign of Adobe’s financial stress. The company reported reduced quarter-on-quarter revenue for the three months ending February 2009, and has been cutting staff numbers.

The backdrop to this, in contrast, is that Adobe is having great success with its Flash platform. There is no sign of Microsoft’s Silverlight denting the popularity of Flash on web sites, either for applets or media streaming.

The recession then? Partly; but this is also about Adobe’s business model. Adobe does not break out its figures in detail as far as I know: the last financial statement merely shows that its revenue is nearly 95% from product sales, the rest being services and support. Still, I’d guess that the largest component of its product sales must be Creative Suite. In other words, its business model is based on selling tools and giving away runtimes. When 47 million people watch Susan Boyle on YouTube, Adobe doesn’t make a penny, even though they are almost all using Flash to do so.

The tools market is a difficult one for various reasons, including competition from free products and the fact that the number of people needing development or design tools is always much smaller than the number needing runtimes. In a recession, deferring a tools upgrade is an obvious way for businesses to save money. Remaining primarily a tools company limits Adobe’s growth and ultimately its profitability.

This is of concern to all Flash platform users. Adobe has proved to date a good steward of the technology. Some of us would like the balance of proprietary vs open tilted further towards open, but I doubt many would welcome a takeover or merger such as we have seen with Sun and Oracle (and there are a few parallels there).

There would also be many cries of “foul” if Adobe sought to further monetize Flash by starting to sell, say, a premium version of the Flash runtime.

Adobe is still a profitable company, and maybe when the economy recovers all this stress will be forgotten. Still, I’d guess that long-term Adobe will want to shift away from its dependency on sales of tools; and how and what it does to achieve that will have a big influence on the future of its RIA (Rich Internet Application) platform.

A Silverlight database application with image upload

I’ve been amusing myself creating a simple online database application using Silverlight. I had this mostly working a while back, but needed to finish off some pieces in order to get it fully functional.

This is created using Silverlight 2.0 and demonstrates the following:

  • A bound DataGrid (as you can see, work is still needed to get the dates formatted sensibly).
  • Integration with ASP.NET authentication. You have to log in to see the data, and you have to log in with admin rights to be able to update it.
  • Create, Retrieve, Update, Delete (CRUD) using ASP.NET web services.
  • Image upload using Silverlight and an ASP.NET handler.
  • Filter a DataGrid (idea taken from here).
  • Written in Visual Studio 2008, and hosted on this site, which runs Debian Linux, hence Mono and MySQL. Would you have known if I had not told you?

You can try it here. I’ll post the code eventually, but it will be a couple of months as it links in with another article.

MVP Ken Cox notes in a comment to Jesse Liberty’s blog:

Hundreds of us are scouring the Internet for a realistic (but manageable and not over-engineered) sample of manipulating data (CRUD operations) in a Silverlight 2 application. There are promising pieces of the puzzle scattered all over the place. Unfortunately, after investing time in a sample, we discover it lacks a key element – like actually saving changed data back to the database.

I can safely say that mine is not over-engineered, and that yes, it does write data.

Parallel Programming: five reasons for caution. Reflections from Intel’s Parallel Studio briefing.

I’m just back from an Intel software conference in Salzburg where the main topic was Parallel Studio, a new suite which adds Intel’s C/C++ compiler, debugging and profiling tools into Visual Studio. To some extent these are updates to existing tools like Thread Checker and VTune, though there are new features such as memory checking in Parallel Inspector (the equivalent to Thread Checker) and a new user interface for Parallel Amplifier (the equivalent to VTune). The third tool in the suite, Parallel Composer, comprises the compiler and libraries, including Threading Building Blocks and Intel Integrated Performance Primitives.

It is a little confusing. Mostly Parallel Studio replaces the earlier products for Windows developers using Visual Studio, though we were told that there are some advanced features in products like VTune that mean you might want to stick with them, or use both.

Intel’s fundamental argument is that there is little point in having multi-core PCs if the applications we run are unable to take advantage of them. Put another way, you can get remarkable performance gains by converting appropriate routines to use multiple threads, ideally as many threads as there are cores.
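To make that concrete, here is a minimal, hypothetical sketch of the kind of loop-level parallelism Intel means – not code from the briefing, just an OpenMP example (OpenMP is one of the models Intel’s compiler supports) of splitting an independent loop across cores:

    // Hypothetical sketch: an "embarrassingly parallel" loop split across cores with OpenMP.
    // Build with an OpenMP-aware compiler, e.g. icl /Qopenmp or g++ -fopenmp.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 10000000;
        std::vector<double> data(n, 1.0);

        // Every iteration is independent, so the runtime can hand chunks to each core.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i) {
            data[i] = std::sqrt(data[i]) * 2.0;
        }

        std::printf("first element: %f\n", data[0]);
        return 0;
    }

Nothing about the loop changes except the pragma; the compiler and runtime handle splitting the work across however many cores are available.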

James Reinders, Intel’s Chief Evangelist for software products, introduced the products and explained their rationale. He is always worth listening to, and did a good job of summarising the “free lunch is over” argument, and explaining Intel’s solution.

That said, there are a few caveats. Here are five reasons why adding parallelism to your code might not be a good idea:

1. Is it a problem worth solving? Users only care about performance improvements that they notice. If you have a financial analysis application that takes a while to number-crunch its data, then going parallel is a big win. If your application is a classic database forms client, it is probably a waste of time from a performance perspective. You care much more about how well your database server is exploiting multiple threads on the server, because that is likely to be the bottleneck.

There is another reason to do background processing, and that is to keep the user interface responsive. This matters a lot to users. Intel said little about this aspect; Reinders told me it is categorised as “convenience parallelism”. Nevertheless, it is something you probably should be doing, though it requires a different approach from parallelising for performance.

2. Will it actually speed up your app? There is an overhead in multi-threading, as you now have to manage the threads as well as perform your calculations. The worst case, according to Reinders, is a dual-core machine, where you have all the overhead but only one additional core. If the day comes when we routinely have, say, 64 cores on our desktop or laptop, then the benefit becomes overwhelming.

3. Is it actually desirable on a multi-tasking operating system? Consider this: an ideally parallelised application, from a performance perspective, is one that uses 100% CPU across all cores until it completes its task. That’s great if it is the only application you are running, but what if you started four of these guys (same or different applications) simultaneously on a quad-core system? Now each application is contending with others, there’s no longer a performance benefit, and most likely the whole system is going to slow down. There is no perfect solution here: sometimes you want an application to go all-out and grab whatever CPU it needs to get the job done as quickly as possible, while sometimes you would prefer it to run with lower priority because there are other things you care about more, such as a responsive operating system, other applications you want to use, or energy efficiency.

This is where something like Microsoft’s concurrency runtime (which Intel will support) could provide a solution. We want concurrent applications to talk to the operating system and to one another, to optimize overall use of resources. This is more promising than simply maxing out on concurrency in every individual application.
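As a crude illustration of the tension – purely a hypothetical sketch, not how the concurrency runtime works – an individual application can at least throttle its own appetite, for example by capping its OpenMP thread count, but it has no way of knowing what the rest of the system actually needs:

    // Hypothetical sketch: one process voluntarily leaving cores free for other applications.
    // Requires an OpenMP-aware compiler; a system-wide runtime could coordinate this properly,
    // whereas here each process has to guess on its own.
    #include <omp.h>
    #include <cstdio>

    int main() {
        int procs = omp_get_num_procs();        // logical processors visible to this process
        int cap = procs > 1 ? procs / 2 : 1;    // arbitrary policy: use half of them
        omp_set_num_threads(cap);

        #pragma omp parallel
        {
            #pragma omp single
            std::printf("using %d of %d processors\n", omp_get_num_threads(), procs);
        }
        return 0;
    }

It is a blunt instrument, which is exactly why a shared runtime that arbitrates between processes looks more promising.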

4. Will your code still run correctly? Edward Lee argues in a well-known paper, The Problem with Threads, that multi-threading is too dangerous for widespread use:

Many technologists are pushing for increased use of multithreading in software in order to take advantage of the predicted increases in parallelism in computer architectures. In this paper, I argue that this is not a good idea. Although threads seem to be a small step from sequential computation, in fact, they represent a huge step. They discard the most essential and appealing properties of sequential computation: understandability, predictability, and determinism. Threads, as a model of computation, are wildly nondeterministic, and the job of the programmer becomes one of pruning that nondeterminism. Although many research techniques improve the model by offering more effective pruning, I argue that this is approaching the problem backwards. Rather than pruning nondeterminism, we should build from essentially deterministic, composable components. Nondeterminism should be explicitly and judiciously introduced where needed, rather than removed where not needed.

I put this point to Reinders at the conference. He gave me a rather long answer, saying that it is partly a matter of using the right libraries and tools (Parallel Studio, naturally), and partly a matter of waiting for something better:

Lee articulates the dangers of threading. Did we magically fix it or do we really know what we’re doing in inflicting this on the masses? It really comes down to determinism. If programmers make their program non-deterministic, getting out of that mess is something most programmers can’t do, and if they can it’s horrendously expensive.

He’s right, if we stayed with Windows threads and Pthreads and programming at that level, we’re headed for disaster. What you need to see is tools and programming templates that avoid that. The evil thing is what we call shared mutable state. When you have things happening in parallel, the safest thing you can do is that they’re totally independent. This is one of the reasons that parallelism on servers works so well, in that you do lots and lots of transactions and they don’t bump into each other, or they only interface through the database.

Once we start opening up shared mutable state, encouraging threading, we set ourselves up for disaster. Parallel Inspector can help you figure out what disasters you create and get rid of them, but ultimately the answer is that you need to encourage people to use programming like OpenMP or Threading Building Blocks. Those generally guide you away from those mistakes. You can still make them.

One of the open questions is can you come up with programming techniques that completely avoid the problem? We do have one that we’ve just started talking about called Ct … but I think we’re at the point now where OpenMP and Threading Building Blocks have proven that you can write code with that and get good results.

Reinders went on to distinguish between three types of concurrent programming, referring to some diagrams by Microsoft’s David Callaghan. The first is explicit, unsafe parallelism, where the developer has to do it right. The second is explicit, safe parallelism. The best approach according to Reinders would be to use functional languages, but he thinks it unlikely that they will catch on in the mainstream. The third type is implicit parallelism that’s safe, where the developer does not even have to think about it. An example is the math kernel library in IPP (Intel Integrated Performance Primitives) where you just call an API that returns the right answers, and happens to use concurrency for its work.

Intel also has a project called Ct (C/C++ for Throughput), a dynamic runtime for data parallelism, which Reinders considers also falls into the implicit parallelism category.

It was a carefully nuanced answer, but proceed with caution.
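To make the shared mutable state danger concrete, here is a small hypothetical sketch of my own (not Intel’s code) showing the trap Reinders describes, and the kind of fix OpenMP’s reduction clause steers you towards:

    // Hypothetical sketch: shared mutable state makes the result nondeterministic;
    // OpenMP's reduction clause gives each thread a private copy and combines them safely.
    #include <cstdio>

    int main() {
        const int n = 1000000;
        long long sum = 0;

        // Dangerous version (shared mutable state) - threads race on the same variable:
        //   #pragma omp parallel for
        //   for (int i = 0; i < n; ++i) sum += i;

        // Safer version - each thread accumulates privately, results combined at the end:
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; ++i) {
            sum += i;
        }

        std::printf("sum = %lld\n", sum);    // deterministic: always 499999500000
        return 0;
    }

Swap in the commented-out loop and the total can vary from run to run as threads overwrite one another; the reduction version gives the same answer every time.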

5. Will your application need a complete rewrite? This is a big maybe. Intel’s claim is that many applications can be updated for parallelism with substantial benefits. A guy from Nero did a presentation though, and said that an attempt to parallelise one of their applications, a media transcoder, had failed because the architecture was not right, and it had to be completely redone. So I guess it depends.

This brings to mind another thing which everyone agrees is a hard challenge: how to design an application for effective parallelism. Intel has a tool in preparation called Parallel Advisor, to be part of Parallel Studio at a future date, which is meant to identify candidates for parallelism, but that will not be a complete answer.

Go parallel, or not?

None of the above refutes Intel’s essential point: that effective concurrent programming is essential to the future of computing. This is an evolutionary process though, and at this point there is every reason to be cautious rather than madly parallelising every piece of code you touch.

Additional Links

Microsoft has a handy Parallel Computing home page.

David Callaghan: Design considerations for Parallel Programming

The end of Sun’s bold open source experiment

This is a sad day for Sun. It sought to re-invent its business through open source; and the experiment has failed, culminating not in a re-invigorated company, but in acquisition by an old-school proprietary software company, Oracle.

It is possible to build a successful business around open source software. Zend is doing it with PHP; Red Hat has done it with Linux. These are smaller companies though, and they have not tried to migrate an older business built on a proprietary model. A further complication is that Sun is a hardware business, and although open source is an important part of its hardware strategy as well as its software strategy, it is a different kind of business.

Maybe the strategy was good, but it was the recession, or the server market, that killed Sun. In the end it does not make any difference: the outcome is what counts.

Reading the official overview of the deal, I see lots of references to “open” and “standards-based”, which mean nothing, but no mention of open source.

The point of interest now is what happens to Sun’s most prominent open source projects: OpenOffice.org, MySQL, Java and OpenSolaris. Developers will be interested to see what happens to NetBeans, the open source Java IDE, following the Oracle acquisition, and how it will relate to Oracle’s JDeveloper IDE. These open source projects have a momentum of their own and are protected by their licenses, but a significant factor is what proportion of the committers – those who actually write the software and commit their changes to the repository – are Sun employees. Although it is not possible to take back open source code, it is possible to reduce investment, or to start creating premium editions available only to commercial subscribers, which already appeared to be part of MySQL’s strategy.

I presume that both OpenOffice and Java will feature in Oracle’s stated intention to build an end-to-end integrated solution:

Oracle will be the only company that can engineer an integrated system – applications to disk – where all the pieces fit and work together so customers do not have to do it themselves. Our customers benefit as their systems integration costs go down while system performance, reliability and security go up.

says CEO Larry Ellison, who likewise makes no mention of open source. This will involve invading Microsoft’s turf – something Sun was always willing to do, but not particularly successful at executing.

The best outcome for the open source community will be if Oracle continues to support Sun’s open source projects along the same lines as before. Even if that happens, the industry has lost a giant of the open source world.

Some good comments from Redmonk’s Michael Coté here.

Is Silverlight the problem with ITV Player? Microsoft, you have a problem.

I sat down last night to watch a programme on ITV’s catch-up service, using the Silverlight-based ITV Player. It was watchable, but not too good. Now and again the stream would pause for buffering, and I saw the Silverlight busy icon for a while. Quality is also an issue. Sometimes it is great; sometimes it is horribly pixelated.

I took a look at the ITV forums. It seems to be a common problem. The Best of ITV section is dominated by complaints. Some are from an aggrieved minority running Linux or PowerPC Macs, but there are plenty of others. My experience is relatively good; other reported issues include broadcasts that only play the ads, codec problems, or streams failing completely half way through a programme. Here’s a sample:

Believe me guys even if you had Windows OS the player still wouldn’t work its completely rubbish; 6 times i’ve tried to watch Britains Got Talent and it either vanishes, or skips etc.
Rubbish, rubbish, rubbish! BBC iPlayer is excellent compared to this, i’m quite disappointed!

Readers of this blog will know that I have nothing against Silverlight, though my interest is more in the application development side than video streaming. Still, the impact of one on the other should not be discounted. You can guess what the pundits in the ITV forum are calling for. It’s Adobe Flash, because they have seen it working well for the BBC and elsewhere.

Now picture the development team as it weighs up whether to use Flash or Silverlight for an upcoming RIA (Rich Internet Application) project. If the exec responsible struggled to watch ITV Player the night before, thanks as far as she can tell to the Silverlight plug-in, that becomes a factor in the outcome.

I understand why people blame Silverlight for these problems; but I realise that this may be wrong, cross-platform issues aside. Maybe ITV has inadequate servers; or there is some other technical issue, and Silverlight is innocent.

If you know the answer to this, please let me know or comment below.

Microsoft must realise, though, that this is the most visible use of Silverlight for many UK folk. Some may also remember how BBC iPlayer transformed its reputation when it moved from using primarily Microsoft technology – though not Silverlight, and made worse by poor peer-to-peer client software – to Adobe’s Flash platform. I suggest that Redmond’s finest give it some attention; though who knows, it may be too late.

RIA (Rich Internet Applications): one day, all applications will be like this

I loved this piece by Robin Bloor on The PC, The Cloud, RIA and the future. My favourite line:

Nowadays very few Mac/PC users have any idea where any program is executing.

And why should they? Users want stuff to just work, after all. Bloor says more clearly than I have managed why RIA is the future of client computing. He emphasises the cost savings of multi-tenancy, and the importance of offline capability; he says the PC will become a caching device. He thinks Google Chrome is significant. So do I. He makes an interesting point about piracy:

All apps will gradually move to RIA as a matter of vendor self interest. (They’d be mad not to, it prevents theft entirely.)

Bloor has said some of this before, of course, and been only half-right. In 1997 he made his remark that

Java is the epicenter of a software earthquake, and the shockwaves are already shaking the foundations of the software industry.

predicting that Java browser-hosted or thin clients would dominate computing; he was wrong about Java’s impact, though perhaps he could have been right if Sun had evolved the Java client runtime to be more like Adobe Flash or Microsoft Silverlight, prior to its recent hurried efforts with JavaFX. I also suspect that Microsoft and Windows have prospered more than Bloor expected in the intervening 12 years. These two things may be connected.

I think Bloor is more than half-right this time round, and that the RIA model with offline capability will grow in importance, making Flash vs Silverlight vs AJAX a key battleground.

Google’s cut-down Java: wanton and irresponsible, or just necessary?

Sun’s Simon Phipps stirred things up last weekend when he called Google’s actions wanton and irresponsible. Its crime: delivering a cut-down Java library for use on its App Engine platform, “flaunting the rules” which forbid creating sub-sets of the core classes.

It does sound as if Google is not talking to Sun as much as it might. Still, let’s note that Google has good reason to omit certain classes or methods. App Engine is a distributed, shared environment; this means that some things make no sense – for example, writing to a local file – and other things may be unacceptable, such as grabbing a large slice of CPU time for an extended period.

Salesforce.com addressed this same issue by inventing a new language, called Apex. It’s Java-like, but not Java. The company therefore avoided accusations of creating an incompatible Java, and conveniently ensured that Apex code would run only on Force.com, at least until someone attempts to clone it.

Google’s approach was to use Java, but leave a few things out. This FAQ gives an overview; and the article Will it play in App Engine lists common frameworks and libraries with notes on whether they work. Given that languages like JRuby, Groovy and Rhino work fine, it’s clear that core App Engine Java is not too badly damaged. The big omissions are JDBC (because you are meant to use the App Engine datastore, which is not relational), and Enterprisey things like JMS, EJB and JNDI. Google is nudging, or shoving, developers towards RESTful APIs along with its built-in services.

Will you be able to escape App Engine if you have a change of heart after deployment? I’d guess that porting the code will not be all that hard. Perhaps the biggest lock-in is with identity; you could roll your own I guess, but Google intends you to use Google accounts and supplies a Java API. Microsoft is ahead of Google here since it does support federated identity, if you can get your head round it: you can authenticate users in the Microsoft cloud against your own directory using Geneva. The best Google can offer is Directory Sync; though even that is some protection from identity lock-in.

Java support on App Engine is actually a vote of confidence in Java; if what is good for Java is good for Sun, then Sun is a winner here. That said, just where is the benefit for Sun if companies host Java applications, built with Eclipse, on Google’s platform? Not much that I can see.


Salesforce.com = CRM + platform?

Organizations evolve; and that can be an untidy process. Salesforce.com started out as an online application for CRM (Customer Relationship Management), and that remains its core business, as suggested by its name. Seeing its success, observers naturally asked whether the company would break out of that niche to service other needs, such as ERP (Enterprise Resource Planning). Sometimes there were hints that this is indeed the case; I recall being told by one of the executives last year that if the company was still called Salesforce.com in five years’ time it would have failed. However, rather than developing new applications itself, the company has chosen to encourage third parties to do this, by opening its underlying platform. The platform is called Force.com, and supports its own programming language called Apex. Third-party applications are sold on the AppExchange, and either extend the CRM functionality or address new and different areas. According to CEO Marc Benioff this morning, there are now 750 applications on AppExchange.

A question I’ve asked a couple of times is whether Salesforce.com gives any assurance to its 3rd party partners that it will not compete with them by rolling features similar to those in an AppExchange offering into its core platform. I’ve not received a clear answer, though EMEA co-president Lyndsey Armstrong told me last year that it just was not an issue; and Benioff today at Cloudforce told me it has not proved to be a problem so far. It is an interesting question though, since if Salesforce.com did choose to expand into new application areas, this kind of competition would be all but inevitable. It therefore seems to me that the company is more interested in growing its platform business, and continuing to grow its CRM business, than in addressing new kinds of online applications itself. There were also broad hints today that the company intends to improve its platform as an application server.

Let’s speculate for a moment. What if Salesforce gets acquired, say by Oracle, a move which would not be unexpected? If such a thing happened, it would make sense for existing Oracle applications like the E-business Suite or PeopleSoft Enterprise to get extended or merged or migrated into Force.com. That might be less comfortable for AppExchange 3rd parties.

Tim Bray’s contrarian views on Rich Internet Applications

There’s a thought-provoking interview with Sun’s Tim Bray over on the InfoQ site. One of his points is that Rich Internet Applications aren’t worth the hype. He says that web applications are generally better than desktop applications, because they enforce simplicity and support a back button, and that users prefer them. He adds:

Over the years since then I have regularly and steadily heard them saying: "We need something that is more immersive, more responsive, more interactive". Every time without exception that somebody said that to me, they have either been a developer or a vendor who wants to sell the technology that is immersive or responsive, or something like that. I have not once in all those years heard an ordinary user say "Oh I wish we go back to before the days of the web when every application was different and idiosyncratic … ".

In further gloomy news for advocates of Adobe Flash, Microsoft Silverlight or Sun’s own JavaFX, he adds:

I suspect that the gap in the ecosystem that lies between what you could achieve with Ajax and what you need something like Flash or JavaFX or Silverlight to achieve is not that big enough to be terribly interesting.

I think there is a lot of truth in what he says, and I still regularly see Flash applications or Flash-enabled sites where I wish the developers or designers had not bothered. Nevertheless, I don’t go along with it completely. I’m typing this post in Live Writer, a desktop application, when I could be using the WordPress online editor. The reason is that I much prefer it. It is faster, smoother, and easier to use.

Another example is Twitter clients. I use Twhirl though I may switch to Tweetdeck; both are Flash (AIR) applications running as it happens outside the browser. I’d hate to go back to interacting with Twitter only through web pages.

I agree that there is some convergence going on between what we loosely call Ajax and the RIA plug-ins; Yahoo Pipes apparently uses the HTML 5 Canvas element, for example, using this Google Code script for IE support. I’m glad there is a choice of RIA platforms, but I don’t see either Flash or Silverlight going away in the foreseeable future.

It’s worth recalling that the RIA concept began with the notion that a rich user interface can be more productive and user-friendly than an HTML equivalent. I’ve written a fair amount about the legendary iHotelier Broadmoor Hotel booking application which kind-of kicked it off – and I’ve interviewed the guy who developed it – and it was undoubtedly motivated by the desire to improve usability. As far as I can tell it achieved its goals, which were easy to measure in that online bookings increased.

Multimedia, rich visual controls, Deep Zoom, offline support, pixel-level control of the UI; there’s a lot of stuff in what we currently call RIA that is worthwhile when used appropriately.

Another twist on this is that RIA is enabling a more complete move to web applications, by reducing the number of applications that cannot be delivered either in the browser or as offline-enabled Flash or Silverlight apps.

Still, Bray is right to imply that RIAs also increase the number of ways developers can get the UI wrong; and that in many cases HTML with a dash of Ajax is a better choice.

I think the RIA space is more significant than Bray suggests; but his comments are nonetheless a useful corrective.