Category Archives: professional

Continuous Integration vs Continuous Delivery vs Continuous Deployment: what is the difference?

I am reading the excellent book Continuous Delivery by Jez Humble and David Farley. But what is Continuous Delivery and how does it differ from the other “continuous” development methodologies?

It helps to understand that all these methodologies spring from the Agile software development movement, and the expression Continuous Delivery comes straight from the first of the principles behind the Agile Manifesto:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Now, the starting assumption is that most software projects integrate a number of smaller projects, whether from third parties or from team members. Since these pieces are developed to some extent independently, there is a risk that changes made to one piece will require modifications to another; hence, according to Humble and Farley:

Most software developed by large teams spends a significant proportion of its development time in an unusable state.

The business of getting all the parts to work together is called integration, and if this involves serious work you need a dedicated integration phase whose sole objective is making everything work together. This is a bad idea for all sorts of reasons: it slows development and prevents proper testing except at the end of each integration phase.

The solution is called Continuous Integration (CI). You have a frequent automated build that assembles all the pieces from all the teams into a working application. If the build fails, or if automated tests run against the build fail, then this is a bug that should be fixed immediately, not later in some separate phase of development.
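
To make this concrete, here is a toy sketch of the kind of check a CI server runs on every commit. The function and test are my own illustration (in C++, though nothing here is language-specific); the point is that the server rebuilds everything from scratch, runs test binaries like this, and treats a non-zero exit code as a broken build.

    #include <cassert>
    #include <string>

    // Hypothetical function from one team's component, for illustration only.
    std::string greet(const std::string& name) {
        return "Hello, " + name;
    }

    // The CI server rebuilds the whole application on every check-in and runs
    // test executables like this one; a failed assert aborts with a non-zero
    // exit code, which marks the build as broken, to be fixed immediately.
    int main() {
        assert(greet("world") == "Hello, world");
        return 0; // the build stays green
    }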

Tools for CI include CruiseControl, Hudson, TeamCity and others; .NET developers can also configure Visual Studio Team Foundation Server for CI.

The problem with CI alone is that the development environment is not the same as the production environment. What if the CI build works and tests pass, but once deployed the application breaks or performs badly? Perhaps the development environment runs a multi-tier application with all the tiers on a single box, but when deployed onto actual multiple machines or VMs, something goes wrong. Permission problems are another common source of errors.

Continuous Delivery means that you not only build the software, but also deploy it frequently. This usually means provisioning servers, which you can automate using a tool like Puppet for Unix-like servers, or with Virtual Lab Management in a Visual Studio environment. Automation is pretty much essential for this to work. The more closely the test environment matches the production environment, the better.
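
The book's central idea is the deployment pipeline: a series of automated stages, each of which must pass before the release progresses. As a rough sketch of the shape of it (the stage names and structure are mine, not the book's tooling):

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // A pipeline stage: a name plus an automated action that passes or fails.
    struct Stage {
        std::string name;
        std::function<bool()> run;
    };

    // Run the stages in order; any failure halts the release immediately.
    bool runPipeline(const std::vector<Stage>& stages) {
        for (const auto& stage : stages) {
            std::cout << "Running stage: " << stage.name << '\n';
            if (!stage.run()) {
                std::cout << stage.name << " failed; release stopped\n";
                return false;
            }
        }
        return true;
    }

    int main() {
        std::vector<Stage> pipeline{
            { "commit build",       [] { return true; } }, // compile + unit tests
            { "acceptance tests",   [] { return true; } }, // automated functional tests
            { "deploy to test env", [] { return true; } }, // environment matching production
        };
        return runPipeline(pipeline) ? 0 : 1;
    }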

Generally though, Continuous Delivery means deployment to a test environment. What about taking the next step, and deploying continuously to production? That is the methodology called Continuous Deployment. It sounds risky; but if you have a very extensive and thorough set of automated tests, the risks are mitigated, especially as each individual deployment contains fewer changes.

Other suggestions for reducing risk include deploying to a small subset of users first, called “canary testing”; and making rollback easy.
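
To illustrate the canary idea (the function and the five per cent threshold are my invention, not a standard API): hash each user to a stable bucket, route a small fraction of them to the new deployment, and widen the fraction as confidence grows.

    #include <functional>
    #include <string>

    // Returns true if this user should see the new (canary) deployment.
    // Hashing the user id means each user consistently sees the same version.
    bool routeToCanary(const std::string& userId, double canaryFraction = 0.05) {
        std::size_t bucket = std::hash<std::string>{}(userId) % 100;
        return bucket < static_cast<std::size_t>(canaryFraction * 100);
    }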

That said, to judge by the Humble/Farley book the distinction between Continuous Delivery and Continuous Deployment is just a little blurred. The authors acknowledge that continuous deployment into production is not always a good idea. They also imply that Continuous Deployment might mean only that your application is always ready and easy to deploy into production, not that you necessarily deploy it constantly:

Your implementation should make it possible to deploy any version of your application that has made it past the automated tests into any of your environments at the push of a button, given the correct credentials.

Compliance and security are also factors that may rightly make it impossible to automate deployment to production completely.

Client-installed applications present some special difficulties which Humble and Farley discuss.

In summary then:

Continuous integration: your application always builds and passes its tests, including all the pieces from different sub-teams.

Continuous delivery: your application always builds and deploys to a test environment and passes its tests.

Continuous deployment: your application is always ready to deploy to production through a largely automated process.

Update: I received an email from Martin Fowler about this post. He refers to Jez Humble’s post on Continuous Delivery vs Continuous Deployment and adds:

– I would use your definition of Continuous Deployment for Continuous Delivery

– I would change the definition of Continuous Deployment to say something like "every good build is released to production"

However, I clarified with him that if you are building for a test environment but are confident that any build which passes would be OK to deploy to production, then you are still doing Continuous Delivery. In the end, while I am sure you should use Fowler and Humble’s definitions rather than mine, it seems to me a fine distinction: if you are doing Continuous Delivery properly, the transition to Continuous Deployment is largely a matter of policy.

HP business breakdown and why a PC spin-off could backfire

I had a look at HP’s latest financials, following last night’s triple blast of news from the computer giant. It is ceasing webOS operations, acquiring enterprise knowledge management company Autonomy, and considering (though only considering) a spin-off or other major change to its PC division, the Personal Systems Group. Here is what HP said:

As part of the transformation, HP announced that its board of directors has authorized the exploration of strategic alternatives for the company’s Personal Systems Group. HP will consider a broad range of options that may include, among others, a full or partial separation of PSG from HP through a spin-off or other transaction. (See accompanying press release.)

Looking at the results for the second quarter 2011, here is how the main pieces break down:

Figures in $ millions, second quarter 2011:

Segment                            Revenue   % of revenue   Earnings   % of total earnings
Services                             9,089          28.5%      1,225                 33.8%
Servers, storage and networking      5,396          16.9%        699                 19.3%
HP Software                            780           2.4%        151                  4.2%
Personal Systems Group               9,592          30.1%        567                 15.7%
Imaging and Printing                 6,087          19.1%        892                 24.6%
Financial Services                     932           2.9%         88                  2.4%

Note that “Earnings” is earnings from operations; HP actually made less money than that, because various other corporate costs have to be deducted. But it gives an idea of where HP’s profit comes from.

So what do these groups do? PSG is notebooks, desktops, workstations and “other”, which I would guess includes the webOS mobile devices. Within PSG, notebooks account for 54% of revenue, with desktops taking a further 38%. Virtually all of these machines run Windows.

In servers, storage and networking, 61% is from what HP calls “Industry standard servers”. This is code for Windows server.

Under services, the three big businesses are Infrastructure Technology Outsourcing (42%), Technology Services (30%) and Application Services (19%). The first of these is clear-cut (have HP run your infrastructure), but the other two are both consulting services and, on a brief look, seem to overlap.

Autonomy, by the way, reported revenue of $247 million in the three months ending June 30 2011 – pretty tiny relative to HP.

A few comments then. It’s worth noting that PSG is the biggest single segment for revenue, but not so for profit, though it is still making a useful contribution.

Of the larger segments, Imaging and Printing has the highest earnings as a proportion of revenue. I do not know how much of that comes from absurdly overpriced ink cartridges!

If you take PSG together with Industry Standard Servers, you find that around 40% of HP’s revenue comes from boxes running Windows. If you then consider what its printers, network and storage systems attach to, and that a proportion of HP’s consulting business concerns Windows systems and applications, it is obvious that HP’s fortunes are deeply entwined with Microsoft.

If HP removes PSG that will still be true, though less so. But why would HP want to remove PSG? I would guess two main reasons. One is that it is less profitable than the other segments, and the other is that HP foresees the business declining under various well-documented pressures: Apple, mobile, cloud.

It still makes little sense to me. I can understand why HP might want to get out of consumer desktops and laptops, but supplying corporate PCs seems to me to fit snugly with the rest of HP’s business, with beneficial side-effects. After all, PCs, printers and servers do all plug together, both physically and conceptually. Getting rid of PSG might have a negative effect on other parts of HP’s business.

In the SMB market, by the way, resellers like HP because unlike Dell it does not mainly sell direct. HP boxes generally work as advertised in my experience, though I rate the laptops less highly than the servers and desktops.

HP discontinues webOS, considers PC spin-off. Should have stuck with Microsoft

Oh yes, and buys Autonomy, a fast-growing specialist in enterprise knowledge management.

Here’s the news from HP’s announcement:

As part of the transformation, HP announced that its board of directors has authorized the exploration of strategic alternatives for the company’s Personal Systems Group. HP will consider a broad range of options that may include, among others, a full or partial separation of PSG from HP through a spin-off or other transaction. (See accompanying press release.)

HP will discontinue operations for webOS devices, specifically the TouchPad and webOS phones. The devices have not met internal milestones and financial targets. HP will continue to explore options to optimize the value of webOS software going forward.

In addition, HP announced the terms of a recommended transaction for all of the outstanding shares of Autonomy Corporation plc for £25.50 ($42.11) per share in cash.

A few quick comments. First, the failure of webOS does not surprise me. There is not much wrong with webOS as such; in pure technical terms it deserves better. Its focus on adapting web technologies for local mobile applications is far-sighted; it is a more interesting operating system than Android and in some ways it is surprising that it went to HP and not to Google, which is a web technology specialist.

The problem is that HP, despite its size, is not big enough to make a success of webOS on its own. This was my comment from just over a year ago:

Mobile platforms stand (or fall) on several pillars: hardware, software, mobile operator partners, and apps. Apple is powering ahead with all of these. Google Android is as well, and has become the obvious choice for vendors (other than HP) who want to ride the wave of a successful platform. Windows Phone 7 faces obvious challenges, but at least in theory Microsoft can make it work though integration with Windows and by offering developers a familiar set of tools, as I’ve noted here.

It is obvious that not all these platforms can succeed. If we accept that Apple and Android will occupy the top two rungs of the ladder when it comes to attracting app developers, that means HP webOS cannot do better than third; and I’d speculate that it will be some way lower down than that.

Frankly, if HP did not want to do Android, it should have stuck with Microsoft. But this is where the webOS news ties in with the announcement about the Personal Systems Group. HP fell out with Microsoft last year, as I noted in my 2010 retrospective. I said the two companies should make up; but it looks as if HP is more inclined to give up on PCs and pursue other lines that have better margins – like enterprise software.

I am puzzled though by the PSG announcement. It is always curious when a company announces that it might or might not do something; the fact that HP says it is considering a spin-off of its PC division will be enough to make its customers uncertain about the long-term future of HP PCs, and some of them will buy elsewhere as a result. It would have paid HP either to say nothing, or to be more definite and aim for a speedy transition.

All this, on the eve of Microsoft’s detailed unveiling of Windows 8. What are the implications? More than I can put into a single post; but like Gartner’s reports of dramatically declining PC sales in Western Europe presented earlier this week, this is a sign of structural change in the industry.

Microsoft will be glad of one thing: it no longer has this major partner promoting a rival mobile and tablet operating system. Note that HP still is a major partner: even if it sells the Personal Systems Group, its server and services business will still be deeply entwined with Windows.

Reports of 19% decline in Western European PC market show structural change

As if we needed telling, a new Gartner report shows a steep decline in the PC market in Western Europe. A “PC” in this context includes Macs but excludes smartphones and what Gartner called “media tablets”, mostly Apple iPads. A few figures comparing shipments in the second quarter 2011 with the same period in 2010:

  • Total PC sales down 18.9%
  • Netbook sales down 53%
  • Desktop PCs down 15.4%
  • Apple up 0.5%
  • Consumer PC market down 27%

What interests me here is not so much the normal ebbing and flowing of the PC market, but structural change indicating a switch away from PCs and laptops to more lightweight mobile devices. I believe this is evidence of that, though the economy is weak and extending the life of existing PCs is an obvious saving both for businesses and consumers.

Still, the dramatic decline in netbook sales suggests that consumers really are buying the more expensive iPad in preference. If you believe that consumers are to some extent ahead of business in their technology choices, then we can expect more of the same in the corporate market too.

No doubt alarm bells have been ringing in Microsoft’s Redmond headquarters for some time. The company is betting on Windows 8 to rescue its operating system from permanent decline, which is why next month’s BUILD conference is so critical. Nevertheless, it will be a year or so before we get new-style tablets running Windows 8, so will it be too late? I tend to think not, just because of the strength of Microsoft in the business world and the importance of Windows for existing applications, but it is interesting to speculate.

One factor which you can argue either way, in terms of Microsoft’s prospects, is that non-iPad tablets seem to be struggling. HP’s TouchPad and RIM’s PlayBook seem to be selling poorly. Google Android looks more hopeful though overshadowed by legal concerns from multiple sources. In Australia and parts of Europe Apple has successfully barred or delayed sales of Samsung’s Galaxy Tab 10.1, though the latest news is that the ban has been lifted outside Germany.

See also: Fumbling tablet computing – Microsoft’s biggest mistake?

Google is now a hardware company as it announces acquisition of Motorola Mobility and its patents

Google is to acquire Motorola Mobility, a major manufacturer of Android handsets. Why? I believe this is the key statement:

We recently explained how companies including Microsoft and Apple are banding together in anti-competitive patent attacks on Android. The U.S. Department of Justice had to intervene in the results of one recent patent auction to “protect competition and innovation in the open source software community” and it is currently looking into the results of the Nortel auction. Our acquisition of Motorola will increase competition by strengthening Google’s patent portfolio, which will enable us to better protect Android from anti-competitive threats from Microsoft, Apple and other companies.

What are the implications? This will assist Google in the patent wars, and perhaps give it some of the benefits of vertical integration that Apple enjoys with iOS; though that last will be difficult to pull off. The more Google invests in Google Motorola, the more it will upset other Android partners. Google CEO Larry Page says:

This acquisition will not change our commitment to run Android as an open platform. Motorola will remain a licensee of Android and Android will remain open. We will run Motorola as a separate business.

It is unlikely to be so simple; and the main winner I foresee from today’s announcement is Microsoft. Nokia’s decision to embrace Windows Phone rather than Android looks smarter today, since for all its faults Microsoft has a history of working with multiple hardware vendors. The faltering launches of HP’s TouchPad and RIM’s PlayBook have also worked in Microsoft’s favour. I do not mean to understate Microsoft’s challenge in competing with Apple and Android, but I believe it has a better chance than either HP or RIM, thanks to its size and existing market penetration with Windows.

Microsoft will be clarifying its mobile and slate strategy next month at the BUILD conference.

Today’s announcement is also a sign that Google takes Android’s patent problems seriously, as indeed it should. The company’s policy of act first, seek forgiveness later seems to be unravelling. Oracle has a lawsuit against Google with respect to use of Java in Android that looks like it will run and run. FOSS patent expert Florian Mueller argues today that Android also infringes the Linux license, and that this is a problem that cannot easily be fixed. Samsung’s latest Galaxy Tab has been barred from the EU; not entirely a Google issue, but it runs Android.

Note of clarification: Google is acquiring Motorola Mobility, not the whole of Motorola. In January 2011 Motorola split into two businesses. Motorola Mobility is one, revenue in second quarter 2011 around $3.3 billion. The other is Motorola Solutions, revenue in second quarter 2011 around $2 billion.

Adobe Muse: so what is wrong with Dreamweaver?

Adobe has released a preview of Muse, a new web site design tool.

My first reaction was one of be-musement. What is wrong with Dreamweaver, the excellent web design tool included in Creative Suite? Bear in mind, too, that there is already a simplified Dreamweaver aimed at less technical business users, called Contribute.

Here are some distinctive features of Muse:

1. It is aimed at non-coders. The catch phrase is “Design and publish HTML websites without writing code”. Muse actually hides the code. I installed Muse on a Mac, and one of the first things I looked for was View Source; I could not find any such feature. You have to preview the page in the browser and view the source there. That is in contrast to Dreamweaver, whose split view shows the HTML and the visual design side by side, and you can edit freely in either.

2. It is an Adobe AIR application. I discovered this in a bad way. It would not install for me on Windows:

[Screenshot: the installation error dialog on Windows]

A curious error. Luckily I am also working on a Mac right now, and there it worked fine.

[Screenshot: Muse running on the Mac]

3. It will be sold by subscription only. The FAQ answer is worth quoting in full, as it describes one of the key advantages of cloud computing:

Muse will be sold only by subscription because it will allow the Muse team to improve the product more quickly and be more responsive to your needs. Traditionally Adobe builds up a collection of new features over 12, 18 or 24 months, then makes those changes available as a major upgrade. It is anticipated that new updates of Muse will be released much more frequently, probably quarterly. New features will be made available when they’re ready, not held to be part of an annual or biannual major upgrade. This will enable us to stay on top of browser and device compatibility issues and web design trends, as well as enabling us to respond to feature requests and market changes in a much more timely fashion.

I am reminded of Project Rome, a cancelled project which was also intended to be subscription only. Rome was for desktop publishing, Muse is for web design; otherwise there are plenty of parallels.

4. Muse promotes Adobe hosting via Business Catalyst, and if you select Publish this is the sole option:

[Screenshot: the Publish dialog, offering only Business Catalyst hosting]

Of course you can also Export as HTML. Still, it looks as if Muse is intended as part of a wider initiative which will include hosting and web analytics.

5. Muse is not a Flash authoring tool. Check out the Features page. The word Flash does not appear. Nor did any hidden Flash content appear when I exported a page as HTML. My guess: there is a quiet Flash crisis at Adobe, and the company is hastening to make its tools less Flash-centric, in favour of something more cloud and HTML 5 based. I do not mean that Flash is now unimportant. It is still critical to Adobe, and after all Muse itself runs on Flash. However it is being repositioned.

A few comments. Unfortunately I’ve not yet spoken to Adobe about Muse, but the obvious question is reflected in my heading: what is wrong with Dreamweaver? To answer my own question, I can see that Dreamweaver is a demanding tool, and that Muse, while still aimed at professionals, should be easier to learn.

On the other hand, I recall many early web design tools that tried to hide the mechanics of web pages, some more successful than others, and that in the end Dreamweaver triumphed partly thanks to its easy access to the code. Some still miss HomeSite, an even more code-centric tool. What has changed now?

Needless to say, Dreamweaver is not going away, but there is clearly overlap between the two tools.

Of course non-coders do need to be involved in web site authoring, but the trend has been towards smart content management tools, such as WordPress or Drupal, which let designers and coders develop themes while making content authoring easy for contributors. Muse is taking a different line.

Watch this space though. Even on the briefest of looks, this is an impressive AIR application, and it will be interesting to see how it fits into Adobe’s evolving business strategy.

Update: Elliot Jay Stocks blogs about the code generated by Muse, which he says is poor, and argues that the tool is too print-oriented:

warning signs are present in this public beta that suggest Muse is very much a step in the wrong direction.

Google Native Client: browser apps unleashed, or misconceived and likely to fail?

Last week Google integrated Native Client into the beta of Chrome 14. Native Client lets you compile C/C++ code to run in the browser. It depends on a new plug-in API called Pepper. Both are open source projects sponsored by Google and implemented in the Chrome browser, and therefore also likely to turn up in Chrome OS, Google’s operating system in which all apps run in the browser.
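
For a flavour of what a Native Client module looks like, here is a minimal message-echo module against the Pepper C++ API, adapted from the pattern used in Google’s SDK samples; the class names are mine, and I have not built this exact snippet.

    #include "ppapi/cpp/instance.h"
    #include "ppapi/cpp/module.h"
    #include "ppapi/cpp/var.h"

    // One instance is created for each <embed> of the module on a web page.
    class EchoInstance : public pp::Instance {
     public:
      explicit EchoInstance(PP_Instance instance) : pp::Instance(instance) {}

      // Receives postMessage() calls from the page's JavaScript and echoes back.
      virtual void HandleMessage(const pp::Var& message) {
        if (message.is_string())
          PostMessage(pp::Var("native says: " + message.AsString()));
      }
    };

    class EchoModule : public pp::Module {
     public:
      virtual pp::Instance* CreateInstance(PP_Instance instance) {
        return new EchoInstance(instance);
      }
    };

    // Entry point: Chrome obtains the module through this factory function.
    namespace pp {
    Module* CreateModule() { return new EchoModule(); }
    }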

Native Client is cool. For example, NaClBox lets you run old DOS games in the browser by porting DOSBox to Native Client.

[Screenshot: NaClBox running a DOS game in the browser]

Another project is Qt for Google Native Client, a project currently in development. Qt is an excellent and popular GUI and application framework which would speed development of Native Client apps as well as enabling many existing applications to be ported.

It is also worth mentioning that Native Client provides another way to run .NET code in the browser, via Mono with NaCl support.

Why Native Client? Google’s vision, or at least the part of it that focuses on Chrome OS rather than Android, is that everything runs on the Internet and in the browser, making the local operating system unimportant and easily replaced. Native Client removes any performance compromises in managed languages such as JavaScript, ActionScript or Java, as well as easing migration for businesses with existing C/C++ code.

Writing native code for the browser is nothing new. Both Microsoft’s ActiveX and the NPAPI plug-in API used by non-Microsoft browsers let you extend the browser with native code. However Native Client is seamless for the user; you do not have to install any additional plug-in. The main limitation is that Native Client applets do not have access to the local operating system, for security reasons.

It is also worth noting that Native Client apps are not altogether cross-platform. They must be recompiled for each CPU instruction set; the current implementation supports x86 and ARM, though you have to compile a separate binary for each. Google says it will support LLVM output to enable cross-platform binaries, though this will impact performance.

But is Native Client secure? That is an open question. Google was aware of the security challenge from the beginning of the project. Unlike the plug-in mechanisms which rely mainly on trust in developer competence and signed code to verify the origin of the plug-in or ActiveX control, Native Client inspects the actual code for unsafe instructions before allowing it to run. There is also an “outer sandbox” which intercepts system calls.
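
The real validator is far more sophisticated than anything I can show here (it also enforces instruction alignment and control-flow rules), but as a conceptual toy of my own devising, static verification amounts to scanning the untrusted binary and rejecting anything containing instructions that could reach the operating system directly:

    #include <cstdint>
    #include <vector>

    // Toy illustration only, nothing like Google's actual validator: reject
    // code containing x86 instructions that make direct system calls.
    bool looksSafe(const std::vector<std::uint8_t>& code) {
        for (std::size_t i = 0; i + 1 < code.size(); ++i) {
            bool int80   = code[i] == 0xCD && code[i + 1] == 0x80; // int 0x80
            bool syscall = code[i] == 0x0F && code[i + 1] == 0x05; // syscall
            if (int80 || syscall) return false;
        }
        return true; // passed the (toy) static check
    }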

However, adding any new way for code to run makes the browser less secure. Google ran a Native Client Security Contest to help identify vulnerabilities, and the contestants had no trouble finding security flaws. All of the discovered flaws will have been fixed, of course, but there may well be others.

And is Native Client necessary? The latest JIT-compiled JavaScript engines are fast enough to enable most types of application to run at a satisfactory speed. This is not just about performance though; it is about reusing existing skills, libraries and applications. There is no doubt that Native Client is nice to have; whether its benefits outweigh the risks is harder to judge.

The last question, which may prove the most significant, is political. Google has forged ahead on its own with Native Client, saying as vendors always do that it hopes it will become a web standard. In the early days of the project, it looked like a Native Client plug-in might enable the feature in other browsers, but abandoning NPAPI for Pepper makes this difficult. Will other browser vendors support Native Client?

Here is a comment from Google’s Ian Ni-Lewis that I find remarkable:

As you probably know, the rule in Web standards is "implementation wins." So we’re concentrating on getting a good quality implementation out the door. We’re doing that in Chrome. That doesn’t mean that NaCl is intended to be "Chrome only," just that we have to start somewhere.

So Native Client is non-standard, and therefore less interesting than HTML 5 until either Google has a Microsoft-Office-like de facto monopoly of web browsers, or it persuades Mozilla, Microsoft and Apple to support it.

That said, you can think of Chrome as an installable runtime in the same way as the Java Virtual Machine or Adobe Flash, just a potentially more intrusive one: “Here is our app; you have to install the free Chrome browser to use it.” If this happens to any great extent, I can foresee other browser makers hastening to support it.

C++ 11 is approved by ISO: a big day for native code development

Herb Sutter reports that C++0x, which will be called C++11, has been unanimously approved by the ISO C++ committee. The “11” in the name refers to the year of approval, 2011. The current standard is C++98, amended in 2003 as C++03, so the update has taken either 8 or 13 years, depending on how you count.

This means that compiler makers can get on with implementing the full C++ 11 standard. Most current compilers implement some of the features already. This Apache wiki shows the current status. A quick glance suggests that the open source GCC is ahead of the pack, followed by Intel C++ and then perhaps Microsoft Visual C++.

C++ 11 is pretty much compatible with C++ 03 so existing code should still work. However there are many new features, enough for Bjarne Stroustrup to say in his feature summary:

Surprisingly, C++0x feels like a new language: The pieces just fit together better than they used to and I find a higher-level style of programming more natural than before and as efficient as ever. If you timidly approach C++ as just a better C or as an object-oriented language, you are going to miss the point. The abstractions are simply more flexible and affordable than before. Rely on the old mantra: If you think of it as a separate idea or object, represent it directly in the program; model real-world objects, and abstractions directly in code. It’s easier now.
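
A small example of the higher-level style Stroustrup means, using only features I am sure made it into the standard: auto, lambdas, range-based for and smart pointers remove much of the old boilerplate.

    #include <algorithm>
    #include <iostream>
    #include <memory>
    #include <vector>

    int main() {
        std::vector<int> prices{ 30, 10, 20 };

        // A lambda passes behaviour inline instead of a hand-written functor class.
        std::sort(prices.begin(), prices.end(),
                  [](int a, int b) { return a < b; });

        // Range-based for replaces iterator boilerplate.
        for (auto p : prices)
            std::cout << p << '\n';

        // unique_ptr makes ownership explicit with no manual delete.
        // (std::make_unique arrives only in C++14, hence the explicit new.)
        std::unique_ptr<int> owned(new int(42));
        std::cout << *owned << '\n';
    }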

Concurrent programming is better supported in C++ 11, important for getting the best performance from modern hardware.
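
For instance, C++11 standardises threading facilities that previously required platform APIs or Boost. A minimal sketch with std::async and std::future, summing a large vector on two threads:

    #include <future>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<int> data(1000000, 1);
        auto half = data.size() / 2;

        // std::async runs the lambda on another thread; the future joins the result.
        std::future<int> task = std::async(std::launch::async, [&] {
            return std::accumulate(data.begin(), data.begin() + half, 0);
        });
        int rest = std::accumulate(data.begin() + half, data.end(), 0);

        std::cout << task.get() + rest << '\n'; // prints 1000000
    }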

It is curious how the programming landscape has changed in recent years. A few years back, you might have foreseen a day when most programming would be done in .NET, Java or JavaScript: all varieties of managed code. While those languages still dominate, native code has come back to the fore, thanks to factors like Apple’s focus on Objective-C, and signs of internal conflict at Microsoft over the best language for coding Windows applications.

That said, C++ 11 remains a demanding language to learn and use. As Stroustrup notes, since C++ 11 is a superset of C++ 98 it is technically harder to learn all of it, though new libraries and abstractions should help beginners. The reasons for using or not using C++ are not going to change significantly with this new standard.

Building PasswordSafe for the Mac: Lion development hassles

I am doing some work on a Mac at the moment. On Windows I store passwords in PasswordSafe, an open source utility that works well, so I wondered if I could access my PasswordSafe database from the Mac.

[Screenshot: PasswordSafe built and running on the Mac]

I could have run the Windows version in Parallels, which I have just installed, but I figured a Mac version would be more convenient. I didn’t see a Mac build among the downloads, but PasswordSafe is cross-platform, so I downloaded the source to do a quick compile.

I was glad to find README.MAC.DEVELOPERS.txt in the PasswordSafe source and set to work. The first task is to download wxWidgets, a cross-platform GUI library, so I went off to get that and ran the osx-build-wx script as instructed. Result: an error stating that the C compiler cannot create executables.

The problem seems to be that PasswordSafe expects GCC 4.0 but the latest Xcode installs GCC 4.2. The solution suggested here is to remove Xcode 4, install Xcode 3, and then reinstate Xcode 4. There are related issues concerning PPC fat binaries and older versions of the Mac SDK.

That solution seemed risky and arduous to me, and I remembered that I still had an old Mac Mini from which I had been forced to upgrade in order to install Lion, the latest OS X. I hooked it up, removed Xcode 4, installed Xcode 3, and set to work again.

I get the impression that not many people build PasswordSafe for the Mac. The first issue I discovered was that the steps in README.MAC.DEVELOPERS.txt do not mention that, after running osx-build-wx, you also have to run make in order to build the static libraries. That was easy though. The next step is to load the supplied PasswordSafe project into Xcode and build.

I did that but got an error – the linker could not resolve SizeRestrictedPanel. The fix was to add SizeRestrictedPanel.cpp and SizeRestrictedPanel.h to the project. PasswordSafe then built and seems to work fine, on Lion as well as earlier versions of OS X, though there are a few cosmetic issues. You can see from the image that the caption for the New Database button is slightly awry.

If anyone wants my build, it is here. There is also a Java version, and some people have success with that on the Mac.

Mozilla to take on the cross-platform app challenge

Mozilla is facing an uncertain future. Its problem: basing a business (even a non-profit one) on being the alternative to Microsoft’s Internet Explorer is no longer sensible, given that Apple and Google are now doing this too, and even Microsoft is now investing in HTML 5. I discussed these issues in more detail here.

So what is Mozilla to do? Mozilla Chair Mitchell Baker has posted about a possible new approach, based on being the alternative to Apple for apps. She lists some of the problems with the current “app experience”. Apps are device-specific, require permission at many levels, and a few App Store owners (mainly Apple but also Google) control the business model and customer relationships.

Mozilla is proposing what I presume is a new app platform, which will be cross-platform and cross-device. Instead of discovering apps in a single app store, she envisages multiple providers and the ability to find apps in the same way we find web content.

In other words, if the old Mozilla was about freedom from Microsoft and allowing web technology to progress, the new Mozilla might be about freedom from Apple and allowing app technology to progress.

It is a bold vision and one that in principle would be welcome. That said, Mozilla cannot change the control Apple has over its platform, and its insistence that apps are installed only through its own App Store. Maybe she has in mind a cross-platform toolkit, or browser-based apps, or some combination.

Another snag is that whereas there was widespread dissatisfaction with Microsoft’s Internet Explorer back in 2004 when Firefox was launched, this is not the case with Apple and its app platform today. Apple’s App Store system undoubtedly has a dark side, but the user experience is good and developers are making money, some of them at least. Apple’s control over app installation and the constraints imposed on what apps can do are also good for security.

Nevertheless, having looked at a number of cross-platform mobile toolkits, from PhoneGap to Appcelerator Titanium to Adobe AIR, I can see both the significance of this kind of development and that there is plenty of scope for improvement.