Category Archives: open source

Adobe to ship Flash 11 and AIR 3, repositions Flash vs HTML 5

Adobe has announced that Flash 11 and AIR 3 will ship in early October.

There are significant changes in this release.

  • Flash gets Stage 3D (previously codenamed Molehill), a set of low-level 3D APIs, GPU accelerated where hardware allows, which will make console-like 3D graphics and games possible in Flash. Stage 3D wraps DirectX on Windows and OpenGL on other desktop and mobile platforms.
  • 64-bit Flash is here at last, supporting 64-bit Internet Explorer and other browsers on Windows, Mac and Linux.
  • AIR, which uses Flash as a runtime for desktop and mobile applications, now supports native extensions for better device support, operating system integration, and the ability to speed up performance-critical code or use open source libraries.
  • In addition, the AIR packager for iOS, which lets you wrap your application as a native executable, is now a feature called Captive Runtime which is available for Windows, Mac and Android as well as iOS. Users who install a packaged application will not know it uses AIR, and will not need to install or update the AIR runtime as it is packaged with the application, though it is not actually a single file (on Windows at least).

These new options make the Flash and AIR combination an interesting comparison with other cross-platform development tools, such as Embarcadero’s new Delphi XE2, which targets Windows, Mac and iOS with a new framework called FireMonkey; or Appcelerator’s Titanium tool for cross-platform desktop and mobile development. Note though that Adobe is not promising any performance improvement. This is just another way to package the same runtime.

Adobe’s advantage is its high quality design and development tools and the maturity of the Flash runtime. For application size and performance, it will likely fall short of true native development tools. The ActionScript language could do with updating, and I would not be surprised if Adobe addresses this in the next major Flash release.

But do we still need Flash? Flash in the browser is in decline, thanks to the influence of Apple and the rise of HTML 5. Adobe’s MAX conference is coming up soon, and I noticed in the schedule [Flash needed] a defensive note in some of the sessions; there is even one called “The Death of Flash” which talks about “the misinformation that’s percolated through the web over the past year”.

That may be so; but even Adobe is re-positioning Flash and recognizing the rise of HTML 5. “Customers see significant advantages for Flash in a few focused areas,” said Adobe’s Danny Winokur, VP and General Manager of Platform, in a press briefing. He identified these areas as gaming, media apps, and “sophisticated data-driven applications” – think data visualisation rather than just forms over data. “For everything else it is very clear that … HTML 5 is a mature enough technology that it is a really good solution.”

Adobe is therefore investing in HTML 5 tools as well as Flash tools, and Winokur mentioned the Edge motion design tool as well as the venerable Dreamweaver.

I asked Winokur, given that HTML 5 is maturing fast, how Adobe sees the picture vs Flash in, say, two years’ time. He replied that Adobe is actively working to advance HTML 5, but that “there will continue to be opportunities for innovation in Flash, where we can … enable new possibilities that did not previously exist on the Web.” He makes the case for Flash as a kind of leading edge for HTML, with features that eventually become part of the HTML standard.

It is a fair point, but it is obvious that the niche for Flash is getting smaller rather than larger.

Adobe has never charged for the Flash runtime, and while the Flash vs HTML path is tricky to navigate, Adobe mainly makes its money from design tools, server applications and web analytics. Flash plays some client role in many of these products, but Adobe can tune them over time to make less use of the runtime; I believe we can already see this happening.

More positively, Adobe is benefiting from the demand for rich content across both web and applications, and has just reported decent financial results, showing the company’s resilience.

Finally, everyone is asking what Adobe will do about Microsoft’s Windows 8 Metro platform for tablets, given that browser plug-ins are not supported. Here is the answer:

… we expect Flash based apps will come to Metro via Adobe AIR, much the way they are on Android, iOS and BlackBerry Tablet OS today

though I hope this will be delivered more quickly than the promised Flash runtime for Windows Phone 7, which is not a subject either Adobe or Microsoft seems willing to talk about.

Update: Adobe has also announced the Flex 4.6 SDK and Flash Builder 4.6, which support these new capabilities, including Captive Runtime and native extensions, and add new controls specifically aimed at tablet apps.

PhoneGap comes to Windows Phone

Nitobi has announced PhoneGap for Windows Phone 7, nicely timed just before the Microsoft BUILD conference next week.

PhoneGap is a cross-platform mobile development tool that uses the HTML and JavaScript engine on the phone as its runtime, supplemented by extensions which give access to other device features:

After unpackaging the contents of the www folder, your www/index.html file is loaded into an embedded headless browser control. This is essentially the same paradigm as other platforms, except here it is an IE9 browser and not a webkit variant. IE9 is a much more standards-compliant browser than previous IEs, and implements commonly used html5 features like DOMContentLoaded events, addEventListener interfaces, and CSS3. Be sure to use <!DOCTYPE html> to get the html5 implementation otherwise the browser may fallback to a compatibility mode, and your code will likely choke and die.

The version for Windows Phone 7, just released in preview, is extended to support features including the camera, accelerometer, contacts, and notifications. There is also support for plugins:

PhoneGap-WP7 maintains the plugability of other platforms via a command pattern, to allow developers to add functionality with minimal fuss, simply define your C# class in the WP7GapClassLib.PhoneGap.Commands namespace and derive your class from BaseCommand.

In general Windows Phone 7 is not well supported by cross-platform toolkits, so PhoneGap support is an interesting development. PhoneGap has a high profile currently, and is being integrated into a diverse range of tools ranging from Adobe Dreamweaver to Embarcadero RadPHP, as well as the standard PhoneGap tools based on Eclipse.

Review: Continuous Delivery by Jez Humble and David Farley

I like this book. I know I like it because I find myself wanting to quote from it frequently. It is a book that almost every software developer should read, even if you disagree with parts of it – which is likely, because it is opinionated. The authors always give reasons for their opinions though, which means that if you disagree, you need to articulate why that is; or they may even change your mind. In consequence you find yourself learning as you read.

The authors are software theoreticians, but they are also practitioners; in fact they are practitioners first and theoreticians afterwards. This means they are pragmatic rather than dogmatic. Here is an example. Chapter 13 discusses software dependencies, and page 372 covers circular dependencies, “probably the nastiest dependency problem.” A circular dependency is when component A depends on component B, and component B also depends on component A.
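
By way of illustration (my example, not the book’s), here are two C++ components that each call into the other. Neither can be compiled and linked without the other, which is what makes the cycle so nasty for a build system:

// component_a.cpp
#include <iostream>
void b_greet();                        // implemented in component B
void a_name() { std::cout << "A\n"; }  // called from component B
void a_greet() { b_greet(); }          // A depends on B...

// component_b.cpp
void a_name();                         // implemented in component A
void b_greet() { a_name(); }           // ...and B depends on A

// main.cpp
void a_greet();
int main() { a_greet(); }              // prints "A"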

A bad idea; but the authors write:

Surprisingly, we have seen successful projects with circular dependencies in their build systems. You may argue with our definition of “successful” in this case, but there was working code in production, which is enough for us.

As an aside, this kind of dry humour is characteristic, as also evident in remarks like this:

We are certain that, occasionally, manually intensive releases work smoothly. We may well have been unlucky in having mostly seen the bad ones.

The subject of the book is Continuous Delivery. So what is that? Well, if Continuous Integration is about ensuring that your software always builds, then Continuous Delivery is about ensuring that your software always deploys. The final form, as it were, of Continuous Delivery is Continuous Deployment, where you are so confident of your automated build and deploy process that any checked-in code that passes its tests can be deployed immediately. I was confused about the difference between Continuous Delivery and Continuous Deployment so I wrote a post about it; it turns out that there is not much difference.

The principle behind Continuous Delivery is that software is not done until it is released. If the release process is long, arduous and infrequent, then you are not really doing Agile development. A section of chapter 1 is devoted to release anti-patterns, and these form an excellent rationale for taking an interest in Continuous Delivery.

My guess is that anyone who has been involved in professional software development will wince a little while reading through these anti-patterns, thinking “that is what we used to do” or even “that is what we do”.

That said, Humble and Farley do not fall into the trap of merely writing about how not to do it. Rather, they address in some detail the kinds of problems you will face if you decide to embrace the Continuous Delivery methodology. The key ingredient in Continuous Delivery is that pretty much everything must be automated, otherwise it is too difficult to do. But how do you automate something like Acceptance Testing? That is the subject of chapter 8. How do you automate a deployment at all? That is the subject of chapter 6. The authors are not on a higher plane than the rest of us, and much of the advice is straightforward, even at the level of “Always use relative paths,” which is a tip in chapter 6.

The authors talk a lot about testing, as you would expect, but there is also extensive discussion of software configuration management, describing different approaches such as centralised and distributed version control and even specific tools. The chapter on Advanced Version Control is a particularly good read. Humble and Farley articulate the point that branching and merging is antithetical to Continuous Integration and therefore Continuous Delivery:

If different members of the team are working on separate branches or streams then by definition they’re not continuously integrating (p 390)

Does this mean branches are a bad idea? Not always, say the authors, but they also state:

Our strong recommendation is to create long-lived branches only on release … new work is always committed to the trunk (p 392)

The reason is not only to enable Continuous Integration, but also because merging is complex and error-prone.

Software configuration management is not easy, but it is a relatively mature aspect of software development. This is less true of what you might call infrastructure configuration management; yet infrastructure dependencies such as versions and configurations of the operating system or web server are a common reason for deployment failures. Several chapters discuss this problem in detail. In principle, the authors say:

The desired state of your infrastructure should be specified through version-controlled configuration.

This leads to some thoughtful discussion of how to achieve this.

Another theme, as you would expect, is that development and operations people need to be working together and not in isolation. To some extent this is a DevOps book.

A great book then; but there are flaws. One is that there is some repetition because of the way the book is organised. This is good if you are inclined to read chapters in isolation, but not so good if you are reading straight through. In practice I did not find it too annoying, but it is there.

Another issue is that while the authors do cover Microsoft .NET to some extent, this is usually in the form of a brief mention and there is more focus on Java. This may be in part because of their preference for open source. It is still a good read for .NET developers, because the principles are platform-agnostic, but Microsoft platform developers may find it irritating at times. Team Foundation Server, say the authors, is “essentially an inferior knock-off of Perforce” (p 386).

The discussion of specific tools is a strength but also a weakness, in that the tools will change over time and the book will become dated.

This is not the last word on Continuous Delivery, but it is an enjoyable and thought-provoking read. Recommended.


Google is now a hardware company as it announces acquisition of Motorola Mobility and its patents

Google is to acquire Motorola Mobility, a major manufacturer of Android handsets. Why? I believe this is the key statement:

We recently explained how companies including Microsoft and Apple are banding together in anti-competitive patent attacks on Android. The U.S. Department of Justice had to intervene in the results of one recent patent auction to “protect competition and innovation in the open source software community” and it is currently looking into the results of the Nortel auction. Our acquisition of Motorola will increase competition by strengthening Google’s patent portfolio, which will enable us to better protect Android from anti-competitive threats from Microsoft, Apple and other companies.

What are the implications? The acquisition will assist Google in the patent wars and perhaps give it some of the benefits of vertical integration that Apple enjoys with iOS; though this last point is delicate, since the more Google invests in Motorola, the more it will upset other Android partners. Google CEO Larry Page says:

This acquisition will not change our commitment to run Android as an open platform. Motorola will remain a licensee of Android and Android will remain open. We will run Motorola as a separate business.

It is unlikely to be so simple; and the main winner I foresee from today’s announcement is Microsoft. Nokia’s decision to embrace Windows Phone rather than Android looks smarter today, since for all its faults Microsoft has a history of working with multiple hardware vendors. The faltering launches of HP’s TouchPad and RIM’s PlayBook have also worked in Microsoft’s favour. I do not mean to understate Microsoft’s challenge in competing with Apple and Android, but I believe it has a better chance than either HP or RIM, thanks to its size and existing market penetration with Windows.

Microsoft will be clarifying its mobile and slate strategy next month at the BUILD conference.

Today’s announcement is also a sign that Google takes Android’s patent problems seriously, as indeed it should. The company’s policy of act first, seek forgiveness later seems to be unravelling. Oracle has a lawsuit against Google over the use of Java in Android that looks like it will run and run. FOSS patent expert Florian Mueller argues today that Android also infringes the Linux license, and that this is a problem that cannot easily be fixed. Samsung’s latest Galaxy Tab has been barred from sale in the EU; not entirely a Google issue, but it runs Android.

Note of clarification: Google is acquiring Motorola Mobility, not the whole of Motorola. In January 2011 Motorola split into two businesses. Motorola Mobility is one, revenue in second quarter 2011 around $3.3 billion. The other is Motorola Solutions, revenue in second quarter 2011 around $2 billion.

Google Native Client: browser apps unleashed, or misconceived and likely to fail?

Last week Google integrated Native Client into the beta of Chrome 14. Native Client lets you compile C/C++ code to run in the browser. It depends on a new plug-in API called Pepper. Both are open source projects sponsored by Google and implemented in the Chrome browser, and therefore also likely to turn up in Chrome OS, an operating system in which all apps run in the browser.

Native Client is cool. For example, NaClBox lets you run old DOS games in the browser by porting DOSBox to Native Client.


Another project, currently in development, is Qt for Google Native Client. Qt is an excellent and popular GUI and application framework which would speed development of Native Client apps as well as enable many existing applications to be ported.

It is also worth mentioning that Native Client provides another way to run .NET code in the browser, via Mono with NaCl support.

Why Native Client? Google’s vision, or at least the part of it that focuses on Chrome OS rather than Android, is that everything runs on the Internet and in the browser, making the local operating system unimportant and easily replaced. Native Client removes any performance compromises in managed languages such as JavaScript, ActionScript or Java, as well as easing migration for businesses with existing C/C++ code.

Writing native code for the browser is nothing new. Both Microsoft’s ActiveX and the NPAPI plug-in API used by non-Microsoft browsers let you extend the browser with native code. However Native Client is seamless for the user; you do not have to install any additional plug-in. The main limitation is that Native Client applets do not have access to the local operating system, for security reasons.
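
To make this concrete, here is a minimal sketch of a Native Client module written against the Pepper C++ API. The file and class names are my own invention, but pp::Module, pp::Instance, HandleMessage and PostMessage are the core of the API: the page talks to the module through message passing, which is also how the sandboxed model just described presents itself to the developer:

// echo_module.cc - a minimal Native Client module (sketch)
#include <string>
#include "ppapi/cpp/instance.h"
#include "ppapi/cpp/module.h"
#include "ppapi/cpp/var.h"

// One Instance is created for each <embed> of the module in a web page.
class EchoInstance : public pp::Instance {
 public:
  explicit EchoInstance(PP_Instance instance) : pp::Instance(instance) {}

  // Called when JavaScript invokes postMessage() on the embed element;
  // there is no direct access to the local operating system.
  virtual void HandleMessage(const pp::Var& message) {
    if (message.is_string())
      PostMessage(pp::Var("echo: " + message.AsString()));
  }
};

class EchoModule : public pp::Module {
 public:
  virtual pp::Instance* CreateInstance(PP_Instance instance) {
    return new EchoInstance(instance);
  }
};

// Factory function the browser calls when the module is loaded.
namespace pp {
Module* CreateModule() { return new EchoModule(); }
}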

It is also worth noting that Native Client apps are not altogether cross-platform. They must be recompiled for different CPU instruction sets; the current implementation supports x86 and ARM, though you have to compile a separate binary for each. Google says it will support LLVM output to enable cross-platform binaries, though this will impact performance.

But is Native Client secure? That is an open question. Google was aware of the security challenge from the beginning of the project. Unlike the plug-in mechanisms which rely mainly on trust in developer competence and signed code to verify the origin of the plug-in or ActiveX control, Native Client inspects the actual code for unsafe instructions before allowing it to run. There is also an “outer sandbox” which intercepts system calls.

However, adding any new way for code to run makes the browser less secure. Google ran a Native Client Security Contest to help identify vulnerabilities, and the contestants did not have any problem finding security flaws. Of course the flaws they discovered will have been fixed, but there likely will be others.

And is Native Client necessary? The latest JIT-compiled JavaScript engines are fast enough to enable most types of application to run at a satisfactory speed. This is not just about performance though; it is about reusing existing skills, libraries and applications. There is no doubt that Native Client is nice to have; whether its benefits outweigh the risks is harder to judge.

The last question, which may prove the most significant, is political. Google has forged ahead on its own with Native Client, saying as vendors always do that it hopes it will become a web standard. In the early days of the project, it looked like a Native Client plug-in might enable the feature in other browsers, but abandoning NPAPI for Pepper makes this difficult. Will other browser vendors support Native Client?

Here is a comment from Google’s Ian Ni-Lewis that I find remarkable:

As you probably know, the rule in Web standards is "implementation wins." So we’re concentrating on getting a good quality implementation out the door. We’re doing that in Chrome. That doesn’t mean that NaCl is intended to be "Chrome only," just that we have to start somewhere.

So Native Client is non-standard, and therefore less interesting than HTML 5 until either Google has a Microsoft-Office-like de facto monopoly of web browsers, or it persuades Mozilla, Microsoft and Apple to support it.

That said, you can think of Chrome as an installable runtime in the same way as the Java Virtual Machine or Adobe Flash, just a potentially more intrusive one. Here is our app, you have to install the free Chrome browser to use it. If this happens to any great extent, I can foresee other browser makers hastening to support it.

C++ 11 is approved by ISO: a big day for native code development

Herb Sutter reports that C++ 0x, which will be called C++ 11, has been unanimously approved by the ISO C++ committee. The “11” in the name refers to the year of approval, 2011. The current standard is C++ 98, though amended as C++ 03, so it has taken 8 or 13 years to update it depending on how you count it.

This means that compiler makers can get on with implementing the full C++ 11 standard. Most current compilers implement some of the features already. This Apache wiki shows the current status. A quick glance suggests that the open source GCC is ahead of the pack, followed by Intel C++ and then perhaps Microsoft Visual C++.

C++ 11 is pretty much compatible with C++ 03 so existing code should still work. However there are many new features, enough for Bjarne Stroustrup to say in his feature summary:

Surprisingly, C++0x feels like a new language: The pieces just fit together better than they used to and I find a higher-level style of programming more natural than before and as efficient as ever. If you timidly approach C++ as just a better C or as an object-oriented language, you are going to miss the point. The abstractions are simply more flexible and affordable than before. Rely on the old mantra: If you think of it as a separate idea or object, represent it directly in the program; model real-world objects, and abstractions directly in code. It’s easier now.

Concurrent programming is better supported in C++ 11, important for getting the best performance from modern hardware.
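
To give a flavour of the new style, here is a short sketch of my own (not taken from Stroustrup’s summary) which combines several C++ 11 features: uniform initialisation, auto, a lambda expression, a range-based for loop, and std::async from the new standard concurrency library:

#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> values {1, 2, 3, 4, 5};   // uniform initialisation

    // Sum the vector on another thread: std::async and std::future
    // are part of the new standard concurrency support.
    auto total = std::async(std::launch::async, [&values] {
        return std::accumulate(values.begin(), values.end(), 0);
    });

    for (auto v : values)                      // range-based for, with auto
        std::cout << v << ' ';
    std::cout << "sum = " << total.get() << '\n';
}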

It is curious how the programming landscape has changed in recent years. A few years back, you might have foreseen a day when most programming would be done in .NET, Java or JavaScript: all varieties of managed code. While those languages still dominate, native code has come back to the fore, thanks to factors like Apple’s focus on Objective-C and signs of internal conflict at Microsoft over the best language for coding Windows applications.

That said, C++ 11 remains a demanding language to learn and use. As Stroustrup notes, since C++ 11 is a superset of C++ 98 it is technically harder to learn all of it, though new libraries and abstractions should help beginners. The reasons for using or not using C++ are not going to change significantly with this new standard.

Building PasswordSafe for the Mac: Lion development hassles

I am doing some work on a Mac at the moment. On Windows I store passwords in PasswordSafe, an open source utility that works well, so I wondered if I could access my PasswordSafe database from the Mac.

[Screenshot: PasswordSafe running on the Mac]

I could have run the Windows version in Parallels, which I have just installed, but I figured a Mac version would be more convenient. I didn’t see a Mac build among the downloads, but PasswordSafe is cross-platform, so I downloaded the source to do a quick compile.

I was glad to find README.MAC.DEVELOPERS.txt in the PasswordSafe source and set to work. The first task is to download wxWidgets, a cross-platform GUI library, so I went off to get that and then ran the osx-build-wx script as instructed. Result: an error message stating that the C compiler cannot create executables.

The problem seems to be that PasswordSafe expects GCC 4.0 but the latest Xcode installs GCC 4.2. The solution suggested here is to remove Xcode 4, install Xcode 3, and then reinstate Xcode 4. There are related issues concerning PPC fat binaries and older versions of the Mac SDK.

That solution seemed risky and arduous to me, and I remembered that I still had an old Mac Mini from which I was forced to upgrade in order to install Lion, the latest OS X. I hooked it up, removed Xcode 4, installed Xcode 3, and set to work again.

I get the impression not many people build PasswordSafe for the Mac. The first issue I discovered was that the steps in the README.MAC.DEVELOPERS.txt don’t mention that after running osx-build-wx you also have to run make in order to build static libraries. That was easy though. The next thing is to load the supplied PasswordSafe project into Xcode and build.

I did that but got an error – the linker could not resolve SizeRestrictedPanel. The fix was to add SizeRestrictedPanel.cpp and SizeRestrictedPanel.h to the project. PasswordSafe then built and seems to work fine, on Lion as well as earlier versions of OS X, though there are a few cosmetic issues. You can see from the image that the caption for the New Database button is slightly awry.

If anyone wants my build, it is here. There is also a Java version, and some people have success with that on the Mac.

PhoneGap is at version 1.0

I’ve just spotted that PhoneGap has reached version 1.0. The release was announced at PhoneGap day in Portland, on Friday 29th July.

I have spent some time trying out various cross-platform mobile development tools. PhoneGap is among the most interesting and popular, and is also open source and free to use. If you believe that using the browser engine as an application runtime is the most sensible route to cross-platform mobile applications, then PhoneGap is the leading contender. It wraps your application to look like a native app, and also provides ways to call the native API when necessary.

PhoneGap received a boost when Adobe built it into Dreamweaver CS5.5. I tried it out and was impressed with the design environment, but I am not sure how serious Adobe is about PhoneGap, since there is no documentation on how to package your PhoneGap app for release, and my post has comments from puzzled users. My solution was to export the project to Eclipse and the standard PhoneGap tools, which misses part of the value of having it integrated into Dreamweaver.

Adobe installs PhoneGap into the Dreamweaver directory, so another issue is how to take advantage of the latest version if you are using Adobe’s tools. Overall I would suggest that using the PhoneGap SDK and Eclipse is a better option, though there is no problem with bringing in Dreamweaver for parts of the design.

I interviewed Nitobi president André Charland about PhoneGap earlier this year.

Android only 23% open says report; Linux, Eclipse win praise

Vision Mobile has published a report on what it calls the Open Governance Index. The theory is that if you want to measure the extent to which an open source project is really open, you should look at its governance, rather than focusing on the license under which code is released:

The governance model used by an open source project encapsulates all the hard questions about a project. Who decides on the project roadmap? How transparent are the decision-making processes? Can anyone follow the discussions and meetings taking place in the community? Can anyone create derivatives based on the project? What compliance requirements are there for creating derivative handsets or applications, and how are these requirements enforced? Governance determines who has influence and control over the project or platform – beyond what is legally required in the open source license.

The 45-page report is free to download, and part-funded by the European Union Seventh Framework Program. It is a good read, covering 8 open source projects, including the now-abandoned Symbian Foundation. Here is the result:

Open Governance Index (%open)
Eclipse 84%
Linux 71%
WebKit 68%
Mozilla 65%
MeeGo 61%
Symbian 58%
Qt 58%
Android 23%

The percentages are derived by analysing four aspects of each project.

  • Access covers availability of source code and transparency of decisions.
  • Development refers to the transparency of contributions and acceptance processes.
  • Derivatives covers constraints on use of the project, such as trademarks and distribution channels.
  • Community structure looks at project membership and its hierarchy.

What is wrong with Android? I am not sure how the researchers get to 23%, but it scores badly in all four categories. The report observes that the code to the latest “Honeycomb” version of Android has not been published. It also has this to say about the Open Handset Alliance:

When launched, the Open Handset Alliance served the purpose of a public industry endorsement for Android. Today, however, the OHA serves little purpose besides a stamp of approval for OHA members; there is no formal legal entity, no communication processes for members nor frequent member meetings.

By contrast, Eclipse and Linux are shining lights. MeeGo and Mozilla are also praised, though the report does mention Mozilla’s “benevolent dictators”:

In the case of conflicts and disputes, these are judged by one of two Mozilla “benevolent dictators” – Brendan Eich for technical disputes and Mitchell Baker for non-technical disputes.

Qt comes out OK but has a lower score because of Nokia’s control over decision making, though it sounds like this was written before Nokia’s Windows Phone revolution.

WebKit scores well though the report notes that most developers work for Apple or Google and that there is:

Little transparency regarding how decisions are made, and no public information provided on this
Bearing that in mind, it seems odd to me that WebKit comes above Mozilla, but I doubt the percentages should be taken too seriously.

It is good to see a report that looks carefully at what it really means to be open, and the focus on governance makes sense.

GPU Programming for .NET: Tidepowerd’s GPU.NET gets some improvements, more needed

When I attended the 2010 GPU programming conference hosted by NVIDIA I encountered Tidepowerd, which has a .NET library called GPU.NET for GPU programming.

GPU programming enables amazing performance improvements for certain types of code. Most GPU programming is done in C/C++, but Tidepowerd lets you write it in .NET, simply by marking any methods you want to run on the GPU with a [Kernel] attribute:

[Kernel]
private static void AddGpu(float[] a, float[] b, float[] c)
{
    // Get the thread id and total number of threads
    int ThreadId = BlockDimension.X * BlockIndex.X + ThreadIndex.X;
    int TotalThreads = BlockDimension.X * GridDimension.X;

    for (int ElementIndex = ThreadId; ElementIndex < a.Length; ElementIndex += TotalThreads)
    {
        c[ElementIndex] = a[ElementIndex] + b[ElementIndex];
    }
}

GPU.NET is now at version 2.0 and includes Visual Studio Error List and IntelliSense support. This is useful, since some C# code will not run on the GPU. Strings, for example, are not supported. Take a look at this article which lists .NET OpCodes that do not work in GPU.NET.

GPU.NET requires an NVIDIA GPU with CUDA support and a CUDA 3.0 driver. It can run on Mac and Linux using Mono, the open source implementation of .NET. In principle, GPU.NET could also work with AMD GPUs or others via a vendor-specific runtime, but the latest FAQ says:

Support for AMD devices is currently under development, and support for other hardware architectures will follow shortly.

Another limitation is support for multiple GPUs. If you want to do serious supercomputing relatively cheaply, stuffing a PC with a bunch of Tesla GPUs is a great way to do it, but currently GPU.NET only uses one GPU per active thread, as far as I can tell from this note:

The GPU.NET runtime includes a work-scheduling system which can distribute device method (“kernel”) calls to multiple GPUs in the system; at this time, this only works for applications which call device-based methods from multiple host threads using multiple CPU cores. In a future release, GPU.NET will be able to use multiple GPUs to execute a single method call.

I doubt that GPU.NET or other .NET libraries will ever compete with C/C++ for performance, but ease of use and productivity count for a lot too. Potentially GPU.NET could bring GPU programming to the broad range of .NET developers.

It is also worth checking out hoopoe’s CUDA.NET and OpenCL.NET which are free libraries. I have not done a detailed comparison but would be interested to hear from others who have.