Category Archives: software development

Adobe discontinues Flash Catalyst, clarifies Flex and Flash Builder futures

Adobe has told a group of Flex developers, invited to San Francisco for a special reconciliatory summit following the sudden announcement that Flex is moving to the Apache Foundation, that Flash Catalyst will be discontinued. Developer Fabien Nicollet was there and posts:

CS5.5 version of Catalyst is the latest version of Flash Catalyst. It is compatible with Flex 4.5, but compatibility will not be ensured for future versions.

Flash Builder will also have features removed in future versions. Adobe’s slide talks of:

Removing unpopular and expensive to maintain features: Design View, Data Centric Development (DCD) and Flash Catalyst workflows.

The Monocle profiler, shown at the MAX conference as a sneak peek, “continues as a priority”.

The FalconJS project, to compile Flex to HTML5, will be discontinued, though possibly donated to Apache at a date to be determined.

AIR on Linux will not be given to Apache because it would mean sharing the proprietary Flash Player code. This is bad news in the Apache context.

Nicollet concludes:

Flex still has a bright future for companies who want to build fast and robust applications. Not to mention the people who will have a hard time building complex applications on HTML5, for whom Flex will always be a viable and mature alternative.

That is the optimistic view. What is clear from the summit is that Adobe is greatly reducing its investment. I guess we knew this already; but hearing how Flash Builder will be cut down, Catalyst discontinued, and so on, will not improve developer confidence.

A lot depends on the progress of the Apache project. My concern here is that since the Flash player, which is the Flex runtime, remains proprietary, this will dampen enthusiasm in the open source community and limit its ability to innovate around Flex.

NVIDIA plans to merge CPU and GPU – eventually

I spoke to Dr Steve Scott, NVIDIA’s CTO for Tesla, at the end of the GPU Technology Conference which has just finished here in Beijing. In the closing session, Scott talked about the future of NVIDIA’s GPU computing chips. NVIDIA releases a new generation of graphics chips every two years:

  • 2008 Tesla
  • 2010 Fermi
  • 2012 Kepler
  • 2014 Maxwell

Yes, it is confusing that the Tesla brand, meaning cards for GPU computing, has persisted even though the Tesla family is now obsolete.

Dr Steve Scott showing off the power efficiency of GPU computing

Scott talked a little about a topic that interests me: the convergence or integration of the GPU and the CPU. The background here is that while the GPU is fast and efficient for parallel number-crunching, it is of course still necessary to have a CPU, and there is a price to pay for the communication between the two. The GPU and the CPU each have their own memory, so data must be copied back and forth, which is an expensive operation.

One solution is for GPU and CPU to share memory, so that a single pointer is valid on both. I asked CEO Jen-Hsun Huang about this and he did not give it much hope:

We think that today it is far better to have a wonderful CPU with its own dedicated cache and dedicated memory, and a dedicated GPU with a very fast frame buffer, very fast local memory, that combination is a pretty good model, and then we’ll work towards making the programmer’s view and the programmer’s perspective easier and easier.

Scott on the other hand was more forthcoming about future plans. Kepler, which is expected in the first half of 2012, will bring some changes to the CUDA architecture which will “broaden the applicability of GPU programming, tighten the integration of the CPU and GPU, and enhance programmability,” to quote Scott’s slides. This integration will include some limited sharing of memory between GPU and CPU, he said.

What caught my interest though was when he remarked that at some future date NVIDIA will probably build CPU functionality into the GPU. The form that might take, he said, is that the GPU will have a couple of cores that do the CPU functions. This will likely be an implementation of the ARM CPU.

Note that this is not promised for Kepler nor even for Maxwell but was thrown out as a general statement of direction.

There are a couple of further implications. One is that NVIDIA plans to reduce its dependence on Intel. ARM is a better partner, Scott told me, because its designs can be licensed by anyone. It is not surprising then that Intel’s multi-core evangelist James Reinders was dismissive when I asked him about NVIDIA’s claim that the GPU is far more power-efficient than the CPU. Reinders says that the forthcoming MIC (Many Integrated Core) processors codenamed Knights Corner are a better solution, referring to the:

… substantial advantages that the Intel MIC architecture has over GPGPU solutions that will allow it to have the power efficiency we all want for highly parallel workloads, but able to run an enormous volume of code that will never run on GPGPUs (and every algorithm that can run on GPGPUs will certainly be able to run on a MIC co-processor).

In other words, Intel foresees a future without the need for NVIDIA, at least in terms of general-purpose GPU programming, just as NVIDIA foresees a future without the need for Intel.

Incidentally, Scott told me that he left Cray for NVIDIA because of his belief in the superior power efficiency of GPUs. He also described how the Titan supercomputer operated by the Oak Ridge National Laboratory in the USA will be upgraded from its current CPU-only design to incorporate thousands of NVIDIA GPUs, with the intention of achieving twice the speed of Japan’s K computer, currently the world’s fastest.

This whole debate also has implications for Microsoft and Windows. Huang says he is looking forward to Windows on ARM, which makes sense given NVIDIA’s future plans. That said, the impression I get from Microsoft is that Windows on ARM is not intended to be the same as Windows on x86 save for the change of processor. My impression is that Windows on ARM is Microsoft’s iOS, a locked-down operating system that will be safer for users and more profitable for Microsoft as app sales are channelled through its store. That is all very well, but it suggests that we will still need x86 Windows if only to retain open access to the operating system.

Another interesting question is what will happen to Microsoft Office on ARM. It may be that x86 Windows will still be required for the full features of Office.

This means we cannot assume that Windows on ARM will be an instant hit; much is uncertain.

NVIDIA CEO Jen-Hsun Huang beats the drum for GPU computing

In his keynote at the GPU Technology Conference here in Beijing, NVIDIA CEO Jen-Hsun Huang presented the simple logic of GPU computing. The main constraint on computing is power consumption, he said:

Power is now the limiter of every computing platform, from cellphones to PCs and even datacenters.

CPUs are optimized for single-threaded computing and are relatively inefficient. According to Huang, a CPU spends 50 times as much power scheduling instructions as it does executing them. A GPU, by contrast, is formed of many simple processors and is optimized for parallel processing, making it far more efficient when performance is measured in FLOP/s (Floating Point Operations per Second) per watt. Therefore it is inevitable that computers will make use of GPU computing in order to achieve the best performance. Note that this does not mean dispensing with the CPU, but rather handing off processing to the GPU when appropriate.

This point is now accepted in the world of supercomputers. The computer at the National Supercomputing Center in Tianjin, China has 14,336 Intel CPUs, 7,168 NVIDIA Tesla GPUs, and 2,048 custom-designed 8-core CPUs called Galaxy FT-1000, and can achieve 4.7 Petaflop/s for a power consumption of 4.04 megawatts (MW), as presented this morning by the center’s Vice Director Xiaoquian Zhu. This is currently the second fastest supercomputer in the world.

Huang says that without GPUs the world would wait until 2035 for the first Exascale (1 Exaflop/s) supercomputer, presuming a power constraint of 20MW and current levels of performance improvement year by year, whereas by combining CPUs with GPUs this can be achieved in 2019.

Supercomputing is only half of the GPU computing story. More interesting for most users is the way this technology trickles down to the kind of computers we actually use. For example, today Lenovo announced several workstations which use NVIDIA’s Maximus technology to combine a GPU designed primarily for driving a display (Quadro) with a GPU designed primarily for GPU computing (Tesla). These workstations are aimed at design professionals, for whom the ability to render detailed designs quickly is important. The image below shows a Lenovo S20 on display here. Maybe these are not quite everyday computers, but they are still PCs. Approximate price to follow soon when I have had a chance to ask Lenovo. Update: prices start at around $4500 for an S20, with most of the cost being for the Tesla board.


GPU computing with NVIDIA in Beijing

I’m in Beijing for NVIDIA’s GPU Technology Conference; I attended last year’s event in San Jose and found it fascinating, partly because it has an academic and research flavour with a huge variety of projects on display.

This year the event is in Beijing, reflecting the level of HPC (High Performance Computing) activity in this region.


NVIDIA’s business is graphics processors, though it has expanded into the SoC (System on a Chip) business with its ARM-based Tegra chipset. This conference, however, is focused on the other end of the scale: Tesla GPUs designed primarily not for driving a display, but for rapid processing using massively parallel computing.

The Tesla business is relatively small for NVIDIA, accounting for less than 5% of its overall revenue, I was told, and the company treats it partly as research and development. That said, GPU computing is coming into the mainstream and the business is expected to grow. NVIDIA’s desktop GPU cards also support GPU computing.

I recently reviewed a video format converter from Cyberlink; the product was unexceptional except that it can take advantage of GPU computing, when available, to speed up converting from one video format to another. Since I do have a suitable graphics card (though sadly not a Tesla) this made a substantial difference, converting several times faster than another format converter I tried.

Of course NVIDIA is not the only player; there is an open standard (OpenCL) for GPU computing and other GPU vendors such as AMD implement OpenCL. NVIDIA implements OpenCL but also has its own CUDA architecture, which tends to be the focus of its conference as you would expect.

More reports soon.

Silverlight 5 is done. Is Silverlight also done?

Microsoft has announced the release of Silverlight 5.0.


Silverlight is a cross-platform, cross-browser plug-in for Windows and Mac. It is a relatively small download – less than 7MB according to Microsoft – though the Mac version seems to be bigger, with a 14MB compressed setup .dmg and apparently over 100MB once installed.


Never mind, it is a fine piece of work and has considerable capabilities, including the .NET Framework, the ability to render a GUI defined in XAML, multimedia playback, and support for applications running inside the browser or on the desktop. New in version 5 are better H.264 performance, 3D graphics, and Platform Invoke support on Windows, enabling trusted applications to call the native API. Another change is that in-browser applications can also run with full trust, again only on Windows. The cross-platform idea has become increasingly diluted.

If Microsoft had come up with Silverlight early in the .NET story it might have become a major application platform. As it is, while still useful in some contexts, the technology has been side-lined by new things including HTML 5 and the Windows Runtime in the forthcoming Windows 8.

While I have huge respect for the team which created Silverlight and rapidly improved it, it now looks like a sad story of reactive technology that failed to capture sufficient developer support. Microsoft invented Silverlight when Adobe Flash looked like it might take over as a universal runtime for web applications. The outcome was that Adobe evolved Flash with renewed vigour, keeping Silverlight at bay. Then Apple invented a new platform called iOS that supported neither Flash nor Silverlight, and the whole plug-in strategy began to look less compelling. Adobe has now reduced its focus on Flash, while Microsoft has been signalling a reduced role for Silverlight since its Professional Developers Conference in October 2010.

The question now is whether there will ever be a Silverlight 6.

Microsoft itself uses Silverlight across a number of products, such as administrative consoles for various server applications, so Silverlight will be around for a while yet. Of course it is also the runtime for Windows Phone 7. Visual Studio LightSwitch generates Silverlight applications, and this is one I am rather sad about, because it is an interesting tool that now seems to target the wrong platform. Perhaps the team will create an HTML 5 version one day.

Not allowed in Windows 8 Metro: porn, ads in live tiles, bugs, or opt-out data collection

Microsoft’s newly published Certification Requirements for the forthcoming Windows 8 store include some notable points. Here are a few that caught my eye.

2.3 Your app must not use tiles or notifications for ads

No complaints about that one.

3.2 Your app must not stop responding, end unexpectedly, or contain programming errors

Hmm, this could be a tough one.

3.3 Your app must provide the same user experience on all processor types

OK, no “Intel-only” features. However, you could by implication submit an “Intel-only” version of your app as long as it is called something different from the ARM version.

3.7 Your app must not use an interaction gesture in a way that is different from how Windows uses the gesture

This is interesting as an example of enforcing application style guidelines. The intent is a consistent user experience, but is this heavy-handed?

4.1 Your app must obtain opt-in or equivalent consent to publish personal information

No stealthy personal data collection. A good thing; though if opt-in means “Hand over your data or you cannot run the app” it can still be difficult for users to avoid.

4.4 Your app must not be designed or marketed to perform, instruct, or encourage tasks that could cause physical harm to a customer or any other person

What a relief!

5.1 Your app must not contain adult content

Windows Metro a porn-free zone? This could be troublesome though. No games beyond PEGI 16? This is a preliminary document and it would not surprise me if there is some change here; maybe this is a restriction for the beta period only.

Windows Store: Microsoft explains another piece of its new platform

Microsoft’s Ted Dworkin, Partner Program Manager, has posted details of how the forthcoming Windows Store will work, and there is also detailed new information on MSDN. This is a key piece of the puzzle if you care about the next version of Windows, covering how enterprises will be able to deploy apps as well as the terms of business for independent developers.


Here is a quick summary:

  • The store is both an app and a web site. The same content will automatically appear in both.
  • The store is for Metro-style apps, which run on the Windows Runtime. No word about desktop apps; my presumption is that they are excluded. The certification requirements refer only to Metro-style apps.
  • Apps can be offered as full-featured, or as limited or unlimited trials, upgradeable via in-app purchases.
  • Enterprise apps can be deployed through the store with access limited to employees.
  • Enterprise apps can also be deployed outside the store, using PowerShell scripts to domain-joined machines. Apps must be signed.
  • App vendors can use their own transaction engine and/or ad service if they choose, or use the built-in services for sale, in-app purchase and advertising. Subscriptions do not have to go through the store. My impression is that the initial sale does not have to be transacted through the store either but this is not 100% clear to me.
  • Developer registration for the store costs $49.00 for individuals or $99.00 for companies.
  • Revenue share is 70%, rising to 80% if you achieve over $25,000 revenue for an app.
  • Apps are subject to approval, but developers are given the App Certification Kit as part of the SDK. There is still scope for disagreement over the interpretation of policies.

 

There is an initial beta preview period during which all apps will be free. Microsoft has also annoyed most of the world’s developers by restricting a First Apps Contest to those who:

 are a developer – professional, hobbyist, or student – and you are a legal resident of the 50 United States and District of Columbia, France, Germany, Japan, or India

Oxygene for Java released: develop for Android and Java runtime with Delphi language in Visual Studio

RemObjects has released Oxygene for Java, a new version of its Object Pascal compiler. Object Pascal is pretty much the Delphi language though with some additional features of its own. Previous versions target the .NET runtime, and a version of this is marketed by Embarcadero as Prism. The IDE for Oxygene is Microsoft’s Visual Studio. This new version targets both the Java Runtime and the Android Dalvik VM. The obvious target market is Delphi developers who now want to create apps for Android, or cross-platform Java applications.

I downloaded the trial and ran the supplied Hello World in the Android emulator … it works.


A few further notes from the RemObjects announcement. While only Visual Studio is supported initially, an Eclipse version is also in preparation. Oxygene directly consumes .JAR libraries, so you can use both first- and third-party libraries. There is also a tool called Oxidizer that lets you import Java language code, which will be converted to Oxygene Object Pascal.

A point to note is that Embarcadero has already announced that its cross-platform FireMonkey framework will support Android as well as Apple iOS. This means that developers who want to code for Android in the Delphi language will have two choices. It looks to me as if Oxygene will be more suitable if you want to stay close to the Android SDK, whereas FireMonkey has its own custom-drawn user interface widgets and effects and should come into its own if you want the same code to run on both iOS and Android.

Given that a skilled Delphi developer would probably learn Java fairly quickly, how much value is there in Oxygene for Java? I guess factors include how much more productive you can be in Oxygene and the value of sharing code across projects targeting different platforms, presuming that you do not want to run Java everywhere.

Sencha’s Michael Mullany talks about Flash developers “flailing around for an alternative” and the Big App Rewrite

I spoke to Michael Mullany, CEO of Sencha, a company which creates HTML5 frameworks and tools for desktop and mobile browsers. Ext JS is aimed at desktop browser applications, while Sencha Touch is for mobile devices, currently Apple iOS, Google Android and Blackberry 6+. Sencha’s tools include Ext Designer, a visual application builder for Ext JS, and Sencha Animator, a designer for CSS 3 animations. Sencha Touch apps can also be packaged as native apps for iOS or Android.

At its developer conference in Austin, USA earlier this month, attended by around 600 people, the company announced Sencha.io, a cloud service for mobile web apps, as well as presenting Sencha Touch 2.0, a major update.


Mullany talks on his blog about “The Big App Rewrite”:

It’s a world where HTML5 powers the client apps, and they’re enriched with local APIs that execute on everything from traditional desktops to Smart TV’s. And cloud services provide the fabric that enables continuous, shared experiences across the diversity of end-devices. We think this is the platform for the web.

Sencha is perfectly in tune with the trends towards cloud, HTML5 and mobile, which is why I was keen to speak to Mullany. I asked him to contrast Ext JS and Sencha Touch with jQuery and jQuery Mobile.

jQuery is a pretty tiny library that helps with DOM abstraction and animations, but that’s it. jQuery UI gives you some visual components as well, but Ext JS is the full enchilada. It’s supposed to be the web equivalent of Cocoa or Microsoft’s Windows Presentation Foundation. It’s got an event system, a theming system, a very rich set of user interface controls, it’s object oriented, and it’s got a complex layout system so you can build nested layouts that have very complex event handling among different parts of the user interface. We’ve seen user interfaces that have several thousand data elements on a page.

It also has a model-view-controller architecture library on the client side so you can structure your code properly for large applications, and it’s got a theming system so you can variable-ise your colours, shapes and look and feel very easily. It also has a full data package so you can do very rich data manipulation on the client and bind data in various complex ways across variables; it’s very different from jQuery.

And just like you’d probably never use Ext JS on a public web page, you’d never use jQuery to build something like Marketo or Salesforce VisualForce or a Documentum content management system, all of which use Ext JS. Ext JS is one of the most popular behind-the-firewall libraries for desktop development.
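
To give a flavour of what Mullany is describing, here is a minimal sketch in the Ext JS 4 style, showing a nested border layout, a grid bound to a data store and an event listener; the store, fields and data are invented purely for illustration:

Ext.onReady(function () {
    // A simple in-memory store; a real application would load data through a proxy
    var orders = Ext.create('Ext.data.Store', {
        fields: ['id', 'customer'],
        data: [
            { id: 1, customer: 'Acme' },
            { id: 2, customer: 'Globex' }
        ]
    });

    // A nested border layout: collapsible navigation panel plus a grid of records
    Ext.create('Ext.container.Viewport', {
        layout: 'border',
        items: [
            { region: 'west', xtype: 'panel', title: 'Navigation', width: 200, collapsible: true },
            {
                region: 'center',
                xtype: 'grid',
                title: 'Orders',
                store: orders,
                columns: [
                    { text: 'ID', dataIndex: 'id', width: 60 },
                    { text: 'Customer', dataIndex: 'customer', flex: 1 }
                ],
                listeners: {
                    // The grid raises events that other parts of the UI can respond to
                    itemclick: function (view, record) {
                        console.log('Selected ' + record.get('customer'));
                    }
                }
            }
        ]
    });
});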

Now on the mobile side the difference is slightly less. jQuery Mobile does give you a set of user interface widgets, but the difference is also similar to the desktop … Sencha Touch is designed to let you do anything you could do with Cocoa Touch or an Android SDK or a Windows Mobile SDK. Its intent is to equip you to develop native-quality experiences with native-style interaction, things like fixed user interface chrome, multiple independent scrollable areas, nested layouts, those kinds of capabilities.

Our performance tends to be better cross-platform, we’ve done more performance work, we have our theming system, we have an MVC library, we have a templating system. With jQuery Mobile you tend to want to add multiple things together, and you can certainly assemble a collection of things that will look like Sencha Touch, but Sencha Touch is designed to be integrated, everything is designed to work the same, and the general feedback is that even though Sencha Touch is a much richer system that takes some insight to learn, you get better applications out of it.
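
A Sencha Touch 2 application is declared in the same configuration-driven way. The minimal sketch below, with invented names, illustrates the kind of native-style chrome Mullany mentions: a bottom tab bar, an independently scrollable view and a simple form:

Ext.application({
    name: 'Demo', // hypothetical application name, for illustration only
    launch: function () {
        Ext.create('Ext.tab.Panel', {
            fullscreen: true,
            tabBarPosition: 'bottom', // fixed UI chrome, as in a native app
            items: [
                {
                    title: 'Home',
                    iconCls: 'home',
                    scrollable: true, // an independently scrollable area
                    html: 'Native-feeling content goes here'
                },
                {
                    title: 'Contact',
                    iconCls: 'user',
                    xtype: 'formpanel',
                    items: [
                        { xtype: 'textfield', label: 'Name', name: 'name' }
                    ]
                }
            ]
        });
    }
});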

I also asked about the new cloud service, Sencha.io. A notable feature is that, according to Mullany, developers do not have to touch the code that runs in the cloud; they just call its API from the client:

We call it the first client-centric HTML5 cloud, which is a set of authentication, data, data synchronisation, and geo-location services that help people build mobile applications without needing to write server side code. So you literally write your client side application in HTML 5 using Sencha SDKs and then you store your user’s data and you store your user’s authentication credentials in our cloud. You don’t have to worry about mucking around with anything from Ruby on Rails to PHP to Java, it’s all abstracted behind these very clean APIs. We think that’s the future of mobile development, that you’ll have these very thin abstracted server-side services, and these very rich mobile clients that have off-line state and local data storage powered by HTML5. We think that model is the future of mobile web development and we obviously hope that Sencha.io will be the most popular back-end.

Sencha’s frameworks are open source and dual-licensed. You can use both Touch and Ext JS freely under the terms of the GPL v3. There is also free commercial licensing for Sencha Touch, while commercial licences for Ext JS are paid for. Sencha also has commercial tools, and I asked Mullany to describe the tool products:

We really see the three legs of the business being cloud, tools, and SDKs. We just did a preview of the Sencha Designer 2.0 release at our conference. That has support for Sencha Touch in it so you can drag and drop Sencha Touch applications together and then actually package them from within the tool. The intent is also to allow you to hook up to cloud APIs from within the tool as well, so it is an integrated, easy-to-use visual application builder for both desktop and touch. So that’s targeted at developers.

Sencha Animator is a little bit different. There’s no JavaScript in it really at all. It is a pure CSS 3 animation tool, and it is a traditional visual timeline with keyframe manipulation, and a style visual editor for creating rich animations.

The market we’re targeting for that is people doing interactive brand advertising on mobile. That’s where you have ubiquitous support for CSS 3 animations that are hardware accelerated so they tend to be the best performance. It’s also very web content friendly so you don’t have to write your application in Sencha Touch just to use Animator, it’s pure CSS output that you drop into whatever piece of content that you want to build.

The reason we built it is because we saw people flailing around for an alternative to doing Flash ads on mobile. Because Flash was banned from iOS, it meant that a whole segment of rich advertising that was based on Flash for the desktop had nowhere to go. They weren’t going to build native iOS applications, it had to be web. So the question then was what do you build it in, do you use JavaScript animation, do you use SVG, do you use Canvas, do you use some of the other graphic technologies such as WebGL? The answer is that CSS 3 is really the highest performance and cognitively pretty easy to wrap your head around.

People “flailing around for an alternative to doing Flash ads?” Mullany has his own agenda, but his comments do highlight the problems caused for Adobe by the success of Flash-free iPhone and iPad. I cannot help thinking that Sencha would be an attractive acquisition for Adobe or certain other companies, but I am sure smarter people than myself have thought of that.


Microsoft backs ECMAScript, dismisses Google Dart

Microsoft has posted an article on Evolving ECMAScript on its IE Blog. ECMAScript is the official standard for what we call JavaScript. The company is proposing some minor additions “to address gaps in Math, String and Number functionality as well as Globalization.” It has also taken the opportunity to take a shot at Google, which is proposing a new web language called Dart:

Some examples, like Dart, portend that JavaScript has fundamental flaws and to support these scenarios requires a “clean break” from JavaScript in both syntax and runtime. We disagree with this point of view. We believe that with committee participant focus, the standards runtime can be expanded and the syntactic features necessary to support JavaScript at scale can be built upon the existing JavaScript standard.

Dart will compile to JavaScript so there is a measure of compatibility, but if the language catches on then browsers without a native implementation will be disadvantaged.
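
To make the “gaps” concrete, here is a short JavaScript sketch of the kind of Math, String, Number and globalization additions under discussion in the standards committee. These particular examples reflect features that later shipped in ECMAScript and the ECMAScript Internationalization API; they are illustrative rather than Microsoft’s exact proposals:

// String, Number and Math gap-filling, without helper libraries
"silverlight.dmg".startsWith("silver"); // true
Number.isInteger(4.5);                  // false
Math.sign(-7);                          // -1

// Globalization: locale-aware formatting built into the runtime
var euros = new Intl.NumberFormat("de-DE", { style: "currency", currency: "EUR" });
console.log(euros.format(1234.5));      // "1.234,50 €"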