There has always been an uneasy balance between Java as a cross-platform, cross-vendor standard, and Java as a proprietary technology. Under Sun’s stewardship the balance was tilted towards the cross-platform standard, and eventually Java was open-sourced as OpenJDK. However, Sun – and now Oracle, following its acquisition of Sun – still owns Java. The official Java specification is determined by the optimistically-named Java Community Process (JCP). The JCP is a democratic organisation up to a point, the point in question being clause 5.9 in the JCP procedures:
EC ballots to approve UJSRs for new Platform Edition Specifications or JSRs that propose changes to the Java language, are approved if (a) at least a two-thirds majority of the votes cast are "yes" votes, (b) a minimum of 5 "yes" votes are cast, and (c) Sun casts one of the "yes" votes. Ballots are otherwise rejected.
In other words, nothing happens without Sun’s approval.
Now The Register reports that Oracle and the JCP have fallen out. According to this report, the JCP does not like Oracle’s suit against Google, and does not have confidence in Java FX or Java ME, both of which were promoted at the recent OpenWorld/JavaOne conference (though Java FX is to change significantly). The JCP still wants true independence – as, amusingly, proposed by Oracle in 2007:
… that the JCP become an open independent vendor-neutral Standards Organization where all members participate on a level playing field with the following characteristics:
members fund development and management expenses
a legal entity with by-laws, governing body, membership, etc.
a new, simplified IPR Policy that permits the broadest number of implementations
stringent compatibility requirements
dedicated to promoting the Java programming model
Oracle seems now to have changed its mind, wanting to tighten rather than loosen control over Java. Oracle still needs to work through the JCP in order to progress the Java specification so it will need either to mend relationships or reform the JCP somehow in order to deliver what was promised at JavaOne.
What does this mean for Java and its future? Perhaps surprisingly little. Alex Handy at SD Times reports this comment from Rod Johnson, now at VMware, whose SpringSource business was built on Java frameworks developed outside the JCP:
"There’s been very little activity on the [JCP] executive committee. I think we just have to wait and see what Oracle comes up with for JavaOne," he said. "The rest of the world is moving along fairly quickly. It’s not like we need Oracle or the EC of the JCP to get things done."
Java is the world’s most popular programming language. Further, Oracle is a smart company and although it is doing a good job of alienating members of the Java community – not least inventor James Gosling, now a loose cannon on deck – its technical work on Java will likely be excellent. That said, we are heading into an increasingly fractured world in terms of development platforms, especially in mobile, and that looks unlikely to change.
Scott Guthrie’s blog reports that a fix is now available for the Padding Oracle attack, which enables successful attackers to break the security of ASP.NET applications. There are a few points of interest.
First, there is not one patch but several, and which ones you need depend both on the version of Windows and the version of .NET. Multiple versions of .NET may be installed on a single server.
Second, the exploit is rated “important” in Microsoft security-speak, rather than “critical”. This is apparently because in itself the vulnerability merely discloses information. However, Microsoft is treating it with high priority because the information disclosed is likely to let an attacker go on to more severe actions, such as taking over a server. Confusing, but to my mind it is as critical as they come.
Third, Guthrie’s blog notes:
We’d like to thank Juliano Rizzo and Thai Duong, who discovered that their previous research worked against ASP.NET, for not releasing their POET tool publicly before our update was ready.
The implication is that the POET tool may be publicly available soon – so if you are responsible for an affected machine, get patching! In fact, in the webcast on the subject Microsoft stated that “The potential for exploit is very high during the next 30 days.”
Fourth, the update works by “additionally signing all data that is encrypted by ASP.NET.”
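In other words, the fix is conceptually encrypt-then-MAC: tamper with the ciphertext, as a padding oracle attack must, and the signature check fails before any padding is ever inspected. Here is a minimal C sketch of that general idea using OpenSSL’s HMAC – my own illustration of the technique, not Microsoft’s actual implementation:

#include <openssl/crypto.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

#define TAG_LEN 32  /* HMAC-SHA256 output size */

/* Append an HMAC-SHA256 tag, computed over the ciphertext, to the buffer.
   buf must have TAG_LEN spare bytes after cipher_len. Returns new length. */
size_t sign_ciphertext(const unsigned char *key, int key_len,
                       unsigned char *buf, size_t cipher_len)
{
    unsigned int tag_len = 0;
    HMAC(EVP_sha256(), key, key_len, buf, cipher_len,
         buf + cipher_len, &tag_len);
    return cipher_len + tag_len;
}

/* Verify the tag before decrypting; returns 1 if the ciphertext is intact.
   CRYPTO_memcmp compares in constant time to avoid a timing side-channel. */
int verify_ciphertext(const unsigned char *key, int key_len,
                      const unsigned char *buf, size_t total_len)
{
    unsigned char expected[TAG_LEN];
    unsigned int tag_len = 0;
    size_t cipher_len = total_len - TAG_LEN;
    HMAC(EVP_sha256(), key, key_len, buf, cipher_len, expected, &tag_len);
    return CRYPTO_memcmp(expected, buf + cipher_len, TAG_LEN) == 0;
}

The important detail is that the tag is verified before any decryption is attempted, so the attacker never gets a padding error to learn from.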
Update: Marc Brooks has investigated and it looks like there is a bit more to it than that.
Finally, the update will be included in Windows Update, but not immediately. Your choice is whether to risk a hack in the period before the automatic update appears, or endure the hassle of the manual downloads. Microsoft advises patching as soon as possible for servers on the public internet.
I am not sure what percentage of systems are likely to be patched soon, but I’d guess that plenty of vulnerable systems will remain online and that we have not heard the last of this bug.
I am puzzled by Microsoft’s decision to close Live Spaces and send all its users to WordPress.com. Of course WordPress is a superior blogging platform; but Spaces made sense as an element within an integrated Live.com platform. According to Microsoft, Spaces has 7 million users and 30 million visitors; and if you accept that business on the web is all about traffic and monetizing traffic, then it strikes me as odd that Microsoft has no better idea of what to do with that traffic than to give it to someone else.
It makes me wonder what exactly Microsoft is trying to do with its Live.com web property. You can make a generous interpretation, as Peter Bright does, and say that the company is learning to focus and losing its “not invented here” religion. Or you can argue that it exposes the lack of a coherent strategy for Microsoft’s online services for consumers.
Part of the reason may be that blogging itself has changed. The original concept of an online diary or “web log” has fractured, with much of the trivia that might once have been blogged now being expressed on Facebook or Twitter. At the other end, blog engines like WordPress have evolved into capable content management systems. Many blogs are just convenient tools to author web sites.
Spaces is also a personal CMS. When combined with other features of Live.com, it provides a way of authoring your own web site, with photos, lists, documents, music and video, gadgets and other modules. You can apply themes, select layouts, and even add custom HTML. Everything integrates with the Windows Live identity system. The blog is just one element in this.
Now, although you can move your blog to WordPress.com, much of this is going away. Themes, gadgets, the guestbook and lists are not transferred. If you were using Spaces as, in effect, a personal web site, you will have to start again on WordPress.
What this means is that WordPress, not Microsoft, now has the opportunity to show ads or market other services to these users.
Other services are continuing as before, including SkyDrive, which is an excellent online storage platform, and Hotmail for email. Still, the wider question is this. If Microsoft is happy to abandon 7 million users and all the customisation effort they have put into creating a personal online space, why should I trust it for email, or online storage?
Microsoft’s Dharmesh Mehta does his best to explain the decision here:
When we looked at Spaces, and what we had done with Spaces, and the more we thought about where do we want this to go, where do we think blogging evolves to, what’s important about that, you look at WordPress.com, and they’re building that. They’re doing a great job. And there really isn’t much value in us trying to compete with that.
This seems weak to me. Mehta is even less convincing when it comes to Live ID:
Windows Live ID is not really a means unto itself. There are times when it’s important for us to be able to associate an identity with someone. But there’s many things that we do where you don’t need a Windows Live ID — Photo Gallery, if you’re just using it on your PC, you don’t need a Windows Live ID at all. You can take our Mail app and connect it to Yahoo or Gmail or something like that. You don’t need a Windows Live ID. So I wouldn’t say that Windows Live ID is a goal, or something that we’re trying to drive in and of itself. It’s really more a means when we think it’s valuable for someone to have an account.
Now, I thought the Live ID was a single sign-on for Microsoft’s online services, and the basis of a network of friends and contacts. Perhaps Microsoft is now ceding that concept to Facebook or others? This does seem to be a move in that direction; and while it may be acceptance of something that was inevitable, it is a bad day for Microsoft’s efforts to matter online.
RIM has announced its pitch for the emerging tablet market, the 7-inch screen BlackBerry PlayBook. It has a new OS based on QNX Neutrino, a WebKit-based web browser, Adobe Flash and AIR – offline Flash applications – front and rear cameras for video conferencing as well as taking snaps, and a USB port and HDMI out. There is wi-fi and Bluetooth but no 3G in the first release; you can connect to the Internet via your BlackBerry. Storage is not yet specified as far as I can tell. There is no physical keyboard, which is surprising in some ways as the keyboard is the reason I hear most often for users choosing a BlackBerry smartphone over Apple’s iPhone.
Alongside the PlayBook RIM has announced a new developer platform. WebWorks is an HTML5 platform extended with access to local APIs, and targets both the Tablet OS and BlackBerry smartphones:
BlackBerry WebWorks applications can tap into the always-on, notification-based, push-enabled, contextual and social attributes of the BlackBerry smartphone. These applications can also access hardware features and integrate with other apps, and are powerful Super Apps that are fully integrated into the BlackBerry Application Platform.
In order to access local resources you need to package your app as a BlackBerry application. Java and native C applications are also supported.
A winner? Well, there is a widespread industry presumption that we all want tablets; for example NVIDIA CEO Jen-Hsun Huang is planning on this basis, judging by what he said to the press last week. It is certainly a market in which every vendor wants a presence. There are a number of open questions though. The new tablet market is really defined by Apple’s iPad, and success for other operating systems and form factors is yet to be demonstrated. Personally I am not sure about the 7-inch screen, which is perhaps too large for a pocket and too small for the desktop-like web browsing you can do on an iPad. Here are the dimensions:
BlackBerry PlayBook: 7-inch 1024×600 screen, 130mm x 193mm x 10mm
Apple iPad: 9.7-inch 1024×768 screen, 189.7mm x 242.8mm x 13.4mm
I doubt there will be much enthusiasm for carting around a phone, a small tablet, and a laptop, so in order to be viable as a portable device for work it has to be a laptop replacement. I do see this happening already with the iPad, though for me personally a netbook is both cheaper and more practical.
Apps are another key factor. It is smart of RIM to support Flash and AIR, which along with HTML5 web applications are likely the best bet for supporting something like the PlayBook without a lot of device-specific work.
The most eye-opening demonstration at the NVIDIA GPU Technology Conference last week was from Adobe’s David Salesin (Sr. Principal Scientist) and Todor Georgiev (Sr Research Scientist), who showed their Plenoptic Lens along with software for processing the resulting images.
There was a gasp of amazement from the audience when we saw what the process is capable of. We saw an image refocused after the event.
For anyone who has ever taken an out-of-focus picture – which I guess is everyone – the immediate reaction is to want one NOW. Another appealing idea is to take an image that has several items of interest, but at different depths, and shift the focus from one to another.
So how does it work? It starts with the plenoptic lens, which lets you “capture multiple views of the scene from slightly different viewpoints,” said Salesin:
If you have a high resolution sensor then each one of those images can be fairly high resolution. The neat thing is that with software, with computation, you can put this together into one large high-resolution image.
In a sense you are capturing a whole 4D lightfield. You’ve got two dimensions of the spatial position of the light ray, and also two dimensions of the orientation of the light ray.
With that 4D image, you can then after the fact use computation to take the place of optics. With computation you have a lot more flexibility. You can change the vantage point, the viewpoint a little bit, and you can also change the focus.
To resolve that, to take these individual little pieces of an image and put them together into one large image from any arbitrary view with any arbitrary focus, it turns out that texture mapping hardware is exactly what you need to do that. Using GPU chips we’ve been able to get speedups over the CPU of about 500 times.
Note that the image ends up being constructed in software. It is not just a matter of overlaying the small images in a certain way.
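To make that concrete, here is a toy CUDA sketch of the classic shift-and-add way to refocus a 4D lightfield L(x, y, u, v) – my own simplification, nothing like Adobe’s production code, which uses the GPU’s texture mapping hardware for the resampling. Each sub-aperture view is shifted in proportion to its (u, v) position on the lens and the views are averaged; objects at the chosen depth line up and stay sharp, everything else blurs:

/* lightfield holds nu x nv sub-aperture views, each w x h pixels, packed as
   lightfield[((v*nu + u)*h + y)*w + x]. alpha selects the focal plane. */
__global__ void refocus(const float *lightfield, float *out,
                        int w, int h, int nu, int nv, float alpha)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    float sum = 0.0f;
    for (int v = 0; v < nv; v++) {
        for (int u = 0; u < nu; u++) {
            /* Shift each view by an amount proportional to its position
               on the lens; only objects at the chosen depth stay aligned. */
            int sx = min(max(x + (int)(alpha * (u - nu / 2)), 0), w - 1);
            int sy = min(max(y + (int)(alpha * (v - nv / 2)), 0), h - 1);
            sum += lightfield[((v * nu + u) * h + sy) * w + sx];
        }
    }
    out[y * w + x] = sum / (nu * nv);  /* average of all shifted views */
}

Changing alpha re-renders the image focused at a different depth, which is exactly the trick we saw on stage; and since every output pixel is independent, the problem is a natural fit for the GPU.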
There is a good reason NVIDIA showed this at its conference. Suddenly we all want little cameras with GPUs powerful enough to do this on the fly.
I guess this demo is likely to show up again at the Adobe MAX conference next month.
There’s another report on this with diagrams here.
I had some free time following the NVIDIA GPU Technology conference and wandered up to the Valley Fair mall in San Jose. I took a quick look at the Apple store; there was really nothing for me to see in terms of new product, but it has a kind of "bees round a honeypot" appeal.
Next I went along to the Sony Style store, another strong brand you might think:
Clearly this is a social story as well as a technical story but it is significant.
The Sony store was actually more interesting to me, since the PlayStation Move was on display and I had not had an opportunity to try it before. A helpful assistant gave me a demo; we were going to play two-player table tennis but there was a technical issue with one of the controllers, so I ended up playing solo. In conjunction with the huge screen in the Sony store it was a very passable imitation of the real thing. Although it is well done, it does not feel like a revolution in the way the Wii did when it first appeared – you may recall that the pre-release Wii was code-named "Revolution".
Adding Move to your PS3 setup is somewhat expensive – you will probably want two controllers as well as the Eye camera – and there are not yet many games which support it, but I reckon it will be a lot of fun. One of the best aspects of the table tennis was the ability to rush forward for a forehand slam.
The Sony guy admitted to being curious about the Microsoft Xbox Kinect, which is coming out in a couple of months and does away with the controller completely. He said Microsoft is opening a store in San Francisco, and he plans to go up and take a look in due course.
A question: which of the above two pictures will the new Microsoft store most resemble?
Exhibiting here at the NVIDIA GPU Technology Conference is a Cambridge-based company called TidePowerd, whose product GPU.NET brings GPU programming to .NET developers. The product includes both a compiler and a runtime engine, and one of the advantages of this hybrid approach is that your code will run anywhere. It will run on both NVIDIA and AMD GPUs, provided they support GPU programming, or fall back to the CPU if no capable GPU is available. From the samples I saw, it also looks easier than coding CUDA C or OpenCL; you just add an attribute to have a function run on the GPU rather than the CPU. Of course underlying issues remain – the GPU is still isolated and cannot access global variables available to the rest of your application – but the Visual Studio tools try to warn of any such issues at compile time.
GPU.NET is in development and will go into public beta “by October 31st 2010” – head to the web site for more details.
I am not sure what sort of performance or other compromises GPU.NET introduces, but it strikes me that this kind of tool is needed to make GPU programming accessible to a wider range of developers.
Looking for a mini PC, maybe to plug into your TV without taking over the living room? I’ve just been looking at the range from Giada here at the NVIDIA GPU Tech conference, and like their handy size – which makes my Toshiba netbook look distinctly bulky – and their quiet running.
The latest Giada N20 measures just 160 x 175 x 23mm but still packs in an Atom D510 CPU, NVIDIA ION GT218 graphics with 512MB RAM, 2GB main memory, and a 320GB hard drive. Ports include two USB 2.0 ports, one USB 2.0/E-SATA combo port, a four-in-one card reader, Gigabit LAN, wi-fi, Bluetooth, HDMI and VGA video output, and both analogue audio and SPDIF digital output.
Price with Windows 7 Home Premium is $449, though it is not yet available in Europe.
NVIDIA CEO Jen-Hsun Huang spoke to the press at the GPU Technology Conference and I took the opportunity to ask some questions.
I asked for his views on the cloud as a supercomputer and whether that would impact the need for local supercomputers of the kind GPU computing enables.
Although we expect more and more to happen in the cloud, in the meantime we’re going to keep buying devices with more and more solid state memory. The way to think about it is, storage is simply a surrogate for bandwidth. If we had infinite bandwidth none of us would need storage. As bandwidth improves the requirement for storage should reduce. But there’s another trend which is that the amount of data we collect is growing incredibly fast … It’s going to be quite a long time before our need for storage will reduce.
But what about local computing power, Gigaflops as opposed to storage?
Wherever there is storage, there’s GigaFlops. Local storage, local computing.
Next, I brought up a subject which has been puzzling me here at GTC. You can do GPU programming with NVIDIA’s CUDA C, which only works on NVIDIA GPUs, or with OpenCL, which works with other vendors’ GPUs as well. Why is there more focus here on CUDA, when on the face of it developers would be better off with the cross-GPU approach? (Of course I know part of the answer: NVIDIA does not mind locking developers to its own products.)
The reason we focus all our evangelism and energy on CUDA is because CUDA requires us to, OpenCL does not. OpenCL has the benefit of IBM, AMD, Intel, and ourselves. Now CUDA is a little different in that its programming approach is different. Instead of an API it’s a language extension. You program in C, it’s a different model.
The reason why CUDA is more adopted than OpenCL is because it is simply more advanced. We’ve invested in CUDA much longer. The quality of the compiler is much better. The robustness of the programming environment is better. The tools around it are better, and there are more people programming it. The ecosystem is richer.
People ask me how do we feel about the fact that it is proprietary. There’s two ways to think about it. There’s CUDA and there’s Tesla. Tesla’s not proprietary at all, Tesla supports OpenCL and CUDA. If you bought a server with Tesla in it, you’re not getting anything less, you’re getting CUDA more. That’s the reason Tesla has been adopted by all the OEMs. If you want a GPU cluster, would you want one that only does OpenCL? Or does OpenCL and CUDA? 80% of GPU computing today is CUDA, 20% is OpenCL. If you want to reach 100% of it, you’re better off using Tesla. Over time, if more people use OpenCL that’s fine with us. The most important thing is GPU computing, the next most important thing to us is NVIDIA’s GPUs, and the next is CUDA. It’s way down the list.
Next, a hot topic. Jen-Hsun Huang explained why he announced a roadmap for future graphics chip architectures – Kepler in 2011, Maxwell in 2013 – so that software developers engaged in GPU programming can plan their projects. I asked him why Fermi, the current chip architecture, had been so delayed, and whether there was good reason to have confidence in the newly announced dates.
He answered by explaining the Fermi delay in both technical and management terms.
The technical answer is that there’s a piece of functionality that is between the shared symmetric multiprocessors (SMs), 236 processors, that need to communicate with each other, and with memory of all different types. So there’s SMs up here, and underneath the memories. In between there is a very complicated inter-connecting system that is very fast. It’s nearly all wires, dense metal with very little logic … we call that the fabric.
When you have wires that are next to each other that closely they couple, they interfere … it’s a solid mesh of metal. We found a major breakdown between the models, the tools, and reality. We got the first Fermi back. That piece of fabric – imagine we are all processors. All of us seem to be working. But we can’t talk to each other. We found out it’s because the connection between us is completely broken. We re-engineered the whole thing and made it work.
Your question was deeper than that. Your question wasn’t just what broke with Fermi – it was the fabric – but the question is how would you not let it happen again? It won’t be fabric next time, it will be something else.
The reason why the fabric failed isn’t because it was hard, but because it sat between the responsibility of two groups. The fabric is complicated because there’s an architectural component, a logic design component, and there’s a physics component. My engineers who know physics and my engineers who know architecture are in two different organisations. We let it sit right in the middle. So the management lesson learned – there should always be a pilot in charge.
Huang spent some time discussing changes in the industry. He identifies mobile computing “superphones” and tablets as the focus of a major shift happening now. Someone asked “What does that mean for your Geforce business?”
I don’t think like that. The way I think is, “what is my personal computer business”. The personal computer business is Geforce plus Tegra. If you start a business, don’t think about the product you make. Think about the customer you’re making it for. I want to give them the best possible personal computing experience.
Tegra is NVIDIA’s complete system on a chip, including ARM processor and of course NVIDIA graphics, aimed at mobile devices. NVIDIA’s challenge is that its success with Geforce does not guarantee success with Tegra, for which it is early days.
The further implication is that the immediate future may not be easy, as traditional PC and laptop sales decline.
The mainstream business for the personal computer industry will be rocky for some time. The reason is not because of the economy but because of mobile computing. The PC … will be under disruption from tablets. The difference between a tablet and a PC is going to become very small. Over the next few years we’re going to see that more and more people use their mobile device as their primary computer.
[Holds up Blackberry] There’s no question right now that this is my primary computer.
The rise of mobile devices is a topic Huang has returned to on several occasions here. “ARM is the most important CPU architecture, instruction set architecture, of the future” he told the keynote audience.
Clearly NVIDIA’s business plans are not without risk; but you cannot fault Huang for enthusiasm or awareness of coming changes. It is clear to me that NVIDIA has the attention of the scientific and academic community for GPU computing, and workstation OEMs are scrambling to build Tesla GPU computing cards into their systems, but transitions in the market for its mass-market graphics cards will be tricky for the company.
Update: Huang’s comments about the reasons for Fermi’s delay raised considerable interest as apparently he had not spoken about this on record before. Journalist Nico Ernst captured the moment on video:
I’m at NVIDIA’s GPU tech conference in San Jose. The central theme of the conference is that the capabilities of modern GPUs enable substantial performance gains for general computing, not just for graphics, though most of the examples we have seen involve some element of graphical processing. The reason you should care about this is that the gains are huge.
Take Matlab, for example: a popular language and IDE for algorithm development, data analysis and mathematical computation. We were told in the keynote here yesterday that Matlab is offering a parallel computing toolkit based on NVIDIA’s CUDA, with speed-ups from 10 to 40 times. Dramatic performance improvements open up new possibilities in computing.
Why has GPU performance advanced so rapidly, whereas CPU performance has levelled off? The reason is that they use different computing models. CPUs are general-purpose. The focus is on fast serial computation, executing a single thread as rapidly as possible. Since many applications are largely single-threaded, this is what we need, but there are technical barriers to increasing clock speed. Of course multi-core and multi-processor systems are now standard, so we have dual-core or quad-core machines, with big performance gains for multi-threaded applications.
By contrast, GPUs are designed to be massively parallel. A Tesla C1060 has not 2 or 4 or 8 cores, but 240; the C2050 has 448. These are not the same as CPU cores, but nevertheless do execute in parallel. The clock speed is only 1.3GHz, whereas an Intel Core i7 Extreme is 3.3GHz, but the Intel CPU has a mere 6 cores. An Intel Xeon 7560 runs at 2.266GHz and has 8 cores. The lower clock speed in the GPU is one reason it is more power-efficient.
NVIDIA’s CUDA initiative is about making this capability available to any application. NVIDIA made changes to its hardware to make it more amenable to standard C code, and delivered CUDA C with extensions to support it. In essence it is pretty simple. The extensions let you mark functions to execute on the GPU, allocate memory on the GPU, and copy memory between the GPU (called the device) and the main memory on the PC (called the host). You can also synchronize threads and use shared memory between threads.
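For a flavour of what this looks like, here is a minimal CUDA C sketch of my own – not taken from NVIDIA’s samples – which doubles an array of floats on the device:

#include <stdio.h>

/* Kernel: the __global__ qualifier marks it to run on the GPU (device);
   each thread handles one array element. */
__global__ void double_elements(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main(void)
{
    const int n = 1024;
    float host_data[1024];
    for (int i = 0; i < n; i++) host_data[i] = (float)i;

    float *device_data;
    cudaMalloc((void**)&device_data, n * sizeof(float));        /* allocate on the device */
    cudaMemcpy(device_data, host_data, n * sizeof(float),
               cudaMemcpyHostToDevice);                         /* host -> device copy */

    double_elements<<<(n + 255) / 256, 256>>>(device_data, n);  /* 4 blocks of 256 threads */

    cudaMemcpy(host_data, device_data, n * sizeof(float),
               cudaMemcpyDeviceToHost);                         /* device -> host copy */
    cudaFree(device_data);

    printf("host_data[3] = %f\n", host_data[3]);                /* prints 6.0 */
    return 0;
}

Note how much of the program is just moving data to and from the device.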
The reward is great performance, but there are several disadvantages. One is the challenge of concurrent programming and the subtle bugs it can introduce.
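To illustrate the kind of subtle bug I mean, consider this sketch (again my own): thousands of threads bumping a shared counter. The plain increment is a read-modify-write race that silently loses updates; the atomic version serialises it and gets the right answer:

__global__ void count_unsafe(int *counter)
{
    /* Race: many threads read the same old value, add one, and write it
       back, so most of the increments are silently lost. */
    *counter = *counter + 1;
}

__global__ void count_safe(int *counter)
{
    /* atomicAdd makes the read-modify-write indivisible. */
    atomicAdd(counter, 1);
}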
Another is the hassle of copying memory between host and device. The device is in effect a computer within a computer. Shifting data between the two is relatively slow.
A third is that CUDA is proprietary to NVIDIA. If you want your code to work with ATI’s equivalent, called Stream, then you should use the OpenCL library, though I’ve noticed that most people here seem to use CUDA; I presume they are able to specify the hardware and would rather avoid the compromises of a cross-GPU library. In the worst case, if you need to support both CUDA and non-CUDA systems, you might need to support different code paths depending on what is detected at runtime.
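The detection itself is straightforward, at least; something along these lines is the usual pattern (a sketch, assuming the CUDA runtime library is available to link against; run_on_gpu and run_on_cpu are hypothetical names for the two paths):

/* Returns 1 if at least one CUDA-capable device is present. */
int cuda_available(void)
{
    int count = 0;
    /* cudaGetDeviceCount fails cleanly if no device or driver is present. */
    if (cudaGetDeviceCount(&count) != cudaSuccess) return 0;
    return count > 0;
}

/* ... elsewhere, choose a code path at runtime ... */
if (cuda_available())
    run_on_gpu(data, n);   /* hypothetical CUDA path */
else
    run_on_cpu(data, n);   /* hypothetical plain C fallback */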
It is all a bit messy, though there are tools and libraries to simplify the task. For example, this morning we heard about GMAC, which makes host and device appear to use a single address space, though I imagine there are performance implications.
NVIDIA says it is democratizing supercomputing, bringing high performance computing within reach for almost anyone. There is something in that; but at the same time as a developer I would rather not think about whether my code will execute on the CPU or the GPU. Viewed at the highest level, I find it disappointing that to get great performance I need to bolster the capabilities of the CPU with a specialist add-on. The triumph of the GPU is in a sense the failure of the CPU. Convergence in some form or other strikes me as inevitable.