Category Archives: Uncategorized

First impressions of Microsoft Kinect – great hardware waiting for great software

The moment of magic comes when someone walks through the gaming area and Xbox flashes up the message that they have signed in. No button was pressed; this was face recognition working in the background during gameplay.

So Kinect is amazing. And it is: controller-less video gaming that works well enough to be a lot of fun. That said, I also have reservations about the device – though these are first impressions only – and feel it is let down badly by the games currently available.

My device arrived on the UK launch day, November 10th. It is a relatively compact affair, around 28 cm wide on a stubby stand. The first task is positioning it, which can be a challenge. You are meant to place it above or below your TV screen, at a height of between 0.6m and 1.8m. I was lucky, in that our TV is on a stand that has space for it; the height is fractionally below 0.6m but it seems to be happy. Alternatively, you can purchase a free-standing support or a bracket that clips to the top of a TV. I imagine there are some frustrated first-day purchasers who received a device but cannot satisfactorily position it.

You also need free space in front of the set. Our coffee table got moved when the Nintendo Wii arrived, so the 6ft required for one-player play is not a problem.  Two-player is more difficult; we can do it but it means moving furniture, which is a nuisance. Overall it is more intrusive than the Wii, but less than Rock Band or Guitar Hero with the drum kit, so not a deal-breaker.

Microsoft takes full advantage of over-the-wire updates with Kinect. After connecting, the Xbox, the device firmware, and the bundled Kinect Adventures game all received patches; but the procedure went smoothly.

Kinect is a sophisticated device, a lot more than just a camera. There are three major subsystems in Kinect: optical, audio and motor.

  • Motor is the simplest – the stubby stand also contains a motor assembly that tilts the device up and down, enabling it to compensate for different mounting positions and to find the optimal angle for players of different heights.
  • The optical subsystem includes two cameras and an infra-red projector. The projector overlays a pattern on the field of view. This allows the first camera, a depth sensor, to map the position of the players in three dimensions. This lets the system detect hand movements, for example, which are usually closer to the camera than the rest of the body. The second camera is a colour device more like the one in your webcam, and enables Kinect to take pictures of your gaming antics which you can share with the world if you feel so inclined, as well as presumably feeding into the positioning system.
  • The audio subsystem includes no less than four microphones. The reason is that Kinect does voice recognition at a distance, so needs to be able to compensate for both the sounds of the video game and other background noise. Using multiple microphones enables the audio processor to calculate the position of sounds, since each microphone will receive a sound at a fractionally different time.
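To make the time-difference idea concrete, here is a hypothetical sketch of how a two-microphone array can estimate the bearing of a sound source from arrival times. This is an illustration of the principle only, not Kinect's actual algorithm; the function name and figures are my own:

```go
package main

import (
	"fmt"
	"math"
)

// angleDeg estimates the bearing of a distant sound source from the
// arrival-time difference between two microphones.
//   c     – speed of sound in m/s
//   delay – arrival-time difference in seconds
//   d     – distance between the microphones in metres
func angleDeg(c, delay, d float64) float64 {
	// The extra path length to the farther microphone is c*delay;
	// for a far-field source, sin(theta) = (c*delay) / d.
	return math.Asin(c*delay/d) * 180 / math.Pi
}

func main() {
	// A delay of ~0.29 ms across a 20 cm baseline puts the source
	// about 30 degrees off-centre.
	fmt.Printf("%.1f\n", angleDeg(343.0, 0.000291545, 0.2))
}
```

With four microphones rather than two, the same arithmetic can be applied to several pairs at once, giving a position rather than just a bearing.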

These sensor systems are backed by considerable processing power – necessary because the Xbox itself devotes most of its processing to the game being played. The trade-off in systems like this is that more processing means more accurate interpretation of voice and gestures, but taking too much time introduces lag. As I saw at the NVIDIA GPU conference in September – see here and here for posts – very rapid processing enables magic like robotic pinhole surgery on a beating heart – and like Kinect, that magic is based on real-time interpretation of physical movement. Kinect is not at that level, but has audio and image processor chips and 512MB RAM, along with other components including, for some reason, an accelerometer, mounted on three circuit boards squashed into the slim plastic container. See for yourself in the ifixit teardown.

But how is it in practice? It certainly works, and we had a good and energetic time playing Kinect Adventures and a little bit of Joy Ride. Playing without a controller is a liberating experience. That said, there were some annoyances:

  • Kinect play is more vulnerable to interference than controller gaming. If someone walks across the play area, for example, it will interfere.
  • In the Kinect system, there is no such thing as a click. Therefore, to activate an option you have to hover over it for a short period while a progress circle fills; when the circle is filled, the system decides that you have “clicked”. It is slower and less reliable than clicking a button.
  • The audio system enables voice control which seems to work well when available, but most of the time it seems not to be available. Considering the amount of hardware dedicated to this, it seems rather a waste; but presumably more is to come. Controlling Sky player by voice, for example, would be great; no more hunting for the remote.
  • The Kinect seems to work best when you are standing. For something like a driving game, that is not what you want. Apparently seated gameplay is supported, but does not work properly with the launch games; so watch this space.
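The hover-to-click behaviour described above amounts to a small dwell-timer state machine. The following is a hypothetical sketch in Go, not the actual Kinect interface code:

```go
package main

import "fmt"

// DwellButton models Kinect-style "hover to click": the hand must stay
// over a target until a progress circle fills.
type DwellButton struct {
	Threshold float64 // seconds of hovering needed to register a click
	progress  float64
}

// Update advances the timer by dt seconds. hovering reports whether the
// cursor is over the target; leaving the target resets progress.
// It returns true on the update that completes the dwell.
func (b *DwellButton) Update(hovering bool, dt float64) bool {
	if !hovering {
		b.progress = 0
		return false
	}
	b.progress += dt
	if b.progress >= b.Threshold {
		b.progress = 0
		return true
	}
	return false
}

func main() {
	btn := &DwellButton{Threshold: 1.5}
	clicked := false
	for i := 0; i < 20; i++ { // 20 frames at 100ms each = 2 seconds of hovering
		if btn.Update(true, 0.1) {
			clicked = true
		}
	}
	fmt.Println("clicked:", clicked)
}
```

The dwell threshold is the usability trade-off in miniature: shorter means accidental "clicks", longer means the sluggishness described above.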

Launching stuff before it is really ready seems to be ingrained in Microsoft’s culture. Is Kinect another example? To some extent I suspect it is. I recall the early days with the Nintendo Wii as exciting moments of discovery: the system worked well from the get-go, and the bundled Wii Sports game is a masterpiece. The Kinect games so far are less impressive.

In fact, my overwhelming impression so far is that this is great hardware waiting for software to show what it can do. The 20,000 Leaks mini-game in Adventures is not very good – you are in a glass cage underwater and have to cover leaks to stem them – but it is interesting because you have to use head, hands and feet to play it. It could not be duplicated with a conventional controller, because a conventional controller does not allow you to move one thing this way, and another thing that way, at the same time.

It follows that Kinect should enable some brilliant new gaming concepts. I’d love to see a stealth adventure done for Kinect, for example; there are new possibilities for realism and excitement.

As it is, the Kinect launch games show little imagination and seem to be heavily Wii-influenced – and if you compare Kinect with Wii on that basis, you might well conclude that the Wii is better in some ways, worse in others, but cheaper and with better games, and without the friction of Kinect’s somewhat fussy requirements.

Such a comparison is not fair to Kinect, which in concept and hardware is a generation ahead of Wii or PlayStation Move. It now awaits software to take advantage.

What chance for MeeGo in the age of the iPad?

Today is Apple iPad day in the UK; but the portable device I’ve been playing with is not from Apple. Rather, I downloaded the first release build of MeeGo, proudly labelled 1.0, and installed it on my Toshiba NB 300 netbook, which normally runs Windows. You can choose between the evil edition with Google Chrome and the free edition with Chromium – I picked the Chrome version. I did not burn any bridges: I simply copied the image to a 2GB USB memory stick and booted from that. There was one oddity: the USB boot only worked when using the USB port on the right by the power socket, and not from the one on the left edge of the netbook. It is a common problem with USB: not all ports are equal.


MeeGo is a joint project from Intel and Nokia, formed by the merging of Intel Moblin and Nokia Maemo. It is a version of Linux designed for mobile devices, from smartphones to netbooks, though this first release is only for netbooks. Further releases are planned on a "six-month cadence", and a wider range of devices including handsets and touch-screen tablets is promised for October.

First impressions are mixed. Starting with the good news: performance is great, the user interface is smooth and polished, and less child-like and cutesy than the last Moblin I looked at. The designers have really thought about how to make the OS netbook-friendly. Applications run full-screen, making the best use of the limited screen size. Navigation is via a toolbar which slides into view if you move the mouse to the top of the screen. From here, you can switch between "Zones" – in effect, each zone is a running application. Not difficult, but laborious; I found myself using Alt-Tab for switching between applications. I also miss the Windows taskbar, despite the screen space it occupies, since it helps to have a visual reminder of the other apps you have running.

There is also a home page which is a kind of local portal, showing current Twitter status (once I had added my Twitter account), application shortcuts, current appointments, recent web history, and other handy shortcuts.

Getting started was relatively quick. I soon figured out that the Network icon in the toolbar would let me configure wireless networking. It took me a little longer to find the system preferences, which are found by clicking the All Settings button in the Devices menu. Here I was able to change the keyboard layout from US to GB, though since it does not take effect until you log out, and I was using the live image which does not save changes, I was still stuck with the wrong layout.

A terminal – essential for serious Linux users – can be found in the System Tools section of the Application menu. I needed a password to obtain root access, which I discovered is set by default to "meego" in the live image. I presume this is a feature of the live image only, as this would otherwise be a serious security risk.

I soon found annoyances. This may be version 1.0, but it is described as a "core" release and seems mainly intended for software developers and I presume device manufacturers who are getting started. The selection of pre-installed applications is very limited, and does not include a word processor or spreadsheet.  There is a "Garage" utility for installing new apps, but although it seems to offer Abiword and Gnumeric, I could not get the links to resolve. I cannot find an image editor either. Without basic apps like this, MeeGo is not something I could rely on while out and about.


I was surprised to find no link to the Intel AppUp store, which will offer applications for MeeGo, and when I tried to install the AppUp beta I got failed dependencies. I optimistically tried to install Adobe AIR; no go there either.

There must be other ways of getting apps installed – this is Linux after all – but I was looking for a quick and easy route.

Adobe Flash 10.1 is installed and works, though not on my first attempt. Trying to play a YouTube video made Chrome unresponsive, and I could not get Flash content to play on any site. I rebooted and all was well.

A big irritation for me is that you cannot disable tapping on the touchpad. There is a checkbox for it in settings, but it is both ticked and grayed so you cannot change it. I detest tapping since you inevitably tap by accident sometimes, on occasion losing work or just wasting time. No doubt there is some setting you can change through the terminal but I haven’t had time to investigate. It is also possible that doing a full install to hard drive would fix it, as the live image does not save changes.


Nevertheless, the progress is encouraging and if development continues at this pace I can see MeeGo becoming a strong alternative to Windows on netbooks: faster, cheaper, and better optimized for this kind of device. Even against the Apple iPad, I can see the attraction of something like a MeeGo netbook: freedom, Flash, value for money, and a keyboard.

The big question though: what chance has MeeGo got in the face of competition from Apple, Google with Android, and Microsoft with Windows? It seems to me that all three are safe bets, in that they are not going away and already have momentum behind them. Will the public also make room for MeeGo? I like it well enough to hope it succeeds, but fear it may be crowded out by the competition, other than for Nokia smartphones.

Microsoft – make up your mind about Moonlight

I’ve been trying out Microsoft’s Office Web Apps, as provided for the release version of SharePoint 2010. The cross platform story is uneven, whether across Mac/Windows/Linux, or across different browsers, or even across different versions of Windows and Office. So far it does mostly work though, even if there are problems with certain tasks like printing or opening an online document in a desktop application.

The biggest problem I’ve had is on Linux. Supposedly Firefox 3.5 on Linux is supported. I ran up Ubuntu and Firefox 3.5, and went to look at a document in Word Web App. When I selected the document, Firefox quit. Every time.

After checking that Firefox was up-to-date it occurred to me that the problem might be related to Moonlight, the Linux implementation of Silverlight done by the Mono team. I disabled it. Suddenly, everything worked, even Edit in browser.

Moonlight is not just an open source project like the original Mono. It has a certain amount of official blessing from Microsoft. Here’s what VP Scott Guthrie said back in September 2007:

Over the last few months we’ve been working to enable Silverlight support on Linux, and today we are announcing a formal partnership with Novell to provide a great Silverlight implementation for Linux.  Microsoft will be delivering Silverlight Media Codecs for Linux, and Novell will be building a 100% compatible Silverlight runtime implementation called “Moonlight”.

Moonlight will run on all Linux distributions, and support FireFox, Konqueror, and Opera browsers.  Moonlight will support both the JavaScript programming model available in Silverlight 1.0, as well as the full .NET programming model we will enable in Silverlight 1.1.

You would think therefore that Microsoft would test the Firefox/Linux/Moonlight combination with its shiny new Office Web Apps. Apparently not. Here’s what the user experience is like for Word Web App. I figured that the solution might be to upgrade Moonlight to the latest version, so I did, installing what is now called Novell Moonlight 2.2. I went back to Word Web App. Firefox no longer crashes, but I now get a blank area where the Word document should be shown, and an error if I resize the browser window:

Now let’s see what happens if I disable Moonlight:

Everything is fine – except now there is a banner inviting me to “Improve my experience” by installing Silverlight. If I follow the link I eventually get back to the same Moonlight install that I have just disabled, which would actually break rather than improve Word Web App.

It is obvious that if users have to disable Moonlight to work with Office Web Apps, this will not help Moonlight adoption on Linux.

Office Web Apps are new and I’d hope this is something that Microsoft, Novell and the Mono team can soon fix between them. One reason for highlighting it now is the hope that something could be done before the full roll-out of Office and SharePoint 2010 on May 12th.

The real point though is what this says about the extent to which Microsoft cares about Moonlight and Linux users, and how much or little communication takes place between Microsoft and Novell. Silverlight isn’t required for Office Web Apps – as you can see from the above – but it is used to good effect where available, and this Office release is therefore an important release for Silverlight as well.

Microsoft should make up its mind. Is Novell really a trusted partner for Silverlight on Linux? Or a third-party product that has to take its chances?

Apple locks down its platform just a little bit more

How much money is enough? “Just a little bit more”, said J D Rockefeller; and Apple is taking a similar line with respect to control of its mobile platform. It is no longer enough that all apps are approved by Apple, sold by Apple, and that a slice of any sales goes to Apple. It now wants to control how you make that app as well, stipulating the tools you use and prohibiting use of others:

Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine.

On the face of it, bad news for third-party companies like Adobe, whose Flash to iPhone compiler is released tomorrow, Novell’s MonoTouch, or Unity3D:

JavaScript and C# scripts are compiled to native ARM assembler code during the build process. This gives an average performance increase of 20-40 times over interpreted languages.

What is interesting is not only the issue itself, but the way the debate is being conducted. I don’t know how Novell is getting on in “reaching out to Apple” concerning MonoTouch, but as far as I can tell Apple introduced the restriction by revising a clause in a contract shown only to paid-up iPhone developers and possibly under NDA, then seeing if anyone would notice. Now that sparks are flying, CEO Steve Jobs is participating via one-line emails to a blogger referencing a post by another blogger, John Gruber.

Further, his responses do not altogether make sense. Gruber’s post is long – does Jobs agree with all of it? Gruber says that Apple wants the lock-in:

So what Apple does not want is for some other company to establish a de facto standard software platform on top of Cocoa Touch. Not Adobe’s Flash. Not .NET (through MonoTouch). If that were to happen, there’s no lock-in advantage.

Probably true, but not the usual PR message, as lock-in is bad for customers. How much are inkjet cartridges? I suspect Jobs was thinking more of this part:

Cross-platform software toolkits have never — ever — produced top-notch native apps for Apple platforms. Not for the classic Mac OS, not for Mac OS X, and not for iPhone OS. Such apps generally have been downright crummy.

As it happens, I think Gruber, and by extension Jobs, is wrong about this; though it all depends what you mean by the output of a cross-platform toolkit. Firefox? NeoOffice? WebKit, as found in Safari? Jobs says:

We’ve been there before, and intermediate layers between the platform and the developer ultimately produces sub-standard apps and hinders the progress of the platform.

Well, we know he does not like Java – “this big heavyweight ball and chain” – but there are many approaches to cross-platform. In fact, I’m not even sure whether Jobs means technical layers or political layers. As Gruber says:

Consider a world where some other company’s cross-platform toolkit proved wildly popular. Then Apple releases major new features to iPhone OS, and that other company’s toolkit is slow to adopt them. At that point, it’s the other company that controls when third-party apps can make use of these features.

The point is: we don’t know what Jobs means. We might not know until apps hit the app store and are approved or not approved. It is a poor way to treat third parties who are investing in your platform; and that was one part of the reason for my initial reaction: it stinks.

The other reason is that I enjoy the freedom a personal computer gives you, to install what you want, from whomever you want, and the creativity that this inspires. At the same time, I can see the problems this has caused, for security, for technical stability, and for user experience. Personal computing seems to be transitioning to a model that gives us less control over the devices we use, and which makes a few privileged intermediaries more powerful and wealthy than anything we have seen before.

In the end, it is Apple’s platform. Apple does not yet monopolise the market – though my local supermarket has iPods in all sorts of colours but no other portable music player on sale – and the short answer is that if you don’t like the terms, don’t buy (or develop for) the product.

As Apple’s market share grows, the acceptability of its terms will lessen, and protests will grow louder, just as they did for Microsoft – though I hesitate to make that comparison because of the many differences between the two companies and their business models. Having said which, looking at Zune and Windows Phone 7, Microsoft seems to like Apple’s business model enough to imitate it.

Why programmers should study Microsoft’s random failure and not trust Google search

The bizarre story of the EU-mandated Windows browser choice screen took an unexpected twist recently when it was noticed that the order of the browsers was not truly random.


IBM’s Rob Weir was not the first to spot the problem, but did a great job in writing it up, both when initially observed and after it was fixed by Microsoft.

It was an algorithm error, a piece of code that did not return the results the programmer intended.

Unless Microsoft chooses to tell us, there is no way to tell how the error happened. However, as Weir and others observe, it may be significant that a Google search for something like Javascript random sort immediately gets you sample code that has the same error. Further, the error is not immediately obvious, making it particularly dangerous.

I am sure I am not the only person to turn to Google when confronted with some programming task that requires some research. In general, it is a great resource; and Google’s own algorithms help a little with filtering the results so that sites with better reputation or more inbound links come higher in the results.

Still, what this case illustrates – though accepting again that we do not know how the error occurred in this instance – is that pasting code from a Google search into your project without fully understanding and testing it does not always work. Subtle bugs like this one, which may go unnoticed for a long time, can have severe consequences. Randomisation is used in security code, for example.

As an aside, there also seems to be some randomness in the appearance of the browser choice screen. It turned up on my laptop, but not on my desktop, although both have IE as the default.

And who would have guessed that the EU would arrange for so many of us to get an ad for something like the GreenBrowser popping up on our desktop? Apparently it is the “best choice of flexible and powerful green web browser”, though since it is based on IE it is less radical a choice than it first seems.


SharePoint Explorer View hassles show benefits of cloud storage

Many of us want access to our documents from anywhere these days, and if you are still storing documents on a Windows server then remote access to documents usually means either VPN or SharePoint. VPN is heavy on bandwidth and not great for security, so SharePoint seems the obvious solution.

SharePoint is a mixed bag of course, but once it is up and running the browser user interface seems reliable as a means of getting at your documents over the internet. That said, it is inconvenient to run up the browser and navigate to a web site whenever you want a document. A user recently highlighted another issue. Their company uses a web application that frequently requires documents to be uploaded. This is straightforward if the document is on a local hard drive or network share, but not if it is in SharePoint. The workaround is to save the document out of SharePoint to the local drive, then upload it.

Fortunately there is another option. SharePoint Explorer View lets you access documents through Windows Explorer; you can even map SharePoint as a network drive. Now you can browse documents without a web browser, and upload directly to a web application.

Sounds great; and when it works, it is great. Troubleshooting though is a world of pain. If you have looked into this, you will know that there are really two Explorer Views, one using Internet Explorer and ancient FrontPage protocols, and the other using WebDAV and Explorer. It’s the second of these that you most likely want. However, achieving this is notoriously troublesome, raising uninformative messages such as “Your client does not support opening this list with Windows Explorer”, or from the command line System Error 67, or System Error 53 “The network path was not found”.


Another common complaint is incessant login dialogs.

I discovered a few useful resources.

This white paper on Understanding and Troubleshooting the SharePoint Explorer View is essential reading.

From this you will discover that if you are using Windows XP, the WebDAV SharePoint Explorer view will not work over SSL or on any port other than 80. You are stuck with the FrontPage view, which is less useful. Apparently Microsoft has no intention of fixing this. Upgrade to Vista or Windows 7.

In addition, many XP and even Vista users find this update essential before anything starts working. It is necessary on Windows 2003 since the web client is not installed by default. It does not apply to Windows 7 though.

A good resource on the repeated login issue is here. It can be tamed.

Windows 7 is better, though I experienced an odd issue. One Windows 7 machine cheerfully opened the Explorer view to a remote site on port 444. I could engage Explorer View from the SharePoint web site, or from Network in Explorer, and it just worked.

On another machine, same network, also Windows 7, same web client settings, I could not get it working. I was on the point of giving up when I happened on the right incantation from a command prompt:

net use s: https://your.domain.name:444\shared%20documents /user:domain\username password

In this example S is the drive letter for a mapped drive, your.domain.name is the URL for SharePoint, 444 is the port number, shared documents is the folder name. For some reason this worked instantly.

Well, SharePoint is an option. Before leaving this subject though, I would like to mention Gladinet, a third-party utility which is able to mount a variety of cloud storage providers as network drives, including Amazon S3, Google Docs, Windows Live SkyDrive, and in the latest version Windows Azure.  It works on XP, Vista, Windows 7 and Windows 2003, comes in 32-bit and 64-bit editions, and worked immediately in my quick test. The ability to mount drives in Explorer itself, as opposed to an Explorer-like application, makes a big difference in usability.


Gladinet does not support SharePoint, sadly. Still, before you roll out SharePoint it is worth considering that something like an Amazon S3 account requires no CALs (though third-party clients like Gladinet may do), is maintained by a cloud provider rather than on your premises, is not hooked in any way to Windows clients, and might be a lot less hassle to deploy.

I do also understand the attraction of SharePoint, if you don’t or can’t trust the cloud, and like the way it integrates with Active Directory or its other clever features such as versioning or workflow management. What I don’t get is why Microsoft makes basic features like Explorer View so hard to get working.

Finally, this aspect of SharePoint should get better in Office 2010 and SharePoint 2010, which includes SharePoint Workspace 2010. This will synchronize with SharePoint 2010 document lists, giving you an offline copy you can access in Explorer. Agnes Molnar has a summary with screenshots.

Chrome OS: will Google keep its vision?

I spent some time with Chrome OS over the weekend and yesterday, first doing my own build of the open source Chromium OS, and then running it and writing a review.

The build process was interesting: you compile Chromium OS from inside a chroot environment. My first efforts were unsuccessful, for two reasons. First, Chromium OS assumes the presence of a pre-built Chromium (the browser), so you have to either build Chromium first, or download a pre-built version. However, the Chromium build has to be customised for Chromium OS. I did manage to build Chromium, but it failed to run, with what looked like a gtk version error, so I gave up and downloaded a zip.

Chromium OS itself I did build successfully, though I ran into an error that needed this patch, which I applied manually. I was using the latest code from the git repository at the time. I expect that this problem has been fixed now though you may run into different ones; life on the bleeding edge can be painful.

I also had difficulty logging in. You are meant to log in with a Google account, which presumes a live internet connection at least on the first occasion. Although Chromium OS successfully used the ethernet connection on my laptop, getting an IP address and successfully pinging internet sites, the login still failed with a “Network not connected” error. Studying the logs revealed a certificate error. You can also create a backdoor user at build time, so I did that instead.

Once I got Chromium OS up and running, booting from a USB key, I found it mostly worked OK. It is a fascinating project, because of Google’s determination to avoid local application installs, thereby gaining better security as well as driving the user towards web solutions for all their needs.

That’s a bold vision, but also an annoying one. Normally, when reviewing something relevant like an operating system or a word processor, I try to write the review in the product I am testing. In fact, I am writing this post in Chromium OS. However, I could not write my review on Chromium OS, because I needed screenshots; and although there are excellent web-based image editing tools, I could not find a way to take screenshots and paste or upload them into those tools. The solution I adopted was to run Chromium OS in a virtual machine – I used VirtualBox – and take the screenshots from the host operating system.

It is a small point; but makes me wonder whether Google will end up bundling just a few local utilities to make the web-based life a little easier. If it does so, third parties will want to add their own; and Google will be under pressure to abandon its idea of no local application installs.

Another interesting point: the rumour is that Google will unify Chrome OS  with Android, which does allow application installs. Can that happen without providing a way to run Android apps on Chrome?

Chromium OS includes a calculator utility, which opens in a panel. Mine does not work though; I get a blank panel with the URL http://welcome-cros.appspot.com/calculator.html – which seems to be a broken link. Still, is that really a sensible way to provide a calculator? What about offline – will it work from a Gears local web server, or as a static HTML page with a JavaScript calculator, or will it not work at all?

I will be interested to see whether Google ends up compromising a little in order to improve the usability and features of its new OS.

Google’s new language: Go

Google has a new language. The language is called Go, though issue 9 on the bug tracker is from the inventor of another language called Go and asks for a name change. Co-inventor Rob Pike says [PDF] that Google’s Go is a response to the problem of long build times and uncontrolled dependencies; fast compilation is an important feature. It is a garbage-collected language with C-like syntax – echoes of Java and C# there – and has strong support for concurrency and communication. Pike’s examples in the paper referenced above do show a simple and effective approach to communication, with communication channel objects, and to concurrency, with Goroutines.
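A minimal goroutine-and-channel example gives the flavour of that communication style. This is a generic sketch of my own, not one of the examples from Pike's paper:

```go
package main

import "fmt"

// square reads numbers from one channel, writes their squares to another,
// and closes the output when the input is exhausted. Goroutines share
// data by communicating over typed channels rather than by locking
// shared memory.
func square(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go square(in, out) // runs concurrently with main

	// Feed the pipeline from another goroutine, then close the channel
	// to signal completion.
	go func() {
		for i := 1; i <= 4; i++ {
			in <- i
		}
		close(in)
	}()

	sum := 0
	for v := range out {
		sum += v
	}
	fmt.Println(sum) // 1 + 4 + 9 + 16
}
```

Stages like this can be chained, which is the "simple and effective approach to communication" the paper demonstrates at greater length.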

Go runs only on Linux or Mac OS X. I installed it on Ubuntu and successfully compiled and ran a one-line application. I used the 32-bit version, though apparently the 64-bit implementation is the most advanced.

Pike claims that performance is “typically within 10%-20% of C”. No debugger yet, but in preparation. No generics yet, but planned long-term. Pointers, but no pointer arithmetic.

Go does not support type inheritance, but “Rather than requiring the programmer to declare ahead of time that two types are related, in Go a type automatically satisfies any interface that specifies a subset of its methods.”
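A short example of my own shows this implicit interface satisfaction in action:

```go
package main

import "fmt"

// Describer is an ordinary interface; note that no type anywhere
// declares that it implements it.
type Describer interface {
	Describe() string
}

type Point struct{ X, Y int }

// Simply by defining Describe with the right signature, Point
// automatically satisfies Describer – there is no "implements" keyword.
func (p Point) Describe() string {
	return fmt.Sprintf("(%d,%d)", p.X, p.Y)
}

func main() {
	var d Describer = Point{3, 4} // compiles: satisfaction is implicit
	fmt.Println(d.Describe())
}
```

This means interfaces can be declared after the fact, by consumers of a type rather than its author – a notable contrast with Java or C#.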

Google has many projects, and while Go looks significant, it is dangerous to make assumptions about its future importance.

I don’t think Google is doing this just to prove that it can; I think it is trying to solve some problems and doing so in an interesting way.

Migrating to Hyper-V 2008 R2

I have a test setup in my office which runs mostly on Hyper-V. It is a kind of home-brew small business server, with Exchange, ISA and SharePoint all running on separate VMs. I’ve followed Microsoft’s advice and kept Active Directory on a separate physical server. Until today, Hyper-V itself was running on Server 2008.

I’m reviewing Hyper-V Server 2008 R2, so I figured it would be interesting to migrate the VMs. I attached an external USB drive, shut down the VMs and exported them. Next, I verified that there was nothing else I needed to preserve on that machine, and set about installing Hyper-V Server 2008 R2 from scratch.

Aside: when I first set this up I broke the rules by having Active Directory on the Hyper-V host. That worked well enough in my small setup; but I realised that you lose some of the benefit of virtualisation if you have anything of value on the host, so I moved Active Directory to a separate box.

I wish I could tell you that the migration went smoothly. Actually, from the Hyper-V perspective it did go smoothly. However, I had an ordeal with my server, a cheapie HP ML110 G5. The driver for the embedded Adaptec SATA RAID did not work with Hyper-V Server 2008 R2, and I couldn’t find an update, so I disabled the RAID. The driver for my second network card also didn’t work, and I had to replace the card. Finally, my efforts at updating the BIOS had landed me with a known problem on this server: the fans staying at maximum speed and deafening volume. Fortunately I found this thread, which gives a fix: installing upgraded firmware for HP’s Lights-Out Remote Management as well. Blissful (near) silence.

Once I’d got the operating system installed successfully, bringing the VMs back online was a snap. I used the console menu to join the machine to the domain, set up remote management, and configure the network cards. Next, I copied the exported VMs to the new server, imported them using Hyper-V Manager running on Windows 7, and shortly afterwards everything was up and running again. I did get a warning logged about the integration services being out-of-date, but these were easy to upgrade. I’m hoping to see some performance benefit, since my .vhd virtual drives are dynamic, and these are meant to be much faster in the R2 update.

Although I’m impressed with Hyper-V itself, some aspects of Hyper-V Server 2008 R2 are lacking. Mostly this is to do with Server Core. Shipping a cut-down Server OS without a GUI is a great idea in itself, but Microsoft either needs to make it easy to manage from the command line, or easy to hook up to remote tools. Neither is the case. If you want to manage Hyper-V from the command line you need this semi-official management library, which seems to be the personal project of technical evangelist James O’Neill. Great work, but you would have thought it would be built into the product.

As for remote tools, the tools themselves exist, but getting the permissions right is such an arcane process that another dedicated Microsoft individual, program manager John Howard, wrote a script to make it possible for humans. It is not so bad with domain-joined hosts like mine, but even then I’ve had strange errors. I haven’t managed to get device manager working remotely yet – “Access denied” – and sometimes I get a Kerberos error, “network path not found”.

Fortunately there’s only occasional need to access the host once it is up and running; it seems very stable and I doubt it will require much attention.

Ubuntu Linux: the agony and the ecstasy

Just after writing a positive review of Ubuntu Karmic Koala I noticed this piece on The Register: Early adopters bloodied by Ubuntu’s Karmic Koala:

Blank and flickering screens, failure to recognize hard drives, defaulting to the old 2.6.28 Linux kernel, and failure to get encryption running are taking their toll, as early adopters turn to the web for answers and log fresh bug reports in Ubuntu forums.

Did I get it wrong? Should I be warning users away from an operating system and upgrade that will only bring them grief?

I doubt it, though I see both sides of this story. I’ve been there: hours spent trying to get Bluetooth working on the Toshiba laptop on which I’m typing; or persuading an Asus Eee PC to connect to my wi-fi; or running dpkg-reconfigure xserver-xorg to try to get Compiz working or to escape basic VGA; or running Super Grub to fix an Ubuntu PC that will not boot; or trying to fix a failed migration from Lilo to Grub 2 on my Ubuntu server.

That said, I noticed that the same laptop which gave me Ubuntu Bluetooth grief a couple of years ago now works fine with a clean install, Bluetooth included. It’s even possible that my own contribution helped – that’s how Linux works – though I doubt it in this case.

I also noticed how Ubuntu 9.10 has moved ahead of Windows in several areas. Here are three:

  1. Cloud storage and synchronization

    Microsoft has Live Mesh. Typical Microsoft: some great ideas, I suspect over-engineered, requires a complex runtime to be downloaded and installed, not clear where it fits into Microsoft’s overall strategy, still in beta long after it was first trumpeted as a big new thing. So is this thing built into Windows 7? No way.

    By contrast Ubuntu turns up with what looks like a dead simple cloud storage and synchronization piece: web access, file system access, optional sharing, syncs files over multiple computers. Ubuntu One. I’ve not checked how it handles conflicts; but then Mesh was pretty poor at that too, last time I looked. All built into Karmic Koala: click, register, done.

  2. Multiple workspaces

    Apple and Linux have had this for years; I have no idea why it isn’t in Windows 7, or Vista for that matter. Incredibly useful – if the screen is busy but you don’t fancy closing all those windows, just switch to a new desktop.

  3. Application install

    This is so much better on Linux than on Windows or Mac; the only platform I know of that is equally user-friendly is the iPhone. OK, iPhone is better, because it has user ratings and so on; but Ubuntu is pretty good: Software Centre – browse – install.

I could go on. Shift-Alt-UpArrow, Ubuntu’s version of Exposé, very nice, not on Windows. And the fact that I can connect a file explorer over SSH using Places – Connect to Server, where on Windows I have to download and install WinSCP or the like.

Plus, let’s not forget that Ubuntu is free.

Of course you can make a case for Windows too. It’s more polished, it’s ubiquitous, app availability is beyond compare. It is a safe choice. I’m typing this on Ubuntu in BloGTK but missing Windows Live Writer.

Still, Ubuntu is a fantastic deal, especially with Ubuntu One included. I don’t understand the economics by which Canonical can give everyone in the world 2GB of free cloud storage; if it is hoping that enough people will upgrade to the 50GB paid-for version that it will pay for the freeloaders, I fear it will be disappointed.

My point: overall, there is far more right than wrong with Ubuntu in general and Karmic Koala in particular; and I am still happy to recommend it.