Category Archives: software

Windows Media Center madness

I use Windows Vista Media Center with a digital TV card. It had been working fine for a year, until last week.

Then it started playing up. Browsing TV recordings would raise an error: “A critical Windows Media Center process has failed. Please restart the computer and try again.” In addition, one particular TV program was reported as still recording, days after it had ended. Nothing was being written to the drive, but nothing else would record.

Needless to say, restarting the computer fixed nothing. For all the song and dance about self-healing applications, Windows Error Reporting, and the rest of it, the reality is that Google searches and fiddling with the registry and configuration files often remain the only way to fix things.

After a couple of false trails, I found the help I wanted on the Green Button site. Stop the Media Center services, delete the files recording.xml and recording.bak in C:\ProgramData\Microsoft\eHome\Recording, then restart Media Center. All is fine, except that any existing recording schedules are lost.
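
If you want to script the fix rather than click through the Services applet, here is a minimal VB.NET sketch. The service names ehSched and ehRecvr are my assumption for the Media Center scheduler and receiver services, and it needs to run with administrative rights.

    Imports System.IO
    Imports System.ServiceProcess ' add a reference to System.ServiceProcess.dll

    Module ResetRecordings
        Sub Main()
            ' Assumed Media Center service names: ehSched (scheduler), ehRecvr (receiver).
            For Each svc As String In New String() {"ehSched", "ehRecvr"}
                Using controller As New ServiceController(svc)
                    If controller.Status <> ServiceControllerStatus.Stopped Then
                        controller.Stop()
                        controller.WaitForStatus(ServiceControllerStatus.Stopped)
                    End If
                End Using
            Next
            ' Delete the scheduler's state files; existing schedules are lost.
            Dim folder As String = "C:\ProgramData\Microsoft\eHome\Recording"
            For Each name As String In New String() {"recording.xml", "recording.bak"}
                Dim target As String = Path.Combine(folder, name)
                If File.Exists(target) Then File.Delete(target)
            Next
        End Sub
    End Module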

A small price for domestic harmony.

Project Astoria a hit at TechEd

There is a buzz here at TechEd about Project Astoria. The reason is that it promises to simplify development of web applications that deal with data, which is most of them. Astoria is a REST API for ADO.NET, and hooks into the new Entity Framework object-relational mapping layer. Therefore, it solves two problems in one go.

Here’s a quick look at how it works. Let’s assume that you have a database which stores some information you want to present in your web application. Step one is to use Visual Studio to generate an Entity Data Model from your database.

Next, you tweak the model so that it looks as close as possible to the objects you are storing. The framework should deal with the complexities of mapping collections to linked tables and so on.

Now you create a new ADO.NET Data Service (sadly, this may well be the official name for Astoria), and point the service at your model. By default a new service does not expose any data, for security reasons, but by writing an InitializeService method you can configure which objects you want to publish.
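
Here is a minimal sketch of such a service in VB.NET. I am using the class and member names from the bits as they eventually shipped (DataService, SetEntitySetAccessRule); the CTP naming may differ, and NorthwindEntities stands in for whatever Entity Data Model you generated.

    Imports System.Data.Services

    ' NorthwindEntities is a placeholder for your generated Entity Data Model.
    Public Class ProductsService
        Inherits DataService(Of NorthwindEntities)

        ' Nothing is exposed by default; opt in each entity set explicitly.
        Public Shared Sub InitializeService(ByVal config As IDataServiceConfiguration)
            config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead)
            config.SetEntitySetAccessRule("Customers", EntitySetRights.All)
        End Sub
    End Class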

Run the service, and the objects in your model are now URL-addressable. It’s pretty flexible. For example:

[serviceurl]/Products : return all the products (yes, you can do paging).

[serviceurl]/Products[2] : return the product with an ID of 2.

[serviceurl]/Products[Color eq ‘Blue’] : return all the blue products.

[serviceurl]/Customers[2]/Orders : return all the orders for the customer with an ID of 2.

The data comes back in either ATOM or JSON format. Naturally, each element in the returned data includes its URL. Let’s say you have an AJAX application and are calling this service from JavaScript: iterating through the results and populating an HTML list or table is easy, especially as Astoria includes a client JavaScript library. There is also a client library for .NET applications. You can also add or update data with HTTP PUT, or remove it with DELETE.
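
From .NET you do not even need the client library; plain HttpWebRequest will do. A sketch, with a hypothetical service address and using the CTP’s bracket addressing syntax:

    Imports System.IO
    Imports System.Net

    Module AstoriaClientDemo
        Sub Main()
            ' Hypothetical address; substitute your own [serviceurl].
            Dim url As String = "http://localhost/MyData.svc/Products"

            ' GET the entity set, asking for JSON instead of the default ATOM.
            Dim request As HttpWebRequest = CType(WebRequest.Create(url), HttpWebRequest)
            request.Accept = "application/json"
            Using response As WebResponse = request.GetResponse()
                Using reader As New StreamReader(response.GetResponseStream())
                    Console.WriteLine(reader.ReadToEnd())
                End Using
            End Using

            ' Removing an entity is an HTTP DELETE against its own URL.
            Dim del As HttpWebRequest = CType(WebRequest.Create(url & "[2]"), HttpWebRequest)
            del.Method = "DELETE"
            del.GetResponse().Close()
        End Sub
    End Module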

You can extend your Astoria API by adding arbitrary methods that have the [WebGet] (or presumably [WebPut] or [WebDelete]) attribute. You can also add “interceptors” – code that gets called just before requests are executed, to enable validation or custom security.
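
Sketched in VB.NET, again using the attribute names that later shipped ([WebGet] from System.ServiceModel.Web, and [QueryInterceptor] for the interceptor); these members would sit inside the service class, and Product and CheapProducts are hypothetical:

    ' Inside the service class. Requires Imports System.Linq,
    ' System.Linq.Expressions and System.ServiceModel.Web.

    ' An arbitrary method, addressable as [serviceurl]/CheapProducts.
    <WebGet()> _
    Public Function CheapProducts() As IQueryable(Of Product)
        Return From p In CurrentDataSource.Products Where p.UnitPrice < 10
    End Function

    ' Runs before any query against Products; here it hides discontinued rows.
    <QueryInterceptor("Products")> _
    Public Function OnQueryProducts() As Expression(Of Func(Of Product, Boolean))
        Return Function(p) Not p.Discontinued
    End Function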

Presuming it works as advertised, Astoria is a more natural and concise way to handle data than SOAP web services, and easier than devising your own techniques. It should work well with other technologies such as Adobe’s Flex. It will play nicely with mash-ups, since many other services out there use ATOM or JSON. It is a natural fit with AJAX and will work well with Silverlight – see http://www.silverlightdata.com [Silverlight 1.1 Alpha required] for an example.

Astoria will not be finished in time for the initial release of Visual Studio 2008, though reading between the lines it might be done in time for SQL Server 2008. It will work with any database server for which there is an Entity Framework provider. I was assured that creating such a provider is relatively easy, if you already have an ADO.NET 2.0 provider, so it is reasonable to expect wide support.

I think this will be popular.

SQL Server 2008 will miss own launch party

Excellent session here at TechEd from Francois Ajenstat (Director of Product Management for SQL Server) and others on new features in SQL Server 2008. It looks good:

  • transparent data encryption
  • new policy-based administration
  • data compression that actually speeds performance
  • new datatypes, including FILESTREAM for queryable but unstructured data, and DATETIME2 for high-precision date/time
  • spatial data support, so you can query by distance, for example
  • a new entity data framework (not the same as LINQ, but works with it) for object-relational mapping
  • a new REST-based data API, code-named Astoria
  • richer reporting, including features acquired from Dundas, and removing the dependence on Internet Information Services

It’s a significant upgrade, but when do we get it? I had previously assumed that it would be no later than February 27th 2008, the announced launch date for Windows Server 2008, Visual Studio 2008 and SQL Server 2008. Visual Studio will actually be available to developers three months earlier, at the end of November. Not so SQL Server. Ajenstat said the version available at the launch will be a CTP (Community Tech Preview), with the final version coming in the “second quarter” – in other words, perhaps as late as June 2008.

By way of compensation, a new CTP due later this month should contain most of the new features, unlike the preview available now.

Ajenstat also noted that SQL Server “vNext” is set for 2010-2011. I’m betting on 2011 at the earliest.

Programming Slimserver from .NET

I did a short article for Personal Computer World (Christmas 2007 issue, just published) on how to use the Slimserver API from VB.NET. This example app is the result. It is sadly incomplete but could be a starting point for an app that controls Slimserver.

Why would you want to do this? Well, Slimserver has a good web UI but programmatic control is useful as well. One possibility would be to create a smart remote using a Windows Mobile device. Operating a Squeezebox using the supplied remote is fairly arduous. I also like the idea of a really capable rich client for Slimserver, with advanced search, playlist management and so on.
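
The route the article takes is Slimserver’s command-line interface, a plain-text, line-based protocol that by default listens on TCP port 9090 (replies come back URL-encoded). A bare-bones VB.NET sketch, assuming Slimserver is running locally with the default settings:

    Imports System.IO
    Imports System.Net.Sockets

    Module SlimDemo
        Sub Main()
            ' The CLI is a line-based text protocol on TCP port 9090 by default.
            Using client As New TcpClient("localhost", 9090)
                Dim stream As NetworkStream = client.GetStream()
                Dim writer As New StreamWriter(stream)
                writer.AutoFlush = True
                Dim reader As New StreamReader(stream)

                writer.WriteLine("player count ?")    ' how many players are connected?
                Console.WriteLine(reader.ReadLine())  ' reply echoes the command plus the answer

                writer.WriteLine("version ?")         ' ask for the server version
                Console.WriteLine(reader.ReadLine())
            End Using
        End Sub
    End Module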

Just in case you don’t know Slimserver … it’s a great way to manage your music on a network. It’s free, cross-platform, understands lots of music formats including FLAC (my favourite), and supports multi-room playback using either Softsqueeze (a free Java player) or the Squeezebox hardware. It can also transcode to an MP3 stream, so that all sorts of devices can play what’s on your Slimserver.

Two spins on Microsoft’s excellent quarter

Microsoft has delivered an excellent set of results, showing growth in pretty much all areas.

It seems to me that you can spin this in a couple of ways. First, you could argue that Microsoft is alive and well and still in the race. Certainly, with figures like these you can hardly suggest that it is out of the race.

Second, you could argue that the figures demonstrate how monopolies can continue to make good profits even when their products disappoint, especially in a buoyant market like computing.

The truth? Somewhere in between. It doesn’t matter how good the financials are: the disappointment with Vista is real. Personally I have Vista working fairly well, though annoyingly slowly at times, but I notice plenty of people advising one another to stick with XP, for performance and compatibility. Maybe the long-awaited SP1 will fix it, but some are now resigned to waiting for Windows 7 (you know, odd-number release theory) for a really good upgrade. Vista’s problems have created an opportunity for Apple and even Linux to grab some market share.

Other shadows hanging over Microsoft that come to mind:

  • Lack of clarity over Internet strategy
  • Continuing security problems centered on Windows (for whatever reason)
  • Losing the search wars
  • Governments mandating ODF
  • Apple’s increasing market share, especially among thought leaders
  • Bureaucracy and litigation
  • PR and image problems

On the plus side I’d mention the strength of the .NET platform and languages; Silverlight’s promise; and the fact that most people still want to use Microsoft Office rather than Open Office (in my experience).

I am absolutely not a financial analyst; but I observe that having a good quarter does not fix what strike me as deep-rooted problems. At the same time it is a reminder of Microsoft’s huge resources and entrenched position; that’s not going to go away quickly either.

TechEd Europe the week after next; no doubt some more Microsoft reflections then.

OOXML vs ODF: where next for interoperability?

Gary Edwards of the OpenDocument Foundation has a fascinating post on the importance of Microsoft Office compatibility to the success of the ISO-approved OpenDocument format.

It is in places a rare voice of sanity:

People continue to insist that if only Microsoft would implement ODF natively in MSOffice, we could all hop on down the yellow brick road, hand in hand, singing kumbaya to beat the band. Sadly, life doesn’t work that way. Wish it did.
Sure, Microsoft could implement ODF – but only with the addition of application specific extensions to the current ODF specification … Sun has already made it clear at the OASIS ODF TC that they are not going to compromise (or degrade) the new and innovative features and implementation model of OpenOffice just to be compatible with the existing 550 million MSOffice desktops.

More:

The simple truth is that ODF was not designed to be compatible – interoperable with existing Microsoft documents, applications and processes. Nor was it designed for grand convergence. And as we found out in our five years participation at the OASIS ODF TC, there is an across the boards resistance to extending ODF to be compatible with Microsoft documents, applications and processes.

Summary: in Edwards’ opinion, there are technical and political reasons why seamless ODF interop cannot be baked into Microsoft Office. Therefore the Foundation is now working on interop with the W3C’s Compound Document Format, about which I know little.

Surprisingly, Edwards also says that ODF will fail in the market:

If we can’t convert existing MS documents, applications and processes to ODF, then the market has no other choice but to transition to MS-OOXML.

Edwards is thoroughly spooked by the success of Sharepoint in conjunction with Exchange, and overstates his case:

If we can’t neutralize and re purpose MSOffice, the future will belong to MS-OOXML and the MS Stack. Note the MS Stack noticeably replaces W3C Open Web technologies with Microsoft’s own embraced “enhancements”. Starting with MS-OOXML/Smart Tags as a replacement for HTML-XHTML-RDF Metadata. HTML and the Open Web are the targets here. ODF is being used as a diversion from the real end game – the taking of the Internet.

I find this implausible. At the same time, I agree about the importance of interoperability with Microsoft Office.

I would also like clarification on the limitations of OOXML/ODF conversion. Here’s a technique that does a reasonable job: open OOXML in Microsoft Office and save to the binary Office format; open the binary file in Open Office and save as ODF. The same works in reverse. Not perfect perhaps, but a whole lot better than the Microsoft-sponsored add-in that works through XSLT. Could this existing Open Office code be made into a Microsoft Office plug-in, and if so, what proportion of existing documents would not be satisfactorily converted?
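
The first half of that round trip is easy to automate. A minimal VB.NET sketch driving Word 2007 via COM interop (the file paths are hypothetical, and you need a reference to the Microsoft.Office.Interop.Word assembly); the Open Office half could be scripted in a similar way through its UNO API:

    Imports Word = Microsoft.Office.Interop.Word

    Module OoxmlToBinary
        Sub Main()
            Dim app As New Word.Application()
            Try
                ' Open an OOXML document and save it down to the binary .doc format.
                Dim doc As Word.Document = app.Documents.Open("C:\docs\report.docx")
                doc.SaveAs(FileName:="C:\docs\report.doc", _
                           FileFormat:=Word.WdSaveFormat.wdFormatDocument)
                doc.Close()
            Finally
                app.Quit()
            End Try
        End Sub
    End Module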

Note that Sun’s ODF converter seems to be exactly this, except that it does not yet work with Office 2007. It could presumably be used with Office 2003 and the OOXML add-in, to provide a way to convert OOXML to ODF in a single application. Some further notes on Sun’s converter here.

Flash, Silverlight: the future of video games?

According to the BBC, gaming giant Electronic Arts is fed up with having to code the same game three, four or five times over. That’s the downside of the console wars – several incompatible systems.

The article says that streamed server-based games will be increasingly important.

A few observations. First, the PC is the nearest thing to an open platform right now, and it’s interesting that PC games typically cost around 30% less than those on the top consoles. For example, the hot new FIFA 08 typically sells for £40.00 on PS3 or Xbox 360, £25.00 on PC. It’s cheaper on DS or PSP, but must be considerably cut down on these low-powered devices. The Wii is somewhere in between.

Second, I’m writing this after seeing the amazing things being done with Flash. Microsoft’s Silverlight is also interesting in this context, as is Canvas 3D – OpenGL running in the browser.

That’s still three separate platforms; but since they are all cross-platform, there would be no necessity to code for more than one of them.

Third, Flash games are already very popular. If you calculate market share by time spent playing, I guess Flash games would already account for a significant share (I’d be interested to see those figures).

Fourth, the success of the Nintendo Wii proves that although geeks care deeply about who can shift pixels and calculate transforms the most quickly, the general public does not. All they want is a playable and enjoyable game.

All this suggests that the business model behind Microsoft’s and Sony’s console strategy is flawed. The idea is to buy market share by subsidizing the hardware, then profit from the software sales to your locked-in users. What if users can get the same games by subscribing, say, to a hypothetical EA Live, and play the games on a variety of devices? The money is still in the software, but there is no hardware lock-in. Prices could fall, and game developers could spend more time being creative and less time re-implementing the same game for different platforms.

Flash is actually in the PS3 and PSP, but appears to be an old version. If Microsoft isn’t thinking about Silverlight for the Xbox 360, then it should be. But if my logic is correct, then the investment Microsoft and Sony have put into game studios is actually more valuable, long-term, than the money they have put into hardware.

That said, the online experience is not yet good enough to threaten the consoles. I doubt it will be long though. A key point is hardware acceleration in the Flash player. H.264 video will be hardware-accelerated in the forthcoming Moviestar release of Flash 9. I am confident that a hardware accelerated gaming API will not be far behind.

Adobe shows how anything can be a web application

The closing session here at Adobe MAX Europe was a series of “sneak peeks” at forthcoming technology, presented with a disclaimer to the effect that they may never appear commercially. I am not going to do a blow-by-blow account of these, since it was mostly the same as was shown a couple of weeks ago in the USA, and you may as well read one of the accounts from there. For example, this one from Anara Media, if you can cope with its breathless enthusiasm.

So what was interesting? Overall, Adobe is doing a good job of challenging assumptions about the limitations of web applications, and I am not just talking about AIR. A few years ago you might have cited Photoshop as an example of an application that would always be desktop-only; yet this evening we saw Photoshop Express, a web-hosted Photoshop aimed at consumers, but with impressive image manipulation capabilities. For example, we saw how the application could turn all shades of one colour into those of another colour, so you can make a red car blue. Another application traditionally considered local-only is desktop publishing, yet here we saw a server version of InDesign controlled by a web UI written in Flex.

The truth is, given a fast Internet connection and a just-in-time compiler, anything can be a web application. Of course, under the covers huge amounts of code are being downloaded and executed on the client, but the user will not care, provided that it is a seamless and reasonably quick experience. Microsoft should worry.

We also got a glimpse into the probable future of Adobe Reader. This already runs JavaScript, but in some future version this runtime engine will be merged with ActionScript 3.0. In addition, the Flash player will be embedded into Adobe Reader. In consequence, a PDF or a bundle of PDFs can take on the characteristics of an application or an offline web site. A holiday brochure could include video of your destination as well as a live booking form. Another idea which comes to mind (we were not shown anything like this) is ad-supported ebooks where the ads are Flash videos. I can see the commercial possibilities, and there are all kinds of publications which could be enhanced by videos, but not everyone will welcome skip-the-intro annoyances arriving in PDF form.

This was a fun and impressive session, and well received by the somewhat bedazzled crowd of delegates.

How much “branded desktop presence” will you put up with?

We saw a lot of AIR applications at this morning’s keynote here at Adobe MAX Europe. AIR lets you take either Flash or JavaScript/HTML applications out of the browser and onto the desktop. The additional richness you get from running outside the browser is currently rather limited – we saw lots of drag-and-drop, because that is one of the few additional things you can do. However, AIR has a huge advantage for web vendors, because it puts their application and/or their content onto the user’s desktop. A great example, which we saw this morning, is an online shopping catalog developed by Allurent for the retailer Anthropologie. Here’s a quote from the case study, headed “Branded desktop presence”:

“The idea underlying our Adobe AIR applications is to enable retailers to push relevant content to the consumer and let the consumer consider it from the comfort of their desktop,” says Victoria Glickman Hodgkins, vice president of marketing at Allurent. “The retailer avoids mailing a circular or catalog to promote special items, and the consumer can interact with digital catalog information in highly engaging ways.”

Right. Now we realize how the web browser has actually protected us from intrusive commercial presence on our desktops. The beauty of browser-based applications is that they completely disappear when we navigate away from the page, with only perhaps a Favorites shortcut to take us back when we choose. An AIR application, by contrast, installs onto our machine, probably puts an icon on the desktop, and can run minimized and fire system notifications.

This isn’t a bad thing in itself, provided the user remains in control. But how many such applications will you want to install?

Put another way, AIR developers will need to exercise restraint in their efforts to inflict branded desktop presence on hapless users.

Matt Mullenweg’s less-is-better approach to software quality

Interview with Matt Mullenweg in the Guardian today. This was done at the Future of Web Apps conference. I enjoyed meeting him; he is open and articulate. I had not appreciated until now that WordPress.com took the opposite decision from Google over the issue of being blocked in countries, such as China, that are less permissive than the USA about what can be published. He found that by blocking certain words and tracking certain people, the site could be unblocked:

Google had the same decision, and they decided that being there was less evil than not being there, ultimately. For us, we decided that being there under those circumstances isn’t worth it. We’d rather not be there.

A blogging site is not the same as a search engine. It’s arguable that both sites made the right decision. Not easy.

I was also struck by Mullenweg’s espousal of an Apple-like minimalism in software design. He says WordPress has too many options. He was particularly critical of Open Office:

If you open up Open Office, look at the preference screen, there are like 30 or 40 pages of preferences. Stuff that you and I will never care about and should never care about.

I accept the main premise – software should just work. I understand the further implicit argument: that adding options tends to diminish software quality by adding complexity to the code. But it would be interesting to analyze some of the options in, say, Open Office, and find out why they are there and who is using them. Is having all these options tucked away really a bad thing, or is this really more about user interface design?