Category Archives: windows

The computer desktop is a faulty abstraction

In Windows 7, Microsoft has made further efforts to make the desktop more usable. There is a "peek" feature that makes all running applications temporarily transparent when you hover over the Show Desktop button. If you click the button the apps all minimize, so you can interact with the desktop, and if you click again they come back. Nice feature; but it cannot disguise the desktop’s inherent problems. Or should I say problem. The issue is that the desktop cannot easily be both the place where you launch applications, and the place where they run, simply because the running application makes the desktop partly or wholly inaccessible.

The Show Desktop button (sans Peek) is in XP and Vista too, and there is also the handy Desktop toolbar, which turns desktop shortcuts into a Taskbar menu. All worthy efforts; but they are workarounds for the fact that having shortcuts and gadgets behind your running applications is a silly idea. The desktop is generally useful only once per session – when you start up your PC.

In this respect, the computer desktop differs from real desktops. Cue jokes about desks so cluttered that you cannot see the surface. Fair enough, but on my real desktop I have a telephone, I have drawers, I have an in-tray and out-tray, I have pen and paper, and all of these things remain accessible even though I’m typing. The on-screen desktop is a faulty abstraction.

The inadequacy of the desktop is the reason that the notification area (incorrectly known as the system tray) gets so abused by app developers – it’s the only place you can put something that you want always available and visible. In Windows 7 the taskbar is taking on more characteristics of the notification area, with icons that you can overlay with activity indicators like the IE8 download progress bar.

It’s true that if you don’t run applications full-screen, then you can move them around to get desktop stuff into view. I find this rarely works well, because I have more than one application visible, and behind one application is another one.

Why then do OS designers persist with the desktop idea? It’s possibly because it makes users feel more comfortable. I suspect it is a Skeuomorph (thanks to Phil Thane for the word) – “a derivative object which retains ornamental design cues to structure that was necessary in the original”. An example is that early electric kettles retained a squat shape with a large base, even though the logical shape for an electric kettle is a slim jug, enabling small quantities of water to cover the element. The reason for the squat shape was to spread the heat when boiling water on a stove. It took years before “jug” kettles caught on.

It is better to call the computer desktop a workspace, and to forget the idea of putting shortcuts and gadgets onto it. Which reminds me: why does Windows still not surface multiple desktops (or workspaces) as is common on Linux, and also implemented in Mac OS X Leopard as Spaces? Windows does have multiple desktops – you see one every time UAC kicks in with its permission dialog on Vista, or when using the Switch User feature – but they are not otherwise available.
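As a quick illustration that the plumbing already exists, here is a minimal C sketch – my own example, not anything Microsoft ships – using the Win32 CreateDesktop and SwitchDesktop calls, the same desktop objects that sit behind UAC’s secure desktop and Fast User Switching. It creates a second, empty desktop, displays it for five seconds, and then switches back:

// Sketch only: create a second Win32 desktop, show it briefly, switch back.
// Error handling is kept to a bare minimum.
#include <windows.h>

int main(void)
{
    // Handle to the desktop currently receiving input, so we can switch back.
    HDESK hOriginal = OpenInputDesktop(0, FALSE, DESKTOP_SWITCHDESKTOP);

    // Create (or open) a named desktop in the current window station.
    HDESK hScratch = CreateDesktopW(L"ScratchDesktop", NULL, NULL, 0,
                                    GENERIC_ALL, NULL);
    if (hOriginal == NULL || hScratch == NULL)
        return 1;

    SwitchDesktop(hScratch);   // the screen now shows the new, empty desktop
    Sleep(5000);               // ...for five seconds
    SwitchDesktop(hOriginal);  // back to the familiar desktop

    CloseDesktop(hScratch);
    CloseDesktop(hOriginal);
    return 0;
}

Of course, nothing useful runs on that second desktop unless you launch a shell onto it, which is presumably part of the reason Windows has never surfaced the feature to ordinary users.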

I’m also realising that sidebar gadgets were a missed opportunity in Vista. Microsoft made two big mistakes with the sidebar. The first was to have it stay in the background by default. Right-click the sidebar and check “Sidebar is always on top of other windows”. Then it makes sense; it behaves like the taskbar and stays visible. Not so good for users with small screens; but they could uncheck the box. I know; you don’t like losing the screen space. But what if the gadgets there were actually useful?

The other mistake was to release the sidebar with zero compelling gadgets. Users took a look, decided it was useless, and ignored or disabled it. That’s a shame, since it is a more suitable space for a lot of the stuff that ends up in the notification area. If Microsoft had put a few essentials there, like the recycle bin, volume control, and wi-fi signal strength meter; and if the Office team had installed stuff like quick access to Outlook inbox, calendar and alerts, then users would get the idea: this stays visible for a good reason.

In Windows 7, gadgets persist but the sidebar does not. Possibly a wrong decision, though apparently there is a hack to restore it. It’s not too late – Microsoft, how about an option to have the old sidebar behaviour back?

I’d also like a “concentrate” button. This would hide everything except the current application. Maximized applications would respond by filling the entire screen (no taskbar or sidebar), save for an “unconcentrate” button which would appear at bottom right. This would be like hanging “Do not disturb” outside your hotel room, and would suppress all but the highest priority notifications (like “your battery has seconds to live”).

My suggestion for Windows 8 and OS 11 – ditch the desktop, make it a workspace only. Implement multiple workspaces in Windows. And stop encouraging us to clutter our screens with desktop shortcuts which, in practice, are very little use.

SharePoint – the good, the bad and the ugly

I’ve been messing around with SharePoint. When it works, it is a beautiful product. It is a smart file system with versioning, check-in and check-out, point-and-click workflow (eg document approval), offline support via Outlook, direct open and save from Office 2007, and more. It is an instant intranet with blogs, wikis, discussion forums, surveys, presence information, easy page authoring, and more. It is an application platform with all the features of ASP.NET combined with those of SharePoint. It is a content management system capable of supporting a public web site as well as an intranet. It is a search server capable of crawling the network, with a good-looking and sophisticated web UI. And in the high-end Enterprise version you get a server-side Excel engine and all sorts of Business Intelligence features. Fantastic.

Even better, the base product – Windows SharePoint Services 3.0 – comes free with Windows server. Search Server Express is also free and delivers all the search capability a small organization is likely to need.

What’s wrong with this picture? Here’s a few things:

  • Gets very expensive once you move to MOSS (Microsoft Office SharePoint Server) rather than the free WSS.
  • Deeply confusing. Working out the difference between WSS and MOSS is just the start. If you want to deploy it, you had better learn about site collections, applications, operations, farm topologies, web parts, workspaces, and the rest.
  • Complex to deploy. Make sure you read Planning and Architecture for Office SharePoint 2007 Part 1 (616pp); the good news is that part 2 is only 52pp. SharePoint is all that is bad about Microsoft deployments: a massive product with many dependencies, including IIS, ASP.NET and the .NET Framework, SQL Server in particular configurations, and of course hooks with Office 2007, Exchange and Active Directory.
  • Generates horrible source code. Try opening a page in SharePoint designer and viewing the source. Ugh.
  • Challenging to back up and restore, thanks to being spread across IIS and SQL Server.

I am out of sorts with SharePoint right now, after a difficult time with Search Server Express (SSX). I have a working WSS 3.0 installation, and I tried to install SSX on the same server. My setup is just slightly unusual, since I have both SharePoint and a default web site on port 80, using the host headers feature in IIS to direct traffic. The SSX install seemed to proceed reasonably well, except for two things.

First, I puzzled for some time over what account to use as the default account for services. Setup asks you to specify this; and the documentation is a classic case of unhelpful help:

In the Default Account For Services section, type the user name and password for the default services account.

In the Search Center Account section, type the user name and password for the account for the application pool identity of the default Search Center site

Well, thanks, but I could have figured out that I have to type a user name where it says “User name”. But I would like help on how to create or select a suitable account. What permissions does it need? What are the security implications? The temptation is to use an administrator account just because it will most likely work.

Then there was the problem of creating the search site application manually. I had a go at this, helped by these notes from Ian Morrish. I set up a crawl rule and successfully indexed some content. Then I ran a search, only to be greeted by this error:

Your license for Microsoft Search Server has expired.

Well hang on, this is Search Server Express and meant to be free! A quick Google turns up this depressing recommendation from Microsoft:

To solve your immediate problem, however, it is suggested you uninstall WSS, MSS Express, repave your machine with a clean OS, and reinstall only MSS Express (WSS is installed with it).

Thanks but no thanks. See this thread for a more informative analysis. The user yanniemx reckons, after 10 reinstalls, that he has worked it out:

I realized it was due to using the Express version of Search and then not using the SQL install that is included in the install.  From what I can tell if you use another SQL instance it thinks you are using multiple servers and that is not allowed for the Express version.

I think I’ll just uninstall. I did another install of the full MOSS on its own server, and that one works fine. Running on a virtual machine is another good idea.

I hate the way certain Microsoft server products like to be installed on their own dedicated server. That makes sense in an Enterprise, but what about small organizations? I don’t see any inherent reason why something like SSX shouldn’t install neatly and in a reasonably isolated manner alongside other products and web applications. Equally, I am sure it can be done, just as I used the host headers trick to get WSS installed alongside another web site on port 80; but working out how to do it can be a considerable effort.

Performance: Windows 7 faster than Vista, Vista faster than XP

The second part of that statement interests me as much as the first. ZDNet’s Adrian Kingsley-Hughes ran some informal tests on XP vs Vista vs Windows 7 beta 1 (as leaked, I presume), ranking them in order for a number of tasks. The results show that in general XP is slower than either Vista or Windows 7 on an AMD Phenom system with 4GB RAM. Even on a Pentium dual core with just 1GB, which should favour XP, Vista was neck-and-neck with XP for speed (score of 57 vs 56, where lower is better). Windows 7 came top in most of the tests.

I’ve done enough of these kinds of tests myself to know some of the pitfalls. Kingsley-Hughes doesn’t mention whether UAC was on or off in Vista, whether Aero was enabled, how many background processes were running on each machine, or how many times the tests were repeated and how much the results varied. It would also be interesting to see actual timings, rather than simple rankings. Finally, the tests seem overly weighted towards file I/O.

I’d also be intrigued to see a comparison of Vista as on first release vs a fully patched system.

Still, this does suggest (as I’ve argued before) that Vista is better than its reputation; and it is wrong to assume that XP will generally out-perform it.

That said, let’s not forget the dire performance of those early Vista laptops with 1GB RAM, a full helping of third-party foistware, and Outlook 2007. Even today, Outlook 2007 can kill the performance of a high-end system, as a recent comment on this blog attests.

Windows Azure: since PDC, how is it going?

At the Professional Developers Conference 2008, held at the end of October 2008, Microsoft unveiled Windows Azure, its new cloud platform. I was there, and got the impression that this is a big deal for Microsoft; arguably the future of the company depends on it. It is likely that the industry will reduce its use of on-premise servers in favour of hosted applications, and if Microsoft is to preserve its overall market share it needs a credible cloud platform.

That was nearly two months ago. What’s been the developer reaction, and how is it going with the early tech previews made available at PDC? It’s hard to tell; but there is less public activity than I expected. On the official Azure forums there are just 550 messages at the time of writing; and glancing through them shows that many of them are from people simply having difficulty signing up. One of the problems is that access to the preview is limited by developer tokens of various types, and although Microsoft gave the impression at PDC that all attendees would have these, that has not really been the case. Those who attended hands-on labs at PDC got tokens there; others have had to apply and wait like everyone else. Part of the reason for lack of activity may just be that not many have been able to get in.

There are other issues too. I’ve spent some time trying out Live Framework and building applications for Live Mesh. I’ve written this up separately, in a piece that will be posted shortly. However, I found it harder than I expected to get good information on how to proceed. There is plenty of high-level marketing, but hands-on documentation is lacking. Azure may be different – though I was interested to find another user with similar frustrations (it’s worth reading this thread, as Microsoft’s moderator Yi-Lun Luo gives a handy technical outline of Azure and Live Services).

Still, let’s bear in mind that PDC is where Microsoft shares early technical information about the Windows platform, which is subject to change. Anyone who built applications for the preview Windows Longhorn code doled out at PDC 2003 (Paul Thurrott’s report is a reminder of what it felt like at the time) would have been in for some disappointment – Longhorn was both greatly delayed and much altered for its eventual release as Windows Vista.

It’s possible then that most developers are wisely waiting for the beta of Azure before doing serious experimentation. Alternatively – the bleakest outcome for Microsoft – they are ignoring Azure and presuming that if and when they do migrate applications to the cloud they will use some other platform.

Nevertheless, I’d suggest that Microsoft’s evangelism of Azure has been poor since PDC. There is more buzz about other things presented there – including Windows 7, which in contrast to Azure seems nearly done.

Update

Matt Rogers from Microsoft comments below that the service is not going to change radically between now and general release. He claims that feedback is extensive, but not evident in the online forums because it comes from other sources – he told me on Twitter that “we are getting much of it directly through relationships with customers, local user group meetings and through our evangelists”.

Maarten Balliauw has converted an application to Azure and written up the experience on his blog. He is using Azure Table Storage for data and Live ID for authentication. He says:

Overall, Microsoft is doing a good job with Azure. The platform itself seems reliable and stable, the concept is good.

Unfortunately the app itself does not work at the time of writing.

From the archives: Mark Anders and Scott Guthrie on ASP+

My editor at The Register asked me if I had any interviews that would be fun to dig out for a retrospective piece. This one is from September 2000, shortly after the announcement of the .NET Framework, where Microsoft’s Mark Anders and Scott Guthrie talk to me about ASP+, the name for ASP.NET when it was in preview.

Listening to the whole interview was a little frustrating, because most of the time I asked questions that were interesting at the time, like the relationship between ASP.NET and COM, but not so much now. I was reminded though that Guthrie gave an impressive demo of what we now call AJAX, where updates to a web page are processed on the client, and described to me how it worked.

The pair also enthused about hosting Windows Forms controls in the browser, one of the .NET ideas that did not really become practical until the release of Silverlight. The full WPF (Windows Presentation Foundation) also works well in the browser, but most web developers rule it out because it is Windows only.

As it turned out, AJAX might never have taken off without the work of Google, while Silverlight now looks like a reaction to Flash. I suspect that Microsoft found it difficult to evolve these ideas into full products because it clung to the idea of a Windows-centric Internet, where Windows rather than the browser is the rich client.

Guthrie is now Corporate VP, .NET Developer Division at Microsoft, while Anders is at Adobe where he has been working on a tool for the Flash platform called Catalyst, previously known as Thermo.

Service triggers: an attempt to reduce bloat in Windows 7

I’ve been reading through the Windows 7 Developer Guide. I like this document; it is tilted more towards information than hype, and is readable even for non-developers. There are things mentioned which I had not spotted before.

One example is triggers in the service control manager. There was actually a PDC session which covered this, among other things, under the unexciting title Designing Efficient Background Processes (PowerPoint). If you check out the slides, you’ll see that this is actually something significant for Windows users. It is an attempt to reduce all that stuff that runs whether or not you need it, increasing boot time and slowing performance. Apparently some people are so upset with the time it takes Windows to boot that they are threatening to sue; so yes, this does matter.

Services are applications that run in the background, usually without any visible interface. They consume system resources, so it makes sense to run them only when needed. Unfortunately, many services run on a “just in case” basis. For example, if I check the services on this machine I see I have one running called Apple Mobile Device, just in case I might connect one. It is using 4MB of RAM. However, I never connect an Apple device to this machine. I’m sure it was installed by iTunes, which I rarely use, though I like to keep up-to-date with what Apple is doing. So every time I start Windows this thing also starts, running uselessly in the background.

According to Vikram Singh, who presented the PDC session, adding 10 typical third-party services to a clean Vista install has a dramatic effect on performance:

  • Boot time: up by 87% (24.7 to 46.1 seconds)
  • CPU time when idle: up by six times (to 6.04%)
  • Disk Read Count: up by three times (from 10,192 to 31,401 in 15 seconds)

Service triggers are an attempt to address this, by making it possible to install services that start in response to specific events, instead of always running “just in case”. Four trigger types are mentioned:

  • On connection of a certain class of device
  • On connection to a Windows domain
  • On group policy refresh
  • On connection to a network (based on IP address change)

In theory then, Apple can rewrite iTunes for Windows 7, so that the Apple Mobile Device service only starts when an Apple device is connected. A good plan.
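For the curious, here is a rough C sketch of what that registration could look like, using the SERVICE_CONFIG_TRIGGER_INFO option for ChangeServiceConfig2 covered in the PDC material. The service name and hardware ID below are invented placeholders; the structures and constants come from the Windows 7 SDK, and the call would normally be made once, elevated, at install time against a demand-start service:

// Sketch: register a device-arrival start trigger for an installed service.
// "MyDeviceService" and the hardware ID are placeholders, not real names.
#define _WIN32_WINNT 0x0601        // Windows 7: needed for the trigger structures
#include <windows.h>

// Device interface class GUID for USB devices (GUID_DEVINTERFACE_USB_DEVICE).
static GUID usbInterfaceClass =
    { 0xA5DCBF10, 0x6530, 0x11D2,
      { 0x90, 0x1F, 0x00, 0xC0, 0x4F, 0xB9, 0x51, 0xED } };

int main(void)
{
    // Placeholder hardware ID string identifying the device of interest.
    WCHAR hardwareId[] = L"USB\\VID_1234&PID_5678";

    SERVICE_TRIGGER_SPECIFIC_DATA_ITEM item = {0};
    item.dwDataType = SERVICE_TRIGGER_DATA_TYPE_STRING;
    item.cbData     = (DWORD)sizeof(hardwareId);
    item.pData      = (PBYTE)hardwareId;

    SERVICE_TRIGGER trigger = {0};
    trigger.dwTriggerType   = SERVICE_TRIGGER_TYPE_DEVICE_INTERFACE_ARRIVAL;
    trigger.dwAction        = SERVICE_TRIGGER_ACTION_SERVICE_START;
    trigger.pTriggerSubtype = &usbInterfaceClass;
    trigger.cDataItems      = 1;
    trigger.pDataItems      = &item;

    SERVICE_TRIGGER_INFO info = {0};
    info.cTriggers = 1;
    info.pTriggers = &trigger;

    SC_HANDLE scm = OpenSCManagerW(NULL, NULL, SC_MANAGER_CONNECT);
    SC_HANDLE svc = OpenServiceW(scm, L"MyDeviceService", SERVICE_CHANGE_CONFIG);
    if (scm == NULL || svc == NULL)
        return 1;

    // From now on the service control manager starts the service only when
    // a matching device interface arrives, instead of at every boot.
    ChangeServiceConfig2W(svc, SERVICE_CONFIG_TRIGGER_INFO, &info);

    CloseServiceHandle(svc);
    CloseServiceHandle(scm);
    return 0;
}

Because the trigger lives in service configuration rather than in the service’s own code, existing services could in principle be converted with a change to their installers rather than a rewrite.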

Now, I can think of three reasons why this might not happen. First, inertia. Second, compatibility. This means coding specifically for Windows 7, whereas it will be easier just to do it the old, compatible way. Third, I imagine this would mean faster boot, but slower response when connecting the device. Apple (or any third party) might think: the user will just blame Windows for slow boot, but a slow response when connecting the device will impact the perceived performance of our product. So the service will still run at start-up, just in case.

Still, I’m encouraged that Microsoft is at least thinking about the problem and providing a possible solution. We may also benefit if Microsoft tweaks some of its own Windows services to start on-demand.

Microsoft plans free anti-malware

Microsoft will be offering a free anti-malware suite codenamed “Morro”, from the second half of 2009, according to a press release:

This streamlined solution will … provide comprehensive protection from malware including viruses, spyware, rootkits and trojans. This new solution, to be offered at no charge to consumers, will be architected for a smaller footprint that will use fewer computing resources, making it ideal for low-bandwidth scenarios or less powerful PCs.

It’s a good move. Here’s why:

  • The current situation is calamitous. Even users with fully paid-up anti-virus solutions installed get infected, as I recently saw for myself. PC security is ineffective.
  • The practice of shipping PCs with pre-installed anti-virus that has a trial subscription is counter-productive. There will always be a proportion of users who take the free trial and do not renew, ending up with out-of-date security software. A free solution is better – several are available now – if only because it does not expire.
  • Microsoft wants to compete more effectively with Apple. It is addressing an extra cost faced by PC users, as well as (possibly) the poor user experience inherent in pre-installed anti-virus trialware.
  • The performance issue is also important. Anti-malware software is a significant performance drag. Microsoft is the vendor best placed to implement anti-malware that minimizes the drag on the system.

Counter-arguments:

  • Only specialist companies have the necessary expertise. I don’t believe this; Microsoft’s investment in security is genuine.
  • Single-supplier security gives malware a fixed target, easier to bypass. There’s some merit to this argument; but it is weakened by the fact that the current multi-vendor scenario is clearly failing. Further, the Mac is a fixed target that does not appear to be easy to bypass.

All of this is hot air compared to the real challenge, which is securing the operating system. Vista is progress; Windows 7, from my first impressions, is not much different.

Why not just use another operating system? There’s a good case for it; ironically, the theory that Windows insecurity is largely a product of its dominance can only be properly tested when an alternative OS is equally or more popular. If people continue switching to Macs, perhaps that will happen some day. Windows is still hampered by its legacy, though my impression is that Vista’s UAC is having its intended effect: fewer applications now write to system areas, bringing us closer to the day when security can be tightened further.

What about business systems? This is one area that needs clarification. Microsoft says Morro is only for consumers. Why should businesses have to pay for a feature that consumers get for free? On the other hand, some equivalent initiative may be planned for business users.

Reasons to love Linux #1: package management

I posted recently about a difficult Ubuntu upgrade, drawing the comment “What do you prefer to do on Linux that you don’t on Windows?”

Today I patched the Debian server which runs this blog. APT upgraded the following applications:

  • MySQL 5
  • Apache 2.2
  • Clam AntiVirus
  • Time zone data (tzdata)

Some of these involve several packages, so 16 packages were updated.

Bear in mind that this is a running system, and that MySQL and Apache are in constant heavy use, mostly by WordPress.

I logged on to the terminal and typed a single command:

apt-get upgrade

The package manager took less than a minute to upgrade all the packages, which had already been downloaded via a scheduled job. Services were stopped and started as needed. No reboot needed. Job done.

I guess a few people trying to access this site got a slow response, but that was all.

Now, how long would it take to upgrade IIS, SQL Server and some server anti-virus package on Windows? What are the odds of getting away without a restart?

Admittedly this is not risk-free. I’ve known package management to get messed up on Linux, and it can take many hours to resolve – but this usually happens on experimental systems. Web servers that stick to the official stable distribution rarely have problems in my experience.

I realise that the comment really referred to desktop Linux, not server, and here the picture is less rosy. In fact, this post was inspired by a difficult upgrade, though in this case it was the entire distribution being updated. Even on the desktop though, the user experience for installing updates and applications is generally much better.

Let’s say I’m looking for an image editor. I click on Add/Remove and type a search:

I like the way the apps show popularity. I’d like a few more things like ratings and comments; but it’s a start. Inkscape looks interesting, so I check it, click Apply Changes, and shortly after I get this dialog:

I double-click, and there it is:

I admit, I did take a few moments to download an example SVG file from the W3C, just to make the screen grab look better. But provided you have broadband, and the app you want is in the list, it is a great experience.

Windows Vista has had a go at this. From Control Panel – Programs and Features you can get to Windows Marketplace, where you might search and find something like The Gimp (free) or Sketsa SVG Editor (paid). I tried The Gimp, to keep the comparison like-for-like. I had to sign in with a Live ID even though it is free. I went through several web dialogs and ended up with a download prompt for a zipped setup. That was it.

In other words, I went through all these steps, but I still do not have The Gimp. OK, I know I have to extract the ZIP and run the setup; but Ubuntu’s Add/Remove spares me all that complication; it is way ahead in usability.

App Store on the iPhone also has it right. For the user, that is. I detest the lock-in and the business model; but usability generally wins. The online stores on games consoles, like Xbox Live Marketplace, are good as well. I guess one day we will install or buy most applications this way.

Windows is an adventure game

Many video games in the adventure genre are in essence collecting games. You have to get the gem to open the gate, and to get the gem you need the three pieces of tablet, etc etc.

Windows is like this sometimes. I want to try Windows Azure. I need SQL Express. I download SQL Express 2008. Try to run, it tells me I need Windows Installer 4.5. I download Windows Installer 4.5. Try to run, it tells me “The system cannot find the file specified.”

This makes me pause. Is it a broken download, or is my system broken? Maybe it’s because I downloaded to a network drive. Yup – copy it to a local drive, and it runs fine. This is the adventure game equivalent of a puzzle.

Now the dialog says, “You must restart your computer for the updates to take effect.” To be continued, then.

Shame Microsoft hasn’t (as far as I know) issued a VM image with all this ready to go.

Code for Mac Cocoa in Visual Studio – surprised to see this?

I grabbed this screenshot from a preview I have just installed:

Cocoa app in Visual Studio

It comes from Delphi Prism, a new product from Embarcadero/Codegear which lets you code for .NET using the Delphi language, an object-oriented version of Pascal. The product is not as new as it first appears. It is based on an existing product from RemObjects, called Oxygene, which it now replaces.

Here’s the story in a nutshell. 2003: Borland, the company which created Delphi, decides (rightly) that .NET is here to stay, and releases Delphi 8, a pure .NET version. Nobody wants it, because it has no advantages to speak of over Win32 Delphi (which is faster), or C#, which is the Microsoft .NET language.

At the time, some voices muttered that what Borland should do was integrate Delphi into Visual Studio, rather than build its own .NET IDE. One was Marc Hoffman at RemObjects, only he did more than mutter: his company developed its own implementation of Delphi Pascal for Visual Studio, called Chrome.

Borland soldiers on with Delphi 2005, which does both .NET and Win32 in a single IDE. Developers are happy to have a new Win32 Delphi, but most still don’t see the point of the .NET stuff. Further, Delphi 2005 is buggy; many stick with Delphi 7. Next comes Delphi 2006: more of the same, but less buggy.

There are a couple of problems with Delphi’s .NET support. First, it is always out of date compared to Microsoft’s .NET tools. Second, it has component library schizophrenia. There’s VCL for .NET, based on Delphi’s component and GUI library, but that’s not compatible with .NET components built for Windows Forms. There’s Windows Forms, but that’s not compatible with existing Delphi code. Borland decides to deprecate use of Delphi .NET with Windows Forms. This is really for VCL developers, it says.

Next comes Delphi 2007. Nice product, but where’s .NET? Gone. Nobody seems to mind [and it turns up later in RAD Studio 2007*]. Delphi 2009, gone again. But now there’s Prism, and it is a complete U-turn. Forget VCL.NET. It uses standard .NET libraries, runs in Visual Studio, supports Windows Forms, ASP.NET, WPF, and soon Silverlight. Oh, and it’s based on what that other guy did back in 2004, with some Borland Codegear Embarcadero technology thrown in: dbExpress database framework, client support for DataSnap multi-tier applications, and the Blackfish pure .NET database engine.

Very good; but there’s still that awkward question: why not use C#? The answer, I guess, being either that you love coding in the Delphi language, or you want to use one of the Delphi-compatible libraries.

Or that you want to use Mono, which of course is what enables those tasty Mac options in the New Project dialog above. You can also use C# with Mono – possibly you should, since it is Mono’s core language – but in Prism it comes nicely integrated into Visual Studio. Well, somewhat nicely. In practice there are a few extra steps you need to take to get it working. The recommendation is to run Visual Studio in a VM on a Mac, since Windows cannot run Cocoa applications. And you’re going to be using Apple’s Interface Builder; there’s no GUI designer in Visual Studio itself.

Hardly enterprise-ready then; but still an intriguing development.

*Added correction thanks to John Moshakis’ comment below.