
What happens when you click a Windows 7 taskbar icon?

While working on a Windows 7 piece recently, I tried to write a description of what happens when you click on an icon in the Windows 7 taskbar. Trouble is, there are so many contextual variations that it is hard to describe concisely. Here’s what I’ve got so far; I may have missed a few things.

Left click or primary mouse button:

If the app to which the icon points is not running, it runs and comes to the front.

If the app is running and there is only one instance, it comes to the front.

If the app is running and there are two or more instances (these might not be real instances; they could be several Word documents or tabs in IE), then preview windows appear – provided that Aero is enabled. None of them comes to the front until you take further action. If you move the mouse over a preview window, the associated app window comes to the front temporarily and other windows go transparent; if you move the mouse away from the preview without clicking, it reverts to the background.

This can be counter-intuitive – if you move your mouse over the seemingly activated window without clicking the preview first, it disappears because it does not really have the focus.

Previews can contain their own controls such as buttons – so strictly, clicking a preview will only bring its main app window to the front if the click is not overridden by a button or other control on the preview.

Hovering the mouse

Hovering the mouse over an icon is almost the same as clicking: it raises the previews, though it will do so even if there is only one instance. The exception is when previews are already locked to another icon. Clicking an icon locks its previews into view, and they remain until you click somewhere else.

Other variations:

A right-click (or secondary mouse button) raises the jump list – a contextual menu that app developers can customize.

Click, hold and drag up also raises the jump list. This is really aimed at touch users.

A middle-click (or mouse wheel click) or SHIFT+click starts a new instance of the application. Users may have trouble with this, since it is not obvious how to start a new instance. There is also a link on the jump list, but again it is not really intuitive.

SHIFT+CTRL+Click starts a new instance with elevated permissions, subject to an elevation prompt if UAC is enabled.

The Show Desktop icon is special. If you hover the mouse there, app windows go transparent so you can see the desktop. If you click there, all apps minimize. If you click again without activating any app, all the apps that were minimized are restored; however, if you restore an individual app first, the Show Desktop icon loses its memory and reverts to minimizing all apps.

Confusing or intuitive?

Describing all the variations makes it sound confusing, but in practice you soon learn what to do. I can see the reasoning behind the behaviour. I do find it odd that left-clicking an icon doesn’t necessarily bring the application to the front – but Windows doesn’t know which instance you want. The full-window preview could do with a special outline or a degree of opacity to show that it is not fully activated. You can turn both features off from taskbar properties if you prefer. Overall the behaviour is OK and a step up from Vista.

The computer desktop is a faulty abstraction

In Windows 7, Microsoft has made further efforts to make the desktop more usable. There is a "peek" feature that makes all running applications temporarily transparent when you hover over the Show Desktop button. If you click the button the apps all minimize, so you can interact with the desktop, and if you click again they come back. Nice feature; but it cannot disguise the desktop’s inherent problems. Or should I say problem. The issue is that the desktop cannot easily be both the place where you launch applications, and the place where they run, simply because the running application makes the desktop partly or wholly inaccessible.

The Show Desktop button (sans Peek) is in XP and Vista too, and there is also the handy Desktop toolbar which makes desktop shortcuts into a Taskbar menu. All worthy efforts, but all workarounds for the fact that having shortcuts and gadgets behind your running applications is a silly idea. The desktop is generally useful only once per session – when you start up your PC.

In this respect, the computer desktop differs from real desktops. Cue jokes about desks so cluttered that you cannot see the surface. Fair enough, but on my real desktop I have a telephone, I have drawers, I have an in-tray and out-tray, I have pen and paper, and all of these things remain accessible even though I’m typing. The on-screen desktop is a faulty abstraction.

The inadequacy of the desktop is the reason that the notification area (incorrectly known as the system tray) gets so abused by app developers – it’s the only place you can put something that you want always available and visible. In Windows 7 the taskbar is taking on more characteristics of the notification area, with icons that you can overlay with activity indicators like the IE8 download progress bar.

It’s true that if you don’t run applications full-screen, then you can move them around to get desktop stuff into view. I find this rarely works well, because I have more than one application visible, and behind one application is another one.

Why then do OS designers persist with the desktop idea? It’s possibly because it makes users feel more comfortable. I suspect it is a skeuomorph (thanks to Phil Thane for the word) – “a derivative object which retains ornamental design cues to a structure that was necessary in the original”. An example is that early electric kettles retained a squat shape with a large base, even though the logical shape for an electric kettle is a slim jug, enabling small quantities of water to cover the element. The reason for the squat shape was to spread the heat when boiling water on a stove. It took years before “jug” kettles caught on.

It is better to call the computer desktop a workspace, and to forget the idea of putting shortcuts and gadgets onto it. Which reminds me: why does Windows still not surface multiple desktops (or workspaces) as is common on Linux, and also implemented in Mac OS X Leopard as Spaces? Windows does have multiple desktops – you see one every time UAC kicks in with its permission dialog on Vista, or when using the Switch User feature – but they are not otherwise available.

I’m also realising that sidebar gadgets were a missed opportunity in Vista. Microsoft made two big mistakes with the sidebar. The first was to have it stay in the background by default. Right-click the sidebar and check “Sidebar is always on top of other windows”. Then it makes sense; it behaves like the taskbar and stays visible. Not so good for users with small screens; but they could uncheck the box. I know; you don’t like losing the screen space. But what if the gadgets there were actually useful?

The other mistake was to release the sidebar with zero compelling gadgets. Users took a look, decided it was useless, and ignored or disabled it. That’s a shame, since it is a more suitable space for a lot of the stuff that ends up in the notification area. If Microsoft had put a few essentials there, like the recycle bin, volume control, and wi-fi signal strength meter; and if the Office team had installed stuff like quick access to Outlook inbox, calendar and alerts, then users would get the idea: this stays visible for a good reason.

In Windows 7, gadgets persist but the sidebar does not. Possibly a wrong decision, though apparently there is a hack to restore it. It’s not too late – Microsoft, how about an option to have the old sidebar behaviour back?

I’d also like a “concentrate” button. This would hide everything except the current application. Maximized applications would respond by filling the entire screen (no taskbar or sidebar), save for an “unconcentrate” button which would appear at bottom right. This would be like hanging “Do not disturb” outside your hotel room, and would suppress all but the highest priority notifications (like “your battery has seconds to live”).

My suggestion for Windows 8 and OS 11 – ditch the desktop, make it a workspace only. Implement multiple workspaces in Windows. And stop encouraging us to clutter our screens with desktop shortcuts which, in practice, are very little use.

PHP Development Tools 2.0 released, joins official Eclipse “Galileo” release

I picked up a couple of PHP and Eclipse news snippets from Zend’s Andi Gutmans. He reports on his blog that PHP Development Tools (PDT) 2.0 has been released – this is a free, open source PHP IDE for Eclipse. He also notes that PDT is now part of Galileo, the coordinated release of Eclipse and numerous language-specific projects set for June 2009.

These yearly releases form the mainstream of Eclipse – you can think of them as equivalent to new versions of Microsoft’s Visual Studio. The big problem with Eclipse is one of dependencies; projects depend on other projects, and maintaining a single Eclipse environment with the latest of everything you are interested in is challenging to say the least. Galileo guarantees compatibility for the projects it includes. This announcement will bring many more users to PDT.

I’m pleased about this as it seemed at one time that it would not happen, and I was among those asking for it.

So what should you download if you want to use PDT 2.0 now? The decision is complicated by the debugger choices: Zend or Xdebug. You can either:

  1. Attempt to integrate PDT 2.0 into your existing Ganymede Eclipse. I did this with earlier builds, but it may not be straightforward. Or
  2. Download the all-in-one for Windows, Linux or Mac. An easy solution, but you still have to get the debugger from elsewhere. Or
  3. Download the all-in-one from Zend, with Zend debugger included.

The third option may be the easiest, presuming you are happy with Zend rather than Xdebug.

I was amused by the language on the Zend “Open Source PHP Development Tools” page:

Looking to experiment with PHP or build simple PHP applications? PHP Development Tools (PDT), as its name suggests, is an open source development tool that provides you with all the basic code editing capabilities you need to get started.

I’d suggest that you can build a lot more than simple PHP applications with PDT alone. Take a look at Zend’s own comparison if you are wondering what the differences are. Still, it is worth supporting Zend by buying the commercial product if you can; after all, Zend is a big contributor to the PDT.


First steps with Adobe Alchemy: good news and bad

I’m fascinated by Adobe’s Alchemy project, which compiles C and C++ code into ActionScript, and stayed up late last night to give it a try. I used Ubuntu Linux, which felt brave since the instructions are for Windows (using Cygwin, which enables somewhat Unix-like development) or Mac.

Techie note: It took me a while to get going; like a number of others I ran into a configuration problem, the symptom being that you type alc-on, which is meant to enable the special Alchemy version of GCC (the standard open source C compiler), but instead get “Command not found”. In the end I ran the command:

source $ALCHEMY_HOME/alchemy-setup

directly from the terminal instead of in my bash profile, and that fixed it. The command alc-on is actually an alias created by this setup script.

After that it was relatively plain sailing. I used a recent version of the Flex 3 SDK, and adapted the stringecho.c example to create a countprimes function, because I was curious to see how it would compare to other results.

The result of my efforts was a Flex library called primetest.swc. I copied this to my Windows box, where I have Flex Builder installed. I configured Flex Builder to use Flash Player 10 and the Flex 3.2 SDK. Then I modified the prime-counting applet so I could compare the Alchemy version of the function with ActionScript. Once you have the .swc, the Alchemy code is easy to use:

import cmodule.primetest.CLibInit;

//code omitted

if (chkAlchemy.selected)
{
    var loader:CLibInit = new CLibInit;
    var lib:Object = loader.init();
    numprimes = lib.countprimes(n);
}
else
{
    numprimes = countprimes_as(n);
}
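The countprimes_as comparison function is not shown above; a minimal plain ActionScript version, using simple trial division (not necessarily identical to the code I timed), looks like this:

private function countprimes_as(n:int):int
{
    // count the primes from 2 up to n by trial division
    var count:int = 0;
    for (var i:int = 2; i <= n; i++)
    {
        var isPrime:Boolean = true;
        for (var j:int = 2; j * j <= i; j++)
        {
            if (i % j == 0)
            {
                isPrime = false;
                break;
            }
        }
        if (isPrime)
        {
            count++;
        }
    }
    return count;
}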

Then I tried the application. Note this is not using the debug player.

The result: the Alchemy code is slightly slower than ActionScript. I also tried this with a higher value (10,000,000) and got 34.95 secs for Alchemy versus 32.59 secs for ActionScript.
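If you want to reproduce this kind of comparison, the timing can be done with getTimer() from flash.utils; a rough sketch rather than my exact harness (the variable names are illustrative):

import flash.utils.getTimer;

var n:int = 1000000;
var start:int = getTimer(); // milliseconds since the player started
var numprimes:int = countprimes_as(n); // or lib.countprimes(n) for the Alchemy version
var secs:Number = (getTimer() - start) / 1000;
trace(numprimes + " primes up to " + n + " in " + secs + " secs");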

Conclusions? First, despite the slower performance, Alchemy is mighty impressive. Let’s not forget the intent of Alchemy to enable reuse of C and C++ libraries; it is not just about speed.

Second, note that Alchemy is still a research project and may get faster. Further, I may have missed some tricks in my code.

Third, note that this sort of tight loop is ideal for just-in-time compilation and really should run at speeds close to that of native code anyway. In Silverlight or Java it does.

So does the test prove anything? Well, it shows that Alchemy won’t always speed your code, which raises the question: in what circumstances will it be a performance win? There is a useful post here from Joe Steele:

The potential wins on performance are if you are operating on a large ByteArray (because of the optimized ByteArray opcodes) or if your code is doing a lot of boxing/unboxing (which largely goes away with Alchemys arg passing model). You are ultimately getting ActionScript out so if you already have hand-tuned ActionScript that avoids boxing/unboxing and is dealing with small chunks of data, you are not likely to beat that with Alchemy.

My experience is in line with his comments.

The general claim made in this thread (see Brandan Hall’s post) is that:

… under favorable conditions [Alchemy] runs 10 times slower than native code but 10 times faster than ActionScript

I guess the big question, from a performance perspective, is how much real-world code that is worth optimizing (because it does intensive processing that the user notices) falls into the “favorable conditions” category. If that turns out to be not many, we may have to abandon the idea of Alchemy as a performance fix, and just enjoy the portability it enables.

Why Audio Matters: Greg Calbi and others on “this age of bad sound”

The Philoctetes Center in New York, which is dedicated to “The multidisciplinary study of imagination”, hosted a round table entitled “Deep Listening: Why Audio Quality Matters”, which you can view online or download – two and a half hours long. It is moderated by Greg Calbi, mastering engineer at Sterling Sound. Other participants are Steve Berkowitz from Sony Music, audio critic and vinyl enthusiast Michael Fremer, academic and audio enthusiast Evan Cornog representing the listening public, audio engineer Kevin Killen, and record producer Craig Street. A good panel, responsible for sounds from a glittering array of artists over many years.

Here’s how Calbi introduces it:

The senses hold mysteries for us ranging from the sacred to the profane … this seminar … is about the beauty of sound and the alienating and off-putting effects of this age of bad sound, and about a commercial infrastructure which seems to be ignoring the potential which great sound holds to trigger our emotions and energy

More and more people, especially young people, are missing the opportunity to hear really high definition audio. Distorted live concerts, which I’m sure we’ve all been to, MP3s on iPods with earbuds, over-simplified sound for mini-systems, musical junk food.

Listening the modern way doesn’t mean that musical experiences won’t be meaningful and wondrous, but the door to their imaginations can be opened wider. We must promote within our business a product which is state of the art, truly satisfying, geared to maximum sonic quality, not just portability, convenience and ease of purchase, which is where our entire business has been going for the last 5 to 10 years.

If you care about such things, it is a fascinating debate – heated at times, as participants disagree about the merits of MP3, CD, SACD and vinyl – though they agree that the overabundance of mid-fi portable music has left the high end all but abandoned.

Unfortunately absent from the round table is any representative of the teenagers and twenty-somethings who historically have been the primary audience for popular music. Too busy out there enjoying it I guess.

There is something absurd about sitting in on audio comparisons via a heavily compressed, lossy, downloaded or streamed video, but never mind.

I’m reluctant to take the side of the high-end audio industry because, while there is no shortage of good products out there, the industry is also replete with bad science, absurdly expensive cables, and poor technical advice in many retailers (in my experience). On the other hand, I’m inclined to agree that sound quality has taken as many steps back as forward in the past thirty years. It is not only the loudness wars (which get good coverage here around two hours in); it is a general problem that producers and engineers do too much processing because they can. Here’s Calbi:

Kevin would be able to detail what goes into the average album now, and how many stages, how many digital conversions, and how many mults, and how many combinations of things – by the time you actually get it, you wouldn’t believe it, we wouldn’t have time to talk about it, we’d have to have a weekend.

The outcome is music that may sound good, but rarely sounds great, because it is too far away from what was performed. Auto-tune anyone? Killen describes how he worked with Elvis Costello on a song where Costello’s voice cracks slightly on one of the high notes. They wanted to re-do that note; they fixed it, but returned to the original because it conveyed more emotion. Few are so wise.

Similar factors account for why old CDs and even vinyl sometimes sound better than remastered versions, when matched for volume.

Another part of the discussion concerns the merits of different musical formats. There is almost a consensus that standard “Red Book” 16/44 CD is inadequate, and that high-res digital like SACD or even, according to some, vinyl records are needed to get the best sound. I’m still not convinced; a study quoted in the September 2007 issue of the Journal of the Audio Engineering Society performed hundreds of blind tests and concluded that passing an audio signal through 16-bit/44.1-kHz A/D/A conversion made no audible difference. Someone could still say it was a bad test for this or that reason.

Craig Street in this Philoctetes round table suggests that high-res digital sounds better because we actually take in, through our bodies as well as our ears, high frequencies that CD cannot reproduce. Count me deeply sceptical – it might be a reason why live performances sound better than recordings, but surely those super-high frequencies would not be reproduced by normal loudspeakers anyway, even if they made it that far through the chain.

Still, while the reasons may be as much to do with mastering choices as inherent technical superiority, individual SACD and DVD Audio discs often do sound better than CD equivalents, and of course offer surround sound as well as stereo.

However, the market for these is uncertain at best. Cornog says:

I feel like a sucker. I’ve got the SACD player, and it sounds great, and I can’t buy it any more. It’s dying.

Before we get too carried away with the search for natural sound, a thought-provoking comment from Berkowitz:

The Beatles didn’t ever play Sergeant Pepper. It only existed in the studio.

He calls it “compositional recording”; and of course it is even more prevalent today.

I was also interested by what Berkowitz says about remastering classic albums from pre-digital years. He says that the engineers should strive to reproduce the sound of the vinyl, because that was the original document – not the tape, which the artist and producer worked on to create the LP. It is a fair point, especially with things like Sixties singles, which I understand were made to sound quite different during the cutting process. It was the single that became a hit, not the tape, which is one reason why those CD compilations never sound quite right, for those who can remember the originals.


Flash Marches On: We are upgrading the Web, says Adobe

Adobe has just released version 10 of the Flash runtime, the piece which is now at the heart of many of the company’s other products. It is extraordinary to recall that there was no Flash in Adobe until 2005 and the acquisition of Macromedia. Flash is now Adobe’s platform, and much of the output from other products like Photoshop or Creative Suite ends up as Flash content. Before 2005 you could argue that PDF was Adobe’s platform; it remains important, but now we are seeing Flash encroach on that territory with new typography and text-handling features forming one of the key advances in Flash 10. Adobe’s online word processor, Buzzword, is based on Flash and not PDF; whereas in the old world I might send you a link to a PDF, or attach it to an email, now I might send a link to a Buzzword document. I’d perhaps more likely send a link to a Google doc, of course, but you can see where Adobe is going with this.

The thing that Adobe talks about most in Flash 10 is Pixel Bender, formerly code-named Hydra, a graphics programming language that enables custom filters, effects and blend modes. Translation: fancy video effects. There is also an improved drawing API, 3D effects, more use of hardware acceleration, and new sound APIs. Game developers will love it, so will those designing Flash ads.

The other thing Adobe emphasises is reach. Flash is on 800M devices, we were told at the online press briefing yesterday, with 1 billion projected for 2009, and delivers 80% of the web’s video. I asked how many of the 800M were Flash 9 or better; I did not get an answer to the exact question; Adobe’s reply talked only about web-connected PCs. There is more of a problem with devices; Flash on my Smartphone (Windows Mobile 6) is a terrible experience.

It can’t be a coincidence that Microsoft has delivered the final Silverlight 2.0 runtime at almost the same time as Flash 10 appears. A few observations on Flash vs Silverlight. Microsoft is the underdog here, in respect of reach and of course adoption, though that’s hardly a surprise considering Silverlight is new and Flash has been around for years. Still, why would a Flash developer migrate towards Silverlight? I’m not seeing much sign of that.

Despite their obvious similarities, Flash and Silverlight are being presented in different ways. Adobe puts the emphasis on multimedia capabilities, while Microsoft highlights application capabilities. In fact, I think of Flash as a multimedia runtime with application capabilities, whereas I think of Silverlight as an application runtime with multimedia capabilities. Adobe treats the application aspect as a separate topic, to do with Flex and Flex Builder, whereas it is central to Microsoft’s Silverlight story. Both companies play to their strengths; things look better for Microsoft if you talk about Visual Studio versus Flex Builder, rather than Silverlight versus Flash. Equally, if you position CS4 against the Expression suite, it looks pretty bad for Microsoft.

The Flash runtime is around 1.8MB, whereas Silverlight is around 4.5MB. Personally I doubt that is of much significance, except for users on dial-up connections. They are both small enough.

Kudos to Adobe for delivering Flash 10 for Linux at the same time as for other platforms. I successfully installed this on my Ubuntu system, though the process is not as smooth as on Windows. This contrasts with Microsoft’s uncertain Linux support with Silverlight/Moonlight.

Finally, a comment from the spokesperson at the press briefing has stayed in my mind. “We are able to upgrade the Web very quickly”, he said. Does the Web belong to Adobe, that it can upgrade it?

Future of Web Applications wrap-up: check out the Open Stack

I attended two great sessions at day two of FOWA London on Friday – which I guess makes it a good day. The first was from Tim Bray, about which I posted last week. Bray was alone in suggesting that the current economic climate will change the tech world deeply; it’s speculative, but I’m inclined to agree with him, though not on every detail of his analysis. He says recession will boost open source – it might, though I can also imagine companies cutting back on the number of staff they dedicate to open source projects like Eclipse – a large amount of open source development is done by professionals on company time. I asked Bray how he thought Sun (his company) would fare in the downturn; he said it would be better off than software companies which are not committed to the open source model, but again that’s speculative; what will be tested is the business model, and that is one thing Sun has never been able to explain satisfactorily.

I was amused that Tim Bray sneaked “Flash is bad” into his talk, as well as talking about “the Microsoft disaster” and “the Oracle disaster”.

The other high point was Kathy Sierra doing her piece on passionate users; I guess it is what she always says, but it is none the worse for that, especially as I had not heard her speak before. I’ll be posting again on this, here or elsewhere. I was sorry that her talk came just before the Diggnation silliness, which meant no chance for questions.

There were disappointments too. Mark Zuckerberg and Dave Morin spoke about Facebook Connect and a few other things; I would have liked some sort of debate about Connect vs Google’s OpenSocial; my observation is that Google is doing a better PR job in persuading us that it supports the “open web” as opposed to some kind of walled garden.

Although I have an intense interest in things like Adobe AIR, Salesforce.com, and Amazon’s web services, the sessions on these were disappointing to me, being high level and mainly marketing. There was probably more detail in the “University” mini-sessions but one cannot attend everything.

Despite its absence from the main stage, there seemed to be a fair amount of interest in Microsoft’s stand and mini-sessions on Silverlight; nevertheless, if the FOWA crowd is wary of Flash it is even more suspicious of anything from Microsoft. FOWA is biased towards open technology; only Apple gets a pass, because its laptops and iPhone are well liked.

As for significant clues about, well, the future of web applications, I’d point to David Recordon’s session on the “Open Stack”, about which I posted on Friday. If this, or something like it, wins serious adoption it will have a huge impact.

Many of the sessions have been posted as videos here.

Streaming FLAC from Linux to a PlayStation 3

The digital home is in an anarchic phase right now. The general public has woken up to the idea of a home music server, and I’m seeing this widely discussed in all sorts of forums, but there are several competing standards and plenty of things that don’t quite work. Getting a seamless and satisfactory experience everywhere including desktop computers, portable devices and in the living room can still be a challenge.

For myself I’ve experimented with a number of setups, including Apple iTunes, Microsoft Windows Media Center, and Logitech SqueezeCenter. My interest is mainly in music rather than video; and I’ve settled for the moment on a Linux server running Ubuntu and SqueezeCenter, with CDs ripped to FLAC. I like FLAC because it is open source and lossless, which I hope means I won’t ever have to rip the CDs again; unlike lossy formats FLAC can be converted to other file types if necessary without compromise.

SqueezeCenter works great with Logitech’s Squeezebox, as you’d expect. The Squeezebox goes anywhere in the house, plugs into a hi-fi, and lets you select music either with a remote, or using a web UI on any device with a web browser, or using Logitech’s Duet smart remote. But a Squeezebox is expensive: what if you want access to the music server in more than one room? Well, you can play music from SqueezeCenter on any PC that can play an MP3 stream, but it is a bit fiddly. What about using a games console like a PlayStation 3 or Xbox 360?

This has been a spare-time project for me for some time. There are a zillion ways to stream music to a PS3 or Xbox 360, and some of them actually work. SqueezeCenter isn’t one of them; but there is no problem using other servers which access the same files. My own choices are somewhat limited, as I want something that runs on Linux and works with FLAC. Neither the PS3 nor the Xbox plays FLAC natively, so the server has to transcode on the fly to a format the console can play.

My first stop was Fuppes, a strong project that in theory does exactly what I need. Fuppes is a DLNA server; DLNA is the nearest thing to an agreed standard in this area, though not supported by Apple (why not?). I believe Fuppes would be fine if I stored music as MP3, but the transcoding aspect seems problematic. It used to mostly work on the PS3, but after a Sony firmware update I got nothing but “unsupported data” messages, and I never got it working on the 360, despite playing with many different builds and configuration options.

This weekend I tried Mediatomb instead, another Linux DLNA server with transcoding support. It was the usual Linux circus. I prefer to use standard packages where possible, and I noticed that Mediatomb has an official package for Ubuntu 8.04 “Hardy Heron”. My server was on 7.10 “Gutsy Gibbon”, so this seemed a good reason to upgrade. Did the upgrade easily enough, then installed Mediatomb. It took an age to index my FLAC files, following which I went to the PS3 and tried it. No go; “Unsupported data”.

Then I looked at this thread.  This informed me that the PS3 needs a big endian PCM stream; and further, that the version in the Ubuntu repository does not work. So I removed it; downloaded the source with Subversion; compiled and installed; made the suggested configuration changes; and lo, it all works.
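For reference, the change goes in the transcoding section of Mediatomb’s config.xml: FLAC is mapped to an external decoder profile that outputs raw big endian PCM (audio/L16), which is what the PS3 expects. Something along these lines should be close to what the thread suggests, though the profile name here is my own and the flac arguments may need tweaking for your setup:

<transcoding enabled="yes">
  <mimetype-profile-mappings>
    <transcode mimetype="audio/x-flac" using="flac2raw"/>
  </mimetype-profile-mappings>
  <profiles>
    <profile name="flac2raw" enabled="yes" type="external">
      <!-- raw 16-bit PCM; the PS3 wants it big endian and signed -->
      <mimetype>audio/L16</mimetype>
      <accept-url>no</accept-url>
      <first-resource>yes</first-resource>
      <agent command="flac" arguments="--decode --silent --force-raw-format --endian=big --sign=signed --output-name=%out %in"/>
      <buffer size="1048576" chunk-size="131072" fill-size="262144"/>
    </profile>
  </profiles>
</transcoding>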

The good news is that performance is great, both in sound quality and in the speed of browsing the server. From the PS3 it is more responsive than a Windows Media Center. The bad news is that the UI on the PS3 is basic. There is no search facility; you have to scroll to find what you want. Another annoyance is that you cannot search for one song while playing another; as soon as you go back to searching, the music stops. Third gripe (maybe the fault of Mediatomb): everything is case-sensitive, so unless your tagging is perfect you might have several entries for the same artist. Still, it’s a big advance on Fuppes, and on nothing working at all.

The relentless branding of the Internet

For me, it started with Amazon affiliates. Before that, you mostly saw the Amazon brand on the Amazon site. After that, seemingly every web page you went to had Amazon somewhere on it.

Now we are into mash-ups and widgets; but inevitably it seems to be the big brands that dominate. Instead of going to the site, the site comes to you. It’s even worse if you install one of their toolbars: Google, Yahoo, MSN, eBay.

At the IE8 briefing the other day, we were shown the Accelerator – Smart Tags revisited – and Web Slices. The Accelerator is especially intrusive, because it behaves as if it is embedded in the site you are visiting. You go to an ad-free site (in the UK) like the BBC, select some text or right-click, and suddenly your screen is festooned with links to all the usual suspects: Microsoft, eBay, Google, Amazon.

It feels claustrophobic; an oppressive encirclement by brands.

The Web’s poor security makes this worse. After a couple of bad experiences, users are more inclined to stick with what is tried, trusted and well-known.

In the early days of the Internet, it was possible to think that the inherently low technical barriers to entry would benefit small players and make it hard for a few entities to dominate. It is hard to believe that now.
