Category Archives: internet

Use HTML not Flash, Silverlight or XUL, says W3C working draft

The W3C has posted its working draft for HTML 5.0. Interesting statement here:

1.1.3. Relationship to XUL, Flash, Silverlight, and other proprietary UI languages

This section is non-normative.

This specification is independent of the various proprietary UI languages that various vendors provide. As an open, vendor-neutral language, HTML provides for a solution to the same problems without the risk of vendor lock-in.

Food for thought as you embark on your Flash or Silverlight project. I would have thought XUL was less proprietary than the others, but still.

The contentious part is this:

HTML provides for a solution to the same problems

I doubt that HTML 5.0 can quite match everything that you can do in Flash or Silverlight, depending of course on what is meant by “a solution to the same problems”. The other issue is that Flash is here now, Silverlight is just about here now, and HTML 5.0 will be a while yet.


Escaping the Adobe AIR sandbox

Adobe’s Mike Chambers has an article and sample code for calling native operating system APIs from AIR applications, which use the Flash runtime outside the browser.

I took a look at the native side of the code, which is written in C# and compiled smoothly in Visual Studio 2008. The concept is simple. Instead of launching an AIR application directly, you start the “Command Proxy” application. The Command Proxy launches the AIR application, passing a port number and optionally an authorization string. Next, the Command Proxy creates a TCP socket which listens on the specified port. The AIR application can then use its socket API to send commands to the Command Proxy, which is outside the AIR sandbox.
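To make the flow concrete, here is a rough sketch of the proxy side in Python. The wire format, the auth token handling and the command whitelist are my own assumptions, not taken from Chambers’ C# sample:

    # Rough re-imagining of the Command Proxy idea; the real sample is in C#.
    # The line format, token handling and whitelist below are assumptions.
    import socket
    import subprocess

    AUTH_TOKEN = "s3cret"                    # passed to the AIR app at launch
    ALLOWED = {"notepad": ["notepad.exe"]}   # only whitelisted commands may run

    def serve(port):
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(("127.0.0.1", port))   # accept local connections only
        listener.listen(1)
        conn, _ = listener.accept()
        with conn:
            # Expected (assumed) request format: "<auth-token> <command-name>\n"
            line = conn.makefile().readline().strip()
            token, _, command = line.partition(" ")
            if token == AUTH_TOKEN and command in ALLOWED:
                subprocess.Popen(ALLOWED[command])   # launch the native process
            else:
                conn.sendall(b"rejected\n")

    if __name__ == "__main__":
        serve(1234)   # the real proxy launches the AIR app and passes it this port

The interesting design questions are what the whitelist should contain and how the token is kept out of reach of other processes, which brings us to the next point.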

It’s a neat idea though Microsoft’s Scott Barnes gave the design a C- on security grounds. He clarified his point thus:

The communication channel between the command proxy and AIR application looks like a potential vulnerability. One of the things application developers should worry about with security is insecure cross-process communication mechanisms hanging around on someone’s machine. For example if a process listens on a named pipe, and that named pipe has no ACLs and no validation of inbound communication, the process is vulnerable to all kinds of attacks when garbage is sent down the pipe. In the example on using the command proxy how do you secure it so that it doesn’t turn into a general purpose process launcher?

Barnes has an obvious incentive to cast doubt on AIR solutions (he is an RIA evangelist at Microsoft, focused on Silverlight), but nevertheless this is a good debate to have. How difficult is it to do this in a secure manner? It is also interesting to note the opening remarks in Chambers’ post:

Two of the most requested features for Adobe AIR have been the ability to launch native executables from an AIR application, and the ability to integrate native libraries into an AIR application. Unfortunately, neither feature will be included in Adobe AIR 1.0.

This is really one feature: access to native code. I remain somewhat perplexed by AIR in this respect. Is the inability to call native code a security feature, or a way of promoting cross-platform purity, or simply a feature on the to-do list? I don’t think it is really a security feature, since AIR applications have the same access to the file system as the user. This means they can execute native code, just not immediately. For example, an AIR app could download an executable and pop it into the user’s startup folder on Windows. That being the case, why not follow Java’s lead and provide a clean mechanism for calling native code? Adobe could add the usual obligatory warnings about how this breaks cross-platform compatibility and so on.
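To illustrate why the restriction is not much of a security barrier, here is a minimal sketch of the general Windows mechanism, not AIR’s file API; the payload name is a placeholder and the path shown is the Vista-era location:

    # Any code running with the user's privileges can drop an executable into
    # the per-user Startup folder; Windows runs it at the next logon.
    # "payload.exe" stands in for whatever the application downloaded.
    import os
    import shutil

    startup = os.path.join(
        os.environ["APPDATA"],
        "Microsoft", "Windows", "Start Menu", "Programs", "Startup",
    )
    shutil.copy("payload.exe", startup)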

Van Morrison fan site under attack by Web Sheriff

A popular Van Morrison fan site has received a letter demanding that the site be closed. Here’s an extract from a post by the site’s founder.

This site began as a personal hobby about 12 years ago, an expression of my own enthusiasm for Mr. Morrison’s music, which I hoped to share with other fans…The tone is respectful; there is no advertising on the site — never has been; there is no facilitation or encouragement of piracy; in fact the site has long contained a statement to the effect that bootlegging was “not condoned”. Any fair-minded visitor to the site is likely to have concluded that the site promoted, and helped fans to better understand, Mr. Morrison’s work.

Despite this history, despite all of these facts, on Monday, January 14 I received a message from someone working for an outfit named WebSheriff, who claimed to represent Van Morrison and Exile Productions. According to the message, this website stands accused of (and I quote): “numerous infringements of our said clients’ IP [ed: Intellectual Property] rights including, but not limited to, the infringement of copyrights, trademarks, goodwill, performers rights, moral rights, publicity rights, privacy rights and the wholesale facilitation of further, numerous infringements by third parties on a grand scale (such as providing access to bootleg / unauthorised / illegal recordings)” end quote. I’ll repeat for emphasis: “wholesale facilitation of”; “on a grand scale”.

It looks like this is the Web Sheriff in question:

It was through the acute need and demand for the protection of on-line rights against infringements and abuse, that Web Sheriff was set-up by its parent company, Entertainment Law Associates. Web Sheriff is one of the few specialist, companies that operate in the field of internet policing and has become a market-leader through offering truly across-the-board solutions, from on-line legal enforcement to high tech anti-piracy.

There is quite possibly some degree of copyright infringement on the site in question; but attacking your biggest fans, who are doing unpaid promotion, is silly. The site is well-regarded and was apparently featured in the BBC’s “Best of the Web” guide, among other recommendations.

The site owner received this unwelcome missive on January 14th. It was also sent to his employer, a university, presumably because the site was hosted on its servers. He took the site offline as “an expression of goodwill”, pending receipt of guidelines for making the site legal, which he was told to expect within 48 hours. They did not appear, and he decided to reopen the site, but at a new home, so as not to involve the university any further.

No doubt this controversy will get the Unofficial Van Morrison website many new visitors to enjoy these “grand scale” infringements.

Update: It looks as if the site is offline again.

Sun gets a database manager, but Oracle owns its InnoDB engine

Sun now has a database manager. It’s been a long time coming. Oracle has … Oracle, IBM has DB2, Microsoft has SQL Server; it’s been obvious for years that Sun had a gap to fill. Now Sun has MySQL.

This is interesting to me as I was a relatively early user of the product. I didn’t much like it. It was missing important features like transactions, stored procedures and triggers. I still used it though because of a few appealing characteristics:

  • It was free
  • It was very fast
  • It was lightweight
  • It was the M in LAMP

I should expand slightly on the last of these. The great thing about MySQL was that you did not need to think about installation, PHP drivers, or anything like that. It all came pretty much by default. If you decided that you could not bear MySQL’s limitations, you could use Postgres instead, but it took more effort and was slower.

The ascent of MySQL is a sort of software development parable. Like PHP, MySQL came about from one person’s desire to fix a problem. That person was Michael “Monty” Widenius. He wanted something a little better than mSQL, a popular small database engine at the time:

We once started off with the intention to use mSQL to connect to our own fast low level (ISAM) tables. However, after some testing we came to the conclusion that mSQL was not fast or flexible enough for our needs. This resulted in a new SQL interface to our database but with almost the same API interface as mSQL. This API was chosen to ease porting of third-party code.

Why did MySQL take off when there were better database engines already out there? It was partly to do with the nature of many LAMP applications in the early days. They were often not mission-critical (mine certainly were not), and they were typically weighted towards reading rather than writing data. If you are building a web site, you want pages served as quickly as possible. MySQL did that, and without consuming too many resources. Many database engines were better, but not many were faster.

MySQL today has grown up in many ways, though transactions are still an issue. To use them you need to use an alternate back-end storage engine, either InnoDB or BDB. BDB is deprecated, and InnoDB is included by default in current releases of MySQL. InnoDB is owned by Oracle, which could prove interesting given how this deal changes the dynamics of Sun’s relationship with Oracle, though both MySQL and InnoDB are open source and published under the GPL. Will Sun try to find an alternative to InnoDB?
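For those who have not used it, the storage engine is chosen per table, and transactions only behave as expected on InnoDB tables. A minimal sketch using the Python MySQLdb driver, with hypothetical connection details and table names:

    # Sketch only: credentials, database and table are hypothetical.
    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="shop")
    cur = conn.cursor()

    # The engine is declared per table; MyISAM tables silently ignore
    # transactions, InnoDB tables honour them.
    cur.execute("CREATE TABLE IF NOT EXISTS accounts ("
                "  id INT PRIMARY KEY, balance DECIMAL(10,2)"
                ") ENGINE=InnoDB")

    try:
        cur.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
        cur.execute("UPDATE accounts SET balance = balance + 10 WHERE id = 2")
        conn.commit()        # both updates become visible together...
    except MySQLdb.Error:
        conn.rollback()      # ...or neither does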

While I agree with most commentators that this is a good move for Sun, it’s worth noting that MySQL was not originally designed to meet Enterprise needs, which is where most of the money is.

Update: as Barry Carr comments below, there is a planned replacement for InnoDB called Falcon.

Does Google rot your brain?

According to Prof Brabazon (via Danny Sullivan) it does:

She said: “I want students to sit down and read. It’s not the same when you read it online. I want them to experience the pages and the print as much as the digitisation and the pixels. Both are fine but I want them to have both, not one or the other, not a cheap solution.”

She will be giving a lecture on the issue, called Google Is White Bread For The Mind, at the Sallis Benney Theatre in Grand Parade, Brighton, on Wednesday at 6.30pm.

I’d like to hear more detail of her argument before passing judgment. I’ll observe though that there seem to be a couple of things confused here. One is about print versus online, as in the quotation above. The other is about how to research online:

Too many students don’t use their own brains enough. We need to bring back the important values of research and analysis.

She said thousands of students across the country, including those at the universities of Brighton and Sussex, were churning out banal and mediocre work by using what search engines provided them.

Here, I agree. It is easy to get mediocre or simply wrong answers from a quick Google search. It’s especially dangerous because of the internet’s echo effect. Misinformation can spread rapidly when it is something people want to believe. This then looks superficially like corroboration. For example, a poorly-researched paper on DRM in Vista was widely treated as authoritative because the story, that Microsoft broke Vista for the sake of DRM, was so compelling. See here for more on this. I am not making a point about DRM in Vista here. I am making the point that arriving at the truth takes a great deal more work than simply reading the first article you find, even if it is widely quoted.

This implies a need to educate students in how to do research, rather than banning online sources. I agree though that a trip to the library is also important. Not everything is in Google’s index, yet.

Great anecdote in the Economist about the decline of the CD

The Economist has a report on change in the music industry, which kicks off with this anecdote:

IN 2006 EMI, the world’s fourth-biggest recorded-music company, invited some teenagers into its headquarters in London to talk to its top managers about their listening habits. At the end of the session the EMI bosses thanked them for their comments and told them to help themselves to a big pile of CDs sitting on a table. But none of the teens took any of the CDs, even though they were free. “That was the moment we realised the game was completely up,” says a person who was there.

Detailed look at a WordPress hack

Angsuman Chakraborty’s technical blog suffered a similar attack to mine – the malicious script was the same, though the detail of the attack was different. In my case WordPress was attacked via Phorum. Chakraborty offers a detailed look at how his site was compromised and makes some suggestions for improving WordPress security.

In both these cases, WordPress was not solely to blame. At least, that is the implication. Chakraborty thinks his attack began with an exploit described by Secunia, which requires the hacker first to obtain access to the WordPress password database, via a stray backup or a SQL injection attack. Nevertheless, Chakraborty says:

One of the challenges with WordPress is that security considerations were mostly an afterthought (feel free to disagree) which were latched on as WordPress became more and more popular.

I have huge respect for WordPress. Nevertheless, I believe its web site could do better with regard to security. The installation instructions say little about it. You really need to find this page on hardening WordPress. It should be more prominent.
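For context, here is a minimal sketch of the class of flaw mentioned above – a SQL injection that exposes the password table. The table and column names follow WordPress conventions, but the code is purely illustrative and is not WordPress’s own:

    # Illustrative only: building SQL by string formatting lets crafted input
    # rewrite the query; parameterised queries close that hole.
    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="blog", passwd="secret", db="blog")
    cur = conn.cursor()

    author = "x' UNION SELECT user_login, user_pass FROM wp_users -- "

    # Vulnerable: attacker-controlled input is pasted straight into the SQL,
    # so the injected UNION pulls password hashes out of the users table.
    cur.execute("SELECT post_title, post_content FROM wp_posts "
                "WHERE post_author = '%s'" % author)

    # Safer: let the driver escape the value as a bound parameter.
    cur.execute("SELECT post_title, post_content FROM wp_posts "
                "WHERE post_author = %s", (author,))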


Is Adobe spying on you?

Adobe is on the defensive after users complained that its premier software package, Creative Suite 3, collects usage stats in an underhand manner.

On the other hand, Adobe’s John Nack reports that the content being tracked is content delivered from the internet, such as a Live News SWF, and online help which really is online, not just local files.

The other part of this story is that Adobe is using Omniture for analytics, and Omniture has chosen a deceptive hostname for its tracking stats, specifically 192.168.112.2O7.net. That is not an IP address but a domain name – note the capital O where you would expect a zero.
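A quick way to see the trick: the string only looks like a private IP address, and the letter O means it will not even parse as one. A small illustrative check:

    # The Omniture host only resembles a private IP address; the letter O
    # means it cannot be parsed as one, so it is really a hostname label.
    import ipaddress

    for candidate in ("192.168.112.207", "192.168.112.2O7"):
        try:
            ipaddress.ip_address(candidate)
            print(candidate, "-> a genuine (private) IP address")
        except ValueError:
            print(candidate, "-> not an IP address at all, just a hostname")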

Breach of privacy? Case not proven. Anyone running a web site should track stats for all kinds of reasons; I used them recently to investigate a break-in. When desktop applications call internet resources, they are acting like a web browser, and users should expect that they leave a digital trail. It is not as if CS3 calls the internet secretly – I think most of us can figure out that a live news panel is doing more than showing files installed by setup.

Unfortunately once you start browsing the web it is difficult to know exactly what resources you are calling and from where. What users see as a single web page typically has ads from one place, maybe images from another, and often slightly sneaky tricks like invisible images or scripts put in place solely to track usage. Now desktop apps are doing the same thing; it is not different in kind though it is true that neither case is transparent for the user.

That’s no excuse for Omniture using a silly URL that is the kind of thing you would expect from spam sites or misleading emails that want you to click malware links. Omniture’s URL is designed to look like an internal IP address which would normally be safe. That’s beyond “not transparent”; it is deliberate deception, albeit easy to spot for anyone moderately technical.

Should Adobe offer an option to turn off all non-local content? Possibly, though not many users would want to do so. There is a simple way for users to protect their privacy, which is to disconnect their machine from the Internet.

The big unknown is how these stats are used. Does Adobe check for the same serial number being used on multiple machines concurrently? Does it link usage stats to registration details? Does it check which apps in the suite are used most, and use that for contextual marketing to specific users? There is probably a privacy policy somewhere which explains what Adobe does, or does not, or might do. Unfortunately users have to take such things on trust. Occasionally companies slip up, even with good intentions – you may recall the day AOL released search logs for around 650,000 users, naively thinking they were not personally identifiable.

This problem is not specific to Adobe. It is inherent in internet-connected applications including web browsers. That said, Adobe should beat up Omniture for its shady URL, and do a better job informing users what kind of data it is collecting and how it is used. Which is pretty much what Nack says in a second post – except he says security when this is a privacy issue. Not the same thing.


Wikia Search is live

You can now perform searches on Wikia, the open source search engine from the founder of Wikipedia.

This is from the about page:

We are aware that the quality of the search results is low.

Wikia’s search engine concept is that of trusted user feedback from a community of users acting together in an open, transparent, public way. Of course, before we start, we have no user feedback data. So the results are pretty bad. But we expect them to improve rapidly in coming weeks, so please bookmark the site and return often.

I tried a few searches for things I know about, and indeed the results were poor. I am going to follow the advice.

Wikia’s Jimmy Wales says there is a moral dimension here:

I believe that search is a fundamental part of the infrastructure of the Internet, and that it can and should therefore be done in an open, objective, accountable way.

There are several issues here. The power of Google to make or break businesses is alarming, particularly as it extends its business and potential conflicts of interest grow between delivering the best search results and promoting particular sites. Google’s engine is a black box, to protect its commercial secrets. Search ranking has become critical to business success, and much energy is expended on the dubious art of search engine optimization, sometimes to the detriment of the user’s experience.

Another thought to ponder is how Google’s results influence what people think they know about, well, almost anything. Children are growing up with the idea that Google knows everything; it is the closest thing yet to Asimov’s Multivac.

In other words, Wales is right to be concerned. Can Wikia fix the problem? The big question is whether it can be both open and spam-resistant. Some people thought that open source software would be inherently insecure, because the bad guys can see the source. That logic has proved faulty, since the risk is more than mitigated by the number of people scrutinizing open source code and fixing problems. Can the same theory apply to search? That’s unknown at this point.

It is interesting to note that Wikipedia itself is not immune to manipulation, but works fairly well overall. However, if Wikia Search attracts significant usage, it may prove a bigger target. I guess this could be self-correcting, in that if Wikia returns bad results because of manipulation, its usage will drop.

I don’t expect Wikia to challenge Google in a meaningful way any time soon. Google is too good and too entrenched. Further, Google and Wikipedia have a symbiotic relationship. Google sends huge amounts of traffic to Wikipedia, and that works well for users since it often has the information they are looking for. Win-win.

Unanswered question: how’s Vista’s real-world security compared to XP?

Reading Bruce Eckel’s disappointing “I’m not even trying Vista” post (I think he should give it a go rather than swallow all the anti-hype) prompts me to ask: how’s Vista’s security shaping up, after 12 months of real-world use?

I could call the anti-virus companies, but I doubt I’ll get a straight answer. The only story the AV guys want to see is how we still need their products.

I’d like some stats. What proportion of Vista boxes has been successfully infected by malware? How does that compare to XP SP2? And has anyone analysed those infections to see whether User Account Control (Vista’s big new security feature) was on or off, and whether the infection required the user’s cooperation, such as clicking OK when an unsigned malware app asked for admin rights? What about IE’s protected mode – has it reduced the number of infections from compromised or malicious web sites?

Has anyone got hard facts on this?
