Category Archives: web authoring

WinFS reborn: SQL Server as a file system

Fascinating interview with Quentin Clark, who led the cancelled WinFS project at Microsoft. Jon Udell is the interviewer.

Clark talks about how technology from WinFS is now emerging as the Entity Framework in ADO.NET (part of .NET 3.5 SP1) and the FILESTREAM storage attribute in SQL Server 2008 – a connection I’d already made at TechEd in Barcelona last year. He also mentions the new HierarchyID column type, which enables fast querying of paths – in effect, rows which contain other rows. He adds that a future version of SQL Server will support the Win32 file API, so that the database can act as a file system:

In the next release we anticipate putting those two things together, the filesystem piece and the hierarchical ID piece, into a supported namespace. So you’ll be able to type //machinename/sharename, up pops an Explorer window, drag and drop a file into it, go back to the database, type SELECT *, and suddenly a record appears.

Put that together with the work Microsoft is doing on synchronization, and you get offline capability too – something more robust than offline files in Vista. Clark says SharePoint will also benefit from SQL Server’s file system features.
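To make the combination concrete, here is a minimal sketch in Python (using pyodbc) of what such a table might look like in SQL Server 2008. The table, data and connection details are invented for illustration, and the FILESTREAM column assumes a database already configured with a FILESTREAM filegroup.

```python
# Sketch of the two SQL Server 2008 features Clark mentions: a hierarchyid
# column for fast path queries, and a FILESTREAM-attributed varbinary(max)
# column whose bytes live in the NTFS file system rather than the database file.
import pyodbc

conn = pyodbc.connect("DSN=sql2008;Trusted_Connection=yes")  # hypothetical DSN
cur = conn.cursor()

cur.execute("""
CREATE TABLE Documents (
    Id      UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE,  -- required for FILESTREAM
    Path    hierarchyid NOT NULL,        -- encodes /folder/subfolder/...
    Content varbinary(max) FILESTREAM    -- stored on NTFS, not in the MDF
)
""")

# "Rows which contain other rows": list everything under a given node.
cur.execute("""
DECLARE @folder hierarchyid = hierarchyid::Parse('/1/3/');
SELECT Path.ToString(), Id FROM Documents
WHERE Path.IsDescendantOf(@folder) = 1
""")
for path, doc_id in cur.fetchall():
    print(path, doc_id)
```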

Note that Live Mesh does some of this too. I guess SQL Server is in the Live Mesh back end, but it strikes me that Microsoft is at risk of developing too many ways to do the same thing.

The piece of WinFS that shows no sign of returning is the shared data platform, which was meant to enable applications to share data:

… all that stuff is gone. The schemas, and a layer that we internally referred to as base, which was about the enforcement of the schemas, all that stuff we’ve put on the shelf. Because we didn’t need it.

Cenzic web app report highlights security problems

Will we ever get a secure Internet? There’s no cause for optimism in the latest Cenzic report into web app security. A few highlights:

  • 7 out of 10 web applications analyzed by Cenzic were found vulnerable to cross-site scripting attacks
  • 70% of Internet vulnerabilities are in web applications
  • Firefox has the most reported browser vulnerabilities at 40%; IE accounts for 23%
  • Weak session management, SQL injection, and poor authentication remain very common problems
  • 33% of all reported vulnerabilities are caused by insecure PHP coding, compared to 1% caused by insecurities in PHP itself

OK, it’s another report from a security company with an interest in hyping the figures; but I found this one more plausible than some.

The PHP remarks are interesting; it would be good to see equivalent figures for ASP.NET and Java.
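Most of the report’s top categories are coding mistakes rather than platform flaws, which fits those PHP figures. As a minimal sketch of the most common mistake, here is SQL injection in a few lines of Python (the pattern is identical in PHP); the table and data are invented.

```python
# Demonstrates why string-building SQL is dangerous and why parameterized
# queries are the standard fix. Uses an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: attacker-controlled text becomes part of the SQL statement.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("vulnerable query returned", len(rows), "row(s)")    # returns every row

# Safe: a parameterized query keeps data out of the SQL grammar.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned", len(rows), "row(s)")  # returns none
```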

Painful Debian / Ubuntu SSL bug

A bug in the Debian-modified version of OpenSSL (also used by Ubuntu) means that cryptographic keys generated on Debian systems for the last couple of years may be insecure. Instead of being well randomized, they are easily guessable.
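To see why “easily guessable” is no exaggeration: the offending patch removed almost all the seeding of OpenSSL’s random number generator, leaving the process ID as effectively the only input – and PIDs are small integers. A toy Python illustration of the collapsed keyspace (emphatically not the real OpenSSL code):

```python
# If the only seed is the PID, the whole keyspace shrinks to the number of
# possible PIDs, and an attacker can precompute every key once.
import random

def broken_keygen(pid: int) -> int:
    rng = random.Random(pid)        # seeded by PID alone, nothing else
    return rng.getrandbits(2048)    # stand-in for a "private key"

MAX_PID = 32768  # typical Linux pid_max at the time

precomputed = {broken_keygen(pid) for pid in range(1, MAX_PID)}
print(f"entire keyspace: {len(precomputed):,} keys")  # ~32,767, not 2^2048

victim_key = broken_keygen(pid=4242)            # any key from an affected box...
print("guessable:", victim_key in precomputed)  # ...is in the attacker's set
```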

More information about the vulnerability is here; how to fix it here.

How much does this matter? The full scope has not emerged yet; but as I understand it, it affects any key generated on a system with the broken library. Buying a certificate from a third-party certificate authority does not in itself protect you: if you generated the key pair and certificate request on a system with the broken OpenSSL library, the private key is still guessable (thanks to Nico for the correction below).

This means that a large number of supposedly secure SSH connections or SSL connections to web sites and servers over the last couple of years were actually not very secure at all.

If nothing else, it shows how easy it is to be falsely reassured, to think you are secure when you are not.

It also shows the risks of modifying security code. The problem is not with OpenSSL itself, but with changes made by a Debian maintainer who thought he was fixing something when in fact he was breaking it: code that seeded the random number generator was commented out because an analysis tool complained about its use of uninitialized memory.

This site runs on Debian and I’ve spent some time today checking it for vulnerability and regenerating keys.


Making sense of Salesforce.com

The Salesforce.com marketing pitch here at Dreamforce Europe is wearying at times, but it is not complete nonsense. CEO Marc Benioff spent the first hour of his keynote yesterday reiterating what he has said a thousand times before about “no software”. The “no software” slogan is deceptive, since Salesforce.com is a software platform; Benioff deliberately conflates several issues, including zero software deployment, cloud availability, and outsourced hardware maintenance.

The core of the model is multi-tenancy. One entity (Salesforce.com) takes responsibility for your hardware and application server. Multiple entities (including you) get shared use of those resources, financed by a subscription. Originally this was a single CRM application; now it is called a platform (Force.com), since you can build many types of application on it. Is this “software as a service”, or just a web application? It is both, especially since Salesforce.com publishes a SOAP API and claims to be the largest user of SOAP in the world.

I asked Adam Gross, VP of developer marketing, whether the platform can also support REST; the answer is: not really. You can create your own REST-style API to some extent, but authentication must still be done through SOAP.
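For context, here is a rough sketch of that SOAP handshake in Python: every session begins with a SOAP login() call that returns a session ID, whatever protocol the rest of your integration uses. The envelope is modeled on the 2008-era partner API; treat the exact endpoint version, namespace and header values as assumptions rather than documentation.

```python
# Minimal SOAP login against the Salesforce partner API (details assumed).
import requests

LOGIN_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:urn="urn:partner.soap.sforce.com">
  <soapenv:Body>
    <urn:login>
      <urn:username>user@example.com</urn:username>
      <urn:password>password-plus-security-token</urn:password>
    </urn:login>
  </soapenv:Body>
</soapenv:Envelope>"""

resp = requests.post(
    "https://login.salesforce.com/services/Soap/u/12.0",  # API version assumed
    data=LOGIN_ENVELOPE,
    headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": '""'},
)
# The response carries <sessionId> and <serverUrl>; every subsequent call,
# SOAP or home-grown REST, presents that session ID as its credential.
print(resp.status_code)
```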

Developers can customize Salesforce or write their own applications (which is really the same thing), either by simple configuration or by writing code in Apex, a language Salesforce created especially for its platform. Under the covers, I understand that Salesforce runs on Java and Oracle (an upgrade to 10g is due later this year), so your Apex code ends up as Java bytecode and queries Oracle; but this is hidden from the developer.

One of the interesting features of Force.com is that you can develop entirely online. You can also write Apex code in Eclipse. There is a sandbox facility for testing. Another option is to create mashups which use web services to combine Salesforce applications with other web applications (just as Salesforce itself does with Google documents).

The next version of the Salesforce.com platform, available shortly, includes Visualforce, a tag-based syntax for creating a custom web user interface. Visualforce uses a model-view-controller pattern.

The Salesforce.com model has several inherent attractions. For example, hosted multi-tenant applications make more efficient use of hardware than on-premise servers, and rolling out a Salesforce.com implementation is easier than introducing something like SAP: there is no hardware to worry about, and the application is usable out of the box. Some of the customers I spoke to talked about failed or arduous implementations of SAP or Siebel systems.

As the Salesforce.com customer base grows – it currently stands at 41,000 customers and 1.1 million subscribers – it becomes a more attractive target for third-party software vendors. You can market a custom Salesforce application through the official AppExchange, or create your own on-demand application and sell it to your own subscribers.

What then are the main reservations? Well, Benioff apparently has not read Chris Anderson’s essay on Free. As a customer, you have to be willing to pay Salesforce.com a per-subscriber annual fee for ever. As a third-party vendor, you have to be willing to pay Salesforce.com a proportion of your revenue for ever. Custom objects, custom language, custom UI tags: it won’t be easy to move away. This is proprietary lock-in reborn for the Web.

Second, if you use any hosted application platform you lose control. If you find yourself needing some new feature that the platform doesn’t implement, you have to ask nicely and wait in hope, or find some way to implement it using a mashup or Apex code. If you can’t wrest the performance you want from the platform, you can’t upgrade the hardware or introduce a stored procedure: it is what it is. As an example, I’ve heard users here complain that the security system is insufficiently fine-grained; improvements are coming, but they have to wait.

Third, you have to trust Salesforce.com with your data, and trust it to stay available. If you run your business on Salesforce.com, and it goes offline, you may as well all go home. Now, arguably the guys at Salesforce.com will work as hard or harder than your in-house team to keep systems up and running, and in most cases have more resources to work with, but nevertheless, it is a matter of trust.

Fourth, this is mainly a web application platform, though you can build offline or desktop applications using the API. The core user interface is functional rather than attractive, and I saw lots of flashing screens and browser messages saying “waiting for na5.salesforce.com”. Visualforce AJAX components will help; and in practice business users do not care much, provided they get the results they want. Still, it is a point worth noting. Microsoft argues that “software plus services” delivers a better user experience; the rejoinder is that “software plus services” removes key benefits of the software-as-a-service model.

In the end, it comes down to a business case. It should be possible to sit down and calculate whether a move to Salesforce.com for some part of an organization’s IT provision will cost money, or save money. The people I speak to here think it works for them.

Marc Benioff: Google deal is aimed at a common enemy

Here at Dreamforce Europe, I asked Salesforce.com CEO Marc Benioff about the company’s agreement with Google, under which Salesforce becomes an OEM for Google Apps. We saw this demonstrated in the keynote. You can start an email via Gmail from within a Salesforce contact. When sent – provided you click the Salesforce send button and not the Gmail send button – the email is added to the contact history. A similar feature lets you attach a Google document to a Salesforce record.

It’s a useful feature; but long term, will Salesforce.com and Google be competitors rather than partners? It is a natural question, since both companies are promoting their services as a platform for applications. Salesforce has the Apex programming language, while Google has its App Engine. According to Benioff, App Engine is primarily for Python developers, while Salesforce.com is a platform for enterprise applications. This struck me as downplaying Google’s likely ambitions in the enterprise market.

I therefore asked Benioff whether the agreement with Google included any non-compete element, or whether Google might be a future platform competitor. He did not answer my question, but said:

The enemy of my enemy is my friend

The identity of the enemy is unspecified; but given that Benioff used Microsoft .NET as the example of what his platform is supposedly replacing, that Google Docs competes with Microsoft Office, and that Benioff makes constant jibes at the complexity and expense of developing for Windows, I guess we can draw our own conclusions.

For sure, it did little to allay my suspicion that Salesforce.com and Google will not always be as warm towards one another as they are today.

As an aside, there are ironies in Benioff’s characterization of .NET. Microsoft launched .NET as a “platform for web services”, which is exactly what Salesforce.com has become. Microsoft was a key driver behind the standardization and adoption of SOAP, which is the main protocol in the Salesforce.com API.

Popfly Game Creator – programming online with Silverlight

This looks great: Popfly Game Creator.

Interesting on several counts.

First, casual gaming will help get Silverlight runtimes deployed.

Second, it’s Microsoft doing one of the things it does well: opening up programming to a new group. Another example: Microsoft promotes its XNA gaming framework to universities, where it helps entice new students into computer science.

Third, it’s from Adam Nathan, author of the definitive work on .NET interop, .NET and COM. Popfly gaming must be welcome light relief (though I don’t mean to imply that this stuff is easy to do).

Fourth, is online programming – I mean, programming that you actually do online – coming of age?


Live Mesh: Hailstorm take 2?

So says Spolsky, in a rant about both unwanted mega-architectures and the way big companies snaffle up all the best coders.

Is he right? Well, I attended the Hailstorm PDC in 2001 and I still have the book we were given: the .NET My Services Specification. There are definitely parallels, not least in the marketing pitch (from page 3):

.NET My Services will enable the end user to gain access to key information and receive alerts about important events anywhere, on any device, at any time. This technology will put users in total control of their data and make them more productive.

Swap “.NET My Services” for “Live Mesh” and you wouldn’t know the difference.

But is it really the same? Spolsky deliberately intermingles several points in his piece. He says it is the same stuff reheated. One implication is that because Hailstorm failed, Live Mesh will fail. Another point is that Live Mesh is based on synchronization, which he says is not a killer feature. A third point is that the thing is too large and overbearing; it is not based on what anyone wants.

Before going further, I think we should ask ourselves why Hailstorm failed, starting with what some of the people involved think. Here is Mark Lucovsky, chief software architect for Hailstorm and now at Google:

I believe that there are systems out there today that are based in large part on a similar set of core concepts. My feeling is that the various RSS/Atom based systems share these core concepts and are therefore very similar, and more importantly, that a vibrant, open and accessible, developer friendly eco-system is forming around these systems.

Joshua Allen, an engineer still at Microsoft, disagrees:

All of these technologies predate Hailstorm by a long shot.  There is a reason they succeeded where Hailstorm failed.  It’s because Hailstorm failed to adopt their essence; not because they adopted Hailstorm’s essence …. the “principles” Mark’s blog post cites are actually principles of the technologies Hailstorm aimed to replace.

But as Allen shows in the latter part of his post, the technology was incidental; the main reasons Hailstorm failed were political:

  1. Hailstorm intended to be a complete, comprehensive set of APIs and services ala Win32.  Everything — state management, identity, payments, provisioning, transactions — was to be handled by Hailstorm.
  2. Hailstorm was to be based on proprietary, patented schemas developed by a single entity (Microsoft).
  3. All your data belonged to Microsoft.  ISVs could build on top of the platform (after jumping through all sorts of licensing hoops), but we controlled all the access.  If we want to charge for alerts, we charge for alerts.  If we want to charge a fee for payment clearing, we charge a fee.  Once an ISV wrote on top of Hailstorm, they were locked in to our platform.  Unless we licensed a third party to implement the platform as well, kind of like if we licensed Apple to implement Win32.

Hailstorm’s technology was SOAP plus Passport authentication. There were some technical issues. I recall that Passport in those days was suspect. Some smart people worked out that it was not as secure as it should be, and there was a general feeling that it was OK for logging into Hotmail but not something you would want to use for online banking. As for SOAP, it gets a bad rap these days but it can work. That said, these problems were merely incidental compared to the political aspect. Hailstorm failed for lack of industry partners and public trust.

Right, so is Live Mesh any different? It could be. Let me quickly lay out a few differences.

  1. Live Mesh is built on XML feeds, not SOAP messaging (see the sketch after this list). I think that is a better place to start.
  2. Synchronization is a big feature of Mesh that wasn’t in Hailstorm. I don’t agree with Spolsky; I think this is a killer feature, if it works right.
  3. Live Mesh is an application platform, whereas Hailstorm was not. Mesh plus Silverlight strikes me as appealing.
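On the first point, here is a rough idea of what “built on XML feeds” means in practice: as I understand it, Mesh exposes resources as Atom-style feeds carrying FeedSync synchronization metadata, so a client syncs by fetching a feed and merging entries by version. A minimal Python sketch; the feed contents are invented, and the element names follow the published FeedSync draft.

```python
# Parse a FeedSync-annotated Atom feed: each entry carries an <sx:sync>
# element whose id and update count drive the merge during synchronization.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
SX = "{http://feedsync.org/2007/feedsync}"

feed_xml = """<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:sx="http://feedsync.org/2007/feedsync">
  <entry>
    <title>notes.txt</title>
    <sx:sync id="item-1" updates="3">
      <sx:history sequence="3" by="laptop"/>
    </sx:sync>
  </entry>
</feed>"""

for entry in ET.fromstring(feed_xml).iter(ATOM + "entry"):
    sync = entry.find(SX + "sync")
    # Simplified merge rule: the copy with the higher update count wins.
    print(entry.findtext(ATOM + "title"),
          "id =", sync.get("id"),
          "version =", sync.get("updates"))
```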

Still, even if the technology is better, what about the trust aspect? Will Mesh fail for the same reasons?

It is too soon to say. We do not yet know the whole story. In principle, it could be different. Mesh is currently Passport (now Live ID) only. Will it be easy to use alternative authentication providers? If the company listens to its own Kim Cameron, you would think so.

Currently Mesh cloud data resides only on Microsoft’s servers, though it can apparently also do peer-to-peer sync. Will we be able to run Mesh entirely from our own servers? That is not yet known. What about one user having multiple meshes, say one for work, one personal, and one for some other role? Again, I’m not sure if this is possible. If there is only One True Mesh and it lives on Live.com, then some Hailstorm spectres will rise again.

Finally, the world has changed in the last 7 years. Google is feared today in the way that Microsoft was feared in 2001: the entity that wants to have all our information. But Google has softened us up to be more accepting of something like Live Mesh or even Hailstorm. Google already has our search history, perhaps our email, perhaps our online documents, perhaps an index of our local documents. Google already runs on many desktops; Google Checkout has our credit card details. What boundary can Live Mesh cross, that Google has not already crossed?

Hailstorm revisited is an easy jibe, but I’m keeping an open mind.

Role of web video in tech communications

Last week’s Live Mesh announcement was a significant one for Microsoft watchers. It was interesting to note that all the in-depth information came in the form of web video.

Personally I dislike this trend. Video cannot easily be scanned to see what it contains, and it requires audio, which is a nuisance in many settings. It is more work to quote from a video than to copy some text. I resort to playing videos at double speed where possible, to come closer to the speed of reading, and to noting down the times of sections I want to return to.

Some of these problems could be mitigated by better presentation. For example, you could have summary text on the page next to an embedded video, with links to indexed points.

However, I also recognize that I may be in a minority. Video has obvious advantages: it is more informal, and can include real demos as opposed to diagrams and screen grabs.

I am even contemplating trying some video publishing of my own; it is time I reviewed Adobe Visual Communicator.

Even so, I’d suggest that companies take the time to offer transcripts of important video content. Text has advantages too.

Microsoft: Live Mesh or Live Mess? Here’s what to read.

Here’s what I suggest you read to get to grips with Live Mesh:

Amit Mital’s introduction (he’s the General Manager)

Mike Zintel’s Live Mesh as a Platform (he’s Director of Service Infrastructure)

Mary Jo Foley’s Ten things to know and the helpful stack diagram.

I have a few initial comments. First, it’s the platform that matters, not the Live Desktop, which is the first thing Microsoft is delivering and which you will find presented at mesh.com. Microsoft is finally showing us what it means by the “software plus services” idea it has been talking about for so long. It involves a new “Mesh Operating Runtime” which has both cloud and client pieces, a MeshFX API, and an identity system, which is Live ID (formerly Passport).

As far as I can tell, Microsoft is delivering an API which we will be able to use to build internet-based data, documents and configuration into either desktop or web applications, with synchronization to local storage for offline use. Zintel adds:

… customers will ultimately license applications to their mesh, as opposed to an instantiation of Windows, Mac or a mobile account or a web site.  Such applications will be seamlessly installed and run from their mesh and application settings persisted across their mesh

It sounds good, though the obvious question is whether Microsoft is overstating the importance of the client in an attempt to preserve its core market. Do we need this special client piece? Here’s a paragraph from Zintel’s piece that caught my eye:

A key design goal of the Live Mesh data synchronization platform is to allow customers to retain the ownership of their data that is implicit with local storage while improving on the anywhere access appeal of the web. The evolution of the web as a combined experience and storage platform is increasingly forcing customers to choose between the advantages of local storage (privacy, price, performance and applications) and the browser’s implicit promise of data durability, anywhere access and in many cases, easy sharing.

Can Microsoft improve on the “anywhere access appeal of the web”? Zintel says we need to combine it with the advantages of local storage, but the advantages he identifies are not all that convincing. Let’s look at them:

Privacy: maybe, but local data is vulnerable to worms, trojans and viruses; well-secured Internet data accessed over SSL is arguably more secure. Data not connected to the Internet is nice and secure, but can’t participate in the Mesh.

Price: I don’t see how Mesh helps here. Yes, local storage is cheap, but as soon as data enters the Mesh it is on the Internet and we are paying for data transfer as well as possibly Internet storage. I realise that Microsoft (among others) offers generous Internet storage for free, but that is just a way of buying market share.

Performance: Granted, some types of application work faster with local storage. Still, there are non-Mesh ways of getting this from web applications in a fairly seamless manner, such as Google Gears or Adobe’s AIR.

Applications: This is perhaps the big one. Many of us are reluctant to do without traditional local applications such as Office. Well, mainly Office. Still, web equivalents get better all the time. One day they will be good enough; and new technology like Silverlight is bringing that day closer. 

What about identity management and permissions? Zintel says:

A side effect of the competition to store customer data in the cloud and display it in a web browser is the fragmentation of that data and subsequent loss of ownership. Individual sites like Spaces, Flickr and Facebook make sharing easy, provided the people you are sharing with also use the same site. It is in fact very difficult to share across sites and equally difficult to work on the same data across the PC, mobile and web tiers.

True; but Mesh currently identifies users by their Live ID. Isn’t that the same as Spaces?

If Microsoft delivers a bunch of useful web services, that’s great. If it tries to somehow replace the web with its Mesh, it will fail.

Mary Jo Foley also asks the question: to what extent is Microsoft extending, and to what extent is it replacing, existing Live services such as Office Live or the excellent Skydrive? Making sense of all this is a challenge.

Now let’s mash all this up with Yahoo! (maybe). Ouch.

RIA means … not much

Ryan Stewart has a go at nailing what the term Rich Internet Application means.

I think he’s coming at this from the wrong end. It’s better to look at the history.

As far as I’m aware – based partly on my own recollection, and partly on what Adobe’s Kevin Lynch told the press at last year’s MAX Europe – the story begins around 2001, when WebVertising Inc created an online booking application for the Broadmoor Hotel in Colorado Springs. It was an HTML application redone in Flash. A PDF describing the project is still online, and discusses some of the differences between the HTML and Flash approaches, though bear in mind this is Flash evangelism.

They created iHotelier, a fully interactive, data-driven reservation application that reduces the entire reservation process down to a single screen. Users looking for information on available rooms for specific dates highlight their preferred dates in a calendar. With one click of the mouse, the Flash application displays the available (and unavailable) rooms, and their cost. (Figure 10) As a result, users do not feel like they’ve wasted a lot of time and effort if their first room choice is not available.

This case study seemed to trigger a new awareness at Macromedia concerning the potential of Flash for complete applications. I don’t mean that it had never been thought of before; after all, it was Macromedia that put powerful scripting capabilities into Flash, and I’m sure there were Flash projects before this that were applications. Nevertheless, it was a landmark example; and it was around then that I started hearing the term Rich Internet Application from Macromedia. Wikipedia claims that this paper [PDF] is the first use; it’s by Jeremy Allaire and dated March 2002. I’m sure Allaire himself could provide more background.

The problem with the term, as you can see from Allaire’s paper, is that Macromedia (now Adobe) tends to define it as whatever its latest Flash technology happens to be. This shifts around: if you are at an AIR event, it’s AIR; if you are at a Flash event, it’s Flash; if you are at a LiveCycle event, it’s apps that use LiveCycle.

Microsoft muddied the waters a little. Realising that RIAs were attracting attention, it started using the term to describe its own technology too, though in the spirit of “embrace and extend” it changed the expansion to “rich interactive application”. As I recall, Microsoft used it mainly to describe internet-connected desktop applications such as those built with Windows Forms. Something like iTunes is a great example (even though it is from Apple), since it runs on the client but gets much of its data from the Internet, especially when you are in the iTunes store.

Now it remains a buzzword but honestly has little meaning, other than “something a bit richer than plain HTML”. If you were doing the Broadmoor Hotel app today, you could do it with AJAX and get similar results.
