Category Archives: cloud computing

Single sign-on from Active Directory to Windows Azure: big feature, still challenging

Microsoft has posted a white paper setting out what you need to do in order to have users who are signed on to a local Windows domain seamlessly use an Azure-hosted application, without having to sign in again.

I think this is a huge feature. Maintaining a single user directory is more secure and more robust than efforts to synchronise a local directory with a cloud-hosted directory, and this is a point of friction when it comes to adopting services such as Google Apps or Salesforce.com. Single sign-on with federated directory services takes that away. As an application developer, you can write code that looks the same as it would for a locally deployed application, but host it on Azure.

There is also a usability issue. Users hate having to sign in multiple times, and hate it even more if they have to maintain separate username/password combinations for different applications (though we all do).

The white paper explains how to use Active Directory Federation Services (ADFS) and Windows Identity Foundation (WIF, part of the .NET Framework) to achieve both single sign-on and access to user data across local network and cloud.

The snag? It is a complex process. The white paper has a walk-through, though to complete it you also need this guide on setting up ADFS and WIF. There are numerous steps, some of which are not obvious. Did you know that “.NET 4.0 has new behavior that, by default, will cause an error condition on a page request that contains a WS-Federation authentication token”? (The usual workaround is to register a custom request validator that permits the token postback.)

Of course dealing with complexity is part of the job of a developer or system administrator. Then again, complexity also means more to remember and more to troubleshoot, and less incentive to try it out.

One of the reasons I am enthusiastic about Windows Small Business Server Essentials (codename Aurora) is that it promises to do single sign-on to the cloud in a truly user-friendly manner. According to a briefing I had from SBS technical product manager Michael Leworthy, cloud application vendors will supply “cloud integration modules,” connectors that you install into your SBS to get instant single sign-on integration.

SBS Essentials does run ADFS under the covers, but you will not need a 35-page guide to get it working, or so we are promised. I admit, I have not been able to test this feature yet, and aside from Microsoft’s BPOS/Office 365 I do not know how many online applications will support it.

Still, this is the kind of thing that will get single sign-on with Active Directory widely adopted.

Consider Facebook Connect. Register your app with Facebook; write a few lines of JavaScript and PHP; and you can achieve the same results: single sign-on and access to user account information. Facebook knows that to get wide adoption for its identity platform it has to be easy to implement.
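By way of contrast, here is roughly what the browser side of that looks like – a minimal sketch using the Facebook JavaScript SDK, with a placeholder app ID, and omitting SDK loading and error handling:

```typescript
// Minimal sketch of Facebook Connect-style sign-on via the Facebook JavaScript SDK.
// Assumes the SDK script has already been loaded; the app ID is a placeholder.
declare const FB: any; // global object provided by the Facebook SDK

FB.init({ appId: "YOUR_APP_ID", cookie: true });

FB.login((response: any) => {
  if (response.authResponse) {
    // Signed in: fetch basic account information for the current user.
    FB.api("/me", (user: any) => {
      console.log(`Signed in as ${user.name}`);
    });
  }
});
```

A few lines, a registered app ID, and you have both authentication and user data – which is the bar that federated Active Directory sign-on has to clear.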

On Microsoft’s platform, another option is to join your Azure instance to the local domain. This is a feature of Azure Connect, currently in beta.

Are you using ADFS, with Azure or another platform? I would be interested to hear how it is going.

Microsoft inadvertently shares BPOS offline address books with other customers

According to an email I’ve seen, sent to customers of Microsoft BPOS (Business Productivity Online Suite), some users have found their Offline Address Book – an Exchange feature which stores a company’s internal address list – has been downloaded by other BPOS customers:

Microsoft recently became aware that, due to a configuration issue, Offline Address Book information for Business Productivity Online Suite–Standard customers could be inadvertently downloaded by other customers of the service, in a very specific circumstance. The issue was resolved within two hours of identification, and we completed a thorough review of processes to prevent this type of issue from occurring again. Our records indicate that a very small number of downloads actually occurred, and we are working with those few customers to remove the files.

This issue affected only Business Productivity Online Suite–Standard customers; no other Microsoft Online Services were affected.

Big deal? Probably not, especially as customer address lists, which might be useful to competitors, are not normally included in an Offline Address Book.

That said, any leakage of data from one customer to another is a serious issue, as it is exactly this possibility which deters users from using cloud services in the first place. It is an inherent hazard of multi-tenancy.

Still, kudos to Microsoft for owning up.

The Salesforce.com platform play

I’ve been mulling over the various Salesforce.com announcements here at Dreamforce, which taken together attempt to transition Salesforce.com from being a cloud CRM provider to becoming a cloud platform for generic applications. Of course this transition is not new – it began years ago with Force.com and the creation of the Apex language – and it might not be successful; but that is the aim, and this event is a pivotal moment with the announcement of database.com and the Heroku acquisition.

One thing I’ve found interesting is that Salesforce.com sees Microsoft Azure as its main competition in the cloud platform space – even though alternatives such as Google and Amazon are better known in this context. The reason is that Azure is perceived as an enterprise platform whereas Google and Amazon are seen more as commodity platforms. I’m not convinced that there is any technical justification for this view, but I can see that Salesforce.com is reassuringly corporate in its approach, and that customers seem generally satisfied with the support they receive, whereas this is often an issue with other cloud platforms. Salesforce.com is also more expensive of course.

The interesting twist here is that Heroku, which hosts Ruby applications, is more aligned with the Google/Amazon/open source community than with the Salesforce.com corporate culture, and this divide has been a topic of much debate here. Salesforce.com says it wants Heroku to continue running just as it has done, and that it will not interfere with its approach to pricing or the fact that it hosts on Amazon’s servers – though it may add other options. While I am sure this is the intention, the Heroku team is tiny compared to that of its acquirer, and some degree of change is inevitable.

The key thing from the point of view of Salesforce.com is that Heroku remains equally attractive to developers, small or large. While Force.com has not exactly failed, it has not succeeded in attracting the diversity of developers that the company must have hoped for. Note that 75%-80% of Salesforce.com’s revenue still comes from the CRM application, according to a briefing I had yesterday.

What is the benefit to Salesforce.com of hosting thousands of Ruby developers? If they remain on Heroku as it is at the moment, probably not that much – other than the kudos of supporting a cool development platform. But I’m guessing the company anticipates that a proportion of those developers will want to move to the next level, using database.com and taking advantage of its built-in security features which require user accounts on Force.com. Note that features such as row-level security only work if you use the Force.com user directory. Once customers take that step, they have a significant commitment to the platform and integrating with other Salesforce.com services such as Chatter for collaboration becomes easy.

The other angle on this is that the arrival of Heroku and VMForce gives existing Salesforce.com customers the ability to write applications in full Java or Ruby rather than being restricted to tools like Visualforce and the Apex language. Of course they could do this before by using the web services API and hosting applications elsewhere, but now they will be able to do this entirely on the Salesforce.com cloud platform.

That’s how the strategy looks to me; and it will be fascinating to look back a year from now and see how it has played out. While it makes some sense, I am not sure how readily typical Heroku customers will transition to database.com or the Force.com identity platform.

There is another way in which Salesforce.com could win. Heroku knows how to appeal to developers, and in theory has a lot to teach the company about evangelising its platform to a new community.

Salesforce.com acquires Heroku, wants your Enterprise apps

The big news today is that Salesforce.com has agreed to acquire Heroku, a company which hosts Ruby applications using an architecture that enables seamless scalability. Heroku apps run on “dynos”, each of which is a single process running Ruby code on the Heroku “grid” – an abstraction which runs on instances of Amazon EC2 virtual machines. To scale your app, you simply add more dynos.

Why is Salesforce.com acquiring Heroku? Well, for some years an interesting question about Salesforce.com has been how it can escape its cloud CRM niche. The obvious approach is to add further applications, which it has done to some extent with FinancialForce, but it seems the strategy now is to become a platform for custom business applications. We already knew about VMForce, a partnership with VMWare currently in beta that lets you host Java applications that are integrated with Force.com, but it is with the announcements here at Dreamforce that the pieces are falling into place. Database.com for data access and storage; now Heroku for Ruby applications.

These services join several others which Salesforce.com is branding as Force.com 2:

Appforce – in effect the old Force.com, build departmental apps with visual tools and declarative code.

Siteforce – again an existing capability, build web sites on Force.com.

ISVForce – build your own multi-tenant application and sign up customers.

Salesforce.com is thoroughly corporate in its approach and its obvious competition is not so much Google AppEngine or Amazon EC2, but Microsoft Azure: too expensive for casual developers, but with strong Enterprise features.

Identity management is key to this battle. Microsoft’s identity system is Active Directory, with federation between local and cloud directories enabling single sign-on. Salesforce.com has its own user directory and developing on its platform will push you towards using it.

Today’s announcement makes sense of something that puzzled me: why we got a session on node.js at Monday’s Cloudstock event. It was a great session and I wrote it up here. Heroku has been experimenting with node.js support, with considerable success, and says it will introduce a new version next year.

Finally, the Heroku acquisition is great news for Enterprise use of Ruby. Today many potential new developers will be looking at it with interest.

Database.com extends the salesforce.com platform

At Dreamforce today Salesforce.com announced its latest platform venture: Database.com. Salesforce.com is built on an Oracle database with various custom optimizations; and Database.com now exposes this as a generic cloud database which can be accessed from a variety of languages – Java, .NET, Ruby and PHP – and from applications running on almost any platform: VMForce, smartphones, Amazon EC2, Google App Engine, Microsoft Azure, Microsoft Excel, Adobe Flash/Flex and others. One way to use it would be via JPA (Java Persistence API) in a VMForce Java application.

The Database.com console is a web application giving access to your databases and showing useful statistics and system information.

You can also create new databases, specifying the schema and relationships.

The details presented in the keynote today were sketchy – we saw applications that honestly could have been built just as easily with MySQL – but there is more information in the FAQ. The Database.com API is through SOAP or REST web services, not SQL. Third parties can create drivers so you can use it with SQL APIs such as ODBC or JDBC. There is row-level security, and built-in full text search.

According to the FAQ, Database.com “includes a native trigger and stored procedure language”.

Pricing starts from free – for up to 100,000 records, 50,000 transactions and 3 users per month. After that it is $10.00 per month per additional 100,000 records, $10.00 per month per additional 150,000 transactions, and $10.00 per user if you need the built-in authentication and security system – which, as you would expect, is based on the native Force.com identity system.
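To make that concrete, a hypothetical application with 500,000 records, 500,000 transactions a month and 20 authenticated users would, on those numbers, cost $40 per month for the additional records (4 × $10), $30 for the additional transactions (3 × $10) and $200 for the users (20 × $10): $270 per month in all.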

As far as I can tell one of the goals of Database.com – and also the forthcoming chatter.com free public collaboration service – is to draw users towards the force.com platform.

Roger Jennings has analysed the pricing and reckons that Database.com is much more expensive than Microsoft’s SQL Azure – $15,000 per month for 500 users and a 50GB database on Database.com, versus a little over $500 for the same thing on SQL Azure – though the two are difficult to compare directly and he has had to make a number of assumptions. Responding to a question at the press and analyst Q&A today, Salesforce.com CEO Marc Benioff seemed to accept that the pricing is relatively high, but justified in his view by the range of services on offer. Of course the pricing could change if it proves uncompetitive.

Unlike SQL Azure, Database.com starts from free, which is a great attraction for developers interested in giving it a try. Trying out Azure is risky because if you leave a service running inadvertently you may run up a big bill.

In practice SQL Azure is likely to be more attractive than Database.com for its core market, existing Microsoft-platform developers. Microsoft experimented with a web services API for SQL Server Data Services in Azure, but ended up offering full SQL, enabling developers to continue working in familiar ways.

Equally, Force.com developers will like Database.com and its integration with the force.com platform.

Some of what Database.com can do is already available through force.com and I am not sure how the pricing looks for organizations that are already big salesforce.com users; I hope to find out more soon.

What is interesting here is the way Salesforce.com is making its platform more generic. There will be more Force.com announcements tomorrow and I expect to see further efforts to broaden the platform then.

Update – I had a chat with Database.com General Manager Igor Tsyganskiy. He says Microsoft’s SQL Azure is the closest competitor to Database.com but argues that because Salesforce.com is extending its platform in an organic way it will do a better job than Microsoft which has built a cloud platform from scratch. We did not address the pricing comparison directly, but Tsyganskiy says that existing Force.com customers always have the option to “talk to their Account Executive” so there could be flexibility.

Since Database.com is in one sense the same as Force.com, the API is similar. The underlying query language is SOQL – the Salesforce Object Query Language – which is based on SQL SELECT, though with limitations. The language for stored procedures and triggers is Apex. SQL drivers from Progress Software are intended to address the demand for SQL access.
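For a rough idea of what that looks like from the client side, here is a hypothetical sketch of running a SOQL query over a Force.com-style REST endpoint; the instance URL and API version are assumptions, and acquiring the OAuth access token is omitted:

```typescript
// Hypothetical sketch: run a SOQL query against a Force.com-style REST endpoint.
// SOQL resembles a SQL SELECT, but there is no SELECT *, and joins are written
// as relationship traversals (Account.Name) rather than JOIN clauses.
const instanceUrl = "https://na1.salesforce.com"; // placeholder instance
const soql = "SELECT Name, Account.Name FROM Contact WHERE MailingCity = 'London'";

async function runQuery(accessToken: string): Promise<void> {
  const res = await fetch(
    `${instanceUrl}/services/data/v20.0/query?q=${encodeURIComponent(soql)}`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  if (!res.ok) throw new Error(`Query failed: ${res.status}`);
  const data = await res.json();
  for (const record of data.records) {
    console.log(record.Name, record.Account?.Name ?? "(no account)");
  }
}
```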

I mentioned that Microsoft came under pressure to replace its web services API for SQL Server Data Services with full SQL – might Database.com face similar pressure? We’ll see, said Tsyganskiy. The case is not entirely parallel. SQL Azure is a cloud implementation of an existing SQL database with which developers are familiar. Database.com on the other hand abstracts the underlying data store – although Salesforce.com is an Oracle customer, Tsyganskiy said that the platform stores data in a variety of ways, so it should not be thought of as a wrapper for an Oracle database server.

Although Database.com is designed to be used from anywhere, I’d guess that Java running on VMForce with JPA, and following today’s announcement Heroku apps also hosted by Salesforce.com, will be the most common scenarios for complex applications.

Google App Engine and why vendor honesty pays

I’ve just attended a Cloudstock session on Google App Engine and new Google platform technologies – an introductory talk by Google’s Christian Schalk.

App Engine has been a subject of considerable debate recently, thanks to a blog post by Carlos Ble called Goodbye App Engine:

Choosing GAE as the platform four our project is a mistake which cost I estimate in about 15000€. Considering it’s been my money, it is a "bit" painful.

Ble’s point is that App Engine has many limitations. Since Google tends not to highlight these in its marketing, Ble discovered them as he went along, causing frustration and costly workarounds. In addition, the platform has not proved reliable:

Once you overcome all the limitations with your complex code, you are supposed to gain scalabilty for millions of users. After all, you are hosted by Google. This is the last big lie.

Since the last update they did in september 2010, we starting facing random 500 error codes that some days got the site down 60% of the time.

Ble has now partially retracted his post.

I am rewriting this post is because Patrick Chanezon (from Google), has added a kind and respectful comment to this post. Given the huge amount of traffic this post has generated (never expected nor wanted) I don’t want to damage the GAE project which can be a great platform in the future.

He is still not exactly positive, and adds:

I also don’t want to try Azure. The more experience I gain, the less I trust platforms/frameworks which code I can’t see.

Ble’s post is honest, but many of the issues are avoidable, and arguably his main error was not to research the platform more thoroughly before diving in. He blames the platform for issues that in some cases are implementation mistakes.

Still, here at Cloudstock I was interested to see if Schalk was going to mention any of these limitations or respond to Ble’s widely-read post. The answer is no – I got the impression that anything you can do in Java or Python, you can do on App Engine, with unlimited scalability thrown in.

My view is that it pays vendors to explain the “why not” as well as the “why” of using their platform. Otherwise there is a risk of disillusionment, and disillusioned customers are hard to win back.

One day of hacks, REST and cloud: Salesforce.com Cloudstock

I’m in San Francisco for the annual Salesforce.com conference, where the pre-conference day is called Cloudstock and features a bunch of sessions on cloud development from vendors whom Salesforce.com considers more partners than competitors, and from Salesforce.com itself, along with a hackathon competition where you build an instant cloud app.

Why Cloudstock? The parallels with Woodstock’s peace, love and music are obscure, but I think the idea is the revolution of the cloud vs the revolution of free love, or something. Presumably nothing to do with mud, getting high or sneaking in without paying.

I’m guessing that the PR goal is to position Salesforce.com at the heart of cloud computing. Good PR, but there are many other ways to do cloud.

I’m in a session on Google App Engine and new Google platform technologies – an introductory talk by Christian Schalk. More on that in this separate post.

Wrestling with Microsoft SharePoint

I do not know what to think about Microsoft SharePoint. It is kind-of inevitable if you live on the Microsoft platform. At some point the question comes up: how do we get to our documents over the internet? VPN works, but it is awkward – you have to open a VPN tunnel first, and arguably it opens up too much from a security perspective – whereas direct internet access is easier and works from any device. Microsoft also has DirectAccess, which lets you connect to network shares without a VPN; I have not tried it, though it requires Windows 7, which is a serious restriction.

In any case, SharePoint does so much more: search, blogs and wikis, company portal, business intelligence, a platform for document workflow applications, Office Web Apps for in-browser editing of Office documents, alerts when documents are added or changed, and near-infinite possibilities for customization, to mention a few features.

The downside is that SharePoint is intricate, sometimes slow, and complex to administer.

I run a miniature Microsoft-platform setup for test and development. The servers are on Hyper-V virtual machines, and include Exchange 2007 (2010 upgrade pending), SharePoint 2010, and ISA Server 2006. I have been trying out SharePoint for some time, and once I figured out how to map a drive reliably I have enjoyed using it as a document repository. I have also tried SharePoint Workspace 2010 on a netbook, which is useful for offline work, though the user interface needs attention and it has an annoying limit on the number of items it will download.

The problem I have had though is that the internal URL (eg http://sharepoint) was different from the external URL (eg https://sharepoint.mydomain.com). In consequence, it did not work smoothly on a laptop or netbook that is not always on the internal network.

I spent some time fixing this. I am not sure that I found the best solution, but I found one that works:

I extended the SharePoint application to a second web site for SSL access.

I set up split DNS so that sharepoint.mydomain.com resolves to the internal server on the local network, and to ISA on the internet.

I set up a new listener in ISA for the external URL, and a new policy that redirects to the internal server using SSL throughout; I turned off link translation, removed all the paths except for /*, and set NTLM authentication.

Finally, I set Alternate Access Mappings for the new external URL which, thanks to split DNS, also works internally.

If that sounds like jargon, welcome to the world of SharePoint administration. Few things are intuitive or straightforward; which is why, for example, you can find a three-part series by Troy Starr on What every SharePoint administrator needs to know about Alternate Access Mappings.

Another thing that is obvious to SharePoint admins but not to occasional visitors: when you patch SharePoint with an official update, it is not enough to run the patch. You also have to update the configuration, either with psconfig (psconfig -cmd upgrade -inplace b2b) or with the configuration wizard, so that the metadata is updated.

And after all that, in my very simple SharePoint setup, I still have warnings in Health Analyzer about missing server side dependencies for WebPart class [8d6034c4-a416-e535-281a-6b714894e1aa] – I am not sure why the Analyzer cannot look up the GUID somewhere and present something more meaningful, nor why the “More information about this rule” link in the analyzer takes you to the SharePoint Server home page rather than anywhere useful.

That said, I am pleased with my setup. It makes getting at documents when out and about very easy, particularly with mapped drive integration. I also found several iPhone clients – Steve McDonnell has a round-up – and installed the free Moprise to get started. Performance is rather good; and while it has little advantage over Dropbox for an individual, in a corporate environment it makes sense.

My immediate conclusion is that specialising in SharePoint consultancy and administration is probably a smart move if you have the requisite skills; but that tinkering with SharePoint is something non-specialists are unlikely to enjoy. Shifting the administrative burden to Microsoft by using its hosted SharePoint is attractive, as is using an alternative collaboration and document platform such as Google Apps, though the two platforms are not very alike.

HTML 5 Canvas: the only plugin you need?

The answer is no, of course. And Canvas is not a plugin. That said, here is an interesting proof of concept blog and video from Alexander Larsson: a GTK3 application running in Firefox without any plugin.

GTK is an open source cross-platform GUI framework written in C but with bindings to other languages including Python and C#.

So how does C native code run in the browser without a plugin? The answer is that the HTML 5 Canvas element, already widely implemented and coming to Internet Explorer in version 9, has a rich drawing API that goes right down to pixel manipulation if you need it. In Larsson’s example, the native code is actually running on a remote server. His code receives the latest image of the application from the server and transmits mouse and keyboard operations back, creating the illusion that the application is running in the browser. The client only needs to know what has changed in the image, so although sending screen images sounds heavyweight, it is amenable to optimisation and compression.
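A minimal sketch of what the client side of such a scheme might look like, assuming an invented message format in which the server sends dirty-rectangle updates over a WebSocket:

```typescript
// Sketch only: the wire protocol and URL here are invented for illustration.
const canvas = document.getElementById("app") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
const socket = new WebSocket("wss://example.com/session"); // placeholder URL

socket.onmessage = (event: MessageEvent) => {
  // Each message describes one changed region of the remote framebuffer.
  const update = JSON.parse(event.data) as { x: number; y: number; png: string };
  const img = new Image();
  img.onload = () => ctx.drawImage(img, update.x, update.y);
  img.src = "data:image/png;base64," + update.png;
};

// Forward input back to the server; a full client would also handle
// keyboard, mouse wheel and drag events.
canvas.addEventListener("mousedown", (e: MouseEvent) => {
  socket.send(JSON.stringify({ type: "mousedown", x: e.offsetX, y: e.offsetY }));
});
```

Because only the changed rectangles cross the wire, the bandwidth cost can be far lower than re-sending the whole frame.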

It is the same concept as Windows remote desktop and terminal services, or remote access using VNC, but translated to a browser application that requires no additional client or setup.

There are downsides to this approach. First, it puts a heavy burden on the server, which is executing the application code as well as supplying the images, especially when there are many simultaneous users. Second, there are tricky issues when the user expects the application to interact with the local machine, such as playing sounds, copying to the clipboard or printing: everything is an image, not character-by-character text, for example. Third, it is not well suited to graphics that change rapidly, as in a game with fast-paced action.

On the other hand, it solves an immense problem: getting your application running on platforms which do not support the runtime you are using – native code, Flash and Silverlight on Apple’s iPad and iPhone, for example. I recall seeing a proof of concept for Flash at an Adobe MAX conference (not the most recent one) as part of the company’s research on how to break into Apple’s walled garden.

It is not as good as a true local application in most cases, but it is better than nothing.

Now, if Microsoft were to do something like this for Silverlight, enabling users to run Silverlight apps on their Apple and Linux devices, I suspect attitudes to the viability of Silverlight in the browser would change considerably.

How will online services impact Microsoft’s partner business?

2010 is the year Microsoft got serious about cloud services. Windows Azure opened for real business in November 2009 – OK, just before 2010 – and CEO Steve Ballmer took to telling the world how Microsoft is “all in” for cloud computing whenever he got up to speak. Office and SharePoint 2010 launched in May 2010, complete with the ability to create and edit Office documents from a web browser. Microsoft also announced Office 365, essentially an upgrade of its existing BPOS suite, offering hosted Exchange, SharePoint and Lync (Office Communicator). And it announced Small Business Server 2011, including an Essentials edition, formerly codenamed “Aurora”, which is little more than Windows Home Server plus Active Directory and points small businesses towards cloud services for email and document collaboration.

I’d guess that Microsoft’s cloud conversion is driven in part by the progress Google, Salesforce.com and others have made in persuading businesses that hosted internet services make more sense than maintaining your own servers and server applications in many cases.

But what is the impact on Microsoft partners, who have been kept busy supplying and configuring servers, implementing backup, keeping systems running, and then upgrading them as they become obsolete? On the face of it they have less to do in a hosted world, and although Microsoft offers commission on the sale of online subscriptions, that might not compensate for lost business.

Then again, cloud services offer new opportunities, still need configuring, and look likely to be a source of new business for partners particularly at a time when the majority of businesses have not yet made the transition.

I’m researching a further piece on the subject and would love to hear honest views from partners such as resellers and solution providers about how Microsoft’s online services are affecting partner business now and in the future. Or maybe you think this cloud thing is overdone and it will be business as usual for a while yet. You can contact me by email – tim(at)itwriting.com – or of course comment below.