Tag Archives: cloud computing

Interview: Salesforce.com exec Parker Harris on the technology behind the platform

A couple of months ago, I attended Cloudforce in London, where I spoke to Salesforce.com co-founder Parker Harris, Executive Vice President, Technology.

The Salesforce.com platform is interesting for all sorts of reasons. The company has been a powerful advocate of cloud computing from before its adoption by other industry giants, and its service is as far as I can tell well liked by its customers. The technology is interesting as well. The CRM (Customer Relationship Management) application that is the core of the Salesforce.com offering is both a web application and a web service API, with more transactions conducted programmatically than through the browser. The Force.com platform is multi-tenanted, and the company believes that only this type of platform delivers the full benefits of cloud computing. In December 2010 the company acquired Heroku, at the time a company dedicated to running Ruby applications in the cloud, though it now has a broader offering. Salesforce.com was also where I first heard the term “Social Enterprise”, a phrase now taken up by others such as IBM and Microsoft; in fact, Social Enterprise was the dominant theme at Cloudforce.


I asked Harris how the company copes with scaling its platform as demand grows.

“We’ve gone through a few changes. We came up years ago with a concept we call a ‘pod’, which is one set of multi-tenant customers. There’s a database in there, there’s application servers, search servers, all kinds of things. So that was a unit of scale. And we invested heavily in capacity planning, just to watch and see how those grow, and so we fill them up to a certain size and then we grow horizontally and add more.”

Harris adds that pods are organised in sets called super-pods, which provides isolation in the event of a failure.

One of the goals is to make the system scale horizontally, whereas the Oracle database on which the platform is built tends to be a vertical unit of scale.

“We’re doing a lot of research work in looking at horizontal scalable systems like Hbase for example, to look at how can we do more horizontal stores. And then finally we’re in the middle of a process of moving a lot of our processing to completely asynchronous handling, so that as transactions come in they’re being handled in an asynchronous way. Even though the consumer feels like it’s synchronous, the back end will be completely asynchronous, which will then give us even more control and capability to scale.”
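
As an illustration of the pattern Harris describes (not Salesforce’s actual implementation), here is a minimal Python sketch in which requests are queued and processed asynchronously by a worker, while each caller blocks on its result, so the API still feels synchronous:

```python
import queue
import threading

# Illustrative only: requests go onto a queue and are handled by an
# asynchronous worker, but each caller waits on an event, so the call
# still feels synchronous to the consumer.
jobs = queue.Queue()

def worker():
    while True:
        payload, result = jobs.get()       # take the next queued transaction
        result["value"] = payload.upper()  # stand-in for the real processing
        result["done"].set()               # wake the waiting caller
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_request(payload):
    """Looks synchronous to the consumer; work happens on the async back end."""
    result = {"done": threading.Event()}
    jobs.put((payload, result))
    result["done"].wait()                  # caller blocks until the worker finishes
    return result["value"]

print(handle_request("new lead"))          # -> "NEW LEAD"
```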

In the past I have thought of the Force.com platform as, in effect, a big database. I asked Harris if that is still the case.

“We have big databases, but I think of it more as a meta-data platform. Behind that meta-data we do have a big database, we also have a big file store, we have a big search engine, which is another unit. That’s really what you know as Salesforce.com. And that’s fully multi-tenant, it’s way at the extreme end of multi-tenancy.

“Heroku is really virtualised single tenancy. So it’s more towards an infrastructure as a service layer. For some workloads and some use cases that’s fine. If you want the predictability of ‘the software I wrote on my computer needs to run exactly the same here’, that’s where that takes place. But you don’t have the higher-level capabilities of Force.com, you don’t have everything working out of the box.

“Where we’re headed is, you’ll have the 4GL of Force.com, you’re going to have the 3GL of Heroku, and you’ll be able to choose.”

At one time Salesforce had a partnership with VMware called VMforce. What happened to that?

“We don’t have a technical partnership with VMware, but we do have a close relationship with them. We want to have applications that work in VMware’s cloud able to move into Heroku or vice versa. Really what happened is, we started the relationship with VMware and then we discovered Heroku. We were just blown away by the technology there and the team, and that was when we ended up acquiring Heroku, but we still see VMware as a huge partner. We’re a big customer of VMware, they’re a big customer of ours.”

You talked at Cloudforce about HTML mobile apps versus native apps, and you seem to be more on the HTML side?

“I was making the point of HTML versus native code. This is an argument that goes on inside of Salesforce, it is a religious debate, but I believe that you can build some extremely rich applications with native code, and you can build them very quickly. The complexity for me happens when you introduce business systems that are on an enterprise platform. If you think about our metadata engine, it’s very powerful, and I can create triggers, I can change the UI, I can do all these magic things, and immediately it works. Change the UI into Japanese, it immediately works.

“If I’m building something 100% native, I have to replicate that platform in the native code, and that’s for me a no-win situation, because this is going to keep moving, and the native code will be behind, it will be playing catch-up. So if it’s for an enterprise business system on a platform like ours that’s metadata driven, you really need to have that be directly from the cloud.

“If you think about the Chatter feed or the business forms, that I believe needs to be HTML5. But I want to use the camera, I want to have files local to the device, that I can open, and various programs, I want location to work, and maybe I want to use the map application: so there are some native assets too. And then when you think of security, I don’t want to have to go to a web page and enter my username and password, and maybe have to VPN in, I don’t want to do that every time I use the application.

“So instead what you do is have a native container that wraps all this HTML5 code, gives you access to things like the camera, but also is a way to store an Oauth token, it now has the right to access my data, and from then on I can just use the application. It’s really high usability.

“I also think we still have a sometimes-disconnected world. That’s frustrating. So I do think there should be a cache, a recent cache of what you’re working on. That’s my view of where we should be going with these disconnected use cases.

“So that’s where we’re going, and we’re open sourcing a lot of this. We have a native container and we’re open sourcing it so people can see how we built it and add to it. We’ve got a UI framework that we’re going to be open sourcing. It’s an HTML5 component framework. Touch, that we announced last year and is going beta now, Touch is built with that framework, and that’s the framework that we will eventually open source so people can use it with their own applications.”


Do you use PhoneGap/Cordova for the native wrapper?

“We do. We do a lot on WebKit-enabled browsers, and when you think of the native container, it’s really going through PhoneGap to get to some of the local resources.”

What about your SOAP and REST APIs?

“We have backward compatibility for all of our APIs, because we don’t want to force people to rewrite their code. We do see lots of use cases for SOAP, but more and more people are doing RESTful interfaces, and when they’re writing these mobile applications they really want REST APIs, so we’re building a lot more of our own.”
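
As an illustration of why mobile developers favour the REST side, here is a minimal query sketch in Python; the instance URL, token and API version are placeholders, and in practice you obtain the token via Salesforce’s OAuth 2.0 flow:

```python
import requests

# Hypothetical instance URL and token; both come from the OAuth 2.0 flow.
INSTANCE = "https://na1.salesforce.com"
TOKEN = "00D...XYZ"  # placeholder OAuth access token

# Run a SOQL query over the REST API; the API version here is illustrative.
resp = requests.get(
    INSTANCE + "/services/data/v24.0/query",
    headers={"Authorization": "Bearer " + TOKEN},
    params={"q": "SELECT Id, Name FROM Account LIMIT 5"},
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["Id"], record["Name"])
```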

How has Heroku evolved since you made that acquisition?

“There are a couple of huge events. One is, they were just a Ruby platform when we acquired them, and now they are a polyglot platform, so they can run many different languages. They’ve also shown how they’ve built support for these other languages, so other people are able to write support for other languages.

“The other big change was when Facebook on their developer page said, if you want to build a Facebook page or application, here’s how to do it, and if you want it hosted in the cloud, use Heroku. There may be others by now, but we saw a massive uptake in the number of applications being created. We’ve seen a huge number of applications grow from when we acquired it to now, with a big spike through that Facebook relationship.

“We have Heroku enterprise bundles that are available now. People are buying Heroku as part of a SELA, or a Social Enterprise License Agreement.

“Developer adoption is still very important. We want to monetise it, but we don’t want to lose this huge asset of the appeal to the developer community. It’s still early for Heroku.”

I also spoke to Harris about the Social Enterprise concept, but will post that separately.

Moving a database from on-premise SQL Server to SQL Azure: some hassle

I am impressed with the new Windows Azure platform, but when I moved a simple app from my local machine to Azure I had some hassle copying the SQL Server database.

The good news is that you can connect to SQL Azure using SQL Server Management Studio. You need to do two things. First, check the server location and username. You should already know the password, which you set when the database was created. You can get this information by going to the Azure portal, selecting the database, and clicking Show connection strings on the dashboard.

Second, open the SQL firewall for the IP number of your client. There is a link for this in the same connection string dialog.

Now you can connect in SQL Server Management Studio. However, you have limited access compared to what you get as an admin on your local SQL Server.
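
The same details work with any SQL Server client, not just Management Studio. Here is a minimal sketch using Python and pyodbc; the server, database and credentials are placeholders for the values shown in the portal, and the driver name will depend on what is installed on your machine:

```python
import pyodbc

# Values below are placeholders; copy the real ones from the Azure portal's
# "Show connection strings" dialog. Note the user@server form SQL Azure expects.
conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=tcp:myserver.database.windows.net,1433;"
    "DATABASE=mydb;"
    "UID=myuser@myserver;"
    "PWD=mypassword;"
    "Encrypt=yes;"
)
print(conn.execute("SELECT @@VERSION").fetchone()[0])
```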

Here is the Tasks menu for an on-premise SQL Server database:

[screenshot: on-premise Tasks menu]

and here it is for a SQL Azure database:

[screenshot: SQL Azure Tasks menu]

Still, you can start Export Data or Copy Database from your on-premise connection and enter the Azure connection as the target. However, you should create the destination table first, since the Export Data wizard will not recreate indexes. In fact, SQL Azure will reject data imported into a table without at least one clustered index.

I tried to script a table definition and then run it against the SQL Azure database. You can generate the script from the Script Table as menu.


However, even the simplest table will fail. I got:

Msg 40514, Level 16, State 1, Line 2
‘Filegroup reference and partitioning scheme’ is not supported in this version of SQL Server.

when attempting to run the script on SQL Azure.

The explanation is here.

Windows Azure SQL Database supports a subset of the Transact-SQL language. You must modify the generated script to only include supported Transact-SQL statements before you deploy the database to SQL Database.
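
To illustrate the kind of edit this means, here is a made-up table as SSMS scripts it for on-premise SQL Server, and the hand-edited SQL Azure version (a sketch only; the table is hypothetical):

```python
# Illustrative only: the same (made-up) table, scripted for on-premise SQL
# Server and then edited by hand so SQL Azure will accept it.
ON_PREMISE_DDL = """\
CREATE TABLE [dbo].[Orders](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [Customer] [nvarchar](100) NULL,
    CONSTRAINT [PK_Orders] PRIMARY KEY CLUSTERED ([Id] ASC)
) ON [PRIMARY]"""
# ^ the trailing filegroup reference is what triggers Msg 40514

# Edited version: filegroup clause removed, clustered primary key kept
# (SQL Azure requires a clustered index before you can import data).
SQL_AZURE_DDL = ON_PREMISE_DDL.replace(" ON [PRIMARY]", "")
print(SQL_AZURE_DDL)
```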

Fortunately there is an easier way. Right-click the table and choose Generate Scripts. In the wizard, click the Advanced button for Set Scripting Options.


Find Script for the database engine type and choose SQL Azure.


You may want to change some of the other options too. This generates a SQL script that works with SQL Azure. Then I was able to use the Export Data wizard with the new table as the target. If you use identity columns, don’t forget to tick Enable identity insert in Edit Mappings.
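
If you ever copy rows by hand rather than through the wizard, the equivalent (as I understand it) of the Enable identity insert option is wrapping the insert in SET IDENTITY_INSERT. A sketch, reusing the hypothetical table and connection from the snippets above:

```python
# Manual equivalent of the wizard's "Enable identity insert" option, reusing
# the hypothetical Orders table and pyodbc connection from the sketches above.
cursor = conn.cursor()
cursor.execute("SET IDENTITY_INSERT [dbo].[Orders] ON")
cursor.execute(
    "INSERT INTO [dbo].[Orders] ([Id], [Customer]) VALUES (?, ?)",
    1, "Acme Ltd",
)
cursor.execute("SET IDENTITY_INSERT [dbo].[Orders] OFF")
conn.commit()
```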


Amazon web service APIs: a kind of cloud standard?

I am at the Cloud Computing World Forum in London where one of the highlights was a keynote yesterday from Amazon CTO Werner Vogels. Amazon, oddly enough, does not have a stand here; yet the company dominates the IAAS (Infrastructure as a service) market and has moved beyond that into more PAAS (Platform as a service) type services.


Vogels said that the reason for Amazon’s innovation in web services was its low margin business model. This was why, he said, none of the established enterprise software companies had been able to innovate in the same way.

He added that Amazon did not try to lock its customers in:

If this doesn’t work for you, you should be able to walk away. You should be in charge, not the enterprise software company.

Sounds good; but Amazon has its own web services API and if you build on it there is an element of lock-in. Or is there? I was intrigued by a remark made by Huawei’s Head of Enterprise R&D John Roese at a recent cloud computing seminar:

We think there is an imperative in the industry to settle on standardised interfaces. There should be no more of this rubbish where people think they can differentiate based on proprietary interfaces in the cloud. A lot of suppliers are not very interested in this because they lose the stickiness of the solution, but we will not see massive cloud adoption without that portability. 

But what are these standardised interfaces? Roese said that Huawei uses Amazon APIs. For example, the Huawei Cloud Storage Engine:

The CSE boasts a high degree of openness. It supports S3-like interfaces and exposes the internal storage service enabler to 3rd party applications.

There is also Eucalyptus, open source software for private clouds that uses Amazon APIs. Is the Amazon web services API becoming a de-facto standard, and what does Amazon itself think about third parties adopting its API?

I asked Vogels, whose response was not encouraging for the likes of Huawei treating it as a standard. The question I put: is the adoption of Amazon’s API by third parties influencing his company in its maintenance and evolution of those APIs?

It is not influencing us. It is influencing them. Who is adopting our APIs? We licensed Eucalyptus. People ask us about standardisation. I’d rather focus on innovation. If others adopt our APIs, I don’t know, we’d rather focus on innovation.

The lack of interest in standardisation does undermine Vogels’ comments about the freedom of its customers to walk away, though lock-in is not so bad if your use of public cloud is primarily at the IAAS level (though you may be locked into the applications that run on it).

Another mitigating factor is that third parties can wrap cloud management APIs to make one cloud look like another, even if the underlying API is different. Flexiant, which offers cloud orchestration software, told me that it can do this successfully with, for example, Amazon’s API and that of Microsoft Azure. Perhaps, then, standardisation of cloud APIs matters less than it first appears.
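
To illustrate the wrapping idea, here is a minimal Python sketch; the class and method names are hypothetical, not Flexiant’s code or either vendor’s actual API:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of API wrapping: callers see one interface, and the
# provider-specific calls live behind it. All names here are made up.
class CloudProvider(ABC):
    @abstractmethod
    def start_instance(self, image_id: str) -> str: ...

class AmazonProvider(CloudProvider):
    def start_instance(self, image_id: str) -> str:
        # a real implementation would call EC2's RunInstances here
        return f"ec2-instance-for-{image_id}"

class AzureProvider(CloudProvider):
    def start_instance(self, image_id: str) -> str:
        # a real implementation would call the Azure management API here
        return f"azure-role-for-{image_id}"

def deploy(provider: CloudProvider, image_id: str) -> str:
    # callers depend only on the wrapper, so the underlying cloud can change
    return provider.start_instance(image_id)

print(deploy(AmazonProvider(), "web-tier"))
```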

Windows Azure gets new hybrid cloud, infrastructure as a service features

Microsoft has announced new preview features in Windows Azure, its cloud computing platform, which introduce infrastructure as a service features as well as improving its support for hybrid public/private clouds.

The best summary is in the downloadable Fact Sheet (Word document). One key piece is that virtual machines (VMs) can now be persistent. Previously Azure VMs were all conceptually stateless: you could save data within them, but it could be wiped at any time, since Azure might replace the VM with the original image you created when it was first deployed.

Supported operating systems are Windows Server 2008 R2, Windows Server 2012 RC, and four flavours of Linux.

One of the features mentioned by VP Bill Laing is the ability to migrate VHDs, the virtual hard drives used by Hyper-V and Azure, between on-premise and Azure:

Virtual Machines give you application mobility, allowing you to move your virtual hard disks (VHDs) back and forth between on-premises and the cloud. Migrate existing workloads such as Microsoft SQL Server or Microsoft SharePoint to the cloud, bring your own customized Windows Server or Linux images, or select from a gallery.

Windows Azure Virtual Network lets you extend your on-premise network to Azure over a VPN. This sounds like the existing feature called Windows Azure Connect, first announced at the end of 2010; since Azure Virtual Network is also a preview, this seems to be taking a long time to come to full release.

Windows Azure Web Sites is a new web hosting offering. Previously, Azure was focused on web applications rather than generic web sites. This new offering is aimed at any website, running on Internet Information Server 7 with supported frameworks including ASP.NET, PHP and Node.js. MySQL is supported as well as Microsoft’s own Azure SQL Database based on SQL Server.

There is also a new management portal in preview. Azure SQL Reporting is out of preview, and there are also new services for media, caching, and geo-redundant storage.

My guess is that Microsoft-platform customers will welcome these changes, which make Azure a more familiar platform and one that will integrate more seamlessly with existing on-premise deployments. No doubt Microsoft also hopes to compete more effectively with Amazon’s EC2 (Elastic Compute Cloud).

One thing I would like to know more about is Azure’s elasticity, the ability to scale on demand. Laing describes Azure Web Sites as a “highly elastic solution,” and the Fact Sheet mentions:

the ability to scale up as needed with reserved instances.

It is not clear though whether Microsoft is offering built-in load balancing and scaling on demand, an area where both Azure and System Center 2012 (Microsoft’s private cloud management suite) are relatively weak. That is, scaling is possible but complex to automate.

The pros and cons of NVIDIA’s cloud GPU

Yesterday NVIDIA announced the Geforce GRID, a cloud GPU service, here at the GPU Technology Conference in San Jose.

The Geforce GRID is server-side software that takes advantage of new features in the “Kepler” wave of NVIDIA GPUs, such as GPU virtualisation, which enables the GPU to support multiple sessions, and an on-board encoder that lets the GPU render to an H.264 stream rather than to a display.

The result is a system that lets you play games on any device that supports H.264 video, provided you can also run a lightweight client to handle gaming input. Since the rendering is done on the server, you can play hardware-accelerated PC games on ARM tablets such as the Apple iPad or Samsung Galaxy Tab, or on a TV with a set-top box such as Apple TV, Google TV, or with a built-in client.

It is an impressive system, but what are the limitations, and how does it compare to the existing OnLive system which has been doing something similar for a few years? I attended a briefing with NVIDIA’s Phil Eisler, General Manager for Cloud Gaming & 3D Vision, and got a chance to put some questions.

The key problem is latency. Games become unplayable if there is too much lag between when you perform an action and when it registers on the screen. Here is NVIDIA’s slide:

[NVIDIA slide: cloud gaming latency breakdown]

This looks good: just 120-150ms latency. But note that cloud in the middle: 30ms is realistic if the servers are close by, but what if they are not? The demo here at GTC in yesterday’s keynote was done using servers that are around 10 miles away, but there will not be a GeForce GRID server within 10 miles of every user.
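
As a rough sanity check, here is the arithmetic in a short Python sketch. Only the 30ms network figure and the 120-150ms total come from NVIDIA’s slide; the split between the other components is my assumption:

```python
# Rough latency arithmetic. The ~30ms network figure and the 120-150ms
# total are NVIDIA's; the 45ms server and client figures are my assumption.
server_ms = 45   # game render + H.264 encode on the GRID server (assumed)
client_ms = 45   # decode + display on the client device (assumed)

for network_ms in (30, 60, 100):          # nearby server vs more distant ones
    total = server_ms + network_ms + client_ms
    print(f"network {network_ms:>3} ms -> total {total} ms")

# network  30 ms -> total 120 ms   (within NVIDIA's 120-150ms claim)
# network 100 ms -> total 190 ms   (lag most players would notice)
```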

According to Eisler, the key thing is not so much the distance, as the number of hops the IP traffic passes through. The absolute distance is less important than being close to an Internet backbone.

The problem is real though, and existing cloud gaming providers like OnLive and Gaikai install servers close to major conurbations in order to address this. In other words, it pays to have many small GPU clouds dotted around rather than a few large installations.

The implication is that hosting cloud gaming is expensive to set up if you want to reach a large number of users, and that high-quality coverage will always be limited, with city dwellers favoured over rural communities, for example. The actual breadth of coverage will depend on the hoster’s infrastructure, the user’s broadband provider, and so on.

It would make sense for broadband operators to partner with cloud gaming providers, or to become cloud gaming providers, since they are in the best position to optimise performance.

Another question: how much work is involved in porting a game to run on Geforce GRID? Not much, Eisler said; it is mainly a matter of tweaking the game’s control panel options for display and adapting the input to suit the service. He suggested 2-3 days to adapt a PC game.

What about the comparison with OnLive? Eisler let slip that OnLive does in fact use NVIDIA GPUs but would not be pressed further; NVIDIA has agreed not to make direct comparisons.

When might Geforce GRID come to Europe? Later this year or early next year, said Eisler.

Eisler was also asked about whether Geforce GRID will cannibalise sales of GPUs to gamers. He noted that while Geforce GRID latency now compares favourably with that of a games console, this is in part because the current consoles are now a relatively old generation, and a modern PC delivers around half the latency of a console. Nevertheless it could have an impact.

One of the benefits of the Geforce GRID is that you will, in a sense, get an upgraded GPU every time your provider upgrades its GPUs, at no direct cost to you.

I guess the real question is how the advent of cloud GPU gaming, if it takes off, will impact the gaming market as a whole. Casual gaming on iPhones, iPads and other smartphones has already eaten into sales of standalone games. Now you can play hardcore games on those same devices. If the centre of gaming gravity shifts further to the cloud, there is less incentive for gamers to invest in powerful GPUs on their own PCs.

Finally, note that the latency issues, while still important, matter less for the non-gaming cloud GPU applications, such as those targeted by NVIDIA VGX. Put another way, a virtual desktop accelerated by VGX could give acceptable performance over connections that are not good enough for Geforce GRID.

What’s in Adobe’s Creative Cloud, and should you go cloud or purchase outright?

Adobe has launched, though not quite released, its Creative Cloud. The name is slightly misleading, since Adobe’s main business is in desktop applications and the “Creative Cloud” is as much or more a subscription model for desktop applications as it is a set of cloud services. In its discussions with financial analysts at the end of last year, Adobe said that moving customers to a subscription model is one of its goals since, quite simply, it makes more money that way.

Subscriptions are good for vendors in various ways. They offer a regular income, tend to keep going through inertia, and offer an opportunity to upsell additional services.

The applications in Creative Cloud include everything in Creative Suite Master Collection as far as I can tell, including Photoshop, Illustrator, InDesign, Dreamweaver, Flash Professional, Flash Builder, Fireworks, Premiere Pro, After Effects, Audition, Edge (animator for HTML5) and Muse (a no-code visual designer for HTML pages).

You also get a set of iOS/Android apps: Photoshop Touch, Proto, Ideas, Debut, Collage and Kuler. And Lightroom, which curiously is not in Creative Suite.

Adobe Digital Publishing Suite Single Edition is “coming soon” to the Creative Cloud.

Nevertheless there are cloud services as well as desktop applications in the Creative Cloud. Here is what you get:

Store and Share: automatic cloud storage and file syncing. 2GB for a free membership or 20GB paid for. A desktop app called Creative Cloud Connection, for Windows and OS X, syncs files to a computer, while you can also access files from Touch apps on iOS or Android.

Publish: host up to five websites on Adobe’s hosting service. If you use Adobe Muse, you can design and publish without coding. Features of the web hosting? PHP? Coldfusion? Server-side Java? Database? Ecommerce? These details seem to be absent from Adobe’s current information but I am keen to find out more and will post an update.

Update: apparently this is the Business Catalyst hosting service – see here for details. This is rather limited as you cannot use any sort of server-side programming platform, but only the Business Catalyst services, though this does include a “customer database”. That said, there is an API for “connecting third party services” which might be a workaround in some cases.

Pros and cons

There are real advantages to a subscription versus buying the packaged Creative Suite. You get additional services, additional products, and also, Adobe is hinting, more updates than will be available to shrinkwrap purchasers.

If you have a short-term requirement for Creative Suite, the subscription approach is obviously advantageous.

The disadvantage, as with any subscription, is that you have to keep paying in order to keep using the products, whereas the shrinkwrap (actually a download) is a one-off payment. How much? All prices below exclude VAT.

For UK customers, Creative Cloud is £36.11 per month (£433.32 per annum), though there is a special offer for existing shrinkwrap owners of CS3 or later of £22.89 per month (£274.68) for the first year only.

The full version of Creative Suite 6 Master Collection is £2,223.00 – around five years of Creative Cloud and therefore a poor deal. Most software is almost worthless when five years old.

On the other hand an upgrade from Creative Suite 5.5 Master Collection is £397.00. Even that is barely a better deal, unless you plan to use it for two years and do not need the additional products and services.
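
For those who want to check the sums, here is the break-even arithmetic using the prices quoted above:

```python
# Break-even arithmetic for the UK prices quoted above (all ex-VAT).
monthly = 36.11
annual = monthly * 12                  # £433.32 per year

full_cs6 = 2223.00                     # CS6 Master Collection, full licence
print(f"Full CS6 = {full_cs6 / annual:.1f} years of Creative Cloud")      # ~5.1

upgrade = 397.00                       # upgrade from CS5.5 Master Collection
print(f"CS5.5 upgrade = {upgrade / annual:.2f} years of Creative Cloud")  # ~0.92
```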


The prices for UK customers are much higher than for the US, a fact which is causing some consternation. For example, the full CS6 Master Collection is $2599.00, a little over £1600 at today’s exchange rate.

The bottom line: Adobe wants you to subscribe so you can expect the pricing to push you in that direction.

Microsoft results: old business model still humming, future a concern

Microsoft has published its latest financials. Here is my at-a-glance summary:

Quarter ending March 31st 2012 vs quarter ending March 31st 2011, $millions

Segment                     Revenue   Change   Profit   Change
Client (Windows + Live)        4624     +177     2952     +160
Server and Tools               4572     +386     1738     +285
Online                          707      +40     -479     +297
Business (Office)              5814     +485     3770     +457
Entertainment and devices      1616     -319     -229     -439

What is notable? Well, Windows 7 is still driving Enterprise sales, but more striking is the success of Microsoft’s server business. The company reports “double-digit” growth for SQL Server and more than 20% growth in System Center. This seems to be evidence that the company’s private cloud strategy is working; and from what I have seen of the forthcoming Server 8, I expect it to continue to work.

Losing $229m in entertainment and devices seems careless, though the beleaguered Windows Phone must be in there too. Windows Phone is not mentioned in the press release.

Overall these are impressive figures for a company widely perceived as being overtaken by Apple, Google and Amazon in the things that matter for the future: mobile, internet and cloud.

At the same time, those “things that matter” are exactly the areas of weakness, which must be a concern.

How Amazon Web Services dominate infrastructure as a service

During Mobile World Congress I met with some folk from Twilio, the cloud telephony company, who said something that interested me. Twilio uses Amazon Web Services (AWS) for its infrastructure and told me that essentially there is no choice: AWS is the only cloud provider which can scale on demand quickly and smoothly as required.

Today at QCon London I met a guy from another major cloud-based company, which is also built on Amazon, and I put the same point to him. Not only did he agree, he said that Amazon is increasing its lead over the competition. Amazon is less visible than some, he said, because of its approach to PR. It concentrates on marketing directly to developers rather than chasing press stories or running big advertising campaigns.

Amazon has also just reduced its prices.

This is mixed news for the industry. In general developers I speak to like working with AWS, and its scalability is a huge benefit when, for example, you are entering a new market. You can do so without having to invest in IT infrastructure.

On the other hand, stronger competition would be healthy. My contact said that he reckoned his company could move away from Amazon in 12 weeks or so if necessary, so it is not an absolute lock-in.

Sold out QCon kicks off in London: big data, mobile, cloud, HTML 5

QCon London has just started, and I’m interested to see that it is both bigger than last year and also sold out. I should not be surprised, because it is usually the best conference I attend all year, being vendor-neutral (though with an Agile bias), wide-ranging and always thought-provoking.


A few more observations. One reason I attend is to watch industry trends, which are meaningful here because the agenda is driven by what currently concerns developers and software architects. Interesting then to see an entire track on cross-platform mobile, though one that is largely focused on HTML 5. In fact, mobile and cloud between them dominate here, with other tracks covering cloud architecture, big data, highly available systems, platform as a service, HTML 5 and JavaScript and more.

I also noticed that Adobe’s Christophe Coenraets is here this year as he was in 2011 – only this year he is talking not about Flex or AIR, but HTML, JavaScript and PhoneGap.

Small Businesses love the cloud says Parallels: three times more likely to choose cloud over on-premise servers

The Parallels Summit is on in Orlando, Florida, and at the event the company has released details of its “Cloud insights” research, focused on small businesses.

Most people know Parallels for its desktop virtualization for the Mac. This or an equivalent comes in handy when you need to run Windows software on a Mac, or cross-develop for Mac and Windows on one machine.

Another side of the company’s business, though, is providing virtualization software for hosting providers. The Plesk control panel for managing virtual machines and websites through a web interface is a Parallels product. Many of the customers for this type of hosting are small businesses, which means that Parallels has an indirect focus on this market.

Despite Parallels offering a “Switch to Mac” edition and perhaps competing in some circumstances with Microsoft’s Hyper-V virtualization, Parallels is a Microsoft partner and has tools which work alongside Hyper-V as well as supporting Microsoft cloud services including Office 365.

Given the company’s business, you can expect its research to come out in favour of cloud, but I was still interested in this statistic:

SMBs with less than 20 employees are at least three times more likely to choose cloud services over on-premise services

It was not long ago that SMBs of this size would almost inevitably install Microsoft’s Small Business Server once they got too big to manage with an ad-hoc network.

I would be interested to know more of course. How do they break down between, say, Google apps, Office 365, or other services such as third-party hosted Exchange? Do they go pure cloud as far as possible, or still run a local server for file shares, print management, and legacy software that expects a local Windows server? Or cloud for email, on-premise for everything else? Do they trust the cloud completely, or have a plan “B” in the event that the impossible happens and services fail?

Finally, what happens as these companies grow? Scalability and pay as you go is a primary reason for going cloud in the first place, so my expectation is that they would stay with this model, but I also suspect that there is pressure to have some on-premise infrastructure as sites get larger.