Tag Archives: facebook

Facebook vs Reddit as discussion forums: why is Facebook so poor?

Facebook’s user interface for discussions is terrible. Here are some of the top annoyances for me:

  • Slow. Quite often I get those blank rectangles, which seem to be a React thing, while content is pending.
  • UI shift. When you go to a post it shows the number of comments, with some algorithmically selected comments below the post. When you click on “View more answers” or the comments link, the UI changes to show the comments in a new panel.
  • Difficult navigation. Everything defaults to the algorithm’s idea of what it calculates you want to see (or what Facebook wants you to see). So we get “Most relevant” and “Top comments.” I always want to see all the comments (spam aside), with the most recent comment threads at the top. To get anything approaching that view I have to click “View more answers” first, and then use the “Top comments” drop-down to select one of the other options.
  • Even “All comments” does not show all comments, but only the top level without the replies.

Facebook is also a horrible experience for me thanks to the news feed concept, which pushes all manner of things at me which I do not want to see or which waste time. I have learned that the only way I can get a sane experience is to ignore the news feed and click the search icon at top left; then I get a list of groups or pages I have visited, showing which have new posts.

Do not use Facebook then? The problem is that if the content one wants to see is only on Facebook it presents a bad choice: use Facebook, or do not see the content.

Reddit by contrast is pretty good. You can navigate directly to a subreddit. Tabs for “hot” and “new” work as advertised, and you can go directly to “new” via a logical URL, for example:

https://www.reddit.com/r/running/new/

Selecting a post shows the post with comments below and includes comment threads (replies to comments) and the threads can be expanded or hidden with +/- links.

The site is not ad-laden and the user experience is generally pleasant, in my experience. The way a subreddit is moderated makes a big difference of course.

The above is why, I presume, reddit is the best destination for many topics including running, a current interest of mine.

Why is Facebook so poor in this respect? I do not know whether it is accident or design, but the more I think about it, the more I suspect it is design. Facebook is designed to distract you, to show you ads, and to keep you flitting between topics. These characteristics prevent it from being any use for discussions.

If I view the HTML for a reddit page I also notice that it is more human-readable, and clicking a random topic I see in the Network tab of the Safari debugger that 30.7 KB was transferred in 767ms.

Navigate back to Facebook and I see 6.96 MB transferred in 1.52s.

These figures will of course vary according to the page you are viewing, the size of the comment thread, the quality of your connection, and so on. Reddit though is much quicker and more responsive for me.

Of course I am on “Old reddit.” New reddit, the revamped user interface (introduced in 2017!) that you get by default unless you are logged in with an account that has opted for old reddit, is bigger and slower, with no discernible advantages. Even new reddit, though, is smaller and faster than Facebook.

Tip: If you are on new reddit you can get the superior old version from https://old.reddit.com/

Twitter: will longer tweets spoil the platform?

Twitter is a strange thing. Founded in 2006, it was initially promoted as a way of communicating with friends about what you are doing right now. It did not appeal to me. Who wants to know all that trivia? Who wants to publish it? Lots of people now on Facebook, apparently. But I digress. I joined Twitter eventually and discovered its two key features, first brevity, and second the ability to choose who you follow and not see tweets from anyone else. It is ideal for broadcasting or consuming news and comment, and that is how I use it. I particularly value it because it is not Google, and provides a way of making things known that is independent of Google’s all-powerful search algorithms.

Now CEO Jack Dorsey has tweeted about lifting the 140 character limit:

At its core Twitter is public messaging. A simple way to say something, to anyone, that everyone in the world can see instantly.

We didn’t start Twitter with a 140 character restriction. We added that early on to fit into a single SMS message (160 characters).

It’s become a beautiful constraint, and I love it! It inspires creativity and brevity. And a sense of speed. We will never lose that feeling.

We’ve spent a lot of time observing what people are doing on Twitter, and we see them taking screenshots of text and tweeting it.

Instead, what if that text…was actually text? Text that could be searched. Text that could be highlighted. That’s more utility and power.

What makes Twitter, Twitter is its fast, public, live conversational nature. We will always work to strengthen that. For every person around the world, in every language!

And by focusing on conversation and messaging, the majority of tweets will always be short and sweet and conversational!

We’re not going to be shy about building more utility and power into Twitter for people. As long as it’s consistent with what people want to do, we’re going to explore it.

And as I said at #flight, if we decide to ship what we explore, we’re telling developers well in advance, so they can prepare accordingly.

(Also: I love tweetstorms! Those won’t go away.)

Of course he really tweeted the above as an image:

image

Will this wreck Twitter? What he has in mind, I suspect, is that tweets become expandable, so you can tweet as much as you like but by default only 140 characters, or a heading of your choice, appears in the feed. In fact this often happens already, except that the link is to an external site, rather than to Twitter.

Twitter’s problem has always been how to monetize the service. The original concept was almost useless for this, until Twitter added “promoted tweets”, which you see whether or not you want them. In 2011 Twitter added images, making it a richer platform for advertisers and providing an easy way to bypass the character limit. Vine and other video acquisitions (SnappyTV in 2014, Periscope in March 2015) mean that more video appears on Twitter. Brevity is still a feature of Twitter, but much undermined, and likely to diminish further.

The removal of the character limit will enable Twitter to host more content itself, rather than being a place where people post links to other sites. This will keep users on Twitter (or in its app) for longer, which means more opportunities to advertise.

If these steps make Twitter worse for users like myself though, we might use the site less, which is not good for advertising income.

At this point I am resigned to Twitter getting worse, as it has done for the last few years. Nevertheless, I will carry on using it until something else appears which is better. I see little sign of that, so Twitter still has me for the moment.

I also accept that Twitter has to be commercially viable in order to thrive. Charging users a fee does not work, so advertising has to be the solution.

Privacy and online data sharing is a journey into the unknown: report from QCon London

I’m at QCon London, an annual developer conference which is among my favourites thanks to its vendor-neutral content.

One session which stood out for me was from Robin Wilton, Director for Identity and Privacy at the Internet Society, who spoke on “Understanding and managing your Digital Footprint”. I should report some dissatisfaction, in that we only skated over the surface of “understanding” and got nowhere close to “managing”. I will give him a pass though, for his eloquent refutation of the common assumption that privacy is unimportant if you are doing nothing wrong. If you have nothing to hide you are not a social being, countered Wilton, explaining that humans interact by choosing what to reveal about themselves. Loss of privacy leads to loss of other rights.

image

In what struck me as a bleak talk, Wilton described the bargain we make in using online services (our data in exchange for utility) and explained our difficulty in assessing the risks of what we share online and even offline (such as via cameras, loyalty cards and so on). Since the risks are remote in time and place, we cannot evaluate them. We have no control over what we share beyond “first disclosure”. The recipients of our data do not necessarily serve our interests, but rather their own. Paying for a service is no guarantee of data protection. We lack the means to separate work and personal data; you set up a LinkedIn account for business, but then your personal friends find it and ask to be contacts.

Lest we underestimate the amount of data held on us by entities such as Facebook and Google, Wilton reminded us of Max Schrems, who made a Subject Access Request to Facebook and received 1200 pages of data.

When it came to managing our digital footprint though, Wilton had little to offer beyond vague encouragement to increase awareness and take care out there.

Speaking to Wilton after the talk, I suggested an analogy with climate change or pollution, on the basis that we know we are not doing it right, but are incapable of correcting it and can only work towards mitigation of whatever known and unknown problems we are creating for ourselves.

Another issue is that our data is held by large commercial entities with strong lobbying teams and there is little chance of effective legislation to control them; instead we get futility like the EU cookie legislation.

There is another side to this, which Wilton did not bring out, concerning the benefit to us of sharing our data both on a micro level (we get Google Now) or aggregated (we may cure diseases). This is arguably the next revolution in personal computing; or put another way, maybe the bargain is to our advantage after all.

That said, I do not believe we have enough evidence to make this judgment and much depends on how trustworthy those big commercial entities prove to be in the long term.

Good to see this discussed at QCon, despite a relatively small attendance at Wilton’s talk.

What’s up with Facebook acquiring “we don’t sell ads” WhatsApp

Facebook has acquired WhatsApp for a breathtaking $16 billion. Too much money by any normal valuation; but that might not matter if it makes sense strategically.

image

What is the value of WhatsApp?

WhatsApp is on a path to connect 1 billion people. The services that reach that milestone are all incredibly valuable.

says Facebook’s founder and CEO Mark Zuckerberg. Facebook is purchasing an extension to its “social graph”, a billion people’s interconnections. The obvious goal is to accomplish two things:

  • Defend Facebook from disruption and keep users on its network – particularly the younger demographic that may be drifting away.
  • Gather more data which will be used for targeted advertising and potentially rich future services.

There are two companies which dominate the Internet today: Facebook and Google. Their business model is mainly advertising, but they may be better perceived as data companies than as advertising companies. The big bet is that future technology will revolve around smart, deeply personalised services that will improve our lives, based on what they know about us, our friends and connections, our location, our preferences and our schedule. The clearest expression of this is Google Now which can notify you when to leave for your meeting based on current traffic – without you having to spend time configuring settings or entering data. Google looks at your schedule (which it knows through your calendar), your location (which it knows through your smartphone), mashes that data with its mapping and traffic services, and surfaces the result as a notification.

The trade-off is that you hand over your data to Google (enabling it to provide ever-richer services) while receiving in return an easier life.

Does the deal have a dark side? Undoubtedly. Might future services be paid for, might Google or others take advantage of that data in ways we dislike? Quite possibly. This is the bet though; and it is everywhere.

Take the automotive industry for example. I wrote up the latest buzz in automotive marketing for the Guardian, and heard this from one of the experts I consulted:

“The actual car; the engine, the wheels, the drive shaft, the bodies, those have become commodities. The differentiator for cars is the in-dash system, the computer,” says Patrick Salyer, CEO of Gigya. “If car companies can connect with their customers’ social login, they can build a permanent lasting relationship. If the car company tracks things about me like my driving habits and where I go, it is actually a value add insight. That is a reason to stay with that car company.”

Salyer thinks that Facebook, Google and automobile companies are in a war to own that data. It is all about the data.

This therefore is what Facebook is buying: future data, in a messaging service that because of its mobile orientation may prove to be a kind of successor to email and SMS messaging.

That is the rationale, but will it work? The problem with WhatsApp is that it is fashionable; and what comes into fashion can go out, too, especially for the young. You can bet too that Google (Facebook’s biggest problem) will counter by using its OS platform, Android, to point users towards its own messaging services. If you have an Android phone you are already logging into Google, no need to sign up for another service.

Who knows, two to three years from now we may be joking about how much Facebook paid for WhatsApp. But Facebook can afford a few big gambles, and does not need all of them to come off.

Postscript: you should read WhatsApp founder Jan Koum’s blog post on Why we don’t see ads while it is still online.

Advertising isn’t just the disruption of aesthetics, the insults to your intelligence and the interruption of your train of thought. At every company that sells ads, a significant portion of their engineering team spends their day tuning data mining, writing better code to collect all your personal data, upgrading the servers that hold all the data and making sure it’s all being logged and collated and sliced and packaged and shipped out… And at the end of the day the result of it all is a slightly different advertising banner in your browser or on your mobile screen.

Remember, when advertising is involved you the user are the product.

Money, as songwriter prophet Bob Dylan observed, does not talk. It swears.

Mobile: Windows Phone appeal growing, iOS and Android secure say Titanium developers

Appcelerator and IDC have released their latest mobile developer report, in which nearly 3,000 users of the cross-platform development tool Titanium report on their views and intentions.

These reports are always interesting but experience suggests that they are poor predictors. A year ago, the Q4 2011 report told us:

Amazon’s new Kindle Fire ignites developer interest. When surveyed among 15 Android tablets, the low-cost, content-rich eReader was second only to the Samsung Galaxy Tab globally in developer interest. A regional breakdown shows Amazon edging Samsung in North America for the top slot. At 49% very interested in North America, the Kindle Fire is just 4 points less than interest in the iPad (53%) prior to its launch in April 2010.

Now, the Q4 2012 report says:

Amazon’s Kindle continues to struggle for developers’ interest. With less than 22% of mobile application developers “very interested” in building mobile apps for the device, the Kindle just barely breaks into developers’ top 10 app targets.

This is one example; a glance back through previous editions shows plenty of others, demonstrating that developers struggle as much as the rest of us when it comes to guessing the value of future markets.

The report is still useful as a snapshot of how things look now, for cross-platform mobile developers. One question which is always asked, and therefore can be compared easily from one report to another, is the proportion of developers who are “very interested” in developing for each platform.

image

The top 5 contenders here are relatively stable, with Apple iOS top (iPhone and iPad), Android next (phone and tablet), and HTML5 Mobile Web also strong at about 65%.

The lower ranges are more interesting, as developers change their minds about prospects for the minority players. Windows Phone dived to around 22% in August 2012 but grew strongly to 36% in this report. Windows tablets, which we should probably take to mean the new Windows 8 app platform, are at about the same level. BlackBerry has declined from over 40% in March 2010 to 9% today, though I would suggest this will inevitably increase in the next report, which will come after the launch of BlackBerry 10.

What else is interesting? One thing is Apple “fragmentation”. The problem here is that Apple iOS now has six screen sizes, once you add the iPad mini and the iPhone and iPad with or without high-resolution Retina displays. This gives me pause for thought. The challenge of mobile apps is now closer to that of desktop apps, where you do not know what display will be used or how users will choose to size the application window. Intelligent layout and scaling is key.

Apple is also increasingly awkward to work with:

More generally, 90% of developers believe that Apple has become more difficult, or about the same, to deal with over the past three years when it comes to application submission, fragmentation, and monetization.

Part of the report concerns Microsoft Surface. This focus is puzzling, in that it is the Windows 8 app platform which really matters, rather than Surface itself. Another oddity is the questions put, with no option to say “Surface is great”. The most positive option was:

It is a nice piece of hardware, but Windows 8 needs a lot more than that to be successful

A rather obvious statement which apparently won the agreement of 36% of developers.

The report gets even sillier when it comes to market disruption:

The top three companies that developers perceive to be ripe for disruption are a veritable who’s-who of the biggest tech darlings

say Appcelerator and IDC. It is true; but the figures are tiny:

Microsoft (8% of respondents), Google (7% of respondents), and Facebook (7% of respondents).

In other words, over 90% of developers believe these three companies are not likely to be disrupted soon; a figure that strikes me as conservative, especially for Microsoft.

More impressive is that over 60% of developers believe Facebook will lose out in future to a mobile-first social startup. This was also true last time round; 66% in Q3 2012 and 62% in Q4 2012.

The length of time it took Facebook to release just a single native iOS app, coupled with the fact that a corresponding native Android app is still MIA, has proven that the company does not yet have a viable cross-platform mobile strategy.

say Appcelerator and IDC. A fair point; but Facebook’s primary asset is its network of relationships rather than its software, and that is not easy to disrupt. I would also guess that disruption is more likely to come from Google than from a startup, as Google promotes Google+ and perhaps builds it more aggressively into Android, along with apps for iOS and other platforms as needed. But like the developers in this survey, I am guessing.

An ugly dialog from Spotify

I am a big fan of Spotify, mainly because it works so well. Search is near instant, playback is near instant.

I understood when, under pressure from the music industry, it limited the value of the free version by restricting the hours of play and the number of times you can play a specific track.

This is ugly though:

image

Spotify says:

From today, all new Spotify users will need to have a Facebook account to join Spotify. Think of it as like a virtual ‘passport’, designed to make the experience smoother and easier, with one less username and password to remember. You don’t need to connect to Facebook and if you do decide to, you can always control what you share and don’t share by changing your Spotify settings at any time.

Why care? Privacy? Because you might want Spotify but not Facebook?

I would put it another way. I am wary of putting Facebook at the centre of my Internet identity. If others follow Spotify’s example and the Web were to become useless unless you are logged into Facebook, that would give Facebook more power than I would like.

If for some reason you want to withdraw from Facebook, why should that affect your relationship with Spotify? It is an ugly dependency, and I hope that Spotify reconsiders.

See also Cloud is identity management says Kim Cameron, now ex-Microsoft.

Hands on debugging an Azure application – what to do when it works locally but not in the cloud

I have been writing a Facebook application hosted on Microsoft Azure. I hit a problem where my application worked fine on the local development fabric, but failed when deployed to Azure. The application was not actually crashing; it just did not work as expected. Specifically, either the Facebook authentication or the ASP.NET Forms Authentication was failing; when I tried to log on, the log on failed.

This scenario, where the app works locally but not on Azure, is potentially a bad one because you do not have the luxury of breakpoints and variable inspection. There are several approaches. You can have the application write a log, which you could download or view by using Remote Desktop to the Azure instance. You can have the application output debug messages to HTML. Or you can use IntelliTrace.

I tried IntelliTrace. It is easy to set up, just check the box when deploying.

image

Once deployed, I tried the application. I clicked the Log On button, after which the screen flashed but the app still asked me to log on. The log on had failed.

image

I closed the app, opened Server Explorer in Visual Studio, drilled down into the Windows Azure Compute node and selected View IntelliTrace Logs.

image

The logs took a few minutes to download. Then you can view the IntelliTrace log summary, which includes a list of exceptions. You can double-click an exception to start an IntelliTrace debug session.

image

Useful, but I still could not figure out what was wrong. I also found that IntelliTrace did not show the values for local variables in its debug sessions, though it does show exceptions in detail.

Now, if you really want to debug and trace an Azure application you had better read this MSDN article which explains how to create custom debugging and trace agents and write logs to Azure storage. That seems like a lot of work, so I resorted to the old technique of writing messages to HTML.
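
To show what I mean by writing messages to HTML, here is a minimal sketch of the kind of helper I use. It assumes an ASP.NET Web Forms page with a Literal control; the control name DebugOutput and the helper itself are illustrative, not part of the MSDN guidance or my actual app:

// Minimal debug-to-HTML sketch for an ASP.NET Web Forms page.
// The Literal control name (DebugOutput) is illustrative.
using System;
using System.Web;
using System.Web.UI.WebControls;

public static class HtmlDebug
{
    public static void Write(Literal target, string message)
    {
        // Encode the message and append it so it shows up in the rendered page
        target.Text += "<div>" +
            HttpUtility.HtmlEncode(DateTime.UtcNow.ToString("HH:mm:ss.fff") + " " + message) +
            "</div>";
    }
}

// Usage from a page: HtmlDebug.Write(DebugOutput, "SetAuthCookie called");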

At this point I should mention something you must do in order to debug on Azure and remain sane. This is to enable Web Deploy:

image

It is not that hard to set up, though you do need to enable Remote Desktop which means a trip to the Azure management portal. In my case I am behind a firewall so I needed to configure Web Deploy to use the standard SSL port. All is explained here.

Why use Web Deploy? Well, normally when you deploy to Azure the service actually builds, copies and spins up a new virtual machine image for your app. That process is fundamental to Azure’s design and means there are always at least two copies of the VM in existence. It is also slow, so if you are making changes to an app, deploying, and then testing, you will spend most of your time waiting for Azure.

Web Deploy, by contrast, writes to your existing instance, so it is many times quicker. Note that once you have your app working, it is essential to deploy it properly, since Azure might revert your app to the last VM you created.

With Web Deploy enabled I got back to work. I discovered that FormsAuthentication.SetAuthCookie was not working. The odd thing was that it worked locally, and it had worked in a previous version deployed to Azure.
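
For context, the failing code follows the standard Forms Authentication pattern, roughly like this; IsFacebookUserValid and facebookUserId are illustrative placeholders, not my actual code:

// Rough sketch of the log-on path. SetAuthCookie issues the forms authentication
// cookie; if the browser silently rejects that cookie, the user appears to remain
// logged out even though no exception is thrown.
if (IsFacebookUserValid(facebookUserId))
{
    System.Web.Security.FormsAuthentication.SetAuthCookie(facebookUserId.ToString(), false);
    Response.Redirect(System.Web.Security.FormsAuthentication.DefaultUrl);
}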

Then I began to figure it out. My app runs in a Facebook canvas. Since the app is served from a different site than Facebook, cookies may be rejected. When I ran the app locally, the app was in a different IE security zone, so different rules applied.

But why had it worked before? I realised that when it worked before I had used Google Chrome. That was it. IE worked locally; but only Chrome worked when deployed.

I have given up trying to fix the specific problem for the moment. I have dug into it a little, and discovered that cookie handling in a Facebook canvas with IE is a long-standing problem, and that the Facebook C# SDK may have bugs in this area. It is not essential for my sample; I have found I can get by with the Facebook session. To get the user ID, for example:

FacebookWebContext.Current.Session.UserId
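
A minimal sketch of how I use this, with FacebookWebContext coming from the Facebook C# SDK and the null check being my own defensive addition:

// Identify the current user from the Facebook session rather than Forms Authentication
var session = FacebookWebContext.Current.Session;
if (session != null)
{
    var userId = session.UserId;
    // use userId as the key for any per-user data
}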

The time has not been wasted though as I have learned a bit about Azure debugging. I was also amused to discover that my Azure VM has activation problems:

image

The frustration of developing for Facebook with C#

I am researching a piece on developing for Facebook with Microsoft Azure, and of course the first thing I did was to try it out.

It is not easy. The first problem is that Facebook does not care about C#. There are four SDKs on offer: JavaScript, Apple iOS, Google Android, and PHP. This has led to a proliferation of experimental and third-party SDKs which are mostly not very good.

The next problem is that the Facebook API is constantly changing. If you try to wrap it neatly in an SDK, it is likely that some things will break when the next big change comes along.

This leads to the third problem, which is that Google may not be your friend. That helpful article or discussion on developing for Facebook might be out of date now.

Now, there are a couple of reasons why it should be getting better. Jim Zimmerman and Nathan Totten at Thuzi (Totten is now a technical evangelist at Microsoft) created a new C# Facebook SDK, because they needed it for their own apps and were frustrated with what was on offer elsewhere. The Facebook C# SDK looks like it has some momentum.

C# 4.0 actually works well with Facebook, thanks to the dynamic keyword, which makes it easier to cope with Facebook’s changes and also lets it map closely to the official PHP SDK, as Totten explains.
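
As a rough example of the kind of code this enables (assuming an access token has already been obtained; the variable name is mine, and properties such as name are only resolved at runtime):

// Sketch of a Graph API call using the dynamic support in the Facebook C# SDK.
// accessToken is assumed to have been obtained via OAuth already.
var client = new FacebookClient(accessToken);
dynamic me = client.Get("me");
string name = me.name;   // resolved at runtime; no compile-time checking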

Nevertheless, there are still a few problems. One is that documentation for the SDK is sketchy to say the least. There is currently no reference for it on the Codeplex site, and most of the comments are the kind that produces impressive-looking automatic documentation but actually tells you nothing of substance. Plucking one at random:

FacebookClient.GetAsync(System.Collections.Generic.IDictionary<string,object>)

Summary:
Makes an asynchronous GET request to the Facebook server.

Parameters:
parameters: The parameters.

Another problem, inherent to dynamic typing, is that IntelliSense (auto-completion in Visual Studio) has limited value. You constantly need to reference the Facebook documentation.

Finally, the SDK has changed quite a bit in different versions and some of the samples reference old versions.

In particular, I found it a struggle getting OAuth authentication and access token retrieval working, and ended up borrowing Totten’s sample code here, which mostly works. Note though that the code in the sample does not cope with the same user logging out and logging in again; I fixed this by changing his InMemoryUserStore to use a ConcurrentDictionary instead of a ConcurrentBag, though there are plenty of other ways you can store users.
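
For what it is worth, here is a sketch of the kind of change I mean: keying the store on the Facebook user id with a ConcurrentDictionary, so that a repeat log-in overwrites the existing entry rather than adding a duplicate. The FacebookUser type here is illustrative, not the actual type in Totten’s sample:

using System.Collections.Concurrent;

// Illustrative user type; Totten's sample uses its own class.
public class FacebookUser
{
    public long FacebookId { get; set; }
    public string AccessToken { get; set; }
}

public class InMemoryUserStore
{
    // Keyed by Facebook user id, so logging out and in again overwrites the entry.
    private readonly ConcurrentDictionary<long, FacebookUser> users =
        new ConcurrentDictionary<long, FacebookUser>();

    public void AddOrUpdate(FacebookUser user)
    {
        users[user.FacebookId] = user;   // last write wins
    }

    public FacebookUser Get(long facebookId)
    {
        FacebookUser user;
        return users.TryGetValue(facebookId, out user) ? user : null;
    }
}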

I’m puzzled why Microsoft does not invest more in making this easier. Microsoft invested in Facebook and it is easy to get the impression that Microsoft and Facebook are in some sort of informal alliance versus Google. Windows Phone 7, for example, ties in closely with Facebook and is probably the best Facebook phone out there.

As it is, although I prefer coding in C# to PHP, I would say that choosing PHP as the platform for your Facebook app will present less friction.

Why is Microsoft giving away web traffic and abandoning users?

I am puzzled by Microsoft’s decision to close Live Spaces and send all its users to WordPress.com. Of course WordPress is a superior blogging platform; but Spaces made sense as an element within an integrated Live.com platform. According to Microsoft it has 7 million users and 30 million visitors; and if you accept that business on the web is all about traffic and monetizing traffic, then it strikes me as odd that Microsoft has no better idea of what to do with that traffic than to give it to someone else.

It makes me wonder what exactly Microsoft is trying to do with its Live.com web property. You can make a generous interpretation, as Peter Bright does, and say that the company is learning to focus and losing its “not invented here” religion. Or you can argue that it exposes the lack of a coherent strategy for Microsoft’s online services for consumers.

Part of the reason may be that blogging itself has changed. The original concept of an online diary or “web log” has fractured, with much of the trivia that might once have been blogged now being expressed on Facebook or Twitter. At the other end, blog engines like WordPress have evolved into capable content management systems. Many blogs are just convenient tools to author web sites.

Spaces is also a personal CMS. When combined with other features of Live.com, it provides a way of authoring your own web site, with photos, lists, documents, music and video, gadgets and other modules. You can apply themes, select layouts, and even add custom HTML. Everything integrates with the Windows Live identity system. The blog is just one element in this.

image

Now, although you can move your blog to WordPress.com, much of this is going away. Themes, gadgets, guestbook and lists are not transferred. If you were using Spaces for in effect a personal web site, you will have to start again on WordPress.

What this means is that WordPress, not Microsoft, now has the opportunity to show ads or market other services to these users.

Other services, including SkyDrive, which is an excellent online storage platform, and Hotmail for email, are continuing as before. Still, the wider question is this. If Microsoft is happy to abandon 7 million users and all the customisation effort they have put into creating a personal online space, why should I trust it for email, or online storage?

Microsoft’s Dharmesh Mehta does his best to explain the decision here:

When we looked at Spaces, and what we had done with Spaces, and the more we thought about where do we want this to go, where do we think blogging evolves to, what’s important about that, you look at WordPress.com, and they’re building that. They’re doing a great job. And there really isn’t much value in us trying to compete with that.

This seems weak to me. Mehta is even less convincing when it comes to Live ID:

Windows Live ID is not really a means unto itself. There are times when it’s important for us to be able to associate an identity with someone. But there’s many things that we do where you don’t need a Windows Live ID — Photo Gallery, if you’re just using it on your PC, you don’t need a Windows Live ID at all. You can take our Mail app and connect it to Yahoo or Gmail or something like that. You don’t need a Windows Live ID. So I wouldn’t say that Windows Live ID is a goal, or something that we’re trying to drive in and of itself. It’s really more a means when we think it’s valuable for someone to have an account.

Now, I thought the Live ID was a single sign-on for Microsoft’s online services, and the basis of a network of friends and contacts. Perhaps Microsoft is now ceding that concept to Facebook or others? This does seem to be a move in that direction; and while it may be acceptance of something that was inevitable, it is a bad day for Microsoft’s efforts to matter online.

Oracle: a good home for MySQL?

I’m not able to attend the whole of Oracle OpenWorld / JavaOne, but I have sneaked in to MySQL Sunday, which is a half-day pre-conference event. One of the questions that interests me: is MySQL in safe hands at Oracle, or will it be allowed to wither in order to safeguard Oracle’s closed-source database business?

It is an obvious question, but not necessarily a sensible one. There is some evidence for a change in direction. Prior to the takeover, the MySQL team was working on a database engine called Falcon, intended to lift the product into the realm of enterprise database management. Oracle put Falcon on the shelf; Oracle veteran Edward Screven (who also gave the keynote here) said that the real rationale for Falcon was the fear that InnoDB would be somehow jiggered by Oracle, and that now both MySQL and InnoDB are at Oracle, that rationale makes no sense.

Context: InnoDB is the grown-up database engine for MySQL, with support for transactions, and already belonged to Oracle from an earlier acquisition.

There may be something in it; but it is also true that Oracle has fine-tuned the positioning of MySQL. Screven today emphasised that MySQL is Oracle’s small and nimble database manager; it is “quite performant and quite functional”, he said; the word “quite” betraying a measure of corporate internal conflict. Screven described how Oracle has improved the performance of MySQL on Windows and is cheerful about the possibility of it taking share from Microsoft’s SQL Server.

It is important to look at the actions as well as the words. Today Oracle announced the release candidate of MySQL 5.5, which uses InnoDB by default, and has performance and scalability improvements that are dramatic in certain scenarios, as well as new and improved tools. InnoDB is forging ahead, with the team working especially on taking better advantage of multi-core systems; we also heard about full text search coming to the engine.

The scalability of MySQL is demonstrated by some of its best-known deployments, including Facebook and Wikipedia. Facebook’s Mark Callaghan spoke today about making MySQL work well, and gave some statistics concerning peak usage: 450 million rows read per second, 3.5 million rows changed per second, query response time 4ms.

If pressed, Screven talks about complexity and reliability with critical data, rather than lack of scalability, as the factors that point to an Oracle rather than a MySQL solution.

In practice it matters little. No enterprise currently using an Oracle database is going to move to MySQL; aside from doubts over its capability, it is far too difficult and risky to switch your database manager to an alternative, since each one has its own language and its own optimisations. Further, Oracle’s application platform is built on its own database and that will not change. Customers are thoroughly locked in.

What this means is that Oracle can afford to support MySQL’s continuing development without risk of cannibalising its own business. In fact, MySQL presents an opportunity to get into new markets. Oracle is not the ideal steward for this important open source project, but it is working out OK so far.