Category Archives: software development

Popfly Game Creator – programming online with Silverlight

This looks great: Popfly Game Creator.

Interesting on several counts.

First, casual gaming will help get Silverlight runtimes deployed.

Second, it’s Microsoft doing one of the things it does well: opening up programming to a new group. Another example: Microsoft promotes its XNA gaming framework to universities, where it helps them to entice new students into computer science.

Third, it’s from Adam Nathan, author of the definitive work on .NET interop, .NET and COM. Popfly gaming must be welcome light relief (though I don’t mean to imply that this stuff is easy to do).

Fourth, is online programming – I mean, programming that you actually do online – coming of age?

Sun’s bad quarter

I was interested to see Sun’s financial results after visiting the company earlier this year.

Not too good:

Revenues for the third quarter of fiscal 2008 were $3.266 billion, a decrease of 0.5 percent as compared with $3.283 billion for the third quarter of fiscal 2007 … Net loss for the third quarter of fiscal 2008 on a GAAP basis was $34 million, or ($0.04) per share, as compared with net income of $67 million, or $0.07 per share, for the third quarter of fiscal 2007.

When I visited we were told that rising income from developing nations would compensate for weakness in the USA, but apparently this is not the case. Although income from the likes of India and Brazil is rising, it is not enough to make up the difference. Another question: why is Sun under-performing relative to other companies such as IBM and Intel, both of which reported strong first quarters last month?

Sun is also set to cut 1,500 to 2,000 jobs, which suggests that the company does not expect demand to pick up soon.

The issue to me is whether Sun can make sense of its commitment to open source, or whether the proprietary guys are showing where the money really is. The MySQL purchase was great PR, but of doubtful business sense.

Live Mesh: Hailstorm take 2?

So says Joel Spolsky, in a rant about both unwanted mega-architectures and the way big companies snaffle up all the best coders.

Is he right? Well, I attended the Hailstorm PDC in 2001 and I still have the book that we were given: .NET My Services specification. There are definitely parallels, not least in the marketing pitch (from page 3):

.NET My Services will enable the end user to gain access to key information and receive alerts about important events anywhere, on any device, at any time. This technology will put users in total control of their data and make them more productive.

Swap “.NET My Services” for “Live Mesh” and you wouldn’t know the difference.

But is it really the same? Spolsky deliberately intermingles several points in his piece. He says it is the same stuff reheated. One implication is that because Hailstorm failed, Live Mesh will fail. Another point is that Live Mesh is based on synchronization, which he says is not a killer feature. A third point is that the thing is too large and overbearing; it is not based on what anyone wants.

Before going further, I think we should ask ourselves why Hailstorm failed, and look at what some of the people involved think. Start with this post by Mark Lucovsky, chief software architect for Hailstorm and now at Google, who says:

I believe that there are systems out there today that are based in large part on a similar set of core concepts. My feeling is that the various RSS/Atom based systems share these core concepts and are therefore very similar, and more importantly, that a vibrant, open and accessible, developer friendly eco-system is forming around these systems.

Joshua Allen, an engineer still at Microsoft, disagrees:

All of these technologies predate Hailstorm by a long shot.  There is a reason they succeeded where Hailstorm failed.  It’s because Hailstorm failed to adopt their essence; not because they adopted Hailstorm’s essence …. the “principles” Mark’s blog post cites are actually principles of the technologies Hailstorm aimed to replace.

but as Allen shows in the latter part of his post, the technology was incidental to the main reasons Hailstorm failed:

  1. Hailstorm intended to be a complete, comprehensive set of APIs and services ala Win32.  Everything — state management, identity, payments, provisioning, transactions — was to be handled by Hailstorm.
  2. Hailstorm was to be based on proprietary, patented schemas developed by a single entity (Microsoft).
  3. All your data belonged to Microsoft.  ISVs could build on top of the platform (after jumping through all sorts of licensing hoops), but we controlled all the access.  If we want to charge for alerts, we charge for alerts.  If we want to charge a fee for payment clearing, we charge a fee.  Once an ISV wrote on top of Hailstorm, they were locked in to our platform.  Unless we licensed a third party to implement the platform as well, kind of like if we licensed Apple to implement Win32.

Hailstorm’s technology was SOAP plus Passport authentication. There were some technical issues. I recall that Passport in those days was suspect. Some smart people worked out that it was not as secure as it should be, and there was a general feeling that it was OK for logging into Hotmail but not something you would want to use for online banking. As for SOAP, it gets a bad rap these days but it can work. That said, these problems were merely incidental compared to the political aspect. Hailstorm failed for lack of industry partners and public trust.

Right, so is Live Mesh any different? It could be. Let me quickly lay out a few differences.

  1. Live Mesh is built on XML feeds, not SOAP messaging. I think that is a better place to start.
  2. Synchronization is a big feature of Mesh that wasn’t in Hailstorm. I don’t agree with Spolsky; I think this is a killer feature, if it works right.
  3. Live Mesh is an application platform, whereas Hailstorm was not. Mesh plus Silverlight strikes me as appealing.

Still, even if the technology is better, what about the trust aspect? Will Mesh fail for the same reasons?

It is too soon to say. We do not yet know the whole story. In principle, it could be different. Mesh is currently Passport (now Live ID) only. Will it be easy to use alternative authentication providers? If the company listens to its own Kim Cameron, you would think so.

Currently Mesh cloud data resides only on Microsoft’s servers, though it can also apparently do peer-to-peer synch. Will we be able to run Mesh entirely from our own servers? That is not yet known. What about one user having multiple meshes, say one for work, one personal, and one for some other role? Again, I’m not sure if this is possible. If there is only One True Mesh and it lives on Live.com, then some Hailstorm spectres will rise again.

Finally, the world has changed in the last 7 years. Google is feared today in the way that Microsoft was feared in 2001: the entity that wants to have all our information. But Google has softened us up to be more accepting of something like Live Mesh or even Hailstorm. Google already has our search history, perhaps our email, perhaps our online documents, perhaps an index of our local documents. Google already runs on many desktops; Google Checkout has our credit card details. What boundary can Live Mesh cross, that Google has not already crossed?

“Hailstorm revisited” is an easy jibe, but I’m keeping an open mind.

What is Microsoft’s new language?

From Douglas Purdy’s blog:

It is not very often that you get to be part of a team that is developing a programming language that aspires to be used by every developer on the Microsoft platform.

In addition, it is not very often that you can be part of a team that aspires to radically change the dynamics of building a new language, to the extent that a developer can write their own model-driven language in a straightforward way while getting all the language services (Intellisense, colorization, etc.) for “free”.

I am lucky enough to be on such a team – and if you are interested you could be as well.

Something to do with Oslo I guess. And Live Mesh?

All will be revealed at PDC.

Role of web video in tech communications

Last week’s Live Mesh announcement was a significant one for Microsoft watchers. It was interesting to note that all the in-depth information came in the form of web video.

Personally I dislike this trend. Video cannot easily be scanned to see what it contains; it also requires audio, which is a nuisance. It is more work to quote from a video than to copy some text. I also resort to playing videos at double speed where possible, to come closer to the speed of reading, and noting down the times of sections that I want to return to.

Some of these problems could be mitigated by better presentation. For example, you could have summary text on the page next to an embedded video, with links to indexed points.

However I also recognize that I may be in a minority. Video has obvious advantages; it is more informal, and can include real demos as opposed to diagrams and screen grabs.

I am even contemplating trying some video publishing of my own; it is time I reviewed Adobe Visual Communicator.

Even so, I’d suggest that companies take the time to offer transcripts of important video content. Text has advantages too.

Microsoft Live Mesh is AIR++

This post on the Microsoft Live Dev blog reminded me to view some of the Live Mesh videos Microsoft has put out for developers – this quick tour is a good place to start; this video with Ori Amiga has more details with examples.

A few comments. First, it seems to me that Live Mesh is at heart a feed aggregator. It interests me because I had high hopes for Microsoft’s plans to integrate RSS into the operating system, and wrote about it in 2005. Sadly, Microsoft messed up its common feed platform – though I am perhaps one of the few who uses it outside IE7 or Outlook, with a custom feed reader thrown together in VB.

Live Mesh takes the feed aggregation concept and adds a few things. These include a REST API for posts and updates; a synchronization engine; an identity system so that you can control access; and a local feed server that works entirely offline when needed. Hence MOE (Mesh Operating Environment), also known as the Service Composition Runtime.
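
To make the REST idea concrete, here is a rough sketch of what reading a Mesh-style resource feed over plain HTTP might look like. To be clear, the host name and path are placeholders rather than documented Live Mesh endpoints, and authentication against Live ID is ignored; I have used the Win32 WinHTTP API simply because that is what a native Windows client has to hand.

  // Hypothetical sketch only: real Live Mesh endpoints are not documented here.
  // This just shows the flavour of a REST-style feed read, using Win32 WinHTTP.
  #include <windows.h>
  #include <winhttp.h>
  #include <iostream>
  #include <string>
  #pragma comment(lib, "winhttp.lib")

  int main()
  {
      std::string feedXml;

      HINTERNET hSession = WinHttpOpen(L"MeshSketch/1.0",
          WINHTTP_ACCESS_TYPE_DEFAULT_PROXY, WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
      // Placeholder host and path, standing in for whatever the Mesh cloud exposes
      HINTERNET hConnect = WinHttpConnect(hSession, L"mesh.example.com",
          INTERNET_DEFAULT_HTTPS_PORT, 0);
      HINTERNET hRequest = WinHttpOpenRequest(hConnect, L"GET", L"/user/feeds/objects",
          NULL, WINHTTP_NO_REFERER, WINHTTP_DEFAULT_ACCEPT_TYPES, WINHTTP_FLAG_SECURE);

      if (WinHttpSendRequest(hRequest, WINHTTP_NO_ADDITIONAL_HEADERS, 0,
              WINHTTP_NO_REQUEST_DATA, 0, 0, 0) &&
          WinHttpReceiveResponse(hRequest, NULL))
      {
          DWORD bytesAvailable = 0;
          // Read the response body - conceptually an Atom-style feed of entries
          while (WinHttpQueryDataAvailable(hRequest, &bytesAvailable) && bytesAvailable > 0)
          {
              std::string chunk(bytesAvailable, '\0');
              DWORD bytesRead = 0;
              WinHttpReadData(hRequest, &chunk[0], bytesAvailable, &bytesRead);
              feedXml.append(chunk, 0, bytesRead);
          }
      }

      std::cout << feedXml << std::endl;

      WinHttpCloseHandle(hRequest);
      WinHttpCloseHandle(hConnect);
      WinHttpCloseHandle(hSession);
      return 0;
  }

The point is that the whole exchange is an ordinary HTTP GET returning a feed of entries, and an update would be an ordinary POST to the same resource – something any platform can do without a SOAP toolkit.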

By the way, Mesh can synch peer to peer as well as with the cloud hub. Interesting for Intranet usage.

So what’s an application? A feed of course, one that contains stuff you can execute. The local runtime could be just an HTML and JavaScript engine; but you can see how nicely Silverlight fits into this scheme of things. It’s a neat deployment model. Buying an application becomes similar to subscribing to a web site, except you get an executable that works offline as well as online. As Amiga explains in the video above, this is about performance as well as convenience. The speed of the Net cannot match a local store.

Another aspect of this is that you can use Mesh services in your non-Mesh application, essentially as a data source that is automatically synchronized across all your devices.

If I’m anywhere close to grasping this, then it is not inherently Windows-centric. It also strikes me that this is AIR++, where the ++ is services and synchronization; Adobe should worry – except that Adobe has AIR out already and is no doubt working on great things for version 2.0.

A question though: what’s the business model? Commercial MESHable services? Tools and hosting? Premium MESH? MESH with ads? Right now, I guess Microsoft will do anything to buy mind share and market share for cloud services; but that will not do long-term.

Schwartz vs Mickos on MySQL and open source

At least, that’s how it looks. I was intrigued when I saw reports raising the possibility of “high-end” features in MySQL being released under a closed-source license – confirmed (as a possibility) in a roundabout way here. I found it odd because Sun CEO Jonathan Schwartz had told me of Sun’s intention to open source everything.

So what does Schwartz think of the MySQL idea? Not much, according to his statement in this email interview with Tim O’Reilly:

Marten Mickos (SVP, Database Group at Sun, former CEO, MySQL) made some comments saying he was considering making available certain MySQL add-ons to MySQL Enterprise subscribers only – and as I said on stage, leaders at Sun have the autonomy to do what they think is right to maximize their business value – so long as they remember their responsibility to the corporation and all of its communities (from shareholders to developers). Not just their silo.

I think Marten got some fairly direct and immediate feedback saying the idea was a bad one – and we have no plans whatever of “hiding the ball,” of keeping any technology from the community. Everything Sun delivers will be freely available, via a free and open license (either GPL, LGPL or Mozilla/CDDL), to the community.

Everything.

No exception.

Seems clear enough to me.

Buying a Microsoft code-signing certificate from Thawte? Don’t use Vista.

Here’s the problem. You go along to http://www.thawte.com and ask to buy a Microsoft Authenticode certificate. It’s the right thing to do; signing code is increasingly important in these days of Internet delivery of applications, and unsigned code presents the user with dire warnings that may unnerve them.

So you go to buy a certificate. The process works in two stages. When you apply for the certificate, you are issued with a new private key, but not the certificate itself. Thawte then does its due diligence and checks that you really do represent the organization for which you are requesting a certificate. Finally, you can go back, download the certificate and get on with signing your apps.

This process works differently on Vista than on XP. I got this wrong when I first tried it, because it is not obvious. To begin with, you have to relax IE’s security for the Thawte site – ironic, for a security operation – and make sure it is not running in protected mode. Next, the first page of the application is a big form that has the details of the organization, how you are going to pay, and so on. If you complete this on Vista and click Submit, you get a message saying “This web site is requesting a new certificate on your behalf”:

You complete the application, sit back and wait. A few days later you get an email saying your certificate is ready for download. You download it; it is a file called something like mycert.spc. You can right-click it and choose Install Certificate to place it in the Windows certificate store. You can even sign code with it. Just open a Visual Studio command prompt and type:

signtool signwizard

and off you go. You can select the new certificate from your certificate store, timestamp the code (recommended), and you’re done.
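
Once the certificate is installed you can also sign from the command line without the wizard. A hedged example – the subject name and file name are placeholders, and the timestamp URL is the VeriSign/Thawte Authenticode timestamping service:

signtool sign /n "Your Company Name" /t http://timestamp.verisign.com/scripts/timstamp.dll MyApp.exe

The /n switch picks the certificate out of your personal certificate store by subject name, and /t adds the timestamp so that the signature remains valid after the certificate itself expires.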

So what’s the problem? Well, what if you want to sign code on a different machine than the one on which you applied for the certificate? And what if you want to back up your certificate?

Did you realise when you made the purchase that you were irretrievably hooking the certificate to the actual Vista installation which you were using for the transaction?

It is all to do with the private key. To sign code, you need the private key, which was installed into your certificate store when that first page of the application was submitted. Unfortunately it is marked as non-exportable, which means the Export feature of Vista’s Certificate Manager will not let you extract it. Thawte cannot re-issue the private key; the only solution I know of is to get the entire certificate revoked and reissued (fortunately this is a free service).

This problem does not occur on Windows XP. Here is the evidence. The screenshot below shows part of the application form on Vista:

Now, here is the same part of the form on Windows XP (still IE7):

Spot the difference? An additional section appears in XP, which lets you specify where to save your private key as a file with a .pvk extension. On Vista, you don’t get that choice and you don’t get a .pvk file. Once you have both the .pvk and the .spc files, you can back up or move the certificate wherever you want, with full signing capability. You can import the certificate plus private key into your certificate store using this tool:

http://www.microsoft.com/downloads/details.aspx?FamilyID=F9992C94-B129-46BC-B240-414BDFF679A7&displaylang=EN

which is billed as a tool for Office 2000, but works fine for this purpose.
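
Alternatively, if you have the Windows SDK installed, I believe its pvk2pfx utility will combine the two files into a single password-protected .pfx, which the Windows certificate import wizard and signtool both accept (the file names and password here are placeholders):

pvk2pfx -pvk mycert.pvk -spc mycert.spc -pfx mycert.pfx -po mypassword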

Now, I guess this is a security feature. If you have these private key files hanging around, they are easier to steal than if they are locked into your certificate store and marked non-exportable. Fair enough, but I’d rather make that decision for myself than have it imposed by an obscure installation process.

Generics, anonymous methods, Unicode coming to Delphi

CodeGear has posted an updated roadmap for Delphi and C++Builder, its native code development tools for Windows. There is also a .NET Delphi, but it is not covered here.

The RAD Studio product includes both Delphi (Object Pascal) and C++ “personalities”. A release code-named Tiburon, set for later this year, will update Delphi to be “completely Unicode-based”, including the runtime library and Visual Component Library (VCL). There will also be support for generics and anonymous methods.

What about 64-bit, another obvious shortcoming of the current Delphi product? It’s promised for the release after that, code-named “Commodore”, and set for mid-2009.

All of this is a bit late in the day, but probably soon enough to keep Delphi developers happy. The IDE is stable now, and if you want RAD features, Delphi is the best choice for native code apps on Windows.

Microsoft’s Office UI patent trap: watch out with that MFC update

I installed the Visual Studio 2008 Feature Pack today – which, by the way, you will not find if you use Check for Updates on the Visual Studio 2008 Help menu – and noticed this paragraph in the setup agreement:

What’s this all about? Microsoft has not said so, but it seems likely to be part of the company’s war against OpenOffice. The efforts of Sun and others to improve OpenOffice, along with all the XML standardization brouhaha, prodded Microsoft into delivering the most significant Office upgrade for many years. One of its intentions was to increase the differentiation between Microsoft Office and OpenOffice. The strategy would not work if some future OpenOffice just copied the feature, hence the license.

The unintended consequences concern me.

Until now, you could pretty much use the out-of-the-box UI components in Visual Studio and not worry about licensing. That has now changed. According to Microsoft, if you use any element of the Office user interface for which the Feature Pack supplies new classes, then you have to agree to a separate license.

Is this a burden? Well, the licensing page is now out of date, because it says “The program does not involve code”, but the Feature Pack provides what it calls “MFC C++ library source code for the Microsoft Office Fluent User Interface”. However, Microsoft says that the license is free and covers:

…applications on any platform, except for applications that compete directly with the five Office applications that currently have the new UI (Microsoft Word, Excel, PowerPoint, Outlook, and Access)

What does it mean, to “compete directly”? It sounds like the sort of thing lawyers could have fun with. Further, if you read the license details and FAQ, it is clear that you take on a further obligation, which is to comply with Microsoft’s Office Design Guidelines, and even to update your application if Microsoft changes them:

Your Licensed UI must comply with the Design Guidelines. If Microsoft notifies you that the Design Guidelines have been updated or that you are not complying with the Design Guidelines, you will make the necessary changes to comply as soon as you reasonably can, but no later than your next product release that is 6 months or more from the date you receive notice.

OK, so let’s say you are developing some software for a customer. You deliver the app; the customer pays you. Now Microsoft brings out Office 2009, changes the guidelines, and says you must update the app, even though the customer is happy with it as-is. Who will pay? I guess you would need to agree that beforehand; but it is a disincentive to using the Fluent UI.

Presuming you do not want to sign up, avoid all the CMFCRibbon* classes. Microsoft has helpfully commented these with a paragraph that says:

License terms to copy, use or distribute the Fluent UI are available separately.
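
To be clear about what using these classes involves, here is a minimal hypothetical sketch of Feature Pack ribbon code – the sort of thing that comment sits above. The class and method names (CMFCRibbonBar, CMFCRibbonCategory, CMFCRibbonPanel, CMFCRibbonButton) are from the Feature Pack’s MFC additions; the bitmap resource IDs and captions are placeholders, and the message-map and resource plumbing of a real MFC application is omitted:

  // Fragment, not a complete application: creating a ribbon in a Feature Pack frame window
  #include <afxcontrolbars.h>   // master include for the MFC Feature Pack controls

  class CMainFrame : public CFrameWndEx
  {
  protected:
      CMFCRibbonBar m_wndRibbonBar;

      afx_msg int OnCreate(LPCREATESTRUCT lpCreateStruct)
      {
          if (CFrameWndEx::OnCreate(lpCreateStruct) == -1)
              return -1;

          m_wndRibbonBar.Create(this);

          // One ribbon tab with one panel containing one button
          CMFCRibbonCategory* pHome = m_wndRibbonBar.AddCategory(
              _T("Home"), IDB_SMALL_IMAGES, IDB_LARGE_IMAGES);   // placeholder bitmap resources
          CMFCRibbonPanel* pClipboard = pHome->AddPanel(_T("Clipboard"));
          pClipboard->Add(new CMFCRibbonButton(ID_EDIT_PASTE, _T("Paste")));

          return 0;
      }
  };

Write anything along these lines and, as I read the comment, the separate Fluent UI license applies.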

Would any of this stand up in court? I have no idea, but I’d be reluctant to sign up or to use these classes lest I have to find out.