Category Archives: .net

GPU Programming for .NET: Tidepowerd’s GPU.NET gets some improvements, more needed

When I attended the 2010 GPU Technology Conference hosted by NVIDIA, I encountered Tidepowerd, which has a .NET library called GPU.NET for GPU programming.

GPU programming enables amazing performance improvements for certain types of code. Most GPU programming is done in C/C++, but Tidepowerd lets you run code on the GPU from .NET, simply by marking any methods you want to run on the GPU with a [Kernel] attribute:

[Kernel]
private static void AddGpu(float[] a, float[] b, float[] c)
{
    // Get the thread id and total number of threads
    int ThreadId = BlockDimension.X * BlockIndex.X + ThreadIndex.X;
    int TotalThreads = BlockDimension.X * GridDimension.X;

    for (int ElementIndex = ThreadId; ElementIndex < a.Length; ElementIndex += TotalThreads)
    {
        c[ElementIndex] = a[ElementIndex] + b[ElementIndex];
    }
}
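For comparison, here is the plain single-threaded equivalent (my own illustration, not Tidepowerd’s sample code). The kernel above simply spreads the iterations of this loop across many GPU threads, each starting at its own thread id and stepping by the total thread count:

private static void AddCpu(float[] a, float[] b, float[] c)
{
    // One thread walks the whole array; on the GPU the same work is
    // split among thousands of threads via the stride pattern above.
    for (int i = 0; i < a.Length; i++)
    {
        c[i] = a[i] + b[i];
    }
}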

GPU.NET is now at version 2.0 and includes Visual Studio Error List and IntelliSense support. This is useful, since some C# code will not run on the GPU; strings, for example, are not supported. Take a look at this article, which lists the .NET opcodes that do not work in GPU.NET.

GPU.NET requires an NVIDIA GPU with CUDA support and a CUDA 3.0 driver. It can run on Mac and Linux using Mono, the open source implementation of .NET. In principle, GPU.NET could also work with AMD GPUs or others via a vendor-specific runtime, but the latest FAQ says:

Support for AMD devices is currently under development, and support for other hardware architectures will follow shortly.

Another limitation is support for multiple GPUs. If you want to do serious supercomputing relatively cheaply, stuffing a PC with a bunch of Tesla GPUs is a great way to do it, but currently GPU.NET only uses one GPU per active host thread, as far as I can tell from this note:

The GPU.NET runtime includes a work-scheduling system which can distribute device method (“kernel”) calls to multiple GPUs in the system; at this time, this only works for applications which call device-based methods from multiple host threads using multiple CPU cores. In a future release, GPU.NET will be able to use multiple GPUs to execute a single method call.

I doubt that GPU.NET or other .NET libraries will ever compete with C/C++ for performance, but ease of use and productivity count for a lot too. Potentially GPU.NET could bring GPU programming to the broad range of .NET developers.

It is also worth checking out hoopoe’s CUDA.NET and OpenCL.NET, which are free libraries. I have not done a detailed comparison but would be interested to hear from others who have.

Microsoft releases Visual Studio LightSwitch: a fascinating product with an uncertain future

Microsoft has released Visual Studio LightSwitch, a rapid application builder for data-centric applications.


LightSwitch builds Silverlight applications, which may seem strange bearing in mind that the future of Silverlight has been hotly debated since its lack of emphasis at the 2010 Professional Developers Conference. The explanation is either that Silverlight – or some close variant of Silverlight – has a more important future role than has yet been revealed; or that the developer division invented LightSwitch before Microsoft’s strategy shifted.

Either way, note that LightSwitch is a model-driven tool that is inherently well-suited to modification for different output types. If LightSwitch survives to version two, it would not surprise me to see other application targets appear. HTML 5 would make sense, as would Windows Phone.

So LightSwitch generates Silverlight applications, but they do not run on Windows Phone 7, which has Silverlight as its development platform? That is correct, and yes, it does seem odd. I will give you the official line on this, which is that LightSwitch is not aimed primarily at developers, but is for business users who run Windows and who want a quick and easy way to build database applications. They will not care or even, supposedly, realise that they are building Silverlight apps.

I do not believe this is the whole story. It seems to me that either LightSwitch is a historical accident that will soon be quietly forgotten; or it is version one of a strategic product that will build multi-tier database applications, where the server is either Azure or on-premise, and the client any Windows device from phone to PC. Silverlight is ideal for this, with its modern presentation language (XAML), its sandboxed security, and its easy deployment. This last point is critical as we move into the app store era.

LightSwitch could be strategic then, or it could be a Microsoft muddle, since the official marketing line is unconvincing. I have spent considerable time with the beta and doubt that the supposed target market will get on with it well. Developers will also have a challenge, since the documentation is, apparently deliberately, incomplete when it comes to writing code. There is no complete reference, just lots of how-to examples that might or might not cover what you wish to achieve.

Nevertheless, there are flashes of brilliance in LightSwitch and I hope, perhaps vainly, that it does not get crushed under Microsoft’s HTML 5 steamroller. I set out some of its interesting features in a post nearly a year ago.

Put aside for a moment concerns about Silverlight and about Microsoft’s marketing strategy. The truth is that Microsoft is doing innovative work with database tools, not only in LightSwitch with its model-driven development but also in the SQL Server database projects and “Juneau” tools coming up for “Denali”, the next version of SQL Server, which I covered briefly elsewhere. LightSwitch deserves a close look, even if it is not yet clear why you would actually want to use it.


The strategy behind Mono has shifted: ten years of open source .NET

Yesterday, SUSE and Xamarin announced, in effect, the transfer of all things Mono to Xamarin.

The agreement grants Xamarin a broad, perpetual license to all intellectual property covering Mono, MonoTouch, Mono for Android and Mono Tools for Visual Studio. Xamarin will also provide technical support to SUSE customers using Mono-based products, and assume stewardship of the Mono open source community project.

Xamarin is a startup formed by Mono founder Miguel de Icaza after Attachmate acquired Novell and SUSE and ceased Mono development.

Attachmate acquired Novell in November 2010. Mono has been plucked from the abyss with impressive speed.

That said, the strategy behind Mono has shifted. Mono exists because de Icaza liked what Microsoft announced back in 2000 when it introduced C# and the .NET Framework. Microsoft made a show of standardizing the .NET CLI (Common Language Infrastructure), which made PR sense at the time since there was controversy over Sun’s ownership of Java, though nobody really believed that Microsoft knew how to steward an open source development platform or indeed believed that it was really serious about it. History largely justifies that scepticism; but de Icaza called Microsoft’s bluff and forged ahead with Mono, implementing not only the CLI and C# but most of the .NET Framework as well.

The goal of Mono, as I recall, was to bring the benefits of C# and .NET to Linux developers, and to enable developers to move applications freely between Windows and Linux. Apple OS X was also on the radar, though it took longer to become much use. Recalling Mono’s early days, de Icaza said:

Mono to me is a means to an end: a technology to help Linux succeed on the desktop.

Mono worked remarkably well from quite early on, but never quite well enough to persuade mainstream developers it was a sensible choice for applications that would otherwise have run on Windows. It did emerge as a viable and productive toolset and platform for Linux and a number of Mono applications became popular, including Beagle search, Tomboy notes, and F-Spot photo management. Some ASP.NET applications run on Mono; I have one on this site. Another Mono success was its use as the scripting engine in Unity, a game development platform.

A big problem for Mono though was the lack of a business model. There was support and servicing of course, which must have generated some revenue for Novell, but most Mono use is free. Novell possibly had in mind that Mono could be significant as an application server, but it has never become a really trusted platform in the Enterprise. For example, as Alan Radding (Dancing Dinosaur) notes:

DancingDinosaur has not found any SUSE on z user that has successfully implemented .NET apps on the mainframe. A few have tried but reported that Mono on z wasn’t ready for prime time.

Even among the free software and open source community, Mono was hampered by suspicion of Microsoft. If Mono became successful enough to threaten Microsoft, would lawyers appear? Given the way Microsoft is currently behaving with Android, filing legal actions and signing up licensees, those fears might not be unwarranted.

So what is Mono today? The answer is that Mono is now primarily a mobile platform. The Xamarin home page makes this clear, as well as making it apparent that the Mono team has discovered the value of a business model.


Xamarin is tapping into two real business needs. One is the need for a cross-platform mobile development platform that works. The second is a way for Windows developers to use their existing C# skills for mobile development, given that they might not be happy with the tiny market share currently achieved by Windows Phone 7.

When I had a quick try with MonoTouch I was impressed, and I would like to spend some more time with it and with Mono for Android.

Mono has tough competition though, in particular from PhoneGap, Appcelerator’s Titanium, and Adobe AIR. I was interested to see that Adobe is coming up with a packager for AIR on Android, which may significantly improve it as a cross-platform mobile toolkit.

Still, Xamarin is small and nimble and I expect it to succeed. It also has Visual Studio integration, which is an advantage; one of the pieces Xamarin has now licensed from SUSE is Mono Tools for Visual Studio.

The downside of these latest developments is that if you depend on Mono for the desktop or for ASP.NET, you may find these parts of the Mono project getting little attention from the new company. But Mobile is all that matters now, right?

I write this on July 19 2011. According to Wikipedia:

Recognizing that their small team could not expect to build and support a full product, they launched the Mono open source project, on July 19, 2001 at the O’Reilly conference.

Well, if there was a launch there it was low-key. It is not mentioned in this report. But de Icaza does recall:

We planned the announcement to come by July 19th 2001, so we could announce this at the O’Reilly conference, as Tim O’Reilly had been very supportive of this effort, and had offered his help since the early stages, when it was still a very young idea. When we announced the project launch we had our team in place, and we were shipping our metadata framework and our C# compiler as well as a few initial classes. So officially the Mono project was launched on that date, but it had been brewing for a very long time.

Happy Anniversary!

Hands on debugging an Azure application – what to do when it works locally but not in the cloud

I have been writing a Facebook application hosted on Microsoft Azure. I hit a problem where my application worked fine on the local development fabric, but failed when deployed to Azure. The application was not actually crashing; it just did not work as expected. Specifically, either the Facebook authentication or the ASP.NET Forms Authentication was failing; when I tried to log on, the log on failed.

This scenario, where the app works locally but not on Azure, is potentially a bad one because you do not have the luxury of breakpoints and variable inspection. There are several approaches. You can have the application write a log, which you could download or view by using Remote Desktop to the Azure instance. You can have the application output debug messages to HTML. Or you can use IntelliTrace.

I tried IntelliTrace. It is easy to set up: just check the box when deploying.


Once deployed, I tried the application and clicked the Log On button, after which the screen flashed but still asked me to log on. The log on had failed.


I closed the app, opened Server Explorer in Visual Studio, drilled down into the Windows Azure Compute node and selected View IntelliTrace Logs.


The logs took a few minutes to download. Then you can view the IntelliTrace log summary, which includes a list of exceptions. You can double-click an exception to start an IntelliTrace debug session.


Useful, but I still could not figure out what was wrong. I also found that IntelliTrace did not show the values for local variables in its debug sessions, though it does show exceptions in detail.

Now, if you really want to debug and trace an Azure application you had better read this MSDN article, which explains how to create custom debugging and trace agents and write logs to Azure storage. That seems like a lot of work, so I resorted to the old technique of writing messages to HTML.
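For the record, writing messages to HTML needs nothing elaborate. A minimal sketch, assuming an ASP.NET Web Forms page with a Literal control on it (the name DebugOutput is my own, not from the app):

private void WriteDebug(string message)
{
    // Append a timestamped, HTML-encoded line to a Literal control
    // on the page, so diagnostics show up in the rendered output.
    DebugOutput.Text += Server.HtmlEncode(
        string.Format("{0:HH:mm:ss.fff} {1}", DateTime.UtcNow, message)) + "<br/>";
}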

At this point I should mention something you must do in order to debug on Azure and remain sane: enable Web Deploy.


It is not that hard to set up, though you do need to enable Remote Desktop, which means a trip to the Azure management portal. In my case I am behind a firewall so I needed to configure Web Deploy to use the standard SSL port. All is explained here.

Why use Web Deploy? Well, normally when you deploy to Azure the service actually builds, copies and spins up a new virtual machine image for your app. That process is fundamental to Azure’s design and means there are always at least two copies of the VM in existence. It is also slow, so if you are making changes to an app, deploying, and then testing, you will spend most of your time waiting for Azure.

Web Deploy, by contrast, writes to your existing instance, so it is many times quicker. Note that once you have your app working, it is essential to deploy it properly, since Azure might revert your app to the last VM you created.

With Web Deploy enabled I got back to work. I discovered that FormsAuthentication.SetAuthCookie was not working. The odd thing being, it worked locally, and it had worked in a previous version deployed to Azure.

Then I began to figure it out. My app runs in a Facebook canvas. Since the app is served from a different domain than Facebook, its cookies are third-party cookies and may be rejected. When I ran the app locally, the app was in a different IE security zone, so different rules applied.

But why had it worked before? I realised that when it worked before I had used Google Chrome. That was it: IE worked locally, but only Chrome worked when deployed.

I have given up trying to fix the specific problem for the moment. I have dug into it a little, and discovered that cookie handling in a Facebook canvas with IE is a long-standing problem, and that the Facebook C# SDK may have bugs in this area. It is not essential for my sample; I have found I can get by with the Facebook session. To get the user ID, for example:

FacebookWebContext.Current.Session.UserId
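Incidentally, a workaround often suggested for IE cookie problems in iframes (I have not verified that it fixes this particular case) is to send a P3P compact policy header, since IE discards third-party cookies from sites that do not declare one. A sketch, placed in Global.asax:

protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Declare a P3P compact policy so IE will accept cookies set by
    // the app while it is running framed inside the Facebook canvas.
    Response.AddHeader("P3P", "CP=\"CAO PSA OUR\"");
}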

The time has not been wasted though, as I have learned a bit about Azure debugging. I was also amused to discover that my Azure VM has activation problems.


The frustration of developing for Facebook with C#

I am researching a piece on developing for Facebook with Microsoft Azure, and of course the first thing I did was to try it out.

It is not easy. The first problem is that Facebook does not care about C#. There are four SDKs on offer: JavaScript, Apple iOS, Google Android, and PHP. This has led to a proliferation of experimental and third-party SDKs, which are mostly not very good.

The next problem is that the Facebook API is constantly changing. If you try to wrap it neatly in an SDK, it is likely that some things will break when the next big change comes along.

This leads to the third problem, which is that Google may not be your friend. That helpful article or discussion on developing for Facebook might be out of date now.

Now, there are a couple of reasons why it should be getting better. Jim Zimmerman and Nathan Totten at Thuzi (Totten is now a technical evangelist at Microsoft) created a new C# Facebook SDK, needing it for their own apps and frustrated with what was on offer elsewhere. The Facebook C# SDK looks like it has some momentum.

C# 4.0 actually works well with Facebook, thanks to the dynamic keyword, which makes it easier to cope with Facebook’s changes and also lets it map closely to the official PHP SDK, as Totten explains.
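Totten’s point is easy to see in a snippet. A minimal sketch using the SDK’s FacebookClient, assuming accessToken holds a valid token:

var client = new FacebookClient(accessToken);

// The result comes back as a dynamic object, so properties are accessed
// by the same names Facebook uses in its JSON responses; when the API
// changes there are no typed wrappers to regenerate.
dynamic me = client.Get("me");
string name = me.name;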

Nevertheless, there are still a few problems. One is that documentation for the SDK is sketchy to say the least. There is currently no reference for it on the Codeplex site, and most of the comments are the kind that produces impressive-looking automatic documentation but actually tells you nothing of substance. Plucking one at random:

FacebookClient.GetAsync(System.Collections.Generic.IDictionary<string,object>)

Summary:
Makes an asynchronous GET request to the Facebook server.

Parameters:
parameters: The parameters.

Another problem, inherent to dynamic typing, is that IntelliSense (auto-completion in Visual Studio) has limited value. You constantly need to reference the Facebook documentation.

Finally, the SDK has changed quite a bit in different versions and some of the samples reference old versions.

In particular, I found it a struggle to get OAuth authentication and access token retrieval working, and ended up borrowing Totten’s sample code here, which mostly works. Note though that the code in the sample does not cope with the same user logging out and logging in again; I fixed this by changing his InMemoryUserStore to use a ConcurrentDictionary instead of a ConcurrentBag, though there are plenty of other ways you can store users.
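The fix is simple in outline. Here is a sketch of my own (not Totten’s actual code; FacebookUser stands in for whatever user type the sample defines): keying the store on the Facebook user id means a returning user overwrites the stale entry, where a bag would simply accumulate duplicates.

// Requires System.Collections.Concurrent.
public class InMemoryUserStore
{
    // Keyed by Facebook user id: adding the same user twice overwrites
    // the old entry instead of appending a duplicate.
    private readonly ConcurrentDictionary<long, FacebookUser> users =
        new ConcurrentDictionary<long, FacebookUser>();

    public void AddOrUpdate(FacebookUser user)
    {
        users[user.Id] = user;
    }

    public FacebookUser Find(long id)
    {
        FacebookUser user;
        return users.TryGetValue(id, out user) ? user : null;
    }
}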

I’m puzzled why Microsoft does not invest more in making this easier. Microsoft invested in Facebook and it is easy to get the impression that Microsoft and Facebook are in some sort of informal alliance versus Google. Windows Phone 7, for example, ties in closely with Facebook and is probably the best Facebook phone out there.

As it is, although I prefer coding in C# to PHP, I would say that choosing PHP as the platform for your Facebook app will present less friction.

ReSharper 6.0 arrives: intelligent editing and decompiling for Visual Studio

JetBrains has released ReSharper 6.0, an add-on for Visual Studio 2008 and 2010 that delivers a remarkable range of tools, mostly focused on code editing and static analysis. There is also a unit test runner and a source code decompiler.

The heart of ReSharper is refactoring, hence the name, and it adds a large number of refactoring options to Visual Studio. These are nicely integrated with the editor, not only as right-click menu options, but with light-bulb suggestions that appear automatically. Here, for example, ReSharper is telling me that I could use implicit type declaration, and offering to make the change for me, or alternatively to suppress this type of suggestion forever if I do not like it.

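The suggestion amounts to this trivial before-and-after (my own illustration):

// Before: the explicit type on the left just duplicates the right-hand side.
List<string> names = new List<string>();

// After applying ReSharper’s “use implicitly typed local variable” fix:
// var names = new List<string>();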

Source code decompiling is also nicely done. IClaimsIdentity, for example, is part of the .NET Framework, so its source code is not normally available; with ReSharper, I can navigate to decompiled source.


This could be legally sensitive, so I have to pass a Decompiler Legal Notice in which JetBrains attempts to disclaim liability.


Then I am in, though the results are not exciting in this instance.


If you only want the decompiler, you may find the free dotPeek is all you need.

The what’s new list in ReSharper 6.0 is long. It includes support for JavaScript, ASP.NET Razor, CSS and HTML; better XAML support, including creating properties and dependency properties from usage; and macros for file headers, which automate things like inserting the current date and time.

The pricing is not excessive: in the UK it costs £148 for a personal license or £259 for a commercial license. If you think ReSharper will save you time and improve your code quality, which it likely will, it will soon pay for itself.

Microsoft Office 365: the detail and the developer story

I attended the UK launch of Office 365 yesterday and found it a puzzling affair. The company chose to focus on small businesses, and what we got was several examples of customers who had discovered the advantages of storing documents online. We were even shown a live video conference with a jerky, embarrassing webcam stream adding zero business value and reminding me of NetMeeting back in 1995 – which by the way was a rather cool product. Most of what we saw could have been done equally well in Google Apps, except for a demo of the vile SharePoint Workspace for offline editing of a shared document, though if you were paying attention you could see that the presenter was not really offline at all.

There seems to be a large amount of point-missing going on.

There is also a common misconception that Office 365 is “Office in the cloud”, based on Office Web Apps. Although Office Web Apps is an interesting and occasionally useful feature, it is well down the list of what matters in Office 365. It is more accurate to say that Office 365 is for those who do not want to edit documents in the browser.

I am guessing that Microsoft’s focus on small businesses is partly a political matter. Microsoft has to offer an enterprise story and it does, with four enterprise plans, but it is a sensitive matter considering Microsoft’s relationship with partners, who get to sell less hardware and will make less money installing and maintaining complex server applications like Exchange and SharePoint. The, umm, messaging at the Worldwide Partner Conference next month is something I will be watching with interest.

The main point of Office 365 is a simple one: that instead of running Exchange and SharePoint yourself, or with a partner, you use these products on a multi-tenant basis in Microsoft’s cloud. This has been possible for some time with BPOS (Business Productivity Online Suite), but with Office 365 the products are updated to the latest 2010 versions and the marketing has stepped up a gear.

I was glad to attend yesterday’s event though, because I got to talk with Microsoft’s Simon May and Jo Carpenter after the briefing, and they answered some of my questions.

The first was: what is really in Office 365, in terms of detailed features? You can get this information here, in the Service Description documents for the various components. If you are wondering what features of on-premise SharePoint are not available in the Office 365 version, for example, this is where you can find out. There is also a Support Service Description that sets out exactly what support is available, including response time objectives. Reading these documents is also a reminder of how deep these products are, especially SharePoint which is a programmable platform with a wide range of services.

That leads on to my second question: what is the developer story in Office 365? SharePoint is built on ASP.NET, and you can code SharePoint applications in Visual Studio and deploy them to Office 365. Not all the services available in on-premise SharePoint are in the online version, but there is a decent subset. Microsoft has a SharePoint Online for Office 365 Developer Guide with more details.

Now start joining the dots with technologies like Active Directory Federation Services – single sign-on to Office 365 using on-premise Active Directory – and Windows Azure which offers hosted SQL Server and App Fabric middleware. What about using Office 365 not only for documents and email, but also as a portal for cloud-hosted enterprise applications?

That makes sense to me, though there are still limitations. Here is a thread where someone asks:

Does some know if it is possible to make a database connection with Office365, SharePoint (Designer) and SQL Azure database ?

and the answer from Microsoft’s Mark Kashman on the SharePoint team:

You cannot do this via SharePoint Designer today. What you can do is to create a Silverlight or javaScript client application that calls out to SQL Azure.

In the near future, we are designing a way to make these connections using the base SharePoint technology called BCS (Business Connectivity Services) where then you could develop a service to service to SQL Azure.

If you cannot wait, check out the Cloud Connector for SharePoint 2010 from Layer 2 GmbH.

It seems obvious that Office 365 and Azure together have potential as a developer platform.

What about third-party applications and extensions for Office 365? This is another thing that Microsoft did not talk about yesterday; but it seems to me that there is potential here as well. It is not well integrated, but you can search Microsoft Pinpoint for Office 365 applications and get some results. If Office 365 succeeds, and I think it will, there is an opportunity for developers here.

Common sense on Windows 8, Silverlight and .NET

I am wary about writing another post on this subject in the absence of any further news, but since there is a lot of speculation out there I thought it would be worth making a few further observations.

Will Windows 8 support Silverlight and/or some other variety of .NET in its new touch-centric mode? I will be astonished if it does not. Aside from other considerations, this is an essential unifying piece between the Windows Phone 7 developer platform and the Windows 8 developer platform, which from what we have seen have a similar user interface. For further evidence, try an internet search for “Jupiter” and “appx”.

Why isn’t Microsoft already shouting about this? A good question. Part of the answer is that Microsoft wants to get developers enthused about the forthcoming Build conference in September, and is holding back information.

Another part of the answer is that Windows historically has kept .NET as a layer above the operating system, rather than as part of it. We saw this in Windows 7, where to take advantage of new features like jump lists or thumbnail toolbars, .NET developers had to use a supplementary Windows API Code Pack. The Windows team delivered only native code or COM APIs.

Admittedly, there are differences this time around. The Windows team is not just delivering native code APIs, but also an HTML and JavaScript API. This is a break with the past, hence the talk of a new platform.

When it comes to desktop applications, would not Silverlight or something .NET based be a better choice than HTML5? I can see both sides of this. On one side is all the effort Microsoft has invested in .NET and Silverlight over the past decade. As I’ve noted before, I see Silverlight as what client-side .NET should have been from the beginning: lightweight, secure, and simple to install, but with support for C# and much of the .NET Framework which developers know so well.

On the other hand, I can see Microsoft wanting to tap into the wave of HTML5 development and to make it easy for web developers to build apps for Windows 8.

In the end, developers will most likely have the choice. That puts pressure on Microsoft’s developer division to provide strong tools for two different development models; but I think that is what we will get.

Is .NET itself under threat? As far as I am aware, Microsoft has no plan “B” in terms of web and application server technology, and its Azure cloud is largely a .NET platform, though there are efforts to support other things like PHP and Java. Further, this aspect of the Microsoft platform is under Server and Tools, which is 100% behind .NET as far as I can tell. We have also seen Silverlight crop up in the user interfaces for new server products like Intune and System Center. On the server then, there is no evidence for .NET doubts at Microsoft; and considering the trend towards cloud+device computing, the server is now at the heart of most business application development.

That said, Microsoft has challenges in sustaining .NET momentum. It cannot afford to fail with Azure, yet other platforms such as Amazon EC2 have greater developer mindshare as cloud computing platforms. VMware, with its Java-based Spring framework, is another key competitor. Microsoft was late to the server virtualisation party with Hyper-V. I also see declining market share for IIS versus Apache in Netcraft’s statistics, although these figures are distorted by millions of little-used domains that get shunted from one platform to another by major hosting providers.

Further, it seems to me that the fortunes of .NET on the server cannot be completely separated from what happens on the client. One of the attractions of .NET is the integration between client and server, with Visual Studio as the tool for both. Windows has lost momentum to Apple in mobile, in tablets, and in high-end laptops, making Windows-only clients less attractive. In that context, the decision of the Windows team to favour HTML5 over .NET is a blow, in that it seems to concede that the future client is cross-platform, though I expect there will be some sort of outcry when we see all the proprietary hooks Microsoft has implemented to get HTML5 apps integrated into Windows 8.

Therefore these really are difficult times for .NET. I do not count Microsoft out though; it still dominates business computing, and amongst consumers the Xbox may prove an important new platform, as Tom Warren notes.

While I have reservations about Windows 8, it does demo nicely as a new touch-centric operating system, and Microsoft surely has chances in the corporate world with new-style tablets that integrate with its system management tools and run Microsoft Office.

Finally, the angst over the role of .NET in Windows 8 shows that many developers actually like the platform, including Visual Studio, the C# language, the .NET Framework, and XAML for building a rich user interface.

Full circle at Microsoft: from the early days of .NET to the new Chakra JavaScript engine

A discussion with a friend about the origins of Microsoft’s .NET runtime prompted a little research. How did it come about?

A quick search does not throw up any detailed accounts. Part of the problem is that much of it is internal Microsoft history, confidential at the time.

One strand, mentioned here, is Colusa’s OmniVM:

OmniVM was based on research carried out by Steven Lucco at Carnegie Mellon University. Steven co-founded Colusa Software in February 1994 in Berkeley, California. Omniware was released in August 1995. Colusa started working with Microsoft in February 1996. Microsoft acquired Colusa Software on March 12, 1996. Steven is currently a senior researcher at the Microsoft Bay Area Research Center.

OmniVM was appealing to Microsoft because Colusa had already created Visual Basic and C/C++ development environments for the VM. The VM was also claimed to be capable of running Java.

Microsoft took to calling the VM by the name of CVM, presumably for Colusa Virtual Machine. Or perhaps this is where the code name Cool came into being. Other names used at Microsoft include Universal Virtual Machine (UVM), and Intermediate Language (IL).

Microsoft’s Jason Zander, commenting to a story on this blog, does not mention OmniVM:

The CLR was actually built out of the COM+ team as an incubation starting in late 1996. At first we called it the "Component Object Runtime" or COR. That’s why several of the unmanaged DLL methods and environment variables in the CLR start with the Cor prefix.

Still, the timing pretty much matches. If Lucco came to Microsoft in 1996, he could have been part of an incubation project starting later that year.

In June 1999 Microsoft previewed the Common Executable Format for Windows CE:

A demonstration on Common Executable Format (CEF), a new compiler target within the Visual C++® development system for Windows CE, was also presented. This compiler enables cross-processor portability within a category of devices, such as Palm-size PCs or Handheld PCs. A single program executable under CEF is translated to the native code on either the host PC or the device, as desired. This capability eliminates the need for developers to recompile an application for every possible processor on a given Windows CE-based appliance before bringing it to market, thus enabling them to support every version of a device (Palm-size or Handheld PC) quickly and easily.

In 2000 I interviewed Bob Powell, then at Stingray, who told me this in relation to .NET:

There was an early version of the system for Windows CE called the Common Executable Format (CEF). The Pocket PC, which uses around seven different processor types, and which has many different versions of the operating system, is a deployment nightmare. This problem was addressed by the CEF, which was a test case. What is now in the IL is a more refined version of that.

Hmm, now that Windows is coming to ARM alongside x86, this sounds like it could be useful technology … though despite obvious similarities, I don’t think CEF was really an early version of the CLR. Maybe the teams communicated to some extent.

Now this is interesting and brings the story up to date. Lucco is still at Microsoft and apparently his team built Chakra, the new JavaScript engine introduced in Internet Explorer 9:


Steven E. Lucco is currently the chief architect for the Microsoft Browser Programmability and Tools (BPT) team. BPT builds the Internet Explorer’s Chakra Javascript script engine, as well as the Visual Studio tools for creating scalable, efficient Web client applications.

Right now, these are dark days for .NET, because Microsoft now seems to be positioning HTML and JavaScript as the new universal runtime.

It seems that the man who perhaps began the .NET Runtime is also at the centre of the technology that might overtake it.

Update: this post has prompted some discussion, and the consensus so far is that the OmniVM acquisition probably had little to do with the technology that ended up as .NET. The one thing that is beyond doubt is that the COM+ team created the .NET CLR, as Zander reported. I actually spoke to Zander at TechEd recently and we touched on his early days at Microsoft working with Scott Guthrie:

I was actually one of the original CLR developers. When Scott and I first started working together, he invented ASP.NET and my team invented the CLR.

The history is interesting and if the relevant people at Microsoft are willing to talk about it in more detail it is something I would love to write up – so if that is you, please get in touch!

Gang of Four member Erich Gamma joining Microsoft’s Visual Studio team

Microsoft’s Jason Zander has announced that Erich Gamma will be joining the Visual Studio team as a Microsoft Distinguished Engineer.

Gamma is one of the “Gang of Four” who shook up software development back in 1994 with the book Design Patterns: Elements of Reusable Object-Oriented Software.

The other authors are Richard Helm, Ralph Johnson and John Vlissides.

Gamma has previously been associated with Java rather than .NET. He was co-developer with Kent Beck of the JUnit unit test framework, and also worked on the Eclipse tools platform and at IBM Rational on application lifecycle management (ALM).

It is a prestigious hire and I would expect Gamma’s influence on Visual Studio to be a positive one, especially in areas like software quality, refactoring and ALM.