Category Archives: .net

Data Access in Windows 8 WinRT

I’ve been teasing out details about the Windows Runtime (WinRT) here at Microsoft’s BUILD conference in Anaheim, California.

WinRT is the runtime for touch-friendly Metro-style apps, in effect Microsoft’s operating system for tablets, though it has a dual personality and full desktop Windows is also available.

Microsoft experts told me that there is no client for SQL Server or other network databases in WinRT. To access such databases, you have to create a web service and call that. This is similar to the Silverlight model. The .NET Framework in WinRT will support WCF RIA Services, which is one option for this.
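To make this concrete, here is a minimal C# sketch of the pattern: the Metro app calls an HTTP service, and the service (not the app) talks to SQL Server. The endpoint URL is hypothetical, and HttpClient assumes the .NET 4.5 libraries included with the Windows 8 preview.

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class CustomerClient
{
    // Hypothetical endpoint: a web service you host in front of SQL Server.
    private const string ServiceUrl = "http://example.com/api/customers";

    public async Task<string> GetCustomersJsonAsync()
    {
        using (var client = new HttpClient())
        {
            // The service does the database work; the app only sees HTTP.
            return await client.GetStringAsync(ServiceUrl);
        }
    }
}
```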

If you only want to store and retrieve data locally, there seems to be a JET API, or you could use something like SQLite or roll your own simple database manager. HTML and JavaScript apps support IndexedDB. All these options read and write data to the app’s isolated storage; they do not enable free access to the file system.
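For local data the pattern is different again: everything lives inside the app's own storage. A minimal sketch using the WinRT Windows.Storage classes as they appear in the preview:

```csharp
using System.Threading.Tasks;
using Windows.Storage;

public class LocalStore
{
    public async Task SaveNoteAsync(string text)
    {
        // LocalFolder is the app's isolated storage; there is no
        // free access to the wider file system from here.
        StorageFolder folder = ApplicationData.Current.LocalFolder;
        StorageFile file = await folder.CreateFileAsync(
            "note.txt", CreationCollisionOption.ReplaceExisting);
        await FileIO.WriteTextAsync(file, text);
    }

    public async Task<string> LoadNoteAsync()
    {
        StorageFile file =
            await ApplicationData.Current.LocalFolder.GetFileAsync("note.txt");
        return await FileIO.ReadTextAsync(file);
    }
}
```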

LINQ (Language Integrated Query) is supported but of course that is only as useful as the data it can connect to.
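LINQ to Objects over an in-memory collection, at least, is always available; a trivial example:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class LinqDemo
{
    static void Main()
    {
        var primes = new List<int> { 2, 3, 5, 7, 11, 13 };
        // The query syntax works on any IEnumerable<T>; the question
        // in WinRT is what data sources you can reach in the first place.
        var big = from p in primes where p > 5 orderby p descending select p;
        foreach (var p in big) Console.WriteLine(p); // 13, 11, 7
    }
}
```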

A few facts about Microsoft’s new Windows Runtime

I’ve just come out of Martyn Lovell’s talk on WinRT internals here at BUILD in Anaheim, California.

Make no mistake: Microsoft has re-invented the Windows API in WinRT. Just to recap, WinRT is the API for Metro-style applications, the touch-centric, app-centric API for tablets and, one presumes, eventually for Windows Phone (though Microsoft has yet to admit it).

WinRT is only usable from Metro applications. You cannot call WinRT from a Win32 application, nor vice versa*. I think it is reasonable to assume that a future version of Windows which runs only WinRT is a possibility, and that Windows 8 on ARM will look a bit like that, with Win32 still present but mainly out of sight; but I am speculating.

Does that mean Win32 is now legacy? In a way, but such a huge legacy that for the moment we should think of Windows 8 as two platforms side by side.

There is no inter-app communication in WinRT other than by the pre-defined contracts built into the system (though Lovell noted that you could always use the file system and polling for a crude inter-process communication).

There is no way to install a shared dynamic library. Apps can only use the system libraries together with what you install with the app. Each app lives in its own context and is isolated. In other words, WinRT is not extensible, other than within your app’s code*.

If you figure out a way to bypass limitations of WinRT by calling other Windows APIs, your app might work but the submission process for the Windows Store will prohibit it.

Versioning is built into WinRT. This means that when Windows 9 comes along, you will be able to code just against the Windows 8 versions of the classes, for compatibility, and your IDE can support this by only exposing the Windows 8 version of the API.

The CLR exists in the Metro environment, for use by .NET applications, complete with JIT (just-in-time) compilation. However, only a subset of the .NET Framework libraries is included; Microsoft aimed to include only what was necessary for Metro. I am not sure yet what is in and what is out, beyond the obvious (no Windows Forms, for example), but will be investigating what is documented. From the .NET side, the native WinRT APIs look similar to COM callable wrappers. That said, you do not normally need to care about the underlying WinRT interfaces: normally you interact with WinRT classes, which feels more natural from .NET than working with raw COM.

WinRT is full of asynchronous calls. Lovell told us that Microsoft had seen in the past that if both synchronous and asynchronous APIs are available for the same function, developers often use the synchronous version even when they should not, making applications less responsive. The new await keyword in C# makes the asynchronous style easy to code.
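Here, roughly, is the pattern from C#: a WinRT async operation returns immediately, and await resumes the method when the result arrives. This sketch uses the file picker; treat it as illustrative rather than definitive, since the preview APIs may change.

```csharp
using System.Diagnostics;
using Windows.Storage;
using Windows.Storage.Pickers;

public class PickerExample
{
    public async void ShowFileContents()
    {
        var picker = new FileOpenPicker();
        picker.FileTypeFilter.Add(".txt");

        // PickSingleFileAsync returns at once; await resumes here,
        // back on the UI thread, once the user has chosen a file.
        StorageFile file = await picker.PickSingleFileAsync();
        if (file != null)
        {
            string text = await FileIO.ReadTextAsync(file);
            Debug.WriteLine(text);
        }
    }
}
```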

WinRT uses the same metadata format as .NET (the ECMA-335 format that tools like ILDasm read). This means you get rich metadata for IntelliSense and debugging; but note that the actual runtime is not .NET, Microsoft just borrowed the metadata format.

WinRT objects are reference counted like COM for memory management, with weak references to avoid circularity. You should not have to worry about this; you can code according to the conventions of your language.

There are three ways to write WinRT applications. One is C++, in which case you write directly to the “projection” of WinRT into your language. The second is .NET, in which case your code goes via the CLR. The third is HTML and JavaScript, in which case your code goes via the “Chakra” JavaScript engine also used by Internet Explorer 9 and higher. Lovell assured me that there is little difference in performance in most cases, though there could be advantages for C++ in certain niche scenarios. Of course we heard that story for .NET as well, but from what I have seen it is more plausible in WinRT.

There is no message loop in WinRT. There is no GDI in WinRT. All graphics are via DirectX. XNA, the .NET games framework, is not supported. It seems that you will need to use C++ for fancy DirectX coding, though this is not confirmed. Of course your XAML or Canvas code will be rendered by DirectX under the covers.

It is fascinating to see how Microsoft has borrowed XAML and the .NET metadata format, while keeping WinRT native rather than .NET at its core. My take is that Microsoft intended to preserve the productivity of .NET, but without any performance compromise.

Despite the inclusion of .NET, the fact that only a subset of the Framework is available, and that interop to the Windows API will not work*, means that most existing apps will need considerable work to port to Metro.

*Updates

A few clarifications.

It has been shown that you can call WinRT from Win32 (the favoured term for Win32 applications seems to be “desktop applications”) though I’m not sure how useful it is.

Concerning P/Invoke (Platform Invocation) to Win32 APIs, apparently this does work for a certain specified, small subset of the Windows API. It also works for your own native code DLL, with the proviso that if your native code DLL calls a disallowed Win32 API it will raise an error.

WinRT is partially extensible. A Framework Extension is a library which you can reference as a dependency in your app’s manifest. When the app is deployed it will download this dependency from the Windows Store. An example is the C Runtime Library. An extension library installs into its own directory, and can be used by multiple WinRT apps provided each one also references it in their manifests. However, the caveat is that only Microsoft can create these extensions: there is no way to create your own shared extension for general distribution, though an enterprise can deploy a shared extension internally.

Here comes Windows 8 – but what about the apps?

I’ve spent what feels like most of the night trying out the first developer preview of Windows 8, using an Intel tablet PC loaned by Microsoft for that purpose. The early preview is frustrating, in that many of what will be standard apps like Mail and Contacts are missing, but it is already obvious that Microsoft has done a great job with what I am calling the “Metro” platform within Windows 8. Here is Control Panel in the new user interface:

[Screenshot: Control Panel in the Metro-style user interface]

This is the touch-optimized personality of the new operating system, featuring a Start screen with live tiles like an evolved Windows Phone 7, apps that run full-screen to create an "immersive user interface", and swipe controls to show application menus, switch apps, or access standard features.

It is a delight to use; but this is Metro, with its own Windows Runtime (WinRT), a native code API which is wrapped for access by either HTML and JavaScript apps (which also use the IE 10 runtime), or by C/C++, VB or C# apps driving XAML-defined user interfaces – yes, kind of like Silverlight but not Silverlight.

What about all our Windows apps? For that we need the desktop personality in Windows 8. Tap the Desktop tile, or launch a "Desktop" app, and it suddenly appears, looking much like Windows 7.

The problem: while Windows 8 "Metro" looks great, there are currently zero apps for it, or at least only those supplied with the preview, because it is brand new.

In truth then, Microsoft has not quite done what would have been ideal, which is to make Windows touch-friendly. That would have been impossible. Instead, it has integrated Windows with a new touch-friendly platform.

The key question: will this new platform attract the support it needs from developers in order to become successful in its own right, so that we can do most of our work there and retreat to the desktop only for legacy apps, or apps which really need mouse and keyboard?

It is a big ask, and we have seen HP with WebOS, and probably RIM with PlayBook, fail at this task.

Of course it is still Windows; but I do have a concern that a proportion of users will try Windows 8, find the transitions between Desktop and Metro unsettling, and stick with Windows 7.

Let me add these are very much first impressions; and that Metro really does look good. Perhaps it will win; but a lot of momentum has to build behind it for that to be possible.

Building Windows – when Microsoft shows its hand

I’m in Anaheim, California on the eve of Microsoft’s BUILD conference. I have heard the phrase “wait until BUILD” so many times from Microsoft over the last few months that it has given this conference a special flavour. After Wednesday, the company will have to think of another way to avoid awkward questions like what is happening to Silverlight.

This is the latest chapter in the progression of Windows: server, client and mobile. In particular, I will be trying to understand Microsoft’s software development platform. Whatever it looks like, it will be diverse, and include native code, HTML and JavaScript, .NET code (including Silverlight), and perhaps some new hybrid. What will be the pros and cons of each approach, how do developers create apps that span desktop, tablet and mobile, and how will the delivery model change in the app store era?

Interesting questions; but the other theme is how effectively Microsoft will compete as the importance of desktop Windows shrinks. Cloud, mobile and tablet are what matter here, and after many missteps time is running out.

Not much to add except “watch this space” over the next few days; though I would be interested in any specific comments or questions on Microsoft’s strategy.

PhoneGap comes to Windows Phone

Nitobi has announced PhoneGap for Windows Phone 7, nicely timed just before the Microsoft BUILD conference next week.

PhoneGap is a cross-platform mobile development tool that uses the HTML and JavaScript engine on the phone as its runtime, supplemented by extensions which give access to other device features:

After unpackaging the contents of the www folder, your www/index.html file is loaded into an embedded headless browser control. This is essentially the same paradigm as other platforms, except here it is an IE9 browser and not a webkit variant. IE9 is a much more standards-compliant browser than previous IEs, and implements commonly used html5 features like DOMContentLoaded events, addEventListener interfaces, and CSS3. Be sure to use the HTML5 doctype (<!DOCTYPE html>) to get the html5 implementation, otherwise the browser may fall back to a compatibility mode, and your code will likely choke and die.

The version for Windows Phone 7, just released in preview, is extended to support features including the camera, accelerometer, contacts, and notifications. There is also support for plugins:

PhoneGap-WP7 maintains the plugability of other platforms via a command pattern, to allow developers to add functionality with minimal fuss, simply define your C# class in the WP7GapClassLib.PhoneGap.Commands namespace and derive your class from BaseCommand.
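Based on that description, a custom command might look like the following sketch. The namespace and BaseCommand base class come from the quote above; PluginResult and DispatchCommandResult are my assumption about how results flow back to JavaScript, so treat the exact signatures as unverified.

```csharp
using WP7GapClassLib.PhoneGap; // assumed home of PluginResult

namespace WP7GapClassLib.PhoneGap.Commands
{
    // Invoked from JavaScript via PhoneGap's exec() bridge as "Echo".
    public class Echo : BaseCommand
    {
        public void echo(string options)
        {
            // Hand the argument straight back to the JavaScript caller.
            DispatchCommandResult(
                new PluginResult(PluginResult.Status.OK, options));
        }
    }
}
```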

In general Windows Phone 7 is not well supported by cross-platform toolkits, so PhoneGap support is an interesting development. PhoneGap has a high profile currently, and is being integrated into tools ranging from Adobe Dreamweaver to Embarcadero RadPHP, as well as the standard PhoneGap tools based on Eclipse.

Full circle for Microsoft database APIs as OLEDB for SQL Server is deprecated

Microsoft’s Eric Nelson has posted about how the OLEDB driver for SQL Server is being deprecated and will not be supported beyond “Denali”, the forthcoming version.

OLEDB was created to be the successor to ODBC – expanding the supported data sources/models to include things other than relational databases. Notably OLEDB was tightly tied to a Windows only technology (COM) whilst ODBC was not (Although we did try and take COM cross platform via partners)

ODBC never did get replaced. What actually happened is that ODBC remained the dominant of the two technologies for many scenarios – and became increasingly used on non-Windows platforms and has become the de-facto industry standard for native relational data access.

ODBC was, as I recall, Microsoft’s first attempt at creating a universal database API.

The death of OLEDB will be slow, according to Nelson. The OLEDB driver for Denali will be supported for seven years following Denali’s release. He also says that OLEDB itself, as opposed to the SQL Server OLEDB driver, is not necessarily being deprecated; though frankly if Microsoft ceases supporting it with its own database I cannot see much future for it.

Note that ADO.NET, which to some extent replaced OLEDB, is not being deprecated. However ADO.NET is only usable from .NET applications. When you consider that Microsoft may be to some extent tilting away from .NET and towards native code, the deprecation of OLEDB becomes even more significant.

ODBC is not particularly easy to use in its raw form. However, you can wrap ODBC with, yes, an OLEDB provider or an ADO.NET provider; or you can wrap the whole lot in an object-relational framework such as Entity Framework.
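For example, .NET's System.Data.Odbc provider wraps raw ODBC in the familiar ADO.NET classes. The driver name and connection string here are illustrative:

```csharp
using System;
using System.Data.Odbc;

class OdbcDemo
{
    static void Main()
    {
        var connStr = "Driver={SQL Server};Server=localhost;" +
                      "Database=Northwind;Trusted_Connection=Yes;";
        using (var conn = new OdbcConnection(connStr))
        using (var cmd = new OdbcCommand(
            "SELECT CompanyName FROM Customers", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // Plain ADO.NET code; ODBC itself is hidden by the provider.
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}
```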

One more chapter in the long, strange and tortuous history of Microsoft’s data APIs.

Continuous Integration vs Continuous Delivery vs Continuous Deployment: what is the difference?

I am reading the excellent book Continuous Delivery by Jez Humble and David Farley. But what is Continuous Delivery and how does it differ from the other “continuous” development methodologies?

It helps to understand that all these methodologies spring from the Agile software development movement, and the expression Continuous Delivery is a quote from the Agile Manifesto:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Now, the starting assumption is that most software projects integrate a number of smaller projects, whether from third-parties or from team members. Since these pieces are developed to some extent independently there is a risk that changes made to one piece will require modifications to another piece; hence according to Humble and Farley:

Most software developed by large teams spends a significant proportion of its development time in an unusable state.

The business of getting all the parts to work together is called integration, and if this involves serious work you need to have an integration phase where this is the sole objective. This is a bad idea for all sorts of reasons, slowing development and preventing proper testing other than at the end of these integration phases.

The solution is called Continuous Integration (CI). You have a frequent automated build that assembles all the pieces from all the teams into a working application. If the build fails, or if automated tests run against the build fail, then this is a bug that should be fixed immediately, not later in some separate phase of development.

Tools for CI include Cruise Control, Hudson, TeamCity and others; .NET developers can also configure Visual Studio Team Foundation Server for CI.

The problem with CI alone is that the development environment is not the same as the production environment. What if the CI build works and tests pass, but once deployed the application breaks or performs badly? Perhaps the development environment runs a multi-tier application with all the tiers on a single box, but when deployed onto actual multiple machines or VMs, something goes wrong. Permission problems are another common source of errors.

Continuous Delivery means that you not only build the software, but also deploy it frequently. This usually means provisioning servers, which you can automate using a tool like Puppet for Unix-like servers, or with Virtual Lab Management in a Visual Studio environment. Automation is pretty much essential for this to work. The more closely the test environment matches the production environment, the better.

Generally though, Continuous Delivery means deployment to a test environment. What about taking the next step, and deploying continuously to production? That is the methodology called Continuous Deployment. It sounds risky; but if you have a very extensive and thorough set of automated tests, then the risks are mitigated, especially as the extent of the changes in any one deployment is reduced.

Other suggestions for reducing risk include deploying to a small subset of users first, called “canary testing”; and making rollback easy.

That said, to judge by the Humble/Farley book the distinction between Continuous Delivery and Continuous Deployment is just a little blurred. The authors acknowledge that continuous deployment into production is not always a good idea. They also imply that Continuous Deployment might mean only that your application is always ready and easy to deploy into production, not that you necessarily deploy it constantly:

Your implementation should make it possible to deploy any version of your application that has made it past the automated tests into any of your environments at the push of a button, given the correct credentials.

Compliance and security are also factors that may rightly make it impossible to automate deployment to production completely.

Client-installed applications present some special difficulties which Humble and Farley discuss.

In summary then:

Continuous integration: your application always builds and passes its tests, including all the pieces from different sub-teams.

Continuous delivery: your application always builds and deploys to a test environment and passes its tests.

Continuous deployment: your application is always ready to deploy to production through a largely automated process.

Update: I received an email from Martin Fowler about this post. He refers to Jez Humble’s post on Continuous Delivery vs Continuous Deployment and adds:

– I would use your definition of Continuous Deployment for Continuous Delivery

– I would change the definition of Continuous Deployment to say something like "every good build is released to production"

However, I clarified with him that if you are building for a test environment but are confident that any build that passes would be OK to deploy to production, then you are still doing Continuous Delivery. In the end, while I am sure you should use Fowler and Humble’s definitions rather than mine, it seems to me a fine distinction: if you are doing Continuous Delivery properly, the transition to Continuous Deployment is largely a matter of policy.

C++ 11 is approved by ISO: a big day for native code development

Herb Sutter reports that C++ 0x, which will be called C++ 11, has been unanimously approved by the ISO C++ committee. The “11” in the name refers to the year of approval, 2011. The current standard is C++ 98, though amended as C++ 03, so it has taken 8 or 13 years to update it depending on how you count it.

This means that compiler makers can get on with implementing the full C++ 11 standard. Most current compilers implement some of the features already. This Apache wiki shows the current status. A quick glance suggests that the open source GCC is ahead of the pack, followed by Intel C++ and then perhaps Microsoft Visual C++.

C++ 11 is pretty much compatible with C++ 03 so existing code should still work. However there are many new features, enough for Bjarne Stroustrup to say in his feature summary:

Surprisingly, C++0x feels like a new language: The pieces just fit together better than they used to and I find a higher-level style of programming more natural than before and as efficient as ever. If you timidly approach C++ as just a better C or as an object-oriented language, you are going to miss the point. The abstractions are simply more flexible and affordable than before. Rely on the old mantra: If you think of it as a separate idea or object, represent it directly in the program; model real-world objects, and abstractions directly in code. It’s easier now.

Concurrent programming is better supported in C++ 11, important for getting the best performance from modern hardware.

It is curious how the programming landscape has changed in recent years. A few years back, you might have foreseen a day when most programming would be .NET, Java or JavaScript: all varieties of managed code. While those languages still dominate, native code has come more to the fore, thanks to factors like Apple’s focus on Objective-C, and signs of internal conflict at Microsoft over the best language for coding Windows applications.

That said, C++ 11 remains a demanding language to learn and use. As Stroustrup notes, since C++ 11 is a superset of C++ 98 it is technically harder to learn all of it, though new libraries and abstractions should help beginners. The reasons for using or not using C++ are not going to change significantly with this new standard.

Mozilla to take on the cross-platform app challenge

Mozilla is facing an uncertain future. Its problem: basing a business (even a non-profit one) on being the alternative to Microsoft’s Internet Explorer is no longer sensible, given that Apple and Google are now doing this too, and even Microsoft is now investing in HTML 5. I discussed these issues in more detail here.

So what is Mozilla to do? Mozilla Chair Mitchell Baker has posted about a possible new approach, based on being the alternative to Apple for apps. She lists some of the problems with the current “app experience”. Apps are device-specific, require permission at many levels, and a few App Store owners (mainly Apple but also Google) control the business model and customer relationships.

Mozilla is proposing what I presume is a new app platform, which will be cross-platform and cross-device. Instead of discovering apps in a single app store, she envisages multiple providers and the ability to find apps in the same way we find web content.

In other words, if the old Mozilla was about freedom from Microsoft and allowing web technology to progress, the new Mozilla might be about freedom from Apple and allowing app technology to progress.

It is a bold vision and one that in principle would be welcome. That said, Mozilla cannot change the control Apple has over its platform, and its insistence that apps are installed only through its own App Store. Maybe she has in mind a cross-platform toolkit, or browser-based apps, or some combination.

Another snag is that whereas there was widespread dissatisfaction with Microsoft’s Internet Explorer back in 2004 when Firefox was launched, this is not the case with Apple and its app platform today. Apple’s App Store system undoubtedly has a dark side, but the user experience is good and developers are making money, some of them at least. Apple’s control over app installation and the constraints imposed on what apps can do are also good for security.

Nevertheless, having looked at a number of cross-platform mobile toolkits, from PhoneGap to Appcelerator Titanium to Adobe AIR, I can see both the significance of this kind of development and that there is plenty of scope for improvement.

Real-world Microsoft Team Foundation Server: Not very good, says ThoughtWorks

I spoke to Sam Newman, who is European Continuous Delivery Practice Lead at ThoughtWorks, a software development company. Needless to say, we talked extensively about Continuous Delivery and I will be reporting on this separately; but I was also interested in his comments on Microsoft’s Team Foundation Server (TFS).  He told me that ThoughtWorks teams often end up working with it at their .NET clients, but it is problematic. In one case, he said, 6% of productive time was absorbed dealing with TFS.

What was the problem: performance, bugs, missing features?

When we’ve looked at the problems we’ve had, a lot of it unfortunately comes down to the version control system. It’s not very good. It’s slow, you can’t do rollbacks, sometimes things go missing, you get locks. When we talked about 6% of time, they were things like waiting for a solution to expand in Visual Studio. A lot of those issues are in the version control system.

A frustration is that you cannot use TFS with any version control system other than its own.

Every other build server in the world, from Anthill to Go to Cruise Control to Hudson, you can put in at least 10 version control systems. In TFS they are all coupled. So you can’t take the version control and point it at Subversion. That might resolve a lot of the issues.

Why is TFS so widely used? It is because it comes in the box, says Newman.

I can’t think of a single client that wanted a tool, went out into the marketplace, and selected TFS because it is the right tool for them. Most clients use TFS because it comes with their Enterprise MSDN licence.

I have tried TFS myself and found it pretty good; but then I am just testing it on small projects as a solo developer, so it is hard for me to replicate the experience of a real-world team. You would have thought that performance issues, such as waiting ages for a solution to expand in Visual Studio, could be solved by tracing the reason for the delay; but apparently this is not easy.

This is anecdotal evidence, and of course there may be plenty of TFS installations out there that work very well; I would be interested in hearing of counter examples. I am also not sure to what extent the problems apply to all versions of TFS, or whether there is improvement in TFS 2010.

Newman recognizes that anecdotal evidence is not much use: he says ThoughtWorks is trying to collect some solid data that can be used both to discuss with clients making version control and build system choices, and with Microsoft.

Performance is a feature, and makes a large contribution to user satisfaction. The first release of Outlook 2007 was extraordinarily slow in some setups, and I remember the pain of clicking on a folder and then waiting, tapping my fingers, while it thought about expanding. It sounds as if some TFS users are having a similar experience, but in Visual Studio.