
Data Access in Windows 8 WinRT

I’ve been teasing out details about the Windows Runtime (WinRT) here at Microsoft’s BUILD conference in Anaheim, California.

WinRT is the runtime for touch-friendly Metro-style apps, in effect Microsoft’s operating system for tablets, though it has a dual personality and full desktop Windows is also available.

Microsoft experts told me that there is no client for SQL Server or other network databases in WinRT. To access such databases, you have to create a web service and call that. This is similar to the Silverlight model. The .NET Framework in WinRT will support WCF RIA Services, which is one option for this.
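
As an illustrative sketch, not a confirmed Microsoft sample, here is how a Metro-style .NET app might call such a web service. Rather than WCF RIA Services specifically, it uses plain HTTP, and assumes the WinRT profile of the .NET Framework keeps the new HttpClient class and async support; the endpoint URL is a placeholder.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class CustomerClient
{
    // Calls a hypothetical web service that fronts the database.
    public async Task<string> GetCustomersJsonAsync()
    {
        using (var client = new HttpClient())
        {
            // Placeholder endpoint; in practice this would be your WCF
            // or other HTTP service exposing the data.
            return await client.GetStringAsync(
                new Uri("https://example.com/api/customers"));
        }
    }
}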

If you only want to store and retrieve data locally, there seems to be a JET API, or you could use something like SQLite or roll your own simple database manager. HTML and JavaScript apps support IndexedDB. All these options read and write data to the app’s isolated storage; they do not enable free access to the file system.
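
For the local option, here is a minimal sketch of reading and writing a file in the app's isolated storage from .NET, using the WinRT Windows.Storage classes; the file name and content are illustrative.

using System.Threading.Tasks;
using Windows.Storage;

public static class LocalStore
{
    // Writes a text file into the app's own isolated storage folder.
    public static async Task SaveNoteAsync(string text)
    {
        StorageFile file = await ApplicationData.Current.LocalFolder
            .CreateFileAsync("notes.txt", CreationCollisionOption.ReplaceExisting);
        await FileIO.WriteTextAsync(file, text);
    }

    // Reads it back; note there is no access to the wider file system here.
    public static async Task<string> LoadNoteAsync()
    {
        StorageFile file = await ApplicationData.Current.LocalFolder
            .GetFileAsync("notes.txt");
        return await FileIO.ReadTextAsync(file);
    }
}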

LINQ (Language Integrated Query) is supported but of course that is only as useful as the data it can connect to.

A few facts about Microsoft’s new Windows Runtime

I’ve just come out of Martyn Lovell’s talk on WinRT internals here at BUILD in Anaheim, California.

Make no mistake: Microsoft has re-invented the Windows API in WinRT. Just to recap, WinRT is the API for Metro-style applications, the touch-centric, app-centric API for tablets and, one presumes, eventually for Windows Phone (though Microsoft has yet to admit it).

WinRT is only usable from Metro applications. You cannot call WinRT from a Win32 application, nor vice versa*. I think it is reasonable to assume that a future version of Windows which runs only WinRT is a possibility; and that Windows 8 on ARM will look a bit like that, with Win32 still present but mainly out of sight; but I am speculating.

Does that mean Win32 is now legacy? In a way, but such a huge legacy that for the moment we should think of Windows 8 as two platforms side by side.

There is no inter-app communication in WinRT other than by the pre-defined contracts built into the system (though Lovell noted that you could always use the file system and polling for a crude inter-process communication).

There is no way to install a shared dynamic library. Apps can only use the system libraries together with what you install with the app. Each app lives in its own context and is isolated. In other words, WinRT is not extensible, other than within your app’s code*.

If you figure out a way to bypass limitations of WinRT by calling other Windows APIs, your app might work but the submission process for the Windows Store will prohibit it.

Versioning is built into WinRT. This means that when Windows 9 comes along, you will be able to code just against the Windows 8 versions of the classes, for compatibility, and your IDE can support this by only exposing the Windows 8 version of the API.

The CLR exists in the Metro environment, for use by .NET applications, complete with JIT (just-in-time) compilation. However, only a subset of the .NET Framework libraries is included; Microsoft aimed to include only what was necessary for Metro. I am not sure yet what is in and what is out, beyond the obvious (no Windows Forms, for example), but will be investigating what is documented. From the .NET side, the native WinRT APIs are accessed much as COM objects are, via what amounts to a runtime callable wrapper. That said, you do not normally need to care about WinRT interfaces, even though they are there in WinRT; normally you interact with WinRT classes, which makes this more natural for .NET than working with COM.

WinRT is full of asynchronous calls. Lovell told us that Microsoft had seen in the past that if both synchronous and asynchronous APIs are available for the same function, then developers often use the synchronous version even when they should not, making applications less responsive. The new await keyword in C# makes this easy to code.
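
A sketch of what that looks like in practice, assuming a C# Metro-style app using the WinRT file picker; the handler name is arbitrary.

using Windows.Storage;
using Windows.Storage.Pickers;

public class EditorPage
{
    // Awaiting PickSingleFileAsync returns control to the UI thread
    // immediately, so the app stays responsive while the picker is up.
    public async void OpenButton_Click(object sender, object e)
    {
        var picker = new FileOpenPicker();
        picker.FileTypeFilter.Add(".txt");

        StorageFile file = await picker.PickSingleFileAsync();
        if (file != null)
        {
            string contents = await FileIO.ReadTextAsync(file);
            // ... display contents ...
        }
    }
}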

WinRT uses the same metadata format as .NET (the ECMA-335 format that tools such as ILDasm can read). This means you get rich metadata for IntelliSense and debugging, but note that the actual runtime is not .NET; Microsoft borrowed only the metadata format.

WinRT objects are reference counted like COM for memory management, with weak references to avoid circularity. You should not have to worry about this; you can code according to the conventions of your language.

There are three ways to write WinRT applications. One is C++, in which case you write directly to the “projection” of WinRT into your language. The second is .NET, in which case your code goes via the CLR. The third is HTML and JavaScript, in which case your code goes via the “Chakra” JavaScript engine also used by Internet Explorer 9 and higher. Lovell assured me that there is little difference in performance in most cases, though there could be advantages for C++ in certain niche scenarios. Of course we heard that story for .NET as well, but from what I have seen it is more plausible in WinRT.

There is no message loop in WinRT. There is no GDI in WinRT. All graphics are via DirectX. XNA, the .NET games framework, is not supported. It seems that you will need to use C++ for fancy DirectX coding, though this is not confirmed. Of course your XAML or Canvas code will be rendered by DirectX under the covers.

It is fascinating to see how Microsoft has borrowed XAML and the .NET metadata format, even though WinRT is native code and not .NET at its core. My take on this is that Microsoft intended to preserve the productivity of .NET, but without any performance compromise.

Despite the inclusion of .NET, the fact that only a subset of the Framework is available, and that interop to the wider Windows API will not work*, means that most existing apps will need considerable work to port to Metro.

*Updates

A few clarifications.

It has been shown that you can call WinRT from Win32 (the favoured term for Win32 apps now seems to be “desktop applications”), though I am not sure how useful that is.

Concerning P/Invoke (Platform Invocation) to Win32 APIs, apparently this does work for a certain specified, small subset of the Windows API. It also works for your own native code DLL, with the proviso that if your native code DLL calls a disallowed Win32 API it will raise an error.
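
The mechanism is ordinary P/Invoke; which specific Win32 functions are on the allowed list is not something I have seen documented yet, so treat GetTickCount64 below purely as an illustration of the pattern.

using System;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    // Hypothetical example: whether this particular API is in the
    // permitted subset for Metro-style apps is not confirmed.
    [DllImport("kernel32.dll")]
    internal static extern ulong GetTickCount64();
}

public static class UptimeDemo
{
    public static void Main()
    {
        Console.WriteLine("Milliseconds since boot: {0}",
            NativeMethods.GetTickCount64());
    }
}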

WinRT is partially extensible. A Framework Extension is a library which you can reference as a dependency in your app’s manifest. When the app is deployed, it will download this dependency from the Windows Store. An example is the C Runtime Library. An extension library installs into its own directory, and can be used by multiple WinRT apps provided each one references it in its manifest. The caveat is that only Microsoft can create these extensions: there is no way to create your own shared extension for general distribution, though an enterprise can deploy a shared extension internally.

Which Microsoft cloud? Windows Server 8 shows Azure is not everything

I was fortunate to attend a two-day drilldown into what is coming in Windows Server 8 last week, just before the BUILD conference under way in Anaheim, California. It is an impressive release, with two things standing out for me.

One is that Microsoft has successfully re-engineered Windows Server so that it is both sufficiently modular that you can transition from Server Core to full Server and back without reinstall, and also sufficiently detached from the Windows GUI that everything runs and can be configured without the need to log on to the Windows desktop on the server itself. This is a huge achievement.

Second, much of the engineering in Server 8 is focussed on making it better for cloud hosting. This is the focus of changes in both Hyper-V and IIS: isolation of virtual networks, proper bandwidth and CPU quotas and throttling, and the ability to move VMs freely between hosts without taking them offline, and to replicate them for failover purposes. You can read more in my piece on The Register.

The question this raises for me is about Windows Server clouds versus Azure. Of course Azure runs on Windows Server, but Azure is a platform, all the VMs are stateless, and when you use Azure you are buying into a whole set of services that might or might not match your needs. At a developer event yesterday, one attendee explained how he could not use Azure because he needed to install a third-party application. The Hyper-V role helps a little, but it is not ideal as you still need to solve the stateless problem: at any time, changes you make to the server may be reverted.

If you simply rent plain Windows Server VMs in the cloud, you lose some of the benefits of cloud computing since you are responsible for everything about how the server is configured and maintained; but you also get complete freedom to set it up as you want.

One of the issues with moving from running your own Exchange and SharePoint, for example, to a cloud-hosted service like Office 365 is that you lose control of your destiny. If the service goes down, you have to beg and plead with support to get information and to speed recovery.

Now consider a scenario in which you have your Exchange and SharePoint on hosted Hyper-V VMs with replication (now coming in Server 8) to an alternate provider such as Amazon Web Services, or to your own on-premise servers. If the service goes down, you failover to the replicas.

Another compelling idea relates to live migration. Imagine you have a VM running on premise, and want to move it to the cloud. Without interruption of service, you could in principle migrate it from on-premise to the cloud and back at will. You need a fast connection of course, but this aspect is constantly improving.

The bottom line: plain Windows Server on a VM has many attractions versus an entire platform like Azure.

The snag is, Microsoft does not offer this type of hosting at the moment. Well, that is not necessarily a snag depending on what you think about hosting with Microsoft; but for some there is considerable reassurance in hosting with a company of Microsoft’s size, and which should in theory have the best understanding of what it takes to host Windows Server.

My guess is that Microsoft will either add this capability to Azure – without the limitations of the Hyper-V role, but with replication and failover – or else develop a new cloud service alongside Azure for this purpose.

My further guess is that it would be popular, possibly more so than Azure is today.

Here comes Windows 8 – but what about the apps?

I’ve spent what feels like most of the night trying out the first developer preview of Windows 8, using an Intel tablet PC loaned by Microsoft for that purpose. The early preview is frustrating, in that many of what will be standard apps like Mail and Contacts are missing, but it is already obvious that Microsoft has done a great job with what I am calling the “Metro” platform within Windows 8. Here is Control Panel in the new user interface:

[Screenshot: Control Panel in the new Metro-style user interface]

This is the touch-optimized personality of the new operating system, featuring a Start screen with live tiles like an evolved Windows Phone 7, apps that run full-screen to create an "immersive user interface", and swipe gestures to show application menus, switch apps, or access standard features.

It is a delight to use; but this is Metro, with its own Windows Runtime (WinRT), a native code API which is wrapped for access by either HTML and JavaScript apps (which also use the IE 10 runtime), or by C/C++, VB or C# apps driving XAML-defined user interfaces – yes, kind of like Silverlight but not Silverlight.

What about all our Windows apps? For that we need the desktop personality in Windows 8. Tap the Desktop tile, or launch a "Desktop" app, and it suddenly appears, looking much like Windows 7.

The problem: while Windows 8 "Metro" looks great, there are currently zero apps for it, or at least only those supplied with the preview, because it is brand new.

In truth then, Microsoft has not quite done what would have been ideal, which is to make Windows itself touch-friendly. That would have been impossible. Instead, it has integrated Windows with a new touch-friendly platform.

The key question: will this new platform attract the support it needs from developers in order to become successful in its own right, so that we can do most of our work there and retreat to the desktop only for legacy apps, or apps which really need mouse and keyboard?

It is a big ask, and we have seen HP with WebOS, and probably RIM with PlayBook, fail at this task.

Of course it is still Windows; but I do have a concern that a proportion of users will try Windows 8, find the transitions between Desktop and Metro unsettling, and stick with Windows 7.

Let me add these are very much first impressions; and that Metro really does look good. Perhaps it will win; but a lot of momentum has to build behind it for that to be possible.

PhoneGap comes to Windows Phone

Nitobi has announced PhoneGap for Windows Phone 7, nicely timed just before the Microsoft BUILD conference next week.

PhoneGap is a cross-platform mobile development tool that uses the HTML and JavaScript engine on the phone as its runtime, supplemented by extensions which give access to other device features:

After unpackaging the contents of the www folder, your www/index.html file is loaded into an embedded headless browser control. This is essentially the same paradigm as other platforms, except here it is an IE9 browser and not a webkit variant. IE9 is a much more standards-compliant browser than previous IEs, and implements commonly used html5 features like DOMContentLoaded events, addEventListener interfaces, and CSS3. Be sure to use &lt;!DOCTYPE html&gt; to get the html5 implementation otherwise the browser may fall back to a compatibility mode, and your code will likely choke and die.

The version for Windows Phone 7, just released in preview, is extended to support features including the camera, accelerometer, contacts, and notifications. There is also support for plugins:

PhoneGap-WP7 maintains the plugability of other platforms via a command pattern, to allow developers to add functionality with minimal fuss, simply define your C# class in the WP7GapClassLib.PhoneGap.Commands namespace and derive your class from BaseCommand.
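
Putting that together, a minimal plugin might look like the sketch below. The class name, the method name, and the DispatchCommandResult/PluginResult helpers are taken from my reading of the preview bits and could change before release.

namespace WP7GapClassLib.PhoneGap.Commands
{
    public class Echo : BaseCommand
    {
        // Invoked from JavaScript via the command pattern; options is a
        // JSON-encoded argument string passed in from the web layer.
        public void echo(string options)
        {
            DispatchCommandResult(
                new PluginResult(PluginResult.Status.OK, options));
        }
    }
}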

In general Windows Phone 7 is not well supported by cross-platform toolkits, so PhoneGap support is an interesting development. PhoneGap has a high profile currently, and is being integrated into a diverse range of tools ranging from Adobe Dreamweaver to Embarcadero RadPHP, as well as the standard PhoneGap tools based on Eclipse.

Review: Continuous Delivery by Jez Humble and David Farley

I like this book. I know I like it because I find myself wanting to quote from it frequently. It is a book that almost every software developer should read, even if you disagree with parts of it – which is likely, because it is opinionated. The authors always give reasons for their opinions though, which means that if you disagree, you need to articulate why that is; or they may even change your mind. In consequence you find yourself learning as you read.

The authors are software theoreticians, but they are also practitioners; in fact they are practitioners first and theoreticians afterwards. This means they are pragmatic rather than dogmatic. Here is an example. Chapter 13 discusses software dependencies, and page 372 covers circular dependencies, “probably the nastiest dependency problem.” A circular dependency is when component A depends on component B, and component B also depends on component A.

A bad idea; but the authors write:

Surprisingly, we have seen successful projects with circular dependencies in their build systems. You may argue with our definition of “successful” in this case, but there was working code in production, which is enough for us.

As an aside, this kind of dry humour is characteristic, as also evident in remarks like this:

We are certain that, occasionally, manually intensive releases work smoothly. We may well have been unlucky in having mostly seen the bad ones.

The subject of the book is Continuous Delivery. So what is that? Well, if Continuous Integration is about ensuring that your software always builds, then Continuous Delivery is about ensuring that your software always deploys. The final form, as it were, of Continuous Delivery is Continuous Deployment, where you are so confident of your automated build and deploy process that any checked-in code that passes its tests can be deployed immediately. I was confused about the difference between Continuous Delivery and Continuous Deployment so I wrote a post about it; it turns out that there is not much difference.

The principle behind Continuous Delivery is that software is not done until it is released. If the release process is long, arduous and infrequent, then you are not really doing Agile development. A section of chapter 1 is devoted to release anti-patterns, and these form an excellent rationale for taking an interest in Continuous Delivery.

My guess is that anyone who has been involved in professional software development will wince a little while reading through these anti-patterns, thinking “that is what we used to do” or even “that is what we do”.

That said, Humble and Farley do not fall into the trap of merely writing about how not to do it. Rather, they address in some detail the kinds of problems you will face if you decide to embrace the Continuous Delivery methodology. The key ingredient in Continuous Delivery is that pretty much everything must be automated, otherwise it is too difficult to do. But how do you automate something like Acceptance Testing? That is the subject of chapter 8. How do you automate a deployment at all? That is the subject of chapter 6. The authors are not on a higher plane than the rest of us, and much of the advice is straightforward, even at the level of “Always use relative paths,” which is a tip in chapter 6.

The authors talk a lot about testing, as you would expect, but there is also extensive discussion of software configuration management, describing different approaches such as centralised and distributed version control and even specific tools. The chapter on Advanced Version Control is a particularly good read. Humble and Farley articulate the point that branching and merging is antithetical to Continuous Integration and therefore Continuous Delivery:

If different members of the team are working on separate branches or streams then by definition they’re not continuously integrating (p 390)

Does this mean branches are a bad idea? Not always, say the authors, but they also state:

Our strong recommendation is to create long-lived branches only on release … new work is always committed to the trunk (p 392)

The reason is not only to enable Continuous Integration, but also because merging is complex and error-prone.

Software configuration management is not easy, but it is a relatively mature aspect of software development. This is less true of what you might call infrastructure configuration management; yet infrastructure dependencies such as versions and configurations of the operating system or web server are a common reason for deployment failures. Several chapters discuss this problem in detail. In principle, the authors say:

The desired state of your infrastructure should be specified through version-controlled configuration.

This leads to some thoughtful discussion of how to achieve this.

Another theme, as you would expect, is that development and operations people need to be working together and not in isolation. To some extent this is a DevOps book.

A great book then; but there are flaws. One is that there is some repetition because of the way the book is organised. This is good if you are inclined to read chapters in isolation, but not so good if you are reading straight through. In practice I did not find it too annoying, but it is there.

Another issue is that while the authors do cover Microsoft .NET to some extent, this is usually in the form of a brief mention and there is more focus on Java. This may be in part because of their preference for open source. It is still a good read for .NET developers, because the principles are platform-agnostic, but Microsoft platform developers may find it irritating at times. Team Foundation Server, say the authors, is “essentially an inferior knock-off of Perforce” (p 386).

The discussion of specific tools is a strength but also a weakness, in that the tools will change over time and the book will become dated.

This is not the last word on Continuous Delivery, but it is an enjoyable and thought-provoking read. Recommended.


Internet security hangs on a DNS thread, as hacks of The Register, Telegraph and Acer sites demonstrate

Several well-known web sites including The Register, The Daily Telegraph, UPS.com and Acer.com suffered a DNS hack on Sunday evening. The consequence is that visitors to the sites may see a Turkish hack message.

[Screenshot: the Turkish defacement message shown to visitors of the hacked sites]

The hacked sites share a common registrar, Ascio Technologies, and were registered through NetNames. Both NetNames and Ascio are brands of GroupNBT. Zone-h suggests:

It appears that the turkish attackers managed to hack into the DNS panel of NetNames using a SQL injection and modify the configuration of arbitrary sites, to use their own DNS.

This kind of attack is more serious than simply hacking into a web server and defacing the content. DNS maps internet names to the IP numbers that identify actual servers on the internet. This means that the hackers can intercept not only web requests for the affected names, but also email. Hackers could also read cookies placed on users’ computers by the real sites, possibly gaining access to user accounts in cases where there is a saved logon.

What this means is that access to DNS records is security-critical. It should give any business pause for thought. How strong is the username/password that gives access to your ISP or registrar’s control panel, allowing the DNS records to be changed? How secure are the servers themselves at that ISP or registrar? It is those servers that were cracked in this case, according to Zone-h.

Fixing a DNS problem is never instant, since records are replicated across the internet and any changes take time to propagate. This also explains why some users see hacked sites, while others get through to the correct destination. It is possible that the hackers chose to strike at the weekend, in the hope that corrective action would take longer. At the time of writing (23.30 on Sunday) the sites I checked have been fixed at source, including The Register and The Daily Telegraph, but some users are still seeing defaced sites.

Google gets serious about App Engine, ups prices

Google App Engine will be leaving preview status and becoming more expensive in the second half of September, according to an email sent to App Engine administrators:

We are updating our policies, pricing and support model to reflect its status as a fully supported Google product … almost all applications will be billed more under the new pricing.

Along with the new prices there are improvements in the support and SLA (Service Level Agreement) for paid applications. For example, Google’s High Replication Datastore will have a new 99.95% uptime SLA.

Premier Accounts offer companies “as many applications as they need” for $500 per month plus usage fees. Otherwise it is $9.00 per app.

Free apps are now limited to a single instance, 1GB each of outgoing and incoming bandwidth per day, 50,000 datastore operations, and various other restrictions. The “instance” pricing is a new model; previously, paid apps were billed on the basis of CPU time per hour. Google says in the FAQ that this change removes a barrier to scaling:

Under the current model, apps that have high latency (or in other words, apps that stay resident for long periods of time without doing anything) can’t scale because doing so is cost-prohibitive to Google. This change allows developers to run any sort of application they like but pay for all of the resources that your applications use.

Having said that, the free quota remains generous and sufficient to run a useful application without charge. Google says:

We expect the majority of current active apps will still fall under the free quotas.

The pricing is complex, and comparing prices between cloud providers such as Amazon, Microsoft and Salesforce.com is even more complex as each one has its own way of charging. My guess is that Google will aim to be at least competitive with AWS (Amazon Web Services), while Microsoft Azure and Salesforce.com seem to be more expensive in most cases.

Full circle for Microsoft database APIs as OLEDB for SQL Server is deprecated

Microsoft’s Eric Nelson has posted about how the OLEDB driver for SQL Server is being deprecated and will not be supported beyond “Denali”, the forthcoming version.

OLEDB was created to be the successor to ODBC – expanding the supported data sources/models to include things other than relational databases. Notably OLEDB was tightly tied to a Windows only technology (COM) whilst ODBC was not (Although we did try and take COM cross platform via partners)

ODBC never did get replaced. What actually happened is that ODBC remained the dominant of the two technologies for many scenarios – and became increasingly used on non-Windows platforms and has become the de-facto industry standard for native relational data access.

ODBC was, as I recall, Microsoft’s first attempt at creating a universal database API.

The death of OLEDB will be slow, according to Nelson. The OLEDB driver for Denali will be supported for seven years following Denali’s release. He also says that OLEDB itself, as opposed to the SQL Server OLEDB driver, is not necessarily being deprecated; though frankly if Microsoft ceases supporting it with its own database I cannot see much future for it.

Note that ADO.NET, which to some extent replaced OLEDB, is not being deprecated. However, ADO.NET is only usable from .NET applications. When you consider that Microsoft may be to some extent tilting away from .NET and towards native code, the deprecation of OLEDB becomes even more significant.

ODBC is not particularly easy to use in its raw form. However, you can wrap ODBC with, yes, an OLEDB provider or an ADO.NET provider; or you can wrap the whole lot in an object-relational framework such as Entity Framework.
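
For example, from .NET the System.Data.Odbc provider hides the raw ODBC API behind the familiar ADO.NET classes; the DSN and query here are placeholders.

using System;
using System.Data.Odbc;

public static class OdbcDemo
{
    public static void Main()
    {
        // "MyDataSource" is a placeholder for a configured ODBC DSN.
        using (var conn = new OdbcConnection("DSN=MyDataSource"))
        using (var cmd = new OdbcCommand("SELECT name FROM customers", conn))
        {
            conn.Open();
            using (OdbcDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}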

One more chapter in the long, strange and tortuous history of Microsoft’s data APIs.

jQuery usage soars as Adobe Flash shows slight decline

A press release from appendTo, a company which offers jQuery-based services and training, states that “jQuery Overtakes Flash on World’s Top Websites”. I found this a curious claim insofar as jQuery is not really an alternative to Flash, though there is a limited set of graphical effects for which I guess you could use either.

I took a look at the source data from httparchive.org – note that the data at this link changes regularly. I compared the most recent stats, from August 15 2011, to the oldest available, from November 15 2010, an interval of nine months. The data covers the most visited sites, drawn from various lists, and seems to amount to between 15,000 and 20,000 URLs.

In November 2010, jQuery was found on 39% of the sites, whereas Flash was on 49%. In August 2011, the stats show jQuery on 48% of sites with Flash on 47%, hence the press release.

Other figures that caught my eye: among web servers, Microsoft IIS has moved from 21% to 20%, Apache from 51% to 49%, and nginx from 11% to 13%.

Google Analytics is the most commonly found script, moving from 61% to 63% of these sites. The amount of data Google receives on internet traffic is remarkable.

The real story here is the ascendancy of jQuery rather than the decline of Flash. If you want your website to work on Apple’s mobile devices as well as on desktop PCs, then Flash is not an option.

Adobe does not make money from the Flash runtime, which is free. It makes money from design tools and server-side services, among other things. Although it is good for Adobe if everyone uses its Flash client, it can still succeed in an HTML 5 world.

Flash has other roles too. Adobe AIR uses the Flash runtime on desktop PCs and some smartphones, and an iOS compiler lets you build Flash apps for Apple’s iPhone and iPad.

There is also some evidence that Adobe is tilting its efforts a little more towards HTML, with products including the preview of Edge, a motion and interaction design tool for HTML5, CSS and JavaScript.