Category Archives: windows

RIA (Rich Internet Applications): one day, all applications will be like this

I loved this piece by Robin Bloor on The PC, The Cloud, RIA and the future. My favourite line:

Nowadays very few Mac/PC users have any idea where any program is executing.

And why should they? Users want stuff to just work, after all. Bloor says more clearly than I have managed why RIA is the future of client computing. He emphasises the cost savings of multi-tenancy, and the importance of offline capability; he says the PC will become a caching device. He thinks Google Chrome is significant. So do I. He makes an interesting point about piracy:

All apps will gradually move to RIA as a matter of vendor self interest. (They’d be mad not to, it prevents theft entirely.)

Bloor has said some of this before, of course, and been only half-right. In 1997 he remarked that

Java is the epicenter of a software earthquake, and the shockwaves are already shaking the foundations of the software industry.

predicting that browser-hosted Java or Java thin clients would dominate computing; he was wrong about Java’s impact, though perhaps he could have been right had Sun evolved the Java client runtime into something more like Adobe Flash or Microsoft Silverlight, rather than leaving it until its recent hurried efforts with JavaFX. I also suspect that Microsoft and Windows have prospered more than Bloor expected in the intervening 12 years. These two things may be connected.

I think Bloor is more than half-right this time round, and that the RIA model with offline capability will grow in importance, making Flash vs Silverlight vs AJAX a key battleground.

Mono creeping into the mainstream?

For those of you who have not already seen this link on Twitter: I’ve posted a short piece on Mono, the open source implementation of Microsoft .NET. The piece was prompted by my own experience writing a simple .NET application in Visual Studio and deploying it to Linux. Admittedly I anticipated the move by using MySQL rather than SQL Server as the database; but even so, I was impressed by how easy it was – I spent more time recently deploying an application from Visual Studio 2008 to Windows Server 2008, thanks to some issues with SQL Server Express.

Don’t miss Miguel de Icaza’s comment about scalability and garbage collection, two of the factors that have deterred some from real-world Mono deployments.

Hands on with ASP.Net Membership, SQL Express and Server 2008

Is it worth using the built-in membership framework in your ASP.Net application, or should you roll your own? I’ve been trying it out recently, and I have mixed feelings.

On the plus side, it does get you up and running quickly with user login and role-based permissions, saving time and possibly achieving more reliable results, on the grounds that Microsoft and countless other users should have found and fixed any bugs by now.

On the negative side, there are annoying limitations. The most obvious is that a user as defined in the framework has only a minimal set of fields, not including information you probably want to store such as first and last name. You are meant to fill this gap with profiles, another ASP.Net feature which lets you store arbitrary name-value pairs in a database as a kind of persistent session. That works, but the way profile properties are stored makes it hard to do things like sorting users by last name. Therefore, you will probably end up managing your own user table and joining it to the membership system on the user ID, at which point you begin to lose some of the benefits.
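
To make the storage problem concrete, here is a minimal sketch of reading a profile property. The LastName property, the helper class and the web.config snippet are my own illustrations, not from the framework documentation. The point is that the SqlProfileProvider serializes all of a user’s property values together into a single row of the aspnet_Profile table, so there is no individual column for SQL to sort or filter on.

    // Minimal sketch, assuming a profile property declared in web.config:
    //   <profile>
    //     <properties>
    //       <add name="LastName" type="System.String" />
    //     </properties>
    //   </profile>
    using System.Web;

    public static class ProfileHelper
    {
        public static string GetLastName(HttpContext context)
        {
            // GetPropertyValue returns object; the value is deserialized from
            // a row in aspnet_Profile where all of the user's properties are
            // stored together, not as individual columns.
            return context.Profile.GetPropertyValue("LastName") as string;
        }
    }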

Some of the supplied controls, like the CreateUserWizard, seem rough-and-ready too.

Still, the real fun began when I tried to deploy my demo app to Server 2008 and SQL Server Express 2008. By the way, make sure you install .NET Framework 3.5 SP1 and Windows Installer 4.5 before installing the latest SQL Server Express, otherwise the setup spends ages unpacking its files and then exits with a brief message. I got there eventually, copied my application across, and optimistically tried to run it.

When you debug a web application in Visual Studio, it defaults to a SQL Express database in the App_Data folder within the web site, attached on demand. In theory, that should make it easy to deploy to another machine with SQL Express installed: just copy it across, right? There must be a way of getting this to work, but it seems a lot of people have problems. I got the message:

Login failed for user ‘NT AUTHORITY\NETWORK SERVICE’.

This makes sense, insofar as ASP.NET runs as this user. I temporarily attached the database and added the login, to be rewarded with a different and more perplexing error:

Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance.

A quick Google shows that many users have suffered from these errors, and that a large number of remedies have been proposed. I abandoned the idea of attaching the database on demand and set up a new database, made ready with Aspnet_regsql. I still got one or other of these errors.
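
Preparing the database with aspnet_regsql is a one-liner, for reference; the server and database names below are my own examples, and aspnet_regsql.exe lives under %WINDIR%\Microsoft.NET\Framework\v2.0.50727:

    aspnet_regsql.exe -S .\SQLEXPRESS -E -A all -d aspnetdb

The -A all switch adds the membership, role, profile and related features; -E uses Windows authentication.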

Eventually I realised that my application was using more than one connection string. The problem is that the membership framework uses three different "providers", one for membership, one for roles, and one for profiles. By default in IIS 7.0, these all use an attach-on-demand connection string, defined as LocalSqlServer, and inherited from machine.config buried deep within your Microsoft .NET Framework system folder. In order to prevent ASP.Net membership from using this, you have to override all three providers in the web.config for your application. There’s an example in this article from ISP MaximumASP. I wish I’d come across it sooner; but my demo works fine now.
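
For reference, the override looks roughly like this. Treat it as a sketch rather than a copy of that article: the connection string and provider names are my own illustrative choices, and depending on your .NET Framework version you may need the fully assembly-qualified type names that machine.config uses.

    <connectionStrings>
      <!-- Illustrative name; point this at your permanently attached database -->
      <add name="MembershipDb"
           connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=aspnetdb;Integrated Security=True" />
    </connectionStrings>
    <system.web>
      <membership defaultProvider="SqlMembership">
        <providers>
          <clear />
          <add name="SqlMembership" type="System.Web.Security.SqlMembershipProvider"
               connectionStringName="MembershipDb" applicationName="/" />
        </providers>
      </membership>
      <roleManager enabled="true" defaultProvider="SqlRoles">
        <providers>
          <clear />
          <add name="SqlRoles" type="System.Web.Security.SqlRoleProvider"
               connectionStringName="MembershipDb" applicationName="/" />
        </providers>
      </roleManager>
      <profile defaultProvider="SqlProfile">
        <providers>
          <clear />
          <add name="SqlProfile" type="System.Web.Profile.SqlProfileProvider"
               connectionStringName="MembershipDb" applicationName="/" />
        </providers>
      </profile>
    </system.web>

With all three providers pointing at the same permanently attached database, the inherited LocalSqlServer attach-on-demand string is never used.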

Hyper-V disk I/O: performance of dynamic vs fixed virtual hard disks

The dynamic virtual hard drive is one of the best things about virtualization. It is like Dr Who’s Tardis. The virtualized OS thinks it has plenty of space, while on the host machine your 127GB virtual drive might occupy just 4 or 5GB – typical of the test VMs I set up, running say Server 2008 and a server application or two.

Trouble is, there’s a performance penalty. I first came across this with a hilariously slow Ubuntu install, where the problem is made worse by the lack of integration services, the utilities and drivers that install into the guest to enable smooth interaction with the host.

As an experiment, I created a second Ubuntu VM using a 30GB fixed-size drive. Better? Yes, much better. Here are the figures on my admittedly slow low-end HP Xeon server:

Copy an 891MB file:

  • Ubuntu 8.10 on 127GB dynamic drive with 1GB RAM: 6 min 45 secs
  • Ubuntu 8.10 on 30GB fixed drive with 1GB RAM: 3 min 15 secs

As a further test, I copied the same file in Server 2008:

  • Server 2008 on 127GB dynamic drive with 2GB RAM: 5 min 55 secs

My immediate thoughts: you would be crazy to use VMs in production with dynamic drives. Always use fixed drives. You can still expand them manually if necessary. Note that Hyper-V defaults to dynamic drives.
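
If you want to script the creation of fixed drives rather than click through the New Virtual Hard Disk wizard, something like the following should work via the Hyper-V WMI provider. This is a sketch only: Server 2008 ships no Hyper-V cmdlets, the path and size are my own examples, and you should check the Msvm_ImageManagementService documentation before relying on it.

    # Create a 30GB fixed-size VHD via the Hyper-V WMI provider
    $ims = Get-WmiObject -Namespace root\virtualization `
             -Class Msvm_ImageManagementService
    # CreateFixedVirtualHardDisk takes a path and a size in bytes, and
    # returns a job, since allocating the full file takes a while
    $ims.CreateFixedVirtualHardDisk("C:\VHDs\Fixed30GB.vhd", 30GB)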

Still, these tests are not extensive or rigorous; I’d be interested in other results. I’ll also be creating my next Server 2008 VM on a fixed drive and will repeat the test there.

I’ve posted some further Hyper-V tips and gotchas here.

How Hyper-V can seem to lose your data

I’m sure it can really lose your data as well, but in this case “seem” is the appropriate word. I’ve been messing around with Hyper-V and one of my test machines is a SharePoint server. I started this up and found I could not access it over the network. On further investigation, it turned out to be a broken trust relationship with the Domain Controller. In other words, on attempting to log on with domain credentials I got the message:

The trust relationship between this workstation and the primary domain failed

The official advice when confronted with this problem is to remove the machine from the domain and re-join it, creating a new computer account. I did so, logged on, and was disappointed to discover that SharePoint was now empty. Worse still, even examining the SQL Server databases directly did not uncover my documents. They had all vanished.

It turned out that I had done the wrong thing. What had really happened was that Hyper-V had been saving my changes to that virtual hard drive in a “differencing disk”, a file with an .avhd extension. This is part of the Hyper-V snapshot system. Somehow, Hyper-V had forgotten the differencing disk, and had started my SharePoint VM using the last fully merged copy of the drive, which was over a month old. My drive had gone back in time, and the data had gone with it.

The solution was to restore the old parent .vhd from backup, and then manually merge it with the differencing file. Step by step instructions are here. Since I had deleted the original computer account, I then had to remove and rejoin the machine to the domain a second time. All was well and my data reappeared.

The bug here is how Hyper-V managed to start with an old version of the virtual hard drive in the first place. I can imagine this causing panic if it occurs in production – and once you start writing new, important data to the old version you are really in trouble. I was lucky that the discrepancy was severe enough that Active Directory complained.

Virtualization may be wonderful; but it also introduces new problems of its own.

The other lesson is that those .vhd files in C:\Users\Public\Public Documents\Hyper-V\Virtual Hard Disks do not necessarily contain your latest data. You also need to consider the .avhd files stored handily at C:\ProgramData\Microsoft\Windows\Hyper-V\Snapshots.

Fixing the Exchange 2007 quarantine – most obscure Outlook operation ever

I’ve been testing Exchange 2007 recently and overall I’m impressed. Smooth and powerful; and the built-in anti-spam is a great improvement on what is in Exchange 2003. One of the features lets you redirect spam to a quarantine mailbox. You know the kind of thing: it’s a junk bucket, and someone gets the job of sifting through it looking for false positives, like lotteries you really have won (still looking).

Sounds like a nice feature, but apparently Microsoft did not quite finish it. The quarantine is a standard Exchange mailbox, which means you have to add a quarantine user; to view the quarantine, you log onto that mailbox. A bit of a nuisance, but not too bad once you have figured out the somewhat obscure means of opening another user’s mailbox within your own Outlook. Then you’ll notice a little usability issue: all the entries are non-delivery reports from the Administrator, so you cannot see who the original messages are from without opening each report, which makes it harder to scan for genuine messages.

Another issue is when you find an email you want to pluck out of the bucket. My guess is that you will need to Google this one, or call support. The trick is to open the message, and click Send Again. It is counter-intuitive, because the message you are sending again is not the one you can see – that’s the Administrator’s report – but the original message which is otherwise hidden.

So you hit Send Again. As if by magic, the lost message appears. Great; but there’s another little issue. If you hit Send, the message will be sent from you, not from the original sender.

Both issues can be fixed. The fix for Send Again is to log on as the quarantine user – merely opening the mailbox from within your own is not enough. Since it is not particularly easy to switch users in Outlook, the obvious solution is Outlook Web Access; or you could use Switch User in Vista to log on as the quarantine user and run Outlook there. Send Again will then use the original sender by default.

How about being able to see the original sender in Outlook? No problem – just follow the instructions here. I won’t bore you by repeating them; but they form, I believe, a new winner in the Outlook obscurity hall of shame. After using Notepad to create and save a form configuration file, you use the UI to install it, via a dialog buried improbably deep in Outlook’s options.

A few more steps involving a field picker dialog reminiscent of Windows 95, and now you can see all those faked sender email addresses.

The mitigating factor is that the anti-spam rules themselves are pretty good, and I’ve not found many false positives.

The Exchange VSS plug-in for Server 2008 that isn’t (yet)

If you install Exchange 2007 on Server 2008, one problem to consider is that the built-in backup is not Exchange-aware. You have to use a third-party backup, or hack in the old ntbackup from Server 2003. Otherwise, Exchange might not be restorable, and won’t truncate its logs after a backup.

In June 2008 Scott Schnoll, Principal Technical Writer on the Exchange Server product team, announced that:

As a result of the large amount of feedback we received on this issue, we have decided to ship a plug-in for WSB created by Windows and the Small Business Server (SBS) team that enables VSS-based backups of Exchange.

He was referring to the fact that Small Business Server 2008 does include a VSS (Volume Shadow Copy Service) plug-in for Exchange, so that its built-in backup works as you would expect. This was also announced at TechEd 2008, with shipping expected later that summer, and the decision was generally applauded. But SBS 2008 shipped last year. So where is the plug-in?

This became the subject of a thread on TechNet, started in August 2008, in which the participants refused to accept a series of meaningless “we’re working on it” responses:

This is becoming more than a little absurd.  I understand that these things can take time, and that unexpected delays can occur, but I rather expect that more information might be provided than “we’re working on it”, because I know that already and knew it months ago.  What sort of timeframe are we looking at, broadly?  What is the most that you are able to tell us?

Then someone spotted a comment by Group Program Manager Kurt Phillips in this thread:

We’re planning on starting work on a backup solution in December – more to follow on that.

Phillips then said in the first thread mentioned above:

The SBS team did implement a plug-in for this.  In fact, we met with them to discuss some of the early design work and when we postponed doing it in late summer, they went ahead with their own plans, as it is clearly more targeted toward their customer segment (small businesses) than the overall Exchange market.

We are certainly evaluating their work in our plan.

For those anxiously awaiting the plug-in, because they either mistrust or don’t want to pay for a third-party solution, the story has changed quite a bit from the June announcement. Apparently no work was done on the plug-in for six months or so; and rather than shipping the SBS team’s plug-in, it now seems that the Exchange team is building its own. Not good communication; and here comes Mr Fed-Up:

Like most things from this company, we can expect a beta quality “solution” by sometime in 2010. We have a few hundred small business clients that we do outsourced IT for, and as it’s come time to replace machines, we’ve been replacing Windows PCs with Macs, and Windows servers with Linux. It’s really amazing how easy it is to setup a Windows domain on a Linux server these days. The end users can’t tell a difference.

What this illustrates is that blogging, forums and open communication are great, but only when you communicate bad news as well as good. It is remarkable how much more patient users are when they feel in touch with what is happening.

Mixing Hyper-V, Domain Controller and DHCP server

My one-box Windows server infrastructure is working fine, but I ran into a little problem with DHCP. I’d decided to have the host operating system run not only Hyper-V, but also domain services, including Active Directory, DNS and DHCP. I’m not sure this is best practice. Sander Berkouwer has a useful couple of posts in which he explains first that making the host OS a domain controller is poor design:

From an architectural point of view this is not a desired configuration. From this point of view you want to separate the virtualization and platforms from the services and applications. This way you’re not bound to a virtualization product, a platform, certain services or applications. Microsoft’s high horse from an architectural point of view is the One Server, One Server Role thought, in which one server role per server platform gets deployed. No need for a WINS server anymore? Simply shut it down…

Next, he goes on to explain the pitfalls of having your DC in a VM:

Virtualizing a Domain Controller reintroduces possibilities to mess up the Domain Controller in ways most of the Directory Services Most Valuable Professionals (MVPs) and other Active Directory enthusiasts have been fixing since the dawn of Active Directory.

He talks about problems with time synchronization, backup and restore, saved state (don’t do it), and possible replication errors. His preference after all that:

In a Hyper-V environment I recommend placing one Domain Controller per domain outside of your virtualized platform and making this Domain Controller a Global Catalog. (especially in environments with Microsoft Exchange).

Sounds good, except that for a tiny network there are a couple of other factors. First, you want to avoid running multiple servers all hungry for power. Second, you want to make best use of limited resources on a single box. That means either risking running a Primary Domain Controller (PDC) in a VM (perhaps with the strange scenario of having the host OS joined to a domain controlled by one of its own VMs), or risking making the host OS the PDC. I’ve opted for the latter for the moment, though it would be fairly easy to change course. I figure it could be good to have a VM as a backup domain controller for disaster recovery, in the scenario where the host OS would not restore but the VMs would – belt and braces within the confines of one server.

One of the essential services on a network is DHCP, which assigns IP addresses to computers. There must be one and only one on the network (unless you use static addresses everywhere, which I hate). So I disabled the existing DHCP server, and added the DHCP server role to the new server.

It was not happy. No IP addresses were served, and the error logged was 1041:

The DHCP service is not servicing any DHCPv4 clients because none of the active network interfaces have statically configured IPv4 addresses, or there are no active interfaces.

Now, this box has two real NICs (one for use by ISA), which means four virtual NICs after Hyper-V is installed. The only one that the DHCP server should see is the virtual NIC for the LAN, which is configured with a static address. So why the error?

I’m not the first to run into this problem. Various solutions have been proposed, including fitting an additional NIC just for DHCP. However, this is the one that worked for me:

I simply changed the mask on the desired interface from 255.255.255.0 to 255.255.0.0, saved it, then changed it back.  Suddenly the interface appeared in the DHCP bindings.

Strange, I know. The configuration afterwards was the same as before, but the DHCP server now runs fine. Looks like a bug to me.

Hands on with Hyper-V: it’s brilliant

I have just installed an entire Windows server setup on a single cheap box. It goes like this. Take one budget server stuffed with 8GB RAM and two network cards. Install Server 2008 with the Hyper-V, Active Directory Domain Services, DNS and DHCP roles. Install Server 2003 on a 1GB Hyper-V VM for ISA 2006. Install Server 2008 on a 4GB VM for Exchange 2007. Presto: it’s another take on Small Business Server, except that you don’t get all the wizards; but you do get the flexibility of multiple servers, and you do still have ISA (which is missing from SBS 2008).

Can ISA really secure the network from within a VM (including the machine on which it is hosted)? A separate physical box would be better practice. On the other hand, Hyper-V has a neat approach to network cards. When you install Hyper-V, all bindings are removed from the “real” network card and even the host system uses a virtual network card. Hence your two NICs become four.

In my setup, I’ve disabled Local Area Connection 4, which is the virtual NIC for the host PC. Local Area Connection 2 represents the real NIC and is bound only to the “Microsoft Virtual Network Switch Protocol”.

This allows the VM running ISA to use the real NIC as its external interface. It strikes me as a reasonable arrangement, surely no worse than SBS 2003 which runs ISA and all your other applications on a single instance of the OS.

Hyper-V lets you set start-up and shut-down actions for the servers it is hosting. I’ve set the ISA box to start up first, with the Exchange box following on after a delay. I’ve also set Hyper-V to shut down the servers cleanly (through integration services installed into the hosted operating systems) rather than saving their state; I may be wrong but this seems more robust to me.

Even with everything running, the system is snoozing. I’m not sure that Exchange needs as much as 4GB on a small network; I could try cutting it down and making space for a virtual SharePoint box. Alternatively, I’m tempted to create a 1GB server to act as a secondary domain controller. The rationale for this is that disaster recovery from a VM may well be easier than from a native machine backup. The big dirty secret of backup and restore is that it only works for sure on identical hardware, which may not be available.

This arrangement has several advantages over an all-in-one Small Business Server. There’s backup and restore, as above. Troubleshooting is easier, because each major application is isolated and can be worked on separately. There’s no danger of notorious memory hogs like store.exe (part of Exchange) grabbing more than their fair share of RAM, because it is safely partitioned in its own VM. After all, Microsoft designed applications like Exchange, ISA and SharePoint to run on dedicated servers. If the business grows and you need to scale, just move a VM to another machine where it can enjoy more RAM and CPU.

I ran a backup from the host by enabling VSS backup for Hyper-V (which requires manual registry editing, for some reason), attaching an external hard drive, and running Windows Server Backup. The big questions: would it restore successfully to the same hardware? To different hardware? Good questions; but I like the fact that you can mount the backup and copy individual files, including the virtual hard drives of your VMs. Of course you can also do backups from within the guest operating systems. There’s also a snag with Exchange, since a backup like this is not Exchange-aware and won’t truncate its logs, which will grow indefinitely. There are fixes; and Microsoft is said to be working on making Server 2008 backup Exchange-aware.
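
For what it’s worth, the registry edit is, as far as I can tell from Microsoft’s knowledge base article on Hyper-V backup, a single key that registers the Hyper-V VSS writer with Windows Server Backup. Treat this as a sketch – in particular, verify the writer GUID against the official article before applying it:

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WindowsServerBackup\Application Support\{66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}]
    "Application Identifier"="Hyper-V"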

Would a system like this be suitable for production, as opposed to a test and development setup like mine? There are a couple of snags. One is licensing cost. I’ve not worked out the cost, but it is going to add up to a lot more than buying SBS. Another advantage of SBS is that it is fully supported as a complete system aimed at small businesses. Dealing with separate virtual servers is also more demanding than running SBS wizards for setup, though I’d argue it is actually easier for troubleshooting.

Still, this post is really about Hyper-V. I’ve found it great to work with. I had a few hassles, particularly with Server 2003 – I had to remember my Windows keyboard shortcuts until I could get SP2 and Hyper-V Integration Services installed. Once installed though, I log on to the VM using remote desktop and it behaves just like a dedicated box. The performance overhead of using a VM seems small enough not to be an issue.

I’ve found it an interesting experiment. Maybe a future SBS will be delivered like this.

Update: I tried reducing the RAM for the Exchange VM and it markedly reduced performance. 4GB seems to be the sweet spot.

Gears of War certificate expiry a reminder to developers: always timestamp signed code

Users of the PC version of Gears of War have been unable to run the game since yesterday (29th January 2009). If they try, they get a message:

You cannot run the game with modified executable code

Joe Graf from Epic has acknowledged the problem:

We have been notified of the issue and are working with Microsoft to get it resolved. Sorry for any problems related to this. I’ll post more once we have a resolution.

The workaround is to set back your system clock. An ugly solution. Of course, some users went through the agony of full Windows reinstalls in an effort to get playing again.

So what happened? This looks to me like a code-signing problem, not a DRM problem as such, though the motivation may have been to protect against piracy. Code signing is a technique for verifying both the publisher of an executable and that it has not been tampered with. When you sign code, for example using the signwizard utility in the Windows SDK, you have to select a certificate with which to sign, and you then have the option to apply a timestamp. The wizard doesn’t mention it, but the consequences of not applying a timestamp are severe:

Microsoft Authenticode allows you to timestamp your signed code. Timestamping ensures that code will not expire when the certificate expires because the browser validates the timestamp. The timestamping service is provided courtesy of VeriSign. If you use the timestamping service when signing code, a hash of your code is sent to VeriSign’s server to record a timestamp for your code. A user’s software can distinguish between code signed with an expired certificate that should not be trusted and code that was signed with a Certificate that was valid at the time the code was signed but which has subsequently expired … If you do not use the timestamping option during the signing, you must re-sign your code and re-send it out to your customers.

Unfortunately, there is no timestamping for Netscape Object Signing and JavaSoft Certificates. Therefore you need to re-sign your code with a new certificate after the old certificate expires.

I don’t know if this is the exact reason for the problems with Gears of War, and I’m surprised that the game refuses to run, as opposed to issuing a warning, but this could be where the anti-piracy measures kick in. Epic’s programmers may have assumed that the only reason the certificate would be invalid is if the code had been modified.

I blogged about a similar problem in February 2006, when a Java certificate expired causing APC’s PowerChute software (a utility for an uninterruptible power supply) to fail. That one caused servers to run slow or refuse to boot.

As far as I know, there is no way of telling how much other code signed without a timestamp – its certificates not yet expired – is sitting on our PCs waiting to cause havoc one morning. If there is any, I hope it does not include software running, say, Air Traffic Control systems or nuclear power stations.

If you are a Windows developer, the message is: always timestamp when signing your code.
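
As a hedged illustration (the file names and password are placeholders), this is how a timestamp is applied when signing with signtool from the Windows SDK; the /t option points at VeriSign’s timestamping service, the usual choice at the time of writing:

    signtool sign /f mycompany.pfx /p MyPassword /t http://timestamp.verisign.com/scripts/timstamp.dll MyGame.exe

Without /t, the signature stops verifying the moment the certificate expires; with it, verification only requires that the certificate was valid when the code was signed.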