All posts by onlyconnect

Microsoft Small Business Server to Server Essentials R2: not a smooth transition

Recently I assisted a small business (of around 10 users) with a transition from Small Business Server 2003 to Server Essentials R2.

Small Business Server 2003 had served it well for nearly 10 years. The package includes Windows Server 2003 (based on XP), Exchange, and the rather good firewall and proxy server ISA Server 2004 (the first release had ISA 2000, but you could upgrade).

image

SBS 2003 actually still does more than enough for this particular business, but it is heading for end of support, and there are some annoyances like Outlook 2013 not working with Exchange 2003. This last problem had already been solved, in this case, by a migration to Office 365 for email. No problem then: simply migrate SBS 2003 to the latest Server 2012 Essentials R2 and everything can continue running sweetly, I thought.

Server Essentials is an edition designed for up to 25 users / 50 devices and is rather a bargain, since it is cheap and no CALs are required. In the R2 version matters are confused by the existence of a Server Essentials role, which lets you install the simplified Essentials dashboard in any edition of Windows Server 2012. The advantage is that you can add as many users as you like; the snag is that you then need CALs in the normal way, so it is substantially more expensive.

Despite the move to Office 365, an on-premise server is still useful in many cases, for example for assigning permissions to network shares. This is also the primary reason for migrating Active Directory, rather than simply dumping the old server and recreating all the users.

The task then was to install Server 2012 Essentials R2, migrate Active Directory to the new server, and remove the old server. An all-Microsoft scenario using products designed for this kind of set-up: it should be easy, right?

Well, the documentation starts here. The section in TechNet covers both Server 2012 Essentials and the R2 edition, though if you drill down, some of the individual articles apply to one or the other. If you click the post promisingly entitled Migrate from Windows SBS 2003, you notice that it does not list Essentials R2 in the “applies to” list, only the first version, and there is no equivalent for R2.

Hmm, but is it similar? It turns out, not very. The original Server 2012 Essentials has a migration mode and a Migration Preparation Tool which you run on the old server (judging by the description, it runs adprep, which updates Active Directory in preparation for migration). There is no migration tool or migration mode in Server 2012 Essentials R2.

So which document does apply? The closest I could find was a general section on Migrate from Previous Versions to Windows Server 2012 R2 Essentials. This says to install Server 2012 Essentials R2 as a replica domain controller. How do you do that?

To install Windows Essentials as a replica Windows Server 2012 R2 domain controller in an existing domain as global catalog, follow instructions in Install a Replica Windows Server 2012 Domain Controller in an Existing Domain (Level 200).

Note the “Level 200” sneaked in there! The article in question is a general technical article for Server 2012 (though in this case equally applicable to R2) aimed at large organisations and full of information that is irrelevant to a tiny 10-user setup, as well as being technically more demanding than you would expect for a small business setup.

Fortunately I know my way around Active Directory to some extent, so I proceeded. Note that you have to install the Active Directory Domain Services role before you can run the relevant PowerShell cmdlets. Of course, it did not work. I got an error message: “Unable to perform Exchange Schema Conflict Check.”

This message appears to relate to Exchange, but I think that is incidental; it just happens to be the first check that fails. I suspect it was a WMI (Windows Management Instrumentation) issue, though I did not realise this at first.

I should mention that although the earlier paper on migrating to Server Essentials 2012 is obsolete, it is the only official documentation that describes some of the things you need to do on the source server before you migrate. These include changing the configuration of the internet connection to bypass ISA Server (single network card configuration), which you do by running the Internet Connection Wizard. You should also check that Active Directory is in good health with dcdiag.exe.

I now did some further work. I removed ISA Server completely, and removed Exchange completely (note that you need your SBS 2003 install CD for this). Removing ISA broke the Windows Server 2003 built-in firewall, but I decided not to worry about it. Following a tip I found, I also used ntdsutil to change the DSRM (Directory Services Restore Mode) password. I also raised the SBS AD forest functional level to Server 2003 (it was at Server 2000), which is necessary for the migration to work.

I am not sure which step did the trick, but eventually I persuaded the PowerShell for creating the Replica Domain Controller to work. Then I was able to transfer the FSMO roles. I was relieved; I gather from reading around that some have abandoned the attempt to go from AD in Server 2003 to AD in Server 2012, and used an intermediate Server 2008 step as a workaround – more hassle.

After that things went relatively smoothly, but not without annoyances. There are a couple to mention. One is that after migrating the server, you are meant to connect the client computers by visiting a special URL on the server:

Browse to http://destination-servername/connect and install the Windows Server Connector software as if this was a new computer. The installation process is the same for domain-joined or non-domain-joined client computers.

If you do that from a client computer that was previously joined to the SBS domain (having removed unwanted extras like the SBS 2003 client and the ISA client), you are prompted to download and run a utility to join the new network. You do that, and it says you cannot proceed because a computer of the same name already exists. But it is the same computer! No matter: the wizard will not run, even though the computer is in fact already joined to the domain.

If you want to run the connect wizard and set up the Essentials features like client computer backup and Anywhere Access, then as far as I can tell this is the official way:

  • Make sure you have an admin user and password for the PC itself (not a domain user).
  • Remove the computer from the domain and join it to a workgroup. Make sure the computer is fully removed from the domain.
  • Then go to the connect URL and join it back.

If you are lucky, the domain user profile will magically reappear with all the old desktop icons, My Documents and so on. If you are unlucky you may need manual steps to recover it, or to use profile migration tools.

This is just lazy on Microsoft’s part. It has not bothered to create a tool that will do what is necessary to migrate an existing client computer into the Server Essentials experience (unless such a tool exists and I did not find it; I have seen reports of regedit hacks).

The second annoyance was with the Anywhere Access wizard. This is for enabling users to log in over the internet and access limited server features, and connect to their client desktop. I ran the wizard, installed a valid certificate, used a valid DNS name, manually opened port 443 on the external firewall, but still got verification errors.

image

Clicking Repair is no help. However, Anywhere Access works fine. I captured this screenshot from a remote session:

image

All of the above is normal business for Microsoft partners, but does illustrate why small businesses that take on this kind of task without partner assistance may well run into difficulties.

Looking at the sloppy documentation and missing pieces I do get the impression that Microsoft cares little about the numerous small businesses trundling away on old versions of SBS, but which now need to migrate. Why should it, one might observe, considering how little it charges for Server 2012 Essentials? It is a fair point; but I would argue that looking after the small guys pays off, since some grow into big businesses, and even those that do not form a large business sector in aggregate. Google Apps, one suspects, is easier.

An underlying issue, as ever with SBS, is that Windows Server and in particular Active Directory is designed for large scale setups, and while SBS attempts to disguise the complexity, it is all there underneath and cannot always be ignored.

In mitigation, I have to say that for businesses like the one described above SBS has done a solid job with relatively little attention over many years, which is why it is worth some pain in installation.

Update: A couple of further observations and tips.

Concerning remote access, I suspect the wizard wants to see port 80 open and directed to the server. However this is not necessary as far as I can tell. It is also worth noting that Server Essentials R2 installs TS Gateway, which means you can configure RDP direct to the server desktop (rather than to the limited dashboard you get via the Anywhere Access site).

The documentation, such as it is, suggests that you use the router for DHCP. Personally I prefer to have this on the server, and it also saves time and avoids errors since you can import the DHCP configuration to the new server.

Hands on with Cordova in Visual Studio

At TechEd this week, Microsoft announced Apache Cordova support in Visual Studio 2013. A Cordova app is HTML and JavaScript wrapped as a native app, with support for multiple platforms including iOS and Android. It is the open source part of Adobe’s PhoneGap product. I downloaded the preview from here and took a quick look.

There is a long list of dependencies which the preview offers to install on your behalf:

image

and

image

The list includes the Java SDK, Google Chrome and Apple iTunes. The documentation explains that Java is required for the Android build process, Chrome is required to run the Ripple emulator (so you could choose not to install if you do not require Ripple), and iTunes is required for deploying an app to an iOS device, though a Mac is also required.

The license terms for both Chrome and iTunes are long and onerous, plus iTunes is on my list of applications not to install on Windows if you want it to run fast. Chrome is already installed on my PC, and I unchecked iTunes.

Next, I ran Visual Studio and selected a Multi-Device Hybrid App project (I guess “Cordova app” was rejected as being too short and simple).

image

An annoyance is that if you use the default project location, it is incompatible because of spaces in the path:

image

The project opened, and being impatient I immediately hit Run.

When you build, and debug using the default Ripple emulator (which runs in Chrome, hence the dependency), Visual Studio grabs a ton of dependencies.

image

and eventually the app runs:

image

or you can debug in the Android emulator:

image

A good start.

Microsoft has some sample projects for AngularJS, BackboneJS and WinJS. This last is intriguing since you could emulate the Windows Phone look and feel (or something like it) on Android or iOS, though it would look far from native.

The preview is not feature-complete. The only supported device targets are Android 4.x, iOS 6 and 7, Windows 8.x Store apps, and Windows Phone 8.x. Windows Phone debugging does not work in this preview.

Office, Azure Active Directory, and mobile: the three pillars of Microsoft’s cloud

When Microsoft first announced Azure, at its PDC Conference in October 2008, I was not impressed. Here is the press release, if you fancy a look back. It was not so much the technology – though with hindsight Microsoft’s failure to offer plain old Windows VMs from the beginning was a mistake – but rather, the body language that was all wrong. After all, here is a company whose fortunes are built on supplying server and client operating systems and applications to businesses, and on a partner ecosystem that has grown up around reselling, installing and servicing those systems. How can it transition to a cloud model without cannibalising its own business and disrupting its own partners? In 2008 the message I heard was, “we’re doing this cloud thing because it is expected of us, but really we’d like you to keep buying Windows Server, SQL Server, Office and all the rest.”

Take-up was small, as far as anyone could tell, and the scene was set for Microsoft to be outflanked by Amazon for IaaS (Infrastructure as a Service) and Google for cloud-based email and documents.

Those companies are formidable competitors; but Microsoft’s cloud story is working out better than I had expected. Although Azure sputtered in its early years, the company had some success with BPOS (Business Productivity Online Suite), which launched in the UK in 2009: hosted Exchange and SharePoint, mainly aimed at education and small businesses. In 2011 BPOS was reshaped into Office 365 and marketed strongly. Anyone who has managed Exchange, SharePoint and Active Directory knows that it can be arduous, thanks to complex installation, occasional tricky problems, and the challenge of backup and recovery in the event of disaster. Office 365 makes huge sense for many organisations, and is growing fast – “the fastest growing business in the history of the company,” according to Corporate VP of Windows Server and System Center Brad Anderson, speaking to the press last week.

image
Brad Anderson, Corporate VP for Windows Server and System Center

The attraction of Office 365 is that you can move users from on-premise Exchange almost seamlessly.

Then Azure changed. I date this from May 2011, when Scott Guthrie and others moved to work on Azure, which a year later offered a new user-friendly portal written in HTML5, along with Windows Azure VMs and web sites. From that moment in 2012, Azure became a real competitor in cloud computing.

That is only two years ago, but Microsoft’s progress has been remarkable. Azure has been adding features almost as fast as Amazon Web Services (AWS – and I have not attempted to count), and although it is still behind AWS in some areas, it compensates with its excellent portal and integration with Visual Studio.

Now at TechEd Microsoft has made another wave of Azure announcements. A quick summary of the main ones:

  • Azure Files: SMB shared storage for Azure VMs, also accessible over the internet via a REST API. Think of it as a shared folder for VMs, simplifying things like having multiple web servers serve the same web site. Based on Azure storage.
  • Azure Site Recovery: based on Hyper-V Recovery Manager, which orchestrates replication and recovery across two datacenters, the new service adds the rather important feature of letting you use Azure itself as your spare datacenter. This means anyone could use it, from small businesses to the big guys, provided all your servers are virtualised.
  • Azure RemoteApp: Remote Desktop Services in Azure, though currently only for individual apps, not full desktops
  • Antimalware for Azure: System Center Endpoint Protection for Azure VMs. There is also a partnership with Trend Micro for protecting Azure services.
  • Public IPs for individual VMs. If you are happy to handle the firewall aspect, you can now give a VM a public IP and access it without setting up an Azure endpoint.
  • IP Reservations: you get up to five IP addresses per subscription to assign to Azure services, ensuring that they stay the same even if you delete a service and add a new one back.
  • MSDN subscribers can use Windows 7 or 8.1 on Azure VMs, for development and test, the first time Microsoft has allowed client Windows on Azure
  • General availability of ExpressRoute: fast network link to Azure without going over the internet
  • General availability of multiple site-to-site virtual network links, and inter-region virtual networks.
  • General availability of compute-intensive VMs, up to 16 cores and 112GB RAM
  • General availability of import/export service (ship data on physical storage to and from Azure)

There is more though. Those above are just a bunch of features, not a strategy. The strategy is based around Azure Active Directory (which everyone gets if they use Office 365, or you can set up separately), Office, and mobile.

Here is how this works. Azure Active Directory (AD), typically synchronised with on-premise Active Directory, is Microsoft’s cloud identity system which you can use for single sign-on and as a single point of control for Office 365, applications running on Azure, and cloud apps run by third parties. Over 1200 software as a service apps support Azure AD, including Dropbox, Salesforce, Box, and even Google apps.

Azure AD is one of three components in what Microsoft calls its Enterprise Mobility Suite. The other two are InTune, cloud-based PC and device management, and Azure Rights Management.

InTune first. This is stepping up a gear in mobile device management, by getting the ability to deploy managed apps. A managed app is an app that is wrapped so it supports policy, such as the requirement that data can only be saved to a specified secure location. Think of it as a mobile container. iOS and Android will be supported first, with Office managed apps including Word, Excel, PowerPoint and Mobile OWA (kind-of Outlook for iOS and Android, based on Outlook Web Access but delivered as a native app with offline support).

Businesses will be able to wrap their own applications as managed apps.

Microsoft is also adding Cordova support to Visual Studio. Cordova is the open source part of PhoneGap, for wrapping HTML and JavaScript apps as native. In other words, Visual Studio is now a cross-platform development tool, even without Xamarin. I have not seen details yet, but I imagine the WinJS library, also used for Windows 8 apps, will be part of the support; yes it works on other platforms.

Next, Azure Rights Management (RMS). This is a service which lets you encrypt and control usage of documents based on Azure AD users. It is not foolproof, but since the protection travels in the document itself, it offers some protection against data leaking out of the company when it finds its way onto mobile devices or pen drives and the like. Only a few applications are fully “enlightened”, which means they have native support for Azure RMS, but apparently 70% or more of business documents are Office or PDF files, which means if you cover those, you have good coverage already. Office for iOS is not yet “enlightened”, but apparently will be soon.

This gives Microsoft a three-point plan for mobile device management, covering the device, the applications, and the files themselves.

Which devices? iOS, Android and Windows; and my sense is that Microsoft is now serious about full support for iOS and Android (it has little choice).

Another announcement at TechEd today concerns SharePoint in Office 365 and OneDrive for Business (the client), which is getting file encryption.

What does this add up to? For businesses happy to continue in the Microsoft world, it seems to me a compelling offering for cloud and mobile.

Review: Nuance Dragon Dictate 4 for the Mac

There is something liberating about working without a keyboard – and I do not mean stabbing hopefully at a touch screen. Voice control means you can sit back, easily refer to books or papers,  and input text more quickly and naturally than is possible using a keyboard. Some conditions including RSI (Repetitive Strain Injury) may make dictation a necessity. I use dictation for transcribing interviews and for rapid text input generally. I do not often use dictation for controlling a computer, as opposed to entering and editing text, but this is also a key feature.

image

Nuance has the best voice recognition system available as far as I can tell, though my experience is mainly with Nuance Dragon NaturallySpeaking on Windows. But what about Mac users? For them, Nuance provides Dragon Dictate, which has recently been updated to version 4. It is not a port of Dragon NaturallySpeaking, but rather has its own distinctive features, though it is less comprehensive, and a glance at the Nuance forums suggests that Mac users feel a bit neglected.

Does Dragon Dictate 4 change that? The good news is that the voice recognition engine in Dragon Dictate appears to be just as good as the one in Dragon NaturallySpeaking. The accuracy is superb, though you still have to be realistic. Some recognition problems are just very difficult and the software is bound to make mistakes, especially in specialist fields – mine is programming and a specialist phrase like “JIT compiler” is bound to cause an error (Dragon thinks I want “Jet compiler”). Similarly, “pull request” became “full request”. Over time you can build up a custom vocabulary, but recognition will never be 100%, so a dictation system has to handle corrections as well as original input.

Setting up Dragon Dictate involves installing the software, letting it create a profile, and doing some training so that Dragon can learn the characteristics of your voice. I highly recommend using a good quality headset, since without one you cannot expect accurate recognition. I found the setup process quick and painless and was soon up and running.

Dragon Dictate has five modes:

  • Dictation Mode is what you use most of the time.
  • Spelling Mode is for spelling out problematic words. You can speak the letters naturally or use the International Radio Alphabet (Alpha Bravo Charlie etc). It is a nice feature since if you know Dragon is likely to get something wrong, you can switch to Spelling Mode, enter the difficult word, and then go back to Dictation Mode.
  • Numbers Mode is for typing numbers.
  • Command Mode is for non-dictation commands. However, commands also work in Dictation Mode. The advantage of Command Mode is that Dragon will not misinterpret your commands as text input; but there is no way to configure Dictation Mode to prevent it interpreting speech intended as text as commands. The manual suggests that you use unnatural pauses for this. For example, if you are reviewing Dragon Dictate and want to type “Command Mode”, you can say “Command [pause] Mode” and get what you want.
  • Sleep Mode puts Dragon in a resting state, so for example you can take a telephone call without Dragon trying to transcribe it.

Switching mode is easy: just speak the mode you want. If Dragon is in Sleep Mode, you can say “Wake up”.

My initial experience with Dragon Dictate 4 was not too good. The problems were not with recognition but rather with navigating and correcting existing text, which I found harder than in Dragon NaturallySpeaking on Windows. In fact my attempts to make corrections all too often ended up with more and more errors as a correction went wrong and I would be trying to correct the correction, getting increasingly frustrated.

Using Microsoft Word 2011, I experienced unexpected behaviour. For example, if I put the cursor in between two words and dictated a word to insert, sometimes the word appeared elsewhere in the text.

Another odd thing: I dictated “for example”, and Dragon recognised it as “one example”. No problem: Dragon has a Recognition Window which lists alternatives when you say “correct” followed by the word you want to amend. I said “correct one” and the recognition window appeared offering “for example” as one of the choices. I selected it, but Dragon then entered “for example example” in the text. I was not offered the word “for” on its own.

Dragon Dictate 4 was rescued from a terrible review when I studied the manual. Towards the end is a section entitled “The Cache and the Golden Rule”. This explains that you should not combine the use of keyboard and mouse with dictation when editing a document. If you do, Dragon gets confused about the contents of the document and you see unexpected results. You can fix this with a special command, “Cache Document”, which tells the software to clear and rebuild the cache for the entire document.

If you are not aware of this issue, then you are likely to make increasing use of keyboard and mouse as Dragon gets it wrong, making the issue worse. That is exactly what had happened to me.

Another key point is the difference between training and correcting. If you use the Recognition Window to make a change that is not in fact a recognition fault – such as changing “good” to “excellent” – then you will confuse the voice training. Rather, you should say “Select good”, to select the word you want to change, and then say “excellent” to overtype it.

After studying the manual, I got much better results, though Dragon Dictate still occasionally seems to have a mind of its own.

Nevertheless, this fussiness is a weakness in the software. The best software works the way you want it to, rather than making the user do things a certain way. Why cannot Dragon do its cache repairs automatically in the background?

Still, what Dragon offers is of high value, and in this case if you want the best results you have to do the homework.

There are a few other things to mention. Nuance offers a free app for the iPhone that lets you use it as a remote microphone. Personally I find a headset more convenient, but I guess there are scenarios where this is useful.

There are also features in Dragon Dictate aimed at general system control. I tried the MouseGrid, which overlays a grid over the entire screen and lets you zoom into the area of interest for accurate mouse control. You can also move the mouse using Up, Down and so on under voice control, and perform single, double or triple clicks.

Conclusion? The software does not feel as complete or as polished as Dragon NaturallySpeaking, but the excellent voice recognition means that this is the best available for the Mac. Recommended, but with reservations.

Microsoft’s new open source direction for C# and .NET (and native compilation too): Anders Hejlsberg explains

At the April 2014 Build conference Microsoft made some far-reaching announcements about its .NET platform and the C# programming language. Yes, there was talk of C# 6.0, the next version, but the real changes are more profound. Specifically:

C# and Visual Basic have a new compiler, itself written in C#, code-named Roslyn. Roslyn is not just a new compiler; Microsoft now calls it the “.NET Compiler Platform”.

There is a new commitment to open source for .NET projects. Microsoft formed the .NET Foundation to oversee existing open source projects, including ASP.NET, Entity Framework, the Azure .NET SDK, and now Roslyn as well. “When it comes to development projects we are going to operate from the premise that open source is the default, unless there are reasons why it does not work,” said C# lead architect Anders Hejlsberg.

image

Note that open source does not mean chaos. It does mean that you can fork the project if you want – the Roslyn license is Apache 2.0 – but getting Microsoft to accept new features you have contributed will not be trivial. Hejlsberg makes the point that language features are easy to add, but impossible to take away, so extreme care is necessary.

Microsoft is also supporting cross-platform C# to a greater extent than it has done in the past. The most obvious sign of this is its cooperation with Xamarin, which provides C# compilers for iOS and Android. Xamarin’s Miguel de Icaza got top billing at Build, and is also involved in the .NET Foundation.

There is more though. The idea of standardised C# is re-emerging:

“The last ECMA standard was C# 2.0. There wasn’t a lot of demand for it, but that demand has recently risen and we have re-engaged with the ECMA community to produce a standard for C# 5.0,” said Hejlsberg.

This bears some unpacking. Why was there little demand for ECMA C#? Partly, I would guess, from the assumption that C# was firmly in Microsoft’s grip, with Java the obvious choice for cross-platform development. The main interest was from the Mono folk (Miguel de Icaza again), who implemented .NET for Linux and the Mac with some success, but nothing to disturb Java’s momentum.

The focus now though is on mobile, and interest in C# is stronger, mainly from Microsoft-platform developers reaching beyond Windows. There is also Unity, which uses C# as a scripting language for developing games for multiple platforms, including iOS, Android, Windows, Mac, Linux, Xbox, PS3 and Wii – PS4 is coming very soon.

Microsoft has now consciously embraced multiple platforms, as evidenced by Office for iOS as well as the Xamarin collaboration. “We want C# developers to build great applications across different form factors and different device platforms,” said Jay Schmelzer, Director of Program Management for Visual Studio.

You might observe that this position has been forced on the company by the rise of iOS and Android, a view which likely has some merit, but the impact it has on C# and .NET itself is still real.

I asked Hejlsberg to unpack the difference between the Roslyn project and C# 6.0, bearing in mind that both are covered on the Roslyn open source site; you can see the current status of C# 6.0 and the next Visual Basic here.

“Roslyn is the name for the project that encompasses the new C# compiler and the new VB compiler and the new language services that they share. C# 6.0 is the name of the next version of the C# language which will have a specification and which will have an implementation. We are implementing C# 6.0 on the Roslyn platform. We are not going to continue to evolve our old C++ C# compiler – the C# compiler was originally written in C++ and has been evolved up through C# 5.0. That is where we are going to retire that code base, and going forward versions of C# will be built on Roslyn and therefore will be built open source. Unlike previously where, boom, C# came down from the sky with a set of features, it is going to happen more organically now, people will submit pull requests, open up issues, and you will see us work on these features. You will see them from inception to fruition.

“The C# team, the Roslyn team, the VB team, their day-to-day workplace now is the open source site. That is where they check in code. It is a community in the making.”

Even that is not all. At Build, Microsoft also announced .NET Native, which is a native compiler for C# and Visual Basic, now in preview for x64 Store apps. What is the difference between .NET Native and the existing NGen native compiler for .NET? Over to Hejlsberg:

NGen is the native feature that we currently support. NGen is really, “I’m going to JIT [Just in time compile] your code and then snapshot all the data structures and dump them in a file so that I can quickly rebuild that file later when you run this particular application”. But it is the same code generator and all the same features, and JIT is still there. NGen is really a way to pre-cache the JIT output and therefore get better performance, but it adds to the size of your app because you still have all the assemblies and metadata and then the NGen image as well.

.NET Native is a completely different approach. Instead of the JIT we use the backend from the C++ compiler. You can think of it as a linker that takes as input assemblies, and as output produces a PE [Portable Executable] executable. In the process this linker or code generator will analyse all the IL [Intermediate Language] that goes into the application and it will apply a thing known as tree-shaking where it eliminates all of the code that will never execute based on known execution roots.

In other words, the public static main of your program and also whatever pieces of your app that you designate as reflectable, they also become roots. Based on that we produce an optimised exe, and into that exe we link the pieces of the framework that you are referencing. We link in a garbage collector [GC], and it looks to the operating system just like an exe. When you run it, it runs a local GC in there and it is as efficient really as C++ code.

There are some restrictions associated with .NET Native, mainly that you can’t just willy-nilly reflect on the whole world. You can’t just generate new code and ask for that to be jitted because there may not be a JIT compiler. We are considering allowing you to link in a JIT compiler, but there are certain execution environments which don’t permit jitting, like Xbox. If you use reflection in your app you have to tell us what to keep reflectable, because otherwise we will optimise it away.
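To make the reflection point concrete, here is the sort of pattern a static linker cannot see through (a hypothetical illustration of my own, not code from the interview; the type and method names are invented). Because the type is only ever named in a string, tree-shaking could remove it unless you declare it reflectable:

using System;

public interface IExporter { void Export(string path); }

// Never referenced directly in code, only via the string below –
// exactly the kind of type a tree-shaking linker might strip out.
public class CsvExporter : IExporter
{
    public void Export(string path) { /* write CSV to path */ }
}

public static class ExporterFactory
{
    public static IExporter Create(string typeName)
    {
        // Resolved at runtime via reflection, invisible to static analysis.
        Type t = Type.GetType(typeName);
        return (IExporter)Activator.CreateInstance(t);
    }
}

// Usage: IExporter exporter = ExporterFactory.Create("CsvExporter");
// Under .NET Native you would need to mark CsvExporter as reflectable,
// otherwise the optimiser is entitled to remove it.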

According to Schmelzer:

The preview out today is scoped to Store app x64 and ARM. We haven’t run into any technical limitation that shows it can’t be done across the breadth, it is just a matter of request and need.

Open source, native code compilation, and an innovative compiler: it adds up to huge changes for C# and .NET, positive ones as far as I can tell.

The Xamarin connection is intriguing though. Developers in general admire the technology as far as I can tell, but it is expensive, and paying out for a Xamarin subscription on top of maybe MSDN for Visual Studio is too much for some smaller organisations and does not encourage experimentation. Might Microsoft acquire Xamarin and build Visual Studio into an IDE targeting all the major mobile platforms, but with special hooks to Azure-hosted services?

That prospect makes sense to me, though it would be a shame if the energetic Xamarin culture became bogged down in big-company bureaucracy. Currently though: no news to report.

Brief reflections on 50 years of BASIC

Beginner’s All-purpose Symbolic Instruction Code (BASIC) has turned fifty, as reported on The Reg and by Jack Schofield on ZDNet. A great moment in computer history, or would we have been better off without it?

My first computer (a Commodore PET) ran Basic from ROM, and without it you could do nothing, though developers bypassed it by using the POKE command to write low-level instructions into memory. The language is meant to be forgiving (as far as a computer language can be) and English-like, at the expense of being a little more verbose. It is case-insensitive and does not require braces or semi-colons to indicate blocks or lines of code, which makes programming look less intimidating for beginners.

I graduated onto an Atari ST, for which there was an excellent Basic implementation called GFA Basic, fast and capable. This was great for writing utilities, though serious programming tended to use one of several strong C compilers: Lattice C, Mark Williams C and HiSoft C come to mind.

Basic also had a role, even on the ST, as a macro language for applications. For example, the Superbase database manager used a version of Basic.

The company most strongly associated with Basic though is Microsoft. A version of Basic came with MS-DOS.

image

Microsoft also supported Basic for professional development. Microsoft Basic Professional Development System 7.x was a well-regarded development tool for business applications, though commercial shrink-wrap software tended to be written in C or C++.

That trend followed through to the Windows graphical environment. Visual Basic (VB), which made it easy to code Windows applications, was perhaps the most significant Basic release in terms of its impact, especially when it reached version 3.0 with full database support. Its popularity was such that many developers felt wounded when Microsoft discontinued Visual Basic 6.0, a direct successor, in favour of Visual Basic .NET, which was incompatible and substantially different.

Further, VB 6.0 or something very like it lives on today, in the form of Visual Basic for Applications as found in all recent versions of Microsoft Office.

image

Despite this, Basic is in decline. Most of the professional developers I meet at events like Build use C# in preference to Visual Basic, there being little reason not to. C# is the premier language of .NET, and Visual Basic gets in the way if you want to keep up with latest .NET developments. Xamarin, which lets you code in .NET for iOS and Android, supports C# but not Visual Basic. Once you come to terms with semi-colons, braces and case-sensitivity, there is no real advantage to Visual Basic and C# is no more difficult.

I do see Visual Basic still used in education though, as well as by some developers who either prefer the language or are so used to it that they see no need to change; and to be fair, Xamarin aside, there is little if anything you can do in C# that you cannot also do in VB and the output is more or less the same.

The Roslyn project, which will be part of the next version of C# and probably in the next release of Visual Studio, lets you paste C# code as VB and vice versa.

Nevertheless, I believe we will see further decline in Basic usage, especially as it is little used outside Microsoft’s platform.

Would it have been better if Microsoft had not adopted Basic so wholeheartedly? There are some problems with Basic, though it is possible to write excellent code in Basic just as you can write poor code in C#, Python, C, or other more fashionable languages. Some issues:

  • Early versions of Basic encouraged badly structured programming with keywords like GOTO and GOSUB resulting in intricate loops that were hard to follow or debug.
  • Basic abstracts how software works to such an extent that you do not learn some important programming concepts such as pointers, addresses, memory allocation.
  • There is no natural progression from Basic to the C-like languages which dominate computing (C, C++, JavaScript, C#).
  • Visual Basic encourages developers to mix GUI code and business logic in the same files, as well as building user interfaces that tend not to scale well.
  • Small and declining professional use means that Basic is less useful than many other languages in the job market.

That said, Basic powers many excellent business applications as well as introducing many to the wonders of programming, and deserves our respect.

XAML and C#, or HTML and WinJS for Windows Store, Universal and cross platform apps?

Microsoft designed the Windows Runtime (WinRT, the engine behind the controversial touch-friendly “Metro” user interface in Windows 8) to support three development platforms. These are C++ with XAML (for most GUI apps) or DirectX (for fast games); or C# and XAML; or HTML and JavaScript using the WinJS library for access to Windows-specific functions.

Microsoft’s line is that all three approaches are fine to use, with little performance difference other than that C++ avoids an interop layer. Of course if you have any arbitrary code that runs faster in C++ than in C#, then you will still see that difference in the WinRT environment.

It is also obvious that if you are an HTML and JavaScript expert but know nothing of C# or XAML, you should use WinJS; similarly, if you have a lot of C# code to port and know nothing of HTML or JavaScript, C# and XAML is the obvious choice.

But what if you approach the decision from a neutral perspective? I am going to leave aside the C++ option for the moment, as it is more of a leap than C# versus JavaScript. Which is best?

On the WinJS side, a common misconception is that this library is only for Windows. At Build 2014 Microsoft announced that WinJS is now open source, and works on other browsers and devices:

The library has been extended to smaller and more mobile devices with the release of WinJS 2.1 for Windows Phone 8.1, which was announced today at //build. Now that WinJS is available for building apps across Microsoft platforms and devices, it is ready to extend to web apps and sites on other browsers and devices including Chrome, Firefox, iOS, and Android.

In order to sample this, I went along to try.buildwinjs.com on an iPad. In a quick test, everything I tried worked fine on iOS.

image 

If you used WinJS to build an app, you could use PhoneGap or Cordova to package it as a native application for the iOS or Android app stores.

A further reflection on this is that some of the WinJS controls which you might have assumed are native WinRT controls instantiated from JavaScript are in fact implemented in CSS and JavaScript. That is an advantage for cross-platform, but does suggest that Microsoft has been busy duplicating the look and feel of XAML controls in HTML and JavaScript, which seems a lot of work and an approach which is bound to result in inconsistencies.

Another snag with this approach – leaving aside the questions of performance and so on which I have not investigated – is that you end up with the distinctive look and feel of a Windows 8 app, which is going to be surprising on these other devices.

C# also has cross-platform potential, thanks to the great work of Xamarin and not forgetting RemObjects C#. Note though that I wrote C# rather than XAML. There is no cross-platform XAML implementation other than the abandoned cross-platform Silverlight efforts, Silverlight for the Mac and Moonlight for Linux. Xamarin expects you to rewrite your UI code for each platform – which may in fact be a good thing, though more effort.
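To illustrate what sharing the C# but not the UI looks like in practice, here is a minimal sketch (the names are invented by me, not taken from any Xamarin sample): the class below is plain .NET with no UI references, so the same source can be compiled into the Windows, Windows Phone, iOS and Android projects, each of which supplies its own user interface on top.

// Shared library: pure C#, no platform or UI dependencies.
namespace WordGame.Core
{
    public class Scorer
    {
        // Simple scoring rule: vowels are worth 1 point, consonants 3.
        public int Score(string word)
        {
            int total = 0;
            foreach (char c in word)
            {
                total += "aeiou".IndexOf(char.ToLowerInvariant(c)) >= 0 ? 1 : 3;
            }
            return total;
        }
    }
}

// Each platform project references the library and binds the result to its own UI, e.g.
//   var points = new WordGame.Core.Scorer().Score(input);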

If you are focused on the Windows platform though, it seems to me that the pendulum is swinging away from WinJS and towards C# and XAML. Wordament is an interesting case. This is one of the better games for Windows 8, and also available for Windows Phone, iOS and Android. Originally this was implemented in HTML and JavaScript. The developers have blogged about the choices they made:

Wordament grew up very fast. It seemed like it went from an indie game with a handful of players to a full on Microsoft title with millions of users, on lots of platforms, almost overnight. In that transition we ended up writing lots of code. For instance, in the course of a year we had ports of the Wordament client written in JavaScript, Objective C, C++ and C#. Each one of these ports has its own special issues, build processes and maintenance challenges. At the time, we were still a two person team and Jason and I were struggling to continue to innovate on Wordament, while supporting the “in-market” code bases we had shipped. So we started looking for a solution that would allow us to share more code between all of the platforms we were targeting. Funnily enough, the answer was sitting in our own backyard: C#.

As we looked around at the state of “cross platform development” on Windows, Windows Phone, iOS and Android we started to realize that C# was an excellent choice to target all modern mobile devices. So we did just that. With the help of Visual Studio for Windows and Xamarin for iOS and Android, we started a project to build a single version of Wordament’s source code that could target all the platforms we ship on. This release on Windows 8 marks the end of that journey. All of our clients are now proudly built out of one source tree and in one language. Even our service, which runs on Windows Azure, is built in C#. This is a huge efficiency win for our team of four.

Notable also is that the forthcoming Office for WinRT, previewed at Build, is written in XAML and C++ (according to what I was told). This means XAML will get some love inside Microsoft, which is bound to be good for performance and features.

An advantage of the C# approach is simply that you get to use C#, which is well-liked by developers and with some compelling new features promised in version 6.0, many thanks to the Roslyn project – which also promises a smarter editor in Visual Studio.
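For a flavour of what is coming, here is a sketch using some of the features that have been discussed for C# 6.0 – auto-property initializers, expression-bodied members, string interpolation, the null-conditional operator and nameof. The exact feature list and syntax are still subject to change, so treat this as illustrative rather than definitive:

using System;

public class Player
{
    // Auto-property initializer: no constructor boilerplate needed.
    public string Name { get; set; } = "Unknown";

    // Expression-bodied member with string interpolation.
    public override string ToString() => $"Player: {Name}";

    public static int NameLength(Player p)
    {
        // Null-conditional operator: returns 0 rather than throwing if p or Name is null.
        return p?.Name?.Length ?? 0;
    }

    public static void Rename(Player p, string name)
    {
        if (name == null)
            throw new ArgumentNullException(nameof(name)); // nameof avoids a magic string
        p.Name = name;
    }
}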

What about XAML? This is harder to love. XML is out of fashion – too verbose, too many angle brackets – and the initial promise of a breakthrough in designer/developer workflow thanks to XAML and MVVM (Model – View – ViewModel, which aims to separate code from user interface design) now seems hollow. I am writing a game in XAML and C# and do not much enjoy the XAML part. One of the issues is that the editing experience is less satisfactory. If you make an error in the XAML, the design view simply blanks out with an “Invalid Markup” error. Further, the integration between the XAML and C# editors is a constant source of problems. Half the time, the C# editor seems to forget the variables declared in XAML, giving you lots of errors and no code completion until you next compile. Even when it is working, a rename refactor in the C# editor will not rename variable references that are within quotes in the XAML, for example property names that are the targets of animations.

On the plus side, XAML is amazing in what it can do when you work out how. The user interfaces it generates are rich, scalable and responsive. It is way better than the old Win32 GDI-based approach (also used in the Windows Forms .NET API), which is hard to get right for all the combinations of screen sizes and resolutions, and has odd dependencies on system font sizes and dialog units (don’t ask).

Despite the issues with XAML, C# and XAML (or C++ and XAML) is my own preference for targeting the new Windows platform, but I am interested in other perspectives on this.

Getting animated: basics in Windows Store apps

I am not a designer and prefer to avoid things like animation as too difficult. On the other hand, I am writing an electronic card game and it looks bad if the cards move without any animation. There is also an issue in that animation is built into the standard controls, so if you do not animate your own parts of the user interface, it looks inconsistent.

Animation is more significant than it may first appear. You can think of it as a natural progression from a basic graphical user interface, along with things like full window drag. In the real world, objects do not just blink in and out of existence when they move from one spot to another.

My game is a bridge simulation so four cards are laid on the table in turn, at which point the winner of the four cards is calculated and the four cards gathered to form a “trick”. In my original implementation, the four cards simply disappeared at the end of the trick. How can I have it so that they smoothly gather themselves into a pile of four cards?

Like most things in Store apps, it proved a little more complex than I had thought. Originally, I put each card in a separate cell of a layout grid. In Store apps, there is no built-in way to animate movement between grid cells, though users have come up with custom animations that do this. I thought it would be simpler to put the cards on a canvas instead.

I did some sums and positioned four Rectangle objects at the borders of the Canvas.

image

I want the Rectangle objects to slide smoothly into a pile in the centre. You do this in XAML with a Storyboard element. A Storyboard has child animation elements. When you call Storyboard.Begin() in your code, all the animations run. They run simultaneously unless one or more of the animations has a different BeginTime attribute. If you want a sequence of animations, you can either vary BeginTime, or use a KeyFrame animation which is designed for this.

There are several ways to do what I want. One is to animate Canvas.Top and Canvas.Left using a DoubleAnimation (the name indicates that it targets properties of type Double, not that anything happens twice). Note that if you do this, the Storyboard.TargetProperty has to be in parentheses:

Storyboard.TargetProperty="(Canvas.Top)"

because it is an attached property. However, I chose to use a RepositionThemeAnimation. I found the way this works counter-intuitive. In XAML, you define a RepositionThemeAnimation with either or both a FromHorizontalOffset and a FromVerticalOffset. At runtime, the RepositionThemeAnimation moves the object to the offset position instantly, and then back to its starting point with animation. In other words, the animation itself does not move the object; unless you have AutoReverse set to True, in which case it does a second animation back to the offset position.

Once you grasp it, it is not difficult. Here is my Storyboard, set as a XAML Resource:

<Page.Resources>
    <Storyboard x:Name="TrickStoryboard">
        <RepositionThemeAnimation Storyboard.TargetName="CardLeft" SpeedRatio=".5"  FromHorizontalOffset="-185"/>
        <RepositionThemeAnimation Storyboard.TargetName="CardTop" SpeedRatio=".5"  FromVerticalOffset="-200"/>
        <RepositionThemeAnimation Storyboard.TargetName="CardRight" SpeedRatio=".5"  FromHorizontalOffset="185"/>
        <RepositionThemeAnimation  Storyboard.TargetName="CardBottom" SpeedRatio=".5"  FromVerticalOffset="200"/>
    </Storyboard>
</Page.Resources>

And here is my code to gather the trick:

this.CardTop.Margin = new Thickness(0, 185, 0, 0);
this.CardLeft.Margin = new Thickness(200, 0, 0, 0);
this.CardRight.Margin = new Thickness(-200, 0, 0, 0);
this.CardBottom.Margin = new Thickness(0, -185, 0, 0);
this.TrickStoryboard.Begin();

In other words, you move the objects to where you want them, and then apply the animation which moves them temporarily back to their starting point, then smoothly to their destination. At this point I should include a little video, but will leave you to imagine the four Rectangles above sliding and merging to a single central Rectangle.

Something to watch for here is how the animation impacts your code flow. The animation runs asynchronously, so if you have:

this.TrickStoryboard.Begin();
DoSomething();

then DoSomething runs before the animation completes. If this is not what you want, you can break at that point, and handle the Storyboard.Completed event to resume. In my brief tests, Storyboard.Completed always fires even if the animation gets interrupted, say by some other code that did something to the objects.
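Here is a minimal sketch of that pattern (GatherTrick and DoSomething are placeholder names of my own): subscribe to Completed, start the storyboard, and resume the game logic in the handler once the animation has finished.

private void GatherTrick()
{
    // Continue only when the cards have finished moving.
    this.TrickStoryboard.Completed += TrickStoryboard_Completed;
    this.TrickStoryboard.Begin();
}

private void TrickStoryboard_Completed(object sender, object e)
{
    // Unsubscribe so the handler does not fire again for later tricks.
    this.TrickStoryboard.Completed -= TrickStoryboard_Completed;
    DoSomething();
}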

For more on this subject, read up on Animating your UI and get to know the different animation classes, Visual states, easing functions, RenderTransforms and more. It soon gets complex and verbose unfortunately, but on the plus side it is great to have animation baked into the framework, and the result is a more polished user interface.

Microsoft completes Nokia acquisition: what now for Windows Phone?

Microsoft has completed its acquisition of Nokia today, a milestone in the turbulent story both of Nokia and of Windows Phone, which Nokia adopted in the hope of establishing a “third ecosystem” to challenge Apple iOS and Google Android.

Rumour has it that the Nokia acquisition was controversial within Microsoft and a large factor in the departure of Steve Ballmer as CEO. However, even if Microsoft took the view that an independent Nokia was better for Windows Phone, it faced the risk that market pressure would drive Nokia to Android and weaken the platform. The beginnings of that process may have been under way, with the launch of the Nokia X Android-but-not-Google range of phones, but we will never know, since Microsoft decided on acquisition.

image

How important has Nokia been for Windows Phone? In my view, life-saving. Before Nokia, no manufacturer or operator really cared about the platform, and it showed in lacklustre hardware and half-hearted marketing efforts. Nokia came up with the distinctive Lumia brand and style, added a decent mapping service, and with its focus on the PureView camera technology, gave enthusiasts a reason to take a close look at its devices. It also saw an opportunity at the low end, and created some great value devices that opened up a new market for the operating system.

There were some blunders (the original Lumia 800 suffered many faults and terrible battery life on launch) and Lumia did not grow fast enough to restore Nokia to health, but to my mind it was a good effort.

Today, the general opinion of Windows Phone is that it is a strong smartphone operating system but suffers from a lack of high-quality apps. Users have to put up with the fact that most app vendors feel they are done if they support iOS and Android; and if there is a Windows Phone version of their app, it is often poor. That is not a great position for Microsoft/Nokia to be in, but it could be worse. Blackberry 10, which is also a decent mobile operating system, has been all-but written off as a viable contender.

Microsoft is fortunate in that, unlike Blackberry, it can to some extent create its own ecosystem. Office 365, Bing, OneDrive, Nokia’s maps, Azure for developers needing a cloud back-end: taken together they form a viable alternative. In this respect, Microsoft actually has an advantage over Apple, which lacks this breadth of services.

I have been reading the latest Developer Economics report from Vision Mobile. It is a good example of the neutral perspective on Windows Phone, though you will find it somewhat inconsistent:

Windows Phone sales picked up significantly in Q3 2013, showing a 140% increase year-on-year, fuelled primarily by low-end device sales. According to Kantar, Windows Phone sales in the three months running to Oct 2013, reached double-digit figures in some Western European markets. While this is certainly a positive sign for Microsoft they will continue facing an uphill struggle, in an increasingly unfavourable race against the two runaway leaders, iOS and Android.

The report emphasises that iOS and Android have won the mobile OS wars, but says that there are signs of hope for Microsoft:

Windows Phone Developer Mindshare has finally moved upwards, following positive market signals in the last two quarters. As we have frequently highlighted in past reports, the developer intent has always been there, with Windows Phone figuring at the top of our Developer Intentshare chart, but needed positive market signs in order to convert this interest into Mindshare. While the 26% Developer Mindshare is still less than half of that for iOS, Microsoft can now claim that over a quarter of developers that target mobile platforms are now actively developing for Windows Phone. […]

As a latecomer to a mobile market dominated by strong network effects, establishing a credible footprint in mobile remains a formidable challenge for Microsoft. We believe that Microsoft may be better served in the long-run by leveraging the Android ecosystem as the deployment platform for Office and Server businesses which are still growing.

Microsoft is in fact supporting iOS and Android as clients for its cloud services, as noted again at yesterday’s financial webcast, where CEO Satya Nadella talked about a strategy that goes across “devices some ours, some not ours.” It is a bit of both though, and the company is not showing any signs of weakening its own mobile efforts.

In my view reports like that from Vision Mobile miss a couple of factors. One is that Windows and Windows Phone are converging. They already use the same OS kernel, and at the Build conference earlier this month Microsoft announced Universal Apps that will run on both, and the ability for developers to sell an app once and have users install on both phone and full Windows.

This means that the future of Windows Phone and that of Windows itself are closely bound together. Longer term, they will either both fade away, or both succeed.

Windows remains a huge business for Microsoft, despite the decline of the PC, especially in business. Microsoft’s problem though is that adoption of Windows 8 has been relatively weak, and that those who do use it largely live in the desktop environment rather than running Store apps (of which Universal Apps are a variant).

Despite the dismal progress so far for the Store apps platform and ecosystem, I believe it should be taken seriously. On paper it has many advantages, not only for touch control, but also in deployment, security, roaming data driven by the cloud, and discoverability through the store. Isolation from the core operating system protects users against the things that destroy desktop Windows, like unwanted extras foisted on users who simply need to update Java or Flash.

At Build we saw not only Universal Apps, but also a preview of Office in the Windows Runtime (Store app) environment. We also saw a preview of Store apps running within a window in the desktop environment, solving the jarring transition between desktop and Store app environments that unsettles users. If Microsoft gets this right, both Windows Phone and Windows tablets will be substantially more attractive.

Microsoft also has the ability to bind Windows Phone into its enterprise device management environment, System Center and InTune. In Windows Phone 8.1 the device management and security features businesses need are much improved. More is still needed; but the company should be able to build integration points that make it attractive to business customers already using products such as Active Directory, Microsoft Office, Office 365, System Center or InTune.

Another factor is the strength of Visual Studio for developers, especially as Microsoft improves its integration with cloud services like Azure and Office 365. You can use C# everywhere from cloud or server to mobile client.

image

Cortana is sure that Windows Phone is the best; but check out the Bing ad.

What then is the future of Windows Phone? Uncertain, as ever; but if Microsoft pulls off a smooth Nokia acquisition – leaving in place the things that enabled the company to build the Lumia brand – and if it delivers on the promise we saw at Build, of a strong unified platform, then I expect market share to continue to grow. If it can climb to 10% or 15%, it will be on the map for vendors and the app problems will ease.

On the other hand, if Microsoft/Nokia means a return to the ineffective marketing and strategy we saw before Nokia adopted Windows Phone, then I expect Windows Phone to follow Blackberry into oblivion.

I am positive, but Microsoft needs to execute carefully and quickly to win market share for its mobile platform.

Microsoft financials: strong quarter especially in cloud services. We have a very different way to think about Windows says Nadella

Microsoft has released its financial results for the first quarter of 2014. The year on year segment figures look like this:

Quarter ending March 31st 2014 vs quarter ending March 31st 2013, $millions

Segment | Revenue | Change | Gross margin | Change
Devices and Consumer Licensing | 4382 | +30 | 3906 | -23
Devices and Consumer Hardware | 1973 | +571 | 258 | -135
Devices and Consumer Other | 1950 | +294 | 541 | +111
Commercial Licensing | 10323 | +344 | 9430 | +345
Commercial Other | 1902 | +453 | 475 | +211

The “Gross margin” figures above do not tell us much other than for hardware, since Microsoft no longer allocates its research and development costs against specific segments.

Overall revenue is slightly down year on year but only because of a $1778 million decline in the “corporate and other” segment. This means it was a better quarter than the overall revenue suggests.

So what is notable? Windows OEM revenue is up, but only thanks to the business market, and partly thanks to upgrades driven by the end of support for Windows XP. Consumer OEM Windows is down by 15%.

Xbox revenue is up 45% thanks to the launch of Xbox One (and I have a hunch we will see less positive figures in future since Sony’s PS4 seems to be winning the console wars).

Surface (Microsoft’s own-brand tablet) revenue is up by over 50% year on year, to $494 million. It is a significant business, though apparently not a profitable one. Cost of sales was $539 million, says Microsoft in its notes.

Windows volume licensing, which accounts for most enterprise usage, is up 11%, also no doubt influenced by the end of XP support. SQL Server revenue is up by 15%, though in relation to server products Microsoft notes the impact of “the transition of customers to Cloud Services.”

The big winner is cloud services. Microsoft says:

  • Office 365 revenue grew more than 100%
  • Microsoft Azure revenue grew more than 150%
  • Cloud services revenue grew $367 million or 101%

These sums are a little puzzling. If growth was 101% overall, and Office 365 grew by more than 100%, where is the Microsoft Azure growth hiding, or was it from a very small base?
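For illustration only (the split below is invented, since Microsoft does not break these figures out): a $367 million increase at 101% implies a base of roughly $363 million. If, say, $300 million of that base were Office 365 doubling to $600 million, and $20 million were Azure growing 150% to $50 million, the remaining $43 million or so of other cloud revenue (Dynamics CRM Online and the like) would only need to grow about 85% for the blended figure to land at 101%. In other words, 150% growth can hide comfortably behind a small base.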

Note that consumer Office 365 is accounted for separately, it seems, as part of “Devices and Consumer other”. There are now 4.4 million Office 365 Home subscribers, growing by around 1 million in this quarter.

Questioned in the earnings call, CEO Satya Nadella talked about mobile-first and cloud-first, adding that the strategy goes across “devices some ours, some not ours.” He also mentioned how the advent of Universal Apps means that “we have a very different way to think about [Windows].” That is partly wishful thinking of course: the Universal App framework is still in preview and targets a still unreleased update to Windows Phone (8.1). Still, that is the strategy, even if it means giving Windows away on smaller devices – we have “monetization on the back end,” said Nadella, presumably thinking of Office 365 subscriptions and the like.

On the business and enterprise side (where Microsoft can be more confident) Nadella also spoke of the synergy between Office 365 and Azure; every Office 365 sign-up enables Azure as a business cloud platform, thanks to Azure Active Directory and other integration points.

Microsoft’s segments summarised

Devices and Consumer Licensing: non-volume and non-subscription licensing of Windows, Office, Windows Phone, and “related patent licensing; and certain other patent licensing revenue” – all those Android royalties?

Devices and Consumer Hardware: the Xbox 360, Xbox Live subscriptions, Surface, and Microsoft PC accessories.

Devices and Consumer Other: Resale, including Windows Store, Xbox Live transactions (other than subscriptions), Windows Phone Marketplace; search advertising; display advertising; Office 365 Home Premium subscriptions; Microsoft Studios (games), retail stores.

Commercial Licensing: server products, including Windows Server, Microsoft SQL Server, Visual Studio, System Center, and Windows Embedded; volume licensing of Windows, Office, Exchange, SharePoint, and Lync; Microsoft Dynamics business solutions, excluding Dynamics CRM Online; Skype.

Commercial Other: Enterprise Services, including support and consulting; Office 365 (excluding Office 365 Home Premium), other Microsoft Office online offerings, and Dynamics CRM Online; Windows Azure.