Category Archives: microsoft

Running ASP.NET 5.0 on Nano Server preview

I have been trying out Microsoft’s Nano Server Preview and wrote up initial experiences for the Register. One of the things I mentioned is that I could not get an ASP.NET app successfully deployed. After a bit more effort, and help from a member of the team, I am glad to say that I have been successful.


What was the problem? First, a bit of background. Nano Server does not run the .NET Framework, presumably because it has too many dependencies on pieces of Windows which Microsoft wanted to omit from this cut-down deployment. Nano Server does support .NET Core, also known as CoreCLR, the open source fork of the .NET Framework. This enables it to run PowerShell, although with a limited range of cmdlets, and the two main ways I interact with Nano Server are PowerShell remoting, and Windows file sharing for copying files across.
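As a sketch, connecting from the development machine looks something like this (the server address is hypothetical, and in a workgroup scenario you must first add the server to your trusted hosts):

```powershell
# On the development machine: trust the Nano Server (workgroup scenario)
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "192.168.1.50" -Force

# Open a remote PowerShell session; you are prompted for the password
Enter-PSSession -ComputerName 192.168.1.50 -Credential "192.168.1.50\Administrator"

# Separately, map the administrative share for copying files across
net use Z: \\192.168.1.50\c$ /user:Administrator
```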

On your development machine, you need several pieces in order to code for ASP.NET 5.0. Just installing Visual Studio 2015 RC will do, except that there is currently an incompatibility between the version of the ASP.NET 5.0 .NET Core runtime shipped with Visual Studio, and what works on Nano Server. This meant that my first effort, which was to build an empty ASP.NET 5.0 template app and publish it to the file system, failed on Nano Server with a NativeCommandError.

This meant I had to dig a bit more deeply into ASP.NET 5.0 running on .NET Core. Note that when you deploy one of these apps, you can include all the dependencies in the app directory. In other words, apps are self-hosting. The binary that enables this bit of magic is called DNX (.NET Execution Environment); it was formerly known as the K runtime.

Developers need to install the DNX SDK on their machines (Windows, Mac or Linux). There is currently a getting started guide here, though note that many of the topics in this promising documentation are as yet unwritten.


However, after installation you will be able to use several handy commands:

dnvm This is the .NET Version manager. You can have several versions of the DNX runtime installed and this utility lets you list them, set aliases to save typing full paths, and manage defaults.
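A typical session looks like this; the version number and flags reflect the beta tooling as I recall it, so treat them as illustrative:

```powershell
dnvm list                                        # show installed DNX runtimes and aliases
dnvm use 1.0.0-beta5 -r coreclr -arch x64 -p     # select 64-bit CoreCLR; -p persists the choice
dnvm alias default 1.0.0-beta5                   # save typing the full version next time
```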


dnu This is the .NET Development Utility (formerly kpm) that builds and publishes .NET Core projects. The two commands I found myself using regularly are dnu restore, which downloads NuGet (.NET repository) packages, and dnu publish, which packages an app for deployment. Once published, you will find .cmd files in the output which you use to start the app.

dnx This is the binary which you call to run an app. On the development machine, you can use dnx . run to run the console app in the current directory and dnx . web to run the web app in the current directory.
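Putting dnu and dnx together, the inner loop on the development machine looks like this, run from the directory containing project.json:

```powershell
dnu restore    # download the NuGet packages listed in project.json
dnx . web      # run the "web" command defined in project.json
```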

Now, back to my deployment issues. The Visual Studio templates are all hooked to DNX beta 4, and I was informed that I needed DNX beta 5 for Nano Server. I played around with trying to get Visual Studio to target the updated DNX but ran into problems so decided to ignore Visual Studio and do everything from the command line. This should mean that it would all work on Mac and Linux as well.

I had a bit of trouble persuading DNX to update itself to the latest unstable builds; the main issue I recall is targeting the correct repository. Your NuGet sources must (currently) include https://www.myget.org/F/aspnetvnext/api/v2.
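One way to do this is with a NuGet.config next to your solution; a minimal sketch:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="AspNetVNext" value="https://www.myget.org/F/aspnetvnext/api/v2" />
    <add key="NuGet.org" value="https://www.nuget.org/api/v2/" />
  </packageSources>
</configuration>
```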

Since I was not using Visual Studio, I based my samples on these: Hello World console, MVC and web apps that you can use to test that everything works. My technique was to test on the development machine using dnx . web, then to use dnu publish and copy the output to Nano Server, where I could run ./web.cmd in a remote PowerShell session.
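Sketched as commands, with a hypothetical server address and paths:

```powershell
# On the development machine: package the app with its dependencies
dnu publish --runtime "c:\users\[USERNAME]\.dnx\runtime\dnx-coreclr-win-x64.1.0.0-beta5-11701"

# Copy the publish output to the Nano Server over a file share
Copy-Item -Recurse .\bin\output \\192.168.1.50\c$\apps\web

# Then, in a remote PowerShell session on the Nano Server, start the app
cd c:\apps\web
./web.cmd
```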

Note that I found it necessary to specify the CoreClr 64-bit runtime in order to get dnu to publish the correct files. I tried to make this the default but for some reason* it reverted itself to x86:

dnu publish --runtime "c:\users\[USERNAME]\.dnx\runtime\dnx-coreclr-win-x64.1.0.0-beta5-11701"

Of course the exact runtime version to use will change soon.

If you run this command and look in the /bin/output folder you will find web.cmd, and running this should start the app. The port on which the app listens is set in project.json in the top level directory of the project source. I set this to 5001, opened that port in the Windows Firewall on the Nano Server, and got a started message on the command line. However I still could not browse to the app running on Nano Server; I got a 400 error. Even on the development machine it did not work; the browser just timed out.

It turned out that there were several issues here. On the development machine, which is running Windows 10 build 10074, I discovered to my annoyance that the web app worked fine with Internet Explorer, but not in Project Spartan, sorry Edge. I do not know why.

Support also gave me some tips to get this working on Nano Server. In order for the app to work across the network, you have to edit project.json so that localhost is replaced either with the IP number of the server, or with a *. I was also advised to add dnx.exe to the allowed apps in the firewall, but I do not think this is necessary if the port is open (it is a nuisance, since the location of dnx.exe changes for every app).
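A project.json sketch showing both the port and the wildcard binding, based on the beta-era schema (the exact hosting arguments changed between builds, so take this as illustrative):

```json
{
  "commands": {
    "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls http://*:5001"
  }
}
```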

Finally I was successful.

Final observations

It seems to me that ASP.NET vNext running on .NET Core has the characteristic of many open source projects: a few dedicated people who have little time for documentation and are so close to the project that their public communications assume a fair amount of pre-knowledge. The site I referenced above does have helpful documentation, though, for the few topics that are complete. Some other posts I found helpful are this series by Steve Perkins, and the troubleshooting suggestions here, especially David Fowler’s post.

I like the .NET Core initiative overall, since I like C# and ASP.NET MVC and now it is becoming a true cross-platform framework. That said, the code does seem to be in rapid flux and I doubt it will really be ready when Visual Studio 2015 ships. The danger I suppose is that developers will try it in the first release, find lots of problems, and never go back.

I also like the idea of running apps in Nano Server, a low-maintenance environment where you can get the isolation of a dedicated server for your app at low cost in terms of resources.

No doubt though, the lack of pieces that you expect to find on Windows Server will be an issue and I am not sure that the mainstream Microsoft developer ecosystem will take to it. Aidan Finn is not convinced, for example:

Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I was to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft’s HCL testers nor the OEMs are capable of even the most basic test right now.

Finn’s point is that if your headless server is having networking issues it is hard to troubleshoot, since of course remote tools will not work reliably. That said, I have personally run Hyper-V Server (which is essentially Server Core with just the Hyper-V role) with great success for several years; I started keeping notes on how to troubleshoot from the command line and found solutions to common problems. If networking fails with Nano Server then yes, you have a problem, but there is always something you can do, even if it means mounting the Nano Server VHD or VHDX on another VM. Windows Server admins have become accustomed to a local GUI though and adjusting even to Server Core has not been easy.

*The reason was that I did not use the -p argument with dnvm use, which would have made the setting persistent.

Windows 10: Moving Windows into the mobile and app era take 2, and why Windows 8 is not so bad

I attended Microsoft’s Build conference last week, where there was a big focus on Windows 10. I spent some time with the latest build, 10074, which came out last week, as well as attending various sessions on developing for the upcoming OS. I also spoke to Corporate VP Joe Belfiore, and I recommend this interview on the Reg, which says a lot about Microsoft’s approach. Note that the company is determined to appeal to Windows 7 users, who largely rejected Windows 8; Windows 10 is meant to feel more familiar to them.


That said, Microsoft is not backtracking on the core new feature in Windows 8, which is its new app platform, called the Windows Runtime (WinRT). In fact, in its latest guise as the Universal App Platform (UAP) it is more than ever at the forefront of Microsoft’s marketing effort.

Why is this? In essence, Microsoft needs a strong app ecosystem for Windows if it is to escape legacy status. That means apps which are store-delivered, run in a secure sandbox, install and uninstall easily, update automatically, and work on tablets as well as with keyboard and mouse. Interaction and data transfer between apps is managed through OS-controlled channels, called Contracts. Another advantage is that you do not need setup CDs or downloads when you get a new PC; your apps flow down automatically. When you think of it like this, the advantages are huge; but nevertheless the Windows 8 app platform largely failed. It is easy to enumerate some of the reasons:

  • Most users live in the Windows desktop and rarely transition to the “Metro” or “Modern” environment
  • Lack of Windows 7 compatibility makes the Windows 8 app platform unattractive to developers who want to target the majority of Windows users
  • Many users simply avoided upgrading to Windows 8, especially in business environments where they have more choice, reducing the size of the Windows 8 app market
  • Microsoft made a number of mistakes in its Windows 8 launch, including an uncompromising approach that put off new users (who felt, rightly, that “Metro” was forced upon them), lack of compelling first-party apps, and encouraging a flood of abysmal apps into the Store by prioritising quantity over quality

History will judge Windows 8 harshly, but I have some admiration for what Microsoft achieved. It is in my experience the most stable and best performing version of Windows, and despite what detractors tell you it works fine with keyboard and mouse. You have to learn a new way of doing a few things, such as finding apps in the Start screen, following which it works well.

The designers of Windows 8 took the view that the desktop and app environments should be separate. This has the advantage that apps appear in the environment they are designed for. Modern apps open up full-screen, desktop apps in a window (unless they are games that are designed to run full-screen). The disadvantage is that integration between the two environments is poor, and you lose one of the key benefits of Windows (from which it got its name), the ability to run multiple apps in resizable and overlapping windows.

Windows 10 takes the opposite approach. Modern apps run in a window just like desktop apps. The user might not realise that they are modern apps at all; they simply get the benefits of store delivery, isolation and so on, without having to think about it.


This sounds good, and following the failure of the first approach it is probably the right thing for Microsoft to do. However there are a couple of problems. One is the risk of what has been called the “uncanny valley” in an app context, where apps nearly but not quite work in the way you expect, leading to a feeling of unease or confusion. Modern apps look a little different from true desktop apps in Windows 10, and behave a little differently as well. Modern apps have a different lifecycle, for example: they can enter a suspended state when they do not have the focus, or even be terminated by the OS if the memory is needed. A minimized desktop app keeps running, but a minimized modern app is suspended, and the developer has to take special steps if a task needs to keep running in the background.
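For developers targeting the modern app model, that lifecycle difference shows up as the Suspending event; a minimal sketch of saving state before the OS suspends the app (SaveStateAsync here is a hypothetical method):

```csharp
// In App.xaml.cs: register for suspension when the app starts
public App()
{
    this.InitializeComponent();
    this.Suspending += OnSuspending;
}

private async void OnSuspending(object sender, SuspendingEventArgs e)
{
    // A deferral keeps the app alive until the async save completes
    var deferral = e.SuspendingOperation.GetDeferral();
    await SaveStateAsync();   // hypothetical: persist whatever the user was doing
    deferral.Complete();
}
```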

Another issue with Windows 10 is that its attempt to recreate a Windows 8-like tablet experience is currently rather odd. Windows 10’s “Tablet Mode” makes all apps run full screen, even desktop apps for which this is wholly inappropriate. Here is the Snipping Tool in Tablet Mode:

image

and here is the desktop Remote Desktop Connection:

image

Personally I find that Tablet Mode trips me up and adds little value, even when I am using a tablet without a keyboard, so I tend not to use it at all. I would prefer the Windows 8 behaviour, where Modern apps run full screen (or in a split view), but desktop apps open in a window on the desktop. Still, it illustrates the point, which is that integrating the modern and desktop environments has a downside; it is just a different set of compromises than those made for Windows 8.

Now, I do think that Microsoft is putting a more wholehearted effort into its UAP than it did for Windows 8 modern apps (even though both run on WinRT). This time around, the Store is better, the first-party apps are better (not least because we have Office), and the merging of the Windows Phone and Xbox platforms with the PC platform gives developers more incentive to come up with apps. Windows 10 is also a free upgrade for many users which must help with adoption. Even with all this, though, Microsoft has an uphill task creating a strong modern app ecosystem for Windows, and a lot of developers will take a wait and see approach.

The other huge question is how well users will take to Windows 10. Any OS upgrade has a problem to contend with, which is that users dislike change – perhaps especially what has become the Windows demographic, with business users who are by nature cautious, and many conservative consumer users. Users are contradictory of course; they dislike change, but they like things to be made better. It will take more than a Cortana demo to persuade a contented Windows 7 user that Windows 10 is something for them.

Note that I say that in full knowledge of how much potential the modern app model has to improve the Windows experience – see my third paragraph above.

Microsoft told me in San Francisco that things including Tablet Mode are still being worked on so a little time remains. It was clear at Build that there is a lot of energy and determination behind Windows 10 and the UAP so there is still room for optimism, even though it is also obvious that Windows 10 has to improve substantially on the current preview to have a chance of meeting the company’s goals.

HoloLens: a developer hands-on

I attended the “Holographic Academy” during Microsoft’s Build conference in San Francisco. It was aimed at developers, and we got a hands-on experience of coding a simple HoloLens app and viewing the results. We were forbidden from taking pictures so you will have to make do with my words; this also means I do not have to show myself wearing a bulky headset and staring at things you cannot see.


First, a word about HoloLens itself. The gadget is a headset that augments the real world with a 3D “projected” image. It is not really a hologram, otherwise everyone would see it, but it is a virtual hologram created by combining what you see with digital images.

The effect is uncanny, since the image you see appears to stay in one place. You can walk around it, seeing it from different angles, close up or far away, just as you could with a real object.

That said, there were a couple of issues with the experience. One is that if you went too close to a projected image, it disappeared. From memory, the minimum distance was about 18 inches. Second, the viewport where you see the augmented reality was fairly small and you could easily see around it. This is detrimental to the illusion, and sometimes made it a struggle to see as much of your hologram as you might want.

I asked about both issues and got the same response, essentially “no comment”. This is prototype hardware, so anything could change. However, according to another journalist who attended a hands-on demo in January, the viewport has since gotten smaller, suggesting that Microsoft is making compromises in its effort to turn the technology into a commercially viable product.

Another odd thing about the demo was that after every step, we were encouraged to whoop and cheer. There was a Microsoft “mentor” for every pair of journalists, and it seemed to me that the mentors were doing most of the whooping and cheering. This is obviously a big investment for the company, and I am guessing that this kind of forced enthusiasm is an effort to ensure a positive impression.

Lest you think I am too sceptical, let me add that the technology is genuinely amazing, with obvious potential both for gaming and business use.

The developer story

The development process involves Unity, Visual Studio, and of course the HoloLens device itself. The workflow is like this. You create an interactive 3D scene in Unity and build it, whereupon it becomes a Visual Studio project. You open the project in Visual Studio, and deploy it to HoloLens (connected over USB), just as you would to a smartphone. Once deployed, you disconnect the HoloLens and wear it in order to experience the scene you have created. Unity supports scripting in C#, running on Mono, which makes the development platform easy and familiar for Windows developers.

Our first “Holo World” project displayed a hologram at a fixed position determined by where you are when the app first runs. Next, we added the ability to move the hologram, selecting it with a wagging finger gesture, shifting our gaze to some other spot, and placing it with another wagging finger gesture. Note that for this to work, HoloLens needs to map the real world, and we tried turning on wire framing so you could see the triangles which show where HoloLens is detecting objects.

We also added a selection cursor, an image that looks like a red bagel (you can design your own cursor and import it into Unity). Other embellishments were the ability to select a sphere and make it fall to the floor and roll around, voice control to drop a sphere and then reset it back to the starting point, and then “spatial audio” that appears to emit from the hologram.

All of this was accomplished with a few lines of C# imported as scripts into Unity. The development was all guided so we did not have to think for ourselves, though I did add a custom voice command so I could say “abracadabra” instead of “reset scene”; this worked perfectly first time.
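For illustration, the custom voice command can be wired up with Unity’s KeywordRecognizer in a few lines of C# (ResetScene here is a hypothetical method that puts the scene back to its starting state):

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    void Start()
    {
        // Listen for a custom phrase alongside the stock command
        recognizer = new KeywordRecognizer(new[] { "abracadabra", "reset scene" });
        recognizer.OnPhraseRecognized += args =>
        {
            // Either phrase triggers the same action
            ResetScene();
        };
        recognizer.Start();
    }

    void ResetScene()
    {
        // hypothetical: move the sphere back to its starting position
    }
}
```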

For the last experiment, we added a virtual underworld. When the sphere dropped, it exploded making a virtual pit in the floor, through which you could see a virtual world with red birds flapping around. It was also possible to enter this world, by positioning the hologram above your head and dropping a sphere from there.

HoloLens has three core inputs: gaze (where you are looking), gesture (like the finger wag) and voice. Of these, gaze and voice worked really well in our hands-on, but gesture was more difficult and sometimes took several tries to get right.

At the end of the session, I had no doubt about the value of the technology. The development process looks easily accessible to developers who have the right 3D design skills, and Unity seems ideally suited for the project.

The main doubts are about how close HoloLens is to being a viable commercial product, at least in the mass market. The headset is bulky, the viewport too small, and there were other little issues, such as lag between HoloLens’s detection of physical objects and their actual position when they were moving, as with a person walking around.

Watch this space though; it is going to be most interesting.

Compile Android Java, iOS Objective C apps for Windows 10 with Visual Studio: a game changer?

Microsoft has announced the ability to compile apps written in Java or C++ for Android, or in Objective C for iOS, as Windows 10 apps, at its Build developer conference here in San Francisco.

Objective C code in Visual Studio

The Android compatibility had been widely rumoured, but the Objective C support not so much.

This is big news, but oddly the Build attendees were more excited by the HoloLens section of the keynote (3D augmented reality) than by the iOS/Android compatibility. That is partly because this is the wrong crowd; these are the Windows faithful, who would rather code in C#.

Another factor is that those who want Microsoft’s platform to succeed will have mixed feelings. Is the company now removing any incentive to code dedicated Windows apps that will make the most of the platform?

Details of the new capabilities are scant, though we will no doubt learn more as the event progresses. A few observations though.

Microsoft is trying to fix the “app gap”: the fact that both the Windows Store and the Windows Phone Store (which are merging) have a poor selection of apps compared to iOS or Android. Worse, many developers simply ignore the platforms as too small to bother with. Lack of apps makes the platforms less attractive, so the situation does not improve.

The goal then is to make it easier for developers to port their code, and also perhaps to raise the quality of Windows mobile apps by enabling code sharing with the more important platforms.

There are apparently ways to add Windows-specific features if you want your ported app to work properly with the platform.

Will it work? The Amazon Fire and the Blackberry 10 precedents are not encouraging. Both platforms make it easy to port Android apps (Amazon Fire is actually a version of Android), yet the apps available in the respective app stores are still far short of what you can get for Google Android.

The reasons are various, but I would guess part of the problem is that ease of porting code does not make an unimportant platform important. Another factor is that supporting an additional platform never comes for free; there is admin and support to consider.

The strategy could help though, if Microsoft through other means makes the platform an attractive target. The primary way to do this of course is to have lots of users. VP Terry Myerson told us that Microsoft is aiming for 1 billion devices running Windows 10 within 2-3 years. If it gets there, the platform will form a strong app market and that in turn will attract developers, some of whom will be glad to be able to port their existing code.

The announcement though is not transformative on its own. Microsoft still has to drive lots of Windows 10 upgrades and sell more phones.

Visual Studio Code: an official Microsoft IDE for Mac, Windows, Linux

Microsoft has announced Visual Studio Code, a cross-platform, code-oriented IDE for Windows, Mac and Linux, at its Build developer conference here in San Francisco.


Visual Studio Code is partly based on the open source OmniSharp project. It supports IntelliSense code completion, Git source code management, and debugging with breakpoints and a call stack view.

I have been in San Francisco for the last few days and the dominance of the Mac is obvious. Sitting in a cafe in the Mission district I could see 10 Macs and no PCs other than my own Surface Pro. Some folk were coding too.

It follows that if Microsoft wants to make a go of cross-platform C#, and development of ASP.NET MVC web applications beyond the Windows developer community, then tooling for the Mac is important.


Visual Studio Code is free and is available for download here.

The IDE will lack the rich features and templates of the full Visual Studio, but if it is fast and clean, some Visual Studio developers may be keen to give it a try as well.

Microsoft financials Jan-March 2015

Microsoft has released figures for its third quarter, ending March 31st 2015. Here is my simple summary of the figures showing the segment breakdown:

Quarter ending March 31st 2015 vs quarter ending March 31st 2014, $millions

Segment                          Revenue   Change   Gross margin   Change
Devices and Consumer Licensing     3,476   -1,121          3,210     -807
Computing and Gaming Hardware      1,800      -72            414     +156
Phone Hardware                     1,397      N/A             -4      N/A
Devices and Consumer Other         2,280     +456            566     +175
Commercial Licensing              10,036     -299          9,975     -157
Commercial Other                   2,760     +858          1,144     +669

The figures form a familiar pattern: Windows and shrink-wrap (non-subscription) Office are down, reflecting weak PC sales and the advent of free Windows at the low end, but subscription sales are up and cloud is booming. See the foot of this post for an explanation of Microsoft’s confusing segment breakdown.

Microsoft says that Surface Pro 3 is doing well (revenue of $713 million) and this is reflected in the Devices figures. Commercial cloud (Office 365, Azure and Dynamics) is up 106% year on year.

Cloud aside, it is impressive that server products reported a 12% year on year increase in revenue. This is the kind of business that you would expect to be hit by cloud migration, though I am not sure how Microsoft accounts for things like SQL Server licenses deployed on Azure.

Xbox One is disappointing, bearing in mind the success of the Xbox 360. Microsoft managed to lose out to Sony’s PlayStation 4 with its botched launch and market share will be hard to claw back.

Microsoft reports 8.6 million Lumias sold, the majority being low-end devices. Not too bad for a platform many dismiss, but still treading water and miles behind iOS and Android.

The company remains a huge money-making machine though, and Office 365 is doing well. A few years ago it looked as if cloud and mobile could destroy Microsoft, but so far that is not the case at all, though its business is changing.

Microsoft’s segments summarised

Devices and Consumer Licensing: non-volume and non-subscription licensing of Windows, Office, Windows Phone, and “related patent licensing; and certain other patent licensing revenue” – all those Android royalties?

Computing and Gaming Hardware: the Xbox One and 360, Xbox Live subscriptions, Surface, and Microsoft PC accessories.

Devices and Consumer Other: Resale, including Windows Store, Xbox Live transactions (other than subscriptions), Windows Phone Marketplace; search advertising; display advertising; Office 365 Home Premium subscriptions; Microsoft Studios (games), retail stores.

Commercial Licensing: server products, including Windows Server, Microsoft SQL Server, Visual Studio, System Center, and Windows Embedded; volume licensing of Windows, Office, Exchange, SharePoint, and Lync; Microsoft Dynamics business solutions, excluding Dynamics CRM Online; Skype.

Commercial Other: Enterprise Services, including support and consulting; Office 365 (excluding Office 365 Home Premium), other Microsoft Office online offerings, and Dynamics CRM Online; Windows Azure.

Inside Microsoft: Ex design lead gives perspective on Metro, Office, iOS and Android battles

Here is a must-read for Microsoft watchers. Two days ago a former design lead on the Office on Windows Phone team turned up on Reddit and said: “I designed the new version of Office for Windows Phone. Ask me anything.”


The overall theme is that Microsoft did not get the design of Windows Phone quite right and is changing it; that Windows 8 was even worse; and that Windows 10 just might begin to pull it all together at last, though the company is also consciously moving away from a Windows-centric view. The Windows, Windows Phone and Office teams are now working together for the first time, we are told:

Windows didn’t believe in working well with others, certainly not that dumb upstart Windows Phone team.

Office believed it was the greatest software on earth, and didn’t get along with Windows.

Windows Phone was pretty proud of itself despite its middling marketshare. Too proud.

So now when these three teams got together to do something for the good of Microsoft, and the good of customers, there was a ton of ego in the way. Windows believed in the Windows way. Windows Phone believed their way. Office was like "fuck all y’all, we’re Office."

The new situation at the company is way better. People actually do care about working together in a way I hadn’t historically seen in my short time there. (or read about in many MS history books)

So I wouldn’t say Windows Phone caused the shift. You know what did? Sinofsky leaving, Windows 8 being a failure, Windows Phone failing to gain significant traction, and then Ballmer leaving.

They basically had to start working together. And it’s cool to see.

Here are a few more things that caught my eye. There are long discussions about the “hamburger” menu, the three-line icon that appears at top left of many new apps, where it is hard to reach if you are using the phone with one hand:

Don’t get me wrong, this is clearly a tradeoff. Frequently used things have to be reachable, even one-handed. But hamburgers are not frequently used, and one-handed use is not ironclad. Combine those two factors together and you see why the industry has settled on this standard. It wasn’t random.

From a developer perspective, the key insight here is that hamburgers are not frequently used. In other words, do not design your app so that users will have to reach constantly for the hamburger menu. Reserve it for stuff that is only needed occasionally.

Why is Microsoft appearing to prioritise iOS and Android over Windows, for example with Office?

When Ballmer saw the iPad version of Office, he reportedly said something like "you’re killing me." It was so fucking good. Way better than anything on any other platform. It leveraged a bunch of iOS stuff in a really good way, but it was still "unmistakably Office," as they say.

Ballmer knew it was good. And he knew the company’s other efforts were years and years out. And he iced it. Because his mentality, and what I’d call dogma, was that Windows had to be first. At all costs.

Good riddance. It was an outdated philosophy.

… The way Microsoft wins the long term war is to remind people where they’re strong. And no, it’s not through withholding Office on iOS. Not anymore. The ship sailed on Ballmer’s watch

I would love to know the date when Ballmer “iced” Office for iPad.

What was wrong with the design of Windows Phone?

When Steve Jobs came back to Apple, he said he was going to save the company by reminding people of Apple’s sex appeal. He described colored plastics and technology as fashion. And the board thought "uh-oh, this guy is going to drive us into a ditch."

But from "Bondi Blue iMacs" and "OS X has an interface you just want to lick" you’ll notice their design went more and more subdued over the last 15-20 years. It’s because you need to shock people at first, then you get back to being more practical.

Metro had to shock people. It had to look like its own thing. And it did that really well. Pivots, panos, big text, black everywhere, it looked like art. And more than that it looked different. Something to witness. Steve Jobs even gave kudos to the Windows Phone design team! He said something like "I mean, it’s still clearly a v1, but it’s really beautiful." And he was right.

So what would I change?

Well. The interaction models, honestly. The pivots and the panoramas are a nightmare in day to day use. They’re as distinct as a Flower Power iMac, but it painted the interaction models into a corner.

In another post, there is a discussion of the difficulty with the back button. “when back is good, it’s good. But when it’s bad (from a user experience standpoint) it’s really bad.”

Here is another insight:

The stark look of Windows Phone seemed to turn off more people than fell in love with it. I know here in this forum we’re all fans but in the mainstream marketing was only one problem. Apps was another. But the biggest one was lack of relevance. People didn’t understand why they should care. A lot of people said it looked like a nice phone, but it wasn’t for them.

Despite the criticisms, the ex-Microsoft designer (who now works for Twitter) is optimistic, saying “I do have a lot of hope for Universal apps. It’s not a magic bullet, but given enough time for the system to mature, and the business support, and new initiatives, I see rosy days ahead.”

Microsoft may be well positioned for “the next big shift”:

1) Look beyond just Windows. Just make amazing software. Get back some relevance that was lost. 2) Of course keep competitive with hardware, and keep improving WP. 3) Then, a few years out, when the market experiences another big shift (it’s not a matter of if but when) I suspect MS’s strength as a multi-OS developer + cloud leader will help Windows regain a ton of relevance

Fascinating stuff, though note the disclaimer:

I have no idea what I’m talking about. I’m one designer and I don’t work at MS anymore.

AWS Summit London: cloud growth, understanding Lambda, Machine Learning

I attended the Amazon Web Services (AWS) London Summit. Not much news there, since the big announcements were the week before in San Francisco, but a chance to drill into some of the AWS services and keep up to date with the platform.

image

The keynote by CTO Werner Vogels was a bit too much relentless promotion for my taste, but I am interested in the idea he put forward that cloud computing will gradually take over from on-premises and that more and more organisations will go “all in” on Amazon’s cloud. He cited some examples (Netflix, Intuit, Tibco, Splunk), though I am not quite clear whether these companies have 100% of their internal IT systems on AWS, or merely run the entirety of their services (their product) on AWS. The general argument is compelling, especially when you consider the number of services now on offer from AWS and the difficulty of replicating them on-premises (I wrote this up briefly on the Reg). I don’t swallow it wholesale though; you have to look at the costs carefully, and even more than security, the loss of control that comes with basing your IT infrastructure on a public cloud provider is a negative factor.

As it happens, the ticket systems for my train into London were down that morning, which meant that purchasers of advance tickets online could not collect their tickets.

image

The consequences of this outage were not too serious, in that the trains still ran, but of course there were plenty of people travelling without tickets (I was one of them) and ticket checking was much reduced. I am not suggesting that this service runs on AWS (I have no idea) but it did get me thinking about the impact on business when applications fail; and that led me to the question: what are the long-term implications of our IT systems and even our economy becoming increasingly dependent on a (very) small number of companies for their health? It seems to me that the risks are difficult to assess, no matter how much respect we have for the AWS engineers.

I enjoyed the technical sessions more than the keynote. I attended Dean Bryen’s session on AWS Lambda, “Event-driven code in the cloud”, where I discovered that the scope of Lambda is greater than I had previously realised. Lambda lets you write code that runs in response to events, but what is also interesting is that it is a platform as a service offering, where you simply supply the code and AWS runs it for you:

AWS Lambda runs your custom code on a high-availability compute infrastructure and administers all of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code, and security patches.

This is a different model from running applications in EC2 (Elastic Compute Cloud) VMs or even in Docker containers, which are also VM-based. Of course we know that Lambda ultimately runs in VMs as well, but those details are abstracted away and scaling is automatic, which arguably is a better model for cloud computing. Azure Cloud Services and Heroku apps are somewhat like this, but neither is very pure; with Azure Cloud Services you still have to worry about how many VMs you are using, and with Heroku you have to think about dynos (app containers). Google App Engine is another autoscaling example, though you are charged by application instance count so you still have to think in those terms. With Lambda you are charged based on the number of requests, the duration of your code, and the amount of memory allocated, making it perhaps the best abstracted of all these PaaS examples.
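To make that billing model concrete, here is a rough cost sketch in Python. The per-request and per-GB-second rates are my approximations of Amazon’s published Lambda pricing at the time of writing, not authoritative figures, and the free tier is ignored:

```python
# Rough sketch of Lambda-style billing: you pay per request plus per
# GB-second of execution, rather than per VM or per instance.
# Rates below are illustrative approximations, not official figures.

PRICE_PER_MILLION_REQUESTS = 0.20   # dollars
PRICE_PER_GB_SECOND = 0.00001667    # dollars

def monthly_cost(requests, avg_duration_ms, memory_mb):
    """Estimate a month's bill for a single Lambda function."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# Three million requests a month, 200ms each, 512MB allocated:
print(round(monthly_cost(3_000_000, 200, 512), 2))
```

The point of the sketch is that an idle function costs nothing: with zero requests the bill is zero, which is quite different from paying for a VM that sits waiting.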

But Lambda is just for event-handling, right? Not quite; it now supports synchronous as well as asynchronous event handling, and you could create large applications on the service if you chose. It is well suited to services for mobile applications, for example. Java support is on the way, as an alternative to the existing Node.js support. I will be interested to see how this evolves.

I also went along to Carlos Conde’s session on Amazon Machine Learning (one instance in which AWS has trailed Microsoft Azure, which already has a machine learning service). Machine learning is not that easy to explain in simple terms, but I thought Conde did a great job. He showed us a spreadsheet which was a simple database of contacts with fields for age, income, location, job and so on. There was also a Boolean field for whether they had purchased a certain financial product after it had been offered to them. The idea was to feed this spreadsheet to the machine learning service, and then to upload a similar table but of different contacts and without the last field. The job of the service was to predict whether or not each contact listed would purchase the product. The service returned results with this field populated along with a confidence indicator. A simple example with obvious practical benefit, presuming of course that the prediction has reasonable accuracy.
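Amazon’s service is a black box, but the workflow Conde described – train on labelled contacts, then predict the missing field with a confidence indicator – can be illustrated with a deliberately tiny nearest-neighbour classifier in Python. This is my own sketch of the general idea, and has nothing to do with Amazon’s actual algorithms:

```python
# Toy illustration of the train-then-predict workflow described above:
# labelled contacts go in, predictions with a confidence score come out.
# A simple k-nearest-neighbour sketch, not Amazon's algorithm.

def predict(training, candidate, k=3):
    """Return (purchased?, confidence) for one unlabelled contact."""
    # Distance over the numeric fields (age, income in thousands).
    def distance(row):
        return ((row["age"] - candidate["age"]) ** 2 +
                (row["income"] - candidate["income"]) ** 2) ** 0.5

    neighbours = sorted(training, key=distance)[:k]
    votes = sum(1 for row in neighbours if row["purchased"])
    purchased = votes > k / 2
    confidence = max(votes, k - votes) / k   # how unanimous the vote was
    return purchased, confidence

# The labelled table: did each contact buy the financial product?
training = [
    {"age": 25, "income": 20, "purchased": False},
    {"age": 52, "income": 80, "purchased": True},
    {"age": 48, "income": 75, "purchased": True},
    {"age": 30, "income": 28, "purchased": False},
    {"age": 60, "income": 90, "purchased": True},
]

# A new contact with the purchase field missing:
print(predict(training, {"age": 55, "income": 85}))  # → (True, 1.0)
```

A real service handles categorical fields, far larger tables and much better models, but the shape of the problem – labelled rows in, predictions plus confidence out – is the same.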

Quick thoughts on Surface 3 from a long-term Surface user

I’ve been using a Surface as my usual travel PC for a while now – mostly Surface Pro (the first iteration) but also Surface RT and Surface 2. Microsoft has announced Surface 3 – is that a good buy?

image

Note: this is not a review of Surface 3. I intend to review it but have yet to get my hands on one.

First, a quick note on how I have got on with Surface to date. I love the compact size of the devices and the fact that I can do all my work on them. I find full-size laptops unbearably bulky now – though slim ultrabooks or small netbooks still have some appeal.

The main annoyances with my Surface Pro are the small SSD size (I have the 128GB model) and a few technical difficulties, mainly that the keyboard cover (currently the Power Cover) plays up from time to time. Sometimes it stops responding, or I get oddities like the mouse pointer going wild or keys that auto-repeat for no reason. Detaching and re-attaching the keyboard usually fixes it. Given that this is Microsoft hardware, drivers and OS, I regard these bugs as disappointing.

Surface power handling is not very good. The Surface is meant to be running all the time but sleeps so that touching power turns it on or off almost instantly. That’s the idea, but sometimes it fails to sleep and I discover that it has been heating up my bag and that the battery is nearly flat. To overcome this, and to save battery, I often shut it right down or use hibernate. Hibernate is a good option – fairly quick resume, no battery usage – except that about every third resume it crashes. So I tend to do a full shutdown.

I find the power button just a little unpredictable. In other words, sometimes I press it and nothing happens. I have to try several times, or press and hold. It could be the contact or it could be something else – I don’t think it is the contact since often it works fine.

The power cover has stopped charging, after 10 months of use. It is under warranty so I plan to get it replaced, but again, disappointing considering the high cost ($199).

A few grumbles then, but I still like the device for its portability and capability. Surface Pro 2 seemed to be better than the first in every way. Surface Pro 3 I had for a week on loan; I liked it, and could see that the pen works really well, although in general pens are not for me; but the size is a bit too big for me and it felt more like an ultrabook than a tablet.

What about Surface 3 then? The trade-off here is that you get better value thanks to a smaller size (good) and lower performance (bad), with an Atom processor – Intel’s low power range aimed at mobile computing – instead of the more powerful Core range. Here are some key stats, Surface 3 vs Surface Pro 3:

|  | Surface 3 | Surface Pro 3 |
|---|---|---|
| Display | 10.8″ | 12″ |
| Weight (without cover) | 622g | 800g |
| Storage | 64GB or 128GB | 64GB to 512GB |
| Processor | Intel Atom x7 | Intel Core i3, i5 or i7 |
| RAM | 2GB or 4GB | 4GB or 8GB |
| Pen | Available separately | Included |
| Cameras | 8MP rear, 3.5MP front | 5.0MP rear, 5.0MP front |

What about battery life? Microsoft quotes Surface Pro 3 as “up to 9 hours of web browsing” and Surface 3 as “up to 10 hours of video playback”. That is a double win for Surface 3, since video playback is more demanding. Anandtech measured Surface Pro 3 as 7.6 hrs light use and 3.45 hrs heavy use; the Surface 3 should fare better.

How much do you save? A snag with the Surface is that you have to buy a keyboard cover to get the best out of it, and annoyingly the cover for the Surface 3 is different from those for Surface, Surface 2 and Surface Pro, so you can’t reuse your old one.

A quick look then at what I would be paying for the Surface 3 vs Surface Pro 3 in a configuration that makes sense for me. With Surface 3, I would max out the RAM and storage, because both are rather minimal, so the cost looks like this:

Surface 3 with 4GB RAM and 128GB storage: $599
Keyboard cover: $129.99
Total: $728.99

Surface Pro 3 with 8GB RAM, 256GB storage, Intel Core i5, pen: $1299
Keyboard cover: $129.99
Total: $1428.99

In other words, Surface 3 is around half the price.
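The totals can be sanity-checked in a couple of lines (taking the Type Cover at its $129.99 list price, which is what makes the totals end in .99):

```python
# Checking the comparison above: Surface 3 vs Surface Pro 3,
# each with its separately priced keyboard cover.
surface3 = 599.00 + 129.99        # 4GB/128GB model plus Type Cover
surface_pro3 = 1299.00 + 129.99   # Core i5/8GB/256GB model plus Type Cover

print(f"{surface3:.2f} vs {surface_pro3:.2f}")
print(f"Surface 3 costs {surface3 / surface_pro3:.0%} of the Pro 3 price")
```

Note that the comparison flatters the Pro slightly, since the Surface 3 figure does not include the optional pen.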

Will I buy a Surface 3? It does look tempting. It is a bit less powerful than my current Surface Pro and perhaps not too good with Visual Studio, but fine for Office and most general-purpose applications. Battery life looks good, but the 128GB storage limitation is annoying; you can mitigate this with an SD card, say another 128GB for around $100, but I would rather have a 256GB SSD to start with.

However, there is strong competition. An iPad Air, I have discovered, makes an excellent travel companion, especially now that Office is available, provided you have a good keyboard case such as one from Logitech; you could get an iPad Air 2 with 64GB storage and a keyboard for slightly less than a Surface 3.

The iPad comparison deserves some reflection. The iPad does have annoyances, things like lack of direct access to the file system and non-expandable storage (no USB). However I have never encountered foibles like power management not working, and as a tablet it is a better design (not just because there are abundant apps).

It is also worth noting that there is more choice in Windows tablets and convertibles than there was when Surface was first released. Some are poorly designed, but ranges like those from Asus and Lenovo are worth checking out. In a sense this is “job done” since one of the reasons for Microsoft doing Surface was to kick-start some innovation in Windows hardware.

I hope to get some hands-on with Surface 3 in the next few weeks and will of course report back.

Why Windows Server is going Nano: think automation, Cloud OS

Yesterday Microsoft announced Windows Nano Server, which is essentially an installation option that is even more stripped-down than Server Core. Server Core, introduced with Windows Server 2008, removed the GUI in order to make the OS lighter weight and more secure. It is particularly suitable for installations that do nothing more than run Hyper-V to host VMs. You want your Hyper-V host to be rock-solid and removing unnecessary clutter makes sense.

There was more to the strategy than that though, and it was at last week’s ChefConf in Santa Clara (attended by both Windows Server architect Jeffrey Snover and Azure CTO Mark Russinovich) that the pieces fell into place for me. Here are three key areas which Snover has worked on over the last 16 years or so (he joined Microsoft in 1999):

  • PowerShell, first announced as “Monad” in August 2002 and presented at the PDC conference in September 2003. Originally presented as a scripting platform, it is now described as an “automation engine”, though it is still pretty good for scripting.
  • Windows Server componentisation, that is, the ability to configure Windows Server by adding and removing components. Server Core was a sign of progress here, especially in the Server 2012 version where you can move seamlessly between Core and full Windows Server by adding or removing the various pieces. It is still not perfect, mainly because of dependencies that make you drag in more than you might really want when enabling a specific feature.
  • PowerShell Desired State Configuration, introduced in Server 2012 R2, which puts these together by letting you define the state of a server in a declarative configuration file and apply it to an OS instance.
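Desired State Configuration itself is a PowerShell feature, but the underlying idea – declare the end state and let an engine work out what to change – can be sketched in a few lines of Python. This is a toy model of the concept for illustration, not DSC:

```python
# Toy model of declarative configuration: the caller states what the
# server should look like; the engine computes and applies the delta.
# Illustrates the DSC idea only; real DSC is a PowerShell feature.

def apply_desired_state(actual, desired):
    """Mutate `actual` (a feature -> enabled map) to match `desired`,
    returning the list of changes made; unchanged items cost nothing."""
    changes = []
    for feature, enabled in desired.items():
        if actual.get(feature) != enabled:
            actual[feature] = enabled
            changes.append(("enable" if enabled else "disable", feature))
    return changes

server = {"IIS": False, "GUI": True}
desired = {"IIS": True, "GUI": False, "Hyper-V": True}

print(apply_desired_state(server, desired))
# A second run is a no-op: the server already matches the declaration.
print(apply_desired_state(server, desired))  # → []
```

The second, empty run is the point: a declarative configuration is idempotent, so you can apply it repeatedly (or to a drifted server) and only the differences are acted on.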

I am not sure how much of this strategy was in Snover’s mind when he came up with PowerShell, but today it looks far-sighted. The role of a server OS has changed since Windows first entered this market, with Windows NT in 1993. Today, when most server instances are virtual, the focus is on efficiency (making maximum use of the hardware) and agility (quick configuration and on-demand scaling). How is that achieved? Two things:

1. For efficiency, you want an OS that runs only what is necessary to run the applications it is hosting, and on the hypervisor side, the ability to load the right number of VMs to make maximum use of the hardware.

2. For agility, you want fully automated server deployment and configuration. We take this for granted in cloud platforms such as Amazon Web Services and Azure, in that you can run up a new server instance in a few minutes. However, there is still manual configuration on the server once launched. Azure web apps (formerly web sites) are better: you just upload your application. Better still, you can scale it by adding or removing instances with a script or through the web-based management portal. Web apps are limited though and for more complex applications you may need full access to the server. Greater ability to automate the server means that the web app experience can become the norm for a wider range of applications.

Nano Server is more efficient. Look at these stats (compared to full Server):

  • 93 percent lower VHD size
  • 92 percent fewer critical bulletins
  • 80 percent fewer reboots

Microsoft has removed not only the GUI, but also 32-bit support and MSI (I presume the Windows Installer services). Nano Server is designed to work well both sides of the hypervisor, either hosting Hyper-V or itself running in a VM.

Microsoft has also improved automation:

All management is performed remotely via WMI and PowerShell. We are also adding Windows Server Roles and Features using Features on Demand and DISM. We are improving remote manageability via PowerShell with Desired State Configuration as well as remote file transfer, remote script authoring and remote debugging.

Returning for a moment to ChefConf, the DevOps concept is that you define the configuration of your application infrastructure in code, as well as that for the application itself. Deployment can then be automated. Or you could use the container concept to build your application as a deployable package that has no dependencies other than a suitable host – this is where Microsoft’s other announcement from yesterday comes in, Hyper-V Containers, which provide a high level of isolation without quite being a full VM, or the already-announced Windows Server Containers, which are similar but a bit less isolated.

image

This is the right direction for Windows Server, though the detail to be revealed at the Build and Ignite conferences in a few weeks’ time will no doubt show limitations.

A bigger issue though is whether the Windows Server ecosystem is ready to adapt. I spoke to an attendee at ChefConf who told me his Windows servers were more troublesome than his Linux ones. Do you use Server Core, I asked? No, he said; we like to be able to log on to the GUI. It is hard to change the culture so that running a GUI on the server is no longer the norm. The same applies to third-party applications: what will the requirements be if you want to install on Nano Server (no MSI)? Even if Microsoft has this right, it will take a while for its users to catch up.