Category Archives: tech

Setting up PHP for development on Windows Subsystem for Linux in Windows 10

I have been working a little with PHP for the first time in a while, and soon found it annoying not to have the convenience of instant application testing and line-by-line debugging. I have set up a PHP development environment before, using XAMPP for Windows and Eclipse, but it was fiddly. I also prefer PHP on Linux, which is where my scripts will be running.

Since Windows 10 now has a Linux environment built-in, called Windows Subsystem for Linux (WSL), I decided to set this up to run Apache, PHP and MySQL and to try debugging my scripts there.

My PC is a recent installation and I had not yet installed WSL. To do so, you have to both download a Linux distribution from the Store (I chose Ubuntu) and enable WSL in Windows Features. Then restart, launch Ubuntu, set a username and password, and you are up and running.

Note: the Linux commands that follow should be run as root, using sudo.

Before doing anything else, I got Ubuntu up to date:

apt-get update

apt-get upgrade

Then I installed the LAMP suite:

apt-get install lamp-server^

(the final ^ is intentional; see the guide here).

To check that everything is working, I created the file phpinfo.php in /var/www/html with the following contents:

<?php phpinfo(); ?>

and restarted Apache:

/etc/init.d/apache2 restart

Note: if you have IIS running in Windows, or another web server, Apache will not be able to listen on port 80. Change the port in /etc/apache2/ports.conf and in /etc/apache2/sites-enabled/000-default.conf.
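For example, to move Apache to port 8080 (assuming the default Ubuntu file layout), change the Listen directive in /etc/apache2/ports.conf:

Listen 8080

and the opening VirtualHost tag in /etc/apache2/sites-enabled/000-default.conf:

<VirtualHost *:8080>

then restart Apache and browse to http://localhost:8080/ instead.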

Then I opened a web browser on the Windows side and browsed to localhost:

image

and

image

We are up and running, but not debugging PHP yet. Remember the basic rules of WSL:

  • you cannot change Linux files from Windows.
  • you can access Windows files from Linux.

We want to edit PHP from Windows, so we’ll define a site that uses Windows files. Windows files are under /mnt/c (or whatever drive letter you are using).

So if, for example, you have your PHP website in a folder called c:\websites\mysite, you can have Apache serve files from that folder.

The quickest way to get up and running is to create a symbolic link in the Apache home directory, in my case /var/www/html. Change to that directory and type:

ln -s /mnt/c/websites/mysite mysite

Now you can view the site at http://localhost/mysite/

This worked first time for me, complete with PHP running. You could also set up multiple virtual hosts in Apache, and use the hosts file in Windows to map other host names to localhost.
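As a sketch of the virtual host approach (the host name mysite.local and the paths are just examples), you could create /etc/apache2/sites-available/mysite.conf on the Ubuntu side containing:

<VirtualHost *:80>
    ServerName mysite.local
    DocumentRoot /mnt/c/websites/mysite
    <Directory /mnt/c/websites/mysite>
        Require all granted
    </Directory>
</VirtualHost>

The Directory block grants access because the folder is outside /var/www. Enable the site and restart Apache:

a2ensite mysite

apache2ctl restart

then add a line to C:\Windows\System32\drivers\etc\hosts on the Windows side:

127.0.0.1    mysite.local

After that, http://mysite.local/ in a Windows browser should reach the same files.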

Next, you probably want PHP to show error messages. To do this, replace the default php.ini with the development version (or tweak it according to your own preferences). At the time of writing, on Ubuntu, the default PHP version is 7.0 and the development template is at /usr/lib/php/7.0/php.ini-development. So I backed up the php.ini in /etc/php/7.0/apache2, replaced it with the development version, and restarted Apache. My PHP form immediately showed me a non-fatal undefined index error, so it worked.
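For reference, the backup-and-replace steps amount to something like this (adjust the 7.0 paths if your PHP version differs):

cd /etc/php/7.0/apache2

cp php.ini php.ini.bak

cp /usr/lib/php/7.0/php.ini-development php.ini

apache2ctl restart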

There is one small inconvenience. Apache in WSL will only run during the session. So before starting work, you have to open Ubuntu and type:

sudo apache2ctl start

Well, background task support is coming to WSL, and in the meantime I do not regard this as a big problem.

OK, this is cool, we can make changes in the PHP code in our favourite Windows editor, save, and view the results directly in the browser. But what about line-by-line debugging? For this, we are going to use Visual Studio Code with the PHP Debug extension:

image

Then on the Ubuntu side:

apt-get install php-xdebug

Restart Apache:

apache2ctl restart

Check that phpinfo.php now shows an Xdebug section. Then edit php.ini and add the following:

[XDebug]
xdebug.remote_enable = 1
xdebug.remote_autostart = 1

Restart Apache again and XDebug is ready to go.

Over in Visual Studio Code there is a little more work to do. The problem is that although everything is running on localhost, the location of the files looks different from the Linux side than from the Windows side. We can fix this with a pathMappings setting. In Visual Studio Code, open the PHP file you want to debug. Click the Debug icon and then the little gearwheel near top left; this will open launch.json. By default there are a couple of settings for XDebug. These are OK for a default setup, but we need to add a path mapping so that the debugger knows where to find the files. For example:

image
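To give a flavour, assuming the folder c:\websites\mysite is open in Visual Studio Code and served via the symbolic link described earlier, the listen configuration in launch.json ends up looking something like this (attribute names may vary with the extension version, and the server path on the left of the mapping should match whatever path Xdebug actually reports, which may be the /var/www/html/mysite symlink rather than the /mnt/c path):

{
    "name": "Listen for XDebug",
    "type": "php",
    "request": "launch",
    "port": 9000,
    "pathMappings": {
        "/mnt/c/websites/mysite": "${workspaceRoot}"
    }
}

Save launch.json and the mapping takes effect the next time you start the debugger.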

Now you can set a breakpoint, start debugging, and open the page in your browser:

image

More guidance on the PHP Debug extension by Felix Becker is here.

Final thoughts

This is cool; but is it better or worse than an old-style VM running Linux and PHP? The WSL solution is lightweight and convenient, but unlike a VM it is not isolated and you may hit issues that are unique to WSL, because not everything runs. I did happen to suffer crashes in Visual Studio and in Outlook while WSL was running; it may well be coincidence, but I cannot help wondering if WSL might be to blame.

Still, a great feature of WSL is that when you exit your session, it goes away, so it is not too intrusive. I plan to use it for PHP debugging and will see how it goes.

What the Blazor! After Silverlight, .NET in the browser reappears by another route

Silverlight, Microsoft’s browser plug-in which included a cut-down .NET runtime, once seemed full of promise for developers looking for an end-to-end .NET solution, cross-platform on Windows and Mac, and with support for “out of browser” applications for a native-like experience.

Silverlight was killed by various factors, including the industry’s rejection of old-style browser plug-ins, and warring factions at Microsoft which resulted in Silverlight on Windows Phone, but not on Windows 8. The Windows 8 model won, with what became the Universal Windows Platform (UWP) in Windows 10, but this is quite a different thing with no cross-platform support. Or there is Xamarin which is cross-platform .NET, and one day perhaps Microsoft will figure out what to do about having both UWP and Xamarin.

Yesterday Microsoft announced (though it was already known to those paying attention) Blazor, an experimental project for hosting the .NET Runtime in the browser via WebAssembly. The name derives from “Browser + Razor”, Razor being the syntax used by ASP.NET to combine HTML and C# in a web application. C# in Razor executes on the server, whereas in Blazor it executes on the client.

Blazor is enabled by work the Xamarin team has done to compile the Mono runtime to WebAssembly. Although shipping a .NET runtime to the browser sounds like a relatively large download, the team is hoping that a combination of smart linking (to strip out unnecessary code in both applications and the runtime), caching and HTTP compression will make this acceptable.

This post by Steve Sanderson is a good technical overview. Some key points:

– you can run applications either as interpreted .NET IL (intermediate language) or pre-compiled

– Blazor is an SPA (Single Page Application) framework with solutions for routing, state management, dependency injection, unit testing and more

– UI components use HTML and CSS

– There will be a browser API which you can call from C# code

– you will be able to interop with JavaScript libraries

– Microsoft will provide ASP.NET libraries that integrate with Blazor, but you can use Blazor with any server-side technology

What version of .NET will be supported? This is where it gets messy. Sanderson says Blazor will support .NET Standard 2.0 or higher, but not completely, in that some functions will throw a PlatformNotSupportedException. The reason is that not all functions make sense in the context of a Blazor application.

Blazor sounds promising, if developers can get past that. I wanted to try it for myself, though the demo application on Azure currently gives me a 403 error, so there is this video from NDC Oslo instead.

The other question is whether Blazor has a future or will join Silverlight and other failed attempts to create a new application platform that works. Microsoft demands much patience from its .NET community.

HackerRank survey shows programming divides in more ways than one

Developer recruitment company HackerRank has published a survey of developer skills. The first place I look in any survey is who took part, and how many:

HackerRank conducted a study of developers to identify trends in developer education, skills and hiring practices. A total of 39,441 professional and student developers completed the online survey from October 16 to November 1, 2017. The survey was hosted by SurveyMonkey and HackerRank recruited respondents via email from their community of 3.2 million members and through social media sites.

I would like to see the professional and student responses shown separately; the world of work and the world of learning are different. This statement may also be incomplete, since several of the questions analyse what employers want, which suggests another source of data (not difficult to find for a recruitment company).

It is still a good read. It is notable for example that the youngest generation is learning to code later in life than those who are now over 35:

image

I am not sure how to interpret these figures, but can think of some factors. One is that the amount of stuff you can do with a computer without coding has risen. In the earliest days when computing became affordable for anyone (late seventies/early eighties), you could not do much without coding. This was the era of type-in listings for kids wanting to play games. That soon changed, but coding remained important to getting things done if you wanted to make a business database useful, or create a website. Today though you can do all kinds of business, leisure and internet computing without needing to see code, so the incentive to learn is lower. It has become a more specialist skill. It remains valuable though, so older people have reason to be grateful.

How do people learn to code? The most popular resource is Stack Overflow, followed by YouTube, with books coming in third. In truth the most popular resource must be Google search. Credit to Stack Overflow though: like Wikipedia, it offers a good browsing experience at a time when the web has become increasingly unpleasant to use, infected by pop-up surveys, autoplay videos and intrusive advertising, not to mention the actual malware out there.

No surprises in language popularity, though oddly the survey does not tell us directly what languages are most used or best known by the respondents. The most in demand languages are apparently:

1. JavaScript
2. Java
3. Python
4. C++
5. C
6. C#
7. PHP
8. Ruby
9. Go
10. Swift

If you ask what languages developers plan to learn next, Go, Python and Scala head the list. And then there is a fascinating chart showing which languages developers prefer grouped by age. Swift, apparently, is loved by 75% of those over 55, but only by 15% of those under 25, the opposite of what I would expect (though I don’t know if this is a percentage of those who use the language, or includes those who do not know it at all).

Frameworks are another notable topic. Everyone loves Node.js; but two of the frameworks on offer are “.NET Core” and “ASP”. This is odd, since .NET Core is not really a framework, and ASP normally refers to the ancient “Active Server Pages” framework which nobody uses any longer, while ASP.NET runs on .NET Core so is not an alternative to it.

This may be a clue that the HackerRank company or community is not well attuned to the Microsoft platform. That itself is of interest, but makes me question the validity of the survey results in that area.

C# and .NET: good news and bad as Python rises

Two pieces of .NET news recently:

Microsoft has published a .NET Core 2.1 roadmap and says:

We intend to start shipping .NET Core 2.1 previews on a monthly basis starting this month, leading to a final release in the first half of 2018.

.NET Core is the cross-platform, open source implementation of the .NET Framework. It provides a future for C# and .NET even if Windows declines.

Then again, Stack Overflow has just published a report on the most sought-after programming languages in the UK and Ireland, based on the tags on job advertisements on its site. C# has declined to fourth place, now below Python and with around half the demand of JavaScript:

image

To be fair, this is more about increased demand for Python, probably driven by interest in AI, than a decline in C#. If you look at traffic on the Stack Overflow site, C# is steady, but Python is growing fast:

image

The point that interests me, though, is the extent to which Microsoft can establish .NET Core beyond the Microsoft-platform community. Personally I like C# and would like to see it have a strong future.

There is plenty of goodness in .NET Core. Performance seems to be better in many cases, and cross-platform support is a big advantage.

That said, there is plenty of confusion too. Microsoft has three major implementations of .NET: the .NET Framework for Windows, Xamarin/Mono for cross-platform, and .NET Core for, umm, cross-platform. If you want cross-platform ASP.NET you will use .NET Core. If you want cross-platform Windows/iOS/macOS/Android, then it’s Xamarin/Mono.

The official line is that by targeting a specification (a version of .NET Standard), you can get cross-platform irrespective of the implementation. It’s still rather opaque:

The specification is not singular, but an incrementally growing and linearly versioned set of APIs. The first version of the standard establishes a baseline set of APIs. Subsequent versions add APIs and inherit APIs defined by previous versions. There is no established provision for removing APIs from the standard.

.NET Standard is not specific to any one .NET implementation, nor does it match the versioning scheme of any of those runtimes.

APIs added to any of the implementations (such as, .NET Framework, .NET Core and Mono) can be considered as candidates to add to the specification, particularly if they are thought to be fundamental in nature.

Microsoft also says that plenty of code is shared between the various implementations. True, but it still strikes me that having both Xamarin/Mono and .NET Core is one cross-platform implementation too many.

Strong financial results from Microsoft as it aims for breadth of services

Microsoft reported a big quarter (in terms of revenue) for the three months ending December 31st, with revenue of $28,918 million.

What’s notable? Mainly the big jump in Microsoft’s recent success stories: year on year Office 365 up by 41%, Azure up by 98%, Dynamics 365 up by 67%.

Windows is flat/weak as you would expect, and Surface hardware is standing still. Xbox grew a bit following the launch of Xbox One X.

LinkedIn is growing: revenue of $1.3 billion and “sessions growth of over 20%” in the quarter. In the earnings webcast, Microsoft’s Amy Hood said that the LinkedIn acquisition has both performed better, and seems more strategic, now than it did at the time.

Hood also made reference to the company’s ability to up-sell cloud users to higher-margin services. “Office 365 commercial revenue increased 41 percent from installed base growth across all customer segments, and ARPU [Average Revenue per User] expansion from continued customer migration to higher value offers in the E3 and E5 workloads.”

This point is key and is the answer (from the provider’s point of view) to the lower margins implicit in moving from software to services. When Microsoft sells a licence for you to use Windows or Office, the margin is huge because reproducing the software, or providing it for download, costs almost nothing; whereas with a subscription there is significant cost to providing the service. However the subscription has advantages which offset this, in particular the continuing interaction with the customer that both provides data, which the customer as well as the provider can mine (subject to appropriate privacy controls), and gives opportunity for the provider to extend the relationship into new or upgraded services.

CEO Satya Nadella fielded a good question about Microsoft losing out to Sony in gaming and to Alexa and Google Home in voice devices. On gaming, Nadella referred to the PC alongside Xbox as a strategic asset. “PC gaming is a growth market,” he said, as well as software such as Minecraft now on mobile devices, giving the company a broad reach. He also remarked on Azure as a gaming back end.

As for Cortana in the home (or absence from), Nadella said that the focus is on the server-side cognitive services. He also talked about voice input and control of Office 365. The key point though was that Microsoft wants to work both with its own and other voice assistant devices so it can win on services even when competitor devices are in use. “One-turn dialogs on one speaker in one home, that’s just not our vision,” he said.

Nadella made another key point in the webcast, in answer to a question about how Azure Stack (a packaged version of Azure for installation on-premises) will impact Azure. “Computing is becoming more distributed, not less distributed,” he said. IoT and sensors play a large part in this. Everything goes to the cloud but computing on the edge (the new buzzword for local processing) is important for efficiency.

It is easy to see ways in which Microsoft could stumble. The PC will decline as the number of users who need a desktop or laptop computer diminishes. Microsoft’s failure in mobile could prove costly as competitors use synergy with their own applications and cloud services to steer customers away. There are opportunities such as home automation and payments which seem closed to the company now.

Then again, strong results such as these show how the company can succeed by continuing to migrate its business users to cloud services. It remains deeply embedded in business computing.

Here is my chart summarising Microsoft’s performance:   

Quarter ending December 31st 2017 vs quarter ending December 31st 2016, $millions

Segment | Revenue | Change | Operating income | Change
Productivity and Business Processes | 8953 | +1774 | 3337 | +284
Intelligent Cloud | 7795 | +1037 | 2832 | +541
More Personal Computing | 12170 | +281 | 2510 | -51

The segments break down as:

Productivity and Business Processes: Office, Office 365, Dynamics 365 and on-premises Dynamics, LinkedIn

Intelligent Cloud: Server products, Azure cloud services

More Personal Computing: Consumer including Windows, Xbox; Bing search; Surface hardware

Which .NET framework for Windows: UWP, WPF or Windows Forms?

Yes, mobile is the future of client applications, cross-platform is cool, web applications are amazing; but out there in the real world, there are still a ton of people who work all day with a Windows PC, and businesses that want PC applications in order to get their work done.

So when a business comes to you and says, we want a new Windows application to do this or that, and presuming they do not care about mobile or Macs or access over the internet but just want something that runs on their internal network, what framework do you choose?

image

Let us even assume that they all run Windows 10 so that UWP (Universal Windows Platform) is a realistic option.

If you want to code in .NET (which is a great choice for a Windows-only application, and with the possibility of migrating code to cross-platform via Xamarin’s compiler later), then you have three obvious choices:

Windows Forms

This is the framework for Windows desktop applications that was introduced at the same time as .NET itself, back in 2002. Of course it has been revised many times since; there was a big update in 2005 with .NET 2.0. That said, Microsoft intended it to be replaced by Windows Presentation Foundation (WPF, see below), so it has not been a focus of attention. In 2014, High DPI support was improved with .NET 4.5.2, reflecting the fact that this ancient framework is still widely used.

Windows Forms is a nice wrapper around the Windows API, and easy to use in that it essentially uses X/Y layout. In other words, you can think of your form as a grid of pixels, with the position of each control determined at design time by its size and coordinates. This is great if you are designing and running on the same PC, but not so good when you deploy to other PCs with different display settings. It does kind-of scale if you follow certain rules, but successful scaling in a Windows Forms application is often difficult to achieve, so users may suffer chopped-off controls and text, or just ugly screens. Read this carefully if you use Windows Forms. And then read about High DPI support, which was improved again in .NET Framework 4.7.

If you are writing a database application, you can generate datasets by drag and drop from the Server Explorer in Visual Studio and bind them to controls. I am not a fan of this database framework, which quickly gets convoluted, but you do not have to use it. However the ability to bind list and grid controls to any kind of .NET collection is fantastically useful.

Why is Windows Forms still in use? It is partly legacy and the fact that it is easier to maintain and enhance an existing application than to start again. It is also because, scaling issues aside, Windows Forms is reliable, well supported by both built-in and third-party controls, and easy to learn.

Windows Presentation Foundation

This was Microsoft’s second go at a GUI framework for .NET and in many respects a great improvement. It was introduced with .NET Framework 3.0 in 2006, part of the Vista wave of technology. Unlike Windows Forms, it is based on the DirectX graphics API, so it is great for multimedia and special effects. Scaling is built in and based on layout managers. The underlying presentation language is XAML, an XML-based language. As with Windows Forms, there is deep support for binding data to controls.

Why would you not always use WPF rather than Windows Forms? The main issue is that the time you save on figuring out scaling is more than consumed by the time you spend on design. WPF is a designer-centric framework. It will repay your efforts, but if you just want to slap a couple of grids and a few buttons on a form to get a working business application, Windows Forms remains tempting.

Universal Windows Platform

Both Windows Forms and WPF are old, and Microsoft is pointing developers towards its Universal Windows Platform (UWP) instead. UWP is an evolution of the app platform introduced in Windows 8 in 2012. If WPF was all about scaling and multimedia, the Windows 8 modern app platform was about touch support and Store-based deployment. The application model was also service-based, the idea being that your app consumes services published over the internet. Until the Windows 10 Fall Creators Update, you could not use the .NET SqlClient to connect directly to a SQL Server database (you can now). The app platform became UWP with the launch of Windows 10 in 2015. UWP can use XAML for layout design, but its XAML dialect is not compatible with WPF.

Personally I have mixed feelings about UWP. Unfortunately it has suffered from Microsoft’s ever-changing development strategy. The Windows 8 app platform made sense to me as a way of bringing Windows into the tablet era and enabling applications that were more secure and more easily deployed, even if it tended to result in applications that were blocky and ugly. Microsoft then changed its mind about full-screen touch applications and came up with the UWP for Windows 10, where applications again run in a window, but with a new selling point: you could run your application on Windows Phone as well as desktop. Then the company canned Windows Phone, before UWP had properly launched, in effect deleting the “Universal” part of the platform.

UWP still offers Store delivery and isolation from other applications, better for security and stability. However there are a few things against it. First, users require Windows 10. Second, like WPF it is a designer-centric platform and not so good for running up quick business applications. Third, UWP apps behave differently from standard desktop applications, sometimes not in a good way.

I was using Microsoft’s bundled Photos application recently. I work a lot with images, so this often pops up as the default image viewer on Windows 10. I was not stressing it, but it crashed, which, as is typical for a UWP app, means it just disappeared without any message or warning.

UWP will be three years old this summer, but I am not convinced that the platform is quite there yet. I find it hard to think of UWP apps that I love. The apps I know best are the built-in ones, Mail, Photos, Groove Music, Calculator, and I do not love any of them. Paint 3D is amazing but not my thing.

At the same time I do see the merits of UWP versus traditional Windows application deployment. The existence of the Desktop Bridge (formerly Project Centennial) means you can get many of those benefits while still using WPF or Windows Forms.

Closing thoughts

Perhaps something like Power Apps will render this discussion irrelevant before long. There are also other options for the desktop, such as Xamarin Forms if you still want to use .NET, or Electron for using web technologies for desktop applications.

Still, while it may seem surprising, even in 2018 I can think of reasons why you might use any of the above frameworks, even Windows Forms, for a business app targeting Windows.

Spectre and Meltdown woes continue as Intel confesses to broken updates

Intel’s Navin Shenoy says the company has asked PC vendors to stop shipping its microcode updates that fix the speculative execution vulnerabilities identified by Google’s Project Zero team:

We recommend that OEMs, cloud service providers, system manufacturers, software vendors and end users stop deployment of current versions, as they may introduce higher than expected reboots and other unpredictable system behavior.

This is a blow to industry efforts to fix this vulnerability, a process involving BIOS updates (to install the microcode) as well as operating system patches.

Intel says it has an “early version of the updated solution”. Given the length of time it takes for PC manufacturers to package and distribute BIOS updates for the many thousands of models affected, it looks like the moment at which the majority of active systems will be patched is now far in the future.

Vendors have not yet completed the rollout of the initial patch, which they are now being asked to withdraw.

The detailed microcode guidance is here. Intel also has a workaround which gives some protection while also preserving system stability:

For those concerned about system stability while we finalize the updated solutions, we are also working with our OEM partners on the option to utilize a previous version of microcode that does not display these issues, but removes the Variant 2 (Spectre) mitigations. This would be delivered via a BIOS update, and would not impact mitigations for Variant 1 (Spectre) and Variant 3 (Meltdown).

I am not sure who out there is not concerned about system stability? That said, public cloud vendors would rather almost anything than the possibility of code running in one VM getting unauthorised access to the host or to other VMs.

Right now it feels as if most of the world’s computing devices, from server to smartphone, are simply insecure. Though it should be noted that the bad guys have to get their code to run: trivial if you just need to run up a VM on a public cloud, more challenging if it is a server behind a firewall.

Office 2016 now “built out of one codebase for all platforms” says Microsoft engineer

Microsoft’s Erik Schweibert, principal engineer in the Apple Productivity Experiences group, says that with the release of Office 2016 version 16 for the Mac, the productivity suite is now “for the first time in 20 years, built out of one codebase for all platforms (Windows, Mac, iOS, Android).”

image

This is not the first time I have heard of substantial code-sharing between the various versions of Office, but this claim goes beyond that. Of course there is still platform-specific code, and it is worth reading the Twitter thread for more background.

“The shared code is all C++. Each platform has native code interfacing with the OS (ie, Objective C for Mac and iOS, Java for Android, C/C++ for Windows, etc),” says Schweibert.

Does this mean that there is exact feature parity? No. The mobile versions remain cut-down, and some features remain platform-specific. “We’re not trying to provide uniform “lowest common denominator” support across all platforms so there will always be disparate feature gaps,” he says.

Even the online version of Office shares much of the code. “Web components share some code (backend server is shared C++ compiled code, front end is HTML and script)”, Schweibert says.

There is more news on what is new in Office for the Mac here. The big feature is real-time collaborative editing in Word, Excel and PowerPoint. 

What about 20 years ago? Schweibert is thinking about Word 6 for the Mac in 1994, a terrible release about which you can read more here:

“Shipping a crappy product is a lot like beating your head against the wall.  It really does feel good when you ship a great product as a follow-up, and it really does motivate you to spend some time trying to figure out how not to ship a crappy product again.

Mac Word 6.0 was a crappy product.  And, we spent some time trying to figure out how not to do that again.  In the process, we learned a few things, not the least of which was the meaning of the term “Mac-like.”

Word 6.0 for the Mac was poor for all sorts of reasons, as explained by Rick Schaut in the post above. The performance was poor, and the look and feel was too much like the Windows version – because it was the Windows code, recompiled. “Dialog boxes had "OK" and "Cancel" exactly reversed compared to the way they were in virtually every other Mac application — because that was the convention under Windows,” says one comment.

This is not the case today. Thanks to its lack of a mobile platform, Microsoft has a strong incentive to create excellent cross-platform applications.

There is more about the new cross-platform engineering effort in the video below.

The mysterious microcode: Intel is issuing updates for all its CPUs from the last five years but you might not benefit

The Spectre and Meltdown security holes found in Intel and, to a lesser extent, AMD CPUs make for not only one of the most serious, but also one of the most confusing tech issues that I can recall.

We are all used to the idea of patching to fix security holes, but normally that is all you need to do. Run Windows Update, or on Linux apt-get update, apt-get upgrade, and you are done.

This one is not like that. The reason is that you need to update the firmware; that is, the low-level software that drives the CPU. Intel calls this microcode.

So when Intel CEO Brian Krzanich says:

By Jan. 15, we will have issued updates for at least 90 percent of Intel CPUs introduced in the past five years, with updates for the remainder of these CPUs available by the end of January. We will then focus on issuing updates for older products as prioritized by our customers.

what he means is that Intel has issued new microcode for those CPUs, to mitigate against the newly discovered security holes, related to speculative execution (CPUs getting a performance gain by making calculations ahead of time and throwing them away if you don’t use them).

Intel’s customers are not you and me, the users, but rather the companies that purchase CPUs, which in most cases are the big PC manufacturers together with numerous device manufacturers. My Synology NAS has an Intel CPU, for example.

So if you have a PC or server from Vendor A, then when Intel has new microcode it is available to Vendor A. How it gets to your PC or server which you bought from Vendor A is another matter.

There are several ways this can happen. One is that the manufacturer can issue a BIOS update. This is the normal approach, but it does mean that you have to wait for that update, find it and apply it. Unlike Windows patches, BIOS updates do not come down via Windows Update, but have to be applied via another route, normally a utility supplied by the manufacturer. There are thousands of different PC models and there is no guarantee that any specific model will receive an updated BIOS, and no guarantee that all users will find and apply it even if they do. You have better chances if your PC is from a big name rather than one with a brand nobody has heard of that you bought from a supermarket or on eBay.

Are there other ways to apply the microcode? Yes. If you are technical you might be able to hack the BIOS but, leaving that aside, some operating systems can apply new microcode on boot. Therefore VMware was able to state:

The ESXi patches for this mitigation will include all available microcode patches at the time of release and the appropriate one will be applied automatically if the system firmware has not already done so.

Linux can do this as well. Such updates are volatile; they have to be re-applied on every boot. But there is little harm in that.
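On Ubuntu, for example, the usual route is the intel-microcode package (other distributions have their own equivalents), which arranges for updated microcode to be loaded early in the boot process:

sudo apt-get update

sudo apt-get install intel-microcode

Whether the version in your distribution’s repository yet includes the Spectre-related revisions is another matter; check the package changelog before relying on it.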

What about Windows? Unfortunately there is no supported way to do this. However there is an experimental VMware utility that will do it:

This Fling is a Windows driver that can be used to update the microcode on a computer system’s central processor(s) (“CPU”). This type of update is most commonly performed by a system’s firmware (“BIOS”). However, if a newer BIOS cannot be obtained from a system vendor then this driver can be a potential substitute.

Check the comments – interest in this utility has jumped following the publicity around Spectre and Meltdown. If working exploits start circulating you can expect that interest to spike further.

This is a techie and unsupported solution though and comes with a health warning. Most users will never find it or use it.

That said, there is no inherent reason why Microsoft could not come up with a similar solution for PCs and servers for which no BIOS update is available, and even deliver it through Windows Update. If users do start to suffer widespread security problems which require Intel’s new microcode, it would not surprise me if something appears. If it does not, large numbers of PCs will remain unprotected.

Why patching to protect against Spectre and Meltdown is challenging

The tech world has been buzzing with news of bugs (or design flaws, take your pick) in mainly Intel CPUs, going way back, which enable malware to access memory in the computer that should be inaccessible.

How do you protect against this risk? The industry has done a poor job in communicating what users (or even system admins) should do.

A key reason why this problem is so serious is that it risks a nightmare scenario for public cloud vendors, or any hosting company. This is where software running in a virtual machine is able to access memory, and potentially introduce malware, in either the host server or other virtual machines running on the same server. The nature of public cloud is that anyone can run up a virtual machine and do what they will, so protecting against this issue is essential. The biggest providers, including AWS, Microsoft and Google, appear to have moved quickly to protect their public cloud platforms. For example:

The majority of Azure infrastructure has already been updated to address this vulnerability. Some aspects of Azure are still being updated and require a reboot of customer VMs for the security update to take effect. Many of you have received notification in recent weeks of a planned maintenance on Azure and have already rebooted your VMs to apply the fix, and no further action by you is required.

With the public disclosure of the security vulnerability today, we are accelerating the planned maintenance timing and will begin automatically rebooting the remaining impacted VMs starting at 3:30pm PST on January 3, 2018. The self-service maintenance window that was available for some customers has now ended, in order to begin this accelerated update.

Note that this fix is at the hypervisor, host level. It does not patch your VMs on Azure. So do you also need to patch your VM? Yes, you should; and your client PCs as well. For example, KB4056890 (for Windows Server 2016 and Windows 10 1607), or KB4056891 for Windows 10 1703, or KB4056892. This is where it gets complex though, for two main reasons:

1. The update will not be applied unless your antivirus vendor has set a special registry key. The reason is that the update may crash your computer if the antivirus software accesses memory in a certain way, which it may do. So you have to wait for your antivirus vendor to do this, or remove your third-party anti-virus and use the built-in Windows Defender (the registry value concerned is shown below).

2. The software patch is not complete protection. You also need to update your BIOS, if an update is available. Whether or not it is available may be uncertain. For example, I am pretty sure that I found the right update for my HP PC, based on the following clues:

– The update was released on December 20 2017

– The description of the update is “Provides improved security”

image
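Going back to the first point, the registry value in question, as I understand Microsoft’s guidance to antivirus vendors, is a DWORD named cadca5fe-87d3-4b96-b7fb-a231484277cc with data 0 under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat. If you are relying on Windows Defender and the update is not being offered, you can set it yourself from an elevated command prompt, but only do so if you are sure no incompatible antivirus driver is installed:

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat" /v "cadca5fe-87d3-4b96-b7fb-a231484277cc" /t REG_DWORD /d 0 /f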

So now is the time, if you have not done so already, to go to the support sites for your servers and PCs, or your motherboard vendor if you assembled your own, see if there is a BIOS update, try to figure out whether it addresses Spectre and Meltdown, and apply it.

If you cannot find an update, you are not fully protected.

It is not an easy process and realistically many PCs will never be updated, especially older ones.

What is most disappointing is the lack of clarity or alerts from vendors about the problem. I visited the HPE support site yesterday in the hope of finding up-to-date information on HP’s server patches, to find only a maze of twisty little link passages, all alike, none of which led to the information I sought. The only thing you can do is to trace the driver downloads for your server in the hope of finding a BIOS update.

Common sense suggests that PCs and laptops will be a bigger risk than private servers, since unlike public cloud vendors you do not allow anyone out there to run up VMs.

At this point it is hard to tell how big a problem this will be. Best practice though suggests updating all your PCs and servers immediately, as well as checking that your hosting company has done the same. In this particular case, achieving this is challenging.

PS kudos to BleepingComputer for this nice article and links; the kind of practical help that hard-pressed users and admins need.

There is also a great list of fixes and mitigations for various platforms here:

https://github.com/hannob/meltdownspectre-patches

PPS see also Microsoft’s guidance on patching servers here:

https://support.microsoft.com/en-us/help/4072698/windows-server-guidance-to-protect-against-the-speculative-execution

and PCs here:

https://support.microsoft.com/en-us/help/4073119/protect-against-speculative-execution-side-channel-vulnerabilities-in

There is a handy PowerShell module called SpeculationControl which you can install and run to check status. I was able to confirm that the HP BIOS update mentioned above is the right one. Just run PowerShell with admin rights and type:

install-module speculationcontrol

then type

get-speculationcontrolsettings

image

Thanks to @teroalhonen on Twitter for the tip.