Account options when setting up Windows 10, and Microsoft’s enforced insecurity questions

How do you sign in to Windows 10? There are now four options. I ran through Windows 10 setup using version 1803 (released in April this year) and noted how this has evolved. Your first decision: is this a personal or organisational PC?

image

If you choose Setup for an organisation, you will be prompted to sign in with an Office 365 (that is, Azure AD) account. The traditional Domain join, for on-premises Active Directory, has been shunted to a less visible option (the red encircling is mine). In larger organisations this tends to be automated anyway.

image

But this one is a personal PC, and it is a similar story. You are prompted to sign in with a Microsoft account, but there is another option, called an Offline account (again, the red circle is mine).

image

In Windows 7 and earlier, this “Offline account” was the only option for personal PCs. I still recommend setting up an administrative offline account so you can always be sure of being able to log into your PC, even without internet access. Think about some of the scenarios. Someone might hack your Microsoft account and change your password, and now you cannot even log onto your PC. Unless you have an offline account.

I’ve been awkward and selected Offline account. Windows, or rather Microsoft, does not like it. Note the mind games in the screenshot below. Although I’ve made a positive selection for Offline account, the default and highlighted option now is to change my mind. I do not like this.

image

Now I can set up my offline account. A screen prompts for a username, then for a password, all the time nagging that I should create an online account instead.

image

I type and confirm the password; but now I get this:

image

Yes, I have to create “security questions”, with no option to skip. If you try to skip, you get a “This field is required” message. Worse still, they are from a pre-selected list:

image

I really hate this. These are not security questions; they are insecurity questions. Their purpose is to let me (or someone else) reset the password, forming a kind of back door into the PC. The information the questions ask for is only semi-secret; it is not impossible for someone determined to discover it. So Microsoft is insisting that I make my account less secure.

Of course you do not have to give honest answers. You can call your first pet yasdfWsd9gAg!!hea. But most people will be honest.

Does it matter, given that a PC account offers rather illusory security anyway? Unless you encrypt the hard drive, someone who steals the PC can reset the password by booting into Linux, or take out the disk and read it from another PC. All true; but note that Microsoft makes it rather easy to encrypt your PC with BitLocker, in which case the security is not so illusory.

Just for completeness, here is what comes next, an ad for Cortana:

image

Hey Cortana! How do I delete my security answers?

I do get why Microsoft is doing this. An online account is better in that settings can roam, you can use the Store, and you can reset the password from one PC to restore access on another. The insecurity questions could be a life-saver for someone who has forgotten their password and needs to get back into their PC.

But such things should be optional. There is nothing odd about wanting an offline account.

Surface Go: Microsoft has another go at a budget tablet

Microsoft has announced Surface Go, a cheaper, smaller model to sit at the budget end of its Surface range of tablets and laptops.

The new model starts at $399, will be available for pre-order today in selected territories, and ships on August 2nd.

In the UK, the Surface Go is £379 inc VAT for 4GB RAM and 64GB storage, or £509.99 inc VAT for 8GB RAM and 128GB storage.

image

I go back a long way with Surface, having been at the launch of the first device, Surface RT, back in 2012. The device was a flop, but I liked it. The design was genuinely innovative and sought to make sense of a Windows in transition from desktop-only to a viable touch/tablet operating system. It failed primarily because of the poor range of available apps, lack of user acceptance of Windows 8, and somewhat underpowered hardware. There were also keyboard issues: the fabric-based Touch keyboard was difficult to use because it gave no tactile feedback, and the Type keyboard was less elegant and still somewhat awkward.

Surface Pro came next, and while it was more useful thanks to full Windows 8 and an Intel Core i5 CPU, it was disappointing, with battery life issues and a tendency to stay on in your bag, overheating and wasting battery. There were other niggling issues.

The big disappointment of Surface for me is that even with full, Apple-like control over hardware and software, the devices have not been trouble-free.

Another issue today is that Windows 10 is not designed for touch in the same way as Windows 8. Therefore you rarely see Windows tablets used as tablets; they are almost always used as laptops, even if they are 2-in-1 devices. The original kickstand design is therefore rather pointless. If I got another Surface it would be a Surface Laptop or Surface Book.

Of course they are not all bad. It is premium hardware and some of the devices are delightful to use and perform well. They are expensive though, and I suggest careful comparison with what you can get for the same money from partners like HP, Lenovo and others.

What about this one? Key specs:

  • 10″ screen, kickstand design
  • 1800 x 1200 (217 PPI) resolution
  • 8.3mm thick
  • USB-C 3.1 port, MicroSD, headphone jack socket
  • Intel® Pentium® Gold Processor 4415Y
  • Up to nine hours battery life
  • Intel® HD Graphics 615
  • Display supports Surface Pen with 4096 levels of pressure sensitivity
  • Signature Type Cover with trackpad supporting 5-point gestures
  • Windows Hello face authentication camera (front-facing)
  • 5.0 MP front-facing camera with 1080p Skype HD video
  • 8.0 MP rear-facing autofocus camera with 1080p HD video
  • Single microphone
  • 2W stereo speakers with Dolby® Audio™ Premium

It sounds like a great deal at £379 or $399, but you will pay more, for three reasons:

  • The base spec is minimal in terms of RAM and SSD storage and you will want the higher model
  • The Type Cover is essential and will cost – a Pro Type Cover is $159.99 and this may be a bit less
  • The Surface Pen is £99.99 or $99.99

This means your $399 will soon be $550 or more.

It could still be a good deal if it turns out to be a nice device. The Hello camera is a plus point, but where I would particularly recommend a Surface is if you want Pen support. Microsoft is good at this. Unfortunately I do not get on well with pen input, but some people do, and for artists and designers it is a real advantage.

Ubuntu goes minimal (but still much bigger than Alpine Linux), cosies up to Google Cloud Platform

Ubuntu has announced “Minimal Ubuntu”, a cut-down server image designed for containerised deployments. The Docker image for Minimal Ubuntu 18.04 is 29MB:

Editors, documentation, locales and other user-oriented features of Ubuntu Server have been removed. What remains are only the vital components of the boot sequence.  Images still contain ssh, apt and snapd so you can connect and install any package you’re missing. The unminimize tool lets you ‘rehydrate’ your image into a familiar Ubuntu server package set, suitable for command line interaction.

says Canonical.

29MB is pretty small; but not as small as Alpine Linux images, commonly used by Docker, which are nearer 5MB. Of course these image sizes soon increase when you add the applications you need.

I pulled Ubuntu 18.04 from Docker Hub and the image size is 31.26MB, so this hardly seems a breakthrough.
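If you want to check for yourself, pulling an image and reading off its size takes only a couple of commands, for example:

docker pull ubuntu:18.04
docker images ubuntu:18.04

The SIZE column in the output of the second command gives the figure quoted above.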

Canonical quotes Paul Nash, Group Product Manager for Google Cloud Platform, in its press release. The image is being made available initially for Amazon EC2, Google Compute Engine, LXD, and KVM/OpenStack. The kernel has been optimized for each deployment target, so the downloadable image is optimized for KVM and slightly different from the AWS or GCP versions.

Pusher: a nice solution for sending messages and notifications to web and mobile apps

Pusher is a London company which runs cloud services for publish/subscribe in web and mobile applications. The idea is to deliver real-time updates, a concept that has many use cases. Examples include price updates in finance apps, status updates to track a delivery, news updates, or anything where users want to monitor progress or keep in touch with fast-moving developments.

The service passed my “get up and running quickly” test. I created a free account (limited to 100 connections and 200k messages per day) and a new channel:

image 

I’m guessing it runs on AWS, looking at the datacentre locations:

image

I chose a JavaScript client and ASP.NET MVC for the back end. On my PC I pasted the JavaScript into a web page running locally on Apache (in Windows Subsystem for Linux). I also created a new ASP.NET MVC project and added the sample code with some trivial modifications. I was able to send a message to the web page; it triggers an annoying alert but of course you could easily amend this to update the UI in more user-friendly ways.

image
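The client-side piece amounts to just a few lines of JavaScript once the pusher-js script is included. As a rough sketch (the key, cluster, channel and event names here are placeholders rather than the actual values from my dashboard):

var pusher = new Pusher('YOUR_APP_KEY', { cluster: 'eu' });

// Subscribe to a channel and react when an event arrives
var channel = pusher.subscribe('my-channel');
channel.bind('my-event', function (data) {
  // Replace this alert with something that updates the UI more gracefully
  alert(JSON.stringify(data));
});

The back end then triggers the event on the same channel with whatever JSON payload you want to push out.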

Of course you could roll your own solution for this, but what you get with Pusher is all the plumbing pre-done for many different clients, plus automatic scalability.

Pusher also has a service called Beams (formerly Push Notifications) which lets you send notifications to Android and iOS apps.

Pusher or roll your own? As with many cloud services, you are putting a high level of trust in Pusher (security and reliability) if you use the service, and you will need a paid subscription:

image

You are saving considerable development time though, and as Google and Apple update their SDKs or change the rules, Pusher will presumably adapt accordingly.

Can Azure easily do this, I wondered? I headed over to Azure Notification Hubs. Two things stood out. First, the amount of admin you have to do to support each type of device is greater. Second, Microsoft promised to support “push to web” in March 2016:

image

… but has not done so nor even bothered to update those asking:

image

It is odd that Microsoft, with all its drive behind Azure, is still in the habit of leaving customers in the dark in certain areas.

Notes from the Field: dmesg error blocks MySQL install on Windows Subsystem for Linux

I enjoy Windows Subsystem for Linux (WSL) on Windows 10 and use it constantly. It does not patch itself, so from time to time I update it using apt-get. The latest update upgraded MySQL to version 5.7.22, but unfortunately the upgrade failed: dpkg could not configure it. I saw messages like:

invoke-rc.d: could not determine current runlevel

2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'

After multiple efforts uninstalling and reinstalling I narrowed the problem down to a dmesg error:

dmesg: read kernel buffer failed: Function not implemented

It is true that dmesg does not work on WSL. However there is a workaround here which says that if you write something to /dev/kmsg, then at least calling dmesg does not return an error. So I did:

sudo echo foo > /dev/kmsg
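One caution if you try this yourself: sudo applies to echo but not to the output redirection, so on some setups the command above fails with permission denied. The usual workaround is to pipe through tee instead:

echo foo | sudo tee /dev/kmsg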

I removed and reinstalled MySQL one more time, and it worked:

image
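For reference, the remove-and-reinstall step is just the usual apt routine, something along these lines (exact package names may vary depending on what is installed, and note the caution below about data):

sudo apt-get remove --purge mysql-server mysql-server-5.7
sudo apt-get autoremove
sudo apt-get install mysql-server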

Apparently partial dmesg support in WSL is on the way, previewed in Build 17655.

Note: be cautious about fully uninstalling MySQL if you have data you want to preserve. Export/backup the databases first.

Instant applications considered harmful?

Adrian Colyer, formerly of SpringSource, VMware, and Pivotal, is running an excellent blog where he looks at recent technical papers. A few days ago he covered The Rise of the Citizen Developer – assessing the security impact of online app generators. This was about online app generators for Android, things like Andromo which let you create an app with a few clicks. Of course the scope of such apps is rather limited, but they have appeal as a quick way to get something into the Play Store that will promote your brand, broadcast your blog, convert your website into an app, or help customers find your office.

It turns out that there are a few problems with these app generators. Andromo is one of the better ones. Some of them just download a big generic application with a configuration file that customises it to your requirements. Often this configuration is loaded from the internet, in some cases over HTTP with no encryption. API keys used for interaction with other services such as Twitter and Google can easily leak. They do not conform to Android security best practices and request more permissions than are needed.

Low-code and no-code development is not confined to Android applications. Appian promises “enterprise-grade” apps via its platform. Microsoft PowerApps claims to “solve business problems with intuitive visual tools that don’t require code.” It is an idea that will not go away: an easy-to-use visual environment that enables any business person to build productive applications.

Some are better than others; but there are inherent problems with all these kinds of tools. Three big issues come to mind:

  1. Bloat. You only require a subset of what the application generator can do, but because the generator tries to be universal, a mass of code comes along with it that you do not need but someone else may. This inevitably impacts performance, and not in a good way.
  2. Brick walls. Everything is going well until you require some feature that the platform does not support. What now? Often the only solution is to trash it and start again with a more flexible tool.
  3. Black box. Your app mostly works, but for some reason in certain cases it gives the wrong result. Lack of visibility into what is happening behind the scenes makes problems like this hard to fix.

It is possible for an ideal tool to overcome these issues. Such a tool generates human-understandable code and lets you go beyond the limitations of the generator by exporting and editing the project in a full programming environment. Most of the tools I have seen do not allow this; and even if they do, it is still hard for the generator to avoid generating a ton of code that you do not really need.

The more I have seen of different kinds of custom applications, the more I appreciate projects with nicely commented textual code that you can trace through and understand.

The possibility of near-instant applications has huge appeal, but beware the hidden costs.

On Microsoft Teams in Office 365, and why we prefer walled gardens to the Internet jungle

Gartner has recently delivered a report called Why Microsoft Teams will soon be just as common as Outlook, which gave me pause for reflection.

The initial success of Office 365 was almost all to do with email. Hosted Exchange at a reasonable cost is an obvious win for businesses that were formerly on on-premises Exchange or Small Business Server. Microsoft worked to make the migration relatively seamless, and with strong Active Directory support it can be done with users hardly noticing. Exchange of course is more than just email, also handling calendars and tasks, and Outlook and Exchange are indispensable tools for many businesses.

The other pieces of Office 365, such as SharePoint, OneDrive and Skype for Business (formerly Lync), took longer to gain traction, in part because of flaws in the products. Exchange has always been an excellent email server, but in cloud document storage and collaboration Microsoft’s solution was less good than alternatives like Dropbox and Box, and ties to desktop Office are a mixed blessing: welcome because Office is familiar and capable, but also causing friction thanks to the need for old-style software installations.

Microsoft needed to up its game in areas beyond email, and to its credit it has done so. SharePoint and OneDrive are much improved. In addition, the company has introduced a range of additional applications, including StaffHub for managing staff schedules, Planner for project planning and task assignment, and PowerApps for creating custom applications without writing code.

We have also seen a boost to the cloud-based Dynamics suite thanks to synergy between this and Office 365.

Having lots of features is one thing, winning adoption is another. Microsoft lacked a unifying piece that would integrate these various elements into a form that users could easily embrace. Teams is that piece. When it was introduced in March 2017 I initially thought there was nothing much to it: just a new user interface for existing features like SharePoint sites and Office 365/Exchange groups, with yet another business messaging service alongside Skype for Business and Yammer.

Software is about usability as much as, or more than, features though, and Teams caught on. Users quickly demanded deeper integration between Teams and other parts of Office 365. It soon became obvious that from the user’s perspective there was too much overlap between Teams and Skype for Business, and in September 2017 Microsoft announced that Teams would replace Skype for Business, though this merging of two different tools is not yet complete.

image

To see why Teams has such potential you need only click Add a tab in the Windows client. Your screen fills with stuff you can add to a Team, from document links to Planner to third-party tools like Trello and Evernote.

image

This is only going to grow. Users will open Teams at the beginning of the day and live there, which is exactly the point Gartner is making with its attention-grabbing title.

A good thing? Well, collaboration is good, and so is making better use of what you are paying for with an Office 365 subscription, so it has merit.

The part that troubles me is that we are losing diversity as well as granting Microsoft a firmer hold on its customers.

It all started with email, remember. But email is a disaster, replete with unwanted marketing, malware links, and a mass of messages that may have some value but that life is too short to investigate. In the consumer world, people prefer the safer world of Facebook Messenger or WhatsApp, where messages are more likely to be wanted. Email is also ancient, hard to extend with new features, and generally insecure.

Business-oriented messaging software like Slack and now Teams has moved in, giving users a safer and more usable way of communicating with colleagues. Consumers prefer Facebook’s walled garden to the internet jungle, and business users are no different.

It is a trade-off though. Email, for all its faults, is open and has multiple providers. Teams is not.

This will not stop Teams from succeeding, even though there are plenty of outstanding user requests and considerable dissatisfaction with the current release. Performance can be poor, the clients for Mac and mobile are not as good as the one for Windows, and there is no Linux client at all.

Third parties with applications or services that make sense in the Teams environment should hasten to get their stuff available there.

Amazon offering Linux desktops as a service in WorkSpaces

Amazon Web Services now offers Linux desktops as part of its WorkSpaces desktop-as-a-service offering.

The distribution is called Amazon Linux 2 and includes the MATE desktop environment.

image

Most virtual desktops run Windows, because most of the applications people want to run from virtual desktops are Windows applications. A virtual desktop plugs the gap between what you can do on the device in front of you (whether a laptop, Chromebook, iPad or whatever) and what you can do in an office with a desktop PC.

It seems that Amazon have developers in mind to some extent. Evangelist Jeff Barr (from whom I have borrowed the screenshot above) notes:

The combination of Amazon Linux WorkSpaces and Amazon Linux 2 makes for a great development environment. You get all of the AWS SDKs and tools, plus developer favorites such as gcc, Mono, and Java. You can build and test applications in your Amazon Linux WorkSpace and then deploy them to Amazon Linux 2 running on-premises or in the cloud.

Still, there is nothing to stop any user running productivity applications on it, and it works out a bit cheaper than Windows thanks to the absence of Microsoft licensing costs. Ideal for frustrated Google Chromebook users who want access to a less locked-down OS.

Notes from the field: Windows Time Service interrupts email delivery

A business with Exchange Server noticed that email was not flowing. The internet connection was fine, and all the servers were up and running, including Exchange 2016. Email had been fine just a few hours earlier. What was wrong?

The answer, or the beginning of the answer, was in the Event Viewer on the Exchange Server. Event ID 1035, only a warning:

Inbound authentication failed with error UnexpectedExchangeAuthBlobCheckForClockSkew for Receive connector Default Mailbox Delivery

Hmm. A clock problem, right? It turned out that the PDC for the domain was five minutes fast. Kerberos tolerates only a small amount of clock skew (five minutes by default), so this was enough to trigger authentication failures. Result: no email. We fixed the time, restarted Exchange, and everything worked.
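Incidentally, a quick way to see how far a server’s clock is adrift from an internet time source is w32tm’s stripchart option, something like:

w32tm /stripchart /computer:pool.ntp.org /samples:3 /dataonly

(The time source here is just an example; any reachable NTP server will do.)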

Why was the PDC running fast? The PDC was configured to get time from an external source, apparently, and all other servers to get their time from the PDC. Foolproof?

Not so. If you typed:

w32tm /query /status

at a command prompt on the PDC (not the Exchange Server, note), it reported:

Source: Free-running System Clock

Oops. Despite efforts to do the right thing in the registry, setting the Type value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters to NTP and entering a suitable list of time servers in the NtpServer value, it was actually getting its time from its own system clock. This being a Hyper-V VM, that meant the clock on the host server, which – no surprise – was five minutes fast.
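You can inspect what is actually set from a command prompt, for example:

reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v Type

reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v NtpServer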

You can check for this error by typing:

w32tm /resync

at the command prompt. If it says:

The computer did not resync because no time data was available.

then something is wrong with the configuration. If it succeeds, check the status as above and verify that it is querying an internet time server. If it is not querying a time server, run a command like this:

w32tm /config /update /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8 2.pool.ntp.org,0x8 3.pool.ntp.org,0x8" /syncfromflags:MANUAL

until you have it right.
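After changing the configuration it is worth restarting the Windows Time service and then checking again, along these lines:

net stop w32time && net start w32time

w32tm /resync

w32tm /query /status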

Note this is ONLY for the server with the PDC Emulator FSMO role. Other servers should be configured to get time from the PDC.
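For those other domain-joined servers, the usual configuration is to sync from the domain hierarchy, which you can set explicitly with:

w32tm /config /syncfromflags:domhier /update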

Time server problems seem to be common on Windows networks, despite the existence of lots of documentation. There are also various opinions on the best way to configure Hyper-V, which has its own time synchronization service. There is a piece by Eric Siron here on the subject, and I reckon his approach is a safe one (Hyper-V Synchronization Service OFF for the PDC Emulator, ON for every other VM).

I love his closing remarks:

The Windows Time service has a track record of occasionally displaying erratic behavior. It is possible that some of my findings are not entirely accurate. It is also possible that my findings are 100% accurate but that not everyone will be able to duplicate them with 100% precision. If working with any time sensitive servers or applications, always take the time to verify that everything is working as expected.

Inside Azure Cosmos DB: Microsoft’s preferred database manager for its own high-scale applications

At Microsoft’s Build event in May this year I interviewed Dharma Shukla, Technical Fellow for the Azure Data group, about Cosmos DB. I enjoyed the interview but have not made use of the material until now, so even though Build was some time back I wanted to share some of his remarks.

Cosmos DB is Microsoft’s cloud-hosted NoSQL database. It began life as DocumentDB, and was re-launched as Cosmos DB at Build 2017. There are several things I did not appreciate at the time. One was how much use Microsoft itself makes of Cosmos DB, including for Azure Active Directory, the identity provider behind Office 365. Another was how low Cosmos DB sits in the overall Azure cloud system. It is a foundational piece, as Shukla explains below.

image

There were several Cosmos DB announcements at Build. What’s new?

“Multi-master is one of the capabilities that we announced yesterday. It allows developers to scale writes all around the world. Until yesterday Cosmos DB allowed you to scale writes in a single region but reads all around the world. Now we allow developers to scale reads and writes homogeneously all round the world. This is a huge deal for apps like IoT, connected cars, sensors, wearables. The amount of writes are far more than the amount of reads.

“The second thing is that now you get single-digit millisecond write latencies at the 99th percentile, not just in one region.

“And the third piece is what falls out of this: high availability. The window of failover, the time it takes to fail over from one region to the other when a disaster happens, has shrunk significantly.

“It’s the only system I know of that has married the high consistency models that we have exposed with multi-master capability as well. It had to reach a certain level of maturity, testing it with first-party Microsoft applications at scale and then with a select set of external customers. That’s why it took us a long time.

“We also announced the ability to have your Cosmos DB database in your own VNet (virtual network). It’s a huge deal for enterprises that want to make sure that no data leaks out of that VNet. To do it for a globally distributed database is especially hard because you have to close all the transitive networking dependencies.”

image
Technical Fellow Dharma Shukla

Does Cosmos DB work on Azure Stack?

“We are in the process of going to Azure Stack. Azure Stack is one of the top customer asks. A lot of customers want a hybrid Cosmos DB on Azure Stack as well as in Azure, and then have active-active. One of the design considerations for multi-master is edge devices. Right now Azure has about 50 regions. Azure is going to expand to, let’s say, 200 regions. So a customer’s single Cosmos DB table spanning all these regions is one level of scalability. But the architecture is such that if you directly attach lots of Azure Stack devices, or you have sensors and edge devices, they can also pretend to be replicas. They can also pretend to be an Azure region. So you can attach billions of endpoints to your table. Some of those endpoints could be Azure regions, some of them could be instances of Azure Stack, or IoT Hub, or edge devices. This kind of scalability is core to the system.”

Have customers asked for any additional APIs into Cosmos DB?

“There is a list of APIs, HBase, richer SQL, there are a number of such API requests. The good news is that the system has been built in a way that makes adding new APIs relatively easy. So depending on the demand we continue to add APIs.”

Can you tell me anything about how you’ve implemented Cosmos DB? I know you use Service Fabric. Do you use other Azure services?

“We have dedicated clusters of compute machines. Cosmos DB is a Ring 0 service, so any time Azure opens a new region, Cosmos DB clusters are provisioned by default. Just like compute and storage, Cosmos DB is one of the Ring 0 services, which are the bottommost layer. Azure Active Directory, for example, depends on Cosmos DB. So Cosmos DB cannot take a dependency on Active Directory.

“The dependency that we have is our own clusters and machines, on which we put Service Fabric. For deployment of Cosmos DB code itself, we use Service Fabric. For some of the load balancing aspects we use Service Fabric. The partition management, global distribution, replication, is our own. So Cosmos DB is layered on top of Service Fabric, it is a Service Fabric application. But then it takes over. Once the Cosmos DB bits are laid out on the machine then its replication and partition management and distribution pieces take over. So that is the layering.

“Other than that there is no dependency on Azure. And that is why one of the salient aspects of this is that you can take the system and host it easily in places like Azure Stack. The dependencies are very small.

“We don’t use Azure Storage because of that dependency. So we store the data locally and then replicate it. And all of that data is also encrypted at rest.”

So when you say it is not currently in Azure Stack, it’s there underneath, but you haven’t surfaced it?

“It is in a defunct mode. We have to do a lot of work to light it up. When we light it up on on-prem or private cloud devices, we want to enable this active-active pathway. So you are replicating your data and that is getting synchronized with the cloud, and Azure Stack is one of the sockets.”

Microsoft itself is using Cosmos DB. How far back does this go? Azure AD is quite old now. Was it always on Cosmos DB / DocumentDB?

“Over the years Office 365, Xbox, Skype, Bing, and more and more Azure services have started moving. Now it has almost become ubiquitous. Because it’s at the bottom of the stack, taking a dependency on it is very easy.

“Azure Active Directory consists of a set of microservices. So they progressively have moved to Cosmos DB. Same situation with Dynamics, and our slew of such applications. Skype is by and large on Cosmos DB now. There are still some fragments of the past.  Xbox and the Microsoft Store and others are running on it.”

Do you think your customers are good at making the right choices over which database technology to use? I do pick up some uncertainty about this.

“We are working on making sure that we provide that clarity. Postgres and MySQL and MariaDB and SQL Server, Azure SQL and elastic pools, managed instances, there is a whole slew of relational offerings. Then we have Cosmos DB and then lots of analytical offerings as well.

“If you are a relational app, and if you are using a relational database, and you are migrating from on-prem to Azure, then we recommend the relational family. It comes with a fundamental scale caveat, which is that it goes up to 4TB. Most of those customers are settled because they have designed the app around those sorts of scalability limitations.

“A subset of those customers, and a whole bunch of brand new customers, are willing to re-write the app. They know that they want to come to the cloud for scale. So then we pitch Cosmos DB.

“Then there are customers who want to do massive scale offline analytical processing. So there is Databricks, Spark, HDInsight, and that set of services.

“We realise there are grey lines between these offerings. We’re tightening up the guidance, it’s valid feedback.”

Any numbers to flesh out the idea that this is a fast-growing service for Microsoft?

“I can tell you that the number of new clusters we provision every week is far more than the total number of clusters we had in the first month. The growth is staggering.”
