Creating a secure ASP.NET Core web application with Entra ID (formerly Azure AD) auth by group membership – harder than it should be

Several years ago I created a web application using ASP.NET and Azure AD authentication. The requirement was that only members of certain security groups could access the application, and the level of access varied according to the group membership. Pretty standard, one would think.

The application had been running well but stopped working – because it used ADAL (Azure Active Directory Authentication Library) and the Graph API at the old graph.windows.net endpoint, both of which are deprecated.

No problem, I thought, I’ll quickly run up a new application using the latest libraries and port the code across. Easier than trying to re-plumb the existing app because this identity stuff is so fiddly.

I am using the latest Visual Studio 2022 (17.13.6). Create a new application and choose ASP.NET Core Web App (Razor Pages), which is perhaps the primary .NET app framework.

Next the wizard tells me I need to install the dotnet msidentity tool – a dedicated tool for configuring ASP.NET projects to use the Microsoft identity platform. OK.

I have to sign in to my Azure tenancy (expected) and register the app. Here I can see existing registrations or create a new one. I create a new one:

I continue in the wizard but it errors:

This does not appear to be an easy fix. I click Back and ask the wizard just to update the project code; I will add packages and do other configuration manually. As it turned out, though, the failed step had actually added the packages and the app already works. However, Visual Studio warns me that the installed version of Microsoft.Identity.Web has a critical security vulnerability. I edit the NuGet packages and update to version 3.8.3.

The app works and I can sign in, but it is necessary to take a close look at the app registration. By default my app allows anyone with any Entra ID or personal Microsoft account to sign in. I feel this is unlikely to be what many devs intend, and that the default should be more restricted. What you have to do (if this is not what you want) is to head to the Azure portal, Entra ID, App registrations, find your app, and edit the manifest. I edited the signInAudience from AzureADandPersonalMicrosoftAccount to AzureADMyOrg:
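In the manifest this is a one-line change (fragment only; the rest of the manifest is unchanged):

"signInAudience": "AzureADMyOrg"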

noting that Microsoft has not been able to eliminate AzureAD from its code despite the probably misguided rename to Entra ID.

However my application has no concept of restriction by security group. I’ve added a group called AccessITWritingApp and made myself a member, but getting the app to read this turns out to be awkward. There are a couple of things to be done.

First, while we are in the App Registration, find the bit that says Token Configuration and click Edit Groups Claim. This will instruct Azure to send group membership with the access token so our app can read it. Here we have a difficult decision.

If we choose all security groups, this will send all of the user’s groups with the token, including groups of which the user is only a nested member (a group within a group) – but only up to a limit of somewhere between 6 and 200 groups, depending on the token type and flow. If we choose Groups assigned to the application, this can limit the claim to just AccessITWritingApp, but it will only work for direct members. By the way, you will have to assign the group to the app in Enterprise applications in the Azure portal, but the app might not appear there. You can get to it though via the Overview in the App registration and clicking the link for Managed application in local directory. Why two sections for app registrations? Why is the app both in and not in Enterprise applications? I am sure it makes sense to someone.

In the enterprise application you can click Assign users and groups and add the AccessITWritingApp group – though only if you have a premium “Active Directory plan level”, meaning a premium Entra ID plan. There is some confusion about this.

You can assign App Roles to a user of the application with a standard (P1) Entra ID subscription. Information on using App Roles rather than groups, or alongside them, is here. Though note:

“Currently, if you add a service principal to a group, and then assign an app role to that group, Microsoft Entra ID doesn’t add the roles claim to tokens it issues.”

Note that assigning a group or a user here will not, by default, either allow or prevent access for other users. It does link the user or group with the application and make it visible to them. If you want to restrict access to assigned users and groups, you can do so by checking the Assignment required option in the enterprise application properties. That is not super clear either. Read up on the details here and note once again that nested group memberships are not supported “at this time”, which is a bit rubbish.

OK, so we are going down the groups route. What I want to do is to use ASP.NET Core role-based authorization. I create a new Razor page called SecurePage and at the top of the code-behind class I stick this attribute:

[Authorize(Roles = "AccessITWritingApp,[yourGroupIDhere]")]
public class SecurePageModel : PageModel

Notice I am using the group ID alongside the group name, as the ID is what actually arrives in the token.

Now I run the app, I can sign in, but when I try to access SecurePage I get Access Denied.

We have to make some changes for this to work. First, add a Groups section to appsettings.json like this:

"Groups": {
"AccessItWritingApp": "[yourGroupIDhere]"
},

Next, in Program.cs, find the bit that says:

// Add services to the container.
builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"));

and change it to:

// Add services to the container.
builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(options =>
    {
        // Ensure default token validation is carried out
        builder.Configuration.Bind("AzureAd", options);

        // The following line instructs the ASP.NET Core middleware to use the data in the "groups" claim
        // for the [Authorize] attribute, policy.RequireRole() and User.IsInRole().
        // See https://docs.microsoft.com/aspnet/core/security/authorization/roles for more info.
        options.TokenValidationParameters.RoleClaimType = "groups";

        options.Events.OnTokenValidated = async context =>
        {
            if (context != null)
            {
                List<string> requiredGroupsIds = builder.Configuration.GetSection("Groups")
                    .AsEnumerable().Select(x => x.Value).Where(x => x != null).ToList();
                // Calls a method to process the groups overage claim (before policy checks kick in)
                //await GraphHelper.ProcessAnyGroupsOverage(context, requiredGroupsIds, cacheSettings);
            }
            await Task.CompletedTask;
        };
    });

Run the app, and now I can access SecurePage:
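As an aside, hard-coding the group ID in the [Authorize] attribute duplicates the value already stored in the Groups section of appsettings.json. A hypothetical alternative, sketched below using the policy.RequireRole() approach mentioned in the sample’s comments (the policy name "RequireAccessGroup" is my own invention), is to register a policy in Program.cs:

// Hypothetical sketch: build an authorization policy from the group ID in configuration,
// so the ID is not repeated in the [Authorize] attribute.
builder.Services.AddAuthorization(options =>
{
    var groupId = builder.Configuration["Groups:AccessITWritingApp"];
    options.AddPolicy("RequireAccessGroup", policy =>
        policy.RequireRole(groupId!)); // works because RoleClaimType is mapped to "groups" above
});

and then decorate the page with [Authorize(Policy = "RequireAccessGroup")] instead of listing the ID directly.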

There are a few things to add though. Note I have commented out a call to GraphHelper. GraphHelper is custom code in this sample https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/ and specifically the one in 5-WebApp-AuthZ/5-2-Groups. I do not think I could have got this working without this sample.

The sample does something clever though. If the token does not supply all the groups of which the user is a member, it calls a method called ProcessAnyGroupsOverage, which eventually calls graphClient.Me.GetMemberGroups to retrieve all the groups of which the user is a member. As far as I can tell this does include membership via nested groups, though note there is a limit of 11,000 results.
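For reference, the heart of that call looks something like this – a rough sketch only, based on the Graph SDK v4-style fluent API used in the sample (method names differ in Graph SDK v5), and assuming a GraphServiceClient authenticated as the signed-in user with a permission such as GroupMember.Read.All:

// Rough sketch: fetch the IDs of all groups the user belongs to, including nested membership
static async Task<bool> IsMemberOfGroupAsync(GraphServiceClient graphClient, string requiredGroupId)
{
    var memberGroups = await graphClient.Me
        .GetMemberGroups(true)   // true = security-enabled groups only
        .Request()
        .PostAsync();            // returns group IDs, not display names

    // For users in very many groups, follow memberGroups.NextPageRequest as well
    return memberGroups.CurrentPage.Contains(requiredGroupId);
}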

Note that in the above I have not described how to install the GraphClient as there are a few complications, mainly regarding permissions.

It is all rather gnarly and I was surprised that years after I coded the first version of this application there is still no simple method such as graphClient.isMemberOf() that discovers if a user is a member of a specific group; or a simple way of doing this that supports nested groups which are often easier to manage than direct membership.

Further it is disappointing to get errors with Visual Studio templates that one would have thought are commonly used.

And another time perhaps I will describe the issues I had deploying the application to Azure App Service – yes, more errors despite a very simple application and using the latest Visual Studio wizard.

Garmin Connect+: new subscription will be a hard sell

Garmin, maker of sports watches which gather health and performance data on your activities, has announced Connect+, a subscription offering with “premium features and more personalised insights.”

Garmin Connect+

Garmin Connect is the cloud-based application that stores and manages user data, such as the route, pace and heart rate, on runs, cycle rides and other workouts, as well as providing a user interface which lets you browse and analyse this data. The mobile app is a slightly cut-down version of the web app. Until now, this service has been free to all customers of Garmin wearable devices.

The company stated that Garmin Connect+ is a “premium plan that provides new features and even more personalized insights … with Active Intelligence insights powered by AI.” It also promised customers that “all existing features and data in Garmin Connect will remain free.” The subscription costs $6.99 per month or $69.99 per year. The UK price is £6.99 per month or £69.99 per year, which is a bit more expensive.

The reaction from Garmin’s considerable community has been largely negative. The Garmin forum on Reddit, which has over 266,000 members, is full of complaints, not only because the subscription is considered poor value, but also from fear that, despite the company’s reassurance, the free Garmin Connect service will get worse, perhaps becoming ad-laden or just less useful as all the investment in improvements is switched to the premium version.

On the official Garmin forums an initial thread filled quickly with complaints and was locked; and a new thread is going in the same direction. For example:

“I paid £800 for my Descent Mk2s with the understanding that there WAS NO SUBSCRIPTION and the high cost of my device subsidised the Connect platform. The mere existence of the paid platform is a clear sign that all/most new features will go to the paid version and the base platform will get nothing. You’ve broken all trust here Garmin, I was waiting for the next Descent to upgrade but I will look elsewhere now.”

A few observations:

  1. Companies love subscriptions because they give a near-guaranteed and continuous revenue stream.
  2. The subscription model combined with hardware can have a strange and generally negative impact on the customer, with the obvious example being printers where selling ink has proved more profitable than selling printers, to the point where some printers are designed with deliberately small-capacity cartridges and sold cheaply; the sale of the hardware can also be seen as the purchase of an income stream from ink sales.
  3. A Garmin wearable is a cloud-connected device and is inconvenient to use without the cloud service behind it. For example, I am a runner with a Garmin watch; when I add a training schedule I do so in the Connect web application, which then syncs with the watch so that while I am training the watch tells me how I am doing, too fast, too slow, heart rate higher than planned, and so on. That service costs money to provide so it may seem reasonable for Garmin to charge for it.
  4. The counter-argument is that customers have purchased Garmin devices, which are more expensive than similar hardware from other vendors, in part on the basis that they include a high quality cloud service for no additional cost. Such customers now feel let down.
  5. We need to think about how the subscription changes the incentives for the company. The business model until now has included the idea that more expensive watches light up different data-driven features. Sometimes these features depend on hardware sensors that only exist in the premium devices, but sometimes it is just that the device operating system is deliberately crippled on the cheaper models. Adding the subscription element to the mix gives Garmin an incentive to improve the premium cloud service to add features, rather than improving the hardware and on-device software.
  6. It follows from this that owners of the cheapest Garmin watches will get the least value from the subscription, because their hardware does not support as many features. Will the company now aim to sell watches with hitherto premium features more cheaply, to improve the value of the subscription? Or will it be more concerned to preserve the premium features of its more expensive devices to justify their higher price?
  7. It was predictable that breaking this news would be difficult: it is informing customers that a service that was previously completely free will now have a freemium model. The promise that existing free features would remain free has done little to reassure users, who assume either that this promise will not be kept, or that the free version will become gradually worse in comparison with the paid option. Could the company have handled this better? More engagement with users would perhaps help.

Finally, it seems to me that Connect+ will be a hard sell, for two reasons. First, Strava has already largely captured the social connection aspect of this type of service, and many Garmin users primarily use Strava as a result. Remarkably, even the free Strava is ad-free (other than for prompts to subscribe) and quite feature-rich. Few will want to subscribe both to Strava and Connect+, and Strava is likely to win this one.

Second, the AI aspect (which is expensive for the provider) has yet to prove its worth. From what I have seen, Strava’s Athlete Intelligence mostly provides banal feedback that offers no in-depth insight.

While one understands the reasons driving Garmin towards a subscription model, it has given the company a tricky path to navigate.

Changing my mind about open ear earphones

I have become a fan of bone conduction earphones. Initially this was because they are great for running since they let you hear everything going on around you which is important for safety. I also came to realise that pushing earbuds into your ear to form a seal is not the best thing for comfort, even though it can deliver excellent sound quality. Bone conduction earphones sound OK but not great, but I found myself willing to sacrifice audio quality for these other characteristics.

That said, bone conduction earphones do have some problems. In particular, if you attempt to wind up the volume you get an unpleasant physical vibration, especially on tracks that have extended bass.

There is another option, which I have seen described as air conduction or open ear. In this design, the sound driver sits adjacent to your ear canal. I tried one of these a couple of years ago and found the audio unbearably tinny. Unfortunately I concluded that this was inherent to this type of earphone and dismissed them.

Recently I was able to review another pair of open ear earphones which has changed my mind. The actual product is a Baseus Bowie MF1 though I do not think it is extra special in itself; however it is pretty good and the sound is excellent, better I think than my usual bone conduction earphones and without any vibration issues.

I notice that market leader Shokz has cottoned onto this and the Openrun Pro 2 (at a much higher price than the Baseus) has dual drivers, with the low bass handled by air conduction, again avoiding the vibration problem.

The more I think about it, the more I like the open ear or air conduction idea. No fuss about ear sleeve size or needing a perfect seal; no discomfort from jamming something tightly into your ear; and, I now realise, very acceptable sound quality.

Facebook vs Reddit as discussion forums: why is Facebook so poor?

Facebook’s user interface for discussions is terrible. Here are some of the top annoyances for me:

  • Slow. Quite often I get those blank rectangles which seem to be a React thing when content is pending.
  • UI shift. When you go to a post it shows the number of comments with some algorithmically selected comments below the post. When you click on “View more answers” or the comments link, the UI changes to show the comments in a new panel.
  • Difficult navigation. Everything defaults to the algorithm’s idea of what it calculates you want to see (or what Facebook wants you to see). So we get “Most relevant” and “Top comments.” I always want to see all the comments (spam aside) with the most recent comment threads at the top. To get something approaching that view I have to click View more answers first, and then use the “Top comments” drop-down to select one of the other options.
  • Even “All comments” does not show all comments, but only the top level without the replies.

Facebook is also a horrible experience for me thanks to the news feed concept, which pushes all manner of things at me which I do not want to see or which waste time. I have learned that the only way I can get a sane experience is to ignore the news feed and click the search icon at top left, then I get a list of groups or pages I have visited showing which have new posts.

Do not use Facebook then? The problem is that if the content one wants to see is only on Facebook it presents a bad choice: use Facebook, or do not see the content.

Reddit by contrast is pretty good. You can navigate directly to a subreddit. Tabs for “hot” and “new” work as advertised and you can go directly to “new” via a logical URL, for example:

https://www.reddit.com/r/running/new/

Selecting a post shows the post with comments below and includes comment threads (replies to comments) and the threads can be expanded or hidden with +/- links.

The site is not ad-laden and the user experience is generally pleasant. The way a subreddit is moderated makes a big difference of course.

The above is why, I presume, reddit is the best destination for many topics including running, a current interest of mine.

Why is Facebook so poor in this respect? I do not know whether it is accident or design, but the more I think about it, the more I suspect it is design. Facebook is designed to distract you, to show you ads, and to keep you flitting between topics. These characteristics prevent it from being any use for discussions.

If I view the HTML for a reddit page I also notice that it is more human-readable, and clicking a random topic I see in the Network tab of the Safari debugger that 30.7 KB was transferred in 767ms.

Navigate back to Facebook and I see 6.96 MB transferred in 1.52s.

These figures will of course vary according to the page you are viewing, the size of the comment thread, the quality of your connection, and so on. Reddit though is much quicker and more responsive for me.

Of course I am on “Old reddit.” New reddit, the revamped user interface (dating from 2017!) that you get by default if not logged in with an account that has opted for old reddit, is bigger and slower, with no discernible advantages. Even new reddit though is smaller and faster than Facebook.

Tip: If you are on new reddit you can get the superior old version from https://old.reddit.com/

Fixing a .NET P/Invoke issue on Apple Silicon – including a tangle with Xcode

I have a bridge platform (yes, the game) written in C# which I am gradually improving. Like many bridge applications it makes use of the open source double dummy solver (DDS) by Bo Haglund and Soren Hein.

My own project started out on Windows and is deployed to Linux (on Azure) but I now develop it mostly on a Mac with Visual Studio Code. The DDS library is cross-platform and I have compiled it for Windows, Linux and Mac – I had some issues with a dependency, described here, which taught me a lot about the Linux app service on Azure, among other things.

Unfortunately though the library has never worked with C# on the Mac – until today, that is. I could compile it successfully with Xcode, and it worked with its own test application dtest, but not from C#. This weekend I decided to investigate and see if I could fix it.

I have an Xcode project which includes both dtest and the DDS library, which is configured as a dynamic library. What I wanted to do was to debug the C++ code from my .NET application. For this purpose I did not use the ASP.NET bridge platform but a simple command-line wrapper for DDS which I wrote some time back as a utility. It uses the same .NET wrapper DLL for DDS as the bridge platform. The problem I had was that when the application called a function from the DDS native library, it printed: Memory::GetPtr 0 vs. 0 and then quit.

The error from my .NET wrapper

I am not all that familiar with Xcode and do not often code in C++ so debugging this was a bit of an adventure. In Xcode, I went to Product – Scheme – Edit Scheme, checked Debug executable under Info, and then selected the .NET application which is called ddscs.

Adding the .NET application as the executable for debugging.

I also had to add an argument under Arguments passed on Launch, so that my application would exercise the library.

Then I could go to Product – Run and success, I could step through the C++ code called by my .NET application. I could see that the marshalling was working fine.

Stepping through the C++ code in Xcode

Now I could see where my error message came from:

The source of my error message.

So how to fix it? The problem was that DDS sets how much memory it allows itself to use and it was set to zero. I looked at the dtest application and noticed this line of code:

SetResources(options.memoryMB, options.numThreads);

This is closely related to another DDS function called SetMaxThreads. I looked at the docs for DDS and found this remark:

The number of threads is automatically configured by DDS on Windows, taking into account the number of processor cores and available memory. The number of threads can be influenced by calling SetMaxThreads. This function should probably always be called on Linux/Mac, with a zero argument for auto-configuration.

“Probably” huh! So I added this to my C# wrapper, using something I had not used before: a static constructor. It just called SetMaxThreads(0) via P/Invoke.
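For what it is worth, the shape of the fix is something like the following – a minimal sketch, in which the library name "dds" and the class name are placeholders rather than the actual names in my wrapper:

using System.Runtime.InteropServices;

// Minimal sketch of the fix. "dds" stands in for the native library name
// (dds.dll / libdds.so / libdds.dylib depending on platform).
internal static class DdsNative
{
    [DllImport("dds")]
    private static extern void SetMaxThreads(int userThreads);

    // Static constructor: runs once, before the first use of the class,
    // so DDS configures its thread count and memory before any other call
    static DdsNative()
    {
        SetMaxThreads(0); // 0 = let DDS auto-configure
    }
}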

Everything started working. Like so many programming issues, simple when one has figured out the problem!

Amazon Linux 2023: designed to be disposable

Amazon Linux 2023 is the default for Linux VMs on AWS EC2 (Elastic Compute Cloud). Should you use it? It is a DevOps choice; the main reason you might use it is that it feels like playing safe. AWS support will understand it; it should be performance-optimised for EC2; and it should work smoothly with AWS services.

Amazon Linux 2 was released in June 2018 and was the latest production version until March 2023, by which time it was very out of date. Based on CentOS 7, it was pretty standard and you could easily use additional repositories such as EPEL (Extra Packages for Enterprise Linux). It is easy to keep up to date with sudo yum update. However, there is no in-place upgrade to Amazon Linux 2023.

Amazon Linux 2023 is different in character. It was released in March 2023 and the idea is to have a major release every 2 years, and to support each release for 5 years. It does not support EPEL or repositories other than Amazon’s own. The docs say:

At this time, there are no additional repositories that can be added to AL2023. This might change in the future.

The docs do, however, also describe how to add an external repository, so it is a bit confusing. You can also compile your own RPMs and install them that way; but if you do, keeping them up to date is down to you.

The key to why this is so lies in what AWS calls deterministic upgrades. Each version, including minor versions, is locked to a specific repository. You can upgrade to a new release, but it has to be specified explicitly. This is what I got today from my installation on Hyper-V:

Amazon Linux 2023 offering a new release

The command dnf check-release-update looks for a new release and tells you how to upgrade to it, but does not do so by default.

The reason, the docs explain, is that:

With AL2023, you can ensure consistency between package versions and updates across your environment. You can also ensure consistency for multiple instances of the same Amazon Machine Image (AMI). With the deterministic upgrades through versioned repositories feature, which is turned on by default, you can apply updates based on a schedule that meets your specific needs.

The idea is that if you have a fleet of Amazon Linux 2023 instances they all work the same. This is ideal for automation. The environment is predictable.

It is not ideal though if you have, say, one server, or a few servers doing different things, and you want to run them for a long time and keep them up to date. This will work, but the operating system is designed to be disposable. In fact, the docs say:

To apply both security and bug fixes to an AL2023 instance, update the DNF configuration. Alternatively, launch a newer AL2023 instance.

The bolding of that last sentence is mine; but if you have automation so that a new instance can be fired up configured as you want it, launching a new instance is just as logical as updating an existing one, and arguably safer.

Amazon Linux 2023 on Hyper-V

Amazon Linux 2023 came out in March 2023, somewhat late given that it was originally called Amazon Linux 2022. It took even longer to provide images for running it outside AWS, but these did eventually arrive – though only for VMware and KVM, even though old Amazon Linux 2 does have a Hyper-V image.

Update: Hyper-V is now officially supported making this post obsolete but it may be of interest!

I wanted to try out AL 2023 and it makes sense to do that locally rather than spend money on EC2; but my server runs Windows Hyper-V. Migrating images between hypervisors is nothing new so I gave it a try.

  • I used the KVM image here (or the version that was available at the time).
  • I used the qemu-img disk image utility to convert the .qcow2 KVM disk image to .vhdx format. I installed qemu-img by installing QEMU for Windows but not enabling the hypervisor itself.
  • I used the seed.iso technique to initialise the VM with an ssh key and a user with sudo rights. I found it helpful to consult the cloud-init documentation linked from that page for this.
  • In Hyper-V I created a new Generation 1 VM with 4GB RAM and set it to boot from the converted drive, plus seed.iso in the virtual DVD drive. Started it up and it worked.
Amazon Linux 2023 running on Hyper-V

I guess I should add the warning that installing on Hyper-V is not supported by AWS; on the other hand, installing locally has official limitations anyway. Even if you install on KVM, the notes state that the KVM guest agent is not packaged or supported, VM hibernation is not supported, VM migration is not supported, passthrough of any device is not supported, and so on.

What about the Hyper-V integration drivers? Note that “Linux Integration Services has been added to the Linux kernel and is updated for new releases.” Running lsmod shows that the essentials are there:

The Hyper-V modules are in the kernel in Amazon Linux 2023

Networking worked for me without resorting to a legacy network card emulation.

This exercise also taught me about the different philosophy in Amazon Linux 2023 versus Amazon Linux 2. That will be the subject of another post.

New Outlook confusion as connection to Exchange Online or Business Basic mailboxes blocked “due to the license provided by your work or school”

What Microsoft gives with one hand, it removes with the other; or so it seemed for users of paid Exchange Online accounts when the company said that “for years, Windows has offered the Mail and Calendar apps for all to use. Now Windows is bringing innovative features and configurations of the Microsoft Outlook app and Outlook.com to all consumers using Windows – at no extra cost, with more to come”.

That post in September 2023 does not mention a significant difference that was introduced with this new Outlook. It is all to do with licensing. Historically, Outlook was always the email client for Exchange, and this is now true for Exchange Online, the email component of Microsoft 365. Microsoft’s various 365 plans for business are differentiated in part by whether or not users purchase a subscription to the desktop Office applications. Presuming though that the user had some sort of license for Outlook, whether from a 365 plan, or from a standalone purchase of Office, they could add their Exchange Online email account to Outlook, even if that particular account was part of a plan that did not include desktop Outlook.

Some executive at Microsoft must have thought about this and decided that with Outlook becoming free for everyone, this would not do. Therefore a special check was added to Outlook: if an account is a business account that does not come with a desktop license for Outlook, block it. The consequence was that users upgrading or trying to add such an account saw the message:

“This account is not supported in Outlook for Windows due to the license provided by your work or school. Try to login with another account or go to Outlook on the web.”

The official solution was to upgrade those accounts to one that includes desktop Outlook. That means at least Microsoft 365 Business Standard at $12.50 per month. By contrast, Microsoft 365 Business Basic is $6.00 per month and Exchange Online Plan 1 just $4.00 per month.

Just occasionally Microsoft makes arbitrary and shockingly bad decisions and this was one of them. What was wrong with it? A few things:

  • Administrators of 365 business tenancies were given no warning of the change
  • Exchange Online is supposedly still an email server. Email is an internet standard – though there are already standards issues with Exchange Online such as the requirement for OAuth authentication and SMTP disabled by default. See Mozilla’s support note for Thunderbird, for example. However, Exchange Online accounts still worked with other mail clients such as Apple Mail and eM Client; only Outlook now added this licensing requirement.
  • The new Outlook connected OK to free accounts such as Microsoft’s Outlook.com and to other email services. It was bewildering that a Microsoft email client would connect fine to other services both free and paid, but not to Microsoft’s own paid email service.
  • The description of the Exchange Online service states that “Integration with Outlook means they’ll enjoy a rich, familiar email experience with offline access.” This functionality was removed, meaning a significant downgrade of the service without notification or price reduction.
  • Some organisations have large numbers of Exchange Online accounts – expecting them suddenly to change all the plans to another costing triple the amount, to retain functionality they had before, is not reasonable.
The product description for Exchange Online highlights Outlook integration as one of its features

Users did the only thing they could do in these circumstances and made a public fuss. This long and confusing thread was the result, with comments such as:

The takeaway is: You can no longer add a mail account in the new Outlook if said mail account doesn’t come with its OWN Outlook (apps) license. This is ridiculous beyond understanding. Unacceptable to the point that if they don’t fix this, I’ll cancel BOTH Exchange licenses and move over to Google Business with my domains.

There was also a well reasoned post in Microsoft Feedback observing, among other things, that “At no point is Business Basic singled out as a web-only product in any of the Microsoft Terms or Licensing documents.” 

The somewhat good news is that Microsoft has backtracked, a bit. This month, over 4 months after the problem appeared, the company posted its statement on “How licensing works for work and school accounts in the new Outlook for Windows.” The company now says that there will be a “capability change in the new Outlook for Windows”, rolled out from the start of this month, following which a licensed version of Outlook will work with Exchange Online, Business Basic and similar accounts, provided that an account with a desktop license is set as the primary account. This includes consumer accounts:

“If you have a Business Standard account (which includes a license for desktop apps) added as your primary account, that license will apply, and you can now add any secondary email accounts regardless of licensing status (e.g. Business Basic). This also applies to personal accounts with a Microsoft 365 Personal or Family, as these plans include the license rights to the Microsoft 365 applications for desktop. Once one of these accounts is set as the primary account, you can add Business Basic, E1 or similar accounts as secondary accounts.”

This is a substantial improvement and removes most but not all of the sting of these changes.

What is an operating system for? A friend’s Windows 11 rant shows disconnect between vendors and users

What is an operating system? The traditional definition is something like, the system software that manages computer hardware and provides services for applications.

This definition does not describe what you get though when you install an “operating system” such as macOS, Windows, Android or ChromeOS – or more likely, receive hardware with it pre-installed. What you get is an operating system (OS) plus a ton of stuff that can only be described as applications. In practice, the reach of what we call an operating system has extended over the years. Even in the early days, an OS would come with utilities, including a command line, a command line editor, perhaps a C compiler, file management tools and so on. Then there was a change when pre-installed graphical user interfaces arrived. Windows came with Notepad, Calculator, Write and Paint.

What is a commercial operating system today? We can add to the traditional definition at least the following:

  • A vehicle for advertising
  • A means of lock-in
  • A vehicle for data collection

On Windows, advertising is everything from the pre-installed trials, to the nagging to upgrade OneDrive, to the mysterious appearance of Candy Crush on the Start menu.

The lock-in comes via the ecosystem. Apple is worse than Windows for this in that more of its applications work only on Apple operating systems. On Windows though Microsoft hardly has to bother since a huge legacy of Windows-only applications keeps users from changing, especially in business.

Data collection is via near-enforced login and telemetry. An Apple ID is not required for macOS but it is strongly encouraged and necessary for the App Store. A Microsoft or Entra ID account is not required to use Windows, but the setup points you strongly in that direction.

Is any of this good for the user? A friend is disappointed with Windows 11 – mainly because it is less familiar than Windows 10. His central points are that Microsoft makes irritating changes that disrespect the learning users have invested in Windows, and has left behind the notion of the operating system as a blank canvas waiting for applications to make it useful.

Personally I put up with Windows 11; it is not that different, though there are a few things that I particularly dislike:

  • The taskbar icons in the centre. I routinely move them to the left. Settings – Personalisation – Taskbar – Taskbar behaviors – Taskbar alignment, no registry editing required. This single change makes Windows 11 feel much more familiar, and it is better since left-aligned icons are easier to target.
  • The Start menu. This was great in Windows 95 and improved up until Windows 7. Windows 8 replaced it for … reasons. Windows 10 reinvented it but badly. I have trained myself always to click All apps as a second step after clicking Start. Click on a letter for the letter menu, select a letter, start the app. It works reliably, unlike Search which is a usability disaster when what you want is to start an application.
  • The File Explorer. You right click a file, and instead of a single menu of options, there are three sets of options, one in a row of icons, one in a mysterious subset of options, and one under Show more options. A poor user interface for a common task.

There are other things, of course. I always turn off the distracting Widgets on the taskbar. I always show as many of the “additional System tray icons” as I can, with the exception of consumer Teams. I always open Edge, reflect on the cheap ugly mess that is the default home page, and set about disabling it.

These annoyances are mainly design errors by Microsoft rather than a direct consequence of the changing role of the operating system; yet they would be impossible without that change.

Imagine for a moment if Windows were optimised for installing and running applications. Oddly, Windows 8 (which most hated for more or less the same reasons my friend cites for disliking Windows 11) did have that vision. Install from the Store, with clean setup and easy removal. Run full-screen with no distractions. Before you say it, yes there were issues, the UI was not good enough, the apps were not there, we missed multiple overlapping windows, and more. There was a good concept in there though.

Windows 11 rant: “I replaced Win 11 with Win 10. It was like walking back into my house”

A friend purchased a Windows 11 laptop and this was his reaction, slightly edited. It caused me some reflection on what is an operating system, which I have posted separately. I also note: Windows 10 goes out of support in October 2025.


“I recently bought a Win 11 laptop. I was stunned. I must apologise for what follows, but it actually made me quite angry to realise that the Chief Product Manager at Microsoft clearly has NO understanding of ‘opportunity costs’, thus wasting millions of our ‘person-hours’ worldwide.

“For many years I worked in health research, where we realised a decade or two ago that something doesn’t just have to give better results to be worth implementing. It’s got to be SUFFICIENTLY better to offset the cost of implementing the change. If you start something new that ‘works better’ but in doing so fail to consider the additional costs involved in everyone changing how they do things, professional and patient, to not just know but understand how & why the new thing is better, it is very easy to end up with everything working worse than before. NEW must be > (OLD + Opportunity Costs). Ideally a lot greater, if you want to bring people with you. This isn’t rocket science, not anymore.

“I get that most IT correspondents are professionals used to having to plough through new Operating Manuals (pdf, sure) every two years, but out here in Userland I am far too busy doing interesting stuff with my computer & applications. Over a few years I learnt where the main knobs & levers of Win 10 are. And haven’t thought about it since. So, for Microsoft to carelessly move everything, just because they believe the new setup will be quicker/easier/more efficient for me is not only staggeringly rude, but stupid.

“Consider: It probably only took me a few hundred hours of use of Win 10 to learn where all the OS stuff was to the point where it was automatic. Since then the OS stuff has usually required no conscious input at all, like riding a bike. Some things might not be easy to find, but once you know, you know. Then along comes Win 11, and all this stuff is a pain in the arse again, nothing is where it used to be. So I don’t CARE if, in theory, the new arrangements are easier to use ONCE YOU KNOW THEM, my point is, why should I, and (hundreds of) millions of other Windows users, have to re-learn all that sh*t?

“IT’S JUST AN OPERATING SYSTEM! (Can someone at Microsoft put up posters?)

“I’m not interested in it! It’s the environment in which the things I AM interested in – applications – video editors, DAWs, office apps etc.- live. Don’t f*ck with it. How would you feel if suddenly you had to learn to speak & walk again, just because someone thought they knew a better way to do these things?

“And consider the hundreds of hundreds of millions of person hours you are WASTING, as we have to re-learn where things are? Double-click when before we had to single click. Settings moved somewhere completely different. Even where on the screen to look: Does Microsoft not employ a single behavioural psychologist who could tell them how much time (and attention) is wasted when you move something that was always bottom left to top middle?

“And then, the final straw: I found that most of these maddening ‘I’m bored, let’s change grass from green to blue’ ‘improvements’ can be reversed, just by editing the registry. It was only on my fifth edit that I realised what was going on. The old ways of doing things, that I’d invested serious time in learning about to the point where they were automatic, were STILL THERE! It’s just that someone at Microsoft couldn’t even raise their eyes from Tiktok (or whatever was distracting them) to add a few lines of code to make the previous ways of operating accessible via a menu. Remember them? You put the user in charge? Of their own computer? The very thought…

“At that point I realised that Microsoft’s institutional memory had, ironically, forgotten why Bill Gates got so rich in the first place. Let’s recall – IBM agreed to let him licence rather than sell his OS for their new, pathetically under-powered ‘Personal Computer’, because they thought it would be a small market. I mean, who would want to use a desktop PC, when they could use a terminal to access a mainframe with a brain the size of a planet (sorry, Doug)? History tells us they then discovered, too late, that the Mk.1 Human Being prefers under-powered personal computers over high-powered mainframes, for the same reason we all prefer living in small chaotic houses to living in large, well-organised institutions.

“So I replaced Win 11 with Win 10. It was like walking back into my house. Subsequently, in a typical working day I no longer had to expend any further conscious thought on operating the Operating System – because I learnt how to do that years ago. And then got back to the interesting stuff.”
