
Windows Mixed Reality: Acer headset review and Microsoft’s (lack of) content problem

Acer kindly loaned me a Windows Mixed Reality headset to review, which I have been trying over the holiday period.

First, an aside. I had a couple of sessions with Windows Mixed Reality before doing this review. One was at IFA in Berlin at the end of August 2017, where the hardware and especially the software were described as a late preview. The second was at the Future Decoded event in London in early November. On both occasions, I was guided through a session either by the hardware vendor or by Microsoft. Those sessions were useful for getting hands-on experience; but an extended review at home has given me a different understanding of the strengths and weaknesses of the product. Readers beware: those rushed “reviews” based on hands-on sessions at vendor events are poor guides to what a product is really like.

A second observation: I wandered into a few computer game shops before Christmas and Windows Mixed Reality hardware was nowhere to be seen. That is partly because PC gaming now has hardly any bricks and mortar presence. Retailers focus on console gaming, where there is still some money to be made before all the software becomes download-only. PC game sales are now mainly Steam-powered, with a little competition from other download stores including GOG and Microsoft’s Windows Store. That Steam and download dominance has many implications, one of which is invisibility on the High Street.

What about those people (and there must be some) who did unwrap a Windows Mixed Reality headset on Christmas morning? Well, unless they knew exactly what they were getting and enjoy being on the bleeding edge I’m guessing they will have been a little perplexed and disappointed. The problem is not the hardware, nor even Microsoft’s implementation of virtual reality. The problem is the lack of great games (or other virtual reality experiences).

This may improve, provided Microsoft sustains enough momentum to make Windows Mixed Reality worth supporting. The key here is the relationship with Steam. Microsoft cheerfully told the press that Steam VR is supported. The reality is that Steam VR support comes via preview software which you get via Steam and which states that it “is not complete and may or may not change further.” It will probably all be fine eventually, but that is not reassuring for early adopters.


My experience so far is that native Windows MR apps (from the Microsoft Store) work more smoothly, but the best content is on Steam VR. The current Steam preview does work, though with a few limitations (no haptic feedback) and other issues depending on how much effort the game developers have put into supporting Windows MR.

I tried Windows MR on a well-specified gaming PC: Core i7 with NVIDIA’s superb GTX 1080 GPU. Games in general run super smoothly on this hardware.

Getting started

A Windows Mixed Reality headset has a wired connection to a PC, broken out into an HDMI and a USB 3.0 connection. You need Windows 10 Fall Creators Update installed, and setup should be a matter of plugging in your headset, whereupon the hardware is detected and a setup wizard starts, downloading additional software as required.


In my case it did not go well. Setup started OK but went into a spin, giving me a corrupt screen and never completing. The problem, it turned out, was that my GPU has only one HDMI port, which I was already using for the main display. I had the headset plugged into a DisplayPort socket via an adapter. I switched this around, so that the headset uses the real HDMI port, and the display uses the adapter. Everything then worked perfectly.

The controllers use Bluetooth. I was wary, because in my previous demos the controllers had been problematic, dropping their connection from time to time, but these work fine.


They are perhaps a bit bulky, thanks to their illuminated rings which are presumably a key part of the tracking system. They also chew batteries.

The Acer headsets are slightly cheaper than average, but I’ve enjoyed my time with this one. I wear glasses but the headset fits comfortably over them.

A big selling point of the Windows system is that no external tracking sensors are required. This is called inside-out tracking. It is a great feature and makes it easier just to plug in and go. That said, you have to choose between a stationary position, or free movement; and if you choose free movement, you have to set up a virtual boundary so that you do not walk into physical objects while immersed in a VR experience.


The boundary is an important feature but also illustrates an inherent issue with full VR immersion: you really are isolated from the real world. Motion sickness and disorientation can also be a problem, the reason being that the images your brain perceives do not match the physical movement your body feels.

Once set up, you are in Microsoft’s virtual house, which serves as a kind of customizable Start menu for your VR experiences.


The house is OK though it seems to me over-elaborate for its function, which is to launch games and apps.

I must state at this point that yes, a virtual reality experience is amazing and a new kind of computing. The ability to look all around is extraordinary when you first encounter it, and adds a level of realism which you cannot otherwise achieve. That said, there is some frustration when you discover that the virtual world is not really as extensive as it first appears, just as you get in an adventure game when you find that not all doors open and there are invisible barriers everywhere. I am pretty sure though that a must-have VR game will come along at some point and drive many new sales – though not necessarily for Windows Mixed Reality of course.

I looked for content in the Windows Store. It is slim pickings. There’s Minecraft, which is stunning in VR, until you realise that the controls do not work quite so well as they do in the conventional version. There is Space Pirate, an old-school arcade game which is a lot of fun. There is Arizona Sunshine, which is fine if you like shooting zombies.

I headed over to Steam. The way this works is that you install the Steam app, then launch Windows Mixed Reality, then launch a VR game from your Steam library. You can access the Windows Desktop from within the Windows MR world, though it is not much fun. Although the VR headset offers two 1440 x 1440 displays I found it impossible to keep everything in sharp focus all the time. This does not matter all that much in the context of a VR game or experience, but makes the desktop and desktop applications difficult to use.

I did find lots of goodies in the Steam VR store though. There is Google Earth VR, which is not marked as supporting Windows MR but works. There is also The Lab, a Steam VR demo which does a great job of showing what the platform can do, with several mini-games and other experiences – including a fab archery game called Longbow where you defend your castle from approaching hordes. You can even fire flaming arrows.

[Image: Asteroids! VR, a short, wordless VR film which is nice to watch once. It’s free though!]

Mainstream VR?

Irrespective of who provides the hardware, VR has some issues. Even with inside-out tracking, a Windows Mixed Reality setup is somewhat bulky and makes the wearer look silly. The kit will become lighter, as well as integrating audio. HTC’s Vive Pro, just announced at CES, offers built-in headphones and has a wireless option, using Intel’s WiGig technology.

Even so, there are inherent issues with a fully immersive environment. You are vulnerable in various ways. Having people around wearing earbuds and staring at a screen is bad enough, but VR takes anti-social to another level.

The added expense of creating the content is another issue, though the right tools can do an amazing job of simplifying and accelerating the process.

It is worth noting that VR has been around for a long time. Check out the history here: Virtual Reality arcade machines in 1991; Sega VR Glasses in 1993. Why has this stuff taken so long to take off, and why does it remain in its early stages? It is partly about technology catching up to the point of real usability and affordability, but there is also an open question about how much VR we want and need.

One thing that’s worse in Windows 10 Fall Creators Update: uncontrollable application auto-start

One thing I’ve noticed in Windows 10 recently is that Outlook seems to auto-start, which it never did before. In fact, this caused an error on a new desktop PC that I’m setting up, as follows:

1. Outlook has an archive PST open, which is on a drive that is connected over iSCSI

2. On reboot, Outlook auto-started and threw an error because it could not find the drive

3. In the background, the iSCSI drive reconnected, which means Outlook could have found the drive if it had waited

All very annoying. Of course I looked for the reason why Outlook was auto-starting. In Windows 10, you can control startup applications in Task Manager; but Outlook was not listed there. Nor could I find any setting or reason why it was auto-starting.

Eventually I tracked it down. It is not really Outlook auto-starting. It is a new feature in Windows 10 Fall Creators Update that automatically restarts applications which were running when Windows was last shut down. Since Outlook is pretty much always running for me, the end result is that Outlook auto-starts, with the bad result described above.

I presumed that this was a setting somewhere, but if it is, I cannot find it. This thread confirms the bad news (quote is from Jason, a Microsoft support engineer):

This is actually a change in the core functionality of Windows in this development cycle.

Old behavior:
– When you shut down your PC, all apps are closed

– After reboot/restart, you have to re-open any app you’d like to use

New behavior:

– When shutting down your PC, any open apps are “bookmarked” (for lack of a better word)

– After reboot/restart, these apps will re-open automatically

If you want to start with no apps open (other than those set to auto-start via Task Manager/Start), you’ll need to ensure all apps are closed before shutting down or restarting the PC.

Why?

The desire is to create a seamless experience wherein, if you have to reboot a PC, you can pick back up quickly from where you left off and resume being productive.  This has far-ranging impacts across the OS (in a good way).

Not everyone agrees that this “far-ranging impact” is a good thing. The biggest gripe is that there is no setting to disable this behaviour if it causes problems, as in my case. Various entries in the official Windows feedback hub have been quick to attract support.

Workarounds? There are various suggestions. One is to manually close all running applications before you restart, which is an effort. Another is to use a shortcut to shut down or restart, instead of the Start menu option. If you run:

shutdown /f /s /t 0

you get a clean shutdown; or

shutdown /f /r /t 0

for a restart. Here /f forces running applications to close, /s shuts down, /r restarts, and /t 0 sets a zero-second delay.
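
For convenience, you could save the restart variant as a batch file and pin a shortcut to it. A minimal sketch (the file name restart-clean.cmd is my own choice):

@echo off
rem restart-clean.cmd: force-close all running applications, then
rem restart immediately, so Windows does not "bookmark" them for relaunch
shutdown /f /r /t 0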

As for why this behaviour was introduced without any means of controlling it, that is a mystery.

A quick look at Surface Book 2: powerful but heavy

Microsoft’s Surface range is now extensive. There is the Surface Pro (tablet with keyboard cover), the Surface Laptop (laptop with thin keyboard), and the Surface Book (detachable tablet). And the Surface Studio, an all-in-one desktop. Just announced, and on display here at Microsoft’s Future Decoded event in London, is Surface Book 2.

image

The device feels very solid and the one I saw has an impressive spec: an 8th Gen Intel Core i7 with 16GB RAM and NVIDIA GeForce GTX 1050 discrete GPU. And up to 17 hours battery life.

All good stuff; but I have a couple of reservations. One is the weight: “from 3.38 lbs (1.534 kg)”, according to the spec. By contrast, the Surface Laptop starts at 1.69 lbs (0.767 kg).

That makes the Book 2 heavy in today’s terms – exactly twice the starting weight of the Surface Laptop. I am used to ultrabook-style laptops now.

Of course you can lighten your load by just using the tablet. Will you though? I rarely see Windows convertible or detachable devices used other than like laptops, with the keyboard attached. The Surface Pro is more likely to be used like a tablet, since you can simply fold the keyboard cover back; but with the Book you either leave the keyboard at home, and put up with shorter battery life, or have it at least in your bag.

Which Azure Stack is right for you?

I went in search of Azure Stack at Microsoft’s Ignite event, and found a few systems on display in the Expo. It is now shipping, and the Lenovo guy said they had sold a dozen or so already.

Why Azure Stack? Microsoft’s point is that it lets you run exactly the same application on premises or in its public cloud. The other thing is that although you have some maintenance burden – power, cooling, replacing bits if they break – it is pretty minimal; the configuration is done for you.

I talked to one of the vendors about the impact on VMware, which dominates the market for virtualisation in the datacentre. My sense in the VMware vs Hyper-V debate is that VMware still has an edge, particularly in its management tools but Hyper-V is solid (aside from a few issues with Cluster Shared Volumes) and a lot less expensive. Azure Stack is Hyper-V of course; and the point the vendor made was that configuring an equivalent private cloud with VMware would be possible but hugely more expensive, not only in license cost but also in the skill needed to set it all up correctly.

So I think this is a smart move from Microsoft.

Why no Dell? They told me it was damaged in transit. Shame.

[Image: Lenovo]

[Image: Cisco]

[Image: HP Enterprise]

Microsoft announces Office 2019, Exchange Server 2019 and SharePoint Server 2019

This was not one of Microsoft’s most surprising announcements; but even so, it confirms that some of the company’s most significant products are to receive updates a year or so from now. The announcement was made at the SharePoint and OneDrive session at the Ignite event here in Orlando.


If you have an hour or so spare, you can view the session here.

Note that fewer people now use these on-premises products; increasing numbers of users are on Exchange Online and Office 365 instead. The cloud versions are similar but not identical, and get updates earlier than the on-premises equivalents. Still, we may well see a makeover for Office 365 at around the time Office 2019 is released.

Either way, we should not expect a radical departure from the current Office. Rather, we can expect improvements in the area of collaboration and deeper integration with cloud services.

You will also need to think about the following dialog, if you have not already (the exact wording will vary according to the context):

[Image: dialog requesting permission to send document content to Microsoft]

The deal is that you send your document content to Microsoft in order to get AI-driven features.

Microsoft Ignite: where next for Microsoft’s cloud? The Facebook of business?


Microsoft has futuristic domes as part of its Envision event, running alongside Ignite here in Orlando. Ignite is the company’s main technical event of the year, focusing mainly on IT Pros but embracing pretty much the whole spectrum of Microsoft’s products and services (maybe not much Xbox!). With the decline of the PC and retreat from mobile, and a server guy at the helm, the company’s focus has shifted towards cloud and enterprise, making Ignite all the more important.

This year sees around 25,000–30,000 attendees, according to a quick estimate from one of the PRs here; a little bigger than last year’s event in Atlanta.

Microsoft will present itself as an innovative company doing great things in the cloud but the truth is more complex, much though I respect the extent to which the business has been transformed. This is a company with a huge amount of legacy technology, designed for a previous era, and its challenge has been, and still is, how to make that a springboard for moving to a new way of working as opposed to a selling opportunity for cloud-born competitors, primarily Amazon Web Services (AWS) and Google, but also the likes of Salesforce and Dropbox.

If there is one product that has saved Microsoft, it is probably Exchange, always a solid email server and basic collaboration tool. Hosted Exchange is the heart of Office 365 (and BPOS before it), making it an easy sell to numerous businesses already equipped with Office and Outlook. Email servers are horrible things to manage, so hosted has great appeal, and it has driven huge uptake. A side-effect is that it has kept customers using Office and to some extent Windows. A further side-effect is that it has migrated businesses onto Azure Active Directory, the directory behind Exchange Online.

Alongside Office 365, the Azure cloud has matured into a credible competitor to AWS. There are still shortcomings (a few of which you can expect to be addressed by announcements here at Ignite), but it works, providing the company with the opportunity to upsell customers from users of cloud infrastructure to consumers of cloud services, such as Azure IoT, a suite of tools for gathering and analysing data.

The weakness of Microsoft’s cloud efforts has been the moving parts between hosted services and Windows PCs, and legacy pieces that do not work as you would expect. OneDrive has been a persistent annoyance, with issues over reliable document sync and limitations on things like the number of documents in a folder and the total length of a path. And where are my Exchange Public Folders, or any shared folders, in Outlook for iOS and Android? And why does a PC installation of Office now and again collapse with activation or other issues, so that the only solution is removal and reinstall?

At Ignite we will not hear of such things. Instead, Microsoft will be presenting its vision of AI-informed business collaboration. Think “Facebook of business”, powered by the “Microsoft graph”, the sum of data held on each user and their files and activity, now combined with LinkedIn. The possibilities for better-informed business activity, and systems that know what you need before you ask, are enticing. Open questions are how well it will work, and old issues of privacy and surveillance.

Such things also can only work if businesses do in fact commit more of their data to Microsoft’s cloud. The business case for this is by no means as simple as the company would have us think.

VMware Cloud on AWS: a game changer? What about Microsoft’s Azure Stack?

The biggest announcement from VMworld in Las Vegas and then Barcelona was VMware Cloud on AWS; essentially, VMware hosts on AWS servers.


A key point is that this really is VMware on AWS infrastructure; the release states “Run VMware software stack directly on metal, without nested virtualization”.

Why would you use this? Because it is hybrid cloud, allowing you to plan or move workloads between on-premises and public cloud infrastructure easily, using the same familiar tools (vCenter, vSphere, PowerCLI) as you do now, presuming you use VMware.

You also get low-latency connections to other AWS services, of which there are far too many to mention.

This strikes me as significant for VMware customers; and let’s not forget that the company dominates virtualisation in business computing.

Why would you not use VMware Cloud on AWS? Price is one consideration. Each host has 2 CPUs, 36 cores, 512GB RAM, 10.71TB local flash storage. You need a minimum of 4 hosts. Each host costs from $4.1616 to $8.3681 per hour, with the lowest price if you pay up front for a 3-year subscription (a substantial investment).

Price comparisons are always difficult. A big VM of a similar spec to one of these hosts will likely cost less. Maybe the best comparison is an EC2 Dedicated Host (where you buy a host on which you can run VM instances without extra charge). An i3 dedicated host has 2 sockets and 36 cores, similar to a VMware host. It can run 16 xlarge VMs, each with 950GB SSD storage. Cost is from $2.323 to $5.491 per hour. Again, the lowest cost is for a 3-year subscription with payment upfront.
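
On those published rates, a rough estimate of the premium is straightforward: per host, VMware Cloud on AWS costs $8.3681 against $5.491 on demand (about 52% more), and $4.1616 against $2.323 on a three-year term (about 79% more). The minimum four-host cluster at the three-year rate comes to 4 × $4.1616 ≈ $16.65 per hour, or roughly $146,000 per year of continuous running.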

I may have this hasty calculation wrong; but there is clearly a premium to be paid for VMware, and customers are used to that. The way the setup is designed (a 4-host cluster minimum) also makes it hard to be as flexible with costs as you can be when running up individual VMs.

A few more observations. EC2 is the native citizen of AWS. By going for VMware on AWS instead of EC2, you are interposing a third party between you and AWS, which intuitively seems to me a compromise. What you are getting though is smoother hybrid cloud, which is no small thing.

What about Microsoft, previously the king of hybrid cloud? Microsoft’s hypervisor is Hyper-V and while there are a few features in VMware ESXi that Hyper-V lacks, they are not all that significant in my opinion. As a hypervisor, Hyper-V is solid. The pain points with Microsoft’s solution though are Cluster Shared Volumes, for high availability Hyper-V deployments, and System Center Virtual Machine Manager; VMware has better tools. There is a reason Azure uses Hyper-V but not SCVMM.

Hyper-V will always be cheaper than VMware (other than for small, free deployments) because it is a feature of Windows and not an add-on. Windows Server licenses are not cheap at all but that is another matter, and you have to suffer these anyway if you run Windows on VMware.

Thus far, Hyper-V has not been all that attractive to VMware shops, not only because of the cost of changing course, but also because of the shortcomings mentioned above.

Microsoft’s own game-changer here is Azure Stack, pre-packaged hardware which uses Azure rather than System Center technology, relieving admins of the burden of managing Cluster Shared Volumes and so forth. It is a great solution for hybrid, since it really is the same as Microsoft’s public cloud (albeit with some missing features, and some lag in implementing features that come to the public version).

Azure Stack, like VMware on AWS, is new. Further, there is much more friction in migrating an existing datacentre to use Azure Stack than in extending an existing VMware operation to use VMware Cloud on AWS.

But there is more. Is cloud computing really about running up VMs and moving them about? Arguably, not. Containers are another approach with some obvious advantages. Serverless is a big deal, and abstracts away both VMs and containers. Further, as you shift the balance of applications away from code you write and more towards use of cloud services (database, ML, BI, queuing and so on), the importance of VMs and containers lessens.

Azure Stack has an advantage here, since it gives an on-premises implementation of some Azure services, though far short of what is in Microsoft’s cloud. And VMware, of course, is not just about VMs.

Overall it seems to me that while VMware Cloud on AWS is great for VMware customers migrating towards hybrid cloud, it is unlikely to be optimal, either for cost or features, especially when you take a long view.

It remains a smart move and one that I would expect to have a rapid and significant take-up.

Generating code for simple SQL Server data access without Entity Framework, works with .NET Core

I realise that Microsoft’s Entity Framework is the most common approach for data access in the .NET world, but I have also always had good results from a simple manual approach using DbConnection, DbCommand and DbDataReader objects, and I like the fact that I can see and control exactly what SQL gets executed. If you prefer using Entity Framework or another abstraction, that is fine; please stop reading now!

One snag with this more manual approach is that you have to write tedious code to build SQL statements. I figured that someone must have written a utility application to generate this code, but I could not find one quickly, so I wrote my own. It supports both C# and Visual Basic. The utility connects to a database and lets you generate a class for each table, along with code for retrieving and saving these objects, ready for modification. Here you can see a generated class:

[Image: a generated class]

and here is an example of the generated data access code:

[Image: the generated data access code]

This is NOT complete code (otherwise I would be perilously close to writing my own ORM) but simply automates creating SQL parameters and SQL statements.

One of my thoughts was that this code should work well with .NET Core. The SqlClient library implements the required classes. Here is my code for retrieving an author object, mostly generated by my utility:

public static ClsAuthor GetAuthor(string authorID)
{
    // Requires: using System.Data; using System.Data.SqlClient;
    SqlConnection conn = new SqlConnection(ConnectString);
    SqlCommand cmd = new SqlCommand();
    SqlDataReader dr;
    ClsAuthor TheAuthor = new ClsAuthor();
    try
    {
        // Parameterised query, so the input value is never concatenated into the SQL
        cmd.CommandText = "Select * from Authors where au_id = @auid";
        cmd.Parameters.Add("@auid", SqlDbType.Char);
        cmd.Parameters[0].Value = authorID;
        cmd.Connection = conn;
        cmd.Connection.Open();

        dr = cmd.ExecuteReader();

        if (dr.Read())
        {
            // Copy each column into the generated class
            TheAuthor.Auid = GetSafeDbString(dr, "au_id");
            TheAuthor.Aulname = GetSafeDbString(dr, "au_lname");
            TheAuthor.Aufname = GetSafeDbString(dr, "au_fname");
            TheAuthor.Phone = GetSafeDbString(dr, "phone");
            TheAuthor.Address = GetSafeDbString(dr, "address");
            TheAuthor.City = GetSafeDbString(dr, "city");
            TheAuthor.State = GetSafeDbString(dr, "state");
            TheAuthor.Zip = GetSafeDbString(dr, "zip");
            TheAuthor.Contract = GetSafeDbBool(dr, "contract");
        }
    }
    finally
    {
        // Always release the connection, even if the query throws
        conn.Close();
        conn.Dispose();
    }

    return TheAuthor;
}
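
The GetSafeDbString and GetSafeDbBool helpers are not shown above; they simply wrap null-safe reads. A minimal sketch of what they might look like (my assumption, not the generated code):

private static string GetSafeDbString(SqlDataReader dr, string fieldName)
{
    // Look up the column ordinal, returning an empty string for NULL
    int i = dr.GetOrdinal(fieldName);
    return dr.IsDBNull(i) ? string.Empty : dr.GetString(i).TrimEnd();
}

private static bool GetSafeDbBool(SqlDataReader dr, string fieldName)
{
    // Treat NULL as false
    int i = dr.GetOrdinal(fieldName);
    return !dr.IsDBNull(i) && dr.GetBoolean(i);
}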

Everything worked perfectly and I soon had a table showing the authors, using ASP.NET MVC.

In order to verify that it really does work with .NET Core, I moved the project to Visual Studio for Mac and ran it there:

[Image: the project running in Visual Studio for Mac]

I may be unusual; but I am reassured that I have a relatively painless way to write a database application for .NET Core without using Entity Framework.

Microsoft updates the .NET stack with .NET Core 2.0 and updated Visual Studio. Should you use it?

Microsoft has released .NET Core 2.0, a major update to its open source, cross-platform version of the .NET runtime and C# language.

New features include an implementation of .NET Standard 2.0 (a way of targeting code to run on multiple .NET platforms), and new platform support including Debian Stretch, macOS High Sierra and SUSE Linux Enterprise Server 12 SP2. There is preview support for both Linux and Windows on ARM32.

.NET Core 2.0 now supports Visual Basic as well as C# and F#. The version of C# has been bumped to 7.1, including async Main method support, inferred tuple names and default expressions.
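
As a quick illustration of those language additions, here is a sketch of my own (not from Microsoft’s release notes):

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    // async Main is new in C# 7.1; previously Main could not be async
    static async Task Main(string[] args)
    {
        await Task.Delay(100);

        // Inferred tuple names: the elements pick up the variable names
        var name = "Anders";
        var age = 21;
        var person = (name, age);
        Console.WriteLine($"{person.name} is {person.age}");

        // Default literal: the type is inferred from the context
        CancellationToken token = default; // was default(CancellationToken)
        Console.WriteLine(token.IsCancellationRequested);
    }
}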

Microsoft has also released Visual Studio 2017 version 15.3, which is required if you want to use .NET Core 2.0. New Visual Studio features include Azure Stack support, C# 7.1 support, .NET Framework 4.7 support, and other new features and fixes.

I updated Visual Studio and downloaded the new .NET Core 2.0 SDK and was soon up and running.

[Image: .NET Core SDK first run, including the “This product collects usage data” notice]

Note the statement that “This product collects usage data”, of which more below.


The sample ASP.NET MVC application worked first time.


How is .NET Core doing? The whole .NET picture is desperately confusing, and I get the impression that most .NET developers, while they may have paid some attention to what is happening, have concluded that the safe path is to continue with the Windows-only .NET Framework.

At the same time, .NET Core is strategically important to Microsoft. Cross-platform support means that C# has a life on the Mac and on Linux, which is vital to its health considering the popularity of the Mac amongst developers, and of Linux as a deployment platform for web applications. Visual Studio for Mac has also been updated and supports .NET Core 2.0 in the new version.

Another key piece is the container trend. .NET Core is ideal for container deployment, and the only version of .NET supported in Windows Nano Server. If you want to embrace microservices running in containers, while still developing with C#, .NET Core and Nano Server is the optimum solution.

Why not use .NET Core, especially since ASP.NET Core is faster than classic ASP.NET? In these comparisons, .NET Core comes out as substantially faster than .NET Framework for various algorithms – 600 times faster in one case.

The main issue is compatibility. .NET Core is a subset of the .NET Framework, and being a relative newcomer, it lacks the same level of third-party support.

Another factor is that there is no support for desktop applications, though some solutions have been devised. Microsoft does have a cross-platform GUI story, in Xamarin Forms, which is now in preview for macOS alongside iOS, Android, Windows and Tizen. If Xamarin used .NET Core that would be a great solution, but it does not (though it does support .NET Standard 2.0).

One of the pieces that most concerns developers is data access. If you use .NET Core you are strongly guided towards Entity Framework Core, a fork of Microsoft’s ORM (Object-Relational Mapping) framework. Someone asked on this page, is EF Core usable? Here’s an answer from one user (11 days ago):

Answering 4 months later but people should know: Definitely not, it is still not usable unless you are doing something very trivial and/or have very small DB.
I don’t understand how it is possible for MS to ship it, act like it’s OK and sparsely here and there provide shallow information about its limitations like in this article without warning clearly and explicitly about the serious issues this “v1 product” has.

Someone may jump in and say no, it is fine; but there are undoubtedly missing pieces and I would suggest caution.

You can also access data using the Connection/Command/DataReader approach, which avoids EF. Although this is more work, it is what I would be inclined to do personally, since you get the best performance and flexibility. Here is an example for SQL Server.

Who is using .NET Core? Controversially, Microsoft gathers telemetry from your use of the command-line tools, though you can opt out by setting an environment variable (DOTNET_CLI_TELEMETRY_OPTOUT=1). This means we have some data on .NET Core usage, though unfortunately it excludes Visual Studio usage. I downloaded the most recent dataset and imported it into a database. Here are the figures for OS family:

Total rows 5,036,981
Windows 3,841,922 (76.27%)
Linux 887,750 (17.62%)
Mac 307,309 (6.1%)


Given that this excludes Visual Studio users, who are also on Windows, we can conclude that the great majority of .NET Core developers use Windows, and only a tiny minority Mac (I do not know if Visual Studio for Mac usage is included). This is evidence that .NET Core has so far failed in its goal of persuading Mac-using developers to adopt .NET. It does show interest in deploying .NET applications to Linux, which is an obvious win in licensing costs as well as performance.

I would be interested in comments from developers on whether or not they use .NET Core and why.

An overreaching Office 365 integration from Sage

Sage, a software vendor best known for its accounting software, recently introduced an Office 365 integration in its product Sage 50c Accounts (the “c” is for cloud).

The integration offers several features including:

  • Automatic data backup to OneDrive
  • Contact integration so that you can easily see Sage accounts data for contacts in Office 365/Outlook
  • A mobile app that lets you capture receipts with your smartphone and import them
  • Excel reports
  • A Business Performance Dashboard


Very good; but how is this implemented? Users get a special Getting Started email which says:

Are you ready to integrate your Microsoft Office 365 account with Sage 50c Accounts? All you need to do is click Get Started and sign in using the administrator account for your Office 365 Business Premium subscription, and we will guide you through accepting terms and conditions, how to sync your data and setup the Sage apps and users

To sign in, you’ll enter your email and password for your administrator account. Your email is formatted as follows: xxx@xxx.onmicrosoft.com. If you have forgotten your Office 365 administrator password, please click here for more information.

You’ll be asked to accept a provider invitation to give us permission to activate the Sage add-ins for your Office 365 account. Easy.

If you know Office 365 you will spot something odd in the above. Sage is asking you not just to install an Office 365 application, but to “accept a provider invitation”.

It is as bad as it sounds. In order to get its integration working, Sage demands that you appoint it as a Cloud Solution Provider (CSP) for your entire Office 365 tenancy. This does not require that you start paying for your tenancy via Sage, as it can be alongside an existing CSP relationship. However it does give Sage complete access to the tenancy including the ability to reset the global administrator password.

While I do not think it is likely that Sage will do anything bad, this is a lot to ask. It means that in the unlikely event that Sage has its systems compromised, your Office 365 data is at risk.

It gets worse. Once you have agreed to hand over the keys to your Office 365 kingdom, you click a “Let’s get started” button in Sage 50C Accounts on your desktop. You have to log in as manager (a local Sage administrator) and then enter the credentials for your Office 365 global administrator. These credentials are then stored by Sage for 90 days and used to perform synchronization. After 90 days, it will demand that the credentials are entered again.

And by the way, you will need an Office 365 Business Premium license for the global administrator, even though that account is not normally used for day-to-day work.

Why is this bad? First, it is a misuse of the global administrator account. Best practice is that this account is used only for Office 365 administration. It should not be an active user account for email, OneDrive and so on, since this increases the risk of the account being compromised.

Second, end users (such as those in the accounts department) do not normally have knowledge of the global administrator credentials. Therefore to perform this operation, they will need to contact their IT support every 90 days.

Third, the fact that Sage holds these credentials on a user’s PC, albeit I presume encrypted, adds a possible attack mechanism for your Office 365 tenancy. If the PC became hijacked or infected with malware, a bad guy could start trying to figure out whether there is a way of persuading Sage to do something bad.

Fourth, it is not even wise to enter these credentials on an end user PC. Perhaps I am being excessively cautious, but it is obvious that an end-user PC that is used for day to day work, web browsing and so forth, by someone non-specialist in IT terms, is more vulnerable than an administrator’s PC. If a keylogger were installed, then there is an opportunity to grab the global administrator credentials every 90 days.

Frankly, I do not recommend that businesses use this integration in its current implementation. Nor is it necessary. There are plenty of ways to create Office 365 applications that integrate nicely using the APIs which Microsoft has provided. Maybe there is a feature or two which is difficult to implement without these rights; in this case, the correct thing to do is to badger Microsoft to provide a new API, or perhaps to recognise that the security cost of adding the feature is not worth the value which it adds.

My suspicion is that Sage has gone down this path by a process of evolution. It set itself up as an Office 365 CSP (before doing this integration) in order to get some extra business, which is fair enough. Then it started adding value to its Office 365 tenants, making use of what it could do as the customer’s CSP. Then it wanted to extend that to other Office 365 customers, those for whom it was not the CSP, and went down the path of least resistance, “oh, let’s just require that we become their CSP as well.”

Imagine if other third-party vendors go down this route. Your specialist business software supplier, your CRM supplier, your marketing software, all demanding total access and control over your Office 365 setup.

It is overreaching and disappointing that Microsoft CEO Satya Nadella blessed this integration with a quote about “empowering professionals” when the truth is that this is the wrong way to go about it.