Microsoft’s Pipelines for Azure Kubernetes Service: fixing COPY failed

I like to try new technology when I can, so following the Build conference I decided to deploy a Hello World app to Azure Kubernetes Service (AKS). I made a one-node AKS cluster in no time. I built a .NET Core app in Visual Studio and deployed it to a Linux Docker container, no problem. I pushed the container into ACR (Azure Container Registry), though it turns out I did not really need to do that. The tricky bit is getting the container deployed to the AKS cluster. There is a thing called Dev Spaces but it does not work in UK South:

image

I was contemplating the necessity of building a Helm chart when I tried a thing called Deployment Center (Preview) in the Azure portal.

Click Add Project and it builds a pipeline in Azure DevOps for you.

image

The wizard worked, but the pipeline failed when building the container.

COPY failed: stat /var/lib/docker/tmp/docker-builder088029891/AKS-Example/AKS-Example.csproj: no such file or directory

I spent some time puzzling over this error. You can view the exact logs of the build failure, and I worked out that it was executing these Dockerfile steps:

COPY ["AKS-Example/AKS-Example.csproj", "AKS-Example/"]
RUN dotnet restore "AKS-Example/AKS-Example.csproj"
COPY . .

This fails because the code in my repository is not nested like that. I eventually fixed it by amending the lines to:

COPY ["AKS-Example.csproj", "AKS-Example/"]
RUN dotnet restore "AKS-Example/AKS-Example.csproj"
COPY . AKS-Example/

Now the pipeline completed and the container was deployed. I had to look at the Load Balancer Azure had generated for me to find the public IP number, but it worked.

image

Now the Dockerfile has a different path for local development than when deployed, which is annoying. I found I could fix this by changing a step in the Deployment Center wizard:

image

Where it said /AKS-Example in Docker build context, I replaced it with /. Now the build worked with the original Dockerfile.
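The build context is why the two setups behave differently: COPY paths in a Dockerfile are resolved relative to the context directory, not to the Dockerfile's location. Translated into docker CLI terms (my guess at what the pipeline does under the hood, using my repository layout):

# Build context = repo root: the COPY path resolves as written
docker build -f AKS-Example/Dockerfile .

# Build context = the project folder (the wizard's default): the same
# COPY now looks for AKS-Example/AKS-Example.csproj inside that folder,
# i.e. AKS-Example/AKS-Example/AKS-Example.csproj, which does not exist
docker build -f AKS-Example/Dockerfile AKS-Example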

I also noticed that the Deployment Center (Preview) used a sample YAML template, linked directly from GitHub, which referred confusingly to deploying sampleapp. It worked but felt a bit of a crude solution.

At this point I realised that I was not really using the latest and greatest, which is the pipeline wizard in Azure DevOps. So I deleted everything and tried that.

image

This was great, but I could not see an equivalent to the Docker build context setting. And indeed, the new build failed with the same COPY failed error I got originally. Luckily I knew the workaround and was up and running in no time.

This different approach also has a slightly different shape than the Deployment Center pipeline, using Environments in Azure DevOps.

Currently therefore I have two questions:

  • Why does Azure offer both the Deployment Center (Preview) and the multi-stage pipeline which seem to have overlapping functionality?
  • What is the correct way to modify the generated YAML to fix the path issue? (One guess is sketched below.)
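My guess, which I have not verified against the generated pipeline, is that the Docker task's buildContext input is the right knob. The variable names below are the ones I believe the wizard generates, so treat the whole snippet as a sketch:

# Unverified sketch: point the build context at the repo root rather
# than the project folder, mirroring the Deployment Center fix
- task: Docker@2
  displayName: Build and push an image to container registry
  inputs:
    command: buildAndPush
    repository: $(imageRepository)
    Dockerfile: AKS-Example/Dockerfile
    buildContext: $(Build.SourcesDirectory)
    containerRegistry: $(dockerRegistryServiceConnection)
    tags: $(tag)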

I suppose it would also be good if the path problem were picked up by the wizard in the first place.

Automatic transcription for journalists: still not viable despite Microsoft push for “Modern journalism”

I am just back from Microsoft’s developer-focused Build event, where some special sessions were laid on for press, on the subject of “Modern journalism.”

Led by Microsoft’s Ben Rudolph, Modern Journalism is described on his public LinkedIn profile as “a new program committed to helping the news industry fight fake news, tell stories that resonate with modern audiences, and succeed financially.”

The sessions appealed to me for one particular reason, which was the promise of automatic transcription. We were given a leaflet which says:

Tired of digging through hours of recordings to find that one quote? When you record a Teams interview, it’s saved to Microsoft Stream. Here you’ll get game-changing AI features: searchable transcript to jump to exact moments a key word or phrase was used.

Before the transcription thing though, we were taken on a tour of OneNote and Word with AI. The latest AI Editor in Word will tighten up your prose and find gaffes like non-inclusive language. There is a lack of clarity over the privacy implications (these features work by uploading everything you type to Microsoft) but perhaps it is useful. I make plenty of typographical errors and would welcome help, though I remain sceptical about the extent to which AI can deliver this.

On to transcription though. Just hit record during a voice or video meeting in Teams, Microsoft’s Office 365 collaboration tool, and it gets automatically transcribed.

Unfortunately I do not use Teams for interviews, though it is possible to use it even for in-person interviews by having a meeting of one and recording it. I am wary though. I normally use an external recording device. Many years ago my device failed one day (I forget whether it was battery or something else) and I used my Tablet PC to record an interview with the game inventor Peter Molyneux. My expectations were not particularly high – I just wanted something good enough that I could transcribe it later. Unfortunately the recording was so poor that you can only make out about one word in ten. This, combined with my written notes and memory, was just about sufficient to write up my piece; but it was not an experiment I felt inclined to repeat – though recording quality has improved since that early disaster.

Still, automatic transcription would be an amazing time-saver. Further, I respect what can be achieved. Nuance Dragon Dictate can give superb results after a bit of training. What about Teams?

Today I put the idea to the test. I took a recorded interview from Build, made with a dedicated device, and uploaded it to Microsoft Stream. I tried uploading an audio file directly, but it would not accept it. I then created a “video” by importing my audio into a one-slide PowerPoint presentation and exporting it as a video. The quality is fine, easily intelligible. Stream chewed on it for maybe 30 minutes, and then my transcript was ready. The subject was the Azure Kubernetes Service. Here is a snippet of what Stream came up with:

 image 

There is an unnecessary annoyance here, which is that you cannot easily select and copy the entire transcript. Notice that it is in short snippets. The best way to get the whole thing is to click the three dots under the video, choose Update Video Details, and then download the caption file.

image

Now you get something like this:

image

The format is, shall we say, sub-optimal for journalists, though it would not take too long to write a script that would extract the text.
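For example, assuming the caption file Stream produces is in WebVTT format (a plain-text format with a WEBVTT header, timestamp lines containing "-->", and optional numeric cue identifiers), a few lines of Python would strip it down to continuous text:

import sys

def vtt_to_text(path):
    words = []
    with open(path, encoding="utf-8-sig") as f:
        for line in f:
            line = line.strip()
            # Skip the header, blank lines, timestamps and bare cue numbers
            if not line or line.startswith("WEBVTT") or "-->" in line or line.isdigit():
                continue
            words.append(line)
    return " ".join(words)

if __name__ == "__main__":
    print(vtt_to_text(sys.argv[1]))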

The bigger problem is the actual transcription. The section I have chosen is wrong in an interesting way. Here is part of what was said:

With the KEDA announcement today, what you’re seeing is us working with the ecosystem, in this case Red Hat, to solve some tricky problems around how to autoscale containers.

and here is the transcription:

with
the Kate Announcement. Today, which are seeing is also
actually working with the ecosystem in this case. We had
to sell some tricky problems around how to autoscale containers

Many of the words are correct, but the meaning is scrambled. Red Hat has been transcribed as “we had”, losing a critical part of the content.

It is not my intention to rubbish this technology. Automatic transcription is very challenging, especially with specialist content. It is not unreasonable for the system to transcribe KEDA as “Kate”: it is a brand new acronym (Kubernetes-based event-driven autoscaling).

Still, the question I ask myself is whether fixing up the auto transcription will save me any time versus the old-fashioned approach. I use a Word macro that plays back the interview with hot keys to pause and backtrack, editing as I go.

The answer is no. It will take me as long or longer to make sense of the automatic transcription, by comparing it to the original, than to type it from scratch.

This might not always be the case. Perhaps with a more AI-friendly subject the transcription will be good enough to save some time. It could also help to find where in the recording a particular quote appears. So it is not altogether useless.

Transcription is difficult, but there are some simpler matters which Microsoft could improve: enabling upload of audio files rather than video, for example, and providing a continuous transcript that can easily be copied.

Having a team within Microsoft rooting for journalists strikes me as a good thing in that an internal team may have more influence over the products.

It may be more a matter of some bright spark thinking: hey, if we get more journalists using Office 365, that will help to promote the product. It is a strategy which will be more successful if effort goes into making the product fit better with the way journalists actually work.

image

The future of WPF for developers who need to support Windows 7

If you talk to Microsoft about what is new for Windows Presentation Foundation (WPF), a framework for Windows desktop applications, the answer tends to revolve around the Windows UI Library (WinUI), user interface controls for the Universal Windows Platform and therefore Windows 10, which you can use with WPF. That is no use if you need to compile applications that work on Windows 7. Is WPF on Windows 7 in effect frozen?

Not quite. First, note that WPF (and Windows Forms) was updated for .NET Framework 4.8, with High DPI enhancements and bug fixes. The complete list of fixes is here. So there have been recent updates.

Microsoft says though that .NET Framework 4.8 is the “last major version” of .NET Framework. This suggests that WPF on .NET Framework will not change much in future. WPF is open source; but the open source project targets .NET Core, the cross-platform version of .NET. In addition, there are a few features in WPF for .NET Framework that will never be ported, including XBAPs (XAML Browser Applications) – probably not something you care about.

The good news though is that .NET Core does run on Windows 7 (currently SP1 is required). You can see the progress of WPF on .NET Core here. It is not yet done and there are a few things that will never be supported. But when this is production-ready, it is likely that the open source WPF will run on Windows 7 and thus benefit from any updates and fixes made to the code.
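For reference, targeting WPF from .NET Core 3.0 is a matter of a project file along these lines (a minimal sketch based on the preview SDK; details may change before release):

<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <UseWPF>true</UseWPF>
  </PropertyGroup>
</Project>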

From what I have learned here at Build, Microsoft’s developer conference, it is the .NET Core work that is currently top of mind for the WPF team. This means that WPF on Windows 7 does have a future – provided that .NET Core continues to support Windows 7. This proviso is important, since it is the decision of a different team. At some point there will be a version of .NET Core that does not support Windows 7, and that will be the moment when WPF cannot really progress on that operating system.

There may also be a special case. Presuming Edge Chromium runs on Windows 7, WPF may get a new Edge-based WebView control that runs on Windows 7.

Summary: WPF (and Windows Forms) on .NET Framework is not going to change much in future. If you can transition to using these frameworks on .NET Core though, there is more hope of improvements, though there is no magic that will make Windows 10 features available on Windows 7.

Windows Subsystem for Linux 2: Microsoft’s change of direction delivers better performance, worse integration

It is a feature which most users are not even aware of, but for developers and admins the Windows Subsystem for Linux (WSL) is perhaps the best feature of Windows 10. It gives you seamless access to Linux applications and utilities without needing to run a virtual machine (VM) or remote session. For example, I use it to develop and debug LAMP (Linux, Apache, MySQL, PHP) applications using Visual Studio Code on Windows as the editor. I also use it for running the Let’s Encrypt certbot utility as well as using Linux OpenSSL utilities. It solves Windows annoyances like path limitations and case insensitivity.

Now at the Build developer conference Microsoft has introduced WSL 2, advertising “dramatic file system performance increases, and full system call compatibility.” That is great, but there is a downside. Unlike the first version, WSL 2 runs in a VM:

WSL 2 uses the latest and greatest in virtualization technology to run its Linux kernel inside of a lightweight utility virtual machine (VM)

says the announcement from Microsoft’s Craig Loewen.

Although Microsoft also says that WSL 2 “still provides the same user experience as in WSL 1,” this is not altogether true. One specific difference is that currently I can run my LAMP application, fire up a Windows browser, navigate to localhost, and there is my application. In WSL 2, the LAMP application will have a different IP number, so this will not work. To be fair, when I discussed this with a member of the team I was told that they are working to address this and tinker with the networking so that localhost will work again. It is also arguable that the different IP number is preferable behaviour, since it will not conflict with other endpoints on the Windows side. But it is different.
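If localhost does stop working, finding the right address should not be hard. A quick Python sketch of the idea, assuming wsl.exe is on the PATH and the distro supports hostname -I (untested against a WSL 2 build):

import socket
import subprocess

def reachable(host, port=80, timeout=1):
    # Try to open a TCP connection to the web server
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

candidates = ["127.0.0.1"]
# Ask the WSL distro for its own address(es)
out = subprocess.run(["wsl", "hostname", "-I"], capture_output=True, text=True)
candidates += out.stdout.split()

for host in candidates:
    print(host, "->", "reachable" if reachable(host) else "no response")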

The use of a VM for WSL 2 is the conventional approach to this problem. In fact, you have been able to run a Linux VM on Windows for many years. The difference is the work Microsoft is doing to provide the fastest possible startup and deep integration with the file system so that it behaves more like the original WSL than like an isolated VM. In other words, the problem of running Linux binaries by redirecting system calls (WSL 1) has been exchanged for another: making a VM feel like a seamless part of Windows.

image

Why the change of direction? There are several reasons.

The first is compatibility. No matter how well WSL worked (and it does work very well), there would always be something that did not work as users attempted to use more and more Linux applications.

Second, performance. Apparently:

Initial tests that we’ve run have WSL 2 running up to 20x faster compared to WSL 1 when unpacking a zipped tarball, and around 2-5x faster when using git clone, npm install and cmake on various projects.

Third, when WSL was first conceived it was intended to work on mobile devices which could not support a VM (maybe this was something to do with Android compatibility efforts on Windows Phone).

Finally, Hyper-V has improved to the extent that running WSL 2 in a VM is more feasible.

It does mean that Microsoft will ship its own (but open source) Linux kernel with Windows and update it via Windows Update, a good thing for security.

The reasons are good ones, but it would not surprise me to see other niggling integration issues. And it is just a little sad that the magic of the original WSL has been replaced by a more conventional approach.

I also feel that if you came to Build looking for evidence that Microsoft is drifting away from Windows and towards Linux, WSL 2 supports that narrative.

One .NET: unification of .NET for Windows and .NET Core, Xamarin too

Microsoft’s forking of the .NET development platform into the Windows-only .NET Framework on one side, and the cross-platform .NET Core on the other, has caused considerable confusion. Which should you target? What is the compatibility story? And where does Mono, the older cross-platform .NET, fit in? Xamarin, partly based on Mono, is another piece of the puzzle.

Now Microsoft has announced that .NET 5, coming in November 2020, will unify these diverse .NET versions.

“There will be just one .NET going forward, and you will be able to use it to target Windows, Linux, macOS, iOS, Android, tvOS, watchOS and WebAssembly and more,” says Microsoft’s Rich Turner.

image

Following the release of .NET 5.0, the framework will have a major release every November, says Turner, with a long-term support release every two years.

Some other key announcements:

  • CoreCLR (the .NET Core runtime) and Mono will become drop-in replacements for one another.
  • Java interoperability will be available on all platforms.
  • Objective-C and Swift interoperability will be supported on multiple operating systems.
  • CoreFX will be extended to support static compilation of .NET and support for more operating systems.

A note of caution though. Turner says there are a number of issues still to be resolved. There is room for scepticism about how complete this unification will be.

More details in the official announcement here.

Update: having looked at these plans in a little more detail, it is wrong to say that Microsoft is unifying .NET Framework and .NET Core. Rather, Microsoft is saying that .NET Core is the replacement for .NET Framework for new applications whether on Windows or elsewhere. Certain parts of .NET Framework, including WCF, Web Forms, and Windows Workflow, will never be migrated to .NET 5. .NET Framework 4.8 will still be maintained and is recommended for existing applications.

Microsoft Build and the repositioning of Windows

Microsoft Build is under way in Seattle, with around 6000 attendees here to learn about the company’s latest developer technology. But what is the heart of Microsoft’s platform today? The answer used to be Windows – and this conference was originally the Build Windows event, distinct from the earlier Professional Developer Conference which was run by the Developer Division and had a wider scope.

image
Microsoft’s Satya Nadella introduces Build 2019 

Today though it is not so clear. The draft Build 2019 press release hardly mentions Windows. Here is the summary of topics:

“In his opening keynote, Microsoft CEO Satya Nadella outlined the company vision and developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and Power Platform, Microsoft 365, and Microsoft Gaming”

Windows is there of course. Azure uses Hyper-V, the Windows Server hypervisor. A Microsoft 365 license is a bundle of Office 365, Intune device management, and Windows Enterprise. Microsoft Gaming includes PC gaming, and Xbox gets its name from the Windows DirectX hardware accelerated graphics API. But no, this is no longer a conference about developing for Windows, and Microsoft seems happy for its operating system to be less visible. PCs remain the devices on which many of us get most of our work done, but it is not a growth market, and cannot really become one unless by some miracle Microsoft returned to mobile or wearables. That would be hard, especially since the Universal Windows Platform, originally conceived as an app platform for touch and mobile as well as desktop, has drifted away from that concept and become something of uncertain relevance unless you are targeting HoloLens or some other niche.

That said, Windows is still evolving and Build remains the best event to keep track of what is new. In the advance news on which this post is based, several key features were announced.

Windows Subsystem for Linux 2 (WSL 2) now supports Linux Docker containers as well as faster file I/O. This also integrates nicely with the new Visual Studio Code Remote Development extensions, which let you edit and debug code in WSL, in Docker containers, or on any remote machine over SSH.

Windows Terminal is a new application for command lines including PowerShell, Cmd and WSL. It includes rich fonts (with hardware accelerated rendering), multiple tabs, and “theming and customization”.

React Native for Windows is an open source project on GitHub that will let you develop high performance Windows applications.

MSIX Core is the next step in Windows setup technology and lets you install MSIX packages on Windows 7 as well as Windows 10.

.NET 5 has been announced and seems to embrace both Windows Desktop and cross-platform – I will be unpacking the details of how this works shortly. .NET 5 will release in 2020.

Microsoft Edge (on Chromium) has new features announced, including an IE mode tab (for running Internet Explorer applications/sites), three levels of privacy (Unrestricted, Balanced and Strict) which claim to control third-party tracking, and Collections, a feature for collecting and sharing web information which integrates with Office.

Of course there is much more news on what Microsoft now sees as its top priority topics: Azure, AI, Microsoft Search, PowerApps, PowerBI, Cognitive Services, Bot Framework, Mixed reality, IoT and Edge computing, Cosmos DB, Azure Kubernetes Service, GitHub and more.

Windows? Still the best way to run Office, and excellent for developing applications. But this is Microsoft Build, not Build Windows.

image
Seattle, Washington, the evening before Microsoft Build

Microsoft Office and privacy: happy to send what you type to the cloud for analysis?

I attempted to open a document from on-premises SharePoint recently and was greeted with an error asking me to check my privacy settings.

image

“The service required to use this feature is turned off” I was informed. Hmm, what service is that then? The solution turned out to be in the new Office privacy settings, just as the dialog suggested.

If you disable what Microsoft calls “Connected experiences” it appears to block access to SharePoint. Probably not what the user intended.

image 

This setting is not great for clarity. Privacy-conscious users like me may disable it because it represents agreement to “experiences that analyze your content”. Since this means uploading your content to the cloud for analysis, it sounds as if it might weaken both privacy and security. If you look at all the options though, it may be possible to agree to access online file storage without agreeing to content analysis:

image

It looks as if by unchecking “Let Office analyze your content” you might be able to stop Office uploading your stuff.

Is there anything to worry about? We need to know more about what happens to our data. There is a Learn More link that takes us here. This lists lots of features but does not tell us what we want to know. Maybe here? This tells us that:

Three types of information make up required service data.

  • Customer content, which is content you create using Office, such as text typed in a Word document, and is used in conjunction with the connected experience.

It is still not clear though what happens to our data, other than that it is “sent to Microsoft”. Even the massive Microsoft Privacy Statement is no more illuminating on this point. In fact, it is arguably rather alarming since it contains this statement:

Microsoft uses the data we collect to provide you with rich, interactive experiences. In particular, we use data to:

  • Provide our products, which includes updating, securing, and troubleshooting, as well as providing support. It also includes sharing data, when it is required to provide the service or carry out the transactions you request.
  • Improve and develop our products.
  • Personalize our products and make recommendations.
  • Advertise and market to you, which includes sending promotional communications, targeting advertising, and presenting you with relevant offers.

We also use the data to operate our business, which includes analyzing our performance, meeting our legal obligations, developing our workforce, and doing research.

In carrying out these purposes, we combine data we collect from different contexts (for example, from your use of two Microsoft products) or obtain from third parties to give you a more seamless, consistent, and personalized experience, to make informed business decisions, and for other legitimate purposes.

This suggests that Microsoft will profile me and send me advertising based on the data it collects. What I need to know is not only the fact that this happens, but also the mechanism, in order to make an informed judgement about whether it is sensible to enable these options. Of course it is also possible that the Office content analysis service does not do this. I am guessing.

What can go wrong? These risks are hard to quantify. If you are typing something confidential, it makes sense not to share it more than is necessary, as further sharing can only increase the risk. There are some interesting scenarios too, such as what happens if Microsoft receives a legal demand to have sight of the content of your documents. Microsoft may not want to give access to your content, but in some circumstances it might not have the choice. Then again, I doubt it retains content sent for the purpose of personalisation, beyond whatever factors the service determines are significant. However this is not stated here.

Is this any different from storing documents on a cloud service such as SharePoint / OneDrive online? It is a bit different since in the Office case you are permitting Microsoft to analyze as well as to store your content.

All of this is up for debate. I accept that the risks are probably small, and that the wider world has little or no interest in most of the content I type but do not choose to publish.

Nevertheless, there are a few things which seem to me reasonable requests.

– A clear statement concerning what happens to my content if I choose to let it be analyzed by Microsoft’s cloud service, to enable better informed decisions about whether or not to enable this feature. Dumping the user into an all-encompassing privacy policy is not good enough.

– Improved settings (and possibly some fixed bugs) so that privacy-conscious users do not inadvertently disable access to on-premises SharePoint, as in my example, or other unexpected outcomes.

– A simple way to exclude a specific document from the service, conceptually similar to “in-private” mode in a web browser though with more chance of actually protecting your privacy (in-private mode is not really very private).

In general, I do not think the solution to a customer’s reasonable concerns about privacy and security of personal information is to obscure how this data is handled.

A post that can save you money: scheduling Azure Virtual Machines for start/stop

I have written recently about Windows Virtual Desktop, the ability to set up a virtual desktop environment on Azure at a relatively low cost, provided your users have Microsoft 365 accounts. My test setup is minimal but I have been watching the cost which is currently working out at £5.39 per day. This excludes the cost of Microsoft 365; it is purely for the infrastructure including VPN gateway, storage and VM. Bandwidth is a variable cost but almost negligible on my usage. Of that cost, the VM is around 75%. So if I could shut down the VM when not in use the savings are substantial.

It turns out this is pretty easy on Azure, though it requires some plumbing. VMs do have a built-in option to shut down on a schedule, but not to start up. To get start/stop, you need an Automation Account.

image

With the automation account created, select it, hit Start/Stop VM, then click “Learn more about and enable the solution”.  You get this dialog.

image

Here we learn that to save money, we have to spend it, on three new services: Automation, Log Analytics, and Monitor. It is not too bad though as there is a free tier for these services that may be all I need. Hit Create.

image

In this window you have to configure three sections. Nothing challenging, but note that in Configuration you set the Target Resource Group Names. No pick list here, you have to type in the names. Or use a wildcard, which is unlikely to be a good idea since by default it will start and stop ALL your VMs. The schedule is not very smart, just a daily on and off, but see below. Once done, click Create to add the solution.

All done, but what about weekends, for example? This is easily fixed by creating your own schedules. Just go into your automation account and click Schedules under Shared Resources. The wizard-created schedules are listed, and you can modify them or create new ones. It looks as if you might need five schedules, one per weekday, recurring every week, to make your VMs not run at weekends. There is no Monday-Friday option.

More documentation here. Note that automation can also run PowerShell scripts, which will be even more flexible.
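To give a flavour of scripting the same idea outside the packaged solution, here is a sketch using the Azure SDK for Python; method names have varied between SDK versions, and the subscription and resource group values are placeholders:

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "<resource-group>"    # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Deallocate every VM in the group so that compute is no longer billed
for vm in client.virtual_machines.list(RESOURCE_GROUP):
    client.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm.name).wait()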

Scheduling cloud resources to shut down when not in use must be one of the most effective ways to reduce IT spend.

Update: here is the outcome of my efforts:

image

The Management resource group has the runbook that performs the start/stop action. The cost of this is small. Overall cost has gone down by about £2.00 or about 40% in my case. I appreciate this is a very small test deployment, but it would support maybe 4 or 5 users without any problem and my experience shows that you can indeed make a large saving by scheduling VMs to stop when not in use.

Microsoft Financials: strategic purpose of GitHub, Teams and PowerApps revealed

Microsoft has announced its quarterly financial statements, reporting revenue of $30.6 billion, up 14% on the same period last year.

The story seems to be largely more of the same, which is good for the company in that all its numbers look good.

The most striking figure is the 73% increase in Azure revenue. The segment containing Azure (Intelligent Cloud) is the smallest of the company’s three self-defined segments, though all three are similar in size. “More Personal Computing” (Windows, Surface and gaming) delivered the most revenue, followed by Productivity and Business Processes. That said, at this rate of growth the Azure segment will soon be the biggest of the three.

Aside: has there ever been a dafter segment name than More Personal Computing? More than what?

Quarter ending March 31st 2019 vs quarter ending March 31st 2018, $millions

Segment                             | Revenue | Change | Operating income | Change
Productivity and Business Processes | 10,100  | +1,236 | 3,979            | +864
Intelligent Cloud                   | 9,649   | +1,780 | 3,208            | +554
More Personal Computing             | 10,680  | +763   | 3,154            | +631

The segments break down as:

Productivity and Business Processes: Office, Office 365, Dynamics 365 and on-premises Dynamics, LinkedIn

Intelligent Cloud: Server products, Azure cloud services

More Personal Computing: Consumer including Windows, Xbox; Bing search; Surface hardware

Some points to note. Microsoft reported a “material improvement” in Azure gross margin, something which does not surprise me. In my experience, the Azure Portal does a great job (from Microsoft’s perspective) of steering you towards premium services and extras, as I found when trying Windows Virtual Desktop (check my note on the VPN Gateway at $140 per month).

Office 365 is still growing, up 30% according to Microsoft’s slides. LinkedIn is also increasing revenue, up 27%.

Despite Chromebooks and mobile, Windows is still a cash cow with revenue from Windows OEM Pro up 15% year on year. Consumer revenue is down 1%.

In the earnings call CEO Satya Nadella called out Teams as “bringing together everything a team needs” (well, apart from a proper shared calendar).

CFO Amy Hood remarked on what she called “strategic areas”, by which she means, I think, areas that drive adoption:

We will invest aggressively in strategic areas like Cloud through AI and Github, Business Applications through Power Platform and LinkedIn, Microsoft 365 through Teams, Security, and Surface as well as Gaming.

Note that GitHub is seen as a way of persuading developers to use Azure services, and note also the importance attached to the Power Platform. Power Platform? Here it is:

The Power Platform is today comprised of three services – Power BI, PowerApps and Flow … It is a system that enables users to do three key actions on data that help them drive business: Analyze, Act, and Automate. We do this with Power BI, PowerApps, and Flow, all working together atop your data to help EVERYONE, from the CEO to the front-line workers, drive the business with data.

says CVP James Phillips.

The piece that particularly interests me is PowerApps. Microsoft spent years looking for a modern successor to Visual Basic, app development within reach of non-specialists (kind-of). In PowerApps it believes it has the answer:

PowerApps is a “citizen application development platform” – allowing anyone to build web and mobile applications without writing code. The natural connection between Power BI and PowerApps makes it effortless to put insights in the hands of maintenance workers, teachers, miners and others on the frontline, in tailored and often task-specific applications

says Phillips.

So if VB was a driver for Windows adoption, then PowerApps will push you towards Microsoft’s cloud-hosted business applications.

Microsoft Planner: a good task management solution for small teams?

It is a common scenario for any team: you have projects which break down into various tasks, and you need to assign tasks to team members with deadlines. The low-tech solution is that you have a meeting, you assign the tasks, and each person organises their time in whatever way works for them. A calendar entry with a reminder, perhaps, or a task entry with a reminder, if you use Outlook and Exchange or Office 365.

But what if you want a project-level view of how the tasks are going? Again there are low-tech solutions like Excel spreadsheets or even a whiteboard on the wall. Of course there are software solutions as well. On Microsoft’s platform (which is the subject of this post) you could use Microsoft Project. A user license for Project Online Professional is currently £22.60 per month, more than double the cost of an Office 365 Business Premium account (£9.40). Even a team member license (Project Online Essentials) is £5.30. It is a big leap in cost, and more than many businesses need in terms of features.

There is an alternative, which is Microsoft Planner. This is one of those Office 365 apps that is not all that well known, and it comes for free with most Office 365 plans. It gives you basic project management, with the ability to assign tasks to team members.

You can find Planner by logging into Office 365 and choosing Planner from the All Apps view.

image

Once Planner opens you can create a plan.

image

I advise a careful look at this dialog before clicking Create plan. If you have one big project, such as perhaps a new product you are developing, a plan dedicated to that project makes sense. If you have multiple small projects though, it would be better to have a single plan to contain multiple projects. The reason is that plans have a relatively high overhead. Each plan by default creates an Office 365 group and an Office 365 SharePoint site. This could easily become a maintenance nightmare. Within a plan though, you can have multiple buckets, and each bucket can contain multiple tasks.

Note also that you can use an existing Office 365 group. It might make sense to create the group first, if only to get a sensible name. By default, the group gets the name of the plan. Only one SharePoint site is created per group, so this is more lightweight (phew!).

After thinking this through you hit Create plan. The plan is created and you can get on with adding tasks, the base unit of a plan.

A few things about tasks:

– they have due dates

– they are assigned to one or more team members

– they can have checklists of sub-tasks which you check off

– they can have attachments

– they have a status of “Not started”, “In progress”, or “Complete”

image

Tasks can be grouped into buckets (a good idea). Once you have a few tasks you can view charts showing progress and a schedule showing when task completion is due.

image
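Incidentally, tasks are also scriptable via the Microsoft Graph API, which could be handy for bulk-loading a plan. A hedged Python sketch, assuming you already have an OAuth access token with suitable permissions; the TOKEN, PLAN_ID and BUCKET_ID values are placeholders:

import requests

TOKEN = "<access-token>"   # placeholder
PLAN_ID = "<plan-id>"      # placeholder
BUCKET_ID = "<bucket-id>"  # placeholder

resp = requests.post(
    "https://graph.microsoft.com/v1.0/planner/tasks",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "planId": PLAN_ID,
        "bucketId": BUCKET_ID,
        "title": "Draft the quarterly report",
        "dueDateTime": "2019-06-30T00:00:00Z",
    },
)
resp.raise_for_status()
print("Created task", resp.json()["id"])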

When members are assigned a task, they get an email notification.

image

And as I mentioned, there is a SharePoint site, which can have all sorts of junk added to it.

image

Now a few observations. Planner looks useful, but as so often with these Microsoft apps, there are things that make you want to bang your head against the nearest wall. The most obvious problem is that Planner tasks do not integrate with Outlook tasks. The best you can do is to export the plan schedule to an Outlook calendar. Guess what is the top user request for Planner?

image

From here we learn of an added complication: Outlook tasks are being replaced by Microsoft To-Do. Inevitable perhaps, but I like Outlook tasks and the fact that everything is in an Exchange mailbox, and therefore easy to manage.

Still, the good news is that it says In Development.

Other limitations? Well, Planner is very basic. You cannot even have dependent tasks. You cannot set status to show the degree to which a task is complete, which even Outlook tasks can do. No Gantt charts either. Or features like milestones, cost tracking, risk assessment, time management, templates, prioritisation, projections, or other such features.

In fact, you cannot even export to Excel, the second most requested feature (the team is working on this too).

image

You cannot help but wonder if Microsoft does not want to make Planner too good, lest it cut into lucrative Project sales.

If so, this is to my mind wrongheaded. For every Project sale lost, there would be three sales won for Office 365 if it came with an excellent project management tool built in. There is also the problem of duplicated effort. Why not get the Project team to develop Project Lite for Office 365, limited by lack of some of the more advanced features, but with a smooth upgrade path, rather than making an alternative product which is still not fully ready?

Still, Planner is free with Office 365, and worth being aware of if you can get it to do what you need.
