
Generating code for simple SQL Server data access without Entity Framework, works with .NET Core

I realise that Microsoft’s Entity Framework is the most common approach for data access in the .NET world, but I have also always had good results from a simple manual approach using DbConnection, DbCommand and DataReader objects, and like the fact that I can see and control exactly what SQL gets executed. If you prefer using Entity Framework or another abstraction that is fine and please stop reading now!

One snag with this more manual approach is that you have to write tedious code to build SQL statements. I figured that someone must have written a utility application to generate this code, but I could not find one quickly, so I wrote my own. It supports both C# and Visual Basic. The utility connects to a database and lets you generate a class for each table, along with code for retrieving and saving these objects, ready for modification. Here you can see a generated class:

[Screenshot: the generated class]

and here is an example of the generated data access code:

[Screenshot: the generated data access code]

This is NOT complete code (otherwise I would be perilously close to writing my own ORM) but simply automates creating SQL parameters and SQL statements.
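To give an idea of what that automation amounts to, here is a sketch of the kind of parameterised save code involved (a hypothetical reconstruction rather than the utility’s actual output, with most columns omitted for brevity):

public static void SaveAuthor(ClsAuthor TheAuthor)
{
    using (SqlConnection conn = new SqlConnection(ConnectString))
    using (SqlCommand cmd = new SqlCommand())
    {
        // The tedious part the generator automates: the statement plus one parameter per column
        cmd.CommandText = "Update Authors set au_lname = @aulname, au_fname = @aufname, phone = @phone where au_id = @auid";
        cmd.Parameters.AddWithValue("@aulname", TheAuthor.Aulname);
        cmd.Parameters.AddWithValue("@aufname", TheAuthor.Aufname);
        cmd.Parameters.AddWithValue("@phone", TheAuthor.Phone);
        cmd.Parameters.AddWithValue("@auid", TheAuthor.Auid);

        cmd.Connection = conn;
        cmd.Connection.Open();
        cmd.ExecuteNonQuery();
    }
}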

One of my thoughts was that this code should work well with .NET Core, since its SqlClient implements the required classes. Here is my code for retrieving an author object, mostly generated by my utility:

// Uses System.Data and System.Data.SqlClient, both available on .NET Core
public static ClsAuthor GetAuthor(string authorID)
{
    ClsAuthor TheAuthor = new ClsAuthor();

    using (SqlConnection conn = new SqlConnection(ConnectString))
    using (SqlCommand cmd = new SqlCommand())
    {
        cmd.CommandText = "Select * from Authors where au_id = @auid";
        cmd.Parameters.Add("@auid", SqlDbType.Char);
        cmd.Parameters[0].Value = authorID;
        cmd.Connection = conn;
        cmd.Connection.Open();

        using (SqlDataReader dr = cmd.ExecuteReader())
        {
            if (dr.Read())
            {
                //Get Function
                TheAuthor.Auid = GetSafeDbString(dr, "au_id");
                TheAuthor.Aulname = GetSafeDbString(dr, "au_lname");
                TheAuthor.Aufname = GetSafeDbString(dr, "au_fname");
                TheAuthor.Phone = GetSafeDbString(dr, "phone");
                TheAuthor.Address = GetSafeDbString(dr, "address");
                TheAuthor.City = GetSafeDbString(dr, "city");
                TheAuthor.State = GetSafeDbString(dr, "state");
                TheAuthor.Zip = GetSafeDbString(dr, "zip");
                TheAuthor.Contract = GetSafeDbBool(dr, "contract");
            }
        }
    }

    return TheAuthor;
}
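The GetSafeDbString and GetSafeDbBool helpers referenced above are not shown in the listing. A minimal sketch of the idea, assuming the null-tolerant behaviour the names suggest (the generated versions may differ):

private static string GetSafeDbString(SqlDataReader dr, string fieldName)
{
    // Hypothetical helper: return an empty string rather than throwing on NULL
    int i = dr.GetOrdinal(fieldName);
    return dr.IsDBNull(i) ? string.Empty : dr.GetString(i).TrimEnd();
}

private static bool GetSafeDbBool(SqlDataReader dr, string fieldName)
{
    // Hypothetical helper: treat NULL as false
    int i = dr.GetOrdinal(fieldName);
    return !dr.IsDBNull(i) && dr.GetBoolean(i);
}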

Everything worked perfectly and I soon had a table showing the authors, using ASP.NET MVC.

In order to verify that it really does work with .NET Core I moved the project to Visual Studio for Mac and ran it there:

[Screenshot: the project running in Visual Studio for Mac]

I may be unusual, but I am reassured that I have a relatively painless way to write a database application for .NET Core without using Entity Framework.

Microsoft updates the .NET stack with .NET Core 2.0 and updated Visual Studio. Should you use it?

Microsoft has released .NET Core 2.0, a major update to its open source, cross-platform version of the .NET runtime and C# language.

New features include implementation of .NET Standard 2.0 (a way of targeting code to run under multiple .NET platforms), new platform support including Debian Stretch, macOS High Sierra and Suse Linux Enterprise Server 12 SP2. There is preview support for both Linux and Windows on ARM32.

.NET Core 2.0 now supports Visual Basic as well as C# and F#. The version of C# has been bumped to 7.1, including async Main method support, inferred tuple names and default expressions.
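For illustration, here is a minimal program using all three of those C# 7.1 features; note that you have to set the language version to 7.1 explicitly in the project settings, as 7.0 remains the default:

using System;
using System.Threading.Tasks;

class Program
{
    // async Main is new in C# 7.1
    static async Task Main()
    {
        // Inferred tuple names: the elements are automatically called "city" and "population"
        var city = "London";
        var population = 8_136_000;
        var entry = (city, population);

        // Default expressions: the type (here int) is inferred from context
        int zero = default;

        await Task.Delay(10); // stand-in for real asynchronous work
        Console.WriteLine($"{entry.city}: {entry.population} (zero = {zero})");
    }
}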

Microsoft has also released Visual Studio 2017 15.3, which is required if you want to use .NET Core 2.0. New Visual Studio features include Azure Stack support, C# 7.1 support, .NET Framework 4.7 support, and various other new features and fixes.

I updated Visual Studio and downloaded the new .NET Core 2.0 SDK and was soon up and running.

[Screenshot: .NET Core 2.0 SDK up and running]

Note the statement about “This product collects usage data” of which more below.


The sample ASP.NET MVC application worked first time.

[Screenshot: the sample ASP.NET MVC application]

How is .NET Core doing? The whole .NET picture is desperately confusing, and I get the impression that most .NET developers, while they may have paid some attention to what is happening, have concluded that the safe path is to continue with the Windows-only .NET Framework.

At the same time, .NET Core is strategically important to Microsoft. Cross-platform support means that C# has a life on the Mac and on Linux, which is vital to its health considering the popularity of the Mac amongst developers, and of Linux as a deployment platform for web applications. Visual Studio for Mac has also been updated and supports .NET Core 2.0 in the new version.

Another key piece is the container trend. .NET Core is ideal for container deployment, and the only version of .NET supported in Windows Nano Server. If you want to embrace microservices running in containers, while still developing with C#, .NET Core and Nano Server is the optimum solution.

Why not use .NET Core, especially since it is faster? In these comparisons, .NET Core comes out as substantially faster than the .NET Framework for various algorithms – 600 times faster in one case.

The main issue is compatibility. .NET Core is a subset of the .NET Framework, and being a relative newcomer, it lacks the same level of third-party support.

Another factor is that there is no support for desktop applications, though some solutions have been devised. Microsoft does have a cross-platform GUI story, in Xamarin Forms, which is now in preview for macOS alongside iOS, Android, Windows and Tizen. If Xamarin used .NET Core that would be a great solution, but it does not (though it does support .NET Standard 2.0).

One of the pieces that most concerns developers is data access. If you use .NET Core you are strongly guided towards Entity Framework Core, a fork of Microsoft’s ORM (Object-Relational Mapping) framework. Someone asked on this page, is EF Core usable? Here’s an answer from one user (11 days ago):

Answering 4 months later but people should know: Definitely not, it is still not usable unless you are doing something very trivial and/or have very small DB.
I don’t understand how it is possible for MS to ship it, act like it’s OK and sparsely here and there provide shallow information about its limitations like in this article without warning clearly and explicitly about the serious issues this “v1 product” has.

Someone may jump in and say no, it is fine; but there are undoubtedly missing pieces and I would suggest caution.

You can also access data using the Connection/Command/DataReader approach which avoids EF, and although this is more work, this is what I would be inclined to do personally since you get the best performance and flexibility. Here is an example for SQL Server.

Who is using .NET Core? Controversially, Microsoft gathers telemetry from your use of the command-line tools, though you can opt out by setting an environment variable (DOTNET_CLI_TELEMETRY_OPTOUT=1). This means we have some data on .NET Core usage, though unfortunately it excludes Visual Studio usage. I downloaded the most recent dataset and imported it into a database. Here are the figures for OS family:

Total rows: 5,036,981
Windows: 3,841,922 (76.27%)
Linux: 887,750 (17.62%)
Mac: 307,309 (6.10%)


Given that this excludes Visual Studio users, who are all on Windows, we can conclude that the great majority of .NET Core developers use Windows, and that only a tiny minority use a Mac (I do not know if Visual Studio for Mac usage is included). This is evidence that .NET Core has so far failed in its goal of persuading Mac-using developers to adopt .NET. It does show interest in deploying .NET applications to Linux, which is an obvious win in licensing costs as well as performance.

I would be interested in comments from developers on whether or not they use .NET Core and why.

The downside of “Windows as a service”: disappearing features (and why I will miss Paint)

Microsoft has posted a list of features that are “removed or deprecated” in the next major update to Windows 10, called the Fall Creators Update.

The two that caught my eye are Paint, a simple graphics editor whose ancestry goes right back to Windows 1.0 in 1985, and System Image Backup, a means of backing up Windows that preserves applications, settings and documents.

I use Paint constantly. It is ideal for cropping screenshots and photos, where you want a quick result with no need for elaborate image processing. It starts in a blink, lets you resize images while preserving aspect ratio, and supports .BMP, .GIF, .JPG, .PNG and .TIF – all the most important formats.

I used Paint to crop the following screenshot, which shows the backup feature that is to be removed.

[Screenshot: System Image Backup]

System Image Backup is the most complete backup Windows offers. It copies your system drive so that you can restore it to another hard drive, complete with applications and data. By contrast, the “modern” Windows 10 backup only backs up files and you will need to reinstall and reconfigure the operating system along with any applications if your hard drive fails and you want to get back where you were before. “We recommend that users use full-disk backup solutions from other vendors,” says Microsoft unhelpfully.

If System Image Backup does stop working, take a look at Disk2vhd, which is not entirely dissimilar but copies the drive to a virtual hard drive; or the third-party DriveSnapshot, which can back up and restore entire drives. Or of course one of many other backup systems.

The bigger picture here is that when Microsoft pitched the advantages of “Windows as a service”, it neglected to mention that features might be taken away as well as added.

Microsoft Edge browser crashing soon after launch: this time, it’s IBM Trusteer Rapport to blame

A common problem (I am not sure how common, but there are hundreds of reports) with the Edge browser in Windows 10 is that it gets into the habit of opening and then immediately closing, or closing when you try to browse the web.

I was trying to fix a PC with these symptoms. In the event log, an error was logged: “Faulting module name: EMODEL.dll”. Among much useless advice out there, there is one fix that has some chance of working: you can reinstall Edge, as described in various places. The steps are something like this (though be warned, you will lose ALL your Edge settings, favourites etc):

1. Delete C:\Users\%username%\AppData\Local\Packages\Microsoft.MicrosoftEdge_8wekyb3d8bbwe (a few files may get left behind)

2. Reboot

3. Run PowerShell and execute: Get-AppXPackage -Name Microsoft.MicrosoftEdge | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppXManifest.xml" -Verbose}

However this did not fix the problem – annoying after losing the settings. I was about to give up when I found this thread. The culprit, for some at least, is IBM Trusteer Rapport and its Early Browser Protection feature. I disabled this, rebooted, and Edge now works.

Failing that, you can stop or uninstall Rapport altogether, which should also fix the problem.

Unhealthy Identity Synchronization Notification: a trivial solution (and Microsoft’s useless troubleshooter)

If you use Microsoft’s AD Connect, also known as DirSync, you may have received an email like this:

[Screenshot: the notification email]

It’s bad news: your Active Directory is not syncing with Office 365. “Azure Active Directory did not register a synchronization attempt from the Identity synchronization tool in the last 24 hours.”

I got this after upgrading AD Connect to the latest version, currently 1.1.553.

The email recommends you run a troubleshooting tool on the AD Connect server. I did that. Nothing wrong. I rebooted, it synced once, then I got another warning.

This is only a test system but I still wanted to find out what was wrong. I tweaked the sync configuration, again without fixing the issue.

Finally I found this post. Somehow, AD Connect had configured itself not to sync. You can get the current setting in PowerShell using Get-ADSyncScheduler:

[Screenshot: Get-ADSyncScheduler output]

As you can see, SyncCycleEnabled is set to false. The fix is trivial: just type

Set-ADSyncScheduler -SyncCycleEnabled $true

You can then trigger an immediate sync with Start-ADSyncSyncCycle -PolicyType Delta, or wait for the next scheduled cycle.

Well, I am glad to fix it, but should not Microsoft’s troubleshooting tool find this simple configuration problem?

Licensing Azure Stack: it’s complicated (and why Azure Stack is the iPad of servers)

Microsoft’s Azure Stack is a pre-configured, cut-down version of Microsoft’s mighty cloud platform, condensed into an appliance-like box that you can install on your own premises.

Azure Stack is not just a new way to buy a bunch of Windows servers. Both the technical and the business model are different to anything you have seen before from Microsoft. On the technical side, your interaction with Azure Stack is similar to your interaction with Azure. On the business side, you are buying the hardware, but renting the software. There is no way, according to the latest pricing and licensing guide, to purchase a perpetual license for the software, as you can for Windows Server. Instead, there are two broad options:

Pay-as-you-use

In this model, you buy software services on Azure Stack in exactly the same way as you do on Azure. The fact that you have bought your own hardware gets you a discount (probably). The guide says “Azure Stack service fees are typically lower than Azure prices”.

Base virtual machine: $0.008/vCPU/hour ($6/vCPU/month)
Windows Server virtual machine: $0.046/vCPU/hour ($34/vCPU/month)
Azure Blob Storage: $0.006/GB/month (no transaction fee)
Azure Table and Queue: $0.018/GB/month (no transaction fee)
Azure App Service (Web Apps, Mobile Apps, API Apps, Functions): $0.056/vCPU/hour ($42/vCPU/month)

This has the merit of being easy to understand. It gets more complex if you take the additional option of using existing licenses with Azure Stack. “You may use licenses from any channel (EA, SPLA, Open, and others),” says the guide, “as long as you comply with all software licensing and product terms.” That qualification is key; those documents are not simple. Let’s briefly consider Windows Server 2016 Standard, for example. Licensing is per core. To install Windows Server 2016 Standard on a VM, you have to license all the cores in the physical server, even if your VM only has one virtual CPU. The servers in Azure Stack, I presume, have lots of cores. Even when you have done this, you are only allowed to install it on up to two VMs. If you need it on a third VM, you have to license all the cores again. Here are the relevant words:

Standard Edition provides rights for up to 2 Operating System Environments or Hyper-V containers when all physical cores in the server are licensed. For each additional 1 or 2 VMs, all the physical cores in the server must be licensed again.

To make that concrete: on a 16-core host, licensing all 16 physical cores covers just two Standard Edition VMs, so running six such VMs means buying the 16-core license three times over. Oh yes, and once you have done that, you need to purchase CALs as well, for every user or device accessing a server. Note too that on Azure Stack you always have to pay the “base virtual machine” cost in addition to any licenses you supply.

This is why the only sane way to license Windows Server 2016 in a virtualized environment is to use the expensive Datacenter edition. Microsoft’s pay-as-you-use pricing will be better for most users.

Capacity model

This is your other option. It is a fixed annual subscription with two variants:

App Service, base virtual machines and Azure Storage: $400 per core per year
Base virtual machines and Azure Storage only: $144 per core per year

The capacity model is only available via an Enterprise Agreement (500 or more users or devices required); and you still have to bring your own licenses for Windows Server, SQL Server and any other licensed software required. For example, a single 16-core node would cost $6,400 per year under the first variant, or $2,304 under the second, before any Windows or SQL Server licenses. Microsoft says it expects the capacity model to be more expensive for most users.

SQL Server

There are two ways to use SQL Server on Azure. You can use a SQL database as a service, or you can deploy your own SQL Server in a VM.

The same is true on Azure Stack; but I am not clear how licensing works if you offer SQL databases as a service. In the absence of any other guidance, it looks as if you will have to bring your own SQL Server license, which will make this expensive. However, it would not surprise me if this ends up as an option in the pay-as-you-use model.

Using free software

It is worth noting that costs for both Azure and Azure Stack come way down if you use free software, such as Linux rather than Windows Server, and MySQL rather than SQL Server. Since Microsoft is making strenuous efforts to make its .NET application development framework cross-platform, that option is worth watching.

Support

You will have to get support for Azure Stack, since it is not meant to be user-serviceable. And you will need two support contracts, one with Microsoft, and one with your hardware provider. The hardware support is whatever you can negotiate with the hardware vendor. Microsoft support will be part of your Premier, Azure or Partner support in most cases.

Implications of Azure Stack

When Microsoft embarked on its Azure project, it made the decision not to use System Center, its suite of tools for managing servers and “private cloud”, but to create a new way to manage servers that is better automated, more scalable, and easier for end-users. Why would you use System Center if you can use Azure Stack? Well, one obvious reason is that with Azure Stack you are ceding a lot of control to Microsoft (and to your hardware supplier), as well as getting pushed down a subscription path for your software licensing. If you can handle that though, it does seem to me that running Azure Stack is going to be a lot easier and more productive than building your own private cloud, for most organizations.

This presumes of course that it works. The big risk with Azure Stack is that it breaks, and your IT administrators will not know how to fix it, because that responsibility has been outsourced to your hardware vendor and to Microsoft. It is possible, therefore, that an Azure Stack problem will be harder to solve than other typical Windows platform failures. A lot will depend on the quality control achieved both by Microsoft, for the software, and by its hardware partners.

Bottom line: this is the iPad of servers. You buy it but don’t really control it, and it is a delight to use provided it works.

No more infrastructure roles for Windows Nano Server, and why I still like Server Core

Microsoft’s General Manager for Windows Server Erin Chapple posted last week about Nano Server (under a meaningless PR-speak headline) to explain that Nano Server, the most stripped-down edition of Windows Server, is being repositioned. When it was introduced, it was presented not only as a lightweight operating system for running within containers, but also for infrastructure roles such as hosting Hyper-V virtual machines, hosting containers, file server, web server and DNS Server (but without AD integration).

In future, Nano Server will be solely for the container role, enabling it to shrink in size (for the base image) by over 50%, according to Chapple. It will no longer be possible to install Nano Server as a standalone operating system on a server or VM. 

This change prompted Microsoft MVP and Hyper-V enthusiast Aidan Finn to declare Nano Server all but dead (which I suppose it is from a Hyper-V perspective) and to repeat his belief that GUI installs of Windows Server are best, even on a server used only for Hyper-V hosting. He writes:

Prepare for a return to an old message from Microsoft, “We recommend Server Core for physical infrastructure roles.” See my counter to Nano Server. PowerShell gurus will repeat their cry that the GUI prevents scripting. Would you like some baloney for your sandwich? I will continue to recommend a full GUI installation. Hopefully, the efforts by Microsoft to diminish the full installation will end with this rollback on Nano Server.

Finn’s main argument is that the full GUI makes troubleshooting easier. Server Core also introduces a certain amount of friction as most documentation relating to Windows Server (especially from third parties) presumes you have a GUI and you have to do some work to figure out how to do the same thing on Core.

Nevertheless I like Server Core and use it where possible. The performance overhead of the GUI is small, but running Core does significantly reduce the number of security patches and therefore required reboots. Note that you can run GUI applications on Server Core, if they are written to a subset of the Windows API, so vendors that have taken the trouble to fix their GUI setup applications can support it nicely.

Another advantage of Server Core, in the SMB world where IT policies can be harder to enforce, is that users are not tempted to install other stuff on their Server Core Domain Controllers or Hyper-V hosts. I guess this is also an advantage of VMware. Users log in once, see the command-line UI, and do not try installing file shares, print managers, accounting software, web browsers (I often see Google Chrome on servers because users cannot cope with IE Enhanced Security Configuration), remote access software and so on.

Only developers now need to pay attention to Nano Server, but that is no reason to give up on Server Core.

PowerShell documentation stubs: frustrating for users

I’ve been writing a piece on PowerShell, Microsoft’s generally excellent scripting and automation platform. PowerShell is also largely open source, in its cross-platform, PowerShell Core guise.

Of course I went straight to the official documentation as part of my research. Looks good; but I was puzzled. I would find a promising topic like Object Pipeline, which says:

In this chapter, we will describe how the Windows PowerShell pipeline differs from the pipelines of most popular shells, and then demonstrate some basic tools that you can use to help control pipeline output and also to see how the pipeline operates.

Then I clicked around to read the chapter, and struggled to find the content.

Eventually I figured out the problem by going to the GitHub repository for the documentation. This content is not yet written. Microsoft’s JuanPablo Jofre has written stubs, either with the intention of completing them later, or perhaps in the hope that the community will step up and help.

Open source is great, but the user experience of finding stub documents in official documentation, not clearly marked as such, is frustrating.

I do not recall this kind of issue in Microsoft documentation written in the old closed-source world, which makes me wonder if the documentation team is under-resourced – though the most important part of the documentation, the cmdlet reference, is pretty good in my experience.

My view of PowerShell is that it is now a critically important part of the Microsoft platform. It is not only a tool for managing Windows Server, but also for Microsoft’s online services such as Office 365, Azure Active Directory and other Azure services.

It is worth getting the documentation right.

Windows S: another go at locking down Windows, but the Store is not ready and making it ready is a challenge

There were two big ideas behind Surface RT and Windows RT, the 2012 Windows 8 project which left Microsoft (and some OEM partners) with a mountain of unsold hardware. One was to compete with iPads and Android tablets by making Windows a touch-friendly operating system. The second was that Windows had to move on from being vulnerable to being damaged or completely broken by applications. Traditional Windows applications have installers that run with full admin rights and there is nothing much to stop them installing files in the wrong places, setting themselves to start up automatically, or bloating the Registry (the central configuration database in Windows). “My PC is so slow” is a common complaint, and the cumulative effect of successive application installs is one of the key reasons. Vulnerability to malware is another problem, and one which anti-virus software can never solve completely.

Windows RT solved these problems by disallowing application installs other than via the Windows Store. At that time, Windows Store apps were also locked down, so that a malware infection was only possible if there were a bug in the operating system.

Why did Surface RT and Windows RT fail? The ARM-based hardware was rather slow, which was one of the issues, but a more serious flaw was the lack of compelling applications in the Store. Why was that? Complex reasons, but the chief one is that Windows RT was caught in a cycle of failure. Developers want to make money, and the Windows 8 Store was not sufficiently popular with users to give them a big market. At the same time, users who tried the Store found few applications worth their time, and therefore rarely used it.

The problem was compounded by the unpopularity of Windows 8, which was an unfamiliar environment for the existing Windows users who formed the primary market.

Nevertheless, the thinking behind Windows 8 and Windows RT was not completely off the mark. If only it could get over the hump of unpopularity and lack of apps, it could usher in a new era of Windows devices that were secure, touch-friendly, and resistant to performance decay.

It never did, and with Windows 10 Microsoft appeared to give up. The desktop was back, mouse and keyboard were again primary, and Store apps now ran in windows on the desktop. A special Tablet Mode attempted to make Windows 10 as touch-friendly as Windows 8, but did not succeed.

Windows still has those problems though, the ones which Windows RT was intended to solve. Could there be another approach which would fix those issues but in a manner more acceptable to users?


Windows S and the Surface Laptop, announced today in New York, are the outcome. It is still Windows 10, but Microsoft has flipped a switch that forces all apps to be installed from the Windows Store. This switch is already present in the latest version of Windows 10, the Creators Update, but is off by default:

[Screenshot: the Store-only apps setting in the Creators Update]

Microsoft has also taken steps to make the Store more attractive for developers. It is no longer necessary to develop apps on a new platform within Windows, as it was for the Windows 8 Store. Now you can simply take your existing desktop application and wrap it to enable Store download. This feature is called the Desktop Bridge, or Project Centennial. Applications wrapped in this way are not as secure as Windows 8 Store apps were; they can write to files anywhere the user has permission. At the same time, Microsoft has taken steps to make Desktop Bridge apps better isolated than normal desktop applications. You can read the details of how this works here.

Applications install all their files to a private location instead of system locations, and Windows hides this fact from the application code by using redirection. The same is true of the registry. This approach means that file version problems and registry bloat are much less likely. Such issues are still possible, because the Desktop Bridge does not redirect file or registry calls outside the application package; these are allowed, if the user has permission, for compatibility reasons. Nevertheless, it is a big advance on old-style Windows desktop application installs.
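To illustrate the point (a hypothetical sketch, not any Bridge API – the application and key names are invented): a wrapped application keeps making ordinary file and registry calls, and the redirection happens beneath them.

using System;
using System.IO;
using Microsoft.Win32;

class SettingsStore
{
    public static void Save()
    {
        // An ordinary per-user registry write. Under the Desktop Bridge this lands in
        // the package's private registry hive rather than the shared HKCU, so
        // uninstalling the app removes the entries cleanly.
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\MyApp"))
        {
            key.SetValue("LastRun", DateTime.UtcNow.ToString("o"));
        }

        // Writes under the user's AppData are similarly redirected to a per-package
        // location; writes outside the package and profile (Documents, say) go through
        // unredirected, which is why Bridge apps are less locked down than Windows 8
        // Store apps were.
        string dir = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
        File.WriteAllText(Path.Combine(dir, "myapp.log"), "started " + DateTime.UtcNow);
    }
}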

When the user removes a Desktop Bridge application, in most cases all its files and registry entries are cleanly removed.

An important additional protection is that applications submitted to the Store are vetted by Microsoft, so malicious or badly behaved applications should not get through.

Windows S will be installed by default both on Surface Laptop and on a new generation of low-end laptops aimed mainly at the education market.

The benefits of Windows S are real; but unfortunately Microsoft still has not solved the Store problem. Currently, your favourite Windows applications are not in the Store. Microsoft Office will be there, thanks to the Desktop Bridge, but many others are not.


Microsoft’s big bet is that thanks to Windows S and other initiatives, the Store will be sufficiently attractive to developers, and sufficiently easy to target, that it will soon offer a full range of applications including all your favourites.

Right now though, if you get a Windows S laptop, you will probably end up buying the upgrade to Windows 10 Pro, for $49.00 or equivalent. Then you can install any Windows desktop application. However, by doing so you make it unnecessary for developers to bother using Desktop Bridge to wrap their applications – so they might never do so.

Windows S has a few other limitations:

Microsoft Edge is the default web browser on Microsoft 10 S. You are able to download another browser that might be available from the Windows Store, but Microsoft Edge will remain the default if, for example, you open an .htm file. Additionally, the default search provider in Microsoft Edge and Internet Explorer cannot be changed.

In addition, it cannot join a local Windows domain (a problem for many businesses), though it can join Azure AD, the Office 365 directory.

Microsoft’s goal here is worthwhile: to move Windows into a new place in terms of security and resilience. Getting it there though will not be easy.

Xamarin Challenge shows bumps in Microsoft’s path to cross-platform mobile

Microsoft ran a Xamarin Challenge over on Paul Thurrott’s site. The idea was to demonstrate how to build a cross-platform mobile app with Microsoft’s Xamarin toolkit.

The challenge was in three steps. You build a weather app, complete with crash analytics on the Visual Studio Mobile Center.

[Screenshot: the weather app]

Someone did a lot of work on this, and the app looks pretty and works nicely once you get it running.

Despite this, I am not sure that the challenge was altogether successful. It is a step-by-step guide which in theory requires no developer expertise, as you simply copy and paste code into your project. I am not sure that is the best way to learn, but that is by the by. I doubt that learning how to code for Xamarin was the primary goal of the challenge. I’d guess it was more about showing how easily you can build a cross-platform app (Android, iOS and Windows UWP) using Xamarin, C# and Visual Studio 2017.

Well, in fact a little developer expertise was required to complete the challenge, because the step-by-step instructions did not quite work (in my experience). I did not make a note of all the times I had to do something not in the given steps, but there were many occasions, the main issues being the Visual Studio Android emulator, NuGet package management, and a few small tweaks to the code itself. The code as given made no allowance for the cloud services it called being offline, or for the internet connection being unavailable, and would simply crash in either case.
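Guarding such calls is not difficult. Here is a sketch of the kind of defensive wrapper the sample code lacked (hypothetical: the WeatherData class and URL parameter stand in for the challenge’s own types, and Newtonsoft.Json is assumed, as used in the challenge):

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class WeatherData // stand-in for the challenge's model class
{
    public string Summary { get; set; }
    public double Temperature { get; set; }
}

public static class WeatherClient
{
    // Returns null instead of crashing when the service is offline or unreachable
    public static async Task<WeatherData> TryGetWeatherAsync(HttpClient client, string url)
    {
        try
        {
            string json = await client.GetStringAsync(url);
            return JsonConvert.DeserializeObject<WeatherData>(json);
        }
        catch (HttpRequestException)
        {
            return null; // no network or service down: caller shows a message instead
        }
        catch (TaskCanceledException)
        {
            return null; // request timed out
        }
    }
}

The caller then checks for null and displays a friendly error, rather than letting the exception take the app down.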

The challenge could be an excellent resource for Microsoft and Xamarin if the company drills down into the problems developers experienced trying to complete the challenge, recorded in this forum thread. Here are a few examples:

Myself and 5 other developers in our office attempted the challenge and none of us have been able to get past the first challenge. We are not Microsoft Visual Studio experts so we had hoped following the provided instructions would be sufficient.

The upload was failing on a discrepancy between 2 different versions of the Json package, which somehow had crept into the project. Installing over 40 updates in Nuget resolved this.

Many thanks for running this challenge – this was very useful and worthwhile. I just wish modern development did not feel like trying to dance on a mile high stack of chairs with a leg missing on the bottommost one!

I got a late start on the challenge and was able to complete part 1 pretty quickly but was only able to run the UWP locally. I cannot seem to get either the Windows mobile emulator or Android emulators to run successfully. I can’t deploy to the Window Mobile emulator, it returns an error indicating the emulator failed to start. As for the Android emulator, it launches, but the emulator does not have a connection to the network, so the application encounters an exception.

Note that those posting to the forum were more likely to be the ones with problems; there could in theory be many others who breezed through without any issues. But as one participant writes, “I’d be interested in what percentage of participants actually got to the end of the challenge with no problems.”

I like Xamarin; it does an amazing job in enabling cross-platform development with C# and it would be my tool of choice for cross-platform mobile development. It is not always straightforward though, and the kinds of issues experienced by the challenge participants illustrate what can go wrong.

If you just use the native toolkits, such as Android Studio and Xcode, you will have a smoother experience, but of course miss out on the productivity benefit of cross-platform code. That is the trade-off you make.