What is ML? What is AI? Why does it matter?

ML (Machine Learning) and AI (Artificial Intelligence) are all the rage and changing the world, but what are they?

I was asked this recently which made me realise that it is not obvious. So I will have a go at a quick explanation, based on my own perception of what is going on. I am not a data scientist so this will be a high-level take.

I will start with ML. The ingredients of ML are:

1. Data

2. Pattern recognition algorithms

Imagine that you want to identify pictures that contain images of people. The data is lots of images. What you want is an algorithm that automatically detects which images contain people. Rather than trying to code this on your own, you give the ML system a quantity of images that do contain people, and a quantity of images that do not. This is the training process. Once trained, the ML system will predict, when shown an image, whether or not it contains people. If your training has been successful, it will have a high success rate.
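To make the train-then-predict workflow concrete, here is a minimal sketch in Python using scikit-learn, with random numbers standing in for image pixels and labels (so the accuracy will hover around chance, but the shape of the process is the same as described above):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for flattened image pixels: 200 "images" of 64x64 greyscale values
images = rng.random((200, 64 * 64))
# Labels supplied by a human: 1 = contains people, 0 = does not
labels = rng.integers(0, 2, size=200)

train_x, test_x, train_y, test_y = train_test_split(images, labels, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)  # the pattern recognition algorithm
model.fit(train_x, train_y)                # the training process

# Once trained, the model predicts whether new images contain people
print("Predictions:", model.predict(test_x[:5]))
print("Success rate on held-out images:", model.score(test_x, test_y))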

The combination of the algorithm and its parameters (the values the system learns during training) is called a model. There are many types of model, and a number of different ML systems, from open source (e.g. TensorFlow) to big-brand services like Amazon Machine Learning, Azure Machine Learning, and Google Machine Learning.

So what is AI? This is a more generic term, so we can say that ML is a form of AI. IBM describes its Watson service as AI – Watson is really a bunch of different services so that makes sense.

A quick way to think of AI is that it answers questions. Is this customer a good credit risk? Is this component good or faulty? Who is the person in this picture?

Another common form of AI is a chatbot or conversational UI. The key task here is natural language understanding, possibly accompanied by speech-to-text transcription if you want voice input, and then a back-end service that generates a response to what it thinks the language input means. I coded a simple one myself using Microsoft’s Bot Framework and LUIS (Language Understanding Intelligent Service). My bot just performed searches, so if you wrote or said “tell me about x”, it would do a search for x and return the results. Not much use; but you can see how the concept can work for things like travel bookings and customer service. The best chatbots understand context and remember previous conversations, and when combined with personal information like location and preferences, they can get a lot of things right by conjecture. “Get me a taxi” – “Is that to Charing Cross station as usual?”
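Something like the following toy Python sketch captures the idea: pull an intent and an entity out of the text, then hand them to a back-end action. This is not LUIS or the Bot Framework; a real language understanding service uses trained models rather than a regular expression, so treat this purely as an illustration of the flow.

import re

def understand(utterance):
    # A real language understanding service returns an intent and entities;
    # here a regular expression plays that role.
    match = re.match(r"tell me about (.+)", utterance.strip(), re.IGNORECASE)
    if match:
        return {"intent": "Search", "query": match.group(1)}
    return {"intent": "None"}

def respond(result):
    if result["intent"] == "Search":
        # The back-end action: run a search and return the results.
        return "Here is what I found for '{}'...".format(result["query"])
    return "Sorry, I did not understand that."

print(respond(understand("Tell me about travel bookings")))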

Internet search has morphed into a kind of AI. If you type “What is the time?”, the answer comes up on the screen.

image

The more the search engines personalise the search results, the more assumptions they can make. In the example above, Bing has used what it thinks is my location to give me the time where I am.

AI can also take decisions. A self-driving car, like a human driver, takes decisions constantly: whether to stop or go, what speed to travel at, whether to turn this way or that. It uses sensors and pattern recognition, as well as its programmed route, to make those decisions.

Why does AI matter? It feels like early days; but there are obvious commercial applications, now that using ML and AI is within reach of any developer.

Marketers and advertisers are enthusiastic because they love targeting – and consumers often prefer more relevant advertising (though they might prefer less advertising even more). Personalisation is the key, and as mentioned above, ML and AI are good at answering questions. The more data, the more personal the targeting. How much does this person earn? Male or female? Where are they? Single or in a relationship? Do they have children? Even answering these (and many more) questions somewhat inaccurately will greatly increase the ability of marketers to offer the right product or service at the right moment.

Of course there are privacy questions around this. There are other questions too. What about the commercial advantage this gives to those few entities that hold huge volumes of personal data, such as Google and Facebook? What about when showing people “more relevant content” becomes a threat to democracy, because individuals get a distorted view of the world, seen through a tunnel formed by their own preference to avoid competing views? Society is only just beginning to grasp the implications.

Another key area is automation. Amazon made a splash by opening a store where you do not have to check out: object recognition detects what you buy and charges your account automatically. Fewer staff needed, and more convenient for shoppers.

Detecting faulty goods on a production line is another common use. Instead of a human inspecting goods for flaws, AI can identify a high percentage of problems automatically. It may be just a case of recognizing patterns in images, as discussed above.

AI can go wrong though. An example was mentioned at an event I attended recently. I cannot vouch for the truth of the story, but it is kind-of plausible. The task was to help the military detect tanks hidden in trees. They took photos of trees with hidden tanks, and trees without hidden tanks, and used them for machine learning. The results were abysmal. The reason: all the photos which included tanks were taken on an overcast day, and those without tanks on a clear day. So the ML decided that tanks only hide on cloudy days.

ML is prone to this kind of mistake. What similar problems might occur when applied to people? Could ML make inappropriate inferences from characteristics such as beards, certain types of clothing, names, or other things about which we should be neutral? It is quite possible, which is another reason why applications of AI need an ethical framework as well as appropriate regulation. This will not happen smoothly or quickly, and will not be universally implemented, so humanity’s use of ML and AI is something of a social experiment, with potential for both good and bad outcomes.

QCon London: the Ethics track and the psychology of software

The most significant thing about the Ethics track at QCon London, a software development conference I attended last week, is that it existed. I can recall ethics being discussed at QCon in previous years (including a memorable appeal by Martin Fowler of ThoughtWorks about rectifying the gender imbalance in IT) but not a specific track.

Why does ethics matter more today? Ethics has always mattered, but the power of software over our lives is increasing. It is possible that algorithms at Facebook, YouTube and Twitter influenced the result of the last US election and the UK’s Brexit referendum. Algorithms play a large role in influencing many of our choices: what to buy, where to eat, where to stay, which airline to book, which vendor to use.

Software also consumes more of our time than ever, as we constantly check our phones for notifications, play games or read online content.

The increasing importance of AI (Artificial Intelligence) also raises ethical questions. Last week I attended the Re-work AI Assistant Summit, also in London. One of the sessions concerned “Building an AI Friend”, presented by Artem Rodichev from Replika. The demos were impressive, showing how a bot can be engaging and help users to talk about what matters to them. I asked though if the company had thought about ethical issues, for example if a child became attached to a bot without realising it was non-human. The answer I got was in effect a blank look, followed by the statement “we have a minimum age limit of 7”. The company has no announced business model, but I would encourage it to form an ethical policy early, as these things are hard to bolt on in retrospect – as Facebook is discovering today, in the aftermath of revelations about how personal data it holds has been misused by third parties.

AI is also poised to take over more jobs previously done by people. This could be a great liberator for humanity, or alternatively divide society even more deeply into haves and have-nots.

We need more ethics discussion then; but is it too late? Well, it is never too late to improve matters, but perhaps much harm could have been avoided if the industry had focused on this earlier.

I attended a talk by Alexander Steinhart (a technologist at ThoughtWorks) on the psychologist’s perspective on ethics in technology.

image

Steinhart talked about addiction. “We all want to unplug, but cannot”, he said.

“Now we are all connected. On average people are nearly three hours online every day. They check phones every 7 to 15 minutes. Many people have difficulties in finding the right balance.”

When is a habit an addiction? When it “gets into the way of your life and you can’t do anything else, and when you try to change behaviour you don’t manage,” said Steinhart, mentioning that “distraction” is identified as a risk by many people today, including teenagers.

Interruptions and distractions are detrimental to our productivity and also a source of stress, he said. Once you are distracted, it takes 20-25 minutes to recover your focus. “Take care that you are not connected all the time.”

Unfortunately we have also developed an “attention economy” where web sites and apps are rewarded for holding our attention and they have evolved to do that effectively.

A great way, apparently, to get us addicted is to have mechanisms that only occasionally reward us. We will try and try again in hope of reward. Lotteries are like this. So are slot machines. So too, says Steinhart, are things like notifications in apps, or the action of pulling down to refresh emails or other feeds. Most of the time we get nothing of value and we know that. But occasionally something really good arrives. The possibility keeps us hooked.

Another difficulty is that humans do not always cope well with abundance. When a previously scarce resource, food for example, becomes abundant, logically what should happen is that we become more discriminating, selecting only the best and discarding the rest. In practice though this is not the case, and we have seen the ascendance of junk food that does us harm.

We now have abundant information. Answering a question that might once have required a trip to the library or several phone calls can now be done in an instant. That is fantastic; but are we coping well? Somehow, instead of becoming more discriminating about the sources and value of available information, humanity is prone to consuming more and more information of low quality, whether that is banal time-wasting or actual falsehoods and information that is intended to deceive or mislead us.

Steinhart argues that we have moved into a new technological era but have not yet learned how to manage it. He draws an analogy with urbanisation; it took mankind a while to learn how to build cities that were agreeable places in which to live.

Human needs include some that are ill served by today’s technological landscape. We need to experience “all of the different senses, to smell, to taste.” We need privacy and solitude. “If you put managers alone for one hour in a room with nothing to do, they make better decisions the rest of the day,” claims Steinhart. We also need conversations, not just connections. “There is so much human interaction that you cannot digitise, like looking someone in the eye,” he said.

How does this translate to ethics in technology? We need positive computing and software design that is “aligned with human goals,” he said.

Free and open source software is helpful in this respect, because the goals of the software are aligned with our needs rather than profit.

What can software developers do? “It is not your fault that technology is distracting,” said Steinhart, “but it’s your responsibility to change something.”

image

It is interesting to imagine what software might look like if designed for human needs rather than business interests. Steinhart’s ideas are around making software quieter, designed to get out of our way rather than to interrupt us, smartphones that encourage us to leave them alone, and of course to avoid anti-patterns which feed addiction or deliberately try to trip us up.

I noticed this tweet today about how an Amazon app behaves when you try to cancel. The user clicks Cancel subscription and gets this:

image

The following screen reverses the button colouring so that if you trained yourself to tap the faint button, you actually do the opposite of what you intend:

image

Until the last screen (there’s another one?) where they switch again:

image

This was not in Steinhart’s talk, but it seems a good example of software designed for the business and not for the user.

I have seen a similar pattern in Amazon’s web checkout where you have to click carefully to avoid being signed up for Amazon’s Prime subscription by accident. Not good.

Ethics and technology

This post is long enough; but there is, I hope, much more to say on this subject.

Despite enjoying Steinhart’s talk and others in the Ethics track, I was not encouraged. We need, of course, regulation as well as more principled businesses, but we do not yet know what such regulation should look like, nor how to implement it.

One thing though is worth repeating: if as a software developer you are asked to do something that is ethically unacceptable, you should refuse. Professional standards include more than quality of coding.

A week of QCon: introduction

I attended QCon London last week and found it fascinating, but have not written as much about it as I intended because of various other deadlines. In order to address this I will do a quick daily post for the next week or so.

QCon is a software development conference run by InfoQ. It is vendor-neutral and focuses on large-scale enterprise development as well as future trends, language choices and changes, software architecture and more. If you delve into the history of the event it has championed techniques including Agile development, Service Oriented Architecture, Microservices, and now AI. The event has a culture and an ethos, which is something to do with human-centred software, team communications, taking the side of the user, aversion to unnecessary complexity, and constant exploration of emerging technology.

image
Laura Bell of SafeStack speaks at QCon London on Architecting a Culture of Secure Software.

QCon, like many other events, encourages attendees to give feedback on sessions they attend. At other events I have often seen forms with several categories and questions like “How well did the speaker know their subject” and “What was your biggest takeaway from this session”? While such questions are reasonable, the problem is that they are too difficult and time-consuming and therefore not many respond, or the responses are of low quality. The QCon organisers decided years ago that the only feedback system that works is to have attendees vote good, indifferent or poor as they leave. This used to be done with coloured paper and is now electronic. I mention this because it says something about the event culture: let’s prefer something that works and is not a burden, despite the seeming crudity of a 1-2-3 scoring system. And of course even such basic information is highly valuable in discerning which sessions were most appreciated.

The event prefers practitioners, engineers and team leads over evangelists, trainers and consultants. It attracts a particularly able audience:

image

Of course you can learn plenty outside the actual sessions by chatting to other attendees.

Up next: technical ethics at QCon London.

Kaspersky encrypted connection scanning breaks ADFS login, internet-facing Dynamics CRM

I was asked to look at a case where a user could not log in to Dynamics CRM. This is an internet-facing deployment which uses ADFS (Active Directory Federation Services). The user put in valid credentials but received a 401 – unauthorized: Access is denied due to invalid credentials.

The odd thing from the user’s perspective is that everything worked fine on other PCs; but switching web browsers did not fix it.

I noticed that Kaspersky anti-virus was installed.

image

Pausing Kaspersky made no difference to the error. However I came back to this after eliminating some other possible problems. I noticed that if you looked at the certificate on the ADFS site it was not from the site itself, but a Kaspersky certificate.

image

The reason for this is that Kaspersky wants to inspect encrypted traffic for malware.

I understand the rationale but I dislike this behaviour. Your security software should not hide the SSL certificate of the web site you are visiting. Of course it is particularly dislikeable if it breaks stuff, as in this case. I found the setting in Kaspersky and disabled both this feature and another which injects script into web traffic for the sake of Kaspersky’s “URL Advisor” (though this second feature proved not to be the culprit here).

Personally I feel that encrypted traffic should only be decrypted in the recipient application. Kaspersky’s feature is an SSL Man-in-the-Middle attack and to my mind reduces rather than increases the security of the PC. However you made the decision to trust your anti-virus vendor when you installed the software.

There are other anti-virus solutions that also do this, so Kaspersky is not alone. As to why it breaks ADFS I am not sure, but I regard the failure as a good thing, since it means the login does not succeed while the user’s SSL connection is compromised.

image

As it turns out, it isn’t essential to disable the feature entirely. You can simply set an exclusion for the ADFS site by clicking Manage exclusions.

Posted here in case others hit this issue.

Your favourite article on The Register, and what that says about technology and the media

I’m at Mobile World Congress in Barcelona and meeting people new to me who ask, “Who do you write for?” I’ve been struck by several separate occasions when people say, after I mention The Register, “Oh yes, I loved that Apple article”.

The piece they mean (not one of mine), is this one by Kieren McCarthy. It recounts the Reg’s efforts to attend the iPhone 7 launch; or more precisely, efforts to get Apple PR to admit that the Reg is on a “don’t invite” list and would not be able to attend.

image

Why does everyone remember this piece? In short, because it is a breath of reality in a world of hype.

The piece also exposes hidden pressures that influence tech media. There are more people working in PR than in journalism, as I recall, and it is their job to manage media coverage so that it reflects as closely as possible the messaging that their customers, the tech companies, wish to put out.

Small tech companies and start-ups struggle to get any coverage and welcome almost any press interest. The giants though are in a more privileged position, none more so than Apple, for whom public interest in its news is intense. This means it can select who gets to attend its events and naturally chooses those it thinks will give the most on-message coverage.

I do not mean to imply that those favoured journalists are biased. I believe most people write what they really think. Still, consciously or unconsciously they know that if they drift too far from the vendor’s preferred account they might not get invited next time round, which is probably a bad career move.

Apple is in a class of its own, but you see similar pressures to a lesser extent with other big companies.

Another thing I’ve noticed over years of attending technology events is that the opportunities for open questioning of the most senior executives have diminished. They would rather have communication specialists answer the questions, and stay behind closed doors or give scripted presentations from a stage.

Here in Barcelona I’ve discovered the Plaça de George Orwell for the first time:

image

Orwell knew as well as anyone the power of the media, even though he almost certainly did not say what is now often attributed to him, “Journalism is printing what someone else does not want printed: everything else is public relations.”

Still, as I move into a series of carefully-crafted presentations, it is a thought worth keeping front of mind.

Finally, let me note that I have never worked full-time for The Register though I have written a fair amount there over the years (the headlines by the way are usually not written by me). The more scurrilous aspect of some Reg pieces is not really me, but I absolutely identify with The Register’s willingness to allow writers to say what they think without worrying about what the vendor will think. 

Microsoft introduces new feedback system for technical documentation, will delete existing comments

Microsoft is introducing a new feedback system for https://docs.microsoft.com, used for its technical documentation.

The new system, which you can already see for certain topics such as the Visual Studio IDE, is based on GitHub issues. When you leave a comment, you can specify whether it concerns documentation or product functionality.

image

So far so good, but the downside is that all existing comments will be deleted:

image

The statement “Old comments will not be carried over. If content within a comment thread is important to you, please save a copy.” is unhelpful. Nobody knows what comments will be useful to them in future.

Few things sap enthusiasm for community participation more than having all the past contributions into which you have put effort suddenly zapped. Nor is this the first time, as user guibirow notes:

As much I like the new system idea, I hate the fact that this is happening over and over.
It used to be a Disqus comment system, then moved to LiveFyre, then moved now to this new system, what will be the next?
The worst part of this all is that MS does not care about past content lost on these discussions, so many times I found issues described in the docs that are gone now.
Please, pay attention to your previous mistakes, don’t let the information be lost again, at lest import them as closed issue in the new system.

Sometimes progress has a cost and that is understood. However it is not impossible to migrate content from one system to another. It just takes effort.

Update: Microsoft’s Rob Eisenberg has responded with an explanation and mitigation plan regarding existing comments. He says that a straight migration of the comments is impossible:

    • There is a lot of garbage, spam and even dangerous content within existing LiveFyre comments which would violate GitHub terms of usage and our open source code of conduct, as well as cause security problems.
    • There isn’t a good way to map LiveFyre users to GitHub users and using a bot account to anonymously add comments is questionable with respect to OSS practices and GitHub terms of use.
    • For legal and privacy reasons, we cannot move user-associated data from one system to another without consent from users (GDPR).
    • LiveFyre conversations are threaded, while GitHub issues are not.
    • Placing the old comments into the GitHub Issues system would derail the entire GitHub Issues workflow for both customers and employees and muddle the data.
    • It isn’t clear whether there is a way to invoke GitHub APIs for a migration of this scale such that it wouldn’t violate GitHub API terms of use.

He also has an archiving proposal:

We would take the comments from an article on docs.microsoft.com and then convert them into a Markdown file. During this process, we would strip all user info (remember GDPR). The Markdown file would then be committed to a GitHub repo. Finally, at the bottom of the feedback section, next to the link that says "View on GitHub" we would add a second link that said something like "View Comment Archive". This link would connect you directly to the Markdown comment file for that page.

This sounds positive. At the same time, it is a mess that illustrates some of the disadvantages of a “best of breed” approach to solving technical problems. If Microsoft used its own technology to host a documentation and commenting system (and a source code management and issue tracking system, for that matter), this issue would not occur, users would not need multiple accounts, and the legal issues mentioned above would not arise.

Microsoft in fact used to use its own platform for all the above but decided to shift to using third-party solutions because they worked better. That seemed to be a good thing, improving user experience and productivity, but becomes a problem when what seems to be the best third-party option changes.

Setting up PHP for development on Windows Subsystem for Linux in Windows 10

I have been working a little with PHP, for the first time for a while, and soon found it annoying not to have the convenience of instant application testing and line by line debugging. I have set up a PHP development environment before using XAMPP for Windows and Eclipse, but it was fiddly. I also prefer PHP on Linux, which is where my scripts will be running.

Since Windows 10 now has a Linux environment built-in, called Windows Subsystem for Linux (WSL), I decided to set this up to run Apache, PHP and MySQL and to try debugging my scripts there.

My PC is a recent installation and I had not yet installed WSL. To do so, you have to both download a Linux distribution from the Store (I chose Ubuntu), and enable WSL in Windows features. Then restart, launch Ubuntu, set a username and password, and you are up and running.

Note that the Linux commands that follow should be run as root, using sudo.

Before doing anything else, I got Ubuntu up to date:

apt-get update

apt-get upgrade

Then I installed the LAMP suite:

apt-get install lamp-server^

(the final ^ is intentional; see the guide here).

To check that everything is working, I created the file phpinfo.php in /var/www/html with the following contents:

<?php phpinfo(); ?>

and restarted Apache:

/etc/init.d/apache2 restart

Note: if you have IIS running in Windows, or another web server, Apache will not be able to listen on port 80. Change the port in /etc/apache2/ports.conf and in /etc/apache2/sites-enabled/000-default.conf.
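For example, to move Apache to port 8080 (the file locations are the Ubuntu defaults; the port is just an illustration), change the Listen directive in ports.conf:

Listen 8080

and the matching line in 000-default.conf:

<VirtualHost *:8080>

then restart Apache and browse to localhost:8080 instead of localhost.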

Then I opened a web browser on the Windows side and browsed to localhost:

image

and

image

We are up and running, but not debugging PHP yet. Remember the basic rules of WSL:

  • you cannot change Linux files from Windows.
  • you can access Windows files from Linux.

We want to edit PHP from Windows, so we’ll define a site that uses Windows files. Windows files are under /mnt/c (or whatever drive letter you are using).

So if, for example, you have your PHP website in a folder called c:\websites\mysite, you can have Apache serve files from that folder.

The quickest way to get up and running is to create a symbolic link in the Apache home directory, in my case /var/www/html. Change to that directory and type:

ln -s /mnt/c/websites/mysite mysite

Now you can view the site at http://localhost/mysite/

This worked first time for me, complete with PHP running. You could also set up multiple virtual hosts in Apache, and use the hosts file in Windows to map other host names to localhost.
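For instance (the host name and paths here are hypothetical; adjust them to suit), you could add a line like this to C:\Windows\System32\drivers\etc\hosts on the Windows side:

127.0.0.1 mysite.local

and create a matching virtual host on the Ubuntu side, for example in /etc/apache2/sites-available/mysite.conf, enabled with a2ensite mysite and an Apache reload:

<VirtualHost *:80>
    ServerName mysite.local
    DocumentRoot /mnt/c/websites/mysite
    <Directory /mnt/c/websites/mysite>
        Require all granted
    </Directory>
</VirtualHost>

After that, http://mysite.local/ serves the same Windows folder.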

Next, you probably want PHP to show error messages. To do this, replace the default php.ini with the development version (or tweak it according to your own preferences). At the time of writing, on Ubuntu, the default PHP version is 7.0 and php.ini-development is located at /usr/lib/php/7.0/php.ini-development. So I backed up the ini file at /etc/php/7.0/apache2, replaced it with the development version, and restarted Apache. My PHP form immediately showed me a non-fatal undefined index error, so it worked.
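In command form, that was roughly the following (assuming the default 7.0 paths mentioned above):

cd /etc/php/7.0/apache2

cp php.ini php.ini.bak

cp /usr/lib/php/7.0/php.ini-development php.ini

/etc/init.d/apache2 restart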

There is one small inconvenience. Apache in WSL will only run during the session. So before starting work, you have to open Ubuntu and type:

sudo apache2ctl start

Background task support is coming to WSL, but in any case I do not regard this as a big problem.

OK, this is cool, we can make changes in the PHP code in our favourite Windows editor, save, and view the results directly in the browser. But what about line-by-line debugging? For this, we are going to use Visual Studio Code with the PHP Debug extension:

image

Then on the Ubuntu side:

apt-get install php-xdebug

Restart Apache:

apache2ctl restart

Check that phpinfo.php now shows an Xdebug section. Then edit php.ini and add the following:

[XDebug]
xdebug.remote_enable = 1
xdebug.remote_autostart = 1

Restart Apache again and XDebug is ready to go.

Over in Visual Studio code there is a little more work to do. The problem is that although everything is running on localhost, the location of the files looks different to Linux than to Windows. We can fix this with a pathMappings setting. In Visual Studio code, open the PHP file you want to debug. Click the Debug icon and then the little gearwheel near top left; this will open launch.json. By default there are a couple of settings for XDebug. These are OK for a default setup, but we need to add path mapping so that the debugger knows where to find the files. For example:

image
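In text form, a minimal launch.json along these lines should work; the server path assumes the symbolic link created earlier, and port 9000 is Xdebug’s default, so treat both as assumptions to adjust for your own setup:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for XDebug",
            "type": "php",
            "request": "launch",
            "port": 9000,
            "pathMappings": {
                "/var/www/html/mysite": "${workspaceRoot}"
            }
        }
    ]
}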

Now you can set a breakpoint, start debugging, and open the page in your browser:

image

More guidance on the PHP Debug extension by Felix Becker is here.

Final thoughts

This is cool; but is it better or worse than an old-style VM running Linux and PHP? The WSL solution is lightweight and convenient, but unlike a VM it is not isolated, and you may hit issues that are unique to WSL, because not everything that runs on a full Linux installation runs under WSL. I did happen to suffer crashes in Visual Studio and in Outlook while WSL was running; it may well be coincidence, but I cannot help wondering if WSL might be to blame.

Still, a great feature of WSL is that when you exit your session, it goes away, so it is not too intrusive. I plan to use it for PHP debugging and will see how it goes.

The price of free Wi-Fi, and is it a fair deal?

Here we are in a pub trying to get on the Wi-Fi. The good news: it is free:

image

But the provider wants my mobile number. I am a little wary. I hate being called on my mobile, other than by people I want to hear from. Let’s have a look at the T&C. Luckily, this really is free:

image

But everything has a cost, right? Let’s have a look at that “privacy” policy. I put privacy in quotes because in reality such policies are often bad news for your privacy:

image

Now we get to the heart of it. And I don’t like it. Here we go:

“You also agree to information about you and your use of the Service including, but not limited to, how you conduct your account being used, analysed and assessed by us and the other parties identified in the paragraph above and selected third parties for marketing purposes”

[You give permission to us and to everyone else in the world that we choose to use your data for marketing]

“…including, amongst other things, to identify and offer you by phone, post, our mobile network, your mobile phone, email, text (SMS), media messaging, automated dialling equipment or other means, any further products, services and offers which we think might interest you.”

[You give permission for us to spam you with phone calls, texts, emails, automated dialling and any other means we can think of]

“…If you do not wish your details to be used for marketing purposes, please write to The Data Controller, Telefónica UK Limited, 260 Bath Road, Slough, SL1 4DX stating your full name, address, account number and mobile phone number.”

[You can only escape by writing to us with old-fashioned pen and paper and a stamp; note that you have to include your account number for an account you likely have no clue you even have; and even then, who is to say whether those selected third parties will treat your personal details with equal care and concern?]

A fair deal?

You get free Wi-Fi, O2 gets the right to spam you forever. A fair deal? It could be OK. Maybe there won’t in fact be much spam. And since you only give your mobile number, you probably won’t get email spam (unless some heartless organisation has a database linking the two, or you are persuaded to divulge it).

In the end it is not the deal itself I object to; that is my (and your) decision to make. What I dislike is that the terms are hidden. Note that the thing you are likely to care about is clause 26 and you have to not only view the terms but scroll right down in order to find it.

And why the opt-out by post only? There is only one reason I can think of: to make it difficult.

What the Blazor! After Silverlight, .NET in the browser reappears by another route

Silverlight, Microsoft’s browser plug-in which included a cut-down .NET runtime, once seemed full of promise for developers looking for an end-to-end .NET solution, cross-platform on Windows and Mac, and with support for “out of browser” applications for a native-like experience.

Silverlight was killed by various factors, including the industry’s rejection of old-style browser plug-ins, and warring factions at Microsoft which resulted in Silverlight on Windows Phone, but not on Windows 8. The Windows 8 model won, with what became the Universal Windows Platform (UWP) in Windows 10, but this is quite a different thing with no cross-platform support. Or there is Xamarin which is cross-platform .NET, and one day perhaps Microsoft will figure out what to do about having both UWP and Xamarin.

Yesterday Microsoft announced (though it was already known to those paying attention) Blazor, an experimental project for hosting the .NET Runtime in the browser via WebAssembly. The name derives from “Browser + Razor”, Razor being the syntax used by ASP.NET to combine HTML and C# in a web application. C# in Razor executes on the server, whereas in Blazor it executes on the client.

Blazor is enabled by work the Xamarin team has done to compile the Mono runtime to WebAssembly. Although this sounds like a relatively large download, the team is hoping that a combination of smart linking (to strip out unnecessary code in both applications and the runtime) with caching and HTTP compression will make this acceptable.

This post by Steve Sanderson is a good technical overview. Some key points:

– You can run applications either as interpreted .NET IL (intermediate language) or pre-compiled

– Blazor is an SPA (Single Page Application) framework with solutions for routing, state management, dependency injection, unit testing and more

– UI components use HTML and CSS

– There will be a browser API which you can call from C# code

– You will be able to interop with JavaScript libraries

– Microsoft will provide ASP.NET libraries that integrate with Blazor, but you can use Blazor with any server-side technology

What version of .NET will be supported? This is where it gets messy. Sanderson says Blazor will support .NET Standard 2.0 or higher, but not completely, in that some functions will throw a PlatformNotSupported exception. The reason is that not all functions make sense in the context of a Blazor application.

Blazor sounds promising, though the demo application on Azure currently gives me a 403 error, so there is this video from NDC Oslo instead.

The other question is whether Blazor has a future or will join Silverlight and other failed attempts to create a new application platform that works. Microsoft demands much patience from its .NET community.

HackerRank survey shows programming divides in more ways than one

Developer recruitment company HackerRank has published a survey of developer skills. The first place I look in any survey is who took part, and how many:

HackerRank conducted a study of developers to identify trends in developer education, skills and hiring practices. A total of 39,441 professional and student developers completed the online survey from October 16 to November 1, 2017. The survey was hosted by SurveyMonkey and HackerRank recruited respondents via email from their community of 3.2 million members and through social media sites.

I would like to see the professional and student responses shown separately. The world of work and the world of learning are different. This statement may also be incomplete, since several of the questions analyse what employers want, which suggests another source of data (not difficult to find for a recruitment company).

It is still a good read. It is notable for example that the youngest generation is learning to code later in life than those who are now over 35:

image

I am not sure how to interpret these figures, but can think of some factors. One is that the amount of stuff you can do with a computer without coding has risen. In the earliest days when computing became affordable for anyone (late seventies/early eighties), you could not do much without coding. This was the era of type-in listings for kids wanting to play games. That soon changed, but coding remained important to getting things done if you wanted to make a business database useful, or create a website. Today though you can do all kinds of business, leisure and internet computing without needing to see code, so the incentive to learn is lower. It has become a more specialist skill. It remains valuable though, so older people have reason to be grateful.

How do people learn to code? The most popular resource is Stack Overflow, followed by YouTube, with books coming in third. In truth the most popular resource must be Google search. Credit to Stack Overflow though: like Wikipedia, it offers a good browsing experience at a time when the web has become increasingly unpleasant to use, infected by pop-up surveys, autoplay videos and intrusive advertising, not to mention the actual malware out there.

No surprises in language popularity, though oddly the survey does not tell us directly what languages are most used or best known by the respondents. The most in-demand languages are apparently:

1. JavaScript
2. Java
3. Python
4. C++
5. C
6. C#
7. PHP
8. Ruby
9. Go
10. Swift

If you ask what languages developers plan to learn next, Go, Python and Scala head the list. And then there is a fascinating chart showing which languages developers prefer grouped by age. Swift, apparently, is loved by 75% of those over 55, but only by 15% of those under 25, the opposite of what I would expect (though I don’t know if this is a percentage of those who use the language, or includes those who do not know it at all).

Frameworks are another notable topic. Everyone loves Node.js; but two of the frameworks on offer are “.NET Core” and “ASP”. This is odd, since .NET Core is not really a framework, ASP normally refers to the ancient “Active Server Pages” framework which nobody uses any longer, and ASP.NET runs on .NET Core so is not an alternative to it.

This may be a clue that the HackerRank company or community is not well attuned to the Microsoft platform. That itself is of interest, but makes me question the validity of the survey results in that area.
