All posts by Tim Anderson

Using Windows 10 on a 4K display: issues in multi-monitor setups

I made the mistake of reading this post where programmer Nikita Prokopov explains why it is time to upgrade your monitor, particularly if you are a software developer. “I optimize my setup to showing really, really good letters. A good monitor is essential for that. Not nice to have,” he says, going on to explain why standard 1080p (1920 x 1080 pixel) displays have insufficient resolution to display text nicely (unless the display is also small, such as on a 13” laptop). You can use the tool here to calculate the PPI (pixels per inch). You should aim for 150 or more PPI; a 27″ 1080p display will get you 81.59 PPI.
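
The arithmetic behind that figure is simple: PPI is the diagonal resolution in pixels divided by the diagonal size in inches. Here is a quick C# sketch of the sum (the numbers are the examples above; I assume the linked tool does essentially the same calculation):

using System;

class PpiCalculator
{
    // PPI = diagonal resolution in pixels / diagonal size in inches
    static double Ppi(int widthPx, int heightPx, double diagonalInches) =>
        Math.Sqrt((double)widthPx * widthPx + (double)heightPx * heightPx) / diagonalInches;

    static void Main()
    {
        Console.WriteLine(Ppi(1920, 1080, 27.0)); // ~81.6 - well short of 150
        Console.WriteLine(Ppi(3840, 2160, 27.0)); // ~163.2 - comfortably over 150
    }
}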

Prokopov’s point is that if you spend all day looking at text (and I do), then you should make the effort (and bear the expense) of getting text to display properly; your eyes will thank you and you can work with less strain.

One of my displays is dying (needs new capacitors I suspect) so I took the bait and stumped up for a 4K screen. I did not do what Prokopov also suggested, which is to get a display with a 120Hz refresh rate. I looked into it; but you need to get a TN (twisted nematic) display, which involves some compromises in viewing angles and colours. I also did not want to spend £1700 or more. So I went for a 4K IPS display.

It has been educational. Prokopov is right; text looks much better. Note though that you cannot run at full resolution unless your display is huge; mine is 27” because it has to fit on the desk. It is quite fun at full resolution but the text is too small to read.

image

What you should do, says Prokopov, is to scale the display by an integer value. Therefore I scale to 200% (in Windows display settings). The display is now back to 1080p in terms of the size of most text, but at higher resolution, and text looks great.

There is a snag though, actually a couple of snags. One is that occasionally you hit an application that does not understand the scaling – like Open Live Writer – and the text is tiny. More significant for me though is what happens if you have multiple displays. Windows is smart enough to let you have different display settings for each screen. The problems come in two cases though:

– if you move the mouse from the 4K screen to the 1080p screen, it jumps vertically. Essentially, it retains the pixel coordinates from the 4K display and applies them to the 1080p display. So if the mouse is halfway down the 4K display, and you move it right onto a 1080p display, it jumps to the bottom of the screen.

– if you drag an application so that it straddles the two displays, it all goes wrong. An ordinary screen grab cannot capture the effect, so the exhibits below are snaps of the screen.

Exhibit one: what happens if the 4K display is set to 100% scaling and you drag an application to straddle across to a 1080p display (ignore the mottling effect, that’s just an artefact from snapping the screen):

image

Exhibit two: what happens if you have the 4K display set to 200% scaling and perform the same straddling act:

image

I appreciate the difficulties here, but possibly Windows could do better? Incidentally the weirdness fixes itself when you drag the window fully across to the 1080p display: it snaps back to normal.

The solution of course would be to get two or three 4K displays. An expensive solution though.

Developing software for playing bridge

I am a duplicate bridge player in my spare time and enjoyed playing in my local club once or twice a week. That was before COVID-19 and then, in March this year, lockdown. Bridge clubs were no longer able to meet. There are more important things in the world; but bridge is both a lot of fun and a welcome distraction from weightier matters, and my thoughts soon turned to what we could do to continue playing in these new circumstances.

The answer was to play online; but while there are plenty of ways to play bridge online, the existing systems were not designed with the idea of being a way for bridge clubs to meet in a new context. If anything, the reverse is true: online bridge sites were designed for people who could not easily get to a club or wanted to play at any time with whoever else happened to be available. Clubs like my own, by contrast, wanted to replicate their face-to-face meetings with an online equivalent. A further complication back in March was that the biggest online bridge site, called Bridgebase, was immediately overloaded and declared that it was unwilling to allow new people to qualify as directors, the people allowed to run online bridge sessions.

My immediate instinct was to build a new site for playing bridge. I was not quite starting from scratch. Back in the early days of Windows 8, I started work on a bridge game for Microsoft’s new and, as it turned out, ill-fated platform. I had got some way with it; I had created a bridge engine that understood about cards and hands and tricks and shuffling and scoring and all the various elements that go into playing bridge. It was written in C# and what is now UWP XAML. It was designed, of course, for a solo player. Here is the bidding screen:

image

and the play screen:

image

This is how it looks on Windows 10; it looked a bit better on Windows 8, though it would not win any prizes for design. My software could play bridge though; the reason I never finished it was that I never cracked getting the AI working. But for human-to-human play that did not matter. A weekend or two of coding, I thought, and I could have a website up and running so our club could play bridge online. I made an immediate start, registering the domain name YourBridgeClubOnline.co.uk.

Well, three months later and here we are.

image

image

It is, I have to say, still under development. But it works and we have been able to play bridge again, as a club.

What took you so long? Ha! Much of my old bridge engine code remains untouched and has proved useful; it all runs fine on .NET Core. Even the (useless) AI has been handy, as I can test the mechanics of play without involving others. But I had, of course, wildly underestimated the problem of converting a game for solo play on Windows to a multi-player web application. There is much to think about:

The UI. I am not a designer (I am sure you can tell) but spent ages puzzling over how to get a workable user interface in the browser for everything from tablets to desktops. Not smartphones yet but it is coming. I decided early on to take a view on compatibility. No Internet Explorer. JavaScript fetch API is required. When time is against you, it is easier to say, just use another browser, than to waste too much time supporting old browsers.

Messaging – both the API kind, and the chat kind. I am using C#, ASP.NET Core and SignalR. In general it works well. SignalR uses WebSockets as first preference, but falls back to Server Sent Events or long polling where necessary. In my first experiments I did my own polling and switching to SignalR was a great relief.
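
To give a flavour of the SignalR side, a hub can be as simple as this (a minimal sketch only, with invented names such as BridgeHub and ReceiveChat, not the application’s real code):

// A minimal SignalR hub: clients join a named group per table and chat messages
// are broadcast to everyone at that table.
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class BridgeHub : Hub
{
    public Task JoinTable(string tableId) =>
        Groups.AddToGroupAsync(Context.ConnectionId, tableId);

    public Task TableChat(string tableId, string user, string message) =>
        Clients.Group(tableId).SendAsync("ReceiveChat", user, message);
}

The hub is mapped to a URL in Startup (endpoints.MapHub<BridgeHub>("/bridgehub")) and the browser connects with the @microsoft/signalr JavaScript client; the fallback from WebSockets to Server Sent Events or long polling happens automatically.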

Registration and login. I am using the stuff that comes in the box, ASP.NET Core Identity. It has saved me a ton of work. It’s a bit annoying and not too well documented. I don’t really like using GUIDs for the primary key, for example, and I believe there is a way to avoid it, but it isn’t top priority when you are going for Minimum Viable Product.
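
For the record, I believe the way round the GUID keys is to derive your user class from the generic IdentityUser<TKey> and plumb the key type through the DbContext and the service registration – roughly like this (an untested sketch with invented class names, not what I am running):

// Sketch: an Identity user keyed by int instead of a GUID string.
// AppUser and AppDbContext are invented names for illustration.
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;

public class AppUser : IdentityUser<int>
{
    public string DisplayName { get; set; }   // extra profile data can live here too
}

public class AppDbContext : IdentityDbContext<AppUser, IdentityRole<int>, int>
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
}

// In Startup.ConfigureServices:
// services.AddDefaultIdentity<AppUser>()
//         .AddEntityFrameworkStores<AppDbContext>();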

JavaScript. I’ve written tons of it and I don’t even like the language. I have a new respect for it though. The thing is, it is very fast and there is nothing you cannot do. The worst thing is the friction of doing some debugging in the browser, and some in Visual Studio. I am thinking of switching to VS Code for development since it works nicely with ASP.NET Core and is better for JavaScript than Visual Studio.

Scoring. My Windows software could score a hand of bridge. But duplicate is different; you have to compare the scores with others who played the same hands and work out the percentages, then export the results to standard formats for display on club websites and submission to the English Bridge Union. It was more work than I had expected and I am not done yet; the system only understands Pairs at the moment, not Teams (a different way of scoring).
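
The core of Pairs (matchpoint) scoring is easy enough to sketch: on each board a pair gets two matchpoints for every other pair they beat and one for every tie, and the total is expressed as a percentage of the maximum available. Something like this (a simplified illustration, not my actual scoring code):

// Matchpoint a single board: scores are the North-South scores recorded at each table.
// e.g. Percentages(new[] { 420, 420, 170, -50 }) -> 83.3, 83.3, 33.3, 0.0
using System.Linq;

static class Matchpoints
{
    public static double[] Percentages(int[] scores)
    {
        int top = 2 * (scores.Length - 1);   // maximum matchpoints available on the board
        return scores
            .Select(s => 2 * scores.Count(o => o < s) + (scores.Count(o => o == s) - 1))
            .Select(mp => 100.0 * mp / top)
            .ToArray();
    }
}

The real thing also has to aggregate across all the boards played, cope with adjustments, and export the results, which is where the work is.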

Directing. Someone has to manage an online bridge session, settle any arguments, and fix errors like cards played by accident. It all needs coding and there was nothing like it in the Windows version.

Movements. Imagine you have 28 people playing bridge (or 14 pairs). They need to all play the same hands, but never play the same hand twice, and it has to be so arranged that each pair plays against other pairs in a defined sequence so it is balanced and fair. We call this the movement. Online, you have a bit more flexibility because you don’t need to share physical cards: everyone can play the same hand at the same time if you like. It is still quite fiddly though, and I did not do any of this in the old Windows version. I saved some time by writing an import function to enable re-use of movements made for EBUScore, a widely used scoring and bridge session management application. There is more to do though.
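
To give a flavour of the scheduling problem, here is the classic circle method for a round robin, where every pair meets every other pair exactly once. This is an illustration only, not a real duplicate movement (which also has to allocate boards and tables) and not code from the application:

// Circle method: pair 0 stays fixed while the others rotate one place each round,
// so with an even number of pairs everyone meets everyone else exactly once.
// e.g. Rounds(4) -> (0,3)(1,2), then (0,2)(3,1), then (0,1)(2,3).
using System;
using System.Linq;

static class RoundRobin
{
    public static (int, int)[][] Rounds(int pairs)
    {
        var ids = Enumerable.Range(0, pairs).ToArray();
        var rounds = new (int, int)[pairs - 1][];
        for (int r = 0; r < pairs - 1; r++)
        {
            rounds[r] = Enumerable.Range(0, pairs / 2)
                .Select(i => (ids[i], ids[pairs - 1 - i]))
                .ToArray();
            int last = ids[pairs - 1];
            Array.Copy(ids, 1, ids, 2, pairs - 2);  // rotate everyone except ids[0]
            ids[1] = last;
        }
        return rounds;
    }
}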

Claims. This is where, half way through the hand, a player says, “There’s no point in playing on, I’m obviously going to win all the remaining tricks.” A trick is a sequence of four cards played one from each hand, which is won by one of the pairs. This statement is called a claim, and has to be agreed by the other players. Getting this working was more difficult than I had expected – because built into my bridge engine was the idea that you could score by counting the tricks each side had won. But claimed tricks are never played. With hindsight, I should have allowed for this from the beginning.
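
The fix, in essence, is to record claimed tricks separately from played tricks and count both – something along these lines (a much simplified sketch, not the real engine):

// A bridge hand has 13 tricks; once a claim is agreed, the remaining tricks are
// assigned without being played, so they must be counted alongside the played ones.
public class HandResult
{
    public int DeclarerTricksPlayed { get; set; }    // tricks declarer actually won in play
    public int DeclarerTricksClaimed { get; set; }   // tricks conceded to declarer by an agreed claim

    public int DeclarerTricks => DeclarerTricksPlayed + DeclarerTricksClaimed;
    public int DefenceTricks => 13 - DeclarerTricks;
}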

Database. Every detail of play has to be stored on the server. I am using Dapper and SQL Server currently, though it is possible that PostgreSQL would work just as well. I started with Entity Framework Core, still there as it is used by ASP.NET Core Identity, but I am happier with Dapper.

Things that worked well

Three months is longer than I had thought it would take to get to a playable system, but I suppose as a spare-time project it is not too bad. It would not be possible without the likes of ASP.NET Core and Dapper and SignalR doing so much for you. C# is a delight for coding. I am also using an Azure App Service for all this testing and development and that has worked well. I am deploying to a Linux container of course; but the nice thing about App Service is that it will scale to a considerable extent without the hassle of Kubernetes. If the project succeeds and needs to scale up, there is an Azure SignalR service ready and waiting. I was nevertheless interested to see that AWS now offers .NET Core on Elastic Beanstalk, complete with some nice Visual Studio integration. Trying it there would be an interesting experiment, though I’m not sure AWS is so savvy about SignalR.

Open Source?

Could this have been done quicker by making it open source and seeking collaborators early on? Will it become open source? I need help for sure, though I also feel the code needs some cleaning up before it is fit to share more widely. You will recall though that I had started out thinking that it would be a small matter to convert my solo bridge game to an online multiplayer web application. I figured it would be better to get something working and then ask for help. But I am open to offers! Note: this is not a commercial project.

Rewarding

Most of the software projects I have been involved in have been business applications. Bridge is a lot more fun. I do see software development as a creative act. I recall starting work on the bridge game back in 2011 (I think); starting a new blank project in Visual Studio and thinking, hmm, I had better write a class to represent a pack of cards. From that beginning I ended up with an application that could play bridge, after a fashion, and now one that multiple people can play concurrently. It is rewarding and I will not regret the time spent on it, irrespective of how much actual use it gets.

StackOverflow survey 2020

The StackOverflow developer survey 2020 is out – surveys come out constantly but this one is worth more than most because of the huge reach of StackOverflow among developers. This one has 65,000 responses. Every survey also has reasons why it is unrepresentative; there is no such thing as a definitive survey because you can never precisely compare like with like, humans are unreliable, and every sample has its biases. Ever wondered what a survey of “developers who never respond to surveys” would look like, if you could cajole them into answering questions? I have.

image

Imperfect then, but still interesting. What is notable in this one? The questions that interest me are those in the technology section. I am also more interested in trends than in absolute rankings, because it is the trends that might tell us something about the future. So let’s have a look.

In “Most popular programming languages” the interesting stuff is well down the rankings, since the top places are little changed. Dart has gone from 1.9% last year to 4.0% this year; still small, but that is more than 100% growth, thanks no doubt to Flutter. Rust has also grown, from 3.2% to 5.1%. Swift has fallen slightly, from 6.6% to 5.9%. This no longer seems likely to become a top programming language, important though it is for macOS and iOS. Objective-C is down a bit too (4.8% to 4.1%) and I wonder if this suggests greater interest in cross-platform toolkits and/or web technologies for Apple platforms. Such as Flutter, of course.

What about web frameworks? React.js is up a bit, from 31.3% to 35.9%, and to my mind it does look all-conquering at the moment. jQuery is above it, but that is nonsense really, as jQuery is not an alternative to React.js, being more a low-level plumbing thing. The figures for both Angular and ASP.NET are confusing as hell, since last year there was no separate entry for ASP.NET Core, but this year there is; and this year Angular is split between Angular and Angular.js. So we cannot conclude anything about these technologies. There are signs of growth in JAMstack frameworks. Vue.js is up from 15.2% to 17.3% and Gatsby makes an appearance this year at 4.0%.

In “Other frameworks” we see .NET down a little but .NET Core up a little, so probably no change there. Flutter up from 3.4% to 7.2% as noted above. Xamarin down a bit, from 6.5% to 5.8%. Xamarin has not been as popular as I had expected it to become when Microsoft acquired it; the reason seems obvious, which is that Microsoft has given developers mixed messages about whether to use it, with the Windows and Office teams seemingly preferring React Native.

In databases, PostgreSQL had another good year, rising from 34.3% to 38.5%. More than MySQL, it seems a natural destination for those migrating from Microsoft SQL Server or Oracle. Though note too that SQL Server is up, from 32.8% to 34.8%. Microsoft pushes developers strongly towards SQL Server in its developer tools and frameworks, and the strategy seems to work (plus it is a pretty good database manager).

In platforms, both Linux and Windows have increased use among developers. Not so surprising when you consider that Microsoft now ships the Linux kernel with Windows 10. MacOS is going in the right direction too, from 22.2% to 24.0%.

StackOverflow stuffs cloud platforms into this part of the survey too, as well as things like WordPress (talk about not comparing like with like!). Still, note that AWS is up from 26.6% to 26.7% (well, hardly moved); Azure is up from 11.9% to 14.5%; and Google Cloud (GCP) from 12.4% to 14.1%. Oh yes, and Kubernetes is here too, up from 8.5% to 11.5%. All of this chimes with my perception that GCP is doing pretty well from a developer perspective, now only just behind Azure in this particular community.

Then there is the entertaining most loved, dreaded and wanted. Rust is still most loved by miles (86.1%, up from 83.5% last year). Not much else I want to say about this section, other than to note that Python tops the “most wanted” list by a bigger margin (30.0%, up from 25.7%); and poor old VBA continues to be “most dreaded”, again by a bigger margin (80.4%, up from 75.2%).

In web frameworks, the good news for .NET developers is that ASP.NET Core now tops the list of “Most loved”, whereas ASP.NET is well down the list (36.9%). Again you cannot really compare with last year. Angular.js (the old version) tops the list of “Most dreaded”.

Similarly, .NET Core tops the list of most loved “other frameworks, libraries and tools” at 71.5%, though it only manages  8.3% in “Most wanted.” Translation: .NET Core developers love it, but it is still not doing that well in terms of appeal to those not currently using it. Chef should be asking itself why it is top of “Most dreaded” for two years running, in fact more dreaded than last year (72.4% up from 66.7%). Puppet is not far behind. Ansible seems both better liked and less dreaded by developers.

The “Most dreaded platforms” list is also notable, with WordPress at the top (PHP spaghetti anyone?) and IBM “Cloud or Watson” second, both positions unchanged from last year. Android has overtaken Windows as the most dreaded operating system. All the top cloud platforms are more dreaded than last year, and so is Kubernetes.

Make of all that what you will. The survey seems to me valuable as evidence of things we already know, but there are no huge surprises – and why should there be?

Hands On ASP.NET Core

I’ve been putting together a quick web application (well, I thought it would be quick) in my spare time (hah!) and I picked ASP.NET Core on Linux as a sensible option given that I like working in C#. Overall it has been a reasonable experience so far and I still love the language. This is the most extensive work I have done so far with ASP.NET Core though and I have a few observations.

It is not a difficult framework to work with but I believe it could be made more approachable. This is largely a matter of documentation, though another point of confusion is the transition Microsoft has been making from ASP.NET MVC to Razor Pages. These two frameworks are similar but different: they share a lot of technology, but some things work in one and not the other, and sometimes it is not clear whether what you are reading applies just to ASP.NET MVC, or just to Razor Pages, or to both, or to both but with a little tweaking to account for the differences. I started with MVC because I am more familiar with it but have shifted to Razor Pages because that seems to be the preferred direction; really I am equally happy with either.

If you are thinking of getting started with ASP.NET Core I recommend you start not with the framework, but with making sure that you are familiar with the following topics:

Dependency injection. If you are puzzling about something in the framework the answer may be “add it to the constructor and it magically works.” This is obvious if you are familiar with it but not otherwise (there is a small sketch after this list).

Anonymous types. These seem to crop up quite a lot.

Lambdas and the arrow operator =>

LINQ queries
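
To make those concrete, here are all four in a few lines (a sketch with invented names – Player, PlayerStore and so on – not code from any real project):

// Constructor (dependency) injection plus a lambda, LINQ and an anonymous type.
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

public class Player
{
    public string Name { get; set; }
    public int Rating { get; set; }
    public bool IsActive { get; set; }
}

public class PlayerStore { public List<Player> Players { get; } = new List<Player>(); }

public class PlayersController : Controller
{
    private readonly PlayerStore _store;

    public PlayersController(PlayerStore store)   // injected by the framework - no "new" here
    {
        _store = store;
    }

    public IActionResult Index()
    {
        var names = _store.Players
            .Where(p => p.IsActive)                  // a lambda, using the => operator
            .Select(p => new { p.Name, p.Rating })   // an anonymous type
            .ToList();                               // a LINQ query
        return Json(names);
    }
}

// PlayerStore would be registered once in Startup.ConfigureServices, e.g.
// services.AddSingleton<PlayerStore>();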

Now, the documentation. Unless you have perhaps found a good and up to date book you will probably start here.

image

Now, I do think there are lots of good things about docs.microsoft.com, the fact that it is all on GitHub and open for comment and improvement, the fact that it performs well, and the obvious effort that has gone into many of the topics.

That said, I do not much like this page. My biggest problem with it is that there is no simple link to a comprehensive reference. It is a bunch of little tutorials which may or may not tell you what you need to know. It gets better if you click into one of the topics and I like this page, for example, much better, with the hierarchical list of topics on the left.

image

It is still not great though. There is a big emphasis on tutorials, and while I agree that learning through doing is a great way to learn, the problem with the tutorials is that they tend to leave you with lots of questions and no obvious route to answers.

I will give you an example. I decided to use the ASP.NET Identity system in my application, because it saves a ton of tedious work doing registration, password reset, login, and so on, plus it is security-critical code that I would likely get wrong if I did it myself.

The problem you will immediately hit though is that you want to store additional data about users. This could be any kind of data but let’s call it additional profile data. For example, you want to let users upload an image which is then displayed in the application. There are some heavy articles about customizing identity but there is also this one on adding custom user data to an ASP.NET Core web app. It’s great but it does not actually tell you how to retrieve the custom user data in your application. Eventually I figured out a way of doing it. You just have to use dependency injection to get an instance of the UserManager class. So you pop this in the constructor for one of your classes:

UserManager<YourCustomUser> UserManager

and store it in a private variable. Then you can do:

// This blocks on the async call; in an async method you could instead write
// var MyUser = await _userManager.GetUserAsync(User);
var MyTask = _userManager.GetUserAsync(User);
MyTask.Wait();
var MyUser = MyTask.Result;

or something similar if you are calling from a synchronous method (in an async method you can simply await it), and it just works.
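
Putting the pieces together, a minimal Razor Pages version might look like this (a sketch: YourCustomUser is the custom user class as above, and AvatarUrl stands in for whatever custom profile data you have added):

// Inject UserManager via the constructor, then retrieve the signed-in user's
// custom data when the page loads.
using System.Threading.Tasks;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class ProfileModel : PageModel
{
    private readonly UserManager<YourCustomUser> _userManager;

    public ProfileModel(UserManager<YourCustomUser> userManager)
    {
        _userManager = userManager;   // stored in a private variable, as above
    }

    public string AvatarUrl { get; private set; }

    public async Task OnGetAsync()
    {
        var myUser = await _userManager.GetUserAsync(User);  // User is supplied by PageModel
        AvatarUrl = myUser?.AvatarUrl;                        // the custom profile data
    }
}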

Let me add something else. The actual API reference for ASP.NET Core is almost useless. It faithfully documents each class and method while often saying nothing about how or why to use it.

Data access

My application is really forms over data as so many are, so data access plays a big role. There seem to be plenty of tutorials on data access in the ASP.NET Core documentation but I don’t much like them. The problem is Entity Framework. Most of the documentation assumes it. It is not that Entity Framework is bad; it does seem to work well and while there is debate about how well it performs, in many cases it does not matter, and in other cases you can fine-tune it. My problem rather is that what Microsoft calls a “complex data model” is actually the normal case, where you have many-to-many relationships, and dealing with this in Entity Framework soon gets fiddly. I am guilty of lacking patience, but being familiar with SQL it is easier for me just to write the SQL and to know exactly what data is being saved and what data is being retrieved. I have left Entity Framework in place because the Identity system uses it (and it looks non-trivial to replace) but for the rest I have migrated to Dapper which seems ideal. It is not a full-featured ORM and it expects you to write the SQL but does a lot that saves time. My only complaint about Dapper is that (again) the documentation isn’t great but I’ve found it much simpler to grok than the more advanced aspects of Entity Framework.
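
For what it is worth, the Dapper pattern is simply SQL in, typed objects out – along these lines (a sketch with invented table and class names):

// Dapper maps the query results onto the PlayedCard class by column name.
using System.Collections.Generic;
using System.Linq;
using Dapper;
using Microsoft.Data.SqlClient;

public class PlayedCard
{
    public int TrickNumber { get; set; }
    public string Seat { get; set; }
    public string Card { get; set; }
}

public static class PlayRepository
{
    public static List<PlayedCard> CardsForBoard(string connectionString, int boardId)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            return conn.Query<PlayedCard>(
                "SELECT TrickNumber, Seat, Card FROM PlayedCards WHERE BoardId = @BoardId ORDER BY TrickNumber",
                new { BoardId = boardId }).ToList();
        }
    }
}

You write the SQL yourself, so with a many-to-many join you know exactly what is being fetched – which is the point.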

One thing I do like about Entity Framework is data migrations. Like most developers I have a local database and another one online and code-first data migrations save a lot of work creating database tables and keeping the schema in sync. Dapper does not have this.

StackOverflow

Of course it is true that no matter what your question is, someone has asked it before, and often the best place to find the answer is on StackOverflow. Big appreciation for the folk who take the time to answer questions there, though I’d add that it is not a place from which to copy code, it is a place to understand a solution. Out of date information is a problem, as it is in Microsoft’s own documentation.

Finally

I think ASP.NET Core is a great framework (or frameworks) but not as approachable as it could be. Documenting it in the best way is not an easy problem to solve, and every developer comes with different skills and requirements. Perhaps Microsoft could get someone suitable to write a nice book aimed at intermediate coders, and one that does not assume you want to use Entity Framework. Then offer it as a free download and/or publish it online as part of the documentation, and keep it up to date as new versions appear.

Android: a lack of software to take advantage of high-end mobiles?

I’ve got a pre-release version of Oppo’s Find X2 smartphone for review. It’s a great device with an outstanding camera. I reviewed it for The Register here.

image

In preparing the review I asked a few people what they would like in a high-end Android mobile. Things like an excellent display, fast performance, a strong camera and plenty of storage all get mentioned, but then I see people spending most of their time on social media and wonder if even mid-range mobiles are perfectly good enough for the average user. 5G has huge potential (and the Find X2 is a 5G device), but coverage is limited, you will pay the operator quite a bit more, and arguably it needs to reach a tipping point where enough people have it that we can design new applications to take advantage; until then, it’s nice to have faster internet but not game-changing.

Notably missing from our Find X2 press briefing was any demonstration along the lines of “you should see the amazing performance on application x”, where application x is something familiar rather than a benchmark. Gaming is one area where faster hardware does make a difference, having said which, Android and iOS still tend to be the home for casual games and a mobile platform can never compete with the monster GPUs you can plug into a desktop or even a gaming laptop. Unless you stream the games, and shift the need for intensive compute power into the cloud.

Local storage? Handy for a video library to play on a train or flight, or for audiophiles to store huge files full of arguably inaudible data, but for most of us cloud storage makes local storage less important once there is enough of it. What is enough? Probably less than the 512GB on the Find X2. It is nice to have, say for capturing 4K video without anxiety, but a lot of people simply do not need to think about how much storage they have once it is beyond 128GB or so.

Photography remains a key feature and one where local compute power does make a big difference – since mobile cameras are as much about digital processing as about optics. I would argue though that one of the reasons vendors have got carried away with multiple lenses and amazing capabilities in mobile cameras is that it remains an area where useful improvements are attainable – perhaps beyond the importance of these improvements to most users. As with gaming, there is a problem in that actual cameras will always be preferable to the camera in a smartphone, for the best results, though the great thing about smartphones is that they are always in your pocket.

The question then: could there be another wave of software that will make the hardware on a modern high-end smartphone more desirable?

Another way of putting it is that it is software, not hardware, that will radically improve smartphones, which explains the lack of excitement around today’s big releases.

Annoying Azure capacity problems in UK West region

I have a test setup of Windows Virtual Desktop (WVD) and was experimenting with adding an additional VM. At least, I tried to. My WVD virtual network is in the UK West region. And when I try to create a VM I get the message: Your subscription doesn’t support virtual machine creation in UK West. Choose a different location.

image

This was annoying because my WVD virtual network is in UK West, so no, another region would not do. If you click Learn More you get this page which says that if you get this message and still want to deploy a VM in the region, you have to raise a support case.

I am guessing but I presume this is a capacity problem and that Microsoft is discouraging VM creation in the region. The problem for the customer is that such things are opaque; there is nowhere you can see which Azure regions are running close to capacity.

Five facts about Rust

Rust is a programming language aimed at system programming – for which high performance and low-level system access is essential – but with safety features that make it harder to write dangerous or insecure code (though it is still possible). Since all programmers value both speed and stability, Rust is being used for tasks other than system programming as well. Rust is open source and sponsored by Mozilla, which uses Rust in its own development including parts of the Firefox web browser.

Rust is not one of the most-used programming languages; according to a StackOverflow survey only 3.2% of developers use it. Among professional developers that figure drops to 3.0%.

Yet Rust comfortably tops the list of most loved languages.

image

Second, Rust has built-in support for unit tests, in conjunction with Cargo, the Rust build system and package manager. Cargo will both generate test functions and run tests for you. You can do unit tests in any language, but this is a great way to prompt developers to use them. Tests are a big deal. I recall SQLite developer Dr D Richard Hipp telling me that testing was core to the project and without it, it could not progress as it does. SQLite has 662 times more test code than the code in the SQLite library itself.

Third, Rust can be compiled to WebAssembly so you can run it in a web browser.

Fourth, Microsoft is considering using Rust on the basis that it “could eliminate an entire class of vulnerabilities before they ever happened”.

Fifth, work is under way to build a new operating system with Rust, called Redox. I wrote about this briefly for the Register.

If asked to think of a language that is as efficient and powerful as C++ but nicer and, for many of us, more productive to use, I think of Delphi (or Object Pascal). Delphi has an ardent niche following but is unlikely to grow its usage much beyond it. Rust on the other hand is a modern language that benefits from things we have learned about programming in the last forty years (C++ was first conceived by Bjarne Stroustrup while writing his PhD thesis, though the name dates from 1983), and with a refreshing lack of legacy. And Delphi is not open source, unless you mean Lazarus.

Worth a look if you have a moment – see here for how Verity Stob got on.

Mad but great: Sony Walkman 2019 NW-A105

image

Who would want an expensive dedicated mobile music player in 2019, when any mobile phone is capable of excellent sound quality, more likely streamed from Spotify or Apple Music than played directly from music files on the device? It is a bit crazy; but Sony is still out there promoting high resolution audio and believes that smartphones are not the last word in audio quality. The new NW-A105, which retails at £320 in the UK, is not even the top of its range. The Walkman WM1Z Signature Series is £2500, complete with gold-plated oxygen-free copper chassis, making the humble A105 seem quite a bargain.

The audio world is replete with misleading claims about what makes for good sound, and you can make the case that you will not get any audible benefits from spending this kind of money. That said, I attend Sony events from time to time – the latest was IFA in Berlin earlier this year – and I am always impressed by the sound quality of Sony’s high-end portable devices. I was glad to get the opportunity to review the NW-A105 therefore. Who knows, it may not be quite the sonic equal of the WM1Z, but as soon as I tried it I was delighted by the almost uncanny realism of some of the best-recorded tracks I have available.

Which tracks? For example, I played Let me touch you for a while from the Live album by Alison Krauss and Union Station, and was transported to the Louisville Palace in April 2002. There is space around the instruments, the guitars sound like guitars, you can follow the bass, the applause sounds like you are in the audience. Then Claire Martin’s cover of Bowie’s Man Who Sold the World, a demo track from Linn that is beautifully recorded, and you can hear immediately that the sound quality is a notch above what we normally hear. It is spacious, the instruments sound distinct and realistic, the vocals have great presence. Then the Cranberries, I Still Do, not demo quality this time, but you get the ethereal quality of the much-missed Dolores O’Riordan’s voice, the dense instrumentation, the thunderous bass at the end of the track. I just wanted to keep playing, in a way that I have not done for a while.

The A105 (which is more or less the same as the A100 and some other models) is notable for running Android 9.0, unlike some of the other models which run Sony’s own custom operating system. Running Android has pros and cons. On the plus side, it means you can run any Android app, such as Spotify, YouTube, Google Play Music, Apple Music, and so on. You can also connect to public wi-fi using your preferred web browser. The disadvantage is that Android consumes more space and drains the battery faster than Sony’s dedicated firmware.

I love this device, but it does have a number of annoyances. Here are the main ones:

  • Just 16GB of on-board storage, which soon fills up if you put a few hi-res albums on there. In fact, available storage is less than 7GB thanks to Android. A DSD album in SACD quality is typically between 1.5 and 2.0 GB. Fortunately there is a microSD slot (supports microSD, microSDHC, microSDXC) which lets you expand storage up to a theoretical maximum of 2TB. I fitted an inexpensive 200GB card.
  • You can play music either from the Sony Music Player or from other Android apps. If you play from the Sony player you get maximum sound quality and volume is controlled only by the Sony volume control. If you play from Android apps you are limited to 48 kHz/16-bit and higher resolutions will be downsampled, and volume is controlled by the Android media volume as well as by the Sony control. It’s best to turn the Android media volume to max and just use the Sony control.
  • The maximum volume is not that loud. If you have inefficient headphones and want to listen in noisy environments this could be a problem. I found that with Sennheiser HD 600 headphones, for example, it was not always loud enough. With other more efficient headphones, or with Shure earbuds, it was fine. The volume depends on multiple factors, including the volume of the source, and whether you engage the “Dynamic Normalizer” sound effect.
  • The battery seems to drain quite fast if the unit is on standby. Turning wi-fi off helps, but to really extend battery life you need to power the device off completely; I recommend doing so when not in use.

Format support is comprehensive, including MP3, FLAC, MP4 including Apple lossless, DSD right up to 11.2896 MHz, and MQA-encoded FLAC. DSD is converted to PCM. Hi-res is supported up to 32-bit/384 kHz.

The home screen is standard Android with a link to a detailed manual, and three Sony apps: Music player, Sound adjustment and Ambient sound settings. The player app is basic but easy to use. The Sound adjustment app has various sound processors, including Dynamic Normalizer for normalizing volume between tracks, Vinyl Processor which supposedly “recreates the warm, rich playback of a turntable”, Clear Audio +, a graphic equalizer, DSEE HX which supposedly makes CD quality more like hi-res, and DC Phase Linearizer which is meant to make low frequencies “more analog”. You can also set Direct mode, which bypasses all these and is my preferred setting. The Ambient control lets you enable noise cancelling and ambient sound mode (letting you hear external sounds through a headset); but these settings only work with a specific Sony headset.

image

A fun feature is the cassette screen that you can set to appear on playback.

image

The type of cassette shown varies according to the format. You can even see the reels spin faster if you fast forward or back. A nice touch.

As I experimented, I installed Spotify, tried Google Play Music, and used some Bluetooth headphones. Everything worked, but I have to say that some of the magic seems to disappear with all these options. On the Bluetooth side, the unit supports Bluetooth 5.0 and the A2DP, AVRCP, SPP, OPP and DID protocols. Codecs are SBC, LDAC, aptX, aptX HD and AAC. The quality you will get does depend partly on whether your headset supports the best resolutions. Unfortunately I don’t have a Sony headset that supports LDAC, a Sony-developed codec that supports 96 kHz/24-bit though with lossy compression. Perhaps that would make a difference. The sound is not by any means bad, just not as special as with a wired connection.

Similar reservations apply to the sound from Android sources other than the Sony player. I conjecture that the Sony player has some special support for the custom hardware that you do not get when playing via the Android sound system. Again, the quality is very good, but there is a noticeable difference to my ears.

The A105 supports Meridian’s MQA, a controversial effort to improve quality by folding high resolution into space in the audio file that would otherwise be unheard. I have a number of MQA demo files and can report that they do sound exceptionally good on the Sony, though whether this is because of MQA or simply that they are demo-quality recordings is open to question.

Update: I tried this on a flight for the first time. I used some Jabra headphones which have both a wired and a Bluetooth connection. In a quiet environment the wired connection sounds better. On the plane though, with the background roar of the engine, the volume was barely sufficient with wired. I switched to the Bluetooth which overcomes this since you are then using the built-in amplifier in the headset. In the end I felt this was preferable. Wireless is also an advantage in a somewhat cramped environment. It certainly made the flight pass more pleasantly.

Hardware

Android is fast and responsive on this player, thanks to 4GB of RAM and a quad-core 1.8GHz ARM chipset. CPU information is below:

image 

What has Sony done to achieve better quality? The specifications refer to things like the aluminium milled frame, film capacitors and “fine sound” resistors. There is also a circuit board layout optimized for audio quality. There is a bit more detail on the hardware here if you are curious. What makes a difference to the sound, and what is just marketing? Hard to say; but as I mentioned in the opening of this post, all I can say is that the sound quality is real.

Conclusion

Despite the high price (or low price if you measure it against other premium portable devices such as those from Astell & Kern, or higher in Sony’s range), this is a great device and one which offers many hours of enjoyment. There are a few cautions though. The annoyances are real, including the short battery life and limited volume. I am not sure it is worth it if you plan to use wireless headphones most of the time. And if you are impatient with the idea of downloading files or ripping physical media, in this age of streaming, it is not quite so compelling. None of these issues are dealbreakers for me; I am just enjoying the sound.

The problem with price comparison sites

A piece in the weekend Guardian by its excellent personal finance correspondent Patrick Collinson includes, almost as an aside, an explanation of why price comparison sites are bad news for some customers.

Collinson’s report concerns a man who discovered that his elderly parents-in-law were being asked for £579.08 for home insurance from the Halifax. He considered this excessive, went online to get a quote direct from Halifax for the same house, and was quoted £108. In other words, the renewal was more than five times more expensive, a shocking penalty for loyalty or inertia.

Why is this happening? In part, because insurance companies can get away with it, but that is not the whole story. The problem, Collinson explains, is the price comparison sites which “drive nearly all new business.”

It is obvious that price comparison sites tend to increase prices, since they are financed by commission on sales made through the site. This effect in itself will not make such a dramatic difference though. The bigger problem is that in order to secure the sale, prices for new business have to be cut to the bone. The only viable way to quote such low prices is to subsidise new customers with profits made from existing customers.

That does not justify the behaviour of the Halifax, which actually increased the premium demanded from this elderly couple by £96.52 compared with the previous year. But it does show why these sites tend to increase unfair pricing.

Microsoft posts another strong set of results, does not know how to invest its profits

Microsoft has announced its quarterly financial statements, reporting revenue of $33.1 billion, up 14% on the same period last year (though fractionally down on the previous quarter).

It does not know how to invest the money it is making. It returned $7.9 billion to shareholders via dividends and buybacks.

What’s notable? The fastest-growing business is Azure, with revenue up by 59%, followed by Dynamics 365 up by 41%.

Office 365 commercial revenue is up by 25%.

Microsoft notes that it is achieving “higher average revenue per user” on Office 365, indicating some success in adding premium features.

LinkedIn is performing well, revenue up by 25%.

Xbox hardware revenue is down by 34%, but gaming revenue overall down by only 7%. The next hope for gaming will be when the next generation of Xbox appears, Project “Scarlett”, expected this time next year.

In Windows, business revenue is up in both “commercial revenue” (Microsoft 365 and other license sales) and OEM Pro revenue (PCs with Windows 10 Pro installed). However consumer Windows is down 7%. Microsoft cites “pressure in the entry level category”, but my guess is that home PCs are just not being replaced and that Chromebooks and iPads are eating into laptop sales.

Quarter ending Sept 30th 2019 vs quarter ending Sept 30th 2018, $millions

Segment | Revenue | Change | Operating income | Change
Productivity and Business Processes | 11077 | +1306 | 4782 | +901
Intelligent Cloud | 10845 | +2278 | 3889 | +958
More Personal Computing | 11133 | +387 | 4015 | +872

The segments break down as:

Productivity and Business Processes: Office, Office 365, Dynamics 365 and on-premises Dynamics, LinkedIn

Intelligent Cloud: Server products, Azure cloud services

More Personal Computing: Consumer including Windows, Xbox; Bing search; Surface hardware