All posts by Tim Anderson

Abba 10 CD box set 2022

Does anyone still buy CDs? There are a few reasons: they sound great when well mastered, you can rip them and not be dependent on an internet connection to listen to them, they can have nice packaging and booklets, and if you are a little bit obsessive about sound quality you can play the exact version you want to play instead of whatever the streaming service decides to send you.

I love Abba (who could not?), and since I don’t have many of their albums I bought the new 10 CD box. I am not quite sure what the proper name for the set is; on the box it just says Abba CD Album Box Set. It ties in with the current Abba avatar-based show.

A few observations. The music is great; Abba never made a bad album in my opinion. You get all the studio albums plus a Tracks CD of singles like Fernando and Gimme! Gimme! Gimme! (A Man After Midnight). So far so good, and the set is reasonable value considering the number of albums. The latest, Voyage from November 2021, is included.

Now for a few gripes. Abba’s albums in general are not superlative in sound quality but some releases have been better than others. If you want the full dynamic range you have to seek out the earliest CD releases, and this box is no different. The sound is not crushed to death but it still falls short for no good reason other than needless pursuit of “loudness”. There is also a bit too much bass boost on some of the tracks, not least Gimme! Gimme! Gimme! where the percussion sounds a little distorted. It is fine on the early CD box Thank you for the Music.

Second, why is it that an Abba fan like this one can do a better job of fixing problems in the source tapes than the professionals can manage? These are little details like a click in Money Money Money at the 0.33 mark and a slight glitch in Dancing Queen at around 2.06-2.11.

Third, why is this set so short? It omits any bonus tracks other than the singles on Tracks, and even that CD is just 38 minutes long. My hunch is that the label made the CDs short enough that they would also fit on the vinyl box, which is all rather topsy-turvy in my opinion.

Fourth, the CDs are packaged in simple card sleeves reproducing the LPs, which means tiny type on things like the rear of Arrival, and you don’t get the lyrics which were on the inner sleeves of the original albums.

Overall then I’m disappointed. Nevertheless, Thank you for the music.

Installing Ubuntu 22.04 on Apple M1 with UTM

I started with Arch Linux for Linux development on M1, which works, but succumbed to Ubuntu just because it is so widely used and it is therefore easier to find help. It is also supported by VS Code for remote development, as I am aiming for something similar to a WSL setup on Windows, using VS Code on the host side. I had problems installing 22.04 though; the install completed but trying to boot resulted in:

EFI stub: booting Linux Kernel

EFI stub: Using DTB from configuration table

EFI stub: Exiting boot services

and there it would stay.

The fix I found was to update QEMU:

sudo port selfupdate

sudo port install qemu

after which Ubuntu started without issue (no need to reinstall).

Following this I was able to install and use the Remote – SSH extension in VS Code which worked first time.

Small points to note:

  • I accepted the option in Ubuntu to install the OpenSSH server during installation
  • In UTM I changed the networking for the VM to Emulated VLAN rather than Shared Network, in order to use port forwarding. I forwarded both the SSH port 22 and the HTTP port 80 to different ports on the Mac, so that I can test web applications running on Ubuntu; a quick check is sketched below.
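
As a sanity check that the HTTP forwarding works, something like this can be run from the Mac side; the port 8080 here is only an example, so substitute whichever host port you mapped to the VM’s port 80.

// Hypothetical check from the Mac side: 8080 stands in for whatever host
// port you forwarded to the VM's port 80 in UTM.
async function checkForwardedPort(): Promise<void> {
  const response = await fetch("http://localhost:8080/");
  console.log(`VM web server responded with status ${response.status}`);
}

checkForwardedPort().catch(console.error);

Compile with tsc and run under Node 18 or later (which includes fetch), or adapt it for the browser console.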

Thanks also to Liz Rice for her post here.

Fixing an Xbox controller broken by Elden Ring

I have been enjoying Elden Ring on the Xbox, though rather less so when my controller broke. I recall the same thing happening with Dark Souls. Maybe it’s the way I play, but the problem is that the right bumper is used for a quick attack, which I use constantly. The bumpers seem to be less robust than the triggers, so after a while the bumper gives up.

Fortunately the current Xbox controllers (I have a Carbon Black) are easy to fix. The hardest part is getting the textured panels off the controller handles; like so many modern electronics cases, these are a press fit and have to be levered off while trying not to break or scratch them. Then you undo two screws on each side using a Torx T8 security screwdriver, plus another screw under the label in the battery compartment, after which you can carefully remove first a central rear panel and then the bumpers.

This revealed the problem: a small plastic tab had broken.

Gluing the tab back probably would not last long; but fortunately compatible bumper parts are available for a few pounds on eBay. I bought two (one for next time) and everything is fine.

The two key things I do with a new Mac

My Windows laptop is ancient (2015) and my company decided to replace it with a MacBook Pro, especially since we need to develop software compatible with Apple Silicon. The new Mac works well and I have been busy putting the essentials (for me) on it: Xcode, Visual Studio Code, .NET 6.0 SDK, Microsoft Office and so on.

Tip for .NET developers: when you put .NET and VS Code on an M1 Mac, you might get a “CPU not supported” error from Mono, at one time a dependency of the OmniSharp language server. You can fix this either by installing Rosetta 2, the x86 translator for Apple Silicon, or by setting omnisharp.useModernNet – see here for details.

There are two things I always have to do with a new Mac. The first is to go to System Preferences – Trackpad – Scroll & Zoom and uncheck the mischievously worded option Scroll direction: Natural. This setting applies to the mouse too. The reason is that, as far as I can tell, the Apple preference is no more or less natural than the older approach, and having it differ depending on which operating system you are using is confusing. My suspicion is that Apple introduced this in order to make it harder to switch between Mac and Windows.

The second thing is a bit trickier: installing my password manager. I use Password Safe, which does not offer an up-to-date Mac download, nor an Apple Silicon version. There is a commercial version in the App Store, but since it is open source my solution is to build from source. I recall doing some tweaking the last time I did this, a couple of years back for an Intel Mac, but the process seems smooth now as a few fixes have been added for Xcode and arm64 support. I used the latest development release of wxWidgets, 3.1.6, which has to be built first. My build declares itself to be v 0.01 OSX.

Without the password manager a laptop is almost unusable for me, since I don’t know many of the passwords I use and generally prefer not to save them in the browser.

My desktop PC, which I use for the majority of my work, remains on Windows, and I am a fan of WSL (Windows Subsystem for Linux), which from my perspective is the best new feature of Windows since the release of Windows 7. I miss WSL on the Mac, though it is less necessary because macOS is a Unix-like operating system.

In general I do not have a strong preference between Mac and Windows, though I feel that Microsoft and its OEM partners have some work to do to get Windows on Arm working as well as M1 Macs. I was also disappointed by Windows 11, particularly by its lack of support for slightly older CPUs, and by the new Start menu and taskbar, which are a step backwards from Windows 10. The appearance of ads in the user interface is a concern too, though it is minimal if Windows is carefully configured.

Microsoft moves towards UDP in place of TCP for Azure Virtual Desktop, claims lower latency and higher reliability

Microsoft has announced the public preview of Azure Virtual Desktop RDP Shortpath for public networks – a bit of a mouthful, but what this really means is a switch towards UDP as the first-choice transport for remote desktop sessions on the Azure cloud.

“Long running TCP sessions are problematic” said Senior Program Manager Denis Gundarev. “UDP is more tolerant to the temporary network interruptions caused by wireless interference or by changes in dynamic routing.”

UDP in itself is not enough; for example, UDP “does not care about each individual packet’s packet order or delivery. It does not have built-in congestion or rate control,” explains Gundarev. The implementation for RDP (Remote Desktop Protocol) uses a thing called URCP (Universal Rate Control Protocol), which Microsoft developed back in 2013 for real-time communications.

AVD already supported UDP for private networks, but many users do not have a private connection to Azure like ExpressRoute, hence the introduction of the public network version. Microsoft says that the benefits include lower latency, better network utilization, and high tolerance to packet loss.

Implementing the preview is done by setting a registry key on the AVD session host, so it can be enabled experimentally for just a few hosts in order to try out the feature. That said, the new transport will not always work. “RDP Shortpath may fail if you use double NAT setups,” said Gundarev. Users should not notice, as the old TCP-based connection will be used automatically instead.

Thoughtworks: do not choose to develop Single Page Applications by default

I have a lot of time for Thoughtworks, a global software development company, and always look at its Technology Radar, the latest version of which appeared recently. Plenty to digest, but what caught my eye was this comment regarding SPAs (Single Page Applications):

The sheer prevalence of teams choosing a single-page application (SPA) by default when they need a website has us concerned that people aren’t even recognizing SPAs as an architectural style to begin with, instead immediately jumping into framework selection. SPAs incur complexity that simply doesn’t exist with traditional server-based websites: search engine optimization, browser history management, web analytics, first page load time, etc. That complexity is often warranted for user experience reasons, and tooling continues to evolve to make those concerns easier to address (although the churn in the React community around state management hints at how hard it can be to get a generally applicable solution). Too often, though, we don’t see teams making that tradeoff analysis, blindly accepting the complexity of SPAs by default even when the business needs don’t justify it.

This struck a chord with me because of my adventures creating an online bridge playing platform using ASP.NET Core. I picked the platform because I was in a hurry, I like C#, and I had some existing code for implementing a bridge game, written for Windows. Any online game needs lots of JavaScript, though, and I soon became aware that the traditional ASP.NET approach, where each web page is a separate .cshtml file with server-side rendering and C# code-behind, is at odds with trends towards SPAs and JAMstack (JavaScript, API and Markup, where “Markup” is HTML and CSS).

Note that you can of course do SPAs and JAMstack with ASP.NET; ASP.NET is a nice technology for implementing an API and there are Visual Studio templates for things including “ASP.NET Core with React.js and Redux”. A Razor Pages application is still the default though, and gives you a UI for ASP.NET Core Identity for free, which saved me a lot of gruntwork. Still, as I got deeper into JavaScript libraries, including the AWS JavaScript SDK which I am using for audio and video, I found myself being steered towards React.js (resisted so far) and JavaScript bundling with Webpack (tried, but it was not a good fit). I also found that even switching my JavaScript code to TypeScript was surprisingly awkward, considering that the creator of TypeScript works for Microsoft. I found myself wondering if I should have started with an SPA, or should convert my application to one, in order to fit in with the new world.

Separately, I’ve been involved with another project, in PHP and JavaScript, which is an SPA, and hitting some of the potential issues. For example, the application made a ton of database queries on first load, the data from which was in most cases never used, as users did not visit the parts of the application that required them. Refactoring to load this data on demand has made the application faster and more efficient.
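
The refactoring boils down to fetching each section’s data only when the user first visits it, and caching the result. A minimal sketch, with a made-up endpoint and names rather than the real project’s code:

// Illustrative only: load a section's data on demand and cache it,
// rather than querying everything on first page load.
const sectionCache = new Map<string, unknown>();

async function loadSection(section: string): Promise<unknown> {
  if (!sectionCache.has(section)) {
    const response = await fetch(`/api/${section}`); // hypothetical endpoint
    if (!response.ok) {
      throw new Error(`Failed to load ${section}: ${response.status}`);
    }
    sectionCache.set(section, await response.json());
  }
  return sectionCache.get(section);
}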

A complication, which Thoughtworks alludes to in a remark about “closing the gap on user experience,” is that staying in JavaScript rather than loading a new page from the server generally makes for a smoother application. The way my bridge application has evolved is that the main play screen is a kind of SPA: everything is done in JavaScript and API calls, and I have written a ton of JavaScript code for things like rendering HTML tables, where server-side rendering with Razor would be much easier but unacceptable for usability. However, different parts of the application still use separate Razor pages, for things like viewing results, configuring a user profile, finding a game, and admin screens for managing members and running sessions.
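
For a flavour of what that client-side rendering involves (the names below are invented for illustration, not taken from the bridge application), even a simple table means a run of DOM calls that Razor would otherwise generate on the server:

// Illustrative sketch only: build a table on the client from API data.
interface ScoreRow {
  player: string;
  score: number;
}

function renderScoreTable(rows: ScoreRow[], target: HTMLElement): void {
  const table = document.createElement("table");
  for (const row of rows) {
    const tr = table.insertRow();
    tr.insertCell().textContent = row.player;
    tr.insertCell().textContent = row.score.toString();
  }
  target.replaceChildren(table); // replace any previous render
}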

JavaScript, now TypeScript, has exceeded my expectations in terms of performance and capability. It is annoying at times, but a modern web browser is a phenomenal platform. I was glad though to see Thoughtworks noting that going the SPA route is not always the right decision.

Drupal 7 is the version that refuses to die as the majority of sites have not upgraded

Drupal, which may be the second most popular content management system after WordPress according to these stats, is now at version 9.2. Version 7.0 was released 11 years ago, but when 8.0 was being developed (it was released in 2015) the team decided that there were so many key improvements, including mobile-first design, multi-language support and HTML5 forms, that an in-place upgrade from 7.0 was too hard. In addition, some modules (used to extend Drupal) had no Drupal 8 version. Read all about the migration story here. It is not trivial.

From 8 on, the team promised, compatibility would be preserved so that upgrades would be easier.

What happened? Did every Drupal 7 site migrate to version 8 in order to enjoy the new features and promised future upgrade path?

No. Last month the team confessed that “a majority of all sites in the Drupal project are still on Drupal 7.” The date for ending support for Drupal 7 keeps getting pushed back and is now November 1, 2023, but to be reviewed annually. “We will announce by July 2023 whether we will extend Drupal 7 community support an additional year,” said the post.

While this is good news in one sense for Drupal 7 site maintainers, it is not good news for the Drupal project. Having more than half of Drupal sites on what is now an ancient version is unhealthy, and maintaining it is a distraction.

Should the team have compromised the improvements in Drupal 8 for the sake of compatibility? It is imponderable but underlines a general truth in software development: breaking compatibility in major ways is expensive and can only be worth it if the benefits are correspondingly huge.

Another example that comes to mind is Visual Basic .NET, which was incompatible with Visual Basic 6.0; in consequence there are many VB 6.0 applications still out there that have never been upgraded.

Python 2 is another example.

What this also means is that time invested in making upgrades easy, or in preserving compatibility in a widely-used project, may seem unrewarding but has a big payback.

Multi-page ASP.NET Core, TypeScript, and ES JavaScript Modules

One of the messier aspects of the modern web is the situation with JavaScript modules. Modules, and the ability to import code from one module into another in a coherent and efficient manner, are fundamental to programming, but JavaScript originally had no concept of them. Developers came up with CommonJS, originally for server-side JavaScript, using the keyword require to reference one module from another. Node.js borrowed and refined this system. It does not work in web browsers, but can be made to do so by processing the code before deployment to make it browser-compatible, or by using require.js or an equivalent.

In the meantime the ECMAScript standard evolved to develop its own module system, often referred to as ES modules or ES6 modules (modules were added to the language in ES6, also known as ES2015). Browsers implement ES6 modules in their JavaScript engines, not CommonJS. The two systems are not compatible.
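
To see the difference, here is the same dependency expressed in each system (the file names are arbitrary):

// CommonJS, as used by Node.js and much existing server-side code:
// const { add } = require("./math");
// module.exports = { double };

// ES modules, understood natively by modern browser engines:
import { add } from "./math.js";

export function double(n: number): number {
  return add(n, n);
}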

The situation today is that although most agree that ES6 modules are the way forward, Node.js and a huge amount of existing code use CommonJS modules. The Node.js team is trying to migrate towards ES6 but it is inherently a difficult path. Deno, an alternative to Node but with a tiny userbase by comparison, uses ES6 modules, and that is one of its attractions.

ASP.NET Core and JavaScript

ASP.NET was originally designed to be a server-side code generator like PHP. You write code in C# or VB.NET but what gets sent to the browser is just HTML, CSS and JavaScript. The JavaScript piece was not too important at first, just handy for the occasional client-side confirmation dialog or the like.

This is no longer the case and increasing numbers of applications make heavy use of client-side JavaScript. I am working on a multi-user game, for example, and have written a ton of JavaScript. Perhaps I should have started with a single-page application (SPA) and used React or Vue, but I did not; I did what I was most familiar with and started with a basic ASP.NET Core application. I was in a hurry and took full advantage of everything I could get built-in, including ASP.NET Identity and the SignalR real-time communications library.

Everything went fine but I wanted to shift to TypeScript and take advantage of JavaScript minification. There is WebOptimizer, an unofficial project with involvement from the .NET team at Microsoft, but I started going down the WebPack path for reasons you can read here. I got this working more or less, but abandoned it: essentially, WebPack is not designed for multi-page projects and I was running into some awkward problems and spending more time on WebPack configuration than on developing my application.

Using ECMAScript modules

I am in favour both of simplicity and of keeping up to date, so I took a closer look at using the JavaScript emitted by the TypeScript compiler more directly, rather than transpiling it to browser-compatible JavaScript (one of the things WebPack does). The main issue is that the JavaScript code will now include import and export statements. You can try to use TypeScript without ever using import or export, but I do not recommend it. Browser compatibility is pretty good if you can manage without Internet Explorer.

Quite a lot changes though when you start using import and export and your JavaScript files become modules. Here are a few things I found.

1. Any links to JavaScript files will now need to include type="module" like so:

<script type="module" src="~/js/myscript.js"></script>

2. Any scripts that are imported by other scripts must not use the asp-append-version Tag Helper for cachebusting. Cachebusting is to prevent old versions of JavaScript files being used because they are cached by the browser. The asp-append-version helper adds a hash value as an argument when retrieving the script. The reason it causes problems is that the scripts that import that file do not know about the hash value and use its unadorned name. This means the browser loads the script twice, with unpredictable results. Removing asp-append-version is not as bad as it first appears, thanks to ETags that inform the browser whether the file has been modified. See the discussion here.

3. If you have controls on your web page that call JavaScript functions from inline attributes, they will no longer work, because functions in a module are not in the global scope. That is how modules work. There are a few solutions. The best is to attach things like click handlers in JavaScript rather than coding them in the HTML. This can be problematic though, especially if you have server-side ASP.NET code that creates controls that call JavaScript programmatically. An alternative is to add the function to the window object, which you can do either in the ASP.NET Razor .cshtml page or in the TypeScript/JavaScript. I find it easiest to have an initialisation function in the TypeScript that I call from the web page; scripts defined as modules never run until the page has loaded. Both approaches are sketched below.
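
Here is a sketch of both, with invented element ids and function names: attach the handler from the module where you can, and fall back to putting the function on window for markup that still uses inline onclick.

// page.ts – illustrative only
function startGame(): void {
  // game setup goes here
}

// Preferred: wire up handlers from inside the module.
export function initPage(): void {
  document.getElementById("playButton")?.addEventListener("click", startGame);

  // Fallback: expose the function globally for markup using onclick="startGame()".
  (window as any).startGame = startGame;
}

The page then imports and calls initPage from a script of type module, much as the worked example below does with window.clickme.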

4. You need to be aware of side effects. Imagine you have three JavaScript files, page1.js, page2.js and shared.js. Your web page page1.cshtml uses page1.js and page2.cshtml uses page2.js. Both files import functions from shared.js. Everything works fine, but then you find that shared.js needs to import a function from page2.js. You run the application and find that page2.js has been loaded by page1.cshtml. This is by design: when you import the function you are telling the browser to load that file. It could catch you out though if you have initialisation code in both page1.js and page2.js and do not want them both to run.

The solution is either to plan for this and code accordingly, or not to import functions from page1.js or page2.js in shared.js. Of course if you follow the path of least resistance in an ASP.NET Core application and the only JavaScript code directly referenced in .cshtml is site.js then it is not a problem.
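
One way to plan for it, sketched below using the hypothetical page2/shared example above (the function names are invented): keep each page’s start-up work in an explicit init function and leave nothing else at module level, so that being imported indirectly is harmless.

// page2.ts – only declarations; nothing runs at module load time.
export function formatScore(n: number): string {
  return n.toFixed(1);
}

export function initPage2(): void {
  // wire up page 2's controls here; only page2.cshtml calls this
}

// shared.ts can now import formatScore from "./page2.js" without
// triggering page 2's initialisation when page1.cshtml loads.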

A working example with Visual Studio 2022

Imagine you have a multi-page ASP.NET Core application such as the one created by default in Visual Studio. It has site.js in wwwroot/js and that is about it. Here is what you might do:

a. Create a directory called Scripts in your project and add a file demo.ts

b. Add a file called tsconfig.json to the Scripts folder. If you use the Add Item wizard it will be prepopulated with some defaults. You will need to add as a minimum a compiler option to support ES modules and an outDir, for example:

{
  "compilerOptions": {
    "module": "es2015",
    "noImplicitAny": false,
    "noEmitOnError": true,
    "removeComments": false,
    "sourceMap": true,
    "target": "es5",
    "outDir": "../wwwroot/js"
  },
  "exclude": [
    "node_modules",
    "wwwroot"
  ]
}

c. demo.ts looks like this:

export function clickme() {
    alert("You clicked");
}

d. Add the following to Index.cshtml:

<script type="module" src="js/demo.js"></script>
<script type="module">
    import {clickme} from './js/demo.js';
    window.clickme = clickme;
</script>

e. Now a button on the page will work, for example:

<p><button onclick="clickme()">Click me</button></p>

Note: when you add a TypeScript file to a Visual Studio 2022 project, you get a message inviting you to install a NuGet package.

The TypeScript will still get compiled by Visual Studio with or without this package. However, without it the .NET Core compiler (dotnet build and so on) will not compile the TypeScript.

Minification

Minifying the JavaScript is pretty easy. For the time being I am just running terser in a script called by a post-build event. I am deploying to a Linux Azure app service using Azure DevOps Pipelines and have had to work around the issue that build events do not seem to handle the cross-platform scenario very well, and Visual Studio does not provide much of an editor for build events in ASP.NET Core projects, but it is working.
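
For reference, the same step can also be done through terser’s JavaScript API rather than its command line; the paths below are illustrative, not taken from my pipeline.

// minify.ts – minify one emitted file using terser's API (illustrative paths).
import { readFile, writeFile } from "node:fs/promises";
import { minify } from "terser";

async function minifyFile(path: string): Promise<void> {
  const source = await readFile(path, "utf8");
  const result = await minify(source, { module: true }); // input is an ES module
  if (result.code) {
    await writeFile(path.replace(/\.js$/, ".min.js"), result.code, "utf8");
  }
}

minifyFile("wwwroot/js/demo.js").catch(console.error);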

I hope this proves a better long-term solution for me than WebPack.

Microsoft’s “new commerce experience” for 365 services: not just price increases

Microsoft stated in August that it is increasing prices for Microsoft 365 (formerly known as Office 365), the increase being around 20%, from March 1 2022. The company argues that prices have not changed substantially for ten years – perhaps contentious since it has introduced premium plans that are more expensive – and that “this updated pricing reflects the increased value we have delivered to our customers over the past 10 years.”

There has been inflation of around 2% per annum since 2011 and there have been new features, so a price increase is not unreasonable. However, there are some other changes in the pipeline that are more difficult. This is the thing called the New Commerce Experience, which impacts both customers and resellers. Finding out what has really changed is not that easy, but if you dig through the fluff about “agility” and “alignment” and “streamlining”, there are some standout changes:

  • Customers that want the flexibility to reduce seat count will pay 20% more. Until now, it has been possible to reduce seat count without penalty, even though Microsoft presents its pricing as for an “annual term.” With NCE, customers can either pay by the month at premium prices, with the ability to reduce seat count at a month’s notice, or pay less but commit to seats for one or three years. During that period, seat count can be increased but not decreased.

    Reasonable? The problem perhaps is that it means giving up one of the benefits of cloud, which is elasticity. Or at least, you can still have elasticity but it is going to cost more. We have also seen this with reserved instance pricing on AWS, Azure and Google Cloud Platform: the price comes down substantially if you commit to paying for one year or more.

  • There will be no cancellation allowed after the first 72 hours of a term, as explained here. This may impact partners more than customers. Scenario: partner sells 1,000 seats of Microsoft 365 for a 3-year term to some company. Three months into the term, the company goes bust. Partners are saying that this leaves them on the hook for the remaining cost. Here, for example, Australian distributor Dicker Data states that “If a customer (who has the agreement with Microsoft) no longer want or can finish the payment of the contract (bankruptcy for example), the partner will incur the costs of paying the remainder of the contract to Microsoft.”

One hopes that such matters are negotiable, but it is a significant risk especially in these unpredictable times of pandemic and climate change.

Converting a scanned image to text in Office 365

I was emailed an attachment scanned from a magazine; it was a nuisance and I wanted to convert it to text. There are of course a million ways to do this, and I recall that every multifunction printer used to come with an OCR facility, but what is the easiest way now? For a while I’ve used Microsoft OneNote for this: you just paste in an image, right-click, and there is a Copy Text from Picture option.

This normally works OK but not this time. The results were not completely useless but included lots of errors: words missing and words wrongly recognised or scrambled. I am not sure, for example, how the word “score” got recognised as “scMe”.

So I looked for a better solution online, trying to avoid ad-laden free OCR sites of unknown quality. I found Convertio, which has a straightforward introductory service with no registration or ads for the first 10 pages. It did a much better job, with only 3 or 4 errors, text converted correctly to two columns in a Word document, and a table converted to a Word table. The main issue was that the text was tiny – 4pt – but that was reasonably easy to fix up. It seems that it has a much better recognition engine than OneNote.

I’ll be inclined to use Convertio again, but it also seems that Microsoft has fallen behind with this little corner of Office 365. Perhaps it should do something based on its Cognitive Services.