Tag Archives: windows

When Windows 8 will not boot: the Automatic Repair disaster

“My PC won’t boot” – never good news, but even worse when there is no backup.

The system was Windows 8. One day, the user restarted his PC and instead of rebooting, it went into Automatic Repair.

Automatic Repair would chug for a bit and then say:

Automatic Repair couldn’t repair your PC. Press “Advanced options” to try other options to repair your PC, or “Shut down” to turn off your PC.

Log file: D:\Windows\System32\Logfiles\Srt\SrtTrail.txt

[screenshot: the Automatic Repair error screen]

Advanced options include the recovery console (a command line with a few useful troubleshooting commands and access to your files), an option to Refresh or reset your PC, and System Restore, which lets you return to an earlier configuration restore point.

System Restore can be a lifesaver but in this case had been mysteriously disabled. Advanced start-up options like Safe Mode simply triggered Automatic Repair again.

Choosing Exit and continue to Windows 8.1 triggers a reboot, and you can guess what happens next … Automatic Repair.

You also have options to Refresh or Reset your PC.

[screenshot: the Refresh and Reset options]

Refresh your PC is largely a disaster. It preserves data but zaps applications and other settings. You will have to spend ages updating Windows to get it current, including the update to Windows 8.1 if you originally had Windows 8. If there is no recovery partition, you may also need to find your installation media, if you have any. Then comes the task of reinstalling your applications, which means finding setup files, convincing vendors that you should be allowed to re-activate, and so on. At best it is time-consuming; at worst you will never get all your applications back.

Reset your PC is worse. It aims to restore your PC to factory settings. Your data will be zapped as well as the applications.

You can also reinstall Windows from setup media. Unfortunately Windows can no longer do a repair install, preserving settings, unless you start it from within the operating system you are repairing. If Windows will not boot, that is impossible.

Summary: it is much better to persuade Windows to boot one more time. However if every reboot simply cycles back to Automatic Repair and another failure, it is frustrating. What next?

The answer, it turned out in this case, was to look at the logfile. There was only one problem listed in SrtTrail.txt:

Root cause found:
—————————
Boot critical file d:\windows\system32\drivers\vsock.sys is corrupt.

Repair action: File repair
Result: Failed. Error code =  0x2
Time taken = 12218 ms

I looked up vsock.sys. It is a VMware file, not even part of the operating system. How can this be so critical that Windows refuses to boot?

I deleted vsock.sys using the recovery console. Windows started perfectly, without even an error message, other than rolling back a failed Windows update.
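If you need to do the same, it takes two commands at the recovery console prompt (the drive letter is as reported in the log and may differ on other systems; renaming rather than deleting leaves a way back if the file turns out to matter after all):

```
rem inspect the Automatic Repair log
notepad D:\Windows\System32\Logfiles\Srt\SrtTrail.txt
rem move the offending driver out of the way
ren D:\Windows\System32\drivers\vsock.sys vsock.sys.bad
```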

Next, I uninstalled an old VMware Player using Control Panel. Everything was fine.

The Automatic Repair problem

If your PC is trapped in the Automatic Repair loop, and you have no working backup, you are in trouble. Why, then, is the wizard so limited? In this case, for example, the “boot critical file” was from a third party; the wizard just needs some logic that says: maybe it is worth trying to boot without it, at least once.

Finally, if this happens to you, I recommend looking at the logs. It is the only way to get real information about what is going wrong. In some cases you may need to boot into the recovery console from installation media, but if your hard drive is working at all, it should be possible to view those files.

Asus bets on everything with new UK product launches for Android, Google Chromebook and Microsoft Windows

Asus unveiled its Winter 2014 UK range at an event in London yesterday. It is an extensive range covering most bases, including Android tablets, Windows 8 hybrids, Google Chromebooks, and Android smartphones.


Asus never fails to impress with its innovative ideas – like the Padfone, a phone which docks into a tablet – though not all the ideas win over the public, and we did not hear about any new Padfones yesterday.

The company’s other strength though is to crank out well-made products at a competitive price, and this aspect remains prominent. There was nothing cutting-edge on show last night, but plenty of designs that score favourably in terms of what you get for the money.

At a glance:

  • Chromebook C200 dual-core Intel N2830 laptop 12″ display £199.99 and C300 13″ display £239.99
  • MeMO Pad Android tablets ME176C 7″ £119 and 8″ ME181 (with faster Z3580 2.3 GHz quad-core processor) £169
  • Transformer Pad TF103C Android tablet with mobile keyboard dock (ie a tear-off keyboard) £239
  • Two FonePad 7″ Android phablets: tablets with phone functionality, LTE in the ME372CL at £129.99  and 3G in the ME175CG at £199.99.
  • Three Zenfone 3G Android phones, 4″ at £99.99, 5″ at £149.99 and 6″ at £249.99.
  • Transformer Book T200 and T300 joining the T100 (10.1″ display) as Windows 8 hybrids with tear-off keyboards. The T200 has an 11.6″ display and the T300 a 13.3″ display and processors from Core i3 to Core i7 – no longer just a budget range. The T200 starts at £349.
  • Transformer Book Flip Windows 8.1 laptops with fold-back touch screens so you can use them as fat tablets. 13.3″ or 15.6″ screens, various prices according to configuration starting with a Core i3 at £449.
  • G750 gaming laptops from £999.99 to £1799.99 with Core i7 processors and NVIDIA GeForce GTX 800M GPUs.
  • G550JK Gaming Notebook with Core i7 and GTX 850M GPU from £899.99.

Unfortunately the press event was held in a darkened room useless for photography or close inspection of the devices. A few points to note though.

The T100 is, according to Asus, the world’s bestselling Windows hybrid. This does not surprise me since with 11 hr battery life and full Windows 8 with Office pre-installed it ticks a lot of boxes. I prefer the tear-off keyboard concept to complex flip designs that never make satisfactory tablets. The T100 now seems to be the base model in a full range of Windows hybrids.

On the phone side, it is odd that Asus did not announce any operator deals and seems to be focused on the sim-free market.

How good are the Zenfones? This is not a review, but I had a quick play with the models on display. They are not high-end devices, but nor do they feel cheap. IPS+ (in-plane switching) displays give a wide viewing angle. Gorilla Glass 3 protects the screen; the promo video talks about a 30m drop test which I do not believe for a moment*. The touch screens are meant to be responsive when wearing gloves. The camera has a five-element lens with F/2.0 aperture, a low-light mode, and “time rewind” which records images before you tap. A “Smart remove” feature removes moving objects from your picture. You also get “Zen UI” on top of Android; I generally prefer stock Android but the vendors want to differentiate and it seems not to get in the way too much.

Just another phone then; but looks good value.

As it happens, I saw another Asus display as I arrived in London, at St Pancras station.

[photo: the Asus stand at St Pancras]

The stand, devoted mainly to the T100, was far from bustling. This might be related to the profile of Windows these days; or it might reflect the fact that the Asus brand, for all the company’s efforts, is associated more with good honest value than something you stop to look at on the way to work.

For more details see the Asus site or have a look in the likes of John Lewis or Currys/PC World.

*On the drop test, Asus says: “This is a drop test for the Gorilla glass, and is dropping a metal ball on to a pane of it that is clamped down, not actually a drop of the phone itself.”

Microsoft CEO Satya Nadella promises “One Windows” in place of three, but should that be two?

Microsoft released its latest financial results yesterday, on which I will post separately. However, this remark from the earnings call transcript (Q&A with financial analysts) caught my eye:

In the year ahead, we are investing in ways that will ensure our Device OS and first party hardware align to our core. We will streamline the next version of Windows from three Operating Systems into one, single converged Operating System for screens of all sizes. We will unify our stores, commerce and developer platforms to drive a more coherent user experiences and a broader developer opportunity. We look forward to sharing more about our next major wave of Windows enhancements in the coming months.

What are the three versions of Windows today? I guess, Windows x86, Windows RT (Windows on ARM), and Windows Phone. On the other hand, there is little difference between Windows x86 and Windows RT other than that Windows RT runs on ARM and is locked down so that you cannot install desktop apps. The latter is a configuration decision, which does not make it a different operating system; and if you count running on ARM as being a different OS, then Windows Phone will always be a different OS unless Microsoft makes the unlikely decision to standardise on x86 on the phone (a longstanding relationship with Qualcomm makes this a stretch).

Might Nadella have meant PC Windows, Windows Phone and Xbox? It is possible, but the vibes from yesterday are that Xbox will be refocused on gaming, making it more distinct from PC and phone:

We made the decision to manage Xbox to maximize enterprise value with a focus on gaming. Gaming is the largest digital life category in a mobile first, cloud first world. It’s also the place where our past success, revered brand and passionate fan base present us a special opportunity.

With our decision to specifically focus on gaming we expect to close Xbox Entertainment Studios and streamline our investments in Music and Video. We will invest in our core console gaming and Xbox Live with a view towards the broader PC and mobile opportunity.

said Nadella.

As a further aside, what does it mean to “manage Xbox to maximize enterprise value”? It is not a misprint, but perhaps Nadella meant to say entertainment? Or perhaps the enterprise he has in mind is Microsoft?

Never mind; the real issue is about the development platform and making it easier to build applications for PC, phone and tablets without rewriting all your code. That is the promise of the Universal App announced earlier this year at the Build conference.

That sounds good; but remember that Windows 8.x is two operating systems in one. There is the desktop side which is what most of us use most of the time, and the tablet side (“Metro”) which is struggling. Universal Apps run on the tablet side. The desktop side has different frameworks and different capabilities, making it in effect a separate platform for developers.

“One Windows” then is not coming soon. But we might be settling on two.

Developing an ASP.NET MVC app with Azure Active Directory: an ordeal

Regular readers will know that I am working on a simple (I thought) ASP.NET MVC application which is hosted on Azure and uses Azure Blob Storage.

So far so good; but since this business uses Office 365 it seemed to me logical to have users log in using Azure Active Directory (AD). Visual Studio 2013, with the latest update, has a nice wizard to set this up. Just complete the following dialog when starting your new project:

[screenshot: the Visual Studio dialog for configuring organizational authentication]

This worked fairly well, and users can log in successfully using Azure AD and their normal Office 365 credentials.

I love this level of integration and it seems to me key and strategic for the Microsoft platform. If an employee leaves, or changes role, just update Active Directory and all application access comes into line automatically, whether on premise or in the cloud.

The next stage though was to define some user types; to keep things simple, let us say we have an AppAdmin role for users with full access to the application, and an AppUser role for users with limited access. Other users in the organisation do not need access at all and should not be able to log in.

The obvious way to do this is with AD groups, but I was surprised to discover that there is no easy way to find out which groups an AD user belongs to. The Azure AD integration which the wizard generates is only half done. Users can log in, and you can programmatically retrieve basic information including first name, last name, User Principal Name and object ID, but nothing further.

Fair enough, I thought, there will be some libraries out there that fill the gap; and this is how the nightmare begins. The problem is that this is the cutting edge of .NET cloud development and is an area of rapid change. Yes there are samples out there, but each one (including the official ones on MSDN) seems to be written at a different time, with a different approach, with different .NET assembly dependencies, and varying levels of alpha/beta/experimental status.

The one common thread is that to get the AD group information you need to use the Graph API, a REST API for querying and even writing to Azure Active Directory. In January 2013, Microsoft identity expert Vittorio Bertocci (Principal Program Manager in the Windows Azure Active Directory team at Microsoft) wrote a helpful post about how to restore IsInRole() and [Authorize] in ASP.NET apps using Azure AD – exactly what I wanted to do. He describes essentially a manual approach, though he does make use of a library called Azure Authentication Library (AAL) which you can find on Nuget (the package manager for .NET libraries used by Visual Studio) described as a Beta.

That would probably work, but AAL is last year’s thing and you are meant to use ADAL (Active Directory Authentication Library) instead. ADAL is available in various versions ranging from 1.0.3 which is a finished release, to 2.6.2 which is an alpha release. Of course Bertocci has not updated his post so you can use the obsolete AAL beta if you dare, or use ADAL if you can figure out how to amend the code and which version is the best/safest to employ. Or you can write your own wrapper for the Graph API and bypass all the Nuget packages.
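Since the Graph API is plain REST, the do-it-yourself wrapper is less daunting than it sounds: the memberOf query can be called with any HTTP client. A minimal sketch in Python (the api-version string and the exact shape of the JSON response are assumptions based on the 2013-era AD Graph documentation; obtaining the bearer token is a separate OAuth step not shown here):

```python
import json
import urllib.request

GRAPH_BASE = "https://graph.windows.net"
API_VERSION = "2013-04-05"  # assumption: a 2013-era AD Graph api-version; check current docs

def member_of_url(tenant: str, object_id: str) -> str:
    """Build the AD Graph URL listing the directory objects a user belongs to."""
    return f"{GRAPH_BASE}/{tenant}/users/{object_id}/memberOf?api-version={API_VERSION}"

def get_group_names(tenant: str, object_id: str, access_token: str) -> list:
    """Call the Graph API; access_token is a bearer token from a separate OAuth step."""
    req = urllib.request.Request(
        member_of_url(tenant, object_id),
        headers={"Authorization": "Bearer " + access_token},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    # Each entry in "value" is a directory object; groups carry a displayName.
    return [o["displayName"] for o in payload.get("value", [])
            if o.get("objectType") == "Group"]
```

With the group names in hand, backing IsInRole() is a matter of adding them as role claims at sign-in.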

I searched for a better sample, but it gets worse. If you browse around MSDN you will probably come across this article along with this sample which is a Task Tracker application using Azure AD, though note the warnings:

NOTE: This sample is outdated. Its technology, methods, and/or user interface instructions have been replaced by newer features. To see an updated sample that builds a similar application, see WebApp-GraphAPI-DotNet.

Despite the warnings, the older sample is widely referenced in Microsoft posts like this one by Rick Anderson.

OK then, let’s look at the shiny new sample, even though it is less well documented. It is called WebApp-GraphAPI-DotNet and includes code to get the user profile, roles, contacts and groups from Azure AD using the latest Graph API client: Microsoft.Azure.ActiveDirectory.GraphClient. This replaces an older effort called the GraphHelper which you will find widely used elsewhere.

If you dig into this new sample though, you will find a ton of dependencies on pre-release assemblies. You are not just dealing with the Graph API, but also with OWIN (Open Web Interface for .NET), which seems to be Microsoft’s current direction for communication between web applications.

After messing around with Nuget packages and trying to get WebApp-GraphAPI-DotNet working I realised that I was not happy with all this preview code which is likely to break as further updates come along. Further, it does far more than I want. All I need is actually contained in Bertocci’s January 2013 post about getting back IsInRole.

I ended up patching together some code using the older GraphHelper (as found in the obsolete Task Tracker application) and it is working. I can now use IsInRole based on AD groups.

This is a mess. It is a simple requirement and it should not be necessary to plough through all these complicated and conflicting documents and samples to achieve it.

Notes from the field: putting Azure Blob storage into practice

I rashly agreed to create a small web application that uploads files into Azure storage. Azure Blob storage is Microsoft’s equivalent to Amazon’s S3 (Simple Storage Service), a cloud service for storing files of up to 200GB.

File upload performance can be an issue, though if you want to test how fast your application can go, try it from an Azure VM: performance is fantastic, as you would expect from an Azure to Azure connection in the same region.

I am using ASP.NET MVC and thought a sample like this official one, Uploading large files using ASP.NET Web API and Azure Blob Storage, would be all I needed. It is a start, but the method used only works for small files. What it does is:

1. Receive a file via HTTP Post.

2. Once the file has been received by the web server, call CloudBlob.UploadFile to upload it to Azure blob storage.

What’s the problem? Leaving aside the fact that CloudBlob is deprecated (you are meant to use CloudBlockBlob), there are obvious problems with files that are more than a few MB in size. The expectation today is that users see some sort of progress bar when uploading, and a well-written application will be resistant to brief connection breaks. Many users have asymmetric internet connections (such as ADSL) with slow upload; large files will take a long time and something can easily go wrong. The sample is not resilient at all.

Another issue is that web servers do not appreciate receiving huge files in one operation. Imagine you are uploading the ISO for a DVD, perhaps a 3GB file. The simple approach of posting the file and having the web server upload it to Azure blob storage introduces obvious strain and probably will not work, even if you do mess around with maxRequestLength and maxAllowedContentLength in ASP.NET and IIS. I would not mind so much if the sample were not called “Uploading large files”; the author perhaps has a different idea of what is a large file.

Worth noting too that one developer hit a bug with blobs greater than 5.5MB when uploaded over HTTPS, which most real-world businesses will require.

What then are you meant to do? The correct approach, as far as I can tell, is to send your large files in small chunks called blocks. These are uploaded to Azure using CloudBlockBlob.PutBlock. You identify each block with an ID string, and when all the blocks are uploaded, call CloudBlockBlob.PutBlockList with a list of IDs in the correct order.
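The ID scheme has one subtlety worth spelling out: block IDs must be Base64-encoded, and every ID within one blob must decode to the same length, so the usual trick is to encode a zero-padded index. A sketch of the idea (Python for brevity; the real PutBlock and PutBlockList calls go through the storage client):

```python
import base64

def block_id(index: int) -> str:
    # Azure requires Base64-encoded block IDs, and every ID within one blob
    # must decode to the same length, so encode a zero-padded index.
    return base64.b64encode(f"{index:08d}".encode("ascii")).decode("ascii")

def split_into_blocks(data: bytes, block_size: int):
    """Yield (block_id, chunk) pairs in file order."""
    for i in range(0, len(data), block_size):
        yield block_id(i // block_size), data[i:i + block_size]

# After every PutBlock succeeds, commit with PutBlockList, passing the IDs
# in file order: the order of that list, not the order of upload, decides
# how the blob is assembled.
```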

This is the approach taken by Suprotim Agarwal in his example of uploading big files, which works and is a great deal better than the Microsoft sample. It even has a progress bar and some retry logic. I tried this approach, with a few tweaks. Using a 35MB file, I got about 80 KB/s with my ADSL broadband, a bit worse than the performance I usually get with FTP.

Can performance be improved? I wondered what benefit you get from uploading blocks in parallel. Azure Storage does not mind what order the blocks are uploaded. I adapted Agarwal’s sample to use multiple AJAX calls each uploading a block, experimenting with up to 8 simultaneous uploads from the browser.

The initial results were disappointing. Eventually I figured out that I was not actually achieving parallel uploads at all. The reason is that the application uses ASP.NET session state, and ASP.NET serialises requests in the same session unless you mark your MVC controller class with [SessionState(SessionStateBehavior.ReadOnly)].

I fixed that, and now I do get multiple parallel uploads. Performance improved to around 105 KB/s, worthwhile though not dramatic.

What about using a Windows desktop application to upload large files? I was surprised to find little improvement. But can parallel uploading help here too? The answer is that it should happen anyway, handled by the .NET client library, according to this document:

If you are writing a block blob that is no more than 64 MB in size, you can upload it in its entirety with a single write operation. Storage clients default to a 32 MB maximum single block upload, settable using the SingleBlobUploadThresholdInBytes property. When a block blob upload is larger than the value in this property, storage clients break the file into blocks. You can set the number of threads used to upload the blocks in parallel using the ParallelOperationThreadCount property.

It sounds as if there is little advantage in writing your own chunking code, except that if you just call the UploadFromFile or UploadFromStream methods of CloudBlockBlob, you do not get any progress notification event (though you can get a retry notification from an OperationContext object passed to the method). Therefore I looked around for a sample using parallel uploads, and found this one from Microsoft MVP Tyler Doerksen, using C#’s Parallel.For.

Be warned: it does not work! Doerksen’s approach is to load the entire file into memory (not great, but not as bad as on a web server), send it in chunks using CloudBlockBlob.PutBlock while adding each block ID to a collection, and then call CloudBlockBlob.PutBlockList. The reason it does not work is that Parallel.For iterations complete in an indeterminate order, so the block IDs are unlikely to end up in the right order.
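The fix is to stop collecting IDs in completion order and instead write each ID into its slot by index. A sketch of the idea (Python threads standing in for Parallel.For; put_block is a hypothetical stand-in for CloudBlockBlob.PutBlock):

```python
import base64
from concurrent.futures import ThreadPoolExecutor

def block_id(i: int) -> str:
    return base64.b64encode(f"{i:08d}".encode("ascii")).decode("ascii")

def upload_blocks_in_parallel(chunks, put_block, workers=8):
    """Upload chunks concurrently, keeping the ID list in file order.

    put_block(block_id, data) stands in for CloudBlockBlob.PutBlock.
    """
    # Pre-size the list and write by index: whatever order uploads finish in,
    # ids[i] always ends up holding the ID for chunk i.
    ids = [None] * len(chunks)

    def work(i):
        bid = block_id(i)
        put_block(bid, chunks[i])
        ids[i] = bid

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(work, range(len(chunks))))

    return ids  # hand this, in order, to PutBlockList
```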

I fixed this, it tested OK, and then I decided to further improve it by reading each chunk from the file within the loop, rather than loading the entire file into memory. I then puzzled over why my code was broken. The files uploaded, but they were corrupt. I worked it out. In the following code, fs is a FileStream object:

fs.Position = x * blockLength;
bytesread = fs.Read(chunk, 0, currentLength);

Spot the problem? Since fs is a variable declared outside the loop, other threads were setting its position during the read operation, with random results. I fixed it like this:

lock (fs)
{
    fs.Position = x * blockLength;
    bytesread = fs.Read(chunk, 0, currentLength);
}

and the file corruption disappeared.
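A different cure, instead of the lock, is not to share the stream at all: let each worker open its own handle and seek independently. A quick sketch of the idea (Python for illustration, standing in for per-thread FileStream instances):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_chunk(path, index, block_length):
    # Each call opens its own handle, so there is no shared Position to race on.
    with open(path, "rb") as f:
        f.seek(index * block_length)
        return f.read(block_length)

# Demo: read a file as chunks in parallel and check they reassemble correctly.
data = b"0123456789" * 1000
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(data)
    path = tmp.name

with ThreadPoolExecutor(max_workers=4) as pool:
    chunks = list(pool.map(lambda i: read_chunk(path, i, 4096), range(3)))

os.unlink(path)
assert b"".join(chunks) == data
```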

I am not sure why, but the manually coded parallel uploads seem to slightly but not dramatically improve performance, to around 100-105 KB/s, almost exactly what my ASP.NET MVC application achieves over my broadband connection.


There is another approach worth mentioning. It is possible to bypass the web server and upload directly from the browser to Azure storage. To do this, you need to allow cross-origin resource sharing (CORS) as explained here. You also need to issue a Shared Access Signature, a temporary key that allows read-write access to Azure storage. A guy called Blair Chen seems to have this all figured out, as you can see from his Azure speed test and jazure JavaScript library, which makes it easy to upload a blob from the browser.

I was contemplating going that route, but it seems that performance is no better (judging by the Test Upload Big Files section of Chen’s speed test), so I should probably be content with the parallel JavaScript upload solution, which avoids fiddling with CORS.

Overall, has my experience with the Blob storage API been good? I have not found any issues with the service itself so far, but the documentation and samples could be better. This page should be the jumping off point for all you need to know for a basic application like mine, but I did not find it easy to find good samples or documentation for what I thought would be a common scenario, uploading large files with ASP.NET MVC.

Update: since writing this post I have come across this post by Rob Gillen which addresses the performance issue in detail (and links to working Parallel.For code); however I suspect that since the post is four years old the conclusions are no longer valid, because of improvements to the Azure storage client library.

Review: Sonocent Audio Notetaker, making sense of recorded interviews and meetings

Why bother taking written notes, when you can simply record the audio of a meeting or interview and listen to it later? I do this a lot, but it is problematic. You end up with an MP3 which has all the info within it, but with no quick way to find a half-remembered statement. Of course you can transcribe everything, or get it transcribed, but that is not quick; it will likely take longer than the original event if you want to transcribe it all, and even selective transcription is a slow process. You can get better at this, and I have formed a habit of noting times when I hear something which I am likely to refer to later, but standard audio players (such as Foobar 2000 or iTunes) are designed for music and not great for this kind of work.

There is also an annoying problem with application focus if you want to transcribe a recording. You have Word open, you have your recording open in Foobar, but to control Foobar you have to switch focus away from Word, which means you cannot type until you focus back. There are utilities around to overcome this – my solution was to write my own Word macro which can pause and rewind a recording with keyboard shortcuts – but it is another issue to fix.

Sonocent Audio Notetaker is an application for Windows or Mac dedicated to making sense of speech recordings. Audio Notetaker lets you create documents which include audio, text and images. If you have an existing audio recording, you can import it into a new Audio Notetaker document and start to work with it. The audio is copied into the document, rather than being added as a reference, so these documents tend to be large, a little larger than the original.

The primary feature is the way recordings are visualised and navigated. When you import a recording, it shows as a series of bars in a large panel, rather than the single horizontal scrolling view that most audio players present. Each bar represents a phrase, determined by Audio Notetaker according to pauses in the speech. This is not altogether reliable since speakers may pause mid-phrase, but you can split or merge bars if needed. The length of each bar varies according to the content, but typically seems to be around 3-15 seconds. You navigate the recording by clicking on the bars, and annotate it by assigning colours to bars according to your own scheme, such as blue for a potential quote, or brown for “boring, skip this”.

If you are transcribing, you can type into either of two text panes, one of which is called Reference and the other just Text. When you are typing in one of these panes, you can use keyboard shortcuts to control the audio, such as Ctrl+Space for play/pause, Ctrl+\ to skip back, and Ctrl+/ to skip forward. The Reference and Text panes are functionally identical, but let you keep two different types of notes with one recording. There is also an image pane, which can include images, PDFs or PowerPoint presentations.

[screenshot: an Audio Notetaker document showing the audio bars and text panes]

How do you synchronise your notes or transcription with the audio to which it relates? Audio Notetaker does not do this automatically, but does allow you to insert section breaks which split the document into vertical sections. You can create these breaks with keyboard shortcuts. I would prefer it if Audio Notetaker automatically set hotlinks so that I could tell exactly what audio was playing when I made a note, but sections are nevertheless useful.

For example, if you have an interview, a logical approach would be to make each question and each answer a section. Then you can easily navigate to the answer you want.

You can use background colouring to further distinguish between sections.

A common problem with audio recordings is that they are at too low a level. Audio Notetaker has its own volume control which can boost the volume beyond what is possible with the Windows volume control.

There is also a noise cancellation button, to remove the dreaded hiss.

[screenshot: the volume and noise cancellation controls]

Advanced features

Those are the basics; but Audio Notetaker has a few other capabilities.

One idea is that you might want to record the content of an online conference. For this purpose, you can record from any of your input or output devices (it might seem strange to record from an output device, but this is the equivalent of a “what you hear” setting).

[screenshot: choosing an input or output device to record from]

This approach is further supported by the ability to capture a screen and insert it into the document. When you choose the screen capture tool, you get a moveable, resizeable frame that you position over the area you want to capture.

[screenshot: the screen capture frame]

Another scenario is that you want to create a simple video with a PowerPoint slide show and an audio voiceover. You can do this by importing the PowerPoint and recording your speech, then choosing Export Audio and Images as Video (MP4 or WMV).

[screenshot: Export Audio and Images as Video]

You can also export the text and images in RTF format (suitable for most word processors).

Internally, Audio Notetaker uses Opus Audio Encoding which is an internet standard.

You can also have Audio Notetaker read back text to you using the Windows text to speech engine (I am not sure how this works on a Mac).

Final words

The best feature of Audio Notetaker is the way it lets you navigate an audio file. It is quicker to click on a bar in the panel than using a horizontal scroller or noting the time and going to that point.

The sections work OK but I would personally like some way of embedding notes that are hotlinked to points in the audio with a finer granularity than sections.

I am not sure of the value of features like importing PowerPoint slides, adding audio, and exporting as a video, when PowerPoint itself has support for narrations and export to video. I would prefer it if the developers focused on the core proposition in Audio Notetaker: making it easy to index, annotate and navigate speech recordings.

I would also like to see integration with a transcription service. Automated transcription would be great but does not usually work well with typical field recordings; more realistically, perhaps Sonocent could integrate with Amazon’s Mechanical Turk or another service where humans will transcribe your recording for a fee.

Nevertheless, Audio Notetaker is nicely designed software that addresses a poorly-served niche; well worth consideration for journalists, students, secretaries, takers of minutes, or anyone who uses audio recordings as part of their workflow.

You can find Audio Notetaker on the Sonocent site, and obtain it as a free trial, or by subscription for a period, or with a perpetual licence. For example, six months for an individual licence is £29.99; a perpetual licence is £95.99 (including VAT).

It is available for PC or Mac.

Fixing a low-tech computer attack by fake “Microsoft”

For the second time this week, I wasted some time fixing an infected Windows PC. The intriguing aspect of this infection though is that it was not really a virus – unless you count crude scripts designed to scare and inconvenience the user.

The problem started when an elderly friend was called, so she thought, by Microsoft. It was not Microsoft at all, but a fraudster from, it appears, India. He explained that there was a problem with her PC and offered to fix it. I am not sure of all the details, but she ended up paying £20 (after negotiating down from a higher figure) to a bank account in Calcutta.

While this does not sound like something any sane person would do, no doubt these people are suitably convincing after years of practice. It is also true that Microsoft has support staff in India. Note, though, that the real company NEVER rings out of the blue with a virus warning; if this happens to you, it is a scam.

I found some payment forms on her PC. They include all the right logos.

[image: a payment form complete with official-looking logos]

The criminal got her to install TeamViewer and I found an entertaining batch file which perhaps he ran to simulate a security product. Here is part of it:

echo license key received
start /w wscript.exe C:sleep2000.vbs
echo:
echo:
echo:
echo Windows License is activated for Lifetime.
start /w wscript.exe C:sleep2000.vbs

and concludes:

echo Your license key has been succesfully activated in your computer..
echo Now computer is protected from hackers.

She thought that was the end of it, until she restarted her PC. First, she was prompted to run an executable called AA_v3.exe. If she cancelled, she got a message:

You have been hit by a stuxnet virus, you may lose all your files and folders

and then:

image

and

image

This is a simple .VBS script that displays message boxes in a loop.

Next, the computer shuts down. Why? Because the “stuxnet” message was a command in her startup folder that looks like this:

%windir%\system32\shutdown.exe -s -t 120 -c "You have been hit by a stuxnet virus, you may lose all your files and folders"

This runs before the other messages so you end up with a scary command prompt, more scary messages, and then your PC shuts down.

I am not sure what happens if you DO run AA_v3.exe. This, it turns out, is free remote control software called Ammyy Admin. This is so often used by scammers that there is a warning about it on the vendor’s web site:

!!! If you receive a phone call claiming to be from ‘Microsoft’ or someone claiming to work on their behalf, telling you that you have a virus on your computer or some errors which they will help you to fix via Ammyy Admin, it is definitely a scam.

Of course victims will not see this warning.

If you run it though, maybe the criminal can connect and cancel the shutdown before two minutes is up, and use the PC in a botnet. Or maybe there is a follow-up call demanding more money to fix the problem. Who knows?

The attraction of these low-tech scripts (for the fraudsters) is that anti-virus software will not detect anything amiss – though in fact, Ammyy Admin is so widely used for criminal purposes that 10 out of 50 anti-virus products used by Virustotal do report it as a “risky” executable.
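If you want to check a suspect file against VirusTotal yourself, you only need its hash; you can search for that on the site without uploading anything. A sketch (the file path is hypothetical):

```powershell
# Compute the SHA-256 of a suspect executable, then search for the
# resulting hash on virustotal.com without uploading the file itself
Get-FileHash -Algorithm SHA256 "C:\Users\victim\Downloads\AA_v3.exe"
```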

image

The fix in this case was to log on using a different user profile – Safe Mode would also have worked, but I was working remotely. Once logged on I was able to remove the startup entries and run some other malware-checking tools; ideally you would reinstall Windows, but this is inconvenient for a home user.
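If you face a similar clean-up, the two key steps are aborting the pending shutdown and clearing out the Startup folders. A sketch, run from an elevated prompt (on another profile you would substitute the victim's user folder for the per-user path):

```powershell
# Abort the pending shutdown if you are inside the two-minute window
shutdown.exe /a

# List the per-user and all-users Startup folders, where rogue entries
# like the fake "stuxnet" shutdown command are placed; inspect and
# delete anything unexpected
Get-ChildItem "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Startup"
Get-ChildItem "$env:ProgramData\Microsoft\Windows\Start Menu\Programs\Startup"
```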

The problem as ever is that if you know criminals have had use of a machine, you do not know what else they may have done.

This scam still seems to be common and profitable for the fraudsters, and will continue I imagine, unless both source and target countries make a real effort to find and prosecute those responsible.

Google, Bing: time to junk these parasitic download sites

“Users of today’s PCs live on a precipice. One false click and the adware and malware invades,” I remarked in a recent comment on Microsoft’s Surface Pro 3 launch.

The remark was prompted by a recent call from a friend. His PC was playing up. He was getting all sort of security warnings and being prompted to download more and more apps supposedly to fix problems. It all started, he said, when he went to Google to install iTunes.

After the clean-up, I wondered what had happened. I went to Google and typed in iTunes.

image

The top hit is Apple, which perhaps to prevent this kind of problem has actually paid for an ad on its own brand name. However my friend, understandably, went for the link that said iTunes Free Download (actually I am not sure if this was the exact link he clicked, but it was one like it).

Note how the ads are distinguished from the organic hits only by a small yellow indicator.

Microsoft’s Bing, incidentally, is even worse; I presume because Apple has not paid for an ad:

image

Using a secure virtual machine, I investigated what happens if you click one of these links (I advise you NOT to try this on your normal PC). I clicked the Google one, which took me to SOFTNOW.

image

I hit the big Download button.

image

It is downloading a setup from drive-files-b.com which claims to be iTunes, but it is not, as we will see.

The file passes Microsoft’s security scan and runs. The setup is signed by Perion Network Ltd.
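Checking who actually signed a download before running it is a quick sanity test. A hedged sketch (the file name is assumed; a genuine iTunes installer should be signed by Apple Inc., not a third-party bundler):

```powershell
# Show the Authenticode signer of a downloaded installer
Get-AuthenticodeSignature "$env:USERPROFILE\Downloads\iTunesSetup.exe" |
    Select-Object Status, @{n='Signer'; e={$_.SignerCertificate.Subject}}
```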

image

Now here comes iTunes – or does it?

image

I clicked to see the Terms of Service. These are from Perion, not Apple, and explain that I am going to get an alternative search service for my browser plus other utilities, on an opt-out basis.

image

However I doubt my friend clicked to see these. Probably he hit Next.

image

Apparently I have “elected to download Search Protect”. There are more terms to agree. The Skip and Skip All buttons are in grey; in fact, the Skip button looks disabled though perhaps it is not.

image

Now here comes a thing called Wajam which is going to recommend stuff to me.

image

And another horror called WebSteroids with more terms of use:

image

I am going to get “display ads (banner ads), text ads, in-text ads, interstitial ads, pop up ads, pop under ads, or other types of ads. Users may see additional ads when using their internet browser or other software”.

Thanks.

Now “iTunes” seems to be downloading.

image

Once it downloads, I get an Install Now button. Apparently all those Next buttons I clicked did not install iTunes after all.

image

This last button, of course, downloads the real setup from Apple and runs it. Unfortunately it is the wrong version.

image

Who is to blame for all this? Well, the warning signs may be obvious to those of us in the trade, but frankly it is not that unreasonable to go to your trusted search engine, type in iTunes, and click the download link.

The blame is with Google (and Bing) for taking money from these advertisers, whose aim is to get you to download their intrusive ad-laden extras.

Apple iTunes is free software and you can get it from Apple here.

Note that Google is experimenting with removing the address bar altogether, so you can only navigate the web by searching Google (which is what people do anyway). This would make users even more dependent on the search providers to do the right thing, which as you can see from the above, is not something you can count on.

Microsoft Small Business Server to Server Essentials R2: not a smooth transition

Recently I assisted a small business (of around 10 users) with a transition from Small Business Server 2003 to Server Essentials R2.

Small Business Server 2003 had served it well for nearly 10 years. The package includes Windows Server 2003 (based on XP), Exchange, and the rather good firewall and proxy server ISA Server 2004 (the first release had ISA 2000, but you could upgrade).

image

SBS 2003 actually still does more than enough for this particular business, but it is heading for end of support, and there are some annoyances like Outlook 2013 not working with Exchange 2003. This last problem had already been solved, in this case, by a migration to Office 365 for email. No problem then: simply migrate SBS 2003 to the latest Server 2012 Essentials R2 and everything can continue running sweetly, I thought.

Server Essentials is an edition designed for up to 25 users / 50 devices and is rather a bargain, since it is cheap and no CALs are required. In the R2 version matters are confused by the existence of a Server Essentials role, which lets you install the simplified Essentials dashboard in any edition of Windows Server 2012. The advantage is that you can add as many users as you like; the snag is that you then need CALs in the normal way, so it is substantially more expensive.

Despite the move to Office 365, an on-premise server is still useful in many cases, for example for assigning permissions to network shares. This is also the primary reason for migrating Active Directory, rather than simply dumping the old server and recreating all the users.

The task then was to install Server Essentials 2012 R2, migrate Active Directory to the new server, and remove the old server. An all-Microsoft scenario using products designed for this kind of set-up, should be easy right?

Well, the documentation starts here. The section in TechNet covers both Server 2012 Essentials and the R2 edition, though if you drill down, some of the individual articles apply to one or the other. If you click the post promisingly entitled Migrate from Windows SBS 2003, you notice that it does not list Essentials R2 in the “applies to” list, only the first version, and there is no equivalent for R2.

Hmm, but is it similar? It turns out, not very. The original Server 2012 Essentials has a migration mode and a Migration Preparation Tool which you run on the old server (it seems to run adprep judging by the description, which updates Active Directory in preparation for migration). There is no migration tool nor migration mode in Server 2012 Essentials R2.

So which document does apply? The closest I could find was a general section on Migrate from Previous Versions to Windows Server 2012 R2 Essentials. This says to install Server 2012 Essentials R2 as a replica domain controller. How do you do that?

To install Windows Essentials as a replica Windows Server 2012 R2 domain controller in an existing domain as global catalog, follow instructions in Install a Replica Windows Server 2012 Domain Controller in an Existing Domain (Level 200).

Note the “Level 200” sneaked in there! The article in question is a general technical article for Server 2012 (though in this case equally applicable to R2), aimed at large organisations and full of information that is irrelevant to a tiny 10-user setup, as well as being technically more demanding than you would expect for a small business setup.

Fortunately I know my way around Active Directory to some extent, so I proceeded. Note that you have to install the Active Directory Domain Services role before you can run the relevant PowerShell cmdlets. Of course, it did not work: I got the error message “Unable to perform Exchange Schema Conflict Check.”
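For reference, the sequence is along these lines (the domain name and credential are placeholders for your own):

```powershell
# Add the AD DS role and management tools on the new Essentials R2 server
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Promote it as a replica domain controller and global catalog in the
# existing SBS domain (this is the step that failed at first for me)
Install-ADDSDomainController -DomainName "example.local" -InstallDns `
    -Credential (Get-Credential EXAMPLE\Administrator)
```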

This message appears to relate to Exchange, but I think that is incidental: it just happens to be the first check that does not work. I believe it was a WMI (Windows Management Instrumentation) issue, though I did not realise this at first.

I should mention that although the earlier paper on migrating to Server Essentials 2012 is obsolete, it is the only official documentation that describes some of the things you need to do on the source server before you migrate. These include changing the configuration of the internet connection to bypass ISA Server (single network card configuration), which you do by running the Internet Connection Wizard. You should also check that Active Directory is in good health with dcdiag.exe.
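The health check itself is quick; on the source server (repadmin requires the Windows Support Tools on Server 2003):

```powershell
# Verify Active Directory health on the old SBS server before migrating
dcdiag.exe /v

# Summarise replication state (useful once the second DC exists)
repadmin.exe /replsummary
```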

I now did some further work. I removed ISA Server completely, and removed Exchange completely (note you need your SBS 2003 install CD for this). Removing ISA broke the Windows Server 2003 built-in firewall, but I decided not to worry about it. Following a tip I found, I also used ntdsutil to change the DSRM (Directory Services Restore Mode) password. I also raised the SBS AD forest functional level to Server 2003 (it was on Server 2000), which is necessary for migration to work.
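The DSRM password change is normally an interactive ntdsutil session, but it can be given as a one-liner, run elevated on the source domain controller:

```powershell
# Reset the Directory Services Restore Mode password on the local DC;
# "null" here means the server you are running the command on
ntdsutil.exe "set dsrm password" "reset password on server null" q q
```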

I am not sure which step did the trick, but eventually I persuaded the PowerShell cmdlet for creating the replica domain controller to work. Then I was able to transfer the FSMO roles. I was relieved; I gather from reading around that some have abandoned the attempt to go directly from AD in Server 2003 to AD in Server 2012, and used an intermediate Server 2008 step as a workaround – more hassle.
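Transferring the FSMO roles can be done with a single cmdlet on the new server (the server name is a placeholder):

```powershell
# Move all five operations master roles to the new domain controller
Move-ADDirectoryServerOperationMasterRole -Identity "NEWSERVER" -OperationMasterRole `
    SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster
```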

After that things went relatively smoothly, but not without annoyances. There are a couple to mention. One is that after migrating the server, you are meant to connect the client computers by visiting a special URL on the server:

Browse to http://destination-servername/connect and install the Windows Server Connector software as if this was a new computer. The installation process is the same for domain-joined or non-domain-joined client computers.

If you do that from a client computer that was previously joined to the SBS domain (having removed unwanted stuff like the SBS 2003 client and ISA client) then you are prompted to download and run a utility to join the new network. You do that, and it says you cannot proceed because a computer of the same name already exists. But this is that same computer! No matter, the wizard will not run, though the computer is in fact already joined to the domain.

If you want to run the connect wizard and set up the Essentials features like client computer backup and anywhere access, then as far as I can tell this is the official way:

  • Make sure you have an admin user and password for the PC itself (not a domain user).
  • Demote the computer from the domain and join it to a workgroup. Make sure the computer is fully removed from the domain.
  • Then go to the connect URL and join it back.

If you are lucky, the domain user profile will magically reappear with all the old desktop icons, My Documents and so on. If you are unlucky you may need manual steps to recover it, or to use profile migration tools.
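The demote-and-rejoin dance can at least be partly scripted. A sketch, run on the client, with placeholder names:

```powershell
# Unjoin the client from the old domain into a workgroup, then reboot
Remove-Computer -UnjoinDomainCredential OLDDOMAIN\Administrator `
    -WorkgroupName "WORKGROUP" -Restart -Force

# After the reboot, browse to http://destination-servername/connect
# and run the connector software to rejoin via the Essentials wizard
```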

This is just lazy on Microsoft’s part. It has not bothered to create a tool that will do what is necessary to migrate an existing client computer into the Server Essentials experience (unless such a tool exists and I did not find it; I have seen reports of regedit hacks).

The second annoyance was with the Anywhere Access wizard. This is for enabling users to log in over the internet and access limited server features, and connect to their client desktop. I ran the wizard, installed a valid certificate, used a valid DNS name, manually opened port 443 on the external firewall, but still got verification errors.

image

Clicking Repair is no help. However, Anywhere Access works fine. I captured this screenshot from a remote session:

image

All of the above is normal business for Microsoft partners, but does illustrate why small businesses that take on this kind of task without partner assistance may well run into difficulties.

Looking at the sloppy documentation and missing pieces I do get the impression that Microsoft cares little about the numerous small businesses trundling away on old versions of SBS, but which now need to migrate. Why should it, one might observe, considering how little it charges for SBS 2012 Essentials? It is a fair point; but I would argue that looking after the small guys pays off, since some grow into big businesses, and even those that do not form a large business sector in aggregate. Google Apps, one suspects, is easier.

An underlying issue, as ever with SBS, is that Windows Server and in particular Active Directory is designed for large scale setups, and while SBS attempts to disguise the complexity, it is all there underneath and cannot always be ignored.

In mitigation, I have to say that for businesses like the one described above SBS has done a solid job with relatively little attention over many years, which is why it is worth some pain in installation.

Update: A couple of further observations and tips.

Concerning remote access, I suspect the wizard wants to see port 80 open and directed to the server. However this is not necessary as far as I can tell. It is also worth noting that SBS Essentials R2 installs TS Gateway, which means you can configure RDP direct to the server desktop (rather than to the limited dashboard you get via the Anywhere Access site).
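From an outside machine you can check what the wizard's verifier might be probing (the DNS name is a placeholder; Test-NetConnection is available from PowerShell 4.0):

```powershell
# Test whether the Anywhere Access ports are reachable externally
Test-NetConnection remote.example.com -Port 443
Test-NetConnection remote.example.com -Port 80
```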

The documentation, such as it is, suggests that you use the router for DHCP. Personally I prefer to have this on the server, and it also saves time and avoids errors since you can import the DHCP configuration to the new server.
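Moving the DHCP configuration is an export on the old server and an import on the new one:

```powershell
# On the old SBS 2003 server: export the full DHCP configuration
netsh dhcp server export C:\dhcp.txt all

# Copy dhcp.txt to the new server, install the DHCP role, then:
netsh dhcp server import C:\dhcp.txt all
```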

Office, Azure Active Directory, and mobile: the three pillars of Microsoft’s cloud

When Microsoft first announced Azure, at its PDC Conference in October 2008, I was not impressed. Here is the press release, if you fancy a look back. It was not so much the technology – though with hindsight Microsoft’s failure to offer plain old Windows VMs from the beginning was a mistake – but rather, the body language that was all wrong. After all, here is a company whose fortunes are built on supplying server and client operating systems and applications to businesses, and on a partner ecosystem that has grown up around reselling, installing and servicing those systems. How can it transition to a cloud model without cannibalising its own business and disrupting its own partners? In 2008 the message I heard was, “we’re doing this cloud thing because it is expected of us, but really we’d like you to keep buying Windows Server, SQL Server, Office and all the rest.”

Take-up was small, as far as anyone could tell, and the scene was set for Microsoft to be outflanked by Amazon for IaaS (Infrastructure as a Service) and Google for cloud-based email and documents.

Those companies are formidable competitors; but Microsoft’s cloud story is working out better than I had expected. Although Azure sputtered in its early years, the company had some success with BPOS (Business Productivity Online Suite), which launched in the UK in 2009: hosted Exchange and SharePoint, mainly aimed at education and small businesses. In 2011 BPOS was reshaped into Office 365 and marketed strongly. Anyone who has managed Exchange, SharePoint and Active Directory knows that it can be arduous, thanks to complex installation, occasional tricky problems, and the challenge of backup and recovery in the event of disaster. Office 365 makes huge sense for many organisations, and is growing fast – “the fastest growing business in the history of the company,” according to Corporate VP of Windows Server and System Center Brad Anderson, speaking to the press last week.

image
Brad Anderson, Corporate VP for Windows Server and System Center

The attraction of Office 365 is that you can move users from on-premise Exchange almost seamlessly.

Then Azure changed. I date this from May 2011, when Scott Guthrie and others moved to work on Azure, which a year later offered a new user-friendly portal written in HTML5, plus Windows Azure VMs and web sites. From that moment in 2012, Azure became a real competitor in cloud computing.

That is only two years ago, but Microsoft’s progress has been remarkable. Azure has been adding features almost as fast as Amazon Web Services (AWS – and I have not attempted to count), and although it is still behind AWS in some areas, it compensates with its excellent portal and integration with Visual Studio.

Now at TechEd Microsoft has made another wave of Azure announcements. A quick summary of the main ones:

  • Azure Files: SMB shared storage for Azure VMs, also accessible over the internet via a REST API. Think of it as a shared folder for VMs, simplifying things like having multiple web servers serve the same web site. Based on Azure storage.
  • Azure Site Recovery: based on Hyper-V Recovery Manager, which orchestrates replication and recovery across two datacenters, the new service adds the rather important feature of letting you use Azure itself as your spare datacenter. This means anyone could use it, from small businesses to the big guys, provided all your servers are virtualised.
  • Azure RemoteApp: Remote Desktop Services in Azure, though currently only for individual apps, not full desktops
  • Antimalware for Azure: System Center Endpoint Protection for Azure VMs. There is also a partnership with Trend Micro for protecting Azure services.
  • Public IPs for individual VMs. If you are happy to handle the firewall aspect, you can now give a VM a public IP and access it without setting up an Azure endpoint.
  • IP Reservations: you get up to five IP addresses per subscription to assign to Azure services, ensuring that they stay the same even if you delete a service and add a new one back.
  • MSDN subscribers can use Windows 7 or 8.1 on Azure VMs, for development and test – the first time Microsoft has allowed client Windows on Azure
  • General availability of ExpressRoute: fast network link to Azure without going over the internet
  • General availability of multiple site-to-site virtual network links, and inter-region virtual networks.
  • General availability of compute-intensive VMs, up to 16 cores and 112GB RAM
  • General availability of import/export service (ship data on physical storage to and from Azure)

There is more though. Those above are just a bunch of features, not a strategy. The strategy is based around Azure Active Directory (which everyone gets if they use Office 365, or you can set up separately), Office, and mobile.

Here is how this works. Azure Active Directory (AD), typically synchronised with on-premise active directory, is Microsoft’s cloud identity system which you can use for single sign-on and single point of control for Office 365, applications running on Azure, and cloud apps run by third-parties. Over 1200 software as a service apps support Azure AD, including Dropbox, Salesforce, Box, and even Google apps.

Azure AD is one of three components in what Microsoft calls its Enterprise Mobility Suite. The other two are InTune, cloud-based PC and device management, and Azure Rights Management.

InTune first. This is stepping up a gear in mobile device management, by gaining the ability to deploy managed apps. A managed app is an app that is wrapped so that it supports policy, such as the requirement that data can only be saved to a specified secure location. Think of it as a mobile container. iOS and Android will be supported first, with Office managed apps including Word, Excel, PowerPoint and Mobile OWA (a kind of Outlook for iOS and Android, based on Outlook Web Access but delivered as a native app with offline support).

Businesses will be able to wrap their own applications as managed apps.

Microsoft is also adding Cordova support to Visual Studio. Cordova is the open source part of PhoneGap, for wrapping HTML and JavaScript apps as native. In other words, Visual Studio is now a cross-platform development tool, even without Xamarin. I have not seen details yet, but I imagine the WinJS library, also used for Windows 8 apps, will be part of the support; yes it works on other platforms.

Next, Azure Rights Management (RMS). This is a service which lets you encrypt and control usage of documents based on Azure AD users. It is not foolproof, but since the protection travels in the document itself, it offers some protection against data leaking out of the company when it finds its way onto mobile devices or pen drives and the like. Only a few applications are fully “enlightened”, which means they have native support for Azure RMS, but apparently 70% or more of business documents are Office or PDF, which means if you cover them, then you have good coverage already. Office for iOS is not yet “enlightened”, but apparently will be soon.

This gives Microsoft a three-point plan for mobile device management, covering the device, the applications, and the files themselves.

Which devices? iOS, Android and Windows; and my sense is that Microsoft is now serious about full support for iOS and Android (it has little choice).

Another announcement at TechEd today concerns SharePoint in Office 365 and OneDrive for Business (the client), which is getting file encryption.

What does this add up to? For businesses happy to continue in the Microsoft world, it seems to me a compelling offering for cloud and mobile.