Category Archives: microsoft

WinFS reborn: SQL Server as a file system

Fascinating interview with Quentin Clark, who led the cancelled WinFS project at Microsoft. Jon Udell is the interviewer.

Clark talks about how technology from WinFS is now emerging as the Entity Framework in ADO.NET (part of .NET 3.5 SP1) and the FileStream column type in SQL Server 2008 – a connection I’d already made at the Barcelona TechEd last year. He also mentions the new HierarchyID column type, which enables fast querying of paths – in effect, rows that contain other rows. He adds that a future version of SQL Server will support the Win32 API so that it can act as a file system:

In the next release we anticipate putting those two things together, the filesystem piece and the hierarchical ID piece, into a supported namespace. So you’ll be able to type \\machinename\sharename, up pops an Explorer window, drag and drop a file into it, go back to the database, type SELECT *, and suddenly a record appears.
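As a rough illustration of the HierarchyID idea – fast queries over rows that contain other rows – here is a sketch using a materialized-path column in SQLite. SQL Server’s HierarchyID is a dedicated binary type with its own methods; the table and data below are invented purely to show the concept:

```python
import sqlite3

# Illustrative sketch only: SQL Server's HierarchyID is a binary type with
# built-in methods; here the same idea is mimicked with a materialized-path
# text column, so a whole subtree is one indexed prefix scan.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (path TEXT PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [
    ("/1/",     "share"),
    ("/1/1/",   "docs"),
    ("/1/1/1/", "report.txt"),
    ("/1/2/",   "photos"),
])

# "Rows that contain other rows": every descendant of /1/1/ is found with
# a single LIKE prefix query, no recursive joins required.
descendants = conn.execute(
    "SELECT name FROM items WHERE path LIKE '/1/1/_%'"
).fetchall()
print(descendants)  # [('report.txt',)]
```

The prefix scan is the point: a path column (or HierarchyID, which serializes the same ordering into a compact binary form) turns “give me this folder’s contents” into a range query the index can answer directly.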

Put that together with the work Microsoft is doing on synchronization, and you get offline capability too – something more robust than offline files in Vista. Clark says SharePoint will also benefit from SQL Server’s file system features.

Note that Live Mesh does some of this too. I guess SQL Server is there in the Live Mesh back end, but it strikes me Microsoft is at risk of developing too many ways to do the same thing.

The piece of WinFS that shows no sign of returning is the shared data platform, which was meant to enable applications to share data:

… all that stuff is gone. The schemas, and a layer that we internally referred to as base, which was about the enforcement of the schemas, all that stuff we’ve put on the shelf. Because we didn’t need it.

Misunderstanding Vista

Microsoft has posted a 9-page document on Five Misunderstood Features in Windows Vista. Apparently these “cause confusion and slow Windows Vista adoption for many folks.” Here they are:

  1. User Account Control
  2. Image Management
  3. Display Driver Model
  4. Windows Search
  5. 64 bit architecture

I thought I did understand User Account Control, but now I’m not so sure. I understand the long-term goal of UAC, which is to move Windows to the position enjoyed by Unix-like operating systems, where users run with limited rights. Fixing this means fixing applications that require local administrator rights; but making third-party app vendors change their practice is hard. UAC takes a multi-pronged approach. It makes it safer to run as local administrator; it makes it possible to run some applications that used to require admin rights without really having those rights; and it is sufficiently annoying that app vendors will feel under some pressure to fix their next release.

This statement caused me to pause:

Enterprises should not run as default in Protected Admin mode, because there are really no benefits—only the pain of prompts. Instead, strive to move users to a Standard User profile.

If there are really no benefits, it seems odd that most Vista installations I see are set up in this way. I realise that in this context UAC is not a security boundary. Nevertheless, I figure there are some benefits, in that the user is running most of the time with standard user credentials. And if there are no benefits … why does the feature exist?

I’m not sure Image Management is “widely misunderstood”; it mostly matters only to network administrators, whose business it is to understand it. The Windows Display Driver Model … again, I’m not sure; I think it is desktop composition that is misunderstood: people dismiss it as eye-candy, when in fact it “fundamentally changes the way applications display pixels on the screen”, as the referenced article explains.

Windows Search is an interesting one. I think it is misunderstood, but not in the way explained by this new paper. People have questions like, “why does it not index all my files?”

What about performance? In my view, this is far and away the primary problem users have with Vista. It is not in any sense a misunderstanding, however Microsoft spins it. It is bewilderment: why does my new machine, which should be fast, spend so much time spinning its little bagel when I want to get on with my work?

Here’s what this document says:

We’ve heard some of you say that Windows Vista runs slower than Windows XP on a given PC. So what’s really happening here? First, we need to avoid comparing apples to oranges – Windows Vista is doing a lot more than Windows XP, and it requires resources to conduct these tasks.

It goes on to say that:

On machines configured with the appropriate specifications for their operating system, the speed of most operations and tasks between Windows Vista and Windows XP is virtually on parity. Which is pretty remarkable when you consider one key thing Windows Vista is doing that Windows XP isn’t: indexing for near instantaneous search results for desktop files, even embedded in email messages. The result is users can find information significantly faster (measured in minutes), increasing productivity far in excess of the loss in speed of operations (measured in milliseconds).

Microsoft is off-target here, even allowing for the sleight of hand about “appropriate specifications”. First, search can be a big drain on performance; sorry, not just a few milliseconds. Second, Vista can be dramatically slower than XP, often thanks to poor configuration by OEMs. See Ed Bott’s discussion about fixing a Sony laptop.

There’s recently been discussion about Windows Server 2008, which performs very well, versus Vista, which tends to perform badly. It’s all to do with configuration and disabling unnecessary processes. This is the core of Vista’s problems, not a series of “misunderstandings”.

Update: the document is no longer online. Perhaps it will reappear with amendments?

Further postscript: The Guardian has posted the document here.

Small Business Server 2008: less for more?

The announced prices for SBS 2008 are substantially higher than those for SBS 2003. Client Access Licenses (CALs) for Standard edition users are slightly lower than before, but the new CAL for Premium users is remarkably expensive: $189.00, on top of the cost of the client Windows OS itself. In the old scheme, a single SBS CAL covered both Standard and Premium users and cost $97.80.

How price sensitive is SBS? From what I see, the cost of installing and configuring SBS is usually more than the license cost, presuming a business gets a specialist to do this. In addition, the announced figures do not cover cheaper OEM editions. In other words, probably not very price sensitive.

This still strikes me as a surprising move. SBS 2008 has removed some features, including the ISA Server firewall. Further, SBS has more competition than before, both from Linux and from cloud-based offerings. Is this really the moment to hoist prices? Google will be pleased.

Visual Basic returning to Mac Office

Microsoft will restore VBA to Mac Office:

The Mac BU [Business Unit] announced it is bringing VBA-language support back to the next version of Office for Mac. Sharing information with customers as early as possible continues to be a priority for the Mac BU to allow customers to plan for their software needs. Although the Mac BU increased support in Office 2008 with alternate scripting tools such as Automator and AppleScript — and also worked with MacTech Magazine to create a reference guide, available at http://www.mactech.com/vba-transition-guide — the team recognizes that VBA-language support is important to a select group of customers who rely on sharing macros across platforms. The Mac BU is always working to meet customers’ needs and already is hard at work on the next version of Office for Mac.

There are a couple of ways to take announcements like this. The positive: the company is listening. The negative: what was it thinking when it cut the feature?

By the time Mac Office vNext is out of the door, I imagine many potential VBA users will have found other solutions.

The other point of interest: while Microsoft’s Mac BU is benefiting from Apple’s strength, I doubt that is enough to compensate for the lost Windows sales which are also implied.

How Outlook 2007 deletes your messages without asking

A puzzled Outlook 2007 user asked me why his Outlook 2007 archive folders were empty. Investigation led me to this dialog, found at Tools – Options – Other – AutoArchive:

This is actually from my own Outlook; but as you can see, it is set to move old items to an archive folder. Note that the option to Move rather than delete is set by default.

However, I was puzzled by the option to Delete expired items (e-mail folders only). What does this mean? In particular, why does it refer to expired items when the rest of the dialog refers to old items? The word expired suggests some kind of non-validity, like an expired subscription, or password, or credit card.

Pressing F1 did not yield anything helpful; but this article explains:

Delete expired items (e-mail folders only) – This option is not selected by default. You can choose to have e-mail messages deleted when their aging period has expired. The default period for your Draft and Inbox items is six months, and three months for your Sent Items, but you can change these periods using the Clean out items older than option.

As I understand it, this means that items are deleted after as little as three months if the option is checked; and expired means exactly the same as old. But that’s OK; it isn’t checked by default.

Or is it? For sure, I have never checked that option, nor did my contact, but it is checked on all my Outlook installations, and on his. Take a look: is your Outlook set up like this? I’d be interested to know.

The consequence is that old emails simply disappear. The only dialog the user will see is that auto-archive wants to run. By the way, most people would not imagine that an archive process will delete items. Archive means long-term storage. Words like prune or purge imply deletion, but not archive.
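If the help article is right, the aging rule amounts to something like the following sketch. This is hypothetical Python, not Outlook’s actual code (which is not public); the folder names and default periods are taken from the article, and the function name is my own invention:

```python
from datetime import datetime, timedelta

# Hypothetical reconstruction of the AutoArchive aging rule as the help
# article describes it: six months for Inbox and Drafts, three months for
# Sent Items. Not Outlook's real implementation.
AGING_DAYS = {"Inbox": 182, "Drafts": 182, "Sent Items": 91}

def autoarchive_action(folder, received, now, delete_expired):
    """Return what AutoArchive would do with a message: keep, archive or delete."""
    limit = AGING_DAYS.get(folder, 182)
    if now - received < timedelta(days=limit):
        return "keep"
    # The alarming part: with "Delete expired items" checked, an aged
    # message in an e-mail folder is deleted, not moved to the archive.
    return "delete" if delete_expired else "archive"

now = datetime(2008, 5, 1)
old = datetime(2007, 10, 1)  # more than six months back
print(autoarchive_action("Inbox", old, now, delete_expired=True))   # delete
print(autoarchive_action("Inbox", old, now, delete_expired=False))  # archive
```

On this reading, “expired” and “old” are the same thing, and the checkbox is the only line between archiving and silent deletion. (The update below suggests “expired” may instead mean messages with an explicit expiry date set, which would make the rule far less alarming.)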

Now, I happen to think that archiving in Outlook is a mess anyway. If you have several machines on the go (which is one of the reasons for using Exchange and Outlook), then you usually end up with several archives, buried deep in hidden folders where nobody is likely to find them without help. It is easy to miss these archive files when migrating to a new machine.

Still, I hadn’t realised that Outlook actually deletes old emails without asking – that is, if I am right and this is (incorrectly) the default.

It may seem a small matter; but there are times when finding that old email, sent or received, is critically important. It is another reason why I am fed up with Outlook 2007: its amusingly obscure dialogs, its broken RSS support, and its disgracefully slow performance.

Update: Duncan Smart suggests below that “expired items” refers to emails that have an expiry date set in message options. I must say that makes more sense to me. On the other hand, it isn’t what the help document says, and it doesn’t explain why my contact had no messages in his archive folder until I changed the setting. I’ll try some experiments … [slightly later] … if I archive a folder with File – Archive, it does not delete old messages (good); on the other hand, this dialog is different because you specify the archive date, so it is not a perfect test.

I suspect it is not as bad as I first thought, that the help document is incorrect, and that some other factor must have messed up my contact’s archiving. I hope that is the case.

See also this official help document:

Choosing an option to have items deleted permanently deletes the items automatically when they expire. They are not archived. For instance, if you click Delete expired items (e-mail folders only), this option deletes all messages in all your e-mail folders, such as Inbox, Sent, or Drafts, when they reach the end of their aging periods. The messages are not archived.

So … either Outlook really is deleting messages without asking; or I’m not the only one confused.

Substantial .NET, Visual Studio 2008 update in Service Pack 1

Microsoft’s Scott Guthrie has announced .NET 3.5 SP1 and Visual Studio 2008 SP1 beta. Some of the things which caught my eye:

  • Performance: up to 40% faster startup for desktop .NET apps, up to 10% faster ASP.NET.
  • Smaller runtime in .NET “Client profile”. There is a new cut-down runtime for Windows Forms or WPF client apps, bringing the setup down to “only” 26MB. The key point here is the size of the file a user must download and run if she does not already have .NET installed in the right version. Tim Sneath has more details about the new client profile.

A bit of context: the .NET 2.0 runtime is only 22.4MB. This ballooned to 50.3MB for .NET 3.0, and 197MB for .NET 3.5 (check the size of the full package, not the 2.7MB bootstrapper which launches further downloads) – though there are ways to reduce the size of the 197MB monster, which actually includes several versions of the .NET Framework.

  • New vector shape, Printing, and DataRepeater controls for Windows Forms – echoes of old VB controls.
  • A datagrid for WPF – not actually in SP1, but promised shortly afterwards.
  • WPF interop with Direct3D
  • ADO.NET Data Services (formerly Astoria) and Entity Framework

The new SP offers compatibility with SQL Server 2008, and the database product itself is still expected “third quarter” as far as I’m aware. I guess it may go final at the same moment as SP1 for .NET and Visual Studio.

The smaller runtime for .NET desktop apps is welcome, but those in search of a lightweight .NET runtime should look at Silverlight 2.0, which is currently 4.38MB.

Xobni: Outlook users should try this now

Yes, Xobni is brilliant.

Have you ever tried sorting an Outlook inbox by conversation? Of course Outlook goes into a thrash while it prepares the view. Then when it has finished, it does not work right. It has a limited view of what a conversation is, based on the email title. It does not show your sent items, unless you sort them into the same folder. In fact, it is more frustrating than useful, which is why I never use it.

Xobni (the name is inbox reversed) does this right. When you select an email, a panel shows your previous emails from that person, with your replies, which you can read without changing the focus from the message you are attending to. It is based on an index together with some simple analytics. Who else has appeared in the cc list on emails from this person? Where are their messages? What is the sender’s phone number? All of this information is shown automatically; no need to hit confusing menus like Arrange By or Current View.

There’s also a search box; it’s smoother and quicker than Microsoft’s desktop search, also used by Outlook in the latest version. Under the covers lies my favourite desktop database engine: sqlite. I’ve turned off the official Outlook search; anything to speed performance.
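As a toy illustration of the kind of sender index that makes this possible – Xobni’s actual schema is not public, so the table and data here are invented – a sketch using Python’s built-in sqlite3:

```python
import sqlite3

# Invented sketch of a per-sender mail index; SQLite is the engine Xobni
# uses, but this schema is purely illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mail (sender TEXT, cc TEXT, subject TEXT)")
conn.executemany("INSERT INTO mail VALUES (?, ?, ?)", [
    ("alice@example.com", "bob@example.com",   "Budget"),
    ("alice@example.com", "carol@example.com", "Budget v2"),
    ("dave@example.com",  "",                  "Lunch"),
])

# Selecting a message from Alice: pull her conversation history and the
# people who appear on the cc line of her messages, without touching the
# current folder view at all.
history = conn.execute(
    "SELECT subject FROM mail WHERE sender = ? ORDER BY rowid",
    ("alice@example.com",)
).fetchall()
ccs = conn.execute(
    "SELECT cc, COUNT(*) FROM mail WHERE sender = ? AND cc != '' GROUP BY cc",
    ("alice@example.com",)
).fetchall()
print(history)  # [('Budget',), ('Budget v2',)]
print(ccs)
```

The point of indexing up front is that these lookups are simple SELECTs rather than a folder-by-folder trawl, which is why the panel updates instantly as you move between messages.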

Xobni is free right now (it is a beta), so what’s the business model? Still up in the air, apparently. However, given the number of Outlook users, I expect it will be possible to monetize it. Apparently Microsoft tried to buy the company and was refused.


Microsoft to Yahoo: Forget it, then

Microsoft is walking away. The right thing to do in my opinion.

Could Microsoft have bought Yahoo? Clearly, it could have done – for more money:

In our conversations this week, we conveyed our willingness to raise our offer to $33.00 per share, reflecting again our belief in this collective opportunity. This increase would have added approximately another $5 billion of value to your shareholders, compared to the current value of our initial offer. It also would have reflected a premium of over 70 percent compared to the price at which your stock closed on January 31. Yet it has proven insufficient, as your final position insisted on Microsoft paying yet another $5 billion or more, or at least another $4 per share above our $33.00 offer.

It follows that the withdrawal of the offer is a strategic decision, not just a victory for Yahoo, its insistence on a higher price, and its dalliance with Google.

I suspect many voices within Microsoft were saying that the deal would not deliver the benefits the company sought – namely, to pull closer to Google in the search market.

We are also seeing some interesting internal developments – from Silverlight to Popfly to Live Mesh – that suggest Microsoft does have an internet story, albeit still an uncertain one.

I wonder how this saga will look twelve months from now?

Live Mesh: Hailstorm take 2?

So says Spolsky, in a rant about both unwanted mega-architectures, and the way big companies snaffle up all the best coders.

Is he right? Well, I attended the Hailstorm PDC in 2001 and I still have the book that we were given: .NET My Services specification. There are definitely parallels, not least in the marketing pitch (from page 3):

.NET My Services will enable the end user to gain access to key information and receive alerts about important events anywhere, on any device, at any time. This technology will put users in total control of their data and make them more productive.

Swap “.NET My Services” for “Live Mesh” and you wouldn’t know the difference.

But is it really the same? Spolsky deliberately intermingles several points in his piece. He says it is the same stuff reheated. One implication is that because Hailstorm failed, Live Mesh will fail. Another point is that Live Mesh is based on synchronization, which he says is not a killer feature. A third point is that the thing is too large and overbearing; it is not based on what anyone wants.

Before going further, I think we should ask ourselves why Hailstorm failed. Let’s look at what some of the people involved think. We should look at this post by Mark Lucovsky, chief software architect for Hailstorm and now at Google, who says:

I believe that there are systems out there today that are based in large part on a similar set of core concepts. My feeling is that the various RSS/Atom based systems share these core concepts and are therefore very similar, and more importantly, that a vibrant, open and accessible, developer friendly eco-system is forming around these systems.

Joshua Allen, an engineer still at Microsoft, disagrees:

All of these technologies predate Hailstorm by a long shot. There is a reason they succeeded where Hailstorm failed. It’s because Hailstorm failed to adopt their essence, not because they adopted Hailstorm’s essence … the “principles” Mark’s blog post cites are actually principles of the technologies Hailstorm aimed to replace.

but as Allen shows in the latter part of his post, the technology was incidental to the main reasons Hailstorm failed:

  1. Hailstorm intended to be a complete, comprehensive set of APIs and services ala Win32.  Everything — state management, identity, payments, provisioning, transactions — was to be handled by Hailstorm.
  2. Hailstorm was to be based on proprietary, patented schemas developed by a single entity (Microsoft).
  3. All your data belonged to Microsoft.  ISVs could build on top of the platform (after jumping through all sorts of licensing hoops), but we controlled all the access.  If we want to charge for alerts, we charge for alerts.  If we want to charge a fee for payment clearing, we charge a fee.  Once an ISV wrote on top of Hailstorm, they were locked in to our platform.  Unless we licensed a third party to implement the platform as well, kind of like if we licensed Apple to implement Win32.

Hailstorm’s technology was SOAP plus Passport authentication. There were some technical issues. I recall that Passport in those days was suspect. Some smart people worked out that it was not as secure as it should be, and there was a general feeling that it was OK for logging into Hotmail but not something you would want to use for online banking. As for SOAP, it gets a bad rap these days but it can work. That said, these problems were merely incidental compared to the political aspect. Hailstorm failed for lack of industry partners and public trust.

Right, so is Live Mesh any different? It could be. Let me quickly lay out a few differences.

  1. Live Mesh is built on XML feeds, not SOAP messaging. I think that is a better place to start.
  2. Synchronization is a big feature of Mesh, that wasn’t in Hailstorm. I don’t agree with Spolsky; I think this is a killer feature, if it works right.
  3. Live Mesh is an application platform, whereas Hailstorm was not. Mesh plus Silverlight strikes me as appealing.
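To illustrate why synchronization could be the killer feature, here is a toy sketch of merging two item feeds in Python. This is not Mesh’s actual protocol (which is reportedly built around FeedSync and handles conflicts more carefully); it shows only the core last-writer-wins idea, with invented data:

```python
# Toy last-writer-wins merge of two item feeds, to illustrate feed-based
# synchronization. Not Mesh's actual protocol: FeedSync carries richer
# per-item history and surfaces conflicts rather than silently resolving them.
def merge(feed_a, feed_b):
    """Each feed maps item id -> (version, payload); the newest version wins."""
    merged = dict(feed_a)
    for item_id, (version, payload) in feed_b.items():
        if item_id not in merged or version > merged[item_id][0]:
            merged[item_id] = (version, payload)
    return merged

desktop = {"doc1": (3, "edited on desktop"), "doc2": (1, "original")}
laptop  = {"doc1": (2, "stale copy"),        "doc3": (1, "new on laptop")}
synced = merge(desktop, laptop)
print(synced["doc1"])   # (3, 'edited on desktop')
print(sorted(synced))   # ['doc1', 'doc2', 'doc3']
```

Because each device only needs to exchange feeds and run the same merge, the scheme works peer-to-peer as well as through a cloud relay – which is exactly the “works offline, catches up later” behaviour that plain file shares never delivered.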

Still, even if the technology is better, what about the trust aspect? Will Mesh fail for the same reasons?

It is too soon to say. We do not yet know the whole story. In principle, it could be different. Mesh is currently Passport (now Live ID) only. Will it be easy to use alternative authentication providers? If the company listens to its own Kim Cameron, you would think so.

Currently Mesh cloud data resides only on Microsoft’s servers, though it can apparently also do peer-to-peer synch. Will we be able to run Mesh entirely from our own servers? That is not yet known. What about one user having multiple meshes, say one for work, one personal, and one for some other role? Again, I’m not sure if this is possible. If there is only One True Mesh and it lives on Live.com, then some Hailstorm spectres will rise again.

Finally, the world has changed in the last 7 years. Google is feared today in the way that Microsoft was feared in 2001: the entity that wants to have all our information. But Google has softened us up to be more accepting of something like Live Mesh or even Hailstorm. Google already has our search history, perhaps our email, perhaps our online documents, perhaps an index of our local documents. Google already runs on many desktops; Google Checkout has our credit card details. What boundary can Live Mesh cross, that Google has not already crossed?

Hailstorm revisited is an easy jibe, but I’m keeping an open mind.

What is Microsoft’s new language?

From Douglas Purdy’s blog:

It is not very often that you get to be part of a team that is developing a programming language that aspires to be used by every developer on the Microsoft platform.

In addition, it is not very often that you can be part of a team that aspires to radically change the dynamics of building a new language, to the extent that a developer can write their own model-driven language in a straightforward way while getting all the language services (Intellisense, colorization, etc.) for “free”.

I am lucky enough to be on such a team – and if you are interested you could be as well.

Something to do with Oslo I guess. And Live Mesh?

All will be revealed at PDC.
