Category Archives: software development

Hands on debugging an Azure application – what to do when it works locally but not in the cloud

I have been writing a Facebook application hosted on Microsoft Azure. I hit a problem where my application worked fine on the local development fabric, but failed when deployed to Azure. The application was not actually crashing; it just did not work as expected. Specifically, either the Facebook authentication or the ASP.NET Forms Authentication was failing; when I tried to log on, the log on failed.

This scenario, where the app works locally but not on Azure, is potentially a bad one because you do not have the luxury of breakpoints and variable inspection. There are several approaches. You can have the application write a log, which you could download or view by using Remote Desktop to the Azure instance. You can have the application output debug messages to HTML. Or you can use IntelliTrace.
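
For the first approach, here is a minimal sketch of the kind of logging I mean, using nothing more than System.Diagnostics. The log path is hypothetical and assumes the role account can write to it; you could equally point the listener at Azure local storage.

using System;
using System.Diagnostics;

public static class DebugLog
{
    static DebugLog()
    {
        // Writes to a file on the instance, which you can then inspect over Remote Desktop.
        Trace.Listeners.Add(new TextWriterTraceListener(@"C:\logs\facebookapp.log"));
        Trace.AutoFlush = true;
    }

    public static void Write(string message)
    {
        Trace.WriteLine(DateTime.UtcNow.ToString("o") + " " + message);
    }
}

// Usage, at the points under suspicion:
// DebugLog.Write("SetAuthCookie called for user " + userId);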

I tried IntelliTrace. It is easy to set up: just check the box when deploying.

[screenshot: enabling IntelliTrace in the deployment dialog]

Once deployed, I tried the application. I clicked the Log On button, after which the screen flashed but the app still asked me to log on. The log on had failed.

[screenshot: the application still showing the Log On prompt]

I closed the app, opened Server Explorer in Visual Studio, drilled down into the Windows Azure Compute node and selected View IntelliTrace Logs.

[screenshot: View IntelliTrace Logs in Server Explorer]

The logs took a few minutes to download. You can then view the IntelliTrace log summary, which includes a list of exceptions. You can double-click an exception to start an IntelliTrace debug session.

[screenshot: the IntelliTrace log summary]

Useful, but I still could not figure out what was wrong. I also found that IntelliTrace did not show the values for local variables in its debug sessions, though it does show exceptions in detail.

Now, if you really want to debug and trace an Azure application you had better read this MSDN article which explains how to create custom debugging and trace agents and write logs to Azure storage. That seems like a lot of work, so I resorted to the old technique of writing messages to HTML.
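
The technique is as basic as it sounds. Something like this sketch, assuming an ASP.NET Web Forms page with a Literal control (here called debugOutput, my name for it) at the bottom of the markup:

using System;
using System.Text;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class LogOn : Page
{
    protected Literal debugOutput;   // wired up from the <asp:Literal> in the markup

    private readonly StringBuilder debugMessages = new StringBuilder();

    private void Debug(string message)
    {
        debugMessages.AppendFormat("{0:HH:mm:ss.fff} {1}<br/>", DateTime.UtcNow, message);
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        Debug("IsAuthenticated: " + Request.IsAuthenticated);
        // ... the code under investigation, sprinkled with Debug() calls ...
    }

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        debugOutput.Text = debugMessages.ToString();   // dump the messages into the page
    }
}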

At this point I should mention something you must do in order to debug on Azure and remain sane: enable Web Deploy.

[screenshot: enabling Web Deploy]

It is not that hard to set up, though you do need to enable Remote Desktop which means a trip to the Azure management portal. In my case I am behind a firewall so I needed to configure Web Deploy to use the standard SSL port. All is explained here.

Why use Web Deploy? Well, normally when you deploy to Azure the service actually builds, copies and spins up a new virtual machine image for your app. That process is fundamental to Azure’s design and means there are always at least two copies of the VM in existence. It is also slow, so if you are making changes to an app, deploying, and then testing, you will spend most of your time waiting for Azure.

Web Deploy, by contrast, writes to your existing instance, so it is many times quicker. Note that once you have your app working, it is essential to deploy it properly, since Azure might revert your app to the last VM you created.

With Web Deploy enabled I got back to work. I discovered that FormsAuthentication.SetAuthCookie was not working. The odd thing was that it worked locally, and it had worked in a previous version deployed to Azure.
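
For context, the call in question is the standard Forms Authentication one. A sketch of roughly what my log on code does; the class and method names here are illustrative, not the app's actual code:

using System.Web.Security;

public class LogOnHandler
{
    // Called after Facebook has authenticated the user (illustrative only).
    public void CompleteLogOn(string facebookUserId)
    {
        // Issues the .ASPXAUTH cookie; works on the development fabric,
        // but in the Facebook canvas on Azure the cookie never comes back.
        FormsAuthentication.SetAuthCookie(facebookUserId, false);
    }
}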

Then I began to figure it out. My app runs in a Facebook canvas, which means it is served in an iframe from a different domain than the surrounding Facebook page, so the browser may treat its cookies as third-party cookies and reject them. When I ran the app locally, the app was in a different IE security zone, so different rules applied.

But why had it worked before? I realised that when it worked before I had used Google Chrome. That was it. IE worked locally, but only Chrome worked when deployed.

I have given up trying to fix the specific problem for the moment. I have dug into it a little, and discovered that cookie handling in a Facebook canvas with IE is a long-standing problem, and that the Facebook C# SDK may have bugs in this area. It is not essential for my sample; I have found I can get by with the Facebook session. To get the user ID, for example:

FacebookWebContext.Current.Session.UserId
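
In other words, something like this sketch; the helper class is mine, while FacebookWebContext and its Session property come from the SDK (property names as in the version I am using):

using Facebook.Web;

public static class CurrentFacebookUser
{
    // Returns the Facebook user id from the SDK's session, or null if not signed in.
    public static long? Id
    {
        get
        {
            var context = FacebookWebContext.Current;
            if (context == null || context.Session == null)
                return null;
            return context.Session.UserId;
        }
    }
}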

The time has not been wasted though as I have learned a bit about Azure debugging. I was also amused to discover that my Azure VM has activation problems:

[screenshot: Windows activation warning on the Azure VM]

The frustration of developing for Facebook with C#

I am researching a piece on developing for Facebook with Microsoft Azure, and of course the first thing I did was to try it out.

It is not easy. The first problem is that Facebook does not care about C#. There are four SDKs on offer: JavaScript, Apple iOS, Google Android, and PHP. This has led to a proliferation of experimental and third-party SDKs which are mostly not very good.

The next problem is that the Facebook API is constantly changing. If you try to wrap it neatly in an SDK, it is likely that some things will break when the next big change comes along.

This leads to the third problem, which is that Google may not be your friend. That helpful article or discussion on developing for Facebook might be out of date now.

Now, there are a couple of reasons why it should be getting better. Jim Zimmerman and Nathan Totten at Thuzi (Totten is now a technical evangelist at Microsoft) created a new C# Facebook SDK because they needed it for their own apps and were frustrated with what was on offer elsewhere. The Facebook C# SDK looks like it has some momentum.

C# 4.0 actually works well with Facebook, thanks to the dynamic keyword, which makes it easier to cope with Facebook’s changes and also lets the SDK map closely to the official PHP SDK, as Totten explains.
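
A quick sketch of what that looks like in practice, assuming the Facebook C# SDK and a valid access token; the field name comes from the Graph API's "me" object:

using System;
using Facebook;

class GraphExample
{
    static void ShowName(string accessToken)
    {
        var fb = new FacebookClient(accessToken);
        dynamic me = fb.Get("me");    // no strongly typed wrapper class needed
        string name = me.name;        // members resolved at runtime, as in PHP
        Console.WriteLine(name);
        // If Facebook adds or renames fields, nothing here has to be regenerated;
        // the flip side is that a typo in a member name only fails at runtime.
    }
}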

Nevertheless, there are still a few problems. One is that documentation for the SDK is sketchy to say the least. There is currently no reference for it on the CodePlex site, and most of the comments are the kind that produce impressive-looking automatic documentation but actually tell you nothing of substance. Plucking one at random:

FacebookClient.GetAsync(System.Collections.Generic.IDictionary<string,object>)

Summary:
Makes an asynchronous GET request to the Facebook server.

Parameters:
parameters: The parameters.

Another problem, inherent to dynamic typing, is that IntelliSense (auto-completion in Visual Studio) has limited value. You constantly need to reference the Facebook documentation.

Finally, the SDK has changed quite a bit in different versions and some of the samples reference old versions.

In particular, I found it a struggle getting OAuth authentication and access token retrieval working, and ended up borrowing Totten’s sample code here, which mostly works. Note, though, that the code in the sample does not cope with the same user logging out and logging in again; I fixed this by changing his InMemoryUserStore to use a ConcurrentDictionary instead of a ConcurrentBag, though there are plenty of other ways you can store users.
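
For what it is worth, here is a sketch of the change. The FacebookUser class is illustrative; the point is that keying the store on the user id makes adding the same user twice harmless.

using System.Collections.Concurrent;

public class FacebookUser
{
    public long Id { get; set; }
    public string AccessToken { get; set; }
}

public class InMemoryUserStore
{
    private readonly ConcurrentDictionary<long, FacebookUser> users =
        new ConcurrentDictionary<long, FacebookUser>();

    public void AddOrUpdate(FacebookUser user)
    {
        // Overwrites any earlier entry for the same user id, so logging out and
        // logging in again does not create duplicates the way a ConcurrentBag would.
        users[user.Id] = user;
    }

    public FacebookUser Find(long id)
    {
        FacebookUser user;
        return users.TryGetValue(id, out user) ? user : null;
    }
}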

I’m puzzled why Microsoft does not invest more in making this easier. Microsoft invested in Facebook and it is easy to get the impression that Microsoft and Facebook are in some sort of informal alliance versus Google. Windows Phone 7, for example, ties in closely with Facebook and is probably the best Facebook phone out there.

As it is, although I prefer coding in C# to PHP, I would say that choosing PHP as the platform for your Facebook app will present less friction.

ReSharper 6.0 arrives: intelligent editing and decompiling for Visual Studio

JetBrains has released ReSharper 6.0, an add-on for Visual Studio 2008 and 2010 that delivers a remarkable range of tools, mostly focused on code editing and static analysis. There is also a unit test runner and a source code decompiler.

The heart of ReSharper is refactoring, hence the name, and it adds a large number of refactoring options to Visual Studio. These are nicely integrated with the editor, not only as right-click menu options, but with light-bulb suggestions that appear automatically. Here, for example, ReSharper is telling me that I could use implicit type declaration, and offering to make the change for me, or alternatively to suppress this type of suggestion forever if I do not like it:

[screenshot: ReSharper suggesting an implicit type declaration]

Source code decompiling is also nicely done. In the above code, IClaimsIdentity is part of the .NET Framework so the source code is not normally available. With ReSharper though, I can navigate to decompiled source:

[screenshot: navigating to decompiled source]

This could be legally sensitive, so I have to pass a Decompiler Legal Notice in which JetBrains attempts to disclaim liability.

[screenshot: the Decompiler Legal Notice]

Then I am in, though the results are not exciting in this instance:

[screenshot: the decompiled source]

If you only want the decompiler, you may find the free dotPeek is all you need.

The what’s new list in ReSharper 6.0 is long. It includes support for JavaScript, ASP.NET Razor, CSS and HTML, better XAML support including creating properties and dependency properties from usage, and macros for file headers which automate things like inserting the current date and time.

The pricing is not excessive: in the UK it costs £148 for a personal license or £259 for a commercial license. If you think ReSharper will save you time and improve your code quality, which it likely will, it will soon pay for itself.

Internet Explorer 10 Platform Preview 2 gets web workers, HTML5 sandbox

Microsoft has released Internet Explorer 10 Platform Preview 2 which adds a number of features. These include:
  • Web Workers for background JavaScript.
  • File Reader API
  • HTML 5 drag and drop
  • CSS3 positioned floats
  • HTML 5 sandboxing
  • Some features of HTML 5 forms

I asked Microsoft’s Ryan Gavin and Rob Mauceri why IE seems so far behind its rivals in HTML 5 support if you look at a test site such as html5test.com, where IE9 scores 141 and Google Chrome 329. I was given several reasons. The site does not cover CSS3 or SVG, yet does include “specs that are still under development, specs that have been superseded by other things; you have to look at what it is actually testing,” said Mauceri. He added that the site only tests for the existence of a feature rather than how well it is implemented.

Fair points, but my sense is that Microsoft, while hugely ahead of where it used to be in terms of HTML standards support, is probably still behind Google and Mozilla and likely to remain so. Microsoft has a slower release cycle, and a greater burden of legacy issues to worry about.

That said, Microsoft is pushing forward energetically compared to pre-IE9 days and the new features are interesting, particularly in the light of the greater role for HTML5 which has been promised for Windows 8.

Web Workers, for example, enable more responsive web pages by moving long-running JavaScript onto background threads, keeping the UI responsive.

[screenshot: Web Workers demo]

I also asked how Microsoft will enable greater access to the Windows API in Windows 8 without polluting the standards, but got the not unexpected answer: “wait for the Build conference”.

No formal word on timing, but I would expect the delivery of IE10 and Windows 8 to be connected.

Microsoft Office 365: the detail and the developer story

I attended the UK launch of Office 365 yesterday and found it a puzzling affair. The company chose to focus on small businesses, and what we got was several examples of customers who had discovered the advantages of storing documents online. We were even shown a live video conference with a jerky, embarrassing webcam stream that added zero business value and reminded me of NetMeeting back in the mid-nineties (which, by the way, was a rather cool product). Most of what we saw could have been done equally well in Google Apps, except for a demo of the vile SharePoint Workspace for offline editing of a shared document, though if you were paying attention you could see that the presenter was not really offline at all.

There seems to be a large amount of point-missing going on.

There is also a common misconception that Office 365 is “Office in the cloud”, based on Office Web Apps. Although Office Web Apps is an interesting and occasionally useful feature, it is well down the list of what matters in Office 365. It is more accurate to say that Office 365 is for those who do not want to edit documents in the browser.

I am guessing that Microsoft’s focus on small businesses is partly a political matter. Microsoft has to offer an enterprise story and it does, with four enterprise plans, but it is a sensitive matter considering Microsoft’s relationship with partners, who get to sell less hardware and will make less money installing and maintaining complex server applications like Exchange and SharePoint. The, umm, messaging at the Worldwide Partner Conference next month is something I will be watching with interest.

The main point of Office 365 is a simple one: that instead of running Exchange and SharePoint yourself, or with a partner, you use these products on a multi-tenant basis in Microsoft’s cloud. This has been possible for some time with BPOS (Business Productivity Online Suite), but with Office 365 the products are updated to the latest 2010 versions and the marketing has stepped up a gear.

I was glad to attend yesterday’s event though, because I got to talk with Microsoft’s Simon May and Jo Carpenter after the briefing, and they answered some of my questions.

The first was: what is really in Office 365, in terms of detailed features? You can get this information here, in the Service Description documents for the various components. If you are wondering what features of on-premise SharePoint are not available in the Office 365 version, for example, this is where you can find out. There is also a Support Service Description that sets out exactly what support is available, including response time objectives. Reading these documents is also a reminder of how deep these products are, especially SharePoint which is a programmable platform with a wide range of services.

That leads on to my second question: what is the developer story in Office 365? SharePoint is built on ASP.NET, and you can code SharePoint applications in Visual Studio and deploy them to Office 365. Not all the services available in on-premise SharePoint are in the online version, but there is a decent subset. Microsoft has a SharePoint Online for Office 365 Developer Guide with more details.
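
To give a flavour of that subset, here is a minimal sketch using the SharePoint 2010 client object model against a SharePoint Online site. The URL is made up, and I am glossing over the claims-based sign-in that Office 365 requires before ctx.Credentials will work.

using System;
using Microsoft.SharePoint.Client;

class ListTheLists
{
    static void Main()
    {
        using (var ctx = new ClientContext("https://contoso.sharepoint.com/teamsite"))
        {
            // ctx.Credentials = ...;   // claims/federated authentication goes here

            ctx.Load(ctx.Web);
            ctx.Load(ctx.Web.Lists);
            ctx.ExecuteQuery();          // a single round trip to the server

            Console.WriteLine(ctx.Web.Title);
            foreach (List list in ctx.Web.Lists)
                Console.WriteLine("  " + list.Title);
        }
    }
}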

Now start joining the dots with technologies like Active Directory Federation Services – single sign-on to Office 365 using on-premise Active Directory – and Windows Azure which offers hosted SQL Server and App Fabric middleware. What about using Office 365 not only for documents and email, but also as a portal for cloud-hosted enterprise applications?

That makes sense to me, though there are still limitations. Here is a thread where someone asks:

Does some know if it is possible to make a database connection with Office365, SharePoint (Designer) and SQL Azure database ?

and the answer from Microsoft’s Mark Kashman on the SharePoint team:

You cannot do this via SharePoint Designer today. What you can do is to create a Silverlight or javaScript client application that calls out to SQL Azure.

In the near future, we are designing a way to make these connections using the base SharePoint technology called BCS (Business Connectivity Services) where then you could develop a service to service to SQL Azure.

If you cannot wait, check out the Cloud Connector for SharePoint 2010 from Layer 2 GmbH.
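
However you build the front end, the service Kashman describes ends up talking to SQL Azure with plain ADO.NET. A minimal sketch, with a made-up server, database and table:

using System;
using System.Data.SqlClient;

class SqlAzureQuery
{
    static void Main()
    {
        // SQL Azure requires an encrypted connection and a user@server login;
        // server, database, table and credentials below are placeholders.
        const string connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=CustomerData;User ID=appuser@myserver;Password=...;" +
            "Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT TOP 10 Name FROM Customers", connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}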

It seems obvious that Office 365 and Azure together have potential as a developer platform.

What about third-party applications and extensions for Office 365? This is another thing that Microsoft did not talk about yesterday; but it seems to me that there is potential here as well. It is not well integrated, but you can search Microsoft Pinpoint for Office 365 applications and get some results. If Office 365 succeeds, and I think it will, there is an opportunity for developers here.

Microsoft partners with Joyent to bring node.js server-side JavaScript to Windows

Microsoft will port node.js to Windows in partnership with Joyent. This will work on Windows Azure as well as other versions of Windows back to Server 2003.

But can you not already run node.js on Windows? This is possible using Cygwin, and instructions are here. Cygwin makes Windows more like Linux by providing familiar Linux tools and a POSIX emulation layer. Cygwin is a great tool, though it can be an awkward dependency; a native Windows port should offer better performance and be more robust, particularly as the intention is to use the IOCP (I/O Completion Ports) API. See here for an explanation of IOCP:

With IOCP, you don’t need to supply a completion function, wait on an event handle to signal, or poll the status of the overlapped operation. Once you create the IOCP and add your overlapped socket handle to the IOCP, you can start the overlapped operation by using any of the I/O APIs mentioned above (except recv, recvfrom, send, or sendto). You will have your worker thread block on GetQueuedCompletionStatus API waiting for an I/O completion packet. When an overlapped I/O completes, an I/O completion packet arrives at the IOCP and GetQueuedCompletionStatus returns.

IOCP is the Windows NT Operating System support for writing a scalable, high throughput server using very simple threading and blocking code on overlapped I/O operations. Thus there can be a significant performance advantage of using overlapped socket I/O with Windows NT IOCPs.
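
That quote describes the native API. For a rough feel of the same completion-based model from managed code, here is a C# sketch using SocketAsyncEventArgs, which sits on top of IOCP on Windows; this is my illustration, not node.js code, and error handling and buffer pooling are omitted.

using System;
using System.Net.Sockets;
using System.Text;

static class CompletionPortSketch
{
    public static void BeginReceive(Socket connectedSocket)
    {
        var args = new SocketAsyncEventArgs();
        args.SetBuffer(new byte[4096], 0, 4096);
        args.Completed += OnReceiveCompleted;       // runs on a completion-port thread

        // ReceiveAsync returns false if the operation completed synchronously,
        // in which case the Completed event is not raised and we call the handler ourselves.
        if (!connectedSocket.ReceiveAsync(args))
            OnReceiveCompleted(connectedSocket, args);
    }

    static void OnReceiveCompleted(object sender, SocketAsyncEventArgs args)
    {
        if (args.SocketError == SocketError.Success && args.BytesTransferred > 0)
        {
            string text = Encoding.UTF8.GetString(args.Buffer, 0, args.BytesTransferred);
            Console.WriteLine("Received: " + text);
        }
    }
}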

I was impressed by node.js when I saw it presented by its author Ryan Dahl at a pre-Dreamforce event last year. Since then it has become better known. This is an interesting move, particularly in the context of a greater focus on JavaScript in the forthcoming version of Windows known as Windows 8. End-to-end JavaScript for your next-generation real-time networking applications?

Hands On with Flash Builder 4.5.1 for Apple iOS

Flash Builder 4.5.1 has recently been released, the first version with integrated support for Apple iOS as well as Google Android and RIM BlackBerry Tablet OS. I was keen to try my calculator app on iOS, having already tested it on Android. You can do most of the development on Windows, but I moved the project to OS X so I could try it in the iOS simulator and then on an actual iPhone 4.

Adding iOS as a target platform was easy: right-click the project, choose Properties, check to add the platform.

[screenshot: adding Apple iOS as a target platform]

Then I worked on the UI. The buttons on my design were too small. The answer I guess is to use relative sizes, but I thought for a quick test I would simply set the device to Apple iPhone 4 and resize the layout for that.

[screenshot: resizing the layout for the Apple iPhone 4]

After a bit of tweaking I got the app working nicely in the iOS simulator, again set to iPhone 4. I was also able to set breakpoints and debug the app easily.

[screenshot: the app running in the iOS simulator]

Then I tried it on the device. I did the Apple provisioning dance. I then compiled a release build, which took a long time and featured a thermometer that stuck on zero the entire time. It worked though, and I got the app into iTunes and synched.

On the device the app did not look too good.

[screenshot: the app on the iPhone 4, not looking right]

Well, I have read up on supporting multiple screen sizes and on setting mobile project preferences and I am still not sure why this did not work, especially as it looked OK in the simulator. I had auto-scaling on, and the docs say:

When you enable automatic scaling, Flex optimizes the way it displays the application for the screen density of each device.

I fixed the immediate issue, though, by adding the attribute applicationDPI="320" to the ViewNavigatorApplication element.

Now it works fine.

[screenshot: the app on the iPhone 4 after the fix]

So how is performance? I have managed to create some rather poorly performing calculator UIs in my various tests of cross-platform mobile tools, and this is one of the better ones, though not as responsive as the Titanium app on iOS. However, the Flex app is more consistent across Android and iOS, whereas Titanium was poor on Android. Loading takes a few seconds, but that is acceptable. The app size is only 6MB, which is not bad considering that the necessary bits of Adobe AIR are compiled into it.

Note that this is little more than a Hello World app. My reasoning is that if this does not work well, then nothing will.

So far I am encouraged. Taking into account the development experience and performance across both Android and iOS, this is one of the best I have tried so far with my simple example.

Adobe announces strong results though much of the business looks flat

Adobe has announced its financial results for its second quarter. Revenue is up 9% year on year, and profits are up too, so it looks like a strong quarter. However, the success is really limited to a couple of business segments.

Here is the comparison with the equivalent quarter last year:

Segment revenue ($ millions)   Q2 2010   Q2 2011
Creative and interactive         429.3     433.1
Digital Media                    139.3     136.7
Digital Enterprise               231.9     283.5
Omniture                          91.9     115.9
Print and publishing              56.6      54.0

Adobe has changed the segmentation of these figures since last time I looked, removing the confusing Platform and splitting out Digital Media. Broadly:

  • Creative and interactive is most of Creative Suite and the Flash platform including both developer tools and streaming servers. It also includes the nascent Digital Publishing Suite for Apple iPad and tablet publications.
  • Digital Media is Creative Suite Production Premium and individual sales of Photoshop, Premiere Pro, After Effects and Audition.
  • Digital Enterprise Solutions is the LiveCycle middleware, now rebranded as part of the Digital Enterprise Platform, plus the content management platform acquired with Day Software in October 2010, and Acrobat.
  • Omniture is self-explanatory; this is the analytics business acquired in 2009.
  • Print and Publishing is a bunch of tools including, oddly, ColdFusion but not InDesign. Technical authoring sits here, as does Director.

So what do these figures tell us? Creative Suite is trundling on OK, but no more than that, particularly when you consider that Q2 included the release of a paid-for upgrade, CS5.5. Revenue from Digital Media is slightly down, as is Print and publishing.

The strong results are in Digital Enterprise, following the acquisition of Day, and in Omniture.

Both of these were smart acquisitions in my view, though I am not a financial analyst. In a connected era, analytics is crucial, with great potential for integration with the design and development tools.

The enterprise middleware also seems to be going well. This is really a strange amalgam of the old Adobe document publishing and workflow servers with the application services that came from Macromedia. Throw Day software into the mix, with Roy Fielding’s content-centric vision for application development, and you have an interesting platform.

Adobe is also benefiting from the Apple-led revolution towards design-centric software.

That said, not everything is going Adobe’s way. The momentum behind both HTML5 and Apple iOS is a threat to the Flash business. Never mind the technical arguments, the fact is that designers are more likely to be working on removing Flash from their web pages than putting it in. Adobe also needs to sustain its prices, and there is plenty of downward pressure on software prices today, partly driven by Apple and its App Store model. I also get the impression that the hosted services at Acrobat.com have not taken off in the way Adobe had hoped.

C# vs C++ and .NET vs Mono vs Compact Framework performance tests

A detailed benchmark posted on codeproject investigates the performance of basic operations including string handling, hash tables, math generics, simple arithmetic, sorting, file scanning and (for C#) platform invoke of native code. These are the conclusions:

  • There is only a small performance penalty for C# on the desktop versus C++.
  • Mono is generally slower than Microsoft .NET but still acceptable, and all the benchmarks ran without modification.
  • The Compact Framework, an implementation of .NET for mobile devices, performs poorly.

My observations: this matches my own experiments. Why then do some .NET applications still perform badly? When Evernote switched its application from .NET to native code it got much better performance.

The main reasons are a couple of issues that this kind of benchmark hides. One is the GUI layer, which involves a ton of platform invoke code under the covers. Another is the large size of .NET applications because of the runtime and library overhead; a lot more stuff gets loaded into memory.
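
To make the platform invoke point concrete, this is the kind of interop declaration the benchmark measures directly, and the kind of managed-to-native transition a GUI framework makes constantly behind the scenes; MessageBox is just a convenient, well-known Win32 export.

using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    internal static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);
}

class Program
{
    static void Main()
    {
        // Every call like this crosses the managed/native boundary, which has a fixed
        // marshalling cost; a chatty GUI layer pays that cost over and over.
        NativeMethods.MessageBox(IntPtr.Zero, "Hello from platform invoke", "Demo", 0);
    }
}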

One thing to like about Silverlight is that it is truly optimized for client programming and load time tends to be faster than for a desktop .NET application.

Note that for mobile these benchmarks suggest that C++ still has a big advantage. It would be interesting to see them applied to Silverlight apps on Windows Phone 7. As I understand it, the Silverlight .NET runtime in Windows Phone 7 shares code with the Compact Framework on Windows Mobile, so it is possible that the poor results for the Compact Framework would also apply to Silverlight on Windows Phone 7. Unfortunately developers do not have the option for C++ on Windows Phone 7.

RESTful and modernised: making sense of Adobe’s new Enterprise platform

Adobe has announced its Digital Enterprise Platform for Customer Experience Management. My tip to Adobe: that is too many words with too many syllables for busy IT people who are trying to get their work done. What on earth is it? The same old stuff repackaged, or something genuinely new?

The answer is a bit of each. Adobe has made several big acquisitions over the last few years, starting with the Macromedia merger in 2005 that really formed a new Adobe, bringing together digital publishing and the Flash platform. In September 2009 Adobe acquired Omniture for web analytics, and in October 2010 Day Software. This last one seems to be having a huge impact. Day’s product is called CQ5 Web Content Management and is built on CRX, a content repository that conforms to JCR 2.0 (the Content Repository API for Java Technology, version 2.0). Here’s Roy Fielding, formerly at Day and now Principal Scientist at Adobe, from this white paper [pdf]:

The Content Repository API for Java Technology (JCR) is poised to revolutionize the development of J2SE/J2EE applications in the same way that the Web has revolutionized the development of network-based applications. JCR’s interface designers have followed the guiding principles of the Web to simplify the interactions between an application and its content repository, thus replacing many application-specific or storage-specific interfaces with a single, generic API for content repository manipulation.

JCR is a boon for application developers. Its multipurpose nature and agnostic content model encourages reuse of the same code for many different applications, reducing both the effort spent on development per application and the number of interfaces that must be learned along the way. Its clean separation between content manipulation and storage management allows the repository implementation to be chosen based on the actual performance characteristics of the application rather than some potential characteristics that were imagined early in the application design. JCR enables developers to build full-featured applications based on open source implementations of a repository while maintaining compatibility with the proprietary repositories that are the mainstay of large data centers.

Adobe already has an application platform based on LiveCycle Enterprise Suite, which you will notice now redirects to the Digital Enterprise Platform. Ben Watson, Adobe’s Principal Customer Experience Strategist, explained it to me like this:

The core of the platform now becomes the repository that we got from the Day acquisition. We are also following their leadership around the use of RESTful technology, so changing how we do our web services implementation, how we do our real time data integration into Flash using data services. There’s really four technologies at play here. There’s CQ5, Adobe LiveCycle which is all the business process management on the back end, the online marketing suite with Omniture, and Creative tools which allow to both design and develop all of this content and assets … We had two Java platforms and we brought them into one.

[slide: the Adobe Digital Enterprise Platform]

You can read up on the Digital Enterprise Platform here or see a chart of capabilities here. Much of it does look like rebranding of existing LiveCycle modules; but as a statement of direction it is an interesting one.

Is this for on-premise deployment, or cloud hosted? Adobe has a tie-up with Amazon for hosted deployment, though there is no multi-tenant hosting from Adobe yet; I got the impression from Watson that it is being worked on.

Adobe is aware that it does not stand alone, and there are several connectors and integration points for third-party applications, such as a SAP data services connector.

Adobe also has a series of “solutions”, which are permutations of web content management, analytics, document processing, social media and so on.  There is also a Unified Workspace, currently in beta, which is a dashboard application.

The company’s line is that it is well placed to address the challenge of the mobile revolution, and to bring greater usability and social interaction to business applications: the consumerization of IT, in other words.

Although that sounds a strong pitch, melding all this together into something new while keeping hold of existing developers and designers is a challenge. Another issue for Adobe is that the company’s strong presence in design, multimedia and marketing makes it hard to appeal to more general enterprise developers. Nevertheless, the combination of Fielding’s influence and Adobe’s strength in design, documents and cross-platform clients makes this a platform worth watching.