Category Archives: software development

RemObjects previews native Apple Mac IDE for C#, .NET, Oxygene

RemObjects is previewing a new native Mac IDE for its Oxygene and C# compilers. Oxygene is a Delphi-like language (in other words, a variant of Object Pascal) which targets iOS, Mac, Android, Windows Phone and Windows. RemObjects C# shares the same targets. Both can compile to .NET assemblies for Windows, or to Mono for cross-platform .NET, or to a Mac or iOS executable (using the LLVM compiler), or to Java bytecode for the Android Dalvik runtime. You can get both Oxygene and RemObjects  C# bundled in a product called Elements.

In the past, RemObjects has used Visual Studio as its IDE. While this is a natural choice for Windows users, much development today is done on the Mac. Requiring Mac users to develop in a Windows Virtual Machine adds friction, so RemObjects is now working on a native IDE for the Mac codenamed Fire.

image

I gave Fire the briefest of looks. Here are some of the options for a new .NET application:

image

Note the appearance of ASP.NET MVC 4, and even Silverlight.

Here are the options for a new Cocoa application:

image

If you are developing for Cocoa, you can edit the resource file in Apple’s Xcode and use it in your application. I started a new C# Cocoa app, made a few changes and then ran it from the IDE:

image

I imagine Microsoft will be keeping an eye on tools like this – if it is not, it should be – since they fit with the strategy of supporting Microsoft services on multiple devices. Visual Studio is a fine tool but if Microsoft is serious about cross-platform, it needs strong Mac-native development tools. Xamarin came up with Xamarin Studio, which is cross-platform for Windows and Mac, but the RemObjects approach also looks worth investigating.

PS The first release of RemObjects C# lacked full generic support, for which failing Xamarin and Mono founder Miguel de Icaza took RemObjects to task on Twitter. I was amused to see this in the changelog for April 2014:

 image

65764 Full support for Generics on Cocoa, as requested by Miguel

For more details on Fire, see here.

A note on Azure storage and downloading large files

I have written a simple ASP.NET MVC application for upload and download of files to/from Azure storage.

Getting large file upload to work was the first exercise, described here. That is working well; but what about download?

If your files in Azure storage are public, you can simply serve a URL to the file. If a file is not public though, you have a couple of choices:

1. Download the file under application control, by writing to Response.OutputStream or using a FileResult action.

2. Issue a Shared Access Signature (SAS) to the client which enables it to retrieve the file directly from Azure storage. The SAS is sent as a URL argument which tells Azure storage that the request is authorised. The browser downloads the file directly, so it makes no difference to your web application if the file is large.

Note that if you use the first option, it will not work with large files if you simply call DownloadToStream or similar:

container.GetBlockBlobReference(FileName).DownloadToStream(Response.OutputStream);

Why not? Well, the way this code works is that it downloads the large file to the web server, then sends it to the browser. What if your large file is 5GB? The browser will wait a long time for the first byte to be served (giving the user an unresponsive page); but before that happens, the web application will probably throw an exception, because buffering such a large file on the web server is liable to exhaust memory or hit a request timeout.

This means the SAS option is a good one, though note that you have to specify an expiry time which could cause problems for users on a slow connection.
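For what it is worth, issuing the SAS with the .NET storage client library looks roughly like this. This is a sketch rather than production code: the method name and the two-hour expiry are illustrative, and it assumes the WindowsAzure.Storage client library and a CloudBlobContainer reference similar to the one used above.

using System;
using Microsoft.WindowsAzure.Storage.Blob;

// Sketch: generate a time-limited, read-only URL for a private blob
public string GetDownloadUrl(CloudBlobContainer container, string fileName)
{
    CloudBlockBlob blob = container.GetBlockBlobReference(fileName);

    var policy = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        // an expiry time is required; too short and a slow download may be cut off
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(2)
    };

    // the SAS is a query string appended to the blob URL, which the browser uses directly
    return blob.Uri.AbsoluteUri + blob.GetSharedAccessSignature(policy);
}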

Another option is to serve the file in chunks. Use CloudBlockBlob.DownloadRangeToStream to write to Response.OutputStream in a loop until the download is complete. Call Response.Flush() after each chunk to send the chunk to the browser immediately.
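In outline the loop looks something like this, inside an MVC controller action. It is a sketch only: the 4MB chunk size is an arbitrary choice, and it assumes the same container and FileName variables as the earlier snippet plus the Microsoft.WindowsAzure.Storage.Blob namespace.

const int chunkSize = 4 * 1024 * 1024; // 4MB per chunk (arbitrary)

CloudBlockBlob blob = container.GetBlockBlobReference(FileName);
blob.FetchAttributes(); // populates blob.Properties.Length

Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=" + FileName);
Response.AddHeader("Content-Length", blob.Properties.Length.ToString());

long offset = 0;
long remaining = blob.Properties.Length;
while (remaining > 0 && Response.IsClientConnected)
{
    long length = Math.Min(chunkSize, remaining);
    blob.DownloadRangeToStream(Response.OutputStream, offset, length);
    Response.Flush(); // send this chunk to the browser immediately
    offset += length;
    remaining -= length;
}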

This gives the user a nice responsive download experience complete with a cancel option as provided by the browser, and does not crash the application on the server. It seems to me a reasonable approach if the web application is also hosted on Azure and therefore has a fast connection to Azure storage.

What about resuming a failed download? The SAS approach should work as Azure supports it. You could also support this in your app with some additional work, since supporting resume means reading the Range header in a GET request. I have not tried doing this but you might find some clues here.
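As a rough, untested sketch of the idea, the download action could inspect the Range header and start the chunked loop at the requested offset; this only handles the simple "bytes=N-" form and reuses the blob reference from the loop above.

long start = 0;
string rangeHeader = Request.Headers["Range"]; // e.g. "bytes=1048576-"
if (!string.IsNullOrEmpty(rangeHeader) && rangeHeader.StartsWith("bytes="))
{
    string[] parts = rangeHeader.Substring(6).Split('-');
    long.TryParse(parts[0], out start);

    Response.StatusCode = 206; // Partial Content
    Response.AddHeader("Content-Range", string.Format("bytes {0}-{1}/{2}",
        start, blob.Properties.Length - 1, blob.Properties.Length));
}
// then run the chunked download loop from offset = start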

Amazon Mobile SDK adds login, data sync, analytics for iOS and Android apps

Amazon Web Services has announced an updated AWS Mobile SDK, which provides libraries for mobile apps using Amazon’s cloud services as a back end. Version 2.0 of the SDK, supporting iOS and Android (including Amazon Fire OS), is now in preview, adding several new features:

Amazon Cognito lets users log in with Amazon, Facebook or Google and then synchronize data across devices. The data is limited to 20MB, stored as up to 20 datasets of key/value pairs. All data is stored as strings, though binary data can be encoded as a base64 string of up to 1MB. The intent seems geared to things like configuration or game state data, rather than documents.

Amazon Mobile Analytics collects data on how users are engaging with your app. You can get data on metrics including daily and monthly active users, session count and average daily sessions per active user, revenue per active user, retention statistics, and custom events defined in your app.

Other services in the SDK, but which were already supported in version 1.7, include push messaging for Apple, Google, Fire OS and Windows devices; Amazon S3 storage (suitable for any amount of data, unlike the Cognito sync service), SimpleDB and Dynamo DB NoSQL database service, email service, and SQS (Simple Queue Service) messaging.

Windows Phone developers or those using cross-platform tools to build mobile apps cannot use Amazon’s mobile SDK, though all the services are published as a REST API so you could use it from languages other than Objective-C or Java by writing your own wrapper.

The list of supported identity providers for Cognito is short though, with notable exclusions being Microsoft accounts and Azure Active Directory. Getting round this is harder since the federated identity services are baked into the server-side API.

image

Supporting developers: how could Microsoft improve?

Microsoft invests substantial resources in supporting developers; yet the last two topics I have explored in earnest – the Azure blob storage service, and ASP.NET MVC with Azure Active Directory integration – have been frustrating and difficult. Admittedly I am only an occasional developer, but I suspect my experience is common. What is going wrong, and how could Microsoft improve?

Among the problems I have encountered:

  • Abundant documentation of simple first steps with a vacuum for anything more advanced
  • Samples that do not run without tweaking
  • Samples designed for old versions of Visual Studio
  • Samples which use obsolete or deprecated libraries
  • Samples which are poor solutions for the problem they are supposed to address
  • Documentation or samples which use preview, beta or even alpha libraries. Microsoft sometimes seems to make more effort documenting what is in preview than what is fully released.
  • Posts on a topic which are out of date, but for which it is hard to find something current
  • Circular links – click here for more information – you get another article which links back to the first one, perhaps with an intermediate step
  • Poor quality responses to questions on official Microsoft forums

On the positive side, the reference documentation is not too bad. StackOverflow is a great resource and seems to attract higher quality responses (even sometimes from Microsoft staff) than the company’s own forums.

Here then are some of the improvements I would like to see:

1. A sharper distinction between what is in preview and what is production-ready. For any given problem, it would be great to find a clear statement of how you should address it for production now, with fully released and supported libraries, and another statement showing how you will be able to address it with the latest and greatest (but perhaps less stable) technology which is in preview.

2. For key teams in Microsoft to maintain sites which offer clearly delineated production and preview sections and which are kept rigorously up to date.

3. More short samples and fewer “this demonstrates everything” samples. Large samples are more difficult to install and study and have more complex dependencies.

4. Posts and their accompanying code inevitably go out of date. I do not favour removing them, since that causes more difficulties (broken links) than it solves; however it seems to me reasonable for teams to maintain a number of key samples for their product area and keep them up to date.

What am I missing – or am I complaining too much about what is normal in software development? As ever, I welcome your views.

Developing an ASP.NET MVC app with Azure Active Directory: an ordeal

Regular readers will know that I am working on a simple (I thought) ASP.NET MVC application which is hosted on Azure and uses Azure Blob Storage.

So far so good; but since this business uses Office 365 it seemed to me logical to have users log in using Azure Active Directory (AD). Visual Studio 2013, with the latest update, has a nice wizard to set this up. Just complete the following dialog when starting your new project:

image

This worked fairly well, and users can log in successfully using Azure AD and their normal Office 365 credentials.

I love this level of integration and it seems to me key and strategic for the Microsoft platform. If an employee leaves, or changes role, just update Active Directory and all application access comes into line automatically, whether on premise or in the cloud.

The next stage though was to define some user types; to keep things simple, let us say we have an AppAdmin role for users with full access to the application, and an AppUser role for users with limited access. Other users in the organisation do not need access at all and should not be able to log in.

The obvious way to do this is with AD groups, but I was surprised to discover that there is no easy way to find out to which groups an AD user belongs. The Azure AD integration which the wizard generates is only half done. Users can log in, and you can programmatically retrieve basic information including the first name, last name, User Principal Name and object ID, but nothing further.

Fair enough, I thought, there will be some libraries out there that fill the gap; and this is how the nightmare begins. The problem is that this is the cutting edge of .NET cloud development and is an area of rapid change. Yes there are samples out there, but each one (including the official ones on MSDN) seems to be written at a different time, with a different approach, with different .NET assembly dependencies, and varying levels of alpha/beta/experimental status.

The one common thread is that to get the AD group information you need to use the Graph API, a REST API for querying and even writing to Azure Active Directory. In January 2013, Microsoft identity expert Vittorio Bertocci (Principal Program Manager in the Windows Azure Active Directory team at Microsoft) wrote a helpful post about how to restore IsInRole() and [Authorize] in ASP.NET apps using Azure AD – exactly what I wanted to do. He describes essentially a manual approach, though he does make use of a library called Azure Authentication Library (AAL) which you can find on Nuget (the package manager for .NET libraries used by Visual Studio) described as a Beta.

That would probably work, but AAL is last year’s thing and you are meant to use ADAL (Active Directory Authentication Library) instead. ADAL is available in various versions ranging from 1.0.3 which is a finished release, to 2.6.2 which is an alpha release. Of course Bertocci has not updated his post so you can use the obsolete AAL beta if you dare, or use ADAL if you can figure out how to amend the code and which version is the best/safest to employ. Or you can write your own wrapper for the Graph API and bypass all the Nuget packages.

I searched for a better sample, but it gets worse. If you browse around MSDN you will probably come across this article along with this sample which is a Task Tracker application using Azure AD, though note the warnings:

NOTE: This sample is outdated. Its technology, methods, and/or user interface instructions have been replaced by newer features. To see an updated sample that builds a similar application, see WebApp-GraphAPI-DotNet.

Despite the warnings, the older sample is widely referenced in Microsoft posts like this one by Rick Anderson.

OK then, let’s look at the shiny new sample, even though it is less well documented. It is called WebApp-GraphAPI-DotNet and includes code to get the user profile, roles, contacts and groups from Azure AD using the latest Graph API client: Microsoft.Azure.ActiveDirectory.GraphClient. This replaces an older effort called the GraphHelper which you will find widely used elsewhere.

If you dig into this new sample though, you will find a ton of dependencies on pre-release assemblies. You are not just dealing with the Graph API, but also with OWIN (Open Web Interface for .NET), which seems to be Microsoft’s current direction for the interface between web applications and web servers.

After messing around with Nuget packages and trying to get WebApp-GraphAPI-DotNet working I realised that I was not happy with all this preview code which is likely to break as further updates come along. Further, it does far more than I want. All I need is actually contained in Bertocci’s January 2013 post about getting back IsInRole.

I ended up patching together some code using the older GraphHelper (as found in the obsolete Task Tracker application) and it is working. I can now use IsInRole based on AD groups.

This is a mess. It is a simple requirement and it should not be necessary to plough through all these complicated and conflicting documents and samples to achieve it.
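For the record, the essence of the approach Bertocci describes – whichever Graph API wrapper you use to fetch the groups – is to add the group names to the signed-in user’s ClaimsIdentity as Role claims, so that IsInRole and [Authorize(Roles = "AppAdmin")] work as normal. A minimal sketch, with illustrative names rather than the exact code I used:

using System.Collections.Generic;
using System.Security.Claims;

// Sketch: once the user's AD group names have been fetched (however you call the
// Graph API), attach them to the authenticated identity as Role claims
public static void AddGroupRoleClaims(ClaimsIdentity identity, IEnumerable<string> groupNames)
{
    foreach (string group in groupNames)
    {
        identity.AddClaim(new Claim(ClaimTypes.Role, group));
    }
}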

Notes from the field: putting Azure Blob storage into practice

I rashly agreed to create a small web application that uploads files into Azure storage. Azure Blob storage is Microsoft’s equivalent to Amazon’s S3 (Simple Storage Service), a cloud service for storing files of up to 200GB.

File upload performance can be an issue, though if you want to test how fast your application can go, try it from an Azure VM: performance is fantastic, as you would expect from an Azure to Azure connection in the same region.

I am using ASP.NET MVC and thought a sample like this official one, Uploading large files using ASP.NET Web API and Azure Blob Storage, would be all I needed. It is a start, but the method used only works for small files. What it does is:

1. Receive a file via HTTP Post.

2. Once the file has been received by the web server, call CloudBlob.UploadFile to upload the file to Azure blob storage.

What’s the problem? Leaving aside the fact that CloudBlob is deprecated (you are meant to use CloudBlockBlob), there are obvious problems with files that are more than a few MB in size. The expectation today is that users see some sort of progress bar when uploading, and a well-written application will be resistant to brief connection breaks. Many users have asymmetric internet connections (such as ADSL) with slow upload; large files will take a long time and something can easily go wrong. The sample is not resilient at all.

Another issue is that web servers do not appreciate receiving huge files in one operation. Imagine you are uploading the ISO for a DVD, perhaps a 3GB file. The simple approach of posting the file and having the web server upload it to Azure blob storage introduces obvious strain and probably will not work, even if you do mess around with maxRequestLength and maxAllowedContentLength in ASP.NET and IIS. I would not mind so much if the sample were not called “Uploading large files”; the author perhaps has a different idea of what is a large file.

Worth noting too that one developer hit a bug with blobs greater than 5.5MB when uploaded over HTTPS, which most real-world businesses will require.

What then are you meant to do? The correct approach, as far as I can tell, is to send your large files in small chunks called blocks. These are uploaded to Azure using CloudBlockBlob.PutBlock. You identify each block with an ID string, and when all the blocks are uploaded, call CloudBlockBlob.PutBlockList with a list of IDs in the correct order.
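Server side, the pattern looks roughly like this – a compact sketch using the WindowsAzure.Storage client library, with error handling and retries omitted and names such as localPath purely illustrative:

// uses System, System.Collections.Generic, System.IO, System.Text
// and Microsoft.WindowsAzure.Storage.Blob
CloudBlockBlob blob = container.GetBlockBlobReference(FileName);
var blockIds = new List<string>();
byte[] buffer = new byte[1024 * 1024]; // 1MB blocks

using (FileStream fs = File.OpenRead(localPath))
{
    int read, blockNumber = 0;
    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        // block IDs must be base64-encoded and of equal length before encoding
        string blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(blockNumber.ToString("d6")));
        blockIds.Add(blockId);

        using (var ms = new MemoryStream(buffer, 0, read))
        {
            blob.PutBlock(blockId, ms, null);
        }
        blockNumber++;
    }
}

blob.PutBlockList(blockIds); // commit the blocks in the correct order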

This is the approach taken by Suprotim Agarwal in his example of uploading big files, which works and is a great deal better than the Microsoft sample. It even has a progress bar and some retry logic. I tried this approach, with a few tweaks. Using a 35MB file, I got about 80 KB/s with my ADSL broadband, a bit worse than the performance I usually get with FTP.

Can performance be improved? I wondered what benefit you get from uploading blocks in parallel. Azure Storage does not mind what order the blocks are uploaded. I adapted Agarwal’s sample to use multiple AJAX calls each uploading a block, experimenting with up to 8 simultaneous uploads from the browser.

The initial results were disappointing. Eventually I figured out that I was not actually achieving parallel uploads at all. The reason is that the application uses ASP.NET session state, and ASP.NET serialises requests in the same session unless you mark your ASP.NET MVC controller class with the SessionState attribute set to SessionStateBehavior.ReadOnly, as in the snippet below.
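The fix is a single attribute on the controller (the controller name here is just an example):

using System.Web.Mvc;
using System.Web.SessionState;

// allows concurrent requests within the same session, at the cost of read-only session state
[SessionState(SessionStateBehavior.ReadOnly)]
public class UploadController : Controller
{
    // upload actions here
}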

I fixed that, and now I do get multiple parallel uploads. Performance improved to around 105 KB/s, worthwhile though not dramatic.

What about using a Windows desktop application to upload large files? I was surprised to find little improvement. But can parallel uploading help here too? The answer is that it should happen anyway, handled by the .NET client library, according to this document:

If you are writing a block blob that is no more than 64 MB in size, you can upload it in its entirety with a single write operation. Storage clients default to a 32 MB maximum single block upload, settable using the SingleBlobUploadThresholdInBytes property. When a block blob upload is larger than the value in this property, storage clients break the file into blocks. You can set the number of threads used to upload the blocks in parallel using the ParallelOperationThreadCount property.
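For what it is worth, setting those two properties looks something like this; a sketch with illustrative values, and note that the exact UploadFromStream overloads vary between versions of the storage client library:

var options = new BlobRequestOptions
{
    SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024, // break into blocks above 4MB
    ParallelOperationThreadCount = 4 // upload up to four blocks in parallel
};

CloudBlockBlob blob = container.GetBlockBlobReference(FileName);
using (FileStream fs = File.OpenRead(localPath))
{
    blob.UploadFromStream(fs, options: options);
}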

It sounds as if there is little advantage in writing your own chunking code, except that if you just call the UploadFromFile or UploadFromStream methods of CloudBlockBlob, you do not get any progress notification event (though you can get a retry notification from an OperationContext object passed to the method). Therefore I looked around for a sample using parallel uploads, and found this one from Microsoft MVP Tyler Doerksen, using C#’s Parallel.For.

Be warned: it does not work! Doerksen’s approach is to upload the entire file into memory (not great, but not as bad as on a web server), send it in chunks using CloudBlockBlob.PutBlock, adding the block ID to a collection at the same time, and then to call CloudBlockBlob.PutBlockList. The reason it does not work is that the order in which Parallel.For executes its iterations is indeterminate, so the block IDs are unlikely to be added to the collection in the right order.

I fixed this, it tested OK, and then I decided to further improve it by reading each chunk from the file within the loop, rather than loading the entire file into memory. I then puzzled over why my code was broken. The files uploaded, but they were corrupt. I worked it out. In the following code, fs is a FileStream object:

fs.Position = x * blockLength;
bytesread = fs.Read(chunk, 0, currentLength);

Spot the problem? Since fs is a variable declared outside the loop, other threads were setting its position during the read operation, with random results. I fixed it like this:

lock (fs)
{
    fs.Position = x * blockLength;
    bytesread = fs.Read(chunk, 0, currentLength);
}

and the file corruption disappeared.
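Putting the pieces together, the shape of the working code is roughly as follows; a sketch with illustrative names and sizes rather than the exact code behind the timings below, but it shows the two fixes: block IDs stored by index so the order is preserved, and the shared FileStream locked around each read.

// uses System, System.IO, System.Text, System.Threading.Tasks
// and Microsoft.WindowsAzure.Storage.Blob
const int blockLength = 1024 * 1024; // 1MB per block

CloudBlockBlob blob = container.GetBlockBlobReference(FileName);

using (FileStream fs = File.OpenRead(localPath))
{
    int blockCount = (int)Math.Ceiling((double)fs.Length / blockLength);
    var blockIds = new string[blockCount]; // indexed by block number, so order is preserved

    Parallel.For(0, blockCount, x =>
    {
        int currentLength = (int)Math.Min(blockLength, fs.Length - (long)x * blockLength);
        byte[] chunk = new byte[currentLength];

        lock (fs) // only one thread may seek and read at a time
        {
            fs.Position = (long)x * blockLength;
            fs.Read(chunk, 0, currentLength);
        }

        string blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(x.ToString("d6")));
        blockIds[x] = blockId;

        using (var ms = new MemoryStream(chunk))
        {
            blob.PutBlock(blockId, ms, null);
        }
    });

    blob.PutBlockList(blockIds);
}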

I am not sure why, but the manually coded parallel uploads seem to improve performance slightly but not dramatically, to around 100-105 KB/s, almost exactly what my ASP.NET MVC application achieves over my broadband connection.

image

There is another approach worth mentioning. It is possible to bypass the web server and upload directly from the browser to Azure storage. To do this, you need to allow cross-origin resource sharing (CORS) as explained here. You also need to issue a Shared Access Signature, a temporary key that allows read-write access to Azure storage. A guy called Blair Chen seems to have this all figured out, as you can see from his Azure speed test and jazure JavaScript library, which makes it easy to upload a blob from the browser.

I was contemplating going that route, but it seems that performance is no better (judging by the Test Upload Big Files section of Chen’s speed test), so I should probably be content with the parallel JavaScript upload solution, which avoids fiddling with CORS.

Overall, has my experience with the Blob storage API been good? I have not found any issues with the service itself so far, but the documentation and samples could be better. This page should be the jumping off point for all you need to know for a basic application like mine, but I did not find it easy to find good samples or documentation for what I thought would be a common scenario, uploading large files with ASP.NET MVC.

Update: since writing this post I have come across this post by Rob Gillen which addresses the performance issue in detail (and links to working Parallel.For code); however I suspect that since the post is four years old the conclusions are no longer valid, because of improvements to the Azure storage client library.

Google I/O 2014: impressive momentum, no wow moments

I am not in San Francisco but attended Google I/O Extended in London yesterday, to hear the keynote and a couple of sessions from Google’s annual developer conference.

image

I found the demographics different from most IT events I attend: a younger crowd, and plenty of start-ups and very small businesses, not at all enterprisey (is that a word?).

image

The main announcements:

A new version of Android, known as Android L (I don’t know if this will expand eventually to Lollipop or Liquorice or some such). It is a big release with over 5,000 new APIs, we were told (when does Android start being called bloated, I wonder?). Themes include a new visual style called Material Design (which extends also to the web and to Chrome), and suitability for more device types including Android TV, Android Wear (smart watches) and Android Auto. There is also a new hardware-accelerated graphics API called the Android Extension Pack, which extends OpenGL ES for better game performance, with support from NVIDIA Tegra. Android graphics performance will be good enough for a considerable subset of the gaming community, and we saw Unreal Engine demoed.

Android L does not use Dalvik, the virtual machine that runs Java code. In its place is ART (Android Runtime). This is 64-bit, so while Java code will run fine, native code will need updating.

Google is working hard to keep Android under its control, putting more features into its Play Services, the closed part of Android available only from Google and which is updated every 6 weeks, bypassing the operator obstacle to OS updates. There is also a new reference design including both hardware and software which is designed for affordable smartphones in the developing world: third parties can take this and build a decent Android mobile which should sell for under $100 as I understood it. I imagine this is designed to ward off fractured Android efforts like Microsoft’s Nokia X, aimed at the same kind of market but without Play Services.

There are new Android smart watches on the way, and we saw the inevitable demonstration of a user issuing voice commands to the watch to order taxis or pizzas, get notifications, and send simple messages.

Voice control demos always seem to be nervous moments for presenters – will they be understood? Unfortunately that uncertainty remains for real users too, as evidenced by Xbox One Kinect which is amazing in that it often works, but fails often enough to be irritating. Voice recognition is a hard problem, not only in respect of correctly translating the command, but also in correctly detecting what is a command (if the person standing next to me shouts “Taxi please” I do not want my watch to order one for me).

The smart watch problem also parallels the TV problem. The appeal of the watch is that it is a simple glanceable device for telling the time. The appeal of the TV is that it is a simple sit-back screen where you only have to select a channel. Putting more smarts into these devices seems to make sense, but at the same time damages that core feature, unless done with extreme care.

Android TV puts the OS into your television, though Google’s messaging here is somewhat confusing in that, on the one hand, Chromecast (also known as Googlecast) means that you can use your Google device (Android or Chromebook) as the computer and the TV as the display and audio system, while on the other hand you can use Android on the TV itself as an all-in-one.

We are inching towards unified home entertainment, but with Google, Microsoft (Xbox One), Sony (PlayStation) and Apple all jostling for position it is too early to call a winner.

Material Design – Metro for Android?

We heard a lot about Material Design, which is Google’s new design style. Google borrowed plenty of buzzwords from Microsoft’s “Metro” playbook, and I heard expressions like “fast and fluid”, clean typography, signposting, and content-first. Like Metro, it also seems to have a blocky theme (we will know when the next design wave kicks in as it will have rounded corners).

image

Material Design is not just for Android. You can also implement the concept in Polymer, which is a web presentation framework built on Web Components, a standard in draft at the W3C. Support for Web Components (and therefore Polymer) is already in Chrome, advancing rapidly in Mozilla Firefox, probably coming in Apple Safari, and maybe coming in Microsoft IE. However, JavaScript polyfills mean that Polymer will run to some extent in any modern browser.

Whenever IE was mentioned by a presenter at Google I/O there was an awkward/knowing laugh from the audience. Think about what that means.

One of the ideas here is that with a common design concept across Android and web, developers can make web apps (and therefore Chrome apps) look and behave more like Android apps (or vice versa). Again, there is a similar concept at Microsoft, where the WinJS library lets you implement a Metro look and feel in a web app.

Microsoft may have been ahead of Google in this, but it has done the company little good in that adoption for Metro has been weak, for well-rehearsed reasons connected with the smartphone wars, legacy Windows desktop and so on. Google has less legacy weighing it down.

How good is Material Design though? Apple’s Steve Jobs once said of a new OS X design update that it was so good you want to lick it. Metro lacks that kind of appeal, and judging from yesterday’s brief samples, so does Material Design, whatever its other merits in terms of clarity and usability. It is early days though.

Business features: Samsung Knox, Office support, unlimited storage

Google announced a couple of features aimed at business users. One is that Samsung Knox, which provides app sandboxing and data security for business users, has been donated to Google for integration into Android. Another is that Google Docs will get the ability to edit Microsoft Office documents in their native format, removing an annoyance for users who previously had to convert documents to and from Google’s own format when exchanging them with Microsoft Office users.

This seems to be an admission that Microsoft Office is the business standard for documents, and you can take it either way – good for Google because compatibility is better, or good for Microsoft because it cements Office as the standard. There will be ifs and buts of course.

Google is also offering unlimited online storage for business users, called Drive for Work, at $10 per user per month, upping the ante for everyone in the online storage game – Microsoft, Dropbox, Box and so on.

Google’s Cloud Platform

Google showed new features in its cloud platform, with a focus on big data analytics using an approach called Cloud Dataflow. “We don’t use MapReduce any more”, said the presenter, explaining that Cloud Dataflow enables all of us to use the same technology Google uses to analyse big data.

Greg DeMichille, a director of product management for the cloud platform, appeared on stage to show features for in-browser tracing and debugging of cloud applications. I recall DeMichille being much involved in Microsoft’s version of Java back in the days of the battle with Sun; he also had a spell at Adobe getting behind Flash and Flex for developers.

No Wow moments

The Google I/O 2014 keynote impressed in terms of numbers – Android growth continues unabated – and in terms of partners lining up behind initiatives like Android TV and Android Auto. The momentum seems unstoppable and the mass market for mobile and embedded devices is Google’s to lose.

On the other hand, I did not notice any game-changing moments such as I experienced when first seeing the Chromebook, or the Google Now personalisation service. Both of those still exist, of course, but if Android will really change our lives for the better, Google could have done a better job of conveying that message.

Embarcadero AppMethod: another route to cross-platform mobile, now with C++ support

Embarcadero has updated AppMethod, its IDE for cross-platform mobile and desktop applications. The IDE now supports C++, and as a special offer, you can develop for Android phones “free forever”, according to the web site.

AppMethod is none other than our old friend Delphi, combined with the FireMonkey cross-platform framework. The difference between AppMethod and the older RAD Studio product line (current version is XE6) is twofold:

1. AppMethod does not include the VCL, the Delphi framework for Windows applications. It does let you develop for Windows or Mac OS X using FireMonkey.

2. You can buy RAD Studio outright with a perpetual license, from £1342.00 plus VAT for a new user (RAD Studio Professional). AppMethod is only available on subscription.

AppMethod pricing is per developer per platform per year. Currently this is £179.83 plus VAT for individuals (very small businesses up to a maximum of 5 employees in the entire organisation) or £600 for larger businesses (a rather large premium).

C++ support is new in AppMethod 1.14 and supports all target platforms except the iOS Simulator (an annoying limitation). It supports ARC (Automatic Reference Counting) on Android as well as iOS. Mac OS X is supported from 10.8 (Mountain Lion) and up.

There are also a few changes in FireMonkey. You can load HTML into the TWebBrowser component using LoadFromStrings. There is a new date picker component.

Another new feature is in the RTL (run time library). Called App Tethering, it lets applications communicate with each other, for example using TCP. These can be apps on the same device or remote apps. Once paired, apps can run remote actions and share standard data types and streams.

There are also updates to push notifications for iOS and Android, Google Glass support, updated OpenGL and DirectX support on Windows, and more: see here for the complete documentation of what is new.

A Quick Hands-on

I installed the latest AppMethod on Windows 8. The install warns that AppMethod cannot co-exist with RAD Studio XE6, presumably because it is essentially the same thing re-wrapped. The product name is relatively new, but there is plenty of old stuff under the covers. AppMethod still has a dependency on J#, Microsoft’s Java implementation for .NET – Java code in the IDE dating back to who knows when.

image

There is a 10-field dialog confirming paths for Android tools, which is a reminder of how many moving parts there are here. It is more complex than most Android development environments because it uses the NDK (Native Development Kit) as well as the usual SDK.

image

Once up and running, you can start a new project such as a FireMonkey mobile application:

image

and then you are in an IDE which would not be entirely unfamiliar to a Delphi user in 1995 (or I suppose, a C++ Builder user in 1997) – I am not saying this is a bad thing, though the IDE feels dated in comparison to Microsoft’s Visual Studio.

image

After coming from a spell of development with XAML it feels odd to have a form builder that defaults to xy layout, but layout managers are available:

image

Compile and run, and after the usual slow initialization of the Android emulator, the app appeared.

image

Why AppMethod?

In the crowded world of cross-platform mobile development, why use AppMethod?

Embarcadero makes a big play of its native development, though it is “native” in respect of code execution but not in GUI fidelity since by default visual controls are custom-drawn by the framework. This is in contrast to Xamarin (the obvious alternative for developers from a Windows background) which does no custom drawing but only uses native controls; however for raw performance AppMethod may have the edge (I have not done comparisons).

Delphi developers should also look at RemObjects Oxygene which also uses a Delphi-like language but is hosted in Visual Studio and, like Xamarin, uses native UI components.

The AppMethod approach does make sense if you prioritise maximum code-sharing over getting exactly the right look and feel for each supported platform, and need better performance or more capability than HTML and JavaScript can get you. There is no support for Windows Phone though; if that is in your plans, Xamarin or HTML and JavaScript development is a better fit.

Visual Studio “14” announced, preview available with “Roslyn” open source compiler

Microsoft’s Soma Somasegar has announced the next version of Visual Studio, currently known as Visual Studio 14, but likely to be fully released in 2015 (and, I am guessing, likely to be called Visual Studio 2015).

This is a major release. It includes a new VB and C# compiler which is itself written in managed code, codenamed Roslyn. The open source Roslyn project provides new APIs that enable more powerful IDE features. Visual Basic is getting refactoring support for the first time.

The preview also includes a major update to ASP.NET that unifies ASP.NET MVC and the ASP.NET Web API, and has a new deployment model and developer experience:

Thanks to the Roslyn compiler, if you change “.cs” files or the project.json file and want to see the change in the browser, you don’t need to build the project any more. Just refresh the browser.

There is no IIS express, nor IIS involved when you run from the command line. It means that you can publish your website to a USB drive, and run it by double clicking the web.cmd file!

On the C++ side, there is improved C++ 11 support and more features from C++ 14:

The Visual Studio "14" CTP includes support for user-defined literals, noexcept, alignof and alignas, and inheriting constructors from C++11, generalized lambda capture, auto function return type deduction, and generic lambdas from C++14, as well as many more new C++ features.

says Somasegar. There is also a refactored C Runtime (CRT):

msvcr140.dll no longer exists. It is replaced by a trio of DLLs: vcruntime140.dll, appcrt140.dll, and desktopcrt140.dll.

If you install the CTP (mine is downloading), use a spare machine or VM; it is an early preview that does not work side-by-side with other versions, and the only uninstall may be to flatten the machine:

Installing a CTP release will place a computer in an unsupported state. For that reason, we recommend only installing CTP releases in a virtual machine, or on a computer that is available for reformatting.

Apple’s Swift programming language: easy coding for OS X and iOS at last?

Apple has announced a new programming language, called Swift. (There was already a language called Swift, used for parallel scripting, but Apple links to the other Swift in case you land on the wrong page. So far it looks like the other Swift has not returned the favour).

For as long as I can remember, serious Apple developers have had to use Objective-C, an object-oriented C that is not like C++. I have only dabbled in Objective-C but when I last tried it I was pleasantly surprised: memory management was no hassle and I found it productive. Nevertheless it is an intimidating language if you come from a background of, say, JavaScript or Microsoft .NET. Apple’s focus on Objective-C has left a gap for easier to use alternatives, though the main reason developers use something other than Objective-C, as far as I am aware, is for cross-platform projects. Companies such as Xamarin and Embarcadero (with Delphi) have had some success, and of course Adobe PhoneGap (or the open source Cordova) has had significant take-up for cross-platform code based on HTML and JavaScript.

I should mention that RAD (Rapid Application Development) on OS X has long been possible using the wholly-owned Filemaker, a database manager with a powerful scripting language, but this is not suitable for general-purpose apps.

Overall, it is fair to say that coding for OS X and iOS has a higher bar than for Windows because Apple has not provided anything like Microsoft’s C# or Visual Basic, type-safe languages with easy form builders that let you snap together an application in a short time, while still being powerful enough for almost any purpose. This has been a differentiator for Windows. Visual Basic is almost as old as Windows itself, and C# was introduced in 2000.

Now Apple has come up with its own equivalent. I am new to Swift as are most people outside Apple, but took a quick look at the book, The Swift Programming Language, along with the announcement details. A few highlights:

  • Swift is a type-safe language that compiles to native code using LLVM.
  • The IDE for Swift is Xcode. It supports Cocoa development (Apple’s user interface framework) via import of the existing Objective-C frameworks, which become Swift APIs via the import keyword:

import UIKit

  • You can mix Swift and Objective-C in a single project. In Objective C you can use #import to make Swift code visible and usable.
  • Swift is a C-family language and you will find familiar features like curly braces and semi-colons to terminate statements (though semi-colons are optional).
  • Swift uses reference counting for automatic memory management. There is a rather complex section in the book about weak references and unowned references, to solve some of the problems inherent in reference counting.
  • Type inference is the preferred approach to declaring the type of a variable, but you can state the type if required. You can also declare constants.
  • Swift supports single inheritance for classes and multiple inheritance for protocols (protocols are more or less equivalent to interfaces in other languages).
  • There are advanced features including closures, generics, tuples, and variadic parameters. (I am not sure if “advanced” is the right word, but other languages such as C# and Java took a while to get these).
  • Swift has something like destructors which it calls deinitializers.
  • There is an interesting feature called Extensions which lets you add methods to any existing type. For example, you could extend Int with a prettyprint method and then call 3.prettyprint.
  • Swift variables are not normally nullable; they must have a value. However you can declare optional types (add a ?, such as Int?) that can be set to nil. You can also declare implicitly unwrapped optionals which can be nil, but once assigned a value cannot be nil thereafter.
  • Swift includes the AnyObject type which can represent anything.

Swift seems to me to have similar goals to Microsoft’s C#: easier and safer than C or C++, but intended for any use right up to large and complex applications. One of the best things about it is the smooth interoperability with Objective-C; this also saves Apple from having to write native Swift frameworks for its entire stack.

A smart move? I think so, though Swift is different enough from any other language that developers have some learning to do.

What difference will Swift make? Initially, not that much. Objective-C developers now have a choice and some will move over or start mixing and matching, but Swift is still single-platform and will not change the developer landscape. That said, Swift may make Apple’s platform more attractive to business developers, for whom C# or Java is currently more productive; and perhaps Apple could find ways of using Swift in places where previously you would have to use AppleScript, extending its usefulness.

If Apple developers were tempted towards Xamarin or Delphi for productivity, as opposed to cross-platform, they will probably now use Swift; but I doubt there were all that many in that particular group.

I would be interested to hear from developers though: what do you think of Swift?