Category Archives: development

Delphi XE2 FireMonkey for Windows, Mac, iOS: great idea, but is it usable?

I am sure all readers of this blog will know by now that Delphi XE2 (and RAD Studio XE2) has been released, and that to the astonishment of Delphi-watchers it supports not only 64-bit compilation on Windows, but also cross-platform apps for Windows, Mac OS X and even iOS for iPhone and iPad (with Android promised).

I tried this early on and was broadly impressed – my app worked and ran on all three platforms.

image

However, it is an exceedingly simple app, pretty much Hello World, and there are some worrying aspects to this Delphi release. FireMonkey is based on technology from KSDev, which was acquired by Embarcadero in January this year. To go from acquisition to full Delphi integration and release in a few months is extraordinary, and makes you wonder what corners were cut.

It seems that corners were cut: you only have to read this post by developer and Delphi enthusiast Chris Rolliston:

To put it bluntly, FireMonkey in its current state isn’t good enough even for writing a Notepad clone (I know, because I’ve been trying). You can check out Herbert Sauro’s blog for various details (here, also a follow up post here). For my part, here’s a highish-level list of missing features and dubious coding practices, written from the POV of FireMonkey being a VCL substitute on the Mac (since on OS X, that is what it is).

Fortunately I did not write a Notepad clone, I wrote a Calculator clone, which explains why I did not run into as many problems.
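For a sense of scale, the whole app amounts to little more than FireMonkey event handlers along these lines. This is a sketch rather than my actual code: the component names are invented, and the XE2 unit names are quoted from memory, so treat them as approximate.

unit CalcFormUnit;

interface

uses
  System.SysUtils, FMX.Types, FMX.Controls, FMX.Forms, FMX.Edit;

type
  TCalcForm = class(TForm)
    edtA: TEdit;
    edtB: TEdit;
    edtResult: TEdit;
    btnAdd: TButton;
    procedure btnAddClick(Sender: TObject);
  end;

var
  CalcForm: TCalcForm;

implementation

{$R *.fmx}

procedure TCalcForm.btnAddClick(Sender: TObject);
var
  A, B: Double;
begin
  // StrToFloatDef avoids an exception if the user types something non-numeric
  A := StrToFloatDef(edtA.Text, 0);
  B := StrToFloatDef(edtB.Text, 0);
  edtResult.Text := FloatToStr(A + B);
end;

end.

Nothing in code like this touches the platform directly, which is why a simple calculator sails through where a Notepad clone, with its menus, dialogs and text handling, hits the gaps.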

Update: See also A look at the 3D side of FireMonkey by Eric Grange:

…if you want to achieve anything beyond a few poorly texture objects, you’ll need to design and write a lot of custom code rather than rely on the framework… with obvious implications of obsolescence and compatibility issues whenever FMX finally gets the features in standard.

There has already been an update for Delphi XE2 which is said to fix over 120 bugs as well as an open source licensing issue. I also noticed better performance for my simple iOS calculator after the update.

Still, FireMonkey early adopters face some significant issues if they are trying to make VCL-like applications, which I am guessing is a common scenario. There is a mismatch here, in that FireMonkey is based on VGScene and DXScene from KSDev, and the focus of those libraries was rich 2D and 3D graphics. Some Delphi developers undoubtedly develop rich graphical applications, but a great many do not, and I would judge that if Embarcadero had been able to deliver something more like a cross-platform VCL that just worked, the average Delphi developer would have been happier.

The company must be aware of this, and one reading of the journey from VGScene/DXScene to FireMonkey is that Embarcadero has been madly stuffing bits of VCL into the framework. Eventually, once the bugs are shaken out and missing features implemented, we may have something close to the ideal.

In the meantime, you can make a good case for Adobe Flash and Flex if what you really want is cross-platform 2D and 3D graphics; while VCL-style developers may be best off using the current FireMonkey more for trying out ideas and learning the new framework than for real work, pending further improvements.

On the positive side, even though FireMonkey is a bit rough, Embarcadero has delivered a development environment for Windows and Mac that works. You can work in the familiar Delphi IDE and code around any problems. The Delphi community is not short of able developers who will share their workarounds.

I have some other questions about Delphi. Why are there so many editions, and who uses the middleware framework DataSnap, or other enterprisey features like UML modeling?

There appear to be five editions of Delphi XE2: Starter, Professional, Enterprise, Ultimate and Architect, where Architect has features missing in Ultimate – should the Ultimate be called the Penultimate? It breaks down like this:

  • Starter: Low cost, restrictive license that is mainly non-commercial (you are allowed revenue up to $1,000 per year). No 64-bit, no Mac or iOS. $199.00
  • Professional: The basic Delphi product. Missing a few features like UML diagramming, no DataSnap. Limited IntraWeb. $899.00
  • Enterprise: For more than double the price, you get DataSnap and dbExpress server drivers. $1,999.00
  • Ultimate: Adds a developer edition of Embarcadero’s DBPowerStudio. $2,999.00
  • Architect: Adds more UML modeling, and a developer edition of Embarcadero’s ER/Studio database modeling tool. $3,499.00

The RAD Studio range is similar, but adds C++Builder, PHP and .NET development. No Starter version. Prices from $1,399.00 for Professional to $4,299.00 for Architect. The non-Ultimate Ultimate is $3,799.00.

All prices discounted by around 40% for upgraders.

The problem for Embarcadero is that Delphi is such a great and flexible tool that you can easily use it for database or multi-tier applications with just the Professional edition. See here, for example, for REST client and server suggestions. Third parties like devart do a good job of providing alternative data access components and dbExpress drivers. I would be interested to know, therefore, what proportion of Delphi developers buy into the official middleware options.
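To illustrate the point, a bare-bones REST client needs nothing beyond what the Professional edition includes – the bundled Indy components will do. Here is a minimal sketch, using a placeholder URL rather than a real service:

program RestClientSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, IdHTTP;

var
  Http: TIdHTTP;
  Response: string;
begin
  Http := TIdHTTP.Create(nil);
  try
    // A plain HTTP GET; the body comes back as a string, ready to be parsed
    // with whatever JSON or XML library you prefer
    Response := Http.Get('http://example.com/api/customers');
    Writeln(Response);
  finally
    Http.Free;
  end;
end.

Pair that with one of the third-party data access libraries on the server side and you have the makings of a multi-tier application without touching DataSnap.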

As an aside, I wondered about DataSnap licensing. I looked at the DataSnap page, which says for licensing information look here – a MIDAS article from 2000; yes, Embarcadero, that is 11 years ago. It proves, if nothing else, what a ramshackle web site has evolved over the years.

Personally I would prefer to see Embarcadero focus on the Professional edition, improve humdrum things like FireMonkey documentation, fix bugs, and go easy on enterprise middleware, a market that is well served elsewhere.

I have seen huge interest in Delphi as a productive, flexible, high-performance tool for Windows, Mac and mobile, but the momentum is endangered by quality issues.

Google offers the web a new language called Dart – but why?

Google has announced an early preview of Dart, a new language for web applications. The news is not a surprise, especially if you have been keeping track of the developer conference GOTO Aarhus, whose organisers had pre-announced that Google would be announcing its new language there, as indeed it did.

image

Dart is a curly-brace language like JavaScript, Java, C, C++ and C#. In Dart, as in C# and Java, a class can implement multiple interfaces but only inherit from a single class. Typing is optional: you can write dynamically typed code or add static type annotations. Google says Dart code can be executed by a Dart VM, or converted to JavaScript:

Dart code can be executed in two different ways: either on a native virtual machine or on top of a JavaScript engine by using a compiler that translates Dart code to JavaScript. This means you can write a web application in Dart and have it compiled and run on any modern browser. The Dart VM is not currently integrated in Chrome but we plan to explore this option.

Google also says that you will be able to “execute Dart code directly in a VM on the server side”, so you can infer that Google has Dart in mind as an alternative to PHP as well as to JavaScript. The company is using the phrase “structured web programming” to describe Dart, and the phrase appears both in the announcement and as the subtitle on the Dart site. The implication is that JavaScript code tends to be poorly structured and that Dart will promote more maintainable code.

In the preview Dart only runs in Chrome, Safari 5 and Firefox 4+ – spot the missing browser vendors.

At first glance, Dart looks like a promising language, though I find myself asking what it is really for, when it bears a strong family resemblance to existing languages, and bearing in mind that the Google Web Toolkit, which compiles Java to JavaScript, already enables structured programming for web applications. The list of problems which Dart solves in the technical overview is not all that compelling.

Google states that:

Developers have not been able to create homogeneous systems that encompass both client and server, except for a few cases such as Node.js and Google Web Toolkit (GWT).

This is or was one of the attractions of Microsoft Silverlight, presuming you use C# on both server and client, but Silverlight is a plug-in that was never going to run on an iPad and from which Microsoft itself is now retreating; though it is worth noting that Dart is not unlike C#, especially the latest version of C# with dynamic features.

I guess that Dart is a consequence of the failure of ECMAScript 4.0, which was a cooperative effort to create a more modern and advanced JavaScript. Google is now going it alone; the key question is whether it can win support from others such as Apple and Microsoft, or whether this will be a Google language for Google on the server and Chrome on the client, or an interesting experiment that never really catches on.

Do we need Dart? I would value hearing from others what you think of Google’s proposal.

CodeRage free online conference for Delphi and RAD Studio starts next week

Embarcadero’s CodeRage virtual conference starts next week, on October 17 2011, and is worth a look if you have any interest in Delphi or the new RAD Studio XE2.

There are sessions on 64-bit Delphi, the new cross-platform FireMonkey framework, the new LiveBindings data binding system, Prism (Delphi for .NET), and extras including a session on Regular Expressions in Delphi and elsewhere, Dependency Injection and Delphi Spring, unit testing with Delphi, and using 3D graphics in business applications.

Of course you could wait for the replays to be available, but if this is like previous events there is a chance to put questions to people who might actually know the answers, so there is an advantage to attending live – though the event is scheduled for Pacific Time, so the afternoon sessions mean a late night if you are in the UK.

Developers keen to get apps on Barnes & Noble Nook

I took a quick look round the exhibition here at Adobe MAX in Los Angeles, and was intrigued to see crowds round the Barnes & Noble Nook stand, a newcomer to MAX.

image

Barnes & Noble has its own app store for the Nook Color, the AIR runtime is on the device, and in fact it is used for some of the built-in apps. It is not the most powerful of tablets, and it only has wi-fi for internet connectivity, but it is nevertheless proving a worthwhile market for apps. The store is curated to maintain quality, and one of the points made to me on the stand is that owners expect to pay for their content, making it easier to sell paid-for apps.

image

Unfortunately this device is not available globally, and of course everyone is waiting to see what impact Amazon’s Kindle Fire will have on Nook’s sales. Even so, for developers who have a suitable app this is a significant market.

Installing Windows 8 developer preview on VirtualBox

I have installed the Windows 8 developer preview on Oracle VirtualBox. It does not work on Virtual PC since 64-bit guests are not supported. It is probably fine on Hyper-V, but I don’t have spare Hyper-V capacity for it at the moment.

image

I had a few hassles and thought it would be worth sharing my notes.

I gave the VM 2GB of RAM, 2 processors, and the maximum amount of video ram, but these settings are up to you.

The main problem I encountered was with the mouse. I found that it worked a bit in the Windows 8 guest, but only a bit. The pointer jumped around and was too frustrating to use.

The solution I found was to remote desktop to the VM from my Windows 7 desktop. I could not get the remote desktop built into VirtualBox to work, on a brief try, so I used pure Windows to Windows.

In order to do this, I first set networking in VirtualBox to Bridged, which puts the VM on the same subnet as the host computer. Then I enabled remote desktop access in the Windows 8 control panel. I opened a command prompt and ran ipconfig to check the IP address – Windows key + R opens the Run prompt and is a useful combination when the mouse is not working.

Then I was able to use remote desktop to that IP address. Note that unless you join the Windows 8 machine to a domain, the username is:

machinename\email address

or alternatively

WindowsLiveID\email address

presuming you do the default thing, which is to hook up Windows 8 to a Live ID.
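For reference, the connection from the host is then just this (substitute whatever address the VM reports):

mstsc /v:192.168.1.50

with the username entered in one of the formats above.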

Now, if you do this you will have two GUIs showing, which is untidy. You can fix this by running the VM headless. Shut down the VM, navigate to the VirtualBox directory and run the following command:

vboxheadless --startvm yourvmname

Now you can log on to the Windows 8 VM without having any other instance on the screen.

You might not have the same problem with the mouse, of course.

Incidentally, I am not sure of the best way to shut down the VM, but I use a command prompt or Windows key + R and type:

shutdown /s

My final observation: Windows 8 with just mouse and keyboard is a lot less fun than on a real tablet. It raises the question of just how much value there is in Windows 8 for non-tablet users. I suspect rather little, which is why Windows 7 is set for a long life on the corporate desktop, and for other users who do not have touch screens.

Continuous Integration vs Continuous Delivery vs Continuous Deployment: what is the difference?

I am reading the excellent book Continuous Delivery by Jez Humble and David Farley. But what is Continuous Delivery and how does it differ from the other “continuous” development methodologies?

It helps to understand that all these methodologies spring from the Agile software development movement, and the expression Continuous Delivery comes from the first of the principles behind the Agile Manifesto:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Now, the starting assumption is that most software projects integrate a number of smaller projects, whether from third parties or from team members. Since these pieces are developed to some extent independently, there is a risk that changes made to one piece will require modifications to another piece; hence, according to Humble and Farley:

Most software developed by large teams spends a significant proportion of its development time in an unusable state.

The business of getting all the parts to work together is called integration, and if this involves serious work you need to have an integration phase where this is the sole objective. This is a bad idea for all sorts of reasons, slowing development and preventing proper testing other than at the end of these integration phases.

The solution is called Continuous Integration (CI). You have a frequent automated build that assembles all the pieces from all the teams into a working application. If the build fails, or if automated tests run against the build fail, then this is a bug that should be fixed immediately, not later in some separate phase of development.

Tools for CI include CruiseControl, Hudson, TeamCity and others; .NET developers can also configure Visual Studio Team Foundation Server for CI.
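To make that concrete in Delphi terms – my own illustration, not an example from the book – the automated tests the CI server runs against each build are typically ordinary unit tests, such as DUnit test cases along these lines, and the build is marked as broken as soon as any check fails:

unit CalculatorTests;

interface

uses
  TestFramework;

type
  TCalculatorTests = class(TTestCase)
  published
    procedure TestAddition;
  end;

implementation

uses
  Calculator; // hypothetical unit containing the AddIntegers function under test

procedure TCalculatorTests.TestAddition;
begin
  // CheckEquals fails the test, and hence the build, if the values differ
  CheckEquals(4, AddIntegers(2, 2), '2 + 2 should be 4');
end;

initialization
  RegisterTest(TCalculatorTests.Suite);
end.

The point is less the framework than the automation: the CI server runs every such test on every check-in, so a breaking change is caught within minutes rather than in a late integration phase.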

The problem with CI alone is that the development environment is not the same as the production environment. What if the CI build works and tests pass, but once deployed the application breaks or performs badly? Perhaps the development environment runs a multi-tier application with all the tiers on a single box, but when deployed onto actual multiple machines or VMs, something goes wrong. Permission problems are another common source of errors.

Continuous Delivery means that you not only build the software, but also deploy it frequently. This usually means provisioning servers, which you can automate using a tool like Puppet for Unix-like servers, or with Virtual Lab Management in a Visual Studio environment. Automation is pretty much essential for this to work. The more closely the test environment matches the production environment, the better.

Generally though, Continuous Delivery means deployment to a test environment. What about taking the next step, and deploying continuously to production? That is the methodology called Continuous Deployment. It sounds risky; but if you have a very extensive and thorough set of automated tests, then the risks are mitigated, especially as the extent of the changes in any one deployment is reduced.

Other suggestions for reducing risk include deploying to a small subset of users first, called “canary testing”; and making rollback easy.

That said, to judge by the Humble/Farley book the distinction between Continuous Delivery and Continuous Deployment is just a little blurred. The authors acknowledge that continuous deployment into production is not always a good idea. They also imply that Continuous Deployment might mean only that your application is always ready and easy to deploy into production, not that you necessarily deploy it constantly:

Your implementation should make it possible to deploy any version of your application that has made it past the automated tests into any of your environments at the push of a button, given the correct credentials.

Compliance and security are also factors that may rightly make it impossible to automate deployment to production completely.

Client-installed applications present some special difficulties which Humble and Farley discuss.

In summary then:

Continuous integration: your application always builds and passes its tests, including all the pieces from different sub-teams.

Continuous delivery: your application always builds and deploys to a test environment and passes its tests.

Continuous deployment: your application is always ready to deploy to production through a largely automated process.

Update: I received an email from Martin Fowler about this post. He refers to Jez Humble’s post on Continuous Delivery vs Continuous Deployment and adds:

– I would use your definition of Continuous Deployment for Continuous Delivery

– I would change the definition of Continuous Deployment to say something like "every good build is released to production"

However, I clarified with him that if you are building for a test environment but are confident that any build that passes would be OK to deploy to production, then you are still doing Continuous Delivery. In the end, while I am sure you should use Fowler and Humble’s definitions rather than mine, it seems to me a fine distinction, and that if you are doing Continuous Delivery properly then the transition to Continuous Deployment is largely a matter of policy.

Google Native Client: browser apps unleashed, or misconceived and likely to fail?

Last week Google integrated Native Client into the beta of Chrome 14. Native Client lets you compile C/C++ code to run in the browser. It depends on a new plug-in API called Pepper. Both are open source projects sponsored by Google and implemented in the Chrome browser, and therefore also likely to turn up in Chrome OS, an operating system in which all apps run in the browser.

Native Client is cool. For example, NaClBox lets you run old DOS games in the browser by porting DOSBox to Native Client.

image

Another is Qt for Google Native Client, currently in development. Qt is an excellent and popular GUI and application framework which would speed development of Native Client apps as well as enabling many existing applications to be ported.

It is also worth mentioning that Native Client provides another way to run .NET code in the browser, via Mono with NaCl support.

Why Native Client? Google’s vision, or at least the part of it that focuses on Chrome OS rather than Android, is that everything runs on the Internet and in the browser, making the local operating system unimportant and easily replaced. Native Client removes the performance compromises of managed languages such as JavaScript, ActionScript or Java, as well as easing migration for businesses with existing C/C++ code.

Writing native code for the browser is nothing new. Both Microsoft’s ActiveX and the NPAPI plug-in API used by non-Microsoft browsers let you extend the browser with native code. However Native Client is seamless for the user; you do not have to install any additional plug-in. The main limitation is that Native Client applets do not have access to the local operating system, for security reasons.

It is also worth noting that Native Client apps are not altogether cross-platform. They must be recompiled for different CPU instruction sets, with the current implementation supporting x86 and ARM, though you have to compile two binaries. Google says it will support LLVM output to enable cross-platform binaries, though this will impact performance.

But is Native Client secure? That is an open question. Google was aware of the security challenge from the beginning of the project. Unlike the plug-in mechanisms which rely mainly on trust in developer competence and signed code to verify the origin of the plug-in or ActiveX control, Native Client inspects the actual code for unsafe instructions before allowing it to run. There is also an “outer sandbox” which intercepts system calls.

However, adding any new way for code to run makes the browser less secure. Google ran a Native Client Security Contest to help identify vulnerabilities, and the contestants had no problem finding security flaws. All of the flaws discovered will of course have been fixed, but there may well be others.

And is Native Client necessary? The latest JIT-compiled JavaScript engines are fast enough to enable most types of application to run at a satisfactory speed. This is not just about performance though; it is about reusing existing skills, libraries and applications. There is no doubt that Native Client is nice to have; whether its benefits outweigh the risks is harder to judge.

The last question, which may prove the most significant, is political. Google has forged ahead on its own with Native Client, saying as vendors always do that it hopes it will become a web standard. In the early days of the project, it looked like a Native Client plug-in might enable the feature in other browsers, but abandoning NPAPI for Pepper makes this difficult. Will other browser vendors support Native Client?

Here is a comment from Google’s Ian Ni-Lewis that I find remarkable:

As you probably know, the rule in Web standards is "implementation wins." So we’re concentrating on getting a good quality implementation out the door. We’re doing that in Chrome. That doesn’t mean that NaCl is intended to be "Chrome only," just that we have to start somewhere.

So Native Client is non-standard, and therefore less interesting than HTML 5 until either Google has a Microsoft-Office-like de facto monopoly of web browsers, or it persuades Mozilla, Microsoft and Apple to support it.

That said, you can think of Chrome as an installable runtime in the same way as the Java Virtual Machine or Adobe Flash, just a potentially more intrusive one. Here is our app, you have to install the free Chrome browser to use it. If this happens to any great extent, I can foresee other browser makers hastening to support it.

Building PasswordSafe for the Mac: Lion development hassles

I am doing some work on a Mac at the moment. On Windows I store passwords in PasswordSafe, an open source utility that works well, so I wondered if I could access my PasswordSafe database from the Mac.

image

I could have run the Windows version in Parallels, which I have just installed, but I figured a Mac version would be more convenient. I didn’t see a Mac build among the downloads, but PasswordSafe is cross-platform, so I downloaded the source to do a quick compile.

I was glad to find README.MAC.DEVELOPERS.txt in the PasswordSafe source and set to work. The first task is to download wxWidgets, a cross-platform GUI library, so I did, and then ran the osx-build-wx script as instructed. Result: an error message stating that the C compiler cannot create executables.

The problem seems to be that PasswordSafe expects GCC 4.0 but the latest Xcode installs GCC 4.2. The solution suggested here is to remove Xcode 4, install Xcode 3, and then reinstate Xcode 4. There are related issues concerning PPC fat binaries and older versions of the Mac SDK.

That solution seemed risky and arduous to me, and I remembered that I still had an old Mac Mini from which I had been forced to upgrade in order to install Lion, the latest OS X. I hooked it up, removed Xcode 4, installed Xcode 3, and set to work again.

I get the impression not many people build PasswordSafe for the Mac. The first issue I discovered was that the steps in README.MAC.DEVELOPERS.txt don’t mention that after running osx-build-wx you also have to run make in order to build the static libraries. That was easy though. The next step is to load the supplied PasswordSafe project into Xcode and build.

I did that but got an error – the linker could not resolve SizeRestrictedPanel. The fix was to add SizeRestrictedPanel.cpp and SizeRestrictedPanel.h to the project. PasswordSafe then built and seems to work fine, on Lion as well as earlier versions of OS X, though there are a few cosmetic issues. You can see from the image that the caption for the New Database button is slightly awry.

If anyone wants my build, it is here. There is also a Java version, and some people have success with that on the Mac.

Delphi for Windows, Mac and iOS: screenshots and video of cross-platform development

Embarcadero is drip-feeding information about its forthcoming RAD Studio XE2 in an annoying manner; nevertheless the product does look interesting and promises cross-platform native code apps for Windows 64-bit, Windows 32-bit, Mac OS X and Apple iOS. I have grabbed some screens from a video recently posted by Embarcadero’s Andreano Lanusse; the video is also embedded below.

Here is Delphi XE2 showing a FireMonkey application in the designer. FireMonkey is a new cross-platform GUI framework.

image

Note the list of target platforms on the right. If you squint you can see 64-bit Windows, OSX, and 32-bit Windows.

image

How do you compile for the Mac? It is clear from the demo that Lanusse is running Windows in a VMware virtual machine on a Mac. He also has a Remote Profile option set to target the host Mac:

image

He then refers to a “Platform assistant”, which you can see running in a terminal window on the Mac, and is then able to compile and run from the Windows IDE:

image

Finally, he targets iOS, though this is a separate project, not just another target. The process exports the project to Xcode, Apple’s Mac and iOS IDE:

image

Next, we see the app running on the iPad simulator:

image

The ability to target the Mac is nice to have, but I suspect it is iOS that will attract more interest, given the importance of Apple’s mobile platform.

Here’s the complete video where you can perhaps puzzle out a few more details.

Update: there is also some Q&A in the comments here.

Graphics rendering is Direct2D or Direct3D on Windows, and OpenGL on the Mac. FireMonkey renders all components through the graphics API; it does not use native OS controls, though Embarcadero’s Michael Swindell says:

FireMonkey client area controls are rendered by OpenGL on Mac, but appear and work just like Cocoa controls – or however you want them to. There are many different Cocoa UI styles in OSX apps, and Firemonkey can render any of them – including iTunes, or Prokit which is an Apple UI style for Pro apps like Final Cut, not available to devs via Cocoa. Windows are Cocoa Windows and the client areas and all user controls are rendered by OpenGL in HD(2D) or 3D. Menus are std and rendered by Cocoa in the menu bar, and common dialogs are rendered by Cocoa. If the “true OSX” look isn’t for you, you’re welcome to use any included Style, download a custom style, or create your own custom style.

Swindell also addresses the matter of Linux and Android:

We do plan Linux and Android. But no eta yet until we get Win/OSX/iOS out. We would also like to provide language bindings for other languages.

Finally, a bit more about that Platform Assistant:

Developer requires a PC and a Mac (or Mac with VM running Windows). You will develop on Windows, and use the platform assistant (PA running on your Mac) to compile natively to your Mac and the PA handles debugging communication between the Mac and your IDE running on Windows. Delphi (or C++Builder) and Firemonkey create compiled stand alone OSX executables that you can sell/distribute to your users. They are native Mac apps. They “copy install” and run like any other Mac app, or you can use a Mac installer if you like.

PhoneGap is at version 1.0

I’ve just spotted that PhoneGap has reached version 1.0. The release was announced at PhoneGap Day in Portland on Friday 29th July.

I have spent some time trying out various cross-platform mobile development tools. PhoneGap is among the most interesting and popular, and is also open source and free to use. If you believe that using the browser engine as an application runtime is the most sensible route to cross-platform mobile applications, then PhoneGap is the leading contender. It wraps your application to look like a native app, and also provides ways to call the native API when necessary.

PhoneGap received a boost when Adobe built it into Dreamweaver CS5.5. I tried it out and was impressed with the design environment, but I am not sure how serious Adobe is about PhoneGap, since there is no documentation on how to package your PhoneGap app for release, and my post has comments from puzzled users. My solution was to export the project to Eclipse and the standard PhoneGap tools, which misses part of the value of having it integrated into Dreamweaver.

Adobe installs PhoneGap into the Dreamweaver directory, so another issue is how to take advantage of the latest version if you are using Adobe’s tools. Overall I would suggest that using the PhoneGap SDK and Eclipse is a better option, though there is no problem with bringing in Dreamweaver for parts of the design.

I interviewed Nitobi president André Charland about PhoneGap earlier this year.