Category Archives: mobile

Fast JavaScript engine in Apple iOS 4.3 is in standalone Safari only, but why?

Now that Apple iOS 4.3 is generally available for iPhone and iPad, users have noticed something that seems curious. The fast new “Nitro” JavaScript engine only works in the standalone Safari browser, not when a web app is pinned to the home screen, or when a web view is embedded into an app.

This link at mobilexweb.com shows the evidence: the SunSpider benchmark completes in 4098ms in standalone Safari, but takes 10391.9ms when the same page is pinned to the home screen as an app.


The consequence: apps created using WebKit as a runtime, for example using PhoneGap, will not get the benefit of Nitro.
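You can verify this for yourself without running the full SunSpider suite: a crude timing loop is enough to show the large gap between the JIT-compiled and interpreted engines. Here is a minimal sketch (the iteration count is arbitrary, and navigator.standalone is a non-standard, iOS-only property):

```typescript
// Crude micro-benchmark: a tight numeric loop is exactly the kind of
// code a JIT compiler such as Nitro speeds up most dramatically.
function benchmark(iterations: number): number {
  const start = Date.now();
  let sum = 0;
  for (let i = 0; i < iterations; i++) {
    sum += Math.sqrt(i);
  }
  return Date.now() - start; // elapsed milliseconds
}

// navigator.standalone is true when the page was launched from a
// home-screen icon rather than inside Mobile Safari.
const standalone = (navigator as any).standalone === true;
console.log(`standalone: ${standalone}, elapsed: ${benchmark(5000000)}ms`);
```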

It would be easy to conclude that Apple is deliberately hobbling these apps in order to protect native apps, which can only be installed via Apple’s App Store and are subject to its 30% cut. However, that might not be the case. It could be a bug – according to Hacker News it has been reported as such:

To add another note to this, its a bug that Apple seems to know about. I can’t link to it because its marked CONFIDENTIAL across the top of the dev forums, but in short its known about and being investigated.

or it could be a security feature. A just-in-time compiler needs memory pages that are both writable and executable, which exposes the operating system more than simply interpreting the code; iOS normally forbids such pages, and perhaps only standalone Safari is granted the exception.

Either way, with the increasing interest in WebKit as a de facto cross-platform application runtime for mobile, this particular limitation is unfortunate.

Update: There are also reports of the HTML 5 offline cache not working other than in full Safari:

I’ve tested this by switching apple-mobile-web-app-capable from ‘yes’ to ‘no’ and offline cache works as expected. But whenever it’s switched back to ‘yes’, it’s not working anymore. This occurs when the app is standalone at home screen. As a website, viewed with Safari cache is working as expected.
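If you want to test the offline cache behaviour yourself, you can query the application cache state from script. A minimal sketch (using the AppCache API of the day, long since deprecated on the modern web; the event names are from the HTML5 spec):

```typescript
// Report the state of the HTML5 offline application cache. Cast to
// any because current TypeScript DOM typings have dropped the
// long-deprecated applicationCache property.
const cache = (window as any).applicationCache;
const states = ["UNCACHED", "IDLE", "CHECKING", "DOWNLOADING", "UPDATEREADY", "OBSOLETE"];
console.log(`AppCache status: ${states[cache.status]}`);

cache.addEventListener("cached", () => console.log("offline cache populated"));
cache.addEventListener("error", () => console.log("offline cache failed"));
```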

The Apple iPad post-PC era in education

I am at the QCon conference in London where I attended a session by Fraser Speirs mysteriously titled The Invisible Computer Lab.

Speirs is the guy who won a certain amount of fame or notoriety by issuing all staff and pupils with Apple iPad devices at the Scottish private school where he teaches computing.


The session blurb did not mention the iPad but said, “this talk will argue for a new direction in school ICT.” I went along because I am conscious that the way computing is taught in UK schools is often ineffective. Problems include kids knowing more than teachers; out of date hardware; too much Microsoft Office; and often an exclusive focus on general purpose applications rather than forming any understanding of what computers are and how they work.

There is probably a connection between this and the low interest in computer science in higher level education.

Speirs did mention this; but most of the talk was an iPad love-in. He is an Apple fan and showed us pictures of the original iMac and the various Mac notebooks which preceded the arrival of the iPad at his school.

Nevertheless, he made a persuasive case for how the iPad had transformed teaching (not only computing) at the school. According to Speirs, the children write longer essays because they have discovered word processing for the first time; they have new artistic creativity; they use the web far more, and the school had to upgrade its internet connectivity; they are escaping from a word-based approach to learning and presenting their work to one which makes use of multiple media types.

He added that some of the expected snags did not materialise. They were concerned about the virtual touch keyboard on the iPad and offered keyboard accessories to everyone; but in practice few wanted one. The kids, he said, now dislike plastic keyboards with their tiresome buttons.

It is not a new model of computing; it is a new model of education. Handwriting may no longer be an important skill, said Speirs.

Now, I do make due allowance for the over-exuberance of an Apple evangelist, and accept that the reality may not be as rose-tinted as he describes.

At the same time, you can see how well Apple’s controlled computing environment works in a school, where kids may try to break computers or do bad things with them, and how the design and usability revolution plays out in education.

Note, however, that Apple is not yet really geared up for iPad in education, and Speirs encountered silly issues like the inability to buy site licences for apps delivered over iTunes; each one has to be purchased individually, and the school has to fudge the accounts since nobody under 13 can use the App Store. I am sure issues like this will be fixed soon.

Objections? Well, there is the cost of Apple’s premium hardware and its tax on the software. There are also the ethics of using Apple at all – today, as it happens, there are posts by Bill Thompson and by Tom Arah which do a good job of spelling out concerns about Apple’s authoritarian and increasingly greedy business practices, especially with iOS and the App Store. I would rather be writing up the impact of Linux or Android or open source in education.

However, I will close with my question to Speirs and his answer. What will happen, I asked, when these kids with their experience of iPad computing get jobs and are confronted by offices full of PCs?

“A child that starts this year is going to graduate in 2024,” he replied. “I don’t know what the business environment is going to be like in 2024. I think there will be convergence between iOS and the Mac. I think businesses that stick with the PC infrastructure will not be around in 2024.”

Mono project: no plans for cross-platform WPF

Miguel de Icaza’s report from the Game Developers Conference is upbeat, rightly so in my view, as usage of Mono is continuing to build: not only in game development with Unity, a development tool that uses Mono as its scripting engine, but also in mobile development for Apple’s iOS with MonoTouch and for Android with MonoDroid. These mobile toolkits also give Mono a stronger business model; many sites use Mono to serve ASP.NET applications on Linux without paying for it or contributing back to the project.

Mono is an open source implementation of C# and Microsoft’s .NET Framework.

That said, it is interesting that Mono is still struggling with an issue that has been a problem since its first days: how to implement Microsoft’s GUI (Graphical User Interface) framework on other platforms. Mono does have Gtk# for Windows, Mac and Linux, but this does not meet the goal of letting developers easily port their Visual Studio client projects to Mono. There is also an implementation of Windows.Forms, but de Icaza mentions that “our Windows.Forms is not actively developed.”

Apparently many tools vendors asked the Mono team at GDC when Windows Presentation Foundation (WPF) would be implemented for Mono. WPF is the current presentation framework for Microsoft.NET, though there is some uncertainty about where Microsoft intends to take it. I remember asking de Icaza about this back in 2003, when the WPF framework was first announced (then called Avalon); he said it was too complex and that he did not plan to implement it.

This is still the case:

We have no plans on building WPF. We just do not have the man power to build an implementation in any reasonable time-frame.

That said, Mono has implemented Silverlight – in the form of the Moonlight project – which is based on WPF, and there are some signs that Microsoft might merge WPF and Silverlight. What would the Mono team do then?

Miguel de Icaza says:

Silverlight runs on a sandbox, so you can not really P/Invoke into native libraries, or host DirectX/Win32 content inside of it.
There are other things missing, like menubar integration and things like that.

Of course, this is no longer true on Windows: Platform Invoke is coming in Silverlight 5.

Perhaps the Mono team will knuckle down and implement Silverlight with desktop integration, which would be good for cross-platform Silverlight and compatibility with Microsoft .NET.

Then again, it seems to me that Mono is increasingly divergent from Microsoft .NET, focusing on implementing C# in places that Microsoft does not touch, such as the mobile platforms from Apple and Google.

That is actually a sign of health; and you can understand why the Mono team may be reluctant to shadow Microsoft’s every move with Silverlight and WPF.

Google fails to protect its mobile platform

The discovery of viruses in apps on Google’s Android Market is troubling. I like the fact that Android is open, and that you can easily install an APK (Android Package) from any source onto your device if you want to. That said, it is reasonable to expect that apps downloaded from the official Android Market will be virus-free, or at least that some attempt has been made to check them for malware.

Another problem which is apparently rampant in the Android Market – and also to some extent in Apple’s App Store – is app stealing, where someone takes an existing app, copies it, and re-uploads it under their own account. In most cases it seems that the malware was added to apps pirated in this manner.

Note that while it took Google less than five minutes to pull the malicious apps from the store, the original developer had apparently been trying for more than a week to get them pulled on copyright violation grounds.

Google takes a 30% transaction fee for apps sold in the Market. Enough, you would think, to pay for checking them for malware.

Most seriously for the Android Market, the perception among users is now that apps on Android Market might be malware, whereas apps on Apple’s App Store are not. That is a big advantage for Apple, and one that you would have thought Google would want to counter.

The only winners here are the anti-virus companies, who will be delighted to inflict their subscriptions on mobile users just as they have on Windows desktops.

Apple announces slightly better iPad, world goes nuts

Apple CEO Steve Jobs says the iPad 2, announced today, is “magical, revolutionary and at an unbelievable price”.

The new iPad is dual-core, has front and back cameras, and a new magnetic cover which also forms a stand. It is also 33% thinner and 15% lighter.


These are nice improvements, but the truth is that it will not be very different from the first one.

It was enough though for the press to announce catastrophe for the competition:

Larry Dignan on ZDNet:

Apple just ensured that the other tablets are dead on arrival.

James Kendrick on ZDNet:

Here’s all you need to know about the iPad 2: it’s thinner, lighter, faster, got cameras and is more capable than the iPad, for the same price. Apple had dominated the tablet wars with the original iPad, and with the iPad 2 it is game over … Apple will continue to maintain or grow its market share in the tablet space, and the competition will release tablets that are not as good and cost lots more than the iPad/iPad 2. Rarely does one company in the technology sector dominate a product category so totally as Apple does the tablet space.

I am still mulling this over. There is a lot to like about the iPad – convenience, design, long battery life – but there are also annoyances; and while Dignan and Kendrick may be right, I would like to think there will be healthy competition and that at least some of the interesting devices on show at Mobile World Congress earlier this month will find a market.

Another question is how the appearance of ever more powerful smartphones will influence the tablet market. It is hard to believe that the average person will carry three devices: smartphone, tablet, laptop. Personally I would like to get it down to one, which is why I find the Motorola Atrix an interesting concept: it plugs into a laptop-like external keyboard and screen when required.

Apple’s advantage though is its focus on quality and design, rather than features. Few other manufacturers have learned this lesson. There is always something not quite right; and rather than fix it, they ship a new model six months later with something else not quite right.

There was something else interesting about today’s event: iMovie for iPad 2, priced at $4.99. What is happening to the price of software, and what are the implications for developers? Something I will explore in another post shortly.

Spare a thought for Microsoft. Remember Bill Gates, telling us that one day tablets would dominate portable computing? Fumbling tablet computing may have been Microsoft’s biggest mistake.

Solar charge your mobile: sounds good, but how practical is it?

Charge your mobile for free while out and about, and also do your bit to save energy: the new Freeloader Classic from Solar Technology International has obvious appeal. But how practical is it?


The Freeloader has two solar panels, and measures 123 x 62 x 17mm when folded. After 8 hours in the sun, it can deliver power to an Apple iPhone for 18 hours, a Nintendo DS for 2.5 hours, and an Apple iPad for 2 hours. Take care that it does not walk while your back is turned.


It comes with all sorts of connector tips, and can also be charged via USB in 3 hours in the event that the sun is not shining. For example, if you are in the UK.


While I like the idea of solar charging a mobile device, it is another gadget to pack, and could end up as more of a burden than an asset. Instead of just charging your mobile, you have to think about charging your Freeloader and then charging your mobile. Eight hours in the sun is far from instant.

Still, if you are planning a long hike in a remote part of the world, this could be just what you need.

Update: I have now been sent a Freeloader for review. The good news: the unit looks great. The bad news: initial tests are disappointing. It arrived 75% charged … I left it on a windowsill for several days and by the end it had lost all its charge! I am not giving up though and will report in due course.

Freeloader Classic costs £39.99 including VAT.

What’s in HP’s Beats Audio, marketing aside?

If you are like me you may be wondering what is actually in Beats Audio technology, which comes from HP in partnership with Beats by Dr Dre.

The technical information is not that easy to find; but a comment on this blog directed me to this video:

http://www.precentral.net/what-beats-audio


According to this, it comes down to four things:

1. Redesigned headphone jack with better isolation, hence less ground noise.


2. Discrete headphone amp to reduce crosstalk. This is also said to be “more powerful”, but since we do not know what it is more powerful than, I am not going to count that as technical information.

3. Isolated audio circuitry.

4. Software audio profiles, which I take to mean some sort of equalizer.

These seem to me sensible features, though what I would really like to see is specifications showing the benefits versus other laptops of a comparable price.

There may be a bit more to Beats Audio in certain models. For example, the Envy 14 laptop described here has a “triple bass reflex subwoofer”, though this user was not greatly impressed:

I ran some audio tone test sites and found out the built in laptop speakers do not generate any sound below 200 Hz. In the IDT audio drivers speaker config there is only configuration for 2 speaker stereo system, no 2.1 speaker system (which includes subwoofer). I’m miffed, because on HP advertising copy claims “HP Triple Bass Reflex Subwoofer amplifiers put out 12W total while supporting a full range of treble and bass frequencies.” Clearly I am not getting “full range” frequencies.

Still, what do you expect from a subwoofer built into a laptop?

Don’t be fooled. 24-bit will not fix computer audio

Record producer Jimmy Iovine, now chairman of Interscope and CEO of Beats by Dr Dre, says there are huge quality problems in the music industry. I listened to his talk during HP’s launch event for its TouchPad tablet and new smartphones.

“We’re trying to fix the degradation of music that the digital revolution has caused,” says Iovine. “Quality is being destroyed on a massive scale”.

So what has gone wrong? Iovine’s speech is short on technical detail, but he identifies several issues. First, he implies that 24-bit digital audio is necessary for good sound:

We record our music in 24-bit. The record industry downgrades that to 16-bit. Why? I don’t know. It’s not because they’re geniuses.

Second, he says that “the PC has become the de facto home stereo for young people” but that sound is an afterthought for most computer manufacturers. “No-one cares about sound”.

Finally, he says that HP, working with (no surprise) his own company Beats by Dr Dre, has fixed the problem:

We have a million laptops with Beats audio in with HP … HP’s laptops, the Envy and the Pavilion, actually feel the way the music feels in the studio. I can tell you, that is the only PC in the world that can do that.

Beats Audio is in the TouchPad as well, hence Iovine’s appearance. “The TouchPad is a musical instrument,” says Iovine.

I am a music and audio enthusiast and part of me wants to agree with Iovine. Part of me though finds the whole speech disgraceful.

Let’s start with the positive. It is true that the digital revolution has had mixed results for audio quality in the home. In general, convenience has won out over sound quality, and iPod docks are the new home stereo, compromised by little loudspeakers in plastic cabinets, usually with lossy-compressed audio files as the source.

Why then is Iovine’s speech disgraceful? Simply because it is disconnected from technical reality for no other reason than to market his product.

Iovine says he does not know why 24-bit files are downgraded to 16-bit. That is implausible. The first reason is historical. 16-bit audio was chosen for the CD format back in the eighties. The second reason is that there is an advantage in reducing the size of audio data, whether that is to fit more on a CD, or to reduce download time, bandwidth and storage on a PC or portable player.

But how much is the sound degraded when converted from 24-bit to 16-bit? PCM audio has a sampling rate as well as a bit depth. CD or Red Book quality is 16-bit sampled at 44,100 Hz, usually abbreviated to 16/44. High resolution audio is usually 24/96 or even 24/192.

The question then: what are the limitations of 16/44 audio? We can be precise about this. Nyquist’s theorem says that a 44,100 Hz sampling rate is enough to perfectly recapture a band-limited audio signal whose highest frequency is 22,050 Hz. Human hearing may extend to 20,000 Hz in ideal conditions, but few can hear much above 18,000 Hz and this diminishes with age.

Red Book audio also limits the dynamic range (the difference between the quietest and loudest passages) to 96dB.
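Both figures follow directly from the numbers. The Nyquist limit is simply half the sampling rate, and each bit of depth doubles the number of quantisation levels, which is worth about 6dB of dynamic range. A quick sanity check (ignoring dither and other real-world refinements):

```typescript
// Nyquist limit: the highest frequency a given sampling rate can capture.
const nyquist = 44100 / 2; // 22,050 Hz

// Theoretical dynamic range of linear PCM: 20 * log10(2^bits),
// i.e. roughly 6.02dB per bit.
function dynamicRangeDb(bits: number): number {
  return 20 * Math.log10(Math.pow(2, bits));
}

console.log(nyquist);                        // 22050
console.log(dynamicRangeDb(16).toFixed(1));  // ~96.3dB  (CD / Red Book)
console.log(dynamicRangeDb(24).toFixed(1));  // ~144.5dB (24-bit masters)
```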

In theory then it seems that 16/44 should be good enough for the limits of human hearing. Still, there are other factors which mean that what is achieved falls short of what is theoretically possible. Higher resolution formats might therefore sound better. But do they? See here for a previous article on the subject; I have also done a more recent test of my own. It is difficult to be definitive; but my view is that in ideal conditions the difference is subtle at best.

Now think of a PC or tablet computer. The conditions are far from ideal. There is no room for a powerful amplifier, and any built-in speakers are tiny. Headphones partly solve this problem for personal listening, even more so when they are powered headphones such as the high-end ones marketed by Beats, but that has nothing to do with what is in the PC or tablet.

I am sure it is true that sound quality is a low priority for most laptop or PC vendors, but one of the reasons is that the technology behind digital audio converters is mature and even the cheap audio chipsets built into mass-market motherboards are unlikely to be the weak link in most computer audio setups.

The speakers built into a portable computer are most likely a bit hopeless – and it may well be that HP’s are better than most – but that is easily overcome by plugging in powered speakers, or using an external digital to analog converter (DAC). Some of these use USB connections so that you can use them with any USB-equipped device.

Nevertheless, Iovine is correct that the industry has degraded audio. The reason is not 24-bit vs 16-bit, but poor sound engineering, especially the reduced dynamic range inflicted on us by the loudness wars.

The culprits: not the PC manufacturers as Iovine claims, but rather the record industry. Note that Iovine is chairman of a record company.

It breaks my heart to hear the obvious distortion in the loud passages during a magnificent performance such as Johnny Cash’s version of Trent Reznor’s Hurt. That is an engineering failure.

Microsoft still paying the price for botched Vista with muddled development strategy

Professional Developers Conference 2003. Windows Longhorn is revealed, with three “pillars”:

  • Avalon, later named Windows Presentation Foundation (WPF)
  • Indigo, later named Windows Communication Foundation (WCF)
  • WinFS, the relational file system that was later abandoned

With the benefit of hindsight, Microsoft got many things right with the vision it set out at PDC 2003. The company saw that a revolution in user interface technology was under way, driven by the powerful graphics capabilities of modern hardware, and that the old Win32 graphics API would have to be replaced, much as Windows itself replaced DOS and the command-line. XAML and WPF were its answer, bringing together .NET, DirectX, vector graphics, XML and declarative programming to form a new, rich presentation framework that was both designer-friendly and programmer-friendly.

Microsoft also had plans to take a cut-down version of WPF cross-platform as a browser plugin. WPF/Everywhere, which became Silverlight, was to take WPF to the Mac and to mobile devices.

I still recall the early demos of Avalon, which greatly impressed me: beautiful, rich designs which made traditional Windows applications look dated.

Unfortunately Microsoft largely failed to execute its vision. The preview of Longhorn handed out at PDC, which used Avalon for its GUI, was desperately slow.

Fast forward to April 2005, and Windows geek Paul Thurrott reports on Longhorn progress:

I’m reflecting a bit on Longhorn 5048. My thoughts are not positive, not positive at all. This is a painful build to have to deal with after a year of waiting, a step back in some ways. I hope Microsoft has surprises up their sleeves. This has the makings of a train wreck.

Thurrott was right. But why did Longhorn go backwards? Well, at some point – and I am not sure of the date, but I think sometime in 2004 – Microsoft decided that the .NET API for Longhorn was not working, performance was too bad, defects too many. The Windows build was rebased on the code for Server 2003 and most of .NET was removed, as documented by Richard Grimes.

Vista, as we now know, was not a success for Microsoft, though it was by no means all bad and it laid the foundation for the well-received Windows 7. My point though is how this impacted Microsoft’s strategy for the client API. WPF shipped in Vista, and was also back-ported to Windows XP, but it was there as a runtime for custom applications, not as part of the core operating system.

One way of seeing this is that when Longhorn ran into the ground and had to be reset, the Windows team within Microsoft vowed never again to depend on .NET. While I do not know if this is correct, as a model it makes sense of what has subsequently happened with Silverlight, IE and HTML5, and Windows Phone:

  • Windows team talks up IE9 at PDC 2010 and does not mention Silverlight
  • Microsoft refuses to deliver a tablet version of Windows Phone OS with its .NET application API, favouring some future version of full Windows instead

Note that in 2008 Microsoft advertised for a job vacancy including this in the description:

We will be determining the new Windows user interface guidelines and building a platform that supports it. We’ll eliminate much of the drudgery of Win32 UI development and enable rich, graphical, animated user interface by using markup based UI and a small, high performance, native code runtime.

In other words, the Windows team has possibly been working on its own native code equivalent to XAML and WPF, or perhaps a native code runtime for XAML presentation markup. Maybe this could appear in Windows 8 and support a new touch-oriented user interface.

In the meantime though, Microsoft’s developer division has continued a strong push for .NET, Silverlight and most recently Windows Phone. Look at Visual Studio or talk to the development folk, and you still get the impression that this is the future of Windows client applications.

All this adds up to a muddled development story, which is costly when it comes to evangelising the platform.

In particular, eight years after PDC 2003 there is no clarity about Microsoft’s rich client or RIA (Rich Internet Application) designer and developer story. Is it really WPF, Silverlight and .NET, or is it some new API yet to be revealed, or will IE9 as a runtime play a key role?

There is now a little bit more evidence for this confusion and its cost; but this post is long enough and I have covered it separately.

Appcelerator releases Titanium Mobile 1.6

Appcelerator has released Titanium Mobile 1.6, an update to its cross-platform app framework for Apple iOS and Google Android.

The update adds 26 features for Android and 9 for iOS. The Facebook integration has been completely redone to keep up with the latest Facebook API, and there is beta support for the Android NDK for native code development.
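For anyone who has not tried it, Titanium apps are written in JavaScript against Titanium’s cross-platform API, which maps to native widgets on each platform. A minimal sketch along the lines of the 1.x-era API (treat the details as illustrative rather than definitive):

```typescript
// Titanium exposes its API through a global object; declared here so
// the sketch stands alone outside a real Titanium project.
declare const Titanium: any;

// Create a native window with a centred label; the same code runs on
// both iOS and Android.
const win = Titanium.UI.createWindow({
  title: "Hello",
  backgroundColor: "#fff",
});

const label = Titanium.UI.createLabel({
  text: "Hello from Titanium Mobile",
  textAlign: "center",
});

win.add(label);
win.open();
```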

Support for Android 1.6 is now deprecated and will be removed in future releases.

While not a big release in itself, Titanium Mobile 1.6 is required for the forthcoming Titanium+Plus modules, libraries which add support for features such as barcode reading and PayPal payments.

There is no sign yet of Aptana integration, following the acquisition of this JavaScript IDE in January.

Updating to the 1.6 SDK was delightfully easy on Windows. Just open Titanium Developer and click the prompt.
