Wednesday, 12 June 2013

How Apple's iOS fragmentation problems distort design thinking

As someone who uses both Android and iOS regularly, I'm getting increasingly frustrated by fragmentation. However, it's not on my Android devices that I see this, but on the iOS ones. I install a popular, well-funded application like Instagram, Flickr, or Circa on my iPad, but when I launch it, three-quarters of the screen is black bars, with a teensy little app in the middle. Or I can choose to scale it up without smoothing, so jagged pixels I haven't seen since the 1990s reinforce the sense that I am doing something wrong by attempting to run this app here. Every affordance is pushing back at me, saying I'm doing it wrong.

On the Android Nexus 7, Instagram looks great; on the iPad it looks like a Victorian death notice.

By contrast, on Android, applications scale up to fill the space sensibly - buttons stay button-sized, text and image areas expand well. Developers can add alternative layouts to better handle varying sizes, but even if they don't, things remain legible and touchable.

One hand or on your knees?

More pernicious is the artificial dichotomy that the iOS world leads our design thinking into. You're either on the held-in-one-hand phone, briefly standing somewhere, or you're sitting down in the evening with your iPad (so big and heavy that you have to rest it on your knees - Steve Jobs even brought out a sofa to sit on at the launch). This false 'mobile versus desktop' dichotomy even misled Mark Zuckerberg, who said "iPad's not mobile, it's a computer". At the Facebook Home launch, a tablet version was said to be "months away", though a working version was hacked together by outside programmers within days.

Meanwhile, nobody told Barnes and Noble, whose 7" Nook did so well that Amazon launched a Kindle range the same size, leading to the lovely Nexus 7 from Google and finally to the iPad Mini from Apple. This is the form factor, tested for years by paperback books, that makes one-handed long form reading comfortable. If you spend any time on public transit, being able to read standing up or in narrow bus seats is an obvious benefit. But the hermetically sealed company-coach commuters at Apple missed this for years.

Steve Jobs even said you'd have to file down your fingers to use one. The thing is, on iOS it does feel like that. The iPad-sized apps have buttons that are too small; the iPhone ones are too big when zoomed. There is no way for an app to know how big your finger is relative to the screen, let alone a website.

The supposed advantage of iOS is having fewer sizes to design for, but now you need 12 different layouts to cope with the horizontal and vertical orientations of each device, and the general layout tools don't handle this as well as Android's, requiring complex updates. Chiu-Ki Chan explains the pain, while Android Studio has just made this even easier for developers.

No App is an island

The other fragmentation problem on iOS is the missing links. Not in the evolutionary sense, but the ability to readily link between applications using URLs, as we're used to on the web. Applications can't claim part of a URL, only a whole scheme. As Apple says:

If more than one third-party app registers to handle the same URL scheme, there is currently no process for determining which app will be given that scheme.

On Android, any app can claim any scheme, host and path, and the OS will resolve as appropriate, giving you a choice if more than one possibility is available.
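To make the contrast concrete, here is a toy Python sketch of the two dispatch models. This is an illustration only - it is neither platform's real API, and the registries and function names are invented:

```python
from urllib.parse import urlparse

# iOS-style: an app claims an entire URL scheme. A second claimant
# collides; in this toy registry the last writer simply wins, which
# mirrors Apple's "no process for determining which app" caveat.
ios_handlers = {}

def ios_register(scheme, app):
    ios_handlers[scheme] = app

def ios_open(url):
    return ios_handlers.get(url.split(":", 1)[0])

# Android-style: apps register filters on scheme, host and path prefix,
# and every matching app is offered, so the user can choose.
android_filters = []

def android_register(scheme, host, path_prefix, app):
    android_filters.append((scheme, host, path_prefix, app))

def android_resolve(url):
    parts = urlparse(url)
    return [app for scheme, host, prefix, app in android_filters
            if scheme == parts.scheme and host == parts.netloc
            and parts.path.startswith(prefix)]
```

Register two apps for the same scheme in the first model and one silently shadows the other; register two overlapping filters in the second and `android_resolve` returns both, leaving the choice to the user.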

On Android, you get a choice for links, if there is one.

On iOS, each app ends up embedding web views inside itself rather than linking out to a browser, fragmenting your web history. I have to remember whether I got to a page via Twitter, Facebook or email to go back to it later, and I only get to share it to the iOS-approved places: Twitter, Facebook, email or iMessage. On Android, any installed app that can handle that type of media is an option to share to - or to create a photo, make a phone call, or any of hundreds of "intents" that act as bridges between apps and, through browsers, the web.

This means that Android apps don't end up doing everything, but hand off to each other as needed, so you can cleanly replace any function across apps. Better keyboards are the obvious example here, but camera apps, browsers and SMS apps can drop themselves in too, with the choice made by the user.

On iOS, you have to explicitly link to the scheme, which is per-app. Ironically, this means that Google (or Facebook) can release a set of iOS apps that connect to each other preferentially, leaving the Apple applications out of the loop. But it also makes it harder for developers to specialise in doing one thing well and linking to the rest.

What of the Web?

The other pernicious influence of iOS fragmentation has been the rise of the mobile-only site - the m. layout that was tuned for iPhone use, then slightly adapted for other mobile users, often giving rise to farcical single-column layouts with huge whitespace on tablets. In the early iOS days this was a bonus, as it encouraged function-first design, without the organisation-chart-inspired cruft around the edges that websites accumulate over time. But as the effective resolution of mobile devices has increased, now often exceeding what is available on desktops, the assumptions that drove mobile-specific sites are breaking down.

Now that Android is the dominant operating system, Google is getting serious about it as a web platform too, which is very welcome. The Android Browser used to be installed as part of the OS and never got upgraded over time. This has changed: Chrome is now the default Android browser, and it is on a regular update pipeline like desktop Chrome and Firefox. iOS Safari updates are frequent now too, and Microsoft is pleading with developers to give it modern web apps as well.

Truly responsive design has succeeded mobile-first as the best choice for websites, and we're seeing this spread as browsers converge on HTML5 features. This means the web platform is now evolving rapidly, without any one browser provider being a bottleneck. The installed base for SVG passed Flash a while back, and even Adobe is now backing the web fully, bringing their design know-how to HTML5 features such as regions and exclusions. Also in the pipeline for HTML5 is browser-to-browser audio, video and text chat via WebRTC.

Hoping Apple continues the revolution

This web platform revolution was catalysed by Apple with WebKit and Mozilla with Firefox, and picked up by Google, Microsoft, Adobe and others. We now have a Browser Battle to be more standards compliant and consistent, rather than a Browser War to be different and proprietary. What I'll be hoping for from Apple at next week's WWDC is a clear recognition of these design lacunae, and new and better ways for developers to succeed both with native apps and on the web.

This was also published on TechCrunch.

Monday, 6 May 2013

Finally, some progress in video codecs.

An announcement on Friday via Brendan Eich:

ORBX.js, a downloadable HD codec written in JS and WebGL. The advantages are many. On the good-for-the-open-web side: no encumbered-format burden on web browsers, they are just IP-blind runtimes. Technical wins start with the ability to evolve and improve the codec over time, instead of taking ten years to specify and burn it into silicon.
I think the 'remote-screen viewing of videogames' use case is bogus (if anyone notices latency, it's gamers), but this is a really important development for the reasons Brendan mentions and more.

Nine years ago, I wrote:

I'd say video compression is maybe 2-4 times as efficient (in quality per bit) as it was in 1990 or so when MPEG was standardised, despite computing power and storage having improved a thousandfold since then.

Not much has changed. The video compression techniques we're using everywhere are direct descendants of 1980s signal processing. They treat video as a collection of small 2D blocks that move horizontally and vertically over time, and encode all video this way. If you want to make a codec work hard, you just need to rotate the camera. Partly this is because of the huge patent thicket around video encoding; mostly it's because compression becomes less necessary over time as network capacity and storage increase. However, it was obvious ten years ago that this approach was outdated.
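That 2D-block model is simple enough to sketch. The toy Python below (my illustration, not any real codec's code) does exhaustive block matching by sum of absolute differences, the core of 1980s-style motion compensation. It finds a pure horizontal or vertical shift perfectly - and has no answer at all for rotation:

```python
def best_motion_vector(prev, cur, bx, by, size=4, search=2):
    """Find the (dx, dy) offset into frame `prev` that best matches the
    size x size block of frame `cur` at (bx, by), scored by sum of
    absolute differences (SAD). Frames are lists of lists of pixels."""
    best_sad, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = 0
            for y in range(size):
                for x in range(size):
                    py, px = by + y + dy, bx + x + dx
                    # Candidate offsets that fall off the frame are skipped.
                    if not (0 <= py < len(prev) and 0 <= px < len(prev[0])):
                        sad = None
                        break
                    sad += abs(cur[by + y][bx + x] - prev[py][px])
                if sad is None:
                    break
            if sad is not None and (best_sad is None or sad < best_sad):
                best_sad, best_vec = sad, (dx, dy)
    return best_vec
```

Shift a frame one pixel to the right and this finds the motion vector (-1, 0) exactly; rotate the content slightly and no single translation fits, so the residual error (and the bitrate) balloons.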

Meanwhile, there has been a revolution in video processing. It's been going on in video games, and in movies and TV. The beautiful photorealistic scenes you now see in video games exist because they are textured 3D models rendered on the fly for you. Even the cut scenes work this way, though their encoding is often what compression researchers dismissively call a 'Graduate Student Algorithm' - hand-tweaking the models and textures to play back well within the constraints of the device. Most movies and TV have also been through 3D modelling and rendering, from Pixar through visual effects to the mundane superimposition of yard lines on sports. The proportion of YouTube that is animation, machinima or videogame run-throughs with commentary keeps growing too.

Yet codecs remain blind to this. All this great 3D work is reduced to small 2D squares. A trimesh-and-texture codec seems an obvious innovation - even phones have GPUs in them now, and desktops have had them for 20 years. Web browsers have been capable of complex animation for ages too. Now that they can decode bytestreams to WebGL in real time, we may finally get codecs that are up to the standards we expect from video games, TV and movies, with the added advantage of resolution independence. It's time for this change.

Thursday, 4 April 2013

Forking, Spooning or Knifing?

Reading the tech news this week, there's a lot of talk about forking. Google's Blink forking WebKit. Apple not forking Chromium because that would be hostile. Facebook 'forking' Android. Even Tim O'Reilly forking the memetic nature of Free Software into Open Source.

However, not all of these are really forks, and forking is no longer necessarily a hostile act. Let's go through them. Google's Blink is a fork of WebKit, or rather of WebCore. Alex explains that this is to reduce the amount of time they need to spend merging back to WebKit, but it doesn't preclude anyone continuing to do so if desired. Maciej explained that the reason for the difference in multiprocessing implementations that precipitated this was Apple not wanting to adopt Chromium's approach.

Facebook did not need to fork Android, because it is designed to support substitutable components. You can swap out any OS components, and you can communicate between apps using intents. Indeed, Facebook could make a deal with handset manufacturers or carriers that don't offer the 'with Google' experience to replace it with a Facebook one. Expect to see this in overseas markets, especially the ones where Facebook Zero works with carriers.

The more subtle thing is that forking is no longer pejorative. It used to be a last resort - what you did when your open source community had broken down. It meant that people had to pick sides and choose which fork to adopt, because open source had a hierarchical nature. Now, forking is what you do to show interest. If you go to GitHub, where much open source lives now, forking a project is a single click. Successful projects will have many forks, and will accept pull requests from some of them.

This is the real difference between the Free Software and Open Source worldviews that were debated this week - the web enables more parallel, less centralised forms of co-operation and ownership. The monolithic projects and integrations are giving way to ones with better defined boundaries between them, and the ability to combine components as needed. Which means tech companies don't get to tell each other to "Knife the baby" any more.

Tuesday, 2 April 2013

Hosting and Impermanence

I linked to my boys' old blog today when Christopher asked for an April Fools prank idea, and noticed that the images were missing. This is due to the demise of iDisk, Apple's handy built-in version of Dropbox that I paid them about $100/year for, until they shut it down and broke all my links to it.

To fix this, I copied the old iDisk Sites folder to Google Drive and manually changed the links in the blog posts. Then I had a thought - could I host my Twitter archive this way? As you can see, it works.

Here's how to do it:

  1. Download your Twitter archive by clicking "Request your archive" on the Account Settings page.
  2. Install Google Drive.
  3. Expand the archive and copy it into the local Google Drive folder.
  4. Go to Google Drive in the browser, and set the folder to public using the Share button.
  5. Copy the URL, which will be something like https://drive.google.com/#folders/0B7cAS_dEul22V3d5c0d5WkpEOFE
  6. Edit the first part to be 'googledrive.com/host', so you get https://googledrive.com/host/0B7cAS_dEul22V3d5c0d5WkpEOFE
  7. (Optional) Go over to a URL shortener and make a short link for it, like I did: http://j.mp/kmtweets

Now you have your old tweets hosted on Google. Tweet a link to them.

Friday, 15 March 2013

DTLI Panel on 1201 rulemaking

As I'm in Twitter jail for tweeting too much, here's an old-fashioned liveblog.
Speakers

Rob Kasunic: the DMCA was passed in 1998. The new rule-making started in 2000.
(someone is testing a radio mic on the same frequency. Ironic given White Spaces interference lobby by radio mic users).
Rulemaking was originally designed to be formal, like a courtroom proceeding. It became less formal, and a periodic review of exemptions. The exemptions expire every three years and must be examined again. The 2000 rulemaking was difficult as the provisions weren't yet in force. "It would be nice if legislation could be understood by the general public. Failing that, it would be nice if it was understood by copyright lawyers."
We had to interpret what a "class of works" consisted of, as that was what we could exempt, but it was not defined in the statute.

Marcia Hofmann: I've been involved in all rounds so far. What does a successful argument look like?

Rob Kasunic: Seth Finkelstein documented what he had done to achieve exemptions. Look at what had been successful. Many people who have got them repeatedly were not lawyers. Presenting a very strong factual case is key, compared to making a legal argument. It helps to come to the hearing but that is not a requirement.

Rebecca Tushnet: I work with vidders, a community of remix artists. The hearings feel like Alice in Wonderland - the content people say that screen capture and other tools are circumvention, yet those tools are available; and they say they won't go after fair use, but reserve the right to decide what isn't fair use, so that no exemption is needed.
Vidders are primarily women, working with popular culture, non-commercially. Even though the Copyright Office should represent all creators, as outsiders we have to make this argument in perpetuity. Despite copyright protecting all creations, we need to make a case for creative quality and critical message, which is like explaining opera to non-fans.
The question 'Who gets to say what tools artists can use?' is very difficult. The content people argued that screen capture software was good enough, so cracking encrypted DVDs wasn't needed, despite the generational loss. They said you don't need nice-looking material to use in your art. They also said that if we weren't getting good enough quality from screen capture, we were doing it wrong. Hearing a bunch of guys who don't edit video telling a bunch of women who do that they are doing it wrong is a feature of the proceedings. You have to come back every 18 months and start again, and eventually people give up - like the dongle guy did. The Copyright Office cuts down your proposal each time, and more so if you don't come to the hearings. The burden of proof and the standard required to show that the use is substantial mean you have to break the law in advance to show that it should be legal, which is highly problematic. Bruce Lehman told us about the process of enacting the DMCA. People making the next generation of media don't have lobbyists - they don't even have driver's licences today. They will surprise us like Facebook or Google did. We need to let them surprise us in future.

Christian Genetski: The EFF brought an exemption request for jailbreaking consoles that followed the one for jailbreaking cellphones. As we (the video game manufacturers) prevailed, I see this as a fair process. In the mobile phone case there was a competition issue wrapped up in the DMCA. We made the case that this was different for game consoles, where we were protecting third-party creative works - the games. We didn't question the legitimacy of homebrew and indie games, but said we were trying to promote these consistent with protecting commercial ones. The evidence showed that the vast majority of the tools were used for infringement, not for development or indie distribution.
I don't think the DMCA 1201 rule is broken per se. The use of the statute by creative litigators is not unique to the DMCA; there were other statutes cited in the same complaints.
If we need to adjust to the reality of what is being used, taking a fresh look every 18 months seems like a good idea. This is better than going to Congress and meeting on K Street. Perhaps there is an execution and burden-shifting problem.

Rob Kasunic: Burden-shifting is something we should consider for existing exemptions, to move the burden to opponents. The rule-making is not necessarily the answer to these issues; it's an adjunct to the statute. But for the rule-making process, vidding would all have been unlawful. The Copyright Office is not assessing 'good' works or legitimate art, but non-infringing use. When we use the term 'substantial' we don't mean a higher burden of proof, but the use needs to be more than mere inconvenience or anecdote. With vidding it was questionable whether every use was non-infringing, but there needed to be a sufficient number that were.
Although the 1201 exemptions only apply to the use prohibitions, not the trafficking ones, as passed they mean people can buy illegal tools for a legal purpose, which is very odd.

Rebecca Tushnet: The exemption we got said we could only use circumvention if necessary for sufficient quality. This was an artistic judgment encoded in the exemption.

Granick: TracFone continued to use 1201 against bulk unlockers, as they were not unlocking to connect to a network but to profit by reselling phones, and it has won these cases.

Q: Why would it be up to Congress to change the burden? Couldn't the Copyright Office change this?
Kasunic: There are a lot of expectations from Congress in the lawmaking, even if not in the legislation itself. We're trying to implement what Congress intended. We would be delighted to have Congress give us more information.

Q: Copyright prevents actual copying and derivative works. What if we narrowed it down so derivative works weren't protected? How many problems would go away then?
Kasunic: Many derivative works matter a lot, e.g. movie adaptations. The line between derivative and transformative is a fine one.
Genetski: In the game industry, the expansion packs and sequels are derivative and need to be protected.
Tushnet: people have been thinking hard about this. Substantial similarity has eaten this up. I wrote an article about this.