
Peter O'Shaughnessy

Web technologies and browser-based experiments

The Impossibilities and Possibilities of Google Glass development

In the last few days, Google have released the API - the 'Mirror API' - and developer documentation for Google Glass.

They also have some videos (such as the SXSW talk, plus these) to guide us through the capabilities.

I thought I'd put together a quick list of the Impossibilities and Possibilities for third-party developers (as I see it, from the information so far):

The following are not possible:

‘Apps’

You can't develop 'apps' as such, or install anything on the device itself. But you can develop services that push 'timeline cards' to the user (there's a quick sketch of this at the end of this section). These cards can contain small amounts of text, HTML, images, or a map, but there's no scrolling, JavaScript, or form elements.

Update: This isn't quite true! It turns out it is possible for techies to install Android APKs - by plugging the device in via USB and enabling debug mode, on the Explorer version at least. See this post by Mike DiGiovanni:

https://plus.google.com/116031914637788986927/posts/Abvh8vmvPJk
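
Back on the official route, a timeline-card service is just your own web service making REST calls to the Mirror API. Here's a minimal sketch of inserting a card, assuming you've already obtained an OAuth access token for the glass.timeline scope (the token value below is only a placeholder):

    import requests

    # Placeholder - you'd need a real OAuth 2.0 access token authorised for
    # the https://www.googleapis.com/auth/glass.timeline scope.
    ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"

    # A timeline card: a small chunk of HTML, no scripting or form elements.
    card = {
        "html": "<article><section><p>Hello from my Glassware!</p></section></article>",
        "notification": {"level": "DEFAULT"},  # nudge the user when the card arrives
    }

    response = requests.post(
        "https://www.googleapis.com/mirror/v1/timeline",
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        json=card,
    )
    response.raise_for_status()
    print("Inserted card:", response.json()["id"])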

Real-time picture/video or voice integration

It's only possible to tap into a user's images and video if they choose to share them through your service, after they've been taken. And it doesn't seem possible for third-party developers to do anything with voice input. “At the moment, there doesn’t appear to be any support for retrieving a camera feed or an audio stream” (source)

Update: Except if you root it, of course! See:

http://arstechnica.com/security/2013/05/rooting-exploit-could-turn-google-glass-into-secret-surveillance-tool/

AR

Early discussions about Google Glass kept referring to it as an AR (augmented reality) device. It's not really AR at all. It doesn't give you the capability to augment the user's real-world view, except indirectly, through the small, fixed screen. (It's actually less of an AR device than a mobile phone held up in front of your face.)

Web browsing

“Users don’t browse the web on Glass (well, they can ask questions to Google but there is no API for that yet)” (Max Firtman)

Notifications

“We push, update and delete cards from our server, just for being there if the user thinks it’s time to see the timeline. It’s probable that our card will never be seen by the user… It’s not like a mobile push notification.” (Max Firtman)
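
To illustrate the point: your server manages cards with plain REST calls against the Mirror API, roughly as in this sketch (the item id and token are placeholders), and whether the user ever actually glances at the result is entirely up to them:

    import requests

    ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"   # placeholder OAuth token (glass.timeline scope)
    ITEM_ID = "example-timeline-item-id"  # hypothetical id returned when the card was inserted
    URL = "https://www.googleapis.com/mirror/v1/timeline/" + ITEM_ID
    HEADERS = {"Authorization": "Bearer " + ACCESS_TOKEN}

    # Update the card in place - PATCH merges just the fields we send.
    requests.patch(URL, headers=HEADERS, json={"text": "Updated headline"}).raise_for_status()

    # Delete the card from the user's timeline when it's no longer relevant.
    requests.delete(URL, headers=HEADERS).raise_for_status()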

Eye-tracking

Early unofficial reports said there would be a second camera facing towards you, for eye tracking. From the official tech specs, it seems that’s not the case.

Update: I was right first time - it's not mentioned in the tech specs (maybe they just don't want to shout about it much right now?) but there's definitely an eye-tracking camera - that's what enables 'Winky':

http://arstechnica.com/gadgets/2013/05/google-glass-developer-writes-an-app-to-snap-photos-with-just-a-wink/

Location, unless paired with an Android 4+ phone

It was popularly reported that Glass would work with phones other than Android. But MyGlass, which provides the GPS and SMS capability, requires Android ICS (4.0) or higher (source)

Direct revenue

There’s no charging for timeline cards, no payment for virtual goods or upgrades, and no advertising (source)

So what kind of services are feasible?

Services for often-updated content

To provide short snippets of content that the user will want to glance at frequently, to see the latest. For example, news headlines.

Update: You can also have short amounts of content read out for the user, using the “read-aloud” feature. See:

http://thenextweb.com/media/2013/04/25/the-new-york-times-releases-a-google-glass-app-that-reads-article-summaries-aloud/
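
As I understand it, under the Mirror API this is just extra properties on the timeline item: you give the card some speakableText and a READ_ALOUD menu item. A rough sketch, with a placeholder token and made-up content:

    import requests

    ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"  # placeholder OAuth token (glass.timeline scope)

    card = {
        "html": "<article><section><p>Markets rally as fears ease</p></section></article>",
        # The text Glass speaks when the user picks "Read aloud" from the card's menu.
        "speakableText": "Markets rally as investors shrug off fears of a slowdown.",
        "menuItems": [{"action": "READ_ALOUD"}],
    }

    requests.post(
        "https://www.googleapis.com/mirror/v1/timeline",
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        json=card,
    ).raise_for_status()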

Location services

To provide advice/information about nearby locations. For example, travel information or tourist destination tips.
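
With the user's permission (the glass.location scope), the Mirror API will also tell your service the wearer's last known position, which is what makes this kind of service possible. A minimal sketch, again with a placeholder token:

    import requests

    ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"  # placeholder token with the glass.location scope

    # Fetch the most recent location Glass has reported for this user.
    response = requests.get(
        "https://www.googleapis.com/mirror/v1/locations/latest",
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    )
    response.raise_for_status()
    location = response.json()
    print("Wearer last seen near:", location.get("latitude"), location.get("longitude"))
    # From here you'd look up nearby travel updates or tourist tips
    # and push them back as timeline cards.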

Share services

For sharing your photos and video with your friends. Or sharing them with services (automated or otherwise) that can process them and send you something back.
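
Appearing as a share target seems to boil down to registering a 'contact' that accepts the relevant media types, then subscribing to timeline notifications so your server hears about items shared to it. Something like this sketch - the id, icon URL and callback URL are all made up:

    import requests

    ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"  # placeholder OAuth token (glass.timeline scope)
    HEADERS = {"Authorization": "Bearer " + ACCESS_TOKEN}

    # Register a share target ("contact") that accepts photos.
    contact = {
        "id": "my-photo-service",                       # hypothetical identifier
        "displayName": "My Photo Service",
        "imageUrls": ["https://example.com/icon.png"],  # hypothetical icon
        "acceptTypes": ["image/jpeg", "image/png"],
    }
    requests.post("https://www.googleapis.com/mirror/v1/contacts",
                  headers=HEADERS, json=contact).raise_for_status()

    # Ask to be notified when items land in the user's timeline,
    # so our server can fetch shared attachments and do something with them.
    subscription = {
        "collection": "timeline",
        "callbackUrl": "https://example.com/mirror/notify",  # hypothetical HTTPS endpoint
        "operation": ["INSERT"],
    }
    requests.post("https://www.googleapis.com/mirror/v1/subscriptions",
                  headers=HEADERS, json=subscription).raise_for_status()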

Simple communication / social networking

It's possible not just to consume third-party content, but also to reply with text or respond with selections. So reading and creating emails, text messages, Facebook status updates, tweets… should all be possible.
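
Replies appear to work through the same card machinery: attach a REPLY menu item to the card, and the user's spoken response comes back to your server as a new timeline item (delivered via a subscription like the one sketched above). For example:

    import requests

    ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"  # placeholder OAuth token (glass.timeline scope)

    # A card the user can reply to by voice, or simply dismiss.
    card = {
        "text": "New message from Alice: fancy lunch later?",
        "menuItems": [
            {"action": "REPLY"},   # built-in voice reply
            {"action": "DELETE"},  # built-in dismiss/delete
        ],
        "notification": {"level": "DEFAULT"},
    }

    requests.post(
        "https://www.googleapis.com/mirror/v1/timeline",
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        json=card,
    ).raise_for_status()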

To summarise…

The possibilities for third-party developers are more limited than many hoped. But there's still an exciting amount to explore. And remember, this is the very first API for the very first commercial device of its kind. (Compare it to the first version of the iPhone, which didn't have an SDK or an App Store.)

To quote Timothy Jordan, “It’s early days… We’re really just getting started”.

wearables future-tech