The Impossibilities and Possibilities of Google Glass development
In the last few days, Google have released the API and developer documentation for Google Glass.
They also have some videos (such as the SXSW talk, plus these) to guide us through the capabilities.
I thought I’d put together a quick list of the Impossibilities and Possibilities for third-party developers (as I see it, from the information so far):
The following are not possible:
Update: This isn’t quite true! It turns out it is possible for techies to install Android APKs - by plugging the device in via USB and enabling debug mode, on the Explorer edition of the device at least. See this post by Mike DiGiovanni.
Real-time picture/video or voice integration
It’s only possible to tap into a user’s images and video if they choose to share them through your service, after they’ve been taken. And it doesn’t seem possible for third-party developers to do anything with voice input. “At the moment, there doesn’t appear to be any support for retrieving a camera feed or an audio stream” (source)
Update: Except if you root it, of course!
Early discussions about Google Glass kept referring to it as an AR device. It’s not really AR at all. It doesn’t give you the capability to augment the user’s real-world view, except indirectly, through the small, fixed screen. (It’s actually less of an AR device than a mobile phone held up in front of your face).
“Users don’t browse the web on Glass (well, they can ask questions to Google but there is no API for that yet)” (Max Firtman)
“We push, update and delete cards from our server, just for being there if the user thinks it’s time to see the timeline. It’s probable that our card will never be seen by the user… It’s not like a mobile push notification.” (Max Firtman)
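In practice, pushing a card is a single authenticated HTTP POST of a JSON “timeline item” to the Mirror API. Here’s a rough sketch of what that request looks like - the helper names are my own, and ACCESS_TOKEN would come from the usual Google OAuth 2.0 flow:

```python
import json

MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_card(text):
    # Minimal JSON body for a plain-text card; the "notification"
    # field makes Glass chime when the card arrives.
    return {
        "text": text,
        "notification": {"level": "DEFAULT"},
    }

def insert_card_request(text, access_token):
    # Returns the pieces of the HTTP call (url, headers, body);
    # actually sending it is one POST with e.g. urllib or the
    # google-api-python-client library.
    headers = {
        "Authorization": "Bearer " + access_token,
        "Content-Type": "application/json",
    }
    return MIRROR_TIMELINE_URL, headers, json.dumps(build_timeline_card(text))
```

Note there’s no direct connection to the device: you hand the card to Google’s servers, and Glass syncs the timeline when it sees fit - which is exactly why your card may never be seen.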
Early unofficial reports said there would be a second camera facing towards you, for eye tracking. From the official tech specs, it seems that’s not the case.
Update: The early reports were right after all - it’s not mentioned in the tech specs (maybe they just don’t want to shout about it much right now?), but there’s definitely an eye-tracking camera - that’s what enables ‘Winky’, Mike DiGiovanni’s wink-to-take-a-photo app.
Location, unless paired with Android 4+ phone
It was popularly reported that Glass would work with non-Android phones too. But MyGlass, the companion app that provides the GPS and SMS capability, requires Android 4.0 (ICS) or higher (source)
There’s no charging for timeline cards, no payment for virtual goods or upgrades, and no advertising (source)
So what kind of services are feasible?
Services for often-updated content
To provide short snippets of content that the user will want to glance at frequently, to see the latest. For example, news headlines.
Update: You can also have short pieces of content read out to the user, using the “read-aloud” feature.
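Read-aloud hangs off the same timeline-card mechanism: the card carries a longer spoken version in the item’s speakableText field, surfaced to the user through a READ_ALOUD menu item (field and action names are from the Mirror API timeline reference; the function itself is my own sketch):

```python
def build_read_aloud_card(headline, spoken_text):
    # "text" is the short version the user glances at;
    # "speakableText" is the longer version Glass reads out
    # when they pick the "Read aloud" menu item.
    return {
        "text": headline,
        "speakableText": spoken_text,
        "menuItems": [{"action": "READ_ALOUD"}],
    }
```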
To provide advice/information about nearby locations. For example, travel information or tourist destination tips.
For sharing your photos and video with your friends. Or sharing them with services (automated or not) that can do something with them and send you something back.
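To receive shared photos, a service registers itself as a “contact” - a share target - with the Mirror API; when the user shares a photo to it, Google notifies the service’s server. A sketch of the contact body, with field names per the Mirror API contacts resource (the service name, id and icon URL here are made up for illustration):

```python
MIRROR_CONTACTS_URL = "https://www.googleapis.com/mirror/v1/contacts"

def build_share_contact():
    # This JSON, POSTed once to MIRROR_CONTACTS_URL with an OAuth
    # token, makes "Photo Fixer" (a hypothetical service) appear as
    # an option in the user's share menu on Glass.
    return {
        "id": "photo-fixer",
        "displayName": "Photo Fixer",
        "imageUrls": ["https://example.com/icon.png"],
        "acceptTypes": ["image/jpeg", "image/png"],
    }
```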
Simple communication / social networking
It’s possible not just to consume third-party content, but to reply with text or respond with selections. So reading and creating emails, text messages, Facebook status updates, tweets… should all be possible.
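Replies work the same push-and-wait way: attach a REPLY menu item to the card, and register a subscription so Google can POST the transcribed voice reply back to your server (both per the Mirror API documentation; the callback URL below is a placeholder):

```python
def build_reply_card(text):
    # The REPLY menu item lets the user dictate a voice reply,
    # which Google transcribes and delivers to our subscription.
    return {
        "text": text,
        "menuItems": [{"action": "REPLY"}],
    }

def build_timeline_subscription(callback_url):
    # POSTed once to https://www.googleapis.com/mirror/v1/subscriptions;
    # Google then notifies this URL of timeline events such as replies.
    # The callback must be served over HTTPS.
    return {
        "collection": "timeline",
        "callbackUrl": callback_url,
    }
```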
The possibilities for third-party developers are more limited than many hoped. But there’s still an exciting amount to explore. And remember, this is the very first API for the very first commercial device of its kind. (Compare it to the first version of the iPhone, which didn’t have an SDK or an App Store).
To quote Timothy Jordan, “It’s early days… We’re really just getting started”.