Because Google Asked For It: Project Glass Suggestions

No matter how many times I watch this video, I still have a hard time believing that Project Glass will someday be a real, tangible product that I can strap to my face. A heads-up display is pretty much the holy grail of computing interfaces. If Google can deliver, it'll be mind-blowing. There's just so much to talk about. How often does an entirely new computing form factor come along?

In fact, you could probably say that a HUD is the last UI form factor. Computer-augmented vision is as integrated with a computer as you can ever get. Sure, the tech would eventually move from glasses to contacts to a brain implant, but the UI would essentially be the same.

And boy, is this a wild form factor. The keyboard-and-mouse to touchscreen transition is pretty straightforward: Touch input maps to mouse commands fairly easily, and you even get a virtual keyboard. A HUD is an entirely different matter. There is no keyboard and mouse. You don't get a touchscreen either. You're pretty much going to have to throw out the last 40 years of computer interface concepts. Your primary forms of input are now your voice, your vision (via a camera), and some accelerometers in the glasses. Scary.

A total reinvention of the computer is a big deal, and a huge challenge. I'm sure it's with this in mind that Google has put out an open call for Project Glass ideas. Every time the video is posted to G+, it's always accompanied by the same closing question: "What would you like to see from Project Glass?"

What would I like to see from Project Glass? Oh, about a million things. To the mockup machine!

The Killer App: Facial Recognition

Facial recognition is the killer app for AR glasses. Anytime the camera detects someone in your contacts list, it could display the appropriate, customizable information. Just think about the possibilities there for a moment.

How much could something like this change your life? Are you bad with names (like me)? Well, you aren't anymore. Do you always forget to ask that guy about that thing, only to remember 10 minutes after you've said "goodbye"? Just set a reminder for the next time you (literally) see that person.

Facial recognition could connect all the info normally hidden away in a smartphone to the real world. You could surface some relevant info, like your last text/IM with that person (for the forgetful), or their latest status update. If you're at a loss for something to talk about, you can casually browse their social network feed.

Professionals would be more interested in someone's name, employer, and title. Just something simple like that would instantly be +1000 to networking skills. Again, it would need to be customizable, maybe even with something like G+ circles. For good friends, you wouldn't want their name and employer popping up every time you look at them; that would be for the "professional acquaintances" circle.
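
If you'll indulge a little daydream code, here's a minimal sketch (in Python, with entirely made-up data structures; nothing here is from Google) of how circle-based card rules might work:

```python
# Hypothetical sketch: choosing what a recognized person's "card" shows,
# based on which (invented) circle the person belongs to.

# Each circle maps to the fields its members' cards are allowed to display.
CIRCLE_FIELDS = {
    "professional acquaintances": ["name", "employer", "title"],
    "friends": ["reminders", "last_message"],
    "family": ["reminders"],
}

def build_card(person: dict, circles: dict) -> dict:
    """Return only the fields this person's circle allows on screen."""
    for circle, members in circles.items():
        if person["id"] in members:
            fields = CIRCLE_FIELDS.get(circle, [])
            return {f: person[f] for f in fields if f in person}
    return {}  # unrecognized or uncircled: show nothing

# Example: Paul is a professional acquaintance, so his card shows
# name/employer/title but keeps your personal reminders hidden.
paul = {"id": "paul", "name": "Paul", "employer": "Initech",
        "title": "Engineer", "reminders": ["ask about the thing"]}
circles = {"professional acquaintances": {"paul"}}
print(build_card(paul, circles))  # {'name': 'Paul', 'employer': 'Initech', 'title': 'Engineer'}
```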

This picture is for the fun, consumer scenario, but just imagine how facial recognition + a database query could revolutionize customer service. A repeat customer walks into an establishment (retail store, doctor's office, you name it) and the employee instantly knows that person's name and their history with that establishment, even if that employee has never seen that person before.

This is the perfect blend of human and computer. Forgetting things is a purely human problem; computers always remember. Allowing me to tag people (and places) with information fills a big gap in human ability, and it perfectly emulates how our brains work. Normally, looking at (or talking to) Paul would make me remember information about Paul, so displaying "Paul" information when I'm talking to him would be perfectly natural.

Google Goggles Tech: Text Recognition And Image Search


I've suspected for a while that Google Goggles was all about teaching a computer to see, and now it's time to use it. Not having a keyboard is going to be a serious problem for data entry, and voice isn't ideal for every scenario.

Imagine a professional meeting someone for the first time and wanting to set up the facial recognition feature for this new person. The new guy hands over a business card, but speaking all the information into your glasses would be time-consuming and fraught with errors (voice typing is never going to nail "Shailesh Nalawadi"). Goggles has the technology, right now, to scan a business card and input all that delicious data into your Android contacts. So, in an ideal world, the glasses could auto-detect the new face while you're talking, and after the interaction is over, a casual snap of the business card means this new person is now in your "memory" forever. The next get-together could be six months down the line, but you would still "remember" Shailesh.
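
For the curious, here's a rough sketch of the parsing half of that flow, assuming some Goggles-like OCR has already turned the card photo into plain text. The field-guessing heuristics (and the example card) are mine, not Google's:

```python
import re

# Sketch of the "scan a business card" step: take OCR'd text and guess
# which line is which contact field. Illustrative heuristics only.

CARD_TEXT = """Shailesh Nalawadi
Product Manager, Example Corp
shailesh@example.com
+1 650-555-0199"""

def parse_card(text: str) -> dict:
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    contact = {"name": lines[0]}           # assume the name comes first
    for line in lines[1:]:
        if re.search(r"\S+@\S+\.\S+", line):
            contact["email"] = line        # looks like an email address
        elif re.search(r"[\d\-\+\(\) ]{7,}", line):
            contact["phone"] = line        # long run of digits/punctuation
        else:
            contact["title"] = line        # whatever's left: title/employer
    return contact

print(parse_card(CARD_TEXT))
```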

Goggles also aims to answer the question "Hey, what is that?" just by aiming a camera at something. When said camera is attached to your head, that ability becomes a lot more convenient and useful. Goggles was made for a device like this. Sell a bunch of Project Glass devices and suddenly Goggles becomes Google's most important product, much like smartphones suddenly made mobile one of Google's most important platforms. Up until now Goggles has been a side project, but now is the time to throw more manpower at it and make the whole world searchable.

Augmented Reality And Dealing With Information Overload

The biggest missing feature in the video (other than facial recognition) was the complete lack of augmented reality. When looking through computerized glasses, the world should be a seamless blend of real and virtual, but the video showed a regular world with a flat, 2D computer space on top of it. Anytime the glasses displayed information, they did so front and center, in a big, white box. A heads-up display can be so much more than this.

Maps is an excellent example of "doing it wrong." An overview map is fine, initially, but if I'm trying to navigate somewhere, why not paint the navigation line on the sidewalk?

Navigation the way it's supposed to be done. I also added time and distance to the popup, because, why not?
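
For the math-inclined, painting that line is mostly just projective geometry: take ground-level waypoints along the route and project them into the display with a pinhole camera model. Here's a toy sketch; the eye height and focal length are numbers I made up for illustration:

```python
# Toy version of "painting the line on the sidewalk": project ground-level
# waypoints into screen space with a pinhole camera model.

EYE_HEIGHT = 1.6   # meters above the sidewalk (assumed)
FOCAL_PX = 800     # focal length in pixels (assumed)

def project(x: float, z: float):
    """Map a ground point (x meters right, z meters ahead) to screen pixels."""
    if z <= 0.1:
        return None                    # behind or practically under the wearer
    u = FOCAL_PX * x / z               # horizontal offset from screen center
    v = FOCAL_PX * EYE_HEIGHT / z      # distance below the horizon line
    return (round(u), round(v))

# A straight 20 m route: distant points climb toward the horizon and bunch
# together, which is exactly what a painted line on pavement looks like.
for z in (2, 5, 10, 20):
    print(z, "m ahead ->", project(0.0, z))
```

Connect the projected points and you have your line on the sidewalk; the glasses would just need to redo this every frame as your head moves.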

Real and virtual should live side-by-side, in 3D. It's not just incredibly cool; it's a much less disruptive way to display information. Take my facial recognition mockup, for example. It would be annoying to have a box like that constantly in your face, but imagine if it "floated" next to Paul's head. So if he were farther away, it would look like this:

Paul is farther away from you, thus the information about Paul is less important. Since the info is anchored to his head, it automatically becomes smaller and maybe even more transparent. If you turn and look away from Paul, the white box leaves your view completely. This would automatically and naturally solve a lot of the information overload and popup-swatting depicted in the Project Glass parody videos, and it allows the UI to "offer" information in a less intrusive way than a popup. Not everything needs to be "pinned" to my vision.
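
A quick sketch of that floating-card behavior, with thresholds I invented on the spot:

```python
# Scale and fade Paul's info card with his distance, and drop it entirely
# once he leaves your field of view. All numbers are made up.

def card_appearance(distance_m: float, angle_off_gaze_deg: float):
    if abs(angle_off_gaze_deg) > 40:    # Paul isn't in view anymore
        return None
    scale = min(1.0, 2.0 / distance_m)  # smaller the farther away he is
    alpha = max(0.2, 1.0 - distance_m / 15.0)  # more transparent with range
    return {"scale": round(scale, 2), "alpha": round(alpha, 2)}

print(card_appearance(2.0, 5))    # close and centered: full size and opacity
print(card_appearance(10.0, 5))   # far away: small and faint
print(card_appearance(3.0, 60))   # looked away: card disappears -> None
```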

Microsoft Office does something similar to this with the Mini Toolbar. It fades in when you highlight text, and its opacity varies with the mouse cursor's distance from it. Office basically knows what you're looking at via the mouse position, and tones down the toolbar if it doesn't look like you want to interact with it. Project Glass should work the same way: it should use all the information available to it to figure out whether I deem something important, and adjust the UI accordingly.

For instance, in the Navigation scenario, if I stop and grab a hot dog from a vendor, or stop to read a poster on a building, the camera should be able to tell that I've stopped looking towards my destination, and the navigation instructions should fade out until I look down the road again. How well Project Glass is received will depend greatly on its ability to know when I want to see information and when I don't. Anchoring information to things in the real world seems like an easy and natural way to deal with UI focus.
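
Here's roughly what that attention test could look like in code. The 90-degree falloff is my own guess, not anything from the video:

```python
# Attention-aware navigation, Mini Toolbar style: fade the directions out
# as the wearer's heading strays from the route. Numbers are assumptions.

def nav_opacity(heading_deg: float, bearing_to_route_deg: float) -> float:
    """1.0 when looking down the route, 0.0 when facing well away."""
    # Smallest angle between where you're facing and where the route goes.
    diff = abs((heading_deg - bearing_to_route_deg + 180) % 360 - 180)
    return max(0.0, 1.0 - diff / 90.0)  # fully faded past 90 degrees off

print(nav_opacity(0, 0))     # 1.0 -> walking toward the destination
print(nav_opacity(45, 0))    # 0.5 -> glancing at a poster
print(nav_opacity(120, 0))   # 0.0 -> stopped for a hot dog
```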

Blending the real world with virtual objects also opens up a great interface for "offering" information. I don't necessarily want my shopping list blasting into my eyeballs every time I walk by a supermarket, but I don't want to have to dig through a note app either. Imagine my virtual shopping list "hanging" on the door, the same way a promotional poster would. If I want my shopping list, I can grab it on my way in and pin it to my vision. If I want to ignore it, just do nothing; ignore it the same way you would any other poster. It's a great way for the glasses to ask "Hey, do you want this possibly relevant information?" without the user having to explicitly dismiss it. Again, this is a perfect combination of human and computer, because "Shopping list enumeration" is usually the first thing on everyone's mind when they walk into a supermarket.
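
A minimal sketch of that "offer" logic, with a made-up store location and trigger radius:

```python
from math import hypot

# The "poster on the door" idea: a note is offered (not pinned) when you're
# near the place it's tagged to. Coordinates and radius are hypothetical.

SUPERMARKET = (125.0, 40.0)   # made-up map coordinates of the store door
OFFER_RADIUS = 30.0           # meters: close enough to "see the poster"

def offered_notes(position, notes):
    """Return notes anchored near the wearer, to render like wall posters."""
    out = []
    for note in notes:
        ax, ay = note["anchor"]
        if hypot(position[0] - ax, position[1] - ay) <= OFFER_RADIUS:
            out.append(note["title"])
    return out

notes = [{"title": "Shopping list", "anchor": SUPERMARKET}]
print(offered_notes((110.0, 35.0), notes))  # nearby -> ['Shopping list']
print(offered_notes((500.0, 35.0), notes))  # far away -> []
```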

Most of this should only require Kinect-level intelligence, but there's always the possibility that none of this was in the video because it's just too hard. Project Glass v1 might not have the processing power (or battery life) to decipher world geometry at 60fps. That's not really conducive to blogging though, or any fun. If version 1 really is that limited, file this section under "version 2."

An Easy Way To Share Information In Real Life

As currently designed, these things are going to kill casual, in-person information sharing. I'm not carrying a screen around anymore, so "Hey, look at this" means I have to pull the glasses off my head and hand them to my friend. I'm not going to do that. "Settings -> Bluetooth -> Search for devices -> enter PIN" isn't going to fly either; my cat picture isn't that important. And no, I don't want to do it through Google+ (or email).

Information transfer between two headsets needs to be as simple as "handing a phone over" is now. Anyone on my sharing whitelist should be able to virtually "pass" me a picture or website with very little friction. Even something as simple as finding a friend in my contact list and IMing them seems stupid when they're right next to me.
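
Something like this, where the only rule is "is the sender on my whitelist?" (the message format here is invented):

```python
# Frictionless headset-to-headset sharing: no pairing dance, no prompts.
# Whitelisted senders go straight to the inbox; everyone else is dropped.

SHARING_WHITELIST = {"paul", "sam"}

inbox = []

def receive(item: dict) -> bool:
    """Accept an item only if the sender is whitelisted."""
    if item["sender"] in SHARING_WHITELIST:
        inbox.append(item)
        return True          # shows up instantly, no "Accept?" dialog
    return False             # silently ignored

receive({"sender": "paul", "type": "photo", "uri": "local://cat.jpg"})
receive({"sender": "randomguy", "type": "photo", "uri": "local://spam.jpg"})
print(inbox)   # only Paul's cat picture made it through
```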

Speaking of sharing, ever see the Microsoft Holodesk? It lets you "hold" virtual objects using just a transparent screen, a Kinect, and some software magic. Handing over a virtual object would be way cooler than "Do you accept this item from Paul?" These are HUD glasses, after all; feel free to crank the sci-fi factor up a notch.


Well, that's about all I can think of for right now. It sure is fun to imagine an interface like this, isn't it? Hopefully we'll hear more about Project Glass soon. And, hey Googlers! If you ever need a beta tester, just let me know. =D

Thanks for reading.

-Ron

8 comments:

  1. I recommend that you read the Daemon series by Daniel Suarez. In the stories there is a complete description of an interface like this. In addition to the augmented reality display there is the interaction of the user with the display via hand gestures. The sharing issue is also addressed in those stories through public and private view layers of the digital dimension.

    1. oooo. Ok. Will do. I've been glued to Dennou Coil for the last few days too.

    2. The series is AMAZING, and when I read that Google was creating this I thought of the possibilities... the D-space concept will turn the world on its ear!

    3. I always thought that something like this should come coupled with a wrist strap (with optional vibrate). That way, when you look at your wrist, the glasses show you a Mass Effect-style overlay with a keyboard.

      You won't be writing a novel on it, but it would help for when voice isn't necessarily the best option.

  2. I want real-time text translations. Say you're in a foreign country and you see a sign that's normally unreadable to you; the glasses would automatically translate the text, showing the translation on the sign rather than the original text.

    1. Yes! I think there's even an iPhone app that's supposed to do this. http://www.youtube.com/watch?v=h2OfQdYrHRs

      Now we're talking.

    2. And after further thought, why not have the glasses present a keyboard layout when appropriate? Say you wanna type an email rather than dictate it while you're sitting eating lunch. So the glasses overlay a keyboard on the table and, using some Kinect-like software, detect where your fingers tap. And, since the purchase of that awesome adaptive keyboard software, they already have an edge for that type of thing.

  3. There are so many possibilities. I keep thinking of new ideas...

    Imagine playing a game of golf with these. It would give you yardage on every shot, be able to keep score automagically, and could possibly recognize your playing partners. It may even be able to capture the spin of the ball on contact, and then give you tips on how not to slice next time. A virtual helper would talk to you if you're on a downhill lie, and could even recommend a club based on past experience.
