I find myself obsessing over the potential of wearable augmented reality tools such as Google Glass. I can't wait for the hardware to drop in cost, because I can see so many applications for this technology.

I also see several SaaS opportunities that will be valuable to app developers building for these new wearable augmented reality devices. We are on the verge of being able to turn our entire world into a digital interface, and I think I see some ways to innovate.

A brief history of interfaces:

Static 1-way interfaces:

Think about old TV remotes and garage door openers. You pressed a button and something happened. Each button had one function, and the interface could not give you feedback.

Static 2-way interfaces:

Think about that calculator you had in high school. Each button had a specific function or two. You pressed a button and got a limited amount of feedback.

Liquid interfaces:

The iPhone. Yeah, there were others before it, but the iPhone symbolized the first device where the input interface changed dynamically according to the functionality you were trying to achieve. There were no more static buttons (besides power, volume, etc.). This expanded our capability to interact exponentially.

Liquid interfaces + other sensors:

Orientation, speech recognition, and even location now affect which interface shows up. As you pull into your driveway, your phone knows to show your home controller so you can pop open that garage door without needing to browse to the home app. It's all automatic.

What is next?

You will be able to look at an object in the house and the interface for controlling that object will pop up. Just imagine looking at the thermostat or a light switch and having an "on/off/dim" interface appear that you can then control from your Google Glass or other wearable.

The problem: Precision

Have you ever tried to use GPS indoors? It can easily be 60 to 1,000 feet off. For a phone or tablet: oh well, right? Now imagine you are trying to access a light switch in your bedroom, 6 feet away, without getting out of bed, and for some reason the kitchen blender's interface keeps popping up. That is a deal breaker.

The need for accuracy increases drastically when a device uses augmented reality to display the interface, because only a couple of degrees of head tilt can completely change which device is selected. Without precision we cannot move forward.
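
To make the problem concrete, here is a little Python sketch (nothing Glass-specific; the positions and device names are made up) of gaze-based selection: pick whichever device lies closest to the direction you are looking. With an accurate position it grabs the bedroom light; feed it a position fix that is 60 feet off and it happily grabs the blender.

```python
# Toy sketch of why position precision matters for gaze-based selection.
# Positions are in feet, 2D for simplicity.
import math

DEVICES = {
    "bedroom_light": (6.0, 0.0),     # 6 ft in front of the bed
    "kitchen_blender": (30.0, 12.0), # across the house
}

def select_device(user_pos, gaze_dir, devices):
    """Return the device whose direction is closest to the gaze ray."""
    gx, gy = gaze_dir
    best, best_angle = None, float("inf")
    for name, (dx, dy) in devices.items():
        vx, vy = dx - user_pos[0], dy - user_pos[1]
        dist = math.hypot(vx, vy)
        if dist == 0:
            continue
        cos_a = (vx * gx + vy * gy) / (dist * math.hypot(gx, gy))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle < best_angle:
            best, best_angle = name, angle
    return best, best_angle

# Lying in bed at the origin, looking straight down the +x axis at the light.
print(select_device((0.0, 0.0), (1.0, 0.0), DEVICES))    # bedroom_light
# Same gaze, but the position fix is ~60 ft off (typical indoor GPS error).
print(select_device((-40.0, 45.0), (1.0, 0.0), DEVICES)) # picks the blender instead
```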

The opportunities:

Luckily, we are entrepreneurs and hackers. Instead of sitting around complaining about these issues, we can build a service and start making money helping others solve them.

Signal Trilateration:

Project Nickname: Adam

GPS is just one of many signals that bounce around us all day (makes you worry about cancer, but we will save that for another day). Every day more and more devices are becoming "smart." What happens if we start repurposing the signals from your statically positioned devices, your printer or your smart home thermostat, to use as anchors? I just bought my parents a Chromecast that temporarily broadcasts a network. These signals, once their origins have been determined and anchored, can be used to trilaterate (kinda like triangulate) your exact position.
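
For the curious, here is a minimal Python sketch of the math, assuming you already have three anchored devices with known positions and a rough distance estimate to each (the anchor coordinates and distances below are made up):

```python
# Minimal trilateration sketch: given three anchors with known positions and
# estimated distances to each, solve for your own (x, y). Linearize by
# subtracting the first circle equation from the others, then least-squares.
import numpy as np

def trilaterate(anchors, distances):
    """anchors: list of (x, y); distances: estimated range to each anchor."""
    (x0, y0), d0 = anchors[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Printer, thermostat, and Chromecast as fixed anchors (meters, made up).
anchors = [(0.0, 0.0), (8.0, 0.0), (0.0, 6.0)]
distances = [5.0, 5.0, 5.0]             # ranges estimated from signal strength
print(trilaterate(anchors, distances))  # -> [4. 3.], which is 5 m from each anchor
```

In practice the distance estimates are noisy, so you want more than three anchors and a least-squares fit like the one above rather than an exact solution.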

About 9 months ago I started working on trilaterating Bluetooth, Wi-Fi, and other signals using my Android phone. It allowed me to map out the world and begin using these non-GPS signals as anchors to better determine my position and the positions of the devices that, in the future, might be controlled by my Google Glass or other wearable.
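
The distance estimates themselves come from signal strength. Here is a rough sketch of how an RSSI reading can be turned into meters using the standard log-distance path loss model (the txPower and exponent values are assumptions you would calibrate per device and per building):

```python
# Convert a received signal strength (RSSI) reading into a distance estimate
# with the log-distance path loss model. tx_power_dbm is the expected RSSI at
# 1 m; path_loss_exponent depends on walls, furniture, people, etc.
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.5):
    """Estimate distance in meters from an RSSI reading (very noisy in practice)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

for rssi in (-59, -70, -85):
    print(rssi, "dBm ->", round(rssi_to_distance(rssi), 1), "m")
# -59 dBm -> 1.0 m, -70 dBm -> ~2.8 m, -85 dBm -> ~11 m
```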

Image Recognition:

Project Nickname: IRaaS (Image Recognition as a Service)

What if you have no GPS signal? If your eyes can recognize that you are looking at the garage door from the front of your house, why shouldn't your wearable recognize it too and pull up the appropriate interface?

Personally, I am a fitness freak and would like to be able to look at a beer and see its caloric value. Imagine an app with a nutrition database that pulls up a detailed report on that beer just by recognizing the label on the bottle as you look at it.
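
As a sketch of what that service could look like under the hood, here is a toy Python example using OpenCV's ORB feature matching against a tiny made-up database of reference label images (the file names and nutrition numbers are purely illustrative):

```python
# Match a camera frame against reference label images with ORB features,
# then look up nutrition data for whichever label matches best.
import cv2

# Tiny made-up "label database": reference image file -> nutrition data.
LABEL_DB = {
    "stout_label.jpg": {"beer": "Example Stout", "calories": 210},
    "lager_label.jpg": {"beer": "Example Lager", "calories": 150},
}

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def best_label(frame_gray):
    """Return the nutrition data for whichever reference label matches best."""
    _, frame_desc = orb.detectAndCompute(frame_gray, None)
    best_name, best_score = None, 0
    for name in LABEL_DB:
        ref = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
        _, ref_desc = orb.detectAndCompute(ref, None)
        matches = matcher.match(frame_desc, ref_desc)
        good = [m for m in matches if m.distance < 40]  # keep close Hamming matches
        if len(good) > best_score:
            best_name, best_score = name, len(good)
    return LABEL_DB.get(best_name)

frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)
print(best_label(frame))  # e.g. {'beer': 'Example Stout', 'calories': 210}
```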

All of this is already possible. A more challenging evolution of this interface is using your own hands and gestures to control what is on the screen. I am not talking about just a swipe gesture. I'm talking straight Tony Stark: rendering objects 1-2 feet from your face that you can manipulate with your hands. Imagine pulling up your to-do list, moving your hand up to a task, pinching it with your fingers, and dragging it to the top of the list. All of this is just around the bend.
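
Here is a toy model of just the pinch-and-drag logic, assuming some hand tracker already hands us fingertip positions and which rendered task the hand is hovering over (there is no real hand-tracking API in this sketch):

```python
# Interaction logic only: detect a pinch, pick up the hovered task,
# and drop it wherever the pinch is released.
import math

PINCH_THRESHOLD = 0.03  # meters between thumb and index fingertips

def is_pinching(thumb, index):
    return math.dist(thumb, index) < PINCH_THRESHOLD

def apply_gesture(todo, frames):
    """frames: list of dicts with 'thumb', 'index', 'hover_index' per frame."""
    grabbed = None
    for f in frames:
        pinching = is_pinching(f["thumb"], f["index"])
        if pinching and grabbed is None and f["hover_index"] is not None:
            grabbed = todo.pop(f["hover_index"])          # pick the task up
        elif not pinching and grabbed is not None:
            todo.insert(f["hover_index"] or 0, grabbed)   # drop it where released
            grabbed = None
    return todo

todo = ["buy milk", "ship blog post", "call mom"]
frames = [
    {"thumb": (0, 0, 0), "index": (0, 0.02, 0), "hover_index": 2},  # pinch "call mom"
    {"thumb": (0, 0.1, 0), "index": (0, 0.2, 0), "hover_index": 0}, # release at the top
]
print(apply_gesture(todo, frames))  # ['call mom', 'buy milk', 'ship blog post']
```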

Recently an amazing app, Word Lens, was released. My mind was blown, so I started replicating the technology in my spare time, and it is coming along pretty well. Hopefully I will have a working alpha demo up in the near future.
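
The pipeline I am experimenting with is roughly: OCR the frame, translate each word, and paint the translation back over the original text. Here is a rough Python sketch using pytesseract for the OCR, with a stand-in dictionary instead of a real translation service:

```python
# OCR each word in the frame, "translate" it, and draw the translation
# over the original text. translate() is a toy stub, not a real API.
import cv2
import pytesseract

TOY_DICTIONARY = {"salida": "exit", "cerveza": "beer"}  # stand-in translator

def translate(word):
    return TOY_DICTIONARY.get(word.lower().strip(), word)

def overlay_translation(frame_bgr):
    data = pytesseract.image_to_data(frame_bgr, output_type=pytesseract.Output.DICT)
    for i, word in enumerate(data["text"]):
        if not word.strip() or float(data["conf"][i]) < 60:
            continue  # skip empty boxes and low-confidence OCR hits
        x, y, w, h = data["left"][i], data["top"][i], data["width"][i], data["height"][i]
        cv2.rectangle(frame_bgr, (x, y), (x + w, y + h), (255, 255, 255), -1)
        cv2.putText(frame_bgr, translate(word), (x, y + h),
                    cv2.FONT_HERSHEY_SIMPLEX, h / 30.0, (0, 0, 0), 2)
    return frame_bgr

cv2.imwrite("translated.jpg", overlay_translation(cv2.imread("sign.jpg")))
```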

Conclusion:

We may be on the edge of a gold rush here. My initial feasibility studies suggest that the tech is not that difficult to build. Perhaps it is time for me to get back on the horse. If I don't, maybe one of you, my readers, will. I am excited to hear your thoughts. Email, comment, tweet, or telegraph. Until I hear from you, thanks for reading.

Tags

Business, Tech
