Is 13th Lab the Layar of Indoors?

Well, I'm pretty sure the guys at 13th Lab will get mad at me for comparing them to Layar. First and foremost, they don't consider themselves an augmented reality company. They view themselves as a computer vision company, with AR serving only as a cool proof of concept for their technology. And what exactly is their tech? For now, it's an implementation of SLAM running on the iPad 2, as can be seen in the video below. Next, they plan to bring more computer vision algorithms to mobile platforms.

SLAM, if you are too lazy to read the Wikipedia article and prefer to learn this kind of stuff from a blogger, enables a device to locate its position in a pre-scanned room while continuously updating its stored map of that room, all without using markers. Here's a cool demo from Oxford showing SLAM-assisted augmentation of a museum, which suggests one way this technology can be used. Another scenario might be an IKEA store where, using an iPad, you could change the color of the sofa right in front of you (or locate the exit).
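To give a feel for what SLAM actually computes (this is a generic textbook formulation, not 13th Lab's implementation, and a one-dimensional toy at that), here is a minimal EKF-SLAM sketch: the filter jointly estimates the device's position and a single landmark's position from noisy odometry and noisy range measurements, with no markers involved.

```python
# Toy 1D EKF-SLAM sketch (illustrative only): the state vector holds
# [robot position, landmark position]; each step does a noisy motion
# update followed by a Kalman correction from a noisy range measurement.
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: robot starts at 0, one landmark sits at 5.0 m.
true_robot, true_landmark = 0.0, 5.0

# Filter state: robot position known exactly at start,
# landmark completely unknown (huge initial variance).
x = np.array([0.0, 0.0])
P = np.diag([0.0, 1e6])

Q = 0.05**2                  # motion noise variance
R = 0.1**2                   # measurement noise variance
H = np.array([[-1.0, 1.0]])  # measurement model: z = landmark - robot

for _ in range(50):
    # --- motion: robot commands a 0.1 m step, executed with noise ---
    u = 0.1
    true_robot += u + rng.normal(0, np.sqrt(Q))
    x[0] += u
    P[0, 0] += Q

    # --- measurement: noisy signed range to the landmark ---
    z = (true_landmark - true_robot) + rng.normal(0, np.sqrt(R))
    S = H @ P @ H.T + R             # innovation covariance (1x1)
    K = (P @ H.T) / S               # Kalman gain, shape (2, 1)
    x = x + (K * (z - (x[1] - x[0]))).ravel()
    P = (np.eye(2) - K @ H) @ P

print(x)  # estimated robot and landmark positions
```

Note that the range measurements only constrain the *relative* distance between robot and landmark, so the joint estimate converges on that quantity while absolute positions can drift together - which is exactly why SLAM maintains a map and a pose simultaneously rather than either one alone.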

This leads me to believe that, with some luck, 13th Lab may become a force to be reckoned with in indoor AR. Moreover, 13th Lab aims to be a platform provider, like, well, Layar (and admittedly, many other companies in the AR space).

Writes Petter Ivmark, one of the founders:

The ambition of this company is not just to make a game, though, but rather to take this pretty complicated technology, which requires a lot of specific math and low-level programming skills (meaning that very few developers work with it today), and make it available to developers as a platform that doesn't require these skills at all. Hopefully, this will spur a lot more innovation in computer vision. We strongly believe that, as computer vision and artificial intelligence evolve, the camera will take over from the GPS as the device's most important sensor for understanding, interpreting and navigating the world.
We have long had the idea that the camera has the potential to be the most important sensor.

A few years ago, when we started talking about doing something in this area, the devices were not powerful enough to do SLAM and other advanced computer vision work. When we started looking at this, the iPhone 3GS had not yet been released (let alone a dual-core device like the iPad 2 or some of the newer Android devices). iOS didn't even have a public camera API. But we made a bet on the exponential growth in computing power on devices: that if we started working on this, the devices would catch up quickly. This turned out to be true. Apple released the camera APIs for iOS, put gyros in their devices, and finally released the iPad 2, which had a camera, a gyro and a fast dual-core processor. This was around the time we had a first working prototype of our platform, so the timing was great.

If you buy into their vision, you can sign up for their developer network. Better yet, if you live in Sweden, they are hiring - I bet it would be worthwhile to join them.
