Unlike similar earlier AR projects (see Mobilizy's Wikitude and Tonchidot's Sekai Camera), SREngine uses image recognition to identify locations and items, which gives it an edge on the compass-less iPhone.
Some interesting points from Kanemura's post:
- SREngine is a server/client application: the client does some of the image processing, while the server handles image matching.
- "SREngine is able to distinguish scenes by using original image processing techniques." - Judging by Kanemura's academic background, I'd guess some form of neural network is involved.
- Weak Points: "SREngine can recognize static scene only. Even shaded object is discriminated by the engine. Simple scenes cannot be recognized, such as a solid white wall."
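To make the server/client split above concrete, here is a minimal sketch of how such a pipeline could be structured. This is purely illustrative and not SREngine's actual method: the descriptor (a grayscale intensity histogram), the matching metric, and all function names are my own assumptions.

```python
# Hypothetical client/server recognition split (not SREngine's real pipeline):
# the client reduces a camera frame to a compact descriptor, and the server
# matches that descriptor against a database of known scenes.

def descriptor(pixels, bins=8):
    """Client side: compress a grayscale frame (0-255 values) into a
    normalized intensity histogram -- a cheap, bandwidth-friendly summary."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = float(len(pixels)) or 1.0
    return [h / total for h in hist]

def match(query, database):
    """Server side: return the label of the stored scene whose descriptor
    is closest to the query (L1 distance, nearest neighbor)."""
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(database, key=lambda label: dist(query, database[label]))

# Toy "scenes": a dark storefront and a brightly lit sign.
db = {
    "storefront": descriptor([30, 40, 50, 60] * 100),
    "sign": descriptor([200, 210, 220, 230] * 100),
}
print(match(descriptor([35, 45, 55, 58] * 100), db))  # -> storefront
```

A sketch like this also hints at the stated weak point: a featureless scene such as a solid white wall produces a near-identical descriptor regardless of location, leaving the matcher nothing to discriminate on.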