Guess what: it was too good to be true (but at least it's not vaporware). TechCrunch now reports on a private demo that Tonchidot arranged in Tokyo. Apparently, the Sekai Camera really doesn't know what the user is looking at through her iPhone, and she has to pick from the available options:
Many people have speculated about how Sekai Camera works technically. The answer is simple: the user’s location is identified through GPS (no cell-tower triangulation or image recognition technology is being used). As the iPhone doesn’t have an internal compass, the direction the viewfinder is pointed can’t be measured: users need to flick their fingers left or right to find relevant tags that are around them (as demonstrated in the video I took below). Tap a tag and the information it contains appears in the form of a window, for example a picture with a comment box below it, or a voice message someone left earlier.
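Since the app only knows the user's GPS position and not the camera's heading, finding tags presumably boils down to a simple proximity query: take everything within some radius and let the user flick through the results. Here's a minimal Python sketch of that idea; the tag data, radius, and function names are my own illustration, not anything Tonchidot has published:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby_tags(user_lat, user_lon, tags, radius_m=300.0):
    """Return tags within radius_m of the user, nearest first.

    With no compass, the client can't tell which way the camera points,
    so the tags are offered as an ordered list the user flicks through.
    """
    hits = []
    for tag in tags:
        d = haversine_m(user_lat, user_lon, tag["lat"], tag["lon"])
        if d <= radius_m:
            hits.append((d, tag))
    hits.sort(key=lambda pair: pair[0])
    return [tag for _, tag in hits]

# Example: a user in Shibuya with three hypothetical geotags.
tags = [
    {"label": "cafe photo", "lat": 35.6595, "lon": 139.7005},
    {"label": "voice memo", "lat": 35.6600, "lon": 139.7010},
    {"label": "far away",   "lat": 35.7000, "lon": 139.7500},
]
print([t["label"] for t in nearby_tags(35.6595, 139.7004, tags)])
# → ['cafe photo', 'voice memo']
```

The "far away" tag (roughly 4.5 km out) is filtered, and the remaining two come back nearest-first, which is exactly the ordering a flick-through list would need.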
It can still work and become a great success, but I'll go with Wikitude for now. Anyway, here's how it looks now; you can compare it against the previous one in your spare time.