
Auki Labs brings privacy and fast calibration to AR

One of the most interesting encounters I had at AWE was with Nils Pihl, CEO of Auki Labs, a company that aims to improve multiplayer experiences in mobile AR by making them faster and more privacy-oriented. He started speaking with me while I was taking a five-minute break in the media lounge, and even though he interrupted my relaxing moment, he did it for a good cause.

Auki Labs’ fast calibration

Auki Labs has developed a system to make sure that two people who want to enjoy the same multiplayer AR experience in the same place can do it in a fast, accurate, and privacy-safe way. I remember seeing a video about it a few weeks ago, and I’ll let you watch it below before you go on reading.

Auki Labs demo

It was interesting to speak with Nils and get some more technical details about how the system works. Basically, the first player starts an augmented reality experience (maybe made with ARCore) on his phone as usual. Then the second player launches the same app and wants to play with the first one, but to do that, the two phones must share the same reference system. That is, if the AR application shows, for instance, an augmented dog, both players should see the dog in the same physical position.

To do that, Auki Labs uses a smart trick: a QR code. The first phone shows a QR code that the second player can scan to synchronize their reference systems. The QR code serves a double purpose:

  1. It contains an ID of the first player, which can be looked up in the Auki Labs cloud, so that the second player knows who they are connecting with
  2. It acts as a marker that reveals the position and orientation of the first phone in the coordinate system of the second phone. If you are not much into coordinate systems, I’ll try to explain this a bit better: the QR code has the same position and orientation as the screen of the first phone, so when the second phone detects it and estimates its pose, it knows how the first phone is positioned and oriented in its own frame. From that, it can compute a transformation that converts coordinates from the first phone’s reference system into the second phone’s, which is exactly what is needed to see the augmentations in the same position. For instance, if the second phone detects that the first one is rotated 90 degrees clockwise with respect to it, and the virtual dog has an orientation of 0 degrees as seen from the first phone, then the second phone has to show the dog rotated 90 degrees clockwise for both devices to agree on its orientation (see the sketch right after this list).
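To make the registration step concrete, here is a minimal sketch of the math, assuming poses are expressed as 4×4 homogeneous matrices and that the first phone knows the pose of its own screen (and hence of the QR code) in its tracking frame. All names are mine, purely for illustration; this is not Auki Labs’ actual code.

```python
# Minimal sketch (assumed, not Auki Labs' actual code) of registering phone A's
# coordinate system to phone B's via the QR code shown on A's screen.
import numpy as np

def make_pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous pose matrix from a 3x3 rotation and a 3-vector."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

def frame_a_to_frame_b(pose_qr_in_a: np.ndarray, pose_qr_in_b: np.ndarray) -> np.ndarray:
    """Transform mapping coordinates from phone A's world frame into phone B's.

    pose_qr_in_a: pose of the QR code (A's screen) in A's own world frame,
                  which A knows because it tracks its own device.
    pose_qr_in_b: pose of the same QR code as detected by B's camera,
                  expressed in B's world frame.
    """
    # The QR code is one physical object seen from both frames, so:
    # T_b<-a = T_b<-qr @ T_qr<-a = pose_qr_in_b @ inv(pose_qr_in_a)
    return pose_qr_in_b @ np.linalg.inv(pose_qr_in_a)

# Toy numbers: B sees the QR code 1 m in front of it, rotated 90 degrees
# clockwise around the vertical axis; A considers the code at its own origin.
theta = np.deg2rad(-90)  # clockwise when looking down the vertical axis
rot_y = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
pose_qr_in_a = make_pose(np.eye(3), np.zeros(3))
pose_qr_in_b = make_pose(rot_y, np.array([0.0, 0.0, -1.0]))

t_b_a = frame_a_to_frame_b(pose_qr_in_a, pose_qr_in_b)
dog_in_a = np.array([0.5, 0.0, 1.0, 1.0])  # homogeneous point in A's frame
dog_in_b = t_b_a @ dog_in_a                # same physical point, in B's frame
print(dog_in_b[:3])
```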

This means that after you have scanned the QR code of the other player, you can immediately see the augmentations that they see, exactly where they see them.

Now, if you are a bit into tracking and computer vision, I’m sure you have some questions in mind: “What about the drift?” and “What if one of the phones has a glitch in its tracking?”. Well, to account for this, and also for the initial registration error, the two phones keep sending each other information about the scene that they “see”. For instance, they could send each other info about the planes that they detect: since these are just a bunch of positions and rotations, they are not heavy on the network. The system then keeps aligning the planes detected by the two phones and guarantees that they remain consistent. This way, the two phones keep the same aligned view under all conditions.
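Auki Labs hasn’t published the details of this continuous re-alignment, but a generic way to realize the idea is rigid registration of matched features, for instance with the classic Kabsch algorithm. The sketch below, with hypothetical names and toy data, estimates the small corrective rotation and translation between the plane centers the two phones report (it assumes the matching of planes between phones is already solved).

```python
# A hedged sketch of the re-alignment idea: given matched plane centers from
# the two phones (matching assumed solved), estimate the small rigid
# correction between the frames with the Kabsch algorithm. This is a generic
# technique, not necessarily what Auki Labs actually uses.
import numpy as np

def kabsch(points_a: np.ndarray, points_b: np.ndarray):
    """Rigid (rotation, translation) best mapping points_a onto points_b.

    points_a, points_b: (N, 3) arrays of matched 3D points, e.g. centers of
    planes both phones detected, already expressed in the shared frame.
    """
    centroid_a = points_a.mean(axis=0)
    centroid_b = points_b.mean(axis=0)
    # Cross-covariance of the centered point sets
    h = (points_a - centroid_a).T @ (points_b - centroid_b)
    u, _, vt = np.linalg.svd(h)
    # Guard against a reflection sneaking into the SVD solution
    d = np.sign(np.linalg.det(vt.T @ u.T))
    rotation = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    translation = centroid_b - rotation @ centroid_a
    return rotation, translation

# Toy data: phone B's reports have drifted by 2 cm on x and 1 cm on z.
planes_from_a = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 2.0], [0.5, 1.0, 1.5]])
planes_from_b = planes_from_a + np.array([0.02, 0.0, -0.01])
rotation, translation = kabsch(planes_from_a, planes_from_b)
print(rotation)     # ~identity: no rotational drift in this toy example
print(translation)  # ~[0.02, 0.0, -0.01]: the correction to apply
```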

To implement all of this in an application, the only thing the developer has to do is integrate the Auki SDK, which takes care of all the heavy lifting.
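Since Nils didn’t show me the actual API, here is a purely hypothetical sketch of what “integrate the SDK” could look like from a developer’s perspective; every class and method name below is mine, invented to illustrate the flow, and none of them comes from the real Auki SDK.

```python
# Purely hypothetical pseudo-SDK: the names are invented for illustration and
# do not come from the real Auki SDK. It just mirrors the flow described in
# the article: host a session, show a QR code, scan to join, share anchors.
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class Session:
    session_id: str = field(default_factory=lambda: uuid4().hex)

class HypotheticalColocationSdk:
    def host_session(self) -> Session:
        """Player 1: create a session and display its ID as a QR code."""
        session = Session()
        print(f"Showing QR code for session {session.session_id}")
        return session

    def join_session(self, scanned_id: str) -> Session:
        """Player 2: join after scanning; a real SDK would also estimate the
        QR pose here and register the two coordinate systems."""
        print(f"Joined session {scanned_id}; frames are now aligned")
        return Session(session_id=scanned_id)

    def place_anchor(self, session: Session, position: tuple) -> None:
        """With the frames registered, the same coordinates mean the same
        physical spot on every joined phone."""
        print(f"Anchor at {position} in session {session.session_id}")

sdk = HypotheticalColocationSdk()
hosted = sdk.host_session()                    # on player 1's phone
joined = sdk.join_session(hosted.session_id)   # on player 2's phone
sdk.place_anchor(joined, (0.5, 0.0, 1.0))      # both players see it in place
```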

Advantages of this solution

The solution provided by Auki Labs has a long list of advantages over competing solutions based on world anchors:

  • It requires no scanning of the environment, so it is much faster to initialize. Cloud anchors require knowledge of the environment where the augmentations happen, while here you just scan a QR code and register the coordinate systems, which is much faster
  • It requires sharing less data on the network, because you don’t have to communicate all the data about the anchor and the surrounding environment, but just an ID, followed by the occasional alignment checks
  • It is privacy-safe, because you don’t know anything about the other player (apart from an ID) and you don’t have to scan the environment and send its description to the cloud. This means that when you play, you don’t send data about your home to some foreign company
  • It’s more environmentally friendly, because you don’t have to store huge cloud anchors or AR Cloud data on servers
  • It is more accurate (according to Nils… I have performed no independent testing of this claim)

The idea is very smart: it’s simple, but it works, and with good accuracy. Nils told me that the basic idea is straightforward (I admit I had a similar one a few years ago), but developing it so that everything works with the right accuracy and reliability is incredibly complex and required Auki months of work.

Disadvantages of this solution

I personally think that this is a good solution for relative positioning: someone is already playing, and I want to join his match, so I scan his QR code. But what if I’m alone and I just want the system to recognize a place where an augmentation should appear? What if I need contextual information on the elements I see while walking around the city? Or what if I want to play with other people without interacting actively with them (maybe because they are not my friends)? I personally see many conditions in which I would prefer the usual cloud-anchoring approach. I said this to Nils, but he replied that his company has solutions for those cases, too. He didn’t reveal them to me, though. I remain curious.

Future Directions

Reading the Auki Labs website, I found their vision of creating a persistent future AR metaverse (dubbed “Aukiverse”… damn, you know how I hate all this “-verse” stuff…) that lets people interact together with persistent AR in a way that is efficient and preserves privacy. There is also some integration with Web3, and the whitepaper talks about a token that you can use to access the Auki Labs system (e.g. when you scan a QR code, you have to use an Auki token to access the cloud and be able to join the other players).

The company is also experimenting with bringing hand tracking into its experiences, and a video shared today on Twitter shows a promising direction.

Nils told me the company has already secured big investments (the latest one being $13M) to fulfill this vision, and I wish him and the whole Auki Labs team the best in making it happen.

(Header image by Auki Labs)


