In December 2020, I had a nice chat about the Open AR Cloud with two amazing professionals who are working with it. Unfortunately, that was a very complicated period for me (e.g. I was working to deliver the WTTOS concert with Jean-Michel Jarre) and I forgot about publishing it. Thanks to an e-mail from Suzan Oslin, who is organizing a workshop about the Open AR Cloud at IEEE VR these days (more info here, if you are interested), I remembered about it again, and so I'm publishing it now. Better late than never, I guess.
The Open AR Cloud defines itself like this:
Open AR Cloud’s mission is to drive the development of open and interoperable spatial computing technology, data and standards to connect the physical and digital worlds for the benefit of all.
Basically, it is an open and distributed platform providing the AR Cloud (which is, in turn, a distributed, shared, and persistent augmented reality), in opposition to the private AR Clouds that companies like Facebook, Apple, Microsoft, and Niantic are building. The AR Cloud can give its owner an enormous amount of knowledge and power, since it means having a digital twin of the world. That's why all the tech giants are working on it, and why each of them is trying to create its own walled garden that competes with the others'. The Open AR Cloud (or OARC, in short) works in exactly the opposite direction, putting a strong focus on the privacy of the users, a distributed architecture, and a system built for interoperability. It is an amazing project that few people are talking about.
In December, I had this talk about it with:
- Vladimir Ufnarovskii, the co-founder of Augmented.City, an Italian startup that provides AR-Cloud-based visual positioning: they let people enjoy shared AR experiences together.
- Julia Beabout, the CEO of Novaby, an award-winning virtual art production company specializing in 3D/AR/VR. They are content creators interested in the democratization of AR offered by smartphones. They needed an AR Cloud solution, and the OARC was the one they found most suitable, also thanks to the help of Augmented.City.
Here is the original footage of our friendly chat, which took place on Google Meet, with many interesting insights on what the AR Cloud is and what its advantages are:
And here below is a written summary, in case you are too busy to watch the whole video.
In the interview, Vladimir and Julia explained what the main advantages of the Open AR Cloud are. The first one, highlighted by Julia, is that it is accessible. By this, she meant two things. First of all, it is already available. Apple, Microsoft, Facebook, etc. are already building their AR Clouds, but can you use them? If you want to create a shared AR experience across a city, can you do it? No. You can have some of the functionalities of the AR Cloud (e.g. shared anchors or geolocation-powered applications), but not the possibility of developing your own shared, persistent AR experiences across a large space. The Open AR Cloud, instead, is already there, and you can already use its SDK to develop your applications. Developers who are reading me... this means that you already have the possibility of using it to dip your toes into the future of AR, so that you gain experience for when all the other AR Clouds are fully developed.
Then, the Open AR Cloud is also accessible because you can do with it whatever you want, yourself: the power is all in the hands of the creators. If you want to use the future AR Cloud by Facebook, for instance, and you want to do an AR installation in your city, you have to wait for Facebook to scan your city. Or for someone using a Facebook application to go there and do the scanning, for Facebook to merge and approve the scan, and then to provide that area to everyone via its APIs. You have no control over this process. With OARC, instead, you do have that control. When Novaby needed a large installation, they just went to that place, used the Augmented.City scanning app, uploaded that location to the AR Cloud, and then could use it. Julia did that without having to ask for any kind of approval or permission, or needing someone from some company to do the scanning for her. This is very important to empower content creators who need a certain location in a short amount of time.
Of course, we also talked about privacy: some AR Clouds of the future may be driven by advertising (I think you know who I am talking about), and this can create huge privacy concerns for their users. The Open AR Cloud is built around high standards for privacy: the consortium doesn't sell your data to third parties and does its best to guarantee the security of all the data it stores. This is something we should care a lot about for our future: distributed AR is going to create huge privacy concerns, and if we all used an open system, we could all live in a safer way. I also asked about private places, and Vlad answered that it is possible to ban a location from the AR Cloud systems so that it can't be scanned (this is useful for private places, but also for governmental buildings, military barracks, etc.). If the location has already been scanned before the ban request, the system prevents its use for localization, so it can't be used anymore.
Scanning a place has its own advantages for whoever performs the operation, too. Augmented.City has the idea of a possible revenue-sharing business for the future: if you scan a location and upload it to the cloud, OARC could give you a share of its earnings for all the calls it receives about that location. Currently, big parts of the world have not been scanned yet, so this could be an occasion to earn some good money (or other rewards) once the Open AR Cloud becomes widespread and this system gets set up. First come, first served: if you are the first one scanning an area, you could become the one getting rewards for all the AR operations happening in that area (as long as you also keep it updated). The world has a finite number of places, so the longer you wait, the fewer opportunities you have to do this.
UPDATE (2021.03.29): I have slightly reworded the above paragraph to clarify that this revenue-share model is not in place yet, but it is a possibility for the future. Augmented.City is keeping track of the people who are uploading the meshes of the various cities, and could use this information in the future to provide rewards for them, like the one envisioned above. Currently, no money transactions or rewards are in place. Thanks to Vlad and Stephen for the clarification after the publication of the article.
I asked them about the scanning process: how does it work? Well, there is a scanning app that you use to perform it. It can run on any kind of smartphone, provided it has a back camera (are there smartphones without one?). You run the application, and a wizard guides you through scanning the area that you want to add to the cloud. After you have done the scanning, the system merges it with the other portions of the digital twin of the world directly in the cloud. Vlad explained to me that this has pros and cons compared to other solutions on the market, like Niantic making Pokémon Go users scan the world via gamification. The big con is that the process is a bit complicated, and it must be performed actively: at the moment it takes a while to get used to it, and for this reason, it can't be integrated as a passive process into a game, like Niantic is doing. The good news is that once you are used to it, you can scan much wider places in just one pass, something that is not possible with the product by Niantic, for instance. Once you have developed the required skills, you can even go pretty fast: the team of Augmented.City scanned the whole city of Bari (circa 100 square kilometers), in southern Italy, in just 2 months with a team of 4 people. The app is currently in public beta, and to use it, you have to contact Augmented.City and provide your Google ID or Apple ID to be enabled (and they would be very happy to do this for you).
Vlad also told me about the GeoPose service: in a place that has been scanned for OARC, you can get an accurate estimate of your pose (the position and orientation of the camera of your smartphone) with just 10-30 cm of accuracy, by shooting a single photo. For comparison, the accuracy of GPS is measured in meters (usually around 5 m). 10-30 cm is enough to add big augmentations to a place and, for instance, insert some informative panels about the buildings that are around you.
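To give you an idea of what such a localization call could look like from the developer's side, here is a minimal sketch in TypeScript. The endpoint URL, request fields, and response shape are my assumptions for illustration only; the actual GeoPose API exposed by OARC/Augmented.City may differ.

```typescript
// Sketch of a visual positioning request: send one photo plus a rough GPS
// prior, get back a GeoPose (position + orientation) with cm-level accuracy.
// The URL and field names below are hypothetical, not the official API.

interface GeoPose {
  position: { lat: number; lon: number; h: number };            // WGS84 + height in meters
  quaternion: { x: number; y: number; z: number; w: number };   // camera orientation
}

async function requestGeoPose(
  photoBase64: string,
  approxLat: number,
  approxLon: number
): Promise<GeoPose> {
  // The coarse GPS fix narrows the search to the right scanned area;
  // the photo is then matched against the point cloud of that area.
  const response = await fetch("https://example-localizer.org/geopose", { // hypothetical URL
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image: photoBase64, priorLat: approxLat, priorLon: approxLon }),
  });
  if (!response.ok) throw new Error(`Localization failed: ${response.status}`);
  return (await response.json()) as GeoPose;
}
```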
If you are a developer, you can also start using the Open AR Cloud in the places of the world that have already been scanned. At the time of the interview, there were some areas already available in 77 cities, with the biggest one being the whole city of Bari. Now, after 3 months, these numbers have surely grown. You can make some little experiments for free if you are a lone dev, and then there are paid solutions depending on the tier that you want to buy, as with all cloud services. The usual prices are in the range of 0.1-1 cent per call to the server.
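If you want to budget for an app built on top of this, a quick back-of-the-envelope calculation based on the quoted per-call price is all you need. The numbers below are just an example scenario, not actual pricing tiers.

```typescript
// Rough cost estimate assuming the quoted 0.1-1 cent per server call;
// the real tiers and prices are defined by the provider.
function monthlyCostUSD(
  callsPerUserPerDay: number,
  activeUsers: number,
  centsPerCall: number
): number {
  const callsPerMonth = callsPerUserPerDay * activeUsers * 30;
  return (callsPerMonth * centsPerCall) / 100;
}

// e.g. 20 localization calls/day from 500 users at 0.5 cents/call ≈ $1,500/month
console.log(monthlyCostUSD(20, 500, 0.5));
```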
The last amazing thing they told me about the Open AR Cloud is its interoperability: the ecosystem is wide open, and as long as you support the protocols the consortium has created, you can be part of the system. The AR Cloud in OARC is currently offered by not one, but two providers: one is Augmented.City and the other is Immersal (which has, for instance, mapped a part of Helsinki). Since both companies use the OARC protocols, their two systems can cooperate, and they are not two walled gardens, as would happen with two private companies not adhering to the Open AR Cloud. This openness and interoperability are very refreshing, and I would love for them to be the values driving all the AR Cloud initiatives happening around the world, so that we could all live in a shared mixed reality experience together.
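In practice, the interoperability idea boils down to a client asking "which providers cover my current area?" and then talking to any of them through the same shared protocols. Here is a small sketch of that flow; the discovery endpoint and the record format are assumptions for illustration, not the actual OARC specification.

```typescript
// Sketch of provider-agnostic service discovery: the client queries a
// discovery service by location and gets back whichever providers
// (e.g. Augmented.City or Immersal) cover that area, all speaking the
// same protocol. URL and response shape are hypothetical.

interface SpatialServiceRecord {
  provider: string;            // e.g. "Augmented.City" or "Immersal"
  type: "geopose" | "content"; // localization service or content service
  url: string;                 // where to send requests for this area
}

async function discoverServices(lat: number, lon: number): Promise<SpatialServiceRecord[]> {
  const response = await fetch(
    `https://example-discovery.org/services?lat=${lat}&lon=${lon}` // hypothetical URL
  );
  if (!response.ok) throw new Error(`Discovery failed: ${response.status}`);
  return (await response.json()) as SpatialServiceRecord[];
}
```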
I asked Julia and Vlad about some projects they did together, and they showed me some nice AR installations. They also told me about the shared experience developed with my friend and XR creator Stephen Black, about his cartoon character Bubiko flying on a zeppelin around the city of Bari and then landing on a table of a bar. Stephen conceived the experience, Novaby made the animation of the character, and Augmented.City made the tracking possible, so that the character followed a specific physical path from the sky to its final destination. Anyone in Bari could see it following exactly that path, at the same time, using the dedicated app. An amazing city-scale application, something pretty unique in the AR ecosystem, which strangely no one has talked about.
Regarding the future, they both look positively at 5G, which can make the system faster (just half a second for a GeoPose call, for instance), and they also hope to be able to improve the overall accuracy, reliability, latency, and repeatability of the AR-tracking system. To achieve that, they hope more and more people will use the Augmented.City and Open AR Cloud solutions, increasing the areas where tracking is available and providing feedback on what can be improved. I sincerely invite you to do that: you can find all the contact information on the Augmented.City (for development) or Novaby (for content creation) websites. Let's all support open systems together!