Last week, Google held its most important event of the year, Google I/O. During the keynote, CEO Sundar Pichai announced many new products. Of course, AI was the star of the show, with the public release of Google Bard and the announcement of a new language model, but there were also some interesting pieces of XR news. In recent years, XR was never relevant at Google I/O, but this year it found its way back into the keynote. Let me briefly tell you what are, in my opinion, the top 5 pieces of XR news to keep an eye on from this event.
More information about Google’s XR headset is coming “later this year”
Google, Qualcomm, and Samsung are working on an XR headset. This was revealed months ago, and in particular, it is well known that Google should provide the operating system (an immersive version of Android), Qualcomm the chipset, and Samsung the device itself. At this Google I/O, Sundar Pichai found it relevant to tell us that the project is still ongoing, and that we will have more information “later this year”.
It is not clear what they will tell us later this year, but it’s important to know that the project is still going on. I guess this line was added to the keynote speech because Apple is going to announce its device very soon at WWDC (at least according to the rumors), and Google wanted to show that it’s part of the game, too.
It’s interesting to notice that once Google announces its device, we risk seeing the immersive market replicate what already happened with mobile phones: Apple and its walled garden on one side, and Google with its own Android ecosystem on the other. In particular, Samsung, which is one of the best smartphone brands and has also always produced good VR hardware, is positioning itself to become a leader in immersive headsets as well, and challenge Apple in this field, too. In this scenario, it will be interesting to evaluate Meta’s role: will the experience gained over all these years give Meta a top spot in this new market (thanks to its first-mover advantage), or will the company fail spectacularly like Nokia did when the iPhone arrived? I guess only time will tell…
Project Starline has been optimized
Project Starline is Google’s innovative project to revolutionize remote calls. You sit down at a desk in a dedicated room, and in front of you there is a special big lightfield display through which you can see another person, who may be thousands of kilometers away, but whom you see in full 3D as if they were right in front of you. The idea of the project is to make remote meetings more realistic, natural, and immersive, and break the dullness of Zoom meetings.
Starline works by putting many cameras around you so that the system can reconstruct your 3D image, which is then compressed in a special way and sent to the Starline booth of the other person. The other person’s setup decompresses your 3D image and shows it on the lightfield display, so that it can be seen in full 3D, like in real life, from the vantage point of the user sitting at their desk.
The news of the week is that Starline has been optimized and improved, so its setup can now be transported more easily. For this reason, Google managed to take two Starline booths to its conference and let some selected journalists try it.
I saw the hands-on video by Engadget’s Cherlynn Low (who couldn’t take photos, though), and after it, I have a better idea of where the project stands now. Starline works, and really makes two people see each other in 3D. Cherlynn highlights how at a certain point the person on the other side took an object and moved their hand forward towards her, and she had a kind of WOW moment seeing the hand of the other person getting closer to her. It was a bit like trying 3D cinema for the first time: you see an object coming out of the screen towards you and it feels magical. But the prototype also had issues: for instance, there was a lag in the communication, so it was easy for her to talk over the other person and vice versa. Google blamed the event Wi-Fi for this problem, but with all that data streamed between the two booths, it wouldn’t surprise me if this were a network optimization problem.
But it was one point of her review that made me think: basically, she said that while Starline is cool, when you see a person in front of you, you don’t have that strong a sense of depth anyway, not even in real life. This is exactly my thought: while technically amazing, this product is mostly useless. If you can’t go and touch the other person, being in front of them feels very similar with or without depth. Having the sense of depth is cool, but it is not worth having this huge booth, while to use Zoom I can use my smartphone from anywhere. Plus, I question how you could have realistic group calls with this: it’s just impossible, it is made for 1-on-1 meetings.
My candid opinion is that Project Starline as it is will be abandoned and will enter the Google Graveyard. But its technology will be reused in another product meant for realistic connection with remote people, maybe inside AR or VR glasses.
Google Maps Immersive View for Routes
It’s cool how Google keeps adding more and more immersive features to Google Maps. The latest one is called Immersive View for Routes. Basically, it means that when you are preparing your trip, you can see a 3D drone-style shot of the route you have to follow, so you are better prepared when you actually travel it in real life. Together with this “drone shot”, you can see on the map other information you may be interested in, like the typical traffic conditions along the way.
Immersive View for Routes will start to roll out in the coming months in 15 cities: Amsterdam, Berlin, Dublin, Florence, Las Vegas, London, Los Angeles, Miami, New York, Paris, San Francisco, San Jose, Seattle, Tokyo, and Venice.
Geospatial Creator
Geospatial Creator is one of the coolest solutions launched at this Google I/O. It lets you create geolocation-based augmented reality content by positioning it directly into a 3D version of Google Maps. It is already integrated into Unity and Adobe Aero, so content creators can use it within the tools they already use today to develop AR content.
With Geospatial Creator, you can open Unity, see a 3D view of your city inside your Unity window, and then visually position your geolocated AR content exactly where you want it to appear. This is incredibly handy: you can see inside your editor window the exact point where your AR-powered 3D element appears, and how it fits the surroundings. This is much easier, more effective, and cooler than doing that on a 2D map.
When your users go to the place you have chosen, they will all see the augmentation you have developed appear. This is how we will have shared AR content in the “metaverse”.
Geospatial Creator and ARCore are thus Google’s answer to the Lightship SDK launched by Niantic. Google is also going to launch a game to showcase the features you can build with these tools. It has partnered with Taito Corporation, the original developer of the arcade hit Space Invaders, to build a brand-new city-scale AR game called Space Invaders: World Defense. Details are scarce on this game: we only know that it will be launched this summer, and we have a teaser trailer.
Links for further information
- https://developers.google.com/ar/geospatialcreator
- https://www.roadtovr.com/space-invaders-world-defense-google-ar-geospatial-creator/
Photorealistic 3D Tiles and Aerial View API
Google has launched other interesting features for developers, in particular Photorealistic 3D Tiles and Aerial View API.
Photorealistic 3D Tiles, available through the Map Tiles API, let developers create experiences that use the 3D maps of Google Earth. As Google says “This new geodata product offers a seamless 3D mesh model of the real-world, textured with our high-res RGB optical imagery, and uses the same 3D map source as Google Earth. Specifically designed for visualization at city block to citywide scale, you can use Photorealistic 3D Tiles to visualize over 2500 cities across 49 countries and create 3D visualization experiences directly in your app.”
Long story short, it means that you can take the 3D reconstruction of the world that Google has created for Google Maps/Earth and use it in your application. You can customize that visualization, add other elements that you want, and even superimpose other information like real-time traffic data. This can be great for people wanting to make immersive experiences where you can navigate real cities in XR like Google Earth VR used to do. I wonder if, with these APIs, someone will create a new version of Google Earth VR, which was one of the most beloved VR experiences before it was basically abandoned.
Google has also adopted the commonly used Open Geospatial Consortium 3D Tiles standard, created by Cesium. So you can use these tiles within any 3D Tiles-compatible application, including the ones built with CesiumJS and deck.gl.
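To give you an idea of what using these tiles looks like in practice, here is a minimal sketch of how a CesiumJS application might load the Photorealistic 3D Tiles. The root tileset URL format and the `Cesium3DTileset.fromUrl` call reflect my understanding of the Map Tiles API and of recent CesiumJS versions, so treat them as assumptions and double-check the official documentation:

```typescript
// Build the root tileset URL for Google's Map Tiles API.
// NOTE: the endpoint below is my understanding of the API at the time
// of writing; verify it against Google's official documentation.
function photorealisticTilesetUrl(apiKey: string): string {
  return `https://tile.googleapis.com/v1/3dtiles/root.json?key=${encodeURIComponent(apiKey)}`;
}

// In a CesiumJS app (assuming CesiumJS 1.104+), the tileset could then
// be streamed into the scene like any other 3D Tiles dataset:
//
//   const tileset = await Cesium.Cesium3DTileset.fromUrl(
//     photorealisticTilesetUrl("YOUR_API_KEY")
//   );
//   viewer.scene.primitives.add(tileset);

console.log(photorealisticTilesetUrl("YOUR_API_KEY"));
// → https://tile.googleapis.com/v1/3dtiles/root.json?key=YOUR_API_KEY
```

Since the tiles follow the open 3D Tiles standard, the same root URL should also work in deck.gl or any other compliant client, which is exactly the point of Google adopting the standard instead of a proprietary format.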
If you don’t need a live 3D map, but you just need to show your users a bird’s-eye view of a particular landmark (e.g. a stadium), you can instead use the new Aerial View API. It is able to provide you with “cinematic videos of points of interests built with the same 3D map source used by Google Earth. With Aerial View, developers have a simple way to highlight places like hotels, attractions, and shops to help people make informed location-based decisions virtually”.
To showcase how the Aerial View API can be useful, Google partnered with Rent.com. Rent will show its users the surroundings of an apartment they want to rent with Aerial View. This way, they can not only see if they like the house, but also what is around it. It seems to me a great idea.
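To sketch how a developer might call this from an app: as far as I understand from Google’s announcement, you look up a pre-rendered video by address. The `videos:lookupVideo` endpoint name and its `address` parameter below are my assumptions, so verify them against the official Aerial View API reference before using them:

```typescript
// Build a lookup URL for the Aerial View API.
// NOTE: the endpoint name and parameters are assumptions based on
// Google's announcement; check the official API reference.
function aerialViewLookupUrl(address: string, apiKey: string): string {
  const params = new URLSearchParams({ address, key: apiKey });
  return `https://aerialview.googleapis.com/v1/videos:lookupVideo?${params}`;
}

// A client would then fetch this URL and, if a video has already been
// rendered for that address, receive the URIs of the cinematic video:
//
//   const res = await fetch(aerialViewLookupUrl("some address", "YOUR_API_KEY"));
//   const video = await res.json();

console.log(aerialViewLookupUrl("221B Baker Street, London", "YOUR_API_KEY"));
```

So a site like Rent.com presumably just resolves each listing’s address to a video once, and embeds the returned video next to the listing.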
The only problem I find with both of these APIs is that the reconstructed 3D models from Google Earth are not perfect: they have lots of visual imperfections, which will be very noticeable if you view them in virtual reality.
Links for further information
- https://cloud.google.com/blog/products/maps-platform/create-immersive-3d-map-experiences-photorealistic-3d-tiles
- https://cloud.google.com/blog/products/maps-platform/create-immersive-cinematic-video-experiences-aerial-view-api
Conclusion
XR has finally been relevant again at Google I/O, after so many years in which it was mostly ignored. And this trend can only get better, especially once Google launches its XR version of Android. I’m sorry for all the people wasting their time writing articles about the metaverse being dead, while it is actually being built right in front of their eyes.
One last thing: if you liked this roundup of news, subscribe to my newsletter, so that you can receive my curated roundup of all the most interesting XR news of the week straight to your inbox, every week 😉
(Header image by Google)