The most interesting AR/VR news from Google I/O 2018

(Image by Google)

As promised in my latest newsletter, I’m going to briefly summarize the most interesting AR/VR news from last week’s Google I/O 2018. Exactly as with Facebook F8, there was no disruptive news, but we could appreciate some interesting stuff nonetheless.

ARCore 1.2: new features

Google ARCore reaches version 1.2 and introduces a lot of interesting features.

The first and most important one is local multiplayer: from now on, it will be possible to share AR experiences with people around you. Until now, this was possible only on HoloLens, through a system called World Anchors: the device describes a part of the world using some kind of visual features and then sends this description to another user, so that their device can identify exactly the same point in the surrounding environment. This way, the two users can share a common AR experience, because their devices have a common reference system. Today, something similar is finally available for ARCore, on both Android and iOS: it is called Cloud Anchors. I have to admit that I’m quite surprised to see that it is compatible with iOS too, and I’d like to understand what this means exactly, considering that ARCore runs only on Android. Anyway, if you want to see it working, you can watch this example video from their Just a Line drawing app:

(GIF by Google)

I think this is a fundamental advancement: finally, two people in the same place can share the same AR experience. While this may be useful for multiplayer games, I think it has far more interesting outcomes for serious enterprise applications. Think about two architects sharing the same AR experience of a house that they’re designing: they could see it from different angles, discuss it, and work together on the design (something similar to what people can do with the HoloLens SketchUp Viewer). The same may hold for an app for maintenance workers, who could discuss a broken machine together without having to look at the problem through the same phone.
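To get an intuition for why a shared anchor enables this, here is a minimal sketch in plain Java, with poses simplified to 2D and all numbers made up for illustration. Real ARCore works with full 3D poses and resolves the anchor through its Cloud Anchors API; the point here is only that anything defined relative to the common anchor can be mapped into each device’s own local coordinate system.

```java
// Toy illustration of why a shared (cloud) anchor gives two devices a
// common reference frame. Poses are simplified to 2D (x, y, heading);
// real ARCore uses full 3D poses resolved via the Cloud Anchors API.
public class SharedAnchorDemo {

    // Apply a 2D rigid transform: rotate (px, py) by theta, then
    // translate by (tx, ty). Returns the point in the device's frame.
    static double[] toDeviceFrame(double tx, double ty, double theta,
                                  double px, double py) {
        double wx = tx + Math.cos(theta) * px - Math.sin(theta) * py;
        double wy = ty + Math.sin(theta) * px + Math.cos(theta) * py;
        return new double[] { wx, wy };
    }

    public static void main(String[] args) {
        // Both devices resolved the same anchor, but each expresses its
        // pose in its own arbitrary local coordinate system:
        // device A sees the anchor at (2, 0) with heading 0 rad,
        // device B sees the same anchor at (5, 5) with heading PI/2 rad.
        double[] anchorInA = { 2, 0, 0 };
        double[] anchorInB = { 5, 5, Math.PI / 2 };

        // A virtual object placed 1 meter "in front of" the anchor,
        // expressed in the anchor's own frame.
        double objX = 1, objY = 0;

        double[] inA = toDeviceFrame(anchorInA[0], anchorInA[1], anchorInA[2], objX, objY);
        double[] inB = toDeviceFrame(anchorInB[0], anchorInB[1], anchorInB[2], objX, objY);

        // Different local coordinates, same physical point: each device
        // can render the object consistently in its own frame.
        System.out.printf("Device A renders the object at (%.1f, %.1f)%n", inA[0], inA[1]);
        System.out.printf("Device B renders the object at (%.1f, %.1f)%n", inB[0], inB[1]);
    }
}
```

Since both devices agree on where the anchor is in the physical world, anything positioned relative to it shows up in the same physical spot for everyone, which is exactly what a shared drawing app like Just a Line needs.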

Apart from this, there are also other great features. ARCore can now detect vertical planes and also images. The latter feature is called “Augmented Images” and will let you use up to 1,000 images as markers to showcase your virtual objects. If you’re thinking “This is like Vuforia”, yes, you’re right. Visual markers are the top reason why people use Vuforia, and I’m afraid that after this upgrade, Vuforia may see a drop in its user count. For sure Vuforia will survive, because it offers a lot of other features (like object tracking and 3D model tracking), but image tracking was the most used one.
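To give an intuition of how a reference-image (“marker”) database works, here is a toy sketch in plain Java: it keys tiny fake grayscale images by a naive average hash and looks a camera crop up in the database. This is only an illustration of the lookup idea, with made-up data; the actual ARCore Augmented Images implementation matches robust visual features, not simple hashes.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of a reference-image ("marker") database. We key tiny fake
// grayscale images by a naive average hash to show the lookup idea;
// ARCore's real implementation matches robust visual features instead.
public class MarkerDatabaseDemo {

    // Average hash: one bit per pixel, set if that pixel is brighter
    // than the image's mean brightness. Tolerates uniform lighting shifts.
    static long averageHash(int[] pixels) {
        long sum = 0;
        for (int p : pixels) sum += p;
        double mean = (double) sum / pixels.length;
        long hash = 0;
        for (int i = 0; i < pixels.length; i++) {
            if (pixels[i] > mean) hash |= 1L << i;
        }
        return hash;
    }

    public static void main(String[] args) {
        Map<Long, String> database = new HashMap<>();

        // "Register" reference images (up to 1,000 in ARCore), each named.
        int[] poster = { 10, 200, 10, 200 };  // fake 2x2 grayscale image
        int[] logo   = { 250, 250, 5, 5 };
        database.put(averageHash(poster), "movie-poster");
        database.put(averageHash(logo),   "company-logo");

        // A camera-frame crop that looks like the poster: same pattern
        // under slightly brighter lighting, so it produces the same hash.
        int[] cameraCrop = { 20, 210, 20, 210 };
        String match = database.get(averageHash(cameraCrop));
        System.out.println("Detected marker: " + match);  // movie-poster
    }
}
```

Once the marker is identified (and its pose estimated, which this sketch skips), the app knows which virtual object to attach and where, which is exactly the Vuforia-style workflow the article describes.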

Google has also introduced Sceneform, a framework that makes it easier to develop AR applications in Java. It offers high-level abstractions and performance optimized for mobile.

(GIF by Google)
Google Maps gets a cool AR mode

Google Maps is getting a lot of improvements and among them, there is one involving image processing, AI, and AR.

Google is trying to address the problem of following directions from Maps when we don’t know exactly which direction the phone is heading: this has happened to me a lot of times, and it is especially annoying when I’m visiting a city for the first time and so have no known landmarks. “You have to turn right,” says the phone… but where exactly is right? Well, Google is trying to solve this with an AR mode that will superimpose the information directly on your camera feed, so you know for sure where you have to go, because you just have to follow some arrows (or a cute fox that Google may be adding to the app).
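The core of the problem is that GPS gives your position but not your heading, so a turn instruction is ambiguous until the phone knows which way it is actually facing, which is what the camera-based AR mode provides. A minimal sketch in plain Java, using flat local coordinates and made-up numbers:

```java
// Why "turn right" is ambiguous without a heading: the relative turn
// depends on both the bearing to the waypoint AND the phone's heading.
// Flat local coordinates; angles in radians, counter-clockwise from east.
public class TurnDirectionDemo {

    // Bearing from the user's position (ux, uy) to a waypoint (wx, wy).
    static double bearingTo(double ux, double uy, double wx, double wy) {
        return Math.atan2(wy - uy, wx - ux);
    }

    // Relative turn, normalized to (-PI, PI]:
    // positive = turn left, negative = turn right.
    static double relativeTurn(double heading, double bearing) {
        double turn = bearing - heading;
        while (turn <= -Math.PI) turn += 2 * Math.PI;
        while (turn > Math.PI) turn -= 2 * Math.PI;
        return turn;
    }

    public static void main(String[] args) {
        // The next waypoint is due north of the user...
        double bearing = bearingTo(0, 0, 0, 10);  // PI/2, i.e. north

        // ...but the instruction depends entirely on the heading: the same
        // waypoint means "turn left" when facing east (heading 0) and
        // "turn right" when facing west (heading PI).
        double turn = relativeTurn(0, bearing);
        System.out.println(turn > 0 ? "turn left" : "turn right"); // turn left
    }
}
```

Once the AR mode recovers the true heading from the camera feed, the arrows can be drawn directly over the street you should walk down, removing the ambiguity entirely.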

It is very cool to have both the map and some visual info that will help us in the moments we’re stuck (Image by Google, from Road To VR)

Apart from the directions, Google is also planning to show interesting places in your surroundings, so that you not only get some visual landmarks but can also discover new interesting places. And the cool thing is that all this won’t replace the actual map system: you’ll continue seeing the standard map in the lower part of the screen, so that you can still have an overall understanding of where you are and where you’re headed.

These improvements will be possible thanks to a technology called VPS (Visual Positioning System), which will try to match the camera feed with the Street View 360 photos of all the places around you, detecting visual features and understanding your exact position in space. This, together with the rich information from Maps, will enable a new kind of 3D AR maps.

Do you want to know the release date? Well… me too 🙂

Google Chrome is becoming more and more XR-oriented

We have two great pieces of XR news regarding Chrome:

  • Standalone mobile headsets are getting a VR version of Chrome: currently, there’s no Chrome browser for devices like the Lenovo Mirage Solo (apart from some hacks to install apps from Google Play, as highlighted by Upload VR), and that’s a pity, especially if you want to try WebVR experiences. Google is working hard on this and a VR version of Chrome is arriving soon: they are having some problems removing some dependencies on 2D UI elements in the project, but things are going well and Chrome should arrive soon;

    Google Chrome running in VR! (Image by Google from Road To VR)

  • Chrome is going to implement the new WebXR standard APIs and so will soon be able to show not only VR but also AR content. At Google I/O, attendees were able to try a custom build of the Chromium browser (the open-source version of Chrome) that showed a webpage placing an Aztec historical artifact on the floor in augmented reality. Everything worked straight from the web, without installing anything, and worked really smoothly. I’m a huge fan of WebXR: I think it is the future of making XR user-friendly (think about the social experience Mozilla Hubs, which works just by sharing a link), so this news is more than welcome.

    All this runs on the web! (Image by Engadget)

Google Lens updates

Do you remember Google Lens, the app by Google that can analyze the camera stream of your phone to detect things like, for instance, the breed of a dog you’re framing? Well, it is getting some updates:

  • You will be able to summon it easily from the Camera app of your phone: there will be a native integration in many Android phones, such as the LG G7;
  • Smart Text Selection will detect and interpret the text in the images you shoot and will let you select it, copy it, translate it, and do with it whatever you want. Basically, the phone will be able to read text: this is very powerful for a lot of applications;

    You can just shoot a page of a book and have the text on your phone! (Image by Digital Trends)

  • Style Match will let you frame any kind of shoes, shirt, skirt, or whatever, and then find items with similar patterns on the web. The idea is that you see someone in the street wearing a piece of clothing that you like, you take a picture of it, and the system tells you where you can buy visually similar items, that is, items that you may like as well because they are similar to the one you pictured. Engadget reports that in some of their tests, the system even detected the exact item;
  • Real-time results let you use Lens directly on a camera stream: you just move your phone and the system will try to recognize everything it can in the scene in real time, without waiting for you to take a picture. This is possible thanks to Google’s optimizations.
Google Tour Creator

Do you want to create a virtual tour easily? Do you want to show someone a collection of 360 photos in VR, with some points of interest highlighted? Then Google Tour Creator is what you need.

Google Tour Creator lets you create a virtual tour super-easily, directly from the web: you can add your own 360 photos or take them from Google Maps (Google Street View) and create a tour that takes the user through all of them. You can add points of interest to those photos, with text or images for each point. In the end, you can publish the tour you’ve just created to Google Poly so that everyone can see it.

People will be able to enjoy your creation using their browser or a Poly-supported headset like Google Cardboard or Daydream.

I’ve given Tour Creator a quick try and I’ve been impressed by how easy it actually is to use. If you want to create a tour using images from Google Street View, it is the tool that I would advise everyone to use. If you want to give it a try, you can find it here.

Putting my face at random positions inside the tour can surely amaze the viewer. Jokes aside, look how neat and simple the Tour Creator interface is
Google Lookout will help the visually impaired

A new app by Google, called Google Lookout, may help the visually impaired by analyzing the world around them and giving them advice, like where they can find certain tools or where the furniture is. All that is needed is for the person to wear the phone in a shirt pocket or on a lanyard, so that the camera can frame the world around them. After that, every time it is triggered, Lookout will give instructions and advice. We don’t know the release date of Lookout, but it seems like something that can help a lot of people.


And that’s it with this summary. I really hope that you liked it… and if that’s the case, please share it and subscribe to my newsletter to support my magazine!

(Header image by Google)

Skarredghost: AR/VR developer, startupper, zombie killer. Sometimes I pretend I can blog, but actually I've no idea what I'm doing. I tried to change the world with my startup Immotionar, offering super-awesome full-body virtual reality, but now the dream is over. But I'm not giving up: I've started an AR/VR agency called New Technology Walkers, with which we can help you realize your XR dreams through our consultancy (Contact us if you need a project done!)