
The most interesting AR/VR news from Google I/O 2018

As I promised in my latest newsletter, I’m going to briefly summarize the most interesting AR/VR news from last week’s Google I/O 2018. Exactly as with Facebook F8, there was no disruptive news, but we could appreciate some interesting stuff nonetheless.

ARCore 1.2: new features

Google ARCore reaches version 1.2 and introduces a lot of interesting features.

The first and most important one is local multiplayer: from now on, it will be possible to share AR experiences with the people around you. Until now, this was possible only with HoloLens, which used a system called World Anchors: the device was able to describe a part of the world using some kind of visual features and then send this description to another user, so that his or her device could identify exactly the same point in the surrounding environment. This way, the two users were able to share a common AR experience, because their devices had a common reference system. Today, something similar is finally available for ARCore, both on Android and iOS: it is called Cloud Anchors. I have to admit that I’m quite surprised to see that it is compatible with iOS too, and I’d like to understand what this means exactly, considering that ARCore runs only on Android. Anyway, if you want to see it working, you can watch this example video from the Just a Line drawing app:

(GIF by Google: ARCore Cloud Anchors multiplayer in Just a Line)
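
For developers, the feature surfaces as a small API on the ARCore Session. Below is a minimal sketch of how hosting and resolving a Cloud Anchor looks with the ARCore 1.2 Java SDK; I’m assuming a session that is already running and a local anchor created from a hit test, and I’m omitting all error handling:

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Anchor.CloudAnchorState;
import com.google.ar.core.Session;

public class CloudAnchorSketch {

    // Device A: ask ARCore to host a local anchor on Google's cloud service.
    // The returned anchor exposes a cloud ID once hosting completes, which
    // you then send to other users through your own networking layer.
    static Anchor hostAnchor(Session session, Anchor localAnchor) {
        return session.hostCloudAnchor(localAnchor);
    }

    // Device B: resolve the anchor from the ID received from device A,
    // obtaining the same reference point in its own coordinate system.
    static Anchor resolveAnchor(Session session, String cloudAnchorId) {
        return session.resolveCloudAnchor(cloudAnchorId);
    }

    // Hosting and resolving are asynchronous: poll the state every frame.
    static boolean isReady(Anchor cloudAnchor) {
        return cloudAnchor.getCloudAnchorState() == CloudAnchorState.SUCCESS;
    }
}
```

The visual description of the space around the anchor is processed on Google’s servers, which is presumably also what makes the iOS support possible.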

I think this is a fundamental advancement: finally, two people in the same place can share the same AR experience. While this may be useful for multiplayer games, I think that it has far more interesting outcomes for serious enterprise applications. Think about two architects sharing the same AR experience about a house that they’re designing: they could see it from different angles, discuss it, and work together on the design (something similar to what people can already do with the HoloLens SketchUp Viewer). The same may hold for an app for maintenance workers, who could discuss a broken machine together without having to look at the problem through the same phone.

Apart from this, there are also other great features. ARCore can now detect vertical planes and also images. The latter feature is called “Augmented Images” and will let you use up to 1000 images as markers on which to showcase your virtual objects. If you’re thinking “This is like Vuforia”, yes, you’re right. Visual markers are the top reason why people use Vuforia and I’m afraid that after this upgrade, Vuforia may see a drop in its user count. For sure Vuforia will survive because it offers a lot of other features (like object tracking and 3D model tracking), but image tracking was the most used one.
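
To give an idea of how Augmented Images works for developers, here is a minimal sketch with the ARCore 1.2 Java SDK: you build a database of reference images, enable it in the session configuration, and then check each frame for tracked images (loading the bitmap and rendering the content are assumed to happen elsewhere):

```java
import android.graphics.Bitmap;

import com.google.ar.core.Anchor;
import com.google.ar.core.AugmentedImage;
import com.google.ar.core.AugmentedImageDatabase;
import com.google.ar.core.Config;
import com.google.ar.core.Frame;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

public class AugmentedImagesSketch {

    // Register the marker images (up to 1000 per database) and
    // enable the database in the session configuration.
    static void setupDatabase(Session session, Bitmap posterBitmap) {
        AugmentedImageDatabase database = new AugmentedImageDatabase(session);
        database.addImage("movie_poster", posterBitmap);

        Config config = new Config(session);
        config.setAugmentedImageDatabase(database);
        session.configure(config);
    }

    // Call this every frame: when one of the registered images is being
    // tracked, anchor your virtual content to its center.
    static void onUpdate(Frame frame) {
        for (AugmentedImage image : frame.getUpdatedTrackables(AugmentedImage.class)) {
            if (image.getTrackingState() == TrackingState.TRACKING
                    && "movie_poster".equals(image.getName())) {
                Anchor anchor = image.createAnchor(image.getCenterPose());
                // ...attach your renderable to this anchor...
            }
        }
    }
}
```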

Google has also introduced Sceneform, a framework to develop AR applications more easily using Java. It offers high-level abstractions and performance optimized for mobile.

(GIF by Google: Sceneform in action at Google I/O)
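
To show what those high-level abstractions look like, here is a minimal Sceneform sketch that places a model on a tapped plane, with no OpenGL code at all. I’m assuming an ArFragment is already in the layout and that a “model.sfb” asset has been created with Sceneform’s import tooling:

```java
import android.net.Uri;

import com.google.ar.core.Anchor;
import com.google.ar.sceneform.AnchorNode;
import com.google.ar.sceneform.rendering.ModelRenderable;
import com.google.ar.sceneform.ux.ArFragment;
import com.google.ar.sceneform.ux.TransformableNode;

public class SceneformSketch {

    // Load the renderable asynchronously, then place a copy of it
    // on every plane the user taps.
    static void setup(ArFragment arFragment) {
        ModelRenderable.builder()
                .setSource(arFragment.getContext(), Uri.parse("model.sfb"))
                .build()
                .thenAccept(renderable ->
                        arFragment.setOnTapArPlaneListener((hitResult, plane, motionEvent) -> {
                            Anchor anchor = hitResult.createAnchor();
                            AnchorNode anchorNode = new AnchorNode(anchor);
                            anchorNode.setParent(arFragment.getArSceneView().getScene());

                            // TransformableNode adds pinch/drag gestures for free.
                            TransformableNode node =
                                    new TransformableNode(arFragment.getTransformationSystem());
                            node.setRenderable(renderable);
                            node.setParent(anchorNode);
                        }));
    }
}
```
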
Google Maps gets a cool AR mode

Google Maps is getting a lot of improvements and among them, there is one involving image processing, AI, and AR.

Google is trying to address the problem we all have when we’re following the directions from Maps but we don’t know exactly which direction the phone is heading: this has happened to me a lot of times, and it is especially annoying when I’m visiting a city for the first time and so I have no known landmarks. “You have to turn right”, says the phone… but where exactly is right? Well, Google is trying to solve this with an AR mode that will superimpose the information directly on your camera feed, so you’ll know for sure where you have to go, because you just have to follow some arrows (or a cute fox that Google may be adding to the app).

It is very cool to have both the map and some visual info that will help us in the moments when we’re stuck (Image by Google, from Road To VR)

Apart from the directions, Google is also planning to show interesting places in your surroundings, so that you not only have some visual landmarks, but can also discover new interesting places. And the cool thing is that all this stuff won’t substitute the actual map system: you’ll continue seeing the standard map in the lower part of the screen, so that you can still have an overall understanding of where you are and where you’re headed.

These improvements will be possible thanks to a technology called VPS (Visual Positioning System), which tries to match the camera feed with the Street View 360° photos of the places around you, detecting visual features and understanding your exact position in space. This, together with the rich information from Maps, will enable a new kind of 3D AR maps.
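
To make the idea a bit more concrete, here is a toy sketch of what “matching the camera feed against geo-tagged Street View imagery” could mean. This is purely my own illustration under simplifying assumptions, not Google’s actual pipeline: each panorama has been reduced offline to a set of binary feature descriptors, and at query time the panorama with the most descriptor matches gives a rough position estimate:

```java
import java.util.List;

public class VpsToySketch {

    // A geo-tagged panorama, already reduced offline to binary feature
    // descriptors (e.g. ORB-like descriptors packed into long arrays).
    record Panorama(double latitude, double longitude, List<long[]> descriptors) {}

    // Hamming distance under which two descriptors are considered a match.
    private static final int MATCH_THRESHOLD = 24;

    // Return the panorama whose descriptors best match the ones extracted
    // from the current camera frame (i.e. a rough location estimate).
    static Panorama localize(List<long[]> queryDescriptors, List<Panorama> database) {
        Panorama best = null;
        int bestScore = -1;
        for (Panorama candidate : database) {
            int score = 0;
            for (long[] query : queryDescriptors) {
                for (long[] reference : candidate.descriptors()) {
                    if (hamming(query, reference) < MATCH_THRESHOLD) {
                        score++;
                        break; // count each query descriptor at most once
                    }
                }
            }
            if (score > bestScore) {
                bestScore = score;
                best = candidate;
            }
        }
        return best; // a real system would then refine the full 6DoF pose
    }

    private static int hamming(long[] a, long[] b) {
        int distance = 0;
        for (int i = 0; i < a.length; i++) {
            distance += Long.bitCount(a[i] ^ b[i]);
        }
        return distance;
    }
}
```

A real VPS obviously does much more than this (approximate nearest-neighbor search, geometric verification, 6DoF pose refinement), but the basic idea of matching features against a geo-referenced database is the same.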

Do you want to know the release date? Well… me too 🙂

Google Chrome is becoming more and more XR-oriented

We have two great pieces of XR news regarding Chrome:

  • Mobile standalone headsets are getting a VR version of Chrome: currently, there’s no Chrome browser for devices like the Lenovo Mirage Solo (apart from using some hacks to install apps from Google Play, as highlighted by Upload VR), and that’s a pity, especially if you want to try WebVR experiences. Google is working hard on this: they are having some problems removing some dependencies on 2D UI elements in the project, but things are going well and a VR version of Chrome should arrive soon;

    Google Chrome running in VR! (Image by Google, from Road To VR)
  • Chrome is going to implement the new WebXR standard APIs and so will soon be able to show not only VR but also AR content. At Google I/O, people were able to try a custom build of the Chromium browser (the open-source version of Chrome) that showed a webpage dropping an Aztec historical artifact on the floor in augmented reality. Everything worked straight from the web, without installing anything, and it ran really smoothly. I’m a huge fan of WebXR: I think it is the key to making XR user-friendly (think about the social experience Mozilla Hubs, which works just by sharing a link), so this news is more than welcome.

    All this runs on the web! (Image by Engadget)
Google Lens updates

Do you remember Google Lens, the app by Google that is able to analyze the camera stream of your phone to detect things like the breed of the dog you’re framing? Well, it is getting some updates:

  • You will be able to summon it easily from the Camera app of your phone: there will be native integration in many Android phones, like the LG G7;
  • Smart Text Selection will detect and interpret the text in the images you shoot and will let you select it, copy it, translate it, and do whatever you want with it. Basically, the phone will be able to read text: this is very powerful for a lot of applications (there is a small code sketch after this list);

    You can just shoot a page of a book and have the text on your phone! (Image by Digital Trends)
  • Style Match will let you frame any kind of shoes, shirt, skirt or whatever, and then find items on the web with similar patterns. The idea is that you see someone in the street wearing a piece of clothing that you like, you take a picture of it, and the system tells you where you can buy visually similar items, that is, items that you may like as well because they are similar to the one you pictured. Engadget reports that in some of their tests, the system even detected the exact item;
  • Real-time results let you use Lens directly on the camera stream: you just move your phone and the system will try to detect everything it can in the scene in real time, without waiting for you to take a picture. This is possible thanks to Google’s optimizations.
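
About that text-reading capability: Lens itself is not something developers can call from their apps, but if you wanted a similar “read the text in this picture” feature on Android, Google’s ML Kit (also presented at this I/O) offers on-device text recognition. A minimal sketch, assuming the firebase-ml-vision dependency is set up and with the caveat that the exact class names have changed across SDK versions:

```java
import android.graphics.Bitmap;
import java.util.function.Consumer;

import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.text.FirebaseVisionTextRecognizer;

public class TextRecognitionSketch {

    // Run on-device text recognition on a photo and hand the recognized
    // string to the caller. The recognition runs asynchronously.
    static void readText(Bitmap photo, Consumer<String> onText) {
        FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(photo);
        FirebaseVisionTextRecognizer recognizer =
                FirebaseVision.getInstance().getOnDeviceTextRecognizer();

        recognizer.processImage(image)
                .addOnSuccessListener(result -> onText.accept(result.getText()))
                .addOnFailureListener(Throwable::printStackTrace);
    }
}
```
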
Google Tour Creator

Do you want to create a virtual tour easily? Do you want to show someone a collection of 360 photos in VR, with some highlighted points of interest? Then Google Tour Creator is what you need.

Google Tour Creator lets you create a virtual tour super easily, directly from the web: you can add your own 360 photos or take them from Google Maps (Google Street View) and create a tour that takes the user through all of them. You can add points of interest to those photos and attach texts or images to those points. In the end, you can publish the tour you’ve just created to Google Poly so that everyone can see it.

People will be able to enjoy your creation using their browser or a Poly-supported headset like Google Cardboard or Daydream.

I’ve given Tour Creator a quick try and I’ve been impressed by how easy it actually is to use. If you want to create a tour using images from Google Street View, it is the tool I would advise everyone to use. If you want to give it a try, you can find it here.

Putting my face at random positions inside the tour can surely amaze the viewer. Jokes apart, look how neat and simple the interface of Tour Creator is
Google Lookout will help the visually impaired

A new app by Google, called Google Lookout, may help visually impaired people by analyzing the world around them and giving them advice, like where they can find some tools or where the furniture is. All that is needed is for the person to wear the phone in a shirt pocket or on a lanyard, so that the camera can frame the world around him/her. Every time it is triggered, Lookout will then give instructions and advice. We don’t know the release date of Lookout, but it seems something that can help a lot of people.


And that’s it with this summary. I really hope that you liked it… and if that’s the case, please share it and subscribe to my newsletter to sustain my magazine!

(Header image by Google)


Disclaimer: this blog contains advertisement and affiliate links to sustain itself. If you click on an affiliate link, I'll be very happy because I'll earn a small commission on your purchase. You can find my boring full disclosure here.

2 thoughts on “The most interesting AR/VR news from Google I/O 2018”

  1. Had missed all the Google Lens updates before. The text recognition seems nice. Even though there are a lot of apps out there capable of outputting text from images, I guess having all Google services available right upon scanning will be cool (Translate, text-to-speech, etc.). I’m a bit confused about Style Match though… I mean, the recognition and matching features are nice, but are we supposed to be taking pictures of random people on the street? Like “hey stranger, please let me take a picture of your cool shoes so Google tries to find similar ones for me”, or even worse, photographing people without their consent. Dunno…

    Regarding Augmented Images, the feature seems really good, but I guess a lot of devs will still be using Vuforia for a while, as it’s more flexible with respect to hardware requirements and does a decent job at image recognition (haven’t tried the floor detection features and stuff though).

    1. I agree that Style Match is weird… but maybe it can make sense if you picture something that you see in a magazine: you see a model in an ad wearing a dress you may like and you look for it. Don’t know. Or it can be a nice pickup line: “hey girl, can I take a picture of you? It is for Google Lens…” 😀

      Vuforia is more flexible and works with both Android and iOS. But the further we go on, the less compelling it will be if Android and iOS natively do everything that it does. We’ll see…

