AWE 2023 Day 1 Highlights: Neal Stephenson on Stage, Qualcomm, Zapbox, and more!
My first day at AWE has just finished, and while I’m sleepy because of jet lag, let me tell you my highlights of the day! Today the showroom was closed, so it was mostly about networking and random demos of devices.
The opening
Ori Inbar kickstarted AWE 2023 with a great keynote. Kudos to him for the preparation of everything: his speech was motivating and fun, and some parts, like the dialogue between him and his holographic avatar, were unexpected surprises.
After that, Ori had a fireside chat with Neal Stephenson, the guy who invented that word we saw on LinkedIn too many times last year. I had never heard Stephenson speak in public before, and I had great expectations about what he could say. Actually, the chat never took off, and it was slow and lifeless like a philosophy lesson or the F1 Grand Prix of Monte Carlo. Neal was very objective in his answers and said that while he may have invented the word "metaverse", he didn't actually invent the concept behind it: he was mostly inspired by the years of research that had already been done in the field by the time he wrote Snow Crash. Anyway, his book is not to be taken as a manual on how to build the metaverse… it is just a book. He also said that only now are we starting to see some glimpses of what the metaverse may be, and that it is too early to understand if it is heading towards a utopia or a dystopia. As for Lamina1, he said that he founded it because he wanted to see good creators get properly rewarded.
The impression I got of Neal Stephenson in general is that he's a very shy and intelligent person. Not the guy who can rock the stage, but definitely someone who can provide interesting insights. And thanks to my friend Tony Parisi, I even managed to meet him for 30 seconds and shake his hand. Which was great!
Qualcomm
After the keynote, Hugo Swart from Qualcomm took the stage to talk about the amazing work his company is doing in XR. He reminded us of a few cool things that are happening, like:
- Digilens and TCL glasses becoming compatible with Snapdragon Spaces
- Goertek and Niantic working on reference designs based on Qualcomm Snapdragon AR2
- Lenovo VRX being the first device implementing MR capabilities thanks to Snapdragon Spaces
- A collaboration with Adobe to bring the USD format to Spaces
- A collaboration with Microsoft to bring MRTK to Spaces
Then he announced two interesting things.
Oppo MR headset
The first one is a new mixed-reality headset by Oppo. I had heard a rumor about this device during my latest trip to China, and it's cool to see that it is finally happening. From the specs shared during the presentation, it looks like a device more powerful than the Quest: the 2160×2160 per-eye resolution, for instance, is even higher than that of the Quest Pro. But I've learned over the years that headsets have to be tried before being judged, so let's just say that it "looks good on paper" for now. What surprised me in the feature list is the presence of a heart rate sensor, and it is not exactly clear how it is installed on the device (is it in the handles of the controllers? Or does it require additional hardware?)… but it sounds very promising for fitness and healthcare applications.
It will be available as a Snapdragon Spaces Development Kit in the second half of 2023.
Dual Render Fusion
Dual Render Fusion is a new feature of Snapdragon Spaces that lets smartphone applications treat the AR glasses connected to the phone as an additional viewing display. The application is still 2D on the smartphone screen, but in parallel it can show something else in AR (see the conceptual sketch after this list). The AR content may be:
- The actual experience meant to be seen: in this case, the smartphone acts just as a "controller" of the AR experience. For instance, in a cooking experience they teased, you select on the smartphone which video of the recipe you want to watch, and then you watch it in AR in front of your eyes while you cook. The main experience is cooking while checking the recipe videos in front of your eyes and interacting with them hands-free; the phone app just acts as the UI through which you get there
- Additional content that enhances the actual experience on the phone. One of the first video demos shown on stage was about using Google Maps on the phone and having augmentations show you, in AR, a 3D reconstruction of the places you are looking at. You can still use Maps on the phone alone without AR, but of course, the experience with the 3D augmentations is nicer and adds value to the 2D app.
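To make the concept more tangible, here is a minimal sketch in Kotlin. To be clear, this is not the actual Snapdragon Spaces API (which you would normally use through an engine like Unity): it only illustrates the "one app, two outputs" pattern using stock Android classes (DisplayManager + Presentation), under the assumption that the glasses show up to the system as a secondary presentation display.

```kotlin
// Conceptual sketch only — NOT the Snapdragon Spaces API.
// The phone keeps its normal 2D UI, while a secondary display
// (here standing in for the AR glasses) gets its own content.

import android.app.Activity
import android.app.Presentation
import android.content.Context
import android.hardware.display.DisplayManager
import android.os.Bundle
import android.view.Display
import android.widget.TextView

// Content rendered on the secondary display (the "AR" side).
class GlassesPresentation(ctx: Context, display: Display) : Presentation(ctx, display) {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // In a real Dual Render Fusion app this would be a 3D scene;
        // here it is just a placeholder view.
        setContentView(TextView(context).apply { text = "Recipe step shown in AR" })
    }
}

class PhoneActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // The phone shows its usual 2D touch UI (menus, buttons) here...
        // setContentView(R.layout.phone_ui)

        // ...while any connected presentation display gets separate content.
        val dm = getSystemService(Context.DISPLAY_SERVICE) as DisplayManager
        dm.getDisplays(DisplayManager.DISPLAY_CATEGORY_PRESENTATION)
            .firstOrNull()
            ?.let { glasses -> GlassesPresentation(this, glasses).show() }
    }
}
```

The clever part of the design is exactly this split: the phone keeps the familiar 2D touch interface as the input surface, while the glasses become a pure output surface, so developers don't have to reinvent their whole input scheme for AR.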
My demo with Qualcomm
In the afternoon, I had the opportunity to meet with Qualcomm people, and I finally had the occasion to speak in person with Macey and Hugo, which was great. Katie gave me a demo of Dual Render Fusion in Snapdragon Spaces.
I tried the kitchen experience, and it was interesting to use the phone to select what I wanted to see in AR. I found it to be an interaction that "just works" and that treats the AR device as a complement to the smartphone experience, which is good. I didn't have to mess around with floating AR menus and hand tracking: I just selected with my fingers the kind of recipe I wanted to see in the phone app, and then those instructions appeared in front of my eyes in AR. Easy and efficient. I'm starting to see a lot of potential in the UX that can be created with Dual Render Fusion.
Zapbox
I had the opportunity to try a demo of the Zapbox, the affordable AR viewer manufactured by Zappar. I'm going to write a dedicated hands-on article about it, but for now, let me give you a few hints about my experience with it.
The biggest pro I found is the controller tracking. I wasn't expecting it to be so stable: the controllers are perfectly tracked, and I was able to do whatever I wanted with them very naturally. Not bad for two cylinders that are tracked thanks to a marker drawn on top of them. The biggest con I found is the passthrough visual quality. The displayed passthrough appeared slightly distorted, with wrong depth, slightly blurred, and a bit laggy. I'm pretty sure it can make some people sick. So the tracking is approved, while the optics require some work.
The Monocle
Lastly, I had the opportunity to quickly check out an open-source AR monocle. It is a devkit composed of a monocle with a clip that lets you attach it to the frame of your glasses. The monocle contains a small AR display, a camera, and two touch sensors around it. It is transparent, so you can see its circuits.
The idea of the creators is to release to the community a device that is fully open-source and can be used to experiment with AR. The monocle became pretty popular a few weeks ago when some students playing with it connected it to ChatGPT, simulating the use of AI and AR to give people live suggestions during job interviews or dates.
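Just to give an idea of how simple the core of that viral demo probably was, here is a hedged sketch in Kotlin. This is not the Monocle SDK: it assumes you already have a transcript of what the other person said and an OpenAI API key, and it only shows the round trip to ChatGPT; on the device, the returned suggestion would then be drawn on the monocle's display.

```kotlin
// A hedged sketch, NOT the Monocle SDK: given a transcript of the
// conversation, ask ChatGPT for a short suggestion to show the wearer.

import java.net.HttpURLConnection
import java.net.URL

fun suggestReply(transcript: String, apiKey: String): String {
    // Escape backslashes and quotes so the transcript can sit in the JSON body.
    val safe = transcript.replace("\\", "\\\\").replace("\"", "\\\"")
    val body = """
        {"model": "gpt-3.5-turbo",
         "messages": [
          {"role": "system", "content": "Suggest a short, natural reply."},
          {"role": "user", "content": "$safe"}
         ]}
    """.trimIndent()

    val conn = URL("https://api.openai.com/v1/chat/completions")
        .openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.setRequestProperty("Authorization", "Bearer $apiKey")
    conn.setRequestProperty("Content-Type", "application/json")
    conn.doOutput = true
    conn.outputStream.use { it.write(body.toByteArray()) }

    // The response is JSON; a real app would parse
    // choices[0].message.content and render it on the monocle's display.
    return conn.inputStream.bufferedReader().use { it.readText() }
}
```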
When not used, the Monocle can sit inside its cool charging case.
The device costs $350, and people from the company told me it's selling pretty well. Overall, it is an interesting device for makers, and I'm curious to see what the community will build with it.
And that’s it for my first day at AWE! As I’ve told you, with the showroom still closed, I haven’t been able to test many things yet. But it was already an interesting day for me!
Disclaimer: this blog contains advertisement and affiliate links to sustain itself. If you click on an affiliate link, I'll be very happy because I'll earn a small commission on your purchase. You can find my boring full disclosure here.