All you need to know about Android XR
After teasing it for so long, Google has finally announced Android XR, the version of Android for XR devices, and gave hands-on demos of the headset it is building with Samsung and Qualcomm. It is huge news and there is a lot to unpack, so I decided to write this article to summarize the most relevant features of Android XR from all points of view (hardware, content, development, etc.), making sure you don't miss any detail about it! As with all my summaries, I won't just report the news: at the end of the article, I'll also give you many links where you can dig deeper.
Android XR
Android XR is the Android version dedicated to XR devices. It can work with any type of XR glasses, whether they are mixed reality visors (a la Apple Vision Pro), AR glasses (a la XREAL), or smartglasses (a la Ray-Ban Meta). It is tightly coupled with the rest of the Google ecosystem, both mobile and AI. Google hopes it will become the go-to open operating system that XR device manufacturers choose as the runtime for their headsets and glasses.
Compatible devices
The first device to run Android XR will be the mixed-reality headset that Google is building with Samsung and Qualcomm, codenamed Project Moohan. After that, other devices will come, and three manufacturers have already committed to releasing Android XR-powered headsets: Sony, XREAL, and Lynx. XREAL is the only brand of the three that specifically works on AR glasses, so it may be the company releasing the first pair of AR glasses that work with Android XR. Regarding smartglasses, there are many rumors that Samsung is also working on smartglasses with Google and Qualcomm, but no prototype is available yet. For this reason, Google has demoed the smartglasses integration of Android XR using its internal AI-powered smartglasses project, codenamed Project Astra.
The few journalists selected for a preview of Android XR tried Project Moohan, a pair of unnamed monocular smartglasses, and a pair of unnamed binocular smartglasses. I will talk about the hands-on impressions in a dedicated chapter; for now, let me give you some details about the Samsung headset we have all been waiting for.
Project Moohan
We have been craving details about the Samsung MR headset for months, and now Google and Samsung have finally given some demos, so we know a bit more about it. The device is still being finalized, so unfortunately Google has not shared many specifications: we have no data about resolution, field of view, lenses… literally nothing. The journalists trying it were even forbidden from taking pictures, so the only images we have are an official render and some short videos of the headset in use, shared by Google itself, where the headset is seen from a distance.
Anyway, from the hands-on sessions, we were still able to get some interesting information about this device:
- Design: the headset looks like the child of a Vision Pro and a Quest Pro. It seems to have the fitting mechanism of the Quest Pro, but the front part with the display seems taken from the Vision Pro. It looks quite good. Like the Quest Pro, the headset leaves your peripheral view of the real world open, but this can be covered with some magnetic blinders
- Visuals: the displays have a high resolution and the visuals are very good. The headset uses pancake lenses, so it can have a compact form factor. The FOV is slightly smaller than the Quest 3's, according to Ben Lang
- Tracking: 6 DOF inside-out tracking guarantees the headset is aware of its position. Eye tracking and hand tracking are supported, too. Downward-facing cameras ensure hands are tracked even when they are at rest
- Controllers: the headset only works with eye and hand tracking for now. Before the release, Samsung will also provide some controllers, but it is not clear whether they will be shipped with the headset or sold separately
- Chipset: this is the only technical detail that has been officially declared. We know it's a Qualcomm Snapdragon XR2+ Gen 2, the most powerful XR chipset currently developed by Qualcomm
- Comfort: no one complained about the fit during the roughly 30-minute test sessions, meaning that it is at least decent. This makes me think it is better than the fit of the Quest Pro, also known as the Forehead-Destroyer. The headset features automatic IPD adjustment, but seems to require prescription lens inserts for those who wear glasses
- Audio: no remarks have been made about it, so I guess it features the usual decent integrated speakers of standalone headsets
- Passthrough: the mixed reality offered by this device seems great, considering that Ben Lang stated that “passthrough cameras appear to have a sharper image than Quest 3 and less motion blur than Vision Pro”. There is no reverse passthrough, so bystanders cannot see the eyes of the person wearing the headset
- PCVR: Virtual Desktop is already officially an available app for Moohan, so it will be possible to play PCVR content on this headset
- Shareability: The headset is meant to be shared with friends, in stark contrast with the Apple Vision Pro, which is mostly meant for use by a single person.
All in all, the articles seem to describe it as a good pro headset.
There is no information on the price (the latest rumor talked about circa $2,000), and availability will be sometime in 2025.
Available content
Not much content has been announced for Android XR yet, but we have some information about a few of the available apps.
First of all, similarly to what Apple did with iPad apps and the Apple Vision Pro, almost all 2D apps on the Google Play Store will be usable from day 1 on Android XR devices. This is pretty huge because it makes Android XR headsets compatible with millions of apps (the estimates I have found for the number of apps available on Android range from 1.5M to 3.3M). In particular, this also makes the headset automatically compatible with video streaming apps like Netflix and productivity apps like Slack, giving it immediate utility. Of course, as on the Vision Pro, developers can opt out of this XR compatibility, but I guess very few will do that.
Google is also bringing its own suite of apps to Android XR, giving them some extra XR-related features. Many of the reviewers, for instance, were able to try YouTube, where they could watch 3D, 180° VR, and 360° videos; Google TV, where they could watch movies; and Google Photos, where they could browse their pictures. There is also a cool feature that can automatically convert both photos and videos from 2D to 3D, making all our memories more immersive.
They also tried Google Maps, through which they were able to explore the map of the world in 3D, similarly to what Google Earth VR allowed in the past. Google Maps also lets them enter specific places (like restaurants) in immersive VR rendered with Gaussian Splats (like Varjo Teleport). It is not clear if this required custom scanning of the places or if it was done using just the existing Street View pictures of each place (my bet: for the demo, they rescanned the place for the best results, but the mission is to re-use the images they already have). On AR/smart glasses, Google Maps can instead show you a little map in the lower region of your vision, so if you are getting lost while following directions, you can always get a glimpse of the map without having to take your phone out of your pocket (I want this now!).
As for the immersive content, Owlchemy Labs (which is owned by Google) and Resolution Games (which makes XR content for all platforms ranging from the Vision Pro to refrigerators) are already on board: Job Simulator, Vacation Simulator, and Demeo will be available for Android XR. Owlchemy Labs will also develop “Inside [Job]”, the introductory experience that teaches people who have just turned on their first Android XR MR headset how to interact with it (it will be something similar to Oculus First Contact).
Google is of course looking to enrich its content catalogue, so I guess it is trying to partner with many other development studios to have much more content at the launch of Project Moohan.
UI and Interactions
Android XR is an open ecosystem, so it embraces multimodal interactions: users can interact with devices by many means, including controllers, voice, hands, and eyes. Even though Project Moohan supports both eye and hand tracking, the system does not force you to use the same gaze-and-pinch mechanic as the Vision Pro (otherwise, Android XR would be limited to headsets supporting eye tracking). You can choose between that modality and a classical ray cast that only involves hand tracking. The fact that the system supports both hand tracking and controllers is a great thing: on mixed reality devices, users can choose the best input method for the particular application they are going to launch (e.g. controllers for playing a game, hands for watching a video). Google anyway aims to have many interactions performed via queries to the AI, as we'll see in the next paragraph.
The UI on mixed reality headsets like Project Moohan seems like a mix of the ones designed by Apple and Meta. The starting menu of Moohan is like the home menu of the Vision Pro, while the windows seem more inspired by Meta's UI. Every window can be grabbed, moved in space, and resized. The browser can also open multiple tabs in different windows.
AI features
Headsets and glasses based on Android XR have Google Gemini built in as an assistant to the user. But it is not the classical assistant that you invoke with a command like “Hey Gemini”, ask a single question (maybe about an object in front of you in real life), and that's it. It is a full assistant for your mixed reality experience: it is potentially always on, it can analyze both your physical and your virtual reality, and it can support you in everything you want.
The assistant can help you launch an application. If you are looking at a map of a certain city in virtual reality, you can ask for information about the place you are looking at in a very natural way (you don't have to say “Please find me more information about the place I'm looking at”; you just invoke the assistant and say “oh, tell me more about this building”). The same is true if you are looking at a building in real life with AR glasses: Gemini can tell you what it is. Gemini can automatically open Google Maps if you are asking for info about a certain restaurant. It also remembers your previous interactions (the working memory is currently about 10 minutes), so you can ask for something contextual to your previous conversation: you can say “Oh Gemini, find me some places similar to the restaurant I liked 1 minute ago” and it will just do as expected. It can also re-arrange the windows in your MR workspace. When you speak with people who speak a foreign language, Gemini can translate the conversation on the fly: whatever language the other people are speaking, you will always read the English translation of what they are saying. Gemini also remembers what is in your room, so if you lose your keys, it knows where they are. And so on: it can do many things. It is your Jarvis that is always on, ready to help you with whatever task you want, in VR, MR, and AR.
When asked what the main feature of Android XR is, people at Google said it is the AI assistant, because it is so useful and so unique in the XR space (both the Meta Quest and the Apple Vision Pro have only some kind of lightweight AI integration). And the journalists' reviews completely agree: everyone praised the AI assistant, how incredible it was, and how well the integration between MR and AI was done.
Google found its unique angle on XR: for Meta, it was gaming and affordability; for Apple, it was premium quality and movies; for Google, it is AI. The focus on AI is so strong that every article I've read about this headset talked more about AI than XR. The demos, like the one from the event that I'm linking below, were again mostly about AI. Google is truly betting on the mix between MR and AI, not just on MR.
Privacy implications
As a technologist, I'm amazed by the idea of having an always-on assistant that can support me in every task. But as an advocate of privacy, I'm scared of having Google's eyes always on me. What about Gemini examining my view while I'm looking at some confidential documents from my company? What about when I'm having an intimate moment with my special one while wearing AR glasses? And what about “that friend of mine” who watches VR porn? It is a bit unsettling to have an assistant that is always on, at every moment, trying to learn what is useful to me. Yes, you can always opt out by pausing the assistant in these delicate moments, but from experience, I can tell you that it is very easy to forget these services are running in the background, and so forget to disable them.
And while we are all scared about Meta's management of privacy, we should recall that Google, with all the data it collects through Search, Chrome, and all its other products, is a company harvesting a huge amount of data from us. Sure, Google has had fewer scandals and seems to have been better at managing people's data, but still, it's an ad company, and it earns money by monetizing our data.
Hands-on
I’ve read multiple hands-on articles on Android XR and many of them report similar feedback. All the journalists tried three types of devices: Project Moohan, a pair of monocular smartglasses, and a pair of binocular smartglasses.
The monocular smartglasses were the device that drew the least interest. As for the binocular ones, everyone's favorite feature was Google Maps, with a little map of where you are that you can always glance at. Project Moohan was mostly appreciated because it feels like a pro headset.
But in general, everyone was just astonished by the AI capabilities of these devices. Gemini was always able to support the users, whether in tasks in the virtual world, in the real world, or in a mix of the two. And the fact that it remembered everything it had seen or talked about was considered impressive. For instance, the assistant was able to recall the book a journalist had behind him, even though he hadn't looked at that book specifically before.
It has to be said, anyway, that the demos were completely scripted and even happened in a predefined room. The journalists had few opportunities to try to “break” the headset. This means they just took part in a “show” meant to highlight the best parts of the headset and the AI assistant and hide the things that were not working. I'm pretty sure the experience people will have once the headset is out in the wild will be quite different.
How to develop content for it
It is possible to build content for Android XR in various ways:
- Developers of 2D apps have their applications automatically made compatible with the headset, unless they opt out
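To give an idea of how this compatibility-by-default model could surface to developers, here is a sketch of a manifest-level opt-out declaration. Be warned that this is purely illustrative: the property name below is hypothetical and not confirmed by Google, since the exact opt-out mechanism had not been documented publicly at the time of writing; check the official Android XR developer pages for the real flag.

```xml
<!-- HYPOTHETICAL example: the property name below is NOT an official
     Android XR API. It only illustrates the kind of AndroidManifest.xml
     declaration a 2D app could use to opt out of XR compatibility. -->
<application>
    <property
        android:name="android.window.PROPERTY_XR_COMPATIBILITY"
        android:value="false" />
</application>
```

The key point is that, as on the Vision Pro, the default is inclusion: a developer would have to take an explicit step like this to keep their app off XR devices.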
- There is a Jetpack XR SDK for Android developers who want to use their standard development tools (Android Studio, Kotlin, etc.) to create 2D apps with XR effects, or fully-fledged XR apps. There is also a new emulator for testing the experiences
- Unity developers can use the “Unity OpenXR: Android XR” package together with the usual tools, like the XR Interaction Toolkit and AR Foundation, to create XR experiences. You need the latest Unity 6 to develop for this new platform. There is no mention of Unreal Engine yet: I guess the litigation over Fortnite did not help Epic Games gain Google's sympathies
- Native developers can exploit the OpenXR 1.1 compatibility of the Android XR runtime to create applications
- Web developers can use full-featured frameworks like three.js, A-Frame, or PlayCanvas to create virtual worlds in WebXR
- Qualcomm is also releasing soon a plugin to make it easier to create content that is compatible with both Android XR and its Snapdragon Spaces ecosystem.
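As a taste of the WebXR route mentioned above, here is a minimal A-Frame page that runs in any WebXR-capable browser, so presumably also in the browser of Android XR headsets (this is a generic WebXR example, not Android XR-specific code):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- A-Frame: a declarative WebXR framework built on top of three.js -->
    <script src="https://aframe.io/releases/1.6.0/aframe.min.js"></script>
  </head>
  <body>
    <!-- A minimal scene: a colored box floating in front of the user.
         A-Frame adds an "Enter VR" button that starts the immersive
         WebXR session on compatible devices. -->
    <a-scene>
      <a-box position="0 1.5 -2" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

The appeal of this route is that the same page works on the desktop, on mobile, and in headsets without any porting at all, at the cost of the performance overhead of the browser.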
Porting existing content from other platforms to Android XR is very easy if it was already made with cross-platform support in mind. Owlchemy Labs ported its existing Unity games in a ridiculously short time: if your application already uses the XR Interaction Toolkit, you will probably have to change almost nothing to port it to Android XR. The same is true for native OpenXR applications: Virtual Desktop's Guy Godin ported his product very easily, too. This means there is a new opportunity for developers to earn money by releasing their products on a new platform.
Guy Godin released the following statement to Upload VR:
It supports a majority of the same OpenXR extensions that Quest/Pico supports today. Bringing my native OpenXR app over took only a few hours and the basics just worked out of the box. I think it’s refreshing to work with a platform that wants to collaborate with developers rather than one that tries to block and copy us. Grateful to have more options for consumers shortly and I’m very excited to bring the best PC streaming solution to Android XR.
He's clearly hinting at Meta trying to copy and eliminate his solution with Air Link. The fact that there are more hardware players out there is good for us developers: we have more platforms to sell our products on, and we avoid the risk of a monopoly where a single company can do with us whatever it wants. We now have three big companies competing for our products, and this opens up new possibilities for us, including more funding.
Google is offering dedicated bootcamps for developers interested in this new platform: during these bootcamps, it is possible to port an existing piece of content to Android XR with the support of Google technicians. This mirrors a similar initiative that Apple ran after the launch of the Vision Pro. You can find the link to register for the bootcamps in the references at the end of this article.
Release date
Android XR and the compatible headsets, starting with Project Moohan, will be available in 2025. The Android XR SDK developer preview, however, is already available now, so developers can start building content for it.
Editorial
I’m pretty excited about the launch of Android XR for the following reasons:
- A huge company like Google is entering the space, validating XR even more
- Google is offering a new open alternative to Meta. This will create competition between the two companies, and we users and developers are going to benefit from it. Google has made a genius move: it knew it could not compete with Meta on the content catalog or the deep XR features, so it is betting on two different things in which Meta is pretty weak: integration with mobile Android apps (all the apps on Google Play) and AI assistants
- The fact that it is Android allows us to use on the headset all the apps we are used to using on the phone, like Google Maps, Google Docs, or YouTube. For sure, Google will create a cohesive ecosystem between mobile phones and MR headsets. This is something that Meta can't do
- Google was not arrogant like Apple. Apple entered the field as if it had invented “Spatial Computing”, ignoring what the others were doing and repeating their same mistakes (like making a headset that is very front-heavy). Google entered by copying the best features from its competitors and creating a great device and operating system
- The AI assistant, if it works smoothly like in the demo, can be truly helpful and make the interaction with the headset very natural
- The product is very interesting and well-made
There are a few things about which I’m not happy, though:
- The announcements and the demos were so focused on AI that we all lost sight of the true XR features. For instance, in these early tests, there was no mention of immersive features like room scanning, avateering, shared experiences, etc. It is not clear how advanced Android XR truly is on the “XR” part. In my opinion, Horizon OS is still ahead of it
- An always-on AI assistant is pretty scary on the privacy side
- Google has a long tradition of killing interesting projects and this may not be an exception.
Summing all of this up, I think Android XR is still a great thing for the whole ecosystem, and I can’t wait for the Samsung headset to launch… and you?
References
These are some of the best resources I have found about this reveal of Android XR:
- Official material:
- Google blog post about Android XR: https://blog.google/products/android/android-xr/
- Google blog post about Android XR SDK: https://android-developers.googleblog.com/2024/12/introducing-android-xr-sdk-developer-preview.html
- Google’s vision video about Android XR: https://www.youtube.com/watch?v=Pn5uG1ys-pE
- Qualcomm’s official blog statement about the Android XR launch: https://www.qualcomm.com/developer/blog/2024/12/qualcomm-google-collaborate-to-launch-android-xr-platform
- Video of the demo performed by Google at the launch of Android XR: https://twitter.com/bilawalsidhu/status/1867250466674888748
- Announcement articles:
- Google announces Android XR (The Verge): https://www.theverge.com/2024/12/12/24319538/google-android-xr-ar-vr-smart-glasses
- Google announces Android XR (Road To VR): https://www.roadtovr.com/google-android-xr-announcement/
- Controllers for Project Moohan arriving in 2025: https://www.roadtovr.com/samsung-android-xr-motion-controllers-pc-vr-streaming/
- Headsets:
- Lynx, Sony, and XREAL working on Android XR headsets: https://www.uploadvr.com/sony-lynx-xreal-android-xr-devices/
- Content:
- Vacation Simulator, Job Simulator, and Demeo available on Android XR: https://www.uploadvr.com/job-simulator-vacation-simulator-and-demeo-will-be-available-on-android-xr-2/
- Inside [Job] is the tutorial game experience for Android XR headsets: https://www.uploadvr.com/inside-job-aims-to-onboard-android-xr-users-with-a-new-mixed-reality-game/
- Hands-on:
- Video hands-on by Scott Stein (CNET): https://www.youtube.com/watch?v=Uoe5fdw7AXs
- Textual hands-on by Scott Stein (CNET): https://www.cnet.com/tech/computing/i-tried-google-and-samsungs-next-gen-android-xr-headsets-and-glasses-and-the-killer-app-is-ai/
- Textual hands-on by Victoria Song (The Verge): https://www.theverge.com/2024/12/12/24319528/google-android-xr-samsung-project-moohan-smart-glasses
- Textual hands-on by Ian Hamilton (Upload VR): https://www.uploadvr.com/samsung-android-xr-headset-ships-in-2025-hands-on/
- Textual hands-on with Project Moohan by Benjamin Lang (Road To VR): https://www.roadtovr.com/samsung-android-xr-vr-headset-project-moohan-hands-on/
- Resources for developers:
- Google’s page on how to develop for Android XR: https://developer.android.com/xr
- Google’s page on Android XR SDK: https://developer.android.com/develop/xr
- Unity blog on how to start developing for Android XR: https://unity.com/blog/6-ways-to-start-building-for-android-xr-with-unity-6
- Unity Android XR package documentation: https://docs.unity3d.com/Packages/[email protected]/manual/index.html
- Android documentation about Unity development with Android XR: https://developer.android.com/develop/xr/unity
- Road To VR’s article about development for Android XR: https://www.roadtovr.com/unity-supports-android-xr-quest-content-ports/
- Page to apply for the Android XR development boot camps: https://d.android.com/develop/xr#bootcamp
(Header image by Google)
Disclaimer: this blog contains advertisement and affiliate links to sustain itself. If you click on an affiliate link, I'll be very happy because I'll earn a small commission on your purchase. You can find my boring full disclosure here.