How to start developing with camera access for Meta Quest 3 on Unity 6


Today I want to talk about one of my favorite topics: passthrough camera access. Meta has just released the APIs to let developers access the frames grabbed by the front cameras of the Quest so that developers can craft applications that can understand the environment around the user. In this tutorial, I will explain step-by-step how you can create a project using Passthrough Camera Access of the Meta SDK, both starting from the samples and starting from a blank Unity 6 project. Exciting, isn’t it?

This camera access tutorial

I am a big fan of passthrough camera access because it lets us developers bridge mixed reality with artificial intelligence. You can, for instance, apply AI/ML algorithms to the images grabbed from the front cameras of the device and create applications like this AI-powered Pictionary game I developed for the Pico 4 Enterprise:

A little game I did with AI+MR on Pico

But starting with this new Meta camera access functionality, especially with the new Unity 6, is not easy because the SDK is still experimental and there are some problems here and there that forced me to bang my head on the wall a few times. So I prepared this guide to save you a few headaches and get you started with the wonderful world of passthrough camera access in a very short time!

You may wonder what application I will guide you through building in this tutorial… well, as the Unity Cube guy, of course, I will lead you to create a cube textured with the passthrough frames!

How to get started with Meta camera access – Video

As I usually do with my most important tutorials, I’ve also prepared a video version of this article, where I explain everything you need to know about camera access for Meta Quest in Unity 6 with my sexy Italian accent. I recommend you watch the video if you are a beginner with Unity or the Meta SDK, because in the video you can clearly see the whole process step by step as I do it.

I think I did a pretty cool video about this topic

If you are instead a bit knowledgeable about XR development in Unity and you prefer texts over videos, keep reading for the wall-of-text tutorial!

Before we start…

The SDK for camera access is experimental and is going to change a lot in the next weeks, with Meta ironing out its bugs and improving its usability. I’m going to update this article from time to time, but some of the information you may find here may be outdated when you are reading it. The general concepts will probably stay the same, but some details are changing. So adapt what I’m telling you here to the situation you find when you use this SDK.

How to get started with camera access – prerequisites

Before starting, let’s see what are the prerequisites for working with camera access on Meta Quest:

  • You must have a Meta Quest 3 or Meta Quest 3S. At the time of writing, the other headsets are not supported
  • Your Quest must have at least the runtime v74. Update your device if you have a previous version
  • You must have a recent version of Unity. Meta suggests Unity 2022.3.58f1 or 6000.0.38f1. A slightly different version should still work, though. I did everything with 6000.0.34f1

Notice that at the time of writing, the feature is marked as experimental, so you cannot publish on the Horizon Store any application employing it. It is not the first time that Meta has released a new feature as experimental, and usually it lifts the publishing ban in a few months, so I expect that later this year it will be possible to publish applications using camera access on the store.

The foundations of camera access on Meta Quest 3

Camera Access for Quest 3 has been built using the classical Android tools used to access the camera frames on phones. In particular, it has been built on top of the Camera2 class. Even though Quest is actually an Android system, these features were blocked before, because Meta prevented us from accessing the camera frames. Now they have been re-enabled, so you can build native applications directly querying the Android system functionalities.

In Unity, almost no one uses the Camera2 class directly, because it would require messing with JNI calls, which are always a pain. We Unity developers usually employ the WebCamTexture class, which is able to grab the camera frames into a dynamic Unity texture. On Quest, with the new passthrough APIs, it is possible to follow exactly this procedure and get a WebCamTexture of the frames of the left or right front camera of the device. Under the hood, the WebCamTexture will query Camera2.
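To give a concrete idea of how standard this is, here is a minimal sketch using only plain Unity APIs: it lists the available cameras and streams the first one into a texture. Nothing here is Quest-specific; the Meta samples add the proper device selection and permission handling on top of this:

```csharp
using UnityEngine;

// Minimal sketch: enumerate the available cameras and stream the first one.
// On Quest 3, once the camera permission is granted, the passthrough cameras
// show up as regular WebCamDevice entries.
public class SimpleWebCamExample : MonoBehaviour
{
    private WebCamTexture _webCamTexture;

    void Start()
    {
        foreach (var device in WebCamTexture.devices)
            Debug.Log($"Found camera: {device.name}");

        if (WebCamTexture.devices.Length > 0)
        {
            _webCamTexture = new WebCamTexture(WebCamTexture.devices[0].name);
            _webCamTexture.Play();

            // Show the camera stream on this object's material
            GetComponent<Renderer>().material.mainTexture = _webCamTexture;
        }
    }

    void OnDisable()
    {
        if (_webCamTexture != null)
            _webCamTexture.Stop();
    }
}
```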

Once you have a WebCamTexture, you can do whatever you want: you can show the texture on screen, or grab the frame pixels and analyze them (and maybe send them to an AI algorithm). It becomes standard Unity knowledge, for which you can find many tutorials online. Remember anyway that every time you grab the pixels from the Texture and move them to your “CPU” memory to analyze them, you introduce a delay because moving data from GPU to CPU is a kinda slow operation. I’ll give a few hints about this later on.

A good thing about this approach by Meta is that it is the same one used on Android phones and it is the same one that Google will follow with Android XR according to what they told me. So it is going to be easy to create cross-platform applications.

Another good thing about this approach is that it gives control to the user: accessing cameras on Android always requires permission from the user, and this is what happens on Quest, too: Meta has created a special permission you have to request from the user to access the camera frames. The user must trust you, otherwise you will not be able to get the camera frames. Giving the user control over his/her privacy is always the best approach and personally, I’m a big fan of it.
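For reference, this is a minimal sketch of how such a runtime permission request looks with Unity’s standard Android permission API. The permission string is the one we will add to the manifest later in this article; note that the Meta samples already ship a ready-made permission manager that does this for you:

```csharp
using UnityEngine;
using UnityEngine.Android;

// Minimal sketch: ask the user for the headset camera permission at runtime.
public class CameraPermissionRequester : MonoBehaviour
{
    private const string HeadsetCameraPermission = "horizonos.permission.HEADSET_CAMERA";

    void Start()
    {
        if (!Permission.HasUserAuthorizedPermission(HeadsetCameraPermission))
        {
            var callbacks = new PermissionCallbacks();
            callbacks.PermissionGranted += _ => Debug.Log("Camera permission granted");
            callbacks.PermissionDenied += _ => Debug.LogWarning("Camera permission denied");
            Permission.RequestUserPermission(HeadsetCameraPermission, callbacks);
        }
    }
}
```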

Getting Started with the Passthrough Camera API Samples

The recommended way to get started with the Passthrough Camera API, which is the one suggested by the official documentation, is to start from the official samples provided by Meta. Meta suggests taking this sample project called Unity-PassthroughCameraApiSamples, which is available on the GitHub account of the Oculus Samples (yeah, the Oculus name dies hard), and modifying it according to your needs. Later on, I’ll tell you how to get started from a blank project, but for now, let’s dig into the suggested way.

The Unity-PassthroughCameraApiSamples project provides you not only with facilities to access the camera, but also with various practical use cases of the PCA (Passthrough Camera Access) feature: there is, for instance, a sample to estimate lighting, another one to go from a 2D pixel on the camera frame to the corresponding 3D world position, and even one with object detection that runs a machine learning model locally through Unity Sentis (this is a pretty cool one, you should try it). The code of the samples is well-commented and there is also an extensive Readme.md file.

This sample project is thus a great way to get started learning how to use camera access. You can learn from the existing code, you can learn even more by modifying it, and you can even copy-paste some of its code into your own project to speed up your development. Having a working sample is also good because it can act as a “ground truth”: Meta is giving you something that works, so if your own project using PCA is not working, you can compare it with the official sample project, spot the differences, and find your bug.

(Btw, apart from the official samples, there is also another cool sample repository made by Roberto Coviello, who works at Meta, that gives you other interesting examples of things that you can do with camera access, like QR code scanning. You should dig into it after you have checked the official samples)

I have to tell you, anyway, that at the time of writing, there are some little problems that prevent this project from running smoothly on Unity 6. Theoretically, the project should run out of the box, but this is not the case. So let me tell you how to get, fix, and run this project on the latest version of Unity.

Building the Passthrough Camera API Samples

The procedure to get and build the samples is:

  • Clone the repository of the samples using your favorite Git client
  • Open Unity Hub
  • Select Add -> Project From Disk and select the folder where you just cloned the project
  • The project should appear as one to be opened with Unity 2022. Click on the label showing the project’s Unity version, and in the window that appears select your version of Unity 6 and also the Android build platform. This is necessary because this tutorial is about using Unity 6
The button to press to change the Unity version with which to open the project
How to select Unity 6 and Android platform
  • Confirm that you want to change the version in any popup that you may see
  • Let Unity open the project
  • Fix the Activity name for Unity 6:
    • Click on the “Meta XR Tools” dropdown on the top of the editor window, and from the dropdown list select Project Setup Tool
The Meta XR Tool that checks the project is ok
    • In the following Project Setup Tool window, you should be presented with an error saying something like “Always specify single GameActivity…”. Click on the Fix button associated with it, the error should disappear
    • If there are other errors, fix them as well
    • Close the Project Setup Tool window
  • Fix the Android Manifest file (this fix has been suggested to me by the amazing Takashi Yoshinaga):
    • In the menus, select Meta -> Tools -> Update AndroidManifest.xml
    • If there are popups asking for confirmation for overwriting the file or things like that, confirm the operation
How to update the Manifest File of the project
  • Fix the initialization timing for the camera texture: the camera SDK should give the system more time to initialize before playing the WebCamTexture
    • Open the script WebCamTextureManager.cs, located in \Assets\PassthroughCameraApiSamples\PassthroughCamera\Scripts\
    • Locate the OnEnable function and substitute the last lines that start the camera initialization with a coroutine that adds a two-frame delay before the initialization. A minimal sketch of how the modified code can look is shown below; the names of the sample’s existing members (like the InitializeWebCamTexture coroutine) are the ones at the time of writing, so adapt them to the version of the file you have:
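```csharp
// Inside WebCamTextureManager.cs — a minimal sketch of the modification.
// Member names (e.g. InitializeWebCamTexture) are assumed from the sample
// at the time of writing and may differ in your version.
private void OnEnable()
{
    // ...keep the existing support and permission checks of the sample here...

    // Instead of starting the camera initialization directly,
    // start a coroutine that waits two frames first
    StartCoroutine(DelayedInitialization());
}

private IEnumerator DelayedInitialization()
{
    // Give the camera system a couple of frames to get ready,
    // otherwise the app may crash when launched multiple times
    yield return null;
    yield return null;
    yield return InitializeWebCamTexture(); // the sample's existing init coroutine
}
```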

(There is an alternate approach, also suggested by Takashi Yoshinaga: go to line 109 of the same file, where there is already a yield return null that makes the system wait a frame before playing the WebCamTexture, and add more yield return null statements to make it wait longer. You can choose either of the two approaches: both work)

  • Save the file. Thanks to this modification, the application won’t crash when you launch it multiple times. And the initialization delay you just added is so short that no one will notice it
  • Now you can build:
    • Connect your Quest 3 to your PC
    • Select File -> Build And Run
    • Choose where to save the build
    • Wait for the build to end
  • Now you can put your headset on and enjoy the samples!
A frame from the official Meta samples running on my Meta Quest headset

Modifying the samples

As I’ve said before, I strongly suggest you play around with the samples. Study the various scenes and see how Meta did the various things. Create your own new scene following the instructions contained here. In general, read the README file of the repo, read Meta’s official documentation on camera access, and play around with this sample to learn more about this exciting topic.

Starting a camera-access project from scratch

I love that Meta provided a sample that works out of the box. But there are times when you actually want to start a project from scratch and avoid all the junk that there is in a sample project. Or other times you already have a Unity project and you want to integrate camera access functionalities into it. In these cases, you cannot start from the provided sample project, but you are on your own in implementing PCA in your project. Luckily for you, I’ve got you covered also for this scenario, so let me show you how to create a Unity Cube textured with camera frames starting completely from a blank project!

The procedure is the following:

  • Open Unity Hub
  • Create a new Unity Project, select Unity 6 as its version, and specify the type of project “Universal 3D Core” (this is not strictly necessary, but usually when you develop for Quest, you always use URP)
  • Wait for Unity to create and open the new project
  • Switch to Android build platform. Let’s do it immediately to avoid reimporting a lot of stuff later on. If you don’t know how to do it, it’s about selecting File -> Build Profiles and then in the window that pops up, selecting the Android tab and hitting the “Switch Platform” button
  • Now that the project is on the right platform, if you want, you can change the Project Settings to specify the product name, the company name, and the Android package name. This step is completely optional: it just lets you customize the metadata of your application
  • Open the Unity Package Manager (from the Window menu)
  • Install the MRUK package: hit the “+” button in the upper-left corner of the package manager and select “Install package by name…” and specify as name “com.meta.xr.mrutilitykit”. Press Install to confirm. After a few seconds, the installation should start
  • If Unity prompts to restart the editor, do it
  • Close the package manager and any opened popup
  • Re-open the Project Settings (Edit -> Project Settings…)
  • In the Project Settings window, click on the left tab that says “XR Plug-in Management” and select the button on the right that proposes to install the XR Plug-in Management
  • When the XR Plugin management is installed, the right part of the window will be populated. Go to its tab related to PC and select the OpenXR plugin. If a popup asks you to install some Meta XR Features or something like that, say yes.
How to configure the right XR plugin for PC and Android in the XR Plugin Management
  • If you have been moved to another tab related to Project Validation, move back to the previous tab of the XR Plug-In management. Then go to its sub-tab related to Android and also there select the OpenXR plugin. This is what Meta suggests to do now
  • Now we have to add the Meta Quest controllers’ interaction profiles. Select on the left the tab XR Plugin Management -> OpenXR. You should see a section of the window on the right saying “Enabled Interaction Profiles”. Click on the “+” button next to it and add “Meta Quest Touch Plus Controller Profile”. Then do the same for “Meta Quest Touch Pro Controller Profile”. Repeat this operation both for PC and Android. PC is not useful for the build, but it is useful in case you want to do some tests of your application in the editor.
Adding the interaction profiles of the controllers
  • Select on the left the tab XR Plug-in Management -> Project Validation. If you are an experienced user, evaluate what to do with the various suggested entries. For the sake of this sample, let’s do the simplest thing and hit Fix All both for the PC and Android tabs.
  • You can now close the Project Settings window
  • You should have the SampleScene opened in the project. Let’s set it up!
    • Delete everything that is in the scene
    • In the menu, head to Meta -> Tools -> Building Blocks
    • Drag into the scene the Camera Rig block, Passthrough block, and Controller Tracking block. This way we can have a scene with passthrough and visible controllers
    • Close the Building Blocks
The blocks to add to the scene
  • Fix the warnings for the Meta XR setup (the same tool we used when we dealt with the samples):
    • Click on the Meta XR Tools dropdown on the top of the editor, and from the dropdown list select Project Setup Tool
    • In the following Project Setup Tool window, you should be presented with some warnings and errors
    • The last one should talk about some permissions for the Scene (“When using Scene in your project, it’s required to perform…”). Click on the three dots next to it, and select “Ignore”. It should disappear. We ignore it because we do not want Meta to ask automatically for the Scene permission since the Camera Access SDK is going to do it itself in one of its scripts
    • Now you can click on “Apply All” to fix all the remaining issues
    • Close the Project Setup Tool window
  • Import the basic facilities for camera access from the samples project. Notice that we could also write these facilities ourselves, but since Meta gives them to us ready out of the box… why should we? Let’s take them:
    • Clone the repository of the samples using your favorite Git client (if you haven’t done it with the previous tutorial)
    • Copy the folder \Assets\PassthroughCameraApiSamples\PassthroughCamera\ from the samples project into the Assets folder of your new project. You can use drag-and-drop from Windows File Explorer to the Unity Project window if you want
    • Apply the fix related to the file WebCamTextureManager.cs described above in the article, if it is necessary (it is not necessary if Meta fixed the bug or if you already fixed it in your sample project following the tutorial above)
After the next step, the Unity scene should look like this
  • Finalize the scene:
    • Drag the prefab Assets/PassthroughCamera/Prefabs/WebCamTextureManagerPrefab.prefab into the scene. This prefab contains the script WebCamTextureManager, which will manage the camera access and provide you with the WebCamTexture and also the metadata about the camera. Notice that this prefab also contains a permissions manager that will ask the user for permission to use the camera and the Scene data (that’s why we ignored that Fix related to Scene before). You can use this script to ask for other runtime permissions too, if you want
    • Create a new cube in the scene
    • Create a new script in the project (ideally after creating a Scripts folder), call it WebcamTextureAssigner, and paste into it the code shown right after this list. Basically, the code waits for the WebCamTextureManager to have the WebCamTexture of the headset camera ready, then assigns that texture to the material of the object the script is attached to. This will apply the camera texture to our cube
    • Assign the script WebcamTextureAssigner to the cube
  • We now have to fix the Android Manifest to ask for all the required permissions:
    • On the menu, select Meta -> Tools -> Create store-compatible AndroidManifest.xml. This is going to replace the current manifest with one requiring all the right permissions
    • If asked for a confirmation to Replace the manifest, confirm it
    • Now open the manifest file, which is located at Assets/Plugins/Android/AndroidManifest.xml
    • At the end of the file, where the various <uses-permission> tags are, add this one:
      <uses-permission android:name="horizonos.permission.HEADSET_CAMERA" />
      This will make sure the user will be prompted for the permission to access the cameras
How to create a store compatible manifest
How the manifest of my project looks like with all the permissions
  • Save the scene, save the project
  • Connect your Quest to your PC via USB
  • Build And Run on your device (File -> Build And Run)
  • If it complains about Input Handling being set as “both”, say Yes to go on (in this sample, we don’t care about it, but in a production environment, you should just use the new Input System)
  • Enjoy the camera-textured cube! You will see the cube colors are a bit dim… this is because we removed the Light from the scene and we are using the default Lit material. Changing the cube material to an unlit one should fix the issue
The Unity Cube is now textured with the camera frames!
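
And here is the code for the WebcamTextureAssigner script mentioned in the scene-setup step above. Take it as a minimal sketch: it assumes, as in the samples at the time of writing, that the WebCamTextureManager class lives in the PassthroughCameraSamples namespace and exposes its texture through a WebCamTexture property, so check the actual names in the version of the scripts you copied:

```csharp
using System.Collections;
using PassthroughCameraSamples; // namespace assumed from the samples; adjust if different
using UnityEngine;

public class WebcamTextureAssigner : MonoBehaviour
{
    IEnumerator Start()
    {
        WebCamTextureManager webCamTextureManager = null;

        // Wait until the WebCamTextureManager is available in the scene
        while (webCamTextureManager == null)
        {
            yield return null;
            webCamTextureManager = FindFirstObjectByType<WebCamTextureManager>();
        }

        // Wait until the manager has obtained the WebCamTexture of the headset camera
        while (webCamTextureManager.WebCamTexture == null)
        {
            yield return null;
        }

        // Assign the camera texture to the material of this object (our cube)
        GetComponent<Renderer>().material.mainTexture = webCamTextureManager.WebCamTexture;
    }
}
```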

Getting data from the WebCamTexture

Now that you know how to get a WebCamTexture using the prefab facility provided by Meta, you should also learn how to get the actual pixel data, e.g. to feed it into an AI algorithm. To do that, you can use some well-known Unity methods: one is GetPixels, and the other is AsyncGPUReadback. The first is a blocking call, but it directly provides all the pixel data you want in an easy way; the second doesn’t block the application, but it is a bit more complex to work with (it provides you the data in a callback) and gives you the data with a few frames of delay. Check the documentation of both of them and decide which one you want to use.
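
Here is a minimal sketch showing both approaches; the webCamTexture field is assumed to be filled with the texture obtained from the WebCamTextureManager:

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: two ways to read the pixels of a WebCamTexture.
public class WebCamPixelReader : MonoBehaviour
{
    public WebCamTexture webCamTexture; // fill this with the manager's texture

    // Approach 1: blocking and simple, but it stalls the main thread
    public Color32[] ReadPixelsBlocking()
    {
        return webCamTexture.GetPixels32();
    }

    // Approach 2: non-blocking; the data arrives in a callback a few frames later
    public void ReadPixelsAsync()
    {
        AsyncGPUReadback.Request(webCamTexture, 0, TextureFormat.RGBA32, OnReadbackComplete);
    }

    private void OnReadbackComplete(AsyncGPUReadbackRequest request)
    {
        if (request.hasError)
        {
            Debug.LogError("GPU readback of the camera frame failed");
            return;
        }

        // Raw RGBA bytes; only valid inside this callback, so copy them if needed
        NativeArray<byte> data = request.GetData<byte>();
        Debug.Log($"Got {data.Length} bytes from the camera frame");
    }
}
```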

In any case, remember that moving data from the GPU (where Textures are) to the CPU (where you can read the camera pixels data) is a very slow and expensive operation. So do it only if it is necessary, and do it only in the moments it is necessary (e.g. you may decide to analyze only a frame out of 20 to speed things up if you don’t need quick reaction times).
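
As a minimal sketch of this throttling idea (again assuming a webCamTexture field filled from the manager):

```csharp
using UnityEngine;

// Minimal sketch: analyze only one camera frame out of 20 to limit
// the cost of the GPU-to-CPU copies.
public class ThrottledFrameAnalyzer : MonoBehaviour
{
    public WebCamTexture webCamTexture; // fill this with the manager's texture
    private const int FrameInterval = 20;

    void Update()
    {
        if (webCamTexture != null && webCamTexture.isPlaying
            && Time.frameCount % FrameInterval == 0)
        {
            Color32[] pixels = webCamTexture.GetPixels32(); // expensive GPU->CPU copy
            // ...feed 'pixels' to your analysis or AI code here...
        }
    }
}
```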


Special thanks

Special thanks for helping me understand how to operate the Camera Access plugin to Roberto Coviello, Dilmer Valecillos, and Takashi Yoshinaga. This article has been possible also thanks to you, guys!

And now…

… have fun playing around with camera access! I suggest you start with the existing samples and then start a project of your own.
Since I invested a lot of time and effort in this long post, if you found it useful, please share it around in your communities, and if you want, also support me on Patreon. Thank you!
