
Introduction to OpenVR 101 Series: What is OpenVR and how to get started with its APIs

Today on my blog I’ll host a guest post by Matias Nassi, a great XR developer from Uruguay. You may remember his name because he has read and commented on a lot of articles on this website. Well, reading his comments I realized that he’s super-skilled in AR/VR/Kinects/all-this-cool-stuff and thought it would be a great idea if he could share his expertise on The Ghost Howls. I’m sure you’ll like this post (and the future ones of his), because it covers some very interesting technical details about OpenVR. Go Mati, show us all what you’re capable of!


This is the first post in a series of technical posts in which we will be covering the bare bones of a Virtual Reality API (Application Programming Interface) and coding a few examples using it. In this first post we will describe what OpenVR is and what it may be useful for, and finally we will go through a little OpenVR application to demo some of the core functionality of the API with respect to interacting with a VR system (getting the connected devices, getting tracking data, etc.). We will discuss it in detail later in the post, but here’s a glance at the simple application we will be developing for now, in case you are as eager as I am 😛

We will be using this simple application as a foundation for the following posts in which we will start adding more interesting stuff like rendering motion controllers, rendering simple objects into a stereoscopic view so they can be seen properly in VR, and adding some simple interactions. So, let’s start!

Motivation

While we wait for the OpenXR initiative by Khronos Group (working together with some of the biggest companies in the VR scene) to be released, we developers have very few options to develop multi-platform VR applications on PC. You may be saying “Come on! We can deploy to a vast array of platforms using game engines like Unity, Unreal Engine, the more recent Godot engine or any of the other engines out there, which have all started to add VR features”. I couldn’t agree more, but this series won’t be aimed at creating applications/games at a high level of abstraction as when using a game engine (you can find a lot of tutorials and even a few nice complete courses on this topic on the Internet). Instead, it will try to detail how to use a VR API at a lower level, in order to understand how all those engines make use of it to enable multi-platform VR applications: obtaining tracking information from an HMD and any additional trackable device, rendering a scene from each eye’s viewpoint to achieve stereo vision, and so on.

Introduction to OpenVR

That being said, one of the best “write once, deploy everywhere” VR APIs nowadays is OpenVR, an open source programming interface created by Valve to allow communication with a VR system. Thanks to its openness and the support from its creators, OpenVR is supported by almost all of the major high-end headsets on the market, including the HTC Vive, the Oculus Rift, any of the Windows Mixed Reality headsets, the Pimax 5K/8K, and even some mobile headsets like the ones using NoloVR and next-generation headsets like the Varjo bionic display headset or Kopin’s small-sized headset. That means that if we use this API for talking with the virtual reality system in our VR application, we will be able to run it on each of the previously mentioned platforms with zero modifications (maybe I should say almost zero, as there might be a few caveats in some particular cases).

Where OpenVR stands between our application and the hardware (Image by Valve)

At this point, you may have already guessed that what most of the engines with cross-platform VR support do is simply add OpenVR into their pipeline, so the deployed application can run on all the aforementioned VR platforms. Thus, the “Your Software” component in the image above can be Unity, Unreal Engine, Godot or John Doe’s self-made engine or application (like the one we will be coding!). What all of them have in common is that they use OpenVR calls behind the scenes to communicate with the VR system, which also understands OpenVR language and responds accordingly.

Anyway, it’s worth mentioning that OpenVR is not the only VR API out there, so game engines usually support other VR APIs as well, such as the Oculus SDK (enabling them to create applications that run directly on the Oculus runtime), the GoogleVR SDK (enabling them to create apps which run on the Google Cardboard and Google Daydream runtimes), etc. As developers, we don’t have control over the runtime on which our application will execute (i.e. we can’t use OpenVR and choose to execute directly on the Oculus runtime); that is dictated by the API/SDK we are using instead. For example, if we are executing our application on an Oculus Rift headset and we developed it using OpenVR, at runtime the execution workflow will be OpenVR API > SteamVR runtime > Oculus API > Oculus runtime > Oculus drivers > Oculus HMD, in contrast to what would happen if we developed with the Oculus API, in which case the application’s workflow at runtime would be more direct: Oculus API > Oculus runtime > Oculus drivers > Oculus HMD.

To further exemplify, a few more well-known examples of this kind of cross-platform API are OpenGL for graphics programming, OpenCV for using computer vision algorithms and OpenCL for distributing program execution between different processors. All of those are interfaces which are supported on a vast range of systems, from desktop computers to mobile devices and embedded systems, so every application that uses the interface can run on all of the supported systems. To give a concrete example, Unity’s support for OpenGL means that a game developed in Unity can run on a Windows desktop computer, Linux, Mac or any other OpenGL-enabled system, with little to no modification, which is very good news for us developers. And the same happens with OpenVR! The fact that Unity supports it implies that any game developed in Unity will be able to run on the HTC Vive, the Oculus Rift or any other OpenVR-enabled hardware. A caveat to this is that Unity natively supports only the rendering and tracking part of OpenVR, delegating more specific behaviors to plugins such as the SteamVR plugin, which is necessary to get controller input as well as more complex information provided by the hardware; but these plugins are using the VR API under the hood as well. But that’s just how Unity works, and this post isn’t about Unity, so let’s move on 🙂

OpenVR support in Unity game engine (Image by Valve)

Finally, another great example of the “write once, deploy everywhere” idea applied to VR is WebVR, an API based on web standards that enables the creation of VR experiences which can be distributed through URLs, just as we know them from the current Web, and executed regardless of whether we are using a high-end VR headset like the Oculus Rift, the Vive or one of the Microsoft Mixed Reality headsets, a mobile VR headset such as the Gear VR or the Daydream View, one of the standalone headsets coming in the next months such as the Oculus Go or the Lenovo Mirage Solo, or even non-VR devices such as smartphones and tablets, as long as they include a WebVR-enabled browser. It can even accommodate AR applications as part of the same API, thanks to an extension called WebXR! So the future of VR on the web is very promising, and in terms of distribution and cross-platform reach this kind of web API can be considered even more flexible than OpenVR, but it is still in its infancy and we have yet to see how well it will evolve.

WebVR for cross-platform VR (Image by Arturo Paracuellos)

Working with OpenVR

OpenVR is distributed by Valve through a Github project that can be found here. Its documentation consists of just a few pages as part of the Github project wiki, which describe most of the operations provided by the API. The descriptions are not exactly thorough and there’s no step-by-step guide, so one of the most useful assets to get started is the example code (e.g. the hellovr_opengl sample included in the samples folder of the repository). Even so, digging into the code may require a couple of hours to understand what the hell it is doing, even more considering that most of the code is based on plain modern OpenGL/GLSL and there is a lot of stuff not purely related to OpenVR itself, such as rendering the scene to separate render buffers and showing them in stereo, applying the correct perspective transformations for each eye so everything looks fine, loading the shaders to be used for rendering, and many other computer graphics things that we will be covering little by little in the following posts.

So one of the goals of this series is to explain, incrementally and in an easy way, which are the most useful methods, how to use them, what the main workflow is when working with the API, etc., so we can better understand how things work in the background when a VR scene gets rendered in the HMD with OpenVR. Anyway, take into account that there will still be a lot of stuff that the VR runtime itself does behind the scenes (e.g. applying the correct barrel distortion to the rendered image to counter the lenses’ natural pincushion effect, ensuring a proper frame rate by applying rendering techniques such as Asynchronous Reprojection, and so on), but at least we will be working at a lower level of abstraction in comparison with a high-level game engine, and this will allow us to better understand how it all works. Yeah, it may be scary low-level stuff at first, but as we already mentioned, all this is what Unity or any other game engine does to support OpenVR.

Ok, and now what?

At this point, you may be saying “Ok, and how the *** would all this be useful to me?”. But I’ve got you covered! And I will try to answer below:

  1. If you want to have a better understanding of what the hell is going on in the background of a VR application, roll up your sleeves and keep reading, you are in the right place.
  2. If you have a game engine and want to add OpenVR support, the same applies, so stay with us and keep reading. For example, I’m working on a little hobbyist game engine and that’s why it was worth it to me at the time.
  3. If you want to create your VR scene from scratch without using any game engine, for learning purposes, this series will be pretty useful too.
  4. If you want to make a VR game, publish it and survive the process, forget all this and better go reading your favorite game engine’s manual 😉 As we noted before, most game engines nowadays already support all this kind of stuff transparently and with very little effort on the part of the developer. But if you still want to code it from scratch, this series should be useful (let me send you my kudos, you are the man/woman… may the force be with you!).
  5. If you want to make an OpenVR-enabled driver for your HMD or some utility application, all this might be helpful, but additionally you should read the driver part of the OpenVR documentation and get hands-on with the driver example code. I haven’t dug much into either of those, so let us know in the comments if you have! Maybe developing a simple driver is a nice idea for the last posts of this series…

And now that you know that this tutorial is essential for your life existence and it’s totally worth your time, let’s put our hands on our OpenVR application!

Our first OpenVR application

So let’s start coding our Hello World application using OpenVR, while we describe the most relevant operations needed to obtain data from the VR system. The final application will be very simple this time, showing all the recognized devices (HMD, controllers, trackers, etc.) and their 3D positions on screen as they are tracked. In the case of the controllers, it will also color their data differently depending on whether a button is being pressed or not. We have already introduced the video of the demo at the beginning of this post, but in case you haven’t seen it, I’ll share the link again to save you some mouse scrolling (I know, I rock. You’re welcome! 😛 )

OpenVR simple demo application screenshot

Let’s first say a few things about the code/project. The code is simple C with a few features from C++ (not object-oriented though, just plain old procedural programming to keep things as simple as possible for now). You can find the whole project on my Github repository. It’s a Visual Studio project tested on Windows 8.1 64-bit, although as all the third-party libraries used are multi-platform, it should be portable to any other platform with just a few tweaks. If you run into any issues executing the code on another platform just let me know and I’ll do my best to help.

The project references just a couple of third-party libraries for now (SDL2 for window management, SDL_ttf for rendering 2D text, and OpenVR for obvious reasons) and they are all included as part of the project, referenced relative to the main solution path, so hopefully there shouldn’t be any issues with using the external libraries. As the main focus of this post is OpenVR, we won’t be describing how to use the other external libraries, so feel free to ask in the comments or check their documentation on your own (it’s easily found within the links above).

Starting to dig into OpenVR code, it’s worth saying that the API is further subdivided into different modules, each one in charge of different tasks within the VR application:

  • The IVRSystem module is the main interface and will enable us to interact with and collect information about the connected devices and controllers, as well as being in charge of the lens distortion calculations discussed earlier and other display-related stuff.
  • Another relevant module is the IVRCompositor, which will enable the application to properly render 3D content in the display and it’s in charge of controlling all the rendering-related stuff inside the VR context.
  • There are also the IVRChaperone and IVROverlay modules, which respectively allow us to access all the information about the virtual bounds system being used, and to render 2D content as part of the VR overlay (menus, buttons, etc.).

Although there are a few additional modules, these are the most important for our task. Each of the modules is instantiated and used as needed, so for this first application, we will be using just the IVRSystem to obtain basic data from the VR system.
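To give an idea of how these modules are reached in code, here is a minimal sketch (assuming openvr.h is included and the runtime has already been initialized with VR_Init, which we’ll cover next): besides the pointer returned by VR_Init, each module has a global accessor function in the vr namespace.

// Sketch: after a successful initialization, each module is reachable
// through a global accessor in the vr namespace
vr::IVRSystem     *system     = vr::VRSystem();     // devices, tracking, display info
vr::IVRCompositor *compositor = vr::VRCompositor(); // stereo rendering / frame submission
vr::IVRChaperone  *chaperone  = vr::VRChaperone();  // play area bounds
vr::IVROverlay    *overlay    = vr::VROverlay();    // 2D overlays (menus, dashboards, etc.)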

The first thing we have to do is initialize the IVRSystem module (check the init_OpenVR function within the code). We do this by calling the VR_Init function, which receives as a parameter the type of VR application we will be targeting and, if everything is fine, returns a VR context we can use later to retrieve data from the system. The application type can be a 3D application such as a game or any other 3D app, an overlay application such as a utility we access inside the runtime itself (think of the Revive dashboard or OpenVR Advanced Settings), or a few others. Even though we won’t be doing any graphics (yet!), we will create a 3D application context, which corresponds to the VRApplication_Scene parameter.

Prior to initializing the context, it may also be useful to check whether there is an HMD connected and the runtime is correctly installed. This can be done using the VR_IsHmdPresent and VR_IsRuntimeInstalled calls respectively. With all this, the OpenVR initialization should have the following form (NOTE: take into account that all the code snippets that follow are just pseudo-code and might not compile exactly as they are written. It’s recommended to refer to the API docs to check the details of each function):

// Check whether there’s an HMD connected and a runtime installed 
if (VR_IsHmdPresent() && VR_IsRuntimeInstalled()) 
	print "It's all ready to run the VR application!"; 
else 
	return -1;
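
For reference, the actual C++ initialization looks roughly like the following. This is just a sketch based on the openvr.h header (double-check the enum names and the error-description helper against the header version you are using):

// Sketch of the real initialization call (assumes #include <openvr.h> and <iostream>)
vr::EVRInitError init_error = vr::VRInitError_None;
vr::IVRSystem *vr_context = vr::VR_Init(&init_error, vr::VRApplication_Scene);

if (init_error != vr::VRInitError_None)
{
	std::cout << "Unable to init the VR runtime: "
	          << vr::VR_GetVRInitErrorAsEnglishDescription(init_error) << std::endl;
	return -1;
}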

Now that we have our VR context initialized, we can also take the chance to obtain some information about the devices being tracked by the VR system as part of our initialization function. In order to do this, we have to iterate through all the devices recognized by the system. The VR context at runtime maintains a list of all the devices being tracked, and we can use a few constants and functions to traverse that list. The OpenVR constant k_unTrackedDeviceIndex_Hmd holds the ID of the first recognized device (which is always the headset itself), and the constant k_unMaxTrackedDeviceCount holds the maximum number of devices that can be tracked. Additionally, there is also a useful function called IsTrackedDeviceConnected as part of the VR context which allows us to query whether a particular device is being tracked or not.

Thus, we can use these constants and that function to iterate through the whole list of possible devices and obtain data from those which are being correctly tracked. Once we know a device is being tracked, we can query some basic information such as its type (HMD, controller, base station, standalone tracker, etc.) using the GetTrackedDeviceClass function, or its name and additional properties using the GetStringTrackedDeviceProperty function. Both functions are provided by the VR context, but you will see that in the code they are wrapped into a few utility functions in order to get a string from the returned data. The code snippet illustrating this simple information retrieval from the VR system is shown below:

// Iterate through all the allowed devices and print basic data for those connected 
for (int td=k_unTrackedDeviceIndex_Hmd; td<k_unMaxTrackedDeviceCount; td++) 
{ 
    if (vr_context->IsTrackedDeviceConnected(td)) // Check if that device is connected 
    { 
        td_class = vr_context->GetTrackedDeviceClass(td); 
        td_name = vr_context->GetStringTrackedDeviceProperty(td,Prop_TrackingSystemName_String);

        print "Type: " + td_class + ", " + "Name: " + td_name; 
    } 
}
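
As a reference for that wrapping, a small helper along these lines (similar in spirit to the one in the hellovr sample; the name get_string_property is my own) first asks for the required buffer size and then fetches the actual string:

// Sketch: turn GetStringTrackedDeviceProperty into a std::string (assumes <string> and <vector>)
std::string get_string_property(vr::IVRSystem *ctx, vr::TrackedDeviceIndex_t device, vr::ETrackedDeviceProperty prop)
{
	// A first call with a null buffer returns the required length (including the terminating null)
	uint32_t len = ctx->GetStringTrackedDeviceProperty(device, prop, NULL, 0, NULL);
	if (len == 0)
		return "";

	// A second call fills the buffer with the actual value
	std::vector<char> buffer(len);
	ctx->GetStringTrackedDeviceProperty(device, prop, buffer.data(), len, NULL);
	return std::string(buffer.data());
}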

It’s worth noting here that if we check the OpenVR API code, we realize that all these functions have no implementation, as they are declared as pure virtual functions in the header file. This is because the one in charge of implementing them is the runtime, which will end up talking with the underlying hardware through the driver. So each hardware vendor that wants its headset to speak OpenVR will implement the OpenVR driver API, using the proper hardware-specific directives to access and retrieve the needed information. The important thing is that, as they all implement the same API, the returned structures share the same types and data structures regardless of the vendor, and that’s the main reason why this kind of API is multi-platform (check the first picture of this post!).

So now that we have initialized OpenVR and retrieved some basic information from the connected devices, we are in a position to code the application’s main loop, in which we will obtain and print the position of all the connected devices on screen in each frame. You’ll see in the code that the PollNextEvent function is used to check for events, but those are just events used to print relevant information, such as when a new device is recognized or loses tracking, when buttons are pressed on the devices, etc., so it’s not essential for the task of retrieving positions and we will not detail it here.
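
That said, just so you know what it looks like, polling those events is roughly as follows (a sketch; vr_context is the IVRSystem pointer returned by VR_Init, and the full list of event types is in the openvr.h header):

// Sketch: drain the runtime's event queue once per frame
vr::VREvent_t event;
while (vr_context->PollNextEvent(&event, sizeof(event)))
{
	switch (event.eventType)
	{
		case vr::VREvent_TrackedDeviceActivated:
			std::cout << "Device " << event.trackedDeviceIndex << " attached" << std::endl;
			break;
		case vr::VREvent_TrackedDeviceDeactivated:
			std::cout << "Device " << event.trackedDeviceIndex << " detached" << std::endl;
			break;
		default:
			break;
	}
}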

Back to the main loop, the most relevant call is the one to GetDeviceToAbsoluteTrackingPose, which receives the coordinate system relative to which the positions will be reported, a prediction time in seconds (we pass 0 to get the current poses), an empty array of TrackedDevicePose_t elements to be filled in as a result, and the length of this array. The TrackedDevicePose_t array will contain all the information related to the position, orientation and other properties of each tracked device (its pose), so this is the array we will have to query in order to obtain the position of each tracked device. More specifically, the mDeviceToAbsoluteTracking attribute of this structure is the one holding the device’s position and orientation. As usual, positions and orientations are represented using matrices. In this case, both are packed into a single 3×4 matrix of the form:

| r11  r12  r13  tx |
| r21  r22  r23  ty |
| r31  r32  r33  tz |

where the r values correspond to the rotation of the device (i.e. its orientation around each of the three axes), and the t values correspond to its translation (i.e. its position). We will use only the translation part for now, so with this in mind, the code snippet for retrieving the position of each device in each frame is as follows:

while (true) // Main loop 
{ 
	if (vr_context != NULL) 
	{ 
		// Obtain tracking data for all devices 
		vr_context->GetDeviceToAbsoluteTrackingPose(TrackingUniverseStanding, 0, td_pose, k_unMaxTrackedDeviceCount);

		// Iterate through all devices and get their tracking data when valid 
		for (int td=0; td<k_unMaxTrackedDeviceCount; td++) 
		{ 
			if ((td_pose[td].bDeviceIsConnected) && (td_pose[td].bPoseIsValid)) 
			{ 
				// Set the proper color in case the device is a controller, depending on whether the trigger is pressed 
				if (vr_context->GetControllerState(td, &controller_state)) 
				{ 
					if (!(ButtonMaskFromId(EVRButtonId::k_EButton_Axis1) & controller_state.ulButtonPressed)) 
						color = green; 
					else 
						color = blue; 
				} 

				// Fill the position vector with the position of the device (last column of the matrix) 
				float v[3] = { td_pose[td].mDeviceToAbsoluteTracking.m[0][3], 
							   td_pose[td].mDeviceToAbsoluteTracking.m[1][3], 
							   td_pose[td].mDeviceToAbsoluteTracking.m[2][3] }; 

				// Print the position on screen with the given color at a certain screen position 
				print_on_screen(v, color, screen_coords); 
			} 
		} 
	} 
}

Some additional details about the above code snippet:

  • As we mentioned earlier, the first parameter of the GetDeviceToAbsoluteTrackingPose function is the coordinate system in which the positions will be reported. In this case, we are using the coordinate system given by the TrackingUniverseStanding value, which means we are using the origin and orientation of the play area configured for standing experiences (we could have also used the coordinate system corresponding to seated experiences, or even a generic uncalibrated one).
  • The bDeviceIsConnected and bPoseIsValid attributes are just flags included in the TrackedDevicePose_t data structure which indicate if that particular pose corresponds to a connected device and whether it is a valid pose or not.
  • The GetControllerState function allows us to access the controller state at any given moment, so it’s useful to check in real time whether there is a button pressed or not. We can also use the helper function ButtonMaskFromId for this task, in order to obtain a bitmask in which every bit is zero except for the one corresponding to the button given as a parameter. Then we can bitwise-AND this mask with the ulButtonPressed attribute returned by GetControllerState to check if that button is actually pressed, as shown in the sketch below.
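
As a concrete sketch of that check (note that in recent versions of the OpenVR header GetControllerState also takes the size of the state struct as a third parameter, so adapt this to the header you are building against):

// Sketch: is the trigger currently pressed on device td?
// (k_EButton_SteamVR_Trigger is an alias for the k_EButton_Axis1 value used in the pseudo-code above)
vr::VRControllerState_t state;
if (vr_context->GetControllerState(td, &state, sizeof(state)))
{
	uint64_t trigger_mask = vr::ButtonMaskFromId(vr::k_EButton_SteamVR_Trigger);
	bool trigger_pressed = (state.ulButtonPressed & trigger_mask) != 0;
}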

Finally, it’s always important to properly destroy the VR context we have created, so as to free all the allocated memory and leave the system clean and ready for future executions. This is done with a call to the VR_Shutdown function provided by OpenVR before exiting our application.
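
In code it boils down to a couple of lines, something like this sketch:

// Shut the VR context down before exiting, freeing the runtime resources
if (vr_context != NULL)
{
	vr::VR_Shutdown();
	vr_context = NULL;
}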

Final thoughts and what’s next

So, that’s it for now. With the application you’ve just coded you can put a Vive Controller on your head and measure your height with submillimeter precision. Pretty cool, eh?

Pretty cool? What is this guy talking about?

Just kidding, I promise the follow-up posts will be much more visually compelling. But in any case don’t expect AAA-quality stuff because, as we mentioned before, we will be focusing on how to use the API, not on the graphics, interaction design or any other aspects relevant for an actual non-prototype VR application. To be more specific, in the next post we will cover how to render objects in stereoscopic mode, so they can be viewed correctly in the VR system, using the OpenGL graphics library. Stay tuned!

I hope you have enjoyed the post and that it has been useful to understand how virtual reality works at mid-level, right between a generic VR software and the underlying hardware. Don’t hesitate to leave any comments, and share the post if you think it could be useful for someone else and is worth spreading. Also, feel free to follow me on Twitter if you want to get some news and personal thoughts on the AR/VR landscape, as well as updates on a few projects I’m working on about AR/VR, computer graphics, shaders and other stuff 🙂

(Header image by Valve)



8 thoughts on “Introduction to OpenVR 101 Series: What is OpenVR and how to get started with its APIs”

  1. Since Mati always comments on my posts, I want to comment on his post to return the favour 🙂

    Epic article, it is not easy to dig inside that low level stuff… that’s why I always use Unity! I can’t wait to see the next chapters of this series!

  2. Thank you for taking the time to do this series Matias. The book Oculus Rift In action is the closest paid option to learning things like this.
    Eagerly looking forward to the entire series.
    Cheers,
    Behram

    1. Hi Behram, thank you for taking the time to read it! Didn’t know about the Oculus Rift in Action book, thanks for the tip. It seems it includes nothing about OpenVR (as expected!) but rather covers the Oculus C APIs, so I’ll be taking a more in-depth look at it later.

      1. Yes, which is why I like the potential for your tutorial series. The ORIA book is the only option if one wants to go a little deeper.

        Looking forward to the next instalment ( YouTube ? )

        Cheers,
        Behram

  3. I can understand publishing code without testing it, or even compiling it as seems to be the case here, but you should really edit this to at least fix the missing parenthesis *smh*

  4. Hi Matias, I am trying to compile the code on your github project. Perhaps you can help me? I got all the header files hooked up correctly, and when I try to compile the project in visual studio 2017 I get “Severity Code Description Project File Line Suppression State
    Error LNK2019 unresolved external symbol __imp__VR_ShutdownInternal referenced in function “class vr::IVRSystem * __cdecl vr::VR_Init(enum vr::EVRInitError *,enum vr::EVRApplicationType)” (?VR_Init@vr@@YAPAVIVRSystem@1@PAW4EVRInitError@1@W4EVRApplicationType@1@@Z) openvrsimplexamples C:\Users\Joey\Dropbox\Apps\openvrsimplexamples-master\openvrsimplexamples\main.obj”

    Is there something I am missing?

