Yesterday everyone was excited talking about Apple's entrance into the AR/VR world, while I was actually excited by another piece of news: the upgrade of SteamVR's Lighthouse tracking. After various previews leaked in recent months, Valve has officially announced the upgrade with a community post. Let's see together what this announcement is about and why it is so important.
SteamVR tracking v1.0
Before explaining the announcement, it is better to take a step back and explain how Lighthouse tracking works. I admit I had never understood it completely until today, when I did a lot of research to write this article, so I'm going to make a little recap for those of you who, like me, didn't fully get it. If you're already a Lighthouse master, skip directly to the next section.
Basically the tracking is composed of two agents: the Lighthouse stations and the various sensors on the headset and VR controllers. Each Lighthouse station contains IR LEDs flashing at regular intervals and two little motors sweeping laser beams across the room, one spinning horizontally and the other vertically. Since we're talking about IR light, we can't see it, but sixty times a second these little stations are actually irradiating our rooms with light.
The big flash you see is emitted by some IR LEDs. The two rotating cylinders each contain a laser emitter (the bright spot you see on them) that sweeps the room with laser light. What is all of this for? We'll see in a moment.
Let’s take a single sensor on a Vive headset: currently it is a TS3633 circuit by Triad Semiconductor. When it sees the bright flash of the IR LEDs (basically it detects a big burst of light), it kind of “resets itself” and starts counting. It keeps counting until it gets hit by the moving laser rays coming from the two cylinders. When it gets hit, it communicates to the “brain” of the Vive tracking system the time at which the hit occurred. So, this circuit will communicate something like “I’ve been hit 0.001 seconds after the reset light”, “I’ve been hit after 0.002 seconds”, “I’ve not been hit”, and so on, forever. Knowing the frequency at which the reset light flashes, the speed at which the cylinders spin and all this stuff, the “brain” is able to derive information about the position of each sensor.
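To make the sensor's behavior concrete, here is a toy model in Python of the reset-and-count logic described above. The event format is mine and purely illustrative; the real chip reacts to analog light pulses, not to a list of tuples:

```python
def sensor_events(light_events):
    """Toy model of a TS3633-style sensor: reset on the sync flash,
    then report the elapsed time whenever a sweeping laser hits it.
    light_events: list of (timestamp_seconds, kind) with kind in {"flash", "laser"}."""
    reports = []
    t_reset = None
    for t, kind in light_events:
        if kind == "flash":
            t_reset = t                      # big IR burst: restart the counter
        elif kind == "laser" and t_reset is not None:
            reports.append(t - t_reset)      # "I've been hit after X seconds"
    return reports
```

Feeding it a flash at t=0 followed by a laser hit at t=0.0083 would make it report 0.0083 seconds, exactly the kind of message described above.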
Let’s look again at the GIF above: there is a reset flash every circa 16.6 ms (60 Hz), and as you can see, in this interval the horizontal cylinder sweeps its full 180 degrees; then there is another reset and the vertical cylinder sweeps its 180 degrees; then the loop begins again. Suppose that after the first reset, our Triad sensor gets hit after exactly 8.3 ms. This means the sensor was hit when the laser ray attached to the horizontal motor was at 90° (8.3 ms is half of 16.6 ms, so the cylinder was halfway through its 180° sweep), that is, pointing directly in front of the station. We now know that this sensor is somewhere horizontally in front of the station. At the next iteration, the vertical laser sweeps and we get hit after 5.5 ms. This means the vertical beam was at circa 60° (5.5 ms ≈ 16.6 ms / 3, so the angle must be 60° = 180° / 3), i.e. 30° away from the frontal direction. We now have some solid data about the position of the sensor: it lies along the direction frontal to the station, at a 30° vertical angle from it. We don’t know the exact position, because a single station can’t measure distance, but we have narrowed it down to a line of possible positions (the sensor lies somewhere on a ray).
(As a note for purists: I know that between the horizontal and vertical sweeps the headset has moved, so in reality the overall calculation is a bit different, but I think this simplification conveys the idea better.)
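The timing-to-angle arithmetic above fits in a few lines of Python. This is a simplified sketch under assumed conventions: the "90° means straight ahead" mapping and the `direction_from_angles` helper are mine, and real stations use more precise timing than this:

```python
import math

SWEEP_PERIOD = 1 / 60  # one full 180-degree sweep per reset, ~16.6 ms (simplified)

def hit_time_to_angle(t_hit):
    """Convert the time elapsed since the reset flash into a sweep angle in degrees."""
    return (t_hit / SWEEP_PERIOD) * 180.0

def direction_from_angles(h_deg, v_deg):
    """Unit direction vector toward the sensor, in the station's frame (z = forward).
    Assumed convention: 90 degrees on each sweep means 'straight ahead'."""
    az = math.radians(h_deg - 90.0)   # azimuth: left/right offset from frontal
    el = math.radians(v_deg - 90.0)   # elevation: up/down offset from frontal
    return (math.sin(az) * math.cos(el),
            math.sin(el),
            math.cos(az) * math.cos(el))
```

With these numbers, a hit at half the sweep period maps to 90°, matching the 8.3 ms example above.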
This video surely helps you visualize the process better: in it you can see the flashing reset lights and the lasers sweeping across the room.
Since the base stations are synchronized, we have calibrated the SteamVR system, and we know exactly the relative position of each sensor with respect to the others (we manufactured the headset, so we know exactly where every sensor sits on it), we can take all these data, feed them to the tracking brain and, with some math magic, reconstruct the position of the headset with great accuracy. The Vive also mounts IMU sensors, so their data too is taken into account when reconstructing the position and rotation of the device.
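As a toy illustration of that "math magic": once each calibrated station yields a ray toward a sensor, the sensor must lie near the point where two such rays pass closest to each other. Here is a minimal least-squares sketch of that step (the station positions and ray directions are assumed to come from calibration; this is my simplification, not the real SteamVR solver, which fuses many sensors and the IMU):

```python
def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays.
    o: ray origin (station position), d: unit direction toward the sensor."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    base = tuple(q - p for p, q in zip(o1, o2))  # vector from station 1 to station 2
    c = dot(d1, d2)
    denom = 1.0 - c * c                          # zero when the rays are parallel
    t1 = (dot(base, d1) - c * dot(base, d2)) / denom
    t2 = (c * dot(base, d1) - dot(base, d2)) / denom
    p1 = tuple(o + t1 * d for o, d in zip(o1, d1))   # closest point on ray 1
    p2 = tuple(o + t2 * d for o, d in zip(o2, d2))   # closest point on ray 2
    return tuple((x + y) / 2 for x, y in zip(p1, p2))
```

Two rays that actually intersect (the ideal, noise-free case) return the intersection point itself; with measurement noise, the midpoint is the least-squares estimate.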
For many people, Lighthouse is the best tracking technology available at the moment, since it is very precise, cheap and highly customizable (it allows tracking of many objects, including the versatile Vive Trackers).
I just want to clarify two misunderstandings that I myself had:
- Lighthouse doesn’t necessarily need two stations. Once you’ve configured everything, you can use just one, as reddit users confirm. From the above explanation it is in fact clear that you only need one light source to do everything. The second station is fundamental to increase tracking precision and to guarantee robustness in case of occlusion: placing one station opposite the other makes it very likely that at least one of the two manages to hit the sensors;
- Lighthouse is all about IR (infrared) light. Until today I had lived convinced that Lighthouse used lasers and not infrared; then, digging into the technical specs, I understood that I was wrong. But also right. In fact, as Wikipedia says:
A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The term “laser” originated as an acronym for “light amplification by stimulated emission of radiation“
So, a laser is just a way of emitting light, and Lighthouse stations use IR lasers. That’s why Lighthouse suffers so much interference from IR devices like the Kinect.
SteamVR tracking v2.0
Valve decided that its Lighthouse tracking wasn’t cool enough, so they announced Lighthouse tracking v2.0. We had already had some news in the past from Triad Semiconductor announcing a new type of sensor that reduces the overall cost of the Vive, but the news is even bigger.
The improvements concern both the sensors and the stations; let’s see how.
First of all, let’s talk about the new sensors, the amazing TS4231. These are the ones I was talking about in my previous article: since they have only 5 components instead of 11, they make a cost reduction possible. Furthermore, this little device can also enter a power-saving mode, giving the device lower power consumption. But the big news is the addition of the DATA output pin. What does this mean? It means that while the previous TS3633 model could only communicate simple data to the brain on the ENVELOPE pin (basically the “I’ve been hit after 5.5 ms” information), the new one can also communicate more complex data (like “the answer to life, the universe and everything is 42”). Where does this data come from? Certainly not from the book “The Hitchhiker’s Guide to the Galaxy”, but from the wave of the IR light intercepted by the sensor. The sensor mounts a light-to-digital converter, able to extract information modulated inside the wave and translate it into digital data output on the DATA pin. We’ll see why this is a game changer in a moment.
The base stations will be optimized so that they no longer need two different motors to sweep the light. Thanks to engineering magic, a single rotating cylinder will be able to sweep rays both horizontally and vertically. This is an incredible cost reduction (you cut the price of the motors in half!). Furthermore, the “reset” LEDs have been removed, since they’re now useless. Wait… what?? Useless?
Yes, useless. Since we’re now able to encode/decode data directly into the IR laser beams, we don’t need a reset signal anymore. Let me explain with a simple example. Suppose that the horizontal beam now carries, encoded within it, information about its current angle: you don’t need timing anymore. I mean, if the ray at 30° can carry information like “I’m a ray at 30°” or “I’m a ray emitted after 5.5 ms”, you no longer need a silly timing reset at all. You just take that number and give it to the tracking “brain”. How to encode this information inside the light beams is signal magic; basically, I think that, as with radio, you have a carrier wave at a certain frequency onto which the actual data is modulated (music in the case of radio, numbers in the case of this tracking system)… but I’m not an expert in this field at all. The only thing clear to me from the sensor’s official datasheet is that it is able to translate light waves into digital data. So, the important thing is that by modifying the waves of the laser beams, we are able to communicate information to the sensors.
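The actual modulation scheme is not public, so here is only a toy illustration of the idea: if each beam carries its own sweep angle as a digital payload, the sensor recovers the angle directly, with no reset flash and no timing at all. The 16-bit payload size is an assumption of mine, not the real protocol:

```python
ANGLE_BITS = 16  # assumed payload resolution; the real protocol is not public
MAX_TICKS = (1 << ANGLE_BITS) - 1

def encode_angle(angle_deg):
    """Station side: quantize a 0-180 degree sweep angle into an integer
    payload that the modulated beam could carry."""
    return round(angle_deg / 180.0 * MAX_TICKS)

def decode_angle(payload):
    """Sensor side: recover the sweep angle straight from the payload."""
    return payload / MAX_TICKS * 180.0
```

With 16 bits the quantization step is 180°/65535 ≈ 0.003°, so a round trip through encode/decode loses essentially nothing.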
Thanks to these engineering efforts, the Lighthouse stations are deeply redesigned and their cost is incredibly reduced. Look how empty a Lighthouse station box becomes in its second version:
In a period when everyone is trying to reduce VR costs, this update means an incredible cost reduction, both on headsets and on tracking stations. The Vive 2’s price will surely be lower than the Vive 1’s.
But there’s even more: since we’re removing the IR synchronization signal, which was the one causing most of the interference, Lighthouse tracking will become more stable, and I guess it could also become usable alongside external sensors like the Kinect.
Another great piece of news is that we’ll also be able to use more than two Lighthouses: since we can encode info in each laser beam, the beam can also contain the ID of the Lighthouse station casting it. This removes tracking ambiguity (the sensor knows exactly which station hit it at any given moment) and makes it possible to use any number of tracking stations. So we could use a bazillion Lighthouse stations and track entire warehouses, something that is awesome for VR arcades. As Triad says:
This higher speed digital link opens up the capability of using more than two base stations. Multiple base stations installations are useful for digital out of home experiences, arcades, “house-scale” tracking, and non-VR tracking applications such as warehouse-scale or grocery-store-scale tracking.
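Conceptually, a per-beam station ID turns disambiguation into simple bookkeeping: every hit already says which station produced it, so hits can be bucketed per station before solving for the pose, no matter how many stations there are. A hypothetical sketch (the tuple format is mine, not the real driver's):

```python
from collections import defaultdict

def group_hits_by_station(hits):
    """hits: iterable of (station_id, sensor_id, angle_deg) tuples.
    With an ID embedded in every beam, each hit is unambiguous and can be
    grouped per station before feeding the pose solver."""
    by_station = defaultdict(list)
    for station_id, sensor_id, angle_deg in hits:
        by_station[station_id].append((sensor_id, angle_deg))
    return dict(by_station)
```

Adding a fifth, tenth or hundredth station just adds more keys to the dictionary, which is exactly why the two-station limit disappears.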
The Vive is becoming the best product for VR arcades: it has an incredible tracking system, a clear commercial license, and a completely open architecture that makes all kinds of customization possible (for example, it is possible to create custom controllers with Vive Trackers or by using Triad sensors directly). And the Vive is Taiwanese (so Chinese) and arcades are enormous in China, so I expect lots of money for them.
UPDATE: according to Road to VR, when this new tracking technology comes out, it will allow, out of the box, the use of 4 tracking stations to track an area of 10 m × 10 m (33 × 33 feet). This is incredible.
But there is a problem: since we’ve removed the synchronization blink, a Vive v1 surely can’t work with this new kind of base station. Its TS3633 sensors would just count forever, waiting for an IR blink that never comes. This means there is no backward compatibility of the stations: Vive v1 hardware can’t work with these evolved base stations. The opposite, however, does hold: since the TS4231 was made with backward compatibility in mind, it can be used with Lighthouse v1.
This means that we’re talking about a disruptive innovation of this VR tracking technology. Once we go 2.0, we can’t go back. This has huge implications.
Valve writes in its statement that
Valve will have base stations available in production quantities starting in November 2017. If you would like engineering samples of those base stations, let us know. Those will be available in June.
This means that from now on, everyone wanting to create hardware for SteamVR has to design it around the new TS4231 sensor. This includes HTC itself, which leads me to my last point…
Until now, HTC has worked on a sort of Vive 1.5, creating add-ons compatible with the Vive 1.0 (like the Audio Strap, for instance). Now a non-backward-compatible upgrade is being proposed instead: a tracking 2.0 incompatible with the Vive 1.0. This makes me immediately think of a Vive 2.0, which will surely use the new Triad sensors.
Considering that mass production of this new Vive technology will begin at the end of this year, I can imagine that this mass production is being carried out in anticipation of the release of a new device using this kind of station, i.e. the Vive 2. I think this could be the time when Vive announces its new device, which will presumably ship in 2018, 2 years after the first one (and that seems a reasonable product cycle to me). The new Vive will be far cheaper thanks to these upgrades and will have new features, like house-scale tracking and maybe eye tracking and foveated rendering (maybe… it would be great). Surely it will add Oculus Touch-like controllers, with full hand tracking.
People with a Vive 1.0 will be able to buy just the new 2.0 headset, without buying the Lighthouse stations again, thanks to the backward compatibility of the sensors; while new users will be able to buy a complete system cheaper than the first one. In both cases, people won’t spend the full price of a Vive 1.0 again.
The arrival of the new, cheaper Vive, maybe released in mid-2018, will contribute to the widespread adoption of VR, which according to John Riccitiello is expected from the second half of 2018 to the beginning of 2019. Everything makes sense.
Of course this last paragraph is all speculation, because I love speculating about VR future :). Don’t take it for granted!
Let me know your impressions of this new version of Lighthouse tracking… and don’t forget to share this article and to subscribe to my newsletter. Thanks 😉
(Header image by Road To VR)