The rise of holographic displays: what they are and how they work

Recently, while I was browsing for some fresh VR news, I came across some posts referencing Microsoft's Andromeda project. Andromeda (which is also the codename of Microsoft's new mobile OS) is a new mobile device that Microsoft is secretly designing: it should feature two displays that can be folded one over the other and a stylus to interact with them. Think of Andromeda as a digital agenda for professionals, not as a new smartphone… Microsoft has already realized that its mobile business is by now almost dead.

Form factor of the Andromeda device: it looks like a mix of a phablet and a Surface device (Image from 360 VR Community)

Its new and weird form factor has surprised everyone (two screens on a mobile device? Why?), but I'm quite sure that Microsoft has good reasons to propose it (maybe it will prove as useful as having multiple screens on a PC). What astonished me instead was the reference to Alex Kipman and to the words "Holographic display".

I admit that every time I hear someone talking about holo-something, I automatically assume that it is some kind of PR bullshit, but when Alex Kipman, the inventor of Kinect and HoloLens, is working on something, I immediately take it very seriously. So I started thinking that this "Holographic display" must be something real. And I got a bit curious… what is a holographic display? And how does it work?

So, I started investigating…

In every article talking about holograms, Star Wars can't be missing (Image from a movie by Lucasfilm)
3D display

Someone in the comments of one of those articles suggested that Andromeda's holographic display is just a 3D display, like the one of the Nintendo 3DS. If you, like me, don't know how a Nintendo 3DS works… well, I've got you covered. Basically, the screen of the 3DS console is made so that if your eyes stay in an exact position in space, you can see the screen in 3D even without wearing any glasses. This is possible because every even vertical line of pixels of the screen produces the image for one eye (e.g. the right eye), while every odd vertical line of pixels produces the image for the other eye (e.g. the left eye). You know that to have stereoscopy, that is 3D, you have to produce one image for the left eye and another one for the right eye, so that the brain can fuse them into a single 3D scene, and this is exactly what the system does. The problem is that on a traditional screen, you would just see the two images interleaved and everything would become a mess: there would be no way to direct the left image only to the left eye and vice-versa. That's why this kind of screen features a "parallax barrier", a layer that makes sure that each pixel can cast light only in the direction where its target eye is predicted to be (there's a little geometry sketch of this right after the following image).

This is exactly how a 3D screen works: notice how, thanks to the parallax barrier, every pixel targets only its intended eye (Image from CNET)
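Just to make the geometry concrete, here is a minimal sketch of how the barrier's placement comes out of similar triangles. All the numbers (pixel pitch, eye separation, viewing distance) are illustrative values I picked, not the real 3DS specs:

# A rough parallax-barrier geometry sketch (illustrative numbers, not the
# real 3DS specs): where to place the barrier and how to space its slits so
# that even pixel columns reach one eye and odd columns reach the other.
def parallax_barrier(pixel_pitch, eye_separation, viewing_distance):
    """All lengths in millimeters. Returns (barrier_gap, slit_pitch)."""
    # Gap between pixels and barrier so that, through a single slit, the two
    # eyes see two adjacent pixel columns (similar triangles).
    gap = pixel_pitch * viewing_distance / (eye_separation + pixel_pitch)
    # Slits repeat slightly more often than every two pixel columns, again
    # by similar triangles from the assumed eye positions: this is why the
    # 3D effect only works from one "sweet spot".
    slit_pitch = 2 * pixel_pitch * (viewing_distance - gap) / viewing_distance
    return gap, slit_pitch

gap, pitch = parallax_barrier(pixel_pitch=0.1, eye_separation=63.0,
                              viewing_distance=350.0)
print(f"barrier ~{gap:.2f} mm in front of the pixels, one slit every {pitch:.4f} mm")

Notice that the slit pitch comes out slightly smaller than two pixel columns: that tiny mismatch is what aims each column at the right eye, and also why the effect breaks as soon as you leave the sweet spot.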

The result is that, if you put your eyes in the positions that the manufacturer has decided for you (the so-called "sweet spot"), you can enjoy amazing 3D without wearing any glasses! The following video explains everything very well, in case my words were not clear.

3D displays are cool, but the fact that you have only one sweet spot is terrible: you have to be in that exact position to experience the magic. Furthermore, you're just feeding your eyes a pair of images that come from a screen… so there's the same problem as in VR: you have to focus on the screen to see objects whose virtual depth differs from the screen's. Things do not seem real, they do not seem holograms.

That's why no, this can't be the technology that a genius like Alex Kipman would use.

Floating image display

The technology behind Andromeda has been leaked, as always, by the Twitter account Walking Cat, who spends days and nights reading patents and sharing leaks with people (thanks, cat, for existing!). He pointed us all to a specific patent filed by Microsoft for a "Floating Image Display", plus one more for an "Array-based floating display" and another one for an "Exit Pupil-Forming display with reconvergent sheet".

I've read some posts about Andromeda that cited the patents, but they all avoided explaining how the display works. So, I started reading the patents to understand better… and after encountering terms like "Fourier Transform array", "microlens array" and "telecentric", the only thing I understood was why all the other journalists preferred to be so vague about it.

That’s me

Honestly, I don't have the right competencies to understand that document… someone with expertise in optics and displays would be needed (if that's you, please read it and interpret it for me). But I really wanted to understand, so, using my grit, I kept on reading until I maybe got something. My understanding is pretty vague and I can't guarantee that it is exact, but I'll try to explain it to you nonetheless.

Let's get back to how vision works: what we perceive is just light. So, if we're seeing a potato, it is because light in the room bounces off that potato and then hits our eyes. Our eyes receive the light rays coming directly from the different parts of the potato, at different angles. Each eye obviously gets a slightly different view of the potato, and so we see a 3D potato in front of us. That's nice.

The yellow light comes into the room and bounces off the potato. Since the potato is rough, every little area of it scatters light rays in all directions: some (the red ones) reach our left eye, some others (the green ones) reach our right eye, while others (the cyan ones) go in other directions. It is because of all the red and green rays that we actually see that there is a potato there.

Now let's play a little game: suppose that we have a magician (maybe someone from the game Elemental Combat) who can remove the real potato while leaving all the light rays from the potato untouched. From our eyes' point of view, the potato is still there: as I've already told you, we only see light, not objects. We can say that our magician has created the hologram of a potato.

The potato is not physically there, but since the light is still there, for our eyes nothing has changed: we have a hologram of a potato

Since we have our good magician, we could ask him to show other objects: a banana, a mango, etc… all of them without actually inserting the real objects into the world. I could even use the magician's Unity plugin to code a game for my PC (Fruit Ninja, since we're talking about fruits and potatoes) and he could show holograms of these objects in front of me, according to what the PC wants to display. The magician has just become the hologram display of my computer. He bends light rays and pretends that those objects are there, right in front of me, and the experience is very, very realistic. Holograms have the cool advantage that they appear real not only in shape and color, but also in depth: the object is not on a fixed-depth screen, but is shaped by the light at the correct depth, in every detail. Look at the picture above… the eyes still see the light rays as starting from the exact points of the potato. It's amazing.

The problem is that we don't really have a magician whom we can ask to bend the light for us, so what can we do? There are various strategies, so let's focus on the one proposed by Microsoft. The idea is to use the tools that we have to bend light, that is, optical lenses, to obtain something similar. What I understood from the patent is that inside the screen there is a microlens array, basically a film made of microscopic lenses placed one next to the other, which bend the light coming from the underlying display. Another thing that I understood is that the system makes use of the Fourier transform: using lenses it is possible to apply the Fourier transform to a certain signal, though I didn't understand exactly what purpose it serves here. The third thing that I got is that the system has been conceived to offer the maximum image quality, with the least number of artifacts and tiling effects (affecting the number of 3D voxels of the hologram). So, we have a display that receives the images to be shown, encoded in some way, and some lenses that do some optical stuff (Fourier transform, etc…) and then diffract the light so that it appears to come from a certain depth. So, we can emulate a hologram at a fixed depth (a flat image hovering at that depth) thanks to lenses and other optics.
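Since the "lens as Fourier transform" part sounds like magic, here is a tiny numerical sketch of the underlying (and well-known) Fourier optics fact: a converging lens produces, at its focal plane, the Fourier transform of the light field entering it. This is not the patent's actual math, just my illustration of the tool it mentions, with made-up wavelength and lens values:

# NOT the patent's math: just the classic Fourier-optics fact that the field
# at a lens's focal plane is the Fourier transform of the field at its input.
import numpy as np

wavelength = 532e-9      # green light, in meters (assumed value)
focal_length = 0.05      # a 5 cm lens (assumed value)
n = 1024                 # samples across the input plane
width = 2e-3             # a 2 mm wide input plane

x = np.linspace(-width / 2, width / 2, n)
aperture = (np.abs(x) < 0.25e-3).astype(float)   # a 0.5 mm slit as input field

# Field at the focal plane = (scaled) Fourier transform of the input field.
field_f = np.fft.fftshift(np.fft.fft(np.fft.fftshift(aperture)))
intensity = np.abs(field_f) ** 2

# Spatial frequencies map to physical positions u = focal_length * wavelength * freq.
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=width / n))
u = focal_length * wavelength * freqs
print(f"brightest point at u = {u[np.argmax(intensity)]:.2e} m")  # ~0: the classic sinc² pattern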

It seems a bit rough compared to our little Merlin, and in fact this practical solution surely has various issues. First of all, it can't reconstruct a 3D hologram that can be seen from all points of space: Andromeda's display works only if seen from certain "vantage points". This means that the various calculations are made so that the light can be perceived correctly only from these vantage points: this is surely a simpler problem, one that can be handled with current tools. Looking at the picture below, taken from the patent, you can get an idea of how this works: the display plus the optics have to bend the light so that a viewer at the 614 and 616 points of view can see the holographic object as if it were at depth 612. Notice the point 617A: it is as if it were emitting rays in the direction of the various vantage points, like a real object hit by light, scattering it in all directions. Point 617A is thus a point of the hologram. Everything that you can see on its left is the display and the optics that are necessary to create such a hologram. Notice that two different parts of the screen (606A, 606B) create the visuals of point 617A and that the optics then bend the light rays so that all of them seem to have started from point 617A. To make a comparison with the pictures of the potato, imagine the ray 610A as red and 610B as green. We have created a hologram point that appears to hover at a certain depth above the screen (right after the image, I've put a tiny geometry sketch of this idea).

This image conveys the idea of how the hologram display works: the part on the left is the display + the optics (Image from Microsoft patent)
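To give you an idea of the geometry (this is my own toy interpretation, not the patent's actual equations), here is how you would compute where on the screen rays like 610A and 610B must start, so that after the optics redirect them they appear to come from a hologram point like 617A:

# My toy interpretation of the patent's figure, not its actual math: for a
# hologram point hovering above the screen (like 617A) and a set of vantage
# points (like 614 and 616), find where on the screen each ray must originate
# so that it appears to come from the hologram point.
def screen_origin_for_ray(hologram_pt, vantage_pt):
    """hologram_pt, vantage_pt: (x, z) pairs, z = height above the screen.
    Returns the x coordinate on the screen (z = 0) where the ray must start."""
    hx, hz = hologram_pt
    vx, vz = vantage_pt
    # Extend the line from the vantage point through the hologram point
    # until it hits the screen plane z = 0.
    t = vz / (vz - hz)
    return vx + t * (hx - vx)

hologram = (0.0, 0.02)                     # a point 2 cm above the screen center
vantages = [(-0.03, 0.35), (0.03, 0.35)]   # two eyes ~35 cm away, 6 cm apart

for v in vantages:
    x0 = screen_origin_for_ray(hologram, v)
    print(f"vantage at x = {v[0]:+.3f} m -> ray must leave the screen at x = {x0:+.4f} m")

The two rays start from two different spots on the screen (like 606A and 606B in the figure), yet both seem to come from the same floating point.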

From this description, it seems that this display can only show holographic images at a certain fixed depth. But there's more: if we combine optics with different focuses (or if we can change the focus dynamically), we can project different parts of the hologram at different depths, obtaining a real 3D hologram. This depth can be both positive and negative, so content can even appear behind the actual screen. And this hologram can be static or animated. Once the display works, from the OS point of view it is just a screen… so you can use it to show whatever you want.
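The patent gives no numbers here, so take this as a back-of-the-envelope illustration: the standard thin-lens equation already shows how changing a lens's focal length moves the apparent depth of the image, and how that depth can flip from in front of the optics to behind them (a virtual image). The gap value is hypothetical:

# A back-of-the-envelope illustration (not from the patent): the thin-lens
# equation 1/f = 1/d_o + 1/d_i shows how changing the focal length moves the
# depth at which the image of the display plane appears.
def image_distance(focal_length_mm, object_distance_mm):
    """Negative result = virtual image, appearing behind the optics."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

display_gap_mm = 2.0   # hypothetical distance between the panel and the lens layer
for f in (1.0, 1.5, 3.0):
    d_i = image_distance(f, display_gap_mm)
    side = "in front of" if d_i > 0 else "behind"
    print(f"f = {f} mm -> image {abs(d_i):.1f} mm {side} the optics")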

Potato example using the Andromeda display. Considering that the two eyes are at two vantage points, they receive from the Andromeda display exactly the same light as before, so the eyes still perceive a potato: if you compare this picture to the ones above, you see that the red and green lines are exactly the same; the only difference is that they now start from the display. The eye doesn't care where a ray started: the intersection of the red and green rays still represents the same physical points of the object, and this means that the brain will keep seeing the hologram at the right depth.
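And here is the same point made numerically (again, my own sketch, reusing the numbers of the previous one): the perceived position of a hologram point is simply where the two rays reaching the eyes intersect, regardless of where those rays physically started on the display:

# My own sketch: the perceived position of a hologram point is just where the
# rays reaching the two eyes intersect, no matter where on the display they
# physically originated.
import numpy as np

def ray_intersection(p1, d1, p2, d2):
    """Intersect two 2D rays p + t * d; returns the intersection point."""
    a = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]], dtype=float)
    b = np.array([p2[0] - p1[0], p2[1] - p1[1]], dtype=float)
    t1, _ = np.linalg.solve(a, b)   # solve p1 + t1*d1 = p2 + t2*d2
    return np.array(p1, dtype=float) + t1 * np.array(d1, dtype=float)

# Two rays leaving different spots on the screen (z = 0), one aimed at each
# eye (the same geometry as the vantage-point sketch above).
ray_to_left_eye = ((+0.0018, 0.0), (-0.0318, 0.35))
ray_to_right_eye = ((-0.0018, 0.0), (+0.0318, 0.35))

print(ray_intersection(*ray_to_left_eye, *ray_to_right_eye))  # ~ (0.0, 0.02): the hologram point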

There's an intriguing part of the patent about the various vantage points. It says that if the user's eyes are tracked to see which vantage points they're closest to, this information can be used to modify the behavior of the screen. For instance, the parts of the screen targeting the other, unused vantage points could be shut down to save power. Or the system could show different images depending on the position of the user, maybe giving them the illusion that the hologram of that zombie is continuously following their eyes. Or, even better, it could show different perspectives of the hologram from the different vantage points, letting the user move around the hologram to see it from all sides. That would be pretty impressive.
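The patent doesn't detail how this selection would work, so here is just a hypothetical sketch of the idea, with made-up vantage-point positions and function names:

# Hypothetical sketch of the eye-tracking idea (made-up layout and names):
# map each tracked eye to its nearest vantage point and keep powered only
# the screen regions serving those vantage points, shutting down the others.
VANTAGE_POINTS_X = [-0.09, -0.03, 0.03, 0.09]   # assumed positions, in meters

def nearest_vantage(eye_x):
    """Index of the vantage point closest to a tracked eye position."""
    return min(range(len(VANTAGE_POINTS_X)),
               key=lambda i: abs(VANTAGE_POINTS_X[i] - eye_x))

def active_regions(tracked_eyes_x):
    """Which vantage points (and their screen regions) should stay on."""
    return {nearest_vantage(x) for x in tracked_eyes_x}

eyes = [-0.025, 0.035]            # tracked x positions of the two eyes
print(active_regions(eyes))       # {1, 2}: only two of four regions stay lit

The same mapping could work the other way around, too: render a different perspective of the hologram for each active vantage point, so the user can walk around the object.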

Talking about issues, instead, I also have to mention that holograms seen through a screen suffer from a low FOV: since rays can be generated only within the screen surface, all the hologram parts that would require a ray coming from a position outside the screen can't be generated, so that part of the image can't be shown.

As you can see, this hologram stuff is absolutely cool and absolutely real. The problem lies in understanding its quality (I'd be very curious to try such a device and see) and in creating the right applications for it, so that it doesn't appear to be only a gimmick. And those applications need high-quality 3D content: on this side, Microsoft has its famous Mixed Reality studios (the ones that recorded the avatars for the Blade Runner VR experience), which are able to create incredible holographic recordings, so it can play it safe.

RED holographic display by LEIA

But Microsoft is not the only player. Some months ago, RED made a bold claim that it was creating a smartphone with a holographic display and AR & VR functionalities. Recently, these claims have started making more sense: the screen of the RED Hydrogen One will be made by a startup called LEIA, which specializes in holographic displays.

The peculiarity of LEIA's hardware is that its screen can work both in standard mode and in lightfield mode. In standard mode, it behaves exactly like a standard screen, while in lightfield mode it uses a substrate of nano-stuff ("a nanostructured light guide plate", according to their website) to bend the light and create 3D holograms on top of the smartphone display. Their "Diffractive Lightfield Backlighting" technology should be ideal for smartphones, since it is very compact and consumes little power. The fact that it can work in both modes is awesome, considering that most mobile apps won't include holographic content.

LEIA doesn't come from nothing: it is a spin-off of HP. In 2013, HP showcased a display prototype able to show a 3D hologram viewable from 200 different points of view. The secret was in the display technology: using some tech similar to the ones seen above, the display could show all 200 images of the object, as seen from the various angles, simultaneously, with every image rendered by only some of the screen's pixels and sent only to a certain viewing angle. So, moving around the screen, you get, for the left and right eye, the appropriate images of the virtual object from that point of view, giving you the illusion that you're moving around a real 3D object! Imagine it as a 3DS with 200 images rendered at the same time and with the parallax barrier handling 200 angles: that's amazing (a toy sketch of this interleaving follows the image below). If you want to see how it worked in detail, you can read their article in Nature.

An image that easily shows how the HP display works: above you have the Nintendo 3DS, and below you have the HP display. The more images you can project, the cooler the display becomes (Image from Business Insider, copyright Dodgson et al., Nature)
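To see how this generalizes the 3DS trick, here is a toy interleaving sketch (my own illustration, not HP's actual method): with N views, each pixel column is assigned to one viewing direction, cycling through the views, so every view is rendered by 1/N of the columns:

# Toy multi-view interleaving (my illustration, not HP's actual method):
# column c of the panel shows view (c % n_views); a directional optic would
# then steer each column's light toward its own viewing angle.
import numpy as np

def interleave_views(views):
    """views: array of shape (n_views, height, width) -> one panel image."""
    n_views, height, width = views.shape
    panel = np.empty((height, width), dtype=views.dtype)
    for c in range(width):
        panel[:, c] = views[c % n_views][:, c]
    return panel

# 200 tiny synthetic views, one per viewing angle (resolution is made up).
views = np.stack([np.full((4, 400), v, dtype=np.uint8) for v in range(200)])
panel = interleave_views(views)
print(panel[0, :6])   # [0 1 2 3 4 5]: neighboring columns belong to different views

The obvious cost is resolution: with 200 views, each view only gets 1/200 of the panel's columns, which is why this kind of display needs a very high-resolution panel.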

The results were impressive, and that's why the project evolved until it reached its current stage: the startup LEIA is there, ready to astonish us all with its display. People who tried the RED said they were amazed by the results.

And apart from holograms, LEIA's light-bending technology can also be used to create a privacy zone for the smartphone screen, so people around us can't read what we're writing… or see that we're watching some dirty stuff.

The future

It seems that holographic displays are going to become part of our future. According to Wikipedia, MIT researcher Michael Bove predicted in 2013 that holographic displays would enter the mass market within ten years, adding that we already have all the necessary technology. This is surely fascinating. While we're all talking about smartphones being replaced by AR glasses, it seems that smartphones are slowly blurring the line with AR and VR devices, adding AR functionalities through ARCore and ARKit and 3D holographic functionalities thanks to this new kind of display. This is surely intriguing, even if we have to wonder how useful such features will actually be, and whether they will be more than a wow-effect generator for the first time we try a new device. We'll see.


Hope you liked this journey into the world of holographic displays. What's your opinion about them? Do you now have a better idea of how they work? Let me know in the comments!

Skarredghost: AR/VR developer, startupper, zombie killer. Sometimes I pretend I can blog, but actually I've no idea what I'm doing. I tried to change the world with my startup Immotionar, offering super-awesome full-body virtual reality, but now the dream is over. But I'm not giving up: I've started an AR/VR agency called New Technology Walkers, with which we help you realize your XR dreams through our consultancies (contact us if you need a project done!)