There are days when I’m really happy to be a blogger. The day I interviewed Donald Greenberg was one of them.
Donald Greenberg is one of the fathers of CGI. Have you ever heard the name “radiosity”? It is one of the fundamental algorithms that let you render correct lighting in your virtual scenes, and it is implemented in practically every rendering engine. Well, it was invented BY HIS TEAM. We’re talking about a person who created a whole new field, who has enormous expertise, and who at 84 years old still works on these technologies.
And he is also a very approachable person: at View Conference, he just hung around, listening to other people’s talks (really, I spotted him in the audience many times!), complimenting everyone and exchanging opinions about VR and CGI with whoever wanted to talk with him. He is wonderful, I have no other words to describe him.
I absolutely wanted to interview him, and I managed to do it. Sitting on a sofa next to such an important person was an incredible experience. I could really feel all his knowledge and his passion for technology; he truly inspired me. It was one of the best experiences of my blogging life. He also said a lot of cool stuff about CGI, VR, and BCI. A few hours later, he gave a public talk, which of course I attended and found incredibly interesting.
I want you to feel as if you were on that sofa in my place, talking with Greenberg, so I’ll offer this interview in three formats:
- The full video, to make you feel as if you were really there;
- A rough transcription (sorry for any errors!), so you can skim quickly through the questions (I’ve also provided a link to the part of the video related to each question, so you can watch only the parts you are most interested in);
- A final summary of the key points made by Greenberg.
Enjoy Don Greenberg’s knowledge 😉
VR has tried many times to become mainstream, and it has always failed. This time again, lots of people are saying that it is a fad. Do you think this is the right time for VR to become widespread or not? And why? (0:56)
The idea of AR and VR has been around for a long time, but in the last 5 years the technology has become available to make it economical enough for lots of people. The question is: is it going to fail economically, or will the technology be there to make it work?
In the 60s, I wanted to use computer graphics for computer-aided design (cars, buildings, etc…) and nobody wanted to do it: it was still expensive, it had high computational costs, “why waste resources to make pictures?”. I was rejected by Disney (the company) in 1973; they said that they would never use computers in the animation studio. The technology was so far behind what you could do by hand that their reaction was understandable.
Right now, what people don’t see is that virtual reality is still a baby. We don’t have enough bandwidth, we don’t have enough processing power, and we isolate people with goggles, so I can’t see the expression in your eyes when we share the same environment. It’s one-way for now, but it’s just at its beginning. Today I come to View Conference and I see these exquisite experiences by storytellers, but they are a linear sequence of events, directed by an art director, so you don’t get to choose what to look at: you look at what they give you, and they do that wonderfully (AN: he’s talking about traditional movies). When we move to AR and VR, it’s a totally new technology: we need millions of times more computing power and much higher resolution, closer to that of your eyes. I think Augmented Reality needs even more.
I think it is being misjudged, but I also think it is going to be here forever, because ultimately what we’ve learned so far is going to be thrown away and we will be using AR as a means of communication. It is not for making movies, I am talking about communication; that’s why I say that it is going to stay. The immersive experience is so powerful: even in a crude fashion, it is still so powerful. You can’t step over that cliff. I think this is just the beginning.
So you think that people are misjudging current VR as the final stage, while we still have a long way to go… (06:44)
We’ve got TWENTY YEARS to go. Fifteen years to go.
And in fifteen years will we have a VR experience that is indistinguishable from reality? (07:00)
Close
And how long will it take to reach perfect similarity between the virtual and the real? (07:14)
The first well-received three-minute CGI movie was Luxo Jr. by Pixar in 1986. The first full-length computer animation was Toy Story in 1995. That’s 9 years from the three-minute short to the full movie. And from 1995 to now, 23 years later, it has become a standard! They are all doing the same thing, all using the same software! The story is different, the artistry is fantastic, they know how to use the tools… but that’s 2-3 decades.
Do you want to switch ages? So that I can spend the next 30 years solving the problems of VR! There are lots of unsolved problems. (AN: of course I would… he is a genius, he would discover a lot of amazing stuff and benefit the whole virtual reality community!)
Why did you start working in CGI and VR? (8:52)
I am separating VR from CGI.
I got started in computer graphics because I wanted to create computer software for the design of cars, buildings… I didn’t like looking at numbers coming out of computers; I wanted to understand these simulations, so I started writing graphics software. It had nothing to do with entertainment. Nobody else was doing this, apart from a few people. I had very creative, energetic, courageous students who came to this oasis in the desert and started to do stuff, and then we got funding from the National Science Foundation and started working on all the technical aspects of computer graphics. By the 1970s we were doing cel animations; I was working with Hanna and Barbera… Pixar didn’t start until 1986. Together with prof. Ken Torrance, I set up a measurement lab to measure the behavior of light and the properties of materials and to try to do photorealistic renderings, and a lot of the algorithms that are used today were invented by my students.
Then I went to a SIGGRAPH conference, about five years ago, I think it was in VANCOUVER, and there was a table run by these people from Basel, Switzerland, and they had a VR system where I could put flaps on my hands, goggles on my face and a fan blowing on my non-existent hair, so it was as if the wind were blowing and I could feel like I was flying over the city of Vancouver. And it was so immersive that I said, “I can’t stop now. I have to move into virtual reality”. So I started to teach virtual reality in my courses 5 years ago. I took 100-150 students, had them make their own models, animate some of the characters, walk through these non-existent spaces using high-quality rendering tools… and the students are teaching me now.
So, what are you working on now? (13:31)
A bunch of things. I’m still experimenting. The first set of experiments is about seeing how accurate we can be. There’s a chair over there… let’s assume I model this room and that chair, so that we’re down to centimeter accuracy, actually a quarter of a centimeter accuracy. Then I put goggles on you, so you only see the virtual environment: would you be confident enough to walk over there and sit on that chair? If there is no handle, nothing you can touch to verify that the chair is actually there, would you actually do it? Well, we can do it. I’ve done it. I went there and I sat down. I want to see how accurately we can do that.
https://gfycat.com/orangequaintgaur
Then I got to the fact that we don’t have the computational power to do that fast enough. The geometry is accurate enough, but we also need to make it look realistic, and there isn’t the computational power to do that. So I got into eye tracking: rendering at higher resolution what you are looking at and at lower resolution all the rest. That’s foveated rendering. I’m working on the algorithms for foveated rendering. And it’s working beautifully.
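To make the foveated rendering idea a bit more concrete, here is a minimal sketch of my own (it is not Greenberg’s code, nor the actual algorithm he is researching): the gaze point would come from the headset’s eye tracker, and the shading effort per pixel drops as the distance from that point grows. The block sizes, distance thresholds and toy shading function are all illustrative assumptions.

```python
# A toy sketch of foveated rendering (my assumption of the general idea, not
# Greenberg's algorithm): shade the region around the tracked gaze point at
# full resolution, and reuse one shading sample per 4x4 or 16x16 block in the
# periphery. Thresholds and block sizes are illustrative.
import math
import numpy as np

COARSE = 16  # coarsest block size in pixels (assumption)

def foveated_render(width, height, gaze, shade):
    """Fill an RGB frame, spending full-resolution shading only near the gaze."""
    frame = np.zeros((height, width, 3), dtype=np.float32)
    for by in range(0, height, COARSE):
        for bx in range(0, width, COARSE):
            # Pixel distance of the block centre from the gaze point stands in
            # for the angular eccentricity an eye tracker would give us.
            dist = math.hypot(bx + COARSE / 2 - gaze[0], by + COARSE / 2 - gaze[1])
            # Shade every pixel near the gaze, every 4th pixel mid-periphery,
            # and only once per 16x16 block in the far periphery.
            step = 1 if dist < 150 else 4 if dist < 400 else COARSE
            for y in range(by, min(by + COARSE, height), step):
                for x in range(bx, min(bx + COARSE, width), step):
                    frame[y:y + step, x:x + step] = shade(x / width, y / height)
    return frame

# A smooth colour gradient stands in for an expensive, realistic shader.
scene = lambda u, v: (u, v, 1.0 - u)
frame = foveated_render(640, 360, gaze=(320, 180), shade=scene)
```

Of course, a real implementation would do this on the GPU with variable-rate shading rather than in Python loops; the point is only to show where the saved computation comes from.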
But now I have a lot more computing power, probably because NVIDIA keeps sending me Christmas presents. They’re really nice. Regarding the computation, I have 400 million times more power than when I worked on my first graphics software… and it’s not enough! And I can’t display it fast enough, because the resolution and the bandwidth are not high enough.
I’m now working on going a step further. If you put on the goggles, you can walk around and look at this environment wherever you want, and I can’t render all the possible paths fast enough. So I’m going in the opposite direction of what everyone else is doing: they are trying to figure out how to animate the pictures so they can tell a story, and that’s very hard in virtual reality (because you can’t control the user). I’m trying to ask: how can I use all the possible cues and influences to have you walk a path in this virtual space that you think is your own path, when what I have really done is influence the path you are taking, so that I don’t have to provide you with all the paths? I want to go backwards: I want to find a way of telling a story, or putting in some sort of motion, like pulling a rabbit out of my hat, to influence the path that you will take, and to display high resolution only in the areas you are looking at.
So you can make believe that you are listening to me, that you are looking me in the eyes, but when a beautiful girl walked by the door, you saw that girl walk by… and I was tracking your eyes, I saw that! So I’m trying to figure out what cues I can use to reduce the computational complexity so that I can do this. Since I’m not in the entertainment business, I’m trying to figure out how to use this also for design and communication, for how we can be in different locations and share the same environment. And I’ll need to work on that for the next twenty years (laughs).
That’s really what I am working on.
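To give a rough idea of that “influence the path, render only what’s needed” concept, here is a tiny sketch of my own (absolutely not Greenberg’s system): if the experience can estimate how likely the viewer is to follow each candidate path, thanks to the cues it has planted, it can spend most of a fixed per-frame rendering budget on the probable paths instead of preparing every path at full quality. The path names, probabilities and budget numbers are all made up for illustration.

```python
# A toy illustration (my assumption, not Greenberg's actual work): split a
# fixed per-frame rendering budget across candidate viewer paths in proportion
# to how likely the planted cues make each path, instead of rendering all of
# them at full quality. All names and numbers below are invented.

# Hypothetical candidate paths with predicted probabilities, e.g. estimated
# from cues such as a sound, a light, or a character walking by.
candidates = {
    "toward_lit_doorway":   0.70,  # the strongest cue points the viewer here
    "toward_window":        0.25,
    "back_toward_entrance": 0.05,
}

TOTAL_BUDGET_MS = 12.0  # per-frame shading budget in milliseconds (illustrative)

def allocate_budget(paths, total_ms, floor_ms=0.5):
    """Give every path a small quality floor so nothing is left unrendered,
    then split the remaining budget proportionally to the probabilities."""
    budget = {path: floor_ms for path in paths}
    remaining = total_ms - floor_ms * len(paths)
    for path, prob in paths.items():
        budget[path] += remaining * prob
    return budget

for path, ms in allocate_budget(candidates, TOTAL_BUDGET_MS).items():
    print(f"{path}: {ms:.2f} ms of shading budget")
```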
VR companies are now betting on standalone headsets, which have less computational power than a PC. How can people create compelling experiences even on these less powerful devices? (20:12)
You know, cell phones have more computational power than the big computers of the 80s and 90s, so I am a firm believer that we’ll have exponential growth of computing power. It is not going to be sufficient, but it’s going to be 1000 times faster than what we have now. I know you have questions, and I have a lot of scars all over my body from people telling me why it is not going to happen, but it’s going to happen.
The issue is whether we can get this immersive experience on a smartphone, a tablet, a laptop. Well, I don’t know what the form factor will look like, but if we can make glasses where I can still see your eyes and you can still see the environment, the augmented reality, then we can merge the simulated with the real and you can’t tell them apart while you are still immersed. That’s what’s going to happen. I don’t know what the shape is going to be. I’m anticipating it will be like regular glasses; I have to be able to see your eyes and your mouth. We’re not there yet. What I’m worried about is that a lot of money is going into this sphere from companies whose market expectations are measured in billions of people, and they might not have been informed that the market may only be tens of millions… I don’t know how the economics will turn out.
I’m trying to understand how the brain works, what is necessary to create an image that has sufficient information to be accepted by you as totally real. You can’t possibly see everything that reaches your visual system: you get information from your visual system, but you don’t see it all, the brain absorbs only part of it. That’s beyond my scope.
What do we need to completely trick the brain? (24:46)
We have to start being able to measure stuff. For us, measuring the light that bounces off this material (AN: the one of the couch we were sitting on) is easy today, and measuring the temperature of your body is very easy, but measuring which neurons in our brain are firing, that’s tough.
There is a whole field evolving around measurement at the minute scale. What’s really great is that in neuroscience we are now making neural probes. A neural probe is a device to measure the electrical response of a neuron, a single neuron. Measuring one neuron is useless, because the brain has millions of them. The probe is about one micron thick; a hair is about 80-100 microns in diameter. And now we are at 768 neurons that we can measure simultaneously. Let’s suppose that next year we can double that, so about 1500. Then we should be able to provide a stimulus and see how the neurons respond to it.
This is based on… last year at View Conference, I showed the work of Ramón y Cajal and Golgi (Nobel Prize in 1906)… Golgi was Italian. He developed the stain that lets you see the neurons, the structure of the brain and of the eyes, and the connectivity between the cones of the eye and the brain. They couldn’t do that on a human, but they could do it on mice. This year’s Nobel Prize was given for optical tweezers… you can move a single molecule.
The woman who developed the Pixel Qi display, which is black and white and uses light reflectivity (it is like the display you have in a Kindle: you can read it during the day, and at night you can plug it in and run it at the same energy), is now working with deep infrared light to read the response of the brain. She’s Mary Lou Jepsen. She’s wonderful, I love her. That’s exactly what I’m interested in doing. I called her and asked if she wanted to spend some time with me. She said, “I’ve been following your work for 30 years, so come”. So I am going to spend some time with her.
So, all these things are happening. These are things that are only happening now, today, with regard to the human visual system. The analogy is with what happened in physics: what Newton experimented with in the 1600s, what Faraday did with the dynamo… all those things happened through 300 years of experimentation by physicists and engineers… we are just beginning!
I’m very intrigued by brain-computer interfaces… how much time do we need to get to something like The Matrix, where we can not only read information from the brain but also push information into it? (31:25)
I’m not interested in it. I hope that it doesn’t happen. It will happen, and that’s the problem.
We have a problem in the media today with “alternative facts”… like with the other Donald of the U.S.A…
I’m not going to go down this road, but this is going to create a technology that can be misused, and I’m worried about that. I’m really worried about that. I’m happy to hear you say that you are worried as well. Lots of people will say that, but not everybody in the world will.
I think that while we develop the technology, we’ll also have to educate people about it.
You have a lot of expertise… what advice would you give to people starting now with VR? (34:12)
What’s your degree? (AN: I answered with “computer science engineer”).
Ok, so how many art courses did you take? How many ethics, philosophy, or history courses did you take? (AN: I answered with “none”).
I know… you didn’t have the time. We have to change our educational system, because fields interact with each other… we should not only understand our own discipline, but also the lessons learned in other disciplines. The difficulty we have in the USA is the cost of education. People can’t afford it, so there is a gap between the rich and the poor, and people try to get through college quickly to spend less money. And that’s terrible, that’s a real problem. I would advocate requiring students to take several courses in fields outside their major. I’ve seen it in myself… you never mature. I still don’t know what I want to do when I grow up. But I’m happy to look at things in a different way. I don’t know if this will help other people, but I have had a very fortunate life, inspired by art, architecture and math throughout my schooling.
Do you know ray tracing, radiosity, light fields? They were all created by my students. They teach me now. I’m so proud.
It was a really impressive dialogue. I could truly feel all his expertise and his passion flowing towards me.
Since this interview has been full of interesting concepts, let me summarize some key ones:
- When Greenberg started experimenting with CGI, he had a hard time: people said that it couldn’t work, that it was useless… can you see the parallel with what most people say about VR now?
- Virtual Reality (and Augmented Reality, too) is still a baby. We haven’t arrived yet, we are just at the beginning, so all these discussions about whether VR is a fad are nonsense: it’s like complaining that a baby doesn’t have a degree yet. We probably need 15-20 years to have a really compelling virtual reality experience that is very, very close to reality. There is a very long road ahead, so we must be patient. But VR is here to stay. Forever;
- Augmented Reality is very powerful and will be the next-generation means of communication (this seems in line with what John Gaeta has also said). AR is great because it lets us look into the eyes of the other people in the room we are cooperating with;
- He started with CGI because he didn’t like the computer just giving numbers as output; he wanted visual computer-aided design applications. It’s fantastic how revolutions start from such powerful yet simple considerations;
- He is working on reducing the complexity of the scene that has to be rendered. He is working on foveated rendering, which is quite common nowadays, but he is also experimenting with something more original. He wants to predict what the user’s journey will be: using every possible trick (visual cues, audio, storytelling), he wants to influence the user’s journey through a given VR experience so that, maybe also thanks to some predictive AI, it becomes possible to predict exactly what the user will look at and render only that at maximum resolution;
- In 15 years, we’ll have 1000x the computational power that we have now;
- There may be an economic problem, because some VR companies (cough cough Facebook) expect billions of users, while at the moment there are only tens of millions;
- We don’t have to recreate all of reality exactly, just the signals that our brain decodes. Our brain interprets only a small part of reality, so we just need to recreate that part virtually;
- We have to understand how the brain works to be able to figure out how to trick it. And we are just at the beginning of understanding our brain. But new technologies, like the one being developed by Mary Lou Jepsen, can help this field evolve much faster;
- Brain-computer interfaces are a technology that can be misused… and since they are going to happen anyway, we have to educate people to use them the right way;
- We have to change the educational system so that it gives people an interdisciplinary approach: not only studying one particular topic, but also others that may be somewhat related to it.
And that’s it. I really want to thank Donald Greenberg for this amazing interview and for all that he has done for computer graphics. He’s an amazing man who has fought for CGI, and if we have VR today, it is also thanks to him.
I hope that you liked this interview, and if that’s the case, would you mind subscribing to my newsletter using the form below? Thanks 🙂