
All you need to know about Brain Computer Interfaces in Virtual and Augmented reality

For a long time I’ve been very curious about Brain Computer Interfaces: I think they’re the future of human-computer interaction and I would really love to experiment with them, especially in conjunction with the technologies I love the most: Virtual Reality and Augmented Reality. The recent showcase by Neurable at SIGGRAPH 2017 increased this interest of mine even more, so I thought I should research the topic thoroughly and write an epic article about it.

While doing my research, I actually got a bit depressed, because I found that Tim Urban of Wait But Why has already written an incredible 38,000-word article on the topic, so there’s no way for me to beat that epicness. That’s ok: as the Chinese saying goes, 人外有人，天外有天 (there is always someone better out there).

My face after having read that awesome article (Image from Know your meme)

So, I strongly advise you to read that article (really, it will take you hours, but it will give you a complete introduction to BCIs from all sides: how the brain works, which kinds of BCIs we have, which ones we will have in the future, etc.). And now I will try writing mine: I will be satisfied with writing even a decent article on the topic, highlighting the uses in XR technologies.

What is a BCI?

BCI stands for Brain Computer Interface; it is a synonym of BMI, which stands for Brain Machine Interface. The meaning is exactly the same: an interface that lets the human brain communicate with an external device, such as a computer.

Imagine if a computer were able to read your mind, so that if you think “Open Microsoft Word”, the computer actually opens Word without you even touching the keyboard or the mouse. It would be pretty cool. This could happen with a brain computer interface: if you have a device inside your brain that reads what you’re thinking and communicates it to the computer, you could interact with your computer just with your thoughts, and use your hands only to smash the keyboard when Word crashes and you lose your last 4 hours of hard work. This doesn’t apply only to computers in the sense of PCs, but to any device: smartphones (you could talk with Siri with your mind), cars (you could start the engine just by thinking about it), etc. It’s a pretty cool technology and there are many companies working to make this magic happen.

If you want a famous movie example, think about The Matrix: the plug in the back of your head is a BCI that you use to enter the virtual world of the Matrix.

Who wants a plug in their brain just to open Microsoft Word without using the mouse? (Image by Instructables, from the movie The Matrix)
Why is this important for AR and VR?

Because to exploit the full potential of XR we have to exploit the full potential of our brain.

Let’s think about Virtual Reality: at the moment we’re only able to simulate 2-3 senses decently: vision, hearing, and touch. And I say decently because we’re far from a perfect emulation: the commercial VR device with the highest resolution available, the Pimax 8K, is far from reaching the performance of human vision (close to 16K per eye). The remaining two senses, smell and taste, as I’ve already discussed in my dedicated articles, are just fields for experiments, and we’re really in the early stages. Recreating many other sensations is also hard: we don’t have a way to convey heat or, for instance, pain to the VR user. Furthermore, VR suffers from the problems of motion sickness and of finding a proper locomotion method.

The Digital Taste Lollipop. This is the best device we have right now to simulate taste in VR. It seems pretty rough to me… (Image by Nimesha Ranasinghe)

A lot of people dream of entering the world of full dive immersion as in Sword Art Online, but with current technology this is impossible. And even in the future it may be problematic to recreate complete immersion just by using sensors and actuators on our body: to have haptics all over our body we would need a haptic suit, but wearing one is awkward. To experience smell and taste we would need devices in our nose and mouth. To have complete VR, we would have to wear devices on every single part of our body… this is unfeasible.

The ideal solution to create full immersion would be interfacing directly with the brain: instead of stimulating all the parts of the body to create all possible sensations, it would be more efficient to stimulate only one, that is, our brain. For instance, we could tell the brain to feel heat if we’re in a game set in the Sahara desert, and so we would feel our whole body heating up and sweating as if we were really there. If we manage to do this, we could have complete immersion inside virtual reality, as in The Matrix… a bit scary, but also the dream all of us enthusiasts have (apart from the “machines taking control of the world” part of the story).

To be a bit less visionary and more concrete in the short term, reading the brain would be very important in psychological applications of VR: when treating a particular phobia inside VR, the BCI could detect the stress levels of the user and adapt the experience accordingly. Having a BCI would also solve one of the biggest issues of VR interfaces: input. Typing on a virtual keyboard is a really bad experience… entering words just by thinking them would be a great step forward for XR user experience. Then, of course, there are the marketing scenarios, detecting the engagement of users when they see a particular product.

The keyboard of the V dashboard app. Typing with the VR controllers is really really difficult.

Similar considerations hold for AR. But there is a further step: since in the future we’ll wear AR all day, having an interface that spares us from continuously moving our hand in the air to perform the air tap (as on the HoloLens) would make the AR experience much more comfortable. Furthermore, there could be an AI analyzing what we see through our glasses; this AI could be connected to our brain, continuously analyze what we think, and use all this data to give us contextual suggestions (e.g. it detects that I’m hungry and that next to me there is a bakery that makes awesome croissants: the AI could trigger a notification on my glasses advising me to go there and eat croissants… Nom nom nom…). This could empower augmented reality a lot.

If there were a brain computer interface connected to me right now, it would sense that I want to eat this croissant-based breakfast!

Having direct communication with the brain would allow us to do amazing things with XR technologies… so, why aren’t we already doing that? There are various reasons… one of them is that we must first know our brain well.

How much do we know about the brain?

Wait But Why gives us a detailed representation of how well we know our brain:

An accurate representation of our knowledge of the brain (Image by Wait But Why)

Our brain is one of the most complicated organs of our body, and it is also one of those we know the least about. Quoting Tim’s post:

Another professor, Jeff Lichtman, is even harsher. He starts off his courses by asking his students the question, “If everything you need to know about the brain is a mile, how far have we walked in this mile?” He says students give answers like three-quarters of a mile, half a mile, a quarter of a mile, etc.—but that he believes the real answer is “about three inches.”

There are various reasons for this. One of the most fascinating ones is that we’re studying the brain using the brain itself.

A third professor, neuroscientist Moran Cerf, shared with me an old neuroscience saying that points out why trying to master the brain is a bit of a catch-22: “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.”

So, if the brain were simple enough to understand, we would have such simple intelligence that we couldn’t understand it anyway. That’s intriguing.

Anyway, the practical reasons why it is so complicated to understand our brain are:

  • It is made of around 100 billion neurons;
  • Every neuron is connected to 1,000-10,000 other neurons in a complicated network through which they exchange information to create all our memories, feelings, actions, etc. Multiply this number of connections by the 100 billion neurons and you obtain a network that is extremely complicated to understand completely;
  • There are not only neurons: inside this intricate mess there are also other kinds of cells that help neurons do their job, plus the blood vessels. In the end, there are bazillions of microscopic elements even in a single cubic centimeter of brain;

    A beautiful representation of the brain and its neurons (Image taken from Wait But Why)
  • Connections change over time: if the brain were static, we would always stay the same our whole life. But we learn, adapt, and change behaviors, and this happens thanks to little changes in our neural network: new connections come to life, others die out, others change their electrical characteristics. This means that we have to study an organ while it is evolving;
  • Every brain behaves in a different way: basic functions are the same, but, for instance, how the word “Tree” is memorized in the brain is different from person to person. Unluckily, there’s no UTF-8 standard anywhere in the brain. For me, the word “Tree” is associated with the Italian word “Albero”, while for a native English speaker it may be associated with the first time their father showed them a tree and said “This is a tree”. Different memories, different connections, different storage techniques in the brain;
  • The brain is very delicate, and few people are comfortable with having a researcher drill a hole in their head just to run some studies (if you want, you can volunteer… personally I’m ok with research going slowly);
  • We don’t have adequate means to study the brain: we would need something that doesn’t require drilling the user’s skull like Swiss cheese, that has enormous coverage (able to read data from the whole brain, so as to understand all the interconnections and what is happening at a large scale), high spatial resolution (able to see even a single neuron) and a very quick response time (as soon as an impulse happens, it can be detected immediately). We don’t have anything even remotely close to this. Even drilling the head, we aren’t able to get a good analysis of what’s happening inside our brain.

We have a good understanding of how a single neuron works and a decent one of how the brain operates at a high level. We also know the purpose of various zones of the brain (e.g. that the cerebellum’s purpose is to make us walk without thinking about the single actions of moving the legs). We even have some good knowledge of some of these zones: for instance, we have a rough map of the motor and somatosensory cortex. This means that we know which areas of the brain activate to move the different parts of the body and which areas are used to feel sensations on different parts of the body. At least it’s something… and it lets us make prostheses that disabled people can move with their brains. But apart from this, we know very little.

Mapping of the parts of the body to the brain areas dedicated to their movement and to feeling their sensations (Image taken from Wait But Why)

This complexity explains why we currently don’t have the knowledge to create a truly effective BCI. But this doesn’t mean that we can do nothing…

What kind of BCIs can we have?

BCIs are of two kinds:

  • Interfaces that read information from the brain;
  • Interfaces that inject information into the brain.

The first kind is the most popular right now in the XR ecosystem, since it’s easier to implement… but there is something to say about the second type, too. Of course, to have full dive VR we need both.

The most popular BCIs currently used in VR are EEG-based. EEG, that is, electroencephalography, is a technology that lets you read brainwaves from outside the skull. Basically, you put a cap with various sensors on your head and these sensors can magically read your brainwaves. Companies like the already mentioned Neurable work by integrating EEG sensors into virtual reality headsets.

Neurable setup: as you can see, there is a game that you play with a modified version of the Vive that includes EEG sensors (Image by Christie Hemm Klok for The New York Times)

EEGs have two enormous problems:

  1. Spatial resolution is terrible. With EEG it is only possible to analyze the behavior of a group of neurons together. And by “a group”, I mean something on the order of millions or billions. This means that we can analyze areas of the brain and nothing more;
  2. There’s the skull through which we read the data. The sensors read the electrical activity of the neurons after it passes through the skull. And since bone is a bad electrical conductor, the information that flows through it gets blurred and delayed.

That’s why Jack Gallant, head of UC Berkeley’s Gallant Neuroscience Lab, doesn’t believe much in EEG applied to VR. In an interview with The Guardian, he declared: “It’s conceptually trivial but just about impossible to do. The problem with decoding EEG signals from outside the brain is that the skull is a horrible filter. It’s a bridge too far in my experience.” He also added that the process involves vast computing power and is prohibitively expensive in both time and money. So, EEG is able to make us perform nice things inside Virtual Reality, but nothing more. It’s the present, but not the future.

A sample of human EEG with prominent resting state activity – alpha-rhythm (Image by Andrii Cherninskyi – Own work, CC BY-SA 4.0)
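
Just to give a concrete flavor of what an EEG pipeline typically does, here is a minimal sketch in Python (not tied to any vendor’s SDK) that estimates alpha-band power from a single raw EEG channel. The sampling rate, the synthetic signal and the band edges are illustrative assumptions, not values taken from any specific headset.

```python
import numpy as np
from scipy.signal import welch

# Illustrative assumptions: one EEG channel sampled at 256 Hz for 10 seconds.
# A real headset would provide raw samples through its own SDK or streaming protocol.
fs = 256                          # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)

# Synthetic stand-in for real data: a 10 Hz "alpha" rhythm buried in noise.
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 10e-6 * np.random.randn(t.size)

# Estimate the power spectral density with Welch's method.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

# Sum the PSD over the alpha band (8-12 Hz) to get a single scalar index.
alpha_band = (freqs >= 8) & (freqs <= 12)
alpha_power = np.sum(psd[alpha_band]) * (freqs[1] - freqs[0])

print(f"Alpha band power: {alpha_power:.3e} V^2")
```

This single scalar is the kind of coarse, aggregate information EEG can realistically give you today: enough to distinguish “relaxed” from “concentrated”, not to read which word you are thinking of.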

To overcome these problems we would need to go back to the idea of drilling the skull and getting directly into the brain. We can put micrometric needles inside the brain and thus have a direct channel to a little area of it (Local Field Potential recording) or even to a single neuron (Single Unit Recording). That’s amazing. But again, we have two problems:

  1. We have to drill holes in our heads. Again, I’m not that comfortable with this idea;
  2. We can interact only with a little area, so we miss the big picture of the brain. Neuralink’s engineers state that to have a good BCI we need to interact with at least 100,000-1,000,000 neurons, while with these methods we are currently at around 100-1,000. To interact with the whole brain we would need the Swiss cheese scenario for our skull… and I’m not that comfortable with this idea, either. When it comes to Swiss products, I prefer chocolate to cheese :).
Local Field Potential array: it contains 100 needles and each one of them interacts with a little area of our brain (Image taken from Wait but Why)

So, again, we’re stuck. That’s why a lot of research is needed: we have to find an efficient way to interact with at least 100,000 neurons, and then evolve it until we eventually reach all 100 billion. There are lots of technologies being studied (even nanobots injected into the blood vessels, as Max suggested to me), but there’s no proper solution right now.

What seems certain at the moment is that invasive solutions are the future. This may seem super scary, but if you think about it for a moment, there are already people with electrical devices in their bodies: for instance, those who have a cochlear implant. And they live pretty well with that, so this could become possible for everyone.

There are two further technologies that I want to talk about:

  • EMG (electromyography): instead of reading information from the brain, we could read it from other parts of the nervous system, like the nerves and muscles we use to move our limbs. Here the gist is simple: instead of trying to read complicated information from the brain, we could use an armband to simply intercept the signals that the brain is sending to the limbs or to the other parts of the body it is going to move. A simpler task that can be useful for lots of applications (a minimal sketch of this kind of processing follows after this list);
  • TMS (transcranial magnetic stimulation): basically, a coil placed outside the skull induces electrical currents inside some areas of the brain. Long story short, it lets you stimulate the brain, altering its state, so it could be used to inject information into the brain.
This image shows how TMS works: a big coil above your head generates currents inside your brain (Image by Eric Wassermann, M.D)
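
To make the EMG idea a bit more tangible, here is a minimal sketch of the classic processing chain: rectification, low-pass filtering to get an activation envelope, and a naive threshold to fire an input event. It illustrates the general technique, not the actual algorithm of any product; the sampling rate, cutoff frequency and threshold are made-up values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(emg, fs, cutoff_hz=5.0):
    """Rectify the raw EMG and low-pass filter it to obtain the muscle activation envelope."""
    rectified = np.abs(emg - np.mean(emg))      # remove the DC offset, then rectify
    b, a = butter(4, cutoff_hz / (fs / 2))      # 4th-order low-pass Butterworth filter
    return filtfilt(b, a, rectified)            # zero-phase filtering, so the envelope has no lag

# Illustrative assumptions: one forearm EMG channel sampled at 1 kHz for 2 seconds.
fs = 1000
t = np.arange(0, 2, 1 / fs)
burst = (t > 0.8) & (t < 1.2)                   # simulate a 400 ms muscle contraction
emg = 0.05 * np.random.randn(t.size) + burst * 0.5 * np.random.randn(t.size)

envelope = emg_envelope(emg, fs)
gesture_detected = envelope > 0.1               # naive threshold: high envelope -> "input event"
print("Contraction detected:", bool(gesture_detected.any()))
```

Commercial armbands obviously do something much more sophisticated on top of this (multiple channels, machine-learning classifiers to recognize gestures), but the raw material is exactly this kind of signal.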

And remember that it is not only a technological problem. Even if we had the technology to communicate with the brain, we would still have to understand how the brain works in order to communicate with it properly. On this point, Neuralink’s engineers are a bit more optimistic. They say that if we have a way to read the output of the brain, we can plug that into some AI algorithm and from this better understand the internal behavior of the brain. Using these discoveries, we can create better technologies that interact with the brain, through which we can get more precise output data to be fed to another AI… creating a virtuous loop for BCIs.

Ok, so… now we have a decent idea of what is possible today. Which companies are working in the field right now?

What are some notable companies/experiments in the field? And what are they achieving?

Making an exhaustive list of the companies in the BCI field would be impossible for me. So I’m going to concentrate on the most famous ones, especially in the XR field.

While looking for these companies on Google, I interestingly found that there are various startups working on BCI and virtual reality, while none is working with augmented reality. There is some research work on this topic, but no company saying on its website “Hey, we do BCI with AR! Investors, give us your money!”. So, my dear readers… it is your turn to enter this field 😀

Just to mention some notable names:

  • Facebook. Some months ago, during an event, Regina Dugan talked about Facebook’s studies on brain interfaces. Apart from the amazing marketing opportunities, Facebook is particularly interested in studying how to let users write texts just by using their brains. This would allow people to share statuses on all the XR versions of Facebook very easily. “In a few years, Facebook hopes to have a system that allows people to type with their thoughts five times faster than they now type using a smartphone keyboard”. She says that something like that is closer than we think;
  • Valve. Some months ago, in a Reddit AMA, Gabe Newell said that he’s very interested in BCIs. This suggests that Valve is experimenting with the topic, most likely to integrate it into its virtual reality solutions;

    Will Half Life 3 be controlled only by our brains?
  • Palmer Luckey. In a tweet some time ago, he complained about the lack of people willing to have a chip implanted in their head for experiments. I still don’t know why he doesn’t use his own :D;
  • Neuralink. This name may not mean much to you, but if I say Elon Musk, you surely know who I’m talking about. Elon Musk started Neuralink with the purpose of creating an interface that makes computers interact directly with the brain, using a device implanted inside the brain to perform this connection. He has gathered some of the best people across all the fields required for this venture (software developers, electronic engineers, neuroscientists, etc.) with the dream of creating an AI layer inside our brain. I’ll explain his vision better in the last part of this article. There’s no product out yet; they’re still in stealth mode;
  • Kernel. A venture very similar to Neuralink, it was founded by Bryan Johnson, the founder of Braintree (a payment solution; if you consider that Elon Musk is one of the founders of PayPal, which is a competitor of Braintree, things become very fascinating). They’re working to create the first neuroprosthesis that can mimic, repair, and improve our cognition;
  • Paradromics. This company is working on BCIs for disabled people, with a product expected in Q1 2018. They are working especially on the bandwidth of the BCI: our brain can output a lot of information and there must be a technology able to handle the transmission of all this data (their product runs at 8 Gbit/s);
  • Neurable. We all know Neurable: using a modified version of the HTC Vive that includes EEG sensors, they’re able to let people play brain-controlled games in virtual reality. At SIGGRAPH they presented Awakening, a little VR game in which the player has to escape from a laboratory using his telekinetic powers. The experience starts with a training stage in which the user is asked to look at some items, so that the system can learn which brainwaves are triggered in response to the various items. This way, while the user plays, the system can detect which object the user is concentrating on, as well as when the user is in a state of concentration. They also use eye tracking to detect what the user is looking at, so that inside the game the user can look at an object, think about selecting it, and magically select it (a rough sketch of this kind of selection logic appears after this list). Very cool, and that’s why they got a lot of attention from the press. As I’ve said, EEG is not the ideal way to read the brain, and that’s why Neurable has invented some new algorithms that are better at interpreting the thoughts of the player. But, given what we’ve already said about EEG, this approach most probably won’t let them detect complex thoughts, only certain fixed states;
  • Arworks. Using a solution very similar to Neurable’s, it has proposed MindControl VR, a framework where you activate objects by concentrating on them. Again, using EEG it is possible to detect whether the user is in a concentrated state while looking at an object and thus select it. Some time ago I tried an application that measured my concentration state using brainwaves (not made by Arworks) and I can assure you that it is not that natural yet. For instance, if while looking at an object you start thinking about some memory from the past that the object reminds you of, your concentration drops to zero (memory activates different brainwaves than concentration) and so you don’t select the object anymore;
  • EyeMynd. Again, EEG plus VR to give input to virtual reality experiences. Its founder Dan Cook told Digital Trends: “We’re creating a brainwave technology for controlling VR. We think we have the brainwave code cracked at a very deep level. We’re using the mathematics of quantum physics to build a new type of deep brain learning software — not reliant on neural networks — that works really well for human brainwaves. Even though only a very small amount of information is able to make it through the human skull, it is still good enough that we can animate human avatars using brainwaves.” He predicts that brain-controlled VR headsets will cost around $100 in the upcoming years (I really hope so!) and that they’ll be able to create a brain-controlled operating system. We’ll see!
  • OpenBCI. A Brooklyn startup that tries to create an open ecosystem around Brain Computer Interfaces. They’ve run some successful Kickstarter campaigns for the creation of brain-reading devices. There’s also a page on their website where they present WaVR, a technology to use brainwaves with virtual reality. The demo video is not that clear, but if you look at the presentation, you can understand their vision better. Of course, I love their open approach; it is exactly what enthusiasts need to experiment in this field;
  • MindMaze. MindMaze got recent attention because it added Leonardo DiCaprio to its board of investors (cool, eh?). This company has been working for years on using brain activity for healthcare applications, but DiCaprio saw some potential for the moviemaking field as well. Currently, MindMaze is taking more of an EMG approach: they put tiny sensors on the foam of the VR headset that touch your skin and try to read the electrical impulses that travel through your facial muscles. This lets them understand the facial expression you’re making, and this can be used both for animating the face of an avatar (this is why DiCaprio saw potential in this company) and for detecting the emotions of the user (if you’ve ever seen Lie to Me, you know that from facial expressions you can detect a lot about people). The magic of their mask lies in the fact that they can “predict” the facial expression you’re going to make before you actually make it. This way your VR avatar can show it at the exact moment you perform it in real life, with no lag;

    A VR headset fitted with MindMaze sensors. The startup is receiving huge investments, so its technology must be really awesome (Image by MindMaze)
  • CTRL-Labs. This company makes a bracelet that you wear and that tries to detect the inputs you’re going to make with your hand by intercepting the brain signals through EMG. This lets you perform tasks in games and other applications without having a controller in your hand. During a demo they gave to Wired, one of the founders started typing without a physical keyboard under his hands, and it worked! In VR this would be incredible. What I like about this company is their practical approach: instead of dreaming about reading brainwaves in the future, they know that today reading the impulses the brain sends towards the hands is far easier and has more immediate practical uses (e.g. typing on a smartphone without actually using your hands), so they’re betting on this. I also love their vision of exploring all the potential of our brain: they want to investigate whether our brain is deeply wired for 5 fingers or could theoretically control more (but uses only 5 because our hands only have 5 fingers). Discovering that the brain can control an arbitrary number of fingers would allow us to use super-hands in computer applications, and this would open an enormous world of opportunities. Think, for instance, of a 20-finger hand that can type super fast! Of course, I also envision particular Japanese porn using this… but that’s another topic!
  • LooxidLabs. It offers a mobile VR headset with embedded eye tracking and EEG sensors, so that you can detect the psychological state of the user. The kit is quite expensive, but it got a lot of attention at CES 2018;
  • Openwater. This company is working on performing brain scans with a cheap but accurate device: they claim it is able to get better results than an MRI scan of the brain. “It is a thousand times cheaper than an MRI machine and a billion times higher resolution,” says Mary Lou Jepsen, founder of Openwater and former exec at Oculus.
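
To make the “select by concentrating on what you look at” mechanic used by Neurable and Arworks a bit more concrete, here is a very rough sketch of the interaction logic only: gaze picks a candidate object, and a sustained concentration index confirms it. The concentration value is assumed to come from an EEG pipeline like the band-power example above; the threshold, dwell time and object names are made up for illustration, they are not anyone’s actual parameters.

```python
from dataclasses import dataclass
from typing import Optional

CONCENTRATION_THRESHOLD = 0.7    # assumed: normalized 0..1 index coming from the EEG pipeline
DWELL_TIME = 1.5                 # assumed: seconds of sustained focus needed to confirm a selection

@dataclass
class SelectionState:
    target: Optional[str] = None  # object currently hit by the gaze ray
    dwell: float = 0.0            # seconds of sustained concentration on that object

def update_selection(state: SelectionState, gazed_object: Optional[str],
                     concentration: float, dt: float) -> Optional[str]:
    """Advance the selection logic by one frame; returns the selected object, or None."""
    if gazed_object != state.target:
        # Gaze moved to a different object: restart the dwell timer on the new target.
        state.target, state.dwell = gazed_object, 0.0
        return None
    if gazed_object is not None and concentration >= CONCENTRATION_THRESHOLD:
        state.dwell += dt
        if state.dwell >= DWELL_TIME:
            state.dwell = 0.0
            return gazed_object   # selection confirmed
    else:
        state.dwell = 0.0         # focus lost (or distracted by a memory!): reset
    return None

# Example: a 90 fps loop where the user keeps staring at a hypothetical "door_key" object.
state = SelectionState()
for frame in range(300):
    selected = update_selection(state, gazed_object="door_key", concentration=0.85, dt=1 / 90)
    if selected:
        print("Selected:", selected)
        break
```

This also shows why the experience feels fragile today: as soon as the concentration index dips (for example because the object triggers a memory), the dwell timer resets and the selection never completes.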

And I also want to mention two interesting experiments:

  • In November 2016, an American team managed to make people navigate inside a tiny 2D world using only a BCI. Notice that the cool part is not that they moved the character using their brains… it is that they knew where to move it thanks to information injected into their brains. They couldn’t use their eyes, and through TMS the scientists injected into their brains the information of whether their avatar had a wall in front of it or not. This way they could move the character inside a 2D map without any clues other than the ones injected into their brains. This is just an experiment, but it shows how we’re already capable of creating little games that work by inserting information into our brains. You can find the full article here, if you’re interested;

    These are the maps in which the users could move just using info injected into their brains. As you can see they’re very, very simple, so this is just an early-stage experiment (Image by Frontiers Media)
  • In 2016 another experiment gained the attention of everyone in the VR field: scientists took 8 chronic spinal cord injury (SCI) paraplegics and put them through an innovative rehabilitation program. They used an exoskeleton controlled by brainwaves (so if the patients thought about raising a leg, the exoskeleton raised the leg for them) and a Virtual Reality environment (so, for instance, patients could feel as if they were really walking inside a garden). These simulations made them feel as if they could walk again. After 12 months, all patients had improvements in their ability to move and 50% of them went from a diagnosis of full paralysis to a diagnosis of partial paralysis. They started to feel sensations in the lower part of their body again and even to make some muscular movements. Brainwaves and VR can help people walk again!

So, just to summarize what companies are currently able to do using BCIs in XR:

  • Detecting your emotions;
  • Detecting the object in the scene you want to interact with;
  • Detecting something you’re thinking about, knowing there is a finite set of choices;
  • Detecting inputs you want to do with your hands (or other limbs).

Then there are companies working on a futuristic BCI where everything becomes possible.

With which devices can I experiment right now in the field?

If you’re getting passionate about BCIs while reading this article (I hope so, it’s an interesting field!), what you can do now is buy a devkit and start experimenting. Most probably you don’t have Elon Musk’s money (if you do, contact me and let’s create something awesome together!), so let’s see which affordable BCI devices are on the market:

  • Neurosky. This is the company that makes the brainwave readers most used by hobbyists. For around €200 you can get a Neurosky device on Amazon and start toying around with brainwaves read through EEG sensors. You can combine it with a headset like a Cardboard and make interesting applications. Microsoft MVP Sebastiano Galazzo, for instance, used it to evaluate the concentration of skeet shooting athletes while they compete;

    Me trying a BCI application at a DTC event. Microsoft MVP Sebastiano Galazzo made a game that measured concentration and I was really terrible at playing it! He said that maybe I’m a psychopath…
  • Emotiv. Similar to Neurosky, Emotiv offers solutions to read brainwaves through EEG. The two devices offered on their website have $300 and $800 price tags. The second one should offer a high-resolution reading of brainwaves;
  • WaVR. I’ve already talked about this in the previous section. The shop page on the OpenBCI website offers various options and you can buy an EEG headset for $349. It’s interesting that, being an open ecosystem, they offer devices for both EEG and EMG brain computer interfaces;
  • Neurable. On their website, it is possible to request a devkit of their Virtual Reality BCI headset. I don’t know the price, but you can contact them and ask;
  • LooxidLabs. If you have a lot of money, you can get the devkit of their emotion-scanning headset.

So, you have various solutions at various prices to start entering this marvelous world. The only problem with the various brainwave headsets is how to make them stay on your head together with the VR headset… but with a bit of creativity, you can overcome this difficulty.

Emotiv Epoc: with this you can read and analyze high resolution brainwaves (Image by Emotiv)

Once you’ve made a prototype, you can ask for an investment and try to become the next BCI unicorn! A recent report by Crunchbase says that investments in BCI startups have dramatically increased and that last year the total VC investment in this kind of company was around $600 million.

What is the vision for the future?

As we’ve seen, in the short-term future we’ll have EEG devices that will let us interact with AR/VR experiences just by using our brain. Using eye tracking, brainwaves and AI we’ll be able to select and click objects without using our hands; we’ll be able to give simple commands and maybe even type words without messing around with controllers. Detection of our emotions will be used by advertisers like Google and Facebook to measure the engagement of ads (and this is scary), but it will also be incredible for social experiences (e.g. detecting that a person is feeling harassed can help him or her play safely in the social environment).

Sooner or later, there should be a switch away from this tech and we’ll have to start using chips inside our brains. It’s scary, I know; that’s why at the beginning these technologies will be used by disabled people, who are willing to try anything possible to be healthy again. That’s why both Neuralink and Kernel are targeting these people first and aim at helping people with paralysis, brain cancer or other conditions. As I’ve already said, most of the BCI technologies already out there that require something implanted inside the body are currently used on disabled people (blind or deaf people, for instance).

When we are able to create a decent implanted BCI that poses no risk to the user, the technology can start becoming more widespread and target healthy people. At first, innovators and rich people will experiment with it (as happens with every innovation, like AR and VR themselves). Then the technology will become cheaper and safer and will spread: everyone will use it naturally. At the beginning there will be social resistance, like for every new tech, but then no one will be able to live without it.

What will this chip be useful for? The idea of Musk and the others is to create something like a new layer of our brain that connects us to other people and to AI.

Our brain can be thought of as made of three layers: the first one operates on a more animal level (eat, sleep, have sex, etc.), the second one handles more complicated tasks typical of mammals, and the third one contains the intelligence that makes us truly human (mathematics, abstract thinking, ethics, etc.). Of course, we use our brain naturally and we don’t think about which layer produced whatever decision we’re taking. Sometimes we have love issues that make our brain and heart fight a battle, and that’s when we realize we’re made of different components.

Elon’s idea is to create, through a chip, a fourth layer that connects us to other people (so, brain-to-brain interfaces) and to an AI (so, brain-computer interfaces). This fourth layer should be used naturally, just as we use the other three. To convey the idea: when you try to remember who Julius Caesar was, you search inside your brain for that information. You don’t know how this search happens… you just think. In Musk’s vision, when you make this search, it can also happen in collaboration with other brains and with a Google search, for instance. When you manage to remember the life of Julius Caesar, you’ll just know that you remembered it, not how this search happened.

This will augment the full potential of the brain, since it could also exploit artificial intelligence. Musk also thinks that this could save mankind. If we develop an AI separate from us, it could overtake us. But if the AI is part of our brain, if we are the AI, there is no battle anymore, because we’re all on the same side. We become like super-humans and the Terminator scenario ends.

Furthermore, it will allow us to collaborate without using an inadequate medium like language: why try to explain the emotions of having seen a beautiful landscape (it’s impossible, even with 100,000 words), when you could send those emotions directly to your friends through a brain-to-brain interface? It would be incredible. Enabling brain-to-brain communication will enhance our ability to communicate to unthinkable levels: we could communicate fast, sending each other images, emotions, thoughts, etc.

So, all brains connected together and with artificial brains. Every one of us could have superpowers.

Bandwidth of different actions. Notice how writing and talking are really slow compared to how fast the brain can work. If two brains could talk directly instead of passing through such inadequate means, our communication would be much more efficient (Image from Wait But Why)

Of course, there are scary things to consider:

  • First of all, I don’t believe we’ll ever make a common AI that belongs to all of humanity. There will probably be many different AIs from different companies in different countries. And… what will be the price of such services? Will rich people be able to access more artificial intelligence than poor ones? And… most importantly… who’s going to control those AIs that enter our brains?
  • What about hacking? What if someone can enter your brain and modify who you are? Or force you to vote for a certain candidate, or even to kill yourself?
  • What about privacy?
  • What if the BCI breaks or short-circuits?
  • The third layer of the brain controls most of our lives and is what makes us identify as ourselves. If we add a fourth, AI layer that is smarter than the other ones, it will start defining who we really are, so we could lose our identity in favor of an AI.

Anyway, these are problems that will unfold over the years. Reaching the point of BCIs installed in healthy people will take a lot of time, because there is a lot of research to be done in so many fields. Musk is confident about having the first results in 8-10 years, while his engineers mention timelines of 25 or even 50 years. And I’m talking about decent results, not a full BCI in every person as I’ve described. So, we have a long road in front of us. Actual timelines will depend on how fast the industry evolves… and on whether someone driven by human stupidity nukes the planet in the meantime.

About XR implications: at first, we’ll use XR devices in conjunction with BCI devices, as we’re doing now… but when the technology advances further, the BCI will include the functionalities of XR devices. For instance, it will be able to inject images as if they were seen by the eyes and sounds as if they entered the ears. Headsets will become useless. Maybe we’ll discover that the brain can handle images and sounds far better than the ones provided by our eyes and ears, and we could have super-crisp images, for instance. A direct bridge into the brain could give us a super-brain and super-senses, maybe.

And the final future?

Our brain goes faster than our body: it is like a CPU that finds all the other components of the PC damn slow. The body is a limitation for our brain, which with a brain interface could unleash all its potential, like in a dream, like in VR. Why would we still need to have a body?

Futurama predicted it: we could live only with our heads (Image from Futurama Wikia)

The final stage of full dive VR is The Matrix. We could enter a simulation with our whole body, perceive it as real life, and inside this simulation do whatever we want. We would have endless possibilities, since in a simulation there are no rules to follow. So it could even be more than The Matrix: in The Matrix people are still just people in a physical-looking world… why should we keep that? We could fly… we could have six legs… as long as the brain plus the AI can think of something, that thing can happen. In the end, BCIs will make VR possible without a headset… just by injecting sensations into our heads. VR could become our life. Or our lives… running on a computer, we could live various lives in parallel, who knows. It all depends on what this super-brain can handle. And maybe, thanks to the smart AI coordinating us through our fourth brain layer, we will start thinking about living in love with each other, so that we’ll stop killing each other as we have all these years. It’s a mind-bending thought, but it could be the final evolution of humanity. One proposed answer to Fermi’s paradox is that we haven’t met aliens yet because they don’t need to live in the physical world anymore: they evolved so much that they live in full dive VR all the time. This could be the case for us as well.

Maybe it will be an amazing future. Who knows; I’m still too much in love with this simple and crazy world. When in Deus Ex I had the opportunity to give full control of humanity to a wise AI that could make the Earth a perfect and peaceful place, I preferred to make humanity return to the Middle Ages by activating a technological blackout (which destroyed the AI, too). I love being so deeply human, free to be myself.

“Yesterday we obeyed kings and bent our necks before emperors. But today we kneel only to truth…”
– Kahlil Gibran


And that’s it. I hope you’ve liked this super-long article, since it took me ages to write it (and I had no AI helping me!). To support me and show your appreciation, please share this article on your social channels and subscribe to my newsletter using the form on the right!


Disclaimer: this blog contains advertisement and affiliate links to sustain itself. If you click on an affiliate link, I'll be very happy because I'll earn a small commission on your purchase. You can find my boring full disclosure here.

2 thoughts on “All you need to know about Brain Computer Interfaces in Virtual and Augmented reality”

  1. Yet another great post! But that one from the first link… OMG a whole damn book on BCI

    Would also mention the MYO armband (https://www.myo.com) in case you didn’t know it. It’s based on the same idea as the bracelet used by CTRL-Labs, capturing signals through EMG sensors in an armband. I purchased one of those almost as soon as it was released a few years ago and at the time it was pretty useless from a consumer perspective, the way they were marketing it. I have played a bit with it as a developer though, and what’s good about it is that you can easily get the raw data coming from the muscles and do whatever you like with it. As you mentioned in the post, the hard part is understanding that data in order to recognize gestures and trigger specific actions. Btw that makes me think I should dust it off and try it again now, maybe they have done some software improvements which make it more reliable (they hadn’t released new hardware afaik). The set of recognized gestures at the time was very limited and it triggered a lot of false negatives and false positives, but I remember successfully using it to control Spotify on my mobile just with hand motions while commuting (rotate hand right to play the next song, rotate hand left to play the previous song, grab gesture and twist to the sides for controlling the volume, etc).

    (quick note: there’s a typo in the “Every brain behaves in a different way” section, it should be “word” instead of “world” in a few places. Anyway, it made me search for the translation of “Albero”, so now I know how to say “Arbol” (Spanish) in Italian! :P)

    1. Yes, I know about the MYO, but as you said it had some issues. One great thing that the engineers at CTRL-Labs say is that their device should be like a mouse, working 100% of the time. No one would ever use an input system that works 90% of the time, it would be annoying. I’ve seen people using the Myo to change slides in PowerPoint and then resorting to a remote because it didn’t work well ahahahah. It was very innovative for its time, though… I hope they’re still in business.

      Thanks for the tip, I’ve corrected them. The post was so long that Grammarly crashed and didn’t help me spot the typos. You’ve been precious, as always.
      By the way: did you know that I studied Spanish for one year? 😉

