
Neuralink announces new BCI: why we VR people should care

Yesterday, Elon Musk finally detailed in a public event the work that his secretive BCI startup Neuralink has done during its two years of stealth mode. What was announced is astonishing, and I want to discuss with you here what was revealed, what I think about it, and what the possible future implications of this work are for AR and VR.

What does BCI mean?

“BCI” stands for Brain-Computer Interface: a hi-tech solution (hardware + software) that lets you read information directly from your brain or insert information directly into it.

So, imagine thinking about a movie that you would really love to watch and having your smartphone automatically launch it on Netflix just because you thought about it. Imagine turning on the lights of your house just by thinking about it, when you come home at night with your hands full of groceries. All of this will be possible in the future thanks to Brain-Computer Interfaces, or Brain-Machine Interfaces (BMI), as some people call them.

BCIs are a very fascinating topic, one that I really love. If you are interested in it, you can read the long deep dive on them (related to AR and VR) that I wrote some years ago.

The problems of BCI

BCIs are still in the early stages of development, and there are lots of problems in developing them.

The first one is that we don’t know how the brain works… we have understood only a small part of all its processes. And creating an interface with something that you don’t understand ain’t easy.

To interface with the brain, you may use invasive or non-invasive methods. The most famous example of a non-invasive BCI in VR is Neurable, a Vive-X startup that lets you use your brain to select objects in VR applications. This is possible thanks to a Vive headset with EEG sensors installed in the back. It feels like black magic and it is cool, but the problem with EEG is that it can only read the averaged signal of millions of neurons together, and only after that signal has been distorted by passing through the skull. So, the applications of EEG are pretty limited and can’t lead us to the Matrix.

Neurable Brain Computer Interface Virtual Reality
Neurable setup: as you can see there is a game that you play with a modified version of the Vive including an EEG (Image by Christie Hemm Klok for The New York Times)

Regarding the insertion of data into the brain with non-invasive methods, there have been some experiments with TMS (Transcranial Magnetic Stimulation), which has already let some people play a dull 2D game without actually seeing it with their eyes.

To overcome the limitations of non-invasive methods, there are invasive methods that actually put sensors inside your brain. This direct contact allows for easier reading and insertion of data (think about the cochlear implant, which injects audio data into the brain), but at the same time, it creates other problems, like the fact that the patient must have their skull drilled, or the risk of infections. Furthermore, invasive sensors can only access the very specific part of the brain they are installed on, and so they lose the general view of what happens across the brain, where multiple areas actually activate at every moment to perform every action.

Researchers are investing a lot in BCI, on both types of methods. Regarding non-invasive methods, that genius who answers to the name of Mary Lou Jepsen is investigating how to use infrared light to read brain data even from neurons deep inside the skull.

As for invasive methods, after yesterday’s talk, Neuralink appears to be one of the most innovative companies on the market.

When the technology is ready, we will have other, non-technical problems haunting BCIs, like various social concerns that we will have to address. One of them is privacy. As my friend Amir Bozorgzadeh of Virtuleap told me: “In BCI, there’s no privacy. The computer is you”. Companies like Facebook may be able to harvest all the data from your brain. And regarding injecting data, governments could put propaganda directly into your brain. That’s why, when I asked Prof. Greenberg about BCIs, he told me: “I’m not interested in it. I hope that this doesn’t happen. It will happen, and that’s the problem.”

As Mister President Alvin Wang Graylin loves to say, the problems that a technology can carry are directly proportional to its benefits. BCI can give us enormous benefits, but it can also lead to dystopian scenarios.

Neuralink vision

Before going on to discuss what was announced yesterday, it is interesting to understand the vision of Neuralink and of its CEO, Elon Musk.

Elon Musk is convinced that the long-term progress of AI (and robotics) is a risk for humanity. Far down the road, AIs will become more intelligent than humans, and so they could take possession of the whole planet, dominate humans, or even eliminate them.

He thinks that a smart way to avoid this Terminator scenario is to stop seeing ourselves as an alternative to AI and to blend with it instead. If we are able to let AI become an additional part of the brain, AI can’t dominate us, because it will be part of us: it will be like an additional layer of our brain.

His vision is that we will become superhumans: when we have to think about how to play chess, we will automatically trigger the AI layer and see what the best move is, but we will still be able to use the other parts of the brain to love other people or to enjoy watching cat videos on YouTube.

I’ve described this vision in more depth in the article on BCIs that I linked above, if you want more details.

Neuralink announcements

In a very long livestream, Neuralink detailed the work that it has done in its two years of existence.

If you want to read everything that was announced, I advise you to read the just-released Neuralink whitepaper (thanks Eloi Gerard for sending me the link!) or this cool article on VentureBeat.

In short, Neuralink has been able to create an innovative invasive BCI technology. Instead of relying on rigid electrodes like the ones mostly used until now, it uses flexible threads full of sensors. These threads are safer to insert into the brain because, being flexible, they cope better with brain movements and cause less bleeding and less scarring. This means less inflammation of the brain and better reading of data from it (because the brain doesn’t create scar tissue around the sensors). These threads are covered with electrodes that are able to detect the activity of the neurons.

Neuralink threads
The threads full of electrodes that Neuralink uses to read your brain data (Image by Neuralink)

To insert these threads into the brain, of course, you need to drill the skull and attach the threads to the brain in a way that keeps them fixed in position. For this purpose, Neuralink has created a “sewing machine”: a robot with a very tiny needle that pushes the threads into the brain surface. It can insert the threads pretty fast and without causing bleeding (it avoids the blood vessels of the brain). Its nominal speed is up to 6 threads per minute. The robot operates automatically, but under the supervision of a surgeon, who can adjust the process if they see any small problems. Drilling the skull is still necessary, but in the future, Neuralink plans to use lasers. The idea is to have a process that is fast and painless… like LASIK surgery for the eyes.
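
As a quick back-of-the-envelope calculation of mine (using the 96-threads-per-chip figure that we will see in a moment, and assuming the robot always runs at its nominal speed), this is what that insertion rate implies:

```python
# Back-of-the-envelope numbers only, not an official Neuralink figure.
THREADS_PER_MINUTE = 6    # nominal insertion speed quoted in the talk
THREADS_PER_CHIP = 96     # maximum threads connected to a single N1 chip

minutes_per_chip = THREADS_PER_CHIP / THREADS_PER_MINUTE
print(f"Best-case thread insertion time for one chip: {minutes_per_chip:.0f} minutes")  # 16 minutes
print(f"Best-case for a 4-chip setup: {4 * minutes_per_chip:.0f} minutes")              # 64 minutes
```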

Neuralink robot
Zoom on the part of the robot that installs the threads into the brain of the patient (Image by Neuralink)
Neuralink needle size
Zoom on the needle of the sewing machine robot. A coin is used for scale (Image by Neuralink)
Neuralink inserting threads into brain
Process of inserting the threads into your brain (Image by Neuralink)

These flexible threads have a diameter of 4 to 6 micrometers and a length of around 2 cm, and each one of them contains 32 electrodes. The threads are connected to a little chip, called the N1, which contains a small ASIC that reads the data from the threads, amplifies it, filters it, and then transmits it to an external processing unit.
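
Just to give you a feeling of what “reading, amplifying and filtering” neural data means in practice, here is a minimal toy sketch in Python (my own illustration with made-up numbers, not Neuralink’s actual firmware or parameters): it band-pass filters a synthetic recording in the typical spike band and detects spikes by simple thresholding.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Toy example: band-pass filter a synthetic extracellular recording and
# detect spikes by thresholding. Illustrative only, not Neuralink's pipeline.
fs = 20_000                        # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)      # one second of signal
rng = np.random.default_rng(0)

signal = rng.normal(0.0, 4.0, t.size)          # background noise (µV)
for spike_time in (0.2, 0.5, 0.8):             # add three fake spikes
    idx = int(spike_time * fs)
    signal[idx:idx + 20] -= 80.0 * np.hanning(20)

# Band-pass filter in the typical spike band (~300 Hz to 3 kHz)
b, a = butter(3, [300, 3000], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, signal)

# Threshold at a multiple of the noise level (robust median-based estimate)
noise = np.median(np.abs(filtered)) / 0.6745
crossings = np.flatnonzero(filtered < -4.5 * noise)

# Merge threshold crossings that belong to the same spike (2 ms window)
spike_starts = crossings[np.diff(crossings, prepend=-fs) > int(0.002 * fs)]
print(f"Detected {spike_starts.size} spike(s)")  # should report the three synthetic spikes
```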

N1 chip prototype
Neuralink’s current prototype chip for experimentation on rats. It is composed of various parts, as you can see in the caption. The part tagged with B is the actual threads we have discussed before. D is the USB-C connector, which you can use for scale (Image by Neuralink)

Currently, the transmission is wired (via USB-C), but in the future, the plan is to make everything completely wireless, so you won’t have cables coming out of your head. Every N1 chip installed inside the skull can connect to up to 96 flexible threads and so get a clear reading of a tiny, specific region of the brain.

To read different brain areas, multiple N1 chips, each with its attached threads, must be installed in the skull. Each one will have a diameter of 8 mm. Neuralink says that, currently, up to 10 can be installed and that the first human patients will probably have 4: 3 in the motor areas and 1 in the somatosensory area of the brain.
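
To put these figures into perspective (the arithmetic is mine, the numbers are the ones quoted in the talk and in the whitepaper):

```python
# My own arithmetic, using the figures quoted by Neuralink.
ELECTRODES_PER_THREAD = 32
THREADS_PER_CHIP = 96

channels_per_chip = ELECTRODES_PER_THREAD * THREADS_PER_CHIP
print(channels_per_chip)        # 3072 channels for a single N1 chip
print(channels_per_chip * 4)    # 12288 channels for the planned 4-chip patients
print(channels_per_chip * 10)   # 30720 channels at the stated maximum of 10 chips
```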

Neuralink cables
Neuralink technology operates on a very little scale (Image by Neuralink)

The chips also already have the logic to write data into the brain, even if, in the current tests, Neuralink is only experimenting with reading data.

These N1 chips will connect (at the beginning through wires under the skin of the head, and later wirelessly) to a pod that can be installed behind the ear. This pod contains a battery to power the whole system and a wireless module to communicate with external software. This communication can be used to configure the system through a smartphone app or to let doctors analyze brain data.

Neuralink setup in brain
The vision of the Neuralink setup. Various N1 sensors are inserted under the skin of the head and communicate via wires with a pod that powers them and transmits the data to external devices for further analysis (Image by Neuralink)

To let you understand how disruptive Neuralink is, I can tell you that the Utah Array, one of the most famous current invasive BCI devices, can read up to 128 electrode channels. The N1 chip can read up to 3072. Yes, that is a 24x improvement, and with a technology that is also more suitable to stay inside the brain (because of its flexibility).

Rats, monkeys and humans

What is the current status of the project?

The company is experimenting with implants on animals. It has already implanted the experimental chip depicted above inside rats and successfully read their brain data. These tests are detailed in the whitepaper distributed by the company.

Neuralink rat implant
Implant of Neuralink technology inside rats (Image by Neuralink)

There were rumors about experiments on primates, and Elon Musk surprised us all by confirming that “A monkey has been able to control a computer with its brain”. We have no further info on this, but it shows that the tech is already close to being implanted in humans.

Regarding us Homo sapiens sapiens, Neuralink hopes to start experimenting with humans at the end of this year, but more realistically expects to do that next year. One of the big problems is bureaucracy: this technology has to be approved by the Food and Drug Administration, and the company has to find some patients interested in having something experimental inserted into their heads.

Of course, the company will start with disabled people: Musk hopes that they will be able to use their computers and smartphones again just by thinking. In the future, thanks to the 4 implanted N1 sensors interfacing with the patient’s movements and sensations, the company hopes to let some kinds of disabled people move their legs and arms again, or to give them smart prosthetics able to properly simulate the lost limbs. You may wonder why there is a need for 3 chips in the motor area and 1 in the somatosensory one: just moving a hand prosthesis without having the sense of touch (offered by the somatosensory area) would be weird and less efficient, and that’s why the additional sensor is needed.
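
To make the reasoning behind the “3 + 1” layout more concrete, here is a deliberately oversimplified sketch of the closed loop that such a setup would enable. Everything in it (names, numbers, the decoding logic) is hypothetical and of my own invention, since Neuralink has not published any API: intent is decoded from the motor-area chips, it drives the prosthesis, and the touch measured by the prosthesis is written back to the somatosensory area.

```python
# Deliberately oversimplified and entirely hypothetical: none of these names
# correspond to a real Neuralink (or any other) API.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class MotorCommand:
    wrist_angle: float  # intended wrist angle (degrees), decoded from motor cortex
    grip_force: float   # intended grip force (0..1), decoded from motor cortex


def decode_motor_intent(motor_activity: Sequence[float]) -> MotorCommand:
    """Stand-in for a real decoder trained on activity from the 3 motor-area chips."""
    # A real decoder would be a trained model; here we just map activity to a command.
    return MotorCommand(wrist_angle=10.0 * sum(motor_activity),
                        grip_force=min(1.0, max(motor_activity)))


def stimulate_somatosensory(pressure: float) -> None:
    """Stand-in for writing touch feedback to the somatosensory-area chip."""
    print(f"stimulating somatosensory area with pressure {pressure:.2f}")


def control_step(motor_activity: Sequence[float], measured_pressure: float) -> MotorCommand:
    # 1. Read: decode what the user wants the prosthetic hand to do
    command = decode_motor_intent(motor_activity)
    # 2. Act: the prosthesis would execute the command here (not modeled)
    # 3. Write: feed the touch sensed by the prosthesis back into the brain
    stimulate_somatosensory(measured_pressure)
    return command


print(control_step(motor_activity=[0.2, 0.5, 0.1], measured_pressure=0.8))
```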

Why should we XR enthusiasts care about this technology?

Some people say that AR is the ultimate computing platform, because it will merge us with technology. Actually, BCIs are the ultimate computing platform, the one that will let us and technology become a single thing.

BCI is the technology that will make the Matrix possible. There’s no way that people will wear sensors all over their body to realistically simulate all the sensations of a virtual world. The only way to obtain this is to create those sensations by directly injecting them in only one place: the brain. By stimulating the brain to simulate all the possible sensations that every inch of your body may feel (heat, cold, pain, touch, etc…), it will be possible to recreate virtual worlds that feel perfectly real. So, BCI is the ultimate form of Virtual Reality.

And yes, BCI can also create some weird situations… if you have watched Striking Vipers, you know what I mean 😉

And even before getting to the Matrix, BCI can help us interact better with the virtual world. Typing on a virtual keyboard is one of the pain points of AR and VR, and currently no one has found a valid solution for it. If we could just think about the words we’d like to type, it would be much more comfortable, and it would maybe let the “Word” of AR become a reality. Virtual typing is also possible with EEG to some extent, but technologies like Neuralink may increase the accuracy a lot. Musk hopes that Neuralink will let people type up to 40 words per minute.

Not to mention problems like locomotion in VR or VR sickness: if we can interact with the parts of the brain connected to the vestibular system, we can solve a lot of problems that currently haunt virtual reality.

Since the first targets of Neuralink will be disabled people, we can also think about accessibility. If Neuralink lets people operate not only PCs and smartphones but also virtual reality systems, in the future we may have disabled people playing with us in Rec Room. They could have their hands back in VR: by reading what the motor areas of the brain want to move, it could be possible to recreate the hand movements in VR, and to provide feedback thanks to the stimulation of the somatosensory area. That would be fantastic: even if the technology may not allow them to move in the real world yet, they could move and have fun in virtual reality, for a far better quality of life. I really hope that Elon Musk will think about this scenario.

In the end, I also think that we should care because technologies do not exist in a void, but in an ecosystem where multiple technologies cooperate. That’s the spirit of Charlie Fink’s latest book, for instance, which shows how AR can explode also thanks to advancements in Artificial Intelligence and 5G. I am sure that BCI is another cool technology that will be added to the mix sooner or later.

BCI can improve the UX of AR/VR, as we have seen… and 5G may help in streaming brain sensations… so people will probably be able to stream sensations to the brains of other people far away. You could feel what other people feel. Furthermore, BCIs will allow us to better understand how the brain works (since they collect data about it), and this will let us discover better neuroscience tricks that we can apply in AR/VR to create a stronger sense of presence with our systems.

Are we close to Sword Art Online?
sword art online
Cover of a book related to Sword Art Online (Image from Wikipedia, probable copyright of ASCII Media Works)

After all this journey, you may think that we are close to having full virtual worlds or to having a brain implant in our heads.

Well, no. I know that I am a bad person since I’ve let you dream so much… but now it is time to wake up and see reality for what it is. We are nowhere close to the Matrix or Sword Art Online. We are instead very far away: the Neuralink people clarified this during the live event.

As I’ve explained before, there are still many inherent technical, social, and bureaucratic problems for BCI. Neuralink can connect to up to 10 tiny areas of the brain, not to the whole brain. It can’t inject information in a reliable way yet. The road in front of it is very long.

We already have AR headsets on our heads, but we see mainstream adoption of AR as 5-15 years away. Imagine how much further you have to push the horizon of mainstream adoption for a technology that is not even ready for humans. And if we say that VR is still not successful because it forces the user to wear something bulky, imagine what we can say about a technology that requires you to drill your skull!

For mainstream adoption of BCI, we need benefits so big that users will accept some holes in their head to get them. Disabled people are the first target market for Neuralink because they want to regain their abilities so much that the drilling is an acceptable drawback for them, in exchange for getting some functionality back.

We also need a technology that is safe to wear for years. Neuralink has still not demonstrated how many years such an implant can survive inside a hostile environment like the human brain. And the installation robot has a success rate of around 90%… I think that no human will accept that number until it gets close to 100%.

Brain Computer Interface Virtual Reality
Beautiful representation of the brain and its neurons (Image taken from Wait But Why)

This shows you that going from experimental hardware to a product that is ready for mainstream adoption is a very long road. As I highlighted in my previous article, reaching the point of BCIs installed in healthy people will take a lot of time, because there is a lot of research to be performed in so many fields. Musk is confident in having the first results in 8-10 years, while his engineers mention timings of even 25 or 50 years. And I’m talking about decent results, about interesting invasive BCIs installed in healthy people, not the full Matrix-style BCI in every person that I’ve described. So, we have a long road in front of us. The actual timing will depend on how fast the industry evolves… investments and disruptive discoveries will define the actual timeline.

Neo, your time has not come yet. But it is now closer.

(Header image by Neuralink)


