On the occasion of Metaverse Safety Week (which I invite you to check out), I had the amazing opportunity to speak with Kavya Pearlman, the head of XRSI, who is fighting for fundamental rights like privacy and safety in the metaverse. We talked about the sweet spot between human rights and technological advancement, how to guarantee a safe environment for children in immersive worlds, and what we hope the metaverse of the future will look like. You can find all of this in the video below and in the (slightly edited) transcription. Enjoy!
Tony: I want to ask you if you can introduce yourself to the people who are reading this interview. You are a superstar in our environment, but maybe there is still someone who needs a recap of your story.
Kavya: For sure, Tony, and that's very kind of you to introduce me as a superstar. I consider myself more of an information security researcher. I am constantly trying to learn these evolving and emerging technologies, and then trying to put a collective, critical perspective on them. While part of the world is moving fast and trying to innovate without really thinking about the consequences, we've taken it upon ourselves to do that thinking.
I started XRSI in 2019 and took it upon myself to bring collective human intelligence to really pay attention to what could possibly go wrong. And if something could go wrong, we should proactively ask whether there is something we can do, from a multi-stakeholder perspective, to reduce some harm before it happens and potentially mitigate some risk.
That's where you get X Reality Safety Intelligence. We call ourselves human intelligence in the loop, and we claim you should really consult with us when you're innovating. It's been almost five years now, and we have been focused on so many issues that are so important for emerging technologies. We'll dive into all of that later, for sure.
Tony: Yes, very cool. I'm a bit curious: why privacy and safety in the metaverse? Why did you start chasing this goal? How did your career in this field start?
Kavya: Well, I do have to go back to my experience with the very first virtual world, Second Life. Even before that, I think the real interest in the consequences of ignoring the risks [related to technologies] started when I was over at Facebook, which is now Meta, as their third-party security advisor. This was in 2016, during the US presidential election. I was trying to build a scalable third-party safety and security model… we were trying to figure out what all needed to be checked, security-wise, when somebody came through the front door.
Anyhow, that experience of the whole 2016 election, as everybody saw… there was a lot of misinformation and disinformation… and a lot of cybersecurity issues came about. It opened my eyes to what happens when you ignore the risks of these massive systems that have the potential to influence humanity and democracy, and even undermine our elections. It was like a shift in my thought process.
Right after that, I was hired by Linden Lab, which, as some of you may know, is the creator of Second Life, the very first prototypical metaverse. This is where that perspective expanded… we're now talking about people representing themselves as virtual avatars, with the ability to be anonymous, assume any identity, and do transactions using virtual currencies. The very first virtual currency was the Linden Dollar, and with it, we discovered that you have to comply with monetary regulations and account for money-laundering possibilities.
When two people come together in the virtual world, they can have all sorts of experiences. Second Life offered a tremendous amount of freedom, which included sexual experiences and all kinds of gaming experiences. That brings up its own set of challenges and problems… and try explaining that to the regulators [laughs]. We had a revolving door of regulators.
All of it informed me that our world is moving towards a more immersive Internet. Now we call it the metaverse, which is still building up and evolving. That’s when I was like, “You know what? Somebody really needs to think about these issues very carefully because this is going to impact all of us.”
Tony: I want to challenge you a bit about safety and privacy. I'm a tech guy, a developer (you know it very well because we collaborate on a few things), and sometimes there are compromises I have to make in what I create because of privacy or safety. Let me give you an example: you talk about safety in social worlds, like Second Life. But if you put too much attention on safety from the beginning, then people don't feel free to do what they want. They will probably leave the virtual world because it's even more restrictive than real life. At the same time, if you don't put restrictions, people will start engaging in sexual harassment, pedophilia, and whatever other horrible things they may come up with. How can we find a sweet spot between the two?
Kavya: I think this is where we are quite unique as a nonprofit. Instead of just saying "Don't do this", "Ban that", "We must not do this", we are researching technologies to find that balance. Let's take an example, because what you're saying is so important. There are two things here: one is the ability to scale and to innovate in a timely way, without too many hurdles. The second is really allowing multi-stakeholder folks to weigh in on those decisions. You're making trade-offs for a billion people, or, let's say, a hundred people that could scale to a billion later on. When do you introduce safety? What level of safety should you introduce? The right approach is not saying "Hey, let's put all these 60 controls over there so we don't have any problems"; that's not the way to build technologies. Because if you put 60 controls in place, the user is completely stifled: "I have to put a safety bubble around me. Oh my God, I can't even connect to any person unless I drop the safety bubble."
Safety and privacy are an art and a science. You have to learn the science and the technology, but then there is this art piece where you consult multi-stakeholder experts, which is what XRSI does, to inform you where those safeguards should be, to what level, and when they should be introduced. For instance, if you have 40 people and it's a closed community, you can pretty much rely on self-moderation. You don't really need kick, ban, and all of that.
But when you have over a thousand people in an immersive environment, that's when we need to think, "Hey, we've got five incidents of harassment per day. That could lead to a bad reputation. Maybe it's time to think about introducing some kick, ban, mute, or other types of safety controls."
This is, again, a balance and a trade-off. You can't be completely anonymous and safe at the same time: if you're completely anonymous, the safety people have no visibility into what you do. Somebody has to make that trade-off.
When you make that trade-off, some people will be marginalized, like the journalist community or vulnerable populations. What we're trying to do is avoid this broad brush, with all these decisions coming just from the company's terms of service. That's not the way to go. You have to keep collective, multi-stakeholder human intelligence in the loop that is informed by the real consequences of the technology and still allows innovation. It's not like, "Hey, no, we must do this so that we can stop any issue from happening." We have to allow innovation to happen and introduce these controls in a timely manner, when necessary.
Yes, I think that was a very important question, because most of the time developers fear bringing in a safety and privacy person, believing they're all heavy hammers: "Let's not do this, let's not do that." That's not how we should be approaching these emerging worlds.
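To make this graduated approach concrete, here is a minimal sketch, in Python, of how a platform might scale its moderation tooling with community size and incident rate. The thresholds come straight from Kavya's examples above (a closed community of about 40 people versus over a thousand users with five harassment incidents per day), but the function and its values are purely illustrative, not XRSI guidance.

```python
# A minimal sketch of graduated moderation: heavier controls appear only as
# community size and observed harm grow. Thresholds are illustrative only.

def moderation_controls(population: int, incidents_per_day: float) -> list[str]:
    """Pick safety controls proportionate to scale and measured incidents."""
    if population <= 40 and incidents_per_day < 1:
        return ["self-moderation"]  # small closed community: trust the group
    controls = ["mute", "personal-boundary"]  # lightweight defaults
    if population > 1000 or incidents_per_day >= 5:
        controls += ["kick", "ban", "report-to-moderators"]  # escalate
    return controls

print(moderation_controls(40, 0.0))    # ['self-moderation']
print(moderation_controls(5000, 6.0))  # full escalation
```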
Tony: You mentioned the services that XRSI offers. People know that I work at VRROOM, a company making social VR concerts, and we are partners of XRSI. I can say that when we have worked together, your expertise has been very valuable to me: the advice you have given us has always been on-point and very useful. I just want to say that I really appreciate what you do for us and for the other companies in the field, because you offer very valuable advice.
Now let me ask another provocative question, on the privacy side. Everyone's hype is on artificial intelligence. Apart from the various "Terminator will kill us all" scenarios, there is also the problem of training data. For instance, some people say that China may be ahead of other countries just because it has no strict privacy laws, so big AI companies there can build huge training sets with which to advance the technology. Sometimes privacy, safety, and other good values somehow slow down technological advancement or even prevent it. If you "move fast and break things", like Mark Zuckerberg used to say, you can be very fast… but of course, this has consequences. Again, where is the sweet spot between technological advancement and preserving a good life for people?
Kavya: I personally don't have the answer. I really don't know. This is the reason why we are bringing so many global experts, many governments, and many multidisciplinary human rights advocates, child safety advocates, and policy people from all around the world to Metaverse Safety Week.
What I do know, especially concerning artificial intelligence and this economic power imbalance, is that the United States says "Hey, let's be very careful," and Europe is like, "Hey, we need regulation and stuff." On the other side of the planet, people do not have such stringent regulation and they're moving fast. This could create an economic imbalance in the overall global structure. I do know that's happening. […]
We are talking about brain-computer interfaces being able to extract data directly from the brain and then pipe it through augmented reality devices to do very quick real-life profiling and whatnot. What I'm personally invested in learning, because I really don't know where the balance is, is THAT balance: THAT conversation about emerging and immersive technologies and what it should look like.
What should regulation look like in these immersive worlds? What should regulation look like as AI, BCIs (brain-computer interfaces), augmented reality, and virtual reality evolve? All of this is converging to create a much more immersive internet, where you don't look through a screen but interact through avatars. You have the industrial metaverse being built… what controls should those things have technologically? Should we have controls on robotics that converge with generative AI? Because so far, we haven't set the rules of engagement for many things: what are the rules of engagement between humans and AI?
I see so many reports, but none targeting those very emerging, evolving systems, processes, and policies that we need answers about. This is why we have Metaverse Safety Week coming up, from the 10th to the 15th of December. We will have this assembly and run multiple roundtable discussions with some of the top data protection and human rights professionals… all these multi-stakeholder experts. Maybe, Tony, if we are lucky, we might get at least some baseline understanding of where we should all pay attention. We still would not have all the answers, but the goal is to try to find the very answer you're asking about.
Tony: Since you mentioned the Metaverse Safety Week, why don’t we speak a bit about it? Can you repeat when it is happening and what people can expect from it? Also, how can people attend it?
Kavya: Metaverse Safety Week is an annual safety awareness campaign directed at immersive and emerging technologies. It happens every year, starting on the 10th of December, which is Human Rights Day, and ending on the 15th of December: a whole week's worth of activities. We invite all of the communities, governments, and global policymakers, mostly targeting the people who could take these concepts to global citizens, to their constituents.
We ask senators to participate. A couple of years ago, we had Senator Mark Warner talk about how he plans to provide support around these technologies. Last year we had US Representative Lori Trahan. We had the eSafety Commissioner of Australia. What we're trying to do is influence world leaders, professionals, and organizations like Meta.
We run the campaign really trying to create a positive experience around leadership, and then to share some of this leadership responsibility, which I've taken upon myself along with so many of the advisors we have at XRSI, to really prioritize safety. And it's not just about building controls; it's from the perspective of: how do we build trust? How do we build trust in these environments?
It's a campaign to create a safe and positive experience for global citizens. How do we protect vulnerable populations? We divide it into five days with different themes, and then we invite all of our stakeholders: platform providers, creators, educators, and people like you. We are inviting journalists from the Washington Post and various other outlets covering these technologies, and we'll have these unique discussions. We'll publish a post-roundtable report: something you can take away, and it will live on.
This year is the most accessible edition. We used to do this in virtual reality, where we hosted a conference-like agenda; this year, we are only doing Zoom events. We will have three and a half hours of discussion every day via Zoom. Anybody from all over the world can log in and observe, or, if they have something to say, they can add their voices via chat, et cetera. We will have pre-approved contributors. We will have statements from some of the world leaders I mentioned. It is going to be fantastic. Again, this will help us answer the questions you raised, like "Where is the balance?" or "How should we be thinking about all these things?"
Tony: Okay, that’s good. How can people find more information about it? Is there a website about it?
Kavya: Yes. The website is www.metaversesafetyweek.org. There is a very simple roundtable entrance form: you can fill it out and send it. If you're an organization, you can sponsor us, and there is another form for that. We certainly need finances, support, and sponsorship, but if you are an organization like VRROOM, or a smaller one, we don't want to let that be a constraint. Be a community partner: send your representatives, send your developers, send people who have something to say or something to learn, and just be a part of the overall agenda.
Several organizations and governments are looking to adopt Metaverse Safety Week as well. Hopefully, what we anticipate for the future is that this becomes a global phenomenon. The responsibility is not just on XRSI: we just started it, and people adopted it. It's like National Cybersecurity Awareness Month: there are many such weekly campaigns, about privacy, et cetera. Last year, we even had the cyber director policy person from the White House. It goes to show that the reach and the impact are far and wide, and that the campaign specifically targets the people who will make the policy decisions that will impact global citizens. Hopefully, your presence could inform them better than just us few experts. That's the goal.
Tony: That's wonderful. I think these events are very important because they make people with different points of view speak together. Especially since many experts in the field can teach important lessons to people like me, who are in the XR space but are not experts in privacy and safety. I love this event. One of the important things to speak about is our future. There is a lot of talk, for instance, about the new generations, the people who will be metaverse natives, who are the children of today. And there is the problem of adults who sometimes don't want children in their virtual worlds. On the other side, there is the problem of the safety of these children, because there have been cases of harassment. Let's start digging into this topic. What do you think is the current situation of children's safety in the metaverse?
Kavya: It's a very profound challenge that we have, and it almost requires the entire world; a lot of us are going to need to get this right. First of all, we do have a special track: December 12th is dedicated to child safety and children's rights, and it is co-hosted by UNICEF. I couldn't think of a more credible organization to co-organize such a roundtable session, where we are inviting, again, people who are involved in preventative policymaking and companies that provide technology, like Yoti, with age-appropriate, age-assurance kinds of technologies.
We are inviting certain senators and policymakers from around the globe who are involved in safeguarding children. There are people from the OECD, the Organisation for Economic Co-operation and Development (about 38 countries are members). We have Standards Australia. All of these are multi-stakeholder. If you go to the website and look at the agenda, you'll notice that the discussions are really about what we need to safeguard children from in this AI-augmented world.
As I said earlier, the artificial intelligence conversations are happening, but we're not talking about their impact on these emerging realities. So: how do we safeguard children? Children are not just going to venture into these worlds; they're going to create these worlds, and they're also going to interact with AI chatbots, artificial intelligence beings, et cetera. Whose responsibility should it be when these emerging playgrounds are the places where children hang out? How do we prevent harm and enable opportunities for young people? This is the discussion that we should have.
Then, on that very day, we'll make a call to action to everybody: to parents, to guardians, to big tech companies (we'll have representatives from Meta's policy team), to policymakers… how are we going to safeguard children? Currently, the approach to lawmaking is generally that when the harm happens, then you can seek some redress, like "Hey, I need to be compensated"; but this should be different. We need to be preventative. I'm going to cite Julie Inman Grant, the Australian eSafety Commissioner: she talks about safety by design. Australia is leading that safety-by-design conversation and leadership, and that's what needs to happen. On that day, we will be able to dive into this very perspective.
One other unique thing, Tony, and I think you will appreciate this one, is that during the entire Metaverse Safety Week, each day, we will also do something called a Swarm AI Intelligence Gathering. We will open up some questions to anyone who is attending, any contributor or observer. We will raise some targeted questions, and the multi-stakeholder input will be collected through a Swarm AI exercise, which gathers input from a group to understand what the ideal response should be. It answers the question "What does the group want?" It will be very interesting. This is the first time we will use this sort of swarm AI in the context of making decisions around metaverse safety, children's safety in the metaverse, or human rights in the metaverse. I'm excited about that part. All those results will then go right into the post-roundtable report, where everyone who contributed will also be cited and attributed. It's a remarkable agenda. Very unique. We are always using technology to experiment towards the solution, so this is yet another experiment that I'm looking forward to.
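For the technically curious, here is a toy illustration, in Python, of the basic idea behind such a group-intelligence exercise: every participant ranks some options, and a simple Borda count surfaces what "the group wants". The actual Swarm AI platform converges on answers in real time and works very differently; the option names below are invented for the example.

```python
# A toy stand-in for group-input aggregation: a Borda count over rankings.
# This is NOT the real Swarm AI algorithm, just the basic idea of turning
# many individual inputs into one collective answer.

from collections import defaultdict

def borda_consensus(rankings: list[list[str]]) -> str:
    """Each participant ranks the options; the highest total score wins."""
    scores: defaultdict[str, int] = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for place, option in enumerate(ranking):
            scores[option] += n - place - 1  # top choice earns the most points
    return max(scores, key=scores.get)

votes = [  # hypothetical stakeholder rankings of child-safety measures
    ["age-assurance", "parental-controls", "ai-moderation"],
    ["parental-controls", "age-assurance", "ai-moderation"],
    ["age-assurance", "ai-moderation", "parental-controls"],
]
print(borda_consensus(votes))  # 'age-assurance'
```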
Tony: That's very exciting. Just continuing the discussion about children… sometimes people ask, "What kind of world are we leaving to our children?" I would also add, "What kind of DIGITAL world are we leaving to our children?", because there will now be a continuous mix of different realities and different intelligences. Everything will be more fluid between the real and the digital. There's a lot of talk about a dystopian future, like in the video by Keiichi Matsuda, or movies like Terminator and things like that. Is there a way we can escape from that, in your opinion? Or are we doomed because, the moment there are cameras everywhere on our faces and AI controlling us, it can't end well? I know it's a weird prediction to ask for, but what does Kavya think about that? I'm very curious.
Kavya: Yeah, I am actually waiting to receive my Ray-Ban Meta glasses so that I can capture some of these moments, of course from a research perspective. The one specific thing that is possible is to find the balance. We don't have to say, "No technology, ban the technology"; we just have to establish trust. I remember when the first version, the Ray-Ban Stories, was launched, I was actually in Milan, and I was mugged, twice. Right then, I was like, "Damn, where are those AI glasses when you need them the most?" When I went to the police to inform them that my phone was stolen, it was like ten o'clock, and these police people in Milan… I have an actual document and article on this… somebody interviewed me and I talked about this… they literally said they couldn't help me. I was like, "Here's my phone, I can see it, you can come with me and we can catch the thief," and they didn't help me. The next day, they denied it. They said, "Oh, no, you were never here" or "I never said anything." I even went to the police chief.
All the while, I'm thinking about my personal experience… I'm originally from India. I spent 23 years there, growing up as a female in India, and safety for me was a real challenge: I could take one wrong turn and end up in a really bad situation. When the glasses were launched, I was thinking, "Man, this could be a lifesaver." We're currently a heads-down society, zoomed into the information on our phones, but we could become a heads-up society.
With the correct design, we could gain a sense of awareness of our surroundings. We could record the critical moments: think about Rodney King and George Floyd, those types of video recordings. If they didn't exist, there would have been no revolution around these things. There is a very crucial aspect of safety that is enhanced by the use of these technologies. The only question we have to settle is: where is the balance between oversharing and not sharing, and who makes those decisions?
Let's go back to the Stories glasses. Why can't a contextualized AI have information and make the decision that, "This is a bathroom, don't record; this is a bedroom, don't record"? We do that with a Roomba: we draw boundaries like, "Hey, these are the boundaries, stay within them," and you can teach it that. With VR, we have safety boundaries that we create. In these immersive realities, which are inevitable, we have to utilize artificial intelligence algorithms to act based on our preferences. I don't prefer to record my kitchen. I don't prefer to record the bedroom. Private spaces could remain private, but we have to be able to trust the device, the company that's making it, its terms of use, and its policies. That's why we're sitting with them. That's why we're trying to find that balance: because if we are not involved, if we don't do this, those decisions will be made anyway, by people who either don't care or simply don't understand. That's the unique role of XRSI: we've got to first understand the technology, and then we've got to critically inform them of where the right balance is.
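As a thought experiment, the contextual "don't record here" behavior Kavya describes could look something like the following sketch, where a hypothetical on-device scene classifier gates the camera according to user preferences. Every name here is an assumption for illustration; nothing reflects how any real glasses work.

```python
# A minimal sketch of a scene-aware recording gate. classify_scene() stands in
# for a hypothetical on-device vision model; the labels are illustrative.

NO_RECORD_SCENES = {"bathroom", "bedroom", "kitchen"}  # user-chosen private spaces

def classify_scene(frame: dict) -> str:
    # Placeholder: real glasses would run an on-device scene classifier here.
    return frame.get("scene_label", "unknown")

def should_record(frame: dict) -> bool:
    """Record only when the recognized scene is not on the user's private list."""
    return classify_scene(frame) not in NO_RECORD_SCENES

print(should_record({"scene_label": "street"}))   # True
print(should_record({"scene_label": "bedroom"}))  # False
```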
Another example I can give you: do you remember when Meta mandated the Facebook login? Mandating a login ID, a federated ID, over time… it's going to be billions of people using these devices. That was so wrong. I was in a closed-door conversation with Nick Clegg, and I said to him that this was unacceptable. At that point, one of their privacy policy people was like, "Hey, we have to find the balance." I said, "Sure, let's find the balance, but we have to find the balance together." We must not ignore minorities, people like me, whose identity, once lost, makes it a dangerous world if I step into certain demographics or certain countries like Iran, China, or even India for that matter. I'm a Muslim convert. There are several identities and cultures… all issues that could put people of color, minorities, and different demographics at risk. Hopefully, we are on track to establish that, to figure out how we can use these technologies not to just say, "No, I don't want this" or "I don't want technology in my life," but instead to embrace them with trust, like, "Oh, I can actually trust this company." It would be ideal if we could reach that.
Tony: Yes, that would be great. Of course, it's difficult to trust after the many problems that happened in the past with some of the companies working in this field, so let's see how things turn out. I want to take a little jump into the past with you. You said you were working at Linden Lab during the Second Life hype and its consequent success, and anyway, even if Second Life is not the newest of the metaverse platforms, it still has a good number of very passionate people in its community. Some people started working on the metaverse at that time, and now they say that we are forgetting the errors we made in the past: we're repeating them and retelling the same story instead of evolving beyond it. Do you have the same impression? If yes, in your opinion, what lessons should we still learn from the Second Life past?
Kavya: Wow. That's the question that I asked on day one when I went into Linden Lab. I have to say, I'm not one of those early pioneers: I came in right after 2016. That's still early for some people; it was when GDPR was about to be introduced, in 2018, and I remember what a nightmare that was. Even though I wasn't an early pioneer, on day one I created my accounts, on Twitter, Sansar, and so on, thinking to myself, "Oh my God, this is a platform with 16 years of unique legacy. I need to be like a sponge and learn all that has gone wrong."
They had reputational issues. They had really messy situations happen. It was a fertile ground for experimentation, for extreme inclusivity. And what happens? People are furries, they're cats, they're dogs, and they're doing something called "age play", which is borderline pedophilia. There were just these complex issues from cultural, policy, and technology perspectives.
A simple example: people share links, and, cybersecurity-wise, they also share malicious links. How do you know if a link is safe or not? We would run it through our system, and a link validated through the system would have a completely different color. We would use technology to indicate, "Hey, this is a safe link" within the chat, in real time. These are the unique issues that I was there to learn about.
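For readers wondering what such real-time link vetting might look like, here is a minimal sketch: chat links are checked against allow/block lists and given a display color. Linden Lab's actual validation pipeline is not public, so the hosts and logic below are assumptions for illustration only.

```python
# A minimal sketch of chat-link vetting: validated links get a distinct color,
# known-bad links get flagged, and everything else is treated with caution.

from urllib.parse import urlparse

VALIDATED_HOSTS = {"secondlife.com", "lindenlab.com"}  # illustrative allowlist
BLOCKED_HOSTS = {"malware.example"}                    # illustrative blocklist

def link_color(url: str) -> str:
    """Return the color to render a chat link with."""
    host = (urlparse(url).hostname or "").lower()
    if host in BLOCKED_HOSTS:
        return "red"    # known malicious: warn or block
    if host in VALIDATED_HOSTS:
        return "green"  # vetted by the system
    return "gray"       # unvetted: treat with caution

print(link_color("https://secondlife.com/events"))   # green
print(link_color("http://malware.example/payload"))  # red
```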
The one thing that I can share, which I realized working even on their VR platform, Sansar, is that we have to slice and dice these things. We can't paint with a broad brush; we have to contextualize these issues. Something that applies to a social VR platform may not necessarily apply to a platform used for medical treatment, and even then, it depends on the kind of surgical treatment or training simulation.
Different aspects have to apply to different contexts. There was so much to learn from Second Life, and there still is. There is a framework that some of the product people and I started to think about to deal with this problem, with this revolving door of regulators: what kind of safety, privacy, and cybersecurity controls should we apply when people connect? That's the social element, and it's completely different.
When you connect, what happens is disinformation and misinformation; people will harass each other in real time; people will have biases against each other. Those are one set of issues. When you create, and you are just creating a model and stuff, somebody could introduce a malicious script: the risks are different. When you create, the risks are different; when you connect with people, the risks are different; but then when you introduce the element of money and commerce, you have money laundering, you have microtransactions, you have all these other issues.
I started this at Linden, but it has become a concept that I evolved further together with XRSI advisors and our community. Now we have the Privacy and Safety Framework, which builds on that foundation, and even on our definition of the metaverse. What I say is: we've got to have these ethos, standards, policies, et cetera; we've got to secure the infrastructure; we've got to secure the "create" component; we've got to secure the "connect" component. All of these things have been really informative in how to approach these worlds.
Slice and dice these worlds, contextualize them, use technology to make those decisions, and bring multi-stakeholder folks in to make decisions around terms and policies, et cetera. Everybody should be studying those years of data and really researching them, if they haven't.
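To put the "slice and dice" idea in a developer-friendly form, here is a rough map of the components Kavya lists, paired with the example risks she names in this very interview. It is purely illustrative and is not the text of the official XRSI framework.

```python
# A rough, illustrative map of risk contexts per metaverse component,
# drawn only from the examples mentioned in this interview.

RISKS_BY_COMPONENT = {
    "infrastructure": [],  # named as a layer to secure; no examples given here
    "create": ["malicious scripts injected into user-made models"],
    "connect": ["harassment", "misinformation and disinformation",
                "bias", "malicious links in chat"],
    "commerce": ["money laundering", "microtransaction abuse"],
}

def risks_for(component: str) -> list[str]:
    """Contextualize safeguards: look up the risks for the component at hand."""
    return RISKS_BY_COMPONENT.get(component, [])

print(risks_for("connect"))
```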
Tony: One last question: how do you imagine the future? If you could shape it, how would you imagine the metaverse, say, 15 years from now?
Kavya: Oh my gosh. I try to avoid the "future" questions; I even try to avoid the whole "futurist" label and stuff. But in a perfect world, the one thing that I would like to foresee is our ability to trust these technologies and the companies that are making them. Hopefully, in the future, I wake up with my AR glasses on and, instead of looking down at my phone… I always check the weather when I wake up… they just inform me what the weather is like and what kind of clothes I may want to wear today, et cetera. A super-intelligent system that I can immerse myself in without even worrying about where my data is going to end up. Is somebody going to create an awful avatar of Kavya Pearlman and turn it into some kind of pornography or something? It might happen. If it does, I want to know I'm protected, just like we can trust our credit cards: if something bad happens with our credit cards, I simply call the company and say, "Hey, I lost this much money," and they just wave it off. American Express simply does that, just like that.
That much trust: a relationship where this company is going to take care of me, where I'm not just a user or a consumer being exploited for data, but somebody being nurtured and cared for. And lastly… honestly, I'm just about to make the LinkedIn announcement about some personal news: I am now seven months pregnant and soon going to bring a whole…
Tony: Congratulations!
Kavya: Thank you… a whole human consciousness into this emerging world. Talking about safeguarding the future, that vision of the future just got a little more personal for me. Because of this, I would like to see a world where my child is learning, has opportunities, engages with technology, and feels safe just by design. Hopefully, that's the future, the one I anticipate, and that's why I'm so invested in making sure it's the future we have once all of this fully materializes.
Tony: Oh, congratulations again on this beautiful announcement. I'm very happy for you.
Kavya: Thank you.
Tony: I have just one last question. It's the usual one I ask at the end of every interview: whatever else you want to say… if there is something that didn't come out during this interview but you want to say to the people reading this article, now is your time to say it.
Kavya: Thank you for that. I think I've said it enough, but I can never say it enough: get engaged. When we talk about this Metaverse Safety Week campaign, when we talk about the very existence of XRSI, the purpose is to involve people, inform people, and get them engaged in helping shape the future that we want to live in. We don't want a future built only by the tech bros; we don't want a future built by VCs; we want a future built by all of us, who come from different backgrounds, including children.
That would be my one call to action: just don't roll your eyes and throw your hands up in the air. There is certainly hope in getting invested in trying to safeguard our future; there is certainly a purpose in that. If this purpose resonates with you, then go to xrsi.org right now and sign up to be a volunteer, to be a supporter, to do anything you can to inform us and engage with us. With that, I'd say: Tony, thank you so much for this wonderful discussion. As always, I love talking to you, and I hope to see you IRL sometime soon.
Tony: Yes, the same for me. I hope to meet you, maybe next year, in the US. Thanks for this beautiful interview, and thanks also to everyone who has followed it. As Kavya said, apart from just watching this, please take action with us to try to shape a better metaverse (or however you want to call our future). Have a good day, everyone. Bye-bye.
(The image of Kavya Pearlman in the header is courtesy of Kavya Pearlman)