Policy Prompt

Decoding Brain Data (the possibilities and pitfalls of neurotech with Jared Genser)

Episode Summary

Neurotechnology is here. Do we have the proper protections in place?

Episode Notes

Neurotechnology is a dual-use technology transforming lives — from implantable devices that use deep brain stimulation to ease tremors from Parkinson’s disease to commercial wearables that promise more effective meditation. But without the necessary legal, ethical, and regulatory safeguards, the misuse and abuse of neurotechnology and the data it collects becomes inevitable.

In this episode, hosts Vass Bednar and Paul Samson speak with Jared Genser about neurotechnology, its implications for humanity, and the emerging dilemmas around neuro-rights, freedom of thought, and mental privacy. Jared is an international human rights lawyer and managing director of the law firm Perseus Strategies. He is a co-founder and general counsel to the Neurorights Foundation, and a special adviser on the Responsibility to Protect to the Organization of American States.

Mentioned:

Further Reading:

Credits:

Policy Prompt is produced by Vass Bednar and Paul Samson. Our supervising producer is Tim Lewis, with technical production by Henry Daemen and Luke McKee. Show notes are prepared by Lynn Schellenberg, social media engagement by Isabel Neufeld, brand design and episode artwork by Abhilasha Dewan and Sami Chouhdary, with creative direction from Som Tsoi. 

Original music by Joshua Snethlage. 

Sound mix and mastering by François Goudreault. 

Be sure to follow us on social media. 

Listen to new episodes of Policy Prompt on all major podcast platforms. Questions, comments or suggestions? Reach out to CIGI’s Policy Prompt team at info@policyprompt.io.

Our guest’s opinions and facts are their own. Enjoy the chat!

Episode Transcription

Jared Genser:

Our internal thoughts in the form of text are on the cusp of being able to be decoded with a wearable device. And do you think that information should be kept private or not? 99.9% of people we talk to across the full range of the political spectrum, their response is, my gosh, that scares the heck out of me. I don't think that most policymakers have any idea of what's coming and how fast.

Vass Bednar:

Hey, Paul.

Paul Samson:

Hey, Vass.

Vass Bednar:

Do you remember when mind reading was just a sci-fi plot line?

Paul Samson:

Yeah, totally. It was kind of a Star Trek episode or a Black Mirror episode. But I feel like we didn't get enough warning that it's actually here now.

Vass Bednar:

It kind of is, right? It kind of is. From Neuralink to Chile writing neurorights into its constitution, that future has rather quietly arrived and it's wired straight to our brains.

Paul Samson:

Yeah, it does feel like a merger no one agreed to, brains and machines. I don't remember signing that waiver or ticking a little check mark. So neurotech has its advantages. Obviously there's huge progress on restoring movement, sight, more. It opens up a lot of questions though as well about who controls our thoughts, our data in our heads, and ultimately maybe even decisions.

Vass Bednar:

It's super challenging when it comes to autonomy and predictive natures. So in other words, huge dilemmas are shaping up, but very, very quietly. There's immense promise with this technology and also great peril. Neurotechnology gives us the tools to, you could say, heal the mind, reshape the mind, change personalities, but also to hack it.

Paul Samson:

Yeah, which is why today's podcast is not just about the new technologies themselves. We're going to be talking about new rights, neurorights, the idea that freedom of thought, mental privacy, identity need protection at that level as well in the digital age.

Vass Bednar:

Absolutely right. Today's conversation runs the whole gamut, the intersection of technology, ethics, and what it means to remain human. And to help us navigate it, we are joined by a remarkable guest, Jared Genser. He's directly involved in defining this new era.

Paul Samson:

Yeah. Jared Genser is an international human rights lawyer and managing director of the law firm, Perseus Strategies. He has many other roles, including as co-founder and general counsel to the Neurorights Foundation.

Vass Bednar:

I'm so excited to speak with him. Jared, welcome to Policy Prompt. Why don't we start at the beginning and just sort of set things up a little bit? What exactly counts as neurotechnology? And how did you first get involved in all this?

Jared Genser:

Yeah, so neurotechnologies are devices that seek to record or alter the activity of the brain and the wider nervous system. Central to neurotechnologies are what are referred to as brain-computer interfaces or BCIs, which are basically machines, computers that connect brains to them and can also facilitate bidirectional communication between the brain and the outside world, either by taking out neural data or altering brain activity. There are direct and indirect methods to measure brain activity. And there are two kinds of neurotechnology devices that are worth thinking about. The first kind are invasive devices which require neurosurgery to implant the device inside the skull and the brain itself, and then non-invasive or wearable devices that are becoming much more commonplace and publicly available for purchase, which are things like caps, hats, helmets, and wristbands.

Vass Bednar:

Well, you said there's direct and indirect ways to measure brain activity.

Jared Genser:

Yeah.

Vass Bednar:

Could you just like... How do you directly do it? How do you indirectly do it?

Jared Genser:

Yeah, so I mean there are direct methods that are capturing neural data, such as an EEG scanning device, electroencephalogram, and then there are indirect methods which look at other kinds of things like blood flows, which are a proxy for brain activity, like an fMRI, a functional magnetic resonance imaging scanning device.

Paul Samson:

So we were talking in the intro a minute ago about the sci-fi origins of this technology. Like many things, you kind of see them, hints of them in sci-fi. So you're saying, Jared, that it's here now. We've got the brain-computer interfaces already. A lot of this is invisible to many people. How much of this is going on now? And is it ramping up quite quickly? How real is this right now already?

Jared Genser:

Yeah, I mean this is already science. This is not in the realm of science fiction, although people will have a lot of memories, when I explain some of the examples of emerging neurotechnologies, that will feel very much like things they've seen in movies before. To give you a sense of what can be done with invasive devices, I mean, neurotechnologies have been around for a long time. A lot of people have heard of deep brain stimulation that can be used to treat diseases like Parkinson's and reduce the tremors that people have. Cochlear implants to help people hear are also brain-computer interfaces, neurotech devices. But in recent years, we've seen an explosion of extraordinary developments in science and medicine with these implantable devices.

For example, Dr. Eddie Chang out at the University of California, San Francisco made a major breakthrough about two years ago now. It was on the front page of the New York Times. And this was about a woman who was paralyzed and nonverbal and unable to communicate with her family for 20 years, and with an implantable brain-computer interface, not only can she now communicate with her family with thought to text translation at about 85 words a minute, 95% accuracy, but even her intended facial muscle movements can be decoded and projected onto an avatar, and she now is regularly communicating with her family, having been locked in for 20 years and not had a conversation with anybody over that entire time. I mean, that's just an extraordinary development. We're also seeing devices that are now being used to treat things like severe depression and other diseases as well.

On the wearable side, there are now more than 30 publicly available for purchase wearable devices that can do a lot of different things, which include, for example, one device that helps with meditation. You wear the device for an hour connected to your phone, and with that data, the company feeds back to you, after you've, let's say, meditated for an hour, when you were actually in a meditative state over that hour, and then over time you can train your brain to do a more effective job. But we've also seen wearable devices being used, for example, for someone who was paraplegic to drive a Formula 1 race car around the track using their thoughts. At the kickoff for the World Cup a number of years ago, I think it was 2014, there was a person who was also paralyzed, wearing a brain-computer interface, who was able to control a robotic exoskeleton to kick off the World Cup using their thought of kicking the ball. And so there are lots of really interesting developments. On the one hand, this is very exciting, but on the other hand, technology being dual use for good or for bad, there are obviously enormous risks of misuse and abuse of this tech as well.

Vass Bednar:

We should go into the risks, and we will, in the conversation. I also wanted to get a better sense of what kinds of companies or institutions are kind of leading the charge here. You were mentioning dual use. So are we seeing a lot of investment from a defense perspective in this kind of technology? Because I think often the hook in the media is lots of what you've shared in terms of, wow, how can we connect and unlock opportunities for people that have otherwise been limited? It's incredible. And yeah, let's explore more of the motivations for that range of investments maybe.

Jared Genser:

Yeah, I think the real ramping up of investments in emerging neurotechnologies happened with President Obama's BRAIN Initiative that he launched in 2014, where the US had been spending up to about $800 million a year for the last many years on primary science and research, across more than 500 labs in the United States, to develop emerging neurotechnologies. About a third of that money was being put into DARPA, the Defense Department's research branch, and then the other two thirds went to the National Science Foundation for grants and sub-grants, and to the National Institutes of Health. That BRAIN Initiative has been replicated around the world with BRAIN initiatives in Europe, in Japan, in China, and so forth. And China has been spending in recent years a billion dollars a year through their military in developing emerging neurotechnologies with very limited visibility into what they're actually doing, which obviously I think is quite worrying. I think that there are, as I said, lots of exciting things that can be done with this technology that will be able to help us understand our brains and how they operate and function and address a wide array of brain diseases and also help us as humans be more effective and efficient in our lives. But again, there are a lot of severe risks of misuse and abuse.

I would say that on the corporate side, the two biggest leaders right now are really Meta and Apple. Meta, for example, spent about $800 million to acquire a neurotech company to bring it within its ownership and control, and they're developing a wristband that one could wear that can do a wide array of different things, including decoding thought to text, typing on a keyboard with your thoughts, moving a mouse around the screen with your thoughts, and so forth. And that's in prototype form and they're working on that right now. Apple has in development a next-generation AirPod that is going to actually have built-in EEG scanners in it, which means they'll be able to cram into a very little AirPod on both sides the ability to capture neural data and code it and do things with that data. That application was filed two or three years ago. It's not clear yet what they're going to use it for.

But one of the things that's quite clear is that anything that can be done as a matter of science and medicine with a neurotech device that's implantable will eventually be able to be done with a wearable device. It's just a question of getting through the skull, having a high enough level of resolution, and so forth. And so undoubtedly Apple, for example, although they haven't said so publicly, must be thinking about the fact that there are already prototypes of a wearable device that can decode thought to text. There's a device in Australia, and it's been reported publicly, that's in prototype form that can decode thought to text written out at about 15, 20 words a minute with about 40% accuracy. So it's still-

Vass Bednar:

40%? Yeah.

Jared Genser:

40%. Yeah. So it's still a distance away from being ready for market, but we're really talking maybe three years, four years until that kind of a device is ultimately going to become available. And of course, we've always thought as humans that what is within our brains has been entirely up to us.

Vass Bednar:

In private, yeah.

Jared Genser:

Our so-called forum internum. And the idea is that you're going to be able to breach that barrier: what has, through all of human history, been totally within the realm of just the person, powered by their own brain, will now be able to be accessed and decoded, not just thought to text, but a wide array of other things as well.

Vass Bednar:

But can I just ask, what if your mind is just totally wandering? I have a friend whose name is Siri, and she gets a lot of text messages that aren't intended for her because she gets the "Hey, Siri," from people yelling at home and checking the weather. I know. I am so curious about everything that she gets. But it feels like there's a difference between me maybe thinking I should send an article to Paul and the article being sent to Paul. This is not a great example. You know what I mean?

Paul Samson:

I've been getting a lot of articles, Vass.

Vass Bednar:

How does the technology even conceive of that? I know I'm getting ahead of it, but I'm just reacting to everything that you're saying because it's fascinating.

Jared Genser:

That may be something that my colleague, Rafael Yuste at Columbia University, could speak to best. But what I would say is that undoubtedly one thinks about text in the form of words, and with the internal thoughts that you have throughout the day, we all know our thoughts wander all over the place, and this decoding of thought to text could enable literally a transcript of our thoughts in a day, and I can only say for my own purposes, wow, it would cover a lot of subjects, go in a lot of different directions, and people would be like, gosh, this guy's brain is a little bit like scrambled eggs. But so undoubtedly that will be part of it. But we're going to get beyond even just decoding thought to text and being able to decode images, for example, as well. There's a prototype of a device that's starting to be able to decode images that are from people's dreams. We're going to be able to decode ultimately a person's subconscious thoughts, not just their conscious thoughts.

Today, from an EEG scanning device, for example, you can only decode about one to 2% of the data. But it's important to understand that with EEG data, which is the data that all these wearable devices are now powered by, with gen AI software, an EEG device, depending on its level of resolution, can gather between five and about 50 megabytes' worth of neural data in a given hour. And today, from existing technologies, you can already decode from EEG data, neural data, whether a person has any of about a half dozen different brain diseases and also about a dozen mental states. Are you angry? Are you happy? Are you sad? And so when you look at these consumer devices that are out there on the market, and I gave the one example of course of the meditation device, that company could be potentially learning a lot more about you than you might have intended for them to learn when you gave permission for them to download and to do things with your neural data and to report back to you on a totally unrelated topic of, let's say, meditation.

Paul Samson:

Yeah. Another thing that you were saying that made me think was the companies that are applying for patents and other uses for wearables, going beyond the initial intent for kind of jogging and stuff. I think of the AirPods used as medical devices, as hearing aids. And there starts to be a bit of a blurring of a line here between what is this device that you're wearing, what purposes is it serving, what information is it gathering? It doesn't sound good that there could be some unintended uses. But the question that I wanted to get at was a little bit maybe more on that, but also on how it's evolving. Is the public sector still heavily involved in where this is going? You mentioned the PPP a minute ago. Or is it going to be taken over by the private sector, as we've seen in AI, where now most of what's going on is private sector driven? What's the state of play there?

Jared Genser:

Yeah, I mean, there've been enormous cuts under the Trump administration to scientific research, including in this area. It's not entirely clear yet, but I mean it looks like the US is reducing by about two thirds its investments in this. I mean, I think that the investments from government are only increasing over time as people are seeing much more clearly the upside potential for economic development and basic science and medicine. Today, one in three people in the world will have at one point in their lifetimes a neurological disease, and that's an enormous number of people, billions of people around the world. And today, there are no cures for any brain diseases. At best, what we can do when you diagnose a neurological disease is to slow its progression. And ultimately neurotechnologies will enable that progression to be stopped and ultimately reversed. The curing of brain diseases is decades off, but the slowing of brain diseases with neurotech is already beginning. And ultimately this is going to have a dramatic impact on humanity.

When you think about all the developments that can flow from that as a matter of science and medicine, you can imagine entire new sectors of economic activity being generated out of this as well. And then there's the wide array of ways in which emerging neurotechnologies can be used to help people outside a medical and scientific context. But I do think the point that you're making about the blurring of lines between science and medicine and consumer products is an important one, and it's also an important distinction to understand the current legal and regulatory environment for the invasive versus the wearable devices.

So for an implantable brain-computer interface in a person's brain, the device itself has to be, typically around the world, licensed as a medical device by the relevant regulatory authority of a government. And then ultimately, because it's being implanted in a medical procedure and is being undertaken in scientific and medical circles, the data, the neural data that's being gathered, is typically protected as healthcare data at a federal or sub-national level. The wearable devices, though, are something else entirely, and this is where I think our foundation, the Neurorights Foundation, has been most concerned, because if these devices are being used in a non-medical, non-scientific context, there really is little to no regulation out there about that data and how it's being captured. And so wearable devices today, even though they're using medical-grade EEG scanning devices, if they're not being used in a scientific or medical context, don't have to be licensed by the FDA in the United States, let's say, the Food and Drug Administration, or by its equivalents globally.

Vass Bednar:

Same here in Canada actually, what's not a medical device, and yet... Yeah, anyway.

Jared Genser:

Yeah, so I mean they're sort of classified as wellness devices, which are not scientific or medical, and the data that's being gathered has no protections at all. It's not healthcare data because it's not being gathered in a healthcare context. And in essence, the status of the data is really governed by the user agreement that users have when they sign up for these products. But even global consumer data privacy laws have unintentional loopholes in them around the world that currently don't include protections for neural data specifically, and that's a massive legal and regulatory gap that needs to be closed. And we can of course talk more about that as well.

Paul Samson:

Let's go into the rights space for a moment, and I think we can loop back to some of the future applications of the technology and things that are coming. But so you mentioned the Neurorights Foundation. There's a lot of discussion in legislatures around the world, parliaments, about what the impacts are here. We kind of assumed that the mind, that freedom of thought, was protected, as you said before, and that we didn't need to go there, but now suddenly they feel like they do. And Chile and others have been out in front of this perhaps in some ways. What's going on around the world right now in terms of how governments are thinking about this and how they're addressing that challenge, maybe including the United States, but elsewhere too?

Jared Genser:

Yeah, I think that there are parallel tracks of activities being undertaken. I mean, I'd say at a high level, these issues need to be examined and ultimately addressed at four levels: the multilateral level, the national level, the company level, and then the individual consumer or individual human level. And so there are activities in kind of all those different areas. At the multilateral level, a lot of multilateral organizations have come up with reports and reviews that they've been undertaking on this. So for example, the UN Human Rights Council's Advisory Committee did a major report on this, and the UN Special Rapporteur on Privacy did a major report on emerging neurotechnologies and neural data very recently. I've actually been appointed as a member of the expert advisory group to a joint Interpol and UN Office on Drugs and Crime project that's going to be looking at the uses of neurotech in law enforcement and criminal justice contexts. So this is what's happening at a multilateral level.

At a national level, different countries around the world are engaging. Chile is the only country in the world so far that's actually adopted, in their case, a constitutional amendment to protect mental privacy. And we are seeing national legislation being advanced in various places. There's been a lot of action in the US at the state level. We've worked with three states, very different politically, Colorado, California, and Montana, that have adopted amendments to their state consumer data privacy laws to explicitly define and protect neural data as a new category of data to be protected. The problem with these consumer data privacy laws around the world is they typically define pretty specifically what is protected, and they talk about things like biometric data or genetic data or your Social Security number or passport or your home address, GPS, et cetera. But neural data doesn't fall within any of the categories that are really out there today, so it wouldn't fall within the existing protections. Again, this is really an unintentional loophole.

And so our view at the Neurorights Foundation has really been that, although there are lots of areas of risk of misuse and abuse of this tech, our initial focus is really on mental privacy and making sure that neural data as a starting point is protected. But we're also concerned about risks to mental agency and mental identity, to non-discrimination in the development and application of emerging neurotech, and also fair access to mental augmentation, which is another very interesting area to discuss because neurotech is actually already beginning to enable mental augmentation in ways that are going to be both very exciting, but are also going to only create further inequality across peoples around the world.

Vass Bednar:

What does that mean? Like making me be able to read faster?

Jared Genser:

Yeah, so I'll just give you one example. There's a study that was published by Boston University about three years ago now, with senior citizens in the lab, this is a peer-reviewed study, and with a wearable device and with what's called neurostimulation, which is a minor electrical shock to the outside of the skull, which is the part of the skull underneath which short-term memory-

Vass Bednar:

It doesn't sound good. It sounds like a dressed up term. Yeah, okay. Minor and electrical stimulation just feel very funny to put together. Sorry.

Jared Genser:

Well, health and safety concerns are undoubtedly a key thing to examine. But this is stimulation to the part of the skull underneath which short-term memory lies, and with senior citizens in the lab, they were able to increase the short-term memory of a senior citizen by 40% over the course of a month in their ability to memorize a list of 20 items. And they're literally getting on average eight more items that they're going to remember as a result of this stimulation. This is obviously in the very earliest of stages, but a lot more is going to be forthcoming. And this sounds a lot more like science fiction than science, but I'll give you another example of where fundamental questions about what it means to be human are raised. It turns out that with deep brain stimulation, as I was mentioning before, you can treat, for example, the tremors of Parkinson's. There's a very, very small percentage of people that have that device implanted that have a personality change that appears when you turn on the device. And there's a pretty famous case of a woman who is continuing to be studied right now, her name has not been made public, but her case has been made public, where she goes from being a very strong introvert to a very, very strong extrovert when the device is turned on, to the extent that her family doesn't recognize her.

Vass Bednar:

That feels more than minor, but I get it. Wow.

Paul Samson:

Dancing in the hallways.

Jared Genser:

I know. I mean, yeah-

Vass Bednar:

Yeah, yeah, no, I know.

Jared Genser:

Yeah, well, as side effects go, I mean, there are worse side effects to have than being an extrovert, I think. But obviously she likes her new personality. Her family, as I said, is worried about that because if you spent your whole life as an introvert, then you don't necessarily even know the dangers of the world. You might just talk to strangers on the streets and do other kinds of things. But because there's only this one narrow effect, scientists are now studying her brain to try to understand what part of the brain is responsible for being an introvert versus an extrovert. Is it consolidated in one place or is it multiple places? And if you're able to isolate that, and ultimately you can turn on and turn off neurons in ways that I think are-

Vass Bednar:

Oh my God.

Jared Genser:

Then theoretically you'd be in a position in the future to be able to implant the device to change a person's personality and make them into an extrovert if they were born an introvert. And to help you understand how that's actually going to be possible in the future, it's worth thinking about an experiment my colleague Rafael Yuste did at Columbia University in their neurotech lab a decade ago now with mice, which woke him up to the dangers of emerging neurotech and the risks of misuse and abuse that would have to be managed.

And he did a study in his lab with a mouse. And he taught a mouse to lick a sugar syrup when it saw, in front of a little TV screen, a set of black bars moving from left to right across it. So you turn on the screen, the mouse licks. You turn it off, the mouse stops licking. And with an fMRI brain scanning device, you can actually look into the mouse's brain, into its visual cortex, and you can see about 500 neurons firing in a particular sequence when the mouse sees the black bars. And you can then implant a brain-computer interface in that spot in the mouse's brain and then play the mouse's neurons like a piano, replicate the exact firing and sequence, and then the mouse will lick the sugar syrup even with the screen off. And what you've just done is implant a hallucination into the mouse's brain and it believes it's seeing the black bars, and then it licks the sugar syrup accordingly.

A mouse brain is the same as a human brain, just a lot smaller and less complex. And so what can be done in a mouse today, in a decade or two will ultimately be able to be done in a chimpanzee and ultimately a human. And this was an experiment that was done a decade ago. Gen AI is powering enormously more rapid advances in neurotechnologies. What took five years to get done a few years back is now taking one to two years because of emerging neurotech, sorry, emerging gen AI. And this means that these kinds of developments are going to be fast and furious.


Paul Samson:

Wow. Just a lot of images came into my head with that, thinking of Jean-Claude Van Damme kind of movies where they have these future soldiers that are programmed. But a question I wanted to ask was about, there's such a wide set of issues that get opened here, and you touched on a number of them, but you came back several times to the idea of mental rights protection. Is that agreed as the starting point in a way for getting things in order from a legal protection perspective, or is there still debate about what cornerstone number one is?

Jared Genser:

Yeah, I mean, I think we're really, really early days, which is why it's great to collaborate with CIGI and to be on this podcast and to try to raise awareness from the public facing side of the education and awareness raising that needs to happen. I think for the most part, these are issues that most people in the world have not heard of, and that goes for policymakers too. I mean, I don't think that most policymakers have any idea of what's coming and how fast. And what has been very, very heartening and important is that across the political spectrum, when you explain to anybody who doesn't have much personal knowledge in advance of a discussion what's going on and what's coming, and the fact that our internal thoughts in the form of text are on the cusp of being able to be decoded with a wearable device, and do you think that information should be kept private or not, 99.9% of people we talk to across the full range of the political spectrum, their response is, gosh, that scares the heck out of me. And of course, yes, you should be able to control the content of your neural data.

And this is what's now enabled us to be so successful in advocating, most especially in the US, which is of course where we're based, but the fact that we had Montana and California, two states that couldn't be further apart from a political perspective, along with Colorado, all virtually unanimously adopting in their state legislatures these amendments to the state consumer data privacy laws to protect neural data is, I think, a sign of how simple and clear a message it is and how obvious the risk is when you get to people and get them thinking and talking about it. But at the end of the day, I mean, there are at least 30 consumer neurotech devices on the market today that are already capturing neural data. Our foundation did a major study of the user agreements of those devices, looking at these 30 companies, and found that there was, in 29 of the 30 cases, a massive amount of data being downloaded and kept by the companies. And this is an immediate concern today right now because we have no idea what those companies might be doing with that data other than using it for the narrow purpose that the device itself was designed to serve.

And so just one company, the meditation company I was mentioning earlier, they talk about on their website how they've already downloaded a hundred million hours of consumer neural data from their meditation device being purchased and used all around the world. As the gen AI-powered software improves over time, they will be able to go in even after the fact. Once, for example, EEG data from a wearable device can be decoded into thought to text, they'll be able to go backwards in time, look at the data they downloaded from years earlier, and theoretically, if users don't withdraw their consent, be able to decode what these people were thinking, for example, when they were meditating. And so when people are giving away their neural data today, I don't think anybody has any idea really what it is they're actually giving away. In the same way that a decade ago, or maybe 25 years ago, let's say, when more widespread genetic testing began, people didn't realize how much would ultimately be able to be decoded from just a simple piece of your DNA.

So I think that the risk is enormous, but it's even more enormous, I think, when we're talking about brain data than DNA, because our brains and our minds are what create the essence of who we are. Our aspirations, our dreams, all the relationships we have, our personality, our goals in life are all being generated by our brain. And if you can crack the code and decode all of that, then you'll understand everything that goes into making up our brains and our minds, and will be able to, as I said, even decode people's subconscious thoughts ultimately. And that has always been something that humanity has been able to rely on. Whatever the circumstances a person might be in, at least their own internal thoughts and dreams and fears would remain strictly confidential with themselves. But when that code gets cracked and spills wide open, there are just enormous implications for humanity.

Vass Bednar:

Policy Prompt is produced by the Centre for International Governance Innovation. CIGI is a nonpartisan think tank based in Waterloo, Canada, with an international network of fellows, experts, and contributors. CIGI tackles the governance challenges and opportunities of data and digital technologies, including AI, and their impact on the economy, security, democracy, and ultimately our societies. Learn more at cigionline.org.

Maybe we can also tie back to some of the monetization models. I'm thinking of hearing implants with a software component where the company goes out of business and people are left with no way to continue going forward. Is part of the anticipated monetization model here related to subscription, recurring revenue, exchange of your thoughts as a form of something that's valuable, replicating other more extractive elements of the internet? What are you seeing in terms of models that seem sort of maybe promising, more palatable, maybe more ethical, and some of the other more concerning elements that might fall under that wellness space? I say that with a wink, but you can't hear a wink in a recording.

Jared Genser:

Yeah, yeah. So look, I do think that a lot of companies that work in this space are coming to realize that, ultimately in the longer run, as people come to understand this technology and what it can do, the only way consumers are going to be willing to sign up and purchase, for example, a wearable consumer neurotech device is going to be if they believe there's an absolute protection for their neural data and it can only be used for the narrow purpose that the company is selling the device to them for. And so we're working now with a number of companies, our foundation is, that are like-minded in orientation, to develop what I would describe as a model user agreement that is at the high end of protection across the board. And it's a balancing act because of course we saw with DNA testing companies that built into their model was the ability to basically do whatever they want with your DNA data, de-identified. The problem with de-identification of neural data is that you can't actually be sure that it's de-identified when you hand it over, because we can only decode today one to 2% of it. If you're later able to decode a person's thought to text, then there might be information in that, for example, that would reveal who you are and enable re-identification in ways that would be harder to do with DNA.

So I do think that when it comes to things like companies going out of business, especially with, let's say, implantable devices, and this has already happened actually with deep brain stimulation, there are enormous legal, ethical, and humanitarian considerations that need to be addressed. I mean, I think the reality is, though, that the number of people that will need or want implantable devices, even if implantable devices allow in the future for mental augmentation, is going to be obviously substantially more limited than those that are going to use the wearable devices. And I'm less concerned about the implantable devices in the longer run because there's definitely a large market for that, for example, for the treatment of Parkinson's and so forth, where ultimately there've been a lot of circumstances where people have been able to repurpose another company's device and in essence move a subscription for ongoing use of that technology to a different, let's say, deep brain stimulation company and so forth. But obviously the more complex the implants become and the more specialized they become, that becomes obviously an enormous and real issue.

I think that from our point of view, there is a question of sequencing, as, Paul, you were mentioning earlier, which is kind of where do you start with all of this. And I do think that as a starting point in general, our focus is really on mental privacy as a most important first step forward because it's one that everybody understands in a 30-second elevator talk, and it's something that people across the political spectrum are horrified at the idea that the content of their thoughts could be decoded. And so it doesn't matter where you come at it politically, almost everybody thinks that protecting that data is important. I think that when we build momentum around that, that will then create the space to have more momentum to address the other kinds of issues that come up with the use of this data and its potential use for other purposes or its sale or transfer to third parties and so forth.

But I do think that the bottom line is that all it would take would be for an Apple or a Meta, when their devices move from prototype form to consumer form, to not have the highest standards in place around the protection of neural data, and very few people are going to want to use their devices. Nonetheless, I mean, even myself, as a person who knows more than the average person about this, even if I were to want to use, let's say, an Apple AirPod in the future that has EEG scanning in it for certain kinds of things, I would otherwise pretty much make sure I'm never wearing those AirPods and make sure that they're off and in a Faraday box or something.

Vass Bednar:

Yeah, buried in the backyard.

Jared Genser:

Because we all have this experience with our mobile devices now, where obviously they seem to be in always-active listening mode, where you're having a conversation with someone about something, and then you pick up your device and log into Meta and all of a sudden you see an advertisement for the exact thing you were talking about. So I think people are wary of this. But I think that especially in the near term, in addition to mental privacy, there is also urgent action that needs to be taken, Paul, in terms of protecting against discrimination based on the use of neural data, in the same way that there are, in most countries around the world now, laws that forbid, let's say, healthcare companies from discriminating against a person based on their genetic data and so forth. And so I do think in the nearer term, in addition to mental privacy, I think non-discrimination is important.

And these devices are being used not just in a consumer context, but also in an employment context. And so there are also lots of different kinds of issues that come up in that way. And so for example, you've already seen, in a rather horrific way, experiments done in China with an American-made neurotech device, where kids in several schools were actually tested out wearing concentration devices on their heads during the day, and that data was being sent to the teacher up front, and then they were sending out to the parents where their kid ranked for how much they were concentrating during the day versus not.

Vass Bednar:

Oh my gosh.

Jared Genser:

Yeah, exactly. And this has already happened. It's been reported on and exposed and it was stopped. But one of the places where it could be of value, and is already being deployed right now, is wearable devices, for example, that help long-distance truck drivers maintain their alertness. Because with a wearable device that can measure concentration levels, you could tell from your brain signals if you're starting to get fatigued or if your brain signals are starting to show a reduced ability to concentrate and so forth. And in that case, I mean, I think it's a good thing that this kind of a device would be on long-distance truck drivers. But of course the same data that's being gathered for these kinds of devices for truck drivers, which is EEG data, can also, for example, determine if you have epilepsy.

And so for example, what if the company was running software against the EEG scan and then is firing employees because they have epilepsy because their brain scan that was being used for the narrow purpose of making sure that they remained alert can be used for other purposes? And so there are really important issues that come up in the employment context. Similarly, this is also being used in factories in China and some other places now for workers on the line doing repetitive work and what's their level of concentration and so forth. So I think that the surveillance aspect of emerging neurotechnologies is also something to keep a close eye on.

Paul Samson:

Yeah. I'm thinking, let's imagine a few years or even a few months into the future, when legislation is evolving and some of these guardrails or frameworks start to get established. But nevertheless, the technology is moving fast and it's always listening, always gathering in some ways. As we've just heard, even if you didn't ask it a question, it's responding. Presumably some of these technologies are going to be gathering things. How would we ultimately safeguard, practically speaking, regardless of what the law or the regulations might say, against advertising, targeted surveillance, political manipulation? You can read a lot in somebody's face, and you get a bit more data and you've now targeted something to them.

Jared Genser:

Yeah, I mean, I think advertising and the altering of brain states are very much top of mind in terms of risks of misuse and abuse of the technology, and I think that's obviously going to have to be seriously regulated. A huge amount can be decoded from a person, for example, if they were wearing an EEG device and watching an ad for something on their screen. Like with your mobile phone today, when you're looking at, let's say, a particular video that might be on TikTok or whatever the case might be, they would theoretically be able to know what your emotional reaction was in real time when you're watching a cat video, for example, or whatever the case might be.

Vass Bednar:

We've already seen some of that testing with audience testing, not from the brain, but from expressions. Yeah, so this is just-

Paul Samson:

But it's going further, right? With the AI applications, you can read a lot from the face, you can read... There's the obvious stuff, and then there's the really subtle pattern stuff that could even be customized, where really, the poker face is no more. Even no sign of change is still something readable.

Jared Genser:

And I would note that there is also going to be a need to regulate certain kinds of non-neural data collection that, as you're correctly suggesting, can enable a person to infer a mental state from a subject. So there's neural data, which is the most sensitive because of the scale and scope of what is in it and the fact that we can only decode one to 2% of it right now; that is kind of the urgent priority. Face scanning and eye scanning in particular are more sensitive. But other kinds of biosensing devices, whether it be a heart rate monitor or a gait measurer or these kinds of things, have to a greater or lesser extent the ability to infer certain things about a person's brain state as well. So I mean, ultimately some of those are going to require further regulation. A lot of those are already regulated. For example, if you're talking about, let's say, a face scanner or an iris scan, those are often already regulated as biometric devices if they're being used for the purposes of identifying somebody. But there are obviously non-biometric uses of the same kinds of technologies from which information can be inferred.

So I think that this is all stuff that should concern us. But again, when we're talking about the neural data, it's something else quite different. If we're watching a commercial with our eyes on a screen somewhere, then we are aware that there's an external stimulus to what's happening with us and how we're viewing it and whether or not we're being persuaded, in the same way that you get text messages or all these kinds of things that come at us, social media that influences us as well. But when you're talking about altering a person's neural data and changing what's going on in their own minds, then you're getting one step closer towards the mouse who has the hallucination projected in its mind, where the mouse can't distinguish between seeing it on the screen because it's there or not.

And this, by the way, is consistent with studies that have been done in the past, correctly, accurately, and peer-reviewed, about people who are, let's say, paranoid schizophrenic. If you look at their brains when they're hearing voices, as they would describe it, what's happening is in the exact same part of the brain that is activated when you have an external stimulus coming into your ears and you're actually hearing a voice; it's the same part of the brain that is activated for a person who's paranoid schizophrenic, but there's no external stimulus. But they can't distinguish... When a person who's paranoid schizophrenic says they're hearing voices, they're literally hearing voices. It's no different than us talking and them hearing us talking, and believing it's from nowhere because they see no one in front of them speaking to them. But this is the very same part of the brain that is activated. And I think that when you're talking about being able to substitute in, ultimately in the future, a stimulus that feels like it's internal to us rather than external to us, then obviously its ability to influence us is much greater than that of an external stimulus.

Paul Samson:

It's the ultimate manipulation ability, obviously, because then what's going on externally in the real world really doesn't matter because you're directly plugged in like that mouse. That's why I thought earlier about the military applications, which do seem scary here, about just programmed soldiers. Although maybe drones are putting them out of business anyway, right?

Jared Genser:

Let me mention two other areas of concern that relate in fact to law enforcement and criminal justice systems. And again, we're talking about dual use technology. But ultimately, emerging neurotechnologies are going to enable an unbeatable lie detector test. And that's both exciting and daunting. On the one hand, think about a person trying to exonerate themselves having been accused of a crime, if you get to the point of an unbeatable lie detector test, which by the way is probably only a few years away, because there are much, much higher tech brain scanning devices that are already out there in the market that just right now are not affordable, but for which software is being written for a million different purposes right now. The most high-profile example of a high-end brain scanning device is from a company called Kernel, and it's their Flow device. And this is in essence like an MRI machine in a wearable cap, which uses optical lasers to get through the skull, and has the highest resolution ever of any device, dramatically more than EEG scanning.

And in a documentary film that we worked on with the German filmmaker Werner Herzog, there's a scene in the movie where my colleague Rafael Yuste is wearing the Kernel Flow device and is being asked to answer the question two plus two, what does that equal? And he first says it's four, and then he says it's three. And you can see that the firing neurons look totally different in his mind when he's lying versus telling the truth. But in terms of the risks to a person's human rights, if there's an unbeatable lie detector test and that's widely known by the public at large, then is a jury or a judge in a criminal trial in any country in the world going to follow a judge's instruction that a person's decision to not testify on their own behalf or to decline to take the unbeatable lie detector should not be taken as an adverse inference against them when deciding whether or not there's proof beyond a reasonable doubt that a person has committed a crime? It's going to raise serious questions because I think most people would think in their own minds if they're on a jury, most ordinary people would think, well, I mean, they could have exonerated themselves. They chose not to. How can I not infer that they might be guilty?

Another use of these wearable devices is ultimately and tragically going to be as a torture device. There are already implantable devices today that are starting to treat people that have chronic pain syndrome. And chronic pain syndrome is when a person is, let's say, feeling pain in their hand, but there's no external stimulus to their hand. It's just misfiring neurons in their brain. And those neurons can be identified and turned off. The most extreme version of this disease is called complex regional pain syndrome or CRPS. It's referred to as the suicide disease because people who have this disease feel the most intense pain a human can feel. And right now there are no cures for it at all. And 80% of the people that get this disease end up killing themselves because they can't stand or bear the pain. As we are already seeing implantable devices that can address the milder forms of the disease, what this means is that the most extreme forms will eventually be able to be treated in this way. But the problem is if you learn how to turn off misfiring neurons, you're also going to learn how to turn them on.

And it's an axiom of neurotechnology that anything that can be done, as I was saying before, in an implantable device today will eventually be able to be done in a wearable device. It's just a question of the advancing technologies. And what that means is you'll eventually have a wearable device that you could put on a person if you were an unscrupulous dictatorship, for example, bind someone to a chair, put this wearable device on them, and then you could literally flip a switch and start by inflicting two minutes of that level of pain on them, and you could break a person very, very quickly with very little effort. And so again, this kind of also demonstrates the dual use nature of these technologies for good or for ill. And I think we need to be very, very concerned about the use of emerging neurotechnologies for both law enforcement and criminal justice purposes. And that's going to really require both legal and ethical guidance to very strictly ensure that any of those uses are consistent with global human rights and ethical standards.

Vass Bednar:

And that we don't have a ton of regulatory lag, and don't have these technologies on the market for a long time before we're actually catching up. Since we've spent, I think, a lot of time speaking about the potential risks and harms and how intimidating and frightening they are, maybe this is a good time to ask you, what gives you hope with the power and potential of the technology? Where do you see neurotech kind of more genuinely improving human dignity or democratic capacity? I mean, no matter how it's used, it's an absolute marvel, and we are going to learn more about our brains and how we work and how our personalities are formed and help with disease elements. But maybe tell us more about where your head's at there.

Jared Genser:

So to speak.

Vass Bednar:

Yeah. I didn't even know I was doing a pun. Usually I do a lot. Yeah, okay.

Jared Genser:

So look, I mean, I think I wouldn't be involved in this if I didn't believe there are enormous upsides, especially as a human rights lawyer. While I may be more concerned about the risk of misuse and abuse than an ordinary person because of my experience... I'm best known, for example, for my work freeing political prisoners around the world, and I'm concerned about those kinds of people and how they might face emerging neurotech in a negative sense. I'm excited about what it means for humanity to be able to eventually cure brain diseases, given the kinds of impacts that they have on people all around the world, and especially dementia. I mean, when you think about some of the cruelest brain diseases that are out there, for people with Alzheimer's or who have other very serious kinds of brain diseases like ALS, the idea that you're going to be able to take these horrible diseases and begin to not only slow them, but ultimately reverse their effects or stop them entirely.

I mean, the amount of human suffering of people with these kinds of diseases is impossible to really understand, beyond anyone's imagination, unless you have had a family member go through something like that or been through it yourself. And I think that that to me is incredibly exciting. It's just as exciting to me to be able to think about how we can understand ourselves and how we think and feel, and why we think and feel, more effectively and efficiently. You can already change your behavior or change your mental states through both, say, medication and psychotherapy.

Vass Bednar:

And chocolate, alcohol, positive affirmations. No, I'm kidding.

Jared Genser:

And chocolate and alcohol. Exactly. There are many things we can do to change how our brains operate ourselves, but that's not a substitute for the most severe kinds of problems that people have and the need for much more help that people are going to ultimately require in order to be maximizing their own potential. And so I think the idea of being able to understand our brains and how they operate and function, understand our strengths and weaknesses, understand our inclinations, all of that will enable us to be, I think, much happier in the longer run, because we're going to be able to really engage in much more powerful self-reflection and to understand that for a lot of the things we might do that we may not like, for example, about ourselves or aspects of our personality, that some of that is actually biological, embedded in how our brains are wired. And in a lot of cases, you're going to be able to accelerate addressing people's serious challenges they face, whether it be severe depression or other self-destructive behaviors, with the assistance of neurotech devices. Neurotech devices are already being used to treat, for example, major depression. I think that this is really, really exciting.

I think it's also exciting when you think about human enhancement, which of course has flip sides of the same coin, I mean, both exciting and also worrying. But the exciting part to me is when you think about what you're capable of doing and the impact you're able to have on the world, the idea of being able to take the best parts of yourself and be able to focus more effectively over time, and to be able to be more self-aware of the impact you're having on people, and the ability to become more effective and efficient in your work, the ability to improve the personal relationships you have because you understand some of the way in which your brain functions currently. All of those things to me are exciting as well.

Obviously, there are dystopian futures we can imagine that come from, for example, changes to a person's personality, and what that means for being human. When you think about the woman I mentioned earlier, who becomes an extrovert when her deep brain stimulation is turned on to address her Parkinson's symptoms, it does raise fundamental questions: What does it mean to be human? Is she an extrovert or an introvert? Are we who our brains have made us to be naturally? What is considered abnormal? And as one addresses or changes parts of who a person is, who we are as humans becomes much more fluid. That's exciting, I think, if there are parts of oneself one doesn't like and would like to change, and it's also worrying when you think about the secondary effects this could have on our species, and about the enormous divides between haves and have-nots, between people who can access and afford this technology and those who can't. Simply as a matter of increased memory, which can already be done in more rudimentary ways, imagine a scenario of kids in school whose families can afford these memory-enhancing devices and those whose families can't, and what that's going to mean for people's overall performance.

So I mean, I think on balance, the downside risks are always there with any kind of emerging tech, but I try to focus on the upside potential, and I think the upside potential far outweighs the downside risk. That said, I'm under no illusion: there are going to be attempts by bad governments, maybe even some good governments, and bad actors in general to access neurotechnologies for the purpose of doing bad things. That's something we're just going to have to shine a bright light on to the best of our ability and manage as best we can. But I don't think we can stop this kind of human progress. And I think we need to, as quickly as possible, ensure that the public at large understands what's coming, and how it is both exciting and worrying, so that we're in a better position to get in place the legal and regulatory frameworks, internationally and nationally, that enable us to maximize the good and minimize the bad.

Paul Samson:

Yeah. Well, as we're wrapping up here, I just want to say that you've stressed several times this notion of dual use, of good and bad applications. It seems to me there will be a lot of pressure to do more because of demographics alone, with aging populations, and there's going to be a desire for these technologies, which are further ahead than most people realize, certainly further ahead than I realized before this conversation. But there will also be some tough choices to be made. I was fascinated by your example of the perfect lie detector. If it really is perfect, wouldn't there be a desire to make it mandatory? And then you've got a system where you know exactly who's guilty and who isn't. There's a big choice to be made about going down that road.

Jared Genser:

Yeah. And of course there are people who believe they're telling the truth... I mean, if you're a sociopath, for example, and you don't have a sense of what's right and wrong, then you could actually believe a lie you tell yourself. So technically, I don't think it will be fully unbeatable.

Paul Samson:

Still not perfect. It's not infallible then. Okay. That reassures me.

Jared Genser:

Not infallible, but for most people it will be close to it. But you're right, it's a huge choice. And when you think about the savings that could come and be redirected, let's say, into primary education from legalizing the use of these devices to try to exonerate people... I think the reality is that there isn't going to be a scenario where it's mandatory, for the reason I just described. Because even if you can determine whether a person is lying or telling the truth, there are obviously circumstances where people are working with others. And when you think about criminal sentencing for people convicted of crimes, a lot of culpability is determined by other factors. What is their past record? Have they committed crimes before? What is their culpability for the crime in question? Were they a central player or a more peripheral one? So there are still a lot of variables that would need to be addressed, even if you could know with perfect accuracy whether someone is telling the truth or lying.

But I agree with you. I mean, there's going to be a lot of pressure, as this kind of technology becomes available, from ordinary people and from the voters who put people into government jobs, to say, why don't we streamline our criminal justice system by creating a pathway to accelerate the hearing of cases where a person is willing to use that kind of technology? And ultimately, I think this could be a good thing. There will also be temptations to use it in ways that egregiously violate due process, including by people in law enforcement who might want to put that kind of device on a person they're interrogating who is only suspected of a crime. And then you can imagine a person agreeing to use it without ever speaking to a lawyer. You don't have an obligation to have a lawyer, but you should be advised that you have a right to one. What about people who are pressured into agreeing to use the device, who didn't consult a lawyer and didn't think about what other trouble they could get themselves into? Because once a person is wearing a device, you could ask them not just about the particular circumstances in question, but about whether they've ever broken the law before, those kinds of things.

So there's a huge amount of complexity, and there are massive, system-wide global shifts that could come from doing this the right way or the wrong way. We really have to be thinking about those things right now, anticipating what's coming and beginning to develop protocols for thinking it through. Because the bottom line is that I am very worried about how these kinds of technologies could be misused or abused. When you consider the enormous imperfections and discrimination that already take place in criminal justice systems around the world, across a wide array of variables, this could only accelerate the discriminatory inclinations these systems often have and lead to even more disparate outcomes. So I think it is an enormous imperative to work on these issues.

I think it is very good that Interpol and the UN Office on Drugs and Crime are both beginning now to think about the potential uses of emerging neurotech in law enforcement and criminal justice systems. This is just the beginning of a multi-year process they're going to go through, but I'm glad they're going through it, and there's going to be wide public consultation and consideration of these issues. The more we can get guidance from multilateral organizations that are viewed as credible and non-political, the better able we'll be at the national level to adopt laws that protect neural data in general and very clearly circumscribe the ways in which these kinds of technologies can be used by government for a wide array of purposes.

Paul Samson:

Thanks so much for your time today. This was a really enlightening conversation.

Jared Genser:

Well, my pleasure. Happy to be here.

Vass Bednar:

I think something I didn't appreciate about this technology is that it may be more productive for us to think about legislating or regulating narrower uses and applications. Typically, when we're talking about technology governance, we're talking about blanket approaches: red light or green light, where are the guardrails? I think with neurotechnology, we need to be very careful and cautious about where and when we're using it. If it very quickly reaches the wellness market or becomes part of the cult of optimization, it risks becoming harmful and discriminatory too quickly and tarnishing the promise that's here. There may be many instances where we don't want it used at all, as Jared was saying, with children for example. What sticks out for you, Paul? I don't know. There's just so much.

Paul Samson:

There's so much. I mean, I've been following it a bit. And one bottom line for me is it's coming faster and, to your point, wider than was obvious.

Vass Bednar:

Wider, yeah.

Paul Samson:

There are a lot of entry points here, and not just physical entry points, but a lot of ways to get into the head. It certainly is below the radar in terms of its implications. There are going to be a lot of choices to be made, as you say, at a pretty specific level, about whether you'll be allowed to do X or Y. Whether it's an assistant kind of use, augmenting your capacity: I need help with something here, can you help me? Why would you say no to that? But at the same time, it will have another application: yeah, I'm going to use that on my exams, or I'm going to use that to get an advantage, or I'm going to secretly give my own brain a little push, press the buttons in a way, and perform differently without even telling anyone.

Vass Bednar:

Easy to imagine it being used in an abusive situation for sure. Also, I'd seen a social media post, not to be the extremely online one, but I feel like I ingest more social media than you, which is totally fair. I don't know how to fact check that. I'd noticed a lot-

Paul Samson:

It must be a lot because I'm...

Vass Bednar:

It's a lot because you're huge online.

Paul Samson:

I'm on a bit too.

Vass Bednar:

I think I saw someone online who was like, oh, I can't wait for brain-computer interfaces to be more widespread, because instead of reading I could just upload books to my brain. And I thought, how interesting. What is it about this moment we're in where we rush, where it's about counting the number of books instead of just spending time with the text? But that is an idea or goal that people have: how can we learn faster, and why would you disagree with that? Isn't it good? Don't you want people to be intelligent? It's really, really thorny. So I'm glad we're able to scratch the surface a little bit and point to it.

Paul Samson:

You put your finger exactly on it. Everyone's in a hurry. I want more and I want it now. It makes me think of the first time I saw that acronym, TL;DR, too long, didn't read. And I was just like, what's that? And now it's everywhere. It's like, hey, let me tell you what you need to know because you don't have time to read. Let me program you so you've got these hundreds of books. People want it. It's spooky, though.

Vass Bednar:

Shortcuts.

Paul Samson:

TL;DR is also just weird.

Vass Bednar:

Yeah, it is weird. Well, thanks for reading and thanks for listening, and I'll talk to you soon.

Paul Samson:

Yeah.

Vass Bednar:

Policy Prompt is produced by me, Vass Bednar, and CIGI's Paul Samson. Our supervising producer is Tim Lewis, with technical production by Henry Daemen and Luke McKee. Show notes are prepared by Lynn Schellenberg, social media engagement by Isabel Neufeld, brand design and episode artwork by Abhilasha Dewan and Sami Chouhdary, with creative direction from Som Tsoi. The original theme music is by Josh Snethlage. Please subscribe and rate Policy Prompt wherever you listen to podcasts, and stay tuned for future episodes.