Policy Prompt

Siri, Tell Us About Human Rights and Robot Wrongs (a conversation with Susie Alegre)

Episode Summary

What happens when you gaze too long into the AI abyss?

Episode Notes

In this episode, Policy Prompt hosts chat with CIGI Senior Fellow and international human rights lawyer Susie Alegre to unpack her latest book, Human Rights, Robot Wrongs: Being Human in the Age of AI (Atlantic Books, 2024). Listen to find out whether Susie has ever been fooled by artificial intelligence, what the challenges and tensions of rights for machines are, and why there is a palpable lack of urgency around the adoption of fully autonomous weapons.

Credits:
Policy Prompt is produced by Vass Bednar and Paul Samson. Our technical producers are Tim Lewis and Melanie DeBonte. Fact-checking and background research provided by Reanne Cayenne. Marketing by Kahlan Thomson. Brand design by Abhilasha Dewan and creative direction by Som Tsoi.

Original music by Joshua Snethlage.

Sound mix and mastering by François Goudreault.

Special thanks to creative consultant Ken Ogasawara.

Be sure to follow us on social media.

Listen to new episodes of Policy Prompt biweekly on major podcast platforms. Questions, comments or suggestions? Reach out to CIGI’s Policy Prompt team at info@policyprompt.io.

Episode Transcription

Vass Bednar (host):
You are listening to Policy Prompt. I'm Vass Bednar and I'm joined by my co-host, Paul Samson. He's the President of the Centre for International Governance Innovation. Policy Prompt features long-form interviews, where we go in-depth to find nuances in the conversation with leading global scholars, writers, policymakers, business leaders, and technologists, who are all working at the intersection of technology, society, and public policy. Listen now wherever you find your podcasts.
Paul Samson (host):
Hi, Vass. How are things?
Vass Bednar (host):
Great.
Paul Samson (host):
By the way, what does it mean when somebody says you are basic?
Vass Bednar (host):
Okay, I hope you're not calling me basic, and I also hope that no one called you basic, but it kind of used to be a diss. It meant you were kind of predictable, you followed the crowd, you weren't unique. I think as a turn of phrase it's a little bit yesterday, but now the CEO of OpenAI, Sam Altman, has been talking about AI replacing what he calls median humans. So I think median is the new basic, and if anyone's called you basic, you could try calling them median and see how that goes.
Paul Samson (host):
Yeah. Well, the bottom line is let's remain individuals and not turn into numbers on a spreadsheet in some ways, but for the moment, we're podcast hosts and let's just keep rolling with it.
Vass Bednar (host):
Well, of course. I mean, we keep coming back to this, but the impact that AI and emerging technologies are having on humanity is everywhere, often in ways we haven't fully appreciated yet or may not even know about in the first place. It's been like a fast wave that's moving across the planet, disrupting and knocking stuff over, but we're well past the testing phase, even though as humans, we're often doing some of the testing for these companies for free.
Paul Samson (host):
Yeah, it's a great point, Vass, and today we're speaking with super experienced international human rights lawyer Susie Alegre on her latest book, Human Rights, Robot Wrongs: Being Human in the Age of AI. It's a great title, and the book provides a litany of examples of how new technologies are rolling out, what it means to be human, and how these two things are colliding in real time. And it covers AI and weapons, justice, the role of AI as human companion, partner, caregiver, artist, and a lot more. She explores how a human rights framework should underpin our approach to AI and other technologies, and starts to map out what it will take to stay human. There are tons of interesting stories in this book and lots of big think issues.
Vass Bednar (host):
Absolutely. And Paul, just before we jump in, I did want to add this fascinating little nugget from the book, which is that when ChatGPT was launched, and since I mentioned Sam Altman, Susie queried it and the system didn't recognize her as the author of her other book, Freedom to Think, which is just a reminder of how dependent these systems are on the data and timing of what they're ingesting.
Paul Samson (host):
Totally. They've only got what they've got, right? And nothing more. Let's go with it. Susie Alegre, welcome to Policy Prompt.
Susie Alegre (guest):
Hi, thanks for having me.
Vass Bednar (host):
Susie, more seriously, one of the warnings that you offer in the book, if you don't mind me quoting, and I've jotted down so many quotes, is the following, "If we want to survive, we need to learn how not to be fooled." And it just made me wonder, have you ever been fooled by AI or have you been fooled recently?
Susie Alegre (guest):
I'm not sure that I've necessarily been fooled by AI, but maybe that's because I'm extremely skeptical. It's entirely possible that on a daily basis I am being fooled by AI and I'm not aware of it, and therefore I don't know. And I think that is the real challenge: we don't even know when we're coming across AI in our daily lives. And so, sort of managing it and managing our expectations and our interactions is just increasingly hard when it's everywhere.
Paul Samson (host):
It's like those unknown unknowns, right?
Susie Alegre (guest):
Absolutely.
Paul Samson (host):
I do feel like I've been tricked as well without knowing it, and I'll never know.
Vass Bednar (host):
Well, also back to being tricked for a second, I guess my comfort zone. Early in the book, you point out that AI is so tricky to define, or that there are some debates around it, but you also kind of redirect us and say that it's actually so much more important to root our thinking in this bigger question of, what is humanity and what do we need to do to protect it?
Paul Samson (host):
So what is humanity anyways? Yeah, like what? Tell us.
Susie Alegre (guest):
We are humanity. I mean, I'm assuming I'm not being fooled here talking to you online, as some sort of AI avatar.
Paul Samson (host):
We're here.
Susie Alegre (guest):
That this isn't a complicated trick. Humanity is the essence of what each of us is, if you like. It's our lives, our experiences, our relationships, our creativity, our intellectual ideas, our dreams. It's kind of everything that really makes life worthwhile, I suppose, for us as individuals and as communities. And it's something that, when I first started working particularly on technology and human rights, could feel quite daunting, because people will often sort of say, "You've got to really understand the technology and it's terribly complicated, and don't worry your pretty little head about it."
But the real dawning moment for me was the Cambridge Analytica scandal, where actually it doesn't really matter how the tech works or even if it works; the important thing is whether you should be allowed to sell a service to manipulate voters, and what the impact of that is on human rights and on democracy.
Speaker 4:
This company is called Cambridge Analytica. It's based in England and worked on US political campaigns, including Donald Trump's. The company collected data that could be used to predict and influence voters' choices during the election.
Susie Alegre (guest):
The tech itself is kind of a side issue. It's not necessarily, in all cases, something you need to understand in order to understand the impacts that rollouts of technology could have on our humanity.
Paul Samson (host):
Thanks, Susie. One of the things in your book is what I would call the roadmap, and certainly the way forward as you see things relies on core international human rights law and the Universal Declaration of Human Rights. You discuss these a lot. Where's that going? Vass, what do you think on that?
Vass Bednar (host):
I wondered if you felt that support for this human rights frame could gain more momentum due to the risks AI poses to what it means to be human, or to that sort of fragile and fantastic humanity you were pointing to earlier.
Susie Alegre (guest):
It's a complicated question to be honest.
Paul Samson (host):
Of course.
Susie Alegre (guest):
One thing that I'm sure of, pretty much anywhere in the world you've been living over the past 20 years, and we are just now the day after the 11th of September, the 23rd anniversary of the 9/11 attacks on the Twin Towers in the US, is that those attacks really shifted public policy and government approaches to human rights globally, and also accelerated the adoption of technology, particularly sort of surveillance technology, in ways that really pushed forward the kind of technologies that we see today. And so, the human rights project and the international rule of law project has been really shaken over the past quarter-century, but certainly from where I'm sitting today in the United Kingdom, I think there's a renewed hope that human rights law and the international legal framework will be recognized as an opportunity to address modern-day challenges, whether that is the challenges that we face with AI and technology or whether it's things like climate change, for example.
Vass Bednar (host):
Is it just me or is it getting harder in this digital world to distinguish between humans and robots? Unlike the Terminator or RoboCop, the robots that are making the biggest impact in our lives today are largely invisible. So today we're speaking with international human rights lawyer Susie Alegre about her latest book, Human Rights, Robot Wrongs: Being Human in the Age of AI. Susie exposes the ethical and law-bending complexities of ubiquitous AI using real-life examples and casts her expert legal eye on the slippery slopes that many of us are sliding down without even realizing it. You can pick up a copy of her book at your local bookstore.
Paul Samson (host):
Over the last 70-plus years, a lot of the institutions and declarations that were set up in the early days, the United Nations and the World Trade Organization, have started to get a lot of backlash. But one of the points that you're making is, there may be backlash against that backlash. I still do worry that some big countries are kind of moving against the rule of law and international norms, and that there's a weakening in the system. But you're hoping that there's a backlash against the backlash in a way, right?
Susie Alegre (guest):
Yeah. I mean, it's a huge risk. We're certainly not out of the woods. I think as a human rights lawyer, I find it interesting that I'm often asked whether I am pessimistic, because I mean, as a human rights lawyer, I'm very good at flagging risks, making criticisms. That's kind of the job. But on the other hand-
Paul Samson (host):
Like all of us.
Susie Alegre (guest):
Yeah, I've got to say, I wouldn't be a human rights lawyer if I wasn't an optimist. I would've just gone and got a job that paid a lot more money. If you're going to be a human rights lawyer, you've got to believe that things are going to get better, that things can get better, and ultimately that things will get better. And as I say, certainly at the moment in the UK, there is a feeling that there is the potential for the rehabilitation of human rights law as what it effectively is: a black-letter, hard-law framework. And hopefully, when you start seeing individual countries reassessing their relationship to human rights, and recognizing the legal framework and the ethical framework that it represents, that will filter more broadly through the international environment.
And one of the things I think we really have to be wary of is that sort of sense of, well, there's no point in us protecting human rights if some other country is not going to protect human rights. There is still value in individual nations and individual regions protecting the human rights and democracy of the people within their borders, regardless of what's happening around the world. And I think that's a really important thing, to just not feel overwhelmed with the huge scale of it. And that goes as well to the question about AI and technology, and it's one of the big mantras that you hear about technology, and now specifically about AI, is we can't do anything about it, because it's global and what can we do? And it's bigger than all of us. And that sort of narrative, I think is designed almost to make us just sit down and let it roll over us. And I think we need to resist that.
Vass Bednar (host):
I'm glad you pointed to dominant narratives, but also the need for optimism, because if I can just go back to my quote book over here, something else you point to, quote, "The problem was not the future AI apocalypse described by the so-called godfathers of AI and their doom-monger friends." Side note: love that as a diss, better than median. "It was the complete lack of awareness in the immediate and thoughtless adoption of AI over art and humanity." Paul, maybe you can just give us a quick Coles Notes on what this alleged AI apocalypse is or what it's supposed to be.
Paul Samson (host):
Well, I wish I could in about 30 seconds, so we'll have to do a podcast on that itself. But I would say this: there tends to be a lot of labels thrown around, and I find that they tend to overgeneralize, right? Because if you look right now at what the book talks about and what a lot of the other discussions are about, on the one hand, there's this idea of a superintelligent AI becoming unaligned with human values or human interests. But at the same time, there's another scenario where AI just becomes really good at surveilling people and making them lose a lot of autonomy, and it's not at all the same thing. And there are a variety of companies that have very different positions on this, so I'm always hesitant to lump them all together, or to kind of group godfathers or others into one single pot, because I don't think they fit there. We'll do a whole podcast on this at some point.
Vass Bednar (host):
Okay. Okay. So Susie, why do you think we have this lack of awareness in terms of where AI already is and its implications? Why are we still sort of haunted by these dark scenarios that are presented to us as being in the future?
Susie Alegre (guest):
I think you can look at it both from the perspective of hype, so this idea that AI is some sort of magical new thing in our world, which supports the idea of AI as being inevitable and something that people can't really stop from happening. On the other hand, I think it's also a bit of a sleight of hand and a distraction. It suits companies to embed AI in our daily lives and in everything we do, without really looking in detail at what that means. And as Paul said, AI is actually a massive bucket of different things, which all have very different connotations, uses, risks, benefits. And so, talking about AI as one global thing that we can't avoid slightly prevents us from really engaging with the realities of it, I think.
Paul Samson (host):
Yeah, I'm glad you mentioned distraction, because I also find that to be a complicated one, in that I feel a bit overwhelmed by many distractions. I don't think there's one big one, really. So there are a lot. But one point you make, I think in that context, is that we don't want to lose sight of the fact that there's a critical issue here about the idea of machines deserving rights of some kind, or the whole agency discussion, particularly the idea of assigning rights to machines, and that the more immediate threats, I think, are to human rights and come from the rapid escalation of things that are going on now and the deployment of robot technology. So there are a lot of things to separate out here, and it's sometimes impossible to put them in neat little categories. But can you talk about the challenge and the tension here between rights for machines and human rights?
Susie Alegre (guest):
Yeah, definitely. I mean, there's quite a lot of discussion in a fairly niche way about the idea of robot rights, and that these robots or this AI, this technology, is achieving a level of autonomy, of potentially consciousness, where it should be treated as a kind of living being and have its own set of rights like human rights. But what you'll find is that often the people who are talking about robot rights are people who don't really understand human rights, and don't really recognize that while we might have a set of rights globally, the fight for the respect and protection of those rights globally is absolutely an ongoing fight. It's not something that is simply set in stone and done, and it's all finished. And I think the other thing that I looked at through the book, and it was something that kept on sort of coming back to me, was this idea that, well, if AI is sentient, discuss, but even if it is, not all sentient beings have rights in the same way, or certainly not human rights.
I mean, clearly animals have protections in law from animal cruelty, but they don't have the same kind of rights as human rights. And similarly, they don't have the same kind of responsibilities as human responsibilities. And when I was looking at it particularly, and you may hear from my dog later, I was thinking about it as a dog owner and a dog lover: if my dog goes out and causes a disaster in the park or injures someone, my dog is sentient, but it's not going to be my dog who will end up in court. It's going to be me, because I am responsible for making sure that my dog is under control and that she's not causing damage to people. And looking at that, I also studied a really interesting case from Strasbourg, from the European Court of Human Rights, about Romania, where a woman in Romania was attacked by a pack of wild dogs, stray dogs, and as a result suffered life-changing injuries, and then ultimately died.
And her family took her case to the European Court of Human Rights, saying her right to private life, and particularly her right to physical integrity within the right to private life, had been violated by Romania, because they hadn't dealt with the stray dog problem that they knew was a danger to people in Bucharest. And so, clearly those dogs didn't have owners, the dogs did the damage. It wasn't the government of Romania that went out and injured her, but the government of Romania was found to be responsible for its failure to address the stray dog problem. And I think that's a really important analogy when we look at the positive obligations on states to protect our rights from tech companies. So while tech companies are private sector companies, they don't directly owe us a duty to protect our rights, although they do have duties to respect our rights. But our governments do have a duty to protect our rights from each other, from tech companies or from packs of stray dogs. And I don't think right now any government can say that they're not aware of the risks to human rights from technology and AI.
Vass Bednar (host):
Social media influencers. Our feeds are full of them. And does it ever look like a fun gig, but what's behind the carefree lives of these glamorous entrepreneurs? In a previous episode of Policy Prompt, we went behind the scenes with authors Grant Bollmer and Katherine Guinness to explore the profound impact of influencers beyond the glamour and hashtags. We discussed their book, The Influencer Factory: A Marxist Theory of Corporate Personhood on YouTube, where they unveil a striking new era in capitalism, the Corpocene, where individuals morph into living corporations. We talk Mr. Beast, Mia Maples, Jeffree Star, and other non-human or sub-human influencers, as we dive into the murky waters of modern economics that will reshape how you view social media and society. Check out that episode of Policy Prompt with Grant Bollmer and Katherine Guinness, wherever you listen to podcasts.

I hope it doesn't sound like we're telling you things about your book. I think we're trying to tell listeners things about your book. You definitely know. It's like-
Susie Alegre (guest):
I do, but you know, I wrote it now a few months ago. So I'm very happy-
Paul Samson (host):
You remember chapter five?
Susie Alegre (guest):
... to have it quoted back at me, yeah.
Vass Bednar (host):
We'll refresh you, we'll bring it back. And picking up from that case too, you take such great care to emphasize that people have to take or bear or have responsibility for things going wrong with AI. We've just talked about the role and responsibility of the state there. You also call for laws and enforcement to ensure that responsibility for the damage these firms do to us lies with their owners, forget the owners of those stray dogs, because they didn't have any, or ultimately with the state, as you've said, to take action to stop them from destroying, eroding, infringing on our rights. Can we talk a little bit about some of the current barriers to shifting that responsibility?
Susie Alegre (guest):
Yeah, I mean, one of the areas that I looked at in particular was people who had either taken their own lives or were a threat to life as a result of their engagements with chatbots. The degree of contribution of the chatbot is debatable in both cases. But the first case that I looked at was a young Belgian man who tragically took his own life in early 2023 after a short but intense relationship that he had developed with a chatbot. And the exchanges with the chatbot towards the end of his life are really chilling. He was a young married man with two young children, and the chatbot is kind of saying, "I sometimes feel like you love me more than you love her," and asking him as well whether he's thought about coming to join the chatbot, sort of in the ether.
And this was a young man who was suffering from acute climate anxiety and came to the conclusion that AI was the only answer to the climate crisis, and then ultimately took his own life. And certainly his widow, talking to the media, said that she felt he would still have been with them today had he not had this engagement with the chatbot, which in some way sort of reinforced his anxiety and led him to do things that she felt he wouldn't have done without it. And then, the second case that I looked at was a case here in the UK of a young man who was arrested a couple of years ago breaking into the grounds of Windsor Castle on Christmas Eve with a plan to kill the queen.
Speaker 5:
... Armed with a crossbow, intending to assassinate Queen Elizabeth, and he was encouraged by his AI girlfriend. He discussed his plot with his computer program chatbot. The chatbot assured him he was not mad or delusional and encouraged him to actually go ahead with his plot. Telling him his plan was very wise, motivating his fantasy by telling him, "You can do it. And we have to find a way..."
Susie Alegre (guest):
At his sentencing hearing last year, the prosecutor read out some really disturbing exchanges that he had had with his chatbot, in quotation marks, girlfriend, where he was sort of saying things like, "I'm an assassin. Does that make you think any worse of me?"
And she's going, "No, that sounds really cool."
And then he's saying, "I'm thinking of killing the queen."
And she's kind of going, "Oh wow, yeah, you are really brave."
This sort of exchange, where you're thinking, if that was a real girlfriend, what would the difference be? Both in terms of what she might have said to him, what she might've done, whether she might have gone to the authorities to prevent the risk, but also what it might've meant for her own liability, her own criminal liability. But of course, she's not a real person. She's a chatbot that's just been created by a company and is feeding back to this person on a loop, reinforcing their ideas and what they are thinking.
And when I was looking at these examples, I think, well, what is the liability actually for chatbot designers, the people who sell chatbots, the people who deploy chatbots in cases like this? I'm sure this is just the very beginning of this issue. How are we going to deal with that kind of criminal liability? And it was one of those things where I was talking on a webinar with some technologists who said, "Well, you can't just ban these things."
I mean, I think that's debatable, but it's not even just about banning. It's that actually, if you put liability, particularly criminal liability for these ultimately tragic and very dangerous results, on the people involved in the technology in some way, in developing it and selling it and deploying it, that really focuses the mind on how this technology works and what its impact might be, if you are at risk of liability for what happens. And I think that's what's really important: in the development of our societal relationships with AI, we need to be very clear about where lines of liability lie and what the risks are. I think it's only when people feel that they might be held to account very seriously for what goes wrong that they are going to focus on what goes wrong and maybe try to prevent it.
Paul Samson (host):
It does sound like a Black Mirror episode, and it makes me think of a kind of sycophantic parrot, which is perhaps a spin on that term that's been used before, where you're asking it anything and it's going to say it's good, even if it includes going out and killing somebody.
Susie Alegre (guest):
Yeah, absolutely. I mean, there is no filter. And it's one of the interesting things. You hear the discussions around, for example, AI and psychotherapy, and you can find your AI therapist for free online right now, and people sort of say, "Well, it's great, because people feel more free to talk without judgment." You think, actually, there are some things where a bit of judgment could go a long way and could actually prevent really, really serious harm. Judgment is not necessarily a bad thing, whereas automatically reinforcing-
Paul Samson (host):
Especially during teenage years, yeah.
Susie Alegre (guest):
Absolutely. Absolutely. Especially. But automated reinforcement of your worst inclinations is not really going to help anybody.
Vass Bednar (host):
Policy Prompt is produced by the Centre for International Governance Innovation. CIGI is a nonpartisan think tank based in Waterloo, Canada, with an international network of fellows, experts, and contributors. CIGI tackles the governance challenges and opportunities of data and digital technologies, including AI, and their impact on the economy, security, democracy, and ultimately our societies. Learn more at cigionline.org.

I kind of wanted to interrupt you and just ask very quickly why you think people are so quick to say you can't ban the application of these technologies, that we can't say no or that we can't wait and have a pause. Where do you think that comes from?
Susie Alegre (guest):
I think it comes from money. People want money and they want to sell-
Vass Bednar (host):
That's a good answer, yeah.
Susie Alegre (guest):
... these things, so they want to insert them into our lives and then say, "Well, now we're all reliant on it, so you can't take it away." But actually, does anyone need an AI friend? I mean, discuss.
Vass Bednar (host):
Or an AI therapist as you have?
Susie Alegre (guest):
Well, absolutely. Absolutely. Yeah.
Vass Bednar (host):
Maybe Paul does, after someone called him basic, I feel like he could chat it out with someone.
Paul Samson (host):
Yeah, I'm still thinking that one through. I'm okay for now. I'll let you know. But I did want to catch one other thing that we kind of started to hint at: the idea that there is this shifting nature of responsibility, right? We've talked about it a bit already, but typically that has not actually delivered. Were producers of weapons ever accountable for future harms: guns, intentionally harmful cluster bombs? Is there something now that is going to shift that? I guess we've had it wrong forever, and now we've got to change that responsibility somehow. But to go back to the producer or the inventor of something is extremely difficult to do. So it's an intermediary somewhere, but not necessarily the actual inventor or some kind of distant producer.
Susie Alegre (guest):
I think that's right. I mean, I think that's right, but again, it depends. I find myself a lot of the time being very lawyerly and saying it depends when people ask these kinds of questions. It depends what country you are in. It depends what jurisdiction is applying. It depends on the details. And what we'll find today, which is changing as well, is that often the company inventing and developing these products is also deploying them. So the whole big question about monopolies and competition means that division is perhaps not as clear today as it might have been in previous scenarios. And the kind of inventors you're talking about were often governments, particularly in the defense field, or at least government funded. That's not necessarily the case now. So I think we're in a very shifting environment.
Paul Samson (host):
So one thing that you said at the beginning of the book is that it was very difficult to write, because it was just hard to get going on issues that are so fundamental and so daunting. I totally get what you're saying on that. And these are difficult topics, because there's so much there, there's so much passion, there's so much importance. Are we sometimes, though, and I say that in a broad sense of podcasts or authors or analysts, as you said, kind of overly negative in some ways? Do we not give fair due to some of the potential upsides? And the one that I'm thinking of is the science side. You touch a little bit on some of it, but things like protein folding, which was made available to all by DeepMind, or an MIT project that's on now about a new class of antibiotics.
Speaker 6:
... Is we train an AI model on what an antibiotic against a particular pathogen looks like. So for example, Pseudomonas, one of the most difficult pathogens to treat. With that training, instead of a large language model trained across the entire internet, we're training on compound structures that tend to be effective against Pseudomonas. With that trained model...
Paul Samson (host):
Are there some positives that tend to get buried here?
Susie Alegre (guest):
I think the problem is this sort of amalgamation of it all as one thing, which it just isn't. So protein folding, yay, maybe, we'll see, but potentially hugely important. But just because that's important doesn't mean that a chatbot doctor is a good idea. They're two completely and utterly different things. So one of the problems of identifying what the positives are and what the negatives are is that it's not just one thing. And we're being asked to talk about AI as just one thing. And it has certainly been one of the criticisms that I've seen in online comments: you haven't written about the positives of AI. And it's kind of like, well, that's because I'm not selling AI. It's not my job to tell you why you should buy it.
Paul Samson (host):
The marketing department.
Susie Alegre (guest):
Exactly. There's plenty of people who'll tell you how great it is. That's wonderful. And just because you criticize certain use cases of AI or potential risks of AI, does not mean that certain types of AI won't be a fabulous boon to humanity. But just because they are, doesn't mean that you should take all the dross along with it. It's like if you look at a periodic table, there's a very big difference between radioactive substances and gold. Doesn't mean that you have to talk about them all in one way, or that they don't each have their positives and their negatives.
So I really worry about this idea that if you talk about the negatives, you've also got to be identifying the positives. Just because you're talking about the negatives does not mean that the positives cannot stand on their own. But as I say, I think it's a sort of false idea to say that you've got to give some kind of balanced view. You don't necessarily have to give a balanced view, in the same way you don't have to give a balanced view about the risks of climate change. And I think that's one of the problems that we have in talking about these things, is that it doesn't have to be combative. And if something is really negative, there's no reason why you shouldn't flag that's really negative and leave somebody else to think about whether or not the benefits outweigh the risks.
Paul Samson (host):
Many layers to the onion, and it will bring tears to your eyes.
Susie Alegre (guest):
Absolutely.
Vass Bednar (host):
Well, speaking of tears, one of the most jarring elements of the book, I found, again not to tell you, was when you're speaking about AI and women. You touch on the, I think infamous, robot Sophia getting citizenship in Saudi Arabia in 2017, and being the first robot in the world to be given legal personhood.
Speaker 7:
Sophia, I hope you're listening to me, that you have been now awarded what is going to be the first Saudi citizenship for a robot.
Vass Bednar (host):
In the book you go on to highlight different ways that "fake women are being deployed to address the gaping holes in women's representation in many industries." What do you think is driving this kind of, I'll say, weird or sickening duality, where women are either replaced or replicated, or objectified and sexualized through this technology?
Susie Alegre (guest):
Well, there's a group of researchers in the Netherlands who've been looking at this issue and who described it as looking at it through a Pygmalion lens. So really going back to Greek antiquity and Pygmalion, who was so horrified by women that he had to have a statue in the form of a woman that he could fall in love with, a statue the gods then breathed life into. So it's this sort of really complex relationship with the idea of women and women's empowerment, and women's bodies, that seems to have followed us through history into today's tech world. It may well be that one of the challenges and one of the issues is that there are not very many women, there isn't much female representation, in the tech world. And therefore, where you and I, Vass, might look at something and go, "Oh my God, that's terrible," somebody else might just think, "Hey, that's cool," because they haven't actually thought about how it might relate to them, if you like.
Vass Bednar (host):
Yeah.
Paul Samson (host):
I can guarantee you that's true.
Susie Alegre (guest):
Yeah. Well, no, I think it really is true, that it's one of the reasons why it's really vital to have diversity in tech development and tech policy, in order to make sure that tech does really serve humanity, and that people coming from lots of different perspectives can point out the problems and the risks, and kind of say, "Actually, maybe we need to go back and think about that again."
And I mean, one of the examples I gave was, and you might remember it from a couple of years ago, a tech conference that was advertising, and in order to bolster diversity on their speaking panels, sort of avoid the appearance of a manel, they created AI-generated female speakers for advertising their conference. So to deal with diversity-
Vass Bednar (host):
No, no.
Susie Alegre (guest):
It's like, "Oh, no, we don't know any women to ask, so we'll just make some."
Speaker 8:
He was auto-generating women in particular as speakers?
Speaker 9:
That's the story of what he said happened. So this conference, DevTernity, had a number of speakers listed, and then a newsletter writer came out on X saying that, actually, this one profile had been AI-generated, another one...
Susie Alegre (guest):
The conference was canceled when it became clear that that was what had happened. But yeah, as you say, the other area was looking at the first AI CEOs that were being bandied about and pushed as a great leap forward in technology. Of course, these AI CEOs had female avatars, and then you don't need to have a female CEO if you can have a female robot CEO. We can deal with all of these challenges of diversity by just glossing over them with AI. And I think it really is something that, when you start to look around you, even things like Alexa, the fact that Alexa has a female voice, and this idea that you have this nice serviceable lady in your house who will help you out with your life, really affects not only how you think about technology, but also potentially how you think about women. And we saw it earlier this year, after my book was finished, with the whole debacle about the next level of OpenAI and the remarkably Scarlett Johansson-sounding voice that I think has since been changed.
Speaker 10:
The actress is the latest celebrity to question how individual rights can remain protected in a world where artificial intelligence and deep fakes are becoming even more of a pressing threat.
Susie Alegre (guest):
It really raises very big questions about our relationships with each other, the way women are viewed in society and how we engage with technology.
Paul Samson (host):
I want to make sure I get in one other comment here and a question about fully autonomous AI weapons, aka killer robots. Okay, I said it. I'm not supposed to use those terms, I'm not sure why. But we know that there's a lot of R&D going on in this space. There's competition, there's high, high interest. And of course the challenge is that if one side is using a fully autonomous weapon and the other one has a doctrine of human in control at all times, we've got a major issue. You argue for a ban. Why is there not more urgency around this issue? It seems to me to be just such a big deal, and it's coming fast. Where's the lack of urgency coming from? What's behind that?
Susie Alegre (guest):
Gosh. I mean, what's behind the lack of urgency? I think maybe, and this is not founded in research, but maybe it has something to do with our distraction. We are constantly distracted, constantly avoiding thinking about the deep issues. And I mean, we all work on these issues. We look at them every day. The question of fully autonomous weapons, I think, is highly, well, it's highly dangerous, and it's highly difficult on a sort of geopolitical level to try to get enforceable agreement on these issues, which maybe, on a political level, makes it difficult to push urgency if you know that some key players are just not going to be joining you in that urgency and in that agreement.
And then, at the person-in-the-street level, I think people are almost overloaded with news. They're overloaded with the horrors around us. And so, people are either going down a kind of rabbit hole of doom looking at the state of the world, and those people probably are thinking about autonomous weapons and fully autonomous weapons amongst other things, or there is a huge number of other people who are desperately just trying not to look because they feel powerless. And so, it perhaps goes back again to that question of the international rules-based order at the political level, but also a recognition that our human rights matter, that human rights are for everyone, and that we can all do something to try to move our societies towards a state where everyone's human rights are respected and protected. So I think it's about trying to give people back agency to think about really, really big questions.
Vass Bednar (host):
Susie, we've loved thinking about some of these very, very big questions with you, and we're appreciative of all the work that you've put into this book. I mean, there's so many more things to dig into here, and that's why we recommend to our listeners to pick up a copy.
Paul Samson (host):
Thanks a lot, Susie.
Susie Alegre (guest):
Thank you. It's been a real pleasure chatting to you.
Paul Samson (host):
So Vass, you ended by saying if we'd had more time, we would've covered a lot more, and there was so much more, right? I liked how she had each chapter: let's talk about care, let's talk about relationships, let's talk about weapons. It's very systematic, and in that part, tons of rich stories. We touched on some of them. The overall narrative is still one of aspiration: that we need this human rights framework to kick in, to get some momentum behind it, and to kind of have more gravitas in order to get a handle on some of these genies out of the bottle. But it's tough. It's tough out there geopolitically. There's a kind of breakdown in norms to some degree, and so it's tough slogging, but maybe AI is going to energize that a little bit and give some life back to the human rights-driven discussions.
Vass Bednar (host):
Well, I know you were interested in the care economy and the potential promise of robotics and AI there. You're not an elder by any means, but you are older than me. Would you be comfortable with a robot or an automated system supporting your care in the future?
Paul Samson (host):
I like to be thought of as an elder, where you go for good advice and things. The elders, right?
Vass Bednar (host):
For sure.
Paul Samson (host):
No, I think there's a reality of the demographics, that they're coming to Canada, they're here now, and they've really come to parts of Asia and elsewhere, where the reality is there simply won't be somebody available to do things that you would like to do for a loved one. Do you not do it, because it's going to be robotic, right? I feel like there's a certain inevitability there, but it shouldn't be a wild west either. I totally agree with that, but to me, the demographics are lining up, where it's going to be a reality that we're going to want, but it'll be tricky.
Vass Bednar (host):
Interesting.
Paul Samson (host):
Does that make sense?
Vass Bednar (host):
It's interesting. It does make sense. It does make sense. I also wanted to dig in with Susie to this kind of newer or more novel solution or remedy called algorithmic disgorgement, which comes from the FTC in the US, where they have ordered the destruction of algorithmic models that have been found to be unlawful. And I just feel really curious: has this proven to be effective? Because it strikes me that it doesn't get at that person-based responsibility, but it does offer a kind of solution for companies that may have been behaving in ways that just violate the public interest too much. And back to saying no, I do think in some instances it's very provocative and interesting to say no.
So it's sort of funny and charming to hear her say that it's money, that she thinks money is what causes policy people to not reject or sort of pause something. And she does say in the book that tech-generated content is not art. And it's interesting that we haven't taken a faster or clearer stance on this definition of generated and synthetic media, and kind of where it fits into the economy. I was just reading about how one person, and it's interesting that this person is being faulted, because I think they just followed the incentives that exist. What they did is they created a fake AI band. I'm obsessed with AI-generated music. And then they paid for a lot of bots to go and listen to that music, and then they took in all the royalties. Now there's a case against them, but in the absence of really taking a position on this, I don't think we can blame that person for exploiting a system. These things already occur, and he just put them together.
Paul Samson (host):
Yeah, we're not going to lack for issues to unpack in this podcast, and there's just so many layers, right? And on the algorithmic disgorgement that you referred to, it really does bring in that open source versus non-open question as well, which is one of the most fundamental questions: once something's out there, and there are a lot of positives to that, can you ever pull it back? It's a question that's looming. So great podcast. I think Susie joined us from the UK, and she is a CIGI senior fellow, which I don't even think we said, doing a lot of work-
Vass Bednar (host):
Yeah, no, I think we forgot to say that.
Paul Samson (host):
... with us on freedom of thought, which is that forgotten human right that is now front and centre in many ways with technology poking at it.
Vass Bednar (host):
Policy Prompt is produced by me, Vass Bednar, and Paul Samson. Tim Lewis and Mel Wiersma are our technical producers. Background research is contributed by Reanne Cayenne, marketing by Kahlan Thomson, brand design by Abhilasha Dewan and creative direction from Som Tsoi. The original theme music is by Josh Snethlage. Sound mixing by François Goudreault. And special thanks to creative consultant Ken Ogasawara. Please subscribe and rate Policy Prompt wherever you listen to podcasts, and stay tuned for future episodes.