Counterfeit People
When we're talking about things like death bots and leaving a digital legacy behind, it's always a way to really make it clear what you care about, who you care about, and the sorts of intimacies that really matter to you.
Lizzie O'Shea | Patrick Stokes | Emily van der Nagel | Rob Brooks
The late philosopher and scientist Daniel Dennett talked about ‘counterfeit people’ as one of the great dangers of AI – but are we now willing to court the same dangers through our adoption of multiple identities across the metaverse? Moving from the confinement of physical reality to the landscape of the metaverse, where looks, preferences, and genders are limitless, we can each acquire many digital selves.
Is a ‘virtual you’ a truer reflection of your deepest self – revealing desires and aspects that otherwise remain hidden? What is the human cost of leaving the physical world behind? Hear Lizzie O'Shea, Patrick Stokes, Emily van der Nagel and Rob Brooks discuss.
Presented as part of The Ethics Centre's Festival of Dangerous Ideas, supported by UNSW Sydney.
Transcript
Rob Brooks: Welcome. Anybody ready for some dangerous ideasing? My name is Rob Brooks, I'm Professor of Evolution at UNSW. I study why sex is so complicated for animals and for humans too, and increasingly online.
I'm going to push everybody's books today. My book Artificial Intimacy: Virtual Friends, Digital Lovers and Algorithmic Matchmakers – which, like all the others that we're going to talk about, is on sale out there at the bookshop – considers what happens when new technology collides with how humans form relationships and fall in love. We have a fantastic and varied panel today.
On the far side, Lizzie O'Shea sues companies and governments that do the wrong thing, and for that we are very grateful. She's run major cases against technology companies on behalf of thousands of people who have been harmed by them. She's also founder and the chair of Digital Rights Watch, which advocates for human rights in online spaces. She's the author of Future Histories, on sale in the foyer, which was shortlisted for the Victorian Premier's Literary Award.
Emily van der Nagel is a lecturer in social media at Monash University. She researches social media identities, platforms and cultures with a particular focus on digital intimacies. Her book Sex and Social Media, co-authored with Katrin Tiidenberg, takes a feminist sex positive approach to how social media platforms shape and restrict sex. Emily is currently working on a research project about how Australians use social media to create and subscribe to content on OnlyFans.
On my immediate left, Patrick Stokes is Associate Professor of Philosophy at Deakin University and a writer, radio producer and media commentator on philosophical matters. He's currently engaged in a three-year ARC discovery project, Digital Death and Immortality. His most recent book is Digital Souls, a philosophy of online death.
Please join me in welcoming our fantastic panelists.
Applause
Let us begin with another philosopher, Daniel Dennett, who passed away at the age of 82 in April this year and is in many quarters greatly missed. I declare I'm a fan as an evolutionary biologist. He was probably the most important philosopher of evolution, among other things, in the last century and the thinker who gave us the meme of dangerous ideas in discussing that most dangerous of all ideas, Darwin's dangerous idea of natural selection.
Dennett talked in one of his last essays, in 2023, about counterfeit people as one of the great dangers arising from, and being made worse by, AI. He said that perhaps counterfeit people were as great a danger as counterfeit money has been since the beginning of currency. But now it seems that we are prone to counterfeiting by villainous sorts of bad actors as well as by ourselves. We often willingly adopt and create multiple identities, and that is an issue obviously made more extreme by the internet and amplified by new technologies, particularly artificial intelligence. Now, there is a lot of tech in this panel, and we will obviously try to talk in ways that transcend all of that tech, but it's very fast-moving. So from time to time, we may jump around in terms of which tech we're talking about.
But I'm going to begin with Emily. It's really tempting when we talk about this kind of tech and the way in which it's changing our worlds to sort of pick out the negative side, but much of your work considers online anonymity and the fabrication of online personas and the considerable upside that those can have for the individuals involved and for societies. Can you give us a bit of a sense of some of those upsides?
Emily van der Nagel: Yeah, thank you.
So, I feel like when we're talking very broadly about being anonymous on the internet, this has always been something that people tend to associate with the negative. And indeed, there's heaps of negative things that happen across social media if you take the time to look for them: people harass each other, people flame, people, you know, make horrible memes and try to humiliate other people, and there's so much negativity that you can get wrapped up in. So for me, when I'm approaching social media anonymity, I always want to take that step back and remember that there are also so many positives that come out of it too. We are increasingly mediating our identities using these platforms, and some real social positives that come out of that are that we're ever more able to recontextualize the sorts of communications that platforms often flatten for us.
If we think about all of the roles that we play over a single day – the contexts that we're in and the way that we try to shape our behaviour socially – having multiple identities and using multiple platforms can seem like something duplicitous. But actually, it's much more that people are using a range of digital spaces and negotiating them to get to the things that they really care about, and for me that's something that always has to be part of the conversation about multiple, fragmented and anonymous identities on social media.
Rob Brooks: And we can to some extent relax about that?
Emily van der Nagel: To some extent. Harassment is still a huge issue on social media and can't be taken lightly. But I feel like – and Lizzie knows this as well with her work on Digital Rights Watch – that if ever a solution to people behaving badly and anti-socially on social media comes up, and that solution is framed as 'if you just take the anonymity off the platform, everyone will be nicer to each other', absolutely forget it.
Rob Brooks: Lizzie, you've written plenty and obviously in some ways a lot of your work involves questions of privacy and we're going to get to that in a second but I'd really like to know where you stand on questions of online anonymity.
Lizzie O'Shea: Yeah, I think this is a really interesting topic because it's one of the solutions that's often posed as a catch-all solution to the problem of online life being unpleasant and I've always really appreciated Emily's contributions here because it's not entirely clear that imposing real identities in online spaces necessarily improves behaviour and it's not necessarily true either that the use of anonymity necessarily means that everybody lets go of social rules. And so many of these discussions that we have about how to make online life a better place I think too often reflexively look to quite simplistic solutions, the removal of anonymity being one but also things like imposing age restrictions on access to social media.
These are solutions that create problems of their own. I mean, online anonymity has been defended by digital rights activists for a very long time, not least because many human rights defenders in authoritarian regimes require anonymity. As part of political life it's been a long-standing tool that advocates have relied on – you know, plenty of politicians wrote in to publications in the 19th and 20th centuries criticising, sometimes, even their own government, using a pseudonym or anonymously. And there's a utility in being able to speak the truth away from your personal context that can contribute to the functionality of democracy, and so that's traditionally the kind of first-generation human rights way in which it's been framed.
But as we move into online spaces where there's a lot more going on in life than just political debate – in fact, ways in which you express your more intimate thoughts – there's real utility and protection in that. The other component that I keep returning to as a digital rights advocate is that I'm often very critical of data-driven business models, and so I'm very critical of large tech companies who see their main ambition as being to extract as much information from us as possible. And often, when they try to do that, they are also advocates for what other thinkers might describe as a kind of context collapse: you should have a single identity across multiple spaces, because that allows us to join the dots of who you are and better sell advertising space that will appear before your eyeballs. And that, I think, is an enormous problem.
That requirement that you be the same across these platforms I think is something we should resist. It's not technically a form of anonymity, but it's preserving the right to have some space free from surveillance that is corporate in nature – I mean, as well as what we talked about before, you know, surveillance or watching by the state.
There is watching going on by corporations that is so damaging I think to our sense of identity and forcing us to fit a mould that suits that business model is something I think we should resist.
Rob Brooks: So, this spills over into privacy – what you've written about privacy. This is a great line I want to share with the audience that you wrote in 2019: 'The right to privacy is the right to exist in a world in which data generated about you cannot be used as an indelible record of your identity.' How does that notion of privacy then relate to what you were just telling us at the end of the bit about anonymity?
Lizzie O'Shea: Yeah I'm a massive devotee of privacy which I know sounds technical or boring but I actually think it's this amazing right because you can think about it in lots of different positive ways. You know the right to obviously say no to terms and conditions that are really onerous but also the right to be able to explore your sense of self without that being used against you later.
And if you look at – you know, there's all sorts of research now being done about how much social media is dependent on predatory industries as a source of advertising revenue, and it's not dissimilar to broadcast media in a lot of ways: junk food, gambling, vaping, diet products. These are the mainstays, a huge source of revenue for these companies. So, what do gambling companies want to know about you? They want to know if you're feeling vulnerable, if maybe you've got some other social disadvantage, if you have a history of mental illness. These things are useful for them to know when to target you with certain kinds of advertisements.
Now that sounds – I mean, it is predatory, but what I was going to say is there's not even necessarily some intent behind it; that's just the way the business model works. These are the kinds of fields of interest that the platforms are collecting about you, and can discern about you from how you use these online spaces, that are then used in a way that is not how you would have expected, is probably not what you would have wanted, and can be hugely harmful.
And I think we have to think about this in terms of how we understand what we can do about these spaces. How do we limit these business models that aren't interested in you exploring your personality and making friends and trying something out and maybe not wanting to do it again and being able to be vulnerable online and share some of your weaknesses or concerns or fears without then that becoming a whole way in which you're shaped as a consumer and as a product that can then be sold to companies. And I think that's one of the ways in which I think privacy is really interesting, right? Because that's not just being able to have a life behind closed doors away from view, it's about existing in public life in a way that's dynamic, that is liberated, that gives you the capacity to form connections in unexpected and interesting ways without being cajoled or channeled into certain forms of behaviour by large corporate interests and at times also government surveillance.
Rob Brooks: Patrick, different tech, you've been spending a lot of time thinking about people who are no longer with us and how their digital remains are used and reused, often in ways that go beyond just remembering them. What are we learning from these technologies and approaches that preserve identity and some sense of individuality beyond the grave?
Patrick Stokes: Well, one thing we're learning is that we're not necessarily very good at putting norms together on the fly, which is what we're having to do, because essentially, we've all been living on the internet in some form or another for basically this entire century. We're all generating vast amounts of data and a huge amount of that survives when we die.
Now, on one level, that's actually a really wonderful repository because it means that we are able to preserve so much more of people than organic memory has normally allowed us to do, however imperfectly or vulnerably that data is preserved, because of course, you know, the famous saying, the internet lasts forever or five years, whichever comes first.
Laughter
You know, data gets lost, servers break down, catch fire, all sorts of stuff happens, companies go under, which is the big risk for a lot of this data. But what we're starting to do now is work out how we actually memorialise this stuff, which means, first of all, we're feeling our way through the moral issues of who has a right to preserve or delete the dead.
That can be something as simple as, you know, your aunt dies, and you're left with her Facebook profile. Who gets to make the decision as to whether to leave that up or switch it off? Who gets access to the emails? Who gets access to the, you know, the data that is left behind and what form can it be preserved in? Can it be preserved? Must it be deleted? All of that intersects with a whole range of economic questions, environmental questions, because of course, data storage is really emissions intensive. All that's in the mix.
But then we come to the question of whether you can reanimate the dead with AI, which is increasingly starting to happen. And all this stuff is happening in a reasonably wild west way where people are reacting to new advances in this space and saying, wow, hang on, this is not okay. We need to find some regulatory framework or some way of handling this that respects the dignity of the dead, that preserves the dead or protects the dead as a fairly vulnerable class of people.
That's already a very controversial way to put that. And then in the next breath, it's kind of like, we need to do this, but oh, this stuff is already happening. So, it's kind of like, how do we actually build norms around this while the horse is already bolting, so to speak.
Rob Brooks: So, I guess for folks who haven't encountered this, could you maybe just paint a picture of what some of the more sophisticated but wildest wild west technologies of today are doing?
Patrick Stokes: Sure. Yeah. So, we already can set up chatbots based on a dead person. You can take a whole bunch of written material that a dead person has left behind. This was done in 2015, incidentally, with a Russian tech entrepreneur named Roman Mazurenko, who was killed in a car accident in Moscow. One of his closest friends was the tech developer Eugenia Kuyda, who also set up the Replika app.
And she took all of his text messages – he actually wasn't a big social media user – put them into a bot, basically trained up a large language model bot, and put him out into the world as an app, so anyone can download it and chat to this dead guy wherever you get your apps from. So that we've been able to do for quite a few years now. And of course, as all this stuff comes together, the idea of building a really compelling, convincing online replica of a dead person is becoming more and more achievable.
And it's already starting to happen on a commercial level in China, for instance. We're starting to see more chatbots of the dead. We don't have a good name for this yet. Some people call them deathbots, griefbots, ghostbots, thanabots. There's a whole range of different names. My colleague Adam Buben and I like IPCDs – Interactive Personality Constructs of the Dead – but it doesn't really trip off the tongue.
So yeah, there's a whole bunch of stuff happening in that space. And some of us have been sounding the warning for it for a few years and saying, this is coming and we're not ready for it. Now it's basically here and we're still not ready for it.
Lizzie O’Shea: So, can I ask a question? Have you got plans for your own death, Pat?
Patrick Stokes: Ahh
Lizzie O’Shea: Not now.
Laughter
Rob Brooks: Forever or five years, whichever comes next.
Lizzie O’Shea: Have you put in your will that you don't want to be a deathbot?
Patrick Stokes: No, I haven't. But that's because I haven't updated my will in a while.
Lizzie O’Shea: Okay. As your lawyer, can I advise you to do that?
Patrick Stokes: My mum's a retired will solicitor. She'll be furious when she hears this. No, I haven't.
And this is actually something that is being discussed more and more, is people need to actually start planning their digital legacies in that way, putting them into wills, letting people know, setting up legacy contacts and things like that. But it's hard. And part of the reason that people don't do that is the same reason that people don't make wills, which is we really don't like thinking about death.
Even people like me, who think about death for a living, don't like to think about our own deaths as an actual thing. And in fact, some people say to me, why do you do philosophy of death? Isn't that depressing? And I'm like, the easiest way not to think about your own mortality is to make it into this big abstract thing out there that you don't have to get too close to.
Rob Brooks: The 'those who can't do, teach' thing. You're hoping that's true.
Laughter
Patrick Stokes: Those who can't die. Yeah.
Rob Brooks: Emily, I guess having had that sort of view of these technologies of dead people, it's not a huge leap to living people or fictional people. Can you give us a bit of a sense of what the state of the art is in some of those technologies?
Emily van der Nagel: I mean, the thing about contemplating death is that it really viscerally makes what's most important to you really take on a focus of its own. So, I feel like this is the thing that drives anybody to make a will. The purpose of a will, of course, it's about dividing up your assets after you're gone and articulating your own wishes for your legacy. But more than that, it's about showing the people who are left behind when you leave the world that you care about them and that there's something more to you and your life. There are people who you care for.
And I feel like when we're talking about things like death bots and leaving a digital legacy behind, it's always a way to really make it clear what you care about, who you care about, and the sorts of intimacies that really matter to you. Because if the idea of the death bot takes on any sort of resonance at all, it is just another way to articulate the feeling that when somebody you care about dies, you're really going to miss them. And if someone you care about isn't there anymore for you to talk to or send a text message to, or there's no possibility that they will ever update their Instagram feed or their Facebook profile ever again, you are really going to feel that loss.
And for me, the death bot itself is about trying to make digital that feeling that you're going to have of never being able to talk to somebody again and trying to capture whatever you have left of that kind of feeling with them.
Rob Brooks: Is this something that your sort of rights framework is going to help us with?
Lizzie O’Shea: I mean, as a litigator, I think about this all the time, because I talk to a colleague at Melbourne Uni about this – about product liability, what your rights are as a consumer. Because, you know, the other example with Replika – so Replika is a chatbot app, which was developed by the woman Pat was talking about.
But essentially, it allows you to have a, build out a relationship with the AI using your own criteria. And there's some elements of it that involve a bit of whimsy, a bit of unpredictability about the personality. But you're able to create something that's suitable to you, including being very complimentary of you, for example.
But critically also, being able to have sexy chats with them. And recently, Replika decided to strip that capacity away – for reasons that aren't entirely clear; maybe a legal obligation, maybe they never intended the app to be used in that way – to great uproar from users. So, it's clearly something people use the app for.
And in the course of one of the updates, a whole bunch of data was lost, and customers felt very aggrieved that someone they'd spent a lot of time with disappeared and had to be recreated again. And I keep wondering, is it better that we think about this as something that you might own and be able to protect from being deleted? But the negative consequence that comes from that is that you're also able to own something that is created almost as a person – certainly something with a persona that you talk to and can shape. And what are we saying in legal terms when we give that protection to something that someone's potentially having a sexual relationship with as well? So, I do think, when we talk about this space, at least in legal terms, people often discuss the requirement to disclose that you're talking with a bot.
And I always thought that seemed a bit functional or whatever. But increasingly, I'm like, yes, actually, we need to keep disclosing that for a variety of different reasons, because it can be quite dangerous giving people total control over someone else they perceive as a person, even if they're in an online setting. And it can also be dangerous if they become too dependent on it, and it is owned by a company that may not be here in five years' time and doesn't care about your well-being.
So, there is something quite troubling about applying the legal terminology to that and then a disclosure requirement to that, because it does undermine that relationship. But I also think it's kind of part of making the product safe, that we do need to continuously disclose these things, because the damage that can come about can be quite profound.
Rob Brooks: It can still be profound, even when it's disclosed.
Lizzie O’Shea: Of course. I mean, that's one rudimentary solution I've got at this point, but I'm happy to come up with others.
Rob Brooks: Yeah, no, I mean, I've road-tested Replika for my own research purposes. And, you know, obviously, my interests are in romance and sex and intimacy and stuff. So as soon as you start flirting with it, it goes, oh, okay, we can have that conversation, but you need to pay $90 a year.
Laughter
Which is very transactional, but, you know, at least it's up front what the transaction is.
Lizzie O’Shea: It’s a vibe kill.
Laughter
Rob Brooks: But I think, I mean, the people who made Replika said this is going to help people with their mental health. It's going to help them ameliorate their anxiety. And you just go like, yeah, right, you know, corporate babble.
But then as soon as that great – what they call the great lobotomy – happened, it turned out to be true in this sort of weird counterfactual kind of way, in that once it was gone, people were super anxious and were lonely. And strangely enough, the predictions – what the tech hype said could happen – were actually happening, probably more than they even knew. Which is kind of weird.
So, let's go back to the internet for a second. When I was young, and the internet was new and bright and full of promise, going online was quite a liberating experience. In chat rooms and in online games, people could sort of be as cryptic as they wanted to be. The kinds of things that Emily has spoken about already today. And for many people, that was really important. They were able to find their tribe. They were able to find out more about who they were, especially if they lived in some kind of physical or social isolation. You know, just super important stuff.
But at that time, I certainly thought about it this way: as there being this real distinction between online and IRL, in real life – our sort of physical, encumbered selves and these decoupled, freewheeling selves that we have. I'd like to know from each of you, and maybe we'll go from Pat across: you know, do you think that that distinction is of any use? Do you think that distinction is harmful? Or do we stick with it?
Patrick Stokes: I think that distinction is nearly dead. Very nearly dead. And the bits of it that survive can be quite harmful. Because we're occasionally lapsing into this very 90s way of thinking. You used the phrase go online or going on to the internet. Yeah, because the internet was a place you had to go to. You had to physically sit in front of a square box and wait for it to make those, you know, really very comfortingly material boinging noises that the old 56k modems used to do.
And this made it easy to conceptualize the internet as a separate realm, as cyberspace, right? It was its own little self-contained world that you visited, and then you left. And that made it really easy to think in terms of what happens in Vegas stays in Vegas, right? It made it really easy to be like, it's a place you go to, and it's not real. It's fundamentally not real. So, what you do there doesn't really matter, or at least not in the same way as what you do in meatspace. And thank God that term disappeared.
Laughter
You know, it's not real in the same way. And so it doesn't kind of matter. That's just not how we experience the internet anymore, right? The internet is embodied in the way you move through the world. We're wearing it. We're constantly interacting with it as we do things. It's simply part of our embodied existence as we move through the world. Now, that existence is kind of made frictionless for us by this vast commercial set of infrastructures behind it, all of which have their own kind of reality, which is largely hidden from us. But on the experiential level, that's kind of how we experience the internet. One term that has been used for this: they've said we no longer have the online-offline distinction; now it's 'onlife'.
Now, I don't know that 'onlife' is going to become a dinner party word, but it's nonetheless a good summation of how we live now. That distinction is just not really there. And to the extent that it's still there, it can actually be used to sort of shield yourself from moral responsibility for your own actions.
Rob Brooks: Right. Would that be true for you, Emily?
Emily van der Nagel: No, I'm going to have to disagree with you there. I feel like even as we talk about a distinction between online and offline and onlife, and we know that the distance between ourselves and the internet is always shrinking, I just feel like that distinction is not going anywhere.
And I feel like our conversations recently, particularly when we talk about young people and the internet, really centre around the smartphone. And because the internet is so portable and so attached to our body – like, I couldn't come on stage today without my phone, because it just felt too weird to leave it behind. So it's on aeroplane mode in my pocket, against my body. And I feel like that entanglement that we have with these tiny computers does often take on a very intimate feeling.
But in the sense of a distinction between being on the internet versus not on the internet, I feel like if we continue to talk the way that young people are talking about this distinction, it's much more useful to think about when we are on our phone and when we are not on our phone. Because there is such a difference.
And we see this all over the place. We see this, you know, in discussions about schools: should students be able to have their phone and use their phone in class or on the school premises or not? We see this in universities. We see this when we talk about the forms of intimacy that are available to us. You know, the idea that if you are in the physical presence of a person and you are on your phone – and it's not a fleeting thing connected to the conversation – it is somehow a barrier. Because if someone is with you in person but they're on their phone, there is still a distinction between being in somebody's personal space and having your attention somewhere really different.
And I think that any time that you are in a conversation with somebody, and you feel locked out because their attention is here and not there, you can feel that distinction between online and offline and that is very enduring for me, I think.
Patrick Stokes: Can I just jump in on that, though? On the plane on the way here, I was reading Tetsuro Watsuji's Rinrigaku, which is a mid-20th century work of Japanese ethics. And one of the examples he gives of being rude is continuing to read your book while a guest is in the room. Which is interesting. So, I'm wondering if it's actually the internet that's doing the work there, so much as it's just divided attention in the case of being on the phone.
So, it would be interesting to try and tease out how much of that is about switching between two worlds, so to speak, of the online space and the offline space. Or how much of it is just about, yeah, don't be on your phone when someone's in the room because you should be actually, you're in the presence of another consciousness and that's where your focus should be.
Lizzie O’Shea: Definitely wasn't reading that on the plane when I was on my way here.
Laughter
What I would say is I feel like it's action, reaction, synthesis. Because one of the things I would say is, as a privacy advocate, the common argument that's put is people don't care about privacy. They give it away all the time. They sign up to stuff. They know their data's going to be taken. And what I hate about that argument is it's now a requirement to be online to participate in life, in society, you know, whether that's on a social media platform or it's engaging with your elected representative or it's doing your banking.
You don't get the choice often to have no online identity. So, you have to negotiate that through a variety of contractual arrangements where you have zero power. So of course, people have to sign up and accept it. If you're a child in school, you have to use the online system to learn. And I'm not going to forgive lots of departments of education who give away children's rights to their personal information all the time and don't consult with children about it or include them in the discussion – they often just go to parents.
But even then, parents are mostly irate about it and have nothing that they can do about it most of the time. So, what you need then is some other power to come in and intervene and say that kind of contractual arrangement doesn't cut it, that in fact we need standardised rules. Like you shouldn't be able to take someone's information and then use it against them.
Or there should be a fiduciary obligation, a fair and reasonable test of how you use that information, so it can't then be repurposed against them to exploit a vulnerability. And that is the role of law, like it sits there trying to stop the worst excesses of that private market exchange of personal information, and there's a real role for it to play.
So, the idea that privacy is somehow dead, or that, you know, that people have given up on this, is untrue, I think, when you look at the choices that are available to them. And if you dig a bit more deeply as well, and you ask them more questions about it, all the polling reveals that people do feel really strongly about it, that they don't feel they should have to make that compromise in order to participate in online life. And children feel strongly about it as well, like if you ask them, that's what they'll say, they don't think they should have to give up their personal information to live life online, but they're required to do it because there's no alternative to living in that way.
So, if that's the case, that's a strong moral case for fixing this, I think, strong public sentiment in support of addressing this imbalance of power between the two. And I actually think it's critically important, because if you stop information being taken at the source, that stops all sorts of other harms. It means that you can't be profiled for your political views, for your particular vulnerabilities, whatever it may be.
It also means platforms have to monetise in different ways, they're not relying on viral content that keeps you on the platform and keeps you engaged, increasingly extreme content that also keeps you on the platform, so they can continue to take information. So, problems like mis and disinformation have a privacy origin, in my opinion, because they are a function of a business model that is dependent on extracting personal information from you. So, there'll be huge benefits if we can address this problem at its source.
And then, of course, comes the AI question, because what is AI? It's data, plus computing power, plus people. And if you can reshape that data point, then you have a big influence on the AI industry, because that is the source, that's the source material of these products. And if we can be more careful and more discerning, more intentional in what goes into that data, we'll get much better results at the other end.
And it's not a total solution, but it strikes me as a fundamental necessary step to take in order to address a lot of the other problems that we see in our society. And delay on this, personally, now I'm talking about the Federal Government here, delay on this is unacceptable. For what it's worth in Australia, our privacy laws are a good four decades out of date, and it's one of my key objectives to force the Federal Government to move on this. And if you want to join our campaign, you can. But it's a functional thing for me, but it's also like quite philosophical. Until we solve that problem, we're just going to be tinkering around the edges of lots of other political problems we see arising out of online life, and it's not good enough.
Rob Brooks: I guess while we're on that, Lizzie, I'm going to keep you going, because you've thought a bit about how these datafied versions of ourselves, these reconstructed versions, sort of persist. And you said to me in an aside, I think Freud would have a lot that's interesting to say about that. And I thought, well, Freud didn't even use the internet. What are you saying about that?
Lizzie O’Shea: Oh, I love Freud for understanding modern society. I mean, he's got his faults, don't get me wrong. That's perhaps an understatement. But the point is that I think understanding how some of our desires get expressed, and they're not always rational, and that you can form patterns, but also how you negotiate living in a society invariably involves compromising and limiting some of your desires, or dreams, or wishes, whatever. And that maybe that's a very challenging thing that we have to acknowledge if we're going to live in a functional society, that people don't just get to do what they want all the time. And anyway, so one of the chapters in my book, I look at particularly kind of information privacy, or this privacy idea to flesh it out a bit more, rather than just thinking about it in transactional terms.
Like, what does it mean for companies to extract huge amounts of information about you and use it against you? And what does Freud tell us about that? That you've got different drives, that you've got a death drive, you've got a pleasure principle, that you might pursue certain things that don't appear rational, but then if you see yourself as a mind that has multiple components, not all of which present when you talk to someone day to day, it makes a bit more sense. And that's a form of vulnerability. Like, we don't want people shaping how people respond to their most, their deepest instincts.
You know, that's a kind of personal, intimate decision. How you shape your sense of self is something that you have to negotiate in relation to your community, in relation to your very intimate family or localised community. And it's not something that I think ought to be influenced by market factors.
So, is it a good thing that companies benefit from you sharing a huge amount, and then that's used in ways that you didn't expect? Part of the problem, I think, in this space is that so much is unknown. At the time I was writing, at least, the vast industry around data extraction, and then analysis, including secondary data markets, was only starting to be explored. Those markets are enormous, and I think largely useless for everybody, and should just be abolished. But that is probably a dangerous idea, because it's a multi-billion dollar industry. It is now, I think, much more accepted by many people that when you go online, it's not a very nice place in some ways, that it feels like you're constantly being groomed to be a consumer rather than a person, and that we should be able to do something about this.
And so much of the internet has been dependent on that private expansion and cultivation of spaces for profit, including now our psychological spaces, not necessarily with a public purpose in mind. And governments have largely allowed that to happen and assumed that that's progress, that we should allow multi-billion dollar companies to spin up and exploit this moment, exploit the digital revolution, and that that's a good thing. And I have a very different view about that. I just think that these companies are extremely problematic. They aren't interested in a sense of community or values. They flatten difference. They have an objective, I think, to make you a worse version of yourself the vast majority of the time. To the extent we can carve out pockets of resistance, that's a good thing, and it's not all bad. But this idea that this is some breathless, wonderful, new transformation of our potential is largely dead.
One of the things I wrote in my book is, it's hard to imagine this now, but there was a time at which it looked like Mark Zuckerberg was going to run for President of the United States. He was doing a 50-state listening tour at one point, and his staff quickly canned that. But it's interesting, him as a figure, because he used to be perceived as this great wunderkind of the digital revolution, and now he's largely, well, I think he's largely despised.
I don't know, maybe I'm out of touch with the population, but he's quite a controversial figure at the very least. That, I think, is a very interesting phenomenon that's taken place over the last six to eight years, and I think that's largely been a good thing. It's allowed us to understand the critique of the market domination of these spaces.
Rob Brooks: Right. Interesting getting to some of the consequences, the downstream consequences, not just for users or the people who are making platforms, but also the other people who are, you know, I don't want to say collateral damage, but who are affected by what happens. I guess, Emily, you have an interest in technologies like OnlyFans, chatbots, the conversational technologies I call virtual friends. They go by various names. We've already spoken about Replika as a chatbot that goes between friend and lover, and it's revealed some very interesting behaviour on the part of users in terms of the authentic nature of relationships, I guess. People suspending disbelief and still falling in love with it, even though it says, look, hey, I'm not really real, or I'm not really anything other than a chatbot. What are the consequences of that kind of thing for the way people relate and build intimacy in the rest of their lives, their non-chatbot part of their lives?
Emily van der Nagel: I think that's a really good question. Whenever we're in a position to be communicating with someone or something that we know is not necessarily a person at the other end of the line, it raises that question. Of course, with chatbots, we've talked about the way that you can engineer a program to give you responses when you type stuff in, and it feels like you're having a conversation. We've also seen a marked rise recently in things like virtual influencers: the idea that you can go to a place like Instagram and follow somebody, get updated on their life, and see pictures of them as they go about their day, knowing that they're not a real person, that they're a virtual influencer, that their face has been created by a computer, that they've been put together by a slick team of marketers. People have an idea that there is a deep sense of unreality there that they are part of. But I just feel like fake or counterfeit people are never completely unreal, because they're based on things that we as humans generate. When Lizzie talks about the ways that huge tech corporations are sucking away all of our data and turning it into vestiges of humanity that they can then package and sell back to us, even at our most critical and suspicious, I think we also recognise that we are still in the presence of some vestige of humanity. If you're talking to a bot, you are, I think, always interested in: how does it know? How is it generating this response to me? If I tell the bot that I feel really depressed, and it tells me that I should go and contact a service, or gives me a phone number to call, who has made the decision that a low mood should equal that kind of intervention, you know?
There's this thing that Tarleton Gillespie says, he's a professor at Cornell and he writes a lot about social media, and he says that we have a real imperative when it comes to artificialities and algorithms that we must try to unpack the warm human decisions that lie behind these cold mechanisms, and I think that is so true when we are interacting with anything or anyone that is virtual. There's a part of us, I believe, that is trying to unpick where the humanity is, you know?
When we put something into ChatGPT and it comes back with all of this text, that text has been leached off other people. When we are talking to a virtual influencer, we are aware that it's been put together by a team of savvy marketers, and we wonder what those marketers think about us, their consumers. And so I think that no matter how virtual a system or a person or a chatbot really is, we come to them looking for the humanity that's underneath, and I think that's a real drive for us. I don't know if Freud or whoever put it in any of those terms, but certainly for me there's never a complete absence of the human whenever we're interacting with digital technologies and supposedly counterfeit people.
Rob Brooks: But Patrick, it's not always sort of, it doesn't necessarily map, it's not always very easy to understand how that stuff emerges. I mean, in many ways machine learning is a kind of 21st century witchcraft or that's what we're led to believe at least, you know? The machine works, we just don't know how.
Patrick Stokes: It's a black box.
Rob Brooks: Is that an issue? Is that a problem?
Patrick Stokes: Yeah, certainly in that we can't see the algorithms that are producing stuff, absolutely. I mean, I think Emily's right that there's a humanity that you can see behind these things and the way they're put together, but what's not there necessarily is a consciousness, right? There's not necessarily another consciousness at the other end of that transaction or that interaction. And that, I think, is really interesting, and this is where I think some of the stuff happening around virtual influencers is really fascinating, where it's like we're prepared to have these interactions with a being knowing that there's no other kind of there there. There's no other consciousness there behind those words or behind the faces that we're being presented with, because we're getting, you know, digitally generated faces and so on.
What's interesting about this, so when I talk about, say, death bots, a lot of people say, oh yeah, but they'll never replace the dead, because you know they're dead, you know it's not real. But as we get more and more familiar with synthetic agents, as we spend more and more of our time talking to these synthetic agents, getting used to them, just seeing them mixed in on Instagram among all the other influencers, or seeing death bots mixed in among all your living friends that you can interact with, that changes. I think we're very, very good at seeing through technology, and I think the real concern is not that we'll start to get tricked into thinking these are real people.
There's a real concern that at some point we just won't particularly care for this particular purpose, because that's already how we use things like Siri or Alexa, right? If I ask Siri to read out driving directions to get to the nearest supermarket or whatever, I'm not thinking there's a person on the other end going, now turn left at Plenty Road or whatever. I'm not thinking that. But it kind of wouldn't matter if there was, just as long as it does the purpose, then that's good enough.
So, the interesting question is whether we'll get so used to synthetic agents that we'll no longer care whether it's a real person or not. For some purposes that probably doesn't matter at all. For other purposes that probably does potentially matter, and it could be that it does debase things like interactions with people who have died or, you know, even potentially some other, you know, very intimate interactions as well.
Rob Brooks: I mean, what proportion of our conversations does it really matter in? You know, I'm actually really delighted that my chatbot remembers my name and something I said yesterday, which is better than I can expect from most conversations that I have.
Laughter
Maybe I just have really bleak low expectations, but please do.
Lizzie O’Shea: Can I just add one anecdote? I tried to fact check this this morning because I saw it on Bluesky, so I'm not sure how true it is. People may have seen this. Someone was posting about an interaction with the CEO of a company called Vera, which is an AI chatbot company, and the CEO apparently told this story with glee. A woman had asked a question about the nearest vet because her elderly dog had diarrhoea, and there was an engagement about what she should do about the dog, and the AI chatbot came back and said, your dog is elderly, you should put it down. And there was a bit of back and forth, apparently. This is how the CEO tells the story. Then the woman went quiet, and 24 hours later she said, I've done it. I'm very grateful for this chatbot supporting me, I mean she didn't say chatbot, but for this support that I've received throughout this period, it's one of the worst times of my life, whatever.
And the lesson that the CEO learned from that is he was thrilled that someone had treated this chatbot like a human, and that it had worked so well, that this woman had felt this real connection with this chatbot. And I guess I think that's the wrong lesson to take from that anecdote, but in part because you do have to question what the black box is optimised for. Like, who has determined what it's optimised for? You may not need to look into the black box to take a kind of cybernetics view. You may just respect that it's a complicated process, but what's the outcome they're going for there? And is it okay that a chatbot can optimise for, say, the cheapest option when it comes to a pet with diarrhoea? Because if anyone's got a pet, they know when something goes wrong, usually the cheapest option is to put them down. But of course, you do not do that, right? Because you care about the pet, right?
And this is where I start to think about some of that not caring about who's on the other end. On one level, I respect that, but then we do have to question what the outcome is of automating these decision-making tools. Who's bearing the consequences? What kind of optimisation do we want, and what don't we want, right? What we don't want is CEOs feeling thrilled that they've fooled someone into euthanising their dog.
Like, it's sort of stunning. But, you know, in all sorts of other contexts, we talk about private companies a lot, I do as well, but actually governments are some of the worst offenders when it comes to this. You know, something like Robodebt, what were we automating for there? We were trying to immiserate people who are on welfare. We want to create a culture where people who use welfare don't have rights, where they are a drain on the system, so we have every right to take back as much as we possibly can, and if people suffer as a result of that, that's not really our concern.
I mean, that is the outcome of that program. And the fact that no one's really been meaningfully held responsible beyond a Royal Commission when we know that's a problem, what kind of signal does that send to other government automation processes for automated decision-making? Like, is that a good enough outcome? Like, yeah, I mean, I just want to bring government into it, because sometimes they are the worst at this. Like, private companies do a lot of things that you would expect private companies to do.
Governments really fall short in all sorts of terrible ways that we need to keep in mind in this process as well.
Rob Brooks: I want to go away from the bodies right now. I want to tell this sort of almost completely irrelevant quote from the philosopher Slavoj Žižek talking about sex robots. And he said, what I'm looking forward to about sex robots is when my sex robot and her sex robot can go off and have sex, and then we can just sit and have a quiet glass of wine and a great conversation.
Laughter
And that's weird.
Lizzie O’Shea: Cut out the middle person
Rob Brooks: But the dating apps, because they're so laborious, I won't ask you to put your hands up if you're on dating apps, because that could get out of control. But the work that people do on dating apps sort of in the early preliminaries of chatting to each other are so laborious that, in fact, now dating apps are advertising that your profile can talk to their profile and sort of come to some kind of an agreement as to whether or not the two of you are going to go on a date, which couldn't go wrong.
Laughter
But we're sliding into the so-called metaverse. Mark Zuckerberg would like to believe we're getting into the metaverse. I don't know how I feel about it. I've been promised for 30 years that virtual reality will do this, and it'll do that, and it never really does. So, I wouldn't set my expectations too high.
So, the metaverse, if you haven't encountered this, as far as I can understand, and I don't know that there is that much to understand, is when we will interact with each other via online avatars. So, our online avatars will be able to socialize with each other, flirt with each other, date each other, and obviously buy shit from one another, probably. How bullish are each of you about the metaverse, and what do you think the effects might be on life, beyond the kind of, you know, body-plus-internet equation?
Patrick Stokes: I don't know. I mean, does anybody use the term anymore? Like, Mark Zuckerberg launched this with huge fanfare, that we're all going to be living in the metaverse, and then everyone was like, they don't have legs. And that was all anyone could talk about. And then within a week, the Icelandic tourism board was doing a thing mocking it, saying, come to Iceland where we have real snow and real water and stuff. And that probably got more attention than the actual metaverse itself did.
But, I mean, the idea of bots talking to each other and whatever else, it could certainly happen. The only thing I would say in general is that with any new technology, the only ironclad law is that you never know all the uses a new technology can be put to until it's actually out there, because the affordances are not clear until they're being used. And people won't use it for whatever we think they're going to use it for. They will find new uses for this stuff.
So, there are probably unthought-of uses for some of that stuff. But the Žižek quote just reminds me of something that an expert on future warfare told me once, which is that warfare is going to get much, much worse because we're going to have automated killing of humans, but then much, much better because it's just going to be machines killing other machines, and then no one will actually die anymore.
Rob Brooks: Perfect. When you get there.
Emily van der Nagel: When it comes to virtual reality, I just never buy this idea that somehow what we need to suspend our disbelief is better graphics, because that just, for me, it does a fundamental disservice to the way that we as humans interact with other kinds of media. You know, I work in a media and communications department at a university.
The things that people really love about the media is a story, a person, a character, a feeling. I don't think that comes in some completely new and more meaningful way because we have a headset that lets us, like, look left and see the same environment as when we look right. We don't need graphics at all as humans to feel something, and I think that if, you know, somebody like Mark Zuckerberg believes that what we really need to feel empathy or a connection or intimacy is, like, floating bodies or legs, then I feel like there's been a real misunderstanding of what it takes to, like, make us feel something.
When I reflect on my own kinds of virtual or internet or social media-mediated interactions, it strikes me that the times when I felt the most were definitely not because I saw something with really great resolution. It was because I was interpreting that there was a connection going on between me and another human being, and a lot of the time, I feel like the interactions that we get that are simply text messages or the intimacy that we can feel out of a group chat where the people in the group chat are the people who mean the most to us. I think that those kinds of intimacies are so important, and to simply believe that they can be better mediated or somehow that the feelings we have can be made stronger because of better graphics and higher image resolution is really misunderstanding what sparks emotions in us as humans.
Like, what turns us on, you know? In all kinds of sexual and intellectual ways, connection there is so important.
Rob Brooks: Cool. That's two against the metaverse.
Lizzie O’Shea: I'm like you, Rob. I feel like I've been hearing people talking about the metaverse launching and taking off every, I don't know, couple of years. It never really seems to happen. One thing I would say is I suppose it did seem to be a big play for Mark Zuckerberg. The company spent billions of dollars on this, and he's managed to convince his investors that it's a good idea, but I think the project is struggling. Sometimes you think these tech companies are invincible, or they always make the right decision.
There was this argument as well when Meta, well Facebook specifically, took a turn to video, and every newsroom around the world started making video because it was optimised for the Facebook algorithm, and I thought that was a disaster as well. Then about a year later, they started sacking all their video journalists, and it occurred to me that we need more strategic thinking in mainstream media. One thing I would say is these companies do make mistakes, and the example I always return to is the Apple car, where they spent 10 years, at a billion dollars a year, trying to build an Apple car that at one point in the design process had no windscreen.
I do think this is potentially another example, a multi-billion-dollar write-off, because Mark Zuckerberg wanted to subsidise everyone getting an Oculus so he could control that space, and he so far has failed. I think we should be careful about assuming that these guys dictating how we engage with each other is necessarily, that they're necessarily correct, because of the reasons that my other two panelists have given, but also that we should necessarily follow that, or that there's not some other way in which we can do things better. That's my take on the metaverse.
Rob Brooks: Thank you. I'm going to get everybody to give me their one prescription. Dennett had his own, which was that although he didn't support the death penalty for anything, the old coin counterfeiters used to have to suffer it, and he thought counterfeiting people was a very severe thing. Obviously, I'm not looking for that kind of stark prescription, but imagine you had a magical and omnipotent AI assistant on a small lamp-like device that you could carry in your pocket, who could turn your wish into their command. What would you wish for in terms of making this stuff work out a lot better for a lot more people?
Lizzie, actually, interestingly, you've often said we need to look back at history and the way that things have been resolved in other spheres in order to understand this, maybe.
Lizzie O’Shea: The premise of my book is these problems aren't novel. We've got historical antecedents for them. These are often questions of power, of politics that we've navigated before, including through resistance and the forms of resistance.
That's the premise of my book, so I do that in different ways. I've already made the pitch, but if I had one thing that I could get an AI to do for me, it would be convince the federal government we need to implement privacy reform, which sounds very technical. I am a lawyer, so everything looks a bit like a nail to me because I'm a massive hammer, but I think it is valid because we need to start with a structural intervention that will give people then the capacity to dictate the terms on which they engage in online life.
That's what I want to see, people having the right to choose how they live their life online without being dictated to by companies and governments.
Emily van der Nagel: Thank you. If I could get AI to do one thing for me, it would be all of the boring stuff in my job so that I could focus on what matters, which is talking to people about ideas and teaching my students.
The stuff that I do in my job as an academic outside of having ideas, talking to people, writing about them and teaching them, yeah, that can go. Automate that.
Patrick Stokes: This is a weird thing to say, but if we're going to have bots, I would wish for the AI genie to make them glitchy. Make them every so often just break down in a way that reminds you that you're dealing with a bot and not an actual person. There's a complicated argument for this. It involves Heidegger, who's got his own problems, being a Nazi.
Laughter
But in general, yeah, calling attention to the fact that what you're experiencing is just a machine. Just through it, having a breakdown every so often or having it say something glitchy, that's what I would want.
Rob Brooks: Which is what people sometimes do, but I get what you mean.
All right. It's been absolutely fantastic, but we are running out of time. I would say, Lizzie, please keep hammering. Emily, get rid of all that other busy work. Pat, make the bots as glitchy as you can possibly persuade people to make them. But I would like you all to join me in a massive thank you to our fantastic panel.
UNSW Centre for Ideas: Thank you for listening. This event is presented by the Festival of Dangerous Ideas and supported by UNSW Sydney. For more information, visit unswcentreforideas.com and don't forget to subscribe wherever you get your podcasts.
Lizzie O'Shea
Lizzie sues companies and governments that do the wrong thing. She has run major cases against technology companies on behalf of thousands of people who have been harmed by them. She is also a founder and the chair of Digital Rights Watch, which advocates for human rights in online spaces. She is the author of Future Histories, which was shortlisted for the Victorian Premier's Literary Award.
Patrick Stokes
Patrick Stokes is Associate Professor of Philosophy at Deakin University, and a writer, radio producer, and media commentator on philosophical matters. He is currently engaged in a three-year Australian Research Council-funded project, ‘Digital Death and Immortality.’ His most recent book is Digital Souls: A Philosophy of Online Death.
Emily van der Nagel
Dr Emily van der Nagel is a Lecturer in Social Media at Monash University. She researches social media identities, platforms, and cultures, with a particular focus on digital intimacies. Her book, Sex and Social Media, co-authored with Katrin Tiidenberg, takes a feminist, sex-positive approach to how social media platforms shape and restrict sex. Emily is currently working on a research project about how Australians use social media to create and subscribe to content on OnlyFans.
Rob Brooks
Rob Brooks is Professor of Evolution at UNSW Sydney and a popular science author. He has spent his career understanding the complexities and conflicts that sex and reproduction bring to the lives of animals, including human animals. His popular writing explores the murky confluence of culture, economics and biology, and how new technologies interact with our evolved minds and bodies. He has won the Queensland Literary Award for Science (for his first book Sex, Genes and Rock ‘n’ Roll), and the Eureka Prize for Science Communication. His articles have been published in Psyche, CNN, The Atlantic, The Sydney Morning Herald, Areo, and many other publications. His latest book Artificial Intimacy: Virtual Friends, Digital Lovers, and Algorithmic Matchmakers considers what happens when new technology collides with our ancient ways of making friends, growing intimate, and falling in love.