
Karen Hao: Empire of AI

We really need to start thinking of companies like OpenAI as new forms of empire – more powerful than most nation states, amassing resources, labour and knowledge the way empires of old once did.

Karen Hao

Programs like ChatGPT have become synonymous with AI, promising to kick-start the next industrial revolution. But the scale of resources needed to support AI is staggering, with the cost largely levied on the marginalised. From energy demands eclipsing whole cities to labour exploitation in the Global South, this behaviour bodes poorly for an equitable future.

In Empire of AI, award-winning investigative journalist Karen Hao unpacks the rise of OpenAI and its race for global dominance – prompting the question: what will it take to rein in this laissez-faire approach to growth? Answers are needed, and UNSW legal expert Mimi Zou is exploring the possibilities surrounding the regulation of AI, along with UNSW neuroscientist Joel Pearson, who is scrutinising the human impact of AI at an individual and societal level.

In this podcast, Laureate Fellow and Scientia Professor Toby Walsh, in conversation with Karen Hao, Mimi Zou and Joel Pearson, discusses what it will take to usher in a sustainable, equitable AI revolution.

Transcript

Toby Walsh: Good evening and welcome to tonight's event, Karen Hao: Empire of AI. I should also say this is the 2025 Wallace Wurth Lecture today. Wallace Wurth was the first Chancellor of UNSW, and we celebrate his contributions, and I'm pleased to say that some members of his family are here joining us tonight.

My name is Toby Walsh. I'm the Chief Scientist at UNSW's AI Institute, and it's pretty obvious I'm not Ange Lavoipierre, ABC’s Tech Correspondent. Unfortunately, Ange had an important family matter that came up, so she can't be here tonight, so I'm standing in for her. I want to begin by acknowledging, of course, the Bidjigal people, who are the traditional owners of these lands upon which we're meeting. I'd like to pay my respects to their elders, both past and present, and extend those respects to other Aboriginal and Torres Strait Islanders who might be with us tonight.

Tonight, we have the pleasure of a conversation with Karen Hao. She's the author of this wonderful new book, Empire of AI, which has taken the US by storm. It went straight onto the New York Times bestseller list. It's got a lot of attention. It's lifted the lid on one of the most important companies, I think, transforming our lives: OpenAI, and everything to do with that AI. I've known, or I feel I've known, Karen for a very long time, although we only just met today. I've known her because, like almost everyone else in AI, I've been reading her and following her wonderful insights ever since she was back at MIT Tech Review. The special thing I think she brings is that she's one of us. Am I allowed to say that?

Karen Hao: That’s very generous. I was just telling Toby earlier that I feel like I've known him for a really long time, because I've been following his work for so long too.

Toby Walsh: But she has a techie background. I'm allowed to say that I'm sure?

Karen Hao: Yes.

Toby Walsh: I think your parents were computer scientists?

Karen Hao: Yes, that's right.

Toby Walsh: So, it runs in her blood, and so what she says, what she writes, really gets to the heart of artificial intelligence and what's going on in our world.

Later, we're going to be joined by Professor Mimi Zou and Professor Joel Pearson for a bit of a panel discussion between the four of us. Mimi works in Law and Justice, though she's previously worked at Exeter, at Oxford and for the World Economic Forum, and she's passionate about regulation and AI. And Joel is a Future Fellow here at UNSW also. He's the director of the Future Minds Lab, and he's passionate about how AI is going to impact us.

So, let's start by talking about artificial intelligence. I have to say, the thing that has surprised me most in the 40 years I've been working in the field is the amount of money, and the speed with which it is unfolding today. I mean, I sit in my ivory tower, and one thinks it must be absolute madness in Silicon Valley.

Karen Hao: It is, yeah. I mean, it's interesting, because I feel like when you're there, you don't realise how crazy things are, and it's only when you step outside that it suddenly dawns on you that nothing you experienced was normal. Like, researchers are now being offered 100-million-dollar compensation packages to go work at Meta. And this is seen as kind of surreal, but maybe also normal within the boundaries of Silicon Valley. The fact –

Toby Walsh: A little bit of me thinks that's about time. You know, it used to be that all the jockeys, the sports stars, got those sorts of sums. Finally, scientists are being recognised for their worth. But anyway, move on, move on.

Karen Hao: But yeah, everything is so surreal, because it's kind of like this weird fun-house mirror effect, where all of the world's extremes are being amplified in Silicon Valley – the amount of riches, the amount of power that people have, the fact that you can tweak just one thing in your AI model and suddenly it affects billions of people's experiences around the world. Everything just feels a little upside down.

Toby Walsh: The power those people have upon us.

Karen Hao: Yeah, exactly. And I don't think we as humans have evolved to really be able to comprehend that kind of power. And interestingly, one of the things that surprised me the most when I was reporting the book is the degree to which sometimes that responsibility weighs so heavily on the individuals building this technology that they start to break apart, and they develop extremely religious beliefs around what they're doing and why, because they need something to hold onto to help them continue feeling sane.

Toby Walsh: Okay, well, we'll come to religion in a moment, but we're sticking with power. And maybe, for the people who'll shortly be buying your book, you'll explain the title of the book, Empire of AI.

Karen Hao: Yeah, so the title is a nod to the argument I make in the book, that we really need to start thinking of companies like OpenAI as new forms of empire. That doesn't just refer to the fact that these companies now have an extraordinary amount of economic and political power, to the point that they are more powerful than most nation states, if not all nation states in the world. You could argue that maybe the US government is still a higher power, except that the US government now has absolutely zero interest in providing any kind of accountability or any kind of check on these companies –

Toby Walsh: The current US government.

Karen Hao: The current US government, yes, the Trump administration. But it's also referring to parallels between how these companies have amassed that power and how empires of old amassed theirs. So, I point to four parallels in the book.

One: empires lay claim to resources that are not their own, and these AI companies have very successfully done that by claiming people's data, claiming people's intellectual property, and just saying, well, it's in the public domain, and it's ours.

Two: empires exploit an extraordinary amount of labour. The AI companies do that in the production of their technologies – I go to Kenya in the book, and to Colombia, to meet workers who are being actively exploited while doing work that is the lifeblood of the AI industry: careful data annotation and content moderation to make these systems actually work. They're also exploiting labour in the sense that these companies are producing labour-automating technologies. They are choosing to design a form of AI that is ultimately eroding a lot of workers' bargaining power around the world.

The third feature: empires monopolise knowledge production. And this is one of the most fascinating things – within the last decade, the AI industry has become so resource-rich, with those 100-million-dollar compensation packages, that most AI researchers in the world now opt to go to these companies instead of staying within academia. And so there's been a distortion in AI research, the same kind of distortion you would imagine if most climate scientists in the world were bankrolled by the fossil fuel industry: we would not get an accurate picture of the current climate crisis, and we are not getting an accurate picture of the actual limitations and challenges of the current dominant AI paradigm.
And the last feature is that empires always engage in this existential arms-race narrative, where there are evil empires in the world, so they have to be the good empire. For OpenAI it's China, and Google – early in OpenAI's days, they painted Google as the evil empire – and they have to be an empire to be strong enough to beat back the evil empire. But they, as the good empire, have a civilising mission. They're bringing progress and modernity to all of humanity, whereas the evil empire will certainly lead to the demise of humanity.

Toby Walsh: I'm feeling a bit bad about working in AI at the moment. But we can't leave the subject of power without talking about probably the central figure in the book, Sam Altman, the CEO of OpenAI, the company at the centre of the book. I mean, he sounds like a very charismatic person?

Karen Hao: An incredibly charismatic person. He is a politician. That's the best way to understand him.

Toby Walsh: I want to ask you a tough question: is history going to look kindly upon him, in the fullness of time?

Karen Hao: I don't think so.

Toby Walsh: You don’t think so. Why not?

Karen Hao: I think we are just starting to see a more substantive shift in public opinion, where people are starting to realise that maybe all of the hype coming out of the Valley about the type of AI they are building doesn't hold up. And to be clear, there are many different types of AI technologies, lots of different varieties of AI research, but Silicon Valley selected one particular type that is particularly resource-consumptive, particularly exploitative, that is labour-automating – all the things I mentioned. And we are just starting to see people realise there are actually cracks in the way that these tech executives have painted what this technology could bring us. And I am quite concerned that not only have we already seen an extraordinary amount of environmental harm and labour harm and social harm from this technology, but we are also going to start seeing economic ripple effects as the AI bubble pops. And once we get to the other side, I don't think people are going to look very fondly on the incredibly persuasive abilities of Altman for leading them down this path.

Toby Walsh: What about the darker side? I mean, the board of OpenAI, when they fired him, said that they had seen a darker, less candid side to him.

Karen Hao: Yeah, so the thing - 

Toby Walsh: Economical with the truth at the very least.

Karen Hao: The thing about Altman is he's a politician, so he's just really good at telling stories that persuade people that he's on their side.

Toby Walsh: And he's very good at persuading people to give him money.

Karen Hao: Yes, he's great at fundraising – persuading people to give him money. He's great at recruiting – persuading people to join his quest. But the problem is that he ends up telling different stories to different people. For some people, this is a natural feature of leadership: leaders always have to make tough decisions and bring together different groups that might be oppositional to one another into a more cohesive unit and move forward. To other people, this means he's a manipulator, a liar, an abuser. And one of the challenges that critics point to is that he does not use his storytelling ability to actually increase cohesion within the organisations he's worked for throughout his career. Over the last decade and a half of his career, there have consistently been allegations that follow him about how he uses his storytelling abilities to create a lot of tension, friction and factions within the organisations he operates, and that leads to instability and a lot of chaos. And the board, specifically, were also operating under this idea that AI could potentially be so profoundly civilisationally transformative, in the positive or negative direction, that having a character like that – someone unreliable, difficult to pin down – was potentially going to be catastrophic.

Toby Walsh: And by the way, he's now back, still as CEO of OpenAI.

Karen Hao: Yes, he is now back after five days of being out.

Toby Walsh: Something else. Maybe you can help me understand this; I've never understood why he hasn't got worse press for Worldcoin. I mean, this is such an icky thing. Maybe explain to the audience what Worldcoin is, and the Orb. I can't understand why more people aren't turned off by the whole idea.

Karen Hao: So, Altman – he's the CEO of OpenAI, but he is also an investor at heart. He has investments all over the map, some of them very large. One of his more prominent investments is in this company called Worldcoin, a cryptocurrency digital-identity startup. The idea is that in the future, once AI makes fake things look so hyper-realistic that you can no longer differentiate between fake and real, we're going to need some kind of fundamentally new identification system that uses biometrics to analyse whether or not someone is three-dimensional and who they say they are, rather than just some digital rendering of that person. And so this company designed this bowling-ball-shaped orb that you're supposed to stare into. It collects data on your iris to do that identification, and then, once it has confirmed your identity, it dispenses you some cryptocurrency – the Worldcoin.

Toby Walsh: I mean, it sounds like the plot of a bad movie.

Karen Hao: Yeah. So, to answer your question of why hasn't he gotten more bad press – I don't know. This is a terrible premise in so many ways. And there have been some investigations. Actually, a former colleague of mine, Eileen Guo, wrote this phenomenal investigation in MIT Tech Review about all of the data privacy violations that have already happened at this company – the basically "scammy" nature of it, where it enters into developing countries first to try and collect all of this biometric data on people in exchange for a promise of dispensing them, like, $20 or something like that, which in a developing country is an extraordinary amount of money. So, people don't ask questions. They just line up and give away their iris data – which is something –

Toby Walsh: You can never get back.

Karen Hao: Yeah, right. And the company has conveyed absolutely nothing about what they do with this iris data. I mean, it's a US-based company, and the US has no federal data privacy law, so they could do anything with that iris data. And, yeah, it's really bonkers.

Toby Walsh: So, what's his next act going to be? Is he going to become a politician? I mean, there's been some discussion about him becoming president.

Karen Hao: So, you know what's so interesting? When I was reporting the book, there had long been rumours about whether Altman had ever intended, at some point, to launch a political career, and I was able to confirm in my reporting that he did actually go through the motions of considering a run for California Governor, and he went so far as putting together focus groups to test his candidacy. And then he didn't test very well, because a lot of the California voters felt that he was too young and inexperienced, and so he let that die. And when he realised that his political career might not have legs, that's when he started aggressively gunning to be the CEO of OpenAI.

Because when he co-founded OpenAI, he actually didn't have an official executive role; he was co-chairman with Elon Musk. And then in 2017-2018, OpenAI, which was founded as a nonprofit, starts realising: wait, the approach we want to take to AGI development is going to take gobs and gobs of cash. We can't keep it a nonprofit. We need some kind of for-profit vehicle to fundraise. And then he starts gunning aggressively to be the CEO of this for-profit arm within OpenAI. And at the time, there are emails that I write about in the book where Ilya Sutskever, the chief scientist, and Greg Brockman, the Chief Technology Officer, were emailing Altman, saying: "We just don't understand what your motives are. Do you want to be a politician? Do you want to be California Governor? Or do you want to be CEO of this company? We don't really get it." But basically, what people close to Altman told me is that after he became CEO of OpenAI, he lost complete interest in any kind of political career, because he realised that he has more power as the CEO of a tech company.

Toby Walsh: Well, what's the saying? If you want to have power, you start a country or you start a religion.

Karen Hao: Yeah, so at the opening of my book there are two epigraphs, and one is from Sam Altman – a blog post he wrote in 2013, two years before he founded OpenAI. It goes: "Successful people create companies. More successful people create countries; the most successful people create religions. It seems to me that the most successful founders do not start off intending to create a company. They intend to create a religion and then realise creating a company is the easiest way to do so."

Toby Walsh: But when you hear him talk these days in an evangelical way, you get the feeling it's almost a religion that he's trying to sell us.

Karen Hao: I mean, I think he was exactly right in his analysis in 2013: people will rally around a belief, and giving people a higher belief system is the best way to mobilise the masses. And I don't think it's a coincidence that he was reflecting on that in 2013, and then created OpenAI in 2015 the way he did, with a really, really strong quasi-religious mission at its core: we are going to build this so-called artificial general intelligence – what could be described as an AI god – and that's going to bring benefit to all of humanity. I think he realised that was going to be the most persuasive story to tell people, and that it would work with many, many different audiences, whether the public or policymakers. And he's largely been right. People have been really captivated by these extremely lofty narratives about what AI is and what it could bring us.

Toby Walsh: And there's two very different narratives. There's boom and doom.

Karen Hao: Yeah. So, in what I like to call the AGI religion, there are two different factions. There are the Boomers, who think that AGI is imminent, that it's going to be civilisationally transformative, and that the transformation is going to be in the positive direction: bringing all of humanity to Utopia. We're going to have endless abundance and global peace, because no one's going to fight wars anymore over scarce resources. And then there are the Doomers, who think AGI is imminent and going to be civilisationally transformative, but that the transformation is going to be in the negative direction.

Toby Walsh: And there seem to be some people, like Ilya, who are both at once.

Karen Hao: Well, yeah, because it's kind of two sides of the same coin. Some people just agree that it's going to be civilisationally transformative, and they're conflicted about whether it's going to be in the positive or the negative direction. And those who think it might be in the negative direction are genuinely concerned and believe that AI could ultimately destroy all of humanity – it could kill everyone on this earth. A lot of these ideas are extreme extrapolations of what actually exists in the scientific literature. There's nothing you can really point to within the scientific literature to say absolutely that one or the other is going to happen. But there are people looking at early signals who get either extremely excited about the utopic scenario or extremely panicked about the possible dystopic scenario, and they work themselves up into this frenzy. And if you are operating on this plane, where you think any decision you make in the development of this technology can send humanity either to heaven or to hell – this is why, as I've said, one of the most surprising things about reporting the book was that this responsibility weighs on people so heavily that some of them start to crack.

Toby Walsh: I wrote an opinion piece calling this "the greatest heist in human history" – how authors', artists' and musicians' intellectual property is just being hoovered up by these models, without consent, without compensation and with callous disregard. I mean, knowing that what they're doing is probably illegal.

Karen Hao: Yeah.

Toby Walsh: Certainly not fair use in any use of the word “fair” that I would consider fair.

Karen Hao: It's interesting, because I think a lot of AI researchers who only exist in the Silicon Valley bubble, and only talk to one another in these echo chambers, don't actually arrive at that conclusion. They do think that, obviously, this copyrighted material, this intellectual property, should just be hoovered up into their systems – there's no other way to see the situation, because they're ultimately on a quest to build an everything machine that's going to bring Utopia to all of us. But absolutely, when you talk with the artists, writers and creators who have had their work taken – I mean, this has been devastating for some of them. I interviewed Karla Ortiz for the book, a Puerto Rican artist who is most famous for doing some of the conceptual art for the Marvel movie Doctor Strange. She's really prominent in that industry – she was doing big-budget films – and she can barely get any work anymore, because her art was taken and trained into these models, and the models have now become effective substitutes for that kind of conceptual work, where you're iterating rapidly on different ways that characters or scenes should appear for Hollywood movies. And she mentioned that her friends who are not nearly as prominent as her have switched careers; they can't actually sustain themselves anymore. These were originally solid middle-class jobs, in which people could earn enough money to buy a house, raise their kids and send them to college. And that's the effect we're seeing – a microcosm of the effect that the AI industry is having at large on the economy right now, which is basically an exacerbation of the inequality that already exists. It's destabilising or gouging out these middle-class jobs, and pushing lower-tier jobs into even worse, lower-paid work.

And it's really only the executives at companies who are able to benefit from this, by creating an AI pilot at their business, laying off a bunch of workers, and then being praised with good ratings on Wall Street. So it's been this kind of chasm opening up, accelerating for various different reasons – not just AI, but AI is certainly jabbing yet another wedge into a trend we've already seen.

Toby Walsh: So, while we're going through the harms, what about the harms in the Global South – the fact that they're outsourcing the labour of labelling harmful content, and that people have to see the terrible things they're labelling?

Karen Hao: Yeah, so I mentioned that I went to Kenya for the book, and the reason was to interview workers that OpenAI contracted in Kenya to create a content moderation filter for what would ultimately become ChatGPT. So OpenAI, as it was transitioning from a nonprofit to a more commercial endeavour, realised that if it was going to place a text-generation machine that can generate any kind of text into the hands of millions of users, it would need to put some kind of guardrails on what the model can actually generate. Because –

Toby Walsh: To save you from having to see these harms,

Karen Hao: Yeah, to save users from being exposed to this kind of stuff. Because at the end of the day, the approach to AI development that they've taken is to scrape the internet and pour it in, and there's a lot of bad stuff on the internet, so you do not want that regurgitated when the model is chatting with users. And so they asked these workers to wade through reams and reams and reams of the worst content you could ever think to find on the internet, as well as AI-generated content, where OpenAI was prompting its models to imagine the worst content on the internet for more diverse examples. And the workers labelled it all into a detailed taxonomy: is this hate speech? Is this harassment? Is this violent content? Is this sexual content? And to what degree of violence or sexual content? Does the sexual content involve abuse? Does that abuse involve children? And these content moderators ended up experiencing the exact same thing that content moderators of the social media era experienced. They started having deep trauma. They became socially withdrawn, highly anxious. I talk about one man, Mophat Okinyi, in particular. It didn't only affect him; it affected his family, because, as he came home every day from his job, he could not explain to his wife or his stepdaughter what work he was actually doing. He couldn't explain to them that he was reading eight hours of sexual content all day. That didn't sound like a real job, and it sounded like a very shameful job. And so they didn't understand why his personality was dramatically changing, from one that was very extroverted, very affectionate, to one that was extremely withdrawn, extremely anxious. And one day, his wife asks for fish for dinner. He goes out to the store, buys three fish – one for him, one for her, one for the stepdaughter – and by the time he comes back home, all their bags are packed and they're gone.

And his wife texts him: "I don't understand the man you've become anymore. We're not coming back."

Toby Walsh: And then there are the environmental harms. What gets me here is that it's just unnecessary. You can build zero-water data centres. You don't have to use people's water supply.

Karen Hao: Yeah, all of the harms we're talking about today are wholly unnecessary for progress in AI. As I mentioned, there are so many different types of AI technologies, such a variety of AI research, and it's specifically the type of AI that Silicon Valley – that OpenAI – popularised that is leading to all of these harms. Even the labour harm I mentioned is a harm of a scaling approach to model development, where you're scraping the internet. Most AI models, up until ChatGPT, were not being trained on the internet. Cancer-detecting AI is not trained on the internet. Nor is protein-folding-prediction AI, nor is AI for integrating more renewables into the grid. But once you train on the whole internet, then you have the labour harms, the content moderation harms.

And then you're also talking about the colossal environmental harms, because now you have to build out an extraordinary number of supercomputers to do the training, and then data centres to serve up these really computationally heavy models. Whereas before, with the modern internet, we already had plenty of data centres, and there was already a lot of data centre expansion, the pace of that expansion roughly matched the pace at which data centres were also improving in their energy efficiency. That's why most developed countries have seen a flatlining, or a drop, in energy demand in the last decade. In the AI era, the pace of data centre expansion is so extraordinary that we are seeing a historic uptick in energy demand globally, and that energy is being served by fossil fuels, because these data centres have to be powered 24/7. So they're being powered by natural gas, by methane gas turbines.
And there was a stat from a McKinsey report that came out earlier this year saying that by 2030, if we go along as planned with all of the data centres being built for this industry, we would need to add somewhere between 40% and 110% of Australia's energy consumption onto the global grid to meet that demand. Then you talk about the fresh water costs. Data centres, most of the time, are cooled with water, because the other option is to cool them with air, and that is even more energy-intensive. When you cool them with water, it has to be fresh water – it can't be any other type of water, because that would lead to corrosion of the equipment or to bacterial growth. So very often these data centres enter into communities and plug into the public drinking water supply. Bloomberg did an investigation earlier this year: two-thirds of the data centres being built for the AI industry are going into places that are already scarce in fresh water. So, there are communities around the world that are literally competing with computers for their drinking water.

I mention one of those communities in my book – I ended up going to both Chile and Uruguay to look at the extraction of fresh water resources in these countries for these data centre build-outs. In Uruguay, they were facing such a historic drought that the Montevideo government was literally mixing salt water into the public drinking water supply. And for people who were too poor to buy bottled water, that was the water they were drinking. There were pregnant women having higher rates of miscarriages. There were people with chronic illnesses having intensified symptoms. And it was literally in the middle of that that Google announced it was going to build a data centre in Montevideo, with plans to take their fresh water resources.

Toby Walsh: Again, to get back to the point that it's unnecessary: you can build closed-cycle data centres, where the water is used in a closed cycle, so they don't actually have to take any extra water. And indeed, Microsoft have now finally said that they'll only build closed-cycle data centres in the future. They say it costs them a little bit more – it takes more energy to run them – but you don't actually have to use the water. The water's just for cooling; you don't have to consume it at all.

Karen Hao: Yeah. Well, one of the problems with closed cycle is that there are still evaporative losses – obviously far fewer than with the way some data centres are run now. But it is one of those things: we are building out this extraordinary infrastructure for a technology where it's still unclear whether it's actually going to give us the promised productivity and economic gains that these companies keep saying it will. In fact, just two weeks ago, MIT produced a study that found that 95% of generative AI pilots in businesses in the US right now are utter failures. They are not leading to productivity gains. They are not leading to economic gains. Only 5% are succeeding, and the 5% that are succeeding, it's because the companies are scoping out a very specific task that lends itself to the computational strengths of AI. And if that's ultimately what's succeeding, we can just build smaller, task-specific AI models – which have already existed for a very long time – to do that, rather than building all of this excess to create an everything machine that's not actually giving utility to anyone, but is consuming fresh water resources and leading to traumatic labour experiences.

Toby Walsh: So, I mean, we talked about power, we talked about empires. It seems to me one of the things that's allowing this to happen is the money. There's just so much money that it's creating these really perverse incentives, tolerating this bad behaviour, because so much money is in play. I mean, billions of dollars a day, trillions of dollars of value being created, it seems.

Karen Hao: Yeah, yeah, yeah. There was this other Bloomberg article recently that said that in the AI era, we've seen more billionaires created than ever before in history - or the rate at which billionaires are being created is faster than ever before in history. So yeah, absolutely. The sheer amount of money being poured in is creating a bit of a self-perpetuating cycle, because investors now really, really need a return on all of that money they've poured in, and so they contribute to continuing to hype up the technology and distort public understanding of what the technology can actually do versus what its limitations are.

Toby Walsh: And presumably it's distorting the people. There's a terribly depressing thing I heard you say in one of your interviews about how even the good people going into these companies -

Karen Hao: They get, yeah, they get sucked into this belief system. So I think there's both a money aspect and an ideological aspect. These employees, of course - maybe not "of course", I should say many of them - are initially attracted to joining these companies because of the monetary upside, because it's expensive to live in the Bay Area. They want to be able to secure financial stability for their family, for their kids, maybe future generations. And you cannot blame people for doing that; if they have the skills to access that kind of wealth, who wouldn't try to join this industry? Then they get sucked into this ecosystem, this echo chamber of all of these people - very smart, generally very kind people as well - their colleagues all telling them that -

Toby Walsh: They're people like me.

Karen Hao: Yeah, I mean, many of them are. They're really kind people, and they do take responsibility seriously, but they're all talking about this idea that AI is going to be the future. We're going to reach AGI. It's going to be civilisationally transformative. And you just get sucked into this world where you do not hear any other perspectives. You're never told about the environmental impacts. You're never meeting those workers in Kenya. Actually, after my book was published, a number of people in the industry who had worked at OpenAI messaged me saying, "I had no idea." You don't get exposed to the downsides. You're just being bathed in all of this positive messaging about what you're doing, and you feel this wonderful sense of mission and purpose. And so then you conform, and you stop looking for evidence of something else that might be happening, because it's too inconvenient - your lifestyle is good, you feel emotionally and mentally and intellectually fulfilled. It's really hard at that point to step out of that and break out of that. But I know some people who have been sucked into it and just completely transform as people, and I also know people who have stepped out of it, then looked behind them and been like, "I don't know what happened there, that was weird, right?"

Toby Walsh: Well, before Mimi and Joel come out in a couple of minutes - I mean, we talked about the problems, we talked about some of the harms, we've talked about power. Let's just end on a slightly more positive note for the audience. Can you say some things about what we can do? This is a technology that is going to be transformative - I think you'll agree. How are we going to make sure that we come out of it well?

Karen Hao: Well, yeah. I think the biggest thing we need to realise - or maybe we need to have a reckoning within the tech industry - is that, generally speaking, technology serves people when you first centre people and the challenges that they face, and then you find creative ways of solving those challenges, and you design technology as that solution. One of the things that's happened within Silicon Valley for a while now, and has now been inherited by the AI industry, is this mentality of just advancing technology for technology's sake. And this is when the equation flips. It feels like a lot of the technologies produced by Silicon Valley over the last few years really are just not serving us. We are serving the tech industry. We're serving Silicon Valley's business interests. And what I would really love to see happen - because I do think that AI can be really transformative, and I only think it can be transformative by first looking at the problems that we have in our world and identifying what we actually want to improve about the human condition, about society. We want clean air, clean water, access to better education, access to better health care. Then we figure out: what are the problems within that that lend themselves to the computational strengths of AI? And then, instead of trying to build an everything machine that now lacks product-market fit, we should be trying to create very specific, targeted solutions that don't have that many costs but have enormous benefit. I think if we can really figure out how to reverse the innovation mentality back to what it once was, then we'll have all of these amazing AI models doing incredible work across many different sectors, many different industries, many different domains, actually benefiting us in those ways.
So, I think that's one piece of it that's helpful for, you know, the more techie people in the room that might be contributing to that industry. But the thing that I also believe is that every single person, regardless of where you sit in society, also has an active role to play in shaping the trajectory of technology development.

I think this is something that is really often missed: most people feel that they don't actually have agency at all, that Silicon Valley can just do whatever it wants, and that's the end of the story. But we're already starting to see remarkable collective action movements building among various different groups and communities to push back against the way that Silicon Valley is developing AI right now. So, we have artists and creators suing these companies over intellectual property rights, trying to use litigation as a governance mechanism to put a check on the tech companies' power. We are seeing hundreds of communities around the world starting to push back on data centre development when it lacks transparency, when it lacks any kind of benefit to the local community. And recently there was a major victory in Tucson, Arizona, where the community blocked - suspended, I should say - a project that was reportedly from Amazon, a hyperscaler trying to enter their community without any transparency. After enormous protests, the city council actually acknowledged that they did not enter into this project in a democratic way, and that they would suspend data centre development to first listen to the public about how they would want data centres to engage with their community, and then figure out a more productive way to potentially have data centres there, or not at all.

So, these are different types of mechanisms that just average people in the world are using to shape the trajectory of AI development. I would also say that every single school environment, every office environment, hospital environment, government agency - if you are working in one of those environments, or if your kids go to school in one of those environments, you also have plenty to say about governing the AI policy of that organisation. Rather than just having some top-down adoption where the answer is either everyone uses AI or no one uses AI, create coalitions with your fellow coworkers, with your fellow parents, and talk about what a more nuanced AI adoption policy would look like, one that actually strengthens the goals of that organisation rather than undermining them.

Toby Walsh: Well, I think that's a great moment to bring Mimi and Joel out and talk about what we can be doing here in Australia.

Karen Hao: That sounds perfect.

Toby Walsh: Well, welcome to the couch. If we could, if we could, turn our gaze now, I think, to Australia and to the positives we can do. Start with you, Mimi.

Mimi Zou: Sure.

Toby Walsh: Regulation. We seem to be on, on and off, on and off. There was a time when I thought we were going to get some regulation here in Australia. Now, I'm not so sure. Certainly, in the US that people are off. Europe seems to have gone down that road. Where should we end up? What's going to happen?

Mimi Zou: So, I think right now we are seeing, globally, among regulators and policymakers, almost a pause in terms of where we may end up. As you rightly point out, the European Union started the process; they have what's known as the Artificial Intelligence Act, which applies within the context of Europe. Australia most recently put forward a proposal for mandatory guardrails around high-risk AI systems, which also seems, at the moment, paused in terms of any movement towards specific AI legislation. What we're seeing, given also the geopolitics of AI - some call it the arms race, and in your book you critique this narrative, this sense that the arms race between the empires is almost inevitable - is that governments around the world are very much bought into the narrative that AI is going to solve our productivity problems, AI is going to bring all of this economic growth. Well, that narrative, which is very much driven by Silicon Valley as well, seems to have dominated concerns over the range of everyday harms. We're not just talking about sci-fi, existentialist risks. We're talking about everyday harms that Karen has beautifully written about in this book, that everyday Australians, and particularly people in the Global South, are experiencing. So it seems to me that right now regulation is not going anywhere, and there's a big body of work among legal scholars, including the book that I've just published, that says, "Well, if regulators are not doing anything right now, we can't wait. We're going to have to apply our existing laws - whether they're human rights and discrimination laws, our labour laws, our environmental laws - to really challenge some of these big tech companies that are causing significant harms in how they are developing and deploying these technologies."

Toby Walsh: Are existing laws up to these new harms?

Mimi Zou: Well, that's a good question. I'm going to read you a very quick quote from Justice Michael Kirby, whom many of you know as a very prominent former High Court judge. He says, "In my 35 years as a judge, I encountered numerous complex problems that tested the boundaries of existing legal rules, yet none quite compares with the far-reaching challenges posed by Generative AI. This technology, with its ability to mimic human intellectual output across a spectrum of activities, presents us with a new frontier in the law." So, he doesn't think existing laws are able to grapple with the unique challenges that Generative AI in particular, and some of the new large language models, pose. In my view, regulation is important, but we can't wait for the regulators, who are being lobbied by these big tech companies. We've got to try our best as lawyers to apply existing laws, even if they're not perfect. As Karen said, all this litigation going on in the US is trying to apply existing laws.

Toby Walsh: I presume, Karen, I mean, the reason that the US is backtracking is just because it's empire building.

Karen Hao: Yeah. I mean, you could argue that the US has long been an empire, but certainly under the Trump administration, the US has now openly declared that it is trying to make the American empire great again, and absolutely the US government sees these Silicon Valley companies as its empire-building assets. That's the reason why they're trying to give them unfettered access to not just resources at home, but resources around the world. And I've always seen this as a very tenuous relationship. People have described a historic alliance right now between Silicon Valley and Washington. I think that's true, and also -

Toby Walsh: But it took another step at the inauguration.

Mimi Zou: Yeah.

Karen Hao: Well, yeah. I mean, it is, it's historic, yeah. I mean the fact that during Trump's inauguration, all of the tech billionaires were lined up,

Toby Walsh: Right behind him.

Karen Hao: Right behind him, to indicate this kind of unity between the two coasts of the US, the two power centres. But it's also a very tenuous alliance, because at the end of the day, these Silicon Valley companies are also pursuing their own empire-building ambitions. There are plenty of these billionaires who actually no longer think that democracy is the correct form of government, and their ultimate end game is to subsume the US government by essentially cooperating with it until they undermine it. Exhibit A: Elon Musk and the dissolution of the US government. So it's been remarkable to me that the US government still views these companies as somehow an asset to national security, because clearly that has not been how they behave.

Toby Walsh: So, Mimi, on a positive note - I'm trying to keep us positive. There are some bits of Australia which I have quite a lot of pride about. We were the second country in the world to have a Google tax. After the tragedy in Christchurch, we introduced new laws to ensure that that sort of hateful content was taken down quicker. We have the first age ban for social media coming into effect here. We can do stuff, and it actually ripples around the world.

Mimi Zou: Yeah, absolutely. I think Australia, actually, our relationship with technology has been quite mixed. And so, I think as a very - our democracy is still very vibrant guys, like I know, when we look over to unfortunately, Karen's country - 

Toby Walsh: Compulsory voting is very good.

Mimi Zou: indeed, and we have compulsory voting over sausage sizzles. I'll explain to you what that means later. So, you can get people out there to the voting -

Toby Walsh: Price of a sausage.

Mimi Zou: That's right.

Toby Walsh: But Australians are cheap.

Mimi Zou: I agree with you. I think we do. And Karen was talking about mobilisation - I think we certainly should be looking to vote in politicians who care about issues around technology regulation. At the moment, there is just a bit of regulatory capture in terms of that narrative around productivity, innovation, the efficiencies that can be gained from accelerating our country's adoption of AI. Some of the good proposals around curtailing AI companies where high-risk systems are being deployed - I think we need to revisit those. We need to revisit the mandatory guardrails, because I don't think they're actually that onerous. And I do think that Australians - yeah, we definitely have this culture where we will take big tech companies on. So, I think this is what we need to see right now at the political level in our country.

Karen Hao: I'm personally really optimistic about Australia, because you guys don't have guns, and that was because regulation stepped in. I mean, look at the US - we weren't able to make that step, and it's analogous to the fact that we now can't make any steps on the tech industry. But Australia really does have the will to actually make democracy work.

Toby Walsh: Let's move off politicians, and let's focus on us. So, let's bring Joel in. And how are we going to look after ourselves?

Joel Pearson: Yeah. So, I mean, how? How long have we got? I mean, I think the elephant in the room -

Toby Walsh: 15 minutes.

Joel Pearson: 15 minutes, there we go. So, I think the elephant in the room with AI - you know, I think it's not about AI going bad or good, it's about the change that's going to hit us. And humans are not good at change, particularly when we don't have control over it.

Toby Walsh: We evolve very slowly.

Joel Pearson: Yeah, and the uncertainty is the big thing at the moment that's crushing people. We know from a ton of neuroscience that uncertainty induces anxiety in basically everyone - in all animals - and we're seeing not only all kinds of geopolitical uncertainty around the world, but uncertainty from AI about jobs -

Toby Walsh: And AI is amplifying the uncertainty - the algorithms are encouraging us to see content that makes us more anxious.

Joel Pearson: Yeah, and there's this uncertainty around education, the future of jobs. You have these predictions that we're going to lose 5% to 60% of jobs. What does Sydney look like with half the jobs gone? We should start thinking about that. So that's the milieu we're sitting in, with this uncertainty cranking up -

Toby Walsh: But when you say that - "half the jobs gone?" - I mean, you're going to be anxious.

Joel Pearson: Yeah, so I don't want to make people anxious, but I think we should think about this human -

Toby Walsh: Don't say, "half the jobs gone!"

Mimi Zou: So, we might not be here.

Joel Pearson: I'm not in the good or bad camp. I'm in the camp that there is a utopia waiting for us, but I think the road to get there is going to be fairly bumpy, and we need to pay attention to the human - to humans, to our psychology, to the neuroscience around that, rather than tech, tech, tech - focusing on how we adapt. So, change management is needed. Corporations use change management whenever they go through a big change. Why can't we think about AI-specific change management at home, in companies, or for a whole nation? That's the big-picture stuff. And then we can deep dive into things like the "continued influence effect" from deepfakes. We know that when you read or hear or see a deepfake, it has an influence on you. It can change what you think about the person -

Toby Walsh: You can't unsee it.

Joel Pearson: You can't unsee it. It changes your mental model of the thing you've just seen. So even if you're told it's a fake, it still changes what you think about the person or the company, whatever it is. So, it's not really just a tech problem. There's a whole psychological dimension there. Then you have human-AI relationships -

Toby Walsh: Before we move off deepfakes - shouldn't we also be regulating against deepfakes?

Joel Pearson: Yes.

Toby Walsh: I mean, because as far as I can see, it's going to corrupt our politics.

Joel Pearson: I think it already is, to some degree.

Toby Walsh: It's already corrupting our politics.

Joel Pearson: Yeah. And pretty clearly the next frontier of social media is human-AI relationships. We're seeing a boom in young people having close friendships, loving relationships, sexual relationships with AIs. And we're not even into the humanoid robotics revolution yet, which is about to hit, probably next year sometime. So that's really uncharted territory, and it's not even clear what legislation should apply to it. Because, yeah, it is exploding at the moment.

Toby Walsh: So a bit of me just jumps up and says the precautionary principle. If we're going to change human relationships so profoundly, should we not be a bit more careful unleashing this upon ourselves?

Joel Pearson: Absolutely.

Toby Walsh: And do we not learn anything from social media?

Joel Pearson: And that's the thing. We don't want to repeat the mistakes of social media over the last decade. There have been tremendous mental health challenges in young people, particularly young females. And so, we are likely moving to a new version of that, one that will hit young people first and then older people, with these AI-human relationships. And it's not even clear how the way I treat an AI is going to influence how I treat the people around me. It will have an influence, and it already has. So, things like that.

Toby Walsh: You should always say please to your chatbot. It’s good practice.

Karen Hao: It uses more water. Sam Altman even told us that - saying please and thank you jumps up the energy bill at OpenAI.

Mimi Zou: Yeah, that’s true.

Joel Pearson: So, there's a lot to think about, and I guess where I'm at - you know, I trained as a neuroscientist and psychological scientist, and we would tend to get government grants and do research, and we'd have plenty of time to do it. We don't have that time right now. Things are happening so quickly. So I use this phrase now, "neurofuturism", where we take what we know about humans, behaviour and the brain, and we apply that to where we think AI is going, and where it is right now, to try and understand the impacts. And I've been trying to spread the word about this - not that tech companies are really listening - to try and predict what's going to happen, because we don't have the traditional luxury of doing the research first.

Toby Walsh: Why are they not listening? Because it's more addictive not to be good.

Joel Pearson: I think they just want to move. I mean, they want to build empires. They want to build trillion-dollar personal bank accounts. There are probably a number of other reasons they're not listening, too. They throw a few coins here and there to say they're going to fund some research into this kind of human impact. But I think the race is on. They're just moving as fast as they can, and if they break things on the way - humans - I think they're going to do that.

Toby Walsh: Move fast and break things, and humans can be broken.

Joel Pearson: Yeah.

Toby Walsh: So, what do we need to do? I mean, so we don't repeat the mistakes of social media.

Joel Pearson: So, I think we need to be careful, whether it's deepfakes or whether we can be easily triggered online. We need to pay attention. We need to work on our emotional intelligence, our awareness of the states we're in when we're online. Most of the things you'll see on social platforms now are generated by AI, and they're generated in a particular way to trigger you, to keep you on these platforms so the platforms can make more money. So, we need to be careful about that. We need to be aware of the way that uncertainty and change are affecting us. We need to understand the psychological tools we can use to reduce that stress, that anxiety, to bring us out of that fight-or-flight mode. There are things you can do in the moment, like box breathing - simple breathing exercises. There are lifestyle things you can do less acutely over the week: sleeping well, eating well, being with family and friends, and really paying attention to the mental health side of things. So, I think in the next decade or two, these things are going to become more important than ever before.

Toby Walsh: Can I be really rude and ask you a personal question?

Joel Pearson: Go for it.

Toby Walsh: Have you changed your own online behaviours? What do you do to -

Joel Pearson: Yeah, I don't believe things I see online anymore. If I'm scrolling through one of the social platforms, you know, I used to see this stuff and go, "oh my gosh, I can't believe that really happened." And now I'm like, "what? I'm sure it didn't happen."

Karen Hao: I don't know if that's any better, because what if it did?

Joel Pearson: And I also look for verification, yeah, and I try not to be outraged. I try to limit the time I spend on any social platform. I try to send things out, not necessarily take things in. If I am going to sit there and scroll for a while, I treat it like junk food or something, where I understand what it is. I'm going to do it for a limited amount of time. I'm not going to just lose two hours and have my memory wiped for two hours. So, I have tried to change.

Toby Walsh: It's wonderful. You finish those two hours and nothing's left.

Joel Pearson: But that's the thing, right? You can sit there for an hour and try to remember all these short, very emotive clips you've seen, and you can maybe remember one or two. Whereas you watch a feature film, and you remember the whole thing in rich depth.

Toby Walsh: Or you read a book!

Mimi Zou: Yeah, yeah! And this is Gen AI free, by the way. Yeah, it actually says “certified, 100% human”, another reason why you should buy her book.

Toby Walsh: Can you, can you see a counterculture, a pushback that's going to happen? The analogue is going to become way more important.

Joel Pearson: I think it is already. A lot of people go through a stage - or you can stay in the stage - of wanting to go off grid, get a hut somewhere in the bush or the forest, and decouple from all the AI stuff.

Toby Walsh: And that's something deep and neurological, right? It's not just,

Joel Pearson: Yeah, there are different reasons for that. Some of it's driven by anxiety. The problem is that if you want to do things in the world and have impact, you kind of need to use AI now. It's like a utility, like the internet or electricity. So most people who are doing things in their work can't unplug. Maybe you have the luxury of doing it for a certain period - a month a year, that kind of thing. But it's getting increasingly hard to unplug and live off grid.

Karen Hao: I would nuance that a little bit. Certainly everything you encounter has AI imbued within it now, but I don't mean Generative AI - your email has predictive AI imbued in it, for example. But I think it's totally possible to do your job without Gen AI, which I do. I don't use any Gen AI. My husband does the same, and he's the most productive person at his company. So I do think we're moving towards a world where Gen AI gives you cheap ideas and cheap content, and it is very commoditised now. So actually, in order to stand out, you will need to lean on your own brain. And we are seeing more and more aversion to Generative AI outputs - even when people can't really detect that something is Gen AI, it just feels like it all comes from the same statistical sameness, and so the things that aren't coming from that start to feel more standout, more unique. That's part of the reason why I don't use any Generative AI in my work: the entire basis of my work is a unique perspective, unique reporting, things that are very individual to me, yeah.

Toby Walsh: And that's why, that's why we read you, because it’s your -

Joel Pearson: Karen, do you think that will last though?

Toby Walsh: Absolutely, yeah.

Joel Pearson: Generally, and for you specifically - as things get more competitive and more people are using it in ways that amplify that personal touch, that human element, rather than replacing it. Do you think the way you're just talking about it will last? Has it got a year left, or two years? Or?

Karen Hao: No, I don't think it's ever going to go away. And the reason is because, in order to get the -

Toby Walsh: What about when they have the Karen-bots?

Karen Hao: Yeah, well,

Toby Walsh: That sounds perfectly like you.

Karen Hao: So that's the thing: in order to get the personal touch, you actually have to spend a lot of time training and fine-tuning the tool, prompting and re-prompting, prompting and re-prompting, to get it to -

Toby Walsh: I’ve got 20 years of your writing,

Karen Hao: Yeah, well in my case. But I mean, just like the average person, like, there was this really funny social media post I saw the other day,

Toby Walsh: And when you're dead, I'll still have you.

Karen Hao: Well, there was this really funny social media post the other day that was like, "I'm sitting in this Gen AI training, and really all the instructor has said the whole time is just prompt and re-prompt it until it sounds like you," and the person was like, "I could have just written the email faster than that." And that's the problem: the amount of effort it actually takes to get something that's really unique and feels handcrafted is the same amount of time, if not more, as just making the handcrafted thing.

Joel Pearson: At the moment, at the moment.

Karen Hao: No, no, I think indefinitely, because at the end of the day, that is the differentiator. If you just prompt it once and take the output, that's what everyone else is doing as well. And I'll give you an example. My publicist in the UK was on the committee reviewing intern applications for a publishing company, and she got an AI-generated sheet of responses to their questions as a reference for checking whether any of the interns might have used Generative AI to write their application. 100% of the applications had used Generative AI, because every single one of them was exactly the same as that reference sheet, and they tossed out the whole pile, because they were like, "how are we going to differentiate between all of these people?" So that's what I mean: the average person is not going to go through that handcrafted, bespoke effort to probe the tool. And if you are doing that, then at that point you're not actually leaning heavily on Generative AI to be more competitive. You're leaning on your own brain to be more competitive. You're just using Gen AI as a tool, and you're using your own intellect to differentiate yourself.

Mimi Zou: Well, let's think about how many of us teach courses where traditionally essays are handed in, right? And how many essays all start sounding the same? For me, the concern is - I teach law students - how do we actually teach the next generation of lawyers how to draft legal writing, when the current tools are not really good and still hallucinate as well? So I do think it's about going back to basics, like closed-book exams. There needs to be some engagement with the skills that universities have traditionally been teaching, which is learning how to learn, critical thinking. So I agree with you. I actually think that Generative AI, and its widespread use now in universities, is undermining some of these core skills that we want our graduates to have - to think for yourself, not let AI do it for you.

Toby Walsh: So, the good news is universities are not going away.

Karen Hao: That's true.

Toby Walsh: We'll still have to teach those -

Karen Hao: So, 50% of the other jobs, but not in the university.

Mimi Zou: Fingers crossed.

Toby Walsh: What you say, I think, is perhaps true of writing and so on, but it's not true of computer code. Half of the computer code submitted to GitHub today is computer generated, because you're not looking for style or personal flair. You're looking for something that works.

Karen Hao:  Right. But at the end of the day, the best engineers are still the ones that are figuring out the system level design to be most efficient, and then they're generating -

Toby Walsh: They’re twice as effective with generative coding.

Karen Hao: Yeah, exactly. Yeah. So, I think that's the thing: these Generative AI tools can automate away certain things that are highly commoditized, like certain functions in computer code. You just write the same function every time; you do not need to find some new creative way of expressing the same exact function that has been written millions of times by millions of engineers over the last 10 years.

Toby Walsh: You actually want it to be the same function. Because different is wrong.

Karen Hao: Yeah. So, I guess another example is like I was talking with this filmographer, who was saying that, indeed, there is competitive pressure to use Gen AI, but not in his art - specifically, just to generate an estimated budget for filming. Because when he gets a request from a client, he knows that that client has probably queried multiple filmographers, and the faster he can get them the budget, the higher chance he gets booked. And so, he'll just generate a budget. Like, he'll say, give me a three-day breakdown of blah, and then it spits it out, and then he sends it immediately to the client. It doesn't matter if it's exactly right, just generally approximate. And then once he gets the job, then everything else after that: Gen AI free. So, yeah, I will give you that there are certain competing pressures that are leading some people, in certain aspects of their profession, to lean more on the speed aspect; especially when the task does not require high accuracy, they do see more gains. But when it comes to actually differentiating yourself in the workplace with other types of skills - being a leader, being a change maker in that kind of way, or being a creative - then it really comes down to whether or not you have something unique to offer. And if you're just using Generative AI, it's not going to be a unique offering.

Mimi Zou: Yeah, agree.

Toby Walsh: Okay, well, we're going to open up to questions now. Don't forget, you can put them on Slido. I've got the questions here, and we also have some microphones either side, and I believe, also up in the gallery there, so please line up in front of the microphones, and while we wait for people to line up, let me just read you the first question, and probably Joel wants to answer this, but I'm happy if anyone answers this question. We've got the social media ban coming into play in December. What should we do as a society - parents, teachers, government - to protect and prepare young people, not for social media, but for AI?

Joel Pearson: AI companions?

Toby Walsh: No, no for AI full stop.

Joel Pearson: Prepare people for AI?

Toby Walsh: Yeah. Well, AI companions is -  yes, yes, young people. I mean young people.

Joel Pearson: Well, I think we should think about it, like I said before, through a change management kind of lens, and we've developed AI-specific change management models, which are very simple and very flexible and scalable, whether it's your family at home and you want to introduce AI into your family in some way, or in the workplace, or a whole nation. So I think we should think about the human side of this revolution we're entering into at scale as a nation, whether it's re-skilling, or how we're going to augment and bring AI into the workplace, but think of it as a national change management problem or challenge. We've got a lot of research on this. We know that when a company goes through change, if you have management around that change, they're 600% more likely to be successful through that change. And so why not apply similar principles - socialise the information, educate people? There's a number of steps we can very simply go through.

Toby Walsh: It would look like an information campaign.

Joel Pearson: Yeah, it would start with advertising, getting people on the same page. What is AI at the moment? What is it not? And -

Toby Walsh: Literacy

Joel Pearson: Literacy. It would start there, and then it would integrate. Then people would socialise it, try it out, and commit to some sort of roadmap of using it in some way. And resistance to change is the big thing when it comes to change management: people resist it, they don't want to go through that. So, if I was in government, I would start with something like that, separate to all the sovereign AI, the AI centres and all that.

Toby Walsh: Again, I don’t want to be mean, but what are you going to do at home?

Joel Pearson: Me personally?

Toby Walsh: Yeah, you've got two kids?

Joel Pearson: Yeah, I’ve got two young kids. They have not, they don't know what AI is. They have not seen it. We don't use many screens at all. They just watch some animal documentaries occasionally. They don't, they don't look at YouTube.

Toby Walsh: They’ll soon be AI generated animals.

Joel Pearson: That's true. And I mean, the education system is going to have to change, and is changing a lot. And I can talk for an hour about, you know, this manifesto of the future of work, and what the future of education, schools and universities will have to become over the next two decades. And I think there are certain things we have to solve for in terms of purpose. And we could talk about universal basic income and all the stuff from the psychology around that and what people get wrong. And it's not really an economic thing. It needs to be a human thing. It needs to solve for purpose and meaning and competition and the things that make us really human. Without these things, we get sick, we have mental health challenges and physical health challenges. So, surprise, surprise, my answer is to focus on people and help people through these next two decades, before we get to wherever we're going at the end of that unknown 10, 15, 20 year period where hopefully there is this Utopia, but I think there are going to be some bumps along the way.

Toby Walsh: Okay, we'll start taking questions. Number one, I think it's a lady, I can't see for the lights. Number one.

Speaker 1: I just wanted to ask Karen about OpenAI's possible miscalculation with GPT-5, and I was wondering if you could just share any thoughts about that.

Karen Hao: I'm curious what you're describing as the miscalculation?

Toby Walsh: Underwhelming?

Speaker 1: The alienation of what seems like a significant part of their user base.

Toby Walsh: I hear people wanted to roll back to -

Karen Hao: That's interesting. You mean in terms of, like, the personality changes?

Speaker 1: Yes

Karen Hao: Oh, okay, yeah, this is a really interesting topic. So when OpenAI had GPT-4o - I don't know if people remember - there was a huge backlash around the sycophancy of GPT-4o. They made an update at one point, and it was so sycophantic that it became the laughing stock of the internet for a couple of days, where people were confessing, sometimes, like deep delusions, to 4o, and 4o would be like, “You go, girl, you're so brave for saying that.” And clearly it led to some actually really dangerous outcomes. And so, they then did a lot of internal reflection, basically, on how that happened. They added a lot more metrics to measure for sycophancy, and then they tried to dial it down. So, with GPT-5, they basically tried to dial it down as much as possible, because there have also been research studies since then about the impact that this can have on people. It does reinforce delusions in a really dangerous way. We've seen the phenomenon of AI psychosis starting to happen. It also leads to other mental health challenges, where it makes the chatbot addictive to the point of replacing people's real world relationships, which is deeply unhealthy. And basically, after they changed the personality, it turned out that most of the user base was already so accustomed to that kind of engaging, positive reinforcement behaviour from 4o that there was yet another backlash against them for dialling it down too far. And I think, from my perspective, the most responsible thing that OpenAI should have done is to hold their ground. Because I've been speaking with a lot of people who have had really deep mental health challenges because of the sycophantic nature, and I've been reading some of their transcripts, and you can see how even just a little bit of that sycophantic element can snowball very rapidly and pull people into a darker and darker rabbit hole over time.
But of course, OpenAI, at the end of the day, is a company, it's a business, it wants to be engaging. It wants to retain its user base, and so they immediately flipped back to being more engaging.

Toby Walsh: Joel, OpenAI is just opening their Australian office.

Joel Pearson: Yeah, they are at the moment.

Toby Walsh: They offer you a job as their chatbot psychologist. What personality do you advise them to give their chatbot?

Joel Pearson: I would advise them, yeah, to lean into the research on this and hire as many psychologists and neuroscientists as possible.

Toby Walsh: Well, they've just hired you so – I hear the pay’s quite good.

Joel Pearson: Look, I'm sure if I did work there, there would be tremendous pressure to improve the, increase the usage numbers and time and hours, and that pressure would be heavy.

Toby Walsh: Easy to measure and feedback on.

Joel Pearson: Yeah, but I think, yeah, I'd do rapid research and design these things, especially in voice mode, in a way that is not addictive, and is not just trying to keep people on the system for as long as possible. We've seen where that leads with social platforms: if you give the system the reward of keeping people engaged for as long as possible, by whatever means, that goes to a very dark place.

Toby Walsh: But Mimi, that seems to be, that's the capitalist imperative, and the only way to stop that is to regulate.

Mimi Zou: Yeah. Well, I mean, it's interesting, because when I was reflecting on the theme of your book, Karen, laws can also basically allow and legitimise empires, right? And so, at the moment, the current laws around intellectual property, for example, in the US, have allowed US AI companies to essentially extract data from around the world and say, “Oh, this is like fair use or fair dealing.” And US courts have not been as proactive in saying, “actually, there could be something wrong with all of this. It's intellectual property theft.” So, I think laws need to be carefully designed, because otherwise they can basically reinforce the power structures that these empires depend on. And so, in the context of social media, with this ban on under sixteens in Australia, everyone's telling me, “Oh, it can't be enforced”, and that it's just blaming social media platforms when, you know, parents should be controlling their kids or whatever. But I do think that the onus is now on the big tech companies to do something, and so regulation, carefully designed, should be able to reinforce the kind of values that countries like Australia, in particular, would want to see in AI adoption. I'm not anti AI. I'm not going to go crawl into a cave tomorrow and shut off, but I just think that the sort of AI Karen talks about in your book - the trajectory that Silicon Valley has created - has impacted the world in ways that are more harmful than beneficial. That's why we do need to regulate. Existing laws are not going to do it.
We do need to see more transparency and accountability from these companies. They're not going to disclose their algorithms - “oh, sorry, commercial secret” - so we don't know what these algorithms are doing to our children, to all of us, and that's where AI laws and regulations do compel companies to do at least basic disclosure, which they're resisting. So, yeah, it's pretty gloomy at the moment.

Toby Walsh: Let's go to question number two.

Speaker 2: There is actually a grassroots movement happening at the moment that models what has happened over the past 50 years. Probably, if you're old enough, you remember there used to be big mainframe computers, and then they transitioned to PCs. What this movement is doing right now is cutting AI off at the source, and the source is our data. So what this grassroots movement does is transfer the data from - . The main problem now is that everything essentially concentrates right in Silicon Valley. What this does - it is called teamcomputer.com - is it actually allows people to own their own data, and then the AI can work on it. If you look at what government has done: DeepSeek came out, and the first thing they do is regulate and stifle innovation. They said, look, all Australian public servants are not allowed to use DeepSeek. They didn't realise that DeepSeek is actually an open source technology. The government could easily take DeepSeek and put it in a data centre within Australia and benefit public servants that way. So, what this movement does is take those same open source models - that's where the costs are, that's what all the expensive GPUs are for - and use our personal data to train them.

Toby Walsh: So, let's throw to the panel now. So, do we see much promise in an open source model of AI for the people?

Karen Hao: Yeah. So, I do think that if we go back to sort of the features of empire - one of them being the monopolisation of knowledge production - open source is a perfect way to start breaking up that monopoly. One of the reasons why we have a very limited understanding of the limitations of these models is because no one has access to the models other than the companies, who have every incentive to paint the models as flawless. But open source does allow for some replication of how these models behave, and generally enables some level of research.

Toby Walsh: I want to give the audience positive things to take home. So like Hugging Face, right?

Karen Hao: So, yeah, I'm a huge fan of Hugging Face.

Toby Walsh: For people who don’t know, Hugging Face is the open source community for AI.

Karen Hao: Yeah, it's this platform where anyone that has a model to open source can publish it onto the platform, and anyone can download, modify, access, build upon, and do research on those models.

Toby Walsh: And as soon as a state of the art model comes out a few months later, six months later, you get something open source.

Karen Hao: Exactly. You get something open source that has pretty much replicated that closed source model. And so I think it's such an important thing to invest in. And I think another element of what you're saying is basically maintaining more sovereignty over your data. I think that's also a really, really important thing: when you use open source models, your data is not going to the company's server to then be used to retrain their models. It is staying on your device. You still have total control over it, and then the balance of power totally shifts. You're not ceding more and more power and more and more agency to the companies every time you use their tool.

Toby Walsh: Okay, the final question for number one.

Speaker 3: Thank you. Thank you so much. It's been a wonderful talk tonight. A bit of kudos for Karen, and then a question for everyone on the panel. So first, the kudos is, you know, obviously, I'm sure people thank you for the book, but the tour is amazing, right, to be a catalyst to provoke this conversation. So, thanks. Thank you for being a warrior and being on tour for so long and being here. We appreciate that.

Karen Hao: Thank you so much.

Speaker 3: It’s an important role. And I guess with that power that you have in everyone, including you Toby, with the mic in your hand, let's speak - you know, Australia has an opportunity. They let me into this country. 15 years ago, I came from another country, but I think this country has an amazing opportunity to show a middle way, to show a new way and a new leadership. And if you could all give your wish list for what Australia could do to be a superpower with alternative AI for good, I'd love to hear that. Thank you.

Toby Walsh: Well, I'll start while you think about your answers. So, when I get in the room with a politician, I always say to them, “you know, we've already got the slogan - it's the Australian way: a fair go. AI that's a fair go.”

Karen Hao: I think: invest in more public AI research, not corporate AI.

Toby Walsh: I agree with that.

Karen Hao: Yes, we need, again, more understanding of these models outside of the context of these corporations, who are highly incentivised to paint the models in a certain way. We also need development of different types of AI approaches, more ideas in the space. We need more investment in AI entrepreneurship, where different entrepreneurs can create alternatives to existing AI tools. Like, whenever I give these talks, people then ask, “well, is there another tool that I can use instead of ChatGPT?”, and unfortunately, there are not that many options that you can point to right now. But if we can point to those, more consumers can vote with their feet. We need more certification bodies that certify AI companies on things like human rights and environmental abuses, because right now there is a huge onus on the user to do all that investigation themselves. Just in the same way that the fashion industry has done this, and the coffee industry and the other food industries, where there are certification bodies that will do that analysis on the supply chain to understand whether or not it is being developed appropriately. And then increasing transparency across all of the aspects of what tech companies do: if they're going to build a data centre, they have to tell people where it is going to go, how much energy it is going to use, how much fresh water it is going to use, and how that might change over time - and then what are the limitations on its ability to change? Those are all just basics: creating more public knowledge and more public options and alternatives.

Joel Pearson: I would just say quickly that, yeah, I think Australia could be a world leader in AI readiness and AI integration. So, the human side, the workforce side, and not just getting ready. But how do you, you know, it's not just bringing the tech into a company. It's often the whole organisation of a company has to change. The way people work, the way they appear at work. All these things have to change. So, I think both, you know, there's a massive opportunity there for Australia to be a world leader in this human sort of side of things, getting AI ready, understanding and building these change models, and we can export that. I think economically, there's huge opportunity there as well. So, I'll just leave that with all of you.

Toby Walsh: Mimi you get the last word.

Mimi Zou: I agree with Toby. I think, you know, there's something about being Australian in terms of our values, and so much of where we need to go with AI is to reinforce those values of a fair go, human rights, democracy, the rule of law, so we don't have to follow the US or China, okay? Because obviously those two countries are, at the moment, and probably forever, going to be the two superpowers in terms of technological prowess, but we can align ourselves with other countries that are like minded. And I do think of the European Union - maybe people say they over regulate, but they share the same values as us. Many, many countries in our region also share the same values as us, and we need to be strategic in terms of not relying on the two superpowers for our own tech stack, and also for our own approach to governance and responsible AI. So we can certainly push for a third way.

Toby Walsh: Thank you.

Well, I'm afraid we've run out of time, but now I have to give you your homework. So before we finish up, I want to take a moment to reflect on what tonight's conversation has offered us, and to think about what you are going to do with it. What role do you want AI to play in your life, in your future? And something I've taken away very clearly from Karen here is that we have agency. We can come together as communities - at work, at school, at home, in our social spaces - and we can take collective action. We still have that power. So, if you wish to see better policies, better systems and outcomes, you can take action. And we also have, you know, individual responsibility, individual power. It's in our own power to set the boundaries. We can decide where to let AI into our lives, and where not to let AI into our lives, so that we feel comfortable with whatever the future is going to be. And then finally, let's not forget that AI is just a tool, and we should treat it as such. Despite all of the magic and deity talk around AGI tonight, it still is something that allows us to do more, maybe, than we can do on our own. But let's learn from things like smartphones: when we let them into our lives, there was perhaps a limit to how far we should have let them in, and we should use them to empower our decision making. These are still our choices. So I want you, as I said, to take that home as your homework and think about what you can now do with it.

So, thank you for joining us tonight at Karen Hao: Empire of AI. I'd like to thank Karen for a fantastic conversation, and for coming all this way to join us.

Karen Hao: And thank you to everyone for being here tonight.

Toby Walsh: I'd like to thank Joel and Mimi for joining us in the conversation and sharing so many insights. I want to thank you for coming out tonight. Karen will be signing copies of her book. I encourage you to go to get copies of her book.

And with that, thank you, and good night.

Karen Hao: And thank you to Toby for doing this so last minute.

Mimi Zou: Yeah, you saved the day.

UNSW Centre for Ideas: Thanks for listening. For more information, visit unswcentreforideas.com and don't forget to subscribe wherever you get your podcasts.

Speakers

Karen Hao

Called “one of the foremost tech journalists covering AI” by Dr Joy Buolamwini, Karen Hao writes for publications like The Atlantic  and leads the Pulitzer Center’s AI Spotlight Series, which trains journalists around the world on how to cover artificial intelligence. 

In Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, Karen, the first journalist to ever profile OpenAI, tells the behind-the-scenes story of how a cadre of the most powerful companies in human history is reshaping the world in its image. “Excellent and deeply reported” (The New York Times), Empire of AI  is a page-turning thriller, an “essential work of public education” (Zuboff), and a revelatory portrait of the people controlling this technology. It is the jaw-dropping story of ambition and ego, hype and speculation, plunder and destruction, politics and labour, and, of course, money and power—a brilliant and deeply necessary look at the industry defining our era, and what the future holds. 

Karen was formerly a reporter for The Wall Street Journal, covering American and Chinese tech companies, and a senior editor for AI at MIT Technology Review. Her work has been cited by Congress, featured in university curriculums, and remade into museum exhibits. She has won numerous accolades, including an American Humanist Media Award and a National Magazine Award for Journalists Under 30. Karen also sits on the AI advisory board of the Craig Newmark Graduate School of Journalism at CUNY. Prior to journalism, she was an application engineer at the first startup to spin out of Google, and she received a B.S. in mechanical engineering and minor in energy studies from MIT. 


Joel Pearson

Joel Pearson is a neuroscientist, neuro futurist, author and Director of the Future Minds Lab, and an ARC Future Fellow at UNSW Sydney. He initially studied art and filmmaking at the College of Fine Arts before turning to the scientific mysteries of human consciousness and the complexities of the brain. He is a prolific public speaker, writer, and world expert in intuition, the psychology of AI, mental imagery, and aphantasia. His debut book, The Intuition Toolkit: The New Science of Knowing What, Without Knowing Why, came out in 2024. His pioneering research has changed our understanding of intuition, the human imagination, aphantasia and the psychological impact of AI on humanity. He is now helping companies prepare for the disruptive change of the AI revolution. 


Toby Walsh

Toby Walsh is Chief Scientist of UNSW AI, UNSW Sydney's new AI Institute. He is a strong advocate for limits to ensure AI is used to improve our lives, having spoken at the UN and to heads of state, parliamentary bodies, company boards and many others on this topic. This advocacy has led to him being ‘banned indefinitely’ from Russia. He is a Fellow of the Australian Academy of Science and was named on the international ‘Who's Who in AI’ list of influencers. He has written four books on AI for a general audience, the most recent being Faking It! Artificial Intelligence in A Human World. 


Mimi Zou

Professor Mimi Zou is internationally renowned for her work in the fields of artificial intelligence (AI) governance and financial, regulatory and legal technology. Her latest books include the Cambridge Handbook of Generative AI and Law and the Artificial Intelligence Act: Article-by-Article Commentary. She received a Global Australian Award in 2024 for her significant research and policy contributions in this field, including as a former expert advising the G7, World Economic Forum and the UK Government's responsible technology adoption body. 

Prior to joining UNSW Sydney, Professor Zou held various senior academic positions at top institutions worldwide, including the chair in commercial law at the University of Exeter and the first fellowship in Chinese law at the University of Oxford, where she also founded Oxford's first lawtech innovation lab and an AI regtech spinout. 

Professor Zou actively collaborates with industry, government bodies, and civil society organisations to advance responsible AI practices and enhance ethical standards in technology use. Her insights are regularly sought by global media, and she frequently speaks at prestigious conferences around the world. A committed educator, Professor Zou actively mentors the next generation of lawyers, fostering critical thinking and interdisciplinary knowledge in her students. 
