
Rewiring AI with Dr Charu Maithani

A miniature painter understood that a Mughal garden could be experienced from multiple viewpoints simultaneously. An Indigenous artist sees the land from above and within... If we allow AI to flatten these ways of seeing into one single Western perspective or model, we are losing different ways of being in the world.

Dr Charu Maithani

Ask any AI image generator to create an image of a garden and you’re likely to receive a very specific type: manicured English or French style, colourful plants, geometric forms and a winding pathway made of stone or gravel. Why is this such a big deal? Millions of AI images are generated every week, but they represent only a narrow view of real life because the technology is trained on Western-centric perspectives.

In an era of AI slop and Shrimp Jesus, hear from Dr Charu Maithani, a Lecturer in Media, Journalism and Communication, about how changing the approach to machine learning could have far-reaching effects on our visual vocabulary.

Transcript

Charu Maithani: I think a lot of scholars and academics, a lot of artists, a lot of designers, a lot of creative practitioners, they aim to fundamentally break down: what are the foundations on which we are even thinking about these technologies? Right? So if an image, for instance, can be broken down into a number of parts, when machine learning programmes are looking at an image and learning, what are these parts that I need to know so that it can be replicated? So, yes, at a fundamental level, it's also, why are we breaking it down? But it's also like, okay, if we do need to break it down, then can we break it down in a different way? And so the image that doesn't really have a depth of field, a flatter image. The example that I give is miniature paintings, very flat images, right? Everything is like in that 2D space, yeah. So what if we actually train machine learning programmes on these 2D images? Or, let's say, what would it mean to train machine learning programmes on Indigenous image making? You know, because there is a lot of thinking that goes into Indigenous image-making practice, which is not about, oh, we need to give a depth of field, we need to break it down into foreground, middle ground, background. You know, there's a different kind of thinking going on. So if we were to train a machine learning programme on that kind of thinking, what is it that we really need to do? What does that thinking look like?

Benjamin Law: G'day, you're listening to One Big Idea, presented by the University of New South Wales Centre for Ideas. I'm writer and broadcaster Benjamin Law, and I can't wait to talk to seven incredible women whose research and ideas are changing the game in fields from the environment to education, quantum physics to cancer research. Now, First Nations people on this continent have been sharing ideas and knowledge for tens of thousands of years. They're humanity's first astronomers, first agriculturalists, first architects, first inventors, and together, those Indigenous nations constitute the oldest continuing civilisation the planet has ever known. So we're really grateful to the Elders of the Gadigal, of the Eora Nation, where we're recording this podcast, that we can continue sharing knowledge here on Aboriginal land. And if you're a listener who's Aboriginal or Torres Strait Islander, we extend that gratitude to you too. Every episode, I'll be interviewing a different UNSW academic, learning more about the person behind the big idea, and today, I'm sitting down with a lecturer in the School of Arts and Media at UNSW. She's a researcher who organises her inquiries in the form of writing and curated projects, and is really interested in exploring the perceptions and meanings of images in contemporary media culture facilitated by AI. Right now, millions of AI images are generated every week, but they only represent a narrow view of real life. So I can't wait to discuss how these technologies could have far-reaching effects on visual vocabulary and our understanding of society itself. With Dr Charu Maithani, welcome to One Big Idea.

Charu Maithani: Thank you. Thank you, Ben.

Benjamin Law: I feel like this is a really timely period to be chatting to you, because these technologies have come, generative AI has arrived, and they're being used at such scale now. You're interested in it, of course. Tell us about what you do. How would you introduce yourself to someone at a party to help them understand the work that you do and what you're interested in?

Charu Maithani: Right, I think I would introduce myself by saying that I'm interested in how we can make machines see differently. And what does seeing even mean? Right? Like seeing is not just visual, right? So those are some of the things, you know, questions about how we understand vision, or what it means to translate a particular way of seeing into different kinds of knowing. You know, like seeing is so associated with knowing, but also like visually seeing. You know, like, what happens when we are not just seeing visually, right? Optical vision is just one way of seeing.

We never really just see; it's actually a combination of our multiple senses, how we see. You know, we don't just see things when we are remembering things. We are thinking of the sound, we are thinking of the smell, right? Sometimes we are thinking of the touch. So there is too much emphasis, definitely, on seeing as in optical vision, that kind of seeing. But like, let's expand that as well, right? Questioning it, you know, that's where I'm going.

Benjamin Law: Well, let's really expand that, because already you've caught my attention with the idea of, I don't know, helping machines see differently. Like even that concept is blowing my mind. What do you mean by that?

Charu Maithani: Well, I think it really goes down to how we have been trained to take photographs, take images, you know, photographs, films, like all kinds of images, in a certain way. You know, there are certain ideas of, oh, this is what we teach in school, what I teach to students. You know, this is how you look at an image, this is how you divide the image, these are the rules, this is the depth of field, or this is where you need to go off-centre or centre, and stuff like that.

So this kind of aesthetic has been normalised through centuries of, you know, work. It isn't always the case. It is the dominant way of organising elements, organising vision, let's say, but it's not the only way. So I'm trying to think of, okay, then, what are the other ways? You know, other cultures have been organising vision, organising seeing, in fantastic ways for centuries, you know, like we were talking about Indigenous knowledge in your opening, right? So that sense of vision, and of course I'm using it really broadly, as I just said, it's not just about optical vision, but that sense of vision encompasses and goes beyond our really narrow thinking of, you know, this is how you organise something, or this is how you frame an image, you know.

So let's get that thinking in. You know, we are, quote unquote, training machines in different ways of how to see, what are the elements the machine has to look for, right? Which has very questionable ways, like, how do we even do that to an image, right? Or if image making is that simple, that it can be broken down into this number of steps that can be learned by the machine, right? All of that, of course, is very questionable, and those are also some of the questions that I want to get to, right? By trying to open up this idea of, okay, what kind of images can we generate? You know, what are the other ways of seeing which can be used to generate images that look very different to how we understand, you know, this is a good photograph, or this is how it should be, right?

Benjamin Law: You ask this very good question, which is, what are these different ways of seeing? Now, I've heard you speak before, and one of the big takeaways that I got from hearing you is that technology is not neutral, and it's never neutral; it's designed. And the way in which you led me through that concept was through a Mughal garden. Can you, first of all, describe what a Mughal garden is, and what that image, and how it intersects with AI, tells us about where we're at right now?

Charu Maithani: So a Mughal garden is a kind of a garden, right? It was interesting to think about AI vision through the garden. And yeah, I'll come to that. The Mughal garden is a popular garden design in South Asia…

Benjamin Law: They use marble in a very particular way. Is that right?

Charu Maithani: Yeah, it's not just marble. It's also the red sandstone that was used in a particular way. Water is a very, very important element in a Mughal garden. So the Mughals were in South Asia from the 16th century, I'm pretty sure it was the 16th century, till the British occupied South Asia more completely, more authoritatively, more economically and politically. Before that, they came in as, you know, traders, as the East India Company and all that. But then, till, like, the 19th century, when they solidified, they're like, “Okay, we are not just a trading company, right? We're gonna take over, like, the country,” anyway.

Till then the Mughals were around. They built lots of, you know... so the Taj Mahal, for instance, used to have a Mughal garden. So, you know, the Mughal garden's main properties, it's actually influenced by Persian garden design. So there is a lot of the Persian garden design influence. You will actually see that in a lot of monuments, however they exist now, because a lot of gardens don't exist as Mughal gardens anymore, you know, including the Taj Mahal, which doesn't really have that same Mughal garden, because, you know, it's been like centuries, right? So things have been built over.

Benjamin Law: Sure, but what you're describing is something that's engineered quite beautifully, really quite large. And if you've ever seen one, either in real life or even online, you'll be like, “Wow, what a beautiful, dreamy and very familiar”, especially to people who've been in the subcontinent, “very familiar kind of garden design”. And yet, when you say prompt, an AI image generator to give us a garden, what comes up?

Charu Maithani: Yeah, so the thread between, you know, this Mughal garden, which is a particular kind of, you know... it's called charbagh, which is like four grounds, literally four grounds that are enclosed by water. So that particular kind of garden is a very specific kind of garden. There's a specific kind of vision, right, and that garden can be seen a lot in miniature paintings, in Persia, throughout South Asia. So what I'm saying is that there does exist a huge database, if that is the language we're speaking in, a huge database that shows that this is also a kind of a garden.

Benjamin Law: Yeah, if you want to see a Mughal garden, it's easy to see what a Mughal garden looks like.

Charu Maithani: Yeah, yeah. So, but we don't really see that unless we ask the image generator to give you a Mughal garden, right? So, the idea is not whether AI image generators can do a good vision of the Mughal garden, right? They can. The point is that when I ask for a garden, it's the default vision, the default image of a garden. That is what is questionable, right? Because there is a universal idea about what things look like, what people look like, what problems people have, you know. And that solution-through-tech kind of mentality, right, kind of just sweeps over the local issues, the context, and really overlooks a lot of the other knowledge, which is, let's say, the non-universal, or, in that sense, non-Western knowledge.

Benjamin Law: So to recap, you go to an image generator, you type in, show me a garden. Show me a beautiful garden, show me a lush garden. And it's very unlikely that a Mughal garden will come up. And what you're saying really resonates with me as well. Because, you know, you're a South Asian woman, I'm an East Asian man, and a lot of the time, if we're typing things like, show me beauty, show me something, show me a family, even, people like us or our cultures or our environments, our foods might not even come up, probably won't come up. What's that telling us?

Charu Maithani: Well, that is telling us a lot of things about what kind of ideas or what kind of vision technologists, let's say, have, and how we really need to push back at very fundamental levels about how we are even thinking about these technologies. As to, you know, some kind of steps that can be taken: if we are thinking, can we think more in local context, right? How is it actually really helping us do anything at all? Like, okay, you can generate an image of a garden. Yeah, you know? What do you want me to do about it?

Benjamin Law: Well, to that point, I imagine some people might be listening thinking, Well, what's really at stake? We're talking about an image generator that's spitting out an image. It's not the image that is most inclusive. It's not giving us a broad range of what say a beautiful garden looks like. Why should we care what's at stake in this conversation,

Charu Maithani: Right? So, as I said, at, you know, some level we could just say, like, why should I care about this?

Benjamin Law: Why bother?

Charu Maithani: Yeah, yeah, you know, garden, whatever garden comes up, right? But then, if you actually really think about it, it's about inclusion, but it's also about, how else can we imagine these things to be right?

Benjamin Law: But what I'm hearing is there's an implicit question, which is like, what happens when we're using a flawed technology?

Charu Maithani: Yeah, yeah. I mean, not a flawed technology; a technology that has been constructed with a certain mindset, is more like it.

Benjamin Law: Values and meanings that aren’t neutral.

Charu Maithani: That are definitely not neutral, that also have a very colonising effect. You know, that's the word to use here. And I know it's a very loaded word, but that is true; technology comes with a lot of that colonising kind of mindset.

Benjamin Law: Well, you have reframed something in my mind, because, like when we think of colonisation, obviously we're thinking of geopolitics history and also living history and the legacy of colonisation in real, concrete ways. But what you're also talking about is an intersection of colonial mindsets in a way, with the design of technology and how that legacy is actually living digitally.

Charu Maithani: Yeah, yeah, absolutely. So, of course, digital colonisation obviously has a lot of the legacy of territorial colonisation, or historic colonisation, and that can be seen not just in how we think, but also in the actual use, the infrastructure, the materiality of technology, right? I mean, talk about rare earth minerals. Everybody's talking about rare earth minerals nowadays, right? And that is majorly mined out of Indigenous lands, right? So, it's a mentality that kind of manifests itself in actual ways that we go about in our daily life using, quote unquote, you know, technology.

Benjamin Law: And your second point then is, how else do we think about this? Or how else can we think about this? And if we're talking about colonisation, I guess the reframing of that is decolonisation, which makes me wonder, What does decolonising AI, what does decolonising, generative image functions look like, and how do we go about that?

Charu Maithani: Right, so decolonising is one thing, or anti colonising is another. So, you know, you can think of decolonial or anti colonial like, those are the ways to think about it, because at some point we can't really like decolonising is not enough. There is a certain kind of, I guess, a certain mindset and a certain vision that one needs to have, to not have a colonial thinking practice,

Benjamin Law: let's say yeah, that it's actually in motion, yeah.

Charu Maithani: So what that looks like means very different things to different people who work with AI technologies or data technologies. Let's say... so I think diversifying datasets is one way a lot of tech companies have gone, and there have been some amazingly questionable results, you know, like showing the founding fathers of America as, you know, including Black people in that. Like, over-diversifying,

Benjamin Law: yes, yes, yes. Or just like, you know, yeah, what does, what does a neo Nazi look like? Yes, like, well, we'll put a Korean woman.

Charu Maithani: Oh, okay, yeah. I mean, you know, not that that's the thing, right? Not that Asian people can't be neo-Nazis. But the point is that you're actually taking away from the real, yes, you know, neo-Nazis. You know, right now in the world, the neo-Nazi is a certain kind of person in a particular part of the world, right? So, you know, we're then also kind of making these weird choices in the name of diversification.

Benjamin Law: Yeah, there's a bit of context collapse happening.

Charu Maithani: Yeah, yes, absolutely. So diversification of data, so, very questionable, but a lot of tech companies like to take that route. But I think a lot of scholars and academics, a lot of artists, a lot of designers, a lot of creative practitioners, they really fundamentally want to break down, they aim to fundamentally break down, what are the foundations on which we are even thinking about these technologies? Right? So if an image, for instance, can be broken down into a number of parts, I guess, when machine learning programmes are looking at an image and learning, what are these parts that I need to know so that it can be replicated? So yes, at a fundamental level, it's also, why are we breaking it down? But it's also like, okay, if we do need to break it down, then can we break it down in a different way? And so the image that doesn't really have a depth of field, a flatter image. The example that I give is miniature paintings, very flat images, right? There's no depth of field in that, right? Flat images; everything is like in that 2D space, yeah.

So what if we actually train machine learning programmes on these kinds of 2D images, right? Or, let's say, what would it mean to train machine learning programmes on Indigenous image making? You know, because there is a lot of thinking that goes into Indigenous image-making practice, which is not about, you know, oh, we need to give a depth of field, we need to break it down into foreground, middle ground, background. You know, there's a different kind of thinking going on. So if we were to train a machine learning programme on that kind of thinking, what is it that we really need to do? Like, what does that thinking look like, basically?

Benjamin Law: Basically, we're talking about the intersection of AI and culture, and we have also seen examples of AI generating, say, Indigenous imagery, but I can't imagine that First Nations communities or people have necessarily been collaborated with deeply to use or generate that imagery. What is the line here? Because these are systems that are designed for profit, usually, and they're often working with imagery that isn't theirs. What are the ethics in all of this?

Charu Maithani: Well, AI ethics is a very pertinent topic. And there is lots of work going on, I guess not enough, because the big tech companies have a certain way in which they have organised things, and they continue to do so, right? So there has to be a lot of voices and a lot of pushback against that kind of, not just thinking, but also way of operating…

Benjamin Law: Because what we're talking about feels so extractive.

Charu Maithani: Yeah, it is, right. I mean, like, where are those dataset images coming from? You have the LAION-5B dataset that has a lot of images. A lot of the initial training happened on these, you know, these datasets that were compiled, I think, since the 1990s or something, right?

Benjamin Law: And there are so many conversations about ownership as well. As an author myself who's had my own books scraped without my consent,

Charu Maithani: Yes.

Benjamin Law: And without remuneration for AI, I feel that personally, but I can only imagine what it's like for, say, a First Nations community or culture to suddenly have their images scraped without any consent or remuneration. It's kind of history repeating, right?

Charu Maithani: Absolutely, yes. That's why the colonising, you know, the coloniser's mindset, is totally what is happening, right? So, yes, you have the image permission issues, you have who then has the ownership of the model, of the AI model itself, right? So there are lots of those issues here, I think. That's why I think the future, and a few organisations and people are working towards this, is to go back to the community. You need to... that's why, okay, you know, AI, some kind of model exists or something, right? So, how do we create kind of new models, or what do we do with AI?

So raising these issues, about ownership, about permissions, right, also makes me more confident about this research, and when I talk to others, that the future is going to be more community-specific, right? We will have to get communities involved to design specific AI models around specific kinds of issues, right? Because that's the problem, right?

This universalising, this totalitarian vision of gardens will have to be countered by going to communities and institutions. That's why institutions here do have a big role to play, because, let's say, a lot of paintings have been digitised by the museums, right? So what do they do with that kind of digitised dataset now? Right?

So that's why you'll have to get the institutions in. And they have been doing it for a really long time, right? Institutions, archive institutions, right? They have had these images and different kinds of, you know, documents for a really long time. So we need to get them into the conversation of, okay, how do we… What is it that we do with... okay, we have this kind of, you know, AI model, or we have this ability, let's say, of creating an AI model. What do we need it for? Whether we need it or not, what should we do with it? Right? So there have to be different communities, institutions, different disciplines, who have to be in conversation here.

Benjamin Law: That's really interesting, what you're saying, because right now the status quo really feels like the generative AI models are usually in the hands of very particular for-profit tech companies run by a certain kind of tech bro. You're kind of forecasting that these technologies, as they grow, will become increasingly available to other institutions, including not-for-profit, or even publicly owned institutions as well, that will be more adept at engaging with community, with people, with cultures on a grassroots level, and that in itself might be its own alternative or intervention.

Charu Maithani: Yeah, absolutely, because we have to understand that till now, the technology companies have been treating image making, at least in the context that we're in, as some kind of a technology issue, right? When it is not, right? Image making is a social, cultural, political issue, right? So if you're going to treat it only as a problem to be solved by technology, then that is the kind of thinking you're going to get. So that's why that fundamental, you know, anti-colonial and decolonial thinking is required, because you need to fundamentally alter the foundations which actually enable this kind of thinking, you know. And yes, of course, there's a lot of, you know, money that is involved, a lot of geopolitics that is involved in this, that enables these companies to thrive and flourish and remain a black box, away from scrutiny, away from any kind of, you know, legal issues.

Benjamin Law: So these are really important questions that we need to be asking each other and also the people designing our technology. Meanwhile, while you and I are having this conversation, a lot of images are already being generated by the AI kind of platforms and apps that we're talking about and at scale. Can you give us an idea of how big that scale is right now, because we're not talking about something that's niche. We're talking about a lot of images being produced to help us make sense of the world, in a way, right?

Charu Maithani: So the statistic that I usually share, about 15 billion images since, you know, DALL·E 2's beta launch in 2022, was something that I got from Everypixel. I mean, you know, there are lots of platforms that have been doing this kind of number crunching of, you know, how many images are we generating each day, or since the launch of a particular platform, and so on. So, from what I get from Everypixel, and this is a 2024 statistic, for models based on Stable Diffusion, 12.5 billion images have been generated in total. They say over 15 billion images have been generated till now, as of 2024. And this is across different fields, right? It's education, it's academia, it's medicine, even finance and stuff.

Also, with all of these platforms, and only recently can we see that that is not always the case, but with all of these platforms, you have to give a text prompt, right? So there's anyway a certain kind of technology involved here which requires a text prompt to generate an image, right? Which in itself is again very problematic, because, as we discussed earlier, seeing, and firstly relating it to words, is a completely different way in which one is thinking about images here, right? So that is something we really need to keep in mind: how is it that images are tied to text? It's because, in the databases, in the datasets, the image is tied to text. So all images are tagged and labelled, and that is how the machines are trained, and that is why you need text to generate images, right? So you see how it's fundamental. So if we were to think about different systems, we will have to think about, what do we do with databases? Right? Because, right now, they are tagged and labelled, which involves other kinds of labelling practices which are very questionable, right? That's another area, and there is a lot of work on that. So, you know, what do we do with databases? You know, how do we dissociate images from words? Because that has been going on for well over, like, 50 years or something.

Benjamin Law: And being used to train these machines that are making meaning for us as well. You know, once upon a time and up until present day as well. You know, photographs were our visual record of the world. Photographs made us meaning. What are these images doing to us now? What are we doing? What are we doing with them?

Charu Maithani: The association of text with image, of course, has a really long history, if you look at how there are labels and titles of paintings, of photographs, right? You need that context, and that context comes with text, right? So it's not that it's completely unwarranted, right? But, I guess, the question that I'm more interested in asking is: okay, this has led us to a certain kind of production, right? A certain way of image production, right? And that's why I go to, you know, traditional cultures, non-Western cultures, right? Because they have been imagining things over so many centuries in very different ways, right? So the point is not, oh, “let's technologise that way of thinking”. You know, that's not the point, but also that we need to push back against this universal colonising vision that is constantly exerted on us through certain ways in which technology is presented to us. You know, the liberatory potentials of technologies which are presented to us, right? So my thing is, like, forget, you know, that technology is going to liberate us or save us in any way. Let's think of, okay, what else can we do? Like, what are the potentials? What are the different ways of seeing things, or, you know, feeling things, that we can do here instead of, you know…

Benjamin Law: There are myriad questions that I've been hearing from you that we need to be asking of the technologists and the designers. It's like, maybe these are things to reconsider. But again, there's the user experience as well. There are people who are engaging with these technologies, unsure about these technologies as well. What are the questions that you want us asking of ourselves as we think about using these technologies, as we are, perhaps, even using these technologies?

Charu Maithani: I think, immediately, right now, the way these things have been structured in our lives... the first thing that comes to mind, you know, when you're talking about user experiences, is, you know, slop…

Benjamin Law: AI slop. We are in the age of that, aren't we?

Charu Maithani: We are in the age of that. I do think that slop has a little bit of tendency that gifs had once upon a time, you know,

Benjamin Law: Define AI slop for me, because, you know, I know it when I see it, and I know that there's a lot being generated. How would you say, this is AI slop?

Charu Maithani: It is an interesting phenomenon that is happening, because it's a combination of what we have already been seeing, right? And then you, you know, you drop in, like, AI slop, you know, these things that kind of look familiar but are not. You know, it's like something in your mind goes, this is weird, or this is just, like, plain ridiculous, like the Shrimp Jesus being like,

Benjamin Law: shrimp Jesus. I mean, Donald Trump is a huge fan of like, generating and disseminating AI slop out into the world as well.

Charu Maithani: Yeah. I think what we need to be careful of there is how the AI slop industry has major geopolitical kind of roots. And by roots I mean not just, like, a tree root, but almost like trade routes, yeah, because a lot of the slop is generated in what is called the Global South. You know, a lot of countries, like India, Vietnam, a few African countries, you know, West African countries. That is where a huge amount of AI slop is being produced.

Benjamin Law: You're talking about the actual machines that are generating it.

Charu Maithani: No, humans.

Benjamin Law: Humans? Like, they're making it?

Charu Maithani: Yes, they are making it. They are giving tutorials about how to make it. To what end? For easy money, you know. And it might seem like, okay, how much is a dollar gonna... you know, it's probably easy for us, sitting in Australia, to think that, you know, a dollar to learn how to generate images or to learn how to use prompts feels like nothing, really, right? But a dollar, after currency conversion, is a fair amount of money.

Benjamin Law: This is the point where I feel like the conversation has almost backflipped on itself, because we started talking about how it's often the so-called Global South that is erased from the technology that's generating the images. That's why we're not getting the Mughal gardens. And yet, simultaneously, the irony is that so many of the images are being generated by people in the Global South. That is like intellectual gymnastics going on right there.

Charu Maithani: Absolutely. But that is what is problematic about the whole thing, right? That's why the labour issues become so important. Because it seems like, okay, there are certain kinds of things that non-Westerners are allowed to do, but their history, their traditions, their traditional knowledge, is something that we're just going to, you know, very conveniently keep out. So that's why AI technology is said to have a colonising effect, because of exactly this. There's an almost parasitical nature to the way the labour practices are put in place very carefully, while the knowledge-generating, meaning-generating ideas from those people have been kept out. Yeah, so you're absolutely right.

Benjamin Law: Charu, can I ask you to be a futurist for a second here and project into the future? Say, in 10 years' time from now, these technologies are going to rapidly advance at speed and scale. What do you hope is going to happen in terms of their evolution? And what do you anticipate will happen in terms of their evolution? Are they the same thing?

Charu Maithani: Well, no, I don't think they are the same thing. But I'm not really a futurist in that sense of, you know, predicting technology. I think we are already seeing a lot of the AI bubble stories being questioned. If the prediction that the state we're in right now is an AI bubble is true, then bubbles pop. But, I mean, forget even about the AI bubble and all that. We can already see, in how we use AI currently in our daily lives, that the technology has kind of plateaued, right? So I think the over-promising of AI that has happened is going to unravel itself. The technology is interesting, and the technology is not going to go away, you know? It's just that the hype that we are in is going to subside, right? We can already see that happening. So in 10 years' time, I think, because there is going to be a lot of push, the technology is going to be more concentrated locally and be more specific, and that's where I think the use of this technology is going to start mattering a lot more. I think, yeah.

Benjamin Law: Dr Charu Maithani, thank you so much for sharing those insights with us. I feel like I want to walk through a Mughal garden now, just to clear my mind of the AI slop my brain's been infected with. But thank you so much for sharing your knowledge with us today.

Thanks for listening. This episode was brought to you by the Centre for Ideas. For more information, visit UNSWcentreforideas.com and don't forget to subscribe wherever you get your podcasts.

Speakers
Dr Charu Maithani

Dr Charu Maithani is a researcher who organises her inquiries in the form of writing and curated projects. Her research articulates conditions of mediality, aesthetic and political relations of/by media objects, and material-discursive practices by thinking and working across media studies, media ecologies, digital art and cinema. She is interested in exploring the perceptual and epistemological changes in contemporary media culture facilitated by AI and the relationship between data practices and dominant ideological arrangement of vision. She is a Lecturer in the School of Arts & Media, UNSW Sydney. 
