Jake Elwes – Interview for Staging Decadence

Fig. 1: The Zizi Show (2020), montage of deepfake drag artists. Courtesy of the artist.

Jake Elwes is a London-based artist whose work draws big data, deepfakes and Artificial Intelligence (AI) into the orbit of queer creative practice. Elwes primarily works as a visual artist experimenting with coding, AI, and datasets, although more recently they have been collaborating with drag performers on the Zizi Project: an original and evolving series of installations, digital artworks, and live performance events. Their work has been presented internationally at museums and galleries including the Max Ernst Museum in Brühl, RMIT Gallery in Melbourne, Pinakothek der Moderne in Munich, ZKM in Karlsruhe, Today Art Museum in Beijing, ALIEN Art Centre in Taiwan, and Gazelli Art House, Somerset House, the National Gallery, and the V&A in London. Elwes received an honourable mention in the Interactive Art category of the Prix Ars Electronica in 2022, was twice shortlisted for the Lumen Prize (2018 and 2021), and received the AI Newcomer Award (Art Category) in 2021.

The Zizi Project began with Zizi: Queering the Dataset (2019), which explored how social bias is built into ‘Big Data’. However, collaborations with queer performance scholar Joe Parslow and Me the Drag Queen marked a formative turning point in Elwes’s practice. In work produced between 2020 and 2023, drag performers became both data sources for generating subversive, mercurial deepfakes, and doppelgängers for projected deepfakes in live performance events. Notable works from this period include Zizi & Me (2020), and The Zizi Show (2020/23), which was originally designed as an interactive online platform during the COVID-19 pandemic, and was later adapted as a museum installation to mark the opening of the new Photography Centre at the V&A in 2023.

Fig. 2: The Zizi Show (2020), deepfake generation of Me the Drag Queen (montage). Courtesy of the artist.

Key to Elwes’s practice is how cutting-edge technology and data science might be queered. They champion the techno-activist notion of ‘dirtying the data set’ – a concept based on feeding alternative and subversive data sources into existing systems and networks, not least the mediated faces and bodies of drag kings, drag queens, and trans and non-binary people. What results is a fascinating exploration of glitching, failure, entropy, unpredictability, and a playful demystification of a ‘new dark age’ dominated by Big Tech.

In this interview for Staging Decadence, Elwes contextualises and unpacks these ideas and ambitions, ultimately arriving at a fascinating consideration of pleasure, decadence, and the complexities of refinement in degenerative technological systems.   

Who is Zizi?

Zizi is our virtual deepfake drag character. It’s a deepfake persona that’s been trained on a group of London drag performers. It explores the idea of creating a deepfake constructed version of an art form that is all about construction.

Ever since I was a little kid, I was into Photoshop and multiplying images on top of each other, and seeing how they break down. I got really interested in programming these tools for myself as a teenager, moving away from creative software and starting to create patterns on computers using code. A key moment was when I was in Berlin at The School of Machines, Making & Make-Believe back in 2016. At that time, you could only really create low-resolution images trained on huge datasets. I got excited as an artist, asking what all this means and looking into the history of media and playing with ideas of randomness, emergence, pattern making, and systems.

Zizi is also a collaborative project. Every time it’s presented, I want everyone’s names to be present. I first worked on this three or four years before policy started coming out around deepfakes and their implications for performers. It was at a time when people hadn’t really heard of deepfakes and didn’t really understand generative AI, so we sat down with each of the performers and explained how their body was going to be distorted, sometimes in quite grotesque and disturbing ways, making sure they were going to be okay with that, but also explaining that we only wanted to do it within our community. That was a really important point for us: otherwise it becomes a fascistic technology that can non-consensually reanimate someone else’s body and form. We also paid everyone for their data at the time; we pay drag venues to set up the stage; and then we pay everyone every time it gets shown – and they have a right to withdraw as well.

Ethics has clearly been at the forefront of the Zizi Project from the outset, then, but there are also some really interesting political implications underpinning the datasets you’re working with, particularly around the specific data that’s inputted into AI systems. Could you walk us through some of these implications, and how they informed Zizi: Queering the Dataset (2019)?

Yeah, that’s a really interesting one. In the beginning, I was appropriating existing code from universities and other institutions in the United States. At the time, it was too big a task to gather your own datasets, but a handful of artists were starting to question what these datasets were, and what happens when you use pre-existing datasets. Issues come up linked to social biases within those datasets. So a lot of my early work was using pre-existing datasets and appropriating them, seeing the holes in them, or seeing what happens if you don’t train a neural network on these vast datasets.

In 2019 I realised that facial recognition datasets had become standardised in the US, and biased toward normativity. The main datasets I was using back then were CelebA-HQ and FFHQ. CelebA-HQ had nearly 100,000 images of faces, which were very normative. It was literally based on American celebrities, so you can imagine there was a big issue with diversity. Then the engineers realised that this was quite problematic and tried to diversify by creating FFHQ, which I think was based on 70,000 high-resolution images of faces that had been scraped from Flickr. The problem was that they didn’t have permission to use any of these faces. It felt like a weird form of digital colonialism, because they were gathering images that used tags like ‘African village’, when in fact it was often American tourists taking pictures of different cultures and identities.

My friend at DeepMind traced the ChatGPT dataset to a completely unqualified male engineer who wrote a list of about 200 words that should be removed from that language dataset – words like ‘faggot’ and ‘queer’ – just because he thought they were too problematic. Any article on the internet that was in their dataset and contained those words would automatically be removed. Activist texts and hate speech were being removed indiscriminately.

Fig. 3: Zizi in Motion (2023), still of deepfake drag artist in close up resembling Sister Sister and Bourgeoisie. Courtesy of the artist.

If we strip out a lot of these biases then we end up stuck with AIs in the image of Mark Zuckerberg, Elon Musk, and other white straight men in America. The big thing for Queering the Dataset was to take this large dataset of nearly 100,000 images of faces, and inject it with 1000 images of drag kings, drag queens, and drag monsters: so images of gender fluidity and otherness. Once an AI model has been trained, it’s very good at saying ‘this is an image of a human face’, and ‘this isn’t’ – but you can add a step in a more generative model that can start to create new simulacra: fake faces that exist in a latent space. Here the AI doesn’t try to mimic a real image, but instead creates images in the in-between spaces from what it’s learned in a mathematical space, which is a beautiful concept. There’s a real queerness to this space. Injecting 1000 images of drag identities into a corporate dataset and carrying on training it for a week on my computer in my bedroom ended up shifting all of the weights in this neural network from quite normative faces, to a place of otherness where the face suddenly breaks down. It no longer sees normative features. Drag makeup and images confuse it so much that the whole thing becomes a smeared echo of a face.
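At the data level, the injection described here – folding roughly 1,000 drag images into a corpus of nearly 100,000 normative faces before training continues – amounts to concatenating and reshuffling the two sets. A minimal Python sketch with placeholder arrays standing in for images (the function name and array shapes are illustrative assumptions; real fine-tuning would then resume gradient updates on the mixed set):

```python
import numpy as np

def dirty_the_dataset(corporate, injected, seed=0):
    """Fold a small subversive set into a large normative one.

    Continued training on the shuffled mix is what shifts the
    network's weights away from the original norm.
    """
    mixed = np.concatenate([corporate, injected])
    rng = np.random.default_rng(seed)
    rng.shuffle(mixed)  # shuffle rows so the injected data is dispersed
    return mixed

# Stand-in 'images' (flattened): 100,000 normative + 1,000 injected.
corporate = np.zeros((100_000, 8))  # placeholder for normative faces
injected = np.ones((1_000, 8))      # placeholder for drag images
mixed = dirty_the_dataset(corporate, injected)
```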

Fig. 4: Zizi in Motion (2023), still of deepfake drag artist in close up. Courtesy of the artist.

This is making me think about where your work sits in relation to things like Xenofeminism and accelerationism, and maybe glitch feminism too. I find binary code quite an unusual thing for radical political activists and artists to work with, and yet there’s so much queer potential with things like quantum computing, where it’s not just about ones or zeros, but the infinite space between.

An amazing artist called Libby Heaney is working with quantum computing. She’s creating these images and putting them through quantum encryption. It’s really beautiful work! I’m not working with quantum computing, but there’s something really interesting in this idea of de-binarising AI. I like to think about training an AI spatially, moving away from anthropomorphic metaphors of it being similar to thinking or creativity, and into thinking of this as just data in a space that’s representing us. Even ChatGPT and language models are still plotting words in multi-dimensional mathematical spaces and making connections between them. If we’re training a neural network in this space, what you can do is explore the in-betweens. There’s something inherently queer in this potentiality between the data points, between what we’ve labelled as male / female, old / young, black / white. There’s always this in-between spectrum.

Let’s say that we’ve trained an AI on 100,000 images of human faces. Each of those faces is given a unique coordinate in a 512-dimensional space. Everything the AI has learnt now exists in this latent space, and the boundaries of that space are defined by what the AI has seen in the data. If it’s never seen an image of a black trans person, that identity falls outside its latent space: the model can’t recognise it, and the AI will fail. But what’s exciting is that we can explore the spaces in-between the original data points and start to visualise these in-between spaces using generative AI to create new, non-existent identities.
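The in-between spaces described here can be sketched directly: given two latent coordinates (stand-ins for two faces the model has seen), every point on the straight line between them is also a valid input to the generator. A minimal sketch, assuming 512-dimensional codes and using random vectors as hypothetical examples:

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps=8):
    """Walk the straight line between two latent codes.

    Each intermediate vector is a coordinate the generator was
    never shown directly -- an 'in-between' identity.
    """
    ts = np.linspace(0.0, 1.0, steps)
    return [(1.0 - t) * z_a + t * z_b for t in ts]

# Two hypothetical 512-dimensional latent codes.
rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)
z_b = rng.standard_normal(512)

# The endpoints recover the originals; everything between is new.
path = interpolate_latents(z_a, z_b)
```

Feeding each intermediate vector to a trained generator would render the morph between the two faces.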

There’s also this thing called unsupervised deep learning, which is where we don’t give things labels. So we don’t say ‘these are all images of male faces, and these are all images of female faces, now work out how to best plot them in a mathematical space to divide them’. Instead, you can give it thousands of data points of images, sounds or words without labels, and it will plot them depending on what it thinks they have in common based on raw pixel data, for instance. In this space it discovers new meaning. There’s something beautiful in exploring what’s there and what’s not there.
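The unlabelled grouping described here can be illustrated with a minimal two-cluster k-means sketch in plain NumPy: no labels are ever supplied, and the algorithm discovers the split from raw feature distances alone. The toy 2-D blobs and the deterministic initialisation are illustrative assumptions:

```python
import numpy as np

def kmeans2(points, iters=20):
    """Minimal unsupervised two-cluster k-means: no labels supplied.

    Points are grouped purely by what they have in common
    (Euclidean distance in raw feature space). Centres start at
    the first point and the point farthest from it.
    """
    far = np.linalg.norm(points - points[0], axis=1).argmax()
    centres = np.stack([points[0], points[far]])
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest centre, then recentre.
        dists = np.linalg.norm(points[:, None] - centres[None], axis=2)
        labels = dists.argmin(axis=1)
        centres = np.stack([points[labels == j].mean(axis=0)
                            for j in range(2)])
    return labels

# Two unlabelled 2-D blobs: the algorithm discovers the split itself.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                  rng.normal(5.0, 0.5, (50, 2))])
labels = kmeans2(data)
```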

Fig. 5: Zizi in Motion (2023), still of deepfake drag artist resembling Oedipussi Rex. Courtesy of the artist.

The way that machine learning works is already quite non-binary. It only becomes binary when we say: ‘Is it a zero or a one? Is it a woman or a man?’ And even when we do that, the answer is always going to be on a spectrum and might say: ‘with 67% certainty, it exists over on that side’. But the human problem is that we then interpret that information as 100% female, whereas actually, under the surface, the algorithm often contains more space for uncertainty.
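The ‘67% certainty’ point can be made concrete: a classifier’s raw scores become a probability spread via softmax, and it is the final argmax step – a human design choice – that collapses the spectrum into one hard label. The two-class toy scores below are assumptions:

```python
import numpy as np

def softmax(logits):
    """Turn raw model scores into a probability spectrum."""
    e = np.exp(logits - logits.max())  # subtract max for stability
    return e / e.sum()

# Hypothetical raw scores for two classes.
logits = np.array([0.2, 0.9])
probs = softmax(logits)          # a spectrum: roughly [0.33, 0.67]
decision = int(probs.argmax())   # the collapse: one hard label

# The model's own answer keeps the uncertainty; argmax discards it.
```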

I love the idea of playing into this uncertainty, or asking what an unproductive AI would look like. This perhaps moves against accelerationism. What if this thing is non-functional, or failing? How can we program that? How can we move away from the way that the tech engineers are thinking in terms of results and function, and instead move towards a place of purposelessness, and exploring fuzzy spaces?

 

That’s one of the things that excites me most about your practice. Your work with technology moves away from ever-improving efficiency or ever-intensifying productivity drivers. Deepfakes also play an important role in your practice, but instead of making them ever-more perfect, they break down and fail. There’s a compelling entropy in that.

Interesting! Yeah, most people are trying to improve these things, moving more and more towards photorealism and being indistinguishable from a human form – which is being used for very insidious purposes, like deepfake pornography and fake news. Jaron Lanier has spoken beautifully about this. He’s the person who coined the term ‘virtual reality’. He talks about what he thinks it’s doing to our brains. I agree with a lot of what he says. I mean, I think we should probably stop talking about ‘Artificial Intelligence’ anyway, because it’s quite a confusing term. It’s not going to surpass human ability, as it’s made from human ability. He has a lovely analogy. He says that a car isn’t something that can run faster than a human; it’s something built to extend and augment human ability. So it goes faster, but it’s not a faster runner. In the same way, machines aren’t better thinkers, or better at making moves; they extend our ability to make calculations and explore multi-dimensional big data space.

He also says that he’s not worried about machines replacing us, but he is worried about them making us mutually unintelligible to each other. There’s a sense of entropy in that as we’re losing the ability to communicate as humans. I’m quite interested in flipping this idea, though, moving away from ideas of progress and improvements and instead going back and looking at some of the earlier AI models, or working with some of the latest ones, and deliberately looking at when they fail. The most exciting thing is when it breaks down. My intention is to explore the glitch, the entropy or the guts of the AI, which reveals the artifice in a way that demystifies the technology.

Fig. 6: Zizi & Me, live performance at the Zabludowicz Collection, 2019. Courtesy of Jake Elwes & Me The Drag Queen.

Yeah – so a lot of the time, when roboticists and technologists strive toward perfect imitation but fall just short, the result gets described as ‘uncanny’ – but when I look at your work, the word ‘uncanny’ seems completely irrelevant. It’s doing something very different, which comes across most starkly when you put technology into dialogue with live performance. For instance, in that piece with Me the Drag Queen – Zizi & Me (2020-23) – you have Me as a live drag performer entering into a comic and fascinating exchange with their own AI doppelgänger. Could you tell us about how you conceive of this relationship between live performers and their digital others?

AI is a reflection of us. Beneath our deepfakes there’s a human who’s doing the movements and who brought the drag persona. The drag performer has a constructed character, but the AI has also applied a kind of makeup. It’s reflecting back our own data. We’re having a lot of fun with it. Creating a sense of playfulness is really important to me, and fun around a topic that is often taken too seriously. There’s too much dystopic narrative, whereas what we really need are ultimate utopias. Playing with something that’s quite familiar to people, like cabaret, is very accessible. People can comprehend it and understand it, rather than seeing AI and algorithms as something scary, inaccessible, and intangible.

Then we create these musical theatre acts that satirise narratives around AI taking over from human artists. The first thing we did with Zizi & Me was have Me perform ‘Anything You Can Do (I Can Do Better)’ from Annie Get Your Gun (1946). It’s comical when you see the AI drag queen failing and breaking. A bit of her face falls off when she leans over. She’s not too scary! I think, like you say, it’s not uncanny because it’s all underpinned with this funny, goofy musical theatre and a kind of drag clowning. When her boob falls off… Well, it becomes a visual gag that we didn’t write in, but the AI plays into it while claiming that she can do anything better than the human.

Fig. 7: Zizi & Me, live performance at the Zabludowicz Collection (2019). Courtesy of Jake Elwes & Me The Drag Queen.

 I’m really glad you mentioned fun! I love sharing the Zizi Project with students as it’s switched on and raises a host of interesting questions, but it’s also totally joyful. It really makes me think about where pleasure and excitement sit in relation to your use of technology more broadly. Big Tech companies tend to promote a dry and limited sense of what constitutes excitement. For instance, for each launch of a new Apple product there also comes an explicit attempt to manufacture excitement around it – to create a sense of pleasure-giving eventfulness to match the pleasure-giving promise of the product. These product launches rely on a very narrow and of course highly commercialised sense of pleasure and excitement. In contrast, the pleasure that you’re celebrating is much more messy and unpredictable.

I think it gives people a sense of pleasure across the board. When we presented The Zizi Show at the Victoria and Albert Museum, it was often the little kids who stayed by the display longest. They were trying to copy all the choreography and movements of the performance, and it was so joyful. They were getting pulled into this strange, glitchy drag show. The other people who stayed in that room were people over the age of sixty. Interestingly, the curators were telling me that was happening time and time again. I think they were just sitting with that pleasure. Technology is too often presented as fearful and intimidating, but there it was presented in a way that allowed for pleasure from their perspective, rather than that of a tech bro in Silicon Valley. I think that is a wonderful thing. The older generation got it. They got Shirley Bassey. They got the costumes. They got the colours. They didn’t get the deepfake, or the AI, but they saw that maybe it wasn’t something they had to be scared of.

I think you’re quite right about those new product launches. Silicon Valley utopians only really approach pleasure in forms like the Apple adverts, or Google ads: stainless steel, Helvetica font... We need more hopeful narratives around these tools, especially when we’re looking at social bias rather than creating work that is further oppressing our communities.

 

This is probably a good point to talk about decadence. I like to think about decadence as the refinement of rot, or the refinement of decay, or the refinement of ruination, but refinement implies agency – in other words, that the subject has agency over the propagation of rot, decay, or ruination. In your work, though, you seem to relinquish agency to an AI-de-generative process of ruination, where you can’t really control what happens despite whatever data you feed to the algorithm. What results is unrefined, but in a way that still references and engages virtuosity and refinement by working with such talented drag performers, and their carefully-crafted looks.

Yeah! The rot is the drag performers that we are introducing as a glitch in the system. They break the system by ‘dirtying the dataset’, which is a techno-activist tactic of introducing meaningless data, or data that we know will break the system – especially systems of oppression. Queerness becomes the rot in the system, which then enters into a process of refinement, where the AI is trying to make sense of that data. The machine learning process goes through many, many iterations trying to make sense of this new data, with the ‘infection data’ becoming like a cancer that we have injected into its neural network. This process of refinement can lead to the creation of new images, which can come out very beautiful.

Ultimately, I’m giving it the data. I’m choosing the outputs and inputs, and I’m training the system, but there is a sense that in that process, as I take a step back, a sense of randomness or emergence comes through. I’m really interested in the idea of unsupervised learning: of not having control and just seeing what happens organically.

 

The Zizi Project has been through an extensive, iterative process. It always seems to be evolving as you build on it and adapt it for different spaces and contexts, but I’m also mindful that your work reaches well beyond Zizi. What’s next?

There’s a few projects that I’m thinking about at the moment. I want to do a collaborative project where we write a letter to Mark Zuckerberg from a bunch of different perspectives, asking him to change his engagement algorithms so that they prioritise compassion over conflict, even though he’s probably a lost cause.

I’m working with a wonderful writer who’s a drag clown and puppeteer called The Public Universal Friend (aka The PUF). We’re working on a puppetry project based on a work The PUF created for our V&A Friday Late event, where we’re exploring fates and fortunes in a hopeful digital apocalypse: a queerly apocalyptic space where the world that we know is dead, and everything’s changed. It hasn’t changed in a way that needs to be negative or dystopic; just very uncertain, and no longer recognisable. We’re thinking about connections between puppetry and deepfakes, and want to play with the aesthetic of a puppet theatre filled with both marionettes and deepfakes, as well as game characters and shadow puppets. Animation is a form of puppetry as well, so we’re thinking expansively about puppetry.

Then we also want to stage a live version of ‘Zizi & Me’. We had a short residency at the National Theatre thinking through some ideas for it. We want to set the piece up as a familiar musical theatre-style act, and then really pull it apart: the AI hijacks and takes over the stage, and then we rip layers off the AI as if it’s shedding its skins. We also have a vision of a drag queen performing ‘I Am What I Am’ (1983) to herself, while creating a live deepfake on stage.

Fig. 8: AI UK, 19 March 2024, QEII Conference Centre. Courtesy of the artist.

Nothing succeeds like excess! 

The other thing we did recently was a talk for the government with the Alan Turing Institute, who do lots of AI engineering research. It was called AI UK, and they had people from the Ministry of Education and Ministry of Defence there, as well as engineers, researchers, and industry people. They gave me the opening provocation – the first 90 minutes on the first day, on the main stage. The idea was to get radical artists thinking about alternate narratives and futures around AI. I did my bit with drag queens, who did a couple of performances, and we looked at fates and fortunes, and a utopic, queer digital apocalypse. I don’t know what they were thinking! I think it went down quite well, and we got some quite radical thinkers and artists in front of some good people.

Adam Alston