Holly Herndon and Mat Dryhurst are two of the most influential artists working with artificial intelligence (AI) today. With the rise of text-to-image and text-to-text generative AI models such as Midjourney and ChatGPT, the Berlin-based musicians and audiovisual artists have emerged as informed, witty commentators within the art-world debate about the power, ownership and governance of AI. They launched their Interdependence podcast in 2021 and both are active on X (formerly Twitter).
The married couple both have musical backgrounds and have collaborated for the past 18 years. Herndon describes their music as a mix of “European art music, electronic music, with some pop influences. But we find the space in between more interesting.”
The US-born Herndon grew up with electronic music and the UK-born Dryhurst in London’s independent music scene. They met after Dryhurst answered Herndon’s email to a music label in London about a delayed podcast episode. They have released recordings and performed live in the UK, the US and Germany. Their rich audiovisual art practice encompasses generative art, and they have published a series of NFT-based editions since 2020.
Their project Holly+ (since 2021) allows anyone to overlay the timbre, or sound quality, of Herndon’s voice onto another song or musical line.
In 2022, meanwhile, Herndon and Dryhurst co-founded Spawning, an organisation designed to give artists the means—including through the web-based tool Have I Been Trained?—to understand and opt in or out of the vast data sets used to “train” AI models for generative art. Their knowledge of the golden age of music sampling enriches their take on concerns about cloning and deep fakes in AI.
They have confronted the reductive potential of giant text-to-image AI models, and played with it as a counter-tactic. At this year’s Whitney Biennial they launched xhairymutantx (2024)—constructing a costume in which Herndon is overrun by her light orange hair. They refined this new model of Holly to produce “a consistent character that is able to be spawned by anyone using the interface provided”.
Herndon and Dryhurst have gone back to AI first principles with The Call, their debut solo show, at Serpentine in London. The exhibition has been created in collaboration with Serpentine’s arts technologies team—Kay Watson, Eva Jäger and Victoria Ivanova—and, on its interactive experiential installation, with Niklas Bildstein Zaar and Andrea Faraguna of the red-hot, ritual-savvy Berlin-based architectural practice sub. The team at sub has created three exquisitely imaginative and fine-tuned objects—an organ-like instrument called the Hearth, a hanging ring or chandelier, and a curtained oratory—to enable visitors to interact with the AI models without the use of computer screens.
The Call is built around a suite of AI models trained on a dataset of new recordings from the Sacred Harp collection of hymns, which was created in the US South in the 19th century and is rooted in old English choral music. The new songs were recorded with 15 professional and community choirs from around the UK. Herndon and Dryhurst, the choirs and Serpentine’s team are using The Call to explore questions of co-ownership and creativity under a specially devised “data trust” agreement.
Herndon and Dryhurst see each stage of the process—developing songs, recording the vocal data sets, training their AI models, and generating outputs—as a work of art. With The Call, they aim to develop a protocol that can be carried on by others. They are looking to offer “a beautiful way to make AI”.
The Art Newspaper: How did you come to create your exhibition The Call with the Serpentine team?
Holly Herndon: We’ve been in conversation with Hans Ulrich Obrist [the Serpentine’s artistic director] for years and done various events together. We’ve also been following what the [gallery’s] Future Art Ecosystems team does because they deal with really contemporary technology, which is quite rare for art institutions. So there was a flirtation period… and then we decided to embark on this relationship.
You have been champions for artists in developing a healthy understanding of AI. With The Call you have trained AI models on a set of new recordings and are devising a “data trust” for the output, co-owned by the choristers. You seem to be going back to AI first principles.
Mat Dryhurst: I think that’s a good way of putting it. When most people discuss an AI model, the assumption is one of typing into an interface that you don’t control, run by a company that trained it on [a data set] that we don’t know about. We were always interested in the full pipeline [realising that] the data you put in changes the output.
So you can use Midjourney and make a cool-looking picture and that could absolutely be art. But for people who are sceptical, it’s really important to elaborate on the fact that you can take ownership of the process and make models do what you want. Somewhere along that continuum, there’s so much room to assert authorship over what’s happening.
How did you find, and connect with, the choir groups?
HH: This was in huge part down to Serpentine. They have relationships, and our fabulous producer, Ruth Waters, made personal connections with the choir leaders. It was about trying to build trust: around this unusual data trust itself, but also with music that would be slightly unusual for some of the groups.
Where did you make the recordings on tour?
HH: We decided to focus on community halls, so as to have similar-shaped rooms for each recording. Of course, each reverb is unique. And sometimes we landed in churches as well, which had their own reverb. We felt like it would make people the most comfortable [to go] where they would play anyway, and capture the sound in their rehearsal spaces. The tour was wild because I brought our son, then one and a half years old, and my mom along with me. So it was quite a motley crew of recordists. It was fun.
You developed your songbook from the Sacred Harp collection of a cappella hymns, for a mix of professional and community choirs. Were you trying to capture the full phonetic range of sung English?
HH: Exactly. We also wanted to cover the full pitch range of each group and to build in some improvisation. I wrote into the songbook [the idea of] “please take these hymns and interpret them in the aesthetic of your group”. We wanted to capture the sound of each group. We worked with a team in Lille called Algomus—they research AI music notation—and we wrote a program based on a subset of the Sacred Harp canon. It can write infinite hymns. The songbook could be a thousand hymns; it could be two thousand. We only had time to record so many. So we brought these hymns into our universe and then sent them out into the world to be interpreted.
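A toy illustration of that idea, assuming nothing about Algomus’s actual methods: even a first-order Markov chain learned from a handful of hymn phrases can emit an effectively unbounded songbook. The corpus and all names below are invented for the sketch.

```python
# Hypothetical sketch of "a program that can write infinite hymns":
# a first-order Markov chain over scale degrees, learned from a tiny
# toy corpus. Algomus's actual notation research is far more
# sophisticated; this only shows how a small learned rule set can
# yield endless variations.
import random
from collections import defaultdict

# Toy corpus: phrases as sequences of scale degrees (1 = tonic).
corpus = [
    [1, 3, 5, 5, 6, 5, 3, 1],
    [5, 6, 5, 3, 2, 1, 2, 1],
    [1, 2, 3, 5, 3, 2, 1, 1],
]

# Learn transition counts between consecutive degrees.
transitions = defaultdict(list)
for phrase in corpus:
    for a, b in zip(phrase, phrase[1:]):
        transitions[a].append(b)

def generate_hymn_phrase(length=8, start=1):
    """Sample a new phrase by walking the learned transitions."""
    phrase = [start]
    while len(phrase) < length:
        phrase.append(random.choice(transitions[phrase[-1]]))
    return phrase

# Every call produces a fresh phrase; the "songbook" is unbounded.
for _ in range(3):
    print(generate_hymn_phrase())
```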
You captured the choirs in multiple channels with ambisonic mics, lavalier mics and a four-channel surround array. How does a visitor experience the resulting sound at Serpentine?
HH: We wanted people to experience what it’s like to stand in the middle of a choir and listen. That moment when you’re standing in the middle conducting and you have voices coming at you from all around, very much like [Thomas Tallis’s 16th-century polyphonic 40-voice masterpiece] Spem in Alium. It’s this overwhelming feeling of beauty and the marvel of human co-ordination. If you go into the South Powder Room… you’ll be able to hear the sopranos to the right of you, the basses to the left. You can hear each voice.
MD: Three separate AI models were trained for the show. There’s the model that produced the songbook with Algomus. There’s another model that we worked on with Ircam [the sound research centre in Paris] where you can [interact vocally with] the data that we recorded. And there’s a third model that produces emergent new compositions from the data set. It’s like the positive-sum model. So you’ll hear songs that were invented from the joint contributions of everybody to the data set—including Holly.
HH: We put our whole archive in there. We added our [existing] archive to what we recorded [for The Call] to make a super-archive.
The Berlin architectural practice sub collaborated with you on the installation for The Call and the design of “training objects” for visitor interaction with the AI model. It sounds fun.
MD: The “call” itself is put out by a large instrument [with the sound of a computer’s cooling fan] that we produced with a collaborator of ours, Andrew Roberts. The instrument sets the call and the key for everything else, acoustically. In the second room, we have this very beautiful object, which we’ve been calling the wheel, but people will refer to it as a chandelier, that tells you how to contribute. Then there’s the third room, almost like an oratory, where you’re using your voice to navigate the latent space of this strange singing model.
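As a loose, hypothetical sketch of what “using your voice to navigate the latent space” might mean in code (the Ircam-built model certainly works differently), one can reduce the voice to a few features and use them as coordinates into a precomputed latent map:

```python
# Hypothetical sketch of voice-driven latent-space navigation:
# reduce an incoming voice to a couple of features (here, pitch and
# loudness), treat them as coordinates, and retrieve the nearest
# point in a precomputed latent map that a singing model could
# decode. Purely illustrative; not the exhibition's actual system.
import numpy as np

rng = np.random.default_rng(0)

# Pretend latent map: 100 (pitch, loudness) keys, each paired with
# a 16-dimensional latent vector.
keys = rng.uniform(0.0, 1.0, size=(100, 2))
latents = rng.normal(size=(100, 16))

def navigate(pitch: float, loudness: float) -> np.ndarray:
    """Map normalised voice features to the nearest latent vector."""
    query = np.array([pitch, loudness])
    nearest = np.argmin(np.linalg.norm(keys - query, axis=1))
    return latents[nearest]

# A rising, quietening voice traces a path through the latent space.
for t in np.linspace(0.0, 1.0, 5):
    z = navigate(pitch=t, loudness=1.0 - t)
    print(z[:4])  # a real model would decode z into choir sound
```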
You’ve spoken of making each stage of the process a work of art, including the training data, or recordings. How do you make an art of training your AI with the recordings?
MD: For a work like this, you have to go through a laborious process of tagging the recordings—for example [with the names of choirs from] Bristol, Salford or Penarth—so that you can “invoke” them when the AI model is trained. This is a creative act: the way in which you categorise data to go into a model. It’s like mind mapping—how you want to organise the data for future referral.
If you get some of that wrong, whether it’s the tagging, the quality of the recording, or in training the AI, you can “overfit” the model, so that it can only produce a bad version of [a recording you apply it to]. So it’s this negotiation or sweet spot of training a model that is flexible enough to come up with its own stuff. And that’s where the art is.
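A minimal sketch of the tagging and overfitting trade-off Dryhurst describes; the schema, field names and files are invented for illustration, not the project’s real metadata.

```python
# Hypothetical sketch of the tagging stage: each recording carries
# structured metadata that can later be "invoked" as a conditioning
# tag, and a held-out split guards against overfitting. All values
# here are illustrative.
from dataclasses import dataclass
import random

@dataclass
class Recording:
    path: str    # audio file on disk
    choir: str   # e.g. "Bristol", "Salford", "Penarth"
    room: str    # e.g. "community hall", "church"
    voices: str  # e.g. "SATB", "sopranos only"

    def caption(self) -> str:
        # The caption is the creative act: how you name and group
        # the data determines what the trained model can be asked for.
        return f"choir:{self.choir} room:{self.room} voices:{self.voices}"

recordings = [
    Recording("take_001.wav", "Bristol", "community hall", "SATB"),
    Recording("take_002.wav", "Salford", "church", "sopranos only"),
    Recording("take_003.wav", "Penarth", "community hall", "SATB"),
]

# Hold some data out of training: a model that merely memorises its
# training takes (overfits) will score poorly on the held-out set.
random.shuffle(recordings)
split = max(1, int(0.8 * len(recordings)))
train, held_out = recordings[:split], recordings[split:]

for r in train:
    print("train:", r.caption())
for r in held_out:
    print("validate:", r.caption())
```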
The name of the project: is it a call inviting a musical and collaborative response; or a community call to action to build a better AI world?
MD: It can be read both ways. We’ve been doing a lot of call-and-response music, not only with these recordings but generally. There’s also something about a higher calling—contributing to something greater than oneself. But we’ve also been discussing a great deal the idea of a “public AI”—of produc[ing] beautiful data for the benefit of everybody.
Biographies
Holly Herndon
Born: Johnson City, Tennessee, 1980.
Education: MFA, Mills College, Oakland; PhD, Stanford University Center for Computer Research in Music and Acoustics.
Teaching: Fellow, Berlin Artistic Research Programme 2024-25.
Mat Dryhurst
Born: Solihull, Warwickshire, 1984.
Education: Media arts degree, Royal Holloway, University of London, 2000-03.
Teaching: Clive Davis Institute of Recorded Music, New York University (NYU); European Graduate School; formerly director of programming, Gray Area, San Francisco.
Key collaborative works
Research: Sonic Movement, a study of the sound of electric vehicles (2013, for Semcon); Antikythera programme on planetary-scale computation, Berggruen Institute; Moviment, Centre Pompidou, Paris, 2023.
Music: Chorus (2014), Platform (2015), Proto (2019), touring to Barbican, London, Kunstverein, Hamburg, and Volksbühne, Berlin; Expanding Intimacy, Solomon R. Guggenheim Museum, New York, 2014; supporting act, Radiohead tour, 2016; I’M HERE 17.12.2022 5:44 (2023).
Audiovisual art: the NFT series Infinite Images (2021-22); Classified (2021); Play from Memory—part of Sound Machines (2024, with MoMA and Feral File).
AI models and tools: Holly+ (2021); xhairymutantx (2024), Whitney Biennial; The Call, Serpentine, London, 2024.
Organisations: Co-founded Spawning—and a website, Have I Been Trained?—to build a consent layer for AI, 2022.
- Holly Herndon and Mat Dryhurst: The Call, Serpentine North, London, until 2 February 2025. Design by sub. Presented in collaboration with 1OF1.