The response to the $432,500 sale of Portrait of Edmond Belamy at Christie’s in New York in October was breathless. More than 40 times its upper estimate! A work made by artificial intelligence (AI)! The robots are coming! Humans are pointless!
Large sums of money tend to make people lose their senses. Most afflicted by this malady—let’s call it AI-itis—was the deluded schmuck who paid nearly $500,000 for a work that would top the charts for the most boring art made in 2018. The portrait of a fictional denizen of the squirearchy was made by the collective Obvious using GAN (generative adversarial network) technologies. But aside from its means of production, it’s an epically dumb work of art, realised with all the elan and visual sophistication of a teenager making their first daubs with some dried-up oil paints their dad once used at evening class.
The Christie’s website article on the portrait claims, presumably because of the smooth digitised impasto on the image’s surface, that “he looks unnervingly like one of Glenn Brown’s art-historical appropriations”. Anyone with any knowledge of Brown’s technically brilliant and conceptually rich work knows this is rubbish. But, of course, Brown’s “art-historical appropriations” regularly sell for six or seven figures at auction; the comparison would be hilarious were it not so nakedly cynical.
What’s striking about Obvious’s work is its profound conservatism. Robbie Barrat, an AI artist who wrote the code that Obvious adopted, expressed a sense of injustice at artnome.com that, amid all the “really compelling work” being made, it was “this uninspired low-res GAN generation and the marketers behind it” getting all the publicity. And on the same site, through interviews with Obvious member Hugo Caselles-Dupré and others, Jason Bailey has made it clear that the bold claims about the portrait being made autonomously aren’t even true. Christie’s hype about “a new medium” should be widely exposed for what it is: Portrait of Edmond Belamy is really just a digital print on canvas, made from an image generated with code pioneered by other artists.
Two works shown this year at the Serpentine Galleries alone reflect how AI can affect art in fascinating ways. In the spring, Ian Cheng’s BOB explored AI’s implications, both technically and philosophically. BOB, which stands for “bag of beliefs”, applied Ursula Le Guin’s “carrier-bag theory” of fiction—an argument for an anti-heroic form of literature—to the world of AI. BOB subverted the dominant narratives of AI as a submissive (often female) concierge, like Apple’s Siri and Amazon’s Alexa, or as a dystopian omnipresent spectre. Using different models, Cheng created a complex organism that took shape in front of us: an entity capable of inconsistent bodily and emotional behaviour, based on environmental data, audience reaction and its own sense of history and memory. At the Serpentine Gallery now (until 10 February) is Pierre Huyghe’s UUmwelt, in which the French artist links human brain activity to AI: in a series of films, we watch fast-moving sequences in which a machine zips through elusive images, trying to conjure what the brain is seeing or imagining.
Both Cheng’s and Huyghe’s works are absorbing, strange and deeply thought-provoking about humans’ relationship to AI—quite unlike Obvious’s lame portrait. So here’s a thought: let’s allow artists and non-profit galleries to dictate the debate about AI—and not be hoodwinked by the market.