When the digital artist Daniel Ambrosi wanted to go to the next level and create images that would fully express his visceral reaction to the grandly picturesque mature landscapes created by Capability Brown in 18th-century Britain—at large estates such as Blenheim Palace in Oxfordshire, Chatsworth in Derbyshire and, most painterly of all, Stourhead in Wiltshire—he had an advantage: the help he could call on from his neighbours in Silicon Valley.
The assistance he needed, he told a panel session introducing his Capability Brown “Dreamscapes” at Robilant + Voena in London, was in harnessing the power of Google's DeepDream artificial intelligence (AI) to make a cognitive analysis of photographs he had taken of Brown's landscapes and add a final “hallucinatory” layer to them. The AI analysed the wide-angle files he submitted, each assembled from multiple shots using stitching software.
The engineering enhancement came courtesy of Joseph Smarr at the tech giant Google and Chris Lamb at Nvidia, a pioneer of accelerated computing. It enabled Ambrosi to get a specially modified version of DeepDream—open-source software that made headlines by allowing people to turn family photos psychedelic—to follow his guidance in how it dreams, in overnight processing sessions, on his 500MB digital landscapes, with some arresting results.
At a distance, Ambrosi’s landscapes—up to 16ft wide, backlit or projected on fabric—look hyperreal, almost uncannily detailed, like some haze-free take on William Holman Hunt at his most exacting. But up close the fruits of the AI’s cognitive dreaming appear, in the swirls, mottling and palette-knife patterns that DeepDream has applied in surreal musings on bark, water or grassy banks. It is, Ambrosi says, “art seen through the eyes of a machine”. One final control he applies is to tone down the AI’s wilder palettes and maintain an appropriate “earth-colour” order in these most British of landscapes.