The New Master of AI-created ‘painting’

“The Butcher’s Son” (detail) by Mario Klingemann was awarded the Lumen Prize in 2018. (Image credit: Screenshot detail from lumenprize.com)

Mario Klingemann paints with neural nets. His work isn’t painting, exactly, but it can look a lot like what we think of as painting. When he recently won the prestigious Lumen Prize for “The Butcher’s Son,” an image of a naked seated figure with a distorted and featureless face, judges described the work as “Francis Bacon as reimagined by AI.” But instead of putting paint on canvas to create the disturbing image, Klingemann captured it from inside the multidimensional computational space of a neural network, a practice he refers to as “neurography.”

Artists have been working with artificial intelligence for decades. In the 1970s, the abstract painter Harold Cohen created a computer program called AARON that was able to make playful “freehand” drawings that were shown at major museums around the world. Until recently, AI art was “generative” — the machine generated works based strictly on the rules and parameters programmed into the system. But now, with the emergence of machine learning and neural networks trained on vast data sets of imagery — everything from cave paintings to Old Masters to medical drawings to landscape photography — machines can, in some sense, generate their own imagery.

Mario Klingemann is currently an artist-in-residence at Google Arts and Culture. (Image credit: makingWeb_2.16)

Klingemann has been fascinated with computers and visuals since the ’90s, when he tinkered with Photoshop and motion graphics. Self-taught both as an artist and a coder, he has let curiosity guide him, continuing to experiment as new technologies come online, buoyed by the open source movement, which makes code accessible to all. He’s currently an artist in residence at Google Arts and Culture.

To create “The Butcher’s Son,” Klingemann used a chain of generative adversarial networks, or GANs. A GAN consists of two neural networks, a generator and a discriminator, that compete with each other. The generator tries to generate an image, and the discriminator decides, based on its training data, whether the output is real or fake. A GAN can be used to try to perfectly replicate, say, a photo, but “you can also manipulate what it learns so that it produces visuals following other criteria — in Klingemann’s case reflecting his artistic gist,” as Digital Arts Weekly put it.
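The generator–discriminator tug-of-war can be sketched end to end in a few lines. What follows is a toy one-dimensional version, not Klingemann’s image pipeline: the generator is a single affine map, the discriminator a logistic regression, and the target distribution, learning rates, and step counts are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to imitate: a 1-D Gaussian.
def sample_real(n, mean=4.0, std=1.25):
    return rng.normal(mean, std, n)

g = {"a": 0.1, "b": 0.0}  # generator: affine map from noise z ~ N(0, 1)
d = {"w": 0.1, "c": 0.0}  # discriminator: logistic regression, outputs P(real)
lr, batch = 0.02, 64

for step in range(4000):
    z = rng.normal(0.0, 1.0, batch)
    fake = g["a"] * z + g["b"]
    real = sample_real(batch)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(d["w"] * real + d["c"])
    d_fake = sigmoid(d["w"] * fake + d["c"])
    d["w"] -= lr * np.mean(-(1.0 - d_real) * real + d_fake * fake)
    d["c"] -= lr * np.mean(-(1.0 - d_real) + d_fake)

    # Generator update: push D(fake) toward 1 (non-saturating GAN loss).
    d_fake = sigmoid(d["w"] * fake + d["c"])
    g["a"] -= lr * np.mean(-(1.0 - d_fake) * d["w"] * z)
    g["b"] -= lr * np.mean(-(1.0 - d_fake) * d["w"])

samples = g["a"] * rng.normal(0.0, 1.0, 10000) + g["b"]
print("generated mean:", float(np.mean(samples)))  # ideally drifts toward 4.0
```

Each side’s update uses the other’s current state, which is what makes the training adversarial; in image GANs the same loop runs with deep convolutional networks in place of these two-parameter models.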

Klingemann wanted to make portraits of naked bodies, so he used a GAN that extracts stick figures of human poses from a training set of hundreds of thousands of photographs. He used pornography in the training set because “it was an abundant never-ending source of these types of images,” he told All Turtles in an interview. The generator has limited information and must try to “expand that information back into something that makes an image” based on what it has seen before, Klingemann explained. “It’s at the point when the model has to figure out how to best solve the problem that it creates new types of aesthetics which are kind of in-between but often very interesting and different.”

The strange images that his stick figure GAN produced intrigued Klingemann. “I like the look, it’s painterly and adds new content. It makes up new information,” he explained in a talk about his work. He then ran them through another GAN to increase the resolution and add texture and other details. The model produces many thousands of variations, which is where Klingemann’s abilities as a human discriminator come in. Out of all the possible images, he finds “the few islands of interest.” One of them became “The Butcher’s Son.” As a creator, he can steer the model in a certain direction by choosing the training data and setting the hyper-parameters. But he can’t fully control what the system will produce. “It’s really about finding a balance between control and a happy accident.”

The deep dream of AI art

When Google’s DeepDream burst on the scene in 2015, Klingemann was thrilled by the possibilities it offered for using neural networks for artistic purposes — and people everywhere were mesmerized by its trippy aesthetic. By running a neural net that had been trained to identify, say, dogs, or human facial features, backward, DeepDream would start to find the features it had been trained on in seemingly unrelated imagery — eyes in clouds or puppy ears in palm trees. But as DeepDream was popularized through open source and various apps, making it easy to do a psychedelic version of your holiday card, the style rapidly became a cliché — then “everybody hated it,” Klingemann said.
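The “backward” trick at DeepDream’s heart is gradient ascent on the input image itself: instead of adjusting the network’s weights, you adjust the pixels so that some feature detector fires harder. The real system climbs activations inside a trained Inception network; this toy numpy sketch substitutes a single hand-written stripe filter, so the kernel, image size, and step count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# A hand-made 3x3 "feature detector" standing in for one learned filter
# of a trained network; it responds to horizontal stripes.
kernel = np.array([[-1.0, -1.0, -1.0],
                   [ 2.0,  2.0,  2.0],
                   [-1.0, -1.0, -1.0]])

def activations(img, k):
    """Valid cross-correlation of img with 3x3 kernel k, then ReLU."""
    h, w = img.shape[0] - 2, img.shape[1] - 2
    act = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            act[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return np.maximum(act, 0.0)

def objective(img, k):
    """How strongly the filter fires, summed over the whole image."""
    return float(np.sum(activations(img, k)))

def input_gradient(img, k):
    """Gradient of the objective w.r.t. the input pixels themselves."""
    act = activations(img, k)
    grad = np.zeros_like(img)
    for i in range(act.shape[0]):
        for j in range(act.shape[1]):
            if act[i, j] > 0:  # ReLU passes gradient only where active
                grad[i:i + 3, j:j + 3] += k
    return grad

img = rng.normal(0.0, 0.1, (16, 16))  # start from faint noise
before = objective(img, kernel)
for _ in range(50):                   # gradient ASCENT on the input
    img += 0.01 * input_gradient(img, kernel)
after = objective(img, kernel)
```

After the loop the image contains stripes the filter “wants” to see — the same mechanism by which DeepDream conjures eyes in clouds, just with a learned detector instead of a hand-made one.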

Before-and-after treatment of images by Google’s DeepDream. (Image credit: Wikipedia)

Another example of how AI art can easily veer into kitschiness is the recent sale at Christie’s of a GAN-generated print, “Portrait of Edmond Belamy,” for almost half a million dollars. Christie’s hyped the piece, which looks like someone took a damp rag to an Old Masters painting of an aristocratic gentleman, as “not the product of the human mind.” But to produce the work, Obvious, a Paris-based collective of young AI researchers, used an open source Old Masters GAN created by Robbie Barrat, then packaged the output in a gilt frame with a mathematical formula for a signature. The portrait was blah, according to Klingemann and other AI artists who were critical of the Christie’s spectacle, because Obvious used an out-of-the-box GAN from several years ago: the results lacked originality. Also, Christie’s sold the work with a breathless narrative implying that machines are all-powerful and even sentient, when in reality neural nets remain a tool of human creativity, if a spookily powerful one.

Portrait of Edmond Belamy, 2018, created by GAN (Generative Adversarial Network). Sold for $432,500 on 25 October at Christie’s in New York. (Image credit: Christie’s © Obvious)

Klingemann tries to avoid these traps by always experimenting, tweaking the code, developing new training sets, and finding his own techniques. He describes himself as a designer of systems that “are able to keep me interested and surprised.”

An important part of his process is to be continuously developing new collections of training data, which involves painstaking categorizing and tagging. His collections encompass visuals like the Old Masters, 20th-century art, and all kinds of portrait photography, but it also includes electrical microscopy, animal fur, plastic toys, hair, skin, metals, minerals, organic materials such as bark, and photos of decay. “They’re like my paints,” he said. “I am constantly adding new ones and combining them in new ways.”

What would Max Ernst think?

Klingemann’s hero is Max Ernst, the surrealist artist who invented “automatic” techniques to bring an element of randomness into his art. With frottage, for example, Ernst placed paper over a rough surface, such as a piece of wood or bark, and rubbed it with a pencil or chalk until an image appeared. The patterns formed unique landscapes that he would respond to and incorporate into the final piece.

Klingemann finds a lot of parallels between Ernst and his own machine learning-based artistic practice. Like the rubbings, “machines give me starting points, starting patterns, that I bring meaning to,” he said. “The machine doesn’t know how the world works and juxtaposes elements that do not belong together, which is what surrealism does, too.” As our brains try to make sense of what we’re seeing, we enter “this uncanny dreamlike state.”

In his series “Neural Decay,” Klingemann uses a GAN he’s trained on images of decay, like rusted metal or rotting wood. The organic textures and patterns that emerged in the pieces look like digital versions of frottage, as if Ernst were lurking somewhere between the pixels.

Neural Decay by Mario Klingemann, 2017. From a series of portraits transformed and generated using a sequence of three custom-trained generative adversarial networks. (Image credit: Mario Klingemann/Tumblr)

Klingemann isn’t worried about machines replacing artists, except for the bad ones. Art of the sort you find at IKEA to match your sofa could definitely be made by machines, he laughed. For Klingemann, what AI really adds is new ideas — it’s about “augmenting the human imagination.”  

Still, machines are turning out to be very good at things we thought were exclusively human abilities. So we do need to remain alert. “It’s a competition,” Klingemann said. “We just have to keep upping our game so we continue to remain interesting.”