Reconstructing lost artwork just got a lot more interesting now that artificial intelligence researchers have shown how neural networks can serve as tools for studying art.
Researchers have used a neural network to reconstruct an image Picasso had painted over during his Blue Period. “Reconstruct,” of course, is an open question: the AI produced a plausible version, and the discussion starts from there.
It’s a case of discovering something under something else. Art observers were not surprised there was something under the surface; they had long suspected as much. The Art Institute of Chicago holds the Blue Period (1903-1904) painting The Old Guitarist, and art historians had pointed out “a ghostly woman’s face faintly visible beneath the paint.” MIT Technology Review‘s “Emerging Technology from the arXiv” said that in 1998, conservators decided to try to learn more and photographed the painting using X-rays and infrared light.
The researchers were not surprised that he had painted over something else because “Artists often paint over earlier works, particularly during periods of penury when canvas is in short supply.”
The results were, well, sketchy: the infrared and X-ray images showed only faint outlines, revealing neither color nor style.
The X-ray examination, said David Conrad in I Programmer, had delivered only an idea of the lost painting’s geometry, but no clear idea of what the complete work would have looked like.
Fast-forward to the technique now being explored: a machine vision method called neural style transfer, developed in 2015 by Leon Gatys and colleagues at the University of Tübingen in Germany. Writing in Medium, Stepan Ulyanin explained the idea behind neural style transfer: “Gatys et al. base their approach on the unique ability of the convolutional networks to be able to extract features of different scales in different layers of the network.”
MIT Technology Review went on about layers: “Neural networks consist of layers that analyze an image at different scales. The first layer might recognize broad features like edges, the next layer sees how these edges form simple shapes like circles, the next layer recognizes patterns of shapes, such as two circles close together, and yet another layer might label these pairs of circles as eyes.”
Gatys and colleagues’ key discovery, said the article, “was that the ability to distinguish style was entirely separate from the ability to see faces or other objects. In fact, Gatys and co were able to separate this ability and use it in reverse. They fed a picture into the neural network, which then superimposed the style onto the image.”
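The separation of style from content that Gatys and colleagues discovered comes down to two different ways of reading a network’s feature maps: content is the raw activations themselves, while style is the correlations between feature channels, captured in a Gram matrix. The sketch below illustrates those two losses in plain NumPy; it is a simplified illustration, not the authors’ code. In the real method the feature maps come from a pretrained convolutional network such as VGG-19, the losses are summed over several layers, and the generated image is refined by gradient descent. The function names and the normalization constants here are illustrative choices.

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature channels.
    features: array of shape (channels, height, width) from one conv layer."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    # Channel-by-channel inner products, normalized by map size (a
    # simplification of the scaling used in the original paper).
    return f @ f.T / (c * h * w)

def style_loss(generated_feats, style_feats):
    """Mean squared difference between Gram matrices: style mismatch."""
    return np.mean((gram_matrix(generated_feats) - gram_matrix(style_feats)) ** 2)

def content_loss(generated_feats, content_feats):
    """Mean squared difference between raw activations: content mismatch."""
    return np.mean((generated_feats - content_feats) ** 2)

def total_loss(generated, content, style, alpha=1.0, beta=1e3):
    """Weighted objective: alpha/beta trade content fidelity against style."""
    return alpha * content_loss(generated, content) + beta * style_loss(generated, style)
```

Because the Gram matrix discards where in the image each feature fired and keeps only which features co-occur, minimizing the style loss imposes an artist’s texture and palette without copying the layout, while the content loss pins down the layout; this is what lets the technique “superimpose the style onto the image.”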
Now, Anthony Bourached and George Cann at University College London and Oxia Palus, an AI art collective, have used neural style transfer to deliver their AI recreation and have authored a paper about it. Their paper, submitted to arXiv earlier this month, is fittingly titled “Raiders of the Lost Art.”
Bourached and Cann took a manually edited version of the X-ray images of the woman beneath The Old Guitarist and passed it through a neural style transfer network, trained to convert images into the style of another artwork from Picasso’s Blue Period. They said they presented “a novel method of reconstructing lost artwork, by applying neural style transfer to x-radiographs of artwork with secondary interior artwork beneath a primary exterior, so as to reconstruct lost artwork.”
(David Conrad in I Programmer: “It is worth noting the X-ray images were manually edited to provide a good starting point, so some human subjectivity was employed early on.”)
Verdict? It is impossible to confirm Picasso painted the image this exact way. All in all, David Conrad in I Programmer took a reasonably cautious view. “Can we recover lost paintings using a little AI? The answer seems to be yes, but it really all depends what you mean by ‘recover’.” It “approximates what the artist might have painted over. It seems to work, but there are few samples to judge and, of course, the whole thing is subjective.”
Bourached and Cann addressed why their work matters: “Our method of combining original but hidden artwork, subjective human input, and neural style transfer helps to broaden an insight into an artist’s creative process,” they said. “Furthermore, it creates a human-AI collaboration which cultivates empathy with the creative potential of AI and its harmonious use as an artistic tool.”
Collaboration? The common assumption is that AI and art, AI and fiction, AI and music occupy separate realms, and that one cannot replace the other. Wrong, say some researchers, who prefer to think of AI as a pathway for insight.
“We believe a lot of trepidation surrounding machine learning is that it replaces people. We argue that the use of machine learning as an artistic tool can ultimately broaden creative insight and widen the landscape of inspirational ingenuity by human-AI collaboration. We believe that this concept is generalisable to many domains and that it implies that the jobs of tomorrow have the opportunity to be better and more fulfilling. We believe that AI art may pioneer this positive change of mentality.”