Digital designers

Art created by robots may do some weird and wonderful things, but how does it challenge the human definition of artistic expression?

Illustration by Keely Van Order

Human beings have come a long way since cave paintings. We have a myriad of ways to express ourselves artistically, and are constantly redeveloping the tools we use to do so. From the humble paintbrush, to the camera obscura, to the printing press and beyond, we have built machines that help us design, create and replicate art. The digital boom of the 20th century expanded this like never before, with the advent of computers and new areas of art such as 3D design, digital painting and visual effects.

Our technological advances also kick-started the field of robotics: the ability to give machines instructions so they could carry out complex actions automatically. We built them to make our own lives easier. Robots could do monotonous tasks without complaint, and construct things with more precision and speed than any human could. Printers have long been making copies of text and images with ink, first with stamps and then with electronic components. Inkjet printers like the one in your nearest office work by printing dots of ink narrower than a human hair onto paper, row by row. It’s an immensely complex task with no room for error.

If we’ve already taught robots to perform menial tasks, then it’s only natural to be curious about what else they can do, and how far we can take this technology. Teaching robots to master the creative arts is just one possible avenue, and one that took off in the 1970s, when the personal computer became more affordable to programming enthusiasts and engineers. One of the first computer artists was a program called AARON, created by the British artist and programmer Harold Cohen, who passed away earlier this year. The project began in 1972, when Cohen attempted to ‘teach’ a computer program artistic rules that it could apply to create its own randomised artworks, which it then turned into physical copies via a large-scale inkjet printer. AARON’s paintings have featured in modern art galleries all over the world. Cohen was deeply intrigued by the subject of robot-created art. He said in a Stanford publication that "If what AARON is making is not art, what is it exactly, and in what ways, other than its origin, does it differ from the 'real thing?' If it is not thinking, what exactly is it doing?" To him, his creation was an exploration of an age-old philosophical question: what is art?

The exploration continued and expanded over the decades. We gave robots human tools to see if they could create what our hands could. 2016 saw the first year of Robot Art, an annual worldwide painting contest for students – and their robots. Software developers wrote scripts that told their machines how to paint brush strokes and mix colours, creating some beautiful paintings in the process. Some entrants analysed reference pictures and created their own interpretations of them. Others created more abstract work with software-driven randomisation algorithms. One program, Picassnake, took music as an input and painted unique brush strokes by analysing the frequency levels in the sound.
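To get a feel for how sound might drive a paintbrush, here is a small, purely illustrative Python sketch (not Picassnake's actual code, which isn't described here): it analyses a chunk of audio with a Fourier transform and maps the loudest frequency and the overall volume onto made-up brush-stroke parameters.

```python
# Illustrative sketch only: turn a chunk of audio into brush-stroke parameters.
# The mapping from frequency to hue and loudness to stroke length is invented.
import numpy as np

SAMPLE_RATE = 44100  # audio samples per second

def stroke_from_audio(chunk):
    """Return (hue, length) for a brush stroke, derived from one audio chunk."""
    spectrum = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / SAMPLE_RATE)
    dominant = freqs[np.argmax(spectrum)]          # loudest frequency, in Hz
    hue = (dominant % 1000) / 1000.0               # pitch -> colour hue (0 to 1)
    length = float(np.sqrt(np.mean(chunk ** 2)))   # louder sound -> longer stroke
    return hue, length

# Example: a pure 440 Hz tone produces one stroke's parameters.
t = np.linspace(0, 0.1, int(SAMPLE_RATE * 0.1), endpoint=False)
print(stroke_from_audio(np.sin(2 * np.pi * 440 * t)))
```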

The winning Einstein portrait, by TAIDA. The robotic arm was created by a team at the National Taiwan University, who programmed it to know basic painting techniques. © NTU-iCeiRA via Robotart, used with permission

The winner of this year's contest was TAIDA, a robotic arm created by a group from the National Taiwan University. They ‘taught’ their machine how to paint base layers, blend colour, and apply small brush strokes to create reproductions of pictures. Ming-Jyun Hong, the winning team’s leader, believes that creating art with machines is simply a way to bring something more to people's lives: "Although it may not be able to compete with human artists, there is still a group of people admiring this form of art. And that's what gives meaning to the existence of machine-created art."

It's exciting to think of where this could lead in the future. How long will it be before machines can hand-paint perfect replicas of the Old Masters, or pictures more photo-realistic than we can produce? But art stretches far beyond painting. Music, literature, film, interpretative dance – anywhere humans can go, the robots are following. Head to this page and you can listen to the sound of Wikipedia being edited. In real time, each change made to the user-driven website is assigned a sound based on the size of the edit, which creates a soothing, unending symphony. Other wiki-events, such as new users signing up to the site, add to the instrumental mix.
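The exact mapping the site uses isn't spelled out here, but the idea can be sketched in a few lines of Python: take the size of an edit in bytes and turn it into a pitch. In this invented version, bigger edits give deeper notes.

```python
# Toy sketch of sonifying edits: map edit size (bytes changed) to a pitch.
# The base pitch, floor and scaling rule are invented for illustration.
import math

def edit_to_frequency(bytes_changed, base_hz=880.0, floor_hz=110.0):
    """Return a tone frequency in Hz for an edit of the given size in bytes."""
    size = max(1, abs(bytes_changed))
    # Each tenfold increase in edit size drops the pitch by one octave.
    freq = base_hz / (2 ** math.log10(size))
    return max(floor_hz, freq)

for size in (5, 50, 500, 5000):   # a small tweak ... a big rewrite
    print(size, "bytes ->", round(edit_to_frequency(size), 1), "Hz")
```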

Philip M. Parker is a busy man. He's written more than 200,000 books, one with the particularly eye-catching title of 2007-2012 Outlook for Bathroom Toilet Brushes and Holders in Greater China (a bargain at £495!). 219 of his works are on the subject of wax. Each of his books takes about 20 minutes to create, as they are entirely generated by computer software that follows templates and auto-fills them with content from the internet. His poetry generator, Eve, can mimic poetic forms such as the sonnet, limerick and acrostic, plus dozens more. It's formulaic, but often difficult to distinguish from the work of a human writer.
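As a rough illustration of template-driven writing, here is a toy Python sketch. The template wording and placeholder names are invented; Parker's real software fills its templates automatically from data sources rather than from hand-typed arguments.

```python
# Illustrative only: fill a report-style template with a few values.
TEMPLATE = (
    "{start}-{end} Outlook for {product} in {region}\n\n"
    "This report estimates the latent demand for {product} across {region}, "
    "broken down year by year from {start} to {end}."
)

def generate_report(product, region, start, end):
    """Produce one 'book' by slotting values into the fixed template."""
    return TEMPLATE.format(product=product, region=region, start=start, end=end)

print(generate_report("bathroom toilet brushes and holders",
                      "Greater China", 2007, 2012))
```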

 

Roland Olbeter created The Festo Sound Machine in 2005. The machine listens to and interprets music, then uses what it hears to perform a piece live on five mechanical instruments.

 

Robots can write and play their own complex music. The Festo Sound Machine, created by Roland Olbeter in 2005, listens to a piece of music and creates another in a similar style by interpreting the pitch, duration and frequency of the notes. It can then play the result live on five instruments by mechanically plucking their strings. Visually, of course, the performance does not evoke the same emotions as seeing a full orchestra play, but it does inspire a different kind of wonder at how far we’ve come.
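The machine's internal workings aren't detailed here, so as a stand-in this sketch uses a deliberately simple technique of our own choosing, a first-order Markov chain: it learns which note tends to follow which in the input melody, then improvises a new melody with the same habits.

```python
# Stand-in sketch (not the Festo machine's method): a first-order Markov chain
# over note names learns the input melody's habits and generates a new one.
import random

def learn_transitions(melody):
    """Record, for each note, the notes that followed it in the input."""
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(melody, length=12, seed=0):
    """Generate a new melody that imitates the input's note-to-note tendencies."""
    random.seed(seed)
    table = learn_transitions(melody)
    note, out = melody[0], [melody[0]]
    for _ in range(length - 1):
        note = random.choice(table.get(note, melody))  # fall back to any note
        out.append(note)
    return out

print(generate(["C4", "E4", "G4", "E4", "C4", "G4", "E4", "C4"]))
```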

Jacob Harris, a software architect at the New York Times, created a program that scanned the newspaper's articles for phrases that fit the general rules of haiku.

There is pleasure to
    be had here, in flares of spice
            that revive and warm
Haiku from the New York Times article "Neighbors From the Far East, Getting Along Just Fine" 

The program itself has no aesthetic sense, of course – that is left to the humans. True haiku are also more complex than the well-known 5-7-5 syllable pattern: they should juxtapose images and make reference to the seasons or nature. Regardless, the NY Times program follows rules, finds patterns, and by doing so occasionally creates something that we as humans can connect with.
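The underlying idea can be sketched roughly in Python: estimate syllable counts and keep only sentences whose words split cleanly into lines of 5, 7 and 5 syllables. The syllable counter below is a crude vowel-counting heuristic of our own, not a proper pronunciation dictionary, so it will misjudge plenty of words that a real system would get right.

```python
# Rough sketch of 5-7-5 detection. The syllable counter is a crude heuristic
# and is only meant to illustrate the idea.
import re

def count_syllables(word):
    """Very rough estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1  # a usually-silent final 'e'
    return max(1, count)

def as_haiku(sentence, pattern=(5, 7, 5)):
    """Return the sentence split into 5-7-5 lines, or None if it doesn't fit."""
    words, lines = sentence.split(), []
    for target in pattern:
        line, total = [], 0
        while words and total < target:
            total += count_syllables(words[0])
            line.append(words.pop(0))
        if total != target:
            return None
        lines.append(" ".join(line))
    return lines if not words else None

# A classic haiku (in translation), written out as prose, passes the check.
print(as_haiku("An old silent pond a frog jumps into the pond splash again silence"))
```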

We create art as a creative outlet, to appreciate nature, to imitate, to provoke, to inspire. The reasons for art are as endless as the definitions we have for it. Most people would agree that art is expression: an attempt to externalise the human condition, to design without purpose. Machines, meanwhile, are entirely driven by purpose. They can do only what they are coded to do. Unlike humans, they can't think or feel.

The use of imagination is central to the human creative process. Finding abstract patterns or original ideas is something that has yet to be truly digitised. We can tell a program what humans find aesthetically pleasing, for example by feeding it a sample of highly rated photographs or paintings. The program can analyse the similarities of the pictures, and apply those rules to judge other artwork itself.
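To make that concrete, here is a toy sketch of 'learning taste from examples'. The colour-histogram features and the scoring rule are our own simplifications, a long way from what real systems do, but the shape of the idea is the same: summarise the images people liked, then measure how closely a new image matches that summary.

```python
# Toy sketch: learn a 'taste' summary from highly rated images, then score a
# new image by how similar its colour balance is. Purely illustrative.
import numpy as np

def colour_histogram(image, bins=8):
    """image: HxWx3 RGB array with values in [0, 255] -> normalised histogram."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def taste_model(rated_images):
    """Average the colour histograms of the images people rated highly."""
    return np.mean([colour_histogram(img) for img in rated_images], axis=0)

def aesthetic_score(model, image):
    """Higher score = the new image's colours resemble the rated sample."""
    return -np.sum(np.abs(model - colour_histogram(image)))

# Example with random stand-in 'images'.
rng = np.random.default_rng(0)
liked = [rng.integers(0, 256, (64, 64, 3)) for _ in range(5)]
print(aesthetic_score(taste_model(liked), rng.integers(0, 256, (64, 64, 3))))
```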

We've established that robotic art is created through imitation or randomisation. Is this any different to how humans learn? We have developed as an intelligent species primarily thanks to our ability to learn and pass on information, first verbally and then through the written word. Everything we develop builds on the successes of previous generations. Computers can do this on a much grander scale. They can collate and assemble data faster than any human brain, at a pace that is increasing all the time. But they cannot think like we can. The greatest chess program in existence doesn't have a clue how to play tic-tac-toe, and never will unless we tell it how. Nor do computers have emotions. A program that spent 17 days calculating pi to 25 billion digits didn't get bored or need a coffee break.

 
The evolved intelligence of human beings can be attributed to our ability to communicate information down through generations, both verbally and through the written word. Nicolas Vollmer/Flickr (NC-BY 2.0)

 

Art can adhere to certain rules. Musical acoustics have a large array of mathematical properties, such as the simple ratios between frequencies that produce harmonious notes. The idea that art can be distilled down to mathematical principles is not a new one. As far back as the 5th century BC, Greek sculptors used ratios to define the 'perfect' anatomical proportions of their works. Twenty centuries later, Galileo wrote in his book 'Il Saggiatore' (The Assayer), "[The universe] is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures." Maybe art is a solvable problem, and we just don’t yet have the tools to unlock it.
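Those frequency ratios can be made concrete in a few lines of Python: starting from the note A at 440 Hz, simple whole-number ratios (the values below are just-intonation approximations) give the intervals most ears hear as harmonious.

```python
# The harmonious intervals as whole-number frequency ratios, applied to A4.
BASE_HZ = 440.0  # the note A above middle C

INTERVALS = {
    "unison":         (1, 1),
    "octave":         (2, 1),
    "perfect fifth":  (3, 2),
    "perfect fourth": (4, 3),
    "major third":    (5, 4),
}

for name, (num, den) in INTERVALS.items():
    print(f"{name:15s} {num}:{den}  ->  {BASE_HZ * num / den:.1f} Hz")
```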

We could be living in the century in which true AI is achieved: a digital human brain with a near-endless capacity for information. What then? Would an intelligent machine yearn to create art in the same way humans do? We can't answer that question until we've unravelled the mystery of what it is to be human and conscious, and we're a long way from that.

Edited by Sara Nyhuis