It sounds like a science-fiction fantasy: researchers are using artificial intelligence to produce novels and short stories. But are they any good? Hephzibah Anderson finds out.
Artificial intelligence has long fascinated sci-fi and fantasy writers. Arthur C Clarke, William Gibson, Iain M Banks – they’ve all mined it in their fiction, but now such futuristic visions are fast becoming fact. From his new post as Google’s director of engineering, AI developer Ray Kurzweil has predicted that by 2029, computers will be able to outsmart even the cleverest human. Stephen Hawking and Elon Musk, the entrepreneur behind PayPal and Tesla, clearly think he’s onto something: at the start of this year, both were among the signatories of an open letter calling for responsible development of AI in light of threats posed by the so-called intelligence explosion.
Novelists, at least, have nothing to fear. Or so you might think. In George Orwell’s 1984, the ‘proles’ read books generated by a machine, but a machine is hardly going to be able to replace Margaret Atwood. After all, we turn to literature in part to deepen our understanding of the human condition, and its magic derives as much from the writer’s own lived experience – emotional, sensory or otherwise – as from their creativity. Even if a string of zeroes and ones evolves to understand what it means to taste a childhood food in later life, or to feel the first splash of spring sunshine as a long winter loosens its grip, that algorithm won’t truly be able to know such experiences. For all these reasons and more, a robot could never rival a flesh-and-blood novelist. Could it?
Yes, says futurologist Professor Kevin Warwick. And not only that, it could do so imminently. (Sceptics should keep in mind that Warwick correctly predicted the advent of autonomous fighting planes – we just happen to call them drones.) Already, software systems can joke and flirt. They can compose music (5,000 pieces in a morning, in the case of compositional program Emmy), create visual art, and have been writing poetry, of sorts, since 1983, when an experimental book called The Policeman’s Beard is Half Constructed was written by a program called Racter. Machines can also, in fact, pen entire novels – they’re just unlikely to be something you’ll want to read anytime soon. A variation on Anna Karenina told in the style of Haruki Murakami sounds intriguing enough, but dip into TrueLove, the novel Alexander Prokopovich’s algorithm ‘wrote’ in 2008, and you’ll find prose like this: ‘Kitty couldn’t fall asleep for a long time. Her nerves were strained as two tight strings.’
There’s a case to be made that technology is making us lazier, correcting our spelling for us and alerting us to stylistic niggles like word repetition; at the same time, though, it’s learning from us. Take the What If Machine. Fondly known as WHIM, it’s the creation of an international group of experts in machine learning, web mining, and the generation of narrative, metaphor and humour, spearheaded by Simon Colton of Goldsmiths, University of London. It works by generating scenarios in various creative palettes, in styles ranging from Walt Disney to Franz Kafka. A human must choose the genre and then select a few other details from a list before the machine spits out a gnomic creative prompt. Currently, it’s a little hit and miss. In the Kafkaesque category, for instance, things often seem a little too – well, Kafkaesque. “What if there was a dancer who woke up in [sic] a floor as a cat but could still swim?” WHIM suggests when I try it out. What indeed?
But the machine is learning, and it’s using human appreciation as its guide. The Goldsmiths group’s favourite what-if so far asks: ‘What if there was an old dog who couldn’t run anymore, so decided to ride a horse instead?’
“It sounds like a short story, with a fun and very unexpected twist. If a child came up with this idea, we would probably be pleased with him/her. Hopefully, as the What-If Machine gets better, it will become easier to project the word ‘imaginative’ onto what the software does”, said Colton via an email written with research associate Dr Maria Teresa Llano Rodriguez (and no algorithmic participation, they say).
One of the biggest challenges, they go on, is teaching the machine to understand what’s interesting from a fictional point of view – differentiating, say, between ‘What if there was a chair with five legs?’ and ‘What if there was a little dog who learned how to speak?’
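The prompts WHIM produces – a character, an unexpected twist, a retained ability – follow a recognisable shape. As a purely illustrative sketch (the real What If Machine relies on machine learning and web mining, not simple templates, and every word list below is invented for demonstration), the pattern might look something like this:

```python
import random

# Invented word lists, loosely echoing the article's examples.
# The real WHIM mines the web for its material.
CHARACTERS = ["dancer", "old dog", "policeman", "little dog"]
TWISTS = ["woke up as a cat", "couldn't run anymore", "forgot how to speak"]
ABILITIES = ["could still swim", "decided to ride a horse instead",
             "could still sing"]

def what_if(rng=random):
    """Combine a character, a twist and a retained ability
    into a WHIM-style creative prompt."""
    return (f"What if there was a {rng.choice(CHARACTERS)} who "
            f"{rng.choice(TWISTS)} but {rng.choice(ABILITIES)}?")

print(what_if())
```

The hard part, as Colton's team note, is not producing such sentences but learning which combinations a human reader would find imaginative rather than merely random.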
Darius Kazemi began programming on his graphing calculator as a schoolboy. A professional video game developer for most of his twenties, he’s spent the past few years developing quirky, philosophical software such as You Must Be, a bot that generates pick-up lines. “Boy, you must be a solution because you are the state of being dissolved”, reads one. Or how about “Girl, you must be a china because you are high-quality porcelain or ceramic ware, originally made in China?”
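Judging by its output, the bot’s trick is to pick a noun, look up its dictionary definition, and drop both into a stock template. The following is a hypothetical reconstruction of that pattern, not Kazemi’s actual code; the two-entry glossary is assembled from the article’s own examples, whereas the real bot presumably draws on a full dictionary:

```python
# Tiny invented glossary; the real bot would use a complete dictionary.
GLOSSARY = {
    "solution": "the state of being dissolved",
    "china": "high-quality porcelain or ceramic ware, originally made in China",
}

def pickup_line(noun, glossary=GLOSSARY):
    """Fill the 'you must be a X because you are <definition>' template."""
    return f"Boy, you must be a {noun} because you are {glossary[noun]}."

print(pickup_line("solution"))
# → Boy, you must be a solution because you are the state of being dissolved.
```

The comedy, of course, comes from the literal-mindedness of the lookup – exactly the quality that makes the lines feel machine-made.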
The Booker for a bot?
They seem great examples of the extent to which machines just don’t get it, but Kazemi, a Thomas Pynchon fan, is also the originator of National Novel Generation Month or NaNoGenMo – the techy riposte to National Novel Writing Month in the US. He suggested it via Twitter in 2013 and it promptly took off, its only rules being that entrants must share a novel of more than 50,000 words along with the code that generated it.
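Those rules set the bar deliberately low: nothing says the 50,000 words have to be good, or even varied. A minimal sketch of a technically valid entry (the repeated word here is my own placeholder, not a real submission) might be no more than:

```python
def generate_novel(word="meow", target=50_000):
    """Return a 'novel' that is just `target` repetitions of `word` --
    enough to satisfy NaNoGenMo's sole length requirement."""
    return " ".join([word] * target)

novel = generate_novel()
print(len(novel.split()), "words")  # → 50000 words
```

Real entries, as Kazemi notes below, tend to involve far more human craft than this.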
The NaNoGenMo works require varying degrees of human input, though Kazemi admits that the most successful ones involve a considerable amount. One of his favourites is The Seeker by thricedotted, which cannily casts the narrator as a robot trawling WikiHow to learn to become more human. “There was a lot of human input, sourcing from WikiHow which is full of human text,” he tells me via Skype from Boston. He also points out that the writers of the algorithms bring their own human experiences to bear on their coding, adding a necessarily human element.
So does he believe a machine could write a truly great novel – one that transcends its own novelty? “I think bots and the meaning of what a good novel is will converge”, he says. “I don’t think we’ll have something that looks like Moby Dick but I do think we’re already writing code that is generating really excellent conceptual novels. And I’m actually excited about novels that are co-authored between humans and code. That would be really cool.” So far, he’s yet to be approached by any professional novelists, just kids wanting the code to write their homework.
For now, the real challenge for developers is length. “Once you hit that 3,000-word barrier, it starts to get very difficult to sustain people’s attention”, Kazemi says, and Professor Warwick agrees. Technology would seem to be levelling the playing field by shortening our attention spans, but plenty of obstacles still separate bots from the Booker. They’re not good with characters vanishing then reappearing 78 pages later, for instance, and without regurgitating from a database, they can’t convincingly integrate sensory detail. Yet they will get there, Warwick insists – if not winning over book prize judges then certainly fooling them. “If it hasn’t been done within about ten years, then I would be very surprised”, he adds.
To date, our ideas about AI have been shaped largely by fiction writers. The very word ‘robot’ was first introduced in Karel Čapek’s 1920 play, Rossum’s Universal Robots. Whether or not algorithmic authors are shooting up the bestseller lists by 2020, human writers have an ever more critical role to play in navigating the implications of a brave new world that’s finally catching up with the literary imagination. The answers that computer scientists are coming up with raise ethical, philosophical and legal questions – not to mention security concerns – that we’ve barely begun to fathom. Now surely there’s a novel in that?