Recently, more and more people have started to ask whether machines can tell stories about the real world. Getting computers to tell stories about a complex social reality is not a new endeavour. And with the advent of AI technology, it is no longer a futuristic scenario.
There are many projects that aim to make AI create, develop, and write storylines, scripts, and other art forms. One notable example is the movie Sunspring.
This 2016 experimental science fiction short film was entirely written by an artificial intelligence bot using neural networks. It was conceived by BAFTA-nominated filmmaker Oscar Sharp and NYU AI researcher Ross Goodwin, and produced by the film production company End Cue along with Allison Friedman and Andrew Swett. It stars Thomas Middleditch, Elisabeth Grey, and Humphrey Ker as three people, named H, H2, and C, living in a future world and eventually connecting with each other through a love triangle. The script of the film was authored by an AI bot named Benjamin, built on a type of recurrent neural network called long short-term memory (LSTM).
Despite its flaws, the technology is interesting enough to make us wonder whether screenwriters and storytellers will be replaced by machines in the near future.
Leaving this apocalyptic scenario aside, there is a more important question about this technology: how does training neural networks make it possible to tell good stories?
Mark Riedl, an associate professor at the Georgia Tech School of Interactive Computing and director of the Entertainment Intelligence Lab, has spent years researching this question.
Talking to Big Think, Riedl summarizes his work as a vision to feed a large number of stories into a computer program, have it learn how the world works – or, at least, how story worlds work – and then produce new stories that are plausible yet distinct from the ones it has read.
The Massachusetts Institute of Technology (MIT) Media Lab has undertaken a similar investigation into machine storytelling. These efforts build on Kurt Vonnegut’s idea of quantifying the emotional arcs of stories. He suggested that “Man-in-a-hole” is a primary story shape along the axis of good and ill fortune: “Somebody gets into trouble… gets out of it again. People LOVE that story!”
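Vonnegut’s idea can be made concrete with a very simple sketch: score each sentence of a story for positive and negative words, then smooth the scores into a curve of good and ill fortune. The tiny lexicon and toy story below are invented for illustration; real systems like MIT’s use far richer sentiment and semantic models.

```python
# Toy illustration of an "emotional arc" (not MIT's actual method):
# count positive vs. negative lexicon words per sentence, then take a
# moving average so the story's fortune curve becomes visible.

POSITIVE = {"love", "happy", "rescued", "safe", "triumph", "home"}
NEGATIVE = {"trouble", "lost", "danger", "trapped", "fear", "storm"}

def sentence_score(sentence):
    """Net count of positive minus negative lexicon words in one sentence."""
    words = sentence.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def emotional_arc(sentences, window=1):
    """Moving average of sentence scores: the good-ill fortune curve."""
    scores = [sentence_score(s) for s in sentences]
    arc = []
    for i in range(len(scores)):
        lo = max(0, i - window)
        hi = min(len(scores), i + window + 1)
        arc.append(sum(scores[lo:hi]) / (hi - lo))
    return arc

# An invented "man in a hole" story: fortune starts high, dips, recovers.
story = [
    "They felt love and triumph at home",
    "The crew set out happy and safe",
    "A storm left them lost and trapped in danger",
    "Fear grew as more trouble came",
    "They were rescued and felt triumph",
    "Everyone returned home happy",
]
arc = emotional_arc(story)
print(arc)  # starts positive, dips into negative, recovers at the end
```

Plotting `arc` over sentence position reproduces the “Man-in-a-hole” shape Vonnegut drew on a blackboard; the smoothing window simply trades local detail for a clearer overall curve.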
MIT’s storytelling project uses machine-based analytics to identify the qualities of engaging and marketable media. By developing models that can “read” emotional arcs and the semantic content of narrative video, its researchers aim to map video story structure across many story types and formats.
To complement this content-based analysis, the researchers are also developing methods to analyze how emotional and semantic narratives affect viewer engagement with these stories. By tracking “referrals” of video URLs on social media networks, they hope to identify how stories of different types and genres diffuse across networks, who influences this spread, and how video story distribution might be optimized. Given this two-pronged strategy, the hope is to develop a robust story learning machine that uniquely maps the relationship between story structure and engagement across networks.
We still do not know whether machines will one day take over the storytelling mission, but research in the field shows great potential for human-machine collaboration in creating great stories.
Image via www.vpnsrus.com