What a robot vigil says about our desire to relate with machines. A food delivery robot caught fire last Friday while making its rounds at UC Berkeley. As newsworthy as that is, what happened next is more interesting: an outpouring of grief from students, who called it a “hero” and a “legend” and commemorated its loss with a candlelit memorial. The whole thing is rather ridiculous, and could be written off as a hilarious episode of “things college students do to get out of studying for finals.” But there’s also something genuinely noteworthy about the readiness with which people anthropomorphized a machine. It offers a small window into how our social dynamics will change as we make room for more robots and human-mimicking AI.
In fact, our complicated relationship with robots has warranted an entire field of study, and it has discovered some pretty fascinating things. Through her research at Georgia Tech, for example, social robotics expert Ayanna Howard found that children naturally interact with humanoids like they would other humans and will work hard to please them if they “show” signs of disappointment. Adults, too, will place significant trust in robots even in high-stress situations, such as during a smoke-filled fire evacuation.
Our tendency to build relationships with non-human entities isn’t new. "We have anthropomorphized things for a very long time," Genevieve Bell, an anthropologist and vice president and senior fellow at Intel, once told me when I asked her what she thought about these kinds of relationships, "whether it was the ways in which we attributed higher level thinking to domestic animals—dogs, cats, horses—or the ways we named our cars and gave them personalities."
But our specific desire to relate with robots and AI feels different from what has come before. Unlike pets, they’re inanimate objects. Unlike cars, or really any other machine, they’re capable of engaging with us on much more intimate and powerful levels. One of the earliest long-term studies of human-robot interaction, conducted in 2010, showed that participants developed a far greater emotional attachment to a robotic weight-loss coach than to a desktop computer running the same software. Other studies have found that we are reluctant to “hurt” robots and will react to seeing one in “pain” as if it were a human being.
So, in a moment of silliness, students inadvertently struck upon some deeply relevant questions as we march deeper into this century: what it means for us to humanize robots and AI, how we should account for that when we bring them into our lives, and whether all this is okay.
The laborious process of making AI pull a funny. Inspired by research scientist Janelle Shane, author of the delightful blog AI Weirdness, senior AI editor Will Knight and I embarked on a challenge to generate funny Christmas movie synopses with a neural network. Don’t worry: a full story on our results (with illustrations!) is coming soon. In the meantime, here’s a peek behind the scenes.
As I mentioned in passing in the last Algorithm, we used a library called textgenrnn, which can generate sentences in the style of the text you train it on. I now empathize with people who say training neural networks is more of an art than a science. To coax good results out of a network, you can either change your dataset or tune the algorithm’s various settings. Both Will and I used the same dataset, a list of synopses from Wikipedia, so we focused on the latter.
Whereas Will cleverly stuck with the defaults and got some pretty decent results, I immediately began changing everything from the number of layers to the number of epochs to the temperature. The layers here refer to the complexity of the neural network: the more layers it has, the more complicated the data it can handle. The number of epochs is the number of times the network gets to look at the training data before it spits out its final result. And the temperature is like a creativity setting: the lower the temperature, the more the network favors common words from the training dataset over those that rarely appear. (Of course, I didn’t know any of this while doing the exercise. Thank you, Janelle, for explaining it to me later.)
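To make the temperature setting concrete, here’s a minimal sketch of how temperature sampling typically works under the hood. This is an illustrative toy, not textgenrnn’s actual code: it rescales a made-up next-word probability distribution, showing how a low temperature concentrates probability on common words while a high one flattens it toward rare words.

```python
import math

def apply_temperature(probs, temperature):
    """Rescale a probability distribution by a sampling temperature.

    Lower temperature sharpens the distribution toward the most
    common words; higher temperature flattens it toward rare ones.
    """
    # Move to log space, divide by the temperature, then re-normalize
    # with a softmax so the result is still a probability distribution.
    logits = [math.log(p) / temperature for p in probs]
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-word distribution: the first word is common, the last is rare.
probs = [0.7, 0.2, 0.1]

cold = apply_temperature(probs, 0.2)  # conservative: common words dominate
hot = apply_temperature(probs, 2.0)   # "creative": rare words gain mass

print(cold)  # almost all probability lands on the common word
print(hot)   # a noticeably flatter distribution
```

At a temperature of 0.2, the common word ends up with nearly all the probability mass; at 2.0, the three words are much closer to even, which is why high-temperature output reads as weirder and less predictable.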
As I blindly tweaked all these knobs, most of the results I got were flat-out terrible, with sentences starting with three nouns or ending in an article:
- arthur serial daughter meet greed to as when reunite up and low parents a paws become
- an dads aunt decides the growing to of cheer to try jingle photograph the holiday mysterious the
In other words, they were incomprehensible, not funny.
Part of the problem, Shane explained, was my small training dataset, and part of it was textgenrnn. The algorithm, she said, just isn’t that good at constructing sentences compared to alternatives. But even if I’d used better data and a better algorithm, the challenge I hit was exceedingly normal. It just takes a lot of manual labor to make a neural network spit out gibberish that humans would consider remotely humorous.
"For some data sets, I'm only showing people maybe one out of a hundred things it generates," Shane admitted. "I'm doing really well if one out of ten is actually funny and worth showing." In many instances, she continued, it takes her more time to curate the results than to train the algorithm. Lesson learned: neural networks aren’t that funny. It’s the humans that are.