MIT Technology Review
The Algorithm
Artificial intelligence, demystified
You didn’t force a bot to do anything
Hello Algorithm readers,

This week, writer and comedian Keaton Patti tweeted out the script of a new Hallmark channel Christmas movie, purportedly written by a bot after being fed 1,000 hours of Hallmark Christmas movies. The short two-page script is absolutely hilarious, with lines like “I refill globes better than Jesus Claus” and “They are all giftwrapped in eggnog.” To skip to the punchline: it was also not written by AI.

A screenshot of Keaton Patti's tweet that says "I forced a bot to watch over 1,000 hours of Hallmark Christmas movies and then asked it to write a Hallmark Christmas movie of its own."

This has become Patti’s shtick—to write scripts that spoof a neural network’s inability to form coherent sentences. And it’s a good one; many of his tweets go viral overnight. But they also often spark widespread confusion about whether the texts are truly AI-generated.

The trick to knowing that they’re not is that they actually have some form of continuity. Despite all the non sequiturs within each sentence, the scripts always retain the same characters and same themes throughout the story. In the latest one, this paragraph in particular is a dead giveaway:

Business Man has flashback to when he was Business Boy. A Christmas tree explodes his family on purpose. He now hates trees and Christmas and explosions. He exits the flashback.

As weird as it is, every sentence builds on the one before. The first enters a flashback; the last exits it. The middle two revolve around Christmas trees and explosions. Even just the continuity from Business Man to Business Boy is beyond a neural network’s current capabilities.

As we’ve discussed before, machine-learning algorithms are really good at using statistics to find and apply patterns in data. But that’s about it. Let’s think about that in the context of constructing sentences. If you indeed fed 1,000 hours’ worth of Christmas movie scripts into a neural net, it would identify which words typically appear next to one another. It might notice, for example, that the word “a” is often followed by “woman,” “cookie,” or “tree.” It would then start to generate sentences purely based on that information—just like if you tried to compose an entire email using only predictive text. The result would be non sequiturs not only within sentences but also between them, plus run-on sentences, switches between singular and plural, and a whole lot of confusion over parts of speech.
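To make the “predictive text” idea concrete, here is a minimal sketch of that word-pair approach in plain Python. The toy corpus and the function names are invented for illustration (real systems train on far more data and use neural networks rather than a lookup table), but the core mechanic is the same: record which words follow which, then chain random picks together.

```python
import random
from collections import defaultdict

# A toy corpus standing in for 1,000 hours of Hallmark scripts
# (sentences invented for illustration).
corpus = (
    "a woman inherits a christmas tree farm. "
    "a businessman hates christmas. "
    "a woman bakes a christmas cookie. "
    "a businessman loves a woman."
)

# Record which word follows which: the only "pattern" this model learns.
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(seed, length=10, rng=random):
    """Walk the word-pair table, picking each next word at random."""
    out = [seed]
    for _ in range(length - 1):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

random.seed(0)
print(generate("a"))
```

Because each word is chosen only by looking at the one before it, the output has no memory of characters, themes, or even the start of its own sentence, which is exactly why real generated text drifts into gibberish.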

Here are some examples from my own tinkering this week of trying to generate Christmas movie synopses:

  • santa angel telling gangster broker the giver of to hustler happiness
  • a orphan tomato plans overcome accident humanity attempt into a possibly skating during christmas
  • bored young businesswoman in her christmas a time christmas inherited in new on daughter wooed and holiday his his

Utter garbage, right? And these are some of the best ones. Granted, I didn’t use a ton of training data, but you get the gist: if there’s one thing to conclude from reading these sentences, it’s that language generation involves more than just finding and applying patterns.

This is why, compared with other subfields of AI research, getting a computer to speak in its own words has seen so little progress. Anytime you talk to a chatbot or a digital assistant, don’t be fooled: its responses are scripted.

The truth is tough, sorry not sorry. I’d still watch those movies.


If you’d like more hands-on experience with language generation, try training a neural network of your own! Here are two interactive environments already set up for you, one using the library textgenrnn, the other using char-rnn. (I used the former for my experiments.)

You can open each of them with Google Colaboratory and read through the instructions to start playing along. The code is broken up into cells that you’ll need to run in order. Press Shift + Enter when your cursor is in a cell to run it. At some point within each setup, it’ll prompt you to upload some training data. Put together a text file of words, sentences, or passages that you’re trying to mimic, and start training the model to see the results. Send your best creations to
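If you’d rather run the same workflow locally instead of in a notebook, it boils down to a few lines of Python with textgenrnn. This is a rough sketch, not the notebook’s exact code: the file name synopses.txt and the epoch count are placeholders you’d swap for your own training data and settings.

```python
# Minimal local version of the notebook workflow described above.
# Assumes `pip install textgenrnn` and a text file of training data.
from textgenrnn import textgenrnn

model = textgenrnn()

# Train on your own corpus (one passage per line works well).
# 'synopses.txt' is a placeholder for your training file.
model.train_from_file('synopses.txt', num_epochs=10)

# Print a few samples from the trained model.
model.generate(3)
```

Training even a small model for a handful of epochs is enough to produce the kind of gloriously broken synopses quoted above.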

More from TR

Will Knight, senior AI editor, on China’s bid to be a leader in AI chip manufacturing: “While China manufactures most of the world’s electronic gadgets, it has failed, time and again, to master the production of these tiny, impossibly intricate silicon structures. Its dependence on foreign integrated circuits could potentially cripple its AI ambitions.” Read more here.

New and improved

Based on the feedback of Algorithm readers and a follow-up interview with researcher David Duvenaud, I updated my explainer from last week’s issue on one of NeurIPS' best paper winners. You can read it here.

Featured Business of Blockchain Speaker: Ariana Fowler

At ConsenSys, Ariana Fowler sits on the Social Impact team, consulting on client-facing projects in the public, private, and not-for-profit sectors, as well as developing use-cases and research studies focusing on the application of blockchain technology as it relates to development and humanitarian aid. Join Ariana at Business of Blockchain in May. Secure your ticket today!



GANs for days. From the creators of these faces, we now have, well, just look at them. (No, they are not photos. Yes, I am terrified too.)

Researchers at Nvidia have outdone themselves again by redesigning the architecture for generative adversarial networks. If you recall, GANs involve two dueling neural networks. The first, called the generator, has the sole task of producing artificial outputs. The researchers focused on rejigging this one, based on new techniques developed in another subset of AI research known as style transfer.

The generator starts by taking in an image of someone’s face and then tunes its different features, like pose, freckles, or hair, based on the “style” desired. The technique not only gives people finer control over the kinds of faces they want to design but also the ability to transfer features from one face to another. In a less creepy context, it could also prove useful for generating images of furniture or cars.

Bits and Bytes

Nine charts that really bring home just how fast AI is growing
It’s booming in Europe, China, and the US, but (surprise, surprise) it’s still a very male industry. (TR)

All automated hiring tools are prone to bias
The software reflects the data it’s trained on—which could be biased or unrepresentative. (TR)

Google says it won’t sell face recognition for now...
That is, until it has better policies to prevent misuses of the technology. (TR)

...but Taylor Swift is still using the tech to track her stalkers
Inside a kiosk playing rehearsal clips at her concert, a facial-recognition camera was taking people’s photos. (Quartz)

A guide to self-driving cars
How we got to this moment in the autonomous revolution—and what happens next. (WIRED)


AI and machine learning are topics of priority and deep interest among the members of this subcommittee as we build a blueprint for the battle of the future.

Elise Stefanik, Congresswoman from New York, delivering opening remarks at a subcommittee hearing on the capabilities, limitations, and threats of AI in the context of national security

Karen Hao
Hello! You made it to the bottom. Now that you're here, fancy sending us some feedback? You can also follow me for more AI content and whimsy at @_KarenHao.
Was this forwarded to you, and you’d like to see more?
You received this newsletter because you subscribed with the email address:
MIT Technology Review
One Main Street
Cambridge, MA 02142