MIT Technology Review
Sponsored by Alegion
The Algorithm
Artificial intelligence, demystified
Young innovators

Hello Algorithm readers,

Today, we’re looking at five young AI innovators, a disturbing deepfake app, and a throwback to the 2016 mannequin challenge. You can share this issue here, and view our informal archive here. Your comments and thoughts are welcome at

A portrait of Rediet Abebe

Hello, senior AI editor Will Knight here. Karen is helping to host an event in Hangzhou, China, this week, so I’m filling in for the first section. Normal service will resume next week.

This week, we thought we’d use the newsletter to highlight five remarkable young researchers working to advance artificial intelligence. These individuals are part of our Young Innovators list, compiled each year with help from an impressive group of expert judges.

The AI researchers on this year’s list are, in my opinion, especially important. They reflect the current state of the technology, and they point to where it is headed. The five innovators I’m highlighting here also show how international AI has become: they come from Ethiopia, China, India, Poland, and the United States.

Let’s take a look.

Algorithmic good. Rediet Abebe, a student at Cornell University, uses artificial intelligence to address socioeconomic inequality. As an intern at Microsoft, she devised a project that mined data across 54 African nations to identify the demographic groups most concerned about HIV/AIDS, and which treatments might appeal to them. She is among a growing number of AI researchers determined to ensure that such a powerful technology is used for more than just optimization.

Imperfect information. Noam Brown created Libratus, a poker-playing program that bested several professional players at heads-up (two-player) no-limit Texas hold’em. It was a major achievement: unlike checkers, chess, or Go, poker is a game of “imperfect information,” in which each player has only a limited view of the cards in play. The work could have important practical benefits, because many real-world situations, such as trade or financial negotiations, also involve an imperfect picture of the world.
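Libratus’s full method is far more elaborate, but the training idea at its core, counterfactual regret minimization, builds on a simple procedure called regret matching: play each action in proportion to how much you regret not having played it. A toy self-play sketch on rock-paper-scissors (illustrative only, not Libratus’s actual code):

```python
import random

ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][b]: row player's payoff for playing a against b
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Regret matching: play actions in proportion to positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # no positive regret: play uniformly

def train(iterations=50_000, seed=0):
    rng = random.Random(seed)
    regrets1 = [0.0] * ACTIONS
    regrets2 = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS  # accumulates player 1's strategy
    for _ in range(iterations):
        strat1 = strategy_from_regrets(regrets1)
        strat2 = strategy_from_regrets(regrets2)
        for a in range(ACTIONS):
            strategy_sum[a] += strat1[a]
        a1 = rng.choices(range(ACTIONS), weights=strat1)[0]
        a2 = rng.choices(range(ACTIONS), weights=strat2)[0]
        # Regret for action a: what a would have earned minus what we earned.
        for a in range(ACTIONS):
            regrets1[a] += PAYOFF[a][a2] - PAYOFF[a1][a2]
            regrets2[a] += PAYOFF[a][a1] - PAYOFF[a2][a1]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

avg = train()
```

The average strategy (not the final one) converges toward the game’s equilibrium, which for rock-paper-scissors is the uniform mix.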

Leaner learning. Song Han is helping make AI more efficient. He invented a “deep compression” technique that makes deep learning—the very powerful yet very power-hungry technique at the center of the current AI boom—a lot leaner. Several big companies use Han’s code to reduce their energy bills, and to put AI on mobile devices. And last year his startup, DeePhi Tech, was acquired by the US chipmaker, Xilinx. In his new role as an assistant professor at MIT, Han plans to make AI algorithms even more efficient and compact.
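Deep compression, as Han described it, combines pruning, trained quantization (weight sharing), and Huffman coding. A toy numpy sketch of the first two ideas on a random weight matrix — an illustration of the concept, not his implementation, and quantile binning stands in for the k-means clustering the real method uses:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64)).astype(np.float32)

# Step 1: magnitude pruning -- zero out the smallest 90% of weights.
threshold = np.quantile(np.abs(weights), 0.90)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# Step 2: weight sharing -- snap the surviving weights to 16 shared
# values, so each needs only a 4-bit index instead of a 32-bit float.
nonzero = pruned[pruned != 0]
edges = np.quantile(nonzero, np.linspace(0, 1, 17))
centroids = 0.5 * (edges[:-1] + edges[1:])
bins = np.clip(np.digitize(nonzero, edges[1:-1]), 0, 15)
quantized = pruned.copy()
quantized[quantized != 0] = centroids[bins]

sparsity = 1.0 - np.count_nonzero(pruned) / pruned.size
unique_values = np.unique(quantized[quantized != 0]).size
```

After this, roughly 90% of the weights are gone and the rest take one of at most 16 values, which is what makes the stored model dramatically smaller.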

Broken bias. Himabindu Lakkaraju is using machine learning to fight both machine and human bias, work that grows more vital as algorithms are increasingly used to determine medical treatment, grant loans, and decide who goes to jail. Lakkaraju, who was recently appointed an assistant professor at Harvard, developed a system that combines human and computer judgment and can spot instances where unconscious bias may creep into either.

Human-like robots. Wojciech Zaremba, a researcher at OpenAI, created a robot hand that taught itself how to manipulate an object in a strikingly human way. Instead of needing careful programming, the robot, called Dactyl, learned to play with a child’s block through simulated experimentation. Most robots remain amazingly dumb, so advances in AI could unlock huge untapped potential in all sorts of domains. Indeed, robot learning was chosen by Bill Gates as one of our 10 Breakthrough Technologies for 2019.

To take a look at all of our innovators, read more here. —WK

Sponsor Message


New survey: 96% of AI/ML projects experience issues with training data

Nearly 8 in 10 organizations rolling out AI and machine learning (ML) have stalled, and 96% of these companies have run into problems with data quality, the data labeling required to train AI, and building model confidence, according to a new global survey by Alegion and Dimensional Research.

Download the full survey findings here

More news

Deepfakes harm more than politicians. On Thursday, Vice reported on the existence of an app called DeepNude that “undressed” photos of women. It used algorithms known as generative adversarial networks, or GANs, to swap the women’s clothes out for highly realistic nude bodies. Shortly after the article was published, the app’s creator took the site down amid the viral backlash.

While the mainstream conversation around deepfakes has so far focused primarily on their potential to harm politicians, human rights and tech ethics experts warn that we’re overlooking their danger to vulnerable populations. The use of GANs to create nonconsensual sexual imagery of women is just one example. There will certainly be many more. Technologists who enable the rapid creation of fake media must also bear the responsibility of preventing future abuses. Read more here.

EU experts release more AI ethics guidelines. On Wednesday, a group of policy experts assembled by the EU released its second report on the ethical use of AI. The first report, released in April, outlined in broad strokes what it meant to develop “trustworthy” AI. The new one follows up with more specific recommendations, including which AI research areas to fund and how to monitor the technology’s impact.

But critics say that many of the recommendations remain vague and toothless. One, for example, urges the EU to ban the use of AI for citizen “scoring,” spurred by mounting fears over China’s nascent social credit system. The practice involves using data, such as employment or criminal history, to evaluate an individual’s societal standing and grant or withhold benefits accordingly. It is often portrayed as an Orwellian tool for political and social control, but many experts point out that the West has rough equivalents in credit scores, insurance scores, and other privately run systems for evaluation and ranking. So it’s unclear what “scoring” would even encompass.

Discover where tech, business, and culture converge.

Join us at EmTech where you can gain access to the most innovative people and companies in the world. Purchase your ticket today.



A gif of the mannequin challenge.

Mannequin challenge. Cast your mind back to the internet in 2016. Do you have hazy memories of the Mannequin Challenge? (If it happened to pass you by at the time, it involved standing as still as possible while someone moved around you, filming the pose from all angles.) The videos were merely a fun internet phenomenon then, but they have now been repurposed by Google AI to advance robotics research.

Specifically, the researchers used 2,000 of them to train a neural network to predict 3D scenes from 2D videos. It’s a useful skill to have: the ability to reconstruct the depth and arrangement of freely moving objects can help robots, like self-driving cars, maneuver in unfamiliar surroundings. The unexpected data source led to a much higher prediction accuracy than was possible with previous state-of-the-art methods, but it also calls into question the norms around consent in the AI research field. Read more here.

Bionic hand. There are approximately 540,000 upper-limb amputees in the United States, but sophisticated “myoelectric” prosthetics, controlled by muscle contractions, are still very expensive. Such devices cost between $25,000 and $75,000 (not including maintenance and repair), and they can be difficult to use because it is hard for software to distinguish between different muscle flexes.

In a new paper, published in the journal Science Robotics, researchers in Japan came up with a cheaper, smarter myoelectric device. The five-fingered, 3D-printed hand is controlled using a neural network trained to recognize combined signals—or, as they call them, “muscle synergies.” The team tested their setup on seven people, including one amputee, and the participants were able to perform 10 different finger motions with around 90% accuracy. The new approach might someday significantly lower the cost of bionic limbs and change the lives of amputees who rely on them. Read more here.
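The paper’s actual model and data aren’t reproduced here, but the basic idea of mapping combined muscle signals to finger motions can be sketched with synthetic features and a nearest-template classifier. Everything below — the channel count, the noise level, the synthetic “synergy” templates — is a hypothetical stand-in, not the researchers’ setup:

```python
import numpy as np

rng = np.random.default_rng(1)
N_MOTIONS, N_CHANNELS = 5, 8  # hypothetical: 5 finger motions, 8 EMG channels

# Synthetic "muscle synergy" templates: each motion activates a
# characteristic combination of channels (purely illustrative data).
templates = rng.uniform(0.2, 1.0, size=(N_MOTIONS, N_CHANNELS))

def sample(motion, n):
    """Noisy EMG-like feature vectors for one motion."""
    return templates[motion] + rng.normal(0, 0.05, size=(n, N_CHANNELS))

# Train: estimate one centroid per motion from labeled recordings.
train_x = np.vstack([sample(m, 40) for m in range(N_MOTIONS)])
train_y = np.repeat(np.arange(N_MOTIONS), 40)
centroids = np.array(
    [train_x[train_y == m].mean(axis=0) for m in range(N_MOTIONS)]
)

def classify(x):
    """Assign each feature vector to the nearest motion template."""
    dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

test_x = np.vstack([sample(m, 20) for m in range(N_MOTIONS)])
test_y = np.repeat(np.arange(N_MOTIONS), 20)
accuracy = (classify(test_x) == test_y).mean()
```

The hard part in practice, as the blurb notes, is that real muscle flexes overlap and drift far more than this clean synthetic data does — which is why the researchers needed a trained neural network rather than fixed templates.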

If you come across interesting AI research papers or conferences, send them my way at

Bits and bytes

G20 leaders will gather to discuss global data governance
By extension, they will discuss how data should be used in artificial intelligence systems. (Atlantic)

Apple has acquired a self-driving car company
The move is an “acqui-hire” meant to beef up Apple’s own self-driving team. (Reuters)

Chinese casinos are using machine learning to predict the biggest losers
It’s part of an attempt to save a struggling industry. (Bloomberg)

AI may not take your job, but it could become your boss
Several companies now offer or use software that monitors employee performance, and the trend is spreading. (NYT)

Deepfake detection algorithms will never be enough
At some point, they will no longer work and other methods will be needed to prevent the spread of misinformation. (Verge)

What Google thinks about when designing AI tools for the workplace
The VP of user experience shares her four guiding principles: alleviate peripheral work, enhance creative work, respect social dynamics, and course-correct. (Quartz)

Robots could take over the dangerous task of cleaning up nuclear waste
But designing ones capable of that is hard. (Economist)

This is what the future of surveillance could look like
Hundreds of companies now promise to help you track anyone using video analysis, biometric scanning, and more. (New Scientist)


“There is a question of whether deepfakes are actually just a completely different category of thing from normal false statements overall.”

—Mark Zuckerberg at the Aspen Ideas Festival, hinting at a new Facebook strategy on deepfakes

Karen Hao
Hello! You made it to the bottom. Now that you're here, fancy sending us some feedback? You can also follow me for more AI content and whimsy at @_KarenHao, and share this issue of the newsletter here.
Was this forwarded to you, and you’d like to see more?
You received this newsletter because you subscribed with the email address:
edit preferences   |   unsubscribe   |   follow us     
Facebook      Twitter      Instagram
MIT Technology Review
One Main Street
Cambridge, MA 02142