MIT Technology Review
The Algorithm
Artificial intelligence, demystified
Speak of the devil
06.14.19
Hello Algorithm readers,

This is our second week testing out a new weekly format. Your comments and thoughts are welcome at algorithm@technologyreview.com. Today, we’re looking at the US House hearing on deepfakes, a chip that runs on light, and detecting kissing scenes in Hollywood movies. You can share this issue here, and view our informal archive here.

[Illustration: a photo of President Trump broken in two.]

On Thursday, the US House of Representatives held its first dedicated hearing on deepfakes, the class of synthetic media generated by AI. As if on cue, two high-profile reports of deepfakes on social media surfaced in the news. The first was of a forged video of Mark Zuckerberg, among other famous figures, created by artists as part of an exhibition to raise awareness about data privacy. The second was a report from the AP about how a spy likely used an AI-generated face on a fake LinkedIn profile to infiltrate the Washington political scene.

Deepfakes are arguably not yet mainstream, but they are already here. The technology has advanced at a rapid pace, and the amount of data required to fake a video has dropped dramatically. “Many of the ways that people would consider using deepfakes—to attack journalists, to imply corruption by politicians, to manipulate evidence—are clearly evolutions of existing problems,” says Sam Gregory, the program director of the human rights nonprofit Witness, “so we should expect people to try on the latest ways to do those effectively.”

They really don’t have to be good to be damagingly deceptive, either. In fact, faked videos don’t have to be deepfakes at all. The recent doctored video of Nancy Pelosi, which was merely slowed down to make her appear drunk, is an example of a “cheapfake” that can also get out of hand. Preparing for the era of deepfakes, therefore, is also about addressing the fake news and misinformation already in circulation.
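
To see just how low the technical bar is: slowing a clip to roughly 75% speed, as was reportedly done to the Pelosi video, takes a few lines of code and no machine learning at all. Here is a minimal, video-only sketch using OpenCV; the file names are placeholders, and audio handling is omitted:

```python
import cv2

SLOWDOWN = 0.75  # roughly the slowdown reported for the Pelosi clip

# Read the original clip and write the identical frames back out
# at 75% of the original frame rate. No editing skill required.
cap = cv2.VideoCapture("original.mp4")  # placeholder input file
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("slowed.mp4", fourcc, fps * SLOWDOWN, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)  # same frames, slower playback clock

cap.release()
out.release()
```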

During the hearing, the House members and experts present discussed the current state of the technology, what regulators might do, and possible methods of retaliation against foreign governments should they use deepfakes to threaten national security or disrupt elections. The focus was largely on the upcoming 2020 elections, but the discussion also touched upon the impact of deepfakes on journalists, particularly female journalists, and other vulnerable populations online. “My overall impression was that it was—with a couple of exceptions—genuinely thoughtful,” says Jack Clark, OpenAI’s policy director, who was among the experts who testified. “The members there were asking what I felt were quite reasonable and detailed questions.”

So where do we go from here? A Witness report and a bill on deepfakes introduced by Representative Yvette Clarke, released in parallel, offered similar recommendations for the path forward: companies and researchers who produce tools for deepfakes must also invest in countermeasures; social media and search companies should invest in and integrate manipulation-detection features directly into their platforms; and regulators should consider not just politicians but also vulnerable populations and international communities.

Clark has another: the government should develop ways of measuring the state of the technology by engaging directly with the scientific literature. It would help them pre-empt the issues much earlier the next time around, he says. “I do think we could’ve had this conversation two years ago.” Read more detailed recommendations here.


Send questions, thoughts, and concerns to algorithm@technologyreview.com.

More news

Bill Gates is backing a chip startup that uses light. While conventional semiconductors use electrons to carry out the demanding mathematical calculations that power AI models, Luminous Computing is using light instead. It uses lasers of different colors to beam light through tiny structures on its chip, known as waveguides, which can outstrip the data-carrying capabilities of conventional electronics. It also requires far less power: Mitchell Nahmias, a cofounder of Luminous and its chief technology officer, says its current prototype is three orders of magnitude more energy efficient than other state-of-the-art AI chips. The company recently raised $9 million of seed funding from prominent investors, including Gates.

Many industries are trying to pack an increasing amount of AI into their machines, which has driven a new wave of innovation in energy-efficient semiconductors. Data-processing limitations in widely used electrical chips like central processing units can cause lags and delays—annoying if you’re waiting for some machine-learning results for a research paper, but far more serious if you’re relying on an AI algorithm to guide a car down a busy street. As a result, Luminous faces stiff competition from other startups like Lightelligence and Lightmatter and semiconductor behemoths like Intel. Read more here.


Construction firms are predicting accidents before they happen. A construction site is a dangerous place to work, with a fatal accident rate five times higher than that of any other industry. Now Suffolk, a Boston-based construction giant, is working on a system to save lives, and money, by detecting accident-prone situations. The system makes use of a deep-learning algorithm trained on construction-site images and accident records. It can then be put to work monitoring a new construction site, flagging situations that seem likely to lead to an accident, such as a worker not wearing gloves or working too close to a dangerous piece of machinery.
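
Suffolk hasn’t published its system, but the underlying pattern, fine-tuning a pretrained image classifier on photos labeled by safety outcome, is a standard one. Here is a hypothetical sketch in PyTorch; the folder layout, label names, and hyperparameters are all assumptions for illustration, not details from Suffolk:

```python
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical dataset: site photos sorted into "safe/" and "risky/"
# subfolders (e.g., a worker without gloves, or too close to machinery).
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_data = datasets.ImageFolder("site_photos/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone, freeze it, and train
# only a new final layer that scores the two site-safety classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new head, trainable

optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

In deployment, the same model would run over live site footage and raise an alert whenever the “risky” score crosses a chosen threshold.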

While the project is primarily designed to improve safety for workers, it is also an example of a much wider trend: using AI to monitor, quantify, and optimize work life. Increasingly, companies are finding ways to track the work that people do and are using algorithms to optimize their performance. Read more here.


IBM is making a comeback in AI research. IBM may not be the sexiest tech giant, compared with Google or Apple or the latest cutting-edge startup. But it’s been around since 1911, so it must be doing something right. Its secret is its research division, with 3,000 researchers distributed across 12 locations, which the company relies on to stay on top of trends in emerging technology. For decades now, the company has engaged in an annual process to create and adapt business units in light of what’s on the horizon. The company is now striking up new partnerships with academia.

One such collaboration is the MIT-IBM Watson AI Lab, which funds projects jointly conducted by researchers from both institutions. It focuses on four guiding pillars: core AI algorithms, the physics of AI, the application of AI to industries, and prosperity enabled by AI. At EmTech Next, our event on the future of work, Sophie Vandebroek, IBM’s VP of emerging-technology partnerships, shared with me her strategy for long-term innovation. Read more here.

EmTech MIT is where technology, business, and culture converge, and where you gain access to the most innovative people and companies in the world.

Held each fall on the MIT campus in Cambridge, MA, EmTech MIT offers a carefully curated perspective on the most significant developments of the year, with a focus on understanding their potential economic as well as societal impact. Purchase your ticket today!


Research


[Image: two people stealing a kiss in a crowd.]

Kiss and tell. While object recognition in video has advanced rapidly, scene detection, or knowing what’s actually happening on screen, has lagged behind. But being able to analyze and recognize actions in footage could prove useful for applications like video editing. So Amir Ziai, a Stanford student at the time of the research and now a senior data scientist at Netflix, took it upon himself to advance the state of the art, specifically in detecting Hollywood kissing scenes. The study may seem light-hearted, even silly, but it has important implications.

Ziai selected a subset of 100 movies and labeled their kissing and non-kissing scenes, each between 10 and 20 seconds long. He then extracted image frames and audio clips for every second of each scene and used them to train a machine-learning algorithm. The resulting model was able to identify which seconds depicted kissing and group them into scenes with a high level of accuracy.
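
The classify-then-group step is straightforward to picture: score every second of footage independently, then merge runs of consecutive positive seconds into scenes. Here is a simplified sketch of that grouping logic; the scores, threshold, and 10-second minimum are assumptions for illustration, not details from the study:

```python
def group_into_scenes(per_second_scores, threshold=0.5, min_len=10):
    """Merge consecutive above-threshold seconds into (start, end) scenes.

    per_second_scores holds one classifier score per second of footage,
    e.g., the output of an image and audio model on each extracted second.
    """
    scenes, start = [], None
    for t, score in enumerate(per_second_scores):
        if score >= threshold and start is None:
            start = t  # a candidate scene opens
        elif score < threshold and start is not None:
            if t - start >= min_len:  # keep only scenes long enough
                scenes.append((start, t))
            start = None
    if start is not None and len(per_second_scores) - start >= min_len:
        scenes.append((start, len(per_second_scores)))
    return scenes

# Toy run: 30 seconds of scores containing one 12-second kissing span.
scores = [0.1] * 10 + [0.9] * 12 + [0.1] * 8
print(group_into_scenes(scores))  # [(10, 22)]
```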

The study shows how quickly the means of analyzing footage for specific, even intimate, actions have advanced. Couple that with surveillance footage, and the implications quickly turn Orwellian. In fact, in a new report, the ACLU sounded the alarm on a future in which camera owners would be able to rapidly identify unusual behavior or seek out embarrassing moments. Like deepfakes, it’s yet another example of a situation where technologists should think about the consequences of their work. Share the story here.

Bits and bytes


Virtual spaces could teach AI about its physical surroundings
Facebook released a new simulation environment that could help the field move toward embodied AI. (TR)

Art historians are using machine learning to crack attribution puzzles
They’re using it to pick out forgeries, identify artists, and map out a hidden web of artist collaborations. (Nature)

Research mavericks want to build a better brain for industrial robots
Some big names in AI and robotics are teaming up to develop a robot operating system. (TR)

The next big privacy hurdle is teaching AI to forget
GDPR includes the “right to be forgotten,” but what does that mean when the data has already been used to train an algorithm? (WIRED)

Facebook has promised to leave up a deepfake video of Mark Zuckerberg
It was created as an art project and uploaded to Instagram. (TR)

Taxing robots may actually be a sensible idea
We tax labor, and increasingly automation is replacing human labor. (TR)

Image recognition is biased against lower-income countries
Popular off-the-shelf systems are much worse at identifying items from Somalia and Burkina Faso than from the US. (The Verge)
+ Google has a plan to change AI’s culturally biased world view (TR)

AI can speak with Bill Gates’s voice
Researchers have developed a speech synthesizer capable of copying anybody’s voice. (TR)

Quotable

“Not only may fake videos be passed off as real, but real information can be passed off as fake. This is the so-called liar’s dividend, in which people with a propensity to deceive are given the benefit of an environment in which it’s increasingly difficult for the public to determine what is true.”

—US Representative Adam Schiff, chairman of the House Intelligence Committee, in his opening remarks at the deepfakes hearing

Karen Hao
Hello! You made it to the bottom. Now that you're here, fancy sending us some feedback? You can also follow me for more AI content and whimsy at @_KarenHao, and share this issue of the newsletter here.