MIT Technology Review
The Algorithm
Artificial intelligence, demystified
Multipronged attack
04.12.19

Hello Algorithm readers,

The Algorithm will be off on Tuesday next week and resume regular programming Friday. In the meantime, we’ve been nominated for The Webby Awards, the highest honor in digital media, and would love your help to win the People’s Voice award. Vote here and spread the word on social media. It takes only 30 seconds. And don’t forget to click the email confirmation to count your vote.


On Wednesday, US lawmakers introduced a new bill that represents one of the first major efforts to regulate AI in the country. It also likely won't be the last.

It hints at a dramatic shift in Washington’s stance toward one of this century’s most powerful technologies. Only a few years ago, policymakers had little inclination to regulate AI. Now the consequences of not doing so have grown increasingly tangible, spurring a small contingent of Congress members to advance a broader strategy to rein the technology in.

Though the US is not alone in its new endeavor—the UK, France, Australia, and others have all recently drafted or passed similar legislation to hold tech companies accountable for their algorithms—the country has a unique opportunity to shape AI’s global impact as the home of Silicon Valley. “An issue in Europe is that we're not front runners on the development of AI,” says Bendert Zevenbergen, a former technology policy advisor in the European Parliament and now a researcher at Princeton University. “We're kind of recipients of AI technology in many ways. We’re definitely the second tier. The first tier is the US and China.”

The new bill, called the Algorithmic Accountability Act, would require big companies to audit their machine-learning systems for bias and discrimination and to take corrective action in a timely manner if such issues were identified. It would also require those companies to audit all processes beyond machine learning involving sensitive data—including personally identifiable, biometric, and genetic information—for privacy and security risks. Should it pass, the bill would place regulatory power in the hands of the US Federal Trade Commission, the agency in charge of consumer protections and antitrust regulation.
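The bill doesn’t spell out what such an audit would look like in practice. One common starting point, though, is a disparate-impact check along the lines of the “four-fifths rule” used in US employment law. Here’s a minimal sketch in Python; the data and group names are entirely hypothetical:

```python
# Minimal sketch of one check a bias audit might include: the
# "four-fifths rule" disparate-impact ratio. All data is hypothetical.

def selection_rate(outcomes):
    """Fraction of applicants in a group with a positive outcome (1 = offer)."""
    return sum(outcomes) / len(outcomes)

outcomes_by_group = {
    "group_a": [1, 0, 1, 1, 0, 1, 1, 0],  # hypothetical hiring outcomes
    "group_b": [0, 0, 1, 0, 0, 1, 0, 0],
}

rates = {g: selection_rate(o) for g, o in outcomes_by_group.items()}
ratio = min(rates.values()) / max(rates.values())

print(f"Selection rates: {rates}")
print(f"Disparate-impact ratio: {ratio:.2f}")

# A ratio below 0.8 is the traditional red flag for adverse impact:
# the kind of finding that would trigger the bill's corrective action.
if ratio < 0.8:
    print("Potential disparate impact detected.")
```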

The draft legislation is the first product of many months of discussion between legislators, researchers, and other experts to protect consumers from the negative impacts of AI, says Mutale Nkonde, a researcher at the Data & Society institute who was involved in the process. It comes in response to several high-profile revelations in the past year that have shown the far-reaching damage algorithmic bias can have in many contexts. These include Amazon’s internal hiring tool that penalized female candidates; commercial face analysis and recognition platforms that are much less accurate for darker-skinned women than lighter-skinned men; and most recently, Facebook’s ad recommendation algorithm that likely perpetuates employment and housing discrimination regardless of the advertiser’s specified target audience.

The bill has already been praised by members of the AI ethics and research community as an important and thoughtful step toward protecting people from such unintended disparate impact. “Great first step,” wrote Andrew Selbst, a technology and legal scholar at Data & Society, on Twitter. “Would require documentation, assessment, and attempts to address foreseen impacts. That’s new, exciting & incredibly necessary.”

It also won’t be the only step. The proposal is part of a larger strategy to bring regulatory oversight to any AI processes and products in the future, says Nkonde, by using key issues like algorithmic bias as the mechanisms for regulation. There will likely soon be another bill addressing the spread of disinformation, including deepfakes, as a threat to national security, she says. Another bill, introduced on Tuesday, would ban the manipulative design practices tech giants use to coerce consumers into giving up their data. “It's a multipronged attack,” Nkonde says.

Each bill is purposely expansive in nature to encompass different AI products and data processes across a variety of domains. One of the challenges that Washington has grappled with in regulating these technologies is their migratory nature—the ability for face recognition, for example, to be used for drastically different purposes across industries, such as law enforcement, automotive, and even retail. “From a regulatory standpoint, our products are industry specific,” Nkonde says. “The regulators who look at cars are not the same regulators who look at public-sector contracting, who are not the same regulators who look at appliances.”

Congress is trying to be thoughtful about how to rework the traditional regulatory framework to accommodate this new reality. But it will be tricky to do so without forcing a one-size-fits-all solution across different contexts. “Because face recognition is used for so many different things, it’s going to be hard to say, ‘these are the rules for face recognition,’” says Zevenbergen.

Nkonde foresees this regulatory movement eventually culminating in the formation of a new office or agency specifically focused on advanced technologies. There will, however, be major obstacles along the way. While the other issues have garnered bipartisan support, the algorithmic accountability bill is sponsored by three Democrats, which will hinder its passage through a Republican-controlled Senate and past President Trump. In addition, currently only a handful of Congress members have a deep enough technical grasp of data and machine-learning processes to approach regulation in an appropriately nuanced manner. “These ideas and proposals are kind of niche right now,” Nkonde says. “You have these three or four members who understand them.”

But she remains optimistic. Part of the strategy moving forward includes educating more members about the issues and bringing them on board. “As you educate them on what these bills include and as the bills get cosponsors, they will move more and more into the center until regulating the tech industry is a no-brainer,” she says.

Deeper

For more relevant reading, try:

TR archives

My dispatch from EmTech Digital on how new chip architectures will drive the next AI explosion: "Naveen Rao, the corporate vice president and general manager of the AI Products Group at Intel, likened the importance of the AI hardware evolution to the role that evolution played in biology. Rats and humans, he said, are divergent in evolution by a time scale of a few hundred million years. Despite vastly improved capabilities, however, humans have the same fundamental computing units as their rodent counterparts. The same principle holds true when it comes to chip designs." Read more here.

Artificial intelligence and other digital technologies can restore economic prosperity.

Our newest newsletter is all about looking for ways to move forward. To learn more, sign up now.


Research


Learning to use tools played a crucial role in the evolution of human intelligence. It may prove vital to the emergence of smarter, more capable robots, too.

New research out of Google Brain and UC Berkeley shows that robots can figure out at least the rudiments of tool use. The researchers used an off-the-shelf robot arm, connected to a camera and controlled by a large neural network, to perform their experiments. Through a combination of trial and error and observing people’s demonstrations, the robot worked out how to make use of simple implements that it had never seen before, including a dustpan, broom, and duster, to move other objects around. Notably, it also used those tools in unconventional ways to achieve its task, suggesting a level of improvisation.

While that may not seem like a big feat, it requires the robot to build a complex model of the physical world to know that moving one item here can help move other items over there. The work hints at how robots might someday learn to perform sophisticated manipulations and solve abstract problems for themselves. Read more here.
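The underlying recipe is worth sketching: the robot learns a model that predicts how the scene changes when it acts, then plans by “imagining” many candidate action sequences and executing the one whose predicted outcome best matches the goal. Below is a toy, one-dimensional stand-in for that planning loop. The dynamics function and all the numbers here are hypothetical; the real system learns its model from camera images and human demonstrations:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(state, action):
    # Stand-in for a learned dynamics model: here, pushing an object
    # simply shifts its position by the action amount.
    return state + action

goal, state = 3.0, 0.0          # hypothetical object position and target
horizon, n_candidates = 4, 256  # plan 4 steps ahead, try 256 random plans

# Random-shooting planning: sample candidate action sequences, roll each
# out in imagination with the model, and keep the most promising one.
candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon))

def imagined_error(actions):
    s = state
    for a in actions:
        s = predict(s, a)
    return abs(s - goal)

best = min(candidates, key=imagined_error)
for a in best:                  # execute the chosen plan for real
    state = predict(state, a)

print(f"Object position after executing the plan: {state:.2f} (goal {goal})")
```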

Bits and bytes


Amazon workers are listening to some of your conversations with Alexa
The company employs thousands of people to transcribe and annotate a sample of recordings in an effort to improve the software. (TR)

Microsoft worked with a Chinese military university on AI research that could be used for censorship and surveillance
Given the Chinese government’s use of such technologies, the partnership calls into question whether such sensitive collaborations should continue. (FT)

China is offering free healthcare to rural elders in exchange for their data
That data is then used to train AI healthcare products for suggesting diagnoses and treatments. (WIRED)

Police departments are using bots to break up sex trafficking rings
Undercover bots are trained to text like sex workers and catch men responding to illegal solicitations. (NYT)

AI is turning the insurance industry into a surveillance economy
Insurers are using data from apps, social media, and even Fitbits to adjust your premiums. (NYT)

The algorithms behind text prediction
An interactive explainer of how a basic natural language processing model can work (see the bigram sketch below). (Pudding)
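As a companion to that explainer, here’s about the simplest text predictor there is: a bigram model, which predicts the next word purely from counts of which word followed which. The toy corpus is hypothetical:

```python
from collections import Counter, defaultdict

# A bigram model: predict the next word from counts of which word
# followed which. The tiny corpus below is purely illustrative.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" (seen twice, vs "mat" once)
```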

Quotable

A black box can and should be used when it produces the best results.

Elizabeth A. Holm, professor of materials science and engineering at Carnegie Mellon University, on why we shouldn’t discard black-box AI systems so readily

Karen Hao
Hello! You made it to the bottom. Now that you're here, fancy sending us some feedback? You can also follow me for more AI content and whimsy at @_KarenHao.
Was this forwarded to you, and you’d like to see more?
You received this newsletter because you subscribed with the email address:
edit preferences   |   unsubscribe   |   follow us     
Facebook      Twitter      Instagram
MIT Technology Review
One Main Street
Cambridge, MA 02142