On Wednesday, US lawmakers introduced a new bill that represents one of the first major efforts to regulate AI in the country. It is also likely just the beginning.
It hints at a dramatic shift in Washington’s stance toward one of this century’s most powerful technologies. Only a few years ago, policymakers had little inclination to regulate AI. Now the consequences of not doing so have grown increasingly tangible, spurring a small contingent of Congress members to advance a broader strategy to rein the technology in.
Though the US is not alone in its new endeavor—the UK, France, Australia, and others have all recently drafted or passed similar legislation to hold tech companies accountable for their algorithms—the country has a unique opportunity to shape AI’s global impact as the home of Silicon Valley. “An issue in Europe is that we're not front runners on the development of AI,” says Bendert Zevenbergen, a former technology policy advisor in the European Parliament and now a researcher at Princeton University. “We're kind of recipients of AI technology in many ways. We’re definitely the second tier. The first tier is the US and China.”
The new bill, called the Algorithmic Accountability Act, would require big companies to audit their machine-learning systems for bias and discrimination and to take corrective action in a timely manner if such issues were identified. It would also require those companies to audit all processes beyond machine learning involving sensitive data—including personally identifiable, biometric, and genetic information—for privacy and security risks. Should it pass, the bill would place regulatory power in the hands of the US Federal Trade Commission, the agency in charge of consumer protection and antitrust regulation.
The draft legislation is the first product of many months of discussion between legislators, researchers, and other experts about how to protect consumers from the negative impacts of AI, says Mutale Nkonde, a researcher at the Data & Society institute who was involved in the process. It comes in response to several high-profile revelations in the past year that have shown the far-reaching damage algorithmic bias can have in many contexts. These include Amazon’s internal hiring tool that penalized female candidates; commercial face analysis and recognition platforms that are much less accurate for darker-skinned women than lighter-skinned men; and, most recently, Facebook’s ad recommendation algorithm that likely perpetuates employment and housing discrimination regardless of the advertiser’s specified target audience.
The bill has already been praised by members of the AI ethics and research community as an important and thoughtful step toward protecting people from such unintended disparate impact. “Great first step,” wrote Andrew Selbst, a technology and legal scholar at Data & Society, on Twitter. “Would require documentation, assessment, and attempts to address foreseen impacts. That’s new, exciting & incredibly necessary.”
It also won’t be the only step. The proposal is part of a larger strategy to bring regulatory oversight to any AI process or product in the future, says Nkonde, by using key issues like algorithmic bias as the mechanisms for regulation. There will likely soon be another bill to address the spread of disinformation, including deepfakes, as a threat to national security, she says. Another bill introduced on Tuesday would ban tech giants’ manipulative design practices that coerce consumers into giving up their data. “It's a multipronged attack,” Nkonde says.
Each bill is purposely expansive in nature to encompass different AI products and data processes across a variety of domains. One of the challenges that Washington has grappled with in regulating these technologies is their migratory nature—the ability of face recognition, for example, to be used for drastically different purposes across industries, such as law enforcement, automotive, and even retail. “From a regulatory standpoint, our products are industry specific,” Nkonde says. “The regulators who look at cars are not the same regulators who look at public sector contracting, who are not the same regulators who look at appliances.”
Congress is trying to be thoughtful about how to rework the traditional regulatory framework to accommodate this new reality. But it will be tricky to do so without forcing a one-size-fits-all solution across different contexts. “Because face recognition is used for so many different things, it’s going to be hard to say, ‘these are the rules for face recognition,’” says Zevenbergen.
Nkonde foresees this regulatory movement eventually culminating in the formation of a new office or agency specifically focused on advanced technologies. There will, however, be major obstacles along the way. While the other issues have garnered bipartisan support, the algorithmic accountability bill is sponsored by three Democrats, which will hinder its passage through a Republican-controlled Senate and past President Trump. In addition, currently only a handful of Congress members have a deep enough technical grasp of data and machine-learning processes to approach regulation in an appropriately nuanced manner. “These ideas and proposals are kind of niche right now,” Nkonde says. “You have these three or four members who understand them.”
But she remains optimistic. Part of the strategy moving forward includes educating more members about the issues and bringing them on board. “As you educate them on what these bills include and as the bills get cosponsors, they will move more and more into the center until regulating the tech industry is a no-brainer,” she says.