MIT Technology Review
The Algorithm
Artificial intelligence, demystified
New Year's resolutions
01.04.2019
Hello Algorithm readers,

I’m not one for making predictions. But when asked to reflect on the upcoming year for AI, I do have some resolutions.

As much as 2018 saw major developments for the technology—impressive breakthroughs in reinforcement learning, GANs, and natural language understanding—the year was also, well, kind of a shitshow. We saw revelations about Facebook and Cambridge Analytica, and Google’s work on Project Maven, a Pentagon effort to use imaging technology to enhance drone strikes. We also saw a Tesla crash on Autopilot, killing the driver, and a self-driving Uber crash, killing a pedestrian. And it was the year we learned that tech giants are selling facial recognition technologies to law enforcement agencies for surveillance, despite studies showing high error rates in these systems for dark-skinned minorities.

(In October, AI Now, an influential nonprofit that studies the social impacts of AI, summarized all this and more in a graphic that still haunts me.)

(Image: AI Now’s timeline of 2018’s AI controversies)

At MIT Technology Review, we believe technology is not neutral. And AI has proven to be no different. But where it does differ is in its ability to operate at a speed and scale that we haven’t seen before. If AI is shoddily built and wielded in haste, the consequences will affect many human lives.

Suffice it to say, 2018 was just a taste of that—a huge wake-up call for technologists, policymakers, and the public. So we arrive at the resolution that I hope the AI community will adopt: to stop acting like AI is magic (cough, Mark Zuckerberg, cough) and take active responsibility for creating, applying, and regulating it ethically.

Fortunately, there are reasons to be optimistic. AI bias, once a little-known concept, is now a well-recognized term and top of mind for the research community, which has begun developing new algorithms for detecting and mitigating it. Some researchers have also fought harder to fix one of the root causes of the issue, a lack of diversity in the field itself: women currently occupy at most 30% of related industry jobs and fewer than 25% of related teaching roles at top universities; black and Latinx researchers have even lower representation.

Companies, held accountable by a new wave of employee activism, have begun hiring ethical AI officers, and establishing codes and processes for evaluating when to place an irresponsible project on the chopping block. In parallel, countries like Canada and France have taken a lead in guiding the global AI ethics discussion.

Finally, social activists, lawyers, and academics are engaging with AI at deeper levels, no longer relying on technologists alone to plot out its future. Together, they are helping to educate the public and policymakers and to elevate the quality of the regulatory debate.

Here, too, is my own resolution: to continue diligently explaining the scope and limitations of the technology, to be more fastidious about highlighting instances of its misuse, and to spend more time covering and debating its ethics.

On all fronts, a foundation has been laid to start turning this ship around in 2019. Now, let’s get to work.

TR Archives

To throw a wrench into this whole AI ethics thing, my piece on the challenges of establishing a universal code: “Technology often highlights people's differing ethical standards—whether it is censoring hate speech or using risk assessment tools to improve public safety. [...] Establishing ethical standards also doesn’t necessarily change behavior.” Read more here.

New year, new Algorithm?

If you didn't get a chance to send us thoughts over the holidays, drop us a note at algorithm@technologyreview.com. We'd love to hear what you’d like to see from this newsletter in the new year: things that worked this year, things that didn’t, and things you don’t understand about AI but wish you did. Plus, we always welcome fun out-of-the-box ideas that we’ve never tried.

Ignorance and implicit bias can skew AI’s usefulness.

Will you step in? Learn from leading experts on how to harness AI the right way. Secure your ticket to EmTech Digital today.


Research

Moral impossibility. Algorithms are increasingly being used to make trade-offs between different people’s lives. The classic example in the popular imagination is whether a self-driving car should kill one pedestrian or another; in reality, that scenario is highly improbable. But there are many other systems for which these trade-offs are very real, such as autonomous weapons that must weigh the lives of friends and enemies, soldiers and civilians; or algorithms used in the criminal justice system that weigh the risks to society against the harm to individual defendants and their families.

The problem is that algorithms aren’t designed to handle these nuanced trade-offs. They are meant instead to pursue a single mathematical goal, such as maximizing the number of lives saved or minimizing economic costs. But when you start dealing with multiple objectives, some of which may be competing, and try to maximize intangibles like “freedom” and “well-being,” sometimes a satisfactory mathematical solution just doesn’t exist. In ethics, this is a well-studied problem, formalized in what are known as impossibility theorems. There are many of these theorems (I won’t list them all here), but all of them cause problems for algorithms as they currently exist.
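To make that single-objective problem concrete, here is a toy sketch of my own (hypothetical options and numbers, not anything from the research): two made-up policies scored on two competing objectives and collapsed into the one scalar goal an optimizer expects. Which option “wins” depends entirely on a trade-off weight the designer has to pick.

```python
# Toy illustration: collapsing two competing objectives into one scalar goal.
# The options and numbers here are hypothetical, purely for demonstration.

options = {
    "policy_A": {"lives_saved": 10, "economic_cost": 8.0},  # saves more lives, costs more
    "policy_B": {"lives_saved": 7, "economic_cost": 2.0},   # saves fewer lives, costs less
}

def scalar_score(option, weight):
    """The usual single-objective trick: lives saved minus weighted cost."""
    return option["lives_saved"] - weight * option["economic_cost"]

for weight in (0.2, 1.0):
    best = max(options, key=lambda name: scalar_score(options[name], weight))
    print(f"weight on cost = {weight}: optimizer picks {best}")

# weight 0.2 -> policy_A wins; weight 1.0 -> policy_B wins.
# The optimizer always returns an answer, but which one depends entirely on
# a trade-off parameter the designer chose, not on any principled resolution
# of the conflict between the objectives.
```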

So concludes Peter Eckersley, the Director of Research for the Partnership on AI, in a new paper. “We do not presently have a trustworthy framework for making decisions about the welfare or existence of people in the future,” he writes.

In response, he proposes to introduce uncertainty into our algorithms. Just as humans might not know the exact trade-off between two lives, algorithms shouldn’t either. And when they make decisions, they should communicate their uncertainty to their human counterparts. Read more on his idea here.
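As a rough sketch of what that could look like in practice (again my own toy example, not the formalism in Eckersley’s paper): rather than committing to one trade-off weight, the system samples many plausible weights, and if its recommendation keeps flipping, it reports low confidence and defers to a human.

```python
import random

# Toy sketch of decision-making under an uncertain trade-off (hypothetical
# numbers and function names; an illustration of the idea, not Eckersley's method).

options = {
    "policy_A": {"lives_saved": 10, "economic_cost": 8.0},
    "policy_B": {"lives_saved": 7, "economic_cost": 2.0},
}

def scalar_score(option, weight):
    return option["lives_saved"] - weight * option["economic_cost"]

def recommend_with_uncertainty(options, n_samples=1000, seed=0):
    """Sample plausible trade-off weights and tally which option wins each time."""
    rng = random.Random(seed)
    wins = {name: 0 for name in options}
    for _ in range(n_samples):
        weight = rng.uniform(0.1, 1.5)  # assumed range of defensible trade-offs
        best = max(options, key=lambda name: scalar_score(options[name], weight))
        wins[best] += 1
    top = max(wins, key=wins.get)
    confidence = wins[top] / n_samples
    return top, confidence

top, confidence = recommend_with_uncertainty(options)
print(f"Tentative recommendation: {top} (won {confidence:.0%} of sampled trade-offs)")
if confidence < 0.9:
    print("No stable winner across plausible trade-offs -- escalate to human judgment.")
```

The important part, in this framing, is the last step: the algorithm surfaces how unstable its answer is instead of hiding the disagreement behind a single confident-looking number.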

Bits and bytes

The Pentagon wants AI talent
It believes the technology will be vital for warfare—but has struggled to convince Silicon Valley. (WIRED)

DeepMind's AlphaZero hints at a principle of AI
It is a far less complex algorithm than its predecessor AlphaGo, and as a result, is more generalizable. Sometimes simpler is better. (New Yorker)

The mining industry is waiting for its AI revolution—but it hasn’t yet come
Machine learning could help identify where to drill, but companies are struggling to invest in innovation and recruit talent. (WSJ)

Flint used machine learning to find homes with lead pipes
But the effort was soon abandoned under political pressure to inspect every home. (The Atlantic)

A Chinese idol group made a music video with digital clones of themselves
Their avatars are designed to look, sound, and act like them—as well as stay young forever. (SCMP)

Quotable

AI is coming to warfare. [...] The agency will proceed even if it has to rely on lesser experts.

WIRED paraphrasing Chris Lynch, a former tech entrepreneur who now runs the Pentagon’s Defense Digital Service

Karen Hao
Hello! You made it to the bottom. Now that you're here, fancy sending us some feedback? You can also follow me for more AI content and whimsy at @_KarenHao.
Was this forwarded to you, and you’d like to see more?