MIT Technology Review
The Algorithm
Artificial intelligence, demystified
Check list
05.14.19
Hello Algorithm readers,

Five questions that cut through AI hype. Two weeks ago I was in Dubai attending Ai Everything, the United Arab Emirates' first major AI conference and one of the largest AI applications conferences in the world. The event was an impressive testament to the breadth of industries in which companies are now using machine learning. It also served as an important reminder of how the business world can obfuscate and oversell the technology’s abilities.

In response, I’d like to briefly outline the five questions I typically use to assess the quality and validity of a company’s technology:

1. What is the problem it's trying to solve? I always start with the problem statement. What does the company say it's trying to do, and is it a good fit for machine learning? Perhaps we're talking to Affectiva, which is building emotion recognition technology to accurately track and analyze people's moods. Conceptually, this is a pattern recognition problem and thus one that machine learning could tackle (see: What is machine learning?). It would also be very hard to approach any other way: the task is too complex to program as an explicit set of rules.

2. How is the company approaching that problem with machine learning? Now that we have a conceptual understanding of the problem, we want to know how the company is going to tackle it. An emotion recognition company could take many approaches to building its product: it could train a computer vision system to pattern-match on people's facial expressions, or train an audio system to pattern-match on people's tone of voice. Here, we want to figure out how the company has reframed its problem statement as a machine-learning problem, and determine what data it would need to feed its algorithms.

3. How does the company source its training data? Once we know the kind of data the company needs, we want to know how it goes about acquiring that data. Most AI applications use supervised machine learning, which requires clean, high-quality labeled data. Who is labeling the data? And if the labels are subjective, as emotions are, do they follow a scientific standard? In Affectiva's case, you would learn that the company collects audio and video data voluntarily from users, then employs trained specialists to label the data in a rigorously consistent way (one simple way to measure that consistency is sketched in the first code example after this list). Knowing the details of this part of the pipeline also helps you identify any potential sources of data collection or labeling bias (see: This is how AI bias really happens).

4. Does the company have processes for auditing its products? Now we should examine whether the company tests its products. How accurate are its algorithms? Are they audited for bias (see the second code sketch after this list)? How often does the company re-evaluate its algorithms to make sure they're still performing up to par? If its algorithms don't yet reach the desired accuracy or fairness, what plans does it have to ensure they will before deployment?

5. Should the company be using machine learning to solve this problem? This is more of a judgment call. Even if a problem can be solved with machine learning, it's important to question whether it should be. Just because you can build an emotion recognition platform that reaches at least 80% accuracy across different races and genders doesn't mean it won't be abused. Do the benefits of having this technology available outweigh the potential human rights violations of emotional surveillance? And does the company have mechanisms in place to mitigate any possible negative impacts?
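To make question 3 concrete, below is a minimal sketch of one standard consistency check for subjective labels: Cohen's kappa, which measures how often two annotators agree beyond what chance alone would predict. The annotators and labels are hypothetical; this is not Affectiva's actual pipeline.

```python
# A minimal sketch of one consistency check for subjective labels:
# Cohen's kappa, agreement between two annotators corrected for chance.
# The annotators and labels below are hypothetical examples.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: probability both annotators independently pick
    # the same class, estimated from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

annotator_1 = ["happy", "sad", "angry", "happy", "neutral"]
annotator_2 = ["happy", "sad", "happy", "happy", "neutral"]
print(f"kappa = {cohens_kappa(annotator_1, annotator_2):.2f}")  # 0.71
```

A kappa near 1 means the annotators apply the labels consistently; a value near 0 means their agreement is no better than chance, a warning sign for any model trained on those labels.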
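Question 4 can be made similarly concrete. The simplest form of a bias audit is to compute accuracy separately for each demographic subgroup of a held-out test set; a large gap between groups is a red flag. The field names and records below are hypothetical.

```python
# A minimal sketch of the simplest bias audit: per-subgroup accuracy on a
# held-out test set. The group labels, fields, and records are hypothetical.
from collections import defaultdict

def accuracy_by_group(records):
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += r["label"] == r["prediction"]
    return {g: correct[g] / total[g] for g in total}

test_set = [
    {"group": "A", "label": "happy", "prediction": "happy"},
    {"group": "A", "label": "sad", "prediction": "sad"},
    {"group": "B", "label": "happy", "prediction": "sad"},
    {"group": "B", "label": "sad", "prediction": "sad"},
]
for group, acc in accuracy_by_group(test_set).items():
    print(f"group {group}: accuracy {acc:.0%}")  # a large gap signals bias
```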

In my opinion, a company with a quality machine-learning product should check off all the boxes: it should be tackling a problem fit for machine learning, have a robust data-acquisition pipeline and auditing processes, have high-accuracy algorithms or a plan to improve them, and be grappling head-on with ethical questions. Often, companies pass the first four tests but not the last. For me, that is a major red flag: it shows the company isn't thinking holistically about how its technology can affect people's lives, and it has a high chance of pulling a Facebook further down the line. If you're an executive looking for machine-learning solutions for your firm, this should give you pause before partnering with such a vendor.


A privacy-protecting way to apply AI to healthcare. The potential for AI to transform health care is huge, but there’s a big catch. AI algorithms will need vast amounts of medical data on which to train before machine learning can deliver powerful new ways to spot and understand the cause of disease. That means imagery, genomic information, or electronic health records—all potentially very sensitive information.

Two months ago, I shared with you a machine-learning method that could learn simultaneously from patient data stored locally across multiple hospitals without ever centralizing it on a tech company's servers. Now my colleague Will Knight, our senior AI editor, has found another way in which AI could save lives without spilling secrets.
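For readers who missed that issue: the approach, usually called federated learning, has each hospital compute a model update on its own data, so only the updates, never the patient records, are pooled. Here is a minimal sketch under toy assumptions (a linear model and simulated data; no real deployment is this simple):

```python
# A minimal sketch of federated averaging under toy assumptions: a linear
# model trained by gradient descent on simulated data. Each "hospital"
# computes an update locally; only the updated weights are averaged, and
# the raw (X, y) records never leave the local site.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient-descent step of linear regression on one site's data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, hospitals):
    """Each site updates the shared weights locally; the server only averages."""
    return np.mean([local_update(weights, X, y) for X, y in hospitals], axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
hospitals = []  # three sites, each holding its own private data
for _ in range(3):
    X = rng.normal(size=(50, 2))
    hospitals.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, hospitals)
print("recovered weights:", np.round(w, 2))  # close to [2.0, -1.0]
```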

At Stanford Medical School in California, a trial is now underway that will let patients contribute their medical data to an AI system that can be trained to diagnose eye disease without ever actually accessing their personal details.

The technology, developed by Oasis Labs, a startup that spun out of the University of California, Berkeley, stores the private patient data on a secure chip. The data is kept within the Oasis cloud, where outsiders can run algorithms on it and receive the results, but the data itself never leaves the system. It's an approach that could prove attractive across many domains—for processing financial records, individuals' buying habits, or web browsing data.
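To illustrate that access pattern in code (and only the pattern: the real guarantees come from secure hardware, and this is not Oasis Labs' actual API), here is a toy sketch in which outside code can compute over guarded records but only an aggregate result is allowed out:

```python
# A toy illustration of the access pattern only: outside code computes over
# guarded records, and only an aggregate result is allowed out. The real
# guarantees come from secure hardware; this is not Oasis Labs' actual API.

class GuardedDataset:
    def __init__(self, records):
        self._records = records  # held internally; never returned directly

    def run(self, analysis_fn):
        """Run an outsider's analysis; release only a scalar aggregate."""
        result = analysis_fn(self._records)
        if not isinstance(result, (int, float)):
            raise ValueError("only scalar aggregates may leave")
        return result

enclave = GuardedDataset([
    {"age": 54, "diagnosis": "glaucoma"},
    {"age": 61, "diagnosis": "healthy"},
])
rate = enclave.run(lambda rows: sum(r["diagnosis"] != "healthy" for r in rows) / len(rows))
print(f"disease rate: {rate:.0%}")  # the caller sees 50%, never the records
```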

“The whole notion of doing computation while keeping data secret is an incredibly powerful one,” says David Evans, who specializes in machine learning and security at the University of Virginia. Read more here.


Next month, I will be on stage at EmTech Next, our annual event exploring how technology is reshaping the future of work. If you'd like to attend, standard-rate tickets are on sale until June 10. (You can also pay a slightly higher price at the door.) Here are some of the talks and panels you can expect this year:

  • MIT economist David Autor will offer a big-picture perspective on how digital technologies are enriching some people and leaving many more behind
  • I will interview Microsoft Research’s Mary Gray and Harvard Kennedy School’s Susan Winterberg about the ethics of on-demand work
  • Philippe Beaudoin, co-founder of Element AI, will offer a case study of how Canada is preparing for an autonomous future

Help us improve

We are in the process of redesigning The Algorithm and would love to hear more about your experience. Thank you to everyone who already filled out our feedback survey last week! For those who haven’t, we’d greatly appreciate 5 minutes of your time.

As usual, you can also send your thoughts and questions on this issue to algorithm@technologyreview.com.

EmTech Next will examine the technology behind global trends and their implications for the future of work.

Join us at our 2-day future of work conference, where you'll hear from some of the world’s leading experts. Purchase your ticket today.


Bits and Bytes


Machine learning is helping us to understand and combat the effects of climate change
It can assess damage from extreme weather or analyze crop genetics to produce climate-adapted variants. (NYT)

Can satellite imagery and machine learning help track polluters?
A nonprofit wants to measure air pollution from every large power plant in the world. (TR)

Middle schoolers are learning to be responsible consumers of AI
A research assistant at MIT Media Lab designed a curriculum to teach kids concepts like deep learning and algorithmic bias. (WSJ)

Google is bringing the power of deep learning to your pocket
On-device AI would bring speed, privacy, and energy efficiency improvements over current cloud-based AI. (Tech Talks)
+ Here’s another development that could lead to powerful AI on your phone (TR)

An interactive technical explainer of how to initialize a neural network
The initialization step can be critical to the model’s ultimate performance. (deeplearning.ai)

Quotable

If you want to make a device do something intelligent, you’ve got two options: you can program it or you can learn. And people certainly weren’t programmed, so we had to learn.

Geoffrey Hinton, an AI pioneer, talks to Wired about the origin and evolution of deep learning

Karen Hao
Hello! You made it to the bottom. Now that you're here, fancy sending us some feedback? You can also follow me for more AI content and whimsy at @_KarenHao and share this issue of the newsletter here.
Was this forwarded to you, and you’d like to see more?
You received this newsletter because you subscribed with the email address:
edit preferences   |   unsubscribe   |   follow us     
Facebook      Twitter      Instagram
MIT Technology Review
One Main Street
Cambridge, MA 02142