The Quest To Create A Better Spy-Catching Algorithm

RACHEL MARTIN, HOST:

Law enforcement has spent years searching for a perfect algorithm for uncovering spies, but it has proven elusive. As part of an NPR special series on the technologies that watch us, NPR's Dina Temple-Raston takes a look at what exactly makes this so hard.

DINA TEMPLE-RASTON, BYLINE: Cathy O'Neil is a mathematician and data scientist, and she thinks we've got algorithms all wrong. She says we think about them as if they're immutable science.

CATHY O'NEIL: And it's nothing like that actually. They're much more uncertain than things like gravity and motion.

TEMPLE-RASTON: And, O'Neil says, algorithms are replacing humans in a lot of decision-making. Financial institutions deploy them to set interest rates on credit cards. Algorithms help colleges winnow down applicants. Even prisons use them when deciding who to parole.

O'NEIL: They're used so heavily they don't just predict the future; they cause the future.

TEMPLE-RASTON: The easiest way to understand algorithms is to think of them as recipes. They're instructions about past patterns of success aimed at helping a computer to predict the future. It gets trickier when algorithms have to assess things that are a little more nuanced. If you ask an algorithm to spot people who are likely to get promoted based on everything it's seen in the past, it would decide that being a white man is a good predictor of success.
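To make the recipe idea concrete, here is a minimal sketch in Python, using invented hiring data rather than any real company's model, of how a predictor built from past outcomes absorbs the pattern in those outcomes:

```python
# Minimal sketch with invented data: a "recipe" that scores candidates by how
# often people like them were promoted in the past.
from collections import Counter

# Hypothetical historical records: (group, was_promoted)
history = [
    ("white_man", True), ("white_man", True), ("white_man", True),
    ("white_man", False),
    ("everyone_else", True), ("everyone_else", False),
    ("everyone_else", False), ("everyone_else", False),
]

promoted = Counter(group for group, was_promoted in history if was_promoted)
seen = Counter(group for group, _ in history)

# The learned "predictor of success" is just the past promotion rate per group.
score = {group: promoted[group] / seen[group] for group in seen}
print(score)  # {'white_man': 0.75, 'everyone_else': 0.25}
```

Nothing in the sketch mentions merit; the group that was promoted most often in the past simply becomes the top-scoring feature.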

O'NEIL: There's the feedback loop that happens in data science that makes lucky people luckier and unlucky people unluckier.

TEMPLE-RASTON: So that's the first problem - by learning from the past, algorithms can be doomed to reinforce and repeat it. The second problem is that algorithms aren't objective. They may be math, but the equations are written by humans, so they can end up reflecting the bias of their authors. Take spying, for example - if you wanted an algorithm to look for spies based on past experience, the data you put into it would suggest that a good predictor of a spy is a white man who works for the U.S. government and speaks Russian.

O'NEIL: You sort of mark them as, you know, suspicious, but they end up not being suspicious at all, and that's a false positive problem. And false positives tend to land on certain populations more than others.
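The false positive problem O'Neil describes can be sketched with a few lines of Python and invented numbers: the same flagging rule can produce far more wrong accusations in one group than in another.

```python
# Invented example: how false positives can pile up on one population.
# Each record: (group, flagged_by_algorithm, actually_a_spy)
cases = [
    ("speaks_russian", True,  False), ("speaks_russian", True,  False),
    ("speaks_russian", True,  True),  ("speaks_russian", False, False),
    ("everyone_else",  True,  False), ("everyone_else",  False, False),
    ("everyone_else",  False, False), ("everyone_else",  False, False),
]

def false_positive_rate(group):
    # Of the innocent people in this group, what share were wrongly flagged?
    innocent = [flagged for g, flagged, spy in cases if g == group and not spy]
    return sum(innocent) / len(innocent)

print(false_positive_rate("speaks_russian"))  # ~0.67 - most innocent people flagged
print(false_positive_rate("everyone_else"))   # 0.25
```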

TEMPLE-RASTON: Another classic example - people stealing corporate secrets. They tend to come into the office early and leave late. It's easy to understand why - if they're taking things they aren't supposed to, they wouldn't want co-workers seeing them do it. Consider the case of Aldrich Ames, a CIA officer who was one of America's most damaging spies in the early 1990s. He was spying for the Soviets.

(SOUNDBITE OF ARCHIVED RECORDING)

UNIDENTIFIED REPORTER #1: What made you think you could get away with it, Mr. Ames?

TEMPLE-RASTON: For years before his arrest, no one noticed that his work patterns had changed. Of course, back then there were no algorithms that might have put together that he was drinking, had gone through a costly divorce, paid cash for his house, was driving a new car and arrived at the office early and left late. Those were things that Aldrich Ames himself admitted should have tipped off authorities.
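No such system existed at the time, but a hedged sketch of what "putting it together" might look like is below - the signals, weights and threshold are invented for illustration, not drawn from any real counterintelligence tool.

```python
# Invented rule-based scoring over the signals Ames said should have tipped
# off authorities. Weights and threshold are illustrative only.
SIGNALS = {
    "heavy_drinking": 1,
    "costly_divorce": 1,
    "paid_cash_for_house": 3,   # large unexplained spending weighs the most
    "new_car": 1,
    "arrives_early_leaves_late": 1,
}

def risk_score(officer):
    # Add up the weights of whichever signals are present for this officer.
    return sum(weight for signal, weight in SIGNALS.items() if officer.get(signal))

ames_profile = {signal: True for signal in SIGNALS}  # all five signals present
score = risk_score(ames_profile)
print(score, "-> refer for review" if score >= 4 else "-> no action")  # 7 -> refer for review
```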

(SOUNDBITE OF ARCHIVED RECORDING)

ALDRICH AMES: There is no question there was a recklessness in how I handled so many things that is striking.

TEMPLE-RASTON: Of course, the kinds of sophisticated algorithms we have now weren't around back then, which leaves the question, would one have made a difference? Some people say that algorithms - for all the hype - just aren't there yet.

YAEL EISENSTAT: What the algorithm has zero insight into is, did that person change their pattern because maybe they had a baby and now they come in at different hours, or maybe they were sick so they've been doing a series of physical therapy in the morning?

TEMPLE-RASTON: Yael Eisenstat is a former CIA analyst who's now a visiting fellow at Cornell Tech, and she studies the effect of algorithms and technology on society.

EISENSTAT: Like, there's so many actual human things that could make that abnormality in the pattern; the algorithm isn't going to know.
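Eisenstat's point is easy to see in a toy anomaly detector. The sketch below, with made-up clock-in times, flags a change in someone's arrival pattern, but nothing in the data can say whether the cause is espionage, a new baby or physical therapy.

```python
# Toy anomaly detection on made-up arrival times: the math can flag the
# change in pattern, but it has no insight into why the pattern changed.
from statistics import mean, stdev

baseline_hours = [9.0, 9.1, 8.9, 9.2, 9.0, 8.8, 9.1, 9.0]  # usual clock-in times
recent_hours = [6.5, 6.4, 6.6]                              # the last few days

mu, sigma = mean(baseline_hours), stdev(baseline_hours)
z_scores = [(hour - mu) / sigma for hour in recent_hours]

if all(abs(z) > 3 for z in z_scores):
    print("anomaly: arrival pattern has changed")
# Whether that means a spy, a newborn at home or a morning physical-therapy
# appointment is a judgment the algorithm cannot make.
```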

TEMPLE-RASTON: Earlier this year, algorithms missed someone the CIA believes had been spying for China for years.

(SOUNDBITE OF MONTAGE)

UNIDENTIFIED REPORTER #2: A former CIA officer is in jail this morning...

UNIDENTIFIED REPORTER #3: Well, he's 52-year-old Jerry Chun Shing Lee...

UNIDENTIFIED REPORTER #4: And he's accused of unlawfully having notebooks containing national defense information, including...

TEMPLE-RASTON: Lee is thought to have been spying for the Chinese for the past eight years. And almost exactly like Ames before him, he was accused of single-handedly helping a foreign power imprison and kill a dozen or more double agents. He pleaded guilty to possessing classified information. And like Ames, it was a human who discovered him.

EISENSTAT: It's still a human analyst who needs to be able to put all of the information together to decide whether or not that pattern actually means anything.

TEMPLE-RASTON: Eisenstat says algorithms still haven't been able to do that as well as human intuition can.

Dina Temple-Raston, NPR News.

MARTIN: And you can hear more of our series on the technologies that watch us at npr.org/illbeseeingyou. Transcript provided by NPR, Copyright NPR.

Dina Temple-Raston is a correspondent on NPR's Investigations team focusing on breaking news stories and national security, technology and social justice.