Former FBI Chief Information Security Officer Patrick Reidy had a few laughs at the State Department’s expense during his Black Hat presentation on Wednesday, but he wasn’t foolish enough to come out and say that his agency would have detected an insider threat like Edward Snowden. In fact, depending on how Snowden’s behavior patterns and personal traits lined up with other known threats, spotting his plans might have been impossible.
Someone who knows he’s getting a pink slip on Monday printing off a bunch of stuff on Friday evening? “That’s what we at the FBI call a clue,” Reidy joked. But someone taking a few files here and there, all while technically remaining within his access permissions? That’s like finding a needle in, well, a stack of needles.
Too often, Reidy said, “We take one problem — [like] Snowden — and just generalize it everywhere.”
In the statistical sense of the word, he explained, “predicting really rare events may actually be impossible.” A better bet might be taking an employee-centric approach: analyzing behavior at an individual level and trying to deter them from becoming disillusioned in the first place.
A needle in a stack of needles
According to research the FBI conducted on the topic of insider espionage, spotting such behavior is so difficult because the available data is so scarce and so unhelpful. Reidy and his team crunched the numbers, and they just couldn’t find red flags that predict an insider threat without also flagging lots of false positives.
This isn’t the NSA trying to spot a U.S. citizen calling Yemen at 3 a.m., hanging up, and getting a call back two minutes later from a different number. There’s a big difference between identifying unauthorized access that often signifies an attack, or analyzing enough network traffic to recognize a nefarious signature, and trying to figure out when someone doing something he’s authorized to do is acting on an ulterior motive.
In the case of the FBI, for example, about 2 percent of its employees are responsible for about 80 percent of the data movement. That hardly produces a standard bell curve. Good luck spotting the outliers in the remaining 98 percent.
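The false-positive problem Reidy describes is a classic base-rate issue: when the event you’re hunting is extremely rare, even a very accurate detector drowns in false alarms. A minimal sketch of the arithmetic, using purely hypothetical numbers (the headcount, threat count, and detector accuracy below are illustrative assumptions, not figures from Reidy’s talk):

```python
# Illustrative assumptions only: a hypothetical workforce, a handful of
# real insiders, and an unusually good detector.
employees = 35_000          # hypothetical headcount
true_threats = 5            # hypothetical number of actual insiders
sensitivity = 0.99          # detector catches 99% of real threats
false_positive_rate = 0.01  # ...and flags 1% of innocent employees

flagged_threats = true_threats * sensitivity
flagged_innocent = (employees - true_threats) * false_positive_rate
precision = flagged_threats / (flagged_threats + flagged_innocent)

print(f"Innocent employees flagged: {flagged_innocent:.0f}")
print(f"Chance a flagged employee is a real threat: {precision:.2%}")
```

Even with 99 percent accuracy on both axes, roughly 350 innocent employees get flagged for every handful of real threats, so fewer than 2 percent of alerts point at an actual insider. That’s the “needle in a stack of needles.”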
In some ways, Reidy’s talk dovetailed nicely with another Black Hat talk, this one about trying to predict the susceptibility of Twitter users to social bots. The researchers who gave it were trying to figure out if they could identify the characteristics (e.g., personality, followers, Klout score, etc.) of people who’d be more likely to engage with a bot and less likely to report it for spam.
They found that some signals were stronger than others, but mainly they found that it’s difficult to predict with great accuracy who’ll engage in the behavior you’re targeting when the vast majority of people won’t. Only 20 percent of the participants in their study interacted with bots at all, and only 13 percent actually replied. Absent some clearly distinguishing characteristics — some strong signals in the noise — it’s a lot easier to predict that someone isn’t who you’re looking for.
Think like a credit card company
This is why Reidy says the FBI now focuses on analyzing individuals’ behavior rather than aggregate behavior. It creates a baseline for each employee and can then more easily detect changes that might signal a problem. Think about how credit card companies analyze behavior to combat fraud: there are certain universal red flags, but often you’ll get a call when something perfectly normal for someone else doesn’t fit your usual spending patterns.
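The per-employee baseline idea can be sketched in a few lines. This is not the FBI’s actual system, just an illustration of the principle: flag activity only when it deviates from that individual’s own history, not from a population-wide average. The scenario and numbers below are hypothetical.

```python
from statistics import mean, stdev

def unusual_for_user(history, today, threshold=3.0):
    """Flag today's activity only if it deviates sharply from THIS
    user's own baseline, rather than from a fleet-wide average."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# A hypothetical analyst who normally pulls 10-20 files a day:
analyst_history = [12, 15, 11, 14, 13, 16, 12, 15]

print(unusual_for_user(analyst_history, 14))   # within personal baseline
print(unusual_for_user(analyst_history, 400))  # sharp personal deviation
```

The same 400-file day that screams anomaly for this analyst might be routine for someone on a data-migration team, which is exactly why per-individual baselines beat one-size-fits-all thresholds.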
Even this isn’t enough, though, Reidy said. That’s why the agency looks at the whole situation, combining everything it knows about individual employees in order to paint a complete picture. Activity logs, HR profile, salary, psychographic profile — it all comes together to suggest whether any changes are worth looking into.
In fact, he added, if organizations have a limited budget to spend on trying to detect insider threats, they should put it into capturing and combining HR data and individual behavior data. It’s not big data, it’s the right data.
The best offense is a good defense
Of course, you don’t have to detect insider threats at all if you can stop them from materializing in the first place. Of the 65 insider threat cases the FBI analyzed, Reidy said only about 5 percent came in “bad” — like Snowden, who reportedly took a job at Booz Allen Hamilton to access NSA documents — while the rest turned bad.
How do you stop that from happening? On the one hand, Reidy said, the FBI uses positive social engineering to create a more pleasant work experience. Rather than dictating what people can and can’t do, or treating employees like children (14,000 FBI agents carry guns to work, he joked, but the bureau can’t trust them with USB drives?), it just targets the behaviors it doesn’t like as they’re happening. Go ahead and use Facebook, for example, but don’t post sensitive information there.
Thanks to popups on employees’ computer screens warning them they’re doing something potentially dangerous but still giving them the opportunity to continue, certain risky behaviors (e.g., removing files to external drives) decreased significantly in just a year. Given timely guidance, Reidy said, people will make the right decisions.
But companies and organizations still need to protect their data. That means identifying the most-sensitive data and the systems it’s on and setting permissions accordingly, Reidy said. It also means knowing your enemies or competitors and, just as important, which of your employees they’d be most likely to target.
Whether you’re in government or private industry, Reidy warned, it’s a “hostile marketplace” where someone will always be out to compromise your people and your data. “In 5 to 10 years, people who take insider threats seriously will be around,” he said. “Those who don’t, won’t.”