
The NSA is experimenting with machine learning concepts its workforce will trust

NSA brass wants to increase the agency's use of AI in defensive and offensive operations.

As the U.S. National Security Agency incorporates machine learning and artificial intelligence into its defensive cyber operations, officials are weighing whether cyber operators will have confidence in the algorithms underpinning those emerging technologies.

NSA operators will want to ask, “Is my AI or ML system explainable?” Neal Ziring, the NSA’s technical director for capabilities, told CyberScoop Thursday. “Contexts where the AI is recommending an action is where that will be most important.”

The intelligence agency is still exploring how machine learning, an automated method of data analysis, might be used to detect threats and protect new Internet of Things technology. Given the amount of information that agency employees need to sort through, machine learning could help prioritize tasks and decrease the amount of time employees spend on triage. The NSA aims to use machine learning and artificial intelligence, in which computers make their own decisions, to stop threats more efficiently, and eventually to leverage those tools in possible offensive operations via Cyber Command.

But if NSA workers don’t trust the AI or ML tools that are telling them what to do, any deployment could be for naught.


“Analysts are not going to trust an automated alert that lands in their lap without understanding how it got there in the first place,” NSA’s David Hogue said in remarks at a McAfee event this spring.

NSA Director Gen. Paul Nakasone indicated during a March congressional hearing that helping agency employees build this trust in machine learning techniques at the NSA is a priority.

While Nakasone has said he thinks artificial intelligence will initially be used in defensive operations, he also predicted AI could help U.S. hackers find holes in adversaries’ networks. 

“Currently, access development is our most time-consuming and difficult element of developing offensive options,” Nakasone said in a recent interview in the Joint Forces Quarterly.

How NSA thinks about AI today


For now, the NSA is exploring the use of artificial intelligence to detect vulnerabilities.

“We are experimenting and developing ‘self-healing networks,’ where we see a vulnerability and the vulnerability is recognized rapidly and patched or mitigated,” Nakasone explained in his Joint Forces Quarterly interview.

Machine learning eventually could help ease the immense workload placed on each cyber staffer at the agency, Ziring told CyberScoop. By prioritizing tasks, NSA employees can dedicate more time to solving the most urgent problems. 

“We’re going to need, at the very least, ML techniques to pull signal out of the noise so that the defenders, the operators can be informed [and] spend their time on the most critical events or anomalies rather than trying to make sense of this huge data space manually,” Ziring said.

Nakasone also has said machine learning could help the agency address a shortage in the number of linguists, according to C4ISRNET.


The NSA recently signed a five-year contract with the University of Texas System to conduct research on machine learning. The program, organized through the NSA’s Technology Transfer Program, could result in “development breakthroughs for mission,” according to an NSA review of emerging tech at the agency.

The focus of the contract, a Cooperative Research and Development Agreement, is on anomaly detection, threat activity on high performance computing systems, and IoT, according to the NSA.

Context influences decisions

Analysts may want to verify the AI decision-making processes that recommend taking one action over another, NSA’s Ziring said Thursday at a Nutanix event produced by FedScoop. 

“The operator is going to totally want to know the algorithm,” Ziring said, adding that employees will want to know, “‘what observation or set of observations caused you to recommend that action over this other one?’”


The answer could lie in the concept of “random forests,” Ziring added. The idea rests on the notion of “decision trees,” which classify data under rule sets that can be used to predict outcomes later. The algorithm produces a clear chain of decisions, which makes it more understandable to computer scientists than neural nets, which are loosely modeled on the human brain.

“Random forests are usually fairly explainable,” he said. “You can go back through the random forest and say, ‘I chose that action because my weights said this event was really important and that drove the algorithm.’” 

Neural nets, on the other hand, are “much less explainable,” as they produce outcomes based on experience, Ziring said. The difference matters: whether an AI system is transparent to operators could play a role in what kinds of AI the NSA relies on to conduct its operations, according to Ziring.

“It will create or influence your implementation choices,” he said.
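To make that distinction concrete, here is a minimal sketch, not NSA code, of how a random forest’s reasoning can be read back after the fact. It uses scikit-learn and invented alert-triage feature names purely for illustration; the agency’s actual tooling, data and features are not public. The global feature weights show which observations drove the model overall, and the per-tree decision path shows the rule-by-rule chain behind a single recommendation.

# A minimal illustration (not NSA code) of random forest explainability:
# after training, both global feature weights and the rule-by-rule path
# behind a single prediction can be read back.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical alert features, invented for this sketch.
feature_names = ["failed_logins", "bytes_out", "new_process_count", "off_hours"]

# Synthetic stand-in data; real training data would be labeled alerts.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Global explanation: which observations carried the most weight overall.
for name, weight in sorted(zip(feature_names, clf.feature_importances_),
                           key=lambda pair: -pair[1]):
    print(f"{name}: {weight:.2f}")

# Local explanation: walk one tree's decision path for a single alert,
# printing each rule that fired on the way to the recommendation.
sample = X[:1]
tree = clf.estimators_[0].tree_
for node in clf.estimators_[0].decision_path(sample).indices:
    if tree.children_left[node] != tree.children_right[node]:  # skip the leaf
        f, thr = tree.feature[node], tree.threshold[node]
        side = "<=" if sample[0, f] <= thr else ">"
        print(f"{feature_names[f]} = {sample[0, f]:.2f} {side} {thr:.2f}")

A neural net offers no comparable path to walk back through, which is the gap in explainability Ziring describes.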


Written by Shannon Vavra

Shannon Vavra covers the NSA, Cyber Command, espionage, and cyber-operations for CyberScoop. She previously worked at Axios as a news reporter, covering breaking political news, foreign policy, and cybersecurity. She has appeared on live national television and radio to discuss her reporting, including on MSNBC, Fox News, Fox Business, CBS, Al Jazeera, NPR, WTOP, as well as on podcasts including Motherboard’s CYBER and The CyberWire’s Caveat. Shannon hails from Chicago and received her bachelor’s degree from Tufts University.
