
Alexis Bonnell, head of adoption and deployment at OpenAI for Government, speaks at CyberSat on Nov. 18. Photo: Mark Holmes for Via Satellite
RESTON, Virginia — The balance between space industry cyber defenders and the hackers, online spies, and cyber warriors attacking their systems will tilt in dramatic and largely unpredictable fashion over the next few years as increasingly proficient generative AI models provide growing capabilities to both sides, OpenAI executive Alexis Bonnell told the CyberSat conference Tuesday.
“We must recognize the duality. AI is an accelerant for both offense and defense,” said Bonnell, who is head of adoption and deployment at OpenAI for Government.
Large language models (LLMs) like OpenAI’s ChatGPT and its competitors can already assist both hackers and cyber defenders in important ways, she said, and full automation of some critical workflows in both attack and defense is coming soon.
“What determines who wins is how fast we adopt it,” Bonnell warned.
America could out-resource its adversaries in every way except one, she said: “The only thing that adversaries have the exact same amount of as we do is time. That means that time is actually the most effective and critical and powerful weapon system we have available to us.”
The extraordinary potential of AI to unleash human creativity means that the key to victory in space is asking and answering ‘What if?’ faster than the adversary. “This isn’t just about AI. This is also about, on the human side, making us more curious, more comfortable with speed, with risk,” she said.
Some respected voices in malware analysis have poured scorn on the idea of AI-powered autonomous malware attacking critical infrastructure, highlighting deficiencies in the so-far documented efforts of threat actors using LLMs.
Bonnell told Via Satellite in a brief interview after her remarks that to properly understand the threat to the space sector, it is important to develop “a reasonable and mature hypothesis” about AI risk.
On autonomous malware, “It depends how autonomous we mean,” she said. Human operators are already using AI to assist them during specific stages of an attack, she said: “Social engineering no longer requires skill, just compute.”
But the vision of a human operator merely issuing a single command or prompt to set in motion a complete attack chain is further off: “I would say that’s going to be a couple of years,” she predicted.
“You have to be ready to be wrong” when making predictions about AI, she added; the technology is advancing so fast that “in three months this all could be totally different.”
On defensive capabilities, Bonnell cited an exercise by one of the U.S. national labs, simulating a cyber attack on U.S. satellite communications. She said human analysts took 45 minutes to “piece together across consoles … root cause [analysis], hypothesis, and recommended responses.” An AI assistant was able to do it in 30 seconds.
“This is not future tech. It is happening right now,” she said.
But AI is not a philosopher’s stone for either attackers or defenders, Bonnell said. Cyber defenders in the space sector need to decompose their workflows and figure out where AI could help their human workforce.
“This is not an argument that AI is going to solve everything. In fact, AI isn’t very good at a lot of things.” The key is teaming human and machine, she said.
More from CyberSat 2025:
- Multi-Orbit Networks Expand the Attack Surface, But Basic Cyber Threats Remain, Experts Say
- NRO Establishes Space Cyber Program After Last Month’s Moonshine Guardian Exercise
- DHS Wants Satellite Volunteers to Test New Cyber Tools
- Pentagon’s Acting CIO Arrington Pushes Against Complacency in Space Cybersecurity