Katrina Mulligan, national security lead at OpenAI, at GEOINT Symposium 2025 in St. Louis on May 20. Photo: USGIF
ST. LOUIS — The national security lead for OpenAI demonstrated some of the geospatial capabilities of OpenAI's new o-series "reasoning" models, which the AI organization says are capable of "complex problem solving, coding, and scientific reasoning."
Katrina Mulligan, national security lead at OpenAI, walked through a series of ChatGPT demonstrations at GEOINT Symposium 2025 in St. Louis on May 20, telling attendees, "We're now in an era where our LLMs that were not explicitly trained on GEOINT workflows can nonetheless outperform highly specialized models and provide new levels of multimodal analysis."
Using the recently released o3 reasoning model, Mulligan showed a piece of overhead imagery and the reasoning process the model went through to correctly identify the image as the Simpson Desert in Australia. In another demo, the model identified the location of a street-level image down to the specific district in Bulgaria.
In the last demo, the model correctly identified that a specific NASA image of a rabbit (without metadata) was taken at Cape Canaveral in Florida by identifying the genus of the rabbit and the type of grass in the photo.
“OpenAI’s new series of reasoning models can perform high accuracy localization tasks across both overhead and ground level views. We’re going to see capability here across a wide variety of applications, from building better geospatial foundation models, to better conducting sensitive site exploitation, to accelerating and automating analytical workflows,” Mulligan said.
Mulligan said the level of detail in the model's answers represents a "major capability jump" from the o1 to the o3 reasoning model. The o3 model was released in mid-April. "We're still in the very, very early stages of figuring out what these models can do," she added.
Mulligan also spoke to the OpenAI collaboration that began earlier this year with the U.S. National Laboratories, saying it shows promise to "dramatically shorten" the timeline for scientific discovery. OpenAI announced in January that it would deploy its o-series models on an Nvidia supercomputer at Los Alamos National Laboratory as a shared resource with Lawrence Livermore and Sandia National Laboratories.
“This is the first time OpenAI or any frontier model developer has actually moved model weights inside of a secure government environment. In just four months, we have gotten our entire inference stack up and running on their system, and they’re beginning to automate the tasks associated with scientific discovery,” she said. “Early indications are very clear that we are going to dramatically shorten the timeline for net new scientific discoveries.”
Mulligan addressed hesitation around adopting AI technologies, arguing that it poses a greater risk to national security for the U.S. government not to adopt AI.
“Government often feels like using AI is too risky and that it’s better and safer to keep doing things the way that we’ve always done them, and I think this is the most dangerous mix of all,” Mulligan said. “If we keep doing things the way that we always have, and our adversaries adapt to this technology before we do, they will have all of the advantages that I show you today, and we will not be safer.”