Inside DOD’s Pursuit of Trusted AI & Autonomy

Despite how it is often described, artificial intelligence is not necessarily a new or emerging technology. In fact, AI has been around for nearly 80 years, according to Dr. Cara LaPointe, co-director of the Johns Hopkins Institute for Assured Autonomy. 

But with the rapid pace of advancements made in the last few years, AI is more prevalent, more useful and, at times, more dangerous than ever before. Today, public and private sector leaders alike are looking at AI from different angles and challenging their own notions of what it is, what it’s not and what it can be.

“I think what we’re seeing now that’s really different is finally that realization of it’s not just about getting to a system, to a technology, to a particular algorithm or a particular system. It’s also doing the really hard work of all the enablers,” said Dr. LaPointe during a panel discussion at the ExecutiveBiz Trusted AI & Autonomy Forum.

“We’re really seeing a sea change across the DOD in thinking about, okay, well, it’s not just about autonomous networks, it’s about the entire data infrastructure that’s going to feed that autonomous system,” she added.

AI is often mistakenly viewed as a singular technology or as some kind of one-stop-shop solution to a data problem. In reality, AI is far more complex and multi-faceted, and it requires a deeper understanding to implement effectively.

“What I hear is a lot of people talking about AI, but what I don’t see is a lot of people actually building AI or addressing the issues — the fundamental issues — or putting real AI at least close enough past TRL [Technology Readiness Level] 3 that actually could make it into a real system,” shared Dr. Missy Cummings, professor and director of Mason Autonomy and Robotics Center at George Mason University.

“What I don’t see is a lot of people who understand the problems, the limitations,” she added.

Dr. Missy Cummings, Dr. Matthew Johnson, Dr. Cara LaPointe and Theodore Tanner spoke on a panel at the ExecutiveBiz Trusted AI & Autonomy Forum on Tuesday.

While AI certainly needs to be better understood, it also needs to be both trusted and trustworthy, Dr. LaPointe argued, and there needs to be a closer partnership, integration and teaming between human operators and machines.

“Engineers alone cannot solve these problems; operators and social scientists alone cannot solve these problems,” said Dr. LaPointe. “So I think there’s been a real understanding that ultimately the way we’re going to employ AI is going to be in close coordination between letting humans do what humans do best and then letting machines do what they do best.”

She added that we need “systems engineering processes that will develop technology that is trustworthy, and we have to involve people and operators in that process to even figure out what is it going to take for people to trust it, so we can make sure we’re building the right level of trust in these systems.”

Theodore Tanner, chief technology officer, underscored the importance of open source software in fostering a greater sense of transparency and trust.

“The main goal here is to execute on software development in an open environment that’s transparent. We need to have open models, we need to have open source software, we need to get it out to the developers and we need to have a system that actually everybody can develop onto for critical infrastructure for AI,” said Tanner.

Despite the knowledge gaps and trustworthiness hurdles, the public sector is making progress on AI. Dr. Matthew Johnson, senior technical advisor for responsible AI within the Office of the Chief Digital and Artificial Intelligence Officer, suggested that AI tools like ChatGPT are unlocking new opportunities for optimization in the private sector that could be mirrored in the federal space as well.

“How can we speed up some of these back office functions so we have a more agile acquisitions process, so that we have automation in those things as well? How do we spur the potential for grassroots innovation? That’s where something like ChatGPT has been really interesting. The ability for people outside the government… are able to optimize their work,” Dr. Johnson said.

Dr. Johnson highlighted a few notable programs within the CDAO that are using AI to optimize the agency’s operations today. The CDAO’s GAMECHANGER project employs natural language processing to more effectively search and summarize policy documents, and the Tradewinds project uses AI for content generation in requests for information and requests for proposals.

Ultimately, Tanner reminded the audience that at the heart of AI integration is an important mission carried out by the nation’s warfighters. “This is not about the bottom line, this is about a great mission for the warfighter,” he said.

Written by Summer Myatt
