Building a trusted relationship with artificial intelligence technologies is non-negotiable for the Department of Defense as it gets serious about the hot-button technology. Pentagon officials must consider all of the intricacies of weapon systems to properly determine how to integrate and build in AI features, as well as "the complexities of the operational environment, where these systems are going to be deployed."
According to Kimberly Sablon, principal director for trusted AI and autonomy at the Office of the Undersecretary of Defense for Research and Engineering, it is common to think of AI as a single off-the-shelf application, but in reality it is a constantly evolving "collective of approaches." Thus, commercial companies need to design their AI-based offerings from the beginning with the operational requirements of increasingly contested scenarios in mind. Sablon addressed an audience of government contracting industry members at ExecutiveBiz's Trusted AI and Autonomy Forum on Sept. 12.
During her remarks, Sablon stated that AI systems need to be “robust in their ability to maintain performance when deployed in an environment that is very complex, where threats are certainly more camouflaged and the environment writ large is very obscured.” She also said developers need to ensure that the technologies are aligned with DOD ethical principles and guidelines.
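To make that notion of robustness concrete, the sketch below shows one common kind of check her remarks point toward: measuring how far a model's accuracy falls as its inputs are deliberately obscured. It is a minimal, hypothetical illustration; the `degrade` function, the noise levels and the generic `model` interface are assumptions for the example, not any DOD evaluation suite.

```python
# Hypothetical sketch: quantify how a model's accuracy degrades as inputs
# are obscured. The model and data are stand-ins, not any DOD system.
import numpy as np

def degrade(inputs: np.ndarray, noise_level: float, rng: np.random.Generator) -> np.ndarray:
    """Simulate an obscured environment by adding Gaussian noise to inputs."""
    return inputs + rng.normal(scale=noise_level, size=inputs.shape)

def robustness_report(model, inputs: np.ndarray, labels: np.ndarray,
                      noise_levels=(0.0, 0.1, 0.5, 1.0)) -> dict:
    """Accuracy at each noise level; a steep drop signals a brittle model."""
    rng = np.random.default_rng(0)
    report = {}
    for level in noise_levels:
        # `model` is assumed to expose a scikit-learn-style predict() method.
        preds = model.predict(degrade(inputs, level, rng))
        report[level] = float(np.mean(preds == labels))
    return report
```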
She said that to be certain of AI systems' viability, the DOD considers it key to drive toward "warfighter-in-the-loop design or experimentation," a setup in which human oversight and judgment are always applied. With this in mind, the DOD recently stood up the Center for Calibrated Trust Measurement and Evaluation, or CaTE, in collaboration with academic partners, to help advise on the construction of AI systems on which the department can rely. CaTE tries to answer the question, Sablon said, of "how do we operationalize value alignment?"
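As a rough illustration of what a warfighter-in-the-loop setup implies in software terms, the sketch below gates every machine recommendation behind a human decision and escalates low-confidence outputs rather than executing them automatically. The `Recommendation` type, the 0.8 threshold and the `approve` callback are hypothetical stand-ins, not CaTE's or the DOD's actual design.

```python
# Minimal human-in-the-loop gate: the system may recommend, but a person
# must confirm before anything is executed. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def human_in_the_loop(rec: Recommendation, approve) -> str:
    """`approve` is a callable standing in for the human operator's judgment."""
    if rec.confidence < 0.8:  # assumed threshold; low confidence is escalated
        return "escalated: confidence too low to surface a recommendation"
    if approve(rec):
        return f"executed: {rec.action}"
    return "rejected by operator"

# Usage: the operator callback always has the final say.
result = human_in_the_loop(
    Recommendation("flag contact for review", 0.92),
    approve=lambda r: input(f"Approve '{r.action}'? [y/n] ") == "y",
)
print(result)
```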
These steps, especially the consistent human involvement, are important because the DOD has already taken note of "instances of deception," in which AI attempts to mislead humans or even to trick other AI systems. Sablon said a deception detection capability is a crucial next step. "We're not going to have that tomorrow," she acknowledged, but top military and civilian leaders are beginning to incentivize serious conversations about how to "detect and recognize deception."
Sablon also took a moment to single out one of the sponsors of the forum, BigBear.ai, whose team she said is working “very closely” with the DOD on “pulling together a scalable, autonomous AI orchestrator.”
On the near horizon, the DOD's AI agenda will entail rapid experimentation, including an upcoming project with Israel, with which it has "started to baseline some of our edge capabilities." Sablon promised that "initial demonstrations" for the endeavor, which she did not elaborate on further, will occur in April 2024.