Over the past couple of years, the Department of Defense has made clear that artificial intelligence is among its top concerns. In Feb. 2022, it stood up the Chief Digital and Artificial Intelligence Office, consolidating several disparate functions that dealt with AI and autonomy throughout the department. But to move forward with incorporating AI into its day-to-day activities, the DOD has placed a premium on making sure that AI usage is ethical, smart and manageable. In June, the Responsible Artificial Intelligence Strategy and Implementation Pathway was instituted to keep those efforts moving in the right direction.
That same month, Under Secretary of Defense for Research and Engineering and Wash100 Award winner Heidi Shyu helped lead a three-day conference to discuss and lay out plans for AI implementation within the DOD. There, Pentagon Principal Director for Trusted AI and Autonomy Kimberly Sablon shared two major endeavors the department is pursuing to ensure confidence in, and the reliability of, AI in military and civilian settings.
Advancements in the DOD happen quickly. If you want an up-to-the-minute update on the department’s progress with these initiatives, Sablon will deliver the keynote address at the Trusted AI and Autonomy Forum on Sept. 12 at 2941 Restaurant in Falls Church, Virginia. Come enjoy a hearty breakfast and listen to Sablon’s remarks, as well as two subsequent panels that will tell you all you need to know about how the government is working to build healthy and sustainable relationships with AI. Register today!
At the June conference, Sablon detailed the following two lines of action as next up on the DOD’s AI agenda:
- Establishment of a Center for Calibrated Trust Measurement and Evaluation, an organization that will attempt to quantify “trust in heterogeneous and distributed human-machine teams,” according to DOD information shared with Defense Scoop. The hub will combine the efforts of specialists in test, evaluation, verification and validation; acquisition; and research and development to reach its goals. It will also take a warfighter-in-the-loop approach.
- Creation of a Research and Engineering Community of Action, which will focus on fostering interoperability of AI systems and performing “rapid experimentation with mission partners.” To do so, Sablon and her team will coordinate mission engineering, systems engineering and research activities.
Sablon additionally noted the importance of stress-testing AI systems through strategies like red-teaming to make sure they can withstand attacks and survive in adversarial environments.
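For readers curious what that kind of adversarial stress-testing can look like in practice, the toy sketch below perturbs a model’s input with the fast gradient sign method, a common building block of AI red-teaming exercises. It is a generic illustration only; the model, data and epsilon value are made up for this example and do not reflect any DOD system or methodology.

```python
# Toy illustration of adversarial stress-testing (not DOD code): perturb an
# input with the fast gradient sign method (FGSM) and check whether the
# model's prediction changes. Model, data and epsilon are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny classifier standing in for a system under test.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 10, requires_grad=True)  # one sample input
label = torch.tensor([0])                   # its expected class

# Gradient of the loss with respect to the input.
loss = nn.functional.cross_entropy(model(x), label)
loss.backward()

# Nudge the input in the direction that most increases the loss.
epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()

with torch.no_grad():
    clean_pred = model(x).argmax(dim=1).item()
    adv_pred = model(x_adv).argmax(dim=1).item()

print(f"clean prediction: {clean_pred}, perturbed prediction: {adv_pred}")
```

If the prediction flips under such a small, targeted nudge, that is exactly the kind of brittleness red-team testing is meant to surface before a system reaches an adversarial environment.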
“There’s a balance of roles and responsibilities between humans and machines, and there’s different levels of human autonomy interactions that we ought to be thinking about…I just want to put it out there that at least we’re taking some critical steps to tackling those,” commented Sablon in late August.
As part of its broader push toward widespread AI acceptance and adoption, the DOD also debuted a task force in mid-August to address generative AI specifically. (Typified by applications like ChatGPT, generative AI produces original content in response to human-supplied prompts.) Task Force Lima is charged with evaluating, integrating and operationalizing generative AI across the department. The Pentagon reportedly hopes that getting serious about generative AI will prompt the commercial companies that make these products to improve their trustworthiness and ease of use.
To continue learning about how the public and private sectors are collaborating to harness AI’s full potential, make sure to register to attend ExecutiveBiz’s Trusted AI and Autonomy Forum on Sept. 12. In addition to keynote speaker Kimberly Sablon, panel discussions at the event will feature the CDAO’s Senior Technical Advisor for Responsible AI Dr. Matthew Johnson, BigBear.ai Chief Technology Officer Theodore Tanner Jr. and more.