Artificial intelligence has been steadily tightening its grip on the technology industry for the past half-decade, and innovators in the space are racing to share its benefits with the government. Building on the progress of the internet age and near-universal connectivity, AI leaders see an information race emerging that seeks to use AI to exploit that widespread connectivity.
The race favors organizations that can mine signals in data at scale to maximize efficiency. The technical discoveries the private sector is making inevitably outstrip established federal policy, but efforts such as the National Institute of Standards and Technology’s AI Risk Management Framework, released in January, are attempting to keep pace with the fast-moving commercial world, including new big-ticket items like ChatGPT.
“We feel it’s been necessary to ensure that government and our national security agencies have access to the same level of technology that the commercial industry has,” asserted Dominic Delmolino, vice president of technology and innovation at Amazon Web Services, as part of a panel discussion, “Building the AI Powered Information Enterprise,” during the Potomac Officers Club’s Feb. 16 AI Summit.
Delmolino said that hundreds of thousands of customers have utilized cloud computing as a platform to conduct data analysis and AI activities since AWS launched its offering. He cited reports that data-driven enterprises are “more efficient and effective than those that aren’t using data to inform their decision-making.”
Another panelist, NVIDIA Senior Solutions Architect Larry Brown, clued the audience in to recent news that Google is indicating large-scale search will soon evolve into AI-enabled large-scale search. Brown cautioned that such large companies shouldn’t roll out their own AI-sourced platforms too hastily, lest they fail, but did applaud Microsoft on its foresight in being an early investor in OpenAI, the progenitor of ChatGPT. Microsoft has taken steps to integrate the service into its Bing search engine.
Brown acknowledged, however, that while generative language models like ChatGPT are getting a lot of media attention — and a huge first wave of subscribers — one of the more practical and proven uses of AI technology, especially for the government, is document summarization and document matching. The exec shared that NVIDIA currently has multiple ongoing projects with various federal agencies in this arena.
In terms of alternative and less-discussed yet viable AI uses, HII VP of AI Brendan McElrone said his team is “really gravitating toward” the neuro-symbolic AI approach. This methodology combines the two strands of AI thinking — learning and reasoning — into one school of thought, taking the “more…classical AI, rules-based AI and symbolic AI” and merging them with learning-based systems.
“We need to figure out how we can leverage the learning aspect within neuro-symbolic AI to extract, let’s say, low-level patterns within data, but then use those low-level differences to then build higher-level systems with reasoning and subject matter expertise,” McElrone commented.
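The two-layer pattern McElrone describes — a learned component extracting low-level patterns from data, with a symbolic, rules-based layer reasoning over them — can be sketched in miniature. Everything below is illustrative and not drawn from any HII or government system: the “learned” extractor is a trivial keyword scorer standing in for a real neural model, and the rules are invented examples of encoded subject-matter expertise.

```python
def learned_extractor(text: str) -> set[str]:
    """Stand-in for a neural model: maps raw input to low-level labels.

    In a real neuro-symbolic system this would be a trained network;
    here a simple keyword check plays that role for illustration.
    """
    labels = set()
    if "radar" in text or "sensor" in text:
        labels.add("sensor_report")
    if "unidentified" in text:
        labels.add("unknown_contact")
    if "friendly" in text:
        labels.add("known_contact")
    return labels


# Symbolic layer: hand-written rules encode domain expertise and turn
# the extractor's low-level labels into a higher-level judgment.
RULES = [
    (lambda l: "sensor_report" in l and "unknown_contact" in l, "flag_for_review"),
    (lambda l: "sensor_report" in l and "known_contact" in l, "log_and_ignore"),
]


def reason(labels: set[str]) -> str:
    """Apply the first matching rule; fall through to a default."""
    for condition, conclusion in RULES:
        if condition(labels):
            return conclusion
    return "no_action"


print(reason(learned_extractor("radar picked up an unidentified aircraft")))
# -> flag_for_review
```

The division of labor mirrors McElrone’s framing: the learning side handles noisy pattern extraction, while the symbolic side keeps the high-level logic inspectable and editable by subject-matter experts.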
Later in the conversation, McElrone noted a difficulty that is arising in government applications of AI — the conflict between establishing a zero trust cybersecurity architecture, as mandated by President Biden’s Executive Order, and building the trust practitioners are encouraging for AI systems. The HII representative said that ZTA could potentially “cause issues” when AI is applied to information and data sets “in one embedded space.”
“Not to bring the neuro-symbolic AI approach up again,” McElrone joked, nonetheless suggesting that users ought to be “understanding where those limitations are with those large language models and fuse data where it makes sense, but also bring in more of the symbolic classical approach to get that high level fusion that you’re looking for.”
Department of the Air Force Senior Scientist in AI and Machine Learning Steven Rogers took the perspective that in order to sow trust and integrity into AI systems, those constructing and programming the technologies have to participate in thorough conversations with users.
“You have to take the time to talk to those humans about what are their ethical, responsible concerns, what are their trust concerns, and then some of those we can handle from an engineering perspective…to figure out how you can enable that end user who [will] be interacting with that on a daily basis, how they know when to trust it and what amount of trust to use in a given scenario,” Rogers explained.
But absolute trust in an autonomous system is still not feasible, according to Delmolino, who said that a level of discernment and questioning of results from an AI like ChatGPT is still crucial. He drew a comparison between the program and an older sibling who perhaps foolhardily responds to all questions with supreme confidence and assurance.
“That level of confidence and what’s likely to be said is something we should feel comfortable challenging, comfortable exercising and using our critical thinking skills [against].”
Summarizing the objectives and overall thrust of the panel — which also included the insights of ECS Data and AI Community of Excellence Director Anthony Zech — the moderator, HII Executive Vice President and Chief Technology Officer Todd Borkey, laid out what an organization that stays at the front of the pack in the AI hustle requires, along with his predictions for what this will mean for the future of machine/human dynamics.
“To win this information race, you need to know first and you need to know more, and disseminate that information effectively. Systems are going to collect, distill, route, prioritize, share automatically like never before. And humans will slowly move from in the loop to on the loop as these capabilities will become trusted,” Borkey stated.