Michael Adams is the artificial intelligence and big data solutions vertical executive at Carahsoft. He joined in 2007, not long after the company’s founding, and has played a key role in its notable year-over-year growth. The sales director works with providers of advanced analytics and other emerging solutions to meet the unique needs of federal, state and local government organizations.
Adams recently participated in a Spotlight interview with ExecutiveBiz, during which he explored emerging AI government use cases and the ethical use of AI as well as actions agencies should take now to prepare for AI deployments.
Can you tell us about your role at Carahsoft? How do you collaborate with AI and big data vendors, and how do you help government organizations benefit from their solutions?
I manage the Carahsoft AI and big data solutions team, which collaborates with a partner ecosystem of more than 50 large and small solution providers. We set up a team around each vendor to help them deliver solutions to the public sector, understand the government market, and navigate government contracts and issues like Federal Risk and Authorization Management Program authorization. Our role is to educate the government customers and federal systems integrators who own the mission requirements, and to make procurement quick and easy. We also serve as the hub of the wheel, connecting them with the implementation partners and go-to acquisition resellers who are vital to deploying the technology.
Generative AI tools like ChatGPT and Lensa AI have captured the public’s imagination. How is that influencing the way agencies consider AI or the way Carahsoft’s partners approach AI?
Excitement around generative AI is encouraging government agencies to embrace AI more quickly. There was a time when government was the first mover on new technology; today, by contrast, the private sector is the early adopter. But with AI, agencies have an opportunity to move quickly to operate faster, make smarter decisions and achieve their missions more effectively.
Further motivating the U.S. government and military is that adversarial nations are also looking at how they can adopt AI. Organizations need to move aggressively on AI, or they may risk finding themselves at a disadvantage.
The vendor community is also recognizing AI opportunities. We're seeing a boom of startups focused on AI, and many vendors you wouldn't think of as AI providers are beginning to integrate large language models and natural language processing into their offerings.
Agencies are exploring AI use cases. Are organizations prepared to use AI effectively?
There are a lot of promising use cases for AI in government. For instance, cybersecurity has become a critical issue. Applying AI to cybersecurity monitoring and the processing of cybersecurity data can be an important part of the solution.
Another perennial concern of government is fraud, waste and abuse. AI can monitor and detect fraud — for example, to increase tax compliance or ensure that government benefits reach the right people. AI can very quickly recognize patterns that suggest fraudulent behavior that would be difficult or extremely time-consuming to spot manually.
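The pattern recognition described here can be illustrated with a deliberately minimal sketch: flagging payment amounts that deviate sharply from the norm. The data, threshold and function name are illustrative assumptions; real fraud-detection systems use trained machine learning models rather than a simple statistical rule, but the idea of surfacing anomalies too subtle or numerous to spot manually is the same.

```python
import statistics

def flag_anomalies(amounts, threshold=2.5):
    """Return the indices of amounts that sit more than `threshold`
    standard deviations from the mean -- a crude stand-in for the
    pattern recognition an AI fraud model performs at scale."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # all values identical: nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > threshold]

# Hypothetical benefit claims: one is wildly out of line with the rest.
claims = [120, 95, 110, 130, 105, 9800, 115, 100]
print(flag_anomalies(claims))  # flags index 5, the $9,800 claim
```

In practice an agency would replace the z-score rule with a model trained on large volumes of historical transaction data, which is exactly why the data prerequisites discussed below matter.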
But one thing organizations should understand is that they can’t expect to achieve optimal outcomes with AI if they haven’t laid the groundwork. There are three key prerequisites to effective use of AI. First is the need for large quantities of data to train machine learning models. Second is clean, accurate, and timely data, because without the right data, you won’t gain the right insights or achieve the right outcomes. The third requirement is an IT infrastructure that can support accelerated computing for AI training and processing.
Many agencies have gone through a first wave of infrastructure modernization. But many outmoded legacy systems remain. To support AI-driven data processing and analysis, and to capitalize on emerging AI use cases, organizations need to embrace a second wave of modernization. They need modern, high-performance data centers or cloud infrastructures to support future AI workloads.
AI promises to benefit citizens. But some are concerned about unintended downsides. How can agencies and the vendor community optimize the advantages of AI while protecting the public from potential disadvantages?
We’re seeing more discussions around issues like ethical AI, equitable AI and explainable AI. AI should be used in a responsible way. AI inputs and outputs should treat people fairly. And, where possible, AI models should be transparent so that people can understand what data went into the model or why the model made a certain recommendation.
These are important conversations, and it's positive that both government and industry are having them. Government is also taking action – for instance, through the Blueprint for an AI Bill of Rights, issued by the White House last year, which establishes principles for the design and use of automated systems.
Finally, workers are concerned that AI could eliminate jobs. But we don’t see that happening. We’ve seen with other types of automation that technology might alter roles, but it doesn’t replace them. AI can help people become more efficient or make better decisions or take faster, more accurate actions. But people are still required to train the AI models, interpret the AI outputs, make decisions based on AI recommendations, and take AI-assisted actions.
In the short term, AI will benefit citizens through better services – from automated contact centers that quickly and accurately connect people with benefits, to optimized healthcare at VA hospitals that improves health outcomes for veterans.
In the longer term, organizations like NOAA can use AI to analyze weather patterns to predict and respond to natural disasters. The military can use AI to improve decision-making and strengthen national security.
The positive use cases for AI in government feel limitless. It might sound like a cliché, but AI truly can make the world a better place.