
Booz Allen’s John Larson on Responsible AI, Market Trends & More


Artificial intelligence is sweeping today’s public sector market. As executive vice president and AI practice lead at Booz Allen Hamilton, John Larson helms the firm’s efforts to deliver what he considers the “single most transformative technology of a generation” to the federal government.

With new technologies come new risks, and rapid advancements in AI have brought about a number of ethical considerations. A strong advocate for responsible AI, Larson is committed to educating AI adopters on how these tools can be safely and sustainably implemented where they are needed.

Recently, Larson participated in an Executive Spotlight interview, during which he sat down with ExecutiveBiz to discuss responsible AI, related federal standards and Booz Allen’s role in the growing field.

Tell me about the current state of the artificial intelligence market. Where are you seeing new opportunities in AI, and where do you think the market is heading?

The AI market is rapidly evolving, and as the largest single provider of AI services to the federal government, we find it very encouraging that appropriations for AI have increased substantially over the past several years.

It is important to note that within the federal AI market, the Department of Defense has traditionally been at the forefront of the application of AI to their mission. Now, civilian agencies are increasingly beginning to embrace AI. Although they’re behind the DOD in terms of the adoption of these capabilities, they are rapidly accelerating, which is why you see these AI allocation patterns in appropriations. 

Overall, what we’re seeing is a need for AI across two dimensions. First, we see agencies and program shops that approach AI from a “problem centric” perspective. In these instances, a mission challenge is defined while the composition of the tools and techniques required to address it is left unspecified. This can be due to an intentional desire to solicit the broadest range of solutions to the problem or – as is often the case with AI – due to a lack of understanding of the full power of this evolving technology and how it can solve problems differently than traditional approaches.

Second, there is a smaller set of customers who know they have a problem that fits an AI solution, and with them we can be prescriptive. These clients need to solve specific challenges, such as improving civic engagement or handling fraud, but they may not know which AI solution best fits their problem. These demands – tracking with the rise in AI awareness – are becoming an increasingly large portion of the types of challenges we’re seeing.

The biggest trend emerging in the market today is certainly generative AI. Even my 87-year-old father, who didn’t have electricity in his house until age 11, is experimenting with ChatGPT. Clearly, generative AI is at the forefront of everyone’s mind. What OpenAI and ChatGPT have done is initiate the democratization of AI, meaning they have created a way for people of all technical skill levels to natively engage with an AI engine that simply didn’t exist mere months ago.

We started working with large language models a little more than five years ago. While practitioners have been using large language models for some time, the advent of this language-based, prompt-driven capability allows people with varying skill sets to utilize AI in ways previously unimagined. Nevertheless, prompt engineering is emerging as a skill set, because the better you are at engaging these engines and informing and setting the context of the prompt, the better results you will get. Still, it has been democratized so nearly anyone can use it, and increased access to these engines is transforming the industry and society today.
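To make the idea of prompt engineering concrete, here is a minimal sketch of how role, context and constraints can be assembled into a richer prompt. The function and field names are illustrative, and the model call itself is left abstract rather than tied to any particular vendor’s API.

```python
# A minimal sketch of the idea behind prompt engineering: the more context
# and structure you give the engine, the better the response tends to be.
# The model call is left abstract; these names are illustrative only.

def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt from its parts."""
    lines = [
        f"You are {role}.",
        f"Context: {context}",
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# A bare prompt vs. one that sets the context explicitly.
bare = "Summarize the report."
rich = build_prompt(
    role="an analyst writing for a federal program office",
    context="year-over-year appropriations data for civilian-agency AI programs",
    task="Summarize the three largest funding changes.",
    constraints=["Cite figures in millions of dollars", "Keep it under 150 words"],
)

print(bare)
print("---")
print(rich)
```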

Another significant trend we are helping the government address today is adversarial AI, where we’re seeing an uptick in the kinds of attacks bad actors can bring to bear to sabotage or negatively influence algorithms. One of the more popular techniques is the black box attack, in which you reverse engineer a model in order to subvert its predictions. With this type of adversarial attack, one can probe the model’s decision boundaries and approximate its gradients to achieve a desired outcome. Another avenue is to poison the data the model is trained on by inserting perturbations into the training data; as a result, the model will make erroneous predictions. With this information about the underlying model, nefarious actors can exploit it to fool algorithms. This is something that can easily be done today – something as simple as a sticky note or stripes on a gun can change its detection from being identified as a weapon to being identified as benign.
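As a rough illustration of the perturbation attacks Larson describes, the toy example below uses a simple linear classifier standing in for a real detector: once an attacker knows (or has reverse engineered) the model’s weights, a small, targeted perturbation is enough to flip its prediction. The model, labels and numbers are invented for illustration only.

```python
import numpy as np

# Toy evasion-style attack against a linear classifier. The labels
# "weapon"/"benign" are illustrative, echoing the example above.

rng = np.random.default_rng(0)
w = rng.normal(size=10)   # model weights, assumed known to the attacker
b = 0.0

def predict(x):
    return int(w @ x + b > 0)   # 1 = "weapon", 0 = "benign"

x = rng.normal(size=10)
if predict(x) == 0:
    x = -x   # start from an example the model classifies as "weapon"

margin = w @ x + b
epsilon = (margin + 0.1) / np.abs(w).sum()   # smallest uniform step that crosses the boundary
x_adv = x - epsilon * np.sign(w)             # nudge every feature against the weight direction

print(f"perturbation size per feature: {epsilon:.3f}")
print("original prediction: ", predict(x))       # 1 ("weapon")
print("perturbed prediction:", predict(x_adv))   # 0 ("benign")
```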

Finally, the third emerging trend is the rise of responsible AI. Although responsible AI has been in discussion for some time – from NIST’s AI Risk Management Framework to OSTP’s Blueprint for an AI Bill of Rights – the recent rise of ChatGPT and sensationalized headlines about applications such as ChaosGPT, ChatGPT’s alter-ego tuned to bring about humanity’s downfall, have thrust the need for governance and guardrails around AI into the public consciousness. That is why we made the investment in Credo AI – we know that we have to think through these challenges.

Can you talk about the importance of responsible AI? Explain why we should be paying more attention to responsible AI.

Many of the key challenges responsible AI is intended to address, such as data and model bias or drift, have been around for decades. In the past, these risks were generally managed by data scientists and model engineers, but today, the power of AI models is both their speed and ability to learn and get smarter over time. What the models do is make a forecast, look at the error from that forecast and reinforce that model so it improves and minimizes errors. We want to harness that learning feature, but we want to do it within a set of parameters that comport with our democratic values. We don’t want it to learn certain things because they don’t align with these beliefs, so we need to make sure that we create the governance and guardrails around AI to ensure models learn in a way that is consistent with our core values. 
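Below is a minimal sketch of the forecast, error and update loop Larson describes, with a crude stand-in for a guardrail: candidate updates are only accepted when the model stays inside a predefined parameter envelope. Real governance checks would be far richer (bias tests, policy reviews); this only illustrates the shape of the idea, and all values are synthetic.

```python
import numpy as np

# Forecast -> error -> update loop, with a simple "guardrail":
# an update is accepted only if the model stays inside an allowed envelope.

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])   # unknown relationship the model is learning
w = np.zeros(2)
lr = 0.05
bound = 3.0                      # illustrative guardrail on parameter magnitude

for step in range(200):
    x = rng.normal(size=2)
    y = true_w @ x + rng.normal(scale=0.1)   # observed outcome
    forecast = w @ x
    error = forecast - y                     # look at the error from the forecast...
    candidate = w - lr * error * x           # ...and reinforce the model
    if np.all(np.abs(candidate) <= bound):   # accept only if within the envelope
        w = candidate

print("learned weights:", np.round(w, 2))    # close to [2.0, -1.0]
```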

Moreover, responsible AI is important because AI is transformative, and we need this technology to be adopted because it will help us solve some of the most complicated challenges we face today. Intrinsically, we want its applications to remain consistent with our democratic values and our societal goals, so we need to ensure we have the proper governance around it. 

At the same time, it is also imperative that we embrace responsible AI assurance because we don’t want to stifle adoption and innovation of this technology, and proactively putting these frameworks in place helps reduce some of the anxiety around AI. For example, we need frameworks to understand the inherent bias in training data sets so that we can both mitigate and disclose those potential biases and keep models from propagating and amplifying them in their results.
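One concrete example of the kind of check such a framework might run is a demographic parity gap on training labels, sketched below on synthetic data. The data and group labels are invented; what counts as an acceptable gap is a policy decision, not a technical one.

```python
import numpy as np

# A tiny sketch of one bias check a framework might run: the demographic
# parity gap (difference in positive-outcome rates between two groups)
# in the training labels. The data here is synthetic and illustrative.

rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=1000)                               # 0 / 1 protected attribute
label = (rng.random(1000) < np.where(group == 1, 0.35, 0.55)).astype(int)

rate_0 = label[group == 0].mean()
rate_1 = label[group == 1].mean()
gap = abs(rate_0 - rate_1)

print(f"positive rate, group 0: {rate_0:.2f}")
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap: {gap:.2f}")   # a gap this large should be disclosed and mitigated
```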

The reality is that some of the biggest risks we face are not necessarily technological hurdles, but societal and cultural hurdles around AI’s adoption and applications. It is critical for us as a nation to maintain leadership in this space for our economic prosperity, our national security and the betterment of our society. 

How can the government work to begin assessing the risks of AI?

The government needs to consider what regulatory frameworks currently exist, both to avoid redundant regulations and to identify any potential gaps in regulations already in place today that could be relevant to AI. Most importantly, they need to make sure these learning algorithms comply with existing regulations – something that the FTC is already looking to do. As we identify new gaps, we can add further layers of regulation.

The second step is focusing on use cases and applications. There is a lot of conversation today about the need for transparency in these models, which makes sense for certain applications. For example, if a government program determines which individuals are entitled to benefits, a degree of transparency is important so you can understand why one individual was entitled and another was not.
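To illustrate what that transparency could look like in practice, the sketch below decomposes a simple linear eligibility score into per-feature contributions, so each decision can be explained to the individual it affects. The features, weights and threshold are invented for illustration only.

```python
# A minimal sketch of explainable eligibility scoring: with a linear rule,
# each decision can be broken into per-feature contributions. All values
# here are hypothetical and for illustration only.

features = ["income_below_threshold", "dependents", "disability_status"]
weights = {"income_below_threshold": 2.0, "dependents": 0.5, "disability_status": 1.5}
threshold = 2.5

def explain(applicant: dict) -> None:
    contributions = {f: weights[f] * applicant[f] for f in features}
    score = sum(contributions.values())
    decision = "entitled" if score >= threshold else "not entitled"
    print(f"decision: {decision} (score {score:.1f} vs threshold {threshold})")
    for f, c in contributions.items():
        print(f"  {f}: {c:+.1f}")

explain({"income_below_threshold": 1, "dependents": 2, "disability_status": 0})
explain({"income_below_threshold": 0, "dependents": 1, "disability_status": 1})
```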

A different example is cancer detection. If I had a black box algorithm that could detect cancer in a person with 90 percent accuracy but there was no clarity into why or how it accomplished this, I don’t think we should limit its deployment simply because we could not fully understand how it arrived at its predictions. That is an example where transparency is not an intrinsic good – we do not need to understand the why in that instance; the ends would justify the means.

I think it is critical to focus on the specific use cases of AI so you can tailor any requirements around it to that use case. There are so many examples of something that works perfectly in one use case but requires a different approach for another. 

How is Booz Allen using its position to help the industry move towards a responsible AI future?

With more than 160 enterprise-wide deployments of AI in the federal government, we have a long history of working with and developing capabilities around responsible AI. This has included efforts to support the deployment of AI in alignment with the Department of Defense Ethical Principles for AI. We have also provided input for NIST’s Responsible AI and AI Risk Management frameworks. Finally, we have developed our own AI Ethical Authority to Operate framework used to govern our AI development and deployment in line with our core values and principles. Through all these efforts, we have strived to contribute to federal responsible AI initiatives as thought leaders and provided our expertise to advise the government on key issues. 

Our recent investment in Credo AI is an extension of this work. What we value about Credo is that they have built a framework that allows users to embed the regulatory requirements into AI operations. They have these incredible tools called policy packs, which allow you to identify which policies impact a model and determine the policies that you must adhere to.

Using these tools, we can operationalize those policies through a streamlined set of actions that ensure compliance with the information from the policy pack based on government regulations. With that, we can embed it within our aiSSEMBLE framework, a software factory we use to operationalize AI at enterprise scale for the government.
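As a purely hypothetical sketch of the concept (not Credo AI’s or aiSSEMBLE’s actual API), a policy pack can be thought of as a named set of checks a model must pass before deployment:

```python
# Hypothetical illustration of embedding policy requirements into an AI
# pipeline: a "policy pack" modeled as named checks run against model
# metadata. This is a sketch of the concept, not any vendor's real API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyCheck:
    name: str
    passes: Callable[[dict], bool]   # takes model metadata, returns pass/fail

policy_pack = [
    PolicyCheck("bias gap under 0.05", lambda m: m["parity_gap"] < 0.05),
    PolicyCheck("training data documented", lambda m: m["datasheet_url"] is not None),
    PolicyCheck("human review for high-impact use", lambda m: m["human_in_loop"]),
]

model_metadata = {
    "parity_gap": 0.03,
    "datasheet_url": "internal://datasheet/123",   # hypothetical identifier
    "human_in_loop": True,
}

results = {check.name: check.passes(model_metadata) for check in policy_pack}
for name, ok in results.items():
    print(f"{'PASS' if ok else 'FAIL'}: {name}")

print("cleared for deployment:", all(results.values()))
```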

We are trying to bring responsible AI to our clients and help inform the public sector, both as thought leaders providing our perspective and input and as a developer of solutions for the government, embedding those offerings into the operationalized deployment of our models to ensure that they learn in a responsible way.


Written by Ireland Degges
