Artificial intelligence is sweeping the world, and the United States federal government is no exception. AI is being researched, developed, adopted and implemented across the Department of Defense in particular, and as the technology is better understood and harnessed, new opportunities are being unlocked.
However, with the rise of AI language tools and chatbots, security, ethics and responsibility around AI are being called into question. Google is one company in the AI space that is making sure it gets the safety piece right.
“We’re getting ready to make some of our state-of-the-art language models available to the public so they can interact with our tools directly,” Google Public Sector Vice President Leigh Palmer said in an exclusive video interview with Executive Mosaic.
“AI is a big part of our culture and a big part of what we do here at Google. That being said, we’re committed to launching these features responsibly,” she added.
Palmer said Google is conducting tests to ensure its AI tools meet the bar for safety, so that when they are fully released to the public, security and responsibility are at the forefront.
In her conversation with Executive Mosaic’s Summer Myatt, Palmer also discussed the Department of Defense’s JWCC contract, how Google will deliver capabilities as an awardee and what the contract means for the defense landscape.
This video is part of Executive Mosaic’s series on all four JWCC awardees. Check out Microsoft’s contribution to the conversation, and stay tuned for insights from AWS and Oracle.