Steve Orrin, federal chief technology officer at Intel, said government agencies should prioritize governance and assurance as they start evaluating use cases for artificial intelligence and should protect data and training models as they work to deploy AI tools.
“AI governance and assurance are necessary for the technology to become truly trustworthy, democratized and ubiquitous,” Orrin wrote in an article published Wednesday in FedTech Magazine.
He noted that agencies should ensure security and transparency in AI models and this could be done across three phases: data ingestion, model implementation and model optimization.
The federal CTO cited poisoning, fuzzing, prompt injection and spoofing as some of the cybersecurity threats facing AI and called on agencies to protect against such threats by implementing penetration testing, identity and access controls and continuous monitoring of outputs.
The Intel executive highlighted how confidential computing addresses a key aspect of zero trust and the importance of data encryption.
“Encryption of data at rest, in transit and in use strengthens security across the AI lifecycle, from data ingestion to model implementation and optimization,” Orrin said.
“By securing the building blocks of AI systems, organizations can achieve governance and assurance to make models and outputs more accurate for the agencies that use them and more trustworthy for the constituencies they serve,” he added.
Register here to attend the Potomac Officers Club’s 5th Annual Artificial Intelligence Summit on March 21 and hear federal leaders and industry experts discuss the latest developments in the field.