The White House has received voluntary commitments from eight companies to help foster secure, safe and trustworthy development of artificial intelligence tools.
Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI and Stability AI have committed to adhering to three principles of responsible AI development in support of the current administration’s efforts to manage AI risks, the White House said Tuesday.
To help ensure the safety of their AI technology, the companies will have independent experts test their AI systems before launch and will share risk-management information with government agencies, academia and civil society.
They also vowed to safeguard unreleased and proprietary model weights through investments in insider threat prevention and other cybersecurity initiatives, and to facilitate third-party detection and reporting of vulnerabilities in their AI tools.
Other commitments include creating technical mechanisms to better inform users when content is AI-generated, pursuing research on the potential societal risks of AI systems and developing advanced AI systems to help address challenges facing society.
The administration is working on an executive order on AI as part of efforts to safeguard the rights and safety of U.S. citizens.
The move marks the latest round of voluntary commitments advancing the White House’s AI risk management efforts. In July, seven AI companies pledged to develop trustworthy, safe and secure AI technologies.