Microsoft, Google, Anthropic and OpenAI have established an industry body that seeks to advance responsible and safe development of frontier artificial intelligence models.
The Frontier Model Forum has four core objectives: identifying best practices; advancing AI safety research; sharing knowledge about trust and safety risks with policymakers, civil society, companies and academics; and supporting efforts to build applications that address society’s major challenges, Microsoft said in a July 26 blog post.
The forum will form an advisory board to guide its priorities and strategy. Its founding members will also develop a charter and other key institutional arrangements, with a working group and an executive board to lead those efforts.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity,” said Brad Smith, vice chair and president of Microsoft.
Over the coming year, the forum will prioritize three actions: promoting best practices and knowledge sharing among governments, industry, academia and civil society; determining key research questions on AI safety; and establishing secure mechanisms for sharing information on AI safety and risks among governments, companies and other stakeholders.
In July, the White House received voluntary commitments from the Frontier Model Forum’s founding members and three other AI companies to help promote secure, safe and transparent development of AI tools.