Companies including Adobe, IBM, and Nvidia have promised the White House they will develop safe, secure, and trustworthy AI, in the second such agreement the Biden administration has negotiated with AI developers. Other companies that signed on include Cohere, Palantir, Salesforce, Scale AI, and Stability AI.
Many of the commitments are similar to the earlier ones signed by Meta, Google, and OpenAI. The agreements are voluntary, so there is no punishment if the companies fail to follow through.
In a press release, the White House said the companies have agreed to conduct internal and external testing of AI systems before commercial release, invest in safeguards to protect model weights, and share risk-management information with governments, civil society, and academia.
The companies also agreed to allow third-party reporting of vulnerabilities, watermark AI-generated material, publicly report the risks associated with their AI systems, research societal risks, and develop AI systems “to help address society’s greatest challenges.”
The Biden administration said it consulted with leaders from several countries to help develop these commitments.
AI has been a major focus of the Biden administration as it seeks to balance safety and innovation. The administration released an AI Bill of Rights intended to serve as a blueprint for rulemaking and directed the National Science Foundation to establish new National AI Research Institutes.
However, legislation on AI has lagged behind the pace of innovation. Regulations governing AI, particularly generative AI, are still under discussion, with the Senate resuming hearings on legislating the technology this week. Senate Majority Leader Chuck Schumer, who will be meeting with some leaders in the AI space on Wednesday, previously urged his colleagues to “pick up the pace” on AI policymaking.