Four of the most prominent AI companies are forming a new organization to ensure the “safe and responsible development” of so-called “frontier AI” models.
In response to growing calls for regulatory oversight, ChatGPT developer OpenAI, Microsoft, Google, and Anthropic have announced the Frontier Model Forum, a coalition that will draw on member companies’ expertise to develop technical evaluations and benchmarks and to promote best practices and standards.
Fresh front
At the center of the forum’s mission is what OpenAI has previously referred to as “frontier AI”: advanced AI and machine learning models considered to pose “severe risks to public safety.” The companies argue that such models present a unique regulatory challenge because “dangerous capabilities can arise unexpectedly,” making it difficult to prevent the misappropriation of models.
The forum’s stated objectives include:
i) Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
ii) Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
iii) Collaborating with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks.
iv) Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
Although there are only four members of the Frontier Model Forum at present, the group states it welcomes new members. Organizations that qualify must develop and deploy cutting-edge AI models and demonstrate a “strong commitment to frontier model safety.”
In the immediate future, the founding members will establish an advisory council to steer the organization’s strategy, as well as a charter, governance, and funding structure.
“We plan to consult with civil society and governments in the coming weeks on the design of the Forum and on meaningful ways to collaborate,” the companies wrote in a statement released today.
Regulation
While the Frontier Model Forum is intended to show that the AI industry is taking safety concerns seriously, it also reflects Big Tech’s desire to forestall incoming regulation through voluntary initiatives, and perhaps even to write its own rules.
In fact, today’s news comes as Europe moves forward with what is expected to be the first concrete AI regulation, designed to place safety, privacy, transparency, and anti-discrimination at the core of companies’ AI development philosophies.
And last week, President Biden met with seven AI companies, including the four founding members of the Frontier Model Forum, at the White House to agree on voluntary safeguards against the risks of the burgeoning AI revolution, despite criticism that the commitments were somewhat vague.
Nonetheless, Biden also indicated that future regulatory oversight was possible.
“Realizing the promise of AI through risk management will require new laws, regulations, and oversight,” Biden said at the time. “In the coming weeks, I will continue to take executive action to help the United States lead the way in responsible innovation. And we will work with both parties to devise appropriate legislation and regulation.”