The White House has announced a voluntary agreement with prominent artificial intelligence firms. Are they being coerced, or is this a mutually beneficial relationship?
On Friday, the Biden administration announced a voluntary agreement with seven prominent AI companies, including Amazon (AMZN), Google, and Microsoft (MSFT).
The move, ostensibly aimed at mitigating the risks posed by artificial intelligence and protecting the rights and safety of Americans, has prompted a number of questions, the foremost being: What does the new voluntary AI agreement actually entail?
The voluntary nature of these commitments looks promising at first glance. Regulation in the technology sector is perpetually contentious, with companies wary of rules that impede growth and governments eager to avoid missteps.
By declining to impose command-and-control regulations directly, the administration sidesteps the pitfalls of overly burdensome rules.
This is precisely the error that the European Union has made throughout the years, resulting in the suffocation of innovation on the continent.
Nonetheless, a closer examination of the voluntary agreement reveals certain limitations. Specifically, businesses may feel compelled to participate because of the implicit threat of regulation. As is always the case with governments, the line between a voluntary commitment and a required obligation is hazy.
In addition, the commitments lack specificity and appear to be broadly consistent with what the majority of AI companies are already doing: ensuring the safety of their products, prioritizing cybersecurity, and pursuing transparency.
Although the president describes these commitments as revolutionary, it may be more accurate to view them as the formalization of industry practices that already existed.
This raises the question of whether the administration’s action is a matter of optics or a substantive policy decision.
Despite its rhetoric, the Biden administration has not taken many steps to regulate artificial intelligence. To be explicit, this may very well be the best course of action.
It implies, however, that this agreement is more likely to be perceived as a symbolic gesture intended to appease so-called “nervous ninnies” (vocal critics concerned about the impact of AI) than as a step toward aggressive regulation.
While managing risks and maintaining safety are commendable objectives, the administration’s brief press release provides few specifics. The agreement does not spell out what outcomes it intends to achieve or what concrete steps the parties are taking.
What does this imply for the future of artificial intelligence? Most likely, the answer is not much. This accord appears to be primarily a public relations exercise, both for the government, which wants to demonstrate that it is taking action, and for the AI companies, which are eager to demonstrate their commitment to responsible AI development.
However, it is not a wholly empty gesture. It does emphasize the importance of safety, security, and trust in AI, and it reinforces the notion that corporations should be accountable for the potential societal impact of their technologies.
In addition, the administration’s emphasis on a cooperative approach incorporating a wide range of stakeholders hints at a potentially fruitful future course for AI governance. However, we should not overlook the possibility of the government becoming too close to industry.
Still, this announcement should not be interpreted as a seismic shift in AI regulation. It is a relatively modest step on the path to responsible AI. At the end of the day, the government and these companies issued a press release.