OpenAI, the artificial intelligence giant that virtually single-handedly introduced generative AI into global public discourse with the launch of ChatGPT, is undergoing a significant personnel change. Dave Willner, an industry veteran who served as the startup’s head of trust and safety, announced in a LinkedIn post last night that he is resigning and transitioning to an advisory role. He stated that he intends to devote more time to his young family. He had held the position for a year and a half.
The timing of his departure is crucial for the field of AI.
Alongside the excitement surrounding the capabilities of generative AI platforms, which are based on large language models and can generate text, images, music, and more from basic user input, a growing number of questions have emerged. How should activity and businesses be regulated in this strange new world? How can we best mitigate adverse effects across a vast array of issues? These conversations are predicated on trust and safety.
Greg Brockman, the president of OpenAI, is scheduled to appear at the White House today with executives from Anthropic, Google, Inflection, Microsoft, Meta, and Amazon to endorse voluntary commitments to pursue shared safety and transparency objectives ahead of a forthcoming AI Executive Order. This follows a great deal of commotion in Europe over AI regulation, as well as shifting sentiment elsewhere.
OpenAI, which has sought to establish itself as a conscientious and dependable participant in the industry, recognizes the significance of all this.
In his LinkedIn post, Willner makes no explicit mention of any of the above. Instead, he keeps things general, remarking that his OpenAI job entered a “high-intensity phase” after the launch of ChatGPT.
He wrote, “I’m proud of everything our team accomplished during my time at OpenAI, and while my job there was one of the coolest and most interesting jobs you can have today, its scope and scale had expanded dramatically since I first joined.” While he and his wife, Charlotte Willner, who is also a specialist in trust and safety, made a commitment to always put family first, he said, “in the months following the launch of ChatGPT, I’ve found it increasingly difficult to keep my end of the bargain.”
Willner has only been in his position at OpenAI for one and a half years, but he has an extensive background in the field that includes overseeing the trust and safety teams at Facebook and Airbnb.
The Facebook stint is particularly fascinating. There, he was an early employee who helped draft the company’s first community standards, which remain the foundation of the company’s approach.
That was a very formative period for the corporation and, arguably, for the internet and society as a whole, given Facebook’s influence on the global development of social media. Some of those years were marked by extremely outspoken stances on free speech and the need for Facebook to resist demands to restrict controversial groups and posts.
In 2009, there was a heated debate in the public forum regarding how Facebook dealt with Holocaust deniers’ accounts and postings. Some employees and outside observers believed that Facebook had a responsibility to prohibit these posts. Others believed that doing so would be akin to censorship and would send an incorrect message regarding free speech.
Willner believed that “hate speech” was not the same as “direct harm” and therefore should not be moderated in the same manner. He wrote at the time, “I do not believe Holocaust Denial, as an idea in and of itself, poses a threat to the safety of others.” (For a blast from the past, see the complete post on this topic here.)
Given how everything has transpired since, it was a rather short-sighted and naive position. It appears, however, that at least some of his views evolved: in 2019, after leaving the social network, he spoke out against the company’s plans to grant politicians and public figures looser content moderation exceptions.
But if the stakes of building the right foundation at Facebook turned out to be higher than anticipated at the time, the same could arguably be said for this new wave of technology. According to an article published by The New York Times less than a month ago, Willner was initially hired by OpenAI to help it figure out how to prevent Dall-E, the startup’s image generator, from being abused to generate child sexual abuse imagery.
But as experts warn, OpenAI (and the industry) need such policies urgently. David Thiel, the chief technologist of the Stanford Internet Observatory, told The New York Times, “Within a year, we’ll be in a very problematic situation in this area.”
With Willner gone, who will now lead OpenAI’s charge on this front?
(We have reached out to OpenAI for comment and will update this post if we receive a response.)