Companies voluntarily submit to AI guidelines

Keeping up with an industry as dynamic as AI is difficult. So until artificial intelligence can do it for you, here is a convenient roundup of recent news in the field of machine learning, as well as notable research and experiments that we did not cover on our own.


This week, OpenAI, Anthropic, Google, Inflection, Microsoft, Meta, and Amazon voluntarily committed to pursuing AI safety and transparency objectives in advance of a planned Executive Order from the Biden administration.

As my colleague Devin Coldewey writes, nothing here is a rule and nothing is enforced; the agreed-upon practices are entirely voluntary. However, the pledges sketch, in broad strokes, the AI regulatory approaches and policies that each vendor may find amenable in the United States and abroad.

The companies have agreed, among other things, to conduct security evaluations on AI systems prior to their release, to share information on AI mitigation techniques, and to develop watermarking techniques that make AI-generated content easier to identify.

In addition, they stated that they would invest in cybersecurity to safeguard private AI data and facilitate vulnerability reporting, as well as prioritize research on societal risks such as systemic bias and privacy concerns.
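On the watermarking commitment in particular, none of the signatories has said how it will be done, but one published academic proposal (a “green list” logit-bias scheme) gives a feel for what is possible: a pseudorandom slice of the vocabulary is favored at sampling time, and a detector later checks whether suspiciously many tokens fall in that slice. A toy Python sketch, purely illustrative and not any company’s actual method; the constants GAMMA and DELTA are invented:

```python
import hashlib
import random

GAMMA = 0.5   # invented: fraction of the vocabulary on the "green list"
DELTA = 2.0   # invented: logit bias added to green-list tokens

def green_list(prev_token_id: int, vocab_size: int) -> set[int]:
    """Seed a PRNG with the previous token so the vocabulary partition
    is reproducible by anyone who knows the (secret) scheme."""
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(GAMMA * vocab_size)])

def bias_logits(logits: list[float], prev_token_id: int) -> list[float]:
    """At generation time, nudge the model toward green-list tokens."""
    greens = green_list(prev_token_id, len(logits))
    return [x + DELTA if i in greens else x for i, x in enumerate(logits)]

def green_fraction(token_ids: list[int], vocab_size: int) -> float:
    """At detection time, count how often each token falls in the green
    list derived from its predecessor."""
    hits = sum(
        1 for prev, cur in zip(token_ids, token_ids[1:])
        if cur in green_list(prev, vocab_size)
    )
    return hits / max(len(token_ids) - 1, 1)
```

Because the partition is seeded by the preceding token, a detector can verify a passage without rerunning the model: human-written text scores near GAMMA, watermarked text well above it.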

The commitments are an essential step, even if they are not enforceable. Nevertheless, one wonders whether the undersigned have ulterior motives.

OpenAI reportedly drafted an internal policy memo indicating the company’s support for mandating government licenses for anyone wishing to develop AI systems. In a May hearing before the U.S. Senate, CEO Sam Altman first proposed the establishment of an agency that could issue licenses for AI products and revoke them if anyone violated the rules.

Anna Makanju, OpenAI’s vice president of global affairs, insisted in a recent press interview that OpenAI was not “pushing” for licenses and that the company only supports licensing regimes for AI models more powerful than its current GPT-4.

However, if government-issued licenses are implemented as OpenAI suggests, they could spark a conflict with entrepreneurs and open-source developers, who may view them as an attempt to make it more difficult for others to enter the market.

I believe Devin characterized it best when he compared it to “dropping nails on the road behind them in a race.” At the very least, it demonstrates the duplicitous nature of AI companies, which seek to appease regulators while shaping policy in their favor (in this case, by placing small competitors at a disadvantage).

It is a troubling state of affairs. But if policymakers step up to the task, there is still hope for adequate safeguards without undue private sector interference.

Here are additional AI-related news items from the previous few days:

OpenAI’s trust and safety lead departs: Dave Willner, an industry veteran who served as OpenAI’s head of trust and safety, announced in a LinkedIn post that he is resigning and moving into an advisory role.

OpenAI stated in a press release that it is searching for a replacement and that CTO Mira Murati will manage the team in the interim.

Custom instructions for ChatGPT: In other OpenAI news, the company has introduced custom instructions for ChatGPT users so that they do not need to type the same instruction prompts to the chatbot over and over.
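Custom instructions are a ChatGPT app feature rather than an API one, but developers calling the API directly can get much the same effect with a standing system message. A minimal sketch against the openai Python package’s 2023-era ChatCompletion interface; the instruction text and question are invented, and the snippet assumes an OPENAI_API_KEY environment variable is set:

```python
import openai  # pip install openai (the 2023-era 0.x interface)

# Hypothetical standing instruction the user would otherwise retype.
CUSTOM_INSTRUCTIONS = (
    "I am a Python developer. Keep answers concise and always include "
    "a runnable code example."
)

def ask(question: str) -> str:
    """Prepend the standing instruction as a system message so every
    request carries it without the user repeating themselves."""
    response = openai.ChatCompletion.create(
        model="gpt-4",  # assumes access to this model
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How do I read a CSV file?"))
```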

Google’s AI newswriter: According to a new report from The New York Times, Google is testing a tool that uses AI to write news articles and has begun demonstrating it to publications. The tech titan has presented the AI system to The New York Times, The Washington Post, and News Corp, the owner of The Wall Street Journal.

Apple tests a chatbot similar to ChatGPT: Apple is developing artificial intelligence to compete with OpenAI, Google, and others, according to a new report by Bloomberg’s Mark Gurman. In particular, the tech behemoth has developed a chatbot that some engineers refer to internally as “Apple GPT.”

Meta unveils Llama 2: Meta unveiled Llama 2, a new family of AI models designed to power applications similar to OpenAI’s ChatGPT, Bing Chat, and other contemporary chatbots. Meta claims that Llama 2’s performance has improved considerably over the previous generation of Llama models due to its training on a variety of publicly available data.
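Meta distributes the Llama 2 weights through Hugging Face under a gated license. Assuming access to the gated repository has already been granted, loading the 7B chat variant with the transformers library looks roughly like this sketch (the prompt text is invented; the chat models expect the [INST] format):

```python
# pip install transformers torch; assumes approved access to the gated
# meta-llama/Llama-2-7b-chat-hf repository on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The chat-tuned variant expects Llama 2's [INST] prompt format.
prompt = "[INST] What is Llama 2? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```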

Authors are opposed to generative AI: Not all content creators are happy that ChatGPT and other generative AI systems are trained on publicly accessible data, including their books.

In an open letter signed by over 8,500 authors of fiction, nonfiction, and poetry, the tech companies behind large language models such as ChatGPT, Bard, and LLaMa are criticized for using their work without permission or compensation.

Microsoft introduces Bing Chat for enterprise use: Microsoft introduced Bing Chat Enterprise at its annual Inspire conference, a variant of its Bing Chat AI-powered chatbot with business-focused data privacy and governance controls. With Bing Chat Enterprise, chat data is not saved, Microsoft does not have access to a customer’s employee or business data, and customer data is not used to train the AI models.
Additional machine learning

Technically, this was also news, but it deserves a mention in the research section. Fable Studios, which previously created CG and 3D short films for VR and other media, demonstrated an AI model called Showrunner that (according to the company) can write, direct, act in, and edit an entire television series; in its demonstration, the show was South Park.

I am on the fence about this. On the one hand, I believe pursuing this at all, much less during a major Hollywood strike involving compensation and AI, is in poor taste. Although CEO Edward Saatchi believes the tool places power in the hands of creators, it is also arguable that the opposite is true. In any case, members of the industry did not particularly like it.

On the other hand, if someone on the creative side (which Saatchi is) does not explore and demonstrate these capabilities, then others with less reluctance to use them will explore and demonstrate them.

Even if the claims Fable makes are a bit exaggerated compared to what they actually demonstrated (which has significant limitations), it is comparable to the original DALL-E in that it sparked discussion and even fear despite not being a replacement for a real artist. AI will have a position in media production one way or another, but it should be approached with caution for a multitude of reasons.

On the policy side, the National Defense Authorization Act recently passed with (as usual) a number of absurd policy amendments that have nothing to do with defense. One of them calls for the government to host a gathering where businesses and researchers try their hand at spotting AI-generated content. With the problem fast approaching “national crisis” proportions, it is probably for the best that this was slipped in there.

Presumably for theme park purposes, Disney Research is always looking for ways to bridge the digital and the real. In this instance, it has developed a method for transferring the virtual movements of a character or motion-capture data (such as a computer-generated dog in a film) onto an actual robot, regardless of the robot’s size or shape.

It relies on two optimization systems, each of which informs the other of what is ideal and what is feasible, similar to ego and super-ego. This should make it much simpler to make robot canines behave like real dogs, but it can also be applied to other things.
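Disney has not released code for this, but the ego/super-ego framing maps loosely onto alternating optimization: one step pulls the trajectory toward the reference motion (the ideal), the other projects it onto what the hardware permits (the feasible). A toy numpy sketch under those assumptions, with every number invented and no claim to match Disney’s actual method:

```python
import numpy as np

# Toy setup: a "motion" is a trajectory of two joint angles over time.
# reference: what the CG character does (the "ideal").
# lo, hi:    the real robot's joint limits (the "feasible").
reference = np.sin(np.linspace(0, 2 * np.pi, 50))[:, None] * np.array([1.5, 0.8])
lo, hi = np.array([-1.0, -0.5]), np.array([1.0, 0.5])

def ideal_step(traj, step=0.3):
    """Pull the trajectory toward the reference motion."""
    return traj + step * (reference - traj)

def feasible_step(traj):
    """Project onto the joint limits, then smooth slightly (a stand-in
    for velocity/torque constraints)."""
    clipped = np.clip(traj, lo, hi)
    clipped[1:-1] = (clipped[:-2] + 2 * clipped[1:-1] + clipped[2:]) / 4
    return clipped

traj = np.zeros_like(reference)
for _ in range(100):  # the two systems take turns informing each other
    traj = feasible_step(ideal_step(traj))

print("max limit violation:", float(np.max(np.maximum(traj - hi, lo - traj))))
print("tracking error:", float(np.mean((traj - reference) ** 2)))
```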

And let’s hope that AI can help us steer the world away from sea-bottom mining for minerals, which is a terrible notion. A multi-institutional study utilized artificial intelligence’s ability to separate signals from noise to predict the locations of valuable minerals around the world. According to the abstract:

In this study, we acknowledge the inherent complexity and “messiness” of our planet’s interconnected geological, chemical, and biological systems by employing machine learning to characterize patterns embedded in the multidimensionality of mineral occurrences and associations.

In fact, the study predicted and confirmed the locations of uranium, lithium, and other valuable minerals. The system “will enhance our understanding of mineralization and mineralizing environments on Earth, across our solar system, and throughout deep time.” Awesome.
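The paper itself should be consulted for the actual pipeline, but one common way to mine occurrence-and-association structure of this kind is non-negative matrix factorization over a localities-by-minerals matrix, where high reconstruction scores on never-observed pairs become the “go look here” predictions. A hedged sketch on synthetic data, offered as an illustration of the general technique rather than the study’s method:

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical binary occurrence matrix: rows are localities, columns
# are mineral species, 1 means the mineral has been recorded there.
rng = np.random.default_rng(42)
occurrence = (rng.random((200, 30)) < 0.15).astype(float)

# Factorize into latent "assemblages": W says how strongly each
# locality expresses each assemblage, H says which minerals belong to it.
model = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(occurrence)
H = model.components_

# The reconstruction assigns nonzero scores to mineral/locality pairs
# never observed together; the highest-scoring zeros are the candidates.
scores = W @ H
unseen_scores = np.where(occurrence == 0, scores, -np.inf)
flat = np.argsort(unseen_scores, axis=None)[::-1][:10]
rows, cols = np.unravel_index(flat, scores.shape)
for r, c in zip(rows, cols):
    print(f"locality {r} predicted to host mineral {c} "
          f"(score {scores[r, c]:.2f})")
```

In the real study, a confirmed prediction means field data later bears out one of those high-scoring pairs.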

