Google’s AI chatbot Bard, which we previously covered, has officially debuted in the European Union. We understand that it did so after implementing adjustments to improve transparency and user controls, but the bloc’s privacy watchdogs remain on high alert, and significant decisions over how to enforce EU data protection rules on generative AI have yet to be made.
The Irish Data Protection Commission (DPC), Google’s primary regional data protection watchdog, informed us that it will keep working with the tech giant on Bard after the app’s debut.
According to the DPC, Google has committed to carrying out a review and providing a report to the watchdog in three months, or around mid-October. So while there is not (yet) a formal investigation, the AI chatbot faces heightened regulatory scrutiny in the coming months.
A European Data Protection Board (EDPB) taskforce is also examining whether AI chatbots comply with the EU’s General Data Protection Regulation (GDPR).
The taskforce initially concentrated on OpenAI’s ChatGPT, but from what we’ve heard, Bard issues will be folded into its work, which aims to coordinate any enforcement actions the bloc’s various data protection authorities (DPAs) may take.
“Google has made a number of adjustments ahead of [Bard’s] launch, with a focus on improved user controls and enhanced transparency,” said DPC deputy commissioner Graham Doyle. Google has committed to carrying out an assessment and submitting a report to the DPC three months after Bard becomes operational in the EU, and the regulator will continue to work with the company on the matter post-launch.
Doyle added that the European Data Protection Board established a task force earlier this year, of which the DPC is a member, to examine a wide range of challenges in this area.
The EU launch of Google’s ChatGPT rival was postponed last month after the Irish regulator urgently requested details that Google had not yet provided.
This included a data protection impact assessment (DPIA), a crucial compliance document for identifying potential risks to fundamental rights and evaluating mitigating measures, which Google had withheld from the DPC. Failing to provide a DPIA raises serious regulatory red flags.
Doyle told TechCrunch that the DPC had indeed recently seen a DPIA for Bard.
This document, along with other “relevant” records, will be included in the three-month review, he claimed, adding that “DPIAs are living documents and are subject to change.”
Google said it has “proactively engaged with experts, policymakers, and privacy regulators on this expansion” in an official blog post, but did not immediately provide any details on the exact measures taken to reduce its regulatory risk in the EU.
We contacted the tech giant with questions about the transparency and user control changes made ahead of Bard’s EU launch. A spokeswoman highlighted a number of areas the company has focused on, which she suggested would ensure it is rolling out the technology responsibly, including limiting access to Bard to users aged 18 or older who have a Google Account.
One significant change she pointed to is a new Bard Privacy Hub, which she said makes it easy for users to find explanations of the available privacy controls.
Google’s stated legal justifications for Bard include contract performance and legitimate interests, according to information on this Hub.
However, it seems to be doing the majority of the associated processing on the latter basis. (It also mentions that as the product evolves, consent to the processing of data for particular purposes may be requested.)
According to the Hub, the only clearly labeled data deletion option lets users erase their own Bard usage history; there is no obvious way for users to ask Google to delete the personal information used to train the chatbot.
It does provide a web form for reporting problems or legal issues, and it states that users can request a correction of inaccurate information generated about them or object to the processing of their data (the latter being a requirement under EU law when processing is based on legitimate interests).
Google also offers a second web form through which users can request the removal of content that violates its own policies or applicable laws (most obviously copyright infringements). Since Google also directs users to this form if they want to object to its processing of their data or request a correction, it appears to be the closest thing to a “delete my data from your AI model” option.
Other changes, according to the Google spokeswoman, concern user controls over how long the company retains Bard activity data, including the option to opt out of having activity logged at all.
By default, Google retains users’ Bard activity in their Google Accounts for up to 18 months, although users can alter this to three or 36 months if they prefer.
At g.co/bard/myactivity, users can quickly and easily turn off activity storage and delete all of their Bard activity, the spokeswoman said.
On the surface, Google’s approach to transparency and user control with Bard appears to be quite similar to the adjustments OpenAI made to ChatGPT in response to regulatory review by the Italian DPA.
Earlier this year, the Garante, Italy’s DPA, ordered OpenAI to suspend ChatGPT locally, a move that attracted widespread attention and surfaced a host of data privacy concerns.
ChatGPT was able to resume service in Italy after OpenAI worked through the DPA’s to-do list. This included adding privacy disclosures about the data processing used to create and train ChatGPT, letting users opt out of data processing for training its AIs, and offering a way for Europeans to request deletion of their data, including if OpenAI was unable to fix errors the chatbot generated about them.
To address concerns about children’s safety, OpenAI was also required to add an age gate promptly and to work on improving its age assurance technology.
Additionally, Italy ordered OpenAI to remove references to performance of a contract as a claimed legal basis for the processing, stating that it could rely only on either consent or legitimate interests. (Indeed, when ChatGPT resumed operating in Italy, OpenAI appeared to be relying on LI as its legal basis.) And we know the EDPB taskforce is examining legal basis as one of the challenges on that front.
The Italian DPA launched its own inquiry into ChatGPT and compelled OpenAI to make a number of rapid changes in response to its concerns. That probe remains ongoing, a Garante spokeswoman confirmed to us today.
Other EU DPAs have also said they are looking into ChatGPT, which faces regulatory scrutiny from across the EU because, unlike Google, OpenAI does not have a main establishment in any Member State.
In comparison to Google’s chatbot, which, as we’ve said, isn’t formally under investigation by the DPC yet, this means that OpenAI’s chatbot may be subject to greater regulatory risk and uncertainty. It also means that the company’s compliance situation is more complicated because it must deal with inbound requests from multiple regulators rather than just a lead DPA.
If EU DPAs can agree on shared enforcement positions on AI chatbots, the EDPB taskforce may be able to reduce some of the regulatory uncertainties in this area.
That said, certain authorities have already set out their own strategic positions on generative AI. France’s CNIL, for instance, published an AI action plan earlier this year stating that it would pay special attention to protecting publicly accessible data online from scraping, a technique both OpenAI and Google employ when building large language models like ChatGPT and Bard.
Therefore, it is unlikely that the taskforce will produce total agreement among DPAs on how to handle chatbots, and some variation in approach seems probable.