
UK’s approach to AI safety is untrustworthy, research cautions

The UK government has been attempting to project an image of itself as a global leader in the developing field of AI safety in recent weeks.

Last month, it dropped a flashy announcement of a summit on the subject and made a commitment to spend £100 million on a foundation model taskforce that, according to the government, will conduct “cutting edge” AI safety research.

However, the same government, led by UK Prime Minister and Silicon Valley aficionado Rishi Sunak, has opted not to establish new domestic legislation to control applications of AI, a stance that is dubbed “pro-innovation” in its own policy document on the subject.

Additionally, a deregulatory overhaul of the national data protection framework is being passed, which could have a negative impact on the safety of AI.

The latter is one of several findings made by the Ada Lovelace Institute, an independent research organization that is a charitable trust under the Nuffield Foundation, in a recent report examining the UK’s approach to AI regulation.

The report makes for diplomatic-sounding but occasionally awkward reading for ministers.

The paper makes a total of 18 recommendations for improving government policy and credibility in this area, if the UK wants to be taken seriously on the issue.

The Institute supports an “expansive” definition of AI safety that “reflects the wide variety of harms that are arising as AI systems become more capable and embedded in society.”

Thus, the report’s focus is on how to control “the current harm that AI systems can do.” Think real-world AI harms.

(Not the sci-fi-inspired hypothetical future threats that certain high-profile figures in the tech industry have recently played up, perhaps in an effort to grab politicians’ attention.)

It’s fair to say that for the time being, Sunak’s government’s approach to regulating (real-world) AI safety has been contradictory; heavy on flashy, industry-led PR claiming it wants to champion safety but light on policy proposals for setting substantive rules to guard against the smorgasbord of risks and harms we know can result from poorly considered applications of automation.

The main truth bomb from the Ada Lovelace Institute is as follows:

The UK government has outlined its ambitions to become an “AI superpower,” including convening a global summit in the fall of 2023 and harnessing the growth and adoption of AI technology to benefit the UK’s society and economy.

Effective domestic legislation, which will serve as the foundation for the UK’s future AI economy, is necessary for this goal to become a reality.

The report’s extensive list of suggestions shows that the Institute believes the UK’s present approach to AI has a lot of space for improvement.

The government announced its preferred strategy for domestic AI regulation earlier this year and stated that, at this time, it did not see the necessity for new legislation or oversight organizations.

Instead, the white paper set out a handful of flexible principles and advised existing sector-specific (and/or cross-cutting) regulators to “interpret and apply” them “to AI within their remits.” Just without any additional funding or new legal powers to oversee novel applications of AI.

The white paper outlines five guiding principles, including fairness, accountability, contestability, and redress, as well as safety, security, and robustness. All of this appears to be OK on paper, but when it comes to regulating AI safety, paper is obviously not enough.

The UK’s decision to leave it up to current regulators to decide what to do about AI contrasts sharply with that of the EU, where parliamentarians are occupied with reaching consensus on a risk-based framework that the bloc’s executive suggested back in 2021.

To put it gently, the UK’s shoestring strategy of handing existing, overburdened regulators new duties to monitor AI developments in their spheres of influence, without any tools to enforce consequences against bad actors, doesn’t look very trustworthy on AI safety.

It doesn’t even seem like a coherent plan if you’re trying to be pro-innovation, since it will require AI developers to navigate a whole patchwork of sector-specific and cross-cutting legislation drafted long before the most recent AI boom.

Additionally, regulators will have to keep an eye on developers (however feeble that attention may be, given their lack of resources and legal clout to enforce the aforementioned principles).

Therefore, it appears to be a recipe for confusion over which current laws may apply to AI applications.

(And, most likely, a mishmash of legal interpretations, depending on the industry, use-case, oversight agencies, etc. Confusion and expense result, not clarity.)

There will still be many gaps, as the Ada Lovelace Institute’s analysis also points out, even if existing UK regulators immediately publish guidelines on how they will treat AI — as several already have or are planning to do.

This is because coverage gaps are a feature of the UK’s current regulatory architecture.

The proposal to simply extend this strategy further means that regulatory inconsistencies will become entrenched, and even accentuated, as the use of AI scales across all sectors.

Once more, per the Institute:

Currently, large portions of the UK economy are either completely unregulated or only loosely regulated. It is unclear who would be in charge of putting AI principles into practice in these situations, which include: public sector services like education and policing, which are overseen and enforced by an uneven network of regulators; activities carried out by central government departments, which are frequently not directly regulated, like tax collection; and sensitive practices like recruitment and employment, which are not fully monitored by regulators, even within regulated sectors.

“AI is being implemented and used in every industry, but there are still large holes in the UK’s patchwork legal and regulatory network for AI. To guarantee that safeguards apply throughout the economy, clearer rights and new institutions are required,” it also argues.

Another growing inconsistency with the government’s claimed “AI leadership” position is its ongoing effort to weaken domestic data protections for individuals via the deregulatory Data Protection and Digital Information Bill (No. 2), including by lowering protections for people subject to automated decisions with significant and/or legal consequences.

The government is moving forward with a plan to lessen the level of protection citizens currently have under current data protection law in a number of ways, even though it has so far refrained from the most headbanging Brexiteer suggestions for tearing up the EU-derived data protection rulebook, such as simply deleting the entirety of Article 22 (which deals with protection for automated decisions) from the UK’s General Data Protection Regulation.

“The UK GDPR, the existing legislative framework for data privacy in the UK, offers safeguards that are essential for shielding people and communities from possible damages caused by AI,” the report says.

The Institute notes that the Data Protection and Digital Information Bill (No. 2), introduced in March 2023, “significantly amends these protections,” citing as an example the Bill’s removal of the ban on many types of automated decision-making and its replacement with a requirement that data controllers have “safeguards in place, such as measures to enable an individual to contest the decision,” which it contends is a lower level of protection in practice.

It continues: “It is even more crucial that underpinning regulation like data protection governs AI effectively, given the Government’s proposed framework’s reliance on existing legislation and regulators.”

According to legal counsel obtained by the Ada Lovelace Institute, “existing automated processing safeguards may not in fact provide sufficient protection to people interacting with routine services, like applying for a loan.”

The paper continues, “Taken together, the Bill’s changes run the risk of further undermining the Government’s regulatory proposals for AI.”

The government should thus reconsider provisions of the data protection reform bill that are “likely to undercut the safe development, deployment, and use of AI, such as changes to the accountability framework,” according to the Institute’s first proposal.

It also urges the government to broaden the scope of its study to consider all rights and safeguards already recognized by UK law, with the goal of filling any remaining legal gaps and, as appropriate, creating new rights and protections for anyone impacted by AI-informed decisions.

The report also suggests creating a statutory obligation for regulators to consider the aforementioned principles, including “strict transparency and accountability obligations,” and giving them more funding and resources to address AI-related harms.

It also suggests exploring the introduction of a common set of regulatory powers, including an ex ante, developer-focused regulatory capability. Finally, it suggests that the government investigate whether an AI Ombudsperson should be established.

The Institute also urges the government to clarify the law regarding AI and liability, another area in which the EU is well ahead of other countries.

The Institute also thinks the government needs to go further on foundation model safety, an issue that has drawn particular attention from the UK government of late thanks to the widespread interest in generative AI tools like OpenAI’s ChatGPT.

It suggests that UK-based foundation model developers be given mandatory reporting requirements to help regulators keep up with a very fast-moving technology.

It also proposes that leading foundation model developers like OpenAI, Google DeepMind, and Anthropic should be required to notify the government when they (or any subprocessors they’re working with) begin large-scale training runs of new models.

It suggests that reporting requirements should also give regulators access to the data used to train models, the results of internal audits, and supply chain data.

This would give the government an early warning of advancements in AI capabilities, allowing policymakers and regulators to prepare for the impact of these developments rather than being caught off guard.

Another recommendation is for the government to fund modest pilot projects to improve its knowledge of developments in AI research and development.

The Ada Lovelace Institute’s associate director, Michael Birtwistle, provided the following comment in response to the report’s findings:

The prime minister deserves praise for his global leadership on this topic, and the government is right to recognize that the UK has a special potential to be a world leader in AI legislation.

However, the UK’s credibility on AI regulation depends on the capacity of the government to implement a top-notch regulatory framework domestically.

International coordination efforts are very welcome, but on their own they fall short.

If the government hopes to be taken seriously on AI and realize its global goals, it must bolster its domestic regulatory initiatives.
