The global rise of AI: opportunities and challenges

At the start of November, the government will host the first global summit on artificial intelligence (AI), focusing on its safe use. The AI Safety Summit will take place at Bletchley Park, once the top-secret home of WW2 codebreakers and a venue synonymous with innovation and pioneering work.

The Summit will focus on the risks created (or at least significantly increased) by the most powerful AI systems and tools, particularly those regarded as having the potential to be extremely disruptive or damaging to society. It will also consider how AI – when used safely and effectively – can serve the public good and improve, indeed enrich, people’s lives; for example, through lifesaving medical technology.

Ahead of the Summit, the government has drafted the following five objectives:

  • a shared understanding of the risks posed by frontier AI and the need for action;
  • a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks;
  • appropriate measures which individual organisations should take to increase frontier AI safety;
  • areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance; and
  • a showcase of how ensuring the safe development of AI will enable AI to be used for good globally.

Many countries, global organisations, corporates and academics are already working together to manage the risks associated with the advancement of AI. The Summit is expected to build on this collaboration, with attendees seeking to work through and agree what is needed to address the risks posed. It could not come at a more important time, as the world seeks to unpick the conundrum of AI – both the challenges it throws up and the opportunities it presents.

The regulatory landscape – a comparison with the US

When thinking about how to use AI effectively and safely within a law firm, it is key to identify and manage the associated risks without stifling innovation. One of the priorities in the Solicitors Regulation Authority (SRA) 2023/24 draft Business Plan continues to be supporting innovation and technology, particularly that which improves the delivery of legal services and access to them. The SRA recognises that fast-moving AI can drive new risks and challenges for regulation and, as part of its work, it will consider the effectiveness of introducing a new regulatory sandbox, enabling those it regulates to test innovative products and initiatives in a safe space.

In terms of the SRA’s regulatory requirements, individuals and firms must provide services to clients competently, taking account of clients’ specific needs and circumstances. Those with management responsibilities must ensure the same in respect of those they supervise. Furthermore, law firms must identify, monitor and manage all material risks to their business – and generative AI is fast becoming one of the most prominent of these.

In England & Wales, the use of AI would currently fall squarely within the broad competency requirements. However, in the US, comment 8 to rule 1.1 of the American Bar Association’s (ABA) Model Rule of Professional Conduct was added almost a decade ago to address technological competency: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.”. As of March 2022, 40 US states had adopted an express duty of technological competence for lawyers, with many of the remaining states taking the view that this duty is already implicit.

The ABA Commission on Ethics 20/20 explained that the reference was added owing to the “sometimes bewildering pace of technological change”. A decade on, we are in a very different place with AI. This raises the question: do regulators, in our case the SRA, need to be doing more? Will the use of AI within law firms eventually be written into the Codes of Conduct?

At the end of August 2023, the ABA formed a task force to study the impact of AI on the legal profession and the related ethical implications for lawyers. The task force will consider the potential benefits of using AI, such as making legal services more accessible and creating process efficiencies, while evaluating some of the risks AI can pose. For generative AI and the increasingly prevalent large language models (LLMs), these include bias, data privacy issues and an LLM’s capacity to spread incorrect information as a result of hallucinations. These pitfalls were illustrated earlier this year in the US, when a $5,000 fine was imposed on two attorneys and their law firm for filing a brief drafted with the help of ChatGPT which cited six fictitious judicial decisions. In finding that the attorneys had acted in bad faith, U.S. District Judge P. Kevin Castel noted: “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

Global approach to AI regulation – what’s needed?

Generative AI is disruptive and its trajectory is fast-paced, and governments around the world are trying to work out how best to strike a balance: harnessing the power AI offers while addressing the risks and potential dangers it poses if used unethically or without safeguards in place. Countries are now considering what they need to do to regulate the proliferation of AI in everyday life.

Legislative approach

Some countries are looking to legislation to promote the safe and effective use of AI. Brazil, for example, has a draft AI bill which focuses on the rights of users interacting with AI. This will be accompanied by guidelines categorising AI systems by the risk they pose. Prioritising user safety means that AI providers will be required to give information about their products and to conduct extensive risk assessments before bringing a product to market. Crucially, users will be able to contest AI-generated decisions and demand human-led evaluation, particularly where the outcome is likely to have a significant impact on them. All AI developers will be liable for any damage caused, with developers of higher-risk products held to a higher standard of liability.

China has also published draft regulations which require generative AI to reflect “socialist core values” and to be designed to generate only “true and accurate” content. As drafted, the regulations require AI developers to “bear responsibility” for the output their AI creates. There are also restrictions on the data developers use to train their models, and developers will be liable if that training data infringes a third party’s intellectual property rights.

In June 2023, the European Parliament voted to approve the AI Act, with the next step (expected later this year) being for the European Council to rubber-stamp it. Similar to Brazil’s approach, the AI Act categorises the risk AI poses in three ways: unacceptable, high and limited. AI systems designated as ‘unacceptable’ are those regarded as a ‘threat’ to society, and these are banned under the Act. EU officials will need to approve high-risk AI both before market entry and on an ongoing basis. Systems posing a limited risk will still need to be appropriately labelled, enabling users to make informed decisions about their use.

A softer approach?

Other countries are looking to policy development. Israel, for example, has published a draft policy on AI regulation which is described as a “moral and business-oriented compass for any company, organisation or government body involved in the field of AI” and has a specific focus on responsible innovation. The draft policy states that the development and use of AI should respect the rule of law, fundamental rights and public interests and, in particular, maintain human dignity and privacy. Its overall positioning is to encourage self-regulation and minimise government intervention in the development of AI, with greater emphasis on sector-specific regulation, which lends itself to more bespoke intervention where that is needed.

Japan has adopted a similar approach, with no immediate plans for prescriptive rules. Instead, it is choosing to wait and see how AI develops over the coming months, stating that it does not want to stifle innovation at this formative stage. Japan is also relying on existing legislation – particularly in the information law space – to act as guidance for the development and use of AI. For example, in 2018 the Copyright Act was amended to allow copyrighted content to be used for data analysis. That amendment has since been extended to apply to AI training data, meaning that AI developers can train their models on copyrighted content, with a view to generating more accurate output and minimising the spread of misinformation.

Israel and, indeed, the UK government have taken a similar tack. The most recent version of the government’s white paper confirms that there is no plan to give responsibility for AI governance to a single new regulator. Instead, existing sector-specific regulators will be supported and empowered to produce and implement context-specific approaches that suit the way AI is used in their sectors. The white paper also outlines five principles regulators should have in mind for the safe and innovative use of AI:

  • safety, security and robustness;
  • transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

While there is no immediate plan to put these principles on a statutory footing, it is anticipated that a statutory duty on regulators will be introduced requiring them to have due regard to the principles.

Where does this leave law firms?

The advancement of AI (particularly in the form of generative AI and the explosion of LLMs such as ChatGPT, Bard and Bing) and its fast-paced trajectory will require agile and progressive leadership across all sectors, protecting us from harm while enabling us to reap the benefits it can provide. Law firms are no exception in grappling with this challenge.

Generative AI will prove valuable for law firms in process-driven areas such as due diligence, regulatory compliance (including client onboarding) and contract analysis. However, in managing the associated regulatory and reputational risks, firms must not lose sight of the fact that legal professionals have a fundamental oversight role in a firm’s use of this technology. PwC recently described this approach as “human led and technology enabled”.

Firms will need to adopt a robust ethical framework to underpin key decision-making processes in the use of AI, as well as heeding key pointers in the SRA Enforcement Strategy. This means being accountable, being able to justify the decisions that are reached and being able to demonstrate the underpinning processes and factors that have been considered.

Finally, the following will stand a firm in good stead when it comes to the use of generative AI:

  • Do not input confidential or commercially sensitive information into an open, publicly available LLM.
  • Scrutinise and verify the information the model generates. We know that these models can produce incorrect content which appears convincingly accurate.

This, combined with the government’s principles and a risk-based approach – taking on board the SRA’s stance set out in its Enforcement Strategy – is a good place to start.

Written by Jessica Clay of Kingsley Napley
