
Beyond ChatGPT: AI Considerations For Law Firms

This article was first published in Law360 on March 24, 2023.

Since the launch of the artificial intelligence chatbot ChatGPT late last year, there have been numerous articles asking whether it, or other forms of artificial intelligence, will replace the need for lawyers, or even put them out of business altogether.

In response to this concern, lawyers have undertaken their own tests of the software, asking it questions in order to demonstrate its limitations.

While acknowledging the software’s impressive ability to immediately produce simple template agreements and provide comprehensive answers to basic legal questions, lawyers have found — and been keen to emphasize — that it struggles to provide tailored or nuanced responses to more complex queries, and often recommends that legal advice be obtained.

However, the potential for ChatGPT to change the legal landscape is clear. Allen & Overy LLP recently announced that it had adopted its own artificial intelligence assistant, “Harvey,” based on similar technology, which, the firm states, will “automate and enhance various aspects of legal work, such as contract analysis, due diligence, litigation and regulatory compliance,” while stressing that the output would require “careful review” by a lawyer. It is reasonable to assume that other firms will start to bring in their own bespoke products in order to address some of the current challenges around accuracy and confidentiality. As a result, firms will need to start grappling with the issues that arise from the use of AI, some of which are set out below.

The Wider Implications for Law Firms and Their Clients

Liability

At the heart of any contract for legal services is the assumption by the law firm of the client’s risk, or at least some of it. This works well because clients want that expertise and assurance, and firms are happy to assume the risk because it falls within their expertise. In addition, they receive consideration through payment.

Firms manage this assumption of risk on a daily basis, through ensuring their teams are competent, well-trained and up to date. They carefully scope out the terms of the retainer and assess whether the proposed task is within the firm’s risk appetite.

Having said that, mistakes happen. Firms therefore also carry significant professional indemnity insurance, usually at levels greatly in excess of the minimum required by the regulator.

Often mistakes are due to human error, and the firm is liable. But if the advice were provided via technology, who would be liable? If the technology produces information and the lawyers use it to provide advice, then clearly the firm is still liable.

If technology such as AI provides an answer and the lawyer checks it before providing that advice, the firm is still liable to the client, although seriously faulty output from the technology might in turn give rise to claims by law firms against the provider.

If AI produces the answer and the law firm provides it without any further checks or input, or indeed if the law firm’s service is simply to provide access to an AI system that generates answers, who is liable to the client then?

It will depend, of course, on the specific circumstances, but it is conceivable that law firms may seek to share some of the risk with their clients, for example where clients ask bad questions or start with bad data. Perhaps, then, AI solutions will, or should, only be used by sophisticated clients, such as general counsel with legal training and a background understanding of the legal issues.

Regulatory

Firms will need to consider how they ensure that any use of AI is compatible with their regulatory obligations. Firms and individuals have obligations to deliver a competent service to clients. Firms are required to have effective systems in place for supervising client matters, and supervisors are accountable for the work of those they supervise.

It is likely that as the use of AI in firms increases, firms will need to develop policies or checklists to ensure that the results of any AI-generated research or contracts are checked in a systematic manner before they are provided to clients.

A further consideration is the U.K. Legal Services Board’s current focus on continuing competence within the legal profession, with individual regulators recently being asked to set out how they intend to ensure the competence of those they regulate.

The use of AI to conduct research or draft contracts may change the dynamic in terms of the skill set that junior solicitors are required to develop, and it is likely that the U.K. Solicitors Regulation Authority will need to reconsider and revise its statement of solicitor competence to specifically address the use of AI if it becomes more widespread.

At present, ChatGPT should only be used for generic research or queries, given that it retains and stores the information it is given in order to respond to future prompts. There is currently a risk of a solicitor breaching a client’s confidentiality if they input client details or information when asking a question.

Finally, SRA rules require firms to give clients the best possible information about how their matter will be priced. A solicitor’s hourly rate will usually be reflective of their experience and the amount of time that it takes for them to complete a task.

If AI is to be used more widely, then firms may have to explain how they have factored this into their costs and the extent to which they have passed on the cost of any product to their clients.

Firm Structure and Culture

Most law firms still have a quite traditional structure. Trainees, associates and partners do the legal work, and a network of specialist professionals perform other vital roles, such as finance, business development, IT, administration, human resources and risk management — the list is long.

Larger firms often have more sophisticated structures, with different tiers of legal expertise or freelance models, but in the vast majority of cases lawyers do the legal work, and junior lawyers do most of theirs supervised and trained by senior lawyers.

If AI takes a central role in providing legal services, the traditional model will evolve. Clients already resist paying for juniors to learn, wanting to pay only for the value they add. Juniors may evolve into technology tsars, skilled in directing the AI to produce pertinent answers to specific client issues.

But would IT professionals be better placed to perform those roles? And how do junior lawyers gather an understanding of clients’ legal needs if their only experience is managing the AI?

It is often said that partners’ jobs are safe because clients value partners’ judgment and their strong relationships with them. There is probably some truth in that, especially during this transitional phase as AI becomes prevalent, but if AI performs very well then the cost of partners may become unattractive to some clients.

Even if the traditional partner role, as ultimate guardian of the relationship and accountable for the team, is untouched by AI, how long will this last?

Junior generations will have had a vastly different legal education to the current cohort of partners. Will they share the same skills?

Different may be better or worse, of course. And if it is accepted that firms need to cultivate the next generation of partners in the image of the current cohort, when will they decide who is destined for the highest rank? If that decision is made early in a lawyer’s career, it may have serious diversity implications.

Partnership Model and Succession

The partnership model affords firms and partners great flexibility and full income distribution each year, which is popular with partners. However, full distribution leaves little retained capital, and large items of capital expenditure, such as investment in new technologies, are often a vital way of staying relevant for clients.

Such investment can require widespread partner support, and for a partner nearing retirement it can equate to a drop in short-term profits, with the longer-term benefits arising only after they leave.

In the largest firms, the value of innovation is often well recognized, and the impact of individual dissent is minimal, but in medium- and smaller-sized firms this can be significant. In extremis this could lead to division and distraction from client service, while undermining long-term succession planning within the firm.

Conclusion

While some firms or individuals may be reassured by the current limitations of ChatGPT, it would be unwise to be complacent. The use of AI is likely to become increasingly mainstream and firms should be seeking to address these issues sooner rather than later.

If you have any questions arising from this article or would like to discuss the implications of ChatGPT or AI more generally for your firm, please contact Partners Andrew Pavlovic or Corinne Staves.