AI Considerations for Employers

AI remains front of mind for employers and employees alike, particularly with the European Parliament approving the Artificial Intelligence Act (EU AI Act) in March 2024 (with most parts likely to become effective in 2026).

In this news alert, Associate Liz Pearson and Partner Merrill April outline how firms are currently using AI, consider some potential challenges of integration, and examine the legal concerns which are emerging, particularly around the degree of human oversight, through a case study of the Uber “robo-firing” litigation.

What is AI?

The EU AI Act defines an “AI system” as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

In the workplace, people probably think of AI in the form of chatbots such as ChatGPT (OpenAI) or Bard (Google). These are a specific type of generative AI that uses a large language model (LLM) as its algorithmic base. The LLM is essentially ‘a next-word prediction engine. … Information is ingested, or content entered, … and the output is what that algorithm predicts the next word will be. The input can be proprietary corporate data or, as in the case of ChatGPT … whatever data it’s fed and scraped directly from the internet’.
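
To make the ‘next-word prediction’ description concrete, the sketch below builds a toy bigram model from a tiny invented corpus and predicts the most likely next word. It is a deliberately minimal illustration of the statistical principle only; real LLMs use neural networks trained on vastly larger datasets, and the corpus and function names here are our own invention.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "ingested" data.
# (Invented for illustration; real LLMs train on vastly more text.)
corpus = (
    "the employee uploaded the document "
    "the employee asked the chatbot "
    "the chatbot summarised the document"
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often observed after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("employee"))  # "uploaded" (seen as often as "asked")
print(predict_next("chatbot"))   # "the" (ties broken by first occurrence)
```

An LLM applies the same ‘predict the next token’ objective, but with billions of learned parameters rather than simple counts, which is what makes its outputs fluent yet occasionally confidently wrong.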

How are employers using AI?

Future articles will consider AI use in algorithmic management. Here, we consider a few examples of how law firms are already integrating AI into the workplace as another tool for employees and clients.

FleetAI – Dentons’ proprietary system 

FleetAI includes a chatbot based on the GPT-4 LLM and a bot for uploading and analysing multiple documents. In a product walkthrough, the team uploaded the EU AI Act amendments and asked questions such as:

  • Summarise in 100 words
  • Which amendments deal with high-risk AI systems?
  • Analyse amendment 59
  • What questions would you, acting as an experienced lawyer, ask of this document from a legal perspective?

All bar the last question were answered successfully, and that final question could be reworded for improved results.

As a chatbot example, the team asked the bot to generate clauses covering indemnities, third parties, and IP – all completed successfully.
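
FleetAI’s internals are proprietary, so the following is only a generic sketch of the ‘upload a document, then ask questions of it’ pattern those prompts follow, written against the public OpenAI Python client. The file path, model choice and prompt wording are our assumptions for illustration, not details of Dentons’ system.

```python
# A minimal document Q&A sketch, assuming the public OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY in the environment.
# This illustrates the prompting pattern only; it is not FleetAI.
from openai import OpenAI

client = OpenAI()

# Hypothetical local copy of the document under review.
document_text = open("eu_ai_act_amendments.txt", encoding="utf-8").read()

questions = [
    "Summarise this document in 100 words.",
    "Which amendments deal with high-risk AI systems?",
    "Analyse amendment 59.",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a legal research assistant. Answer using only "
                    "the supplied document and cite the relevant passages."
                ),
            },
            {
                "role": "user",
                "content": f"Document:\n{document_text}\n\nQuestion: {question}",
            },
        ],
    )
    print(question)
    print(response.choices[0].message.content, "\n")
```

In practice, a full statute would exceed the model’s context window, so tools of this kind typically split the document into chunks and retrieve only the relevant sections for each question.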

Harvey – Macfarlanes 

Harvey is an AI startup used by various firms. Macfarlanes provided examples of successful analysis and content creation:

Analysis and interpretation

  • Upload a loan agreement – can I do X under the agreement? If Y happens, what are the consequences? When would default interest be payable, and are there any rights to challenge that? Are there any other relevant terms? Present the previous analysis in a short email format, including references.
     
  • Upload a 50-page article – what are the three key takeaways?
     
  • Upload a case – how was legal advice privilege applied?

Content creation

  • Upload notes from a conference – present them in an engaging email to a general counsel at a financial institution.
     
  • Draft a mortgage clause.

A&O – ContractMatrix 

ContractMatrix was developed with Microsoft and Harvey (the AI startup discussed above). ContractMatrix ‘draws on existing templates for contracts, such as non-disclosure agreements and merger and acquisition terms, to draft new agreements that lawyers can then amend or accept’. 

The tool was used for the ‘world’s first 100 per cent AI generated contract between two companies’, when Dutch chipmaking equipment manufacturer ASML and health technology company Philips used ContractMatrix in their negotiations. 

Josef – personalised bots

Josef can be used to build personalised bots for firms or in-house teams. As an example, Herbert Smith Freehills (HSF) built a bot to help its clients respond to whistleblowing complaints.

Managing the integration of AI

The above firms have integrated their new technology alongside policies and additional training. For example, staff need to know: 

  • the sorts of tasks that the tools should be used for, 
  • the kinds of questions to ask (and how best to phrase them), and
  • relevant risks and how to avoid them.

While these larger firms have specialised technology staff who can assist with such endeavours, smaller firms will need to work closely with the AI provider to ensure staff can competently utilise these tools. We can assist employers and firms in considering and drafting appropriate policies and procedures. As noted by the SRA, firms remain responsible and accountable for the outputs of any AI systems.

Effects of integration

  • AI integration could trigger changes in job roles, skill requirements or working conditions, and could even result in job reduction programmes. Whether or not integration triggers collective consultation or collective bargaining rights, employers will need to consider carefully their implementation plans, individual consultation obligations and best practice.
     
  • Employees lacking exposure to such technologies – for example, those who are averse to change or resistant to technology (whether because of age or other reasons), or those from a lower socio-economic background – may fear being left behind. Conversely, employees who are keen to progress may feel their employer/firm is not moving fast enough, or may already be using such technologies outside of work and require training on appropriate use and disclosure in the work environment.
     
  • Some staff may have ethical concerns, such as the carbon footprint of the associated servers.  
     
  • Integration of AI also raises challenges for business development and marketing teams, such as the necessary level of disclosure, client consent or collaboration, and the impact on the pricing of services.

All these issues need to be considered internally and plans developed into policies that reflect the strategy and core aims of the firm/employer.

Case study – “robo-firing”

As part of a series of cases brought under the General Data Protection Regulation (GDPR), UK-based Uber drivers argued that their accounts had been deactivated based solely on automated processing, in contravention of Article 22, after Uber used software to generate fraud signals and deactivate drivers’ accounts automatically. The Amsterdam Court of Appeal upheld the contraventions alleged by three of the four drivers.

The Court considered the European Data Protection Board Guidelines (Ch II, para B), under which, to achieve actual human intervention, the controller must ensure:

  1. that all oversight of the decision-making is meaningful, and not just a token act,
  2. that the intervention is carried out by someone who is competent and capable of changing the decision, and
  3. that the review includes all relevant data in its analysis.

The Uber risk team, based in Krakow, made a note on the drivers’ files upon seeing the fraud flags. The Court found this to be a symbolic act: Uber did not point to evidence of meaningful human oversight, such as a conversation with the drivers (particularly given that fraud requires intent), or to the relevant qualifications or specific knowledge of the risk team members. By contrast, Uber’s risk team did have a personal conversation with the fourth driver.

The case highlights that rubber-stamping an automated decision will not meet the GDPR requirements. Deeper analysis may involve considering the nature of the underlying data (e.g. whether it includes purely objective data, and how such data is calculated and weighted), as well as weighing potential alternative decisions and the basis for those decisions. Employers will need to be satisfied that their reviewers are suitably competent and capable of performing this task.
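
For readers designing such workflows, the sketch below expresses the difference between rubber-stamping and meaningful intervention in engineering terms: the automated flag only triggers a review, and the outcome comes from a documented, competent human decision. All names and fields are hypothetical illustrations of the EDPB criteria above, not Uber’s system or a compliance recipe.

```python
from dataclasses import dataclass

# Hypothetical records; the fields mirror the three EDPB criteria.
@dataclass
class FraudFlag:
    driver_id: str
    signals: list[str]   # e.g. ["gps_mismatch", "duplicate_device"]
    model_score: float   # automated risk score, treated as advisory only

@dataclass
class HumanReview:
    reviewer_id: str
    reviewer_trained: bool     # criterion 2: competent, empowered reviewer
    spoke_to_driver: bool      # evidence of genuine engagement (e.g. on intent)
    considered_all_data: bool  # criterion 3: all relevant data reviewed
    decision: str              # "deactivate" or "retain"; may overrule the flag
    reasons: str               # documented basis for the decision

def apply_decision(flag: FraudFlag, review: HumanReview) -> str:
    """Act only on a meaningful human decision (criterion 1)."""
    meaningful = (
        review.reviewer_trained
        and review.considered_all_data
        and review.reasons.strip() != ""
    )
    if not meaningful:
        # A bare note on the file with no analysis is the "symbolic act"
        # the Court criticised: escalate rather than act on the flag.
        return "escalate"
    return review.decision  # the human outcome governs, not model_score
```

The design point is that apply_decision never reads model_score when choosing the outcome; the automated signal merely opens a review, and only the reviewer’s reasoned decision is actioned.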

Conclusion

Our review of law firms’ AI usage and the partially successful challenge to Uber’s “robo-firing” highlight the human-centric component of AI integration, in which employees are required to act both as tool-users and as diligent overseers. In future articles, we will consider the challenges arising from integrating AI into the management of humans in the workplace. These concerns are part of the broader debate about the extent of AI regulation in the workplace, with bodies such as the Trades Union Congress (TUC) recently calling for new laws to protect workers in this space.

If you are an employer and would like to discuss how we can advise you in relation to AI, or if you have any questions arising from this news alert, please contact Associate Liz Pearson or Partner Merrill April.