AI in the Workplace ‒ Embrace or Evade?

If 2023 was the year that generative AI entered mainstream consciousness, 2024 will be the year it becomes part of the mainstream establishment, following an explosion of growth in users, both commercial and personal.

Full disclosure from the outset: This article is not a product of generative AI and does not discuss the technology and advancements of AI models. Rather, this article seeks to highlight some of the workplace issues that may face organisations as generative AI becomes an integral part of our working lives.

The discussions of the benefits and pitfalls of generative AI and models such as Google Bard, Microsoft Copilot, Perplexity, ChatGPT and DALL-E have been widespread, and show no sign of abating. The number of organisations expanding or implementing the use of predictive AI and generative AI models is ever increasing, as is the number of employees becoming aware of the benefits of using AI models in their own daily tasks.

Whilst AI is by no means a new concept, the integration of generative AI models into organisations has been exponentially rapid. A survey undertaken by KPMG in March and June 2023 found that 20% of businesses were already using generative AI and 67% of executives confirmed that budget was allocated towards generative AI technology.

It is widely acknowledged that large language model generative AI holds enormous potential across industries. Whilst organisations are considering how best to implement generative AI in the workplace, employers should be aware that, given the large number of AI models that are both freely available and easily accessible, many employees will already be using models such as ChatGPT or Microsoft Copilot without their employers' knowledge. Employees are recognising that the speed with which generative AI can draft letters, undertake research, summarise documents and create presentations (to list a very small number of tasks) means increased productivity, which benefits them professionally and personally. As such, many employees are using generative AI models in the workplace without understanding the fallibility of AI or recognising how and where bias may be built into these models. Employers should be mindful that even if they are not yet ready to embrace generative AI within their own organisations, it is almost a certainty that many of their employees will have done so already.

Hallucinations

The risks of generative AI have been widely discussed and inaccuracy, “hallucinations” and bias are the predominant reasons cited for caution.

There have been a number of widely publicised cases both here and in the US highlighting the risk of AI hallucinations (where an AI model creates inaccurate or false information). The very plausible and authoritative presentation of information created by generative AI means that both users of generative AI and the recipients of the information produced would not have any immediate reason to check the validity or accuracy of the information provided to them.

In a case before the Tax Tribunal, Harber v The Commissioners for His Majesty's Revenue and Customs [2023], an appellant had presented cases in support of her appeal which were not real. Whilst the tribunal was satisfied that the appellant neither knew nor had the means to check that the cases were not real, the case raised questions about the transparency of the use of AI in tribunal proceedings and the need for human verification of legal arguments and cited cases.

Whilst the Harber case was before the Tax Tribunal, there is no doubt that the use of generative AI in employment claims will rise, given the volume of litigants who may consider generative AI a cost-effective means of assisting with litigation. All organisations and their representatives should ensure, if they are presented with a claim, that they check and verify case citations, legal arguments and summaries. This will be critical to maintaining trust with clients and compliance with the employment tribunal.

Bias and Discrimination

A fundamental point to be mindful of when using AI models is that AI is only as good as the data it is trained on, and the inputted data can create inherent bias within the models. The risk regarding bias in the use of AI models was highlighted in one of the first AI cases to be presented to the Employment Tribunal, Manjang & Raja v Uber Eats UK Ltd and others: 3206212/2021. The claims in this case related to harassment, victimisation and indirect race discrimination arising from the introduction of Microsoft facial recognition software, which required drivers to take a real-time photo of themselves to verify their identity when working using the Uber Eats app. The use of this facial recognition software as a means of verifying workers using the app resulted in the claimants' relationship with Uber Eats being terminated for repeated failures of facial recognition by the software.

At a preliminary hearing, the claimant asserted that facial recognition software places people from ethnic minority groups at a disadvantage, in that false positive and false negative results are greater for individuals from ethnic minority groups. The claimant had raised his concerns regarding the use of the app with Uber Eats and further requested a human review of the app, which was met with no response from the respondent. The case was listed for 17 days later this year, but was withdrawn in March when the claimant reached an undisclosed settlement with Uber Eats.

The Manjang & Raja case highlights the risk of assuming AI is infallible and relying on AI without human interaction. It was acknowledged by Gideon Christian, assistant professor at the Faculty of Law, University of Calgary, who researches race, gender and privacy impacts of AI facial recognition technology, that: “There is this false notion that technology, unlike humans, is not biased. That’s not accurate. Technology has been shown to have the capacity to replicate human bias”. This replication of human bias will affect content generation and decision-making such that the use of AI algorithms may create a risk of discrimination. Christian has stressed that “Diversity in design matters. The current industry’s success caters to a homogenous image… [and] it needs diverse perspectives and representative data to ensure fair technology.”

Where organisations are seeking to use any form of AI in automated decision-making, human intervention is key at some point in the process to check against any inherent bias in the systems used. Further, it is imperative that if an individual claims any form of AI being used by an organisation is incorrect or demonstrating bias, the organisation must take immediate action to review it.

Data Protection and IP

The use of generative AI within the workplace has also raised concerns about how these models may be used in compliance with data protection legislation both within and outside the UK.

To assist in compliance with data protection legislation, employers should:

  • Advise employees about the data that they are permitted to input in generative AI tools;
  • Provide training and awareness of acceptable use;
  • Explain and justify the use of AI models when processing personal data;
  • Clearly set out information regarding the use of AI models in their privacy notices;
  • Ensure that data protection impact assessments are completed prior to using any new model of AI;
  • Ensure appropriate security measures and controls are in place regarding acceptable use of personal data and AI; and
  • Where personal data is used in AI tools, consider the ability to locate, extract or amend this data in compliance with a data subject access request (DSAR).

With regard to IP, there are risks for organisations both in developing generative AI models and in using AI models in the workplace, concerning both the ownership of products created by AI models and whether any outputs produced infringe protected works. There are also questions to be addressed regarding the IP rights that can be assigned to content produced by generative AI, given this work is not entirely created by humans.

The rapidity of emerging AI technologies and questions regarding IP are beyond the scope of this article, but they are a key matter for any organisation using AI models given the importance of retaining control of intellectual property rights. Duane Morris has a leading US and UK team focused on this area who are able to offer support and would welcome a discussion at any time.

Transparency

Much like the regulations regarding data protection, a key theme in the regulation of AI is transparency. It is vital that organisations are transparent about their use of AI and require the same transparency from their workforce. Of particular importance will be transparency regarding employees' use of generative AI models such as ChatGPT. Given AI technology will only become more prevalent and impactful across all industries, a prohibition on the use of generative AI would discourage a culture of openness and awareness amongst workers.

In a study undertaken by Salesforce in 2023, which involved over 14,000 workers across 14 countries, more than half of those interviewed were using generative AI in the workplace without their employer's approval. This raises concerns that employees are generating work using AI models without guidance on the ethical and legal issues and without training for effective use and alignment with the style and tone of the organisation.

It is essential that where employers are considering implementing AI in the workplace they also involve their workforce in this process. If employees are open about their use of generative AI, this will enable organisations to check for hallucinations and inaccuracy, maintain data protection compliance and avoid bias and IP infringement, all of which will protect a business in terms of both financial and reputational cost.

Governance

There are currently no explicit UK laws governing the use of AI in the workplace. However, the UK government has set out a cross-sector framework for regulating AI underpinned by five principles to “drive safe, responsible AI innovation”. These are:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

Organisations should be aware that greater regulatory activity within the UK is inevitable. On 18 April 2024, the TUC published the Artificial Intelligence (Regulation and Employment) Bill. The draft legislation sets out provisions to protect the rights and interests of individuals in the workplace and to safeguard them against the risks associated with AI, which include:

  • Making it automatically unfair to dismiss an employee “through unfair reliance on high-risk decision making” using AI, or as a punishment for exercising the right to disconnect;
  • A right to request human review of AI-generated decisions; and
  • An amendment to the Equality Act 2010 to protect employees against the “discriminatory consequences” of AI systems utilised by their employers.

The bill has yet to be passed, so organisations must, for now, decide how they will harness the benefits of AI and manage the risks within the government framework provided. It should also be noted that the UK government appears to be mindful of addressing AI as a global issue ‒ on 1 April 2024, the UK and the US signed a memorandum of understanding to work collaboratively to build a common approach to AI safety testing and addressing AI risks.

Where organisations operate outside of the UK, they will need to consider and comply with any applicable AI legislation. For organisations operating throughout Europe, this will include the EU AI Act, which was approved by the European Parliament on 13 March 2024 and sets stringent regulations for AI applications that pose a “clear risk to fundamental rights”, such as those involving the processing of biometric data.

Embrace or Evade?

It is right that caution should be exercised given the infancy of many of the AI models, the risk of hallucinations (a risk that is likely to always be present), the concerns regarding bias and a simple lack of user knowledge.

For all employers, whether they have embraced generative AI or are approaching with caution, the key is to be transparent about the use of AI both internally and externally and to provide a clear statement to employees regarding the use of generative AI models. All employers, regardless of size or industry, should have a policy in place to address their stance on the use of generative AI.

Where organisations are using AI models, they need to consider the risks in the workplace such as inaccuracies, algorithmic bias (as evidenced in Manjang & Raja v Uber Eats UK Ltd and others), the impact on personal data and data protection compliance, and the potential infringement of intellectual property rights.

A clear AI policy will be vital in ensuring all workers are aware of acceptable uses of AI, the risks of inputting personal data or sensitive or confidential information, and the potential for unintended breach of copyright or IP infringement.

The policy should also be supplemented with training for workers in the effective and appropriate use of AI and in identifying the risks associated with AI use, enabling them to provide human intervention where required and to review AI-generated material for accuracy.

The rapid and ongoing evolution of AI will mean organisations will need to consider which AI tools to use and be prepared to change these tools as new technologies emerge. This in turn will mean policies and training will also need to be revisited regularly and amended as required.

In our view, evasion is simply not an option. Organisations should not consider AI models a stand-alone option but systems in partnership with their workforce. The human element of review and balance to mitigate risk must not be underestimated and must remain ever present, even as AI services become more independent. Whilst there is understandably caution and some concern regarding the use of AI in the workplace, organisations would be advised to think about how generative AI can support and enhance their businesses with processes that are human led but technology enabled.

© 2009-2025 Duane Morris LLP. Duane Morris is a registered service mark of Duane Morris LLP.

The opinions expressed on this blog are those of the author and are not to be construed as legal advice.