Artificial intelligence at work

Artificial intelligence (AI) is not a new concept, but the debate about its use in everyday life has really ramped up in recent months, particularly following the launch of OpenAI’s ChatGPT at the end of 2022.

ChatGPT is a chatbot developed by the company OpenAI. It was built on a family of large language models (LLMs), which allow it to understand and generate human-like answers to text prompts. It can write essays, scripts and poems, and solve computer coding problems in a human-like way. It can even hold conversations and admit mistakes. This has inevitably sparked fears and discussion about the extent to which it will replace existing jobs and/or create new ones.

Advances in technology can create opportunities for organisations, especially those that seek to be innovative and efficient, and AI will play an increasingly significant role. Conversely, critics are concerned about the speed at which businesses are seeking to deploy AI, warning that it is too much too soon and comes with significant risks. Some even warn that it is too late: AI has already infiltrated technology without our explicit knowledge or consent, and there will be consequences. But what does that mean for the workplace?

Manjang & Raja v Uber Eats UK Ltd

We know that many companies have already embraced artificial intelligence to assist with some of their internal practices, particularly tech companies. In March 2020 Uber introduced a facial recognition system that required workers to provide a real-time selfie when they logged on for work.

Failure to comply risked disciplinary action and the driver being reported to Transport for London, with the prospect of losing their private hire vehicle licence. However, data subsequently revealed that the facial recognition system was significantly less likely to work when used by black and minority ethnic workers.

Driver Pa Edrissa Manjang was dismissed by Uber after the company’s facial recognition system failed to recognise him when he attempted to sign in for work. Uber relied on these mismatches to end the working relationship, claiming that Mr Manjang had been sharing his account, which he denies. He is currently pursuing claims in the employment tribunal on the basis that the software was racially biased, making his dismissal unlawful. Uber’s attempts to have his claims struck out were unsuccessful.

Does Uber’s use of the system without appropriate safeguards amount to indirect race discrimination? Maybe. We wait to see but it is a reminder for companies who are early adopters that there are risks as well as benefits to being at the cutting edge.

Earlier this month, campaigners, trade unions and MPs joined forces to call for stricter oversight of AI at work. The Trades Union Congress raised concerns over whether “management by algorithm” can really be trusted to provide safe and fair treatment of staff. By way of example, staff at an Amazon warehouse in Coventry recently went on strike following the introduction of a tough regime of ever-changing targets, which they believe are set by AI and are unrealistic and unattainable.

Regulatory issues

Recent history has shown that technology often evolves and develops faster than lawmakers can consider and enact regulation.

Different jurisdictions have also taken different approaches to the use of AI. The EU is wary and is preparing tough regulatory measures on its use, while the Italian data protection authority has gone a step further, announcing a temporary ban on and investigation into OpenAI “with immediate effect” given its privacy concerns with the model.

By way of justification, the Regulator said there was no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”. It also expressed concern that there was no way to verify the age of users whilst the app “exposes minors to absolutely unsuitable answers compared to their degree of development and awareness”.

In contrast, the UK is planning a much lighter-touch approach to regulation, as confirmed by the recent publication of a pro-innovation white paper on the matter: AI regulation: a pro-innovation approach – GOV.UK (www.gov.uk).

It appears that the Government intends to “encourage” the Equality and Human Rights Commission and the Information Commissioner to work with the Employment Agency Standards Inspectorate to issue joint guidance on the use of AI systems in recruitment and employment, rather than rushing to new legislation that has the potential to restrict businesses. Instead, it has proposed a set of five principles designed to encourage responsible use of AI:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

While we cannot predict precisely what the Government will do to manage AI, employers can start taking steps now to deal with integrating AI into workplace operations. These could include implementing new policies and procedures that govern the use of AI at work, informing staff and any potential applicants about its use, and training staff to spot discriminatory practices that require human input to rectify. Employers need to be vigilant and ready for the change.

How can Morr & Co help your business?

For an initial discussion and a no-obligation quote, get in touch with us today by calling 01737 854 500 or by emailing [email protected] and a member of our expert team will get back to you.


Although correct at the time of publication, the contents of this newsletter/blog are intended for general information purposes only and shall not be deemed to be, or constitute, legal advice. We cannot accept responsibility for any loss as a result of acts or omissions taken in respect of this article. Please contact us for the latest legal position.
