AI is increasingly occupying a central role in our offices and workplaces. From forecasting and financial modelling to document drafting and editing, it is clear that use of AI is already driving productivity efficiencies, improving work quality and freeing up employees’ time to focus on higher value tasks. For HR professionals handling employment duties, AI tools have been deployed to assist with a variety of tasks, including the shortlisting of candidates through screening CVs; helping to implement personalised workplace learning and development; and delivering analytics and monitoring of workplace performance.
In the current climate, AI has been touted as the answer to many problems, but despite its clear and undeniable uses, it’s no surprise that, without some attention, the unrestricted use of AI in the workplace poses serious risks. Below, we take a look at some different areas where AI tools are being deployed and possible pitfalls around these.
Hiring and Recruitment
As mentioned, some employers use AI to efficiently manage large volumes of job applications by screening CVs for keywords, desirable traits and specific qualifications. Indeed, it is not uncommon to even hear of employers using AI for automatic video interview analysis, where it can now effectively monitor facial expressions and body language, as well as analyse a candidate’s vocabulary, tone and professionalism.
While this may save time for employers, they must be careful to ensure that such processes do not inadvertently discriminate on the basis of protected characteristics. The Equality Act 2010 prohibits unlawful discrimination during the recruitment process. This means, for instance, that if an AI tool has been trained on biased or imbalanced datasets and unintentionally produces discriminatory results, employers may be held liable for the consequent discrimination suffered by applicants.
In terms of how to mitigate these risks, due diligence prior to implementation is required. For example, it would be reasonable to ask providers of AI services whether they conduct regular audits of their recruitment algorithms and datasets, and to require confirmation of how they ensure that discriminatory outcomes are avoided. Automated decisions also have more complex requirements under data protection law. The laws are being relaxed in the UK, but as the position currently stands, compliance with the automated decision making rules is difficult to achieve in many HR use cases. Aside from data protection reasons, we would always recommend ensuring that a person, rather than a programme, remains the final decision maker in hiring and recruitment decisions. AI can certainly assist in streamlining the process, but the final decision should remain with a person who can weigh context, apply intuition and judge whether someone will truly fit within an organisation.
On top of this, there are reports of candidates becoming increasingly frustrated by the fact that they may not speak to a human representative of a potential employer until several stages into the application process. As such, employers should consider the impact that automating the application process may have on their reputation and their culture. Although it may generate efficiencies, it may also have the unintended consequence of deterring the best talent.
Workplace Monitoring
There are a number of providers in the market who offer a suite of AI tools to help employers monitor their staff. This can range from reviewing keyboard activity and which applications employees are using, to time spent away from desks and use of inappropriate language on digital communication platforms. Unsurprisingly, interest in these types of AI tools grew dramatically during the Covid-19 pandemic as employers looked for effective ways to monitor staff during the working from home era.
Needless to say, whilst some employers no doubt view such AI monitoring tools as a quick win, it is clear many employees would find this unsettling. In 2023, research commissioned by the Information Commissioner’s Office revealed that 70% of the public consider workplace monitoring by employers to be intrusive, and fewer than one in five would be comfortable accepting a new job that involved being monitored.
If employers do genuinely feel the need for such workplace monitoring using AI tools, they should, as a minimum, make their employees aware of the nature of and reasons for the monitoring, aim to be as unintrusive as possible, and ensure that there is a lawful basis for doing so.
An obvious pitfall to bear in mind here is that, given the significant amount of personal data which may be handled by employers deploying sophisticated AI monitoring tools, any employee personal data must be processed within the parameters of the UK GDPR, meaning it must be used fairly and lawfully, stored securely, and kept only for as long as is strictly necessary.
Finally, if workplace monitoring leads to the dismissal of an employee, it’s essential that employers fully understand how any AI tool involved in that decision operates. As noted above, there are strict data protection rules around automated decision making and, whilst these rules are being relaxed in the UK, the changes have not yet come into force and will in any case not apply to all automated decisions.
For instance, consider a dismissal for an employee’s poor capability based on data extracted from an AI performance monitoring tool, or a conduct dismissal justified because an AI system has picked up misuse of a company’s IT resources. In either situation, there is real risk in relying solely on AI generated data as the single or definitive reason for dismissal. If either matter progressed to a Tribunal, it is highly likely that both the data and the AI tools would face significant scrutiny. If an employer demonstrated a lack of thorough or genuine understanding of how the AI tool functions – yet was freely deploying it to justify dismissals – this would almost certainly be viewed unfavourably by a Tribunal when assessing whether the employer’s decision to dismiss was within the range of reasonable responses open to a reasonable employer.
Accordingly, while AI monitoring tools may support a rationale for a dismissal, they are by no means a replacement for a fair and reasonable process.
Concluding thoughts
AI is only going to become more embedded in the workplace. Whilst AI tools and platforms may allow employers to achieve much needed productivity gains at relatively low cost, their use is clearly not without risk. As we have touched on here, such AI tools and platforms are not immune to misuse, misinterpretation or bias. It’s clear (at least for now!) that AI should be used to enhance human decision-making, not to replace fair and lawful procedure.