What are the legal risks for my business in allowing our staff to use AI in the workplace?

Ask the Legal Expert

Q: What are the legal risks for my business if I allow our staff to use AI in the workplace?

A: While AI applications have undergone rapid development in recent times, they are not perfect. Any gaps or inaccuracies in the training data, from which the output is derived, can result in incorrect outcomes. There are therefore potential risks in allowing your staff to rely fully on AI-generated content. Depending on what you are using any relevant AI tools for, outputs should regularly be checked to ascertain their accuracy. Use of AI tools is generally at the risk of the user, with the licensors of such tools not standing behind the outputs delivered. Hence, the risk from any errors in outputs supplied to clients/customers resulting from the use of AI by employees will generally lie with your business, and not the AI provider, save to the extent you are able to limit such liability in your own contracts with those clients/customers.

On the other hand, staff can make mistakes without using AI, so AI may well be a useful tool to generate efficiencies in your business and, ultimately, in the outputs generated by staff. As AI tools proliferate, their quality is likely to improve further; however, we are still very much in an early phase of their widespread adoption.

It is also worth bearing in mind that information input into AI tools can potentially be used to further train and develop that AI application. Care needs to be taken that any information input into an AI application is not subject to any confidentiality obligations your business owes to staff or any third parties. Otherwise, the use of such information in an AI application could create potential liability to the owner of that confidential information.

Q: We have quite a detailed staff handbook, but it doesn’t yet address AI. Do we need an additional policy for the use of AI?

A: Whether or not your business adopts the use of AI applications in the workplace, applications such as ChatGPT will be accessible to staff wherever they may carry out their work. The initial issue to consider is whether AI has any application to your business and the services performed by your employees. If so, then it would certainly be prudent either to add provisions regarding AI to your existing staff handbook or to develop a separate AI policy. The sorts of things potentially to be included in such a policy are: whether the use of AI tools is permitted for work-related matters; if so, which tools may be used; the approval process for using AI tools within the business; which types of information belonging to the business may be input into any such tools (and which may not); whether any further checks are required on outputs generated from AI; and how errors or complaints arising from AI tool use should be reported.

Q: We are thinking of using AI to improve the services which we offer to our customers. Is this a good idea, and should we be aware of any legal risks?

A: AI tools have the ability to greatly improve the productivity and accuracy of outputs. Thus, if there are AI tools available, at a suitable price, which will allow you to improve your deliverables to customers, then it would certainly be worth considering using them. Some of the issues to be considered in doing so are:

If the AI tools used generate incorrect results, which are provided to customers, the risk of any errors will lie with your business. You are unlikely to have any right of recourse against any AI provider.

While in theory AI outputs should ideally be checked or reviewed (even if only on a random basis), part of the attraction of AI tools is that they will potentially reduce manpower costs, which raises the question of who will actually be in a position to check those outputs. If use of such AI tools leads to downsizing of a skilled workforce, will you retain sufficient qualified staff to be able to check whether the AI outputs are correct or not?

Care needs to be taken as to what information is input into the relevant AI tools. Are you comfortable that such information may continue to be used by the AI platform to train itself further in the generation of outputs to other users? Is any of the information you might wish to input into the AI tool concerned subject to any confidentiality obligations?

AI tools can potentially be utilised to select/approve customer applications. Care needs to be taken when setting parameters (say, when determining whether to accept a new client) that the tool is not operating in a manner that could be discriminatory (for example, not accepting applications from persons over a certain age, or of a particular sex). An AI tool making decisions on whom to accept for insurance cover, for example, might trend towards declining male drivers given their historically higher accident rates.


Quentin Golder

Partner, Birketts LLP
