When is it negligent for a professional to use or ignore AI?
Artificial intelligence is increasingly embedded in professional services, from legal research and document review to medical diagnostics, valuation, and auditing. As AI tools become more widely available and more capable, an important question arises: when might it be negligent for a professional either to use AI, or to fail to do so?
How professional negligence law applies to AI
At a high level, existing principles of professional negligence are well placed to deal with AI. AI has no separate legal personality and cannot be liable in its own right. It is, or should be treated as, a tool, no different in principle from any other technology or methodology available to a professional. The central question remains whether the professional has exercised reasonable care and skill in the circumstances of its use.
When failing to use AI could breach the standard of care
Using AI in a professional context will not generally be mandatory (at least for the time being), and consequently a professional is unlikely to be negligent simply because AI exists and was not used. The key issue is whether, in the particular circumstances, a reasonably competent professional would have used AI in the client’s interests.
Relevant factors are likely to include whether AI use would have materially improved quality, speed, or cost efficiency, mitigated resource constraints, or reduced the risk of error. Over time, as AI becomes more established within particular professions, failure to use appropriate AI tools may increasingly fall below acceptable standards.
Duty of care: supervision, verification and accountability
Conversely, using AI does not dilute the professional’s duties. The duty of care remains unchanged. A professional who chooses to deploy AI must do so competently. That requires an adequate understanding of the tool’s purpose, limitations, and known failure modes. Technical mastery is not required, but blind reliance is unlikely to be defensible. Output must be capable of being supervised and, where appropriate, independently verified. Errors that should reasonably have been spotted will remain the responsibility of the professional.
Risks of using AI in professional decision-making
Risks associated with AI use include fabricated or inaccurate output, reasoning failures, context limitations, and bias. These risks are heightened where generative or agentic AI is used, particularly where decision making becomes less transparent or more autonomous. If an AI tool is unsuitable for a task, inadequately monitored, or used without proper safeguards, that may point towards negligence.
Client consent, confidentiality and regulatory considerations
Client consent and regulatory compliance are also important. Professionals should be clear about whether and how AI is being used, particularly where confidentiality, privilege, or data protection issues arise. Transparency alone, however, is not a defence. Merely telling a client that AI will be used does not excuse a failure to exercise reasonable care and skill in deciding whether to use AI, selecting an appropriate tool, or supervising its use.
Issues of scope also remain relevant. Liability will turn on what falls within the retainer and what is reasonably incidental to it. Professionals cannot assume that AI transforms the scope of their obligations, nor that they can exclude responsibility for AI-driven tasks without clear agreement.
The limits of AI in professional judgement
The risk is not that professionals use machines that can process information quickly to assist them in their work, but rather that they allow that information processing to replace the slower, contextual, empathic, imaginative, intuitive and accountable judgment that remains the hallmark of proper professional care, and a defining feature of embodied human decision making.
All professionals in all contexts would do well to bear in mind the philosopher and neuroscientist Iain McGilchrist's comment that "there is much to fear if we leave important decisions in the hands of AI. All decisions affecting humans are moral decisions. And morality is not purely utilitarian; it cannot be reduced to calculation. Every human situation is unique, its uniqueness arising from personal history, consciousness, memory, intention, all that is not explicit".
How we can help
We advise professionals and organisations on managing legal risk when using AI, including negligence exposure, regulatory compliance and governance. For more information, please contact our team.