Artificial acceptance – how AI agrees on your behalf
As artificial intelligence (“AI”) systems become increasingly embedded in daily life, we can use them for much more than just providing information. AI systems are now commonly used to book appointments, procure goods and services, and interact with digital platforms on a user’s behalf. In most cases, completing these tasks requires the AI to accept the provider’s terms and conditions on behalf of the user, and the AI may not ask the user for explicit consent to do so.
This article considers whether users, in particular, consumers, are bound by terms accepted by AI systems on their behalf.
When are terms and conditions binding?
When you buy goods or services online, you’re only bound by the seller’s terms if they’ve been properly incorporated into the sales contract. To achieve this, the seller must bring the terms sufficiently to your attention before the contract is formed.
Online contracts can be created in different ways. The most common are:
- Browse‑wrap agreements – you accept the terms simply by continuing to use the website.
- Click‑wrap agreements – you actively tick a box or click a button to confirm you agree to the terms.
Whichever method is used, you don’t need to have actually read or understood the terms for them to be legally binding. The key obligation on the vendor is to give you a fair chance to see them.
This raises the question of whether terms and conditions accepted by an AI system on behalf of a user are legally binding, even though the user has never been presented with them.
How can an AI legally bind you?
Whilst the tech industry has introduced the term “agentic AI”, an AI system cannot act as an agent in the legal sense because it has no legal personality. It is more appropriate to classify it as an instrument through which the user acts. By deploying an AI system to perform tasks that require the acceptance of contractual terms, you will be bound by the terms it accepts. You will have to accept the risk associated with using a tool that has a degree of autonomy.
This approach aligns with existing principles of contract law. Provided the vendor has taken reasonable steps to disclose the terms in the course of the contracting process, and the terms comply with applicable law, you will be bound whether you have read them or not.
Where does this leave consumers?
As we increasingly rely on AI systems to purchase goods and services, we need to be aware that we may be legally bound by whatever terms they agree to. This position is fair to both parties: merchants have no means of knowing whether a consumer is using an AI system and, consequently, should not carry the associated risk. Users, on the other hand, are in full control of what they use AI systems for, and can manage the associated risk at their end.
Prudent users should read the terms and conditions for their chosen service before instructing an AI system to book their appointment or order their goods. Alternatively, users could ask the AI system to highlight or summarise any terms and conditions before completing the booking or order. However, this will never be as safe as reading the terms yourself, as AI systems frequently hallucinate: they may misstate the terms or give incorrect advice.
For many, this may defeat the point of seeking the AI system’s assistance in the first place. Yet until there is a greater emphasis on transparency, user controls and contractual safeguards for the use of AI systems, it is likely to remain the safest option.
But then again, hand on heart, how many of us read the terms and conditions anyway when we shop online? Whether we skip them ourselves or an AI system does it for us may make little difference in practice.