This is a guest post by Alex Cisneros, barrister at 39 Essex Chambers. Although it is primarily concerned with cases in the Court of Protection, the risks of using AI apply equally to family law cases, particularly where they have been or will be heard in private.

I have increasingly encountered litigants in person in Court of Protection proceedings who appear to be using AI tools such as ChatGPT to assist them with their cases. This often presents innocuously: a well-structured email, a surprisingly polished position statement or a confident summary of legal principles. In many instances, the intention is plainly to cope with an unfamiliar and stressful process in a jurisdiction where legal aid is limited and the stakes are deeply personal.

But the growing use of AI in this context raises issues that are specific, and serious, in the Court of Protection. This brief article highlights a particular concern around confidentiality, transparency orders and the protection of P’s private information.

What are AI tools and how are they being used?

Tools such as ChatGPT are a form of ‘generative artificial intelligence’. They allow users to type text into a system and receive generated responses that can draft, rewrite, summarise or analyse information.

In practice, litigants in person might use these tools to help draft emails, prepare position statements or witness statements, or summarise expert reports. From the litigant in person’s perspective, this can feel little different from asking a lawyer for help with wording or structure.

However, these tools are not operating like a spell-checker or a word processor. Information typed into AI software is transmitted to an external platform and is subject to that platform’s own data handling and retention policies. The UK Information Commissioner’s Office has published guidance explaining how generative AI systems process personal data, and why this matters from a privacy and data-protection perspective.

The Lady Chief Justice also recently published guidance for judicial office holders on AI use in legal contexts, emphasising both the potential utility of AI tools and the risks associated with accuracy, confidentiality and misuse.

From the litigant in person’s perspective, none of this may be obvious. But legally, particularly in the Court of Protection, the distinction matters. Using an AI tool is not simply a private act of drafting; it is the sharing of information with a third-party system.

Why this may breach a transparency order

Most Court of Protection cases are covered by rules that limit who can see or share information from the case. Very often, the court also makes a transparency order, which sets out exactly what information can and cannot be shared, and with whom. The Open Justice Court of Protection Project has a clear explanation of what these orders are and how they work: Transparency Orders: Reflections from a Public Observer.

Put simply, these orders usually mean that information from the case must stay within the case, unless the court has clearly said otherwise.

When someone copies information from a Court of Protection case into an AI tool such as ChatGPT, they are sharing that information with an outside platform. That platform is not part of the court process, is not one of the parties, and has not been approved by the court to receive the information. In legal terms, this is likely to count as sharing the information with a third party.

Even where names are removed or only initials are used, the information in Court of Protection cases is often so detailed and personal that it can still point clearly to the individual involved. In W v P [2025] EWCOP 11 (T3), Mr Justice Rajah recently warned about the risk of “jigsaw identification”, where different pieces of information can be put together to work out who a person is, even though they are not named. This risk is particularly high in cases involving health conditions, care arrangements, or family relationships, where the facts themselves may be distinctive or already known to others.

For that reason, using AI tools with real information from a Court of Protection case is likely to breach a transparency order, and may carry significant criminal consequences.

Why this matters

This is not a merely technical point. Court of Protection cases routinely involve deeply sensitive information about P, including medical diagnoses, care arrangements and family relationships. The jurisdiction exists precisely because P is vulnerable and entitled to enhanced protection.

Once information is uploaded to an external AI platform, control over that information is lost. Whatever assurances exist about privacy or data handling, the court has not sanctioned that disclosure, and P has not consented to it.

How to spot possible AI-generated material

AI use is not always obvious, but there are patterns that can help to identify it. Documents may suddenly adopt a highly formal tone that does not match the litigant in person’s previous correspondence. There may be confident, fluent statements of legal principle that are oddly detached from the facts of the case, or even references to made-up case law.

What solicitors and barristers should do

If there is a genuine concern, it may be appropriate to draw the court’s attention to the relevant transparency order and explain why uploading case material to third-party platforms is problematic. In some cases, it may assist to invite the court to give clear guidance or directions, particularly where a litigant in person is plainly unaware of the restrictions they are operating under.

Clearly, if suspected breaches of the transparency order continue, further applications may need to be made.

AI tools are here to stay, and for many litigants in person they are filling a gap created by an overstretched justice system. In some cases, their use is probably unavoidable. But the Court of Protection is a jurisdiction where privacy is fundamental, not optional. Practitioners need to be alert to how new technologies interact with long-standing duties of confidentiality and protection, and must ensure that P remains at the centre of any analysis.


Featured image: generated by ChatGPT. For the avoidance of doubt, this didn’t involve any personal data or breach of a transparency order.