How to safely integrate AI into your compliance strategy

It feels like you can’t open LinkedIn or read a legal newsletter without being hit by a wall of noise about AI. For compliance professionals, this often brings up a conflict between hope for better efficiency and anxiety about the risks. In our recent Compliance Unfiltered webinar, we sat down with a panel of experts, including Katrina Varsani (Risk and Compliance expert), Nabil Shah (Head of Conveyancing at Leadenhall Law Group) and Harriet Holmes (Thirdfort’s own AML expert) to tackle this tension head-on. The consensus from the group was clear: AI isn't here to replace you, it’s here to give you your evenings back.
The risk of ignoring AI
One key takeaway was the danger of ignoring AI. It might be tempting to hope the hype blows over, but failing to address it creates a significant blind spot for your firm. If a firm doesn’t have a clear policy or approved tools, it faces the risk of what is known as "AI smuggling".
This happens when staff, under pressure from heavy workloads, quietly start using unvetted public tools, such as the free version of ChatGPT, to draft emails or summarise documents. This is where the real data-protection danger lies, because you lose control over where sensitive client information is going. The safest approach is to bring AI into the light. By choosing secure, firm-approved tools, you move from a risky environment to a controlled one. You need to be able to explain and defend the tools you use, and to understand exactly where the data is stored and how it is used.
Your AI policy template: Version 1.0
Don't wait for the "perfect" policy. Download our ready-to-use framework to set clear guardrails and empower your team to use AI safely today.
How to avoid over-reliance on AI
Another growing concern for many professionals is the fear of over-reliance. If the computer produces an answer, is it safe to simply accept it? The healthier approach is to treat AI output as a framework, not a final answer.
For example, you might use an AI tool to generate a list of enquiries for a specific property transaction. The tool could produce a comprehensive list of 20 points in seconds, a task that might otherwise take an hour. However, AI doesn't understand the nuance of the specific deal in front of you. It is the lawyer’s job to review that list, filter out the irrelevant points, and apply professional judgment. The technology handles the heavy lifting of the draft, but the qualified professional provides the final verification. It turns a long administrative task into a shorter review task, and the human element remains the most important part of the process.
Transparency is non-negotiable
As we move toward a more tech-enabled industry, transparency with clients becomes essential. The SRA and other regulators are emphasising that while innovation is positive, it cannot come at the cost of clarity. Clients trust you to keep their data safe, and if you are using AI tools to process their information, you must be clear about it.
This goes beyond a simple box-ticking exercise. It is about maintaining the trust you have built with your clients. Whether it is in your terms of business or your initial client care letter, being open about how you use technology to support your work demonstrates that you are in control of the process. It shows that you are using technology to enhance your service, not to cut corners.