AI in compliance: Real-world use cases, risks and professional insights


AI is dominating headlines, promising transformation across every sector. But how are compliance professionals actually perceiving and using it day to day, given the high-stakes transactions they handle?

To understand how AI is really being used, we recently brought together a group of senior compliance professionals from across the legal, property, and finance sectors for a series of roundtable discussions. Participants shared their current experiences, pressing concerns, and thoughts on AI's role in their everyday work. The picture that emerged was one of cautious optimism, significant practical hurdles, and a preference for technology that acts as a reliable assistant, not a replacement.

How compliance teams are using AI right now

Despite the buzz, we found that the current adoption of AI for core compliance tasks is limited. Firms are primarily experimenting in low-risk areas where efficiency gains are clear and the potential for error is low.

Common use cases discussed included:

  • Internal knowledge management: Creating internal search tools allowing staff to quickly query company policies instead of manually searching documents.

  • Document comparison and summaries: Using AI to summarise key differences between old and new regulations or to break down complex, jargon-heavy documents for initial review.

  • Content generation for internal use: Leveraging tools like Microsoft Copilot for low-risk administrative tasks, such as drafting quiz questions for company newsletters or initial email drafts.

While some firms mentioned developing in-house AI tools, the general consensus was that AI is currently trusted mainly for automating "trivial stuff" or administrative burdens, rather than for making critical compliance decisions.

Key concerns blocking wider AI adoption

A lack of trust emerged as the primary barrier to broader AI implementation. Participants raised several key concerns, including:

  • Reliability: There was a universal sentiment that AI output cannot be fully trusted without manual verification by a human expert. Anecdotes were shared of AI tools providing incorrect or inconsistent information, reinforcing this scepticism.

  • Data security and GDPR: This was described as the biggest "red line". The risk of employees inputting sensitive client data into public AI models (like the free version of ChatGPT) is a major concern, with one participant calling it a potential "fireable offence". The lack of formal AI usage policies in many firms heightens this risk.

  • Loss of the human element: Professionals expressed significant concern that over-reliance on AI could lead to a loss of authenticity in client communication and internal interactions. AI-generated content was often described as "robotic" or lacking personality. Participants worried that the rapport-building and strong client relationships that depend on a human touch could be eroded.

The priority: Secure and integrated AI tools

While the potential applications of AI are vast, the professionals we spoke with were clear about their immediate priorities. Before embracing more advanced features, there's a fundamental desire for AI that operates within a secure, controlled environment. Concerns about data privacy and GDPR compliance were paramount, leading to a strong preference for AI capabilities integrated directly into the trusted platforms they already use. The ideal AI, therefore, isn't just powerful: it's reliable, secure, and easily embedded within existing workflows, reducing risk rather than adding to it.

Next steps: A gradual move towards assistance

Given these priorities, the overwhelming preference was for a gradual rollout, with AI capabilities integrated securely into platforms firms already use and trust rather than open, public tools. A controlled environment is seen as essential for managing data security risks.

The professionals we spoke to do not see AI as an imminent replacement for human expertise. Instead, they view it as a potentially powerful "co-pilot" or assistant. The immediate goal isn't to automate complex decision-making but to find reliable, secure ways to offload laborious, time-consuming administrative tasks, freeing up skilled professionals to focus on judgment, client interaction, and the complex nuances that technology cannot yet handle.

Ultimately, broader AI adoption in compliance is likely to be gradual. It hinges on proven reliability, robust security, and tools that empower human experts, not replace them.
