How AI Is Used in GRC Today - SureCloud
- GRC
- 8th May 2026
In Short…
- AI in GRC is already live — automated control testing, ML anomaly detection, regulatory text parsing, and AI-assisted audits are actively used today.
- Financial services is leading adoption — driven by DORA, NIS2, FCA, PRA, and broader regulatory pressure to scale compliance operations.
- The most mature AI use cases are operational, not autonomous — evidence collection, transaction monitoring, and control testing deliver measurable value now.
- Human oversight and model governance remain critical — regulators still expect explainability, validation, and accountability for AI-assisted compliance decisions.
The real shift in GRC isn’t AI replacing compliance teams—it’s AI reducing the manual burden of monitoring, testing, and coordinating controls at scale.
Introduction
Artificial intelligence is no longer a roadmap item in governance, risk and compliance. Across financial services and other regulated industries, AI is actively deployed for automated control testing, machine learning-powered anomaly detection in risk data, natural language processing of regulatory texts, and AI-assisted audit workflows. This article documents what is genuinely live in production environments today — and where the technology is still maturing — so that compliance leaders can make informed decisions about where to invest.
Expert View
Quote topic: The most significant gap between AI GRC marketing claims and what is actually deployed in regulated institutions today — and what compliance leaders should be sceptical of.
Suggested SureCloud contact: SureCloud AI and Compliance Automation Lead
Alternative external source: Deloitte Centre for Regulatory Strategy — published commentary on AI adoption in financial services risk functions (2024 report); FCA Chief Data and Analytics Officer public statements on AI in compliance.
Why AI Adoption in GRC Is Accelerating Now
Governance, risk and compliance (GRC) functions have historically been labour-intensive. Control testing, evidence collection, audit scheduling and regulatory monitoring rely on manual effort that does not scale well against growing regulatory complexity. A single large financial institution may be subject to dozens of overlapping frameworks — DORA (the EU Digital Operational Resilience Act, which came into force on 17 January 2025), ISO 27001:2022 (the international standard for information security management systems), NIS2 (the EU Network and Information Security Directive 2, which entered into force in October 2024 for in-scope member states), and domestic frameworks enforced by the Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA) in the UK, or the European Banking Authority (EBA) and European Securities and Markets Authority (ESMA) at EU level.
Against this backdrop, AI offers genuine operational relief. [STAT NEEDED: Percentage of financial services firms actively deploying AI in compliance functions — suggested source: Deloitte or EY 2024 financial services AI adoption survey.] The pressure is real: regulators are increasing both the volume of guidance they issue and the pace at which firms are expected to respond.
The important qualification is that AI in GRC exists on a spectrum. Some applications — particularly automated control evidence collection and ML-based anomaly detection — are mature and in broad use. Others, such as fully autonomous regulatory interpretation or AI-generated board-level narrative reporting, are still emerging. The credibility of any AI GRC investment depends on accurately categorising which is which.
Automated Control Testing: Live and Scaling
The most mature application of AI in GRC is automated control testing — using software agents to continuously test whether security and compliance controls are operating as designed, rather than relying on point-in-time manual checks.
In a traditional audit cycle, a control might be tested once or twice per year. An automated approach continuously queries systems, pulls evidence, and flags exceptions in near real time. For financial services firms subject to DORA Article 6 requirements for ICT risk management frameworks, or ISO 27001:2022 Annex A controls, this shift from periodic to continuous testing is substantive.
What Is Actually Deployed
In financial services, automated control testing is live in several forms:
- Configuration drift detection: AI agents monitor infrastructure configurations against a defined baseline and alert when controls are no longer in their expected state.
- Access control validation: Automated checks verify that user permissions align with documented access policies, flagging deviations without human review of every log entry.
- Evidence collection pipelines: Systems automatically pull screenshots, logs and configuration exports and map them to control objectives, reducing the evidence burden on compliance teams during audit cycles.
The key distinction is that AI here is pattern-matching against defined rules — it is not exercising judgement. The value is scale and consistency, not autonomous decision-making.
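That rule-based character is easy to see in code. The sketch below, with hypothetical setting names and values, shows the core of a configuration drift check: compare a current snapshot against an approved baseline and emit a finding for every deviation. A real deployment would pull the current state from live infrastructure APIs rather than a static dictionary.

```python
# Minimal sketch of configuration drift detection: compare a live
# configuration snapshot against an approved baseline and report any
# setting that is no longer in its expected state.
# (Setting names and values are hypothetical examples, not a real schema.)

def detect_drift(baseline: dict, current: dict) -> list[dict]:
    """Return one finding per setting that deviates from the baseline."""
    findings = []
    for setting, expected in baseline.items():
        actual = current.get(setting)
        if actual != expected:
            findings.append({
                "setting": setting,
                "expected": expected,
                "actual": actual,  # None means the setting is missing entirely
            })
    return findings

baseline = {"tls_min_version": "1.2", "mfa_required": True, "log_retention_days": 365}
current = {"tls_min_version": "1.2", "mfa_required": False, "log_retention_days": 365}

for finding in detect_drift(baseline, current):
    print(f"DRIFT: {finding['setting']} expected {finding['expected']!r}, found {finding['actual']!r}")
```

Note that there is no model and no judgement here: the check is deterministic, which is exactly why this class of automation is the easiest to validate and the most widely deployed.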
Machine Learning for Anomaly Detection in Risk Data
Machine learning (ML) — a subset of artificial intelligence in which systems learn patterns from data without being explicitly programmed for every scenario — is actively used in risk monitoring across banking, insurance and capital markets.
Financial Crime and Fraud Monitoring
The most established use case is transaction monitoring for financial crime. ML models trained on historical transaction data identify patterns that deviate from expected behaviour — flagging potential fraud, sanctions violations or money laundering indicators for human review. This is not new technology: major banks have operated ML-based transaction monitoring for over a decade.
Operational Risk Monitoring
More recent deployments apply ML to operational risk data: identifying anomalies in system availability metrics, third-party service performance, and ICT incident patterns. Under DORA Article 10, firms must monitor ICT-related incidents and implement automated detection mechanisms — ML anomaly detection is a direct fit for this obligation.
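The underlying idea, learn a baseline from history, then flag observations that deviate beyond a threshold, can be sketched with the standard library alone. Production systems use trained ML models with far richer features; the metric, history, and threshold below are purely illustrative.

```python
# Illustrative anomaly flagging on a single operational metric (e.g. a
# daily ICT incident count). Real deployments use trained ML models;
# this stdlib sketch only shows the core pattern: fit a baseline from
# historical data, then flag new values that deviate from it.
from statistics import mean, stdev

def flag_anomalies(history: list[float], new_values: list[float],
                   z_threshold: float = 3.0) -> list[float]:
    """Return values whose z-score against the historical baseline exceeds the threshold."""
    mu, sigma = mean(history), stdev(history)
    return [v for v in new_values if sigma > 0 and abs(v - mu) / sigma > z_threshold]

# Hypothetical history: a stable metric hovering around 10 per day.
history = [9, 11, 10, 10, 12, 9, 10, 11, 10, 10]
print(flag_anomalies(history, [10, 11, 42]))  # → [42]: only the spike is flagged
```

Even this toy version exhibits the limitation discussed below: if the history never contained a given risk pattern, the baseline cannot represent it, and the model will not flag it.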
What to Watch For
ML models require quality training data. A model trained on historical data that did not capture a particular risk pattern will not detect it reliably. Financial institutions deploying ML for risk monitoring need strong model governance processes — including model validation, drift monitoring and documented explainability — to satisfy regulatory expectations under frameworks such as the FCA's internal model requirements and the EBA's guidelines on internal governance.
Natural Language Processing for Regulatory Text
Natural language processing (NLP) — an AI technique that enables software to parse, interpret and extract meaning from written text — is increasingly used to manage the volume of regulatory and policy documentation that compliance teams must process.
Regulatory Change Management
Regulatory bodies publish an enormous volume of guidance, consultation papers, and final rules. [STAT NEEDED: Volume of regulatory publications per year across major jurisdictions — suggested source: Corlytics or Thomson Reuters Regulatory Intelligence annual report.] NLP tools monitor these publications, classify content by relevance (by jurisdiction, business line or applicable framework), and surface changes that require action.
This is live in large financial institutions and increasingly available as a component of GRC platforms. The technology reliably handles classification and tagging of regulatory updates. What remains genuinely harder is NLP-based interpretation — understanding what a new requirement means for a specific organisation's control framework. That step still requires human analysis.
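The classify-and-route step can be illustrated with a deliberately simplified sketch. Real systems use trained NLP models rather than keyword matching, and the framework keyword lists below are invented for illustration; the point is the shape of the task, mapping incoming regulatory text to the frameworks it is relevant to.

```python
# Simplified sketch of regulatory update tagging. Production tools use
# trained NLP classifiers; plain keyword matching here just illustrates
# the classification-and-routing step. Keyword lists are illustrative.

FRAMEWORK_KEYWORDS = {
    "DORA": ["operational resilience", "ict risk", "ict incident"],
    "NIS2": ["network and information security", "essential entities"],
    "ISO 27001": ["information security management", "isms"],
}

def tag_update(text: str) -> list[str]:
    """Return the frameworks whose keywords appear in the update text."""
    lowered = text.lower()
    return [fw for fw, kws in FRAMEWORK_KEYWORDS.items()
            if any(keyword in lowered for keyword in kws)]

update = "New guidance on ICT incident reporting timelines under the operational resilience regime."
print(tag_update(update))  # → ['DORA']
```

Notice what the sketch does not do: it says nothing about what the update means for a specific firm's controls. That interpretive step is exactly where human analysis remains necessary.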
Policy Gap Analysis
NLP is also used to compare internal policy documents against regulatory texts, flagging where policy language does not explicitly address a regulatory requirement. This accelerates gap analysis exercises — particularly useful when a new regulation (such as NIS2 or DORA) arrives and an organisation needs to assess its existing policy estate quickly. The output requires human review; NLP surfaces candidates, it does not make compliance determinations.
AI-Assisted Audit Scheduling and Evidence Collection
Audit planning and evidence collection are two areas where AI augmentation is live and delivering measurable time savings.
Risk-Based Audit Scheduling
AI tools can analyse historical audit findings, control failure rates, and emerging risk indicators to suggest which controls or business areas warrant increased audit attention in a given period. Rather than relying on fixed schedules or manual risk assessment, scheduling becomes adaptive — higher-risk areas are prioritised dynamically.
This is deployed in larger internal audit functions and in some external audit contexts, though the degree of AI involvement varies significantly. In most live deployments, AI produces recommendations that auditors review and act on — full automation of scheduling decisions is not common practice.
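A minimal version of risk-based prioritisation is just a weighted scoring and ranking step. The weights, field names, and figures below are hypothetical; real tools derive their scoring from historical findings data, and the ranked output is a recommendation for auditors, not an automated scheduling decision.

```python
# Sketch of risk-based audit prioritisation: score each control area on
# simple risk indicators and rank descending. Weights and fields are
# hypothetical; the output is a recommendation for auditor review.

def priority_score(area: dict) -> float:
    """Weighted score: failures and open findings raise priority; a recent audit lowers it."""
    return (3.0 * area["failure_rate"]              # share of recent control tests failed
            + 2.0 * area["open_findings"]           # unresolved audit findings
            + 1.0 * area["months_since_audit"] / 12)  # time since last audit, in years

areas = [
    {"name": "Access management", "failure_rate": 0.10, "open_findings": 4, "months_since_audit": 6},
    {"name": "Vendor management", "failure_rate": 0.25, "open_findings": 1, "months_since_audit": 18},
    {"name": "Change management", "failure_rate": 0.02, "open_findings": 0, "months_since_audit": 3},
]

for area in sorted(areas, key=priority_score, reverse=True):
    print(f"{area['name']}: {priority_score(area):.2f}")
```

The design choice worth noting is transparency: a linear score like this is trivially explainable to an audit committee, which is one reason simple models often survive in regulated settings where opaque ones do not.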
Evidence Collection and Mapping
Evidence collection is one of the most time-consuming aspects of compliance work. AI-assisted pipelines can automatically pull system-generated evidence — logs, configuration exports, access reports — and map it to the relevant control objective or framework clause. This reduces the manual burden on control owners and speeds up audit preparation.
The DORA requirement under Article 19 for firms to maintain documentation of ICT-related incidents and recovery activities is one example where automated evidence collection pipelines are directly applicable.
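The mapping step at the heart of such a pipeline can be sketched as a routing table from artifact types to the control objectives they support. The control IDs and artifact types below are hypothetical; in practice the mapping is maintained against a real framework taxonomy and the artifacts arrive from system integrations rather than a list.

```python
# Sketch of the evidence-to-control mapping step in an automated
# collection pipeline: each system-generated artifact is filed under
# every control objective it supports, so audit preparation starts from
# an organised evidence set. Control IDs and types are hypothetical.

EVIDENCE_MAP = {
    "access_report": ["CTRL-ACCESS-01"],
    "config_export": ["CTRL-CONFIG-01", "CTRL-OPS-02"],
    "incident_log": ["CTRL-INCIDENT-01"],
}

def map_evidence(artifacts: list[dict]) -> dict[str, list[str]]:
    """Group artifact names under each control they support; unknown types go to 'UNMAPPED'."""
    mapped: dict[str, list[str]] = {}
    for artifact in artifacts:
        controls = EVIDENCE_MAP.get(artifact["type"], ["UNMAPPED"])
        for control in controls:
            mapped.setdefault(control, []).append(artifact["name"])
    return mapped

artifacts = [
    {"name": "q1_access_report.csv", "type": "access_report"},
    {"name": "fw_config_2026-03.json", "type": "config_export"},
]
print(map_evidence(artifacts))
```

The explicit `UNMAPPED` bucket matters: evidence the pipeline cannot place still needs a human to triage it, rather than silently disappearing from the audit trail.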
What Is Live vs What Is Still Emerging
The table below documents the current maturity state of key AI applications in GRC, based on deployment evidence across financial services and regulated industries.
| AI Application | Maturity Status |
| --- | --- |
| Automated control testing (configuration, access) | Live — broadly deployed |
| ML anomaly detection (fraud, transaction monitoring) | Live — mature, decade-plus deployment |
| ML operational risk anomaly detection | Live — increasing adoption post-DORA |
| NLP for regulatory change classification and tagging | Live — in large FIs and specialist platforms |
| AI-assisted audit scheduling (risk-based) | Live — deployed with human oversight |
| Automated evidence collection and mapping | Live — in GRC platforms |
| NLP regulatory interpretation (what it means for your firm) | Emerging — human review still required |
| Fully autonomous board reporting narrative generation | Emerging — experimental |
| AI-driven third-party risk scoring at scale | Emerging — pilots underway in financial services |
Where Financial Services Is Leading Adoption
Financial services is the most advanced sector for AI in GRC, driven by regulatory pressure, data availability, and the scale of compliance operations. Banks subject to Basel III capital requirements, insurers regulated under Solvency II, and investment firms under MiFID II all operate compliance functions large enough to justify AI investment.
Practically, this means that technology that is 'emerging' in other regulated sectors may already be mature in tier-one banking. Compliance leaders outside financial services should adjust their expectations accordingly — vendor claims built on banking use cases may not translate directly to smaller or differently regulated organisations.
What Compliance Leaders Should Actually Do With This
The practical question is not whether AI has a role in GRC — it clearly does. The question is where to start and how to evaluate vendor claims honestly.
- Prioritise high-volume, rule-based tasks first. Automated control testing and evidence collection deliver the most reliable ROI because the task is well-defined. These are also the easiest to validate.
- Ask vendors to specify whether AI is rule-based, ML-based, or LLM-based — and what evidence they have of accuracy in a regulated context. Vague claims about 'AI-powered' capabilities should trigger scepticism.
- Build human review into any AI-assisted process. Regulators including the FCA and EBA expect firms to maintain accountability for compliance outcomes, regardless of the tools used to reach them.
- Model governance is not optional. Any ML model used in a risk or compliance context needs documented validation, explainability, and ongoing monitoring. Factor this into implementation cost.
See AI-Powered Compliance Management in Action
Request a demo of SureCloud's compliance management platform to see how automated control testing, evidence collection, and risk monitoring work in practice — not in theory.
Internal links:
- /blog-hub/ai-in-grc-complete-guide — Link from the 'What is AI in GRC?' context in the Key Context section, as the natural next read for readers who want the full landscape overview before the operational detail.
- /product/compliance-management — Link from the 'Evidence collection pipelines' section in Automated Control Testing and from the CTA, as the direct product reference for readers ready to evaluate.
- /platform/automate — Link from the 'AI-assisted audit scheduling' section as the relevant platform capability for readers interested in automation workflows.
FAQs
Is AI actually being used in GRC today, or is it mostly hype?
AI is genuinely deployed in GRC today, particularly in financial services. Automated control testing, ML anomaly detection for fraud and transaction monitoring, NLP-based regulatory change classification, and AI-assisted evidence collection are live in production environments. The hype relates primarily to more advanced applications — autonomous regulatory interpretation or AI-generated board reporting — which remain emerging rather than proven.
What is the most mature AI application in compliance right now?
Transaction monitoring using machine learning is the most established AI application in regulated industries, with major financial institutions having deployed ML models for fraud and financial crime detection for over a decade. Within GRC platforms specifically, automated control evidence collection and configuration monitoring are the most mature and broadly deployed capabilities.
Does AI in GRC create regulatory risk if something goes wrong?
Yes, and this is a live concern for regulators. The FCA and EBA both expect firms to maintain accountability for compliance outcomes — using AI to inform a process does not transfer regulatory responsibility to the vendor. Firms need model governance processes that cover validation, explainability, and human oversight checkpoints. Documenting these processes is itself a compliance obligation in some frameworks.
What regulatory obligations does AI in GRC actually help with?
AI is directly applicable to several current regulatory obligations. DORA Article 10 requires automated ICT incident detection, where ML anomaly detection is a fit. DORA Article 19 requires documentation of ICT incidents, where automated evidence collection pipelines apply. ISO 27001:2022 continuous monitoring requirements align with automated control testing. NIS2 Article 21 security measure requirements for monitoring and incident handling are similarly addressable.
How should I evaluate an AI GRC vendor's claims?
Ask three questions: What specific tasks is the AI performing — is it rule-based, ML-based, or LLM-based? What is the evidence of accuracy, including false positive and false negative rates? And what human oversight is required, and is that built into the product or an additional process burden on the compliance team? Vendors who cannot answer these specifically are likely overclaiming.
“In SureCloud, we’re delighted to have a partner that shares in our values and vision.”
Read more on how Mollie achieved a data-driven approach to risk and compliance with SureCloud.
© SureCloud 2026. All rights reserved.