Meet the author
Christian Visti. MSc in Business Administration and Auditing from Copenhagen Business School. 15 years of experience in banking, financial regulation, FinTech, and the Payment Solutions sector.
Compliance work cannot tolerate invented answers. This paper introduces RAG Intelligence — a category of compliance technology built for one purpose: to make AI defensible in the contexts where defensibility is the entire point.
Executive summary
Compliance work cannot tolerate invented answers. Yet the dominant model for AI today, generative AI, is built to produce plausible-sounding output, not provably-sourced output. For any regulated work, this is not a limitation that can be patched with better prompting. It is an architectural mismatch.
This paper introduces RAG Intelligence: a category of compliance technology in which every output is retrieved from verified source material, cited to its origin, and never generated from a model's internal parameters. It is built for one purpose — to make AI defensible in the contexts where defensibility is the entire point.
RAG Intelligence is not autonomous. It does not make decisions, file reports, or replace the responsibility of the compliance officer. It does the part of compliance work that software can do well — retrieving the right document, surfacing the relevant data, paragraph, or business procedure, drafting a sourced summary — and leaves every decision in the hands of a human.
For any EU-regulated firm preparing for the AI Act (Regulation (EU) 2024/1689, mostly applicable from 2 August 2026), RAG Intelligence is the standard that lets you adopt AI without exposing the firm to the risks that have made compliance teams rightly sceptical of it.
The problem with generative AI in compliance
Generative AI systems produce answers by predicting what text should come next. They draw on patterns learned during training, not on verifiable source material. When they are right, they are useful. When they are wrong, they are confidently, fluently, and convincingly wrong.
For most use cases this trade-off is acceptable. For compliance work, it is not — and under any plausible regulatory future, it will not be.
A compliance officer reviewing a customer file is not looking for plausible answers. They are assembling a defensible record. Every conclusion — this customer is low-risk, this beneficial owner is identified, this transaction does not require a Suspicious Activity Report (SAR) — must be traceable to a source the firm can produce on demand, years later, to a regulator, an auditor, or a court. This applies for any compliance work under any legal framework.
Generative AI breaks this requirement in three ways:
It cannot show its work
The output of a generative model is not derived from a specific document; it is synthesised from statistical patterns across an entire training corpus. There is no citation to produce, no audit trail to follow, no way to answer the only question that matters in a regulatory review: where did this come from?
It invents what it does not know
Generative models do not distinguish between facts they have seen and facts that are merely consistent with what they have seen. Faced with a customer who resembles other customers in the training data — but is not them — the model fills in the gaps from pattern, not from the file in front of it. A beneficial owner is named who does not actually own the company. An ownership structure is described that resembles other structures the model has seen but is not the one on the customer's documents. A risk profile is asserted that no document in the file supports. The output looks the same whether it is correct or fabricated. For a recipe, this is acceptable. For compliance, fabrication is not a quality problem — it is a liability event.
It cannot be governed at the output layer
Because generative output is a product of the model's internal weights, the firm cannot constrain what the model can say to what the firm has actually verified. The model's universe of possible answers is whatever it was trained on, much of which the firm does not control, did not approve, and cannot audit.
These are not edge cases. They are how generative AI works. No amount of prompt engineering changes the underlying architecture.
The autonomous-agent problem
The direction enterprise AI is moving makes this worse, not better. The most actively marketed category in 2026 is agentic AI — systems designed to take actions, make decisions, and execute workflows with minimal human oversight. A specific sub-category has emerged in regulated industries: AI compliance agents, marketed as autonomous or semi-autonomous systems that take over data collection, reviews, risk assessments, and onboarding decisions on the firm's behalf.
For most domains, autonomy is a reasonable productivity goal. For compliance, it is the opposite of what the regulation requires.
National legislation across the EU — and equivalent regimes in the UK, the US, and other jurisdictions — places the obligation to assess, decide, and report on a named human: the firm's designated compliance officer. The incoming AMLR and AMLD6, applying from 10 July 2027, are a good example: they harmonise these obligations across the EU and increase the standards for documented, defensible decision-making, supervised by the new EU Anti-Money Laundering Authority (AMLA). The EU AI Act explicitly classifies certain compliance and risk-scoring applications as high-risk, with mandatory human oversight requirements. GDPR Article 22 restricts solely automated decision-making that produces legal or similarly significant effects on individuals.
An AI system that approves customers, assigns risk classifications, or files reports on its own is not a productivity tool. It is a regulatory exposure — and the AI compliance agent category, at least where it takes autonomous action on decisions the regulation reserves for a human, is exactly that exposure dressed as innovation.
This is the trap that has made compliance teams across regulated industries wary of AI adoption: the louder the AI is marketed, the less it looks like something the firm's designated compliance officer can sign their name to.
What RAG Intelligence is
RAG Intelligence is the application of Retrieval-Augmented Generation — a well-established technical pattern in AI — to the specific demands of regulated work.
In a RAG system, every output is constructed in two steps. First, the system retrieves the relevant source material from a controlled knowledge base — the firm's own customer files, the applicable legislation, supervisory guidance, the firm's internal policies. Second, the language model is constrained to produce output only on the basis of what was retrieved, with citations back to the source.
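That two-step construction can be sketched in a few lines of Python. Everything here — the passage structure, the keyword retriever, the field names — is an illustrative assumption standing in for a production retriever and model, not a description of any particular system.

```python
# Sketch of the two-step RAG construction: (1) retrieve from a
# controlled knowledge base, (2) build output only from what was
# retrieved, with a citation back to each source. All names are
# illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str    # e.g. "customer-file/ownership-chart.pdf"
    text: str      # the passage content
    section: str   # citation anchor, e.g. "p. 3, para 2"

def retrieve(knowledge_base: list[Passage], query: str, k: int = 3) -> list[Passage]:
    """Step 1: retrieve from the controlled knowledge base only.
    A naive keyword-overlap score stands in for a real retriever."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(p.text.lower().split())), p) for p in knowledge_base]
    ranked = [p for score, p in sorted(scored, key=lambda s: -s[0]) if score > 0]
    return ranked[:k]

def answer_from(passages: list[Passage]) -> dict:
    """Step 2: construct output only from the retrieved material,
    carrying a citation for every source passage used."""
    return {
        "answer": " ".join(p.text for p in passages),
        "citations": [f"{p.doc_id} ({p.section})" for p in passages],
    }
```

In this sketch the model never sees anything outside the retrieved passages, and every citation can be traced back to a document the firm controls.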
RAG Intelligence is the discipline of doing this correctly for compliance work. That means three architectural commitments that go beyond generic RAG:
Retrieval is the source of truth, not a hint
Many RAG implementations use retrieval as one input among others, blending retrieved content with the model's parametric knowledge. RAG Intelligence does not. If the retrieved sources do not contain the answer, the system says so. It does not fill in the gap from the model's training data.
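That discipline can be expressed as a simple guard: generation is only ever invoked on retrieved material, and an empty retrieval produces a stated gap, not an answer. The function names and the refusal message below are illustrative assumptions.

```python
# Sketch of the "retrieval is the floor" rule: when the controlled
# sources do not support an answer, report the gap instead of
# falling back to the model's parametric knowledge.
from typing import Callable

def grounded_answer(retrieved: list[str], generate: Callable[..., str]) -> str:
    if not retrieved:
        # No verified source material: the system says so, rather than
        # letting the model fill the gap from its training data.
        return "No supporting source found in the knowledge base."
    # The model is only ever handed the retrieved material; its own
    # training data is never the basis of the answer.
    return generate(context=retrieved)
```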
Every output is cited
Not summarised with a general footnote — cited to the specific source document and, where possible, the specific passage. A compliance officer reading a RAG Intelligence output can verify every claim against the source it came from.
The system never decides
RAG Intelligence retrieves, surfaces, summarises, and drafts. It does not approve customers, assign risk scores, file reports, or take any action that the regulation reserves for a human. Decisions are made by the firm's designated compliance officer. RAG Intelligence assembles the basis on which they decide.
The shorthand is: retrieved, not generated. Supportive, not autonomous. Cited, not summarised.
Why this matters for regulated professional services
For any firm operating under specific legal obligations and preparing for the EU AI Act's high-risk requirements, the question of AI adoption is not whether AI can help with compliance. The benefits — faster onboarding, more consistent risk assessment, easier document review — are obvious. The question is whether the firm can adopt AI without taking on a new category of risk in the process.
RAG Intelligence is designed around this question. Five properties make it fit for purpose.
Defensibility
Every output produced by a RAG Intelligence system can be defended in front of a regulator, an auditor, or a court, because every output is grounded in a source the firm controls and can produce. There should be no hallucination to explain, no model behaviour to defend, no black box to apologise for. There is only the source, and what was retrieved from it.
Regulatory alignment
Because RAG Intelligence is decision-support rather than decision-making, it is designed to sit outside the EU AI Act's high-risk categorisation for autonomous decision systems, and outside GDPR Article 22's restrictions on solely automated decisions. It belongs to the same regulatory category as the calculators, search tools, and document templates the firm already uses — a tool the compliance professional applies, not an actor the firm has to govern.
Data sovereignty
A correctly built RAG Intelligence system runs on EU-based infrastructure with self-hosted language models inside the EU. No customer data leaves the EU. No third-party model provider sees a single customer file. The firm's data stays the firm's data.
Auditability
Every retrieval, every query, every output, every citation can be logged. The firm has, on demand, a full record of what the AI was asked, what it retrieved, and what it produced. This is what an internal control system looks like. It is also what makes ISAE 3000 and ISO 27001 assurance possible over AI workflows — something generative AI, by its nature, makes far harder.
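The record described above can be sketched as one append-only entry per interaction. The field names and the JSON-lines shape are illustrative assumptions, not a prescribed schema.

```python
# Sketch of the audit trail: one record per interaction, capturing
# what the AI was asked, what it retrieved, and what it produced.
# Field names are illustrative, not a prescribed schema.
import json
from datetime import datetime, timezone

def audit_record(query: str, retrieved_ids: list[str], output: str) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "query": query,              # what the AI was asked
        "retrieved": retrieved_ids,  # what it retrieved
        "output": output,            # what it produced
    }
    # In practice this line would be appended to a write-once log.
    return json.dumps(record)
```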
Human authority
The compliance officer remains the compliance officer. They review what the system surfaces, they confirm what the system drafts, they make the call. The system reduces the time they spend on retrieval and assembly. It does not reduce the time they spend on judgement, and it does not pretend to.
Workflow automation is not enough
The compliance technology market has, until now, been defined by workflow automation. Existing platforms collect documents, route them through approval flows, schedule reassessments, and produce audit logs. They make compliance work faster. They do not make it more intelligent.
Workflow automation answers the question "did the firm collect the right documentation?" It does not answer "does the documentation actually support the conclusion?" That is the question regulators, auditors, and supervisors will increasingly ask.
RAG Intelligence is the layer above workflow automation. It does not replace the workflow — clients still need to be onboarded, documents still need to be collected, reassessments still need to be scheduled. But it adds something workflow automation cannot: every conclusion in the file, drawn from the actual source material in the file, cited to the page it came from. The compliance officer reviewing the assessment is not reading a summary the system invented. They are reading what the documentation actually says, organised for a decision.
Workflow automation gets the documentation collected. RAG Intelligence makes sure the documentation answers the question.
What RAG Intelligence is not
Defining a category is partly a matter of saying what falls inside it and partly a matter of saying what does not. RAG Intelligence is a precise term. It excludes several things that are sometimes marketed adjacent to it.
It is not generative AI with retrieval bolted on
A generative system that also searches the web or also consults a vector store is not RAG Intelligence if it still falls back on parametric knowledge when retrieval is thin. The discipline of RAG Intelligence is that retrieval is the floor, not a feature.
It is not agentic AI
A system that takes actions, makes decisions, or executes workflows autonomously is not RAG Intelligence, regardless of how well it cites its sources. RAG Intelligence is bounded by design: it informs the human, it does not act on the human's behalf.
It is not a chatbot
A conversational interface is a delivery mechanism. RAG Intelligence is an architectural standard. A chat experience built on RAG Intelligence is fine; a chat experience that calls itself RAG-powered while generating uncited summaries is not.
It is not a compliance decision system
RAG Intelligence does not assess risk levels, classify customers, or determine whether a Suspicious Activity Report is warranted. It surfaces the inputs to those decisions. The decisions belong to the firm's designated compliance officer.
The distinction matters because the regulatory and reputational risk profile of compliance AI depends entirely on which side of these lines a system sits on. RAG Intelligence is the side a regulated firm can actually adopt.
A standard, not a product
RAG Intelligence is not a Meo feature. It is a category — a standard that any compliance technology can meet or fail to meet. A platform delivers RAG Intelligence if, and only if:
- All outputs are derived from retrieved source material in the firm's controlled knowledge base.
- Every output is cited to its specific source at a level of granularity that supports independent verification.
- The system does not produce ungrounded inferences when retrieval returns insufficient material — it reports the gap.
- The system does not make compliance decisions, take regulated actions, or operate autonomously on the firm's behalf.
- Every query and output is logged in a manner that supports audit and regulatory review.
A platform that meets these five criteria delivers RAG Intelligence. One that does not, does not — regardless of how it is marketed.
We publish this definition because the category needs one. As AI adoption in compliance accelerates, firms will be asked to evaluate a growing number of tools claiming to be safe for regulated work. "Is it RAG Intelligence?" is the question that cuts through the marketing and points at the architecture.
What this means for the next decade of compliance
The regulated firms that thrive in the next ten years will be the ones that adopt AI without abandoning the standards that make regulated professional work worth paying for: defensibility, traceability, and accountable human judgement.
This is not a contradiction. The wrong AI threatens those standards. The right AI reinforces them — by automating the parts of compliance work that should be automated, while leaving the parts that should not be exactly where they belong.
The window in which firms can choose their architecture rather than have it chosen for them is narrowing. The AI Act's high-risk classifications come into force across 2026 and 2027. Firms that adopt the wrong AI architecture now will spend the second half of the decade unwinding it.
RAG Intelligence is our name for the right architecture. We did not invent retrieval-augmented generation. We did not invent the regulatory principles it serves. We are naming the discipline of bringing the two together correctly, and committing — publicly, in writing — to the architectural standards that make it real.
If you are a compliance officer, a managing partner, or a procurement lead at a regulated firm, the question to ask of any AI tool entering your firm is no longer "does it use AI?" The answer is yes; it always will be. The question is now: "is it RAG Intelligence?"
If the answer is no, the firm will eventually be asked to explain why.
About Meo
Meo is a compliance platform for regulated industries. Founded in Copenhagen and built around the ability to evidence compliance, it covers the full workflow from initial risk assessment, to data exchange, to ongoing data management and audit-ready documentation. These are the activities at the core of regulatory frameworks spanning financial crime prevention, data protection, information security, and sector-specific obligations.
Meo ApS — RAG Intelligence for Compliance.
Always retrieved. Never generated.

