AI Compliance Diagnostic · Free Assessment

Is Your AI Compliant with the EU AI Act?
Find Out in 10 Minutes.

The ACAI diagnostic tests your running AI systems across 5 technical and governance dimensions. Built on the ACAI Framework — engineering-based, not a maturity checkbox.

15 questions across 5 audit dimensions
Aug 2, 2026 enforcement deadline
Penalties up to €35M or 7% of global annual turnover

AI Compliance Audit Index (ACAI)

Rate each statement based on your organisation's current AI systems in production. Honest answers only — overconfidence here is itself a compliance risk.

D1 System Inventory · D2 Behavioural Consistency · D3 Data Governance · D4 Governance Process · D5 Transparency
D1
System Inventory & Risk Classification
Do you know what you have deployed and how it's classified under EU AI Act risk tiers?
Question 1 of 15
We have a complete, up-to-date inventory of every AI/ML system running in production — including third-party tools and embedded AI features in software we use.
No inventory exists → Full documented registry
Question 2 of 15
Each AI system has been assessed against EU AI Act risk tiers (Unacceptable / High-Risk / Limited / Minimal) and we know which Annex III categories, if any, apply to our systems.
No classification done → All systems classified
Question 3 of 15
There is a named, accountable owner for each AI system — a person who can be held responsible if that system causes harm, a compliance breach, or a reputational incident.
Nobody accountable → Clear ownership enforced
D2
Behavioural Consistency & Reliability
Have you tested whether your models behave reliably in production conditions?
Question 4 of 15
We have tested our production AI models for calibration — we know whether confidence scores are reliable, and we monitor for overconfidence or underconfidence in high-stakes decisions.
No calibration testing → Continuous monitoring
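
Not sure what calibration testing looks like in practice? Below is a minimal Python sketch of an expected calibration error (ECE) check. The confidence scores, bin count, and variable names are purely illustrative, not part of the diagnostic.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # Bin predictions by confidence; compare each bin's average
    # confidence to its observed accuracy (a standard ECE estimate).
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.sum() == 0:
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += (mask.sum() / len(confidences)) * gap
    return ece

# Hypothetical data: model confidence vs. whether it was actually right.
conf = np.array([0.95, 0.80, 0.70, 0.99, 0.60])
hit = np.array([1, 1, 0, 1, 0])
print(f"ECE: {expected_calibration_error(conf, hit):.3f}")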
Question 5 of 15
We monitor for distribution shift — we have a process to detect when the data our models receive in production is meaningfully different from what they were trained on.
No drift detection → Full automated monitoring
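
Drift detection can start as simply as a two-sample statistical test per feature. A minimal sketch using SciPy's KS test; the feature values and the alert threshold are hypothetical, and the threshold is a policy choice, not a standard.

import numpy as np
from scipy.stats import ks_2samp

# Hypothetical single feature: training baseline vs. last week in production.
rng = np.random.default_rng(0)
train_values = rng.normal(loc=0.0, scale=1.0, size=5000)
live_values = rng.normal(loc=0.4, scale=1.0, size=1000)  # simulated shift

stat, p_value = ks_2samp(train_values, live_values)
if p_value < 0.01:  # illustrative alert threshold
    print(f"Drift alert: KS statistic {stat:.3f}, p-value {p_value:.1e}")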
Question 6 of 15
Our models have been tested with edge cases and adversarial inputs — we know how they behave when inputs are unusual, ambiguous, or slightly outside the training distribution.
No robustness testing → Comprehensive test suite
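
A crude robustness probe for this question: perturb an input slightly and count how often the prediction flips. The toy model below is hypothetical, and a real test suite would go much further than random noise.

import numpy as np

def flip_rate_under_noise(predict, x, eps=0.05, trials=200, seed=0):
    # Fraction of small random perturbations that change the prediction:
    # a crude local-robustness proxy, not a formal adversarial attack.
    rng = np.random.default_rng(seed)
    base = predict(x)
    flips = sum(
        predict(x + rng.uniform(-eps, eps, size=x.shape)) != base
        for _ in range(trials)
    )
    return flips / trials

# Hypothetical toy model: classify by the sign of the feature sum.
predict = lambda x: int(x.sum() > 0)
borderline_input = np.array([0.01, -0.005])
print(flip_rate_under_noise(predict, borderline_input))  # high = fragile here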
D3
Data Governance & Lineage
Can you trace where your training data came from and prove it was handled compliantly?
Question 7 of 15
We have documented data lineage for every production AI system — we can show where training data came from, how it was processed, and whether it contained personal data subject to GDPR.
No lineage documentation → Fully auditable lineage
Question 8 of 15
We have assessed whether our training data was representative of the real-world population our models make decisions about — including bias and fairness analysis where relevant.
No representativeness check → Documented bias analysis
Question 9 of 15
If an upstream data source changes (a database schema update, a new data feed, a change in how customer data is collected), we would detect this before it silently affects model performance.
Would find out when model breaks → Automated pipeline validation
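
One concrete form of this pipeline validation is a schema contract that fails ingestion loudly when an upstream feed changes. The column names and dtypes below are illustrative, assuming a pandas-based pipeline.

import pandas as pd

# Hypothetical contract for one upstream feed.
EXPECTED_SCHEMA = {
    "customer_id": "int64",
    "account_age_days": "int64",
    "monthly_spend": "float64",
}

def validate_schema(df: pd.DataFrame) -> None:
    # Fail loudly at ingestion time, rather than discovering the change
    # weeks later through degraded model performance.
    actual = {col: str(dtype) for col, dtype in df.dtypes.items()}
    missing = set(EXPECTED_SCHEMA) - set(actual)
    changed = {
        col for col in EXPECTED_SCHEMA
        if col in actual and actual[col] != EXPECTED_SCHEMA[col]
    }
    if missing or changed:
        raise ValueError(f"Schema drift: missing={missing}, changed={changed}")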
D4
Governance Process & Change Control
Do you have structured processes for deploying, updating, and monitoring AI systems?
Question 10 of 15
There is a formal deployment gate for AI systems — a defined process that must be completed (technical evaluation, risk sign-off, documentation) before any model goes live or is updated in production.
Models deploy when ready → Enforced deployment gate
Question 11 of 15
We have a model version registry — every production model has a documented version history, including who approved each version, what changed, and what evaluation was done before deployment.
No version control → Complete auditable history
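
A sketch of what one registry entry might record. The field names are illustrative, not a standard; the point is that every deployed version carries its approver, its changes, and its evaluation evidence.

from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ModelVersionRecord:
    # The fields an auditor will ask for; names are illustrative.
    model_name: str
    version: str
    approved_by: str        # the named accountable owner (see Question 3)
    approval_date: date
    change_summary: str
    evaluation_report: str  # link to the pre-deployment evaluation evidence

record = ModelVersionRecord(
    model_name="credit-scoring",
    version="2.4.1",
    approved_by="jane.doe@example.com",
    approval_date=date(2026, 1, 15),
    change_summary="Retrained on Q4 data; recalibrated decision threshold.",
    evaluation_report="s3://ml-audit/credit-scoring/2.4.1/eval.html",
)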
Question 12 of 15
There is a documented incident response process for AI systems — if a model produces a harmful output, causes an operational failure, or triggers a regulatory concern, we know exactly what to do.
No incident process → Documented & tested
D5
Transparency & Explainability
Can you explain what your AI systems do and how they decide, and document it for regulators?
Question 13 of 15
Our AI systems can produce explanations for their outputs that a non-technical person can act on — not just accuracy metrics, but specific reasons why a particular decision or prediction was made.
No explainability → Actionable explanations
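
Here, "actionable" means reasons a reviewer can check against the case file. The sketch below decomposes a hypothetical linear score into signed feature contributions; black-box models would need an attribution method (e.g. SHAP) to produce something comparable.

import numpy as np

# Hypothetical linear score: explanations from signed feature contributions.
FEATURES = ["account_age_days", "monthly_spend", "missed_payments"]
weights = np.array([0.002, 0.010, -0.800])  # illustrative coefficients

def explain(x):
    # Return the top factors pushing the score up or down, phrased so a
    # non-technical reviewer can sanity-check them.
    contributions = weights * x
    order = np.argsort(-np.abs(contributions))
    return [
        f"{FEATURES[i]} {'raised' if contributions[i] > 0 else 'lowered'} "
        f"the score by {abs(contributions[i]):.2f}"
        for i in order[:2]
    ]

print(explain(np.array([400, 120, 2])))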
Question 14 of 15
We have technical documentation for each AI system that meets Article 11 requirements — system purpose, capabilities, limitations, training methodology, performance metrics, and known risks.
No technical documentation → Full Article 11 compliance
Question 15 of 15
Humans can meaningfully override our AI systems — there is a real human-in-the-loop for high-stakes decisions, not just a theoretical override that nobody uses in practice.
Fully automated decisions → Enforced human oversight

Where Should We Send Your Compliance Report?

Your personalised ACAI report — including dimension scores, EU AI Act exposure, and prioritised remediation tasks — will be sent to your email immediately.

Please answer all 15 questions and fill in your name, email and organisation.

Responses are confidential and used only to generate your report.


ACAI Compliance Assessment Result
0 / 75

Critical Findings & Remediation Priorities

Based on your lowest-scoring dimensions — ranked by EU AI Act penalty exposure.

Get the Full Technical Audit

This diagnostic identifies where your exposure lies. The full ACAI audit runs real technical tests on your production systems and delivers a findings register ready to show a regulator or customer.

Request the Full Audit
Full audit: €25,000–€55,000 · Readiness Scan: €8,000–€15,000 · gayan@gayandesilva.com