EU AI Act Compliance
Xase helps AI Labs and Data Holders comply with the EU AI Act by providing governance controls, evidence generation, and regulatory documentation for high-risk AI systems.
EU AI Act overview
The EU AI Act is the world's first comprehensive regulatory framework for AI systems, establishing:
- Risk-based approach with specific requirements for high-risk AI
- Technical documentation requirements for AI systems
- Human oversight for significant decisions
- Transparency and explainability obligations
- Risk management and quality management systems
- Conformity assessment procedures and CE marking
The EU AI Act applies to providers, deployers, and importers of AI systems used in the EU, regardless of where they are established.
Key EU AI Act requirements
Article 11: Technical Documentation
Xase automatically generates technical documentation for high-risk AI systems:
import xase
client = xase.Client(api_key="sk_...")
# Generate EU AI Act technical documentation
tech_doc = client.compliance.create_technical_documentation(
    model_id="diagnostic-model-v2",
    documentation_type="eu_ai_act",
    sections=[
        "system_description",
        "risk_management",
        "data_governance",
        "technical_specifications",
        "human_oversight_measures",
        "accuracy_metrics",
        "cybersecurity_measures"
    ],
    format="pdf"  # or "html", "markdown"
)

# Save documentation for conformity assessment
tech_doc.download("./eu_ai_act_technical_documentation.pdf")
Article 14: Human Oversight
Xase enables and documents human oversight:
# Configure human oversight requirements
client.compliance.configure_human_oversight(
    model_id="diagnostic-model-v2",
    requirements={
        "review_percentage": 100,  # 100% human review
        "oversight_roles": ["medical_professional", "legal_advisor"],
        "review_timeout_hours": 24,  # Maximum time allowed for review
        "escalation_path": "medical_director@hospital.org",
        "required_documentation": ["review_reason", "decision_rationale"]
    }
)

# Record human intervention with all required EU AI Act fields
intervention = client.interventions.create(
    record_id="rec_a1b2c3",
    actor_email="doctor@hospital.org",
    actor_role="medical_professional",
    action="APPROVED",
    reason="Clinical indicators consistent with model output",
    oversight_notes="Verified with patient history and latest lab results",
    response_time_minutes=45
)
Article 17: Quality Management System
Xase facilitates quality management documentation and audits:
{
  "quality_management": {
    "documented_procedures": [
      {
        "procedure_id": "AI-QMS-001",
        "title": "Testing Strategy for High-Risk AI Systems",
        "version": "1.2",
        "last_updated": "2025-12-15T10:30:00Z",
        "responsible_party": "QA Manager",
        "link_to_document": "https://xase.ai/evidence/docs/AI-QMS-001.pdf"
      },
      {
        "procedure_id": "AI-QMS-002",
        "title": "Risk Management for AI Systems",
        "version": "2.0",
        "last_updated": "2026-01-10T14:15:00Z",
        "responsible_party": "Risk Officer",
        "link_to_document": "https://xase.ai/evidence/docs/AI-QMS-002.pdf"
      },
      {
        "procedure_id": "AI-QMS-003",
        "title": "Post-Market Monitoring of AI Systems",
        "version": "1.1",
        "last_updated": "2026-01-05T09:45:00Z",
        "responsible_party": "Product Manager",
        "link_to_document": "https://xase.ai/evidence/docs/AI-QMS-003.pdf"
      }
    ]
  }
}
Article 72: Post-Market Monitoring
Continuous monitoring and evidence collection:
# Configure post-market monitoring
client.compliance.setup_monitoring(
    model_id="diagnostic-model-v2",
    monitoring_config={
        "performance_metrics": ["accuracy", "fairness", "drift"],
        "alert_thresholds": {
            "accuracy_drop": 0.05,  # Alert on 5% accuracy decrease
            "drift_score": 0.15,  # Alert on 15% distribution drift
            "fairness_disparity": 0.10  # Alert on 10% disparity
        },
        "monitoring_frequency": "daily",
        "reporting_frequency": "monthly",
        "responsible_contacts": [
            "ai_safety@company.com",
            "compliance@company.com"
        ]
    }
)

# Generate post-market monitoring report
monitoring_report = client.compliance.generate_monitoring_report(
    model_id="diagnostic-model-v2",
    start_date="2026-01-01",
    end_date="2026-01-31",
    include_incidents=True,
    include_performance_metrics=True,
    format="pdf"
)

# Save report for regulatory submission
monitoring_report.download("./eu_ai_act_monitoring_report_jan_2026.pdf")
How Xase helps with EU AI Act compliance
Runtime Governance
Policy Engine ensures AI systems operate only within permitted parameters, with real-time enforcement of usage rules.
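As a rough sketch of what runtime enforcement could look like, reusing the client configured above; the client.policies.create method and its parameters are assumptions made for this example, not a documented Xase API:
# Hypothetical policy definition restricting the model to its approved purpose.
# Method name and rule fields are illustrative only.
policy = client.policies.create(
    model_id="diagnostic-model-v2",
    rules={
        "allowed_purposes": ["clinical_decision_support"],
        "allowed_regions": ["EU"],
        "require_human_review_above_risk_score": 0.8
    },
    enforcement_mode="block"  # reject non-compliant requests at runtime
)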
Evidence Generation
Automatic creation of immutable evidence bundles for all AI operations, supporting conformity assessment and audits.
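For illustration, retrieving the evidence bundle attached to a single decision might look like the sketch below; the client.evidence.get call and the content_hash attribute are assumptions for this example rather than documented endpoints:
# Hypothetical retrieval of the immutable evidence bundle for one decision record
bundle = client.evidence.get(record_id="rec_a1b2c3")

# Export the bundle for auditors or a notified body (illustrative file name)
bundle.download("./evidence_rec_a1b2c3.zip")

# Compare the bundle's hash against the ledger entry to show it is unaltered (assumed attribute)
print(bundle.content_hash)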
Human Oversight
Structured workflow for human review of AI decisions with complete traceability and documentation.
Technical Documentation
Auto-generated EU AI Act-compliant documentation that evolves with your AI system.
Risk Management
Tools to identify, assess, and mitigate risks in AI systems throughout their lifecycle.
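As a sketch only, recording an identified risk and its mitigation might look like the following; client.compliance.register_risk and its fields are assumptions made for this example:
# Hypothetical risk register entry; method and field names are illustrative
risk = client.compliance.register_risk(
    model_id="diagnostic-model-v2",
    description="Reduced accuracy for under-represented patient groups",
    severity="high",
    likelihood="medium",
    mitigation="Quarterly fairness evaluation with re-training trigger",
    owner="ai_safety@company.com"
)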
CE Marking Support
Readiness assessment and documentation preparation for conformity assessment procedures.
EU AI Act compliance checklist
Risk classification assessment
Determine if your AI system is high-risk under EU AI Act criteria
Technical documentation
Generate and maintain comprehensive technical documentation
Human oversight measures
Implement and document human oversight mechanisms
Quality management system
Establish a quality management system for AI development
Post-market monitoring
Implement a continuous monitoring and reporting system
Conformity assessment
Complete the relevant conformity assessment procedure
CE marking
Apply CE marking once conformity is demonstrated
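To track progress against this checklist, a compliance status call along the following lines could be used; client.compliance.get_status and the returned fields are assumptions for this sketch, not a documented Xase endpoint:
# Hypothetical checklist tracking; endpoint and response fields are illustrative
status = client.compliance.get_status(
    model_id="diagnostic-model-v2",
    framework="eu_ai_act"
)

for item in status.checklist:
    print(f"{item.name}: {'complete' if item.complete else 'outstanding'}")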
