AI compliance: An overview of the most important rules and obligations
Estimated reading time: 15 minutes
AI has long been part of everyday business life—but its use also brings with it growing regulatory requirements. AI compliance ensures that companies are on the safe side legally, build trust, and minimize risks. In this guide, you will learn what AI compliance means, what legal and ethical frameworks apply, and how you can build a robust AI compliance structure step by step.
AI compliance: The most important facts in brief
- AI compliance ensures that AI systems are used in a legally compliant, transparent, and responsible manner.
- Under the EU AI Act, AI applications are classified based on risk: the higher the risk of a use case, the stricter the requirements.
- In addition to compliance with legal regulations, governance, technical validation, fairness, security, and audit readiness are crucial for comprehensive AI compliance.
- With clear responsibilities, defined processes, a technical basis (MLOps), and a step-by-step approach, AI compliance can be implemented in a structured manner.
- Anyone using external AI models must pay attention to contracts, data processing, transparency, and security at the provider.
What is AI compliance?
The use of AI in companies is no longer a rarity and offers enormous potential: more efficient processes, data-driven decisions, and automated analyses are just a few of the advantages and areas of application. But what happens when AI gets things wrong or discriminates without anyone noticing? What if a model is based on biased data, processes sensitive information, or makes decisions that cannot be explained later? This is precisely where potential quickly turns into risk: legally, financially, and in terms of reputation.
The goal of AI compliance is to minimize precisely these risks. The term encompasses all organizational, technical, and legal measures that companies use to ensure that artificial intelligence is used in a legally compliant, responsible, and controlled manner. AI compliance therefore not only involves adherence to legal requirements, but also clear governance structures, transparent processes, and the fair and secure handling of data and models.
Artificial intelligence: An overview of legal issues
AI often works faster and is more scalable than humans ever could. But the more responsibility we give to AI systems, the more pressing the legal questions behind them become. What's allowed? What's not allowed? And what responsibilities do companies that use AI have? To build an AI compliance structure, it's essential that you know and can correctly classify the legal and regulatory framework that's relevant to your company.
The EU AI Act – What regulations apply now?
The EU AI Act aims to ensure that AI is used safely and reliably in Europe. To this end, it creates a uniform EU-wide framework that specifies which AI practices are prohibited and which requirements apply depending on the risk class. This applies not only to providers of AI systems, but also to companies as operators/users as soon as AI is used in the EU or its results have an impact in the EU (market location principle). Violations can result in heavy fines of up to €35 million or 7 percent of global annual turnover, whichever is higher, depending on the type of violation.
At the heart of the AI Regulation is a risk-based approach in which AI systems are classified into four levels depending on the potential damage they could cause. For companies, this means that they must first classify their AI use cases and systems in order to derive specific measures.
Unacceptable risk
This level includes AI applications that are considered fundamentally incompatible with European values. These include manipulative AI systems, social scoring, and certain forms of emotion recognition, particularly in the workplace or educational settings.
Consequence: These AI systems are prohibited and may not be developed or used. Companies must ensure that they do not use such applications—not even via third-party providers.
High risk
High-risk AI includes systems that can have a significant impact on people's rights, safety, or living conditions. These include, for example, AI in human resources management (applicant selection), education, critical infrastructure, and certain financial or healthcare applications.
Obligations: Companies must meet extensive requirements, including risk management, high-quality data, technical documentation, human oversight, logging, and regular monitoring during operation.
Limited risk
AI systems with limited risk are applications where users should be aware that they are interacting with AI or viewing AI-generated content. Typical examples include chatbots, voice assistants, or AI-generated images, texts, or videos.
Obligations: The main obligations here are transparency and labeling requirements. Users must be able to recognize that AI is being used or that content has been generated artificially. There are generally no other extensive compliance requirements.
Minimal risk
The majority of today's AI applications fall into this category, such as AI-supported recommendation systems, spam filters, or simple analysis tools.
Obligations: There are no additional legal requirements for these AI systems beyond existing laws. Nevertheless, responsible use is recommended, for example through voluntary governance or compliance measures.
Please note: AI models are not inherently "low risk" or "high risk." What matters is what they are used for and whether or not their decisions are reviewed.
Practical example: Using a large language model such as ChatGPT can be harmless if you use it to write a text for a greeting card. However, it becomes much more critical when the same model is asked for specific medication recommendations. What is relevant here is not only what the AI system is used for, but also by whom: If a pharmacist asks for details about medications, she can professionally classify and verify the information she receives. As a layperson, you usually cannot do this, and incorrect statements could have health consequences.
Data protection – What data can AI use?
When using AI, the GDPR is the central legal benchmark. As soon as the training, testing, or operation of an AI system involves personal data, the provisions of the General Data Protection Regulation apply. Companies must demonstrate a specific legal basis for processing (e.g., consent, contract, or legitimate interest) and comply with data protection principles such as purpose limitation, data minimization, transparency, and data security. Increased requirements apply to particularly sensitive data, such as that relating to employees or health.
The Federal Data Protection Act (BDSG) supplements the GDPR at the national level and is always relevant when German law is applicable. The BDSG is particularly important when it comes to the use of AI in the employment context (e.g., HR analytics, applicant management, performance evaluation), as Section 26 BDSG contains specific rules for the processing of employee data.
In addition, the EU Data Act plays a role when AI works with data from connected products or digital services. It primarily regulates access and usage rights to non-personal data. Depending on the use case, regulations such as the EU AI Act, the Telecommunications and Telemedia Data Protection Act (TTDSG), and labor or copyright regulations may also be relevant.
Intellectual property – Who owns AI-generated content?
Whether AI-generated content is protected by copyright depends largely on human contribution. Under current copyright law, protection only applies to works that represent a person's "own intellectual creation". The decisive factor is that a person shapes the content through free creative decisions and expresses their personality in it. Purely AI-generated texts, images, or music do not usually meet this requirement and therefore have no author in the legal sense.
Copyright protection may arise if the user employs AI solely as an aid and significantly influences the specific form of the result through their own creative decisions. However, this must always be examined on a case-by-case basis.
There are also clear limits when it comes to training AI systems with copyrighted works. In principle, such content may only be used if there is a legal basis, for example through licenses or legal exceptions such as text and data mining. However, these exceptions are subject to conditions and may be restricted by rights holders.
Ethical use of AI – How can AI remain fair?
Fair use of AI primarily means that AI systems do not disadvantage or discriminate against people or make decisions that lack transparency. Companies must therefore specifically check whether training data contains biases and whether decisions systematically disadvantage certain groups. In addition to technical measures such as bias analyses and fairness checks, it is also important to establish clear ethical guidelines. Guidance on this can be found, for example, in the six ethical principles of the German Federal Association of the Digital Economy (BVDW). These are defined as follows:
- Fairness: AI systems should not discriminate against or disadvantage anyone.
- Transparency: The functioning of AI systems should be transparent.
- Explainability: AI decisions should be explainable.
- Data protection: The protection of personal data must be guaranteed.
- Security: Malfunctions, manipulation, and misuse should be prevented.
- Robustness: AI systems should also function under uncertain conditions.
In addition, the UNESCO Recommendation on the Ethics of AI provides a global reference framework that emphasizes values such as privacy, transparency, explainability, and non-discrimination, and translates these into concrete actions.
Would you like to use AI in your company, but are unsure about the key considerations? We are happy to assist you.
What aspects does AI compliance cover?
The following aspects show you how to check AI compliance in practice and what measures are necessary to operate AI systems in a permanently compliant manner. At the same time, they form the basis for audits and regulatory reviews and make it clear that AI compliance only works if you take a holistic approach.
Governance artifacts
At the formal governance level, it is particularly relevant whether fundamental control and responsibility structures exist and are documented. These include in particular:
- Model cards and data sheets for each AI model
- Traceable data lineage (origin, processing, transfer of data)
- Defined data contracts
- A model governance board that conducts regular reviews, assigns models to risk classes, and decides on approvals
- Clearly defined responsibilities (data owner, model owner, reviewer)
- Documented requirements, risk assessments, and approvals
- A standardized approval pipeline (e.g., development → validation → staging → production) with gate checks
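To make the first of these artifacts more tangible, here is a minimal sketch of how a model card could be captured in code. The field names, the Python dataclass approach, and the example values are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ModelCard:
    """Minimal model card capturing governance metadata for one AI model."""
    model_name: str
    version: str
    model_owner: str                      # accountable role, see "clearly defined responsibilities"
    data_owner: str
    risk_class: str                       # e.g. "minimal", "limited", "high" (EU AI Act classification)
    intended_use: str
    training_data_sources: List[str] = field(default_factory=list)  # starting point for data lineage
    approved_by: Optional[str] = None     # model governance board sign-off
    approval_date: Optional[date] = None
    known_limitations: str = ""

# Hypothetical example entry
card = ModelCard(
    model_name="applicant-screening",
    version="1.3.0",
    model_owner="jane.doe",
    data_owner="hr-data-team",
    risk_class="high",
    intended_use="Pre-sorting of job applications with mandatory human review",
    training_data_sources=["hr_applications_2021_2024"],
)
print(card)
```

Storing such cards alongside the model in a registry makes them easy to query during reviews and audits.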
Technical validation
Technical verifiability primarily concerns the traceability, stability, and resilience of the model. Among other things, the following are examined:
- Reproducibility: Can training and scoring be reproduced (seeds, environments, pinned versions)?
- Performance tests: Metrics such as accuracy, precision/recall, AUC, and confidence distributions.
- Robustness and stress tests: Behavior in edge cases, with adversarial inputs, and under load.
- Drift analyses: Monitoring of data drift and concept drift during operation.
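As an illustration of the reproducibility and performance checks above, the following sketch trains a toy model twice with a fixed seed and records standard metrics as validation evidence. The scikit-learn model, the synthetic data, and the chosen metrics are assumptions for demonstration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

SEED = 42  # fixed seed: one ingredient of reproducible training
rng = np.random.default_rng(SEED)

# Toy data standing in for a versioned training set
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

def train(X, y):
    return LogisticRegression(random_state=SEED, max_iter=1000).fit(X, y)

model_a, model_b = train(X, y), train(X, y)

# Reproducibility check: two runs with identical code, data, and seed must match
assert np.allclose(model_a.coef_, model_b.coef_), "training is not reproducible"

# Performance metrics recorded as validation evidence
pred = model_a.predict(X)
proba = model_a.predict_proba(X)[:, 1]
print("accuracy :", accuracy_score(y, pred))
print("precision:", precision_score(y, pred))
print("recall   :", recall_score(y, pred))
print("AUC      :", roc_auc_score(y, proba))
```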
Fairness & Explainability
AI compliance must also include the ethical dimension. Companies must be able to prove that their AI does not create systematic disadvantages and that decisions are explainable. Relevant factors include, for example:
- Bias analyses with a focus on sensitive attributes
- Use of fairness metrics
- Explainability methods (e.g., SHAP, LIME, surrogate models)
- Documented decision-making logic for internal stakeholders and regulators
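One of the simplest fairness metrics is the demographic parity difference, i.e., the gap in positive-decision rates between groups. The sketch below computes it in plain NumPy; the example decisions, the binary group attribute, and the 0.10 threshold are illustrative assumptions.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-decision rates between two groups (0 = perfectly equal)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical model decisions (1 = positive decision) and a sensitive attribute
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

dpd = demographic_parity_difference(y_pred, group)
print(f"demographic parity difference: {dpd:.2f}")

# An internal threshold (here 0.10, purely illustrative) could trigger a deeper
# bias analysis with explainability methods such as SHAP or LIME.
if dpd > 0.10:
    print("WARN: decision rates differ noticeably between groups")
```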
Data protection & GDPR compliance
The data protection review assesses whether the use of data is legally permissible and technically secure. Specifically, this includes:
- Integrating data protection into the system from the outset through privacy by design (pseudonymization, anonymization, consent tracking)
- Reviewing the data origin and legal basis (consent, contract, legitimate interest)
- Implementing data minimization and purpose limitation
- Clearly defined retention and deletion periods
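Pseudonymization, one of the privacy-by-design techniques mentioned above, can be illustrated with a keyed hash over direct identifiers. The snippet below is a minimal sketch; the field names and the secret value are placeholders, and a real setup would pull the key from a secrets manager rather than hard-coding it.

```python
import hmac
import hashlib

# The secret key must be stored outside the data set (e.g., in a secrets manager);
# otherwise the pseudonymization can be reversed far too easily.
SECRET_KEY = b"replace-with-secret-from-your-key-management"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash. This is pseudonymization,
    not anonymization: whoever holds the key can reproduce the mapping."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"applicant_id": "A-4711", "email": "max.mustermann@example.com", "score": 0.82}
pseudonymized = {
    "applicant_id": pseudonymize(record["applicant_id"]),
    "email": pseudonymize(record["email"]),  # data minimization: keep only what the model needs
    "score": record["score"],
}
print(pseudonymized)
```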
Security & Access Control
Part of AI compliance is ensuring that models, data, and interfaces are protected against unauthorized access and manipulation. This can be achieved, for example, through:
- Holistic integration of security requirements through security by design and a secure software development lifecycle (SSDLC)
- Secrets and key management (e.g., with the Bring Your Own Key encryption concept, or BYOK for short)
- Use of hardware security modules (HSMs) for particularly sensitive keys
- Secure API authentication
- Role-based access control (RBAC)
- External validation, for example through penetration tests
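Role-based access control can be sketched in a few lines. The roles, actions, and permission table below are illustrative assumptions rather than a recommended permission model; the point is the deny-by-default check.

```python
from enum import Enum

class Role(Enum):
    DATA_SCIENTIST = "data_scientist"
    MODEL_OWNER = "model_owner"
    AUDITOR = "auditor"

# Which role may perform which action on the model registry (deny by default)
PERMISSIONS = {
    Role.DATA_SCIENTIST: {"read_model", "submit_for_review"},
    Role.MODEL_OWNER: {"read_model", "submit_for_review", "approve_deployment"},
    Role.AUDITOR: {"read_model", "read_audit_log"},
}

def is_allowed(role: Role, action: str) -> bool:
    """Return True only if the action is explicitly granted to the role."""
    return action in PERMISSIONS.get(role, set())

print(is_allowed(Role.DATA_SCIENTIST, "approve_deployment"))  # False
print(is_allowed(Role.MODEL_OWNER, "approve_deployment"))     # True
```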
Regulatory evidence & audit readiness
Companies must be able to prove at all times that their AI systems are operated in compliance with regulations. This requires, among other things:
- Complete audit artifacts (logs, reports, deployment histories, test results, access records)
- Evidence of model governance and validation
- Detailed instructions (known as runbooks) for incident response and rollbacks (resetting changes)
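Audit artifacts are easier to collect if events are logged in a structured form from the start. The following sketch writes one JSON-formatted audit record using Python's standard logging module; the event name, field names, and the "ticket" detail are hypothetical examples.

```python
import json
import logging
from datetime import datetime, timezone

# Structured (JSON) logs are easier to collect and query as audit evidence than free-text logs.
logger = logging.getLogger("audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def audit_event(action: str, model: str, version: str, actor: str, **details) -> None:
    """Write one structured audit record; shipping it to append-only storage
    (e.g., a central log archive) is the job of the logging backend."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "model": model,
        "model_version": version,
        "actor": actor,
        "details": details,
    }
    logger.info(json.dumps(record))

audit_event("deployment_approved", model="applicant-screening", version="1.3.0",
            actor="governance-board", ticket="GB-2024-017")
```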
In short, an AI compliance audit is a combination of governance checks, technical validation, data protection and security reviews, and regulatory evidence management.
Since this process is very time-consuming, we recommend automating it with a machine learning operations platform (MLOps platform) and predefined checks.
We would be pleased to assist you on your path to AI compliance. Simply contact us without obligation.
5 steps to ensure AI compliance in your company
AI compliance is complex and can quickly feel a bit overwhelming. The good news is that you don't have to solve everything at once. To help you get started instead of getting bogged down in checklists, we provide you with a practical 5-step plan in the next section.
1. Determine your status quo
Before implementing guidelines or technical measures, you should understand where you stand and what risks currently exist. In practice, an AI readiness check or scoping workshop has proven effective for this purpose. This involves analyzing your data situation, existing processes, skills, and infrastructure and identifying initial risks. Particular attention is paid to AI models that could have legal or financial implications (high-risk use cases).
Our experts analyze where your company stands today, what risks exist, and which AI applications should be prioritized.
2. Establish clear governance structures
Many compliance risks arise not from AI itself, but from unclear responsibilities. Therefore, define who is responsible for data, models, data protection, and security and establish these roles as binding within your organization:
- Data Owner: Responsible for the quality, origin, use, and protection of data used for AI systems.
- Model Owner: Bears technical and operational responsibility for an AI model throughout its entire lifecycle.
- DPO (Data Protection Officer): Ensures that the use of AI systems complies with the requirements of the GDPR and that data protection risks are addressed at an early stage.
- Security Owner: Responsible for the IT and information security of AI systems, including access protection, threat analysis, and security measures.
3. Define processes for the use of AI
The establishment of a minimal, binding model lifecycle process ensures that new AI models are not developed in an uncontrolled manner, but rather developed, tested, deployed, and monitored according to clearly defined phases. In addition, a model governance board takes control at the decision-making level: it conducts regular reviews, assigns models to risk classes, and decides on approvals, continued operation, or decommissioning.
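A minimal gate-check mechanism for such a lifecycle could look like the sketch below. The stage names mirror the pipeline mentioned earlier (development → validation → staging → production), while the specific gate checks are illustrative assumptions.

```python
from enum import Enum

class Stage(Enum):
    DEVELOPMENT = 1
    VALIDATION = 2
    STAGING = 3
    PRODUCTION = 4

# Gate checks that must pass before a model may move to the next stage (illustrative)
GATES = {
    Stage.VALIDATION: ["model card complete", "performance tests passed"],
    Stage.STAGING: ["bias analysis documented", "security review passed"],
    Stage.PRODUCTION: ["governance board approval", "monitoring configured"],
}

def promote(current: Stage, passed_checks: set) -> Stage:
    """Promote a model only if all gate checks of the target stage are fulfilled."""
    target = Stage(current.value + 1)
    missing = [c for c in GATES[target] if c not in passed_checks]
    if missing:
        raise ValueError(f"cannot promote to {target.name}: missing {missing}")
    return target

stage = promote(Stage.STAGING, {"governance board approval", "monitoring configured"})
print(stage)  # Stage.PRODUCTION
```

In practice, such checks would run inside the CI/CD pipeline, with the governance board confirming the final gate.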
4. Establish a technical compliance foundation
To achieve AI compliance, you need basic MLOps equipment. Ensure that code, data, and model versions are consistently versioned so that it is always clear which version was trained with which data and used productively. In addition, you should establish ongoing monitoring that makes changes in data, model behavior, and results visible at an early stage. Also, define which quality and security checks a model must undergo as standard before going live (and when updates are made).
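One way to tie code, data, and model versions together is a small training manifest with content hashes, as in the sketch below. The file names, contents, and the git revision shown are placeholders created only so the example runs end to end.

```python
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Content hash (SHA-256) so it is always clear which exact artifact was used."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Placeholder artifacts so the sketch runs; in practice these are your real
# training data set and serialized model.
Path("train.csv").write_text("feature,label\n0.1,0\n0.9,1\n")
Path("model.bin").write_bytes(b"serialized-model-placeholder")

# Minimal training manifest: ties code, data, and model versions together
manifest = {
    "code_version": "git:3f2a9c1",  # e.g. output of `git rev-parse --short HEAD` (placeholder)
    "training_data_sha256": fingerprint(Path("train.csv")),
    "model_artifact_sha256": fingerprint(Path("model.bin")),
}
Path("manifest.json").write_text(json.dumps(manifest, indent=2))
print(json.dumps(manifest, indent=2))
```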
5. Develop a pilot plan for critical use cases
Instead of implementing AI compliance directly for all models at once, a focused pilot approach is recommended. Select one or two critical AI use cases and implement compliance requirements fully in those cases.
A pilot plan covering the entire life cycle is created for these use cases: from development and validation to productive use and monitoring. The plan should include a specific audit checklist that specifies which documentation, tests, logs, and evidence are required.
This proof of concept has two advantages: First, you can quickly create an auditable reference model that you can use to test processes, documentation, and controls. Second, the lessons learned from this pilot project can then be efficiently transferred to other AI systems.
Achieving AI compliance with MaibornWolff
Designing AI systems that are legally compliant, fair, and technically sound is a major undertaking. This is especially true when use cases become more complex, regulations change, or external providers are involved. This is precisely where a well-designed AI compliance foundation proves its worth: it not only protects against regulatory risks, but also strengthens trust in your AI—both internally and externally.
Our experience from numerous AI projects shows that companies benefit most when governance, technology, and organization are considered together from the outset. At MaibornWolff, we accompany you along the entire AI value chain: from the strategic evaluation of your use cases to a tailor-made AI strategy and technical implementation.
Ensure that your AI systems are not only powerful, but also legally compliant, explainable, and operated responsibly. We support you with well-designed compliance structures, clear testing processes, and practical expertise.
FAQ: Frequently asked questions about AI compliance
Do the AI rules also apply to small businesses and start-ups?
Yes, regulations such as the EU AI Act also apply to small businesses and start-ups, provided they use or offer AI systems in the EU. The obligations are based on the risk of the AI application, not on the size of the company.
How do I set up an AI compliance structure?
First, record your AI use cases and assess their risks. Then define clear responsibilities, processes, and governance rules for the development, deployment, and monitoring of AI. Technical controls such as testing, monitoring, and documentation ensure that specifications are adhered to in everyday use. It is also important to regularly review compliance and adapt it to new requirements or changes in AI use.
What do I need to consider in terms of compliance when using AI from external providers?
The use of external AI services such as Software as a Service (SaaS), APIs, or LLMs requires additional care. In addition to a clearly regulated Data Processing Agreement (DPA) and Service Level Agreements (SLAs), the transparency of data processing is particularly important: Where is the data stored? Who processes it? And which legal obligations (e.g., liability, audit support) is the provider willing to enter into?
This is especially true for models-as-a-service: As a company, you have to trust the provider in many areas, such as training data, model behavior, and security measures. In return, providers such as Azure also assume part of the responsibility. This is a particularly good solution for many small and medium-sized enterprises, as they often do not have the financial resources to train AI models themselves.
In short, the use of external AI must be contractually and technically secured in such a way that all governance artifacts and audit access are guaranteed. In addition, for sensitive data, you should ideally keep the encryption keys yourself (Bring Your Own Key), use hybrid key models (Hold Your Own Key), or choose on-premises deployments. Some providers or models should also not be used for highly critical decisions at all.
How often do AI systems need to be reviewed or audited?
The frequency with which AI systems should be checked for compliance depends on what they are used for. For low-risk applications, ongoing, automated monitoring of key operating metrics such as model drift, data quality, performance, latency, and failure rates, as well as a comprehensive annual review, is usually sufficient.
If the use case becomes more critical, for example because decisions have financial, legal, or regulatory consequences, the checks must be much more frequent. For such high-risk use cases, in addition to continuous monitoring, regular operational health checks (daily to weekly) and structured reviews (quarterly to semi-annually) of the productive models are advisable. This ongoing review should be supplemented by annual external audits as well as ad hoc audits, for example in the event of significant changes, new regulatory requirements, security incidents, or significant performance declines.
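For the automated drift monitoring mentioned above, a common heuristic is the population stability index (PSI), which compares the distribution of a score or feature in production against the training baseline. The sketch below computes it on synthetic data; the distributions and the 0.2 rule of thumb are illustrative assumptions.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the production distribution ('actual') against the training
    baseline ('expected'); larger values indicate stronger drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid division by zero / log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)     # score distribution at training time
production = rng.normal(0.3, 1.1, 10_000)   # slightly shifted production distribution

psi = population_stability_index(baseline, production)
print(f"PSI = {psi:.3f}")  # common rule of thumb: values above 0.2 suggest significant drift
```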
How do I check whether an AI model complies with current compliance requirements?
An AI compliance audit consists of several elements:
- Governance check: Are model cards/data sheets, responsibilities, risk assessments, and approvals fully and clearly documented?
- Technical validation: Is the model reproducible and does it work stably?
- Data protection and security review: Is data usage GDPR-compliant and are systems and access securely protected?
- Regulatory evidence management: Is all the necessary audit evidence available?
MaibornWolff expert tip: Automate this audit using an MLOps platform and predefined checks.
Kyrill Schmid is Lead AI Engineer in the Data and AI division at MaibornWolff. The machine learning expert, who holds a doctorate, specializes in identifying, developing, and harnessing the potential of artificial intelligence at the enterprise level. He guides and supports organizations in developing innovative AI solutions such as agent applications and RAG systems.