
EU AI Act: 6 Steps For Finding A Compliant AI Partner

By Dimitar Dimitrov

11.02.2025

The EU AI Act is a piece of landmark regulation designed to build trustworthy AI by setting stringent obligations for providers and users of AI systems. Its introduction comes at a time when AI adoption is accelerating rapidly across industries.


After hovering at around 50% for six years, AI adoption has jumped: one of the latest McKinsey surveys reveals that 72% of organizations now use AI, a significant increase that reflects the growing reliance on these technologies.


As businesses increasingly integrate AI into their operations in 2025, partnering with an AI development company that aligns with the AI Act is critical. Below, we walk you through the most relevant clauses of the regulation, offering actionable insights to help you verify your partner’s readiness.


Demand Comprehensive Risk Classification


Chapter 3, Section 1, Article 6: Classification Rules for High-Risk AI Systems


At the core of the AI Act is a risk classification framework that categorizes AI systems into four tiers: unacceptable, high, limited and minimal risk. High-risk systems, like those used in biometric identification or critical infrastructure, are subject to the strictest requirements.


Thus, when evaluating an AI development partner, request a detailed risk classification report to assess how the proposed system fits within this framework.


At Accedia, we conducted an internal assessment of our AI projects and found that over 90% fall into the limited risk category. While this classification involves fewer regulatory hurdles than high-risk systems, it still demands comprehensive documentation to meet compliance standards.


That’s why it’s important to guarantee that any risk classification report from your vendor clearly outlines the rationale behind the classification. If the system falls into the high-risk category, demand a compliance roadmap specifying steps for conformity assessments, whether conducted internally or by an external body.
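To make this concrete, here is a minimal sketch of what a machine-readable risk classification record could look like. It is written in Python purely for illustration: the Act mandates the analysis, not any particular schema, and every field name below is a hypothetical example.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class RiskClassificationRecord:
    """Hypothetical record a vendor might supply. The Act mandates the
    analysis, not this structure; all field names are illustrative."""
    system_name: str
    intended_purpose: str
    tier: RiskTier
    rationale: str                          # why the system falls into this tier
    annex_iii_categories: list[str] = field(default_factory=list)
    conformity_roadmap: str | None = None   # expected for high-risk systems

    def needs_conformity_assessment(self) -> bool:
        # High-risk systems must undergo conformity assessment (Article 43)
        return self.tier is RiskTier.HIGH

record = RiskClassificationRecord(
    system_name="CV screening assistant",
    intended_purpose="Rank job applications for human review",
    tier=RiskTier.HIGH,
    rationale="Employment use case listed in Annex III",
    annex_iii_categories=["employment and workers management"],
)
assert record.needs_conformity_assessment()
```

A record along these lines makes the vendor’s rationale auditable and makes it immediately clear when the high-risk conformity obligations apply.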




Require Conformity Assessment Certification


Chapter 3, Section 5, Article 43: Conformity Assessment


Compliance doesn’t end with classification. High-risk systems must also pass rigorous conformity assessments to verify their safety and transparency. Software development companies should demonstrate their ability to meet these requirements, ideally by providing evidence of successful audits by a notified body or internal conformity checks.


To protect yourself, make this non-negotiable in your contract. Specify that you need conformity certifications before deployment. Also, look for a vendor who is proactive about keeping compliance records up to date and adheres to IT governance best practices. It’s a sign they’re committed to long-term collaboration, not just delivering a one-off project.
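As an illustration of how such a clause could be enforced in practice, here is a minimal sketch of a pre-deployment gate that refuses to release a system without a valid certificate on record. The function and data shapes are hypothetical: one possible way to wire the contractual requirement into a release pipeline, not a prescribed mechanism.

```python
from datetime import date

def conformity_gate(system_id: str, certificates: dict[str, date]) -> None:
    """Hypothetical release check: `certificates` maps system IDs to
    the expiry dates of conformity certificates kept on file."""
    expiry = certificates.get(system_id)
    if expiry is None:
        raise RuntimeError(f"{system_id}: no conformity certificate on file")
    if expiry < date.today():
        raise RuntimeError(f"{system_id}: certificate expired on {expiry}")
    print(f"{system_id}: certificate valid until {expiry}, release may proceed")

# Usage: run as a blocking step in the deployment pipeline
conformity_gate("credit-scoring-v2", {"credit-scoring-v2": date(2026, 6, 30)})
```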


Ensure Post-Market Monitoring


Chapter 9, Section 1, Article 72: Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems


Once a system is operational, the AI Act requires providers to implement robust post-market monitoring mechanisms. These systems collect, document and analyze performance data over the AI system’s lifetime to ensure continued compliance and address emerging risks.


When selecting an AI development partner, review their post-market monitoring strategy and confirm it includes protocols for incident reporting and issue resolution. Serious incidents must be reported to the relevant authorities within 15 days of the provider becoming aware of them, a notably strict timeline, so ask how they have handled such reports in the past.


They should be able to provide historical examples of how they’ve managed incidents, including communication with stakeholders and mitigation efforts. Contractual provisions can also formalize these obligations, specifying penalties for non-compliance.
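To illustrate that timeline in code, here is a minimal sketch of an incident record that computes the reporting deadline from the date the provider became aware of the issue. The structure is hypothetical; a real post-market monitoring system would feed something like this into its dashboards and escalation workflows.

```python
from dataclasses import dataclass
from datetime import date, timedelta

REPORTING_WINDOW = timedelta(days=15)  # deadline for reporting serious incidents

@dataclass
class Incident:
    description: str
    became_aware: date                   # the clock starts when the provider learns of it
    reported_to_authority: date | None = None

    def reporting_deadline(self) -> date:
        return self.became_aware + REPORTING_WINDOW

    def is_overdue(self, today: date) -> bool:
        return self.reported_to_authority is None and today > self.reporting_deadline()

incident = Incident("Biased outputs affecting loan decisions", became_aware=date(2025, 3, 1))
print(incident.reporting_deadline())           # 2025-03-16
print(incident.is_overdue(date(2025, 3, 20)))  # True: escalate immediately
```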


Ensure The AI Development Company Provides Explainability


Chapter 3, Section 2, Article 12: Record-Keeping and Chapter 3, Section 2, Article 13: Transparency and Provision of Information to Deployers


The AI Act emphasizes explainability as a core requirement for high-risk AI systems. These systems must provide explanations for their outputs, tailored to the intended audience and use case. This goes beyond the general concept of transparency seen in other regulations and ensures that decision-making processes are understandable to stakeholders.


Ask your potential AI development partners for documentation outlining how the developed systems generate decisions, particularly for sensitive applications such as recruitment or credit scoring. Test these explanations with different user groups, including technical staff, end users and regulators, to ensure clarity. Also, check whether your potential AI development partner includes tools for generating logs and evidence for audits.
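One lightweight way a vendor might produce such audit evidence is a per-decision log that pairs every output with the feature attributions behind it. The sketch below is hypothetical: the field names are illustrative, and the attribution scores could come from any explanation method (SHAP and LIME are common choices).

```python
import json
from datetime import datetime, timezone

def log_decision(decision_id: str, model_version: str, inputs: dict,
                 output: str, attributions: dict[str, float]) -> str:
    """Build one audit-log entry pairing an output with the feature
    attributions that explain it. Field names are hypothetical; the Act
    requires traceable records, not this exact format."""
    record = {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "attributions": attributions,  # e.g. SHAP or LIME scores per feature
    }
    return json.dumps(record)

print(log_decision(
    "req-0042", "credit-model-1.3",
    inputs={"income": 52000, "tenure_months": 18},
    output="declined",
    attributions={"income": -0.12, "tenure_months": -0.31},
))
```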




Address Risks With General-Purpose AI


Chapter 5, Section 1, Article 52: Procedure


The Act introduces specific obligations for general-purpose AI systems, which are often used across multiple industries. When building such systems, vendors must ensure that users understand their limitations and intended use cases.


When working with an AI development company, request clear usage guidelines tailored to your industry and evaluate their ability to simulate and address potential misuse scenarios. Liability clauses in your agreement can further protect your business, holding the provider accountable if ambiguous guidance leads to harmful outcomes.
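As a rough illustration, a vendor could encode its usage guidelines as an explicit allow/deny policy checked before a request is served. The sketch below is hypothetical; the use-case strings and the policy itself would come from the guidelines you negotiate with the vendor.

```python
# Hypothetical policy: both sets would come from the vendor's usage guidelines.
APPROVED_USES = {"document summarization", "code review", "customer support drafts"}
PROHIBITED_USES = {"biometric identification", "credit scoring"}

def check_intended_use(declared_use: str) -> bool:
    """Return True only for documented use cases; reject prohibited ones
    outright and route anything unknown to manual review (False)."""
    use = declared_use.strip().lower()
    if use in PROHIBITED_USES:
        raise PermissionError(f"'{declared_use}' is outside the system's intended use")
    return use in APPROVED_USES

print(check_intended_use("Document summarization"))  # True
```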


Confirm AI Development Partner Location And Territorial Compliance


Chapter 1, Article 2: Scope


Consider the vendor’s location and its implications for compliance. The AI Act applies to companies both within and outside the EU if their systems are placed on the EU market or affect EU users.


Non-EU vendors are required to designate an EU-based authorized representative, so ask for proof during your evaluation. If your potential AI development partner isn’t EU-based, dig deeper. Do they have a strong understanding of EU regulations? How do they plan to adapt their systems to comply with the Act? This consideration is also essential for vendors operating across multiple markets, where compliance nuances can vary significantly between EU and non-EU jurisdictions.


As a company working with clients in over 20 countries—spanning several continents—we’ve seen how geographic diversity can add complexity to compliance, making a thorough understanding of territorial regulations vital. Thus, contracts should specify jurisdictional requirements, holding development companies accountable for adhering to the Act regardless of their headquarters.


Conclusion


Navigating the AI Act might feel overwhelming, but it’s worth the effort. This regulation isn’t just about legal compliance—it’s about building trust in AI and ensuring these powerful technologies are used responsibly. By asking the right questions, you can find an AI development partner who follows IT governance best practices and is therefore both compliant and committed to ethical and transparent AI solutions.


This article was originally published by Dimitar Dimitrov, Managing Partner at Accedia, as a contribution to the Forbes Technology Council.


Author: Dimitar Dimitrov

Dimitar is a technology executive specializing in software engineering and IT professional services, with solid experience in corporate strategy, business development, and people management. A flexible and effective leader, he has been instrumental in driving triple-digit revenue growth through a genuine dedication to customer success, outstanding attention to detail, and an infectious enthusiasm for technology.