The EU AI Act officially entered into force in August 2024, but six months in, a troubling pattern has emerged: companies think they’re compliant when they’re not. Industry assessments now suggest that the gap between perceived and actual compliance is significant across sectors — particularly in high-risk categories like hiring tools, credit scoring, and healthcare diagnostics.
For Indian technology companies and startups with European customers or ambitions, this isn’t a distant regulatory concern. It’s a live business risk that affects product roadmaps, partnership deals, and market entry timelines.
The Compliance Gap Nobody Talks About
The EU AI Act categorises AI systems by risk level, from minimal to unacceptable. High-risk systems — which include anything used in employment, education, law enforcement, or critical infrastructure — face the strictest requirements. These include mandatory risk assessments, human oversight mechanisms, and detailed technical documentation.
Here’s where companies are stumbling: most compliance efforts focus on surface-level documentation rather than structural changes to how AI systems are built and monitored. A risk assessment filed once doesn’t satisfy the Act’s requirement for continuous monitoring. Many organisations have discovered that their existing AI governance frameworks, often built around GDPR (the EU’s data protection law), don’t map cleanly onto AI Act requirements.
The penalties are substantial. The most serious violations can attract fines of up to €35 million or 7% of global annual turnover, whichever is higher. For context, that's steeper than GDPR's maximum of €20 million or 4% of turnover.
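The "whichever is higher" structure matters for large firms: the percentage cap overtakes the fixed amount once global turnover crosses €500 million. A minimal sketch of the calculation (illustrative only; actual fines are set by regulators case by case):

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound on EU AI Act fines for the most serious violations:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# EUR 1 billion turnover: 7% = EUR 70M, which exceeds the EUR 35M floor
print(max_fine_eur(1_000_000_000))  # 70000000.0

# EUR 200 million turnover: 7% = EUR 14M, so the EUR 35M floor applies
print(max_fine_eur(200_000_000))    # 35000000
```

For a mid-sized Indian startup, the fixed €35 million floor is the binding number, which is why exposure is disproportionate relative to company size.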
Where Indian Companies Face the Highest Exposure
Indian IT services giants like TCS, Infosys, and Wipro have deep relationships with European enterprises, often managing or building AI systems on their behalf. Under the EU AI Act, providers of AI systems bear primary compliance responsibility — meaning Indian vendors could be directly liable for systems deployed in Europe, even if the end customer is a European bank or manufacturer.
Startups face a different challenge. Many Indian AI companies building products for global markets have designed their systems without the EU’s “high-risk” classification in mind. A hiring tool that works perfectly well in India might require fundamental architectural changes — like explainability features or human review workflows — before it can legally operate in Germany or France.
The compliance burden falls disproportionately on smaller players who lack dedicated regulatory teams. Large enterprises can absorb the cost of hiring EU AI Act specialists; a 50-person startup cannot.
What Smart Companies Are Doing Now
Forward-thinking organisations have started treating compliance as a product feature, not a legal checkbox. This means building audit trails, bias testing, and human oversight directly into AI systems from the design phase — an approach regulators call “compliance by design.”
Some companies are conducting pre-deployment conformity assessments even when not strictly required, using them as a selling point with European customers. Others are appointing EU-based authorised representatives, a requirement for non-EU companies placing high-risk AI systems in the European market.
The timeline matters here. While enforcement of most provisions begins in August 2026, certain obligations — including bans on prohibited AI practices and requirements for AI literacy training — took effect in February 2025. Companies that wait until 2026 to act will find themselves scrambling.
The Regulatory Landscape Is Still Shifting
Adding complexity, the European Commission is still issuing guidance documents and delegated acts that clarify how the law applies to specific situations. The definition of what constitutes a “high-risk” system in practice remains somewhat fluid, with industry bodies lobbying for narrower interpretations.
Meanwhile, individual EU member states are establishing their own national supervisory authorities, which will handle enforcement. Early signs suggest these authorities may take varying approaches, creating potential inconsistencies across the bloc.
Companies operating across multiple European countries should prepare for a patchwork of enforcement styles, at least in the early years.
What This Means for You
If your AI systems touch European users or customers, conduct a gap analysis against the EU AI Act’s high-risk requirements now — not in 2026. Identify which of your systems might fall into regulated categories and assess whether your current documentation and monitoring practices meet the structural requirements, not just the paperwork ones.
Budget for compliance. This isn’t a one-time legal review; it’s an ongoing operational cost that should be factored into European market entry calculations. And if you’re building AI products for global markets, consider EU compliance requirements as baseline design constraints rather than regional add-ons.
The companies that treat this as a competitive advantage — rather than a regulatory burden — will find European doors opening faster.
