When Mercor, the AI-powered hiring platform backed by Peter Thiel, recently suffered a cyberattack, the breach didn’t come through a sophisticated zero-day exploit or social engineering. The vulnerability was traced to LiteLLM, an open-source tool that helps companies route requests to models from multiple AI providers such as OpenAI and Anthropic.
For technology leaders in India racing to deploy AI across their organisations, this incident is a warning shot. The very tools that make AI adoption faster and cheaper can also open doors you didn’t know existed.
What Actually Happened at Mercor
Mercor uses AI to match companies with talent, processing sensitive data including resumes, interview recordings, and employment details. The platform had integrated LiteLLM — a popular open-source proxy that acts as a universal translator between applications and various large language models (LLMs).
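To see why a compromised proxy is so damaging, it helps to look at the pattern such a tool implements: one call signature, routed to different providers behind the scenes. The sketch below is illustrative only; the function names and routing rules are hypothetical, not LiteLLM's actual API.

```python
# Illustrative sketch of the proxy pattern a tool like LiteLLM implements:
# one interface, many backends, selected by model-name prefix.
# Names and routing rules here are hypothetical, not LiteLLM's real API.

def _call_openai(prompt: str) -> str:
    # In a real proxy this would call the OpenAI API with stored credentials.
    return f"[openai] {prompt}"

def _call_anthropic(prompt: str) -> str:
    # Likewise for Anthropic. The proxy holds API keys for every backend,
    # which is why a compromised proxy exposes all of them at once.
    return f"[anthropic] {prompt}"

ROUTES = {
    "gpt": _call_openai,
    "claude": _call_anthropic,
}

def complete(model: str, prompt: str) -> str:
    """Route a completion request to the right provider by model name."""
    for prefix, backend in ROUTES.items():
        if model.startswith(prefix):
            return backend(prompt)
    raise ValueError(f"No route for model: {model}")
```

The convenience is exactly the risk: because every request and every credential flows through this one layer, a flaw in it compromises all connected providers at once.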
Attackers exploited security weaknesses in LiteLLM to gain unauthorised access. While Mercor has not disclosed the full extent of data compromised, the breach reportedly affected user information stored on the platform. The company has since patched the vulnerability and notified affected users.
LiteLLM, maintained by a small team, has over 15,000 GitHub stars and is used by thousands of developers worldwide. Its popularity made it an attractive target — and a single point of failure for every company relying on it.
Why Open-Source AI Tools Create Blind Spots
Most enterprises today don’t build their AI capabilities from scratch. They assemble them from components: a model from OpenAI or Google, a vector database from Pinecone, an orchestration layer from LangChain, and connectors like LiteLLM to tie everything together.
This mix-and-match approach accelerates deployment but creates what security professionals call “supply chain risk.” Each component inherits the security posture of its maintainers — often small teams or individual developers without dedicated security resources.
The problem is visibility. A recent survey by Synopsys found that 96% of commercial codebases contain open-source components, and 84% contain at least one known vulnerability. In AI systems, these components often handle API keys, user data, and model outputs — exactly the kind of information attackers want.
The Governance Gap in Indian Enterprises
Indian companies are adopting AI faster than their security policies can keep up. A Nasscom report from late 2024 found that 65% of Indian enterprises had deployed at least one AI application, but fewer than 30% had formal security review processes for AI-related dependencies.
The challenge is structural. Procurement teams know how to vet SaaS vendors. Security teams know how to audit network configurations. But AI supply chains cut across both — and often fly under the radar entirely because developers can install packages with a single command.
When an engineer adds LiteLLM or a similar tool to speed up development, it rarely goes through the same scrutiny as a new enterprise software purchase. Yet the risk profile can be comparable or worse.
Building a Defence That Matches the Threat
The Mercor incident points to specific actions technology leaders should take now.
First, inventory your AI dependencies. You cannot secure what you cannot see. Require engineering teams to document every open-source component in AI workflows, including indirect dependencies that get pulled in automatically.
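For Python-based AI services, a first pass at that inventory can come straight from the standard library, which enumerates every installed distribution, including transitive dependencies pulled in automatically. This is only a starting point; a real audit would also cover lockfiles, container images, and non-Python tooling.

```python
# Starting point for a dependency inventory: list every installed Python
# package in an AI service's environment, including transitive dependencies.
from importlib.metadata import distributions

def installed_packages() -> dict:
    """Map each installed distribution name to its version."""
    return {dist.metadata["Name"]: dist.version for dist in distributions()}

inventory = installed_packages()
for name in sorted(inventory, key=str.lower):
    print(f"{name}=={inventory[name]}")
```

Committing a snapshot like this per environment gives security teams something concrete to diff when the next advisory lands.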
Second, establish security baselines for AI tools. Before adopting any open-source AI component, evaluate the project’s security track record, maintenance activity, and disclosure practices. A popular project with no security policy is a red flag.
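One way to make such a baseline enforceable is to turn it into an explicit rubric. The fields and thresholds below are illustrative assumptions, not an established standard; adapt them to your own policy.

```python
# Hypothetical vetting rubric for an open-source AI component.
# Fields and thresholds are illustrative; tune them to your own policy.
from dataclasses import dataclass

@dataclass
class ProjectProfile:
    has_security_policy: bool    # e.g. a SECURITY.md with a disclosure process
    days_since_last_commit: int  # proxy for maintenance activity
    maintainer_count: int        # bus-factor indicator
    handles_credentials: bool    # does it see API keys or user data?

def risk_flags(p: ProjectProfile) -> list:
    """Return the red flags a security review should raise."""
    flags = []
    if not p.has_security_policy:
        flags.append("no security/disclosure policy")
    if p.days_since_last_commit > 180:
        flags.append("stale maintenance (>180 days)")
    if p.maintainer_count < 2:
        flags.append("single-maintainer bus factor")
    if p.handles_credentials and flags:
        flags.append("handles credentials despite other red flags")
    return flags
```

A rubric like this won't catch everything, but it forces the question to be asked before the package is installed rather than after the breach.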
Third, treat AI proxies and connectors as critical infrastructure. Tools like LiteLLM often handle API credentials and route sensitive data. They deserve the same network isolation, access controls, and monitoring as your databases.
Fourth, plan for component failure. If a critical open-source dependency is compromised tomorrow, do you have a response playbook? Can you switch to an alternative quickly? Redundancy isn’t just for servers.
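One concrete way to build that redundancy is a thin interface in front of the connector, with an ordered list of fallbacks and a kill switch, so a compromised component can be disabled without an emergency code change. The sketch below is a minimal illustration under those assumptions; the backend names are hypothetical.

```python
# Hypothetical sketch of redundancy at the connector layer: try backends in
# order, skipping any that have been disabled (e.g. after a security advisory).

def with_fallback(backends, disabled):
    """Return a completer that skips disabled backends and tries the rest in order.

    backends: ordered list of (name, callable) pairs
    disabled: set of backend names to skip (the kill switch)
    """
    def complete(prompt):
        errors = []
        for name, backend in backends:
            if name in disabled:
                continue  # pulled from rotation without a code change
            try:
                return backend(prompt)
            except Exception as exc:
                errors.append(f"{name}: {exc}")
        raise RuntimeError("All backends failed or disabled: " + "; ".join(errors))
    return complete
```

Wiring the `disabled` set to configuration rather than code means the response playbook becomes a config push, not a redeploy.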
What This Means for You
The Mercor breach isn’t about one company’s mistake. It reflects a systemic gap between how fast organisations adopt AI and how slowly security practices evolve to match.
For CIOs and CTOs in India, the lesson is clear: AI supply chain security needs a seat at the strategy table, not just the engineering stand-up. The tools that accelerate your AI ambitions can just as easily become your biggest vulnerabilities.
Start the inventory this week. Your next board meeting will thank you.
