New Delhi: India’s securities regulator has directed every broker, fund house, exchange, depository, and market intermediary in the country to immediately lock down their systems against a new generation of Artificial Intelligence (AI) tools capable of detecting and exploiting software vulnerabilities at machine speed, specifically naming Anthropic’s AI platform ‘Mythos’.
The Securities and Exchange Board of India (SEBI) issued the directive on 5 May in a circular signed by Deputy General Manager Mamata Roy and addressed to the entire regulated universe of Indian securities markets — from stock exchanges and mutual funds to credit rating agencies, custodians, portfolio managers, merchant bankers, and alternative investment funds.
Claude Mythos Preview is a frontier artificial intelligence model built by Anthropic that found over 2,000 previously unknown software vulnerabilities in just seven weeks of testing — including flaws that had survived decades of human security review — and developed working exploits on the first attempt in over 83 per cent of cases. Anthropic judged the model too dangerous for public release and instead formed Project Glasswing, a controlled coalition of technology companies including Amazon Web Services, Apple, Microsoft, Google, and CrowdStrike, to deploy it exclusively for defensive purposes.
The model’s announcement on 7 April triggered a wave of regulatory alarm worldwide, with Union Finance Minister Nirmala Sitharaman convening a meeting with Indian banks on pre-emptive steps, United States officials urging major financial institutions to test Mythos in controlled environments, Australia’s prudential regulator writing to its financial sector, and Germany’s Bundesbank calling on the European Commission to formally request access to the technology.
SEBI was explicit about why the threat is systemic and not confined to any single entity. “Due to the interconnectedness and interdependency of market participants in the Securities Market Ecosystem,” the circular states, “a periodic coordinated approach for vulnerability management, information sharing and monitoring/assessment is required to prevent a cascading impact.” In plain terms: one breach, and the damage could ripple across the entire market infrastructure like falling dominoes.
Task force in place
To coordinate the response, SEBI has constituted a dedicated task force called cyber-suraksha.ai that draws representatives from Market Infrastructure Institutions (MIIs), Qualified Registrar and Transfer Agents (QRTAs), Qualified Regulated Entities (QREs), and other stakeholders, reachable at project-cyber-suraksha.ai@sebi.gov.in. A meeting of this task force was specifically convened with MIIs and QRTAs to review the risks posed by platforms like Mythos and work out mitigation measures. The task force has been mandated to examine cybersecurity risks from AI-based models and devise uniform mitigation strategies, share threat intelligence and vulnerability management best practices, report cyber-incidents on a priority basis, and review the cybersecurity posture of third-party application service providers, including empaneled vendors.
The 10-point advisory that emerged from those consultations is sweeping. At the most basic level, SEBI has told all regulated entities to patch every operating system and application immediately. Where patches do not yet exist, virtual patching — a technique that protects systems through security policies rather than changes to the underlying code — is recommended as a stopgap. Vulnerability assessments, using both conventional tools and AI-based tools where suitable, must be conducted on a continuous basis in line with SEBI’s existing Cyber Security and Cyber Resilience Framework (CSCRF), the advisory says.
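In practice, a virtual patch typically sits at a gateway or web application firewall and blocks requests matching a known exploit signature, shielding the unpatched application without touching its code. The following is a minimal illustrative sketch — the signatures and the `virtual_patch_filter` function are invented for illustration, not drawn from the circular:

```python
import re

# Illustrative "virtual patch": block requests matching known exploit
# signatures at the perimeter while the real software patch is pending.
EXPLOIT_SIGNATURES = [
    re.compile(r"\.\./"),               # path-traversal attempts
    re.compile(r"(?i)union\s+select"),  # basic SQL-injection probe
]

def virtual_patch_filter(request_path, query):
    """Return True if the request should be blocked by the virtual patch."""
    payload = f"{request_path}?{query}"
    return any(sig.search(payload) for sig in EXPLOIT_SIGNATURES)

# A traversal probe is blocked; a routine request passes through.
print(virtual_patch_filter("/reports/../../etc/passwd", ""))    # True
print(virtual_patch_filter("/reports/daily", "date=2025-05-05"))  # False
```

Real deployments would express such rules in a WAF’s own rule language rather than application code; the point is that the protection lives outside the vulnerable system.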
Exchanges and depositories face specific obligations around their vendor ecosystems. They have been directed to instruct empaneled application vendors — those supplying commercial off-the-shelf (COTS) software to market members — to conduct comprehensive risk assessments covering AI-led vulnerability detection models and implement safeguards including patching, Vulnerability Assessment and Penetration Testing (VAPT), continuous monitoring, and hardening measures.
Application Programming Interface (API) security gets its own dedicated section in the advisory. Regulated entities must maintain current inventories of all APIs and the applications using them, enforce strong authentication on least-privilege principles, implement rate limiting and throttling to detect abuse, and restrict API connections strictly to whitelisted sources.
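Two of those API controls — restricting calls to whitelisted sources and throttling per-client request rates — can be sketched together. This is a simplified illustration; the whitelist entries, the five-requests-per-second threshold, and the `allow_request` function are assumptions for the example, not figures from the advisory:

```python
import time
from collections import defaultdict, deque

# Hypothetical values for illustration only.
WHITELISTED_SOURCES = {"10.0.0.5", "10.0.0.6"}  # assumed internal gateways
MAX_REQUESTS = 5       # allowed calls per window (assumption)
WINDOW_SECONDS = 1.0   # sliding-window length

_request_log = defaultdict(deque)  # per-source timestamps of recent calls

def allow_request(source_ip, now=None):
    """Admit a call only from a whitelisted source within its rate budget."""
    if source_ip not in WHITELISTED_SOURCES:
        return False                      # not a whitelisted source
    now = time.monotonic() if now is None else now
    log = _request_log[source_ip]
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()                     # drop entries outside the window
    if len(log) >= MAX_REQUESTS:
        return False                      # throttled: budget exhausted
    log.append(now)
    return True
```

In production these checks would sit in an API gateway, paired with strong authentication tokens scoped on least-privilege principles; the sketch only shows the admission logic.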
On monitoring, the circular points all eligible regulated entities toward the Market Security Operations Centre (M-SOC) — established jointly by the National Stock Exchange (NSE) and BSE — which provides round-the-clock real-time threat detection across market digital infrastructure. Any regulated entity not yet onboarded with the M-SOC has been told to expedite the process given the heightened risks from AI-driven attacks. MIIs are required to run awareness and handholding programs, including workshops, to get entities integrated smoothly.
Risk assessments must now include comprehensive scenario-based testing that explicitly models AI tools as a threat vector — covering both internal and external risks in a regulated entity’s information technology environment. System hardening — disabling unnecessary services, removing default accounts, and enforcing Zero Trust Network Architecture (ZTNA) — is also mandated to shrink the available attack surface. Regulated entities must additionally maintain updated asset inventories and Software Bill of Materials (SBOM) for all critical applications, including open-source components.
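An SBOM is, at its simplest, a machine-readable inventory of every component inside an application, so that when a tool like Mythos surfaces a flaw in a library, affected systems can be found instantly. The sketch below builds a minimal record loosely modelled on CycloneDX-style fields; the application and component names are invented for illustration:

```python
import json

def build_sbom(app_name, components):
    """Assemble a minimal SBOM record listing an application's components."""
    return {
        "application": app_name,
        "components": [
            {"name": name, "version": version, "origin": origin}
            for name, version, origin in components
        ],
    }

# Invented example entries: one open-source library, one in-house library.
sbom = build_sbom("order-routing-service", [
    ("openssl", "3.0.13", "open-source"),
    ("internal-auth-lib", "2.4.1", "in-house"),
])
print(json.dumps(sbom, indent=2))
```

Real-world SBOMs follow standard formats such as CycloneDX or SPDX and are generated automatically from build pipelines rather than written by hand.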
The final directive looks further ahead. SEBI has told all regulated entities to consult their information technology committees on managing AI-led vulnerability risks and to draw up long-term plans for deploying AI themselves — both for detection and for what the circular calls “autonomous/agentic mitigation”. The regulator also called for recalibration of risk frameworks for AI-accelerated threats and AI-augmented transformation of security operations centres.
(Edited by Nardeep Singh Dahiya)

