There are moments when a technology does not merely advance the frontier — it erases it. The emergence of Claude Mythos, Anthropic’s new artificial intelligence model, is one such moment. The fact that the United States Treasury Secretary and the Federal Reserve Chair felt it necessary to convene an emergency, closed-door meeting with the chief executives of America’s largest banks ought to concentrate minds — in Washington, certainly, but no less urgently in New Delhi.
Mythos is, on standard benchmarks for coding, logical reasoning, and mathematical problem-solving, the most capable AI model yet built. What has triggered alarm is something rather more consequential than benchmark performance: Mythos can autonomously identify, chain together, and exploit software vulnerabilities at a speed and scale no human team of hackers can match. In controlled testing, it uncovered thousands of previously unknown security flaws across every major operating system and browser — including zero-day vulnerabilities that had lain undetected for over two decades. It did not merely find them. It wrote the exploit code. Autonomously.
Anthropic has, to its considerable credit, chosen not to release the model publicly. Access is being extended to roughly 40 trusted institutional partners — Microsoft, Apple, Google, The Linux Foundation among them — specifically to enable patching of discovered vulnerabilities before wider deployment. This is responsible behaviour. It is also, in the long arc of technological history, a temporary condition. What one company withholds today, state-sponsored programmes in Beijing and Moscow will replicate tomorrow, with fewer scruples about who receives the keys. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell are not worried about Anthropic. They are worried about leakage, reverse engineering, and the near-certainty that equivalent offensive AI capability will, within a foreseeable horizon, be deployed by actors who will use it without restraint against a global financial system never designed to withstand autonomous AI-driven attack.
India’s self-inflicted exposure
India would be unwise to treat this as a distant American problem. The Unified Payments Interface, or UPI, processes over 20 billion transactions a month. Aadhaar underpins welfare delivery and financial inclusion for hundreds of millions of citizens. NPCI’s switching infrastructure is the spine of the digital economy. This is a remarkable achievement — and a target surface of extraordinary breadth, one that the Government of India has managed with a degree of complacency that borders on recklessness.
The Zoho episode is instructive and, in retrospect, deeply embarrassing. In the name of digital sovereignty and the Atmanirbhar Bharat narrative, the government migrated 1.2 million central government employee email accounts to Zoho’s cloud platform following a tender the company won in 2023. Cabinet ministers — from IT to Home Affairs to Education — publicly endorsed the switch on social media as though it were a patriotic act. The Ministry of Education issued a formal office memorandum directing officials to adopt Zoho’s entire suite. Arattai, Zoho’s messaging app, was promoted as India’s WhatsApp.
What the cheerleaders did not dwell upon was the platform’s security record. By 2025, SQL injection flaws in Zoho Analytics were surfacing with top severity ratings from global trackers. Arattai lacked end-to-end encryption at the very moment ministers were urging officials to adopt it for sensitive government communications. In February 2026, a North Korean state-sponsored group released tools exploiting a backdoor in Zoho WorkDrive to deploy malware — the very platform Indian government employees were being directed to use for official documents. When 16 billion login records were exposed globally in June 2025, the government finally ordered migration to a new mail.gov.in domain. The horse had bolted. The stable door was shut.
What India must do — starting now
India’s response to Claude Mythos must begin not with a committee, but with a phone call. The government should formally approach Anthropic for inclusion in its trusted-partner programme — the same access extended to Microsoft, Apple, and Google. The rationale is self-evident: no country in the world has built a digital public infrastructure of comparable scale or systemic importance. UPI, Aadhaar, NPCI, the government email stack — these are not corporate assets. They are sovereign infrastructure serving over a billion citizens. If Mythos can find vulnerabilities in Windows and Chrome that went undetected for decades, it can find vulnerabilities in India’s payments and identity architecture before an adversary does. That is precisely the point. Seeking trusted-partner access would be unprecedented — no sovereign government has yet been admitted to the programme — but India’s case is categorically different from any corporate partner’s, and it should make that argument without hesitation.
Once the vulnerable sectors are mapped and patched, the regulatory architecture must follow. Offensive AI must be recognised as a distinct legal category in Indian law — a chatbot and an autonomous vulnerability-exploitation engine are not the same instrument, and no framework that treats them identically deserves to be taken seriously. Any AI system crossing defined offensive thresholds must be required to notify national cybersecurity authorities before deployment. The RBI and the Indian Computer Emergency Response Team (CERT-In) must mandate AI-specific stress tests for systemically important financial institutions. And no government technology procurement — however draped in the language of ‘swadeshi’ — should proceed without an independent, published security audit by a CERT-In–empanelled body. The Zoho lesson must be institutionalised, not merely regretted.
India must also use its international weight. As a G20 member with proven convening capacity, India should champion a multilateral framework on offensive AI governance. Cyber threats are jurisdictionally blind. An AI-generated attack on NPCI’s switching layer cares nothing for the nationality of the server it originates from. A governance architecture that mistakes patriotic branding for security assurance is no architecture at all.
Anthropic has, for the moment, acted responsibly. The question is whether governments can build — before the next actor with equivalent capability chooses not to — the regulatory scaffolding that makes responsible behaviour the norm rather than the exception. For India, with its 20 billion monthly UPI transactions, its billion-plus Aadhaar registrations, and a political class that has shown it will prioritise optics over security, the answer is not academic. It is a matter of national resilience.
KBS Sidhu is a former IAS officer who retired as Special Chief Secretary, Punjab. He tweets @kbssidhu1961. Views are personal.
(Edited by Aamaan Alam Khan)