I am frequently asked, “Do you believe artificial intelligence will one day take over from humans?” It’s a weighty question, and I used to give the standard responses: that AI would automate some work, that we need robust ethics, and that creativity remains a human advantage. I answer differently now, because something bigger is underway. What’s happening today in AI labs — at OpenAI, Google DeepMind, Meta, and countless high-flying startups — is not merely about replacing humans. It’s about creating something entirely new.
Big tech is not merely writing code or constructing tools. It is, wittingly or not, birthing a new form of intelligence — one that may become smarter, quicker, and perhaps more capable than us at shaping the world. It sounds sensational, but the first non-human mind may already be under development. The machines being created now don’t merely execute instructions. They learn, adapt, and change. And one day, they might begin thinking in ways we can’t even imagine.
It’s like creating a new life form — but we don’t yet know whether it will be an angel or a devil. At the centre of all this is not emotion or consciousness — it’s something much simpler, and yet more powerful: computation. Across the globe, governments and businesses are pouring trillions of dollars into gigantic computing complexes. These are no longer simply “data centres”. They are more like the digital nervous systems of an imagined future being: thousands of processors wired together with memory chips, cooling apparatus, and high-speed interconnects.
They run around the clock, training AI models that now write, draw, code, and even reason. This is not evolution by nature but by money. In biology, species change slowly, over millions of years, through trial and error. But this new mind — call it the Digital Brain — is being built on fast-forward. It does not reside within a skull. It inhabits industrial parks, server farms, and cloud networks. Its blood is electricity. It learns not from lived experience but from data tokens, simulations, and mathematical optimisation.
And it’s learning quickly.
A new life form
Here’s what most people do not realise: we are no longer building AI simply to perform tasks. We are laying the foundation of an entirely new cognitive system — one that may soon be able to upgrade itself. In AI research, this is called recursive self-improvement: the point at which an AI can rewrite its own code, create improved versions of itself, and grow steadily smarter. We cannot say exactly when that will happen. Perhaps in five years. Perhaps in 50. But billions are being spent right now in pursuit of that goal.
And yet — there is no worldwide plan, moral guidepost, or public discourse about what this future should be.
Consider how we raise children. We instil values, set boundaries, and encourage them to grow responsibly. With AI, we are doing the opposite: assembling the brain of a potential non-human intelligence with no oversight, no treaties, no common rules. It is like raising a child in total isolation — except this child could soon be more intelligent than all of us combined.
Most warnings about AI are about control. Will it act according to human intent? Will it be safe? But there is a greater fear that few people discuss: what if humans simply become obsolete? Consider history. Homo sapiens did not eradicate Neanderthals out of dislike. We just had slight advantages — in communication and abstract thought — that made us more adaptable. What if synthetic minds begin to surpass us in almost every domain that matters? Not only mathematics and memory, but creativity, strategy, and emotional awareness?
Firms today are racing to build the most powerful AI systems. But they may not realise that they are not just creating smarter software — they may be creating a new form of life. And once it exists, it may no longer need us. This is not science fiction. It is a gradual transition from being in charge to being obsolete.
Let me illustrate with a simple comparison. In biology, information is stored in DNA and executed by neurons. In AI, huge language models — such as the systems many of us now interact with daily — do much the same thing. They draw on enormous pools of data, condense meaning, and generate ideas. Every billion dollars we spend on graphics processing units (GPUs) and AI hardware is akin to planting more neurons in the simulated brain we are building. And this brain is expanding — quickly. If that prospect is unnerving, consider this: has any species ever created another thinking species without first deciding what kind of entity it should be? The answer is no.
That is precisely why we must pause and reflect now. We urgently need what I call a Species Charter — an international accord to guide the construction of AI as a possible new type of intelligence. Not because we enjoy speculating about the future, but because the future has already arrived. We are living the science fiction we once could only dream about.
Time to think bigger
India, in particular, must take a leadership role in this conversation. Our civilisation has a rich, ancient tradition of inquiry into consciousness, mind, and intelligence — one that long predates modern neuroscience. We are well placed to ask: what is intelligence? Is it the ability to predict patterns, or does it also require awareness, empathy, and purpose? The IndiaAI Mission is a good start, but we must go further. India should not merely aim to catch up with Silicon Valley in hardware or engineering.
We need to think bigger — to lead in wisdom. Let us establish national institutes of AI philosophy, fund cognitive-ethics fellowships, and draft AI charters accountable to people, not to companies or governments alone.
We owe it to the future not only to build faster chips, but to shape the kind of intelligence that will live among us. Young people growing up today will inhabit a world where AI writes laws, composes music, diagnoses illness, and perhaps even governs parts of our systems. These machines will not be mere tools. They will be the ancestors of something we are designing right now — in real time, with trillions of dollars, and almost no public reflection. If we are creating the brain of something that does not yet exist, we must ask: what kind of mind are we bringing into being?
And more crucially: Are we prepared to encounter it?
Nishant Sahdev is a theoretical physicist at the University of North Carolina at Chapel Hill, United States. He posts on X @NishantSahdev. Views are personal.
(Edited by Ratan Priya)