Until recently, AI did exactly what humans expected it to do. It followed rules. It searched faster. It calculated deeper. It impressed, but it rarely astonished.
Then, twice in the last decade, something different happened.
Twice, the people building AI systems paused and said: we did not expect this. Not in the polite academic sense, but in the deeper, unsettling way that forces you to rethink assumptions. These were not incremental improvements. They were conceptual shocks. Together, they define the psychological beginning of the current AI era.
The first is often called the “cat moment.” The second is remembered simply as Move 37.
The first showed that machines can discover meaning without being told what meaning is.
The second showed that machines can discover strategies humans never imagined.
The “Cat” Moment
The cat moment happened quietly, almost accidentally. In 2012, researchers at Google were experimenting with very large neural networks trained on unlabeled data. Instead of feeding the system carefully annotated images, they gave it millions of random frames from YouTube videos and asked it to learn compact representations. No labels. No hints. No definitions. Just exposure.
When the researchers later looked inside the system, they noticed something unexpected. One internal unit lit up reliably in response to images that shared a single feature: cats. Not a particular breed or posture, but the general idea of a cat. No one had explained what a cat was. No features had been labeled or pointed out. And yet, somewhere inside this vast numerical machinery, a stable pattern had formed.
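What "learning representations and then looking inside" means can be sketched in a few lines of Python. The snippet below is a deliberately tiny stand-in, not the Google system: a one-hidden-layer autoencoder trained on synthetic vectors, followed by a probe step that asks which hidden unit responds most strongly to inputs sharing a common feature. All sizes, data, and names here are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for unlabeled video frames: each "image" is a flat vector.
# (The real 2012 system used about 10 million YouTube frames and a network
# with around a billion connections; everything here is deliberately tiny.)
n_samples, n_pixels, n_hidden = 2000, 64, 16
data = rng.normal(size=(n_samples, n_pixels))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One-hidden-layer autoencoder: compress each input to n_hidden units,
# then try to reconstruct it. No labels are involved anywhere.
W_enc = rng.normal(scale=0.1, size=(n_pixels, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_pixels))
lr = 0.01

for _ in range(200):
    hidden = sigmoid(data @ W_enc)          # internal representation
    recon = hidden @ W_dec                  # attempted reconstruction
    err = recon - data
    # Plain gradient descent on the mean squared reconstruction error.
    grad_dec = hidden.T @ err / n_samples
    grad_hidden = (err @ W_dec.T) * hidden * (1.0 - hidden)
    grad_enc = data.T @ grad_hidden / n_samples
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# "Looking inside" after training: feed a probe set of inputs that share
# one common feature (here, a constant offset standing in for "cat-ness")
# and ask which hidden unit responds most strongly on average.
probe = rng.normal(size=(100, n_pixels)) + 2.0
mean_activation = sigmoid(probe @ W_enc).mean(axis=0)
print("most responsive hidden unit:", int(mean_activation.argmax()))
```

The workflow, not the scale, is the point: no labels go in, and only afterwards do the researchers probe the hidden units to see what, if anything, each has become selective for.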
The moment felt familiar in an unexpected way. It resembled how neuroscientists identify regions of the human brain that respond selectively to faces, language, or movement. Not because the machine had a brain, but because, for the first time, researchers were observing a machine develop internal structures that could be meaningfully interpreted after the fact.
This was not a clever trick or a statistical curiosity. It marked a deeper shift in how learning and abstraction were understood. For decades, AI had been built on the assumption that meaning must be imposed: symbols defined, categories specified, concepts supplied by humans and merely manipulated by machines. The cat neuron quietly inverted that logic. It suggested that abstraction could emerge from exposure alone, that a system immersed in raw experience could carve the world into meaningful patterns without supervision.
The shock was not really about cats. It was about cognition itself. If a neural network could form concepts without being taught, then intelligence no longer required explicit rules. Perception could be learned, and the boundary between data and meaning had begun to blur.
At the time, this insight passed with little fanfare. Only in retrospect did it reveal itself as the first crack in a long-held certainty: that machines could reflect human understanding, but never generate it.
Move 37
Four years later, the second crack arrived, this time in public view.
In March 2016, DeepMind’s AlphaGo faced Lee Sedol, one of the greatest Go players in history. The match itself was historic, but the real moment came during the second game. On move 37, AlphaGo placed a stone in a position that made no sense to human experts. Commentators were baffled. Professionals watching the game assumed it was a mistake.
It was not.
Move 37 turned out to be brilliant. Subtle. Strategically profound. A move that no human school of Go had ever seriously considered, and one that shifted the trajectory of the game irreversibly.
What made this moment different from earlier AI victories was not that the machine won. Chess programs had been defeating world champions for years. What stunned observers was that AlphaGo did not win by exhausting calculation alone. It won by inventing a move that violated human intuition.
This was not a faster version of human reasoning. It was something orthogonal.
AlphaGo had not been programmed with Go wisdom. Its neural networks were first trained on records of human expert games, then sharpened through millions of games of self-play, with reinforcement learning tuning how it evaluated positions and a tree search choosing among candidate moves. In doing so, it explored parts of the strategic space humans had never systematically visited. Move 37 emerged not from tradition or theory, but from optimization unconstrained by human bias.
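The mechanics of self-play can be illustrated with a toy sketch in Python. The code below shows the principle only, not AlphaGo's method: tabular value learning on single-pile Nim, with no neural networks and no Monte Carlo tree search, and every name and parameter invented for the example.

```python
import random
from collections import defaultdict

# Toy self-play: single-pile Nim. Players alternately remove 1-3 stones;
# whoever takes the last stone wins. A table of values stands in for
# AlphaGo's neural networks; there is no tree search here.
N_STONES = 21
ACTIONS = (1, 2, 3)
ALPHA, EPSILON, EPISODES = 0.5, 0.1, 50_000

Q = defaultdict(float)  # Q[(stones_left, action)] = value for the mover

def choose(stones, explore=True):
    legal = [a for a in ACTIONS if a <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(legal)        # occasional exploration
    return max(legal, key=lambda a: Q[(stones, a)])

for _ in range(EPISODES):
    stones, history = N_STONES, []
    while stones > 0:                      # both sides share one policy
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # The player who made the last move wins (+1); the other loses (-1).
    # Walk backwards through the game, alternating the sign between movers.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

# Greedy play after training: the learned moves typically leave the
# opponent a multiple of four stones, the classic winning strategy.
for s in (21, 10, 7, 5):
    print(s, "->", choose(s, explore=False))
```

Run long enough, the greedy policy tends to converge on the "leave a multiple of four" rule without ever being told it. Strategy emerging from feedback against oneself, rather than from instruction, is Move 37's principle in miniature.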
That realization landed heavily on the Go community. Professionals later admitted that the match permanently changed how the game is taught and understood. But its implications went far beyond board games.
The unsettling message of Move 37 was this: expertise does not exhaust possibility.
Humans had been playing Go for over two millennia. Entire cultures had treated it as a near-perfect expression of intuition and strategic depth. And yet, a machine, given enough self-play and feedback, found ideas that centuries of human exploration had missed.
Taken together, the cat moment and Move 37 revealed two different dimensions of surprise.
The cat moment showed that machines can form internal representations of the world without being told what to look for. Meaning, once thought to be uniquely human and symbolic, could emerge from exposure and compression.
Move 37 showed that machines can discover novelty within structured domains, not by creativity in the human sense, but by exploring solution spaces we implicitly ignore.
One was about perception. The other was about strategy.
One unsettled our theories of learning. The other unsettled our confidence in expertise.
And in the surprise they produced, it was easy to mistake a powerful statistical mirror for something like a mind.
But moments of surprise can also mislead, especially when they are retold in isolation.
Tip of the Iceberg
The cat moment and Move 37 were the visible tip of the iceberg, made possible by years of largely invisible technical progress beneath the surface.
It is tempting to read these episodes as sudden breakthroughs, but doing so misses the long, cumulative journey that made them possible. Between them lay thousands of incremental advances in neural architectures, training methods, optimization techniques, hardware acceleration, and data pipelines. Most were unglamorous, rarely noticed outside specialist circles, and absolutely essential.
These submerged layers did not announce themselves with spectacle, but they quietly made the spectacle possible. What appeared as a leap was, in reality, a threshold crossed after sustained accumulation.
In that sense, the aha moments belong less to the machines than to us, the observers. The systems did not awaken or acquire intent. They simply became capable enough, and unfamiliar enough, to force a revision of our assumptions. What changed was not the nature of software, but the scale and subtlety at which it could operate.
The cat neuron and Move 37 mark where that long journey became visible. The real story lies in everything that made them inevitable.
