Thank you, dear subscribers, we are overwhelmed by your response.
Your Turn is a unique section from ThePrint featuring points of view from its subscribers. If you are a subscriber and have a point of view, please send it to us. If not, do subscribe here: https://theprint.in/subscribe/
The innocent office email exchange
It begins inconspicuously, like a normal last-minute adjustment to an earlier calendar invite. Along with it comes a document shared with a note that says, ‘Sharing a doc so we’re aligned.’ Small negotiations that keep the business moving. So far, nothing unusual.
The recipient responds because it feels reasonable to respond. A delay would look unhelpful because the sender appears legitimate and familiar, and the matter seems urgent. Nothing has gone wrong. There is no breach. No system failure. No red flag blinking on a dashboard. There is only work – happening at speed, under pressure, with the quiet assumption that cooperation is safer than caution.
The incident begins socially, not technically
Cyber incidents usually begin long before a system is touched. Smart hackers rely on coordination, cooperation, compliance, or sheer courtesy, rather than beginning with malicious code or sophisticated exploits. They make a request in good faith. A response follows because that is how work is expected to function. Risk enters the moment cooperation becomes automatic, not at the point of intrusion. When speed is rewarded, when responsiveness signals competence, and when questioning a request feels like friction rather than diligence, the conditions for failure are set.
By the time a system is accessed or data is exposed, the decisive step has often already occurred quietly, socially, and without resistance.
What drives compliance is more often organisational psychology than technical ignorance
Let us understand why this happens. There is an invisible social mechanics at play that typically discourages people from saying ‘no’, especially when the request comes from a customer or someone in a position of authority.
Asking questions signals a lack of alignment, slowness, or insecurity. Hesitation is often interpreted as inefficiency, while compliance is read as professionalism. Declining a request disrupts momentum, introduces friction, and requires justification.
The fear of appearing incompetent and the perceived social cost of saying ‘no’ make it far easier to say ‘yes’. The context creates legitimacy, and that social acceptance creates room for technical compromise.
Nobody behaves irrationally. Nobody thinks they are being careless. Every step feels explainable in isolation.
The structure itself enables its own destruction
Organisations audit systems, not conversations. They discipline errors, not environments. They log actions, not pressures, thereby individualising failure, erasing context, and preserving the fiction that choice was free and unconstrained.
Most enterprise systems are designed to record what was done, who did it, when it was done, from where, and using which credentials. They are not designed to record why it felt necessary, what fear preceded it, what threat, ambiguity, or power imbalance was present, and what consequences the person was trying to avoid.
So the record shows: Employee X accessed File Y at 11:42 pm. It does not show: Employee X had just been told their role was ‘under review’, was informally asked for data by a superior, and was afraid that refusal would be noted.
When something goes wrong, investigations move outwards from the log, not inwards from the human. An action is identified, a policy violation is cited, and a responsibility is assigned. The pressure gradient that made the action feel inevitable remains invisible.
This is why good employees who complied earlier appear reckless later. Mistakes look malicious in hindsight, hardening ‘insider threat’ narratives.
This allows institutions to say, ‘The system worked. The human failed’, even when the system quietly created the conditions for that failure.
What we misunderstand about risk
There is no checklist here, no definitive to-do list of what organisations must do next. But there is no denying that in fearing technical sophistication, we overlook social predictability.
Organisations tend to see danger as something external and advanced: zero-day exploits, sophisticated malware, elite foreign hackers. This fear is comfortable because it externalises the threat, justifies expensive tools, and suggests that only specialists could have prevented it. So when incidents occur, organisations ask, ‘Which vulnerability was exploited? Which system failed? Which tool did we lack?’ The assumption is that damage requires brilliance.
But damage often comes from social predictability, which we routinely overlook. Human behaviour is remarkably consistent under pressure. People comply with authority, rush under deadlines, avoid conflict, and fear reputational harm more than abstract policy.
Attackers don’t need genius. They need insight into incentives, fear points, organisational culture, and informal power structures. Once those are mapped, outcomes become almost deterministic. The attacker isn’t guessing. They’re triggering a known response.
The question, then, is not whether people behaved wrongly, but whether modern organisations have made it unreasonable to behave otherwise.
If risk increasingly emerges from predictable human responses, we may need to ask a harder question than how systems were breached: what kinds of behaviour do our institutions reward, and which do they quietly punish?
Nitish Bhushan writes on technology, relationships, and moral dilemmas, examining how human behaviour shapes consequences.
These pieces are being published as they have been received – they have not been edited/fact-checked by ThePrint.
