Thank you, dear subscribers; we are overwhelmed by your response.
Your Turn is a unique section from ThePrint featuring points of view from its subscribers. If you are a subscriber and have a point of view, please send it to us. If not, do subscribe here: https://theprint.in/subscribe/
Have you tried using AI to learn a new topic? Better still, have you tried debating a half-formed idea with it before you were ready to commit to it? If not, bookmark this article. Try both. Then come back.
That experience is quietly transformative. It changes how people acquire information, test ideas, and move from uncertainty to clarity. It is therefore no surprise that AI fluency is rapidly becoming a baseline expectation in professional life—and that this shift has already reached classrooms, whether institutions acknowledge it or not.
Many educators see this shift as troubling and respond by rejecting AI outright. In doing so, they often conflate legitimate concerns about learning quality with the tool itself. Attempts to stop AI use have largely failed: bans do not prevent it; they push it underground, erode trust, and forfeit the opportunity to use AI where it genuinely helps.
This makes it worth asking a simpler question: what is AI actually good at, and where does it fall short?
In delivering education, AI is particularly strong at explanation and self-directed learning. It can clarify concepts, adapt to different levels, offer examples, and respond immediately without fatigue. For much of modern education, this function was treated as the core of teaching, largely because explanation was scarce. Teachers therefore became the primary channel for content delivery.
But explanation is always a proxy, not the essence of education. Education is what turns exposure into competence through guidance, judgment, and certification. As explanation becomes abundant, it can appear that teachers are becoming redundant. What is actually disrupted, however, is the long-standing habit of treating instruction as the organising centre of education. To see what truly changes, the functions that have long been bundled together—instruction, practice, feedback, judgment, motivation, and certification—need to be separated and examined in their own right.
Learning starts with instruction, guided practice, and feedback: explaining concepts, demonstrating methods, sequencing difficulty, designing practice, and pointing out what is missing or unclear. This work has historically been the most visible and time-consuming part of teaching, and it is precisely where AI now performs well. AI is patient, adaptive, always available, and increasingly capable of generating exercises and providing rapid, first-pass feedback. Where it still falls short is not ability but prioritisation and timing: knowing which effort matters most now and when struggle should be allowed to do its work.
Beyond this lies a different kind of work. Once basic competence is established, the focus shifts from correctness to depth, coherence, prioritisation, originality, and readiness under constraint—the difference between work that is merely acceptable and work that is genuinely strong. This stage also requires calibrating effort over time: knowing when to stop consuming information and start producing it, when to rebuild fundamentals, and when to take intellectual risks. These judgments are contextual and consequential.
AI can instruct, generate practice, and even simulate evaluation, but it does not exercise judgment. It does not reliably feel stakes, weigh trade-offs, or calibrate risk; nor can it prioritise what matters, sequence effort over time, judge readiness under constraint, or distinguish between work that is merely acceptable and work that must stand out. It also falls short on the psychological, social, and normative dimensions of learning—managing confidence and anxiety, modelling intellectual seriousness, and socialising students into standards through presence and example. AI can describe norms, but it cannot interpret a learner’s state, exercise situational judgment, or confer legitimacy. That remains irreducibly human.
Finally, there is certification and legitimacy. Teachers and institutions signal to the outside world that a learner has crossed a threshold of competence. Grades, degrees, and recommendations are not just records of learning; they are social claims that others rely on. AI cannot yet credibly issue these signals on its own.
Taken together, these limits clarify the division of labour. AI absorbs instruction, practice generation, and first-order feedback. What remains with the teacher are judgment, calibration, motivation, norm-setting, and certification. This is not a shrinking of the teacher’s role, but a concentration of it around the functions that matter most when learning must be trusted rather than merely completed.
Thought experiment for the future
Suppose, as a thought experiment, that instruction is largely handled by AI. Explanations, examples, repetition, and early clarification happen outside the classroom, at the learner’s pace. What remains for teachers is not delivery, but judgment.
In such a model, classroom time shifts from covering material to testing whether it has been understood. Teachers surface learning through discussion, debate, presentation, and application under constraint. Assessment becomes continuous rather than episodic, based on observing thinking in real time rather than grading artifacts produced at a distance. Calibration becomes explicit: deciding when students should rebuild fundamentals, practise under pressure, or take intellectual risks. Classrooms also become the primary site where standards are enforced—where students learn what counts as serious work and where shortcuts stop being acceptable.
This does not make teaching easier. It removes the most scalable part of the job and concentrates responsibility around the least scalable parts. Judgment, discussion, and evaluation do not scale the way lectures do. A teacher can explain to hundreds; they can meaningfully evaluate only a few dozen.
The implication is not that teachers become unnecessary but that the system's constraint shifts. As instruction becomes abundant, human judgment becomes scarce. A serious AI-integrated model may therefore require smaller classes and, potentially, more teachers rather than fewer, though with different responsibilities: fewer instructors focused on explanation, and more educators focused on assessment, calibration, and the maintenance of standards.
A word of caution
This model also assumes maturity. Autonomous instruction requires discipline, resilience, and foresight—traits that develop over time. Applied too early, it risks serving the self-directed learner while leaving behind those who still need the teacher not only to judge learning, but to sustain it.
These pieces are being published as they have been received – they have not been edited/fact-checked by ThePrint.
