
Want AI to cure cancer, not kill humanity? Start by fixing academia

Disciplinary silos are hindering a holistic understanding of AI and responsible innovation.

What if we asked an AI to cure cancer and it eliminated cancer by eliminating humans? This thought experiment, though extreme, is often used to highlight the risks of misaligned AI systems and the unintended consequences of poorly framed objectives, and it makes the possible implications of AI for society feel very real. It also reveals a deep challenge.

We are entering what many believe could be the most consequential decade of human history. The decisions we make, or fail to make, about AI will shape that future. Across academia, disciplines are rapidly incorporating AI and its potential capabilities into research and innovation, as well as teaching curricula. Some focus predominantly on the opportunities, others on the risks. Yet, despite good intentions and much talk of the interdisciplinary thinking needed to address a rapidly changing, complex world, academic silos persist.

As we design and develop AI, we have to tackle a challenge that is hiding in plain sight: disciplinary silos are hindering a holistic understanding of AI and responsible innovation.

Why disciplinary silos in AI matter more than we think

Disciplinary silos create the conditions for disciplinary blindness, which occurs when an individual discipline can achieve only a partial view of risks. A discipline can address risk in one area while remaining blind to the associated and confounding risks considered in another. Take medicine and climate science, both of which engage with AI. In medicine, AI is accelerating breakthroughs in diagnostics, drug and vaccine discovery and laboratory testing, bringing insights with the potential to improve and save millions of lives globally. In climate science, meanwhile, the environmental impact of AI and its expansion is being examined, with a focus on carbon dioxide emissions, water and electricity consumption. While the advances in medicine could save millions of lives, the consequences of the environmental impact may affect billions.

A second challenge arises because disciplines hold different, even contradictory, notions of risk and of what counts as desirable. AI, for example, can automate repetitive tasks highly effectively. In some disciplines, such as education, its potential for rapid marking and feedback could free educators to focus on the aspects of teaching that demand creativity and nuance. Similarly, in the creative industries, production is being transformed by AI tools that accelerate and scale certain tasks. Yet it is possible to argue that the very repetitive tasks being outsourced and automated are a key source of human creativity and expertise, which develops through trial and error, iteration and reflection. Herein lies the contradiction: the capability that is advancing one discipline may be undermining another. Losing iterative, repetitive tasks may remove the path to developing creativity and expertise.

What is the way forward?

Interdisciplinary thinking is routinely cited as the way to bring together different forms of knowledge to solve complex challenges and create better outcomes. What is far less common is to confront disciplinary, or even intradisciplinary, conflicts head-on. These conflicts are not simply academic; they shape real-world outcomes.

Return to the cure-cancer thought experiment. The answer to mitigating misalignment in an AI tool lies, in part, in how the prompt is crafted. Instead of prompting an AI to ‘cure cancer’, a prompt such as ‘design and implement a set of measures that will enhance and improve human life by eliminating cancer’ may provide the nuance needed. It sounds beguilingly simple. Yet we often fail to frame tasks in ways that reflect multiple disciplinary perspectives because we do not allow a clash of disciplines.

History is peppered with examples of failures to think across disciplinary boundaries in these ways. Think of GM crops and diesel cars. Both were technological breakthroughs. Yet with GM crops there was a failure to anticipate the risk of losing public support, a risk that would have been evident to a social scientist or analyst of public discourse. With diesel cars there was a failure to consider the health impacts, which were already apparent in the existing evidence on respiratory illnesses. Broader disciplinary insight could have made these failures avoidable.

We need an approach to developing prompts that encompasses these disciplinary contradictions. A directive to ‘create personalised medical solutions while minimising the effect on the environment’, for example, is one possible prompt that incorporates the clash between disciplinary perspectives. It is, of course, an extremely simple example: as more complexity is encountered, more nuance is required and more disciplinary debate becomes necessary.
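As a purely illustrative sketch, such a multi-constraint directive can be thought of as a single goal composed with explicit constraints drawn from several disciplines. The short Python example below uses hypothetical discipline names, constraint wording and a made-up compose_directive helper; it only assembles a prompt string and does not call any AI system.

```python
# Illustrative sketch only: composing one AI directive that carries explicit
# constraints from several disciplines rather than a single imperative.
# The discipline names and constraint wording below are hypothetical.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class DisciplinaryConstraint:
    discipline: str   # e.g. "medicine", "climate science"
    requirement: str  # the concern that discipline wants respected


def compose_directive(goal: str, constraints: list[DisciplinaryConstraint]) -> str:
    """Frame one goal so that each discipline's concern appears explicitly."""
    lines = [f"Primary goal: {goal}",
             "Subject to the following cross-disciplinary constraints:"]
    for c in constraints:
        lines.append(f"- ({c.discipline}) {c.requirement}")
    return "\n".join(lines)


if __name__ == "__main__":
    directive = compose_directive(
        goal="Create personalised medical solutions",
        constraints=[
            DisciplinaryConstraint("medicine", "demonstrably improve patient outcomes"),
            DisciplinaryConstraint("climate science",
                                   "minimise carbon emissions, water and electricity use"),
            DisciplinaryConstraint("social science",
                                   "maintain public trust and informed consent"),
        ],
    )
    print(directive)
```

The design point is simply that each constraint forces a disciplinary clash to be surfaced and negotiated before the task is handed to a machine.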

However, the single-discipline directive is still deeply embedded in our academic disciplines. ‘Find a cure for cancer’ is an imperative that drives many scholars in medicine. ‘Reduce human effort’ drives many scholars in engineering. A few hundred metres away on the same campuses, another scholar is reaching a conclusion that points in the opposite direction. They will publish in different journals using different lexicons, present at different conferences and sit in different echo chambers that reinforce their own discipline’s approach and perspective. We can and should do better.

Three things we need in order to do better

1. Spaces for deep and tangible cross-disciplinary, or even counter-disciplinary, debate and eventual negotiation. This cannot happen only at the edges of each discipline; it needs to happen at the heart of how each discipline formulates the questions it addresses.

2. A culture of challenge across disciplines, not just collaboration. That challenge can be framed as ultimately being about finding solutions, but it needs to retain the sharp edges of mutual critique.

3. New forms of expertise to orchestrate, manage and facilitate the debate. Disciplines have different vocabularies and value systems. A word like ‘efficient’ may be laden with positive value in one discipline (for instance, engineering) but be much more problematic in another (parts of the humanities, for instance). We will need new types of academic actors who are fluent in these different languages and can translate across disciplinary cultures.

This is an urgent need. Tasks are being defined for new AI capabilities now, and more often than not that is happening in disciplinary silos. The notion that humanity could be eliminated in order to eliminate cancer may seem far-fetched. However, we are in danger of doing just that if we continue to insulate each discipline against the implicit or, at times, explicit critique that comes from a different silo. The stakes are high. The time is now.

The views expressed in this article are those of the author alone and not those of the World Economic Forum. Read the original article here.
