SubscriberWrites: A perspective on what makes generative AIs dangerous

The inability to grasp the problem effectively and build consensus to regulate it at the individual and societal levels is what makes generative AIs dangerous.

Representative Image | Pixabay

Thank you, dear subscribers; we are overwhelmed with your response.

Your Turn is a unique section from ThePrint featuring points of view from its subscribers. If you are a subscriber, have a point of view, please send it to us. If not, do subscribe here: https://theprint.in/subscribe/

What makes ChatGPT and its ilk, commonly referred to as generative artificial intelligence (or generative AI), dangerous?

Decades of movies about robots becoming sentient and destroying human life have captured our imagination. Even today, for most people, the scariest AI scenario is one where an artificial general intelligence (AGI) possesses a humanoid body and threatens humans. AI researchers might not be too concerned about the humanoid part of this scenario, but they are definitely concerned about AGIs turning against humans. In particular, they talk about the alignment problem – where AGIs do not share our motivations and consequently turn against us. While this might be a possibility in the future, there are more immediate problems with this technology.

One such problem is the proliferation of fake narratives. With these generative AIs, it will get harder to tell the difference between real news and fake news. ChatGPT can be used to generate articles that spread fake news. Midjourney can create any image from plain text – one that might look very real. Finally, deepfakes make it harder to tell whether, for instance, a video of Putin announcing the launch of a nuclear missile at Ukraine is authentic.

By leveraging the existing social media infrastructure, which aims to keep your attention on the platform by sending you things that interest you, the outputs of these generative AIs find a captive audience – one that might believe these fake and potentially dangerous fabrications without much critical inspection. With a growing number of people getting their news and updates from social media, the end result of this process might be that people gather only a one-sided perspective on things and never come across differing views. This would naturally lead to a polarised society where forming consensus becomes harder.

To be clear, this is already an existing problem for which we don’t yet have a viable solution. With generative AIs, this problem will get turbocharged. Unfortunately, a solution to this is not on the horizon. This is because, amongst other things, the problem is hard to see and appreciate. 

Consider, for instance, the frequently made comparison between nuclear fission technology and AI. With the atomic bomb, developed using nuclear fission technology, it was much easier to see its destructive capabilities. Anyone who sees footage of an exploding atomic bomb would agree that it is dangerous. Similarly, with AGI, the dangers might be easy to see. If we do get close to developing an AGI, it is not unreasonable to expect the people working on it to agree on the dangers and tread cautiously. Also, once AGI is developed, it seems plausible that some sort of global consensus and regulation on its use can be achieved.

That cannot be said of the problems posed by generative AI combined with social media.

Despite growing awareness of the dangers of social media, both at the societal level and at the individual, psychological level, its usage has only increased over time, with little regulation at either level. With generative AIs, these dangers will only be exacerbated. Yet people do not see these dangers the way they see those of an atomic bomb.

To be frank, at the societal level, generative AI and social media combined do not have the same destructive capability as an atomic bomb. Further, at the individual level, it is hard to quantify the harm done by consuming only information that aligns with one's beliefs. However, as explained above, generative AIs and social media use, over time, have the ability to destabilise societies by dividing people and undermining democracy.

None of these are insignificant problems. Unfortunately, they rarely hit us emotionally when we merely read about them – not unless we experience them personally. Therefore, while global consensus was reached to regulate nuclear fission technology for peaceful purposes and ensure its destructive capabilities were not let loose, such consensus and regulation seem out of reach for generative AI and social media at the moment – unless we find a way to communicate the seriousness of the problem so that it hits home emotionally, or we reach a stage where we experience a society destabilised by division.

This inability to grasp and communicate the problem effectively, and to build consensus to regulate it at both the individual and societal levels, is what makes generative AIs dangerous.

These pieces are being published as they have been received – they have not been edited/fact-checked by ThePrint.