
2024 will be the year of AI. Here’s what to expect

Wherever the dust settles on the debates, one hopes that organisations will be mindful of the ethical implications of these technologies and deploy them responsibly.


Artificial intelligence had a breakout year in 2023. Everyone had it on their minds, be it titans of industry like Nandan Nilekani and Mukesh Ambani or political leaders like US President Joe Biden and Prime Minister Narendra Modi. Large language models like OpenAI’s ChatGPT became household names and were then hit by multiple lawsuits. There were several run-ins with the damaging potential of deepfakes, and countries scrambled to rethink their regulatory and industrial policies around these technologies. If 2023 was the year that sparked these debates, 2024 is when many of them will be settled.

Smaller or larger AI models?

There are two camps on the trajectory that AI development should take in India. The first, represented by Nilekani, believes that a resource-scarce country like ours should develop and encourage smaller models for particular use cases, instead of larger ones. Smaller models assist in specific tasks. Conversely, larger models like OpenAI’s ChatGPT can engage in more complex problem-solving and perform a range of real-world tasks.

Whether a model is small or large depends on its number of parameters. Parameters are the internal variables that a model adjusts during training and then uses to make predictions. Researchers have found that an increased number of parameters can yield capabilities that smaller models do not have. At the same time, larger models are expensive and complicated to train and develop. One facet of the cost is computing power – it takes very powerful computers to train larger models. Illustratively, GPT-3 was built on the fifth most powerful supercomputer globally. At present, India does not have a single supercomputer in the top 100 supercomputers in the world, and only two in the top 500. Presumably, this is why Nilekani argues that smaller, task-specific models may be easier to develop.
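
To put these sizes in perspective, here is a rough, illustrative calculation (not from the article) using a common back-of-the-envelope formula for transformer language models, which approximates the parameter count as 12 × layers × (embedding width)². The GPT-3 figures below (96 layers, an embedding width of 12,288, roughly 175 billion parameters) are publicly reported; the "small model" numbers are assumed purely for comparison.

# Back-of-the-envelope parameter counts for transformer language models.
# Approximation: params ~ 12 * n_layers * d_model^2
# (attention + feed-forward weights per block; embeddings and biases ignored).

def approx_params(n_layers: int, d_model: int) -> int:
    """Rough parameter count for a decoder-only transformer."""
    return 12 * n_layers * d_model ** 2

# A small, task-specific model (sizes assumed here purely for illustration).
small = approx_params(n_layers=12, d_model=768)

# A GPT-3-scale model (96 layers and 12,288-wide embeddings are publicly reported).
large = approx_params(n_layers=96, d_model=12_288)

print(f"small model: ~{small / 1e6:.0f} million parameters")    # ~85 million
print(f"large model: ~{large / 1e9:.0f} billion parameters")    # ~174 billion
print(f"ratio: ~{large / small:.0f}x more parameters to train") # ~2048x

The gap in training cost is even larger than the parameter ratio suggests, since bigger models are also trained on far more data, which is why access to supercomputing hardware becomes the deciding constraint.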

On the other hand, Reliance Industries chairman Mukesh Ambani believes that India must forge ahead and build its own large language models trained on Indian languages. His firm has partnered with NVIDIA, arguably the world’s leading AI chip producer, to execute such a project. The California-based tech giant agreed to equip Jio with supercomputing hardware as well as design frameworks for models. Other reports indicate that the Indian Institute of Technology, Bombay is also part of an effort to develop an Indian-language-centric large language model with Reliance. Over the course of 2024, we shall see which camp ends up on the winning side. Wherever the dust settles on this debate, one hopes that organisations developing AI capabilities will be mindful of the ethical implications of these technologies and deploy them responsibly.


Lawsuits against OpenAI

Everyone is suing generative AI companies. Author George RR Martin, computer scientists, and the New York Times all filed lawsuits against OpenAI, alleging that it trained its AI models on their works without authorisation. One expert suggests that entities like the New York Times may be looking for an ongoing royalty payment from AI companies. However, such expectations are not going to pan out. Royalties typically accrue on a per-use basis. For example, in the context of music streaming, an artist gets some money every time their song is played by a user. However, generative AI does not work like that.

Typically, a user gives a prompt and there is almost no way to discern what sources were used by the AI system to give a response. There are instances where generative AI models like ChatGPT reproduce training data verbatim, but such cases are rare and practically impossible to monitor. It is also likely, down the road, that AI companies will remove unauthorised proprietary data from their training datasets. Thus, it is more than likely that the lawsuits against these companies will culminate in one-time settlements.


Deepfakes will be everywhere

In 2023, deepfakes were prevalent but not omnipresent. One report indicated a fivefold increase in deepfakes in 2023 compared to 2019. In India, the most prominent deepfake incident pertained to actress Rashmika Mandanna, whose face was superimposed on the body of a social media influencer. After her, multiple other actresses found themselves targeted by the technology. However, these were still isolated events.

In 2024, we can expect deepfakes to be everywhere. Elections in India and the United States will drive their prevalence. In July 2023, a political action committee supporting the potential Republican presidential candidate Ron DeSantis released an ad which used his opponent Donald Trump’s AI-generated voice. A recent India Today sting uncovered a potential industry of service providers in India willing to create deepfakes to further political ends. As part of the sting, the news channel’s reporters engaged with Rohit Kumar, the “political manager” of Obiyan Infotech, a digital marketing agency based in Delhi. He told them that the dominant trend was to alter the contents of a politician’s speech.

On 26 December 2023, as a measure to counter the spread of deepfakes, the government issued an advisory to social media companies that reminded them to be vigilant about the spread of misinformation on their platforms. It may also introduce social media regulation targeting the spread of deepfakes. However, the India Today sting operation indicates that the bulk of the problem lies offline. Until outfits like Obiyan Infotech are found out and shut down, any regulation or advisory will be ineffective in stemming the tide of deepfakes that will travel across platforms and encrypted chat applications.

The author is a consultant for Koan Advisory on emerging tech. This article is part of ThePrint-Koan Advisory series on emerging policies in India’s tech sector. Views are personal.

(Edited by Theres Sudeep)


