
ChatGPT vs Bard: Microsoft, Google’s growing chatbot rivalry and its lurking dangers

After ChatGPT’s entry into the market, major tech giants have joined the competition with their own generative AI tools. But concerns remain over how accurate and bias-free these tools are.


New Delhi: The race to build a generative artificial intelligence (AI) tool has kept two of the biggest tech giants, Microsoft and Google, busy ever since OpenAI, a startup, introduced ChatGPT (Chat Generative Pre-trained Transformer) in November last year.

Now, Google has introduced its own chatbot, named Bard. The announcement, however, faltered right off the bat. During a live demo, Bard made a factual mistake, wrongly claiming that the James Webb Space Telescope took the very first pictures of a planet outside our solar system. The error dented investor confidence, and Google’s parent company Alphabet lost $100 billion in market value on Wednesday. At the same time, Microsoft has announced that its search engine Bing and browser Edge will be powered by ChatGPT.

Although chatbots have existed in the past, OpenAI’s ChatGPT was the first interactive tool of this kind, powered by AI, that was easily accessible. This “natural language processing” tool, trained on vast amounts of text from the internet, can collate information and produce answers in one place through a question-and-answer format, in near real-time.

Although the tool has generated plenty of hype and excitement among venture capitalists and investors, it has also raised concerns about inaccuracy, misinformation, disinformation and bias. The education sector, for example, is already grappling with plagiarism, with many students using the tool to directly solve problems and finish assignments.

The Chatbot Rivalry

“The race starts today, and we’re going to move, and move fast,” said Microsoft CEO Satya Nadella, setting the tone for the chatbot rivalry as he announced a reboot of the Bing search engine this week.

Bing will now combine the experience of an AI chatbot with that of a search engine.

Google’s Bard is also an experimental conversational AI service with a similar chatbot-like mechanism, which the company claims can provide factual answers on complex subjects. It is based on Google’s Language Model for Dialogue Applications, or LaMDA.

Chinese tech firm Baidu, too, has finished internal testing of ‘Ernie Bot’, a ChatGPT-style project.

Venture capitalists such as Sequoia have shown interest in investing in generative AI tools like ChatGPT, saying the breakthrough will set a new precedent for how humans and technology learn.

“As investors, when we see any superpower in town and people are enabled to harness it, great companies get built and we get to participate and invest in those and make money with those, so we are absolutely intrigued by what this can become,” Anandamoy Roychowdhary, Surge Partner, Sequoia Southeast Asia, told ThePrint.

“The reason why everyone is so excited about ChatGPT and generative AI is not that it is very smart or cogent, it is just our first real attempt at building something like this that is not human.”

“This tech learns how humans learn,” he added. “We have been able to discover a large language model that can basically learn from the internet, learn from everything we have done in the past and just help us understand our own history in our own knowledge base.”

This technology, however, is not new. Google claims that its “transformer research project” of 2017 was the basis for many of the generative AI applications being seen today.

“That’s why we re-oriented the company around AI six years ago — and why we see it as the most important way we can deliver on our mission: to organise the world’s information and make it universally accessible and useful,” said Sundar Pichai, CEO, Alphabet Inc., in a blog post.

“Advanced generative AI and large language models are capturing the imaginations of people around the world,” he added.

The transformer research project explored how language could be translated, interpreted and understood using the vast texts and information present in the public domain.

Upasna Dash, founder of Jajabor Brand Consultancy, a communications and brand-building company, believes that generative AI has existed for a long time, but that easy access is the key differentiator this time.

“They have equalised the internet by making it in a form wherein one can prompt a question and a foundational level answer is given,” Dash explained. “I think that is the biggest differentiator. It is giving you customised content and resources for any subject. It actually does the job of assembling the analysis and insights on what is available.”

“The tool eliminates certain steps for the user since we are used to finding answers from multiple platforms using the Google search engine,” she added. 


Also Read: Budget 2023: Modi govt puts focus on upgrading skills & tech, 3 new AI centres to come up


Bias and Accuracy

However, all the hype and excitement around AI tools has been accompanied by concerns over their misuse, as well as questions about their accuracy, social sensitivity, and handling of personal data.

According to Akash Karmakar, partner with the Law Offices of Panag & Babu, ChatGPT scours the internet to answer questions. From a cybersecurity standpoint, such AI-based natural language processing (AI-NLP) tools could potentially be used to create malicious code.

This, he explained, necessitates proportionate and reasonable end-use restrictions that would, for example, prevent such tools from being used to aggregate personal data for cyberstalking, exploit vulnerabilities in cybersecurity frameworks, or scour the internet for pirated content.

Karmakar further explained why regulation may be needed to tackle ‘biases’.

“With increasing use cases for AI in risk-decisioning, determination of creditworthiness and prioritisation of healthcare, bias can creep in during training and adversely impact individuals,” Karmakar said. “This is where regulation of AI becomes key since bias or prejudice such as racism or sexism is often apparent in the output of such tools.”

In the education sector, too, conversations around plagiarism are taking place.

According to Jaideep Kewalramani, Head of Employability Business & COO of TeamLease Edtech, there is a pressing need for more policy support at the institutional level.

“In the education sector, because this is a general AI, the applications are far and wide,” Kewalramani said. “The obvious ones have already been spoken about. But what people are not talking about are two things. One, how do we now accept this as a new normal and integrate that at the policy level, the pedagogy level, and the classroom level?”

For example, he said, one option could be a policy that gives educational institutions the autonomy to determine their own AI reference frameworks. “Educational institutions will then have to decide for themselves whether they want a complete ban or whether they want to use this tool to enhance learning,” Kewalramani said.

However, there is also a view that these concerns are “alarmist” and an example of a “classic fear of change”.

“In any wave of tech, we tend to see that some companies are serious and some companies use them as window dressing,” said Roychowdhary. “Most of the alarmist stuff is overblown, a classic fear of change. The reality is different. In the history of human endeavour, a tool shows up and is disruptive, be it a hammer or a car or a computer or an aeroplane. We have basically absorbed the tool and harnessed the power of that tool,” he explained.

“Some of it is good and some of it is bad and then society has changed. Large language models will have a definite impact on white-collar working habits, etc,” he added.

Yet others are concerned that the tool could be a game changer in employment, leading to fewer job roles, especially in the communications sector.

“I have been speaking to stakeholders in my industry because there is a concern that it may eradicate roles and manpower,” Dash said. 

“You have to start thinking of this like a relay race. I don’t think that as companies or as human beings you can fully rely on offloading certain aspects to this platform, but we have to see this as a tool that is not making anyone obsolete… Technology is not harmful but the use cases are and it is necessary to scrutinise these specific use cases which may be harmful.”

(Edited by Geethalakshmi Ramanathan)


Also Read: There’s a new shrink in town, and it’s the AI chatbot. Patients enjoy more privacy, no bias


 
