
Covid made companies AI friendly, but consumers are yet to trust it

Artificial intelligence now plays a crucial role in helping countries adapt, stay safe and improve how we live and work. But transparency and ethics are key.


We are living through one of the most challenging and devastating health crises in living memory. This year has brought untold loss of life and livelihoods, the true worldwide repercussions of which are still to be seen. The COVID-19 pandemic has also altered the global business landscape, accelerating the pace and volume of data created through increased remote working and digital transacting, and fast shifting the economic realities for all business leaders. Against this backdrop, technology – artificial intelligence (AI), in particular – now has an even bigger role to play in helping organizations and countries to adapt, keep us safe and improve how we live and work.

But the thirst and drive to innovate with these new technologies at speed must be balanced with the need to carefully build consumer trust in those same innovations. Bringing consumers on this journey will be key. A new report from Longitude explores this precise balance between engendering trust in technology and fostering AI innovation. This is a topic that I am very passionate about and one that I spoke on at Davos this year, so I want to explore some of the report’s findings in more detail here.

Transparency is everything

Until recently, there has been too much focus on what AI can do and not enough on how it does it. Today’s organizations must be able to demonstrate that their systems and algorithms are responsible, fair, ethical and explainable. In a word, that their AI is trustworthy.

High-profile cases of misuse of AI by global technology firms have dented consumer trust in AI. The subsequent fallout has also raised greater global awareness of the broader issues around the use of data and our personal information.


The result? Trust in technology can no longer be assumed – it must be earned. In this sense, organizations must think of their technology as ‘guilty until proven innocent’. The onus is on them to proactively demonstrate the responsible use of their technology to the world and to be prepared to explain and justify the decisions their systems make when required. Under the EU’s General Data Protection Regulation (GDPR), individuals have the right to meaningful information about the logic, significance and envisaged consequences of automated decisions – what is often called ‘the right to explanation’. Businesses must consider how they apply these technologies, using personal information only when it is needed and with the user’s consent. By building these principles into AI as it is developed, businesses can ensure that it is ethical and transparent from the outset.

We are already seeing the impact of this transition to ethical AI. A recent Capgemini study found that 62% of consumers placed more trust in a company whose AI was understood to be ethical, while 61% were more likely to refer that company to friends and family, and 59% showed more loyalty to it. Companies that communicate openly about how their technology works are more likely to be trusted by consumers to use AI to its full potential.

AI is our best problem solver

The COVID-19 pandemic has accelerated digitalization at a rate we could never have imagined. This has created a volatile environment with a plethora of challenges to overcome and opportunities to exploit. In the payments industry, we have had to protect consumers and businesses against an explosion in cyberattacks and fraud. Our NuData technology, which verifies users based on their inherent behaviour, has seen attacks become more sophisticated, with one in every three now emulating human behaviour. Account creation attacks, where bad actors create fake accounts for subsequent fraudulent use, have increased by 500% during the pandemic compared with the same period in 2019; one global retailer alone experienced a 679% increase in suspicious account creations. Overall, global fraud rates have hit a near-20-year high, according to the latest PwC figures, with 47% of companies reporting that they experienced fraud over the past two years.


It would have been impossible to maintain our defences without the implementation of AI on our network. It is, and will continue to be, a vital part of adapting to and securing this new world. As AI becomes more powerful and pervasive, we must put systems in place to ensure that it is developed and deployed ethically.

Consumer-driven, consumer-focused

Consumers create a huge amount of data. By 2025, we will be creating an estimated 463 exabytes every day, and that figure is only going to grow – the oft-quoted statistic is that 90% of all the data ever created was created in the last two years. AI-driven systems have been built to turn some of this information into recognizable benefits for the people who create that data – making our data work for us.

But AI is a technical and complicated tool. The trust that it needs to be most effective will come when consumers see and feel its real-world benefits in action. In this sense, trust can be a key differentiator – a competitive advantage for businesses. Only those who are trusted to operate AI will be able to maximise the benefits of its value-added services in the years to come. AI is not only keeping consumers safe online and transforming their shopping experiences; it is also revolutionising farming and giving the environment a new lease of life. For those that get it right, the possibilities are endless.

The big picture

At times of such uncertainty, it can be difficult to look too far ahead. But now is the time for business leaders to take a step back and look at the bigger picture. The landscape has changed, and that change is permanent. Our digital futures have been brought forward and society will continue to demand higher levels of transparency in the way that AI is used to solve new challenges.

Responsible development of, and engendering trust in, technology will be crucial to business success in the ‘next normal’ – but more importantly, to building a world that is more prosperous and more equal for all.

This article was first published by the World Economic Forum.


