Why Lucknow Police wanting to use AI to ‘read’ distressed expressions of women won’t work

The scientific basis behind the idea — to use AI to read human expressions & behaviours — is unsound and has proven to have wide margins of error that can do more harm than help.

Representational image | Manisha Mondal | ThePrint

Bengaluru: The Lucknow Police plans to set up a city-wide network of AI-enabled cameras that can read expressions of distress on women’s faces and alert the nearest police station.

The idea was met with backlash, with critics noting that it could lead to surveillance of women in the city and raise ethical and privacy concerns.

These concerns aside, the logic behind the idea also doesn’t seem to hold much water scientifically.

Facial expressions do carry some information about emotion, but numerous studies have converged on the conclusion that trying to recognise emotions from facial expressions alone is highly inaccurate and prone to misinterpretation. In other words, junk science.


Also read: India needs senior female cops for safer cities, 90% women retire as police constables


How emotion recognition ‘works’

In a system like this, artificial intelligence (AI) would learn to identify and distinguish between positive and negative expressions by training on videos that have already been labelled as showing positive or negative expressions.

For instance, if an AI has to learn to identify emotions during a job interview, it is first trained on videos of interviewees who performed well (positive emotions) and of those who didn't. The system mines a large database of such videos to associate facial expressions with outcomes.

Other behavioural features are studied as well, such as an upward turn of the mouth that indicates a smile, which is then codified as a positive emotion. Similarly, a downward turn of the lips is associated with a negative emotion. These micro-expressions are assigned to emotions on a scale.

Thus, an AI “learns” to understand which expressions correspond to which emotions.
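
A minimal, hypothetical sketch of this training step is below. The feature names (mouth curvature, brow lowering, lip-corner pull), the toy data, the labels and the use of a simple scikit-learn classifier are all assumptions made for illustration; real systems use far richer inputs and larger models.

```python
# Minimal, hypothetical sketch of the training step described above.
# Feature names, toy data and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: coded facial measurements extracted from one video frame.
# Columns: [mouth_curvature, brow_lowering, lip_corner_pull]
X_train = np.array([
    [ 0.8, 0.1, 0.9],   # broad smile
    [ 0.6, 0.2, 0.7],   # mild smile
    [-0.7, 0.9, 0.1],   # scowl
    [-0.5, 0.8, 0.0],   # frown
])
# Labels the system is told to learn: 1 = "positive", 0 = "negative"
y_train = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# A new frame showing a scowl is mapped to "negative", regardless of
# whether the person is angry, confused, concentrating, or just tired.
new_frame = np.array([[-0.6, 0.85, 0.05]])
print(model.predict(new_frame))        # -> [0], i.e. "negative"
print(model.predict_proba(new_frame))  # a confidence score, not ground truth
```

The point of the sketch is that the model only ever maps measurements to the labels it was given; it has no access to what the person actually feels.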

In 2019, a comprehensive review of existing studies found that facial expressions are an unreliable guide to emotion. For example, the review showed that people often scowl when they are not angry, and when they are angry, they scowl less than 30 per cent of the time.

“So scowls are not the expression of anger; they’re an expression of anger — one among many,” said Lisa Feldman Barrett, a professor of psychology at Northeastern University and one of the review’s five authors, to The Verge.

People also scowl when they are confused, concentrating, or have gas, she said. As one Twitter user pointed out, it could just be that a person is in a bad mood.


There have also been ample instances of such tech generating ‘false positives’, as it did during the 2017 Champions League final in Cardiff — over 2,000 people were incorrectly identified as potential criminals by facial scanning technology.
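
The Cardiff figure illustrates the base-rate problem behind such false positives: when the people being screened for are a tiny fraction of the crowd, even a seemingly accurate system produces mostly false alarms. A rough illustrative calculation is sketched below; the numbers are assumptions chosen for the example, not figures from the Cardiff deployment.

```python
# Rough, illustrative base-rate calculation. All numbers are assumptions
# chosen for the example, not figures from the Cardiff deployment.
crowd_size   = 170_000   # people scanned at a large event
true_targets = 50        # people actually on a watchlist
accuracy     = 0.95      # assumed rate of correct decisions, both ways

genuine_hits = true_targets * accuracy                       # ~48
false_alarms = (crowd_size - true_targets) * (1 - accuracy)  # ~8,500

print(f"Genuine matches flagged : {genuine_hits:.0f}")
print(f"Innocent people flagged : {false_alarms:.0f}")
# Even at 95% accuracy, the overwhelming majority of alerts are false,
# because almost everyone in the crowd is not on the watchlist.
```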


Also read: India’s MeToo will succeed if our laws catch up with it


Human bias in AI

AI systems are also prone to bias, because they are designed by humans who carry biases of their own. Most scanning technology learns from existing data labelled as right versus wrong, and those labels are often riddled with stereotypes and biases.

This has manifested most visibly along lines of skin colour, not just in surveillance but also in everyday tech such as automatic hand sanitiser dispensers and pulse oximeters, which work better on lighter skin tones than on darker ones.
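
A toy sketch of how that happens is below: if the historical labels a system learns from already encode a bias, the model reproduces it rather than correcting it. The data, the 'group' attribute and the feature names are all invented for illustration.

```python
# Toy illustration (all data invented): biased training labels produce
# a biased model, even when the expression itself is identical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [expression_score, group] -- 'group' stands in for an attribute
# such as skin tone bucket, which should be irrelevant to the outcome.
X = np.array([
    [0.9, 0], [0.8, 0], [0.7, 0], [0.6, 0],
    [0.9, 1], [0.8, 1], [0.7, 1], [0.6, 1],
])
# Biased historical labels: group 0 was usually rated "positive",
# group 1 almost never, even for the same expressions.
y = np.array([1, 1, 1, 0, 0, 0, 0, 0])

model = LogisticRegression().fit(X, y)

# Identical expression, different group -> different scores, purely
# because the group attribute helped predict the biased labels.
same_expression = np.array([[0.8, 0], [0.8, 1]])
print(model.predict_proba(same_expression)[:, 1])  # probability of "positive"
print(model.coef_)  # the weight on 'group' is non-zero: the bias was learned
```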

The algorithms can also carry biases about which expressions befit which emotions. Multiple studies have found that defendants who put on a convincing display of remorse have a greater chance of getting a lenient sentence, and that rape victims who appear extremely distressed are more likely to be believed.

Reactions also vary from individual to individual and in the way people express emotion. For example, people might cry when they are very happy, when they are very upset, or sometimes even when they are very angry.

Most of the ‘science’ behind the ‘core emotions’ used in today’s facial recognition AI is based on outdated and discredited theories about non-verbal behaviour and facial expressions.


Also read: AI has almost solved one of biology’s greatest challenges — how protein unfolds


Emotion recognition tech popular

The lack of science, though, has not prevented companies from making products that claim to read emotions/behaviours accurately.

Multinational companies such as Facebook, Amazon, Disney, Apple, and even Kellogg’s have built or used software and hardware to capture facial expressions to perform emotion recognition.

As we rely more and more on AI to make our lives easier, facial recognition offers both promise and societal risk, given the shaky science it is built on.

Some companies, such as Unilever and Cathay Pacific, are already using such software to analyse the faces of potential employees during job interviews. Owing to existing racial and gender biases, AI has also demonstrated the potential to reinforce inequality.

Emotion recognition tech has also found use in surveillance and security, prompting alarm.

“While I can imagine that there are some genuinely useful use-cases, the privacy implications stemming from emotional surveillance, facial recognition and facial profiling are unprecedented,” said Frederike Kaltheuner of Privacy International to the BBC. Privacy International is an organisation that works to ensure that all surveillance stays within the rule of law.


Also read: Why we may be exactly wrong about technology and inequality