In 2007, Peter M. Asaro proposed a pioneering concept: attributing a “quasi-person” status to robots, thereby granting them a nuanced legal identity.
This concept, characterised by the enjoyment of “partial rights and duties,” aimed to strike a balance between acknowledging the advanced capabilities of AI systems and delineating their legal responsibilities within society.
Asaro’s proposal opened avenues for deliberation on the evolving intersection of technology and law, paving the way for nuanced frameworks that acknowledge the unique attributes of AI entities while ensuring accountability and ethical consideration in their integration into various facets of human life.
Challenges
Throughout this chapter, we have deliberated on equating AI with the corporation, or with an unborn child, a dead person, or even an idol. The issue is that all of these are administered by human agency; AI, however, in cases of high autonomy (our major concern), operates independently of any human agency. In that respect, it can be said to resemble animals, which today are considered legal subjects but not legal entities or persons.
The concept of AI entities as ‘quasi-persons’ draws a parallel to how children, unborn foetuses, and individuals with severe cognitive impairments are treated in legal contexts. Like children, AI would not possess full rights of personhood; thus, it would face restrictions such as the inability to sign contracts, vote, or engage in activities requiring legal responsibility. However, AI might benefit from special protections to prevent misuse or exploitation, much as children are protected by laws against child labour and other forms of abuse.
In terms of liability, AI’s limited agency and decision-making capabilities would imply a reduced responsibility for damages. Instead, liability would likely fall on its developers, operators, or users, depending on the extent of human oversight and control exercised over the AI. This quasi-person status would necessitate an evolving legal framework to address the unique ethical and practical issues arising from AI’s capabilities and limitations, balancing the need for innovation with considerations of accountability and protection.
On the other side of recognising legal personhood, AI entities have often been likened to a ‘legal black hole,’ a metaphor highlighting the danger that these entities could obscure human accountability and responsibility, effectively absorbing all traces of liability. The comparison suggests that as AI systems gain more autonomy and become further integrated into legal and societal frameworks, they could serve as buffers, shielding the individuals and organisations behind them from direct legal consequences. In essence, human legal responsibilities could be hidden behind the veil of AI, much like the ‘corporate veil’ principle applied to corporate entities, creating a situation where it becomes exceedingly difficult to pinpoint who is accountable for any harm or wrongdoing perpetrated by AI.
The concept of AI as a ‘legal black hole’ raises profound concerns. If AI entities are granted a form of legal personhood or significant operational independence, they could act in ways that lead to harm without clear paths to hold their creators, operators, or owners responsible. This could result in scenarios where victims of AI-related harm find it challenging to seek redress, as traditional pathways of liability and accountability are obscured or severed by the AI’s autonomous status.
This excerpt from ‘AI on Trial’ by Sujeet Kumar and Tauseef Alam has been published with permission from OakBridge Publishing.