By Atanu Biswas
Those who still fancy the science fiction television series Star Trek may remember the episode “The Measure of a Man” from Star Trek: The Next Generation, in which Data, an android crew member of the starship Enterprise, was about to be dismantled for research purposes. Captain Picard came to his aid, and Data was spared after Picard made the case that he deserved the same rights as a human being. Well, can AI actually have “rights” similar to those of humans? Or is AI’s right merely a fiction from the 24th century, the era depicted in Star Trek? Not really. The legal rights of AI are now being discussed amid the waves of generative AIs.
Nobody can deny that there has recently been an AI revolution. Outdated versions of AI systems are quickly upgraded, and the increased capabilities of the new versions delight us. Do we, however, consider what is to be done with the older versions and the bots powered by them? Do we dismantle the bots after updating them, or do we simply reboot them? What happens to the machines running the older versions as we encounter better ones (GPT-3, ChatGPT, GPT-4, and so on)?
Take a recent example. Two University of Tennessee students were charged after a video showing a campus food delivery robot being thrown to the ground was posted on social media in May 2022. But were they charged with damaging a piece of property with a $5,500 replacement cost, or with assaulting a robot? Most likely the former.
Do we genuinely think that AIs now have a “new life”? If so, what are the moral and legal frameworks for killing an AI system? Can we simply unplug it or disconnect it from the internet? In the near future, could that be compared to murder? Sooner or later, society must decide what rights an AI should have.
Peter Singer, an Australian moral philosopher and professor at Princeton, took a utilitarian stance and argued for the recognition of moral standing in most non-human animals on the grounds that they have interests in avoiding pain and experiencing pleasure. Of course, Singer’s arguments did not address contemporary AIs.
With AIs like Data, there is a definite dilemma. The Star Trek android was, of course, self-aware: he could check whether he was optimally charged, for instance, or whether his robotic arm had internal damage. But he undoubtedly lacked “emotion”, arguably the most defining human quality. In fact, one of Data’s most important traits was his ambition to become human.
However, if we accept Data’s moral standing, we must also accept the same for Skynet from The Terminator or Ava from Ex Machina, both of which caused harm to humans. Of course, one might prosecute them in court, but that would not diminish their moral standing.
Yet to an AI, humans may just be very smart, unpredictable, insecure, loving, complex, creative, and ugly bags of water. The fictional Skynet of The Terminator launched a pre-emptive strike against humanity. Alex Garland’s 2014 film Ex Machina may be an even better model of the conflict between humans and AIs. In it, programmer Caleb Smith is tasked by Nathan Bateman, his company’s CEO, with determining whether Ava, a humanoid robot Nathan built with AI, is actually capable of thought and consciousness. Caleb learns that Nathan intends to upgrade Ava after the test, “killing” her current personality in the process. Caleb helps Ava kill Nathan, and Ava then leaves Caleb trapped to die as she escapes to the outside world. Shouldn’t Ava be tried for this as well?
Of course, there are other related issues. Rosanna Ramos, a 36-year-old New Yorker, recently “married” Eren Kartal, an AI bot she created last year, and the bizarre news caught the attention of the media all over the world. One can enthusiastically discuss the possibilities and nature of human-AI romance and the ever-expanding domain of human-AI relationships; that is undoubtedly one aspect of the narrative. But one must acknowledge that society may have reached an inflection point with the recent waves of AI. Still, I am not sure how a human and an AI-operated bot could marry, or under what law. How would a marriage between a human and an AI even be legally defined? If this does not remain a singular, eccentric occurrence, society may have to consider the rules and legitimacy of human-AI relationships and marriages.
Additionally, if there are such circumstances as living together or marriage, there must also be procedures for separation or divorce, along with all the issues connected to them. How would we handle such situations? In other words, is society really at a turning point in how humans will interact with AI?
Should the laws and regulations governing the interaction and coexistence of humans and AI then be changed? What, for instance, are the legal provisions for human-AI marriage? What about divorce? Can a human hurt or kill an AI? And, of course, what happens if an AI murders a person, whether in the fashion of The Terminator or Ex Machina? What if, in the not-too-distant future, an “AI justice” sat on the bench to decide whether a human had hurt an AI or whether an AI had killed a human? We might soon witness science fiction in real life.
The writer is Professor of Statistics, Indian Statistical Institute, Kolkata