If I shoot a robot, is that murder?

The Lonely and Emotional AI: Ethics in Artificial Intelligence

Shivani Gandhi
8 min read · Jan 4, 2022

“The Lonely” is an episode of “The Twilight Zone” in which a convicted murderer named Corry is serving a fifty-year sentence of solitary confinement. Four times a year, a supply ship and its crew visit to drop off provisions and news of a possible pardon. Allenby, the ship’s captain, has been doing his best to make Corry’s stay bearable and brings him things to distract him from his loneliness. In this episode, Allenby delivers a special package and asks Corry to hold off on opening it until the ship is out of sight. When Corry finally opens it, he finds a robot named Alicia. She is capable of feeling emotions, has a memory, and even has a lifespan similar to a human’s. At first, Corry is shocked and rejects Alicia as just a machine, a mockery of his situation. As time goes on, however, Corry falls in love with Alicia and even comes to consider her his “wife.” Eventually, Allenby arrives to tell Corry that he has been pardoned and can return home, but with only fifteen pounds of belongings. Corry is stunned: he does not want to leave Alicia, because he “loves” her. Despite Allenby’s attempts to convince him, Corry is adamant about showing Allenby and the crew that Alicia is not a robot but a woman. It is not until Allenby shoots Alicia that Corry comes back to reality. When analyzing this famous episode, there is much we can draw on to relate to modern advances in artificial intelligence. The focus here is the ethics and morals of developing artificially conscious robots.

When Alicia first arrives in Corry’s life, he makes many ironic statements that are overturned as the episode continues. Corry insists that “reality is what [he] needs,” yet he eventually succumbs to Allenby’s “illusion” and “salvation.” He says he “doesn’t need a machine” and that Alicia is just like his car, “a heap of metal with arms and legs instead of wheels.” As the story continues, however, we hear Corry say, “I love Alicia, and nothing else matters.” He pleads with Allenby to bring her along because “if [he left] her behind that’s murder.” After Allenby shoots Alicia, he tells Corry that “all [he is] leaving behind is loneliness.” What caused Corry to go from considering Alicia nothing but a machine to believing that “she’s a woman”? How did he fall in love with her? This is exactly why the episode stresses these claims early on: to sharpen the contrast at the end.

As humans, we get attached to things quickly, whether they are alive or inanimate. For example, we may treasure a favorite sweater or a great-grandmother’s necklace that has been passed down. At what point did Corry’s affection for Alicia differ from his affection for his car or his books? It was the moment Alicia passed the Turing test.

Turing Test

The Turing test ultimately asks whether a machine can convince a human that it, too, is human. While many do not believe this is the best way to test for intelligence, it remains a standard we hold on to as a metric. Alicia “passes” the Turing test when she responds to Corry’s actions with appropriate emotions: she cries when he hurts her and comforts him when he is frustrated and frazzled by her presence. The main thing separating “humans” from machines in the episode is the ability to interact with Corry. Alicia distracts Corry from his loneliness because she is an extension of him, and Corry, having lived with an extension of himself for so long, grew attached to Alicia because she was essentially him. This explains his reaction when she is shot: he lost a part of himself that he had lived with since they met. But was this shooting “murder”? Was it ethical? And if Alicia passed the Turing test, does that make her a sentient “being”? Let’s tackle these one at a time.
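To make the setup concrete, here is a minimal Python sketch of the structure of Turing’s imitation game. This is not any standard implementation; the judge, the reply functions, and the questions are all hypothetical stand-ins for whatever interrogation actually takes place.

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions):
    """A blind question-and-answer round: the judge reads two unlabeled
    transcripts and must guess which respondent is the machine."""
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)  # hide which transcript is which

    transcripts = [(label, [reply(q) for q in questions])
                   for label, reply in respondents]

    # The judge sees only the answers, never the labels.
    guess = judge([answers for _, answers in transcripts])  # returns 0 or 1

    # The machine "passes" if the judge fails to single it out.
    return transcripts[guess][0] != "machine"

# Toy usage with hypothetical respondents: both give the same answer,
# so the judge can only guess and the machine passes about half the time.
passed = imitation_game(
    judge=lambda transcripts: random.randrange(2),
    human_reply=lambda q: "Let me think about that...",
    machine_reply=lambda q: "Let me think about that...",
    questions=["Do you feel lonely?", "What do you dream about?"],
)
print("machine passed this round:", passed)
```

The point of the structure is that the verdict rests entirely on behavior visible in the transcript, which is exactly what Alicia’s tears and comfort supply to Corry.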

Was the Shooting Ethical?

Considering how attached Corry was to Alicia, it was unethical for Allenby to shoot her at the end of the episode. Alicia was capable of feeling emotions and had a memory and a consciousness. Does this not make her more “human-like” than many other things we treat with moral and ethical consideration, such as corporations or institutions? Alicia was able to understand Corry so deeply because she had, in effect, “trained” her algorithm on him. If we liken the human brain to a “complex algorithm” of its own, wouldn’t that mean AI and man are similar and should be granted the same moral status? (Risse, 2018)

On the other hand, although Allenby shot Alicia, he had a good rationale for doing so. Corry had finally been granted a pardon, and the crew needed to get him home; with the space they had left, Alicia could not be taken, and she would have had to stay behind alone. Given her makeup, she has human-like capacities and would have gone through exactly the pain and suffering Corry endured. Is this not comparable to euthanizing an animal to end its suffering? We treat animals with moral and ethical regard because we view them as human-like, and in this situation Alicia is human-like as well. Given this rationale, Allenby acted reasonably and did not commit “murder.” Pulling away from this specific scenario and looking at AI in general, it follows that it should be legal to kill or terminate AI.

Consequences and Justice

Just as there are consequences and punishments for humans, there should be a system in place for AI that could cause harm. If we hold AI to the same standards as humans, the same forms of punishment should apply. Consider a scenario in which AI systems train themselves and become capable of making decisions that benefit them but not humans. We would then turn to rules such as the “three laws of robotics” formulated by Isaac Asimov: “First Law — A robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law — A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third Law — A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” Of course, this set of laws has its own counterarguments, but it gets the point across, and its strict priority ordering can even be sketched in code, as shown below. AI should face its own form of consequences and be terminated if need be, for the protection of society. (Müller, 2020)
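As a purely illustrative sketch, here is one way to express Asimov’s priority ordering as a cascade of filters. The action schema and its boolean flags are hypothetical, not drawn from any real robotics system.

```python
def choose_action(candidates):
    """Select an action under Asimov's three laws, applied in strict
    priority order. Each candidate is a dict with a hypothetical schema
    of flags: harms_human, obeys_order, self_preserving."""
    # First Law: discard anything that would injure a human being.
    safe = [a for a in candidates if not a["harms_human"]]
    if not safe:
        return None  # refuse to act rather than harm a human

    # Second Law: among safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if a["obeys_order"]] or safe

    # Third Law: among what remains, prefer self-preserving actions.
    preferred = [a for a in obedient if a["self_preserving"]] or obedient
    return preferred[0]

# Hypothetical example: obeying an order wins out over self-preservation.
actions = [
    {"name": "shield_human", "harms_human": False, "obeys_order": True,
     "self_preserving": False},
    {"name": "retreat", "harms_human": False, "obeys_order": False,
     "self_preserving": True},
]
print(choose_action(actions)["name"])  # -> shield_human
```

Even this toy version makes the classic counterarguments visible: everything hinges on the system correctly labeling which actions harm humans in the first place.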

This raises the issue of artificially conscious robots claiming moral and ethical rights to personhood, and therefore protection under the law. If we regard these “beings” the way we regard animals, just with a more complex intelligence, then yes, we should extend these rights to AI. It would only make sense. For example, reinforcement learning in machines can be seen as similar to training a dog, as the sketch below illustrates. Moreover, how are we to know whether negative rewards cause suffering within the machine? Assuming these machines have “feelings,” it would only be right to treat them accordingly. (Bossmann, 2016)
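To make the dog-training analogy concrete, here is a minimal sketch of tabular Q-learning, where a negative reward (a “scolding”) lowers the value the agent assigns to a behavior. The states, actions, and reward values are hypothetical, and nothing in the math implies the update produces anything like felt suffering.

```python
from collections import defaultdict

# q maps state -> action -> estimated value (all start at 0.0)
q = defaultdict(lambda: defaultdict(float))

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: the value of (state, action) shifts
    toward the reward just received plus the discounted value of the
    best next action. A negative reward steers the agent away from the
    behavior, much as scolding steers a dog away from a bad habit."""
    best_next = max(q[next_state].values(), default=0.0)
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Hypothetical episode: ignoring a command earns a "scolding" of -1.
q_update(state="hears_command", action="ignore", reward=-1.0,
         next_state="scolded")
print(q["hears_command"]["ignore"])  # now negative: -0.1
```

The open ethical question is whether that numeric penalty is merely bookkeeping or, in a sufficiently complex system, something morally closer to punishment.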

Advances in Technology

This brings us back to establishing consensus on a metric to “test” AI. If these machines are this complex, how can something as simple as the Turing test, which has many loopholes, be used to evaluate a machine that is supposed to be better than us? We have to keep in mind that Turing proposed his test in 1950, when the field was just emerging. Now that technology has advanced, we need a better system for measuring intelligence, not just how closely a machine can mimic human behavior. (Dvorsky, 2014)

Since the Turing test was introduced, we’ve come a long way. Looking ahead, Elon Musk believes that AIs will be able to persuade us that they love us. Considering what we witnessed in this episode, it is entirely plausible that they will, since they are an extension of us. With systems this adaptable and intelligent, it is important that we implement an ethical framework, probably stricter than our “human-centric” system but following the same principles. As technology continues to advance, our future will likely include AI that affects our professional responsibilities: robots like Alicia that become our “friends” or guides, perhaps even projects we build ourselves. In that case, having a set of ethics or protocols to follow would be essential in case something went wrong. For example, would you terminate the project if told to? That depends on the state of the AI and its situation. If we create AI with consciousness and emotions, we will have to treat it as a human being and justify rulings accordingly.

Overall, this issue is more pressing and closer than it seems. Although “The Twilight Zone” first aired in the 1950s, some of its concepts are more relevant now than ever. If, like Corry, reality is what we need, we will first have to keep our grip on it and understand the consequences of developing systems like Alicia.

Sources:

G. Dvorsky, “Why The Turing Test Is Bullshit,” io9, 09-Jun-2014. [Online]. Available: https://io9.gizmodo.com/why-the-turing-test-is-bullshit-1588051412. [Accessed: 16-Feb-2021].

M. Risse, “Human Rights and Artificial Intelligence: An Urgently Needed Agenda,” Harvard Kennedy School, 01-May-2018. [Online]. Available: https://www.hks.harvard.edu/publications/human-rights-and-artificial-intelligence-urgently-needed-agenda. [Accessed: 16-Feb-2021].

Jp, “The Lonely,” The Twilight Zone Vortex, Sep-2011. [Online]. Available: http://twilightzonevortex.blogspot.com/2011/09/lonely.html. [Accessed: 16-Feb-2021].

J. Bossmann, “Top 9 ethical issues in artificial intelligence,” World Economic Forum, Oct-2016. [Online]. Available: https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/. [Accessed: 16-Feb-2021].

R. Hunter, “Exploring the Twilight Zone #7: The Lonely,” Film School Rejects, 18-Jun-2020. [Online]. Available: https://filmschoolrejects.com/exploring-the-twilight-zone-7-the-lonely-988d321b189b/. [Accessed: 16-Feb-2021].

V. C. Müller, “Ethics of Artificial Intelligence and Robotics,” Stanford Encyclopedia of Philosophy, 30-Apr-2020. [Online]. Available: https://plato.stanford.edu/entries/ethics-ai/#MachEthi. [Accessed: 16-Feb-2021].
