
Does AI Deserve Human Rights?

Artificial intelligence has made unparalleled progress in recent years. Many public intellectuals, Elon Musk for example, believe that AI will at some point surpass human intelligence. What counts as intelligence is itself a problematic concept, but it is worth thinking through the implications if AI really did achieve a level of intelligence equal to or greater than that of human beings, and, more importantly, whether AI would then deserve rights.

Let me make clear from the outset what I mean by the “same level of intelligence as human beings”. I mean that AI has the same cognitive capabilities that human beings have: it can plan, set goals for itself, is self-conscious, and has emotions. We are not going to go into the problems of using these terms. We can even say that this AI looks human, with the same sort of body, only made of machinery instead of biology. In short, the question is: if there is no epistemological difference between AI and human beings, does AI deserve the same human rights as humans? I am going to argue that it does.

First, let us start with an argument based on capabilities. If we grant that AI has the cognitive capabilities I laid out above, then why shouldn’t it deserve human rights? Take the human right not to be enslaved. Are we allowed to enslave AI robots to do our bidding even though they are cognitively capable of the same things as human beings and can sense that they are being treated unfairly? If one takes the road of capabilities, the answer is no. The robot has the same level of autonomy as a human being and should thus be treated as such an autonomous being.

One way to escape this argument is by ascribing a unique ontological difference to humans as opposed to robots. The argument might go like this: humans are made of flesh and blood, they have DNA, and they are made by nature. They can find their own meaning in life; no one put them here for a particular reason. AI robots, however, are made of metal, are programmed by algorithms, and are made by humans in order to perform a specific task. Thus, human beings are ontologically different from AI robots.

Obviously, it is true that robots and humans are made of different material components. The question is: does this matter? If your humanness depends on having flesh and blood, does this mean that someone with less flesh or blood is less human? Is someone who lost a leg in a war and now has a cybernetic prosthetic less human? That hardly seems a valid argument. A stronger version claims that robots are put into the world for a reason by humans, while humans are not. Setting aside religious arguments about God putting humans on this earth for a reason, this might look more plausible, but I don’t believe it is valid either. In a way, it doesn’t matter for what reason something was put into the world; what matters are the capacities the thing has. For example, an AI machine might be built in order to work out complex mathematical formulations. But what if this autonomous robot creates goals and plans of its own and decides to become a painter? Is it now being a bad robot? Is it not functioning properly?

Aristotle believed that everything has a telos, a certain function, and that using a thing according to its appropriate function is the good way of using it. The question then is: who decides what the telos is? A knife is perfect for cutting, but can it not also be used to block a door? Sure, the maker of the knife never intended it to be used that way, but it still functions properly by holding the door. Things don’t have an inherent telos; a function only exists when there is a subject who decides what that function is. But if a subject can decide for itself what its function is, which human beings can do (and, for our purposes, advanced AI as well), then doesn’t the function depend on the subject deciding it? Thus, an AI that is able to decide its own function no longer has an inherent function, even though it was put into the world with the intention that it fulfil a certain one.

Another small example might shed more light on the matter. Back when most people were farmers, children were often had in order to help with the production of food; they were put on this earth as free labor. Is the function of such a child to be a farmer? The child was brought into the world to farm for the family, but the person also has ambitions and goals of their own. So we don’t say that farming is the inherent function of the child, but rather the reason why the parents brought the child into the world in the first place. The same goes for advanced AI in our thought experiment.

That was the theoretical analysis of the problem, but there is also a practical side to consider. Our premise was that we cannot distinguish between human beings and AI robots; epistemologically they are the same. The practical case for giving them human rights is that we cannot know whether we are dealing with a human being or an AI. Even if they did not have the same capacities as us, we should still grant them human rights because of our inability to tell them apart.

So, for example, if a human being has the right not to be enslaved and we cannot tell the difference between a human being and an AI, we should extend that right to AI as well. Imagine a factory owner who puts people to work for no pay. He may claim that his workers are AI machines, but they might be actual people working under oppression or for extremely low wages. Extending the rights to AI as well would act as a safeguard against such abuse.

We looked at a hypothetical claim in order to examine the consequences it might have. It is sometimes a lot of fun to present yourself with a hypothesis and then think through its ramifications. I say hypothesis because it is still a big ‘if’ whether we will ever be able to create AI with the same capacities as humans. But people like Elon Musk seem pretty sure we’ll eventually get there, so it doesn’t hurt to think about what the consequences might be.

By elenchusphilosophy

I'm a philosophy student in Belgium, trying to talk and write about ideas of all sorts.
