Forums › World Events, Politics, Religion, Etc.

AI Rights

Posted Feb 14, '13 at 12:48pm

Nerdsoft

1,066 posts

Not sure if this belongs in the Tavern, but what the hell. If, nay when, we develop a sapient AI, should it have the rights of a human (or equivalent)? There are loads of research programs into the subject going on right now, most famously Cleverbot. But what should happen to them when they start to think?
Stuff like the Terminator and Matrix series always features murderous AIs trying to kill off the heroic humans, but, well, what should be done? I myself think that anything that thinks like a person is a person. So what if it's not human? In Iain M. Banks' Culture novels, the AIs (drones and Minds) are given full rights.
Minds fly starships measured in kilometers, and human/drone crews are, well, an optional aesthetic. I say we should aim to be more or less like this, but I value your opinions too.

 

Posted Feb 14, '13 at 12:56pm

Kasic

5,591 posts

If, nay when, we develop a sapient AI, should it have the rights of a human (or equivalent)?

I suppose that would depend on just how developed it is. My rule of thumb is: if it talks about fairness and asks to be treated well, then yes. If it just compiles information upon more information and recognizes itself as a thing but doesn't really care what happens to it, then whatever.

However, there are a few sticky issues. For example, would it be considered morally right to create AIs in the first place? If so, why would cloning be against the law?

Or employment. An AI is likely going to be made for the purpose of working. Do they get to choose whether or not to do the work they were made for?

What happens if an AI breaks a law?

So many more like this.

 

Posted Feb 14, '13 at 1:10pm

Nerdsoft

1,066 posts

I personally have no problems with cloning, except for the consequences regarding population growth. As for employment, I say that an AI should be allowed to choose, yes. However, it is likely that such a computer would have been specially built for that job and would probably be a) adept at it and b) perfectly content with it. And law-breaking, well, should be appropriately punished.
Likewise, killing an AI would be akin to killing a human, and hacking it akin to assault. And so on.

 

Posted Feb 14, '13 at 1:24pm

Kasic

5,591 posts

However, it is likely that such a computer would have been specially built for that job and would probably be a) adept at it and b) perfectly content with it.

Again, moral issues. Is it right to create a sentient being to be happy with what you want it to be?

 

Posted Feb 14, '13 at 1:48pm

Avorne

3,224 posts

Again, moral issues. Is it right to create a sentient being to be happy with what you want it to be?

Isn't that exactly what society does with humans? We're taught to act a certain way, follow certain protocols, and obey the orders we're given without getting too grumbly about it.

 

Posted Feb 14, '13 at 2:00pm

Kasic

5,591 posts

Isn't that exactly what society does with humans? We're taught to act a certain way, follow certain protocols and obey the orders that we're given without getting too grumbly about it.

Never said I agreed with current society :P

It's not quite the same. An AI being programmed to like something could be seen as the equivalent of brainwashing, which most people agree is wrong. Although, unlike with how a program would be "brainwashed," people disagree about whether some things count as brainwashing at all (religion, morals, etc.).

 

Posted Feb 14, '13 at 4:48pm

HahiHa

5,082 posts

Knight

If an AI is created for a specific purpose, it does not need complete sentience, and if it doesn't possess sentience from the beginning, we can hardly call that discrimination. It is built for its purpose, and it does what it has to do. End of story. No brainwashing needed, no brainwashing used.

Now, if we created a perfectly sentient AI, most probably for research purposes, we should respect its personality, insofar as it has an independent one. And it would be ridiculous to restrict such an AI to one single task, so it won't have to be unhappy.
However, what if the AI is not self-aware? I remember reading about a project in which researchers were trying to build an artificial rat brain, as close to an average organic rat brain as technically possible. Would it be wrong to use such a brain for, say, stimulus research, as long as no "pain" is inflicted?

Humans are not created for a purpose the way robots are, Avorne; we're born into society, for whatever reason, and have to live within that society. Humans need to learn how to live with other humans, and in big societies this apparently needs to be done systematically.

 

Posted Feb 14, '13 at 4:52pm

Kasic

5,591 posts

If an AI is created for a specific purpose, it does not need complete sentience, and if it doesn't possess sentience from the beginning, we can hardly call that discrimination. It is built for its purpose, and it does what it has to do. End of story. No brainwashing needed, no brainwashing used.

Which is why I said this: "I suppose that would depend on just how developed it is."

 

Posted Feb 14, '13 at 5:45pm

MageGrayWolf

9,691 posts

Knight

Stuff like the Terminator and Matrix series always features murderous AIs trying to kill off the heroic humans,

I find this to be an unlikely scenario. In most of these cases it's basically portraying pure logic as a bad thing, which is rather silly. Another point against this is that we will be in control of the programming instilled in these new forms of life. It's again far too often an inaccurate depiction that an AI is only sapient if it's capable of acting against its programming. Not even biological life acts against its natural "programming."

One way this could happen is if we were to give such an AI to a machine designed for war and programmed for violence. However, for such tasks we really don't need to take its programming that far.

Another way this could happen would be if we developed software designed to mimic a natural progression into sapience. In such a case we wouldn't have instilled any basics for it to follow; its "programming" would form much as it does in biological life, starting without anything specific hardwired and developing hardwired traits over time.

However, it is likely that such a computer would have been specially built for that job and would probably be a) adept at it and b) perfectly content with it.

I could see such an AI being developed as a means to artificially replicate the human brain. For instance, let's say some of these life-extension concepts eventually pan out and human life expectancy exceeds 300 years. There would then be a need to develop a way to preserve and extend the capacity of the brain, as our brains would by then "max out."

 

Posted Feb 14, '13 at 8:14pm

Blairlarson

93 posts

Well, the thing the Terminator people did not think of is to put in a shutdown button, or an explosion button, or a pony button.

 