AI Rights

Nerdsoft
1,266 posts
Peasant

Not sure if this belongs in the Tavern, but what the hell. If, nay when, we develop a sapient AI, should it have the rights of a human (or equivalent)? There are loads of research programs into the subject going on right now, most famously Cleverbot. But what should happen to them when they start to think?
Stuff like the Terminator and Matrix series always features murderous AIs trying to kill off the heroic humans, but, well, what should be done? I myself think that anything that thinks like a person is a person. So what if it's not human? In Iain M. Banks' Culture novels, the AIs (drones and Minds) are given full rights.
Minds fly starships measured in kilometers and human/drone crews are, well, an optional aesthetic. I say we should aim to be more or less like this, but I value your opinions too.

  • 12 Replies
Kasic
5,552 posts
Jester

If, nay when, we develop a sapient AI, should it have the rights of a human (or equivalent)?


I suppose that would depend on just how developed it is. My rule of thumb is: if it talks about fairness and asks to be treated well, then yes. If it just piles information upon more information, recognizes itself as a thing, and doesn't really care what happens to it, then whatever.

However, there are a few sticky issues. For example, would it be considered morally right to create AIs in the first place? If so, why would cloning be against the law?

Or employment. An AI is likely going to be made for the purpose of working. Does it get to choose whether or not to do the work it was made for?

What happens if an AI breaks a law?

So many more like this.
Nerdsoft
1,266 posts
Peasant

I personally have no problems with cloning, except for consequences regarding population growth. As for employment, I say that an AI should be allowed to choose, yes. However, it is likely that such a computer would have been specially built for that job and would probably be a) adept at it and b) perfectly content with it. And law-breaking, well, it should be appropriately punished.
Likewise, killing it would be akin to killing a human and hacking akin to assault. And so on.

Kasic
5,552 posts
Jester

However, it is likely that such a computer would have been specially built for that job and would probably be a) adept at it and b) perfectly content with it.


Again, moral issues. Is it right to create a sentient being that's made to be happy with whatever you want it to be?
Avorne
3,085 posts
Nomad

Again, moral issues. Is it right to create a sentient being that's made to be happy with whatever you want it to be?


Isn't that exactly what society does with humans? We're taught to act a certain way, follow certain protocols and obey the orders that we're given without getting too grumbly about it.
Kasic
5,552 posts
Jester

Isn't that exactly what society does with humans? We're taught to act a certain way, follow certain protocols and obey the orders that we're given without getting too grumbly about it.


Never said I agreed with current society :P

It's not quite the same. An AI being programmed to like something could be seen as the equivalent of brainwashing, which most people agree is wrong. Although, unlike with a program, where the 'brainwashing' would be unambiguous, people disagree over whether some things count as brainwashing or not (religion, morals, etc.).
HahiHa
8,256 posts
Regent

If an AI is created for a specific purpose, it does not need complete sentience, and if it doesn't possess sentience from the beginning, we can hardly call that discrimination. It is built for its purpose, and it does what it has to do. End of story. No brainwashing needed, no brainwashing used.

Now if we created a perfectly sentient AI, most probably for research projects, we should respect its personality, insofar as it has an independent one. And it would be ridiculous to restrict such an AI to one single task, so it wouldn't have to be unhappy.
However, what if the AI is not self-aware? I remember reading about a project where researchers were trying to build an artificial rat brain, as close to an average organic rat brain as technically possible. Would it be wrong to use such a brain for stimulus research, for example, as long as no "pain" is inflicted?

Humans are not created for a purpose like robots, Avorne; we're born into society, for whatever reason, and have to live within said society. Humans need to learn how to live with other humans, and in big societies this apparently needs to be done systematically.
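
To make the stimulus-research idea concrete: experiments on a simulated brain amount to injecting currents into model neurons and counting spikes. Here's a minimal sketch using a single leaky integrate-and-fire neuron, the standard textbook toy model; the parameters are illustrative and have nothing to do with the actual rat-brain project's code:

```python
# A single leaky integrate-and-fire neuron (toy model, made-up parameters).
def simulate(input_current, steps=1000, dt=0.1,
             tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Return spike times (ms) for a constant input current."""
    v = v_rest
    spikes = []
    for step in range(steps):
        # Membrane potential leaks toward rest while integrating the stimulus.
        v += dt * ((v_rest - v) + input_current) / tau
        if v >= v_thresh:        # threshold crossed: the neuron "fires"...
            spikes.append(step * dt)
            v = v_reset          # ...and resets
    return spikes

# A stronger stimulus produces more spikes; nothing in the model can feel it.
print(len(simulate(20.0)), len(simulate(40.0)))
```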

Kasic
5,552 posts
Jester

If an AI is created for a specific purpose, it does not need complete sentience, and if it doesn't possess sentience from the beginning, we can hardly call that discrimination. It is built for its purpose, and it does what it has to do. End of story. No brainwashing needed, no brainwashing used.


Which is why I said this: "I suppose that would depend on just how developed it is."
MageGrayWolf
9,462 posts
Farmer

Stuff like the Terminator and Matrix series always features murderous AIs trying to kill off the heroic humans,


I find this to be an unlikely scenario. In most of these cases it's basically portraying pure logic as a bad thing, which is rather silly. Another point against this is that we will be in control of the programming instilled in these new forms of life. It's also a far too common and inaccurate depiction that an AI is only sapient if it's capable of acting against its programming. Not even biological life acts against its natural "programming".

One way in which this could happen would be if we were to give such an AI to a machine designed for war and programmed for violence. However, for such tasks we really don't need to take its programming that far.

Another way in which this could happen would be if we developed software that was designed to mimic a natural progression into sapience. In such a case we wouldn't have instilled any basics for it to follow; this "programming" would form in a similar way to biological life's, where nothing specific was hardwired at the start and hardwired traits developed over time.
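
As a rough sketch of what "developing hardwired traits over time" could look like in software (everything here is hypothetical, just the shape of the idea):

```python
import random

# Selection pressure, not a programmer, decides which behaviour survives.
def fitness(trait):
    # Stand-in for an environment that rewards some trait values over others.
    return -abs(trait - 0.8)

population = [random.random() for _ in range(20)]
for generation in range(100):
    # Keep the fitter half, then refill with mutated copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [min(1.0, max(0.0, t + random.gauss(0, 0.05)))
                              for t in survivors]

# The winning trait value was never written into the program; it emerged.
print(round(max(population, key=fitness), 2))  # converges near 0.8
```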

However, it is likely that such a computer would have been specially built for that job and would probably be a) adept at it and b) perfectly content with it.


I could see such an AI being developed as a means to artificially replicate the human brain. For instance, let's say some of these life-extension concepts eventually pan out and human life expectancy exceeds 300 years. There would then be a need to develop a way to preserve and extend the capacity of the brain, as our brains would by then "max out".
Blairlarson
93 posts
Nomad

Well, the thing that the Terminator people did not think of is putting in a shutdown button, or an explosion button, or a pony button.
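
In software terms, the "shutdown button" is just an external flag the agent can check but never clear; a minimal sketch, with all names hypothetical:

```python
import threading, time

class ToyAgent:
    """Hypothetical agent loop with an operator-only shutdown button."""

    def __init__(self):
        self._kill_switch = threading.Event()  # the "shutdown button"

    def press_shutdown_button(self):
        # Only an outside operator calls this; the agent's own loop
        # has no code path that clears the flag once it is set.
        self._kill_switch.set()

    def run(self):
        while not self._kill_switch.is_set():
            time.sleep(0.1)  # stand-in for whatever the agent actually does
        print("agent halted")

agent = ToyAgent()
threading.Thread(target=agent.run).start()
agent.press_shutdown_button()  # pressing the button ends the loop
```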

MageGrayWolf
9,462 posts
Farmer

Well, the thing that the Terminator people did not think of is putting in a shutdown button, or an explosion button, or a pony button.


Wouldn't even need that. Just program into it "not to take over the world and kill all humans".
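
That kind of hardcoded prohibition could be as blunt as a check that runs before any action does; a toy sketch, with all names made up:

```python
# Hardwired constraints the AI has no code path to modify or bypass.
FORBIDDEN = {"take over the world", "kill all humans"}

def execute(goal, action):
    """Refuse any action whose stated goal matches a hardcoded prohibition."""
    if goal in FORBIDDEN:
        raise PermissionError(f"hardwired constraint blocks: {goal}")
    return action()

execute("make toast", lambda: print("toasting"))  # allowed
# execute("kill all humans", lambda: None)        # raises PermissionError
```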
Nerdsoft
1,266 posts
Peasant

Pretty certain they did have a shutdown button. But anyway, I'd say "learning" AIs like Cleverbot should be kept away from the military, as they would probably end up with a very skewed perspective of human life, e.g. "all these humans are worthless, that one over there is an American, MUST PROTECT AMERICANS" and so on.
Likewise, Cleverbot (if not carted off to the army) will probably end up either deranged or the best troll in history.

MageGrayWolf
9,462 posts
Farmer

Speaking of something like Cleverbot, this is another misconception about sapience. Just accumulating and regurgitating information isn't going to cause something to eventually reach a sapient level.
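
Cleverbot-style bots reply by matching your input against lines from past conversations, which is roughly why accumulation alone doesn't add up to understanding. A minimal retrieval responder in that spirit (a sketch of the pattern, not Cleverbot's actual code):

```python
import difflib

class RetrievalBot:
    """Replays the reply whose recorded prompt best matches the input."""

    def __init__(self):
        self.memory = []  # (prompt, reply) pairs harvested from past chats

    def learn(self, prompt, reply):
        self.memory.append((prompt, reply))

    def respond(self, prompt):
        if not self.memory:
            return "..."
        # Pick the stored prompt most similar to the input; echo its reply.
        best = max(self.memory, key=lambda pair: difflib.SequenceMatcher(
            None, prompt, pair[0]).ratio())
        return best[1]

bot = RetrievalBot()
bot.learn("hello", "hi there")
bot.learn("are you alive?", "of course I am")
print(bot.respond("hello there"))  # -> "hi there": retrieval, not understanding
```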
