I know this might be a little random... and unlike the normal stuff we see on the forums here... but when you have an idea you have an idea so I'll just lay it out there.
It is things like this that make me wonder whether or not we'll actually have I, Robot/Terminator-type androids running around in the near future. I mean, we already have robots that walk and keep themselves from falling. I wonder what one of those things would be like with an actual "cyber brain."
I'd like to discuss random things related to this issue. If you can build a sentient machine, then do you have the right to control it?
If they are sentient, then do you have the right to use them as war machines?
If you were the programmer, then would it be ethical for you to program limitations into their being?... like things they couldn't do (I, Robot-style robot laws)
More strikingly perhaps it shows that phase-change materials can be used to make artificial neurons and synapses. This means that an artificial system made entirely from phase-change devices could potentially learn and process information in a similar way to our own brains.
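To make the "learns like our brains" claim a bit more concrete: the usual description of a phase-change neuron is that the cell integrates incoming pulses until it crosses a threshold and fires, like an integrate-and-fire neuron. Here's a toy software version of that analogy; just my own rough sketch, every parameter is invented, and in the real devices this behavior lives in the material's crystallization rather than in code:

```python
# Toy integrate-and-fire neuron, loosely analogous to how a phase-change
# cell is described as accumulating input pulses until it "fires".
# All parameters here are made up purely for illustration.

class ToyPhaseChangeNeuron:
    def __init__(self, threshold=1.0, leak=0.05):
        self.state = 0.0          # stands in for the cell's crystallization level
        self.threshold = threshold
        self.leak = leak

    def step(self, input_pulse):
        # accumulate the incoming pulse, with a small leak each step
        self.state = max(0.0, self.state * (1 - self.leak) + input_pulse)
        if self.state >= self.threshold:
            self.state = 0.0      # "reset" back toward the amorphous state
            return 1              # fire a spike
        return 0

neuron = ToyPhaseChangeNeuron()
print([neuron.step(0.3) for _ in range(20)])   # spikes roughly every few steps
```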
Holy crap... are you thinking what I'm thinking? They should try to find out if there is any possible medicinal/therapeutic use for this material. I mean, I've already heard of things like trying to reconnect severed spinal neurons with spider silk because of its properties, but if we now have this material... and if it lives up to its promise... why not research in that direction?
Back to your questions... well, it depends how you define 'sentient'. You could create robots that can think complex thoughts but are unable to feel pain or emotions, and then you don't have to ask yourself whether you're torturing them or not, right? Robots are already used in warfare; militaries will not hesitate to use sentient ones if that offers them any advantage. Is it right? Difficult question. I feel like it can't be considered bad for the robot (if anything, is it okay to use them against people?) since they have been built for that exact purpose.
I see robots more as a sort of extension of humans; they're mechanical elongations of our sphere of influence, our tools and weapons. I don't think that making them capable of complex thought gives them a soul (since I don't even believe in the existence of something like a soul, lol^^)
Well, what determines consciousness? A human body? An organic brain? If an android with consciousness was forced to work for the military without the choice, then isn't that sort of an idiot move on the creator's part? If they wanted robotic slaves, why would they give them the ability to reason in the first place? That, to me, makes no sense. With consciousness and intellectual reasoning come the same rights as any normal human.
Well.... a new parallel cyber system akin to our biological one would make them "cyber" humans would it not? ...especially if they learn and model themselves after us.
You can take the sensation of pain away from a person... and you can lobotomize someone so they are effectively emotionless... are they then that much different from humans?
I think the general consensus on indoctrinating a being into existing solely as a military tool or war machine was that it's bad. ...or that's what I hear when the subject of children being snatched from their parents and force-fed "kill kill kill" comes up.
Granted, we have people that fight wars... but if they're sentient and have transcended basic machinery (and maybe even us), then shouldn't they have the option of deciding what it is that they want to do with their "life?"
You can take the sensation of pain away from a person... and you can lobotomize someone so they are effectively emotionless... are they then that much different from humans?
Don't get me wrong, lobotomization is a horrible crime that has been committed in the past; you reduce a personality to a barely living vegetable. But if you create a robot that can do what you need it to do, without having to give it emotions or a sense of pain in the first place, you gain a sort of improved machine that is simply quicker/more powerful/more competent than the ones we have now.
You know, it's pretty similar to the argument Christians advance against abortion: if you don't want that baby, don't even make it, because killing it would be a crime. If you won't be able to use it, don't make that robot sentient, because taking parts of its "consciousness" away is a crime. Not saying it is a crime, I'm not sure about that personally; but it could be interpreted as such. Just food for thought.
Pain is a message sent by sensory receptors to the brain, which interprets it. What if said machine were given sensory receptors (pressure/temperature) all along its being? Would it then be a parallel to the sensation of pain?
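Just to make that parallel concrete in code terms: "pain" could be modelled as nothing more than sensor readings crossing a damage threshold. A rough sketch, where the sensor names and threshold values are entirely made up:

```python
# Hypothetical sketch: "pain" as sensor readings crossing a damage threshold.
# The parameter names and limits below are invented for illustration only.

def pain_signal(pressure_kpa, temperature_c,
                max_pressure=300.0, max_temp=60.0):
    """Return a 0..1 'pain' intensity from pressure/temperature readings."""
    pressure_excess = max(0.0, pressure_kpa - max_pressure) / max_pressure
    temp_excess = max(0.0, temperature_c - max_temp) / max_temp
    return min(1.0, pressure_excess + temp_excess)

print(pain_signal(pressure_kpa=350.0, temperature_c=75.0))   # mild "pain"
print(pain_signal(pressure_kpa=100.0, temperature_c=25.0))   # no "pain"
```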
Maybe a human replica would have better success as a sentient being. They wouldn't be scripted, and they would be able to know what they're doing, how they're going to/should do it, and would also be able to easily adapt to unforeseen circumstances. ...like spy replicas... or famous people look alikes.
I guess popular examples of what I'm getting at would be: the Geth from Mass Effect, Mr. Data from Star Trek, I, Robot, possibly the Terminators (not sure if they were "sentient" or just scripted for every occasion), and I'm sure there are more examples.
Well, with the way scientists are, they try to do things just for the sake of finding out whether they can. What happens when they make such a breakthrough, let the thing be sentient for some extended period of time, and are then like... "well, the research is done... now what do we do with you?... we don't have funding... and you can't stay with me..." Do we just turn it off? Do we destroy it? Do we just let it go out and about roaming the world looking for a home?
I know that is hypothetical, but I'm trying to tease at the concepts I'm trying to express in a different way.
I dunno if it would be a religious scientist trying to create life... but then again, it might just be.
If you can build a sentient machine, then do you have the right to control it?
I would say you have the right to control it as a parent would control their child. But it depends: does the sentient robot learn information, or does it already have it?
If they are sentient, then do you have the right to use them as war machines?
Only if they want to.
Would it be ethical to use them as tools?
Well, they aren't exactly part of the human species, and humans have been known to use everything as a tool, whether it's human or not. So I would say no, but it would probably happen anyway.
If you aren't happy with your sentient creation's progress, do you then have the right to dismantle it and start over?
To a point. If your creation tries to kill people for no good reason, then please kill it. But if you wanted a war machine and instead made an innocent child, then you have no right to destroy your creation just because it didn't come out right. It's like a mother killing her child because she wanted the opposite gender.
Do you have the right to destroy it if it isn't a hazard?
Maybe, if you're saving it from hurting itself then probably yes. If not, then probably no.
Those were the questions I thought I could answer. I also think that if the robot is made, then it should not be given access to the internet and should learn like regular people do. This helps lessen the chance of the robot trying to take over humanity.
I would say you have the right to control it as a parent would control their child. But it depends: does the sentient robot learn information, or does it already have it?
The source said that it would be possible for the machine to learn and process information like a human brain. I mean, it is hypothetical/theoretical. I don't know... but assuming that it was just a human with completely robotic parts, would your opinion change at all?
If they are sentient, then do you have the right to use them as war machines?
Would it be ethical to use them as tools?
If we can create sentient machines, then we should be able to create machines capable of performing these tasks without being sentient.
For sentient machines it would make sense to ask them what they want to do.
If you were the programmer, then would it be ethical for you to program limitations into their being?... like things they couldn't do (I, Robot-style robot laws)
Why wouldn't it be? Many species are born with instincts. Given the nature of this "birth" it wouldn't be much different.
Why wouldn't it be? Many species are born with instincts. Given the nature of this "birth" it wouldn't be much different.
I understand that part of it... like the basic instincts of "don't touch fire" or "don't jump off a cliff"... but I meant something more along the lines of mental barriers. Maybe I can express it better in a different way:
1. You cannot kill humans intentionally.
2. You must obey your creator/owner/humans in general.
3. You must protect your creator/owner/humans with your life.
I'm talking about statements that they couldn't override... yet be aware of... Like if you told a normal human to make you a sammich, they'd most likely make faces and flip you the bird... but imagine they then had to go make you that sammich regardless of whether they wanted to or not. (not talking about fear of being hit... just that you HAVE to go do it)
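In rough code terms, it might look something like the filter below; everything here (the Action class, HARD_RULES, the function names) is invented purely to illustrate the idea of a rule the robot can see but can't skip:

```python
# Rough illustration of "hard rules the agent is aware of but can't override".
# All names and structures here are hypothetical, invented for this example.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False

# Non-overridable constraints, checked before anything the agent "wants" to do.
HARD_RULES = {
    "never harm a human": lambda a: not a.harms_human,
}

def carry_out_order(order: Action, wants_to: bool) -> None:
    # The agent can read HARD_RULES, so it knows exactly what binds it...
    for rule, check in HARD_RULES.items():
        if not check(order):
            print(f"refused '{order.description}': violates '{rule}'")
            return
    # ...but its own preference never enters into it: the order runs anyway.
    mood = "happily" if wants_to else "while flipping you the bird"
    print(f"doing '{order.description}' {mood}")

carry_out_order(Action("make a sammich"), wants_to=False)                # must comply
carry_out_order(Action("attack the neighbor", harms_human=True), wants_to=True)  # blocked
```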
No, that would be terrible. Those features don't require the robot to be sentient. So basically you would have made a new form of life and then abused and tormented it. That would be terrible. If it's sentient, then it should have the rights other sentient beings have. Except access to computers; I'm still iffy on whether we can trust them that much.
I'm talking about statements that they couldn't override... yet be aware of... Like if you told a normal human to make you a sammich, they'd most likely make faces and flip you the bird... but imagine they then had to go make you that sammich regardless of whether they wanted to or not. (not talking about fear of being hit... just that you HAVE to go do it)
If we had such things programmed into it as a sort of instinct, it would likely be fine with performing the order. If we are trying to make something that can decide for itself, keeping at least the first and third ones wouldn't be a bad idea. The second would be a bit counterintuitive in making a robot that can think for itself.
Except access to computers; I'm still iffy on whether we can trust them that much.
That would likely confuse the hell out of them since they would essentially be computers.
That would likely confuse the hell out of them since they would essentially be computers.
So would it be like a person meeting a brain? I thought that if they were raised like regular people, they would understand the concept like a regular person. So does the robot know it's a robot and is proud, or does it want to be human?