I know this might be a little random... and unlike the normal stuff we see on the forums here... but when you have an idea you have an idea so I'll just lay it out there.
It's things like this that make me wonder whether we'll actually have I, Robot/Terminator-type androids running around in the near future. I mean, we already have robots that walk and keep themselves from falling. I wonder what one of those things would be like with an actual "cyber brain."
I'd like to discuss random things related to this issue. If you can build a sentient machine, then do you have the right to control it?
If they are sentient, then do you have the right to use them as war machines?
If you were the programmer, would it be ethical for you to program limitations into their being, like things they couldn't do (the robot laws from I, Robot)?
This just freaks me out and makes me think that a robot-human war is not impossible. With technology like that, we should rebuild human brains that have been destroyed first, then worry about robots. I think that we need to keep robots where they are now.
we should rebuild human brains that have been destroyed first, then worry about robots. I think that we need to keep robots where they are now.
If this technology is capable of interacting with the human brain, we could blur the line between human and robot, in a sort of Ghost In The Shell style. We could become the robots.
I think that we need to keep robots where they are now.
No, I don't think so. Robots can be of a lot of use in many practical situations not related to war, for example robots that help find survivors among the debris after a tsunami. If a robot can move around independently, we won't have to steer it ourselves anymore, and we'll have a better chance of finding people. You just don't need to make them completely human-like; just give them enough software for their purpose.
Very interesting topic from an ethical point of view. In my opinion, the creators can (legally) use the robot they built as a tool, since it was made as such, thus giving them full control over their creation, meaning they can destroy it, dismantle it, or recycle it... you name it. But where it gets tricky is: do they have the right? If the machine in question learns, explores, and over time develops speech, feelings, and other human aspects (fear, sadness, joy...), then it isn't much different from a human being (legally, parents can't destroy, dismantle, or recycle their children), so the creators in question don't have the right, morally or legally, to do so. I am not including in this situation limitations built into the program, because we don't have those kinds of limitations. As for the military part:
If an android with consciousness was forced to work for the military without a choice, then isn't that sort of an idiot move on the creators' part? If they wanted robotic slaves, why would they give them the ability to reason in the first place?
I agree with the Mexican: if you need a war machine, you don't give it human intellect; you give it simple, efficient commands, i.e. "kill enemies that fit this description: ......."
In my opinion, the creators can (legally) use the robot they built as a tool, since it was made as such, thus giving them full control over their creation, meaning they can destroy it, dismantle it, or recycle it... you name it.
Given the second half of your statement I don't see how you hold this opinion.
I make a distinction between a robot that can learn and explore (in short, one that is very much like us except for the physical part) and a robot that was programmed to "be" like us, which is not really the creation of a self-conscious being. So the second half applies only to the learning type; the programmed type is mentioned in the first part.