I'm sure everyone here has seen "Terminator" or maybe the movie "AI". My question is: what are your thoughts on AI life? Mine: I dislike it, and I believe we shouldn't even explore the possibility of it. Robots should be controlled by humans, not themselves.
It doesn't matter... machines cannot think for themselves. This is one of the first principles that we must remember when working with AI. I've seen where AI is... I'm not worried about it taking over.
@It doesn't matter... machines cannot think for themselves.
I'm no computer scientist, but I think I ought to plop a big fat "yet" at the end of your sentence. I would also like to link to another in my series of "Small ROFLs".
Haha wow, just another wow. Although it is an interesting idea that robots could take over the world, it is highly unlikely, especially in today's era. Maybe in like 500 years, if our world is even around at that time, but until such a day comes I would not worry too much.
This is a great question. If consciousness can be reductively explained, then we can conceivably emulate consciousness and thought in something non-organic (well, we could theoretically grow tissue, so I guess I should just say something man-made). I really think we just come down to "nuts and bolts": just complex chemical and electrical reactions occurring in the brain. If consciousness could be emulated, should these new entities have rights like us?
Anyway, this is totally unrealistic, at least for right now. Many of you have said that robots can't think for themselves, which is totally true. I'm not sure why we would want to explore that anyway.
There is a robot that was built but not programmed to walk. It moved its 12 actuators on six legs around, analyzing its own movement, and taught itself to walk. They then took off a leg; it adapted and could still move around. It could also get around obstacles. Moegreche posted an interesting article about a colony of different-colored robots. Read that thread. I think it is possible for a robot to surpass the consciousness level of a segmented worm. Anyway, I think we could efficiently shut down all robots with EMPs if they get out of hand.
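Just to make the "taught itself to walk" idea concrete, here's a rough sketch of the kind of trial-and-error loop such a robot could run. This is not the actual robot's code; the gait representation and the `simulate_distance` measurement are my own assumptions (on real hardware the distance would come from sensors, not a formula). The point is only that "keep whatever change makes you move farther" is enough to learn a gait, and the same loop keeps working if a leg stops responding.

```python
import random

# Hypothetical stand-in: how far the robot travels with a given gait.
# On a real robot this would be measured (odometry, motion capture),
# not computed from a formula.
def simulate_distance(gait):
    # Toy fitness: pretend some unknown amplitude per actuator works best.
    target = [0.5] * len(gait)
    return -sum((g - t) ** 2 for g, t in zip(gait, target))

def learn_gait(num_actuators=12, iterations=200):
    # Start with a random amplitude for each of the 12 actuators.
    gait = [random.uniform(0.0, 1.0) for _ in range(num_actuators)]
    best = simulate_distance(gait)
    for _ in range(iterations):
        # Randomly perturb one actuator's setting (trial)...
        candidate = gait[:]
        i = random.randrange(num_actuators)
        candidate[i] += random.gauss(0.0, 0.1)
        score = simulate_distance(candidate)
        # ...and keep the change only if the robot moved farther (error).
        if score > best:
            gait, best = candidate, score
    return gait

if __name__ == "__main__":
    learned = learn_gait()
    print("Learned gait amplitudes:", [round(g, 2) for g in learned])
```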
If consciousness could be emulated, should these new entities have rights like us?
You know, strictly speaking, nobody has any rights :P /cryptic
As for this "thinking for ourselves": let us for a minute entertain the possibility that the reductive model is an accurate one, which, as Moe says, would give rise to the possibility that we can replicate it.
First, what exactly does "thinking for yourself" mean? The above suggests that "thinking for yourself" is a property that can be given to you, i.e. that your consciousness, for everything it does to your thinking, is something that convinces you to think. Whatever it is, we tend to define AI based on human cognitions.
As such, from this view, you could possibly give AI 'consciousness' by giving AI 'the goal of thinking for themselves'. That is to say, we can give them a set of criteria for 'survival and advancement' that relates to fundamental behaviors. But just how similar would they be? The kind of mechanical, crafted AI unit we envisage has different properties to humans. If we really want to create something that is recognised as human, we need to consider everything that is relevant to the development of human behavior and consciousness.
To summarise all that into a riddle: the map is not the territory, and shortcuts are an entirely different route.