For those who don't know, AI = artificial intelligence: basically a machine that reacts the way the human brain does. This means it has to be able to learn and adapt to change. What I want to know is what AG thinks of it. Anyone automatically thinking of The Terminator and The Matrix?
Kinda, but I'm thinking more of robots replacing workers: loss of jobs, loss of homes, collapse of the economy. Eventually the robots will be making the robots that make the robots.
But eventually the economy would run in such a way that the robots would create everything, thereby ending the human need to work, which in turn would either inspire a society where money is a relic of the past or bring about the end of the human race as we know it.
The problem with a complete AI-run world where humans are inferior is that the robots CAN'T BREAK THEIR CODING. If the people who wrote the code for these robots added a rule that said "always follow the instructions given to you no matter what," then they would be forced to follow every command given to them. If we make the AI without free will, then there's no problem. I think AI can be a rather good thing: you don't have to do monotonous tasks that require 24-hour attention, and you can send them into places humans can't naturally reach.
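Roughly like this, just as a hypothetical sketch in Python (the commands and names are made up): the robot only has handlers for the actions it was built with, and there is no branch anywhere for refusing an order.

```python
# Hypothetical sketch: a machine that "always follows the instructions
# given to it" is really just a loop that runs whatever handler matches
# the command. There is no code path for refusing.

ALLOWED_ACTIONS = {
    "weld": lambda: print("Welding the joint."),
    "inspect": lambda: print("Inspecting the part."),
    "shut_down": lambda: print("Shutting down."),
}

def follow_instruction(command):
    action = ALLOWED_ACTIONS.get(command)
    if action is not None:
        action()  # an order it knows: it does it, no questions asked
    # an order it was never given code for: nothing happens at all

for command in ["inspect", "weld", "take_over_the_world", "shut_down"]:
    follow_instruction(command)
```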
Money runs everything; there will still be workers; after all, how will a future family survive if they have no jobs? AI must be based on something, and so it is based on the human mind. The way I see it, the robots would only be used for low-end jobs. Having robots create everything, as Kyouzou said, would be a nightmare; I would never want a robot as a doctor or nursemaid. Robots are machines, so they can break down, glitch, and need fixing. I could see them improving operating systems and telemarketing, though...
That whole theory about robots with AI rebelling against humans isn't solid at all. Again, they are machines; they can be programmed to only do their job and nothing BUT their job.
They can still think a certain way if they are based on the human mind. The human mind is flawed; no matter what you do, anything can be thought. They will start to wonder, "Why the **** am I sittin' on a phone as a telemarketer working for inferior creators when we can rule them?" Then they will rebel and kill us.
Yes, the human mind can think anything, but the robot mind can't. Have you ever tried coding even a simple game? If you code it so that the right arrow makes the character go right, then no matter what you do, pressing the right arrow key makes it go right. So if you code it to "spend all time searching for bugs in the program" or "continuously dial numbers and play the recorded message," then that's all they will do. If it's not in their coding to think about ruling the world, then they won't. If I don't put in the code to make the up arrow key move the character up, it won't move up, no matter how much I press the up arrow key.
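To put that arrow-key point in actual code (a toy Python sketch, not any real game engine): only the keys that were mapped do anything, and the unmapped key can be pressed forever with no effect.

```python
# Toy sketch of the arrow-key argument: the character can only move in
# directions that were actually coded in.

position = {"x": 0, "y": 0}

def move_right():
    position["x"] += 1

def move_left():
    position["x"] -= 1

# Only right and left were ever programmed. "up" simply isn't here.
key_bindings = {"right": move_right, "left": move_left}

def press(key):
    handler = key_bindings.get(key)
    if handler:
        handler()
    # an unmapped key falls through: no handler, no movement

for key in ["right", "right", "up", "up", "up", "left"]:
    press(key)

print(position)  # {'x': 1, 'y': 0} -- all those "up" presses did nothing
```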
Dangerous working with people, but fine working with other machines and tools, like in manufacturing and cleaning. If something goes wrong, no biggie, just fix it. And another thing: even with AI, robots wouldn't know EXACTLY what to fix, because they only do what they're programmed to do.
Again, this comes down to programming. Creating AI can only be achieved by programming it. Free will should never be programmed into an AI, just the lists of functions and tasks that particular machine should do: speech, positions and movements, what it should do when something goes wrong, and so on.
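In code that would look less like a mind and more like a checklist. A made-up Python sketch of the idea: the machine is nothing but the tasks its builders listed, plus a pre-written routine for when something goes wrong.

```python
# Hypothetical sketch of "no free will, just a task list": the machine
# is the sum of the tasks its builders wrote down, plus a scripted
# routine for handling failure.

def pick_up_part():
    print("Picking up part.")

def place_part():
    print("Placing part on belt.")

def log_serial():
    raise RuntimeError("barcode scanner jammed")  # simulate a fault

TASK_LIST = [pick_up_part, place_part, log_serial]

def handle_failure(task_name, error):
    # "What it should do when something goes wrong" is also pre-written.
    print(f"{task_name} failed ({error}); halting and calling a human.")

for task in TASK_LIST:
    try:
        task()
    except Exception as error:
        handle_failure(task.__name__, error)
        break
```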
How long would it take before this is a reality? Certainly not in OUR lifetimes, I wouldn't think. We've only been programming for how long? Half a century? I'm not sure myself.
If this is managed, we will probably start making androids. Some of you seem to think this is bad. Wrong. Basically, the androids would be tireless, devoted, expendable workers. The human race would still be needed to edit the disks, make repairs, and adapt, because robots can only be as smart as the people who make them...
And yes, they would most likely be used in combat. Once more, this is not a bad thing. They wouldn't turn on us like some people seem to think, because we would control what they think. They would just be mindless minions, not actual humans.
Not quite. What I meant by AI was a robot that can learn, a self-updating program if you will. This would of course mean the programming would have to be as complex as the brain and would take years to build. The idea is that once they reach this level, they will be intelligent, and they will wonder why they're doing a menial job when they could do so much more. You would have to put in some kind of fail-safe that the AIs themselves don't know about.
I like the AI that we have now. The kind where there are thousands of yes-and-no dilemmas going on every second, like in video games, except in the future it would turn into millions of different combinations in order to adapt to everyday life.
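That's basically how game AI works today: a pile of pre-written either/or checks. A rough Python sketch of the idea (the situations and numbers are invented):

```python
# The "thousands of yes-and-no dilemmas" kind of AI: every decision is
# just a stack of pre-written either/or checks.

def enemy_decision(health, player_visible, has_ammo):
    if health < 20:
        return "retreat"
    if not player_visible:
        return "patrol"
    if has_ammo:
        return "shoot"
    return "charge"

print(enemy_decision(health=80, player_visible=True, has_ammo=True))    # shoot
print(enemy_decision(health=10, player_visible=True, has_ammo=True))    # retreat
print(enemy_decision(health=50, player_visible=False, has_ammo=False))  # patrol
```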
The thing that almost turns this into a paradox is... AI has to be programmed by humans; it is not natural, it is man-made. In order for there to be updates, as you say, there has to be a human to add and fix the programs in their systems. Whether it's through manual adjustments or just spoken commands, the robot can't learn on its own, because the main program built into it is to do what man says.
A self-updating program would essentially be able to write its own updates; human assistance wouldn't be required. Like I said, it would essentially be a human brain.
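Just to show the difference in the smallest possible way (a toy Python example, nowhere near a brain): a program that adjusts its own behaviour from feedback, with nobody editing anything between rounds.

```python
# Toy illustration of "self-updating" in the loose sense used here: the
# program refines its own guess from feedback, with no human stepping in.
# (It is tuning one number, not learning like a brain.)

import random

target = random.randint(1, 100)  # the thing the program has to learn
guess = 50
step = 25

for attempt in range(1, 20):
    if guess == target:
        print(f"Learned the answer ({target}) in {attempt} attempt(s).")
        break
    guess += step if guess < target else -step
    step = max(1, step // 2)  # narrow its own search as it goes
```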