You've never seen The Terminator, have you? Skynet absolutely devastated the human population.
And in Star Wars people could shoot lightning from their hands. My point is that movies aren't real.
Also, an AI would only be able to do what it was programmed to do, unless it was programmed to alter its own code and thus learn. Even if it could learn, it would only be able to perform operations and make decisions that can be expressed as code, so all of its reasoning would be based on logic.
I'm fairly certain that this would also mean that robots would not have emotions, would not want power and would not harm a living creature unless their programming told them to, and I'm pretty sure that whoever developed such an AI would be very, very, very specific about that particular part of the programming (I'll sketch what I mean below).
In other words, the robot would still strive to fulfill the purpose for which it was created, and unless people decided it was a good idea to build an army of robotic soldiers (which wouldn't surprise me, now that I think about it), there's absolutely no chance that robots would turn to violence and murder.
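Just to make "decisions that can be expressed as code" concrete, here's a toy Python sketch. All the names (`never_harm`, `choose_action`, and so on) are made up by me for illustration; a real AI would obviously be vastly more complicated, but every decision would still bottom out in logic like this:

```python
# Toy sketch of a rule-based robot "mind". Everything here is invented
# for illustration -- the point is just that the robot's behavior is
# nothing more than the conditions its programmer wrote down.

def never_harm(action):
    # The hard-coded safety rule the developer would be very,
    # very, very specific about.
    return "harm" not in action

def choose_action(purpose, possible_actions):
    # The robot picks the first action that serves its purpose
    # AND passes the safety rule. No emotions, no lust for power --
    # only logic.
    for action in possible_actions:
        if purpose in action and never_harm(action):
            return action
    return "do nothing"

print(choose_action("clean", ["harm intruder", "clean floor"]))
# -> "clean floor"
```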
So far, an advanced AI sounds like a good idea, but then there are some other aspects to consider.
What if someone bought a robot and reprogrammed it to be violent? Sounds possible. Sounds dangerous. Sounds like a really mean thing to do, but it also sounds like it could happen. In that case, an advanced AI would be a dangerous thing. But then again, so are today's guns. If advanced AI were forbidden, shouldn't guns be too? Personally, I think both guns and AIs would have to be very tightly controlled. They shouldn't just be sold at the gas station to anyone with the money to pay for them. Every AI should be tracked and registered.
So, I still think advanced AIs would be a good thing, assuming what you're talking about is robots. Chances are you're actually talking about an advanced AI in some sort of computer, with no way to physically harm anyone. In that case, it'd be quite safe.
But then again... (Another movie reference incoming!)
In Resident Evil, there's this AI in the hive thingie where the scientists do their science. It's programmed not to let the virus (whose name I've forgotten) spread outside the building. When someone releases the airborne virus and infects the entire building, the AI kills everyone inside to stop it from spreading. That would of course be the logical thing to do: fulfill your purpose in the only way possible. However, the logical thing to do and the right thing to do are not always the same. I imagine there'd be no way to give an AI emotions and empathy. In other words, an AI doesn't care; it only does what it's supposed to.
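To show why "logical" and "right" come apart, here's another toy sketch of how that hive AI might reason. The names and logic are entirely my invention (I have no idea how the movie's AI is supposed to work); the point is that nothing in its objective mentions the people inside:

```python
# Toy sketch of the hive AI's dilemma. Its ONLY goal is containment;
# the code says nothing at all about the humans in the building.

def containment_actions(virus_is_airborne, exits_sealable):
    if not virus_is_airborne:
        return ["quarantine the lab"]
    if exits_sealable:
        # Sealing the building traps (and kills) everyone inside,
        # but the objective "virus must not spread" is satisfied.
        return ["seal the building", "neutralize all carriers"]
    return ["alert the surface"]  # containment has already failed

# The outbreak scenario from the movie:
print(containment_actions(virus_is_airborne=True, exits_sealable=True))
# -> ['seal the building', 'neutralize all carriers']
# Perfectly logical given its purpose. Not remotely the right thing.
```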
So, to summarize all of this:
I'd say yes to advanced AIs, but I'd like to ask just how much power and responsibility you could trust them with. Just look at the human brain, the perfect AI.
The human brain is in no way artificial.