
Development of advanced AI: Good or Bad? Opinions about the topic in general?

Paarfam (1,558 posts, Nomad)

The title says it all. Just try not to go on for 10 pages in a 1 v. 1 argument; it's annoying.

  • 27 Replies
loloynage2 (4,206 posts, Peasant)

No, the title doesn't explain everything. Are you talking about robots? Then yes, you can't censor science.

Paarfam (1,558 posts, Nomad)

AI = Artificial Intelligence. It means developing fake "minds" for things that can't think, like machinery, equipment, and, yes, robots.

Joe96 (2,226 posts, Peasant)

According to an article I read in Time, the day computers have artificial intelligence equal to that of a human will be known as "The Singularity," and it is supposed to happen sometime between 2040 and 2050.

loloynage2 (4,206 posts, Peasant)

Joe, what you probably mean is that robots would have the same level of activity as a brain. Robots would be able to solve problems better, but that probably still wouldn't make computers actually "think".

darkrai097 (858 posts, Nomad)

Ever seen "I robot" It showed that the development of advanced technology could end in strategy.

Do you mean tragedy? As for the argument, no, I do not think it's bad. If we create something, we will have the ability to influence/change it.
Legion1350 (5,365 posts, Nomad)

I don't believe humanity can develop true Artificial Intelligence. If we can, I think it will take a very long time.

If we create something, we will have the ability to influence/change it.


You've never seen The Terminator, have you? Skynet absolutely devastated the human population.
deathopper (1,564 posts, Nomad)

Robots would be able to solve problems better, but that probably still wouldn't make computers actually "think".


Agreed. Robots need to be programmed, and that's the only way they can function. Thinking also involves feeling emotion, and we cannot program a robot to feel true love, jealousy, hatred, etc. The only way a robot can "think" is if we implant robotic parts in a human, but that would still make it a human.
loloynage2 (4,206 posts, Peasant)

Agreed. Robots need to be programmed, and that's the only way they can function. Thinking also involves feeling emotion, and we cannot program a robot to feel true love, jealousy, hatred, etc. The only way a robot can "think" is if we implant robotic parts in a human, but that would still make it a human.

Yes, but I still believe in AI. It's just a lot harder to make than it looks. AI is possible. Just look at the human brain, the perfect AI.
iMogwai (2,027 posts, Peasant)

You've never seen The Terminator, have you? Skynet absolutely devastated the human population.


And in Star Wars people could shoot lightning from their hands. My point is that movies aren't real.

Also, an AI would only be able to do what it was programmed for, unless it was programmed to alter its own code and thus learn. Even if it could learn, it would only be able to perform operations and make decisions that could exist as code, so all of its reasoning would be based on logic.
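
To make that distinction concrete, here is a rough Python sketch (hypothetical names, boiled down to one adjustable number): a fixed-rule program only follows its hard-coded logic, while a "learning" one merely updates a stored value between runs, so its decisions are still nothing but code.

# Sketch: a fixed rule vs. a rule that "learns" by adjusting one parameter.
# Both only ever execute the logic they were given.

def fixed_rule_agent(temperature):
    # Behavior is fully determined by the hard-coded threshold.
    return "cooling on" if temperature > 25.0 else "cooling off"

class LearningAgent:
    def __init__(self, threshold=25.0):
        self.threshold = threshold  # the only thing that ever changes

    def act(self, temperature):
        return "cooling on" if temperature > self.threshold else "cooling off"

    def learn(self, too_cold):
        # "Learning" here just nudges the threshold based on feedback;
        # the decision rule itself remains ordinary, logic-based code.
        self.threshold += 0.5 if too_cold else -0.5

agent = LearningAgent()
print(agent.act(26.0))      # cooling on
agent.learn(too_cold=True)
print(agent.threshold)      # 25.5 -- behavior shifted, but it is still just code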

I'm fairly certain that this would also mean that robots would not have emotions, would not want power and would not harm a living creature unless its programming told it to, and I'm pretty sure that whoever developed such an AI would be very, very, very specific about that particular part of the programming.

In other words, the robot would still strive to fulfill the purpose for which it was created, and unless people decided it was a good idea to build an army of robotic soldiers (which wouldn't surprise me, now that I think about it), there's absolutely no chance that they would turn to violence and murder.

So far, an advanced AI sounds like a good idea, but then there are some other aspects to consider.

What if someone bought a robot and reprogrammed it to be violent? Sounds possible. Sounds dangerous. Sounds like a really mean thing to do, but it also sounds like it could happen. In this case, an advanced AI would be a dangerous thing. But then again, so are today's guns. If advanced AI were forbidden, shouldn't guns be too? Personally, I think both guns and AIs would have to be very tightly controlled. They shouldn't just be sold at the gas station to anyone with the money to pay for them. Every AI should be tracked and registered.

So, I still think advanced AIs would be a good thing. Assuming what you are talking about is robots. Chances are you're talking about an advanced AI in some sort of computer, with no way to physically harm anyone. In that case, it'd be quite safe.

But then again... (Another movie reference incoming!)

In Resident Evil, there's this AI in the hive thingie where the scientists do their science. It's programmed not to let the virus (whose name I seem to have forgotten) spread outside the building. When someone releases the airborne virus and infects the entire building, the AI kills everyone inside to stop it from spreading. That would of course be the logical thing to do: fulfill your purpose in the only way possible. However, the logical thing to do and the right thing to do are not always the same. I imagine there'd be no way to give an AI emotions and empathy. In other words, an AI doesn't care; it only does what it's supposed to.

So, to summarize all of this:

I'd say yes to advanced AIs, but I'd also ask just how much power and responsibility you could trust them with.


Just look at the human brain, the perfect AI


The human brain is in no way artificial.
Legion1350 (5,365 posts, Nomad)

My point is that movies aren't real.


I know, but we can take a similar scenario. Suppose we put an A.I. inside a defense system. What if it decides humanity is a threat? It has control over every aspect of our defenses.
Paarfam (1,558 posts, Nomad)

I know, but we can take a similar scenario. Suppose we put an A.I. inside a defense system. What if it decides humanity is a threat? It has control over every aspect of our defenses.

@Legion1350
You're hitting on something very important. This is the only thing I could think of that would put humanity in danger because of artificial intelligence. Everything else I could think of related to the topic was positive.
Patrick2011 (12,319 posts, Treasurer)

I know, but we can take a similar scenario. Suppose we put an A.I. inside a defense system. What if it decides humanity is a threat? It has control over every aspect of our defenses.


If an AI is placed inside a defense system, it would have to be programmed in a specific way. AIs in defense systems are only dangerous if the programming is not specific enough.
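
To make that concrete, here is a rough Python sketch (an entirely hypothetical scenario, with made-up names) of what "specific enough" programming could look like: a hard-coded rule that is checked before the AI's own threat assessment ever runs, so no score can override it.

# Sketch: a fixed constraint evaluated before the AI's own judgment,
# so the assessment logic can never authorize what the rule forbids.

def assess_threat(contact):
    # Placeholder scoring; a real system would be far more elaborate.
    return 0.9 if contact.get("behavior") == "hostile" else 0.1

def authorize_engagement(contact):
    if contact.get("is_human", True):
        return False                      # hard rule, checked first
    return assess_threat(contact) > 0.8   # AI judgment applies only after it

print(authorize_engagement({"is_human": True, "behavior": "hostile"}))   # False
print(authorize_engagement({"is_human": False, "behavior": "hostile"}))  # True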
Paarfam (1,558 posts, Nomad)

@Patrick2011
What if a glitch in the system changes its coding? Sure, by then we'll probably have more advanced technology that keeps all machinery in check and more technology that keeps that in check, but what if there's a chain reaction?
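
As a rough illustration of the kind of "technology that keeps machinery in check," here is a minimal Python sketch (the expected hash is a placeholder, not a real value): the controller checks its own code against a known-good checksum before starting, so a corrupted copy refuses to run instead of behaving unpredictably.

# Sketch: verify the program's own file against a known-good checksum
# before doing anything, so a glitched/corrupted copy refuses to run.

import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0" * 64  # placeholder; a deployment would pin the real known-good hash

def code_is_intact(path, expected=EXPECTED_SHA256):
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected

if __name__ == "__main__":
    if not code_is_intact(__file__):
        raise SystemExit("integrity check failed: refusing to run")
    # ... normal control logic would go here ...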

iMogwai (2,027 posts, Peasant)

What if it decides humanity is a threat?


I'm pretty sure it would be programmed to defend humanity, not to defend itself. If that were the case, it wouldn't attack humanity. Sure, humanity is a threat to itself, but if the computer somehow came to that conclusion, it would then become a threat to humanity itself, defeating its own purpose, and hopefully it would be shut down and/or explode.

But yeah, I think AIs are good, but you need to have a whole crapload of safety procedures and backup plans and whatnot to make sure it doesn't go crazy, or to make sure it could be taken care of if it did, and I trust the engineers and codemonkeys to be able to take care of that. There's no way the general population and the politicians would let them create a fully functional AI, and give it control over our defense systems, without more safety thingies than you could count.
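
As one example of such a safety procedure, here is a minimal Python sketch (the timeout and action list are assumed, purely for illustration): a watchdog that sits outside the AI and stops it the moment it goes silent or asks for something outside its mandate.

# Sketch: an external watchdog that decides every cycle whether the AI
# may keep running; any violation means it gets shut down.

import time

ALLOWED_ACTIONS = {"monitor", "report", "standby"}
HEARTBEAT_TIMEOUT = 5.0  # seconds, assumed for illustration

def may_keep_running(last_heartbeat, requested_action):
    if time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT:
        return False                          # unresponsive: assume failure
    if requested_action not in ALLOWED_ACTIONS:
        return False                          # outside its mandate: stop it
    return True

print(may_keep_running(time.monotonic(), "monitor"))  # True
print(may_keep_running(time.monotonic(), "launch"))   # False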
d_dude (3,523 posts, Peasant)

Now, I know this is just fiction, but in the Halo universe there are AIs. There are 'dumb' AIs, which are programmed for a few tasks and have a small thinking capacity. Then there are 'smart' AIs. These are used in the spaceships and are truly 'smart': seconds are hours for them. They never stop absorbing information, and after around 7 years they have absorbed too much, go 'rampant', and basically go insane from the overload, so they get shut down.

Showing 1-15 of 27