Forums > WEPR > AI life. Good? Bad?

Strategy_guy
290 posts
Nomad

I'm sure everyone here has seen "Terminator" or maybe the movie "AI". My question is: what are your thoughts on AI life? Mine: I dislike it, and I believe we shouldn't even explore the possibility of it. Robots should be controlled by humans, not by themselves.

  • 27 Replies
Megamickel
902 posts
Peasant

It doesn't matter... machines cannot think for themselves. This is one of the first principles that we must remember when working with AI. I've seen where AI is... I'm not worried about it taking over.

Pfhortipfhy
70 posts
Nomad

@It doesn't matter... machines cannot think for themselves.

I'm no computer scientist, but I think I ought to plop a big fat "yet" at the end of your sentence. I would also like to link to another in my series of "Small ROFLs":

http://www.a-i.com/

Sting
266 posts
Peasant

Haha wow, just another wow. Although it is an interesting idea that robots could take over the world, it is highly unlikely, especially in today's era. Maybe in like 500 years, if our world is even around at that time, but until such a day comes I would not worry too much.

Moegreche
3,826 posts
Duke

This is a great question. If consciousness can be reductively explained, then we could conceivably emulate consciousness and thought in something non-organic (well, we could theoretically grow tissue, so I guess I should just say something man-made).
I really think we do just come down to "nuts and bolts": complex chemical and electrical reactions occurring in the brain.
If consciousness could be emulated, should these new entities have rights like us?

Pfhortipfhy
70 posts
Nomad

Wrong topic, buddy.

XCoheedX
922 posts
Scribe

^ ya I don't know where he got that either lol.

Anyway, this is totally unrealistic, at least for right now. Many of you have said that robots can't think for themselves, which is totally true. I'm not sure why we would want to explore that anyway.

LordBob
517 posts
Nomad

Well, my big concern is this vicious cycle:

Human makes AI robot

Human Happy

Robot gets virus

Robot either blows up or goes on a rampage

Britney Spears makes a new album

Human makes robot to destroy other one

And it repeats.

thegrim23
172 posts
Nomad

Until robots can think for themselves or get infected by a computer virus that tells them to kill us, we shouldn't worry about this too much.

chiliad_nodi
637 posts
Peasant

There is a robot that was built but not programmed to walk. It moved its 12 actuators on six legs, analyzing its own movement, and taught itself to walk. They then took off a leg; it adapted and could still move around, and it could also get around obstacles. Moegreche posted an interesting article about a colony of different-colored robots. Read that thread. I think it is possible for a robot to surpass the consciousness level of a segmented worm.
Anyway, I think we could efficiently shut down all robots with EMPs if they get out of hand.
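
To give a flavor of how a machine could "teach itself" to move, here is a toy trial-and-error sketch in Python. To be clear, this is not the algorithm that actual robot used (its approach was more sophisticated), and the gait parameters and the distance measure below are made up for illustration; it just shows the basic keep-what-works loop.

# Toy sketch of trial-and-error gait learning (illustrative only).
# Each of the 12 actuators gets an (amplitude, phase) pair; we keep
# mutating the current best gait and keep whichever version "travels"
# farther. On real hardware, distance_travelled() would run the gait
# for a few seconds and measure actual displacement.
import random

NUM_ACTUATORS = 12

def random_gait():
    # One (amplitude, phase) pair per actuator.
    return [(random.uniform(0.0, 1.0), random.uniform(0.0, 6.28))
            for _ in range(NUM_ACTUATORS)]

def mutate(gait, step=0.1):
    # Nudge each actuator's parameters a little, keeping amplitudes in [0, 1].
    return [(min(1.0, max(0.0, a + random.uniform(-step, step))),
             p + random.uniform(-step, step))
            for a, p in gait]

def distance_travelled(gait):
    # Placeholder fitness: stands in for measuring how far the robot moved.
    return sum(a for a, _ in gait)

best = random_gait()
best_score = distance_travelled(best)

for trial in range(1000):
    candidate = mutate(best)
    score = distance_travelled(candidate)
    if score > best_score:  # keep only improvements
        best, best_score = candidate, score

print("best simulated distance:", round(best_score, 2))

If you "cut off a leg" (say, zero out two of the actuators before each trial), the same loop just keeps searching and settles on a different gait, which is roughly what adapting means here.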

garifu
145 posts
Shepherd

If you don't want a robocalypse, then never give a robot a weapon. And don't give it agility (maybe a slow-moving joint apparatus?)

OK, that's all fantasy, but seriously, AI has the potential to be beneficial as well, so let's not can it altogether just yet...

I hope someday we can create nanotechnology that could be used to repair atherosclerotic vasculature. Now THAT would be sweet.

Strop
10,816 posts
Bard

If consciousness could be emulated, should these new entities have rights like us?

You know, strictly speaking, nobody has any rights :P /cryptic

As for this "thinking for yourself", let us for a minute entertain the possibility that the reductive model is an accurate one, which, as Moe says, would give rise to the possibility that we could replicate it.

First, what exactly does "thinking for yourself" mean? The above suggests that "thinking for yourself" is a property that can be given to you, i.e. that your consciousness, for everything it does to your thinking, is something that makes you think. Whatever it is, we tend to define AI based on human cognition.

As such, from this view, you could possibly give AI 'consciousness' by giving AI 'the goal of thinking for themselves'. That is to say, we could give them a set of criteria for 'survival and advancement' that relate to fundamental behaviors. But just how similar would they be? The kind of mechanical, crafted AI unit we envisage has different properties from humans. If we really want to create something that is recognised as human, we need to consider everything that is relevant to the development of human behavior and consciousness.

To summarise all that into a riddle: the map is not the territory, and shortcuts are an entirely different route.

Tony
35 posts
Nomad

Terminator? AI? Has somebody been watching too much Hollywood?

Strop
10,816 posts
Bard

Most of us watch too many Hollywood flicks if you ask me!

...you forgot The Matrix.

Kurgle
163 posts
Nomad

What if we got far enough to make AI think for itself and it came upon religion? Do you think it would follow one or not?

Strop
10,816 posts
Bard

To answer that question, one would have to answer the question of what drives religion.

And that is a topic of significant controversy!

Showing 1-15 of 27