Forums > WEPR > AI Barriers

Moegreche
offline
Moegreche
3,826 posts
Duke

I got to thinking about this during the stem cell thread. There are 2 main barriers to achieving the goal of simulating human intelligence that I would like to offer for consideration. One is a technological barrier and the other a logical one (although the logical one might be a bit hard to follow). I'd like to get some thoughts on these possible limits on computation. There are some important points I need to make, though, so if you don't have time to read this post, please don't respond.

Background:
The nature of artificial intelligence is much more subtle than the movies make it seem. We're not talking about robots that can think for themselves and have emotions. I understand that seems like what artificial intelligence is, but in order for a machine to reach that kind of capacity, it must first be able to reason on a very fundamental level, like a human. I am taking this condition for granted because I think it is deductively certain, but if anyone wants to take up an argument against it, please feel free.

P1) Any machine capable of any type of artificial intelligence must be capable of fundamental human reasoning.

Technological Barrier:
There are 2 barriers I identify here: the decoding and the encoding. Decoding refers to the process of brain mapping and somehow storing that functional information. So will we ever develop the mapping technology, and a method of recording and storing that information, that will make it useful? Encoding is the process of taking that information (if we can store it) and making a "brain" out of it: some actual piece of hardware (be it organic or inorganic) that will be able to emulate brain function on even the most basic level. Can such a thing be made?

P2) It is technologically impossible to decode the brain's reasoning abilities in such a way that makes it useful in emulating its function.

P3) It is technologically impossible to design hardware that can perform the function of the brain.

Logical Barrier:
This is technical and might be difficult to follow. I find this facet of the problem more intriguing because if we cannot even axiomatize human thinking, then the technological problem becomes moot (I hope this point is obvious). This is actually a philosophical consequence of Godel's First Incompleteness Theorem. It is also linked with the halting problem, if you know what that is. In order to avoid a very, very long and complicated proof, I will make an analogy. This theorem basically states that any consistent formal system powerful enough to express basic arithmetic is necessarily incomplete. This is because we can form a self-referential statement in such a system that creates a contradiction. This is similar to the liar's paradox, which is demonstrated by the statement "This sentence is false." If the sentence is indeed false, then the statement is actually true. So it's true if and only if it's false, and it's false if and only if it's true. When we see a proposition such as this, we can make sense of it, though. But if a computer came across this statement, there's no way we can give it instructions on what to do in this case.
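
To make the point a bit more concrete, here is a minimal sketch in Python (the function name and setup are purely hypothetical, not anyone's actual proposal) of what happens when a machine follows the liar sentence's definition literally, with the sentence's truth value defined in terms of itself:

```python
# Hypothetical sketch: evaluate "this sentence is false" by following the
# definition literally. The sentence's truth value is defined as the negation
# of its own truth value, so the evaluation never bottoms out.

def liar() -> bool:
    # "This sentence is false": the sentence is true exactly when it is false.
    return not liar()

try:
    liar()
except RecursionError:
    # A human sees the paradox and stops; the machine just keeps unwinding
    # the definition until it runs out of resources.
    print("evaluation never reached a truth value")
```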

P4) A proposition can be constructed that creates a contradiction in any set of axiomatized propositions/instructions. [this has been proved by Godel]

P5) A computer cannot be given an axiom to handle all possible contradictory propositions. [this is the halting problem]

Because we can understand and know what to do with contradictory statements (like the liar's paradox) and a computer cannot, it is logically impossible to construct a computer capable of emulating human intelligence.

I'd love feedback on either of these problems (or both). If you have an objection to any of the premises (P1 - P5) or if you need clarification on something, please feel free to ask.

  • 22 Replies
Moegreche
offline
Moegreche
3,826 posts
Duke

Let me respond to these objections of the premises.

P1) Any machine capable of any type of artificial intelligence must be capable of fundamental human reasoning.

Objection: Because we don't know if the most basic type of sentient consciousness is the human one or not.

My Response: I actually took this premise to be self evident, but I'll see if I can break it down. In order to understand this premise, we must understand what it means for something to be "artificially intelligent." Again, this is not computers behaving like humans, but something much more fundamental. We want something that has no intelligence to be able to emulate something with intelligence. This means that any machine will have to run off of only basic algorithms in a given system in order to process any information given to it. Now this might imply having a capacity to somehow "learn," but this premise is not concerned with something like that. The bottom line is that our experiential knowledge of what intelligence or sentience is rests in our own sentience and intelligence. We would never truly be able to assess something else as intelligent (which is much different from sentient) unless we compared it to our own intelligence.

P2) It is technologically impossible to decode the brain's reasoning abilities in such a way that makes it useful in emulating its function.

P3) It is technologically impossible to design hardware that can perform the function of the brain.

P2 and P3 I have defended as much as I can in my original post. I realize these conditions are not logically impossible, but the barriers of decoding and encoding (represented by P2 and P3 respectively) certainly make them seem impossible on a practical level.
I can make a case, however, for the encoding problem to actually be logically impossible. Given the nature of Godel's First Incompleteness Theorem and the halting problem, there doesn't seem to be a way for a computer to recognize these cases that humans can. A computer would loop infinitely whereas a human would realize the contradiction and stop. The halting problem comes in here because as soon as you try to add an algorithm to handle the infinite loop, it turns out that a different argument can be introduced that will again create an infinite loop. This can be done forever.
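
For anyone who hasn't run into the halting problem before, here's a rough sketch of the construction I have in mind (every name here is hypothetical; `halts` stands in for whatever loop-detecting algorithm you care to propose):

```python
# Hypothetical sketch of the halting-problem construction: for ANY proposed
# loop-detecting algorithm `halts`, we can build a program it gets wrong.

def halts(program, argument) -> bool:
    """Stand-in for a proposed algorithm that predicts whether
    program(argument) eventually stops."""
    raise NotImplementedError  # whatever is written here, the trick below works

def adversary(program):
    # Do the opposite of whatever `halts` predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:       # predicted to halt, so loop forever
            pass
    else:
        return            # predicted to loop forever, so halt immediately

# Asking about adversary(adversary) forces the contradiction:
# if halts says it halts, it loops; if halts says it loops, it halts.
```

No matter how `halts` is patched to cover this case, the same move applied to the patched version produces a new program it gets wrong, which is the "this can be done forever" part.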

I have a logical problem with many of your premises. Some of them say that things are absolutely impossible, when we don't have sufficient knowledge about it to really know.

I assume by "absolutely impossible" you mean logically impossible. Again, P2 and P3 I do not claim as logically impossible, only technologically impossible. Again, I can make a decent case for P3 to be logically impossible (see above).
Strop
offline
Strop
10,816 posts
Bard

Hm, wanted to expand on something here:

A computer would loop infinitely whereas a human would realize the contradiction and stop.


Would like to add the qualifier: an assumedly reasonable human would realise the contradiction and stop... because there are other outcomes that can occur:

a) person goes insane
b) person creates a new set of axioms that alters conditions such that contradiction becomes acceptable.

I assume that when you say "stop" you cover the above, in that you're talking about any alteration in process. That's why I was interested in thinking about potentiation, in that persistent inputs can alter long-term outcomes.
thisisnotanalt
offline
thisisnotanalt
9,821 posts
Shepherd

P2 and P3 I have defended as much as I can in my original post. I realize these conditions are not logically impossible, but the barriers of decoding and encoding (represented by P2 and P3 respectively) certainly make them seem impossible on a practical level.
I can make a case, however, for the encoding problem to actually be logically impossible. Given the nature of Godel's First Incompleteness Theorem and the halting problem, there doesn't seem to be a way for a computer to recognize these cases that humans can. A computer would loop infinitely whereas a human would realize the contradiction and stop. The halting problem comes in here because as soon as you try to add an algorithm to handle the infinite loop, it turns out that a different argument can be introduced that will again create an infinite loop. This can be done forever.

The entire argument is very dependent on Godel's First Incompleteness Theorem. Are we completely and absolutely sure that this theorem is correct?
-----------------
I actually took this premise to be self evident, but I'll see if I can break it down.

*tsks sarcastically*
Strop
offline
Strop
10,816 posts
Bard

Are we completely and absolutely sure that this theorem is correct?


If you wish to argue about this, why don't you read it up and tell us if you can formulate any objections yourself? Seems pretty obvious to me.

I actually took this premise to be self evident, but I'll see if I can break it down.


*tsks sarcastically*

Be nice now; Moe was simply trying not to insult anybody's intelligence. After all, I got why the premise was self-evident.
Strop
offline
Strop
10,816 posts
Bard

Also: fancy arguments are frequently fanciful. The fewer premises an argument is built on, the easier it is to defend, because there is less room for conflict. That the entire argument here is built upon Godel's Incompleteness Theorem, which is regarded as being solid, makes it a strong argument.

thisisnotanalt
offline
thisisnotanalt
9,821 posts
Shepherd

I read Godel's First Incompleteness Theorem. It seems to be a solid theorem.
------------
I understood why it would be self-evident, yet at the same time, it would be void, because we wouldn't know the extent of intelligence and consciousness that the machine would emulate. Setting what we think are AI barriers seems a bit foolish if we don't know the level of intelligence it would have to emulate, no? Knowing the boundaries is important.

Strop
offline
Strop
10,816 posts
Bard

You're conflating necessary and sufficient conditions.

All Moe was arguing in P1 was what would be necessary, i.e. what must be satisfied. Sufficiency is completely different: it's a condition that, if satisfied, proves the argument valid, provided no conflicts arise from the other premises (i.e. provided the argument is a good one).

So what you're talking about here is what we could call sufficiency, i.e. what level of condition needs to be satisfied. I raised this issue in my original post, and really, this is something the entire thread addresses one way or another.
