AI Barriers

Moegreche (3,826 posts, Duke)

I got to thinking about this during the stem cell thread. There are 2 main barriers to achieving the goal of simulating human intelligence that I would like to offer for consideration. One is a technological barrier and the other a logical one (although the logical one might be a bit hard to follow). I'd like to get some thoughts on these possible restrictions on the limits of computation. There are some important points I need to make, though, so if you don't have time to read this post, please don't respond.

Background:
The nature of artificial intelligence is much more subtle than the movies make it seem. We're not talking about robots that can think for themselves and have emotions. I understand that seems like what artificial intelligence is, but in order for a machine to reach that kind of capacity it must first be able to reason on a very fundamental level like a human. I am taking this condition for granted because I think it is deductively certain, but if anyone can take up argument against it, please feel free.

P1) Any machine capable of any type of artificial intelligence must be capable of fundamental human reasoning.

Technological Barrier:
There are 2 barriers I identify here: the decoding and the encoding. Decoding refers to the process of mapping the brain and somehow storing that functional information. Will we ever develop the mapping technology, and a method of recording and storing that information, that makes it useful? Encoding is the process of taking that information (if we can store it) and making a "brain" out of it: some actual piece of hardware (be it organic or inorganic) that will be able to emulate brain function on even the most basic level. Can such a thing be made?

P2) It is technologically impossible to decode the brain's reasoning abilities in such a way that makes it useful in emulating its function.

P3) It is technologically impossible to design hardware that can perform the function of the brain.

Logical Barrier:
This is technical and might be difficult to follow. I find this facet of the problem more intriguing because if we cannot even axiomatize human thinking, then the technological problem becomes moot (I hope this point is obvious). This is actually a philosophical consequence of Gödel's First Incompleteness Theorem. It is also linked with the halting problem, if you know what that is. In order to avoid a very long and complicated proof, I will make an analogy. The theorem basically states that any consistent formal system that includes basic mathematical axioms is necessarily incomplete. This is because we can form a self-referential statement in the system's own language that creates a contradiction. This is similar to the liar's paradox, which is demonstrated by the statement "This sentence is false." If the sentence is indeed false, then the statement is actually true. So it's true if and only if it's false, and it's false if and only if it's true. When we see a proposition such as this, we can make sense of it, though. But if a computer came across this statement, there's no way we can give it instructions on what to do in this case.

P4) A proposition can be constructed that creates a contradiction in any set of axiomatized propositions/instructions. [this has been proved by Gödel]

P5) A computer cannot be given an axiom to handle all possible contradictory propositions. [this is the halting problem]

Because we can understand and know what to do with contradictory statements (like the liar's paradox) and a computer cannot, it is logically impossible to construct a computer capable of emulating human intelligence.
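For anyone who wants P5 in concrete terms, here is a minimal sketch of Turing's diagonal argument; the halts() function below is purely hypothetical, assumed only so the contradiction can be derived:

# Hypothetical oracle: suppose halts(program, argument) could always
# decide whether program(argument) eventually stops.
def halts(program, argument):
    ...  # assumed to exist for the sake of contradiction

def diagonal(program):
    # Do the opposite of whatever halts() predicts about running
    # the program on its own source.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return            # predicted to loop, so halt immediately

# diagonal(diagonal) halts if and only if it does not halt, so no
# such halts() can exist.

So no fixed instruction can settle every self-referential case in advance.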

I'd love feedback on either of these problems (or both). If you have an objection to any of the premises (P1 - P5) or if you need clarification on something please feel free.

orion732 (617 posts, Nomad)

Wow... you have a lot of free time... though I guess the same could be said for a lot of the people here on AG. I agree with most of your points. I think that one way we could have the robot make decisions is by using Q-bits.

Strop (10,816 posts, Bard)

Thanks for taking the time to post that Moe, I've been looking for a concise way to express my skepticism at this form of AI and you've just posted it! So I unfortunately won't be useful to you for critique.

I haven't thought about this for a while, but I do recognise a few foundational questions in the argument, in particular what it means to "emulate human reasoning", and what human reasoning is in the first place.

Given my background, I happen to think that it's not quite sufficient to think of human reasoning as its own entity, as it is necessarily bundled up with and shares space with our other physiological faculties. Skipping a few steps, it follows that I believe that human intelligence is not fully "emulatable" unless one fairly recreates a human, right down to the embryological processes of development. And since we have an incomplete understanding of that...

That sort of leads into what I'm interested in, though, which is some form of organic knowledge acquisition. Perhaps instead of on-off phenomena, a focus on the process of potentiation might prove more useful. I believe there is current research on the integration of neural stem cells into artificial circuits.

Moegreche (3,826 posts, Duke)

it follows that I believe that human intelligence is not fully "emulatable" unless one fairly recreates a human

This is an interesting point, Strop, and one that I come back to a lot when I think about this problem. In this creation of a human, is it necessary to use biological/organic parts to compose the human, or will any material work just fine?

I think that one way we could have the robot make decisions is by using Q-bits.

I had to just look up what a q-bit (qubit) is, but it seems the only difference between this piece of information and any other is that the qubit is capable of quantum superposition. My question is: how does this solve the encoding/decoding problem? Why can human reason or thought not be represented by solid states?
Strop (10,816 posts, Bard)

Why can human reason or thought not be represented by solid states?


I actually think my response touches upon this when I mention potentiation. Whilst neural networks are easily thought of as vastly complex sets of binary switches which feed back upon themselves, the activation, formation and evolution of these networks is a different matter.

With current standard computers, as far as I can see, we don't really emulate this on a hardware level, and software emulations of AI are merely probabilistic.
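To make that distinction concrete, here is a toy sketch (purely my own illustration, with invented numbers, not a model of real neurons): the switch rule itself is fixed, but a potentiating connection strengthens with use, so the network's wiring evolves as it runs.

# A fixed binary switch versus a connection that potentiates with use.
weight = 0.2                  # starting connection strength
learning_rate = 0.1

def fires(signal, w):
    return signal * w > 0.5   # the "switch": a simple threshold

for step in range(5):
    pre, post = 1.0, 1.0      # both neurons active together
    weight += learning_rate * pre * post   # Hebbian-style strengthening
    print(step, round(weight, 2), fires(1.0, weight))

The threshold never changes, yet after a few co-activations the same input starts to fire the cell: the interesting behaviour lives in the evolution of the weights, not in the switching.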
Moegreche (3,826 posts, Duke)

I'm not familiar enough with how the brain works on this level to be able to ask intelligent questions of you, Strop, so I'll have to stick with mere logical possibilities. But given that we can only have probabilistic software programs (at least until a working quantum computer is developed), could we not still emulate the appearance, or at least the end result, of having quantum bits of information? Also, would the advent of quantum computers bring about a solution for you? Or is it still impossible because of other restrictions?
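To show what I mean by emulating the end result, here is a rough sketch (the amplitudes are made-up example numbers): a classical program can reproduce a single qubit's measurement statistics, even though it never holds a genuine superposition.

import random

# A qubit a|0> + b|1> measures as 0 with probability a^2 and 1 with b^2
# (real amplitudes for simplicity; here 0.36 + 0.64 = 1).
a, b = 0.6, 0.8

def measure():
    return 0 if random.random() < a ** 2 else 1

counts = [0, 0]
for _ in range(10000):
    counts[measure()] += 1
print(counts)   # roughly [3600, 6400]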

thisisnotanalt (9,821 posts, Shepherd)

P3) It is technologically impossible to design hardware that can perform the function of the brain.

That is, until we decode the function of the brain, which is currently impossible.
----------
The thing about the "this sentence is false" paradox is that it is referring to the sentence itself, not the statement. If a differentiation can be made between the statement and the sentence, then it can be solved, while a true paradox is absolutely unsolvable.
-----------
All thinking has some sort of algorithm or pattern--including the human brain. If we are somehow able to find this algorithm, implement an entire persona in the computer, and take into account all the factors that influence a decision, then it is feasible to create a true AI. Of course, with today's tech, that is still a long way off, if it is even possible.
Moegreche (3,826 posts, Duke)

alt, you have some good points which I would like to make clearer. The analogy I made between the logical problem and the liar's paradox fails for an important reason. While the liar's paradox may not be a contradiction, Gödel showed a real, genuine paradox in any system that includes basic mathematical axioms (you know, those things that are supposed to be necessarily true). So the problem in the logical barrier is actually a true paradox that is unsolvable.
The key point here is that we can understand statements (such as self-referentially paradoxical statements) that a computer cannot.

ShintetsuWA (3,176 posts, Nomad)

This is, of course, true only because a computer follows a specific line of code. The computer will follow that code if possible. Let me show you something. In code, a computer relies on conditions, as with Moe's paradoxical statement, the "liar's paradox".

(bear with me on this, I forgot how the real saying went; the hanged-or-electrocuted version is rewritten here as runnable Python)

# "This statement is false": the statement asserts the negation of its
# own truth value, so no assignment can satisfy the rules.
def consistent(value: bool) -> bool:
    says = not value          # what the statement claims about itself
    return says == value      # consistent only if the claim matches the value

print(consistent(True))       # False
print(consistent(False))      # False: the two conditions cancel each other out

Since the computer must satisfy both conditions at once, and they cancel each other out, the whole code is meaningless: no truth value it assigns can follow the rules it was given, because those rules counter each other. Now if there were some way to break that, e.g. given the brain's power to resolve that statement, we would have figured out AI.

Strop (10,816 posts, Bard)

But given that we can only have probabilistic software programs (at least until a working quantum computer is developed) could we not still emulate the appearance or at least the end result of having quantum bits of information?


I don't know how qubits would work, so all I can say right now is that no computer I currently know of really learns independently (in the developmental manner I spoke of above). Since all the fundamental mechanisms of input have already been taught to computers, they will always fall to P5, i.e. the halting problem.

All thinking has some sort of algorithm or pattern--including the human brain.


Hold up a minute there, Alt, I disagree. You could arguably say all thinking can be represented by some sort of pattern, as no doubt we are able to spot patterns, but whether this is even the best way to conceive of human thinking philosophically is uncertain.

To revisit what I said earlier:

Given my background, I happen to think that it's not quite sufficient to think of human reasoning as its own entity, as it is necessarily bundled up with and shares space with our other physiological faculties. Skipping a few steps, it follows that I believe that human intelligence is not fully "emulatable" unless one fairly recreates a human, right down to the embryological processes of development.


I'm going to fill in that "skipping a few steps" now:

Seeing as we're thinking about thinking, I find it important to go back to some fundamental Dennett (pretty much all I know of Dennett, that is), and reflect upon the nature of consciousness and what makes us consider rationality the way we do. If you can appreciate that consciousness is essentially described as something that can convince you that you are conscious, then there should be no problem with considering our rational thought as the byproduct of that which is represented by concepts of biological processes.

What I aim to do there is separate myself from the taxonomical constructs that divide "emotive", "cognitive" and "motor" processing. While these distinctions are ultimately very important on a neurobiological level, to the layman who thinks of a person's moving, thinking and feeling as three separate things, they are misleading, because one has to understand first that all these behaviours share the same basis.

@ShintetsuWA: Moe already covered that (and shot it down) in his original post.
FireflyIV (3,224 posts, Nomad)

Encoding is the process of taking that information (if we can store it) and making a "brain" out of it.


I would just like to point out that it has not been established exactly how the human brain encodes information, so how would it be possible to attempt the same with an artificial entity?
thisisnotanalt (9,821 posts, Shepherd)

alt, you have some good points which I would like to make clearer. The analogy I made between the logical problem and the liar's paradox fails for an important reason. While the liar's paradox may not be a contradiction, Gödel showed a real, genuine paradox in any system that includes basic mathematical axioms (you know, those things that are supposed to be necessarily true). So the problem in the logical barrier is actually a true paradox that is unsolvable.
The key point here is that we can understand statements (such as self-referentially paradoxical statements) that a computer cannot.

I have a logical problem with many of your premises. Some of them say that things are absolutely impossible, when we don't have sufficient knowledge to really know. We can't say something is impossible until we actually know for sure. We thought that it was impossible to travel faster than light... then we discovered quantum entanglement. See where I'm going with this? We can't say something is impossible until we have enough evidence to actually back up our statement.
-------------
Hold up a minute there, Alt, I disagree. You could arguably say all thinking can be represented by some sort of pattern, as no doubt we are able to spot patterns, but whether this is even the best way to conceive of human thinking philosophically is uncertain.

It comes down to what drives human thought, no? And all processes have something to start them. If we find out what starts the process of human thought, and what drives it, then scientists will probably be able to find out a way to recreate it artificially.
Strop (10,816 posts, Bard)

It comes down to what drives human thought


Which comes down to what I said about Dennett: consciousness is an emergent property that operates by convincing us that we are conscious.

This in itself can lead to a whole range of misconceptions, such as that there is some unique, singular component to "consciousness" that distinguishes it from that which we do not conceive as "conscious", one form of which is the Platonic dualism I mentioned earlier.

We can't say something is impossible until we actually know for sure.


Read that sentence again: any good scientist will never know when something is "impossible". At any rate, Gödel's incompleteness theorem and the halting problem are gold-standard, so it sounds like your current objections are more on the rhetorical side than the logical, unless you could elaborate further?
thisisnotanalt (9,821 posts, Shepherd)

Read that sentence again: any good scientist will never know when something is "impossible". At any rate, Gödel's incompleteness theorem and the halting problem are gold-standard, so it sounds like your current objections are more on the rhetorical side than the logical, unless you could elaborate further?

They are mostly rhetorical... though at the same time, if you think about it: if I walk out into the street and say that it is impossible for a red car to pass in front of me, will I actually have anything to substantiate that claim? No. And some of the premises make claims of that caliber.
--------------------
This in itself can lead to a whole range of misconceptions, such as that there is some unique, singular component to "consciousness" that distinguishes it from that which we do not conceive as "conscious", one form of which is the Platonic dualism I mentioned earlier.

That same thing is applicable to sentience in a way: sentience is but something that convinces us that we're sentient. And like you said with consciousness, it is just something that convinces us that we are conscious. If we figure out what convinces us that we're conscious and sentient, then implementing it in a machine could conceivably be achieved... yet that would present a whole new set of problems that we won't know about until we try.
Strop (10,816 posts, Bard)

I presume you're particularly looking at these premises:

P2) It is technologically impossible to decode the brain's reasoning abilities in such a way that makes it useful in emulating its function.

P3) It is technologically impossible to design hardware that can perform the function of the brain.


Yes? If so I would at least somewhat agree with you, and it'd naturally be up to Moe (not me) to answer that.

sentience is but something that convinces us that we're sentient


Given that sentience is subjective perception of sensory input, I'll agree for now with the qualifier that I'm too tired to think about it properly and will have to get back to you later on whether I actually agree or not!

On that note, though, it has been observed that we are able to establish interfaces with motor networks, as well as modulate certain emotive and cognitive pathways. Just how the latter is done is still a big mystery. So that raises the question of how close we are to understanding the processes at hand, if at all.
thisisnotanalt (9,821 posts, Shepherd)

Those two, and this one:

P1) Any machine capable of any type of artificial intelligence must be capable of fundamental human reasoning.

Because we don't know if the most basic type of sentient consciousness is the human one or not.
-----------
Given that sentience is subjective perception of sensory input

And the perception is from our consciousness! Hmmm...