Forums > WEPR > Philosophical Issues (Extended Cognition)

57 replies · 24,354 views
Moegreche
3,826 posts
Duke

Since I'm back for the foreseeable future, I thought it might be fun to consider some important philosophical issues. The idea I have in mind is to discuss issues - taking one at a time - that philosophers are currently thinking about. The goal is to introduce some of the members of the AG community to what philosophers do, but more importantly to help develop critical thinking and argumentation skills for us all.

My role will simply be to (try to) guide the conversation and to fill in any theoretical gaps that appear. I know that my posts can be rather long-winded, so I'll try to sum up the conversation thus far with some handy bullet points at the end.

The topic I'd like to start with: how is knowledge more valuable than true belief?

This issue goes back as far as the writings of Plato, but is still an unanswered question - and one that is hugely important. There's an intuition that knowledge is, in fact, more valuable than true belief. The problem is actually providing an account of this value.

So let's suppose you're in Venice and you want to get to Rome, so you ask for directions. You have the choice between asking someone who has a true belief on how to get there or someone who has knowledge of the road to take. In either case, it looks like you're going to get to Rome. The first guy's belief is true (we've stipulated that) but so is the second guy's (since knowledge implies truth). So why would we prefer knowing over truly believing?

Summary:
Knowledge seems more valuable than (mere) true belief.
We can find plenty of cases where a true belief seems just as good as knowledge.
Is there a way to explain the value of knowledge over that of true belief? Or maybe knowledge doesn't have the value we think it has!

Moegreche
3,826 posts
Duke

Just some closing thoughts on these issues before we move on. For those uninterested, just ignore this post - my next post will introduce the new topic.

Virtue ethics, from the little understanding I have of it, seems to take the middle ground between Deontology and Consequentialism.


I've seen VE presented that way, but such a presentation misses the heart of VE. It's an entirely different approach to the problems of ethics - one that is incredibly unique and has had a very wide influence (especially in my field with the rise of virtue epistemology).

The ethical theories we've been looking at are both act-centred, meaning that they focus on what a right/wrong act is. VE is agent-centred, so it looks at what it means to be a good/virtuous person. Right acts, then, are just those acts that a virtuous person would do in that situation.

Plus it's just not compelling.


VE got absolutely hammered when it was first put on the table. It seemed like it was circular, impossible to implement, and foundationless (beyond the obvious lack of foundation that circularity implies). But as more philosophers thought about the issues, it really gained a great deal of credence as an approach to an ethical theory. I say approach here because (I think, at least) it was the approach, rather than the subsequent theories, that was the most compelling and influential aspect of VE. But ethicists have (for the most part) moved on to other, more compelling theories.

I will end with proposing that perhaps some sort of hybrid of the two would be in order?


On the face of it, that might seem perfectly reasonable. Rule consequentialism, for example, might be viewed as an attempt to do just that. But as I noted above, I don't like the thought of trying to find the middle ground between the two theories. Or, for that matter, to try to hybridise them.

The reason - and this is important for understanding the issues we've been talking about - is that deontology and consequentialism are incompatible. We might be able to get some sort of hybrid if there were merely a difference in the value systems. But there's also a difference in how these values are realised - one that would make an attempt to form some sort of hybrid theory fail to even get off the ground.

Now, this might be too strong a claim here. But the key point is that such a hybrid system would be incompatible with one or the other (or both!) of the ethical theories we've been looking at.

The nice thing about VE is that it comes awfully close to getting at why we would want a hybrid theory. It seems like both the consequences and the motives behind the act are needed to properly assess the act. VE can get us there, but without taking on the problems that an act-centred hybrid theory would inherit from both theories.

But I'll leave it at that and prepare the next topic. We already have a nice suggestion from Zahz, so we'll go with that!
Moegreche
3,826 posts
Duke

So, our new topic: Extended Cognition
Cheers to Zahz for the suggestion.

This is a pretty new avenue of research in epistemology (the study of knowledge) and there are a lot of issues involved in which we could easily get bogged down. So rather than me blather on, I'm going to get us started with a working definition and let's see where things go.

The basic idea behind extended cognition (or perhaps, more properly, extended knowledge) is that cognition regularly happens outside the brain. (Keep in mind that extended cognition is just one feature of the much broader research programme of extended knowledge.)

So let's start with 3 not-so-easy questions:

1) What is cognition?

2) What are some examples of how cognition could be extended?

3) What are some important issues that are brought up when we think about cognition being extended?

Zahz
47 posts
Peasant

Alright, let's try to tackle this in order:

1) Wow. Uh, okay. This is not an easy thing to try to do. I'll try to get us a working definition so we can function. Cognition is the processing of information. It covers input, output, and all the fiddly bits in between that I don't have the training in computer science or information theory to elucidate. Under this definition, a computer could be said to have a limited form of cognition, which is, I think, the right direction to go. This definition ignores many of the layers and idiosyncrasies of human cognition, but I assume we'll get to those later in the discussion.

2) Cognition can be extended in a lot of ways. A calculator is a great example. You punch up a problem and the calculator does it for you and returns the answer. How is this different from simply doing the problem yourself? It really isn't when you look at how the brain does things. If you do it "yourself" you are merely using a different tool. Your brain. Essentially the calculator is as much a part of your cognition as the part of your brain (loosely) associated with mathematics.

3) In short, the death of the individualized self. Since the calculator is pretty undeniably part of your cognition, why would this not extend to other people? Moe and I have a conversation. Now we are part of each other's cognition - indeed, inextricably so. I provide input and output for Moe and he for me. We unavoidably affect one another's cognition. Now, since we are both part of the whole cognition, where does Moe begin and where do I end? Not a possible distinction, if you bought the calculator story above. Here is where we start to get wild: where does this addition of people to the new Moe/Zahz complex end? It doesn't. Everyone we have ever had contact with (or indeed ever will have, because of the ongoing nature of the beast) is a part of this cognitive function. So there: most of the human race is one big cognitive being. But it doesn't end there. What about the other things that affect our cognition in ways that aren't obvious? Things like ambient temperature, animals, things so far away in time and space we could never hope to consciously detect their influence? Think of it like a brain. One neuron has little effect on a neuron in another region, but you'd be a fool to discount their influence on one another, no matter how indirect. Here's the part where people generally ask if I'm high: all of these things are a part of the grand unified cognitive function if they interact with any part of it in any way - as input, output, or anything in between. All of time and space is one cognition.

I should clarify: not all parts of this cognition are smart, and it is in no way unified despite its universal nature. For example, your right big toe has something to do with how you solve the math problem, just not very much. The neurons in your brain, however, have a lot to do with it. In the same way, a supernova beyond the observable universe doesn't and never will directly affect anything on Earth, much less the conversation of Moe and I - but it very much affects the conversation of Zoe and Mahz, who were in orbit of that star.

Indeed, this problem of time is the biggest wrench I've found in the idea. If it won't affect Moe and me, how is it a part of us? Answer: it isn't. The three of us are all parts of something else entirely. In essence, we are parts of the Grand Cognition. No part of it may be extracted from the whole, and even with the time problem, the cognition's influence stretches backwards in time too. Or, more accurately, did stretch forward.

So yeah, we're all one. Ethical issues blah blah blah, Moe I am your father blah blah blah, stick that in your pipe and smoke it, Ayn Rand. And human cognition ain't any more monolithic or unified than the Grand Cognition. (Alright, yes it is, but I'm speaking colloquially in this section.)
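The minimal definition in (1) above - cognition as information processing, i.e. any mapping from input through processing to output - can be made concrete with a toy sketch. Everything below (the function names, the arithmetic example) is purely illustrative and not from the extended-cognition literature; it just shows why, under that definition, a calculator counts as a limited cognitive system indistinguishable at the input/output level from "doing it yourself":

```python
# A toy sketch of cognition as information processing: input in,
# processing in the middle, output out. The names here are made up
# for illustration only.

def calculator(expression: str) -> int:
    """An external tool that processes symbolic input into an answer."""
    # eval() is acceptable for this toy example only; never use it on
    # untrusted input.
    return eval(expression)

def in_the_head(expression: str) -> int:
    """Doing it 'yourself': a different tool (your brain), but the
    same input -> processing -> output structure."""
    return eval(expression)

# Under the pure input/output definition, the two routes are
# indistinguishable - which is the point about the calculator being as
# much a part of your cognition as the arithmetic bits of your brain.
print(calculator("17 * 3"))                           # 51
print(in_the_head("17 * 3"))                          # 51
print(calculator("17 * 3") == in_the_head("17 * 3"))  # True
```

Of course, this behavioural equivalence is exactly what the later posts in the thread push back on, by asking whether processing alone is enough or whether understanding is also required.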

MageGrayWolf
9,462 posts
Farmer

Seeing as this is the first I have heard of this I will mostly sit back and read up. Though by the sound of it this concept would extend nicely to the use of the internet and what we are doing right now.

Kyouzou
5,061 posts
Jester

1.

I tend to think of it primarily as an awareness of self, something in the realm of "I think, therefore I am." However, in addition to this, I feel that you also need to be aware of your environment. The interactions that take place between your memories - as a result of context, or what have you - are prime examples of cognition.

2.

I'm fairly sure Zahz covered this pretty well, but it seems to me that the general idea of extended cognition is derived from the use of tools to augment cognitive function. This is, of course, dependent on having a working definition of cognition that falls in line with that train of thought. But the general idea would be that people or objects outside your own consciousness have an influence that either changes your capacity to understand something or increases your capability for thought beyond what one might be capable of individually. To this end, I'd say a Socratic seminar is a fairly accurate representation of the kind of human interaction that enhances such capability.

3.

Reading Zahz's post got me thinking about a theory known as SIDE (Social Identity model of Deindividuation Effects). Essentially, SIDE postulates that in large groups, individuals begin exhibiting anti-normative behaviour in relation to the wider society, and thereby an increase in normative behaviour with regard to the group. Anyway, what I was thinking was that the process of extended cognition - perhaps in the long term, or depending on the situation in which it takes place - instead of having the positive effects I've referred to so far, could have consequences similar, if not identical, to SIDE. In that case, it would likely be the cognitive interaction with the group that suppresses the values and desires of the individual.

Zahz
47 posts
Peasant

The issue I have with Kyour definition of cognition is that it isn't general enough. It seems that the self, and the awareness thereof, that you describe are products of cognition rather than aspects of it.

Kyouzou
5,061 posts
Jester

Kyour?

I feel as though cognition, as of yet at least, is something that tends to be a distinctly organic trait. A lot of machinery and organisms process information, but that doesn't mean they understand it. The way I see it, the definition of cognition needs to be limited to the extent that information is not only processed but understood.

Zahz
47 posts
Peasant

Kyour: a portmanteau of Kyouzou and your. It made me laugh.

I know what you're saying; I just disagree. For one, understanding is poorly defined. More importantly, understanding under most definitions is an emergent phenomenon within and of cognition. Understanding is higher-order cognition. If that's what we want to talk about, that's fine, but it isn't the whole story.

Moegreche
3,826 posts
Duke

There's a way here to preserve the broadness of Zahz's definition of cognition while excluding things like machines from counting as cognitive agents. Kyouzou's point, though, is very strong - cognition seems to have a temporal aspect to it. I don't think Kyouzou literally meant understanding (in the sense of a distinct mental state with respect to a body of information) but rather some sort of awareness of the processes going on. There has to be some sort of active engagement with the input, so to speak, in order to reach the output.

We don't want to limit ourselves, but how about this for a slightly more formal, working definition of cognition:

The process(es) by which agents form beliefs.

This would prima facie exclude machines from counting as cognitive agents, since we wouldn't ascribe a belief state to a machine. This point is certainly debatable, but the nuances aren't super important for our purposes here.

One might still object that this definition is just too broad - that cognition is some sort of internal reasoning process with information as input and mental states (e.g. beliefs, knowledge, understanding) as the 'output'. But notice that this subtly begs the question against the extended cognition thesis by requiring that cognition be an internal process.

The important thing about the above definition is that it captures what both Zahz and Kyouzou are after (I hope!). It's also broad enough that we can still pick away at it if needed, but not so broad that it's trivial or just unhelpful for theorising.

Now, there are a number of ways that cognition can be extended. Mage made an excellent point that perhaps what's going on right now is a form of extended cognition. I'm inclined to agree with that - we are essentially a thinking group. This brings up 2 important questions. First, can we properly ascribe (justified) beliefs, knowledge, or understanding to a group? And if so, does the mental state of the group supervene on the mental states of its members? These kinds of questions fall within the extended knowledge and social cognition lines of study, which in turn fall within the broader project of extended cognition.

It's interesting that you guys used a calculator as an example of extended cognition. I think this can be right, but we have to be careful. So I'll leave you guys with 2 additional questions (also feel free to challenge anything I've said so far).

1) Are there cases in which an agent could use a calculator but that wouldn't count as extended cognition? If so, what's the difference?

2) What are some other, clear-cut examples of how cognition can be extended? I'm hoping we can get a better grip of this phenomenon, which will lead to a more thorough discussion of question (3) in my opening post on this topic.

MageGrayWolf
9,462 posts
Farmer

We don't want to limit ourselves, but how about this for a slightly more formal, working definition of cognition:

The process(es) by which agents form beliefs.


I'm not so sure of this definition. Not so much that it's too broad but perhaps a bit too limiting. The broadest definition of cognition would seem to be the mental processes of thinking. If we are to branch out to extended cognition we would seem to have to include knowledge as we do have this collective knowledge that would seem to be part of this extended cognition from the way it's being described here.

This is a definition of cognition I found on Psychology Today:
"Quite simply, cognition refers to thinking. There are the obvious applications of conscious reasoning - doing taxes, playing chess, deconstructing Macbeth - but thought takes many subtler forms, such as interpreting sensory input, guiding physical actions, and empathizing with others. The old metaphor for human cognition was the computer - a logical information-processing machine. (You can't spell cognition without 'cog.') But while some of our thoughts may be binary, there's a lot more to our 'wetware' than 0's and 1's."

And wiki has this definition under "Extended Mind".
"The "extended mind thesis" (EMT) refers to an emerging concept that addresses the question as to the division point between the mind and the environment by promoting the view of active externalism. The EMT proposes that some objects in the external environment are utilized by the mind in such a way that the objects can be seen as extensions of the mind itself. Specifically, the mind is seen to encompass every level of the cognitive process, which will often include the use of environmental aids."

1) Are there cases in which an agent could use a calculator but that wouldn't count as extended cognition? If so, what's the difference?


The part from above that would seem to apply is this: "some objects in the external environment are utilized by the mind in such a way that the objects can be seen as extensions of the mind itself." If you're just inputting, say, a mathematical formula, you're not necessarily using the calculator as a utility of your mind if you don't understand the formula or how you're getting the answer.

What are some other, clear-cut examples of how cognition can be extended?


I have to wonder if empathy is part of extended cognition.
Zahz
47 posts
Peasant

In response to Moe: First off, excluding machines as cognitive agents is a bad plan. It kinda excludes humans. A calculator arrives at an answer. How is that answer qualitatively different from any belief I myself arrive at? Moreover, how is the process by which it arrived at this admittedly simplistic form of belief so wildly different from mine as to be incomparable? It's a glorified Turing machine and I'm a relatively sophisticated heuristic neural network. We're both machines.

Second, the way you are trying to do it makes me cringe. It implies that awareness, as I think you are describing it, is not, cannot, and/or must not be anything but a function of the system by which we neural nets cognate - and, if what I said in the "first off" section is correct, that just doesn't hold water. Of course, if you want to bring sentience-as-agency or something loony like souls into the argument, we can, but that gets stupid fast.

Le discussion questions:

1) We can try. We can whip a calculator at freshmen who have just discovered Nietzsche and existentialism, just to watch them pout and try to cook up an argument about the calculator not being used to perform calculation and therefore not being part of our cognition. But this ignores what I'm going to call the giggle problem: the confused and nihilistic outrage of our froshy friends causes us to giggle at its misguided impotence. The calculator provided hilarious input in the form of whining. Indeed, when we touched the calculator, our nervous system registered it, and this affected our cognitive system. Taken to the logical conclusion, anything you are at any point even remotely aware of becomes part of the system, at least as input in some small way, and that makes it part of the system. Heck, even if you aren't aware that you used a calculator in some way - like when the service person at the drive-through uses one to calculate your change - it still affects the system indirectly, which makes it part of the system, and you can't separate any part of the system from the whole because blah blah blah, typed it before. Tl;dr: no, and have you ever really looked at your hands, bro?

2) Well, I kinda already went to the whole universe-as-cognitive-system, so I'll leave other ways to other people. I suppose there might be some ethical issues, depending on how one defines the self and the value thereof, but I choose to ignore them. The issue brought to light by extended cognition that I'd like to discuss is funny but inappropriate and irrelevant, so I guess we could talk about the Trans-humanist element in the smartphone and internet examples. I said Trans-humanist. I feel dirty.

Moegreche
3,826 posts
Duke

First off, my apologies for the slow response. Trying to meet publication deadlines and whatnot tends to take a lot out of me. Second, I'm super happy to see the responses so far. I can tell that we're doing philosophy since I can't even get a simple definition off the ground!

I'll begin with some responses to my definition of cognition. Although I don't want to press the point (after all, we could continue the discussion without a strict definition of cognition) I would like to just do a bit of philosophy and further the discussion on this front.

To address Mage's concern: it looks like you're suggesting that my definition is too narrow, since it excludes knowledge from the discussion of extended cognition. But the definition I've provided talks in terms of belief. It's very important to note here that knowledge (as it's typically defined) is a species of belief - more specifically, a species of true belief. So couching cognition in terms of belief is pretty dang broad, but I think it adequately captures what happens (most of the time) when we engage our cognitive faculties. I say most of the time because there are occasions in which an agent might simply withhold belief, yet these would nonetheless count as cases of cognition. So on this front, my definition is too narrow. But keep in mind that I'm not presenting belief as a necessary condition of cognition. I'm simply stating a role that cognition plays - I suppose this would be a functionalist definition of cognition.
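The point that knowledge is a species of true belief can be put slightly more formally. On the textbook justified-true-belief (JTB) analysis - offered here only as the standard formulation, not as anything this thread has settled on:

```latex
% Standard JTB analysis: S knows that p iff
%   (i) p is true, (ii) S believes that p, (iii) S is justified in believing p.
K_S\,p \iff \bigl( p \wedge B_S\,p \wedge J_S\,p \bigr)

% The entailment doing the work here: knowledge implies (true) belief,
% so a definition couched in terms of belief automatically covers knowledge.
K_S\,p \implies \bigl( B_S\,p \wedge p \bigr)
```

This is also why, back in the first topic, we could stipulate that the knower and the mere true believer both get you to Rome: the truth condition is common to both.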

As for Zahz's worry that my definition excludes machines - fair enough. As artificial intelligence becomes more and more developed, we might begin to wonder whether machines really can have beliefs. But I have a theoretical reason for excluding machines. If we allow machines like calculators to count as having beliefs, what we have in these cases is belief gained via testimony. While testimony is an interesting discussion in its own right, we end up missing an important point in the discussion about extended cognition. While cognition can extend to other agents (e.g. a group of people engaged in a discussion), there is a significant respect in which cognition involves non-cognitive entities. But perhaps it would just be simpler to focus on cases in which cognition isn't even a possibility - cases in which an agent uses a whiteboard or a notebook to facilitate their cognitive process.

As for the discussion questions, both of you touch upon a very interesting point. Here's Mage:

"some objects in the external environment are utilized by the mind in such a way that the objects can be seen as extensions of the mind itself." If you're just inputting, say a mathematical formula, you're not necessarily using as a utility of your mind if you don't understand the formula or how you're getting the answer.


And here's Zahz:

Taken to the logical conclusion, anything you are at any point even remotely aware of becomes part of the system, at least as input in some small way, and that makes it part of the system.


Both of these points seem to recognise the importance of requiring that an agent is aware that s/he is utilising something as part of their cognitive process. But there seems to be something more, as Mage suggests. Perhaps we should require that the agent has some understanding (I'll use this word very loosely) of the role played by the extended part of their cognition. So a mathematician might use a computer for some complicated calculations, but understands what is going on to some extent - I'm still not sure what I mean by this. Without this requirement, we run the risk posed by Zahz that just about anything would count as part of our cognitive process, whether or not we're aware of what's going on.

So we have some additional problems (apart from my definition) before this discussion can even get off the ground. First, is there a principled way to determine when an agent is using a device or entity as part of her cognition? Second, what role does awareness of the extended nature of cognition play in this discussion - and how much awareness (or understanding) is required for cognition to count as extended and not, say, merely facilitated?