I have to admit that I am not happy with how the word "logic" is thrown around in the politics forum like a cheap stone age weapon. Logic is a way to reason and think; it is not a scare word.

/// Mastaplaya ///


I'm curious about the Rule of Contraction and why exactly people thought it was a good idea.

It should be obvious that
A -> (A -> B) ≠ A -> B



[url=http://armorgames.com/user/FishPreferred]@FishPreferred[/url]
I've never actually heard of the rule of contraction, though I can at least try to explain what's going on here. So you have two sentences in propositional logic:
(1) P -> Q and (2) P -> (P -> Q)

You note that these aren't identical. But they are logically equivalent. In fact, the following sentence is a theorem of propositional logic:
(P -> (P -> Q)) <-> (P -> Q)

Basically, that sentence just says that (1) and (2) imply one another. In other words, they have the same truth value in all cases. And that's all that logical equivalence is.

There are basically two ways to determine whether two sentences are logically equivalent: derive each one from the other in a proof system, or construct a truth table for each. The truth table is the easier route. A truth table looks at every logical possibility and determines whether the sentence is true or false. The sentence P -> Q is false only when P is true and Q is false, and is true otherwise. And it turns out that (2) behaves the same way.

Now, semantically, these are saying two different things. That much does seem obvious. But it's important to keep in mind what logical equivalence is saying. Logic isn't concerned with semantic content (well, it is, but not at this level) - all that matters is the truth value of a given sentence. So with this in mind, it turns out that all of the theorems of a given system are logically equivalent. They're all saying very different things, but from a purely logical point of view, they are all equivalent.
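The truth-table check described above is easy to automate. Here's a minimal brute-force sketch in Python (the `implies` helper is my own, encoding material implication):

```python
def implies(p, q):
    # Material implication: false only when the antecedent is true
    # and the consequent is false.
    return (not p) or q

# Compare (1) P -> Q with (2) P -> (P -> Q) on every row of the truth table.
for p in (False, True):
    for q in (False, True):
        row1 = implies(p, q)              # (1)
        row2 = implies(p, implies(p, q))  # (2)
        print(p, q, row1, row2)
        assert row1 == row2  # they agree on every row: logically equivalent
```

Since the two sentences agree on all four rows, the biconditional joining them is a tautology, which is what makes it a theorem.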


[quote]The sentence P -> Q is false only when P is true and Q is false and is true otherwise. And it turns out that (2) behaves the same way.[/quote]
But if we equate (2) to (1), we can put anything in place of Q and still have (2) come out true, as in Curry's paradox.

If P = "P -> Q", we get: P -> (P -> Q) and (P -> Q) -> Q, so P implies that P implies Q, and P implying Q also implies Q. If Q is false, "P -> Q" tells us that P is also false, meaning the statement "P -> Q" is false. We can still say it's true that P -> (P -> Q), because P -> P, but this doesn't relate to Q being true.

As I see it, having both P and Q false, we haven't actually said anything about the truth of the statement "P -> Q", because P doesn't need to have any relation to Q.
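The self-referential stipulation at issue - P having the same truth value as "P -> Q" - can be checked by brute force in two-valued semantics. Only one assignment survives, and it makes Q true, which is exactly the lever Curry's paradox pulls. A quick sketch (variable names are my own):

```python
def implies(p, q):
    return (not p) or q  # material implication

# Keep only the (P, Q) assignments consistent with P <-> (P -> Q).
solutions = [(p, q)
             for p in (False, True)
             for q in (False, True)
             if p == implies(p, q)]

print(solutions)  # [(True, True)] - the fixed point forces Q to be true
```

Whatever sentence we substitute for Q, the only consistent reading of the self-referential P makes that sentence true - hence the paradox when self-reference is allowed.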

Ah, I see what you're saying now. I have two thoughts on what's going on.

First thought - problems with material implication

The logical connective (represented by an arrow -> or a horseshoe) that is material implication is a tricky beast. It's really unintuitive and many of my students struggle to understand what's going on. Note that I'm not suggesting you're struggling to understand, I'm just pointing out that it's really, really unintuitive.

Typically when we use conditionals in ordinary language, we are expressing a specific relationship between two things. Sometimes it's a causal relationship - 'If you fall from that height, you're going to get hurt'. Other times it's more of a biconditional - 'If you drive me home, I'll pay you £5'. But the arrow just doesn't work that way. The statement could still be true if you don't fall from that height and you still get hurt. Or if you don't drive me home and I still give you £5. This is just a feature of how conditionals work in formal logic. There are other logical systems that try to handle the conditional differently, but these are far beyond the scope of this intro to logic thread. Plus, I'm not really familiar with these systems, so I can't give much insight here. My earlier comments about logical equivalence and implication rest upon this reading of conditionals. It's unintuitive, but it's also fundamental to building a sound and complete logical system (more powerful logical systems are incomplete, but that's a different matter).
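The truth-functional reading can be seen in miniature: with material implication, a false antecedent can never falsify the conditional, so "you didn't fall but still got hurt" leaves the statement true. A sketch, with my own variable names:

```python
def implies(p, q):
    # Material implication is defined purely by truth values.
    return (not p) or q

# 'If you fall from that height, you're going to get hurt.'
fall, hurt = False, True    # you didn't fall, but you got hurt anyway
print(implies(fall, hurt))  # True - the conditional isn't falsified

# The only falsifying case: true antecedent, false consequent.
print(implies(True, False))  # False
```

This is why the arrow tracks no causal or relevance connection between the two sides - only their truth values matter.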

Second thought - Solving the problem

If we take a step back to look at what's going on, we can solve the problem with predicate logic. You mention Curry's paradox, which hinges on what's going on here. A central claim in the paradox is that, whatever P we choose, we have a Q such that P -> (P -> Q).

With predicate logic, we can represent this claim using the quantifiers ∀ ('for all', an upside-down A) and ∃ ('there exists', a backwards E). The basic idea is that once we properly quantify Curry's paradox, we can avoid it. In short, the claim ∀P ∃Q (P -> (P -> Q)) - 'For every P there is a Q such that P -> (P -> Q)' - just isn't saying the same thing anymore. So the equivalence relation that was in place in propositional logic no longer holds. Interestingly, this is also true if we reverse the quantifiers - ∃P ∀Q... - 'There is a P such that for every Q...'
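One crude way to see that the quantified claims are genuinely different statements is to evaluate them with the quantifiers ranging over the two truth values. This is a simplification of what predicate logic actually quantifies over, but it shows how the order and scope of the quantifiers changes what is being asserted:

```python
def implies(p, q):
    return (not p) or q  # material implication

# 'For every P there is a Q such that P -> (P -> Q)'
forall_exists = all(
    any(implies(p, implies(p, q)) for q in (False, True))
    for p in (False, True)
)

# 'There is a P such that for every Q, P -> (P -> Q)'
exists_forall = any(
    all(implies(p, implies(p, q)) for q in (False, True))
    for p in (False, True)
)

print(forall_exists, exists_forall)  # both hold over this tiny domain
```

Neither quantified sentence is the schematic equivalence (P -> (P -> Q)) <-> (P -> Q) itself; each makes a weaker, scoped claim, which is why the paradox's step no longer goes through once the quantification is made explicit.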
