Propositional Logic Proofs
Introduction
While we can use truth tables to check whether a set of premises entail a conclusion, this requires testing all possible truth assignments – of which there are exponentially many. In this chapter, we will learn the process of natural deduction in propositional logic. This will allow us to start with a set of known facts (premises) and apply a series of rules to see if we can reach some goal conclusion. Essentially, we will be able to see whether a given conclusion necessarily follows from a set of premises.
We will use the Logika tool to check whether our proofs correctly follow our deduction rules. HOWEVER, these proofs can and do exist outside of Logika. Different settings use slightly different syntaxes for the deduction rules, but the rules and proof system are the same. We will merely use Logika to help check our work.
Sequents, premises, and conclusions
A sequent is a mathematical term for an assertion or an argument. We use the notation:
p0, p1, ..., pm ⊢ c
The p0, p1, …, pm are called premises and c is called the conclusion. The ⊢
is called the turnstile operator, and we read it as “prove”. The full sequent is read as:
Statements p0, p1, ..., pm PROVE c
A sequent is saying that if we accept statements p0, p1, …, pm as facts, then we guarantee that c is a fact as well.
For example, in the sequent:
p → q , ¬ q ⊢ ¬ p
The premises are p → q and ¬q, and the conclusion is ¬p.
(Shortcut: we can use |- in place of the ⊢ turnstile operator.)
Sequent validity
A sequent is said to be valid if every truth assignment that makes all of the premises true also makes the conclusion true.
For example, consider the following sequent:
p → q , ¬q ⊢ ¬p
To check if this sequent is valid, we must find all truth assignments for which both premises are true, and then ensure that those truth assignments also make the conclusion true.
Sequent validity in Logika using truth tables
We can use a different kind of truth table to prove the validity of a sequent in Logika:
* * *
---------------------------
p q # (p →: q, ¬q) ⊢ ¬p
---------------------------
T T # T F F
T F # F T F
F T # T F T
F F # T T T
---------------------------
Valid [F F]
Notice that instead of putting just one logical formula, we put the entire sequent – the premises are in a comma-separated list inside parentheses, then the turnstile operator (which we type using the keys |-
in Logika), and then the conclusion. We mark the top-level operator of each premise and conclusion.
Examining each row in the above truth table, we see that only the truth assignment [F F] makes both premises (p → q and ¬q) true. We look right to see that the same truth assignment also makes the conclusion (¬p) true, which means that the sequent is valid.
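For contrast, here is a sequent that is not valid – p ∨ q does not prove p ∧ q. (This is an extra example we add for illustration, in the same truth table style.)

* *
---------------------------
p q # (p ∨ q) ⊢ (p ∧ q)
---------------------------
T T # T T
T F # T F
F T # T F
F F # F F
---------------------------

The truth assignments [T F] and [F T] make the premise p ∨ q true but make the conclusion p ∧ q false, so this sequent is not valid, and Logika will not report it as Valid.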
Proving sequents using natural deduction
Now we will turn to the next section of the course – using natural deduction to similarly prove the validity of sequents. Instead of filling out truth tables (which becomes cumbersome very quickly), we will apply a series of deduction rules to allow us to conclude new claims from our premises. In turn, we can use our deduction rules on these new claims to conclude more and more, until (hopefully) we are able to claim our conclusion. If we can do that, then our sequent was valid.
Logika natural deduction proof syntax
We will use the following format in Logika to start a natural deduction proof for propositional logic. Each proof will be saved in a new file with a .sc
(Scala) extension:
// #Sireum #Logika
//@Logika: --manual --background type
import org.sireum._
import org.sireum.justification._
import org.sireum.justification.natded.prop._
@pure def ProofName(variable1: B, variable2: B, ...): Unit = {
Deduce(
(comma-separated list of premises with variable1, variable2, ...) ⊢ (conclusion)
Proof(
//the actual proof steps go here
)
)
}
Once we are inside the Proof
element (where the above example says “the actual proof steps go here”), we complete a numbered series of steps. Each step includes a claim and corresponding justification, like this:
// #Sireum #Logika
//@Logika: --manual --background type
import org.sireum._
import org.sireum.justification._
import org.sireum.justification.natded.prop._
@pure def ProofName(variable1: B, variable2: B, ...): Unit = {
Deduce(
(comma-separated list of premises with variable1, variable2, ...) ⊢ (conclusion)
Proof(
1 ( claim_a ) by Justification_a,
2 ( claim_b ) by Justification_b,
...
736 ( conclusion ) by Justification_conc
)
)
}
Each claim is given a number, and these numbers are generally in order. However, the only rule is that claim numbers be unique (they may be out of order and/or non-consecutive). Once we have justified a claim in a proof, we will refer to it as a fact.
We will see more details of Logika proof syntax as we progress through chapter 4.
Premise justification
The most basic justification for a claim in a proof is “premise”. This justification is used when you pull in a premise from the sequent and introduce it into your proof. All, some or none of the premises can be introduced at any time in any order. Please note that only one premise may be entered per claim.
For example, we might bring in the premises from our sequent like this (the imports, proof function definition, deduce call, and formatter changes are omitted here for readability):
(p, q, ¬r) ⊢ (p ∧ q)
Proof(
1 ( p ) by Premise,
2 ( q ) by Premise,
3 ( ¬r ) by Premise,
...
)
We could also bring in the same premise multiple times, if we wanted. We could also use non-sequential line numbers, as long as each line number was unique:
(p, q, ¬r) ⊢ (p ∧ q)
Proof(
7 ( p ) by Premise,
10 ( q ) by Premise,
2 ( ¬r ) by Premise,
8 ( p ) by Premise,
...
)
We could only bring in some portion of our premises, if we wanted:
(p, q, ¬r) ⊢ (p ∧ q)
Proof(
1 ( p ) by Premise,
...
)
But we can only list one premise in each claim. For example, the following is not allowed:
(p, q, ¬r) ⊢ (p ∧ q)
Proof(
1 ( p, q, ¬r ) by Premise, //NO! Only one premise per line.
...
)
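Putting the pieces together, here is what one complete file might look like for the (trivial) sequent p, q ⊢ p, whose conclusion is itself one of the premises. This sketch simply follows the skeleton from the previous section; the function name premiseExample is our own choice.

// #Sireum #Logika
//@Logika: --manual --background type
import org.sireum._
import org.sireum.justification._
import org.sireum.justification.natded.prop._

@pure def premiseExample(p: B, q: B): Unit = {
  Deduce(
    (p, q) ⊢ (p)
    Proof(
      1 ( p ) by Premise
    )
  )
}

Since the conclusion p is one of the premises, stating it with the Premise justification is enough to finish the proof.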
Deduction rules
The logical operators (AND, OR, NOT, IMPLIES) are a kind of language for building propositions from basic, primitive propositional atoms. For this reason, we must have laws for constructing propositions and for disassembling them. These laws are called inference rules or deduction rules. A natural deduction system is a set of inference rules, such that for each logical operator, there is a rule for constructing a proposition with that operator (this is called an introduction rule) and there is a rule for disassembling a proposition with that operator (this is called an elimination rule).
For the sections that follow, we will see the introduction and elimination rules for each logical operator. We will then learn how to use these deduction rules to write a formal proof showing that a sequent is valid.
AND Rules
In this section, we will see the deduction rules for the AND operator.
AND introduction
Clearly, when both p and q are facts, then so is the proposition p ∧ q. This makes logical sense – if two propositions are independently true, then their conjunction (AND) must also be true. The AND introduction rule, AndI, formalizes this:
P Q
AndI : ---------
P ∧ Q
We will use the format above when introducing each of our natural deduction rules:

- P and Q are not necessarily individual variables – they are placeholders for some propositional statement, which may itself involve several logical operators.
- On the left side is the rule name (in this case, AndI).
- On the top of the right side we see what we already need to have established as facts in order to use this rule (in this case, P and also Q). These facts can appear anywhere in our scope of the proof, in whatever order. (For now, all previous lines in the proof will be within our scope, but this will change when we get to more complex rules that involve subproofs.)
- On the bottom of the right side, we see what we can claim by using that proof rule.
Here is a simple example of a proof that uses AndI. It proves that if propositional atoms p, q, and r are all true, then the proposition r ∧ (q ∧ p) is also true:
(p, q, r) ⊢ (r ∧ (q ∧ p))
Proof(
1 ( p ) by Premise,
2 ( q ) by Premise,
3 ( r ) by Premise,
4 ( q ∧ p ) by AndI(2, 1),
5 ( r ∧ (q ∧ p) ) by AndI(3, 4)
)
You can read line 4 like this: "from the fact q stated on line 2 and the fact p stated on line 1, we deduce q ∧ p by applying the AndI rule". Lines 4 and 5 construct new facts from the starting facts (premises) on lines 1-3.
Note that if I had instead tried:
(p, q, r) ⊢ (r ∧ (q ∧ p))
Proof(
1 ( p ) by Premise,
2 ( q ) by Premise,
3 ( r ) by Premise,
4 ( q ∧ p ) by AndI(1, 2),
...
)
Then line 4 would not have been accepted. The line numbers cited after the AndI rule must match the order of the resulting AND statement: the left-hand side of the resulting AND statement must correspond to the first line number in the AndI justification, and the right-hand side must correspond to the second line number in the justification:
...
4 ( p ) by (some justification),
5 ( q ) by (some justification),
6 ( p ⋀ q ) by AndI(4, 5),
...
9 ( q ⋀ p ) by AndI(5, 4),
...
AND elimination
The idea of the AND elimination rules is that if we have a proposition p ∧ q as a fact, then we can separately claim both p and q as individual facts. After all, the only time p ∧ q is true in a truth table is when both p and q are individually true. There are two AND elimination rules – AndE1 and AndE2. AndE1 allows us to claim that the left (first) side of an AND statement is individually true, and AndE2 allows us to do the same with the right (second) side. Here is the formalization of each rule:
          P ∧ Q                  P ∧ Q
AndE1 : ---------      AndE2 : ---------
            P                      Q
Here is a simple example showing the syntax of the AndE1
rule:
(p ∧ q) ⊢ (p)
Proof(
1 ( p ∧ q ) by Premise,
2 ( p ) by AndE1(1)
...
)
We can read the justification AndE1(1) as: AND-elimination 1 from line 1, or "take the AND statement on line 1 and extract its first (left) side".

Here is a simple example showing the syntax of the AndE2 rule:
(p ∧ q) ⊢ (q)
Proof(
1 ( p ∧ q ) by Premise,
2 ( q ) by AndE2(1)
...
)
We can read the justification AndE2(1) as: AND-elimination 2 from line 1, or "take the AND statement on line 1 and extract its second (right) side".
Example 1
Suppose we want to prove the following sequent:
(p ∧ (q ∧ r)) ⊢ (r ∧ p)
Whenever we approach a proof, a good first strategy is to see what we can extract from the premises. If we have a premise that is an AND statement, then we can use AndE1 and then AndE2 to extract both its left and right side as separate claims. So we start our proof like this:
(p ∧ (q ∧ r)) ⊢ (r ∧ p)
Proof(
1 ( p ∧ (q ∧ r) ) by Premise,
2 ( p ) by AndE1(1),
3 ( q ∧ r ) by AndE2(1),
...
)
But now we have a new AND statement as a claim – q ∧ r. We can again use both AndE1 and AndE2 to extract each side separately:
(p ∧ (q ∧ r)) ⊢ (r ∧ p)
Proof(
1 ( p ∧ (q ∧ r) ) by Premise,
2 ( p ) by AndE1(1),
3 ( q ∧ r ) by AndE2(1),
4 ( q ) by AndE1(3),
5 ( r ) by AndE2(3),
...
)
Now that we have done all we can with our premises and the resulting statements, we examine our conclusion. Whenever our conclusion is a conjunction (AND statement), we know that we must separately show both the left side and the right side of that conclusion. Then, we can use AndI to put those sides together into our goal AND statement.

In this example, we have already proved both sides of our goal AND statement – r (from line 5) and p (from line 2). All that remains is to use AndI to put them together:
(p ∧ (q ∧ r)) ⊢ (r ∧ p)
Proof(
1 ( p ∧ (q ∧ r) ) by Premise,
2 ( p ) by AndE1(1),
3 ( q ∧ r ) by AndE2(1),
4 ( q ) by AndE1(3),
5 ( r ) by AndE2(3),
6 ( r ∧ p ) by AndI(5, 2)
)
Example 2
Suppose we want to prove the following sequent:
(p ∧ q ∧ r, a ∧ (t ∨ s)) ⊢ (q ∧ (t ∨ s))
We again try to use AND-elimination to extract what we can from our premises. We might try something like this:
(p ∧ q ∧ r, a ∧ (t ∨ s)) ⊢ (q ∧ (t ∨ s))
Proof(
1 ( p ∧ q ∧ r ) by Premise,
2 ( a ∧ (t ∨ s) ) by Premise,
3 ( p ) by AndE1(1), //NO! Won't work.
...
)
However, we get into trouble when we try to use AndE1 to extract the left side of the premise p ∧ q ∧ r. The problem has to do with operator precedence – we recall that ∧ operators are processed from left to right, which means that p ∧ q ∧ r is equivalent to (p ∧ q) ∧ r. By reminding ourselves of the "hidden parentheses", we see that when we use AndE1 on the premise p ∧ q ∧ r, we extract p ∧ q. Similarly, AndE2 will extract r.
We try again to extract what we can from our premises:
(p ∧ q ∧ r, a ∧ (t ∨ s)) ⊢ (q ∧ (t ∨ s))
Proof(
1 ( p ∧ q ∧ r ) by Premise,
2 ( a ∧ (t ∨ s) ) by Premise,
3 ( p ∧ q ) by AndE1(1),
4 ( r ) by AndE2(1),
5 ( a ) by AndE1(2),
6 ( t ∨ s ) by AndE2(2),
...
)
As before, we look at our resulting claims – we see a p ∧ q, and we know that we can use AND elimination again to extract both sides. Now we have:
(p ∧ q ∧ r, a ∧ (t ∨ s)) ⊢ (q ∧ (t ∨ s))
Proof(
1 ( p ∧ q ∧ r ) by Premise,
2 ( a ∧ (t ∨ s) ) by Premise,
3 ( p ∧ q ) by AndE1(1),
4 ( r ) by AndE2(1),
5 ( a ) by AndE1(2),
6 ( t ∨ s ) by AndE2(2),
7 ( p ) by AndE1(3),
8 ( q ) by AndE2(3),
...
)
Now, we look at what we are trying to prove – q ∧ (t ∨ s). Since its top-level operator is the AND, we know that we must separately prove q and t ∨ s. Then, we can use AND introduction to put the two pieces together to match our conclusion. We see that we already have q on line 8 and t ∨ s on line 6, so we add our final line to finish the proof:
(p ∧ q ∧ r, a ∧ (t ∨ s)) ⊢ (q ∧ (t ∨ s))
Proof(
1 ( p ∧ q ∧ r ) by Premise,
2 ( a ∧ (t ∨ s) ) by Premise,
3 ( p ∧ q ) by AndE1(1),
4 ( r ) by AndE2(1),
5 ( a ) by AndE1(2),
6 ( t ∨ s ) by AndE2(2),
7 ( p ) by AndE1(3),
8 ( q ) by AndE2(3),
9 ( q ∧ (t ∨ s) ) by AndI(8, 6)
)
You might notice that lines 5 and 7 were not needed, as neither a nor p was part of the conclusion. That's true – we could have eliminated those steps. However, it's a good idea to extract as much information as possible while you are getting used to doing these proofs – it doesn't hurt to have extra claims, and you may find that you end up needing them.
OR Rules
In this section, we will see the deduction rules for the OR operator.
OR introduction
If we know that a proposition P is true, then it will also be the case that both P ∨ Q and Q ∨ P are true. It doesn't matter what Q is – it might even be something that is known to be false. Because P is true, it will make the overall OR statement true as well.

There are two OR introduction rules – OrI1 and OrI2. OrI1 allows us to claim an OR statement with some previous fact on the left (first) side, and OrI2 allows us to do the same with the right (second) side. Here is the formalization of each rule:
          P                      Q
OrI1 : --------       OrI2 : --------
        P ∨ Q                  P ∨ Q
Here is a simple example showing the syntax of the OrI1
rule:
(p) ⊢ (p ∨ q)
Proof(
1 ( p ) by Premise,
2 ( p ∨ q ) by OrI1(1)
)
We can read the justification OrI1(1) as: OR introduction 1 from line 1, or "create an OR statement that puts the claim from line 1 on the first (left) side, and puts something new on the second side".
Here is a simple example showing the syntax of the OrI2
rule:
(p) ⊢ (q ∨ p)
Proof(
1 ( p ) by Premise,
2 ( q ∨ p ) by OrI2(1)
)
We can read the justification OrI2(1) as: OR introduction 2 from line 1, or "create an OR statement that puts the claim from line 1 on the second (right) side, and puts something new on the first side".
OR elimination
The OR elimination rule is used when we have an OR statement of the form P ∨ Q, and we wish to use it to extract new information. In real life, we call the rule "case analysis". For example, say that you have either 12 quarters in your pocket or 30 dimes in your pocket. In either case, you can buy a $3.00 coffee. Why? You do a case analysis:
- In the case you have 12 quarters, that totals $3.00, and you can buy the coffee;
- In the case you have 30 dimes, that totals $3.00, and you can buy the coffee.
So, in both cases, you can buy the coffee.
We can formalize the idea behind the OR elimination rule as follows:
- In order for the OR statement P ∨ Q to be true, at least one of P and Q must be individually true.
- If we are able to reach some conclusion R if we assume P is true, and we are able to reach the SAME conclusion R if we assume Q is true…
- …then no matter what, R will be true.
Subproofs
OR elimination will be our first proof rule that uses subproofs. Subproofs are tools for case analysis or what-if excursions, used to support justification for later claims. In propositional logic, they will always contain one assumption. This assumption is a proposition whose scope is limited to the subproof. The syntax of a subproof in Logika looks like this:
(premises) ⊢ (conclusion)
Proof(
...
5 ( fact_A ) by Justification_A,
...
17 SubProof(
18 Assume( fact_D ),
...
25 ( fact_G ) by (some justification using claim 5) //this is OK
...
),
...
45 ( fact_R ) by (some justification using claim 25) //this is NOT OK
...
)
Opening and closing parentheses, (...), that are attached to "Proof" and "SubProof" elements define the scope of claims. The SubProof elements are given a claim number when they are opened, but no justification. Closing parentheses that end proofs and subproofs are not given claim numbers. The use of parentheses in this manner is analogous to the use of curly brackets to define scope in Java, C#, and C.

In the example above, the subproof starting on line 17 creates an environment where fact_D is true. The justification used on claim number 25, which uses claim 5, is valid: the scope of claim 5 includes subproof 17.

However, the justification for line number 45 is invalid. fact_G on line number 25 was proven true in an environment where fact_D is true (i.e., inside subproof 17). That environment ends (falls out of scope) when the closing parenthesis for the subproof is reached, which happens before line 45.

Only specific deduction rules allow you to close a scope and create a new claim based on that subproof in the enclosing (outer) scope. These rules always take a subproof (e.g., "17") as part of the justification.
Syntax
Here is the OR elimination rule:
               SubProof(               SubProof(
                 Assume ( P ),           Assume ( Q ),
      P ∨ Q        ...                     ...
                    R                       R
               ),                      ),
OrE : ------------------------------------------------------------
                                R
In order to use the OrE rule, we must have three things:

- An OR statement of the form P ∨ Q
- A subproof that begins by assuming the left side of the OR statement (P) and ends with some claim R
- A subproof that begins by assuming the right side of the OR statement (Q) and ends with the same claim R

If we have all three parts, we can use the OrE rule and cite the OR statement and both subproofs to claim that R is true no matter what.
Here is a simple example showing the syntax of the OrE
rule:
(p ∨ q) ⊢ (q ∨ p)
Proof(
1 ( p ∨ q ) by Premise,
2 SubProof(
3 Assume ( p ),
4 ( q ∨ p ) by OrI2(3)
),
5 SubProof(
6 Assume ( q ),
7 ( q ∨ p ) by OrI1(6)
),
8 ( q ∨ p ) by OrE(1, 2, 5)
)
Here, we have the OR statement p ∨ q. We then have two subproofs where we separately assume that the two sides of the OR are true. The first subproof on line 2 starts by assuming the left side of the OR, p. It then uses OR introduction to reach the goal conclusion, q ∨ p. After reaching our goal, we end the first subproof and immediately start a second subproof. In the second subproof, we assume that the right side of our OR statement is true, q. We then use the other form of OR introduction to reach the SAME conclusion as we did in the first subproof – q ∨ p. We end the second subproof and can now use OrE to state that our conclusion q ∨ p must be true no matter what. After all, we knew that at least one of p or q was true, and we were able to reach the conclusion q ∨ p in both cases.
When using the justification:
OrE(1, 2, 5)
The first line number corresponds to our original OR statement (line 1, with p ∨ q for us), the second line number corresponds to the subproof where we assumed the first (left) side of that OR statement (line 2 for us, which starts the subproof where we assumed p), and the third line number corresponds to the subproof where we assumed the second (right) side of that OR statement (line 5 for us, which starts the subproof where we assumed q).
This proof shows that the OR operator is commutative.
Example 1
Suppose we want to prove the following sequent:
(p ∧ (q ∨ r)) ⊢ ((p ∧ q) ∨ (p ∧ r))
As we have done before, we start by extracting whatever we can from our premises:
(p ∧ (q ∨ r)) ⊢ ((p ∧ q) ∨ (p ∧ r))
Proof(
1 ( p ∧ (q ∨ r) ) by Premise,
2 ( p ) by AndE1(1),
3 ( q ∨ r ) by AndE2(1),
...
)
Next, we look at what we are trying to prove, and see that its top-level operator is an OR. If we already had either side of our goal OR statement (i.e., either p ∧ q or p ∧ r), then we could use OrI1 or OrI2 to create the desired proposition. This isn't the case for us, though, so we need to use a different strategy.

The next consideration when we want to prove an OR statement is whether we have another OR statement, either as a premise or a fact we have already established. If we do, then we can attempt to use OR elimination with that OR statement to build our goal conclusion ((p ∧ q) ∨ (p ∧ r)). We have the OR statement q ∨ r available, so we'll try to use OR elimination – we'll have a subproof where we assume q and try to reach (p ∧ q) ∨ (p ∧ r), and then a subproof where we assume r and try to reach (p ∧ q) ∨ (p ∧ r):
(p ∧ (q ∨ r)) ⊢ ((p ∧ q) ∨ (p ∧ r))
Proof(
1 ( p ∧ (q ∨ r) ) by Premise,
2 ( p ) by AndE1(1),
3 ( q ∨ r ) by AndE2(1),
4 SubProof(
5 Assume( q ),
6 ( p ∧ q ) by AndI(2,5),
7 ( (p ∧ q) ∨ (p ∧ r) ) by OrI1(6)
),
8 SubProof(
9 Assume( r ),
10 ( p ∧ r ) by AndI(2,9),
11 ( (p ∧ q) ∨ (p ∧ r) ) by OrI2(10)
),
12 ( (p ∧ q) ∨ (p ∧ r) ) by OrE(3, 4, 8)
)
We can make our final claim:
12 ( (p ∧ q) ∨ (p ∧ r) ) by OrE(3, 4, 8)
This works because we had an OR statement on line 3 (q ∨ r), assumed the left side of that OR (q) in subproof 4 and reached the conclusion (p ∧ q) ∨ (p ∧ r), and then assumed the right side of our OR (r) in subproof 8 and reached the SAME conclusion, (p ∧ q) ∨ (p ∧ r).
Example 2
Suppose we want to prove the following sequent:
((p ∨ q) ∧ (p ∨ r)) ⊢ (p ∨ (q ∧ r))
Note that the premise (p ∨ q) ∧ (p ∨ r) and the conclusion p ∨ (q ∧ r) are the two sides of a distributive law. If we also proved the sequent in the other direction, we would have shown that (p ∨ q) ∧ (p ∨ r) is equivalent to p ∨ (q ∧ r). We'll learn more about this process in section 4.8.
We start by pulling in our premises and extracting whatever information we can:
((p ∨ q) ∧ (p ∨ r)) ⊢ (p ∨ (q ∧ r))
Proof(
1 ( (p ∨ q) ∧ (p ∨ r) ) by Premise,
2 ( p ∨ q ) by AndE1(1),
3 ( p ∨ r ) by AndE2(1),
...
)
We see that the top-level operator of our conclusion (p ∨ (q ∧ r)) is an OR, so we apply the same strategy we did in the previous example – we see if we have an OR statement already available as a claim, and then try to use OR elimination on it to build to our conclusion in both subproofs. In this case, though, we have TWO OR statements – p ∨ q and p ∨ r. We will see that it doesn't matter which of these we choose, so let's pick the first one – p ∨ q:
((p ∨ q) ∧ (p ∨ r)) ⊢ (p ∨ (q ∧ r))
Proof(
1 ( (p ∨ q) ∧ (p ∨ r) ) by Premise,
2 ( p ∨ q ) by AndE1(1),
3 ( p ∨ r ) by AndE2(1),
4 SubProof(
5 Assume( p ),
6 ( p ∨ (q ∧ r) ) by OrI1(5)
),
7 SubProof(
8 Assume( q ),
//what do we do now?
)
...
)
We are able to finish the first subproof, but it's not clear what to do in the second subproof. We assume q, and know we have the goal of reaching the same conclusion as we did in the first subproof, p ∨ (q ∧ r)…but we don't have enough information yet to get there. The only piece of information that we haven't used that might help us is our second OR statement – p ∨ r. We are already inside a subproof, but we can still nest other subproofs – just as we can nest conditional statements in computer programs.

We start on a nested OR elimination approach with p ∨ r:
((p ∨ q) ∧ (p ∨ r)) ⊢ (p ∨ (q ∧ r))
Proof(
1 ( (p ∨ q) ∧ (p ∨ r) ) by Premise,
2 ( p ∨ q ) by AndE1(1),
3 ( p ∨ r ) by AndE2(1),
4 SubProof(
5 Assume( p ),
6 ( p ∨ (q ∧ r) ) by OrI1(5)
),
7 SubProof(
8 Assume( q ),
//start OR elimination with p ∨ r
9 SubProof(
10 Assume( p ),
11 ( p ∨ (q ∧ r) ) by OrI1(10)
),
12 SubProof(
13 Assume( r ),
14 ( q ∧ r ) by AndI(8, 13),
15 ( p ∨ (q ∧ r) ) by OrI2(14)
),
16 ( p ∨ (q ∧ r) ) by OrE(3, 9, 12)
),
17 ( p ∨ (q ∧ r) ) by OrE(2, 4, 7)
)
Note that we use the OrE rule twice – on line 16 to tie together subproofs 9 and 12 (where we processed the OR statement p ∨ r), and on line 17 to tie together subproofs 4 and 7 (where we processed the OR statement p ∨ q).

When we first started this problem, we mentioned that it didn't matter which OR statement we chose to work with – p ∨ q or p ∨ r. Indeed, we could have chosen p ∨ r instead – but we would end up nesting another OR elimination for p ∨ q:
((p ∨ q) ∧ (p ∨ r)) ⊢ (p ∨ (q ∧ r))
Proof(
1 ( (p ∨ q) ∧ (p ∨ r) ) by Premise,
2 ( p ∨ q ) by AndE1(1),
3 ( p ∨ r ) by AndE2(1),
//OR elimination for p ∨ r
4 SubProof(
5 Assume( p ),
6 ( p ∨ (q ∧ r) ) by OrI1(5)
),
7 SubProof(
8 Assume( r ),
//start OR elimination with p ∨ q
9 SubProof(
10 Assume( p ),
11 ( p ∨ (q ∧ r) ) by OrI1(10)
),
12 SubProof(
13 Assume( q ),
14 ( q ∧ r ) by AndI(13, 8),
15 ( p ∨ (q ∧ r) ) by OrI2(14)
),
16 ( p ∨ (q ∧ r) ) by OrE(2, 9, 12)
),
17 ( p ∨ (q ∧ r) ) by OrE(3, 4, 7)
)
Implies Rules
In this section, we will see the deduction rules for the implies operator.
Note that in our Logika proofs, the implies operator is typed as __>: but is rendered as →.
Implies elimination
Remember that → is a kind of logical "if-then". Here, we understand p → q to mean that p holds knowledge sufficient to deduce q – so, whenever p is proved to be a fact, then p → q enables q to be proved a fact, too. This is the implies elimination rule, ImplyE, and we can formalize it like this:
          P → Q     P
ImplyE : ------------
              Q
Here is a simple example of a proof that shows the syntax of the ImplyE
rule:
(a, a → b) ⊢ (b)
Proof(
1 ( a ) by Premise,
2 ( a → b ) by Premise,
3 ( b ) by ImplyE(2, 1)
)
Note that when we use ImplyE, we first list the line number of the implies statement, and then list the line number of the claim that matches the left side of that implies statement. The ImplyE rule then allows us to claim the right side of that implies statement.
Implies introduction
The idea behind the next deduction rule, implies introduction, is that we are introducing a new implies statement of the form P → Q. In order to do this, we must be able to show our logical "if-then" – that IF P exists, THEN we promise that Q will also exist. We do this by way of a subproof where we assume P. If we can reach Q by the end of that subproof, we will have shown that anytime P is true, then Q is also true. We will then be able to close the subproof by introducing P → Q with the ImplyI rule. We can formalize the rule like this:
SubProof(
Assume( P ),
...
Q
),
ImplyI : --------------
P → Q
Here is a simple example of a proof that shows the syntax of the ImplyI
rule:
(a → b, b → c) ⊢ (a → c)
Proof(
1 ( a → b ) by Premise,
2 ( b → c ) by Premise,
3 SubProof(
//we want to prove a → c, so we start by assuming a
4 Assume( a ),
5 ( b ) by ImplyE(1, 4),
6 ( c ) by ImplyE(2, 5)
//...and try to end with c
),
//then we can conclude that anytime a is true, then c is also true
7 ( a → c ) by ImplyI(3)
)
Note that when we use ImplyI, we list the line number of the subproof we just finished. We must have started that subproof by assuming the left side of the implies statement we are introducing, and ended that subproof with the right side of the implies statement we are introducing.
Example 1
Suppose we want to prove the following sequent:
(p → (q → r)) ⊢ ((q ∧ p) → r)
We start by listing our premise:
(p → (q → r)) ⊢ ((q ∧ p) → r)
Proof(
1 ( p → (q → r) ) by Premise,
...
)
We can't extract any information from the premise, so we shift to examining our conclusion. The top-level operator of our conclusion is an implies statement, so this tells us that we will need to use the ImplyI rule. We want to prove (q ∧ p) → r, so we need to show that whenever q ∧ p is true, then r is also true. We open a subproof and assume the left side of our goal implies statement (q ∧ p). If we can reach r by the end of the subproof, then we can use ImplyI to conclude (q ∧ p) → r:
(p → (q → r)) ⊢ ((q ∧ p) → r)
Proof(
1 ( p → (q → r) ) by Premise,
2 SubProof(
3 Assume( q ∧ p ),
//goal: get to r
),
//use ImplyI to conclude (q ∧ p) → r
)
Now we can complete the proof:
(p → (q → r)) ⊢ ((q ∧ p) → r)
Proof(
1 ( p → (q → r) ) by Premise,
2 SubProof(
3 Assume( q ∧ p ),
4 ( q ) by AndE1(3),
5 ( p ) by AndE2(3),
6 ( q → r ) by ImplyE(1, 5),
7 ( r ) by ImplyE(6, 4)
//goal: get to r
),
//use ImplyI to conclude (q ∧ p) → r
8 ( (q ∧ p) → r ) by ImplyI(2)
)
Example 2
Suppose we want to prove the following sequent:
(p → (q → r)) ⊢ ((p → q) → (p → r))
We see that we will have no information to extract from the premise. The top-level operator of our conclusion is an implies statement, so we start a subproof to introduce that implies statement. In our subproof, we will assume the left side of our goal implies statement (p → q) and will try to reach the right side of our goal (p → r):
(p → (q → r)) ⊢ ((p → q) → (p → r))
Proof(
1 ( p → (q → r) ) by Premise,
2 SubProof(
3 Assume( p → q ),
//goal: get to p → r
),
//use ImplyI to conclude (p → q) → (p → r)
)
We can see that our goal is to reach p → r in our subproof – so we need to introduce another implies statement. This tells us that we need to nest another subproof – in this one, we'll assume the left side of our current goal implies statement (p), and then try to reach the right side of that current goal (r). Then, we'd be able to finish that inner subproof by using ImplyI to conclude p → r:
(p → (q → r)) ⊢ ((p → q) → (p → r))
Proof(
1 ( p → (q → r) ) by Premise,
2 SubProof(
3 Assume( p → q ),
4 SubProof(
5 Assume( p ),
//goal: get to r
),
//use ImplyI to conclude p → r
//goal: get to p → r
),
//use ImplyI to conclude (p → q) → (p → r)
)
Now we can complete the proof:
(p → (q → r)) ⊢ ((p → q) → (p → r))
Proof(
1 ( p → (q → r) ) by Premise,
2 SubProof(
3 Assume( p → q ),
4 SubProof(
5 Assume( p ),
6 ( q ) by ImplyE(3, 5),
7 ( q → r ) by ImplyE(1, 5),
8 ( r ) by ImplyE(7, 6)
//goal: get to r
),
//use ImplyI to conclude p → r
9 ( p → r ) by ImplyI(4)
//goal: get to p → r
),
//use ImplyI to conclude (p → q) → (p → r)
10 ( (p → q) → (p → r) ) by ImplyI(2)
)
Example 3
Here is one more example, where we see that we can nest an ImplyI subproof and an OrE subproof:
(p → r, q → r) ⊢ ((p ∨ q) → r)
Proof(
1 ( p → r ) by Premise,
2 ( q → r ) by Premise,
3 SubProof(
//assume p ∨ q, try to get to r
4 Assume( p ∨ q ),
//nested subproof for OR elimination on p ∨ q
//try to get to r in both cases
5 SubProof(
6 Assume( p ),
7 ( r ) by ImplyE(1, 6)
),
8 SubProof(
9 Assume( q ),
10 ( r ) by ImplyE(2, 9)
),
11 ( r ) by OrE(4, 5, 8)
//goal: get to r
),
//use ImplyI to conclude (p ∨ q) → r
12 ( (p ∨ q) → r ) by ImplyI(3)
)
Negation Rules
In this section, we will see the deduction rules for the NOT operator, and we will introduce the notion of a contradiction, which occurs when, for some proposition P, we have proved both the facts P and ¬P. This indicates that we are in an impossible situation, and often means that we have made a bad previous assumption. In Logika, we use an F ("false") as a claim to indicate that we've reached a contradiction. In other texts, you sometimes see the symbol ⊥ (read as "bottom") used for a contradiction.
Negation elimination
The negation elimination rule allows you to claim that you have reached a contradiction. We can formalize the rule like this:
P ¬P
NegE : -----------
F
Here is a simple example of a proof that shows the syntax of the NegE rule:
(q, ¬q) ⊢ (F)
Proof(
1 ( q ) by Premise,
2 ( ¬q ) by Premise,
3 ( F ) by NegE(1, 2)
)
We use the NegE rule when we have facts for both P and ¬P for some proposition P. When we use the justification, we first list the line number of the claim for P (line 1, in our case) and then the line number of the claim for ¬P (line 2, in our case).
Sometimes, the proposition P
itself has a NOT operator. Consider this example:
(¬q, ¬¬q) ⊢ (F)
Proof(
1 ( ¬q ) by Premise,
2 ( ¬¬q ) by Premise,
3 ( F ) by NegE(1, 2)
)
Here, our proposition P
is the claim ¬q
, and our proposition that is of the form ¬P
is the claim ¬¬q
.
Negation introduction
The negation introduction rule allows us to introduce a NOT operation. If assuming some proposition P
leads to a contradiction, then we must have made a bad assumption – P
must NOT be true after all. We can then introduce the fact ¬P
. We can formalize the negation introduction rule like this:
SubProof(
Assume ( P ),
...
F
),
NegI : --------------
¬P
Here is a simple example of a proof that shows the syntax of the NegI
rule:
(p, q → ¬p) ⊢ (¬q)
Proof(
1 ( p ) by Premise,
2 ( q → ¬p ) by Premise,
3 SubProof(
4 Assume ( q ) ,
5 ( ¬p ) by ImplyE(2, 4),
6 ( F ) by NegE(1, 5)
),
7 ( ¬q ) by NegI(3)
)
Note that the negation introduction rule involves a subproof – if we wish to prove ¬P for some proposition P, then we start a subproof where we assume P. If we are able to reach a contradiction on the last line of that subproof, then we can use the NegI rule after the subproof ends to claim that our assumption was bad and that it is actually ¬P that is true. When we use NegI as a justification, we list the line number corresponding to this subproof.
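With ImplyE, NegE, and NegI in hand, we can now prove the sequent from the start of this chapter, p → q, ¬q ⊢ ¬p, whose validity we checked with a truth table earlier. (This proof is our own addition, written in the same style as the examples above.)

(p → q, ¬q) ⊢ (¬p)
Proof(
  1 ( p → q ) by Premise,
  2 ( ¬q ) by Premise,
  3 SubProof(
    4 Assume( p ),
    5 ( q ) by ImplyE(1, 4),
    6 ( F ) by NegE(5, 2)
  ),
  7 ( ¬p ) by NegI(3)
)

Assuming p lets us derive q with ImplyE, which contradicts the premise ¬q; NegI then closes the subproof and gives us ¬p.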
Bottom elimination
There is a special law for reasoning forwards from an impossible situation — the ⊥e law — which says that, in the case of a contradiction, everything becomes a fact. (That is, "if False is a fact, so is everything else".) This rule is called "bottom elimination", and is written as BottomE. Here is a formalization of the rule:
F
BottomE : ------ for any proposition, Q, at all
Q
Here is a simple example of a proof that shows the syntax of the BottomE
rule:
(p, ¬p) ⊢ (q)
Proof(
1 ( p ) by Premise,
2 ( ¬p ) by Premise,
3 ( F ) by NegE(1, 2),
4 ( q ) by BottomE(3)
)
Note that when we use BottomE
as the justification, we list the line number of where we reached a contradiction (F
).
The bottom elimination rule works well with case analysis, where we discover that one case is impossible. Here is a classic example:
(p ∨ q, ¬p) ⊢ (q)
Proof(
1 ( p ∨ q ) by Premise,
2 ( ¬p ) by Premise,
3 SubProof(
4 Assume( p ),
5 ( F ) by NegE(4, 2),
6 ( q ) by BottomE(5)
),
7 SubProof(
8 Assume( q )
),
9 ( q ) by OrE(1, 3, 7)
)
Considering the premise p ∨ q, we analyze our two cases by starting OR elimination. The first case, where p holds true, is impossible, because it causes a contradiction. The BottomE rule lets us gracefully prove q in this "impossible case". (You can read lines 4-6 as saying, "in the case when p might hold true, there is a contradiction, and in such an impossible situation, we can deduce whatever we like, so we deduce q to finish this impossible case".)

The second case, that q holds true, is the only realistic case, and it immediately yields the conclusion. The proof finishes the two-case analysis with the OrE rule.
Proof by contradiction
The proof by contradiction rule, PbC, says that when assuming ¬P leads to a contradiction for some proposition P, then we made a bad assumption and thus P must be true. It is very similar to the NegI rule, except PbC has us assuming ¬P and eventually concluding P, while the NegI rule has us assuming P and eventually concluding ¬P.
Here is a formalization of PbC
:
SubProof(
Assume( ¬P ),
...
F
),
PbC: --------------------
P
And here is an example that demonstrates the syntax of the rule:
(¬¬p) ⊢ (p)
Proof(
1 ( ¬¬p ) by Premise,
2 SubProof(
3 Assume ( ¬p ),
4 ( F ) by NegE(3, 1)
),
5 ( p ) by PbC(2)
)
When we use the PbC rule as a justification for a claim P, we cite the line number of the subproof where we assumed ¬P and ended in a contradiction.
Example 1
Suppose we want to prove the following sequent:
(¬p ∧ ¬q) ⊢ (¬(p ∨ q))
We start by listing our premise, and extracting the two sides of the AND statement:
(¬p ∧ ¬q) ⊢ (¬(p ∨ q))
Proof(
1 ( ¬p ∧ ¬q ) by Premise,
2 ( ¬p ) by AndE1(1),
3 ( ¬q ) by AndE2(1),
...
)
Next, we see that our conclusion has the form NOT (something), so this tells us that we will need to introduce a NOT (using the NegI rule). In fact, ANY time we wish to prove a proposition of the form NOT (something), we will always use the NegI rule. (We will discuss proof strategies in detail in the next section.) Since we want to prove ¬(p ∨ q), we open a subproof where we assume p ∨ q. If we can end that subproof with a contradiction, then we can use NegI afterwards to conclude ¬(p ∨ q).
We know that we want this proof structure:
(¬p ∧ ¬q) ⊢ (¬(p ∨ q))
Proof(
1 ( ¬p ∧ ¬q ) by Premise,
2 ( ¬p ) by AndE1(1),
3 ( ¬q ) by AndE2(1),
4 SubProof(
5 Assume ( p ∨ q ),
//want to reach a contradiction
),
//will use NegI to conclude ¬(p ∨ q)
)
We know we must reach a contradiction in our subproof. We see that we have the claims ¬p, ¬q, and p ∨ q. Since at least one of p or q is true, and since either one would yield a contradiction with one of ¬p or ¬q, we start on OR elimination:
(¬p ∧ ¬q) ⊢ (¬(p ∨ q))
Proof(
1 ( ¬p ∧ ¬q ) by Premise,
2 ( ¬p ) by AndE1(1),
3 ( ¬q ) by AndE2(1),
4 SubProof(
5 Assume ( p ∨ q ),
//OR elimination subproofs on p ∨ q
6 SubProof(
7 Assume( p ),
8 ( F ) by NegE(7, 2)
),
9 SubProof(
10 Assume( q ),
11 ( F ) by NegE(10, 3)
),
//use OrE rule to tie together subproofs
//want to reach a contradiction
),
//will use NegI to conclude ¬(p ∨ q)
)
We see that both OR elimination subproofs ended with a contradiction (F). Just like any other use of OrE, we restate that common conclusion after the two subproofs. We knew at least one of p or q was true, and both cases ended in a contradiction – so the contradiction holds no matter what:
(¬p ∧ ¬q) ⊢ (¬(p ∨ q))
Proof(
1 ( ¬p ∧ ¬q ) by Premise,
2 ( ¬p ) by AndE1(1),
3 ( ¬q ) by AndE2(1),
4 SubProof(
5 Assume ( p ∨ q ),
//OR elimination subproofs on p ∨ q
6 SubProof(
7 Assume( p ),
8 ( F ) by NegE(7, 2)
),
9 SubProof(
10 Assume( q ),
11 ( F ) by NegE(10, 3)
),
//use OrE rule to tie together subproofs
12 ( F ) by OrE(5, 6, 9)
//want to reach a contradiction
),
//will use NegI to conclude ¬(p ∨ q)
)
All that remains is to use the NegI rule to finish subproof 4, as that subproof ended with a contradiction:
(¬p ∧ ¬q) ⊢ (¬(p ∨ q))
Proof(
1 ( ¬p ∧ ¬q ) by Premise,
2 ( ¬p ) by AndE1(1),
3 ( ¬q ) by AndE2(1),
4 SubProof(
5 Assume ( p ∨ q ),
//OR elimination subproofs on p ∨ q
6 SubProof(
7 Assume( p ),
8 ( F ) by NegE(7, 2)
),
9 SubProof(
10 Assume( q ),
11 ( F ) by NegE(10, 3)
),
//use OrE rule to tie together subproofs
12 ( F ) by OrE(5, 6, 9)
//want to reach a contradiction
),
//will use NegI to conclude ¬(p ∨ q)
13 ( ¬(p ∨ q) ) by NegI(4)
)
Example 2
When doing propositional logic translations, we learned that p → q is equivalent to its contrapositive, ¬q → ¬p. We will prove one direction of this equivalence (to show that two statements are provably equivalent, which we will see in section 4.8, we would need to prove both directions):
(p → q) ⊢ (¬q → ¬p)
We notice that the top-level operator of our conclusion is an IMPLIES operator, so we know that we need to introduce an implies operator. We saw in the previous section that the blueprint for introducing an implies operator is with a subproof: assume the left side of the goal implies statement, and try to reach the right side of the goal implies statement by the end of the subproof.
We have this proof structure:
(p → q) ⊢ (¬q → ¬p)
Proof(
1 ( p → q ) by Premise,
2 SubProof(
3 Assume( ¬q ),
//goal: reach ¬p
),
//use ImplyI to conclude ¬q → ¬p
)
We see that our goal in the subproof is to show ¬p – if we could do that, then we could tie together that subproof with the ImplyI rule. Since our intermediate goal is to prove NOT (something), we use our strategy for negation introduction as an inner subproof. We finish the proof as shown:
(p → q) ⊢ (¬q → ¬p)
Proof(
1 ( p → q ) by Premise,
2 SubProof(
3 Assume( ¬q ),
//use NegI strategy to prove ¬p
4 SubProof(
5 Assume( p ),
6 ( q ) by ImplyE(1, 5),
7 ( F ) by NegE(6, 3)
),
8 ( ¬p ) by NegI(4)
//goal: reach ¬p
),
//use ImplyI to conclude ¬q → ¬p
9 ( ¬q → ¬p ) by ImplyI(2)
)
Example 3
Suppose we want to prove the sequent:
(¬(¬p ∨ ¬q)) ⊢ (p ∧ q)
We see that there is nothing to extract from our premise, and that the top-level operator of the conclusion (p ∧ q) is an AND. We will need to introduce an AND statement – but the only way we can create p ∧ q is to separately prove both p and q.

It is not immediately clear how to prove either p or q. We will discuss proof strategies in more detail in the next section, but PbC is a good fallback option if you don't have a clear path for how to prove something and some of the claims in the proof involve negation. Since we wish to prove p, we will assume ¬p in a subproof. If we can reach a contradiction, then we can use PbC to conclude p:
(¬(¬p ∨ ¬q)) ⊢ (p ∧ q)
Proof(
1 ( ¬(¬p ∨ ¬q) ) by Premise,
2 SubProof(
3 Assume( ¬p ),
//goal: contradiction
),
//use PbC to conclude p
//similarly prove q
//use AndI to conclude p ∧ q
)
In subproof 2, we know we need to end with a contradiction. The only propositions we have to work with are ¬(¬p ∨ ¬q) and ¬p. But if we use OrI1 with ¬p, then we could have ¬p ∨ ¬q – and then we could claim a contradiction. We complete the proof as shown (using the same strategy to prove q):
(¬(¬p ∨ ¬q)) ⊢ (p ∧ q)
Proof(
1 ( ¬(¬p ∨ ¬q) ) by Premise,
2 SubProof(
3 Assume( ¬p ),
4 ( ¬p ∨ ¬q ) by OrI1(3),
5 ( F ) by NegE(4, 1)
//goal: contradiction
),
//use PbC to conclude p
6 ( p ) by PbC(2),
//similarly prove q
7 SubProof(
8 Assume( ¬q ),
9 ( ¬p ∨ ¬q ) by OrI2(8),
10 ( F ) by NegE(9, 1)
//goal: contradiction
),
11 ( q ) by PbC(7),
//use AndI to conclude p ∧ q
12 ( p ∧ q ) by AndI(6, 11)
)
Law of the excluded middle
The law of the excluded middle (LEM) is a famous consequence of PbC: from no starting premises at all, we can prove p ∨ ¬p for any proposition we can imagine:
⊢ (p ∨ ¬p)
Proof(
1 SubProof(
2 Assume( ¬(p ∨ ¬p) ),
3 SubProof(
4 Assume( p ),
5 ( p ∨ ¬ p ) by OrI1(4),
6 ( F ) by NegE(5, 2)
),
7 ( ¬p ) by NegI(3),
8 ( p ∨ ¬p ) by OrI2(7),
9 ( F ) By NegE(8, 2)
),
10 ( p ∨ ¬p ) By PbC(1)
)
In other proofs involving negation and no clear path forward, it is sometimes useful to first derive LEM (this is possible since no premises are needed). If we have the claim p ∨ ¬p in a proof, then we can use OR elimination where we separately assume p and then ¬p to try to reach the same conclusion. Here is one such example:
(p → q) ⊢ (¬p ∨ q)
Proof(
1 ( p → q ) by Premise,
// start of previous p ∨ ¬ p proof
2 SubProof(
3 Assume( ¬(p ∨ ¬p) ),
4 SubProof(
5 Assume( p ),
6 ( p ∨ ¬ p ) by OrI1(5),
7 ( F ) by NegE(6, 3)
),
8 ( ¬p ) by NegI(4),
9 ( p ∨ ¬p ) by OrI2(8),
10 ( F ) By NegE(9, 3)
),
11 ( p ∨ ¬p ) By PbC(2), // conclusion of p ∨ ¬ p proof
12 SubProof(
13 Assume( p ),
14 ( q ) By ImplyE(1, 13),
15 ( ¬p ∨ q ) By OrI2(14)
),
16 SubProof(
17 Assume( ¬p ),
18 ( ¬p ∨ q ) By OrI1(17)
),
19 ( ¬p ∨ q ) By OrE(11, 12, 16)
)
Summary and Strategies
When examining more complex propositional logic sequents, it can be challenging to know where to start. In this section, we summarize all available rules in propositional logic and discuss strategies for approaching proofs.
AND rules
Rule summaries:
         P   Q                 P ∧ Q                  P ∧ Q
AndI : ---------    AndE1 : ----------    AndE2 : ----------
         P ∧ Q                   P                      Q
Rule syntax summaries:
...
x ( p ) by (...),
y ( q ) by (...),
z ( p ∧ q ) by AndI(x, y),
...
...
x ( p ∧ q ) by (...),
y ( p ) by AndE1(x),
z ( q ) by AndE2(x),
...
OR rules
Rule summaries:
          P                      Q
OrI1 : ---------      OrI2 : ---------
        P ∨ Q                  P ∨ Q

               SubProof(               SubProof(
                 Assume ( P ),           Assume ( Q ),
      P ∨ Q        ...                     ...
                    R                       R
               ),                      ),
OrE : ------------------------------------------------------------
                                R
Rule syntax summaries:
...
x ( p ) by (...),
y ( p ∨ q ) by OrI1(x),
...
...
x ( q ) by (...),
y ( p ∨ q ) by OrI2(x),
...
...
a ( p ∨ q ) by (...),
b SubProof(
c Assume( p ),
...
d ( r ) by (...)
),
f SubProof(
g Assume( q ),
...
h ( r ) by (...)
),
i ( r ) by OrE(a, b, f),
Implies rules
Rule summaries:
                                       SubProof(
                                         Assume( P ),
                                         ...
                                         Q
          P → Q     P                  ),
ImplyE : -------------      ImplyI : ----------------
               Q                          P → Q
Rule syntax summaries:
...
x ( p → q ) by (...),
y ( p ) by (...),
z ( q ) by ImplyE(x, y),
...
...
a SubProof(
b Assume( p ),
...
c ( q ) by (...)
),
d ( p → q ) by ImplyI(a),
...
Negation rules
Rule summaries:
P ¬P
NegE : ----------
F
SubProof(
Assume ( P ),
...
F
),
NegI : --------------
¬P
F
BottomE : ------ for any proposition, Q, at all
Q
SubProof(
Assume( ¬P ),
...
F
),
PbC: --------------------
P
Rule syntax summaries:
...
x ( p ) by (...),
y ( ¬p ) by (...),
z ( F ) by NegE(x, y),
...
...
a SubProof(
b Assume( p ),
...
c ( F ) by (...)
),
d ( ¬p ) by NegI(a),
...
...
x ( F ) by (...),
y ( q ) by BottomE(x),
...
...
a SubProof(
b Assume( ¬p ),
...
c ( F ) by (...)
),
d ( p ) by PbC(a),
...
Strategies
1. Write down all premises first. Can you extract anything from the premises?
   - If you have p ∧ q, use AndE1 to extract p by itself and then AndE2 to extract q by itself.
   - If you have p → q and p, use ImplyE to get q.
   - If you have p and ¬p, use NegE to claim a contradiction, F.
2. Look at the top-level operator of what you are trying to prove.
   - Are you trying to prove something of the form p → q? Use ImplyI. Open a subproof, assume p, and get to q by the end of the subproof. After the subproof, use ImplyI to conclude p → q.
   - Are you trying to prove something of the form ¬p? Use NegI. Open a subproof, assume p, and get a contradiction, F, by the end of the subproof. After the subproof, use NegI to conclude ¬p.
   - Are you trying to prove something of the form p ∧ q? Try to prove p by itself and then q by itself. Afterward, use AndI to conclude p ∧ q.
   - Are you trying to prove something of the form p ∨ q? See if you have either p or q by itself – if you do, use either OrI1 or OrI2 to conclude p ∨ q.
3. You'll need to nest the approaches from step 2. Once you are in a subproof, think about what you are trying to prove by the end of that subproof. Follow the strategy in step 2 to prove your current goal, nesting subproofs as needed. As you work, stop and scan the propositions that you have available. See if you can extract anything from them as you did for the premises in step 1.
4. No match, or still stuck?
   - Do you have an OR statement available? Try using OR elimination to prove your goal conclusion.
   - Do your propositions have NOT operators, but don't fit the form for using NegI? Try using PbC. If you are trying to prove something of the form p, open a subproof, assume ¬p, and try to reach a contradiction by the end of the subproof. Afterward, use PbC to conclude p.
   - As a last resort, try pasting in the proof for the law of the excluded middle (see section 4.5). Then use OR elimination on p ∨ ¬p.
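As a quick illustration of strategies 1 and 2, consider the sequent p → q, p ⊢ p ∧ q (an extra example we add here). We first extract what we can from the premises, then build the AND conclusion with AndI:

(p → q, p) ⊢ (p ∧ q)
Proof(
  1 ( p → q ) by Premise,
  2 ( p ) by Premise,
  3 ( q ) by ImplyE(1, 2),
  4 ( p ∧ q ) by AndI(2, 3)
)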
Proofs can be quite challenging. You might follow one approach, get stuck, and not be able to make progress. If this happens, backtrack and follow a different approach. If you are using Logika to verify your work, make sure it does not mark any lines in the proof in red – this means that you’ve made an invalid conclusion along the way, or that your justification for a particular line doesn’t follow the expected format. Try to fix these errors before continuing on with the proof.
Theorems
Definition
A theorem in propositional logic is something that is always true with no need for premises. The truth table for a theorem is always a tautology – it is true for any truth assignment.
To express a theorem as a sequent, we write:
⊢ (theorem)
This shows that we are trying to prove our theorem with NO premises. Such a proof will always immediately begin with a subproof, as there are no premises to process.
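For instance, here is a very small theorem (an extra example we add for illustration): p → p. As promised, its proof begins immediately with a subproof, whose only line – the assumption – also serves as its final claim, just like the one-line subproof used with OrE in the bottom elimination example earlier:

⊢ (p → p)
Proof(
  1 SubProof(
    2 Assume( p )
  ),
  3 ( p → p ) by ImplyI(1)
)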
Law of the excluded middle, revisited
For example, the law of the excluded middle (LEM), p ∨ ¬p, is a theorem. We proved in section 4.5 that p ∨ ¬p is always true with no premises:
⊢ (p ∨ ¬p)
Proof(
1 SubProof(
2 Assume( ¬(p ∨ ¬p) ),
3 SubProof(
4 Assume( p ),
5 ( p ∨ ¬ p ) by OrI1(4),
6 ( F ) by NegE(5, 2)
),
7 ( ¬p ) by NegI(3),
8 ( p ∨ ¬p ) by OrI2(7),
9 ( F ) By NegE(8, 2)
),
10 ( p ∨ ¬p ) By PbC(1)
)
We also see that the truth table for LEM is a tautology:
*
-------------
p # p ∨ ¬p
-------------
T # T F
F # T T
-------------
Tautology
Another example
Suppose we wish to prove the following theorem of propositional logic:
(p → q) → ((¬p → q) → q)
We would need to prove the sequent:
⊢ ((p → q) → ((¬p → q) → q))
We see that the top-level operator of what we are trying to prove is an implies operator. So, we begin our proof using the strategy for implies introduction:
⊢ ((p → q) → ((¬p → q) → q))
Proof(
1 SubProof(
2 Assume( p → q ),
//goal: reach (¬p → q) → q
),
//use ImplyI to conclude (p → q) → ((¬p → q) → q)
)
Inside subproof 1, we are trying to prove (¬p → q) → q. The top-level operator of that statement is an implies, so we nest another subproof with the goal of using implies introduction:
⊢ ((p → q) → ((¬p → q) → q))
Proof(
1 SubProof(
2 Assume( p → q ),
3 SubProof(
4 Assume( ¬p → q ),
//goal: reach q
)
//use ImplyI to conclude (¬p → q) → q
//goal: reach (¬p → q) → q
),
//use ImplyI to conclude (p → q) → ((¬p → q) → q)
)
Now we must prove q in subproof 3. We have the propositions p → q and ¬p → q available – we can see that if we had LEM (p ∨ ¬p) available, then we could use OR elimination to get our q in both cases. We insert the LEM proof into subproof 3:
⊢ ((p → q) → ((¬p → q) → q))
Proof(
1 SubProof(
2 Assume( p → q ),
3 SubProof(
4 Assume( ¬p → q ),
//Begin LEM proof, p ∨ ¬p
5 SubProof(
6 Assume( ¬(p ∨ ¬p) ),
7 SubProof(
8 Assume( p ),
9 ( p ∨ ¬ p ) by OrI1(8),
10 ( F ) by NegE(9, 6)
),
11 ( ¬p ) by NegI(7),
12 ( p ∨ ¬p ) by OrI2(11),
13 ( F ) By NegE(12, 6)
),
14 ( p ∨ ¬p ) By PbC(5)
//End LEM proof for p ∨ ¬p
//goal: reach q
)
//use ImplyI to conclude (¬p → q) → q
//goal: reach (¬p → q) → q
),
//use ImplyI to conclude (p → q) → ((¬p → q) → q)
)
Finally, we do OR elimination with p ∨ ¬p
and tie together the rest of the proof:
⊢ ((p → q) → ((¬p → q) → q))
Proof(
1 SubProof(
2 Assume( p → q ),
3 SubProof(
4 Assume( ¬p → q ),
//Begin LEM proof, p ∨ ¬p
5 SubProof(
6 Assume( ¬(p ∨ ¬p) ),
7 SubProof(
8 Assume( p ),
9 ( p ∨ ¬ p ) by OrI1(8),
10 ( F ) by NegE(9, 6)
),
11 ( ¬p ) by NegI(7),
12 ( p ∨ ¬p ) by OrI2(11),
13 ( F ) By NegE(12, 6)
),
14 ( p ∨ ¬p ) By PbC(5),
//End LEM proof for p ∨ ¬p
//use OR elimination on p ∨ ¬p, try to reach q
15 SubProof(
16 Assume( p ),
17 ( q ) By ImplyE(2, 16)
),
18 SubProof(
19 Assume( ¬p ),
20 ( q ) By ImplyE(4, 19)
),
21 ( q ) By OrE(14, 15, 18)
//goal: reach q
),
//use ImplyI to conclude (¬p → q) → q
22 ( (¬p → q) → q ) By ImplyI(3)
//goal: reach (¬p → q) → q
),
//use ImplyI to conclude (p → q) → ((¬p → q) → q)
23 ( (p → q) → ((¬p → q) → q) ) By ImplyI(1)
)
If we complete a truth table for (p → q) → ((¬p → q) → q)
, we also see that it is a tautology:
*
-----------------------------------
p q # (p →: q) →: ((¬p →: q) →: q)
-----------------------------------
T T # T T F T T
T F # F T F T F
F T # T T T T T
F F # T T T F T
----------------------------------
Tautology
Equivalence
In this section, we will revisit the notion of equivalence. In chapter 2, we saw how we could use truth tables to show that two logical formulae are equivalent. Here, we will see that we can also show they are equivalent using our natural deduction proof rules.
Semantic equivalence
We saw in section 2.4 that two (or more) logical statements S1
and S2
were said to be semantically equivalent if and only if:
S1 ⊨ S2
and
S2 ⊨ S1
As a reminder, S1 ⊨ S2 means that S1 semantically entails S2 – every truth assignment that satisfies S1 also satisfies S2.

Semantic equivalence between S1 and S2 means that each proposition semantically entails the other – that S1 and S2 have the same truth value for every truth assignment; i.e., their truth tables evaluate exactly the same.
Showing semantic equivalence with two truth tables
For example, if we wished to show that the propositions p → ¬q and ¬(p ∧ q) were semantically equivalent, then we could create truth tables for each proposition:
*
--------------
p q # p →: ¬q
--------------
T T # F F
T F # T T
F T # T F
F F # T T
--------------
Contingent
T: [T F] [F T] [F F]
F: [T T]
*
----------------
p q # ¬(p ∧ q)
----------------
T T # F T
T F # T F
F T # T F
F F # T F
----------------
Contingent
T: [T F] [F T] [F F]
F: [T T]
We see that the same set of truth assignments, [T F] [F T] [F F], satisfies both p →: ¬q and ¬(p ∧ q).
Showing semantic equivalence with one truth table
To show that propositions S1 and S2 are semantically equivalent, we need to show that if S1 is true, then so is S2, and that if S2 is true, then so is S1. Instead of comparing the truth tables of both S1 and S2, we could instead express our requirements as a bi-implication: S1 ↔ S2. To express a bi-implication operator, we can use a conjunction of two implications: (S1 → S2) ∧ (S2 → S1). If this conjunction is a tautology, then we know that if one proposition is true, then the other one is too – that S1 and S2 are semantically equivalent.
Below, we show that p →: ¬q
and ¬(p ∧ q)
are semantically equivalent using one truth table:
*
-------------------------------------------------------
p q # ((p →: ¬q) →: ¬(p ∧ q)) ∧ (¬(p ∧ q) →: (p →: ¬q))
-------------------------------------------------------
T T # F F T F T T F T T F F
T F # T T T T F T T F T T T
F T # T F T T F T T F T T F
F F # T T T T F T T F T T T
-------------------------------------------------------
Tautology
Provable equivalence
Two propositional logic statements S1
and S2
are provably equivalent if and only if we can prove both of the following sequents:
(S1) ⊢ (S2)
and
(S2) ⊢ (S1)
We can also write:
(S2) ⟛ (S1)
For example, suppose we wish to show that the propositions p → ¬q
and ¬(p ∧ q)
are provably equivalent. We must prove the following sequents:
(p → ¬q) ⊢ (¬(p ∧ q))
and
(¬(p ∧ q)) ⊢ (p → ¬q)
We complete both proofs below:
(p → ¬q) ⊢ (¬(p ∧ q))
Proof(
1 ( p → ¬q ) by Premise,
2 SubProof(
3 Assume( p ∧ q ),
4 ( p ) by AndE1(3),
5 ( q ) by AndE2(3),
6 ( ¬q ) by ImplyE(1, 4),
7 ( F ) by NegE(5, 6)
),
8 ( ¬(p ∧ q) ) by NegI(2)
)
(¬(p ∧ q)) ⊢ (p → ¬q)
Proof(
1 ( ¬(p ∧ q) ) by Premise,
2 SubProof(
3 Assume ( p ),
4 SubProof(
5 Assume( q ),
6 ( p ∧ q ) by AndI(3, 5),
7 ( F ) by NegE(6, 1)
),
8 ( ¬q ) by NegI(4)
),
9 ( p → ¬q ) by ImplyI(2)
)
Soundness and Completeness
Section 4.8 showed us that we can prove two statements are semantically equivalent with truth tables and provably equivalent with deduction proofs. Does it matter which approach we use? Will there ever be a time when two statements are semantically equivalent but not provably equivalent, or vice versa? Will there ever be a time when a set of premises semantically entails a conclusion, but that the premises do not prove (using our deduction proofs) the conclusion, or vice versa?
These questions lead us to the notions of soundness and completeness. Formal treatment of both concepts is beyond the scope of this course, but we will introduce both definitions and a rough idea of the proofs of soundness and completeness in propositional logic.
Soundness
A proof system is sound if everything that is provable is actually true. Propositional logic is sound if when we use deduction rules to prove that (P1, P2, ..., Pn) ⊢ (C)
(that a set of premises proves a conclusion) then we can also use a truth table to show that P1, P2, ..., Pn ⊨ C
(that a set of premises semantically entails a conclusion).
Propositional logic is, in fact, sound.
To get an idea of the proof, consider the AndE1
deduction rule. It allows us to directly prove:
(P ∧ Q) ⊢ (P)
I.e., if we have P ∧ Q
as a premise or as a claim in part of a proof, then we can use AndE1
to conclude P
. We must also show that:
P ∧ Q ⊨ P
I.e., that any time P ∧ Q is true in a truth table, then P is also true. And of course, we can examine the truth table for P ∧ Q and see that it is only true in the cases that P is also true.
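For instance, using the sequent truth tables from earlier in this chapter, the check for AndE1 might look like this (our own rendering):

*
---------------------
p q # (p ∧ q) ⊢ p
---------------------
T T # T T
T F # F T
F T # F F
F F # F F
---------------------
Valid [T T]

The only truth assignment that makes the premise p ∧ q true is [T T], and it also makes the conclusion p true, so the sequent is valid.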
Consider the AndI
deduction rule next. It allows us to directly prove:
(P, Q) ⊢ (P ∧ Q)
I.e., if we have both P
and Q
as premises or claims in part of a proof, then we can use AndI
to conclude P ∧ Q
. We must also show that:
P, Q ⊨ P ∧ Q
I.e., that any time both P
and Q
are true in a truth table, then P ∧ Q
is also true. And of course, we can examine the truth table for P ∧ Q
and see that whenever P
and Q
are true, then P ∧ Q
is also true.
To complete the soundness proof, we would need to examine the rest of our deduction rules in a similar process. We would then use an approach called mathematical induction (which we will see for other applications in Chapter 7) to extend the idea to a proof that applies multiple deduction rules in a row.
Completeness
A proof system is complete if everything that is true can be proved. Propositional logic is complete if when we can use a truth table to show that P1, P2, ..., Pn ⊨ C
, then we can also use deduction rules to prove that (P1, P2, ..., Pn) ⊢ (C)
.
Propositional logic is also complete.
We assume that P1, P2, ..., Pn ⊨ C
, and we consider the truth table for (P1 ∧ P2 ∧ ... ∧ Pn) → C
(since that will be a tautology whenever P1, P2, ..., Pn ⊨ C
). In order to show propositional logic is complete, we must show that we can use our deduction rules to prove (P1, P2, ..., Pn) ⊢ (C)
.
The idea is to use LEM for each propositional atom A
to obtain A ∨ ¬A
(corresponding to the truth assignments in the (P1 ∧ P2 ∧ ... ∧ Pn) → C
truth table). We then use OR elimination on each combination of truth assignments, with separate cases for each logical operator being used.