Propositional Logic Proofs
Introduction
While we can use truth tables to check whether a set of premises entail a conclusion, this requires testing all possible truth assignments – of which there are exponentially many. In this chapter, we will learn the process of natural deduction in propositional logic. This will allow us to start with a set of known facts (premises) and apply a series of rules to see if we can reach some goal conclusion. Essentially, we will be able to see whether a given conclusion necessarily follows from a set of premises.
We will use the Logika tool to check whether our proofs correctly follow our deduction rules. HOWEVER, these proofs can and do exist outside of Logika. Different settings use slightly different syntaxes for the deduction rules, but the rules and proof system are the same. We will merely use Logika to help check our work.
Sequents, premises, and conclusions
A sequent is a mathematical term for an assertion. We use the notation:
p0, p1, ..., pm ⊢ c
The p0, p1, …, pm are called premises and c is called the conclusion. The ⊢
is called the turnstile operator, and we read it as “prove”. The full sequent is read as:
Statements p0, p1, ..., pm PROVE c
A sequent is saying that if we accept statements p0, p1, …, pm as facts, then we guarantee that c is a fact as well.
For example, in the sequent:
p → q , ¬ q ⊢ ¬ p
The premises are p → q and ¬q, and the conclusion is ¬p.
(Shortcut: we can use |- in place of the ⊢ turnstile operator.)
Sequent validity
A sequent is said to be valid if every truth assignment that makes all of the premises true also makes the conclusion true.
For example, consider the following sequent:
p → q , ¬ q ⊢ ¬ p
To check if this sequent is valid, we must find all truth assignments for which both premises are true, and then ensure that those truth assignments also make the conclusion true.
Here is a (non-Logika syntax) type of truth table that combines all three statements:

p  q  |  p → q  |  ¬q  |  ¬p
T  T  |    T    |  F   |  F
T  F  |    F    |  T   |  F
F  T  |    T    |  F   |  T
F  F  |    T    |  T   |  T
Examining each row in the above truth table, we see that only the truth assignment [F F] makes both premises (p → q and ¬q) true. Looking right, we see that the same truth assignment also makes the conclusion (¬p) true, which means that the sequent is valid.
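This validity check can be automated by brute force: enumerate every truth assignment and confirm that whenever all premises are true, the conclusion is true as well. Here is a minimal sketch in Python (the helper name `is_valid` and the lambda encoding are our own illustration, not part of Logika):

```python
from itertools import product

def is_valid(premises, conclusion, num_vars):
    """Brute-force check: in every row where all premises hold, the conclusion must hold."""
    for assignment in product([True, False], repeat=num_vars):
        if all(p(*assignment) for p in premises) and not conclusion(*assignment):
            return False  # found a row where the premises are true but the conclusion is false
    return True

# The sequent p → q, ¬q ⊢ ¬p  (encoding p → q as (not p) or q)
premises = [lambda p, q: (not p) or q,   # p → q
            lambda p, q: not q]          # ¬q
conclusion = lambda p, q: not p          # ¬p

print(is_valid(premises, conclusion, 2))  # prints True: the sequent is valid
```

Note that this scan looks at all 2^n rows, which is exactly the exponential cost that natural deduction lets us avoid.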
Using Logika for proof verification
We can use the Logika tool to help check the correctness of our proofs. (Again, Logika is just a tool to help check our work – we could write the same argument in a different environment or on paper, and the meaning would be the same.)
Each Logika proof should be written in a separate file with a .logika extension. Logika verification knows each of the deduction rules we will see in the next few chapters, and will automatically check to ensure that your steps obey these deduction rules as you type your proof. If a proof is correct, you will see a purple checkmark in the lower right corner that says “Logika verified”. If you have logic errors, you will see them highlighted in red.
Sometimes, the Logika verification needs to be run manually. If you don’t see either red errors or a purple checkmark, right-click in the text area that contains the proof and select “Logika Check”.
Logika proof syntax
Sequents in Logika have the following form:
< 0 or more premises, separated by commas > ⊢ < 1 conclusion >
Proofs in Logika are structured in two columns, with claims on the left and their supporting justification on the right:
premises ⊢ conclusion
{
1. claim_a justification_a
2. claim_b justification_b
... ...
736. conclusion justification_ef
}
Each claim is given a number, and these numbers are generally in order. However, the only rule is that claim numbers be unique (they may be out of order and/or nonconsecutive). Once we have justified a claim in a proof, we will refer to it as a fact.
We will see more details of Logika proof syntax as we progress through chapter 4.
Premise justification
The most basic justification for a claim in a proof is “premise”. This justification is used when you pull in a premise from the sequent and introduce it into your proof. All, some, or none of the premises can be introduced at any time, in any order. Please note that only one premise may be entered per claim.
For example, we might bring in the premises from our sequent like this:
p, q, ¬r ⊢ p ∧ q
{
1. p premise
2. q premise
3. ¬r premise
...
}
We could also bring in the same premise multiple times, if we wanted. We could also use nonsequential line numbers, as long as each line number was unique:
p, q, ¬r ⊢ p ∧ q
{
7. p premise
10. q premise
2. ¬r premise
8. p premise
...
}
We could only bring in some portion of our premises, if we wanted:
p, q, ¬r ⊢ p ∧ q
{
1. p premise
...
}
But we can only list one premise in each claim. For example, the following is not allowed:
p, q, ¬r ⊢ p ∧ q
{
//THIS IS WRONG
1. p, q, ¬r premise
...
}
Deduction rules
The logical operators (AND, OR, NOT, IMPLIES) are a kind of language for building propositions from basic, primitive propositional atoms. For this reason, we must have laws for constructing propositions and for disassembling them. These laws are called inference rules or deduction rules. A natural deduction system is a set of inference rules, such that for each logical operator, there is a rule for constructing a proposition with that operator (this is called an introduction rule) and there is a rule for disassembling a proposition with that operator (this is called an elimination rule).
For the sections that follow, we will see the introduction and elimination rules for each logical operator. We will then learn how to use these deduction rules to write a formal proof showing that a sequent is valid.
AND Rules
In this section, we will see the deduction rules for the AND operator.
AND introduction
Clearly, when both p and q are facts, then so is the proposition p ∧ q. This makes logical sense – if two propositions are independently true, then their conjunction (AND) must also be true. The AND introduction rule, ∧i, formalizes this:

        P    Q
∧i : ----------
        P ∧ Q
We will use the format above when introducing each of our natural deduction rules:

- P and Q are not necessarily individual variables – they are placeholders for some propositional statement, which may itself involve several logical operators.
- On the left side is the rule name (in this case, ∧i).
- On the top of the right side, we see what we already need to have established as facts in order to use this rule (in this case, P and also Q). These facts can appear anywhere in our scope of the proof, in whatever order. (For now, all previous lines in the proof will be within our scope, but this will change when we get to more complex rules that involve subproofs.)
- On the bottom of the right side, we see what we can claim by using that proof rule.
Here is a simple example of a proof that uses ∧i. It proves that if propositional atoms p, q, and r are all true, then the proposition r ∧ (q ∧ p) is also true:
p, q, r ⊢ r ∧ (q ∧ p)
{
1. p premise
2. q premise
3. r premise
4. q ∧ p ∧i 2 1
5. r ∧ (q ∧ p) ∧i 3 4
}
You can read line 4 like this: “from the fact q stated on line 2 and the fact p stated on line 1, we deduce q ∧ p by applying the ∧i rule”. Lines 4 and 5 construct new facts from the starting facts (premises) on lines 1–3.
Note that if I had instead tried:
p, q, r ⊢ r ∧ (q ∧ p)
{
1. p premise
2. q premise
3. r premise
4. q ∧ p ∧i 1 2
...
}
Then line 4 would not have been accepted. The line numbers cited after the ∧i rule must match the order of the resulting AND statement: the left-hand side of the resulting AND statement must correspond to the first line number in the ∧i justification, and the right-hand side of the resulting AND statement must correspond to the second line number in the justification:
{
...
4. p (some justification)
5. q (some justification)
6. p ⋀ q ⋀i 4 5
...
9. q ⋀ p ⋀i 5 4
...
}
AND elimination
The idea of the AND elimination rules is that if we have a proposition p ⋀ q as a fact, then we can separately claim both p and q as individual facts. After all, the only time p ⋀ q is true in a truth table is when both p and q are individually true. There are two AND elimination rules – ∧e1 and ∧e2. ∧e1 allows us to claim that the left (first) side of an AND statement is individually true, and ∧e2 allows us to do the same with the right (second) side. Here is the formalization of each rule:

         P ∧ Q                P ∧ Q
∧e1 : ----------     ∧e2 : ----------
           P                    Q
Here is a simple example showing the syntax of the ∧e1
rule:
p ∧ q ⊢ p
{
1. p ∧ q premise
2. p ∧e1 1
}
We can read the justification ∧e1 1 as: AND-elimination 1 from line 1, or “take the AND statement on line 1 and extract its first (left) side”.
Here is a simple example showing the syntax of the ∧e2
rule:
p ∧ q ⊢ q
{
1. p ∧ q premise
2. q ∧e2 1
}
We can read the justification ∧e2 1 as: AND-elimination 2 from line 1, or “take the AND statement on line 1 and extract its second (right) side”.
Example 1
Suppose we want to prove the following sequent:
p ∧ (q ∧ r) ⊢ r ∧ p
Whenever we approach a proof, a good first strategy is to see what we can extract from the premises. If we have a premise that is an AND statement, then we can use ∧e1
and then ∧e2
to extract both its left and right side as separate claims. So we start our proof like this:
p ∧ (q ∧ r) ⊢ r ∧ p
{
1. p ∧ (q ∧ r) premise
2. p ∧e1 1
3. q ∧ r ∧e2 1
...
}
But now we have a new AND statement as a claim – q ∧ r
. We can again use both ∧e1
and ∧e2
to extract each side separately:
p ∧ (q ∧ r) ⊢ r ∧ p
{
1. p ∧ (q ∧ r) premise
2. p ∧e1 1
3. q ∧ r ∧e2 1
4. q ∧e1 3
5. r ∧e2 3
...
}
Now that we have done all we can with our premises and the resulting statements, we examine our conclusion. Whenever our conclusion is a conjunction (AND statement), we know that we must separately show both the left side and the right side of that conclusion. Then, we can use ∧i
to put those sides together into our goal AND statement.
In this example, we have already proved both sides of our goal AND statement – r
(from line 5) and p
(from line 2). All that remains is to use ∧i
to put them together:
p ∧ (q ∧ r) ⊢ r ∧ p
{
1. p ∧ (q ∧ r) premise
2. p ∧e1 1
3. q ∧ r ∧e2 1
4. q ∧e1 3
5. r ∧e2 3
6. r ∧ p ∧i 5 2
}
Example 2
Suppose we want to prove the following sequent:
p ∧ q ∧ r, a ∧ (t ∨ s) ⊢ q ∧ (t ∨ s)
We again try to use AND-elimination to extract what we can from our premises. We might try something like this:
p ∧ q ∧ r, a ∧ (t ∨ s) ⊢ q ∧ (t ∨ s)
{
1. p ∧ q ∧ r premise
2. a ∧ (t ∨ s) premise
3. p ∧e1 1 //NO! Won't work.
...
}
However, we get into trouble when we try to use ∧e1 to extract the left side of the premise p ∧ q ∧ r. The problem has to do with operator precedence – we recall that ∧ operators are processed from left to right, which means that p ∧ q ∧ r is equivalent to (p ∧ q) ∧ r. By reminding ourselves of the “hidden parentheses”, we see that when we use ∧e1 on the premise p ∧ q ∧ r, we extract p ∧ q. Similarly, ∧e2 will extract r.
We try again to extract what we can from our premises:
p ∧ q ∧ r, a ∧ (t ∨ s) ⊢ q ∧ (t ∨ s)
{
1. p ∧ q ∧ r premise
2. a ∧ (t ∨ s) premise
3. p ∧ q ∧e1 1
4. r ∧e2 1
5. a ∧e1 2
6. t ∨ s ∧e2 2
...
}
As before, we look at our resulting claims – we see a p ∧ q
, and we know that we can use AND elimination again to extract both sides. Now we have:
p ∧ q ∧ r, a ∧ (t ∨ s) ⊢ q ∧ (t ∨ s)
{
1. p ∧ q ∧ r premise
2. a ∧ (t ∨ s) premise
3. p ∧ q ∧e1 1
4. r ∧e2 1
5. a ∧e1 2
6. t ∨ s ∧e2 2
7. p ∧e1 3
8. q ∧e2 3
...
}
Now, we look at what we are trying to prove – q ∧ (t ∨ s). Since its top-level operator is the AND, we know that we must separately prove q and t ∨ s. Then, we can use AND introduction to put the two pieces together to match our conclusion. We see that we already have q on line 8 and t ∨ s on line 6, so we add our final line to finish the proof:
p ∧ q ∧ r, a ∧ (t ∨ s) ⊢ q ∧ (t ∨ s)
{
1. p ∧ q ∧ r premise
2. a ∧ (t ∨ s) premise
3. p ∧ q ∧e1 1
4. r ∧e2 1
5. a ∧e1 2
6. t ∨ s ∧e2 2
7. p ∧e1 3
8. q ∧e2 3
9. q ∧ (t ∨ s) ∧i 8 6
}
You might notice that lines 5 and 7 were not needed, as neither p nor a was part of the conclusion. That’s true – we could have eliminated those steps. However, it’s a good idea to extract as much information as possible while you are getting used to doing these proofs – it doesn’t hurt to have extra claims, and you may find that you end up needing them.
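Since ∧ is left-associative, the premise p ∧ q ∧ r parses as (p ∧ q) ∧ r. As a sanity check outside of Logika, a short brute-force scan (a Python sketch; the variable names are our own) confirms that this sequent is semantically valid:

```python
from itertools import product

# Check p ∧ q ∧ r, a ∧ (t ∨ s) ⊢ q ∧ (t ∨ s) over all 64 truth assignments
valid = True
for p, q, r, a, t, s in product([True, False], repeat=6):
    premise1 = (p and q) and r       # ∧ is left-associative
    premise2 = a and (t or s)
    conclusion = q and (t or s)
    if premise1 and premise2 and not conclusion:
        valid = False
print(valid)  # prints True
```

The deduction proof above establishes the same result without visiting all 64 rows.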
OR Rules
In this section, we will see the deduction rules for the OR operator.
OR introduction
If we know that a proposition P is true, then it will also be the case that both P ∨ Q and Q ∨ P are true. It doesn’t matter what Q is – it might even be something that is known to be false. Because P is true, it will make the overall OR statement true as well.
There are two OR introduction rules – ∨i1 and ∨i2. ∨i1 allows us to claim an OR statement with some previous fact on the left (first) side, and ∨i2 allows us to do the same with the right (second) side. Here is the formalization of each rule:

            P                    Q
∨i1 : ----------     ∨i2 : ----------
         P ∨ Q                P ∨ Q
Here is a simple example showing the syntax of the ∨i1
rule:
p ⊢ p ∨ q
{
1. p premise
2. p ∨ q ∨i1 1
}
We can read the justification ∨i1 1 as: OR introduction 1 from line 1, or “create an OR statement that puts the claim from line 1 on the first (left) side, and puts something new on the second side”.
Here is a simple example showing the syntax of the ∨i2
rule:
p ⊢ q ∨ p
{
1. p premise
2. q ∨ p ∨i2 1
}
We can read the justification ∨i2 1 as: OR introduction 2 from line 1, or “create an OR statement that puts the claim from line 1 on the second (right) side, and puts something new on the first side”.
OR elimination
The OR elimination rule is used when we have an OR statement of the form P ∨ Q
, and we wish to use it to extract new information. In real life, we call the rule “case analysis”. For example, say that you have either 12 quarters in your pocket or 30 dimes in your pocket. In either case, you can buy a $3.00 coffee. Why? You do a case analysis:
- In the case you have 12 quarters, that totals $3.00, and you can buy the coffee;
- In the case you have 30 dimes, that totals $3.00, and you can buy the coffee.

So, in both cases, you can buy the coffee.
We can formalize the idea behind the OR elimination rule as follows:
- In order for the OR statement P ∨ Q to be true, at least one of P and Q must be individually true.
- If we are able to reach some conclusion R if we assume P is true, and we are able to reach the SAME conclusion R if we assume Q is true…
- …then no matter what, R will be true.
Subproofs
OR elimination will be our first proof rule that uses subproofs. Subproofs are tools for case analysis or what-if excursions, used to support justification for later claims. In propositional logic, they will always contain one assumption. This assumption is a proposition whose scope is limited to the subproof. The syntax of a subproof in Logika looks like this:
premises ⊢ consequent
{
1. fact_A justification_1
... ...
17. {
18. fact_D assume
... ...
25. fact_G some_rule using claim 1 // this is ok
... ...
}
... ...
45. fact_R some_rule using claim 25 // this is NOT ok
}
Opening and closing braces, {...}
, define the scope of claims. Other than the first {
which goes directly under the sequent, the opening brace is given a claim number, but no justification. Closing braces are not given claim numbers. The use of braces is analogous to their use to define scope in Java, C# and C.
In the example above, the subproof starting on line 17 creates an environment where fact_D is true. The justification used on claim 25, which uses claim 1, is valid: the scope of claim 1 includes subproof 17.
However, the justification for line 45 is invalid. fact_G on line 25 was proven true in an environment where fact_D is true (i.e., subproof 17). That environment ends (falls out of scope) when the closing brace for the subproof is reached, which happens before line 45.
Only specific deduction rules allow you to close a scope and create a new claim based on that subproof in the enclosing (outer) scope. These rules always take a subproof (e.g., “17”) as part of the justification.
Syntax
Here is the OR elimination rule:

              { P assume     { Q assume
     P ∨ Q      ...  R }       ...  R }
∨e : ----------------------------------
                     R
In order to use the ∨e rule, we must have three things:

- An OR statement of the form P ∨ Q
- A subproof that begins by assuming the left side of the OR statement (P) and ends with some claim R
- A subproof that begins by assuming the right side of the OR statement (Q) and ends with the same claim R

If we have all three parts, we can use the ∨e rule, citing the OR statement and both subproofs, to claim that R is true no matter what.
Here is a simple example showing the syntax of the ∨e
rule:
p ∨ q ⊢ q ∨ p
{
1. p ∨ q premise
2. {
3. p assume
4. q ∨ p ∨i2 3
}
5. {
6. q assume
7. q ∨ p ∨i1 6
}
8. q ∨ p ∨e 1 2 5
}
Here, we have the OR statement p ∨ q. We then have two subproofs where we separately assume that each side of the OR is true. The first subproof on line 2 starts by assuming the left side of the OR, p. It then uses OR introduction to reach the goal conclusion, q ∨ p. After reaching our goal, we end the first subproof and immediately start a second subproof. In the second subproof, we assume that the right side of our OR statement is true, q. We then use the other form of OR introduction to reach the SAME conclusion as we did in the first subproof – q ∨ p. We end the second subproof and can now use ∨e to state that our conclusion q ∨ p must be true no matter what. After all, we knew that at least one of p or q was true, and we were able to reach the conclusion q ∨ p in both cases.
When using the justification:
∨e 1 2 5
The first line number corresponds to our original OR statement (line 1, with p ∨ q, for us), the second line number corresponds to the subproof where we assumed the first (left) side of that OR statement (line 2 for us, which starts the subproof where we assumed p), and the third line number corresponds to the subproof where we assumed the second (right) side of that OR statement (line 5 for us, which starts the subproof where we assumed q).
This proof shows that the OR operator is commutative.
Example 1
Suppose we want to prove the following sequent:
p ∧ (q ∨ r) ⊢ (p ∧ q) ∨ (p ∧ r)
As we have done before, we start by extracting whatever we can from our premises:
p ∧ (q ∨ r) ⊢ (p ∧ q) ∨ (p ∧ r)
{
1. p ∧ (q ∨ r) premise
2. p ∧e1 1
3. q ∨ r ∧e2 1
...
}
Next, we look at what we are trying to prove, and see that its top-level operator is an OR. If we already had either side of our goal OR statement (i.e., either p ∧ q or p ∧ r), then we could use ∨i1 or ∨i2 to create the desired proposition. This isn’t the case for us, though, so we need to use a different strategy.
The next consideration when we want to prove an OR statement is whether we have another OR statement, either as a premise or a fact we have already established. If we do, then we can attempt to use OR elimination with that OR statement to build our goal conclusion ((p ∧ q) ∨ (p ∧ r)
). We have the OR statement q ∨ r
available, so we’ll try to use OR elimination – we’ll have a subproof where we assume q
and try to reach (p ∧ q) ∨ (p ∧ r)
, and then a subproof where we assume r
and try to reach (p ∧ q) ∨ (p ∧ r)
:
p ∧ (q ∨ r) ⊢ (p ∧ q) ∨ (p ∧ r)
{
1. p ∧ (q ∨ r) premise
2. p ∧e1 1
3. q ∨ r ∧e2 1
4. {
5. q assume
6. p ∧ q ∧i 2 5
7. (p ∧ q) ∨ (p ∧ r) ∨i1 6
}
8. {
9. r assume
10. p ∧ r ∧i 2 9
11. (p ∧ q) ∨ (p ∧ r) ∨i2 10
}
12. (p ∧ q) ∨ (p ∧ r) ∨e 3 4 8
}
We can make our final claim:
12. (p ∧ q) ∨ (p ∧ r) ∨e 3 4 8
because we had an OR statement on line 3 (q ∨ r), assumed the left side of that OR (q) in subproof 4 and reached the conclusion (p ∧ q) ∨ (p ∧ r), and then assumed the right side of our OR (r) in subproof 8 and reached the SAME conclusion, (p ∧ q) ∨ (p ∧ r).
Example 2
Suppose we want to prove the following sequent:
(p ∨ q) ∧ (p ∨ r) ⊢ p ∨ (q ∧ r)
Note that if we also proved the reverse sequent, p ∨ (q ∧ r) ⊢ (p ∨ q) ∧ (p ∨ r), we would have shown that (p ∨ q) ∧ (p ∨ r) is equivalent to p ∨ (q ∧ r). We’ll learn more about this process in section 4.8.
We start by pulling in our premises and extracting whatever information we can:
(p ∨ q) ∧ (p ∨ r) ⊢ p ∨ (q ∧ r)
{
1. (p ∨ q) ∧ (p ∨ r) premise
2. p ∨ q ∧e1 1
3. p ∨ r ∧e2 1
...
}
We see that the top-level operator of our conclusion (p ∨ (q ∧ r)) is an OR, so we apply the same strategy we did in the previous example – we see if we have an OR statement already available as a claim, and then try to use OR elimination on it to build to our conclusion in both subproofs. In this case, though, we have TWO OR statements – p ∨ q and p ∨ r. We will see that it doesn’t matter which of these we choose, so let’s pick the first one – p ∨ q:
(p ∨ q) ∧ (p ∨ r) ⊢ p ∨ (q ∧ r)
{
1. (p ∨ q) ∧ (p ∨ r) premise
2. p ∨ q ∧e1 1
3. p ∨ r ∧e2 1
4. {
5. p assume
6. p ∨ (q ∧ r) ∨i1 5
}
7. {
8. q assume
//what do we do now?
}
...
}
We are able to finish the first subproof, but it’s not clear what to do in the second subproof. We assume q, and know we have the goal of reaching the same conclusion as we did in the first subproof, p ∨ (q ∧ r)…but we don’t have enough information yet to get there. The only piece of information that we haven’t used that might help us is our second OR statement – p ∨ r. We are already inside a subproof, but we can still nest other subproofs – just as we can nest conditional statements in computer programs.
We start on a nested OR elimination approach with p ∨ r
:
(p ∨ q) ∧ (p ∨ r) ⊢ p ∨ (q ∧ r)
{
1. (p ∨ q) ∧ (p ∨ r) premise
2. p ∨ q ∧e1 1
3. p ∨ r ∧e2 1
//OR elimination for p ∨ q
4. {
5. p assume
6. p ∨ (q ∧ r) ∨i1 5
}
7. {
8. q assume
//start on OR elimination with p ∨ r
9. {
10. p assume
11. p ∨ (q ∧ r) ∨i1 10
}
12. {
13. r assume
14. q ∧ r ∧i 8 13
15. p ∨ (q ∧ r) ∨i2 14
}
16. p ∨ (q ∧ r) ∨e 3 9 12
}
17. p ∨ (q ∧ r) ∨e 2 4 7
}
Note that we use the ∨e rule twice – on line 16 to tie together subproofs 9 and 12 (where we processed the OR statement p ∨ r
), and on line 17 to tie together subproofs 4 and 7 (where we processed the OR statement p ∨ q
).
When we first started this problem, we mentioned that it didn’t matter which OR statement we chose to work with – p ∨ q
or p ∨ r
. Indeed, we could have chosen p ∨ r
instead – but we would end up nesting another OR elimination for p ∨ q
:
(p ∨ q) ∧ (p ∨ r) ⊢ p ∨ (q ∧ r)
{
1. (p ∨ q) ∧ (p ∨ r) premise
2. p ∨ q ∧e1 1
3. p ∨ r ∧e2 1
//OR elimination for p ∨ r
4. {
5. p assume
6. p ∨ (q ∧ r) ∨i1 5
}
7. {
8. r assume
//start on OR elimination with p ∨ q
9. {
10. p assume
11. p ∨ (q ∧ r) ∨i1 10
}
12. {
13. q assume
14. q ∧ r ∧i 13 8
15. p ∨ (q ∧ r) ∨i2 14
}
16. p ∨ (q ∧ r) ∨e 2 9 12
}
17. p ∨ (q ∧ r) ∨e 3 4 7
}
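To double-check the equivalence claim semantically, we can scan all eight truth assignments in both directions (a brute-force Python sketch, not Logika syntax; the variable names `forward` and `backward` are our own):

```python
from itertools import product

forward = True   # (p ∨ q) ∧ (p ∨ r) ⊢ p ∨ (q ∧ r)
backward = True  # p ∨ (q ∧ r) ⊢ (p ∨ q) ∧ (p ∨ r)
for p, q, r in product([True, False], repeat=3):
    lhs = (p or q) and (p or r)
    rhs = p or (q and r)
    if lhs and not rhs:
        forward = False   # a row where the forward sequent would fail
    if rhs and not lhs:
        backward = False  # a row where the reverse sequent would fail
print(forward, backward)  # prints True True
```

Both directions hold, which is the semantic counterpart of the two deduction proofs.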
Implies Rules
In this section, we will see the deduction rules for the implies operator.
Implies elimination
Remember that → is a kind of logical “if-then”. Here, we understand p → q to mean that p holds knowledge sufficient to deduce q – so, whenever p is proved to be a fact, then p → q enables q to be proved a fact, too. This is the implies elimination rule, →e, and we can formalize it like this:
, and we can formalize it like this:
      P → Q     P
→e : -------------
           Q
Here is a simple example of a proof that shows the syntax of the →e
rule:
a, a → b ⊢ b
{
1. a premise
2. a → b premise
3. b →e 2 1
}
Note that when we use →e
, we first list the line number of the implies statement, and then list the line number that contains the left side of that implies statement. The →e
allows us to claim the right side of that implies statement.
Implies introduction
The idea behind the next deduction rule, implies introduction, is that we would be introducing a new implies statement of the form P → Q. In order to do this, we must be able to show our logical “if-then” – that IF P exists, THEN we promise that Q will also exist. We do this by way of a subproof where we assume P. If we can reach Q by the end of that subproof, we will have shown that anytime P is true, then Q is also true. We will be able to close the subproof by introducing P → Q with the →i rule. We can formalize the rule like this:
      { P assume
        ...  Q }
→i : -----------
        P → Q
Here is a simple example of a proof that shows the syntax of the →i
rule:
a → b, b → c ⊢ a → c
{
1. a → b premise
2. b → c premise
3. {
//we want to prove a → c, so we start by assuming a
4. a assume
5. b →e 1 4
6. c →e 2 5
//...and try to end with c
}
//then we can conclude that anytime a is true, then c is also true
7. a → c →i 3
}
Note that when we use →i
, we list the line number of the subproof we just finished. We must have started that subproof by assuming the left side of the implies statement we are introducing, and ended that subproof with the right side of the implies statement we are introducing.
Example 1
Suppose we want to prove the following sequent:
p → (q → r) ⊢ (q ∧ p) → r
We start by listing our premise:
p → (q → r) ⊢ (q ∧ p) → r
{
1. p → (q → r) premise
}
We can’t extract any information from the premise, so we shift to examining our conclusion. The top-level operator of our conclusion is an implies statement, so this tells us that we will need to use the →i rule. We want to prove (q ∧ p) → r, so we need to show that whenever q ∧ p is true, then r is also true. We open a subproof and assume the left side of our goal implies statement (q ∧ p). If we can reach r by the end of the subproof, then we can use →i to conclude (q ∧ p) → r:
p → (q → r) ⊢ (q ∧ p) → r
{
1. p → (q → r) premise
2. {
3. q ∧ p assume
//goal: get to r
}
//use →i to conclude (q ∧ p) → r
}
Now we can complete the proof:
p → (q → r) ⊢ (q ∧ p) → r
{
1. p → (q → r) premise
2. {
3. q ∧ p assume
4. q ∧e1 3
5. p ∧e2 3
6. q → r →e 1 5
7. r →e 6 4
//goal: get to r
}
//use →i to conclude (q ∧ p) → r
8. (q ∧ p) → r →i 2
}
Example 2
Suppose we want to prove the following sequent:
p → (q → r) ⊢ (p → q) → (p → r)
We see that we will have no information to extract from the premises. The top-level operator of the conclusion is an implies statement, so we start a subproof to introduce that implies statement. In our subproof, we will assume the left side of our goal implies statement (p → q) and will try to reach the right side of our goal (p → r):
p → (q → r) ⊢ (p → q) → (p → r)
{
1. p → (q → r) premise
2. {
3. p → q assume
//goal: get to p → r
}
//use →i to conclude (p → q) → (p → r)
}
We can see that our goal is to reach p → r
in our subproof – so we see that we need to introduce another implies statement. This tells us that we need to nest another subproof – in this one, we’ll assume the left side of our current goal implies statement (p
), and then try to reach the right side of that current goal (r
). Then, we’d be able to finish that inner subproof by using →i
to conclude p → r
:
p → (q → r) ⊢ (p → q) → (p → r)
{
1. p → (q → r) premise
2. {
3. p → q assume
4. {
5. p assume
//goal: get to r
}
//use →i to conclude p → r
//goal: get to p → r
}
//use →i to conclude (p → q) → (p → r)
}
Now we can complete the proof:
p → (q → r) ⊢ (p → q) → (p → r)
{
1. p → (q → r) premise
2. {
3. p → q assume
4. {
5. p assume
6. q →e 3 5
7. q → r →e 1 5
8. r →e 7 6
//goal: get to r
}
//use →i to conclude p → r
9. p → r →i 4
//goal: get to p → r
}
//use →i to conclude (p → q) → (p → r)
10. (p → q) → (p → r) →i 2
}
Example 3
Here is one more example, where we see we can nest a →i
subproof and a ∨e
subproof:
p → r, q → r ⊢ (p ∨ q) → r
{
1. p → r premise
2. q → r premise
3. {
//assume p ∨ q, try to get to r
4. p ∨ q assume
//nested subproof for OR elimination on p ∨ q
//try to get to r in both cases
5. {
6. p assume
7. r →e 1 6
}
8. {
9. q assume
10. r →e 2 9
}
11. r ∨e 4 5 8
//goal: get to r
}
//use →i to conclude (p ∨ q) → r
12. (p ∨ q) → r →i 3
}
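As with the earlier examples, we can confirm semantically that this sequent is valid by checking every truth assignment (a brute-force Python sketch, separate from the deduction proof; `valid` is our own name):

```python
from itertools import product

# Check p → r, q → r ⊢ (p ∨ q) → r over all eight truth assignments
valid = True
for p, q, r in product([True, False], repeat=3):
    premise1 = (not p) or r           # p → r
    premise2 = (not q) or r           # q → r
    conclusion = (not (p or q)) or r  # (p ∨ q) → r
    if premise1 and premise2 and not conclusion:
        valid = False
print(valid)  # prints True
```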
Negation Rules
In this section, we will see the deduction rules for the NOT operator. We will also introduce the contradiction symbol (⊥, or _|_ in ASCII). This symbol is also referred to as the bottom operator. We will be able to claim that a contradiction occurs when, for some proposition P, we have proved both the facts P and ¬ P. This indicates that we are in an impossible situation, and often means that we have made a bad previous assumption.
Not elimination
The not elimination rule allows you to claim that you have reached a contradiction. We can formalize the rule like this:
       P     ¬ P
¬e : -----------
          ⊥
Here is a simple example of a proof that shows the syntax of the ¬ e
rule:
q, ¬ q ⊢ ⊥
{
1. q premise
2. ¬ q premise
3. ⊥ ¬ e 1 2
}
We use the ¬e rule when we have facts for both P and ¬ P for some proposition P. When we use the justification, we first list the line number of the claim for P (line 1, in our case) and then the line number of the claim for ¬ P (line 2, in our case).
Sometimes, the proposition P
itself has a NOT operator. Consider this example:
¬ q, ¬ ¬ q ⊢ ⊥
{
1. ¬ q premise
2. ¬ ¬ q premise
3. ⊥ ¬ e 1 2
}
Here, our proposition P is the claim ¬ q, and our proposition of the form ¬ P is the claim ¬ ¬ q.
Not introduction
The not introduction rule allows us to introduce a NOT operation. If assuming some proposition P leads to a contradiction, then we must have made a bad assumption – P must NOT be true after all. We can then introduce the fact ¬ P. We can formalize the not introduction rule like this:
      { P assume
        ...  ⊥ }
¬i : -----------
         ¬ P
Here is a simple example of a proof that shows the syntax of the ¬ i
rule:
p, q → ¬ p ⊢ ¬ q
{
1. p premise
2. q → ¬ p premise
3. {
4. q assume
5. ¬ p →e 2 4
6. ⊥ ¬e 1 5
}
7. ¬ q ¬i 3
}
Note that the not introduction rule involves a subproof – if we wish to prove ¬ P for some proposition P, then we start a subproof where we assume P. If we are able to reach a contradiction on the last line of that subproof, then we can use the ¬i rule after the subproof ends to claim that our assumption was bad and that it is actually ¬ P that is true. When we use ¬i as a justification, we list the line number corresponding to this subproof.
Bottom elimination
There is a special law for reasoning forwards from an impossible situation — the ⊥e law — which says that, in the case of a contradiction, everything becomes a fact. (That is, “if False is a fact, so is everything else!”.) This rule is called “bottom elimination”, and is written as ⊥e. Here is a formalization of the rule:

         ⊥
⊥e : --------   for any proposition, Q, at all
         Q
Here is a simple example of a proof that shows the syntax of the ⊥e
rule:
p, ¬ p ⊢ q
{
1. p premise
2. ¬ p premise
3. ⊥ ¬ e 1 2
4. q ⊥e 3
}
Note that when we use ⊥e
as the justification, we list the line number of where we reached a contradiction (⊥
).
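Semantically, the sequent p, ¬ p ⊢ q is valid vacuously: no truth assignment makes both premises true, so there is no row where the conclusion could fail. A quick brute-force check (a Python sketch, not Logika syntax; the counter name is our own) confirms this:

```python
from itertools import product

rows_with_all_premises_true = 0
valid = True
for p, q in product([True, False], repeat=2):
    if p and (not p):  # can both premises p and ¬p hold in this row?
        rows_with_all_premises_true += 1
        if not q:
            valid = False
print(rows_with_all_premises_true, valid)  # prints 0 True
```

Since the premise row count is zero, the sequent is valid no matter what the conclusion is, which is exactly what ⊥e captures in the proof system.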
The bottom elimination rule works well with case analysis, where we discover that one case is impossible. Here is a classic example:
p ∨ q, ¬ p ⊢ q
{
1. p ∨ q premise
2. ¬ p premise
3. {
4. p assume
5. ⊥ ¬e 4 2
6. q ⊥e 5
}
7. {
8. q assume
}
9. q ∨e 1 3 7
}
Considering the premise, p ∨ q, we analyze our two cases by starting OR elimination. The first case, where p holds true, is impossible, because it causes a contradiction. The ⊥e rule lets us gracefully prove q in this “impossible case”. (You can read lines 4–6 as saying, “in the case when p might hold true, there is a contradiction, and in such an impossible situation, we can deduce whatever we like, so we deduce q to finish this impossible case”.)
The second case, that q holds true, is the only realistic case, and it immediately yields the conclusion. The proof finishes the two-case analysis with the ∨e rule.
Proof by contradiction
The proof by contradiction rule, pbc, says that when assuming ¬ P leads to a contradiction for some proposition P, then we made a bad assumption and thus P must be true. It is very similar to the ¬i rule, except pbc has us assuming ¬ P and eventually concluding P, while the ¬i rule has us assuming P and eventually concluding ¬ P.
Here is a formalization of pbc
:
       { ¬ P assume
         ...  ⊥ }
pbc : -------------
            P
And here is an example that demonstrates the syntax of the rule:
¬ ¬ p ⊢ p
{
1. ¬ ¬ p premise
2. {
3. ¬ p assume
4. ⊥ ¬e 3 1
}
5. p pbc 2
}
When we use the pbc
rule as a justification for a claim P
, we cite the line number of the subproof where we assumed ¬ P
and ended in a contradiction.
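Semantically, the double-negation sequent ¬¬p ⊢ p is easy to confirm by enumeration; the one-liner below is just an illustrative check, not Logika syntax:

```python
# Every assignment satisfying ¬¬p also satisfies p.
valid = all(p for p in [True, False] if not (not p))
print(valid)  # True
```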
Example 1
Suppose we want to prove the following sequent:
¬p ∧ ¬q ⊢ ¬(p ∨ q)
We start by listing our premise, and extracting the two sides of the AND statement:
¬p ∧ ¬q ⊢ ¬(p ∨ q)
{
1. ¬p ∧ ¬q premise
2. ¬p ∧e1 1
3. ¬q ∧e2 1
...
}
Next, we see that our conclusion has the form NOT (something), so this tells us that we will need to introduce a NOT (using the ¬ i
rule). In fact, ANY time we wish to prove a proposition of the form NOT (something), we will always use the ¬ i
rule. (We will discuss proof strategies in detail in the next section.) Since we want to prove ¬ (p ∨ q)
, then we open a subproof where we assume p ∨ q
. If we can end that subproof with a contradiction, then we can use ¬ i
afterwards to conclude ¬(p ∨ q)
.
We know that we want this proof structure:
¬p ∧ ¬q ⊢ ¬(p ∨ q)
{
1. ¬p ∧ ¬q premise
2. ¬p ∧e1 1
3. ¬q ∧e2 1
4. {
5. p ∨ q assume
//want to reach a contradiction
}
//will use ¬ i to conclude ¬(p ∨ q)
}
We know we must reach a contradiction in our subproof. We see that we have claims ¬p
, ¬q
, and p ∨ q
. Since at least one of p or q
is true, and since either one would yield a contradiction with one of ¬p
or ¬q
, then we start on OR elimination:
¬p ∧ ¬q ⊢ ¬(p ∨ q)
{
1. ¬p ∧ ¬q premise
2. ¬p ∧e1 1
3. ¬q ∧e2 1
4. {
5. p ∨ q assume
//OR elimination subproofs on p ∨ q
6. {
7. p assume
8. ⊥ ¬e 7 2
}
9. {
10. q assume
11. ⊥ ¬e 10 3
}
//use ∨e rule to tie together subproofs
//want to reach a contradiction
}
//will use ¬ i to conclude ¬(p ∨ q)
}
We see that both OR elimination subproofs ended with a contradiction (⊥
). Just like any other use of ∨e
, we restate that common conclusion after the two subproofs. We knew at least one of p
or q
was true, and both cases ended in a contradiction – so the contradiction holds no matter what:
¬p ∧ ¬q ⊢ ¬(p ∨ q)
{
1. ¬p ∧ ¬q premise
2. ¬p ∧e1 1
3. ¬q ∧e2 1
4. {
5. p ∨ q assume
//OR elimination subproofs on p ∨ q
6. {
7. p assume
8. ⊥ ¬e 7 2
}
9. {
10. q assume
11. ⊥ ¬e 10 3
}
//use ∨e rule to tie together subproofs
12. ⊥ ∨e 5 6 9
//want to reach a contradiction
}
//will use ¬ i to conclude ¬(p ∨ q)
}
All that remains is to use the ¬ i
rule to finish subproof 4, as that subproof ended with a contradiction:
¬p ∧ ¬q ⊢ ¬(p ∨ q)
{
1. ¬p ∧ ¬q premise
2. ¬p ∧e1 1
3. ¬q ∧e2 1
4. {
5. p ∨ q assume
//OR elimination subproofs on p ∨ q
6. {
7. p assume
8. ⊥ ¬e 7 2
}
9. {
10. q assume
11. ⊥ ¬e 10 3
}
//use ∨e rule to tie together subproofs
12. ⊥ ∨e 5 6 9
//want to reach a contradiction
}
//will use ¬ i to conclude ¬(p ∨ q)
13. ¬(p ∨ q) ¬ i 4
}
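The sequent we just proved is one direction of a De Morgan law, so it should also check out semantically. A brute-force Python check (our illustration, separate from the deduction proof above):

```python
from itertools import product

# Every assignment satisfying ¬p ∧ ¬q also satisfies ¬(p ∨ q).
valid = all(
    not (p or q)
    for p, q in product([True, False], repeat=2)
    if (not p) and (not q)
)
print(valid)  # True
```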
Example 2
When doing propositional logic translations, we learned that p → q
is equivalent to its contrapositive, ¬q → ¬p
. We will prove one direction of this equivalence (to show that two statements are provably equivalent, as we will see in section 4.8, we would need to prove both directions):
p → q ⊢ ¬q → ¬p
We notice that the top-level operator of our conclusion is an IMPLIES operator, so we know that we need to introduce an implies operator. We saw in the previous section that the blueprint for introducing an implies operator is with a subproof: assume the left side of the goal implies statement, and try to reach the right side of the goal implies statement by the end of the subproof.
We have this proof structure:
p → q ⊢ ¬q → ¬p
{
1. p → q premise
2. {
3. ¬q assume
//goal: reach ¬p
}
//use →i to conclude ¬q → ¬p
}
We see that our goal in the subproof is to show ¬p – if we could do that, then we could tie together that subproof with the →i rule. Since our intermediate goal is to prove NOT (something), then we use our strategy for NOT introduction as an inner subproof. We finish the proof as shown:
p → q ⊢ ¬q → ¬p
{
1. p → q premise
2. {
3. ¬q assume
//use ¬i strategy to prove ¬p
4. {
5. p assume
6. q →e 1 5
7. ⊥ ¬e 6 3
}
8. ¬p ¬i 4
//goal: reach ¬p
}
//use →i to conclude ¬q → ¬p
9. ¬q → ¬p →i 2
}
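We can double-check the contrapositive entailment semantically as well. The `implies` helper below is our own shorthand for material implication, used only for this illustration:

```python
from itertools import product

def implies(a, b):
    # Material implication: a → b is false only when a is true and b is false.
    return (not a) or b

# Every assignment satisfying p → q also satisfies ¬q → ¬p.
valid = all(
    implies(not q, not p)
    for p, q in product([True, False], repeat=2)
    if implies(p, q)
)
print(valid)  # True
```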
Example 3
Suppose we want to prove the sequent:
¬(¬p ∨ ¬q) ⊢ p ∧ q
We see that there is nothing to extract from our premise, and that the top-level operator of the conclusion (p ∧ q
) is an AND. We see that we will need to introduce an AND statement – but the only way we can create p ∧ q
is to separately prove both p
and q
.
It is not immediately clear how to prove either p
or q
. We will discuss proof strategies in more detail in the next section, but pbc
is a good fallback option if you don’t have a clear path for how to prove something and some of the claims in the proof involve negation. Since we wish to prove p
, then we will assume ¬p
in a subproof. If we can reach a contradiction, then we can use pbc
to conclude p
:
¬(¬p ∨ ¬q) ⊢ p ∧ q
{
1. ¬(¬p ∨ ¬q) premise
2. {
3. ¬p assume
//goal: contradiction
}
//use pbc to conclude p
//similarly prove q
//use ∧i to conclude p ∧ q
}
In subproof 2, we know we need to end with a contradiction. The only propositions we have to work with are ¬(¬p ∨ ¬q)
and ¬p
. But if we use ∨i1
with ¬p
, then we could have ¬p ∨ ¬q
– and then we could claim a contradiction. We complete the proof as shown (using the same strategy to prove q
):
¬(¬p ∨ ¬q) ⊢ p ∧ q
{
1. ¬(¬p ∨ ¬q) premise
2. {
3. ¬p assume
4. ¬p ∨ ¬q ∨i1 3
//goal: contradiction
5. ⊥ ¬e 4 1
}
//use pbc to conclude p
6. p pbc 2
//similarly prove q
7. {
8. ¬q assume
9. ¬p ∨ ¬q ∨i2 8
10. ⊥ ¬e 9 1
}
11. q pbc 7
//use ∧i to conclude p ∧ q
12. p ∧ q ∧i 6 11
}
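Example 3 is the other De Morgan direction, and it too can be confirmed by enumerating truth assignments. This Python snippet is an illustrative semantic check, not part of the deduction proof:

```python
from itertools import product

# Every assignment satisfying ¬(¬p ∨ ¬q) also satisfies p ∧ q.
# The premise is true only when p and q are both true.
valid = all(
    p and q
    for p, q in product([True, False], repeat=2)
    if not ((not p) or (not q))
)
print(valid)  # True
```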
Law of the excluded middle
The law of the excluded middle (LEM) is a famous consequence of pbc
: from no starting premises at all, we can prove p ∨ ¬ p
for any proposition we can imagine:
⊢ p ∨ ¬ p
{
1. {
2. ¬ (p ∨ ¬ p) assume
3. {
4. p assume
5. p ∨ ¬ p ∨i1 4
6. ⊥ ¬e 5 2
}
7. ¬ p ¬i 3
8. p ∨ ¬ p ∨i2 7
9. ⊥ ¬e 8 2
}
10. p ∨ ¬ p pbc 1
}
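That LEM needs no premises corresponds to p ∨ ¬p being a tautology; a brute-force check (purely illustrative) confirms it is true under every assignment:

```python
# p ∨ ¬p is true under both possible truth values of p.
is_tautology = all(p or (not p) for p in [True, False])
print(is_tautology)  # True
```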
In other proofs involving negation and no clear path forward, it is sometimes useful to first derive LEM (this is possible since no premises are needed). If we have the claim p ∨ ¬ p
in a proof, then we can use OR elimination where we separately assume p
and then ¬ p
to try to reach the same conclusion. Here is one such example:
p → q ⊢ ¬ p ∨ q
{
1. p → q premise
2. { // start of previous p ∨ ¬ p proof
3. ¬ (p ∨ ¬ p) assume
4. {
5. p assume
6. p ∨ ¬ p ∨i1 5
7. ⊥ ¬e 6 3
}
8. ¬ p ¬i 4
9. p ∨ ¬ p ∨i2 8
10. ⊥ ¬e 9 3
}
11. p ∨ ¬ p pbc 2 // conclusion of p ∨ ¬ p proof
12. {
13. p assume
14. q →e 1 13
15. ¬ p ∨ q ∨i2 14
}
16. {
17. ¬ p assume
18. ¬ p ∨ q ∨i1 17
}
19. ¬ p ∨ q ∨e 11 12 16
}
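The sequent just proved, p → q ⊢ ¬p ∨ q, also holds semantically (the two formulas are in fact equivalent). A quick Python check, with our own `implies` helper for material implication:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Every assignment satisfying the premise p → q
# also satisfies the conclusion ¬p ∨ q.
valid = all(
    (not p) or q
    for p, q in product([True, False], repeat=2)
    if implies(p, q)
)
print(valid)  # True
```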
Summary and Strategies
When examining more complex propositional logic sequents, it can be challenging to know where to start. In this section, we summarize all available rules in propositional logic, and discuss strategies for approaching proofs.
AND rules
Rule summaries:
      P   Q           P ∧ Q           P ∧ Q
∧i : -------   ∧e1 : -------   ∧e2 : -------
      P ∧ Q             P               Q
Rule syntax summaries:
{
...
x. p (...)
y. q (...)
z. p ∧ q ∧i x y
...
}
{
...
x. p ∧ q (...)
y. p ∧e1 x
z. q ∧e2 x
...
}
OR rules
Rule summaries:
                                            { P assume   { Q assume
       P             Q             P ∨ Q      ... R }      ... R }
∨i1 : -----   ∨i2 : -----   ∨e : ------------------------------------
      P ∨ Q         P ∨ Q                      R
Rule syntax summaries:
{
...
x. p (...)
y. p ∨ q ∨i1 x
...
}
{
...
x. q (...)
y. p ∨ q ∨i2 x
...
}
{
...
a. p ∨ q (...)
b. {
c. p assume
...
d. r (...)
}
f. {
g. q assume
...
h. r (...)
}
i. r ∨e a b f
...
}
Implies rules
Rule summaries:
                          { P assume
      P → Q   P             ... Q }
→e : -----------   →i : ------------
          Q                 P → Q
Rule syntax summaries:
{
...
x. p → q (...)
y. p (...)
z. q →e x y
}
{
...
a. {
b. p assume
...
c. q (...)
}
d. p → q →i a
}
Negation rules
Rule summaries:
                      { P assume                { ¬ P assume
       P   ¬ P          ... ⊥ }          ⊥        ... ⊥ }
¬ e : --------   ¬ i : ----------   ⊥e : ---   pbc : ----------
          ⊥               ¬ P            Q             P
Rule syntax summaries:
{
x. p (...)
y. ¬ p (...)
z. ⊥ ¬ e x y
}
{
a. {
b. p assume
...
c. ⊥ (...)
}
d. ¬ p ¬ i a
}
{
x. ⊥ (...)
z. q ⊥e x
}
{
a. {
b. ¬ p assume
...
c. ⊥ (...)
}
d. p pbc a
}
Strategies
1. Write down all premises first. Can you extract anything from the premises?
   - If you have p ∧ q, use ∧e1 to extract p by itself and then ∧e2 to extract q by itself.
   - If you have p → q and p, use →e to get q.
   - If you have p and ¬p, use ¬e to claim a contradiction, ⊥.
2. Look at the top-level operator of what you are trying to prove.
   - Are you trying to prove something of the form p → q? Use →i. Open a subproof, assume p, and get to q by the end of the subproof. After the subproof, use →i to conclude p → q.
   - Are you trying to prove something of the form ¬p? Use ¬i. Open a subproof, assume p, and get a contradiction, ⊥, by the end of the subproof. After the subproof, use ¬i to conclude ¬p.
   - Are you trying to prove something of the form p ∧ q? Try to prove p by itself and then q by itself. Afterward, use ∧i to conclude p ∧ q.
   - Are you trying to prove something of the form p ∨ q? See if you have either p or q by itself – if you do, use either ∨i1 or ∨i2 to conclude p ∨ q.
3. You’ll need to nest the approaches from step 2. Once you are in a subproof, think about what you are trying to prove by the end of that subproof. Follow the strategy in step 2 to prove your current goal, nesting subproofs as needed. As you work, stop and scan the propositions that you have available. See if you can extract anything from them as you did for the premises in step 1.
4. No match, or still stuck?
   - Do you have an OR statement available? Try using OR elimination to prove your goal conclusion.
   - Do your propositions have NOT operators, but don’t fit the form for using ¬i? Try using pbc. If you are trying to prove something of the form p, open a subproof, assume ¬p, and try to reach a contradiction by the end of the subproof. Afterward, use pbc to conclude p.
   - As a last resort, try pasting in the proof for the law of the excluded middle (see section 4.5). Then use OR elimination on p ∨ ¬p.
Proofs can be quite challenging. You might follow one approach, get stuck, and not be able to make progress. If this happens, backtrack and follow a different approach. If you are using Logika to verify your work, make sure it does not mark any lines in the proof in red – this means that you’ve made an invalid conclusion along the way, or that your justification for a particular line doesn’t follow the expected format. Try to fix these errors before continuing on with the proof.
Theorems
Definition
A theorem in propositional logic is something that is always true with no need for premises. The truth table for a theorem is always a tautology – it is true for any truth assignment.
To express a theorem as a sequent, we write:
⊢ (theorem)
This shows that we are trying to prove our theorem with NO premises. Such a proof will always immediately begin with a subproof, as there are no premises to process.
Law of the excluded middle, revisited
For example, the law of the excluded middle (LEM), p ∨ ¬p
, is a theorem. We proved in section 4.5 that p ∨ ¬p
is always true with no premises:
⊢ p ∨ ¬ p
{
1. {
2. ¬ (p ∨ ¬ p) assume
3. {
4. p assume
5. p ∨ ¬ p ∨i1 4
6. ⊥ ¬e 5 2
}
7. ¬ p ¬i 3
8. p ∨ ¬ p ∨i2 7
9. ⊥ ¬e 8 2
}
10. p ∨ ¬ p pbc 1
}
We also see that the truth table for LEM is a tautology:
         *
-------------
 p | p ∨ ¬ p
-------------
 T | T F
 F | T T
-------------
Tautology
Another example
Suppose we wish to prove the following theorem of propositional logic:
(p → q) → ((¬p → q) → q)
We would need to prove the sequent:
⊢ (p → q) → ((¬p → q) → q)
We see that the top-level operator of what we are trying to prove is an implies operator. So, we begin our proof using the strategy for implies introduction:
⊢ (p → q) → ((¬p → q) → q)
{
1. {
2. p → q assume
//goal: reach (¬p → q) → q
}
//use →i to conclude (p → q) → ((¬p → q) → q)
}
Inside subproof 1, we are trying to prove (¬p → q) → q
. The top-level operator of that statement is an implies, so we nest another subproof with the goal of using implies introduction:
⊢ (p → q) → ((¬p → q) → q)
{
1. {
2. p → q assume
3. {
4. ¬p → q assume
//goal: reach q
}
//use →i to conclude (¬p → q) → q
//goal: reach (¬p → q) → q
}
//use →i to conclude (p → q) → ((¬p → q) → q)
}
Now we must prove q
in subproof 3. We have available propositions p → q
and ¬p → q
– we can see that if we had LEM (p ∨ ¬p
) available, then we could use OR elimination to get our q
in both cases. We insert the LEM proof into subproof 3:
⊢ (p → q) → ((¬p → q) → q)
{
1. {
2. p → q assume
3. {
4. ¬p → q assume
//Begin LEM proof, p ∨ ¬p
5. {
6. ¬ (p ∨ ¬ p) assume
7. {
8. p assume
9. p ∨ ¬ p ∨i1 8
10. ⊥ ¬e 9 6
}
11. ¬ p ¬i 7
12. p ∨ ¬ p ∨i2 11
13. ⊥ ¬e 12 6
}
14. p ∨ ¬ p pbc 5
//End LEM proof for p ∨ ¬p
//use OR elimination on p ∨ ¬p
//goal: reach q
}
//use →i to conclude (¬p → q) → q
//goal: reach (¬p → q) → q
}
//use →i to conclude (p → q) → ((¬p → q) → q)
}
Finally, we do OR elimination with p ∨ ¬p
and tie together the rest of the proof:
⊢ (p → q) → ((¬p → q) → q)
{
1. {
2. p → q assume
3. {
4. ¬p → q assume
//Begin LEM proof, p ∨ ¬p
5. {
6. ¬ (p ∨ ¬ p) assume
7. {
8. p assume
9. p ∨ ¬ p ∨i1 8
10. ⊥ ¬e 9 6
}
11. ¬ p ¬i 7
12. p ∨ ¬ p ∨i2 11
13. ⊥ ¬e 12 6
}
14. p ∨ ¬ p pbc 5
//End LEM proof for p ∨ ¬p
//use OR elimination on p ∨ ¬p, try to reach q
15. {
16. p assume
17. q →e 2 16
}
18. {
19. ¬ p assume
20. q →e 4 19
}
21. q ∨e 14 15 18
//goal: reach q
}
//use →i to conclude (¬p → q) → q
22. (¬p → q) → q →i 3
//goal: reach (¬p → q) → q
}
//use →i to conclude (p → q) → ((¬p → q) → q)
23. (p → q) → ((¬p → q) → q) →i 1
}
If we complete a truth table for (p → q) → ((¬p → q) → q)
, we also see that it is a tautology:
              *
--------------------------------
 p q | (p → q) → ((¬p → q) → q)
--------------------------------
 T T | T T F T T
 T F | F T F T F
 F T | T T T T T
 F F | T T T F T
--------------------------------
Tautology
Equivalence
In this section, we will revisit the notion of equivalence. In chapter 2, we saw how we could use truth tables to show that two logical formulae are equivalent. Here, we will see that we can also show they are equivalent using our natural deduction proof rules.
Semantic equivalence
We saw in section 2.4 that two (or more) logical statements S1
and S2
were said to be semantically equivalent if and only if:
S1 ⊨ S2
and
S2 ⊨ S1
As a reminder, S1 ⊨ S2
means S1
semantically entails S2
, which means that every truth assignment that satisfies S1
also satisfies S2
.
Semantic equivalence between S1
and S2
means that each proposition semantically entails the other – that S1
and S2
have the same truth value for every truth assignment; i.e., their truth tables evaluate exactly the same.
Showing semantic equivalence with two truth tables
For example, if we wished to show that the propositions p → ¬ q
and ¬ (p ∧ q)
were semantically equivalent, then we could create truth tables for each proposition:
         *
--------------
 p q | p → ¬ q
--------------
 T T | F F
 T F | T T
 F T | T F
 F F | T T
--------------
Contingent
T: [T F] [F T] [F F]
F: [T T]

          *
----------------
 p q | ¬ (p ∧ q)
----------------
 T T | F T
 T F | T F
 F T | T F
 F F | T F
----------------
Contingent
T: [T F] [F T] [F F]
F: [T T]
We see that the same set of truth assignments, [T F] [F T] [F F]
, satisfies both p → ¬ q
and ¬ (p ∧ q)
.
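The side-by-side comparison of the two truth tables can be mirrored by a short Python check (an illustration only; `implies` is our shorthand for material implication):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# p → ¬q and ¬(p ∧ q) agree on every truth assignment.
equivalent = all(
    implies(p, not q) == (not (p and q))
    for p, q in product([True, False], repeat=2)
)
print(equivalent)  # True
```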
Showing semantic equivalence with one truth table
To show that propositions S1
and S2
are semantically equivalent, we need to show that if S1
is true, then so is S2
, and that if S2
is true, then so is S1
. Instead of comparing the truth tables of both S1
and S2
, we could instead express our requirements as a biimplication: S1 ↔ S2
. To express a biimplication operator, we can use a conjunction of two implications: (S1 → S2) ∧ (S2 → S1)
. If this conjunction is a tautology, then we know that if one proposition is true, then the other one is too – that S1
and S2
are semantically equivalent.
Below, we show that p → ¬ q
and ¬ (p ∧ q)
are semantically equivalent using one truth table:
                            *
---------------------------------------------------------
 p q | ((p → ¬ q) → ¬ (p ∧ q)) ∧ (¬ (p ∧ q) → (p → ¬ q))
---------------------------------------------------------
 T T | F F T F T T F T T F F
 T F | T T T T F T T F T T T
 F T | T F T T F T T F T T F
 F F | T T T T F T T F T T T
---------------------------------------------------------
Tautology
Provable equivalence
Two propositional logic statements S1
and S2
are provably equivalent if and only if we can prove both of the following sequents:
S1 ⊢ S2
and
S2 ⊢ S1
We can also write:
S1 ⟛ S2
For example, suppose we wish to show that the propositions p → ¬ q
and ¬ (p ∧ q)
are provably equivalent. We must prove the following sequents:
p → ¬ q ⊢ ¬ (p ∧ q)
and
¬ (p ∧ q) ⊢ p → ¬ q
We complete both proofs below:
p → ¬ q ⊢ ¬ (p ∧ q)
{
1. p → ¬ q premise
2. {
3. p ∧ q assume
4. p ∧e1 3
5. q ∧e2 3
6. ¬ q →e 1 4
7. ⊥ ¬ e 5 6
}
8. ¬ (p ∧ q) ¬ i 2
}
¬ (p ∧ q) ⊢ p → ¬ q
{
1. ¬ (p ∧ q) premise
2. {
3. p assume
4. {
5. q assume
6. p ∧ q ∧i 3 5
7. ⊥ ¬ e 6 1
}
8. ¬ q ¬ i 4
}
9. p → ¬ q →i 2
}
Soundness and Completeness
Section 4.8 showed us that we can prove two statements are semantically equivalent with truth tables and provably equivalent with deduction proofs. Does it matter which approach we use? Will there ever be a time when two statements are semantically equivalent but not provably equivalent, or vice versa? Will there ever be a time when a set of premises semantically entails a conclusion, but that the premises do not prove (using our deduction proofs) the conclusion, or vice versa?
These questions lead us to the notions of soundness and completeness. Formal treatment of both concepts is beyond the scope of this course, but we will introduce both definitions and a rough idea of the proofs of soundness and completeness in propositional logic.
Soundness
A proof system is sound if everything that is provable is actually true. Propositional logic is sound if when we use deduction rules to prove that P1, P2, ..., Pn ⊢ C
(that a set of premises proves a conclusion) then we can also use a truth table to show that P1, P2, ..., Pn ⊨ C
(that a set of premises semantically entails a conclusion).
Propositional logic is, in fact, sound.
To get an idea of the proof, consider the ∧e1
deduction rule. It allows us to directly prove:
P ∧ Q ⊢ P
I.e., if we have P ∧ Q
as a premise or as a claim in part of a proof, then we can use ∧e1
to conclude P
. We must also show that:
P ∧ Q ⊨ P
I.e., that any time P ∧ Q
is true in a truth table, then P
is also true. And of course, we can examine the truth table for P ∧ Q
, and see that it is only true in the cases that P
is also true.
Consider the ∧i
deduction rule next. It allows us to directly prove:
P, Q ⊢ P ∧ Q
I.e., if we have both P
and Q
as premises or claims in part of a proof, then we can use ∧i
to conclude P ∧ Q
. We must also show that:
P, Q ⊨ P ∧ Q
I.e., that any time both P
and Q
are true in a truth table, then P ∧ Q
is also true. And of course, we can examine the truth table for P ∧ Q
and see that whenever P
and Q
are true, then P ∧ Q
is also true.
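The two AND-rule checks described above can be spot-checked mechanically. The snippet below is a sketch of the semantic side of the argument, not the full soundness proof:

```python
from itertools import product

assignments = list(product([True, False], repeat=2))

# ∧e1 is sound:  P ∧ Q ⊨ P
# (every row where P ∧ Q is true also has P true)
and_e1_sound = all(p for p, q in assignments if p and q)

# ∧i is sound:   P, Q ⊨ P ∧ Q
# (every row where P and Q are both true also has P ∧ Q true)
and_i_sound = all(p and q for p, q in assignments if p and q)

print(and_e1_sound, and_i_sound)  # True True
```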
To complete the soundness proof, we would need to examine the rest of our deduction rules in a similar process. We would then use an approach called mathematical induction (which we will see for other applications in Chapter 7) to extend the idea to a proof that applies multiple deduction rules in a row.
Completeness
A proof system is complete if everything that is true can be proved. Propositional logic is complete if when we can use a truth table to show that P1, P2, ..., Pn ⊨ C
, then we can also use deduction rules to prove that P1, P2, ..., Pn ⊢ C
.
Propositional logic is also complete.
We assume that P1, P2, ..., Pn ⊨ C
, and we consider the truth table for (P1 ∧ P2 ∧ ... ∧ Pn) → C
(since that will be a tautology whenever P1, P2, ..., Pn ⊨ C
). In order to show propositional logic is complete, we must show that we can use our deduction rules to prove P1, P2, ..., Pn ⊢ C
.
The idea is to use LEM for each propositional atom A
to obtain A ∨ ¬A
(corresponding to the truth assignments in the (P1 ∧ P2 ∧ ... ∧ Pn) → C
truth table). We then use OR elimination on each combination of truth assignments, with separate cases for each logical operator being used.