# Lean for Scala programmers - Part 4

June 01, 2021

Over the past few articles we’ve talked about inductive types, dependent functions, propositions as types, type classes, etc.

While this machinery is not strictly needed to start writing simple proofs (as witnessed by the amazing Natural Numbers Game), it certainly is when one is ready to move beyond carefully crafted pedagogical examples.

Today we’re finally in a position to discuss in much more detail how to create proofs in Lean; in particular we’ll analyze examples introduced previously.

# Two styles of proofs

## Forward Proofs

A “forward” or “functional” proof is just a regular function that uses standard functional programming constructs such as function application, pattern matching, variable assignment, etc. to construct a value of a given type.

### Example 1

As a first example let’s analyze the following statement:

Let $p, q$ be two arbitrary propositions.

Then

$p \land q \implies q \land p$

($\land$ is the logical and combinator)

This can be written in Lean as

def and_is_comm (p q: Prop) (h: p ∧ q): q ∧ p :=
sorry

(∧ is entered with \and; ∨ with \or)

This function takes 3 arguments:

• Two propositions p , q
• A value h of type p ∧ q. In other words, h is a proof of the proposition p ∧ q.

The function itself corresponds to a logical implication; in order to prove it we have to create a value of type q ∧ p.

Looking at the definition of the and combinator we notice it is a structure with one constructor:

structure and (a b: Prop): Prop :=
intro :: (left: a) (right: b)

Note on syntax: intro :: changes the default constructor’s name (mk), so that values can be created with and.intro a b

using #print to inspect and:

#print and

-- structure and : Prop → Prop → Prop
-- fields:
-- and.left : ∀ {a b : Prop}, a ∧ b → a
-- and.right : ∀ {a b : Prop}, a ∧ b → b

#print and.intro
-- constructor and.intro : ∀ {a b : Prop}, a → b → a ∧ b

This means that we can use and.intro to introduce (i.e. create) conjunctions, and we can use and.left and and.right to eliminate (i.e. consume, or extract components of) conjunctions.
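To see introduction and elimination in isolation, here is a minimal sketch (the hypothesis names hp, hq are our own):

```lean
-- introduce: build a proof of p ∧ q from proofs of each component
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := and.intro hp hq

-- eliminate: extract each component from a proof of p ∧ q
example (p q : Prop) (h : p ∧ q) : p := and.left h
example (p q : Prop) (h : p ∧ q) : q := and.right h
```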

The unicode symbol ∧ is just an infix alias for the type and, declared like so:

notation a ∧ b := and a b

With this information let’s finish the proof of and_is_comm:

lemma and_is_comm (p q: Prop) (h: p ∧ q): q ∧ p :=
and.intro (h.right) (h.left)

(lemma is a synonym of def)
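As an aside, Lean's anonymous constructor notation ⟨...⟩ (entered with \< and \>) lets us omit the constructor name when the expected type determines it; the following variant (name ours) should be equivalent:

```lean
lemma and_is_comm₂ (p q: Prop) (h: p ∧ q): q ∧ p :=
-- the expected type q ∧ p tells Lean to use and.intro here
⟨h.right, h.left⟩
```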

We can use string diagrams to represent proofs graphically:

• This is a left-to-right diagram, parameterized by two propositions: p and q.
• Boxes represent functions or built-in operations.
• Lines represent input/output terms; in many cases only the type will be shown.
• Δ is the duplicate operation.

## Backward Proofs

A second way to build proofs is using tactics. Tactics are macros that provide a layer of automation on top of the normal “forward” style. In many cases they are more convenient for interactive use:

lemma and_is_comm' (p q: Prop) (h: p ∧ q): q ∧ p :=
begin
apply and.intro,
exact h.right,
exact h.left,
end

Tactics can only be used in tactic mode, which can be entered with the keywords begin ... end.

Before describing each tactic we need to talk about the local context and the proof goal.

The local context can be seen in VSCode’s Lean Infoview. If you place the cursor in the begin line above you should see this:

p q: Prop
h: p ∧ q
⊢ q ∧ p

The symbol ⊢ (turnstile) is used in logic to indicate that the right-hand side (q ∧ p) can be proved from, i.e. follows logically from, the left-hand side.

The general form is:

Context ⊢ Goal
• The context is a series of statements describing terms of specific types.
• The goal is a type; in many cases a proposition.

It is common to use the letter Gamma to represent the context:

Γ ⊢ Goal

When proving things in Lean our objective is to create a term of the type specified by the goal; or alternatively simplify the goal into something that is logically equivalent.

Returning to our proof:

• apply and.intro uses the constructor and.intro to generate two (sub) goals:
p q: Prop
h: p ∧ q
⊢ q

p q: Prop
h: p ∧ q
⊢ p

Intuitively, in order to prove q ∧ p it suffices to prove q and p separately, since we can use and.intro afterwards.

• exact h.right uses the function and.right to select the q component of h, and tries to close the subgoal q with it.
• exact h.left does the same for the second subgoal.

After this there are no more goals left so this concludes the proof.
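Other tactics could be used to the same effect; here is a sketch using cases and split (both standard Lean 3 tactics, with hypothesis names of our choosing):

```lean
lemma and_is_comm'' (p q: Prop) (h: p ∧ q): q ∧ p :=
begin
  -- cases destructs h into its two components
  cases h with hp hq,
  -- split applies the (only) constructor, and.intro,
  -- producing one subgoal per argument
  split,
  { exact hq },
  { exact hp },
end
```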

Notice how the context is just the scope of the function that represents the proof.

Graphically we’re going to represent

h: p ∧ q
⊢ q ∧ p

as

meaning that we need to connect the input wires to the output somehow.

This is our initial state.

apply and.intro generates two subgoals, corresponding to the two arguments of and.intro. Each subgoal inherits the original context; represented here by the duplicate Δ operation.

exact h.right completes the first subgoal:

exact h.left completes the proof:

# Induction and Natural Numbers

### Example 2

$\forall n: \mathbb{N},\ n + 0 = n$

Addition of natural numbers is defined in Lean like so:

def add : ℕ -> ℕ -> ℕ
| a  zero     := a
| a  (succ b) := succ (add a b)

(the + operator is a synonym of add)

and so the proposition is true basically “by definition”

lemma add_zero (n: ℕ): n + 0 = n :=
rfl

-- or in tactic mode:

lemma add_zero' (n: ℕ): n + 0 = n :=
by refl

This lemma is available in the stdlib as nat.add_zero.

• refl is the tactic version of rfl. Both can be used to prove goals of the form a = b when a and b are equal by definition
• when a single tactic is needed we can use by instead of begin...end
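To get a feel for what counts as equal "by definition", here are a few examples that should typecheck (numerals and add reduce by computation):

```lean
example : 2 + 2 = 4 := rfl
example (n : ℕ) : n + 0 = n := rfl            -- add's first equation
example (n : ℕ) : n + 1 = nat.succ n := rfl   -- 1 unfolds to succ zero
```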

### Example 3

On the other hand:

$\forall n: \mathbb{N},\ 0 + n = n$

needs to be proved.

We’ll use mathematical induction over the input argument n:

lemma zero_add (n: ℕ): 0 + n = n :=
begin
induction n with d hd,
-- base case:
sorry,

-- inductive step:
sorry,
end

The induction tactic operates on the goal, which has to be a proposition that depends on a variable n: ℕ.

 induction n with d hd
• n is the variable (in scope) used for induction
• d and hd are (arbitrary) variable names
• It creates two subgoals, the base case and the inductive step
case nat.zero
⊢ 0 + 0 = 0

case nat.succ
d: ℕ
hd: 0 + d = d
⊢ 0 + succ d = succ d

The first subgoal can be completed with refl

lemma zero_add (n: ℕ): 0 + n = n :=
begin
induction n with d hd,
-- base case:
refl,

-- inductive step:
sorry,
end

For the second subgoal we’ll use the tactic rw to rewrite it towards something that can be proved by refl:

The rewrite tactic rw takes a value of type a = b and uses it to replace all occurrences of a by b in the goal:

In our case add_succ 0 d has type 0 + d.succ = (0 + d).succ, so that

rw add_succ 0 d,

will transform

Γ ⊢ 0 + d.succ = d.succ

into

Γ ⊢ (0 + d).succ = d.succ

Similarly, rw hd will replace 0 + d by d.
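rw also accepts a leftwards arrow ← (entered with \l) to use an equation right-to-left, i.e. replacing occurrences of b by a. A small sketch (names ours; nat.add_zero is the stdlib lemma mentioned earlier):

```lean
example (a b : ℕ) (h : a = b) : b + 0 = a :=
begin
  rw ←h,           -- replaces b by a:  ⊢ a + 0 = a
  rw nat.add_zero, -- ⊢ a = a, closed automatically by refl
end
```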

Putting all the pieces together:

lemma zero_add (n: ℕ): 0 + n = n :=
begin
induction n with d hd,
-- base case:
refl,

-- inductive step:
rw add_succ,
rw hd,
end

The line rw add_succ 0 d can also be written as rw add_succ, since Lean can infer the arguments 0 and d.

Before moving on let’s show a different approach that can produce more compact proofs:

lemma zero_add': ∀ (n: ℕ), 0 + n = n
| 0       := by refl
| (d + 1) := by rw [add_succ, zero_add' d]

We are pattern matching on the structure of n: ℕ and returning two proofs; Lean assembles them into the full inductive proof.

• consecutive rw lines can be collapsed into a single rw [...]
• within the inductive step (d + 1) Lean adds the inductive hypothesis as zero_add': ∀ (n : ℕ), 0 + n = n
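The same compact style works for other lemmas about ℕ; for example, here is a sketch of succ_add (assuming add_succ as above; within the inductive case, succ_add itself plays the role of the inductive hypothesis):

```lean
lemma succ_add: ∀ (a b: ℕ), nat.succ a + b = nat.succ (a + b)
| a 0       := by refl
-- rewrite both sides with add_succ, then apply the inductive hypothesis
| a (b + 1) := by rw [add_succ, add_succ, succ_add]
```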

# Structural Induction

Induction over the natural numbers (mathematical induction) is a special case of Structural Induction, which can be used in Lean to prove properties of inductively defined data types.

### Example 4

Let’s analyze the map_length example we mentioned back in the first part of this series.

Given the following definitions of length and map on lists:

def length {A: Type*}: list A -> ℕ
| []       := 0
| (h :: t) := 1 + length t

def map {A B: Type*} (f: A -> B): list A -> list B
| []       := []
| (h :: t) := f h :: map t

we’ll prove that applying map doesn’t change the length of the resulting list.

length (map f l) = length l

The idea is the same as in induction over the natural numbers: we consider all constructors for the given data type and provide proofs of all cases. Lean will assemble all the pieces into a proof that is valid for all inhabitants of the given type:

lemma map_length {A B: Type*} (f: A -> B):
∀ (l: list A), length (map f l) = length l
| []       := by refl
| (h :: t) := by rw [length, map, length, map_length]

The base case is trivially true (provable by refl) so let’s focus on the inductive step.

This is the initial proof state:

A B: Type
f: A → B
map_length: ∀ (l : list A), length (map f l) = length l
h: A
t: list A
⊢ length (map f (h :: t)) = length (h :: t)

we’re going to manipulate the goal until we get something “obvious” that can be proved by refl:

In this case we’ve used rw with a function as an argument (instead of an equation); rw will “unfold” the function definition, one step at a time.

The last step has the form a = a, and rw map_length will apply refl automatically for us, thus closing the goal.
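Spelling the rewrites out one at a time, with the (approximate) goal state after each step as a comment:

```lean
| (h :: t) := by { rw length,     -- ⊢ length (map f (h :: t)) = 1 + length t
                   rw map,        -- ⊢ length (f h :: map f t) = 1 + length t
                   rw length,     -- ⊢ 1 + length (map f t) = 1 + length t
                   rw map_length  -- a = a; rw closes it with refl
                 }
```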

This is a good moment to mention the tactic simp

simp [h₁, h₂, ..., hₙ] uses the provided expressions, together with other lemmas tagged with the attribute [simp], to simplify the main goal.

We could have proved the inductive step like this:

  | (h :: t) := by simp [length, map, map_length]

As a comparison here’s a different proof, using a more functional style:

lemma map_length {A B: Type*} (f: A -> B):
∀ (l: list A), length (map f l) = length l
| []       := rfl
| (h :: t) :=
let
p1:              1 + length t = 1 + length t         := rfl,
p2:   length (f h :: map f t) = 1 + length (map f t) := rfl,
p3:            map f (h :: t) = f h :: map f t       := rfl,
p4:           length (h :: t) = 1 + length t         := rfl,

e1:             length t = length (map f t) := eq.symm (map_length t),
e2: 1 + length (map f t) = 1 + length t     := eq.subst e1 p1
in
eq.subst p4 (eq.subst p3 (eq.subst p2 e2))
• let ... in ... introduces local definitions
• eq.symm turns a proof of a = b into a proof of b = a
• eq.subst (h₁ : a = b) (h₂ : P a) : P b expects an equation as its first argument, and replaces occurrences of a with b in the second argument. It serves the same purpose as rw in tactic mode.
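Tiny sketches of these two combinators in isolation (names ours):

```lean
example (x y : ℕ) (h : x = y) : y = x := eq.symm h

-- eq.subst replaces x by y in the proposition being proved;
-- Lean infers the motive P z := 0 < z from the expected type
example (x y : ℕ) (h : x = y) (hx : 0 < x) : 0 < y := eq.subst h hx
```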

Graphically (inductive step, top-to-bottom):

If you look carefully at p2 and p4 :

p2: length (f h :: map f t) = 1 + length (map f t) := rfl,
p4:         length (h :: t) = 1 + length t         := rfl,

you’ll notice that we’re creating proofs of something that is not literally of the form a = a.

When we defined length via pattern matching, Lean created automatically two equations:

∀ {A : Type}, length list.nil = 0

∀ {A : Type} (h : A) (t : list A), length (h :: t) = 1 + length t

corresponding to applying the function one step at a time on the two cases [] and (h :: t).

Furthermore, these equations are available to rfl via a special annotation.

Similarly, the definition of map introduces the equations:

∀ {A : Type u₁} {B : Type u₂} (f : A → B), map f list.nil = list.nil

∀ {A : Type u₁} {B : Type u₂} (f : A → B) (h : A) (t : list A),
map f (h :: t) = f h :: map f t

allowing us to use rfl in p3:

p3: map f (h :: t) = f h :: map f t := rfl,

The idea is that we can use rfl (or by refl) to prove that two things are identical or equal “by definition”.
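For instance, using the length and map defined above, these should typecheck, since both sides reduce to the same value by computation:

```lean
-- length [9, 9, 9] unfolds to 1 + (1 + (1 + 0)), which reduces to 3
example : length ([9, 9, 9] : list ℕ) = 3 := rfl

-- map's first equation, applied definitionally
example (f : ℕ → ℕ) : map f ([] : list ℕ) = [] := rfl
```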

Tactic-mode proofs are very handy for interactive use, since it’s easy to inspect the state of the proof. On the other hand the state itself is not captured in the source code, which can make the proof harder to read.

It is good to remember that tactic mode and regular mode can be mixed as needed.