
Introduction to Mathematical Logic

Adrien Deloro
Spring 2009

Contents

0 Propositional Logic (Allegro)
   0.1 Syntax
      0.1.1 The Language of Propositional Logic
      0.1.2 Well-Formed Formulas
      0.1.3 Unique Readability
   0.2 Semantics
      0.2.1 Truth Assignments
      0.2.2 Satisfaction and Semantic Consequence
      0.2.3 Simplifying the Language
   0.3 Natural Deduction (in Classical Logic)
      0.3.1 Syntactic Consequences and Deductions
      0.3.2 Our First Deductions
      0.3.3 Simplifying the Language (and Presentation)
      0.3.4 The Soundness Theorem
   0.4 The Completeness Theorem
      0.4.1 Extending the Theory
      0.4.2 Finding a Truth Assignment
   0.5 The Compactness Theorem
      0.5.1 Compactness
      0.5.2 Applications to Decidability*
      0.5.3 A Topological Proof*
   0.6 A Little Modal Logic*
      0.6.1 Semantics*
      0.6.2 Proof Theory*
      0.6.3 Completeness*

1 First-Order Logic (Largo)
   1.1 Syntax
      1.1.1 First-order Languages
      1.1.2 Terms
      1.1.3 Formulas
   1.2 Semantics
      1.2.1 Structures
      1.2.2 Parameters and Interpretations
      1.2.3 Satisfaction and Semantic Consequence
   1.3 Substitutions
      1.3.1 Substitutability
      1.3.2 A Renaming Algorithm
      1.3.3 Substitutions and Satisfaction
   1.4 Deductions and Soundness
      1.4.1 Deductions
      1.4.2 Simplifying the Language
      1.4.3 Soundness
   1.5 Completeness
      1.5.0 Strategy and Witnesses
      1.5.1 Expanding the Language
      1.5.2 Extending the Theory
      1.5.3 Finding a Structure (and Assignment)
   1.6 Consequences and Compactness
      1.6.1 Decidability*
      1.6.2 Compactness
      1.6.3 Non-Standard Analysis*
   1.7 An Alternate Proof of Compactness*
      1.7.1 Filters and Ultrafilters*
      1.7.2 Ultraproducts (The Łoś Structure)*
      1.7.3 Alternate Proof of Compactness*

2 Second-Order Logic (Presto)
   2.1 Compactness fails
   2.2 Peano Arithmetic

1' Some Model Theory (Furioso)
   1'.0 A Word on Model-Terrorists
   1'.1 Elementary Equivalence; Inclusion and Elementary Inclusion
      1'.1.1 Elementary Equivalence
      1'.1.2 Inclusion
      1'.1.3 Elementary Inclusion
   1'.2 Morphisms, Elementary Morphisms, Categoricity
      1'.2.1 Morphisms
      1'.2.2 Elementary Morphisms
      1'.2.3 Categoricity
   1'.3 Löwenheim-Skolem Theorems
      1'.3.1 The Substructure Generated by a Set
      1'.3.2 Skolem Functions and the Descending Version
      1'.3.3 The General Version and the Łoś-Vaught Criterion
   1'.4 Back-and-Forth Methods
      1'.4.1 Dense Linear Orderings
      1'.4.2 ∞-isomorphisms
      1'.4.3 Finitary Back-and-Forth*
   1'.5 Quantifier Elimination and ω-Saturation
      1'.5.1 Elimination Sets
      1'.5.2 ω-Saturated Models
      1'.5.3 Examples

Indices
   Index of Notions
   Index of Results

List of Lectures

Chapter 0

Propositional Logic (Allegro)


Propositional logic, sometimes called propositional calculus, is interested in compound propositions; it is the logic which underlies truth tables. In spite of its little mathematical interest, it is a perfect field for a first encounter with important notions.
We shall introduce a formalization of deduction in the setting of propositional logic, and study (some of) its properties. This will be a brief account of proof theory.
We shall also introduce the objects of propositional logic, that is, truth assignments. These are on the side of meanings, called semantics. They bring their own notion of consequence; the main result of this chapter, which serves as a training area for more expressive logics, is that in propositional logic both notions coincide.

In this chapter:
- Describe the language of propositional logic (§0.1)
- Describe the objects of propositional logic (§0.2)
- Formalize the notion of entailment (§0.2.2)
- Formalize the notion of deduction in propositional logic (§0.3)
- Show that entailment and deduction coincide (§0.3.4 and §0.4)
- Derive the compactness theorem and applications (§0.5)
- An incursion into modal logic* (§0.6)

Lecture 1 (Language and Wff's; Unique Readability)

0.1 Syntax
We first deal with the familiar syntax of propositional logic. The expressions are (finite) sequences of well-known symbols (§0.1.1), but not all such expressions may be given a meaning. This will lead us to the relevant notion of formula (§0.1.2). A key, and fairly natural, property of the language is that there is exactly one way to read a relevant formula (Theorem 0.1.9 of §0.1.3).

0.1.1 The Language of Propositional Logic


As we have said, propositional logic is interested in building compound propositions from elementary propositions.

Definition 0.1.1 (language of propositional logic). The language of propositional logic consists of:
- two punctuation symbols: the parentheses "(" and ")";
- five connectives: ¬, ∧, ∨, →, ↔;
- a set A of sentence symbols A1, . . . , An, . . . .

One should already have an intuition that ¬ and → suffice to encode the other three connectives. This informal idea will be given a precise meaning in §0.3.3.

Definition 0.1.2 (expression). An expression is any (finite) sequence of symbols from the alphabet. Its length is its length as a sequence.

Remark 0.1.3. The set E of expressions of propositional logic is countable.

Technically, Definition 0.1.1 defines the alphabet of propositional logic. It is never perfectly clear what "language" means: does one mean the alphabet, the set of expressions, or the set of well-formed formulas (Definition 0.1.4 below)? Very fortunately, the cardinals of the alphabet, the set of expressions, and the set of well-formed formulas are the same.

0.1.2 Well-Formed Formulas


As "(∨¬A1" is an expression, we must clarify the formation rules.

Definition 0.1.4 (well-formed formula). The collection WFF of well-formed formulas (wff's in short) is the smallest set such that:
- each An is a wff;
- if φ and ψ are wff's, so are (¬φ), (φ ∧ ψ), (φ ∨ ψ), (φ → ψ), and (φ ↔ ψ).

Remark 0.1.5. As this is the only relevant notion of formula anyway, we shall soon omit "well-formed", and say "formula" for short.

Definition 0.1.6 (theory). A theory is a set of wff's.

Wff's are built from the sentence symbols with connectives. As this is a recursive definition, most of our proofs on formulas will proceed by induction (notice that the length of a compound proposition is bigger than that of the propositions which compose it).
Let us be more technical and check that Definition 0.1.4 makes sense.

Notation 0.1.7. We define the following functions:

  C¬ : E → E,  α ↦ (¬α)
  C∧ : E² → E,  (α, β) ↦ (α ∧ β)
  C∨ : E² → E,  (α, β) ↦ (α ∨ β)
  C→ : E² → E,  (α, β) ↦ (α → β)
  C↔ : E² → E,  (α, β) ↦ (α ↔ β)

Notice that these functions actually take WFF (resp. WFF²) to WFF.

Notation 0.1.8. Let WFF0 = A, and for n ∈ N,

  WFFn+1 = WFFn ∪ C¬(WFFn) ∪ C∧(WFFn²) ∪ C∨(WFFn²) ∪ C→(WFFn²) ∪ C↔(WFFn²)

It is clear that for all n ∈ N, WFFn is countable. It is also clear that WFF = ∪n∈N WFFn. This proves that WFF is a set, and even a countable set. More generally, if A is infinite (but perhaps uncountable), then Card WFF = Card A.
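To make the staged construction of Notation 0.1.8 concrete, here is a minimal Python sketch (not part of the notes): it builds the first stages WFF0, WFF1, . . . over a finite stock of sentence symbols, with formulas represented as strings and ASCII stand-ins for the connectives; all names are illustrative assumptions.

```python
# Illustration of Notation 0.1.8: build the stages WFF_0, WFF_1, ...
# Formulas are plain strings; only finitely many sentence symbols are used
# so that each stage stays finite (an assumption made for the illustration).

SENTENCE_SYMBOLS = ["A1", "A2"]          # finite stock, for illustration only
CONNECTIVES = ["&", "v", "->", "<->"]    # ASCII stand-ins for the binary connectives

def next_stage(stage):
    """Return WFF_{n+1} from WFF_n, following Notation 0.1.8."""
    new = set(stage)
    for a in stage:
        new.add(f"(~{a})")                       # C_neg
        for b in stage:
            for c in CONNECTIVES:
                new.add(f"({a} {c} {b})")        # binary connectives
    return new

stages = [set(SENTENCE_SYMBOLS)]                 # WFF_0 = A
for n in range(2):                               # compute WFF_1 and WFF_2
    stages.append(next_stage(stages[-1]))

print(len(stages[0]), len(stages[1]), len(stages[2]))  # sizes grow quickly
```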

0.1.3 Unique Readability

There is still something which needs to be proved about our well-formed formulas: that no ambiguity arises when we read one.

Theorem 0.1.9 (unique readability). The restrictions to WFF of the functions C¬, C∧, C∨, C→, and C↔ are injective and have disjoint images.

Corollary 0.1.10. Let φ be a wff of length > 1. Then there is a unique decomposition of φ among the following possibilities:

  (¬φ1), (φ1 ∧ φ2), (φ1 ∨ φ2), (φ1 → φ2), (φ1 ↔ φ2)

(This means that the connective and the composing propositions are unique.)

An initial segment of an expression E is an expression E1 such that there is an expression E2 with E = E1E2 (concatenation). A proper initial segment is an initial segment such that E1 ≠ E. Notice that the empty (blank) sequence is an expression but not a wff.


The proof of Theorem 0.1.9 will rely on the following key observation.

Lemma 0.1.11 (balanced parenthesing).
(i) A wff has as many left parentheses as right parentheses.
(ii) A non-empty proper initial segment of a wff has strictly more left parentheses than right parentheses.
(iii) No proper initial segment of a wff is a wff.

Proof.
(i) We show the property by induction on (the length of) φ, using the description of WFF found above (Notation 0.1.8). If φ ∈ WFF0, then φ is a sentence symbol, and this is obvious. Assume the result is known for elements of WFFn. An element of C¬(WFFn) is of the form (¬ψ) with ψ ∈ WFFn, so by induction ψ has as many left parentheses as right parentheses, and so does (¬ψ). The same holds for elements of C∧(WFFn²), C∨(WFFn²), C→(WFFn²), and C↔(WFFn²). So any element of WFFn+1 has as many left parentheses as right parentheses. By induction, the property is true on WFF.

(ii) Same method. If φ ∈ WFF0, then φ is a sentence symbol, so it has no non-empty proper initial segments, and the claim is true! Assume it holds for elements of WFFn. Let φ ∈ WFFn+1 be of the form (¬ψ) with ψ ∈ WFFn. Consider a non-empty proper initial segment σ of φ; we show that σ has more ('s than )'s.
- If σ is "(" or "(¬", we are done.
- If σ is "(¬σ1" where σ1 is a non-empty proper initial segment of ψ, then we are done by induction.
- If σ is "(¬ψ", then we are done as ψ has as many ('s as )'s by (i).
There are no other cases, as σ is a proper initial segment of φ. One copes with the other four connectives in a similar vein.

(iii) A consequence of (i) and (ii) (the empty expression is not a wff anyway).

Proof of Theorem 0.1.9. Assume that C□(φ1, ψ1) = C■(φ2, ψ2) for binary connectives □, ■, that is:

  (φ1 □ ψ1) = (φ2 ■ ψ2)

for well-formed formulas φ1, ψ1, φ2, ψ2. We may delete the first "(", getting

  φ1 □ ψ1) = φ2 ■ ψ2)

Then φ1 or φ2 must be an initial segment of the other; without loss of generality φ1 is an initial segment of φ2. If φ1 is a proper initial segment of the wff φ2, then by balanced parenthesing (Lemma 0.1.11 (iii)), φ1 can't be a wff, a contradiction.
So φ1 = φ2, and deleting it we reach

  □ ψ1) = ■ ψ2)

At this point it is clear that □ = ■; deleting it, and the final right parentheses, we derive ψ1 = ψ2. Of course the case of ¬ is even easier than that of binary connectives, so everything is proved.

Such a quick proof of the Unique Readability Theorem is the reason why we have introduced so many parentheses, in spite of the disgusting redundancies they create. From now on we freely omit parentheses when there is no ambiguity, i.e. we write ¬φ ∨ ψ instead of ((¬φ) ∨ ψ).
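Unique readability is what makes parsing possible: to decompose a wff one looks for the binary connective sitting at parenthesis depth zero inside the outermost pair, exactly as in the balance argument of Lemma 0.1.11. The following Python sketch (an illustration, not part of the notes) recovers the unique decomposition promised by Corollary 0.1.10 for fully parenthesized formulas written with ASCII connectives.

```python
# Decompose a fully parenthesized wff.  ASCII stand-ins: ~ for negation,
# and &, v, ->, <-> for the binary connectives.

BINARY = ["<->", "->", "&", "v"]

def decompose(w):
    """Return ('atom', w), ('~', phi) or (connective, phi, psi)."""
    if not (w.startswith("(") and w.endswith(")")):
        return ("atom", w)                    # a sentence symbol
    inner = w[1:-1]
    if inner.startswith("~"):
        return ("~", inner[1:])
    depth, i = 0, 0
    while i < len(inner):
        if inner[i] == "(":
            depth += 1
        elif inner[i] == ")":
            depth -= 1
        elif depth == 0:
            for c in BINARY:                  # main connective at depth 0
                if inner.startswith(c, i):
                    return (c, inner[:i].strip(), inner[i + len(c):].strip())
        i += 1
    raise ValueError("not a well-formed formula")

print(decompose("((~A1) v A2)"))              # ('v', '(~A1)', 'A2')
```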
End of Lecture 1.

Lecture 2 (The Semantics of Propositional Logic)

0.2 Semantics
We briefly escape the claws of syntax, and give sense to our symbols. So far they bore no meaning; at no point yet did we agree on what they stand for.

0.2.1 Truth Assignments

Let us establish the connection between the language and the objects it is about. Due to the relative poverty of propositional logic, not much can be said: everything is described once each sentence symbol of A has been assigned a truth value.

Definition 0.2.1 (truth values). There are two truth values: True (T) and False (F, also denoted ⊥).

Notice that everything from now on could be defined with more truth values; for instance continuous logic investigates the case where the set of truth values is [0, 1]. But the important theorems we want might no longer hold.

Definition 0.2.2 (truth assignment). A truth assignment is a function v : A → {F, T}.
Once the sentence symbols have been given a truth value, the natural truth value of every wff is known.

Proposition 0.2.3. Let v be a truth assignment. Then there is a unique extension v̄ : WFF → {F, T} such that:
- v̄(An) = v(An) for each n ∈ N;
- v̄(¬φ) = T iff v̄(φ) = F;
- v̄(φ1 ∧ φ2) = T iff v̄(φ1) = T and v̄(φ2) = T;
- v̄(φ1 ∨ φ2) = T iff v̄(φ1) = T or v̄(φ2) = T;
- v̄(φ1 → φ2) = T iff v̄(φ1) = F or v̄(φ2) = T;
- v̄(φ1 ↔ φ2) = T iff v̄(φ1) = v̄(φ2).

Proof. This extension v̄ is well-defined by Unique Readability (Theorem 0.1.9). A quick induction shows that it is unique.

Example 0.2.4. If v(A1) = v(A2) = T and v(A3) = v(A4) = F, then one has v̄((A1 ∧ A2) ∨ (A3 ∧ A4)) = T.

Only a little later (at the end of §0.2.3) shall we identify v with its extension v̄.
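Proposition 0.2.3 is exactly the recursion a truth-table evaluator implements. Below is a minimal Python sketch (an illustration, not from the notes); formulas are nested tuples such as ('&', 'A1', ('~', 'A2')), a representation chosen here purely for convenience.

```python
# Evaluate a formula under a truth assignment, following Proposition 0.2.3.
# A sentence symbol is a string; ('~', f) is a negation; ('&', f, g),
# ('v', f, g), ('->', f, g), ('<->', f, g) are the binary connectives.

def evaluate(formula, v):
    if isinstance(formula, str):                 # sentence symbol
        return v[formula]
    op = formula[0]
    if op == "~":
        return not evaluate(formula[1], v)
    a, b = evaluate(formula[1], v), evaluate(formula[2], v)
    if op == "&":
        return a and b
    if op == "v":
        return a or b
    if op == "->":
        return (not a) or b
    if op == "<->":
        return a == b
    raise ValueError(f"unknown connective {op!r}")

v = {"A1": True, "A2": True, "A3": False, "A4": False}
phi = ("v", ("&", "A1", "A2"), ("&", "A3", "A4"))
print(evaluate(phi, v))                          # True
```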

0.2.2 Satisfaction and Semantic Consequence

Definition 0.2.5 (satisfaction). Let Γ be a theory and v be a truth assignment. v satisfies Γ if for all φ ∈ Γ, v̄(φ) = T.

Example 0.2.6.
- If v(A1) = F and v(A2) = T, then v satisfies (A1 → A2).
- If v(An) = T for all n ∈ N, then v satisfies {A1, A1 ∧ A2, . . . , A1 ∧ An, . . .}.

Definition 0.2.7 (satisfiability). Let Γ be a theory. Γ is satisfiable if there is a truth assignment satisfying it.

Example 0.2.8.
- {¬A1, A1 → A2, A1 ∨ A3} is satisfiable (try for instance the truth assignment v(A1) = v(A2) = F, v(A3) = T).
- {A1 ∧ ¬A1} is not satisfiable.
- {A1, A2, ¬(A1 ∧ A2)} is not satisfiable.

A very natural question occurs: when is a theory satisfiable? It will have an answer with the Compactness Theorem, Theorem 0.5.2 below.
The most important notion of logic is the following.

Definition 0.2.9 (semantic consequence). Let Γ be a theory and φ a wff. Γ entails φ, denoted Γ ⊨ φ, if every truth assignment satisfying Γ satisfies φ.

One also says that Γ tautologically implies φ. If ∅ ⊨ φ, one may then say that φ is a tautology, or that φ is valid. We do not insist on classical terminology.

Lemma 0.2.10. Γ ∪ {¬φ} is satisfiable iff Γ ⊭ φ.

Proof. Suppose that Γ ∪ {¬φ} is satisfiable. Then there is a truth assignment v satisfying Γ and ¬φ. This means that v(¬φ) = T, so by definition (Proposition 0.2.3), v does not satisfy φ. Hence Γ does not entail φ.
Conversely, suppose that Γ ⊭ φ. There is therefore a truth assignment v satisfying Γ but not satisfying φ. It follows that v(¬φ) = T, so v actually satisfies Γ ∪ {¬φ}.

Theorem 0.2.11. Let Γ0 be a finite theory and φ a wff. Then there is an algorithm which decides whether Γ0 entails φ or not.

Proof. In Γ0 ∪ {φ}, only finitely many sentence symbols occur. List all (but finitely many) truth tables involving them. If all truth tables satisfying Γ0 also satisfy φ, the answer is "yes"; otherwise answer "no" at the first flaw.
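The algorithm in Theorem 0.2.11 is just a finite search through all assignments of the finitely many sentence symbols involved. A minimal, self-contained Python sketch of that search (illustrative only; formulas are nested tuples as before):

```python
from itertools import product

# Decide whether a finite theory entails a formula (Theorem 0.2.11) by
# checking every truth assignment of the sentence symbols that occur.

def ev(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == "~":
        return not ev(f[1], v)
    a, b = ev(f[1], v), ev(f[2], v)
    return {"&": a and b, "v": a or b, "->": (not a) or b}[f[0]]

def symbols(f, acc=None):
    acc = set() if acc is None else acc
    if isinstance(f, str):
        acc.add(f)
    else:
        for sub in f[1:]:
            symbols(sub, acc)
    return acc

def entails(gamma, phi):
    syms = sorted(set().union(*[symbols(g) for g in gamma], symbols(phi)))
    for values in product([True, False], repeat=len(syms)):
        v = dict(zip(syms, values))
        if all(ev(g, v) for g in gamma) and not ev(phi, v):
            return False            # found a countermodel
    return True

print(entails([("->", "A1", "A2"), "A1"], "A2"))   # True: modus ponens
```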

Question. And what about an infinite set Γ?

This question will find an answer when we can reduce the infinite to the finite (Compactness, Theorem 0.5.2). The answer is exactly half as good (Corollary 0.5.14). This is far from obvious, as Γ might now involve infinitely many sentence symbols, giving rise to continuum many truth tables. However φ is a single formula, so one has the feeling that some finite subset Γ0 of Γ would suffice to entail φ in case Γ does.
There is however no bound a priori on how many sentence symbols are involved in Γ0, as the example φ = A1, Γ = {A2, A2 → A3, . . . , An−1 → An, An → A1} shows. This is actually the reason why the algorithm for infinite Γ given in Corollary 0.5.14 cannot be implemented to answer "no": if it hasn't answered "yes" after a long waiting time, it could be
- either because Γ ⊨ φ, but we haven't waited long enough to be positive about it;
- or simply because Γ ⊭ φ, but we'll never know.

These questions will be made more precise in §0.5.2.

0.2.3 Simplifying the Language

Five connectives are too much: they make inductive definitions and proofs extremely clumsy. Moreover we have a feeling that they are redundant. We shall cut down on symbols and rules.
In Proposition 0.2.3, we have treated ∧, ∨, ↔ as if they were elementary connectives, and not abbreviations. We show that considering them as abbreviations is harmless. This involves rewriting the whole theory with only two connectives, and checking that this new setting is equivalent to the previous one.

Notation 0.2.12. The collection WFF′ is the smallest set such that:
- each An is a wff;
- if φ and ψ are wff's, so are (¬φ) and (φ → ψ).

The set WFF′ would require a version of the Unique Readability Theorem (Theorem 0.1.9), which may be regarded as a special case of the previous one, since WFF′ ⊆ WFF. In any case, we keep working with few parentheses.

There is of course a natural way to translate a standard wff (using five connectives) into a wff′ using only ¬ and →.

Notation 0.2.13. We inductively translate any wff φ into a formula φ′ which uses only the two connectives ¬ and →:
- A′n is An for each n ∈ N;
- (¬φ)′ is (¬φ′);
- (φ ∧ ψ)′ is ¬(φ′ → ¬ψ′);
- (φ ∨ ψ)′ is (¬φ′ → ψ′);
- (φ → ψ)′ is (φ′ → ψ′);
- (φ ↔ ψ)′ is ¬((φ′ → ψ′) → ¬(ψ′ → φ′)).

Hence for any formula φ, the translation φ′ uses only two connectives, and intuitively bears the same meaning.

Example 0.2.14. Let φ be (A1 ↔ A2) → (A3 ∨ A1). Then φ′ is ((A1 ↔ A2)′ → (A3 ∨ A1)′), which is

  (¬((A1 → A2) → ¬(A2 → A1)) → (¬A3 → A1)).
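The translation of Notation 0.2.13 is a straightforward structural recursion. A small Python sketch (illustrative; nested tuples again stand for formulas, and the particular clauses are the ones reconstructed above):

```python
# Translate a five-connective formula into one using only ~ and ->,
# following the clauses of Notation 0.2.13 (as reconstructed here).

def translate(f):
    if isinstance(f, str):                       # sentence symbol
        return f
    op = f[0]
    if op == "~":
        return ("~", translate(f[1]))
    a, b = translate(f[1]), translate(f[2])
    if op == "->":
        return ("->", a, b)
    if op == "&":                                # phi & psi  ~>  ~(phi -> ~psi)
        return ("~", ("->", a, ("~", b)))
    if op == "v":                                # phi v psi  ~>  ~phi -> psi
        return ("->", ("~", a), b)
    if op == "<->":                              # both implications, conjoined
        return ("~", ("->", ("->", a, b), ("~", ("->", b, a))))
    raise ValueError(f"unknown connective {op!r}")

print(translate(("v", "A1", ("&", "A2", "A3"))))
# ('->', ('~', 'A1'), ('~', ('->', 'A2', ('~', 'A3'))))
```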

The issue is that this new notion of formula gives rise to another extension of truth assignment, and therefore to a new notion of entailment. We must check that these new notions do coincide with their original five-connective analogues.

Notation 0.2.15. For a truth assignment v, let v′ : WFF′ → {F, T} be the extension of v defined by:
- v′(An) = v(An) for each n ∈ N;
- v′(¬φ) = T iff v′(φ) = F;
- v′(φ1 → φ2) = T iff v′(φ1) = F or v′(φ2) = T.

Notice that this is a subset of the conditions defining v̄ (Proposition 0.2.3), and that v′(φ) is defined only for φ ∈ WFF′.

Lemma 0.2.16. Let v be a truth assignment, φ a wff, and φ′ its translation using only ¬ and →. Then v̄(φ) = v′(φ′).

Proof. By induction on φ.
- If φ is a sentence symbol, then by definition v̄(φ) = v(φ) = v′(φ′).
- Suppose that φ is ¬ψ. Then v̄(φ) = T is equivalent to v̄(ψ) = F, which is by induction equivalent to v′(ψ′) = F, which is equivalent to v′(¬ψ′) = T, that is v′(φ′) = T.
- Suppose that φ is ψ1 → ψ2. Then v̄(φ) = T iff v̄(ψ1) = F or v̄(ψ2) = T, iff v′(ψ1′) = F or v′(ψ2′) = T, iff v′(ψ1′ → ψ2′) = T, iff v′(φ′) = T.
- Suppose that φ is ψ1 ∧ ψ2. Then v̄(φ) = T iff v̄(ψ1) = v̄(ψ2) = T, iff v′(ψ1′) = v′(ψ2′) = T, iff v′(¬(ψ1′ → ¬ψ2′)) = T, iff v′(φ′) = T.
- We proceed similarly for the remaining two connectives ∨ and ↔.

Any modification, even apparently harmless, of the notion of truth assignment induces a modification of the notion of semantic consequence (Definition 0.2.9).

Notation 0.2.17. Let Γ ∪ {φ} be a subset of WFF′. Write Γ ⊨′ φ if whenever v′ satisfies Γ, then v′(φ) = T.

Corollary 0.2.18. Γ ⊨ φ iff Γ′ ⊨′ φ′.

Proof. Suppose that Γ ⊨ φ. We show that Γ′ ⊨′ φ′. Let v be a truth assignment such that v′ satisfies Γ′. By Lemma 0.2.16, v̄ satisfies Γ. By assumption, v̄ satisfies φ. Using Lemma 0.2.16 again, v′ satisfies φ′.
Now suppose that Γ′ ⊨′ φ′. We show that Γ ⊨ φ. Let v be a truth assignment such that v̄ satisfies Γ. By Lemma 0.2.16, v′ satisfies Γ′. By assumption, v′ satisfies φ′. Using Lemma 0.2.16 again, v̄ satisfies φ.

From now on, we know that the semantics induced by two connectives is the same as that induced by five. In particular we happily forget about v′ and ⊨′. (We are not entirely done with φ′ however, since a similar verification will have to be carried out on the syntactic side, §0.3.3.)
We shall also identify any truth assignment v with its extension v̄.

End of Lecture 2.
Lecture 3 (Natural Deduction in Classical Logic)

0.3 Natural Deduction (in Classical Logic)


In this section we formalize the notions of deduction and syntactic consequence;
this will be done in 0.3.1.

We shall work in the framework of Natural De-

duction (a certain notion of deduction), using Classical Logic (certain rules of


deduction).
Natural Deduction was invented by Gerhard Gentzen; there are other notions
of deduction (for instance, Hilbert's system, which relies on modus ponens), and

12

there are other logics (for instance, intuitionistic logic, which does not allow
contradiction proofs). They all yield dierent notions of deduction. We made
the choices which best reect a normal mathematician's thought.
After a couple of examples (0.3.2), we shall proceed in 0.3.3 to reducing
the language to two connectives.

There are indeed redundancies as we have

observed (Corollary 0.2.18). We shall show that one may restrict the language

and deduction rules to

and

without aecting the power of the notion of

deduction.

0.3.1 Syntactic Consequences and Deductions


What is a consequence, again? In terms of proofs,

if there is a way to deduce

Denition 0.3.1

(deduction)

follows from assumptions

using the w 's of the theory

A deduction is a nite sequence of basic steps

which are regarded as elementary deduction rules.


The deduction can be drawn as a tree showing the consecutive steps. This
will be more precise when we have given an example of a set of such rules, but
we can already formulate the following important denition.

Denition 0.3.2
a theorem of
says that
If

= ,

(syntactic consequence)

denoted

proves

Let

be a theory and

if there is a deduction of

under

a w. is
. One also

one writes

Example 0.3.3.

` ,

` ,

and says that

At the end of the day,

is a theorem (of our logic).

(( )) should be a theorem.

We now explain the basic steps of deduction. Bear in mind that we chose the most natural setting (natural deduction), but there are many other possible settings, and not all are equivalent. A characteristic feature of natural deduction is that connectives may be introduced and eliminated. Describing the deduction rules amounts to explaining how connectives are proved and used. In what follows, Γ stands for a (possibly empty) theory, and φ, ψ, χ, etc. for wff's.

There is only one axiom (so far). In natural deduction for propositional logic, the only way to start a proof is by assuming something.

Axiom (Ax):
  Γ ∪ {φ} ⊢ φ

Weakening allows us to add unused assumptions to an existing deduction.

Weakening (Wk):
  from Γ ⊢ φ, infer Γ ∪ {ψ} ⊢ φ

The case of negation is very subtle. We have a way to introduce a negation provided we deduced an inconsistency (¬i), but we also have a way of removing double negations (¬e). This is a characteristic feature of classical logic, which is actually equivalent to admitting reductio ad absurdum. Keep in mind that this is only one among the possible treatments of negation and contradiction, though it seems satisfactory to us as working mathematicians. The elimination rule ¬e is necessary to show the excluded middle law (Theorem 0.3.11 below); if we introduce a weaker rule, excluded middle need not hold any more. However hard it may be to believe at first, ¬i is not related to reduction to absurdum. Only ¬e is.

Introduction and elimination rules for ¬:
  (¬i) from Γ ∪ {φ} ⊢ ψ and Γ ∪ {φ} ⊢ ¬ψ, infer Γ ⊢ ¬φ
  (¬e) from Γ ⊢ ¬¬φ, infer Γ ⊢ φ

There are two elimination rules for ∧, as there are two possible ways to use a conjunction.

Introduction and elimination rules for ∧:
  (∧i) from Γ ⊢ φ and Γ ⊢ ψ, infer Γ ⊢ φ ∧ ψ
  (∧e1) from Γ ⊢ φ ∧ ψ, infer Γ ⊢ φ
  (∧e2) from Γ ⊢ φ ∧ ψ, infer Γ ⊢ ψ

Dually, there are two introduction rules for ∨, as there are two ways to prove a disjunction. This is in general difficult, since when trying to show φ ∨ ψ one never knows a priori which is true; contradiction proofs will be helpful (see the proof of Theorem 0.3.11 below). On the other hand, ∨e may look subtle at first sight, but it merely describes the process of a case division.

Introduction and elimination rules for ∨:
  (∨i1) from Γ ⊢ φ, infer Γ ⊢ φ ∨ ψ
  (∨i2) from Γ ⊢ ψ, infer Γ ⊢ φ ∨ ψ
  (∨e) from Γ ⊢ φ1 ∨ φ2, Γ ∪ {φ1} ⊢ ψ, and Γ ∪ {φ2} ⊢ ψ, infer Γ ⊢ ψ

∧ and ∨ may be regarded as convenient abbreviations (this will be made formal in §0.3.3). But the connective → is essential.

Introduction and elimination rules for →:
  (→i) from Γ ∪ {φ} ⊢ ψ, infer Γ ⊢ φ → ψ
  (→e) from Γ ⊢ φ → ψ and Γ ⊢ φ, infer Γ ⊢ ψ

The elimination rule →e is classically called modus ponens; dually, the introduction rule is called modus tollens.

It is clear that ↔ should have the moral status of an abbreviation. Yet in case it is admitted as a primitive connective, we need deduction rules.

Introduction and elimination rules for ↔:
  (↔i) from Γ ⊢ φ → ψ and Γ ⊢ ψ → φ, infer Γ ⊢ φ ↔ ψ
  (↔e1) from Γ ⊢ φ ↔ ψ, infer Γ ⊢ φ → ψ
  (↔e2) from Γ ⊢ φ ↔ ψ, infer Γ ⊢ ψ → φ

We shall start using quantifiers in the next chapters and introduce adequate rules; for the moment they won't appear, as we work in Propositional Logic.
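For readers who prefer the usual tree layout, here is a LaTeX rendering (requiring only amsmath) of the four rules that will matter most in what follows — ¬i, ¬e, →i, →e; this is merely a typeset restatement of the rules listed above.

```latex
% The negation and implication rules of natural deduction, as inference figures.
\[
\frac{\Gamma\cup\{\varphi\}\vdash\psi \qquad \Gamma\cup\{\varphi\}\vdash\lnot\psi}
     {\Gamma\vdash\lnot\varphi}\;(\lnot_i)
\qquad
\frac{\Gamma\vdash\lnot\lnot\varphi}{\Gamma\vdash\varphi}\;(\lnot_e)
\]
\[
\frac{\Gamma\cup\{\varphi\}\vdash\psi}{\Gamma\vdash\varphi\to\psi}\;(\to_i)
\qquad
\frac{\Gamma\vdash\varphi\to\psi \qquad \Gamma\vdash\varphi}{\Gamma\vdash\psi}\;(\to_e)
\]
```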

Remark 0.3.4. Notice that in our deductions we may have several assumptions, but at each step we have only one conclusion. This is another typical feature of Natural Deduction which supposedly reflects the way we mathematicians work. But there are other, more complex, deduction systems (for instance, Gentzen's Sequent Calculus).

A formalization of deduction yields the following essential notion.

Definition 0.3.5 (consistency). A theory Γ is consistent if there is no wff φ such that Γ ⊢ φ and Γ ⊢ ¬φ.

In other words, a theory is consistent if it is not possible to deduce a contradiction from it. There is a symbol ⊥ for inconsistency (or contradiction), which we shall not use.

Lemma 0.3.6. Γ is inconsistent iff Γ ⊢ ψ for all wff ψ.

Proof. Suppose Γ inconsistent. Then there is φ such that Γ ⊢ φ and Γ ⊢ ¬φ. Hence, for any wff ψ: weaken both to Γ ∪ {¬ψ} ⊢ φ and Γ ∪ {¬ψ} ⊢ ¬φ (Wk); then Γ ⊢ ¬¬ψ (¬i), and Γ ⊢ ψ (¬e).
The converse is obvious.

0.3.2 Our First Deductions

We begin with an easy example.

Example 0.3.7. For wff's φ, ψ, one has

  ⊢ (φ ↔ ψ) ↔ ((φ → ψ) ∧ (ψ → φ))

Verification: On the one hand, {φ ↔ ψ} ⊢ φ ↔ ψ (Ax), so {φ ↔ ψ} ⊢ φ → ψ (↔e1) and {φ ↔ ψ} ⊢ ψ → φ (↔e2); by ∧i, {φ ↔ ψ} ⊢ (φ → ψ) ∧ (ψ → φ), and by →i,

  ⊢ (φ ↔ ψ) → ((φ → ψ) ∧ (ψ → φ)).

On the other hand, {(φ → ψ) ∧ (ψ → φ)} ⊢ (φ → ψ) ∧ (ψ → φ) (Ax); by ∧e1 and ∧e2 we get the two deductions (δ1) of φ → ψ and (δ2) of ψ → φ, which are symmetric and obtained from each other by exchanging φ and ψ; by ↔i, {(φ → ψ) ∧ (ψ → φ)} ⊢ φ ↔ ψ, and by →i,

  ⊢ ((φ → ψ) ∧ (ψ → φ)) → (φ ↔ ψ).

Combining both trees with ↔i, we find

  ⊢ (φ ↔ ψ) ↔ ((φ → ψ) ∧ (ψ → φ))
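The first half of the verification, drawn as a tree in the original layout, can be typeset for instance as follows (a LaTeX sketch using nested fractions rather than a proof-tree package; it requires amsmath).

```latex
% One direction of Example 0.3.7 as a nested-fraction proof tree.
\[
\dfrac{\dfrac{\dfrac{\{\varphi\leftrightarrow\psi\}\vdash\varphi\leftrightarrow\psi\;(\mathrm{Ax})}
                    {\{\varphi\leftrightarrow\psi\}\vdash\varphi\to\psi}\,(\leftrightarrow_{e1})
       \qquad
       \dfrac{\{\varphi\leftrightarrow\psi\}\vdash\varphi\leftrightarrow\psi\;(\mathrm{Ax})}
             {\{\varphi\leftrightarrow\psi\}\vdash\psi\to\varphi}\,(\leftrightarrow_{e2})}
      {\{\varphi\leftrightarrow\psi\}\vdash(\varphi\to\psi)\wedge(\psi\to\varphi)}\,(\wedge_i)}
      {\vdash(\varphi\leftrightarrow\psi)\to\bigl((\varphi\to\psi)\wedge(\psi\to\varphi)\bigr)}\,(\to_i)
\]
```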

Theorem 0.3.8 (Contraposition). Γ ⊢ φ → ψ iff Γ ⊢ ¬ψ → ¬φ.

Proof. Suppose Γ ⊢ φ → ψ. Then: Γ ∪ {¬ψ, φ} ⊢ φ (Ax) and Γ ∪ {¬ψ, φ} ⊢ φ → ψ (Wk), so Γ ∪ {¬ψ, φ} ⊢ ψ (→e); also Γ ∪ {¬ψ, φ} ⊢ ¬ψ (Ax); by ¬i, Γ ∪ {¬ψ} ⊢ ¬φ, and by →i, Γ ⊢ ¬ψ → ¬φ.

Suppose Γ ⊢ ¬ψ → ¬φ. What we just proved implies Γ ⊢ ¬¬φ → ¬¬ψ. Therefore: Γ ∪ {φ, ¬φ} ⊢ φ and Γ ∪ {φ, ¬φ} ⊢ ¬φ (Ax, Wk), so Γ ∪ {φ} ⊢ ¬¬φ (¬i); by Wk, Γ ∪ {φ} ⊢ ¬¬φ → ¬¬ψ, so Γ ∪ {φ} ⊢ ¬¬ψ (→e) and Γ ∪ {φ} ⊢ ψ (¬e); finally Γ ⊢ φ → ψ (→i).

Parallel to the relationship between entailment and satisfiability (Lemma 0.2.10) runs the following.

Lemma 0.3.9 (see Lemma 0.2.10). Γ ∪ {φ} is not inconsistent iff Γ ⊬ ¬φ.

Proof. Suppose Γ ∪ {φ} is not inconsistent and Γ ⊢ ¬φ. Then Γ ∪ {φ} ⊢ φ (by axiom) and Γ ∪ {φ} ⊢ ¬φ (by assumption and weakening). It follows that Γ ∪ {φ} is inconsistent, against the assumption. So Γ ⊬ ¬φ.
Suppose that Γ ⊬ ¬φ, and Γ ∪ {φ} is inconsistent. By inconsistency, there is ψ such that Γ ∪ {φ} ⊢ ψ and Γ ∪ {φ} ⊢ ¬ψ. Then by ¬i, Γ ⊢ ¬φ, against the assumption. So Γ ∪ {φ} is not inconsistent.

Notice that the latter proof did not involve the elimination of double negation (¬e), but only ¬i; whence the odd statement. As you believe in ¬e, you may say "is consistent" for "is not inconsistent".

Theorem 0.3.10 (Reductio ad Absurdum). If Γ ∪ {¬φ} is inconsistent then Γ ⊢ φ.

Proof. Suppose Γ ∪ {¬φ} inconsistent. By Lemma 0.3.9, Γ ⊢ ¬¬φ. By ¬e, Γ ⊢ φ.

Still not impressed?

Theorem 0.3.11 (Excluded Middle holds in Classical Logic). Excluded middle is a theorem of Classical Logic.

That is: for any wff φ, one has ⊢ φ ∨ ¬φ. Notice that one does not know a priori which of φ or ¬φ to prove, so there is little chance to find a quick deduction relying entirely on ∨i. We shall work our way through a contradiction proof involving ¬e.

Proof. Due to pagesetting limitations, I must pause and resume.
First, {¬(φ ∨ ¬φ), φ} ⊢ φ (Ax), so {¬(φ ∨ ¬φ), φ} ⊢ φ ∨ ¬φ (∨i); also {¬(φ ∨ ¬φ), φ} ⊢ ¬(φ ∨ ¬φ) (Ax); by ¬i, {¬(φ ∨ ¬φ)} ⊢ ¬φ. Call this deduction (∗).
Resuming: (∗) gives {¬(φ ∨ ¬φ)} ⊢ ¬φ, hence {¬(φ ∨ ¬φ)} ⊢ φ ∨ ¬φ (∨i); together with {¬(φ ∨ ¬φ)} ⊢ ¬(φ ∨ ¬φ) (Ax), rule ¬i yields ⊢ ¬¬(φ ∨ ¬φ); finally ⊢ φ ∨ ¬φ (¬e).

The proof of Theorem 0.3.11 shows that contradiction deductions (typically excluded middle) are always a little counter-intuitive to formalize. Notice that in some weaker deduction systems, excluded middle cannot be deduced!
End of Lecture 3.
Lecture 4 (Reduction to Two Connectives (1/2))

0.3.3 Simplifying the Language (and Presentation)

We have introduced a battery of deduction rules, though we had concluded in §0.2.3 that two connectives were enough to convey all the desired meaning. We now must check that working with only two connectives would preserve the deductive strength, i.e. check that all deductions can be imitated even when one restricts connectives and rules to the (¬, →) setting.
Recall what φ′ stands for (Notation 0.2.13). It is intuitively clear that proving φ and proving φ′ should be the same.

Lemma 0.3.12 (see Lemma 0.2.16). {φ} ⊢ φ′ and {φ′} ⊢ φ.

Proof. By induction on φ. Provided the result has been proved for a formula ψ, we shall freely write "from Γ ⊢ ψ, infer Γ ⊢ ψ′ (induction)", though induction is of course not a deduction rule. This is actually short-hand for the following: by induction {ψ} ⊢ ψ′, so ⊢ ψ → ψ′ (→i), hence Γ ⊢ ψ → ψ′ (Wk); together with Γ ⊢ ψ, rule →e yields Γ ⊢ ψ′. Of course "from Γ ⊢ ψ′, infer Γ ⊢ ψ (induction)" is a similar shortcut, and is licit for a similar reason.

- Suppose that φ is a sentence symbol. Then φ′ = φ, and the claim is obvious.

- Suppose that φ is ¬ψ. Then φ′ = (¬ψ′). Now {φ, ψ′} ⊢ ψ (Ax then induction) and {φ, ψ′} ⊢ ¬ψ (Ax, Wk); by ¬i, {φ} ⊢ ¬ψ′, which is φ′. Conversely {φ′, ψ} ⊢ ψ′ (Ax then induction) and {φ′, ψ} ⊢ ¬ψ′ (Ax, Wk); by ¬i, {φ′} ⊢ ¬ψ, which is φ.

- Suppose that φ is (ψ1 → ψ2). Then φ′ = (ψ1′ → ψ2′). From {φ, ψ1′} ⊢ ψ1 (Ax then induction) and {φ, ψ1′} ⊢ ψ1 → ψ2 (Ax, Wk), rule →e gives {φ, ψ1′} ⊢ ψ2, then {φ, ψ1′} ⊢ ψ2′ (induction), and →i gives {φ} ⊢ φ′. On the other hand, from {φ′, ψ1} ⊢ ψ1′ (Ax then induction) and {φ′, ψ1} ⊢ ψ1′ → ψ2′ (Ax, Wk), rule →e gives {φ′, ψ1} ⊢ ψ2′, then {φ′, ψ1} ⊢ ψ2 (induction), and →i gives {φ′} ⊢ φ.

Of course the case of → was trivial. The real proof starts here.

- Suppose that φ = ψ1 ∧ ψ2, so that φ′ = ¬(ψ1′ → ¬ψ2′). One has {φ} ⊢ ψ1 and {φ} ⊢ ψ2 (∧e), hence {φ} ⊢ ψ1′ and {φ} ⊢ ψ2′ (induction). Now {φ, ψ1′ → ¬ψ2′} ⊢ ψ1′ (Wk) and ⊢ ψ1′ → ¬ψ2′ (Ax), so →e gives {φ, ψ1′ → ¬ψ2′} ⊢ ¬ψ2′, while {φ, ψ1′ → ¬ψ2′} ⊢ ψ2′ (Wk); by ¬i, {φ} ⊢ ¬(ψ1′ → ¬ψ2′), that is {φ} ⊢ φ′.
  On the other hand: {¬ψ1′, ψ1′, ψ2′} is inconsistent, so {¬ψ1′, ψ1′} ⊢ ¬ψ2′ (¬i) and {¬ψ1′} ⊢ ψ1′ → ¬ψ2′ (→i); hence {φ′, ¬ψ1′} ⊢ ψ1′ → ¬ψ2′ and {φ′, ¬ψ1′} ⊢ ¬(ψ1′ → ¬ψ2′) (Ax, Wk), so by ¬i, {φ′} ⊢ ¬¬ψ1′, by ¬e, {φ′} ⊢ ψ1′, and by induction {φ′} ⊢ ψ1. Similarly {¬ψ2′, ψ1′} ⊢ ¬ψ2′ gives {¬ψ2′} ⊢ ψ1′ → ¬ψ2′ (→i), so {φ′, ¬ψ2′} is inconsistent, whence {φ′} ⊢ ¬¬ψ2′ (¬i), {φ′} ⊢ ψ2′ (¬e), and {φ′} ⊢ ψ2 (induction). Combining both trees with ∧i, we find {φ′} ⊢ ψ1 ∧ ψ2 = φ.

- Suppose that φ = ψ1 ∨ ψ2, so that φ′ = (¬ψ1′ → ψ2′). As {ψ1, ¬ψ1′} is clearly inconsistent by induction, {ψ1, ¬ψ1′} ⊢ ψ2′ (Lemma 0.3.6), so {ψ1} ⊢ φ′ (→i); and {ψ2, ¬ψ1′} ⊢ ψ2′ by induction, so {ψ2} ⊢ φ′ (→i). As φ = ψ1 ∨ ψ2, combining both deduction trees and eliminating ∨, we find {φ} ⊢ φ′. This was the easy part.
  For the converse we need an easy remark: {¬φ, ψ1′} ⊢ ψ1 (Ax then induction), hence {¬φ, ψ1′} ⊢ ψ1 ∨ ψ2 (∨i) and {¬φ, ψ1′} ⊢ ¬φ (Ax); by ¬i, {¬φ} ⊢ ¬ψ1′ (δ1). Of course one has similarly {¬φ} ⊢ ¬ψ2′ (δ2). Therefore {φ′, ¬φ} ⊢ ¬ψ1′ (δ1, Wk) and {φ′, ¬φ} ⊢ ¬ψ1′ → ψ2′ (Ax, Wk), so {φ′, ¬φ} ⊢ ψ2′ (→e), while {φ′, ¬φ} ⊢ ¬ψ2′ (δ2, Wk); by ¬i, {φ′} ⊢ ¬¬φ, and by ¬e, {φ′} ⊢ φ.

- The case of ↔ is left as an exercise, but essentially easier than ∨.

Hence φ and its translation φ′ prove each other. Of course they do so only in the setting involving all connectives (since a deduction of φ′ associated to φ may involve ∨); Lemma 0.3.12 is but a first step, as the notion of deduction ⊢ itself has changed.

Notation 0.3.13. Let Γ be a theory and φ a wff. Write Γ ⊢′ φ if there is a deduction of φ from Γ using only axiom, weakening, and the deduction rules for ¬ and →.

Preservation of the notion of deduction under translation is therefore the following result.

Corollary 0.3.14 (see Corollary 0.2.18). Let Γ be a theory and φ a wff. Then Γ ⊢ φ iff Γ′ ⊢′ φ′.

Proof. For the moment we prove only that Γ′ ⊢′ φ′ implies Γ ⊢ φ. Suppose Γ′ ⊢′ φ′. By finiteness of the notion of a deduction, there are formulas γ1, . . . , γn in Γ such that {γ1′, . . . , γn′} ⊢′ φ′. Using →i repeatedly, one finds

  ⊢′ γ1′ → (γ2′ → . . . (γn′ → φ′) . . . )

As all the rules in ⊢′ are in ⊢, one even has

  ⊢ γ1′ → (γ2′ → . . . (γn′ → φ′) . . . )

Now one sees that (γ1 → (γ2 → . . . (γn → φ) . . . ))′ is γ1′ → (γ2′ → . . . (γn′ → φ′) . . . ). So by Lemma 0.3.12, it follows

  ⊢ γ1 → (γ2 → . . . (γn → φ) . . . )

In particular, {γ1, . . . , γn} ⊢ φ, and Γ ⊢ φ.

We shall prove the converse implication next time.


End of Lecture 4.
Lecture 5 (Reduction to Two Connectives (2/2); Soundness)

We finish the proof of Corollary 0.3.14.


Proof of Corollary 0.3.14, completed. The remaining proof, that Γ ⊢ φ implies Γ′ ⊢′ φ′, is by induction on the length of the deduction.
The cases of axiom and weakening are clear.
Suppose the last step is for instance ¬i: from Γ ∪ {ψ} ⊢ χ and Γ ∪ {ψ} ⊢ ¬χ, infer Γ ⊢ ¬ψ. By induction, one has Γ′ ∪ {ψ′} ⊢′ χ′ and Γ′ ∪ {ψ′} ⊢′ ¬χ′. As ¬i is in ⊢′, it follows Γ′ ⊢′ ¬ψ′, that is Γ′ ⊢′ (¬ψ)′. It is clear that the cases of →e, →i, and ¬e are as trivial (since these rules are in ⊢′).
We now turn to the interesting configurations.

- Suppose the last step is ∧i: from Γ ⊢ ψ1 and Γ ⊢ ψ2, infer Γ ⊢ ψ1 ∧ ψ2. We aim at proving Γ′ ⊢′ (ψ1 ∧ ψ2)′; recall that (ψ1 ∧ ψ2)′ is ¬(ψ1′ → ¬ψ2′). By induction, we know Γ′ ⊢′ ψ1′ and Γ′ ⊢′ ψ2′. Then Γ′ ∪ {ψ1′ → ¬ψ2′} ⊢′ ψ1′ (Wk) and ⊢′ ψ1′ → ¬ψ2′ (Ax), so →e gives Γ′ ∪ {ψ1′ → ¬ψ2′} ⊢′ ¬ψ2′, while Γ′ ∪ {ψ1′ → ¬ψ2′} ⊢′ ψ2′ (Wk); by ¬i, Γ′ ⊢′ ¬(ψ1′ → ¬ψ2′).

- Suppose the last step is ∧e1: from Γ ⊢ ψ1 ∧ ψ2, infer Γ ⊢ ψ1. By induction, Γ′ ⊢′ ¬(ψ1′ → ¬ψ2′). Clearly Γ′ ∪ {¬ψ1′} ⊢′ ψ1′ → ¬ψ2′ (as in the proof of Lemma 0.3.12), while Γ′ ∪ {¬ψ1′} ⊢′ ¬(ψ1′ → ¬ψ2′) (Wk); so by ¬i, Γ′ ⊢′ ¬¬ψ1′, and by ¬e, Γ′ ⊢′ ψ1′. Suppose on the other hand that the last step is ∧e2: from Γ ⊢ ψ1 ∧ ψ2, infer Γ ⊢ ψ2. The argument is the same, starting from Γ′ ∪ {¬ψ2′} ⊢′ ψ1′ → ¬ψ2′.

- Suppose that the last step is ∨i1: from Γ ⊢ ψ1, infer Γ ⊢ ψ1 ∨ ψ2. By induction, we know Γ′ ⊢′ ψ1′. From Lemma 0.3.9 we know that Γ′ ∪ {¬ψ1′} is inconsistent, so by Lemma 0.3.6, Γ′ ∪ {¬ψ1′} ⊢′ ψ2′. Introducing →, we find Γ′ ⊢′ ¬ψ1′ → ψ2′, that is Γ′ ⊢′ (ψ1 ∨ ψ2)′. ∨i2 is dealt with similarly.

- Suppose the last step is of the form ∨e: from Γ ⊢ ψ1 ∨ ψ2, Γ ∪ {ψ1} ⊢ χ, and Γ ∪ {ψ2} ⊢ χ, infer Γ ⊢ χ. By induction, Γ′ ⊢′ ¬ψ1′ → ψ2′, Γ′ ∪ {ψ1′} ⊢′ χ′, and Γ′ ∪ {ψ2′} ⊢′ χ′. By contraposition (Theorem 0.3.8, which still holds for ⊢′), we easily find Γ′ ∪ {¬χ′} ⊢′ ¬ψ1′ (δ1) and Γ′ ∪ {¬χ′} ⊢′ ¬ψ2′ (δ2). Therefore Γ′ ∪ {¬χ′} ⊢′ ¬ψ1′ → ψ2′ (Wk), so with (δ1) and →e, Γ′ ∪ {¬χ′} ⊢′ ψ2′, against (δ2); by ¬i, Γ′ ⊢′ ¬¬χ′, and by ¬e, Γ′ ⊢′ χ′.

- The ↔ rules are easy to deal with.

All is well that ends well and we may forget about ⊢′ and φ′. From now on there are two connectives: ¬ and →. The other three are abbreviations. Have I said that before? It is very important to say it again, and above all, it is crucial that a redundant connective should be regarded as an abbreviation of the same expression in either setting (syntactic or semantic).
However we shall be extremely hypocritical about connectives. In our definitions and inductive proofs, we need only to consider ¬ and →, but whenever necessary we shall feel free to use ∧, ∨, ↔ and the related rules. We know that these are harmless shortcuts, which preserve the notions of truth and of deduction.
We now aim at simplifying the presentation of deductions.

Notation 0.3.15. When we write a deduction under a set of assumptions Γ, we write Γ on top (deduction under Γ). Whenever an assumption φ is eliminated (via →i or ¬i), we cross it out.

Let us give an example. Suppose we want to show Γ ⊢ φ → ψ. Instead of writing the deduction Γ ∪ {φ} ⊢ . . . ⊢ ψ and then Γ ⊢ φ → ψ, we now write a deduction under Γ and the (eventually crossed-out) assumption φ, ending in ψ; as φ is eliminated, we write Γ ⊢ φ → ψ. Similarly, the pair of deductions Γ ∪ {φ} ⊢ ψ and Γ ∪ {φ} ⊢ ¬ψ becomes a deduction under Γ and the crossed-out assumption φ reaching both ψ and ¬ψ, which means Γ ⊢ ¬φ. Do not forget to cross out every relevant occurrence of φ!

Example 0.3.16. As an example, we prove Excluded Middle (Theorem 0.3.11) with this notation.
1. Assume ¬(φ ∨ ¬φ) and φ.
2. By ∨i, φ ∨ ¬φ; this contradicts the assumption ¬(φ ∨ ¬φ).
3. Cross out φ: by ¬i, ¬φ.
4. By ∨i again, φ ∨ ¬φ; this contradicts ¬(φ ∨ ¬φ) once more.
5. Cross out ¬(φ ∨ ¬φ): by ¬i, ¬¬(φ ∨ ¬φ).
6. By ¬e, φ ∨ ¬φ.

0.3.4 The Soundness Theorem

Recall that we have two notions of consequence: semantic (⊨, Definition 0.2.9) and syntactic (⊢, Definition 0.3.2). Once we have proved the Soundness and Completeness Theorems (Theorems 0.3.17 and 0.4.1), all will be well that ends well: they do coincide. The easier part comes first.

Theorem 0.3.17 (soundness of propositional logic). Let Γ be a theory and φ be a wff. If Γ ⊢ φ, then Γ ⊨ φ.

Proof. Induction on the length of the deduction. We use Corollaries 0.2.18 and 0.3.14, which enable us to treat only the rules associated to ¬ and →.
- If the deduction is just an axiom, it is of the form Γ ∪ {φ} ⊢ φ (Ax). Clearly Γ ∪ {φ} ⊨ φ.
- The case of weakening is absolutely trivial.
- Suppose that the last step is by ¬i; that is, Γ ⊢ ¬φ because Γ ∪ {φ} is inconsistent. By induction, there are no truth assignments satisfying Γ ∪ {φ}. So any truth assignment satisfying Γ (if any) satisfies ¬φ. Thus Γ ⊨ ¬φ (notice that we did not claim satisfiability of Γ).
- Suppose that the last step is ¬e: from Γ ⊢ ¬¬φ, infer Γ ⊢ φ. By induction, we know that Γ ⊨ ¬¬φ. Let v be a truth assignment satisfying Γ. Then v(¬¬φ) = v(φ) = T, and we are done.
- Suppose that the last step of the deduction is the modus ponens: from Γ ⊢ φ → ψ and Γ ⊢ φ, infer Γ ⊢ ψ. Hence Γ ⊢ φ → ψ and Γ ⊢ φ hold; by induction, Γ ⊨ φ → ψ and Γ ⊨ φ. Let v be a truth assignment satisfying Γ. Then v(φ → ψ) = v(φ) = T, so v(ψ) = T. This shows Γ ⊨ ψ.
- Suppose that the last step of the deduction is →i: from Γ ∪ {φ} ⊢ ψ, infer Γ ⊢ φ → ψ. By induction, Γ ∪ {φ} ⊨ ψ; we aim at showing Γ ⊨ φ → ψ. Let v be a truth assignment satisfying Γ. If v does not satisfy φ, then v satisfies φ → ψ by definition. If v does satisfy φ, then v satisfies Γ ∪ {φ}; we then know that v satisfies ψ, and therefore v satisfies φ → ψ again. In any case v satisfies φ → ψ. This implies that Γ ⊨ φ → ψ.
Even with five connectives the case of ∧, ∨, etc. wouldn't be very complicated. Induction completes the proof.


End of Lecture 5.
Lecture 6 (Completeness of Propositional Logic)

0.4 The Completeness Theorem

Theorem 0.3.17 states that every syntactic consequence is a semantic consequence; roughly speaking, if you can prove it, it is true. Very remarkably (and fortunately), the converse holds.

Theorem 0.4.1 (completeness of propositional logic). Let Γ be a theory and φ be a wff. If Γ ⊨ φ, then Γ ⊢ φ.

Lemma 0.4.2. The Completeness Theorem is equivalent to: every consistent theory is satisfiable.

Proof. Suppose the Completeness Theorem. Let Γ be a consistent theory. As Γ is consistent (Definition 0.3.5), there is a wff φ such that Γ ⊬ φ. By completeness, this implies Γ ⊭ φ. It follows from Lemma 0.2.10 that Γ ∪ {¬φ} is satisfiable; let v be a truth assignment satisfying Γ ∪ {¬φ}. In particular, v satisfies Γ!
Suppose that every consistent theory is satisfiable. Let Γ ⊬ φ. Then Γ ∪ {¬φ} is consistent by Lemma 0.3.9. In particular, Γ ∪ {¬φ} is satisfiable. So there is a truth assignment v satisfying Γ ∪ {¬φ}: v satisfies Γ but does not satisfy φ, so Γ ⊭ φ.

0.4.1 Extending the Theory

We shall prove the Completeness Theorem. In view of Lemma 0.4.2, it suffices to show that every consistent theory is satisfiable. So, given a consistent set, we must find a truth assignment satisfying it.
As often in mathematics, choice is embarrassing. We shall force ourselves to have a unique suitable truth assignment. This amounts to maximizing, in a consistent way, the set of requirements.

Definition 0.4.3 (complete). A consistent theory Γ is complete if for any wff φ, either φ or ¬φ is in Γ.

The following remark will be implicitly used in the proof.

Remark 0.4.4. If Γ is complete, then Γ ⊢ φ iff φ ∈ Γ.

Verification: Suppose Γ ⊢ φ, but φ ∉ Γ. By completeness, ¬φ ∈ Γ, and Γ ⊢ ¬φ, against consistency. So φ ∈ Γ. The converse is obvious.

We shall extend any consistent theory to a complete one. As we have said, the problem of satisfying a complete consistent set is actually easier, as there is no ambiguity on what truth assignment to choose.
Here is the inductive step.

Lemma 0.4.5. If Γ is a consistent theory and φ is a wff, then Γ ∪ {φ} or Γ ∪ {¬φ} is consistent.

Proof. Assume that Δ = Γ ∪ {φ} is not consistent. By assumption, there is a wff ψ such that Δ ⊢ ψ and Δ ⊢ ¬ψ. By ¬i, one has Γ ⊢ ¬φ. As Γ is consistent, one cannot have Γ ⊢ φ too, so Γ ⊬ φ. So Γ ∪ {¬φ} is consistent by Theorem 0.3.10.

Lemma 0.4.6. Let Γ be a consistent theory. Then there is a complete consistent theory Γ* containing Γ.

Proof. Recall that WFF is countable (Remark 0.1.3, for instance); enumerate WFF = {φn : n ∈ N}. We shall extend Γ recursively. Let Γ0 = Γ. Assume Γn has been constructed. By Lemma 0.4.5, Γn ∪ {φn} or Γn ∪ {¬φn} is consistent. If Γn ∪ {φn} is consistent, let Γn+1 be it; otherwise let Γn+1 = Γn ∪ {¬φn}. So the nth wff φn has been taken into account, and Γn+1 is still consistent.
Let Γ* = ∪n∈N Γn. Clearly Γ* is consistent, and for any wff φ, φ = φn for some n, so φn or its negation is in Γn+1 ⊆ Γ*. (It is also clear by consistency that φ and ¬φ are not both in Γ*.)
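The recursive extension in Lemma 0.4.6 is easy to phrase as a procedure once one is given an oracle for consistency. The Python sketch below is purely illustrative: is_consistent is a hypothetical function (no such decision procedure exists in general), and formulas are opaque objects with a neg wrapper.

```python
# Illustration of the extension procedure of Lemma 0.4.6.
# ASSUMPTION: is_consistent(theory) is a hypothetical oracle deciding
# consistency; neg(phi) builds the negation of phi.

def neg(phi):
    return ("~", phi)

def extend_to_complete(gamma, wff_enumeration, is_consistent, steps):
    """Perform finitely many steps of the extension Gamma_0, Gamma_1, ..."""
    theory = set(gamma)                       # Gamma_0 = Gamma
    for n, phi in zip(range(steps), wff_enumeration):
        if is_consistent(theory | {phi}):     # Lemma 0.4.5: one side works
            theory.add(phi)
        else:
            theory.add(neg(phi))
    return theory                             # Gamma_n after `steps` steps
```

In the proof, the enumeration runs through all of WFF and the union of all stages is taken; the sketch only performs finitely many steps, which is all a program can do.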

0.4.2 Finding a Truth Assignment

A complete consistent theory induces a unique truth assignment, as for any sentence symbol An, either An or ¬An is in Γ. The following lemma checks that this happens in a consistent way.

Lemma 0.4.7 (truth lemma). Let Γ be a complete consistent theory. Consider the truth assignment v : A → {F, T} such that v(An) = T iff An ∈ Γ. Then for any wff φ, v(φ) = T iff φ ∈ Γ.

Proof. By induction. The result is clear if φ is a sentence symbol! By completeness of Γ, the case ¬ is immediate. Now suppose that φ is ψ1 → ψ2.
- Assume v(φ) = T. We show φ ∈ Γ. If v(ψ1) = F, then by induction ψ1 ∉ Γ. By completeness of Γ, ¬ψ1 ∈ Γ. Hence Γ ∪ {ψ1} is not consistent; it follows that Γ ∪ {ψ1} ⊢ ψ2, and therefore Γ ⊢ ψ1 → ψ2. By completeness, (ψ1 → ψ2) ∈ Γ. If v(ψ2) = T, then by induction ψ2 ∈ Γ. Therefore Γ ⊢ ψ1 → ψ2; by completeness, (ψ1 → ψ2) ∈ Γ.
- Otherwise, v(ψ1) = T and v(ψ2) = F. By induction (and completeness of Γ), ψ1 ∈ Γ and ¬ψ2 ∈ Γ. If Γ ⊢ ψ1 → ψ2, then by modus ponens Γ ⊢ ψ2, which contradicts consistency. So Γ ⊬ ψ1 → ψ2. This implies φ ∉ Γ.
Induction completes the proof. Notice however that proving the theorem with five connectives would be essentially as hard, though longer.

Proof of the Completeness Theorem (Theorem 0.4.1). Due to Lemma 0.4.2, it suffices to show that a consistent theory is satisfiable. Let Γ be a consistent theory. By Lemma 0.4.6, we can extend Γ to a complete consistent set Γ*. We consider the truth assignment v induced by Γ*. By Lemma 0.4.7, v satisfies Γ*. In particular v satisfies Γ: Γ is satisfiable.
End of Lecture 6.

Lecture 7 (Compactness of Propositional Logic)


0.5 The Compactness Theorem

0.5.1 Compactness
Compactness is the general phenomenon of reduction of the infinite to the finite.

Definition 0.5.1 (finite satisfiability). Let Γ be a theory. Γ is finitely satisfiable if for all finite Γ0 ⊆ Γ, Γ0 is satisfiable.

Theorem 0.5.2 (compactness of propositional logic). A theory is satisfiable iff it is finitely satisfiable.

Proof. One implication is trivial: if Γ is satisfiable, then a truth assignment satisfying it will also satisfy any finite subset.
Let Γ be a finitely satisfiable theory. Any finite fragment of Γ is satisfiable, hence consistent by soundness (Theorem 0.3.17). So Γ is finitely consistent. But consistency and finite consistency coincide, as consistency is a finite notion (our deductions all have finite length). So Γ is consistent, therefore satisfiable by completeness (Theorem 0.4.1 and Lemma 0.4.2).

Corollary 0.5.3. If Γ ⊨ φ, then there is a finite Γ0 ⊆ Γ such that Γ0 ⊨ φ.

Proof. By Lemma 0.2.10, Γ ∪ {¬φ} is not satisfiable; by compactness (Theorem 0.5.2) there is a finite subset Γ0 ⊆ Γ such that Γ0 ∪ {¬φ} is not satisfiable. With Lemma 0.2.10 again, this means Γ0 ⊨ φ.

We now give an example of application.

Definition 0.5.4. A graph is k-colorable if there is a function c from the set of vertices to {1, . . . , k} such that if {x, y} is an edge, then c(x) ≠ c(y).

Recall that a subgraph of a graph is a subset of the set of vertices equipped with all possible edges.

Theorem 0.5.5 (Erdős). A countable graph is k-colorable iff all of its finite subgraphs are k-colorable.

Proof. Let V = {vn : n ∈ N} be an enumeration of the vertices. We add sentence symbols Cv,i for v ∈ V and i ∈ {1, . . . , k} (their meanings will be: c(v) = i, that is, v bears the ith color).
Consider the theory made of:
- Cv,1 ∨ · · · ∨ Cv,k for each v ∈ V (meaning: each vertex has a color);
- ¬(Cv,i ∧ Cv,j) for each v ∈ V and i ≠ j (meaning: no vertex has two colors);
- ¬(Cv,i ∧ Cv′,i) for adjacent vertices v, v′ and each i ∈ {1, . . . , k} (meaning: no two adjacent vertices bear the same color).
By assumption, the theory is finitely satisfiable: a finite fragment mentions only finitely many vertices, and a k-coloring of the corresponding finite subgraph yields a satisfying assignment. By compactness (Theorem 0.5.2), it is satisfiable: there exists a global coloring.

Still not impressed? Wait and see what we shall do with first-order logic.


0.5.2 Applications to Decidability*

Compactness is very useful in the context of decidability, where one wants to decide whether or not a given wff is a consequence of a theory. We consider this problem. For the question to make sense, we need of course to have the data in a form readable by a machine. This is approached by the following definition.

Definition 0.5.6 (decidable). A set is decidable if there is an algorithm which can decide whether or not a given object belongs to it.

Example 0.5.7. The set of wff's is decidable, as there is an algorithm which determines whether a given expression is a well-formed formula or not.

Actually if one just wants to list all possible elements of the set, decidability is a little too strong, and could be weakened as follows.

Definition 0.5.8 (semi-decidable). A set is semi-decidable if there is an algorithmic way to write a list such that sooner or later, every element of the set will appear on the list.

One also says "effectively enumerable" for semi-decidable. This does not assert that there is an algorithm deciding whether or not an object is in the set; all you can get is an algorithm which will answer "yes" if the object lies in it. Otherwise, the algorithm doesn't stop, as it scans through an infinite list without finding an answer.

Example 0.5.9. Suppose that you are looking for Jorge Luis Borges in an infinite phone directory.
- If the phone book is in alphabetical order, you start reading. If you reach Byron without passing through Borges, then Jorge Luis Borges is not in the phone book. If you reach "Borges, Julio" without passing through "Borges, Jorge", then Jorge Luis Borges is not in the phone book. Otherwise you have reached Jorge Luis Borges. The phone book is decidable.
- But now suppose that the phone book is not in alphabetical order. The only way to find out if Borges is in the phone book is to read through. If you reach the name "Borges, Jorge Luis", then he is in the phone book. Otherwise, you cannot know whether he is not in the directory, or he is but you haven't found his name yet. The phone book is semi-decidable.

Remark 0.5.10. A set is decidable iff both it and its complement are semi-decidable.

We are now ready to explain what a readable theory is.

Definition 0.5.11 (axiomatization). An axiomatization of a set of wff's Δ is a set Δ0 such that Δ0 ⊨ Δ and Δ ⊨ Δ0.

Definition 0.5.12 (finitely, decidably, semi-decidably axiomatizable theory). If there exists a finite, decidable, or semi-decidable axiomatization, the theory is said finitely, decidably, semi-decidably axiomatizable, accordingly.

A typical use of Compactness is the following result.

Proposition 0.5.13. If Δ is finitely axiomatizable, then every axiomatization of Δ contains a finite axiomatization.

Proof. Let Δ0 be a finite axiomatization of Δ; taking the conjunction, we may assume that Δ0 = {φ} is a single formula. Let Δ′ be any axiomatization of Δ. Then Δ′ ∪ {¬φ} is not consistent, as Δ′ ⊨ Δ ⊨ φ. So Δ′ ⊨ φ. By compactness, there is a finite subset of Δ′, which taking conjunction we may assume to be a formula ψ, such that ψ ⊨ φ; as also φ ⊨ ψ, the formula ψ is a finite axiomatization contained in Δ′.

We now state the results which interest us.

Corollary 0.5.14. Let Γ be a semi-decidably axiomatizable theory and φ a wff. Then there is an algorithm which will answer "yes" if Γ ⊨ φ.

Proof. If Γ ⊨ φ, then by Compactness (Theorem 0.5.2, or rather Corollary 0.5.3), there is a finite Γ0 ⊆ Γ such that Γ0 ⊨ φ. We therefore list all finite fragments Γ0 of Γ and apply the finite case algorithm of Theorem 0.2.11 to each. If Γ ⊨ φ, then we shall be answered "yes" in a finite time.

In other words, if Γ admits a semi-decidable axiomatization, then Γ itself, as a set of wff's, is semi-decidable.
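The procedure behind Corollary 0.5.14 can be sketched as follows (illustrative Python; axioms is any callable enumerating a semi-decidable axiomatization, and finite_entails stands for the finite decision procedure of Theorem 0.2.11, such as the earlier brute-force sketch — both are assumptions of this sketch).

```python
from itertools import islice

# Semi-decision procedure of Corollary 0.5.14: test ever larger finite
# fragments of the enumerated axioms with the finite-case algorithm.
# ASSUMPTION: finite_entails(fragment, phi) decides the finite case.

def semi_decide(axioms, phi, finite_entails, max_rounds=1000):
    """Return True if some finite fragment entails phi; may never answer
    otherwise (here artificially cut off after max_rounds)."""
    for n in range(1, max_rounds + 1):
        fragment = list(islice(axioms(), n))   # first n axioms enumerated
        if finite_entails(fragment, phi):
            return True                        # yes: Gamma |= phi
    return None                                # no answer: keep waiting
```

Returning None rather than False reflects Remark 0.5.15 below: silence never means Γ ⊭ φ.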

Remark 0.5.15. The algorithm cannot be implemented to say "no", even if Γ is decidably axiomatizable. If after a very long time the algorithm hasn't answered "yes", it might be for two reasons:
- you haven't waited long enough yet;
- Γ ⊭ φ (in which case the algorithm will never stop).

Hence, in general, the set of semantic consequences of a semi-decidable Γ is only semi-decidable (as opposed to the finite case, Theorem 0.2.11, where it is decidable). However, if the theory is maximal in some sense, there is more.

Corollary 0.5.16. Let Γ be a semi-decidable theory in a decidable language. Assume that for any sentence φ, either Γ ⊨ φ or Γ ⊨ ¬φ. Then there is an algorithm which decides whether Γ ⊨ φ or not.

Proof. Run two instances of the algorithm of Corollary 0.5.14: one for φ and one for ¬φ. One of them will sooner or later answer "yes".

We do not insist further on decidability; our notion of algorithm is to remain informal. Decidability and Computability are other branches of Mathematical Logic, with very interesting results.


0.5.3 A Topological Proof*

Compactness phenomena first arose in the context of topology. Informally speaking, a space is compact if whenever one can always find an element meeting a finite number of requirements (lying in the intersection of finitely many members), there is an element meeting all requirements simultaneously. And this is exactly what Theorem 0.5.2 says; it actually states the compactness of a certain topological space. We therefore give an alternate proof of the compactness theorem.

Definition 0.5.17 (finite intersection property). A family F has the finite intersection property if the intersection of any finite number of sets in F is nonempty.

Definition 0.5.18 (compactness). A topological space is compact if for all families F of closed sets with the finite intersection property, ∩F ≠ ∅.

For instance, a finite space is always compact.

Theorem 0.5.19 (Tychonoff). A product of compact spaces is compact for the product topology.

Corollary 0.5.20. {0, 1}^N is compact for the product topology.

We now introduce a topology on the set of truth assignments.

Notation 0.5.21. Let S = {F, T}^N be the set of truth assignments. For every wff φ, let

  Oφ = {v ∈ S : v̄(φ) = T}.

Let T be the topology generated by the Oφ's.

Proposition 0.5.22. T is the product topology on S. S is Hausdorff, compact, and totally disconnected. The Oφ's are exactly the clopen sets of S; they form a Boolean algebra.

In particular, there is a correspondence between sets of wff's and closed sets:

  Γ ↦ CΓ = ∩φ∈Γ Oφ

Remark 0.5.23. This correspondence is not injective (as, say, {φ} and {¬¬φ} define the same closed set), but it is surjective since any closed set is an intersection of clopen subsets.

Lemma 0.5.24. Let Γ be a theory and v be a truth assignment. Then v ∈ CΓ iff v satisfies Γ.

Proof. v ∈ CΓ iff for all φ ∈ Γ, v ∈ Oφ; iff for all φ ∈ Γ, v̄(φ) = T; iff v satisfies Γ.

Corollary 0.5.25. Γ is satisfiable iff CΓ ≠ ∅.

Proof of the Compactness Theorem, Theorem 0.5.2. Let Γ be a finitely satisfiable set of wff's. Consider F = {Oφ : φ ∈ Γ}, a family of closed subsets of S. As Γ is finitely satisfiable, F has the finite intersection property. By compactness of S, ∩F ≠ ∅. So CΓ ≠ ∅ and Γ is satisfiable.

Remark 0.5.26. The only problem with this proof is that it is not effective, as opposed to the one relying on the completeness theorem. But on the other hand, it is free of references to a proof theory.
End of Lecture 7.
Lecture 8 (Modal Logic*)

0.6 A Little Modal Logic*

We say a word of modal logic. Modal logic is an extension of propositional logic taking necessity into account. (One could also formalize belief, provability, etc.) Syntactically speaking, this is done by adding adverbs, which will be unary connectives. Semantically this is a little more subtle. We need to deal with several worlds, which will describe the possibilities. In particular a statement will be necessary if true in all possible worlds.
This notion of possibility requires a treatment of imaginability: some worlds (not necessarily all) are conceivable from others. The notion of a Kripke model will be detailed in Definition 0.6.2; for the moment we make the language precise.

Definition 0.6.1 (language of modal logic). To the language we add two unary connectives: □ and ◇. They denote necessity and possibility, respectively. We close WFF under □ and ◇, getting the set WFFmod.

For instance □(◇A1 → □A2) is a wff of modal logic, which reads "It is necessary that the possibility of A1 implies the necessity of A2." Similarly, ◇φ ∧ ◇¬φ reads "φ is contingent"; ¬◇φ reads "φ is impossible".

0.6.1 Semantics*
The semantics is of course a little more complex than before. As mentioned, we need a whole collection of worlds, accessible by imagination from each other.

Definition 0.6.2 (Kripke model). A Kripke model is a triple (W, R, v) where:
- W is a non-empty collection of worlds;
- R is a binary relation on W; w1 R w2 means that w2 is conceivable in w1;
- v is a function from A × W to {T, F}.

Caution! Truth values depend on the world. v(φ) is now meaningless.

Remark 0.6.3. In Definition 0.6.2, we could require specific axioms on the accessibility relation R in order to formalize a description of what our intuition suggests that conceivability means.

Notation 0.6.4. We extend v inductively to WFFmod × W:
- v(An, w) = v(An, w) for each n ∈ N and each world w;
- v(¬φ, w) = T if v(φ, w) = F;
- v(φ → ψ, w) = T if v(φ, w) = F or v(ψ, w) = T;
- v(□φ, w) = T if for all worlds w′ such that w R w′, one has v(φ, w′) = T;
- v(◇φ, w) = T if there is a world w′ such that w R w′ and v(φ, w′) = T.
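Notation 0.6.4 translates directly into a recursive evaluator over a finite Kripke model. A Python sketch (illustrative; the data layout — a dict-based valuation and a set of accessibility pairs — is an assumption of the sketch, not of the notes):

```python
# Evaluate a modal formula at a world of a finite Kripke model (W, R, v),
# following Notation 0.6.4.  Formulas are nested tuples; 'box' and 'dia'
# are the modal connectives, '~' and '->' the propositional ones.

def holds(formula, world, W, R, v):
    if isinstance(formula, str):                     # sentence symbol
        return v[(formula, world)]
    op = formula[0]
    if op == "~":
        return not holds(formula[1], world, W, R, v)
    if op == "->":
        return (not holds(formula[1], world, W, R, v)) or \
               holds(formula[2], world, W, R, v)
    if op == "box":                                  # true in all accessible worlds
        return all(holds(formula[1], w2, W, R, v)
                   for w2 in W if (world, w2) in R)
    if op == "dia":                                  # true in some accessible world
        return any(holds(formula[1], w2, W, R, v)
                   for w2 in W if (world, w2) in R)
    raise ValueError(f"unknown connective {op!r}")

W = {"w0", "w1"}
R = {("w0", "w1"), ("w1", "w1")}
v = {("A1", "w0"): False, ("A1", "w1"): True}
print(holds(("box", "A1"), "w0", W, R, v))           # True: A1 holds in every
                                                      # world accessible from w0
```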

The meaning is clear: □φ ("φ is necessary") holds in the world w if in any world accessible from w, φ holds. (An observer from w will then believe that φ cannot ever fail.) On the other hand, ◇φ ("φ is possible") holds in w if there is a conceivable world w′ in which φ holds: the observer of w will think that φ could hold somewhere.
One can of course reduce the language to use the sole modal connective □; clearly ◇φ stands for ¬□¬φ; we implicitly do so and forget about ◇.

Definition 0.6.5 (satisfaction in modal logic). Γ ⊨ φ if for all models (W, R, v) such that Γ is satisfied in all worlds of W, φ is also satisfied in all worlds of W.

Example 0.6.6. One has {φ} ⊨ □φ.

This is somehow problematic, as one should not allow anything similar on the proof-theoretic side. So we shall have to restrict the completeness theorem (Theorem 0.6.10 below).
The safest is to consider that Example 0.6.6 is not meaningful. We then agree never to use Γ ⊨ φ, but to focus on the case ∅ ⊨ φ.

0.6.2 Proof Theory*

The proof theory is very much alike. We add two rules.

Kripke's Rule (K):
  from Γ ⊢ □(φ → ψ), infer Γ ⊢ □φ → □ψ

Necessitation Rule (N):
  from ∅ ⊢ φ, infer ∅ ⊢ □φ

Caution! There are no assumptions in rule N, i.e. Γ must be the empty set. Otherwise one finds

  {φ} ⊢ φ (Ax); {φ} ⊢ □φ (N); ⊢ φ → □φ (→i);

in other words, everything which holds is necessary, a clear insanity.
Recall however that {φ} ⊨ □φ; in particular, we shall not have full completeness, in the sense that Γ ⊨ φ will be stronger than Γ ⊢ φ. Completeness will hold only for Γ = ∅.

Remark 0.6.7. Notice that if we have made specific requirements on the accessibility relation R (see Remark 0.6.3), we might need to add a couple of extra rules.

Example 0.6.8. In Definition 0.6.2, we impose on the accessibility relation to be reflexive. Then ∅ ⊨ □φ → φ. As there is no way to prove it, we admit it as an axiom, or equivalently we add the following rule:

  from Γ ⊢ □φ, infer Γ ⊢ φ

Theorem 0.6.9 (soundness of modal logic). If ⊢ φ, then ⊨ φ.

Proof. A quick induction. The case of the rules of Propositional Logic is essentially as in Theorem 0.3.17. For a rule which discharges an assumption, say

    Γ ∪ {φ} ⊢ ψ
    ------------
    Γ ⊢ φ → ψ

you might need to introduce W1 = {w in W : v(φ, w) = T} and W2 = {w in W : v(φ, w) = F}, and prove the claim piecewise.

Suppose that the last step is

    ⊢ φ
    ------
    ⊢ □φ

(Recall that the Necessitation rule has no assumptions.) Let (W, R, v) be any Kripke model. We show that for any world w of W, we have v(□φ, w) = T. By induction, ⊨ φ, so our Kripke model satisfies φ: that is, for any w′ in W, one has v(φ, w′) = T. In particular, for any w′ such that w R w′, one has v(φ, w′) = T. Hence v(□φ, w) = T. As w is arbitrary, the Kripke model satisfies □φ. Since we have taken any Kripke model, we deduce that ⊨ □φ.

Now suppose that the last step is

    ⊢ □(φ → ψ)
    ------------
    ⊢ □φ → □ψ

We show ⊨ □φ → □ψ. So let (W, R, v) be a Kripke model satisfying □(φ → ψ); we show that it satisfies □φ → □ψ. Fix a world w. We want to show that v(□φ → □ψ, w) = T. We may suppose v(□φ, w) = T (as otherwise we are done). By definition, for all w′ such that w R w′, one has v(φ, w′) = T. We aim at showing v(□ψ, w) = T, that is v(ψ, w′) = T for any world w′ accessible from w. Let w′ be such that w R w′.

By induction, we know ⊨ □(φ → ψ). In particular, for any world w′ with w R w′, one has v(φ → ψ, w′) = T. As v(φ, w′) = T, it follows v(ψ, w′) = T. Since this is true for all w′ accessible from w, one has v(□ψ, w) = T.

One concludes that ⊨ □φ → □ψ.

0.6.3 Completeness*

Theorem 0.6.10 (completeness of modal logic). If ∅ ⊨ φ, then ∅ ⊢ φ.

Counter-example 0.6.11. This is not true with a non-empty set of assumptions, as {φ} ⊨ □φ, but {φ} ⊬ □φ.

Proof of Theorem 0.6.10. The proof will rely on the construction of a universal Kripke model (Wun, Run, vun).

Let Wun be the collection of all maximal consistent modal theories.
Write w Run w′ iff {φ : □φ ∈ w} ⊆ w′.
Let vun : A × Wun → {T, F} be such that vun(An, w) = T iff An ∈ w.

Notice that Wun is not empty by Lemma 0.4.6, or something similar in WFFmod.

Claim (truth lemma; see Lemma 0.4.7). For any φ ∈ WFFmod and any maximal consistent modal theory w, vun(φ, w) = T iff φ ∈ w.

Verification: By induction. The only non-trivial case is φ = □ψ.

Suppose □ψ ∈ w. Let w′ be any world such that w Run w′. We assumed □ψ ∈ w, so by definition of Run, one has ψ ∈ w′. By induction, vun(ψ, w′) = T. This being true for any w′ with w Run w′, we deduce vun(□ψ, w) = T, that is vun(φ, w) = T.

Suppose now vun(□ψ, w) = T; we show that □ψ ∈ w. Consider Nw = {χ : □χ ∈ w}. We form the modal theory Δ = Nw ∪ {¬ψ}.

If Δ is consistent, then it can be extended to a maximal consistent modal theory w′. Hence Nw ⊆ w′ and by definition of the relation, w Run w′. By assumption, vun(□ψ, w) = T, so vun(ψ, w′) = T; by induction ψ ∈ w′, against ¬ψ ∈ w′.

So Δ is not consistent. It follows Nw ⊢ ψ. There are therefore χ1, . . . , χn in Nw such that {χ1, . . . , χn} ⊢ ψ. We find

    {χ1, . . . , χn} ⊢ ψ
    ⊢ χ1 → (χ2 → (. . . (χn → ψ) . . .))          (→i)
    ⊢ □(χ1 → (χ2 → (. . . (χn → ψ) . . .)))       (N)
    ⊢ □χ1 → (□χ2 → (. . . (□χn → □ψ) . . .))      (K)

As □χ1, . . . , □χn are in w and w is maximal consistent, we see □ψ ∈ w.

We now prove completeness of modal logic. Suppose ∅ ⊬ φ. Then {¬φ} is consistent, so there is a maximal consistent modal theory w containing ¬φ. By the truth lemma, the universal model does not satisfy φ in the world w: vun(φ, w) = F. In particular, Wun ⊭ φ. Hence ∅ ⊭ φ.

Remark 0.6.12. Of course if we have made restrictions on the accessibility relations (and cleverly added the corresponding deduction rules), we need to prove completeness accordingly, that is we must show that the accessibility relation Run on the universal model Wun has the desired properties.

Notice further that we cannot show that if Γ ⊬ φ, then Γ ⊭ φ (see Counter-example 0.6.11). The proof fails for the following reason. We naturally restrict our universal model to the set W ⊆ Wun of maximal consistent extensions of Γ, and we take the induced accessibility relation and truth assignment. When trying to prove the truth lemma for W, we shall form Δ = Nw ∪ {¬ψ}; it is not consistent; we think we are very clever. Hence we find {χ1, . . . , χn} ⊢ ψ. But now even if we detach a formula γ ∈ Γ such that {γ, χ1, . . . , χn} ⊢ ψ, we will end up with ⊢ □γ → (□χ1 → . . . (□χn → □ψ) . . .), and there is no reason for □γ to be in Γ!

This discussion reveals that the universal model W satisfies the truth lemma only if Γ is closed under □. This yields the following generalization of Theorem 0.6.10.

Theorem 0.6.13 (completeness of modal logic revisited). Let Γ be a modal theory which is closed under □. If Γ ⊨ φ, then Γ ⊢ φ.

In particular, by a method similar to that of Lemma 0.4.2, we find the dual statement.

Theorem 0.6.14 (completeness of modal logic revisited). Let Γ be a modal theory which is closed under □. If Γ is consistent, then Γ is satisfiable.

Notice that the resulting compactness phenomenon does not require closure under □.

Corollary 0.6.15 (compactness of modal logic). Let Γ be a modal theory. Then Γ is satisfiable iff it is finitely satisfiable.

Proof. Suppose Γ is finitely satisfiable; we prove that it is satisfiable (the other implication being trivial).

Let Γ′ be the closure of Γ under □: Γ′ is the smallest subset of WFFmod which contains Γ and such that φ ∈ Γ′ implies □φ ∈ Γ′. As for any φ ∈ WFFmod one has {φ} ⊨ □φ (Example 0.6.6), it is clear that Γ′ is finitely satisfiable.

By soundness of modal logic (Theorem 0.6.9), Γ′ is consistent. By completeness (Theorem 0.6.14), Γ′ is satisfiable. As Γ ⊆ Γ′, so is Γ.

However interesting epistemologically, one sees on the technical side that modal logic is but an easy extension of propositional logic. We shall return to more mathematical views.

End of Lecture 8.

Lecture 9 (ME1)

End of Lecture 9.


Chapter 1

First-Order Logic (Largo)

First-Order Logic, also called Predicate Logic, is more expressive than Propositional Logic: it can mention elements. This requires extending the language; in particular we shall need quantifiers. Up to an abbreviation, we use only one. But this quantifier deeply affects the notion of deduction; we need new rules. On the other hand, the meaning of first-order languages is also richer. We shall introduce structures, which provide their semantic notion of entailment.

If soundness remains easy to show, completeness of first-order logic will be a major and non-trivial result. Hence, though quite expressive, first-order logic enjoys excellent properties.

In this chapter:
- Define first-order languages (1.1)
- Introduce the structures associated to first-order languages (1.2)
- An unpleasant journey to the theory of substitutions (1.3)
- Extend the notion of deduction to first-order logic (1.4)
- Prove a soundness theorem for first-order logic (1.4.3)
- Prove Gödel's completeness theorem for first-order logic (1.5)
- Derive compactness (1.6.2) and find applications (1.6.2)

Lecture 10 (First-Order Languages; Terms and Formulas)

1.1 Syntax

First-order logic is essentially more expressive than Propositional Logic: we eventually introduce elements. So one will need quantification, but also specific elements (constants), properties of elements (relations), and a way to build on elements (functions).

1.1.1 First-order Languages

We shall from the start cut down on the number of connectives used; this will make inductive definitions and proofs way shorter, and is legitimate in view of the results of 0.2.3 and 0.3.3.

Definition 1.1.1 (first-order language). A first-order language L consists of the following symbols:

Logical symbols, which are common to all languages:
- the connectives ¬ and → ("not" and "implies")
- a set V of variables v1, v2, . . . , vn, . . .
- the quantifiers ∀ and ∃
- the symbol "=", which is a binary relation symbol

Specific symbols, which depend on the language (these form what is called the signature of the language):
- a set of constant symbols
- for each n ≥ 1, a set of n-ary relation symbols
- for each n ≥ 1, a set of n-ary function symbols

Of course one defines the set of expressions of L.

Remark 1.1.2. When giving a first-order language, one does not specify logical symbols. In particular, even if it is not mentioned, the symbol "=" is always part of the language.

Remark 1.1.3. It is technically possible to have only relation symbols in the signature: one replaces constant symbols by unary relation symbols (which stand for singletons), and n-ary function symbols by (n+1)-ary relation symbols (the graphs of the functions). It is more convenient to allow constant and function symbols, but bear in mind that we would be able to work with purely relational languages throughout.

Notation 1.1.4. One immediately allows the following abbreviations (E1, E2 denote arbitrary expressions):
- E1 ∧ E2 stands for ¬(E1 → ¬E2);
- E1 ∨ E2 stands for (¬E1) → E2;
- E1 ↔ E2 stands for ((E1 → E2) ∧ (E2 → E1));
- E1 ≠ E2 stands for ¬(= (E1, E2)).

This should save us precious amounts of time later, though one could introduce five connectives (as we clumsily did for Propositional Logic before the verifications of 0.2.3 and 0.3.3).
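As a quick illustration (ours, not the notes'), the abbreviations of Notation 1.1.4 can be expanded mechanically into the two official connectives; the tuple encoding of expressions below is an assumption made only for this sketch.

    # Rewriting the abbreviated connectives into the official ones, "not" and "implies".
    # Expressions: atoms are strings; compound ones are tuples ('not', e), ('imp', a, b),
    # ('and', a, b), ('or', a, b), ('iff', a, b).

    def expand(e):
        if isinstance(e, str):
            return e
        op = e[0]
        if op == 'not':
            return ('not', expand(e[1]))
        if op == 'imp':
            return ('imp', expand(e[1]), expand(e[2]))
        a, b = expand(e[1]), expand(e[2])
        if op == 'and':                     # E1 and E2  :=  not(E1 -> not E2)
            return ('not', ('imp', a, ('not', b)))
        if op == 'or':                      # E1 or E2   :=  (not E1) -> E2
            return ('imp', ('not', a), b)
        if op == 'iff':                     # E1 iff E2  :=  (E1 -> E2) and (E2 -> E1)
            return expand(('and', ('imp', a, b), ('imp', b, a)))
        raise ValueError(op)

    print(expand(('or', 'E1', 'E2')))       # ('imp', ('not', 'E1'), 'E2')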

Remark 1.1.5. On the other hand, we have two quantifiers. By arguments resembling those of 0.2.3 and 0.3.3 (see Corollaries 0.2.18 and 0.3.14), we will get rid of ∃, showing that it may be considered as a mere abbreviation. This will be done in 1.4.2 (Corollaries 1.4.8 and 1.4.11).

Example 1.1.6.
- The empty language, or language of pure sets, contains the equality as its only relation symbol.
- The language of orderings is Lord = {<}, with < a binary relation symbol (we omit to mention =).
- The language of groups is Lgrps = {1, ·, ⁻¹} where 1 is a constant symbol, · is a binary function symbol, ⁻¹ is a unary function symbol.
- The language of rings is Lrings = {0, 1, +, −, ·} where 0 and 1 are constant symbols, and +, −, · are binary function symbols.

1.1.2 Terms

We have defined expressions, but only well-formed formulas are of interest. Yet before we can define them, we need to explain which combinations of elements are meaningful.

Definition 1.1.7 (term). The collection of L-terms is the smallest set such that:
- every variable is a term;
- every constant symbol is a term;
- if t1, . . . , tn are terms and f is an n-ary function symbol, then f(t1, . . . , tn) is a term.

Notation 1.1.8 (variables occurring in a term). For an L-term t, we define the set Var(t) of variables occurring in t:
- if t is a constant symbol c, then Var(t) = ∅;
- if t is a variable x, then Var(t) = {x};
- if t is f(t1, . . . , tn), where t1, . . . , tn are terms and f is an n-ary function symbol, then Var(t) = Var(t1) ∪ . . . ∪ Var(tn).

It is quite clear that a variable x occurs in a term t iff the symbol x appears in the expression t.
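The following sketch is ours and not part of the notes; it shows one way to encode terms as nested tuples and to compute Var(t) by the induction of Notation 1.1.8.

    # Terms of a first-order language as nested tuples:
    #   a variable is ('var', name), a constant is ('const', name),
    #   an application is ('app', f, (t1, ..., tn)).

    def Var(t):
        """Set of variables occurring in a term (Notation 1.1.8)."""
        kind = t[0]
        if kind == 'const':
            return set()
        if kind == 'var':
            return {t[1]}
        if kind == 'app':
            return set().union(*(Var(arg) for arg in t[2]))
        raise ValueError(kind)

    # The Lgrps-term 1 * (v1 * 1) (see Example 1.1.9 below):
    one = ('const', '1')
    v1 = ('var', 'v1')
    t = ('app', '*', (one, ('app', '*', (v1, one))))
    print(Var(t))   # {'v1'}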

Example 1.1.9.
- ·(1, ·(v1, 1)) is a term of Lgrps, which is more conveniently written 1·(v1·1). Only v1 occurs in it.
- 2·v1² − 3·v2 is a term of Lrings, as we have implicitly made the following abbreviations: 2 stands for +(1, 1), 3 for +(1, +(1, 1)), and v1² for ·(v1, v1). The variables v1 and v2 occur in the term.

No meaning (and not much interest) so far! We start building on elements to create well-formed formulas.

1.1.3 Formulas

Definition 1.1.10 (atomic formula). An atomic formula of L is an expression of the form R(t1, . . . , tn), where R is an n-ary relation symbol of L and t1, . . . , tn are L-terms.

Example 1.1.11.
- v1 = v2 is an atomic formula of any first-order language (as equality is always part of the language).
- x² + 1 = 0 is an atomic formula of Lrings.

Definition 1.1.12 (well-formed formula). The collection WFF_L of well-formed formulas of L is the smallest set such that:
- every atomic formula is a wff;
- if φ and ψ are wffs, so are ¬φ and φ → ψ;
- if φ is a wff and x is a variable, then ∀x φ and ∃x φ are wffs.

From now on, "formula" will implicitly mean "well-formed formula".

Example 1.1.13.
- ∀v1 ∃v2 (v1 · v2 = 1 ∧ v2 · v1 = 1) is a wff of Lgrps.
- ∃v1 v1 · 1 = 0 is (short-hand for) a wff of Lrings.
- ∀v1 ∃v1 v1 = v1 is a wff of any first-order language.

Theorem 1.1.14 (unique readability). There is only one way to read a wff.

Proof. Proceed exactly like for propositional logic, Theorem 0.1.9. This requires a version of the balanced parenthesising lemma, Lemma 0.1.11.

Notation 1.1.15 (variables occurring in a formula). For an L-wff φ, we define the set Var(φ) of variables occurring in φ:
- if φ is an atomic formula R(t1, . . . , tn), then Var(φ) = Var(t1) ∪ . . . ∪ Var(tn);
- if φ is ¬ψ, then Var(φ) = Var(ψ);
- if φ is ψ1 → ψ2, then Var(φ) = Var(ψ1) ∪ Var(ψ2);
- if φ is ∀x ψ or ∃x ψ, with x ∈ V, then Var(φ) = Var(ψ) ∪ {x}.

Again, it is clear that a variable x occurs in a formula φ iff the symbol x appears in the expression φ.

It is clear from Example 1.1.13 that when working syntactically we will have to be extremely cautious with variables. This will give rise to several painful technicalities, such as the notion of substitutability, to which the entire 1.3 is unfortunately devoted.

Notice however that no one is stupid enough to make such mistakes at the semantic level. So once we are done with the syntactic study of first-order languages, we shall happily forget the technical details.

Definition 1.1.16 (free or bound variable). Let φ be a wff and x a variable.
- if φ is atomic, then x occurs free in φ if it occurs in φ;
- if φ is ¬ψ, then x occurs free in φ if it occurs free in ψ;
- if φ is ψ1 → ψ2, then x occurs free in φ if it occurs free in ψ1 or in ψ2;
- if φ is ∀y ψ or ∃y ψ, then x occurs free in φ if it occurs free in ψ and x ≠ y.

If x occurs in φ but is not free, it is said to occur bound in φ.

Example 1.1.17.
- In ∀v1 v1 = v2, v1 is bound and v2 is free. The same holds of ∃v1 ∀v1 v1 = v2 (which is unfortunately a wff).
- In (∀v1 v2 = v3) → (v1 = v2), v1, v2 and v3 occur free.

Hence "free" means: we're still missing something to know what it is about.

Notation 1.1.18. For a formula φ, we let FreeVar(φ) denote the set of free variables in φ.

Definition 1.1.19 (sentence). A sentence is a wff with no free variable.

Remark 1.1.20. In a sense, in propositional logic all (well-formed) formulas are sentences, as there are no variables.

Definition 1.1.21 (theory). A theory is a set of sentences.

This definition is the most natural one (we do not want unexplained variables), but we shall need to be very careful. Some subtleties around the notion of consistency will arise in 1.4. We shall define ⊨ and ⊢ for sets of formulas, not of sentences, as this will be useful later.
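Since a sentence is a wff with no free variable (Definition 1.1.19), one can test this mechanically by computing FreeVar along Definition 1.1.16. The sketch below is ours; the tuple encoding of formulas matches the earlier term sketch and is only an illustration.

    # Free variables of a formula (Definition 1.1.16 / Notation 1.1.18).
    #   atomic:      ('rel', R, (t1, ..., tn))
    #   negation:    ('not', phi)      implication: ('imp', phi, psi)
    #   quantifiers: ('all', x, phi)   ('ex', x, phi)

    def Var(t):   # variables of a term, as in the earlier sketch
        return {t[1]} if t[0] == 'var' else set() if t[0] == 'const' else \
            set().union(*(Var(a) for a in t[2]))

    def free_vars(phi):
        kind = phi[0]
        if kind == 'rel':
            return set().union(*(Var(t) for t in phi[2]))
        if kind == 'not':
            return free_vars(phi[1])
        if kind == 'imp':
            return free_vars(phi[1]) | free_vars(phi[2])
        if kind in ('all', 'ex'):
            return free_vars(phi[2]) - {phi[1]}   # the quantified variable is no longer free
        raise ValueError(kind)

    # Example 1.1.17: in  (forall v1) v1 = v2,  v1 is bound and v2 is free.
    phi = ('all', 'v1', ('rel', '=', (('var', 'v1'), ('var', 'v2'))))
    print(free_vars(phi))   # {'v2'}, so phi is not a sentence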

Example 1.1.22.
- The theory of infinite sets (in Lsets) is {∃v1 . . . ∃vn ∧_{i≠j} vi ≠ vj : n ∈ N}.
- The theory of orderings (in Lord) is
  {∀v1 ¬(v1 < v1), ∀v1 ∀v2 (v1 < v2 ∨ v1 = v2 ∨ v2 < v1), ∀v1 ∀v2 ∀v3 (v1 < v2 ∧ v2 < v3 → v1 < v3)}.
- The theory of groups (in Lgrps) is given by the sentences:
  ∀v1 ∀v2 ∀v3 v1 · (v2 · v3) = (v1 · v2) · v3
  ∀v1 (v1 · 1 = v1 ∧ 1 · v1 = v1)
  ∀v1 (v1 · v1⁻¹ = 1 ∧ v1⁻¹ · v1 = 1)
- The theory of rings is the Lrings-theory given by the expected axioms.


End of Lecture 10.

Lecture 11 (The Semantics of First-Order Logic)

1.2 Semantics
Let us turn our attention to the meaning a first-order formula should convey. This is a matter of interpreting the non-logical symbols (the signature of the language), but also if necessary the free variables. This section is less clumsy but more important than 1.1.

1.2.1 Structures

Definition 1.2.1 (L-structure). An L-structure M consists of:
- a non-empty base set M (called the base set or underlying set);
- for each constant symbol c of L, a specific element c^M of M;
- for each n-ary relation symbol R of L, a subset R^M of M^n; we require that =^M must be the equality on M;
- for each n-ary function symbol f of L, a function f^M from M^n to M.

One also refers to M as the universe of the structure.

We have thus given a meaning to all symbols of L. All? No! We have forgotten the meaning of the variables, which prevents the terms from having a meaning, which prevents the formulas from having a meaning! So one must also specify the value of the variables.

1.2.2 Parameters and Interpretations

Definition 1.2.2 (assignment of the variables). Let M be an L-structure. An assignment of the variables, or assignment for short, is a function s from V to M.

An assignment is merely a choice of parameters (it specifies what the variables stand for).

Definition 1.2.3 (interpretation of a term with parameters). Let M be an L-structure and s : V → M be an assignment of the variables. We define the interpretation in M of a term t with parameters s, denoted s(t):
- s(x) already makes sense for a variable x;
- s(c) = c^M for a constant symbol c;
- if t is f(t1, . . . , tn), then s(t) = f^M(s(t1), . . . , s(tn)).

Lemma 1.2.4 (univocity of interpretation). Let M be an L-structure, t an L-term and s, s′ : V → M be two assignments such that s restricted to Var(t) equals s′ restricted to Var(t). Then s(t) = s′(t).

Proof. Clear by induction.

1.2.3 Satisfaction and Semantic Consequence

Definition 1.2.5 (satisfaction). Let M be an L-structure, φ an L-formula, and s : V → M an assignment. We define satisfaction in M of φ with parameters s, written M ⊨ φ[s]:
- if φ is atomic, say R(t1, . . . , tn), then M ⊨ φ[s] if (s(t1), . . . , s(tn)) ∈ R^M. (Recall that the symbol = is always interpreted as the equality on M.)
- if φ is ¬ψ, then M ⊨ φ[s] if M ⊭ ψ[s].
- if φ is ψ1 → ψ2, then M ⊨ φ[s] if M ⊭ ψ1[s] or M ⊨ ψ2[s].
- if φ is ∃x ψ, then M ⊨ φ[s] if there is an assignment s′ such that s′ and s agree on V \ {x} and M ⊨ ψ[s′].
- if φ is ∀x ψ, then M ⊨ φ[s] if it is the case that for any assignment s′ such that s and s′ agree on V \ {x}, M ⊨ ψ[s′].

The idea is clear. The ∃ clause means that there is a value for x such that ψ holds, the other variables being fixed as in s. The ∀ clause means that if one freezes all variables y ≠ x to s(y) but x assumes all possible values m ∈ M, ψ holds in M.
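For a finite structure, Definitions 1.2.3 and 1.2.5 can be evaluated directly, the quantifier clauses being checked by brute force over the universe. The following sketch is ours; the tuple encoding of terms and formulas and the dictionary-based presentation of a structure are assumptions made only for illustration.

    # Terms/formulas as nested tuples (as in the earlier sketches); a finite L-structure is
    # (universe, consts, funcs, rels), with rels[R] a set of tuples and '=' handled separately.

    def interp(t, consts, funcs, s):
        """Interpretation s(t) of a term with parameters s (Definition 1.2.3)."""
        if t[0] == 'var':
            return s[t[1]]
        if t[0] == 'const':
            return consts[t[1]]
        _, f, args = t                                   # ('app', f, args)
        return funcs[f](*(interp(a, consts, funcs, s) for a in args))

    def satisfies(M, phi, s):
        """M |= phi[s] (Definition 1.2.5), for a finite universe."""
        universe, consts, funcs, rels = M
        kind = phi[0]
        if kind == 'rel':
            _, R, args = phi
            vals = tuple(interp(t, consts, funcs, s) for t in args)
            return vals[0] == vals[1] if R == '=' else vals in rels[R]
        if kind == 'not':
            return not satisfies(M, phi[1], s)
        if kind == 'imp':
            return (not satisfies(M, phi[1], s)) or satisfies(M, phi[2], s)
        if kind == 'ex':                                 # some value for the quantified variable works
            return any(satisfies(M, phi[2], {**s, phi[1]: m}) for m in universe)
        if kind == 'all':                                # every value works
            return all(satisfies(M, phi[2], {**s, phi[1]: m}) for m in universe)
        raise ValueError(kind)

    # The group Z/3Z in Lgrps, and the commutativity sentence  (forall v1)(forall v2) v1*v2 = v2*v1.
    Z3 = (range(3), {'1': 0}, {'*': lambda a, b: (a + b) % 3, 'inv': lambda a: (-a) % 3}, {})
    v1, v2 = ('var', 'v1'), ('var', 'v2')
    comm = ('all', 'v1', ('all', 'v2',
            ('rel', '=', (('app', '*', (v1, v2)), ('app', '*', (v2, v1))))))
    print(satisfies(Z3, comm, {}))   # True: Z/3Z is abelian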

Lemma 1.2.6 (univocity of satisfaction). Let M be an L-structure, φ an L-formula, and s, s′ : V → M two assignments such that s and s′ agree on FreeVar(φ). Then M ⊨ φ[s] iff M ⊨ φ[s′].

In particular, if φ is a sentence, then M ⊨ φ[s] does not depend on s.

Proof. A clear induction.

We naturally write M ⊨ Γ[s] if Γ is a set of wffs such that for all φ ∈ Γ, M ⊨ φ[s]. If Γ is a set of sentences, then M ⊨ Γ[s] does not depend on s; in this case one says that M is a model of Γ. We do not insist on this terminology now, as we shall be working mostly with sets of formulas instead of theories.

Definition 1.2.7 (satisfiability). Let Γ be a set of L-wffs. Γ is satisfiable if there is an L-structure M and an assignment s : V → M such that M ⊨ Γ[s].

Pay attention to the following. We want our notion of consequence to deal not only with sentences (which have no free variables), but also with formulas with free variables. For instance we want to say that x = y does imply x + 1 = y + 1, even though we have no idea of what x and y are. So we do need to operate on all wffs.

Definition 1.2.8 (semantic consequence). Let Γ be a set of formulas, φ a formula. Γ ⊨ φ if for any L-structure M and any assignment s : V → M, if M ⊨ Γ[s] then M ⊨ φ[s].

Example 1.2.9.
- Let Tgrp be the theory of groups, φ the sentence ∀v1 v1 · v1 = 1, and ψ the sentence ∀v1 ∀v2 v1 · v2 = v2 · v1. Then Tgrp ∪ {φ} ⊨ ψ.
- For n ∈ N, let σn be the sentence ∃v1 . . . ∃vn ∧_{i≠j} ¬(vi = vj). For each n ≥ m, one has σn ⊨ σm.

End of Lecture 11.


Lecture 12 (Substitutions)

1.3 Substitutions
We shall soon give deduction rules extending those of propositional logic. Before
that, we need more notions about the interplay of terms and formulas.


1.3.1 Substitutability

Given a formula φ and a variable x ∈ Var(φ), we may want to replace x by a term t. This will typically be the case in our proof theory, when we want to deduce φ(t) from ∀x φ(x). But one must be extremely careful as the following examples show.
Example 1.3.1.
- Consider the formula v1 = 1, and the term f(v1). Replacing, one finds f(v1) = 1, and the replacement procedure stops after one iteration.
- Consider the formula ∀v1 v1 = v2. If we replace v1 by a term which is not a variable, say by the constant 1, the result "∀1 1 = v2" no longer is a wff! We shall therefore substitute only free occurrences. The problem is that we cannot naively substitute all free occurrences.
- If in ∀v1 v1 = v2 we replace v2 by v1, we find ∀v1 v1 = v1, which clearly should mean something else.
- Similarly, if in ∀v1 v1 = v2 we replace v2 by any term involving v1, we alter the meaning.

These examples suggest that we may replace only certain free variables, but not all. A problem arises when the replacement binds a variable of t that was free in φ before.

Definition 1.3.2 (term substitutable for a variable). A term t is substitutable for a variable x in a formula φ if no variable of Var(t) becomes bound in φ when one replaces every free occurrence of x by t.

Example 1.3.3.
- Let φ1 be v1 = v2 and t be any term. Then t is substitutable for v1 in φ1; it is also substitutable for v2, and substitutable for v3.
- Let φ2 be ∀v1 v1 = v1. Then any term t is substitutable for v1 in φ2, as there are no free occurrences of v1. It is also substitutable for v2, etc.
- Let φ3 be ∀v1 v1 = v2. Then any term t is substitutable for v1 in φ3. On the other hand, t is substitutable for v2 in φ3 iff v1 does not occur in t. For instance, if t = f(c, v1), then writing ∀v1 v1 = t would deeply alter the structure of φ3. If t = f(c, v2), or t = f(c, v3), then replacing v2 by t is perfectly licit.
- Let φ4 be (∀v1 v1 = v2) → (v2 = v1) and t be a term. Then t is substitutable for v1 in φ4. On the other hand, t is substitutable for v2 in φ4 iff v1 does not occur in t.

Notation 1.3.4. If t is substitutable for x in φ, one writes φ[t/x] for the expression where every free occurrence of x is replaced by t. ("/" is read "for".)

We refrain from writing φ[t/x] if t is not substitutable for x in φ.

Example 1.3.5. Consider Example 1.3.3 again.
- φ1[t/v1] is t = v2, whereas φ1[t/v3] is of course v1 = v2 (there is nothing to replace).
- φ2[f(v1)/v1] is ∀v1 v1 = v1. So is φ2[f(v1)/v2].
- φ3[f(c, v1)/v1] is ∀v1 v1 = v2.
- φ3[f(c, v3)/v2] is ∀v1 v1 = f(c, v3).
- φ3[f(c, v1)/v2] is of course not allowed (not substitutable), as the v1 from the term would become bound.
- φ4[f(c, v1)/v1] is (∀v1 v1 = v2) → (v2 = f(c, v1)).
- φ4[f(c, v3)/v2] is (∀v1 v1 = f(c, v3)) → (f(c, v3) = v1).
- φ4[f(c, v1)/v2] is not allowed.

Bear in mind that we replace only free occurrences!
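Here is a sketch (ours, on the tuple encoding used in the earlier sketches) of Definition 1.3.2 and Notation 1.3.4: we check substitutability and then replace free occurrences only.

    # Substitutability (Definition 1.3.2) and substitution phi[t/x] (Notation 1.3.4).

    def term_vars(t):
        return {t[1]} if t[0] == 'var' else set() if t[0] == 'const' else \
            set().union(*(term_vars(a) for a in t[2]))

    def free_in(x, phi):
        kind = phi[0]
        if kind == 'rel':
            return any(x in term_vars(t) for t in phi[2])
        if kind == 'not':
            return free_in(x, phi[1])
        if kind == 'imp':
            return free_in(x, phi[1]) or free_in(x, phi[2])
        return phi[1] != x and free_in(x, phi[2])        # ('all', y, psi) or ('ex', y, psi)

    def substitutable(t, x, phi):
        """No variable of t may become bound when free occurrences of x are replaced by t."""
        kind = phi[0]
        if kind == 'rel':
            return True
        if kind == 'not':
            return substitutable(t, x, phi[1])
        if kind == 'imp':
            return substitutable(t, x, phi[1]) and substitutable(t, x, phi[2])
        y, psi = phi[1], phi[2]                          # quantifier case
        if not free_in(x, phi):
            return True                                  # nothing free to replace below
        return y not in term_vars(t) and substitutable(t, x, psi)

    def subst_term(u, t, x):
        if u[0] == 'var':
            return t if u[1] == x else u
        if u[0] == 'const':
            return u
        return ('app', u[1], tuple(subst_term(a, t, x) for a in u[2]))

    def subst(phi, t, x):
        """phi[t/x]; only free occurrences of x are replaced."""
        assert substitutable(t, x, phi), "t is not substitutable for x in phi"
        kind = phi[0]
        if kind == 'rel':
            return ('rel', phi[1], tuple(subst_term(u, t, x) for u in phi[2]))
        if kind == 'not':
            return ('not', subst(phi[1], t, x))
        if kind == 'imp':
            return ('imp', subst(phi[1], t, x), subst(phi[2], t, x))
        if phi[1] == x:                                  # quantifier binds x: leave untouched
            return phi
        return (kind, phi[1], subst(phi[2], t, x))

    # phi3 = (forall v1) v1 = v2 from Example 1.3.3: f(c, v3) is substitutable for v2, f(c, v1) is not.
    phi3 = ('all', 'v1', ('rel', '=', (('var', 'v1'), ('var', 'v2'))))
    print(substitutable(('app', 'f', (('const', 'c'), ('var', 'v3'))), 'v2', phi3))  # True
    print(substitutable(('app', 'f', (('const', 'c'), ('var', 'v1'))), 'v2', phi3))  # False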

1.3.2 A Renaming Algorithm


As we have said, one must be extraordinarily stupid (or wicked) to construct
misleading formulas; intuitively, we always avoid these dangers. Here is why.

Theorem 1.3.6

there is
that

Let be a w, t a term, and x a variable. Then


only in the names of bound variables, and such
x in , and {} ` and {}
` .

(renaming)

which diers from

is substitutable for

We do not wish to dene the notion

` right now (this will be done in 1.4.1),

but it will be clear in due time that the renaming procedure described below
makes

and

mutually provable.

Caution! depends on , on x,
Proof .

We construct a suitable

If

If

is

Again,
If

Suppose that
If

is

1 2 ,

in

x = y,

is

that is if

can be

Clearly

is

we take

free occurences of
If

inductively.

, we apply the algorithm


t is substitutable for x in .

is atomic, then there is nothing to do:

substitutable for

and on t.

y ,

to be

with

to smaller

1 2 .

and take

to be

meets the requirements.

y V.

is x , we may take to be itself. There are


x in , so t was trivially substitutable for x in .

no

x 6= y and y 6 Var(t), then no problem will occur from the quantication


y . We let be y ; clearly t is substitutable for x in .

on

48

x 6= y

If

y Var(t),

but

then something must be done. We let

(or better: the rst!) variable not in

z
in

y in

z [z/y]
.

is then substitutable for

makes sense. Let

be

Var(t) {x}.
Var()

We claim that

[z/y]
x
occur in t.

and

is substitutable for

does not

y .

We use the same method to deal with

be any

(as it doesn't appear in it); so

this is clearly the case, as it is true in

Notice that if

Notice that

has been eectively enumerated, then taking the rst vari-

able not occuring in

Var(t) {x}
Var()

makes

well-dened, and we have

an eective algorithm.

Example 1.3.7.

The easiest is to take


variable in

Example 1.3.3 again.

Similarly,

for

1 .

One can substitute any term for any

1 .

can be

2 ,

as any term is substitutable for any variable in

2 .
t

is substitutable for

If

v1 6 Var(t),

then

v1 Var(t)

v1

in

3 ,

so with respect to

v2

is substitutable for

in

v1 , 3

3 ; 3

could be

3 .

can still be

3 .

t for v1 in 3 , something
t is f (v1 ), then  v3 v3 = v2  will do as
3 . (In this case, 3 [t/v1 ] is  v3 v3 = f (v1 ).) On the other hand if t is
g(v1 , v2 , v3 ), then the 3 given by our algorithm will be v4 v4 = v2 (and
3 [t/v1 ] will now be v4 v4 = g(v1 , v2 , v3 )).
Now if

must be done.

Notice that

and we want to susbtitute

If for instance

does depend on

t!

v2 in 4 ; a problem occurs only if


z be a variable not occuring in t, nor
Then  (z z = v2 ) (v2 = v1 ) will do as
4 . (Of course
ts a certain desired substitution, as z depends on t.)

We want to be able to substitute

v1 Var(t),
equal to v2 .
this
4 only

and

for

which we assume. Let

If we were not interested in the proof theory of rst-order logic, we would


work permanently up to renaming. Unfortunately we will have to be a little
careful.

1.3.3 Substitutions and Satisfaction


What happens if we substitute a term to a variable, and then wonder about
satisfaction?

Fortunately, everything goes as expected.

shall consider an assignement


variable

x.

Lemma 1.3.8.
s

on

and a term

In what follows, we

which we shall substitute to a

This merely has the eect of changing the value of

the new value at

with

being the interpretation of

Let

V \ {x},

t,

be

at variable

t.

L-terms, s an assignment, x a variable. Let s agree


s(x) = s(t). Then s( ) = s( [t/x]).

and such that

49

Proof .

Induction on

For clarity, let

denote

[t/x].

We want to prove that

s( ) = s(
).

This is clear if

If

If

is

x,

then

is a constant, or a variable not equal to

t,

is

and

x.

s( ) = s(x) = s(t) = s(
).

f (1 , . . . , n ), then s( ) = s(f (1 , . . . , n )) = f M (
s(1 ), . . . , s(n )) =
(s(
1 ), . . . , s(
n )) = s(f (
1 , . . . , n )) = s(
).
is

Proposition 1.3.9.

be a formula and t a term substitutable for a variable


L-structure and s : V M an assignment. Let s be
the assignement which agrees with s on V \ {x} but on s
(x) = s(t). Then
M |= ([t/x])[s] i M |= [
s].
x

in

Proof .
prove

Let

Let

be an

. For clarity
M |= [
s].

Induction on

M |= [s]

i

we let

denote

[t/x].

Hence we want to

is atomic, say R(1 , . . . , n ). Then is the


R(1 [t/x], . . . , n [t/x]), which we naturally denote R(
1 , . . . , n ).
Suppose that

formula
Now us-

ing Lemma 1.3.8,

M |= [s]

i
i
i

The case where

We suppose
First, if

y = x,

(s(
1 ), . . . , s(
n )) RM
(
s(1 ), . . . , s(n )) RM
M |= [
s]

is obtained via connectives is trivial.

of the form
then

y .

is exactly

(as

does not occur free in

).

Then

using Lemma 1.2.6,

M |= [s]

i
i

So the real case is when


is is substitutable for

in

is substitutable for

M |= [s]
M |= [
s]

y 6= x, which we now suppose. By assumption, t


, and this implies y 6 Var(t). Furthermore, t
and is y .

in

M |= [s]
; we show that M |= [
s], that is M |=
s0 be any assignment agreeing with s except on y . We
need to show that M |= [
s0 ].
. Let s0 agree
By assumption, M |= [s]
, which means M |= (y )[s]
0
0
with s except on y , and such that s (y) = s
(y). By assumption,
0 ].
M |= [s
0
We note that s and s
0 agree but on x, and s(t) = s(x) = s0 (x). Now
y does not appear in t, and s and s0 agree except on y , so by Lemma
0
0 ], it follows by induction,
1.3.8 s (t) = s(t) = s
0 (x). As M |= [s
0
0
M |= [
s ]. As s is arbitrary on y but coincides with s everywhere
else, this means M |= (y )[
s], that is M |= [
s].
Suppose that

(y )[
s].

Let

50

Now suppose M |= [
s]. We want to show that M |= [s]
, that is
. So let s0 be any assignement agreeing with s except
M |= (y )[s]
on y . Let s
0 agree with s except on y , and such that s0 (y) = s0 (y).
Notice that s
0 (x) = s(x) = s(t) = s0 (t), as y 6 Var(t), and s and s0
agree except on y .
We know M |= [
s], so M |= (y )[
s]. In particular, M |= [
s0 ].
0
0
0
0
But s
and s agree everywhere but on x, and s (x) = s (t). So
0 ]. As s0 was arbitrarily obtained from s
by induction, M |= [s
, and therefore
by changing its value on y , we nd M |= (y )[s]
M |= [s]
.

The case of

is similar.

Counter-example 1.3.10.
for

x in .

substitutable for

v1

in

t substitutable
t is v2 . Clearly t is not
sentence v2 v2 = v2 , resulting

It is extremely important to have

is v2 v1 = v2

Consider the case where


We let

denote the

and

from an illegitimate replacement.

L-structure M and assignment s : V M one will


M |= [
s]. Reasoning
as in the proof of Proposition 1.3.9, one does have s
0 (x) = s(x) = s(t), but now
y occurs in t, and there is no reason why s(t) = s(y) and s0 (y) should agree!
It is the case that for any

have

M |= [s]
.

But it will not be the case in general that

End of Lecture 12.


Lecture 13 (Deductions; Simplifying the Language)

1.4 Deductions and Soundness


1.4.1 Deductions
We already know reasonable deduction rules for

and

In order to extend

natural deduction to rst-order logic, we must explain the roles of

=, ,

and

Equality requires new axioms, as we want the following intuitively obvious


properties to be deducible.

t=t

t1 = t2 (t[t1 /x] = t[t2 /x])


t1 = t2 ([t1 /x] [t2 /x])

where

is a variable,

stitutable for

in

is a w,

t, t1 , t2

are terms with

t1

and

t2

sub-

(This class of axioms could be made considerably

smaller, but it is not our goal.)

adds deduction rules.

One may infer

x from if x does not appear in

th assumption. On the other hand, using a universal quantier amounts


to substituting to a special case.

51

Introduction and elimination rules for

`
` x
if

` x
` [t/x]

does not occur free in

adds dual deduction rules.

if

is substitutable for

in

One shows an existential by providing an

example, and uses one by introducing a witness.


Introduction and elimination rules for

` [t/x]
` x
if

` x

is substitutable for

in

if

{} `
`

is not free in

{}

Of course we shall prove that these rules may be regarded as short-cuts,


provided

is considered as an abbreviation for

(this will be done in

1.4.2).
One could now nish the proof of the renaming algorithm (Theorem 1.3.6)
by showing that

Remark 1.4.1.
Consider

{} `

and

{}
` .

We must slightly update the denition of consistency. Why?

{(x = x)}.

If no reference to equality is made, then this set of

formulas is consistent, but yields inconsistency as soon as one adds the equality
axioms (which we won't renounce anyway).
So we redene consistency as consistency when one adds the axioms of equal-

ity (in practice no ambiguity arises).

Proposition 1.4.2.
` x y (x = y y = x)
` x y z (x = y y = z x = z)

Proof .

Let

Notice that

be the formula z = x, in which we


[x/z] is x = x, but [y/z] is y = x.

shall replace

We nd the deduction

x=y

x = y [x/z] [y/z]
x=xy=x
y=x

x=x

Introducing the implication and quantifying, we do get

` x y (x = y y = x)
Transitivity is an exercise.

52

by

x,

or

y.

1.4.2 Simplifying the Language


We now explain briey why two quantiers are obviously redundant (as far as
the semantics is concerned). This runs parallel to 0.2.3 and 0.3.3.

Notation 1.4.3
not use

(see Notation 0.2.12)

WFF0

be the set of w 's that do

Notation 1.4.4
a formula

Let

of

, 0

for atomic

()0

is

is

We inductively translate any w

into

(0 );

( )0
(x )0

(see Notation 0.2.13)

WFF0 :

(0 0 );

is

is(x

0 )

(x )0 = (x 0 )
Of course

and

bear the same meaning; we check that viewing

as an

abbreviation doesn't alter our notions of consequence.

Notation 1.4.5

(see Notation 0.2.15)

truth assignment, and

M be an L-structure, s : V M
M |=0 [s] inductively.

Let

a w. We dene

M |= [s] if (s(t1 ), . . . , s(tn )) RM

if

is atomic,

if

is

if

is

1 2 ,

if

is

x , then M |= [s] if it is the case that


s and s0 agree on V \ {x}, M |= [s0 ].

say

R(t1 , . . . , tn ),

M |= [s]

then

then

if

then

M 6|= [s].

M |= [s]

if

M 6|= 1 [s]

or

M |= 2 [s].
for any assignment

s0

such that

Notice that this is a subdenition of Denition 1.2.5, but that


only on formulas of

Notation 1.4.6.

Let

|=0 if for any L-structure M


M |=0 [s], one has M |=0 [s].

Lemma 1.4.7 (see Lemma 0.2.16).

an

Proof .

L-formula.

Then

By induction on

WFF0 and WFF0 . We


assignment s : V M such that

Let

M |= [s]

and

M be an L-structure, s
M |=0 0 [s].

Proof .

an assignment,

i

the only non-immediate case being when

But this case is very easy.

Corollary 1.4.8

is dened

be a consistent subset of

write

and

|=0

WFF0 .

(see Corollary 0.2.18)

. |=

Obvious from Lemma 1.4.7.

53

i

0 |=0 0 .

is x .

The syntactic side is hardly more subtle.

Lemma 1.4.9
Proof .
x ,

(see Lemma 0.3.12)

. {} ` 0

and

{0 } ` .

Induction again. It clearly boils down to treating the case where

so that

Notice that

is substitutable for

is

in

0 .

Using induction, one nds

Ax,Wk

{x , , x 0 } `
.
.
. induction
.

Ax,Wk

{x , , x 0 } ` x 0
e
0
0
{x , , x } `
{x , , x 0 } ` 0
i
{x , } ` x 0
|
{z
()

so

x 0 .

is

{x } ` x
{} ` 0

()

We need Reductio ad Absurdum (Theorem 0.3.10) for the converse. We


show that

{0 , }

is inconsistent.

{x , 0 } ` 0

Ax,Wk

.
.
. induction
.

{x , 0 } `
i
{x , 0 } ` x
{x , 0 } ` x
{x } ` 0
{z
|

Ax,Wk
i

()

Now as

does not appear free in the assumptions,

{x } ` 0
{x } ` x 0
It follows that

{, 0 }

is inconsistent, whence

{0 } ` .

Of course the real problem is to show that one may drop the

rules without

aecting the deductive strength.

Notation 1.4.10

(see Notation 0.3.13)

which does not use the

Corollary 1.4.11
`

i

Write

Let

`0

if there is a deduction

rules.

(see Corollary 0.3.14)

0 `0 0 .

54

be a theory and

a w. Then

Proof .

The implication

0 `0 0 `

is very comparable to the proof of

Corollary 0.2.18, thanks to Lemma 1.4.9.


We now show that

implies

0 `0 0

by induction on the deduction.

The only interesting cases are when the last step is a

-rule.

There are two

possibilities.

Suppose that the last step is:

` [t/x]
` x
where

is substitutable for

in

Then

.
.
. induction
.

Ax

{x 0 } `0 x 0
Wk
0 `0 0 [t/x]
0 {x 0 } `0 x 0
e
Wk
0 {x 0 } `0 0 [t/x]
0 {x 0 } `0 0 [t/x]
i
0 `0 x 0
| {z }
=0

so

0 `0 0 .

Now suppose that the last step is:

` x {} `
`
where

{}. By induction, 0 { 0 } `0 0 ;
0 {0 } `0 0 .

does not occur free in

contraposing (Theorem 0.3.8),


Now

0 {0 } `0 0
{0 } `0 x 0
0

since

does not occur free in

By induction again,

0 `0 0

0 {0 }.

0 `0 x 0 .

So

0 {0 }

is inconsistent, and

by Reductio ad Absurdum (Theorem 0.3.10).

Hence for us, an existential statement may be proved by showing that a


universal statement does not hold. Intuitively, this is disputable: one has never
given an example! For instance, in intuitionistic logic where one does not allow

as a deduction rule,

is not equivalent to

.
End of Lecture 13.

Lecture 14 (Soundness; Completeness (1/2))

55

1.4.3 Soundness
Theorem 1.4.12
Proof .

If

(soundness of rst-order logic)

` ,

then

|= .

By induction on the deduction. Notice that we are extending propo-

sitional logic, so weakening, negation, and implication have been dealt with
in the proof of the propositional logic version (Theorem 0.3.17).

Axioms of

equality are easily dealt with. It remains to consider the case of quantication.
By Corollaries 1.4.8 and 1.4.11, only

need be examined:

is now a mere

abbreviation.

Suppose that the last step of the deduction was:

`
` x
where

does not occur free (Denition 1.1.16) in

induction,
Let

[s]

|= .

We show

s : V M an
M |= (x )[s].

be a structure and

and aim at showing

Then

and by

|= x .
assignment. We assume

M |=

s0 be any other assignment agreeing with s on V \ {x}. We must show


M |= [s0 ]. As x does not occur free in , M |= [s0 ] (Lemma 1.2.6).
0
0
As |= , this implies M |= [s ]. So M |= [s ] regardless of the value
0
assigned to x by s . So M |= (x )[s]. As a conclusion, |= x .
Let

Now suppose that the last step was:

` x
` [t/x]

t is substitutable (Denition 1.3.2)


[t/x]. For clarity, let denote [t/x].

where

Let M be
M |= [s].

an

L-structure, s : V M
M |= [s]
.

for

in

We shall show

|=

any assignment, and suppose

We have to prove

` x , and by induction |= x . As M |= [s], it follows


M |= (x )[s]. So for any s0 which agrees with s on V \ {x}, we have
M |= [s0 ]. Let s map x to s(t) (and coincide with s elsewhere); again
M |= [
s]. By Proposition 1.3.9, M |= [s]
. Hence for any s such that
M |= [s], we have M |= [s]
; we are done.

We know

Of course even allowing

as a basic quantier, the proof wouldn't be any

harder.

1.5 Completeness
Soundness (Theorem 1.4.12) was the easy part.

We now show the analog of

Theorem 0.4.1 for rst-order logic; informally speaking, it expresses that if it

56

is true, then there is a deduction of it" (in rst-order classical logic for natural
deduction).
This theorem is due to Kurt Gdel (in his PhD).

Theorem 1.5.1

(completeness of rst-order logic)

If

|= ,

then

` .

We adopt the same strategy as for the case of propositional logic (Theorem
0.4.1). It suces to show the following:

Theorem 1.5.2
,

then

(completeness of rst-order logic, equivalent version)

If

|=

` .

Theorems 1.5.1 and 1.5.2 are equivalent (see Lemma 0.4.2).

Remark 1.5.3.

Throughout we shall be working with sets of formulas instead

of theories (which are sets of sentences: Denition 1.1.21).

Even if you have

a feeling that it would suce to show Theorem 1.5.2 for theories, there will
be a point in the proof (the fairly technical Lemma 1.5.28 below) in which all

formulas must be taken into account.


Such a subtlety was impossible to imagine in propositional logic, where in a
sense all formulas are sentences!

Remark 1.5.4.

The consistent set we start with may be assumed to contain

the axioms of equality. This remark is essential; see Remark 1.4.1.

1.5.0 Strategy and Witnesses


Starting with a consistent set of rst-order w 's, we shall nd an

L-structure

and an assignment of the variables satisfying it. As in the case of propositional


logic, this will rst require extending to a maximal set of formulas (Lemma
1.5.18 of 1.5.2), and then nding an

L-structure

(Lemma 1.5.28 of 1.5.3).

It is not unrealistic to think that the collection of terms will provide the
base set. (Actually we shall need to factor out an equivalence relation, that is
work with a set of equivalence classes; pay no attention now to this slight detail
which will be explained in 1.5.3.) We shall then dene an ad hoc structure on
the base set, ensuring that relations and functions behave as required.

c and a unary
f , we shall have terms c, f (c), f (f (c)), etc., and we know how
to interpret c and f . But the issue is that we also have quantiers; for example,
the sentence v1 f (v1 ) = c, which expresses that f (v1 ) = c has a solution, might
For instance, if in the language there are a constant symbol

function symbol

be in the theory. The brilliant idea is to add a pointer to such a solution.


First we should simplify the language, following Corollaries 1.4.8 and 1.4.11
of s:rstorderreducinglanguage. We systematically replace
For each formula
a new constant

cx,

x ,

if

by

fails, we shall name the aw, i.e. introduce

such that

(x ) [cx, /x]

57

Such a formula means: If

is not always true, it is because of

cx,

precisely.

But this will require extending the language with new constants (1.5.1),
and extending the collection of formulas to force this phenomenon (1.5.2).
This strategy of adding witnesses is due to Leon Henkin.

Remark 1.5.5.

We shall only treat the countable case, but some remarks will

indicate how to generalize to uncountable languages.


Let

be a consistent set of

L-formulas containing the axioms of equality.


in a language which provides witnesses.

The goal is to nd a nice extension of

1.5.1 Expanding the Language


We want to introduce witnesses for formulas of the form

x .

This raises

two (minor) issues.

When we add new constants, we expand the language, so we also create


new formulas, and we need even more witnesses!

When we add new constants, we expand the class of terms, so we also


expand the equality axioms and the

rules. Will

still be consistent?

We deal with the rst issue. The intuitive idea is that for any formula of
the form

x ,

we add a new constant symbol

the language, we add new formulas!

cx, .

Of course when we expand

It thus looks like we need to repeat the

construction; fortunately the cardinal remains the same.

Lemma 1.5.6.

0
0
There is a countable extension L of L by a set C of new
0
constant symbols cx, , such that C is in bijection with the set of pairs (x, )
0
where x V and is a L -formula.

Proof .

WFF0 = WFF(L0 ). For each pair (x, ) V WFF0 ,


cx, , getting the language L1 .
We repeat the construction with formulas of WFF1 = WFF(L1 ), and intro0
0
duce new constant symbols, which yields L2 . Let L = nN Ln . Clearly L has
Let

L0 = L

and

we add a new constant symbol

the desired property.


Notice that at each step

Card Ln

remains countable, so

Card L0 = Card L.

remain consistent in
one can deduce a contradiction in L0 ,
then the deduction has used new constant symbols; L says nothing about these,
We now deal with the second possible issue - why does

L0 ?

The intuitive reason is clear: if from

so they behave like free variables. This is expressed by the following theorem.

[z/c] denotes brute replacement of all occurences of


c by the variable z (this creates no confusion, as there was certainly
quantication over the constant c).
In the statement below,

the constant
no

Theorem 1.5.7 (generalized generalisation).


,
variable z
ing in

and suppose
not occuring

Let c be a constant symbol occur ` where c does not occur in . Then there is a
in such that ` z [z/c].

58

Example 1.5.8.

If

`c=0

Proof of Theorem 1.5.7.

and

does not mention

We write

D:`

c,

then

to say that

(our usual deductive tree). Notice that there is a nite subset

D : 0 ` .
Let z be

` x x = 0.

D is a deduction
0 such that

D nor 0 (which are nite, but we have


c by z in each node of D, forming a tree
D[z/c] : 0 ` [z/c].

a variable not occuring in

countably many variables!) We replace

D[z/c].

We show that

If we use an axiom of equality, then substituting

for

remains an axiom

of equality!

If the step is for instance of the form:

0 `

then by induction

[z/c] [z/c].

0 `
`

0 ` [z/c]

and

0 ` ( )[z/c]

0 ` [z/c]
0 ` ( )[z/c]
0 ` [z/c]
which completes the step. The

which is

0 `

Hence

i , e , i

rules are as easy.

Suppose that the last step was of the form:

0 `
0 ` x
x does not
x ([z/c]). It

where

occur free in

is

follows

0 .

Notice that by choice of

z , (x )[z/c]

.
.
. induction
.

0 ` [z/c]
0 ` (x )[z/c]

Suppose that the last step was of the form:

0 ` x
0 ` [t/x]
t is substitutable for x in .
([t/x])[z/c] is ([z/c])[t/x].

where

By choice of

and

It follows

z , (x )[z/c] is x ([z/c])

.
.
. induction
.

0 ` x [z/c]
0 ` ([t/x])[z/c]
59

0 ` [z/c].
` z [z/c] too.

Hence

As

does not occur free in

0 ,

one has

0 ` z [z/c].

So

Remark 1.5.9. 0 appears just to make sure that we can provide a new variable
z:

perhaps

did mention all variable names... but we used only nitely many

in the deduction anyway.

Corollary 1.5.10.
Proof .

with the new equality axioms remains consistent in

We shall prove a little more:

deduction rules associated to


claim that if

L,

and

write

`0

L0 .

if there is a proof using

if the deduction is carried in

do not use new constant symbols, then

`0

implies

L. We
` .

L0 -consistency of .
So we suppose ` , and we assume nite. Working inductively, for
0
every new constant symbol ci C which appears in the deduction, there is
by Theorem 1.5.7 a variable xi such that ` xi [xi /ci ]; in particular `
[xi /ci ]. As the ci 's don't appear in , we thus have ` .

This will clearly establish

Yes! We have found a reasonable language to work in: it has enough witnesses, and

remains consistent.

Remark 1.5.11.

L0

The construction of

immediately generalizes to uncount-

L (the construction remains indexed


Card L). Corollary 1.5.10 is still true.

able

by

N;

at each stage,

Card Ln =

End of Lecture 14.


Lecture 15 (Completeness (2/2))

1.5.2 Extending the Theory


From now on we consider
the

L -axioms

as a consistent set of

L0 -formulas

which contains

of equality.

C 0 . Let {(xn , n : n
x V is a variable and

We shall next prescribe the role of the set of witnesses

N} be an enumeration
is an L0 -formula.

Notation 1.5.12

of the set of pairs

(Henkin axioms)

Let

(x, ),
n

where

be the formula:

(xn n ) n [cxn ,n /xn ]


cxn ,n is the
{0 , . . . , n1 }. For

where

rst element of

C0

which does not occur in

simplicity, we write

cn

instead of

cxn ,n

{0 , . . . , n }

(there is no risk of

confusion with the original constants).

means: if

is not true for all

xn 's,

it is because of... and points to a

specic constant symbol, which has never been mentioned before.

Notation 1.5.13.

Let

0 = {n : n N}.
60

We have expanded our consistent set. Is it still consistent?

Lemma 1.5.14. 0 is a consistent set of L0 -formulas.


Proof . Let n = {k : k n}. We know that 0 =

is consistent.

n is; we show that n+1 is consistent again.


Otherwise, n ` n+1 . Recall that n+1 stands for

Suppose that

(xn+1 n+1 ) n+1 [cn+1 /xn+1 ]


n ` xn+1 n+1 and n ` n+1 [cn+1 /xn+1 ]. But
cn+1 does not occur in n . So generalizing
on the constant (Theorem 1.5.7), we may replace cn+1 by xn+1 and quantify.
We nd n ` xn+1 n+1 , which is a contradiction to its consistency.
Minimal work shows

by construction (Notation 1.5.12),

is the basis of our inductive process; it is a consistent set of

L0 -formulas.

We now maximize the set of requirements (see 0.4.1). Unfortunately, as observed at the beginning of the proof of completeness (see Remark 1.5.3), we
cannot limit ourselves to sentences, but must take all possible formulas into
account.

Denition 1.5.15 (V -complete; see Denition 0.4.3).


order formulas

Caution!
(for:

is

V -complete

if for any formula

A consistent set of rst-

either

or

This is way stronger than what complete ought to be. A

is in

V -complete

variable-complete) set of formulas even prescribes the behaviour of the

variables (v1

= v2

is in it, or

v1 6= v2

is in it). The good notion, when working

with theories, is that of completeness (Denition 1.6.2 below), which is only


about sentences.

Remark 1.5.16 (see Remark 0.4.4).


has

i

If

is V -complete, then for any w one

We now make our way to a

V -complete set of formulas containing the Henkin

axioms.

Lemma 1.5.17

(see Lemma 0.4.5)

{}

{}

or

Proof .

Very much like Lemma 0.4.5.

Lemma 1.5.18
there is a

Proof .

(see Lemma 0.4.6)

V -complete

If

is consistent and

is a w, then

is consistent.

set

Let

containing

be a consistent set of w 's. Then

and the Henkin axioms.

Like Lemma 0.4.6.

Remark 1.5.19.

If

L0

is not countable, one has to do a little bit of ordinal

induction. If you know about ordinals, you should see that taking unions at
limit stages preserves consistency. The construction is exactly the same.

L0 -formulas which contains all axioms and


also describes the relations between
describes the role of witnesses (notice that
We have reached a maximal set of

variables, as in Denition 1.5.15 we took all formulas, not only sentences). It is


time to construct a structure.

61

1.5.3 Finding a Structure (and Assignment)


Recall that from a set

we built a maximal, consistent set of

which contains all axioms of

L,

for some expansion

of

L0 -formulas

by new con-

stant symbols, and the Henkin axioms (Notation 1.5.12), which explain their
behaviours.

We are very close to satisfying

Notation 1.5.20.

Let

T0

be the set of terms of

L0 .

This looks like a very clever base set, but here is a minor complication. If
in the language there are a constant symbol
then

c, f (c),

and a unary function symbol

f,

in

etc. are terms. But perhaps the formula

which case the terms

f (f (c))

and

f (f (c)) = c

is in

are required to code the same element.

To force this, we identify pairs of terms which are believed to be equal by


the set of conditions.

Notation 1.5.21.

Let

be the relation on

t t0

Lemma 1.5.22.
Notation 1.5.23.
brackets

[.]

T 0:

` t = t0

if

is an equivalence relation.

Let

M = T 0/

be the quotient set of

modulo

We use

to denote equivalence classes.

will be our universe. We have to interpret symbols of

Notation 1.5.24.

We turn

into an

L0 -structure M

L0 .

with the following in-

terpretation:

c L0 ,

For

For an

let

n-ary

cM = [c].

relation symbol

of

L0 ,

let

` R(t1 , . . . , tn )}
RM = {([t1 ], . . . , [tn ]) M n :

For an

n-ary

relation symbol

of

L0 ,

let

f M ([t1 ], . . . , [tn ]) = [f (t1 ), . . . , f (tn )]

Lemma 1.5.25.
Proof .

This is well-dened.

There is no ambiguity for the constants. We do the case of a relation

symbol (a function symbol is handled similarly).

` R(t1 , . . . , tn ); we show that


`
t1 t01 , . . . , tn t0n and
0
0
. As
R(t1 , . . . , tn ). By denition, the formulas  t1 = t01 , . . . ,  tn = t0n  are in
` R(t1 , . . . , tn ) R(t0 , . . . , t0n ).
it contains the axioms of equality, we have
1
` R(t1 , . . . , tn ). Detaching, we nd
` R(t0 , . . . , t0 ).
By assumption,
n
1
Suppose

62

We have an

L-structure,

and now need an assignment of the variables; but

as they are terms, they already have been taken care of.

Notation 1.5.26.
Lemma 1.5.27.
Proof .

M |= [s]

Proof .

map

to

i

Suppose

i

(truth lemma; see Lemma 0.4.7)

.
` !

is

Bear in mind that by

t1 = t2 .

i
i
i

Suppose

is

R(t1 , . . . , tn ).
M |= [s]

i
i

Suppose

is

i

i

is

i
i
i

(We used consistency and

for any

s(t1 ) = s(t2 )
[t1 ] = [t2 ]
` t1 = t2

M 6|= [s]

M 6|= [s] or M |= [s]


or

( )

V -completeness

of

.)

is x . The witnesses play their role


(xn , n ), and therefore is the formula xn n .
that cn denotes the associated witness.
Suppose

Then

M |= [s]

of

Then

i

Suppose

V -completeness

one

(s(t1 ), . . . , s(tn )) RM
([t1 ], . . . , [tn ]) RM
` R(t1 , . . . , tn )

M |= [s]

L0 -formula ,

Then

i

For any

Then

M |= [s]

[x].

L0 -term t T 0 , s(t) = [t].

For any

Induction on

formula,

s:VM

Clear for a constant and a variable; clear by induction.

Lemma 1.5.28
has

Let

here.

(x, )

is some pair

Recall (Notation 1.5.12)

. By assumption, M |=
M |= [s]; we show that
0
(xn n )[s]; let s agree with s except on xn and such that s0 (xn ) =
cn . By denition of satisfaction, M |= n [s0 ]. Proposition 1.3.9 then
. As n

says M |= (n [cn /xn ])[s]. By induction, n [cn /xn ]

and is consistent, one has ` xn n . So .


Assume

63

Assume

M 6|= [s];

we want to show that

there is an assignment

s0

which agrees with

. By assumption,
6
s except on xn and such

M |= (n )[s0 ].
0
By construction of M , s (xn ) is some class [t] for a term t. We would
like to use Proposition 1.3.9 and say that  M |= (n [t/xn ])[s]; the
issue is that perhaps t is not sustitutable for xn in n , so the above
that

does not make sense. We need to rename a bit.


Let
n be a formula in which t is substitutable for xn and such that
{n } ` n and {n } ` n (Theorem 1.3.6). Then M |= (n )[s0 ],
s0 (xn ) = [t] = s(t) (Lemma 1.5.27), and t is substitutable for xn
in
n . The use of Proposition 1.3.9 is now legitimate, and we nd
M |= (n [t/xn ])[s].
. By consistency (and as t is substituable
By induction,
n [t/xn ] 6

for xn in
n ), xn n 6 either. We are done with substitutions;
. So 6
.
as
n and n prove each other, we nd xn n 6
The reader should have another look at remark 1.5.3; it is now clear that we
had to work with all formulas, and not only sentences.

Proof of Gdel's Completeness Theorem for First-Order Logic.


rst-order language

and a consistent set of

structure and an assignment satisfying

We rst expand
We then extend

to a language

to a

V -complete

L0

L-formulas .

Fix a

We nd an

L-

.
having many witnesses (Lemma 1.5.6).

set of

,
L0 -formulas

which describes the

behavior of the witnesses (Lemma 1.5.18).


We construct an

L0 -structure M

M |= [s]:

as in Notation 1.5.24 and an assignment

M |= [s] for all 0 .

as in Notation 1.5.26. By Lemma 1.5.28,

In particular,

is satisable.
End of Lecture 15.

Lecture 16 (Consequences; Compactness; Non-Standard Analysis)

1.6 Consequences and Compactness


1.6.1 Decidability*
The denitions of 0.5.2 immediately generalize to the case of rst-order logic.

Corollary 1.6.1

(see Corollary 0.5.14)

sented in an eective way),

Let

be a rst-order language (pre-

a set of w 's (presented in an eective way) and

a w. Then there is an algorithm which will answer yes if

Proof .

|= .

The set of axioms of equality is decidable; so we may assume that

contains all the axioms. There is an algorithm which produces all theorems of

(try all deductions of length

n,

then all of length

an algorithm which answers yes if


this is equivalent to

` .

|= .
64

n + 1,

etc.). So far we have

But by soundness and completeness,

We want to generalize Corollary 0.5.16 to rst-order logic; this requires dening some maximality condition for a theory.

V -completeness

(Denition 1.5.15,

an ad hoc notion for the proof of the completeness theorem) is obviously too
strong, as one should care only for sentences. The good notion is the following.

Denition 1.6.2
sentence

either

Notice that

(complete)

or

A rst-order theory

is complete if for any

` .

is complete i maximal as a consistent theory. This is more

understandable if one closes

under consequence, in which case this denition

is the same as the one we gave when proving completeness of propositional logic,
Denition 0.4.3.

Caution!

Let us insist that as opposed to

V -completeness, only sentences are


V -completeness, but is

taken into account. Completeness is much weaker than


the only relevant notion when working with theories.

Corollary 1.6.3

(see Corollary 0.5.16)

a decidable language. Assume that


which decides whether

|=

Let

be a semi-decidable theory in

is complete. Then there is an algorithm

or not.

Hence, semi-decidable and complete is decidable.

1.6.2 Compactness
Theorem 1.6.4
formulas. Then

Proof .

If

(compactness of rst-order logic)

is satisable i

Let

be a set of rst-order

is nitely satisable.

is nitely satisable, it is consistent.

As it is consistent, it is

satisable by the equivalent form of completeness.

Caution!

Even if you have a clear idea where to satisfy nite fragments of

you may be surprised by a structure satisfying

Example 1.6.5.

Consider an innite set A, and let the language contain conca (a A). The set of formulas = {v1 6= ca : a A} is clearly nitely
satisable in A, as it suces to take an assignment with s(v1 ) not one of the
nitely many mentioned a's. But of course no assignment to A will satisfy all
of simultaneously.

stants

Corollary 1.6.6.
Proof .

If

|= ,

there is a nite subset

such that

0 |= .

Like Corollary 0.5.3.

One may want to have another look at the notion of nite axiomatizability
(Denition 0.5.12).

Example 1.6.7.

The theory of an innite set is not nitely axiomatisable.

65

Verication: Suppose it is. Then by Proposition 0.5.13 the natural axiomati-

{x1 . . . xn i6=j xi 6= xj : n N} too contains a nite axiomatisation,


{x1 . . . xn i6=j xi 6= xj }. The latter is
satised by any possibly nite set with at least n elements, a contradiction.

zation

which reduces to a single sentence

Before moving to an application, let us comment on a historical misunderstanding.

Ce malheureux Théorème de Compacité est entré par la petite
porte, et on dirait que cette modestie originelle lui cause encore du
tort dans les manuels de Logique. C'est à mon avis un résultat beaucoup plus essentiel, primordial (et donc aussi moins sophistiqué),
que le Théorème de Complétude de Gödel, qui affirme qu'on peut formaliser la déduction d'une certaine façon en Arithmétique, et c'est
une erreur de méthode que de l'en déduire.
The poor Compactness Theorem got in through the back door, and
it seems that this initial modesty still mars it in Logic textbooks.

It is

in my opinion a result way more essential, primordial (and thus also less
sophisticated) than Gdel's Completeness Theorem, which states that one
can formalize deduction in a certain way in Number Theory, and it is a
methodological error to deduce the former from the latter.

Bruno Poizat, Cours de Thorie des Modles


The truth is that the compactness theorem is a purely semantic property,
and has nothing to do with the notion of deduction. It rst appeared in days
where deductions were a logician's main preoccupation; this explains its initial
modesty.
There is an alternate proof of compactness in 1.7.

1.6.3 Non-Standard Analysis*


The genesis of calculus is a complicated story. However the notion of innitesimals was one of the ways to start a scientic quarrel (and perhaps have a duel)
in the xviii

th

century. In short, are there numbers which are smaller than all

positive reals? (Of course such numbers could not be real numbers themselves.)
Can one introduce such convenient ctions, and work with them? Is what we
show about real numbers using innitesimals true or do we create inconcisten-

th

cies? We provide a xx

Our goal is to extend


interests us; that is,

century answer to the controversy.

by innitesimals. First, we agree that

as a eld

as a ring-structure (as it is customary to treat elds as

< and the absolute value |.|


R, but we may put them in the language

ring-structures). We even equip it with the ordering


(these are actually denable in the ring

freely). This is the language we are interested in.


But we want a little more. First, we should introduce a new constant for
an innitesimal. Second, we're not interested in nding an abstract structure;
we want one which behaves like

R,

or even better, which contains

66

in a nice

manner.

We shall form an ad hoc theory, which will force the structure to

behave.

L = {0, 1, <, +, , , |.|} and the ordered eld R. We


{cr : r R} (which will stand for elements of R), nding the
0
language LR , and one more new constant ; let L = LR {} be the resulting
rst-order language. Let be the set containing the following:
Consider the language

add new constants

all

LR -sentences

the formulas

which hold true in

R;

{0 < < cr : r R>0 }

(One cannot formally quantify over

in the latter: the

cr 's

are elements

of the language, but their collection is not expressible in the language!)


The theory

thus states that the

cr 's

have the same relations as the real

numbers they stand for (in particular, some sentences express that we are talking
about an ordered eld); the extra formulas say that

is a positive element

smaller than any positive real number.

Lemma 1.6.8.
Proof .

is satisable.

mentions only nitely many cr 's, which we inter < cr are


mentioned, and we interpret as a positive real number smaller than the cr 's
in question. This discussion shows that is nitely satisable; actually that
any nite fragment of is satisable in R.
By compactness, is satisable (but of course, not in R, because R has no

pret in

A nite fragment of

as the reals they stand for; only nitely many conditions

innitesimals; same phenomenon as in Example 1.6.5).

Notation 1.6.9
R

(non-standard reals)

contains an interpretation of

pretation of the

cr 's,

Lemma 1.6.10. R
Proof .

Map

to

Let

be an

L0 -structure

satisfying

which is an innitesimal, and an inter-

which behave like the real numbers they stand for.

R .

injects canonically into

cR
r

cr in R ). This is natural enough,


|= cr 6= cs ; hence R |= cr 6= cs .

(the interpretation of

and injective alright: if

r 6= s R,

then

It goes without saying that the mapping is actually a ring homomorphism.


We now freely identify

with its image in

inclusion is of course proper, as


We now work on retrieving
the non-standard line
also longer than

R .

Notation 1.6.11
of

R.

6 R:

R ,

and consider that

The

it is an innitesimal!

We have added innitesimals, so in a sense

is thicker; we have also added their inverses, so it is

(bounded elements and innitesimals)

R :

b = {x R :

R R .

there is

r R>0 : |x| < r};

67

Consider the subsets

o = {x R :
b

r R>0 : |x| < r}.

for all

is the set of bounded elements (not innitely large);

nitesimals (innitely close to

a better picture of

o,
R . b

and

the non-standard line

0).

Of course

o R = {0}.

is the set of in-

If you want to have

you may consider that the real line

embeds into

is the portion of the non-standard line which is not

incommensurably longer than

R. o

is the thickness around

It goes without saying that though nicely dened, these subsets of

are

not (rst-order)-denable: the denitions are not expressible in our language

L0 .

Lemma 1.6.12. b
Proof .

is a ring, and

is an ideal of

b.

Easy computations.

We shall now dene an isomorphism


interesting.

b/o ' R.

The construction is fairly

Following the same mental picture as before, when one consider

only the commensurable part and factors out the thickness, one retrieves the
standard line

R.

Denition 1.6.13

(standard part)

least upper bound of

For any non-negative

{r R : r < x}.

(We dene

st x

x b,

let

st x be the
x b by

for negative

st x = st(x)).
R is Dedekind-complete. Let x b be non-negative;
A = {r R : r < x} of R is non-empty and bounded above
A has a least upper bound in R; st is well-dened. A little more

This makes sense, as


then the subset
since

x b.

So

basic algebra reveals:

Lemma 1.6.14. st
Now as

st

is a ring-homomorphism with kernel

is clearly onto

R (it
b/o ' R.

induces an isomorphism

is the identity on

o.

R),

we deduce that

In particular, any element

written in a unique way as the sum of a real number


and an innitesimal

o.

st x

x b

st

can be

(its standard part),

(This is of course not true for elements of

R \ b,

which are innitely big.)


One can then start doing analysis in a very elegant fashion, whith arguments
such as:

is continuous at real

aR

i for any innitesimal

, f (a + ) f (a)

is innitesimal.
End of Lecture 16.
Lecture 17 (Proof of Compactness by Ultraproducts*)

1.7 An Alternate Proof of Compactness*


1.7.1 Filters and Ultralters*
Denition 1.7.1

(lter)

Let

X 6=

be a set.

68

F P (X)

is a lter on

if:

6 F ;

if

AF

if

A, B F ,

and

Example 1.7.2.

A B,

then

then

B F;

A B F.

In a topological space, the collection of neighborhoods of a

given point forms a lter.

Remark 1.7.3.

Let

be a family of subsets of

property (Denition 0.5.17). Then there is a lter

having the nite intersection

such that

B F.

F = {Y X : there exist B1 , . . . , Bn in B such that B1


F is the smallest family of sets containing every nite
intersection of members of B . This clearly meets the conditions dening a lter.
Notice that 6 F by the nite intersection property.

Verication: Let

Bn Y } ,

Recall that

i.e.

conite in

Example 1.7.4.
Y

means that

X \Y

is nite.

is innite. The Frchet lter is

F r
e = {Y X :

X}.

is conite in

Remark 1.7.5.
IF

Assume

Let

X . Consider IF = {X \ Y : Y F}. Then


(P (X), , X, M, ). Dually, every ideal of P (X) gives
bijection between lters on X and ideals of P (X).

be a lter on

is an ideal of the ring

rise to a lter: there is a

The collection of lters on

may be (partially) ordered by inclusion. The

bijection of Remark 1.7.5 is order-preserving.

Denition 1.7.6
Lemma 1.7.7.
P (X), A U

Proof .

Let

(ultralter)

U be a lter
X \ A U.

Let

or

A maximal lter on

on

X.

Then

be a lter, and bear in mind that

is called an ultralter.

is an ultralter i for all

in

is closed under nite inter-

sections.

U is an ultralter, and let A P (X). There are three cases.


Y U , A Y 6= , then U {A} is a family as in Remark 1.7.3, so it is
contained in a lter. By maximality of U as a lter, it follows A U . If for all
Y U , (X \ A) Y 6= , then X \ A U similarly. So it remains the case where
there are Y1 , Y2 U such that A Y1 = (X \ A) Y2 = . But then Y1 Y2 = ,
Suppose that

If for all

a contradiction.

A P (X), A U or X \ A U . Let U
. Let Y U. If Y U we are done.
extending U : we show U = U

X \ Y U U , so = Y (X \ Y ) U, a contradiction.
Suppose that for all

be a lter
Otherwise

Ultralters trivially exist.

Denition 1.7.8
on

is

(principal ultralter)

Pa = {Y X : a Y }.

69

Let

a X.

The principal ultralter

Remark 1.7.9.

If

is nite, then every ultralter on

is principal.

In the interesting case (X innite), do non-principal ultralters exist? Assuming AC (or Zorn's Lemma), they do; AC is actually slightly stronger than
necessary to show this.

In fact, there is an Ultralter Axiom which is inde-

pendent of ZF (and weaker than AC) saying that every lter is included in an
ultralter.

Applying this to the Frchet lter on

(Example 1.7.4), we nd

non-principal ultralters. This is however highly non-constructive.

Lemma 1.7.10

(AC, or Ultralter Axiom)

Every lter is included in an

ultralter.

Remark 1.7.11. F

IF

is an ultralter i

is a maximal ideal.

In particular,

Lemma 1.7.10 is equivalent to Krull's Theorem in Ring Theory.

Verication: Clear, as the bijection of Remark 1.7.5 is inclusion-preserving.

Remark 1.7.12.
on

Alternately, an ultralter can be viewed as a

(where a set has measure

if it is in the ultralter,

{0, 1}

measure

otherwise). This

-additive since for instance if we look at the Frchet


N each singleton has measure 0 but their union (equal to N) has measure

measure is however not


lter on

1.

Lemma 1.7.13
ultralter

Proof .

As

. If B
B U.

(AC)

such that

has the nite intersection property, there is an

has the nite intersection property, it is contained in a lter

Remark 1.7.3. Then Lemma 1.7.10 gives an ultralter

Remark 1.7.14.

There are

Card X

22

ultralters on

X.

extending

by

F.

(This theorem, due to

Hausdor, is non-trivial.)

1.7.2 Ultraproducts (The o Structure)*


Denition 1.7.15
and for each

Mi 's
M .

iI

(with respect

The base of

. Let I 6= be a set, U be an ultralter on I ,


Mi be an L-structure.
We dene the ultraproduct of the
Q
to U ), denoted
I Mi /U . For simplicity, we refer to it as

(ultraproduct)

let

Mi

modulo the equivalence relation

(mi ) (ni )

if

{i I : mi = ni } U

by:

is the set

We interpret constants in




cM = (cMi )iI
We interpret relations by:

RM




(m1i ) , . . . , (mki )

if

70


i I : RMi (m1i , . . . , mki ) U

Similarly for functions:

fM



 

(m1i ) , . . . , (mki ) = (mk+1
)
i
if

iI:f

Mi

m1i , . . . , mki


= mk+1
U
i

Checking that these are well-dened is very similar to showing the transitivity of the equivalence relation: if we have two equivalent sequences, then they
must agree on a set in

U , which means that the functions or relations must agree


U , hence lying in U .

on a set containing a set in

Theorem 1.7.16

be a w. Let s : V M be an assignment,


say s (x) = [(mi,x )iI ], and let si (x) = mi,x ; each si is an assignment V Mi .

Then M |= [s ] i {i I : Mi |= [si ]} U .
(o)

Proof .

By induction on

If

Now suppose that

Let

is atomic then it is clear from our denition of

M .

. If M |= [s ], then M 6|= [s ]. By
that {i I : Mi |= [si ]} 6 U , and since U is an

is

induction, this means

ultralter (so it always contains a set or its complement), this means that

{i I : Mi 6|= [si ]} U , i.e.


M |= [s ]. All of these steps

Next assume that

i
i
i
i

is

{i I : Mi |= [si ]} U .

Hence

Then:

M |= ( )[s ]
M |= [s ] and M |= [s ]
{i I : Mi |= [si ]}, {i I : Mi |= [si ]} U
{i I : Mi |= [si ]} {i I : Mi |= [si ]} U
{i I : Mi |= ( )[si ]} U

(We heavily used that

that

are reversible, so the i holds.

Finally, assume that

U
is

If

Now suppose that

is an ultralter here.)

x .

M |= (x )[s ], there is an assignment (s )0 which agrees with s

0
0
except on x, and such that M |= [(s ) ]. Say (s ) (x) = [(ni )] for
0
a family (ni ) i Mi , and for each i I let si agree with si except
0
0
on x, where si (x) = ni . By induction {i I : Mi |= [si ]} U ,
which implies {i I : Mi |= (x )[si ]} U .
J = {i I : Mi |= (x )[si ]} is in U . For
ni Mi and s0i agree with si on V \ {x} where s0i (x) = ni
0
0
be such that Mi |= [si ]. For i I \ J , let si = si . We nd a
0

resulting assignment (s ) : V M which diers from s only on


0
x. As J {i I : Mi |= [si ]} is in U , we nd by induction that
M |= [(s0 ) ]. Hence M |= [s ].
i J,

let

71

1.7.3 Alternate Proof of Compactness*


Proof of compactness of rst-order logic, Theorem 1.6.4.
satisable rst-order theory
For each

i I,

I be the collection
Mi satisfying i by

Let

there is a structure

Fix a nitely

of nite subsets of

the denition of nite

satisability.
For each

let

A = {i I : Mi |= }.

family corresponds to

A1 n ,

to

{A :

M =

Mi /U

(i.e.

is nitely
{A : }.

which must be nonempty since

satisable. By Lemma 1.7.13, there is an ultralter


We let

Then the family

has the nite intersection property, since the intersection of a nite


extending

is the ultraproduct of the

Mi 's

with respect

U ).
Let

We have (using o' Theorem and our denitions):

M |=

{i I : Mi |= } U

and the latter is true by construction, so

Remark 1.7.17.

A U,

M |= .

This modern and elegant proof of rst-order compactness

has one drawback: it is highly non-constructive, as it relies on the existence of


ultralters (Lemma 1.7.13). On the other hand, it is purely semantic and does
not rely on a hidden completeness argument!
Ultraproducts are very important in model theory, if one is not interested
in eectiveness. They appear in important constructions and deep results. For
instance, one could have realized the non-standard reals as an ultrapower of

R:

x a non-principal ultralter

Theorem,

on

N,

and let

R =

Q
N

R/U .

By o'

is an ordered eld. If you want to feel an innitesimal, consider

the equivalence class, modulo

U,

of the sequence

( n1 )nN :

it is positive, and

ultimately smaller than any xed positive real number!


End of Lecture 17.

72

Chapter 2

Second-Order Logic (Presto)


In this short chapter we briey describe second-order logic, in which one may
quantify on collections of elements. This requires new symbols

and

2 .

The

logic is amzazingly expressive, too expressive to have interesting results.

In this chapter:

Explain what second-order means.

Show that it is essentially more expressive than rst-order.

Show that it is essentially too expressive for logic.

Lecture 18 (Second-Order Logic)

Denition 2.0.1 (second-order language).

We now expand our existing collec-

tion of symbols by the following:

for each

k 1,

the quantiers

a set of

and

k -ary

predicate variables

X1k , X2k , . . . , Xnk , . . .

2 now quanties over collections/relations/functions (a relation is a set


1
of n-uples; a function is a certain relation). For instance, X (a) means that
2
the element a is in the predicate X . Or, the formula Y (a, b) means that the
elements a and b are in relation according to the binary predicate Y . In a word,
one may now quantify over predicates.

Caution!

The superscript of a variable denotes its arity, that is the Cartesian

power of the base set in which it lives. But the superscript in

2 and 2 indicates

the order of the logic.


One should dene terms and w 's, but we do not wish to repeat these
discussions, as we shall not try to describe a proof theory. There are two reasons.

73

First, we would have to determine suitable axioms for the relationship


between predicates and elements, as we determined reasonable axioms
for equality in the case of rst-order logic. This amounts to agreeing on
axioms for membership, that is agreeing on a theory of sets, which is not
the spirit of this course.

Second, and worse, this appears a little pointless to us, as compactness


will fail anyhow: hence so would completeness.

We therefore entirely forget about proof theory.

2.1 Compactness fails


Proposition 2.1.1.

There is a second-order formula expressing that a set is

innite.

Proof .

A set is innite i it injects into a proper subset. There is a formula

saying that

Y1

X 1:

is a subset of
subset(Y

, X 1) :

There is a formula saying that

Y1

propersubset(Y 1 , X 1 ) :
There is a formula saying that

functional(2 ) :

x Y 1 (x) X 1 (x)

is a proper subset of

subset(X

ab1 b2 2 (a, b1 ) 2 (a, b2 ) b1 = b2


2

is injective:

a1 a2 b 2 (a1 , b) 2 (a2 , b) a1 = a2

There is a formula saying that

function(2 , X 1 , Y 1 ) :

, Y 1 ) (x X 1 (x) Y 1 (x))

is the graph of a functional relation:

There is a formula saying that the relation

injective(2 ) :

denes a function from

X1

to

Y 1:

functional(2 ) x y X 1 (x) (Y 1 (y) 2 (x, y))

There is a formula saying that

injection(2 , X 1 , Y 1 ) :
There is a formula saying that

X1

injects(X 1 , Y 1 ) :

denes an injective function from

X1

injective(2 ) function(2 , X 1 , Y 1 )
injects into

Y:

2 2 injection(2 , X 1 , Y 1 )

And eventually, the following formula expresses that

infinite(X 1 ) :

X 1:

X1

is innite:

2 Y 1 propersubset(Y 1 , X 1 ) injects(X 1 , Y 1 )

74

to

Y 1:

One should compare this to rst-order logic, in which innite is not a single
formula, but an innite theory not nitely axiomatizable (Example 1.6.7).

Corollary 2.1.2.
Proof .

For

No compactness theorem.

n N,

let

be the (rst-order) sentence:

which expresses that the underlying set has at least

v1 . . . vn i6=j vi 6= vj ,
elements. Let be the

formula constructed in Proposition 2.1.1 and consider the following second-order


theory:

T = {: n N} {2 X 1 (X 1 )}
Then

is clearly not satisable, but is nitely satisable!

In particular, regardless of what our proof theory would have been, completeness would have failed.

2.2 Peano Arithmetic


We now deal with a remarkable property of the integers.

Denition 2.2.1

(Peano Arithmetic)

unary function symbol.

{0, s}

Let

PA

Let

be a constant symbol and

be the second-order theory in the language

with the following axioms:

is an injective function

every element not

there are no cycles, i.e. for each

is in the image of

the axiom:

x |s {z
s}(x)x
n

Every subset which contains

2 Y 1

and is closed under

is the entire base set:


Y 1 (0) (y Y 1 (y) Y 1 (s(y)) x Y 1 (x)

Theorem 2.2.2 (absolute categoricity of PA).


are like

All models of Peano Arithmetic

N (with the natural interpretations of 0 and s as the successor function).

Like stands for isomorphic, which we do not wish to dene now (this will
be done in model theory, Denition 1'.2.6).

It just means that the bijection

preserves the language; that up to names, the objects are exactly the same.
This result will be referred to as absolute categoricity of

PA,

a terminology

which will become clearer after the denition of categoricity in model theory
(Denition 1'.2.14).

75

Proof . Let M |= PA. We nd a bijection between N and M which maps 0 to


0M and which is compatible with succession. For the sake of clarity, we write
n + 1 for sN (n) (succession in N happens to coincide with adding 1, though
addition is not part of the language).

f (0) = 0M . Inductively, we now dene f (n + 1) = sM (f (n)). This does


dene a function N M , which is by denition compatible with s. Clearly f
is injective (induction in N; s has no cycles in M).
It remains to show that f is surjective. Consider im f M . This subset
M
M
contains 0
and is closed under s ; as M satises the axioms of PA, one has
im f = M . So f is surjective; it is a successor-preserving bijection.
Let

Caution!
PA.

The temptation is strong to wonder whether or not

is a model of

But the integers existed before we started doing logic, and the fact that we

have found a nice description of their behaviour does not change it!
There is no reason to stop. One may quantify over collections of collections
of elements and introduce

and so on. Goes without saying that

nth -order

logic remains sound, but does not enjoy compactness.


End of Lecture 18.
Lecture 19 (ME2)
End of Lecture 19.

76

Chapter 1'

Some Model Theory (Furioso)


After the failure of compactness in higher logic, we move back to rst-order
logic, which is obviously the only logic worth considering. Model theory is the
mathematical study of rst-order theories; syntactical considerations disappear,
if not entirely, at least in spirit. One will still argue by induction on the length
of formulas, which is a syntactic notion, but there also exists a purely semantic
approach to model-theory, which emphasizes on back-and-forth methods. For
these reasons, model theory has been called the semantics of predicate calculus.

In this chapter: a rst account of model theory

Elementary equivalence, elementary inclusion (1'.1)

Elementary morphisms (1'.2)

Lwenheim-Skolem theorems (1'.3)

Back-and-forth methods (1'.4)

Quantier elimination (1'.5.1)

-saturated

models and applications (1'.5.2 and 1'.5.3)

Lecture 20 (Models and Theories; Elementary Equivalence)

1'.0 A Word on Model-Terrorists


We are through with syntax, and will start doing some mathematics.

Our

treatment of terms and formulas has been remarkably accurate as long as we


have been interested in proof theory; model theory is on the semantic side, so we
shall make notations more convenient, and disregard the risk of an ambiguity.

77

1'.0.1 Formulas and Parameters


Notation 1'.0.1

(tuple notation)

more conveniently written

a (n

(a1 , . . . , an )
a).

A nite sequence of objects

is then the length of the tuple

is

We shall often abuse notation of Cartesian powers in order to avoid mentioning lengths. For instance, if
instead of

a1 , . . . , an M ,

we shall feel free to write

aM

a M n.

From now on we assume that every formula we consider has been suitably
constructed or renamed (for instance by the algorithm of Theorem 1.3.6); that

(v1 v1 = v2 ) (v1 = v2 ) will play us tricks.


x, y, z , etc. but also x1 , x2 , etc. to be variables themfor general variables). In particular x = x1 , . . . , xn is a

is, we take for granted that no


Furthermore, we take
selves (and not names
tuple of variables.

Notation 1'.0.2

(formula with free variables)

the only free variables of the formula


Hence

(x)

means

variables of the tuple

We write

(x)

to indicate that

are among the tuple of variables

FreeVar x;
x are not used.

but we may write


In any case,

(x)

(with no

x.

even if some

x)

denotes a

sentence. We now get rid of assignments of the variables.

Notation 1'.0.3
variables among
same length as

. Let (x) be a formula with free


L-structure. Let m = (m1 , . . . , mn ) M n have
M |= (m) if M |= [s] with s : xi 7 mi .

(formula with parameters)

x,

x.

and

an

We write

It is customary to expand the language via constants.

Notation 1'.0.4 (expansion by constants; LA ).


and

a set.

symbols

ca

LA

for elements

of

i

L be a rst-order language,
L new constant

A.

In this case, we shall always interpret

(a)

Let

denotes the language obtained by adding to

ca

by

a.

Under this convention,

M |=

M |= (ca1 , . . . , can ).

1'.0.2 Quantications
We rehabilitate poor

Denition 1'.0.5

and make it a quantier again.

(quantier-free formula)

A formula is quantier-free if it

has no quantiers.

Denition 1'.0.6 (universal formula).


v1 . . . vn 0 ,

where

Denition 1'.0.7
form

(existential formula)

v1 . . . vn 0 ,

where

Be very careful that

Denition 1'.0.8
a formula

A formula is universal if it is of the form

is quantier-free.

A formula is existential if it is of the

is quantier-free.

v1 v2 . . .

is neither universal nor existential.

(quantication rank)

.
78

We dene the quantication rank of

if

is atomic, then

if

is

if

is

1 2 ,

if

is

x ,

then

qrk = qrk + 1.

if

is

x ,

then

qrk = qrk + 1.

then

qrk = 0;

qrk = qrk ;
then

qrk = max(qrk 1 , qrk 2 );

We do not capture the alternation of

's and 's, though a recursion theorist

would feel at some point interested.

1'.0.3 Models and Theories


We shall work only with theories (sets of sentences) instead of arbitrary sets of

cx for each variable x, and let


L0 = L {cx : x a variable}. To every (possibly open) L-formula we associate
0
0
an L -sentence by replacing every free occurence of every free variable by the
formulas. Indeed, let us introduce a new constant

corresponding constant; notice that subsitutability is not an issue. And a set


of

L-formulas

is satisable i the

The advantage being that since

L0 -structure:

L0 -theory 0 = {0 : }

assignments of the variables are now useless.

As we shall never use truth values again,

Denition 1'.0.9 (model).


M |= T

is satisable.

is a set of sentences, it suces to provide an

one says that

Let

denotes a rst-order theory.

M be an L-structure and T
T.

be an

L-theory.

If

is a model of

Model-theorists often abuse terminology and say that

is consistent for

satisable; by Gdel's Completeness Theorem (Theorem 1.5.1) this is harmless


anyway. But one should not think that model-theorists are interested in syntax;
all they care about is structures.

We even abuse further, by sometimes im-

plicitely dening a theory to be consistent. This leads to questions like: is this
theory a theory?

(which means: is this theory consistent?). No ambiguity

arises in practice.
Also, we freely identify any theory with the set of its consequences (since
there is no ambiguity on the notion of consequence).

This bridges the gap

between two dierent denitions of completeness (Denitions 0.4.3 and 1.6.2),


which now perfectly agree.

1'.1 Elementary Equivalence; Inclusion and Elementary Inclusion


Remark 1'.1.1.

Elementary is the standard adjective used in model theory;

one must be careful with the denitions (elementary equivalence, Denition


1'.1.5, and elementary inclusion, Denition 1'.1.13), as the role of parameters
is not necessarily obvious.

Terminology tends to be confusing; elementary

captures a way of thought, not a notion.

79

1'.1.1 Elementary Equivalence


Denition 1'.1.2 (theory of a structure).
order

L-theory

of

M,

or the theory of

This amounts to taking only

Let

M be an L-structure. The rstTh M = { : M |= }.

for short, is

L-sentences

of

by new constant symbols for elements of

Denition 1'.1.3
structure and

The quantier-free theory of

over

is

Remark 1'.1.4.

If

Let

be an

L-

over

is

Th0 (M, A) = {(a) LA :

is quantier-free}.

In particular, Th M = Th(M, ).
Th(M, A) { LA : qrk() = 0}.

complete

(Notation 1'.0.4).

Th(M, A) = {(a) LA : M |= (a)}.

The theory of

and

if one wants

is the expansion

a set of parameters.

M |= (a)

LA

(theory of a structure, with parameters)

AM

M;

which are true in

to allow parameters, we must generalize a little. Recall that

is an

One may also notice that

L-structure

and

A M,

then

Th0 (M, A) =

Th(M, A)

is a

LA -theory.

Verication: In

M, (a)

is either true or false!

Model theory is interested in the theories of structures.

As far as rst-

order properties are concerned, being the same amounts to satisfying the same
sentences. This is captured by the following, central notion.

Denition 1'.1.5
and

. Let M, N be L-structures. M
M N , if Th M = Th N .

(elementary equivalence)

are elementarily equivalent, denoted

This behaves very much like an equivalence relation (reexive, symmetric,


transitive), except that the class of all

L-structures

is not, set-theoretically

speaking, a set.

Lemma 1'.1.6.

A theory

is complete i for all models

M, N

of

T,

one has

M N.

Proof .

Recall that we have identied

with the set of its consequences, and

that a theory is complete i it is maximal as a consistent set of sentences.

T complete. If M |= T , then T Th M. As T is complete, one


T = Th M; since T = Th N similarly, it follows M N .
Suppose that any two models of T are elementarily equivalent. Let be a
sentence such that neither not is in T . Then there are models M |= T {}
and N |= T {}. In particular, M N , and this is a contradiction. So
either or is in T .
Suppose

has

So the running question: is this theory complete? Has a partial answer: can
one show that any two of its models are elementarily equivalent?

80

1'.1.2 Inclusion
We now turn our attention to substructures; the naive notion (Denition 1'.1.7)
is not very useful, the interesting form will be elementary inclusion (Denition
1'.1.13). As noted in Remark 1'.1.1, terminology might be a bit confusing, as
elementary will not exactly mean the same thing as in elementary equivalence
(Denition 1'.1.5).
But let us begin with the naive, not-so-useful, notion of inclusion.

Denition 1'.1.7
substructure of

(substructure)

N,

written

Let

M N,

M, N

be

L-structures.

Then

is a

if the following hold:

M N;
cM = cN

for all constant symbols

RM = RN M n

for all

f M = f N M n+1
(f

of

In a word,
of

for all

relation symbols

n-ary

R L;

function symbols

R L.


N |M
may also be written f
, the restriction and corestriction
|M n

n+1

to

n-ary

c L;

M)
M

to the base

bears the

M.

same language, with

L-structure

induced by restriction of the structure

M and N in the
M N as L-structures,

(In general, if you found two structures

M N

but it is not the case that

then presumably there is something wrong.)

Example 1'.1.8.

A subgroup of a group is a substructure as a group structure.

A subring of a ring is a subbstructure as a ring structure.

Remark 1'.1.9.
Then

Let

be an

L-structure and M N
L-structure M N

can be equipped with an

for each constant symbol

c, cN M

for each function symbol

f, M

The structure

is closed under

be a non-empty subset.
i the following hold:

fN .

is then uniquely determined.

When studying Lwenheim-Skolem theorems we shall say a little more about

generating substructures (1'.3.2).


The following is a very healthy exercise.

Remark 1'.1.10.

Let

L-substructures,

an L-structure M

of

(Mi )iI be an increasing chain


i j Mi Mj . Then iI Mi naturally bears

that for all i I , Mi M .

be a non-empty set and

that is
such

Lemma 1'.1.11 ('s go up, 's go down).


(m)

be a formula with parameters in

M.
81

Let

MN

be an

L-structure.

Let

(i). If

is quantier-free, then

(ii). If

is existential, then

(iii). If

is universal, then

M |= (m) N |= (m).

M |= (m) N |= (m).

N |= (m) M |= (m).

Proof .
(i). The case of an atomic formula is by denition. Then a quick induction on
the complexity.
(ii). Induction again. If there are no quantiers, then this is known. Suppose

x (x, m); is existential again. If M |= x (x, m), then


M such that M |= (, m). By induction, N |= (, m); so
N |= (m).

that

is

there is

(iii). Might be obtained via (ii), or by similar methods.


In general, existential witnesses need not go down (dually, universal properties need not go up).

Counter-example 1'.1.12.
ZQ

as ring-structures, but

Q |= x x + x = 1

QR

as ring-structures, but

R |= x x2 = 1 + 1

RC

as ring-structures, but

C |= x x2 = 1

though

though

though

doesn't.

doesn't.

doesn't.

One sees that this notion of inclusion is therefore not very interesting to us.
Elementary inclusions (Denition 1'.1.13) will be more relevant.
End of Lecture 20.
Lecture 21 (Elementary Inclusion; Morphisms)

1'.1.3 Elementary Inclusion


We now introduce a better-behaved notion of inclusion (and substructure). The
intuitive idea is that both structures should agree on as many properties as
possible, allowing parameters.

Of course in order for the question to make

sense, only elements of the small structure can be taken into account.

Denition 1'.1.13

(elementary substructure)

L-structures. M is an
Th(M, M ) = Th(N , M ).
of

Let

M N be an inclusion
N , written M  N , if

elementary substructure of

Example 1'.1.14.
(N, <) 6 (Z, <), as N |= x x 0
0 as a parameter.)

but

82

Z |= x x < 0.

(This formula uses

(Q, <)  (R, <)


M N

(not necessarily obvious yet).

Let

We have observed in Counter-example 1'.1.12 that none of the inclusions


of rings

M  N.

be innite sets, regarded as pure sets. Then

ZQRC
QC

is elementary.

As rings,

When we constructed non-standard reals in 1.6.3, we considered a theory extending

(not necessarily obvious).

Th(R, R),

in order to force the inclusion

R R

to be

elementary.
In other words, M  N i M and N have the same LM -theory, i N |=
Th(M, M ). This is also equivalent to: for any LM formula (m) with parameters in M , M |= (m) i N |= (n).
In particular M  N implies M N , but the converse need not hold.

Counter-example 1'.1.15.

Consider

N = (Z, <)

and

M = (2Z, <) N .

As

they are clearly isomorphic (Denition 1'.2.6 below), it is clear that they are

M N (this will be more convincing later).


M |= x (x 0 x 2), and N |= x (0 < x < 2). Hence M and N can
distinguished by the tuple (0, 2).

essentially the same object, and


Yet
be

Lemma 1'.1.16.
ementary chain of

iI Mi .

Proof .

Then for

I be a non-empty
L-substructures, that
all i I , Mi  M.

Let

(Mi )iI be an increasing eli j Mi  Mj . Let M =

set and
is

A very healthy exercise.

The following gives a criterion for elementarity of inclusions; it should be


used mostly in order to go down, that is, when one already has perfectly under-

N , and tries to see if M N

stood the notion of satisfaction in

is an elementary

substructure. (This will typically be the case in 1'.3.2). The statement is more
subtle than it rst seems.

Theorem 1'.1.17
Then

MN
if

Proof .

(Tarksi's test)

i for all formula

N |= x (x, m)

Suppose

Suppose that

m M.

atomic. Clearly,

Suppose that

s.t.

L-structures.

M,

N |= (, m)

by induction. Let (m)


M |= (m) N |= (m).

We show
We show

M |= (m)

i

be a

N |= (m).

is built from shorter formulas using connectives. There is

not much to prove.

be an inclusion of

with parameters in

then there is

is clear by denition.

formula with parameters

MN

Let

(m)

is

x .

83

Assume that M |= x (x, m). Then there is M such that


M |= (, m). By induction, N |= (, m), so N |= (m). Notice
that we did not use the assumption yet.

N |= x (x, m). By assumption, there is M


N |= (, m). By induction, M |= (, m), and therefore
M |= (m).
Now assume that
such that

In short, the inclusion is elementary i all

's

go down; yet it is not a good

idea to try to put Tarski's test in a nutshell, as there are some subtleties.

Remark 1'.1.18.

Elementary equivalence is a notion involving satisfaction in two structures


(M and

N ).

isfaction:

in

N.

in

However, Tarski's Test reduces to only one notion of sat-

N.

Notice indeed that the condition is only about truth

This will make Tarski's test especially valuable when constructing

elementary substructures (1'.3.2).

One should be very careful that in Tarski's test (Theorem 1'.1.17), one
considers all formulas of the form

x ,

where

too might have quanti-

ers, and not only existential formulas (read Counter-example below).

Counter-example 1'.1.19.

Consider

(N, <)

and

(N {}, <),

notes an extra element greater than all natural numbers.

deN {}

where

Then

has a greatest element (this is a rst-order sentence), but

N doesn't: clearly
N 6 N {}. But for a purely existential formula , one has N |= (n) i
N {} |= (n). (This need not be entirely clear yet, as a proof would rely on
back-and-forth methods.)

1'.2 Morphisms, Elementary Morphisms, Categoricity


We now have the objects of model-theory; we want to nd the suitable notion
of morphisms between them.

1'.2.1 Morphisms
Notation 1'.2.1.
we write

(m)

If

m = (m1 , . . . , mn ) is a tuple and : M N


((m1 ), . . . , (mn )).

Denition 1'.2.2 (L-morphism).


be a function.

is an

(cM ) = cN
m RM

is a function,

for the tuple

i

L-morphism

Let

M, N

for any constant symbol

(m) RN

(f M (m)) = f N ((m))

be

L-structures

if:

c;

for any relation symbol

R;

for any function symbol

f.

84

and

:M N

In short, an
tions of

in

L-morphism
M and N .

is a function which is compatible with the interpreta-

LL-embedding.

In particular, as equality is always among the relations, one sees that an

morphism is always injective. This is the reason why one also says

Remark 1'.2.3.
morphism;

IdM

The composition of two

is always an

L-morphisms

is of course an

L-

L-morphism.

Example 1'.2.4.

Let

and

be orderings. Then

:M N

is an

{<}-morphism

i it

is an order-preserving (injective) function.

An injective group homomorphism is an

Lgrp -morphism.

Any ring morphism between elds is an

Lrng -morphism.

Remark 1'.2.5.

If

M N,

More precisely, let

then the inclusion map

lying sets).

Then

M N

is an

L-morphism.

M, N be two structures with M N (the underM N as L-structures i the inclusion map is an

L-morphism.

M, N

Let

be

L-structures.

Then there is an

can be turned into a model of


ers, parameters in

Th0 (M, M )

L-embedding M N i N
M with no quanti-

(theory of

M ).

One may rush to the denition of an isomorphism.

Denition 1'.2.6 (L-isomorphism).

L-morphism

is an

L-isomorphism

if it is surjective.

N are L-isomorphic, written M ' N (L


L-isomorphism between them.

and

is an

(Recall that an

Remark 1'.2.7.
ping of an

L-morphism

is always injective!)

L-isomorphisms,
L-isomorphisms.

The composition of two

L-isomorphism,

are

being implicit) If there

the reciprocal map-

Example 1'.2.8. (Z, <) and (2Z, <) are clearly isomorphic.
Remark 1'.2.5 however suggested that we did not take the good denition in
the rst place: morphisms reduce to inclusions, which in general have but little
interest. So we look for something more reminiscent of the notion of elementary
inclusion.

85

1'.2.2 Elementary Morphisms


Denition 1'.2.9

(elementary morphism)

elementary if for all formulas

(m)

M |= (m)

An

L-morphism : M N
M:

is

with parameters in

N |= ((m))

i

Stronger than ordinary morphisms which only preserve interpretation, one


could say that elementary morphisms preserve satisfaction.

Remark 1'.2.10.
IdM

The composition of elementary morphisms is elementary;

is always elementary.

The following must be compared with Remark 1'.2.5.

Remark 1'.2.11.

If

M N,

Let

then the inclusion map

MN

be two

L-structures.

M N
Then

is elementary.

MN

i the inclusion map is

elementary.

More generally, there is an elementary embedding

M0  N such
Th(M, M ).
As for

that

L-isomorphisms,

Proposition 1'.2.12.
Proof .

Let

mentary.

An

: M N

M ' M0

i

M N

i there is

can be turned into a model of

we had already reached a very nice notion.

L-isomorphism
be a bijective

By induction on a formula

M |= (m) i N |= ((m)).
If is atomic, this is clear

is elementary.

L-morphism.

We show that

we show that for any tuple

is elem M,

by denition of a morphism. The case of connec-

tives is not hard to deal with.

M |= (m), then there is M such that M |=


N |= ((), (m)); therefore N |= ((m)). Conversely, if N |= ((m)), then there is N such that N |= (, (m)). Since
is surjective, there is M such that () = . Hence N |= ((), (m));
by induction, M |= (, m), so M |= (m).
Suppose

(, m).

= x .

If

By induction,

Corollary 1'.2.13.

If

M ' N,

then

M N.

Proof . Th(M) is the set of properties of the empty tuple in M.


an isomorphism

one has that for any sentence,

86

M |=

i

As there is

N |= .

1'.2.3 Categoricity
Denition 1'.2.14

models and

nality

(categoricity)

and any two models of

Caution!

T be a rst-order theory with innite


-categorical if it has a model of cardicardinal are isomorphic.

Let

an innite cardinal.

of

is

This should be opposed to absolute categoricity of second-order

Peano arithmetic (Theorem 2.2.2), which is about any two models of

PA.

First-

order theories cannot capture cardinalities as second-order logic does (this will
become clear with the Lwenheim-Skolem theorems, in 1'.3); absolute categoricity is impossible for a rst-order theory with innite models. One therefore
compares only models of the same cardinality.

Example 1'.2.15.

Here are some categorical theories (we do not prove these

facts):

0 -categorical
1 (harder).

The theory of dense linear orderings is


below), but not

-categorical

for any

(Theorem 1'.4.4

The theory of vector spaces over a nite eld is categorical in any cardinal.

The theory of vector spaces over


categorical for any

Q is not 0 -categorical, but it is 1 (generalize this statement to bigger base elds!).

The theory of algebraically closed elds of a given characteristic is not

0 -categorical,

but it is

-categorical

for any

(hint: transcendence

bases).
Categoricity will provide a criterion for completeness of a theory (Corollary
1'.3.14). But let us briey leave the syllabus.

Theorem 1'.2.16
some

(Morley's categoricity theorem)

is categorical in any

A theory categorical in

L.

Micahel Morley actually proved the countable language case in 1965; Saharon
Shelah extended it to any language in 1974.
End of Lecture 21.
Lecture 22 (Lwenheim-Skolem Theorems)

1'.3 Lwenheim-Skolem Theorems


1'.3.1 The Substructure Generated by a Set
Notation 1'.3.1.

If

is nite, we write

87

Card L = 0

anyway.

Card L 1 . You may think that


Card WFFL .
Let us start with an L-structure M and a subset A of M . One can easily
nd the smallest substructure of M containing A: it suces to close A (union
Notice that there is no risk of confusion if

in any case,

Card L

actually denotes

the constants) under the functions of the language; the resulting substructure
has cardinality at most

Card A + Card L.

Example 1'.3.2.

Let G be a group and S . Then there is a smallest subgroup


G containing S ; it is precisely the closure of S under the group multiplication
and inversion. If S is countable, then so is hSi.
of

Lemma 1'.3.3.

be a rst-order structure,

smallest

of

Let M
L-structure hAi

containing

A.

A M . Then there is a
CardhAi Card A +

Moreover,

Card L.

Proof .
L

A0 = A {cM : c

We rst deal with the constants by considering

a constant symbol}.
We now write the set of function symbols of the language

where

Fn

is the set of function symbols of arity exactly

A1 = A0

[ [

n.

F =

nN

Fn ,

Let

f M (An0 )

nN f Fn
(The idea is clear.

L.

functions of

One wishes to add the images of elements of

Unfortunately

f (A0 )

A0

through

does not necessarily make sense, as the

arity of the function may vary. The union indexed over

N deals with this minor,

strictly notational, issue.)

A1 we
hAi = nN

From
set

build

A2

contains

in a similar fashion, and keep going. Eventually the

A,

all (interpretations of ) constants, is closed under

(the interpretations of ) all functions in the language.


symbols on
an

hAi

L-structure

M ), getting
M.
most Card A + Card L; at each stage, we
are 0 Card L stages, and this proves

which is a substructure of

A0 has cardinality at
Card L elements; there
CardhAi Card A + Card L.
Notice that

add at most

Example 1'.3.4.

Consider the case of a subset

S0 = S {1}, S1 = S0 S01 S0 S0 ,

AM

Let

be any set. The substructure generated by

substructure of

containing

A.

It coincides with

G again. Then
hSi = nN Sn .

of a group

etc. One does nd

Denition 1'.3.5 (substructure generated by a set).


and

We interpret relation

by the induced interpretation (i.e., as a subset of

hAi

M be an L-structure
in M is the smallest

from Lemma 1'.3.3.

One can prove that an arbitrary intersection of substructures of


substructure of

again; we have just been generalizing to

which is well-known for groups.


substructures of

containing

A.

It follows that

hAi

88

is a

a construction

is the intersection of all

Notice in any case that

that we could construct it explicitely.

hAi

is canonical, and

hAi

All this is very nice, but in general


substructure of

has no reason to be an elementary

(Denition 1'.1.13), and this is precisely what we would be

interested in. So Lemma 1'.3.3 and Denition 1'.3.5 are not that useful.

1'.3.2 Skolem Functions and the Descending Version


If we want to nd an elementary substructure containing

A,

then by Tarski's

Test (Theorem 1'.1.17) we ought to add existential witnesses for all formulas.
Recall how existential witnesses played an essential role in the proof of the
completeness of rst-order logic. We now deal with them in a more semantic
fashion, adding to the language functions which choose them (this construction
is of course not canonical, and heavily relies on the axiom of choice).

Caution!

The substructure we shall construct in Theorem 1'.3.9 will not be

canonical.

Denition 1'.3.6 (Skolem functions).


formula

f ,

(x, y), where y

the Skolem function


We let

LSk = L {f : (x, y)

Denition 1'.3.7
of

is the

Let L be a rst-order language. For any


n-uple of variables, we add a new function symbol
associated to .
is an

an

L-formula}.

L-theory.
T and the

y x (x, y) (f (y), y)

(Skolemization)

LSk -theory TSk

Let

be an

which is the union of

The Skolemization
axioms

The meaning is clear: our new axioms say that witnesses are explicitely given
by our new function symbols.

Lemma 1'.3.8.

Let

be a model of

Skolem functions making

T . Then
TSk .

there is an interpretation of the

a model of

Proof .

We must give a meaning to the Skolem function symbols f . Let (x, y)


L-formula, and m M have same length as y . If M |= x (x, m), then
M
there is M such that M |= (, m); we choose such a and set f (m) = .
If there is no suitable , we map m to any element in M .
be an

With heavy use of the axiom of choice (we perform innitely many choices),
we thus dene functions

fM ,

which are the interpretations of the Skolem func-

tion symbols. By construction,

M |= y x (x, y) (f (y), y)

Let us return to our problem. If we now close a subset

AM

under all

functions including the Skolem functions, we are in good position to nd an


elementary inclusion, as all necessary witnesses have been plugged in.

Theorem 1'.3.9

. Let M be an inCard A + Card L


A M0 and Card M0 = .

(descending Lwenheim-Skolem theorem)

L-structure and A M . Let be


Card M . Then there is M0  M such
nite

a cardinal with
that

89

Proof .

Card A = . We turn M into an LSk -structure


TSk . In the language LSk , we now consider the substructure of
M generated by A (Lemma 1'.3.3). Forgetting the interpretation of the Skolem
functions, this is a fortiori an L-structure M0 . Moreover Card M0 = .
We may assume that

which satises

It remains to prove that the inclusion is elementary. We use Tarski's Test

(m0 ) be a formula with parameters in M0 ; assume


M |= TSk , one has M |= (fM (m0 ), m0 ). But by
M
element f (m0 ) lies in M0 . This is what we need in order to

(Theorem 1'.1.17).

Let

M |= x (x, m0 ).
construction, the

As

apply Tarski's criterion.

Remark 1'.3.10.
M0  M

as

L-structures,

not necessarily as

because when we expand the language to


we did want to ensure

M0  M

as

LSk ,

LSk -structures.

LSk -structures,

new Skolem functions for the new formulas, getting

L-structures

However an inclusion of
guage

LSk

This is

we create new formulas. If


we would need to add

LSk,Sk ,

etc.

is what we want; the expanded lan-

is a technical device we can forget about.

The construction is highly non-canonical (it relies on the axiom of choice).


In general, there is no smallest elementary substructure containing

A:

everything depends on the way we interpreted the Skolem functions, which


is arbitrary. One should not use the notation

hAi

for Skolem hulls.

As a nal interesting word, we prove inconsistency of set theory.

Corollary 1'.3.11

(Skolem's paradox)

(If set theory is satisable) there is a

countable model of set theory.

Proof .

Write down ZFC as a rst-order theory in the language

{}.

Start with

a model and apply Theorem 1'.3.9.


However in this model

U0

of set theory we may construct

and

P (N),

which is notoriously uncountable, and yet a subset of the countable underlying


universe. . .
[Catch: countable does not bear the same meaning in the rst model, say

U , and the countable one U0 .

All that we have proved is that

it does not see (i.e., perceive as a set) a bijection between

U0 is so small that
P (N) and N.]

1'.3.3 The General Version and the o-Vaught Criterion


We now try to go up; that is, provide very large elementary extensions of
existing models.

Theorem 1'.3.12
nite
that

(ascending Lwenheim-Skolem theorem)

L-structure and Card M


Card M .

90

M be an inM  M such

Let

be a cardinal. Then there is

Proof .

C 0 = {ci : i < } be new constants, and consider the language


C 0 . Form the L0 -theory T 0 = Th(M, M ) {ci 6= cj : i 6= j}. This

Let

L = LM

theory is consistent: a nite fragment mentions a nite number of properties

(m), which are


ci 6= cj , meaning
in M again.

trivially satisable in
that there are at least

M, and a nite number of formulas


n distinct elements: this is satisable

T 0 is satisable in M; by compactness, there is a

a model M of T (notice that M is not likely to be M). As M |= Th(M, M ),

one has M  M ; on the other hand, the constants of C say that M has at
least elements.
Hence any nite fragment of

Combining Theorems 1'.3.9 and 1'.3.12, one derives a sharper version.

Theorem 1'.3.13 (Lwenheim-Skolem theorem).


A M . Let be a cardinal
0
0
that A M , Card M = ,

Proof .

L.

Proof .

and

Let M be an L-structure and


Card A + Card L . Then there is M0 such
M0  M or M0  M.

Go up (Theorem 1'.3.12), then down (Theorem 1'.3.9).

Corollary 1'.3.14
guage

with

If there is

Let

(o-Vaught test)

Card L

M, N |= T .

Let

is

such that

be a rst-order theory in a lan-

-categorical,

then

is complete.

M N.

We aim at showing

M0 and N 0
N and N 0 . But

By the Lwenheim-Skolem theorem (Theorem 1'.3.13), there are

such that M  M or M  M , and similarly for


M M0 and N N 0 . In particular, M0 , N 0 |= T .
0
0
As M and N are models of T of cardinal , they are isomorphic
0
0
0
0
goricity; it follows M N . As a conclusion, M M N N .

of cardinal

in any case

Example 1'.3.15.

by cate-

We shall prove later (Theorem 1'.4.4) that DLO is

0 -

categorical. In particular it will be complete!


End of Lecture 22.
Lecture 23 (Back-and-Forth Methods)

1'.4 Back-and-Forth Methods


It is not very common in practice to have an isomorphism; a very good reason is that two structures may have the same theory without having the same
cardinality! So one should aim at weaker correspondences.
And precisely, we shall work with very partial functions, functions which are
dened only on a nite set. But these turn out to be enough (as our formulas are
nite too!) Model theory has a wide variety of so-called back-and-forth methods
which enable to inductively extend partial functions. One should bear in mind
the image of a bootstrap.
Disclaimer: back-and-forth methods are easier to work out in the case of
relational languages. With unary functions they are still manageable; if there

91

is a binary function around, it is not very likely that one will understand any
kind of back-and-forth.

1'.4.1 Dense Linear Orderings


We start with a key example.

Denition 1'.4.1

(DLO)

A dense linear ordering (with no endpoints be-

ing always implicit) is a model of the following theory in the language

{<}

of

orderings:

x (x < x)
xy (x < y x = y y < x)
xyz (x < y y < z x < z)
xyz x < y (x < z z < x)
xyz y < x x < z
DLO denotes either this theory, or a model of the theory (this is harmless as
the theory will turn out to be complete).

Remark 1'.4.2.

The rst three axioms express that

<

is a linear ordering; the

fourth (density) says that between any two points there is another point; the
fth (no endpoints) states that there is no least, nor largest element). In spite

of the name, one should never forget the last one!

Example 1'.4.3.

Recall that given two orderings

the extension of the orderings such that any element of


of

( B is pasted after

A).

(Q, <) is a countable DLO.


`
`
`
(Q < Q, <) and (Q < {} < Q, <)
(R, <)

B, A

< B denotes
A lies below any element

and

are other countable DLO's.

is a DLO of cardinality continuum.

A close inspection will reveal that

<

and

are actually isomorphic.

Back-and-forth methods start here.

Theorem 1'.4.4
Proof .

Let

(Cantor)

M, N

Any two countable DLO's are isomorphic.

be countable DLO's. We shall inductively construct an iso-

morphism. To this extent, we enumerate the underlying sets


and

N = {nk : k N}.

M = {mk : k N}

We want to construct an isomorphism; it suces to

construct an order-preserving function

dened on all of

M , and which is onto.

This will be done inductively by considering two kinds of steps:

at odd stages, we shall make sure that

92

is dened everywhere on

M;

at even stages, we shall make sure that

reaches every element of

N.

We thus want to nd an increasing sequence of partial maps which preserve

0 (m0 ) = n0 .
n0 .

the ordering. Let us start by dening


can say is

m0 = m0 ,

So far, so good, as all we

which certainly holds of

2k is a partial map with nite support, and


a = dom 2k M and b = im 2k N satisfy the same relations.
(You may think that 2k preserves the conguration of the tuple, that
it is a partial L-morphism in the sense of Denition 1'.2.2.) We want to
extend 2k to an order-preserving map 2k+1 with mk+1 dom 2k+1 .
(stage 2k+1) Suppose that

such that

mk+1
mk+1

is in a certain conguration with respect to the nite tuple


lies above any element of

there is certainly
hand,

mk+1

a,

which lies above any element of

lies between two elements of the tuple

the same pattern can be reproduced in


that the extended tuples

We let

2k+1

(stage

2k + 2)

extend

2k

a, mk+1
by

Suppose that

a.

then as there are no endpoints in

and

N.
b,

a,

b.

If

N,

If on the other

then as

In any case there is

N is dense,
N such

still satisfy the same relations.

2k+1 (mk+1 ) = .
2k+1

has been constructed as a nite partial

a0 = dom 2k+1 and b0 = im 2k+1 .


way that nk+1 will be in the image.

map which preserves relation between


We aim at extending

2k+1

in such a

And we argue exactly like above, getting in any case an element


such that
We let

behaves with respect to

a0

exactly the way

M
b0 .

does for

2k+2 () = nk+1 .

At the limit, we let

= kN k .

This is still an order-preserving bijection.

Because of our construction at odd stages, every element of

dom = M . Because of our construction


that : M N is an isomorphism.

domain, so
It follows

nk+1

will appear in the

at even stages,

im = N .

Bearing in mind Denition 1'.2.14, one sees that Theorem 1'.4.4 says that
DLO is

0 -categorical.

As a consequence, one can deduce from Corollary 1'.3.14

that DLO is complete.

Remark 1'.4.5.

This superb proof relies on two essential ideas.

If you can always extend a partial function, then at the limit something
nice will happen. Proposition 1'.4.14 below will generalize this idea.
It is however dicult to understand at rst in what measure exactly countability is needed. Many constructions in model theory rely on induction

beyond

But here we know how to deal only with nite congurations,

and this is why the limiting process can't go further than the rst innite. For instance, there are two non-isomorphic DLO's of cardinality

(Counter-example 1'.4.15 below).

Above all, the proof relies on this bootstrap construction, arguing that
any nite conguration on one hand can be reected on the other hand.
This is worth a general denition (Denition 1'.4.10 hereafter).

93

1'.4.2 -isomorphisms
Denition 1'.4.6

(local isomorphism,

A local isomorphism, or
function

M N

0-isomorphism).

0-isomorphism, : M N

tions, and relations. Hence

is a local isomorphism i

satisfy the same atomic formulas (in

Two tuples

aM

and

bN

Two structures

M, N

are locally, or

if there is a local isomorphism

is an injective partial

with nite domain which preserves the constants, func-

mapping

0-isomorphic,
to b.

N are locally, or 0-isomorphic,


7 is a 0-isomorphism.

and

if the empty function

dom

and

im

respectively).
written

written

a '0 b,

M '0 N ,

Local is of course, opposed to global: we consider only functions with


nite support.

So a local isomorphism is a small fragment of a morphism.

When one wants to insist on the poor quality of

, one calls it a 0-isomorphism:


0. This is very far from being

it preserves only formulas of quantication rank


elementary!

Remark 1'.4.7.
is always

A beginner's mistake is to think that the empty tuple of

0-isomorphic

to the empty tuple from

N.

there might be constants in the language. For instance, the ring

1 + 1 = 0,

but the ring

Z/3Z

doesn't.

This is not the case, as

Z/2Z

saties

So even the empty function is not

compatible with the interpretation!


A little thinking will reveal that

M '0 N

i

Th0 (M) = Th0 (N ) (quantier-

free theories, no parameters: this boils down to atomic formulas using terms
built from constants).

Example 1'.4.8.
`
(Z, <) and`(Z < Z, <); we denote by 0 the 0 of
Z; there are two copies of 0 in Z < Z, which we denote respectively by
01 and 02 . Clearly '0 . Clearly too, 0 '0 01 . Also, (0, 2) '0 (01 , 02 )
(as 0 < 2 and 01 < 02 ).
`
Consider the successor function and the structures (Z, s) and (Z Z, s).
Clearly '0 and 0 '0 01 . But it is no longer the case that (0, 2) '0
(01 , 02 ). Indeed, M |= 2 = s(s(0)), whereas N |= 02 6= s(s(01 )).

Consider the orderings

Example 1'.4.9.

A moment's thought and one realizes that in the language of

orderings, local isomorphisms are exactly order-preserving bijections with nite


support.
We can now give the key denition. Bear in mind the image of a bootstrap.

Denition 1'.4.10 (-isomorphism,

two

L-structures. M

and

is a

non-empty family

Karp family). Let M and N be


-isomorphic, written M ' N , if there
isomorphisms M N such that:

are

of local

94

forth:

for all

and

M , there is F

back:

for all

and

M,

there is

extending

such that dom

extending

is called a Karp family, and its elements are called

such that

im

-isomorphisms.

Remark 1'.4.11.

If there is a Karp family, there is a largest one (take the union of all).

M'N

implies

M ' N

(take all restrictions of a global isomorphism),

but the converse need not hold in general: perhaps

and

don't even

have the same cardinality (Counter-example 1'.4.13 below).

Proposition 1'.4.12.

Any two DLO's are

-isomorphic.

More specically,

the family of local isomorphisms is a Karp family.

Proof .

Local isomorphisms between orderings are just order-preserving, nite

support bijections (which do exist: the family is not empty!). The back-andforth property is the core of the proof of Theorem 1'.4.4.
We emphasize that

-isomorphic

Counter-example 1'.4.13.
-isomorphic,

are

is much weaker than isomorphic.

Consider the orderings

but of course

(Q, <)

and

(R, <).

They

Q 6' R.

There is however a partial correction.

Proposition 1'.4.14.
Proof .

If

M ' N

and both are countable, then

M ' N.

This is an abstract version of the bootstrap construction used in The-

orem 1'.4.4.
Propositions 1'.4.12 and 1'.4.14 together yield the proof of Theorem 1'.4.4.

Counter-example 1'.4.15.

The notions of

-isomorphism

and of a Karp

family (Denition 1'.4.10) are about arbitrarily large nite congurations; one
should therefore not expect to generalize Proposition 1'.4.14 past
and

Consider an ordered sum


and

0 ,

even if

have the same cardinality. Here is a counter-example.

N =

1 (Q {}) (Q

M=

(copies of

copied after each other),

followed by a point). Both are dense linear order-

ings without endpoints, of cardinality

1 .

They are

-isomorphic

(Proposition

1'.4.12), and yet they are not isomorphic. The latter is a non-trivial exercise.
Back-and forth arguments are especially powerful in
(dened below, Denition 1'.5.12).
algebraically closed elds are

-saturated

models

For instance, it is not true that any two

-isomorphic,

but any two

-saturated

models

(dened below, 1'.5.2) are. These are algebraically closed elds of innite transcendence degree; you may already try to establish a back-and-forth between
them. This will be done in 1'.5.3.

95

1'.4.3 Finitary Back-and-Forth*


We build on local isomorphisms to nd weaker versions of back-and-forth.

Denition 1'.4.16 (n + 1-isomorphism).

A local isomorphism

:ab

(n + 1)-isomorphism

is an

if:

for all

there is

and an

n-isomorphism : a 'n b

extending

for all

there is

and an

n-isomorphism : a 'n b

extending

a and b are (n + 1)-isomorphic, written a 'n+1 b, if there exists an (n + 1)isomorphism between them.

Two structures

the empty function

Remark 1'.4.17.

N
7

and

(n + 1)-isomorphic, written M 'n+1 N ,


(n + 1)-isomorphism.

if

Notice that this is an inductive denition, as opposed to

-isomorphism

the notion of

are

is an

(Denition 1'.4.10), which is dened for a class,

simultaneously.
Here begin the funny games: given two structures, how isomorphic are they?

Remark 1'.4.18.

Back-and-forth may be viewed as a game opposing two play-

ers, having two dierent boards

and

N.

A round consists of the following

two steps:

player

puts a pawn on either board;

player

puts a pawn on the other board, in order to keep both congura-

tions similar.

M and N

are

n-isomorphic if (a clever) player 2 is sure to last at least n rounds.

Example 1'.4.19.

Consider

Consider Example 1'.4.8 again.

M = (Z, s)

and

N = (Z

Z, s). They are 0-isomorphic (as the


1-isomorphic, as picking an element

language has no constants!). They are

in one structure can certainly be reected on the other one.

2-isomorphic. Consider the element 01 N ; it will


M by some element which we may assume to be 0; 0 '0 01 .
But now player 1 chooses 02 , which cannot be reected in M, as there is
only one orbit of s in M, and 01 and 02 lie in distinct orbits of N .

They are however not


be reected in

Interestingly, these structures turn out to be elementary equivalent (this


might be a little unclear yet but 1'.5.2 and 1'.5.3 will shed some light).

< Z, <). Let k N; I say that Z and Z


< Z are
k -isomorphic. For clarity, let us call Z the Z which comes alones; and let

Consider

Z1

and

(Z, <) and (Z

Z2

be the left and right

Z's

96

of the pasted structure.

Player
player
player
picks a
Player

2
2
2

will identify

Z1 : whenever player 1 makes a move in Z,


Z1 ; whenever player 1 makes a move in Z1 ,
does the same in Z. Now, when player 1 plays in Z2 , player 1
k
similar conguration in very big elements of Z, of size around 2 .

will win, but it will take him at least

with

does the same in

rounds!

There is a name for the second phenomenon of Example 1'.4.19.

Denition 1'.4.20 ( -isomorphism).


:ab

is an

-isomorphic

et

are

-isomorphism

Two structures are

if for all

if there is an

-isomorphic

if

it is an

n-isomorphism.

-isomorphism
is an

between them.

-isomorphism.

Remark 1'.4.21.

A beginner's mistake:

denotes the rst innite ordinal, which is very

-isomorphism is way weaker than


-isomorphism;
here is why. In Example 1'.4.19 we have seen that (Z, <)
`
and (Z
< Z, <) are -isomorphic. However they are not -isomorphic,
small for set theory.

In particular,

as otherwise, being countable, they would be isomorphic by Proposition


1'.4.14, which is clearly not the case.

-isomorphisms

are a little subtle. They mean that player

strategy that will keep him playing for at least

the strategy might depend on

always has a

rounds, but the choice of

(typically Example 1'.4.19). If there were

a uniform strategy, which does not depend on how long player


keep playing, one would have

-isomorphisms
guage.

-isomorphism

tries to

(Denition 1'.4.10 above).

are especially interesting because of our notion of a lan-

Going down through the recursive denition of satisfaction, one sees

that our formulas are always about nite tuples, of arbitrary length.

Proposition 1'.4.22.

If

M ' N ,

then

M N.

In order to prove this proposition, let us introduce useful terminology.

Denition 1'.4.23

(type of a tuple)

Let

tpM (a) = {(x) : M |= (a)}.


the superscript

L-structure

and

The type of

be an

be a tuple (abusing, as usual, Cartesian powers).

a M
M is

in

When there is no risk of confusion, one drops

Bearing in mind the denition of the quantication rank of a formula (Denition 1'.0.8), one can even dene

Lemma 1'.4.24.
M, N

Let

respectively. If

tpM
n (a) = {(x) : qrk() n

M, N be L-structures and a, b
a 'n b, then tpn (a) = tpn (b).

97

and

M |= (a)}.

be tuples extracted from

Proof .
b,

then

By induction. First notice that by denition (Denition 1'.4.6), if

and

a '0

satisfy the same relations, hence the same atomic formulas; the

claim is proved.

n; we show it for n + 1. So let


n + 1-isomorphic tuples. We show tpn+1 (a) = tpn+1 (b). Let
be a formula of tpn+1 (a). We show that N |= (b).
A quick induction on the structure of reveals that the only interesting case
is that of (a) being x (a, x), for a formula now of quantication rank n.
We have assumed M |= (a): so there exists M such that M |= (a, ).
In particular, tpn (a, ). As a and b are n + 1-isomorphic, there is by
denition (Denition 1'.4.16) N such that a, 'n b, . By induction,
tpn (b, ); that is, N |= (b, ). It follows that N |= (b): we are done.
Suppose that the property is known for

a 'n+1 b

be two

Remark 1'.4.25.

We insist that if

there is a Karp family of extensions of


and

and b are locally isomorphic, and that


a '0 b (one may then write a ' b), a

satisfy the same formulas with quantiers.

Proof of Proposition 1'.4.22.

Suppose M ' N ; we show M N . Our


n, M 'n N . By Lemma 1'.4.24, tpM
n () =
M
()
=
Th
(M)
(the
theory
of
M
restricted
to
formulas
of
()
.
But
tp
tpN
n
n
n
qrk n). As this is true for any n, one gets Th(M) = Th(N ); so M N .

assumption means that for each

Proposition 1'.4.22 does not have a converse, because when

n > 0,

Lemma

1'.4.24 doesn't.

Counter-example 1'.4.26.
Then

and

they are not

01 clearly satisfy
1-isomorphic.

Remark 1'.4.27.

`
(Z, s) and (Z Z, s) again.
of qrk 1, but as observed,

Consider the case of


the same formulas

The choice of Counter-example 1'.4.26 is typical. Actually, in

a language consisting of nitely many relations, the converse to Lemma 1'.4.24


does hold, and in particular

M ' N

i

M N!

We sum up the isomorphism strength as follows (n

'

'

'

< m)

'm

'n

Counter-example 1'.4.28.
(R, <) ' (Q, <) but they are not isomorphic (Counter-example 1'.4.13)
`
(Z, <) ' (Z < Z, <) but they are not -isomorphic (Remark 1'.4.21).
`
(Z, s) (Z Z, s) but they are not -isomorphic (Example 1'.4.19).
`
(Z, s) '1 (Z Z, s) but they are not 2-isomorphic (Example 1'.4.19).
In special cases, converses do hold:

98

If

M ' N

There always is an ordinal such that

In a nite, purely relational language,

are countable, then

M'N

(Proposition 1'.4.14).

M ' N
MN

implies

M ' N .

does imply

N ' N .

End of Lecture 23.


Lecture 24 (Elimination Theorem)

1'.5 Quantier Elimination and -Saturation


Sometimes, every formula (with free variables!) boils down to a formula with
no quantiers.
that

This was typically the case when in high school, you realized

x ax2 + bx + c = 0

i

b2 4ac 0

(over

Th(R)

as an ordered ring). We

give a name to this general phenomenon.

Denition 1'.5.1

(quantier elimination)

quantiers (has QE) if for any formula

(x)

A rst-order theory

eliminates

there is a quantier-free formula

T |= x (x) (x).

such that

(x) and (x) are


L {x}, T {(x)} |= (x)

(One says that


having, in

(x)

T ; this
T {(x)} |= (x).)

equivalent modulo
and

amounts to

Example 1'.5.2.

R as an ordered ring, the formula x ax2 + bx + c = 0 is equivalent to


(b 4ac 0) (a = 0 (b 6= 0 c = 0)). Actually the theory RCF of R
In

as an ordered ring eliminates quantiers.

C as a ring, the formula x an xn + + a1 x + a0 is equivalent to


(ni=1 ai = 0 a0 6= 0) (this is the D'Alembert-Gau theorem). Actually
the theory ACF0 of C as a ring eliminates quantiers.
In

We shall give a semantic criterion for a theory to admit QE (Corollary 1'.5.6


below). As it can be stated in a more general form, we extend the denition to
arbitrary elimination.

1'.5.1 Elimination Sets


Denition 1'.5.3
formulas.
modulo

formulas. Then

and all tuples


if

Let

and

(elimination theorem)

Let E be a set of
of L is equivalent
E.

be a theory.

if every formula

to some Boolean combination of formulas from

Theorem 1'.5.4
of

(elimination set)

is an elimination set modulo

Let

be a theory and

E a set of
M and N

is an elimination set if and only if, for all models

and

extracted from

and

satisfy the same formulas from

same formulas.

99

E,

respectively,

then they satisfy the

Proof .
E

that

One implication is trivial. For the other one, we may clearly assume
is closed under Boolean operations.

tradiction.) Let

(x)

be an

L-formula.
E.

(modulo

T)

Step 1.

There is a nite subset

to a formula of

} {(c) (d)}
and

such that the theory

E contains a con(x) is equivalent

T {(c) (d) :

is inconsistent.

c, d be tuples
L = L {c, d}. Let

Verication: Let
0

x)

(In particular,

We shall prove that

of new constant symbols (of the same length as

T 0 = T {(c) (d)} {(c) (d) : E}


The

L0 -theory T

is not consistent; supppose it is. Then there is a model

with interpretations
same formulas from

m = c and n = d
E , but disagree on :

Notice that

and

M |= T 0

satisfy the

against the assumption. So

T0

is not

consistent. By compactness, there is a nite, inconsistent fragment.

Step 2.

For every

M |= T ,

there is a formula

M (x) E

equivalent to

(x).

M |= T , and X = {m M : M |= (m)} (the subset of


). If X is empty, then bearing in mind that E contains a
contradiction, we are done. So we may suppose that X 6= .
For each m X , let m = { : M |= (m)} and m = \ m . Let
V

m = m
m (hence m describes exactly which formulas of a
tuple m X satises).
Notice that each m is in E , and that only nitely many m 's are possible
by niteness of (even though X might be innite). Let M = X m : this
should be another denition of X ! Notice that M E . We now claim that
M |= x ((x) M (x)).
Let n M . If M |= (n), then by denition n X ; hence M |= n (n)
and M |= M (n). On the other hand, assume M |= M (n). By construction
of M , there is m X such that M |= m (n); by construction of m , one has
M |= m (m). It follows that m and n satisfy exactly the same formulas from
. By Step 1, m and n must agree on too. As m X , one has M |= (m);
it follows M |= (n).

Verication: Let
n

dened by

The issue is that, so far, the formula


theory

is equivalent to

only in M (if the

were complete we would however be done). We solve this problem.

Step 3.

Let M |= T . There is a
T {M } |= x ((x) M (x)).
Verication: Let

sentence

M = { E : M |= }

M E

(this is

true in

E Th(M))

and such that

and

T 0 = { : M } { : 6 M } {x ((x) M (x))}
If

T0

N of T 0 . The empty tuple


E as M ; in particular, by

were consistent, we would have a model

satises exactly the same formulas from

100

satises the
M |= x ((x) M (x)),

M and N , that is M N . Yet


T 0 is not consistent.
By compactness, there is a nite fragment of M , which we can take to be
single sentence M , such that T {M } |= x ((x) M (x)).

assumption,

Step 4.

Finitely many

same formulas in

a contradiction. So

M 's

suce for all models of

T 0 = T {M : M |= T }

Verication: Let

T.

(this makes sense: the collection

M 's is a set alright, as


L-sentences). If T 0 were consistent, we would have
none of the M 's holds, which is impossible!

of models is a proper class, but the collection of

subcollection of the set of

model

N |= T

in which

By compactness,
nite set

Step 5.

There is

of

M 's

Verication: Let

T0

is nitely inconsistent, and this means that there is a

such that

(x) E

T |= M .

such that

T |= x (x) (x).

= {M : M |= T }; we may assume that is nite.


_
(x) =
(M M (x))

Consider

E . Now let M |= T , and m M : we show M |= (m) (m).


M |= (m). By construction, M |= M . In particular, M |=
x ((x) M (x)), whence M |= (m) M (m), and M |= M (m),
whence M |= (m).
Suppose now M |= (m). Then there is a model N such that M |= N
N (m). As T {N } |= (c) N (c), we see that M |= (m).

Notice that

Suppose

This concludes the proof of Theorem 1'.5.4.

Remark 1'.5.5.

This proof is a little long and clumsy.

We have just been

covering a normal, compact space; a topological argument is possible, short,


and elegant, but it requires introducing spaces of types, which is a little beyond
the scope of a rst introduction.

Corollary 1'.5.6.
els

and

if

of

and

A theory

eliminates quantiers if and only if, for all mod-

and all tuples

and

extracted from

and

respectively,

satisfy the same atomic formulas, then they satisfy the

same formulas.
The question is now: how do we prove that two tuples do satisfy the same
formulas? In practice, this often results from a back-and-forth construction.

Corollary 1'.5.7.

Let T be a rst-order theory such that


M, N |= T , local isomorphisms between M and N form a
T eliminates quantiers.

101

for any two models


Karp family. Then

Proof .

We use Corollary 1'.5.6.

Let

m, n

be drawn from

M, N

respectively

and satisfy the same quantier-free formulas; we show that they satisfy the
same formulas. As

and

satisfy the same quantier-free formulas, they are

locally isomorphic. By assumption, they are

tpM (m) = tpN (n) (see Remark


formulas; T eliminates quantiers.

it follows
same

-isomorphic.
m

1'.4.25). Hence

So

m ' n, and
n satisfy the

and

We shall later generalize Corollary 1'.5.7 to a weaker assumption (Theorem


1'.5.19 below).

Remark 1'.5.8.
T,

QE is strongly language-dependent; actually for any

there is a language

L L0

and an

L0 -theory T T 0

such that

T0

L-theory

eliminates

quantiers.

Verication: We rst get rid of

y (x, y) we introduce a new


x (y (x, y)) R (x).

's,

replacing them by

relation symbol

Of course the resulting theory

T1

R (x)

in the resulting language

eliminate quantiers, as we have created new formulas.


construction.

T = n Tn

After

For any formula

and we add to the theory

L0 = n Ln

steps, any formula of

to a quantier-free formula of

L1

need not

But we repeat the


is equivalent modulo

L.

One can actually prove compactness that way, reducing to compactness of


propositional logic.
End of Lecture 24.
Lecture 25 ( -Saturated Models)

1'.5.2 -Saturated Models


Theorem 1'.5.4 suggests a method to prove quantier elimination: just show that
two locally isomorphic tuples satisfy the same formulas. This can be done by
establishing

-back-and-forth ( -back-and-forth

more subtle:

hope for the strongest.)

is weaker and sucient but

So let us go back to back-and-forth

constructions.
For instance, if one tries to show that
should start with algebraically closed elds
isomorphic tuples

Th(C) eliminates quantiers, one


K, L; one would take two locally

a '0 b, and try to check that we can always extend. This


K has an element which is transcendental over a, but L is

fails if, for instance,


algebraic over

b.

The conclusion of this algebraic discussion is that some models are richer
than others, and that presumably back-and-forth is easier to manage in rich
models.

A formal notion of wealth is

-saturation

(Denition 1'.5.12 below).

This requires the following denition, which generalizes the notion of a system

n variables.
x = (x1 , . . . , xn ) of

of equations in

To understand it, x an integer

variables

length

n.

102

and a tuple of

Denition 1'.5.9 (type). Let L be a rst-order language, M an L-structure,


A M a set of parameters. An n-type over A is a collection p(x) of LA -formulas
(x, a) (a A) which is consistent with Th(M, A).
Treating the variables as constants, one sees that an
theory in

LA (x)

extending

n-type

is merely a

Th(M, A).

This generalizes the case of the type of a tuple (Denition 1'.4.23), which is
called realized. But any type is the type of a tuple which hasn't appeared yet:
every type is realized somewhere.

Remark 1'.5.10.
n N such
of M.)

that

If

p(x)

realizes

is a type over

p.

(Of course

A M , then
N will tend to

there is

N  M

and

be a proper extension

Types are amazingly important and interesting (they come with nice topological methods), and students should refrain the teacher from going to deep
into this matter.

Example 1'.5.11.

If

A M and b M ,
n-type over A.

then

tp(b/A) = {(x, a) : LA : M |= (b, a)}

is

an

Consider

(N, <).

is not realized in

Then

N,

{x > n : n N}

1-type

over

N.

Notice that it

When we constructed the non-standard reals in 1.6.3, we simply constructed a

1-type

over

R,

namely

went to an elementary extension

is a

but in some elementary extension.

Consider

p(x) = {0 < x < r : r R>0 },


R  R satisfying it.

as a ring, and study maximal

1-types

over

Z.

and then

There are two

kinds:

algebraic types, types which contain a formula


nomial

with coecients in

P (x) = 0

for a poly-

Z.

the transcendental type, which contains

{P (x) 6= 0 : P Z[X]}.

We build on the previous example to (2-)types of pairs of complex numbers

p(x, y)

over

Z.

There are four kinds:

both x and y are algebraic over Z: the type boils


0, P2 (y) = 0} for some polynomials P1 and P2 .

down to

{P1 (x) =

 x

is algebraic over Z, but y is transcendental: the type reduces to


{P1 (x) = 0} {P (y) 6= 0 : P Z[X]} for a certain polynomial P1 .

 x is transcendental, y is algebraic: p(x, y) = {P (x) 6= 0 : P Z[X]}


{P2 (y) = 0}

 x

P2 .

(x, y) is algebraically de{P (x) 6= 0 : P Z[X]} {P (y) 6= 0 : P


Z[X]} {Q1 (x, y) = 0} for some Q1 Z[X, Y ].
and

for some

are transcendental but the family

pendent: the type contains

103

The family (x, y) is algebraically independent:


{Q(x, y) 6= 0 : Q Z[X, Y }. This is the type of
dence degree 2.

the type is

p(x, y) =

any pair of transcen-

One measures richness of a structure by the amount of types it realizes.

Denition 1'.5.12 ( -saturated).

An

L-structure M
M.

is

-saturated

if any

1-

type over nitely many parameters is realized in

Example 1'.5.13.

A nite structure is always

Any DLO is

-saturated,

but this is a trivial example!

-saturated.

(N, s) is not -saturated. Let n (x) be the formula x 6= sn (0). Then the
1-type p(x) = {n (x) : n N} uses 0 as its only parameter, and is not
realized in N.
(N, <) is not -saturated. Let n (x) be the formula x1 . . . xn x1 <
< xn < x. Then the 1-type {n (x) : n N} uses no parameters and is
not realized in N.

Q is not -saturated. Indeed, any element of Z can be written as


1's. So the transcendental type p = {P (x) 6= 0 : P (X) Z[X]}
parameters. It is however not realized in Q.

The ring

a sum of
uses no

Theorem 1'.5.14.
tary extension

Proof .

Let M be an innite L-structure.


M  M which is -saturated.

Then there is an elemen-

This proof relies on ordinal induction, and may be omitted freely; the

result may not.

M , and all 1-types over a,


perhaps bigger than ) of all 1types over all nite tuples of M : {pi (x) : i < }. The reader not familiar with
ordinals ought to suppose = for simplicity; in this case the construction is
We enumerate all nite tuples

of elements of

nding an enumeration (by some ordinal

very understandable.

pi (x) is a 1-type over Mi , so it is realized in some


i
elementary extension M
of M (a quick look at Remark 1'.5.10 if necessary).

Take unions at limit ordinals (this is required only if > ). Let M1 = M .


Let

M0 = M.

For each i,

i+1

As we have taken the increasing union of an elementary chain, Lemma 1'.1.16

M  M1 . By construction,
M1 .
M1 need not be -saturated, as we

says that

all

1-types

over nite tuples of

are

realized in

have added new elements, so we might

have created new types. But we resume the construction, getting


countably many steps, the union
sion of

is

-saturated,

by Lemma 1'.1.16 again.

104

M2 .

After

and an elementary exten-

Remark 1'.5.15.
mentary extension

One has no control on the cardinality of an

M ,

-saturated eleM. Indeed,

even starting with a countable base model

at each step the number of types might be huge.


For instance, if one consider

Z as a ring, then there are continuum many


1-types: for each subset I of the primes, consider pI = {x 6= 0}
{y x = p.y : p I} {y x 6= p.y : p 6 I} which expresses that the only primes
dividing x are the elements of I .
So there is no countable, -saturated elementary extension of the ring Z.
dierent

Dealing with

1-types

or

n-types

is the same, as a quick induction reveals.

Lemma 1'.5.16. M is -saturated i any n-type over nitely many parameters


is realized in

Proof .

M.

n-type with nitely many parameters is realp(x, y) be an n + 1-type with nitely many parameters a (x
th
still stands for (x1 , . . . , xn ); the n+1
variable is y ).
Let q(x) = {y (x, y, a) : (x, y, a) p}. This is clearly an n-type over a;
by induction, it is realized by an n-uple b M .
We now let r(y) = {(b, y, a) : (x, y, a) p}. This is clearly a 1-type over
b, a, which is again a nite tuple. So r is realized in M: there is c M realizing
it. It is clear that b, c realizes p.
ized in

Induction. Suppose any

M,

and let

Remark 1'.5.17.

Indeed, even though


Again, why are

p,
s(y) = {x (x, y, a) : (x, y, a) p}.
of s, (a, c) has no reason to realize p!

Observe that one couldn't have taken both projections of

constructing and realizing rst

q,

then

is a realization

-saturated models so important?

Because when one tries to

check completeness of a theory, one may focus on them, which are presumably
nicer than the general models.

Here is a generalization of Lemma 1'.1.6 to

weaker assumptions.

Lemma 1'.5.18. T

is complete i any two

-saturated

models are elementary

equivalent.

Proof .

Let

M, N

be models of

T;

we show that they are elementarily equiva-

M  M, N  N which are -saturated.

As M M and N N , M and N are models of T . By assumption, they

are elementary equivalent. It follows M M N N ; we are done.


lent. By Theorem 1'.5.14, there are

Here is our generalization of Corollary 1'.5.7.

Theorem 1'.5.19.
models
Then

Proof .
let

M, N |= T ,

Let

be a rst-order theory such that for any

local isomorphisms between

and

-saturated

form a Karp family.

eliminates quantiers.
We use the elimination theorem, Theorem 1'.5.4. Let

m M, n N

M, N |= T ;

we

satisfy same quantier-free formulas; we want to show that

they satisfy the same formulas.

105

-saturated. By Theorem 1'.5.14,


M  M and N  N . As m

satises the same formulas in M and M , it actually suces to work in M

and N , that is we may assume M and N are -saturated.


As m and n satisfy the same quantier-free formulas, they are locally isomorphic, so by assumption they are -isomorphic. They must therefore satisfy
First, we may assume that

there are

-saturated

and

are

elementary extensions

the same formulas.


The conclusion of this abstract discussion is that

-saturated

elementary

extensions always exist, and form a good ground on which to study a theory
(the details are not too important).
End of Lecture 25.
Lecture 26 (Examples)

1'.5.3 Examples
We eventually shed light on all notions by studying examples.

DLO's
The theory of dense linear ordering has already been dened and studied in
1'.4.1. We put the existing pieces together.

Proposition 1'.5.20

(DLO's)

quantiers. The theory is


model is

Proof .

-saturated

DLO is complete and decidable; it eliminates

0 -categorical

but not

and local isomorphisms are

1 -categorical. Moreover,
-isomorphisms.

any

We know everything, but here is how a study could go. Past the two

rst, many steps are permutable.


(i). The rst step is to realize that any DLO is indeed
(ii). Then, prove that any local isomorphism is an

-saturated.

-isomorphism.

is a local isomorphism.
-isomorphic.

(iii). Moreover, as there are no constants,


from (ii) that any two models are

It follows

(iv). As a consequence of (iii) and with Proposition 1'.4.14, any two countable
models are isomorphic; the theory is

0 -categorical.

(v). One may uses the o-Vaught criterion (Corollary 1'.3.14 to argue that
the theory is complete.
(vi). It would be as simple to say that any two models are elementary equivalent
by (iii); so the theory is complete, again.
(vii). As the theory is clearly eectively presented (nitely many axioms!), and
complete, it is decidable (Corollary 1.6.3).

106

(viii). Turning back to categoricity questions, one would see that the theory is
not

1 -categorical

(Counter-example 1'.4.15).

It follows from Morley's

-categorical

theorem (Theorem 1'.2.16) that it is not

in any

1 .

(ix). It follows from (ii) that DLO eliminates quantiers (Theorem 1'.5.19).

The Successor Function


Proposition 1'.5.21 (N, 0, s).

Let

have the following axioms:

xx0 s(x) = s(x0 ) x = x0 ;


x s(x) 6= 0;
xy x 6= 0 s(y) = x;
x sn (x) 6= x

for each

n N.
It is not 0 embeds elementarily

The theory is complete and decidable; it eliminates quantiers.


categorical but is

-categorical

1 .

in any

Moreover

into every model.

Proof .

This is a new study. We must understand the

-saturated

models. In

order to do this we must develop some understanding of the general model.


(i). A model is a set equipped with a function

s which is onto except on 0, and

which has no cycles. There are therefore two kinds of orbits: the orbit of

0,

which starts and never ends, and other orbits, which have no starting

nor ending point: they are like

(Z, s).

We see that the general model of

has the form

is any set. There is no way to order

Z-like

Mi = N

``

Z,

where

orbits.

(ii). Local isomorphisms are easily understood; one would however fail to establish back-and-forth between any two models.

are not

-isomorphic

It is clear that

and

(being countable they would be isomorphic

by Proposition 1'.4.14).
(iii). In particular, the theory is not

0 -categorical.

On the other hand since

isomorphism classes are determined by the sole set


structure,

is obviously

-categorical

I,

which bears no

in any uncountable

(iv). One may apply the o-Vaught criterion to derive completeness of

(and

its decidability, as it is eectively axiomatized).


However we insist that the sooner one understands the

-saturated models,

the better.

1-type p(x) = {sn (0) 6= x : n N} (no parameters!


` N appears as
indexing set!) is clearly not realized in N, but it is in N
Z. On the other
n
hand, if we x a Z, then the 1-type q(x) = {s (a) 6= x : n N} over a is

(v). The

107

not realized in

Z.

If we want to realize it, we must go to

` ` 0
Z Z.

And so on.
We then understand that a model is
for an innite set

-saturated

i of the form

``

I.
-saturated models do
-saturated
want to nd such that

(vi). One then sees that local isomorphisms between


form a Karp family:
models, and let

a '0 b
side as a.

suppose that

be on the same

are drawn from


We

a, '0 b, .

lies in
= sn (bi ).

If

the orbit of some


As

a '0 b,

If on the other hand

too: this is possible, as

ai ,

say

= sn (ai )

n N,

for

lies in a new orbit, we pick

in a new orbit

is innite!

0 is
-saturated

(vii). Any two models are clearly locally isomorphic (the orbit of
in either model).
are

-isomorphic.

we let

this creates no confusion!

It follows from (vi) that any two

similar
models

It follows that the theory is complete and eliminates

quantiers.
(viii). In particular, any inclusion of structures is elementary. As

embeds into

any model, it embeds elementarily into any model.

The Ordering
Proposition 1'.5.22 (N, 0, <).

Let

be the theory:

x (x < x)
xy (x < y x = y y < x)
xyz x < y y < z x < z
(y 0 y) (xy x y x = 0)
xyz y > x (x < z y z)
(There is an element lying immediately above

x)

xyz x 6= 0 (y < x (z < x z y))


(If

is not

0,

then there is an element lying immediately below

The theory is complete and

embeds elementarily into any model. It does not

admit QE, but it does eliminate quantiers in the language


where

dn

(0, <, {dn : n N}),

is a new binary predicate symbol standing for

dn (x, y) :

x1 . . . xn1 x < x1 < < xn1 < y

(the algebraic distance from

x).

to

is at least

108

n)

Proof .

Things are a little more subtle here.

(N, 0, s): one can dene (by


s
. In particular, any model
` `
MI = N < (I,<) Z, where I is now an ordered set.

(i). A key observation is that this theory interprets


a rst-order formula) the successor function
of

has the form

(ii). One must be very careful that local isomorphisms do not form a Karp
family between

-saturated

models.

Indeed, pairs

(x, x+ )

and

(x, x++ )

are locally isomorphic, but back-and-forth between them is not possible!


We already have a feeling that distance comes into play. But we reserve
language-related issues for the end.
(iii). If one has studied
are

-isomorphic,

-isomorphisms

(1'.4.3), one sees that any two models

so the theory is complete. There is another proof in (v)

but this one is recommended, being simple and elegant.

-saturated, I must be innite. This is however


a in some copy of Z, and consider the 1-type p(x) =
{x1 . . . xn1 a < x1 < xn1 < x : n N} over a. It means that x
is innitely bigger than a, that x lies in a copy of Z lying above that in
which a lives.

(iv). For a model to be


not enough.

Fix

In particular, we understand that for the model to be

-saturated, I

must

have no largest element. And similarly, it must have no least element: it


has no endpoints. This is still not enough.

a < b in I , and consider the 1-type q(x) = {x1 . . . xn1 a < x1 <
xn1 < x : n N} {x1 . . . xn1 x < x1 < xn1 < b : n N} over
(a, b): it expresses that the orbit of x contains neither a nor b, that x lives
in a copy of Z which lies between that of a and that of b. This is density
of the ordering I !

Fix

And we now see that

MI

is

-saturated

i

is a DLO.

(v). One could reprove completeness: the theory is of course not


(neither is it

1 -categorical,

0 -categorical
-

since DLO isn't), but any two countable,

saturated models are isomorphic.


Indeed, let

MI

and

MJ

whence isomorphic.
identifying copies of

be such models.

and

are countable DLO's,

MI ' MJ

by

Z.

In particular, any two countable and


equivalent.

We then construct an isomorphism

-saturated
Let N

This is however not enough.

models are elementary


be any model; by the

Lwenheim-Skolem theorem (Theorem 1'.3.13), it contains a countable

N0 = MI , where I is countable. Now I embeds


I 0 , so N0 embeds elementarily into N1 = MI 0 . It

elementary substructure
into a countable DLO
follows that

N ' N1 ,

and completeness is proved.

This method is however not recommended, as it relies on the existence of


countable,

-saturated

models; one should prefer the proof of (iii).

109

L = {0, <} and L0 = L {dn : n N} be the expanded language.


0
Notice that WFF(L) = WFF(L ); we don't change the expressive strength

(vi). We let

of the language, but merely force certain quantied formulas to now be


quanter-free.

In particular, a model of
description of

-saturation

(vii). One now sees that

L0 -local

is

-saturated

in

i

-saturated

in

L0 :

our

does not change.


isomorphisms between

-saturated

models are

L --isomorphisms.
L0 is immediate.
nds N  MI .

From there, quantier elimination in

L -structures,

an inclusion of

one

As

N MI

is also

Algebraically Closed Fields*


Denition 1'.5.23

(ACF)

An algebraically closed eld is a model of the fol-

lowing theory in the language

the structure is a eld;

for each

n N,

{0, 1, +, , }

the axiom

of rings:

an . . . a0 x an 6= 0 an xn + + a0 = 0.

ACF denotes either this theory, or a model of the theory.

We let

denote the set of prime numbers.

Denition 1'.5.24

For

(ACFq )

p P , an algebraically closed
ACF {1 + + 1 = 0}
| {z }

eld of characteristic

is a model of

the theory

p times

An algebraically closed eld of characteristic

is a model of the theory

ACF {1 + + 1 6= 0 : p P}.

Proposition 1'.5.25. ACF


{0}, ACFq

uncountable

qP
kappa-categorical in any

eliminates quantiers. Moreover, for any

is complete. It is not

0 -categorical,

but is

Proof .

We begin with simple remarks: given algebraically closed elds

one has

K '0 L

-saturated

i they have the same characteristic.

i it has innite transcendence degree over its prime eld.

It is then very easy to see that if

K, L

are

-saturated

ACF's of the same

characteristic, then local isomorphisms form a Karp family. In particular,

L. ACFq

K, L,

Moreover, an ACF is

is complete! Now the characteristic is in the quantier-free type of

any tuple, so

ACF

eliminates quantiers.

As for categoricity,

and

Q(X)

are clearly non-isomorphic; but playing

with transcendence bases, one easily nds that


uncountable

110

ACFq

is

-categorical

in any

Corollary 1'.5.26

(Hilbert's Nullstellensatz). Let K be an algebraically closed


X = (X1 , . . . , Xn ) a tuple of indeterminates, and I C K[X] an ideal of
K[X]. Then I has a solution in Kn .

eld,

Proof .
ity,

We may assume that

is a maximal ideal

m; notice that by Noetherian-

is nitely generated. In particular, existence of a solution is a rst-order

formula with parameters in


We let

both are models of

K  L.

K.

be the algebraic closure of the eld

ACF

K[X]/m;

clearly

K L.

As

and the theory eliminates quantiers, one actually has

Now there is a solution to

m in Ln

(the class modulo

since this is rst-order, there is a solution in

Kn

m of the tuple X );

too.

More interesting results hold, but one unfortunately has to stop some day.
End of Lecture 26.

111

Index of Notions
Inclusion . . . . . . . . . see Inclusion

*: may freely be omitted

Morphism . . . . . . see Morphism

Substructure . see Substructure

Algebraically Closed Field* . . . . 110

Elimination Set . . . . . . . . . . . . . . . . . 99

ACFq . . . . . . . . . . . . . . . . . . . . . 110

Equality Axioms . . . . . . . . see Axiom

Assignment

Truth Assignment . . . . . . . . . . . 8
of the Variables . . . . . . . . . . . . 45

Filter* . . . . . . . . . . . . . . . . . . . . . . . . . . 68

Atomic Formula . . . . . . . see Formula

Finite Intersection Property* . . . 32

Axiom

Formula

Axioms of Equality . . . . . . . . . 51

Atomic Formula . . . . . . . . . . . . 42

Deduction Rule . . . . . . . . . . . . 13

Existential Formula . . . . . . . . 78

Axiomatization of a Theory . . . . . 30

Quantier-Free Formula . . . . 78
Universal Formula . . . . . . . . . . 78

Turning one into a Sentence 79

Back-and-Forth

Finitary* . . . . . . . . . . . . . . . . . . . 96
of Height

* . . . . . . . . . . . . . . . 97

Inclusion . . . . . . . . . . . . . . . . . . . . . . . . 81

Innite . . . . . . . . . . . . . . . . . . . . . 94

Elementary Inclusion . . . . . . . 82
Interpretation . . . . . . . . . . . . . . . . . . . 45

Isomorphism . . . . . . . . . . . . . . . . . . . . 85

Categoricity . . . . . . . . . . . . . . . . . . . . 87

0-Isomorphism . . . . . . . . . . . . . 94

Absolute Categoricity . . . . . . 75

n-Isomorphism* . . . . . . . . . . . . 96

Compactness

-Isomorphism* . . . . . . . . . . . . 97
-Isomorphism . . . . . . . . . . . . 94

in Topology* . . . . . . . . . . . . . . . 32
Complete Theory . . . . . . . . . . . 27, 65

V -Complete . . . . . . . . . . . . . . . . 61
Consequence

Karp Family . . . . . . . . . . . . . . . . . . . . 94

Semantic Consequence . . . 9, 46

Kripke Model* . . . . . . . . . . . . . . . . . . 33

Syntactic Consequence . . 13, 51

Consistency . . . . . . . . . . . . . . . . . 15, 52
Language

of Propositional Logic . . . . . . . 5

Decidable . . . . . . . . . . . . . . . . . . . . . . . 30

of Modal Logic* . . . . . . . . . . . . 33

Semi-Decidable . . . . . . . . . . . . . 30

First-Order Language . . . . . . 40

Deduction . . . . . . . . . . . . . . . . . . . 13, 51

Second-Order Language . . . . 73
Local Isomorphism . . . . . . . see Isom.

D. Rules
for Connectives . . . . . . . . . . 13
for

and

. . . . . . . . . . . . . . 51

Model . . . . . . . . . . . . . . . . . . . . . . . 46, 79

Dense Linear Ordering . . . . . . . . . . 92

in Modal Logic* . . . see Kripke

Modus Ponens . . . . . . . . . . . . . . . . . . 15

Elementary

Modus Tollens . . . . . . . . . . . . . . . . . . 15

Equivalence . . . . . . . . . . . . . . . . 80

Morphism . . . . . . . . . . . . . . . . . . . . . . 84

112

Elementary Morphism . . . . . . 86

Weakening . . . . . . . . . . . . . . . . . . . . . . 13

Well-Formed Formula

Non-Standard Reals* . . . . . . . . . . . 67

in Propositional Logic . . . . . . . 5
in First-Order Logic . . . . . . . . 42

Witness . . . . . . . . . . . . . . . . . . . . . . . . . 58

Peano Arithmetic . . . . . . . . . . . . . . . 75

Q
Quantication Rank . . . . . . . . . . . . 78
Quantier Elimination . . . . . . . . . . 99

S
Satisfaction
in Propositional Logic . . . . . . . 9
in Modal Logic* . . . . . . . . . . . . 34
in First-Order Logic . . . . . . . . 45
Satisability
in Propositional Logic . . . . . . . 9
in First-Order Logic . . . . . . . . 46

-Saturation . . . . . . . . . . . . . . . . . . . 104
Sentence . . . . . . . . . . . . . . . . . . . . . . . . 43
Signature . . . . . . . . . . . . . . . . . . . . . . . 40
Skolem Function . . . . . . . . . . . . . . . . 89
Standard Part* . . . . . . . . . . . . . . . . . 68
Structure . . . . . . . . . . . . . . . . . . . . . . . 44
Substitutable Term . . . . . . see Term
Substructure . . . . . . . . . . . . . . . . . . . . 81
Elementary Substructure . . . 82

T
Term . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Substitutable Term . . . . . . . . . 47
Theory
in Propositional Logic . . . . . . . 6
in First-Order Logic . . . . . . . . 43
of a First-Order Structure . . 80
Truth Assignment . see Assignment
Type . . . . . . . . . . . . . . . . . . . . . . . . . . 103
of a Tuple . . . . . . . . . . . . . . . . . . 97

U
Ultralter* . . . . . . . . . . . . . . . . . . . . . 69
Ultraproduct* . . . . . . . . . . . . . . . . . . 70

V
Variable, Free or Bound . . . . . . . . . 43

113

Index of Results
O

*: may freely be omitted

-Saturated

Models Do Exist . . 104

Categoricity
Absolute C. of

PA . . . . . . . . . . 75

QE

Morley's C. Theorem* . . . . . . 87

Criteria . . . . . . . . . . . . . . 101, 105

1 -C.
0 -C.
1 -C.

of

of
of
of

ACF* . . . . . . . . . . . . 110
DLO . . . . . . . . . 92, 106
Th(N, 0, s) . . . . . . . 107

of
of

Compactness

of

DLO . . . . . . . . . . . . . . . . . . . 106
Th(N, 0, <) . . . . . . . . . . . . . 108
Th(N, 0, s) . . . . . . . . . . . . . . 107
ACF* . . . . . . . . . . . . . . . . . . 110

of Propositional Logic . . . . . . 29

of Modal Logic* . . . . . . . . . . . . 38

Renaming Algorithm . . . . . . . . . . . . 48

of First-Order Logic . . . . . . . . 65
fails in Higher Logic . . . . . . . 75

Completeness

Skolem's Paradox . . . . . . . . . . . . . . . 90

of Propositional Logic . . . . . . 26

Soundness

of Modal Logic* . . . . . . . . 3638

of Propositional Logic . . . . . . 25

of First-Order Logic . . . . . . . . 57

of Modal Logic* . . . . . . . . . . . . 35

Completeness of a Theory

of First-Order Logic . . . . . . . . 56

Criteria . . . . . . . . . . . . 80, 91, 105


of

DLO . . . . . . . . . . . . . . . . 93, 106


N . . . . . . 108

of the Ordering on

Tarski's Test for

of the Successor Function . 107


of

ACFq * . . . . . . . . . . . . . . . . . 110

 . . . . . . . . . . . . . . 83

U
Unique Readability . . . . . . . . . . . 6, 42

D
Decidability*
Semi-D. of a Theory . . . . 31, 64
of a Complete Theory . . 31, 65

E
Elimination Theorem . . . . . . . . . . . 99
Excluded Middle . . . . . . . . . . . . . . . . 17

G
Generalisation Theorem . . . . . . . . . 58

L
o' Theorem* . . . . . . . . . . . . . . . . . . 71
o-Vaught Completeness Test . . 91
Lwenheim-Skolem Theorem
Descending Version . . . . . . . . . 89
Ascending Version . . . . . . . . . . 90
Final Version . . . . . . . . . . . . . . . 91

114

List of Lectures
Lecture 1 (Language and W 's; Unique Readability) . . . . . . . . . . . .

Lecture 2 (The Semantics of Propositional Logic) . . . . . . . . . . . . . .

Lecture 3 (Natural Deduction in Classical Logic) . . . . . . . . . . . . . .

12

Lecture 4 (Reduction to Two Connectives (1/2))

. . . . . . . . . . . . . .

18

Lecture 5 (Reduction to Two Connectives (2/2); Soundness) . . . . . . . .

21

Lecture 6 (Completeness of Propositional Logic)

. . . . . . . . . . . . . .

26

Lecture 7 (Compactness of Propositional Logic) . . . . . . . . . . . . . . .

28

Lecture 8 (Modal Logic*)

. . . . . . . . . . . . . . . . . . . . . . . . . . .

33

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

38

Lecture 9 (ME1)

Lecture 10 (First-Order Languages; Terms and Formulas)

. . . . . . . . .

39

. . . . . . . . . . . . . .

44

Lecture 12 (Substitutions) . . . . . . . . . . . . . . . . . . . . . . . . . . .

46

Lecture 13 (Deductions; Simplifying the Language) . . . . . . . . . . . . .

51

Lecture 14 (Soundness; Completeness (1/2)) . . . . . . . . . . . . . . . . .

55

Lecture 15 (Completeness (2/2))

60

Lecture 11 (The Semantics of First-Order Logic)

. . . . . . . . . . . . . . . . . . . . . . .

Lecture 16 (Consequences; Compactness; Non-Standard Analysis)

. . . .

64

. . . . . . . . . . .

68

Lecture 18 (Second-Order Logic) . . . . . . . . . . . . . . . . . . . . . . .

73

Lecture 19 (ME2)

76

Lecture 17 (Proof of Compactness by Ultraproducts*)

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Lecture 20 (Models and Theories; Elementary Equivalence)

. . . . . . . .

77

. . . . . . . . . . . . . . .

82

. . . . . . . . . . . . . . . . .

87

. . . . . . . . . . . . . . . . . . . .

91

. . . . . . . . . . . . . . . . . . . . . .

99

Lecture 21 (Elementary Inclusion; Morphisms)


Lecture 22 (Lwenheim-Skolem Theorems)
Lecture 23 (Back-and-Forth Methods)
Lecture 24 (Elimination Theorem)

Lecture 25 ( -Saturated Models) . . . . . . . . . . . . . . . . . . . . . . . 102


Lecture 26 (Examples) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

115

You might also like