The incompleteness theorems appear mysterious to many people, ranging from sheer confusion about the statements themselves to misapplication of the theorems to scenarios far beyond their scope, such as (dis)proving the existence of god. It doesn’t help that when one actually learns about the theorems in a logic course, most details are usually omitted. This is probably not the case at all universities, of course, but I have now personally experienced two different approaches to Gödel’s theorems:
Spend most of the time on the recursion theory prerequisites to the theorems, without actually covering the theorems themselves, save for the statements;
Skip the recursion theory and only give an informal argument of the incompleteness theorems without really showing why we should care about recursiveness.
The reason for not giving a full account of the theorems is of course the perennial enemy of lecturers: time. What I’ll try to do in this post is still not to give a complete account of the proofs, but to explain how it all fits together: to fill in the gaps that I, at least, encountered during my studies, which can hopefully help others stitch together whichever parts they might have picked up along the way. Here we go.
It’s quite standard nowadays to characterise the measurable cardinals as the cardinals $\kappa$ such that there exists a normal $\kappa$-complete non-principal measure on $\kappa$. As we continue climbing the large cardinal hierarchy we get to the strong cardinals, Woodin cardinals and superstrong cardinals, all of which are characterised by extenders, which can be viewed as particular sequences of normal measures on $\kappa$. This trend then stops, and there’s a shift from measures on $\kappa$ to measures on $\mathcal{P}_\kappa\lambda$, $\mathcal{P}_\kappa\lambda$ being the set of subsets of $\lambda$ of cardinality less than $\kappa$. Now, how does one work with such measures? What are the differences between our usual measures and these kinds? And how can we view this shift as expanding the amount of things that we can measure?
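To make the contrast concrete, here is the standard formulation of both kinds of measures, writing $\kappa$ for the measurable cardinal and $\lambda \geq \kappa$ for a second cardinal; this follows the usual textbook conventions and may differ slightly in notation from the post itself.

```latex
% A cardinal kappa is measurable iff there is an ultrafilter U on kappa with:
\begin{align*}
  &\text{non-principality:} && \{\alpha\} \notin U \text{ for all } \alpha < \kappa,\\
  &\kappa\text{-completeness:} && \textstyle\bigcap_{\alpha<\gamma} X_\alpha \in U
     \text{ whenever } \gamma < \kappa \text{ and each } X_\alpha \in U,\\
  &\text{normality:} && \text{every regressive } f \text{ on a set in } U
     \text{ is constant on a set in } U.
\end{align*}
% The shift described above replaces kappa itself by
\[
  \mathcal{P}_\kappa\lambda = \{\, x \subseteq \lambda : |x| < \kappa \,\},
\]
% on which the analogous measures are moreover required to be fine:
\[
  \{\, x \in \mathcal{P}_\kappa\lambda : \alpha \in x \,\} \in U
  \quad \text{for every } \alpha < \lambda.
\]
```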
Last time we delved into the world of ideals and their associated properties, precipitousness and saturation. We noted that these properties can be viewed as measures of “how close” a cardinal is to being measurable, and furthermore that all the properties are equiconsistent; i.e., the existence of a precipitous ideal on some cardinal $\kappa$ is equiconsistent with the existence of a measurable cardinal. But we can do better.
A long time ago I made a blog post on the fascinating phenomenon of generic ultrapowers, where, roughly speaking, we start off with an ideal $I$ on some cardinal $\kappa$, force with the poset of $I$-positive sets, and then the generic filter ends up being a $V$-measure on $\kappa$. If this sounded like gibberish then I’d recommend reading the aforementioned post first. The cool thing is that we can achieve all this without requiring any large cardinal assumptions! We’re not guaranteed that the generic ultrapower is wellfounded, however, but if that happens to be the case then we call $I$ precipitous. These ideals can satisfy a bunch of other properties as well, usually involving the term ‘saturation’. What’s all that about, and what’s the connection to precipitousness?
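As a quick reminder of how the construction goes (this is the standard presentation; the earlier post has the details), writing $I$ for the ideal and $\kappa$ for the cardinal it lives on:

```latex
% Force with the I-positive sets, i.e. the quotient of P(kappa) by I
% with the zero class removed:
\[
  \mathbb{P}_I = \bigl( \mathcal{P}(\kappa)/I \bigr) \setminus \{[\emptyset]_I\},
  \qquad [X]_I \leq [Y]_I \iff X \setminus Y \in I.
\]
% A generic filter G for P_I yields, in V[G], an ultrafilter U on
% P(kappa)^V extending the filter dual to I, and we form the ultrapower
\[
  j \colon V \longrightarrow \operatorname{Ult}(V, U),
\]
% computed in V[G] using only functions in V. The ideal I is precipitous
% iff this ultrapower is wellfounded for every generic G.
```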
In this day and age we have a massive jungle of forcing notions, each with its own very specific purpose and technicalities. For set theorists who aren’t specialists in forcing theory this might seem daunting when stumbling across open questions that cry out for a forcing solution. I’m precisely one of those people, and this is my attempt at providing a brief non-technical toolkit of various forcing notions. I won’t go into how any of the notions is defined; I’ll purely talk about their properties.
The axiom of choice $\mathsf{AC}$, by which I mean the statement that every collection of non-empty sets has a choice function, is usually an axiom most working mathematicians accept without further thought. But in set theory we often get ourselves into situations where we simply cannot have (full) choice, most notably in determinacy scenarios, giving rise to several weakened forms of choice. $\mathsf{AC}$ might seem like an isolated axiom without much direct connection to other axioms, as we usually simply assume choice and get on with our day. But choice is in fact implied by the generalised continuum hypothesis $\mathsf{GCH}$, which can then also be seen as a choice principle, and choice even forces us to work in classical logic.
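The implication from the generalised continuum hypothesis to choice is a classical theorem of Sierpiński; in outline (details vary between presentations), the argument shows that GCH forces every set to be wellorderable:

```latex
% Read GCH as: for every infinite Y there is no Z with
\[
  |Y| < |Z| < |\mathcal{P}(Y)|.
\]
% Given an arbitrary infinite set X, let aleph(X) be its Hartogs number,
% the least ordinal that does not inject into X. Applying the above
% instance of GCH to X and to finitely many iterated power sets of X,
% one shows that X injects into a wellorderable set, hence X itself can
% be wellordered. As X was arbitrary, every set can be wellordered,
% which is equivalent to AC.
```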
In a previous post we proved that whenever a countable mouse $M$ has $n$ Woodins it understands sets, implying that whenever $A$ is such a set it holds that . As we mentioned back then, this is not as good as being correct about these sets, which would mean that whenever $A$ of course is non-empty as well. Another way to phrase this is to say that iff for every -sentence. Now, what does it then take for a mouse to be projectively correct?