There’s a very neat way of encoding any set as a set of ordinals, which has the somewhat peculiar feature that it’s hard (here meaning that it requires the axiom of choice) to encode sets, but easy to decode them. Like some kind of very ineffective cryptosystem.
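To give an idea of what such a coding can look like, here is the standard trick via well-orderings and the Mostowski collapse (the post may set things up differently):

```latex
\textbf{Encoding.} Given a set $x$, use AC to pick a well-ordering of the
transitive closure $\mathrm{trcl}(\{x\})$, say with induced enumeration
$e \colon \gamma \to \mathrm{trcl}(\{x\})$. Transfer membership to $\gamma$:
\[
  E := \{ (\alpha, \beta) \in \gamma \times \gamma \mid e(\alpha) \in e(\beta) \},
\]
and code the pairs as single ordinals (e.g.\ via the G\"odel pairing
function) to obtain a set of ordinals $A_x$.

\textbf{Decoding.} The relation $E$ is wellfounded and extensional, so the
Mostowski collapsing theorem (which needs no choice) gives a unique
transitive set $M$ with $(\gamma, E) \cong (M, \in)$, and $x$ is recovered
as the image of the index of $x$ under the collapse.
```

This is why decoding is “easy”: the collapse is canonical, whereas the encoding required choosing a well-ordering.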
Mentioning the core model induction to a fellow set theorist is akin to mentioning that you’re a mathematician to the layman — you receive a reaction marked by a delightful mix of terror and awe. My humble goal with this blog post is not to offer a “fix-all” solution to this problem, but rather to give a vague (but correct) explanation of what’s actually going on in a core model induction, without getting too bogged down in the details.
D. S. Nielsen and P. Welch, Games and Ramsey-like cardinals, 2018, manuscript under review — arXiv.
Abstract. We generalise the $\alpha$-Ramsey cardinals introduced in Holy and Schlicht (2018) for cardinals $\alpha$ to arbitrary ordinals $\alpha$, and answer several questions posed in that paper. In particular, we show that $\alpha$-Ramseys are downwards absolute to the core model $K$ for all $\alpha$ of uncountable cofinality, that strategic $\omega$-Ramsey cardinals are equiconsistent with remarkable cardinals and that strategic $\omega_1$-Ramsey cardinals are equiconsistent with measurable cardinals. We also show that the $n$-Ramseys satisfy indescribability properties and use them to provide a game-theoretic characterisation of completely ineffable cardinals, as well as establishing connections between the $\alpha$-Ramsey cardinals and the Ramsey-like cardinals introduced in Gitman (2011), Feng (1990) and Sharpe and Welch (2011).
The incompleteness theorems appear mysterious to many people, ranging from sheer confusion about the statements themselves to wrongfully applying the theorems to scenarios way out of proportion, such as (dis)proving the existence of god. It doesn’t help that when actually learning about the theorems in a logic course, most details are usually omitted. This is probably not the case at all universities, of course, but I have now personally experienced two different approaches to Gödel’s theorems:
- Spend most of the time on the recursion theory prerequisites to the theorems, without actually covering the theorems themselves, save for the statements;
- Skip the recursion theory and only give an informal argument of the incompleteness theorems without really showing why we should care about recursiveness.
The reason for not giving a full account of the theorems is of course the perennial enemy of lecturers: time. What I’ll try to do in this post is still not to give a complete account of the proofs, but to explain how it all fits together and fill in the gaps that I’ve encountered during my own studies, which can then hopefully help others stitch together whichever parts they might have learned along the way. Here we go.
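For reference, here are the statements we’ll be working towards, in one common modern formulation (Gödel’s original first theorem assumed $\omega$-consistency, which Rosser later weakened to plain consistency):

```latex
\textbf{First incompleteness theorem.} If $T$ is a consistent, recursively
axiomatisable theory extending a weak base theory of arithmetic
(Robinson's $\mathsf{Q}$ suffices), then there is a sentence $G_T$ in the
language of $T$ such that $T \nvdash G_T$ and $T \nvdash \neg G_T$.

\textbf{Second incompleteness theorem.} If $T$ is moreover strong enough
to formalise the relevant syntax (e.g.\ $T \supseteq \mathsf{PA}$), then
$T \nvdash \mathrm{Con}(T)$, where $\mathrm{Con}(T)$ is the arithmetised
statement that $T$ is consistent.
```

The “recursively axiomatisable” hypothesis is exactly where the recursion theory earns its keep, which is the part that tends to get dropped in lectures.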
It’s quite standard nowadays to characterise the measurable cardinals as the cardinals $\kappa$ such that there exists a normal $\kappa$-complete non-principal measure on $\kappa$. As we continue climbing the large cardinal hierarchy we get to the strong cardinals, Woodin cardinals and superstrong cardinals, all of which are characterised by extenders, which can be viewed as particular sequences of normal measures on $\kappa$. This trend then stops, and there’s a shift from measures on $\kappa$ to measures on $\mathcal P_\kappa\lambda$, being the set of subsets of $\lambda$ of cardinality less than $\kappa$. Now, how does one work with such measures? What are the differences between our usual measures and these kinds? And how can we view this shift as expanding the amount of things that we can measure?
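As a taste of what such measures look like, here are the standard definitions, as used e.g. in the ultrafilter characterisation of supercompactness:

```latex
Write $\mathcal P_\kappa\lambda := \{ x \subseteq \lambda : |x| < \kappa \}$.
A measure $\mathcal U$ on $\mathcal P_\kappa\lambda$ is called
\begin{itemize}
  \item \textbf{fine} iff for every $\alpha < \lambda$ it holds that
        $\{ x \in \mathcal P_\kappa\lambda : \alpha \in x \} \in \mathcal U$;
  \item \textbf{normal} iff whenever $f(x) \in x$ for $\mathcal U$-many
        $x$, the function $f$ is constant on a $\mathcal U$-large set.
\end{itemize}
Then $\kappa$ is $\lambda$-supercompact iff there is a normal, fine,
$\kappa$-complete ultrafilter on $\mathcal P_\kappa\lambda$.
```

Note how fineness plays the role that non-principality played for measures on $\kappa$, and normality again generalises the closure under diagonal intersections.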
Last time we delved into the world of ideals and their associated properties, precipitousness and saturation. We noted that these properties could be viewed as a measure of “how close” a cardinal is to being measurable, and furthermore that all the properties are equiconsistent; i.e. that the existence of a precipitous ideal on some $\kappa$ is equiconsistent with the existence of a measurable cardinal. But we can do better.
A long time ago I made a blog post on the fascinating phenomenon of generic ultrapowers, where, roughly speaking, we start off with an ideal $I$ on some $\kappa$, force with the poset of $I$-positive sets, and then the generic filter ends up being a $V$-measure on $\kappa$. If this sounded like gibberish then I’d recommend reading the aforementioned post first. The cool thing is that we can achieve all this without requiring any large cardinal assumptions! We’re not guaranteed that the generic ultrapower is wellfounded, but if that happens to be the case then we call $I$ precipitous. These ideals can satisfy a bunch of other properties as well, usually involving the term ‘saturation’. What’s all that about, and what’s the connection to precipitousness?
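Schematically, the construction alluded to above runs as follows (with the names $I$, $\kappa$, $G$ as used here):

```latex
Let $I$ be an ideal on $\kappa$ and let
\[
  I^+ := \{ A \subseteq \kappa : A \notin I \}
\]
be the poset of $I$-positive sets, ordered by inclusion modulo $I$.
Forcing with $I^+$ yields a generic filter $G$ which, in $V[G]$, is an
ultrafilter on $\mathcal P(\kappa)^V$ extending the dual filter of $I$:
a ``$V$-measure'' on $\kappa$. Inside $V[G]$ one can then form the
generic ultrapower $\mathrm{Ult}(V, G)$, using functions in $V$, and
$I$ is \textbf{precipitous} iff $\mathrm{Ult}(V, G)$ is wellfounded
whenever $G$ is generic.
```

The point is that genericity substitutes for the completeness that a genuine measure on $\kappa$ would provide, which is why no large cardinal assumptions are needed to get the construction off the ground.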