I’m giving contributed talks at the inner model theory conference in Girona and at the Set Theory Today conference in Vienna. Both talks will be on results from my recent paper: the Girona talk will be more specialised in a game-theoretic direction, while the Set Theory Today talk will be more of an overview of the Ramsey-like cardinals and some of our results. Here are the two abstracts.
When studying games in a set-theoretical context, we’re mostly interested in the existence of winning strategies, that is, in whether the given game is determined. Underlying the game-theoretic framework, however, is an assumption that is usually made without much thought: that both players have perfect information. There’s nothing hidden, like which cards you hold in poker or which routes you’re trying to build in Ticket to Ride. What happens to our determinacy questions if we allow hidden information?
There’s a very neat way of encoding any set as a set of ordinals, which has the somewhat peculiar feature that it’s hard to encode sets (hard here meaning that it requires the axiom of choice) but easy to decode them. Like some kind of very ineffective cryptosystem.
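For orientation, here is a sketch of one standard coding along these lines (the post itself may set things up differently). Given a set $x$, the axiom of choice yields a bijection $f\colon \operatorname{trcl}(\{x\})\to\gamma$ of the transitive closure with some ordinal $\gamma$, and we can then code the membership relation as a set of ordinals:

```latex
% Coding: AC is needed to pick the bijection f
A_x := \{\, \Gamma(f(u), f(v)) \mid u, v \in \operatorname{trcl}(\{x\}),\ u \in v \,\}
\subseteq \mathrm{Ord},
% where \Gamma denotes the Goedel pairing function on the ordinals.
```

Decoding needs no choice: $A_x$ determines a well-founded extensional relation on a set of ordinals, and the Mostowski collapse, which is absolute, recovers $\operatorname{trcl}(\{x\})$ and hence $x$ outright.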
Mentioning the core model induction to a fellow set theorist is akin to mentioning that you’re a mathematician to the layman: you receive a reaction struck by a delightful mix of terror and awe. My humble goal with this blog post is not to offer a “fix-all” solution to this problem, but rather to give a vague (but correct) explanation of what’s actually going on in a core model induction, without getting too bogged down in the details.
D. S. Nielsen and P. Welch, Games and Ramsey-like cardinals, 2018, manuscript under review (arXiv).
Abstract. We generalise the $\alpha$-Ramsey cardinals introduced in Holy and Schlicht (2018) for cardinals $\alpha$ to arbitrary ordinals $\alpha$, and answer several questions posed in that paper. In particular, we show that $\alpha$-Ramseys are downwards absolute to the core model $K$ for all $\alpha$ of uncountable cofinality, that $\omega_1$-Ramseys are also strategic $\omega_1$-Ramsey, and that strategic $\omega_1$-Ramsey cardinals are equiconsistent with measurable cardinals, both by showing that they are measurable in $K$ and that they carry precipitous ideals. We also show that the $n$-Ramseys satisfy indescribability properties and use them to characterise ineffable-type cardinals, as well as establishing connections between the $\alpha$-Ramsey cardinals and the Ramsey-like cardinals introduced in Gitman (2011), Feng (1990) and Sharpe and Welch (2011).
The incompleteness theorems appear mysterious to many people, from sheer confusion about the statements themselves, to wrongly applying the theorems to scenarios way out of proportion, such as (dis)proving the existence of god. It doesn’t help that when actually learning about the theorems in a logic course, most details are usually omitted. This is probably not the case at all universities, of course, but I have now personally experienced two different approaches to Gödel’s theorems:
- Spend most of the time on the recursion theory prerequisites to the theorems, without actually covering the theorems themselves, save for the statements;
- Skip the recursion theory and only give an informal argument of the incompleteness theorems without really showing why we should care about recursiveness.
The reason for not giving a full account of the theorems is of course the perennial enemy of lecturers: time. What I’ll try to do in this post is still not to give a complete account of the proofs, but to explain how it all fits together, filling in the gaps that I at least encountered during my own studies, which can then hopefully help others stitch together whichever parts they might have learned along the way. Here we go.
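To fix the statements before diving in, here is one standard (Gödel–Rosser) formulation; the precise versions covered in any given course may differ slightly in their hypotheses:

```latex
\textbf{First incompleteness theorem.} If $T$ is a consistent, recursively
axiomatisable theory extending Robinson arithmetic $\mathsf{Q}$, then $T$ is
incomplete: there is a sentence $\sigma$ with $T \nvdash \sigma$ and
$T \nvdash \neg\sigma$.

\textbf{Second incompleteness theorem.} If moreover $T$ extends $\mathsf{PA}$
(or merely proves the derivability conditions for its provability predicate),
then $T \nvdash \mathrm{Con}(T)$.
```

Note that the recursion theory enters precisely through “recursively axiomatisable”: without that hypothesis we couldn’t even express the provability predicate inside arithmetic, which is one answer to why we should care about recursiveness.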
It’s quite standard nowadays to characterise the measurable cardinals as the cardinals $\kappa$ such that there exists a normal $\kappa$-complete non-principal measure on $\kappa$. As we continue climbing the large cardinal hierarchy we get to the strong cardinals, Woodin cardinals and superstrong cardinals, all of which are characterised by extenders, which can be viewed as particular sequences of normal measures on $\kappa$. This trend then stops, and there’s a shift from measures on $\kappa$ to measures on $\mathcal P_\kappa\lambda$, $\mathcal P_\kappa\lambda$ being the set of subsets of $\lambda$ of cardinality less than $\kappa$. Now, how does one work with such measures? Where are the differences between our usual measures and these kinds? And how can we view this shift as expanding the amount of things that we can measure?
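As a concrete reference point (these are the standard definitions, not anything specific to this post), the prototypical such measure is the one witnessing $\lambda$-supercompactness: $\kappa$ is $\lambda$-supercompact iff there is a $\kappa$-complete fine normal ultrafilter $U$ on $\mathcal P_\kappa\lambda$, where:

```latex
\emph{Fineness:} \{\, x \in \mathcal P_\kappa\lambda \mid \alpha \in x \,\} \in U
\quad\text{for every } \alpha < \lambda;

\emph{Normality:} whenever $f(x) \in x$ for $U$-almost all $x$, there is an
$\alpha < \lambda$ such that $f(x) = \alpha$ for $U$-almost all $x$.
```

Fineness here plays the role that non-principality played for measures on $\kappa$, and normality is the natural analogue of closure under diagonal intersections.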