Last time we proved that mice $M$ with Woodin cardinals know about certain sets $A$ of reals, using Woodin’s genericity iterations and the notion of mice understanding sets of reals. But what good is a projectively aware mouse? As an example of the usefulness of this property, we show that the existence of these projectively aware mice yields determinacy of projective sets of reals, a result due to Neeman (’02).
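To fix ideas, here is a rough statement of the kind of result in question; the precise level of determinacy depends on the number of Woodins, and the official formulation (with the exact iterability hypotheses) is in Neeman’s paper:

```latex
% Rough statement: M_n^#(x) denotes the minimal sound, omega_1-iterable
% x-mouse with n Woodin cardinals (and a top measure).
\forall x \in \mathbb{R}\;\big(M_n^{\#}(x)\ \text{exists and is }\omega_1\text{-iterable}\big)
\;\Longrightarrow\;
\boldsymbol{\Pi}^1_{n+1}\text{-determinacy}.
```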
I’ve previously covered Woodin’s genericity iterations, a method for “catching” any real using Woodin cardinals. Roughly, given any countable mouse $M$ and a real $x$, we can iterate $M$ to a model over which $x$ is generic. One application of this is the phenomenon that Woodins present in mice allow them to be more projectively aware.
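The statement being used has roughly the following shape; this is only a sketch, as the official version goes through Woodin’s extender algebra and requires suitable iterability of $M$:

```latex
% Genericity iterations, roughly: catch the real x via the extender algebra.
\text{If } M \text{ is a countable, sufficiently iterable mouse with a Woodin cardinal } \delta
\text{ and } x \in \mathbb{R},\\
\text{then there is a countable iteration } j \colon M \to N
\text{ such that } x \text{ is generic over } N\\
\text{for the extender algebra of } N \text{ at } j(\delta).
```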
When working within most of modern set theory we tend to transcend ZFC, always working with some strong background hypothesis, whether it be the existence of some elementary embedding, a colouring for some partition property, a generic for some uncountable poset or something completely different. When it comes to using these strong hypotheses in mainstream mathematics we seem to hit a brick wall, as most of them don’t easily translate into the language of everyday mathematics.
In the last few posts I’ve been covering a characterisation of the pointclasses that admit scales. To make scale theory even more confusing, there is a completely different notion of scale of a more combinatorial nature, which really has nothing to do with our previous one. To avoid unnecessary confusion I’ll call these new objects pcf scales (though they are usually simply called scales as well).
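For comparison, the combinatorial notion is the following; this is the standard pcf-theoretic definition, stated here for a singular cardinal $\lambda$:

```latex
% A (pcf) scale at a singular cardinal lambda.
\text{Let } \lambda \text{ be singular and }
\langle \lambda_i \mid i < \operatorname{cf}(\lambda) \rangle
\text{ an increasing cofinal sequence of regular cardinals below } \lambda.\\
\text{A scale is a sequence }
\langle f_\alpha \mid \alpha < \lambda^+ \rangle
\text{ in } \prod_{i < \operatorname{cf}(\lambda)} \lambda_i\\
\text{which is increasing and cofinal modulo the ideal of bounded subsets of }
\operatorname{cf}(\lambda).
```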
So far we have characterised the scaled pointclasses in the projective hierarchy, as well as established Steel’s result that $\Sigma_1^{J_\alpha(\mathbb{R})}$ is scaled for suitable ordinals $\alpha$. We now move on to boldface territory, finishing off this series on scales.
The last two posts covered the ‘classical’ theory of scales, meaning the characterisation of the scaled pointclasses in the projective hierarchy. Since the projective pointclasses sit at the very bottom of the $J_\alpha(\mathbb{R})$-hierarchy of $L(\mathbb{R})$, the natural generalisation of this characterisation is to figure out which of the $\Sigma_n^{J_\alpha(\mathbb{R})}$ and $\Pi_n^{J_\alpha(\mathbb{R})}$ classes are scaled, for arbitrary $n<\omega$ and ordinals $\alpha$. This is exactly what Steel (’83) did, and I’ll sketch the results leading up to this characterisation over a couple of blog posts. This characterisation is also precisely what’s used to organise the induction in core model inductions up to $\mathsf{AD}^{L(\mathbb{R})}$.
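As a reminder, here is what ‘scaled’ means throughout; this is the standard scale property from descriptive set theory, stated loosely:

```latex
% The scale property for a pointclass Gamma, roughly.
\text{A pointclass } \Gamma \text{ has the scale property iff every } A \in \Gamma
\text{ admits a scale } \langle \varphi_n \mid n < \omega \rangle\\
\text{all of whose norms } \varphi_n \text{ are } \Gamma\text{-norms, i.e.\ the
associated comparison relations belong (uniformly) to } \Gamma.
```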