### Wednesday, August 25, 2004

## Transvision & Set Theory

So I went to the Transvision conference... I thought my talk went quite well; lots of young people had insightful questions and seemed quite excited about Statistical Metaphysics. I'm still talking to Anders Sandberg about it too. He's concerned about the noise ASORs dominating the counting, which they do unless you introduce representation independence (which also conveniently converts complex explicit ASORs into nice finite implicit ASORs). He suggested reading Gregory Chaitin's work, which I had linked to before, and now I've gone ahead and read his online book Meta Math, which is quite good. In particular he suggests that perhaps infinite precision reals don't exist, i.e. there are only infinite sets of finitely complex objects. I need to hunt down his more formal books where he defines algorithmic complexity... This also generally reminds me of Murray Gell-Mann's thoughts on complexity, i.e. that the interesting stuff is in between minimal and maximal entropy.

I've also had some conversations with Rudy Rucker, who leans the other way: all these infinitely complex sets (large cardinals and so on) exist. He astutely points out that the number of permutations of observers is always countable at any finite time (i.e. at any finite complexity). Indeed, I've been thinking about splitting the argument for both cases: infinite sets containing only finitely complex structures, and those also containing infinitely complex objects (like un-nameable reals). If evolving observers form the largest subsets (largest proper classes?) of these classes, then it is explained why we are observers, and we get a hell of a falsifiable prediction as well!

Actually, that reminds me: I've started reading Thomas Jech's Set Theory so I can place all this in a more formal language, and amusingly enough, immediately on the 3rd page we introduce the INFORMAL concept of classes!
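Rucker's countability point can be made concrete with a small sketch. Here I'm making the (loose) assumption that a "finitely complex object" is anything with a finite description, modeled as a finite binary string, in roughly Chaitin's algorithmic sense. Enumerating all such strings in length-lexicographic order assigns each one a natural-number index, which is exactly what it means for the collection to be countable:

```python
from itertools import count, product

def finite_descriptions():
    """Enumerate every finite binary string in length-lexicographic order.

    Each string stands in for a finite description (e.g. a program).
    Because every description appears at some finite index, the set of
    all finitely complex objects is countable -- no matter that the set
    itself is infinite.
    """
    yield ""  # the empty description
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# The first few descriptions and their indices:
gen = finite_descriptions()
for index, s in zip(range(7), gen):
    print(index, repr(s))
```

The infinite-precision reals escape this net: almost all of them have no finite description at all, so no such enumeration can ever reach them, which is the sense in which Chaitin suggests they "don't exist".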
It's stated that this is because classes are easier to work with than formulas, which is reasonable enough, but then on the next page we define the universal class: V = {x : x = x}, the class of all sets. We can't define the set of all sets due to things like Russell's Paradox (i.e. the barber who shaves exactly those people who don't shave themselves), which in set theory means we have to drop the Axiom Schema of Comprehension, Y = {x : P(x)}, for the weaker Axiom Schema of Separation, Y = {x ∈ X : P(x)}. All sets are classes, but classes that are not sets, like the universe V, are proper classes. Very interesting stuff.
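Why Separation escapes the paradox while Comprehension doesn't can be written out in a few lines (a standard argument, just spelled out here):

```latex
% Unrestricted Comprehension would let us form Russell's collection as a set:
\[
  R = \{\, x : x \notin x \,\}
  \qquad\Longrightarrow\qquad
  R \in R \iff R \notin R,
\]
% which is a flat contradiction. Separation only carves subsets out of a
% set X we already have:
\[
  R_X = \{\, x \in X : x \notin x \,\},
\]
% and running the same argument now merely shows $R_X \notin X$: no set
% contains every set. That is precisely why $V = \{x : x = x\}$ can only
% be a proper class, never a set.
```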

The talks at Transvision I was most interested in were scientifically oriented, like Aubrey de Grey's idea of designing enzymes to break down chemical waste that our lysosomes are incapable of splitting, like oxidized cholesterols. Rafal Smigrodzki argued that we will be able to replace our mitochondrial DNA, which doesn't have nearly as much error correction as nuclear DNA. Furthermore, he suggests that we don't have to use viruses as the transport mechanism to replace the DNA, but can instead use special protein structures that can pass through both the cell and mitochondrial membranes. Ramez Naam also talked about some intriguing neural implant upgrades; 16,000-pin grids are being developed, I believe.

The CN tower is pictured below, and the conference was in the building to the left. After the talks on Saturday night I walked down to it, and it takes an amusingly long time to get to it (maybe 10 blocks?) - it really is quite big!
