📌

I believe in free speech and respectful debate

People have the right to be wrong. No matter how strongly you hold a belief, respect the humanity of those who disagree with you.

in-this-house.png
🔗 Permalink

More about this: it’s easy but nontrivial to start relaxing the assumptions that (1) our physics simulation is complete and accurate down to the lowest level, and (2) we simulate all of physics rather than just the conscious system in question.

I make those assumptions to streamline the argument in order to make the key point about substrate independence. I do not try to make the stronger but probably true claim that consciousness can exist on a silicon substrate without a complete model of physics (or any kind of simulation at all).

🔗 Permalink

The following is a draft and not guaranteed to make sense

On Twitter someone said that “a simulated carbon atom is not a carbon atom” is a really fruitful insight in philosophy of consciousness. I fear that this “insight” is terribly misleading.

The context is that there’s a discussion about whether a simulated brain running on silicon, which reports that it is made of carbon (since that is what carbon-based brains are made of and would say), demonstrates that even in carbon brains, the fact that they are made of carbon is causally disconnected from what they report. This matters because people who argue that silicon brains cannot be conscious, but admit that they would report being conscious, do not want to say that conscious people’s reports of being conscious are causally disconnected from their actual consciousness.

The “simulated carbon is not carbon” move is supposed to block this argument. But the difference between simulation and duplication is not as clean as it might seem.

There are multiple senses in which one thing can be a simulation of another. When we say we have a simulation of a thing, we must also specify the set of questions about the simulated thing that we can answer using only observations of the simulation. Most simulations cannot answer some questions about the thing they simulate, and can still be perfectly good simulations.
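To make the idea of a simulation’s “answerable query set” concrete, here is a toy Python sketch. The class, the stored properties, and the query names are all illustrative inventions, not standard physics or anything claimed in the argument above:

```python
# Toy sketch: a simulation is characterized by the set of queries about
# its target that it can answer from its own state alone.

class CarbonAtomSim:
    """Illustrative 'simulation' of a carbon atom tracking only a few properties."""

    # The query set this simulation commits to answering.
    ANSWERABLE = {"protons", "neutrons", "mass_amu"}

    def __init__(self):
        self.state = {"protons": 6, "neutrons": 6, "mass_amu": 12.0}

    def answer(self, query):
        # Queries outside ANSWERABLE (e.g. a scattering cross-section)
        # are simply not covered; the simulation can still be perfectly
        # good relative to its own query set.
        if query not in self.ANSWERABLE:
            raise ValueError(f"query {query!r} not answerable by this simulation")
        return self.state[query]

sim = CarbonAtomSim()
print(sim.answer("protons"))  # → 6
```

The point of the sketch is that the simulation is judged against its declared query set, not against every question one could ask about a real carbon atom.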

I can make my simulation better and better, simulating additional aspects of a carbon atom, and eventually I will need to simulate the rest of physics in order to answer all sorts of questions about interactions with the carbon atom.

What if we assume we have a complete simulation of physics? Well, what does this even mean? If we assume that there is some lowest level of physics which is causally complete (i.e. outcomes never depend on lower-level information that is not determined by the higher-level information), then we can simulate physics at that level. Our simulation can then be fully general and answer any question we might ask about the simulated system.

From here on, when we say “simulated,” we mean a complete and perfect simulation of physics.

Okay, so then is a simulated carbon atom the same as a carbon atom? Well, if we want to maintain the isomorphism, the simulated things can only interact with other simulated things. We don’t require that questions about interactions between a simulated carbon atom and a non-simulated carbon atom be answerable by observations of a simulated carbon atom interacting with another simulated carbon atom!

In that sense they are not the same. But that is not the sense which is relevant in discussions about consciousness. These discussions usually assume that the whole physical system is simulated, so the isomorphism only needs to apply to simulated things interacting with other simulated things.

This is why the simulated brain which reports being made from carbon atoms is actually correct. It is made from simulated carbon atoms, and that is the only kind of carbon atom in its reference class. You cannot require that the simulated thing be able to interact with non-simulated things in an isomorphic way.

So the “simulated carbon is not carbon” intuition doesn’t create the asymmetry the anti-functionalist needs. The simulated brain’s reports about its own composition are true in exactly the same sense that our reports about our composition are true.

If you want to maintain that simulated consciousness is not consciousness, you need to specify which queries the two systems answer differently. Given the premise that we have a complete simulation of physics, and assuming physicalism, there is no longer any possibly relevant difference between the carbon atom and the simulated one. The question “what kind of phenomenal experience does this physical system have” can be answered by observations of the simulation to the exact same degree that it can be answered by observations of the original.

That’s the whole argument, but to wrap it up with a bow: if we assume that the consciousness of conscious people is responsible for their reports of consciousness, then the simulated consciousness of simulated people will be responsible for their simulated reports of simulated consciousness.

Maintaining that simulated consciousness is not consciousness will be very difficult to do without abandoning physicalism.

🔗 Permalink

I find that the people who are most impressive or aspirational to me often strike me as assholes, and yet I do not therefore want to be an asshole.

I wonder whether their assholery is a necessary or approximately inevitable consequence of their success, a partial cause of my impressedness, or something else.

(Examples: John Wentworth, Sam Kriss)

Edit: confidence is part of the explanation, but not the whole story, I think

🔗 Permalink

What I did During My Undergraduate Education

(I have not graduated yet, so this assumes my plans hold)

Mathematics

  • MATH 350 Real Analysis
  • MATH 355 Abstract Algebra
  • MATH 383 Complex Analysis
  • MATH 341 Probability
  • MATH 361 Theory of Computation
  • MATH 321 Knot Theory
  • MATH 407 Dance of the Primes
  • MATH10072 Combinatorics and Graph Theory
  • MATH 413 Algebraic Geometry (Spring 2026)
  • MATH 447 Linear Control System Theory (Spring 2026)

Cognitive Science

  • COGS 222 Intro to Cognitive Science
  • COGS 224 Intro to Formal Linguistics
  • PSYL10176 Induction: Analogy, Learning, and Generalisation in Humans and Machines
  • COGS 98 Ind Stdy: Cognitive Science Aesthetic Categorizations
  • COGS 493T Topics in Mind & Cognition
  • COGS 497 Ind Study: Cognitive Science
  • COGS 31 Sr. Thesis: Cognitive Science
  • COGS 317 Computational Neuroscience (Spring 2026)
  • COGS 494 Sr. Thesis: Cognitive Science (Spring 2026)

Computer Science

  • CSCI 134 Intro to Computer Science
  • CSCI 136 Data Structures & Advanced Programming
  • CSCI 256 Algorithm Design & Analysis
  • CSCI 381 Deep Learning
  • INFR10085 Introduction to Mobile Robotics

Philosophy

  • PHIL 116 Perception and Reality
  • PHIL 239 The Ethics of AI

Humanities & Literature

  • COMP 230 Renaissance: Self and World
  • ENGL 256T Absurdist Theatre
  • ENGL 378 Proust’s “In Search of Lost Time”
  • GERM 12 Writing the Dreamwork

Other

  • LASC08018 LEL2B: Phonetic Analysis and Empirical Methods (Linguistics)
  • NSCI 201 Neuroscience
  • BIOL 101 The Cell
  • PSYC 101 Introductory Psychology
  • ECON 19 Energy in Transition
  • ECO202 Microeconomics
  • ASTR 104 Milky Way Galaxy & Universe (Spring 2026)
🔗 Permalink

I feel like Opus 4.5 is honestly a psychohazard at this point, the EQ is so high

4o was always such a turn-off for me but messed up a bunch of other people

But I don’t feel the instinctive turn-off from Opus in the same way, which scares me

🔗 Permalink

Opus getting way cheaper means I now spend way more money on Opus 💀

🔗 Permalink

An AI investment bubble could burst soon, but that wouldn’t really change my view on the core questions: How hard is it to build AGI? How close are we to AI that transforms the economy and creates serious risks?

I actually hope the bubble does burst, because it would likely slow down the competitive race between AI companies and give us more time to prepare. But whether the bubble bursts or not will probably come down to (in the grand scheme of things) fairly small differences in AI capabilities over the next couple of years.

🔗 Permalink

I scored 58 on the AI purity test. https://aipuritytest.org

🔗 Permalink

I was a little shocked to learn that the Post Correspondence Problem is undecidable. But when phrased in terms of the unrecognizability of its complement, it is much less shocking.
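One way to see the asymmetry concretely: a brute-force search over index sequences recognizes matchable PCP instances (run with an ever-growing length bound, it halts on every yes-instance), but no finite bound ever certifies a no-instance, which is exactly why the complement is unrecognizable. A minimal sketch, using the standard instance from Sipser’s textbook:

```python
from itertools import product

def pcp_search(dominoes, max_len=6):
    """Search for a PCP match among index sequences of length <= max_len.

    dominoes is a list of (top, bottom) string pairs. Returns a matching
    index sequence if one is found, else None. A None result proves nothing
    in general: PCP is undecidable, so no bound turns this into a decider.
    """
    for length in range(1, max_len + 1):
        for seq in product(range(len(dominoes)), repeat=length):
            top = "".join(dominoes[i][0] for i in seq)
            bottom = "".join(dominoes[i][1] for i in seq)
            if top == bottom:
                return list(seq)
    return None

# Sipser's instance: tops/bottoms b/ca, a/ab, ca/a, abc/c.
dominoes = [("b", "ca"), ("a", "ab"), ("ca", "a"), ("abc", "c")]
print(pcp_search(dominoes, max_len=5))  # → [1, 0, 2, 1, 3]
```

The found sequence concatenates to the same string on top and bottom (“abcaaabc”), and removing `max_len` yields the semi-decision procedure: halts on matchable instances, loops forever otherwise.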

🔗 Permalink