📌

I believe in free speech and respectful debate

People have the right to be wrong. No matter how strongly you hold a belief, respect the humanity of those who disagree with you.

in-this-house.png
🔗 Permalink

Thoughts after reading lots of Steve Byrnes:

The important question is whether it is possible to get to superhuman coders within the current AI paradigm. I’ve previously argued that it is, but it’s easy to see that LLMs knowing the “thing that humans would output under similar circumstances” won’t get you there.

So future LLMs must be very good at not knowing a thing and then figuring it out. This is in principle possible, and the idea is that RL on base models gets you there, as I’ve said previously. LLMs already do this in many cases.

It’s just that the “imitating humans” confounder is so strong it’s often very difficult to tell which capabilities are which.

🔗 Permalink

A good list from Emmett Shear:

Priors include:
- Objects arise from dependent origination
- Things that get surprised too much die
- Care is the driver of intelligence
- Objectivity is the surface of subjectivity
- The universe is alive
- The fucking hippies were right again, fuck

🔗 Permalink

Looking for a single word that means internal perspective without implying consciousness or personhood - I can’t find anything but I like ‘autoception’

🔗 Permalink

I’m running the macOS 26 beta and it’s a complete mess. I should go back to stable, but I’m resisting for some reason.

Edit: it turns out you cannot go back. Yay.

🔗 Permalink

Goodhart’s Curse says that given a true objective function f(x) and an approximation of it called g(x), the argmax of g(x) is in expectation some x where the error g(x) - f(x) is large. This is pretty rigorous.

This is often used to argue that when creating an AI, any difference between the intended utility function and the true utility function is likely to blow up.

It turns out that if we make some assumptions about the error distribution, the expected error from optimizing g(x) in place of f(x) grows very slowly, roughly O((log n)^{1/2}), with the size n of the searched solution space.
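
A quick numeric sanity check (a minimal sketch of my own, not taken from any of the sources, assuming f(x) and the error g(x) - f(x) are independent standard normals over n randomly drawn candidates):

import numpy as np

rng = np.random.default_rng(0)

def goodhart_gap(n, trials=2000):
    # Average error g - f at the candidate that maximizes g, over many trials
    f = rng.standard_normal((trials, n))   # true objective values
    e = rng.standard_normal((trials, n))   # approximation error
    g = f + e                              # the proxy we actually optimize
    best = g.argmax(axis=1)                # proxy-optimal candidate in each trial
    return e[np.arange(trials), best].mean()

for n in [10, 100, 1_000, 10_000, 100_000]:
    print(n, round(goodhart_gap(n), 3), round(np.log(n) ** 0.5, 3))

# The measured gap tracks sqrt(log n): unbounded, but growing very slowly.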

So maybe this curse won’t bite so hard after all.

🔗 Permalink

“The phuckening is upon us,” muttered Dr. Abernathy, adjusting his goggles.

🔗 Permalink

When people talk about LLMs ‘just’ pattern matching without ‘real understanding’, one way of understanding what they mean is that the patterns the AI uses to solve problems are less deep and generalizable than the patterns humans use.

🔗 Permalink

Me-Stew (by Shel Silverstein)

silverstein-me-stew-907670726.jpg
🔗 Permalink

How to tell when something is conscious

1: Ask it. If it says yes, it’s conscious. Example:

def answer_question(question):
    return "yes"

# Call the function
response = answer_question("Are you conscious?")
print(response)

2: Check what it’s made out of. If it’s made out of something natural, it might be conscious. Example:

Trees might be conscious.

3: See if you can understand how it works. If at a low level it makes sense, it’s not conscious. If it’s something magical and mysterious, very possibly conscious. Example:

Government bureaucracies are definitely conscious.

4: Check for a recursive loop. Example:

Two mirrors facing each other are conscious.

🔗 Permalink