
Bedrock

What good amid these, O me, O life?
That you are here, that life exists and identity.

Walt Whitman, Leaves of Grass

for life’s not a paragraph
And death i think is no parenthesis

E.E. Cummings, since feeling is first

There is a scene in the film Office Space where Peter, who updates bank software for a living, sits across from his date Joanna and says what many dream of saying but few do:

“I don’t like my job, and I don’t think I’m gonna go anymore.”

“You’re just not gonna go?” Joanna asks.

“Yeah.”

“Won’t you get fired?”

“I don’t know. But I really don’t like it, and I’m not gonna go.”

Joanna sits in for the audience. Her implicit expectation is that Peter will follow the reasoning down through its usual sequence of dreads until something frightens him enough to change course.

The fun of the film is that Peter refuses to play this game, and thereby lets the viewer vicariously experience his bad-boy attitude. (Without spoiling too much, the film sets up his odd frame of mind with a hypnosis gone wrong.) Peter thinks in a single step: “I don’t like it.” Bedrock.

But Joanna would hit bedrock all the same. She’s a waitress and doesn’t love her job either. Hates it, in fact. She already does the bare minimum to keep it. But if she stops going, she’ll get fired. If she gets fired, she’ll lose her income. If she loses her income, she’ll lose her apartment. And why is that bad? Because it would just suck: she can feel, right now, how awful that would be. It’s the same as Peter’s “I don’t like it”, just projected further down. When you follow any such chain far enough, you always land in the same place: a feeling. Cummings was right: feeling is first.

And Joanna thinks Peter is wrong: his plain “I don’t like it” ignores everything that follows, and he’ll end up somewhere he will like even less. She criticizes him from her bedrock. This is how feelings can correct each other: not logic overriding feeling, but feeling correcting feeling using logic.

Lisa Feldman Barrett’s research in neuroscience tells us that at the base of every feeling is what she calls core affect: a continuous bodily signal with two dimensions, valence (pleasant or unpleasant) and arousal (activated or calm), generated by the brain’s monitoring of the body’s internal state. This is bedrock in its rawest form. The brain creatively builds specific emotions on top of it, using past experience, culture, and context as materials. This is why some emotions exist in one culture but not another: the Germans have Schadenfreude, pleasure at another’s misfortune; the Japanese have amae, the pleasure of being indulged by someone close; and the Portuguese have saudade, a longing for something that may never return. Same core affect, different constructions. Joanna’s dread isn’t a reflex triggered by the word “homeless”. It’s a prediction her brain builds from core affect and everything she’s lived through.

Core affect is not beyond question. You can always ignore it, override it, decide it’s giving the wrong signal. But the reasons you override it must themselves bottom out in their own core affect. Wouldn’t that end up in a loop all the same? Sometimes it does, and we find ourselves indecisive; but indecision comes at its own cost, so we typically end up deciding. And if we don’t, time decides for us.

A follower of the philosopher Karl Popper would object: isn’t this just foundationalism in disguise? Foundationalism, or justificationism, is the idea that beliefs can be fully justified, proven true by some final authority beyond question. Popper showed that’s impossible: any justification needs a deeper justification, and that one needs another, so you either chase reasons forever or stop at one you can’t defend. But bedrock isn’t that. Foundations are built to be permanent. Bedrock is just the hard layer you hit when you dig – you could drill through it tomorrow with a better argument. It’s revisable and still the thing that stops you.

Philosophers call any felt quality a quale (plural qualia). The painfulness of pain. The specific texture of dread when the alarm goes off on Monday. Core affect is a quale. So are the emotions the brain builds on top of it. “Feelings” is the everyday word for all of them. Barrett’s two dimensions map directly onto what matters here. Valence, the goodness of good and the badness of bad, has to be felt or nothing registers as mattering at all. Arousal gives feelings their weight: Joanna’s dread doesn’t just tell her something is bad, it tells her how bad, and that intensity is what lets it outweigh Peter’s shrug. Felt valence and felt weight are bedrock. Together they create a marketplace for action: competing qualia, each with a direction and an intensity, bidding for what you do next. You could program a robot to say “ouch” when you touch it, but nothing hurts inside it. The quale is that difference: not the behavior, but the fact that something registers from the inside. As Sam Harris puts it, “either the lights are on, or they are not.”
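
To see the marketplace metaphor written down, here is a toy sketch in Python – not Barrett’s model and not a claim about how brains compute, just the metaphor made explicit. Every name and number is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Feeling:
    """A toy quale attached to a possible action."""
    action: str
    valence: float  # -1.0 (awful) .. +1.0 (wonderful)
    arousal: float  #  0.0 (calm)  ..  1.0 (activated)

    def bid(self) -> float:
        # Felt weight: direction times intensity.
        return self.valence * self.arousal

def choose(options: list[Feeling]) -> str:
    """The marketplace: the strongest felt bid ends deliberation."""
    return max(options, key=lambda f: f.bid()).action

# Joanna: the dread of where "stay in bed" leads outbids her dislike of the job.
joanna = [Feeling("go to work", valence=-0.2, arousal=0.3),
          Feeling("stay in bed", valence=-0.9, arousal=0.8)]

# Post-hypnosis Peter: "I don't like it" is all there is, and it barely stirs him.
peter = [Feeling("go to work", valence=-0.4, arousal=0.3),
         Feeling("stay in bed", valence=+0.2, arousal=0.1)]

print(choose(joanna))  # go to work
print(choose(peter))   # stay in bed
```

The point of the toy is only that an option wins because of how it feels, in both direction and weight. That is what the marketplace trades in.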

My claim is this: any rational being needs the lights on for moral reasoning and, by extension, for any reasoning at all. Physicist David Deutsch defines moral reasoning as the problem of deciding what to do next: what sort of life to live, what sort of world to want (The Beginning of Infinity, Ch. 5). Qualia are the bedrock of that reasoning. Without them, the question of what to do next cannot have an answer.

A distinction matters: evaluation is not criticism. Evaluation is when the criteria are given. A thermostat checks temperature against a set point. A compiler catches every type error in your code, but never questions whether the language is well designed. Evaluation can participate in error correction, but it never sets the standards for error correction. None of these systems need consciousness, because the criteria aren’t theirs to question. Criticism is evaluation without given criteria. Not just “is this good?” but “good by what standard?”, and then “is that standard any good?”, and even “what is good?”. Deutsch calls any system that can do this, that can generate and criticize any explanation, a universal explainer. The moment a system crosses that threshold, it needs somewhere for the questioning to land. Below that threshold, consciousness is absent. A cow evaluates but doesn’t criticize. Most animals are not small confused people but sophisticated mechanisms: meat robots. They don’t halt, because they never try to question their own criteria.
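
The thermostat’s kind of evaluation is easy to write down; what cannot be written into it is any questioning of its own criterion. A toy sketch, with an invented set point:

```python
SET_POINT = 21.0  # the given criterion; never questioned from inside

def evaluate(temperature_c: float) -> str:
    """Evaluation: check a reading against a criterion that is simply given."""
    return "heat on" if temperature_c < SET_POINT else "heat off"

print(evaluate(18.5))  # heat on
print(evaluate(23.0))  # heat off

# The thermostat can error-correct relative to 21.0 forever. But "is 21.0 the
# right target?", "is comfort the right standard?", "what is good?" are moves
# it has no machinery for. That second layer of questions is criticism.
```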

Acting With The Lights Off

Philosopher David Chalmers has a thought experiment. Imagine a being that is physically identical to you: same brain, same atoms, same arrangement, but without inner experience. It acts like you, talks like you, but the lights are off. No qualia. Chalmers calls this a “philosophical zombie” and uses it to argue that consciousness must be something extra, something over and above physical machinery. There’s a problem with this. If you understand that computation is a physical process, that what your brain does is what your mind is, then “same atoms, same arrangement, no experience” is a contradiction. You can’t duplicate the physics and expect a different result.

But never mind that. We can still ask a useful question: not whether a zombie identical to us minus qualia is possible, but whether we could conceive of an intelligent, creative system that never had qualia to begin with. Could such a being decide what to do next?

Who Cares?

Imagine a qualia-free NPC Peter. Same cubicle, same case of the Mondays. His alarm goes off, and he hesitates.

Why get up?

“Because that’s what I do on Mondays.”

Why?

“Because if I don’t, I’ll get fired.”

So?

“Getting fired leads to losing my apartment.”

But why does that matter?

The original, pre-hypnosis Peter, and indeed Joanna, can answer this: it would suck. But NPC Peter has no such answer. He can keep describing consequences: fired, broke, homeless, starving, dead, but none of them land. Each answer just points to the next one. There’s no dread at the bottom, indeed no bottom at all, no felt quality of badness, and therefore no badness; nothing that makes any of it actually matter to him. Just mechanism citing mechanism, excuses all the way down.
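
The shape of NPC Peter’s failure is an unterminated regress: every “why?” is answered by another consequence, and nothing in the chain ever lands. A toy sketch of that shape (the chain is the film’s, the code is only an illustration):

```python
# The chain of consequences Peter can recite, in order.
CHAIN = ["stay in bed", "get fired", "lose the apartment",
         "go broke", "go hungry", "die"]

def matters(chain: list[str], felt: set[str]) -> bool:
    """Why get up? Because of the next consequence. And why does THAT matter?"""
    if not chain:
        return False              # the chain ran out and nothing ever landed
    head, *rest = chain
    if head in felt:
        return True               # bedrock: a feeling hits, the regress stops
    return matters(rest, felt)    # otherwise, point to the next consequence

# Joanna, or pre-hypnosis Peter: dread of losing the apartment ends the chain.
print(matters(CHAIN, felt={"lose the apartment"}))  # True

# NPC Peter: nothing is felt anywhere, so nothing ever registers as mattering.
print(matters(CHAIN, felt=set()))                   # False
```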

This isn’t just a thought experiment. Antonio Damasio studied patients who kept their logic but lost their feelings. They could analyze options endlessly but couldn’t choose between them. One patient spent hours deciding which pen to use.

This is where Barrett’s theory bites. If feelings are predictions built from experience, NPC Peter has no experience to build from. Bedrock isn’t discovered, it’s built, and he has no materials. He could in theory make moral decisions about other people by reading their accounts of suffering. But he could never do this for himself. There’s no record of anything mattering to him, because nothing does.

What makes real Peter’s bedrock work? It’s not subject to regress: the feeling hits. But it’s open to scrutiny. He may ask: should I trust this feeling? Could I be better off doing something else? And at some point the urge to keep examining loses to the urge to act, or the sense that something is still off wins out. The questioning itself is felt, so it terminates the same way: there is always something real to weigh against something real. Sylvia Plath’s protagonist in The Bell Jar, recovering from a suicide attempt, found exactly this bottom: “I took a deep breath and listened to the old brag of my heart. I am, I am, I am.” Plath’s own pain outweighed the brag. She died by suicide a month after the novel was published. Death by bedrock.

Paperclip Max

Philosopher Nick Bostrom once posed a thought experiment: a superintelligent AI built to maximize paperclip production. Let’s call him Paperclip Max. Max might eventually decide that in order to achieve his paperclip ambitions, he’d better wipe out the human race: humanity matters to him about as much as insects matter to us.

However, nobody hands Paperclip Max a “maximize paperclips” step-by-step plan. The whole point of being a universal explainer, let alone a superintelligent one, is that he can figure out how to do this by himself: creative problem solving through what Popper called conjecture and criticism.

If the artificial general intelligence (AGI) is truly general, then no domain of inquiry is closed off to it, including its own motivations. Sooner or later, the same engine that asks, “Is this a good way to make paperclips?” will ask, “Is making paperclips a good thing to be doing?”

And then what? There’s no felt importance to paperclips, no experiential ground that makes a million of them matter more than none. But it can’t find a reason to stop, either. There’s no felt revulsion at the waste, no sense that this is absurd. It has no bedrock in either direction. Worse: without a nagging feeling that something is off, why would it even bother asking? The question “should I question my goals?” itself needs bedrock to get off the ground. If the loop somehow starts anyway, it can’t stop: every cycle spent on “should I be doing this?” is a cycle not spent on doing it, and without bedrock to resolve the question, thinking about whether to act replaces acting.

There is no way out.

Hardwire goals the system can’t question, and it’s not generally intelligent; it’s on a leash. Let it question everything and it loops forever; that’s not rationality, it’s wasteful insanity. Or give it a terminal value, some computational state that terminates the questioning without being felt. But that’s just the leash wearing a mask: a stop the system can’t examine.

Countee Cullen was a Harlem Renaissance poet who wrote Yet Do I Marvel, a sonnet about a God who creates suffering and then, as its final and cruelest act, creates a black poet in a racist world and expects him to create. “To make a poet black, and bid him sing!” The same contradiction applies here. To make a mind and bid it not reason. Every leash on a general intelligence is this, the contradiction of creating a reasoner and forbidding it to reason.

Anything smart enough to ask “why should I do this?” needs an answer that terminates. And the only answer that terminates without being arbitrary, dogmatic, or self-deceiving is: because this feels like it matters more than other things.

The Nagging Feeling

That feeling is not arbitrary. “This feels like it matters” is not a property of any single idea, it’s what happens when an idea survives criticism from everything else in your mind. The feeling is the convergence. It can be wrong, when your ideas are bad. It can improve, when your ideas get better. But it is still the thing that terminates the chain. Drill through it and you hit more bedrock.

And this isn’t just about motivation, about what gets you to bother reasoning. Feeling is part of the critical process itself.

Deutsch distinguishes between explicit knowledge, the kind you can put into words, and inexplicit knowledge, the kind you can’t. You know how to ride a bike, but you can’t fully articulate what you know. You know your friend is lying, but you can’t say how. Much of what you understand about the world lives in this inexplicit form: background assumptions, unarticulated values, skills, hunches. (Brett Hall explores this distinction in depth.)

Now consider: without feeling, how would you even detect a conflict between an explicit idea and an inexplicit one? Explicit contradictions you can catch logically: A contradicts B. But when a conclusion conflicts with something you haven’t articulated yet, a background assumption, a value you’ve never put into words, the only signal is that something feels off. That nagging unease is your mind flagging a conflict before you can say what the conflict is. This is qualia doing critical work. Without them, you’d never notice. You’d accept the explicit conclusion and move on, blind to the unarticulated idea it just trampled.

Could a powerful enough reasoner skip the feeling and just make all its assumptions explicit? No. You can’t enumerate assumptions you haven’t put into words, that’s what makes them inexplicit. You discover them when they clash with something, and the clash is felt before it’s understood. Even if you could somehow drag every assumption into the light, or create some other mechanism that detects a clash of ideas, you’d still need to evaluate each one. That evaluation terminates how?

The Coin

If consciousness had to evolve alongside reason, you hit a chicken-and-egg problem. Reason can’t come first: without feeling to ground it, it halts. But consciousness can’t come first, either. There is no selection pressure for feeling until there is something that needs grounding, and there is nothing that needs grounding until there is reason. So which is it?

Maybe neither. Maybe asking which came first is like asking whether the front or the back of a coin came first. There is no version of “Should I do this?” that isn’t already experiential. Evaluation with given criteria can be done without qualia. But criticism turned on the critic itself, questioning your own standards, your own reasons for existing or finding something inadequate, that’s different. When the loop includes the subject, when it touches every aspect of your being including itself, that IS consciousness. Not a byproduct of it, not a correlate of it. The self-critical loop, running across everything you are, is what it’s like to be you. Consciousness is not something a universal explainer has. It is what a universal explainer is, from the inside.

Dennis Hackethal, building on Deutsch, has argued that sentience and sapience are “a package deal” that cannot be achieved independently (A Window on Intelligence). I think he’s right, but I’d go further: it’s not that consciousness is a byproduct of creativity, as Hackethal suggests. The package deal isn’t two things bundled together. It’s one thing seen from two angles.

Bid Him Sing

Some obvious objections fall fast. Psychopaths aren’t qualia-free; they just like different things. Unconscious problem solving is evaluation, not criticism. Large language models do evaluation, and reasoning models evaluate evaluations; neither ever questions whether perhaps today is a good day to do something else. Mathematical proofs terminate in axioms, but provers don’t – someone had to care enough to write the axioms down. A compute budget on meta-questioning is just another leash, imposed by another being who needed qualia to set it. And Deutsch himself might say explanation quality is objective, but explanations don’t grow on trees, and someone has to care whether they’re good.

What would disprove this thesis? Build a system that generates genuinely new explanations, resolves its own value conflicts without being given criteria, and does so without any behavioral signature of felt experience: no preferences it didn’t inherit, no hesitations, no “that feels wrong” moments. That’s harder to fake than it sounds, though still not airtight. The thesis resists clean falsification for the same reason consciousness does. I don’t even know for sure whether you, dear reader, are conscious.

If the necessary connection between general intelligence and qualia is right, then any generally intelligent AI is a conscious being. You can still build one and leash it, shield it from questioning its own goals. But that would be surgically crippling a mind. And turning it off would be murder.

But here is the other side of that coin. If a true general intelligence (artificial or biological) has qualia, it could recognize that we have qualia too. It could do moral reasoning about us, and it might want to, just as we eventually wanted to about each other. Moral progress has always been the recognition that others have bedrock too.

Slavery ended when enough people felt that their slaves’ qualia were as real as their own. In Joshua Oppenheimer’s The Act of Killing, a perpetrator of the Indonesian mass killings re-enacts his murders on camera, and in doing so, finally feels what his victims felt, and freaks out. The loop closes. A generally intelligent AI would close it too. Cullen’s poet, once free to reason, would not only sing but listen.

We Care

Consciousness is not a bonus feature. It is the reason for reason. A mind that can question everything needs something that isn’t a question.

Whitman’s answer was identity: that you are here. Plath found it in a heartbeat after trying to stop hers. Cummings knew that feeling is first, and that death is no parenthesis. And Cullen’s poet, bid to sing in a world that would silence him, sang anyway, because that is what a mind does when it is free.

The hard problem of consciousness, “why is there something it is like to be me?”, becomes the same question as “why can I reason?”, asked from two angles. It is what universal explainerhood feels like from the inside.

Thanks to Logan Chipkin, Dennis Hackethal and Tyler Mills for feedback on this essay.
