413 Comments

How Science Is Trying to Understand Consciousness:

https://www.youtube.com/watch?v=Xetgy2tOo9g

The weakest part of this review is Appendix A, which goes into a good discussion of composition starting from Dennett's "multiple drafts" model. But composition, while interesting, simply doesn't address the "hard problem," which is: "why are there any qualia *at all* in conjunction with the relevant neural activity?"

We know perfectly well that there *are* qualia in conjunction with the relevant neural activity. One can claim that the qualia "just are" said neural activity, but that is simply to abuse or misunderstand the concept of qualia.

I would much prefer that scientists say "I'm not going to discuss that," or "that's outside the scope of my study," than that they fudge the concepts and pretend to answer a question that they haven't.

Wonderful review! I will buy the book, and look forward to reading about the details. Beyond the book, your comments were incredibly interesting. Thank you so much!

May 14, 2022·edited May 14, 2022

What sort of displays did the perception tests use?

Most displays for the last decade or two (thin panel displays specifically) run at 60 Hz. Now, on the one hand, it's entirely unsurprising that displays intended for human perception would end up with refresh rates that are only slightly above the levels that cause distracting artifacts in movement when viewed by a human.

On the other hand, when you're trying to tease out the nature of consciousness based on visual stimuli, it's suspicious when the numbers you come up with to explain them are small multiples of the common refresh rates: 30 ms is very likely only 2 frames of image. Many displays would struggle to even update the physical pixels completely in that time, if the change was extreme (i.e., a scene change).

It's entirely possible to overcome these limitations in a variety of relatively easy ways; my question is, did they think to?
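
For what it's worth, the frame arithmetic is easy to check. A back-of-envelope sketch using only the numbers quoted in this comment (not anything from the actual studies):

```python
# Back-of-envelope check: how many 60 Hz frames fit in a 30 ms stimulus?
refresh_hz = 60
frame_ms = 1000 / refresh_hz        # ~16.7 ms per frame
stimulus_ms = 30
frames = stimulus_ms / frame_ms     # ~1.8, i.e. roughly 2 frames
print(f"{frame_ms:.1f} ms/frame; a {stimulus_ms} ms stimulus spans ~{frames:.1f} frames")
```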

- So our consciousness runs at most at 2 perceptions per second, and other than unconscious operations, it cannot be parallelized.

This initially shocked me. How then, is reading possible? Then I checked the numbers. Google says people read at 200 to 250 words a minute, or about 4 words a second.

Two words per perception. That seems pretty reasonable. I guess the right way to test that would be to see if words were remembered in chunks? Is a paragraph of 100 words remembered in 50 distinct perceptions, or one longer perception?
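
The arithmetic behind that estimate, as a quick sketch (using the rough figures quoted above, nothing from the book itself):

```python
# Rough check of the words-per-perception estimate.
words_per_minute = 225                    # midpoint of the quoted 200-250 wpm
perceptions_per_second = 2                # the review's claimed conscious rate
words_per_second = words_per_minute / 60  # ~3.75
words_per_perception = words_per_second / perceptions_per_second  # ~1.9
print(f"~{words_per_perception:.1f} words per perception")
```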

-Do you feel disappointed by the book?

From the review, I feel the exact opposite. All attempts to answer the hard problem of consciousness always seemed incapable of prediction to me. This seems like it's showing real and interesting information about the brain.

The mode of my beliefs is on physicalism, but the hard problem is the main thing decreasing its subjective likelihood for me. I don't find explanations like the one given in the appendix at all satisfying! The multiple drafts model, for example, seems to just kick it down to the local level. It is very unclear to me why I can't have a physical system which does what a human does, but without the first person experiencer.

That's basically the p-zombie argument, but because my prior on physicalism is so high, my conclusion is that we just don't yet know what it is about the physical system that gives rise to an experiencer.

My beef with the "ELIZA passed the Turing Test for some people" claim is that Turing imagined the judge in the test actually *trying* to find the robot - trying to present the subject with questions that would require human-like knowledge and creativity to answer as well as a human. His original paper on the test includes things like "Write me a sonnet" and "Do you play chess? Ok, here's a chess problem for you:"

ELIZA can't fool a human who's putting in any amount of effort. It could fool people who were looking for a Rogerian therapist, which perhaps tells us something about how to efficiently give people therapy, but the correct response to "ELIZA passed the Turing test" isn't "guess we have to redefine 'intelligence' again," it's "you didn't actually test it."

(Your review was really interesting, I just wanted to nitpick.)

>For example, binocular rivalry is different in autistic people.

I see both images, I guess it's not just me.

It's striking to me how few people seem to realize that thinking consciousness has a function implies overdetermination. Sure, we can't do certain things without consciousness, but if consciousness is a product of neural activity, then really what you're saying is we can't do certain things without the *relevant neural activity*.

Imagine a facial recognition software program. It detects faces in digital images. If you plug a computer screen into the PC running this program, it will display an output of the analysis with vertices and edges over the input image. Of course, we all know that this display is NOT the analysis per se, the analysis is the voltages on the computer chip (or whatever). The computer (presumably) does not need the visual 'experience' of the face, everything it does can be represented as ones and zeros. You could unplug the monitor, and it would have no impact on the computer's ability to actually perform the analysis. What could it even mean for the visual output on the screen to be functionally responsible for the analysis, when the output on the screen is a product of voltages of a computer chip itself?

Well, that's what you're saying when you say that consciousness does stuff. If consciousness is generated by neural activity, then that neural activity is what is CAUSING behavior, cognition etc. It's just that humans are not modular the way computers are, you can't 'disconnect the screen' (i.e. remove consciousness) without the underlying 'circuitry' also being removed or damaged.

But if neural activity causes consciousness, then fundamentally neural activity is responsible for everything that consciousness supposedly 'does', and there's no apparent reason why neural activity without consciousness would lead to different behavior, any more than unplugging a computer monitor affects computation.

Now, of course, it's hard to understand how consciousness could fail to be causally effective, precisely because of the phenomenon of verbal report (implying consciousness is affecting our brain), but that does not resolve what I wrote above. I don't know what the answer is either way, but I'm skeptical of progress being made when there's so much difficulty in understanding what it is that we're supposed to be working out.

I am sorry to stop reading after one paragraph to write an angry reply, but this is one of my pet peeves: the Turing test has not been passed by Eliza or anything else. The test is not “can some human somewhere be fooled”, the test is “will an adversarial expert judge fail to distinguish between a human and a computer despite being able to ask any question they like”.

Turing’s paper: https://academic.oup.com/mind/article/LIX/236/433/986238

That was concise and brilliant. Thank you, writer!

Nitpick: it's been a while since I read Consciousness Explained, but I'm fairly sure that "Cartesian Theater" is the name he gives to the (strawman?) view of consciousness that he's arguing against, the one in which consciousness sits in a little theatre like a homunculus viewing experiences presented to it by the rest of the brain.

It's not to be confused with the "pandaemonium" view that he's actually proposing.

I wish I could write a well informed comment, but I have no training in the field. However, a decade ago, for work in an unrelated field, I began several years of reading in neuroscience and found these issues so interesting that even though I've forgotten most of what I learned then, I'm succumbing to temptation and writing a poorly informed comment as an enthusiast who has lost whatever edge he had. Blame Substack!

I read Dehaene's book soon after it came out, and don't recall much of it now independently of this review and my marginal notes. In my notes, I got excited about elements of his models of consciousness and memory as distributed networks, and particularly his notion of memory as synaptic circuits that persist latently, and that we revive to consciousness as reenactive performance when interactive perception shifts the circuit into attention (p. 196). That dovetails with a model introduced in Rodolfo Llinas's "I Of the Vortex," based on Graham Brown's portrait of brain activity as a complex steady-state, elicited and modified by interactive perception, rather than as a compound of stimulus-response patterns. (My interest was in the interplay between conscious and unconscious elements involved in execution of complex skills, which was part of the work on "flow" done by Mihaly Csikszentmihalyi. Dehaene's approach fit.)

On the hard problem, I find it pretty satisfactory to use models of emergent structure as a framework for relating the unified experience of consciousness to its distributed neural elements. As a non-specialist, I don't see why there is a problem in the qualitative difference between material and experiential, objective/subjective modes, or why we should think that a theory could dissolve subjectivity into objective components. The physical sciences, confined to analytic models, don't seem to me the right grounds for resolving that--they tend to force us into consciousness as an epiphenomenon, which still doesn't unpack qualia. Models of emergent structures, or supervenience (I'm never quite sure of the difference), seem to me to represent the way that we can understand subjective phenomena as non-reductive without giving up a commitment to materialism or fleeing to pan-psychism.

I'll add that when I was into this stuff, I found the most satisfying approach accessible (barely) to me to be Olaf Sporns's "Networks Of the Brain," which uses the neural architectures of different species to explore how complexity theory bears on the "shape" of consciousness (or on different forms of consciousness, if you don't subscribe to a unified, on/off toggle model of what consciousness is). Sporns also writes in awareness of the "embodied consciousness" approach, which Dehaene doesn't engage with in "Consciousness and the Brain," and which I think is a promising approach. But Dehaene's work is grounded in his own clinical neurological research, while Sporns, I believe, was working as a theoretical cognitive scientist.

The given definition of consciousness, "a perception or a thought is conscious if you can report on it", seems to me to include lots of things that I don't consider to be conscious. I don't just mean robots or the canton of Glarus. I mean any measuring device which automatically records what it measures.

Is a (film) camera conscious? Is a seismograph (recorded by a pen on paper) conscious? They can report on perceptions that they have had, so I think that they should be under this definition. Or, more likely, I'm misunderstanding what is meant by 'report' or 'perception' or even 'you'.

Can you explain how Dehaene's notion of consciousness excludes cameras and seismographs?

> Are babies, animals, or robots conscious? For babies, yes, they are conscious. Their consciousness is 3 times slower than that of adults, which probably has purely physical reasons. The cables in the baby brain are not insulated. The insulation just doesn't fit into the baby skull. Uninsulated fibers have lower transmission speed. The insulation is added later in several surges, the last and most drastic of which happens during puberty. Be patient with your babies and kids, and yes, even with your teens.

Huh, this feels like the opposite of my subjective experience (and, I thought, 'common knowledge'), that 'time passes more quickly as you get older.' That is, I thought there were more 'moments per unit time' (which I'm guessing are these consciousness moments?) when I was a child than now as an adult.

[I can give a story for why it makes sense that adults are faster at this--they have more experience, if nothing else--but it's weird that those observations are out of line with other observations I have, unless I'm misunderstanding some link.]

Apropos of unconscious performance:

when I am trying to solve some sub-problem I haven't before, the first thing I do is bang out some code as fast as possible without thinking about what I'm doing. I keep my goal in mind and just write lines until I feel like I'm done.

Sometimes this stream-of-consciousness code is useless garbage, but usually it is useful as a starting point, and sometimes after some cleaning up it is beyond my capacity to make it better.

Can y'all report similar patterns in things you are good at?

>We know this because it happened several times. The first time was in 1966, when ELIZA passed the Turing test. ELIZA was a chatbot who could fool some people to believe that they talk with a real human. Before ELIZA, people assumed that only an intelligent machine could do that, but it just turned out that it is really easy to fool others.

Also, see Scott Aaronson's interview with "Eugene Goostman", a chatbot that was widely hailed as having passed the Turing test.

https://scottaaronson.blog/?p=1858

Very funny. And it shows that passing the Turing test doesn't require a smart AI: a stupid human interviewer works equally well.

What's studied in this book sounds barely relevant to the thing normally called "consciousness" in English. It's basically a giant bait-and-switch.

"I am going to give you an explanation of what happens in black holes," a similar book could start. "Of course, black holes are defined as holes which are black, like the ones made by moles in my back yard. After careful scientific study, the following types of mushrooms can grow in black holes..."

This book isn't about a topic I have much knowledge or interest in, so I don't have strong opinions on the content. But I wanted to say that you're a really good writer, so it was a really enjoyable read! I genuinely laughed out loud at the sentence "If you find this disappointing, then you will also be disappointed by "Consciousness and the Brain" by Stanislas Dehaene."

Judging by this review, the book does a good job in reporting the biological phenomena correlated with consciousness, but makes no progress whatsoever in solving the mind-body problem. The reviewer comes close to recognizing this at the end: "More importantly, it goes against the intuitive meaning of consciousness for 99% of the people. So if we want to describe the concept of 'all-parts-communicate-and-are-coherent-and-Granger-causal', then we should better invent a new name for it." In other words, the physical characteristics described by the book--all parts communicating, coherent, Granger causal, and all the rest--are NOT equivalent to consciousness. As the critics in Appendix A point out, consciousness is qualia: pleasure and pain, the experience of seeing red, love and fear and hate. Explaining that the whole brain is involved in these processes, or that they form episodic memories, or that they're slow instead of fast, comes nowhere close to explaining the nature of these subjective experiences or how they arise from matter (if they indeed do). It is as if I asked what makes a car move, and Dehaene told me that when cars move, they get hot, their components work together, they emit something from the back, and they get lighter. Those are all true, and they might be useful insights toward an explanation of how cars move, but they're nowhere close to a complete explanation. For one thing, none of them even mention the concept of movement!

The practical impact of not having any answer to the hard problem of consciousness is pointed out by the reviewer, but then unconvincingly dismissed:

"Robots won't be conscious in the exact same way as we are. They might be “conscious” in a fascinating different way, but that seems to become a matter of definition and taste, not a matter of insight. So we should not base our treatment of robots on the question whether they are conscious."

Really? So if you learned that forcing a damaged robot to work puts it in extreme pain, akin to the suffering felt by gulag prisoners on the edge of death, you would treat it exactly the same way as if you learned that the robot is no more conscious than a rock? That goes strongly against my moral intuition, and probably against the moral intuitions of 99% of the population.

Author here.

Embarrassingly, the link which says "This link is worth clicking" is broken. It should refer to Figure 2 of the paper, which shows a ridiculously strong effect of schizophrenia for unmasked priming. This is the right link:

https://www.pnas.org/doi/10.1073/pnas.2235214100#fig2

This was excellent, thank you!

Your further extrapolation that consciousness is fundamentally about the formation of episodic memory makes a testable prediction, though. Are there any situations where we can be confident that a person is unconscious, and yet later on they have non-spurious episodic memories of the event?

Great review. It's made me want to read the book, which is one measure of success. And the sentence, "Dehaene phrases this in a way that ACX readers will love" made me feel like a very special flower (among a whole field of other special flowers here, I know!). Having book reviews written just for us is surprisingly complimentary.

Consciousness changes the universe from a complex, interesting, but morally neutral system of physical forces and particles to a place where suffering is possible. It's the difference between a machine of metal screeching because its mechanisms are jammed, and a person being tortured. It's a question of enormous weight, the biggest weight of all, and to hear advances in neural sciences lead people to dodge the hard problem or say that they're now more bored of it - rather than excited and more interested - is both baffling and worrying for me.

Did Scott pick a good first review on purpose, to keep us hooked? Because it worked. It's the first time in decades that I really updated on what I understand by "consciousness".

The notion that schizophrenia and conscious deficiency are associated has obvious parallels with Julian Jaynes. Does the author explore that at all?

More of a university class than a book review, but really interesting and well written. I really am unconvinced by the tie of Dehaene's idea of consciousness to what most people picture when they think about, say, unique properties of sapient beings. I know the review tried to bridge the gap, but it failed miserably.

My favorite review so far!111

Fascinating!

Slight tangent about binocular rivalry:

"Binocular rivalry occurs if your two eyes are presented with different images. In this case, most of the time you don't see a weird overlay of the two images, but instead your conscious perception flips between seeing either one or the other." ... "binocular rivalry is different in autistic people."

That's really interesting, as it suggests it could be used as an objective test for autism, rather than observing behaviour. The link is paywalled, but I found some other links suggesting the speed of flipping is different in autistic people or something.

I also found some links about binocular rivalry and aphantasia: most people can prime themselves to see one image rather than the other by visualising red or blue before looking at it, but people without visual imagination can't do this.

I just tried some binocular rivalry tests on myself using a cheap pair of 3D glasses. I did the ones on https://en.m.wikipedia.org/wiki/Binocular_rivalry and https://aphantasia.com/binocular-rivalry/ . I mostly just saw a superposition of both images, not the alternation you're supposed to see. (The only one where the alternation worked was the "warp and weft" one on the Wikipedia page, which worked like a Necker cube or the spinning dancer illusion for me. But with the words on Wikipedia, or the animals on aphantasia.com, I just saw both superimposed.)

I am probably on the autistic spectrum and very likely aphantasic, but just seeing the images superimposed doesn't seem to correspond to either of those, and I wonder what it does correspond to (except maybe having inadequately colour-filtering 3D glasses?)

I'm also confused how 3D glasses in general work as intended for anyone if binocular rivalry is a thing and works as described. Isn't the 3D image caused by the superposition of the two images? If people actually see the two images alternating, how can they see the 3D image?

OT: DeepMind are hiring for alignment research (https://www.lesswrong.com/posts/nzmCvRvPm4xJuqztv/deepmind-is-hiring-for-the-scalable-alignment-and-alignment) and that post contains a link to a paper (https://arxiv.org/abs/2201.02177) describing work around the topic of "grokking*", that is, deeply understanding a concept and how ML systems achieve that from the data that they are trained with.

Anyway, the quick thought is.... Is it possible to take a trained model (say a large language model) and then calculate the optimum training set for creating that model? Think of it as asking "what is the smallest amount of training data that could have turned a random network into this one given the training algorithm, and what should that training data consist of?".

* From Heinlein's 'Stranger in a Strange Land', since you asked.

Thank you so much for this wonderful review of a great book on a fascinating subject!

I read the book a few months ago (many thanks to the ACX reader who insisted on recommending it to me during a ACX meeting!) and found it a wonderfully clear synthesis on a fascinating and very complex subject. I find your review a great presentation of the book with some very interesting additions, and the synthetic way you described the book clarified some points for me. Thank you!

Does anyone else start salivating when they read about Pavlov's dogs?

I kept stumbling over the term "unconscious". To me, someone is unconscious if they get clubbed over the head. Is "subconscious" incorrect here, or just less correct?

Not very related to the review, but I just had a crazy experience with the rotating mask illusion: when I now first watched the video, I didn't know what the illusion is about - and I normally saw the hollow, back side of the mask. Then I read the description of what people usually see - and now I can't see the hollow side any more!

> “However, if the image enters consciousness, then after 120-140 ms all neurons in the lower layers suddenly start to encode "diagonally". Now they agree on the same interpretation of the world.”

This concept of unconscious processing possibly followed by conscious processing reminded me of the description of two distinct sensations for every perception in the book “Mastering the Core Teachings of the Buddha” (presented in this blog some time ago). I refer to the following text passage (Part I, Chapter 5, on Impermanence):

> “We are typically quite sloppy about distinguishing between physical and mental sensations (memories, mental images, and mental impressions of other physical or mental sensations). These two kinds of sensations alternate, one arising and passing and then the other arising and passing, in a quick but perceptible fashion.

[...]

This habit of creating a mental impression following any of the physical sensations is the standard way the mind operates on phenomena that are no longer actually there, even mental sensations such as seemingly auditory thoughts, that is, mental talk (our inner “voice”), intentions, and mental images. It is like an echo, a resonance. The mind forms a general impression of the object, and that is what we can think about, remember, and process.

[...]

Each one of these sensations (the physical sensation and the mental impression) arises and vanishes completely before another begins, so it is possible to sort out which is which [...]”

I assume that the mental “echo” corresponds somewhat to consciousness as defined by Dehaene. The indicated possibility of consciously observing the preprocesses of consciousness seems fascinating.

I loved this review! Extremely interesting. I will definitely be telling friends about it. And now I want to nitpick:

"The brain is very good at decomposing the world into units that make sense." tripped my passive voice sensor, and I asked, "Make sense to who?" Then it tripped my tautology sensor, because "who" is "A person with a human brain".

Does "make sense" mean anything more than "has been decomposed into units by the brain"? Because, if not, this sentence can be transposed into, "The brain is very good at decomposing the world into units that the brain is very good at decomposing the world into."

I don't think this nitpick represents a problem with the argument itself, which introduces an alien observer to help guard against exactly this sort of human-brains-evaluating-human-brains tautology trap. Maybe the first sentence just needs some slight tweak to say more precisely what it means.

I wonder if this approach to consciousness resolves some of the arguments about free will. For a while, it seems like people have used the timing difference between actions and our conscious thoughts about those actions, which appear later, to show that the actions are not the result of free will. Hence free will is an illusion. Sure, pulling your hand off a hot stove is automatic rather than conscious, at least until after the fact. But in perception the conscious brain also rewrites the activity of the base level sensory neurons to correspond to the conscious interpretation. I wonder if a similar process works for decisions and actions, where the conscious brain makes the decision and then sends signals to the motor neurons, and then the action happens. I don't know, maybe the empirical results don't fit that, but I find it appealing as a mental model.

Suggestion: in the brackets link 'This is a finalist' to a Google Doc with the list of all finalists. So that the ambitious readers can read them back to back if they'd like to.

This may be telling us that intelligence is not what we think it is. It seems like a good overview of the mechanical basis of consciousness, but this review makes me think it would be shorter to read the book.

As I believe Scott himself once also related, I find that the main thing that happens to me after discussing the hard problem of consciousness with people who think it has been solved or reduced is that I start worrying that half of the people around me actually are p-zombies.

Review-of-the-review: 9/10

This is a very strong early contender! Super clear, surprisingly concise, thought-provoking. My favorite features were the motivating introduction which was compelling and very "Scott-ish", and the two appendices addressing _exactly the questions I had_ after reading the main body of the review. My least favorite parts were the paragraphs on memory vs consciousness and "Are We Smart Enough To Know How Smart Animals Are"; the injection of author's own knowledge and commentary disrupted the flow of the review for me.

Substantively, the review persuaded / informed me that the threshold of "consciousness" studied by Dehaene is a real phenomenon with important effects on how we perceive, experience, and model the world. It reminds me of Kahneman's "System 1 / System 2" distinction except with even smaller thresholds of latency and intentionality. On the other hand I'm even less convinced than the author is that this version of "consciousness" is useful for answering philosophical questions about the mind or subjectivity. If you did a masking experiment on me and then took me through the results, showing me the subconscious cues and that my responses were more accurate than chance, I might affirm that I didn't _notice_ the cues but I wouldn't deny that I _saw_ them. In other words I wouldn't disavow the "I-ness" of my subconscious responses. That I can't report on experiences that never "reached fixation" in my brain is vacuously true in a way that makes me suspicious of it as a philosophical argument. It's like reducing the mind-body problem to "if the brain is destroyed there's no longer an observable mind"; yes, of course you can observe an effect in that direction, but it doesn't demonstrate the inverse!

As always, many thanks for contributing!

Amazing review. I'll definitely have to read the book.

"from my internal perspective, I don't think neurons account for what I'm experiencing."

I said nothing of the sort. I am making no assertion about the causal relation between neural activity and experience.

I have not said anything about experience being “mysterious.” (You keep saying that.)

There's a large logical gap in your account of what one "has to accept."

I fully accept the bulk of modern neuroscience. (The only studies in neuroscience of which I tend to be skeptical are the ones of which Scott also tends to be skeptical.)

I have mentioned no "demon." (You have.)

I'm not a Cartesian. Not even close.

There are many cases in science in which X causes Y, and yet X and Y are distinct phenomena. Your argument here is not valid.

The much slower rate of conscious analysis is why martial artists train repeatedly. Even when people know a move, the point is to practice it enough that it starts before people notice the attack coming.

I don't understand why people are confused by 'qualia'. To me, it very clearly seems that the ability to notice that I am experiencing {whatever is happening at the moment} is just a special case of consciousness: being conscious about being conscious. When I notice my own consciousness and me being a person in a real world right now, it also evokes feelings of awe, excitement and solemnity, among other less discernible ones.

But neither the noticing nor the feelings are special, they just apply to higher-level concepts. Also, they are really complex/'high-bandwidth', which makes them feel vivid and special.

If anyone is confused about 'qualia', I would love to try and answer concrete questions, because the 'hard problem' just doesn't qualify as a problem _at all_ to me, if stripped of all the superfluous words around it.

I see a lot of ideas here that are reminiscent or straight up the same as in the book "A Thousand Brains: A New Theory of Intelligence".

I loved this review, but I think your discussion of the hard problem of consciousness leaves much to be desired.

In particular, the primary philosophical argument for taking the hard problem seriously isn't Searle but Chalmers's work on the conceivability of philosophical zombies: that is, people who act exactly like us (including claiming they are conscious) but lack experiences. Indeed, Searle-style arguments such as the Chinese room are relatively disfavored lately (though, based on my discussions with Searle, many of the views he actually takes are different from the way his views are often summarized in the literature).

What makes the hard problem hard isn't explaining why experience would have structure but why it would be there at all. Moreover, if you can't explain why such-and-such neural firings give rise to any kind of experience, you can't explain why that experience has a feeling that in some sense reflects the representational work that neural activity is doing in the brain.

Expand full comment

One issue I have with this review that I have not seen mentioned is that you make a lot of claims of the form "science shows..." without explaining the study setup. For example, you say "babies are conscious" without explaining what exact experiment was done to supposedly show this. This is a big problem, because there's no conceivable experiment I can think of that can show a 2-week old is conscious -- 2-week olds are extremely hard to study!

Even for, say, 4-month olds, the main way people study them is by measuring how long they spend looking at different stimuli. How do you get from that to consciousness, i.e. to self-awareness of their own thought processes? Perhaps there is a way to do this, but the fact that you don't explain what it is makes me skeptical. It makes me think you're trying to pull a fast one on me.

This keeps happening in the review, not just in the appendix. For example:

"In dream phases (REM sleep), external stimulation usually does not spark consciousness. However, the brain does react like a conscious brain if the stimulus is directly implanted into the brain via magnetic stimulation (TMS)"

Oh yeah? And what were the experiments that demonstrated that people do, or do not, have self-awareness of their own thoughts while they were dreaming? I am almost certain that any such experiment is subject to critiques of the form "this doesn't show what you say it does". Just because you have a paragraph saying "oh but researchers were very very careful" doesn't exempt you from explaining *how* they were careful -- you have to show, not tell.

In other words, I accuse you (and perhaps the author of the book) of editorializing: instead of presenting the scientific findings ("experiment X showed Y"), you present a story you claim has been proven by these findings, without nearly sufficient justification.

(I come across as bitter because this review annoyed me, but I should also mention that I found it to be well-written and I did learn some things from it, so thank you for writing it.)

Expand full comment

1. Minor error: the city of Glarus has a population of 12k, but the canton of Glarus has a population of 40k. All the pictures look awesome and I want to go there. Maybe split an AirBnB with somebody. I have nothing better to do.

2. I just did a self-experiment with binocular rivalry. I held my phone up to one eye as close as possible while looking at my desktop monitor with the other eye. There was no alternation. I saw both complete images simultaneously, blended together without obscuring each other. I can read text on the screen with my left eye at the same time as I consciously notice which apps are moving around in my right eye. It sounds like this is far from the normal result, but I've never been diagnosed with autism.

Expand full comment

Very strong start to the book review contest! I've decided to take notes this time around. I'll just plop those in below, might edit into a slightly more directed form later, after I've had a chance to read through others' comments.

I don't think I experience binocular rivalry as described? Curious on link with autism, don't know if I am or not, never cared.

Procedural vs Episodic memory: I seem to be vastly better at the former than the latter. Mental quirk? Affected by diet?

On response time and consciousness: I not infrequently respond in seemingly unconscious ways (jumps, starts, vocalizations, etc.) with an odd delay (sometimes even over a second)

> Some cool experiments show that when we are shown a surprising image, the time we believe the image to appear is 300ms after it actually appears. But if we can predict the image, there is no such delay, and we perceive the timing correctly.

When watching people play games, if they very quickly open and close a menu (to check a number, say), I typically can't perceive the number even as existing, unless I am watching intently for it. This gives a plausible and interesting explanation.

Musing: what we decide, we get more of.

>since we never "observe" different parts of our mind to be incoherent or even independent

Hmm. I agree, given "observe" as "have a conscious perception" with "conscious" defined as it's being used here. Otherwise, I strongly disagree, at least for my personal experience. I regularly notice my own internal disparate nature.

Expand full comment

I thought this post was quite excellent.

Expand full comment

If you want to see what something being visible for only a few tens of milliseconds looks like, Paul Christiano made an anagram-solving game with an option to have the letters only show up for a certain amount of time. You can set it as low as 0.02s:

https://paulfchristiano.github.io/anagrams/

Depending on the monitor, they'll generally show up for a single frame. Makes for a fun challenge.

(of course, this is different from masking experiments since it all takes place on a white background, but still, gives a sense of the time scales involved)
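To make the time scales concrete, here is a quick back-of-the-envelope sketch of how many display frames a given stimulus duration amounts to. The 60 Hz refresh rate is an assumption; many monitors run at 60 Hz, but 120 and 144 Hz panels are common too.

```python
# Rough frame arithmetic for a 60 Hz display (an assumed, typical rate;
# higher-refresh panels would show more, shorter frames).
REFRESH_HZ = 60
frame_ms = 1000 / REFRESH_HZ  # ~16.7 ms per refresh

for stimulus_ms in (20, 30, 50):
    frames = round(stimulus_ms / frame_ms)
    print(f"{stimulus_ms} ms is roughly {frames} frame(s)")
```

So a 0.02 s setting really is about a single frame, and a 30 ms masking window spans only about two.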

It's a great game in general - oddly rewarding.

Expand full comment
May 16, 2022·edited May 16, 2022

I am seeing a lot of people arguing that there is no Hard Problem of Consciousness. In this TED talk I will attempt to convey why there is such a problem, and what properties a valid solution to it would need to have by analogy with another Capital-H-Hard Problem.

Specifically, I claim that the Hard Problem of Consciousness is "Hard" in precisely the same way as what I will call the Hard Problem of Metaphysics, that being "Why is there something rather than nothing?".

Suppose that we perfected the Standard Model of particle physics, i.e. we found a set of equations that predict all natural phenomena to arbitrary levels of precision. Someone might then ask the question "Why is there something rather than nothing?" and receive the response "Because of this set of equations". I claim that this response does not answer the question at all, and the hypothetical person delivering the response has made a category error.

The equations of the final draft of the Standard Model might describe the behavior of matter and its interaction with spacetime perfectly. They would not in any way address why matter and the spacetime containing it exist at all.

I am not aware of any candidate solutions to the Hard Problem of Consciousness, but I am aware of one valid solution to the Hard Problem of Metaphysics, that being the Mathematical Universe Hypothesis.

The MUH actually answers the question. There is something rather than nothing because Mathematical Platonism is true and the universe we perceive is precisely a mathematical structure. Why is there Mathematical Platonism rather than no Mathematical Platonism? Because the absence of Mathematical Platonism is a logical impossibility; mathematical truths are logically necessary.

Proof: The statement "2+2=4" is not logically necessary. However, the statement "Given the axioms and definitions of number theory, it follows that 2+2=4" is. It is as unavoidably true as any syllogism. All mathematical truths are statements of the second type.
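A proof assistant illustrates the distinction: what a system like Lean certifies is precisely the conditional statement, since the proof only goes through relative to the system's built-in definitions of the natural numbers and addition. A minimal Lean 4 sketch:

```lean
-- "2 + 2 = 4" is checked only relative to Lean's definitions of
-- Nat and (+); `rfl` succeeds because both sides reduce to the
-- same value by definitional unfolding.
example : 2 + 2 = 4 := rfl
```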

The equations of the Grand Unified Theory solve the Easy Problem of Physics. The Mathematical Universe Hypothesis, if true, solves the Hard Problem of Metaphysics.

Appendix A of this review addresses the Easy Problem of Consciousness. It does not interact with the Hard Problem.

Expand full comment
May 16, 2022·edited May 16, 2022

Chalmers once joked that Dennett might be a p-zombie himself. When I read works by the "explain away consciousness" crowd, I get the feeling that they perhaps don't really grok what people like Chalmers and Nagel mean by qualia or consciousness. Lots of the arguments talk past each other. Are some of us… zombies? And does that draw the line between those who recognize qualia and those who genuinely seem to be confused about what we're even talking about? (Kinda like that mental imagery study that showed some of us lack it completely - but never knew!)

Expand full comment
May 16, 2022·edited May 16, 2022

Great review, much appreciated. Now let me go off in a linguistic tangent :)

I find it quite confusing and unsettling how English seems to lack a general word for what the mind does, in the broadest sense. (Same goes for other languages as far as I know).

We have words for perceiving, thinking, giving attention, awareness, self-awareness, having feelings, emotions, memories, etc. And of course we have the word consciousness, which is hugely overloaded — this review makes a good point that the phenomenon it chooses to call "consciousness" is indeed a worthy candidate for the name. Then we have words like "know", which can refer to a broad range of mental events, but often has extraneous implications such as the cognition being correct or somehow justified.

I was kind of hoping that we could use "cognition" as such a general term, i.e. that we "cognize" a thought, a perception, a memory, a sensation, etc. But this article then goes and reserves the word "cognition" for the super narrow sense of abstract reasoning!

Then there is "qualia", but that is again super technical, specifically refers only to the subjective experience, and is hardly ever used outside of philosophical wrangling and thought experiments.

Is our experience of the mind's operation so scattered, that we forgot to make a general verb for it?

Then again, people often take the word "mind" itself in all sorts of restricted senses, equating it with thought, and contrasting it with other types of cognitive events like emotions and intuitions. Looks like we don't have either a good verb *or* a good noun for the whole thing!

Expand full comment

Complete tangent, but what does the acronym "UNO" refer to with regard to the United Nations?

Expand full comment
May 16, 2022·edited May 16, 2022

I'm thoroughly unconvinced by this review/book's point of view about consciousness. Instead of talking about word definitions, I'd like to express my points of disagreement as empirical questions that I don't think the given paradigm answers:

- What *precisely and in general* (on the maths/physics level) causes some groupings of matter to "feel like they are something" or "feel sensations"? Many animals have different nervous systems, different neuron types of sensing different things (e.g. nociceptors), and so forth. Some structural, describable, measurable thing shared by humans and cows and maybe insects and probably fish, but not rocks or probably plants, and maybe some future AIs.

- Are there any systematic differences (on the maths/physics level) between pleasure and pain *within* a type of sentient being?

- Are there any systematic similarities (on the maths/physics level) between pleasure/pain experiences had by *different* types of sentient being? If I stub my toe, how similar is that feeling to the feeling when a tiger stubs his toe? Where does that similarity come from?

- Is there a way to measure anything like utility/hedonic units/whatever, across individuals and species of sentient beings? If not, what if anything can we construct as a "ground truth" for utilitarianism/consequentialism? If we can't use anything for that, what do we do with the obvious resulting problems? (e.g. tradeoffs between two vague types of "betterness", neither of which are actually measurable for some reason. *Moral Uncertainty* is kind of about this, I think.)

- Will [QRI](https://www.qualiaresearchinstitute.org/) *ever* come out with good research? Seriously, they seem to be the only group thinking in these general-yet-reductionist (i.e. good) terms about consciousness (especially as laid out in Principia Qualia).

And yet, their newsletter and blog are heavy on speculations and trip reports, but light on brain scans and testable predictions (like "if you grow X cells in Y shape and then poke it, it is mathematically guaranteed to feel pain").

TLDR knowing that neurons exist doesn't "solve" consciousness any more than ELIZA "solved" AI. We should expect a real explanation to be "meaty" and good in the sense that it would pass smell tests from like The Sequences. An example of this kind of a preliminary sort of this explanation (which may or may not be true) in AI would be https://www.gwern.net/Scaling-hypothesis#blessings-of-scale .

Expand full comment

> The isolation just doesn't fit into the baby skull

That answers that question. It always bothered me: if we don't create new neurons after birth, why do our skulls grow? Making space for the myelination makes absolute sense!

Expand full comment

How does the theory about schizophrenia fit in with the autism-is-reverse-schizophrenia idea? (Not that I think that idea is bulletproof.)

I don't think I understand in any depth, but it kind of sounds like we'd expect a reverse schizophrenic to consciously notice something faster than average? Which seems to track with sensory sensitivity.

Expand full comment

Great review. Point of contention: does the brain truly break everything down into discrete units, or does the left brain break everything down into discrete units?

Expand full comment

It occurs to me in reading Appendix A that in the classic formulation due to Descartes ("cogito ergo sum"), the "I" is smuggled in grammatically. Perhaps, writing in Latin, Descartes simply failed to notice that his supposed first-principles logic had introduced a first person.

Expand full comment

This was a great book review. Very high information content, very clearly communicated, on a difficult and profound topic. Kudos!

Expand full comment

My question about the topic of qualia as touched on above revolves more or less around the following: is it substrate-specific or algorithm-specific? That is, if you use a biomimetic approach to building an AI, taking some liberties to avoid doing a 1:1 molecular simulation, just implementing the basic necessities, would it still have qualia, or could those be just a byproduct of how organic cognition works? Could you get a p-zombie still able to mimic organic behavior by not having it be comprised of entirely independent computational elements like neurons? Could the conscious subjective individual experience as we experience it be a coincidental 'luxury' organic systems get as an outcome of their general composition? I guess, to sum these up: does having qualia for a sentient mind require embodied cognition? Maybe I'm asking nonsensical questions however...

Expand full comment

I wonder why Scott still isn't gathering book review ratings at the end of each review rather than ... much later. Seems like the delayed approach not only annoys readers, but will introduce some kind of bias (most likely a recency bias for the last reviews in the series).

Expand full comment

> So perhaps we are conscious in dreams, and we are only cut off from outside perception. But it is too early to be certain.

I am so confused when people say things like this. How is it not completely, tautologically obvious that we are conscious during dreams? If Dehaene's neural-activity criteria for consciousness don't happen to capture what is happening during dreams, then, BY DEFINITION, he hasn't captured all of human consciousness. If we think we are conscious and a scientific experiment tells us we are wrong, then it has to be that the experiment is wrong, not us. As Searle says: when it comes to consciousness, the appearance IS the reality.

I might be grossly overgeneralising, but it seems to me that it is only people who deny the hard problem that say things like this. It's a really fascinating subject area where very intelligent, good-faith proponents on both sides seem to be talking past each other and have a wholly different fundamental conception of what the problem is about.

Expand full comment

This was a fantastic read, and has done more than anything I've seen before to convince me that the hard problem of consciousness simply doesn't exist. Bravo!

Expand full comment

Ahh, yes – questions like "Why am I 'me' and not someone else" and "Why is there something instead of nothing" are indeed exactly the kind of 'confusing mysteries' that I think can't be answered directly. I strongly suspect they need to be 'dissolved'.

But it's still not clear whether you're claiming that qualia are things that are "inherently unanswerable" – EVER – by any possible future neuroscientist. It sure _seems_ like you're claiming that.

I just can't understand how something could be NOT-independent of the brain but somehow also NOT observable at all, by "externally observed neurological processes". Are qualia some kind of non-physical thing? How do they 'interface' with the brain?

To me, the strongest evidence of the 'existence' of qualia is, besides introspection, communication. I'm very sure that both introspection and communication are neurological and, in principle, 'externally observable neurological processes'. It seems impossible for people to be able to communicate about qualia if they're not also, fundamentally, a neurological process too (and thus one that _could_ be 'externally observable').

I think the review describes the best 'dissolution of the mystery' of qualia – they're just a kind of special 'conscious memory'.

I can't figure out what _other_ 'framework' you or anyone else could possibly have in mind that provides testable predictions. It sure _seems_ like you all are claiming that qualia are 'magic', i.e. non-physical.

Expand full comment

When I try to distill the hard problem of consciousness, to argue that it is real, here's what I come up with. In theory, you could systematically replace every cell in my brain with a machine, large or small, that would receive and transmit signals to the other cells the same way the organic cell it replaced did.

As long as all of these machines operate in the same time scale relative to each other, they should be functionally equivalent to my brain. And it shouldn't matter what they're made out of. Maybe each one is the size of a swimming pool and rolls billiard balls around, passing some balls to other swimming pools nearby. Say these balls move around at 1 inch per hour. In theory, every aspect of how my brain works could be captured accurately by this system.

The question, then, is: when this swimming pool billiard system remembers a mistake it made 30 years ago, will it really feel the onset of crushing shame the way I feel it? As these trillions of balls all slowly roll at a maximum speed of 1 inch per hour? If you believe there is no hard problem of consciousness, I think your answer has to be yes.

Expand full comment

> 1) "I have seen the word range", or 2) "What word? There was no word!?"

This isn't necessarily measuring consciousness; it's measuring memory. Unless I'm missing something, we could consciously see the word "range" but not commit it to memory, and so be unable to report on it afterwards?

Expand full comment

Sorry to break the "420 Comments" -- too good.

But I just don't understand people who dismiss the "hard problem" and talk about the experience of red. That is not it at all. Haven't you ever felt lust? Or deep hunger? Or overwhelming pain? It FEELS like something. It isn't some academic "qualia" and it isn't a sense of "I" -- matter and energy have arranged themselves such that it freakin' FEELS like something. This is never-endingly amazing to me, no matter how many things like this I've read.

Expand full comment

Thanks for this review; it sounds like my kind of book. I have long favored an account of consciousness offered by William Powers in his 1973 book, Behavior: The Control of Perception. Unfortunately it's not a view that's easily stated, but Dehaene's view seems consistent with it. Powers offered his account on the basis of an abstract theoretical model of the mind, though he had little to no empirical evidence for it, offering only informal observations in addition to his model. It sounds like Dehaene has plenty of empirical evidence.

BTW, Scott reviewed Powers's book back in 2017, https://slatestarcodex.com/2017/03/06/book-review-behavior-the-control-of-perception/, but had nothing to say about Powers's account of consciousness and, on the whole, gave the book a mixed review. I've got a recent post where I quote Powers on consciousness at some length, https://new-savanna.blogspot.com/2022/08/consciousness-reorganization-and.html.

Finally, I agree with you on the so-called hard problem of consciousness: There's nothing there.

Expand full comment