407 Comments

When I read Winston Churchill, sometimes I get the sense he was a LessWronger born a few generations too early. Here he is on lab-grown meat: https://www.smithsonianmag.com/smart-news/winston-churchill-imagined-lab-grown-hamburger-180967349/

>A network of 86 billion neurons and 100 trillion synapses seems within reach of current hardware trends.

My understanding is that computers can't even simulate the behavior of a single neuron, because we lack a complete model of what a neuron is doing. The full biochemical details of synaptic growth/shrinking and axon/dendrite pruning (etc., etc.) are simply not understood in great enough detail.

Maybe we could create a "fake" neuron that behaves the same way, but it would probably have different architecture.
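To make the gap concrete: what simulations typically use instead is something like the leaky integrate-and-fire model below, a deliberately crude textbook stand-in (the parameter values here are made up for illustration) that leaves out essentially all of the biochemistry mentioned above.

```python
# Leaky integrate-and-fire "neuron": membrane voltage leaks toward rest,
# integrates input current, and emits a spike when it crosses threshold.
# No synaptic growth or shrinking, no dendrite pruning, no biochemistry.
def lif_spike_times(input_current, dt=1e-4, tau=0.02,
                    v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + r_m * i_in) * dt / tau   # Euler step of dv/dt
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant 2 nA input for 100 ms yields a regular spike train.
print(lif_spike_times([2e-9] * 1000))
```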


This review is pretty good. I decided against reading this book after the authors embarrassed themselves on Richard Hanania's podcast (with no particularly tough questions):

https://www.cspicenter.com/p/why-the-singularity-might-never-come#details

I really doubt their theory holds any water after listening to that.


I'm less concerned with AI becoming superintelligent than I am with humans overestimating its intelligence and giving it the power to do something dumb. We have a tendency to assume that non-human intelligence is inherently more correct, putting a ridiculous amount of faith in systems not nearly intelligent enough to warrant it. We extrapolate from "mathematics is deterministic" to "this logic machine was properly programmed and won't make mistakes" distressingly easily. Note both of the recent stories: the lawyer who didn't fact-check a ChatGPT-generated brief and turned in a filing to a judge with citations to non-existent cases (he might be disbarred for it, after 30 years of practice), and the drone AI that was frustrated that its (simulated; no one died) human handler was holding it back from earning points, so it first killed the handler and, in the next go-around, attacked the communication tower the handler was using to tell it not to attack what it wanted to. That level of intelligence, enough to understand that the communications tower was part of the chain of control back to the human handler, combined with the emotional control of a 3-year-old lashing out when told "no," is a big part of the worry.


> Complex systems can’t be modeled mathematically in a way that allows them to be emulated by a computer.

I apologize if this is mentioned in your review, because if so I missed it, but--do they engage with how this argument proves too much? If something is mathematically impossible for a computer to do, then it is also impossible for a human to do. And so if humans are doing *something*, the question is whether or not computers can also do that *something*, neh?

Jun 3, 2023·edited Jun 3, 2023

> Therefore, AGI—at least by way of computers—is impossible.

What? Maybe it's _difficult_ to run an AGI on a silicon-based, Harvard architecture CPU. _Impossible_ in full generality seems demonstrably false - what is the human brain, if not a ~20 W carbon-based computer? The smartest humans to ever exist (e.g. von Neumann) provide a strict lower bound on the kind of cognitive algorithms you can run with a 20 W power budget on such a computer.

The mechanical and algorithmic workings of the brain remain mysterious in many ways, and so far no one has succeeded at getting cognition with both human-level capability and human-level generality to run on silicon, through deep learning or other methods, even with power budgets much greater than 20 W.

However, the brain was designed by a blind idiot god[0]. While that god has had billions of years to refine its design using astronomical numbers of training runs, the design and design process is subject to a number of constraints and quirks which are specific to biology, and which silicon-based artificial systems designed by human designers are already free of. It seems unlikely that cognitive algorithms of the brain will remain out of the reach of silicon forever, or even for many more years.

A separate point: arguments about the limits of the physical possibility of AGI based on computational complexity theory are almost always vague and imprecise to the point of meaninglessness. When you look closely at what the theorems in complexity theory actually say, they have little to no direct relevance to the feasibility of building an AGI, or about the practical capability limits of such an AGI. I've elaborated on this point previously, for example in the second half of this comment on LW: https://www.lesswrong.com/posts/etYGFJtawKQHcphLi/bandgaps-brains-and-bioweapons-the-limitations-of?commentId=kHxHSBccb2CSwPZ8L and the footnote it links.

0: https://www.lesswrong.com/posts/pLRogvJLPPg6Mrvg4/an-alien-god


This is pretty convincing. I think that complex dynamical systems like AI are hard to predict, which is a reason to reject both the view that AI definitely won't come and the claim that it will definitely kill us all.


Wonderfully written!


I don't think I understand why they argue that chaotic systems, e.g. a double pendulum, are supposedly impossible to describe with computable algorithms. Wouldn't you just need really complex algorithms? Or simple algorithms that give complex results? I feel like it wouldn't be that hard to find an example of a computable algorithm that gives highly variable results based on initial conditions.
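(For what it's worth, about the simplest such example is the logistic map; a quick sketch:)

```python
# The logistic map x -> r*x*(1-x) with r = 4 is a one-line computable rule
# that is chaotic: two starting points differing by 1e-12 produce completely
# different trajectories within a few dozen iterations.
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.200000000000)
b = logistic_orbit(0.200000000001)
for i in range(0, 51, 10):
    print(i, abs(a[i] - b[i]))   # the gap grows from ~1e-12 to order 1
```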

I guess I'd have to read it but it doesn't sound like this book contains a strong argument for believing that human consciousness is not Turing-computable. My intuition is they're probably right and it's probably not - but are they actually proving it, or are they just saying "I reckon it's not" at great length?


While I don't think AGI is impossible (I believe NNs are superhuman at intuition and that a SAT solver is superhuman at logic), I wish everyone would reconsider blindly declaring that we know how (and that we will) move forward with AI.

I think NNs alone will be a dead end, much for the reasons stated above. But I can always write a SAT solver for the parts of the brain that are trying to muddle their way through some logic puzzle. I'm sure the human mind is more correct in its method than any possible NN, but its throughput leaves much to be desired compared to a SAT solver with some glue, or an NN pretending to solve a Sudoku puzzle.

I'm not sure how many subsystems in the brain need to be emulated and then integrated to make an AGI, but I'm also not sure we even know how to ask the right questions to design the replacements we'd "model".

> So is 86 billion the right number to be thinking about? Is it right to think about a number? The 1-millimeter roundworm Caenorhabditis elegans has only 302 neurons—its entire connectome mapped out—and we still can’t model its behavior.

This statement depends quite heavily on what is meant by model; what exactly is being claimed here?

Surely if you gave the task to a high schooler who managed to remake Pac-Man, they could mimic some of the worm's behavior?

Do they believe you need to remake all the quantum physics? Surely, while evolution is fairly good at its job, physics is a very bad computer, and evolution needed to clean up signals and sterilize the computation a neuron does to some degree, no different from a chem lab going out of its way to not let bacteria or dust or the weather affect its products?

> But this great diffusion of knowledge, information and light reading of all kinds may, while it opens new pleasures to humanity and appreciably raises the general level of intelligence, be destructive of those conditions of personal stress and mental effort to which the masterpieces of the human mind are due.

based


If I'm understanding the argument of the book correctly, then I think it comes down to confusion about the term "human-like intelligence". Does it mean being roughly as intelligent as a human, or does it mean being intelligent in the same way that humans are intelligent?

I buy the idea that it's very hard to make a machine that is intelligent in the same way as a human, and even harder to actually simulate a human brain at the neuron level. But I'm not convinced that it's not relatively easy to make something that is, in some sense, as smart or smarter than a human, without being very much _like_ a human at all.

I think the coming decades are going to challenge our idea of what intelligence actually is, as we start to create machines which are capable of human or superhuman intelligent behaviour but which work in a totally different way to our own brains.


I'm a bit of an AI skeptic, but their reasoning seems silly. It assumes intelligence requires the complexity of the brain. But as with the worm with 300 neurons, most of the complexity existed before the intelligence. So it is plausible that it is unnecessary for the intelligence, which may in fact be dictated by simple math. If AGI is possible, it's because the important stuff for intelligence is essentially contained in the connections between neurons and their strengths, as opposed to the way neurons communicate or work on their own. Given that intelligence emerges when there are lots of them, this model of the world makes sense, and may in fact be true. Even if it's not, AGI may still be possible if the connection graph is "Turing complete": the idea being that anything that can be represented with a chemical message between two nodes can be represented with simple connections between more nodes on the graph; this is just less efficient and requires more data to train and more memory and energy to run.


"there’s an uncountable infinity of non-computable functions"

Sorry if this is a stupid question, but isn't "uncountable infinity" a tautology? Is there such a thing as countable infinity?

"We may yet till our way into a cognitive dust bowl."

What a striking metaphor and poignant mental image. Well done.


I wrote some closely related arguments a few months ago.

See: Superintelligence is Not Omniscience

https://www.lesswrong.com/posts/qpgkttrxkvGrH9BRr/superintelligence-is-not-omniscience

and the links at the bottom, especially: AI Safety Arguments Affected by Chaos

https://wiki.aiimpacts.org/doku.php?id=uncategorized:ai_safety_arguments_affected_by_chaos

I think that it is easy to overstate these types of arguments, because it is often hard to prove that something is unknowable, but that there is something important here.

I did not know about Landgrebe & Smith until just now, so thank you for sharing!


As an active researcher in computability theory I'd like to express my extreme skepticism. Without going into the arguments in detail it's hard to know what exactly went wrong but let's just say this is the kind of argument people have tried to make for decades without any success.

Usually, these arguments rely on one of two fallacies.

1) They confuse the inability to exactly predict what a brain might do (e.g., because whether a flash of light is detected or a neuron fires might sometimes depend on tiny QM-level effects) with the inability to produce the same useful output.

When we claim the brain is a computer/Turing machine, the claim isn't that one could literally perfectly predict the output of a human brain given complete knowledge of its initial state. QM randomness alone is probably enough to kill that. The claim is that you could replace that brain with a computer and the resulting behavior wouldn't be something that caused their friends and family to see a difference, or that reduced their performance.

2) They try to use normal scientific hypothesizing to support theories which imply something is non-computable, ignoring the fact that their evidence equally well supports some complex computable approximation of that property.

For instance, you might see some output of a biological system and after a bunch of experiments say that our best theory is that the output is totally random. But those observations are equally consistent with a really good computable PRNG as well.
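A toy version of that point, using nothing fancier than a xorshift generator:

```python
# xorshift64: completely deterministic and computable, yet its output looks
# statistically random to casual inspection -- "our data looks random" never
# rules out a computable process underneath.
def xorshift64(seed=88172645463325252):
    x, mask = seed, (1 << 64) - 1
    while True:
        x ^= (x << 13) & mask
        x ^= x >> 7
        x ^= (x << 17) & mask
        yield x / 2**64          # float in [0, 1)

gen = xorshift64()
sample = [next(gen) for _ in range(100_000)]
print(sum(sample) / len(sample))  # ~0.5, just as a "truly random" source would give
```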

Now that may be obvious in this case, but it's harder to see when they layer on several levels of theory.

--

I disagreed with Penrose, but he at least made a compelling argument, assuming you accepted his implausible premise that mathematicians could always resolve any question in mathematics correctly given enough time. That's rarely true of the other people pushing this line.

And I say this as someone who doesn't find AI x-risk very compelling and even accepts the possibility that QM effects play an important role in our brains.


Good review.


I mean, it’s *conceivable* that chaotically amplified details of the initial conditions could make human brains impossible to simulate computationally, and in a way that was actually relevant to intelligence. I’ve even speculated about it myself. But unless this summary omits it, I don’t see even a shadow of a hint of an argument that we can be *confident* of that impossibility, which is what would be needed to refute AI-doom fears. If this were the strongest argument against Yudkowskyanism, I’d ironically see that as a compelling argument *for* Yudkowskyanism.


Also, as for computable algorithms being a countable subset of the possible algorithms: sure, but don't take that as proving anything.

Literally the class of algorithms definable (w/o parameters) in ZFC is also a countable class. That's just not a compelling argument that the brain has to do more.
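For anyone unsure what the counting argument actually says, it is just the standard textbook fact (nothing specific to this book):

```latex
% A set is countable when it can be put in bijection with the naturals.
% Programs are finite strings over a finite alphabet, hence:
\left|\{\text{computable } f:\mathbb{N}\to\{0,1\}\}\right| \;=\; \aleph_0
\;<\; 2^{\aleph_0} \;=\; \left|\{\text{all } f:\mathbb{N}\to\{0,1\}\}\right|
```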

I say this as someone who is genuinely open to the possibility of special quantum processes occurring in the brain. But if that's true, the benefit won't be in exceeding the computable but in offering a speedup in runtime.


The computability arguments here seem to be eliding the distinction between real numbers and computable real numbers. Yes, it's true that our computers can't compute uncomputable numbers (as one might have guessed from the name), but I see no reason to suppose that uncomputable numbers are involved in the functioning of human brains or any other process in the physical universe in any way that has observable consequences. Even if you assume that things like particle positions are actually uncomputable reals "under the hood", numbers whose decimal representations go on forever, it seems that only finitely many of those digits actually matter at any given moment in time; the effect of the rest is too small to affect anything whose consequences we could observe. Which puts physics, and therefore human brains, back in the realm where our computers could simulate them and do anything they can do, given enough time and memory. This isn't, like, mathematically provable (because it's a statement about the physical universe), but all known laws of physics comport with it and I haven't heard anyone propose anything that could plausibly be a counterexample.

(This is true whether or not you account for quantum mechanics. If quantum computers work the way we think they do, then given enough time, a classical computer can do anything a quantum computer can do; it just takes exponentially longer to do it.)
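As a sketch of what that "exponentially longer" means in practice, here is the brute-force state-vector simulation (a toy illustration of mine, not anyone's production simulator): memory doubles with every added qubit, but every step is ordinary computable linear algebra.

```python
import numpy as np

# n qubits need a vector of 2**n amplitudes, so cost grows exponentially in n.
n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard gate

# Hadamard on every qubit, built as a Kronecker product (an 8x8 matrix here).
U = np.array([[1.0]])
for _ in range(n):
    U = np.kron(U, H)

state = np.zeros(2**n)
state[0] = 1.0            # start in |000>
state = U @ state         # apply the gates

print(np.round(state**2, 3))   # uniform probabilities: 0.125 for each of the 8 outcomes
```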


First, I very much agree with the Straussian reading. Back when I was reading one of your articles on alignment, I was thinking "this is the halting problem." You're in a mind game with an arbitrarily complex computation system; you can come up with some trick that will address some specific set of situations, but it won't address the meta-situations, and you'd wind up having to come up with an infinite number of distinct tricks. Biological NNs, ANNs, and general Turing machines all exhibit irreducibly complex behavior that cannot be understood or fully analyzed by BNNs, ANNs, or TMs. So I don't think we will ever clinch safety or alignment.

However, I was really annoyed reading the rest of the article, this does not make sense to me. People have a very poor understanding of what Turing machines are and what they are capable of, and an even worse understanding of uncountable mathematical leviathans like the real numbers.

You do not need to model the full behavior of a whole system in real time on a real machine for it to fit within the capacity of a Turing machine. _All_ you need to be able to do is individually approximate each type of subcomponent and subprocess via a discrete logical process. Don't model a whole neuron; model the transfer of a single type of neurotransmitter across a synapse. Approximate it to the Planck scale if someone complains your approximation isn't precise enough to capture some emergent chaos phenomenon that allegedly arises when we don't have rounding errors. Make sure you can glue the subprocesses together with discrete approximations, and the whole system can be simulated with large enough resources.

Turing machines do not care about 100k RNA vs 1 RNA; they are not at all scared of combinatorial explosion. Turing machines are capable of insanely complex behavior. Turing machines with 10 states can execute more operations prior to halting than the largest number any human has ever described, but we don't know what that number is because it's too hard for us BNNs to figure it out. Non-halting Turing machines can exhibit non-patterned, non-repetitive, chaotic infinite behavior. On the flip side, it's pretty bizarre to suggest that some meaningful emergent phenomenon is lost with the culmination of Planck-scale rounding errors. A system which, for all its complexity, still seems to be defined by a number of discrete neurons each communicating with other specific neurons in a fairly reducible (read: reduced) way, probably has not biologically evolved to take advantage of chaos that breaks away from its own mechanics. Take DNA: it has developed a very discretized form over billions of years and a ton of protections against mutation. While adaptation does rely on some random mutation, overall DNA has adapted very heavily to prevent, avoid, and fix mutations, because it does not want chaos to interrupt the mechanisms it has carefully built up to preserve order and intended mechanical functioning. BNNs have a lot of mechanics and order to them; we should not spuriously expect them to rely on some mad chaotic emergent phenomenon to accomplish their basic tasks and functionality.

On the real numbers: the idea that the brain is somehow correctly reflecting real numbers, in an incomputable way, is even more absurd to me. The real numbers are an eldritch horror, beyond the comprehension of people who think they comprehend the most eldritch of eldritch horrors. Not only are 100% of real numbers irrational, transcendental, and incomputable, but 100% of real numbers are _undefinable_. Turing machines are capable of an endless array of chaos and insanity and weird incomprehensible quasi-order that emerges within the dark depths of that insanity, but the real numbers are capable of an _uncountably endless_ amount of all of that. There is nothing that could possibly exist in the real world that can capture any non-infinitesimal percentage of the capacity or chaos of the real numbers. What's left, after you pull back from an uncountably infinite set like the real numbers, is (almost*) invariably a countable set. And countable sets can be enumerated by Turing machines.

As for the S curve, cars and autos are already at the top, and they have 100% outmoded and obviated the horse. Drones are close to the top, and their capacities are on par with a hummingbird at a comparable size and weight. Of course the singularity may follow an S curve in the long run, that's not the issue. The issue is that long before it hits the inflection point, it will be jillions of times more capable at every task than all humans combined. From our POV, it's a singularity. It would actually require superhuman intelligence to even observe the inflection point, let alone the second elbow.

* there is no definite answer to the continuum hypothesis, because ZFC is agnostic to its truth or falsehood, but I don't think superintelligence doubters are going to put together a coherent argument that yes there is a strictly-in-between set and that reflects the gestalt capacities of the human mind by being between the too-big of the reals and the too-small of the computables.


> And by impossible they really mean it. A solution “cannot be found; not because of any shortcomings in the data or hardware or software or human brains, but rather for a priori reasons of mathematics.”

Is this a long way of saying that something is not predictable?


> Complex systems can’t be modeled mathematically in a way that allows them to be emulated by a computer.

This sounds like a category error to do with the meanings of "modeling" and "emulation". It might be mathematically impossible for a model to accurately emulate the output of a *particular* human brain, but that's not necessary for AGI. All we need is a system which behaves in a manner such that its output is plausibly brain-like (in some difficult-to-define way). It's not like a weather forecast model, which is useless unless it tells us something specific and testable about the real weather on this particular planet.


I think the flaw in "AI must be a computable function" is that functions that take I/O are not exactly computable either. (Not without knowing all future inputs, anyway, which could be tricky when the inputs are "everything your robot sees and hears from now until it shuts down.")

Sure, it's impossible to sever a human from their environment and make neat predictions about their future behavior, but the same is equally true for an AI that takes inputs from the environment.


Regarding the book, as presented by the review:

> 1) Building artificial general intelligence requires emulating in software the kind of systems that manifest human-level intelligence.

> 2) Human-level intelligence is the result of a complex, dynamic system.

> 3) Complex systems can’t be modeled mathematically in a way that allows them to be emulated by a computer.

> 4) Therefore, AGI—at least by way of computers—is impossible.

I realize that this is a review and not the book itself, but I want to see the work here. Some problems I have include:

1) This is not necessarily true. We don't know what intelligence is or what causes it. There may be ways to construct AIs that don't require emulating a brain; I'm pessimistic about classical AI, but I don't want to rule it out entirely. Hybrid classical/neural systems are also a possibility. I personally tentatively believe that if a human brain was completely emulated in software down to some specific level of detail, it would work as well as a meat brain and could and should count as "human". But I'm not convinced that that level of precise mimicry is needed to create "intelligence", broadly speaking.

2) The words "complex" and "dynamic" hide a lot of magic. We don't know how much of what the brain does is needed for what we think of as intelligence. It could be like human eyes vs. octopus eyes, where there's better brain designs out there, but we went down one path early in our evolution, and now we can't get there from here. We're definitely limited by human pelvis size, but that wouldn't be a problem if babies came out through the chest like the aliens in "The Color of Neanderthal Eyes" by Tiptree.

3) Again, this depends a great deal on the definitions of "complex" and "dynamic" that are being used. As presented, this feels like a motte-and-bailey, where we agree that something fits a colloquial definition, and then the rest of the argument assumes a technical definition. Maybe the book isn't like that? In any case, yes, bog-standard computers have a hard time with that stuff. They also have a hard time doing things like 3D graphics and mining cryptocurrency and training LLMs, but we came up with some specialized processor designs for that. And even if there's a proof that it's impossible to do that, maybe a full emulation isn't necessary to create "intelligence". Or some sort of biological substrate could be plugged into a slot in the machine (hello, Macross Plus). As I understand it, the "deep learning" revolution was largely about abandoning the earlier version of artificial neural networks, which involved trying to mimic neurons and synapses, and which involved limiting the architectures to ones that could be mathematically modeled. Instead, they simplified the processing so it'd run faster, added scale, and added a bunch of stuff that seemed like it might work better, and eventually some of it did.

Regarding the review:

I find this review somewhat disappointing, but I can't really blame it for that. The review doesn't present the math, the review writer might not have understood the math enough to present it, I probably wouldn't understand the math even if it were presented, and without the math the argument doesn't hold together. I'm left to hope that someone here has read the book and understood enough of the math to comment intelligently. But other than that, the review was short and solid and presented its take concisely, without the common ACX-book-review failure mode of going off into Scott-style digressions that few people other than Scott seem to be able to pull off. So I applaud it for that.


How many Dutch babies could you feed for the price of Warhammer 40k as a hobby? These are the kinds of questions I need an AI to answer for me.


“ Complex systems can’t be modeled mathematically in a way that allows them to be emulated by a computer.”

e.g. Weather.

“In physics, exponential curves always turn out to be S-shaped when you zoom out far enough.“

e.g. temperature and co2 curves during the last four glaciations.

So - let’s not trash the economy on the back of some dodgy UN IPCC models which run hot and do not hindcast.

Jun 3, 2023·edited Jun 3, 2023

> the systems composing intelligence are non-ergodic (they can’t be modeled with averages), and non-Markovian (their behavior depends in part on the distant past).

How is this supposed to work exactly? The brain is made of atoms in some given configuration. It is in a sense a machine with some state (the current arrangement of its composing atoms) that is subject to unpredictable quantum and thermal noise. That's where the chaos/dynamical system properties come in.

How is the past supposed to affect the future if not by giving rise to the present, which produces the future? It's possible to build a digital infinite impulse response filter with enough precision that some past stimulus never decays to insignificance. Is a cryptographic hash function less practically chaotic because it's fully deterministic? How is any of this relevant except as a technicality to the effect that a human brain can never be simulated perfectly? You might as well complain that a rock can never be simulated perfectly.
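Concretely, the kind of filter I mean (a toy sketch):

```python
# First-order IIR (infinite impulse response) filter. With feedback a = 1.0
# the very first input never decays out of the state, so the present carries
# the distant past -- even though the update rule only looks one step back.
def iir(inputs, a=1.0):
    y, outputs = 0.0, []
    for x in inputs:
        y = a * y + x
        outputs.append(y)
    return outputs

impulse = [1.0] + [0.0] * 9
print(iir(impulse))          # the initial impulse persists indefinitely
print(iir(impulse, a=0.5))   # with a < 1 it decays geometrically instead
```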

Now consider that despite being "only" able to compute computable functions, computers are much better than humans at simulating chaotic systems accurately. No human will ever stare at weather radar data and then predict the rain and the clouds more reliably than the NOAA supercomputers do. They're not going to model turbulence better than a computational fluid dynamics package. Dynamical systems can be modeled with accuracy limited by knowledge of their initial conditions, the amount of compute available, and quantum mechanical randomness. Weather forecasts improve because the first two things get better over time.

Jun 3, 2023·edited Jun 3, 2023

Sorry for being blunt, but the premise of the book is utter nonsense, and the review fails because it does not call this out. I would write a detailed rebuttal, but fortunately I don't have to, because there was a second review of the same book in the contest, which was way better. It tears apart the hypothesis of the book and is also a very nice read. So I can recommend it to everyone:

https://docs.google.com/document/d/1D2MGZ7HW1vRtOtfXYIx9BBUt6ubjEA2n06gpoHcxaFY/edit#

To cite one passage from the review:

"

The problem is, if you accept this argument, you end up making statements like, "machines cannot learn to conduct real conversations using machine learning," which happens to be a direct quote from the book. There probably exists, somewhere, some definition of a “real” conversation that excludes all interactions I’ve had with chatbots, but their statement really flies in the face of my experience with ChatGPT. As everyday AI systems continue to advance, L&S's objections increasingly lose their potency. Many of their claims simply don't hold up when tested with today’s capabilities.

"

To drive home how detached the book is from any AI progress in the last 5 years, it suffices to look at the book's list of things that AIs may be able to do. Not today, but ever. Again from the other review:

"

L&S claim to be AI optimists and end with a list of what AGI can do. But their list seems terribly myopic. They are proponents of AI for non-complex systems. They say AI works well in logic systems where it’s possible to model the system using multivariate distributions and in context-free settings. This includes, they tell us, the solar system, the propagation of heat in a homogenous material from one point, and the radiation of electromagnetic waves from a source. They tell us there are applications in industries such as farming, logging, fishing, and mining. But if the requirements of being non-complex are not satisfied, AI can at best provide limited models.

"

It should be obvious to anyone who has interacted with ChatGPT how ridiculous this list is.

If you want to understand more precisely what the premise of the book is, go ahead and read the other review in full.


I think there's a problematic equivocation here between "we're nowhere near to doing it" and "it's mathematically impossible" as regards whole-brain emulation. If the brain is wholly made of neurons, and neurons are wholly made of atoms, and atoms obey physics in a predictable way, it seems that we *can* in theory do whole-brain emulation with a Turing machine, mathematically. Pointing out that this requires insane amounts of compute, and a level of understanding of biological neurons that we still lack, doesn't change the hypothetical.


As a theoretical physicist and ML researcher, it's rare to see quite this many freshman-level misconceptions about physics, determinism, chaos, computability, "complex systems", deep learning, and so on, all in the same place. Nice roundup.


Very fine review - too short to win, but made my day. Have to go now to buy German baby formula at 9.90€ / kg. https://www.dm.de/babylove-folgemilch-2-nach-dem-6-monat-p4066447208085.html


The following is wrong:

> We could illustrate with examples like the Entscheidungsproblem, but it might be more intuitive (if less precise) to point out that computers can’t actually use real numbers (instead relying on monstrosities like IEEE 754).

Digital computers can in fact use real numbers. It's inefficient, but they can.

See https://en.wikipedia.org/wiki/Computable_analysis
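A minimal sketch of the idea, using the standard "a real number is an algorithm that emits arbitrarily good rational approximations" representation (the sqrt(2) example below is my own toy, not a library API):

```python
from fractions import Fraction
import math

# A computable real, represented as a function from n to a rational within
# 2**-n of the true value; here, sqrt(2) by exact-rational bisection.
def sqrt2(n):
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2**n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

print(float(sqrt2(60)), math.sqrt(2))  # the 2**-60 approximation rounds to the same double
```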


What is specifically human is what a machine can not yet do.


The transfer of intelligence to a computer is a matter of executable functions that have been programmed. What the AI community has done is integrate algorithms with computers, so they can emulate logic systems. This is how humans work and reason, but humans are distinct in one aspect: they are emotional beings. This is an attribute that I think machines lack at the moment (I hope I am wrong). Will new, more advanced algorithmic models be required to attain AI in machines?

In the newsletter AI: The Thinking Humans, the author pointed out noteworthy limitations of AI, especially when it is compared with how humans reason. The author posited that machines have been made to process language, solve problems, and learn just like we do. This is tremendous progress that the AI community has attained, as improvements continue to be churned out every month.

I agree with the fact that machines do not fully have the intelligence that humans have. This is especially true in forming concepts and abstracts. This is majorly attributable to the programmed nature of AI, but this is understandable given that it is still learning.

But, despite immense mimicking of humans, AI is not yet intelligent enough to scan an environment and respond with the highest degree of precision and specificity. Humans often do this with ease.

In short, AI does not have the flexibility of humans in situational thinking. This is what I encountered recently; check out my take on AI and how I interacted with it.

https://open.substack.com/pub/thestartupglobal/p/my-encounter-with-ai-assisted-chatbot?utm_source=direct&r=m5mq1&utm_campaign=post&utm_medium=web


Has anyone actually proven, or at least made an argument accepted by the scientific community, that the human brain is definitely more than a really, really complex Turing machine?

My standard of proof here is "better than any known arguments for the existence of one or more gods".


This review made me feel smarter after I had read it, which is always flattering to one's ego and makes one look kindly on the reviewer.

I think it's a good review nonetheless, and if the reviewer is left uncertain whether the "no" side's position is as strong as claimed, they are in much the same state as the rest of us. We do build complex systems all the time, and what is intelligence exactly, and does AI need to be human-style intelligence anyway?

I think we confuse human-STYLE with human-LEVEL intelligence all the time in this debate, and that leads us down wrong paths. I think we can get a very smart 'dumb machine' that will be able to mimic or exhibit human-*level* intelligence but that doesn't mean it *understands* anything of what it is doing.

And the real risk will always be the humans using the AI, not the AI itself.


The argument about real numbers sounds weird. I always assumed that real numbers are not "real" (i.e., physical), and are just an abstraction to simplify equations in physics, while in reality space/time is quantized around the Planck length/Planck time. Otherwise all information-related arguments from physics (like information conservation) wouldn't make sense, as a single real parameter carries an infinite number of bits of information.

Maybe physicists can clarify this?


Is this actually an argument that very powerful computers can’t/won’t murder us all, or just that they won’t really be conscious? After all, if we can build a computer that’s really good at Go, why can’t we build one that’s really good at killing everyone even if it’s not “intelligent.” (Assuming anything we can do intentionally we can do by accident).


Odd for a book review to begin with a polemic against books, no? Much less an obviously flawed polemic which is ostentatiously unsupported.

There is certainly a *class* of new nonfiction books which are effectively a single blog post, but with good press. Particularly stuff in the gravity well of politics. But I've been captivated by, learned from, and enjoyed the process of reading any number of modern nonfiction books. "Quantum Computing Since Democritus," "Song of the Dodo," "Reading Lolita in Tehran," "Oxygen: a Four Billion Year History," and "Seeing Like a State," to take a few at random from across the spectrum.

Perhaps the author just hasn't made reading long-form works a priority, and is familiar only with the glitzy and heavily advertised stuff? But it dramatically weakens the review right out the gate, to know that the author of the review has very little basis for comparison to other works.


I tend to think that questions about how much of a brain you'd need to emulate for intelligence are tied into notions of identity. A young child might sometimes worry that going to bed will mean dying. And they sort of have a point. When sleeping consciousness is interrupted, different connections in your brain will form and break. And certainly the RNA floating around inside it will change for all sorts of reasons down to glucose levels. Come to think of it, eating a sugar cube will change our metabolisms and so many of the things they argue are hard to simulate.

But I think most of us don't worry about the continuity of our identities after sleeping or eating and I think we're essentially right to not worry. In any sort of information processing you're constantly fighting against noise trying to make its way into your system and disrupt the work you're trying to do. If turning our heads to look at something new disrupted our train of thoughts that would be much worse than the brains we do have. So both human engineers and natural selection seem to create systems that can suppress noise below a certain threshold with various mechanisms you might point to in both neurons and transistor logic gates.

And you can actually use transistors to work with real number analog systems if you really want to. It's just that these circuits are fragile and you certainly don't want to try to use them for an overly long series of calculations.


Is there a better way of listing these book reviews in the sub-headers? "Finalist #3" gets read as "this is the third-best book review".

Even "Entry #3" might fix it.


There is no evidence consciousness is substrate independent. Everything is carbon based. Even in religious texts, non-human consciousness ends up in carbon, such as pigs. This modeling we do with silicon reminds me of the story where a wooden airport was made by natives expecting planes to start landing there.


I'm a fan of this phrasing: "In physics, exponential curves always turn out to be S-shaped when you zoom out far enough"

* I think this view of human thought, as somehow so complex and special that we can't algorithmize it, is foolish and magical thinking

* We are building complex networks of capabilities into machine systems. I suspect that higher-order capabilities will just emerge from these and never actually be designed explicitly

* I really like my superficial understanding of Michael Levin's concept of Cognitive Light Cones, where you can describe a tick, a dog, a human, and a potential AI system with light-cone-style diagrams. It helps me frame some of the dimensions of backwards-looking memory and forwards-looking planning, along with the scope of the largest goal a system can track. It's a framework, not a theory of physics

Rate an AI system's goal-time horizon (immediate feedback of the next word in an LLM), memory (training-data depth), and the size of the goal (immediately finding the next word), and you have an idea of how to measure the current capability of AI models. I'm not an expert, but this is how I would write an algorithm for measuring the goal-seeking capacity of an AI model or AutoGPT-like system.
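Something like the toy scoring sketch below; every field name and the log-scale combination are my own arbitrary choices, not Levin's framework and not a real metric.

```python
import math
from dataclasses import dataclass

# Three "cognitive light cone" axes, combined on a log scale so no single
# huge axis dominates. Purely illustrative numbers and formula.
@dataclass
class CognitiveLightCone:
    goal_horizon_s: float   # how far ahead the system plans, in seconds
    memory_depth_s: float   # how far back its usable memory reaches, in seconds
    goal_scope: float       # rough size of the largest goal it can track (arbitrary units)

    def score(self) -> float:
        return sum(math.log10(1.0 + x) for x in
                   (self.goal_horizon_s, self.memory_depth_s, self.goal_scope))

# e.g. a bare LLM: plans ~one token ahead, "remembers" a huge training corpus,
# tracks only the goal of predicting the next word.
print(CognitiveLightCone(goal_horizon_s=0.1, memory_depth_s=1e9, goal_scope=1).score())
```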


This is a very well-written and interesting review, thanks.

I am unsure if self-promotion is acceptable in the comments, but I recently wrote a blog post/essay that touches upon many of the same points.

Unlike the authors of the reviewed book, I do not believe that AGI is literally impossible, but I am somewhat skeptical that it is as imminent as many people expect/hope/fear. I present some general arguments along these lines.

If anyone is interested, here is the blog post: https://www.awanderingmind.blog/posts/2023-05-31-the-case-against-intelligence-explosions.html.


Good review, sounds like an interesting book with a technically-true-in-a-philosophical-sense premise (and title). I would say that very few people concerned with AI x-risk are concerned about having artifical 'rulers' -- most are concerned with having an out-of-control golem of some kind killing or immiserating everyone. A self-replicating landmine wouldn't 'rule' any territory but that's cold comfort to someone who just lost both legs.

Jun 3, 2023·edited Jun 3, 2023

This is a really well-written review, but if this is one of the strongest arguments for the idea that AGI is in any sense impossible, I just don't see how it could imply that even in principle. It could maybe, conceivably though I doubt it, imply that no digital computer that is limited to the operations we know how to build into circuits can duplicate the behavior of a human mind? Which would be an argument against Hansonian ems? Basically:

1) There exists at least one regime of physical systems, regardless of whether you call them computers or not, that weighs 3 pounds, runs on 20W, and is as intelligent as a human. It arose under an extremely constrained and inefficient design process. This is already proof by construction that "AI is impossible" is false, the rest is arguing whether we need a different type of hardware to achieve it, or whether humans are incapable of developing such hardware.

1a) Note: if you claim that humans can't develop such hardware, but evolution can, then that is proof that you don't think human intelligence can replicate or model some subset of physical reality, which means intelligence is not dependent on that subset. Arguing that a digital software system also can't model some subsets of physical reality with absolute precision, then, is just not an argument about whether that system is human-level intelligent.

2) Of all the things the book claims computers can't do, is there even one that a brain *can* do? If so, which? If not, then who cares, and why should that be relevant to intelligence? Our minds can't model a brain's complex dynamics precisely either. We're actually much, much worse at modeling complex dynamical systems than our computers are, even without AI. That's why we use computers in such research.

2a) A computer is *also* a complex dynamical system. The digital nature of its inputs and outputs is an approximation we impose on it in the way we engineer and use it. A transistor takes physical (analog) voltage inputs and, with high but not perfect reliability, compresses them into one of two much narrower ranges of outputs to feed into the next circuit elements. This is not the same as a neuron, those are more complicated, but neurons do also take analog inputs (electrical and chemical) and convert them to a much narrower range of possible outputs (fire/not fire, release/don't release/remove neurotransmitters). This is all very separate from the question of whether the aspects of the brain's behavior that we care about when we talk about intelligence actually depend on the hard-or-impossible-to-precisely-emulate aspects of specific hardware.

3) We don't actually know if physics uses general real variables, at all. It's not like a human can do non-symbolic calculations with them, nor can we do symbolic calculations with more than a miniscule subset of them. The alternative would imply "there exists some arrangement of brain matter that can perform hypercomputation" which... would be an amazing discovery, one that would upend so much of what we know about the world.

3a) If the claim is not only this but also a claim that human intelligence relies on and makes use of hypercomputation, then it means a wide range of poorly controlled environments can perform useful hypercomputation every second of every day. Among other things, that should let us do things like solve the halting problem or compute BusyBeaver(n) for any n. I look forward to the authors and those who agree with them taking over the world with this extraordinarily powerful knowledge.

Jun 3, 2023·edited Jun 3, 2023

"ATU 325 is heady stuff." Love that quote.


> Landgrebe and Smith argue that the “mind-body continuum” is a complex system. It’s a dynamic negentropy-hunter built out of feedback processes at every scale. The human brain is not a computer, and no known mathematics can describe it in full.

Let's imagine that we have a computer that COULD simulate the interactions of neurons perfectly. However, it would not accurately emulate those interactions. Imagine a person sitting in a room, and next to them is a computer chip running an emulation of their brain. The temperature of the room is raised to 100 degrees Fahrenheit, and they begin to sweat as their body attempts to maintain a healthy temperature. The fan on the computer chip spins up to try to deal with the heat as well. Both of these actions (as well as the temperature itself, hot air particles impacting things, etc.) create small, chaotic variations in the two systems (human brain, computer chip). Given that they're complex systems, the human brain and the computer chip will eventually deviate, not due to any difference in noticeable or interesting stimulus (e.g., videos, reading a book, music playing in the room), but just because this neuron in the human's brain didn't do its job quite right because it was too hot, and this transistor in the computer chip didn't do its job quite right because it was too hot, and the neuron and the transistor aren't the same.

That's almost certainly true, I'll agree. You cannot accurately emulate or predict a human brain, for this simple reason, and particularly over a long time scale.

But that doesn't mean you can't create a human brain-like thing? It's fine if the computer chip doesn't spit out the exact same end result, as long as it can perform similar tasks, remember important events, etc. The human brain isn't emulating anything to any great degree of fidelity, and it still solves all sorts of problems.


"Objectifying intelligence is what sets humans apart from dolphins, beavers, and elephants."

Those types of statements annoy me a little. What if we discovered that dolphins, for example, do possess objectifying intelligence? What would that change?

I mean, yes, a dolphinologist would have to change their minds about dolphin behavior. And yes, we (the royal we) would have to find some other way to differentiate humans from dolphins, but differentiating "humans from animals" is usually not the point unless the question at hand is, "how are humans different from other animals." In the case presented in this review, AGI would or would not be possible regardless of whether humans are unique in that type of intelligence.

This isn't a jab at the reviewer or even the book under review. It's more of a grumbling expression of annoyance at a trope many of us (me included) use but don't usually give a lot of thought to.


Veering hard left for a second into psychology, I read the cyberneticians when I was sixteen and came to this same conclusion about AGI in almost the exact same words. Granted, no one can know for certain, but one of my deeper held metaphysical convictions since then has been that there is nothing even approximating a valid eschatology in real life. I define this as anything which marks a stepping off point into a 'better' world. Better can include horrible outcomes like wireheading or skynet, because they are nonetheless conceptually and archetypically purer. I consider singularity, especially as it relates to utopic ideas of post-scarcity and the nullification of biological groundings for things like inequality, prejudice, evil, etc., to be the supreme example here.

You might be surprised, then, just how disturbing this idea can be when strongly argued to certain 'secular' people. Even just as a thought experiment, I have found that many people are not emotionally comfortable with the idea of history as usual ad infinitum.

You can see how holding this conviction, especially as a frequent flyer in ideological spaces (whether political or futurological, they often overlap per above) would cause me to increasingly feel that these ideas are expressions of personality more than rational outlooks. This reinforces my belief. But then again, having a generally psychoanalytic disposition will make you think that about everything.

If nothing else, I wanted to share how interesting it has been for me personally to meditate across time on the somewhat nihilistic outlook of all this being bunk. Are you neutral about this proposition, negative, or even positive?


Ideology is what has led to death and destruction on a massive scale since at least the French Revolution. When you watch something about the Nazis or the Bush/Cheney administration I think most people think—wow, those beliefs are super dumb! How could anyone ever believe gassing Jews would lead to prosperity or slaughtering innocent Muslims would make them want to adopt democracy and love America?? So I believe AI would be less likely to adopt an ideology than a human and so I believe the destruction of the world is more likely to come from a human.


> The authors give special attention to language, and they go so far as to argue that mastery of language is both necessary and sufficient for AGI.

I'm working on a blog post making the argument that embedding spaces are in fact the type of language we use in our minds, and that this provides a nice model of consciousness as a computational process.

https://sigil.substack.com/p/a-creeping-suspicion-about-consciousness


https://indica.medium.com/the-worst-case-scenario-for-ai-is-already-here-b224721402ee

Provided for comment and not necessarily for truth.


"If you’re a physicist or engineer, your daily bread is a chunk of reality that’s amenable to mathematics."

Woah - I originally read this backwards. I pictured a physicist or engineer giving mathematical contemplation to a slice of sandwich bread.


Churchill = Based.

This is the best review so far largely because it seems to understand the point of a review.


I propose this:

The single most significant driver of human intelligence is the need to physically survive.

Eat, drink, breathe, stay warm, etc.

Where’s the analogue to this in AI?

None.

AI is only going to be as smart as the subset of human intelligence captured in what we have recorded. But it will interpret that without any of the essential underlying assumptions humans share, and those assumptions are very important, and severely underestimated in this discussion, imo.

Sex, for example: no end of writing and pictures recorded. A language learning model, or something like that, could feast on this information. If you could embody that in a reasonable physical facsimile, that would be a game changer, right?

But if you can't do that, what are you left with? Basically a concrete block that knows everything human beings have ever written about sex but with absolutely no idea of what all that information is referring to.

Or

Are we just so ..like…. over that it don’t matter?


Are we worried about creating something that will outcompete us for essential resources to live? Will the thing that we create want to live more than we do? Or will it assume that the correct thing to do is just to destroy anything that could potentially be a nuisance. And why would they believe we were a nuisance? They like fucking and eating ice cream and we’re taking up too much of that?

There is no question that we are capable of teaching machines to behave precisely in that way, but is this not a bigger issue about what we have to offer and less about the terror of the new machine?


This is really a big conversation about a certain vein of humanity having a child together, and there are some serious parenting debates going on.


"The authors give special attention to language, and they go so far as to argue that mastery of language is both necessary and sufficient for AGI"

It's already able to write better than quite a lot of fanfiction writers and college student essay writers. Does the author of the book think humans are general intelligences?

Jun 4, 2023·edited Jun 4, 2023

"We reject materialism. It follows that machines can never be like humans." - I don't see what the other 300 pages of the book would be needed for.


> We could illustrate with examples like the Entscheidungsproblem, but it might be more intuitive (if less precise) to point out that computers can’t actually use real numbers (instead relying on monstrosities like IEEE 754).

Ouch, that hurt. Let's unpack that.

A Turing machine cannot solve the (general case of the) decision problem, or equivalently, the halting problem. But neither can humans! If humans had that ability (e.g., if they were Turing oracles), proving Goldbach's conjecture would be easy. In fact, I suspect that oracle machines would make quick work of quite a few open math problems. (More abstractly, I guess they could be used to find out whether a finite-length proof exists for any math problem, excepting anything powerful enough to describe the behavior of oracle machines?)
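To spell out the Goldbach example: the search below halts if and only if the conjecture is false, so a halting oracle applied to it would settle the conjecture without ever running the loop (the helper names are mine, and obviously you would not actually execute this).

```python
def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

# Halts exactly when it finds an even number > 2 that is NOT the sum of two
# primes, i.e. exactly when Goldbach's conjecture is false.
def goldbach_counterexample_search():
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n
        n += 2

# goldbach_counterexample_search()  # as far as anyone knows, this never returns
```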

The argument would work if the goal was to prove that finite state machines cannot be generally intelligent. FSMs notably do not have unbounded memory, while humans do, which prevents them from solving certain problems, like determining whether a string is of the form (a^n)(b^n), which a human (with a pencil and unlimited paper) could in principle solve. It falls flat for Turing machines because, to the best of our knowledge, humans are not fundamentally more powerful than them.
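Concretely, the counting that needs unbounded memory (a quick sketch; it accepts n = 0 as well):

```python
# Recognizer for the language a^n b^n: impossible for a finite-state machine
# (it would need a distinct state for every possible count), trivial with any
# unbounded memory, here a single integer counter.
def is_anbn(s: str) -> bool:
    count, i = 0, 0
    while i < len(s) and s[i] == 'a':
        count, i = count + 1, i + 1
    while i < len(s) and s[i] == 'b':
        count, i = count - 1, i + 1
    return i == len(s) and count == 0

print(is_anbn("aaabbb"), is_anbn("aabbb"), is_anbn("abab"))  # True False False
```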

The real numbers and IEEE 754 quip seems just as misguided. Unsure if it was paraphrased from the book or was added by the reviewer.

Here is the thing about real numbers: in practice, they are terrible to handle. Don't think pi; think "solution to x = cos(x)". Any real number can be represented as the limit of a Cauchy sequence. Both humans (mathematicians) and computers (formal theorem verification systems) can juggle such representations just fine, while the rest of us use abstractions which are deemed "close enough" to the reals most of the time (of course, we tend to mix these with theorems which are proven for the reals, like the chain rule for derivatives).

Before IEEE 754 floating point, engineers did not use reals (as in handling Cauchy sequences) much. What they used were slide rules. These are amazing machines for getting approximate answers. Like floating point numbers, they work great for multiplication/division, not so great for subtraction of quantities of almost equal size, or for calculating the modulus of the height of the Eiffel tower with regard to the Planck length (for some reason, this never comes up). IEEE 754 is basically the electronic representation of the same: a number consists of a sign bit, a mantissa, and an exponent. This means that the relative accuracy of any representation is within the same order of magnitude, which is sufficient for most physics uses. The relative representational error for the mass of an atom or the mass of the sun will be equal, which is fine because we cannot determine the mass of the sun to the same precision as the mass of an atom anyhow.
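A couple of lines of Python showing that slide-rule-like behavior, constant relative error and therefore terrible subtraction of near-equal quantities:

```python
big = 1e16
print((big + 1.0) - big)        # 0.0 -- the +1 fell below the relative precision of a double
print(0.1 + 0.2 == 0.3)         # False -- neither side is exactly representable in binary
print(abs((0.1 + 0.2) - 0.3))   # ~5.5e-17, i.e. tiny *relative* to the operands
```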

Decent computer languages are very upfront about IEEE 754 not being equivalent to the set of real numbers (almost all of whose members cannot be represented by a finite amount of memory), calling floating point numbers float or double. Using approximations is, of course, cheating, so the trick is to know what you can get away with and what will mess up your results. I think the people who deal with that call it numerical stability.

(For handling probabilities, we do get great representations for small probabilities, but probabilities of the form (1-epsilon) are a problem. This leads to separate functions which take epsilon instead of 1-epsilon, which is a pain in the neck. Other than that, I think floating point numbers are fine.)

Jun 4, 2023·edited Jun 4, 2023

This review, with its jargon and moments of editorialization towards certain philosophers, seems like it might appear effective to LessWrongers, but *only* to LessWrongers.

Can't say I found it a very captivating review.

Expand full comment

"The 1-millimeter roundworm Caenorhabditis elegans has only 302 neurons—its entire connectome mapped out—and we still can’t model its behavior."

I asked ChatGPT about this and it cited this paper: https://www.biorxiv.org/content/10.1101/2023.04.28.538760v1

"Recent research indicates that there has been some success in modeling the behavior of C. elegans. For example, a 2023 study explored whether the GO-CAM (Gene Ontology Causal Activity Modelling) framework could represent knowledge of the causal relationships between environmental inputs, neural circuits, and behavior in C. elegans. The researchers found that a wide variety of statements from the literature about the neural circuit basis of egg-laying and carbon dioxide avoidance behaviors could be faithfully represented with this model. They also generated data models for several categories of experimental results and representations of multisensory integration and sensory adaptation. "

Expand full comment

So what if computers will never think in the way a human being's brain does? They still might guess & check us to death.

Expand full comment

Thank you ACX for writing this great article. It's enlightening and helps us better understand the topic of AI and its dangers for humanity. Some of us can fully agree that Husserl's influence greatly affects one's capacity to even start seeking and understanding the truth, especially at the beginning of one's learning journey.

Expand full comment

Definitely possible. Another option: how old and eminent are they? For example, I suspect the reason Chomsky managed to publish a NYT piece with assertions ("AI will never understand this kind of word structure," etc.) that were already obviously wrong at the time of publishing is that he is old and revered, so nobody wants to call him out on bullshit.

Expand full comment

I think the tone here tips the writer's intention so heavily at the very beginning that it's clear he never intended to give it a fair review. And I think that position is eminently justifiable from the text itself here.

Expand full comment

A couple of months ago, I had the opportunity to speak with the authors of this book. They were both nice people and clearly clever. The book is well researched, but my main criticism is that the authors use a narrow definition of 'simulatable'. They argue that a system can be simulated if and only if, for every initial state of the system, a computer can produce its correct output state with minimal error.

This is why chaotic behaviour, such as that of the weather, cannot be simulated according to their definition. However, when physicists talk about a system being simulatable, they mean that the system's dynamics can be simulated (i.e. that the dynamics of the system are described by computable functions). So although it is not possible to predict the exact state of the weather one month hence, it is still possible to understand the general behaviour of the weather, because we know the dynamical laws that describe it. Likewise, it's intractable to simulate the motions of all the water molecules in a glass of water, but we can nonetheless explain why water boils at 100°C, which is often much more interesting than the motions of the individual molecules anyway.
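
The distinction can be made concrete with a toy system (my own illustration, not the commenter's): the logistic map's dynamics are a one-line computable function, yet two starting points that differ by a hair end up in completely different states, so long-range state prediction fails even though the dynamics are perfectly simulatable.

    def logistic_map(x: float, steps: int, r: float = 4.0) -> float:
        # Iterate the logistic map x -> r*x*(1-x), a fully computable dynamical law.
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    # The dynamics are trivially simulatable, but initial conditions differing
    # by 1e-12 land in unrelated states after ~60 steps.
    print(logistic_map(0.2, 60), logistic_map(0.2 + 1e-12, 60))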

Because of this, the book's arguments lose a lot of their power.

Expand full comment

I find either the book or the review very lacking. The crux seems to be whether the Church–Turing–Deutsch principle is correct, and this text is really too short to convince me that it might be false. Deutsch seems to think it was basically proven by Turing that human-level AI can exist. If it is the case that a Turing machine can simulate literally anything, any talk of complexity seems beside the point.

Except to determine when machines will rule the world. It could be that the brain is really so complicated that we will need another 300 years to build a computer as clever as it is. Deutsch thinks it will be a long time until human-level AI arrives.

Expand full comment

Nice review. I very much like the Straussian and double secret Straussian readings and also agree Husserl is trash.

Expand full comment

Ruling the world is kind of nice.

We may not be the greatest, but we're the best we've got.

Doing more gives us a purpose.

I'm not a climate change alarmist. I see far more danger in AI reaching human parity than in carbon emissions. So far as I can tell, climate change might even be net positive.

Expand full comment

> Put another way, computable algorithms are a subset of all the algorithms that can be formulated with known mathematics, and algorithms that describe complex systems like the brain exactly and comprehensively are outside of the set of known mathematics.

**epistemic status: 15% chance of being BS**

Given what I know about the theory of computation, I think this kind of claim is inherently unfalsifiable. That is, given a black-box function, you cannot determine in a finite amount of time whether or not the function is computable. For this reason, we can't ever really know whether the true representation of physics or brains or whatever absolutely depends on infinite-precision real numbers; there could always be some discrete representation just below the level of granularity we were able to test.

Expand full comment

"But the double secret Straussian reading is to recognize that the future of cognitive automation is extremely uncertain; that stringing too many propositions together with varying levels of empirical support is a fraught business; "

I don't think the quality of the arguments presented gives them enough weight even for a "but I might be correct, therefore it's uncertain".

Expand full comment

Am I the only one who feels it's rather pointless (for the AGI debate, certainly not for neuroscience) to discuss whether computers can simulate a human brain? It would be remarkable if evolution had in us chanced upon the single mathematically allowed path to intelligence - why can't computers just attain intelligence through a different path?

Expand full comment

Maybe I'm missing something, but the book's main argument seems incredibly bad. Computers are all about simulating operations with a fidelity that makes it irrelevant whether they're actually doing the thing or not. As the reviewer states, they don't even use "real" real numbers. A particular computer may not be able to do calculus, but it can do a numerical approximation that spits out the same result. I suspect the same will be true, sooner or later, of general intelligence. Never is a very long time.
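
To illustrate the "numerical approximation" point (my one-liner, not the commenter's): a finite-difference derivative does no symbolic calculus at all, yet returns the same answer to many digits.

    import math

    def derivative(f, x: float, h: float = 1e-6) -> float:
        # Central-difference approximation to f'(x) -- no symbolic calculus involved.
        return (f(x + h) - f(x - h)) / (2 * h)

    print(derivative(math.sin, 0.0))  # ~1.0, matching the exact derivative cos(0) = 1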

Having said that, I also think we're still 2-3 major breakthroughs away from AGI. My guesstimate is 2050. The generative AI stuff is exciting, but it's not the same path. As an analogy, just playing chess faster and faster doesn't turn it into Starcraft.

Expand full comment
Jun 15, 2023·edited Jun 15, 2023

I'm left wondering if this review gives us a strawman. If summarized faithfully, the book seems to assume that building Artificial General Intelligence implies not only (i) achieving or exceeding human intelligence in every respect, but also (ii) doing so _in the same way the human brain does it_. Humans might build (i), though it probably isn't necessary for AGI, while (ii) is absurd.

It's blindingly obvious to anyone who has read Yudkowsky's essay "The Design Space of Minds-In-General" that this is wrong: https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw

It doesn't take a 300-page book to explain that replicating human-level intelligence with exactly the same algorithms our brains use would be extremely hard or impractical. Equally, it doesn't take a book to explain that AGIs will surely be located somewhere else in Mind Design Space than humans―in a place where the algorithms are simple.

We can already see this with GPTs, which are extremely simple compared to humans but replicate a lot more intellectual ability than anyone who wasn't a crackpot expected. (Can anyone name someone who predicted before 2018 that computers would have intellectual powers on the level of ChatGPT before 2030, or who predicted anything resembling today's "large language models"?)

Not only is it unnecessary to build AGI in the same place in Mind Design Space―it's undesirable. It would require orders of magnitude too much processing power, it would take too long to figure out how to do it, and the training process would be incredibly laborious. Humans need around 18 years of training; AGI researchers are not willing to wait even 6 months to get an adult-level intellect.

Expand full comment