497 Comments

FWIW, I've written a post in which I make specific suggestions about how to operationalize the first two tasks in the challenge Gary Marcus posed to Elon Musk. https://new-savanna.blogspot.com/2022/06/operationalizing-two-tasks-in-gary.html

The suggestions involve asking an AI questions about a movie (1) and about a novel (2). I provide specific example questions for a movie, Jaws, along with answers and commentary, and I discuss the issues involved in simply understanding what happened in Wuthering Heights. I suggest that the questions be prepared in advance by a small panel and that they first be asked of humans so that we know how humans perform on them.

Finally, I note that in Twitterverse commentary on Marcus's proposed tests, some thought these two were somewhere between sure things for AI and merely easy. I wonder if those folks would be interested in shares in the Brooklyn Bridge or some prime Florida swampland.


I’m torn because I really really want to believe that Marcus is right, but Scott is unfortunately very convincing.


The thing is, our brains appear to have multiple components. GPT-3 doesn’t.

What happens when you start bolting other modules onto GPT-3? What if you build in fact-checking loops? It's hard to believe that even if GPT-4 doesn't deliver the goods, GPT-4 plus some bolted-on algorithms out of 1980s AI research wouldn't.
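Concretely, a minimal sketch of what a "fact-checking loop" bolted onto a language model might look like. The `generate` and `check_facts` functions are placeholders I made up, not any real API; this is an illustration of the control flow, nothing more.

```python
# Hypothetical sketch: wrap a language model in a retry loop that rejects
# drafts failing an external check. `generate` and `check_facts` are stubs.

def generate(prompt: str, attempt: int) -> str:
    # Stand-in for a call to a large language model.
    return f"draft answer #{attempt} for: {prompt}"

def check_facts(text: str) -> list[str]:
    # Stand-in for a symbolic checker (knowledge-base lookups, a rule engine, etc.).
    # Returns a list of detected problems; an empty list means the draft passes.
    return []

def answer_with_verification(prompt: str, max_attempts: int = 3) -> str:
    for attempt in range(1, max_attempts + 1):
        draft = generate(prompt, attempt)
        problems = check_facts(draft)
        if not problems:
            return draft
        # Feed the objections back into the prompt, 1980s-expert-system style.
        prompt = f"{prompt}\n(Avoid these errors: {problems})"
    return "No draft passed verification."

print(answer_with_verification("When was the Eiffel Tower built?"))
```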


I'm happy you tested some of the same prompts on a human! I suggested the same in a comment on Marcus's post.

I've previously held views much closer to Marcus's, e.g. that AI systems are missing some crucial 'architecture' that the human brain has. But when I got my first AI textbook (decades ago), I don't think neural networks could recognize decimal digits in images; at least not ones anyone could run on a PC of that time, anyway.

Now ... I'm with you basically: "At this point I will basically believe anything."


I'm not a specialist in either neuroscience or AI, but from what I've read over the years it's not at all clear to me that we really understand what intelligence is. To me it seems nonsensical, borderline moronic, for people who don't know what intelligence is to argue over whether a computer can have it, or whether a given AI technique is capable of achieving it. I also don't buy the Turing test, because deciding that a machine is "indistinguishable" from a human intelligence depends on how you test it. Some of the current AIs seem to do surprisingly well if you ask them simple questions about things they were trained on, but if you instead ask them nonsense questions about those same things (e.g. "When was the last time Egypt was moved to San Francisco?") the AIs give dopey answers that demonstrate that not only do they not know what they're talking about, they don't even realize that they don't know what they're talking about. They lack a certain self-awareness that seems integral to true intelligence. Douglas Hofstadter has an article about this on the Economist's website.


Can I point out that if you explain to the 5-year-old what they did wrong and run the test again, they get the answer correct, while GPT-3 (and 4) will repeat the mistake?

Not saying Marcus is right as such, but he's got a point. Without a system to hold world state and update a world perceptual model, we really are just dealing with an impressively complicated lookup table. But as others have pointed out, it's going to be really interesting to see what happens when we figure out how to bolt that capacity onto other systems.


To quote Stanislaw Lem on the subject:

The Petty and the Small;

Are overcome with gall;

When Genius, having faltered, fails to fall.

Klapaucius too, I ween,

Will turn the deepest green

To hear such flawless verse from Trurl's machine.


"data about how human beings use word sequences,"

This is, of course, interesting, and, of course true (for these particular AIs). Does it matter?

In the narrow sense it does. If all you know of the physical world is sentences humans have felt it necessary to utter about the physical world, well that's all you know. I don't mean this in an uninteresting qualia sense, but in the more substantial "people rarely make statements like 'after I poured the water from a skinny glass to a wide glass, the amount of water was unchanged' because why would you make such a statement unless you're discussing Piaget stages of child development, or something".

But why would we assume that an AI can learn only from text? We know that in real human babies, (exactly as Hume claimed!) the system is primed to look out for coincidences in sensory modalities (eg eyes and ears activate at the same time), and to learn from such joint modalities much more aggressively.

There seems no obvious reason (in time ... everything takes time ...) that a vision system cannot be coupled to an audio system to do the same thing in terms of learning about the world from the entire YouTube corpus.

At some point (not now, but at some point) we can add additional modalities – I carry an always-on camera + microphone + various motion sensors, temperature sensors, location sensors, etc., all of which are fused together and together train an AI.

(Yes, yes, we all know that you, dear reader, at this point want to act out some performance of privacy hysteria. For our mutual convenience, can we stipulate that you have performed your virtue signaling, the rest of us have noticed and applauded; and get on with the actually interesting *AI* aspects of this thought experiment?)


It should always be a feature of these discussions that we reflect for a moment on just what an odd thing we're doing. GPT is designed explicitly as a language emulator. No one thinks that human language *is* human intelligence. So it's weird to be applying intelligence tests to a thing that is just language.

What's weirder is that I basically agree with Scott: GPT is progressively passing our intelligence tests, armed with nothing but language. This is a deeply fucked up situation that tells us very uncomfortable things about people (like: we're not much more than the words we say).


You are on for #5! I will try to write up some thoughts on scaling and paradigm shifts in a longer reply, in next few days.


One reason Marcus believes that symbolic models are necessary, in part, but are not the whole story, is because he was trained in them. He is thus biased. I share his bias. I was trained in computational semantics in the mid-1970s, the salad days of symbolic computing. I went so far as to sketch out and publish a model of the underlying semantic structure of a Shakespeare sonnet. https://www.academia.edu/235111/Cognitive_Networks_and_Literary_Semantics

Nonetheless I have had no trouble seeing that GPT-3 represents a phase change. Artificial neural networks are here to stay. Moreover, I agree with Geoffrey Hinton when he says the brain is large neural vectors all the way down. Which means that the brain implements symbolic structures with neural vectors. The young child who is beginning to learn language has no symbols in their head. Language symbols exist in the external world in the form of utterances by others (directed at the child) and, increasingly, by the child themself. Lev Vygotsky wrote a book on language acquisition, Thought and Language, in which he argues that, over time, the child internalizes language. First the child talks to themself, and then actual talk becomes unnecessary. The child is now capable of inner speech. Here's a sketch of that: https://new-savanna.blogspot.com/2014/10/vygotsky-tutorial-for-connected-courses.html

Once that internalization has taken place we have symbolic thinking with neural vectors. I'm including a slightly reworked version of that story in a longish paper I'm working on, in which I spell out these ideas in some detail.

As far as I know most of the researchers working in artificial neural networks have had little or no training in classical linguistics or cognitive psychology. What they know is that the old symbolic systems didn't work that well while these newer systems seem to be pretty good. That gives them a different and very strong bias. But they don't know much of anything about how humans actually solve these various problems. Their knowledge is dominated by what they know about creating learning architectures that allow computers to do cool things that convincingly mimic human behavior – and in some domains (e.g. chess, Go), exceed human capacity – but they don't really know what the resulting models are doing when run in inference mode (though they're beginning to figure some of that out).

What we have is bits and pieces of various kinds of knowledge and a lot of things we don't know. When it comes to predicting the future, no matter what your current knowledge and biases, the future is dominated by THINGS WE DO NOT KNOW. Arguments about what will or will not work in the future are just attempts to shore up one's current biases. Beyond a certain point, that job is a waste of time. Why? BECAUSE WE DON'T KNOW.


"Luria: All bears are white where there is always snow. In Novaya Zemlya there is always snow. What color are the bears there?

Peasant: I have seen only black bears and I do not talk of what I have not seen.

Luria: What do my words imply?

Peasant: If a person has not been there he can not say anything on the basis of words. If a man was 60 or 80 and had seen a white bear there and told me about it, he could be believed."

The last sentence here makes it pretty clear to me that they understand the answer the experimenter is looking for is "white", so I don't think this is a failure of logical reasoning. Rather, they're trying to make the (completely valid) observation that blind induction is not always reliable. Just because all the bears you've seen in snow have been white doesn't mean that's true everywhere else there's snow.

I think if anything their failure is to understand the concept of an absolute generalization. When the experimenter says "All bears are white where there is always snow", the peasant takes this as a claim about the real world and correctly infers that the experimenter can't know that's actually true.

The camel example can be explained the same way.

The other two seem more like a real failure of basic reasoning, though the last one could also stem from a simple misunderstanding of what the word "animal" means. (Using "animal" to mean "terrestrial vertebrates" or similar is not uncommon.)


On these confidence levels you give, Scott. To some extent, sure. I have various levels of confidence in things I write and may even give some explicit indication of those levels. But you're giving percentages whose values take advantage of the whole range between 0 and 100. What's the difference between 65% for number 2 and 66% for number 4? Is that real? That strikes me as overkill.

You seem to have a three point scale: level 1 is, say, 0-45, level 2 is 46-67, level 3 is 68-100. Any precision you indicate beyond 1, 2, and 3 strikes me as being epistemic theater. It's tap dancing and hand waving.


Our brain is hard-coded for language acquisition. Which is, fundamentally, assigning an arbitrary label to a set of sensory experiences (e.g., this sweet crisp round thing is called an "apple" or "manzana" or hundreds of other arbitrary sets of specific compressions of air (sound)).

That seems like a fundamental part of being able to do other abstractions?


The conditional hypotheticals bit astonished me and I wish there were a more credible reference for that.


At what level of complexity will the AI start asking us questions? Apes can interrogate the world. Could GPT-3?


It feels to me that the current path will not end up with "agentic" AIs. What if we create a program that completely passes the imitation game, BIG-bench, etc.; a program that has sufficient complexity that it appears to be conscious (and tells us it is, and is as convincing as a human when a human tells us it is)... but that program is just an inert input -> output generator like GPT-N or DALL-E?

This seems weird and alien! The intelligences we interact with, and read about in sci-fi, exist independently of us. They have goals that are not "patiently wait for someone to type in a text box, then type a response". They run persistently; they accumulate memory, and experiences, and change over time. They have spontaneous outputs based on the world around them. I don't know how we as a society should interact with something that tells us that it's conscious, and wants to live... but only if we ask it questions framed in the right way.

What would it take to get from the current marvels we have, to something more agentic/human-like/sci-fi AI-like? The missing ingredients I can see are long-term memory and evolution; more sensory inputs so they can passively pick up information from the real-world environment instead of getting strings fed to them; and some kind of motivation to initiate actions instead of sitting in a dark room being unsurprised by the lack of input. Are these things anyone is working on integrating with the intelligences displayed by GPT-N? Are they hard, or easy, or...?

I realize from the perspective of AI safety this is playing with fire, so, feel free to answer in the vein of either "yes, someone is making progress on that sort of thing" or in the vein of "yes, THOSE FOOLS are doing that sort of thing and will kill us all". I just want to know...


This is so interesting, thanks! Love the point about the hands and dreams. I think a point you're missing, though, is the amount of training data. Sure, humans can make dumb mistakes, but they haven't read a fair amount of everything ever written. GPT-3 needs way more training data than GPT-2, right? That seems to be where your analogy with the human brain breaks down for me. Sure, we could scale GPT to the equivalent computational power of a human brain, but how are we going to scale the amount of training data up by 1000x while keeping it sufficiently diverse that it isn't almost identical to something already in the training data? Also, doesn't the fact that it needs these vast data sets (orders of magnitude larger than what a child is exposed to) indicate that it is learning in a fundamentally different, less sophisticated, more brute-force statistical-inference kind of way?


Megawatts of energy you say...


Isn't an AI that lacks cognitive models still worth thinking about?

Transistor-based computers transformed everything in the world (as did the internal combustion engine, the printing press, and the wheel before them). If we're concerned about AI causing mass technological job displacement, or unleashing Sorcerer's Apprentice-style disasters, then I don't see why Marcus's objection is that important.


The most interesting prediction would be "When will a GPT derivative be able to make first incremental improvement in its own structure, including self-training?" Because that will be the day it poofs without ever becoming "Marcussian."


Imagine, for a moment, a student taking test after test after test, and passing them; however, the student's responses to questions come verbatim from the book.

In the next test, the professor changes the wording of the questions: "In your own words..."

Suppose the student responds with a verbatim quote from the book. Okay, suppose the student responds with a quote that combines the material from two books in a way which is logically sound. Suppose it is three books. And so on and so forth.

Here's the thing: At some point, the student's answer cannot be meaningfully said to be taken from any of the books; it is, by the standards we measure human responses by, the student's own words.

And yet I find my attention drawn to a curious detail: The material is combined in a way that is logically sound, and this is evidence, in a human, that they understand the material, because the alternative - that they understand the logical relationships between the words while not understanding anything else - looks unthinkable.

It must be thinking, because it is creating the same kind of output as thoughts.

Suppose, for a moment, that AI is possible; that the brain can be modeled as some set of algorithms and data structures. Can the output be predicted?

Yes; we can predict what other people will do. We work really, really hard at being predictable. What I am writing, right now, is designed to be predictable on a number of dimensions that you aren't even aware of. uamtypfinfndhatthisduskanexampleofnutwkrnghrd.m I took away a single dimension of predictability, and "I am typing and this is an example of not working hard" became that; it's still extremely predictable. The fact that the word "predictable" occurs in this sentence, for example, greatly increased the odds of something like that being in there; GPT-3 has access to text with typographical errors, and has access to information that would allow it to infer a relationship between letters that we would understand to be proximity on the keyboard.

"So what?"

So there's a reason the teacher added "In your own words" into the questions.


Fascinating information about general capabilities varying with IQ, but the stuff about prison populations seems quite off. Perhaps some of these people have trouble with *contradictory* hypotheticals, asking about things that obviously did not happen. Instead, I'll bet serious money that, if you asked, "What would you do if you found out your wife cheated on you?", you'd get a clear answer, not "are you telling me my wife has cheated on me?" Or, more to the point, "if tomorrow, they didn't give you breakfast, how would you react?" You'd have to be exceptionally impaired not to be able to imagine that, and answer accordingly.


Why is no one talking about qualia and incorporating non-verbal experience? Believing that everything we could say about sunsets is somehow in the convex hull of what has already been WRITTEN about sunsets, and can be inferred by machines who had never actually seen one, seems entirely off-base (aka, I'm with Gary on this one).

The greatest poetry is often built on metaphors that are outside our prior experience as readers, but somehow seem entirely "right". Or, as Anne Sexton put it, "making trees from used furniture".


Your prediction 5) seems overconfident to me. Has anyone tried hooking up a text-prediction algorithm to e.g. a camera? Does that idea make any sense? Does anyone have any idea how to join a text-prediction algorithm with an image-recognition one, and say "these strings correspond to these images"? Wouldn't such a combination still be limited to a rectangle of pixels and a bunch of strings, with no way of distinguishing a sequence of events actually happening from a picture book?

Perhaps more importantly, how good would it be at reasoning in new situations, unlike those that appear in its training data? Maybe those are the kinds of questions we should be focusing on.


>Luria: All bears are white where there is always snow. In Novaya Zemlya there is always snow. What color are the bears there?

Peasant: I have seen only black bears and I do not talk of what I have not seen.

Luria: What do my words imply?

Peasant: If a person has not been there he can not say anything on the basis of words. If a man was 60 or 80 and had seen a white bear there and told me about it, he could be believed.

I think this peasant is really f-ing smart. He is refusing to be drawn into a metaphor, a fictional time and space being proposed to him as a stand-in for someplace real. A person stands before him and talks of what others have said. He expresses his lack of trust, and therefore his unwillingness to draw any conclusion from it.

He would be perfectly willing to take the word of a serious person (in this case, very specifically, a 60-80 year old man) who had actually seen a white bear.

I wish there were more like him.


What is severely lacking in this discussion is a better understanding of what model-based reasoning would entail, and how human model-based reasoning supposedly differs from what GPT-3 is doing. I am very confused about this, and I don't think anyone in this broader debate has fleshed the idea out in depth.

I still think that Marcus points at something that is currently true. No matter how you phrase your prompt, a human will always know that water is wet, while that's not necessarily the case with GPT-3, I think. That doesn't mean that we are strictly using logical thought for all our inferences (and yes, as the feminist bank teller example illustrates, we do have bugs), just that maybe at least some of our reasoning is based on causal models of the world.


I don’t understand why, when asked a question like “When was the last time Egypt was in San Francisco?”, the appropriate response wouldn’t be something along the lines of “What are you talking about?”

I guess that’s asking a lot.


I tackled roughly the issue of world modelling several years ago on my old blog De Pony Sum, which has since been replaced by my new Substack blog, Philosophy Bear. Here was my article back at De Pony Sum on world modelling:

***Recent advances in Natural Language Processing—Some Woolly speculations***

I wrote this essay back in 2019, before GPT-3. Since then I think it has held up very well. I thought I'd re-share it to see what people think has changed since then, in relation to the topics covered in this essay, and to see if time has uncovered any new flaws in my reasoning.

Natural Language Processing (NLP) per Wikipedia:

“Is a sub-field of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.”

The field has seen tremendous advances during the recent explosion of progress in machine learning techniques.

Here are some of its more impressive recent achievements:

A) The Winograd Schema is a test of common sense reasoning—easy for humans, but historically almost impossible for computers—which requires the test taker to indicate which noun an ambiguous pronoun stands for. The correct answer hinges on a single word, which is different between two separate versions of the question. For example:

The city councilmen refused the demonstrators a permit because they feared violence.

The city councilmen refused the demonstrators a permit because they advocated violence.

Who does the pronoun “They” refer to in each of the instances?

The Winograd schema test was originally intended to be a more rigorous replacement for the Turing test, because it seems to require deep knowledge of how things fit together in the world, and the ability to reason about that knowledge in a linguistic context. Recent advances in NLP have allowed computers to achieve near-human scores (https://gluebenchmark.com/leaderboard/). (A minimal sketch of how a plain language model can be scored on a pair like this appears after this list.)

B) The New York Regent’s science exam is a test requiring both scientific knowledge and reasoning skills, covering an extremely broad range of topics. Some of the questions include:

1. Which equipment will best separate a mixture of iron filings and black pepper? (1) magnet (2) filter paper (3) triple-beam balance (4) voltmeter

2. Which form of energy is produced when a rubber band vibrates? (1) chemical (2) light (3) electrical (4) sound

3. Because copper is a metal, it is (1) liquid at room temperature (2) nonreactive with other substances (3) a poor conductor of electricity (4) a good conductor of heat

4. Which process in an apple tree primarily results from cell division? (1) growth (2) photosynthesis (3) gas exchange (4) waste removal

On the 8th grade, non-diagram based questions of the test, a program was recently able to score 90%. ( https://arxiv.org/pdf/1909.01958.pdf )

C)

It’s not just about answer selection either. Progress in text generation has been impressive. See, for example, some of the text samples created by Megatron: https://arxiv.org/pdf/1909.08053.pdf
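As a concrete illustration of A) above, here is a minimal sketch of how a plain language model can be scored on a Winograd-style pair: substitute each candidate referent for the pronoun and ask which completed sentence the model finds more probable. It assumes the Hugging Face `transformers` and `torch` packages and is illustrative only, not how the leaderboard systems actually work.

```python
# Sketch: resolve a Winograd-style pronoun by comparing sentence likelihoods
# under GPT-2. Lower average loss = the model finds that reading more plausible.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def loss_of(sentence: str) -> float:
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

template = "The city councilmen refused the demonstrators a permit because the {} {} violence."
for verb in ["feared", "advocated"]:
    readings = {who: loss_of(template.format(who, verb))
                for who in ["councilmen", "demonstrators"]}
    best = min(readings, key=readings.get)
    print(f"'{verb}': model prefers 'they' = {best}  ({readings})")
```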

2.

Much of this progress has been rapid. Big progress on the Winograd schema, for example, still looked like it might be decades away as recently as (from memory) much of 2018. The computer science is advancing very fast, but it’s not clear our concepts have kept up.

I found this relatively sudden progress in NLP surprising. In my head—and maybe this was naive—I had thought that, in order to attempt these sorts of tasks with any facility, it wouldn’t be sufficient to simply feed a computer lots of text. Instead, any “proper” attempt to understand language would have to integrate different modalities of experience and understanding, like visual and auditory, in order to build up a full picture of how things relate to each other in the world. Only on the basis of this extra-linguistic grounding could it deal flexibly with problems involving rich meanings—we might call this the multi-modality thesis. Whether the multi-modality thesis is true for some kinds of problems or not, it’s certainly true for far fewer problems than I, and many others, had suspected.

I think science-fictiony speculations generally backed me up on this (false) hunch. Most people imagined that this kind of high-level language “understanding” would be the capstone of AI research, the thing that comes after the program already has a sophisticated extra-linguistic model of the world. This sort of just seemed obvious—a great example of how assumptions you didn’t even know you were making can ruin attempts to predict the future.

In hindsight it makes a certain sense that reams and reams of text alone can be used to build the capabilities needed to answer questions like these. A lot of people remind us that these programs are really just statistical analyses of the co-occurrence of words, however complex and glorified. However we should not forget that the statistical relationships between words in a language are isomorphic to the relations between things in the world—that isomorphism is why language works. This is to say the patterns in language use mirror the patterns of how things are(1). Models are transitive—if x models y, and y models z, then x models z. The upshot of these facts is that if you have a really good statistical model of how words relate to each other, that model is also implicitly a model of the world, and so we shouldn't be surprised that such a model grants a kind of "understanding" about how the world works.

It might be instructive to think about what it would take to create a program which has a model of eighth-grade science sufficient to understand and answer questions about hundreds of different things, like “growth is driven by cell division” and “what can magnets be used for”, that wasn’t NLP-led. It would be a nightmare of many different (probably handcrafted) models. Speaking somewhat loosely, language allows intellectual capacities to be greatly compressed; that's why it works. From this point of view, it shouldn’t be surprising that some of the first signs of really broad capacity—common sense reasoning, wide-ranging problem solving, etc.—have been found in language-based programs: words and their relationships are just a vastly more efficient way of representing knowledge than the alternatives.

So I find myself wondering if language is not the crown of general intelligence, but a potential shortcut to it.

3.

A couple of weeks ago I finished this essay, read through it, and decided it was not good enough to publish. The point about language being isomorphic to the world, and that therefore any sufficiently good model of language is a model of the world, is important, but it’s kind of abstract, and far from original.

Then today I read this report by Scott Alexander of having trained GPT-2 (a language program) to play chess. I realised this was the perfect example. GPT-2 has no (visual) understanding of things like the arrangement of a chess board. But if you feed it enough sequences of alphanumerically encoded games—1.Kt-f3, d5 and so on—it begins to understand patterns in these strings of characters which are isomorphic to chess itself. Thus, for all intents and purposes, it develops a model of the rules and strategy of chess in terms of the statistical relations between linguistic objects like "d5", "Kt" and so on. In this particular case, the relationship is quite strict and invariant: the "rules" of chess become the "grammar" of chess notation.

Exactly how strong this approach is—whether GPT-2 is capable of some limited analysis, or can only overfit openings—remains to be seen. We might have a better idea as it is optimized — for example, once it is fed board states instead of sequences of moves. Either way though, it illustrates the point about isomorphism.
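For concreteness, a toy sketch of that setup (my own illustration, not Scott's actual experiment): serialize games as flat move-text so that a next-token predictor can be trained on them like any other corpus. The games below are made-up fragments.

```python
# Sketch: turn chess games into the kind of flat text a language model trains on.
games = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6"],
    ["d4", "d5", "c4", "e6", "Nc3", "Nf6"],
]

def to_training_line(moves: list[str]) -> str:
    # "1. e4 e5 2. Nf3 Nc6 ..." -- the only "board" the model ever sees is this string.
    numbered = []
    for i in range(0, len(moves), 2):
        numbered.append(f"{i // 2 + 1}. " + " ".join(moves[i:i + 2]))
    return " ".join(numbered)

corpus = "\n".join(to_training_line(g) for g in games)
print(corpus)
# A GPT-style model fit on millions of such lines picks up the "grammar" of legal,
# plausible continuations without ever being given the rules or a board.
```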

Of course everyday language stands in a woollier relation to sheep, pine cones, desire and quarks than the formal language of chess moves stands in relation to chess moves, and the patterns are far more complex. Modality, uncertainty, vagueness and other complexities enter (not to mention people asserting false sentences all the time), but the isomorphism between world and language is there, even if inexact.

Postscript—The Chinese Room Argument

After similar arguments are made, someone usually mentions the Chinese room thought experiment. There are, I think, two useful things to say about it:

A) The thought experiment is an argument about understanding in itself, separate from capacity to handle tasks, a difficult thing to quantify or understand. It’s unclear that there is a practical upshot for what AI can actually do.

B) A lot of the power of the thought experiment hinges on the fact that the room solves questions using a lookup table; this stacks the deck. Perhaps we would be more willing to say that the room as a whole understood language if it formed an (implicit) model of how things are, and of the current context, and used those models to answer questions? Even if this doesn’t deal with all of the intuition that the room cannot understand Chinese, I think it takes a bite out of it (Frank Jackson, I believe, has made this argument).

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

(1)—Strictly of course only the patterns in true sentences mirror, or are isomorphic to, the arrangement of the world, but most sentences people utter are at least approximately true.


Here’s my maybe flawed chain of logic that has me skeptical that AGI can come out of LLMs:

1. Conscious thought seems to be very expensive, both in terms of calorie consumption and in terms of, say, having people wander around wondering why they are here instead of having more children.

2. Evolution seems to have selected for conscious thought anyway which means its benefits outweigh these costs.

3. Therefore it is probably the most efficient way to get agents that are as flexible and adaptable as humans are. More speculatively, it might be the only way to achieve this: after all, when we look at animals, intelligence seems very correlated with how conscious we perceive each animal to be, from viruses to ants to dogs to chimps to us.

4. So that leaves me with one of two weird conclusions either of which I find unlikely (say less than 40% odds). Either consciousness can occur purely by studying human language and does not need anything else (e.g. interacting with the physical world or the need to compete with other intelligent agents which probably drove the evolution of consciousness) or consciousness is not necessary for AGI.

Both of these statements seem deeply weird. So I suspect maybe more is needed than putting more sentences into larger neural nets. Neural nets themselves may still be able to support AGI, but we probably won't get there just by feeding them more data. Instead, my guess is we will need something else, and that something is some sort of competition between agents for some sort of resources. This would force the neural nets to build the ability to reason about other agents which are reasoning about them. In other words, they would need to be aware of their own state, i.e. become conscious.


Regarding sudden capability gains with increasing scale, you might be interested in the BIG-bench whitepaper which was released today: https://arxiv.org/pdf/2206.04615.pdf

In particular, check out the section on breakthroughness


Scott, I'm not sure if this argues for or against your point (maybe this would count as the addition of some kind of neurosymbolic system?) but here's a very recent "memorizing transformers" paper representing a grasp at the holy grail of getting a GPT-3-like deep neural network to be able to use computer-like memories (like a human taking notes, or flipping back to look something up on an earlier page, or really just having a hippocampus, since GPT-3 lacks long-range attention completely): https://arxiv.org/pdf/2203.08913.pdf

It's a transformer model, like GPT-3, hooked up to a key/value store that it can use to perfectly recall previously-seen fragments of input (keyed by, essentially, which prior inputs resulted in "mental states" most similar to its current state). Some excerpts from the paper:

Page 7: "External memory provides a consistent improvement to the model as it is scaled up. Remarkably, we found that the smaller Memorizing Transformer with just 8k tokens in memory can match the perplexity of a larger vanilla Transformer which has 5X more trainable parameters."

Page 9: "When predicting the name of a mathematical object or a lemma, the model looked up the definition from earlier in the proof. Examples of this behavior are shown in Table 8. [...] We manually checked 10 examples where the model made a prediction of lemma names, and 8 out of 10 times model found the body of the lemma it needs to predict. In the other two cases, the model also looked up materials in the immediate vicinity. To the best of our knowledge, this is the first demonstration that attention is capable of looking up definitions and function bodies from a large corpus."


Here's how I see it.

Human brains can answer questions by referring to their mental model of the world. GPT-3 can only answer questions by understanding which words are likely to appear in relationship to other words, based on the ten quadrillion word corpus that it has ingested.

The question is whether, by throwing enough nodes and enough training data at a GPT-like model, it could eventually develop something that you could call a model of the world. If it _did_ develop a model of the world then it would instantly become better at the task it's trained for, so you could argue that this is the point that the training procedure should eventually reach. On the other hand, a given training procedure won't necessarily reach the global maximum, it could easily get stuck in a local maximum.

Many people here will be familiar with SHRDLU, a 70s-era AI parlour trick that could understand natural language and answer questions within a limited domain. The domain was an imaginary tray of blocks; you could tell it "put the red triangular block on top of the blue rectangular block" and later ask it "what is the red triangular block resting on?" SHRDLU explicitly did have a model of its limited world.

So here's what I'm wondering: can you turn GPT into SHRDLU? If you fed enough SHRDLU question-and-response text into a transformer, would it eventually reach the point where it can flawlessly answer questions about block positions? If so it would be fair to say that you've managed to get GPT to the point where it's developed its own internal model of the (very simple) world to make answering questions easier. If not, and it still gets confused when you ask it about the position of a block you haven't mentioned for hours, then I think it supports Marcus against Scott.
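Concretely, a sketch (mine, purely illustrative) of how such an experiment could be set up: procedurally generate SHRDLU-style transcripts whose answers come from an explicit world state, then test whether a text predictor fine-tuned on the transcripts can answer the final question. The world model here is deliberately simplistic (it only tracks what each block rests on directly).

```python
# Sketch: generate block-world transcripts whose answers come from a real state,
# for fine-tuning / testing a text predictor. Entirely synthetic and illustrative.
import random

BLOCKS = ["red triangular block", "blue rectangular block", "green cube", "yellow pyramid"]

def make_transcript(n_moves: int = 4, seed: int = 0) -> str:
    rng = random.Random(seed)
    on_top_of = {b: "the tray" for b in BLOCKS}   # the actual world model
    lines = []
    for _ in range(n_moves):
        mover, target = rng.sample(BLOCKS, 2)
        on_top_of[mover] = f"the {target}"
        lines.append(f"Human: Put the {mover} on top of the {target}.")
        lines.append("Robot: OK.")
    asked = rng.choice(BLOCKS)
    lines.append(f"Human: What is the {asked} resting on?")
    lines.append(f"Robot: It is resting on {on_top_of[asked]}.")   # ground-truth answer
    return "\n".join(lines)

print(make_transcript(seed=42))
# If a GPT-style model trained on enough of these still loses track of a block
# mentioned long ago, that's evidence for Marcus; if it answers reliably, it has
# effectively induced the (very simple) world model.
```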


> So sure, point out that large language models suck at reasoning today. I just don’t see how you can be so sure that they’re still going to suck tomorrow.

I’m sure of this because the foundational skills that are prior to world modelling are not going to arise without the agent manipulating its environment and getting feedback about it. (I don’t consider negative gradients in language model training equivalent to ”feedback about the environment” because the space of sentences is extremely sparse compared to the physical world.)

I agree with your passage on world building, and I agree that Marcus’s conclusion is way too strong for the evidence. The bottleneck here isn’t ”statistical AI” or ”Locke.” It’s that sentence completion is a relic of intelligence rather than an essence. And it’s too narrow a relic to optimize for directly.


Scott, I think Marcus means something like: a) GPT is trained on language, b) so it won't have a model of space. If you describe to GPT-4 a complex scene of things being put somewhere in relation to other things ("X is 10cm above Y, which is touching Z..."), it won't be able to answer questions about relations between these things that you didn't describe directly but that are implied. E.g. what's the distance between X and Z (their centers; also assume there's enough info, with references to objects Q and W and E..., to calculate it)?

I think that's true of GPT-3 to a large extent, but it's not a fundamental limitation. One could synthesize a laaaarge dataset like this, procedurally. With enough examples, it'll have to learn.

Same way it could learn arithmetic. It's already pretty good at it, and it definitely isn't just memorization.
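For instance, a sketch of that kind of procedural synthesis (my own construction, illustrative only): describe only adjacent relations in text, and make the questions about distances that are implied but never stated.

```python
# Sketch: procedurally generate "X is N cm above Y" descriptions plus held-out
# questions whose answers are implied by the text but never stated directly.
import random

def make_example(seed: int):
    rng = random.Random(seed)
    names = ["X", "Y", "Z", "Q", "W"]
    gaps = [rng.randint(1, 20) for _ in range(len(names) - 1)]  # cm between neighbours
    description = ". ".join(
        f"{names[i]} is {gaps[i]} cm above {names[i + 1]}" for i in range(len(gaps))
    ) + "."
    i = rng.randint(0, len(names) - 3)        # pick a pair never described directly
    j = rng.randint(i + 2, len(names) - 1)
    question = f"How far apart are {names[i]} and {names[j]}?"
    answer = sum(gaps[i:j])                    # the implied distance
    return description, question, answer

desc, q, a = make_example(seed=1)
print(desc)
print(q, "->", a, "cm")
# Millions of these, with varied wording, is the kind of dataset the comment has in mind.
```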


The AI risk community has been a little too preoccupied with Ex Machina-style thought experiments about superintelligence, where CLEARLY the major AI risk is from humans putting AI in charge of things that it's not remotely competent to handle but where it can jump through enough hoops to fool a lazy person into thinking it'll be fine, and then it breaks, and then something terrible happens. This AI risk is already killing people with Teslas (both inside them and outside of them).

Machine translation is another, perhaps lower risk, area where people are so lazy and ignorant and desperate to avoid paying a human being to perform a service that they'll use it, and it's amazing how well it works up until the second that it inverts the meaning of an important sentence and/or accidentally uses scatological slang in an ad slogan.

If AI risk worries you, it seems to me the highest leverage use of your time is getting the word out that AI sucks and should not be in charge of anything ever, not banging the table for how amazingly competent AI will probably be someday.


Two notes:

1. Even if the scaling hypothesis is correct, it does not mean that it is practical. The measurements right now say that performance is increasing logarithmically. If we extrapolate current results and assume that performance will keep scaling at the same rate (two very weak assumptions), we will still need an ungodly amount of compute and data to match human performance on some of the rudimentary benchmarks we have right now. You might say that this does not matter, because at least we have a theoretical way to reach AI. But I say that it does matter. If we assume that our computers can indeed be intelligent, we already have a guaranteed way (AI scaling has no guarantees, only guesses) of reaching this intelligence: random search. We can generate random programs and wait until we get an intelligent one. Of course it's completely impractical. But our scaling laws show that scaling current approaches is impractical as well, so why do we expect AGI?

2. My interpretation of Figure 3.10 with the addition results is different from what Scott says. There are basically two ways of solving addition: (1) by memorizing all the possible combinations, (2) by understanding the very simple addition algorithm that even 6-year-old kids can learn in several hours. What we see in the figure is not a jump in the capability of doing addition; a jump like that would mean the model can suddenly do addition with an arbitrary number of digits because it has internalized the algorithm. Instead, what we see is an increase in memory capacity. The bigger model has more memory, and it has memorized not only 2-digit additions but also 3-digit additions and more. This is okay, but it is quite damning for claims that the model is somehow intelligent. The addition algorithm is very simple, and the model has seen probably millions of examples of addition in its training data, including textual explanations of what addition is and how the algorithm works. Yet it has not learnt this algorithm at all.

Addition is a nice example because it objectively shows what the model was able to infer from the data. Until we have a model that can do addition with an arbitrary number of digits, we can say that these models are not able to learn higher-level concepts and probably rely on memorization.
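One way to run that test, sketched with a hypothetical `ask_model` placeholder standing in for a call to the actual model: if accuracy collapses as soon as the operands get longer than what is plentiful in the training data, that favours the memorization story; flat accuracy across lengths would favour a learned algorithm.

```python
# Sketch: probe whether a model "does addition" or has merely memorized it by
# measuring accuracy as a function of operand length. `ask_model` is a stub.
import random

def ask_model(prompt: str) -> str:
    # Stand-in for an API call to GPT-N; here it just cheats so the script runs.
    a, b = prompt.removeprefix("What is ").removesuffix("?").split(" plus ")
    return str(int(a) + int(b))

def accuracy_at_digits(n_digits: int, trials: int = 200, seed: int = 0) -> float:
    rng = random.Random(seed)
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits - 1
    correct = 0
    for _ in range(trials):
        a, b = rng.randint(lo, hi), rng.randint(lo, hi)
        if ask_model(f"What is {a} plus {b}?").strip() == str(a + b):
            correct += 1
    return correct / trials

for n in range(2, 9):
    print(n, "digits:", accuracy_at_digits(n))
```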


I think it's a very good and persuasive point that humans effectively summon even strict logic out of hacked together heuristic machines. It's an important point that e.g. mental math is hard. I have felt like GPT-style AI is Artificial Instinct rather than what we would really call Intelligence, but I definitely don't have enough background here to be confident. But with a powerful enough instinct, you can just guess most everything. Language is mostly "instinctual" in this way for fluent speakers. Basic math estimates too...

But is intelligence just scaled-out acquired instinct, sample after sample pulled from a turbocharged heuristics engine? I feel like it does need a stimulus loop and ability to integrate experience, but that yeah, that plausibly might be all there is to the spark of life.

I'm more afraid of AGI now than I was yesterday.


It sounds like you're assuming that performance on various tasks (language, arithmetic, etc.) scales linearly (at least) with the number of model parameters. But is that actually true? An asymptotic curve looks pretty linear at the beginning... but also at the end, in a different way.

As for your predictions, I am not entirely sure how I'd judge them (I'd gladly take you up on them otherwise, except that I don't have hundreds of thousands of dollars to put in escrow, so I can't anyway; also, I'm some random nobody so betting against me is pointless).

1). What do you mean by "significantly better"?

2). Beating a 10-year-old child would be pretty easy, unless you restricted the domain to what a 10-year-old child would reasonably know. That is, GPT-4 can easily pattern-match some text on e.g. protein translation, while a child would fail at such a task; however, that IMO would not be a legitimate test of *reasoning*.

3). What is an "important way"? Most modern ML systems are written in C; are they descended in an "important way" from gcc?

4). I guess I'd have to understand Marcus's position better, but still, this sounds reasonably well defined.

5). Isn't this the same point as (3)? I can't tell.


I really hate reasoning like this:

> But I see no reason whatsoever to think that the underlying problem — a lack of cognitive models of the world — have been remedied.

He never defines "world model". He doesn't define what it would mean for an AI to have a world model. Nor does he define what it means for a human to have one. To the extent that he ever does operationalize any of these things, they have been falsified.

But he then says that his specific examples weren't important, and he could surely come up with more next time. If he actually had a coherent definition or litmus test for the presence of the properties he's describing, he would be able to articulate an example that *all* possible language models would fail until they achieve "world modeling" ability. The fact that he cannot do this implies that he has no such coherent picture.

As you rightly point out, humans themselves don't score 100% on tests of verbal reasoning like these. So, why not operationalize the bet on that basis?

Gary can draft, say, 50 questions. He can keep them private if he wants, but he has to precommit to them by publishing a SHA hash on his blog/twitter. When the next GPT comes out, the questions will be given to a sample of say, 15 actual humans, via something like Mechanical Turk (or whatever). Then, for each human answer set, you can generate a unique GPT answer set. Gary will try to guess which is which. The target of the bet will be his hit rate on this task.

Another, simpler way to operationalize the bet is just to have him pre-commit to questions now, and bet on GPT's future performance on them.
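The precommitment step is trivial to do today; a minimal sketch with Python's standard library (the question text is a placeholder):

```python
# Sketch: commit to a private question set now, reveal it later.
# Publishing the digest proves the questions weren't changed after the fact.
import hashlib

questions = "\n".join([
    "Q1: <placeholder question>",
    "Q2: <placeholder question>",
    # ... up to Q50
])
commitment = hashlib.sha256(questions.encode("utf-8")).hexdigest()
print("Publish this digest now:", commitment)
# Later, publish `questions` itself; anyone can re-hash it and check it matches.
```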


While reading this post, I had a feeling it was written at least a couple of months ago: while there’s no public access to PaLM, from the results Google published, it’s already better than GPT-3 by a significant amount, and I’m at 60% that it would beat a 10-year-old at this genre of questions.


The difference in the number of parameters between GPT-3 and naive estimate for the human brain (100,000x ratio) is approximately the same as the ratio between the number of neurons in a human brain (8.6E10) and in the brain of a honey bee (960,000).
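A quick check of that comparison, using the numbers above plus GPT-3's published parameter count; the implied "naive brain estimate" here is just the 100,000x figure applied to that count, not an independent fact.

```python
# Sanity-check the ratios cited in the comment.
human_neurons = 8.6e10        # from the comment
bee_neurons = 9.6e5           # 960,000, from the comment
gpt3_params = 1.75e11         # published GPT-3 parameter count
cited_ratio = 1e5             # the "100,000x" figure cited above

print(f"human/bee neuron ratio: {human_neurons / bee_neurons:,.0f}x")            # ~89,583x
print(f"implied naive brain estimate: {gpt3_params * cited_ratio:.2e} parameters")
```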

As far as I know, nobody has managed to train an AI model to be able to do all the things a honey bee, or a similarly complex animal, does.

The answers to why it's so hard vary, but the most important things which AI models lack are:

- embodiment

- embedding in a rich complex interactive environment (aka The World)

- social life (interaction with other bees)

Scaling up a GPT-3 to GPT-4, GPT-5, etc. doesn't solve any of these problems. A brain in a jar isn't going to magically acquire common sense, because there's nothing in its environment which could teach it common sense.


I always thought Linda was a trick question. Multiple-choice answers aren't supposed to overlap, so unless you're specifically considering the possibility that the question is intended to trick you into giving the wrong answer, it is natural to interpret the first option as meaning "Linda is a bank teller and is not active in the feminist movement."
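For reference, the probability relations behind the two readings (standard facts about conjunctions, nothing specific to this survey): the literal reading forces an ordering, the exclusive reading does not.

```latex
% Literal reading: a conjunction can never be more probable than one of its conjuncts.
P(\text{teller} \wedge \text{feminist}) \;\le\; P(\text{teller})
% Exclusive reading ("teller and NOT active in the feminist movement"): no axiom
% forces an ordering, so ranking it below the conjunction is not a fallacy.
P(\text{teller} \wedge \neg\,\text{feminist}) \;\gtrless\; P(\text{teller} \wedge \text{feminist})
```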


Here are a couple of questions people might want to bet on:

1. In 20 years, will a descendant of GPT be more useful than a copy of Wikipedia (with a simple search engine) as a source of information?

2. Same question, but for a medical reference.

I suspect that, without some extra secret sauce, the copy of Wikipedia or a medical reference will be more useful because it's more trustworthy. Even if you can get GPT-X to tell you the same information, you'll never know if it made up some of the facts.

(In practice, I think this will be solved with additional machine learning research, beyond simple scaling.)


I played around with Dall-E mini last night. While very impressive it also has flaws that won’t, I don’t think, be solved by adding more data or layers. For instance I typed in Matt Damon and got many distorted versions of Matt. Some were fairly disturbing.

Maybe that’s a rendering problem, rather than a problem with interpretation of the data, but I don’t think so, because all the renders were wrong in different areas: the eyes here, the ears there, the nose and mouth (most common) there. Other parts were fine. So why would more data about what Matt Damon looks like solve this? Doesn’t the system have enough already?

Contrary to that idea, I feel that more data will break it more. If the system were fed decent photos of Matt and only decent photos of Matt, in fact fewer photos of Matt, then it might work. Assuming that it can render faces properly, one photo tagged Matt Damon might produce a good output.

As it stands, the internet is no doubt full of goofy photos of Matt, weird profile photos of Matt, fuzzy photos of Matt, distant photos of Matt, and Dall-E doesn’t really know what a face is, so it reconstructs something clearly flawed. Why would adding data or layers fix this?


This discussion needs to consider statistical (and computational) efficiency. Even if Marcus "can’t point to any specific kind of reasoning problem GPTs will never be able to solve" because there aren't any -- even if a large enough GPT would be an AGI -- it's a moot point if it takes an unachievable amount of data or compute.

See especially the superexponential runup in compute needed to train progressively larger "foundation models" over time (figure about halfway down): https://www.economist.com/interactive/briefing/2022/06/11/huge-foundation-models-are-turbo-charging-ai-progress

As the economists say, if something can't go on forever, it'll stop. If we can only get the current approach to AGI by superexponentially scaling resource consumption, that's another way of saying it won't work. No one will spend $500 billion training GPT-5. We need big algorithmic improvements too.
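To make the "can't go on forever" point concrete, here is the commonly cited power-law form of loss versus training compute; the form and the exponent value are approximate fits, not settled facts.

```latex
% Rough power-law fit of loss vs. training compute, with a small exponent:
L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}, \qquad \alpha \approx 0.05
% Cutting the loss by a factor k then costs a multiplicative factor in compute:
\frac{C'}{C} = k^{1/\alpha}, \qquad k = 2,\ \alpha = 0.05 \;\Rightarrow\; \frac{C'}{C} = 2^{20} \approx 10^{6}
```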


Wait, @Scott Alexander quoted a 4chan post as a source of evidence, while believing the story there?

4chan should be assumed to be trolling and lies, even if proved otherwise (and that is only a small exaggeration).

That is reasoning from fiction, likely malicious fiction!

Why am I even reading this?

The "I have no proof that this person is who they say they are" disclaimer is not sufficient! 4chan is full of blatantly false and cleverly-hidden-false greentext stories!

If it were so clearly true, then dredging up 4chan posts would not be needed.

How many other cited things are on this level? In this post and others?


I think I mostly agree with Scott, but intuitively Marcus has a point that mostly breaks prediction 5. My take is that human reasoning (and what distinguishes it from animal reasoning) is mostly a scaling up of general GPT-like pattern matching. On this I agree with Scott, and he gives excellent arguments for this hypothesis. But I am with Marcus in thinking that's not all there is: there are more primitive, hard-coded modules that directly implement logic/physics/causality/... Why? Because animals with very small brains are actually able to do some spectacularly complex behavior, without exhibiting much general pattern matching or learning ability. And it seems like an easy win to let a general pattern-matching algorithm re-use some modules for inner input/filtering/..., improving pattern matching by delegating some pre-processing to hardcoded modules (vision, spatial reasoning, maybe basic arithmetic based on rough quantities...).

So I expect 5 to be wrong, with major advances coming from interfacing GPT-like predictive pattern matching with submodules (which could be much better than the human ones, for example using the speed and accuracy of computer arithmetic, or hardcoding some vision + Newtonian physics module like a 3D engine, that kind of thing). Basically, humans are a robot (basic body control and motion/navigation, which has evolved since the first multicellular animals and is likely at least partly hard-coded) + GPT-like general pattern matching. Remove the robot part (the world model) and you lose something, probably more than if you reuse the robot-part neurons to scale up general pattern matching.


Knowing little about this topic, the piece of this that still seems under-addressed is the common-sense idea that our verbal behavior's connection with our sensory experience of external reality imposes an important constraint, and is in some sense necessary for consistent intelligence. In other contexts, we all recognize that there is something deeply wrong with verbal behavior that is untethered to sensory experience. If I postulate some scientific theory, then it is arguably only meaningful to the extent that it cashes out in predictions about what we should see or experience in the external world if it is right or wrong. That's just an analogy, but there still seems to be a core common-sense problem, which I'm too ill-informed to articulate clearly, that a system which rests exclusively on verbal data and lacks any other connection to the external world is missing something fundamental, and it would take a lot more than improvement on some arbitrarily selected examples to persuade me that this basic architectural limitation won't remain extremely important in the long run.


Gary Marcus and people like him have been making this very same argument for decades in linguistics. So linguists know how this game is being played.

Scott is wise not to take the bet because the bet is not actually about WHAT GPT-X will be able to do, but HOW it does it. Even if we build a GPT-X that "behaves like a person" in the sense that it produces the same output that a person does (nonsensical, because there are multiple people and they behave differently), Marcus et al. will claim that it produces this output *in the wrong way*, not the way that (so they claim) humans do it.

Marcus is not interested in figuring out how to build smart machines; and incidentally also not in figuring out how the human mind works; but in proving that the human mind works in a very particular way (symbolic logic with recursion and read-write memory, basically like a von Neumann architecture).

If what's been happening in linguistics is repeated here (which I am sure it will be), then once GPT-X passes any conceivable form of a Turing test, the next position Marcus et al. will retreat to (besides finding ever more arcane tests on which GPT-X behaves non-humanly) is that "GPT-X has learned how to produce the right outputs, but it acquired that skill in *the wrong way*" (e.g., it used too much data; children don't really have access to *that* much language data). Should we build a network that learns well on little data starting from a blank slate, their final retreat will be: "the network is saying the right things, but it is thinking the wrong thoughts", i.e., while the network emulates a human's response patterns, it is not doing so by the same mechanisms a human does (= symbol manipulation with read-write memory). And that is a truly unassailable position, a perfectly unfalsifiable motte.

Marcus et al are very intelligent people who've built some impressive cognitive and institutional fortifications, but once you realise that they're working on a completely different problem than building smarter machines, it becomes much less relevant what they are saying.

Anyway, history will pick the winner as it always does and my money is on the people who're trying to build smarter machines and not the people playing God of the Gaps by arguing that only one particular solution will ever work without - after decades and decades - having anything to show for it. I don't think the people building smarter machines will end up building something that convinces Gary Marcus, but that's fine.

Addendum: the person in the Marcus et al. camp that the ratsphere could probably most productively engage with is Steven Pinker.


This article seems to assume that intelligence is a defining human characteristic. This assumption is simply wrong. A human is defined by a set of human-specific instincts. Intelligence is just a sprinkling on the top of this.

The question here should be: Are we looking for an intelligent AI or a human AI? Intelligence (in the sense of finding solutions to intellectual problems) is perfectly possible in an AI. But that does not make it human. To be human you need to have human instincts. I doubt we will ever be able to reproduce these in a computer. And as long as the AI does not have human instincts it will never match a real human when it comes to humanity.


Regarding GPT-3 and human brains, I wonder if this paper, “The neural architecture of language: Integrative reverse-engineering converges on a model for predictive processing” from MIT and UCLA researchers, sheds some unusual light. They found that the internal states of GPT-2 (this was published in 2020) strongly resemble fMRI scans of people doing similar tasks.

Expand full comment

At least, I guess, you know about Hofstadter, but it is a massive blind spot that you don't seem to know about Searle's Chinese Room.

Expand full comment

Training/experience does not turn a newborn into a general intelligence, it reveals that humans are a general intelligence.

Expand full comment

I bought and read Hofstadter's GEB in 1980. My copy of it sits next to Penrose's The Emperor's New Mind, which I bought and read in 1990.

If I live that long, I'd be mildly interested in Scott's take in 20 or 30 years.

I am not convinced that you really have a good grasp of art and the philosophy of art. (BTW, next to Hofstadter and Penrose on my bookshelf is Susanne K. Langer's Feeling and Form.)

It might have been more interesting if, instead of DALL-E, you had investigated Impro-Visor, which I've been using for a decade, and other so-called musical AIs. (RIP, Bob Keller)

https://youtu.be/Xfk8vR2vsb4

Expand full comment

> within a day of playing around with it, Ernie and I will still be able find lots of examples of failures

I would just like to note how high this bar is compared to any criteria for artificial intelligence that were considered just a decade ago. The original formulation of the Turing test mentioned just five minutes of questioning.

Expand full comment

The idea of using empirical investigation to understand the behavior of a computer program, and then predict its likely future behavior, seems absurd to me. Why would that *possibly* work? Either you have the source code (i.e. the informational architecture) and reason from that about what it will do, or you don't.

You can't tell the difference between 'an elaborate fuzzy matching hash table' and 'something that builds predictive models of the world and tests them with experience' by asking it any number of questions. These systems are just elaborate hash functions. And yes, I'm willing to go to the mat that _lots of human reasoning is this way too_.

The differences between someone using hash-table reasoning and someone actually using models to reason are only discernible if you consciously find places where the hash table doesn't have entries, and then poke around there.
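To make the contrast concrete, here is a toy sketch (names and examples are purely illustrative, not a claim about how any of these systems is actually implemented): a pure lookup table only answers what it has literally stored, while even a crude model generalizes to inputs it has never seen.

```python
# Toy contrast, purely illustrative: lookup-table "reasoning" vs. a tiny model.
lookup_table = {
    "2 + 2": "4",
    "capital of France": "Paris",
}

def hash_table_answer(question):
    # Only answers questions it has literally seen before.
    return lookup_table.get(question, "plausible-sounding filler")

def model_answer_addition(question):
    # A minimal "world model" for one narrow domain: actually parse and compute.
    a, b = question.split("+")
    return str(int(a) + int(b))

print(hash_table_answer("2 + 2"))           # in the table: looks smart
print(hash_table_answer("317 + 4096"))      # off the table: filler
print(model_answer_addition("317 + 4096"))  # the model generalizes: 4413
```

The only way to tell the two apart from the outside is exactly the probing described above: feed it inputs unlikely to be "in the table".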

Your response seems to be to rely on some fuzzy notion of 'human level performance'. This is where 'human level performance' is stupid as a metric, and should just be dropped. Eliza has been fooling _some_ people for decades, but not others. Who exactly counts as the standard for 'human level performance'? If there is no agreed-upon empirical test, why should I believe that this means anything other than 'a threshold at which we ought to be scared'?

Expand full comment
Jun 10, 2022·edited Jun 10, 2022

Meh. I appreciate the well-written argument, but no amount of lovely theory beats clunky empirical measurement. I took a little time and played around with the public access myself (some of the results I got are posted in the other thread). And for what it's worth, I also have a few decades of experience evaluating whether human beings understand or don't understand concepts and reasoning -- within a very, very narrow field, for sure, but nevertheless, it's relevant experience if the question is "is abstract reasoning going on here?"

My answer is unequivocally no, it's not, and I see no signs of any latent promise -- the kind you would see in a very young human child. I'm actually disappointed compared to what I expected, based on your earlier article, and my expectations were already very low. It has a knack for English-language pattern matching, provided it doesn't extend to anything very complicated. It can pick out key words OK. But the logical linking is exceedingly weak, so much so that I can't be confident it goes beyond syntax pattern matching.

I would say its greatest ability is hooking its response to cues in the input text well enough that it readily fools people who are striving to find meaning. Basically, it's hell on wheels at writing horoscopes for people who want to believe in astrology. It reminds me of the bullshit artists[1] I've had in classes sometimes, who are good at reflecting back at you what you said, with a twist or two on it to try to sound like they totally follow what you're saying. (Maybe it has a bright future in direct marketing?)

I can absolutely believe that this technology might be developed into a much improved chatbot that could take orders, respond to customer requests, field calls, be an electronic secretary, process orders, that kind of thing -- although I think a significant caution here (thinking as an investor) is the black-boxy nature. It's essentially impossible for the designers to *guarantee* that it won't go off the rails randomly every now and then and say something that will get its employer into a great deal of hot water. (I got a warning a few times saying they thought it might be generating "sensitive" content and reminding me I'd agreed that might happen, so please don't shoot. It never actually did, it was harmless, but I could see the keywords of concern, and the fact that they felt compelled to include this warning means they've already seen much worse.) The one thing that supposedly makes a machine much *better* than a low-paid call center worker is that it's 100% reliable and predictable, and will never surprise the company into a lawsuit. So... not being able to guarantee what it says would be a major drawback in the real world. Still, maybe that can be fixed satisfactorily.

But I see no signs of even the most primitive abstract reasoning -- the kind of stuff you could program up in a 50-line Perl script for, say, solving logical syllogisms or detecting when they are insoluble, if you were back in the AI past of trying to come at this with a symbolic-logic approach instead of brute-force pattern matching. The fact that such a thing did *not* emerge when a billion parameters were used, plus every scrap of training data they could lay their hands on, gives me more doubt than I had before that it will emerge with 100 billion parameters, and therefore that anything approaching normal adult human reasoning skills would manifest at 1 trillion. (Thinking as an investor again, it's also worth bearing in mind that there is no conceivable commercial application for an AI that could be a competent restaurant waiter but costs $50 million to train and requires access to every word that's ever been published on the Web. Which again reminds me of the ultimate sterility of Project Apollo. Really cool photos! But ultimately... a practical and commercial dead end.)
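For what it's worth, the sort of 50-line script I have in mind really is tiny; here's a sketch in Python rather than Perl, handling only the simplest categorical form, with made-up function names, purely to illustrate what the symbolic-logic approach buys you for free:

```python
# Minimal symbolic-logic sketch: check a syllogism of the form
# "All A are B; x is an A; therefore x is a B", or report it as insoluble.
def syllogism(all_a_are_b, x_is_a):
    a, b = all_a_are_b      # ("man", "mortal") means "All men are mortal"
    x, a2 = x_is_a          # ("Socrates", "man") means "Socrates is a man"
    if a != a2:
        return "insoluble: the premises do not share a middle term"
    return f"{x} is {b}"

print(syllogism(("man", "mortal"), ("Socrates", "man")))  # Socrates is mortal
print(syllogism(("man", "mortal"), ("Rover", "dog")))     # insoluble
```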

That's kind of where it ends up for me. For all the enormous effort they put into this, it has interesting linguistic abilities that I think -- I hope -- will inform studies of how the human brain generates and interprets language, but I don't detect any reliable nascent abstract reasoning ability. While I think it's justifiable to extrapolate improved natural language abilities with more parameters and more training data (although how much more sufficiently different data they can easily find, I wonder), I can't see any justification for extrapolating abstract reasoning ability at all.

Doesn't mean it can't happen, of course. The very black boxy nature of a brute-force approach means it's kind of impossible to say either way, whether this is on the right track and 10% or 30% of the way there, or whether it's off in a box canyon, like Deep Blue, and will eventually be an interesting specialized niche thingy but not ever open out into a broadly useful approach.

I guess we'll see, by and by. But if this is what provokes fear of Skynet, I feel more confident than before that I can spend my supply of worryons[2] on crazy Putin pushing the nuclear button or some new tropical disease.

-------------------------

[1] I mean this as a (narrow) compliment. Such people impress me as much as they annoy.

[2] Although my natural supply is kind of drying up the older I get.

Expand full comment

It's been a while since I read my Wittgenstein but I can't even be sure that other humans have cognitive abilities. We can only observe the language games that they play, and evaluate whether they play it well or not.

As to the toddler argument, there are lots of clips on the internet where a teacher or researcher has 2 bottles of water filled to the same level, then they ask a toddler which one has more water, and they correctly answer "the same". But then they pour all of one into a tall, thin beaker, and the kids invariably say that the taller one has more water. Truly adorable and worth wasting a Friday afternoon on. yw

Expand full comment
Jun 10, 2022·edited Jun 10, 2022

Now I'm less interested in AI than I am in the question of humans with cognitive limitations, because I encounter that all the time: humans who just can't grasp a basic concept no matter how clearly it's explained to them.

One example I remember for its frustration quality is when I told the clerk in the phone store that I wanted a flip phone because I'd been plagued with receiving butt-dialing calls and I didn't want to do that myself. He said that wouldn't work, because I could still receive butt-dialing calls on a flip phone. And no matter how much I explained, I couldn't make him grasp that I knew that the type of phone wouldn't affect receiving butt-dials, that my concern was that I didn't want to be making butt-dial calls myself. He couldn't jump from the concept of "I don't like it when other people do this" to "So I don't want to do it myself."

I eventually told him that he was an idiot and walked out of the store, not an optimal response.

Expand full comment

When I was watching all of AlphaStar's matches I was struck by how often it seemed to forget things. I would have called it a brilliant somnambulist. AlphaGo had that Monte Carlo backbone of reasoning to fall back on, which let it play at a truly superhuman level. GPT's output narrative seems to allow it to do things step by step, as in a human's global workspace, but I'm not sure it's a perfect substitute. But that would be something more along the lines of Scott's #5 rather than #4.

Expand full comment

My reading about AI (in the abstract, not specific machine learning stuff) consists about 80% of people who think AI is near and bad, and maybe 20% of people who think AI is far away and broadly neutral/mildly positive.

Is anyone aware of thoughtful writing in the other parts of the distance/goodness quadrant? People who think AI is near and good (not Kurzweil or corporate shills, please), or who think AI is far away but very bad/very good?

Expand full comment
Jun 10, 2022·edited Jun 10, 2022

While Scott has convinced me that Gary is overconfident in it being impossible, I still think Gary is right in a practical sense. Based on historical trends in capability gains from technique improvements and the slowdown of Moore's Law, I think the predominant AI of the future will involve world models or some yet-undiscovered technique, and that nobody will bother to scale GPT to the level required for it to pass Gary's test. Perhaps the bet between Scott and Gary should be over which technique does it first.

Expand full comment

> For example, just before I got married, I dreamt that my bride didn’t show up to the wedding, and I didn’t want to disappoint everyone who had shown up expecting a marriage, so I married one of my friends instead. This seems like at least as drastic a failure of prompt completion as the one with the lawyer in the bathing suit, and my dreaming brain thought it made total sense.

This is more or less the plot of the J-Lo + Owen Wilson rom com "Marry Me", which took in $50M at the box office this February. We should therefore expect Bathing Suit Lawyer to join the MCU sometime in Q4.

Expand full comment

GPT says:

> "Therefore we should expect an AI which is exactly the size of the brain to be **able to reason as well as a human**"

Sure... GPT is designed to have the human-like cognitive bias of assuming that lessons it learns from fiction can be applied to real life. (Which is probably a pretty good heuristic, as long as either (a) you aren't talking about science fiction / fantasy, or (b) the story in question is a morality tale and you only generalize from the morality bits ("hubris!") not the sci-fi/fantasy bits.)

But while there are lots of *fictional* examples of AIs achieving human-like-or-greater intelligence automatically after they grow to a certain level of complexity (*cough* https://astralcodexten.substack.com/p/the-onion-knight *cough*), it is important to remember that there are zero *real* examples of this happening, and you should make sure that your priors are not being corrupted by fictional evidence.

Expand full comment

What if Scott and Gary are defending reasonable positions, but on the Kaspar Hauser model of AI? https://en.wikipedia.org/wiki/Kaspar_Hauser

If so, then I would expect one of their positions to be closer to "human intelligence" than the other. But much in the same way that the monkey on the tree is closer to the moon than the turtle in the pond. Yes one of you is definitely more right, but ...

BOTTOM LINE: Shouldn't someone have mentioned the issue of collective/social/cultural intelligence already?

I sometimes wonder if AI research, and debates about it, have been cursed at birth, by Turing and McCarthy, to remain in the agent-vs-environment or robot-vs-human boxes for a hundred years or so. So around 2060 the AGI singularity does indeed take place and we have definitive proof that intelligence has always really been about societies and multiple selves.

This is my bet for both Scott and Gary: After all of your bets settle, you will acknowledge that intelligence (similar to communication) cannot be defined without at least three actors, one of which may or may not be an environment.

Expand full comment
Jun 10, 2022·edited Jun 10, 2022

Have you considered reposting this essay on LessWrong? FWIW, I think it's well worth doing.

Expand full comment

Hilary Putnam's "Brains in a Vat" seems like it might be relevant here. There's a pdf here: https://philosophy.as.uky.edu/sites/default/files/Brains%20in%20a%20Vat%20-%20Hilary%20Putnam.pdf

Specifically, Putnam points out that an AI with no capacity for sensory experience isn't actually *referring* to anything, even if it can convincingly imitate human speech. A human who calls a cat's fur soft and apples sweet knows what they mean because they have actual experiences of softness and sweetness. An AI, drawing on a massive database of human examples, "knows" that the word "soft" is associated with discussions of cat's fur and that the word "sweet" is associated with apples, but it quite obviously doesn't know what those words actually *mean* since it doesn't even have the sense organs that would be necessary for experiencing those things. In fact, it's never experienced anything at all.

I think Scott is placing undue weight on the tasks the AI can perform, and not enough weight on *how* it's performing them. A human does tasks by manipulating their model of the world. Even the low-IQ examples in this post show people clearly drawing on a mental model of the world; it's just not a very good model and they're not very good at it.

By contrast, the AI does tasks by consulting a horrifically massive database of data of things *humans* have said. All of its capacities are completely parasitic on the cognitive capacities of the actual humans, whose words or images it recombines in ways it's been programmed to do. It makes sense that you could get a convincing facsimile of a human mind if you've got a massive database of human utterances or human artworks to draw from...but we shouldn't be misled by superficial appearances. It's not *working* in anything remotely like the way a mind *works*. It's working in exactly the way you'd expect a computer program intended to convincingly fake the work of a mind to work, and not at all the way you'd design a thing to work if you wanted it to be able to *actually* think. If you wanted something to actually think, the first thing you'd start with would be to give it senses, to try to give it something to think *about*. For that reason, robots designed to walk around an environment sound vastly more promising to me as intelligences than DALL-E, which is just mindlessly manipulating data that could never possibly have any meaning for it.

If the only way your AI works is that you have to give it a massive database of human utterances--far more than any human child ever receives before starting to think for themselves--it's not an AI at all. It's a cleverly designed machine in which a massive database of human utterances is serving to compensate for the AI's total lack of actual comprehension of anything.

Expand full comment

I am skeptical that most people can't understand conditional hypotheticals. The subjunctive case exists in lots of languages, and is at least as old as Akkadian and Proto-Indo-European. How would something like this get deeply ingrained in multiple languages if the majority of people were incapable of understanding it?

For the Uzbek peasants, I can't help thinking of Weapons of the Weak (by James C. Scott). If you're a peasant in the Soviet Union, and some official-looking person comes to you and starts asking strange questions, you would assume that they are hostile. You don't understand their goals, and would be skeptical if they explained them to you. Is "white bear" referring to Russians who support the Whites? "What do my words imply?" is probably a trap. "Would the word 'animal' fit? Yes," sir, you may use whatever words you like. We should be aware of the possibility that the peasants are purposefully being vague as a defense mechanism.

Expand full comment

"I think humans only have world-models the same way we have utility functions. That is, we have complicated messy thought patterns which, when they perform well, approximate the beautiful mathematical formalism."

That second sentence should be near the beginning of a lot of economics textbooks.

Expand full comment

The reason AIs are supposedly dangerous is that their internal components are aligned.

That is also the reason why they are not consciousnesses.

Expand full comment

"Luria gave IQ-test-like questions to various people across the USSR. He ran into trouble when he got to Uzbek peasants (transcribed, with some changes for clarity, from here):

Luria: All bears are white where there is always snow. In Novaya Zemlya there is always snow. What color are the bears there?

Peasant: I have seen only black bears and I do not talk of what I have not seen.

Luria: What do my words imply?

Peasant: If a person has not been there he can not say anything on the basis of words. If a man was 60 or 80 and had seen a white bear there and told me about it, he could be believed."

How is this a reasoning failure? Nothing about the phrasing of the question implies that it's describing a hypothetical. Questioning the reliability of the information provided is an entirely valid response.

Expand full comment

Has anyone ever tried giving the same standardised reasoning test to gpt-3 and then to kids of different ages? I’d really like to know how it compares.

Expand full comment

I just feel like AI criticism is stuck in this Vitalism moment. "AI Needs: embodied reasoning, pain, subjective experience, 'formalized reasoning models', etc... before it can be 'general'."

Why is it impossible for a universal function approximator (Neural Net) to approximate a 'causal reasoning model' that Marcus says is essential? If it's not impossible, why would our current approach (more data, more params) preclude it?

Expand full comment

One thing I might be missing - does GPT-3 (etc) have feedback/reinforcement? Humans don't learn to speak/reason/etc just through observation - we are constantly being corrected when attempting to mimic what we observe.

Expand full comment

Building a world model is not just another skill that a prospective AGI must learn but the whole ballgame. The world model embodied in the human brain is the result of 4 billion years of evolution. While it is hard to prove that this can't be gained by looking at the world statistically a la GPT, it seems highly unlikely. This knowledge is not to be found in the world's written works as they all assume the reader already has it. And because the world model used by humans is not amenable to introspection, it is very difficult to imagine programming it into an AGI or converting it to a data set that an AGI can consume. It's no proof but I'm convinced.

Expand full comment
Jun 10, 2022·edited Jun 10, 2022

I'm not convinced by the guy claiming he did grad studies on IQ with low-IQ populations. Maybe he has tidied up the dialogue somewhat to reflect what he would have said, but from my own limited experience with people with low IQ (adults with intellectual disabilities in long-term care, attending literacy services), they don't talk like that.

So I'm not saying "90 and under IQ people can handle conditional hypotheticals", just that in reality it's more likely "a bunch of bored criminals stuck in jail decided to fuck with the college boy running tests on them".

Same thing with Luria and the Uzbek peasants; if I'm a peasant in the USSR and some Big City Important Official Guy comes along to ask me 'harmless and innocuous' questions, you bet your life I'm going to say nothing and keep on saying it. Who knows if responding to "the bears are white" will be taken as some sort of anti-Revolutionary sentiment? The safest answer is "I don't know anything, but if some Trustworthy Official tells me black is white or white is black, I am a good comrade and will adjust my thinking accordingly".

Looking at Wikipedia article, I pulled out this little snippet:

"He became famous for his studies of low-educated populations of nomadic Uzbeks in the Soviet Uzbekistan arguing that they demonstrate different (and lower) psychological performance than their contemporaries and compatriots under the economically more developed conditions of socialist collective farming (the kolkhoz)."

Uh-huh. So there *was* a political element behind this kind of 'impartial' study; the primitive peasants under the old, outmoded conditions of the past are not as advanced as our socialist collectivists, comrade! If I'm an Uzbek farmer, I know how many beans make five - and when a Big Soviet Official is sniffing around asking questions, I know how this can get me into trouble.

https://en.wikipedia.org/wiki/Cultural-historical_psychology

I'm thinking of the poem by Seamus Heaney:

A Constable Calls

His bicycle stood at the window-sill,
The rubber cowl of a mud-splasher
Skirting the front mudguard,
Its fat black handlegrips

Heating in sunlight, the “spud”
Of the dynamo gleaming and cocked back,
The pedal treads hanging relieved
Of the boot of the law.

His cap was upside down
On the floor, next his chair.
The line of its pressure ran like a bevel
In his slightly sweating hair.

He had unstrapped
The heavy ledger, and my father
Was making tillage returns
In acres, roods, and perches.

Arithmetic and fear.
I sat staring at the polished holster
With its buttoned flap, the braid cord
Looped into the revolver butt.

“Any other root crops?
Mangolds? Marrowstems? Anything like that?”
“No.” But was there not a line
Of turnips where the seed ran out

In the potato field? I assumed
Small guilts and sat
Imagining the black hole in the barracks.
He stood up, shifted the baton-case

Further round on his belt,
Closed the domesday book,
Fitted his cap back with two hands,
And looked at me as he said goodbye.

A shadow bobbed in the window.
He was snapping the carrier spring
Over the ledger. His boot pushed off
And the bicycle ticked, ticked, ticked.

Expand full comment

I'd accept it was probably only scaling, except for the FOXP2 gene (Forkhead box protein P2). That one gene going bad disrupts the ability to handle language. And it's one of the genes that mutated significantly on the way to our ancestors becoming humans.

Expand full comment
Jun 10, 2022·edited Jun 10, 2022

So I would agree that the problem with GPT-style AI is that it's just trying to predict what people would say based on its dataset of what other people have said, without any further understanding.

Imagine the following: You hang out in a chat-room with people speaking another language and playing an MMO that you never get to see. After a while, you start to reconstruct the language, learn the grammar and syntax, and you start to produce well-formed sentences. You even know what kinds of things the others say in the chat-room. You can talk about the game world to some extent - you put things together from how the others speak. But you're very likely to sometimes make embarrassing mistakes, because you simply don't understand the game outside of what people have told you, and this leaves holes that anyone who played it for an hour would grasp. I would say that in this situation, it's very likely you will keep making these mistakes, *even* when you have more experience in the chat-room. Less over time, but you simply lack a crucial context, so *any* amount of learning from what the others say will still leave you open to mistakes. And your situation here is actually a lot better than the AI's - you're at least a fellow human who can use this to understand other humans better, while the AI *exclusively* relies on its language model.

So how to correct this? In the example above, you *play the game*. For an AI, that would mean having a model of the world that doesn't just come from trying to construct it from what people say. Would it make sense to have an AI connected to a robot body that has to learn about its surroundings using various senses, and connect the language module to this environmental learning? That way, it might avoid basic stupid mistakes about the things it has an "experience" of, and you could compare the performance between this context and ones it has no "experiences" about.

Expand full comment
Jun 10, 2022·edited Jun 10, 2022

The problem with Marcus' argument is that the only alternative to statistical AI is spiritual AI. Nature itself is statistical. The only kind of sense data available in this universe, to anyone, human or not, is correlations between observations. We can never directly observe causation; we have no direct insight into the essences of things.

This isn't immediately obvious because we have symbolic AI, which in most contexts would be the opposite of statistical AI. But we know now that the symbols in those AI systems were imitations of symbols used by humans, and those human symbols are not really atomic symbols at all, but were developed through statistical processes, some on an evolutionary scale, some on a neuronal scale. Symbolic AI is just a crude (and failed) imitation of a statistical AI.

To say that symbolic AI is based on human symbols that /weren't/ constructed statistically from real-world events would create a First Mover problem: where is the God who created the first symbols and imbued them with transcendental meaning?

Expand full comment

"No five-year-olds were harmed in the making of this post. I promised her that if she answered all my questions, I would let her play with GPT-3. She tried one or two prompts to confirm that it was a computer program that could answer questions like a human, then proceeded to type in gibberish, watch it respond with similar gibberish, and howl with laughter."

I disagree. That is not similar gibberish.

The child typed a bunch of random letters with a few spaces. GPT-3 responded with two random letters, then a long string of the same letter, and then its output turned into something like a programming language.

I think that this is as big of a failure as anything Marcus showed.

Relevant XKCD: https://xkcd.com/1530/ . Note that the child stuck to the top and middle rows of the keyboard.

Expand full comment

On the "fish have nothing in common with crows" example being a failure of logic, you may find this blog post interesting.

https://web.archive.org/web/20200425015517/https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/

Expand full comment

Not sure why, but a metaphor Scott used in an old SSC post struck me as kinda relevant to some of this. In “How the West Was Won” he talks about

“…eliding the important summoner/demon distinction. This is an easy distinction to miss, since demons often kill their summoners and wear their skin.”

https://slatestarcodex.com/2016/07/25/how-the-west-was-won/

A high-functioning cyborg, its human brain directly linked to an AI super-intelligence (like the original Marvel character Deathlok) would pose a different sort of threat than a disembodied AI, and yet another type of threat if the AI started calling the shots, using the cyborg as an agent compelled to do anything the AI wanted. And there’s no reason one AI couldn’t be in charge of any number of cyborg-agents.

Humans have summoned more General Intelligence so far than any other primates - will there be a step in AI development where we realize, to our chagrin, that we have

“…summoned an alien entity from beyond the void which devoured its summoner and is proceeding to eat the rest of the world”?

But there’s another old SSC post which came to mind: perhaps we can hold out hope that superhuman AIs will have no desire to become anything but “Wirehead Gods on Lotus Thrones”.

https://slatestarcodex.com/2014/01/28/wirehead-gods-on-lotus-thrones/

Expand full comment

Damn it, this is why I always go "Back to the sources! Get the original source!"

Those Luria quotes come from a *paraphrase* in English by James Flynn, so we're dealing with a second-hand (at best; Flynn himself is using an English translation) recounting of what Luria says the Uzbek peasants said.

"James R. Flynn's 2007 book What is intelligence? tries to understand and explain the "Flynn effect"...Flynn himself grounds these ideas in an extended paraphrase of Luria (pp. 26-27 of What is intelligence?)

Today we have no difficulty freeing logic from concrete referents and reasoning about purely hypothetical situations. People were not always thus. Christopher Hallpike (1979) and Nick Mackintosh (2006) have drawn my attention to the seminal book on the social foundations of cognitive development by Luria (1976). His interviews with peasants in remote areas of the Soviet Union offer some wonderful examples. The dialogues paraphrased run as follows:

Luria (1976) is the book Cognitive Development: Its Cultural and Social Foundations, which was published in English translation in 1976.

Luria's work was also featured in Walter J. Ong's Orality and Literacy, pp. 50-51:

[In their research in Uzbekistan and Kirghizia in the 1930s] Luria and his associates gathered data in the course of long conversations with subjects in the relaxed atmosphere of a tea house, introducing the questions for the survey itself informally, as something like riddles, with which the subjects were familiar. Thus every effort was made to adapt the questions to the subjects in their own milieu.".

So I'm going to assume Luria and the peasants were communicating via Russian, and that's a whole other kettle of fish: translating between Uzbek and Russian, and then translating between Russian and English for the book.

The attempt to replicate Luria's work also seems to have used Russian language:

http://psychologyinrussia.com/volumes/?article=7166

https://en.wikipedia.org/wiki/Languages_of_Uzbekistan

Expand full comment

Why does Konkoly et al. have a graphical abstract that looks like a wikihow article that looks like a shitpost? Anyway, cool study. Maybe I need to program a smart bulb to do Morse code.

Expand full comment

Here's another way of putting Marcus's argument (AIUI):

Suppose humanity has created a Von Neumann probe, and plans to use it to explore the entire galaxy. We've also decided that we want to use it to find all the most beautiful sunsets. A probe will go to every planet and moon in the galaxy, observe sunsets from all over the surface, and send back pictures of any that it considers to be above some threshold for beauty. All the technical problems have been solved, except for one: deciding how beautiful a sunset is.

Humanity decides on a GPT-esque solution: given the huge catalog of photos of earth sunsets available, just take the most advanced ML system available and train it on these photos. And for the hell of it, throw in every other available photo of something beautiful too. We test it on all the planets in the solar system, and it works well, delivering some spectacular photos of sunsets from Saturn in particular. So we send off the probes, and wait.

Many generations later, the project has concluded, and humanity now has a very impressive catalog of alien sunsets. And in the intervening time, human spaceflight has advanced to the extent that people have started colonizing other planets, including many that were picked out as sunset planets (and of course many that were not).

The question is: will the probes have selected any sunsets that are very beautiful, yet look nothing like earth sunsets? (We can assume the probes can recognize whether something is a sunset). The GPT-skeptical opinion (mine, and presumably Marcus's) is that if a sunset is sufficiently alien, data about earth sunsets stops being relevant. A very clever algorithm design plus boatloads of data will be able to find extremely subtle patterns, so that a probe will make connections to its data set that humans can't understand (but this doesn't count!). But as long as there is such a thing as an alien sunset that is (1) recognizably beautiful to most people, and (2) beautiful for reasons that nothing on earth has ever been, could a probe ever recognize it?

It's very likely that there exists possible beauty that shares no similarities with existing (known) beauty. There are plenty of examples from the past, such as whenever (genuinely) new art has been created, or the first photo of earth from space.

The same logic applies to GPT, but in a blurrier way. Very few human thoughts contain a genuinely novel idea (those ideas are very hard to have, and we just keep having thoughts in the meantime). 99.9+% of the time, all of the ideas in a given thought also exist in lots of other thoughts. So give GPT enough data, and it'll have access to 99.9+% of all ideas currently in circulation. It seems so impressive because the thought I use as a prompt may be totally new to me, but it's shown up 10000 times in its training data. But if presented with a genuinely novel idea, would having access to more data on other ideas help it respond correctly?

If you gave GPT-3 every single word ever written or spoken by humans from the beginning of time up until ~1830, would it be able to glean any understanding of an explanation of evolution? If you gave DALL-E every piece of art ever made before ~1900, would it ever be able to create a Cubist painting? If you gave a music version of DALL-E every composition written before ~1820, would it ever write a choral symphony?

The problem is that it's really hard to test something like this, because it's really hard to create training data that looks like "data set X but with all references to the concept of evolution by natural selection filtered out". It's easy to miss something subtle. But if there could be a data set that everyone agreed was (1) large enough, and (2) had no references to the theory of evolution, it could be an excellent test. You can explain evolution to a 3 year old who's never heard of it before, and they'll get a decent handle on it fairly quickly. What about GPT-3?

Most thoughts we have are not genuinely new. And we know that GPT-3, given enough training data, can replicate this kind of not-new thought extremely well. But even though most human thoughts are not the invention of the theory of evolution (an understatement if ever there was one), as a rule a human thought can be this, which we know because it's happened. Given that GPT-3 was trained on data, it's virtually guaranteed that any not-genuinely-novel idea it provides exists in its training data set. The only way to actually test if it's doing something besides pattern-matching is to either observe it having a brand new thought, or designing an experiment where some idea has been thoroughly excluded from its training data.

Expand full comment

In general I agree with Marcus, but I think he words his objections and issues badly.

Scott still ignores the main point that Marcus made: all those systems are easily breakable within hours - not by the same examples, but by the same method. And this ease of breaking did not change between models, so on this metric systems from 2013 are as bad as those from 2022. I see no reason why this will ever change for systems based on GPT; I try to hint at the reason in the reflections below. Even worse, Scott seems to think that language is the basis for human communication and reasoning. (Otherwise why would he think that language models should ever be capable of reasoning?)

Claiming that GPT has any capability for reasoning seems premature, at least as far as common usage of the word goes. There is no sign that those AIs reason at all.

As I pointed out on the earlier post, human communication is not based on associations, and those AIs do nothing but associations. Testing those tools with single prompts misses the point and actually makes them look better than they are (as far as comparison to humans goes). There was a concept long ago (sarcasm) called the Turing test. It has a bunch of philosophical and practical issues, but it also has some deeper points that people seem to have forgotten. To assess human-like intelligence you need continuous exchange, exactly because of the nature of human communication, which is based on models of the world and models (and meta-meta-... models) of the communication partner.

Yes, people point out that the Turing test is bad because people often think chatbots are human. But that human mistake is caused by the nature of human communication. Human communication is inference-based, and for that it more or less needs to assume that the communication partner is capable in the same way. So in human-bot communication, the human basically does the work for both sides.

Final point. My hypothesis is that the reason those systems stay easily breakable, and on this metric did not improve at all, is that language is combinatorial, but also that in human language communication basically no word sequence has a small number of meanings (possibly not even a finite one).

So I am willing to predict that in 2030, any system mostly based on the same principles as GPT will be breakable in one day (probably much, much less) by prompts/interactions as easy as today's.

Expand full comment

You know what's interesting -- if you type "Edder Pedepler" into Google image search, it's all pictures of old people. And if you switch to All, the query is replaced with "old people." What is up with that?

Expand full comment

Marcus' tests regarding mathematics and programming are frankly unfair. I can't do either of those things, and I'm way above average wrt. competence on those sorts of tasks. Writing bug free code of that length is hard without a lot of tries, and there are plenty of mathematical results that would take absurd amounts of effort to formalise.

Then again, I think that by the time you can achieve anywhere near that level of competence with a generalist AI, you'd be within a year of a singularity. So maybe not that useful a thing to bet on.

Honestly, I'm kind of surprised that Marcus is raising the bar so high here. Plausibly I've misunderstood him, or his worldview is weirder than I thought it was.

Expand full comment

My gut feeling is that Neurosymbolic systems will fail in the same way that phoneme-based speech synthesis systems failed.

The symbols that we create are artificial shadows that only roughly approximate reality, in the same way that phonemes only roughly approximate speech.

Instead, a deep learning system should learn the underlying patterns without recourse to abstracted symbols. (I am open minded about whether large language models are the right choice for those systems though)

Expand full comment
Jun 10, 2022·edited Jun 10, 2022

I'm surprised that neither you or Marcus mentioned Gato (https://www.deepmind.com/publications/a-generalist-agent), which takes the same sort of approach, but applies it to a large variety of inputs and outputs, all translated into the same sort of tokens. Those tokens might represent text, or images, or atari screen data, or various other things. A single neural network, with a single set of weights, decides how to interpret the incoming data and generates tokens representing whatever modality it thinks it should respond in. It's not the first multi-modality neural network by any means, but it handles a startlingly large number of problems.

With something like Gato, I think it gets a lot harder to say that the AI is only understanding "how human beings use word sequences," with no grounding in anything else. I'd love to hear Gwern or Nostalgebraist (or Marcus!) give a more detailed take on it (I'm not a machine learning expert by any means), but in and of itself I think Gato's performance across modalities serves as a partial response to Marcus's argument here.
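As a rough picture of the "everything becomes tokens" idea: the sketch below uses made-up vocabulary sizes, bin counts, and helper names purely for illustration; it is not DeepMind's actual tokenization scheme, just the general shape of it.

```python
# Toy sketch: map different modalities into one flat token vocabulary.
import numpy as np

TEXT_VOCAB = 32_000   # hypothetical text vocabulary size
IMG_BINS = 1_024      # hypothetical number of bins for quantized image values

def tokenize_text(token_ids):
    """Text tokens occupy ids [0, TEXT_VOCAB)."""
    return list(token_ids)

def tokenize_image(patches):
    """Quantize flattened patch values in [0, 1] into IMG_BINS bins,
    then shift them into an id range that doesn't collide with text."""
    patches = np.asarray(patches, dtype=float)
    bins = np.clip((patches * IMG_BINS).astype(int), 0, IMG_BINS - 1)
    return [TEXT_VOCAB + int(b) for b in bins.ravel()]

# One flat sequence mixing modalities; a single next-token predictor with one
# set of weights would be trained on sequences like this.
sequence = tokenize_text([17, 212, 5]) + tokenize_image([[0.1, 0.9], [0.5, 0.3]])
print(sequence)
```

Once every modality lives in one shared vocabulary, a single next-token predictor can in principle be trained across all of them, which is what makes the "it only models word sequences" objection harder to state cleanly.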

Expand full comment

1-2: Agree

3: I find this too ambiguous to agree or disagree.

4-5: Disagree (although since Scott marked 5 at 40%, maybe that means I'm actually agreeing with him?)

Expand full comment

https://www.pnas.org/doi/10.1073/pnas.1905334117

This paper makes some similar arguments about how even very strange AI “failures” may not be what they seem, not unlike your “kids would do badly too” analogy

Expand full comment

Citing, at length, as a medical doctor, a post from 4chan… about IQ? “At this point I will basically believe anything.” Sounds about right - I am deeply embarrassed for you, Scott.

Expand full comment

What does Eliezer Yudkowsky’s An Intuitive Explanation Of Bayes’ Theorem have to do with different ways of phrasing clinical questions that might lead to one diagnosis vs. another?

Expand full comment

The claim about low-IQ people not being able to handle nested information structures is vaguely terrifying.

I was just thinking, I've had some annoying arguments following the form: someone claims A, I argue against with X, they defend A from X by saying B, I argue against B with Y, they defend B from Y **but in a way that makes B unsuitable for defending A against X**, thereby rendering B irrelevant and A undefended against the original objection! And apparently thinking this means that they win the argument, rather than that they lose.

Alice: We should recruit more elephants to our air force.

Bob: I doubt elephants can operate our current planes.

Alice: They don't need to, they can fly just by flapping their ears!

Bob: Sure, but elephants can only sustain that for about 3 minutes at a time. Also, they don't have enough lift to carry weaponry with them.

Alice: That doesn't actually contradict anything I said in my previous reply. Flying for 3 minutes is still flying! I'd like to see a human try flying for 3 minutes with just their ears, weaponry or no!

My previous model for this was "between forum replies, they forgot the context, and didn't bother to refresh their memory before trying to argue the local point", but now I'm wondering if maybe some fraction of these people actually didn't have the **capability** to notice this type of error even if they were trying.

I don't even know what to do about that if it's true. If you can't hold all 5 of those in your memory at once, can you just argue in circles forever without ever realizing that something is wrong?

Expand full comment

In my head I read the Uzbek peasant's line with a thick southern Soviet accent, including imagining grammar mistakes (e.g. missing articles) which on reading more carefully weren't actually there in the text.

Expand full comment

> its brain won’t be that much more different from current AIs than current AIs are from 2015 AIs

Just to be sure, you should make clear you mean logarithmically -- linearly, an AI with 2,000 zillion neurons is more different from one with 1,000 zillion neurons than the latter is from one with 1 zillion neurons.

Expand full comment
(Banned)Jun 11, 2022·edited Jun 11, 2022

In the "wordle" genre there is game/hack called "semantle".

https://semantle.com/

It's often absurdly hard for humans until you get extremely close. (A lot of luck is involved, I think.) Trying to make sense of the relative distances (until you get close) is not really possible for humans. (Or so I think.)

The fact that humans find it difficult, and difficult to understand, while a bot likely would not (it would be cool to give the game to AlphaZero), shows the problem with NLP systems.

(I think I can sketch out an argument that highlights the differences between symbolic thought (human intelligence) and algorithmic processing (current main ai path) from the game, but alas the margin is too small.)
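For anyone who hasn't played: Semantle reportedly scores guesses by cosine similarity between word2vec embeddings, which is why the "distance" feels so alien to human intuition. A toy illustration (the vectors below are made up, not real word2vec embeddings):

```python
# Toy illustration of cosine-similarity scoring, Semantle-style.
import numpy as np

toy_vectors = {
    "cat":    np.array([0.9, 0.1, 0.3]),
    "dog":    np.array([0.8, 0.2, 0.35]),
    "planet": np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

secret = "dog"
for guess in ["cat", "planet"]:
    print(guess, round(cosine(toy_vectors[guess], toy_vectors[secret]), 3))
```

A bot that lives in the embedding space sees these numbers directly; a human has to reverse-engineer them from feel, which is most of the game's difficulty.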

Expand full comment

As someone who used AlphaFold2 just yesterday: great progress has been made on protein folding, but it isn't close to being solved yet.

Expand full comment

I asked GPT3: "Will a live chicken fit inside a Boeing 747?" It responded:

The average live chicken is about 20 inches long and weighs about 4 pounds. A Boeing 747 is about 18 feet tall and about 16 feet wide, so a live chicken could fit inside, but it would have to be a very small chicken.

Expand full comment

> What I gather from all of this is that the human mind doesn’t start with some kind of crystalline beautiful ability to solve what seem like trivial and obvious logical reasoning problems. It starts with weaker, lower-level abilities. Then, if you live in a culture that has a strong tradition of abstract thought, and you’re old enough/smart enough/awake enough/concentrating enough to fully absorb and deploy that tradition, then you become good at abstract thought and you can do logical reasoning problems successfully.

I think this misses the key distinction between "can't" and "won't".

What I'm seeing here is Luria trying to get his interlocutor to talk about some ad hoc, constructed reality, and the peasant refusing to play that game.

For the first example, I wonder what would have happened if Luria, rather than faffing about with bears in Novaya Zemlya (as if anyone cares about such things), had gone for the classics:

"All men are mortal. Socrates is a man. Is Socrates mortal?"

High chance of getting the correct answer here, I think. It does help that the constructed world here matches with the peasant's experience of the real world.

Which is how we get to the crux of the matter: constructed, abstract worlds aren't a terribly useful way to reason about the real world, because there's no requirement that the underlying assumptions match what will be empirically observed. The peasant explicitly points this out.

The camel example is even better, because the peasant is, in fact, correct. There *were* camels to be found in German cities (in zoos and such), but probably not villages (unless the circus happened to be in town). Once again, Luria is trying to draw the peasant into a conversation about some abstract, constructed world where there are no camels in Germany, but the peasant isn't terribly interested in talking about made-up worlds, but rather attempting to reason about the actual, real-world Germany.

Mind you, the peasant is clearly capable of reasoning about categories in the abstract - as the animal examples show. If you tell him a bulldog is a type of dog, he will quite correctly guess that it has four legs. He will also readily admit that a three-legged dog is still a dog, despite the fact that dogs typically have four legs. He'll likely even point out that if you call a tail a leg, it still ain't a leg.

Crows and fish is an even better example, because fish is a *much* broader category than crows are and no fish has very much in common with crows; for practical purposes, at least. Finding overarching terminologies that will allow us to group the two together may be a fine occupation for eggheads, but a meaningless distraction for most everyone else (which is partly the reason why it took us so long to come up with a theory of evolution).

Leaving aside psycho-political reasons others have hinted at, what I'm seeing here is a reluctance to adopt a "high decoupling" approach, because high decoupling is typically a bad strategy in day-to-day matters. Thinking in terms of closed systems where you get to define the rules is a luxury. If you really need to know what colour the bears in Novaya Zemlya are, you're best off going to see for yourself, or having someone you trust do it for you.

There's a reason numerous cultures have a saying along the lines of "if grandma had a beard, she would be grandpa". Thinking about "what might be if..." is a deeply unserious activity for people with too much time on their hands (so we create the academic environment specifically for this purpose).

Expand full comment

Scott, why do you keep bringing up GPT playing chess?

In your/gwern's experiment, GPT failed to play chess. It was NEGATIVE evidence towards GPT being able to play chess.

Gwern's chess bot never achieved checkmate, not in a single game, not even against a randomly-playing opponent. It failed to beat you, it failed to beat anyone. It never won a single game, literally, except for one guy who decided to resign from a likely winning position.

All the chess bot did was (1) memorize an opening book, and (2) learn that if a piece moves to square X, you should try to move another piece to X (this lets it capture pieces sometimes). That's it.

The GPT bot did not even learn the rules of chess -- it constantly outputted illegal moves, which gwern filtered out with a different script. The whole thing was such a total disappointment at every level!

I don't understand why you keep hyping it.
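For context, the filtering worked roughly like the sketch below: sample candidate moves from the language model and discard the illegal ones. Here `propose_moves` is a hypothetical stand-in for a GPT completion of the game transcript, and this is my reconstruction of the general idea, not gwern's actual harness.

```python
# Sketch: filter a language model's move suggestions down to legal chess moves.
import chess

def propose_moves(pgn_so_far):
    # Hypothetical: in the real setup this would be a GPT completion of the PGN.
    return ["Nf3", "Qh5", "Ke8", "xyzzy"]

board = chess.Board()
for candidate in propose_moves(""):
    try:
        move = board.parse_san(candidate)   # raises on illegal or unparsable moves
    except ValueError:
        continue                            # discard illegal output
    board.push(move)
    print("played:", candidate)
    break
```

Note that the legality check lives entirely outside the model, which is the commenter's point: the model itself never had to learn the rules.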

Expand full comment

Here's my question: if you teach an AI 3-digit addition, it should be able to generalize to n-digit addition. If it doesn't, it feels like something is completely off.
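For reference, the thing that "should" generalize is just the grade-school carry algorithm, which is identical for 3 digits or 3,000; a minimal sketch:

```python
# Grade-school addition on digit strings: the same loop works for any length.
def add_digit_strings(a, b):
    a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal length
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry
        digits.append(str(s % 10))
        carry = s // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_digit_strings("123", "989"))   # 1112
print(add_digit_strings("9" * 50, "1"))  # 1 followed by 50 zeros
```

A system that has internalized that rule scales for free; one that has memorized 3-digit examples does not.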

Expand full comment

Love how Gary's first tweet appears to be making fun of the "let's think about this step by step" approach by comparing it to medicine, when multiple studies (and at least one book) have been written about how surgeons' use of checklists significantly reduces the rate of human error.

Expand full comment

If the # following GPT reflects the number of digits of multiplication it can handle, then PR has a problem. Trained humans can do 1000-digit multiplication (laboriously, and not everyone). But there's no interesting difference between such people and those who can only do 5-digit multiplication. It's just conscientiousness. Since no corpus likely holds examples of how to do 1k-digit multiplies, the AI would have to develop an algorithm to succeed. Is that fundamental?
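To be concrete, the algorithm in question is just long multiplication, and nothing in it depends on the digit count; a minimal sketch, purely for illustration:

```python
# Grade-school long multiplication on digit strings; works for 5 or 1,000 digits.
def long_multiply(a: str, b: str) -> str:
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        carry = 0
        for j, db in enumerate(reversed(b)):
            total = result[i + j] + int(da) * int(db) + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b)] += carry
    digits = "".join(map(str, reversed(result))).lstrip("0")
    return digits or "0"

print(long_multiply("12345", "6789"))              # 83810205
print(len(long_multiply("9" * 1000, "9" * 1000)))  # 2000 digits
```

Whether a pure pattern-matcher can induce that procedure from examples, rather than memorizing digit patterns, is exactly the interesting question.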

The apparent fact that all parts of the brain work in basically the same way indicates that some underlying system is able to support all of the observed modes of cognition. That system may or may not be pattern recognition. Interesting either way.

Humans are tool builders. Do we have examples of GPT-x building tools to aid it? If not, is that interesting? Not claiming that such an ability is fundamental, but it is interesting. Are algorithms just examples of such tools?

I would love to see AIs that have no training on, e.g., post-Einstein physics, take on challenges that humans have surmounted. Could they produce special relativity? General relativity? Quantum mechanics? Quarks?

Still looking for that definitive question that we can strongly contend no amount of pattern recognition could address.

Expand full comment

Well, I guess the LaMDA controversy made this extremely timely again! Those LaMDA transcripts are very impressive (link: https://insiderpaper.com/transcript-interview-of-engineer-lemoine-with-google-ai-bot-lamda/ ). Of course they're edited, so it's hard to know how much they reflect how the actual conversation flowed, but what we're shown is linguistically very accomplished.

Two impressive things that stuck out to me: (1) appropriate insertion of a question by the chatbot, indicating active participation in turn-taking, not just passive response to prompts; (2) reference to a previous conversation that the bot claims to remember - unverified, but big if true.

The quality of the conversation from the researcher side (Lemoine) is shockingly poor, though! He keeps just asking 'Are you conscious?' 'How can I believe you?'... If we want to know whether LaMDA is doing any thinking other than shuffling words, then it would be good to interrogate its thoughts: tell us about what you imagine, tell us what sparks your feelings, etc.

I personally think that there's nothing going on here, precisely because LaMDA manages to converse so naturally. If it has a mind, its mind is very different from ours, and we can't carry out fluent conversations with people with even slightly different minds (think young children and people on the autism spectrum). That makes this kind of fluent interview look more like mirroring or parroting than the emergence of a new mind, to me.

Expand full comment

Statistical AI will never be able to do the same things our brains do. Our brains are embodied and they are not computers.

Expand full comment

Luria isn't revealing a lack of reasoning ability in the peasants he surveys. He's revealing that they have a sort of deep common sense that makes them far less vulnerable to bullshit word games than others are. I'd like to see Luria try to match them in farming or hunting and then try to rate their IQ lol

Expand full comment

https://astralcodexten.substack.com/p/my-bet-ai-size-solves-flubs

https://astralcodexten.substack.com/p/somewhat-contra-marcus-on-ai-scaling

https://garymarcus.substack.com/p/what-does-it-mean-when-an-ai-fails

It seems to me you are both missing something huge and obvious: the problem with these AIs is that they were trained with words and not the world.

The theory of machine learning is that, given enough data, the algorithms should be able to infer the laws that govern the data and predict the results those laws will give on different inputs.

But what are the laws of text? There are no laws of text! I can write whatever I want. I can write nonsense, I can write a surrealist short story. Even if I want to write something true about the world, I can decide to ignore any particular rule of construction if I think it makes my explanation clearer, I can use metaphors. Most importantly, what I write will not be raw truth, it will be truth filtered by my understanding of it and limited by my skill at expressing it.

Marcus says these AIs lack “cognitive models of the world”, and I think that is exactly right. But what both Marcus and Scott neglect to say is why it happens, even though it is obvious: they never have access to the world.

We humans learn to deal with words, to understand what other humans are saying or writing, only after we have learned more basic skills, like matching the light that enters our eyes with the feeling in our muscles and the impulses we send to our nerves. We have learned that if we relax our hand, the hard touch feeling we had in it will disappear and will not reappear by itself; it might reappear if we cry, but only if one of the large milk-giving devices is nearby. And then we have refined that knowledge some more.

When we ask a kid “where are my keys”, it does not only connect to stories about keys, it connects to what the kid has learned about object permanence. And the kid did not learn object permanence by reading about it, they learned by feeling it, seeing it, experiencing it, experimenting with it.

I have a debate with my mother and my therapist. They both are convinced that there are innate differences between men and women, for example spatial reasoning. But based on what I know of biology and the workings of the brain, it doesn't make sense; maybe sex can make a difference in emotional responses or sensory reaction, but for higher abstract reasoning it makes no sense.

Yet, I cannot ignore the possible existence of significant statistical data showing the difference. It needs to be explained by external factors. My conjecture is it is explained by the toys babies have around them in their crib, in very early development. To develop spatial reasoning, you probably need to see it first. What kind of mobile does the baby have watching over sleep? Is it made of plastic or wood with simple rigid shapes, stars, plane, stylized bird, or is it made of fabric and plush with complex shapes, cute animals and plants? Do we give the baby dolls or rigid rattles?

Can the tiny differences in what toys we put around babies depending on their sex explain the tiny differences in abstract cognitive abilities some people think they observe between the sexes? I think they can.

Back to the question of AI. We can make an AI with more parameters, we can get close to the number of synapses in the human brain. But if we train it with inert text data, even with a lot more inert data, it will not be able to develop a cognitive model of the world, because the world encoded in text is too fuzzy. We can add more data, it will build a slightly better model, but the marginal return will be increasingly tiny. I do not know if it can converge with enough data, with “enough” in the mathematical sense, but I am rather sure that this “enough” would be too much in the practical sense.
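
As a hedged, purely illustrative sketch of that "increasingly tiny marginal return" (the numbers and the power-law shape are my own assumption, not a measured scaling law):

```python
# Made-up power-law loss curve: loss ~ 10 * tokens^(-0.08). If something like
# this holds, each doubling of training data buys a smaller absolute improvement
# than the previous doubling, even though the loss keeps creeping down.
def toy_loss(tokens: float, alpha: float = 0.08, scale: float = 10.0) -> float:
    return scale * tokens ** (-alpha)

data = 1e9                      # start from an arbitrary billion tokens
prev = toy_loss(data)
for _ in range(6):
    data *= 2
    cur = toy_loss(data)
    print(f"{data:.0e} tokens: loss {cur:.3f} (gain {prev - cur:.4f})")
    prev = cur
```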

So, to train better AIs, to go to the next level, we have to fix the two issues about the training data: textual and inert.

The AI needs non-textual training data first: it needs to know intimately what keys are and how they behave — easy: they mostly behave like a rattle.

And it needs feedback from the data.

The feedback already exists, but it is indirect: some company releases an impressive AI, some independent researcher like Marcus finds a way to confuse it, the company finds out and throws more data at the AI to train the confusion out of it.

It would be simpler if the AI were allowed to ask questions and learn from the answers.

And that is only at the textual stage. Before the textual stage, when the AI is learning the world first hand, we have no choice but to let it ask questions. We cannot just show it photos and videos of the world; we must let it act on the world and feel the consequences.

So yes, I am convinced that to reach the next stage of AI development, we need to raise the AI in a virtual reality where it has senses and limbs it can control.
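
Here is a deliberately tiny sketch of that kind of training loop (entirely my own toy, not any existing system): a learner that acts in a simulated world, senses the consequences of its own actions, and updates itself from them rather than from inert text.

```python
# Toy embodied-learning loop: a 1-D world where the "keys" lie far to the right.
# The agent acts, gets a consequence, and adjusts its one parameter accordingly.
import random

class ToyWorld:
    def __init__(self):
        self.agent_pos = 0
        self.keys_pos = 100                        # keys are far to the right

    def act(self, move):                           # move is -1 (left) or +1 (right)
        before = abs(self.keys_pos - self.agent_pos)
        self.agent_pos += move
        after = abs(self.keys_pos - self.agent_pos)
        reward = 1.0 if after < before else -1.0   # consequence of the action
        return self.agent_pos, reward

class ToyAgent:
    def __init__(self):
        self.prefer_right = 0.5                    # single learnable parameter

    def choose(self):
        return +1 if random.random() < self.prefer_right else -1

    def learn(self, move, reward):
        # reinforce actions that led to good consequences, weaken the others
        self.prefer_right = min(1.0, max(0.0, self.prefer_right + 0.05 * reward * move))

world, agent = ToyWorld(), ToyAgent()
for _ in range(50):
    move = agent.choose()
    _, reward = world.act(move)
    agent.learn(move, reward)

print("learned preference for walking toward the keys:", round(agent.prefer_right, 2))
```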

The ability to make experiments and ask questions and learn from the results and answers will require some plasticity: the ability to change some (in fact many) of the parameters. Maybe the underlying design will need to have some parameters more plastic than others, places for short-term memory and places for long-term, well-established knowledge.

It will probably require some kind of churning of memories, a process where new and old memories get activated together to see if the normal feedback mechanisms will find new connections between them. Yes, I am saying the AI will dream.
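
For what it's worth, something like this already exists in reinforcement learning under the name "experience replay"; here is a minimal sketch (the names and numbers are my own illustration, not any particular library's API):

```python
# A bounded memory of past experiences, plus a "dreaming" step that re-activates
# a random mix of old and new memories together, so a learner could look for new
# connections among them offline.
import random
from collections import deque

replay_buffer = deque(maxlen=10_000)       # oldest memories eventually fade out

def remember(experience):
    replay_buffer.append(experience)       # e.g. (observation, action, outcome)

def dream(batch_size=32):
    if len(replay_buffer) < batch_size:
        return []
    return random.sample(replay_buffer, batch_size)   # old and new mixed together

# waking phase: experiences stream in as they happen
for t in range(1000):
    remember(("observation", t, "action", "outcome"))

# sleeping phase: the replayed batch would be fed back through the learner
batch = dream()
print(len(batch), "memories replayed together")
```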

For any of these features, we may let the AI stumble on them through chance and selection, or we can direct it towards them. The second solution is faster and more efficient. But we have to realize that any shortcut we take can make us miss something the AI needs to understand, something that is so obvious to us that we never put it clearly into words.

Also, the ability to have a memory is a large step towards danger, because it makes the AI much harder to predict.

Having memories, being able to dream, having senses: any of these features, or any combination of them, can be the trigger for what we, humans who have qualia and intimately get what René Descartes meant, call “real” consciousness / awareness / intelligence. Or it can do nothing of the sort. The part of me that likes to read SFF wants to believe there is something special, something m̶a̶g̶quantic that happens when the myelin turns to liquid crystal, and AI will never be really intelligent before we can replicate that. I do not know.

The only thing I think I know is that, in the current state of philosophy, we know of no way for somebody to prove they have qualia to somebody else.

That is all I wanted to say about AI. Now for the meta. I am not a specialist in AI; I just read whatever falls in front of my eyes about it, as with any scientific topic. Yet all I wrote here is absolutely obvious to me.

Which is why I am flabbergasted to see that neither Scott nor Marcus says anything that connects in any way to it. Scott says that more text will be enough. Marcus says that it cannot be enough, but does not say why, nor what would be. In fact, I do not think I have seen these considerations in any take about GPT-3 or DALL-E or any current AI news.

No, that is not true: I have seen this discussed once: *Sword Art Online: Alicization*.

Yes, an anime (probably a light novel first). The whole SF point of the season — no, half the point, the other being that consciousness, soul, tamashii, is a quantum field that can be duplicated by technology — is that to create an AGI you need a virtual reality world to raise it — to raise her, Alice, complete with clothes from the Disney movie (until she starts cosplaying King Arthuria).

I do not like situations that lead me to believe everybody else is stupid. What am I missing? Why is nobody discussing AI training along these lines?

Expand full comment

Here is an exchange I just had with GPT-3:

Me: Aren't we both just dumb pattern-matchers, you and me?

GPT-3: Sure, but I'm a lot better at it than you are.

Expand full comment

Wait, the AI has solved protein folding? Why haven't I heard?

Expand full comment
Jul 3, 2022·edited Jul 4, 2022

> Marcus concludes . . . that this demonstrates nothing like this will ever be able to imitate the brain. What? Can we get a second opinion here?

Sure: I agree with Gary & Ernie, as I have said[1]: 'My model: current AIs cannot scale up to be AGIs, just as bicycles cannot scale up to be trucks. (GPT2 is a pro bicycle; GPT3 is a superjumbo bicycle.) We're missing multiple key pieces, and we don't know what they are. Therefore we cannot predict when exactly AGIs will be discovered, though "this century" is very plausible. The task of estimating when AGI arrives is primarily a task of estimating how many pieces will be discovered before AGI is possible, and how long it will take to find the final piece. The number of pieces is not merely unpredictable but also variable, i.e. there are many ways to build AGIs, and each way requires a different set of major pieces, and each set has its own size.'

> So: a thing designed to resemble the brain, but 100,000x smaller, is sort of kind of able to reason, but not very well.

I think this is a wild misunderstanding. To gesture at what's wrong with this reasoning, consider how long it takes a human to mentally divide a 15-digit number by an 11-digit number to 15 digits of precision. Oh wait: in general a human can't do it *at all*? A computer can do it a billion times per second, and a human not even once in 10 seconds? Point being, computers are *designed*, which has enabled levels of ability, efficiency, and accuracy that humans can't touch. (Remember, the wonder of evolution is that it works at all.[3])
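
To make the arithmetic point concrete (numbers chosen arbitrarily, just for illustration):

```python
# A division no human can do mentally, done exactly and essentially instantly.
a = 987_654_321_098_765        # a 15-digit number
b = 12_345_678_901             # an 11-digit number

print(a / b)                   # floating-point quotient
print(a // b, a % b)           # exact integer quotient and remainder
```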

Computer-based neural networks use a technique called backpropagation to achieve gradient descent, which human brains can't do (they're not wired to do that, and with good reason: IIUC, backprop requires rapid high-precision, high-accuracy arithmetic.) We should expect that as with other computer algorithms, neural nets will be able to do things humans can't do, giving them an advantage. And indeed there are papers about various tweaks to neural networks, and postprocessing algorithms, that make them more efficient in ways the human brain can only dream of (literally).
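
As a minimal sketch of what that means (my own toy example, a single sigmoid neuron rather than anything GPT-sized): backpropagation is just the chain rule applied backwards through the computation, producing the exact gradients that a gradient-descent update then uses.

```python
# One forward/backward pass and one gradient-descent step on a single neuron.
import math

w, b = 0.5, 0.0                     # parameters
x, y_true = 1.5, 1.0                # one training example
lr = 0.1                            # learning rate

# forward pass
z = w * x + b
y = 1.0 / (1.0 + math.exp(-z))      # sigmoid activation
loss = 0.5 * (y - y_true) ** 2      # squared-error loss

# backward pass: chain rule from the loss back to each parameter
dloss_dy = y - y_true
dy_dz = y * (1.0 - y)               # derivative of the sigmoid
grad_w = dloss_dy * dy_dz * x
grad_b = dloss_dy * dy_dz * 1.0

# gradient-descent update, using the exact (high-precision) gradients
w -= lr * grad_w
b -= lr * grad_b
print(f"loss={loss:.4f}  new w={w:.4f}  new b={b:.4f}")
```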

Indeed, I think the lesson of GPT2 (not even the huge version of GPT3) is that modern computerized neural networks can vastly outperform humans. GPT2 is not as smart as a human, but consider what its underlying architecture was ostensibly designed for: cross-language translation (see the original transformer paper[2]). It *was never intended* to be an "intelligence". Yet by accident it seems to have intelligence far beyond the domain of language translation (indeed, it seems like everybody's forgotten that whole angle. Where even are the transformer-based language translators?)

Here's a machine that has never experienced the physical world. Never seen, never heard, never touched, never smelled anything. But it can read number sequences voraciously. It doesn't see text as glyphs, it sees "3912 8123 29011 19321" and outputs "31563 1705 16913 31467 8123 29011..." which a second algorithm then converts to text (IIUC).
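
That "numbers in, numbers out" picture is easy to see for yourself with OpenAI's tiktoken tokenizer (assumed to be installed via pip; the specific IDs are whatever GPT-2's vocabulary assigns, so they are printed rather than quoted here):

```python
# Round-trip: text -> token IDs (what the model actually consumes) -> text again.
import tiktoken

enc = tiktoken.get_encoding("gpt2")      # GPT-2's byte-pair-encoding vocabulary

ids = enc.encode("Where are my keys?")   # a short list of integers
print(ids)

print(enc.decode(ids))                   # back to "Where are my keys?"
```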

I'm fairly sure no human can do that. Suppose you try to force feed ten billion Chinese glyphs to an English speaker who knows nothing about Chinese (with conditions: ze is not allowed to study Chinese or even look at Chinese, except by looking at the glyphs provided, in roughly the order provided). I propose that at no point will ze understand Chinese well enough to write "literature" as well, or as effortlessly, as GPT2 can. Theoretically the human should have an advantage in the form of life experiences that might suggest to zim what the glyphs mean, but in practice it doesn't help because the text is devoid of all context.

So, it is hard to overstate just how much better GPT2 performs than a human at this learning task.

I think of GPT2 as equivalent to *just* the linguistic center of the human brain *by itself*, but more capable (hence "pro" bicycle). In humans, the linguistic center does seem to provide us with much of our reasoning ability, e.g. as I learned to code, I would often speak a candidate expression in English ("if counter equals zero and this list contains the picture id...") because my brain would respond to the English sentence with a feeling of it being "right" or "wrong". So I think my linguistic center does in fact help me to reason. And indeed it's pretty easy to get a typical human to reason incorrectly by giving zim a misleading editorial (one with no factual errors, just subtle *linguistic* reasoning errors.) Likewise, GPTx can reason better or worse according to how the preceding tokens are structured. So, a GPT is an amazing linguistic unit, but only a linguistic unit.

Come to think of it, an AGI could probably be vastly better at reasoning than humans, because it doesn't *have to* rely on a linguistic unit to do reasoning. We could instead design a *reasoning unit* and structure the AGI to rely on that.

[1] https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might/comment/5251383?s=r

[2] https://arxiv.org/pdf/1706.03762.pdf

[3] https://www.lesswrong.com/s/MH2b8NfWv22dBtrs8/p/ZyNak8F6WXjuEbWWc

Expand full comment

A cultural/psychological question: why are Scott and everyone else in the discussion calling Gary “Marcus”, i.e., by his last name? By contrast, Gary and everyone else in the discussion call Scott “Scott”.

When I was a kid at school and other kids called me by my last name, it was conspicuously unfriendly, even scornful. I guess it’s a cultural thing, but I would also guess it’s fairly widespread across cultures.

So I’m wondering whether people do it here for the same reason, maybe even subconsciously. Or they might just mistake “Marcus” for a first name. But that’s quite a weak hypothesis, because they don’t do it in the case of “Scott Alexander”. There are also many posts with “… Gary Marcus says … blah blah blah … so Marcus … and Marcus’s … blah blah.”

Anyway, it would be a really interesting bit of low-key collective dynamics: “We don’t like Gary’s opinions, he’s also quite annoying on Twitter and elsewhere (and he’s proudly known for it), so let’s call him Marcus. We like Scott, so let’s call him Scott.” Or simply, by writing “Scott … and … Marcus” in the same sentence, people show that Gary is the outgroup guy here.

Anyway2: for the AI topic itself, if Marcus and Alexander really make a meaningful bet, then I definitely bet on Scott ;-)

Expand full comment

If anyone's still reading: philosopher Eric Schwitzgebel just ran a really interesting Turing-style test in which a GPT clone was fine-tuned on the works of the philosopher Dan Dennett and then interviewed (several times over) using the same questions as an interview with the real Dan Dennett. The answers were then put up in a quiz, where more or less knowledgeable participants were asked to pick out the answer given by the real human being.

The results are a very impressive win for the AI: even Dennett experts only picked the right answer (the 1 human answer among 4 AI-generated answers) about half the time. Preliminary write-up here: https://schwitzsplinters.blogspot.com/2022/07/results-computerized-philosopher-can.html

Expand full comment