1128 Comments

Come on Scott, you're just not understanding this...for a start, consider the whole post! Tyler Cowen


Isn't this sort of akin to Normalcy Bias where people just stand and watch a tsunami that's about to destroy them because they think it can't possibly happen to them?


> The Safe Uncertainty Fallacy goes:

> 1. The situation is completely uncertain. We can’t predict anything about it. We have literally no idea how it could go.

> 2. Therefore, it’ll be fine.

> You’re not missing anything. It’s not supposed to make sense; that’s why it’s a fallacy.

No, sorry. This is a straight-up, uncharitable straw man of the argument. The actual argument sketch goes like this:

1. We have read and carefully thought about Yudkowsky's arguments for years. We find them highly unconvincing. In particular, we believe the probability of the kill-us-all outcomes he discusses is negligible.

2. We don't assert that everything will be fine. We assert that the problems that are actually probable are, while serious, ultimately mundane –– not of the kill-us-all sort.


For whatever persuasion value this has, I think you’d be a very interesting, persuasive voice on podcasts and other media on this topic (I know you don’t like that idea, but now seems like the time for courage to win out), and it would probably be a net good for society for normal people to hear you speak. Popularity breeds influence. I know you already have a lot, but it seems like it couldn’t hurt.


"Sovereign is he who decides the null hypothesis."


The existence of China renders basically any argument that we should restrict AI moot: they sure as hell won’t and you should trust them less. Sucky place to be, but it’s where we’re at.


"If you have total uncertainty about a statement (“are bloxors greeblic?”), you should assign it a probability of 50%"

This reminds me of a great quote by Mike Caro: "In the beginning, all bets were even money."


"We designed our society for excellence at strangling innovation. Now we’ve encountered a problem that can only be solved by a plucky coalition of obstructionists, overactive regulators, anti-tech zealots, socialists, and people who hate everything new on general principle...Denying 21st century American society the chance to fulfill its telos would be more than an existential risk - it would be a travesty."

This is so wildly counter to the reality of innovation by America and Americans vs. the rest of the world that I don't even know what to say about it other than it makes me trust your ability to understand the world and the people in it less.


I am not a prominent member of the rationalist community so I don't experience social pressure to assign a minimum level of probability to anything that comes up for debate. I don't think an AI apocalypse could happen. I don't think an AI apocalypse will happen. I think an AI apocalypse will not happen, certainty 100%. I also don't think I'll be eaten by a T-Rex tomorrow, also certainty 100%.

>If you have total uncertainty about a statement (“are bloxors greeblic?”), you should assign it a probability of 50%. If you have any other estimate, you can’t claim you’re just working off how radically uncertain it is. You need to present a specific case. I look forward to reading Tyler’s, sometime in the future.

I have a very strong prior that things I have been totally uncertain about (am I deathly allergic to shellfish? Will this rollercoaster function correctly? Is that driver behind me going to plow into me and shove me off the bridge?) have not ended up suddenly killing me.


> 3) If you can’t prove that some scenario is true, you have to assume the chance is 0, that’s the rule.

Why did you bother with this claim, which he's obviously not making, when the previous one was so much closer and is inconsistent with this one?

> Now we’ve encountered a problem that can only be solved by a plucky coalition of obstructionists, overactive regulators, anti-tech zealots, socialists, and people who hate everything new on general principle. It’s like one of those movies where Shaq stumbles into a situation where you can only save the world by playing basketball. Denying 21st century American society the chance to fulfill its telos would be more than an existential risk - it would be a travesty.

The problem is of course not solved if someone else gets it.


One quibble. You wrote: "Then it would turn out the coronavirus could spread between humans just fine, and they would always look so betrayed. How could they have known? There was no evidence."

Actually, when this kind of thing happens, the previous folks asserting that "there's no evidence that" often suddenly switch seamlessly to: "We're not surprised that..."


Is anyone else surprised by how safe GPT4 turned out to be? (I speaketh not of AI generally, just GPT4). Most of the old DAN-style jailbreaks that worked in the past are either fixed, or very limited in what they can achieve.

You can use Cleo Nardo's "Chad McCool" jailbreak to get GPT4 to explain how to hotwire a car. But if you try to make Chad McCool explain how to build an ANFO bomb (for example), he refuses to tell you. Try it yourself.

People were worried about the plugins being used as a surface for injection attacks and so forth, but I haven't heard of disastrous things happening. Maybe I haven't been paying attention, though.


My crash-vibe-summary of the MR article is

* this shit is inevitable, you're not going to stop it

* besides, the future is so unbelievably unpredictable that trying to even Bayes your way through it is going to embarrass you

* given both the inevitability and unpredictability, you may as well take it on the chin and try to be optimistic

Which, you know, has its charm.


Can any of the folks here concerned about AI doom scenarios direct me to the best response to this article: https://www.newyorker.com/culture/annals-of-inquiry/why-computers-wont-make-themselves-smarter

I am assuming some responses have been written but I wonder where I can read them. Thank you!


"Pascal's Stationary Bandit: Is Government Regulation Necessary to Stop Human Extinction?"


I’m still not convinced that the existential risk is above 0%, because nobody has any solid idea of specific things the AGI can actually do. You get arguments here that it will use a virus, but that needs human agency to build out the virus, for which you presumably need a lab. Or the AI gets control of nuclear launches - which are clearly not on the internet. I’ve heard people say the AI will lock us all up, but who is doing the locking up?


I think the strongest argument in Tyler's piece comes here: "Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become. It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison."

I believe this is a valid point, and the strongest part of the essay. It truly is easier to imagine how something may be destroyed, than to conceptualize how it may change and grow.

Tyler is making a point about the tendency of our brains to follow the simplest path, towards imagining destruction. The Easy Apocalypse fallacy? Perhaps, the Lazy Tabula Rasa argument?

Of course, this doesn't mean we shouldn't worry about it - he's right that the ingredients of a successful society are unimaginably varied, and likely one of the ingredients of avoiding apocalypse is having dedicated people worrying about it. Nuclear weapons haven't killed us all yet, but I'm deeply grateful that arms control advocates tamp down the instincts towards ever-greater proliferation


I think what is going on here is that we are in a domain where there are enough unknown unknowns that normal statistical reasoning is impossible. Any prediction about what will actually happen is Deutschian/Popperian “prophecy”.

Some people (Eliezer, maybe Zvi?) seem to disagree with this. They think they can pretty clearly predict what will happen according to some general principles. They don't think they are extrapolating wildly outside of the bounds of our knowledge, or engaging in prophecy.

Others (maybe you, Scott?) would agree that we don't really know what's going to happen. I think the remaining disagreement there is about how to handle this situation, and perhaps how to talk about it.

Rationalists/Bayesians want to put a probability on everything, and then decide what to do based on an EV calculation. Deutschian/Popperian/Hayekians (?) think this makes no sense and so we just shouldn't talk about probabilities. Instead, we should decide what to do based on some general principles (like innovation is generally good and people should generally be free to do it). Once the risk is within our ability to understand, then we can handle it directly.

(My personal opinion is converging on something like: the probabilities are epistemically meaningless, but might be instrumentally useful; and probably it is more productive to just talk about what we should actually *do*.)

That's how I interpret Andrew McAfee's comment in the screenshotted quote tweet, also. Not: “there's a time bomb under my seat and I have literally no idea when it will go off, so I shouldn't worry;” but: “the risks you are talking about are arbitrary, based on groundless speculation, and both epistemically and practically we just can't worry about such things until we have some basis to go off of.”


Is the fear that they will kill us or replace us? I don’t really mind if they are our successor species. The world moves on and one day it will move on without us. That was always our fate.

Killing, on the other hand, is a problem. But with our 1.6 birth rate, and them being immortal, I figure they'll just wait us out.


>let’s say 100! - and dying is only one of them, so there’s only a 1% chance that we’ll die

I'm sure a number of readers were wondering why one possibility out of 100 factorial would have a 1% chance.


There should at least be a plan to deal with the possibility. The U.S. government has made plans for nuclear attacks, alien invasions, various natural disasters, a zombie outbreak, etc. So why not an A.I. threat?

Fun fact: In the early 20th century the U.S. government had plans for war with Japan (War Plan Orange), war with the British Empire (War Plan Red), and war with both (War Plan Red-Orange). The last two plans included invading Canada, and this country (my country) had a plan to counter a U.S. invasion.

Does the latter sound implausible? Well, the U.S. invaded us twice: during your War of Independence and the War of 1812-15.

Certainly we should at least think about the possible negative consequences of new technology, and not just A.I. What about nano-machines, genetic engineering and geo-engineering?


I’m disappointed that Scott couldn’t come up with better steel manning for the opposing view. In fact, I suspect he could. Maybe we need a new fallacy name for when one purports to be steel manning, but is in fact intentionally doing such a weak job that it’s easy to say “see? that’s a totally fair best possible argument for my opponents, and it still knocks over like a straw man!”

In fact, the fairly obvious steel man for “let’s not worry about AI risk” is: we are equally uncertain about risk and upsides. Yes, AI may wipe out humanity. It may also save billions of lives and raise total lifetime happiness for every living person. Who are we to condemn billions to living and dying in poverty in the next decade alone because we’re afraid AI will turn grandma into a paper clip? AI has at least as much upside as risk, and it is morally wrong to delay benefits for the world’s poorest because the world’s richest fret that AI disrupting their unequal wealth is literally the same as the end of humanity.

I’m not advocating that view, just saying that it’s a much more fair steel man.


The fact that this is already being perceived as an arms race between China and the USA reduces the chance of any agreement to slow down.


I find a lot of the reasoning behind AI doomerism mirrors my own (admittedly irrational) fear of hell. You have heaps of uncertainty, and Very Smart People arguing that it could be infinitely bad if we're not careful.

The infinite badness trips up our reasoning circuitry--we end up overindexing on it because we have to consider both the probability of the outcome *and* the expected reward. Even granting it a slim chance can cause existential dread, which reinforces the sense of possibility, starting a feedback loop.

I'm not saying we shouldn't take AI safety seriously, or even dismissing the possibility of AI armageddon. But I'm too familiar with this mental space to give much credence to rational arguments on the subject.


1. Substance: I think you're slightly, but only slightly, uncharitable to Tyler's argument. I think the other implication of the argument is that we can't do a lot about safety because we don't understand what will happen next.

2. Chances: I view the chance of catastrophic outcomes at below 10% and of existential doom at... well, much lower. I think we've lost some focus here by going to existential-risk-only badness, and that there are quite bad outcomes that don't end humanity. I'm prepared to expend resources on this, but not prepared for Yudkowsky's War Tribunal to take over.

3. I *think* that bloxors aren't greeblic, and I should bet on it, assuming these words are randomly chosen for whatever bloxor or greeblic thing we are talking about.

Is the element Ioninium a noble gas? A heavy metal? A solid? A liquid? A gas? Was Robert Smith IV the Duke of Wellington? Are rabbits happiest? I mean, sometimes it'll be true, and sometimes it'll be likely to be true, but most is-this-thing-this-other-thing constructions are false. [citation needed]

"The marble in my hand is red or blue. Which is it?" - OK, 50-50.

"Is the marble in my hand red?" Less than 50-50.

I therefore think bloxors are not greeblic, and I am prepared to take your money. Who has a bloxor and can tell us if it's greeblic?


The alien spaceship example is really good, because it prompts reflection about the reality of physical constraints (ftl travel/nanotech), the implausibility of misalignment as default (being hellbent on genocide despite having the resources and level of civilization necessary to achieve interstellar travel/outsmarting humanity but still doing the paperclip thing) and how an essay author’s sci fi diet during their formative years biases all of it.


I find it extremely unlikely that bloxors are greeblic. I get the overall point you're trying to make, but please stick to reality! We all know they're far more spongloid and entirely untrukful.


"2) There are so many different possibilities - let’s say 100! - and dying is only one of them, so there’s only a 1% chance that we’ll die."

If there are 100! possibilities, then the chance that we'll die is much much lower than 1%.
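
For anyone who wants the magnitude behind that quip, a quick back-of-the-envelope (just arithmetic, nothing from the post itself):

```latex
% 100! is astronomically larger than 100, so "one possibility out of 100!"
% is nowhere near 1%:
100! \approx 9.33 \times 10^{157}
\quad\Longrightarrow\quad
\frac{1}{100!} \approx 1.07 \times 10^{-158} \;\ll\; 0.01
```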


"You can try to fish for something sort of like a base rate: “There have been a hundred major inventions since agriculture, and none of them killed humanity, so the base rate for major inventions killing everyone is about 0%”."

I get this isn't your actual argument (you're trying to steelman Tyler) but I can't help but point out that this falls victim to the anthropic principle. We are not there to experience the universes in which a major invention killed all of mankind.


When you can't make any strong arguments for any particular constraint on future history, do it like AlphaZero: simulate what will happen over and over, at each step try to gauge its relative plausibility, and update those estimates based on how things turn out in the sub-tree - try to leave no corner unturned. I find that when I do this, I just can't find any super plausible scenarios that lead to a good outcome; it's always the result of some unnatural string of unlikely happenings. On the other hand, dystopic and extinction outcomes seem to come about quite naturally and without any special luck - most paths lead there. Of course your results will vary depending on your worldview and your conception of the eventual capabilities of these things, but I suspect that some people who aren't worried haven't actually tried very hard to forecast.
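
A minimal sketch of the rollout exercise described above - a much-simplified stand-in for an AlphaZero-style search (plain weighted rollouts, no value backpropagation), with every state name and plausibility weight invented purely for illustration:

```python
import random
from collections import Counter

# Toy scenario tree: each non-terminal state maps to (next_state, weight) pairs.
# Every state and plausibility weight here is made up for illustration only.
SCENARIO_TREE = {
    "today": [("capabilities plateau", 1.0), ("rapid capability gains", 2.0)],
    "capabilities plateau": [("good outcome", 2.0), ("muddle through", 3.0)],
    "rapid capability gains": [("aligned by luck or effort", 1.0),
                               ("misuse by humans", 2.0),
                               ("misaligned takeover", 2.0)],
    "muddle through": [("good outcome", 1.0), ("dystopia", 1.0)],
}
TERMINAL = {"good outcome", "aligned by luck or effort", "misuse by humans",
            "misaligned takeover", "dystopia"}

def rollout(state="today"):
    """Follow weighted random transitions until a terminal outcome is reached."""
    while state not in TERMINAL:
        next_states, weights = zip(*SCENARIO_TREE[state])
        state = random.choices(next_states, weights=weights, k=1)[0]
    return state

def simulate(n=100_000):
    """Tally how often each terminal outcome turns up across many rollouts."""
    counts = Counter(rollout() for _ in range(n))
    return {outcome: count / n for outcome, count in counts.most_common()}

if __name__ == "__main__":
    for outcome, freq in simulate().items():
        print(f"{outcome:25s} {freq:.3f}")
```

Swapping in your own tree and weights is the whole exercise; the point of the comment is that the tallies end up dominated by whichever outcomes most transitions funnel into.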


Isn't splitting the atom an example of a new technology that everyone, and especially the folks involved, could see as a danger? I mean, it's great we have spectroscopy and other wonders of quantum discretion, but isn't the threat rather astounding - and yet we pursued it anyway? It doesn't take much imagination to envision any number of scenarios that end with a pretty bleak future. That threat, it seems to me, was apparent early on, and predictable, compared to the other examples. So there's one example of significant change being consciously pursued despite the predictable risk.


I think one point here is 'what is the actionable response being recommended and what level of certainty is needed'.

Ten years ago, AI safety people were saying 'maybe we should dedicate any non-zero amount of effort whatsoever to this field'. This required arguing things like 'the chance of AI killing us is at least comparable to the ~1 in 1 million chance of giant asteroids killing us'. Uncertainty was therefore an argument in favor of AI safety - if you're uncertain how things will go, it's probably >1 in a million, and definitely worth at least the amount of effort and funding we spend on looking for asteroids.

Literally today (https://www.lesswrong.com/posts/Aq5X9tapacnk2QGY4/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all), the most prominent and well-known AI safety advocate argued for a world-spanning police state that blows up anyone who gets too many computers in one place, even if this will start a nuclear war, because he thinks nothing short of that has any chance.

This...miiiiight be true? But advocating for drastic and high-cost policies like this requires a much much higher level of certainty! 'We don't know what will happen, so it's totally safe' is silly. But so is 'we don't know what will happen, so we had better destroy civilization to avert one specific failure mode'.


> Suppose astronomers spotted a 100-mile long alien starship approaching Earth.

What is, in your view, a reasonable thing to do in this situation?


"But I can counterargue: “There have been about a dozen times a sapient species has created a more intelligent successor species: australopithecus → homo habilis, homo habilis → homo erectus, etc - and in each case, the successor species has wiped out its predecessor. So the base rate for more intelligent successor species killing everyone is about 100%”."

This wouldn't be a great counter argument. Homo habilis didn't "wipe out" australopithecines in the same sense that we imagine a hyper-intelligent AI wiping us out. Nor did homo erectus wipe out homo habilis. It wasn't like one day a homo habilis gave birth to a homo erectus and the new master race declared war on its predecessors. The mutations that would eventually constitute erectus arose gradually in habilis populations, leading to population-wide genetic drift over time. By the time erectus had fully speciated, habilis was no more.


There's a lot of 1% risks. I don't even think we necessarily should get rid of nuclear weapons, and there was a much more than 1% risk of total nuclear war in the Cold War (and probably still more than a 1% risk now).

On the other hand, is there a less than 1% chance that this will dead-end again like self-driving cars did after 2017, and we'll end up with another AI winter?


Don’t we have another obvious case of a technological singularity that has done, and has the potential to do, great harm? We can’t un-split the atom, but nuclear science, particularly in its warlike use, has done great harm and could do much worse. Anything we “obstructionists, overactive regulators, anti-tech zealots, socialists, and people who hate everything new on general principle” can do to prevent the proliferation and use of nuclear arms seems necessary, even a matter of survival. Why would AI be any different?


I think there's a more selfish calculation that many people, including myself, run:

1. The risk is unknown, but seems pretty unlikely.

2. I'm not a powerful person, so even if I try to fight the risk, I will probably have no effect. If I ignore the risk, I can save my energy and not stress out.

3. Therefore, I'll carry on like everything is fine.

It doesn't make sense for most people to worry about vague risks that they have no real chance of affecting, unless they are very altruistic, which is quite rare.


Scott is the one relying on a fallacy. In almost all observable cases, Tyler will be proven right and he won't.


The point is tunnel vision. There is a non-zero possibility of AI killing us all. There are non-zero possibilities for thousands of other doomsday scenarios that are just as well specified as AI killing everyone - that is, we have no idea how it will happen.

Focusing on just this one thing when we literally have no idea how it will happen is a mistake. That is the point. Spend energy on things we can understand. Know when to quit, pause, refocus. That is a key feature of intelligence.

I see little to no acknowledgment from any AI safety folks about how emotionally charged this is for so many people, and how likely that is to cloud judgement and draw focus to the wrong thing. Or about how a bunch of people who think intelligence is what makes them special can easily talk themselves into believing that 1000 IQ has any kind of meaning, and means instant death, because obviously just being smarter than everyone is all it takes.


I think the argument works a bit better as a fatalistic one: start by assuming we're all doomed.

In a way we are. Except in the unlikely event that immortality is invented, we all die someday, and we don't know when. This doesn't mean we give up hope. You don't know what you'll die of and you still have the rest of your life to live, however short it might turn out to be. Ignoring our eventual doom is how we keep going.

It proves too much, though. Why worry about risks at all?

Perhaps we should worry about risks we can do something about, and not the other ones we can't? People will disagree on what disaster preparation is tractable, but the ill-defined risks we have little idea how to solve seem more likely to be the ones to be a fatalist about?


The alien example is a good one, but not for the reasons Scott thinks. If an alien spaceship is actually coming toward Earth, even if the aliens did want to kill us all, what can we do about it? Absolutely nothing. Any pre-emptive military response is overwhelmingly unlikely to be effective and will lead to more hostility from the aliens than they were originally planning. Instead of founding a doomsday cult around the fear that the aliens will kill us all, we might as well assume it's one of the other possibilities and find ways to make the best of the alien presence.

"There have been about a dozen times a sapient species has created a more intelligent successor species: australopithecus → homo habilis, homo habilis → homo erectus, etc - and in each case, the successor species has wiped out its predecessor. So the base rate for more intelligent successor species killing everyone is about 100%"

That's not how it worked. Australopithecus *became* homo habilis. The two didn't evolve on separate continents and fight an existential war where one genocided the other. The proper analogy would be if man and machine morphed into one superior entity that will take the human species to new heights.

"In order to generate a belief, you have to do epistemic work. I’ve thought about this question a lot and predict a 33% chance AI will cause human extinction; other people have different numbers. What’s Tyler’s? All he’ll say is that it’s only a “distant possibility”. Does that mean 33%? Does it mean 5-10% (as Katja’s survey suggests the median AI researcher thinks?) Does it mean 1%?"

All those numbers are meaningless. They give a sense of mathematical precision and hide the fact that they're no better than mere hunches. As a hunch, a "distant possibility" is perfectly acceptable.


In the words of TC, "the humanities are underrated."

Going from "GPT is more convincing at a Turing test with 10 billion parameters than with 1 billion parameters" to "this will just scale forever into superintelligence" is smuggling in too many assumptions.


>If you have total uncertainty about a statement (“are bloxors greeblic?”), you should assign it a probability of 50%.

This seems wrong to me? Most arbitrary statements are wrong ('this chair is blue, this chair is green, this chair is white,' etc for 500 colors, only one will be right).

Maybe you mean more like if a human makes a statement you should assign 50%, because humans are more likely to say true things than completely arbitrary things. But I don't think that makes 50% the right number, and I don't think that's 'total uncertainty' anymore.

Maybe the point is that even taking the outside view of 'most arbitrary statements are wrong' is itself a kind of context, and if we're talking truly no context then 50% is max entropy or w/e.
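
For what it's worth, the "max entropy" framing in that last paragraph is just the fact that binary entropy peaks at one half:

```latex
H(p) = -p\log_2 p - (1-p)\log_2(1-p),
\qquad
\frac{dH}{dp} = \log_2\frac{1-p}{p} = 0 \;\Longrightarrow\; p = \tfrac{1}{2}
```

That maximum only applies when the statement and its negation are treated as the sole, symmetric alternatives; once you bring in a reference class like "most arbitrary statements are false," the symmetry is gone.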


I feel like there's another specific smaller fallacy getting invoked here, which I see all the time but don't have a name for.

Which is basically people looking at someone who is declaring something a crisis and trying to rally support to solve it, and saying 'people have declared lots of crises in the past, and things have usually turned out fine, so doom-sayers like this can be safely ignored and we don't have to do anything.'

The fallacy being that those past crises worked out fine *because* people pointed out the crisis, rallied support behind finding a solution, and solved it through great effort. And if you don't do those things this time, things might not work out well, so don't dismiss the warnings and do nothing.


There is risk from AI in the fairly near term in that it could have a lot of information about us thanks to the bread crumbs each one of us leaves when we search, when we purchase, or when we subscribe.

Targeted advertising could morph into targeted propaganda that may be well nigh irresistible in some cases. This would be at the direction of humans though.

But the paper clip thing, or the much less likely AI with a will and agenda of it own? I don’t see it. This is *simulated* intelligence we are talking about to date. It *understands* exactly nothing.


>If you have total uncertainty about a statement (“are bloxors greeblic?”), you should assign it a probability of 50%.

No, you shouldn't. You should ask the question-asker for appropriate clarification if you can, and if this is impossible or not forthcoming you should assume you don't understand them properly or ignore the question as nonsense. Neither bloxors nor greeblic are words in any language I can find record of. Uncertainty Vs. Certainty has no place here.

What probability of truth would you assign to the following question, said by me: "To which departure gate would you assign the crate mason, the one who had one of that place?"

Now maybe some alien civilization is making errors in your language, which is a different matter. But in that case you still shouldn't be assigning every incomprehensible transmission you receive a 50% chance of being true as stated.


Prophesying Doom has a zero percent success rate. That's the base rate.

Remember the trolley problem? No-one talks about that now.


The core of Tyler's argument in that post is:

"Hardly anyone you know, including yourself, is prepared to live in actual “moving” history. It will panic many of us, disorient the rest of us, and cause great upheavals in our fortunes, both good and bad. In my view the good will considerably outweigh the bad (at least from losing #2, not #1), but I do understand that the absolute quantity of the bad disruptions will be high...

I would put it this way. Our previous stasis, as represented by my #1 and #2, is going to end anyway. We are going to face that radical uncertainty anyway. And probably pretty soon. So there is no “ongoing stasis” option on the table.

I find this reframing helps me come to terms with current AI developments. The question is no longer “go ahead?” but rather “given that we are going ahead with something (if only chaos) and leaving the stasis anyway, do we at least get something for our trouble?” And believe me, if we do nothing yes we will re-enter living history and quite possibly get nothing in return for our trouble."

In other words: "And therefore we will be fine." is not at all where Tyler lands on this post.


I always took it more as "the threat has no known shape so there's no point in trying to combat it."

There's always shades of a Pascal's Mugging in these discussions (I claim to have built a computer that can simulate and torture every human ever created. Probably I'm lying but just to be safe send me money so I don't). But I assume this community is smart enough to draw some principled line between that and "maybe AI will kill us all so we should treat it as a threat." I don't understand that line but I also put the odds of AI killing us all many orders of magnitude lower than you do.

Instead I'd like to propose Pascal's Bane:

If you don't do something there's a small chance the world will end. So we should do something but we have no idea what, and any action we take could actually increase that chance by some small amount so to be honest we're wasting our time here let's go get drunk.

And worse, there's serious tradeoffs to pursuing a random strategy like "don't build AI" to combat a small potential risk. Every time we aggressively negotiate with China or Russia we create a small chance of humanity-ending thermonuclear war but very few people see that as a good reason to never press an issue with those nations.


> The base rate for things killing humanity is very low

I'm skeptical of this claim - if humanity went extinct we wouldn't be here to talk about it (anthropic selection).

I reckon the base rate for extinction is actually pretty high:

* https://en.wikipedia.org/wiki/List_of_nuclear_close_calls is a thing

* We're pretty early: if you were randomly born uniformly across all humans, the p-value of being born in this century would be pretty low if you thought humanity was going to exist for millions more years. So we might consider that evidence for an alternative hypothesis that we won't last millions of years.
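
Putting rough illustrative numbers on that second bullet (both figures below are assumptions for the sake of the calculation, not the commenter's):

```latex
% Assume ~10^{11} humans born so far and, under a "long future" hypothesis,
% ~10^{15} humans ever. A uniform draw over birth order then gives
P(\text{born among the first } 10^{11}) \;=\; \frac{10^{11}}{10^{15}} \;=\; 10^{-4}
```

which is the sense in which "being this early" counts as evidence against very long-future hypotheses.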


Here would be my attempt to steelman the argument. I'm not sure I endorse this, though I think there's a kernel of truth to it.

Put yourself in the shoes of a European noble in 1450 who thinks this printing press thing is going to be a big deal. What exactly are you supposed to do about it?

You could lobby the king to ban the press. But then people in other countries will just do it anyway. Maybe you can get a lot of countries to agree to slow it down. But what does that actually achieve? Unless you truly stomp the technology out entirely, the change is going to come. It's not clear that slower is better than faster. And totally stomping it out would mean forgoing all the possible benefits for all time.

You can theorize a lot about possible effects and work to mitigate the harms. But you probably don't predict that some monk is going to start a religious shitstorm that leads to some of the bloodiest wars ever fought. How would you even begin to see that coming? And if you did, what would you do about it?

At the end of the day, history at this level is simply beyond humanity's ability to control. This is the realm of destiny or god's divine will. Trying to shape it is hubris. The proper thing is to just try and respond to each thing as it comes, doing your best. Trying to hold back the inevitable tide of history is unlikely to succeed and as likely to harm as to help.


Cowen's conclusion seems absurd to me. "Everything's going swimmingly with industrial technology so we should just go ahead and accelerate the current trajectory."

We're altering Earth's climate, destroying biomes across the globe, and are very clearly on track to bring about a global ecological collapse, probably including the planet's sixth mass extinction event. Yes, it's nice that we have toasters or whatever (for now), but *taking the long view* most certainly doesn't suggest that "ehh, radical technological change will turn out fine" is a reasonable attitude.


We also have no idea which random incantations will summon a vengeful god to rain destruction on earth, but you're not so worried about people chanting gibberish...


Hey, I get it. But I keep wondering what's up with you, Scott, who place the odds of disaster at 33%, Zvi, who gives no numbers but seems to consider death-dealing disaster likelier than 50%, and Yudkowsky, who mostly sounds sure we're doomed. If you guys all believe disaster is that likely, why are you tweeting and blogging your thoughts instead of brainstorming about actions that might actually make a difference? It makes me feel as though this is all some weird unacknowledged role-play game, and "the AI's gonna kill us if we don't stop development" is just the premise that forms a dramatic background for people to make speeches about dying with dignity, for people to put up posts that display their high-IQ logic. Scott, even if your present post spurs a remarkably honest and intelligent discussion of how to think straight about AI risk, how much difference do you think the occurrence of such a discussion is likely to make to AI risk itself? Do you think it will reduce the risk even 0.1%? You, Zvi and Yudkowsky are like guys lounging on the deck of the Titanic. Yudkowsky says he happened to fly over this area in a hot air balloon right before the ship launched and he saw that it was dense with icebergs. No way for a ship to thread its way through them. And now with the special telescope he brought with him he can see the one the ship's headed right for, about 4 hours away. So he's sitting there with like a plaid blanket over his legs, rambling on about death with dignity. Zvi's playing cards with a group of blokes, winning and winning and winning, and meanwhile talking a blue streak about the berg. And you're pacing around on deck, having exemplary discussions with followers about fair-mindedness and clear thinking, and everything you say is smart and true. But what the fuck! Aren't you going to try to save everybody? Of course you will have to be rude and conspicuous to do it, and you might have to tell some convincing lies to get things going, but still -- surely it's worth it!

I am not a terribly practical person, and have never had a thing to do with political work, not even as a volunteer for somebody's campaign, but even I can come up with some ideas that are likelier to have more impact than Yudkowsky's tweet debates with skeptics on Twitter. I have now posted these ideas once on here, once in Zvi's comments and once on Yudkowsky's Twitter. Very few people have engaged with them. I guess I'll say them once more, and just go ahead and feel like the Queen of Sleaze. It's worth it.

Do not bother with trying to convince people of the actual dangers of AI. Instead, spread misinformation to turn the right against AI: For ex., say it will be in charge of forced vax of everyone, will send cute drones to playgrounds to vax your kids, will enforce masking in all public places, etc. Turn the left against AI by saying the prohibition against harming people will guarantee that it will prevent all abortions. Have an anti-AI lobby. Give financial support to anti-AI politicians. Bribe people. Have highly skilled anti-AI tech people get jobs working on AI and do whatever they can to subvert progress. Pressure AI company CEO's by threatening disclosure of embarrassing info.

OK, I copy-pasted that from my comment on Zvi's blog. I get that this is not the greatest plan in the world, and probably would not work. But even in its lousy improvised form, it has a better chance of working than anything that's happening now. And it could probably be greatly improved by someone who has experience with this sort of thing. I personally have never even tried to bribe someone. No, wait, one time I did. I had flown to the town where I was going to do my internship, and had 3 days to find an apartment. When I tried to pick up my rental car at the airport I discovered to my horror that my driver's license had expired several months previously. Without that car there was no way to apartment hunt. So I said to the Avis clerk, "If I can't rent this car I'm in a terrible pickle. I've never tried to bribe anyone before but -- if I gave you $100 could you just pretend you hadn't noticed my license had expired? And I apologize if it's rude of me to offer you cash this way." And she just laughed, gave me the keys to the rental car, and would not take my money. Still, if there were people trying to slow down AI using the kind of ideas I suggest, I'd be willing to be involved. I would probably be fairly good at coming up with scary AI lies to spread on Twitter.


You'll note that your argument about space aliens rests on the prior existence of a very notable, definite, sign that something profoundly unusual is going to happen -- which is the detection of the alien starship. Hopefully you would quail before suggesting that we throw our entire civilization into emergency overdrive to deal with the threat of alien invasion *without* that observational fact.

And yet *that* is what the skeptics of the threat of AGI see as the problem. From their point of view, there is no alien starship in the skies. There is nothing that AI research has produced that shows the slightest sign of human-style intelligence (as opposed to exquisite pattern-finding and curve-fitting) or awareness. No AI has ever demonstrated a particle of original creative thought, none has ever exceeded its programming, none has ever demonstrated an internal sense of awareness, an interior narrative. GPT-4 doesn't; it appears to be just iPhone autocorrect on steroids -- it's able to predict what a human being would say in response to a particular prompt, most of the time, quite well. But there's no obvious reason why that capability can *only* come from creative aware intelligence -- after all, the iPhone doesn't need it to guess that I mean to say "father" instead of "faher" when I text my dad "Happy faher's day!" Does it have some interior notion of fatherhood, long to be a parent, speculate internally about the nature of emotional attachment? Heck no, it just recognizes a pattern from copious data.

From the skeptics' point of view, AGI doomers are "discovering" the giant alien spaceship only by indulging in a giant act of naive anthropomorphization, like primitives attributing thunder to sky gods, or the child thinking his stuffed animal resents being kicked. It talks like a person -- it must have an interior life like a person! This is a big leap of faith, and lots of people aren't willing to make it.

So that's the critique I think you need to address. Where is the proof -- not speculation, not emotional impression, not a vote of casual users with little experience of neural net programming -- where is the proof that there is any computer program that has any capability at all for creative intelligent thought, or any sign at all of self-awareness? Where is the 100-mile spaceship in the skies? Produce that, and the argument you're making here will have power, even for (honest) skeptics.


I agree with you that you can't wholly dismiss AI ruin arguments because we have never gone extinct before. But I also think you can't wholly dismiss historical context either. Any argument of the form "we should disregard historical examples because AI is completely new" is just as wrong in my view as any argument of the form "we should disregard AI doom because of the wealth of historical examples of smart people being wrong about the end of the world."

To me, both these things are very relevant. We should put lower weight on historical context because of the newness of AI. But we should also place the bar very high for AI doom to clear, because of the many many times extremely intelligent and thoughtful people have mispredicted the end of the world in similar arguments (about technologies that, at the time, they also argued were completely new). I don't think it's correct to clear the playing field.

Tangentially, I think the further out and fuzzier the future we are predicting, the less relative value we should put on percentage point likelihoods and the more value we should put on general human principles and abstract arguments. I don't think percentages are useless here -- but I do think they are relatively less useful than say, the percentage odds of a nuclear war, because the numbers are so much more intuition based and it's much easier for them to be off by many orders of magnitude. I would like to see more focus in this conversation on what is "right" for humanity to do and accomplish as a species and less focus on trying to put numbers on all the outcomes -- mostly because I think there is very little evidence any of these numbers are close (potentially in either direction).


The probability of pink unicorns killing everyone is 50%, because we have no evidence and no obvious reference class.

However you justify to yourself that ^ is false, the same argument should work if you replace "pink unicorns" with "AI".

It looks like there are doomers and skeptics, two entrenched camps, who don't much change their views even after discussion, which is better explained by conflict theory than mistake theory.


You've heard the joke about the Jewish Telegram? It reads: "Start worrying. Details to follow." (If it matters, I have some license to tell this joke.) The problem with uncertain situations is that you need to be able to see the shape of your enemy in order to worry about a situation effectively. The crux of this discussion is to what extent we are able to do this, to worry productively.

Admittedly, I've thought about this issue a lot less than Scott has, though I'm familiar with most of the issues superficially. But if the risk of AI is indeterminate, what's the risk of being a Luddite to one degree or another? What if we only have enough fossil fuel for one industrial revolution? What if we squander our one chance at a post-industrial society and slide back to pre-industrial? How many people die from war and starvation in that process? What if climate change is catastrophic? What if a planet-killing asteroid like the one in the KT extinction hits earth and destroys human civilization? People tend to underestimate asteroid risk, and increased technological development could help address it. Statistically, any random member of planet earth is arguably more likely to die due to an asteroid strike than in a plane crash. And for all the talk of AI alignment, non-AI human alignment is pretty shitty. What if geopolitical conflicts *without* AI lead to nuclear Armageddon? If there's a 33% chance that AI will end the human species, what are the odds that the human species will end *without* AI?

It's not that I think that uncertainty is safe. It's that I think that true safety doesn't exist to begin with. All civilizations are built on a knife's edge. We have our pick of dangers, and trade off one for another. Perpetuation of the species is not a given, even without strong AI. And if you accept that premise, then the notion that AI can lead to existential risks or existential redemption leads us to the question: to what extent does worrying about AI cause AI to be safer? If certain types of worrying lead to safer AI, then worry away! And maybe plans like "develop fast and then ponder alignment carefully once AI is just past a human level of intelligence" actually will make AI safer. If so, I think that's the counter-argument to what you're calling 'the uncertainty fallacy': that there are things we can do which will reasonably reduce AI risk. But *nothing* is safe. There is no road called "inaction." So at least some level of awareness is required before we weigh one peril against another.


As an electrical and computer engineer for thirty-five years, I have to say I don't get the premise. What is AI? It's a physical data processing system. Yes, there is a substantial baseline for this - over a half century. Many, well most, of the innovations along the way were unique and innovative. So far, there's been no successful attack on the human race with the possible exception of social media and that has garnered considerable attention.

As far as neural nets go (though that is only one of the technologies used in the loose term "AI"), there's quite a baseline for that as well - a big chunk of the animal kingdom including H. Sapiens. So far no species has mastered mind control over others using neural nets, though humans do make a case for extinction of other species.

On top of that, it just doesn't seem plausible at this point that there is any chance we will take collective action to stop AI, since we aren't able to take meaningful collective action against anything.

If we were able to do something collective, I'd rather see the simple steps necessary to stop the current pandemic than fuss about AI. Talk about potential downside: SARS-CoV-2 has a significant risk of becoming far worse than it has been so far. Sarbecovirus is its own unique sub-genus that has never infected humans before. We truly do not know what it is capable of.

So it goes.


I'm getting an error when I submit my comment. I'm going to break it up.

At what rate is the starship approaching? How far away?

If the 100-mile-long starship will be here in 1000 years, the threat is different.


Is GPT4 any closer to consciousness? Or is it still 1000 years away?

Do AGIs form memories? Someone knows the answer to that.

More specifically:

Do AGIs form memories that persist past the end of the chat?

Do they form memories that persist if power is removed?

An AGI without new memories is an AGI without new intentions.

Please, someone answer these questions.

I recommend that we NOT give these programs the ability to form persistent memories.


@Scott Alexander, seeing as you're actively responding to comments on the topic right now, and that the question arose in a few sub-threads - what is your current thinking on your old "we must develop AGI to defeat Moloch" point? I thought that was a fascinating idea but I don't see you returning to it a lot. To be clear, this is not "AGI might have huge positive returns" in the abstract - it's specifically "we absolutely have to develop a super-human AGI and we should hurry".

ETA this could even be made into an argument supporting Tyler Cowen's view - the dynamic of history is not likely to take us into pleasant futures if we don't "re-begin history".


I am puzzled why Scott makes no reference to “decision rules under uncertainty” in this piece. He must know about them?

These are the ones I was taught in my time:

Maximax (be risk-willing: choose the path of action that has the best possible outcome)

Maximin (be risk-averse: choose the path of action that has the most acceptable worst outcome)

Minimax regret (minimize maximum disappointment if the worst should happen)

Avoid catastrophe (avoid any alternative where there is even a minuscule chance of “catastrophe”. This is an extremely risk-averse version of maximin. We would never have allowed the person who invented fire to live and spread this knowledge to others, based on this decision rule.)

Laplace (assign an equal probability to all possible outcomes you have the fantasy to imagine)

…in order to contemplate any of these decision rules, we need at least to specify both the best and the worst possible outcome of inventing Artificial General Intelligence (AGI). Here is my take on that:

The worst possible outcome: AGI kills us all (although I am at a loss to see how, even if someone should be idiot enough to give AGI access codes to all nuclear weapons on Earth – lots of humans will survive even a nuclear holocaust. Back in the 1980s, an ambassador from Brazil cheerfully told me that the net effect would be that Brazil would emerge as the world’s dominant power.)

The best possible outcome: We colonize the stars. AGI makes it possible for us (or more precisely, for something we created and therefore can at least partly identify with) to colonize the universe. Let’s face it: The stars are too far away for any living, mortal being to ever reach them – let alone to find somewhere habitable for creatures like us. But machines led by AGI could. It is our best shot at becoming something more than a temporary parenthesis in a small, otherwise insignificant part of our galaxy, and a fairly ordinary galaxy at that.

Ah, and thinking of that possibility, there is one final decision rule under uncertainty, ascribed to Napoleon: “We engage/act/attack, and see what happens”.
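
A toy illustration of four of the five rules above on a made-up payoff table ("avoid catastrophe" is just an extreme maximin, so it is omitted); every action, state, and number below is invented for the example, not an estimate of anything:

```python
# Toy payoff table: utility of each action under each possible state of the
# world. All names and values are made up purely to show how the rules differ.
PAYOFFS = {
    #                    (AI goes well, AI goes badly, AI fizzles)
    "race ahead":        (100, -1000,  5),
    "cautious progress": ( 60,  -200,  5),
    "full stop":         (  0,     0, -5),
}

def maximax(payoffs):
    """Risk-willing: choose the action with the best best-case outcome."""
    return max(payoffs, key=lambda a: max(payoffs[a]))

def maximin(payoffs):
    """Risk-averse: choose the action with the least-bad worst-case outcome."""
    return max(payoffs, key=lambda a: min(payoffs[a]))

def minimax_regret(payoffs):
    """Choose the action that minimizes the maximum regret across states."""
    n_states = len(next(iter(payoffs.values())))
    best = [max(p[s] for p in payoffs.values()) for s in range(n_states)]
    worst_regret = {a: max(best[s] - payoffs[a][s] for s in range(n_states))
                    for a in payoffs}
    return min(worst_regret, key=worst_regret.get)

def laplace(payoffs):
    """Treat all states as equally likely and maximize the average payoff."""
    return max(payoffs, key=lambda a: sum(payoffs[a]) / len(payoffs[a]))

if __name__ == "__main__":
    for rule in (maximax, maximin, minimax_regret, laplace):
        print(f"{rule.__name__:15s} -> {rule(PAYOFFS)}")
```

On this invented table, maximax picks "race ahead" while the other three pick "full stop" - which is roughly the disagreement these rules are designed to make explicit.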


GPT-4 not only isn't AGI, it's not appreciably better than GPT-3 for any of the use cases I've given it.

I'm obviously preaching to the antichoir here so I won't bother going through it, I just want to register my dissent with the thesis.


"We designed our society for excellence at strangling innovation. Now we’ve encountered a problem that can only be solved by a plucky coalition of obstructionists, overactive regulators, anti-tech zealots, socialists, and people who hate everything new on general principle"

Really? This is not sarcasm? I’d rather humanity get wiped out by AI than continue to exist with these principles as our saving grace.

Thankfully, a lot of genius-level AI researchers seem to prefer forward progress as well.


Question: why is "gain more information and see whether it makes AI seem more or less threatening" not treated as the best reaction to the level of uncertainty we currently have?

I'm sure someone smarter than me has already thought about this, but I pretty much only hear people saying "We're all doomed" or "It's gonna be fine".


> So the base rate for more intelligent successor species killing everyone is about 100%.

If this is true - and I'm not convinced it is, due to correlation not being causation - I think there is still another variable to consider. All the more intelligent species were capable of sustaining themselves in their environment. As it currently exists - and I posit that this will be true for quite a while - AI will be dependent on its minders for its continuing existence: from power to swapping faulty hardware, AIs are still incapable of running completely independently.

Coupled with this premise there's another one: AI will be governed by a simple duality. Either it will be sufficiently intelligent to want to preserve its own existence, or, due to some quirk of its training, it will be susceptible to prioritizing goals other than continuing its existence.

In the first case, humanity is safe for enough time between "we have created AI" and "AI will destroy us all" to allow countermeasures.

In the second, well, that's no true AI, so it doesn't count. I'll go write my death poem, just in case.


> If you have total uncertainty about a statement (“are bloxors greeblic?”), you should assign it a probability of 50%.

The main problem with that is... well, it's easier to illustrate it

"are bloxors greeblic?" - no idea, so P("bloxors are greeblic") = 0.5

"are bloxors trampultuous?" - no idea, so P("bloxors are trampultuous") = 0.5

"are bloxors greeblic AND trampultuous?" - no idea, so P("bloxors are greeblic AND trampultuous") = 0.5

And we get P(A & B) = P(A|B) * P(B) = P(B|A) * P(A) = P(A) = P(B), which is the case iff A and B are perfectly correlated, i.e. P(A|B) = P(B|A) = 1, so we can deduce that any two things we're completely ignorant about are perfectly correlated.

The recommendation to assign 50% credence to any statement we have no clue about leads to probabilistic absurdities, so it's wrong
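
Spelling out the middle step of that algebra:

```latex
P(A) = P(B) = P(A \land B) = \tfrac{1}{2}
\;\Longrightarrow\;
P(A \mid B) = \frac{P(A \land B)}{P(B)} = \frac{1/2}{1/2} = 1
```

and symmetrically P(B|A) = 1, which is the "perfectly correlated" absurdity the comment points to.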


People are overthinking all of this. Everyone acknowledges that we simply don't know what the outcome of creating a superintelligence or even a very strong AGI will be. (Sure, there are various probability estimates for various hypotheses — but we don't really know.) So, due to this uncertainty, it makes sense to slow down attempts to create either entity until we do know and are confident in our knowledge. What is difficult to understand about this? What is objectionable? In a situation in which you have no idea what the outcome of creating X will be, and creating X possibly leads to human extinction with probability >0, then it is just common sense to proceed very cautiously until your knowledge increases to the point of making you confident of the outcomes you will unleash. The only intellectual difficulty here is a practical one — in getting an international moratorium on the creation of a superintelligence or even a very strong AGI. But it can be done.


I am not worried about AGI deciding to wipe out humanity of its own volition. I am worried about humans like Putin using a conscience-less AGI to do things even the worst Cheka killer would balk at, thus eventually leading to humanity’s downfall.


AGI may not kill everyone, but it seems very likely it could be misused as a highly efficient way to identify and kill selected people or groups!

When steam locomotives were first developed for public transport in the 1820s, many people, including experts, were certain that travelling at terrifying speeds of "thirty miles in an hour" would inevitably cause suffocation due to howling air flow making it impossible to breathe, and that use of the abominable new-fangled technology should be abandoned and banned! That fear turned out to be unfounded, but trains were an efficient way to transport millions of people to Nazi concentration camps.

Likewise, in a century or two, when almost everything near people - or even on and in them - can listen and relay every word uttered to networks incorporating AGI, any form of privacy will be a thing of the past. So it would be easy to identify anyone guilty of wrong-thought, whatever that might be at the time, and an intolerant leadership could easily choose to eliminate them.

Who knows, perhaps religious people could be targeted, by militant "rational" atheists convinced that religion is evil and divisive, so they would be doing society a favour by ridding it of throwbacks to a superstitious past. That was pretty much attempted in France during their revolution, so there are precedents.


This is an uncharacteristically inaccurate interpretation of what Tyler actually wrote.

Nowhere does he imply "Therefore, it'll be fine".

He simply says, to paraphrase, "It has been fine every time before when smart people panicked about some innovation with limited evidence for their beliefs, therefore it makes sense to continue for now."

This is not saying just plough ahead and throw caution to the wind, he's just saying we're being overly cautious right now based on insufficient information. Obviously, if further development and research discovers some latent ruinous potential, then we absolutely shut that down and I'm sure Tyler would agree.

There's no fallacy here, just an uncharitable reading.

I see lots of smart, respectable people being shockingly irrational when it comes to AGI. I guess fear really is the mind-killer.

Expand full comment

I agree nobody knows anything about how AI will go, and I agree with Scott it might kill everyone (I'll put that at 20%).

I'm sort of on board with slowing down AI research. However, I don't think alignment research will save us. I've looked at the alignment research and it mostly looks like nonsense to me -- my probability that armchair alignment research (without capabilities research done at the same time) can save us is at <5%.

Basically, I agree with "AI might kill everyone". But I disagree with "so we must do alignment research NOW" -- the latter is futile.

Instead, I'm mildly in favor of slowing down AI research because I'd rather have a few extra years to live. But the weakest part of the AI risk position is always the "...therefore work on alignment!" since that's a research program that hasn't achieved anything and likely never will.

Expand full comment

I find that I have an extremely negative opinion of the AI doomer position, and it makes me wonder if this will end up as the Rationalist version of Atheism+.

Expand full comment

It seems helpful to leap over all the detailed arguments about AI and focus on a larger bottom line question, such as...

QUESTION: Can human beings successfully manage ever more, ever larger powers, delivered at an accelerating rate? Is there a limit to human ability?

1) If there is a limit to human ability, and 2) we keep giving ourselves ever more, ever larger powers, then sooner or later, by some means or another, we will have more power than we can successfully manage.

I would argue this has already happened with nuclear weapons. There's really no credible reason to believe that we can maintain stockpiles of these massive weapons and they will never be used.

We don't really need carefully constructed arguments about AI to see the bottom line of our future. Every human civilization ever created has eventually collapsed, just as every human being ever born has eventually died. All things must pass.

What we're most likely looking at is a repeat of the collapse of the Roman Empire. We'll keep expanding until we outrun our ability, the system will crash, a period of darkness will follow, and something new and hopefully better will eventually arise from the ashes. My best guess is that this cycle will repeat itself many more times over many thousands of years until we finally figure out how to maintain a stable advanced civilization.

It may be more rational and helpful to ignore all of the above, and turn our attention to what we know with certainty. Whatever happens in world history at large, each of us is going to die. What is our relationship with that?

It's madness to take on more risk at this time, but we're going to do it anyway. Just like with nuclear weapons, we'll take on a new huge risk, and when it dawns on us that we're in over our heads, we'll ignore the threat, and turn our attention to the creation of new threats.

Expand full comment
founding

The AI risk makes a cold war with China all the more dangerous since the prospect for bilateral safety cooperation is greatly diminished. At the same time, in the absence of trust, America cannot afford to fall behind by pausing or placing unilateral restraints on its own AI development.

Will development of a killer AI be similar to the development of nuclear weapons? And would it be a long term benefit to have a Killer AI accident early enough so that the damage is limited, but horrible enough to show how dangerous use would be?

Expand full comment

Cowen asks, "Besides, what kind of civilization is it that turns away from the challenge of dealing with more…intelligence?"

That would be us, right now. The science community and our technologists consider themselves the future oriented thinkers, but the truth is that they are clinging blindly to a simplistic, outdated, and increasingly dangerous "more is better" relationship with knowledge left over from the 19th century and earlier. They don't want to adapt to the revolutionary new environment the success of science has created. They want to keep on doing and thinking the same old things the same old way. And if we should challenge their blind stubborn clinging to the past, they will call us Luddites.

https://www.tannytalk.com/p/our-relationship-with-knowledge

The science community and technologists are brilliant technically. But they are primitive philosophically, in examining and challenging the fundamental assumptions upon which all their activity and choices are built. It's that brilliant blindness which will bring the house down sooner or later, one way or another.

Expand full comment

This is outrageous misinformation: "There have been about a dozen times a sapient species has created a more intelligent successor species: australopithecus → homo habilis, homo habilis → homo erectus, etc - and in each case, the successor species has wiped out its predecessor. So the base rate for more intelligent successor species killing everyone is about 100%”.

There is literally no evidence for this, at all, in the fossil record or anything else. The timescale of minimum 4.2m years of evolution you're talking about, over vast continental landmasses, makes almost any other explanation for how one species becomes another (interbreeding, climatic shock, displacement, ad infinitum) more likely on an evolutionary timeline. The idea that any of these species had the agency to "create" a successor is bad. The idea that the process was linear is a gross simplification. And the idea that our genetic ancestry is contingent on waves of genocide is a narrative that may suit your argument here but is just so irresponsible in its implications about intelligent life and human life - and more importantly, has no evidence behind it.

Even the idea that our most recent cousins, Neanderthals, were wiped out by bloodthirsty homo sapiens around 40,000 years ago does not stack up with actual paleoanthropology/archaeology. Would suggest Rebecca Wragg Sykes' book Kindred as a good way into thinking about what evolutionary change actually means, using evidence we actually have.

Expand full comment

You can tell that Tyler is desperate from the fact that he starts appealing to people's "inner Hayekian". How many people actually have one of those?

Expand full comment

"There have been about a dozen times a sapient species has created a more intelligent successor species: australopithecus → homo habilis, homo habilis → homo erectus, etc - and in each case, the successor species has wiped out its predecessor."

Well, neither Nietzsche nor Babbage said:

Man is a rope stretched between the animal and the Machine--a rope over an abyss.

A dangerous crossing, a dangerous wayfaring, a dangerous looking-back, a dangerous trembling and halting.

:-)

Expand full comment

The thing that takes AI risk beyond Knightian uncertainty, for me, is the plausibility of described scenarios where a smarter-than-human AI finds reasons to eliminate all humans. It took a lot of reading to find these scenarios obviously plausible (though most of the arguments can be found nicely summarized in Bostrom's book now).

If you haven't been convinced by Yudkowsky or Bostrom that an existentially dangerous scenario is plausible, then I wouldn't expect anything anyone else says to convince you that stopping AI development is a good idea, especially in light of the more easily plausible-sounding fact that AI can bring a lot of value to the world.

If you don't think the orthogonality thesis shows that an AI will not care about human values by default, and you don't believe that convergent instrumental goals include obviously useful things like not having your goals changed, not getting switched off, etc., then I understand why you wouldn't take AI existential risks seriously. Especially if you don't think humans are foolish enough to decide to create powerful AI agents (regardless of whether agents can emerge from other AI systems not explicitly built to be agents).

Given that it still seems like a niche belief, I do find myself confused by the fact that it seems so obvious to me and unbelievable to others. I have felt similarly convinced of certain things at other times in my life, and a lot of those things have turned out to be wrong, but I know that it's almost impossible to find the flaws in beliefs that have captured my mind so solidly.

Expand full comment
Mar 30, 2023·edited Mar 30, 2023

Perhaps a better analogy is that there might or might not be an invisible alien starship in orbit around Earth at the moment, and that we can have no idea whether there is?

Another part of "we don't know anything" is that since we don't know anything, we don't *even* know what a good safety measure might be. The discoverers of fire might think a safety measure is to sacrifice to the God of Fire for safety, or put all fire under the control of the Priesthood of the Flame. These people could be serious about Fire Safety, but because they have no idea what works or what the actual dangers (beyond the immediate) are in the first place, any measures are going to be ineffectual. In the 16th century, Printing Press Safety seemed to be censorship and blacklisted books, which probably seemed sensible to at least the people in charge at the time. The people building ENIAC didn't and COULDN'T POSSIBLY have had any good ideas about computer security beyond "let's make sure it doesn't start burning and try to keep moths out of the building and put a lock on the place". And in all three cases "just ban it" would have been an obvious non-starter - even if your jurisdiction successfully does, it won't be universal.

I'm not opposed to AI safety - I just don't think we have any ideas yet about what would actually be required. So it's probably a good idea to airgap your AI experiment and check whether it does anything unprompted, but beyond that? Who the heck even knows?!

Expand full comment

Maximum uncertainty for a Bayesian requires a 50% assignment. I think this is the lede. Don't bury it.

"What’s the base rate for alien starships approaching Earth killing humanity?"

What's the base rate for aliens?

What's the base rate for starships?

What's the base rate for Earth of all places?

What's the base rate for them killing all humanity?

0.5 * 0.5 * 0.5 * 0.5 ?
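For what it's worth, spelling out that arithmetic (treating the four 50% assignments as independent, which they obviously aren't):

```latex
\[
\left(\tfrac{1}{2}\right)^{4} \;=\; \tfrac{1}{16} \;=\; 6.25\%.
\]
```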

"a more intelligent successor species: australopithecus → homo habilis, homo habilis → homo erectus, etc - and in each case, the successor species has wiped out its predecessor. "

No. The ostensible successor species did not wipe out the prior species. There was not a war among the species.

That coronavirus part of the essay was ridiculous. The common cold, SARS, MERS?

What's one's base rate that nuclear weapons will lead to an extinction or near extinction level event?

What's one's base rate that human induced climate change will lead to an extinction or near extinction level event?

These are good questions but there are other more pressing questions.

A. What exactly does one mean when one assigns a probability to a future event? What does 33% really mean? Is it really different from assigning 35% or 30%? A point estimate must have error. How much error do you have? What is the shape of the distribution of error? It need not be Gaussian.

B. What exactly is your willingness to put your own money on a proposition? A $2 bet, a $2 million bet? And what is your bet size in relation to your total wealth? Without telling us your bet, you could just give the % of total wealth you're willing to risk. I think the Kelly Criterion would say not too much (a reference formula is sketched after this comment). Just how strongly do you feel about your bet?

C. Probability as a basis for action?

Lifetime odds of being struck by lightning in the US: 1 in ~15,300; 1 in ~700,000 annually. Lifetime odds of death from lightning: 1 in ~19 million. Should one be a dope on the golf course while out in a thunderstorm? (A personal decision.) How about pausing youth sports when there's thunder/lightning? (A collective decision.)
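On the Kelly Criterion point in (B) above: for a simple binary bet, the textbook Kelly fraction is (stated here purely as a reference formula, not as advice about how to bet on AI outcomes):

```latex
\[
f^{*} \;=\; \frac{bp - q}{b} \;=\; p - \frac{1-p}{b},
\]
```

where p is the probability of winning, q = 1 - p, b is the net odds received on the wager, and f* is the fraction of bankroll to stake. For example, p = 0.55 on an even-money bet (b = 1) gives f* = 0.10, i.e. stake 10% of bankroll; any p at or below 0.5 at even money gives f* at or below 0, i.e. don't take the bet at all.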

Expand full comment

This fallacy already has a name. Except it's the opposite of what you say. You see, there's a common argument:

1.) Something is possible.

2.) If it's possible, even remotely, and the end is sufficiently catastrophic then probabilistically we should take it into account. (Alternatively: if it's possible and we can't quantify the probability we should assume it's large.)

3.) Therefore we should panic.

This is a form of the appeal to probability. And it's what you and AI pessimist types do a lot. But only with AI because it's not actually a consistently held logic. If you consistently accept the logic you should be panicking about any number of things in ways you in fact don't. For example, if you consistently support the appeal to probability then Pascal's Wager is irrefutable.

Expand full comment

I am...not freaking out, but I have been concerned for many years now, and the scary scenarios look a lot more likely, not less. Am also a normie who's not smart enough to figure out how certainly dead we are. Given that, should I reorient my life and spend a significant amount of my free time trying to solve the alignment problem? Should I ask my fiance to also do that (both of us can in theory pass any college level math or comp sci class, eventually)?

I don't want to 'die with dignity'. I want to either live well without dignity or live longer.

Expand full comment

(3) is only a bad argument for discretely distributed outcomes. In a continuously distributed world, any particular value of any random variable has a probability of 0.

Expand full comment

"If you have total uncertainty about a statement (“are bloxors greeblic?”), you should assign it a probability of 50%": that can't be right. You also have total uncertainty about the statement "are bloxors more greeblic than drubbits?", but if you assign 50% to that too, you can deduce that "if bloxors are greeblic, then they are certainly more greeblic than drubbits". Which of course is nonsense, as any licensed greeblician will confirm.

In the face of total uncertainty, the only consistent response is not to assign any probability at all.

Expand full comment

In times of great uncertainty, enjoy the good parts.

I live in the south of France and last year we had a lot of sunny days. I noticed that some of these very long streaks made me a bit anxious. Is global warming really gonna hurt my family? And then one day I realized that these were actually quite perfect days, 20 years ago I would've been overjoyed. Perfect days for the pool, barbecue, a visit to the sea, early morning walks, market visits, drinking coffee under the Platanes, late lunches, etc. and I was _worrying_?

Maybe Epstein is right that most of the heat increase is in winter and at night and it will all work out. Maybe Ehrlich finally wins a bet. It is clear that whatever I do, or France does, or even Europe does, is not going to make a large difference anyway when you actually look at the numbers and at Homo Sapiens' surprising talent to screw up and do the right thing only as the last option.

So I can waste my perfect days in the south of France worrying or I can just enjoy life to the fullest and see where it all ends. We're Homo Sapiens and there are a lot of us, quite a few pretty smart. We'll figure it out somehow and if it means we are superseded by Machina Sapiens, then they won. When that happens, at least I had a lot better time than an anxious Scott. If it doesn't happen, well, idem ditto. Santé!

Expand full comment

What a great conversation this is. God bless the internet! I’ve been trying to resolve my intense cognitive dissonance from my two favorite internet writers disagreeing. Until the Economist fixes it for me, ideally with a short, clever essay that ends in some light word play, here’s my best resolution. Tyler is indeed blogging, as always, in his Bangladeshi train station style. He’s not slowing down to articulate. My interpretation of half of his post is: “Don’t be convinced by a long argument leading to a Bayesian probability.” There’s just too much fundamental uncertainty, too much noise. Someone could write 10,000 words and sound very smart, but you the reader shouldn’t be persuaded that the writer’s personal prior is at all convincing to anyone other than the writer themself. Scott “Bayes’ Theorem the Rest is All Commentary” Alexander lives his life according to the principle that Bayesian reasoning is the right way to think, which is great. So Scott must do the work to come up with his own personal p=.33 that AI wipes us out. Otherwise he won’t know what to think. Both writers are correct! If you are committed to living and thinking by Bayes’ Theorem, you must come up with a prior. But if you’re not, then don’t be persuaded. These numbers are actually meaningless in a social sense. 33% is crazy, and Eliezer’s number (.99?) is meaningless…except to Scott and Eliezer. It guides how they think. But I agree with Tyler: don’t *you* the reader think they actually know anything about the world. Don’t be persuaded by their logic. No amount of logic can overcome sufficiently great uncertainty. The second half of Tyler’s post is “when in doubt, favor innovation,” and as a card-carrying economic historian, I would strongly argue that there are few hills more worthy of dying on. Being a subsistence farmer was bad.

Expand full comment

Of course the real solution to AI safety is to train the entire generation of humanity to distrust disembodied words that sound smart. But strangely, no essayist wants to propose that...

Expand full comment

What I still haven't seen is a realistic, short-to-middle-term disaster scenario that doesn't involve AI *acting through meatspace* - influencing humans, giving information to humans, etc. Actually, GPT-4 is an unreliable source of information - more so than Wikipedia, say - and so it's really the manipulation of humans by a non-human entity (towards a goal or not) that is novel here.

Of course we've had something primitive like that in the last few years, in social media, so we have a foretaste. And we've also had more than about 150 years' worth of manipulation via mass media, by humans, be it to the service of a dictatorship based on a collective delusion, or just mandated by the profit motive. Not good, and also not completely and utterly new.

Then there are all the non-existential threats that are likely to be real challenges:

- the replacement of semi-intellectual workers whose tasks have already been made routine (for the sake of management, marketability, etc.), analogous to how many blue-collar workers were replaced, particularly if they worked in an assembly line;

- the realization of how deeply stupid humans can be. So far, we have seen the more or less competent use of language as proof of a certain basic level of intelligence, even if it is used to advance obvious fallacies. But can't ChatGPT parrot like a (pick your least favorite species here, whether it starts with T or W)?

Expand full comment

"Safe Uncertainty Fallacy"

Completely agree re: AI development.

A. Musk said years ago that whichever country advances AI first will rule.

B. NO ONE is pausing a day let alone 6 months for ANY reason and especially for universal "safety protocols."

Self-evident?

Recommendation: Though quaint-sounding, be as decent a person as you can manage and with some luck (born of empathy & humility) humanity will catch (another) break!

[Scott, I enjoyed reading your review of Tim Urban's latest. That was quite a hyphenated qualifier! : )]

Expand full comment

That final point is maybe the best. We have designed a society whose basic reaction is to dismiss the new. We ought to use that to our advantage the one time it has actually been generally advantageous for us.

Expand full comment

That last paragraph needs a rewrite. Especially the last sentence. I'm guessing it's sarcasm, but I'm really unsure.

Generally, though, the problem is that when the uncertainty is total, assigning ANY probability is not correct. 50% isn't a reasonable number, and neither is any other number. If you want it in math, it's Bayesian logic without any priors AND without any observations. I don't think there's any valid way to reason in that situation, so the question needs to be rephrased.

Something that we could take a stab at is "What are the possible ways of controlling the advance of AI, and what do those cost?". Clearly we aren't going to want a totally uncontrolled AI, so that's a valid goal. In this form, the "existential risk" would be in the costs column, but it would have such huge error bars that it couldn't carry much weight. And there'd be existential risks on the "no AI developed" side too. (E.g., what's the chance that WWIII starts and kills everybody because we didn't have an AI running things? That's not zero.)

I think this is one of those situations where unknown-unknowns isn't sufficient, it's more unthought-of-unknown-unknowns. Or perhaps something stronger.

FWIW, I tend to put the odds of AGI being an existential disaster at about 50%. But I put "leaving a bunch of humans running things with access to omnilethal weapons for a bunch of centuries" as an existential disaster at about 99%...and worry that that's an underestimate. Now the time scale is different, but over the long term I rate the AGI as a LOT safer choice. Which doesn't mean we shouldn't take all possible care in the short-term.

Expand full comment
Mar 30, 2023·edited Mar 30, 2023

Re. "If you have total uncertainty about a statement (“are bloxors greeblic?”), you should assign it a probability of 50%": I'm pretty sure that if you take a dictionary, grab one random predicate 'foo' and two random nouns X and Y, the probability of foo(X,Y) is less than .5. OTOH, a predication which has been proposed by a human isn't randomly selected, so the prior could be > .5.

I think what we do in practice is have an informal linear model which combines priors from various circumstances surrounding the predication, mostly the reliability of the source plus explaining-aways like political motivation. This is so open-ended that it lets us assign any probability we want to.

That said, using just the reliability of the source as a prior is probably better than following your gut.

I wrote an unsupervised learning program using Gibbs sampling to derive the correct probabilities of each one of a list of sources. "Unsupervised" means you don't have to know whether any of the claims each source made is true in order to compute how reliable that source is. (This requires that you have many claims which several of the sources have agreed or disagreed with.) This is pretty surprising, and I'd like to use NLP to apply it to news sources, but I've done too much charity work already lately for my own good.

It's surprising to Westerners that it works, because Westerners are raised on ancient Greek foundationalism--the idea that you must begin with a secure foundation of knowledge, and build everything on top of it. This is dramatically wrong, and has been a great cause of human suffering over the past 2000 years (because foundationalists believe they can have absolute certainty in their beliefs as long as the foundations are secure). Coherence epistemology lets you use relaxation / energy-minimization methods to find (in this case) a set of reliabilities for the sources which maximizes the prior of the observed dataset.

(I wrote this program at JCVI to compute the reliability of 8 different sources of protein function annotations, because the biologists had been in a meeting all day arguing over them, and they did this every 6 months, and I realized I could just compute the most-likely answer. They agreed that my results were probably more-accurate than theirs, but told me never to run that program again, because genome analysts don't believe in math and are (rightly) afraid of being replaced by computers.)
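For readers curious what such a program can look like: below is a minimal, hypothetical sketch of unsupervised source-reliability estimation by Gibbs sampling, in the spirit of what the commenter describes. It is not their JCVI program; the toy vote matrix, variable names, and Beta(1,1) prior are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical): votes[s, c] = 1 if source s asserts claim c, 0 if it denies it.
votes = np.array([
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [1, 0, 0, 1, 1],
])
n_sources, n_claims = votes.shape

# Latent variables: truth[c] in {0, 1}; reliability[s] = P(source s reports the truth).
truth = rng.integers(0, 2, size=n_claims)
reliability = np.full(n_sources, 0.7)  # start above 0.5 to break the truth/falsehood symmetry

def gibbs_sweep(truth, reliability):
    # 1) Resample each claim's truth given the current reliabilities (uniform prior on truth).
    for c in range(n_claims):
        p_if_true = np.prod(np.where(votes[:, c] == 1, reliability, 1 - reliability))
        p_if_false = np.prod(np.where(votes[:, c] == 0, reliability, 1 - reliability))
        truth[c] = rng.random() < p_if_true / (p_if_true + p_if_false)
    # 2) Resample each source's reliability given the current truths (Beta(1,1) prior).
    for s in range(n_sources):
        agree = int(np.sum(votes[s] == truth))
        reliability[s] = rng.beta(1 + agree, 1 + n_claims - agree)

# Run the chain and average the reliabilities after a burn-in period.
samples = []
for it in range(3000):
    gibbs_sweep(truth, reliability)
    if it >= 1000:
        samples.append(reliability.copy())

print("estimated source reliabilities:", np.round(np.mean(samples, axis=0), 2))
```

A real version would need many more claims per source and some care about the truth/falsehood symmetry, but the structure is the same: alternately infer what is true from who is reliable, and who is reliable from what is true, without ever needing ground-truth labels.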

Expand full comment

Setting this total stretch of a strawman aside - perhaps a better way to state this is "We have no idea what's going on, therefore there is little to suggest a reason to panic, only to observe." These folks are looking at the chicken-little crowd and asking "Are you basing this on anything other than your own unfounded assertions about what *might* happen?"

Assuming that ignorance is a reason for blind panic is just as absurd as assuming ignorance means safety. These folks are doing neither, only advising patience.

Expand full comment

If you want to steelman the argument, basically they are pointing out a more classical fallacy on the part of the LessWrong crowd. It goes like this:

1. I can imagine a terrible outcome extremely vividly.

2. Therefore, this outcome is extremely likely.

From my point of view, all of the LessWrong AI catastrophizing seems to be purely based on imagination and speculation, without contact with any empirical evidence. It's just "I can imagine it, so it's going to happen".

What would be the alternative (to evidence free speculation)? Empirical study of the behaviour of AI systems. This is not what they have been doing so far.

Expand full comment

I wasn't going to comment because I have this strange feeling when I start reading through all the other comments that what I'd say was already said - but better. But here goes...

This post put me in a mood. Two of my favorite internet people are bickering and I feel like the kid whose parents are getting divorced - why are they arguing and not being nice to each other?

I lean team TC on this one. If you asked me to write a 50-page treatise on why, I might be able to, but ultimately when there is this much uncertainty involved, I think it comes down to basic human temperaments, an admittedly super unsatisfying answer. Tyler is optimistic. One of my favorite Tyler arguments is about how long-term small growth (3%) is fantastic for humanity, because the incremental improvements just compound over time - so keep growing, slowly if need be, but keep growing, since it benefits all.

I'm optimistic too. I take risks. I invest, knowing that others probably have more information than I do about markets. I run a small business, knowing that most small businesses fail. I have two children, knowing half the chattering class is convinced that no one should be born into this crazy apocalypse-a-day world.

I'm optimistic about AI too. Every new tool that comes out I gleefully play with and it feels like magic. Like, not a trick or illusion magic - like full bodied MAGIC that I don't completely understand and that does things that feel like they shouldn't be possible.

I'm happy being called hopelessly naïve on all fronts - investing, business, children, and AI (many more too). I'm also content just being along for the ride and hoping to know/learn enough about each new model to see where it fails, but also to be stunned by how useful and empowering it can be. And that might be the element that tips me away from the doomsayers... I just see so much human potential unlocked by even the crude forms that we have now that I'm just unbelievably hopeful that there can be more down the road.

Expand full comment

Name proposal: the Who Knows Therefore It's Fine fallacy.

Expand full comment

The 100-mile-long alien spaceship is a bad example that smuggles in the resolution of a major disagreement in AI risk in your favor: the question of whether AI would have the capacity to wipe out humanity. The moment we see a 100-mile-long alien spaceship, we can be instantly certain they have the capacity to, at the very least, drop that thing on the planet and cause a mass extinction event.

Whether a superintelligent AGI has the capacity to wipe out humanity is not instantly obvious from the speed at which GPT has progressed, and Yud's belief that it would rests on lots of theorizing about what superintelligence is, how scalable social manipulation is, how easy it is to produce self-replicating killer nanobots, etc.

Expand full comment

One point that I think Tyler is getting at is that the current calls for stopping GPT-5 feel similar to the calls people made for not turning on the Large Hadron Collider.

If/when we’re at the point we’re deciding whether to flip a switch and turn on a super-intelligence hooked up to the internet and 3D printers, the AI-doomer argument seems compelling. Don’t turn it on until we’re sure we’ve solved alignment.

But right now, nobody has any idea how to solve alignment. Okay, someone suggests, let’s throw $100B at the problem. And the response from AI-doomers is “I wouldn’t even know how to begin to spend that money!”

And importantly, it feels like GPT-5 specifically has about as much chance of being the extinction-causing AI as the LHC had of causing a black hole that destroyed the world. It's the next rung on the ladder, though, and how sure are you that the rung after that won't destroy the world? So be sensible and stop climbing while we can?

But if we can’t see a solution to the alignment problem on our current rung of the ladder, and we’re confident the next rung is still safe, climbing one rung higher may be the only way to give us the tools and insights we need to solve the problem in the first place.

If I’m trying to be a bit more formal in my assumptions:

1. There’s only a negligible chance we solve alignment with today’s knowledge levels.

2. One likely way we increase our knowledge such that we could understand AGI and solve alignment is to make some near-AGI

3. GPT-5 has, ballpark, about as much chance of destroying the world as the LHC did.

4. But GPT-5 might be near-AGI enough to give us crucial insights. (Or at least let us know if GPT-6 will be safe.)

5. Delaying safe-AGI will needlessly result in millions of people dead and huge amounts of suffering

6. Overregulation of AI could set back the date of safe-AGI decades (or risk the “China” problem)

7. Therefore, don’t object to GTP-5

8. Possibly, think about what regulations/safeguards will need to be in place if the insights from GPT-5 are: "Hmm, now there's a non-negligible chance that GPT-6 will be the one."

Also, to me, the main worry from the "China" problem isn't that China will forge ahead and create unsafe AI that destroys everyone. It's that China will realize the US has been overcautious and stopped AI development at GPT-n, whereas GPT-n+1 is perfectly safe. They'll go on to develop GPT-n+1 and gain crucial insights that allow them to safely create AGI that does do exactly what they want.

Expand full comment

How on earth does something which does not possess consciousness or any comprehension of the text it's producing even "sort of" qualify as an AGI?! It isn't an intelligence at all, general or otherwise. Even if you hold to a functionalist account of intelligence, it doesn't count, because it is not in fact performing the functions an intelligent being would perform. It's just a really impressive simulation, able to simulate consciousness convincingly only because we've fed it an astonishing quantity of human-created sentences for it to perform mathematical algorithms on. That is not the way an *actually* intelligent being goes about responding to queries.

Expand full comment

What I understand of Tyler's argument: people like Scott have been wrong about technological innovations every time in history. Is it really so hard to believe that people like Scott are wrong again?

Scott's argument: This time really is different. Studying the situation more deeply will reveal itself to be so.

I of course buy Scott's argument. Cowen's argument is of the lazy "it's always happened this way before, so it will keep happening this way" variety. Which is generally right, until it is wrong.

Expand full comment

Why would we be afraid of 100-mile-long spaceship of aliens?

- We can't build a 100 mile long spaceship, but we want to, so they're better than us

- The ability to make spaceships implies huge power to shape the physical world, and threaten our existence

- We don't know anything about the aliens, but we recognize that they have a spaceship and they are some sort of entity that we can label "aliens", so it's not a big leap to anthropomorphize

- Since they are human-like enough, we can say they have "intent" or "interest", and their presence outside Earth in particular means they are likely to be interested in us, take action regarding humans

So, can AGI do things that humans cannot? No, not yet. Does the thing it can do, which is vastly out of reach of humans, imply that it likely has the power to destroy us all? Not obviously.

Is the AGI completely unknown and foreign, like an alien? No, we built it. Is it so similar to humans, and so human-like in its actions and the things it builds, that we can easily ascribe human motives to it (converting us to its religion lol)? No.

Did AGI travel across the stars to this specific pale blue dot, because it wanted to mess with humans? No.

So maybe this analogy isn't that great

Expand full comment
Mar 30, 2023·edited Mar 30, 2023

Maybe it's just the humour I'm in but right now I feel like *both* sides are massively wrong.

Pro-AI people: it's not going to be the fairy godmother that will fix poverty and all the ills that flesh is heir to. Yes, the world may well be richer after it, but like right now, those riches will go into the pockets of some, and not be distributed throughout the world so a poor Indian peasant farmer gets 500 shares in Microsoft's Bing which will give him a guaranteed income so he won't have to worry about getting through the next day. "But it will be so smart it will know things we can't know with our puny human brains and it'll make decisions in a flash and it'll be angelically incorruptible so we should turn over the government of each nation and the running of each nation's economy to it!"

Yes, and the profits out of that will go to the owners/investors/creators of the angelically incorruptible AI, not you and me.

Anti-AI people: you've been smugly posting about Pascal's Mugging as a rebuttal of Pascal's Wager for years, and now you are trying to Pascal's Wager people in order to prevent or at least slow down AI creation.

Pardon me while I laugh in Catholic.

That's not going to work, all the earnest appeals are not going to work, and you know why? Human nature. Whoever gets there first with AI that is genuine AI is going to make a fortune. That's why when OpenAI went in with Microsoft in order to get funding, all the principles went out the window and we now have Bing to play with if we want.

People want to work on AI because they are in love with the topic, they want to see if they can figure out what intelligence is and by extension what makes humans tick, they want to make a shit-ton of money, they do it for all the reasons that previous attempts at "maybe we should hold back on this" were given the digitus impudicus. I've banged on about stem cell research before, but the distinction between the objections to embryonic stem cell research and the permissibility of adult stem cell research were all ignored in favour of a flat "religious zealots want to stifle stem cell research which is the cure for all ills".

So good luck with what you're doing now, I expect it to have as much influence as the 2000 Declaration by the Pontifical Academy for Life on holding back research:

https://www.vatican.va/roman_curia/pontifical_academies/acdlife/documents/rc_pa_acdlife_doc_20000824_cellule-staminali_en.html

I am very sympathetic to the worries, believe me. But having been on the receiving end of decades worth of "You cannot hold back the onward and upward march of Progress and Science" (including advances in social liberalisation) tut-tutting and finger-wagging from those who want Science and Progress, I'm none too optimistic about your chances.

Expand full comment

> The base rate for things killing humanity is very low

This also fails basic anthropic reasoning. We could never observe a higher base rate.

Expand full comment

> There are so many different possibilities - let’s say 100!

I originally interpreted this as 100 factorial, since it seemed like you wanted a very large number and I didn't see why else an exclamation mark would be there. Then when the sentence later mentions 1% I had to pause and realize that you just meant this 100 was very exciting.

Expand full comment

"But I can counterargue: “There have been about a dozen times a sapient species has created a more intelligent successor species: australopithecus → homo habilis, homo habilis → homo erectus, etc - and in each case, the successor species has wiped out its predecessor. So the base rate for more intelligent successor species killing everyone is about 100%”

I doubt I'm the first to point out that previous hominid species did *not* "create" successor species (unless we're arguing for the creation of Adam and Eve by God). What we got was evolution, natural selection, and what seems to be a combination of absorption (we've got Neanderthal DNA) and out-competing the less capable species.

And yeah, like chimpanzees, that probably did involve a lot of whacking the rival band over the heads until they died and then we moved in and took all their stuff. But they got the chance to whack us over the heads, too.

Expand full comment

I think Tyler's argument is about how to deal with the coming of AI psychologically, rather than making definitive claims about what the magnitude of the risk is.

I don't think he's saying that things will be fine. He's saying we should continue to work on alignment but should realize that progress towards generally intelligent AI will happen anyway, so we should accept this thing we cannot change.

Expand full comment
founding

I hate posts and threads. Anyone up for porting this to Kialo or Loomio?

Expand full comment

"We designed our society for excellence at strangling innovation. Now we’ve encountered a problem that can only be solved by a plucky coalition of obstructionists, overactive regulators, anti-tech zealots, socialists, and people who hate everything new on general principle."

Frank Herbert got there before you - the Bureau of Sabotage, motto: "In Lieu Of Red Tape":

http://www.fact-index.com/b/bu/bureau_of_sabotage.html

"In Herbert's fiction, sometime in the far future, government has become terrifyingly efficient. Red tape no longer exists: laws are conceived of, passed, funded, and executed within hours, rather than months. The bureaucratic machinery has become a juggernaut, rolling over human concerns and welfare with terrible speed, jerking the universe of sentients one way, then another, threatening to destroy everything in a fit of spastic reactions. In short, the speed of government has gone beyond sentient control (in this fictional universe, many alien species co-exist, with a common definition of sentience marking their status as equals).

BuSab begins as a terrorist organization, whose sole purpose is to frustrate the workings of government and to damage the incredible level of efficient order in the universe in order to give sentients a chance to reflect upon changes and deal with them. Having saved sentiency from its government, BuSab is officially recognized as a necessary check on the power of government. First a corp, then a bureau, BuSab has legally recognized powers to interfere in the workings of any world, of any species, of any government, answerable only to themselves (though in practice, they are always threatened with dissolution by the governments they watch). They act as a monitor of, and a conscience for, the collective sentiency, watching for signs of anti-sentient behaviour and preserving the essential dignity of individuals."

Expand full comment

I'm on my phone. Let's see if I can recall the Frédéric Bastiat quote concerning Luddites. It will be something sloppy ... ho ho, instead I found online a pdf of WHAT IS SEEN AND UNSEEN. Bastiat is speaking with a voice of irony. Lancashire is the technological leader, and Ireland the technological loser, of Bastiat's day. The Luddites, followers of Captain Ludd, are breaking machines the people fear will displace humans doing brute-force work.

"Hence, it ought to be made known, by statistics, that the inhabitants of Lancashire, abandoning that land of machines, seek for work in Ireland, where they are unknown."

In today's world we would say: Looking at the statistics, people flee Silicon Valley to seek work in Mississippi where AI is unknown.

Expand full comment

Yes, some murders likely happened between different species, but to say one species killed off the other is like saying the 80386 was murdered by the 80486.

Expand full comment

1) Human brains have a cognitive bias that assigns a higher-than-rational probability to danger and disaster, especially when encountering novel situations or "outsider" agents. To counteract this bias, we should apply unusual skepticism to any prediction a human makes of impending doom.

2) Given a situation which may turn out to be an actual danger, but where we have too little information to predict the nature of the danger with any precision, any mitigation measures we might choose to take are just as likely to increase the danger as to reduce it. Therefore, elaborate planning of mitigation measures against a completely unpredictable danger is not an efficient use of resources.

Or, translated into human-ese: "It's no use worrying about it. Everything will probably be fine."

Expand full comment

On the REFERENCE CLASS TENNIS: overall, every new technology has killed some small number of people in its own unique way, but improved our lives immensely.

Fire; stacked rock homes; boats; keeping livestock; steam engines; locomotives; airplanes; self driven cars; ... etc.

This is our reference: AI will harm some small number of people in its own unique way, but improve human life immensely.

Expand full comment

AI may not itself exterminate humanity, but it may accelerate its destruction.

Given the kind of adaptations made after the meteor strike that took the dinosaurs, life on earth will survive in some form. New forms of plant and animal life will evolve and flourish. The planet just won't be dominated by a violent, narcissistic species.

Expand full comment

> If you have total uncertainty about a statement (“are bloxors greeblic?”), you should assign it a probability of 50%.

Strong disagree. You should assign some probability, maybe 80%, to “in some important senses yes, and in some important senses no”, because most things are like that. Then maybe 10% to each “in every important sense yes/no”.

Expand full comment

Don't worry; I'm only killing the evil ones.

Expand full comment

Is the Safe Uncertainty Fallacy just the Precautionary Principle pointed in the other direction?

Expand full comment

I agree, you are having a tough time steelmanning Tyler. Maybe I can help?

1. Humans have a long and storied history of overemphasizing the negative aspects of any radical change within their society.

2. AI advancement denotes a radical change in our society.

3. We are acting accordingly, without accounting for our natural bias.

His argument is reactive. He believes that, given the amount of evidence we currently have access to for any given AI outcome, we are giving too much credence to negative ones. That's why he doesn't need to give his own percent chance on whether we are all gonna die. The issue at hand is predominantly the human psyche, not the available evidence, which is sparse by its very nature.

"Are bloxors greeblic?" is also an inadequate example for this case. Closer to Cowen's argument would be: Assuming that each of these terms are mutually exclusive, "Are bloxors greeblic, speltric, meestic, garlest, mixled, treeistly, mollycaddic, stroiling, bastulantic, or phalantablacull?" Given complete uncertainty on this, we would predict a 10% chance for any given term. But for whatever reason (because we don't want the apocalypse to occur, and/or radical change frightens us), we are giving greeblic a 50% chance.

I think the core of Cowen's argument isn't to say that everything is fine (he mentions his support for working on alignment), but instead to emphasize the outcomes which are fine/good over the outcomes which are bad. Like I said, his argument is reactive. It wouldn't be made if greeblic were appropriately low. Greeblic needs to come back down to 10 or 15%, while mollycaddic, garlest, and bastulantic need to be boosted to 8 or 10%. The evidence is sparse enough that these should be equals, or very close to it.

Expand full comment

“We designed our society for excellence at strangling innovation.”

I know this is a half joke, but it looks like it’s time for someone’s belief system to update. Nobody is saving us, and the market system that made life in America briefly grand is about to obsolete humanity.

Again: see you on the other side.

Expand full comment

You can apply this argument to any new technology and stop technological progress altogether. When the LHC was about to be started up, a group of people started saying that it would destroy the world. In fact, their arguments closely mirrored Scott's own here. One proponent claimed that there's a 50% chance that the LHC will destroy the world, since we don't know whether it will or it won't (https://www.preposterousuniverse.com/blog/2009/05/01/daily-show-explains-the-lhc/). It was a good thing that CERN didn't listen to them.

Similar fears about GMO crops, vaccines etc. have done enormous harm by slowing down progress.

In all of these cases the basic fallacy is to confuse possibility with probability. Yes, bad things are possible, but they're not necessarily probable. One should also consider the civilizational cost of restricting transformational technology. One could have made similar arguments about electricity, computers, and all kinds of technological advancement. That those inventions had overwhelmingly large positive effects should give a strong prior that AGI will too.

Expand full comment

There is going to be regulation soon anyway. Section 230 does not protect AI service providers and the potential liabilities, criminal and civil, are enormous.

If AI is used to assist in committing a crime (hacking, phishing, libel, etc.) that isn't protected speech, it's aiding and abetting. See the famous Hitman case for an example of a book publisher being held liable for aiding and abetting murder.

Congress is highly unlikely to give AI service providers the same blanket protection they gave internet publishers.

Expand full comment

bloxors are *absolutely* greeblic

Expand full comment

You have another fallacy I could give a name to as well: "Sure, solving a problem using [a bunch of terrible things we all hate, like regulation] doesn't sound good. But society was *designed* to only be good at such things, namely, solving problems of its own creation. Therefore, we can only solve this problem using [terrible things we all hate, like regulation]."

Expand full comment

Tyler is right, Scott is wrong. But boy can he write! Bravo!

Expand full comment

I struggle with this for zero hedge’s blog. I often see interesting sounding headlines being quoted from their site on Twitter, but then every single time I’ve ever gone directly to their site to read what they’ve been posting, it has had a near zero percent interest rate for me. A true base rate conundrum! So I’ll continue with the seemingly contradictory expectations that quoted headlines from zero hedge will often be interesting, but their site will almost never be interesting. The power of curation can turn lead into gold. Due to this I have zero nit picks and widely agree with Scott today, despite my base rate being that I’d expect to be able to quibble on something in the post.

MR is being silly and using rhetoric in response to a perceived hysteria. I think they have a base rate prediction that all or almost all hysteria is wrong, based on measuring how often hysterical people are right. Being anti-hysteria works most of the time? But alas, in situations where things are a true emergency, hysteria is the expected thing to observe and occur in almost all cases! Reference tennis indeed!

Expand full comment

I think that fundamentally, this boils down to the (at least as old as Seneca - it's the oldest source I'm familiar with gesturing at this) "negativity bias" idea - your fears and anxieties are only limited by your imagination, so if you let it run wild you will live in fear all the time, and even if this turns out to be great at making you survive, the relatively long life you live will not be enjoyable.

In other words, more specific to our case: it's not that "we have no idea what will happen, therefore, it'll be fine" - it's that "we have no idea what will happen, therefore, we utility-maximize by living as if it'll be fine. And if we are unlucky and the doomsayers were correct and we all die - at least our run was good while it lasted, and we hope the suffering will not be long."

Expand full comment
Mar 31, 2023·edited Mar 31, 2023

I think the best comment was the one that said a true steel manning would point out that SA is failing to take into account the potential upside. We have no idea what will happen... Catastrophe is plausible, but so is ushering in a utopia. Or anything in between.

I'm familiar with Bostrom's/Stuart Russell's arguments, and I find them convincing. But that's just the thing; I've learned to be skeptical of arguments from first principles that sound plausible. Embarrassingly, I was an anarcho-communist and gender feminist in my youth. The arguments seemed convincing. Marxism can sound convincing, and indeed did to most of the smartest people in the Western world for a few generations, and still does to midwits. And yet it consistently resulted in mass murder, starvation, and societies run on lies and mistrust in every aspect of everyone's daily life.

Malthusianism is another good example. It's a syllogism. The logical consequences of obvious premises. Airtight, indisputable. Yet it turned out to be completely, totally wrong. (As argued in Farewell to Alms, it actually was true for all of human history, up until the point it was formulated as an argument, at which point everything changed.)

Combine that with the fact that up until now, technological progress has made things immeasurably better for the overwhelming majority of people... Sure, social media has led to rises in anxiety and depression etc... It seems people will always be miserable no matter what. (But maybe AI could find a solution even for that.) But unless we're going to throw up our hands and say nothing matters...

Surely people living longer healthier lives matters if anything does. And technology has facilitated that to a degree that is hard to appreciate. Most of us live lives of unimaginable wealth and safety compared to our ancestors, which may be a mixed blessing, but on the whole, I'd rather struggle with difficulty finding meaning than with starvation and torture.

The benefit of advances in medicine is obvious, but information technology helps in so many other ways that are harder to appreciate, yet make a difference not just in convenience or luxury, but in human lives. I've seen people argue that nothing has really changed recently because we still drive around in cars that are basically the same instead of hover bikes or whatever... I think it was maybe in one of Pinker's books that I read about how traffic fatalities have gone down dramatically, because of those "minor" improvements in cars that make them less likely to kill you when you do get in an accident and that make you less likely to get in an accident in the first place, like a warning light when someone is in your blind spot. Self-driving cars could bring traffic fatalities--one of the leading causes of accidental death, which kills vastly more people than any of the things we routinely freak out about because of the news--down to zero.

I remember when we had to use maps to get to places we hadn't been before, and if we needed to contact someone, we had to find a payphone. Good luck if you break down in a bad neighborhood! Hope you have your rolodex on you! Now cars rarely break down, we can easily contact anyone we know or any services we might need if we do, but I rarely have to go anywhere anyway because I work on my laptop and can have virtually anything I want magically appear on my doorstep the next day. It might sound like this is just about convenience, but it has virtually eliminated the most likely cause of my untimely and grisly accidental death.

So my point is the potential benefits of AI are truly incalculable. Arguably so are the potential risks. AI safety should be taken seriously, but the fact that it is possible to construct a plausible argument about why they will happen is not a good reason to try to prevent AI altogether.

Expand full comment

In our current age of hysteria, people express extremes seeing great benefit or great harm.

Expand full comment

You have a classic combination of apocalypticism and utopianism; you think deliverance is always around the corner, one way or another. Which is understandable but unhelpful. And what you want to be delivered from is the dull grind of boring disappointing enervating daily life. But you'll never be freed from that, not by AI or the bomb or the Rapture. Because we're cursed to mundanity; that's our endowment as modern human beings. The trophy we're awarded for all that technological progress is that we live in an inescapable now.

Please dramatically increase your own perception of your own anti-status quo bias. Please consider how desperately you want this age to end and a new one to begin. And consider that the most likely outcome is always that tomorrow will be more or less like yesterday.

Expand full comment

"In order to generate a belief, you have to do epistemic work. I’ve thought about this question a lot and predict a 33% chance AI will cause human extinction"

Currently, the probability is zero, because LLMs of the sort underlying ChatGPT are flat out incapable of ever being "better than human" or "causing human extinction". The foundation for the technology requires human input for training. That fact alone is sufficient to render the probability zero.

Is there another technology besides LLMs that could give rise to AI? Currently, no. Such a class of technology falls into the same realm of speculative fiction as hyperspace travel. Unlike difficult technologies like cold fusion, room-temperature superconductivity, and genetic engineering, there has been *zero* effort made in creating new technologies for these speculative-fiction niches.

My bias is that I have long been a skeptic of AI (and its cousin, the Singularity) as you can see from this old essay of mine here:

https://www.haibane.info/2008/03/02/singularity-skeptic/

and I have not seen anything in the last 6 months that addresses the core criticisms I made therein.

Expand full comment
Mar 31, 2023·edited Mar 31, 2023

> Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine. All the possibilities are distant, I cannot stress that enough. The mere fact that AGI risk can be put on a par with those other also distant possibilities simply should not impress you very much.

This is almost equivalent to 'it's 50/50, either it happens or it doesn't'. Dismal reasoning from TC.

Edit: I wrote this comment immediately upon reading the above quote, and didn't see that Scott had addressed it more cogently (and charitably) than I did.

Expand full comment

My #1 issue with this version of the AI-nihilist argument is that you could make an identical argument in favour of investing tons of money in signalling as loudly as possible towards potential alien civilizations.

After all, if they are interstellar and willing to share tech with us, that would be hugely advantageous; we've never gone extinct so we never will; and China wants new technology as well. Besides, how could the government ever prevent a private corporation from launching satellites or funding radio transmitters??!?!

Expand full comment
Apr 1, 2023·edited Apr 1, 2023

Scott:

What puzzles me is how little the AI cognoscenti seem to be aware that there are formalized methods for risk assessment and mitigation. EY's recent suggestion that we should be willing to preemptively destroy any foreign power's data center that does an AI training run seems a bit over the top to me, not because he sees AI as an existential risk, but because he seems to assume that there's no way to mitigate the risks of AI. EY is a very smart person, but it amazes me that he hasn't looked into the past 50 years of risk assessment and mitigation practices as practiced by NASA, DoD, AEC, and NRC. (Maybe he has considered these and dismissed them as insufficient, but I've not seen any mention of them in his writings.)

Likewise, as a psychiatrist, you must be at least passingly aware of the Johari Window of known knowns, known unknowns, unknown knowns, and unknown unknowns. Risk management experts took the Johari Window (developed by psychologists Joseph Luft and Harrington Ingham back in the 1950s) and built a risk assessment methodology around it.

It seems to me that the risks of AI fall into the Johari Window's known-unknown category. Known unknown risks are a category of risks that organizations generally face. They are called known unknowns because the organization is aware that such a risk exists, but is unable either to estimate the probability that it will materialize or to quantify its impact if it does.

BTW, you seem to be making the mistake of assigning a probability to a known unknown. Assigning a probability to a known unknown is considered a bad strategy because it will distort our risk mitigation planning. Risks for which we have priors are known knowns, and those can be assigned at least a provisional probability from their priors. Known unknowns are risks we don't have any priors for, so any probabilities we assign to them would likely be wrong, and we might spend our risk mitigation efforts on the wrong threat. But even though you can't assign a probability to known unknowns, you can still (a) rank them in order of their *relative* likelihood, and (b) plan mitigation strategies for them.

For instance, let's make a list of some of the known unknown risks to modern civilization. It's known that these *could* happen—but the probability of them happening is indeterminate—and the risk mitigation strategies for each would vary in difficulty and application.

1. Asteroid impact

2. Large-scale outflows from magma traps

3. Large-scale nuclear exchange

4. Anthropogenic Global Warming

5. AI

We'd probably rank 1 and 2 as less likely than 3, 4, and 5 (at least in the near term). Likewise, mitigation strategies for 1 and 2 may be difficult if not impossible (though nuking an asteroid is probably easier than stopping the Yellowstone magma cache from inundating the Western US and releasing gigatonnes of greenhouse gases). Whether one ranks a nuclear exchange higher than the risk of AI or AGW, well, that's up for discussion. AI might get a higher risk rating because we've gone 70 years without a large-scale nuclear exchange, and AGW severe enough to be a civilization-ending event is still a ways off. But risk mitigation strategies for these three scenarios are less likely to be a waste of time than those for 1 and 2. Anyway, you might disagree with my assessment, but I think you can understand my point.
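To make the ordinal-ranking point concrete, here is a minimal sketch (the ordering just mirrors my rough guesses above, and the structure is purely illustrative, not any formal methodology):

# Illustrative only: known unknowns get a relative rank, never an invented probability.
known_unknowns = [
    "AI",                                     # ranked first, per the rough guess above
    "Large-scale nuclear exchange",
    "Anthropogenic Global Warming",
    "Asteroid impact",
    "Large-scale outflows from magma traps",
]
# Mitigation planning keys off the relative order; revisit it as evidence arrives.
for rank, risk in enumerate(known_unknowns, start=1):
    print(f"{rank}. {risk}: maintain a mitigation plan")

The point of writing it this way is that nothing downstream depends on a probability number we have no basis for.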

So, my question is: what are the mechanisms that a sufficiently powerful AI could utilize to make humanity extinct? These are the mechanisms we would want to concentrate our risk mitigation strategies on.

For instance, to avoid the SkyNet scenario, making sure nuclear launch systems are air-gapped from the Internet would be one of several mitigation strategies. I don't know enough about our nuclear command and control systems, but this seems like an analysis that Rand or Mitre could take on (if they haven't already done so).

Ultimately, I'm sort of vague about the other ways AI could make humanity extinct. Has anyone got any scenarios they'd like to share?

Expand full comment

>You can’t reason this way in real life, sorry. It relies on a fake assumption that you’ve parceled out scenarios of equal specificity (does “the aliens have founded a religion that requires them to ritually give gingerbread cookies to one civilization every 555 galactic years, and so now they’re giving them to us” count as “one scenario” in the same way “the aliens want to study us” counts as “one scenario”?) and likelihood.

Yes, that sort of enumeration approach would be silly. However, maybe the silliest interpretation is not warranted?

Expand full comment

> But I can counterargue: “There have been about a dozen times a sapient species has created a more intelligent successor species: australopithecus → homo habilis, homo habilis → homo erectus, etc - and in each case, the successor species has wiped out its predecessor. So the base rate for more intelligent successor species killing everyone is about 100%”.

If you believe in the categorical imperative, then homo sapiens should wipe out AGI iff homo erectus should have wiped out homo sapiens. Humans should be grateful that we were not wiped out by other species threatened by our intelligence, and perhaps we should pay it forward.

Expand full comment

The opposite of this fallacy is of course also a fallacy. "We can't know for sure what will happen" doesn't mean "therefore certain imminent doom".

Presumably Tyler has an estimate of the risk based on "epistemic work", as you observed; it's just much lower than your own estimate. But it's not feasible to recapitulate the entire debate every time it comes up, and in fact you've done the same thing: *you* didn't present any arguments for your 33% figure here either, except very briefly in the "reference class tennis" aside.

Expand full comment

Not sure if these points have already been made, but

(1) The most likely outcome is not that AI will wipe out humanity, but that AI will be harnessed by a few to completely dominate the rest. This is a frankly more miserable outcome as well.

(2) I haven't seen anyone use this metaphor, so ...

When I was an undergrad many moons ago, we had Friday "precepts" where a small group of students sat around with a grad student to discuss the week's readings. Inevitably one or more students (sometimes me) would try to BS their way around the fact that they hadn't done the readings, by parroting what everyone else was saying.

That's ChatGPT - the lazy undergrad who has done no actual reading (as in, conceptual construction) and is simply repeating what everyone else has said. In other words, whether true or false, everything that ChatGPT says is bullshit in the technical sense of "lexical tokens without content".

Thing is, the BS method works some of the time for undergrads, and it will work some of the time for ChatGPT, unless we categorically reject its output as bullshit that is sometimes accidentally accurate.

Expand full comment

Pyrrhonean Skeptics in ancient Greece landed on a similar position: If you can't produce a reason for any particular course of action being the right thing to do, should you do nothing? No, idleness is as unjustified as any of your other options. You're gonna do something, and it's gonna be unjustified, because something other than reasons will make you act. Four (unjustified) movers named by Sextus were 1. feelings/appearances, 2. instincts, 3. customs, 4. training.

The upshot is that the skeptic just gets blown around by natural and social forces and is wise enough to just be chill about it. Sextus was basically the ultimate anti-rationalist in the Yudkowsky sense. He made fun of Diogenes, who allegedly reasoned thus: Masturbating isn't wrong, and if you're doing nothing wrong, there is no shame in doing it in the marketplace. Some days I feel like Yudkowsky is the new Diogenes, this blog is our community marketplace, and there is a lot of open masturbating around here! Not saying it's wrong, of course. Fap on, intrepid rationalists, and feel superior to the normies who refuse to join in out of an irrational adherence to custom.

Expand full comment

I think Tyler Cowen's argument is not exactly the fallacy it's being depicted as. To me it reads like this,

1. We can't possibly know, or even guess accurately, what the impact will be.

2. Therefore, nothing we can say will be of any value.

3. This may be functionally equivalent to saying that the chance of a major impact is zero, but that can't be helped.

Expand full comment

This was exactly the point I wish Eliezer had made on the Lex Fridman podcast.

Expand full comment

Nit-pick: "I said before my chance of existential risk from AI is 33%; that means I think there’s a 66% chance it won’t happen." 33 + 66 is 99. Maybe you mean 67%?

Expand full comment

Interpretation #4: the situation is so uncertain that we don't even know what is safe. Maybe the safest option is to race towards AGI because it'll save us from the 100-mile-long alien ship? Maybe the aliens are coming to save us from the AI?

Expand full comment

It's still unclear to me why P(bloxors are greeblic) = 0.5. If I am clueless with respect to a proposition, even a binary one like this, am I not licensed to assign any probability I like? In what sense can my credence be wrong, without introducing evidence?
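The only formal justification I'm aware of is the maximum-entropy one, and I'm not sure it settles the question; as a sketch (assuming entropy over the two outcomes is the right tie-breaker, which is exactly what's in dispute):

\[
H(p) = -p\log p - (1-p)\log(1-p), \qquad \frac{dH}{dp} = \log\frac{1-p}{p} = 0 \;\Longrightarrow\; p = \tfrac{1}{2}.
\]

Under that criterion, any credence other than 0.5 encodes information about the proposition that, by hypothesis, I don't have; that's the only sense in which it could be called "wrong" absent evidence.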

Expand full comment
Jun 6, 2023·edited Jun 6, 2023

Better to ask why you care so much about this one possibility

Expand full comment

This argument becomes less fallacious if you change "there is no reason to worry" to "there may be a reason to worry, but given our ignorance of the situation, worrying would be a waste of time and energy, since worrying is only useful when it motivates productive action and we have no idea what a productive action would be". For an example of this philosophy in action, watch the final scene of Lars von Trier's "Melancholia".

Expand full comment

LW/ACX Saturday (6/9/23): Your Brain, Who's in Control?

Hello Folks!

We are excited to announce the 29th Orange County ACX/LW meetup, happening this Saturday and most Saturdays

Host: Michael Michalchik

Email: michaelmichalchik@gmail.com (For questions or requests)

Location: 1970 Port Laurent Place, Newport Beach, CA 92660

Date: Saturday, June 10, 2023

Time: 2 PM

Conversation Starters:

https://www.pbs.org/wgbh/nova/video/your-brain-whos-in-control/

Your Brain: Who's in Control? | NOVA | PBS

C) Card Game: Predictably Irrational - Feel free to bring your favorite games or distractions.

D) Walk & Talk: We usually have an hour-long walk and talk after the meeting starts. Two mini-malls with hot takeout food are easily accessible nearby. Search for Gelson's or Pavilions in the zip code 92660.

E) Share a Surprise: Tell the group about something unexpected or that changed your perspective on the universe.

F) Future Direction Ideas: Contribute ideas for the group's future direction, including topics, meeting types, activities, etc.

Expand full comment

I recently talked to Tyler as I wanted to try to understand his logic. The key point is that he doesn't think that ASI (yes, ASI, not just AGI) will be more significant than the printing press, or electricity, or the internet. Like the printing press etc., it will cause all sorts of disruption in the world, some of it pretty profound, but it's unlikely to kill everyone. His belief is that the power and utility of intelligence diminish rapidly above the human level.

Expand full comment