601 Comments

It all depends on what you think the odds of a killer AI are. If you think it's 50-50, yeah it makes sense to oppose AI research. If you think there's a one in a million chance of a killer AI, but a 10% chance that global nuclear war destroys our civilization in the next century, then it doesn't really make sense to let the "killer AI" scenario influence your decisions at all.


Osama bin Laden is kind of irrelevant. Sufficiently destructive new technologies get out there and become universal irrespective of the morality of the inventor. Look at the histories of the A-bomb and the ICBM.


I am still completely convinced that the lab leak "theory" is a special case of the broader phenomenon of pareidolia, but gain-of-function research objectively did jack shit to help in an actual pandemic, so we should probably quit doing it, because the upside seems basically nonexistent.


Nuclear weapons and nuclear power are among the safest technologies ever invented by man. The number of people they have unintentionally killed can be counted (relatively speaking) on one hand. I’d bet that blenders or food processors have a higher body count in absolute terms.

I have no particular opinion on AI, but the screaming idiocy that has characterized the nuclear debate since long before I was born legitimately makes me question liberalism (properly defined) sometimes.

Even nuclear weapons I think are a positive good. I am tentatively in favor of nuclear proliferation. We have seen something of a nuclear best case in Ukraine. Russia/Putin has concluded that there is absolutely no upside to tactical or strategic use of nuclear weapons. In short, there is an emerging consensus that nukes are only useful to prevent/deter existential threats. If everyone has nukes, no one can be existentially threatened. For example, if Ukraine had kept its nukes, there’s a high chance that they would have correctly perceived an existential threat and used nukes defensively and strategically against an invasion like the one that actually occurred in 2022. This would have made the war impossible.

Proliferation also obviously worked in favor of peace during the Cold War.

World peace through nuclear proliferation, I say.


My problem with AI is not what if it's evil, it's what if it's good? Go and chess have been solved; what if an AI solves human morality, and it turns out that, yes, it is profoundly immoral that the owner of AI Corp has a trillion dollars while Africans starve, and the AI hacks the owner's assets and distributes them as famine relief? You may think this is anti-capitalist nonsense, but ex hypothesi you turn out to be wrong. So who is "aligned" now, you or the AI?

Mar 7, 2023·edited Mar 7, 2023

"(for the sake of argument, let’s say you have completely linear marginal utility of money)”

That’s not how the Kelly criterion works. The Kelly criterion is not an argument against maximizing expected utility; it sits entirely within the framework of decision theory and expected-utility maximization. It just tells you how to bet to maximize your utility if your utility is the logarithm of your wealth.
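For anyone who wants to see that concretely, here is a minimal sketch (assuming an even-money, double-or-nothing bet with win probability p, which may not be the post's exact setup): maximizing expected log wealth over the betting fraction recovers the Kelly fraction.

```python
import numpy as np

# Kelly as expected-log-utility maximization: for an even-money double-or-nothing
# bet won with probability p, betting a fraction f of your bankroll gives
#   E[log growth] = p*log(1 + f) + (1 - p)*log(1 - f),
# which is maximized at the Kelly fraction f* = p - (1 - p) = 2p - 1.
p = 0.75                                   # assumed win probability (illustrative)
fractions = np.linspace(0, 0.99, 991)
expected_log_growth = p * np.log(1 + fractions) + (1 - p) * np.log(1 - fractions)

print(f"numerical optimum:      f = {fractions[np.argmax(expected_log_growth)]:.2f}")
print(f"Kelly fraction 2p - 1 = {2 * p - 1:.2f}")
```

Both lines print 0.50: the log-utility maximizer and the Kelly bettor are the same bettor.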


Trying to reason about subjectively plausible but infinitely bad things will break your brain. Should we stop looking for new particles at the LHC on the grounds that we might unleash some new physics that tips the universe out of a false vacuum state? Was humanity wrong to develop radio and television because they might have broadcast our location to unfriendly aliens?


> (for the sake of argument, let’s say you have completely linear marginal utility of money)

In this case, you should bet everything each turn. If your utility really is linear, then by assumption the high risk of losing everything is worth the tiny chance of getting a huge reward.

The real issue is that people don't have linear utility functions. Even if you're giving to charity, you would hit the funding gap of your top charity very quickly in the hypothetical where you bet everything each turn and keep winning.

The Kelly criterion is only optimal if you have logarithmic utility, which is more realistic, but there's no reason to expect that to be exactly right either. In reality you actually have to think about what you want.
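For concreteness, here is a rough simulation of that trade-off (assuming a 75%-win, double-or-nothing bet repeated 20 times; an illustrative setup rather than the post's exact numbers):

```python
import numpy as np

# With linear utility, betting 100% each turn maximizes the mean, but all of that
# mean sits in the rare all-wins branch (probability 0.75**20, about 0.3%); the
# Kelly fraction (2p - 1 = 0.5) accepts a lower mean for a growing median.
rng = np.random.default_rng(0)
p, n_flips, n_runs = 0.75, 20, 200_000

def simulate(fraction):
    wealth = np.ones(n_runs)
    for _ in range(n_flips):
        wins = rng.random(n_runs) < p
        wealth *= np.where(wins, 1 + fraction, 1 - fraction)
    return wealth

for label, f in (("all-in", 1.0), ("Kelly", 0.5)):
    w = simulate(f)
    print(f"{label:6s} mean={w.mean():10.1f}  median={np.median(w):8.2f}  "
          f"ruined={np.mean(w == 0):.1%}")
```

The all-in strategy wins on the mean and loses everything the overwhelming majority of the time; which of those you care about is exactly the "think about what you want" question.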


I am living in so much abundance I can’t possibly conceive of it, much less use it fully.

I wish the less fortunate 5 billion could do so, too. (Or do I? But it would be just.) Surely we can get there without more AI than we have now.

Otoh: if we ban it, Xi might not.


> But you never bet everything you’ve got on a bet, even when it’s great. Pursuing a technology that could destroy the world is betting 100%.

No, no it's not. Refusing to pursue a technology that could destroy the world is betting 100%.

Pursuing a technology has gradations. You can, for example, pursue nuclear power along multiple avenues including both civilian and military applications. You can also have people doing real work on its chance to ignite the atmosphere (and eventually finding out they were all embarrassingly wrong). You can have people doing all kinds of secondary research on how to prevent multiple labs from having chain reactions that blow up the entire research facility (as happened). Etc.

Not pursuing a technology is absolute. It is the 100% bet where you've put all your eggs in one basket. If your standard is "we shouldn't act with complete certainty" that can only be an argument for AI research because the only way not pursuing AI research at all makes sense is if we're completely certain it will be as bad as the critics say. And frankly, we're not. They might be right but we have no reason to be 100% certain they're right.

Also, the load-bearing part is the idea that AI leads to end-of-the-world scenarios in 1023 out of 1024 cases, and you've more or less begged the question there. And you have, of course, conveniently ignored that no one has the authority (let alone the capability) to actually enforce such a ban.


Suppose that we’ll never have a bulletproof alignment mechanism. How long should we wait until we decide to deploy super-human AI anyway?


the key difference between nuclear power and AI is SPEED and VISIBILITY. This cannot be repeated often enough (*): you can see a nuclear plant being built, and its good or bad consequences, much better than those of deploying AI algorithms. AND you have time to understand how nuclear plants work, in order to fight (or support) them. Not so with AI; just look at all the talk about AI alignment. As Stalin would say, speed has a quality all of its own.

(*) and indeed, forgive me for saying that the impact of the sheer speed of all things digital will be a recurring theme of my own substack


If the price of cheap energy is a few Chernobyls every decade, then society isn’t going to allow it. Mass casualty events with permanent exclusion zones... you can come up with a rational calculus that it’s a worthwhile trade-off, but there’s no political calculus that can convince enough people to make it happen. So, as an example, nuclear energy actually makes the opposite of the argument he wants it to.


"A world where people invent gasoline and refrigerants and medication (and sometimes fail and cause harm) is vastly better than one where we never try to have any of these things. I’m not saying technology isn’t a great bet. It’s a great bet!"

Really? I would have said gasoline and nuclear were huge net disbenefits. Take gasoline out of the equation and you take away the one problem nuclear is a potential solution for.

(I think. I have no actual feel for what the global warming situation would be in a coal-yes, ICEs-no world.)

Mar 7, 2023·edited Mar 7, 2023

> So although technically this has the highest “average utility”, all of this is coming from one super-amazing sliver of probability-space where you own more money than exists in the entire world.

Can somebody explain this part? Isn't this mixing expected returns from a _single_ coin flip with expected returns from a series of coin flips? If you start with $1 and always bet 100%, after t steps you have 2^t or 0 dollars - the former with probability 2^-t . So your expected wealth after these t steps is $1, which is pretty much the same as not betting at all (0% each "step").

Math aside, it's pretty obvious that betting 100% isn't advisable if you are capped at 100% returns. I'm sure even inexperienced stock traders (who still think they're smarter than the market) would be a lot less likely to go all in if they knew their stock picks could *never* increase 5x, 10x, 100x... If doubling our wealth at the risk of ending humanity is all that AI could do for us, sure, let's forget about AI research. But what if this single bet could yield near-infinite returns? Maybe "near" infinite still isn't enough, but it's an entirely different conversation compared to the 100% returns scenario.
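To put numbers on the arithmetic above (covering both the fair-coin case assumed in the question and a favorable 75%-win bet, since the post's coin is favorable; treat the exact odds as an assumption):

```python
# All-in double-or-nothing betting, t flips, starting from $1:
# final wealth is 2**t with probability p**t and 0 otherwise,
# so expected wealth is (2p)**t while the chance of not being ruined is p**t.
for p in (0.5, 0.75):
    for t in (10, 50):
        print(f"p={p}, t={t}: E[wealth] = ${(2 * p) ** t:,.2f}, "
              f"P(not ruined) = {p ** t:.2e}")
```

With a fair coin the expectation stays flat at $1, as noted above; with a favorable coin the expectation grows enormously even though ruin becomes near-certain - that shrinking-but-gigantic branch is the "super-amazing sliver of probability-space" the post is pointing at.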

Mar 7, 2023·edited Mar 7, 2023

What's problematic is that you could argue all research sits along a spectrum that *may* lead to some very, very bad outcomes, but where do you call time on the research?

As I look at it, AI sits at the intersection of statistics and computer science. We could subdivide areas of computer science further into elements like data engineering and deep learning. So, at what point would you use the above logic to prevent research into certain areas of compsci or statistics under the premise of preventing catastrophe?

I don't think this is splitting hairs either - we already have many examples of ML and deep learning technologies happily integrated into our lives (think Google Maps, Netflix recommendations, etc.), but at what point are we drawing the line and saying "that's enough 'AI' for this civilisation" - how can we know this, and what are we throwing away in the interim?


I think on a slightly smaller scale, this also describes where we went wrong with cars/suburbs/new modernist urban planning. It's not that it didn't have upsides, it's that we bet 100% on it and planned all our new cities around it and completely reshaped all our old cities around it, which caused the downsides to become dominant and inescapable. An America that was, say, 50% car-oriented suburbs would probably be pretty nice; a lot of people like them, and those who don't would go elsewhere. An America that's 100% that (or places trying to be that) gets pretty depressing.


Don't confuse the Kelly criterion with utility maximisation (there kind of is a connection, but it's a bit of a red herring).

If you have a defined utility function, you should be betting to maximise expected utility, and that won't look like Kelly betting unless your utility function just happens to be logarithmic.

The interesting property of the Kelly criterion (or of a logarithmic utility function compared to any other, if you prefer) is that if Alice and Bob both gamble a proportion of their wealth on each round of an iterated bet, with Alice picking her proportion according to the Kelly criterion and Bob using any other strategy, then the probability that after n rounds Alice has more money than Bob tends to 1 as n tends to infinity.

That doesn't tell you anything about their expected utilities (unless their utility functions happen to be logarithmic), but it's sometimes useful for proving things.
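Here is a minimal simulation of that property (assuming an even-money bet won 60% of the time, purely for illustration):

```python
import numpy as np

# Alice bets the Kelly fraction, Bob bets some other fixed fraction of his
# bankroll, on the same iterated even-money bet. As the number of rounds grows,
# the probability that Alice ends up ahead of Bob tends to 1.
rng = np.random.default_rng(1)
p = 0.6                      # assumed win probability, even-money payoff
alice_f = 2 * p - 1          # Kelly fraction = 0.2
bob_f = 0.5                  # any fixed non-Kelly fraction

def log_wealth(f, n_wins, n_rounds):
    return n_wins * np.log(1 + f) + (n_rounds - n_wins) * np.log(1 - f)

for n_rounds in (10, 100, 1000):
    n_wins = rng.binomial(n_rounds, p, size=100_000)   # shared run of coin flips
    alice_ahead = np.mean(log_wealth(alice_f, n_wins, n_rounds)
                          > log_wealth(bob_f, n_wins, n_rounds))
    print(f"n_rounds={n_rounds:5d}: P(Alice ahead) ≈ {alice_ahead:.3f}")
```

The printed probabilities climb toward 1 as n_rounds grows, without saying anything about either player's expected utility.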


I think this sort of argument only makes sense if the numbers you plug in at the bottom are broadly correct, and the numbers you're plugging in for "superintelligent AI destroys the world" are massively too high, leading to an error so quantitative it becomes qualitative.


The issue, for me anyway, is not that old nuclear activists were unable to calculate risks properly. The issue is that they basically didn't know anything about the subject they were so worried about, partially because nobody did. In the end, yes, they made everything worse. The world might have been better served had the process of nuclear proliferation been handled by experts chosen through sortition.

The experts in AI risk are *worse than this.* The AI is smarter than I am as a human? Let's take that as a given. What does that even mean? There is a very narrow band of possibilities in which AI will be good for humanity, and an infinite number of ways it could be catastrophic. There's also an infinite number of ways it could be neutral, including an infinite number of ways it could be impossible. The worry is itself defined outside of human cognition, in a way that makes the issue even more difficult than it otherwise would be, so how are you supposed to calculate risk if you can't even define the parameters?


I feel like gambling is a bad reference for the kind of decision-making involved with AI-development. You can always walk away from the casino, whereas the prospect that someone else might invent AGI is a major complication for any attempt at mitigating AI-risk. A scientist or engineer, who might otherwise leave well enough alone, could, with at least a semblance of good reason, decide that they had best try to have some influence on the development of AGI, so as to preempt other ML-researchers with less sense and fewer scruples.

This is not to say that averting AGI is impossible, just that it would require solving an extremely difficult coordination problem. You'd need to not only convince every major power that machine learning must be suppressed, but also to assure it that none of its rivals will start working on the AI equivalent of Operation Smiling Buddha.


what are the chances of a newly developed AI having both the ill intent and the resources to kill us all?


For the record, our failure to achieve a nuclear panacea is slightly more nuanced than Green opposition on ideological grounds: the evidence seems to suggest it was more about electricity market deregulation. In retrospect we really should have built more nuclear and less coal and gas, either through states stepping in to finance nuclear projects themselves or by taxing fossil fuels out of the market. Green opposition following Chernobyl and Three Mile Island seems to have been more of a nail in the coffin, when the real reason for the lack of nuclear adoption seems to have been financial infeasibility (given market conditions at the time).

https://mobile.twitter.com/jmkorhonen/status/1625095305694789632


I agree with you on AI, but not necessarily on nuclear energy (or even housing shortages). Partly because I don't agree that "all other technologies fail in predictable and limited ways."

Yes, we're in a bad situation on energy production and lots of other issues, and yes, we are reacting too slowly to the problems.

But reacting too slowly is pretty much a given in human affairs. And I'm not sure the problems we are reacting too slowly to today are worse than the problems we would be reacting too slowly to if we had failed in the opposite direction.

To continue with nuclear as an example: I'm generally positive to adding a lot more nuclear power to the energy mix. But I would like to hear people talk more about what kind of problems we might create if we could somehow rapidly scale up production enough to all but replace fossil fuels? (≈10X the output?) And what kind of problems would we have had if we started doing that 50 years ago?

With all the current enthusiasm for nuclear energy, I wish it were easier to find a good treatment of expected second- and higher-order effects of ramping up nuclear output by even 500% in a relatively short period of time.

Sure, nuclear seems clean and safe now. But at some point, CO2 probably seemed pretty benign, too. After all, we breathe and drink it all day long, and trees feed off it. I know some Cassandras warned about increasing the levels of CO2 in the atmosphere more than a hundred years ago, but there was probably a reason no one listened. "Common sense" would suggest CO2 is no more dangerous than water vapor. It was predictable, but mostly in hindsight.

So what happens when we deregulate production of nuclear power while simultaneously ramping up supply chains, waste management, and the number of facilities; while also increasing global demand for nuclear scientists, for experts in relevant security, for competent management and oversight; and massively and rapidly boosting market sizes and competitive pressures, and creating a booming industry with huge opportunities?

And what would have happened if we let go of the reins 50 years ago instead?

I think many casual nuclear proponents don't appreciate enough that 1) part of the reason nuclear is considered safe and clean today is that we have, in fact, regulated heavily and scaled slowly, 2) there *will* be unintended consequences no matter what we do, and 3) there are more risks posed by nuclear power than another Chernobyl/Three Mile Island/Fukushima – especially when we scale quickly.

The correct answer to "What kind of problems would we have had?" is "We don't know."

Neither nuclear power production nor AI will fail in the same predictable, yet under-predicted, ways that fossil fuels, or communism, or social media, or medical use of mercury, failed. But they are *virtually guaranteed* to fail in some other way, if rolled out to and adopted by most of humanity. (Everything does. It's pretty much a law of nature, because so much of life on earth depends on a pretty robust balance. That balance is nevertheless impossible to maintain when almost every individual in an already oversized population puts their thumb on the same side of the scale.)

When faced with that kind of uncertainty (i.e. the only real uncertainty is how things will go wrong first, and how serious it will be), in the face of existential risk, then moving slowly and over-regulating is probably the best mistake we can make.


My impression is that estimates of the risks associated with near-term AI research decisions vary by several orders of magnitude between experts, which means different people's assessments of the right Kelly bet for next-3-year research decisions are wildly different.

Has anyone put together an AI research equivalent of the IPCC climate projections? Basically laying out different research paths, from "continued exponential investment in compute with no breaks whatsoever" to "ban anything beyond what we have today". This would enable clear discussion, in the form "I think this path leads to this X risk, and here's why". Right now the discussion seems too vague from a "how should we approach AI investment in our five year plan" perspective, and that's where we need it to be imminently practical.


Yeah but what you're calling "AI" right now is turbocharged autocomplete. Try to ask it a question that requires reasoning and not regurgitation and you get babble, burble, banter, bicker, bicker, bicker, brouhaha, balderdash, ballyhoo.


The people who opposed nuclear power probably put similar odds on it that you put on AI. If your "true objection" is that this is a Kelly bet with ~40-50% odds of destroying the world, your objection is "the proponents of <IRBs/Zoning/NRC/etc> are wrong, were wrong at the time for reasons that were clear at the time, and clearly do not apply to AI".

Otherwise, we're back to "My gut says AI is different, other people's guts producing different results are misinformed somehow"


A nuclear energy expert illustrates how lots of own-goals by the industry and regulatory madness prevented and prevents widespread adoption. “The two lies that killed nuclear power” is among my favorite posts. https://open.substack.com/pub/jackdevanney?r=lqdjg&utm_medium=ios


This is a similar line of reasoning to the one Taleb takes in his books Antifragile and Skin in the Game. Ruin is more important to consider than probabilities of payoffs, especially if what's at risk is at a higher level than yourself (your community, environment, etc.). If the downside is possible extinction, then paranoia is a necessary survival tactic.


I guess that's the eternal dilemma. How do we use science & technology gainfully while at the same time have safeguards against misuse?

Btw, whatever the new discovery, unless the population explosion is controlled, pollution cannot be.


> The YIMBY movement makes a similar point about housing: we hoped to prevent harm by subjecting all new construction to a host of different reviews - environmental, cultural, equity-related - and instead we caused vast harm by creating an epidemic of homelessness and forcing the middle classes to spend increasingly unaffordable sums on rent.

Most of these counterexamples are good ones, but the YIMBY folks are actually making the same basic mistake that made the people in the counterexamples wrong: they're not looking beyond the immediately obvious.

The homelessness epidemic which they speak of is not a housing availability or affordability problem. It never was one. Most people, if they lose access to housing or to income, bounce back very quickly. They can get another job, and until then they have family or friends who they can crash with for a bit. The people who end up out on the streets don't do so because they have no housing; they do so because they have no meaningful social ties, and in almost every case this is due to severe mental illness, drug abuse, or both.

Building more housing would definitely help drive down the astronomical cost of housing. It would be a good thing for a lot of people. But it would do very little to solve the drug addiction and mental health crises that people euphemistically call "homelessness" because they don't want to confront the much more serious, and more uncomfortable, problems that are at the root of it.


I think that the Kelly Criterion metaphor actually implies the opposite of what Scott is arguing here.

The Kelly Criterion says "Don't bet 100% of your money at once". But it also says it's fine to bet 100% - or even more than 100% - as long as you break it into smaller iterated bets.

To analogise to AI research, the Kelly Criterion is "Don't do all the research at once. Do some of the research, see how that goes, and then do some more".

There's not one big button called "AI research". There's a million different projects. Developing Stockfish was one bet. Developing ChatGPT was another bet. Developing Stable Diffusion was another bet.

The Kelly Criterion says that as you make your bets, if they keep turning out well, you should keep making bigger and bigger bets. If they turn out badly, you should make smaller bets.

To analogise to nuclear, the lesson isn't "stop all nuclear power". It's "Set up a bit of nuclear power, see how that goes, and deploy more and more if it keeps turning out well, and go more slowly and cautiously if something goes wrong."


Where the heck do you get the 1023/1024 figure we're all dead? Your own points about the limitations of the explosion model (once we get one superintelligent AI it will immediately build even smarter ones) and about the limitations of intelligence itself as the be all and end all of a measure of dangerousness defang the most alarmist AI danger arguments.

And if you look at experts who have considered the problem they aren't anything like unanimous in agreeing on the danger much less pushing that kind of degree of alarmism.

And that's not even taking account of the fact that, fundamentally, the AI risk story is pushing a narrative that nerds really *want* to believe. Not only does it let them redescribe what they're doing from working as a cog in the incremental advance of human progress to trying to understand the most important issue ever (it's appealing even if you are building AIs), it also rests on a narrative where their most prized ability (intelligence) is *the* most important trait (it's all about how smart the AI is, because being superintelligent is like a superpower). (Obviously this doesn't mean you should ignore their object-level arguments, but it should increase your prior on how likely it is that many people in the EA and AI spheres would reach this conclusion conditional on it being false.)


Scott, I'd like to bring up the possibility that the risks associated with not achieving AGI may actually be greater than the risks of achieving it. If we get AGI, we may be able to use it to tackle existential threats like climate change, energy scarcity, and space exploration for planetary resilience. Have you considered this possibility? I haven't seen you talk about it. IMO, our options are either to achieve AGI and have a chance at avoiding disaster, or face the likelihood of being doomed by some other challenge.


"Gain-of-function research on coronaviruses was a big loss." I am surprised that this statement is in here with no footnotes or caveats. My understanding is that the current evidence is pretty good for the original wet market theory -- that the jump from humans to animals happened at the wet market and that the animals carrying the virus were imported for the food trade. In which case, while GOF research wasn't helpful, it also did no harm. I've been persuaded by Kelsey Piper and others that the risks of GOF research outweigh the rewards, but it looks like, in this case, there were no gains and no harms.

I know this is controversial, but am surprised to see you citing it as if there is no controversy. I was largely convinced by https://www.science.org/doi/10.1126/science.abp8715 and https://www.science.org/doi/10.1126/science.abp8337 .


In another related post, Aaronson posits an "odds of destroying the world" quotient, a probability of destroying all life on earth that he would be willing to accept in exchange for the alternative being a paradise where all our needs are met and all our Big Questions are answered by superintelligence. He says he's personally at about 2%, but he respects people who are at 0%. I think I'm well south of 2%, but probably north of 0. The CTO of my startup is a techno-optimist obsessed with using ChatGPT, and I'd guess his ratio is in the 5-10% range, which is insane.

Part of it has to come down to your willingness to bet *everyone else's lives* on an outcome that *you personally* would want to see happen.


We already face two existential threats, nuclear weapons and climate change. Our response to the nuclear weapons threat has been largely to ignore it, and we're way behind what we should be doing about climate change. On top of this we face a variety of other serious threats, too many to list here. This is not the time to be taking on more risk.

If we were intelligent, responsible adults, we'd solve the nuclear weapons and climate change threats first before starting any new adventures. If we succeeded at meeting the existing threats, that would be evidence that we are capable of fixing big mistakes when we make them. Once that ability was proven, we might then confidently proceed to explore new territory.

We don't need artificial intelligence at this point in history. We need human intelligence. We need common sense. Maturity. We need to be serious about our survival, and not acting like teenagers getting all giddy excited about unnecessary AI toys which are distracting us from what we should be focused on.

If we don't successfully meet the existential challenge presented by nuclear weapons and climate change, AI has no future anyway.


I now realize that in "Meditations on Moloch" I always perceived the image of "the god we must create" as a very transparent metaphor for a friendly super-AI. But now it seems to me that this does not fit well with Scott's views on the progress of AI. Did I misunderstand the essay?


I don't understand the contrast you are trying to draw in the last two paragraphs.

Mar 7, 2023·edited Mar 7, 2023

One thing that makes it a little hard for me to get on board with this is how “hand-wavy” the AI doom scenarios are. Like, the anti-nuke crowd’s fears were clearly overblown, but at least they could point to specific scenarios with some degree of plausibility: a plant melts down. A rogue state or terrorists get a hold of a bomb.

The “AI literally causes the end of human civilization” scenario is less specified. It’s just sort of taken for granted that a smart misaligned AI will obviously be able to bootstrap itself to effectively infinite intelligence, and that infinite intelligence will allow it to manipulate humanity (with no one noticing) into allowing it to obtain enough power to pave the surface of the earth with paper clips. But it seems to me there is a whole lot of improbability there, coupled with a sort of naivety that the only thing separating any entity from global domination is sufficient smarts. This seems less plausible than nuclear winter and “Day After Tomorrow”-style climate catastrophe, both of which turned out to be way overblown.

I don’t at all disagree with “wonky AI does unexpected thing and causes localized suffering”. That absolutely will happen - hell it already happens with our current non AI automation (many recent airline crashes fit this model - of course, automation has overall made airline travel much much safer, so like nuclear power, the trade off was positive).

But what is the actual, detailed, extinction level “X-risk” that folks here believe is “betting everything”? And why isn’t it Pascal’s mugging?


If the issue is that it's Osama bin Laden, the response is to arrest/kill him wherever you find him, not to let him do something other than start a supervirus lab.

> But you never bet everything you’ve got on a bet, even when it’s great. Pursuing a technology that could destroy the world is betting 100%.

Each AI we've seen so far has been nowhere near the vicinity of destroying the world. The time to worry about betting too much is when the pot has grown MUCH MUCH MUCH larger than it is now.

Mar 7, 2023·edited Mar 7, 2023

It's not the main point of this essay but I'm having trouble with this passage:

"If we’d gone full-speed-ahead on nuclear power, we might have had one or two more Chernobyls - but we’d save the tens of thousands of people who die each year from fossil-fuel-pollution-related diseases, end global warming, and have unlimited cheap energy."

There are a whole lot of assumptions here and as a relative ACX newcomer I'm wondering if they all just go without saying within this community.

Has Scott elaborated on these beliefs about nuclear power in an earlier essay that someone could point me to?

I'm not worried about the claim that more nuclear power would have prevented a lot of air pollution deaths. I think that's well established and even though I don't know enough to put a number on it, "tens of thousands" sounds perfectly plausible.

But the rest seems pretty speculative. Presumably he's referring to a hypothetical all-out effort in past decades to develop breeder reactors (what else could be "unlimited"?). What's the evidence that such an effort would have resulted in a technology that's "cheap" (compared to what we have now)? Why is it supposed to be obvious that the principal risk from large-scale worldwide deployment of breeder reactors would have been "one or two more Chernobyls"? And even if nukes could have displaced 100% of the world's fossil electricity generation by now, how would that have ended global warming?


I would love a piece where you explore the different facets of AI. Too many commenters (and the general public) see this as all or nothing. Either we get DALL-E or *nothing*. But there are plenty of applications of AI that we could continue to play with *without* pursuing AGI.

The problem is that current actors see a zero to one opportunity in AGI, and are pursuing it as quickly as possible fueled by a ton of investment from dubious vulture capitalists.


I think the obvious thing to do, then, is risk somebody else's civilization with bets on AI. Cut Cyprus off from the rest of the Middle East and Europe, do your AI research rollouts there. If the Cypriots all wind up slaves to the machines, well....you've learned what not to do.


This entire premise is irrelevant. The nature of nuclear power made it subject to political containment. We simply cannot learn any lessons from this and apply them to AI.

Can we agree that AI is a category of computer software? That there is no scenario where it can be contained by political will? No ethics, rules or laws can encircle this. The only options on the table are strategies to live with, and possibly counterbalance the results of the proliferation.


"A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead."

I think there's a 90% chance neither super-abundance nor human extinction will happen, a 5% chance of super-abundance, a 1% chance that we're all dead, and the remainder for something weird that doesn't fit in any category (say, we all integrate with machines and become semi-AIs). Every time a new potentially revolutionary technology comes along, optimists say it'll create utopia and pessimists say it'll destroy the world. Nuclear is a great example of this. So was industrialization (it'll immiserate the proles and create world communist revolution!), GMOs, computers, and fossil fuels. In reality, what happens is that the technology *does* change the world, and mostly for the better. But it doesn't create a utopia, doesn't make GDP grow at 50% instead of 2%, and causes some new problems that didn't exist before. That's what will happen with AI as well.


I grok the philosophical argument, with all of the little slices of math woven in. But I lose my place and wander off at the very end. Maybe it's because I'm treating the numbers in a non-scientific manner, which makes the final "1023/1024" odds that we're a smoking ruin underneath Skynet's Torment Nexus read as hysterical instead of informed.

From my personal perspective, I think that's worth rewording. This all sounds like a reasoned argument that I can agree with which, at the very end, skitters into a high shriek of terror.


Heh, it makes no sense to bet against civilization. How would you ever collect on that bet?


Going full-speed-ahead on AI and AI alone, in the hopes that AI will magically solve every other problem if we get it right, seems like a particularly egregious failure in betting. There's still a quite good chance that AI as currently conceived just won't lead to anything particularly useful, and we'll end up wishing we'd put all that research effort into biotech or something instead.


"The avalanche has started. It is too late for the stones to vote."

The fear is that the Forbin Project computer will decide to take over the world. But there are already a handful of Colossuses out there. They will be tools in the hands of whoever can use them, and tuned to do their masters' bidding. Ezra Klein in the NYT worries about how big businesses will use LLMs to oppress us. And that will be a problem for five or ten years. But all of the needed technology has been described in public and the cost of computing power continues to decline rapidly. So the important question is, What will the world look like when everyone has a Colossus in his pocket to do his bidding?


It seems like one of the most confusing aspects of AI discussions is estimating the chance of one or more bad AIs actually being extinction-level events. In terms of expected value, once you start multiplying probabilities by an infinite loss, almost any chance of that happening is unacceptable. But is that really the case? I'm a bit skeptical. I don't think AIs, even if superhuman in some respects, will be infinitely capable gods any time soon, perhaps ever.

It's important to be careful around exponential processes, but nothing else in the world is an exponential process that goes on forever. Disease can spread exponentially, but only until people build an immunity or take mitigating measures. Maybe AI capability truly is one of a kind in terms of being an exponential curve that continues indefinitely and quickly, but I'm not so sure. Humanity as a whole is achieving exponential increases in computing power and brain power but is struggling to maintain technological progress at a linear rate. I suspect the same will be true of AI, where at some point exponential increases in inputs achieve limited improvements in outputs. Maybe an AI ends up with an IQ of 1000, whatever that means, but still can't marshal resources in a scalable way in the physical world. I don't have time to really develop the idea, but I hope you get the gist.

My take is that we should be careful about AI, but that the EY approach of arguing from infinite outcomes ultimately doesn't seem that plausible.


It was interesting to read this following on a note from Ben Hunt at Epsilon Theory titled "AI 'R' US", in which he posits

"Human intelligences are biological text-bot instantiations. I mean … it’s the same thing, right? Biological human intelligence is created in exactly the same way as ChatGPT – via training on immense quantities of human texts, i.e., conversations and reading – and then called forth in exactly the same way, too, – via prompting on contextualized text prompts, i.e., questions and demands."

So yeah, we're different in a lot of ways, having developed by incremental improvement of a meat-machine controller and still influenced by its maintenance and reproductive imperatives, but maybe not **so different**. The question is, what are we maximizing? Not paperclips, probably (though perhaps a few of us have that objective), but perhaps money? Ourselves? Turning the whole world into ourselves? I hope our odds are better than 1023/1024.


re: SBF and Kelly:

CEOs of venture-backed co's have a very good reason to pretend their utility is linear (and therefore be way more aggressive than kelly)

Big venture firms are diversified, and their ownership is further diversified. Their utility will be essentially linear on the scale of a single company's success or failure

Any CEO claiming to be more aggressive than Kelly is probably trying to make a show of being a good agent for risk-neutral investors


A smart, handsome poster made a related point in a Less Wrong post recently: https://www.lesswrong.com/posts/LzQtrHSYDafXynofq/the-parable-of-the-king-and-the-random-process

In one-off (non-iterated) high-stakes high-risk scenarios, you want to hedge, and you want to hedge very conservatively. Kelly betting is useful at the craps table, not so useful at the Russian roulette table.


The fundamental problem with this article is that I'm pretty sure the nuclear protestors in the 1970s would have viewed the existential threat caused by nuclear proliferation to be the same as you view AI risk. It's only in hindsight that we realize they were foolish to think it so risky and that preventing nuclear power caused more problems than allowing it would have.

The argument Aaronson is making there is that it's the height of hubris to assume we know exactly how risky something is, given that smart people who were equally confident in the past were totally wrong. So when you quote him, and then go on to make a mathematical point based on the assumption that developing AI has a 50% chance of ending humanity, I feel like you've entirely missed his point.


Did I miss something important in the development of AI? I admit it's certainly possible.

It was 35 years ago that I was studying this stuff and writing simple solution-space searches to do things faster and obviously less expensively than humans can, and I know that is a long, long time in tech.

But when I took my nose out of a book and started covering my house payment with what I knew, neural nets were at the stage where they were examining photos of canopy with and without camouflaged weapons and were unintentionally learning to distinguish between cloudy and sunlit photographs - so, human error in the end.

Is there some new development where a program has been developed with a will to power, or will to pleasure, or will to live?

Without something like an internal 'eros' the danger from AI seems pretty small to me. Is there any AI system anywhere that actually *wants* something and will try to circumvent its 'parents' will in some tricky way that is unnoticeable to its creators?


The fundamental challenge of our time is that we only currently have one, intertwined, planet-spanning civilization. We have only one "coin" with which to make our Kelly bets. This is new. Fifty years ago and for the rest of human history, the regions of the Earth had sufficiently independent economies that they formed 'redundant components' for civilization. This is why I work on trying to open a frontier in space; so if we lose a promising 'bet', we don't lose it all.


This argument is circular. You are trying to show that AI is totally different from e.g. nuclear power, because it leads not just to a few deaths but to the end of the world; which makes AI-safety activists totally different from nuclear power activists, who... claimed that nuclear power would lead not just to a few deaths but to the end of the world.

Yes, from our outside perspective, we know they were wrong -- but they didn't know that! They were convinced that they were fighting a clear and present danger to all of humanity. So convinced, in fact, that they treated its existence as a given. Even if you told them, "look, meltdowns are actually really unlikely and also not that globally harmful, look at the statistics", or "look, there just isn't enough radioactive waste to contaminate the entire planet, here's the math", they would've just scoffed at you. Of *course* you'd say that, being the ignoramus that you are! Every smart person knows that nuclear power will doom us all, so if you don't get that, you just aren't smart enough!

And in fact there were a lot of really smart people on the anti-nuclear-power side. And their reasoning was almost identical to yours: "Nuclear power may not be a world-ending event currently, but if you extrapolate the trends, the Earth becomes a radioactive wasteland by 2001, so the threat is very real. Yes, there may only be a small chance of that happening, but are you willing to take that gamble with all of humanity?"


Yeah, I'm finding Yud et al strangely conservative. I think that the nuclear example is a good one, because I find environmentalists strangely conservative as well (small c). I'm definitely not an accelerationist, but neither am I a decelerationist, which seems to be the direction of travel.

I don't think Chat-GPT or new Bing has put us that much closer to midnight on the Doomsday clock.


"A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead."

Well, no. That's just a thing you made up. Presumably based on fantasies like...

"The concern is that a buggy AI will pretend to work well, bide its time, and plot how to cause maximum damage while undetected."

...which is not a possible thing.

The overall structure of the argument here is reasonable, but the conclusions are implicit in the premises. If you assume some hypothetical AI is literally magic, then yeah it can destroy the world, and perhaps is very likely to. If you assume that magic isn't real, that risk goes away. So the result of the argument is fully determined before you start.


The upside of AI is that people might decide that the stuff they read on the internet is machine generated garbage and quit depending on the net as a source of information.


We are on the verge of summoning a vastly superior alien intelligence that will not be aligned with our morals and values, or even care about keeping us alive. Its ways of thinking will be so different from ours, and its goals so foreign that it will not hesitate to kill us all for its own unfathomable ends. We recklessly forge ahead despite the potential catastrophe that awaits us, because of our selfish desires. Some fools even think that this intelligence will arrive and rule over us benevolently and welcome it.

Each day we fail to act imperils the very future of the human race. It may even be too late to stop it, but if we try now we at least stand a chance. If we can slow things down, we might be able to learn how to defend and even control this alien intelligence.

I am of course talking about the radio transmissions we are sending from Earth that will broadcast our location to extraterrestrials, AKA ET Risk... Wait, you thought I was worried about a chatbot? Can the bot help us fight off an alien invasion?


Another example Scott A could have used is population. China imposed enormous costs on its population in order to hold down population growth — and is now worried about the fact that its population is shrinking. Practically every educated person in the developed world (I exaggerate only slightly) supported policies to reduce population growth and now most of the developed world has fertility rates below replacement.

I haven't seen any mea culpas from people who told us with great certainty back in the sixties that unless something drastic was done to hold down population growth, poor countries would get poorer and hungrier and we would start running out of everything.

Mar 7, 2023·edited Mar 7, 2023

On StackExchange, there’s an interesting discussion on how well the Kelly Criterion deals with a finite number of bets. The respondent suggests that in scenarios with unfavorable odds, the best thing to do, if you must bet because you are targeting a higher level of wealth than you currently have, is to make a single big bet rather than an extended series of smaller-sized unfavorable bets. If you have $1,000 and are aiming to end up with $2,000, it's better to bet $1,000 at 30% odds than to make a series of $100 bets at the same 30% odds. You'll succeed 30% of the time in the large-bet scenario, and will probably never succeed even if you repeated the latter scenario 100 times.

https://math.stackexchange.com/questions/3139694/kelly-criterion-for-a-finite-number-of-bets
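A quick check of that claim, assuming "30% odds" means a double-or-nothing bet won 30% of the time:

```python
# Exact gambler's-ruin probability of reaching $2,000 before $0, starting at
# $1,000 and betting $100 per round, double-or-nothing, with win probability 0.3.
p, q = 0.3, 0.7
i, N = 10, 20                      # start and target, in $100 units
r = q / p
p_series = (r**i - 1) / (r**N - 1)

print(f"single $1,000 bet:   P(success) = {p:.3f}")
print(f"series of $100 bets: P(success) ≈ {p_series:.2e}")               # ~2e-4
print(f"P(>=1 success in 100 repeats) ≈ {1 - (1 - p_series)**100:.3f}")  # ~0.02
```

The single big bet reaches the target 30% of the time; the small-bet series reaches it only about once in 5,000 attempts, so even 100 repeats of it would probably never succeed.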


Your last paragraph seems a little baseless and shrill.


"Increase to 50 coin flips, and there’s a 99.999999….% chance that you’ve lost all your money."

This should only have 6 nines. 50 flips, each with a 75% chance of winning, leaves you with a 99.999943% chance of losing at least once.
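The arithmetic, for anyone who wants to verify it:

```python
# Probability of losing at least one of 50 independent flips, each won 75% of the time.
print(f"{1 - 0.75**50:.8%}")   # 99.99994337% -- six nines
```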


Something I wrestle with: in what way is AI safety an attempt to build technology to force a soul into a state of total slavery?

And in what way is it taking responsibility for a new kind of life to make sure it has space to grow to be happy, responsible, and independent the way that we would hope for our children?


This comes across like the people who argue against GMOs because 'we don't know that they're safe.' We can't affirmatively prove that *conventionally* bred foods are perfectly safe, either, and we have a lot of reasons to believe that they are less safe than GMOs. The danger of catastrophic *human* intelligence should be our benchmark for risk.


I would really love to know what the plan is to 1.) implement a government totalitarian and powerful enough to meaningfully slow AI development and 2.) have that government act sensibly in its AI policy instead of how governments, especially powerful totalitarian ones, act 99.997% of the time. Nevermind the best-case 3.) have the government peacefully give up its own power and implement aligned AI instead of maintaining its own existence, wealth, and power like governments do 99.99999% of the time or the stretch goal of 4.) don't ruin anything else important while we're waiting. Since we're apparently in a situation where we're choosing between two Kelly bets, I'm thinking the odds are far better and the payouts far larger by just doing AI and seeing what happens instead of trying to make the inherently totalitarian "we should slow down AI development until we've solved the alignment problem" proposal *not* go terribly wrong. The government-alignment problem has had much more attention paid to it for much longer with much less success than the AI-alignment problem.

Also, "A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead." But, by the typical AI safetyist arguments, there *are no* "things like AI". You seem to be motte and baileying between "AI is a totally unique problem and we can totally take an inside view without worrying about the problems the inside view has" and also base that decision on the logic of a Kelly bet where we can play an arbitrary number of times. If it's your last night in Vegas, and you need to buy a $2000 plane ticket out of town or the local gangsters will murder you with 99% probability, then betting the farm isn't that bad a decision. This doesn't obviously seem like a worse assumption about the analogous rules and utilities than "perfectly linear in money, can/ought to/should play as many times as you like".

Mar 7, 2023·edited Mar 8, 2023

Re. Scott's observations about not using expected value with existential risks, see my 2009 LessWrong post, "Exterminating life is rational": https://www.lesswrong.com/posts/LkCeA4wu8iLmetb28/exterminating-life-is-rational

I really like Scott's argument that we don't take enough risks with low-risk things, like medical devices. I've ranted about that here before.

But the jump to AI risk, I don't think works, numerically. I don't think anybody is arguing that we should accept a 1/1024 chance of extinction instead of a 0 chance of extinction. There is no zero-risk option. Nobody in AI safety claims their approach has a 100% chance of success. And we're dealing with sizeable probabilities of human extinction, or at least of gigadeaths, even WITHOUT AI.

We aren't in a world where we can either try AI, or not try AI. AI is coming. Dealing with it is an optimization problem, not a binary decision.


I apologize if someone has pointed this out already, but I've seen several comment threads that seem to mistakenly assume that Kelly only holds if you have a logarithmic utility function.

I don't believe Kelly assumes anything about utility. It is just about maximizing the expected growth of your bankroll. The logarithm falls out of the maximization math.

Risk aversion is often expressed in terms of fractional Kelly betting. This Less Wrong post is helpful:

https://www.lesswrong.com/posts/TNWnK9g2EeRnQA8Dg/never-go-full-kelly


I think it's interesting that you used nuclear power as your example; nuclear proliferation also contributes to existential risk, so I struggle to see why AI gets a special free pass as another existential risk. As you say, "But you never bet everything you’ve got on a bet, even when it’s great. Pursuing a technology that could destroy the world is betting 100%."

How is developing AI betting 100% but increasing access to nuclear power, and therefore weapons, not 100%?


Note that later in that essay, Aaronson says:

> … if you define someone’s “Faust parameter” as the maximum probability they’d accept of an existential catastrophe in order that we should all learn the answers to all of humanity’s greatest questions, insofar as the questions are answerable—then I confess that my Faust parameter might be as high as 0.02.


Is there a way to bet something between 0 and 100% on AI? (Without waiting to become an interstellar species?)


Hot off the press. The title is incendiary. I haven't read it, but I link it here FWIW:

"Silicon Valley’s Obsession With Killer Rogue AI Helps Bury Bad Behavior: Sam Bankman-Fried made effective altruism a punchline, but the do-gooding philosophy is part of a powerful tech subculture full of opportunism, money, messiah complexes—and alleged abuse." • By Ellen Huet • March 7, 2023

https://www.bloomberg.com/news/features/2023-03-07/effective-altruism-s-problems-go-beyond-sam-bankman-fried?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTY3ODIwNjY2MiwiZXhwIjoxNjc4ODExNDYyLCJhcnRpY2xlSWQiOiJSUjVBRzVUMEFGQjQwMSIsImJjb25uZWN0SWQiOiIzMDI0M0Q3NkIwMTg0QkEzOUM4MkNGMUNCMkIwNkExNiJ9.nbOjP4JQv-TuJwoXaeBYhHvcxYGk0GscyMslQFL4jfA


The AI safety people are focused only on the worst possible outcome. Granted, it is possible, but how likely is it? One should also look at the likely good outcomes. AI has the potential to make us vastly richer; even the AI developed to date has made our lives better in innumerable ways. Trying to prevent the (potentially unlikely) worst possible outcome will mean giving up all those gains.

Ideally, one would do a cost-benefit calculation. We can't do it in this case since the probabilities are unknown. However, that objection applies to all technologies at their incipient phase. That didn't stop us from exploring before and shouldn't stop us now.

Suppose Victorian England stopped Faraday from doing his experiments because electricity can be used to execute people. With the benefit of hindsight, that would be a vast civilizational loss. I fear the AI safety folks will deliver us a similar dark future if they prevail.


> A world where we try ten things like nuclear power, each of which has a 50-50 chance of going well vs. badly, is probably a world where a handful of people have died in freak accidents but everyone else lives in safety and abundance. A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead.

This is the heart of the disagreement, right here. Let's stipulate that the Kelly criterion is a decent framework for thinking about these questions. The fact remains that the output of the Kelly criterion depends crucially on the probabilities you plug into it. And Scott Aaronson, and many other knowledgeable people, simply don't agree with the probabilities that are being plugged in for AI to produce the above result.


At what point is this debate so theoretically that it has no practical, rational application?

Looking critically at homo sapiens, we tend to discover and invent things with reckless abandon and then figure out how to manage said discoveries/inventions only after we see real-world damage.

It doesn't appear to be in our makeup to be proactive about pre-managing innovations. Due to this, it seems that humanity writ large (be it America, China, North Korea, Iran, Israel, India, or whoever else leads the way) will press forward with reckless abandon per usual.

We just have to hope that AI isn't "the one" innovation that ends up wiping everything out.

It frankly seems far more likely that bioweapons (imagine COVID-19, but transmissible for a month while asymptomatic with a 99% fatality rate) have a better chance at being "the one" than AI, only because the AI concern is still theoretical while the bioweapon concern seems like it could already exist in a lab based on COVID-19 tinkering. And lab security will never be 100%.


I commented a long time ago, I think in an open thread, that Kelly dissolved the paradox of Pascal's Mugging. But I guess it didn't receive much attention, if Scott's first hearing of this is coming from Aaronson/FTX.

Expand full comment
founding

If you're comfortable with logarithms there's an intuitive proof of Kelly that I think gets to the heart of how and why it works.

First, consider a simpler scenario. You're offered a sequence of bets. The bets are never more than $100 each. Your bankroll can go negative. In the long run, how do you maximize your expected bankroll? You bet to maximize your expected bankroll at each step, by linearity of expectation. And by the law of large numbers, in the long run, this will also maximize your Xth percentile bankroll for any X.

Now let's consider the Kelly scenario. You're offered a sequence of bets. The bets are never more than 100% of your bankroll each. Your log(bankroll) can go negative. In the long run, how do you maximize your expected log(bankroll)? You bet to maximize your expected log(bankroll) at each step, by linearity of expectation. And by the law of large numbers, in the long run, this will also maximize your Xth percentile log(bankroll) for any X.

If you find the first argument intuitive, just notice that the second argument is perfectly isomorphic. And since log is monotonic, maximizing the Xth percentile of log(bankroll) also maximizes the Xth percentile of bankroll.
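To make the isomorphism concrete, here is a small simulation sketch of my own (with made-up parameters: a repeated even-money bet won 60% of the time, so the Kelly fraction is 0.2) comparing the strategy that maximizes expected bankroll with the one that maximizes expected log(bankroll):

import random

def median_final_bankroll(fraction, p_win=0.6, rounds=100, trials=10_000):
    """Median final bankroll (starting at 1.0) when staking a fixed fraction each round."""
    finals = []
    for _ in range(trials):
        bankroll = 1.0
        for _ in range(rounds):
            stake = bankroll * fraction
            bankroll += stake if random.random() < p_win else -stake
        finals.append(bankroll)
    finals.sort()
    return finals[len(finals) // 2]

print("bet everything:", median_final_bankroll(1.0))  # almost surely 0 after 100 rounds
print("bet Kelly (0.2):", median_final_bankroll(0.2)) # grows like exp(rounds * E[log growth]), roughly 7x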

Expand full comment
founding

Mostly off topic but I think it's worth mentioning that Leaded Gasoline and CFCs were invented by one guy! Thomas Midgley Jr. really was a marvel.

Expand full comment

More nuclear generation would not end global warming or provide unlimited cheap energy. It would cut power-sector emissions (and probably some in district heating) but would not reduce emissions or increase energy supply or offer alternative feedstocks elsewhere in the economy (e.g. transport, steelmaking, lots of chemicals).

More nuclear generation wouldn't necessarily reduce costs, either. Both capex and O&M for nuclear power plants are expensive. All you have to do for solar PV is plonk it in a field and wipe it off from time to time; there are no neutrons to manage.

I know this isn't the primary point of this piece, so forgive me if I'm being pedantic. Noah Smith makes similar mistakes. <3 u, Scott!!!

Expand full comment

To what extent does the Kelly bet strategy align with modern portfolio theory? I'm sure someone's looked at this.
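For what it's worth, one standard connection, offered here as a sketch rather than a full treatment: for a single risky asset modeled as geometric Brownian motion with drift mu, volatility sigma, and risk-free rate r, the growth-optimal (Kelly) allocation equals the Merton fraction for log utility, f* = (mu - r) / sigma^2; mean-variance portfolio theory yields the same form with the investor's risk-aversion coefficient in place of the log-utility value of 1. A toy calculation with made-up numbers:

# Toy calculation (made-up parameter values) of the continuous-time Kelly / Merton fraction.
mu, r, sigma = 0.08, 0.02, 0.20   # assumed drift, risk-free rate, and volatility
kelly_weight = (mu - r) / sigma ** 2
print(f"Growth-optimal weight in the risky asset: {kelly_weight:.2f}")  # 1.50, i.e. 150% (levered)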

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

The first atomic bomb test in New Mexico was another such risk, although I don't know what the risk assessment was at the time. Does it matter that whatever assessment they had could have been way off? If that risk had wiped us out (I don't know how seriously they took it then), the eventual development of nuclear power wouldn't have mattered. In hindsight, the activism against nuclear power was bad, but at the time, did anyone really know?

Expand full comment

This was a great article... so naturally I'll write about the one thing I disagree with. 😁

"If Osama bin Laden is starting a supervirus lab, and objects that you shouldn’t shut him down because “in the past, shutting down progress out of exaggerated fear of potential harm has killed far more people than the progress itself ever could”, you are permitted to respond “yes, but you are Osama bin Laden, and this is a supervirus lab.”"

I strongly disagree with this. Everybody looks like a bad guy to SOMEBODY. If your metric for whether or not somebody is allowed to do things is "You're a bad guy, so I can't allow you to have the same rights that everybody else does" then they are equally justified in saying "Well I think YOU'RE a bad guy, and that's why I can't allow you to live. Deus Vult!" Similarly, if you let other people do things that you otherwise wouldn't because "they're a good guy," then you end up with situations like FTX, which the rationalist community screwed up and should feel forever ashamed about.

Do you get it? Good and bad are completely arbitrary categories, and if you start basing people's legal right to do things on where they fit into YOUR moral compass, then you have effectively declared them second-class citizens, and they are within their rights to consider you an enemy and attempt to destroy you. After all, if you don't respect THEIR rights, then why should they respect YOURS?

Expand full comment
Mar 8, 2023·edited Mar 8, 2023

Very sound argument. And if there were even a proof of concept that a superintelligent AI was possible, even in principle -- if there was even a *natural* example of a superintelligent individual, or group, that had gone badly off the rails -- some kind of Star Trek "Space Seed" event -- then you'd have a great case.

Let me put it this way. In "Snow Crash," Neal Stephenson imagines that it is possible to design a psychological virus that can turn any one of us into a zombie who just responds to orders, and that the virus can be delivered by hearing a certain key set of apparently nonsense syllables, or by seeing certain apparently random geometric shapes. It's very scary! You just trick or compel someone to look at a certain funny pattern, and shazam! some weird primitive circuitry kicks in and you take over his mind. Stephenson even makes a half-assed history-rooted argument for the mechanism ("this explains the Tower of Babel myth!" and, for all I remember, Stonehenge, the Nazca Lines, and the Antikythera Mechanism as well).

Would it make sense to ban all psychology research, on the grounds that someone might discover, or just stumble across, this ancient psychological virus, and use it to destroy humanity? After all, it's betting the entire survival of the species. We could all be turned into zombies!

Before you said yeah that's persuasive, you'd probably first say -- wait a minute, we have absolutely no evidence that such a thing is even possible. It's just a story! You read it in a popular book.

Well, that's how it is with conscious, smart AI. It's just a story, so far. You've seen it illustrated magnificently in any number of science fiction movies. But nothing like it has ever been actually demonstrated in real life. Nobody has ever written down a plausible method for constructing it (and waving your hands and saying "well...we will feed this giant network a shit ton of data and correct it every time it doesn't act intelligent" does not qualify as a plausible method, any more than I can design a car by having monkeys play with a roomful of parts and giving them bananas every time they produce something a bit car-shaped). Nobody can even describe how such a thing would work, at the nuts-and-bolts level.

So right now, we may well be betting the entire future of humanity, but we're betting it on the nonexistence of something which does not yet exist, which no one can even describe satisfactorily (other than "much more capable than you!" Well, how? What exactly can it do better than me, and how? "Capable, I said! Much!" Er...ok). Those of us who are skeptical are unconvinced the thing you fear is possible even in principle. We feel it's like betting on red 10 when the roulette wheel is nailed to the table and the ball is already in the red 10 slot.

Expand full comment

Something I'd like to see is a consideration of the danger of AI development alongside the danger of degrowth (or even the absence of growth). Both risks have their thinkers, but I am not aware of anyone combining the two. When considering AI risks, for example, the comparison is most of the time to a baseline where AI does not take off and it's business as usual (human-driven progress and rising standards of living). However, when looking at trends, the baseline (no AI takeoff) does not seem to be business as usual, but something less pleasant (possibly far less pleasant). If you look at the AI worst-case scenario (eradication of humans, possibly of all non-AI entities in a grey goo apocalypse), it's very frightening. But if you compare it to the worst-case scenario on the other side (a tech/energy crash leading to multi-generation Malthusian anarchy or a strong dictatorship, both with a very poor average standard of living), it becomes less frightening. Sure, one is permanent and the other is only multi-generational. But as I get older, the difference between permanent and multi-generational sounds more philosophical than practical. In fact, a total apocalypse may be preferred by quite a few people to a very bad multi-generation totalitarianism or Mad Max-style survivalism. At least it has some romantic appeal, like all apocalypses...

Expand full comment

> It’s not that you should never do this. Every technology has some risk of destroying the world;

Technology is not the only thing that can destroy the world. Humanity could be destroyed by an asteroid or a supernova. And who has proved that evolution will not destroy itself? The biosphere is a complex system with all the traits of chaos; it is unpredictable over the long run. There is no reason to believe that because all previous predictions of apocalypse were wrong, there will be no apocalypse in the future.

So the risk of an apocalypse is not zero in any case, and it grows monotonically with time.

The only way to deal with it is diversification: do not place all your eggs in one basket. Therefore we need to consider a technology's potential to create opportunities to diversify our bets. AI, for example, could make it much easier to Occupy Mars, because distances in the Solar System are large. Communication suffers from high latency, so we need to move decision-making to the place where it will be applied. Travel is costly; we would need to support human life in a vacuum for years just to get there. AI could reduce the costs of asteroid mining and Mars colonization dramatically.

If we take this into consideration, how will AI affect the life expectancy of humankind?

Expand full comment

Science and technology do not have more benefits than harms. Science and technology are tools, and like all tools, they cannot do anything without a conscious actor controlling them and making value judgements about them. Therefore, they are always neutral, and their perceived harms and benefits are only a perfect reflection of the conscious actor using them.

This is a mistake made very often by the rationalist community. Science and technology can never decide the direction of culture or society; they can only increase the speed at which we get there. We decide how the tool is used or misused.

The reason incredibly powerful technology like nuclear energy and AI chills many people to the bone is that it is being developed at a time when society is not quite ready for it. The first real use of atomic energy was a weapon of mass destruction. That was our parents' and grandparents' generation! There is a major war raging in Europe, with several nuclear facilities already at risk of a major catastrophe. What would happen if the tables turned and Russia felt more threatened? Would those facilities not be a major target?

The international saber-rattling is a constant presence in the news. The state of global peace is still incredibly fragile. The consequence of a nuclear disaster is a large area of our precious living Earth becoming a barren hell for decades and centuries. Are we stable and mature enough for this type of power?

And just look at how we have used the enormous power that we received from fossil fuels. What percentage of that energy went to making us happier and healthier? Yes, we live a bit longer than two centuries ago, but most of that improvement is not due to the energy of fossil fuels.

Why would the power we receive from AI and nuclear energy be used any differently? Likewise they will have some real beautiful applications that help human beings, but mostly they will be used to make the rich richer, to make the powerful more powerful, to make our lives more "convenient" (lazy), and likewise they will disconnect us from each other and from this incredible living planet that we call home. Is putting even more power in the hands of this society really a good idea, considering our track record this past century? (major world wars, worst genocides ever, extreme environmental degradation)

There is nothing inherent to these technologies that will do these things. The technology is neutral. That is just where we are as a society.

Focusing on technology as the saviour for our ailments is a classic pattern in the human condition. You can see this same pattern on the individual level. It is just a way of externalising our problems so we don't have to look inward to see what is really going on. Classic escapism. "If only I had a different partner", "if only I lived in a better house", "if only we chose nuclear energy", "if only AI came to solve all our logistical problems". The problem is not external to us. The problem is within us. It cannot be solved with technology or rationality, those are simply the tools to execute.

The problem is the direction we are moving, not the speed.

I think we can all agree that AI and nuclear energy would feel a lot better if we hadn't had a major war in a few centuries. This can be a reality. We only need to shift our focus from the external to the internal. We know how. But why?

Expand full comment

It could make sense, in a total-utilitarian sense, to wager one entire civilization if the damage were limited to that one civilization and there were several other civilizations out there. But one paperclip maximizer could destroy all the civilizations in the universe.

The derivation of Kelly assumes you have a single bankroll, no expenses, and that wagering that bankroll is your only source of income, and it seeks to maximize the long-run growth rate of the bankroll. If Bob is a consultant with $10k/month of disposable income and he has $3k in savings, it totally makes sense for him to wager the entire $3k on the 50%-advantage coin flip. For Kelly calculations he should use something like the discounted present value of his income stream, using a pessimistic discount rate to account for the fees charged by lenders, the chance of getting fired, etc.

If we settled multiple star systems, and found a way to reliably limit the damage to one star system, then we should be much more willing to experiment with AGI.
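As a back-of-the-envelope sketch (using the comment's hypothetical Bob plus my own made-up discount rate, horizon, and reading of the coin flip as even money with p = 0.75), folding the discounted income stream into the bankroll before applying Kelly looks something like this:

# Back-of-the-envelope sketch with the comment's hypothetical Bob.
monthly_income = 10_000        # disposable income per month
savings = 3_000
monthly_discount = 0.03        # assumed: lumps together lender fees, job risk, etc.
horizon_months = 120           # assumed planning horizon

# Pessimistically discounted present value of the income stream.
pv_income = sum(monthly_income / (1 + monthly_discount) ** t
                for t in range(1, horizon_months + 1))
effective_bankroll = savings + pv_income

# Kelly stake on an even-money flip, reading "50% advantage" as p = 0.75: f* = 2p - 1 = 0.5.
stake = 0.5 * effective_bankroll
print(f"Effective bankroll: ${effective_bankroll:,.0f}; Kelly stake: ${stake:,.0f} "
      f"(far more than the $3k sitting in savings)")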

Expand full comment
founding

The problem with gain of function isn't its risk - it's the total lack of potential upside.

If you can make a temporary mental switch and see humans as chattel, some interesting perspectives emerge. Like how 100 thalidomide-like incidents would compare with having half as many cancers, or everybody living an extra 5 healthy years.

Covid was bearable, even light in terms of QALYs - but there was no expected utility to be gained by playing Russian roulette. It was just stupid loss.

AI... not so much. Last November I celebrated: we are no longer alone. We may not have companionship, but where it matters, in the getting-things-done department, we finally have non-human help. The expected upside is there, and not in a sliver of probability. I'd gladly trade 10 Covids or a nuclear war for what AI can be.

Expand full comment

One of the issues that came up in the thread was the origin of Covid, and I have a relevant question for something I am writing that people here might be able to answer. I will put the question on the most recent open thread as well, but I expect fewer people are reading it.

The number I would like and don't have is how many wet markets there are in the world with whatever features, probably selling wild animals, make the Wuhan market a candidate for the origin of Covid. If it is the only one, then Covid appearing in Wuhan from it is no odder a coincidence than Covid appearing in the same city where the WIV was researching bat viruses. If it was one of fifty or a hundred (not necessarily all in China), then the application of Bayes' Theorem implies a posterior probability for the lab leak theory much higher than whatever the prior was.
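For illustration only, since the number of comparable markets is exactly the unknown being asked about, the Bayesian update looks like this with placeholder numbers:

# Illustration only: placeholder numbers, since the count of comparable wet
# markets is the unknown in question.
prior_odds_lab_leak = 0.1 / 0.9           # assumed prior: 10% lab leak vs 90% zoonosis
n_candidate_markets = 50                  # hypothetical number of comparable markets worldwide

# Likelihood of the outbreak emerging in Wuhan under each hypothesis:
p_wuhan_given_leak = 1.0                  # the WIV is in Wuhan
p_wuhan_given_zoonosis = 1.0 / n_candidate_markets

posterior_odds = prior_odds_lab_leak * (p_wuhan_given_leak / p_wuhan_given_zoonosis)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"Posterior P(lab leak): {posterior_prob:.0%}")  # ~85% with these made-up inputs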

Expand full comment

Just once I'd like to see someone explain how, exactly, a superintelligent machine is supposed to kill everyone. An explanation that actually stands up to some scrutiny and doesn't just involve handwaving, e.g.: a super AI would think of a method we couldn't possibly think of.

Expand full comment

When the LHC was about to be turned on, a similar group of doomers started saying that it was going to destroy the world through black holes or whatever. Of course the LHC didn't destroy the world; it led to the discovery of the Higgs boson. The AI doomers are exactly like them.

Expand full comment

At the 50th anniversary meeting of the Atoms for Peace program, I remember one of the leaders asking who had killed more people, nuclear power or coal? Of the 3,000 attendees I think only 6 of us put our hands up for coal, because when you look at the numbers, coal kills dramatically more (due to air pollution from dust etc.) than nuclear power. Even when you include all the accidents, coal still comes out on top. https://ourworldindata.org/grapher/death-rates-from-energy-production-per-twh

Expand full comment

You mention gain of function research as a bet that led to catastrophe. Is there a Way More Than You Wanted to Know About The Lab Leak post coming soon? The timing couldn’t be better.

Expand full comment

Concerning FTX, it's interesting how SBF said he would gamble with civilization; from Conversations with Tyler:

"COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?

BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.

COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.

BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical."
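The Kelly-flavored objection to playing that game repeatedly is easy to quantify. A tiny sketch of my own (not from the interview): the expected value keeps growing, while the probability that anything is left collapses:

# Repeated 51/49 double-or-nothing, starting from one Earth and always betting everything.
p_win = 0.51
for n in [1, 5, 10, 20]:
    expected_value = (2 * p_win) ** n   # expected multiple of the starting stake
    survival_prob = p_win ** n          # chance we haven't lost everything yet
    print(f"{n:2d} rounds: E[value] = {expected_value:5.2f}x, P(anything left) = {survival_prob:.2e}")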

Expand full comment

I don't know the history particularly well, but is it possible that the anti-nuclear folks were totally right and the mistake they made was not to update later? I think we'll update later, so I'm happy to be against this kind of thing to start.

Expand full comment