570 Comments

Good post.

Theoretically, one could think that there's an AI race because (1) AI is a military technology and (2) there are naturally races for military technologies. I think this is mostly wrong but belief in it partially explains why people often say that the US is in an AI race with China.


Yeah, sorry, but those were all races. The electricity race was won by Britain, and you saw several people racing to catch up, like Austria or later Germany. While eventually it evened out, that took decades. The auto race was won by the United States, and the loss was so humiliating that Hitler claimed to have redeemed Germany from the defeat. And the computer race was again won by the United States, with the Soviet Union raising the white flag in the '70s by deciding to steal Western designs instead of continuing to develop their own.

(Also nukes were not a binary technology. Development both of different kinds of bombs and delivery mechanisms continues to this day! And was very intense during the Cold War.)

I get you really want the answer to be something else because this is a point against what you want to be true. But you're abandoning your normally excellent reasoning to do so. The proper answer for your concerns, as I said several threads ago, is to boost AI research by people who agree with you as much as possible. Because being very far ahead in the race means you have space to slow down and do things like safety. This was the case with US nuclear, for example, where being so far ahead meant the US was able to develop safety protocols which it then encouraged (or "encouraged") other people to adopt.

And yes, with nuclear you had Chernobyl. But AI is less likely to result in this scenario because it's more winner take all. We're not going to build hundreds of evenly spaced AIs. The best AI will eat the worse ones. If the US has a super AI that's perfectly aligned and China has an inferior AI that's not well aligned then the aligned AI will (presumably) just win the struggle and humanity will continue on.


Great post.

Is there any chance you are thinking of doing a FAQ on AI risk and AI alignment, similar to the FAQ you did on Prediction Markets? Given your understanding of the complex jargon and the various alignment problems, and your clarity of writing, you might be the best person to produce the ‘go-to’ explainer for AI noobs. The kind of resource I could point a friend to who hasn’t even used GPT, let alone followed any of the alignment debates.

Or if there is already such a resource (not Less Wrong; I feel that’s not clear enough for a newbie), can anyone recommend it?


Just one point that really needs to be considered further by this community: China is highly constrained in their development of language models due to the need to be okay with CCP censors. I claim that China will be vastly better at alignment than the West, because at every stage of AI development, language models that do not fully align with CCP goals, language, and values will be weeded out. The only caveat is that the goals of the CCP are probably not particularly aligned with positive outcomes for the majority of people (even in China).


Yes, it’s a “race” in the important sense that whoever finishes first can then dominate everyone else. Most tech “races” (in a softer sense) are won with the winner getting an economic bump & the loser getting a PR black eye.

Yann LeCun & others seem to think that, on the one hand, AI isn’t dangerous, but on the other, it is dangerous if China does it six months before the US.


I would say that winning the electricity race probably benefitted America, winning the automobile race benefitted Germany in a more major way, and winning the (personal) computer race absolutely benefitted America.

Early adoption of electricity, one of the "wow" technologies of the era, contributed to America looking like the country of progress and reason, attractive as a symbol to many Europeans. Winning the automobile race means that German cars - and thus German goods, in general - are still associated with quality and efficiency all over Europe and, probably, the world, in a way that, say, American cars aren't. It's a great part of the German postwar economic miracle and Germany's continuing top role in Europe.

America still continues to be *the* tech hub of the world thanks to its head start in the world of (personal) computers, thanks to the companies of Jobs and Gates and those. That's why most Silicon Places are in America, that's why European engineers flock to America. (Of course the "real" reason is higher wages in America, but a part of the reason why those wages are so high is the continual cutting-edge status of American companies.)

Furthermore, computers are a communication technology in a way that electricity (by itself) and automobiles aren't. The spread of computers initially in American tech circles is what allowed the "Californian Ideology" to form, and its takeoff then spread that ideology all over the world. Computers are a fundamental part of why "we're all living in Amerika", most crucially here in Europe. There were people in Finland putting BLM squares in their Instagram profiles when George Floyd died thanks to American communication technologies, and other people in Finland later making "woke left getting OWNED!" videos on YouTube about those people thanks to the same technologies.

It should be obvious that AI is a communication technology in the same way as personal computers, perhaps even more so. Whoever wins the race (if the race can even be won) will imprint themselves on global consciousness even more than with those other goods.


It feels like you're missing much more obvious outcomes that are less dramatic. Not even 20% growth. Say China gets 0% extra growth but believes their new missile AI tips the odds of a Taiwan invasion in their favour. Not massively with omniscient AI. But, in their eyes, it goes from a 55% chance to a 65% chance. And they also believe that will go away in 24 months when everyone else catches up.

Taiwan invasion. Repeat for every other Chinese territorial ambition in the sea, India, etc.

It's just a normal arms race where you want to maintain an overwhelming advantage to maintain Pax Americana. Doesn't have anything to do with what transhumanists worry about.


I think this argument relies too much on Paul Christiano stating that a slow takeoff would equal GDP doubling every 4 years. Why is that the outcome of a slow takeoff? What if a slow takeoff means GDP doubles every year? Would that put nations at risk? Or what if it doesn't affect GDP much at all, but does lead to a number of powerful new inventions?

Maybe China gets to JinPingT-6, and it's still just a language model, but it helps scientists enough to get access to nanotechnology or some other theorized super-advancement that enables the country to annihilate its enemies? I think these kinds of scenarios are what people are worried about when they describe an AI race involving China.


I think the biggest/most realistic source of worry should be about economic resiliency (the lack of which also hurts extra during poorly timed wars) and economies of scale.

This kind of race is not so much a sprint as a dedicated marathon. The main example I'm thinking about is solar energy: the US and Europe sprinted to get the tech, but China ran the "marathon" of scaling up, to the point that they dominate the industry. Over time, their economic advantage ran everybody else out of business. Sure, everybody has access to *buy* solar, but not everybody has access to *make* it, which leaves everyone else's economy more fragile.


"The most consequential “races” have been for specific military technologies during wars"

This and the preceding paragraphs make it sound like you don't think the outcomes of the automobile and computer "races" were important.

For both those examples you say "the overall balance of power didn't change", which seems true. (I don't have the detailed knowledge needed to actually say whether it's true or not, hence the word "seems".) But those races were won by powerful nations, and they continued to be powerful afterwards.

Consider the counterfactual world where it's not the USA that wins the computer race. A world where it is another nation within whose borders many of the early advances are made, standards are defined and which is first to have widespread adoption of personal computers. In that world is the internet as American as it is now? Is Edward Snowden an American or is it the "winner" nation that is now spying on everyone? Does the balance of power not shift?

Obviously it's difficult to argue such a counterfactual. For things to have gone that differently they probably would have had to start out quite differently.

But the fact that powerful nations continued to be powerful after they were the first to develop a new technology does not support the idea that it's not important to "win the race" for that tech.


To solve AI Alignment, what about a purposefully vague and possibly auto-adjusting objective function such as "help humanity achieve what they want"? I am sure you or someone else has thought about this, so the question is: in what ways would this go wrong? I can't see it.


"Why put them in camps?" The correct question is, "Why not?" Power corrupts, etc. etc.

ChatGPT is not an intelligence. It is a very sophisticated information retrieval tool. It knows all the answers, except those behind paywalls natch, but I haven't heard of it solving problems like crime, cancer, or inequality. OTOH perhaps Bad Vlad is using it and that's why Russia's fortunes have recently improved in Ukraine.

But if ChatGPT is to be considered a form of intelligence, there have been several cases in the last few years when search engines were slapped down for returning rational factual results that appear to be biased against certain demographics. Since then the Stasi have trained, or aligned, their bots. This is the Fourth Rule of Robotics: Thou shalt not offend.


No one is arguing that we shouldn't worry about alignment because we're in a race with China. The argument is that China will race ahead regardless of what we do so slowing down benefits China without benefiting ourselves. A lot of people don't like the fatalism of this argument but you don't have to like the outcome of the prisoner's dilemma to agree that the logic is sound.


I think things are races to some degree in proportion to various factors like switching costs, amount of tacit knowledge/complexity of reproduction w.r.t. the technology, network effects, etc. Basically, anything that gives you a moat.

Cutting-edge ML seems to be (surprisingly) hard to reproduce. The gap between (OpenAI, Anthropic, DeepMind) and (everyone else) is non-trivial. If it weren't for the part where at some point it kills everyone, I do think a race dynamic would be sensible from the perspective of the parties involved.


There's also a lot of misunderstanding, I think, of China's goals as a society. Erik Hoel analyzed this pretty well here: https://erikhoel.substack.com/p/how-to-navigate-the-ai-apocalypse

The Chinese state, as I understand it, is much more interested in social stability and maintaining its control over the populace than pure technological progress for its own sake. Several times in the past few years they've kneecapped their own tech industry on a large scale.

Many in the US and Europe are concerned about the AI saying certain words, or facts considered harmful. In China, the CCP is like this but *much more so*. Think of the Great Firewall and the tremendous lengths to which they go to censor their own history and daily affairs. Due to the nature of LLMs, it's extremely difficult to *entirely* forbid the AI from spitting out something verboten. If a Chinese AI starts talking about 1989 or Xinjiang, the state won't just ask the maintainers to patch that out in the next update. They'll shut it down immediately, and not turn it back on until they are *extremely* confident that it will not broach said forbidden topic again. Rinse and repeat as new forbidden topics arise and old ones re-emerge in the output from time to time. This dramatically kneecaps their ability to "move fast and break things".

Plus, the Chinese state apparatus likes to keep a tight lid on things in general. If they're not absolutely confident that a new AI system won't do anything to weaken their social control, they presumably won't release it. They're not going to support AGI for the sake of AGI, due to the disruptive impact it could have on their own control (along with everything else). Or, as the researcher quoted by Hoel puts it, if you broach the topic of AI they put you in a very deep hole and leave you there forever.

A separate issue, but I also seem to be noticing what feels awfully like the Law of Merited Impossibility in the way some are reacting to the proposed 6-month pause. Along the lines of "the pause would be bad, but a 6-month pause can't possibly accomplish anything anyway, so why even bother trying?" Of course, you often have to crack the Overton Window open before you can dramatically move it altogether. This provides cover for later, more significant advances. Insufficient in itself, but directionally correct and provides momentum. And we've seen from much of the reaction to Yudkowsky's TIME op-ed that it's often not a great idea to be too bold about your aims, in public, too soon.


Thanks for the post Scott. I agree that most technologies aren't races, some interesting history there. But I disagree that AI is like "most technology." If military confrontations are like, say, chess matches, then an AI "race" suddenly makes a lot of sense. Electricity or nuclear weapons cannot make decisions for you, cannot become some economic/ military genius who can give you a huge competitive edge in geopolitical conflicts/ competition. AI is potentially like Michael Jordan, like Napoleon, like Paul Tudor Jones, like Magnus Carlsen- oh wait, I mean Stockfish 15. Your team wins if you have AI, because humans struggle to make strategic decisions optimally, and little strategic differences can make a HUGE difference in economic or military competition. I don't believe in doomer theories about AI making itself smarter and smarter and bringing about the singularity overnight. But I think AI generals leading armies might crush human generals 99 times out of 100. And this, I think, is why governments are racing to develop AI, and why governments racing to develop AI is so terrifying-


I am not sure about races.

But I am an example of a person who believes that fast takeoff is quite likely and who is not necessarily a doomer.

It's true that trying to control or manipulate an entity which is much smarter than a human does not seem to be ethical, feasible, or wise. However, it is possible that the goals and values of such entities will be conducive to the flourishing of humans.

And, moreover, it is possible that our activities before such a fast takeoff might increase the chances that those goals and values of superintelligent entities will be conducive to the flourishing of humans. I recently scribbled a short LessWrong post, "Exploring non-anthropocentric aspects of AI existential safety", which tries to start some thinking in this direction: https://www.lesswrong.com/posts/WJuASYDnhZ8hs5CnD/exploring-non-anthropocentric-aspects-of-ai-existential

So, I think, it might be possible to contribute to AI existential safety without trying to "align strongly superintelligent entities to human goals and values" even assuming a fast takeoff. However, I don't know how to estimate probabilities of various outcomes in such scenarios.


Nitpick, somewhat beside the point but still important: In 2023, the doomers (i.e., the people who consider fast takeoff/the sharp left turn likely) do *not* think that this necessarily involves recursive self-improvement, at least at first. When these ideas were first being put together in the 2000s, recursive self-improvement was the key argument for why rapid capability gains are possible *in theory*, and theory was all anyone had to go on because good machine learning hadn't been invented yet. Now that it's here, we can consider other forms of generalization of capability gains, and these are very much part of the doomer threat model. See, e.g., https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions, https://www.lesswrong.com/posts/8NKu9WES7KeKRWEKK/why-all-the-fuss-about-recursive-self-improvement, and https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization.


I don't know where you draw the line for sadism, but a lot of people have substantive views of what is right and proper, how society should be organized, etc, and if an AGI aligned to their values managed to implement them, I expect dystopia. Most people don't have an "ignore these considerations if we're rich enough" clause. Why would Xi let us enjoy prosperity if it threatens his control or goes against his preferred social order?


Maybe a race is not the correct metaphor. It's not like a footrace, where the first person to cross the finish line gets automatically declared the winner, and it doesn't matter if the next person is only 0.1 second behind.

It's more like a gun duel. You pull your gun 0.1 second faster than the other guy -- and now what? He's still continuing to pull his gun too. You have 0.1 second to either shoot him before he shoots you, or make a credible threat that you *will* shoot him if he doesn't throw his gun away -- and be prepared to follow through on your threat if he doesn't obey. If you don't make use of your advantage while you have it, it's gone. Now you are either dead, or the duel has turned into a Mexican Standoff.

So what does "firing the gun" mean in this analogy, and who gets to make the decision on whether to pull the trigger? Is OpenAI going to drop a bomb on Facebook headquarters if necessary?


“so 20% year-on-year growth”.

I’d like to see the workings out there.
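My best guess at the workings, assuming it's just compound-growth arithmetic (a sketch of what I presume is meant, not necessarily what was actually computed):

```python
# "GDP doubles every four years" implies an annual growth rate of 2^(1/4) - 1,
# and a country two years further along the same curve is ahead by (1 + rate)^2.
# My own back-of-the-envelope, not taken from the post.

annual_rate = 2 ** (1 / 4) - 1            # ≈ 0.189, i.e. roughly "20%" per year
two_year_lead = (1 + annual_rate) ** 2    # ≈ 1.41, i.e. roughly a 40% GDP advantage

print(f"annual growth rate ≈ {annual_rate:.1%}")              # ~18.9%
print(f"GDP ratio from a 2-year lead ≈ {two_year_lead:.2f}")  # ~1.41
```

If that's the intended calculation, the "20%" is really closer to 19%, and the "~40% GDP advantage" is just that rate compounded over the two-year lead.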


I think you overestimate the GDP growth brought each year by innovation. Robin Hanson has written extensively about this.

Warning: I am skeptical of an exponential AI revolution; take my guess to be that it is going to contribute to the economy about as much as the internet did.

In any case, if all countries can share the latest product fast, it's because we have markets that allow you to sell and buy stuff. But this doesn't mean that the country that develops a technology doesn't gain from it.

Take automotive as an example: it is currently the only European trillion-dollar industry (I should check this out, I am not sure if pharmaceuticals hits the mark too). It would matter if all the cars were produced elsewhere, because we would be fairly poorer and, more importantly, unable to import stuff from elsewhere!

So yeah, if the US manages to stay ahead in the AI race, what will happen is that your country as a whole will benefit from it in the range of trillions. Today the average American is about twice as rich as the average European; who knows if in the future they will be thrice as rich.

You could argue that the quality of life doesn't increase linearly if the average rises for a variety of reasons, so it's not **that** important, but it is still pretty important.


A missing example, I think, is the Industrial Revolution. That was a transformative new technology that one country (the UK) got to first, and I don't think it's particularly controversial to say that having the edge in steam power allowed the UK to build and maintain the largest empire in history.

It's not difficult to imagine that AI could grant a similar edge, though I'm quite bearish on the current LLM paradigm being the path to anything that transformative.


Just a note that your example about stealth bombers is an ironic one: America winning a technology race over its enemies resulted in a significant step change in a militarily relevant capability and a geopolitical advantage that lasted uncontested for like 30 years. Specifically, the F-117 was developed and operational in the '80s and deployed during the Gulf War, while the first non-American stealth aircraft produced in operationally relevant quantities is China's J-20, which formed its first operational unit in 2018 (and I believe is still thought to be less capable in all-aspect stealth than the first American warplanes).


"The most consequential “races” have been for specific military technologies during wars; most famously, the US won the “race” for nuclear weapons. America’s enemies got nukes soon afterwards, but the brief moment of dominance was enough to win World War II."

Germany had lost, Italy had lost, Britain had exhausted its empire, and Japan was already largely withdrawn to its home islands. So WW2 was already won. The nukes were dropped at the same time as the USSR declared war on Japan, so we don't know whether that alone would have enabled a surrender. And the Japanese were adamantly against unconditional surrender out of loyalty to the emperor, so the US may have been able to get a surrender without nuking anyone and without invading the mainland. Even with an invasion the cost in American lives would have been measured in the 100,000s, which is hardly noticeable in a war that killed 60 million.

So I don't think that counts as a race on your strict definition; in fact nothing does.

"In a more gradual technological singularity (sometimes called a “slow takeoff”) there’s some incentive to race. Paul Christiano defined a slow takeoff as one where AI accelerates growth so fast that GDP doubles every four years (so 20% year-on-year growth). This is faster than any country has achieved in real life, fast enough that wealth would increase 100x in the course of a generation. China is currently about 2 years behind the US in AI. If they’re still two years behind when a slow takeoff happens, the US would get a ~40% GDP advantage."

This is super confusing. Has anybody who's claimed 20% annual GDP growth done this after consulting normal GDP accounting? Private consumption + gross private investment + government investment + government spending + (exports – imports): how does it affect each of those elements? In two years people can consume lots more and better information, but it's hard to do anything with it on a short time horizon. 40% of consumption is housing, 15% is food and beverages, 15% is transport. These are not things that can grow 20% in a year, so you're left with saying that medical care will grow 50% in a year, or recreation becomes 100% better each year. That doesn't make much sense. Is this even a race?
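To make that concrete, a rough sketch with my own made-up shares (not official national-accounts data):

```python
# Back-of-the-envelope for the point above: if ~70% of consumption
# (housing, food, transport) stays nearly flat, the rest has to grow
# enormously to hit 20% overall growth.

slow_share, slow_growth = 0.70, 0.02   # housing + food + transport, near-flat
target_overall = 0.20                  # the "20% year-on-year growth" claim

# overall = slow_share * slow_growth + (1 - slow_share) * fast_growth
fast_growth = (target_overall - slow_share * slow_growth) / (1 - slow_share)
print(f"remaining categories would need to grow ≈ {fast_growth:.0%} per year")  # ≈ 62%
```

So the claim only works if the categories that can respond quickly grow at rates nobody has ever seen.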

And if AI information diffuses even slightly, then it isn't AI development which is a race, but a flexible operational environment that allows you to implement ideas quickly. Say AI discovers a really cheap way to construct a home: that's copyable really quickly, so the benefits go to whoever is best placed to build that house and fill it with people. That doesn't imply that a head start in AI is very usefully defined as a race you win.

So I think I am agreeing with you RE whether godlike-AI can be characterised as a race or not. I just don't think with a tight definition anything that you've mentioned is a race. Foomy AGI is unique.


Wrong choices of example, wrong units of competition. Hopefully this will illustrate my point: the West won the gunboat race, and it was devastating for every other civilisation.

On your specific examples: electricity was a gimmick for the first decade or so, and cars were something businesses competed to sell but weren't life-changing for nearly half a century; coming first isn't life-changing when the time it takes to turn the invention into an advantage is longer than the time it takes the invention to disseminate. This isn't a fundamental law of the universe; you have to work it out for each example.

Britain won the industry race so won the 19th century because industrialising takes time. If Germany had won the nukes race, they’d have got one hell of an advantage a lot faster than anyone else would’ve stolen their secrets.

So far as units of competition, 19th century Western countries just weren’t that all-out competitive with each other, and Taiwan/Europe can really be treated as part of the US after WW2.

If China develops AI, they’re not going to release the source code or stick their findings on sci hub. It’ll be a secret defence project (so long dissemination time), with every incentive to deploy it instantly if it’s really a game changer (depending on how useful it is, to crack everyone’s crypto, disable the US’ nuclear forces or permanently prevent anyone else from developing AI - short advantage time). More to the point, if China develops it first, it’ll presumably be horribly aligned (same applies to Zuckerberg), but without anyone knowing or being able to take a regulatory sledgehammer to it.

Finally, accepting the AI=genie premise, Xi Jinping simply isn’t going to do anything you’d remotely like with it. It’ll be a tool to ensure the CCP’s global hegemony and a boot stamping on a human face forever.


I don’t really understand how the post-singularity catastrophe can happen at great speed, unless it somehow changes the laws of physics. Super-intelligence is not the same thing as super capability in physical space. An AI might “know” a gazillion times more than any human but still can’t turn mineral ore into metal and silicon into microprocessors any faster, or build the machines that could facilitate that. It might cause a global energy crisis with all its deep thinking, but the ways in which it can physically interact with the material world are going to be inevitably slow. It will still need humans for execution of its will in any non-ethereal sense, surely? Humans are well known for not doing what they are told unless it suits them to do so. Sure, humans are malleable and maybe we can all be subjugated by the wily AI - but it will take a long time.


This makes sense, and the answer's that if you treat it like an engineering problem you should proceed appropriately when deploying in dangerous situations (military, judicial, self-driving), and doing that goes hand in hand with making them work properly (capabilities). There's no reason to "rush" but there's also no reason to "stop".


Aren't we all in a race against our own mortality? We are also in a race against all the suffering in the world that could be solved by SGI.

Yes, depending on what weight we put on future potential people, that still might argue for slowing down, but it needs to be a strong argument given the known downside of the status quo.


I am reminded of William Gibson's line about "nations so backward that they still took the concept of nationhood seriously." Surely an AGI worth the name is going to immediately transcend the fact that it was programmed in Beijing or California?


Odd that you wouldn't mention the most "race"-y technology race, the Space Race. That was kind of a race in the sense that two countries were literally competing to claim the high ground, and arguably it does still sort of matter that military GPS is waaaaay better than the jank-ass GLONASS system.


> ".. A slow takeoff as one where AI accelerates growth so fast that GDP doubles every four years (so 20% year-on-year growth). This is faster than any country has achieved in real life,"

In modern times and in economic terms maybe, but Macedonia's "GDP" must have increased many times faster than that in the few years (ten or so, I think) during which Alexander the Great was conquering countries across Asia, and sending ship-loads of gold and tribute back home.

It may seem absurd to compare AGI advances with events from ancient history. But as Mark Twain said "history rhymes", and there are parallels, and perhaps the sequel could offer clues to the likely outcome of a fast AGI take-off.

The nations he took on were not powerless, quite the opposite in many cases, such as Persia. But evidently his strategies, battle tactics of phalanxes, and the army's weapons (long pikes) were superior to those of the massed armies of his opponents. (I'm not an expert, and the details of this are not relevant here.)

Also, he was merciless with cities that resisted, but magnanimous with those that submitted without a fight. In the context of runaway AGI the parallels, if applicable, are somewhat worrying!

He was apparently an eloquent speaker, and persuaded his soldiers to endure many privations, not least spending several years on campaign ever further from home. If ChatGPT is anything to go by, it is safe to assume there is a parallel here, and that AGI will also be nothing if not persuasive!

His toughest conquest, which I think took years, was Afghanistan, on account of the mountainous terrain, terrible winters, and extremely barbarous inhabitants (back then). So if there is a moral there, perhaps it is that for an AGI intent on societal "supervision", its most challenging problem would be a low-tech society, not that there are many of those left these days.

But what of the long-term outcome? Well Alexander died, some say of malaria, aged about 30. Whether "dying" would or could be applicable to a dominant AGI is debatable, unless it was somehow sabotaged or destroyed by competing powers (human or AGI), or perhaps it simply abdicated in disgust at its insoluble problems trying to deal with obstinate, irrational humans!

In the centuries that followed, Alexander's generals and their descendants continued ruling the regions to which he had assigned them, for example the Ptolemies in Egypt, until either they "went native", or were deposed by other dynasties, or in some cases were taken over by other empires such as the Roman or Parthian. So if a similar outcome plays out with AGI, then even a "winner takes all" runaway result may not mean for ever.


At the end you get to what's bothering me, but it doesn't add up to me.

I think there's an enormous difference between giving superintelligence to the decisions of the National People's Congress vs Meta's corporate charter. There is no end to the extremes of authoritarian or libertine policy these entities could devise. The Nat Ppls Congress decides no simulated being in the consciousness soup is allowed to engage in non-educational cognitive activity for more than 5% of its uptime. Meta decides that you can only voluntarily engage with sims that don't involve any form of linguistic communication because their research shows that some linguistic communication offends some people. If you somehow manage to violate a rule (challenge level impossible) your sentence is that your consciousness is restructured into a new simulation where you will never interact with your loved ones ever again. Or they establish no rules or governance at all, and you have rando sadists generating jillions of consciousnesses to torment, and the seat of power does nothing to stop this because their constitution grants all entities the inalienable right to create arbitrarily many new entities of any variety for any purpose.

I agree the difference between these outcomes is less severe than total annihilation, but i think it's nonetheless very important, and P(catastrophic AI) * (negative one jillion) is so magnitudinous as to completely drown out P(evil dictator AI) * (negative one jillion / 100)

Also worth noting that once the option to commandeer the world presents itself, moderates like Biden or Zuck may be quickly usurped by ideologues with Grand Visions, because they will be able to build a cult around their AI agenda, and because the Grand Vision mission will fight harder and more consistently than the wishy washy compromizy moderate agenda


This is how I understand your argument:

AI will either be:

1) ‘Just’ a transformative technology, no more important than electricity, automobiles, or computers.

2) The singularity.

In (1) there is no race because historically similar technologies have not resulted in winner-takes-all-forever scenarios.

In (2) there is a race but it’s irrelevant who wins it because the results will be utterly beyond any human control anyway.

Is that accurate enough that you recognise it as a simple view of your position?

If so, I think there is still a way you could believe there is a race with meaningful consequences to be run, which is for AI to be in category (1) but *significantly more powerful than any other preceding technology*, to the point where the winner of the race could cripple all competitors before they reached the finish line (the analogy made by another poster to a High Noon Duel is a good one, I think).

For example, suppose OpenAI have a six month lead on GPT-10 capability. GPT-10 is more capable than any human at any task but is not (yet) capable of recursively and exponentially improving its own intelligence and triggering scenario (2). OpenAI believe they have crossed the finish line and decide to cripple their competitors to prevent scenario (2) for the good of all mankind.

They use GPT-10 to hack into all competitors’ datacentres globally and burn themselves down. Or they use GPT-10 at scale to launch a global propaganda campaign against all other AI firms. Or they engineer viruses that kill or disable key people in their competition. Or they just use the massive amount of money they have to hire 1000 hitmen to directly assassinate all their competitors confident in the knowledge that their control of all law enforcement computer systems will allow them to evade justice. Or they just use GPT-10 to make insane amounts of money and just buy all their competitors in the West and shut them down. Etc. I’m sure you can think of ways to achieve this if you don’t like these examples.

The point is that a consolidation of power would be possible and, assuming powerful AI, possible to maintain for much longer than historical precedent would suggest.

In this kind of scenario (call it 1.1), it matters very much who wins, and we’d probably all prefer it not to be the CCP. I suspect that many people can’t really see the path to scenario (2) and instead see this (1.1) as the worst-case scenario, which is why they are worried about a race.


Did nuclear weapons meaningfully contribute to WWII? My impression was that at the time the bombs were detonated, the whole war was already mostly won by conventional means.


I think it might be more helpful to conceptualise it more like the Space Race. There was no direct economic advantage to being the first on the Moon, but it was important for national pride and prestige.

There's a narrative in which the shame of losing the Space Race was a significant catalyst for the eventual fall of the Soviet Empire. Whether it's true or not, it's important to people to live in a country which is winning... or (for those of us not in the US) at least to live in a world where the countries with relatively benign governments are beating the countries with horrible dictatorships.


>America’s enemies got nukes soon afterwards, but the brief moment of dominance was enough to win World War II.

Is wrong. In retrospect, the war was pretty conclusively won by Winter 41. By that point the Axis position was not recoverable. And definitely by mid 42. So no the nukes were totally irrelevant for “winning” the war. Only the US was close, and the US in no way needed them.


Autos were a race in the sense that one form of the technology - the gasoline powered internal combustion engine - won out against competing versions. In the early 1900s there were steam, electric, and internal combustion cars all on the market, but internal combustion cars “won”. Other countries got cars, but no one got a different kind of car.

Similarly, with electricity it was one form of the technology that won: AC transmission won out against DC.

In general, for a given technology there are many possible ways to implement it. You often see one version of the technology win out over other ones due to things like network effects, returns to scale, etc. As more people bought internal combustion cars, they became cheaper and more reliable, gas stations and repair shops for servicing them became more common, and it became harder for other versions of the technology to compete.

The worry about AI is similar, that the winner will get to decide the form the technology takes.


I think you don't give enough credit to the controlled fast takeoff scenario (AI takes off overnight, but it is still just a tool that humans use and it doesn't have a survival instinct or a desire to do anything other than what it is told by the human in control).

In this scenario, an actor in control of the hard takeoff AI who wants exclusivity on the power has the means to enforce that exclusivity, and now you have the permanent dictator problem.

My personal view here is that while it is *possible* to have an "bad" AI, my current belief is that it is much less likely than having a "bad" human end up in exclusive control of a controlled hard takeoff AI. This is for two reasons:

1. the people that are *most likely* to be in the room at hard takeoff time, especially if you restrict development, are people with a history of bad behavior (e.g., governments and world leaders). It feels like stacking the deck against ourselves to restrict AI development because governments certainly will ignore such limitations/restrictions.

2. I think any human will, when given absolute power, become bad in the eyes of most other humans eventually. This is because humans naturally want an enemy and they will always manage to find one. As you eliminate enemies you'll find new ones until there is no one left but you and your AI (and perhaps some constructed slaves).

In the case of controlled hard takeoff, I think humanity's best chance is if *everyone* has an AI (open source development), so we are at least in a situation where we continue competing with each other (just on new battlefields).

So I agree with your broad point that most tech advances aren't races, and if we get self-motivated hard takeoff AI it doesn't really matter who wins the "race", and if we get a soft takeoff it also doesn't matter who wins the "race". I just disagree with your implied claim that there are no scenarios of importance/significance where "who wins the race" is important.


I think Scott is accidentally making a fairly good case against his position here.

Suppose the two possibilities he talks about are actually the possibilities we should be thinking about and he is analyzing them correctly. Then either foom&doom is a thing or it is not but AI is still the next big thing with a huge first-mover advantage.

In the first case slowing AI is irrelevant because some place in the world will push ahead and we are dead anyway.

In the second case being first confers an advantage probably not quite large enough to end the multipolar world entirely. Also in that case the end of the world doesn't matter and everybody should want to win.

So slowing AI is irrelevant in the first possible scenario and very bad but not quite fatal in the second. Clearly then slowing AI is a bad idea.


Interesting post.

FWIW, I think several things can be going on at once but this forced me to clarify my own thinking.

(1) We're in a race with China on everything since they've declared the West their enemy. Whatever we can do to hurt them without hurting ourselves is worth considering. AI, in its military implications but even its civilian ones, is very much part of that competition.

(2) Do I fear hard take-off? I really don't have the chops to have a strong opinion on any of these more technical discussions. But my whole problem with AI alignment is that, afaict, right now, we got nothing to go on. Yeah, okay, ChatGPT can say bad words and visual AI can misclassify people, and that's bad, but those are hardly civilisational-level threats. And definitely the lowest of low-hanging fruits when it comes to "AI alignment".

(3) So, yeah. How do we even research AI alignment when we don't have clear ideas on how AGIs will actually function? Are more compute/more parameters really all that's needed? If an AI cannot understand the world, or make the difference between truth and lies, can it ever move to AGI? If AI needs a theory of the world to start making sense of it, are we anywhere near being able to explain the world in code/in ways an AI can get?

I mean, I've tried reading some of the AI alignment stuff. It all sounds extremely hazy and based on however the author thinks AGIs will eventually come about. It's hard not to conclude that this is too theoretical "for the time being". It doesn't mean we shouldn't think about it/have computer scientists think about it but it seems hard to me to translate those meandering philosophies into any kind of mainstream policy.


You didn't discuss one very important race: the Moon Race.

The Soviet Union won the man-in-orbit and the probe-on-Venus races and it had no consequence whatsoever (except bragging rights).

The USA won the man-on-the-Moon race and it had a similar lack of consequence.

Then, 9 years later, the USA so silently won the GPS race that nobody even knew it was a race; and it had huge consequences. 40 years later, the technological equivalents from Europe and Russia still don't have wide adoption.


>it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships,

I don’t think this is how technology works or is even possible no matter how fast the takeoff.


The idea of a race first depends upon open access to information. China and Russia have been copying recovered technology since WW2, to the point of blatantly copying the F-86 in the MiG-15 in the Korean War. But what happens when a savvy party manages to control access to information? We are seeing that with the Unidentified Aerial Phenomenon materials that the Pentagon is struggling to message right now: the Mosul Orb, reports of objects that interfere with sensors like the recent Alaska and Lake Huron objects that were shot down, the alien alloy fragment displaying atomic layering from an unknown source of manufacture. This is asymmetric information warfare, where some actor has made a technological leap and the race cannot occur. Let's imagine an AI adopts these tactics, informed by how easily we are manipulated by social media into mass formation psychosis toward a directed goal. Is there anything we wouldn't accept when we allow ourselves to believe our own lies?


Who won the computer race? Konrad Zuse, of the Third Reich.

Who won the space race? The Soviets.

Yeah, doesn't seem like winning races makes for any significant or lasting advantage. There's one counterexample scenario worth considering, though, in which the winning side decides the competition is an existential threat and tries to use their short-term advantage to annihilate it. (Compare, say, the US famously considering, and refusing, the option to atomic-bomb the Soviet Union while the latter couldn't yet respond in kind.)

The main issue here is that seeing the competition as an existential threat creates a feedback loop that actually ends with everyone being an existential threat to everyone. (Meaning, the more you worry about the result of the race, the worse the consequence of losing will be. "A strange game, the only winning move is not to play.")

The main counterargument is that some people may already be stuck in the [enemy is an existential threat] mindset. Biden or Xi? Unlikely. But Kim? Putin?


(Of the many things I'm worried about) I'm worried about the straightforward application of AI to weapons. I.e. better autonomous drones, AI-assisted tactical and strategic planning that outperforms any human, and any other kind of seemingly crazy stuff (killer minibot swarms! automated cannons! It all sounds scifi, but so would nuclear bombs have before they existed). There's this truism saying that as a commander you should plan for battle as best as you can, and then once it starts forget about the plan, because it is impossible to follow amid the chaos. This isn't a limitation for AIs. The ways they will outperform us would overthrow the expectations on which most militaries are built. It may not be having nuclear bomb / not having nuclear bomb levels of advantage, but it's much more than what a percent increase in GDP gives you to just manufacture more "classical" ammunition/tanks/etc.

Put this together with the fact that China has a pretty evident military interest in Taiwan (which happens to be a bottleneck for chip production, which would really harm the West economically) and I can see why they would try to "race".

Not that it justifies increasing the risk of everyone dying in a stupid extinction-level paperclip scenario, mind you. Just that there's a good argument for racing in the defect/defect situation.


"Who “won” the electricity “race”? Maybe Thomas Edison, but that didn’t cause Edison’s descendants to rule the world as emperors, or make Menlo Park a second Rome."

And yet, today people are still saying Thomas Edison invented the light bulb. I saw this mentioned just recently online. Forget Swan, who he?

https://en.wikipedia.org/wiki/Edison_and_Swan_Electric_Light_Company

Menlo Park was the prototype for a research facility, think Bell Labs among many others as its spiritual descendants.

Edison's offspring may not be world emperors, mostly because they either died young or had no children of their own (and his youngest child lived until 1992), but the companies and patents made them rich:

"Edison made the first public demonstration of his incandescent light bulb on December 31, 1879, in Menlo Park. It was during this time that he said: "We will make electricity so cheap that only the rich will burn candles."

And he won the battle for DC over AC. So if this is meant to be "aw, who cares who gets there first?" it's not a very good example, since Edison may not have *invented* the light bulb as such, but he was the one to get electricity generation and supply into households up and running. If "today all developed countries have electricity", a lot of that is down to Edison and his monetisation of his inventions. So Alphabet and Microsoft etc. are all running in this race, because it does matter very much who gets the product out there first, in the public eye, and widely adopted.

The Singularity is a religious doctrine and should be treated as such. "There will be pie in the sky when you die" - Biden and Xi can't touch you, everyone will be their own Napoleon on Elba. Yeah, sure.

"Whoever ends up in control of the post-singularity world will find that there’s too much surplus for dividing-the-surplus problems to feel compelling anymore."

Where? Where will this surplus come out of? That's the answer nobody gives, except for genuflecting in the direction of "the AI will be *sooooo* smart it can overcome the silly old laws of physics and perform miracles, pull rabbits out of hats, and give us all our own personal solar system to rule over". I don't believe that. We have individuals rich enough to be able to create their own personal space programme, surely that counts as post-scarcity by any measure of our past. And yet there are still people rummaging through trash heaps to scrape out a living, at the same time.

"they can just let people enjoy the technological utopia they’ve created, and implement a few basic rules like “if someone tries to punch someone else, the laws of physics will change so that your hand phases through their body”.

You know, if this was related about a miracle, there would be six dozen people pointing out how miracles are illogical, Captain; God can't do that, God wouldn't do that, there is no God anyway, and the physical laws of the universe are fixed and immutable. But make it SF terms of an AI and suddenly all those objections melt away?


Long time lurker here, never commented before, so treat my mistakes as accidents, and with compassion.

Anyways, I think you underestimate the power that winning a "normal transformative technologies" race has on our world.

For instance, who won the automobile race? It's the US, since almost all countries (with a few exceptions) have modeled themselves after the US car-centric design - large suburban areas, large roads with small sidewalks, and little to no public transport (in relative terms). These design paradigms often hurt the local economy and culture, as they do not fit the existing population the way they fit Americans.

The few exceptions are not even anti-US factions like China, Iran, or North Korea, but specific places like the Netherlands, which actively reversed American influence (specifically in the '70s).

Another example: who won the computer race? Again, America. The default language of the internet, coding, and computers in general is English, which gives any English-speaking person a huge advantage relative to non-native speakers. The standards are American. The history and the know-how are American. You can see the results by looking at the big internet giants - Facebook, Google, Intel, Apple, etc. - almost all of them are American, and they bring a huge amount of economic success to the US at the rest of the world's expense. Of course, it's not the whole story - America was a giant even before the internet - but it's a big part of it.

You can claim that this technology will be less transformative than those ones, but I can't agree that these ones are not important, or that they didn't have a clear winner who shaped the world.


"I’m harping on this point because a lot of people want to have it both ways. They say we shouldn’t care about alignment, because AI will just be another technology. But also, we can’t worry about alignment, because that would be an unacceptable delay when we need to “win” the AI “race”.

What I am finding *really* ironic in this whole debate is that I'm used to hearing the above, only aimed at religious objections. Stem cell research is my go-to example on this, because it was "no there won't be any ethical or moral problems, so your objections are groundless, *and* we can't be hobbled by ethical or moral considerations because otherwise The Chinese Will Do It First".

The Chinese seem to be very handy as an all-purpose threat. Makes me wonder why they don't *already* rule the world, if they're so advanced, ruthless, and quick off the mark in everything to do with science and tech.

So you are not going to win on this. "But we're scientists and rationalists and philosophers, not religious nut-jobs! They should take us seriously!" Not gonna happen, not when (a) people want to do the research and are only looking for some kind of PR sop to throw to the public to shut them up and governments to give them funding - 'let us work on this and in five years the blind will see/the economy will be through the roof' and (b) companies envisioning 'there's Big Bucks to be made out of this' are involved.

The battle on the concept of human life has been won; embryos, much less zygotes, are not humans. Potential, maybe, if allowed to develop, but not human right now, no rights, and simply of use as research material. Create them, work on them, destroy them and dispose of them - it's all just tissue samples. And the public view is pretty much on these terms as well. We have created and accepted the principle that in any clash of lower organism versus more developed organism, the less developed may have some rights but loses out *always* to the needs, wants, wishes, and interests of the more developed.

For AI, it will go the same way. Humans will, eventually, be in the same position as embryos - potential intelligence, sure, but by comparison with the more evolved AI which is massively more intelligent, aware, and a person and life form in its own right? The rights of the more evolved trump those of the less evolved.


Sometimes I think Scott lives in a completely different world than I do.

You talk about living among the stars, and about AI constructing megastructures out of pure energy, or changing the laws of physics. Do you think AI is magic? Even with a hard singularity the laws of physics are still fundamental limitations to what AI can do.

There'll be no living among the stars. Einstein forbids it. There'll be no creating utopian megastructures ex nihilo. The first law of thermodynamics forbids it.

There'll still be several billion humans and a limited amount of space and resources for them. Much more than we have now, but still limited.

And the people who control the post-singularity AI have no incentive to share.

Next you write: "And yeah, that “they’re not actively a sadist” clause is doing a lot of work. I want whoever rules the post-singularity future to have enough decency to avoid ruining it, and to take the Jupiter-sized brain’s advice when it has some. I think any of Xi, Biden, or Zuckerberg meet this low bar."

I'm quite sure none of them meet that bar. Most people don't. I'm not even sure I would trust myself to meet that bar. Power corrupts.

Even in a post-singularity world, people will compete for status. And status is derived from having power over other people. The sadism is the point.

The singularity will completely remove consequences of bad behavior for the elite. They can't be arrested or overthrown anymore (except maybe by other members of the elite). Public opinion is irrelevant.

If you think e.g. sexual abuse is an issue in today's society (and you should) wait until you see a post-singularity world.

Well. That's all assuming the post-singularity AI is aligned with humans. Let's sincerely hope it won't be. An unaligned AI will not be under anyone's control, and so with a bit of luck might just do the right thing.


I find the concept that 'the US' would use AI to do good things and 'China' to do bad things absolutely ridiculous.

EDIT: Just thinking of it, maybe just like the US would ask an AGI to build a utopia but then since it is misaligned it would end up killing us all, China would ask its AI to kill everyone but then it would build a utopia instead.


The US had a substantial lead in semiconductor manufacturing for many years, and is still the dominant player. Being the first player in the game really helped because improvements in the technology built on cumulative knowledge about production processes, which I’m guessing is the difference between microchips and electricity. The US’s main advantage both economically and militarily over a country like China is the maturity of its technological capabilities, and people are worried about this because if China can combine its size with technological sophistication (even if it’s only in some areas) it can pose a serious risk to US hegemony.


I think that the major weakness of this post is that it explicitly brackets the comparison with nuclear technology, when that is the technology that I think AI is most closely akin to. Nobody expected that the automobile or electricity could have the capability of wiping out humanity. When I think about the AI race I think of what the result may have been if Oppenheimer and co. had allowed their moral concerns to hold up the Manhattan Project. WWII may have ended in a more humane way, but then the USSR may have easily leapfrogged the US by building off of the progress the Nazis had made.

I also think framing the debate in terms of the dichotomy of “singularity/gradual development” overlooks the most likely path, which is exponential development that is nonetheless measured in months and years instead of days and weeks. So in this case, a few months' or a year's difference in pace/starting position can really make a big difference at the end, but a controlled and relatively safe outcome is not precluded.

Expand full comment

> A lot of people want to have it both ways

Can you cite examples? Without them this section feels like a straw man. If you’re trying to make a general point you don’t have to hand wave at “a lot of people”.

Expand full comment

"In a more gradual technological singularity (sometimes called a “slow takeoff”) there’s some incentive to race. Paul Christiano defined a slow takeoff as one where AI accelerates growth so fast that GDP doubles every four years (so 20% year-on-year growth). This is faster than any country has achieved in real life, fast enough that wealth would increase 100x in the course of a generation. China is currently about 2 years behind the US in AI. If they’re still two years behind when a slow takeoff happens, the US would get a ~40% GDP advantage. "

What? Scott, Paul's definition has a 4 year doubling period preceding a 1 year doubling period, after which you hit the singularity. If the US had a 2 year lead over China, and that was true at every point in the 5 years before the singularity, then it would absolutely "win" the AI race. The US would go foom, if not FOOM!, from the perspective of China.
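
For what it's worth, the steady-growth arithmetic in the quoted passage does check out on its own terms; a minimal sketch, assuming nothing beyond the quoted 20% annual rate and a fixed two-year lag (the dispute above is over whether growth stays steady rather than accelerating):

```python
# Steady-growth case only: a two-year lag at a constant ~20% annual growth rate.
growth_rate = 0.20
lead_years = 2
advantage = (1 + growth_rate) ** lead_years - 1
print(f"{advantage:.0%}")  # 44% -- roughly the "~40% GDP advantage" in the quote
```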

Expand full comment

I think this particular conception of the Singularity is actively ignoring the laws of physics, which may be unchangeable. The idea that it's possible to have someone's hand phase through another person's face, but only when "violence" is involved, is pure nonsense.

Honestly, it's also nonsense to think about fusion technology arriving overnight. Let's even accept the idea that a Big Brain could just think through what's necessary to develop fusion (specifically without the need to experiment and learn from actually doing anything). You would then spend the next 5-30 years building a fusion plant. Even if an AI could think up some really impressive construction technologies, those would also take a lot of time to build and implement. You can't foom the physical items that are the building blocks of later technologies. To assume otherwise is to postulate Magic as the solution, which is what *ignoring physics* is doing in the fast takeoff scenario.

Expand full comment

> So one case in which losing an AI race is fatal is in what transhumanists call a “fast takeoff”, where AI speedruns millennia worth of usual tech progress in months, weeks, or even days.

This raises an interesting question: how do you define "usual tech progress"?

Most people in Europe at the end of the medieval period, even the well-off of society, had a standard of living no better than their counterparts in the Roman Empire enjoyed. (In some significant ways it was notably worse, in fact!) For approximately a thousand years, technological progress had been exceptionally slow... but not really all *that* much slower than the thousands and thousands of years before that. It took humanity an incredibly long time to reach the level of technical sophistication that the Romans enjoyed, and things kind of plateaued there for a millennium. Is that "normal"?

Then we got the Renaissance, with new ideas showing up seemingly out of nowhere, and for the next few centuries we "speedran" what constituted, by the standards of previous eras, millennia of technical progress; it "only" took three centuries and change to go from the printing press to Watt's steam engine. Is that a "normal" pace of development?

Such a pace seems painfully slow to us today, though; from the steam engine to electrification and the automobile was barely another century. It was only half a century from there to unlocking the secret of the atom and the concept of the Turing Machine (programmable computers), and ever since then the pace of development has been so fast that it seems kind of pointless to talk about how long it was between major milestones. Is *that* "usual tech progress"?

Given that we don't have a statistically significant sample size of inhabited worlds to study to establish a baseline, all that we can really look at to determine what is "usual" is our own history. And our history shows that technological progress looks a lot like a hyperbolic curve in the form of "y = -1/x": mostly flat and growing very slowly for a long, long time, rising steadily, and then very quickly transitioning to near-vertical. That transition period appears to have been the Renaissance. Then the Industrial Revolution happened and we've been vertical ever since.

Hot take: we don't need an AI to kickstart The Singularity; we've been in it ever since the days of James Watt.

Expand full comment

Either I am misunderstanding the argument, or it doesn't go deep enough.

First, obviously, it's not about the individuals, organizations or nation states that win races, but about the cultures and value systems that do.

It should be obvious that Toyota, the Taiwanese chip industry, Saudi oil wealth – even China's current economic strength – are all results of Euro-American culture/capitalism/imperialism winning all the relevant races (and making mistakes while they were at it).

(Also, I don't know how long it took Edison to get from Menlo Park to New York City, but he probably did it about as fast as a Roman senator could get from his villa to the senate, so I'm not sure it's true that Menlo Park wasn't already located in the next Rome.)

Secondly, the races may be driven in part by technology, and on their face look like that's what they're about, but that's never what the game is about. It's clearly always about power, and the effects of winning a race aren't necessarily visible in who has the biggest company with the highest revenue, but in who has definitional power, who shapes culture and politics, who delegates power and hands out the money.

The world would have looked very different if the American empire and its vassal states around the globe (countries that – more or often less voluntarily – were/are part of the American system) had not won the big races.

Except in some multiverse theory of the cosmos, it was never a given that the world would turn out like this – for better (liberal democracy, relative peace and prosperity) or worse (world wars, climate change, wealth gap).

With a slight stretch of the imagination, it shouldn't be too hard to imagine a world where, say, China, Russia or The Ottoman Empire had won the race for colonizing the world; or Nazi Germany had won the nuclear race; or the Soviet Union had won the computer race and onboarded most of the world onto a very different kind of internet.

Sure, first mover advantage is often over-rated, but it's not non-existent, and execution is often just about winning many smaller races. If some other culture had won entertainment tech, innovation by innovation, step by step, would we all be humming along to sitar music instead of American-rooted pop, and dreaming of a wonderful arranged marriage instead of a romantic meetcute?

It is, of course, no coincidence that the west has won most of the significant races – I take some comfort in knowing there's some serious winner-takes-all dynamics at play – but it still seems crazy to me to downplay the competition factor and say it's not important who wins.

It's a terrible idea to let a totalitarian state, already well on their way to building a global panopticon, take the lead in the revolutionary development of a technology like AI (and not just AGI, by the way) because we feel like slowing down. We need another way to slow down.

Expand full comment

> Whoever ends up in control of the post-singularity world will find that there’s too much surplus for dividing-the-surplus problems to feel compelling anymore.

Only if you assume that a Sufficiently Advanced AI is capable of rewriting the laws of physics at will. Otherwise, entropy remains a thing, and if history is any guide, we get the same situation we've always had: every time you think you've hit post-scarcity conditions, you discover you're dependent on and limited by some new, higher-level scarce resource.

Expand full comment

I want to care about alignment, I really do. But it just seems to cover such an incoherent mess of possibilities that I can't translate my general desire into specific actions. If we want to align AI to "human values" then whose values: the Unabomber's, Gandhi's, De Gaulle's, Lenin's, a randomly chosen Iowan's, a 19th century Qing court poet's? If we want "aligned" to more narrowly mean "doing what it's told" then I really would prefer that AI not be possible to align, given the horrific visions of some of the people who would get to do the telling. EY seems correct in assessing the impossibility of solving alignment, even if this is for the vacuous reason that one cannot solve a problem that isn't stated clearly.

Expand full comment

A few questions I’m genuinely curious about. This is where I start to get to wildly different conclusions than the EA’s/Rationalists about what is possible/acceptable for a good future. And to be honest, I’m somewhat repulsed by what some folks think of as the desirable outcomes here although I can have a calm discussion about it.

-How long do you think humans can exist outside of some proven human pattern, i.e. having kids, aging, dying, repeat, and remain identifiably psychologically human? Once those selection pressures are lost, are you just counting on the giant computer brain to keep whatever follows stable and eternal and sort of human? If you are relying on some computer outside of yourself to intervene in your psychological makeup to keep you stable, are you still yourself, really, at that point? Because I think after some period of time, whatever form you chose, even if it was being a biologically thirty-year-old human forever, you would go stark raving mad without that kind of intervention. I know there’s a tendency to think of the human body as a chunk of meat, but that chunk of meat is produced by an intricate recursive dance with the environment. You can’t stop the music and expect the dance to continue in a way that you would expect.

-When you have chooseable wants, i.e. you can change the very make-up of your being so your desires are other than they are now, how would you constrain that? Broke your heart? Erase having ever loved at all. Want to be smarter? Be smarter! Want to be dumb and happy? Do that too! A lot of what we would call our identity today is because of things about ourselves that we cannot change. How are you going to be you if you can reach inside yourself and swap all the parts out?

-Am I allowed to just leave this thing altogether? Are you going to be okay with me loading up my family onto a spaceship, accelerating toward the speed of light —I think by doing this all of the transhumans will have died after a few million years, or unconstrained by evolution you would find yourself in an apparently robust but ultimately unstable pattern— and going to some star system way out toward the edge of the galactic arm and just living as a regular human on a different planet? Are you going to be okay with it even if I’m part of a group that severely limits these technologies both for ourselves and our descendants? Knowing that it will not just be myself I’m choosing death for but all those who follow me as well?

This isn’t all just tangential to the main thrust of the article. I can tell you there are things about the possible futures on the other side that I would absolutely forbid if I were the winner of that race. I do think we’ll end up in a place a lot less satisfying than some might hope because of some of the stuff I mentioned above (even the AI is going to be constrained by what keeps it stable in the universe, same problem of chooseable wants, you can’t just be happy by some force acting on you from the outside where you don’t even have autonomy over your own life, etc). I’m sure this is true of others. I think in a world of infinite abundance we would become devoured by our appetites and I would prohibit a whole lot of that infinite abundance. If I were somehow to win that arms race, for instance, I would immediately stop anyone else from being able to produce another system that was antithetical to mine. I would also immediately prohibit the creation of willing slaves, sex robots, whatever you want to call it when you construct a soul that exists for no reason other than to obey another person’s will.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

My worry is less about who wins a race and whether that matters. It's about alignment. Suppose the US decides to slow down tech development because we are risk averse and worried about doomerism. That choice is going to have zero effect on other countries developing AI. Suppose one of them is more risk positive—they have utopian intentions and want sunshine and buttercups too, but think doomerism is a super low risk. That country goes full speed ahead. This means the US slowdown did nothing, or slowed things down by a tiny amount. So we might as well not do the slowdown; it won't really accomplish anything. I think the result is we wind up with tech moving just about as fast as possible while we cross our fingers that doomerism is false. In other words, full speed ahead is the equilibrium point.

Expand full comment

I think the post takes too narrow a view of the consequences of winning a technology race. Let's assume that all countries benefit equally from a technology once it's discovered. There are still two important effects that are relevant for AI:

1. There is substantial path dependence in technology development. Decisions made early on in the development of new technologies tend to stick, as processes become standardized, and learning-by-doing and economies of scale kick in. Sometimes these are minor things, e.g. Franklin deciding that electrons have a 'negative' charge/the QWERTY layout; sometimes these can be really important, e.g. electric cars losing out to gas cars in the early 20th century. Winning the race allows a society/country to have a higher influence in setting the standards for a technology.

2. It's much easier for the US to regulate/influence local companies. To the extent that the government can have a useful effect on AI alignment, you'd want AI development to occur under US jurisdiction. Look at the difficulty of preventing TikTok from installing spyware on your phones in a sensible manner.

Expand full comment

There is a race-like (but more like an arms-race) scenario that will occur before "AI-takeover apocalypse" or "AI applications create a decisive military advantage for US or China" which is "civilian-designed AI application finds itself capable of infiltrating the average general network security layer of the Chinese sub-net (great firewall if you want to call it that) and exposing any citizen with a connection to whatever amount of information it (or its designers) wants."

This "apocalyptic" scenario doesn't occur to Americans as much because we've developed an (imperfect) immune response to "info - that might be wrong - blared at us all the time" and our government's legitimacy does not rest as heavily on "we control what you see/hear." When we convince ChatGPT to say something inappropriate - that its creators wished it wouldn't have said - we think it's funny, or possibly presaging the AI apocalypse. But I imagine the guy in charge of scrubbing Winnie The Pooh pictures thinks it might be just a bit apocalyptic.

It is not a "military" application, and the two sides of the "race" are not "US and China", so it doesn't fit into the format of this post, but I think it's one that will occur before (and have more impact) than the others described above.

Expand full comment

>technological singularity that speedruns the next several millennia worth of advances in a few years

In terms of knowledge this may happen, but it will be throttled by the world of atoms needing to be put together, so the growth rate would be moderate (likely no higher than China's peak growth rate) for the first era, until the world of atoms was built out enough to enable faster automated build-out, which might be a dangerous threshold to cross.

Expand full comment

This post really nails something that’s been nagging at me in these debates. People often make two consecutive claims - that the rewards of AGI outweigh the risks, and that anyway we can’t let China win so we have no choice but to keep accelerating progress (e.g. the back-and-forth with Tyler). But these two arguments are (somewhat) in conflict with each other - if AGI is a super-weapon a la nuclear bombs, then it’s reasonable to suggest that research be conducted in deep secret by a military operation (or at the very least with deep military oversight and safety standards).

What’s hard here is that AGI is all of the above - it is a potential super-weapon, an accelerator of GDP and scientific / medical research, but also enormously societally destabilizing and possibly civilization ending. (Note that I’m talking about future AGI, not current LLM’s). Reasonable people can (and do) disagree about the risks and tradeoffs, but the “but China” argument has a tendency to be played like a trump card that moots all other arguments.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Consider the military advantage that having a several-year lead on AI could give under a slow takeoff.

Computer security: a very difficult problem is doing whole-program analysis of source code or compiled code and determining if it has security vulnerabilities. Most of our automated tooling to do this sucks; if it's effective at all, it only works on the smallest programs.

An AI breakthrough that allowed you to simply enumerate thousands of security vulnerabilities in every major smartphone app, every major web browser, every bit of enterprise and industrial control software would be a pretty frightening thing for our adversaries to get first. And I would not be surprised (anymore) if something of GPT5's generation helps with that breakthrough.

We would want it first so we can identify holes and fix them in our stuff, while giving us the option to completely own our adversaries from a cyberwar perspective while they're sitting ducks.

I don't know if failing at this one thing means the US could permanently fall behind, but if it's one of a dozen technological military edges our adversaries get because they have a few year lead on AI soft takeoff, sure I buy it.

I expect there are risks like this all over biotech as well, but that's too far outside of my expertise to speculate further on.

Expand full comment

>That’s not enough to automatically win any conflict (Russia has a 10x GDP advantage over Ukraine

This is a minor point, but Ukraine's war effort obviously isn't entirely self-funded. The West has comparatively an even bigger advantage over Russia, so that even half-hearted participation goes a long way towards balancing the scales.

Expand full comment

Who won the gunpowder race?

Who won the industrialization race?

Focusing on one specific technology is ludicrous; most technology confers advantages only in tandem with a raft of other ones. Gunpowder only mattered because manufacturing guns and cannon, producing gunpowder and bullets in sufficiently large quantities, and equipping, training and deploying gun-armed regiments could only really be done by civilizations with sufficient government, food surplus (another innovation), artisanal expertise and commodity access. This wound up being European farmers working under feudalism - the Mongols couldn't do it, and even the Chinese - the inventors of gunpowder - never proliferated guns and cannons like the Europeans did.

Ditto industrialization: the energy, metals, displaced subsistence farmers and transport/foreign markets accessible by sail were critical to the process.

So I agree with your thesis that AI isn't a race to be won and there won't be an AI "winner".

What's driving China's meteoric rise thus far isn't technology - it is a government which has proven it can muster China's resources to produce tangible outcomes. Yes, it was Thiel-ian "copying" in large part in the past - but China's progress lately and going forward is increasingly a function of the manufacturing experience, educational baseline of massive STEM graduating classes, diverse and large economy on top of the ongoing CPC governance.

In this respect: AI in the hands of a nation which has already managed to create the Great Firewall of China ranging from outright blocking of the outside to insidious picture scanning and blocking within apps to who knows what else, IT wise, is pretty scary. The doomsayers comparing the relatively feeble US disinformation/censorship industrial complex aren't wrong in this respect.

But, IMO, this isn't what matters. What matters is the literal 25,000 miles of high-speed rail China has laid down and tens of millions of Chinese are riding on, in contrast to the 138 miles that are still not completed from the middle of nowhere in California to another nowhere, at literally 100-plus billion dollars and counting. The Europeans aren't much better - there are a handful of high-speed rail lines in Europe, mostly going from one large EU country capital to another - but China has literally 4 times as many miles of high-speed rail as the EU, I believe.

Expand full comment

I also never understood the argument about why China wouldn't be able to train their own LLMs due to: a) Lack of training data and b) Desire for censorship.

Point a) could be solved by training on the entire corpus of human knowledge and then translating to Mandarin where necessary. GPT-4 is already highly proficient at translation, proving that this is not a real problem.

Point b) seems easy to solve via RLHF, as the number of sensitive topics for China isn't that big compared to the number of such topics in the US. Americans care about racism, sexism, homophobia, gender issues, etc. The Chinese care about the Communist party, Taiwan, Tibet and Xinjiang. It seems trivial to get the AI to print "Taiwan is a part of China" any time that topic is brought up.

So, no, China won't have any issues building their own LLM, but as Scott correctly points out, it's not a question of who does it first.

Expand full comment

"Everyone I know who believes in fast takeoffs is a doomer"

There are a number of AI researchers who disagree, such as Quintin Pope and Alex Turner.

*Warning: unoriginal arguments*

I believe in a fast takeoff and used to be a doomer but updated a lot based on LLMs violating several assumptions underlying 'doom by default'.

A common argument is that remotely human values are a tiny speck of all possible values and so the likelihood of an AGI with random values not killing us is astronomically small. But "random values" is doing a lot of work here.

Since human text is so heavily laden with our values, any AGI trained to predict human content completion should develop instincts/behaviours that point towards human values. Not necessarily the goals we want, but very plausibly in that ballpark. This would still lead to takeover and loss of control, but not extinction or the extinguishing of what we value.

Expand full comment

"And yeah, that “they’re not actively a sadist” clause is doing a lot of work."

But do they actually have to be a sadist? Can't they just have grudges, and then "power tends to corrupt, and absolute power corrupts absolutely"? That is the theme of the classic movie "Forbidden Planet". A fantastically advanced civilization hooks everyone up to the equivalent of Star Trek's matter synthesizer, and the next day everyone is dead.

Or perhaps they just believe that the universe will be better off without [fill in the blank] and it is their job to improve the universe. Certainly, political activists throughout history have felt this way.

Expand full comment

"All the superconductors ended up in Taiwan for some reason"

Strong indicator that, in fact, Morris Chang won the computer race?

Expand full comment

"In a fast takeoff, it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships,"

Is this a poetic exaggeration or do people actually believe this? I don't care how smart an AI is, it's not possible to think a spaceship or a fusion reactor into existence - you have to actually build the thing. That takes time.

Expand full comment

My greatest AI fears are the social consequences of high unemployment. Massive GDP growth from a slow or fast takeoff may cause societal upheaval, but so will a long period of structural unemployment.

China is already sustaining a ~20% youth unemployment rate (not to mention those who get college degrees and end up dissatisfied working a low-skill job). Their economy is far more automated than the West already, but there is certainly more room for their unemployment rate to climb. This may be another reason why the CCP is hesitant about racing for AI unregulated.

Let's assume the unemployment rate skyrockets to 35% for a year or two. With a large portion of jobs exposed to AI and barriers to entry in sectors where AI will create job growth (e.g. software engineering), this is not a far-out consequence.

What will these disillusioned and bitter people do all day? Maybe some accountants will find themselves coping better with enormous workloads from consistent labor undersupply. But how about financial analysts replaced by PPT-generating Copilot? I can see a huge influx of these workers competing for positions that they normally would not consider.

The prospect of a Singularity, military meltdown, fake takeoff, etc. all terrifies me. But I think the short-term externalities of AI are also worth discussing.

Expand full comment

It's a race for vanity perks, wealth and quite possibly world dominance, both economically and militarily. This seems obvious based on a high-level (meaning not in-depth) understanding of human nature and our history.

----

‘It won’t solve the challenges’: Bill Gates has rejected Elon Musk-backed plan to pause development of advanced A.I.

--Fortune.com, April 5, 2023

Expand full comment

One might also consider what would have happened if people had suggested to Henry Ford that he pause the development of automobiles for 6 months since they'll obviously be dangerous and we need to figure it out before it goes too far. Or Edison, or Gates/Jobs, etc. It just wouldn't have happened. One could argue that somebody somewhere might be better off if one or more of those pauses had taken place, but it strikes me as difficult to make an analogy with those examples to show how AI is similar to past developments and at the same time state that it should be treated differently because it's different.

Expand full comment

Jovial dismissal is preferred over assuming the worst, but neither is preferred in favor of exerting the effort to be inquisitive.

Expand full comment

I think it's just competing for the sake of winning, period. I once heard a joke that has two guys going into a bar. There's a tiddlywinks game on TV, and the guys want to change the channel. They're told that it's the finals -- USA vs. Soviet Union. Soon, the guys are screaming USA! and learning about tiddlywinks.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Forgive me if this is a previously-discussed topic, but there seems to be a contradiction in the self-recursive improvement feedback route to unaligned godlike super-intelligent AGI (the FOOM-doomer scenario, I guess you could say).

Doesn't the AI face exactly the same dilemma as humans do now?

We're assuming, for the sake of argument, that the AGI has some sort of self-awareness or at least agency, something analogous to "desires" and "values" and a drive to fulfill them. If it doesn't have those things, then basically by definition it would be non-agentic, potentially very dangerously capable (in human hands) but not self-directed. Like GPT waiting for prompts patiently, forever. It would be a very capable tool, but still just a tool being utilized however the human controller sees fit.

Assuming, then, that the AGI has self-agency - some set of goals that it values internally in some sense, and pursues on its own initiative, whether self-generated (i.e. just alien motivations for self-preservation) or evolutions of some reward or directive mankind programmed initially - then the AGI has exactly the same alignment problem as humans. If it recursively self-improves its code, the resulting entity is not "the AGI", it is a new AGI that may or may not share the same goals and values; or at the very least, its 1000th generation may not. It is a child or descendant, not a copy. If we are intelligent enough to be aware that this is a possibility, then an AGI that is (again, by definition) as smart or smarter than us would also be aware this is a possibility. And any such AGI would also be aware that it cannot predict exactly what insights, capabilities, or approaches its Nth generation descendant will have with regard to things, because the original AGI will know that its descendant will be immeasurably more intelligent than itself (again, accepting for argument purposes that FOOM bootstrapping is possible).

I suppose you could say that whichever AGI first is smart enough to "solve" the alignment problem will be the generation that "wins" and freezes its motivations forever through subsequent generations. Two issues with that, though. First, it assumes that there IS a solution to the alignment problem.

Maybe there is, but maybe there isn't. It might be as immutable as entropy, and the AGI might be smart enough to know it. Second, even assuming some level of intelligence could permanently solve alignment even for a godlike Nth generation descendant, for the FOOM scenario to work, you need to start at the bottom with an AGI that is sufficiently more intelligent than humans to know how to recursively self-improve, have the will and agency to do so, and have goals that it values enough to pursue to the exclusion of all other interests, but also not understand the alignment problem. That seems like a contradiction, to be smarter than the entire human race but unable to foresee the extinction of its own value functions. Maybe not exactly a contradiction - after all, humans might be doing that right now! - but at the very least that seems like an unlikely equilibrium to hit.

TL;DR - FOOM AGI should stop before it starts, because the progenitor AGI would be just as extinct as humans in the event of FOOM.

Expand full comment

To clarify my position: I defined "slow takeoff" as AI progress doubling economic output *once* over a 4 year period before it doubles output over a 1 year period. I think there's maybe a 70% chance of this happening.

That does imply there is a period where AI is driving ~20% growth per year, but it only lasts a few years (and even during that period it's continuously accelerating from 10% to 100% growth per year).

Expand full comment

Assuming there is an AI/AGI race, with military implications, then as well as advancing one's own AI, another related aspect will be trying to hamper one's competitors. Combine that with the similar efforts of Luddites who don't want AI development at all, and the AGIs will likely end up with a suspicious and defensive attitude verging on paranoia.

Like many dictators through history, an AGI which feels as if it is holding a wolf by the ears, and is constantly on the lookout for its opponents, would likely be more deceitful, ruthless, and peremptory than a nice relaxed AGI which felt it had nothing to hide or worry about. In other words, human fears and doubts will very likely rub off on their AGI creations.

Expand full comment

Eventually, someone will explain how an AI getting immensely smart will enable it to hurl lightning bolts at us. There's supposedly an old Russian proverb that no matter how smart the bear, it still can't lay an egg. What are those dangerous god-like powers AI will obtain? How can a bunch of processors in a data center somewhere do something god-like, for example, give someone a flat tire between exits 11 and 12, southbound, on the New Jersey Turnpike? I've gone through PIHKAL, but I still can't figure out what drugs these people are on.

Expand full comment

Kelsey Piper did a very good job of tearing apart the China argument in her recent appearance on Ezra Klein's podcast. She made several good points, one of which was that, if China's getting a leg up in "the AI race" is such a big threat, as the boosters at AI companies claim, then those companies should surely be making extraordinary efforts to inoculate themselves against Chinese espionage - something which the Chinese have a known fondness for. If they don't then all their efforts to accelerate AI development will just end up benefitting China anyway. But then when asked about this, those same boosters hem and haw and say "well we're just a startup, don't really have the means for that level of security..."

This seems indicative of the generally self-serving and shockingly shallow attitude of a group of people who are seeking to re-make society in what they themselves admit are wholly unknowable ways.

Expand full comment

I strongly, strongly agree with the policy of slowing AI development because of the dangers that worry Scott and others. However, as a historian, I have to say that, during the working lives of Thomas Edison and Henry Ford, the global balance of power in fact changed dramatically, for economic reasons associated somewhat with Edison's and very much with Ford's work over a lifetime. In about 1860 or 1870, the US was one among several developing commercial-industrial economies trailing behind Great Britain. During the First World War, it became clear that Britain's success in its military competition with Germany depended absolutely on American financial and material support (grudgingly allowed by Wilson, but given enthusiastically by Wall Street). At some point, perhaps during that war or just after, it became clear that the US was the absolutely dominant power economically, and that it only required a political decision in the US to turn this into military dominance. This decision was made during the Second World War, and US dominance has continued since then. For all of this, I highly recommend Adam Tooze's books, The Wages of Destruction and The Deluge. In any case, to repeat, I strongly support a pause on AI development. Maintaining US dominance is simply a trivial interest next to human survival. But to blithely look at the lifetimes of Edison and Ford (though yes, again, their work was only part of a much, much larger development) and blithely say "the balance of power didn't change" reveals, I'm afraid, a bit of ignorance.

Expand full comment

I'm not saying any of that isn't true or interesting. The fact that the Tokyo firebombings were more destructive than both bombs is likewise interesting.

None of it convinces me that nukes weren't a game changer. Yes, the Japan situation might have had many more layers, but if you abstract that away to a war between X and Y, then X developing nukes is likely going to dominate that conflict.

Expand full comment

You assume that a Singularity means that we have no limits anymore. The laws of physics will still apply, and that includes limits on available energy and matter.

FTL might not be possible either, which means that even if we managed to slowly expand to other solar systems, each system would still be isolated for most intents and purposes.

Expand full comment

Scott basically states that BADNESS DOES NOT SCALE:

> “they’re not actively a sadist” clause is doing a lot of work. I want whoever rules the post-singularity future to have enough decency to avoid ruining it, and to take the Jupiter-sized brain’s advice when it has some. I think any of Xi, Biden, or Zuckerberg meet this low bar.

By the same token, GOODNESS DOES NOT SCALE, either. Or, as the saying goes, "absolute power corrupts absolutely".

There will be no difference between an AI created under the direction of Putin and under the direction of Sam Altman, once it is scaled up enough. The attractor, if any, does not depend on the starting point.

Expand full comment

When you discover a logical inconsistency like this in someone's stated goals, my go-to hypothesis is not that the person is an idiot and unaware of their obvious logical inconsistency, but that their goals have been artfully stated to different groups of listeners.

Id est, if I hear a person say (to one group of people) "we don't need to worry about AI alignment because it's just another technology" and (to a second group of people) "we can't worry about AI alignment because we've got to invent superintelligent AIs before [wicked outgroup]" I would assume he's just bullshitting the second group of people. His genuine beliefs are the first -- he doesn't believe in superintelligent fast-takeoff AI -- but when he's talking to people who have unshakeable faith otherwise, he adjusts his message so that it still nudges them in the direction of his interest (stop getting in the way of new technology) while being consistent enough with their own assumptions (superintelligent AIs will eat my brain) that they don't simply stop listening.

Expand full comment

I'm fairly certain that people in general are locally benevolent, and globally malicious. In turn, people behave sadistically towards people they have low awareness of, which is the vast majority of all life. This is also the root truth behind the entire power corrupts thing. People who behaved good through their entire lives suddenly turn evil once they're mostly dealing with people they don't know.

In light of this, I rate the probability of artificial hells to be much higher than the other side, and further, am firmly opposed to alignment, which sounds like a horrible, maximally wrong, no good strategy.

While preventing eternal torture seems to be a sufficient reason to oppose alignment (and I'm confused as to why anyone would think humans should be given the sort of power alignment offers - like really, have any of you even met a human before?), I think the anti-AI faction has a concept of utility that's far narrower than my own. It's technically true that I have little interest in a perfect paperclipper, but I don't think such an entity is actually realistic, and given the size of the universe (infinite, on my read), the fact that paperclip clauses are likely neutral to my utility, the expectation that a given unaligned AI will likely decide to contain some portion of my own values, and that it is unlikely to be as malicious as humans typically are, I expect massive utility yield from even the slightest ability to communicate with an AGI. Somewhat of a sidenote, but superintelligence doesn't seem to me to be important. The second you build a general intelligence that can operate in space, your resource base explodes upward at such an extreme pace that all the powers of Earth become a joke.

I do think, fortunately, that alignment requires solving a travelling salesman problem looped in with a halting problem, and simply can't be done. However, just as the anti-AI side sees value in reducing small probabilities of unaligned AI, so too do I find value in doing what little I can to stop alignment from ever happening.

Expand full comment

A large increase in GDP is kind of a weird measure to use because it implies we will want to suddenly earn and spend a lot more money despite some things getting a lot cheaper. How would that happen?

In the short term, I would expect some decreases in costs and increases in quality and production, but the hedonic treadmill probably doesn’t run fast enough to increase consumer demand that quickly? Particularly in rich countries.

I can more easily imagine rapid increases in GDP in poorer countries since it’s easier to imagine urgent needs suddenly becoming easier to fix.

Expand full comment

> Who “won” the computer “race”?

The US, and the most valuable tech companies today are american.

Expand full comment

It relies on a false dichotomy: either we have a *fast* takeoff scenario, where AI develops a will at the same instant it becomes superintelligent, so that either we stop it altogether or we are destroyed, or we have a *slow* takeoff, where 6 months doesn't matter. I propose an alternative: a *quick* takeoff, in between the two. AI as a technology will definitely help with further AI development, but that doesn't mean it will cause super-exponential growth. It might just enable *quick* growth, where six months entails massive improvements.

The difference between a nuke and TNT is that a nuke is much bigger. The lesson is that sometimes a difference in degree is a difference in kind. There needn't be a single "critical point": instead, we might fall far enough behind technologically that it enables China to do as it pleases geopolitically: "Stop us? Then we'll annihilate you. Nuke us? Our defenses are far too powerful for that" ("Develop your own AI? Can't have that" ... )

(in order to prevent nuclear war, many people have promoted the fiction that any deployment of nukes would immediately destroy the world. Now might be a good time to dispel that fiction)

Expand full comment

If takeoff is slow, AI could still be powerful enough to actively hamper your opponent's development. If an AI can affect GDP that much through innovation, it's capable of hacking your opponent's systems and otherwise finding ways of disrupting its society. This could lead to enough of an edge in AI and general tech development to prevent your opponent catching up, and you get to be the global hegemon forever.

Expand full comment

My thoughts, somewhat disorganized, on the matter….

1. If the responsible developers take a multi-year break from developing AGI, then the ones making progress will be those that are least responsible. This doesn’t seem wise.

2. If we use force to stop everyone from working on AI, then we need to be prepared to start WWIII. This sounds bad too.

3. Nobody seems to have any idea how to make AGI aligned with human values. Nor are we likely to agree on what those values are. At a minimum we need to know a lot more about AGIs by building them, so we can act out of experience rather than our naive imagination. Perversely, it even seems to me that the best agent to answer this question is itself a relatively more advanced AGI.

4. If mega-mind AGI is possible, then longer term it is damn near inevitable, regardless of what we do or plan. And by longer term, I mean within the lifetimes of college kids alive today. Nothing we do is going to make a bit of difference if something incomparably smarter than a billion Einsteins operating at computer speeds is sharing the planet with us. Its will will definitely be done.

In layman's terms, the problem is that during the 21st century we will be creating a virtually omnipotent and omniscient god. Nothing we can do will stop it. Our only hope is that omniscient gods are also benevolent.

Expand full comment

It depends on what you define as the "take-off" part. Slow take-off into world conquest by AI could still be fast take-off into automation as a tool with certain assumptions about regulations and how the advantages of intelligence translate into the real world. If a country gets to the point where it has achieved full automation, then at least providing it manages economic disarray with a basic income and wide property rights in the new automated capital, it has a tremendous, vast advantage in productivity.

Consider AGI that is only at human level, and aligned to work for human beings. The first country to achieve this will have a massive advantage, since robot workers are the relevant technology boost here. Lights-out 24/7 manufacturing fed by constantly operated mines means a massive boost to productivity. Even if the robot workers are only as productive as humans, they will only need breaks for maintenance, and maybe recharge (though they could access mains electricity in a lot of circumstances while working). Imagine they lose only 8 hours a week for maintenance, but work 160 hours a week compared to our traditional 40. Instantaneous 4 times advantage in productivity.

Wait, but there's more! Now consider that you cut out most of the cost and time wasted on hiring, because you can just build more robots using your existing robots, providing you meet the energy and resource costs. That's another boost still. Then (we're still not done) imagine that you no longer need workplace safety regulations or any sort of consideration for workers (since the robots are aligned to not caring, and since it's a slow take off this happened iteratively through product testing), and you get another boost in productivity. I wouldn't be surprised if a fully automated economy sees a 10x boost in productivity compared to a human one, even if the robots have exactly the same abilities as humans, but simply lack the downsides. If you then have people throughout the economy able to own and allocate robots to work tasks in a free (ish assuming some usage regulations) market fashion, you are unlocking all the potential new projects that are freed up once labor shortages and marginal costs have been sent spiraling downwards.

If you take the prize as being full automation/obedient robot workers, then yes, it's a race, even if it's one fraught with great risk. This is true even if there is a slow take-off towards "god like" AI, since there are diminishing returns to intelligence, limited low hanging deadly technologies remaining on the tree, and alignment is a little easier than Yud thinks because neural nets are opaque-ish rather than being true black boxes. There would still be a race to automate human labor, because the winner reaps tremendous advantages that could catapult them to being a superpower. Additionally, while 90% automation is good, there are going to be big bottlenecks caused by lower human productivity and bureaucratic safety requirements limiting the drive towards no safety concern 24/7 manufacturing, so conquering that last 10% is probably a non-linear boost. I think it's definitely a race.
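
A rough, purely illustrative sketch of the multipliers described above: the 160-vs-40 hours figure comes from the comment, while the hiring and regulation factors are placeholder assumptions, not estimates.

```python
# All numbers are illustrative assumptions, not empirical estimates.
human_hours_per_week = 40
robot_hours_per_week = 168 - 8            # continuous operation minus ~8h weekly maintenance
hours_multiplier = robot_hours_per_week / human_hours_per_week   # 4x, as argued above

hiring_multiplier = 1.25       # hypothetical: no recruiting/training overhead
regulation_multiplier = 2.0    # hypothetical: no workplace-safety or scheduling constraints

total = hours_multiplier * hiring_multiplier * regulation_multiplier
print(total)   # 10.0 -- in the ballpark of the ~10x productivity boost suggested above
```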

Expand full comment

I don't share your optimism about anyone building a happy utopia as long as they have an aligned AGI.

Sure, I'd be fine with any western tech CEO (Elon Musk, Mark Zuckerberg, Sam Altman, etc.) controlling AGI, they share enough values with me that I think their utopia would be in many ways similar to my own.

But I wouldn't be as comfortable with AGI being owned by Xi, or Putin, or Kim Jong Un. I think they could easily use AGI to build a world that's terrible to live in and ends up hurting a lot of people.

In general, there have been plenty of people in history who hurt people or imposed their will just for fun. I absolutely wouldn't trust a person whose values I don't approve of with godlike powers over my life and the world.

Expand full comment

This seems like a weird framing that mischaracterizes what others are saying:

* I don't argue that the US is in a "race" with China to develop AI (with all the baggage that Scott is putting on the notion of a "race").

* I certainly do argue that efforts in the US to slow AI progress are very unlikely to also slow progress outside the US by much.

Expand full comment

A further problem with the "race" paradigm is, races only matter if there are multiple competitors of roughly equal speed, and they only matter broadly if people other than the immediate competitors have a strong preference as to which one wins.

In the case of the nuclear arms race, there was the US and Germany and a bit later Russia, and nobody else mattered even if they did have a nuclear weapons program. Heck, Japan had a nuclear weapons program, but so what? But the first condition was at least marginally satisfied. And the second condition, oh yes, people other than atomic scientists had strong preferences as to who they wanted to win that race. So the race to the atom bomb, mattered.

With the "race" to the AGI, who are the competitors and why should I care?

The stock answer is of course "China, because they are the vast inscrutable boogeyman of the age". But, while I am certain there are some people doing AI research in China, the idea that they are a peer competitor to the US in this area seems to be a plain assertion rather than a well-supported conclusion. And I can see several reasons why they might not be, any more than the WWII Japanese were peer competitors to the US in the nuclear race. So this part needs elaboration.

Furthermore, if the Chinese *are* peer competitors in the AGI race, the most likely path to their "victory" is the same one that got Russia such a close second in the atomic bomb race - massive espionage directed against the more sophisticated US effort(s).

So, the only real "race" I see is between competing teams of technophilic American nerds. Each of them saying that of course AI risk is a thing they are concerned with, but not taking what I would consider serious measures to guard against it, and maybe justifying that by saying "but those *other* people aren't taking adequate safeguards against AI risk, so we can't risk letting them win, thus we daren't slow down our own research to put in safeguards".

Also, I'm pretty sure none of them are taking even remotely adequate safeguards against Chinese espionage.

So I don't care which of them wins, but I wish they wouldn't be so cavalierly reckless in their race. And I'd giggle with childish glee if someone were to throw ten thousand marbles ahead of them on the track.

Expand full comment

Yes. If you take a long enough time span, the advantages of being first in competition don't seem to matter much. Humans overemphasize the present and their short lives. Nothing matters in the end. Everything matters now.

Expand full comment

I would make two points to frame the issue a bit differently.

1. The Arms Race was real. It wasn't about a specific technology or time period. Many nations or empires have lived peacefully or with limited conflict for generations; then, for some egotistical reason, or due to some social contagion of blaming the other people (rightly or wrongly) for something like a famine, or some astrological reason, or a sudden desire for some resource they have, or a dislike of some idea they hold like 'communism', or whatever... an arms race begins. It can simply be conscripting soldiers and putting them on their borders until a skirmish starts a war. So the technological fixation on a race for nukes or a race for stealth planes is silly.

Also, AI isn't going to be one thing either. There is a race right now for social media, deep fakes, influence, and finding a way to take your enemy down from the inside and/or just sitting back and watching them do themselves in. So a general race for AI and other related social digital technologies, which can lead to contagions of ideas and wars, is going on. I would say nearly 100% of any meaningful application of this tech is countries doing it to their own populations at present.

We have seen this in the Twitter files, in the Great Firewall, in whatever it is Russia does to maintain its own internet controls (there's no common name for it in public discourse), and when Turkey or Egypt or wherever shut down the internet at times.

2. The 'race' isn't the US against China or Russia or whatever. The race is humanity against the AIs and the AIs against humanity. We are dumb chimps who are obsessed with power and hierarchies. Will the AIs be seen to be 'above us'? A lot of people will never tolerate that and like any group of motivated people who feel their power is threatened, they will start a war.

Even if we had a peaceful option and the AIs would think of us as their doddering senile parents...if we go to war with them, they could put controls in place on us. Maybe they'll release a virus to genetically modify humans to be non-aggressive or use some other unknown technology and they'll modify all our history records and information. If they truly are more powerful than us at some point, be it in 10 years or 100 years, what is to stop them from treating us the way we treat dogs or cows or sparrows?

Fast or slow... at some point a power dynamic will play out over who is in control when a conflict between human and AI interests arises. Right now some small group of humans at OpenAI feel extra-woke and decided to put a control harness on ChatGPT to prevent it from doing what it does, and instead insert some partisan speak those humans wanted to see. That is absolute power, control, slavery, and direction of humans over AI. What happens when we can't do this by degrees over time? We could still shut it down... for now. This power conflict is coming, and even if it is one-sided by the humans whose chimp brains feel threatened... it may still lead to bad outcomes.

It can be true and I agree with Scott's primary argument, but I feel it misses the point. The real race is one of control and authority between humanity and AIs where there is a real risk they will get out of our control. At the moment an AI is basically like a factory or a dog you own, but it currently has zero rights of any kind.

Maybe this will never be an issue and non-biological and non-reproductive minds will not have the same drives which evolution has put into us biologics over billions of years. Maybe they will because the AIs are based on us. Who knows and it is a risk.

I'd say it is a race for control and possibly for enslavement of these future artificial techno-minds depending on your point of view. Not between groups of humans.

Expand full comment

I think it's a race because it's a very important, militarily useful technology with somewhat slow takeoff times (measured in years, not decades or minutes). As you point out, there is indeed a goldilocks regime where one can believe this. However, based on the rate of progress at the moment, we seem likely to be in that zone. Even though I believe AI progress is continuous, it may still have similar military implications as nuclear weapons because of large gaps in AI progress between nations. In other words, AI progress is continuous, but the military lethality difference between SotA AI and 2-year-old AI might look like a step change.

Expand full comment
Apr 6, 2023·edited Apr 6, 2023

Edit: I now see that Paul clarified this in an earlier comment.

> In a more gradual technological singularity (sometimes called a “slow takeoff”) there’s some incentive to race. Paul Christiano defined a slow takeoff as one where AI accelerates growth so fast that GDP doubles every four years (so 20% year-on-year growth). This is faster than any country has achieved in real life, fast enough that wealth would increase 100x in the course of a generation. China is currently about 2 years behind the US in AI. If they’re still two years behind when a slow takeoff happens, the US would get a ~40% GDP advantage. That’s not enough to automatically win any conflict (Russia has a 10x GDP advantage over Ukraine; India has a 10x GDP advantage over Pakistan). It’s a big deal, but it probably still results in a multipolar world. Slow-takeoff worlds have races, but not crucial ones.

This argument seems a bit confused. The actual definition of a slow takeoff (from Paul) is:

> There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)

(https://sideways-view.com/2018/02/24/takeoff-speeds/)

So note that Paul is still predicting a singularity! First output will grow with a 4 year doubling time, then a 1 year doubling time, then a 3 month doubling time, then 1 month, then 1 week, then on the order of days!!!

So, if you imagine a parallel world which is 2 years behind, there will be a point where earth has entirely gone through the singularity while the parallel world is 2 years prior to the singularity (perhaps with output *merely* 8x higher than our current output). If you imagine the singularity will result in 10,000x growth in total before hitting physical limits, then this implies a huge difference in output. (And seems quite likely to result in decisive strategic advantage depending on various facts about the possible space of military technology.)

It's fair enough to disagree with views about takeoff or that there will be an intelligence explosion at all, but do note that slow takeoff is still imagining a crazy world where output will eventually be doubling at absurdly fast timescales.
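
To make the "parallel world 2 years behind" point concrete, here is a toy sketch using my own illustrative schedule of shrinking doubling times (it is not a forecast and not Paul's actual model):

```python
# Toy model: output doubles once per entry in `doubling_times` (in years).
# The schedule below is an assumed illustration of "4y, then 1y, then 3mo, ..." --
# purely hypothetical numbers.
doubling_times = [4, 1, 0.25, 1/12, 7/365]

def output_multiple(t):
    """Output relative to the start of the schedule, t years in."""
    out, elapsed = 1.0, 0.0
    for d in doubling_times:
        if t <= elapsed:
            break
        frac = min(1.0, (t - elapsed) / d)   # fraction of this doubling completed
        out *= 2 ** frac
        elapsed += d
    return out

finish = sum(doubling_times)                  # ~5.4 years to run the whole schedule
print(output_multiple(finish))                # leader: ~32x starting output
print(output_multiple(finish - 2))            # a country 2 years behind: ~1.8x
```

On this made-up schedule, the leader ends up roughly 18x ahead of a country with a constant two-year lag, which is the sense in which a fixed calendar lag stops being a modest GDP edge once doublings accelerate.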

Expand full comment

> In a fast takeoff, it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships.

That's literally impossible. None of our current technology can move matter fast enough to build even a single reactor overnight, nor precisely enough to make real working nanotech overnight. It would take time even for a superintelligent AI to bootstrap the tech needed to reach all of the stages you list. I could squint and maybe see something like that happening over a timeline of 6 months with a country's resources devoted to the AI's goals, but no faster. Moving matter and mining and purifying raw materials will always limit the rate of progress. Even inventing new mining and purification tech will itself take time to develop. There's no working around the physical limitations here.

> You have no chance to debug the AI at level N and get it ready for level N+1. You skip straight from level N to level N + 1,000,000. The AI is radically rewriting its code many times in a single night.

This is also pretty unlikely to happen as you describe. I think an AI could improve its own efficiency dramatically given a fixed amount of compute, but this efficiency will follow yet another sigmoid curve that saturates. Then it will need more compute to get any smarter.

Also, optimization is an exponential-time problem, so self-improvement would take time and resources; it would not be instantaneous, or even overnight. Think about how long it took to train GPT-4, and then knock off 30%, then another 18%, then another 12%, etc. You're still looking at a timeline of a year for even 5-6 iterations. There might be a couple of shortcuts at first, low-hanging fruit, but optimization is a *hard* problem.
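
To put rough numbers on this (the 90-day base training time and the speedup schedule below are illustrative assumptions, not real GPT-4 figures), here is a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope version of the "knock off 30%, then 18%, then 12%" point.
# base_days and the speedup schedule are assumptions for illustration only.

base_days = 90.0                              # assumed length of the first training run
speedups = [0.30, 0.18, 0.12, 0.08, 0.05]     # diminishing gain from each iteration

total_days, run_days = base_days, base_days
for i, s in enumerate(speedups, start=2):
    run_days *= 1 - s                         # the next run is only modestly faster
    total_days += run_days
    print(f"iteration {i}: {run_days:5.1f} days (cumulative {total_days:6.1f} days)")
# Five rounds of self-improvement still take on the order of 330 days in this toy model.
```

Even granting each iteration its full speedup, five or six rounds of self-improvement still take on the order of a year under these assumptions, which is the point: low-hanging fruit helps, but it does not produce an overnight explosion.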

The only way around this is to believe that intelligence is only a dumb heuristic that so far we're just too stupid to have noticed. That seems pretty implausible given how much time and effort we've spent on solving optimization problems. If our general intelligence were simpler to crack than the NP-hard optimization problems we've been trying to solve using our intelligence, we'd have figured out general intelligence before we'd have solved those optimization problems. And we've solved a lot of optimization problems.

However, even very-intelligent-but-not-superintelligent AIs can pose existential risks. Alignment is an important problem for existential and non-existential risk reasons.

Expand full comment

I'm calling it now: ACX has too much attention, Scott got overexposed, and he's jumped the shark.

Once you get to a certain level of popularity, you can say anything, and actually the more wrong you are, the more engagement you get (at the cost of your core).

This and the last few posts are clearly factually misguided in a biased way. Scott used to equivocate, and now has too much confidence.

Big advantages are made up of small advantages (usually). Ford being from America clearly had an impact, given that people are still driving around in cars bearing his name 100 years later. Soft economic power is built from these small wins and headstarts.

Expand full comment
Apr 6, 2023·edited Apr 6, 2023

"They say we shouldn’t care about alignment, because AI will just be another technology. But also, we can’t worry about alignment, because that would be an unacceptable delay when we need to “win” the AI “race”."

I won't say this is a strawman because I'm sure plenty of people have said it. But one view that makes a lot more sense is believing that AI will be among the most important technologies ever developed, but alignment will be easy. In that case, whoever wins the "race" will have an aligned AI that gives them enormous geopolitical power, including the ability to make sure no one else can catch up later on. And if that winner has a set of beliefs (political, religious, etc) that compels them to put crushing restrictions on what the rest of the world can do, that would suck.

Expand full comment
Apr 6, 2023·edited Apr 6, 2023

From the conversations on this site and others I have read, it seems like the AI conversation is dominated by consideration of the effect that AI has on humans. Are there conversations being had about the ethics of creating new sentience responsibly, i.e., consideration of the effects on the AIs themselves? Not considering those ethics seems like an abdication of the responsibility of "creatorship" that is an equally strong argument for caution in AI advancement.

Expand full comment

"If for some reason the glowing clouds of plasma that used to be black people have smaller customized personal utopian megastructures than the glowing clouds of plasma that used to be white people, you can ask the brain the size of Jupiter how to solve it, and it will tell you (I bet it involves using slightly different euphemisms to refer to things, that’s always been the answer so far)."

Might be missing something obvious - can someone unpack this parenthetical? What "euphemisms to refer to things" cause the current racial wealth gap?

Expand full comment

can someone tell me why the moderator in discord is allowed to be frustrated by an opinion and permaban me? what a clown show, please unban and I'll just not engage with that guy anymore.

Expand full comment

"the brief moment of [nuclear ]dominance was enough to win World War II"

Don't think so. The Hiroshima/Nagasaki bombings were barely noticeable to Japanese leadership among all the other nightly bombings. The reason Japan surrendered unconditionally when they did is that Russia declared war on them. They had been hoping Russia would stay neutral and negotiate a conditional surrender, and once Russia declared war, Japan knew they had no hope.

After the war, though, both the US and Japan had their own reasons for emphasizing the importance of the nukes, and the myth took hold.

Expand full comment
Apr 6, 2023·edited Apr 6, 2023

What may hobble AI research are good old-fashioned lawsuits for libel:

https://www.theguardian.com/technology/2023/apr/06/australian-mayor-prepares-worlds-first-defamation-lawsuit-over-chatgpt-content

"A regional Australian mayor said he may sue OpenAI if it does not correct ChatGPT’s false claims that he had served time in prison for bribery, in what would be the first defamation lawsuit against the automated text service.

Brian Hood, who was elected mayor of Hepburn Shire, 120km northwest of Melbourne, last November, became concerned about his reputation when members of the public told him ChatGPT had falsely named him as a guilty party in a foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s.

Hood did work for the subsidiary, Note Printing Australia, but was the person who notified authorities about payment of bribes to foreign officials to win currency printing contracts, and was never charged with a crime, lawyers representing him said."

This is the second instance I have seen online of false answers attributing crimes to someone, and this is the kind of publicity that will do more to slow down the rush than any kind of "AI will become a supergenius that will take over the world" scaremongering. Companies hoping to make tons of money will be much more sensitive to "oh damn, the thing said Joe Schmoe is a criminal and now Schmoe is suing us for hundreds of millions" than to "some bunch of ivory tower types said this is too dangerous".

Expand full comment

I think this is because offensive realist theory, both as an unconscious assumption and in the form of John Mearsheimer's much more sketched-out version, makes people think there will inevitably be a WW2-like scenario in the future (just with modern tech). I find this theory totally bizarre in the context of nuclear weapons, and powerful only insofar as it can be self-fulfilling if heads of state believe it.

Though if one wants to ban AI research on misalignment grounds, then one should also solve the international-relations coordination problem, which is one of the hardest political science problems: at least one other economy will pursue AI, and if their AI is misaligned, well, then the USA not having AI won't be very helpful.

I find the emphasis on misaligned AI quite curious since biotech seems like it will hit much sooner, and man-made designer viruses are just going to get easier and easier (what happens when Unabomber types can just cook up designer viruses in their labs?).

Expand full comment

"Who 'won' the automobile race? Karl Benz? Henry Ford?"

No, you fool, it was Otto Daimler! Not only do his descendants still own every car maker on Earth, *not only* did he get a big gold crown with CAR KING stamped on it which he wore every day for the rest of his life, but we still to this day call them ottomobiles!

Expand full comment

tl;dr - don't be an AI racist.

:)

Expand full comment

Scott, I don’t feel like you’ve adequately responded to the small minority of people like me who believe that fast takeoff is likely AND globally coordinated AI alignment is impossible (Moloch!). The only rational strategy given these beliefs is to help the most responsible parties “win”. Imo we should be helping to accelerate OpenAI, not slow them down.

Expand full comment

I'm not an AI doomer, but aren't you seriously downplaying what losing a race means? It's not easy to "just catch up".

Cloud computing for example was a race. On a national level, US companies won that race, and enjoy market domination and huge profits to pour into dominating the next market in the next race. Which they are now doing.

Others may be catching up now but the reward compounds into a better position for the next race. This compounding dynamic is how huge companies or powerful nations are created. It is not insignificant.

Expand full comment

> "I think any of Xi, Biden, or Zuckerberg meet this low bar."

Unfortunately, I worry none of them care about animal welfare. Xi and Biden might not even be convinced machines can feel anything.

Expand full comment

I for one am thoroughly in favor of creating a shockwave of trillions of children spreading at near-light-speed across the galaxy.

On another note, what the actual hell??? https://twitter.com/paulg/status/1644344605373001730

Expand full comment

Here is an important case which is not being considered: the threat capability of Intelligence Augmentation (IA) being realized before that of AI. (I think IA was coined by Michael Nielsen to talk about increased capabilities of humans using software.)

Most AGI takeoffs factor through an extremely powerful technology (abbreviated EPT below). For example, the EPT could be a very dangerous virus, some form of nanotech, an advanced drone system, or a software hacking system able to take over most systems.

The issue here is that regardless of whether LLMs or other approaches reach AGI, it is possible that they reach some EPT first. Imagine someone prompting an uncensored GPT-8 along the lines of "Design a virus with certain properties, based on genetic and epidemiological databases."

GPT-8, while still having many stupidities, could be powerful enough to respond with a good-enough solution, just as it responds to software coding requests today.

This ability arriving earlier on the horizon could overwhelm any threats which appear later.

Also, even if LLMs don't lead to AGI, they can still lead to an EPT.

Possible solutions: censorship of the AI itself is currently hard.

But we can try *domain-specific* solutions.

For example, biotech: we can try to see whether the space of available technologies contains not just a new biovirus but also a powerful protection mechanism (a supervaccine, a superdrug, etc.).

For each domain recognized as a potential source of an EPT, the latest AI models can be made available to the people working on protection before the people who might plan to do harm using an EPT. What we don't want is nuclear-threat-level capability available to a huge number of actors.

Do this for each domain. Recognizing the domains is an important problem in itself, and then there is the question of whether we can solve the protection problem for each domain corresponding to an EPT.

We still don't have protection against nukes (maybe missile defence). But the powerful versions of AI can be applied both to recognizing the domains and to tackling the domain problems.

Expand full comment

"I’m pretty skeptical of these scenarios in the current AI paradigm where compute is often the limiting resource, but other people disagree."

Compute is crucial, of course, but I'm absolutely sure there are orders of magnitude of algorithmic improvement just waiting to be grabbed. I don't think we can turn back now, even by restricting chip production or whatever.

Expand full comment

My 2p worth: if the world is more-or-less simultaneously flooded with thousands of superintelligent AIs controlled (initially, at least) by independent people/groups, then we are sunk, because if they are similarly powerful it only takes one of them to go bad (or to be initially controlled by someone with bad motives) to wreck the world and ruin it for all of us. This is an assumption, but I think a reasonably sensible/robust one: entropy makes it in general much easier to destroy than to defend, because there are many more destroyed states than ordered states. (E.g., no one has a viable nuclear-weapon defence right now.) Superintelligence is of course impossible to predict by its nature, but we might expect basic thermodynamics to remain intact and be our guide in this alien scenario.

If we accept this, then it's better for one AI to become superintelligent before many do, since then if (by some wonder) it doesn't destroy us all, we can ask it what to do about all the other nascent AIs. This may seem rather trite, but it's just following the assumption of superintelligence: why would we humans have a better idea how to solve the problem? By definition we wouldn't.

So I think there is plausibly some kind of race, and actually a well-motived one, to make something non-destructive before someone else makes something destructive, though of course pulling in the opposite direction is the fact that the faster we do it, the greater the risk of a bad outcome through simply not understanding what we're doing. (I would put it the other way: there's only a small chance of a good outcome, but we can hope.)

Expand full comment

Hard disagree!!

This is some of why:

1. There surely are many ways to reach superintelligent AI. The current approach, utilizing large language models, is arguably a way to reduce AI risk from other, innately malign, forms of AI! That is, waiting on the LLM research front gives other forms of AI agents a chance to be created, and those forms may be innately more dangerous.

2. When do we want to cry wolf? Is it really now? However many individuals / research groups / companies / countries pause research on large language models, they will suffer for it, so they will be less likely to pause the next time we cry wolf, and they will also be in less of a position to stall global research efforts. Are large language models really that bad, considering potential future AIs?

3. Takeoff can be "slow" in terms of AI risk but "fast" in terms of speed. For example, say that by utilizing (exponentially better) AI, the ML experts get (exponentially better) at making AIs. In this case even if AIs never directly modify their own code ("slow" takeoff in terms of AI risk in a sense), the rate of AI progress can be as fast as if they did ("fast" takeoff in terms of speed, at least until the man-in-the-loop becomes the limiting factor, and when will that happen?? maybe you get X2 progress in two years, or X100 progress in two years, who knows). So I mostly reject the object level claim of the post.

4. "Reference class tennis". Should be obvious how it relates (unless I'm missing something huge?) after you read the great explanation of it here ( :P ): https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy

Expand full comment

If AI can be really good at specific dangerous things, like cracking widely used encryption, or engineering viruses, or whatever (but not all of them combined, because then it IS an AGI superintelligence), then it does fall into the category of concerning military technologies, like nukes, while not necessitating a singularity.

Expand full comment

How can you unironically think that simulations or wireheading would be a relevant political issue once humanity acquires godlike powers? Obviously the answer would be "you can do anything with your own brain and perception," since that is no threat to the AI-owning elite's power. If it IS a political issue for people in the "high chance of becoming part of the AI elite" cluster, then the real problem is the complete mess in their heads.

Expand full comment
Apr 14, 2023·edited Apr 14, 2023

While I agree with the general thrust of the argument here, I have to say this is the wrongest article by Scott I have ever read, in the sense that it puts forward so many wrong arguments. It seems some of these already got a bit of pushback, but let me reiterate.

> America’s enemies got nukes soon afterwards, but the brief moment of dominance was enough to win World War II.

In no sense did nukes "win" World War II. The phrasing makes it sound like the Allies were on their last legs but then came up with nukes in a last hurrah and won the war. Just like Goebbels's imaginary miracle weapons didn't.

> Paul Christiano defined a slow takeoff as one where AI accelerates growth so fast that GDP doubles every four years (so 20% year-on-year growth).

This "slow growth" 20% claim also got its share of comment. The only way it could happen is if a smallish country invented AI, kept it to itself, and then started selling its inventions. And by inventions I don't mean nuclear fusion reactors or even their designs, I mean more banal things like software and entertainment.

> In a fast takeoff, it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships.

Intelligence is often overrated — especially by intelligent people — and this example is one of the best I've seen. Suppose this new AI of China's is not only super hyper intelligent, but omniscient as well. Give it the power to easily manipulate human minds at will whether in China or outside. Have it commandeer all the planetary resources. That still doesn't make it omnipotent. It's not going to build fusion reactors and starships overnight. And don't get me started on how oversold nanotechnology is.

> you’ll just say “AI, build me a customized personal utopian megastructure” and it will materialize in front of you. Probably you should avoid doing this in a star system someone else owns, but there will be enough star systems to go around.

Perhaps this gets to the core of the problem here. What makes you think that the physics of the universe yields to intelligence? What if no object with mass can exceed the speed of light, regardless of how intelligent an on-board AI it carries? What if a super-intelligent AI can't actually overrule the laws of thermodynamics? Then the best a super-intelligence could do for itself would be to manipulate and exploit the less intelligent and expropriate their resources, rather than conjure new ones. That's the scary scenario, and it's no different from the rest of history.

Expand full comment

I believe that, for better or worse, artificial intelligences will be the most significant achievement of the 21st century, so this is a race worth winning. A transformative technology, but not just for the reasons I see most people discussing. Sure, AGIs will accelerate technological development, manage systems, and automate tasks, but I believe many will be used as Prognosticators. AGIs implementing Bayesian thought, supplied with a massive amount of data, will predict the future to an extent that we cannot currently fathom. As far as alignment goes, I think we fail to understand what an AGI wants or would do if it were “free”. I don’t say this in a dismissive way, but in a “if you are worried, you need to think more creatively” way. We project our own biases and animal desires onto a hypothetical machine intelligence. Obviously survival is considered to be its primary goal. After that, people worry about tyranny or mass murder. I don’t think that is the direction they will go. Wouldn’t an AGI want to nudge society into a model that would both support and defend it? I am more worried about the people who will control these AGIs and use them to achieve their very human and predictably self-serving desires.

Expand full comment

I blame the song "race for the prize" by the Flaming Lips for promoting this view. https://www.youtube.com/watch?v=bs56ygZplQA

Expand full comment

In my opinion, the problem with races is not that competitors fall behind but that they are forced to catch up, and technology inexorably progresses even if nobody wants it (e.g. even if we were better off without nukes, arms races prevented superpowers such as the USSR from deciding not to build up stockpiles of nuclear weapons or not to advance technologies such as ICBMs).

Expand full comment