
Small note on Ted Chiang. I recently read both of Ted Chiang's short story collections, and I could not stop thinking how SSC-like many of the stories were. Would highly recommend to any SSC reader!

Good to see that Scott would also recommend his short stories - but a shame that Ted has unusual reasoning when it comes to AI/capitalism.


> Then he goes on to talk about several ways in which existing narrow AI is pernicious, like potential technological unemployment or use by authoritarian states.

I remember a long time ago, back on the SSC blog, Scott wrote an article that was like an experiment on different ways to convince people that AI risk is something worth caring about.

Emotionally and intuitively I still can't bring myself to care. But I have to say, the one thing that really shifted me from "pfft this is dumb" to "huh, they kind of have a point" was "imagine how authoritarian states could weaponize this".

That one really resonated with me, because it means that you don't even need superintelligence for AI risk to be a serious problem.

I'm writing this comment while reading, and haven't read anything past the quoted passage. But, like, the quoted passage kind of _is_ a knock-down argument? It's just a knock-down argument in favor of the AI risk hypothesis, and not the other way around!


I have to agree with the conclusion that it's just bad editing that discussion of "long-term AI risk" made it into the article at all.

That said, I still disagree with most of the "long-term AI risk" advocates. As a rule, they seem to over-emphasize the importance of "AI in a large box achieves something comparable to human intelligence" and minimize the importance of "building a global army of hackable death robots in the first place".

And those death robots could be drones and robo-soldiers, or simply robo-butlers that you buy to tend your garden and walk your dog, but could kill you in a dozen ways if programmed to do so.


Eh, I think this is taking the phrasing from the article a bit too literally. I read, then re-read it, and I don't take Daron Acemoglu's argument to be: literally stop caring about evil AI because it's already bad. Rather, I think he's saying something more nuanced: stop worrying about an eventual Skynet-like AI to the extent that you lose focus on the harm that AI is doing right this moment. And if you address the present ills of AI, you potentially avert the future bad AI as well, no?


Is "it can’t possibly be dangerous in the future" the same as "we shouldn't spend time worrying about it now"? The former seems worth arguing against. The latter could reasonably be interpreted as "we need to worry more about current AI".

If Acemoglu argues the latter, you're not disagreeing with him. For example, "AI systems now is both important in its own right, and a trial run / strong foundation for controlling stronger AI systems in the future" matches well with "we should worry about current AI more".


I don't want to get banned again, but it's hard to overstate how damaging Roko's Basilisk and that type of thing is for AI risk's perceived seriousness. I read AI risk stuff and nod along, and then suddenly there's an injection of some utterly fanciful thought experiment that, friends, depends on a series of chancy events all playing out in just the right way, and it's not only something we're expected to take seriously as a future threat but something that should determine a great deal of our behavior today. It's just not helpful, in a variety of dimensions, from my perspective.


This isn’t a good analogy at all.

“ People have said climate change could cause mass famine and global instability by 2100. But actually, climate change is contributing to hurricanes and wildfires right now! So obviously those alarmists are wrong and nobody needs to worry about future famine and global instability at all.”

Because nobody is arguing that climate change now doesn't lead to more climate change in the future; they are the same thing, just accelerated.

However there’s no certainty that narrow AI leads you a super intelligence. In fact it won’t. There’s no becoming self aware in the algorithms.


This isn't directly related to the article, but it did remind me of a gripe I've had bouncing around in my head a bit lately.

I'm skeptical of the risk of near term AGI, so I've had several conversations with people of the "true believer" variety, following roughly this skeleton:

> Person: <saying things which imply the risk and near-term likelihood of AGI are high, possibly involving phrases of the flavor "of course that won't matter in 5-10 years blah blah">

> Me: I think this subculture vastly overstates the plausibility and risks of near-term AGI

> Them: Well, since the potential downside is very high, even a small probability of us being right means it's worth study. You want to be a good Bayesian don't you?

> Me (wanting to be a good Bayesian): Ah yes, good point. Carry on then.

> Them: *resumes talking about their pytorch experiment as if it is a cancer vaccine*

To me it feels very Motte-and-Bailey. Having been attacked in the Bailey of high probability occurrence, a swift retreat to the Motte of expected value calculations is made.

Now, I don't think _actions_ by institutes working on alignment or whatever are necessarily misguided. I'm happy for us to have people looking into deflecting asteroids, aligning basilisks, eradicating sun-eating bacteria, or whatever. It's more that I find the conversations of some groups I'd otherwise have quite a lot in common with very off-putting. Maybe it's hard to motivate yourself to work on low-probability, high-impact things without convincing yourself that they're secretly high probability, but I generally find the attitude unpleasant to interact with.
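To make the motte concrete: it is just an expected-value claim, and a toy calculation shows why it survives even at a low probability (every number below is invented purely for illustration, not anyone's actual estimate). The bailey is the separate claim that the probability isn't actually low.

```python
# Toy expected-value comparison. All four numbers are made up to show
# the structure of the argument; none of them are real estimates.

p_agi_catastrophe = 0.01       # assumed (low) probability of the catastrophic outcome
cost_agi_catastrophe = 1e10    # assumed badness, arbitrary units

p_mundane_harm = 0.9           # assumed (high) probability of near-term harms
cost_mundane_harm = 1e6        # assumed badness, arbitrary units

ev_catastrophe = p_agi_catastrophe * cost_agi_catastrophe   # 1e8
ev_mundane = p_mundane_harm * cost_mundane_harm             # 9e5

print(ev_catastrophe > ev_mundane)  # True: the low-probability term dominates
```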


You don't need to talk about the Chinese surveillance state to see the problem. All you have to do is look inwards at the surveillance state exposed by Snowden et al. It's much worse in America than in China.


> But all the AI regulation in the world won’t help us unless we humans resist the urge to spread misinformation to maximize clicks.

Was with you up to this point. There are several solutions to this other than willpower (resisting the urge).

The basic idea - change incentives so that spreading misinformation remains possible but is substantially less desirable/lucrative than other options for online behavior.

This isn’t so hard to imagine. Say there’s a lot of incentives to earn money online doing creative or useful things. Like Mechanical Turk, but less route behavior and more performing a service or matching needs.

Like I wish I had a help desk for English questions where the answers were good and not people posturing to look good to other people on the English Stack Exchange, for example. I would pay them per call or per minute or whatever. Totally unexplored market AFAIK, because the technology hasn't been developed yet.

Another idea - give people more options to pay at the article level for information that's useful to them, or to have related questions answered, without needing a subscription or a bundle. Say there's some article about anything and I want to contact the author and be like "hey, here's a related question, I'm willing to offer you X dollars to answer it." The person says "I'll do it for X+10 dollars."

One site used to unlock articles to the public after a threshold of Bitcoin had been donated on a pay-per-view basis. It both incentivized the author and had a positive externality.
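The mechanism, as I understand it, is just a threshold unlock. A minimal sketch, purely hypothetical and not that site's actual code (the Article class and the numbers are made up):

```python
# Hypothetical sketch of a threshold-unlock / crowdfunded-paywall model:
# readers pledge toward a specific article, and once pledges cross a
# threshold it unlocks for everyone (author gets paid, public benefits).

from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    unlock_threshold: float              # amount needed to unlock
    pledged: float = 0.0
    unlocked: bool = False
    pledges: dict = field(default_factory=dict)

    def pledge(self, reader: str, amount: float) -> bool:
        """Record a pledge and unlock the article once the threshold is met."""
        self.pledges[reader] = self.pledges.get(reader, 0.0) + amount
        self.pledged += amount
        if not self.unlocked and self.pledged >= self.unlock_threshold:
            self.unlocked = True
        return self.unlocked

# Usage: three readers collectively unlock a paywalled piece.
post = Article("Some paywalled article", unlock_threshold=25.0)
post.pledge("alice", 10.0)
post.pledge("bob", 10.0)
print(post.pledge("carol", 5.0))  # True -> now free to everyone
```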

Everyone is so invested in ads that they don’t work on technology and ideas to create new markets.

To paraphrase Jaron Lanier: we need to make technology so good it seduces us away from destroying ourselves.


maybe the real ai was the writers you respected and were frustrated with along the way


I'm skeptical of AI risk in its strong form. In its weak form, of course AI has risk! Everything has risk! The first person who invented sharp stones created sharp stone risk. But the millenarian version where AI will be the singularity is a bit silly in my estimation.

Yet I still find that the kind of people who argue against it tend to engage in rather surface-level discussions. I've always suspected this is down to motivation: the anti-AI-risk community is, by definition, not nearly as concerned as the AI risk community itself. I suspect this is the case here too: Acemoglu is just not as interested in AI risk as in his own hobby horses, so he ends up talking about them more.


Gotta disagree. Working on fixing the long-term threat of super-intelligent evil AI will not necessarily help you fix today's problems, but fixing today's problems will most certainly teach you something about fixing the long-term threat, should it exist. I mean, I can't offer a complete criticism of the work done in AI risk, and I'm not going to sit here and play utilitarian table tennis with you over to what extent we should worry about both the worst and least-worst case at the same time. What I can do is choose where to put my money and time, and I think I make the reasonable choice not to spend too much time writing or building anything in the world of 'hypothetical AI', so to speak.


"I’m sure the 16th-century Catholic Church would have loved the opportunity to exercise “oversight”, “prevent misuses” and “regulate its effects on the economy and democracy”.

You *know* I am going to have to respond to this 😁 Okay! So what were we (or at least the various popes) doing during the 16th century?

If we take the 16th century as running roughly from 1501-1600, we get the following list.

Alexander VI (Roderic Borgia, yes, that Borgia) – died 1503. Well, I imagine we all know what he was up to – allotting the New World between Spain and Portugal in the bull Inter caetera.

Pius III - died within 26 days of being elected, so didn’t have the chance to instigate his reforms

Julius II – Oh yeah, this guy. Liked to fight wars and expand the Papal States, with a little light patronage of the arts in between, such as commissioning Michelangelo to do the Sistine Chapel.

Leo X – a Medici pope, most famous for loving to hunt, his elephant, and ignoring what Luther was doing over there in Germany. Another patron of the arts.

Adrian VI – Dutch. Tried to reform in response to the Reformation. More worried about the Ottoman Turks.

Clement VII – Another Medici, another patron of the arts. Got involved in Henry VIII’s marital difficulties as appealed to for annulment; was present for the sack of Rome by Imperial troops (see the Swiss Guard), caught in the middle of the European power struggle between Francis I of France and Charles V, Holy Roman Emperor.

Paul III – another patron of the arts. Excommunicated Henry VIII, initiated the Council of Trent, and recognised the Jesuits. Issued the bull Sublimis Deus declaring that the indigenous peoples of the Americas were not to be enslaved or robbed of their possessions, which the Spanish colonists ignored.

Julius III – decided to enjoy his papacy, which involved a scandal with his adopted nephew. Did the usual nepotism of giving the nephew plum benefices but since he wasn’t a blood relation, gossip about what their real relationship was naturally flourished.

Marcellus II – decided not to be scandalous for a change. Patron of scholars, again tried to institute reforms, but was sickly and in poor health so soon popped his clogs like Pius III. Greatest achievement of his papacy probably the Mass by Palestrina composed in his memory.

Paul IV – apparently accepted the job because Charles V didn't want him to have it, and he hated the Spaniards after serving as papal nuncio in Spain. Decided to crack down on all this immorality, so established the Index of Prohibited Books and ordered Michelangelo to repaint the Sistine nudes more modestly, which has been a subject of mirth for art historians ever since, but come on: coming after a list of “mistresses, illegitimate kids, more mistresses, possible gay sex scandal with adopted nephew”, you can see where he was coming from. Very austere and strict, hence very unpopular. Did not like heretics, Protestants, Jews, the Spanish, liberal religious attitudes, or fun sexy times (including popular Roman pastimes of financial corruption etc.).

Pius IV – much more moderate, so started off his reign by having the nephews of his predecessor executed, one by strangulation and one by beheading, as well as pardoning everyone who had rioted after Paul IV's death. Apart from that, he had the water supply of Rome improved.

St. Pius V – another reformer, anti-nepotism and anti-corruption, and very orthodox. Arranged the Holy League, which had a famous victory at the Battle of Lepanto. Standardised the Mass, declared St Thomas Aquinas a Doctor of the Church, and excommunicated Elizabeth I. Prohibited bull-fighting, did more work on the water supply and sewers of Rome, and was generally unpopular with the Roman populace as a no-fun wet blanket, although he did walk the walk as well as talk the talk. Eventually canonised in the 18th century.

Gregory XIII – you may thank him for the calendar we are now all using in the West. Avoided scandals (to an extent; he did have a mistress and illegitimate son but this was practically vanilla by papal standards) and was a patron of the Jesuits.

Sixtus V – never heard of him? A good omen. A Franciscan, had building works on Roman basilicas done and was a bit too enthusiastic with knocking down old buildings and antiquities to get new work underway. Mostly kept his head down and didn’t cause any trouble. Did not like the Jesuits (well, he was a Franciscan) and gave the administration of the Church a good shake-up.

Urban VII – another one who managed to fall off the perch, this time even before he was formally crowned. Did manage to institute a global smoking ban in and near all churches.

Gregory XIV – another one who kept his head down, didn't get into trouble, and was pious as you would hope a pope would be. Ordered reparations to be made to the natives in the Philippines who had been enslaved, and commanded under pain of excommunication that native slaves should be freed by their owners. Made gambling on papal elections punishable by excommunication, so probably would not be a fan of prediction markets.

Innocent IX – generally when a pope takes a name like “Innocent” or “Pius”, it’s a bad sign. However this was another guy who suddenly developed bad health and died soon after election, so he didn’t get a chance to make trouble for anyone.

Clement VIII – who brings us up to 1605 with his death. Set up an alliance to oppose the Ottoman Empire. Got the Dominicans and Jesuits to agree on a dispute over free will, which is probably an even bigger achievement. May have been the first pope to drink coffee, so Wikipedia says. Presided over the trial of Giordano Bruno, so if you want you can call him anti-science, but the more serious thing is that he was another one who tightened measures and instituted penalties against Jewish inhabitants of papal territories.


I also find this summary:

>1. Some people say that AI might be dangerous in the future.

>2. But AI is dangerous now!

>3. So it can’t possibly be dangerous in the future.

>4. QED!

an uncharitable misreading of Acemoglu's short opinion piece. [a]

Let me quote all the paragraphs Acemoglu writes about super-intelligent AGI:

>Alarm over the rise of artificial intelligence tends to focus too much on some distant point in the future, when the world achieves Artificial General Intelligence. That is the moment when — as AI’s boosters dream — machines reach the ability to reason and perform at human or superhuman levels in most activities, including those that involve judgment, creativity and design. (0a)

>AI detractors have focused on the potential danger to human civilization from a super-intelligence if it were to run amok. Such warnings have been sounded by tech entrepreneurs Bill Gates and Elon Musk, physicist Stephen Hawking and leading AI researcher Stuart Russell. (0b)

>We should indeed be afraid — not of what AI might become, but of what it is now. (1)

>Almost all of the progress in artificial intelligence to date has little to do with the imagined Artificial General Intelligence; instead, it has concentrated on narrow tasks. AI capabilities do not involve anything close to true reasoning. Still, the effects can be pernicious. (2)

> (stuff about narrow AI and automation)

>If AI technologies were truly spectacular in the tasks they performed today, the argument would have some validity. Alas, current AI technologies are not just far from general intelligence; they are not even that good at things that are second nature to humans — such as facial recognition, language comprehension and problem-solving. This means a double whammy for labor, because AI technologies displace labor and don’t generate any of the labor-demand boost that would have resulted if the technology had delivered more meaningful productivity gains. (3)

>(more about narrow AI and automation and democracy)

>These choices need oversight from society and government to prevent misuses of the technology and to regulate its effects on the economy and democracy. If the choices are left in the hands of AI’s loudest enthusiasts, decisions that benefit the decision-makers and impose myriad costs on the rest of us become more likely. (4)

>The best way to reverse this trend is to recognize the tangible costs that AI is imposing right now — and stop worrying about evil super-intelligence. (5)

So what have we read here?

(0a,b) Introduction to the concept of machine intelligence, in a dismissive tone. "Boosters" dream of benefits of superhuman abilities of machine intelligence. "Detractors" have focused on the problems of the said dream.

(1) We should not be afraid of super-intelligence. We should be afraid of narrow AI today.

(2) Current (and probably future) AI systems are not reaching general intelligence or reasoning. However, the current narrow AI abilities already lead to bad stuff.

(describes the bad things that happen because of narrow AI systems)

(3) Current (and probably future) AI systems are not only failing to reach general intelligence, they are not doing that well in some narrow tasks either, which means we won't even get productivity gains (more bad stuff because of narrow AI systems).

(4) If oversight and regulation is left to "enthusiasts" (that is, "boosters"), bad stuff happens, so oversight by society and government is needed.

(5) Best way to "reverse the trend" (of narrow AI systems causing bad stuff happen) is "to recognize the tangible costs that (narrow) AI is imposing right now" and not worry about superintelligence.

This is an argument that Scott probably also disagrees with, but it is less inane than Scott's 1+2+3+QED version we read before:

"It looks like narrow AI technology is not at general intelligence yet, and given its track record, it is not even heading towards general intelligence. Current not-general AIs cause problems today. We should direct more resources to current problems caused by current AI systems and no resources to hypothetical problems caused by hypothetical systems I think are quite unlikely."

There are many obvious complaints (maybe Acemoglu is mistaken about the potential of the current systems, Acemoglu should also consider longer timescales, Acemoglu should consider the cost-benefit calculus if AI risk is very very small but the potential outcomes have large magnitude, etc.), but it is coherent.

The most annoying thing about this essay is that Acemoglu writes as if nobody else ever thought about the problems that spring from the increased capabilities of narrow-AI systems.

[a] "Short", as in, ~800 words long, while our host's reply is 1600+ words. It is not a super important point, but it is easier to make sophisticated arguments when you can flesh them out.


"I have no idea why Daron Acemoglu and every single other person who writes articles on AI for the popular media thinks this is such a knockdown argument. "

To put the most charitable interpretation on it, they may be trying to argue "Stop worrying about what *could* happen at some indeterminate time; you should be concerned by what is happening *now* when we don't yet have AGI but we already have matters of concern due to what AI as it is *today* is being used for, who is using it, and how they're doing it".

"But somehow when people wade into AI, this kind of reasoning becomes absolutely state of the art."

Congratulations, now you know how I feel when people start prescribing fixes for the Catholic Church after they've demonstrated they have no idea what the particular doctrine, dogma or discipline they are discussing even means.

It's more "don't worry about the well running dry in fifty years time, fix the hole in your leaky bucket today".


"But the case for concern about superintelligent AI is fundamentally different. The problem isn’t just that it makes evildoing more efficient. It’s that it creates an entirely new class of actor, whose ethics we have only partial control over. Humans misusing technology is as old as dirt; technology misusing itself is a novel problem."

It isn't *entirely* a novel problem. Corporations were a new class of actor when they were created, and they are one whose ethics we have only partial control over. They are much larger than humans, have much more resources than any human, and have much more information processing power than any human. As a result, they have caused much good and much harm to humans. There are surely important ways in which AGI would be disanalogous to corporations, but I think corporations are probably the best model we have right now for understanding the kind of risk an AGI might begin to pose.

(Governments probably also count, though I think governments are usually designed to be responsive to human wishes. A monarchy, or totalitarian dictatorship, might naturally be understood as an individual human who controls a powerful tool. A corporation, though, is a thing that maximizes its own value, in the form of profit. Democratic governments probably have the sort of structural complexity that might make their control system into something more like an inhuman value than like the values of an individual human.)


My first clue - the early mention of Elon Musk. This article uses "AI" as a stand-in for "tech industry/Silicon Valley." I agree with Scott, saying "stop worrying about the possibility for future evil in AI and instead worry about current evil in AI" is weird.

But "stop worrying" is reverse psychology for sure. They expect readers to immediately feel less reassured, and start worrying right away. If someone just tells me to stop worrying, I get suspicious, and I think that's a common reaction. So really it translates to "immediately worry about the possibility for future evil, and also worry about current evil" from AI, but AI is described in such vague terms. All those links to other research, which is hardly described in the article, mean there is the expectation that people will not click all the links but instead imbibe the tone. They don't mean AI, they mean Silicon Valley and big tech, and they ping those neurons using the names Gates and Musk. Those handwaving definitions mean whatever complex of associations the reader has about robots, computers, machines, smartphones, 5G, smart homes, diagnostic tools, algorithms, bad Youtube feeds, social control, lies, and soulless power.

I am not familiar with Acemoglu's work but maybe the editors took what he wrote and turned it into this "fear Silicon Valley now and forever." He probably did not mean to write that. The content is secondary to the progression of subconscious associations. It can be generated via deceptive editing.

You can definitely discuss aspects of the article using criticisms of the structure of arguments, and information about the actual state of AI development and research, but you gotta also translate the propaganda cues.


> On the other hand, what about electricity? I am sure that electricity helps power the Chinese surveillance state. Probably the barbed wire fences on their concentration camps are electrified. Probably they use electric CCTVs to keep track of people, and electronic databases to organize their findings.

I believe I've seen this referred to as the "criminals drive cars" argument. (I thought that term was due to Bruce Schneier, but I can't find a source. Probably he gave it a slightly different name and I'm misremembering it.)


Because it's a woke position to take in modern media, with an undertone of the 'white man on the moon' vibe. And to stay in elite circles without being born into them, one needs to constantly feed the media gods the positions they require.


Can somebody point me to a good (and relatively concise) discussion of why super intelligent AI is a risk that makes sense to think about? I can imagine ways in which super intelligent AI would be bad, perhaps even catastrophically bad. But I can't think of any examples where people reliably predict technological development's impacts 20 or 30+ years out outside narrow bounds. Maybe there are a handful of people who get some significant part of a benefit or threat right, but for the most part, long-term projections of constantly changing systems are wrong about so much, their predictive successes are overwhelmed by their failures.

Right now, I mostly dismiss the threat as too unknowable to worry about, but if there's something I'm ignorant of (a huge probability!), then, hey, I want to up my anxiety med dosage.


> "Almost all of the progress in artificial intelligence to date has little to do with the imagined Artificial General Intelligence; instead, it has concentrated on narrow tasks. AI capabilities do not involve anything close to true reasoning."

When I hear arguments like this, that current AI discoveries don't look that impressive and there's no obvious path to general AI so we shouldn't worry about superintelligence, I can't help but think of the Scientific American article "Don't Worry -- It Can't Happen", published in 1940, which argues that atomic bombs are impossible.

https://www.gwern.net/docs/xrisks/1940-sciam-harrington-nuclearweapons-dontworryitcanthappen.pdf

It describes some (very reasonable!) experimental and theoretical reasons to believe that fission reactions could not go critical if you were using unenriched uranium without a moderator. Rather than focusing on the "if", rather than wondering if there might perhaps be some other way of doing fission that *would* work, they simply closed the article by advising the reader to get some sleep.

When the atom bombs fell on Hiroshima and Nagasaki a few years later, they indeed didn't use unenriched uranium without a moderator. The first bomb flung together pieces of highly enriched uranium made via tricky and expensive isotopic separation, and the second used an imploding core of plutonium that had been created beforehand through the rather outlandish procedure of bombarding uranium with extra neutrons in a breeder reactor. The people at the Manhattan Project, rather than dwelling on the things that wouldn't result in a functioning atom bomb, had taken the cognitive leap of looking for things that would.


>"Pitching it as “instead of worrying about superintelligence, worry about this!” seemed like a cute framing device that would probably get the article a few extra eyeballs."

I really don't know why you didn't consider it, or didn't mention it if you did, but to me it seems obvious that he treats this as zero-sum. IDK what exactly Acemoglu would consider the limited resource - maybe research funding, maybe public attention span, maybe willingness to go and spread awareness of an issue - but to me his article very obviously reads as saying "If you are concerned about AI risk and want to give this limited thing to improve the situation, don't waste it on the long-term AGI risk people; give it to current AI risk people". I don't know enough about Acemoglu's current research to know if this is directly self-serving (i.e. if he is a current-risk AI researcher).


I've always found this argument weird, because the reverse seems much more convincing to sceptics: "here's a bunch of examples of AI being biased/wrong/misleading right now, we expect this to cause significant problems in the future, and so we think we should fund AI safety more and carefully regulate the industry."

The other classic argument is that people have always feared the risk of technology/magic run amok... I'm just saying, maybe the Golem myth is recognising something important about the dangers of poorly constrained power?

I think there is a tendency to dismiss future concerns in favour of current problems, but we could try being concerned about both - you know, just a little?


> 3. So it can’t possibly be dangerous in the future!

I read the piece in the Post. I have to go along with the readers that thought your analysis was uncharitable.

The OpEd piece did not say or imply that AI can’t possibly be dangerous in the future. It was simply calling attention to more immediate concerns.


I don't think there's any merit to disregarding commentary from outside of the field of AGI and we should be careful about promoting that behaviour.

AGI is barely a field; in a sense it never will be a field (as soon as AGI actually exists, the field ceases to exist). There is a sense in which all of the best commentary comes from outside of it, and we *need* that commentary.

If anything, it would seem to me that economics is more relevant to AGI forecasting than most machine learning, as it encompasses familiarity with decision theory and advancing industrial processes.

It may be the case that most economists are wildly wrong about AGI due to having been overfitted on stories populated exclusively with humans larping agency consistently terribly, but the person who is most right about AGI forecasting is probably also going to be an economist.


I think that the main question centers around the notion of 'agency'.

Electricity, matter, etc., do not in themselves really represent any agent -- they can only do so through the complexity of the machine design. All forms of code, the design of machines, the selection and combination of algorithms, data, UI, etc., will (and cannot not) represent the agency (intentionality) of the developer (usually to the developer's, or their employer's, direct or indirect benefit). Electricity, matter, etc., are very simple in comparison to computer tech (algorithms, apps, etc.), and are thus more ethically/morally neutral. As the complexity of the causality increases -- as the machine includes more and more parts, as the code gets longer, etc. -- it more and more represents the intentions of its maker, either overtly or covertly, and becomes less and less ethically/morally neutral (either good or bad) in proportion to its complexity (specificity, unicity, etc.).

As such, I must find the argument attempting to directly compare the use of electricity, as tech, and the use of surveillance, as tech, to be incorrect for the purpose of establishing the notion of tech as being itself uniformly ethically neutral -- it is not. Surveillance tech, social media platforms such as Facebook, etc., are very much less socially neutral -- i.e., their use has very direct and strong overall social effects. The effects of surveillance apps, etc., are also far less physical than, say, electricity, which when used by itself tends to have primarily physical effects and not so much social ones (except when mediated through complex compute machines, etc.).

These sorts of potentials for ethical problems become even more noticeable when considering the use of narrow AI. Most implementations of narrow AI are so complex that even the developer/maker/owner/users cannot always be fully sure that the intentionality and distribution of cost/risk/benefit that they wish to be represented in the design of the narrow AI system is actually functionally represented in that system. In some sense, the narrow AI is *probably* still operating in the best interests and for the benefit of whomever it was designed to serve, but as the systems get more complex, as more variables and data are involved, the question of "whom does it serve?" becomes even less clear. Legal cases involving "who is responsible" when some system like a Tesla hits a pedestrian can already expect to have difficulty when attempting to establish a reasonable chain of causation. Who bears the costs, who handles increased risk, and who gets more dollars are rarely the same person and/or people, in time, space, or scale.

Finally, when we get to general AI, we are clearly in a space in which the notion of agency begins to shift entirely away from the maker and into the thing in itself. What rights of utility does a parent have, and can demand, over their own children to act exclusively on the parent's behalf? To the extent that something can have self-agency and be self-authoring, and to the extent that 'slavery' is considered bad, the moral/ethical entanglements become particularly significant. I would argue strongly, on the basis of a number of considerations, that over the long term there can be no realistic expectation of a co-existence of strong AI and any other form of life on Earth. For more on this, see my dialog article at https://mflb.com/2142 (MFLB dot com document number 2142).


Whether or not Acemoglu meant to make the point, or makes it well ... _do_ the long term concerns about AI distract from pressing short term concerns? Is that a good thing? A bad thing?

Personally, I am in Maciej Ceglowski's camp that the Pascal's wager of superintelligence is pretty overblown. And in the near term I think it's ridiculous and embarrassing for the software industry that Apple released a credit card based on a sexist AI. What credibility do the rest of us have if even Apple, a company larger than some countries, can't unit-test a question that impacts half of all people?

So far as the two concerns compete for time, money, and public legitimacy, shouldn't we care? Are we letting a bunch of excitable zeppelin engineers concern-troll us about the speed of sound while ignoring how flammable zeppelins are?


If we assume AGI is a potential existential threat, AI alignment research is not the solution. (I don't have any solution, so we should do some anyway.)

We have been doing human alignment research since forever, and yet we mostly fail at it miserably. There are very good reasons for it to be difficult: game theory, human emotions, incompatible world-views, etc.

So even if an AGI's goals are perfectly aligned with its human handler's, that human's goals are not likely to be to the benefit of the rest of humanity. And as there will probably be several (or many) different AGIs belonging to different human organizations with conflicting goals, they will compete with each other, and the most successful will likely be the ones given the fewest constraints by their human handlers.

Stanisław Lem made a similar argument in Summa Technologiae: humans giving up control to the machine because they can't do without its efficiency.

I don't have any solution, but at least there is a little hope that when an AGI takes over the world, maybe it will rule it more sensibly than us (well, than those who hold real power now, not "us") and keep humans as pets. (See the Culture series by Iain M. Banks.)

Even if it kills all humans, we will probably do that ourselves without AGI anyway, so at least with AGI there is something left to follow humanity.


Is there a convincing argument that AGI is possible within any reasonable timeframe (like... 50 years), other than the intuitions of esteemed AI researchers? Do they have any way to back up their estimates (of some tens of percent), and why they shouldn't be millionths of a percent? It is, as another poster said, an "extraordinary claim." I'd like to see some extraordinary support of those particular numbers.

The argument that we are "in the middle of a period of extremely rapid progress in AI research, when barrier after barrier is being breached" makes it seem like all AI "progress" is on some sort of line that ends in AGI. That feels like sleight of hand. Even Scott himself refers to AGI here as a "new class of actor," so I'm failing to see how current lines of "progress" will indubitably result in the emergence of something completely novel and different.


I'm curious about "firms that increase their AI adoption" - does this include downstream adoption? For example, Facebook uses AI to target ads. An advertiser who gets a better ROI as a result has "increased their AI adoption" without even knowing it.


I'm not inspired to read the Acemoglu essay in question, but could it be that he's not the least bit concerned with future AI risk for the same reason that almost nobody else is concerned with future AI risk, which is to say that they are too busy being concerned about present political issues? Like, say, whether our economy is arguably too narrowly optimized for the well-being of the wealthy superelite and their elite STEM servants when everybody "should" be trying to ensure good, stable jobs for the working class.

If so, then Acemoglu has certainly noticed that there's a set of smart and motivated people, not wholly without influence, who are "wasting" their time on this AI risk nonsense when they could be focusing on the thing Acemoglu cares about. And, by connecting the two through the medium of current AI and nascent marginal technological unemployment, he thinks he can marginalize the AI-risk community as "out of touch" and get some of the marginal members of that community into paying more attention to the things he cares about.

If so, I agree that his article is unlikely to be enlightening or persuasive to anyone here, but it might at the margin serve his goals (and those of the WaPo editorial staff).


It's the year 1400, and you're living in Constantinople. A military engineer has seen gunpowder weapons get more powerful, more reliable, and more portable over the past two centuries. He gets on a soapbox and announces: "Citizens of Constantinople, danger is upon us! Soon gunpowder weapons will be powerful enough to blow up an entire city! If everyone keeps using them, all the cities in the world will get destroyed, and it'll be the end of civilization. We need to form a Gunpowder Safety Committee to mitigate the risk of superexplosions."

We know in hindsight that this engineer's concerns weren't entirely wrong. Nuclear weapons do exist, they can blow up entire cities, and a nuclear war could plausibly end civilization. But nevertheless, anything the Gunpowder Safety Committee does is bound to be completely and utterly useless. Uranium had not yet been discovered. Lise Meitner and Otto Frisch wouldn't be born for another 500 years. Nobody knew what an isotope was, and their conception of atoms was as different from real atoms as nuclear bombs are from handgonnes. Rockets existed, but one that could deliver tons of payload to a target thousands of miles away was purely in the realm of fantasy. Even though the Roman military engineer detected a real trend--the improvement of weapons--and even though he extrapolated with some accuracy to foretell a real existential threat, he couldn't possibly forecast the timeline or the nature of the threat, and therefore couldn't possibly do anything useful to inform nuclear policy in the 20th century.

A more reasonable military engineer tells the first engineer to focus on more pragmatic and immediate risks, instead of wasting time worrying about superexplosions. Cannons are already powerful enough to batter down all but the strongest city walls, he points out. In the near future, the Ottomans might have a cannon powerful enough to destroy Constantinople's walls. How will the Roman Empire survive this?


If the buzzword “Artificial Intelligence” was never created and we just stuck with “Machine Learning”, would we still be talking about the existential threat of Machine Learning?


It is so weird how many people, even in this thread, reject out of hand the idea that AIs much more complex than current ones could exist. Like, we have an AI that can win at Go and StarCraft, we have good text- and image-based AI, self-driving-car AI that works very well almost all the time; where else do people think that goes?


Nerds since Kurzweil have described the dawning of "Artificial General Intelligence" as an event horizon; everything before it and leading up to it is kind of trivial.

Acemoglu, Pinker, et al. seem to be saying something more like: AI is getting continuously better and we should continuously evaluate how it affects us. We are already living with AI; we can talk about its benefits and costs without worrying about the day after Revelation.

There is a good analogy here with climate change: you don't need to look one hundred years into the future, you can just look around you right now to figure out that something bad is coming.


I would like to ask a question unrelated to Acemoglu, and not knowing a lot about AI, if possible. Articles often talk about "the" AI. Isn't it more likely that there will be a lot of different AIs, some dangerous to humans, some friendly, and that they will negotiate with or fight each other, while humans will be negligible to them, treated well or badly, like animals are treated by humans now?


> But all the AI regulation in the world won’t help us unless we humans resist the urge to spread misinformation to maximize clicks.

I realise that 'AI regulation' here means 'regulation of social media algorithm AI' and not 'regulation of us by AI', but it touches on something I often think about when the subject is brought up. The prospect of an AGI capable of gainsaying our worst instincts would be such a huge boon that the risk of singularity may be worth taking, not unlike the residual risk associated with nuclear technology.

From the perspective of someone who would like to see humanity get central planning and collectivism right one day - major problems being the computational problem of allocation of resources and basic human corruptibility - an artificial auditor that can say 'Sorry, comrade, I can't let you do that' would be an incredible step forward and perhaps less vulnerable to misuse than advanced narrow-AI.


I think your reading here is grossly unfair - the argument is surely "let's worry about the actual problems we see today already, not the highly hypothetical Evil AI situation". That is, an argument for what should be *given attention* to.

Mandatory xkcd: https://imgs.xkcd.com/comics/robot_future_2x.png


I think this is more a difference in rhetoric and audience than a disagreement about the facts. Daron Acemoglu is writing for people who live in a world filled with clickbait articles about some far-future killer AI and a pop culture where AI is synonymous with Skynet. He's saying: "This isn't some hypothetical future thing, it's a real problem and it's happening right now!" It's only in this context that he's saying we shouldn't worry about AGI. Scott Alexander is writing for people who already think narrow AI is dangerous and trying to convince them to also be worried about AGI.

Imagine there's a small fire in a building. Some people aren't worrying about it because they think that only large fires are dangerous. Other people aren't worried because they don't think small fires can become big fires. If we want to convince people to take action and put out the fire, we need to use two different contradictory arguments. "Forget about giant fires, small fires can be really dangerous, too!" "It looks small now, but forget about that, it has the potential to get big really fast!"

You could argue that the first group is wrong or has the wrong priors or whatever, and maybe you'd be right, but if we want to put the fire out, that means meeting people where they are and convincing them using arguments they'd listen to.


My line of work has increasingly come to rely on "AI"/"ML" techniques, so I'm somewhat on the pointy end of this. The employment effects are real, but also really complicated.

For example, I co-wrote a piece of "AI" software (I don't think it should be called AI but the competitors call their software AI, so...) that explicitly replaces a LOT of human labor, including my own. It can replace the work of weeks or months with the work of a few days, and that's my work or the work of my direct reports - not hypothetical or knock-on effects.

But, it doesn't lead to me doing any less work, because in the past the prohibitive labor investment meant that companies simply did a lot less of the thing. They'd look at the price of doing a comprehensive program, throw up in their mouths a little, and choose something more achievable. So, now we do a whole lot *more* of the thing, because now the value proposition is better! My utilization on the thing has only risen, despite vast leaps in time efficiency.

I assume that many AI employment effects will be like this - it will be true that the AI tool replaces a lot of human labor for the same unit output, but NOT necessarily true that the total human labor spent on the thing thus decreases.
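A back-of-the-envelope sketch of that dynamic, with invented numbers rather than my real utilization figures:

```python
# Illustration only: every number here is made up for the example.

hours_per_project_before = 200   # assumed manual effort per project
hours_per_project_after = 20     # assumed effort using the "AI" tool
projects_before = 5              # assumed demand at the old labor cost
projects_after = 60              # assumed demand once projects get cheap

total_hours_before = hours_per_project_before * projects_before   # 1000
total_hours_after = hours_per_project_after * projects_after      # 1200

print(total_hours_before, total_hours_after)
# Labor per project fell 10x, yet total labor demanded went up.
```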


"extremely smart people and leading domain experts are terrified of this"

I think this is the weakest part of Scott's argument. People's fears and imaginings have precious little to do with their smartness. Whether or not someone worries about population growth (or decline) or climate change or AI risk has more to do with their personality than with their intelligence.

In a similar way you can find a hundred very smart economists who worry that raising the minimum wage will have negative effects and another hundred who worry that not raising it will perpetuate negative effects. I use that example because it was written about most clearly on SSC.

The fact that some very smart people worry about something tells me almost nothing about the thing being worried about.


> "I certainly don’t mean to assert that AI definitely won’t cause unemployment or underemployment (for my full position, see here). I think it probably will, sometime soon...."

It's been a minute, but I recall your argument in the technological unemployment post being similar to ones put forward by Sam Harris, Andrew Yang, Robin Hanson, and many others: AI will be better than us at almost everything, so most people won't be able to work anymore, so more inequality will result/most jobs will disappear.

While it's true that AI isn't literally the same thing as [insert past technology here], it's also true that comparative advantage is a thing, and that AI is just one more technology that lets us take advantage of comparative advantage for the creation of more goods/services.

More on that here: https://jonboguth.com/our-glorious-future-more-robots-more-human-flourishing/


>I feel the same way about Steven Pinker... <

Seriously?!?

Pinker is notorious for pretending the difference between hunter-gatherers and pre-state agricultural societies does not exist, even though this difference is crucial for the topic he is talking about, just because it would make an ugly dent in his beautiful graph of steadily declining violence. That is not a minor lapse, but an epic fuckup that should have gotten him tarred and feathered and laughed out of Harvard.

I guess you should not be surprised that a person who is willing to sacrifice the truth for a beautiful graph on a topic you don't care about is willing to sacrifice the truth for a beautiful argument on a topic you *do* care about.

We really need to hold public intellectuals to higher standards.


Unrelated to long-term AI risk, but he seems to take as a premise that AI is 'concerning,' then points out that AI is actually being used to do things, which is therefore supposed to be more concerning. Maybe AI is better at making sentencing decisions or bail decisions. Why is this automatically concerning?

Also, I'm no economist so maybe I'm missing something, but his point about AI not generating a labor-demand boost seems like bad economics. If substituting AI for humans doesn't boost productivity, then why is anyone using it? Presumably, it's cheaper, which drives down costs and prices and therefore boosts demand for other (human-made) goods/services. That's the main reason labor-saving devices boost demand for other professions, right? Agriculture technology ultimately caused demand for farm labor to collapse, rather than boosting demand for farm laborers by making them more productive. And yet unemployment didn't rise to 90% as a result.


> 1. Some people say that AI might be dangerous in the future.

> 2. But AI is dangerous now!

> 3. So it can’t possibly be dangerous in the future.

> 4. QED!

Here I thought we were steelmanning things, not strawmanning ...

The argument is more like: yes, nuclear war could result in nuclear winter ... but also, we will all be nuked. Why are you worrying about the winter instead of, ya know, the billions that would die immediately? Climate change could result in a runaway feedback cycle of methane release that turns the world into Venus - but is that really the main issue that should be framing the debate?

Why not look at the actual pressing problems instead of conjecturing ill-defined bogeymen to be worried about?

1) There is limited attention and limited ability for the public to process what "the problem with X" is. To the extent that one defines the problem with X as Y, it does indeed downplay all the other problems and can be counterproductive.

2) As mentioned at the end, the risks of superintelligent AI and the risks of ubiquitous big data and machine learning are really two entirely separate things, not just different timescales or magnitudes of the same thing. Conflating the two, then, is more damaging than usual.


The biggest problem I have with these AI doomspeakers is just how little they understand the state of AI today. The vast majority of it is nothing more than monkey see, monkey do.

The best example I have personally seen of this is the Watson AI that beat the best Jeopardy champion. It was a multiyear development process that certainly cost tens of millions of dollars (or more), but the resulting Jeopardy champion couldn't even be redirected to understand the basic gist of internet articles. When I toured the Watson demo office in SF before COVID, there was a demo room where this AI was showing the top news trends on the internet worldwide.

One monster problem was apparent right away: there was a location near the tip of South America which was in the top 10 sources of news at the time - which I believe was driven by the original election of the socialist president in Bolivia, prior to his ouster.

A 2nd example was soon forthcoming: Trump! This AI had overall internet news coverage of Trump at 55% positive - which seemed clearly wrong, so I asked if it could drill down. Of the 10 actual articles at the next level down, 5 had the word "Trump" in them but had nothing to do with Donald Trump and weren't clearly positive or negative. I recall one being about someone named Trump who was a key part of a winning middle school sports team...
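To spell out the failure mode, here is a toy reconstruction with invented headlines (not Watson's data or code): naive substring matching on "Trump" sweeps in unrelated stories, which is exactly what drags the aggregate sentiment around.

```python
# Invented headlines for illustration; none of this is from the demo.
headlines = [
    "Trump administration announces new tariffs",
    "Local teen named Trump leads middle school team to victory",
    "Trump Tower lease dispute settled quietly",
]

# Naive keyword match: everything containing "Trump" counts as coverage.
naive_matches = [h for h in headlines if "Trump" in h]

# A slightly stricter (still crude) filter, standing in for real
# entity disambiguation.
stricter_matches = [h for h in headlines
                    if "Donald Trump" in h or "Trump administration" in h]

print(len(naive_matches), len(stricter_matches))  # 3 vs 1
```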

A 3rd example: MIT Technology Review had an article about an effort to visualize what neural-network image recognition algorithms were actually seeing. It turned up some eye-catchingly stupid stuff, like an image recognition program believing all barbells have an arm attached, because obviously there were no pictures of a barbell by itself in its training set.

Thus I am far from convinced that anything we see today in "AI" (actually machine learning or neural network programming) is actually intelligent, or that the entire field is very far removed from 12th-century alchemists mixing actual chemicals with animal dung, ethers, griffon feathers and whatnot.

Agree on actual AI - but we don't have that. Until we do, I see all of these articles on "AI is killing jobs and we should accept our robot overlords in various ways" as nothing more than PMC-plus-oligarchy excuse-making for having led American political and economic systems astray for several decades and counting.


Did you know the inventor of the loudspeaker blamed himself for the rise of fascism? Without it, charismatic leaders would never have been able to address such large crowds.


I would like to tell you - in the case of nuclear power, the Bad People very clearly won - and it's why we care about climate change today.


Is it possible that Daron is doing a novel form of Roko's Basilisk here?
