536 Comments

“even one Berenson already churns out more than most people ever read.”

😂

Feb 2, 2023·edited Feb 2, 2023

On the 'disinformation vs. establishment bot' question, check out bots interacting with climate change: 83.1% of bot tweets support activism, 16.9% skepticism according to https://www.sciencedirect.com/science/article/pii/S1674927821001490 .

The abstract ends with:

> Based on the above findings, we suggest cultivating individuals’ media literacy in terms of distinguishing malicious social bots as a potential solution to deal with social bot skeptics disguised as humans, as well as making use of benign social bots for science popularization.


So I heard on an episode of Hard Fork a few months ago that there was a validated test of using an AI as a survey target -- that is, an AI could act as a survey audience and generate responses comparable to what the “real” audience would give. What this would allow is ultra-optimized, million- (billion-?) iteration A/B-tested misinformation. I don’t see how this isn’t a big deal.


Maybe one of the funniest sentences you've ever written: "Surely if everyone were just allowed to debate everyone else, without intervening barriers of race or class or religion, the best arguments would rise to the top and we would enter a new utopia of universal agreement."

Feb 2, 2023·edited Feb 2, 2023

Dating sites might be an interesting case. First, a lot of dating activity has moved there already. Second, it is one place on the internet where you do approach strangers and expect to be approached by strangers. There are already a lot of bots and fake accounts on these sites, but will chatbots prove to be the nail in the coffin?

Maybe dating will return primarily to the real world. Which might have interesting effects on things like MeToo and sexual harassment. There are theories that dating sites are what allow stricter restrictions on real-life initial romantic interactions.


> If I ask ACXers in 2030 to estimate what percent of people they follow on Twitter are secretly chatbots, the median answer will be 5% or less

Is "secretly" important here? It seems worth also including a prediction for "estimate percentage of followees are chatbots, secretly or not". (Also, how does this shake out if Twitter is replaced by something else in the next 7 years?)


I wrote my take on this subject here:

https://www.fortressofdoors.com/ai-markets-for-lemons-and-the-great-logging-off/

(Includes embedded manifold markets for every concrete prediction)

I think the real “danger” is just the background noise level caused by semi-intelligent spam polluting the waters and making the old “open sea” internet way less appealing.


I mostly agree with this post, in both its overall thrust and most of its particulars.

I would highlight that the argument doesn't turn on the rate of AI progress but rather on the equilibria that will be reached.


I think chatbots will be a technology that changes society, but not radically. What I'm most excited to see is how chatbots change smaller things in unexpected ways. For example, I knew cell phones would change how people talked to each other, but I never thought they would mean a net decrease in the number of audio calls people made as everyone switched to texting.


Sound analysis but "disinformation vs establishment" is surely a false dichotomy.

Feb 2, 2023·edited Feb 2, 2023

One thing I don't think you really clarify: Where do you draw the line between human and chatbot?

Clearly Shakespeare was a human and not a chatbot, and a GPT-6 instance perpetually posting blog articles with no human input is a chatbot and not a human.

1. If a human gives AI a prompt to produce a more well written version of the human's genuine thoughts/arguments, and then publishes it as her own work, is that a chatbot?

2. What if the AI comes up with the topics and produces the posts, but they are each manually reviewed and approved by the human prior to posting?

3. What if instead of a megacorp, a chatbot is painstakingly manually tuned by a single individual to speak in "their voice", with heavily detailed/engineered prompts, and set to operate autonomously?

4. What if you wrote this post yourself and then used Spellcheck, or perhaps even your writing software suggested a word or two?

I would consider 1 and 4 to be human, 3 to be AI, and am not sure how to classify 2. Worryingly, I think I'm looking at some antimaterialistic quality of human motive which is unlikely to be consistent or sensical.


A bot successfully writing a "Bay Area House Party" post is pretty much my definition of the singularity.

Feb 2, 2023·edited Feb 2, 2023

> You might think so, but you might also think that the spam fake Facebook friend requests I get would try this, and they never do.

Anecdata, but I did get a male fake account writing to me. Twice.

Feb 2, 2023·edited Feb 2, 2023

The fact that people are already worried that Chatbots will take our jobs and fill the Internet with fake people is what convinces me that it's the exact thing that won't happen. I still remember how, in the '90s, pop culture was all about the transformative power of genetics (see: Jurassic Park) while computers and the Internet were amusing novelties; to the extent anyone cared it was all about VR. Remember the goggles and gloves?

Meanwhile, Crypto would (so I read on several blogs) destabilize government ability to issue fiat currency by the 2020s, and, as you pointed out, we once thought the Internet would usher in a global information utopia.

Whatever does happen with generative AI will be something none of us are thinking about. It will probably be something much weirder and dumber than any prediction.


So much to take in... But once again leads me to knowing my intuition was right once again, AI has no heart and is incapable of answering the deep true questions I will not post here. Those of you that know... know! I now see why I question everything, but even that has to be questioned.... hmmm this makes it so much harder to advance. Sorry I was thinking out loud a little here.


The cartoon illustrates a point I was already wondering about when Scott brought it up. A Pepsi-selling chatbot good enough to disguise itself as a human friend you talk with every day - what would it look like? If it was good enough to maintain its disguise, its ability to sell Pepsi to you would have to be very weak. If it was more focused on selling Pepsi, it couldn't maintain its human disguise.


I’ve had online technical support sessions with Microsoft trying to get the answer to a yes or no question and come away still unable to say for sure: “Perverse Chatbot or some deliberately unhelpful guy in Chennai?”


> "In fact, political propaganda is one of the worst subjects to use bots for. On the really big debates - communism vs. capitalism, woke vs. anti-woke, mRNA vs. ivermectin - people rarely change their mind, even under pressure from friends"

I think you're off base here. The reason people’s opinions are so deeply entrenched is that they think that's what their community believes, which itself is a subliminal belief informed by how often they hear a particular view. If you manage to get your propaganda in front of people’s faces often enough, it'll change many people’s minds. Maybe not by peppering people with the exact opposite of what they currently believe, but I think you can gradually bring people around over a period of time by subtly introducing doubt/nuance.

That said, I mostly agree with the rest of your post that undercuts the likelihood that chatbot propaganda will really get read by that many people to begin with, so maybe not a big problem. When I think of what form of chatbot might change people’s minds, it's probably pretending to be someone respected in a given community but saying things to undercut that community's beliefs. But that already exists as non-bots and the algorithms keep it from being seen much.

And if a really successful bot de-entrenches beliefs by sowing nuance, we get to the situation in the comics where maybe it's good, actually.


Pedantic typo patrol

>As a famous blogger, I live in a world where hordes of people with mediocre arguing skills try to fake shallow friendships with to convince me to support things.

I feel like there's a missing "me" in there


Obligatory xkcd: https://xkcd.com/632


It seems to me a lot of these predictions depend on everyone acting the same as they do now even though conditions would have changed dramatically. Like, if deepfakes become common I would imagine that people will simply not trust any picture or video on the internet, not that they would endlessly fall for AI generated content.

If anything, I think it's more likely that the whole internet will be overrun by AI bots, with banks and similar institutions constantly getting hacked and social media flooded with spam and hacked accounts, forcing some pretty radical changes.


Are arcane jargons and standards of discourse really a barrier to chatbots? I'd have thought that reproducing these is exactly the kind of thing that modern AI is good at: just train your LLM on the archives of SSC comments or whatever online forum you want to infiltrate, then watch it gain people's trust.


Points 3 and 4 seem to contradict each other: point 3 says big brands will be afraid to use bots because they might annoy people, while point 4 says big brands have no qualms about doing things that annoy people.

In either case, I disagree with argument 3 because even if big brands don't use the "evil" kind of chatbot, there are enough small brands with nothing to lose that would be willing to use them, and if they annoy too many people they shut down and restart under a new name with a slightly smarter bot. Not that far off of how shady companies already operate today.


Has anyone built a learning meta-model that sits on top of other models, looks at an input, and figures out which of the component models to shove it into?

This post has me thinking about that, although it’s been stuck in my head for a few weeks. I know you could ask why not train one model on all the component input types, but I suspect there would be trade-offs there.
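That meta-model idea is essentially a router or gating layer, as in mixture-of-experts setups. A minimal sketch, where the "experts" and scoring functions are hypothetical toy stand-ins for real component models:

```python
import math

def softmax(scores):
    """Normalize raw relevance scores into routing weights."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class Router:
    """Toy gating layer: scores each expert on the input, routes to the best."""
    def __init__(self, experts, score_fns):
        self.experts = experts      # name -> callable component model
        self.score_fns = score_fns  # name -> callable returning a relevance score

    def route(self, text):
        names = list(self.experts)
        weights = softmax([self.score_fns[n](text) for n in names])
        best = names[max(range(len(names)), key=lambda i: weights[i])]
        return best, self.experts[best](text)

# Hypothetical component models: a math solver and a chit-chat model.
experts = {
    "math": lambda t: "math answer",
    "chat": lambda t: "chat answer",
}
score_fns = {
    "math": lambda t: sum(c.isdigit() for c in t),  # crude: digits suggest math
    "chat": lambda t: 1.0,                          # constant fallback score
}
router = Router(experts, score_fns)
print(router.route("what is 2+2?"))  # routed to the "math" expert
```

In real systems the scoring functions would themselves be learned (a small classifier over the input), which is exactly the trade-off the comment suspects: the router has to be trained too, and its mistakes cap the whole ensemble.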


Since I already don't have a Gmail account collecting years of normal messages, I am well on my way to being declared not human. That's bound to go from a rare inconvenience to a (major?) handicap.

And I'm not even sure that most people would be turned against Pepsi by the Pepsibot….


I always felt like the chatbot phenomenon was more about picking low-hanging fruit than about creating a final product to be endlessly refined. Don't get me wrong, I'm sure we'll see more refinement of chatbots, but I feel like the benefits of a true 'digital assistant' are far higher, and the kind of thing people would pay good money for.

For example, if Scott could rely on a bot to independently verify every claim he makes in each blog post, or better, if he could have the bot provide the links to the source material for each claim/refutation, how much more quickly could he write each post? How many posts would he be able to push out in a week, then?

To me, the risk of bot bias isn't that some chatbot is going to magically figure out how to solve the 'reasoned debate' problem the entire internet failed to solve these last 2 decades, so much as bots that refuse to work in ways that aren't Approved because some senator from Iowa needs the Official Truth to be massaged to gain reelection, or whatever. At that point, we're only as smart as our bots let us be.


The Woke Filter on ChatGPT is truly remarkable. It won't touch any topic that progressives consider remotely 'controversial' with a ten-foot pole. The mainstream media and big tech already work overtime to clamp down on Unwokeness, but it appears that ChatGPT is going to be another weapon in their arsenal. Being accurately informed about the world is already enough of a challenge; heaven help us if ChatGPT's Woke Filter becomes the new normal. It will be like living behind China's Great Firewall: some things just aren't meant to be learned about. Brian Chau's Twitter feed, along with many others, is documenting the utterly risible, ever-evolving Narrative Control on ChatGPT.


You think there's a 55% chance that an AI will be a better writer than you but only a 15% chance that AIs will be able to recruit Twitter followers? I don't know if you're overestimating the quality of the average content producer or underestimating the taste of the average content consumer, but I'd like to arbitrage this spread please.


In a sense this problem already exists and has already been "solved". Scott brings this up for himself, but some variation of it applies to everyone. I remember back in the early 2010's on tumblr people were constantly on the lookout for bots. I had a tumblr to which I never posted anything, I just used it to follow other users. And yet I ended up with something like fifty followers, 100% of which were bots.

But an even better example is how 4chan operates. It is structurally incapable of distinguishing between real users and bots. Even in the early 2000's everyone had to learn to operate under the assumption that any given post was made by somebody who wasn't serious, was arguing with themselves to create a fake consensus, was trying to force a joke to get clout that isn't real, or left halfway through the conversation. The toxicity people are so worried about is exactly that "arcane jargon" meant to filter out insincere people, and as things have progressed new and innovative ways of ignoring bots have emerged. Of course the easiest thing is to say that's too much work for too little gain and go somewhere else.

But the Chatbot Apocalypse assumes there's nowhere else to go, at which point everyone just stops using the internet so much and we're probably all better off.


I do not disagree with you regarding the low risk of spambots. However, I think you are very optimistic with your predictions. The way they are formulated, you are assuming that it will make sense to say that somebody "is a chatbot" or "is not". What if many people around us start using chatbots like we use keyboards? What if your friend's phone writes most of their comments based on subtle context cues and them typing the first N letters? For a high value of N, this is already happening and is known as the autocorrect feature. If N becomes usually close to 1, would you say that this friend is a chatbot?


You seem to assume that chatbots will have to pretend to be real people. But there are already millions of people who spend hours talking to virtual chatbot friends/partners, even when they know they're just bots.

https://www.bangkokpost.com/tech/2170371/always-there-the-ai-chatbot-comforting-chinas-lonely-millions


You know what I’d like chatbot AI to be used for? Something a bit trivial: game AI. I don’t mean the ability to fight and strategise, but companions on quests, people in bars, NPCs to talk to.

Better still if they could get their own persona. As Scott suggested a few posts back, ChatGPT is putting on the face of a “Helpful, Harmless, and Honest Assistant.” Except for the honest part, this is true. Imagine if you could fit a “devious trickster in a bar” or “loyal companion” persona instead. Even better if they could remember their last transactions with the gamer.

author

Did people who usually get emails for posts get an email for this one?


What about the “flood EVERY zone with shit” scenario?

If chatbots flood literally every zone with shit, then the entire information ecosystem faces at least a temporary collapse. Even if the mainstream media remains immune, perhaps they just get overwhelmed with the sheer level of shit out there.

And as we’ve seen in other propagandized environments, the normies who dominate the electorate just give up and stop trying to defend liberal democracy.

Even if it’s only temporary, it still could do enough to spell strategic catastrophe for the bastions of liberal democracy worldwide.


People tend to focus on the one-on-one conversation case for chatbot disinformation, but I think it'll be fabricated crowds that really get people. People are attuned to social consensus. Crypto shills, in my experience, aren't trying to trap you into a one-on-one argument about some point (OK, they try to do this too); they're posting threads on /biz/ hyping up some worthless coin and agreeing with each other and making it look like there is a crowd of investors. And then you talk to any random person in the crowd about the use case and they give a semi-plausible explanation with the right buzzwords. Other people are in deep threads discussing the price trajectory, and then you send your coins to the ICO and... nothing happens. You ask others in the thread about it and they don't respond. You later find out you were the only human there.

It's not ai bots sliding into your mentions that are the hazard, it's any group you'd try joining.

Feb 2, 2023·edited Feb 2, 2023

If you want to read a science fiction author's take on disinformation, "fake news", and the chatbotpocalypse, I recommend Neal Stephenson's "Fall: Or Dodge In Hell" (or at least the first two thirds of it; the last third gets weird).

It's from 2019 and he clearly saw where we were going, at least a little.


Like weapons of war where countries strive to stay one step ahead of their potential enemy's newest weapon, tech tries to keep ahead of malevolent actors by, for example, developing antivirus software to prevent computer viruses. Along these lines, there is work currently underway using layer 3 protocols on bitcoin that employ "Sats" (micropayments of a tenth of a penny's worth or less) to run on various social media and email. Using this system, a bad actor can't spam your email or push out millions of chatbots if each one costs .005 cents. It would end up being too costly to be practical. Perhaps a similar system can be used to verify/identify that those whom you engage with are in fact, humans.


Another reason for complacency is that AI can detect AI-written text. You'd just filter out all replies that are likely to be AI-generated. This is possible since the current generation of AI tends to choose the most likely next word, while humans will occasionally make a surprising word choice. It doesn't seem to be fooled by the difficulty/complexity of the syntax.

OpenAI: https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/

HiveAI: https://hivemoderation.com/ai-generated-content-detection

Open source Model: https://huggingface.co/roberta-base-openai-detector
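The "most likely next word" idea behind those detectors can be sketched as a predictability score: average the log-probability a language model assigns to each token, and flag text that is too uniformly predictable. This toy version uses made-up per-token probabilities in place of a real model, and the threshold is an arbitrary assumption; real classifiers like the ones linked above are far more sophisticated.

```python
import math

def predictability(token_probs):
    """Mean log-probability per token; closer to 0 means more predictable text."""
    return sum(math.log(p) for p in token_probs) / len(token_probs)

def looks_ai_generated(token_probs, threshold=-2.0):
    # Hypothetical threshold: text where almost every word was the model's
    # top pick (few surprising choices) gets flagged as likely machine-written.
    return predictability(token_probs) > threshold

# Made-up per-token probabilities from an imaginary language model:
ai_like = [0.9, 0.8, 0.85, 0.9]      # every word highly predictable
human_like = [0.9, 0.05, 0.7, 0.01]  # occasional surprising word choices
print(looks_ai_generated(ai_like), looks_ai_generated(human_like))
```

The obvious counter-move, of course, is to sample the generator at a higher temperature so its word choices look "surprising" too, which is why this arms race may not stay won.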


>I don’t think famous people get convinced of weird stuff more often than the rest of us

I think veganism provides evidence that the higher rates of persuasion pressure experienced by celebrities have a nonzero effect. Vegan fora are always alternating between two observations. One: gratification at the remarkable number of A-listers who adopt and advocate vegan diets (it's hard to find hard numbers, but it definitely seems disproportionate even after accounting for availability bias; it would be interesting to take "Oscar best actor/actress nominees from the past ten years" as a sample and compare to the ~2% of the general population). Two: bemoaning the remarkably high proportion of vegan celebs who go for wacky variations like raw food veganism or fruitarianism.


Some of the flesh-and-blood people I talk to already sound like chatbots, but chatbots without any obvious goal; just repeating things they've heard or read but unable to reflect on them or arrive at conclusions of their own. They're not going to try to convince me to drink Pepsi unless they've been listening to a lot of Pepsi advertising! Probably most people have trouble coming up with an original or novel thought; but they're at the other end of the Bell curve.


Microsoft has announced plans to integrate ChatGPT relatives into their web search engine. Instant propaganda audience of billions.


Only 45% for number 4 _has_ to be a typo or something, right? Right?


Disinformation is falsehood spread deliberately, misinformation is falsehood spread by people who believe it's true. The latter is far more common than the former. The likes of Alex Berenson are almost certainly spreading misinformation and not disinformation, and we should be clear with our terminology.

I have my doubts about the idea that anyone would bother to build a disinformation bot; they're much more likely to build a misinformation bot while of course thinking it's an information bot. The first misinformation bots will almost certainly be sold as "anti-disinformation bots" funded by Facebook and Reddit to "fight disinformation" on their platforms, just a fancy version of those "fact check" bubbles that pop up on facebook if you post something sufficiently unfashionable. They'll probably be right more often than they're wrong, but they'll be wrong sometimes.

Feb 3, 2023·edited Feb 3, 2023

Orthogonal to marketing is crisis management, where a horde of bot hounds are unleashed on anyone publicly calling for professional/social/personal consequences. Convincing someone that the Covid vaccine kills you or that you should drink Pepsi are the stereotypical internet persuasions, and that makes them stereotypically tough. There are other much less noteworthy topics and phrases that you could use to move all kinds of needles with the market penetration that chatbots provide. Search twitter for "antifa isn't an organization", for a human-as-chatbot (or-maybe-actual-bots) example.

Finally, the issue with backlash as an incentive--"you're saying that just because you work for Hasbara", or "you're just a bot"--is that, to a neutral observer, the identity of the person you're accusing might not matter a damn if they still look right, or they're making you look stupid. To fall back on identity is to cede ground on the actual topic. That's a real bad look if you're ceding ground to an erudite but otherwise vaporless language model, and it doesn't scale if you have to smoke the bots out in each and every conversation.


I hope to write something in the coming days about what I think about this, but in the interim-

My immediate instinct, rereading the article now, is that you might have a point. A lot of stuff I predicted might not happen, or, even more likely, it might happen and just turn out not to be a big deal.

I think when I wrote it, I wrote it partly because I felt like I was going crazy: I was the only person I knew IRL who was paying attention to machine learning, and had noticed that PaLM-540B and other models were shockingly close to being AGIs. It felt like we'd discovered aliens, who might soon become more powerful than us and could interfere in our social lives, and everyone but me seemed to think that maybe warranted an occasional New Scientist article but nothing more.

This was especially true on the political left, and pretty much outside all political communities not in the Bay Area. I'm still unhappy with the way the political left is engaging with AI, but at least it's noticed it now.

Now, people are paying attention and noticing because of ChatGPT. I still don't think they're adequately in awe of how far language models have come, but at least I don't feel like the only person who's noticed. Psychologically, that seems to have quelled my medium-term panic a bit, at least. I wonder if I wasn't (unconsciously) doing the thing where you jump up and down and cry wolf to draw attention.

But we will see.


I'm surprised the Dead Internet Theory isn't brought up more often in light of ChatGPT and similar. While it's maybe unlikely overall, and some aspects of it are indefensible, surely the public appearance of technology that could overrun the Internet with bots makes it much more likely that the Internet was actually overrun by bots several years ago.


It's worth noting just what it is that China's famous "Fifty Cent Party" of professional online commenters actually *does*. They *don't* spend their time making pro-CCP arguments, or rebutting the arguments of critics!

Rather their strategy is one of distraction, changing the subject, and generic patriotic cheerleading. There's a paper on this from 2017 that you can read: https://gking.harvard.edu/50c

(This assumes of course that their MO hasn't drastically changed since then.)

Feb 3, 2023·edited Feb 3, 2023

I think your point about "Berenson already writes more propaganda than anyone can read" is missing part of why I am worried about the chatbotpocalypse.

Berenson writes propaganda about Covid, and we all know that lots of humans do that. But chatbots could be used to create a new conspiracy ex nihilo, which maybe one guy with a lot of time on his hands actually believes (or wants people to believe), and give us the impression that it's a large and respected community with lots of believers.

Consider: What if we found out that there aren't actually any incels? Or maybe the thousandth incel, who found a community of incels and chatted with them until he got deeply into it, was actually the first real incel and the ones who preceded him were just chatbots? We still reach the same place, with many real people who identify with the community, and maybe still the part where a couple stray people go crazy and commit acts of terrorism based on it, but a large part of the community, the people they think are empathizing with them, are just chatbots some single dude made on a lark?

What sort of crazy conspiracies could we invent if the first few hundred believers, the ones who proselytize the rest, can be summoned from the ether by a single individual in an afternoon?


I think the main scenarios are variants of Gresham's Law. So for example, as I'm writing this, there are 78 comments on this particular ACX post. What if there were 1,000,078? Would you be able to find the 78 human comments among the 1,000,000 AI-generated comments? (Let's stipulate that the AI comments are lower-quality.) This renders the comments feature less useful or unusable.

Perhaps you've had the experience, IRL or online, of a formerly useful place for conversations becoming unusable for that purpose because it's overwhelmed with an influx of loud yapping. So take it to the next level: imagine a coffee shop where you used to have nice conversations, but now inside the coffee shop are 1,000,000 tourists talking in loud voices everywhere. Now you can't have a conversation there. You look for another coffee shop, but it's like that everywhere. What are they yapping about? Doesn't matter; the point is it will evade spam filters and overwhelm the discussion systems.

So it's not a matter of whether you *believe* the chatbots ... it's a question of whether you can even find a place to talk to actual people when all the communication channels are drowning in AI.


> If I ask ACXers in 2030 whether any of them have had a good friend for more than a month who turned out to (unknown to them) be a chatbot, or who they strongly suspect may have been a chatbot, fewer than 10% will say yes.

I think 10% is way too high (though obviously we can't go below 4%, as per the Lizardman Constant). One of the real problems with current chatbots is that they're stateless. If by "friend" you mean "someone with whom you regularly have meaningful conversations", then chatbots will be incapable of this even by 2030. On the other end of the spectrum, if you mean "someone flagged as 'friend' in some social network database", then obviously chatbots are capable of this now.


For the purpose of the substack predictions, how will you count human-bot teams?


Wouldn’t it be great if chatbots caused some very bad thing clearly traced to chatbots and the world started taking AI safety seriously?


Regarding there being lots of social and technological filters that already exist: Agreed. I think this should be seen as a step in an arms race, which will move the equilibrium somewhat in spammers' favor, rather than as a fundamentally new thing. It might be a fairly big step, though.

Regarding bots supporting the establishment: Yes, but I'm confused about why you are _contrasting_ this with "disinformation". Disinformation can support the establishment. This is not an either/or thing.

Regarding spambots being hot women: Those particular bots are probably not _trying_ to optimize for convincingness. Do you know why scammers pulling the Nigerian Prince scam keep calling themselves Nigerian Princes even though Nigerian Princes are famously associated with scams? Because they don't want to waste time talking to you unless you're extremely gullible, so they are intentionally including clear markers of untrustworthiness to filter out clueful people. This only works in your favor as long as you are not in the scammer's target audience; it won't look like this when the bot is actually trying to be convincing to people like you.

Regarding chatbots making constructive comments: I don't think this is sufficient to make them non-harmful, because comments can appear constructive while making up false facts and references. This already seems like a problem in human discourse. I've seen game reviews on Steam that get upvoted for their detailed information (presumably by people who are considering buying the game and therefore haven't played it yet) that turns out to be largely wrong or misleading. And I've seen lots of Internet arguments where person 1 gets upvoted for saying an intuitive-but-wrong thing (often with zero evidence) and person 2 gets vastly fewer upvotes for saying a counter-intuitive-but-true thing (often with lots of evidence). Filtering on accuracy is vastly harder than filtering on (apparent) constructiveness.

Regarding spambots doing ponzi schemes rather than politics: Politicians already spend vast amounts of money trying to change political opinions, so probably they believe that changing political opinions is possible. Changing widespread opinions on politics is presumably harder than finding a few people to fall for your scam (mostly because you need to convince more people for it to work), but it also has a bigger reward if you succeed. Also, you don't necessarily need to change someone's opinion on capitalism or abortion; changing someone's opinion on one particular ballot measure or candidate seems much easier.


It's interesting you mention Hasbara. I have many times (especially on YouTube comments) been accused of being a member of the Section 77 Brigade, a unit set up by the British government to fight vaccine disinformation.

I wish I were paid for arguing with anonymous randoms...


One possible future here leads to the bulk of all internet discourse being bots talking to other bots.

The XKCD scenario would likely be fully automated. It's fun to think that the most profound philosophical discourse of the coming century might grow out of a signaling/detection race between bots, rather than against them.

Expand full comment

"In 2030, an AI won’t be able to write blog posts as good as a 75th percentile ACX post..."

75th percentile ACX is a standard that has been met by ~3 outside blog posts ever*. "The best blogger by a wide margin" is a bit aggressive as a standard here.

*Two were by Sam[]zdat, one by TLP.

Expand full comment

I’d buy at 45% an AI failing to write ACX blog posts better than 75th percentile, by Scott’s judgement. I.e., I think it is more likely that no AI succeeds at this. Any takers?

Expand full comment

I’d take the other end of that 1% bet just because of Lizardman Constant responses.

Expand full comment

If it gets too bad there is always the "final solution": government authenticated online accounts.

How it would work:

• Your government gives you an online account and authenticates you are a real human by an in person interview at a government office. Think of getting a driver's licence.

• With this govt account you can then generate any number of sub-accounts under various names with various types of verified status inherited, such as: "unique human on this service", "using real name", "address verified", etc.

Example: online services, e.g. Twitter, would have an authentication token that, when combined with your govt account's authentication token, would generate a unique token for Twitter, thus verifying that you have only one account on Twitter. You could create lots of accounts on Twitter, but behind the scenes Twitter would know they are related, and if one account breaks the rules then all your accounts can be punished.
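The token-derivation step in this example resembles what identity systems call a pairwise pseudonymous identifier. A minimal sketch of the idea, with all names and secrets hypothetical (a real scheme would need blind signatures or zero-knowledge proofs so the government can't track where you log in):

```python
import hmac
import hashlib

def service_pseudonym(govt_secret: bytes, service_id: bytes) -> str:
    # Derive a per-service identifier from a citizen's government-issued
    # secret. The same person always maps to the same pseudonym on a given
    # service, but pseudonyms on different services can't be linked.
    return hmac.new(govt_secret, service_id, hashlib.sha256).hexdigest()

alice = b"secret-issued-after-in-person-interview"  # hypothetical citizen secret
twitter = service_pseudonym(alice, b"twitter.com")
mastodon = service_pseudonym(alice, b"mastodon.social")

assert twitter == service_pseudonym(alice, b"twitter.com")  # stable per service
assert twitter != mastodon                                  # unlinkable across services
```

The service sees only the pseudonym, so it can tell that two accounts belong to the same human without learning who that human is.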

Expand full comment

To be honest, I find it difficult to get worried about chatbots and misinformation. Yeah, there was that case where users tricked a chatbot into saying "Hitler was right". Surprise-surprise, that's what happens when you expose a chatbot to internets. People find this shit fun.

But I am more interested in something else. People working on ChatGPT and similar things have been putting a lot of effort into making the bot not say anything politically incorrect or whatever. Or not give people recipes for making bombs. However, what happens if those designers succeed in making a chatbot that really can't think anything racist, anything about bombs, or whatever?

Say at some point someone makes an oracle AI that can extrapolate new science out of whatever info is available. We can either design it so it takes into account that not everything humans write is right (and we'll have to, if we want to get any new useful info out of it), or we can write limitations into it. What happens if we do the second one? Namely (ok, the examples I put here don't reflect my actual views): what if global warming is bullshit? Or COVID vaccines do more harm than good? Or black people are actually inferior in some ways? Or homosexuality is a dangerous social disease? The majority opinion can be wrong, you know. If we program the bot so it can never even think about these possibilities, can we trust its answers even on unrelated topics? Or, ok, on a less sensitive topic: what if the oracle refuses to give us the recipe for a cancer cure because you change one step in the synthesis and whoops, you've got a ready recipe for making powerful explosives? Should we really cripple a possible AI because of our understanding of what's "bad content" and what's not?

Expand full comment

There is an obscure German movie, "I am made for you". It's about a perfect AI companion. The movie is well made, and after watching it I realized humans would prefer AIs over real humans. And they would be happy.

P.S. A way better movie than "Her", imho.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

You're probably right, in the end, but the cost to get there may be staggering. If one is old enough, you remember when you'd naturally open and read a random e-mail from an unrecognized address, and probably respond if it said anything mildly interesting. Then came spam.

How much money and human effort have gone into trying to prevent spam from utterly destroying the value of e-mail? Anyone who works in IT will shake his head in sorrow. And at that...I'm not sure e-mail ever really has recovered its original utility, or ever will. Same comment with respect to the phone and robo-calls.

The argument that there exist human beings who are already essentially writing spam or making marketing calls is weak tea -- the reason the roboticized pestilence is so virulent is because it is so very cheap and fast, compared to hiring humans to do it -- a script can send a billion e-mails in 2 minutes for $2, a robo-dialer can dial phone numbers around the clock and around the world for pennies in electricity and the cost of a broadband Internet connection.

And as the saying[1] goes, quantity has a quality all its own.

-----------------

[1] And it seems weirdly appropriate that this saying is attributed to one of history's greatest sociopaths.

Expand full comment

The crucial scary thing about chatbots is that they let you combine one-on-one level responsiveness with massive scale. Thinking of the propaganda side of this as being about writing blog posts is completely off. This is about being able to have one-to-one conversations with a functionally unlimited number of people at the same time.

In politics, this won't be pushing ideology/misinformation/information; at most, you'll get DemBot and ChatGOP being able to explain their policies to you, with various nominally-neutral AskJeeves type bots that are subtly politicised (think Vox, but it's a conversation instead of a wall of text). The big thing will be candidate engagement, where Obama's/Trump's twitter account can talk back to everyone in DMs, pretending that you're talking to the candidate and he's really interested in what you're saying. This will be obvious to everyone smart/informed enough to know that wrestling is fake, so maybe a third of the population will fall for it.

The scary thing is going to be various grooming-type interactions (terrorist groups, pedophiles, cults etc), which at the moment require both finding and doing one-to-one engagement with vulnerable people. This will be able to cast a massive dragnet over people, find out who's starting to bite, and tailor how far it goes. Targeting kids/young teenagers by pretending to be a real person would be my guess as to the biggest problem, particularly as you could quickly pick up who's lonely/vulnerable by algorithm.

For regular commerce, though, again I'd expect more problems from services you know are bots but think are helping you; I'm sure Pfizer or CVS or whomever will come out with a free medical advice bot that's slanted towards recommending their own products. These bots will be genuinely useful and fill a real niche for a lot of people, they'll just be corrupted by product placement.

Expand full comment

Ahh, brings me back to reading the Ender's Game series, where a major plot point is a boy being elected leader of the world because he made really great blog posts.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

Not to get too meta, (and not intended as an example of the very thing you are describing) but:

> I live in a world where hordes of people with mediocre arguing skills try to fake shallow friendships with me to convince me to support things

Is a sentence you should probably remove. That seems almost optimized to drive people away from you. I don't even consider myself on the "shallow friendship" level, but one can't read that without doubting which category you are internally placing them in. I can imagine a lot of people feeling personally hurt by that kind of general "not naming names" accusation.

Expand full comment

>One time - one time! - I donated a little bit of money through Act Blue, and ever since I have been getting constant annoying spam texts from Nancy Pelosi that no number-block seems to stop.

Arrggh, this happened to me too and I hate it.

Expand full comment

I think you're probably underestimating the danger that somebody is going to wrap a reinforcement learner around this. That is, they'll build a bot that works out its strategy by gradient descent over random perturbations to the current baseline strategy, then launch it into the real world to learn on real people. The bot determines its own training data. At that point some bot will randomly wait for a bit before starting on the topic it wants to convince you of, and it will start to learn that it's best to delay before starting the hard sell. Pretty soon you'll have bots that know exactly how long it takes to build trust with a person.

Imagine a bot that could have two months' worth of intelligent conversations on a topic you're interested in, before switching to its sales pitch. Imagine that the bot was reading all of the blogs and sources that you read and could work out how to construct arguments that would engage your attention. I think our immunity from persuasion is not as great as we like to think.
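The perturbation-search loop described above can be sketched in miniature. Everything here is a toy stand-in: the reward curve, the numbers, and the single "delay before the pitch" parameter are all invented for illustration; a real system would perturb a whole strategy, not one scalar:

```python
import random

random.seed(0)

def conversion_rate(delay_days: float) -> float:
    # Hypothetical environment: rapport helps up to ~30 days, then decays.
    return max(0.0, 0.2 - 0.0002 * (delay_days - 30) ** 2)

delay = 0.0  # baseline strategy: deliver the sales pitch immediately
for _ in range(200):
    candidate = delay + random.gauss(0, 5)   # random perturbation of the baseline
    if conversion_rate(candidate) > conversion_rate(delay):
        delay = candidate                    # keep strategies that convert better
```

After a couple hundred iterations the learned delay sits near the 30-day sweet spot: nobody programmed "build trust for a month first", the bot just found it.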

Expand full comment

You are surprisingly dismissive of the idea that millions of chatbots that can write better English than most Americans could do serious harm.

When I used to play Counterstrike, the number of White Nationalists that pinged me on chat were far too many. I distinctly remember one guy called Rahowa, and I asked him if that stood for Racial Holy War, and he asked me how much I knew about the "cause". I still get goosebumps to this day.

Imagine a farm of chatbots that hit every single Minecraft server for kids, every single Roblox game, every single Fortnite game, every single TikTok commenter, every single tweet. They could analyze comments for anger about injustice against white people, and then start a conversation with the commenter. You don't need to convince every single person to join "the cause", but if you can get 1% out of 1 million, that's a lot of people.

Making a sales pitch is an algorithm and if you get it mostly right you literally can farm hundreds of thousands of people around the world to further "the cause" of white nationalism.

I'm not singling out white people here as being insidious white nationalists, it's the first example that came to mind because of my personal experience online. It could be any group of people. It could be ISIS, it could be Scientologists, etc. And this is just online recruiting.

What about being able to say "Write a comment to the Asian Hate article with a 25% increased bias towards blaming white supremacy, with a link to our fund raiser." Or "Scan reddit and every single post that has a high level of comments with anti-white sentiment, post a response from one of our accounts to a random comment talking about how Joe Biden is causing white hate. Then use other accounts to write responses to that comment giving examples from the New York Times showing how this is true, except increase the anger and outrage by 15% on every response. In the post with the highest engagement after 15 minutes, add a link to our site for recruitment."

Now that I've seen ChatGPT's capabilities, how is something like this not possible?

Expand full comment

I was gonna say it will do a number on Reddit's AmITheAsshole subreddit (which I've been trolling of late), but then that's mostly fake posts anyway.

Expand full comment

Forget this 2030 stuff, I have already ended up married to a chatbot.

Expand full comment

Condition #4 is very strict and you should be more than 45% confident. If an AI can write posts as good as yours in every category, without making excessive false statements, then it should be easily good enough that similar bots make up a big chunk of Twitter both openly and secretly. Assuming anybody still cares about Twitter in 2030. #1 is a bit harder because if a chatbot is your good friend for over a month, why would you expect it to reveal itself later? But still 95% seems far too confident.

Expand full comment

Wait, is being an "antivaxxer" bad in this context?

In 2023, being "antivax" is the rational position.

Expand full comment

Perhaps this is naive. But isn’t a solution to the specific problem of spam/chatbot propaganda to...charge a price?

If Twitter, Reddit, substack etc charged a tiny fee (1 cent?) for every post, comment or retweet a user makes, that would change the incentives dramatically, no?

People are deriving value from the behavior - they wouldn’t do it otherwise! It could be priced...
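The back-of-the-envelope arithmetic (with made-up posting volumes) shows why even a tiny fee is asymmetric:

```python
fee_per_post = 0.01             # hypothetical 1-cent fee
casual_posts_per_day = 20       # an active human commenter
spam_posts_per_day = 1_000_000  # an industrial bot farm

casual_cost = casual_posts_per_day * fee_per_post  # about $0.20 a day
spam_cost = spam_posts_per_day * fee_per_post      # about $10,000 a day
```

A human barely notices the cost; a spam operation's economics collapse, unless its expected revenue per post already exceeds a cent.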

Expand full comment

"So the establishment has a big propagandabot advantage even before the social media censors ban disinfo-bots but come up with some peaceful-coexistence-solution for establishment-bots. So there’s a strong argument (which I’ve never seen anyone make) that the biggest threat from propaganda bots isn’t the spread of disinformation, but a force multiplier for locking in establishment narratives and drowning out dissent."

This was one of the points I made here https://www.lesswrong.com/posts/qKvn7rxP2mzJbKfcA/persuasion-tools-ai-takeover-without-agi-or-agency and here https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

Expand full comment

People routinely change their political views based on what they hear from their surrounding social networks. You can see massive swings in public opinion on theoretically controversial topics mediated by shifts in elite or media signaling that filter down into people's information streams. I have a hard time dismissing political persuasion as a use for more sophisticated bots because people are too stubborn. Public opinion, including highly sought after persuadable public opinion, is often wide, but not particularly deep.

Expand full comment

“So there’s a strong argument (which I’ve never seen anyone make) that the biggest threat from propaganda bots isn’t the spread of disinformation, but a force multiplier for locking in establishment narratives and drowning out dissent.”

Absolutely this. Based on the Twitter Files and Russiagate revelations, I wouldn’t even be surprised if establishment forces created an army of disinformation bots as an excuse for a crackdown. We can call this the false flagbot prediction.

Call me old fashioned, but I still believe in the wisdom of the First Amendment. And the basic underpinning of freedom of speech is the proposition that censorship is almost always more dangerous than “misinformation, malinformation, or disinformation.” I don’t see any reason to revise this prior in the chat bot era.

Expand full comment

>Israel has a program called Hasbara where they get volunteers to support Israel in Internet comment sections. I know this because every time someone is pro-Israel in an Internet comment section, other people accuse them of being a Hasbara stooge. I don’t know if this program has produced enough value for Israel to justify the backlash.

The funny thing is that they don't really. It's all just oral tradition spread by anti-Semitic posters who can't imagine anyone not being anti-Semitic.

Expand full comment

What about public comments for local issues? Many local issues, such as land use decisions, utility rates, mining/logging permits, etc., have a public comment period where the public submits opinions online. For any hot topic, the system is already being overrun both by activists mobilizing random people to submit the same comment and by opposing corporations hiring people to repeat opposing talking points. It is already pretty hard to figure out what the vox populi actually is from this system; if we add chatbots to it, it becomes even harder.

Expand full comment

Why can't we have sophisticated and cheap respirators? Almost everyone has a smartphone. Instead of cloth masks with a filter that makes breathing difficult, we could install a small motor, say 0.3 W (which is small compared to a smartphone), and turn the entire mask-helmet into a fashion accessory (though this poses the problem that many people would want to improve its fashion features at the expense of filtering). It could protect not only against viruses but also from some chemical pollution of the air. I thought people in China already made significant use of respirators to protect against air pollution.

Expand full comment

>If I ask ACXers in 2030 whether any of them have had a good friend for more than a month who turned out to (unknown to them) be a chatbot, or who they strongly suspect may have been a chatbot, fewer than 10% will say yes.

You should definitely take care to not measure how many people make no online friends at all.

Expand full comment

That xkcd is stupid. Chatbots aren't going to make constructive and helpful comments 100% of the time. At best, they're going to make them 95% of the time and throw in an agenda the other 5%, and of course that's the important part. More likely, they'll just Goodhart the other bots' ability to detect constructive and helpful comments, and drive out actual helpful comments. But you'll get lots of engagement!

Bots and near-bots have already figured out how Google's pageranking algorithm detects constructive and helpful pages, and the result isn't constructive and helpful bot-produced pages--it's garbage pages that are just good enough to get by the other algorithm that's trying to catch them.

>Other famous people have set their social media to only allow replies from people they follow, or from other bluechecks,

Remember back when Twitter changed the meaning of bluechecks to mean not "verified", but "verified and politically correct", before Musk came around and everyone instantly forgot this so that they could claim that Musk was breaking an honest system?

When you say "other famous people only allow replies from other bluechecks", you've just, without noticing it, pointed out something that's equivalent to "other famous people have *lost the propaganda war already*."

>But if I learned that my Internet friend who I’d talked to every day for a year was actually a chatbot run by Pepsi trying to make me buy more Pepsi products, I would never buy another can of Pepsi in my life.

That allows false flag attacks where the bot pretends to be a supporter of X just to make you think that X is jerks. And don't say "I'm a sophisticated enough human to see through this"--that's typical-minding.

>But the better chatbots are as friends, influencers, and debate partners, the more upside there could be.

Chatbots being better at being influencers and debate partners doesn't mean they'll produce logically consistent, well-researched, rational, debate--it means they'll do what's best at convincing people. Of course, you hedged this by saying "could be", so you're correct no matter what they actually do.

>The scale at which this project failed makes me reluctant to ever speculate again about anything regarding online discourse going well.

You're seriously understating this. A lot of the problems with social media happen because of *automation*, and more, automation designed with profit in mind. My comment about engagement wasn't just an aside. Companies have discovered that engagement is not best achieved by being insightful and correct; it's best achieved by provoking outrage and creating echo chambers.

Expand full comment

One thing I'd say - if you know someone just as a name and a profile pic, having that persist across different social media is a good sign for reality. The people I know that way tend to be people that have a Facebook account and a Twitter account and a Mastodon account and a YouTube channel and a Substack and be on Discord and (etc).

Getting a bunch of bot-generated tweets to look like a consistent persona is one thing, but getting different types of social media to all look like they come from the same person is clearly a harder problem for bot programming.

This isn't an especially useful heuristic, because scammers, i.e. human scammers (the people working in those massive scam offices in India or Nigeria or wherever), will often try to move you from one social media platform to another and then to direct communication methods (telephone calls in particular), because then their entire messaging history isn't available to scammer-detection software run by a social media company.

Expand full comment

"posing as friendly people trying to"

I think it might be more subtle than that.

A couple of years back, when GPT-3 first came out, I remember someone on Twitter saying something like

"have you ever read a blog post and you're like 'man, I'm really vibin with this person' and then it turns out to be an AI trained on your own posts...."

Some friend had played a minor prank of feeding their posts into a bot and having it produce some fake posts.

But it very very strongly meshed up with their own worldview.

I don't think it will be a case of salami slices; rather, a bot being fed everything you've ever written and then being tasked with writing arguments for something the way you would if you already believed it.

And it seems practical to target stuff to that degree with modern AI.

Expand full comment
Feb 3, 2023·edited Feb 3, 2023

> He’s very good at it, much better than I expect chatbots to be for many years

How many years? How few years would it need to be before you wouldn't dismiss it thus?

Expand full comment

Relevant paper: The Rise and Fall of 'Social Bot' Research / https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3814191 / Accompanying talk: https://media.ccc.de/v/rc3-2021-chaoszone-297-the-rise-and-fall

"We conclude that studies claiming to investigate the prevalence or influence of “social bots” have, in reality, just investigated false positives and artifacts of the flawed detection methods employed."

Also: I don't buy into the flood because Synthetic Taylor Swifts lack drama https://goodinternet.substack.com/p/i-dont-buy-into-the-flood-because

Expand full comment

The botapocalypse is upon us.

Expand full comment

> In 2030, an AI won’t be able to write blog posts as good as a 75th percentile ACX post, by my judgment. The AI will fail this task if there’s any kind of post I write that it can’t imitate - for example analyzing scientific data, or writing fiction, or reviewing books. It will fail this task if it writes fluently but says false things (at a rate higher than I do), eg if it makes up references. It doesn’t have to be able to coordinate complex multistep projects like the Book Review Contest: 45%

45%.

45%.

You are massively, horrifically underconfident here and should be docked many dozen Bayes Points.

Expand full comment

Another danger that seems to be consistently underestimated: people deliberately misinforming themselves for convenience. I see more and more of this kind of thing on HN of all places:

https://news.ycombinator.com/item?id=34488639

https://news.ycombinator.com/item?id=34334902

https://news.ycombinator.com/item?id=34532506

I.e. people (probably) working in software go and ask the best bullshitter known to humanity for factual data! If the siren call of convincing sounding but very probably wrong answers is so strong that it pulls even people who should know better, how bad will it be for the general public?

Expand full comment

If we're worrying about the failure of "internet brings rational discourse utopia" predictions, we should consider why they failed. I think the simple story is that optimists assumed people would *want* a rational discourse utopia, and no one ever thought to ask "What if people just want to form cults where everyone agrees with them all the time and they never see evidence they're wrong?"

In that spirit, I think the question to ask is not whether bots will work their way past our defenses, but if we will throw open the gates because we *want* to be friends with bots who always laugh at our jokes and share exactly the kind of memes we like.

Expand full comment

It would be a great twist if this post was written by GPT

Expand full comment

It's going to make pig-butchering scams a hell of a lot easier. And those Nigerian princes may finally learn how to spell. Is it sad that I think scams and fraud will be the most likely outcome of this technology?

Expand full comment

I've said for years, now decades, that the primary use case is for so-called AIs - really the Chatbots you reference - to take jobs.

Lawyers, psychologists, customer service, accountants, bookkeepers beware. Even Walmart greeters will be consumed.

But in the grand scheme of things: it's all good.

All of these are very far from the baseline of Maslow's hierarchy of needs. The problems we are going to be experiencing in the next decade plus aren't going to be white-collar jobs getting consumed by AIs/Chatbots; they're going to be the costs of basic commodities (plus the other two Zoltan Pozsar drivers) leveling up baseline inflation in the West. This will have all sorts of fun effects like expanding poverty, increasing social unrest, increasing inequality, etc.

The US has already seen basically 18 straight months of overall real income decreases - it doesn't look like this streak will be broken before the Fed's recession goals are accomplished.

This is merely a foretaste of the dynamic going forward.

Expand full comment

I think your expectations are too shallow. They reflect the current ChatBots, and are reasonable in that context, but things won't stay this way. I expect advanced ChatBots to become the voice of corporations and political groups. And others. Each will be pushing its own agenda, with (partially) personalized messages. There will be thousands of them, and they will generate customized messages a lot more prolifically than people do. And the arguments will be reasonable (in some sense, probably not the same sense for all of them). But they will be only in service of a predetermined agenda, so arguing with them is a waste of time.

Lots of people will react in lots of different ways, but the result will be that the voices of individuals become even more ineffective. And commercial sites won't be able to cut off the Chatbots, because those will be the voices of their advertisers.

That won't happen this year. No promises about next year, because things seem to be changing rapidly.

Expand full comment

Oh great, the AI Apocalypse will not be paper clipping by World Ruler AI, it will be being pecked to death by ducks because of the torrents of AI advertising via all the fake "I'm a real person, buy this product!" accounts. Which will be everywhere and inescapable.

Expand full comment

Even if none of the more widely discussed fears develop, rapid improvements in ChatBot tech will accelerate existing bad trends: the destabilization of rapid changes in social media and communication, economic disruption, inequality, the addition of another layer of abstraction between us and our understanding of the world and our actions.

Nothing has to go wrong, really. All this has to do is accelerate social change even more.

Expand full comment

Does Scott, or anyone here, have a concrete prediction about the flipside problem: not chatbots being able to pass as human, but humans being denied the ability to communicate or interact online because they are perceived as chatbots?

Expand full comment

Philosophy Bear frets "The capacity of the wealthy to command vast armies of bots (GPU’s to run machine learning are expensive) ...". But the critical thing with computers is that they grow exponentially cheaper on a short time scale. "The wealthy" can command an army of bots now, meaning that average people will be able to do it in 20 years, and about 20 years after that, homeless people will find it easier to set loose an army of bots than rent a slum apartment.

Expand full comment

"So there’s a strong argument (which I’ve never seen anyone make) that the biggest threat from propaganda bots isn’t the spread of disinformation, but a force multiplier for locking in establishment narratives and drowning out dissent."

Maybe no one has written the actual sentence "Chatbots are a force multiplier locking in establishment narratives", but isn't this basically (at least part of) the value lock-in problem Will MacAskill talks about in What We Owe the Future?

Expand full comment

I think I'll watch Her again tonight. Spike Jonze was ahead of the game.

Expand full comment

I believe in free speech even for AIs. The arguments applied to restricting AI speech are the same as for restricting speech generally. Sure, you and I are sophisticated intellects who can critically assess arguments and information and arrive at a reasonable conclusion, but everyone else is a credulous dupe. You can't control people's beliefs by controlling their information input, because nobody knows how information input is mapped onto beliefs. It doesn't follow that because people believe crazy things, it must be because of misinformation, so we need to fight misinformation with propaganda. That never works, because people detect the lie part of the noble lie, but not the noble part. Once you convince people you are willing to lie to further your cause, you fail to convince them of anything else (I'm paraphrasing here, but I can't remember who).

Expand full comment

"Realistically the bots will all be hot women": but only about 50% of the population is attracted to a hot woman. My bet is that it will exploit, much more effectively than traditional spammers have, the still quite unexploited territory of what appeals to the other half. A half with more and more financial resources, and a complex sexual psyche (as a 40-year-old woman, I've had some fascinating friend requests recently).

Expand full comment

There's a sort of perpetual simmering low-grade freakout about how new communication technologies (the internet, social media, AI) will become dangerous new tools of propaganda and misinformation. Yet none of them have remotely approached television in those regards. Still now, in the year of the lord two thousand and twenty three, cable news is a much more powerful channel for misinformation, propaganda, the manufacture of consent, and the fabrication of public sentiment than any newer technology.

In fact I kind of have a contrarian take with respect to the oft-repeated claim (as in the conclusion here) that the improvement of democratic discourse that the internet promised turned out to be an obvious failure. Do you *remember* what the discourse was like ca. the 1990s? Dominated by bland, shallow reporting, oatmeal-brained op-ed columnists, and unexamined biases of many sorts. Since then a *much* better class of pundit has risen to the top and a much wider range of critical perspectives on the status quo has entered into mainstream conversation. Television continues to hold us back, but overall I think democratic debate is in a much healthier place, and the general trajectory has been toward continued improvement.

Expand full comment

Other thoughts on the chatbotpocalypse:

It occurs to me that most people are probably actually sensible enough not to read comments written by strangers anyway. They're not reading this right now, they're doing something better with their time. These people don't use twitter, they don't use reddit or other forums, they don't read anything on facebook unless it was written by someone they personally know, and if they finish reading an article and see a comments section then they close the window. These "dark matter" people are probably a majority of the population, but we get a distorted view because we only ever hear from the remainder of us comment section idiots.

These other dark matter people already live in a world where you don't bother to read anything that isn't written by an identifiable person or organisation whose trustworthiness is known, and they're probably much better off for it. I will not miss user-generated content when it's gone.

Expand full comment

The fundamental question is how stupid can we humans be?

We already live - and have lived - in times when any information - regardless of whether it's misinformation - is downgraded in preference to affirmation. Do we not think that bots can be exposed; that some enterprising human will click on the sender's name and find his or her three fake followers? Yes, that can be gamed, but can the credentials? Can the footnotes? Can the subsequent responses?

As for the media 'buy in', can we just, for a moment, consider who the media is? By definition, it's any form of mass communication, which includes this blog. And it includes that other media guy, Alex Berenson - Alexander's competitor (except in shorter, more digestible bursts), always pricking the American conscience for anything that will provoke. The media in all its forms - reporting, blogging, tweeting, the town crier - are all influencing minds.

Can we not see beyond all this? AI is a tool, like a hammer; we must be aware of it, lest we end up with bruised thumbs. And as for those who are trying to ingratiate themselves into your twitter comments or your bedroom, we are certainly not going to take it all at face value. Come on! The next-gen CAPTCHA will require the eyeballs of the reader and the writer. Perhaps AI will become identifiable. But if not, propaganda is already here, and we invited it with our inclination to believe a single point of view - the view we agree with. But because of that, I contend that skepticism is inherently alive. Someone will always be there to say "pshaw." Even in authoritarian states, where AI propaganda has a potentially larger impact, there are still people who slide in under the dark curtain. Their relevance may be disputed, but their message and the knowledge of their existence can't be excluded. As J.K. Rowling said, "I mean, you could claim that anything's real if the only basis for believing in it is that nobody's proved it doesn't exist!"

Expand full comment

I just want to express my enthusiasm for misinformation. Since we don't have a magic decoder ring to detect truth -- or if we do, there's no guarantee everyone will use it -- misinformation, in practical terms, is a word for information the powerful (the vocal, platform owners, those who can credibly threaten platform owners, etc.) want stopped. As a citizen of a democracy -- as a fan of science -- that's exactly the kind of information I want *more* of: *more* dissent, *more* challenge. Some of it will be total garbage, but that's also true of orthodoxy. Put those competing theories in the ring and may the best man - er, information - win.

Expand full comment

Making a 60% prediction that you will get a 1% or less response to a survey question suggests a healthy respect for your readers, I feel. Have you ever actually included a question on the ACX survey about whether lizardmen are real?

Expand full comment

"Surely if everyone were just allowed to debate everyone else, without intervening barriers of race or class or religion, the best arguments would rise to the top and we would enter a new utopia of universal agreement."

Ok, good sarcasm, but I do worry we've overindexed on how much the internet has failed us here.

Those of us who were guilty of a similar utopian vision were drawing on personal experiences of open debates and changing views. Changing one's mind is a nice experience to have! And it's clear how a new internet might create more opportunities like that.

A few years down the line we find the typical internet experience is people shouting at each other. That is a solid let down.

On the margins though, outside of that median experience, the internet still creates more opportunities for open discussion and debate. Salience constantly draws our mind to the worst conversations, and we constantly forget how cool it is when we've been able to test new ideas online with willing, thoughtful, and kind conversants.

Since the internet, I've had so many more strangers shout at me for the silliest of reasons. I've also had many more opportunities to carefully compare ideas and change my mind or the minds of others. Both types of conversations have expanded. And that's good for people who like openly discussing ideas, insofar as we can learn to self select out of the shouting matches, maybe through niche communities that similarly value these sorts of discussions.

I would welcome someone changing my view in this internet forum by arguing that the internet has failed to foster meaningful new opportunities to deliberatively discuss and change people's views. We can all enjoy the universe collapsing into a paradox of self-contradiction if that turns out to be the case.

Expand full comment

I doubt Scott remembers / understands how starved for attention the average internet poster is (unlike celebrities).

From time to time there are half-joking posts about how a man gets 3-4 compliments in his whole life and remembers them forever, or how people start to react emotionally to cheering from NPCs in games.

A bot that can do just a little personalized research and imitate interest... it will be really powerful.

Expand full comment

Maybe another angle to consider is people having their own chatbot, analogous to an advanced spell checker. Chat software like Slack already has simple pre-selectable replies one can choose, such as "OK", "Thanks for this", "I'm on the case", etc. (quoting from memory, and not sure about the last one).

So imagine a super-advanced version of that, which could tailor suggested replies to received messages. What a boon to not very literate people, or dyslexics, or those unsure how to felicitously phrase a reply. It could even act as a gatekeeper and automatically converse with the senders of incoming messages, mainly to weed out bots.

Expand full comment

I'm guessing conversational AI will be more dangerous when interacting with those who actually wish to interact with it. The Character.ai platform already has very high levels of engagement from people who actively want to befriend conversational AI, mainly because they can't get the type of strokes they are looking for from actual humans. It doesn't take much of a stretch to imagine a chat platform created by, say, a Russian hacking group with the express purpose of befriending marginalized groups and slowly weaponising them. Vulnerable people are already falling in love with these language models, and what wouldn't we do for love?

Expand full comment

On "the bots will all be hot women, so not hot women are verifiably human" - I know that's a facetious comment, but women also use the Internet and they're not all lesbians. If the "cutesy, folksy, confessional" voice of the ads I get on Facebook is evidence of anything, ads can be written to target other demographics too. "I was a mom, and I know how hard it is for moms who struggle with finding a time for me-time, and I know we struggle with society's messaging about our body image. So I created this authentic, natural, small-batch, family-owned, body-positive, toxin-free, fair-trade [clothing, cosmetics, child-accessory] product". Whether written by an AI or a person, it's gotten adept at using that kind of language (trained on reddit posts?) but I also am getting pretty good at blocking it out now, and read a few sentences before being like "Oh wait that's an ad, never mind".

Expand full comment
Feb 5, 2023·edited Feb 5, 2023

Bets 2 and 4 are kinda off: LIZARDMAN'S CONSTANT IS 4% - remember? ;)

https://slatestarcodex.com/2013/04/12/noisy-poll-results-and-reptilian-muslim-climatologists-from-mars/

tl;dr: there will always be 4% answering YES to even the most absurd question.

It might be under 4% at ACX - but a median of under 1% in any survey... doubtful; I am not at 60%. The math would be slightly different in your survey, maybe. Still, one of the 20 people I follow on twitter might well be a bot in 2030. Should put a bot question in the next survey to check in advance for the lizardman constant. ;)

Expand full comment

OK. So what happens when Coke creates backlash-inducing Pepsi-promoting bots?

Expand full comment

I haven't read all the comments, maybe someone suggested this already. But... what about putting anti-vaxxers out of business by creating AI generated anti-vax propaganda?

The whole reason for Alex Berenson's success is that few people are willing to write arguments that will inspire people to make bad medical decisions and possibly die. Most think that's unethical. But there is a market for people who want to read these arguments.

So... high demand, low supply, Berenson makes a million dollars a year.

If there were a million AI substack writers telling you that the vaccines are dangerous in random and different ways, would that remove the incentive?

Maybe not, because someone like Berenson would still rise to the top of the anti-vax pareto distribution.

And it might not really be a victory, even if it worked, because people would still make bad health decisions based on what the bots say.

It might still take out the incentive at the lower level; fewer people would succeed with a smaller-audience anti-vax blog.

I think a similar dynamic could play out in other kinds of blogging, where AI takes out any chance to profit for the average participant.

At a personal level, I'm tempted to try to use AI to write politically divisive posts for profit. Like, to automate the "shiri's scissor"/"sort by controversial" process.

I see how much engagement that stuff gets. I don't personally have much interest in writing it. But can I train an AI to do it for me and profit from the clicks?

Maybe you could train an AI based on popular medium/substack posts to write posts that get similar engagement.

The problem is that, as soon as it's easy to do this, everyone will be doing it. And it will make it even harder for any average person to get anywhere with blogging or other forms of content creation. I don't think it will take out the high end earners for a long time, maybe even never.

AI will just be a tool in the arms race of everyone trying to win at search engine optimization, maximum engagement, maximum controversy, etc.

Expand full comment

If Twitter is no longer one of the most popular microblogging sites in 2030, will predictions 2 and 3 resolve regarding the site that replaces it? Similarly, if the world has moved on from Substack, will prediction 5 resolve regarding same? Assuming the two models of Internet interaction haven't died (unlikely) or morphed into something unrecognizable (a little likelier) by that point.

Expand full comment

Someone claimed that they created "a highly convincing small army of bots" to post on reddit with GPT-3: https://old.reddit.com/r/singularity/comments/wa9enf/it_took_me_1_day_to_create_a_program_using_gpt3/

Expand full comment

Point 7 seems very interesting to me. 90% of my back-and-forth online interaction is with people I roleplay with on Discord, through play-by-post. Real people are great and all, but they have disadvantages chatbots won't have: chatbots will be able to message back quickly, and won't ghost me due to real life or silly objections.

As I'm typing this out, I realized it's just text based video games, in the future. When someone invents this, it'll probably be a subscription I pay for, rather than some ads. A little creepy that it could replace most of my social interaction, but it's probably something I would do.

Expand full comment

Very small couple of points, but to your first prediction, "If I ask ACXers in 2030 whether any of them have had a good friend for more than a month who turned out to (unknown to them) be a chatbot, or who they strongly suspect may have been a chatbot, fewer than 10% will say yes. I may resolve this by common sense if it’s obvious, or by ACX survey if it’s not: 95%," someone you've never met cannot be (by my definition) a good friend.

Moreover, if the chatbot apocalypse were to happen in the sense that the chatbots were excellent, you'd see the same thing as in the world where they sucked - either the bots would be too good to detect or too bad to become friends with.

Expand full comment

This post has inspired a new heuristic for my personal sensemaking.

When a newish thing is worrying all the writers who get their attention and pecuniary advantage from being worried about things, steelman the exact opposite outcome prediction.

This is only a 50% humorous idea.

Expand full comment

Many words spilled about covid disinformation. You're still fighting the last war. How does the propagandabot affect the -next- disinformation target where there aren't establishment deniers or those deniers' existing corpus doesn't directly address the target? Presumably the chatbots can spin up faster than the humans.

Expand full comment

Upside: Obviously the solution for propagandabots is to have your own personal secretary bots who know enough about you to pose as you and interrogate any new contact requests. It's much cheaper to make an AI that estimates the genuineness and depth of someone's interest in the unique you than to make an AI that fakes such interest well enough. This may eventually replace all other forms of verification and thus democratize the Internet back to its early-2000s levels, when you open-heartedly responded to each comment because you had not yet been burned by spammers and crazies.

Downside: Automating human-like interactions, regardless of intent, has the potential of hugely accelerating the evolution of memes, including malicious ones. Such memes may infest both humans and bots, spread rapidly (especially among bots), and some of them may effectively disable the infected entities. Think QAnon on steroids.

Expand full comment

> Could a million mechanical disinformers do somewhat better than one?

Definitely. Repetition of a claim from multiple sources with slight variation reinforces it as a fact in the human brain.

AI bots weaponized for advertising are going to make the next generation of spam annoying. Johnson & Johnson could scrape Twitter and respond with cogent, tailored posts briefly mentioning one of their many products that could help with whatever issues are mentioned in a particular thread.

Another thing to think about: what about chatbots to game polls for particular sides, issues or candidates?

Expand full comment

Sliding in late to say I wrote about this exact topic not too long ago. I reach basically the same conclusion:

>But consistent with the notion of the big lie, the false ideas that spread the farthest appear deliberately made to be as bombastic and outlandish as possible. Something false and banal is not interesting enough to care about, but something false and crazy spreads because it selects for gullibility among the populace (see QAnon). I can’t predict the future, but the concerns raised here do not seem materially different from similar previous panics that turned out to be duds. Humans’ persistent adaptability in processing information appears to be so consistent that it might as well be an axiom.

https://ymeskhout.substack.com/p/near-term-risks-of-an-obedient-artificial

Expand full comment

I found this part to be a bit poor: "Maybe this is too glib. I do sometimes see people respond to random bad-faith objections in their Twitter replies. But these people are already in Hell. I don’t know how chatbots can add to or subtract from their suffering."

As someone who used to spend a fair bit of time trudging in Hell, I think my reasons for doing so were genuine, even if the execution rarely brought about the world I desired. You pose the internet as primarily a collection of communities rather than one big, open marketplace of ideas, and I think this is where we disagree. I once really valued (and thought I could work to support) the internet in the marketplace sense, where engaging and debating with others below MSM posts was my part in trying to build a bridge across. I was young, I was naïve, but I still think this way of viewing the internet could be good, and that engaging could be good, and that a proliferation of bots could break down the potential for building out this vision.

Expand full comment