
Housekeeping question: I accidentally sent this to paying subscribers only first, then edited it so everyone can see it. So:

1. Can nonpayers see this?

2. Did nonpayers get an email about it?

3. Did payers get two emails about it?

4. Any other weird bugs caused by this?


My response to Weyl: Thanks for your thoughtful response. I'm not as familiar with your other work as I should be and I took the essay as a stand-alone document (and excused my laziness because I got the impression from the essay that you were repudiating some of your other work). I can't figure out a good way to get full-text access to the David Levine review of your work but I can certainly believe he erred in the other direction.

You're right that I was unfair to accuse you of using only those examples. Many people (I'm thinking particularly of a group who call themselves "post-rationalists") obsess over those particular examples, and you used a set of examples including those, and I unfairly used you as my target in attacking people who rely on those too much.

I'm a bit confused by the broadband allocation example though, even after reading your separate piece on it. The impression I get is that some technocrats designed an auction mechanism, and you (another technocrat) wanted a different, better auction mechanism. It doesn't sound like they failed to consider alternative perspectives or things outside their models (their first draft included the dynamic reserve prices you wanted, and then they took them out). It sounds like maybe some companies that were going to profit from the bad design pressured them to remove it.
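To make "dynamic reserve prices" concrete, here is a minimal toy sketch in Python (illustrative only -- this is not the FCC's actual incentive-auction design, and the function name and numbers are invented): a reserve that rises with observed demand instead of sitting at a fixed floor.

    # Toy sketch only -- not the FCC's actual mechanism. A "dynamic" reserve
    # rises with observed demand instead of staying at a fixed floor.

    def dynamic_reserve_auction(bids, base_reserve=10.0, step=5.0):
        """Single-item auction whose reserve climbs by `step` for each
        bidder beyond the first, so hot markets can't clear at a floor price."""
        reserve = base_reserve + step * max(len(bids) - 1, 0)
        qualifying = [b for b in bids if b >= reserve]
        if not qualifying:
            return None, reserve  # unsold at this reserve
        return max(qualifying), reserve

    # Three bidders push the reserve to 20, so the 15.0 bid no longer clears:
    print(dynamic_reserve_auction([15.0, 22.0, 30.0]))  # (30.0, 20.0)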

And it sounds like your preferred response isn't to be less mechanismy in how broadband gets auctioned (eg have Congress allocate spectrum to whoever they like without any auction or rules), it's to try to explain these issues to the people, so that they can demand the good mechanism instead of the bad mechanism. I'm pretty skeptical of this. I will admit that after reading the article somewhat closely I still don't feel like I understand the issue well enough to know if you or the other technocrats are right (I assume you were, via social proof, but I can't intellectually prove it). I imagine if this somehow became a vibrant topic of public debate, it would devolve into AOC saying that dynamic reserve prices are necessary to prevent greedy Wall Street fat cats from stealing your spectrum and Ted Cruz saying that dynamic reserve prices are a plot by anti-progress extremists to destroy broadband, and then half the spectrum ends up going to Halliburton and half to queer women of color. It's hard to imagine it ending with everyone understanding the math behind dynamic reserve prices and rising up to have them included in the finished proposal at the exact right level. Maybe I'm being too cynical here, but keep in mind I work in health care, a field where complex technocratic proposals constantly get elevated to the popular consciousness and we get to see what happens next.

(also, I'm unclear how this involves rationality being dangerous, or needing to incorporate perspectives from Continental philosophy, etc. It sounds like lots of people thought rationally about how to design an auction, and you did it better and more rationally than they did, which made your proposal better, and potentially they were also corrupt in some way. How would being less rational, or incorporating perspectives from Continental philosophy, have helped with this?)

>> Furthermore, the positive examples of technocracy @slatestarcodex refers to are...surprising. Two examples. To call school desegregation a technocratic invention papers over decades of community activism for desegregation. Perhaps even more dramatically looking at the coronavirus as an example of the success of technocracy runs against pretty much any reasonable reading of the international data. Danielle Allen and I have a piece coming

There were also decades of community activism for Soviet communism; in fact, there was a whole popular Revolution in favor of it. Does that mean collective farms aren't technocratic?

Overall I continue to be concerned that you're trying to channel James Scott but seem to be thinking of this on a completely different axis than the one he is. There being decades of community activism for something is completely consistent with (maybe contributory to) the sorts of top-down technocratic policies he focuses on. Well-intentioned reformers, many of whom may be some kind of "community activist", demand that the government interfere in a bottom-up emergent system and make it better. Then the government agrees to do that. That's the essence of Scottian technocracy.

This is also why I'm including the coronavirus. For Scott, the alternative to technocracy isn't "some even better government policy". It's ordinary people living their everyday lives unaffected by government reformers. I think the government letting ordinary people make their own choices on the coronavirus (ie no official lockdowns, everyone chooses individually or perhaps on a neighborhood-by-neighborhood basis whether or not to quarantine) would have been a mistake.

I continue to think we have some deep definitional disagreement on what technocracy is and what non-technocracy is. Usually I would be more embarrassed about this and understand that the onus is on me to resolve it, except that we both seem to think we're using it the same way James C Scott does, and so I would rather figure out how our interpretations of him are differing so badly.

(Substack is forcing me to cut off this comment here, second half continues below)


"I was blowing the whistle on one (promarket.org/2020/05/28/how…) at roughly the same time I wrote the technocracy piece."

Link seems to be broken.


"Blowing the whistle on one" link doesn't work, it has ... in the URL instead of the actual rest of the URL.


Some valid critiques. Also read through the AI article. The idea that “AI is an ideology” is, um, extremely contrived. It seems to be based primarily on the tired argument that “image classification isn’t really AI” and similar arguments. It doesn’t matter what you call it, it’s still an important technology and the advantages that China is cited as having are still advantages.


Scott, thanks for posting Glen's response.

I think Glen is one of the more interesting critics of EA/AGI concern/rationalism precisely because his way of thinking about the world is such a close cousin of the rationalist approach. I see Glen as someone who has infused his natural tendency towards economic/rationalist thinking with ideas about social justice and inclusive community building. I think it's a really valuable mindset, and an effective bridge between people and concepts that are often separated by a much greater chasm.

Still, sometimes it seems like Glen is trying to create a bridge--or deal with tensions--in ways that lead to him coming across as hypocritical or self-defeating. That's annoying, and it has definitely set off my alarm bells in the past: "Oh really, this *Princeton economics prodigy turned Harvard fellow turned U Chicago professor turned Microsoft guru* is also suspicious of complex, technocratic solutions for public problems?" But, actually, yeah. That's valid. I'm glad that some of the brilliant people doing technocratic, rationalist-adjacent work are occasionally hypocritical, almost confused critics of what they are doing. That seems pretty useful, at least in Glen-sized doses.


Having had some contact with Weyl's thought before, his radical egalitarianism is interesting to me, because I agree with his values but I feel like some of the substantive views he combines them with are fairly far-out.

Like I remember a podcast with Rob Wiblin where Weyl said something along the lines of claiming that there is no such thing as a difference in overall epistemic skills among humans. Rather, when it looks like there is such a difference, it's really that people have specialized in different ways.

Can others who are more familiar with his views explain why he thinks this?


GW is likely correct about one thing, which is that EA types and rationalists should pay (more) attention to insights from the humanities and humanistic social sciences. Those disciplines are not in fact synonymous with some simplistic "woke ideology" or whatever, as Scott seems to imply.

He is also of course correct that democratic accountability is good, but I am not aware of anything from the EA movement (which is more institutionalized than rationalism and whose principles are thus easier to pin down) that would suggest that it is against democratic accountability. That seems to be a rather crude strawman. I guess we can find someone who identifies as rationalist and thinks that the world should be ruled by unelected technocrats, but they are most definitely not representative of the EA movement.

Also GW's examples about the Phillips curve and Eastern European "shock therapy" are kind of empirically shaky, i.e. imho he arguably got his history wrong. I am hedging here a lot, since those are two huge rabbit holes of discussion, for which I don't have time at the moment.


Yowza, I don't want to get into the larger issues, but that Audrey Tang tweet is insanely dumb. Maybe it's OK as a poetical musing, but to pretend it has any non-insipid content is really something :\


The reversal of the cover art... funny! Might have been cool to reverse it on the original and then reverse it back for the response. But it would have been hard to think of that in advance.


"AI" is anything we don't know how to do yet. Once we know how to do it, it's just programming.

First, I programmed in machine language. 0 means and, 1 means add, 2 means increment and skip if zero, 3 means store, 4 means jump to subroutine, 5 means jump, 6 means I/O, and 7 is a bunch of math operations.
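(That opcode list resembles the PDP-8's octal instruction set; the comment doesn't name the hardware, so treating it as PDP-8-style is an assumption. A minimal Python dispatch table makes the mapping explicit:)

    # Assumes a PDP-8-style machine, which the 0-7 list above resembles;
    # the original comment doesn't actually name the hardware.
    OPCODES = {
        0: "AND",  # bitwise and into the accumulator
        1: "TAD",  # two's-complement add
        2: "ISZ",  # increment and skip if zero
        3: "DCA",  # deposit (store) and clear accumulator
        4: "JMS",  # jump to subroutine
        5: "JMP",  # jump
        6: "IOT",  # input/output transfer
        7: "OPR",  # operate group: the "bunch of math operations"
    }

    def mnemonic(word):
        """Read the opcode from the top octal digit of a 12-bit word."""
        return OPCODES[(word >> 9) & 0o7]

    print(mnemonic(0o1234))  # TAD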

Next, I programmed in assembly language, and a robot translated it into machine language for me.

Next, I programmed in C, and a robot translated it into assembly language for me. Mostly I could do a better job than it could, but my time was better spent programming in C.

Next, I programmed in Python, and a robot ran the programs for me. I could have turned the Python into C, but that would have been too much work, so I let the robot just run the programs.

People think I should be scared that a robot will take my job.

I'm not. A robot has always taken my job. Robot, come take my job so that I can get more valuable things done.


You guys are talking past each other on critique of "rationalism." It's clear it doesn't mean the same thing to the two of you, but what does it really mean at all? Is it people in and around the software and venture capital fields on the west coast who have gone all in on the notion that billionaires have some special mental property that makes them qualified to disrupt and hack any industry, including government? Is it a few hundred people who all lived in San Francisco at around the same time who attended cuddle parties and coalesced around a few high-profile autodidacts highly committed to intellectual parable via Harry Potter fan fiction?

There's clearly some overlap, but these are not the same groups. I think Scott is particularly sensitive about this because one of these groups has tremendous social power and the other not so much and may in fact have spent much of their lives being mocked and bullied, and sure, a few now have advanced degrees and decent salaries, but don't fundamentally see themselves as being aligned socially with the Adam Neumanns and Travis Kalanicks of the world.

And then there's the rest of us, who for some period of time possibly now spanning decades, possibly well before Less Wrong or even Overcoming Bias ever existed, have tried in some way to systematically overcome all of the various failure modes of human cognition for one reason or another, possibly profit, possibly fun, and possibly in some small subset clustered around the cuddle party people specifically to stave off the AI apocalypse. I don't know that this really has to entail specific policy opinions. That the commentariat may in practice lean toward some anyway I think is a consequence of Scott's commitment to charitably reading all opinions as long as you aren't being a dick. If you build a place where culture warriors can speak their minds without fear of being canceled, they're gonna show up, even to a blog nominally meant to be principally about psychiatry, philanthropy, and AI risk. This creates a very skewed view of "rationalism" because Scott is seen as probably dignitary #2 of the entire movement, but his blog is largely populated with people who don't identify as rationalists and strongly disagree with Scott on nearly everything except the commitment to open discourse.

So I think you end up with two broadly wrong ideas about who rationalists even are based on cherry picking people who are socially adjacent but ideologically committed to very different goals. Then you have the broader group that is probably a more accurate view but may not even consider themselves rationalists. Then you have Scott's very specific peer group who would definitely call themselves that and in fact made up the name and they take it very personally when you criticize a different group but implicate them.


This is exactly the type of infuriating talking-past-each-other conversation I find myself having with partially-aligned people who've ended up in a different bubble than me and with whom I haven't had the same tipsy, late-night, intention-clarifying conversations.

Whatever the result, I'm grateful to see y'all do it in public.


Glen keeps nodding towards Human Computer Interaction and user-centered design as a model forward, which makes me as uncomfortable as he is as a mechanism designer. As someone whose job is to "communicate across lines of difference" and then bring /something/ back to the technocrats who actually build (and design) things, I know how hard and flawed the work is. I have to use all my own technocratic skills to build a better map of the territory that is still legible to the product team. The outcomes he celebrates (99dots - so cool!) are a few successes he heard about that survive in a brutal space out there.

I'd also like to understand what "democratic communication" means. Is it a style of communication with some characteristics but not others? Likes and votes don't count, but do survey responses? Open-ends only? Does it have to be public, or can private or anonymized communication count? Who is part of the demos for a given topic? (I'm hoping this isn't completely dependent on point 7 above - what does "democracy" mean?)


> the country, Taiwan, which performed best in the virus was led in part by Audrey Tang who moved back to Taiwan after being immersed in and repulsed by the rationalist movement in Silicon Valley - see e.g. https://www.wired.com/story/how-taiwans-unlikely-digital-minister-hacked-the-pandemic/) and dedicated herself to doing things differently in Taiwan

Did anyone else read the fairly long cited article and find anything to support this claim? As far as I can tell, he just made it up.


I'm not sure if I ought to rephrase things to be politer, but:

I thought the stuff on rationality and effective altruism was pretty obviously wrong. The next step is wondering how much this is an aberration vs. representative. And isn't it an interesting coincidence that the section of Weyl's essay that this audience knows the most about (and is thus best equipped to evaluate for ourselves) is also the part that Weyl is *apparently* the most wrong about?

Naively, this should cast some suspicion on the rest of the essay, perhaps enough that he's not worth engaging with further.

Yet as far as I could tell, nobody else has made this (seemingly obvious) point so far. Are other people just virtuously being silent? Is there collective Gell-Mann amnesia? Or am I being dumb for object-level reasons (eg his other points are reasonable and have a nontrivial correlation with reality, and the coincidence I noted happens to just be an interesting coincidence, rather than an "interesting coincidence")?


Great to see you back, Scott.

I have an issue with the Firefox reader mode on Substack. When navigating through your blog, only the first page I visit works in reader mode; after that the icon disappears from the URL bar. I have to refresh the page to get the reader mode icon again. It's just a small annoyance, but maybe you can pass it on to someone at Substack to look into.


Again, I think that this misses the real division, which is Coerced vs. Chosen. Ultimately, the idea that you can force people to do what you want because you're LARPing science is no different from the idea that you can force people to do what you want because you have Divine Revelation / Dialectical Materialism / Whatever on your side.

If your ideas are actually good, you don't need force; you just need people's rational judgement. It's when your ideas aren't up to snuff, that's when you need force.

Notice how this is completely independent of whether you are claiming your ideas are good because of lots of bottom-up practical, hands-on experience, or whether you're claiming that they're good because of lots of top-down, formal-studies knowledge.

This also gets to why it's the people who actually do technology who are critical of technocracy, while the wannabes like it.


I concur with everyone who pointed out this is two people who mostly agree and mostly say correct things talking past each other.

Scott, you made a mistake by not heeding your own warning ( https://slatestarcodex.com/2018/12/18/fallacies-of-reversed-moderation/ ) and not realizing Glen is himself a "rationalist" talking about how "maybe we should also think about other people". This in turn makes you argue for expert opinion and top-down intervention being viable at all. It turned out poorly. Except for vaccinations, your examples are highly debatable for any position more nuanced than "it's possible for top-down not to be 100% harmful", and this actively distracts from your point.

Other than that, or perhaps because of that, I think Scott is more correct in the big picture. Glen, you're worried about people justifying their selfish, close-minded decisions with rationalism. This happens, but, as Scott points out, so does people justifying their selfish, close-minded decisions with "human values", etc. At some point you have to come around to the notion that it's selfish, close-minded people who are the problem and their excuses are just that, excuses. (I believe Scott (James), being an anarchist, would support that conclusion.) Forget that, and you're just arguing about what values selfish people should use as a cover for their selfishness. (And as Scott forcefully but correctly points out, the humanities are already on top of that ranking nowadays, to the point where even naive rationality looks like a useful correction.) This directly ties to Scott's point about mechanisms being useful to restrain the selfishness and biases of those in positions of power and expertise. Yes, mechanisms can themselves be just an extension of this selfishness and bias. But we're nowhere near the point where the realistic alternative is a more human-friendly process. The realistic alternative is blatant self-interest and corruption. Say, voting may not be the best form of democracy (much less the only one), but Scott never claims it is. He says it's a democratic mechanism that's been successfully implemented, and it beats other available mechanisms. More importantly, it actually serves as a check on selfish people and their top-down planning. (It's hardly perfect, and you could argue it's acting as a tool of legitimization that disables other checks on them. But it did not arise as an alternative to participatory democracy, it arose as a replacement for dictatorships and tribalism. The nearest alternative to it is a reversion to dictatorship.)

Also Glen, ignoring the matter of what (James) Scott actually meant, what Scott takes from him is important for the same reason your praise of the iPhone is important.

I think you're wrong and Scott is right: the actually important part is for people to be able to express themselves legibly; making the environment legible to them is a necessary part of this, but nowhere near sufficient. I say this because modern technologies, of which the iPhone is the poster child, (ostensibly) aimed at making electronic appliances more legible are in fact actively taking control away from their users, and the result is the social hellscape we're currently witnessing - collapse of creativity and individuality, atomization, epistemic bubbles on one end, and forced conformism and more and more authoritarian control on the other. (Moreover, "it was certain that this would happen because they optimized for the wrong thing".) It's the geeks away from "user-friendly" ecosystems who still keep us going, against all odds, and it's not because their tools are arcane, because they never stay arcane for long. It's because their tools are made to serve people, not to let them perform the exact curated set of options someone wants them to perform. Yes, sometimes the people are just them and not a random person off the street, but that's always more easily fixed than Apple Corp.


This engagement is valuable and food for much consideration. Both positions are well articulated and thoughtful, willing to accept scrutiny and without ad-hominem attacks or malice. It is EXACTLY what I want to see in my Internet of people. Thank you both for taking the time to write so clearly on your thoughts and engage with each other as fellow travelers should.


“There is no unitary thing called "science" or "mechanism". There are a variety of disciplines of information processing across academic fields, across cultures, across communities with in a culture, etc.“

He means standpoint theory, from postmodernism. Has this stuff permeated Everything? 👆👆👆


I'm kinda surprised that this debate has not explicitly touched on Weyl's argument about "fidelity" vs "legibility." I had thought that was the core point of the original essay!

Specifically, I thought Weyl's central line of argument went something like this:

-------

1. Currently, the practice of government involves a lot of "mechanism design," i.e. proposing and adopting rules and procedures that will guide or constrain how the government does a particular thing.

2. Currently, the mechanisms are designed and argued for by a small elite quite different from most people. This elite has trouble communicating with much of the populace.

3. Mechanisms designed by this elite tend to leave out important factors in a way that matters practically. This happens for general "all models are wrong" reasons, but is exacerbated by the elite's lack of communication with most people.

Even when communication happens, it is delayed by the need to "translate" the opinions of the masses into the language of the elite before the elite can respond to those opinions. And it occurs unreliably, depending on whether someone's around and willing to do this "translation."

4. For the sake of the present argument, let's assume 1+2+3 are fixed for the foreseeable future.

So we're assuming we *will* have mechanisms and they *will* be designed by an out-of-touch elite. The question is how this elite ought to behave if 1+2+3 are true.

5. To help with #3, the elite needs to provide ways for the masses to directly intervene in a way that corrects the elite's own errors. There are two ways to do this:

- Try to mend the communication breakdown between elite and masses

- Design mechanisms which the masses can directly modify to their own ends, somewhat like open-source software

Weyl mentions both, but focused mostly on the second one, about mechanisms. I take this to be the central *goal* articulated in the essay -- to make this kind of mechanism feasible.

6. Currently, the elite tends to focus on making their mechanisms better when judged using existing models ("optimality"), and on making the models more realistic ("fidelity"). Pushing for optimality can make ideas more *or* less complicated, but pushing for fidelity usually makes them more complicated.

7. Due to 6, the elite's models and mechanisms tend to get ever more complicated with time. Thus, they get steadily more difficult for the masses to understand. (Indeed, the highest-fidelity mechanisms available to the elite may be incomprehensible even to the elite themselves, e.g. black box neural nets.)

8. What needs to be true for a mechanism to be open to modification by the masses? For one thing, the masses need to understand what the mechanism is! This is clearly not *sufficient* but it at least seems *necessary*.

9. Elites should design mechanisms that are simple and transparent enough for the masses to inspect and comprehend. This goal ("legibility") trades off against fidelity, which tends to favor illegible models.

10. But the elite's mechanisms will *always* have problems with insufficient fidelity, because they miss information known to the masses (#3). The way out of this is not to add ever more fidelity as viewed from the elite POV. We have to let the masses fill in the missing fidelity on their own.

And this will require more legibility (#8), which will come at the cost of short-term fidelity (#9). It will pay off in fidelity gains over the long term as mass intervention supplies the "missing" fidelity.

I take this to be the central *piece of advice* articulated in the essay.

-------

This argument is interesting, novel (to me anyway), and very different from hoary old complaints about the downsides of mechanism itself.

On the other hand... I don't really buy it? It's all technically true as far as it goes. But it seems narrowly interested in a necessary condition that's far from sufficient. There are plenty of laws, etc. that are simple and easy for most people to *understand*, yet very difficult for people to *change*.

Weyl seems to want mechanisms that are easy to customize for different local circumstances -- perhaps even mechanisms that are more like templates or genotypes, specifying not the rules themselves but how to produce a set of rules for your local context. It's an appealing idea, but it would require all kinds of work that his essay doesn't argue for.

Meanwhile, the change which the essay does argue for -- towards more legibility -- feels only tangentially relevant to the problem. Yes, designs that are easier to understand are often easier to customize. But sometimes a design is easy to customize precisely because you *don't have to* fully understand it to usefully customize it. (The Apple vs Microsoft example is illuminating. Early Macs were not simple machines! They didn't turn the average person into a computer expert. Instead they succeeded by carving out a valuable kind of interaction with a computer which *didn't require computer expertise*.)

It's as if Weyl had written an ode to open-source software which assumed the user was carefully reading every line of source code, and that no one could adapt a piece of OSS to their own ends unless they fully understood it. And had then argued for more readable source code, even at the cost of performance, correctness, and *meaningful* adaptability. (I.e. how easy it actually is to adapt the code in practice, as opposed to the coarse proxy "is the code readable?")
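To make that last point concrete, here is a toy Python sketch (the names and setup are hypothetical, not anything from Weyl's essay or the OSS world): a narrow customization hook makes behavior adaptable without the user ever reading the internals.

    # Toy sketch: the ranking logic below could be thousands of illegible
    # lines; a user only needs the hook's contract (record -> number).
    def rank(records, score=len):
        """Order records by a user-supplied scoring function, highest first."""
        return sorted(records, key=score, reverse=True)

    # Customizing without comprehending the internals:
    print(rank(["aa", "b", "cccc"]))                           # longest first
    print(rank(["aa", "b", "cccc"], score=lambda r: -len(r)))  # shortest first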


As many have said, Scott A and Glen seem to mostly agree with each other, but I think part of the difference is the concept of "taking people with you". Sometimes you actually can put in place a really good system that doesn't need huge buy-in. A pretty trivial example is a deposit refund system on beer glasses at sporting events. You pay £1 for your cup and get £1 back at the end when you return it. Or you can keep it if you want. If anyone leaves their cup on the floor, someone just picks it up and pockets a very easy £1 when they return it. The problem of having loads of cups left everywhere has been solved by a smart (I guess technocratic) solution.

However, this is a small thing, and bigger things often require people to "buy in". Implementing a company policy, government policy etc rarely works if everyone thinks it's a terrible idea. It's not impossible, but you are swimming upstream. This is where there's some tension between Glen's and Scott's views. Even though I think Glen is just obviously, objectively wrong in saying (as referenced in another comment) that everyone's epistemology is equal and people just specialize for different things, that sort of viewpoint is (I think) much likelier to get people to buy in. "We think you are equal and value your feedback" works far better than "you are too stupid to understand why this is a good idea".

However, I think Glen is going too far the other way. If you're a mechanism designer and you really believe that designing better mechanisms will lead to better outcomes, just defend your view! Yes, there are caveats. Yes, there are some things which you absolutely have to take account of that are better expressed as general principles than precise graphs. But ultimately, not everything is just a matter of preference. Some things do work better than others.

I love Scott's writing and it took me a while to figure out that this was unusual. I don't think he's writing for the benefit of everyone, it's a pretty small self-selecting group that will enjoy it (which Scott seems totally fine with). I don't think an SSC piece is going to have much impact on 100 randomly selected individuals. But as Scott sort of alluded to, I don't think the average Joe who's upset with an overmighty, arrogant authority is necessarily bemoaning their lack of focus on continental philosophy.

I do think these two views are worth discussing though. In the UK, by the time lockdown was announced in late March 2020, I recall seeing an opinion poll giving the policy 98% support (have that lizardman!). This is certainly taking people with you. But it was also too late and thousands of people died as a result. A technocratic early intervention would have been the right thing to do, even if there had been a backlash to it. The irony here is that the government pretty much followed the medical advice they got early on in the pandemic, it was just wrong.

I suppose a middle ground is to take people with you where possible, but when time is scarce the best thing to do is to use all the tools you have to figure out the right course of action and do it. Does this seem fair to both?


Me: "Audrey Tang repulsed by rationalist movement"

Google: "It looks like there aren't many great matches for your search"

Citation needed.


Speaking as someone whose academic experience is entirely on the 'humanist' side, I am sceptical that it is generally better on any of Weyl's criteria or capable of offering the insights he hopes for.


Reading the excellent comment (probably the best one in this whole conversation) by nostalgebraist, as well as rereading some of Scott's responses, makes me realize that most of what I was actually at the core trying to argue and call attention to was almost completely lost in this discussion.

There is a fundamental and extremely central element of how most positive social change at scale takes place that is mostly ignored in rationalistic discourse and technocracy (not just the rationalist community, but most of the economics community, etc.), yet is studied extensively in literatures that I am just starting to learn about. A key goal is to call attention to these other literatures and ways of thinking. I have tried doing a search and may have missed something, but as far as I can tell there is basically zero discussion of these literatures on LW, SSC or OB.

These literatures include the work of John Dewey, the whole field of human-centered design (e.g. Don Norman's The Design of Everyday Things), the related field of participatory design, etc. I won't go too much into a lit review here as I am about to put out a paper with an extensive one. These fields have a pretty deep methodology for thinking about the role of technology in society and how social change takes place. They have been both tremendously influential and successful; contributors and participants have originated many of the technologies we use most today.

These areas emphasize a basically different model of social change than shows up in the way Scott poses the distinction between top-down v. bottom-up. Social change happens through a range of communities coming up with designs, and then other communities experimenting with, having a range of experiences with, and reshaping and repropagating those technologies. Little successful change comes from a single center gathering "evidence" and then "implementing". Think about the internet (which diffused in a very complicated, polycentric way and was reshaped half a dozen times along the way), the personal computer (which was invented in universities, developed at a corporation, then redeveloped in a much weaker form by hobbyists, spawning an industry that eventually rediscovered the earlier work, etc.), democracy in America (which grew out of local self-government, diffused into colonial governments, etc.) and so forth.

The success of change thus depends crucially on practices that facilitate comprehension, reuse, refashioning, participation in the design process from a range of people etc. This does not mean, as nostalgebraist points out, only or perhaps even primarily making the entirety of a system completely comprehensible (though that can sometimes help). It has to do with allowing accurate mental models of various parts of systems and understanding that the ways a system will be used relate less to how it is originally intended and more to the basic ways it can fit into a complex system. The internet is probably the single best example of this, but the iPhone is quite good as well.

Is this hard to model or make fully rigorous? Absolutely. Is it very easy to completely mess this up by totally ignoring and debasing its importance? Absolutely as well...as the examples from economic policy making I give illustrate. Have I frequently made this mistake myself? Countless times. What led me to want to bring this to the attention of this particular community? I advocated, in a very rationalist way, a range of things in my book Radical Markets. I took a lot of flak from people along the way for technocratically ramming things down people's throats (e.g. see the reaction to this piece: https://www.politico.com/magazine/story/2018/02/13/immigration-visas-economics-216968). I was initially defensive, and of course I do think some of the reaction was extreme/unfair. But in the end the conversations I had with diverse publics and people from a range of fields tremendously enriched and expanded my thinking not because, as Scott suggests I am advocating, I took everything they said as literally true, but *because the discipline created by having to take seriously folks' objections and reimagine my own thinking synthetically in light of them, so that I could justify my views in their own language, stretched and improved my designs*. If you want to see my own personal learnings from this, see here: https://www.radicalxchange.org/kiosk/blog/why-i-am-not-a-market-radical/.

Treating other people as rough epistemic peers does not mean making them read tons of source code, nor does it mean taking every comment they make as literal truth or treating them as expert on precisely the issues you the designer are. It means holding yourself accountable for the ability to articulate your ideas in their language, and realizing that you are likely missing something important and will thus be ineffective in making change if you cannot. Ignoring the need to do so and instead using power to implement change usually results in harm even when there is underlying merit to the ideas, because people resist, and the ideas are usually broken in a place that is unappreciated by the designer and could have been fixed if the designer had tried to focus more on communication and less on optimization.


I don't feel like either Scott's post or Weyl's succeeded in clarifying what they disagree on. It definitely feels like a "bravery debate" and/or a definition debate, in that "rationalism" and "technocracy" and "high modernism" are all fuzzy ideas with different meaning in different contexts. So I'm trying to think through what, in practice, would be a source of disagreement between them.

First, it seems like what they disagree on is all in the sphere of politics / policy / public decision making. All the examples they discuss fall into that category. It doesn't seem like Weyl would object to, say, his doctor using "expert knowledge" or "formal training" to make decisions about how to treat him. What he's claiming is specific to decisions that amount to a policy for society more generally.

Second, let’s divide policy disagreements into values (what do we want) and beliefs (what courses of action will do what we want). It seems like Scott and Weyl disagree on the correct process to determine beliefs, not values. If Scott were to learn in hindsight that, say, the economic and social damage caused by COVID lockdowns were more harmful than unchecked COVID would have been, that would change his opinion about the correct course of action. Weyl nods at this when he mentions (in the intro to his original essay) that "technocrats" exist in, and are responsible to, a variety of democratic and authoritarian regimes.

So I'd frame the disagreement as: "On the margin, would shifting public policy toward expert input and sophisticated plans, or toward community input and simple plans, lead to more successful actions?" I do think it's fair to say that Scott and the rationalist community lean towards the former, though it's not their primary focus and Weyl shouldn't have picked on them specifically. Interest in utilitarianism and QALYs, effective altruism as a social norm, interest in complex tweaks to democracy (ranked-choice voting, quadratic voting, futarchy), and the long-term vision of friendly AI running the world all point in that direction.

I don't know who's right on balance, but I'm sympathetic to Weyl's side. A key point to be made here--which rationalist responses have neglected so far--is that engaging and mobilizing the community is itself a huge part of what determines a policy's success in practice. A policy that won't be "correctly" implemented due to lack of democratic support is, ipso facto, a poor one. Again COVID is a good example: lockdowns were pitched as a way to stop the spread or flatten the curve on a timeline of weeks, but instead led to a miserable year of R~=1 due to people's reactions and especially their low confidence in public officials (thanks in part to misleading messaging from experts early on!). Another example: I favor approval voting over ranked-choice voting even though it's less expressive, because approval voting is a more intuitive and less disruptive tweak to the current model.
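As a rough sketch of that last contrast (illustrative code, not any official tallying spec, and the tie-breaking is naive): approval voting counts in a single pass, while instant-runoff, one common flavor of ranked choice, needs iterated elimination rounds.

    from collections import Counter

    # Illustrative only. Approval voting tallies in one pass; instant-runoff
    # (one flavor of ranked choice) iterates elimination rounds -- part of
    # why approval is the less disruptive tweak.

    def approval_winner(ballots):
        """Each ballot lists every candidate the voter approves of."""
        return Counter(c for b in ballots for c in b).most_common(1)[0][0]

    def irv_winner(ballots):
        """Each ballot ranks candidates; drop last place until a majority."""
        ballots = [list(b) for b in ballots]
        while True:
            tally = Counter(b[0] for b in ballots if b)
            top, votes = tally.most_common(1)[0]
            if votes * 2 > sum(tally.values()):
                return top
            loser = tally.most_common()[-1][0]
            ballots = [[c for c in b if c != loser] for b in ballots]

    print(approval_winner([["A", "B"], ["B"], ["B", "C"]]))   # B, one pass
    print(irv_winner([["A", "B"], ["B", "A"], ["C", "B"]]))   # B, after C drops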


Given the radical scope of Glen's vision for societal restructuring, I'm heartened by his effort to engage with the antithesis of the technocratic spirit, especially considering he seems to be largely animated by its thesis (smart people can do smart things to make things better).

I really liked the first three-quarters of the essay, and the discussion of legibility in particular, but found the critique of EA especially tricky to absorb from my position inside that ideology.


respect to this guy for defending his viewpoints; not a strong defense but still respect


Is this an example of what Glen is talking about/advocating for? https://urbankchoze.blogspot.com/2014/04/japanese-zoning.html

Tl;Dr: It's a post praising the Japanese zoning system, as opposed to the US 'Euclidean' model.

The National Government of Japan has defined 12 zones that are applied consistently across the country. Local government has to zone things more or less into these 12 categories. Compare the US, where local government can apply whatever byzantine system it wants short of de jure racial discrimination.

Another difference is that height limits are set up according to simple, consistent geometric rules as opposed to arbitrary maximum heights.

This results in a system where the government is forced to behave in ways legible to homeowners and property developers. In the US, however, property development seems to be more about knowing the correct masonic handshake that will get the zoning authority to approve your plans.

In theory, the American system lets local governments tailor things more optimally to the local situation. In practice, that power is mostly used for nepotism, redlining, housing market manipulation and general graft.
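As a rough sketch of the geometric idea (the slope and cap below are invented for illustration, not actual statute values for any Japanese zone): the height limit is a simple function of where you stand relative to the road, rather than an arbitrary per-parcel number.

    # Illustrative numbers only -- real slope/cap values vary by zone.
    def max_height_m(setback_m, road_width_m, slope=1.25, cap_m=20.0):
        """Height limit under a plane rising at `slope` from the far edge
        of the fronting road; a flat cap keeps extreme cases bounded."""
        return min(slope * (road_width_m + setback_m), cap_m)

    for setback in (0, 4, 8):
        print(setback, max_height_m(setback, road_width_m=6.0))
    # 0 -> 7.5, 4 -> 12.5, 8 -> 17.5: checkable with a tape measure,
    # instead of an arbitrary cap negotiated parcel by parcel.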


Hi Scott, I am a nonpayer. I received a copy of something that was blocked by a paywall on RSS, but I am not sure if it was this or another article (and it would be a bit time consuming to check). Would it be possible for you to create two RSS feeds (one for payers and one for nonpayers)?


Who else thinks Scott and Glen should do an Adversarial Collaboration?

For it to work, the title would need to be well chosen. Something like 'Technocrats are overrated' seems like a good first stab. Obvious issue is that 'overrated' is not a well-defined concept, but that could be a good thing. The obvious need to define 'overrated' would probably help clarify what they do and do not disagree on.


I'm a bit confused about desegregation being a technocratic thing. Are we saying that every moral crusade imposed from the top down represents technocracy? What is the *technical* angle to school desegregation? That it would raise test scores?


This is an interesting discussion. I just wanted to add that I think the populist revolt against what Glen describes as the technocratic decision-making process is not only or even mainly about technocrats living in some kind of technical bubble that would be popped if they consulted the common person a little more. I more get the impression that populists are mostly worried that the system designers don't have their best interests at heart and are either power-hungry or unduly influenced by vested interests (of a different flavour whether you are left or right leaning). Regardless of whether that is true or not, I think most people would be happy to leave it to the experts if they thought the experts had the right intentions. I'm not sure this is a good thing, but I think it's true, because most people lack either the time, inclination or ability to thoroughly explore any public policy topic, and the best they can offer is a casual impression. IMHO, populism isn't aiming to achieve an improvement on the mechanism design process, but rather is an attempt to assert the interests of the populists, who believe their interests have been excluded.

There are also at least some mechanisms (or non-mechanisms) that couldn't be described as technocratic in nature, but are a result of the internal workings of the political process. Established political parties aren't run or staffed by technocrats, but instead usually bring them in as consultants. So I don't know if technocracy and populism are really the only approaches on show here.

Scott and Glen's discussion partly hints at this, but I'd like to see them explore that further. Regardless, great discussion, thanks and really happy to see the site up and running!


I read the link about Taiwan's covid response. It's about as "technocratic" as you could get. It's more-or-less exactly what we'd expect a Bay Area rationalist to do if they were in charge.

Weyl wishes the "rationalist" response were as bad as the US medical establishment's actual response, because that would tell the David and Goliath story he's trying to push.

What Audrey Tang *did* do differently is recite poetry and say Daoist things to the press. Apparently that's all it takes to convince some skeptics that you're a new kind of benevolent leader, even as you solve the actual problems through the same technical data-driven mechanisms.
