
"What The Western Mind Doesn't Get About Putin's War"

I love this bit:

"The nuclear risk could be low, it could be higher, but to say that it's for effective purposes zero simply makes no sense. I tell you what, it's an expression of a prejudice that, in fact, expresses so many things that we get wrong, and this is a really important prejudice, probably the most important prejudice we've mentioned on this channel, and it's this: humans tend to think that the future will be like the recent past.

Humans tend to think that the future will be like the recent past. It won't. It won't.

The future will be like the past, and that's very different to the recent past."

...

"The previous stuff we discussed — you know, Putin's plan for the continuation of this war, for a second invasion of Ukraine, potentially — that's to say we're not used to thinking of somebody with a crazed world view that's partly prudential, partly sort of soaked in mysterious fake spiritual civilizational ideas ... engaging in brutal territorial war, and we need to understand that's not an aberration, that's history. History is full of these things. Putin is *normal* historically. He seems abnormal to us because we are confusing the recent history with human history."

https://www.youtube.com/watch?v=DozlFOCb4nQ


I'm a bit late to this post, but I do play a UI engineer for video games in real life, so that second point is pretty pertinent to me.

Something that's been on my mind recently is the fact that there just aren't any standardized tools or processes for building UI that all developers in the UI space are expected to be familiar with and can build upon. Any time you start work on a new application, you're dealing with some sprawling tech stack of random UI framework layers plus a huge mess of ad hoc code, and you have to spend a lot of time learning the quirks of that particular combination of tools. Any experience you might have previously gained is probably only loosely relevant, because anything you've worked on previously was probably built using a completely different methodology. So you'll probably spend your first couple of years trying to make changes that are as small as possible, so you don't break anything in the application's fragile framework, until you learn its specific quirks well enough to be more confident in making riskier changes. And by then... well, you've probably moved on to the next project anyway.

By contrast, if you're working on 3d graphics, there are only a couple of frameworks that you'll ever really need to be familiar with. If you know how to use OpenGL/D3D and their associated shader languages, or if you know how to build and optimize models in 3DS Max/Maya/Blender, you'll already know most of what you need to know to start working on any new project. No game developer is going to be dumb enough to build their own in-house polygon rasterization library, let alone their own mesh editing/character animation tool. And because those basic frameworks are so ubiquitous, computer manufacturers have been able to build specialized hardware (3D graphics accelerators) that implements their APIs and data formats in an extremely optimized and streamlined way.

And I think the basic reason for this is that UI frameworks look deceptively easy to build. Rasterizing an arbitrary 3D mesh into pixels on a screen is a pretty hard problem requiring quite a bit of deep mathematical knowledge, but almost any developer can come up with an idea for a better way to put flat colored rectangles on the screen. So, given the choice, they'll probably happily start writing their own framework, rather than trying to work around or improve one that already exists. So the UI space is littered with these half-finished frameworks, none of which ever really reach the point of maturity, and none of which work particularly well outside of the environment and language where the original developer preferred to work. Every app developer just picks one that vaguely fits their particular problem space, and then builds their own extra framework layers on top of it to work around its limitations.

And this pattern holds even outside the world of random nerds building open-source tools in their basements. Even the big players in the software world can't manage to come up with a sensible standard for UI, even within their own ecosystem. Apple, Microsoft, and Google are constantly announcing new UI frameworks, and even whole new languages for building UI in those frameworks, every few months.

And it's not like there's even anything all that unique about any particular UI framework. The basic problems of UI have been pretty much the same since the days of OS/2 and the first Macintosh, and most UI frameworks ultimately converge on pretty much the same set of features. But they're all mutually incompatible, and the knowledge we gain from working with them is just as incompatible, so all our energy goes into figuring out how to accomplish the same simple thing in a hundred different frameworks instead of figuring out how to do that thing better and faster in just one or two.


Some interesting wordplay in the Friday XWord.

Clue: Summons before congress?

Answer: OBBGLPNYY

May 14, 2022·edited May 14, 2022

RE software speed

The primary reason is that the software you look at with your eyes is designed and written for the human timescale (in contrast to high-frequency trading algorithms, etc.). Back when computers were slower, they did fewer things, because the designers and programmers couldn't cram any more stuff into a second. Now that computers are faster, more stuff (features) gets crammed in. There's less pressure to cut features for performance because the performance is already tolerable.

For example, Windows 10 searches both the internet and your computer whenever you use Windows search (part of Cortana); as of the 2018 spring update, you can no longer turn this off.

Also, everything has tracking now, because we can afford it (even the single-player video games you play): where and when you click, when you scroll, how long you kept the tab open.

It's not that websites in the past didn't want to track you; it's that they couldn't afford to, because the performance hit would have been unacceptable and you would have left.

In addition, there's not just tracking from one source but from multiple sources (although the extra trackers usually collect less, e.g. only IP address and clickthroughs rather than things like scroll tracking). Why not let both Google and Facebook track the users on latimes.com? They'll both pay.

Post-script: Do you think gwern reads these comments? I bet not, because there's no sorting by quality, i.e. no upvotes/downvotes. The more skilled people are, the higher their opportunity cost of reading every comment, and if there's no filtering that lets them spend a little of their valuable time on just the best few comments, they may skip the comments entirely. Because of this, the community quality here on this Substack may have no hope of being as good as, say, LessWrong's (I mention gwern specifically because I think he's a friend of the blog).


Apologies if this is considered spammy, but does anyone happen to remember seeing a picture that was the Gramsci "the old world is dying" quote put into the "Gru explains the plan" meme? I'm certain I saw it on a post here but I can't remember which one.


I wrote the review for David Deutsch's The Beginning of Infinity. Any feedback from people who read that review would be appreciated. Feel free to drop a comment.

Also, would it be possible to see the ratings for the non-finalists? I'd be interested to see how it ranked.

May 12, 2022·edited May 12, 2022

It's a day that ends in "y", so a clever-seeming cryptocurrency idea has abruptly lost all its value. Matt Levine has a good writeup of what went wrong:

https://www.bloomberg.com/opinion/articles/2022-05-11/terra-flops

TL;DR: An algorithmic stablecoin works by giving people an incentive to do arbitrage - if the coin's price moves away from the peg, people can buy or sell tokens and dollars to move it back to the peg and make a profit. Sounds reasonable - as long as the tokens are worth more than zero dollars, there should be some number of them you can exchange for a dollar's worth of stablecoin. But if prices drop suddenly, people might notice that the tokens they're buying are actually worthless, stop buying them entirely, and then nobody can do the arbitrage and the whole scheme unravels.
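A toy sketch of that arbitrage loop, just to show the shape of it (the 0.5 and 0.7 factors, the threshold, and the price dynamics are all invented; the point is only that the mechanism works solely while the volatile "share" token is worth something):

```python
def defend_peg(stable_price: float, share_price: float) -> tuple[float, float]:
    """One round of peg-restoring arbitrage; returns the updated prices."""
    if share_price < 0.01:
        # The share token is effectively worthless: burning a cheap stablecoin
        # mints nothing anyone wants, so nobody arbitrages and the peg is gone.
        return stable_price, share_price
    if stable_price < 1.0:
        # Buy stablecoins below $1, burn them for $1 of shares, sell the shares:
        # the shrinking stablecoin supply pushes its price back toward the peg...
        stable_price += 0.5 * (1.0 - stable_price)
        # ...while the newly minted shares dilute (and, in a panic, spook) holders.
        share_price *= 0.7
    return stable_price, share_price

# Calm market: a round or two and the peg is back, shares barely dented.
# Panic: every defense hammers the share price further; once it hits ~zero,
# the first branch triggers and there is nothing left to defend the peg with.
```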


TIL about Curtis Yarvin. Thanks Mr Douthat. Now I understand ACX a bit better.

Sorry about the loss of your wife Curtis. It’s a terrible loss. There are no words for something like that.


What is the right way to look at inflation?

I regularly see people use the YoY price increase as the standard metric for inflation, but that seems to miss a good deal. If we are looking for a point estimate of how prices are changing right now, something like the MoM price increase, maybe adjusted for seasonality, would be more reasonable. If we are looking at how high prices are in general, YoY might make more sense, although if prices increase very fast for a year and then sit static for a year, we'll report 0% inflation for that static year even though prices are very high in an absolute sense.
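To put numbers on that last scenario (made-up index values, purely to show the gap between the two measures):

```python
# Prices rise 1% a month for a year, then stay flat for a year.
index = [100 * 1.01 ** min(month, 11) for month in range(24)]

def yoy(i, t):                 # year-over-year change at month t
    return i[t] / i[t - 12] - 1

def mom_annualized(i, t):      # last month's change, compounded to a yearly rate
    return (i[t] / i[t - 1]) ** 12 - 1

for t in (13, 23):             # early in the flat year, and at its end
    print(t, f"YoY={yoy(index, t):.1%}", f"annualized MoM={mom_annualized(index, t):.1%}")
# month 13: YoY ~10.5% (still reflecting last year's run-up), annualized MoM 0%
# month 23: both ~0%, even though the price level sits ~12% above where it started
```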

Are these concerns reasonable? Are there better metrics of inflation people should look at?


My friend Mike Darwin is starting the Biopreservation Institute (a nonprofit research organization) and we need a COO-type person to help get everything started!

Mike used to be the president of Alcor and he's been involved in some of the most impressive research in cryonics over the last 30 years, such as reviving dogs after 16 minutes of warm ischemia.

The Biopreservation Institute's goal is to put cryonics on a modern, evidence based, scientific footing. We want to define rigorous standards for preservation and quality control mechanisms to make sure each preservation meets those standards. We already have extensive funding for the first year of the project with follow-up funding conditioned on further good results.

If that sounds like a good mission to you and you want to get started on the ground floor, then email me at r@nectome.com and we can discuss the next steps. (I'm on the board of BPI which is why I'm helping with recruiting). The Biopreservation Institute is located in Southern California and it's definitely going to be an in-person job. Happy to respond to comments in this thread as well.

May 10, 2022·edited May 11, 2022

Main reasons for the "latency problem":

1) the video game is using a GPU to do these computations, which is a massively parallel device that can to some degree compute each pixel in the image and each physical element in the simulation independently of the others. The 2D app is likely using the CPU to draw to the screen. This is (brutally approximated) like using a thousandth of the GPU's ability to draw to the screen.

2) A lot of the delay in a program STARTING is the program having to load information from the disk into RAM and work through an extremely long set of instructions to get itself fully running. The video game also takes several seconds to get started as it pulls information from the disk / internet and loads it into faster memory like RAM, the L2 cache, etc. The analogy here: if I already have the book open in my hands, turned to page 82, with my finger on the second paragraph, and you ask me to read the second paragraph, I can respond in seconds. But if you ask me to read the second paragraph on page 82 of a book that's in the library, it might literally take me 45 minutes to respond as I physically go to the library, look up the book, and find the relevant information. However, AFTER I've done that, if you ask me to read the next sentence, I'm back to taking just a few seconds to respond. The situation with computers copying information from a spinning-disk hard drive into registers on the CPU, or into the GPU's texture processors, is, I think, even worse than the book analogy.

However, we COULD make computers that could boot up instantly and run most programs instantly as well. It would involve some combination of pre-loading everything and carefully optimizing every step of the process. It doesn't happen in practice because it's very expensive.
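A rough way to see the disk-vs-RAM gap from point 2 on your own machine (the file name and size are made up; the first read is only "cold-ish", since writing the file may leave it in the OS page cache -- a truly cold measurement needs dropped caches or a reboot):

```python
import os
import time

path = "big_asset.bin"                        # hypothetical ~200 MB data file
if not os.path.exists(path):
    with open(path, "wb") as f:
        f.write(os.urandom(200 * 1024 * 1024))

def timed_read(p: str) -> float:
    start = time.perf_counter()
    with open(p, "rb") as f:
        while f.read(1 << 20):                # stream in 1 MB chunks
            pass
    return time.perf_counter() - start

print("first read :", timed_read(path))       # may have to go all the way to the disk
print("second read:", timed_read(path))       # served from the page cache (RAM)
```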


Self promotion: I started writing about NLP and personality structure, which I figure interests this crowd. The first post is about how the Big Five are word vectors: https://vectors.substack.com/p/the-big-five-are-word-vectors


Did anyone sign up for the forecasting tournament mentioned in the previous open thread? If so, did you get a confirmation email after submitting the signup form?

May 10, 2022·edited May 10, 2022

Grip strength as a diagnostic tool for depression [1], any thoughts?

A lot of strength output is neurological, so maybe depression suppresses neural drive. Neural drive is suppressed automatically to prevent injury; e.g., as you accumulate fatigue while exercising, or when you're sore from exercise, your nervous system's output becomes attenuated.

Depression is also connected to pain [2], which presumably is also connected to these circuits that suppress neural drive to avoid injury.

If exercise works to alleviate some depression (there is some contention here), perhaps this is all connected via a common mechanism.

[1] https://pubmed.ncbi.nlm.nih.gov/34354204/

[2] https://www.health.harvard.edu/healthbeat/the-pain-anxiety-depression-connection

May 10, 2022·edited May 10, 2022

When my father had a heart attack, I grabbed an Android tablet to call an ambulance... and got messages like "%brandname% weather isn't responding", "Google Play services isn't responding", "System UI isn't responding".

I suspect none of the engineers who designed that model ever used it, and that may be part of why this happens.


Writing efficient code is simple, but there is no market for selling "simple" to people.

A specific example is helpful. Let's talk about loading stuff from, or saving stuff to, a file. We've been doing this for a very long time and it's very simple. You write a version header, then you go over your data writing everything out. Your file-loading code just verifies that the version header is as expected, and then is the mirror image of the saving code. Now that computers are fast enough to actually fuzz-test this, it's easier than ever to get right. On top of that you can work to optimise the layout of the data, how it is loaded, and so on. It is also easy to cope with changes to the data format -- just bump the version in the header and choose either to raise an error ("you need a different version to load this") or to support multiple versions and add code to migrate the data.

This has been an optimally solved problem forever.
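For the record, here's roughly what that looks like (a minimal sketch using Python's struct for brevity; the field names and the version-1 migration branch are invented examples):

```python
import struct

MAGIC = b"SAVE"
CURRENT_VERSION = 2

def save(path: str, name: str, score: int) -> None:
    encoded = name.encode("utf-8")
    with open(path, "wb") as f:
        f.write(MAGIC + struct.pack("<I", CURRENT_VERSION))   # version header first
        f.write(struct.pack("<H", len(encoded)) + encoded)    # then just write the data out
        f.write(struct.pack("<i", score))

def load(path: str):
    with open(path, "rb") as f:
        if f.read(4) != MAGIC:
            raise ValueError("not one of our files")
        (version,) = struct.unpack("<I", f.read(4))
        if version > CURRENT_VERSION:
            raise ValueError("you need a newer build to load this file")
        (name_len,) = struct.unpack("<H", f.read(2))           # mirror image of save()
        name = f.read(name_len).decode("utf-8")
        (score,) = struct.unpack("<i", f.read(4))
        if version < 2:
            score *= 100   # example migration: pretend v1 stored scores in a different unit
        return name, score
```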

And yet the experience of many programmers today is that this is actually really difficult. Firstly, there's no way to sell this solution. So, if you go searching around for how to do this, you'll find a lot of people trying to sell their solution -- some magic Reflection-driven nonsense that claims to solve this difficult problem for you -- and it's all gaslighting: there is no difficult problem, just a lot of marketing to convince you that you need what they're offering and that what they're offering is great. So you end up using Newtonsoft.Json -- which is a really solid piece of work, no disrespect. But compared to rolling your own (as I described above) it's a disaster: it's very slow, because it's doing something dynamic rather than static, and when you (inevitably) have to customise what it's doing, you go down a horrible rabbit hole, doing things much more complicated -- and generally much less efficient -- than if you had rolled your own.

There aren't many old people around in the industry to teach newcomers to ignore misinformation and do things as simply as possible. The way you progress in the top tier of tech is by jumping from company to company, never having to deal with the long tail of your terrible decisions or really learning from them. Motivated newcomers go read what people are up to at coding conferences, learn about all these awesome, cool, modern languages, and go write something up in Python, having been told it is simple and straightforward.

Many programmers NEVER write apps directly against the OS API -- for example, using CreateWindow in C to make a window. They should. Until you've done this, it's hard to understand how fast computers actually are: watching YouTube teaches you to make decisions that are horrifyingly slow and complicated and then use very advanced performance techniques to get something running half as fast (using twice as much battery) as if you'd just written it simply in C in the first place. There is no reason for any app on modern hardware not to show a window INSTANTLY, except for the drag factor of whatever platform it's pulling in. I remember this video: https://www.youtube.com/watch?v=GC-0tCy4P1U

The "people don't want to pay" line is nonsense. I have spent weeks trying to untangle stuff written at this high level. In terms of writing it, debugging it, maintaining it, and trying to make it faster, this stuff COSTS a huge amount. Writing simple code simply, and ignoring all the innovations in computer languages of the past 30 years, is both a huge cost saving and a huge performance win. Where the "don't want to pay" attitude comes from is that the amount of work required to make modern software performant costs a lot -- because we're already in the pit and trying to dig ourselves out.

I highly recommend Jonathan Blow's old talk on this https://www.youtube.com/watch?v=ZSRHeXYDLko

and also anything by Molly Rocket -- for example the video "How fast should an unoptimized terminal run?" https://www.youtube.com/watch?v=hxM8QmyZXtg -- where he compares Microsoft's initial release of Windows Terminal (written in the super-modern way) with something he wrote maximally naively that runs A LOT faster (because it's written simply).

It's highly frustrating.


Since neither review of Unsettled (What Climate Science Tells Us, What It Doesn’t, and Why It Matters) is a finalist, I'll post my comment on Chapter 5 (Hyping the Heat) now: https://muireall.space/heat/ — I think Koonin's rhetoric is unjustified, at least here.


Slow software is 95% "JavaShit/React/React Native/Electron/MS Office apps are utter crap, almost nobody remembers how to write fast apps, and certainly nobody wants to pay for them". I say this as a professional software guy who spends too much of his time reading the foundational code of these termite piles. This is slowly changing: I've played with GUI and TUI apps written in Flutter, in GoLang, and in Rust that do a good job of reminding you how good modern computers are when you don't heap an architecture of sewage on top of them.


As someone who works with EHR, I very much feel the pain and frustration of waiting for our comps to load the info that we need. I understand the comment quoted in the post/email on a personal level.


self-promotion: I am working on a new social network designed around having better conversations. It's a bit like emailing in public. Conversations are one-to-one, and the first message in a conversation remains private until a reply comes through. Works entirely through email, too, if you want it to. It's called Radiopaper.

Given that Scott hosts the best comment section on the internet, I thought there might be some interest here: radiopaper.com/explore


Hi Scott,

Thank you again for organizing the book review contest!

I would really love to get some feedback on my (non-finalist) review. Would it be possible for non-finalists to see our scores? This was my first time writing a book review and I'm really curious about how people thought I did/how I could improve for next time.

PS. I reviewed "The Knowledge" by Lewis Dartnell, so if any of you have read it, I would love to see your comments!


From https://www.gwern.net/Modus

"Probably the most famous current example in science is the still-controversial Bell’s theorem, where several premises lead to an experimentally-falsified conclusion, therefore, by modus tollens, one of the premises is wrong—but which? There is no general agreement on which to reject, leading to:

Superdeterminism: rejection of the assumption of statistical independence between choice of measurement & measurement (ie. the universe conspires so the experimenter always just happens to pick the ‘right’ thing to measure and gets the right measurement)

De Broglie–Bohm theory: rejection of assumption of local variables, in favor of universe-wide variables (ie. the universe conspires to link particles, no matter how distant, to make the measurement come out right)

Transactional interpretation: rejection of the speed of light as a limit, allowing FTL/​superluminal communication (ie. the universe conspires to let two linked particles communicate instantaneously to make the measurement come out right)

Many-Worlds interpretation: rejection of there being a single measurement in favor of every possible measurement (ie. the universe takes every possible path, ensuring it comes out right)"

Superdeterminism seems silly to me, or maybe I just don't understand it, but the other possibilities seem reasonable.

De Broglie–Bohm seems mostly/partly equivalent to Transactional. Wouldn't a global variable effectively be the same thing as transmitting information faster than light?

Transactional (my preference) needs superluminal transmission, but it could get this if there are backwards-in-time particles. Is that the pilot wave thing? Also, would the uncertainty principle allow for this? For example, if a particle doesn't have a definite location in space, could it also be said that it doesn't have a definite location in time? If a particle with mass is traveling very close to the speed of light, could it quantum-tunnel through the lightspeed barrier to become superluminal while at no time actually travelling at exactly the speed of light (which is the thing that's actually forbidden)?

Many-Worlds is probably the favorite at this point. But to me it seems to have pretty serious problems. If every time there is a quantum branching both branches become realized, then there should be an equal chance of ending up in either one, but there usually isn't. So you end up needing to introduce a new variable, say "reality juice", to account for this, and then you have to explain why this reality juice corresponds to the chance that you'll end up in any particular reality.

Anyway thoughts?


In 1800, the population of the US was ~5 million, smaller than that of Virginia today. In 1865, the US population was ~35 million, smaller than that of Texas today.

Today devolving power to the states amounts to giving most state governments power over more people than existed in the USA at its founding. It seems to me, if you believe “locals should decide local issues”, you should favor devolving real power to much smaller levels than state governments.

A massive culture war issue like abortion getting decided by a state as big as Texas, where about 55% of the population is red tribe while 45% of the population is blue tribe, is a recipe for decades of conflict and hatred. One can argue we’ve already had exactly that over the abortion issue for five decades in this country, but the fight is about to become much more geographically concentrated. Some states are going to sit on the sidelines and laugh, while others are turned into the political equivalent of gladiator arenas.

It seems like people who believe local governments should decide such issues should define local governments as cities, not states, given that major cities are now much bigger in population than large states were 150 years ago.

Most of us could live in the same culture that makes our laws if an urban government got to make one law while an exurban government made another. If you don't want to live in a place where abortion is legal, live in the suburbs (you probably do already). Otherwise, you can live in an urban core.

That way it would be easy for most people to live where their values match their laws, and hopefully we can avoid a conflagration.


On software performance: (one perspective)

There are broadly two types of problems in software: embarrassingly parallel [1] problems vs. serial ones. Games and GPU tasks fall into the former; everyday software usually falls into the latter. Serial problems do not parallelize, and will take just as long on 100 processors as they would on 1. For roughly the last 20 years, individual processors have stopped scaling [2], so the only way to get performance improvements is to parallelize -- but that rewards problems disproportionately. Serial problems stay slow, while embarrassingly parallel problems keep improving in leaps and bounds. This is exactly why impossible-seeming tasks in deep learning or crypto computation are suddenly feasible, while sorting a bunch of items still takes about as long as it did in 1990.
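A small illustration of the split, with made-up workloads (the parallel job scales with core count; the serial one can't, because each step needs the previous step's result):

```python
from multiprocessing import Pool

def chunk_work(n: int) -> int:          # independent piece of an embarrassingly parallel job
    return sum(i * i for i in range(n))

def parallel_job(chunks: list[int]) -> int:
    with Pool() as pool:                # splits cleanly across however many cores you have
        return sum(pool.map(chunk_work, chunks))

def serial_job(steps: int) -> int:      # a dependency chain: step k needs step k-1
    x = 1
    for _ in range(steps):
        x = (x * 6364136223846793005 + 1442695040888963407) % (1 << 64)
    return x

if __name__ == "__main__":
    print(parallel_job([2_000_000] * 8))    # wall time shrinks as cores are added
    print(serial_job(20_000_000))           # takes the same time on 1 core or 100
```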

P.S.:

The lack of rich markdown or text-box resizing in Substack comments is unforgivable.

[1] https://en.wikipedia.org/wiki/Embarrassingly_parallel

[2] https://en.wikipedia.org/wiki/Dennard_scaling#Relation_with_Moore's_law_and_computing_performance


Slow software:

I think commercial software tends to be as fast as it needs to be, and no faster.

Optimizing for speed requires developer time, which is expensive.


To your question about slow software:

Computers are now ~4-core, ~8-wide, ~3 GHz powerhouses. JavaScript, Python, Java, C# -- basically all modern languages -- trade "ease of use" and "maintainability" (citation needed) for roughly a 10x loss in runtime speed, single-core, single-wide. If you write software in any language that isn't compiled to machine code, you now have a new baseline speed of 1x1x0.3 (cores, width, GHz) compared to 4x8x3. That's why basically all modern software is many orders of magnitude slower than it could be.
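A quick way to feel that baseline difference (timings vary by machine; numpy stands in here for "compiled code that uses wide/SIMD instructions"):

```python
import time
import numpy as np

data = np.random.rand(10_000_000)

start = time.perf_counter()
total = 0.0
for x in data:                 # interpreted: one bytecode round-trip per element
    total += x
loop_time = time.perf_counter() - start

start = time.perf_counter()
total_np = float(data.sum())   # compiled, vectorized machine code, one call
numpy_time = time.perf_counter() - start

print(f"python loop: {loop_time:.2f}s   numpy: {numpy_time:.4f}s")
```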

This insight has brought people together already, under the name Handmade Network. Everything there takes zero seconds to start up and was written by competent people who care about speed and about not wasting 100x the time and energy (and electricity!) that, say, Python would.

Javascript is the only option for web for now, which is a reason to change that, not excuse it and continue!

Compilers are bad at making normal code 8 wide and parallel, which we should fix, not excuse it and continue!

Same for operating systems, Graphics APIs, drivers, debuggers, build systems, etc etc.

Further Resources:

- SIMD (what I mean by "wide")

- Handmade.network

- Jon Blow: Preventing the Collapse of Civilization (talks about the above specifically)

- Jeff&Casey: The Evils of Non-native Programming

AMA

May 9, 2022·edited May 9, 2022

"Unsettled" https://www.amazon.com/dp/B08JQKQGD5/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1 was reviewed for the book contest but wasn't a finalist. I liked the review(s) and bought the book. It does seem that (some) climate scientists are trying to sell us something, and not telling us all of the science. (The rest of this is a back-of-the-envelope calculation of the warming.)

Anyway, I was thinking about the numbers presented in the book. The first is that the sun adds energy at an average rate of ~250 W/m^2. The second is that humans and our CO2 emissions have added about ~2 W/m^2 to the energy budget; let's say 2.5 W/m^2, so it's a 1% effect. Now, the average temperature of the Earth is ~14 C, or 287 K (call it 300 K for easy numbers). If temperature were proportional to energy input, then a 1% rise in energy would be a 1% rise in temperature, or ~3 K. But it's not. A black-body radiator should follow the Stefan–Boltzmann law https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law : energy goes as T^4, so temperature goes as the 1/4 power of energy. For small changes this means (using a Taylor series expansion) delta T / T = (1/4) * delta E / E, so the amount humans have warmed the planet is about 0.75 C (K). That seems fine to me. And to hold human-caused warming below 1.5 C, we can double the amount of CO2 that we've already put into the atmosphere. (The rest of the observed warming comes from natural processes that are not under our control.) And the human-caused global-warming hysteria should come to an end. Any comments? (Read the book, or at least the book review.)
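For reference, here is that arithmetic written out (this just reproduces the no-feedback black-body estimate above with its own numbers; a full climate-sensitivity calculation would include feedbacks this ignores):

```python
T = 287.0    # mean surface temperature, K (~14 C)
E = 250.0    # average absorbed solar flux, W/m^2 (the figure quoted above)
dE = 2.5     # assumed human-added forcing, W/m^2

# Stefan–Boltzmann: E proportional to T^4, so for small changes dT/T = (1/4) * dE/E
dT = T * (dE / E) / 4
print(f"dT = {dT:.2f} K")   # ~0.72 K; rounding T up to 300 K gives the 0.75 above
```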


https://clearerthinkingpodcast.com/episode/103

One hour interview with a deradicalized jihadi recruiter who died recently.

More than a little interesting, since I'd wondered about the mental state of jihadist recruiters, though perhaps I was more curious about handlers.

Jesse Morton had human motivations, though some of them were extreme.

His father abandoned the family; his mother was violently, compulsively, constantly abusive. He asked for help a few times. He didn't get help.

He didn't just end up hating his mother or the people who didn't help. He hated America because he felt betrayed. And because he had PTSD, which can lead to rage of various sorts.

So he got recruited and became a speaker to Americans. He was energetic and effective. (How come I never hear deradicalization stories from people who were ordinary shlubs in radical movements?)

His motivations were past trauma, the usual community and purpose, and pleasure at being good at what he was doing. He was in a community which was good at keeping people from seeing they were committing atrocities.

And he got caught, and was deradicalized, and liked America pretty well. (Past tense because he died recently.) He did covert intelligence work against jihadists until The Washington Post outed him.

He still loves the Koran, but thinks the viciousness came in the hadiths, the later interpretations of the Koran.

Since he couldn't be undercover any more, he went public and did deradicalization generally instead of specializing in jihadists.

He believed that people get deradicalized when they're extended more compassion than they deserve. I think I've seen a few examples of people who deradicalize from spontaneous insight, but no examples of people who deradicalize from being argued with or insulted.

He didn't get into the question of what you do if you don't have the internal resources to be compassionate.


Random question here:

Does anyone know of someplace to watch groups play intrigue games?

Games like the board games Diplomacy, Dune, and Game of Thrones, but any recommendations will be much appreciated!


I got an Apple Watch SE last week (yes, very middle class, I know), and I'm wondering if people here have tips and tricks.

Main reasons I got it:

-Health (exercise more and better)

-Be less sedentary (ideally I want to move every 25 minutes, but the Watch is set to every 50 minutes)

-Compensate for my ADHD/autism brain's poor executive functioning

-Be better at building habits while minimizing the effort required. Maybe use Beeminder with it in the future.

The main things it has done for me so far are to show that I sleep too little, make me a bit less sedentary, motivate me toward slightly longer workouts, and help me find my phone much more easily. I have some notifications on it, pretty minimal, but seeing my calendar there helps.

On another note: I want to build good habits, but I get overwhelmed when choosing a habit app. I want something that works with the Apple Watch/iPhone and automatically logs my exercise and such without me having to constantly check and update it.


Documentary on 2020 voter fraud:

https://2000mules.com/


I once heard a story that David Gelernter made the argument for an online avatar of ourselves, which understands our preferences and pre-screens our actual activities. Suppose you like Italian food and overseas travel - you tell your online avatar this, and then rather than you spending hours of your own time searching online for Italian/travel things, it trawls the web and does that for you, and only notifies you with new and relevant content, or comes to you with a pre-filled-out itinerary for a flight to Rome and potential bookings for a dozen excellent restaurants when there are cheap flights and when you're going to have some leave time you can use. (Otherwise it doesn't tell you.)

I say all of this because a) this is a service that I'd like to use despite the creepiness, and b) I think about this every time I'm sitting there endlessly trawling through some mediocre online shop trying to see if they have shoes in my size (because often they don't, and then I've just wasted my life trawling crappy online stores). But internet services aren't designed to *reduce* the amount of time I spend online; they're trying to maximise it. After all, Instagram is loaded with ads and sponsored content, and yet they're trying to keep you engaged as much as possible, and web content companies openly talk to their investors about increasing their eyeball time - are they not incentivised enough to actually sell you something? Is the going rate for advertising too high relative to the kickback from purchasing something? (I can't help noticing that Apple shows me my time spent on my device, but then Apple has already been paid when I bought the thing. Facebook isn't an upfront purchase.)

I feel like someone is missing a trick here, or there's a business model that isn't getting deployed, but maybe there's a good reason for that. Sci-fi routinely depicts tech-heavy worlds in which people are less connected to their devices. I want to argue that there's a very high abstract demand for ways to get us offline, though I'll admit that's not necessarily how you'd actually behave when presented with the option in front of you.


AAA video games have some pretty stiff hardware requirements; realistically, you need a GPU that is at most a couple years old. On the other hand, user interaction in these games is usually pretty limited. Yes, the worlds they render can be quite detailed, but the user can only execute a few logically simple (though admittedly computationally expensive) and predictable actions. On top of that, once the game is released, its functionality is pretty much set in stone (plus or minus a few DLCs). Business software is the exact opposite of that. It has to run on lowest-common-denominator hardware, especially cheap (and extremely low-power) laptops. It has to execute many complex business-logic functions simultaneously. And it has to be easy to extend and maintain, as new functionality comes on the market. On top of that, unlike your game, business software has to be interoperable with other business software, not just with itself.

So, would it be possible to build, e.g., a GPU-accelerated word processor that runs like a video game? Definitely, but very few people would buy it.


It's a common trope that popular culture has fragmented over the past two decades, but isn't that merely a continuation of a centuries-long trend? A thousand years ago, in Christendom, in the Islamic world, the Hindu and Buddhist world, the stories from the respective religious books were basically the popular culture of the time and place. No?

500 years later, some other books and plays got printed in Europe, and more literature became part of popular culture, including rediscovered culture from ancient Greece and Rome. Then came printed sheet music, operas and symphonies. Over the centuries, more books, pamphlets and newspapers became a part of popular culture.

I suppose it's hard to say that much literary and art culture in the 18th century was "popular culture", since the median member of society at the time didn't have access to it from their pig farm, but what has changed since then that now allows the pig farmer or would-have-been pig farmer to consume artistic culture, be it high or low? Isn't it mostly that we have raised the economic status of the pig farmer so that he may now enjoy the opera if he chooses and not that we have brought the opera down to the level of the pig?

In the 20th century we get movies, radio, sound recordings. Then network TV, cable, the www. Well before streaming channels and Spotify, the popular culture of movie stars and singers had begun to eclipse the popular memory of Biblical stories -- once the key reference point for all pop culture in the West. An entire canon of secular Literature developed in parallel to the religious literature which had once dominated everything.

My question is whether the fragmentation continues for the next thousand years or whether some consolidation of cultural memory returns which replaces small stories with big myths.


Who here knows something about how you succeed as a realtor? I’m trying to help a young guy who’s attempting to do that, and has made very little money in his first year of working. He’s a smart, nice-looking polite guy in his late 20’s, and I’m sure he comes across to his clients as honest, conscientious and intelligent, all of which he is. He has looked into how one gets sales, and has managed to get a bit of internet professional presence going. He pays for various services that give you various kinds of leads. And he works his ass off to help his clients and learn more about how the business works. But he’s getting nowhere.

I myself am impractical and about as uninterested in business as it is possible to be. But my intuition is that there are 2 main reasons why he’s getting nowhere. The first is that the profession of realtor may be dying out, the way travel agents and malls died out. More people find things via online searches, etc., and do not want a realtor in the mix, taking a chunk of money. The other is that he just is not old enough and rich enough. He does not have anyone in his social network who is in a position to buy a house, so he’s not in a position to do “networking” to find buyers. And he drives an econobox — I’m sure his clients can tell that he himself has never bought a house, and is not in a position to do so now.

When he first started out he was doing rentals, where you make small sums, but now he is doing house sales. Apparently it is much better to represent sellers than buyers, but sellers are hard to get. He mostly represents buyers, and spends lots of time showing them place after place.

Is there hope? If so, where the hell is it?

May 9, 2022·edited May 9, 2022

In computing, speed gets optimized if and only if it pays. AAA video games compete heavily on being responsive, enjoyable, and gorgeous. Hardware manufacturers compete on performance benchmarks, especially for compute-intensive stuff like GPUs (which generally get used either to run those games or, even more speed-dependently, to mine crypto). Amazon knows exactly how much revenue it loses if its average load time increases by 10ms, and that number has more zeroes than you might expect.

But your average boring 2D software is pretty latency-insensitive below a certain threshold. Consumers come for the functionality, not the blazing speed. 100-200ms is about the threshold of what a user can perceive; even 1-2s is tolerable if it's not happening constantly. What are consumers going to do, run apples-to-oranges latency comparisons with other tools and post the detailed analysis in product reviews?

As far as the mechanics of making things faster go: caching dominates all other strategies in most contexts. Games, hardware architectures, and FAANGs use it a ton. But it's resource-intensive and notoriously hard to get right. No one's going to bother engineering it unless the payoffs far outweigh the risks.

(Source: I'm actively involved in cache design for a SaaS app where the threshold of latencies we care enough to look into sits in the multi-second range... and that's an order-of-magnitude improvement over previous-generation competitors)
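As a toy version of the caching idea above (the slow lookup and its latency are invented; as noted, the hard part in real systems is deciding when a cached result has gone stale):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=4096)
def fetch_report(customer_id: int) -> dict:
    time.sleep(0.5)                       # stand-in for a slow query / downstream service
    return {"customer": customer_id, "total": customer_id * 7 % 100}

fetch_report(42)             # ~500 ms: cache miss, does the slow work
fetch_report(42)             # ~microseconds: cache hit, served from memory
fetch_report.cache_clear()   # the hard part in practice: knowing when to invalidate
```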


The 3D SW comment is problematic because it confuses two different issues: compute vs IO.

The primary reason SW is slow to open, and slow to respond the first time you perform a new function (whether pressing a menu or anything else), is IO. This becomes ever more of a problem the lower-end your computing hardware is, from high-performance flash through low-performance flash down to old-style hard drives.

And while there are many things that can be done to make IO faster, it's the area of computing that demands the most backward compatibility -- poor decisions made in the mid 1970s can, to some extent, be worked around, but cannot simply be ignored, because while people will tolerate some degree of forced upgrading of SW, and a lot of "new game simply doesn't play on old card", they will tolerate very little of "your files from five years ago can no longer be read".

After this most important issue of IO, there is the basic fact that different problems have different structures. Those skilled in the art of understanding how modern CPUs work (which is, admittedly, a set vastly smaller than the set of those claiming such expertise on the internet) appreciate that, to a substantial extent, the gating factor in the performance of one class of algorithms ("latency code", much of the stuff you run on your CPU) is serialization - step Z cannot happen before step Y, which cannot happen before step X, and so on. The gating factor in a different class of algorithms is simply how many compute engines you can throw at the problem ("throughput code", most of the stuff you run on a GPU [or NPU, or VPU, or ISP]).

You can't make a baby faster by throwing more women at the problem, but you can create an army of babies faster by starting with an army of women. Is your problem more like "I need to create one baby, and I need to wait nine months for it to happen", or is it more like "I need to create ten thousand babies every day, so I can set up a pipeline of dorms of women, each scheduled to give birth on the appropriate day"?
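The baby version in numbers (all made up), just to keep the two quantities separate:

```python
latency_months = 9        # time to the FIRST result; the serial chain fixes this
lanes = 10_000            # independent pipelines you can run side by side

time_to_first_result = latency_months              # no amount of parallelism helps here
steady_state_per_month = lanes / latency_months    # with staggered starts, ~1,111/month
print(time_to_first_result, round(steady_state_per_month))
```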


For any nuclear history buffs out there - when do you think the first nuclear weapon would've been made if the Manhattan project never happened?

From my read of wikipedia, it seems that while the Russians did make a nuclear bomb by 1949, they were helped along by outside intelligence, and only redoubled their efforts after 1945. The article also mentions that some Russian physicists had been skeptical a nuke was possible. Separately, the UK only made a nuclear bomb by 1952, and I think were also helped along by the work on the Manhattan project.

I understand this requires some assumptions around what the US and American physicists do in lieu of the Manhattan project, so feel free to make any reasonable assumptions.


I submitted a proposal to Future Fund back in March and haven’t heard anything. Do they respond to all proposals? Or should I take the silence as a “no”?


Recently I received an emergency medical diagnosis of Cannabinoid Hyperemesis. The ER doc said it's being taught in medical schools, now. The short explanation is that habitual marijuana smokers can build up cannabinoids over several years of use, and experience a toxic reaction. It can result in spells of vomiting and extreme abdominal pain, a loss of weight and muscle cramps. Some who experience the phenomenon take hot showers to relieve the discomfort.

The literature I've found online -- from the Mayo Clinic, the American College of Gastroenterology, Analytical Cannabis, Business Insider, and CNN -- supports the theory, although the tone of much of it sounds like Reefer Madness 2.0. As several of the symptoms described are similar to my own, I've reduced my use by around 70%, and switched to a pipe from joints.

Has anyone else experienced Cannabinoid toxicity or Hyperemesis?


Lots of people very confident in what The Problem with software is. My best guess is it's a mish-mash of a ton of different things from technical to social.

One thing I would say is that IME (working with/under/over say a dozen such people), a not-insignificant proportion of video game developers are excellent at developing performant software...that is barely maintainable!

Maybe this is because video games often have short shelf lives after release so they haven't had to develop those skills? Just speculating.

Another speculation: is there a bit of a tradeoff between maintainability and performance? I can think of some occasions where this is the case, but I'm not sure it's a generalizable Law Of Computing.


I'm debating picking up a new language. I spend a lot of time listening to tapes as I do stuff. I'm getting bored of radio and I've been running low on book recommendations and recorded lectures (though I'll take them if you have them). I figure picking up a language is one of the few skills I can significantly learn by listening.

The question is: which language? Anyone have any recommendations?


At ~80 pages I'm not at all surprised my book review didn't make the cut, lol, but I'm dying to know if anyone actually read their way through the entire thing. Hat's off to you if you did.


Regarding the question of relative application performance, the "simple answer" is that this is an apples-to-oranges comparison, and the difference primarily stems from the way a Von Neumann machine works and the memory hierarchy of any modern computer.

The involved answer touches on many aspects of computer system design and architecture:

Stored programs need to be loaded into main memory before they can be run. The place that they are loaded from is the "disk" (persistent storage). The memory hierarchy in a typical computer has very fast registers directly on the CPU that can store values during computation, but very few of these, then a layer of RAM which is about 1000x slower to access than the registers (as a rule of thumb), and a layer of persistent storage that is about 1000x slower than the RAM. When you open the "boring software," some version of the program has to be copied from the disk to main memory before it can even begin to run on the CPU. (Moving data is the slowest thing that computers do, generally speaking.) The application designer gets to choose what the program loads when it is run, and depending on what language it was written in and how it was compiled, the language runtime may insert a lot of other stuff into memory in order for the program to actually run. This explains most of the startup cost of any program, nothing to do with graphics per se, just data transfer between the disk and RAM.

Hardware device access is mediated by the operating system in a modern computer, essentially for security reasons. Programs can request services from the OS to do things like access the disk, allocate memory, etc., but these kernel calls are typically more expensive in terms of computation than other instructions a program might run. To the extent that a program needs to make a lot of kernel requests as it sets itself up for execution, this could marginally affect the startup time, though the disk access is the main element causing a delay.

On the other side of the comparison, GPUs are purpose-built coprocessors that are designed to be good at massively data-parallel tasks, like rendering 3D graphics or performing linear algebra computations on arbitrary data. They face their own startup costs due to the need to transfer data from the disk via RAM to the GPU's on-board memory, and this cost is similarly high in comparison to the cost of computation. For example, if you run a task on a GPU that doesn't take advantage of the massive data parallelism, you can wind up spending quite literally 99% of your wall time waiting for data transfer while the parallel computation itself takes microseconds. For an appropriate task, however, the data transfer costs can be amortized over more computation, and as new data need to be loaded onto the device, computation can continue to be performed on data that are already resident on the device, hiding the subsequent memory access latency. (This is why you have to wait while a 3D game starts up for various data to be loaded into the memory of the GPU and the computer itself, but then once it is running it can become quite smooth.)

As a final point, modern computing performance is being driven more by cache performance than computational speed or algorithmic complexity, at the margin -- because memories are much slower to buffer and use than on-chip caches or registers, applications that can be designed in a cache-friendly way and have higher hit rates on faster caches can experience better performance running a more "computationally complex" algorithm than code running a simpler algorithm but making poorer use of caches. Both the CPU and GPU have multiple cache layers. In the given example, a new program being launched is unlikely to be able to take advantage of any cached data (other than possibly shared libraries that it may link), but a graphical application that is in-flight is almost by definition operating on data that is either cached or loaded into one of the numerous memories that modern GPUs include on the board.

So, this is not particularly mysterious, but certainly opaque if one is not aware of what is happening under the hood of a machine.
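A toy demonstration of the "cache/memory behavior dominates" point (array size is arbitrary; exact ratios depend on your hardware):

```python
import time
import numpy as np

x = np.random.rand(32_000_000)    # ~256 MB of doubles

t0 = time.perf_counter()
x.sum()                           # touch every element
t1 = time.perf_counter()
x[::16].sum()                     # touch every 16th element (1/16th of the arithmetic)
t2 = time.perf_counter()

print(f"every element: {t1 - t0:.3f}s   every 16th: {t2 - t1:.3f}s")
# The strided version is nowhere near 16x faster on typical hardware: both runs
# spend most of their time fetching cache lines from RAM, not doing additions.
```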


Congratulations to all of the selections in the book review contest (and all the non-selections too)! I plan on reading all of the finalist entries as they appear on the blog.

Does anyone have any favorite entries among the non-finalists? I only read a few of the entries. Of the ones I read, my favorite was definitely The Outlier, but that one can't count towards my question, because it ended up being a finalist. Among the others I read, I most enjoyed the review of Cracks in the Ivory Tower.


Software engineers do not write software in a vacuum, they write it as part of a business. If the business needs to render an entire 3d scene in under 20 ms, you spend 10 million dollars a year on optimizing render pipelines. If the business needs to render a couple buttons, and it's okay if they take a few seconds to load, you ask your engineers to spend 1 day on it.


Do people have strong takes on the famous German apprenticeship system? How could we apply that here in the US? To my understanding the German state works hand-in-hand with local manufacturing employers to train future workers- an example of 'ordoliberalism' where the state & private industry work closely together on shared goals.

What's the reason we don't have anything similar in the US? Contrary to popular belief, the US is still the world's second largest manufacturer. I doubt the federal government would do a particularly good job of this, but individual states could certainly work hand-in-hand with say Boeing, Lockheed, various steel plants etc. to train skilled workers. I think state-level American politicians are a little more pragmatic and less ideological, so regardless of free market positioning, most states do in fact want more skilled blue-collar labor. What's the reason this hasn't happened in America in any kind of systematic, large-scale way like they do in Germany?


Re: #2, the short answer is that most software engineers are very, very bad at making software.

See https://www.youtube.com/watch?v=ZSRHeXYDLko

Or see the minor kerfuffle over Casey Muratori's bug report against Windows Terminal regarding its poor performance, which culminated in this https://github.com/cmuratori/refterm along with a couple of videos diving into exactly what techniques were used to achieve ~1000x(!) performance gains.

The slightly longer answer is that a long time ago, programming was very difficult and required a high level of skill to build even basic software with an acceptable level of performance. Advances in hardware and software have made it much easier, the profession has expanded greatly, and so the average practitioner has _substantially_ less skill. Combine that with market dynamics that reward time-to-market and punish quality, and you get... today's software, which barely works at all.

Except in games! The highest-dollar games ("AAA titles") push the hardware to its absolute limits, as described in the quoted comment. Since the market rewards the "biggest" games with the best graphics, largest and most detailed worlds, etc etc, game developers have to be _actually competent_ in order to create successful products. Though even this market pressure is somewhat attenuated by Unity, Unreal, and other off-the-shelf game engines that do the hard parts and let smaller studios focus on gameplay, art, story, etc.

There's a lot more to it than this, but that's the basic reason games are awesome and get better all the time, and all other software is garbage and getting worse.


I've heard P100 masks basically fully protect against covid? I want to get my Grandma one. Is this true? Are there any places where I can read more about this - the best masks, filters, etc.?


Regarding the image thing: as someone working on an image-heavy software project using TGUI, which is a UI library based on SFML in C++, you still get that kind of hanging in the UI, and this should be much easier to deal with than web-dev UI stuff. Of course my project isn't optimized yet, but even games on the market have this problem, especially Unity games -- I mean the UI hang specifically. Major RPGs like Shadowrun have it just as badly as minor strategy games like Star Dynasties.

For my own project, my impression is that the issue comes from dynamic sets of images: you either need somewhat involved code to check what was added or removed, or you simply erase the container and reload/redraw every image each time you click something. I am 99% certain that a menu panel you only have to load once has 0 ms of hang time.
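For what it's worth, the "check what was added or removed" approach doesn't have to be very involved -- here's the shape of it as a Python sketch (load_image/unload_image are stand-ins for whatever the toolkit actually provides, TGUI widgets, SFML textures, etc.):

```python
def load_image(path: str) -> object:       # placeholder for the real, expensive load
    return f"<image:{path}>"

def unload_image(handle: object) -> None:  # placeholder for freeing the resource
    pass

loaded: dict[str, object] = {}             # path -> handle, images currently on the panel

def sync_images(wanted: set[str]) -> None:
    """Make `loaded` match `wanted`, touching only the difference on each click."""
    for path in set(loaded) - wanted:      # images no longer needed
        unload_image(loaded.pop(path))
    for path in wanted - set(loaded):      # newly needed images: the only expensive loads
        loaded[path] = load_image(path)
```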


There are lots of local and state elections for which most of the candidates don't have any obvious strengths or weaknesses, but it's also not trivial to find out enough to know who to vote for. I don't want to spend hours doing research but I would pay to crowdsource that research, especially if it answered specific questions about my values (which candidate will best advance cause X?). Even for small races with a hundred thousand total votes I imagine there are at least dozens if not hundreds of people who would value that research enough to fund it. Mainstream media sources will never offer that perspective. It would have to be something like a paid, but one-off substack.

Does this exist? Could it exist?


So I'm developing a Map & Menu strategy game with a strong focus on social simulation on top of a strategy layer. Think a map painter but instead of always map painting you can be a spy master or a merchant or build tall in a way that is actually interesting.

I'm writing some of the basic AI code for NPC characters. What I'd like to ask for is interesting suggestions for distinct social actions, personality traits, ideological axes and also "Interests" for characters in a fantasy world.

Now of course I have my own list, but I'm curious whether there's some obvious great stuff I'm missing. Social Actions would be things like Gossip, Flatter, Rant, Debate, Empathize, etc. Personality Traits would be things like Family Oriented, Obsessive, Analytical, Diligent, Courteous, Impulsive, Vicious, etc.

Ideological axes would be stuff like Purity, Egalitarianism, Militarism, Traditionalism, and so forth. I have 20 split into 10 axes going from -40 to 40 but I think I might be missing some potentially prominent ones.

Interests are stuff like Philosophy, Gardening, Animal Handling, Athletics, Tactics/Strategy, and so forth. Basically more specific stuff rather than the broader categories of Ideology or Personality.


MetaFilter had a post about Every Bay Area House Party, and I made the mistake of reading the comments. Each of these is from a different person -

"lovable eugenicist asshole SSC fucker"

"eugenicist Scott"

"he pulls the trick of writing something endearingly witty and not obviously fascistic."

"The author is a eugenicist. You don't become that way if you don't think you're far above average while being incredibly self-unaware."

"he didn't want his employers and patients knowing he was a eugenicist."

"a eugenecist assclown" (sic)

Did I miss it when Scott said he was a eugenicist? Or maybe he isn't, in which case why do these people think he is?

Expand full comment
May 8, 2022·edited May 8, 2022

Why is it slow to load 2D images? It's standard practice in the software industry to write something functional without paying too much attention to speed as a first step. And it actually works pretty well. Most of the time your first try is good enough. Also, most of the time you're not working on the step that takes the most time, so it wouldn't make sense to spend effort making that part of the code run faster. Why spend a week writing something in C when you could spend an hour writing it in Python if the difference in speed is only 50ms?

So almost all software is 1000x slower than it could be and this represents a much more efficient allocation of developer time than if all software were fast. (Obviously some things are slow that should be fast though.)

Expand full comment
May 8, 2022·edited May 8, 2022

I'm quite desperate right now and I don't think I can make things worse by posting this comment here. Does anyone have short-term, concrete, actionable advice for EXTREME procrastination and emotional block to start intellectual work?

Alternatively, I'd welcome recommendations for an online therapist who can help with this kind of issue. I'm already in therapy, but this is kind of an emergency situation and the slow-paced approach of my regular therapy is not very useful for these immediate problems (to be fair, I've focused mostly on relationships so far).

Apologies if this is not the right place to ask....

Expand full comment

Uhm... how do you guys avoid this pesky extra whitespace in your comments, that I seem to create in longer comments? It does not show up for me like that in the edit-box.

Expand full comment
May 8, 2022·edited May 10, 2022

Here is a site created by an engineer at Microsoft that gives info about the current availability of drugs to treat covid (Paxlovid, Molnupiravir & Bebtelovimab) and the preventive drug Evusheld, given to people who are immune compromised. Info is only for US.

Evusheld locator: https://rrelyea.github.io/evusheld/

Paxlovid locator: https://rrelyea.github.io/paxlovid/

Bebtelovimab locator: https://rrelyea.github.io/bebtelovimab/

Molnupiravir locator: https://rrelyea.github.io/Molnjpiravir/

Site draws data from COVID-19-Public-Therapeutic-Locator, but is easier to use and gives extra information. You can get information about Bebtelovimab, which is not on the Therapeutic Locator, & see how much supply of each drug each site has. You can also see a graph showing what the supply has been over time. Seeing supply over time is useful if you want to make some noise about how terrible the distribution of these drugs has been, and how much is going to waste. For instance, here’s Paxlovid in freakin Alabama: https://rrelyea.github.io/paxlovid/?state=AL Notice how many pharmacies’ graphs are practically flat — they got in a supply of the stuff a good while ago, and it’s just sitting there.

Expand full comment

What often makes websites ridiculously slow is an excessive number of round trips to the server. First your browser asks the server for an html file, and then the html file tells your browser to ask the server for JavaScript file #1, and then JS#1 tells your browser to ask the server for JS#2, and so on up to JS#9. They could save between hundreds of milliseconds and several seconds (depending on how slow your connection is) by inlining all the javascript code in the first request.

Expand full comment

Now, a million-dollar question: why is the Process Manager (the program you start in order to find an offending program that's taking too much CPU/RAM/disk and stop it) slow too?

Expand full comment

So Honduras revoked their law that allowed new ZEDEs/charter cities (unanimously!), and we'll see how things shake out for Prospera and a couple of the other existing ones. I think charter cities are interesting and I'd definitely like to see them succeed - unfortunately, as I said at the time, you basically can't trust a third-world Latin American country not to go populist in the future. Even if you make an 'agreement' with one government at one point, the history of Latin America tells us that a future populist will come into power and rip it up. Prospera settling in Honduras was a bad idea; I would basically expect a future government to shred their agreement and seize the land.

I continue to advocate for charter cities among Caribbean nations, probably an ex-British colony. While not perfect, they tend to be much more politically stable than poor Latin American nations like Honduras, and have more respect for the rule of law, basic property rights, etc. The Bahamas, Barbados, Saint Kitts, Grenada, Saint Lucia..... all much better options. Let's hope that charter cities build on a more successful foundation for their next round.

Expand full comment

I've recently retired from a job in digital device performance. (Much of that time involved cell phones, with a little bit of work with computers near the end.) That doesn't make me an expert; I always felt more like I was just fumbling rather than knowing what I was doing.

In general, user-noticeable performance problems tend to be caused by a giant collection of tiny problems, rather than a single big issue. I call it death by a thousand cuts. Occasionally the root cause is a single major design or coding mistake; we loved those problems, because we could pretty much always solve them. Whereas with the deaths by a thousand cuts, it would be tradeoffs all the way down - do we want 10% of users bit by *this*, or 11% bit by *that*?

At a high level, I'd say the twin root causes of much of what we dealt with were (a) excessive complexity and (b) cramming in at least 10% more features, more constantly-running processes, more memory use, etc than the device could handle comfortably, and relying on various tradeoffs to make it look happy *most of the time*. The cell phones I worked with lived on the edge - not just the older, slower models we were still supporting, but generally also the latest and greatest model we hadn't yet announced. (There was a big tradeoff between the cost of more powerful hardware and the features designers wanted to load onto it. Neither higher prices nor fewer features were going to keep profits high - but that meant everything was always running on the edge.)

Getting back to excess complexity - modern software development optimizes to reduce development costs. Everything's done with really really high level languages, with layers of frameworks. No developer understands all the things a particular line of code will do (or cause to be done). Moreover, the same line, in the same program, will do different things depending on the state of the device. 99 times out of 100, or even more often, a particular operation is non-blocking - and the other time, it sends a message to another process, which needs to be launched, which wakes up 5 more processes, which need to be launched, which runs the device out of free memory, causing the process that started it to be paged out (on a computer), or even killed and relaunched (on a cell phone or tablet).

One culprit was ever higher level languages. Swift uses more resources to accomplish things that could have been done in Objective C more cheaply. C++ or C would be even cheaper - in terms of performance. But not in terms of development time. Swift would also protect developers from some common coding errors Objective C could not. But when I write C, I pretty much know *exactly* what it will do - except for operating system behaviour, and whatever external routines I call. And when I write Swift, I really don't - I hopefully know what it will eventually accomplish, but not how it will be done.

Another culprit is that iOS, at least, and much of MacOS, does just about everything by sending a message to some other process, which does part of what you wanted itself, then sends out a few more messages to accomplish the rest. It's unlikely that the whole set will fit in memory. (And by the way, your cell phone lies to you about what apps are really in memory - sometimes it's metaphorically freeze dried and has to be reconstituted on the spot. But most of the problems come from daemons - non-app processes that the user never sees.)

Web browsers have all the problems of any other app, even before you get to the point where the web page itself is written in some language like javascript, which needs to be interpreted at run time.

Writing code to run on more than one browser, or more than one operating system, or both, adds its own complexities. Both web browsers and operating systems have things they do well/easily, and things they do badly, and what Safari on MacOS does easily is probably a disaster on some other combo - but not because Safari or MacOS is objectively better. Some other thing you want to do is implemented in such a way that it's at its best on Microsoft Edge, running on the latest Windows system. (And this gets even worse on cell phones, which tend to be more constrained.)

I could go on, but this response is already in TL;DR territory ...

Expand full comment

Video games use way more power than my computer consumes at any other time. My tiny laptop heats up to the heat of the sun if I don't use an external fan while playing video games, but pretty much doesn't heat up at all if I'm browsing or using regular boring 2d programs.

Expand full comment

Scott, I don't expect to be a finalist, but I've become aware since submission of many glaring problems with my review, and imagine other writers may have too. Is there a way for the finalists to revise their reviews before you post them?

Expand full comment
May 8, 2022·edited May 8, 2022

If we have an entry in the book club, but we don't get selected, will there be any way to find out what ranking our review got? e.g. 319/423

I want to get more into writing, so I want to know where I’m at compared to other people.

Expand full comment

It takes orders of magnitude longer to get a response packet from a remote server than it takes to do anything local. There are ways to compensate for this in a video game context (e.g. streaming/network socketing and predictive rendering) but those methods have tradeoffs that aren't really worth the trouble when working with a web app.

Also...video games do regularly hang for hundreds (on my computer thousands) of milliseconds. These are called loading screens and they're ubiquitous. Designers have gotten very good at hiding these, but you might have noticed space warriors taking more elevators and ninjas spending lots of time in low-texture corridors before entering into a vast and gorgeous expanse lately. However clever these might be, at some point your computer has to move the information about how to render the world from the hard drive (which may as well be in China as far as the CPU is concerned) to RAM (which is more like next door).

With that said, I think the gist of the comment was that the user experience of the average video game is better than that of the average web app/word processor/photo viewer. That's hard to deny.

The thing is that games are made with billions of dollars by the best programmers on Earth, using programming languages that work closely with the machine. Blogging web apps are built in JavaScript.

Expand full comment

I think the "amazing game engines" argument underestimates how difficult it is to render fonts. Not only each glyph has many points of its own, but they also interact with each other with ligatures and kerning. I think this interaction may not be parallelizable, unlike rendering many triangles.

Expand full comment

Dan Luu has a good piece on how computer interfaces have gotten slower over time: https://danluu.com/input-lag/

Expand full comment

Economic reasons: On the labor supply side, the number of people employed as programmers has far outpaced the number of people who have the talent and expertise to write maintainable and performant software. And the training that many new programmers have is at a level of abstraction which completely obfuscates the systems you have to understand in order to write really performant code, like the rendering pipeline and how the CPU interacts with different layers of memory (cache, etc). And outside of the game industry, the best programmers usually end up working on the backend. Meanwhile, on the consumer demand side, most users don't complain about performance unless it gets really bad. Or unless they're gamers.

Institutional reasons: 98% of the time executives don't ask about performance at quarterly meetings with product managers, so PMs don't pressure programmers to focus on performance, or even have a reliable way of monitoring it. Instead, they're asking for more features faster, which puts pressure in the opposite direction, and performance slowly gets worse. Eventually, an executive might notice how bad performance is getting, or one star reviews might start showing up, then there's a scramble at the company for a bit, performance gets better - or not because an executive decided something else was even more urgent, but either way the development process usually doesn't change and poor performance slowly creeps back in.

Technical reasons: A lot of these have been mentioned already, but take React as a case study. You've got a virtual DOM (document object model) that has to be re-calculated whenever there's a change in state - there are a lot of footguns here; you only want it to re-calculate the part of the virtual DOM that's relevant to the state change. Then when the virtual DOM changes, it has to be reconciled with the actual DOM (the stuff that gets specified with HTML tags), which often means the entire layout of the page has to be re-calculated and redrawn. Meanwhile, the JavaScript engine handles memory for you with a garbage collector, but you often don't know when new memory is allocated (which is slow), when the GC will run, or whether or not the memory you're using is in the cache (fetching memory outside the cache is ~30 times slower). JavaScript usually runs with "just in time" compilation, which helps, but it also gives the CPU extra work to do upfront whenever new JavaScript code loads. Meanwhile, your latest triple-A video game is written in C++. In C++ (as in C and Rust), you control when memory is allocated, when it's destroyed, and how it's laid out. You can check the assembly it gets compiled into for different CPUs (godbolt is a great tool for this) and make sure it's being optimized at that layer.
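To illustrate that last point, here's a toy C++ sketch of the kind of control I mean: one up-front allocation, plain contiguous data, and no garbage collector anywhere. The Particle struct and the counts are made up for the example.

```
#include <cstdio>
#include <vector>

// Plain struct: the fields sit next to each other, and a vector of these
// is one contiguous block of memory.
struct Particle {
    float x, y, vx, vy;
};

int main() {
    std::vector<Particle> particles;
    particles.reserve(100000);  // one allocation, chosen by us, up front

    for (int i = 0; i < 100000; ++i) {
        particles.push_back({static_cast<float>(i), 0.0f, 1.0f, 0.0f});  // no further allocations
    }

    // Tight, cache-friendly loop over contiguous data; no GC pause can interrupt it.
    float sum = 0.0f;
    for (const Particle& p : particles) {
        sum += p.x * p.vx;
    }
    std::printf("%f\n", sum);
}
```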

Expand full comment

The minimum viable video game requires extremely high performance. The minimum viable UI for an app requires "generally works, and is about as snappy as other things you use". Once the minimum viable product is achieved, resources are directed to adding features/content, rather than continuing to improve performance.

To put it another way, there is a cliff of profitability if your video game can't achieve sufficient frame rate to not make players sick. That cliff for a desktop app is many orders of magnitude slower.

Expand full comment

Why do economists draw supply/demand graphs with volume of trade as the X axis and price as the Y axis?

As a supplier or demander I can control the price I accept or offer, and the amount I can sell or buy is then a function of that price and everyone else's choices.

But supply/demand graphs are invariably drawn as though volume is the independent variable and price is the dependent variable, which feels weirdly counter-intuitive to me.

Similarly, "The supply curve is steep" feels to me like it ought to be saying that supply is very sensitive to price, whereas this way round is means that supply is very insensitive to price.

Expand full comment

I find myself with a lot of extra belly fat after sitting on my couch during Covid.

What are the tradeoffs around removing this with diet vs liposuction?

Expand full comment

Who owns text generated by GPT-3, or images created by Dall-e 2? The person who wrote the prompt? Once AGI is a thing, who will own ideas and/or products generated by the AI?

Expand full comment
May 8, 2022·edited May 8, 2022

Regarding the horrid performance of modern software relative to the incredible hardware, I recommend Casey Muratori's stuff. He's the creator of Handmade Hero and a former software engineer at RAD Game Tools, which produces high-performance tools (like video codecs used in gazillions of games) and which was recently bought by Epic Games for a bunch of money.

Here he illustrates how slow the Windows Terminal is, and how he made a version in a week that was orders of magnitude faster than what Microsoft manages to produce with all their time and resources: https://www.youtube.com/playlist?list=PLEMXAbCVnmY6zCgpCFlgggRkrp0tpWfrn

Or here he talks about "Where Does Bad Code Come From?": https://www.youtube.com/watch?v=7YpFGkG-u1w (there's also a follow-up Q&A video)

Or here he lectures on "The Only Unbreakable Law" of software architecture: https://www.youtube.com/watch?v=5IUj1EZwpJY - Namely Conway's Law. Quote from Wikipedia: "Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure."

Expand full comment
May 8, 2022·edited May 8, 2022

The observation is on the money.

https://computers-are-fast.github.io/

Computers are blindingly fast, if you use their resources properly. If you chain a bunch of requests over the internet together, say naively running a bunch of loads or library calls in sequence, you can make anything take a long time. (But it should be telling that speed-of-light latency at the scale of a planet is considered slow in computing terms.)

Expand full comment

Possibly naive question: I was reading that many (most?) computer languages don't handle complex numbers well, and I was surprised. It's just algebra, isn't it? Algebra is a mechanical process. Maybe it takes more computation than arithmetic, but once you've got the equation (for reasonably simple equations), you just turn the crank, don't you?

I realize that computer programs tend to use approximations rather than algebra, but I don't know why.

http://www.antipope.org/charlie/blog-static/2022/04/holding-pattern-2022.html#comment-2145812
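(For concreteness, here's the sort of thing I mean by "turning the crank" - and as far as I can tell, in C++ at least the standard library does exactly this out of the box:)

```
#include <complex>
#include <cstdio>

int main() {
    std::complex<double> a(3.0, 4.0);   // 3 + 4i
    std::complex<double> b(1.0, -2.0);  // 1 - 2i

    // The usual algebra works as written: multiply, add a real, divide by a conjugate.
    std::complex<double> c = (a * b + 2.0) / std::conj(a);

    std::printf("c = %f + %fi, |a| = %f\n", c.real(), c.imag(), std::abs(a));
}
```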

Expand full comment

There is a great article/rant about software bloat that basically lists everything that is wrong with our various software products and urges developers to spend more time optimizing them.

It mentions the seeming difference in performance between 3D games and text editors too, although it doesn't explain it.

https://tonsky.me/blog/disenchantment/

(his blog also has some more examples of software disenchantment, like https://tonsky.me/blog/good-times-weak-men/)

Expand full comment

I read about a problem recently that might be a good application for prediction markets or Social Impact Bonds/Retroactive Public Goods Funding/whatever you want to call it.

Growing a tree takes decades, so it's possible to fund a reforesting initiative that looks good at first but fails further down the line, due to either a lack of maintenance or corners being cut during planting (e.g. using a cheaper species of tree which is unsuitable for the habitat being afforested). Principals can't be expected to learn everything about forestry, so they have to trust the agent doing the planting.

https://www.bbc.co.uk/news/science-environment-61300708

What you need is to pay for there to be a forest in an area in 10, 30, 50 years' time. This could be assessed through remote sensing such as the Sentinel 3 Chlorophyll Index and backed up by in-person inspections of reforested areas.

Investors will solve the problem of the cost of reforesting being front-loaded while the results can only be seen decades later. They will be incentivised to make sure that the planting is done in a way that will result in a forest that survives.

Expand full comment

Why do we not have politicians or journalists as good at their jobs as Ronnie O’Sullivan is at his?

Expand full comment

Many good comments have been made on the observed 2D / 3D divide (the most important of which is usually disk I/O or network I/O), but I just wanted to add: Only one of these is likely using your computer hardware to near-full capacity and spinning up the fans. You typically don't want that, period.

But it's only the 3D AAA video game that needs to draw upon all these resources or fail. It is indeed remarkable that it can, but a lot of hacks have had to happen to hardware and software over the years (even with computing power increasing) to allow it to render that much at all. The trade-off is that it uses approximately all resources.

If your office software did the same thing you'd prefer a different one, which can give you several windows of the same thing at once, let you run your web browser in parallel, possibly listen to music as well, rapidly tab between applications, et cetera.

The video game has the benefit of having your full attention, so it can afford to be greedy.

(Yes, if you have a good desktop computer, you can tab between the game and whatever else you're doing. But even if you have a good gaming PC you should be able to tell the difference in system load if you boot up a high-polygon 3D game to other usage patterns.)

Expand full comment

Reading about the mathematician Gauss, I was reminded of Secrets Of The Great Families (https://astralcodexten.substack.com/p/secrets-of-the-great-families).

Gauss's grandson claimed that Gauss "did not want any of his sons to attempt mathematics for he said he did not think any of them would surpass him and he did not want the name lowered" (https://en.wikisource.org/wiki/Charles_Henry_Gauss_Letter_-_1898-12-21).

This suggests that Gauss saw family reputation, in his field, as an average (with all non-mathematicians excluded, rather than assigned a 0).

I suppose that I see family reputation in technical fields as cumulative, like knowledge itself. Parent proves 5 theorems + child proves 1 theorem = family proves 6 theorems. Unimportant, yet difficult to imagine otherwise.

Secrets Of The Great Families discussed families with accomplished ancestors and accomplished descendants. In contrast to those ancestors, there are also accomplished people without accomplished (or any) descendants. Now I'm wondering whether these two contrasting notions of family reputation might have some small influence.

Or is Gauss's view truly rare?

Expand full comment
May 8, 2022·edited May 8, 2022

The truth is that we don't know how to build good software. The term "software crisis" was coined 54 years ago, and every developer knows in their bones it's still the status quo. Our profession just feels *wrong*, like natural philosophers making stuff up loosely based on observations.

We keep making the same mistakes, reinventing wheels and grasping at straws, then getting desperate and releasing something that barely works. Some other poor soul has a similar problem, starts with that partially broken program in hopes of saving effort, and builds another layer of this Jenga tower.

Eventually the tower gets too unstable, working on it is too hard, so people build a new tower somewhere else, with a base that is unstable in different ways. But not before 10 other people make the same decision, so now we also have 10 new towers dividing our efforts.

Performance is only one of the facets of this problem, though it's an easily measurable one. I just wrote two equivalent programs[1] that count to 1 billion. One is written in C, and the other in Python. Neither one uses parallelism, the GPU, caching, or does anything clever.

The program in C executes in 0.52 seconds. The Python one takes 1m42s, for the exact same results.

That's 200x slower, and just for the language! There will be libraries written in this language, and frameworks built using the libraries, and applications developed using those frameworks. The performance sacrifices scale multiplicatively, and that's where your performance went.
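For reference, the C side of the experiment boils down to a loop like the one below (this is a re-sketch in C++ for illustration, not the exact code in the gist, and compiler optimization flags obviously matter):

```
#include <cstdint>
#include <cstdio>

int main() {
    // Count to one billion, accumulating a total so the compiler can't delete the loop.
    std::uint64_t total = 0;
    for (std::uint64_t i = 0; i < 1000000000ULL; ++i) {
        total += i;
    }
    std::printf("%llu\n", static_cast<unsigned long long>(total));
    return 0;
}
```

The Python version is essentially the same loop written in Python, which is what makes the gap so striking.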

The same mechanisms also make our software less reliable, harder to customize and interoperate, and basically any characteristic that is not mission critical. Because software developers don't have our shit together.

If you'd like to know more and hear about counter movements, look into the Handmade community, and look for talks from Jonathan Blow and Molly Rocket.

[1]: https://gist.github.com/boppreh/e9110afa1077c6329de3f47041e90646

Expand full comment

Prompted partially by a recent Buck EAF post - does anyone have any interesting reading about the potential value of doing psychotherapy-type analysis on AI?

Not so much as in pure Freudian analysis on 1s and 0s, but more like- as deep learning models become more and more convoluted and impenetrable, could a valuable route of analysis be interrogating their outputs using the same tools we use to interrogate the outputs of the (also impenetrable) human mind?

Expand full comment

I can only do a poor job at conveying the depth to which things in software are critically bad, but here are a couple of talks that go deeper on why we got into this mess:

Jonathan Blow's

"Preventing the Collapse of Civilization" https://www.youtube.com/watch?v=ZSRHeXYDLko

Casey Muratori's "The Thirty Million Line Problem" https://www.youtube.com/watch?v=kZRE7HIO3vk

Both of these talks are by people who are working very hard to improve the situation : you might recognize Jonathan Blow as the game designer of Braid and The Witness ; he has been working since ~2014 on a new programming language meant to be a replacement for C++, which is now in closed beta. Casey Muratori is the host of Handmade Hero, a 650+ video series on building a game engine, and who somewhat recently has put Microsoft to shame by coding in a few days a terminal that runs several orders of magnitude faster than Windows's new terminal, with the same amount of features - after being told by Microsoft developers that what he was suggesting was "an entire doctoral research project in performant terminal emulation" (https://github.com/microsoft/terminal/issues/10362, https://www.youtube.com/watch?v=hxM8QmyZXtg&list=PLEMXAbCVnmY6zCgpCFlgggRkrp0tpWfrn).

Expand full comment

I made a web app for one-on-one voicechat with other ACX readers.

One-on-one, because it's simpler:

- When one stops talking, the other starts.

- No groups, no hierarchy, no status.

Voicechat, because it's more intimate than text, but more private than video.

With other ACX readers because that creates some common ground.

It's called coffeehouse: https://coffeehouse.chat/acx

Expand full comment
May 8, 2022·edited May 8, 2022

Hello folks! I've had 3 doses of the Moderna vaccine. It has been 6 months since the 3rd. I've been pondering whether to get the 4th that is now approved for 50+ healthy adults. I was wondering if there were statistics experts here who could comment on this. Is this good science? Thank you!

https://youtu.be/o_nKoybyMGgh

Expand full comment

Are there any Effective Altruists/Rationalists/LessWrong users at Lancaster University, coming here next year, or in Lancaster generally? I'm starting up an EA group with two others here and we're starting the process of getting members. Anyone who's around, feel free to reply here or send an email to gruffyddgozali@gmail.com

Expand full comment

It's because your graphics card is a completely separate computer. The PC and the graphics card are well tied together so you don't normally notice the seams but if you know what to look for they are visible.

Quite often games have a 'loading' stage. This is where all the data is being passed across to the graphics card and the shaders (programs) are being set up. It can take a while because there is a lot to do and you have to shift things across the computer/card boundary and that is slow.

Once it's there, the graphics card can do its own thing with occasional nudges from the main computer. It has everything it needs in its own memory and it can run its own programs. There is little to hold it up.

Your main computer is a much bigger thing. It has a variety of types of memory of varying speed from registers, to cache (levels 1, 2, 3), to RAM, to SSD to hard disk. Most operations are bound not by CPU but by memory speed. If it's all near to hand, things are fast. If the thing you wanted is on disk then that requires loading into RAM, cache and then register. It can be a physical thing and that's not fast.

There are other reasons too. UI design is just not sexy like 3D and it's actually relatively complex. A graphics card does the same thing over and over again and to do that quickly it needs to be relatively simple. A UI involves many more moving parts going down into the depths of the OS and it is a tangled web. Perhaps for that reason we don't get the best engineers working on it.

It would be nice if we did. Then we could have folders which showed their size, or file copies that worked reliably, or folder columns that persisted each time the folder was opened. Instead we have a menu bar in the middle. Hallelujah!

Expand full comment

Others have already mentioned the distance component of website vs. 3D video game performance (one mostly has to transfer information between parts of the same silicon chip, the other between the browser and the web server, which might be in different countries; the speed of electromagnetic waves can't really be optimized) and cost tradeoffs (for a website, excess funds are better invested in more content / more features / fewer bugs because the user cares a lot more about those than performance).

The other big thing is control - web technologies have been optimized to be transparent and easy to interfere with, so the website owner can override the advertiser's decisions on how the ad can look (e.g. whether it is allowed to show phishing popups), your browser can override the website owner's decisions, and browser plugins can override the browser's decisions. That's a win for accessibility and user control, but a loss for performance, since information needs to be available for manipulation at some early stage of processing (the DOM tree) so your adblock plugin can interact with things like "link" or "popup", not pixels on the screen (at which point identifying and removing ads would be rather impossible). You will never have a tool that can remove ads from a 3D video game - it would be performance-prohibitive to expose the 3D processing pipeline to such manipulation.

Another aspect of control is that a video game mostly gets to own the relevant resources, while browsing the web means interacting with massively multi-tenant systems - both on your computer, where you want to be able to have more than one website open at a time, and want the OS and other applications to still work as well, so the resources available to a single website are fairly limited; and on the network, where, from the provider's point of view, you are one of a million clients trying to transfer information through them, and they need to figure out how to do that fairly and safely, taking into account that some of the clients are malicious or overly greedy. That's achieved with various communication protocols that add even more overhead to already slow network requests; and then, since you need interoperability in the network, you end up with standards used by billions of people, which are very slow to evolve.

Expand full comment

Based on my comment below about non-native apps not being that fast on the Mac, I ran a test (on second and subsequent launches) of Books, Kindle and Teams on a 2021 M1 MacBook Pro.

The test was from double click to being able to use the app:

Results on average (seconds):

Books: 0.65

Kindle: 3.8

Teams: 16.2

If there's any bias, it's toward adding time to Books, as I had to react to the launch by hitting stop; my reaction time matters here.

Expand full comment

2.

They say that one of the causes is a "mobile first"-paradigm.

In some sense we do have one. But imagine what a world would look like where software developers actually cared about a good mobile experience.

Let me describe an actual "mobile first"-paradigm.

Your apps would automatically download tens to hundreds of megabytes of data every time you're on wifi. For example, all the websites and blogs I commonly visit would auto-refresh.

There is no reason I should need mobile internet when I'm on the go if the couple of kilobytes could have been automatically preloaded. If I read maybe... five blogs and five newspapers, why do I not automatically get them updated in the background?

If I cannot reliably read in the forest or on the train, then that's a sedentary paradigm.

Not a mobile one.

A "mobile first"-approach would also instantly detect a slow connection and offer a text-only website. Cutting out all the overhead, cookie prompts, advertising and tracking junk, so that only the actual information gets to the user (that's in the kilobyte range, yet somehow we need 5g, lol).

What we actually see is "cloud first", which preloads nothing and demands your phone to be permanently and reliably online. Obviously this is not what you'll have, when you're "mobile".

For the longest time, there was not even automatic full cloud download for Android.

OneDrive has it now, but it's second-class.

OneNote won't even sync in the background. Many apps only will sync their content once you open the app, forcing you to wait.

"Mobile-first" would see local storage in the terrabytes for full cloud sync. Instead we've been letting phone manufacturers play stupid price games with minuscule storage sizes for a decade now.

Expand full comment

As for the videogame comment, that's not what's going on. Unless the world we are talking about is really small or simple, no game engine is generating everything 60 times a second. They might not even be generating stuff right behind you, or a few meters from your camera, if they use the occlusion culling technique (e.g. https://docs.unity3d.com/Manual/OcclusionCulling.html ).

Also, most 3D engines use massive parallelization thanks to GPUs (we know very well how to parallelize computer graphics operations) - most common 2D UIs don't.

Expand full comment

Does anyone know current Japanese sentiment about the possibility of Rogue AI posing an existential threat? I ask because the Japanese certainly seem to have different attitudes about robots, which isn’t quite the same thing, but very close. They’re certainly much more interested in anthropoid robots and have spent more effort developing them.

Frederik Schodt has written a book about robots in Japan: Inside the Robot Kingdom: Japan, Mechatronics, and the Coming Robotopia (1988, & recently reissued on Kindle). He talks of how, in the early days of industrial robotics, a Shinto ceremony would be performed to welcome a new robot to the assembly line. Of course, industrial robots look nothing like humans nor do they behave like humans. They perform narrowly defined tasks with great precision, tirelessly, time after time. How did the human workers, and the Shinto priest, think of their robot compatriots? One of Schodt’s themes in that book is that the Japanese have different conceptions of robots from Westerners. Why? Is it, for example, the influence of Buddhism?

More recently Joi Ito has written, Why Westerners Fear Robots and the Japanese Do Not,

https://www.wired.com/story/ideas-joi-ito-robot-overlords/

He opens:

“AS A JAPANESE, I grew up watching anime like Neon Genesis Evangelion, which depicts a future in which machines and humans merge into cyborg ecstasy. Such programs caused many of us kids to become giddy with dreams of becoming bionic superheroes. Robots have always been part of the Japanese psyche—our hero, Astro Boy, was officially entered into the legal registry as a resident of the city of Niiza, just north of Tokyo, which, as any non-Japanese can tell you, is no easy feat. Not only do we Japanese have no fear of our new robot overlords, we’re kind of looking forward to them.

“It’s not that Westerners haven’t had their fair share of friendly robots like R2-D2 and Rosie, the Jetsons’ robot maid. But compared to the Japanese, the Western world is warier of robots. I think the difference has something to do with our different religious contexts, as well as historical differences with respect to industrial-scale slavery.”

Later:

“This fear of being overthrown by the oppressed, or somehow becoming the oppressed, has weighed heavily on the minds of those in power since the beginning of mass slavery and the slave trade. I wonder if this fear is almost uniquely Judeo-Christian and might be feeding the Western fear of robots. (While Japan had what could be called slavery, it was never at an industrial scale.)”

As for Astro Boy, which Osamu Tezuka published during the 1950s and 60s, there are some robots that go nuts, but they never come close to threatening all of humanity. But rights and protection for robots was a recurring theme. Of course, in that imaginative world, robots couldn’t harm humans; that’s their nature. That’s the point, no? Robots are not harmful to us.

But those stories were written a while ago, though Astro Boy is still very much present in Japanese pop culture.

What’s the current sentiment about the possibility that AI will destroy us – not just take jobs away, that fear is all over the place – but destroy us?

Expand full comment

Vim is indeed very correct in their observation (on top of being a text editor that I very strongly like). I experienced the same frustration during my PhD in real-time distributed simulation and kept thinking about this for a long time. I feel like I have several elements of explanation worth sharing. A couple of them stem from architectural and philosophical considerations, and the rest from personal experience.

This is my first serious post here; expect some heavy rambling, and feel free to correct/comment on anything that seems off.

Architectural reasons:

1. Time is not a first-class citizen in computer programs. Execution time is not a property embedded in the source code of a program, as it is the combination of what the source code says, how it was compiled (compilers have a lot of different options allowing for speed vs size tradeoffs), the specs of the computer running the program, how system resources are shared between concurrent programs, etc. It is mind-bogglingly difficult to determine how fast a piece of code will run, as there are too many moving parts to allow static analysis to produce meaningful results. Therefore, the act of optimizing for speed is a tedious one: it requires profiling the running program, determining which functions were consuming the most CPU cycles, and revisiting said functions by refactoring them and hoping that those efforts would indeed lead to improvements. (There is a rough timing sketch after point 7 to illustrate what that measurement boils down to.) Most programmers don't take it that far: if it works, albeit slowly, they'll leave that section as is and start working on something else.

2. Parallel vs sequential. It is quite unfair to compare the rendering prowess of a modern GPU with the inner workings of business-grade software. In the case of a video game, screen rendering is offloaded to the GPU, whose specialized architecture has been designed to process ludicrous numbers of operations in parallel. In contrast, CPU code is mostly sequential, processing data one item at a time. If there are, say, 3000 pixels to render, a GPU can handle them essentially all at once; a CPU reviewing 3000 pieces of data one by one takes 3000 steps. Most business-grade software isn't written in a parallel manner, because parallel processing is finicky to get right and comes with its own caveats requiring specialized knowledge.

"I-feel-like" reasons :

3. Switching to slow languages: 8 years ago came a framework called Electron. It lets you write code designed for webpages and build a desktop program with it, so you can develop an application once and deploy it both as a webpage and as desktop software. However, this adaptability comes at the expense of speed, as web languages such as JavaScript are much, much slower than languages such as C. Plus, the compatibility layer provided by Electron makes use of a browser: for each Electron-based program running on your computer, a new browser is spawned, increasing the load on your machine tenfold. At work, I'm writing code with Visual Studio, checking my mail with Outlook, Teams starts automatically, and I look at documentation online using another browser. Therefore, at all times, I have at least 4 browsers running in the background, each requiring time to run its slow JavaScript (several orders of magnitude slower than C). Browser slowness is leaking into desktop applications as well.

4. Disdain for execution time: During my initial studies, we were told not to optimize too much, since computers are already very fast and optimization wouldn't bring more value. This is true for small, distinct programs but false for large software. As 100 different individuals work together, they each bring their own non-optimizations, citing how fast computers are and how they do not need to optimize. However, as software grows in size over time, slowness adds up multiplicatively: the codebase looks more and more kafkaesque, and "feature creep" forces functions to adapt to more and more weird edge cases, with patches upon patches upon patches, each degrading execution time a little bit. At the tipping point, it appears that throwing away the entire codebase and rewriting everything from scratch is the only viable solution. Said solution will seldom be implemented, as it requires extra time, effort and money the company would rather invest somewhere else.

5. Size growth: In the past, we were working with small batches of data since storage space was limited. As storage space is infinitely better than before, we process larger and larger batches of data requiring more processing power.

6. Mandatory internet connectivity: As Vim pointed out, software often performs network transactions when features are activated. Network exchanges are several orders of magnitude slower than fetching something from memory, as we delegate work to servers that may sit outside our own country. As stated before, execution is sequential, and everything will hang until the server replies.

7. Size and scope: Software back in the day was quite primitive in terms of features. It was designed around carefully selected use cases. Due to storage and processing power limitations, software had to be extremely well written to even fit in memory, and every aspect was tightly controlled. Nowadays, software is bloated with hundreds of functions which will almost never be used, but which still affect how data is structured and increase the number of operations needed.
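Coming back to point 1, here is the crude version of "find the function eating the CPU": wrap a suspect region in a timer and measure it. This is only an illustrative C++ sketch (real profilers such as perf, VTune or Instruments sample the whole program for you; the function name and workload below are made up):

```
#include <chrono>
#include <cstdio>

// Prints how long the enclosing scope took when it is destroyed.
struct ScopedTimer {
    explicit ScopedTimer(const char* l)
        : label(l), start(std::chrono::steady_clock::now()) {}
    ~ScopedTimer() {
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("%s: %lld ms\n", label, static_cast<long long>(ms));
    }
    const char* label;
    std::chrono::steady_clock::time_point start;
};

long long suspectedHotFunction() {
    ScopedTimer timer("suspectedHotFunction");  // hypothetical name, for illustration
    long long sum = 0;
    for (long long i = 0; i < 50000000; ++i) sum += i;  // stand-in for real work
    return sum;
}

int main() { std::printf("%lld\n", suspectedHotFunction()); }
```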

Expand full comment

2. I think of this as cost disease in software.

economics:

There's a standard of living that we accept and put up with. Work 40 hours per week for so many decades. GDP/capita may be however many times higher than fifty years ago, but the pressure is much the same.

software:

There's a standard amount of lag/sluggishness that people are used to putting up with.

Our computing devices actually never end up becoming faster on the user interaction level.

economics:

Surplus gets captured by more bureaucracy, regulation and other overhead. As standards of living remain constant, nobody notices what we're stealing from ourselves.

software:

Any hardware innovation will be used to fuel the demands of an ever-growing stack of software complexity. If there's time freed up, it gets eaten by regulation again, like cookie prompts or mandatory/non-optional 2FA authentication. As the speed remains constant, nobody notices the time we're stealing from ourselves.

Meh... not a perfect isomorphism. I give it 6/10.

If I figure out how to translate the Baumol-effect to that dynamic, I'll rate it higher.

Expand full comment

No operation on the UI thread should take anything close to even 200ms. If the network or disk needs to be accessed, that should happen on another thread. It is a cardinal sin, in my opinion, to make networking calls before the app has finished launching, though games are a likely culprit in that regard.
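To make that concrete, here is a bare-bones sketch of the pattern in C++, using std::async for brevity; real UI frameworks have their own threading rules, and the file name is just a placeholder:

```
#include <chrono>
#include <fstream>
#include <future>
#include <iterator>
#include <string>
#include <cstdio>

// Stand-in for the slow part: read a whole file from disk.
std::string loadFileSlowly(const std::string& path) {
    std::ifstream in(path);
    return std::string(std::istreambuf_iterator<char>(in),
                       std::istreambuf_iterator<char>());
}

int main() {
    // "UI thread": start the load on a worker thread and return immediately.
    auto pending = std::async(std::launch::async, loadFileSlowly,
                              std::string("settings.cfg"));  // made-up file name

    // Each frame, poll without blocking; keep drawing the UI in the meantime.
    while (pending.wait_for(std::chrono::milliseconds(0)) != std::future_status::ready) {
        // ... render a spinner, handle input, stay responsive ...
    }

    std::string data = pending.get();
    std::printf("loaded %zu bytes\n", data.size());
}
```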

That said, it's a bit disingenuous to compare startup times to frame rate; games can take minutes to load. Network and disk are much slower to access than RAM.

Added to that, games are doing their calculations on the GPU, which is dedicated to the kind of floating-point calculations that enable fast graphics.

For most application developers writing native code, the launch time is beyond their control - a lot is going on as the OS brings the application and its frameworks and libraries into memory. Often a cause of slowness is the need to swap out older memory. That is an OS-driven operation and beyond the control of application devs. Nevertheless, it's often fast.

This doesn’t justify all long launch times however, on my M1 mac some of the applications from the usual suspects (Adobe and MS) take seconds.

This is because the developers are often using a non-native framework, which needs to be loaded in order to get the app running. Electron allows developers to create applications in JavaScript that run on any platform, so that environment (which is Chrome) needs to be loaded. Apple's native apps are ready to use in less than half a second - no bounce, or just one. Something like Slack does bounce once, but even then the main window takes ~1 second to populate. Teams takes 2 bounces to show a small window telling me that it is loading, then about 3-5 seconds until I have a workable window.

Expand full comment

What's happening is that the GPU is, by far, the most heavily optimised piece of hardware in your computer. It is really good at performing the linear algebra required to fill those pixels with such desperate alacrity (or at deep learning, for that matter).

Every other component in your PC is a sluggard by comparison.

Expand full comment

There's a bunch of technical answers about load speeds (and I can get into the technical side if you want) but the fundamental reason is economic: engineering time is orders of magnitude more expensive than computer time. How much does an engineer cost per hour? $50 on the low end? So $8,000 a month. How much does it cost for a basic Watson plan per month? $500. And that's unusually high.

Optimizing for speed is less economically efficient than throwing processing power at the problem. This is a pretty generally accepted principle: you minimize your engineering time above all else because engineers are the most expensive. (Now, of course, you also have to weigh the FUTURE engineer time that tech debt creates. But fundamentally you're still minimizing engineer hours.) This includes creating features that are as slow as users will tolerate. Any excess time invested in making it faster is wasted engineering hours that could be better used elsewhere, like developing new features or fixing bugs.

How much will users tolerate? People are generally willing to tolerate a few seconds of latency on websites or for things like Word. (And they'll tolerate it much, much more than bugs or lacking features.) If Amazon takes a few seconds to paint you're generally not going to complain. Videogames are in a different market. If you shoot someone and miss because the game is off by a few seconds then you're very upset.

Expand full comment

Sometimes I find I have a piece of context that seems universal but that some people don't have, like getting choked unconscious or hit with an airbag or something, and it's always jarring for me. I'm curious to know what experiences people have that others don't that are jarring for them in the same way - where you go to talk about an experience you unconsciously felt was universal and it turned out to be less than that.

Expand full comment

There are already good answers to the 3D vs 2D question. I'd add: the UI part is the wrong framing. Your 2D app lags not because it's hard to render, but because it's waiting for the data.

Also, the 3D is wrongly described. It doesn't compute everything for every frame. Most of it (like 99.9%) is cached. For every new frame, the computer just computes the difference with the previous frame.

Expand full comment

It's worth noting that when a 2D desktop application is just sitting there idle, it's still being drawn 60/144 times per second, so it's evidently not the 'draw' that is causing difficulties. You can occasionally catch programs that do have some weird difficulty with the draw, as they will hang when you try and drag the window around, for example. As others have pointed out, it will be the logic behind what it is deciding to draw.

Even with something as simple as a menu, the app might decide to check what items should be in it, look up your settings, and have to pull that from disk or some other slow source. Even then, we're probably just talking about a hang on the first load, which you do also see with games (the initial loading screen).

Some of these 100ms hangs might be fixable with having your application do a 60 second initial load, like a game might do, but I doubt that's a desirable behaviour.

Expand full comment
May 8, 2022·edited May 8, 2022

Part of the story is that a video game is installed locally, it runs directly on your CPU, and it has full access to all the capabilities of your video card. Whereas most boring 2D apps nowadays are just websites, so they are written in Javascript which gets translated on-the-fly to run on your actual CPU. It’s like speaking in your native language versus going through a translator.

That’s not the whole story, however. Modern computers are fast enough that even in Javascript you can write an app which responds to any user action within tens of milliseconds, not hundreds or thousands. You just need to spend some effort on it. And the people who like to obsess about shaving the last microsecond off a piece of code, tend to go into video game development (or high-speed trading) rather than web development.

But even then, if you look at Substack’s Archives page (which shows the first 10 entries and then waits for you to scroll down before fetching the next 10 from the server) versus the Archives page of the old SSC blog (which simply sends you the entire archive so that you can scroll through all entries immediately), that’s not a matter of "not spending enough effort on optimizing it".

The reason the Wordpress site SSC is built on is so much faster is not that the Wordpress people knew some incredibly clever advanced tricks to optimize it. The Substack page is much more complicated and, in a way, more “advanced”. The Wordpress page is faster and more responsive exactly *because* it is simpler: it just sends all the data to your browser and leaves it up to the browser to handle scrolling, searching, etc. And your browser runs natively on your CPU, and also probably had a lot more effort (and possibly more competence) spent on optimizing it.

(Edit: which is also how 3D games do it. Most of the credit for rendering those millions of triangles 60 times per second, goes to the video card hardware. The game’s job is mostly to *not get in its way* and to make sure that the data for the next room is ready to be loaded into the video card’s memory by the time the player enters that room.)

So then the remaining question is, *why* do website developers nowadays so often insist on doing things the hard way, when the result is so obviously inferior? That is more a psychological question than a technical one, so, Scott, this is actually your area of expertise...

Expand full comment

As someone who works with embedded systems that rely on updates happening every 2ms without fail, building a responsive, fast UI is absolutely possible using the same techniques (i.e. hard real-time operating systems) but takes far too much effort to make sense for everyday computing. Most computing that interacts with humans is on a "best effort" basis. Any effort put into optimizing UIs beyond "usable" is going into luxury territory. Nice to have but maybe nobody wants to pay so much for it. Although, I do hear Apple runs their UI updates on something close to real-time priority and that's why their UI feels relatively better compared to the competition. So maybe people are willing to pay the premium?

Expand full comment

There might be very good reasons not to do it, but if other people are like me you might consider making that finalist list public - knowing you didn't get on a list lets you close that tab and free up some RAM.

Expand full comment

I believe the main reason games apparently work faster than other software is that it's a requirement for them. If something can't be rendered at 30 fps, it's either optimized or removed. As a result, you end up with whatever you _can_ do in 10-40 ms. On the other hand, in most other types of software there is no significant difference between the user waiting half a second or a full second for a relatively rare operation.

On a more technical level the things that games do 60 times a second usually only rely on in-memory data. Most real-world operations on the other hand involve disk operations, network requests, RPC to other services etc. It is not impossible to optimize, but it's not easy. I'm working on a service that for every request calls ~40 backends (secondary services), runs several machine learning models and a metric shitton of business logic. It usually finishes within 30 ms.
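To put the first paragraph in code: a game's frame loop has a fixed budget, and whatever happens inside it has to fit. This is just a skeleton sketch in C++ (the budget constant and frame count are arbitrary, and the actual update/render work is elided):

```
#include <chrono>
#include <thread>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::milliseconds(16);  // roughly 60 fps

    for (int frame = 0; frame < 300; ++frame) {  // arbitrary number of frames
        const auto start = clock::now();

        // Update and render here, touching only data that is already in memory.
        // Anything that might block on disk, network, or another service is
        // done elsewhere (loading screens, background threads, precomputation).

        const auto elapsed = clock::now() - start;
        if (elapsed < frameBudget) {
            std::this_thread::sleep_for(frameBudget - elapsed);  // wait out the frame
        } else {
            std::puts("missed a frame");  // shows up as a visible stutter
        }
    }
}
```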

Expand full comment

Based on the recent discussion about the blog layout, I'm going to plug a browser extension I created to help make the site better. It's called ACX Tweaks and it works on Firefox and Chrome-like browsers. Find it in the extension store or at https://github.com/Pycea/ACX-tweaks.

A few of its features:

- Can restore the old SSC theme

- Can highlight new comments

- Can add back the comment like button

- Adds keyboard shortcuts for comment navigation

- Fixes the stupid header that appears whenever you scroll up

Hopefully this makes the site more palatable for some of you, and let me know if there are any other issues with it that you think can be addressed.

Expand full comment

UX designers spend a lot of their time doing user testing or studying usage via analytics. User testing questions are things like: “show me how you would share this with a friend?” They aim for frictionlessness, obviousness, and simplicity. Analytics will show how much time people spend reading, how many shares and subscribers an article gets. Numbers must go up! Neither of these ways of designing would capture the sentiments described by your longtime readers who are nostalgic for the old, clunky site.

(Though, personally, I am nostalgic for LiveJournal so ...)

Expand full comment

Starting a program up will always incur some delay due to disk access. Programs often need to load a lot of libraries from the disk into RAM. (Note that SSDs are visibly better in this regard, program startup is so much faster than with classical HDDs.) Video games are no exception, they do not start instantly either.

On the other hand, displaying a command menu should not take long. If it does, either the framework is shitty, or the software is doing something over the network and tries to refresh some menu-related data from a remote server.

Expand full comment

This has been on my mind for a while, and I'm convinced I've got a stringent and nonobvious theory. But over the course of the last few days I got increasingly convinced that something in there is off, and maybe you can tell me ;)

I would really like to not waste any more energy on something that's really wrong.

Some people play weird status games in their head. They desire to 'win' them. And also, whatever the desire is about isn't the point. The point is to win the status game. So the goal is not the 'real' desire. It is extrinsic motivation that makes us pursue this desire. This kind of desire wouldn't exist without other people (or the imagination thereof in whatever form == the Other). I would like to call this social desire.

Some people (if I had to bet, most people) do sometimes put in effort for things that don't let them win any status games. They simply want whatever the desire is about.

Please note that this includes all desires that aren't social desires. I would like to claim that those come from within your mind and your brain (as the two are pretty intertwined).

Let's take a look (in no particular order):

* hunger

* thirst

* libido

* desire to avoid pain

* drug addiction

* aesthetics

Please note that none of these, in their most basic form, happens for status reasons.

Obviously, there are evolutionary reasons for most of these desires. But that is not why you desire them in a given moment. We desire these because *our body/brain/mind demands it*. (This distinction is very important!)

As for aesthetics, I'd like to claim that it is the result of some weird feedback loop in the brain, and therefore that, in the very moment of experiencing something as beautiful or not, it too comes from within the brain.

And because these desires are independent of other people, they are intrinsic desires. Real desires. I'd like to call these personal desires.

Most desires are partly personal, partly social. Like eating 'good food' when you are only slightly hungry. Or you want to learn something about butterflies because their beauty fascinates you, and now you can't stop, because otherwise you'd be a loser who failed.

A well-functioning society supports "helpful" (for the society/the other) desires, thus leveling up your status and giving you resources. This strengthens the community.

Therefore you would be a loser if you failed. This means that the more your goals align with society's, the harder it becomes to have a purely personal desire. In fact, nobody has a purer personal desire than a homeless person wanting heroin (no status attached, etc.).

So, what to take from here?

What is healthy?

(This is my opinion, feel free to argue)

People need personal desires; that's what makes us individuals.

We don't exactly *need* social desires (that's why they are looked down upon), but they are healthy both on an individual level (society provides resources for playing the game) and on a societal level (the rules make us play together rather than against each other).

I don't know what the healthy ratio is.

Expand full comment

Programmers are as lazy as they can get away with, no real mystery there.

Expand full comment

Supplementing with Citicoline cleared up my brain so much within a week that, if I had been around when Scott did his survey on nootropics, I would have rated it "10 -- life changing". My cognition, mood, memory, reading comprehension, creativity -- all skyrocketed so high I wonder how I ever got things done before that.

Reading studies and reports, I don't see anyone reacting to Citicoline that well. Nobody is taking it and becoming superhuman. What's going on?

My first thought is that normal people feel like this *all the time*, and I'm just deficient in something or otherwise "damaged" in a way that Citicoline can fix, but it wouldn't further buff anyone who isn't "damaged".

Could this be true? Or is it something else?

Expand full comment

Most "slow" UI experiences in modern software are the result of bottlenecks other than "how long does it take to render these pixels", such as "how long does it take to get this data from the server (which relies on a poorly optimized database query)", or some similar failure mode.

Expand full comment