254 Comments

Rather than political effects or the mechanical effects described herein, I think there are also effects relating to ideology: amateur scientists don’t exist as much because people don’t believe amateur thinkers can provide value anymore, and so they don’t even try. This is distinct from the aspect of only believing credentialed figures when told things, though it is related.

There are strong arguments to be made that a number of scientific fields are wrongheaded in some fashion. In the 1980s, doctors were telling people to avoid SIDS by having babies sleep on their tummies, and now they insist quite strongly the *exact opposite is true.* Numerous “paradoxes” around infinities seem to indicate, at least to some, that maybe we are working from a false assumption or two there. Professional physicists have failed to reconcile GR and QM for decades.

The mechanistic model here doesn’t address the “revolution” problem of science: where some philosophical or other assumption is overturned by a “brilliant” new idea (which may be more common among amateurs than professionals - Einstein being a patent clerk is a good example).


One other place where this theory would predict a difference is in the artistic domains. Since we explicitly value novelty, we don't run out of easy melodies like we do easy scientific discoveries.

Music fits this theory in some ways (the most acclaimed artists are young) but not in others (to succeed you need to dedicate all your focus).

Unfortunately, these areas are subjective, so measuring decline is impossible. But if the decline is real, we would expect a steady fall in artistic greatness over time.


My problem with this model is that human genius should be roughly constant on a per capita basis if it's primarily genetic. If it's primarily environmental, then you should expect to be able to produce it in a way we haven't managed to. If it's a combination (as I believe), then you're waiting for the right environmental conditions in which genius can express itself.

However, this has a marginal effect. In the worst conditions, one or two geniuses will shine through. In moderate conditions, a few more. In abundant conditions, many. But once you have many geniuses, it makes sense to specialize. When you're one of twenty scientists in England, it makes sense to do five things and to make foundational but ultimately pretty rudimentary discoveries about them. When you're one of twenty thousand, it makes sense to specialize in specifically learning about... I don't know, Ancient Roman dog houses. This creates more and higher-quality knowledge. But it creates fewer towering geniuses.

Further, keep in mind you don't have to outrun the bear, you just have to outrun your competition. You can get a lot wrong, and so long as you're relatively more correct you'll do well. This also explains why amateurism has declined. A few hundred years ago I'd probably be able to make some serious contributions to a variety of fields. Now I can't - not because I know those fields any less well or have less interesting thoughts about them, but because I no longer contend with a few scattered amateurs. I have to contend with several thousand people who have spent their entire lives studying whatever it is as a full-time, professional job.


This all seems reasonable as far as it goes, but maybe the impression that we are producing fewer geniuses nowadays owes more to contingent, parochial-to-humans facts about the sociology of fame assignment within groups (as hinted at briefly in point #3) than to great feats of insight, or to what makes for good examples of creativity or impactful problem-solving.

In the forager analogy, the other foragers considering the problem of who finds good fruit sources can only consider foragers who came to their attention in the first place, and that could be due to reasons other than the actual fruit-harvesting (especially if the impact of the fruit-harvesting is as hard to quantify in isolation from framing as that of scientific genius).


I think there is some evidence supporting the idea that ML researchers make breakthroughs at younger ages. The classic example would be Ian Goodfellow, who invented GANs while a grad student. Also, the Turing Award winners - LeCun, Hinton, Bengio - all did their seminal work when they were much younger.


I don’t buy it. This assumes the subset of the space that’s been searched is a significant fraction of the total space (even if you just consider the “easy” subset). If it’s small, you can always just move slightly to the frontier of the set and find new low-hanging fruit. There’s no reason a priori to assume that this region should not be huge.

In my area, theoretical physics, I see plenty of interesting research problems that are no more difficult than problems a generation or two ago. In many cases, the problems are easier because we have much more powerful tools.

I do, however, see the field unable to pay grad students, unable to get grant money relative to other fields, hemorrhaging good students out of academia, trapped in a nightmare of academic bureaucracy, and with an increasingly large number of outright crackpots.


Scott, I think this model has less explanatory power than your previous* model, because it fails to account for discoveries which make other discoveries more available. For example, had Newton invented calculus but not the laws of motion, this would have reduced the depletion of the forest in that area, because some things which *could* be discovered without calculus are much easier to discover with calculus. Maybe you could throw in something like builders (teachers in real life) who build roads which make other areas easier to reach?

The point of this is that innovations in whatever makes things easier to understand - educational psychology, say (if it's effective at making better learners at scale, which I don't know) - would reverse this trend, and the model should have something to reflect that.

*https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/


Plus, some of the early geniuses may well be “names associated with” rather than “sole inventor of.”

For example, despite improvements in the history of science, I bet there were still some husband and wife teams where only his name is remembered (at least in popular culture).

Or Darwin: clearly his ideas grew out of the shoulders of the giants upon whom he was standing; that’s why other people were able to come up with them as well. But we don’t remember the names of those other guys. Similarly for Newton/Leibniz: sometimes the genius halo grows more out of our desire to have a single historical hook on which to hang our story of scientific advances than out of a deep understanding of the scientific process.

And if our perception of past genius is distorted by the lens of history, then our comparisons with current geniuses will be less accurate.


This model seems a bit oversimplified in two important ways.

1. Ideas don't really "deplete" like this. Say you come up with some good ideas around factoring large numbers into primes. Someone else invents the computer. A third person puts them together and gets RSA. All three of those are good, valuable work, but I wouldn't say the third idea was "further out" than the first (in terms of how long it would take to get there). It was just gated on the computer.

Lots of ideas are like this -- simple, but dormant until the other necessary ingredients are ready. (A toy sketch of the RSA example follows at the end of this comment.)

2. The campsite "moves" over time. A whole lot of our cognitive technology is encoded deep in our language, tools, norms, etc., and isn't fixed year over year. Even if today's people and yesterday's people could travel the same distance on average, today's people would still be biased to discovering new things -- just by virtue of starting off somewhere else.

Some of this technology is more literal: computers are something like a bicycle in this metaphor. The early astronomers were analyzing data by hand!
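
Here is a minimal sketch of that combination, with toy numbers I made up (real RSA uses primes hundreds of digits long):

```python
from math import gcd

# Two "old" ideas about primes...
p, q = 61, 53
n = p * q                    # 3233: easy to compute, hard to factor at scale
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime to phi
assert gcd(e, phi) == 1
# ...plus the computer (modular inverses are impractical by hand at real sizes)
d = pow(e, -1, phi)          # private exponent

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg  # decrypting with d recovers the message
```

Neither ingredient was "far out" on its own; the distance was in the combination.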


Machine learning is definitely still one of the low-hanging-fruit areas. In this case, you can turn a drug-discovery ML system into one that can discover VX nerve gas and a whole new, exciting range of chemical weapons just by inverting the utility function....

https://www.nature.com/articles/s42256-022-00465-9


(Forgive me if this point has been made already, I'm writing this comment quickly)

I've been thinking about this a bit recently because I'm trying to write a piece about a related topic (and I listened to this interesting BBC documentary https://www.bbc.co.uk/programmes/m0015v9g on the slowing of science and progress). There's another mechanism which you don't model here: in the foraging model, finding fruit only makes it harder to find new fruit. But in science and tech, a discovery or invention makes future discoveries or inventions easier.

For instance, a wood-handled flint axe is a combination of two earlier inventions, the stick and the hand-axe. Newton's observations about gravity are possible because of the earlier invention of the telescope. The invention of the iPhone 13 is possible because of the earlier invention of the [various things, transistors, touch screens, etc].

So there's a countervailing force: individual discoveries become harder *given a base of zero knowledge*, but there are also new discoveries that become possible because they are simply combinations of earlier discoveries (or new technologies make them more accessible).

In your model it might be more like you're loggers, rather than foragers, and cutting down some trees allows access to new trees, but somewhat further off? I don't know what the equivalent of height might be, but perhaps strength.


> Let’s add intelligence to this model. Imagine there are fruit trees scattered around, and especially tall people can pick fruits that shorter people can’t reach. If you are the first person ever to be seven feet tall, then even if the usual foraging horizon is very far from camp, you can forage very close to camp, picking the seven-foot-high-up fruits that no previous forager could get. So there are actually many different horizons: a distant horizon for ordinary-height people, a nearer horizon for tallish people, and a horizon so close as to be almost irrelevant for giants.

Doesn't help that there used to be [a tribe with lots of seven-foot-tall people](https://slatestarcodex.com/2017/05/26/the-atomic-bomb-considered-as-hungarian-high-school-science-fair-project/) but [it has since been mostly exterminated](https://en.m.wikipedia.org/wiki/The_Holocaust).


Another interesting read, thanks! :)

On a single small point : "Since a rational forager would never choose the latter, I assume there’s some law that governs how depleted terrain would be in this scenario, which I’m violating. I can’t immediately figure out how to calculate it, so let’s just assume some foragers aren’t rational".

Isn't there a question of personal preferences and aptitude? Sure, it'd be more productive to go over there, but I happen to really like it here, and foraging this particular ground makes me feel competent, while going over there is arduous for me.

Hence even if it would be more 'rational', I'm not going to do it. 'Irrational' is an acceptable descriptor for that behaviour in economics, but it may not be quite 'irrational' in everyday parlance; it's just optimizing for different objectives.


Let me give an epistemic reason for the stall. There’s a clear barrier to recent progress of the traditional kind, which is (to use the jargon of my colleagues at the Santa Fe Institute) complexity.

Complex systems are not amenable to the Francis Bacon style “vary and test” experimental method. We’re learning a huge amount but returns to experimental methods of the causal-control kind are hard to come by. Taleb is a good example of a person — a real, no-BS practitioner — who recognized many of the same things the SFI people did. In a funny way, so was David Graeber.

Examples of complex systems include the human mind and body - hence the scant progress in getting control of (say) depression or cancer, and why the human genome project fizzled after we found the 1-SNP diseases. Much of econ is similar (IMO the RCT era is overblown). ML is CS discovering the same thing.

They’re hard problems that will require a new set of tools, and even a new Francis Bacon. The good news is that I think we will crack them. We stumbled on this world in the mid-1980s, but IMO didn’t get serious until the mid-2000s.


What about education? I think the day-away/teleport thing might break on this one. The tribe writes down descriptions of very distant places in very exacting detail, and if a student spends ten years studying them they can get there instantly, versus say two hundred years if they tried to go it alone. Or do we define the day as what is possible to achieve even with education?

The other interesting thought is artifice. One day the tribe invents a car - I mean that literally in this analogy, although maybe a microscope, stilts, or a shovel is a better fit. The mere addition of tools that let you reach greater depths or heights causes the depleted land to have new bounty. Some of those technologies exist farther away.

I like this a lot overall. I have a similar analogy about lighthouses that I use.


Thanks for the great write-up.

In some sense, a lot of progress in science can be thought of as "getting closer to the truth" rather than "finding new terrain". "Getting closer to the truth" comes from a change of perspective, and that change mostly comes from new technology or observations - like the Michelson-Morley experiment, which gave rise to relativity, or the experiments that led to quantum physics. The age of the scientists is generally irrelevant. Physics was many hundreds of years old when Einstein and Dirac, young scientists, made their discoveries. Although they may in themselves be giants, it is difficult to argue that such giants don't exist at all today in terms of sheer intellect and hard work.

Hence, I feel that point #5 and confirmation bias can explain a lot of this. People learn a paradigm and try to stick very hard to it, until new technology makes experiments possible that clearly contradict it, causing the paradigm to change. The first scientists to discover the changed paradigms that accommodate the new experimental results become heroes.


Let's revisit this issue when you've got more data about the scientists, so that you can concentrate on that instead of elaborating the already-clear forager metaphor and then shrugging your shoulders over the real question.


This is really pessimistic without the last part - that at some point, the foragers manage to set up camp in another part of the forest, acquiring untold riches at first, then letting others set up even further.

This is what happened with machine learning, and with biotech (next-generation sequencing, anyone?); in fact, a lot of science is about this kind of camp-setting. "Standing on the shoulders of giants" - and it's giants all the way down/up.

There is a huge difference between having to figure out calculus from first principles, and learning it in high school then moving on to something cooler. And then you can have the computer calculate your integrals for you, with calculus relegated to your "maybe figure it out someday when it's needed" pile. Knowledge is a tool for acquiring further knowledge.


As I say in the original essay on genius, I think it's true that "ideas are getting harder to find" (what you call the "Low-Hanging Fruit Argument"). It's also empirically supported by looking closely at things like agricultural yields. The question is just whether it fully explains the effect, or even most of the effect, and there are reasons to doubt that. For example, the two reasons I give in the original essay to be skeptical are:

(a) if the lack of genius (or let's just say "new ideas") is due solely to ideas getting harder to find, then it is an incredible coincidence that, as the effective population of people who could find such ideas exploded to essentially the entire globe (with the advent of the internet and mass education), ideas got harder to find to the exact same degree. In fact, this looks to be impossible, for there should have been "mining" of the idea space in order to quickly exhaust it, which would have triggered a cultural golden age. It is on this question that the original essay starts, but I've never seen anyone address how changes in effective population should have led to more "finding" - and that doesn't look like what we see.

(b) “ideas are getting harder to find” seems especially unconvincing outside the hard sciences, in domains like music or fiction. I actually still think there is some truth to it - you can only invent the fantasy genre once, and Tolkien gets most of that credit. But overall it seems obviously true that fictional stories aren't as directly "mineable" as thermodynamic equations. And yet, again, we see the same decline in both at the same time, so the explanation needs to extend beyond the hard sciences.


The low-hanging fruit argument seems to me very probable, and I love the metaphor with real fruit in it!

I would like to add a small (and quite optimistic!) additional hypothesis concerning the decline in the observed frequency of geniuses, this one related to the growth of the population and its level of education.

Suppose we recognize someone as a genius when he or she clearly surpasses everyone else in the field - that is, the evaluation is mainly relative, made by comparison with what other people in the field are producing at a given moment. In that case, the fact that the population and its level of education are increasing must also very significantly increase the number of people working in any given field. And then, statistically, the probability that the most talented person in a field is much more talented than the second most talented person in the same field is probably much lower than before (the quick simulation below illustrates this).

Therefore, we would have difficulty recognizing contemporary geniuses partly because there would be many people doing extraordinary things in general, whereas before there were a few who stood out.
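
A rough simulation of that claim (my toy model, not the commenter's: talent as i.i.d. standard normal draws):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in [100, 10_000, 1_000_000]:   # number of people in the field
    gaps = []
    for _ in range(50):
        talents = rng.standard_normal(n)
        second, best = np.partition(talents, -2)[-2:]  # two largest values
        gaps.append(best - second)
    print(f"n={n:>9,}  mean gap between #1 and #2: {np.mean(gaps):.3f}")
```

The mean gap shrinks as the field grows (slowly, roughly like 1/sqrt(2 ln n) for normal talent), so the best person stands out less and less.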


I like the rough model, but I'd point out that there are certain topological assumptions being made which maybe don't apply. If 'places of insight' were arranged in some Euclidean geometry, then your theory holds.

But if we generalize it to "finding new knowledge requires _either_ walking new ground _or_ exceptional talent" (which I think is totally fair), we might ask whether it's possible to walk new ground via nontraditional approaches. If the _only_ dimension we consider is 'angle and distance from the base camp', i.e. the territory is a 2-d Euclidean grid and we've mapped out where everyone has and hasn't walked, then it becomes much less likely you will _find_ new ground immediately around the camp.

But if the number of dimensions is so high that most people don't even _see_ a bunch of dimensions, then we might actually expect _creativity_ to lead to insights more readily than intelligence.

Or, if technology+economics have changed in such a way that someone might have 10 different mini-careers and still acquire sufficient wealth to do as they please, this might _also_ be 'new territory' where discoveries become easy. So we might expect future discoveries to be more likely from, say, a startup employee turned venture capitalist turned amateur horticulturalist turned poet turned botanist, who synthesized a bunch of experiences that many other people had had _individually_, and yet nobody had yet had _collectively_.

The fruit-gathering analogy might work if someone is the first person to circumnavigate the camp at a specific radius, and to spend at least a few weeks at different angles at different times of the year. They might notice some seasonal continuity between plants growing only at that radius, which might only be observable to someone who had spent the right amount of time in all of those places. In terms of ground, they haven't covered anything new. But if we include time in there, then yes, it's like they _did_ walk on new territory.

So I like the theory if we generalize it as "to maximize your chance of discoveries you have to walk on ground nobody else has walked on before", but it's worth asking whether "the space of being an academic researcher" being extremely well-trodden really means there's no low-hanging fruit left in dimensions none of us have even considered looking in.

Like, for all we know, just breathing weird for like 7 years straight could let you levitate and walk through walls. How would we know if this were true? Suppose someone discovered it 10,000 years ago, and they did it, and everyone was like 'holy shit, that's crazy', and they wrote stories about it, and today we dismiss those because they are obviously absurd. Are _you_ willing to spend seven years chanting some mantra on the off chance that maybe it'll let you walk through walls? I'm not. Probably most reasonable people aren't. That's some unexplored territory right there! But something tells me it probably isn't worth the effort.

And yet people like Wim Hof exist. This tells me there's probably a ton of low-hanging fruit still around, but it'll be discovered by eccentric weirdos.


I'm not sure about "taking more time to reach the frontiers of knowledge". Bachelor's degrees haven't gotten steadily longer over time, and previous key discoveries get built into the curriculum. The length of postdocs has grown (particularly for those eyeing an academic career), but that has more to do with the often enormous quantity of work required to get your Nature Something "golden ticket" paper. Once you start grad school you're basically teleported to the frontier. People learn and adapt quickly.

I think genuine breakthroughs happen on a more regular basis than people think, but we've pushed the depths of knowledge so deep that they're not necessarily recognizable to an outside observer.


I really enjoy analogies, so thank you for writing up this very thoughtful and entertaining model. I think there's another thing at play, which is distraction. I'm not as talented a writer as you, so instead of clumsily trying to extend the analogy, I'll tell some stories about my own medical school class.

I went to a well-regarded medical school with lots of brilliant and talented classmates. I will say there was a big difference in how excited each of my classmates was by discovery, and interest in discovery was largely orthogonal to pure intellectual horsepower. Some of the smartest people I've ever met had exactly zero interest in research--they jumped through the appropriate hoops to get residencies and fellowships in lucrative fields and now enjoy lives where they are affluent, enjoy high social status, and do work that they find interesting enough. I think some of these folks are "potential geniuses" who made a rational choice to take a sure thing (a career as an orthopedic surgeon) over something more volatile (a career doing research in the life sciences).

To give an example of the same effect, working slightly differently, a friend of mine told me that he had taken a job as an investment banker right after college, and then was laid off before he could start working due to the financial crisis. He came to medical school as a backup plan, and is now an extremely talented epidemiologist.

Final story is about a friend who, while he was a post-doc (MD PhD), realized it made much more sense to moonlight as a doctor and pay other post-docs (who were PhDs and didn't have the more lucrative option of taking care of patients) to execute his experiments for him. This was kind of a boot-strappy way of leveraging the resources around him. But I tell this story because he had to make science a passion project funded by his actual lucrative career, which was as a physician.

What I take away from these stories is three things:

1. It doesn't really make a lot of sense to study the sciences (especially at a fundamental, basic level that is most likely to create groundbreaking discoveries) if what you care most about is a comfortable or happy life. True, the rewards are enormous for the right-most outliers, but most people work very hard for tiny material rewards, when they're usually clever enough that they could have nicer lives in most other careers.

2. Having a successful career as a scientist is HIGHLY path-dependent. You have to have the right sequence of experiences that give you more and more momentum, while also not having experiences that pull you off your scientist path onto more lucrative or comfortable paths. This is a point that's been made MANY times before, but I wonder how many potentially great thinkers over the last 30 years have pursued careers in management consulting, banking, dermatology, or orthopedic surgery. Obviously these people can still make great contributions to society in the roles that they take, but they are much less likely to expand the frontiers of human knowledge.

3. We still probably undervalue most research, as a society. Because the potential payoff is so uncertain, individuals have to bear a lot of the risk of these careers. There's an enormous opportunity cost to getting a PhD and doing a postdoc, and even if you are one of the few successes who gets your own lab, it's still not a very materially rewarding situation. So what you end up with is a) a lot of talented people who bail on their science careers for things that are more of a sure thing, and b) a lot of people who never consider a science career because it represents a high-risk, low-reward scenario compared with the other options in front of them.


For the sake of argument, let's grant that your argument as presented is 100% correct. Even so, outsized focus on the political aspect is right and proper because unlike the mechanical causes we have some small hope of changing the politics. Instead of "there's no mystery to explain here" the takeaway could be "we need to run a tighter ship of Science, the deck's stacked against us".


Perhaps a more apt analogy for science is not picking fruit, but planting fruit trees. Planting a fruit tree suggests a scarce return in the short term, but the returns can expand organically in two ways: as the tree grows, and as the seeds from the tree spread to sprout other trees. So, a single planted tree has the potential to spawn an entire ecosystem. Similarly, knowledge begets knowledge.


"machine learning should have a lower age of great discoveries."

Possibly controversial opinion, but machine learning is a technological field, not a scientific one... or rather, none of the science is novel. The advances in machine learning are a combination of the scale afforded by modern hardware, vast amounts of data, and statistical and curve-fitting theories that have been around forever. The big issue with regarding it as a scientific field (for me) is that they aren't coming up with new principles as such; they're coming up with a set of techniques to accomplish tasks. And in general they have no idea how these techniques actually accomplish those tasks -- the loop is generally suck-it-and-see; hence all the pseudoscience surrounding it and the baseless claims that brains work like neural nets, or that sexual reproduction works like dropout, and so on.

Another factor is that to make a discovery in machine learning, you need to spend a lot of money on compute, and a lot of money on data (or have an agreement with some company that already has tonnes of it) -- so this also favours established people.

Finally, advances in machine learning are consistently overstated. GPT-3 already absorbs more content than any human has ever absorbed, and people are amazed that it can muddle through tasks that are simple for a human child working with a fraction of the compute or training data. Also, there's a bit of the Emperor's New Clothes about this stuff. One of the useful things about human cognition is that you can tell a human "hey, there's this interesting thing X" and the human can quickly assimilate that into their model and use it. For example, I can give you a slightly better method for multiplying numbers, and you can apply it pretty much instantly. This is what "learning" usually means for human cognition. You can't explain to GPT-3 a better method of multiplying numbers, and there's no mechanism on the drawing board for how to do it. Sorry, this is a bit of a rant, but in my real life I'm surrounded by people who think GPT-3 is basically a human brain, and it drives me nuts.

I think you need a different model for science versus technology. As you say, physics seems to have stagnated, but our technology continues to advance; cosmology continues to advance, but spacefaring technology regresses; scientific knowledge about birth advances, but outcomes of birth, in terms of morbidity and cost, decline. And in software and engineering, the science continues to advance but the technology declines (see Collapse of Civilization by Jonathan Blow).


The area available to forage is πR².

I'm pondering whether this is related to scientific discoveries too. Since geography varies, and science fields vary also, I think there's some merit here. One forager may specialize in muddy seeps, whilst another focuses on the banks of larger rivers, and another robs the nests of cliff-dwelling birds. Each would find different resources: one comes back with a fish, another with cattail roots and duck eggs, another with swallow eggs and nestlings. Likewise in science, someone plays at melting things in the furnace, someone plays with light and lenses, another ponders infinite series.
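
One small corollary of the πR² point (my arithmetic, not the commenter's): the fresh land opened by one more hour of walking is a ring whose area grows linearly with the current radius.

```python
import math

# Area of the ring opened by extending the foraging radius from R to R + 1.
for R in [1, 5, 10, 20]:
    ring = math.pi * ((R + 1) ** 2 - R ** 2)   # = pi * (2*R + 1)
    print(f"R={R:>2}: new ring area = {ring:.1f}")
```

So a camp that pushes its frontier out even slightly gains more new territory the farther out the frontier already is, though the travel cost grows too.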


"Some writers attribute the decline in amateur scientists to an increasingly credentialist establishment"

I suspect that one reason for the credentialist establishment is that it takes many years to reach the state of the art in knowledge, and non-rich people can't afford to spend that many years studying rather than working. The longer it takes to reach state of the art, the more money has to be spent getting that student to the state of the art, and the greater the need for a bureaucracy to decide who gets it and who doesn't - and bureaucracies run off credentials.

One reason I think that the UK is overrepresented in scientific research is that our education specialises earlier than most other countries, which means that, at the expense of a broader education, Brits can reach the state of the art several years earlier than Americans (the average age at PhD is 29 vs 33).


If true, this is an excellent argument for letting gifted kids specialize earlier, while the whole educational community is pushing for a longer period of general education. There are other considerations here - maybe children with a more general education are more likely to lead happy lives and it's worth sacrificing a few potential geniuses to the gods of mediocrity to make that happen.

But if so that just brings us back to Hoel and the idea that an education that is personalized and one-on-one is just vastly superior to our system at cranking out revolutionary thinkers.

I guess if you find yourself burdened with precocious progeny, the strategy is: get 'em young, find someone who can cultivate their strengths, and try to keep the truancy officer away long enough that they aren't forced to spend 6 hours a day proving they're reading books they already read.


> I can’t immediately figure out how to calculate it, so let’s just assume some foragers aren’t rational.

I can't resist that remark. Nerd snipe successful. First, the short answer: after 9 hours of travel, you only have half the foraging time (3 hours instead of 6), so you need to gain twice as many points per time unit to compensate. So if the area at 6 hours' distance is 50% depleted, the area at 9 hours' walking distance should be 100% virgin to offer the same expected total value. Only after the depletion level in all areas within 9 hours' walking distance increases does traveling further become worthwhile.

Compare with the early explorers: Nobody will travel when the area at distance 0 has full value; it is only after the depletion level at close distance starts to become noticeable that people will decide to venture out (and even then, they'll travel as little as possible if they want to maximize their gain).

The more general computation: assuming we are in a state of equilibrium, let D(x) be the depletion level at x hours from camp. After walking x hours and gathering for 12 - x hours, you gain (12 - x) * 100 * (1 - D(x)) points. In equilibrium this should be constant, so (12 - x) * (1 - D(x)) equals some constant C. Then 1 - D(x) = C/(12 - x), i.e. D(x) = 1 - C/(12 - x). Given the assumption that D(6) = 0.5 (you need to make an assumption somewhere), you find C = 3 and hence 1 - D(9) = 1, i.e. D(9) = 0. Traveling further than that, the formula makes D(x) negative, i.e. the area would need an expected value of more than 100 points per hour to be worth traveling to. In a model where D(x) must lie between 0 and 1, the depletion level gradually decreases as you travel further until you reach D(x) = 0, at which point exploring further gains you nothing. Note: D(x) = 0 when x = 12 - C. You can measure C by checking the depletion level at any point where people do forage; for example, if the area at 0 hours' distance is 95% depleted, then 1 - 0.95 = C/12, so C = 0.6, and people will travel as far as 12 - 0.6 = 11.4 hours to forage for 0.6 hours: 0.6 * 100 = 60 points, the same as the 12 * 5 = 60 points from staying at camp. Chances are that far before this point it'll become valuable to invest in ways to travel further.
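
A quick numeric check of that equilibrium (12-hour day, 100 points/hour on virgin land, and the commenter's assumption D(6) = 0.5):

```python
C = 3.0                       # fixed by assuming D(6) = 0.5

def depletion(x):
    """Equilibrium depletion level at x hours from camp."""
    return 1 - C / (12 - x)

for x in [0, 3, 6, 9]:
    gain = (12 - x) * 100 * (1 - depletion(x))
    print(f"x={x}h  D={depletion(x):.2f}  total gain={gain:.0f}")
```

Every distance yields the same 300 points, and D(9) = 0: in equilibrium the land beyond 9 hours is untouched, just as the short answer says.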


I don't think this adequately explains why people who made great discoveries when they were young in 1900 didn't keep making great discoveries when they were old and bring the average up. One needs to explain 1900-people losing discover-ability as they aged, while 2000-people gain it.

One unmentioned thing can help explain this: the extension of healthspan. The mind is the brain, which is just an organ in the body, and if the body is generally dysfunctional the brain will probably not be in the best condition either. Being in great health instead of poor health probably at least dectuples the probability of some great discovery. The age-related cognitive decline curve has probably shifted a lot due to the extension of healthspan.


I think there's something foundationally missing from this model. Very specifically - what about cranks and weirdos who were retroactively not cranks and weirdos?

More specifically - the computer science greats (Dijkstra, Turing, etc.) all did their foundational *mathematical* work well before they were household names (at least among people who are relatively intelligent and science-aware).

There's a great revolution that happened around 1980 that suddenly made computer programming, computer software - and thus *computer science* and all of its offshoots - massively more high-status and important, because the Fast Transistor Microprocessor was starting to allow more and more things to use Software.

Without the Fast Transistor Microprocessor, none of that work would be lauded as the genius it is (Turing, rather famously, was criminally prosecuted rather than celebrated) and would instead be an esoteric curiosity for mathematicians.

I get the feeling that with the amount of Science Infrastructure we have in place today, absent some New Technology or Paradigm that enables a lot of work that was done previously to matter in a new way, or enables new work - most people seeking truth are going to be happily chipping away in the truth mines for proofs or evidence for esoteric effects that aren't super relevant to today. We will lament their lack of progress in world changing and truth seeking for decades.

Suddenly - something will change, some new technology will get invented, or some new mathematical, computational, or scientific tool will become widely known or available, and suddenly proofs from 1960s mathematicians or earlier are no longer esoteric curiosities - they're the world-shaking foundation of the modern universe.


I keep thinking about the time period of the Buddha, Jesus, and Mohammed. (I know that's quite a range of time, but in the course of human history, it's not so much.) Was there just a sweet spot around then for religions? Like, there was enough 'ambient philosophy' around that new and compelling religious discoveries could be made? (Although it's not what I actually believe, for this purpose by "discovering" I mean: there are certain religious ideas that can be dreamt up which enough people will find compelling that they gain a real foothold. Discovering is finding one of those ideas.)


Weren't early scientists amateurs because science wasn't a profession you could earn a living in?


Steven Johnson has a similar concept he explains in his book, Where Good Ideas Come From: The Natural History of Innovation, called "the adjacent possible." His analogy is that every new discovery opens a door into a new room which contains yet more doors. Each new discovery opens paths to new discoveries.


I have been a bit confused by the premise of this conversation on genius and the perceived implications (concern?) that it seems to be bringing up.

My (oversimplified?) understanding of Hoel's original piece:

1. The world used to produce "geniuses" (towering giants in a single field, or multi-disciplinarians who made large contributions across many fields). Some of them even made their contributions in their spare time!

2. We don't do this any more

3. This is bad/concerning

4. How can we solve this problem?

5. Aristocratic tutoring?

Isn't this essentially specialization playing out? The reason this doesn't happen anymore is that, even for people with the same natural talent as past geniuses, comparative advantage pushes them into the deep specialization now required to make a contribution in nearly any field. Instead of being a problem, isn't this a natural consequence of all the efforts of those who came before? As Scott's analogy points out, hasn't all the low-hanging fruit been picked?

That strikes me as a much simpler answer than a lack of aristocratic tutoring.


Interesting article. I think one element this fails to take into account is the general category of surprise/accidental discoveries. Like Kuhn's paradigm of scientific revolutions on a small scale.

To put that in terms of your example: what if one day little Jimmy the forager trips and lands face-first on a rock, and realizes it's edible? It doesn't matter then if he is experienced, smart, old, or young.

Scientific progress is not necessarily linear?


I think the conceit that knowledge is dimensional is flawed in a number of ways, not least the ways others have already brought up, such as that historical ideas make entirely new ideas possible.

I'll observe that somebody (Cantor) invented set theory. He didn't find a new space in the territory - he created new territory out of nothing.


Sounds solid to me. But I'll nitpick against the claim that "physics is stagnant". This is arguably true for high energy physics, but physics as a whole remains vibrant, largely by virtue of constantly inventing new subfields (which open up new foraging opportunities). See my DSL effortpost on the topic here: https://www.datasecretslox.com/index.php/topic,3007.msg91383.html#msg91383


I have enjoyed Scott's whole collection of posts around research productivity. I want to throw in another ingredient that I think should get more attention.

In most fields, having a research-focused career has gotten dramatically more competitive over the last generation or two. Intense competition can help motivate people to work harder and reach further, but it can also stifle creativity. I'm specifically thinking here about the need to publish and get grants, and how in highly competitive areas it's easy to shoot down an application or manuscript due to some weakness, even if there's something really interesting in it. It's super-extra-hard to come up with brilliant new vistas to explore when you simultaneously have to defend against a horde of maybe-irrelevant criticisms.

If this dynamic is important (not sure if it is), the only way I see to address it is to somehow collectively limit the number of people who have research careers.


Maybe amateur scientists are less common because our true leisure class is smaller? Even the children of oligarchs like Trump's kids pretend to flit around doing some kind of Succession thing, whereas in the past it was totally normal to own enough land to support your lifestyle and then go off on a hobby forever.


You are never supposed to forage in the immediate vicinity of your camp. The area around your camp should be left alone for emergencies.


Machine learning might be an inherently complicated subject.

Evolution took Darwin many years because there was so much work involved.

Physics might have been essentially less complicated in the 1920s.

Generally, there is no clear formula for how complicated a field is, and that complication is not closely related to how old the field is.


I tend to respect my doctoral advisor, and that means I tend to respect the economists he respects, including his doctoral advisor and (presumably) the economists his doctoral advisor respected, etc.

What if "geniuses" are just the Genghis Khans of science?


In some ways the foraging analogy is apt but one thing it fails to capture is the inherent high dimensionality of the search space of scientific discovery.

Foraging primes mostly two- or three-dimensional intuitions, but high-dimensional spaces are a different beast, so relying on those intuitions can be misleading.


A couple years ago I presented a somewhat more abstract version of the low-hanging fruit argument that takes off from a 1992 article by Paul Romer ("Two Strategies for Economic Development"): Stagnation, Redux: It’s the way of the world [good ideas are not evenly distributed, no more so than diamonds], https://new-savanna.blogspot.com/2020/08/stagnation-redux-its-way-of-world-good.html.

That blog post makes up the second part of my working paper, What economic growth and statistical semantics tell us about the structure of the world, August 24, 2020, 19 pp, https://www.academia.edu/43938531/What_economic_growth_and_statistical_semantics_tell_us_about_the_structure_of_the_world.

The argument is about the relationship between the world itself and our cognitive capacities. Because the world itself is “lumpy” rather than “smooth” (as developed in the working paper; akin to “simple” vs. “complex”), it is learnable and hence livable. The American economy has entered a period of stagnation because the world is lumpy. In such a world, good “ideas” become more and more difficult to find. Stagnation then reflects the increasing costs of the learning required to develop economically useful ideas.


The 'rational equilibrium' for gatherers is that the depletion of any particular land should be the thing that makes gathering there equally good as gathering elsewhere. That is, (total time - travel time) * gather rate = constant. So if land 6 hours out is 50% depleted, land 9 hours out should be 0% depleted (so it's also worth 300 points); land 8 hours out should be 25% depleted (so it's also worth 300 points).
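
A compact restatement of that condition in symbols (my notation; 12-hour day and 100 points/hour on virgin land, as in the post):

```latex
(12 - x)\cdot 100\,\bigl(1 - D(x)\bigr) = \text{const} = 300
\quad\Longrightarrow\quad
D(x) = 1 - \frac{3}{12 - x}
```

which gives D(8) = 0.25 and D(9) = 0, matching the numbers above.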


Seems to me discoveries are not discrete objects that are gathered; rather, they're composites synthesized from existing states. This means each discovery adds a new state to the space from which increasingly complex blends of discoveries can be synthesized. It's more of a network effect, where every truth increases the possible combinations. E.g., discover the transistor, and other semiconductor devices become more likely to be discovered, and many other uses and devices are discovered through them, including the integrated circuit, etc. We're building a mountain, not digging a hole, so there is a bigger and bigger pile to draw from.
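
The arithmetic behind that network effect (my illustration, assuming each new result can be blended with any two or three existing ones):

```python
from math import comb

# Candidate pairwise and three-way syntheses as the pile of results grows.
for n in [10, 100, 1000]:
    print(f"{n:>4} discoveries -> {comb(n, 2):>8,} pairs, {comb(n, 3):>12,} triples")
```

The pool of possible combinations grows polynomially faster than the pile itself, which is what "a bigger and bigger pile to draw from" cashes out to.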


"many of the most innovative crypto people (eg Vitalik Buterin) seem young, but that could just be a “crypto is cool among young people” thing."

It seems to me these may not be unrelated. It's entirely possible that crypto is cool among younger people precisely because it's a field where younger people are not disadvantaged.


The model is wildly inappropriate. Science doesn't work like that at all. Scientific advancement almost never consists of finding a brand-new explanation for what has never had an explanation before. Generally, it consists of finding a *better* explanation in the face of (rarely) pure brilliant insight, and (almost always) the receipt of new and better experimental data.

For example, people had theories of combustion going all the way back to the Greeks, and probably back to Neolithic Man. The correct modern explanation was constructed by Lavoisier, replacing the prior "phlogiston" explanation, when new experiments on the oxidation of metals became available. Quantum mechanics replaced classical mechanics because of new data on spectra, not because nobody had any notions of how atoms worked -- they had ideas, some of which were of course wrong (or incomplete), but they fit the data then available; e.g. German organic chemists of the 1880s and 1890s had accurate and deeply insightful ideas of atomic valence, empirically derived, without the slightest idea of the structure of the atom from which it arose. Same with Newtonian gravity explaining the orbits of the planets, which replaced notions of perfect complicated nests of spheres *because* of the significant improvement in observational data on the planets brought about by the invention of the telescope. The notion of cells came about because of the invention of the microscope. Thermodynamics was formulated because careful measurements of heat input and output (undertaken because of the surging importance of the steam engine) produced new data. It's not that nobody had *any* theory of heat before Count Rumford observed cannons being bored in 1798; people had a working theory of heat that was simply wrong, or more precisely insufficient in the face of new data. And on and on. The first and most important engine of scientific advance is the invention of new instrumentation, which itself comes mostly from technological demands. When people are highly interested in new machines, to do new things (or do old things differently), then you get new measurements, new instruments, new data -- and, by and by, new theory. (Which traditional explanation is more than sufficient to explain the renaissance of "AI" ideas lately -- look at the massive technological demand for, e.g., better directions in Google Maps.)

The much better model of scientific progress is that it's a case of various areas of knowledge coming into sharper focus. We *think* we understand some area -- the movement of heat, kinematics, the way substances chemically combine -- and it works for a while, but then some new data cause us to sharpen our focus, understand things in more detail, and sometimes even replace the major struts and beams that underlie our understanding in general.

But this has nothing to do with any expanding circle of inquiry, and the metaphor that you have to travel greater and greater distances to find something new is wholly inapplicable. People have made stunning discoveries by simply looking where no one else thought to look (the cosmic microwave background), or considering a possibility no one else thought to consider (RNA catalysis). They didn't have to master some much larger amount of prior material than people who made similarly remarkable discoveries in an earlier age. Indeed, if anything, I'd say incremental discoveries at some far frontier on top of a huge complex pyramid of prior understanding are rarely deeply significant. Those are more the workaday nailing down of details that occupies the host of scientists in the 2nd and 3rd rank. It's discoveries that re-examine what "everyone knows" -- that velocities should add simply, that the flow of time should be the same for everyone, that if you mix two substances the resulting compound should have properties in between those of the components -- which prove enduring and powerful.

There have been plenty of times when people have observed a reduction in the pace of scientific advance, and concluded that new discoveries were just becoming intrinsically harder, cf. Albert Michelson's famous statement in 1894:

"The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote... Our future discoveries must be looked for in the sixth place of decimals."

...which, of course, was also famously amazingly and hilariously wrong, inasmuch as it was made by a professional and highly accomplished physicist a mere handful of years before astonishing discoveries were to shake the foundations of physics.

The progress of discovery is never smoothly monotonic. We get waves of progress, then periods of stagnation -- sometimes centuries long -- and in some cases we even lose ground. These things need no better explanation than human nature. The idea that if *our* time happens to be a period of slowdown or stagnation, this must be because of some intrinsic essential difficulty in advancement, and not because our social structure, say, has hobbled human creativity to a greater extent than prior ages, seems to me fairly narcissistic -- in the same sense as medieval monks placing us at the very center of the universe, and as the very origin of all its meaning; e.g. the orbits of the planets or the habits of wolves were as they were primarily *because* of how they affected human affairs. We're just not that special, not this species, and not this generation.


I'm not convinced. On the previous article I couldn't find the right words to make this comment, but I think I have them now:

Geniuses are interesting when they are uncommon. If they are rare, you are unlikely to hear about them. If they are common, then they aren't noteworthy. Since the time of Einstein, Bohr, Feyman, etc we've pushed hard to get students into STEM and so now STEM geniuses aren't a novelty.

---

Newton famously said "If I have seen further it is by standing on the shoulders of Giants." The underlying idea behind that is that his noteworthy insights were easier to find because of discoveries that came before him. Matt Ridley's book How Innovation Works talks about how most of the famous inventions with famous inventors aren't the product of some lone genius who saw something everyone else couldn't. Instead there were multiple teams working towards the same goal, but our society and legal system give all the credit to one team or person. Sometimes the teams were a single person, sometimes a group, sometimes it was an entire crew (eg Edison).

I think this concept - that some innovations make others easier to discover - is missing from your village metaphor. Maybe instead of picking fruit from trees, the villagers are picking berries from dense pricklebush. As you pick the berries, you cut away the vines, making it easier to access the further berries. On day 1, picking a berry 100m away might take one hour of pushing through dense and prickly brush. On day 105, it may be a leisurely 1-minute walk.


Over the past two years of the pandemic I noticed several odd behaviors of SARS2 spread and inter-variant competition that seemed to defy standard epidemiological models. With each new wave of VoCs we'd see strikingly different epidemiological behavior in different countries. I'm just a scientifically well-educated hobbyist when it comes to pathogens, but it seems like there were all sorts of green-field research activities and cross-specialty investigative opportunities that researchers weren't interested in following up on. Many times these phenomena went unnoticed, or, if they were noticed, they were explained away by NPIs and such. There may have been lots of low-hanging fruit out there, but either because current research is so siloed, or because the grant-making agencies give money to established researchers who have been investigating one thing for their entire career, this low-hanging fruit seems to have been ignored (at least for SARS2).


The model is too simplistic.

The world population in the 1800s, say, was 1 billion people. There are 7.5? 8? billion now. From a pure warm-body standpoint, that would be 7.5 to 8 times more competition.

But this is the 100,000 foot level.

Science is very much a luxury activity. It rarely, if ever, provides tangible first-person benefits to the scientist in their own lifetime. The only exception is our present period (more on that later).

So even if we look at the 1 billion, how many are actually able to take on the luxury of being a scientist? 10,000? less?

Now, luxury/lifestyle vs. profession. The ability to make a career out of being a scientist is probably no more than about two generations old. Prior to World War 2, I am fairly sure all scientists were inventors/kooks/the curious/etc. - there just weren't jobs to be a scientist per se. There were jobs to be an academic, but not all academics do science, and academic mainstream thinking and science have, at best, a fraught relationship.

So is science today better with professional scientists?

Clearly it is better in the sense of developing known paradigms.

Not so clear if it is better at generating new paradigms - the risk for a scientist, professionally, of deviating from expectations now carries not just a social but an economic penalty.

We'll see.


The fact that the foragers are living on a 2d surface is doing a lot of work here, and if we get rid of that then the argument kind of falls apart.

Scott has previously written a lot of things comparing human thought to the way artificial neural networks function, so hopefully it won't be too controversial when I say that an idea is very very roughly like a point in an extremely high dimensional space. (Neural network activations are very high dimensional vectors, and we're saying that they kind of correspond to "ideas" or "things the network is thinking".) We should probably think about scientific ideas / discoveries in the same way: they are ideas that happen to be good in a very large and high-dimensional space of possible ideas.

If the foragers are foraging in a very high dimensional space rather than in 2 dimensions, then they'll pretty much never run out of fruit to pick that's a very short distance away. The front of discovery will expand quickly at first, but will soon slow almost to a halt. In this later stage, if a forager increases the distance he is willing to walk by just a couple of steps, then suddenly whole vistas of millions of fruit trees become available, enough to spend a lifetime picking, without ever coming close to running out.
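
A toy illustration of that last point (my numbers): in d dimensions, expanding the foraging radius from r to r + 1 multiplies the reachable volume by ((r + 1)/r)^d.

```python
# How much new territory a couple of extra steps open up, by dimension.
r = 10.0
for d in [2, 10, 100, 1000]:
    growth = ((r + 1) / r) ** d
    print(f"d={d:>4}: one extra step multiplies the territory by {growth:.3g}")
```

At d=2 the gain is a modest ~1.2x; at d=1000 it is ~2.5e41x - whole vistas from a couple of steps.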


I'm interested in cataloguing ten-thousand-foot answers like this, if anyone wants to try to succinctly state ones I've missed (https://muireall.space/progress/).


So moving camp is like a paradigm shift ...


In this foraging model, one-on-one tutoring is like equipping some foragers with a bicycle and a handlebar-mounted spotlight. Grad school is a network of trams, with the rails running near previous large finds.

As the tram network grows, resistance to growing it further increases, as does the cost.


Could it be in part motivational? If we tell the young people with the greatest minds that science is no longer about inducing fundamental, sweeping truths about the world, but merely about falsifying hypotheses, then maybe “science” seems to them like a waste of their potential.


Hasn't fame and fortune been one of the big drivers of scientific discovery? There is still plenty of terrain to explore, but a lot of the really important and pressing problems have been picked over. Building the first aircraft was the dream of many inventors. Creating a tweak for a slightly more fuel-efficient flight doesn't capture the imagination in the same way.

For amateurs, Coley's toxins are a good example. Coley became the father of cancer immunotherapy by injecting people with bacteria. In some way he was able to move the field forward, even though he had no understanding of the mechanisms. That seems less likely to happen today.


Scott, in this debate I am 99% on your side. Yes, it is relatively easier to be a genius when all the low-hanging fruit is still hanging there. Today, if you discover something amazing in math, chances are someone else already made the same discovery decades ago; you just didn't notice because there are too many things like that. For every obscure corner of science that has existed long enough, some people have spent their entire lives commenting on everything they noticed, and it is hard to find anything they didn't. Which, if I understand correctly, is what you are saying.

The remaining 1% is the argument... sorry, too lazy to find the link now... that people today lack the *ambition*. Yes, many people homeschool and hire tutors, but very few of them do it with the explicit purpose of making their child a *genius*. Most of them simply aspire to get the child admitted to a prestigious university.

Consider the difference between Polgár -- who decided that his girls were going to become chess grandmasters, and organized their childhoods accordingly -- and a modern parent who merely wants his kids to attend an afternoon chess club, believing that this kind of extracurricular activity will make them more likely to get into Oxford.

I would predict that given the same level of innate talent, the latter children will most likely *not* become chess grandmasters. Not because they couldn't, but simply because they will not spend enough time and effort. (They will probably also be actively distracted by doing *other* things that might also increase their chances at Oxford, like riding a pony or whatever.) This is the difference between trying and trying to try, so to speak. Instead of striving to become stronger, you are just trying to project an image of a "hard-working student"; you are not really trying to become the best in the universe, because even hard-working students are not socially expected to actually do that.

Unlike the aristocrats of the past who hired tutors for their kids, people today who are rich enough to afford any amount of tutoring are not trying to make their kids as great as possible. They are okay with making them good enough. (And sometimes it is easier to just donate some money to the university.)

I did a lot of math tutoring when I was younger. People hired me to teach kids who were failing at school, or who were trying to get admitted to some school. No one ever offered me money to teach them or their kids anything *beyond* what school required. I have a gold medal from the International Mathematical Olympiad, and I am also interested in psychology and pedagogy; that could in theory qualify me for this kind of tutoring. But there is no demand on the market, as far as I know.


Wolves DO NOT eat people. Participating in the idiotic, anti-factual demonization of wolves is pathetic and disgusting, even as a metaphor. Shame on you.


The most extreme example of low-hanging fruit should be reading the books of lost civilizations. The greatest Roman-era scientists—Hero, Galen, Ptolemy—are the ones who read the Library of Alexandria, and yet they are pale reflections of their sources. Archimedes was the main source of inspiration for science for almost 2000 years. And yet, even with the hint given by reading Archimedes, the golden ages of Roger and Francis Bacon were substantially slower in reconstructing Hellenistic science than the 200 years it took the first time. It's not entirely clear what Hellenistic science accomplished, but probably the 17th century accomplished nothing outside of it, and much less than half of it. And that's a lot better than any prior attempt to extend Archimedes!


While the general model is hard to refute and makes perfect sense, the scaling boggles the mind. Given how rapidly technology expands, it stretches the analogy to claim that foraging would truly be THIS much harder each generation. Are we really going from dozens of barely trained amateurs being geniuses each generation to almost none?

I think the answer here has to be more complicated, if only to explain how much noise gets into this signal in the first place.
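For what it's worth, the 2-D scaling can be made concrete. In a plane with uniform fruit density, after N fruits are picked the cleared disc has radius proportional to sqrt(N), so the walk to fresh fruit grows only as the square root of everything already discovered. A toy calculation (Python; the density and fruit counts are made-up illustrative values):

```python
import math

density = 1.0  # fruit per unit area (illustrative)

# After n fruits are picked, the cleared disc satisfies
# pi * r**2 * density = n, so the walk to the frontier grows like sqrt(n).
for n in (100, 10_000, 1_000_000):
    r = math.sqrt(n / (math.pi * density))
    print(f"after {n:>9,} fruits picked, the frontier is {r:,.0f} units away")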

My personal hobby horses may bias the following example:

In the times of Ben Franklin and Lavoisier, it was common to have lots of leisure time if you had any at all. Aristocrats were famously bored, and most developed serious hobbies. Those who became scientists had more time for different kinds of creative thinking than we can justify in academic careers today. Freed from pressure to publish, teach, or even make sense to anyone else, they could think in a manner unavailable to most today. Intellectual minor leagues weren't limited to politics: you got into the major league by coming up through the minor league at least as often as through classes.

Many others have made the point that the pressure to publish is bad for scientific rigour. But think also about how it shapes the thought process of a scientist searching for truth. Think about the change over time in how scientists' labour is divided between administration, experimentation, collaboration, and quiet thinking. I believe that if we calculated the hours lost in a scientific career today to nonproductive administrative time, the curve would flatten a little. Include the time lost to publishing papers that turn out to have irreproducible results, and the effect may be more dramatic.

One parallel worth noting in the wider society is the use of leisure time. Leisure time for the general population is at an all-time high, but so are entertainment options. Media consumption is at an all-time high too. This leads to a lot of lost time each day that could be spent on hobbies, investigation, and thought. It certainly affects the contemporary aristocracy, almost none of whom are amateur scientists. It certainly affects me: I have a paper to finish by the end of this month and I am behind. I think it is not a stretch to say that a vastly more distracted world will produce less signal and more noise.


My Exhibit A for the youth thing in ML is Ilya Sutskever (36). He is a co-founder of OpenAI and a co-author on AlexNet (the groundbreaking convolutional neural network), sequence-to-sequence learning, the GPTs, and AlphaGo. The Bay Area rationalist scene is also chock-full of world-class ML nerds in their late twenties.

Also, "young" might have to be re-calibrated to the typical age for a prodigy to get a Ph.D., and "old" should also be re-calibrated since everybody's living longer.

Einstein once said, "A person who has not made his great contribution to science before the age of 30 will never do so." Probably it's 35 now, and maybe a hundred years before Einstein it was 25, assuming what he said is true.


I would like to suggest an alternative explanation, borrowed from David Deutsch: there's a possibility that genius has been less common since the late 1800s/early 1900s because of bad philosophy:

“Let me define ‘bad philosophy’ as philosophy that is not merely false, but actively prevents the growth of other knowledge.”

Most scientific fields, and even popular culture, were corrupted at that time by the philosophical anti-Enlightenment, in which ideas became arbitrarily relative.

In such an environment we'd expect to see both very little actual progress, because science becomes focused more on instrumentalism than on explanations, leading to an infinite regress of vague theories, and a lot more tall poppy syndrome, because if there is no standard of beauty then anything is art, and no one should be labelled a better artist than anyone else.

I don’t know if this idea has already been discussed, so sorry if I'm being redundant. Here’s David’s explanation of bad philosophy’s impact on physics: https://publicism.info/science/infinity/13.html


Interesting model. But it doesn't seem to include the sometimes revolutionary questions that earlier answers can pose. In other words, scientists eat the low-hanging fruit and poop out seeds that grow new fruit trees in the proximal area.


I've posted an alternative model in response here: https://www.lesswrong.com/posts/WKGKANcpGMi7E2dSp/science-is-mining-not-foraging

Cross-posted from LW:

Tl;dr: My model for science is that it is like mining outwards from a point. This offers predictions that extend beyond Scott Alexander’s foraging metaphor and sometimes disagree. The mining metaphor emphasises the fact that research exposes new research problems; that research is much slower than learning; and research can facilitate future learning. The model offers actionable advice to researchers (e.g., “avoid the piles of skeletons” and “use new tunnels to unexplored rock-faces”). In part II, I also object to the idea that it is concerning that a smaller proportion of people are now “geniuses”, arguing that this would be true even if the importance and difficulty of intellectual feats achieved was constant because of the way in which genius is socially constructed.


I know the virtue of the model was its simplicity, so I feel bad about proposing an addition, but here goes: We're not modeling *satiety*, which I think is real, and explains a lot of the slowdown we observe. Basically, the most scientifically productive regions place comparatively less priority on progress because there's peace, or at least, no life-or-death sci/tech race, and because our basic needs for the fruits of technology are mostly satisfied most of the time, which leads us to prioritize goals other than progress. Even actual scientists and the institutions to which they belong clearly now put more emphasis on making sure that scientists have comfortable jobs, predictable salaries, prestige, and fair treatment. This selects for people who can convince grant committees and peer reviewers, not necessarily those who burn with larger ambitions. The latter generally try to make it big in industry, but the sort of industry that caters to largely satiated people tends to spend its collective brainpower on making people click on ads for dog accessories. So I think there is a general lack of urgency for real progress, and a corresponding decline in concerted effort.


One other field to consider in this analysis is religion — more precisely, religious research. Israel now has 150,000 yeshiva students learning full time, often as a career. But the pace of major discoveries (important Halachic innovations) has not sped up.


> I haven’t done the work you would need to distinguish between these two explanations yet, although I find it suggestive that the trend is more pronounced in theoretical physics than in biology.

This link doesn't mention biology at all. And I'm not sure what you mean by "the trend is more pronounced" — but my impression was that the more theoretical and mathier sciences (like theoretical physics) typically have younger people making contributions, mostly because fluid intelligence peaks in people's 20s.
