433 Comments

I'll say what I say about all of these studies, which is that poverty is obviously bad and we shouldn't have to slap some quantifiable label on it in order to justify doing something about it.

If you don't agree poverty is bad, you've never experienced it.


It doesn't help that many people routinely overstate the degree to which income correlates with educational data, which I think is contributing to the credulity here. Yes, all educational data has income stratification, but it's smaller than liberals constantly insist, and people have persuasively argued that's a racial effect masquerading as an income effect. (As in, you throw race into a regression and income ceases to be a significant predictor.) Claims about school funding and expenditures are even worse. But "it's the money, stupid!" is just a really tempting standpoint for liberals.


"It makes it feel real - you can literally the effects! " The "see" is missing. I guess it should be: "...you can literally see the effects!"


EDIT: I now understand you meant multi-dimensional into single-dimensional, which makes way more sense.

I think "non-inherently numeric result" is the wrong way to put it. I've worked with EEG, and the output is very much numeric and easy to immediately work with, the issue is how noisy it is (and, as you said, you can invent whatever hypothesis you want by reading patterns in the noise). You do seem to understand that later, for what it's worth, but it just makes the initial explanation more confusing.


'papers apparently “do not have to go through anonymous peer review”.'

Stuart Ritchie: "Contributed" submissions do get peer-reviewed - but there must be SOMETHING easier about this way of submitting articles, otherwise why would it exist? My guess is that the Contributor's handpicked choices for reviewers are almost always granted: https://pnas.org/authors/member-contributed-submissions


> Why do groups with no real difference between them look so different on the graphs?

Because the power spectrum is basically the Fourier transform of the original signal, and even for signals that are basically noise, similar frequencies are highly correlated. This gives the curve that "continuous" feeling, and makes it look like, whether it's up or down, it can't be an artifact of noise. Noise is not nice and continuous!

It probably doesn't help that the plot's y-axis says the numbers are represented as z-scores, which exaggerates the differences between variables with relatively flat distributions.
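For what it's worth, the smoothing is easy to see in a few lines of Python (a minimal sketch on made-up white noise, nothing from the study's actual pipeline): segment-averaged spectra of pure noise come out looking smooth and continuous.

```python
# Minimal sketch: the Welch-averaged power spectrum of pure white noise
# looks far smoother than the raw periodogram, because each bin's
# estimate is an average over many overlapping windowed segments.
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 250                                 # assumed sampling rate in Hz
noise = rng.standard_normal(fs * 60)     # one minute of pure white noise

f_raw, p_raw = signal.periodogram(noise, fs=fs)
f_avg, p_avg = signal.welch(noise, fs=fs, nperseg=fs * 2)

# Bin-to-bin jumpiness relative to mean power (the true spectrum is flat,
# so the means are comparable): the averaged curve is far calmer.
print(np.std(np.diff(p_raw)) / p_raw.mean())   # jagged, order ~1
print(np.std(np.diff(p_avg)) / p_avg.mean())   # smooth-looking, order ~0.1
```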


Is it fair to summarize the study setup like this:

1. State some research question and some hypothesis.

2. Collect some arbitrary data, unrelated or loosely related to (1).

3. Some p < .05 is taken as a proof for (1).

I am aware that the researchers typically use slightly different terminology (they probably don't write "as a proof for", but something a bit more blurred).

4. Bonuses: babies, cute animals, virtue signals of all kinds


But if it has the result of low income families getting a bit more cash to make things more comfortable, then I'm all for that.

https://nakedemperor.substack.com/


The New York Times posted this study as the third most important news story on the NYTimes.com homepage and issued a tweet about it under the "Breaking News" caption.

This is a good example to keep in mind when reading Scott's next post about how the news media seldom outright lie, which I agree with. I'd be hard-pressed to find an outright lie in Jason DeParle's news story in the NYT.

On the other hand, the news media has a huge amount of discretion over what it treats as Front Page Breaking News and whether it approaches the story from a credulous or skeptical perspective.

For example, here's a two-week-old study in "Developmental Psychology" with a much larger sample size and a much longer duration that finds that the Democrats' idea of more funding for pre-Kindergarten education is not a good idea:

https://www.unz.com/isteve/is-pre-k-school-really-a-panacea/

Unlike the brand new EEG study, I haven't seen much news coverage of this.


> But this study basically shows no effect. We can quibble on whether it might be suggestive of effects, or whether it was merely thwarted from showing an effect by its low power, but it’s basically a typical null-result-having study.

This treatment of statistical (in)significance is problematic. I've only looked at the charts and read Gelman's blog post, but it seems to me that the study produced evidence that was most consistent with some effect, but with enough uncertainty around that estimate that it's also reasonably consistent (though less so!) with no effect.

Statistical significance does not prove that some association is exactly as observed. But conversely a lack of statistical significance does not prove that no association is present. A confidence interval (or similar) around the estimate would be more informative, but to the extent that we're having to use p-values, a low-ish but >0.05 p-value might loosely be interpreted as saying the data are most consistent with there being some effect, but also reasonably consistent with there being no effect.
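A minimal sketch with made-up numbers (not the study's actual estimates) of how the same result can be "not significant" yet still lean toward an effect:

```python
# Minimal sketch (hypothetical estimate and standard error): a point
# estimate with p > 0.05 can still be most consistent with some effect;
# the confidence interval says more than the significance verdict alone.
import numpy as np
from scipy import stats

estimate = 0.20   # hypothetical effect size (e.g., in SD units)
se = 0.12         # hypothetical standard error

z = estimate / se
p = 2 * stats.norm.sf(abs(z))                  # two-sided p-value
ci = (estimate - 1.96 * se, estimate + 1.96 * se)

print(f"p = {p:.3f}")                          # ~0.096: not "significant"
print(f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")  # (-0.04, 0.44): mostly positive
```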


Because they were essentially posted together, I don't mind putting this thought out here more related to your other post on Bounded Distrust.

Hardly anybody has the time to sift through the information as you did. As an aside, that's why I really appreciate you taking the time to do this. But, most people don't read your blog or other things to help sort through. Long story short, this study is accurately binned as "unproven, possible/likely false" and the NYT ran the story anyway. For most people, the conclusion *must* be that the NYT either flatly lied to them, or was so much more interested in pushing their agenda than the truth, that they were willing to forward a study that was very likely false in order to advance a narrative. For most people, the proper mental configuration *must* be to consider the NYT suspect. The only other alternatives are to trust in known liars, or spend far too much time sifting through information to try to understand and sort it, knowing that most of us lack the intellectual ability and time to do that properly.

Fake News is real, and there's no way for the average person to solve it. It must be fixed at the institutional level of the media who publish lies on a regular basis. Even those of us who can and do take the time to sort through information cannot let it slide that the media regularly lies to us, even in cases like this where they are presenting something "potentially" true. It's a weak study and should not be printed in the media.


You should write a listicle by the title:

"Top Ten Cases of Nominative Determinism"

You won't believe number 7! (It's the neurologist Lord Brain)


It's hard enough to do good science and be statistically rigorous even on "boring" topics like cell biology, physics, etc, when there's so much pressure to get positive results for the sake of your career. Take those career pressures and the normal desire to be proven right, and combine that with a politically-charged topic and a field where 95% of researchers have the same political ideology and....... well, that doesn't seem ideal.

This is why I think there's some case to be made for promoting political and ideological diversity in academia, at least in fields with politically-charged topics.


There's a longitudinal study that took place in the Gambia, which used EEG & fNIRS as well as a battery of behavioural and socioeconomic assessments to look at the effects of poverty/malnutrition/low-SES status on brain development, and while it's not over yet, their conclusions are mixed (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6767511/). They pretty much did everything you could ask for with a poverty assessment study. And so far, they've found that Gambian babies seem to habituate to new stimuli much more slowly than their comparative cohort in the UK.

Is this because they have worse attentional performances or because they live in noticeably different households to UK children - massively higher average household size, more people to factor in etc.? For a study of this kind, it's not particularly enlightening? (Admittedly I think COVID impacted their work, and they should find out some more when the children return for a 3-5 year check-up.) But it is suggestive that certain types of neuroimaging have similar problems to heritability, where results don't translate brilliantly across geographic areas.


"Andrew Gelman says no" could be the title of a whole book series


I now think journalists should be banned from reporting on science, as a matter of professional ethics. I think it's clear at this point that they just can't do it right, and I think this sort of bad science journalism is a leading reason why a lot of people distrust scientific institutions.

If they want to report on the state of science, maybe they should have 3 randomly selected researchers from 3 randomly selected institutions write a review.


One thing I would have liked to see the researchers address was the significant difference in age between the control and experimental groups. The research is fairly heterogeneous in this area, but from what I understand, the younger you are the less neural attenuation one has in various areas of the brain. Even if the average EEG collection varied by a month between two groups, that could contribute to differences in power seen here. One example is from this VEP study: https://www.cambridge.org/core/journals/visual-neuroscience/article/abs/development-of-lateral-interactions-in-the-infant-visual-system/FCBAC731A0B367404B116639A2C1757A. From the abstract: "We studied the development of the short-and long-range interactions at 100% and 30% contrast in human infants using both VEP amplitude and phase measures. Attenuation of the second harmonic (long-range interactions) was adult-like by 8 weeks of age while the strength of the fundamental (short-range interactions) was adult-like by 20 weeks suggesting a differential development of long-range and short-range interactions. In contrast, corresponding phase data indicated significant immaturities at 20 weeks of age for both the short-and long-range components."


Obligatory typo nit: "but a bunch of other people beat me to it (see eg Philippe Lemoine, Stuart Ritchie) have beaten me to it."


Stuart Ritchie is wrong about whether papers by National Academy members are peer-reviewed. In fact the review process is basically the same, it's the editorial process that changes, that is, the stage when the journal decides whether to send the paper to reviewers. NA members are given a certain number of "silver bullets" that allow them to skip the editorial stage and go straight to peer review. It is certainly bad and I think it does lead to bad science but to say that those papers are not reviewed is very much overstating the case.


This reminds me of TLP's fantastic post:

"In a recent fMRI study, a salmon was shown a series of pictures of human faces showing various emotions: can a salmon distinguish them? and what brain regions are involved. 15 pictures, ten seconds each.

I won't bore you with the anatomy. Because of the small size of the brain, exact brain structures could not be distinguished, but something in the brain did light up. A statistically significant number of voxels, comprising an area of 81mm3 in the midline of the brain, were active (p<.0001).

So can fish interpret human emotions from a picture? I have no idea. I do know, however, that that fish can't do it: it was dead."

https://thelastpsychiatrist.com/2009/10/the_problem_with_science_is_sc.html


"Cash aid to poor mothers increases brain activity in infants, study finds"

That headline makes me cry. I have to assume the story is better and that the study has something more going on than "We gave a wodge of tenners to Sharon and immediately her eighteen-month-old baby, Shanice, had a noticeable spike in brain activity".

I imagine what they mean is "By getting more money, the low-income parent(s) have less anxiety about finances, can pay bills on time, can feed their kids better, and the reduction in stress and improvement in the environment means that the babies are receiving better care and so we conclude better care = being healthier = more brain activity going on = hitting developmental milestones, smarter than if lower brain activity, and other good things".

I'll have to read the thing to see what it's about, but I'm going to predict this is what they mean.

EDIT: Mmmm. The study seems a bit wishy-washy, I don't know if they demonstrated what they set out to demonstrate, and they do seem to realise that. They're coming down on the side of "pro-cash transfers" but I honestly don't know if the extra money *did* make more of a difference:

"However, we do not yet know which experiences were involved in generating these impacts. Future work will examine potential mechanisms affected by the cash gifts, including household expenditures, maternal labor market participation, maternal parenting behaviors, and family stress, noting that pathways may operate in different ways across different children and families."

If you're not tracking where the money is going, and you don't know what is going on, you can't say "X shows Y happened as a result". $20 a month is not going to make a big difference, so the $333 is the one to track.

And the problem is: you can have neglectful parent(s) who use the extra cash to spend on themselves and the kids get no benefit. You can have parent(s) who are trying to do their best, spend the money on paying bills or buying food and clothes for the kids, etc. AND IF YOU DON'T KNOW WHICH IS WHICH, YOU DON'T KNOW JACK.

Is Susie Spendthrift's baby one of the ones with higher brain activity? If so, the extra money isn't the reason the kid is doing okay. Is Sally Striver's baby one of the ones with lower brain activity? If so, then despite Sally doing the right thing, lack of enough money is not the problem here.

Poverty is terrible, growing up as a child in a household where there is constant anxiety over paying bills and not having a financial cushion if anything goes wrong does make you anxious, and more money is probably better - but unless you know where best to direct that extra money, then your studies are not much use at all.


A possible reason for "wanting" to find physical evidence of harm from poverty is that it is a holdover from a distant past when Liberals were "soft hearted" and Conservatives were "hard headed" and finding physical harm from poverty was supposedly a way to persuade Conservatives.


"Some kids in Romania were randomly assigned to stay in (probably terrible) orphanages vs. be placed in foster care."

No, they weren't "probably terrible", they were "absolutely fucking appalling", at least going by my memories of Irish interventions in the 90s after the collapse of the Ceaușescu regime.

https://www.bbc.com/news/av/magazine-35944245

This sparked a lot of adoptions of children by Irish families (and other Western countries), and one unhappy side-effect was the setting up of a trade in Eastern European adoptions; canny operators in effect bought and sold babies to rich (by their standards) Westerners:

https://www.irishtimes.com/culture/cashing-in-on-the-baby-rescue-1.1058341

"The television pictures of Romanian orphanages and the children who lived there were among the most memorable of the 1990s. These shocking images sparked a huge humanitarian effort, particularly among the Irish. For many, though, the help they brought was not enough and they became involved in "rescuing the orphans" by adopting them.

However, these rescues unwittingly involved many Irish people in a baby trade. Most children were not orphans; they had parents and brothers and sisters and aunts and uncles and grandparents and these "rescues" were mostly facilitated by large sums of money. Many experts believe that, tragically, this trade from Romania condemned thousands more children to institutions and made reform of childcare almost impossible.

Today, Serban Mihailescu, the Romanian minister for children, says the effect of foreign adoptions was "extremely negative" and encouraged officials to keep the institutions full of children. "The number of children in institutions increased because more and more foreigners wanted to adopt Romanian children and more and more of the personnel in the institutions worked as dealers and they pushed the children for the inter-country adoption. It's like a business, a $100 million business," he says."


I can actually offer some operator-level expertise! I am a board-certified pediatric epileptologist, and can describe what EEG actually is and what it is purported to measure. And why this study is bullshit. I hit the comment length limit so this will have to be threaded out.

EEG is a test by which we indirectly observe the activity of the cortex (the brain's surface). We do this by gluing twenty electrodes to the scalp in a predetermined grid, attaching each to a differential amplifier, and then digitally recording the voltages detected by each electrode. The resultant tracings can tell us a lot about the state of a person's brain - are they awake or asleep, are they at an expected developmental stage, is their brain experiencing local or diffuse dysfunction, could they be having seizures? For many such practical questions, EEG is a useful tool.

With respect to many other questions, EEG is a blunt instrument. We sometimes call the EEG tracings "brain activities," but they are at a great remove from the brain's actual activities.
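To make "band power" concrete, here is a minimal sketch (synthetic one-channel signal, illustrative band edges, not the study's actual pipeline) of how an absolute band power number is computed from a tracing:

```python
# Minimal sketch: integrate a Welch power spectrum over conventional
# frequency bands to get "absolute band power" summary numbers, the kind
# of quantity such studies feed into their statistics.
import numpy as np
from scipy import signal
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)
fs = 250                                 # assumed sampling rate in Hz
eeg = rng.standard_normal(fs * 120)      # stand-in for one channel's tracing

freqs, psd = signal.welch(eeg, fs=fs, nperseg=fs * 2)

# Band edges vary by lab; these are illustrative, not the paper's.
bands = {"theta": (4, 7), "alpha": (8, 12), "beta": (13, 30), "gamma": (31, 48)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs <= hi)
    power = trapezoid(psd[mask], freqs[mask])   # absolute power in the band
    print(name, power)
```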

Edit: haven't posted since SSC days. Not sure about the etiquette for effortposting or how to make it look pretty. Correct me where I'm wrong.


I’m not sure how to actually take on board your point about not over relying on heuristics and dismissing any study you don’t like. I mean, I agree in principle. But when I saw this headline I immediately said to myself “this is not a real finding and I would bet a very large sum of money on the spot that the paper has obvious flaws and won’t replicate”. And I was right, as I knew I would be.

So how do I remain epistemically virtuous here? The heuristic just works too well to truly ignore it.


I really want to know what goes on in these academics' minds.

The two poles:

A) We think we can see a real effect here, if only we had preregistered it; maybe we should publish anyway, it will add to the sum total of human knowledge.

B) We are in the money with this; let's p-hack the hell out of this study so we can get headlines and future grants.

Obviously both are extremes, but I have gone from thinking academics were mainly A, to realising quite a few are a bit like B.


In the original post, Scott wrote:

"Most of these families were making about $20,000, so this was an increase of about 10-20%."

This is only partly true. From Table 1, "Characteristics of EEG Sample", "Household combined income at baseline (dollars)":

$22,739 ± $20,875, n=238 for Low-cash gift EEG sample

$20,213 ± $14,402, n=168 for High-cash gift EEG sample

That is a great deal of intra-group variation. At face value, a $4,000 per year supplement will mean far more to a family one SD below the mean (69% boost to income) than to one at +1 SD (12% boost).

It seems to be commonly accepted that low-income mothers' behaviors are influenced by the knowledge that benefits will be reduced by higher declared income -- this provides incentives to participate in the informal economy.

Perhaps the authors figured that for that reason, declared income doesn't mean much; "poor is poor." I think the journal should have required the authors to address this point explicitly (I don't believe that they did).
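Checking that arithmetic, with the Table 1 numbers for the high-cash group and a roughly $333/month gift:

```python
# Worked check of the 69% / 12% figures above: a $333/month gift
# (~$4,000/year) relative to income one SD below and above the
# high-cash group's mean ($20,213 +/- $14,402, from Table 1).
mean, sd = 20213, 14402
gift = 333 * 12                # ~$3,996/year

print(f"-1 SD income: ${mean - sd:,} -> {gift / (mean - sd):.0%} boost")  # ~69%
print(f"+1 SD income: ${mean + sd:,} -> {gift / (mean + sd):.0%} boost")  # ~12%
```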


Interesting tie-in to Scott's comment on hating imaging -- computer vision has a terrible time with undetectable-to-humans changes to small numbers of pixels in images. In effect, all image analysis, by both humans and complicated statistical tools, is very, very hard, due to exactly the high degrees of freedom that Scott points out. Unfortunately, this is also why a bunch of very good compression and storage techniques for signal processing and lowering cost on medical imaging technologies are looked at with so much skepticism, per a friend of mine studying some of that in his EE PhD at UC Berkeley.


From the study: "The BFY study will continue to follow these children through at least the first 4 y of life, to determine whether treatment impacts on brain activity persist and extend to direct measures of children’s cognitive and behavioral outcomes."

My guess is it won't for the reasons you laid out. Maybe you could make a prediction on it if someone disagrees.


Did not Dick Armey teach the masses thusly: "You tell me who did the study, and I'll tell you what results they got!"?

That applies to all sides, FWIW.


> And finally, people want to discover a link between poverty and cognitive function so bad.

This is baffling to me. The link is OBVIOUS. People with lower cognitive function have less ability to generate wealth in society. This is like 95% of the way to the definition of generating wealth. This is almost as tautological as saying "people want to discover a link between poverty and being poor". Yes, being poor _causes_ poverty, almost definitionally. In the same manner, in an information-age economy, being information-literate is almost definitionally what it means to be capable of generating wealth.


Absolute EEG power is a very odd metric and, from my understanding, doesn't really mean much.


An interesting note is the distribution of parental incomes. The average was around $20,000, but the range was anywhere from about $4000 to $40,000. Getting an extra $4000 a year on a $4000 annual income should have a much larger effect than an extra $4000 on a $40k annual income (that at least sounds good on paper). I don't believe the authors of the study ever broke the children down into groups based on parental income levels, because that wasn't part of the pre-trial design. But, to give them a generous interpretation (which I don't necessarily think they deserve), maybe the effects are more pronounced at lower income levels / for higher relative income changes? Maybe $60/mo (a 14.4% increase on an annual income of $5000) has a bigger impact than $333/mo on a $40,000 salary (a 10% increase)?


Is it an important critique that PNAS has weird peer review? Does Ritchie systematically talk about this, or only when he objects to a paper?

We should keep track records. PNAS was one of the most prestigious journals in the world when it did not do peer review. Why was this? Was it because the members of PNAS put skin in the game by personally endorsing papers? But the response to its success was to end the experiment and prevent it from generating more data. It was bullied into adopting peer review. And no one believes the journal wanted it, so no one is sure what they're really doing. Ritchie continues the bullying today:

'"Contributed" submissions do get peer-reviewed - but there must be SOMETHING easier about this way of submitting articles, otherwise why would it exist? My guess is that the Contributor's handpicked choices for reviewers are almost always granted:'

Or, maybe people submit to PNAS for the obvious reason that it is one of the most prestigious journals in the world. We shouldn't care about whether it's "easy" or "hard," but whether it differentiates truth from falsehood. This paper should be a black mark for PNAS, but we should study its whole track record. We should judge the method by the results of the journal, not the journal by the method. I don't know if PNAS deserves its prestige, but it definitely shouldn't have its prestige be a function of its method.

I think anonymous peer review has been a catastrophe, but I am much more certain that experiments are good. We should test the hypothesis of anonymous peer review. Experiments are *obviously* good and the homogeneity of peer review is *obviously* bad.


Systemic trauma is still a little too touchy to shine light upon.


I'm slightly confused by the argument with the random graphs. Is the point here that those are random in a way that makes their average proportion of disadvantaged children basically the same? Because surely otherwise you would expect to see differences depending on which random group has more of those?


This is fake p-fishing science plus fake cherry-picking journalism. Later, you will see NYT news stories and editorials lobbying for some social spending program on the ground that "studies show . . ." They have been doing this forever.

Every time I see the NYT or some other lefty fake news outlet say "studies show . . ." or "experts agree . . ." I just roll my eyes and assume it is fake.


Just curious, why are the two groups in each EEG plot always vertical mirror images of each other?


I agree that the methodology isn't great and it is actually a null result but kudos to the researchers for making the data public! I don't think we should be harsh to the researchers there. It would be easy to tell a tale that the data is too sensitive to release and I bet PNAS would have still published it.

This is also a good demonstration of why data + public release >>> Peer Review for advancing Science.


What Henderson points out is absolutely damning to me, and shows clear evidence of "rooting around" for an effect. It really does make me sad: those running studies have got to understand that this type of activity is antithetical to getting a useful result, and they do it anyway, and most people don't care. It seems to be almost a norm.


Has no one even considered the possibility that the cash grants to poor families might have been used to purchase alcohol, tobacco, drugs and lottery tickets for the adults? Which would be unlikely to have any positive outcome for the children. I might be accused of bias in this assumption, but an assumption that the money would only be used to improve the physical health and learning opportunities for the children also reflects bias. In high GDP countries many people are poor, not because of external factors, but because of poor life choices. Giving money to such individuals does not magically result in better life outcomes, whether it's welfare or lottery winnings.


The debunking has focused on half of this study, the link between gibs money and EEG.

But what about the other half -- the link between EEG and cognitive performance? Is there any indication that cognitive performance is meaningfully measurable with an EEG (outside extreme cases where the brain is severely malfunctioning)? Can we save a whole lot of money on IQ tests by just hooking people up to EEGs to see who the smart ones are?


To try to invent reasons to do one thing as opposed to another, or more correctly to make others do the one or the other, invariably creates false science and counterproductive policies. Such is the case with the war against poverty, the war against drugs, the war against God, or the war against fossil fuels, to name only the recent stupid political abuses against human rights.


As soon as I say this blogger is cool he turns into a mega pussy again. WTF?


"the lead author is named Dr. Troller, and I am a nominative determinist"

OMG, me too! It's crazy how often names match occupations or reveal hidden truths. Of course Bernie 'made off' with all your money!

But it can also be a limiting bias. I was overly skeptical of Operation Warp Speed just because the lead doctor's name sounds like "slow-ee."

Or maybe not? From Dec 2020:

"Moncef Slaoui, chief science adviser for Operation Warp Speed, said during a media briefing Wednesday. 'The process of immunizations — shots in arms — is happening slower than we thought it would be.'"

https://www.nbcnews.com/health/health-news/slower-expected-covid-vaccines-are-not-being-given-quickly-projected-n1252225

Of course he would say that!


The "low cash" and "high cash" lines in the first graph looks too much like mirrors of each other - eyeballing it, it looks like score(low cash frequency) ~= -0.8 * score(high cash frequency) for every frequency. But Andrew Gelman's results look the same, so what's going on? And if the graphs really are supposed to look mirrored, why are they both present? Isn't that going to make any differences look twice as much as they really are?


> In order to trust their positive results, the researchers had to correct for multiple comparisons. The simplest method for this is something called Bonferroni correction, which would have forced them to get a p-value of 0.05/8 = 0.00625.

This looks like it should probably say 0.05*8 = 0.4?
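For what it's worth, both conventions exist, which may be the confusion: either compare each raw p-value to 0.05/8, or multiply each p-value by 8 and compare to 0.05; the verdicts are always identical. (Multiplying the threshold itself by 8 to get 0.4 isn't one of them.) A minimal sketch:

```python
# The two equivalent ways to state a Bonferroni correction for m = 8
# tests; p < alpha/m if and only if min(1, p*m) < alpha.
m, alpha = 8, 0.05
p = 0.02                          # some raw p-value, for illustration

# Convention 1: shrink the significance threshold.
print(p < alpha / m)              # compare p to 0.00625 -> False

# Convention 2: inflate the p-value, keep alpha at 0.05.
print(min(1.0, p * m) < alpha)    # 0.16 < 0.05 -> False (same verdict)
```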


It seems that there are almost innumerable confounding variables between the guardian's receipt of hundreds of dollars and the child's brain scan.


I love how this and the Bounded Distrust post came out within 24 hours of each other. Is it some commentary that media will report "scientific study finds X" when the truth is "scientific study fails to reject the null hypothesis but scientists still think X" when the media really believes and would like to push X?


"poverty is obviously bad (...) If you don't agree poverty is bad, you've never experienced it." so my first instinctive reaction was to agree with you, and then because i have this need to try to contradict everyone i went looking for a reason why this statement may be wrong, and the analogy that came to mind is with pain. I also thought pain was obviously bad, until I saw a documentary about kids born with analgesia and lightbulb went on that hey maybe that bad thing serves a useful purpose and your instinctive feeling about it being good or bad is unreliable.

The parallel with poverty at individual level would be the kids of rich people who are born in an environment where they can't ever conceive of themselves being poor and go on leading dissolute and aimless lives. The parallel at societal level would be something like communism, which while it certainly didn't manage to eliminate poverty came darn close to eliminating the idea that one could be rich, which is... more or less the same thing? (today's middle class lives in far more material wealth than kings of centuries past etc.) and people under communism certainly lost a lot of their drive and creative vitality and initiative and their societies suffered from it.

So knowing poverty, being able to experience some of it firsthand (particularly in youth) or conceive one could fall into it, or seeing it around us daily, can maybe be useful (trying to avoid using the word "good" here) for individuals giving them more drive to succeed in life, keeping them real, not being wasteful etc.? and for society because it acts as a stick to make people try harder?.

This doesn't mean you can take it to the extreme - too much is clearly bad - PTSD from getting tortured gonna outweigh any benefits from being more careful avoiding danger so as not to suffer pain, and too much poverty when it affects nutrition of kids, structural integrity of family, and other basic needs is gonna be unequivocally bad.

Now how much poverty (or inequality, if you want to call it that) is still overall beneficial at the individual level i don't have the slightest clue beyond strong suspicion the threshold is higher at society vs. individual level.

for context if useful: have personally experienced communism and then moderate poverty, but not extreme poverty. Currently not poor.


Twin studies actually do find large shared environmental effects on cognition, but only in children. For reasons not fully understood, the heritability of IQ starts out low and increases throughout childhood and adolescence, with shared environmental contributions fading out. This is called the Wilson effect.

I suspect that this is largely a matter of parents in some households giving their children informal early education that has some transfer to IQ tests, and that this effect is swamped by 13 years of public school, but I'm not an expert and I don't think there's a clear consensus on exactly what's happening here.


> you have to figure out how to convert a multi-dimensional result (in this case, a squiggly line on a piece of paper) into a single number that you can do statistics to. This offers a lot of degrees of freedom, which researchers don't always use responsibly.

You actually don’t have to convert a multidimensional result into a single number you can do statistics to. Multivariate statistics is a thing — it’s a huge branch that deals with *exactly* this kind of scenario — and it’s amazingly painful how often this gets ignored, leading to bad outcomes.

Whenever anyone tries to do a “Multiple comparisons correction,” please remember to yell at them on behalf of all the crying statisticians who just want you to understand that the right choice here is to do a multivariate analysis.
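A minimal sketch (synthetic stand-in data, not the study's) of the multivariate route: a two-sample Hotelling's T² test on all bands jointly, instead of one t-test per band plus a correction:

```python
# Minimal sketch: two-sample Hotelling's T^2, the multivariate analogue
# of the t-test, comparing all frequency bands jointly between groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
p = 4                                      # e.g., theta/alpha/beta/gamma power
n1, n2 = 238, 168                          # group sizes, as in the paper
X = rng.standard_normal((n1, p))           # stand-in band powers, group 1
Y = rng.standard_normal((n2, p)) + 0.1     # stand-in band powers, group 2

diff = X.mean(axis=0) - Y.mean(axis=0)
S = ((n1 - 1) * np.cov(X, rowvar=False) +
     (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)   # pooled covariance
t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S, diff)

# Under H0, a scaled T^2 follows an F distribution with (p, n1+n2-p-1) df.
f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
p_value = stats.f.sf(f_stat, p, n1 + n2 - p - 1)
print(f"T^2 = {t2:.2f}, F = {f_stat:.2f}, p = {p_value:.3f}")
```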


re Bounded Distrust -- I gather this is one of the cases where experts/authorities flat out lied to the public? But it's ok because it was caught quickly and talked about on Twitter?


I did fMRI research back in 2010-2011; the state of the field was so bad it convinced me to avoid academia at all costs. My thesis advisor heavily encouraged me to do the coloured jelly beans thing.


"Shared environmental effects on cognition are notoriously hard to find. Twin studies suggest they are rare."

I don't like this prior, because while twin studies suggest shared environmental effects are rare, the Flynn effect suggests that shared environmental effects are (or at least were) common.

I'm annoyed that there's this giant glaring contradiction, this huge "notice you are confused" moment, where high-powered twin studies nearly flat-out contradict high-powered Flynn effect studies... and yet the entire IQ community just shrugs and dismisses one set of studies without introspection.


I am Greg Duncan and one of the authors of the PNAS study. I am speaking for myself. I was trained in economics and not neuroscience but participated in virtually all of the analyses of the EEG data and in writing up the results.

First off, I would urge everyone to actually read the paper and its appendix, both of which are freely available on the PNAS website, to see what evidence we present and the words we use to describe that evidence. Many of the issues raised in the original story could have been resolved with a careful reading of the study.

*Shared environmental effects make the paper suspicious*

Our analyses are based on random assignment of different economic environments (i.e., a high or low cash gift payment) to equivalent groups of families. So shared environmental effects are not a confound.

*Cognitive tests and not EEG are the most reasonable ways of measuring cognition*

The children participating in our research were 12 months old, precluding measurement using cognitive tests. We are measuring electrical activity and not cognition. Certain patterns of infant EEG have been found to correlate with later thinking and learning but we do not imply that EEG at 12 months is a measure of either thinking or learning. In our upcoming age-4 data collection we will be gathering both EEG and more conventional assessments of thinking and learning.

*Researchers do not always process EEG data responsibly*

Our procedures, including data processing, are explained in Troller-Renfree, S., Morales, S., Leach, S., Bowers, M., Debnath, R., Fifer, W., ... & Noble, K. (2021). Feasibility of Assessing Brain Activity Using Mobile, In-home Collection of Electroencephalography: Theory, Methods, and Analysis. Developmental Psychobiology, 2021. Code for data processing is available on GitHub (https://github.com/ChildDevLab)

*People love seeing visible EEG effects*

Figure 2 provides the picture, but statistical analyses (reported in Table SI6.1) are provided to support whatever differences might be observed.

*Replication studies often produce lower effect sizes*

We welcome replication studies.

*The study has enough yellow flags to warrant checking into it*

We welcome close scrutiny, especially among people who have given the paper and its supplemental materials a close reading.

*They conclude that financial support changes brainwave activity*

From the abstract: “Unconditional cash transfers may change brain activity…” We never claim that we have established a causal link.

*All differences lose statistical significance after adjustment for multiple comparisons*

Results for our preregistered hypotheses are featured in Table 2 of our paper, and we pointed to their p>.05 nature in several places in the paper, including our conclusion. Results for additional, non-preregistered, analyses are also presented in the paper and, especially, appendix. Some of these results are statistically significant, even after multiple testing adjustments. These supplemental analyses play a substantial role in the conclusions that we draw.

*The abstract sure does say “infants in the high-cash gift group showed more power in high-frequency bands”*

It certainly does, but, in the abstract, the conclusion that we draw from these differences is not couched in causal effect language.

*Can we just say that regardless of stats, we can eyeball a significant difference here (in Figure 1)? Andrew Gelman says no.*

So do we. We state that Figure 1 describes our data but that proper statistical testing is needed to assess whether the differences pass muster.

*The graph (Figure 1) proves nothing*

We agree.

*But this study basically shows no effect.*

Results in Table 2 for our preregistered hypotheses do not show p<.05 results – a fact that we clearly acknowledge in several places. Analyses of preregistered hypotheses are properly accorded a great deal of weight in reaching conclusions from a piece of research. Gelman’s analyses of our data focused exclusively on those preregistered hypotheses, as have almost all of the blog post reactions. Had we stopped our analysis with them, we probably would not have tried to publish them in such a high-prestige journal as PNAS.

But the paper goes on to present a great deal of supplementary analyses. Statisticians have a difficult time thinking systematically (i.e., statistically) about combining pre-registered and non-preregistered analyses and often choose to give 100% weight to the preregistered results. That is a perfectly reasonable stance — and is what classical statistics was designed to do.

For us, the appendix analysis of results by region (in SI6.1), coupled with the visual regional differences in Figure 2 and with results from the regional analyses in the past literature, led us to judge that there probably were real EEG-based power differences between the two groups. Our thinking, which is explained in the paper, was reinforced by our non-preregistered alternative approach of aggregating power across all higher-frequency power bands. This gets around the problem of the rather arbitrary (and, among neuroscientists, unresolved) definition of the borders of the three high-frequency power bands and eliminates the need for multiple-testing adjustments. Results (in Table SI7.1) show an effect size of .25 standard deviations and a p-value of .02 for this aggregated power analysis.

As our cautious “may cause” language in the paper suggests, we are far from assigning 100% weight to the supplemental analyses, but our weighting of that information is definitely not zero. Our non-zero weighting of the non-preregistered results led us to our “weight of the evidence” conclusions from all of the data analyses we conducted, while Gelman has good reasons to believe in an interpretation of the data that assigns zero weight to anything other than the pre-registered results. I hope that everyone agrees that we will know a lot more with the age-4 data, but we, unlike Gelman, believe that there is enough going on to bring the age-1 results to the attention of scientific and policy audiences.

*Stuart Ritchie says that this article was accepted under rules that give it an easier ride*

Two reviewers provided detailed comments on the first draft, which led to extensive revisions.

*Heath Henderson says that the beta band was not preregistered and it showed the biggest impact. This should raise red flags*

Not for someone who actually takes the time to read the paper. Section SI4 explains that beta was left out of the preregistration owing to sparse evidence on its association with income at the time. By the time we began our analysis, several papers had established an income correlation, so our analysis plan was updated accordingly. Table SI4.1 shows that differences in results for multiple testing adjustments with and without the inclusion of beta are trivial.

*Julia Rohrer compares this study to an experimental study of foster care placement out of Romania and finds differences*

One of the authors on our study is a PI on the Romanian orphanage experiment. There are too many extreme differences in the environmental conditions and nature of the experimental treatment to begin to make comparisons between the studies.

*We can quibble about whether the study might be suggestive of effects…*

Yes, let’s quibble…or maybe consider all of the information more carefully. We do not claim an ironclad case for causal effects and, as suggested above, recognize that anyone wanting to consider only the pre-registered results could justifiably conclude that we cannot reject the null hypothesis. We, however, conclude that the weight of the evidence taken as a whole supports a possible causal connection.


As someone who was a professional fMRI researcher, yes you absolutely should have default suspicion. I hate the trend of psychologists slapping on an EEG or fMRI component to their paper to make it seem more technical and "biological", when there is a much-more-relevant behavioral outcome they should be studying.

Stick to using brain-imaging for what it can actually teach us about: the structure of the brain, and maybe a bit about how it computes stuff. (Also, I may be biased, but I think you should be 10X MORE suspicious of EEG. It's such an incredibly noisy technique, and unfortunately has a really low activation cost to slap on your study. Anyone can buy one off the shelf and use it in a way that guarantees tons of weird artifacts.)


"beg you to believe I would have come up with the same objections eventually."

hahaha...I feel ya; from me, anyway, you have the benefit of the doubt.


This comment section is mostly dead, but just wanted to say I came across a reference to this study in the wild (class discussion of a different article that cites it at length) and got to say Hey! I know this study and they did dubious stuff with their p-values!

(and to her credit, the prof was like oh I didn't know that, I'll keep that in mind.)
