As I explained to TPM Reader JG, who wrote the email below, when I’m writing about or describing something where my knowledge is very limited, I try to keep it vague and refer people to the source. I’m sharing JG’s follow-up on the “rosy scenarios” post because he gets at what is probably the key element of why these models have so much uncertainty built into them.
A comment on your “Rosy Scenarios” post. I read Carl Bergstrom’s Twitter threads about the IHME study and think you’re understating just how rosy the estimates from that study are. Setting aside the epidemiological assumptions – that there is “Wuhan-style” social distancing for the duration of the epidemic – the main problem is that the study is an exercise in mathematical curve fitting, not biology.
Essentially the authors posit that cumulative deaths will follow a certain type of function, and they are using scarce and quite limited data to constrain how that curve will look in the future, without tying their projections to any underlying biological mechanism of infection or transmission. To me, that makes the study “interesting but not significant.”
Full disclosure: I’m a geologist, not a biologist. Still, I find it hard to believe that the authors of the paper would mean for their results to be interpreted this way, much less to directly inform policy before peer review. This disconnect – between what the scientists actually say and what is digestible for the public – is always the challenge of science journalism, and one reason I go back to the original studies before believing the popular media write-up on them. (With exceptions, Ed Yong at The Atlantic being one.)

There is often the urge to sexify results for public consumption, and further to suggest “scientists have solved this longstanding problem!” when really they have just re-interpreted or added another opinion to the canon. Unfortunately, as people scramble for hope, we are bound to see more of this, and these researchers will become an easy target for public ire when their projections are not met – even if it is not their fault; see the case of the Italian scientists who were convicted in 2012 over reassuring predictions before the 2009 L’Aquila earthquake. Believe it or not, the science community is inherently conservative, and we should all “wait and doubt” (Sinclair Lewis) before we place meaning on these results, especially if we want more serious investment in science when this crisis fades.
Again, you can read Bergstrom’s argument here. Everybody is of course trying to provide the best information possible under crisis conditions. As I mentioned, I was watching an interview with the lead modeler while I was writing the earlier post, and I did not get any sense that he was saying the public or policy makers were over-interpreting a model designed to estimate a notional set of conditions. Since I wrote that post I’ve heard from various readers with accounts of how the model is being used by hospital systems and governments around the country to plan for the epidemic in their regions. Lots of studies have been released to the public in advance of peer review during this crisis. That certainly seems like a good decision, as long as they are understood in that light: the need for the information is now, not six or 12 months from now. But it does look like there are significant reasons for caution about the numbers this model predicts.
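To make JG’s point concrete, here is a minimal sketch of what “curve fitting without biology” looks like in practice. The data points and the particular functional form (a scaled error-function sigmoid, a common choice for cumulative epidemic curves) are my own invented illustration, not the IHME team’s actual data or code. The key takeaway is in the last lines: the projected final toll is an extrapolation far beyond the observed data, so its uncertainty is enormous.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def cumulative_deaths(t, p, t0, s):
    """Sigmoid-shaped cumulative curve: p = final toll,
    t0 = day of peak daily deaths, s = spread in days."""
    return 0.5 * p * (1.0 + erf((t - t0) / s))

# Invented early-epidemic observations: (day, cumulative deaths).
days = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)
deaths = np.array([5, 12, 30, 70, 150, 290, 520], dtype=float)

# Fit the posited curve to the sparse early data.
params, cov = curve_fit(cumulative_deaths, days, deaths,
                        p0=[10000.0, 30.0, 10.0], maxfev=20000)
p, t0, s = params

# The fitted "final toll" p is constrained only by the shape
# assumption, not by any transmission mechanism -- its standard
# error is typically huge relative to the estimate itself.
err = np.sqrt(np.diag(cov))
print(f"projected final deaths: {p:.0f} +/- {err[0]:.0f}")
```

Many quite different curves pass almost equally well through the early data, which is why small changes in the inputs can swing the projected totals dramatically.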