Some considerations for interacting well with science

(Audio reading of today’s post.)

The sorts of questions science can and cannot answer and, perhaps even more importantly, how to use and interact with science (as a human endeavor) properly are things I try to cover (but too briefly!) in some of the classes I teach. But, although the topic has come up here and there, I haven’t tried to discuss it extensively in a Substack post, so I thought, “that might be helpful to some people” (just, you know, force your homeschoolers to read it, I linked a quiz at the bottom… that part is a lie). I hope this post doesn’t come off as too “wow, science is totally messed up today”… obviously I love science, and science used well is a great thing, but almost anything removed from its proper place and limits becomes trouble.

So let’s jump in. The following are not in order of importance, more like the order in which they came into my mind.

Publication incentives and poorly done science

We sometimes talk about the problem of “publish or perish” culture in science, which means that scientists feel pressure to publish (usually journal articles) to advance their career or, sometimes, even just keep their jobs. Put briefly, scientists want more publications for their curriculum vitae. (That’s Latin for something like “the racetrack of life”, I learned that recently - this education is free!)

Now that isn’t all bad inasmuch as it is an anti-laziness incentive; nobody objects to the fact that car factories feel obliged to produce automobiles… but a reliable assembly line is a different thing from the production of what is supposed to be novel human knowledge. Even when there isn’t media attention on scientists, that pressure can lead to the rushing out of low-quality work. This is probably at least part of the explanation for the “irreproducibility crisis”: it has been noticed that, when checked, an apparently increasing number of scientific results cannot be replicated, sometimes later resulting in retractions of already published work.

By one estimate, from 2001 to 2010, the annual rate of retractions by academic journals increased by a factor of 11…

I’m not going to have a separate section for the problem of the financing of science today, but it’s worth also noting the reasons the linked article gives.

Surveys of scientists have tried to gauge the extent of undiscovered misconduct. According to a 2009 meta-analysis of these surveys, about 2 percent of scientists admitted to having fabricated, falsified, or modified data or results at least once, and as many as a third confessed “a variety of other questionable research practices including ‘dropping data points based on a gut feeling,’ and ‘changing the design, methodology or results of a study in response to pressures from a funding source’ ”

The situation gets worse when science becomes a focus of media and political attention. It is clear to me right now that a mediocre or even bad publication that feeds into a promoted media/state narrative (see that Duke masking study for a now infamous example of such work) will garner praise from most media and universities (the Duke authors got a NYTimes editorial out of it long after flaws in their study had been pointed out, and I haven’t seen any “sorry, we messed up” from them), whereas a much better publication that calls into question a desired narrative will be torn apart for any little mistake or speculation it contains. The incentives and pressure this creates for scientists are obvious (and of course that’s the point).

And if rebutting your study isn’t possible, policymakers who should find the work extremely relevant to their rulemaking will just ignore it anyway. This could lead into a whole second post, which I will not write right now, about how our information-saturated world practically means that people who want to manipulate you can find some study somewhere to point at to justify their manipulation, and if they have to ignore a dozen other studies to make a convincing case, are you ever going to know what they neglected to mention?

Empirical studies v. modeling studies

On a related note, I am happy to see more people becoming attuned to the difference between empirical studies and modeling studies. Did someone actually do an experiment (empirical), or did they dump a bunch of assumptions into a computer model, run the model, and see what the model spits out? Of course the latter is a sort of experiment as well, but you can make a model “conclude” literally anything depending on the assumptions you enter, it isn’t necessarily tied to reality the way a physical experiment is.

A few days ago the US Secretary of Education tweeted out four studies he claimed demonstrated the importance of masking children in schools (you know that thing that, at least for elementary ages, almost nobody on the planet does except the United States… don’t worry about it people, science is different here). Now we should actually give him some credit for linking four studies instead of just saying “experts and studies say”. But of course that also opens you to criticism about what the studies actually say. And on that point:

And Corey later, referring to the modeling study, followed up to say:

Well, this sort of serious criticism is the way we want to attend to scientific studies. Even university press releases mess this up. I’m not going to be able to find it now, but I do remember, relatively early in COVID, a University of Michigan modeling study that dumped into the model the idea that masks + social distancing reduced disease transmission by 80% (or something like that, I forget the exact number). The university press release promoted it as a “see, look at the importance of masking” study, and of course legacy media that picked it up from them then told the same story. “Masks work” was an assumption of the model! Of course it would find that masking was effective.
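To see why a modeling study cannot validate its own assumptions, here is a toy sketch (hypothetical numbers, not the actual Michigan model): if we feed in the assumption that masks cut transmission by 80%, the simulation will of course “find” that masking dramatically reduces cases.

```python
# Toy epidemic model: each infected person infects `r0` others per
# generation. Note that the "mask effectiveness" is an INPUT to the
# model, not a finding of it.

def simulate(generations, r0, mask_reduction=0.0):
    """Return total infections after the given number of generations."""
    effective_r = r0 * (1 - mask_reduction)  # the assumption is baked in here
    infected, total = 1.0, 1.0
    for _ in range(generations):
        infected *= effective_r
        total += infected
    return total

no_masks = simulate(generations=5, r0=2.0)
with_masks = simulate(generations=5, r0=2.0, mask_reduction=0.80)  # assume 80%

print(f"No masks:   {no_masks:.0f} infections")
print(f"With masks: {with_masks:.0f} infections")
# Unsurprisingly, the model "concludes" masks work -- because we told it they do.
```

Running the masked scenario through this model can only ever echo back the 80% assumption we entered; the output tells us nothing about whether masks actually reduce transmission in the real world.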

The details are what matters

Perhaps the most abstract but also most important point I try to drive home to students is that you have to read the details to understand the science. What was the method of the study? What were the limitations of the study? These are critical questions if you want to understand a study correctly.

Corey mentions above that three of the studies did not have a control group. Humorously, one of the study authors even pointed this out to our Secretary of Education.

This is a good example of “you actually have to check the method of the researchers”. I hate to keep mentioning masking studies but there are so many good examples here. I highly recommend you actually click through the following to the full thread by Karol, which tweets through a pro v. con panel about school masking hosted by the American Federation of Teachers. But I’ll just mention one tweet now:

Right. We’ve seen these studies, “I put a mask on a water-spitting robot and measured that it spit less water while wearing the mask”. OK, but how much does that actually tell us about a six-year-old wearing a mask with (or trying to avoid) COVID-19? You have to read the study in detail first to understand what their method actually was, and then ask good questions.

Emoting and name-calling are not science

Short and, you’d think, obvious point here, but calling people names in the name of science is a popular activity today, and usually the people who do it are not people actually able to articulate a scientific defense of their position. We’ve had it for a long time with “climate change denier” or whatever (a literally false description anyway because I don’t know anyone who denies that the climate changes over long time periods), but as soon as we started adding “anti-masker” and “anti-vaxxer” to the list you knew we were laying science aside for emotionalism. And it’s not irrelevant that all three terms eliminate the possibility of nuanced distinctions. What if you think vaccines make sense for the elderly but not for children, for example? Doesn’t matter, “anti-vaxxer”.

I put this part of the article here because it was also part of Karol’s thread about the pro- or anti- child masking debate.

And Slothrop, here responding to the governor of California deciding to mandate vaccination for children in California schools, wrote the following. For clarity, Newsom had referenced the measles vaccine as part of a “we already mandate other vaccines” argument to mandate COVID-19 vaccination:

It often takes time to get the science correct

One of my least favorite things about our present society is its tendency toward “hey, we just learned” (so we imagine) “this thing yesterday, let’s force it upon everyone right now”. That is profoundly unwise, but many tendencies in our society, from the “right side of history” mindset to a media / social media environment that wants you to take a position on everything right now, encourage that unwisdom. We want quick and easy answers to every question, and we falsely imagine that this is what science produces. It is simply a historical fact that it takes time to understand complicated things, and very often we thought we understood something and later realized we were wrong (virtually every modern scientific paradigm overturned a previous paradigm that was, at some point, the consensus of science). Ergo, for example, it is profoundly unwise to mandate a brand new vaccine for children for a disease of virtually zero threat to them. (Particularly when we’re still now trying to understand how effective it really is.)

For example:

I share the above because of the “just basic science” comment, as if he’s referring to a well-established consensus instead of something that, actually, almost nobody believed two years ago and still has poor evidence in its favor. Science rarely goes from “that’s not true” to “everyone knows that is true” that quickly… but it sure seems to if you’ve “learned” about the world from CNN.

As Colin Wright knows well, you could also put topics of sex and gender in this category, as we went overnight from two sexes to “everybody knows there aren’t just two sexes”. Everybody most assuredly does not know that.

Science is a specialized field of human study and special training helps you

Anyone who reads me regularly will know that I am strongly against the “just turn off your brain and trust the experts” mindset. (Feynman also rejects it.) Nevertheless, it does still remain true that specialized training helps, not just in the doing of science (which is obvious), but in its interpretation. And it remains true that one of our problems right now is that we’ve got a whole lot of officials, politicians, journalists, and fact-checkers trying to “play scientist”, and many of them just aren’t very good at it. They don’t know scientific facts or terminology, they don’t think in scientific ways, and especially they aren’t used to thinking with the precision that science requires.

The following… isn’t actually a very good example of that problem, but it did make me laugh so I’m sharing it anyway. Justin Lee, commenting upon a Twitter-promoted fact-check from a few days ago:

They have since corrected the article, but their original fact check, in trying to describe a calculation of myocarditis risk that was off by a factor of 25, stated that the calculation was off by “25 orders of magnitude”. If you don’t know the lingo, in science an “order of magnitude” means a factor of ten. So “25 orders of magnitude” would refer to a calculation off by a factor of 10,000,000,000,000,000,000,000,000 (a 1 followed by 25 zeros), instead of a factor of 25.
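For anyone who wants the arithmetic spelled out, here is a quick sketch of the difference between “a factor of 25” and “25 orders of magnitude”:

```python
import math

# An "order of magnitude" is a factor of 10, so being off by
# n orders of magnitude means being off by a factor of 10**n.

def factor_from_orders_of_magnitude(n):
    return 10 ** n

# A calculation off by a factor of 25 is off by only about 1.4
# orders of magnitude...
orders_for_factor_25 = math.log10(25)

# ...while being off by "25 orders of magnitude" would be an error
# of astronomical size: a factor of 10**25.
factor_for_25_orders = factor_from_orders_of_magnitude(25)

print(f"Factor of 25 is ~{orders_for_factor_25:.1f} orders of magnitude")
print(f"25 orders of magnitude is a factor of {factor_for_25_orders:,}")
```

In other words, the fact-checkers overstated the size of the error by roughly twenty-three and a half orders of magnitude.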

Science is not infallible and unchanging

It will be uncontroversial for me to state here that many people are trying to make “the science” into their religion. I see “experts and studies say” these days used exactly the way a Roman Catholic would use “according to tradition and scripture”. (The phrase can also be used in a manipulative way, since “experts and studies say” may be technically true if you can find two experts and one study, even if there are 15,000 experts and 10,000 studies saying the opposite.) I, uh, added a submission to the Babylon Bee headline forum to do my part.

(Got a bunch of votes for that one later, it did well.) Ahem! But as with any religion, people want “the science” to be an infallible, indisputable, and unchanging rule book, and science is none of the three. I see people saying “that contravened a public health order!” as if they’re appealing to scripture instead of to some rules very recently created by some probably partisan and probably incompetent state employees somewhere.

It only kind of fits here, but as part of the “science as religion” theme I have to share a recent tweet from Tara, this one also motivated by California announcing the mandated vaccination of children.

Upon whom is the burden of proof?

This will be my last heading for this long post! Especially when science intersects with public policy, “upon which side is the burden of proof?” becomes an important question. (“Proof” is actually a bad word to use, proof is hard to come by in science, it’s better to talk about “the data supports” or “the data does not support”, but the common phrase is “burden of proof”.) Again related to Karol’s thread above about that AFT panel, if you want to order schools to require masking, should you have to first demonstrate that they do work, or does the other side have to demonstrate that they don’t (one panelist apparently implied the latter)? As part of your opposition, do you have to demonstrate that masks do have other negative effects upon children, or should the other side first have to demonstrate that they don’t? Needless to say the usual rule is that you must first demonstrate the safety of whatever new thing you are proposing, or at least clearly determine and list possible negative consequences for all to see and consider, but in our monomania about avoiding COVID-19 those protective customs seem to be now ignored.