Doc Hall returns to our newsletter with a thought-provoking three-part essay on the risk of thinking we know the truth of things. Part II challenges us to keep improving our systems for learning.
Peer reviewers usually have access to check data. The public may not. Privacy is a common reason in safety studies; however, miscreants may also cherry-pick data from a study to attempt to refute conclusions they don’t like. Interpreting data – to get closer to reality – is hard enough without taking flak from parties who don’t want to get closer to reality.
Reviewers often find errors, but no one can check every study in detail. Because reviewing others’ data is seldom the most exciting part of a science career, the rigor of peer review invites criticism. Reviewers can’t escape using intuitive judgment about the validity of findings. Even without anyone intending to be deceptive, systemic issues of reviewer bias are considerable.
Negative findings are seldom submitted for publication, although negatives are important to others working in an area. About 40% of all reported findings can’t be replicated. Almost every issue of Science reports a published study that has been retracted. Some studies are suspect because a source of funding automatically injects bias. A few scientists are disciplined for outright fraud. Scientists are human too, and their struggles to get closer to the truth hold lessons for all of us.
Keeping up is a problem in science. Accessing published articles is not easy: most journal articles are behind a paywall, and journal subscriptions are expensive. University libraries complain about the cost, and both the number of journals and the volume of articles keep rising. Plagued by information overload, scientists can only keep up with findings significant to them. But which ones are those? Something significant can “come out of left field.”
Changes to ease these problems are underway, but science needs a new system of learning. The rest of us need a new system of learning too.