“Chicken,” repeated for the entirety of a paper.
“Get me off your fucking mailing list,” likewise.
Over a hundred papers automatically generated by a computer. Ditto.
The piece of shit that launched the anti-vax movement? Goddamn motherfucking peer reviewed and published.
These have all been published in allegedly academic journals. The first three were held up as examples of the predatory publishing problem. The last is used by anti-vax zealots as a reason why we shouldn’t trust the scientific establishment.
Recently, a paper was published about “the penis as a concept” in a predatory publishing outlet. It was rejected by a more reputable journal and was accepted by a pay-for-play outlet with, at best, questionable publishing standards. By which I mean the outlet had zero publishing standards.
Now, people in the skeptic community are holding this up as a reason why ‘gender studies’ is entirely a bullshit field.
Oh, for fuck’s sake.
Are immunology studies bullshit because Andrew Wakefield was able to publish in The Lancet? No, and that’s an actual reputable journal.
Are genetics entirely a bullshit field because Dr. Pamela Ronald of UC Davis, an excellent researcher, voluntarily retracted published research after she found an error in her lab’s work? No, and it’s an insult to her, the entire field of genetics, and to the ethical and honest way she handled the retraction to suggest otherwise.
Is the show Adam Ruins Everything all a hoax because they were able to publish an entire script of an episode in a reputable sounding yet entirely bullshit pay-for-play journal? Probably not. I trust that hair.
There are a lot of issues in academia and the peer review process. One of them is that there is no incentive to perform studies that test the reproducibility of earlier studies. Reproducibility simply means that the results of a study can be replicated in an unbiased environment following the same standards as the original experiment. In a survey published in Nature last year, more than half of scientists reported failing to reproduce their own experiments, and about 70% reported failing to reproduce a peer’s experiments. Please take into account that this was a survey of scientists, not a study, but it is telling that it covered many disciplines in the hard sciences, including chemistry, physics, engineering, and medicine. This was not a survey about social sciences. This was a survey about the sciences that we think of as hard, binding, and finite.
We have an academic crisis in publishing.
Along with reviewing literature for use in their own research, a researcher now has the added task of evaluating each journal’s reputation. Are they pay-for-play? How many retractions have they issued? How often do they turn papers down? Can you trust a normally reputable journal if it has published a piece that was later retracted? And how trustworthy are the peer review process, and academia itself, if they keep passing through work that would be better off lining a bird cage?
This needs fixing within the annals of academia, and we can’t fix it just by shaking our fists at the academic institutions. But we can arm ourselves against both the people who produce bad studies and the people who profit off of them. Just like the bullshit artists who try to sell you magic crystals, the people who try to sell you bullshit studies passed off as science are equally predatory.
How can you, as a supporter of science, navigate this?
When you first glance at a journal article, how can you tell whether the journal is reputable? We all know the big names. British Medical Journal. New England Journal of Medicine. The Lancet. Nature Chemistry. Physical Biology Monthly.
I made up the last one. I mean it might be a journal somewhere, but doesn’t it sound real enough? Sounds as real as Nature Chemistry, and that’s real.
So how can you tell? You can judge a lot by a journal’s Impact Factor. The Impact Factor is not everything, but it’s a good guide to how seriously other scientists take research in that journal, because it measures how frequently the journal’s articles are cited. The thinking behind the Impact Factor is that, generally, more reputable and valuable information will be cited more often. For example, in 2015 the IF of the New England Journal of Medicine was 59.5, the highest in the field. The two-year IF of Nature in 2015 was 38.1. The BMJ was ranked fourth among medical journals in 2016, with an Impact Factor of 19.6.
And The Lancet, the journal that published the paper that started the anti-vax movement, has an Impact Factor of 44. One extremely damning study does not an entire field, or an entire journal, kill.
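For reference, the standard two-year Impact Factor is just division: citations received this year to a journal’s articles from the previous two years, divided by the number of citable articles it published in those two years. Here’s a minimal sketch of that arithmetic; the numbers are made up for illustration and are not real journal data.

```python
def two_year_impact_factor(citations_this_year, citable_items_prior_two_years):
    """Two-year Impact Factor: citations received this year to articles
    published in the previous two years, divided by the number of
    citable articles the journal published in those two years."""
    return citations_this_year / citable_items_prior_two_years

# A hypothetical journal that published 100 citable articles over the
# prior two years and drew 5,950 citations to them this year would
# score like the NEJM's 59.5.
print(two_year_impact_factor(5950, 100))  # 59.5
```

Notice how sensitive the score is to the denominator: a journal that publishes few articles but lands one blockbuster can post a huge IF for a year, which is one of the criticisms of the metric discussed below.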
What do we take away from this? The numerical range from the top-ranked medical journal in the world to the fourth is fairly big, and all of those journals are extremely reputable. We also have to remember that Impact Factor is clearly not the only thing that determines whether you can trust a journal. Some studies may be extremely valuable for the insights they offer or the new questions they provoke while rarely being cited in new studies, and this is a criticism often levied against Impact Factor as an absolute measure of journal quality. Occasionally a single paper will briefly and dramatically boost a publication’s IF, and, as is the case with many publications, a small number of papers are responsible for the majority of citations. And as we’ll get into in the next section, there are entire fields of hard sciences that tend to have lower Impact Factors just due to the nature of research in those fields. Like a litmus test for a chemical, Impact Factor does not tell you all the qualitative properties of a journal, but it’s a good raw indicator.
What does any of this have to do with… penis as a construct?
So some people were like “hey let’s put a paper into a journal and disprove the entire field.”
Remember those nice high Impact Factors, and the other bullshit articles we discussed earlier?
The ‘Penis As A Construct’ paper was turned down by one journal. That journal had an Impact Factor of 0.00. The authors literally could not get published in a place that nobody else references.
THEN, they got published in a pay-for-play journal that does not have an Impact Factor indexed at all. Not even a zero. The place is so new and so unknown that it isn’t even on the academic map. In an article about their “accomplishment” on skeptic.com, the authors of the suspect paper tried to play up how legitimate Cogent Social Sciences is by listing its professional associations, but this was a deceptively artful act of cherry picking. Cogent OA, the umbrella site for Cogent Social Sciences, doesn’t even have a set fee for authors. Its model is “pay what you can” and you’re published. It’s that much of a shit journal. Several times on their site they have written editorials kvetching about Impact Factor. Funny; maybe they should have spent that time actually reviewing the bullshit that was sent to them instead of bitching that nobody is citing the “work” they’re being paid to publish.
So why am I even paying attention to this? It’s one of a zillion other bullshit stories being published, right?
If the people who published this had just said “we got this published; here’s another predatory journal to avoid,” there would be no problem. I’d have been thrilled if they had gone that route, as I think people deserve to be warned about predatory publishing houses. But the way they presented events suggests that gender studies as a whole is a bullshit field.
If you want to debunk an entire field, you are going to have to present more compelling evidence than publishing in a journal that writes editorials complaining that nobody cites their articles.
Do low Impact Factors across journals invalidate a field?
Gender studies is not my field, but picking apart bullshit is. To claim that the field is bunk because of one junk paper in one junk journal? Complete bullshit. I went looking for other academic studies on gender, and since I wasn’t sure how else to gauge reputability, I decided to check the Impact Factors of the journals. The top-ranked journal for gender studies, Gender And Society, has an Impact Factor of 2.461. Sounds low after the journals we saw earlier, right?
But let’s think about what this means. A lower Impact Factor also means it’s a field that’s not as commonly referenced. BMJ, The Lancet, and NEJM cover subjects that are cited frequently. I looked up a few other hard-science fields for comparison.
The second-ranked journal in anatomy, the American Journal of Surgical Pathology, has a lower IF than Gender And Society. Architecture is a hard science, right? There isn’t a single architectural science journal with a higher IF than Gender And Society. You know what else had a higher Impact Factor than the top-ranked architectural journal? A peer-reviewed journal on alternative medicine.
Like journals in every other field, Cogent Social Sciences publishes bullshit. And that’s a problem. But this little “look at the bullshit paper we published” demonstration did nothing to prove that the social sciences have a problem worse than the rest of academia.
How do I know? I took ten minutes to look through the rest of Cogent’s branches, and they’re publishing the same generally ‘meh’ studies you see in other journals too. Juniper berries treat cancer… in a petri dish. That one was in Cogent Chemistry. It doesn’t debunk the field of chemistry, but it’s alarming to see a “promising skin treatment” touted in a chemistry journal when it’s only been tested on cells in a petri dish. Digging further, I found two more studies on the effect of essential oils on allegedly cancerous or inflamed cells, both published by representatives of the DoTerra essential oils corporation. The same two researchers also published a study on the effects of cardamom oil in Cogent Medicine. The researchers had backgrounds in nutrition, not cancer research, and they were on the payroll of a company trying to make essential oils look good.
They paid the price of admission, so their work was published. In their haste to prove that Cogent publishes only bullshit about gender studies, the skeptical researchers who published the Penis Paper™ somehow missed that Cogent was fine with working as a pay-for-play publishing arm for DoTerra.
What does any of this prove?
The work published under the Cogent umbrella is suspect. However, this comes nowhere near meeting the burden of proof that any one field Cogent publishes in, ranging from arts to physics, is invalid. If we treat this as an invalidation of a field, it gives the anti-vax movement license to similarly disregard immunological studies based on retracted papers in legitimate journals. Ditto the anti-GMO movement. Ditto whatever the hell journals flat earthers and chemtrailers pretend aren’t science (though I suspect they get their info from Alex Jones). Unfortunately, this episode has shown the willingness of some people in the skeptic community to throw out burden of proof when it fits their ideology, to assign blame to one variable when there are multiple variables to consider, and to disregard the core tenet of science and skepticism: follow the evidence wherever it takes you.
Like, for example, how did “chicken” get published on a paper so many damn times?
To get to the other side of the peer review problem.