Are Scientific Journals Reliable? Study Exposes How Authors ‘Spin’ Abstracts
For centuries, scientific journals have been the most vital means of disseminating research findings within the scientific, medical and technical communities.
The painstaking studies published in scientific journals after intensive peer review advance scientific progress. That progress, in turn, springs from the free exchange of ideas, which other scientists are free to support or refute through their own research, analyses and theories.
Controversial views are intentionally published in journals to stimulate further debate and move the field toward a clearer understanding of critical issues and relevant variables.
However, scientific journals have also been dogged by the controversies that are a risk in any human undertaking. One of the more enduring criticisms has to do with "spin."
Spin, in this specific context, means the use of specific reporting strategies, "from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results," as the study's authors define it.
A study recently published in the journal BMJ Evidence-Based Medicine investigated spin in psychiatry and psychology research papers. Surprisingly, it found spin in more than half of the abstracts it analyzed.
The question now is: what impact might this have on doctors' decisions? As a corollary, the study sought to assess how much spin authors used in the abstracts of research papers published in psychology and psychiatry journals.
The study's authors examined papers published from 2012 to 2017 in six psychiatry and psychology journals, including JAMA Psychiatry, the American Journal of Psychiatry and the British Journal of Psychiatry.
The study notes that readers overwhelmingly read the abstract first because it summarizes the entire paper, and that doctors often use abstracts to help inform medical decisions.
The study focused on randomized controlled trials with "nonsignificant primary endpoints." The primary endpoint is a trial's main result, and "nonsignificant" means that, statistically, the researchers did not find enough evidence to support their hypothesis.
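As a rough illustration (not taken from the study), the sketch below simulates a small trial in which the treatment has no real effect; a standard two-sample t-test then returns a p-value above the conventional 0.05 threshold, which is what a statistically nonsignificant primary outcome looks like. The group sizes, outcome scale and 0.05 cutoff are illustrative assumptions.

```python
# Illustrative sketch only: a simulated trial with no true treatment effect,
# showing what a statistically nonsignificant primary outcome looks like.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical primary outcome (e.g. a symptom score) for two groups of 50 patients;
# both groups are drawn from the same distribution, so any difference is pure noise.
control = rng.normal(loc=50.0, scale=10.0, size=50)
treatment = rng.normal(loc=50.0, scale=10.0, size=50)

t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value >= 0.05:  # conventional significance threshold, assumed here
    print("Nonsignificant: not enough evidence that the treatment changed the outcome.")
else:
    print("Significant at the 0.05 level.")
```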
The study discovered spin comes in many forms, including:
- Selectively reporting outcomes. This means the authors only mention certain results.
- P-hacking, where researchers run a series of statistical tests but publish figures only from the tests that produce significant results (see the sketch after this list).
- Inappropriate or misleading use of statistical measures.
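To make the p-hacking item concrete, here is a minimal, hypothetical simulation (not drawn from the paper): it runs twenty independent tests on pure noise, where no real effect exists, and shows that a few of them will still cross the 0.05 threshold by chance. Reporting only those "hits" is the essence of p-hacking. The number of tests, group sizes and threshold are assumptions for illustration.

```python
# Illustrative sketch of p-hacking: run many tests on data with no real effect
# and report only the ones that happen to come out "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ALPHA = 0.05          # conventional threshold, assumed for illustration
N_OUTCOMES = 20       # hypothetical number of outcomes tested

significant_hits = []
for outcome in range(N_OUTCOMES):
    # Both groups are drawn from the same distribution: there is nothing to find.
    group_a = rng.normal(0.0, 1.0, size=30)
    group_b = rng.normal(0.0, 1.0, size=30)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < ALPHA:
        significant_hits.append((outcome, p))

# By chance alone, roughly 1 in 20 such tests will fall below 0.05.
print(f"{len(significant_hits)} of {N_OUTCOMES} tests were 'significant' by chance:")
for outcome, p in significant_hits:
    print(f"  outcome {outcome}: p = {p:.3f}")
```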
The study analyzed the abstracts of 116 papers, and 56% showed evidence of spin. Spin appeared in 2% of titles, 21% of the abstracts' results sections and 49% of the abstracts' conclusion sections, and 15% of the papers had spin in both the results and conclusion sections of the abstract.
The researchers also investigated whether industry funding was associated with spin. Surprisingly, they found no evidence that financial backing from industry increased the likelihood of spin. That may be because researchers have an ethical obligation to be transparent about the results of their research. However, authors are still free to pick and choose which details they include in the abstract.
The researchers behind the current study are concerned about how spin might affect doctors' understanding, since physicians base their clinical decisions largely on research papers and, much of the time, read only the abstract.