This is not a scare story; this is a “just the facts” story.

Last month in France, a trial of an experimental drug intended to treat mood, anxiety, and movement disorders went tragically wrong, resulting in the death of one participant and the hospitalization of five others. As reported by Reuters, the six men who fell ill had been in good health until taking the pill under study. In a statement, Biotrial, the laboratory testing the drug, which was made by the Portuguese company Bial, said it had complied with international standards for research. As the post-mortem investigation continues, it’s natural to ask the obvious question: How many people die or get hurt during research studies?

“No one knows,” Liz Woeckner, president of Citizens for Responsible Care and Research, Inc. (CIRCARE), tells Medical Daily.

Getting the Facts

In a 2001 paper, Dr. Adil Shamoo estimates that 19 million human subjects participate in clinical research trials, both public and private, each year. Clinical trials investigate whether a drug, device, or even a treatment strategy is safe and effective for humans after testing in the lab and on animals.

Because their purpose is to produce decision-making data, clinical trials follow strict protocols meant to generate reliable results and also to protect participants. Along with pre-testing and strict standards, institutional review boards, made up of doctors, researchers, and members of the community, oversee each clinical trial to make sure it is ethical and the welfare of the participants is protected. Yet terrible things do happen, as shown by the incident in France and another well-publicized case of six healthy men experiencing organ failure after taking a research drug at London’s Northwick Park Hospital.

Scratch the surface and you will soon discover these cases reported in the American Journal of Bioethics: Nicole Wan, 19, who died within two days of receiving a lethal dose of lidocaine in a 1996 University of Rochester research project; Jesse Gelsinger, an 18-year-old with liver disease, who died during a gene therapy experiment at the University of Pennsylvania; and Ellen Roche, a 24-year-old healthy volunteer, who died after inhaling an asthma-inducing chemical during a study at Johns Hopkins University.

While all of these incidents received some publicity, it is likely that other, harder-to-find cases exist as well.

Serious Adverse Events

A clinical trial must report any “serious adverse events” to its institutional review board. As defined by the FDA, serious adverse events (SAEs) include: death; life-threatening harm; hospitalization; disability or permanent damage; birth defects; injuries requiring intervention to prevent impairment; or any other serious event.

If these are the rules, how often are they followed?

“Inadequate and underreporting of trial results, especially safety results, is common,” wrote the authors of a 2015 study published in BMC Medicine. Their analysis of 300 trials with SAEs found that slightly more than a quarter simply did not publish their results, while nearly a third did not correctly report the number of SAEs. Among the remaining trials, 13 percent did not mention SAEs at all, two percent reported no SAEs, and 16 percent did not report the total number of SAEs per treatment group.

In fact, only 11 percent accurately published and described their serious events. “Although we do not know which the ‘true’ results are, we believe that these discrepancies clearly outline problems in the reporting of SAEs,” concluded the authors.

Another study, published in 2014, examined discontinued randomized clinical trials. Here an international team of researchers discovered that “discontinuation was common.” Nearly a quarter of all trials ended early, yet only a little more than a third reported the discontinuation to their ethics committees. When they did, the most frequent reason cited was poor recruitment; a small percentage (2.4 percent) ended due to “harm,” while the same proportion were aborted for “unknown” reasons.

Importantly, these two studies used data only from publicly funded trials listed with ClinicalTrials.gov. Yet “the lion’s share” of trials are conducted by industry (pharmaceutical companies and other private interests), which does not have to report, according to Woeckner. As she explained, even if you had the raw numbers of people harmed during trials, “you’d have to adjudicate each case.”

Human Volunteers

In his 2001 paper, Shamoo finds 878 institutional incident reports filed, 41 investigations made, and only eight deaths reported to the Office for Human Research Protections (among an estimated 70 million human subjects) during a 10-year period ending August 2000. Of the incident reports, he found that 44 percent included adverse events.

Because any death, whether occurring in a car crash, by suicide, or as a result of the research itself, must be reported, Shamoo argues that eight deaths is far too small a number for those millions of people, leading him to the “inescapable conclusion that research institutions supported by the NIH are failing to report, or are not accurately reporting adverse events including deaths.”

For an inside look at drug trials, Guinea Pig Zero is the go-to source. Describing itself as “an occupational jobzine for people who are used as medical or pharmaceutical research subjects,” GPZ features first-person accounts from volunteers who enlist in clinical trials, often to earn a small honorarium. While some contributors recount the tedium of a particular clinical trial, others relive personal (and witnessed) experiences of research horrors.

And so the research continues, with distressing reports of accidents and other calamities rising from the misty bog on occasion. Last year the World Health Organization called for disclosure of unreported clinical trial data, while industry contemplates “open science” policies complete with data sharing. Honesty is not so easily decreed.