Faulty research methods raise questions about cancer trials

April 28, 2008

A review covering 75 group-randomized cancer trials over a five-year period indicates that fewer than half of the studies used appropriate statistical methods to analyze the results. It suggests that some trials may have reported that interventions to prevent disease or reduce cancer risks were effective when in fact they might not have been.

The review by David Murray, Ph.D., chair of epidemiology at the Ohio State University College of Public Health, and colleagues appeared online March 25 in the Journal of the National Cancer Institute.

The reviewers found that more than one-third of the trials contained statistical analyses that they considered inappropriate to assess the effects of an intervention being studied. Of those studies, 88% reported statistically significant intervention effects that, because of analysis flaws, could be misleading to scientists and policymakers.

"We cannot say any specific studies are wrong. We can say that the analysis used in many of the papers suggests that some of them probably were overstating the significance of their findings," Murray said. "If researchers use the wrong methods and claim an approach was effective, other people will start using that approach. If it really wasn't effective, then they're wasting time, money, and resources and going down a path that they shouldn't be going down."

The review identified 75 articles, published in 41 journals from 2002 to 2006, that reported intervention results from group-randomized trials related to cancer or cancer risk factors. Thirty-four of the articles (45%) reported using appropriate methods to analyze the results. Twenty-six articles (35%) reported using only inappropriate methods in the statistical analysis. Six articles (8%) used a combination of appropriate and inappropriate methods, and nine articles (12%) contained insufficient information to judge whether the analytic methods were appropriate.

"Am I surprised by these findings? No, because we have done reviews in other areas and have seen similar patterns," Murray said. "It's not worse in cancer than anywhere else, but it's also not better. What we're trying to do is simply raise the awareness of the research community that you need to attend to these special problems that we have with this kind of design."

Murray and colleagues called for investigators to collaborate with statisticians familiar with group-randomized study methods and for funding agencies and journal editors to ensure that such studies show evidence of proper design planning and data analysis.

The use of inappropriate analysis methods is not believed to be willful or in any way intended to skew a trial's results, Murray said.

"I've seen creative reasons people give in their papers for using the methods they use, but I've never seen anybody say it was done to get a more significant effect. But that's what can happen if you use the wrong methods, and that's the danger," he said. "What we want to know from a trial is what really happened. If an intervention doesn't work, we need to know that, too, so we can try something else."