The relationship between nutrition and health is firmly entrenched in the mainstream media, and everyone from career scientists to our next-door neighbor seems to be an expert on the topic. Becoming skilled in research evaluation, being aware of media perspectives, and understanding different forms of bias are extremely important in this rapidly evolving field.
We recently interviewed Dr. Andrew Brown of the University of Alabama at Birmingham’s Office of Energetics and Nutrition Obesity Research Center, whose voice has risen to the foreground in discussing research evaluation and scientific integrity. In the last of our three-part series, Dr. Brown discusses research funding and bias.
FOOD INSIGHT: Let’s talk about another hot topic in your field: the funding of research. You have received funding from a variety of sources to conduct research. Can you describe how, if at all, working with different funders impacts your work?
DR. ANDREW BROWN: I think it is first important to clarify that none of the research funding comes directly to me. Funding goes to the university, which has a number of checks and balances to assure ethical research conduct and disclosure of potential financial conflicts of interest. With the exception of a current NIH grant, none of the funding that I have worked with was granted or gifted to the university under my name.
These details are very important to help separate various interests from the science—regardless of whether those interests arise from industries, foundations, or government—and from personal interests, political viewpoints, or financial gain.
We always clarify and specify conditions for collaborations, gifts, or grants. For instance, unrestricted gifts are just that: There is no explicit restriction on what the university can do with the money.
For research-related funding, we are always sure to define what exactly our role is in the research. For consulting, we serve as a resource, giving our expertise or advice to a funder, but do not necessarily take responsibility for the research or the results.
However, when I put my name to research, there must be an understanding with the funder that we retain full editorial discretion in writing and publishing the work, regardless of the outcomes. This applies to all funders (government, industry, foundations, etc.), but when we work with industry we tend to make the agreements even more explicit because of the increased public scrutiny of industry funding (warranted or not).
Concerns about scientific autonomy occur with all forms of funding, though. Sense About Science is investigating the delay and suppression of government research, for example.
Why are different funding sources perceived so differently by some?
I can say personally that it is easy to doubt when a company, foundation, or research group publishes results concurrent with their known ideology, especially if they disagree with mine. That is my human response, my knee-jerk reaction.
I then have to actively engage my scientific training: Look at the data, the methods, and the logic; nothing else matters for science. Sometimes my suspicions are confirmed: The methods are shoddy or the conclusions overblown. Sometimes I am wrong: They have a solid study that gives fairly convincing evidence against my personal or scientific understanding. Often, reality is somewhere in the middle.
Forcing myself to be skeptical when a study does agree with my viewpoint, however, is harder but even more essential for sound scientific conclusions. If I read a paper that supports my worldview, I have to ask myself, “If this study had disagreed with what I believe, would I have readily accepted it as evidence?” If the evidence is not strong enough to convince me I am wrong, I should not treat it as strong enough to support my point of view.
Focusing on the funding, although a simple and human response, is the last thing of value to evaluate in a study.
People seem more critical of published research when it confirms a research funder’s interests. Is the publishing of positive results a new phenomenon?
Suspicion of funding is not new, and some high-profile examples (certain pharmaceutical, tobacco, and oil companies, for instance) have planted that seed. Unfortunately, as I mentioned, there is too much focus on funding and too little on the science. Many studies attacked because of funding could be criticized simply on study design limitations or reporting, without getting into ad hominem or other logical fallacies.
Then, there are people who are biased toward only accepting results if they disagree with the funder.
A recent study, for instance, funded by a company that sells breakfast products showed that skipping breakfast resulted in greater weight loss than eating breakfast. Several news stories indicated that the results should be believed because they disagreed with the funder’s interests, but this creates a bias: If we only believe results in a particular direction, then we are selectively discarding information in a highly biased manner.
Again, focusing on funding creates a bias unrelated to the data, the methods, or the logic connecting data to conclusions.
How can we improve transparency and reduce public distrust in research funding?
I think we first have to stop attacking people who disclose information. We received what we jokingly called “forelash” (as opposed to backlash) for giving a webinar on improving science communication for a press organization because the webinar was apparently funded by a company (unbeknownst to us until our “forelashing”).
The contents of the webinar were attacked before we even gave it because of the financial underwriting. This is, by definition, a logical fallacy: judging content based on who provides the information or with whom they are affiliated. Ironically, some of the materials we presented were derived from the very organization criticizing us. Serving up paranoia and conspiracy in the guise of “investigative journalism” without any attention to the quality of science is certainly part of the problem.
On the other hand, we must recognize that some of the distrust started with genuinely disappointing examples of funder and researcher conduct. Rather than treating the discovery of, mitigation of, and perseverance through those problems as evidence of the self-correcting nature of science, critics recount them as scary failure stories to denigrate science and scientists.
Particularly in public health-related topics, any time someone disagrees with a company that is actively involved in funding research, the company is compared with tobacco [companies]. This is obviously not helpful and distracts from science.
But people have many concerns beyond scientific conclusions, and dismissing these very real examples as isolated incidents is a much less compelling and attention-grabbing narrative than stories of a corrupt scientific enterprise. Even though conflict disclosure is, by definition, a bias-inducing practice (that is, it brings in speculation rather than data), it will remain important for some degree of public peace of mind—again, only if it is not used as an ad hoc weapon against those making disclosures.
It is more important for funders and the scientific community to continue to build a research enterprise where results are trustworthy, regardless of funding. The AllTrials campaign is one such example; independent data safety monitoring boards are another; and study registration through ClinicalTrials.gov is yet another.
In these ways, additional eyes, worldviews, and realms of expertise can be applied to the same data and studies to help address the multitude of human biases beyond the superficial concern of funding. I hope the continued development of these initiatives will help relegate ad hominem attacks on science and scientists to conspiracy theory, rather than mainstream discourse.