She’s probably mostly kidding when she tells the origin story this way, but Kathy Hudson—until last year the deputy director for science, outreach, and policy at the National Institutes of Health—says that a massive update to the NIH’s rules for funding science started with humiliation. A pal who ran approvals at the Food and Drug Administration, Hudson says, “used to walk around and talk about how NIH funded small, crappy trials, and they would say it at big gatherings.” This was Washington, in front of congresspeople—or at conferences full of leading researchers. “I would get so pissed off,” Hudson says.
But then, well, she took it to heart. “I started to look at our trials and what kinds of policies we had, to make sure investments in clinical trials were well spent,” Hudson says. It turned out they were not.
This week, after almost a decade of work, some new rules go into effect for researchers funded by NIH. If they’re using human beings in their experiments, most of them now have to register their methodologies on a government-built website, clinicaltrials.gov. They have to promise to share whatever they find, even if they don’t prove what they hoped—especially if they don’t prove it. And they have to complete training in good clinical practice.
Philosophically, almost no one disagrees with the intent. Make science more open, more ethical, and smarter. But some researchers think the rule change will bring with it more than just confusing, possibly burdensome new bureaucracy, and maybe even set back all of basic bioscience. They’re just as pissed off as Hudson used to get.
The changes to the rules aren’t small potatoes. The agency awards tens of thousands of grants, $17 billion in 2016; it’s a key source of money for US scientists and a primary driver of new biomedical knowledge. The process for getting one of those grants is competitive, whether you’re doing basic science, preliminary investigations, or giant clinical trials that attempt to figure out whether a new drug or therapy cures a disease. “Clinical trials are super-special, because people are involved and at risk, and it matters,” Hudson says. “So we should make sure they’re really good.”
The new rules expand the definition of clinical trials to cover work with human subjects that didn’t used to count as clinical. Yet the NIH’s bureaucratic requirements still ask for information about those experiments that maps onto the old definition, and much of it doesn’t apply to smaller studies. The point is, if a researcher has to figure all this out, they might just give up altogether—and not do the science.
Back in the early 2010s, Hudson and Francis Collins, the director of the NIH, set out to get the clinical trial rules sorted. That meant trials had to be well-designed, with enough statistical power to answer the question they set out to, and researchers would have to pre-register those designs to make sure they didn’t try any shenanigans at the end—changing the thing they said they were trying to measure so their data looks more convincing. “We invest in clinical studies where we tell human beings, ‘your participation in this clinical study may not benefit you, but it will benefit other people because we will learn from your contribution,’” Hudson says. “Too frequently that is an outright, blatant lie. Something like 5 percent of all clinical studies terminate without generating any data.”
So another condition: Share the data, no matter what. “People, academics in particular, have an incentive system that rewards publication and getting grants,” Hudson says. “Posting data on clinicaltrials.gov is not a citable thing that you put on your CV.”
NIH leadership was making an argument based on economics and ethics. “When it is research that involves human volunteers, regardless of whether they’re giving of their time or bodies or they’re engaged in higher-risk late-phase clinical trials, we have an ethical obligation to make sure those results see the light of day,” says Carrie Wolinetz, associate director for science policy at the NIH. “Also, if you were to ask us—and Congress did—‘at any given time, NIH, how many clinical trials are you funding,’ we could actually answer those questions.”
As a bonus, the rules for pre-registering methodologies and sharing data also happen to meet the philosophical goals of Open Science, a set of principles designed to deal with science’s ongoing reproducibility crisis. Academic and social pressures—journals tend to only want to publish surprising, positive results (“hypothesis confirmed!”)—lead to bad science.
At least, that was the hypothesis.
In practice, when the research community started to understand what the new rules would mean, lots of people freaked out. They thought that using the registration infrastructure built for full-scale clinical trials, and changing the definition of “clinical trial” to include, it seemed, every experiment with human beings, would mean basic research and simpler behavioral studies just wouldn’t get funding. In late 2017, more than 3,500 researchers signed a petition to the NIH asking that the new rules be delayed and rethought. “We support the goals of transparency and replicability. Unfortunately, the current effort to improve transparency and replicability in basic science does so by mislabeling basic research as a clinical trial,” the petition said.
Their fear was that even something as innocuous as monitoring a research subject’s stress levels would count as an “intervention” in the eyes of an NIH grant review committee. Those kinds of studies are more potent than mere observation, and letters from the Association for Psychological Science and a broad coalition of academic and university associations worried that redefining all human interventional science as clinical would push lots of researchers toward those simpler observational studies.
Everyone mostly agreed that transparency, ethics, and openness were good goals. But the resources available to a big, multi-year, multicenter trial for dealing with burdensome bureaucracy are very different from those of a small, non-clinical lab. “The use of the term ‘clinical trial,’ this was just a huge distraction. There is a sense in science of what that means, and basic scientists really didn’t think of their research as being that,” says Brian Nosek, a psychologist at the University of Virginia and founder of the Center for Open Science, a major advocate of pre-registration and data sharing.
In a big clinical trial, “people say, that’s one study that occurs over five years. I run five studies a day,” Nosek says. “It’s a massive administrative burden where we have to start from scratch every time and fill out a wealth of forms.”
Policymakers at the NIH didn’t go back to square one, but they did try to spread the word. Over the last year or so, case studies and explainers have clogged up the NIH website. Some of the community’s concerns got addressed. It seems like studies that leave inapplicable fields of the registration documents blank won’t get dinged.
Other details got smoothed over, if not entirely ironed out. After much back and forth, for example, studies using fMRI to study brain function won’t be counted as clinical trials, says Nancy Kanwisher, a neuroscientist at MIT and an early critic of the new rules. Studies that use fMRI to guide surgery, say, or evaluate whether a drug works? Those are clinical trials. “This is sensible and a huge relief,” Kanwisher says, but argues that the process has been “dysfunctional.” “The failure of NIH officials to consult with people in the field before implementing the policy was a serious mistake that has wasted the time of hundreds of scientists for months.”
The folks who put together the new policy deny that. “There was a workshop early on to get input. There was a public comment period,” Hudson emails. “We talked about it all the time in conferences etc. We did user groups to help make the interface for inputting data more user friendly. Not sure what specific input we missed.”
One thing that might make the new registration rules less onerous: alternatives simpler than the government’s clinicaltrials.gov website. That could be where Nosek’s Open Science Framework comes in. OSF is already trying to develop templates for pre-registration and data sharing. Nosek wrote one of a half-dozen letters explaining and critiquing the new NIH policy in this week’s issue of Nature Human Behaviour, which also ran a Q&A between Mike Lauer, NIH’s deputy director for extramural research, and FABBS past president Jeremy Wolfe, another outspoken opponent. They mostly agreed to disagree.
With the policy in effect starting this week, and with a new round of grant applications due soon, nobody is entirely sure what the shakedown cruise will look like. “Ironically, researchers are being urged to contact NIH staff to help them determine what is and what is not a clinical trial,” says Sarah Brookhart, executive director of the Association for Psychological Science, “a question no one had a problem answering until now.”
New, broad policy changes are rare and disconcerting. “I hope that we will get them on board. Once this goes into place, perhaps some of the burden concerns will be relieved. If there are obvious pain points, we’re going to keep our eye on that,” Wolinetz says. “It is an unknown question how this impacts the enterprise. Ideally we will be better able to manage our portfolio. Does that change what we invest in? That remains to be seen.”