How the Common Rule 2018 Updates Can Affect Your Research and Quality Improvement Strategies

As you well know, the Common Rule is the federal policy for the protection of human research subjects. Published in 1991, it has been updated and amended over the years, most recently by the revision known as the Final Rule, which went into effect on July 19, 2018. The changes were agreed upon by sixteen federal departments and agencies, spearheaded by the U.S. Department of Health and Human Services.

Research beginning prior to July 19, 2018 is subject to the pre-2018 Common Rule. The revised Common Rule applies to any research that begins on or after that date, which is why we are addressing the issue now. It will affect a great many research projects, and although the Final Rule was published in January of 2017, giving researchers plenty of time to amend projects, we realize that things can be missed, overlooked, misunderstood, or misconstrued.

We’re here to provide you with what you need to know about the Common Rule updates and revision. By the end of this article you will:

  • Understand how legislation and rulemaking will affect your research and quality improvement strategies.
  • Be able to outline the steps you should be taking now to be ready for the opportunities created by the new rule.
  • Be able to identify architectures that maximize the reuse potential of data derived from both clinical and research activities.

To achieve these goals, we’ll begin with historical context, then move on to what’s changing in the world as it applies to the Common Rule. We’ll look at the imminent opportunities those changes create and make some concrete suggestions about how you can take advantage of those opportunities.

Let’s get started….

Historical Context

Let’s flash back to 1991. It’s been almost 30 years since there was a major update to the regulations that govern human subjects research in the United States. There was a small revision to the Common Rule in 2005, but those changes weren’t significant enough to affect the data uses we care about.

So, what’s happened in the last 27 years?

We’ve had the implementation of HIPAA, passed in 1996. Electronic health records (EHR) are now mainstream across the country. Smartphones, big data analytics, precision medicine…these have all occurred since 1991.

There has been remarkable growth in healthcare quality improvement research, and even in that short span, the scope of QI has evolved from a focus on costs to more refined measures like clinical efficacy and patient outcomes. This is largely driven by the ever-increasing volume of potentially valuable data generated both within and outside of traditional care settings.

The evolution has been just as dramatic on the traditional research side.

Collaboration is way up. Rare diseases are getting more attention than ever before. And our studies are generating more and more complex data, necessitating specialized technology for both generating and managing that data.

So, across the clinical and the research landscapes, we’re seeing sign after sign that says that what’s needed is more and better data reuse options. The common thread here is that secondary use could transform both disciplines.

The interesting question now is, why aren’t we seeing more secondary use today? Let’s take a look to find out….

Secondary Use

Secondary use of data relates to identifiable information and biospecimens. It means using information about a patient or participant for an approved purpose other than the one for which it was originally collected.

An example is analyzing data from a previous study or using data collected during clinical care for research. People on the clinical side tend to think of secondary use as only applying to the use of clinical data that is repurposed for research use, but secondary use can also apply to data collected for research within one primary study, and then applied secondarily within another unrelated study.

If you think about the learning healthcare system we’re trying to build, learning only happens when you have secondary use of data, such as using data collected for clinical care to inform how care should be delivered.

What’s been holding back secondary use? Well, there are two kinds of limitations in our world:

  1. Technical
  2. Regulatory

Technical limitations came from a lack of consensus on how to define and validate data. There was tremendous site-by-site variability in how and where data was stored. There were few good options for exchanging data between health IT systems; much of the early focus in health data standards was on how to exchange data within health IT systems.

On the regulatory side, the Common Rule required consent for each specific use of human subject research data. Uncertainty and concerns around HIPAA and secondary uses of health data were prevalent.

Also—and it may not be fair to blame this entirely on regulation—our finite IT resources were historically dedicated to payment reform initiatives.

For a number of years, all informaticists could think about was meaningful use: stage one, stage two, and now stage three. Very few researchers could dedicate any energy to secondary use of data.

To foreshadow what’s coming: these are the two constraints that are going to open up more opportunities as a result of the change to the Common Rule. Let’s find out how….

What Is the Common Rule?

Let’s take a step back and look more in depth at the Common Rule. As we mentioned earlier, it’s a federal policy that aligns regulations on research involving human subjects across DHHS and 15 other federal agencies and departments. It is codified in federal regulation at 45 CFR part 46.

The Common Rule provides requirements for assuring compliance by research institutions, requirements for researchers obtaining and documenting informed consent, and requirements for institutional review boards—how to organize them and how they should work.

The historical origins are actually a good way to understand what the Common Rule is for and what it does.

History & Foundation of the Common Rule

The early history goes back to the Nuremberg Trials, specifically the crimes against humanity, and the subsequent Nuremberg Code drafted in 1947, which delineated research ethics principles for human experimentation. Then came the Declaration of Helsinki in 1964—a set of ethical principles regarding human experimentation—and the National Research Act of 1974, the first U.S. federal law to systematically regulate research. These were followed by the Belmont Report (1978) and finally the passage of the 1991 version of the Common Rule.

In practice, the Common Rule codifies principles that are universally agreed upon. These are the principles articulated by the Belmont Report of respect, beneficence, and justice, accomplished through informed consent, internal review boards, and assured compliance.

The current Common Rule applies to research involving human subjects that is conducted, supported, or otherwise subject to regulation by any federal department or agency. It lists six explicit exemptions, with number four allowing for secondary use of de-identified data.

Does the Common Rule Apply to You?

A question that comes up frequently among registry stewards is, “Do I care about the Common Rule if I don’t take federal money?”

The Common Rule applies to research conducted or supported by a federal department or agency. That clearly puts an institution that takes federal funding on the hook, and it means virtually every hospital and every university is subject to the Common Rule because they typically take federal funds at some point.

But, what if you’re just a stand-alone professional society and you’re running a registry and you have no plans to take federal grant funding?

In practice, the answer is always: “Yes, eventually the Common Rule will apply to you.”

Practical considerations matter too: if you want to publish a paper in a respected journal, the editor is going to ask, “Where’s your IRB?” So it’s best to have an IRB review process if you’re doing anything with your registry.

NOTE: Let me restate that what is offered here is not legal advice, and if you actually need to make a decision about whether the Common Rule applies to your situation then you should consult with counsel who specializes in this area.

Now back to exemption number four which allows for secondary use.

Exemption Four: What Does It Mean?

Exemption four of the Common Rule allows for research involving the collection or study of existing data, documents, records, pathological specimens, and diagnostic specimens if these sources are publicly available…which usually doesn’t apply.

Or, if the information is recorded by the investigator in such a manner that the subject cannot be identified directly or through identifiers linked to the subject. That means no driver’s license numbers and no social security numbers. You have to remove any identifiers linked to the subject, but if you do, then the data is exempt from the Common Rule.
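As a mechanical illustration of what stripping identifiers can look like, here is a minimal Python sketch. The field names are hypothetical, and real de-identification must cover every identifier class (HIPAA’s Safe Harbor method lists 18) and guard against re-identification through linkage, so treat this as a toy, not a compliance tool.

```python
# Minimal sketch: removing direct identifier fields from a record before
# secondary use. The field names below are hypothetical examples.
DIRECT_IDENTIFIERS = {
    "name", "ssn", "drivers_license", "mrn", "address", "phone", "email",
}

def strip_direct_identifiers(record: dict) -> dict:
    """Return a copy of the record with direct identifier fields removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {"name": "Jane Doe", "ssn": "123-45-6789", "diagnosis": "I10", "age": 54}
print(strip_direct_identifiers(record))  # {'diagnosis': 'I10', 'age': 54}
```

Note that dropping obvious fields is only the easy part; the hard part, as discussed below, is ensuring the remaining data cannot be linked back to the subject.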

Secondary Use of Data and the Common Rule

So, this means that the Common Rule already allows for secondary use of data. But the data itself has to be de-identified. Alternatively, due to the Common Rule requirement that a participant must consent to a specific research protocol, you would have to go back and re-consent all participants for secondary use.

Needless to say, the second option is often very expensive and impractical. PIs and study coordinators break out in a cold sweat at the thought of re-consenting 100 patients, let alone the thousands that most studies include. It’s a logistical nightmare.

De-identification is also often harder and more costly than you think, because it means limiting your data set. You’re going to have to remove data elements, anything that can be used to identify a patient, and you need to do it in such a way that subjects can’t be re-identified. That’s often neither obvious nor easy, and it can limit the way you can use the data in the future. So while it’s the better of the two options, it still isn’t a great one.

So, how will this change? Let’s take a look at the Final Rule.

The Final Rule: Part I

Enter the Final Rule of 2018. It allows for an expansion of secondary use for both information and biospecimens. For those of you who have been following federal regulations, this should sound a little bit familiar.

There was an ambitious Notice of Proposed Rulemaking (NPRM) that was published in 2015. That’s federal speak for a proposed rule on which the federal government is asking the public to comment. Then, we had a change of administration in 2016 and everybody got a little nervous because it was unclear if the Common Rule would be a priority for the new administration.

The Final Rule was scheduled to take effect on January 19, 2018, but a six-month delay was built in, and it instead took effect on July 19, 2018.

So where are we today?

First, let’s quickly recalibrate to set expectations.

The Final Rule doesn’t magically unlock petabytes of research data. What it does do is create an environment in which secondary use of data can thrive.

The Final Rule accelerates secondary use through two changes: the introduction of broad consent and a disambiguation of the overlap between the Common Rule and HIPAA.

Let’s talk about broad consent first.

Broad Consent

The pre-2018 Common Rule states that a participant must consent to a specific research protocol. The Final Rule allows for broad consent. Broad consent is not a waiver, so it must still be obtained from a patient, but it is a prospective consent to unspecified future research. It allows for storage and maintenance of identifiable information and biospecimens in the interim.

So, what can you do now that you couldn’t do before?

As a PI you can go to a research participant and say, “Listen, I’d like to collect your blood and I’d like to collect your medical history and I would like you to consent to its use for future unspecified research.”

If the patient consents, you can now store the medical history data and the blood sample, allowing for a study opportunity later on, during which you can use both the data and the biospecimen.

That’s amazing and important for future research. It matters because it gives you another option besides de-identifying data and re-consenting patients, neither of which was particularly attractive.
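To make the distinction concrete, here is a small sketch (illustrative only, not legal guidance) of how a registry might record consent scope so that stored data and specimens can later be screened for secondary-use eligibility. The `ConsentScope` values and field names are our own invention, not terms defined by the regulation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ConsentScope(Enum):
    SPECIFIC = "specific"   # consent tied to one named protocol (pre-2018 model)
    BROAD = "broad"         # prospective consent to unspecified future research

@dataclass
class ConsentRecord:
    participant_id: str
    scope: ConsentScope
    protocol_id: Optional[str] = None  # meaningful only for SPECIFIC consent

def permits_secondary_use(consent: ConsentRecord, protocol_id: str) -> bool:
    """Broad consent covers future unspecified studies; specific consent
    covers only the protocol it names."""
    if consent.scope is ConsentScope.BROAD:
        return True
    return consent.protocol_id == protocol_id

c = ConsentRecord("p-001", ConsentScope.BROAD)
print(permits_secondary_use(c, "study-2029"))  # True
```

The design point is simply that consent scope becomes data you can query, rather than something buried in paper files that forces a re-consent campaign.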

Final Rule: Part II

The second thing that the Final Rule does is clarify the overlap between the Common Rule and HIPAA.

The way it does this is by codifying a secondary use exemption for covered entities using protected health information to conduct healthcare operations, public health activities, or research as defined under HIPAA.

There were situations where both the Common Rule and HIPAA applied. If you were performing human subjects research in a healthcare setting as a covered entity, like a hospital, then the research was governed by both the Common Rule and HIPAA.

Because they use slightly inconsistent language and they weren’t fully harmonized, it was hard to know which regulation to follow, which caused problems.

The Final Rule changes this. The new rule is that if your research is covered by HIPAA, then HIPAA is enough: you’re exempt from the Common Rule. This creates quite a bit of convenience. It matters because it enables secondary use of PHI with just a HIPAA authorization, meaning additional informed consent for human subjects research is not required.

For example, if you are doing research involving a patient’s clinical data, you normally need a HIPAA authorization from the patient to use that data for research purposes. If the work fits under this particular exemption, that authorization is all you need: you don’t have to obtain any additional informed consent from the patient, which is very, very convenient.

So this change is intended explicitly to eliminate the duplication that we were seeing between the Common Rule and HIPAA.

Clinical Data Repositories

If that wasn’t enough, Christmas came early. We also got some bonus guidance on clinical data repositories.

The Final Rule says that it does not apply to clinical data repositories that are not supported by a Common Rule department or agency. Remember that problem about who the current version of Common Rule applies to?

Well, the new version basically says that if you are a clinical data repository and you’re not supported by DHHS or the DOD or any of the other 14 federal departments included in the Common Rule, then the Final Rule does not apply to you.

Now, if you begin to conduct human subjects research, the Common Rule will apply. But initially, if all you have is a clinical data repository and you don’t take any federal funds, then you’re in the clear.

There are activities that don’t meet the definition of research, including many clinical improvement activities, which aren’t covered by the Common Rule. And institutions that release identifiable information to a clinical data repository for research purposes are not covered by the Common Rule if the institution is not directly engaged in research. This last point is especially interesting and may open up conversations between registry stewards and contributing sites that were somewhat difficult before.

Here’s the situation:

I have a clinical data repository. I go to a clinical site and say, “Hey I’d like to submit some data.”

Under the pre-2018 rule, we would need to discuss whether or not that clinical site has to get informed consent from the patients before sending the data. The new version of the Common Rule says they don’t. They are allowed to kick the can down the road and say, “Look, I’m not doing research. I’m just the site. I’m collecting data for clinical care. Merely submitting it to a registry does not make it human subjects research, and I do not need to get informed consent from a patient.” This makes the conversations with the sites a lot easier.

While overall I think it’s fair to say that this version is a liberalization of secondary data use, there was one disappointment.

There was a proposal, very popular with informaticists, that included a new exemption category for secondary use of identifiable information collected or generated for non-research purposes when prior notice had been given. This would have made most clinical data fair game, as long as the patient had been given prior notice, i.e., told that there was a plan to use that data for research further down the road.

As an informaticist, I’m a little disappointed it was not included, but after reflecting on it as a patient, a research participant, and a human being, I’m a little relieved. I think most of us would feel similarly: this would have opened the floodgates a little too much, making the data a little too easy to get without added respect for individual privacy and individual agency to agree or decline to participate in research.

In Summary:

  • The Final Rule is live as of July 2018. It’s going to accelerate secondary use of information and specimens.
  • Participants can now provide consent for unspecified future research and for maintaining their data and specimens in the interim.
  • It further disambiguates HIPAA and the Common Rule for covered entities, making life a little easier.

Is this a big deal in practice? Yes.

By themselves, these regulatory changes are limited; they’re just changes in federal regulation. You might ask, “How much good can it possibly do?” But these are actually the last pieces of a much bigger jigsaw puzzle. Taken together, the impact on the future is going to be remarkable.

What the Future Holds in Light of This Latest Regulatory Change

Let’s revisit what’s been holding back data reuse. We already talked about the technical limitations and regulatory constraints—and they have just been relaxed by the final version of the Common Rule—but many other things have changed in the interim.

Technology has moved ahead. Instead of technical limitations, we now have technical empowerment. We have an emerging consensus on how to define and validate health data, diminishing variability in how and where data is stored, and reasonably good options for exchanging data between health IT systems.

In addition, there’s regulatory refinement. We have saner regulations coming from the Common Rule and the HIPAA clarification.

The last piece—where IT talent was tied up dealing with meaningful use—has also changed.

Focus has shifted from payment reform to data reuse initiatives, and if you look at the pattern of federal regulation and federal initiatives that have emerged over the last several years, you will see that the Common Rule is really just the last piece of the puzzle, working together with the 21st Century Cures Act, the FHIR initiatives, and new work from the Office of the National Coordinator for Health Information Technology (ONC).

Let’s think about how this applies to you.

If you’re an institution, what is your ideal future state? How do you take advantage of the opportunities that are coming your way from both the regulatory side and the technology side?

Well, here’s what your world ought to look like:

  • Your governance situation at your research center should allow for data sharing and reuse. Data and research assets from multiple studies should be centralized across all data types, and the data itself should be optimized for reuse.
  • Data reuse should be the norm and not the exception. You shouldn’t require a giant IT project to figure out how to reuse data from one study for another or how to reuse a piece of clinical data for a particular piece of research.

The future is arriving, but only at a few institutions that are thinking farther ahead than others, because reaching it requires some big changes in how we deal with data, and it enables a new paradigm.

It requires that we bring data together from many different activities and sources. We need to organize the data in a manner that makes it possible to reuse it in a variety of different ways. And we need to support many different kinds of research and partnerships.

So, how do we do that?

We have to move to a new kind of data architecture. The desired architecture to support this paradigm shift looks something like this:

data architecture

What you need is a process that takes a variety of data inputs like:

  • data from clinical systems
  • laboratory systems
  • patient reports
  • digital phenotyping
  • claims data
  • translational research
  • social determinants research

Then you add enough structure to organize that data into what we call a pluripotent clinical data repository. This central organization of the data should empower a variety of data uses, like quality improvement research, investigator-initiated research, clinical trial networks, precision medicine, government and industry data partnerships, and cross-specialty collaborations.

You should gain the ability to explore your data in a wide variety of ways because you’ve put the effort into organizing it in this reusable, pluripotent form.
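The core mechanical idea behind a pluripotent repository can be sketched in a few lines: normalize heterogeneous inputs (clinical, lab, and so on) into one common observation shape, so every downstream use reads from a single store. The source layouts, field names, and mapping functions below are invented for illustration; a real implementation would use a standard model such as FHIR resources.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    patient_id: str
    code: str    # e.g., a LOINC code
    value: str
    source: str  # provenance: which system produced it

def from_clinical(row: dict) -> Observation:
    # The clinical system uses its own field names; map them to the common shape.
    return Observation(row["pat_id"], row["code"], row["result"], "clinical")

def from_lab(row: dict) -> Observation:
    # The lab feed has a different layout; same target shape.
    return Observation(row["patient"], row["loinc"], row["measurement"], "lab")

repository = [
    from_clinical({"pat_id": "p1", "code": "8480-6", "result": "142"}),
    from_lab({"patient": "p1", "loinc": "2345-7", "measurement": "101"}),
]

# One store, many uses: e.g., pull everything for one patient for a QI query.
p1_obs = [o for o in repository if o.patient_id == "p1"]
print(len(p1_obs))  # 2
```

Keeping provenance on every record is what lets the same store serve research queries, quality measurement, and partner data deliveries without separate silos.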

We can call them integrated registries or integrated clinical data repositories, but that doesn’t quite capture the potential.

Stan Huff and Chris Chute came up with much better labels and a better conceptual model for what we’re trying to do in a recent paper. It was very exciting, because this is a case where academic thought leaders arrived at a conceptual structure that perfectly reflects what implementers on the ground have already been doing: a model that captures the technical direction they have been implementing for the past eight years.

Another piece of news that shows this direction is exciting is a recent paper from Google, where a similar architecture, very much like a pluripotent clinical data repository, supports deep learning analytics. They were able to put the data into a common format and run machine learning on it, getting very interesting analytic results without doing a tremendous amount of restructuring.

So, as William Gibson has said, “The future has arrived, it’s just not evenly distributed.” I think part of our job as thought leaders is to keep a sharp eye out for where the future is and where the bright spots are, and to see what we can do to accelerate progress in areas that are falling a little further behind.

The Future of Clinical Data Repositories

So, where are we seeing the future today?

In our role as implementers of clinical data repositories, we see the future in the two areas we work in: biomarker discovery and clinical quality improvement. On the biomarker discovery side, we’re seeing large studies in large research centers increasingly focused on bringing data assets together, building unified systems to store data for multiple studies. Often the goal is explicitly data reuse from one study to another, along with the integration of multiple data types. A lot of biomarker discovery projects have complex, heterogeneous data types, like genomic data combined with EEG data combined with clinical data.

So, bringing them together and reusing them in large longitudinal studies is something that really pushes the architecture in this pluripotent direction.

The other area where we’re seeing this emerge is in leading organizations within clinical quality improvement where the typical client is a professional society that is trying to create a data asset that would allow the profession to develop new standards and to improve the quality of care that they deliver.

Often, the goals are fairly big, and the leading organizations are thinking more broadly than just developing some eCQM (electronic clinical quality measure) calculations. These organizations are thinking much more in terms of a program that allows them to leverage data assets to move the direction of the specialty and to gauge how well they’re moving in that direction.

There’s another interesting piece of validation from the market that this direction is on the right track. If you’re following health IT news you may have heard that Flatiron, an oncology data company, was recently purchased by Roche Pharmaceutical. The price tag was an eyebrow-raising $2 billion.

All of us are wondering though, what did Flatiron do to make themselves worth $2 billion? Can we do that too?

Well, Flatiron was very smart. They had good technology and an excellent strategy, but what they really had that made them valuable was data. They were able to collect data from oncology practices and they were able to organize it and curate it to make it very high quality, very well organized, and very, very valuable to a curious pharma buyer who wants to make use of that data.

There is a great quote from the former FDA commissioner about how people should pay attention to Flatiron’s strategy of relentless curation of data.

“People should pay attention to the strategy—relentless curation of data—an army of ‘data janitors’ transforming EHR data into analyzable, actionable information. …” (February 15, 2018)

They actually had an army of data janitors transforming EHR data into analyzable, actionable information. It went way beyond the standard automated transforms.

They were willing to pay people to do what we call manual chart abstraction and manual data curation to increase the value of the data asset. It is very important to understand the kind of effort that’s required to create a valuable data asset, and the kind of value that asset can have in the market. It tells you a lot about where the industry is and where it’s going as a whole.

The Big Picture

We’ve all heard about the learning healthcare system and the need to build multiple learning healthcare systems that enable research to influence practice and practice to influence research. There’s a broad understanding of what’s needed to do that.

We need care to generate outcomes that produce measurements, and we need to be able to produce analyses from those measurements, which should lead to change recommendations and change implementations within care.

What’s often unrecognized is the need for a secondary cycle around measurement. The data curation cycle. What do you need in order to make use of measurement in analysis?

It’s not enough just to have data in your EHR. It’s not enough to have clinical notes entered somewhere. You need to aggressively invest in creating clean, reusable data assets. You need to collect and ingest data. You need to organize it. You need to enrich it with other data sources, and you need to deliver it. Only if you do that consistently will you empower the next downstream step in the value creation cycle: analysis and the generation of valuable analytics and results.
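The curation steps above (collect, organize, enrich, deliver) can be pictured as a toy pipeline. The stage contents here are invented placeholders; the point is only that each stage feeds the next, and value emerges when all of them run consistently.

```python
# Hypothetical sketch of the data curation cycle as four chained stages.

def collect(raw_sources):
    # Ingest records from every contributing source into one list.
    return [rec for src in raw_sources for rec in src]

def organize(records):
    # Impose a consistent order/structure on the pooled records.
    return sorted(records, key=lambda r: r["patient_id"])

def enrich(records, reference):
    # Join in an external data source (here, an invented region lookup).
    return [{**r, "region": reference.get(r["patient_id"], "unknown")} for r in records]

def deliver(records):
    # Hand the curated set off to analysts or measure services.
    return records

sources = [[{"patient_id": "p2"}], [{"patient_id": "p1"}]]
curated = deliver(enrich(organize(collect(sources)), {"p1": "northeast"}))
print([r["patient_id"] for r in curated])  # ['p1', 'p2']
```

If any one stage is skipped, the downstream analysis step inherits the gap, which is why the cycle has to run as a whole.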

We’ve been calling this sub-cycle the Data Curation Cycle, and we think that pluripotent clinical data repositories are a key technology and a set of architectures that will empower that cycle to be more efficient than it has been. The data curation cycle and technology support are becoming an increasingly vital part of the growing and learning healthcare system.

Let me give you a sense of a high-level functional architecture: for example, a system that ingests and integrates clinical and research data and organizes it for several uses. There are a number of clinical data sources, including a clinical provider portal, as well as some non-clinical data sources, including a patient portal.

high-level functional architecture

These sources are aggregated into a central data repository, which stores a pluripotent representation of clinical and ancillary data. From there, the data can be parsed apart and delivered to downstream services, such as a measure calculation service.

In this case, that means electronic clinical quality measure calculation, CMS integration (the submission of data to the Centers for Medicare &amp; Medicaid Services), and data partner services, a catchall for delivering data sets and query services to various data consumers and other data partners.
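To ground the role of a measure calculation service, here is a hedged sketch of the simplest possible proportion measure (numerator over denominator) computed from repository records. Real eCQM logic is specified in dedicated languages like CQL and is far more involved; the field names below are invented, and this only illustrates the data-flow role of the service.

```python
# Toy proportion measure over patient records pulled from the repository.
patients = [
    {"id": "p1", "in_denominator": True,  "in_numerator": True},
    {"id": "p2", "in_denominator": True,  "in_numerator": False},
    {"id": "p3", "in_denominator": False, "in_numerator": False},
]

def measure_score(population: list) -> float:
    """Fraction of the eligible (denominator) population meeting the measure."""
    denom = [p for p in population if p["in_denominator"]]
    numer = [p for p in denom if p["in_numerator"]]
    return len(numer) / len(denom) if denom else 0.0

print(measure_score(patients))  # 0.5
```

The repository's job is to hand this service clean, well-organized records; the service's job is just the arithmetic.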

Connecting the Dots

The interesting thing in seeing this architecture is that the Final Rule unlocks several of these boxes that were more difficult to use before. The Common Rule changes ease up which data can be ingested and repurposed in this functional architecture.

Here’s an example of a more technical and detailed diagram that shows data flows between components. It’s a similar system with a similar model. Again, we’re ingesting data from multiple sources and organizing it so that it can be used for multiple purposes.

more technical and detailed diagram

We’ll spend more time exploring this architecture in detail in our next article, but I want to emphasize that without the changes to the Common Rule, some of these arrows would be administratively difficult or impossible from a regulatory point of view. You would have to re-consent patients in many cases, or de-identify the data and lose some of the value you generated by collecting it.

So what have we learned?

The changes to the Common Rule can broaden your research and quality improvement strategies. Preparing for the opportunities created by the new rule includes anticipating more extensive secondary use of data. And novel architectures can maximize the reuse potential of data derived from both clinical and research activities.