Leveraging Research Data to Enhance Quality Improvement Initiatives

Does quality improvement really matter, and do we need to convince you that it's important to advancing state-of-the-art healthcare? This is a debate that has been raging amongst us recently and, after much thought and discussion, we came to the conclusion that it does matter, but that we don't need to convince you. Everyone working in healthcare today already understands why quality improvement is important and that it will have a vital impact on state-of-the-art healthcare.

That being said, we realized that we might not have come to the same conclusion 100, even 50 years ago. So, it’s important to look at the historical perspective of quality improvement and data registries before diving into why they’re crucial today.

After getting some historical context, we’ll then discuss the current state of the art, the imminent next frontier, and provide some concrete suggestions for implementing the practices we describe.

Our overall goal is to cover three core topics:

1. What's possible for enhancing quality improvement initiatives with data from nonclinical sources.
2. How legislation and rulemaking can affect your strategy.
3. Opportunities to start improving your QI initiatives today.

Let’s get started.

Quality Improvement: A Brief History

Today, quality improvement is playing an increasingly essential role in medicine, but it wasn't always so popular. One of the earliest examples is Dr. Ernest Codman, a Harvard professor and surgeon who practiced at the turn of the 20th century.

Dr. Codman's practices would be exemplary by today's standards. He personally tracked every patient for a full year after their surgery and maintained consistent longitudinal records on every single one of them. This enabled him to compare cases and learn from both the best and the worst outcomes.

Was Dr. Codman heralded for his foresight? Sadly he wasn’t. In fact, he was driven to resign his position at Harvard and even received death threats. But he had planted two very important themes with his revolutionary concept—that healthcare organizations should be held to a common set of standards and that data should be systematically collected on every patient for the purposes of advancing the state of care.

The former led directly to the creation of what is today The Joint Commission, and the latter led directly to the creation of quality improvement registries.

What Is a Quality Improvement (QI) Registry?

When you hear informaticists and others talk about registries, it's really shorthand for any organized system for collecting, storing, analyzing, and disseminating information on a defined patient population. It can be as simple as a set of structured note cards on a small group of patients with a rare disorder, or as expansive as a nationwide digital repository covering every pediatric patient in the U.S.

What matters is that the content of the registry is very well defined and that it’s high quality and accessible over time. Today QI registries are a highly specialized subclass that we use to evaluate different dimensions of the care being delivered to patients. As a result, QI registries are typically much broader in their scope than registries that focus on specific disease areas or procedures like vaccinations.

Progression of Modern QI Registries

While we saw progress in some of those really specific registry types throughout most of the 20th century—especially in areas like oncology and the aforementioned vaccinations—it took almost until the 21st century before quality improvement came back into the spotlight.

Let's focus on what we call the "modern period" of quality improvement registries. Not surprisingly, the rise of quality improvement in the late 20th century wasn't due to some resurgence in book sales of Dr. Codman's 1914 page-turner, A Study in Hospital Efficiency. Instead, it was largely driven by forces affecting the cost of care in the U.S. and the resulting carrots and sticks that emerged from regulators and payers.

These external motivators led to a progression: from focusing on how much was being spent to how much was being paid, all the way to the current concept of how much value is being delivered.

Locally, most healthcare systems responded to these motivators in a series of fits and starts. In the slide below, I've intentionally flattened those into a continuum of the motivators most commonly seen at the macro level.

[Slide: progression of quality improvement motivators]

Yes, there are many more motivators than are shown, but let’s focus on the big ones that affected the entire industry all at once.

It's important to note that these phases weren't mutually exclusive, and each essentially enabled the next. The most obvious example is that large-scale quality reporting got its big start with the Physician Quality Reporting Initiative (PQRI), followed by the Physician Quality Reporting System (PQRS), and then the codification of value under the MACRA legislation and its Merit-based Incentive Payment System.

So, where are we now?

We're pointing pretty strongly at value. That doesn't mean we've solved all of the world's problems up until this point, and "value" itself is proving to be a very slippery concept to get our hands around. But that's not holding back the future.

The focus and expansion of each of these phases was driven by the incorporation of an ever-widening set of data sources: procurement systems and claims processing systems at first, and now electronic health records and other clinical systems. The latter is proving to be the greatest hurdle yet, given the tremendous variability in clinical records, especially when compared to the relative consistency of data about healthcare purchases and insurance claims.

But what do all of these resources have in common? They’re all coming from the healthcare system, what some refer to as the healthcare bubble.

This data, as voluminous as it is, is still just a small fraction of what we think impacts patient health. The future is going to require leaving the healthcare bubble, mainly because the future of clinical care will, in large part, be driven by data generated outside of the bubble.

The Future of Quality Improvement in Healthcare

If you are wearing a Fitbit or an Apple Watch or even have a fitness app on your smartphone, you’re part of a movement called “quantified self.” You’re already living in the future.

The quantified-self movement got started in slightly nerdy circles where folks just wanted to measure themselves and basically play epidemiologist and doctor. But what we're finding is that these devices, now available at the consumer level, hold the key to a tremendous amount of health-related data.

In fact, some of you are probably aware that recent studies with the Apple Watch, for instance, have found that it’s reasonably good at detecting things like arrhythmias and sleep apnea (and that’s without a significant amount of dedicated effort going into the use of these tools within the healthcare enterprise).

The easiest way of describing what the future is going to look like is this: We’re going to be shifting from trying to understand how and why care gets delivered (and how to do that more effectively) to a focus on understanding how and why specific patients respond to that care.

Measuring Outcomes: Where Are We Today?

Let's briefly refocus on the present. Where is most of the energy in informatics and data integration going today?

It’s not yet at the quantified-self level. Today, the most active areas for integrating new data sources involve automatically pulling data from clinical systems.

The most common and important example is electronic health record (EHR) systems. Many of us working on obtaining problem lists, diagnoses, medications, and so on get our data from EHRs, as well as from other systems such as laboratory information systems, departmental systems like PACS, and sometimes genetic testing and other omics-type systems.

But what’s missing from this picture?

As we already mentioned, clinical systems really don’t contain all the data that we need to discover how healthcare is affecting patients. Why?

Let’s use a quick anecdotal example to illustrate why this is the case.

David had his regular physical last month. He is a reasonably healthy, 49-year-old non-smoking male. He went to the doctor because his insurance company forced him to go. They sent him a series of letters gently reminding him that he had not gone for over two years and threatened to raise his rates if he refused a checkup.

Note: This example demonstrates a success in quality improvement process measurement.

David's insurance company had enough information about him to say, "Wow, this guy hasn't had a physical in over two years. We really need to prod him to go because clearly, he does not have sufficient organizational skills to do this on his own."

So, David went to his physical. He spent between 30 minutes and an hour with a very competent clinical staff. They were great. They asked him all the right questions. He provided his healthcare history. They updated his medical chart in real time with his answers. Done. David then left and went home.

What about the other 17,520 hours of David’s life over the last two years? What happened during that time was only indirectly referenced by the story that he told to his clinical care team and that was recorded to some extent in his medical chart.

The real answer to what happened is between David and his diary. But nowadays the less old-fashioned answer is that it's between the patient and their smartphone, or the patient and their Fitbit.

On average, if you do the numbers, a typical patient spends less than one hundredth of 1% of their time in a clinical setting. The rest of the time they're essentially on their own.
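To make that arithmetic concrete, here's a quick back-of-the-envelope calculation using the one-hour visit and two-year interval from David's story (the numbers are illustrative, not from any dataset):

```python
# Back-of-the-envelope: share of time a patient like David spends in a clinical setting.
hours_between_visits = 2 * 365 * 24   # two years, ignoring leap days -> 17,520 hours
visit_hours = 1                        # a generous estimate for one physical exam

share = visit_hours / hours_between_visits
print(f"{share:.6%} of the patient's time")   # roughly 0.006%, well under 1/100 of 1%
```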

How to Integrate Nonclinical Data Into a Quality Improvement Program

So, how can we integrate nonclinical data into a quality improvement program and take advantage of some of the data that may become available to you in the future? How do we escape this healthcare bubble?

Roughly speaking, there are two kinds of data from outside the healthcare bubble that we may want to bring to bear on quality improvement efforts.

  • Detailed data generated by a patient outside of a normal healthcare setting
  • Data about the population and the environment relevant to the patient

Those are represented as the patient-generated and population-based data cores in the graphic below.

[Graphic: patient-generated and population-based data cores]

There are also several kinds of data that are right on the boundary of healthcare. Think about the different kinds of omics data that you may want to integrate—they’re not population-based, they’re not exactly patient-generated (except indirectly), but they’re somewhere on the boundary of what now counts as clinical data.

We’ve broken them into three types.

1. Patient-Reported Data

This data comes in two flavors: patient-reported outcome measures and patient-reported experience measures, oftentimes abbreviated PROMs and PREMs. These already exist, but what's happening currently is that new modes of delivery are becoming more widely available.

The first important mode is computer-adaptive testing based on item response theory, which allows us to collect patient-reported outcomes by asking far fewer questions in order to generate high-resolution answers.

Anyone who has taken the SAT, GRE, or a similar test may be familiar with the technique: the questions change depending on the answers you provided in earlier sections of the test. The idea here is the same. But it does impose a significant burden on the authors of the tests, who now have to create instruments that adapt, with all the logic needed to respond to patient answers built into the test.
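As a rough illustration of how the adaptive selection logic works, here is a minimal sketch of item selection under a two-parameter logistic (2PL) IRT model. The item bank, parameter values, and item names are invented for illustration; a production CAT engine would also handle ability re-estimation and stopping rules.

```python
# Minimal sketch of computer-adaptive item selection under a 2PL IRT model.
# Item parameters (discrimination a, difficulty b) and item names are made up.
import math

ITEM_BANK = {
    "pain_interferes_with_sleep":   {"a": 1.6, "b": -0.5},
    "pain_interferes_with_walking": {"a": 1.2, "b": 0.3},
    "pain_interferes_with_work":    {"a": 1.9, "b": 1.1},
}

def prob_endorse(theta, a, b):
    """Probability of endorsing an item given latent trait theta (2PL model)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information the item contributes at the current trait estimate."""
    p = prob_endorse(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta_estimate, asked):
    """Pick the unasked item that is most informative at the current estimate."""
    candidates = {k: v for k, v in ITEM_BANK.items() if k not in asked}
    return max(candidates, key=lambda k: item_information(theta_estimate, **candidates[k]))

print(next_item(theta_estimate=0.0, asked=set()))
```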

The other really interesting technical advance is SMS-based interactions with patients. Some questionnaires, or individual items that require patient responses, can reasonably be delivered via SMS text messaging.

This clearly has its limitations.

You can’t ask really long or complex questions, but you can ask some short questions like, “How is your pain level today on a scale from 1 to 10?” If a patient is appropriately trained and is given all the right support, you can get very good answers using this format.
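For example, a single pain-scale item could be pushed out over SMS with just a few lines. The sketch below assumes the Twilio Python client as one possible SMS gateway; the credentials, phone numbers, and question wording are placeholders, and a real deployment would also need consent, scheduling, and response parsing.

```python
# Minimal sketch: sending one patient-reported-outcome item over SMS via Twilio.
# Account credentials and phone numbers below are placeholders, not real values.
from twilio.rest import Client

client = Client("ACCOUNT_SID_PLACEHOLDER", "AUTH_TOKEN_PLACEHOLDER")

message = client.messages.create(
    to="+15551230000",     # patient's mobile number (placeholder)
    from_="+15559870000",  # clinic's SMS number (placeholder)
    body="How is your pain level today on a scale from 1 to 10? Reply with a number.",
)
print(message.sid)  # store alongside the patient record so the reply can be matched
```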

The third mode is highly empowering. There are a couple of technologies that are coming to bear that make it easier for implementers to create phone apps and web apps that interact with clinical systems.

The most interesting one of these is the SMART on FHIR framework, which enables PROM and PREM apps to integrate securely with EHRs and other clinical systems.

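As a sketch of what that integration might look like once a SMART on FHIR app has completed its OAuth2 launch and holds an access token, the snippet below reads survey-type Observations for a patient over the standard FHIR REST API. The base URL, token, and patient ID are placeholders.

```python
# Minimal sketch: a SMART on FHIR app reading patient-reported Observations.
# The FHIR base URL, bearer token, and patient ID are placeholders; in a real
# SMART launch these come from the EHR's authorization server.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"
ACCESS_TOKEN = "ACCESS_TOKEN_FROM_SMART_LAUNCH"
PATIENT_ID = "example-patient-id"

resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": PATIENT_ID, "category": "survey"},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Accept": "application/fhir+json"},
)
resp.raise_for_status()

bundle = resp.json()
for entry in bundle.get("entry", []):
    obs = entry["resource"]
    print(obs.get("code", {}).get("text"), obs.get("valueQuantity", {}).get("value"))
```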
What we have talked about here are improvements in standardizing delivery methods. There’s also a tremendous amount of work necessary on standardizing content.

One example of this is the NIH Toolbox—the NIH PROMIS program specifically—which has developed a set of patient-reported outcome measures that they’re strongly encouraging people to use across multiple projects.

As we standardize both the technical methods and the content, we believe our capability to use this data systematically and effectively for quality improvement is going to increase over time.

2. Digital Phenotyping Data

This is a very broad term. It refers to the moment-to-moment quantification of the individual-level human phenotype in situ using data from personal digital devices.

There are various levels of resolution for this. You can think of these devices as sensors with particular output parameters, and you can think of their output rate as measured in hertz. There are heart-rate monitors that average your heart rate over the course of several minutes, and there are those that deliver the entire waveform or each individual heartbeat measurement.

There are devices that can measure your temperature very precisely and deliver the entire waveform, and the same is true for accelerometry data. You can either have average data that's already aggregated on the device, or a device that sends the relatively raw data for downstream processing.
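To make the raw-versus-aggregated distinction concrete, here is a small sketch that rolls raw accelerometer samples up into per-minute summaries, which is roughly the kind of reduction a consumer device or platform performs before data ever leaves the phone. The sample values and column names are invented for illustration.

```python
# Minimal sketch: reducing raw accelerometer samples to per-minute summaries.
# Sample values and column names are invented for illustration.
import pandas as pd

raw = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-01-01 08:00:00.0", "2024-01-01 08:00:00.5",
        "2024-01-01 08:00:01.0", "2024-01-01 08:01:00.0",
    ]),
    "x": [0.01, 0.02, 0.90, 0.05],
    "y": [0.00, 0.01, 0.40, 0.02],
    "z": [0.98, 0.99, 0.30, 0.97],
})

# Magnitude of acceleration for each raw sample.
raw["magnitude"] = (raw["x"]**2 + raw["y"]**2 + raw["z"]**2) ** 0.5

# Per-minute summary: roughly what a "summary" platform exposes, whereas a
# research-grade platform would keep every raw row.
per_minute = raw.set_index("timestamp")["magnitude"].resample("1min").agg(["mean", "max", "count"])
print(per_minute)
```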

There are several platforms that have emerged that make one or both of these easier. Apple Health and Google Fit tend to provide summary data. They tend to allow the sensors within the devices to efficiently summarize data and aggregate it over a time period. Then the data can be sent to a stream aggregator service.

There are also other platforms.

The best example that we’ve come across recently is the Beiwe platform, which is available for research-grade passive data collection on smartphones. The goal of that platform is to provide the highest level resolution data and make it available for research.

It is literally recording all the accelerometer data on your phone and making all of it available for research. That is an enormous amount of data and very, very different from what you get from a typical Fitbit or Apple Watch, which is somewhat aggregated data, not the raw waveforms.

3. Social Determinants of Health (SDH) Data

Another increasingly important kind of data is not patient-generated but population-generated. A lot of it goes by the handle of social determinants of health. There’s a tremendous amount of interest and activity in this space where everyone is trying to get social determinants data.

It's defined as the structural determinants and conditions in which people are born, grow, live, work, and age. The kinds of factors we typically look at include socioeconomic status, education, physical environment, employment, social support networks, and access to healthcare.

Now, why is this not patient-generated? Well, normally social determinants data is not gathered directly from patients. It’s inferred from where the patient is or what the patient does. There’s some piece of information you have that tells you what environment the patient is in. Then you are bringing in data about that environment in order to inform your decisions about the patient.

A typical example is geocoding.

For example, my insurance company knows I live in New Haven, Connecticut. It knows what neighborhood I live in and it can actually infer a lot about me. It knows the general socioeconomic status of people who live in the neighborhood. It knows what school districts are nearby. With a little bit of clever data munging, you can actually get information about food availability in that area and whether there are large grocery stores or only small stores in the neighborhood.

Geocoding can happen at various precision levels. You can do it at the ZIP Code level, which is very coarse, or you could do it at the census block level, which is very precise and sometimes can give you very interesting information.
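A hedged sketch of how that plays out in practice: once a patient's address has been geocoded to a census unit, joining against an area-level table is essentially a one-line merge. The patient rows, census identifiers, and area-level variables below are entirely fabricated for illustration.

```python
# Minimal sketch: enriching patients with area-level social determinants data
# keyed on a geocoded census identifier. All values below are fabricated.
import pandas as pd

patients = pd.DataFrame({
    "patient_id": ["p1", "p2"],
    "census_block_group": ["090091402001", "090091403002"],  # from a geocoder
})

area_data = pd.DataFrame({
    "census_block_group": ["090091402001", "090091403002"],
    "median_household_income": [38000, 72000],
    "grocery_stores_within_1_mile": [0, 3],
})

enriched = patients.merge(area_data, on="census_block_group", how="left")
print(enriched)
```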

Several research projects that we've come across have used the data more or less successfully to infer things like pollution exposure at the census block level, because that level of precision can reveal proximity to various pollution sources.

Multiple sources can be combined to validate these population-based data points, and, if you are clever, you can actually pull in some very interesting variables and create some interesting conclusions about the population.

The Availability of Clinical Data for a Quality Improvement Program

Now, let’s talk about the slightly in-between, difficult cases that sort of feel like clinical data but may not fit the modern assumptions about clinical data. There are several new kinds of data sources.

Biobanks

One new data source is biobanks and the rich phenotypes that are generated for those biobanks or for large characterization studies.

For example, Prometheus helped with a very large autism characterization study (the largest cohort of its time) called the Simons Simplex Collection. This study generated tens of thousands of biospecimens in a cell and DNA repository, with very rich phenotype data available for every one of the samples.

Now, that would be great data to have about a patient being treated.

The question is, is it available?

In the case of the Simons Simplex Collection, the answer is yes. But in many others, the answer may be no.

Precision Medicine

Another kind of important new data source is everything that we're calling precision medicine. It's a slightly vague term, and it's somewhat comical that we categorize it as a nonclinical data source, because it's medicine, so surely it must be clinical.

But the data sources are so loosely structured that they fall into a gray area. They don't fit the standard models of clinical systems. Omics data from proteomics or genomics may be stored in a really weird-looking departmental system, and it may not be clear how to integrate that data with other systems in the healthcare setting.

We’re playing a little bit loose with the definition. From our point of view as implementers, it looks like nonclinical data. Although I wouldn’t stand up and argue that in court.

Novel Data Uses

There are also some very interesting new data uses. There are computational phenotypes that can be used for cohort discovery. Once you have access to data in an EHR, you can compute various secondary attributes of a patient based on their medication exposure and disease history, push those calculated attributes back into either a clinical or research system, and use them to discover interesting subgroups of patients.
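As a toy example of what such a computed secondary attribute might look like, the sketch below flags patients with at least two recorded antidiabetic prescriptions as a candidate "treated type 2 diabetes" cohort. The medication list, threshold, and data layout are invented; real computational phenotypes are considerably more careful.

```python
# Toy sketch of a computational phenotype: flag patients with >= 2 antidiabetic
# prescriptions as candidates for a "treated type 2 diabetes" cohort.
# Medication list, threshold, and data layout are invented for illustration.
import pandas as pd

meds = pd.DataFrame({
    "patient_id": ["p1", "p1", "p2", "p3", "p3"],
    "medication": ["metformin", "glipizide", "lisinopril", "metformin", "metformin"],
})

ANTIDIABETICS = {"metformin", "glipizide", "insulin glargine"}

counts = (
    meds[meds["medication"].isin(ANTIDIABETICS)]
    .groupby("patient_id")["medication"]
    .count()
)
cohort = counts[counts >= 2].index.tolist()
print(cohort)  # ['p1', 'p3'] -> attribute could be pushed back into a registry
```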

There is primary research data. Many research projects, whether clinical trials or academic studies, may be collecting very rich data about patients, and the patients may be very happy to share that data with their clinicians and make it available for study.

There are pragmatic clinical trials that can be, in some cases, run entirely on data obtained in the course of clinical care.

These are all novel data uses, which in some cases also generate additional data that can be then fed into the process of care and integrated with quality improvement initiatives of various sorts.

When we talk to clinicians about the practical problems they’re facing and describe some of the opportunities, we usually get two questions right off the bat.

  • Wait, do we even want this data?
  • If we got it, how would we manage it?

Let’s talk about the first one. There is a new, rich, and potentially exciting set of data sources. Once people think about it, the answer is, yes, they want it. They want it right now. Could you please give it to us?

We can now learn from the other 99.99% of the time that patients are not in the healthcare setting. But we do need to figure out how to address the challenges raised by the second question.

How to Incorporate, Organize and Use New Kinds of Data

The first question is: how do we get it? It's a lot of data, and a lot of different kinds of data. Standards for submitting and storing novel data types are not yet set. There is no government or regulatory agency that says, "Here's what all the Fitbit data must look like."

Standards, both technical and semantic, are evolving rapidly, and there's very good work being done in this space. One example that comes to mind is the Open mHealth project. They've been doing a tremendous job over the last three to four years, trying to work out both the technical and semantic interoperability challenges that go along with collecting mobile data and making it available for various clinical and quality improvement purposes.

The work is detailed and time-consuming. You have to go data element by data element and say, “Wait, how should we represent blood pressure in a mobile app so that it could be exchanged with other applications and everyone will understand exactly what this data means?”
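To give a feel for what that element-by-element work produces, here is an illustrative blood-pressure reading represented as a self-describing, unit-annotated data point, loosely modeled on the style of the Open mHealth schemas. The exact field names are approximate and should be checked against the published schemas before exchanging data for real.

```python
# Illustrative blood-pressure data point, loosely modeled on the Open mHealth
# style of self-describing, unit-annotated values. Field names are approximate;
# consult the published schemas before using them for real data exchange.
blood_pressure_point = {
    "systolic_blood_pressure":  {"value": 118, "unit": "mmHg"},
    "diastolic_blood_pressure": {"value": 76,  "unit": "mmHg"},
    "effective_time_frame": {"date_time": "2024-01-01T08:05:00Z"},
    "measurement_source": "home cuff, patient-reported via mobile app",
}

print(blood_pressure_point["systolic_blood_pressure"]["value"])
```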

What Are We Allowed to Do With the Data?

Let’s say you figured out how to get the data you want from a novel source using some existing set of standards. (There’s a very large body of work on health data standards curated by organizations like HL7—Health Level Seven International—and others that are very important to understand if you’re dealing with clinical data sources.)

Now, what are you allowed to do with it?

Well, there are two legal regimes that apply to using this data. The first is HIPAA and the second is the Common Rule, which may or may not apply to you depending on whether or not you’re using federal funds.

Let’s next talk about each of these.

HIPAA

HIPAA privacy rules apply to protected health information (PHI). If you have patient-generated data, it may or may not be PHI on its own; the question of whether my Fitbit data is PHI is not self-evident. However, if you are going to take any data that comes from a patient and combine it with clinical data, it's going to become protected health information.

It's identifiable and it bears on a patient's diagnosis, status, or medication; it bears on their health. Therefore, the only reasonable conclusion for an organization that is planning to use these interesting data types is to assume the data is going to be PHI and treat it as PHI.

Common Rule

The Common Rule applies to federally funded human subjects research. There are some threshold questions that you want to ask. Is this human subjects research? Is your organization involved in federal programs funded by HHS, or has it signed a federalwide assurance document that says, "We'll comply even if it's not"?

Usually, if you're doing pure QI and you have no interest in ever publishing the results, you may be able to stay outside the Common Rule and not worry about whether you have to comply. However, almost every entity we work with in practice thinks that they're going to publish their results.

If you're going to publish your results, you're probably generating generalizable scientific knowledge. If it's generalizable scientific knowledge, then it is human subjects research. Most journals will not let you publish without Institutional Review Board (IRB) review.

In practice, you may as well involve an IRB. That means you have to have a sane and clear consenting process and think very carefully about how you're consenting patients for the various projects they engage in. Your IRB will be able to advise you, and you may learn that for pure quality improvement projects you don't need consent because you qualify for an exemption.

(NOTE: If you’re doing this work, you need to talk to your institutional lawyer or a lawyer that specializes in this area. What we are saying does not constitute legal advice.)

Where and How Do We Store the Data?

This is a more difficult problem than most people realize. Some of the common sense approaches are problematic and probably won’t work.

Some of you may be thinking, “Well, gosh. It’s data about a patient. Why not store it in an EHR?” Well, is it the kind of data that should go in an EHR? Is it part of the patient’s chart? Is it part of their legal clinical healthcare record? Should it be discoverable by a legal process?

The answer is, in many cases, absolutely not. You don't want my Fitbit data in the EHR. No one is sure about the implications. Is the clinical team now responsible for reviewing my Fitbit data to make sure that I'm not at risk for a heart attack? I don't think they want to take on that responsibility. For practical reasons, you don't want it in the EHR.

Also, for technical reasons, you don't want it in an EHR. EHRs are not designed to absorb these novel data types; it's not that kind of system. They are designed to store patient chart information, medications, diagnoses, problem lists, and so on, and they're very good at doing that, but they are not yet designed to absorb novel data types that are changing rapidly.

The second thought for those of you who have been around health IT for a while is, "It sounds like you should push all that data into a clinical data warehouse." That's not a bad idea; it's actually a better idea than the EHR. However, the complexity and variety of departmental data and waveform data from sensors are going to overwhelm the structure of a typical data warehouse.

Very importantly, the metadata necessary to preserve context and provenance is often deliberately reduced in a standard data warehouse architecture. Interpreting data stored in a CDW often requires going back to the original systems for the context and provenance you need to understand particular values. That becomes critical when you're dealing with novel and very diverse data types: you often can't interpret something until you know what the associated metadata actually says.

Lastly, why not store it in some ad hoc system for each data type or for each analysis type? That sounds like a promising idea. But you get a combinatorial explosion. Now you have a variety of data uses and you have a variety of data sources. Every time you want to use some sources for a particular use, you’ve got an explosion of ETL (extract, transform, load) projects on your hands.

Each of the lines in the spaghetti diagram may represent several years of working to integrate the data. An enormous and ever-growing amount of labor would be required to maintain that kind of architecture.

[Diagram: a spaghetti of point-to-point ETL connections between data sources and data uses]

Next, let’s talk about how we organize it to use it effectively. There are several ways out.

How Do We Organize the Data to Use it Effectively?

You’re getting lots of exciting and potentially invaluable data. It just raises some practical, legal, and governance issues and we know there are some things you can’t do.

The key to these data management challenges is addressing data volume, variety, and schema volatility while coping with the complexity of the many analytic use cases. You need to avoid rebuilding the factory every time you integrate a novel data type or answer a new kind of research question.

One approach was described in a recent paper by Chris Chute and Stan Huff, in which they propose what they call the Pluripotent Clinical Repository of Data, or PiCaRD. (We think they're both Trekkies. Actually, we're sure they are.) The idea is that you deconstruct the clinical data into standard, semantically annotated, metadata-rich elements; they propose representing them using FHIR and CIMI standards. Store those elements in the repository along with all the context and provenance metadata, and then reassemble them into analytic data marts as needed.
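Here is a minimal sketch of what one such deconstructed, metadata-rich element might look like as a data structure. The field names are our own shorthand, not the representation Chute and Huff propose; a real implementation would use the FHIR/CIMI models themselves.

```python
# Sketch of a "pluripotent" data element: a single observation kept with enough
# context and provenance to be reassembled into different analytic data marts.
# Field names are our own shorthand, not the FHIR/CIMI representation itself.
from dataclasses import dataclass, field

@dataclass
class DataElement:
    patient_id: str
    concept_code: str          # e.g., a LOINC or SNOMED CT code
    value: float
    unit: str
    effective_time: str        # ISO 8601 timestamp
    source_system: str         # provenance: which system produced the value
    collection_context: str    # e.g., "clinic visit", "home device", "PROM survey"
    extra_metadata: dict = field(default_factory=dict)

element = DataElement(
    patient_id="p1",
    concept_code="8867-4",     # LOINC: heart rate (illustrative)
    value=72.0,
    unit="beats/min",
    effective_time="2024-01-01T08:05:00Z",
    source_system="consumer wearable via stream aggregator",
    collection_context="home device",
)
print(element)
```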

This actually is a very reasonable approach. It's not trivial to implement and it requires some institutional investment. But, in terms of strategy, it would get you much closer to what you want: the ability to make use of these very interesting new data types. Let's drill down and see what that kind of architecture would look like in practice.

Everything we've described so far implies that if you are integrating clinical and nonclinical data, you would want to build a system with a data flow pattern similar to what's represented in this diagram. In the upper right, you'll see clinical data sources going in; in the lower right, nonclinical data sources going in. The clinical sources feed into clinical data services, which ingest clinical objects, validate them, and parse them out into meaningful pieces. Those pieces go into a central data repository that we would call, following Chute and Huff, the pluripotent representation of clinical and ancillary data.

The nonclinical data goes into what we call research data services. We use research as a shorthand for anything that handles messy and complex data. These services provide research operational support, research data integration, centralized research registry, and research data access.

You also want to provide a patient portal for collecting patient data via PROMs and PREMs, a portal for registry stewards, and a portal for the clinical providers so they can annotate the various pieces of clinical data.

But if you’ve done all that work and you’ve parsed out the data into meaningful chunks, you can now reassemble them—usually into data marts—to provide a variety of services without having to reinvent the wheel each and every time.

Advantages: What Should an Effective Architecture Do for You?

Let's recap the advantages of building this type of system. What should an effective architecture do for you? It should:

  • Leverage advances in health IT standards
  • Enable multiple data uses
  • Support research-quality data acquisition and curation
  • Support multiple data types, including novel data types
  • Support multiple sites
  • Allow for longitudinal designs (It’s very important to be able to track data for the same patient over a long period of time.)

Conclusion

The future is already here. It’s already possible to start incorporating data from research and other non-clinical sources into quality improvement initiatives.

Thanks to the Final Rule, the current legislative and regulatory climate is set to dramatically increase the amount of data becoming available for quality improvement. And it's not all going to be easy: there are still both technical and practical challenges in working with all this new data. The good news is that there are solutions available to address those challenges, both from a fundamental informatics perspective and from a team expertise perspective.

With the Final Rule going into effect as of late 2018, we’re going to give ourselves until the start of 2019 to make sure that nothing dramatic or unexpected takes place. In a future article, we’ll do a much deeper dive into the legislative and regulatory climate and how it’s expected to impact your QI initiatives.