Domain

Clinical Informatics

Type

Case Study or Comparative Case Study

Theme

effectiveness; population

Start Date

7-6-2014 2:55 PM

End Date

7-6-2014 4:15 PM

Structured Abstract

Introduction

Collaborative research networks using electronic data support learning health system goals by facilitating the identification of best-practice care across delivery organizations. However, submission of required data elements to these consortia is often complex and resource intensive. We describe our process and lessons learned during the preparation of an expansive, electronically derived dataset for submission to a large national collaborative network.

Background

The High Value Healthcare Collaborative (HVHC) consists of 19 geographically diverse healthcare delivery systems (encompassing markets of over 70 million lives) and The Dartmouth Institute for Health Policy and Clinical Practice. Participating members submit electronic data for inclusion in a comprehensive data warehouse allowing benchmarking, querying, reporting, and analysis. A 140-page specifications document outlines 19 required data tables for diverse inpatient and outpatient condition-specific populations. Our recent HVHC data submission included over 370,000 unique patients, 9,000 providers, and 3 million encounters. Dataset construction required access to multiple electronic data systems and over 1,300 hours of personnel time.

Innovation

The scope of this electronic data submission warranted use of a structured project team and the establishment of a formal quality control process. We used an interdisciplinary group of Masters- and Doctoral-level analyst/programmers plus a Quality Assurance (QA) lead and a Project Manager. Over 80 raw datasets were assembled into the final data submission. We created a formal quality assurance and feedback process to track and review data. This process required that each raw dataset be accompanied by a standard set of instructional notes outlining known data quality issues. Data were placed in a staging location, prompting notification of the QA team. Datasets were reviewed for correct naming, order, variable formats, and missing or unacceptable values, and were placed into a “Pass” or “Fail” folder. Data not meeting specifications were reviewed and updated by the assigned analyst, with repetition of this sequence until data met specifications. The QA team tracked preparation and validation steps and provided feedback on file status. The Project Manager oversaw timelines and facilitated communication with the HVHC data coordinating center.
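The staging-and-routing step described above can be sketched as a small script. This is a minimal, hypothetical illustration only: the column names, file layout, and validation rules are assumptions for the example, not the HVHC specifications, and the actual process also covered file naming, ordering, and variable formats.

```python
import csv
import shutil
from pathlib import Path

# Hypothetical required columns; the real 140-page specification
# defines 19 tables, each with its own required fields.
REQUIRED_COLUMNS = ["patient_id", "encounter_id", "encounter_date"]

def check_dataset(path: Path) -> list[str]:
    """Return a list of QA problems found in one staged CSV dataset."""
    problems = []
    with path.open(newline="") as f:
        reader = csv.DictReader(f)
        header = reader.fieldnames or []
        for col in REQUIRED_COLUMNS:
            if col not in header:
                problems.append(f"missing required column: {col}")
        for lineno, row in enumerate(reader, start=2):  # line 1 is the header
            for col in REQUIRED_COLUMNS:
                if col in header and not (row.get(col) or "").strip():
                    problems.append(f"line {lineno}: empty value in {col}")
    return problems

def route_dataset(path: Path, staging: Path) -> Path:
    """Move a staged file into a Pass/ or Fail/ folder based on the checks."""
    verdict = "Pass" if not check_dataset(path) else "Fail"
    dest = staging / verdict
    dest.mkdir(parents=True, exist_ok=True)
    return Path(shutil.move(str(path), str(dest / path.name)))
```

Files routed to Fail/ would return to the assigned analyst for correction and re-staging, repeating until every dataset lands in Pass/.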

Lessons Learned

This complex electronic dataset build process required an experienced technical team and sufficient infrastructure to ensure success. The team accessed multiple electronic systems, identified and interpreted required data, and assembled the datasets. Other key contributors focused primarily on project management and quality assurance, serving an essential role in helping the technical team to communicate, remain organized, and document processes. Additional learning points from this work centered on allowing adequate time and personnel resources to build these types of datasets. Likewise, communication in these efforts is critical. We held weekly meetings to discuss project challenges, leading to a determination that we needed an internal specifications document that standardized identifying variables and extract instructions for our internal systems.

Next Steps

To promote efficiency in future electronic dataset development, we will create an internal specifications document (including standardized code/naming conventions) and automate QA steps. We are also creating timelines and resource expectations for the next large-volume data submission related to our engagement in the collaborative.

Acknowledgements

Portions of this work were funded by CMMI collaborative agreement, sponsored by the Department of Health and Human Services, Centers for Medicare and Medicaid Services. The authors of this work are solely responsible for its content.

Creative Commons License

Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License.


Developing Electronic Data Methods Infrastructure to Participate in Collaborative Research Networks
