By Deborah J. Cohen, David A. Dorr, Kyle Knierim, C. Annette DuBard, Jennifer R. Hemler, Jennifer D. Hall, Miguel Marino, Leif I. Solberg, K. John McConnell, Len M. Nichols, Donald E. Nease Jr., Samuel T. Edwards, Winfred Y. Wu, Hang Pham-Singer, Abel N. Kho, Robert L. Phillips Jr., Luke V. Rasmussen, F. Daniel Duffy, and Bijal A. Balasubramanian
Primary Care Practices’ Abilities And Challenges In Using Electronic Health Record Data For Quality Improvement
ABSTRACT Federal value-based payment programs require primary care practices to conduct quality improvement activities, informed by the electronic reports on clinical quality measures that their electronic health records (EHRs) generate. To determine whether EHRs produce reports adequate to the task, we examined survey responses from 1,492 practices across twelve states, supplemented with qualitative data. Meaningful-use participation, which requires the use of a federally certified EHR, was associated with the ability to generate reports—but the reports did not necessarily support quality improvement initiatives. Practices reported numerous challenges in generating adequate reports, such as difficulty manipulating and aligning measurement time frames with quality improvement needs, lack of functionality for generating reports on electronic clinical quality measures at different levels, discordance between clinical guidelines and measures available in reports, questionable data quality, and vendors that were unreceptive to changing EHR configuration beyond federal requirements. The current state of EHR measurement functionality may be insufficient to support federal initiatives that tie payment to clinical quality measures.
Since 2008, adoption of office-based physician electronic health records (EHRs) has more than doubled.1 Federal investment played a critical role in accelerating EHR adoption through a combination of financial incentives (the EHR Incentive Program) and technical assistance programs (Regional Extension Centers).2–6 The expectation was that widespread adoption of EHRs would efficiently generate meaningful data, enabling accurate measurement of quality, informing practice quality improvement efforts, and ultimately leading to improved care processes and outcomes. Yet little is known about how well EHRs meet these expectations, particularly among primary care practices with scarce technical resources.7–11
The EHR Incentive Program set standards for the meaningful use of EHRs, which included implementing an EHR system and demonstrating its use to improve care. There were seventeen core standards defined in stages 1 and 2 of the meaningful-use program (2015–17). Stage 3 began in 2017 and expanded the requirements to include health information exchange, interoperability, and advanced quality measurement to maximize clinical effectiveness and efficiency by supporting quality improvement. As of 2017 the EHR Incentive Program defined sixty-four electronic clinical quality measures12 that are
doi: 10.1377/hlthaff.2017.1254 HEALTH AFFAIRS 37, NO. 4 (2018): 635–643 ©2018 Project HOPE—The People-to-People Health Foundation, Inc.
Deborah J. Cohen is a professor of family medicine and vice chair of research in the Department of Family Medicine at Oregon Health & Science University, in Portland.

David A. Dorr is a professor and vice chair of medical informatics and clinical epidemiology, both at Oregon Health & Science University.

Kyle Knierim is an assistant research professor of family medicine and associate director of the Practice Innovation Program, both at the University of Colorado School of Medicine, in Aurora.

C. Annette DuBard is vice president of Clinical Strategy at Aledade, Inc., in Bethesda, Maryland.

Jennifer R. Hemler is a research associate in the Department of Family Medicine and Community Health, Research Division, Rutgers Robert Wood Johnson Medical School, in New Brunswick, New Jersey.

Jennifer D. Hall is a research associate in family medicine at Oregon Health & Science University.

Miguel Marino is an assistant professor of family medicine at Oregon Health & Science University.

Leif I. Solberg is a senior adviser and director for care improvement research at HealthPartners Institute, in Minneapolis, Minnesota.
April 2018 37:4 Health Affairs 635
Health Information Technology
Downloaded from on August 30, 2022. Copyright Project HOPE—The People-to-People Health Foundation, Inc. For personal use only. All rights reserved. Reuse permissions at

aligned with national quality standards. The rationale behind using these measures was to reduce the need for clinicians’ involvement in reporting by using data already collected within the EHR and automating the electronic submission of results.

Quality measurement for payment grew with the 2006 implementation of the Physician Quality Reporting System, as an increasing number of clinicians and practices reported their quality data electronically. In 2016 the Quality Payment Program6 was developed as a way to streamline quality reporting programs while expanding the expectations of electronic reporting as defined by the Merit-based Incentive Payment Program. A core expectation of meaningful use and the subsequent Quality Payment Program was for EHRs to have the capability to measure and report electronic clinical quality measures and for practices to use these data to improve quality. To that end, the Office of the National Coordinator for Health Information Technology (ONC) worked with the Centers for Medicare and Medicaid Services (CMS) and stakeholders to establish a set of certification criteria for EHRs. Use of an ONC-certified EHR was a core requirement of meaningful use. The functionality of certified EHR systems’ reporting of electronic clinical quality measures was aligned with CMS-based incentives and quality criteria; it is anticipated that these quality-based incentives will continue as meaningful use evolves into the Quality Payment Program.

Clinicians participating in the Quality Payment Program were required to report on the full 2017 performance period by March 31, 2018. In addition to meeting external reporting requirements, EHRs must help practices identify delivery gaps and “bright spots” of performance13,14 that are critical for quality improvement. This requires the ability to produce patient-, clinician-, and practice-level reports across various measurement periods and at different frequencies and to allow for customized specifications to conduct improvement cycles.15
EHR systems often fail to meet these expectations, but it is unclear whether this is because of implementation differences, providers’ lack of knowledge about capabilities, or lack of capabilities in the EHRs themselves.16–18
We explore how well EHRs—as currently implemented—meet the measurement-related quality improvement needs in primary care practice. To do so, we examined survey data from 1,492 practices and combined this information with qualitative data to gain a richer answer than surveys alone could provide. Our findings highlight the challenges that practices face as value-based payment replaces volume-based systems.
Study Data And Methods

Study Design And Cohort In 2015 the Agency for Healthcare Research and Quality (AHRQ) launched EvidenceNOW: Advancing Heart Health in Primary Care. EvidenceNOW is a three-year initiative dedicated to helping small and medium-size primary care practices across the US use the latest evidence to improve cardiovascular health and develop their capacity for ongoing improvement. AHRQ funded seven grantees (called cooperatives) that span seven US regions (and twelve states). Cooperatives were tasked with developing and leveraging sustainable infrastructure to support over 200 practices in their regions in improving electronic clinical quality measures endorsed by CMS and the National Quality Forum for aspirin use,19 blood pressure monitoring,20 cholesterol management,21 and smoking screening and cessation support22 (the ABCS measures).

AHRQ also funded an evaluation of this initiative called Evaluating System Change to Advance Learning and Take Evidence to Scale (ESCALATES) to centralize, harmonize, collect, and analyze mixed-methods data with the goal of generating cross-cooperative, generalizable findings.23 ESCALATES started at the same time the cooperatives’ work began. The goals of ESCALATES included identifying facilitators of and barriers to implementing regionwide infrastructure to support quality improvement among primary care practices, of which health information technology (IT) was a central component.

Data Sources ESCALATES compiled quantitative survey data collected by the cooperatives from the 1,492 practices. While cooperative study designs (for example, stepped wedge, group randomized trials) varied, all cooperatives used their first year (May 2015–April 2016) for recruitment and start-up activities, and all staggered the time at which practices received the intervention. Survey data were collected from practices before the start of the intervention (that is, at baseline), which ranged from September 2015 to April 2017. We collected complementary qualitative data (observation, interview, and online diary) for this study in the period May 2015–April 2017.23 We chose this time period because it gave us exposure to the data issues that manifested themselves during start-up and implementation.

Qualitative Data Collection And Management We conducted two site visits with every cooperative. The first site visit occurred before implementation of the intervention (August 2015–March 2016) and focused on understanding the cooperative, its partners, regional resources (including EHR and data capacities), and approach to supporting large-scale
K. John McConnell is a professor of emergency medicine and director of the Center for Health Systems Effectiveness, both at Oregon Health & Science University.

Len M. Nichols is director of the Center for Health Policy Research and Ethics and a professor of health policy at George Mason University, in Fairfax, Virginia.

Donald E. Nease Jr. is an associate professor of family medicine at the University of Colorado School of Medicine, in Aurora.

Samuel T. Edwards is an assistant research professor of family medicine and an assistant professor of medicine at Oregon Health & Science University and a staff physician in the Section of General Internal Medicine, Veterans Affairs Portland Health Care System.

Winfred Y. Wu is clinical and scientific director in the Primary Care Information Project at the New York City Department of Health and Mental Hygiene, in Long Island City, New York.

Hang Pham-Singer is senior director of quality improvement in the Primary Care Information Project at the New York City Department of Health and Mental Hygiene.

Abel N. Kho is an associate professor and director of the Center for Health Information Partnerships, Northwestern University, in Chicago, Illinois.

Robert L. Phillips Jr. is vice president for research and policy at the American Board of Family Medicine, in Washington, D.C.

Luke V. Rasmussen is a clinical research associate in the Department of Preventive Medicine, Northwestern University.

F. Daniel Duffy is professor of medical informatics and internal medicine at the University of Oklahoma School of Community Medicine–Tulsa.

practice improvement. The second site visit was conducted during implementation of the intervention (July 2016–April 2017) and focused on observing practice facilitators work with practices. We observed forty-one facilitators conducting sixty unique practice quality improvement visits. During site visits we took field notes and conducted and recorded (and later transcribed the recordings of) semistructured interviews with key stakeholders (for example, investigators, facilitators, and health IT experts).

To supplement observation and interview data, we attended and took notes at a meeting of an AHRQ-initiated cooperative work group to discuss health IT challenges, and we implemented an online diary24 for each cooperative that included documentation by key stakeholders (such as investigators, health IT experts, and facilitators) of implementation experiences in real time (approximately twice a month).

Online diary data, interviews, meeting notes, and field notes were deidentified for individual participants and reviewed for accuracy. To confirm our findings, cooperative representatives completed a table that characterized obstacles to using EHR data for quality improvement.

We used Atlas.ti for data management and analysis. The Oregon Health & Science University Institutional Review Board approved and monitored this study.
Survey Measures Cooperatives administered a survey to all of their practices. The survey, completed by a lead clinician or practice manager, consisted of a subset of questions from the National Ambulatory Medical Care Survey’s Electronic Medical Records Questionnaire25–28 and assessed practice characteristics, EHR characteristics,29 and reporting capabilities (see online appendix exhibit A1 for survey items).30
Qualitative Data Analysis Three authors (Deborah Cohen, Jennifer Hemler, and Jennifer Hall) analyzed qualitative data in real time following an immersion-crystallization approach31 and coded data to identify text related to clinical quality measurement, quality improvement, and EHRs. We analyzed data within and across cooperatives to identify nuanced findings and variations regarding usage of EHRs for quality improvement. Data collection and analysis were iterative; initial findings prompted additional questions that were later answered in the online diaries and during site visits to cooperatives.32 We triangulated data with other sources, discussing differences until we reached saturation—the point at which no new findings emerged.32 Qualitative findings informed the selection of variables for quantitative analyses, and both quantitative and qualitative data informed interpretations.
Quantitative Data Analysis Two authors (Bijal Balasubramanian and Miguel Marino) used descriptive statistics to characterize the EvidenceNOW practice sample and used multivariable logistic regression to evaluate the association between practice characteristics and EHR reporting capability, measured as a “yes” or “no” response to the following question: “Does your practice have someone who can configure or write quality reports from the EHR?” Indicator variables for cooperatives were included in the logistic model to account for regional variability, and we used multiple imputation by chained equations to account for missing data (see appendix exhibit A2).30 We performed statistical analyses using R, version 3.4.0.

Limitations Our study had several limitations. First, our findings may have underestimated the challenges that practices face in using EHRs for quality measurement, as the practices recruited to participate in EvidenceNOW may have self-selected based on their greater quality improvement and health IT confidence.

Second, our understanding of practices’ challenges in using EHRs for quality measurement was based on the views of cooperative experts and does not necessarily represent the practices’ perspectives. Thus, we were unable to quantify the extent to which practices experienced these problems. Yet it is from the cooperatives’ vantage point that we identified problems that are often difficult to characterize using practice-level surveys, and it may be that solutions are most effective at the regional rather than practice level.

Third, our primary survey outcome—the response to the question “Does your practice have someone who can configure or write quality reports from the EHR?”—combines workforce and reporting capacity in a single item. While it might be preferred to parse these issues in separate items, we did not do this because of concerns about response burden. Our qualitative data suggest that directing more survey questions to practices might not have been useful, since practices lack staff with the expertise to answer more technically complex questions. Data collected from cooperatives’ health IT experts complemented practice survey data, shedding light on this complex issue.

Fourth, our study findings were also limited by our inability to identify whether some EHRs faced more or fewer challenges than others, and by the fact that some survey items had more than 10 percent missing data. However, our conclusions were based on one of the largest studies of geographically dispersed primary care practices, and the use of multiple imputation leveraged this scale to minimize potential bias due to missing data.
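The modeling step described under Quantitative Data Analysis (logistic regression of reporting capability on practice characteristics, with cooperative indicators, summarized as odds ratios) can be sketched in code. The sketch below is illustrative only: the variable names and simulated data are hypothetical, it uses Python's statsmodels rather than the R used in the study, and it omits the multiple-imputation step.

```python
# Illustrative sketch of the study's modeling step: a multivariable
# logistic regression of EHR reporting capability on a practice
# characteristic, with odds ratios from exponentiated coefficients.
# All variable names and the simulated data are hypothetical; the
# study itself used R 3.4.0 with multiple imputation by chained
# equations, which is omitted here for brevity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1490
df = pd.DataFrame({
    # 1 = practice participated in meaningful-use stages 1 and 2
    "mu_stage12": rng.integers(0, 2, n),
    # cooperative (region) identifier, to absorb regional variability
    "cooperative": rng.integers(1, 8, n).astype(str),
})
# Simulate the outcome ("someone can configure or write quality reports
# from the EHR") with higher odds for meaningful-use practices.
true_logit = -0.2 + 0.5 * df["mu_stage12"]
df["can_report"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Indicator variables for cooperative enter via C(); fitted coefficients
# are log-odds, so exp() yields odds ratios and 95% confidence intervals.
model = smf.logit("can_report ~ mu_stage12 + C(cooperative)", data=df).fit(disp=0)
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(odds_ratios["mu_stage12"], conf_int.loc["mu_stage12"].tolist())
```

Exponentiating a fitted log-odds coefficient is what turns it into an odds ratio of the kind reported in exhibit 3, such as the 1.65 for meaningful-use stages 1 and 2.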
Bijal A. Balasubramanian is an associate professor in the Department of Epidemiology, Human Genetics, and Environmental Sciences, and regional dean of UTHealth School of Public Health, in Dallas, Texas.

Study Results

Of the 1,710 practices recruited to EvidenceNOW, 1,492 (87.3 percent) completed the practice survey. The majority of these practices had ten or fewer clinicians (84 percent), were located in urban or suburban areas (71 percent), and were owned by clinicians (40 percent) or hospital/health systems (23 percent) (exhibit 1). Over 93 percent used EHRs, of which 81 percent were certified by the ONC for 2014. While sixty-eight different EHRs were represented, Epic, eClinicalWorks, and NextGen were the most commonly used systems. The number of different EHR systems among practices within a cooperative ranged from four to thirty-two. Sixty percent of practices participated in stages 1 and 2 of meaningful use. (More detailed findings are in exhibit 1 and appendix exhibit A2.)30
Challenges Using Electronic Clinical Quality Measures For Quality Improvement Practices and quality improvement facilitators experienced significant challenges using EHRs to generate tailored reports of electronic clinical quality measures for quality improvement, which led to substantial delays in reporting quality measures and engaging in measurement-informed quality improvement activities (exhibit 2).
Exhibit 1
Selected characteristics and electronic health record (EHR) system capacity of 1,492 EvidenceNOW practices
Values are Number / Percent / Range across cooperatives (%)

Practice size (number of clinicians)
  1: 356 / 23.9 / 6.2–52.4
  2–5: 696 / 46.6 / 16.2–59.1
  6–10: 205 / 13.7 / 6.8–17.2
  11 or more: 160 / 10.7 / 1.9–23.4

Practice ownership
  Clinician: 603 / 40.4 / 27.8–72.8
  Hospital/health system: 342 / 22.9 / 1.6–53.9
  Federal (a): 322 / 21.6 / 8.4–42.7
  Academic: 19 / 1.3 / 0.0–5.8
  Other or none (b): 147 / 9.9 / 1.0–38.8

Practice location (c)
  Urban: 948 / 63.5 / 34.9–100.0
  Suburban: 107 / 7.2 / 0.0–14.8
  Large town: 202 / 13.5 / 0.0–29.5
  Rural area: 235 / 15.8 / 0.0–27.9

Electronic health record characteristics
  Practices using ONC-certified EHR (n = 1,490): 1,215 / 81.5 / 58.9–100.0
  Participation in meaningful use (n = 1,490)
    Neither stage 1 nor stage 2: 230 / 15.4 / 8.4–23.8
    Stage 1 only: 176 / 11.8 / 5.3–20.7
    Stages 1 and 2: 887 / 59.5 / 38.0–84.5

Clinical quality measure reporting capability
  Produced a CQM in prior 6 months (d) (n = 1,281)
    Aspirin: 616 / 48.1 / 30.9–65.0
    Blood pressure: 817 / 63.8 / 43.5–78.8
    Smoking: 868 / 67.8 / 48.7–80.8
    All three: 596 / 46.5 / 29.8–64.2
  Report CQMs at practice level (d) (n = 1,069): 897 / 84.0 / 52.7–95.7
  Report CQMs at provider level (d) (n = 1,069): 903 / 84.5 / 55.2–94.7
  Ability to create CQM reports from EHR (e) (n = 1,490): 913 / 61.3 / 37.2–75.2
SOURCE Authors’ analysis of data from the EvidenceNOW practice survey. NOTES Percentages might not sum to 100 because of missing data. Denominators for some variables are dependent on survey skip logic. ONC is Office of the National Coordinator for Health Information Technology. (a) Includes federally qualified health centers; rural health clinics; Indian Health Service clinics; and Veterans Affairs, military, Department of Defense, or other federally owned practices. (b) Includes practices with nonfederal, private/nonclinician, or tribal ownership; those indicating “other” without specifying an ownership type; and those responding “no” to every other ownership type. (c) Location categories determined using rural-urban commuting area codes. Practices in one cooperative were excluded from the analysis because questions about reporting clinical quality measures (CQMs) were not included in their survey. (d) One cooperative was excluded from the analysis because CQM reporting questions were not included in their survey. (e) More than 15 percent of the practices had missing data.

Generating Reports Of Electronic Clinical Quality Measures For Quality Improvement Practices participating in stages 1 and 2 of meaningful use were more likely to report being able to generate reports of electronic clinical quality measures at the practice and clinician levels, compared to practices not participating (odds ratio: 1.65) (exhibit 3). Similarly, practices participating in quality improvement demonstration projects or in external payment programs that incentivized quality measurement had 51–73 percent higher odds of reporting an ability to generate reports of electronic clinical quality measures (exhibit 3). Facilitators and health IT experts working directly with practices noted that practices could produce reports that complied with meaningful use. However, EHR reporting tools did not meet practices’ needs for quality improvement measurement.
Practices reported needing reports with customizable time frames, which could be repeated as desired, to align with quality improvement activities. Cooperative experts reported that some ONC-certified EHRs, as implemented, could generate Physician Quality Reporting System or meaningful-use clinical quality reports only for a calendar year. When functions were available to customize measurement periods, significant manual configuration or additional modules were required. According to a report on measurement challenges from cooperative 3, “out of the box tools are inadequate to use for routine quality improvement. This necessitated working with vendors to deploy reports in the linked reporting tool, which required expertise in database query writing, which is almost universally absent from the skillset of staff at independent small practices.”
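To illustrate why calendar-year-only reporting fell short, the sketch below shows the kind of query a practice would want to run: a quality measure computed over an arbitrary measurement window, at either the practice or the clinician level. The schema, column names, and simplified blood-pressure-control logic are hypothetical assumptions, not any vendor's actual data model or the official electronic clinical quality measure specification.

```python
# Hypothetical sketch of the reporting flexibility practices needed:
# a clinical quality measure (simplified blood pressure control)
# computed over a custom date range, at the practice or clinician
# level. The table layout and thresholds are illustrative only.
import pandas as pd

visits = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "clinician":  ["A", "A", "B", "B", "B", "A"],
    "visit_date": pd.to_datetime(["2016-02-01", "2016-03-15", "2016-02-20",
                                  "2016-05-01", "2016-06-10", "2016-06-20"]),
    "systolic":   [132, 150, 128, 145, 138, 120],
    "diastolic":  [85, 95, 80, 92, 88, 78],
})

def bp_control_rate(df, start, end, level=None):
    """Share of visits in [start, end] with blood pressure below 140/90,
    optionally grouped by a column such as 'clinician'."""
    window = df[(df["visit_date"] >= start) & (df["visit_date"] <= end)].copy()
    window["controlled"] = (window["systolic"] < 140) & (window["diastolic"] < 90)
    if level is None:
        return window["controlled"].mean()           # practice-level rate
    return window.groupby(level)["controlled"].mean()  # e.g., per clinician

# A quality improvement cycle might ask for a quarter, not a calendar year:
practice_rate = bp_control_rate(visits, "2016-01-01", "2016-03-31")
by_clinician = bp_control_rate(visits, "2016-01-01", "2016-06-30", level="clinician")
```

A reporting tool with this flexibility would let a facilitator rerun the same measure monthly or quarterly during rapid-cycle improvement, which is what out-of-the-box meaningful-use reports generally could not do.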
Exhibit 2
Challenges with using electronic health records (EHRs) for quality measurement and improvement
General challenges

Inability to produce clinical quality reports that align with quality improvement needs:
- ONC-certified EHRs for meaningful use do not provide customizable measure specifications, date ranges, and frequency of reports.
- Vendors are resistant to making changes to EHRs beyond what is required for ONC certification and meaningful use, and any changes are expensive and take too much time to deliver.
- Most practices lack the technical expertise to extract and prepare data and cannot afford external consultants.

Inability to produce clinical quality reports at practice, clinical team, clinician, and patient levels:
- Most EHRs lack this functionality, which is necessary to compare clinicians and produce lists of patients in need of services or of services needed by individual patients.
- Purchasing this functionality is an upgrade expense that smaller practices cannot afford. When this functionality is present, smaller primary care practices usually lack the necessary health IT expertise to make use of these tools.

Data from EHR reports are not credible or trustworthy:
- EHR design features lead to suboptimal documentation of clinical quality measures (for example, EHRs lack consistent or obvious places to document the measures).
- Clinical team documentation behavior leads to incomplete extraction of clinical quality variables.

Delays in modifying specifications when guidelines or measures change:
- Delays in government revision of value sets after changes occur.
- Delays in vendor programmatic changes per value set changes.
- Delays in practice EHR upgrades.

Challenges in developing regional data infrastructure (data warehouses, hubs, exchanges)

Cooperatives developing regional data infrastructure encounter developmental delays:
- Vendors charge excessive fees for connecting practices to a data warehouse, hub, or health information exchange.
- Vendors are unresponsive and “drag their heels” when working with cooperatives to create connections.
- Vendors exclude information from continuity-of-care documents that is critical to calculating clinical quality measures.
- Vendor tools for exporting batches of the documents are slow, making the documents difficult to export.
- Data exported in batches of the documents lack credibility and trustworthiness for the reasons listed above.

Inability to benchmark performance because data extracted from different EHRs are not comparable:
- Variations in EHR system versions and implementations.
- Vendors make different decisions about what fields or codes to include when calculating clinical quality measures.
SOURCE Authors’ analysis of qualitative data from EvidenceNOW practices. NOTES Continuity-of-care documents are explained in the text. ONC is Office of the National Coordinator for Health Information Technology. IT is information technology.

EHR vendors charged extra fees to access these tools, and smaller practices could not pay for this assistance. Additionally, some EHRs could generate meaningful-use metrics only for patients with Medicare or Medicaid coverage (often a minority of practice patients). Many vendors were resistant to making software changes beyond what was required for Physician Quality Reporting System or meaningful-use reporting. Thus, most practices were unable to query EHR data for measurement in rapid-cycle tests of change.

Practices owned by hospital/health systems had higher odds of reporting the ability to generate reports of electronic clinical quality measures, compared to clinician-owned practices (OR: 2.88), while solo and rural practices were less likely than practices with six or more physicians and those in urban areas to report being able to generate such reports (exhibit 3). Complementary qualitative data showed that system-owned practices had greater health IT and data capability than solo and rural practices did, but these resources were centralized. These practices and facilitators experienced substantial and repeated delays in getting access to data needed for quality improvement, as organizational priorities took precedence (particularly when tied to payment), and their experts were overwhelmed with other demands.

New Clinical Guidelines Quality measurement was complicated by changes in clinical guidelines. The American College of Cardiology and American Heart Association guidelines on cardiovascular disease risk changed dramatically in 2013.33 At the start of EvidenceNOW in 2015, measurements for the A, B, and S parts of the ABCS measures were routinely part of the Physician Quality Reporting System. However, CMS did not publish the criteria for the C part (the cholesterol measure) until May 4, 2017. The measure chosen for the EvidenceNOW initiative matched the 2013 guideline, but lack of a complementary official CMS measure meant that no EHR had yet implemented a similar measure in their system. Some practices created their own measures based on all or part of the new guidelines to inform quality improvement, but this was not useful for benchmarking.

Validity Across Different Electronic Health Record Systems Facilitators and health IT experts often found verifiable problems in clinical quality reports. For example, a representative of cooperative 6 told us in an interview: “Doctors always look at our data and say it’s not [correct]…. Unless you put [information] in the exact spot, it doesn’t pull it [for the electronic clinical quality measures]…. They didn’t hit the little cog-radio button. It takes [you] to a template that you have to complete. In order to pull the data it has to be on there.”

It was common for there to be specific locations (for example, checkboxes) where structured data elements had to be recorded to be counted in a calculation of electronic clinical quality measures. The combination of vendor-standardized documentation requirements for the measures, lack of alignment of these requirements with clinical workflows, and clinical teams’ lack of awareness of documentation rules and the consequences of recording patterns on quality measurement led to many examples of unreliable reports of the measures.

Challenges Developing Regional Data Infrastructure For Quality Improvement Cooperatives that used data warehouses, hubs, or health information exchanges in their regions
Exhibit 3
Association between practice characteristics and ability to create clinical quality reports at the practice level
Values are Odds ratio (95% CI)

Practice size (number of clinicians)
  1: 0.59** (0.38, 0.93)
  2–5: 0.87 (0.57, 1.33)
  6 or more: Ref

Practice ownership
  Clinician: Ref
  Hospital/health system: 2.88** (1.92, 4.33)
  Federal: 6.02** (3.65, 9.92)
  Academic, other or none: 1.14 (0.64, 2.01)

Practice location
  Urban: Ref
  Suburban: 0.70 (0.39, 1.26)
  Large town: 1.03 (0.64, 1.67)
  Rural area: 0.61** (0.39, 0.96)

Practice participation in meaningful use
  Neither stage 1 nor stage 2: Ref
  Stage 1 only: 1.09 (0.65, 1.85)
  Stages 1 and 2: 1.65** (1.08, 2.51)

Practice part of external payment program
  No: Ref
  Yes: 1.73** (1.19, 2.51)

Practice participating in demonstration project
  No: Ref
  Yes: 1.51** (1.09, 2.09)
SOURCE Authors’ analysis of data from the EvidenceNOW practice survey. NOTES Two practices reported not having an EHR system, and thus the sample was limited to 1,490 primary care practices of small to medium size in twelve states that reported having an EHR system. Multiple imputed logistic regression models were performed to estimate odds ratios and corresponding 95% confidence intervals (CIs). EHR reporting capability was measured as a “yes” or “no” response to the following survey question: “Does your practice have someone who can configure or write quality reports from the EHR/EMR [electronic medical record]?” This model controlled for cooperative. Before multiple imputation, the prevalence of EHR reporting capability among practices that had a complete response was 76.0 percent. After multiple imputation, the prevalence averaged across the thirty imputed data sets was 74.9 percent. For more detailed results of this process, see appendix exhibit A2 (note 30 in text). Federal and “other or none” ownership and determination of location are explained in the notes to exhibit 1. **p < 0.05

did so to provide practices with clean data and tools for measurement. To develop this type of data infrastructure, cooperatives worked with EHR vendors to access back-end EHR data. Exhibit 2 summarizes the challenges cooperatives faced in using EHR data for this purpose.

Cooperatives reported that EHR vendors and other organizations charged high fees ($5,000–$15,000) for accessing data, and cooperatives found it difficult to export continuity-of-care documents (electronic documents standardized for patient information exchange) in batch format. Exporting batches of these documents—a requirement for ONC certification29—means that EHR data can be extracted from multiple patient records simultaneously and pulled into one file. The documents are meant to include most commonly needed patient information in a form that can be shared across computer applications. Yet cooperatives found that the documents met only minimum requirements. One representative of cooperative 7 said in an interview that one vendor “will only send ten [documents] at a time. Another only does, like, one an hour or something. There are these bizarre kinds of things where they’re meeting the requirement, but they aren’t useful.” Cooperatives and practices queried vendors about these issues, but vendors were not responsive. Lack of efficient, mass export of continuity-of-care documents meant that cooperatives were unable to help practices use their underlying data for quality improvement.

Cooperatives that developed data management infrastructure also created the capacity for combining and comparing data (performance benchmarking) and found differences in electronic clinical quality measures across EHRs. One expert attributed differences to variations in implementation of the EHR system and noted in cooperative 1’s report on measurement challenges that “editorial decisions” made by vendors about which EHR fields to pull data from when calculating the measures led to problems: “We have experienced challenges in defining the measures and achieving accurate results. We began with pre-built measures from our software vendor but often found differences in definition for a commonly named measure.” Extra steps were needed to ensure uniform definitions of measures for performance benchmarking. Without regional infrastructure to extract and normalize ABCS measures, performance data from different EHR systems might not be correct or comparable.
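The two steps cooperatives describe above can be sketched together: combine batches of exported patient documents into one normalized table, then apply a single explicit measure definition across it so that every EHR's data is scored the same way. The XML below is a hand-written, highly simplified stand-in for continuity-of-care documents (real C-CDA files are far richer, namespaced XML), and the measure is a sketch, not the CMS eCQM specification.

```python
import xml.etree.ElementTree as ET

# Hypothetical simplified exports; real continuity-of-care
# documents carry much more structure than this.
DOCS = [
    "<patient><id>p1</id><htn>true</htn><sys>132</sys><dia>78</dia></patient>",
    "<patient><id>p2</id><htn>true</htn><sys>151</sys><dia>94</dia></patient>",
    "<patient><id>p3</id><htn>false</htn><sys>118</sys><dia>72</dia></patient>",
]

def batch_extract(docs):
    """Combine many exported documents into one normalized table,
    the kind of mass extraction cooperatives needed from vendors."""
    rows = []
    for doc in docs:
        r = ET.fromstring(doc)
        rows.append({
            "id": r.findtext("id"),
            "htn_dx": r.findtext("htn") == "true",
            "sys": int(r.findtext("sys")),
            "dia": int(r.findtext("dia")),
        })
    return rows

def controlling_high_bp(rows):
    """One explicit, shared measure definition: of patients with a
    hypertension diagnosis, the share whose latest blood pressure
    reading is below 140/90."""
    denom = [r for r in rows if r["htn_dx"]]
    num = [r for r in denom if r["sys"] < 140 and r["dia"] < 90]
    return len(num) / len(denom) if denom else None

rows = batch_extract(DOCS)
print(controlling_high_bp(rows))  # → 0.5
```

Because the measure logic lives in one place rather than in each vendor's "editorial decisions" about which fields to pull, results from different EHR systems become comparable for benchmarking.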
Discussion

Primary care practices have to exert too much effort to get usable data from their EHRs to improve care quality and meet reporting requirements. Despite the large national investment in health IT and substantial investments of time and expertise by practices and cooperatives, it has been difficult for them to generate timely and usable data for quality measurement and improvement. These findings are particularly salient given that the majority of practices in this large national sample used EHRs certified by the Office of the National Coordinator for Health Information Technology, and more than half reported participating in stages 1 and 2 of meaningful use and having the ability to produce reports of electronic clinical quality measures. Yet EHR reports that complied with meaningful use generally did not allow practices to customize date ranges or report frequency, and they rarely provided functionality for measuring performance by individual clinicians. In cases where this functionality was available, it was a costly upgrade and typically required health IT expertise to use. These resources are not often present in practices—particularly solo, rural, and clinician-owned practices. Additionally, practices that questioned the validity of meaningful-use reports did not have feasible ways to validate them. These factors inhibited practices’ ability to make the measurements—the ongoing identification of quality gaps and monitoring of the effects of changes made in care processes—that are essential for quality improvement.

These persistent challenges should be cause for concern. Our study amplifies findings from prior research that documented challenges in using EHRs in general and for quality improvement in particular.3,16–18,34–37 To our knowledge, previous studies have not examined this problem in a sample of the size and diversity that our sample attained, especially not with mixed methods. Our study shows that survey data alone are inadequate for fully understanding the problem. Furthermore, most studies of these challenges are nearly a decade old and do not reflect the impact of more recent federal programs.3,38
Regional and national organizations that connect disparate practices’ EHR systems to a central data repository offer a potential solution for mitigating measurement challenges through shared infrastructure for data extraction, normalization, validation, analysis, and reporting. However, most states in our sample did not have regional data infrastructure, and those that did had limited reach; the time, effort, and investment required to build these resources are extensive.8 Regional leaders struggle with financing, vendor relations, and governance structures.

To improve EHRs’ ability to achieve their
potential and support sustainable payment reforms, policy makers should consider empowering the ONC and CMS to expand their standards and requirements for, and monitoring of, EHR vendors.39 The agencies need to make it more efficient for practices to generate quality reports with up-to-date definitions,40 ensure that organizations can extract data from batches of continuity-of-care documents for secondary use, and create explicit requirements to support quality improvement and practice population health measurement. New initiatives from the ONC are encouraging vendors to facilitate this process through standard application programming interfaces and mapping (for example, Fast Healthcare Interoperability Resources). With several new reporting requirements for clinicians (including those for recognition as a patient-centered medical home), payer requirements, and other federal demonstration projects (which have differing reporting requirements), the ONC should focus not just on EHR capacities to serve the Quality Payment Program but also on quality improvement and reporting needs generally.

Cooperatives’ current experience is that EHR
data are “locked up,” which prevents even a well-resourced initiative from being able to use data for quality measurement across diverse practice settings. The first Quality Payment Program reporting period ended in December 2017, and many clinicians may be unable to achieve their full potential related to quality improvement despite having certified EHRs. CMS may need to be prepared to help practices not only comply but also use data for quality improvement—perhaps by loosening reporting options, expanding exclusion criteria, and allocating additional funds for technical assistance.
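The standard interfaces mentioned earlier (for example, Fast Healthcare Interoperability Resources) express quality results as structured resources that any consumer can read the same way. The sketch below parses a hand-written, minimal FHIR-style MeasureReport fragment; a real resource would come from a server and carry many more required elements, so this is an illustration of the shape of the data, not a complete implementation.

```python
import json

# Minimal FHIR R4-style MeasureReport fragment (hand-written sketch)
REPORT = json.loads("""
{
  "resourceType": "MeasureReport",
  "period": {"start": "2017-01-01", "end": "2017-12-31"},
  "group": [{
    "population": [
      {"code": {"coding": [{"code": "denominator"}]}, "count": 200},
      {"code": {"coding": [{"code": "numerator"}]}, "count": 124}
    ]
  }]
}
""")

def population_count(report, name):
    """Read a named population count out of the first measure group."""
    for pop in report["group"][0]["population"]:
        if pop["code"]["coding"][0]["code"] == name:
            return pop["count"]
    return None

# A performance rate computed identically regardless of which EHR
# produced the report
score = (population_count(REPORT, "numerator")
         / population_count(REPORT, "denominator"))
print(score)  # → 0.62
```

Because the population codes and report structure are standardized, a cooperative could aggregate such reports across vendors without per-EHR custom extraction.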
Conclusion

Primary care is an essential part of healthy communities. With federal value-based payment programs such as the Quality Payment Program poised to motivate clinicians to improve care quality, investment is needed to ensure that the health IT clinicians use delivers credible clinical quality data and has the functionality necessary to inform quality improvement efforts as well as external reporting for payment and other purposes without adding to an already high burden. ▪
A version of the findings presented in this article was reported at the 44th Annual North American Primary Care Research Group Meeting, Colorado Springs, Colorado, November 12–16, 2016. This research was supported by the Agency for Healthcare Research and Quality (Grant No. R01HS023940-01). This article could not have been completed without the help of people from the seven EvidenceNOW cooperatives, to whom the authors are greatly indebted. Members of the national evaluation team were also important in supporting this study, including Rachel Springer, David Cameron, Bernadette Zakher, Rikki Ward, Benjamin Crabtree, Kurt Stange, and William Miller. Without their efforts, this work would not have been possible. Finally, the authors acknowledge Amanda Delzer Hill, who assisted with copy editing.
1 Office of the National Coordinator for Health Information Technology. Office-based physician electronic health record adoption [Internet]. Washington (DC): Department of Health and Human Services; 2016 Dec [cited 2018 Feb 6]. (Health IT Quick-Stat No. 50). Available from:
2 Hsiao CJ, Jha AK, King J, Patel V, Furukawa MF, Mostashari F. Office-based physicians are responding to incentives and assistance by adopting and using electronic health records. Health Aff (Millwood). 2013;32(8):1470–7.
3 Lynch K, Kendall M, Shanks K, Haque A, Jones E, Wanis MG, et al. The Health IT Regional Extension Center program: evolution and lessons for health care transformation. Health Serv Res. 2014;49(1 Pt 2):421–37.
4 Centers for Medicare and Medicaid Services. Electronic Health Records (EHR) Incentive Programs [Internet]. Baltimore (MD): CMS; [last updated 2017 Nov 29; cited 2018 Jan 23]. Available from:
5 Xierali IM, Hsiao CJ, Puffer JC, Green LA, Rinaldo JC, Bazemore AW, et al. The rise of electronic health record adoption among family physicians. Ann Fam Med. 2013;11(1):14–9.
6 Quality Payment Program resource library [Internet]. Baltimore (MD): Centers for Medicare and Medicaid Services; [last modified 2018 Feb 6; cited 2018 Feb 6]. Available from:
7 Rao SR, DesRoches CM, Donelan K, Campbell EG, Miralles PD, Jha AK. Electronic health records in small physician practices: availability, use, and perceived benefits. J Am Med Inform Assoc. 2011;18(3):271–5.
8 Mostashari F, Tripathi M, Kendall M. A tale of two large community electronic health record extension projects. Health Aff (Millwood). 2009;28(2):345–56.
9 Howard J, Clark EC, Friedman A, Crosson JC, Pellerano M, Crabtree BF, et al. Electronic health record impact on work burden in small, unaffiliated, community-based primary care practices. J Gen Intern Med. 2013;28(1):107–13.
10 Friedman A, Crosson JC, Howard J, Clark EC, Pellerano M, Karsh BT, et al. A typology of electronic health record workarounds in small-to-medium size primary care practices. J Am Med Inform Assoc. 2014;21(e1):e78–83.
11 Heisey-Grove D, Patel V. National findings regarding health IT use and participation in health care delivery reform programs among office-based physicians. J Am Med Inform Assoc. 2017;24(1):130–9.
12 eCQI Resource Center. 2017 performance period EP/EC eCQMs [Internet]. Baltimore (MD): Centers for Medicare and Medicaid Services; [cited 2018 Feb 6]. Available from:
13 McNamara P, Shaller D, De La Mare J, Ivers N. Confidential physician feedback reports: designing for optimal impact on performance [Internet]. Rockville (MD): Agency for Healthcare Research and Quality; 2016 Mar [cited 2018 Feb 6]. (AHRQ Publication No. 16-0017-EF). Available from:
14 Krist AH, Green LA, Phillips RL, Beasley JW, DeVoe JE, Klinkman MS, et al. Health information technology needs help from primary care researchers. J Am Board Fam Med. 2015;28(3):306–10.
15 Higgins TC, Crosson J, Peikes D, McNellis R, Genevro J, Meyers D. Using health information technology to support quality improvement in primary care [Internet]. Rockville (MD): Agency for Healthcare Research and Quality; 2015 Mar [cited 2018 Feb 6]. (AHRQ Publication No. 15-0031-EF). Available from:
16 Parsons A, McCullough C, Wang J, Shih S. Validity of electronic health record–derived quality measurement for performance monitoring. J Am Med Inform Assoc. 2012;19(4):604–9.
17 Roth CP, Lim YW, Pevnick JM, Asch SM, McGlynn EA. The challenge of measuring quality of care from the electronic health record. Am J Med Qual. 2009;24(5):385–94.
18 Chan KS, Fowles JB, Weiner JP. Review: electronic health records and the reliability and validity of quality measures: a review of the literature. Med Care Res Rev. 2010;67(5):503–27.
19 eCQI Resource Center. Ischemic vascular disease (IVD): use of aspirin or another antiplatelet [Internet]. Baltimore (MD): Centers for Medicare and Medicaid Services; [last updated 2017 Oct 25; cited 2018 Feb 6]. Available from:
20 eCQI Resource Center. Controlling high blood pressure [Internet]. Baltimore (MD): Centers for Medicare and Medicaid Services; [last updated 2017 Jul 12; cited 2018 Feb 6]. Available from:
21 eCQI Resource Center. Statin therapy for the prevention and treatment of cardiovascular disease [Internet]. Baltimore (MD): Centers for Medicare and Medicaid Services; [last updated 2017 Oct 25; cited 2018 Feb 6]. Available from:
22 eCQI Resource Center. Preventive care and screening: tobacco use: screening and cessation intervention [Internet]. Baltimore (MD): Centers for Medicare and Medicaid Services; [last updated 2017 Jul 12; cited 2018 Feb 6]. Available from:
23 Cohen DJ, Balasubramanian BA, Gordon L, Marino M, Ono S, Solberg LI, et al. A national evaluation of a dissemination and implementation initiative to enhance primary care practice capacity and improve cardiovascular disease care: the ESCALATES study protocol. Implement Sci. 2016;11(1):86.
24 Cohen DJ, Leviton LC, Isaacson N, Tallia AF, Crabtree BF. Online diaries for qualitative evaluation: gaining real-time insights. Am J Eval. 2006;27(2):163–84.
25 Nutting PA, Crabtree BF, Miller WL, Stange KC, Stewart E, Jaén C. Transforming physician practices to patient-centered medical homes: lessons from the national demonstration project. Health Aff (Millwood). 2011;30(3):439–45.
26 Balasubramanian BA, Chase SM, Nutting PA, Cohen DJ, Strickland PA, Crosson JC, et al. Using Learning Teams for Reflective Adaptation (ULTRA): insights from a team-based change management strategy in primary care. Ann Fam Med. 2010;8(5):425–32.
27 Shaw EK, Ohman-Strickland PA, Piasecki A, Hudson SV, Ferrante JM, McDaniel RR Jr, et al. Effects of facilitated team meetings and learning collaboratives on colorectal cancer screening rates in primary care practices: a cluster randomized trial. Ann Fam Med. 2013;11(3):220–8, S1–8.
28 National Center for Health Statistics. Ambulatory health care data [Internet]. Hyattsville (MD): NCHS; [last updated 2017 Dec 12; cited 2018 Feb 6]. Available from:
29 Office of the National Coordinator for Health Information Technology. ONC fact sheet: 2015 edition health information technology (health IT) certification criteria, base electronic health record (EHR) definition, and ONC health IT certification program modifications final rule [Internet]. Washington (DC): Department of Health and Human Services; 2015 Oct [cited 2018 Feb 6]. Available from:
30 To access the appendix, click on the Details tab of the article online.
31 Borkan J. Immersion/crystallization. In: Crabtree BF, Miller WL, editors. Doing qualitative research. 2nd ed. Thousand Oaks (CA): Sage Publications; 1999. p. 179–94.
32 Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med. 2008;6(4):331–9.
33 Stone NJ, Robinson J, Lichtenstein AH, et al. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. 2014;129(25 Suppl 2):S1–45.
34 Baron RJ. Quality improvement with an electronic health record: achievable, but not automatic. Ann Intern Med. 2007;147(8):549–52.
35 Fernandopulle R, Patel N. How the electronic health record did not measure up to the demands of our medical home practice. Health Aff (Millwood). 2010;29(4):622–8.
36 Krist AH, Beasley JW, Crosson JC, Kibbe DC, Klinkman MS, Lehmann CU, et al. Electronic health record functionality needed to better support primary care. J Am Med Inform Assoc. 2014;21(5):764–71.
37 Lyons SS, Tripp-Reimer T, Sorofman BA, Dewitt JE, Bootsmiller BJ, Vaughn TE, et al. VA QUERI informatics paper: information technology for clinical guideline implementation: perceptions of multidisciplinary stakeholders. J Am Med Inform Assoc. 2005;12(1):64–71.
38 Heisey-Grove D, Danehy L-N, Consolazio M, Lynch K, Mostashari F. A national study of challenges to electronic health record adoption and meaningful use. Med Care. 2014;52(2):144–8.
39 Washington V, DeSalvo K, Mostashari F, Blumenthal D. The HITECH era and the path forward. N Engl J Med. 2017;377(10):904–6.
40 ONC regulation FAQs: #42 Question [06-13-042-1] [Internet]. Washington (DC): Department of Health and Human Services; [last updated 2013 Nov 14; cited 2018 Feb 7]. Available from: