Supplementary Appendix 1 - Methods for the Development of Quality Measures Included in the Family Experiences with Coordination of Care Survey
Development of the conceptual framework
Our multi-stakeholder Center began development of the quality measures included in the FECC Survey by first creating a conceptual framework for care coordination/fragmentation for children with medical complexity (CMC) (see Supplementary Appendix 2). This framework employs a Plan, Do, Study, Act (PDSA) cycle to illustrate what should occur when care coordination is of high quality. When care coordination is proceeding well, data and information about a CMC are collected, shared, and synthesized among the family and all of the healthcare and community providers involved in the child’s care (Plan). The patient-centered medical home, whether in the primary care or sub-specialty setting, should take key responsibility for synthesizing the information gathered into an individualized shared care plan for the child. That shared care plan is then implemented (Do), and outcomes are assessed, for example by determining whether the goals outlined in the plan are being met. When goals are not met, barriers are identified (Study) and interventions to address them are implemented (Act). Data are then collected again to update the care plan for the child, and the cycle begins anew.
When care is fragmented and coordination is poor, there are “voltage drops” in this care coordination PDSA cycle. These can manifest as interpersonal discontinuity, in which providers lack familiarity with the child’s health issues; informational discontinuity, in which information needed to care adequately for the child is missing; or longitudinal discontinuity, in which the family experiences churning in insurance coverage that results in frequent changes in healthcare providers. The framework also makes explicit the relationships between care coordination and both short- and long-term outcomes, such as emergency department utilization and health-related quality of life.
Development of the quality measures
Our center’s staff conducted systematic literature reviews for 6 key domains represented in the conceptual model: information exchange, goal setting, continuity of care, care coordination, shared care plans, and the medical home. We reviewed literature related to children in four categories, those who have: a) significant chronic conditions in two or more body systems; b) a progressive condition associated with deteriorating health and decreased life expectancy in adulthood; c) continuous dependence on technology for at least six months; or d) a progressive or metastatic malignancy that impacts life function (except those in remission for more than 5 years). We defined a significant chronic condition as a physical, mental, or developmental condition that can be expected to last at least a year, involves health care resource use above the level for a healthy child, requires treatment for control of the condition, and can be expected to be episodically or continuously debilitating. Because we expected to find only limited literature related to these categories, we broadened our search to include all children with chronic disease as well as adults, including the frail elderly.
We then searched MEDLINE, CINAHL, PsycINFO, and Cochrane Reviews for studies examining links between care coordination structures, processes, and outcomes of care. Inclusion criteria for the MEDLINE, CINAHL, and PsycINFO searches specified that articles must be written in English and published from 1990 through September 30, 2011. MEDLINE searches excluded editorials and comments; PsycINFO searches were limited to peer-reviewed studies. Cochrane Reviews searches specified that articles must be written in 1990 or later. Focus on populations between 0 and 18 years of age was achieved in the Cochrane Reviews searches by using variations of “kid,” “child,” “teenager,” and “adolescent” as search terms. Additional search terms were specified depending on the domain of interest for each literature review. We also conducted an Internet search to identify current guidelines or standards, and reviewed references cited by those guidelines and standards to identify additional seminal literature. Given the lack of studies related to care coordination effectiveness in pediatrics, we also looked outside the pediatric literature for evidence related to care coordination in other populations (e.g., the frail elderly and adults with multiple comorbid conditions).
The literature reviewed was abstracted into summary tables that systematically described the study design, population, and results. The strength of evidence was formally rated for each study according to the University of Oxford’s Centre for Evidence-Based Medicine levels of evidence.1
From the review, we developed 39 draft quality measures supported by varying levels of evidence. Some of the measures assessed care structures and processes that had been included in bundled interventions in child, adult, and frail elderly populations as part of well-designed randomized clinical trials. Performance on these measures as part of a bundle was associated with improved outcomes such as decreased emergency department use, decreased readmissions, and increased adherence to follow-up appointments after hospitalization. Unbundling these measures and testing them individually allows us to identify which specific measures are key drivers of improved outcomes, which is particularly important given the healthcare system’s constrained resources. All of the measures underwent iterative refinement based on input from the center membership.
RAND-UCLA Modified Delphi Method for Final Quality Measure Selection
The RAND-UCLA Modified Delphi Method was used to rate the validity and feasibility of each quality measure, in order to arrive at a set of care coordination quality measures for children with medical complexity to be further developed and field tested. This method is a well-established, structured approach to measure evaluation that involves two rounds of independent panel member scoring, with a group discussion in between.2-7 Selection of a measure for the final set is determined by these independent scores, not by consensus. In the first round of measure scoring, members of the panel received the 6 literature reviews and the draft quality measures resulting from each of these reviews. They were asked to rate each measure on validity and feasibility on a scale from 1 (low) to 9 (high).
Panelists were instructed that for a measure to be considered valid, it should have adequate evidence or expert consensus to support it; there should be identifiable health benefits associated with receiving the measure-specified care; providers and provider groups who adhere more consistently to the measure would be regarded as higher quality; and adherence to the measure should be within the control of clinicians or the health care system. A rating of 1 to 3 indicates the quality measure is not valid; a rating of 4 to 6 indicates the validity of the proposed quality measure is equivocal; and a rating of 7 to 9 indicates the proposed quality measure is deemed valid. The Delphi method has been found to be reliable and to have content, construct, and predictive validity.
Panel members were also asked to consider the feasibility of each measure. For the FECC survey-based quality measures, a feasible measure was defined as one whose content a typical caregiver would be able to correctly identify, recall, and report within the specified time frame, such that quality assessments based on the measure would be reliable. Ratings of 1 to 3 indicate that the quality measure would not be feasible to collect using caregiver report; ratings of 4 to 6 indicate that there would likely be a great deal of variability in caregivers’ ability to accurately report on the quality measure; and ratings of 7 to 9 indicate that it would be feasible to determine adherence to the measure based on caregiver report.
To determine the level of agreement among panelists on the validity and feasibility of a given measure, we employed a statistical method that frames agreement and disagreement as hypotheses about the expected distribution of scores from a hypothetical population of repeated scores from similar panelists. To determine agreement, we tested the hypothesis that 80% of the hypothetical scores would be within the same score domain (1-3, 4-6, or 7-9) as the observed median score. The measure was determined to be scored “with agreement” if we could not reject that hypothesis with a binomial test at the 0.33 level. With a group of 9 scores, agreement requires that no more than two fall outside the 3-point domain that contains the observed median. To determine disagreement, we tested the hypothesis that 90% of hypothetical scores would be within one of two larger, overlapping score domains (1-6 or 4-9). We determined scores to be “with disagreement” if we rejected that hypothesis with a binomial test at the 0.10 level. With a group of 9 panelists, the definition of disagreement is met when 3 or more ratings fall in the 1-3 range and 3 or more fall in the 7-9 range. If the scores could not be classified as either with agreement or with disagreement based on these definitions, they were considered “indeterminate”. Measures scored with indeterminate agreement were retained if they met the validity and feasibility criteria.
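To make this classification rule concrete, the following is a minimal Python sketch of the binomial-test logic described above. It assumes scipy is available; the function name, defaults, and example scores are illustrative and are not taken from the study’s actual analysis code.

```python
import statistics
from scipy.stats import binomtest  # assumes scipy is installed

def classify_agreement(scores, agree_p=0.80, disagree_p=0.90,
                       agree_alpha=0.33, disagree_alpha=0.10):
    """Classify one measure's panel scores (nine 1-9 ratings) as
    'agreement', 'disagreement', or 'indeterminate' using the
    binomial-test logic described in the text."""
    n = len(scores)
    median = statistics.median(scores)

    # 3-point domain (1-3, 4-6, or 7-9) containing the observed median
    if median <= 3:
        lo, hi = 1, 3
    elif median <= 6:
        lo, hi = 4, 6
    else:
        lo, hi = 7, 9
    in_domain = sum(lo <= s <= hi for s in scores)

    # Agreement: H0 is that 80% of hypothetical repeat scores fall in
    # the median's 3-point domain; failing to reject at the 0.33 level
    # (too few in-domain scores counts against H0) means agreement.
    agreement = binomtest(in_domain, n, agree_p,
                          alternative='less').pvalue > agree_alpha

    # Disagreement: H0 is that 90% of hypothetical scores fall in one
    # of the overlapping 6-point domains (1-6 or 4-9); rejecting at the
    # 0.10 level for whichever domain fits best means disagreement.
    in_wide = max(sum(1 <= s <= 6 for s in scores),
                  sum(4 <= s <= 9 for s in scores))
    disagreement = binomtest(in_wide, n, disagree_p,
                             alternative='less').pvalue <= disagree_alpha

    if disagreement:
        return 'disagreement'
    if agreement:
        return 'agreement'
    return 'indeterminate'

# Example: 7 of 9 scores in the 7-9 domain -> 'agreement'
print(classify_agreement([7, 8, 9, 7, 8, 7, 9, 5, 6]))
```

With 9 scores, this reproduces the counting rules in the text: agreement holds when no more than two scores fall outside the median’s 3-point domain, and disagreement holds when at least three scores fall in each of the 1-3 and 7-9 ranges.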
Member Selection for the Delphi Panel
The Delphi panel comprised 9 individuals (Supplementary Appendix 3) nominated by national stakeholder organizations selected for their relevance to care coordination for children with medical complexity: the American Academy of Pediatrics (AAP); the Academic Pediatric Association (APA); the Society of Hospital Medicine (SHM); the Children’s Hospital Association (CHA); the Medicaid Medical Directors Learning Network (MMDLN); the American Academy of Child & Adolescent Psychiatry (AACAP); the Society for Adolescent Health & Medicine (SAHM); the National Association of Pediatric Nurse Practitioners (NAPNAP); and Family Voices. Of the individuals nominated, one from each nominating body agreed to participate. Delphi panels generally consist of 9 members, as larger panels have been found to be less productive.2 The final panel consisted of a parent representative, a state Medicaid medical director, a doctoral health services researcher, a nurse practitioner, a psychiatrist, a hospitalist, an adolescent medicine physician, and two general pediatricians. All panelists had expertise or experience related to the care of children with medical complexity.
Conducting the Delphi Panel
After the members of the Delphi panel had been selected, a conference call was conducted to orient them to the RAND-UCLA Modified Delphi Method. Six weeks prior to the in-person panel, all panel members were sent the draft quality measures and the literature reviews on which the measures were based. After reading the reviews, each panelist scored the draft quality measures on validity and feasibility and submitted their scores, along with comments, to the measure development team. Research staff at Seattle Children’s Research Institute (SCRI) compiled the initial results and shared them with panelists; each panelist received the distribution of scores for each measure, with a caret indicating their own score. The shared results were otherwise anonymous.
A 2-day in-person meeting was conducted in Seattle, Washington, during which the panelists discussed controversial quality measures. A measure was considered controversial if the median validity score was 4-6, if the median feasibility score was less than 4, or if the mean absolute deviation from the median score indicated either disagreement among panelists or an indeterminate level of agreement. Following discussion of all controversial measures, panelists privately re-scored all 39 measures. The second round of scores was tabulated, and the median rating and mean absolute deviation from the median were calculated for each measure. All measures with a median validity score of at least 7, a median feasibility score of at least 4, and scores without disagreement were considered endorsed by the panel to move forward in the measure development process. The Delphi criterion is more liberal for feasibility than for validity because all measures go through field testing after the Delphi panel; this step serves to confirm whether measures with feasibility scores in the 4-6 range are truly feasible to collect via caregiver report.
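For completeness, here is a companion sketch of the endorsement rule applied after the second round of scoring, reusing the hypothetical classify_agreement() function from the previous sketch. Applying the “without disagreement” condition to both validity and feasibility scores is our interpretation, not a detail stated in the text.

```python
import statistics

def endorsed(validity_scores, feasibility_scores):
    """Endorsement rule described above: median validity >= 7,
    median feasibility >= 4, and scores without disagreement.
    Assumes the disagreement check applies to both score sets."""
    return (statistics.median(validity_scores) >= 7
            and statistics.median(feasibility_scores) >= 4
            and classify_agreement(validity_scores) != 'disagreement'
            and classify_agreement(feasibility_scores) != 'disagreement')
```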
Thirty-one of the 39 measures (79%) were endorsed by the Delphi panel as valid and feasible and progressed to the next development phase, in which they were operationalized for data collection and field testing. Twenty-one of the 31 measures were specified as caregiver-reported quality measures. Operationalization of these measures resulted in the development of the Family Experiences with Coordination of Care (FECC) survey. The remaining measures were operationalized for data collection from medical records or administrative claims and are not discussed further here.
References
1. The Oxford Levels of Evidence 2. (Accessed April 30, 2015, at
2. Brook R. The RAND/UCLA Appropriateness Method. In: McCormick KA, Moore SR, Siegel RA, eds. Methodological Perspectives. Rockville, MD: US Department of Health and Human Services; 1994.
3. Hemingway H, Crook AM, Feder G, et al. Underuse of coronary revascularization procedures in patients considered appropriate candidates for revascularization. The New England Journal of Medicine 2001;344:645-54.
4. Kravitz RL, Park RE, Kahan JP. Measuring the clinical consistency of panelists' appropriateness ratings: the case of coronary artery bypass surgery. Health Policy 1997;42:135-43.
5. Shekelle PG, Chassin MR, Park RE. Assessing the predictive validity of the RAND/UCLA appropriateness method criteria for performing carotid endarterectomy. International Journal of Technology Assessment in Health Care 1998;14:707-27.
6. Shekelle PG, Kahan JP, Bernstein SJ, Leape LL, Kamberg CJ, Park RE. The reproducibility of a method to identify the overuse and underuse of medical procedures. The New England Journal of Medicine 1998;338:1888-95.
7. Selby JV, Fireman BH, Lundstrom RJ, et al. Variation among hospitals in coronary-angiography practices and outcomes after myocardial infarction in a large health maintenance organization. The New England Journal of Medicine 1996;335:1888-96.
Supplementary Appendix 2 – Conceptual model
Supplementary Appendix 3 – Delphi panel membership
Panel Member / Nominating Organization
Richard Antonelli, MD, MS
Medical Director of Integrated Care and Strategic Partnerships
Medical Director Physician Relations and Outreach
Boston Children’s Hospital
Assistant Professor of Pediatrics
Harvard Medical School / American Academy of Pediatrics (AAP)
Allison Ballantine, MD, MEd
Assistant Professor of Pediatrics
University of Pennsylvania School of Medicine
Section Chief of Education
Medical Director, Integrated Care Services
Division of General Pediatrics
Attending Physician Palliative Care Team
Attending Physician Inpatient General Pediatrics
The Children’s Hospital of Philadelphia / Society of Hospital Medicine (SHM)
Jennifer Bolden-Pitre, MA, JD
Director of Integrated Systems,
Statewide Parent Advocacy Network
Family Fellow,
Leadership Education in Neurodevelopmental Disabilities
Children's Hospital of Philadelphia / Family Voices
Carol A. Ford, MD
Professor of Pediatrics
Orton Jackson Endowed Chair in Adolescent Medicine
University of Pennsylvania
Chief, Craig Dalsimer Division of Adolescent Medicine
The Children's Hospital of Philadelphia / Society for Adolescent Health & Medicine (SAHM)
Jason Kessler, MD, FAAP, CHBE
Medical Director
Iowa Medicaid Enterprise / Medicaid Medical Directors Learning Network (MMDLN)
Karen Kuhlthau, PhD
Associate Professor, Pediatrics
Harvard Medical School
Associate Sociologist, Pediatrics
Center for Child and Adolescent Health Policy
Massachusetts General Hospital for Children / Academic Pediatric Association (APA)
Dennis Kuo, MD, MHS
Assistant Professor of Health Policy and Management
Fay W. Boozman College of Public Health,
University of Arkansas for Medical Sciences
Assistant Professor of Pediatrics
Section on General Pediatrics
Center for Applied Research and Evaluation,
University of Arkansas for Medical Sciences
Pediatrician
Medical Home Program for Children with Special Needs,
Arkansas Children’s Hospital / Children’s Hospital Association (CHA)
Wendy Sue Looman, PhD, RN, CNP
Pediatric Nurse Practitioner
Cleft Palate and Craniofacial Clinic
School of Dentistry, University of Minnesota
Associate Professor
School of Nursing, University of Minnesota / National Association of Pediatric Nurse Practitioners (NAPNAP)
Karen Pierce, MD, FAPA, FAACAP
Attending Physician
Department of Child and Adolescent Psychiatry
Children’s Memorial Hospital, Chicago, Illinois
Clinical Associate Professor
Feinberg School of Medicine, Northwestern University Medical School
Department of Psychiatry and Behavioral Sciences / American Academy of Child & Adolescent Psychiatry (AACAP)
Supplementary Appendix 4 – Family Experiences with Coordination of Care (FECC) survey (telephone interview version)
1. / EMPTY / Your child’s main provider is the doctor, physician assistant, nurse or other health care provider who knows the most about your child’s health, and who is in charge of your child’s care overall.
1A. / OPEN TEXT (100 CHARACTERS) / What is the name of your child’s main provider?
1B. / EMPTY / The questions in this survey will refer to [FILL 1A] as “your child’s main provider.” Please think of that person as you answer the questions.
2-INTRO / EMPTY / This first set of questions is about the people who help you manage care, treatment and services for your child.
2. / 0=NO (GO TO 17-INTRO)
1=YES (GO TO 3A)
8=DON’T KNOW (GO TO 3A)
9=REFUSED (GO TO 3A) / In the last 12 months, did your child visit more than one doctor’s office or use more than one kind of health care service, such as physical or speech therapy, or community service, such as home health care or transportation services?
IF NEEDED: Other examples of community services are early intervention programs, respite care, and parent or caregiver support services.
3A. / 0=NO (GO TO 3B)
1=YES (GO TO 4)
8=DON’T KNOW (GO TO 3B)
9=REFUSED (GO TO 3B) / Did anyone in the main provider’s office help you to manage your child’s care or treatment from different doctors or care providers?
3B. / 0=NO (GO TO 17-INTRO)
1=YES (GO TO 3C)
8=DON’T KNOW (GO TO 17-INTRO)
9=REFUSED (GO TO 17-INTRO) / Did anyone else outside of [FILL 1A]’s office help you to manage your child’s care or treatment from different doctors or care providers?
3C. /
- Another provider from a different office/clinic
- A care coordinator who isn’t part of [FILL 1A]’s office staff
- A social worker who isn’t part of [FILL 1A]’s office staff
- A care or case manager who isn’t part of [FILL 1A]’s office staff
- Someone else who isn’t part of [FILL 1A]’s office staff