Core Indicators Project:

Progress Report

October 17, 1997

Center on Managed Long Term Supports

A collaboration of:

Human Services Research Institute
Cambridge, Massachusetts

National Association of State Directors of Developmental Disabilities Services, Inc.
Alexandria, Virginia

Introduction

Late last year, the Board of Directors of the National Association of State Directors of Developmental Disabilities Services (NASDDDS) agreed to sponsor the “Core Indicators Project” (CIP). The Association’s sponsorship of CIP reflects its long-standing commitment to supporting efforts by its member state agencies, the state developmental disabilities authorities (SDDAs) in the fifty states and the District of Columbia, to meet the needs of the more than 600,000 individuals with developmental disabilities they are collectively responsible for serving.

The project’s aim is to investigate the feasibility of establishing a nationally recognized set of performance indicators that will provide solid, reliable information concerning the effectiveness of a state’s developmental disabilities service system along various dimensions of performance and support valid comparisons with results being achieved in other states. The development of performance indicators that support such comparisons from state to state entails not only selecting indicators of common interest to states but also overcoming a variety of problems and barriers in the arenas of data collection and measurement across multiple jurisdictions. Fifteen states are participating on the CIP Steering Committee; seven of these states have agreed to compile the data necessary to determine whether a particular indicator reveals sound information regarding system performance.

Project work to date and planned over the next several months is aimed at developing what is best termed “Version 1.0” of a core indicators system. A limited number of performance indicators have been identified for testing and analysis over the next several months. Version 1.0 covers four primary domains of system performance. However, the 61 indicators that are being tested over the next several months far from exhaust all the possibilities for gauging system performance. In this phase of the project, trade-offs have had to be made between the desirability of including a particular indicator in the test candidate set and the feasibility of collecting and analyzing the data necessary to determine the utility of particular indicators for gauging system performance across jurisdictions. The initial set of candidate indicators focuses on services and supports for adults with developmental disabilities. The basis for this decision is described later in this report. In the future, indicators will be developed concerning supports for children with developmental disabilities. If this initial effort proves successful, then attention will turn to enhancing the indicator set to provide a richer profile of system performance.

Over the next several months, the project is seeking to determine the validity and reliability of the initial set of indicators selected for testing. Some will prove out; others will drop by the wayside. The project also is concerned with identifying how to resolve the complex logistics of implementing data collection methods across multiple jurisdictions so that the information gleaned is compatible from state to state.

This project is being conducted under the auspices of the Center on Managed Long Term Supports for People with Disabilities, a collaboration between NASDDDS and the Human Services Research Institute (HSRI - Cambridge, Massachusetts). Created in 1995, the Center’s charter is to investigate the appropriateness and feasibility of adapting managed care strategies in order to improve the delivery of long-term supports for individuals with developmental disabilities.

CIP was formally launched in January 1997 by soliciting the participation of interested NASDDDS member agencies. Serious project work began in March. This is the first comprehensive report concerning project activities. It describes how the project has unfolded over the past nine months, provides an up-to-date description of where the project stands, and outlines the project activities that are planned over the next several months. Since this is the first project report, it provides a considerable amount of background information concerning the project’s approach in investigating the development of indicators to gauge system performance.

This project report is being distributed to NASDDDS member SDDAs, the organizations that have agreed to participate on the project’s external Advisory Committee, and others who have expressed an interest in the project. The wide distribution of this report (as well as future project reports) reflects the Association’s commitment to conduct this project in an open fashion that keeps interested parties informed of project activities and invites comments and input.

A final project report will be prepared and distributed next summer. This report will describe in considerable detail what was learned during the development and testing of Version 1.0. It will include a more complete discussion of the indicators considered but not included in Version 1.0, the potential value of adding these and other indicators in future iterations, the challenges encountered in achieving interstate compatibility in data collection, and the implications of that experience for the addition of new indicators.

This report is divided into five principal sections:

Project Purpose and Aims describes why the Association decided to sponsor the CIP project and outlines the project’s goals;

Project Organization, Funding and Activities to Date discusses how the project is being managed and financed. Project activities to date also are described;

Project Methodology outlines the approaches being utilized in developing the core indicator system;

Test Indicators describes the various performance indicators that have been considered for inclusion in the project and the narrower set that will be investigated during the project’s field test phase. This phase will be carried out in seven states over the next several months; and,

Future Project Activities contains the project’s work plan through June 1998 and discusses possible follow-up activities after that date.

Various project materials are included in appendices to the report.

Project Purposes and Aims

At present and for the foreseeable future, SDDAs face steadily increasing demand for services and supports, long waiting lists and tight budgets. This climate places a premium on SDDAs' acquiring solid information that will enable them to better appraise service system performance, including the extent to which critical outcomes are being achieved by and on behalf of individuals with developmental disabilities. Moreover, in light of the system change initiatives that several SDDAs have launched, it is clear that there is a need for new methods of gauging the quality and effectiveness of publicly-funded developmental disabilities services and supports.

SDDAs need better tools to: (a) systematically measure and track performance and outcomes on a systemwide basis; (b) assess results against the costs of alternative service delivery strategies; (c) compare performance and outcomes across providers; (d) relate results to the costs incurred by specific categories of provider agencies; and, (e) compare the overall effectiveness of the state's system to results being achieved in other states. Absent such tools, it is not possible to evaluate the effectiveness of alternative support delivery strategies from a value-added perspective and thus determine the progress being made toward achieving critical systemwide goals in such dimensions as access, quality, efficiency, and the implementation of person-centered support delivery principles.

Background

Today, the states collectively furnish specialized developmental disabilities long-term services and supports to more than 600,000 individuals and families. Nationwide, public outlays for these services exceed $20 billion. Over the past two decades, the nature and type of services furnished through SDDA-administered service delivery systems have steadily shifted from serving individuals in large, congregate facilities to underwriting an increasingly diverse array of services and supports in the community. Moreover, SDDA mission statements have changed to embrace person-centered support principles, displacing "care and treatment" as the basis of service delivery in publicly-funded programs.

Four central trends have affected the character of SDDA long-term support delivery systems: growth, decentralization, privatization, and diversification. These trends are continuing. For example, where once "residential services" were furnished mainly in very large public facilities, today such facilities account for fewer than 20 percent of all individuals receiving such services nationwide. The census of such facilities continues to decline. Over the past decade, the number of individuals receiving publicly-funded residential services has increased by approximately 27 percent but the number of locations where such services are furnished has grown by more than 250 percent[1]. Where once licensed congregate-care settings were the rule, increasingly residential supports are being furnished to individuals in homes of their own. A decade ago, family support services were widely available in only a few states. Today, such services are offered in nearly all the states and have become a primary avenue for expanding service system capacity to meet the needs of individuals and their families. The delivery of services and supports has been shifting steadily from the public to the non-profit and for-profit private sectors. Most state service delivery systems are operated through regional or smaller (e.g., county) substate agencies that are responsible for intake, eligibility determination, support planning, contracting, and locating service providers.

Ongoing growth, decentralization, privatization and diversification pose significant system management challenges for SDDAs, particularly in an era when such state agencies are under considerable pressure from state policy makers to downsize their operations. At the same time, policy makers are demanding greater accountability and more information concerning system performance and costs in weighing SDDA budget requests to expand and/or enhance services and supports. SDDAs are managing a rising volume of contracts that cover more diverse services furnished principally by non-public agencies.

Today, many SDDAs lack the sophisticated tools that are needed to measure the relative value and effectiveness of the increasingly diverse services and supports that they purchase on behalf of people with developmental disabilities. Developmental disabilities performance measures have been (or are being) developed in a number of states. However, these measures are neither comprehensive nor geared toward the types of managerial challenges that SDDAs face today and in the future. Currently available performance yardsticks vary considerably from state to state, thus making it very difficult to amass sufficient data to establish valid, reliable national performance norms and standards and, hence, gauge how a particular state's system is performing relative to results being achieved elsewhere.

Several SDDAs are seriously examining the pros and cons of adapting "managed care" strategies in order to reconfigure the funding and delivery of community services and supports. A common theme in these initiatives is decoupling funding streams from narrow program categories in order to promote greater flexibility in tailoring services and supports to meet the needs of individuals and families while concurrently fostering incentives for the more efficient use of scarce public dollars.

In order for these initiatives to succeed, it is vital that SDDAs have at their command valid, reliable, and robust performance and outcome indicators/measures for monitoring and evaluating performance. Such indicators must provide a sound basis for: (a) gauging the responsiveness of the service system to individual needs and preferences; (b) tracking performance against critical outcomes such as community integration, self-determination and personal independence; (c) monitoring the reliability of protections of health, safety, and individual rights; (d) maintaining accountability of public dollars; and, (e) weighing costs against the results being achieved. In order for SDDAs to shift from "buying programs" to "buying results", it is imperative that performance-oriented indicator systems be developed.

Over the past decade, considerable effort has been expended in developing systems for gauging the relative quality and effectiveness of health, behavioral health, home health and nursing facility services. In the arena of health care, for example, the National Committee for Quality Assurance (NCQA) has developed successive versions of the Health Plan Employer Data and Information Set (HEDIS) in order to gauge the performance of health plans from both a purchaser and a consumer perspective. In the mental health arena, the Center for Mental Health Services is developing a Consumer-Oriented Mental Health Report Card as part of its Mental Health Statistics Improvement Program. These and other efforts are aimed at amassing data that will enable purchasers and consumers to assess the delivery of services and their effectiveness based on statistically reliable information. The shift to managed care delivery systems in both health and behavioral health care has sparked the development of increasingly sophisticated performance measurement systems to provide purchasers with sound information to guide their buying decisions. Advances in computer technology now permit enormous amounts of data to be analyzed in support of performance evaluation.

In the arena of publicly-funded long-term supports for people with developmental disabilities, the development of comprehensive performance indicator systems is just now getting under way. For many years, SDDAs have desired (and been urged) to base contracting and service purchasing decisions on "performance" and "outcomes" rather than "process" or rote compliance with rigid regulations. Some SDDAs have used performance contracting for a number of years. In many states, quality assurance systems have been modified to de-emphasize process compliance in favor of rating agencies based on whether they are achieving results that reflect such desired outcomes as community inclusion, real work, and individual control over life decisions and living arrangements. The Council on Leadership and Quality (formerly the Accreditation Council) has developed performance measures based on desired outcomes. There is increasing sophistication in conducting evaluations of the effects and outcomes of community services. The federal Health Care Financing Administration (HCFA) is in the process of designing a quality indicator system for the Medicaid home and community-based waiver program which states may adopt. The HCB waiver program has emerged as a very important means for SDDAs to underwrite community services and supports for people with developmental disabilities.

NASDDDS is sponsoring the Core Indicators Project for three fundamental reasons:

First, the Association's mission is to support its member agencies in improving the delivery of services and supports for people with developmental disabilities. NASDDDS member agencies have expressed a strong interest in securing the capacity to more systematically gauge the performance of their service systems, particularly in light of their interest in reconfiguring service funding mechanisms to promote greater flexibility and cost-effectiveness. SDDAs recognize that shifting from "program-funding" to more flexible arrangements necessitates the development of performance and outcome-based data systems.

Second, it is vitally important that performance/outcome evaluation systems are designed with the needs of SDDAs in mind. SDDAs play important roles in overseeing the delivery of services and supports as well as making purchasing decisions. If a performance indicator system is to inform system improvement strategies, it must take into account dimensions of performance that SDDAs regard as important.

Third, conducting the project under the auspices of NASDDDS enables SDDAs to pool their resources to support the development of a performance indicator system. Moreover, since the project mainly concerns system-level performance, it needs to be conducted collaboratively by several SDDAs in order to ensure that the results are compatible state-to-state.

By agreeing to sponsor the project, the NASDDDS Board of Directors has made an important commitment to support the Association's member agencies in their efforts to improve the delivery of services and supports to people with developmental disabilities. Fifteen SDDAs have agreed to actively participate in CIP as members of the Project Steering Committee.

Project Description and Aims

The Core Indicators Project is a multi-state project aimed at identifying and testing a comprehensive, "core" set of data-based indicators that can serve as the foundation for assessing how well a state's public developmental disabilities system is performing in comparison to systems in other states. In other words, the indicators that are being examined through the project and the data collection methods being tested are aimed at supporting interstate comparisons of system performance.