Rebecca Herman, senior researcher at the American Institutes for Research and a member of the EWA Research Roundtable, took a very quick look at the text of the National Education Policy Center report profiling education management organizations, the 13th annual “Profiles of For-Profit and Nonprofit Education Management Organizations 2010-11,” which was released Jan. 6. She notes that she did not look at the appendices, and some of the answers may reside there.

Here are her comments and questions:

Questions About the Comprehensiveness of the List of EMOs

The researchers note in a few places how difficult it was to get a comprehensive list of EMOs, and how in some cases they received new data this year that caused them to revise previous years’ reports (e.g., p. 4). They indicate that the list of small EMOs may not be comprehensive.

Based on those observations in the report:

  • In the data collection this year, did they uncover any medium or large EMOs that had been missing from the review in previous years? What is the chance that larger EMOs are missing from the current report?
  • Was it harder to find non-profit than for-profit EMOs (or vice versa)? Could that have introduced a bias?
  • The researchers indicated that they got more intensive support in identifying EMOs from three jurisdictions: California, the District of Columbia, and New York. Does that mean that the EMO list might be more complete in those locations? What does that mean for the results—are there more large or more for-profit EMOs operating in CA, DC, and NY, for example, introducing a bias into the national findings?

Questions About Size, Demographics and Profit Status of EMOs

The report focuses primarily on the size and for-profit/non-profit status of EMOs. Are there meaningful differences in the nature of the services provided? For example, do larger EMOs tend to provide more systematic school structures, and do smaller EMOs tend to tailor the program to the student population/context? If there are systematic differences among EMOs that relate to size or profit status, that might be helpful in thinking about the implications of, say, the rise in the number/share of large non-profit EMOs.

Can the EMOs or researchers cast any light on the causes or implications of some of the patterns noted? For example, why have non-profit EMOs expanded recently? What does that mean in terms of the nature of EMO-run schools available?

One of the issues raised about charters is that they often have a different student population than the district from which they draw students. It would be extremely helpful to know the demographics of the EMO-run schools, especially in the analyses of adequate yearly progress (AYP) status under the No Child Left Behind Act. Do EMO-run schools have more or fewer disadvantaged students (e.g., students with disabilities, low socioeconomic status, English-language learners, histories of low achievement) than the surrounding community, and how does the AYP status of these schools compare to the surrounding community? Demographic data are generally available from state and federal sources, making it possible to conduct these analyses.

Questions About Accountability of EMOs

Accountability is a serious concern for EMOs, their schools, and policy makers. Therefore, I think it’s useful to ask what more the researchers can share about EMO-operated schools making AYP.

  • The researchers collected AYP data for 81% (non-profit) to 89% (for-profit) of EMO-run schools. What were some of the reasons AYP data were not available? Does that introduce any systematic bias in these results? I could imagine that AYP data would not be as available, for example, for schools that are just at the edge of the cut-point or safe harbor provisions.
  • Schools can miss AYP for big things (the percentage of students across the school missing cut-points in both English and math) or smaller things (a single small subgroup missing in one subject). I believe it’s not much harder to get this nuanced information than the “make-or-miss AYP” information, and the nuanced data are much more meaningful when talking about how well (or poorly) EMOs are doing.
  • The researchers indicate that AYP passing rates have decreased since 2009-10. How does this relate to rising state standards for AYP (it was harder to make AYP in 2010-11 than in 2009-10)? It would be useful to look at the AYP rates of the surrounding community for the two time periods, to see whether the decline is limited to the EMO schools.
  • How do the researchers determine what difference is “big enough” to call important? Is it a 10-percentage-point difference? Some other bar? How was that determined? In one place the term “significantly” was used. Did the researchers mean “statistically significantly” and, if so, how did they test that, and were all comparisons tested in that way?
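On the significance question, one standard approach for comparing pass rates between two groups is a two-proportion z-test. The Python sketch below illustrates how such a test works; the pass rates and school counts are hypothetical, chosen only for illustration, and are not figures from the report:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z-test statistic: is the gap between two
    pass rates larger than sampling variation would explain?"""
    # Pooled pass rate under the null hypothesis of no difference
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    # Standard error of the difference in proportions
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical example: 52% of 500 schools in one group made AYP,
# versus 60% of 400 schools in another (illustrative numbers only).
z = two_prop_z(0.52, 500, 0.60, 400)
significant = abs(z) > 1.96  # conventional 5% two-sided threshold
```

With these made-up inputs the difference would clear the conventional 5% bar; the point is simply that such comparisons can be tested formally rather than judged by an ad hoc percentage-point cutoff.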

Questions About State-by-State EMOs

Is there a relationship between receiving Race to the Top (RTTT) funding in the first two rounds and changes in the numbers of EMOs? RTTT requires states to allow charter schools, and pushes to remove caps or other restrictions on the granting of charters. Does that seem to relate to changes in the numbers of EMO-run schools? It would be interesting to look at the RTTT states versus the non-RTTT states over time in terms of the numbers of EMO-run schools. Three RTTT states (Hawaii, Delaware, Rhode Island) did not seem to have any EMO schools operating, which seems a little contrary to the spirit of RTTT. Is there a reason for that?