How Much and Where?

Private vs. Public Universities’ Publication Patterns

in the Information Systems Discipline

Clyde W. Holsapple

University of Kentucky

Daniel O’Leary

University of Southern California

June 2008

Revised September, 2008

The authors are listed alphabetically.

Acknowledgement: The authors wish to thank the referees for their comments on an earlier version of this paper.

Comments are solicited

© Copyright 2008, Clyde W. Holsapple and Daniel O’Leary
How Much and Where?

Private vs. Public Universities’ Publication Patterns

in the Information Systems Discipline

Abstract

In most disciplines of scholarly endeavor, there are many efforts at ranking research journals. There are two common methods for such efforts. One is based on tabulations of opinions offered by persons having some kind of relationship with the discipline. The other is based on analyses of the extent to which a journal’s articles have been cited by papers appearing in some selected set of publications. In either case, construction of a journal ranking for a discipline makes no effort to distinguish between private and public universities. That is, data are aggregated across faculty researchers at both private and public universities. It is thus assumed that the resultant ranking is applicable for both kinds of institutions. But is this assumption reasonable? The answer is very important, because these rankings are applied in the evaluation of promotion, tenure, and merit cases of faculty members working in a discipline. Here, we examine this widespread bibliometric assumption through the use of a ranking methodology that is based on the actual publishing behaviors of tenured researchers in a discipline. The method is used to study the behaviors of researchers at leading private universities versus those at leading public universities. Illustrating this approach within the information systems discipline, we find that there are indeed different publication patterns for private versus public institutions. This finding suggests that journal ranking exercises should not ignore private-public distinctions and that care should be taken to avoid evaluation standards that confound private and public rankings of journals.

Keywords: information systems, journal rankings, private universities, public universities, publication breadth, publication intensity, publishing behavior
1. Introduction

Over the years there have been numerous studies analyzing and ranking journals for publishing information systems (IS) research. Rather than reaching closure on ranking issues, such studies continue to emerge, employing different data sets and variants of two main approaches. One approach involves analyzing some set of citations to some set of articles published in each journal being considered. The other involves ranking journals based on opinions elicited from some set of observers about some set of journals with respect to some criterion (e.g., “quality” or “best”). Such approaches seem to follow the old adage, “don’t do as I do, do as I say,” overlooking where established IS scholars actually publish their research as a basis for evaluating and ranking journals.

Although the many IS journal ranking studies have accounted for many variables and generated a variety of different results, there is continuing controversy over these rankings, what they mean, and how they are to be applied. Notably, aside from overlooking actual publishing behaviors, none of these studies considers another variable – one that has received at least some attention in other settings: whether a faculty member is from a public or private school. That is, are journal ratings in the IS discipline different for private and public universities, given a particular rating approach? Accordingly, this paper contrasts publishing behaviors of tenured IS faculty members from public versus private universities in terms of both the numbers of journal publications and the journal placements of these publications.

This paper proceeds as follows. Section 2 provides a background that includes brief descriptions of different approaches to devising journal rankings and of previous studies concerned with research differences for public versus private universities. We detail the methodology used in this study, along with an explanation of how it is implemented, in Section 3. In Section 4, we present and discuss the findings. Section 5 briefly summarizes the paper, its contributions, and future extensions.

2. Background

To provide a context for understanding the nature and value of this paper, we furnish descriptions of prior related research involving journal ranking methodologies and examinations of research differences for public versus private universities. Along the way, we build a case for the practical importance of issues examined in this paper. Against this background, we subsequently describe the research process and its findings.

2.1 Journal Ranking Methods

There has been considerable effort directed at trying to understand which journals are the “leading” or “most desirable” outlets for publishing information systems research. Many of these efforts are summarized by Peffers and Ya (2003). An analysis of these studies reveals that two basic types of methodologies have been used over the years. One of these methods involves gathering citation information, either manually (e.g., Alavi and Carlson 1992; Holsapple et al. 1994) or using digital databases (e.g., Katerattanakul et al. 2003), capturing the citations researchers make in what they have written, which typically reflects what they have read. The other method involves gathering opinion and perception data from various sets of subjects, ranging from deans of business schools (Doke and Luke 1987) to information systems researchers (e.g., Mylonopoulos and Theoharakis 2001). As a complement to these two heavily-used methodologies, Figure 1 illustrates a third methodology: observing the history of publishing behaviors exhibited by IS researchers. Here, we adopt this third methodology in exploring differences between journal publishing patterns for IS faculties at private versus public universities in the United States.

Rather than ranking methods that study what researchers cite or what they espouse, an alternative is to study where IS researchers actually publish their work. The first study of this kind examines the overt journal publishing behaviors of all tenured IS researchers (as of 2006) belonging to the faculties of an independently selected set of 20 schools representative of the top public research universities in the United States, including the likes of Arizona, Georgia, Maryland, Michigan, Minnesota, and Texas (Holsapple 2008). Aggregating these publishing behaviors, it finds that the tenured information systems researchers at these 20 universities have historically (1980-2006) published more research articles in Decision Support Systems, Journal of Management Information Systems, and MIS Quarterly than in any other IS journals.
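As a rough illustration of this behavior-based approach, the sketch below tabulates article counts per journal from publication records and orders journals by those counts. The record fields and entries are hypothetical placeholders, not the data set used in Holsapple (2008).

```python
from collections import Counter

# Hypothetical publication records for tenured IS faculty members at the
# sampled universities; field names and entries are illustrative only.
records = [
    {"faculty": "A", "journal": "MIS Quarterly", "year": 1998},
    {"faculty": "A", "journal": "Decision Support Systems", "year": 2003},
    {"faculty": "B", "journal": "Journal of Management Information Systems", "year": 2001},
    {"faculty": "B", "journal": "Decision Support Systems", "year": 1995},
]

# Behavior-based ranking: count the articles actually placed in each journal
# over the study window (1980-2006), then order journals by that count.
counts = Counter(r["journal"] for r in records if 1980 <= r["year"] <= 2006)
for rank, (journal, n) in enumerate(counts.most_common(), start=1):
    print(f"{rank}. {journal} ({n} articles)")
```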

The implicit view taken there is that the big picture about journal importance cannot be seen from the vantage points of what some commentators say regarding “best” publication outlets or what journals some set of researchers reference in their writings. Instead, a key vantage point for understanding relative long-term importance of alternative journal outlets for the IS discipline is what well-established IS researchers do over an extended time. Where do they actually publish their research as a basis for success in their research careers – achieving tenure, subsequent efforts at garnering strong merit review results, and becoming vertically (and horizontally) mobile?

In the case of tenured IS researchers, it is very likely that their publishing behaviors (in terms of both quantity and placement of journal articles) are very largely responsible for their having been granted tenure at their respective universities. That is, each has published a sufficient number of articles in sufficiently well-regarded journals to be judged by the university, by internal peer evaluators of the promotion case, and by external peer evaluators of the research record as passing some critical threshold of research accomplishments. This threshold can differ from one university to another. Nevertheless, it is safe to say that thresholds for universities regarded as exhibiting the greatest research prowess tend to be at least as high as (or higher than) thresholds in place at other universities.

In the case of the IS discipline, it is presently unclear whether research thresholds differ, and how they differ, between the most prominent private research universities and the most prominent public research universities. Here, we investigate and provide answers for this issue. The prior study of IS publishing behaviors is extended as follows. We examine the collective journal publishing outcomes of all tenured IS faculty members at 31 of the most prominent public research universities (which include the 20 in the previously noted study). Most important for the purpose of this paper, we also examine the collective journal publishing patterns of all tenured IS faculty members at 31 of the most prominent private research universities. In comparing the public vs. private cases, we uncover both commonalities and striking differences in IS publishing norms and tendencies. Before considering these, we offer some further background, which pertains to the existence of private vs. public research differences, the rationale for understanding such differences in the case of the IS discipline, and what we can expect to find in an examination of such differences.

2.2 Research at Public versus Private Universities

When evaluating university research, it is not unprecedented to differentiate between private and public universities (e.g., Lombardi et al. 2007). Although other disciplines have differentiated between researchers in private and public schools, the issue has heretofore received little, if any, attention in the information systems literature.

There is reason to expect differences in research productivity between public and private university faculties. As noted by Armstrong and Perry (1994, p. 16), “Private schools are less bureaucratic and thus are able to respond more rapidly and with more flexibility to customer (business) needs. Further, donors prefer giving money to private schools on the grounds that they are not also supported by state government subsidies. Such private monies can be used to promote … research … thus enhancing the schools’ reputation.” Dundar and Lewis (1998) suggest that incentives in public and private universities could be different and that private universities generally provide performance incentives, in the form of higher salaries, to enhance research productivity. Further, Dundar and Lewis (1998) also assert that “most private research universities generally have fewer but more highly research-productive faculty than those typically found in public schools.”

Differences in research productivity between public and private universities have been substantiated in several disciplines. For instance, in the field of economics, Jordon et al. (1988) find that faculty members at private institutions exhibit greater average research productivity than their counterparts at public universities. Dundar and Lewis (1998) find similar results in the biological sciences, engineering, the physical sciences, and mathematics. Moreover, Armstrong and Perry (1994) find a difference in the research impacts of faculties, in favor of private universities over public universities.

From a practical standpoint, there are several reasons why it is important to understand if there is a difference between public and private measures of IS research productivity, and the nature of that difference if it exists. First, one traditional way of evaluating a researcher is in terms of the stature of journals in which his/her research articles are published. If a school uses rankings to help gauge a journal’s stature, then care must be taken to be sure that there is a fit between a ranking and the type of school. Because extant rankings of IS journals do not distinguish between public and private universities, we do not know whether a particular ranking fits with norms for private universities, public universities, both (i.e., no substantial private-public differences), or neither (i.e., due to confounding that could stem from mixing the two into a single ranking).

Second, when evaluating a promotion case (e.g., as an external reviewer), perhaps we should take into account whether the candidate’s university is private or public. Because universities in one category may have a noticeably different research threshold than those in the other category, care must be taken to avoid applying the norms for one type of institution to an evaluation of someone belonging to a different type of university. For instance, the typical quantity of journal articles involved in thresholds at one type of university may be considerably different than that for another type of university. Additionally, the typically favored journals at private universities (e.g., as reflected in actual publishing behaviors of their tenured faculty members) may be ranked considerably differently than what is common for public universities, or for some mix of private and public schools.

Third, aside from informing promotion cases to associate, full, or endowed professor levels – which are very important for building long-term strength in any particular university’s IS faculty – private vs. public differences give a context for assessing the relative research success of individual scholars. Such assessments comprise a base for studies that endeavor to determine how well an individual IS department is doing relative to others. An appreciation of private vs. public differences may well suggest what benchmark schools should be used as a basis for comparison, given that universities of the opposite type may well tend to have a different standard for research productivity. Being able to characterize an IS department’s relative research performance is important for efforts at external fund raising, attracting high potential students (especially at the doctoral level), faculty recruiting, and achieving favorable allocations of university resources.

Fourth, an appreciation of differences between private and public universities’ conceptions of favored journal outlets may well inform the editors and publishers of IS journals. For instance, the fact that a particular journal is rated substantially higher for, say, private universities may be a reflection of its stated editorial scope, its marketing approach, its editorial board composition, or simply a tradition. This state of affairs may indeed be the aim of the journal and the result of intentional effort. On the other hand, the journal’s publisher and editor may be unaware that the journal is not as highly rated with respect to public university norms. Because the public university segment represents a huge market and a large potential source of excellent research, those who shape the journal may want to take steps toward elevating its exposure, appeal, and/or openness (and ultimately its rating) in the public university arena.

So, what can we expect to find from an investigation of journal publishing behaviors exhibited by tenured IS faculty members, over an extensive timeframe, for prominent private versus public universities? Because there are many large public universities, we would anticipate fewer IS faculty members for a set of N research-intensive private universities than for a set of N research-intensive public universities. However, as seen in other disciplines, we also would expect the faculty members for the set of private schools to produce more research output per capita, averaging more journal publications per faculty member than their counterparts in the set of N public universities.
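A minimal sketch of this per-capita comparison appears below. The faculty head counts and article totals are placeholders, not figures from this study; they only show how publication intensity per faculty member would be computed for each segment.

```python
# Placeholder head counts and article totals for the two segments; these are
# not the study's actual numbers, only an illustration of the comparison.
segments = {
    "private": {"faculty": 60, "articles": 300},
    "public": {"faculty": 110, "articles": 440},
}

for name, data in segments.items():
    per_capita = data["articles"] / data["faculty"]
    print(f"{name}: {data['faculty']} faculty, {data['articles']} articles, "
          f"{per_capita:.2f} articles per faculty member")
```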

Further, differences in the two university segments could manifest as a “divide,” such that substantial differences between them would be found in any analysis of IS journal rankings that pays attention to the two segments. In particular, if one population is behaving in one manner while another is behaving in a different manner, and those differences in behavior go unnoticed, there is likely to be a conflict. When a single result is forced on both groups (e.g., choice of a single ranking system), aggregation across significantly different populations obscures the nature of their differences and is likely to generate controversy.

Thus, in the interest of greater clarity, we investigate potential differences between public and private universities’ actual research outputs produced by tenured IS faculty members. The results help to better illuminate the big picture of IS research productivity in academia.

3. Methodology

This section summarizes the methodology used for this study. It does so by explaining the treatment of several key issues: choice of universities to be studied as being representative of the leading private and public universities in the United States, choice of IS faculty members from these universities to be included in the data collection, choice of a data collection timeframe, choice of the source for collecting data about faculty member publications, and choice of a cut-off point below which data collected for a journal are ignored. Analysis of data collected for journals achieving the cut-off is presented later – for chosen IS faculty members at the representative private and public universities.

3.1 Universities Studied

Rather than taking random samples from the populations of public and private universities, we are interested in identifying a sample from each segment that can be fairly regarded as representative of the leading research-intensive universities in that segment. By “leading,” we mean very high in factors such as visibility, impact, reputation, research funding, and other aspects of scholarly prowess. It is imperative that such public and private samples be derived from an independent source - one that uses the same criteria for identifying top private research universities and for identifying top public research universities.

Data provided by the 2005 report of The Center for Measuring University Performance (http://mup.asu.edu/) satisfy this imperative. Known as TheCenter, this organization produces a ranked list of the highest performing private universities in the U.S., based on a variety of objective measures – none of which involves publications in some pre-specified list of journals. The same kind of list, based on the same metrics, is reported for public universities. From each list, we adopt the N universities that are ranked highest by TheCenter as a representative sample of the leading universities in each segment.

As for the size of N, if it is too small (e.g., 10), then the sample may not be fully representative of the leading universities. If N is too large (e.g., 50), then the sample may be too “watered-down” to be representative of the leading research universities, as dipping that deeply into the ranking would include universities farther from the highest levels of performance necessary to be regarded as being among the premier public or private institutions. Splitting the difference, we settle on an N of 30. Because of a tie in the thirtieth position on one of the lists, we consider the 31 highest-ranked universities having schools of business, management, and/or economics. Of course, one could quibble that N should be 27 or 33, but it is unclear that the result would be substantially different.
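The selection rule just described (take the top N, include any tie at position N, and restrict attention to universities with a relevant school) can be expressed as a short sketch. The university names, ranks, and school indicators below are placeholders, not TheCenter's actual lists.

```python
# Placeholder ranked list: ranks 1-29 plus a tie at rank 30, so the rule
# yields 31 universities; entries are illustrative, not TheCenter's data.
ranked = [{"university": f"U{i}", "rank": i, "has_business_school": True}
          for i in range(1, 30)]
ranked += [{"university": "U30", "rank": 30, "has_business_school": True},
           {"university": "U31", "rank": 30, "has_business_school": True}]

N = 30
cutoff_rank = sorted(u["rank"] for u in ranked)[N - 1]  # rank held by the Nth entry
sample = [u for u in ranked
          if u["rank"] <= cutoff_rank and u["has_business_school"]]
print(len(sample), "universities selected")  # 31, because of the tie at position 30
```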