Note (29 July 2002): submitted for publication to INTERCIENCIA on 4 June 2002.

SCHOLARLY JOURNALS FROM SCIENCE PERIPHERY - TOWARDS A COMMON METHODOLOGY FOR EVALUATING THEIR SCIENTIFIC COMMUNICABILITY?

Siniša Maričić

HR-10000 Zagreb, Poljička 12/D-419, Croatia

Abstract

It is proposed in this article that scholarly journals from the science periphery can (and should) be assessed by their scientific communicability.

The objective is to review the literature comprehensively and critically in order to suggest a simple method for reliable "stratification" of journals, from the most to the least appropriate for public funding support and for inclusion in international databases. The emphasis is on the neglected principle of evaluation indeterminacy (Francis Narin's) and on a strategy (Michael Moravcsik's) for bridging the communication gap between the mainstream and peripheral science communities.

Background

‘Science communication’ as defined here is formal communication within academia, not communicating science to (or with) the public at large. We are witnessing a transitional period for science communication due to the impact of information technology [1]. While information technology may be transforming communication techniques, formal communication remains critical to the advancement of science. To quote the most recent source, an international encyclopaedia [2]:

"Scientometrics can be defined as the study of the quantitative aspects of scientific communication, R&D practices, and science and technology (S&T) policies." Hence, the communication within science is one of the three aspects of science studies in a quantitative manner. While the communication may be manifold (from informal, personal, to purely formal), the formal one through scientific journals will be discussed here.

The formal scientific communication structure is shaped by secondary information services - the databases. These commercial services index multitudes of papers from a vast array of journals. How do their managements select the journal titles for coverage within their databases? The problem was raised, albeit indirectly, at an early stage of the discussion in a piece titled "Publication of scientific information is not identical with communication" [3].

There are no clearly stated selection criteria when it comes to choosing, for secondary information coverage, the journals published within peripheral scientific communities. And yet scientific information published in journals must first be brought to the attention of the scientists who read it. With an overwhelming volume of research being published, scientists rarely have the possibility of serendipitous discovery of literature pertinent to their field. The secondary science information sources (the databases) are therefore indispensable to formal scientific communication.

The term “periphery” has no simple definition in the context of science. It was used earlier [4] with a very strong connotation referring to less developed countries. But the difference between the "centre(s)" and the "periphery" was stressed by invoking the element of size [5]: “Most of the world's scientific activities are concentrated in a few countries, which from a global perspective can be defined as the center. Other countries, which for historical, economic, social, cultural reasons represent the smaller share of global scientific activity may be characterized as the periphery. Although the concept of periphery is usually associated with the Third World countries that are relative latecomers to western science, many small, economically advanced countries (in Europe for instance) for structural and cultural (or linguistic) reasons are also in a peripheral position.” In many respects both types of countries face similar problems with respect to the automatic adoption of evaluation procedures developed in the centre for assessing its scientific activity.

The term can, however, be described in a more qualitative (and perhaps more substantial) way [6]. There are at least three characteristics of the scientific periphery: (I) smallness, (II) lack of societal equilibrium, and (III) communication barriers.

(I) (Smallness) The scientific community is relatively small with regard to the fields of current research. The sheer size of the country (geographic and/or demographic) is not of sole importance, but rather the structure of its scientific endeavour. Peripheral scientific communities have a sub-critical research mass in many of the research fields pursued. Under such circumstances there is frequently a lack of qualified scientists to take part in the peer review process. Instead, subjective "in-person assessments" [7] come to the forefront.

(II) (Societal non-equilibrium) Owing to their smallness, peripheral scientific communities lack the self-regulatory mechanisms that are otherwise common in well developed modern societies, where the science sphere is one of the social factors within decision-making procedures.

(III) (Communication barriers) Within the time scale of human civilization, science has become a lasting and dynamic world process of cognition. Any barrier against bringing the periphery closer to the core of the science process hampers the very substance of this human endeavour. Ideological and political barriers may be regarded as special cases, and they may or may not persist for long, but sociocultural barriers are very serious, and among them language could be regarded as the most important: "…to replace the indigenous language by a foreign one is dangerous in science because it hampers the development of the all-embracing modern culture within the ethnic group…(and)…to make negligible the cultural barriers preventing the social influence of science, it (science) must be integrated within the culture(s) of particular countries…" [8]

Can the literature of the science periphery be integrated into the fabric of science communication? Positively articulated replies to this question have come from the most advanced part of the world [9, 10], but the cultural, technical and financial hurdles are serious. An impressive overview of the real-life difficulties in peripheral science communities is presented in [11]. In [12] it is stated that "Almost 50 percent of all African research reports are published in local scientific journals that are not listed in information data bases."

Introduction

It would seem desirable to determine whether there are any journal evaluation/ranking studies outside the database management systems that could be used in constructing a common methodology for improving the journal selection procedures of the databases. In pursuing this aim there is hope that if a common methodology for journal evaluation and ranking could be discerned, it might help ameliorate the status quo.

The rationale for this review is twofold. First, externally (with respect to the origins of scholarly journals), the science communication structure of peripheral scientific communities cannot be neglected within science studies in general; second, internally, from the practical point of view, most journals in peripheral scientific communities are directly financed by public money (as opposed to the well-developed world). There is thus a need for an efficient and, as far as feasible, objective method as a basis for indigenous financial decision making in support of journals (and/or for their inclusion in international databases, as mentioned initially), or, at least, for monitoring purposes. Whether such a method will be made use of at all remains to be seen, but that is an operational aspect which must not put in doubt the need for this type of study.

The last decade or so has witnessed several attempts, from the less developed countries, to devise methods for evaluating indigenous journals. Prominent among these is the Latin American case [13]. The literature relevant to the present review was searched with the intention of not missing any important contribution. If there are any such publications in Asian or African languages, or in journals not recorded in databases or on the Internet, their absence from this review is regretted. (The author would welcome information on any such case.) Deliberately excluded from this review are papers that either deal with single journals, or with a rather small subject field, or make use solely or preponderantly of the citation approach (through the citation indexes).

Journals are products of cooperation within the "triangle" of authors, editors (plus publishers) and referees. For the present purpose the editing-and-publishing (EP) process will be treated as far as it is demonstrated through the physical appearance of the journals. The referees and their reviews are not straightforwardly accessible for independent studies, certainly not before the time comes of completely open refereeing for the digitally produced (versions of) journals. Indirectly the refereeing process may be assessed via the editors and/or the authors, which requires meticulously designed questionnaires and involved statistical analyses. Equally important within the peripheral scientific communities is the lack of a sufficiently responsive surveying climate.

If the number of journal titles to evaluate is on the order of a few hundred, the editors, or even full editorial boards, as well as a selection of authors, may be surveyed about the journals with which they are involved. It is indeed the only way to get a glimpse inside the very process of editing-and-publishing journals (the refereeing mechanism included), and it requires well-organised research teams. An excellent example comes from Australia [14] - a “medium level” country of scientific communities integrated within the social fabric and yet with an eye for the problems of the less developed countries. Another example, on a smaller scale but also meticulously executed, comes from South Africa [15].

Evaluation Indeterminacy

About a quarter of a century ago Francis Narin pointed out an inherent indeterminacy in the evaluation of “scientific advance”. The paper was published in the very first issue of Scientometrics [16], nowadays considered a leading journal for quantitative science studies.

The inverse relationship of methodological objectivity vs. relevance can be summarised briefly for the present purpose (for details the interested reader may consult [16, 17, 18, 19]). There is no mathematical expression of Narin's indeterminacy, so the concept can only be described qualitatively. Narin depicts it in a two-dimensional graph of the “more relevant (methods) to true measurements of scientific advances” (the ordinate) against the “more objective (methods) used in the assessment” (the abscissa).

In the graph, a downward “curve” begins at the lowest objectivity with concurrently high relevance of methods, ascribed mainly to expert opinions. The relationship (“curve”) between the two variables descends further in relevance, at first slowly, while the objectivity increases. The concluding part of the curve lies within a rather narrow range of high objectivity, where the relevance diminishes sharply to its lowest “value” (at the highest objectivity), ascribed to methods based on quantifiable parameters (simple counts).

The whole “interdependence” is substantiated by Narin's intuitive choice and sequencing of various methods in assessing the “scientific advance”. The beginning of the “curve” is depicted by methods exploiting several shades of interviews and surveys, what we shall call here the questionnaire methods ("q"). In his papers Michael Moravcsik [20, 21, 9] calls the indicators derived by these methods perceptual, which “implies a personal evaluation, by inspection, on the part of knowledgeable investigators of the particular situation to be assessed. This kind of assessment is often referred to as ‘peer review’” (p. 172 in [20]). The maximal objectivity at the lowest-relevance end is substantiated by various methods of quantifying scientific productivity. These methods are called databased and, sometimes, “quite misleadingly, objective, including bibliometric measure, patent counts, production statistics, counts of literate persons, etc.” (p. 172 in [20]).
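Since [16] gives no formula, the shape just described can only be sketched. The short Python fragment below is not taken from Narin's paper; the functional form, the annotation placement and the file name are arbitrary assumptions made purely for illustration of the qualitative curve, with the two families of methods marked at its ends.

import numpy as np
import matplotlib.pyplot as plt

# Purely illustrative: a monotonically decreasing "relevance vs objectivity" curve
# of the qualitative shape described in the text; no scale or formula is implied.
objectivity = np.linspace(0.05, 1.0, 200)
relevance = 1.0 - objectivity**4          # falls slowly at first, then sharply

fig, ax = plt.subplots(figsize=(5, 4))
ax.plot(objectivity, relevance, color="black")
ax.annotate("questionnaire ('q') /\nperceptual methods", xy=(0.08, 0.99),
            xytext=(0.20, 0.70), arrowprops=dict(arrowstyle="->"))
ax.annotate("databased methods\n(simple counts)", xy=(0.98, 0.08),
            xytext=(0.50, 0.30), arrowprops=dict(arrowstyle="->"))
ax.set_xlabel("methodological objectivity (increasing)")
ax.set_ylabel("relevance to 'scientific advance' (increasing)")
ax.set_title("Illustrative shape of Narin's evaluation indeterminacy")
ax.set_xticks([]); ax.set_yticks([])      # the axes are qualitative
plt.tight_layout()
plt.savefig("narin_indeterminacy_sketch.png")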

As journals are the primary public record of the science process dynamics, Narin’s indeterminacy may be expected to hold in journal evaluation as well. However, a caveat is appropriate at this point. Namely, the present case (based on earlier papers referred to here) appears to be the unique concrete application of "Narin's indeterminacy", so that the generality of the latter is as yet untested.

Let us turn now briefly to a puzzling question, which was not addressed in [16]. Why should there be such an inverse relationship between the objectivity of a method and its relevance to determining the value of a (societal!) manifestation like science (and perhaps of scholarly journals as well)? One can rationalise, perhaps, that the objectivity increases as the observed manifestation is simplified. This occurs, however, at the expense of the relevance, which diminishes simultaneously, because only the outer, ever more simplified, manifestation is being observed. Comprehension of the underlying “heart of the matter” is thus gradually lost.

Journals emanate from, and exist for, the science endeavour. Their role as constituents of formal science communication channels is only part of the science story, though an important one. So, what "property" can one invoke to evaluate a journal within scientific communication channels? We choose here the journal's scientific communicability.

Webster's Third New International Dictionary (1986) explains “communicability” as: "the quality of being readily communicated or of having a message readily understood". The second meaning is beyond the scope of our approach here, because it would mean entering the realm of content analysis, which after all is the main part of the science process proper. The leading part of the definition, on the other hand, is quite useful for the present purpose. The extent to which a journal is capable of being "readily communicated" within the science process, i.e. its "degree of communicability", will be used here within Narin's indeterminacy evaluation approach.

In the case of journal evaluation this inverse relationship (between the journal's scientific communicability and the objectivity of its assessment) seems to break into two distinct sections [17]. Let us here expand the caveat stated above. As in Narin's original paper, we intuitively select the available approaches for evaluating the scientific communicability of journals. Narin dealt with a broad definition of "scientific advance". Journals are a reflection of scientific advance if "the whole sample" is taken into account. In such a case (of all scholarly journals) one would expect a single monotonic "curve" depicting the indeterminacy relationship, as in the case of "scientific advance".

However, when intuitively selecting one particular evaluation "method" the researcher simultaneously and unintentionally selects subsamples of journals, which may lead to a discontinuity in the indeterminacy relationship, as was observed here with the methods for evaluating the scientific communicability of journals. This may account for a dichotomy in applying Narin's indeterminacy when evaluating scholarly journals.

One approach, that of high objectivity/low relevance, does comply with the inverse relationship. The other, however, "defies" such an uncertainty: in the starting region of rather low objectivity, both the relevance (for assessing scientific communicability) and the (methodological) objectivity increase within a rather narrow range of the latter. This framework then yields roughly two juxtaposed classes of methods for evaluating journals: Class A of low objectivity, and Class B of higher objectivity.
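Again only as a hedged illustration (not taken from [17]; the numerical ranges and functional forms are arbitrary assumptions), a Python sketch of the same kind can depict the dichotomy just described: a short Class A branch on which relevance rises together with objectivity over a narrow low-objectivity range, and a Class B branch that follows the usual inverse relationship at higher objectivity.

import numpy as np
import matplotlib.pyplot as plt

# Class A: a narrow, low-objectivity range where relevance and objectivity rise together.
obj_a = np.linspace(0.05, 0.30, 50)
rel_a = 0.30 + 1.8 * (obj_a - 0.05)

# Class B: higher objectivity, relevance falling as objectivity grows (inverse relationship).
obj_b = np.linspace(0.45, 1.0, 100)
rel_b = 1.0 - obj_b**4

fig, ax = plt.subplots(figsize=(5, 4))
ax.plot(obj_a, rel_a, label="Class A: low objectivity, rising relevance")
ax.plot(obj_b, rel_b, label="Class B: higher objectivity, inverse relationship")
ax.set_xlabel("methodological objectivity (increasing)")
ax.set_ylabel("relevance to scientific communicability (increasing)")
ax.set_xticks([]); ax.set_yticks([])      # qualitative axes, no scale implied
ax.legend(loc="lower left", fontsize=8)
plt.tight_layout()
plt.savefig("journal_evaluation_dichotomy_sketch.png")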

The Methods

Class A methods - low objectivity

Four levels of concurrent increase in both objectivity and relevance can be discerned among the methods “defying” Narin's indeterminacy, as follows [17]:

(i) Journal selections by local public funding agencies (through their "expert committees"); this is the case of creating perceptual indicators in Moravcsik's sense. The results of a thorough questionnaire approach [14, 15] may indeed improve the objectivity. However, for the peripheral (less-developed) scientific communities this is not feasible. For one thing, there is a limitation owing to the small numbers involved (see (I) (smallness) of the science periphery above). Secondly (because of the small size of the communities), the “committees of experts” frequently, if not regularly, act under group pressure.

Here is an example from Latin America. In [13] there are two references recording two workshops held within a span of three years in Mexico: the proceedings of the first [22] are about scientific publications in general, whereas the second deals explicitly with scientific journals [23]. A contributor to [22] mentions at the end of his intervention [24] that a very limited list of “excellent” Mexican scientific journals had been created by experts for the science council (CONACYT). The same author reflects, after three years' experience, upon that kind of evaluation of scientific journals (p. 367, [25]):

“1) it is absurd and irresponsible to defend the idea that the government is obliged to support financially, in any way, the publishing of whatever scientific journal; 2) it is necessary to delimit the aid, if there is any, by rules that are objective, clear, public and equally applicable to all… How to differentiate between the journals which deserve some type of subsidy and those that do not?”

Moravcsik’s wording (p. 174, in [20]) adds weight to our putting the indigenous “expert committees” journal ranking at the lowest level of both the relevance and the objectivity: “Perceptual indicators, that is peer reviewing, are also on shaky grounds in the context of developing countries. Scientific communities in individual countries are mostly too small to allow internal peer reviewing … developing countries are often reluctant to turn to external peer reviewing, because those in charge of science policy are not acquainted sufficiently with the world-wide scientific community to know whom to ask, because going outside the country is thought to reflect on the national sovereignty, because there is fear that the request for outside help will be rebuffed, and because financial resources may not be available to organise such external peer reviews. In some cases external review teams arranged through international organisations turned out to be both ignorant of and insensitive to the local conditions under which scientific work must be performed in the developing countries.”

(ii) By domestic (national) bibliographical and indexing/abstracting services, with higher or lower relevance according to their expected readership - international or domestic, respectively.

The national bibliographic services (where they exist) accumulate experience in exercising good judgment to define the pool of domestic journals for their bibliographic processing. The criteria vary between countries, but while many of the chosen journals are of strictly national cultural interest, scholarly titles are included. Such approaches yield the widest possible journal selection. If bibliographic or indexing services exist which are meant for a readership abroad, the relevance as to scientific communicability may be expected to be somewhat higher than in the case of a domestic readership. In either case the objectivity is of the same degree and higher than in (i), because those making a certain choice of journals are not individually under pressure from interest groups.