Dear Alter Ego,

Welcome to the wonderful world of evaluating secondary literature! Since we share a similar educational background, we will also share many discoveries and difficulties in this arena; I will detail a few of mine in the hope that they will be helpful.

To begin with, my past experiences have not been particularly relevant here. Much of the research I've done has been in areas I'm familiar with, on topics that are well circumscribed and easily searched, by authors whose names or institutions I can easily identify and rank in importance.

This process of secondary research evaluation may be quick, or it may be very time-consuming. It’s certainly much easier to do if you're familiar with the topic, the experts in the field, and the terminology. Moreover, evaluating electronic sources was new to me: the sources themselves are varied and multifold beyond belief, and the process of using the Internet to evaluate them is itself novel to me. It's easy to get bogged down, or lost altogether, but working within a structure can fend off chaos.

The processes of evaluating articles and websites have many features in common. The key categories to examine, some of which may overlap to an extent, are

how reputable the "publishing" source of the piece is,

how credible the authors and the works they rely upon are, and

whether the content itself bears scrutiny.

I will deal primarily with websites, not print articles, in what follows, but will sometimes blur the line between the two.

Evaluating a website was, as I said, a totally new experience. I would have assumed that a web address ending in .gov or .edu would furnish reliable information, but beyond that assumption, I would have had little to go on. Now I know that even an ".edu" site may have degrees of reliability (no pun intended): if a "~" appears in the URL, it may indicate a faculty site, which calls for further evaluation. I would have been only as comfortable with an address ending in ".org" as I was with the organization it represented: no prior knowledge, no gauge of reliability.
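If I ever wanted to automate that crude first pass, a few lines of code could flag those surface features of an address. What follows is only an illustrative sketch in Python, using a made-up example URL; it encodes nothing more than the hints above (the domain ending and the "~" that often marks a personal faculty page), and a hint is never a verdict.

```python
from urllib.parse import urlparse

def first_pass_flags(url):
    """Rough first-pass heuristics for a web address, mirroring the hints above."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    flags = []

    # Domain endings are hints about the publisher, never guarantees.
    if host.endswith(".gov") or host.endswith(".edu"):
        flags.append("government/educational domain: often, but not always, reliable")
    elif host.endswith(".org"):
        flags.append(".org domain: only as trustworthy as the organization behind it")
    else:
        flags.append("commercial or unknown domain: evaluate with extra care")

    # A "~" in the path frequently marks a personal (e.g. faculty) page,
    # which calls for further evaluation of the individual author.
    if "~" in parsed.path:
        flags.append("'~' in path: likely a personal page; check the author")

    return flags

# Hypothetical address, used only to show the output.
for flag in first_pass_flags("https://www.example.edu/~someone/notes.html"):
    print(flag)
```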

When dealing with the Web, I am easily distracted: there’s so much information competing for my attention. It’s essential to remain focused, disciplined – and yet sometimes it's the very wandering that proves most helpful, in a serendipitously recursive sort of way.

Take as an example the First Monday website. On trying to reach it, I'm told it's "moving to OJS with the October 2007 issue." What is OJS? I click on the OJS link near the top of the FM home page and find that it stands for "Open Journal Systems", which is a "journal management and publishing system developed by the [federally funded] Public Knowledge Project." Should I find out what the Public Knowledge Project is, or leave well enough alone? Deciding to let it go for the moment, I look again at OJS and see that it has as one of its goals "making open access publishing a viable option for more journals, as open access can increase a journal's readership as well as its contribution to the public good on a global scale." That sounds praiseworthy, but what does "open access" imply about the refereeing process? Should I follow that up? Luckily, I find answers to many of my questions about editorial policy by clicking the "About" link. Continuing on to the content, I realize that the titles of the articles convey very little to me, outside of seemingly obvious words like "library," "scholarly," and "publishing." Choosing one article to peruse, I find that the references lead in many other directions. Many of them are electronic and/or from conferences or handbooks, and I wonder how the author found them. Then I look again and suddenly realize that the whole issue is devoted to the OJS and PKP!

In evaluating any website, I must look at the qualifications of the editors (if any) and authors. When evaluating, I am frequently stunned by the sense of delving into a whole world of personalities, institutions, affiliations, advocacies: it's a labyrinth, a maze, an endless vista... where am I? Who am I? Where did I start? Where am I going? OK, back to reality: assessing the publishing source of the website should be relatively easy, if the site is a reputable one. I can see what institutions the authors/editors are affiliated with, and it is possible, if necessary, to find out more about them via the Internet: whether a college or university is two-year or four-year, whether graduate education is offered, how much research (if any) is done, and so on. I had not known about the Carnegie classifications previously, but I quickly realized how useful they can be. Quality control of the website should be fairly easy to judge, too, both through editorial policy information and the appearance of the site itself (spelling, grammar, functionality of links).

It is also important to know whether the authors have other published works to their credit; whether they are full, associate, or assistant faculty; what their educational background is; and whether and how often their works are cited elsewhere. Many of these questions can be answered by consulting Google Scholar or browsing a metasite, searching a university library catalogue or professional bibliographies, or using several such tools in combination. The currency of the website will be apparent from its date of last revision. Checking the dates of references can be a clue to their currency, but other factors must be considered, as some "classic" works remain relevant to the present topic.
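When a page does not state its last revision date, the web server itself can sometimes report one. Here is a minimal sketch, again in Python and again with a hypothetical address, of asking a server for its Last-Modified header; many servers omit or misreport it, so a date printed on the page itself is usually the better clue.

```python
from urllib.request import Request, urlopen

def last_modified(url):
    """Ask the server when a page was last changed (illustrative only)."""
    request = Request(url, method="HEAD")
    with urlopen(request, timeout=10) as response:
        return response.headers.get("Last-Modified", "not reported")

# Hypothetical address, used only as an example.
print(last_modified("https://www.example.edu/~someone/notes.html"))
```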


Verification of the content of a document is a slower operation. One must cross-check the validity of conclusions by finding other credible treatments of the same topic and seeing whether they are consistent. Evaluating research methodology, while a key part of secondary research assessment, can be difficult if a person is not well acquainted with the field or familiar with experimental design. Noting whether the author is honest about the limitations of the study and its conclusions can be another criterion by which to judge it. After all the above "legwork" is done, one can decide whether an item belongs in one's nascent bibliography.

------

How have I changed as a result of this learning? I've discovered that a researcher must become very patient, observant, and meticulous when evaluating secondary research, just as the researcher needs the same qualities in order to find it in the first place. Both processes truly do resemble detective work, with clues to be found and leads to be followed; the process is often tedious and tiring, and doesn't offer instant gratification. My native tendency is to scan a document, article, or webpage rapidly, trying to make a "snap decision" about its value. That procedure doesn't work well at all in this context. I must remain patient and focused on the content, an approach which takes a lot of discipline. I must learn to live with the insecurity of not knowing whether or not I'm wasting my time following the clues I find. And the more I want to situate the article in a given context – the more I want to verify its content – the more work I have to do: investigating authors, references, sources, and methodology. Mirrors within mirrors – is that an image for a recursive activity?

I'm beginning to feel more positive about this experience: I've learned a lot from it and may gain additional skill with time. I have been able to build on what I learned about searching for secondary literature, and I can now approach an unfamiliar source with some confidence that I will be able to evaluate and use it appropriately.