Interpreting & Documenting Research & Findings

Published by the Universities of Edinburgh, Glasgow and Strathclyde

W.L. Wilson

Acknowledgements
The material in this booklet has been developed from discussion groups and interviews with the research staff of Glasgow and Strathclyde Universities.

The advice and contributions of Dr Avril Davidson, Mr Keri Davies, Prof George Gordon, Mrs Janice Reid, Dr Alan Taylor and Mrs Sheila Thompson are acknowledged.

The advice of the project Steering Group is also acknowledged: Prof Michael Anderson, University of Edinburgh; Dr Nuala Booth, University of Aberdeen; Dr Ian Carter, University of Glasgow; Ms Jean Chandler, University of Glasgow; Dr Avril Davidson, University of Glasgow; Prof George Gordon, University of Strathclyde; Prof Caroline MacDonald, University of Paisley; Prof James McGoldrick, University of Dundee; Dr Alan Runcie, University of Strathclyde; Prof Susan Shaw, University of Strathclyde; Dr Alan Taylor, University of Edinburgh; Prof Rick Trainor, University of Glasgow.

The project was funded by the Scottish Higher Education Funding Council.

Other titles in the series
Gaining Funding for Research
Gathering and Evaluating Information from Secondary Sources
Preparing the Research Brief

© Universities of Edinburgh, Glasgow and Strathclyde 1999
Cartoons D. Brown & W. L. Wilson
ISBN 0 85261 688 0. Printed by Universities Design and Print.

Introduction

This booklet is one of a series of four aimed at researchers in the early stages of their careers. The comments within the booklet are based upon information collected at a series of discussion groups and interviews at Strathclyde and Glasgow Universities. The questions put to the discussion groups were based broadly upon the performance criteria and knowledge requirements identified in the report "Draft Occupational Standards in Research" (Gealy et al., 1997).

The booklet is in two sections. The first section, "Interpreting Research Results and Findings", considers how to confirm the reliability of results and their analysis, how to avoid bias and the over-interpretation of results, and how to identify potential areas of future research from the results.

Section two, "Documenting Research Results and Findings," examines methods of presenting research findings, the physical aspects of record keeping, and what should be recorded within research records both to ensure their value to the researcher and to ensure that they are legally and ethically correct.

The booklet is not intended to be read in one sitting, but rather to be dipped into as and when the occasion arises.

Both sections of the booklet are subdivided into subsections, each of which consists of:

  • Introduction
  • Points of advice, with examples from experienced researchers to illustrate them (colour linked). This information was collected through a series of interviews and discussion groups with lecturers, PhD students, and Contract Research Staff (CRS).
  • Bullet points which summarise the main points and examples preceding them.

The booklet is not intended to be exhaustive or definitive. The issues raised are those which most exercised the minds of the researchers providing the comments for its preparation. These comments offer interesting contrasts of opinion, either because commentators disagreed about how to approach a certain issue, or because researchers from different subjects took different methodological approaches. The nature of the examples provided in the booklet is a reflection of the interests of those taking part in the discussions and interviews, and possesses no greater significance than that.

Contents

Interpreting Research Results and Findings

How do you confirm the reliability of your results?

Introduction

Points to Consider

How do you avoid getting into a rut with your analytical methods?

Introduction

Points to Consider

How would you define interpretative methods?

Introduction

Definitions of "Interpretative"

How do you recognise and avoid bias in your interpretation of your results?

Introduction

Points to Consider

How do you evaluate your results in the light of the objectives of your original proposal?

Introduction

Points to Consider

When do you think uncertainty may arise over results and their interpretation, and how do you ensure that your conclusions are fully justified by the results?

Introduction

Points to Consider

How do you identify potential areas of further research from the results?

Introduction

Points to Consider

Documenting Research Results and Findings

What techniques do you use to present your findings and possible areas of future research to other interested bodies?

Introduction

Points to Consider

How do you record your research and findings? Are there methods of recording that you would avoid?

Introduction

Points to Consider

What details do you put in your research records? What details should never be missed out of records, and why?

Introduction

Points to Consider

How do you confirm that your records meet all relevant legal and ethical requirements?

Introduction

Points to Consider

Interpreting Research Results and Findings

How do you confirm the reliability of your results?

Introduction
The exact nature of what is reliable will vary from field to field. Mathematical proofs, which are unusual in that there is an absolute right answer, are usually developed over years. In other fields, e.g. social planning and architecture, there may be no absolute right or wrong, and confirmation, or otherwise, may take 30 years of urban development. Communication, experimental repetition, alternative approaches, and good background knowledge will all be applicable in some fields, but are unlikely to be applicable in all.

Points to Consider
The most important initial stage is to be aware that your results may not be reliable. Blind faith does not make for good investigative research. Results may be misleading for a wide range of reasons, e.g. an atypical sample, equipment error, or the simple vagaries of animal behaviour. The latter point is nicely summed up by the Harvard Law of Animal Behaviour: given precisely controlled conditions, the animal will do as it damn well pleases.

Example: During a study of prostitution habits the researcher found that it was difficult to obtain reliable data on condom use. She could ask until she was blue in the face, and in as many different ways as she could think of: one-on-one interviews, focus groups, whatever. All interviewees reported 100% condom use, except when a condom happened to burst. Yet it was obvious to the researcher that there were women who were working without condoms.

Peer review is a basic check of reliability. Asking colleagues who have a sound knowledge of the field, but have not been as close to the work as you, is essential. Better to have a colleague pick up a discrepancy at an early stage than a paper or grant referee at a later one.

It is important to ensure that you have an adequate number of repetitions within your experimental data (while guarding against pitfalls such as pseudo-replication). However, repetitions can add new variables to the process. There is inevitably a balance between the demands of the objectives and the demands of precision.

Example: The value of repetition was emphasised by one researcher who remarked that he would not report on any data which had not been confirmed within his own laboratory. For experimentation requiring statistical analysis, the precise number of replications depends upon the expected level of variability in the measurement. Where it is not possible to run a number of replicates simultaneously, the researcher reruns the complete experiment to ensure statistical accuracy. His recent study of the rearing of halibut highlights this latter point. The experiment required four different tanks, each providing a different environment. Normally the researcher would aim to do these in triplicate, giving a total of 12 tanks. However, because the experiment was within a production-style system, the scale of the project made simultaneous trials impossible, so the entire experiment had to be repeated. This unavoidable variability between trials requires more repetitions than usual. On the other hand, the big advantage of using a production-style system is the avoidance of the extra variables inherent in scaling up from the very small.
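The link between expected variability and the number of replicates can be made concrete with a standard power calculation. The sketch below (Python, using the statsmodels library) is a minimal illustration only: the smallest difference worth detecting and the expected standard deviation are invented for the example, not figures from the halibut study.

    # A minimal sketch: how many replicates per treatment are needed to
    # detect a given difference, for an assumed level of variability?
    # All numbers here are hypothetical illustrations.
    from statsmodels.stats.power import TTestIndPower

    smallest_difference = 0.15   # smallest difference worth detecting (assumed)
    expected_sd = 0.20           # expected standard deviation between trials (assumed)
    effect_size = smallest_difference / expected_sd  # Cohen's d

    n = TTestIndPower().solve_power(effect_size=effect_size,
                                    alpha=0.05, power=0.8,
                                    alternative='two-sided')
    print(f"replicates needed per treatment: {n:.0f}")

Because the required number of replicates grows with the square of the variability, doubling the expected standard deviation roughly quadruples the replicate count, which is why unavoidable between-trial variability pushes the number of repetitions above the norm.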

Using several techniques on the same sample provides an alternative form of experimental repetition. Thus the reliability of tests for genetic mutations in tumours is regularly checked by using three different techniques on the same tumour sample.

Refer to previously published work and review your results within the context of previous publications to obtain a feel for general trends. There are some trends which may be expected to emerge. You must ensure adequate quality controls to avoid bias, i.e. inadvertently producing the result the trend leads you to expect. Bear in mind when checking reliability in the light of previous trends that many breakthroughs in science were at first regarded as completely implausible. Plausibility is determined by present knowledge.

It is important to be thoroughly familiar with the background and content of the project. This is especially important when moving into new fields, where some less than obvious fact may pass unnoticed.

Example: Whilst out collecting crabs a postgraduate researcher observed that some crabs reacted to other individuals of the same species by rearing up and attacking. Lower shore crabs were more likely to be aggressive than upper shore crabs. Several years later the researcher discovered that there were actually two species of crab on those shores, but that the two species were virtually identical. Fortunately the researcher had not published the study, and learnt a valuable lesson cheaply.

One engineer suggests the following summary for his own speciality:

a) Derive from first principles to establish 'plausibility'. This helps to highlight erroneous results.

b) Calibrate meticulously.

c) Carry out an error analysis. (The system used to measure a parameter will consist of different parts, each with an associated uncertainty. Where the uncertainty of each part can be obtained from calibration, the uncertainty of the whole should be quantifiable.)
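As an illustration of point (c), the sketch below combines the calibrated uncertainties of the parts of a measurement chain into an uncertainty for the whole. It assumes the component uncertainties are independent, so that they combine in quadrature (root sum of squares); the component names and values are hypothetical.

    # A minimal sketch of error analysis for a measurement chain, assuming
    # independent component uncertainties that combine in quadrature.
    # The components and values are invented for illustration.
    import math

    components = [
        ("sensor", 0.020),            # standard uncertainty from calibration
        ("amplifier", 0.010),
        ("ADC quantisation", 0.005),
    ]

    combined = math.sqrt(sum(u ** 2 for _, u in components))
    print(f"combined standard uncertainty: {combined:.4f}")  # ~0.0229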

When working with human subjects it is essential to ensure that the sample is as representative as possible, in order to capture the variety of different responses. One method of checking the accuracy of responses is to rephrase a question and compare the new response with the answer to the earlier question. It is important to ensure that the analysis of the data is as inclusive of the varied responses as possible. One technique by which this can be done is the inductive procedure of deviant case analysis, illustrated in the example and sketch below.

Example: Deviant case analysis proceeds through examination of the universe of responses provided on a certain topic. If exploring the question of condom use, a basic hypothesis may be that prostitutes encourage their clients to use condoms to ensure their own protection against HIV. However, there might be women who do not articulate their use in these terms at all, but refer to other reasons (e.g. condoms form a means of distinguishing between the sex they have with their private partners and the sex they provide to clients). The overall explanation of condom use as a barrier would still fit, but the argument would have to be modified to incorporate the broader spectrum of responses. If, for example, it were observed that most women reported other reasons for condom use which do not fit the barrier explanation, then the original argument must either be modified or discounted entirely.
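The bookkeeping behind the procedure can be sketched in a few lines of code. This is a minimal illustration only: the coded responses and the coding rule are invented, and real deviant case analysis is an interpretative exercise rather than a mechanical tally.

    # A minimal sketch of the bookkeeping behind deviant case analysis:
    # compare every response against the working explanation and collect
    # the cases that do not fit for explicit examination.
    # The coded responses and coding rule below are hypothetical.
    coded_responses = [
        "protection against HIV",
        "protection against HIV",
        "keeps work separate from private life",
    ]

    def fits_barrier_explanation(reason: str) -> bool:
        # hypothetical coding rule: does the stated reason invoke protection?
        return "protection" in reason

    deviant = [r for r in coded_responses if not fits_barrier_explanation(r)]

    # A few deviant cases mean the argument should be broadened to cover
    # them; a majority of deviant cases means the original explanation
    # must be modified or discounted entirely.
    print(f"{len(deviant)} deviant case(s) out of {len(coded_responses)}")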

  • Bear in mind that your results may not be reliable.
  • Check all results thoroughly.
  • Use alternative techniques to check results.
  • Examine your results in the light of other work.
  • Know your background information well.

How do you avoid getting into a rut with your analytical methods?

Introduction
The best way to avoid becoming stuck in a rut is to remind yourself regularly of the risk of staying there. Most researchers develop favoured techniques, and it is always easier to fall back on well-used, comfortable techniques than to seek out new approaches which require the additional effort of getting up to speed. Communication and keeping up with the literature (not just in your own field) appear to be the best ways of remaining fresh.

Points to Consider
Keep in touch with the research world around you. It takes some time for new methods and techniques to appear in the literature. As with so many aspects of research, networking is vital. Perhaps the most common spur to the development of a new technique is that the present one is very labour-intensive. In these circumstances it is worth asking around to see what other investigators are doing.

Look to other fields for inspiration.

Example: One research team studying the prostitute population of the red light area of a large city decided to adopt the biological technique of 'capture/mark/recapture'. The study required identifiers, so the team 'tagged' each individually 'captured' subject with a unique ID, then used these identifiers to model changes in the prostitute population over a period of time. This is a particularly elegant example, as the technique originated in a nineteenth-century study in Paris in which the number of priests was used to estimate the total population of the city. The technique was later adopted by ecologists to model animal population dynamics. In this instance it has moved from social science research to biological research and back to social science.
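The arithmetic behind capture/mark/recapture is simple. The sketch below shows the classic Lincoln-Petersen estimator, the usual starting point for such studies; the counts are invented for illustration and are not data from the study described above.

    # A minimal sketch of the Lincoln-Petersen estimator used in
    # capture/mark/recapture studies. All counts are hypothetical.
    def lincoln_petersen(marked: int, caught: int, recaptured: int) -> float:
        """Estimate total population size.

        marked:     individuals tagged in the first sweep
        caught:     individuals encountered in the second sweep
        recaptured: individuals in the second sweep already tagged
        """
        if recaptured == 0:
            raise ValueError("no recaptures: population cannot be estimated")
        return marked * caught / recaptured

    # e.g. 60 subjects tagged initially; of 50 encountered later, 15 were
    # already tagged, suggesting a population of about 200.
    print(lincoln_petersen(60, 50, 15))  # -> 200.0

The estimator assumes a closed population in which tagged and untagged individuals are equally likely to be 'captured'; modelling turnover over time, as in the study above, calls for more elaborate open-population methods.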

Discuss what you are about to do with your colleagues, who may well contribute some good ideas. Discussion can help you avoid becoming enmeshed in minutiae and missing the bigger picture. It may be helpful to brainstorm with a group of your peers on a regular basis.

Example: One researcher remarked that she had been experimenting for months with a new technique to identify differentially expressed genes, all the while going nowhere. Then after several months she discovered that throughout that period a colleague in the group had been arguing the case for an alternative technique. Greater efforts at communication would have saved her several wasted months.

For some fields, such as English Literature, opening a line of communication with the author you are studying may provide a useful insight into their work. For other fields, contacting authors of publications will allow you to discuss new techniques being developed, or perhaps highlight publications which you may have inadvertently missed.

Look for weaknesses in the methodology: for instance, is the present technique less sensitive than required, and can it be improved? Regular reappraisal of the techniques used, and consideration of their less satisfactory points, should help avoid complacency.

Example: A research group was interested in measuring virus-specific immune responses. When the researcher joined the group there were a number of techniques available to measure antibody responses to the virus. However, there were no reliable techniques available to measure the T-cell response. Unlike antibodies, T-cells recognise virus-infected cells, or tumour cells, and kill them. The researcher's first task was to develop such a technique. He succeeded in developing one which was then adopted globally. There remained concern, however, that the technique was underestimating the true magnitude of the host T-cell response. The team is now in the process of designing novel assays to measure virus-specific T-cells. Using these assays it will not only be possible to verify the data that they (and other laboratories) have obtained, but the improved sensitivity will allow detection of T-cells in circumstances which would otherwise have been overlooked.

There can be problems in securing funding for completely new approaches. However attractive a new methodology may appear, it is important to ensure that including it in your grant application will not discourage funding bodies. This can be a "catch 22": you want to be adventurous, but cannot move forward because funding bodies or collaborators will be cautious of the 'excessive' novelty of your new idea. On the other hand, in the highly competitive world of research funding, you may need that bit of novelty as an added attraction. If in doubt it is well worth contacting your prospective funding bodies in advance. Some funding bodies run schemes to promote "blue skies" research, such as the Research Councils' Realising Our Potential Awards (ROPAs). Though original and novel are not one and the same, a pilot run will help move "novel" towards "original" and convince the more sceptical reviewer.

  • Keep up with the literature.
  • Networking is essential.
  • Look to other fields for inspiration.
  • Cast a critical eye over your methodologies, identify the weak points, seek alternatives which ameliorate them.

How would you define interpretative methods?

Introduction
It became obvious during the discussions and interviews used in the creation of this booklet that the definition of 'interpretative' was not consistent. The question "how would you define interpretative methods?" was put to participants to gain some idea of the definitions used in different fields. In order to avoid interpreting the interpretations of "interpretative", and inadvertently shifting the definitions towards a biologist's view of the world, this section has been kept in the form of the original quotations.