EUROSTAT
Directorate F: Social Statistics and Information Society /
Doc. ESTAT/CR/TF2007/4
Notes on the Draft EU Victimisation Module Questionnaire
Document for Point 6 of the agenda
Task Force on Victimisation Surveys
Luxembourg, 28-29 June 2007
Bech Building, Ampere room
Markku Heiskanen
Kauko Aromaa
Seppo Laaksonen
Minna Viuhko
14.6.2007
EUROPEAN CRIME VICTIM SURVEY (ECVS) PLAN
Background information
In planning a victimisation survey questionnaire, many decisions must be made before the questions can be drafted. For example, the face-to-face mode provides more possibilities to visualise the questions than the telephone interview mode. In this paper we document certain methodological choices discussed at the Stockholm meeting on 7 June 2007. The topics came up while planning the survey, becoming acquainted with the literature, reading victimisation survey questionnaires from different countries, and in discussions at the country meetings organised in the course of the project. Although we have to allow different choices on some methodological issues, the questionnaire should be basically identical in each country.
1. How often?
General national victimisation surveys are conducted in the EU Member States at different time intervals. Some countries carry out surveys annually, while others do so on an irregular basis. Our proposal is that the ECVS should be conducted every third year. Over a short time period, changes in victimisation and in the safety situation are often so small that they do not exceed the confidence intervals of the statistical estimates of victimisation prevalences or incidences. The costs of annual victimisation surveys are also high. On the other hand, reasonably up-to-date information on the safety situation in the Member States should be available: decision makers may find the results less useful if they are, say, five years old. A three-year interval was also supported in the discussions with the experts in some of the countries we visited. (The topic was not discussed in detail in all meetings.)
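To illustrate the confidence-interval point, a minimal sketch under simple random sampling; the sample size and victim count below are hypothetical figures chosen for illustration, not survey results:

```python
import math

def prevalence_ci(victims, n, z=1.96):
    """95% confidence interval for a prevalence under simple random sampling."""
    p = victims / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# Hypothetical figures: 400 victims among 8,000 respondents.
p, lo, hi = prevalence_ci(400, 8000)
# The half-width is roughly 0.5 percentage points, so a smaller
# year-on-year change would stay inside the interval.
print(f"prevalence {p:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

With these figures the interval runs from about 4.5% to 5.5%, which is why annual changes of a few tenths of a percentage point are not statistically detectable.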
2. Which survey mode?
CATI and CAPI are used in most national victimisation surveys in Europe. The BCS (British Crime Survey) is an example of the CAPI mode (combined with CASI), while the Finnish national victimisation survey has used the CATI mode. Both surveys have a history of more than 25 years.
Telephone surveys have many benefits, especially if the interview is short, which may be the case when the respondent reports no victimisation incidents and the survey covers only a few other safety (and other) issues. The telephone mode may be preferable in that it protects the respondent’s anonymity better than the face-to-face mode with personal contact, especially in small towns and in the countryside, because the telephone interview can be conducted from a different area. On the other hand, people in different countries may be differently acquainted with telephone communication. The telephone mode as such cannot be blamed for the failure of some recent telephone surveys, but the importance of choosing the sampling frame should be emphasised. For instance, in many countries the list of landline telephone numbers cannot serve as the frame from which the sample is drawn: mobile phones are replacing landline phones as the only phone in the household, mobile numbers are not necessarily registered in the same way as landline numbers, and young people in particular are difficult to contact by landline. Another problem is that respondents contacted on their mobile phones may be in a situation in which a confidential discussion is not possible (in landline contacts the interviewer often knows where the respondent is when he/she answers). The telephone survey also offers more limited communication possibilities than the face-to-face interview: no showcards can be used, the interviewer cannot use the gestures of the interviewee to interpret the responses, mutual communication is limited, and so on. Questions in telephone surveys should therefore be short and easy to answer.
We are in favour of CAPI surveys, because data quality is often better in face-to-face surveys and the interview length is not limited to 20-30 minutes.
3. Sampling
Sampling is a very important factor in the quality of a survey. The basic requirement is that sampling should follow probability principles at each stage, but the implementation of these principles may vary from one country to the next, since countries do not have identical tools and experience for drawing a sample correctly.
Probability principles in sampling include:
- Each member of the target population should have a positive inclusion probability of being interviewed.
- The inclusion probabilities should be calculated for each stage of sampling.
- Stratification may be used, and is even recommended, in order to reach all sub-groups of the target population well. Consequently, inclusion probabilities can vary between strata.
- The stage units (e.g. small areas) should have an inclusion probability lower than 1.
- How a potential respondent (gross sample unit) is selected is very important. This procedure should be genuinely random; hence substitution cannot be used, nor is the usual random-route technique allowed.
- Naturally, some potential respondents cannot be contacted, and some of those contacted will not respond. In other words, some missingness due to non-response and ineligibility will occur. How this missingness is handled is crucial. First, it is essential to anticipate the number of such units and take this into account when determining the gross sample size; secondly, information on missingness should be collected and added to the sampling file.
- As a conclusion from the six aspects above, the first task is to decide the target for the effective sample size n(eff) (this is usually decided by the whole international team for the survey). When n(eff) is known, the required gross sample size can be predicted taking into account the following issues: variation in inclusion probabilities, the clustering effect if clusters are used as primary sampling units, anticipated missingness, and possible fieldwork risks.
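The step from n(eff) to the gross sample size described in the last point can be sketched as follows. The design-effect, response-rate and eligibility figures below are assumptions for illustration only, not recommendations:

```python
import math

def gross_sample_size(n_eff, deff, response_rate, eligibility_rate=1.0):
    """Predict the gross sample size needed to reach an effective size n_eff.

    deff: design effect from clustering and unequal inclusion probabilities.
    response_rate, eligibility_rate: anticipated shares of the gross sample.
    """
    n_net = n_eff * deff  # net interviews needed after inflating by deff
    return math.ceil(n_net / (response_rate * eligibility_rate))

# Hypothetical planning figures: n(eff) = 5,000, deff = 1.3,
# 65% response rate, 95% of sampled units eligible.
print(gross_sample_size(5000, 1.3, 0.65, 0.95))
```

Under these assumed figures, a gross sample of over 10,000 persons would be needed to obtain an effective sample of 5,000, which shows why anticipating missingness before fieldwork is essential.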
After the fieldwork, the sampling file can be constructed. The file covers all units of a gross sample, all sampling design variables, and possible other variables, referred to as auxiliary variables.
In order to achieve the above targets for sampling, many aspects need to be taken into account. Two crucial aspects are the following:
(i) What is the realistic target population for the survey, and which sampling frames are available? One principle here is definite: the target population should be exactly the same for each country, while the sampling frames can vary. So far there is some agreement that in the general crime victim survey the target population could comprise all residents of a country aged 15 years and over. It is nevertheless worth discussing whether some groups could be excluded, either because they are difficult to interview or because no good sampling frame is available for them, so that the country results remain comparable. This may lead to excluding minorities who do not speak the main languages of the country, as well as small areas outside the mainland. All details of these exclusions should be discussed together and agreed upon. Persons living in institutions are usually excluded from the target population of a general survey, but it is possible to conduct a special survey of these people; this is even easy, since institutions are usually well listed in each country. The same applies to residents under 15 years old, since in Europe they are (practically) all in schools.
(ii) Which data collection mode or modes are used? The easiest mode in cross-national surveys is face-to-face interviewing, since all countries have much experience of it. Some countries may work well with personal telephone interviewing (CATI); however, no country can cover the target population completely using CATI alone. It is also possible to exploit the web, while still using personal contacts; this last option can currently be used only for some sub-groups. Thus the mode of the survey may be mixed. Nevertheless, the probability sampling principles should be followed.
4. Reference period
Reference periods used in national victimisation surveys range from one year to a lifetime, and different reference periods are sometimes used within the same survey. Lifetime prevalence is applied especially in violence-against-women surveys, which describe the continuum and long-term effects of violence in close relationships. One reason to increase the length of the reference period has been the need to capture more victimisation incidents in the survey data set, because victimisation by most crimes is infrequent even with large sample sizes.
Our suggestion is that the main victimisation questions (section C of the questionnaire) would first be asked for the period of the last three years. If the interval between successive surveys is three years, the whole period between the surveys is then under scrutiny. Within the three-year period, estimates for one year should also be calculated.
5. Last 12 months vs. calendar year?
Two methods have been used to estimate the one-year prevalence of victimisation. The “during the last 12 months” method starts from the day of the interview and refers to the 12 months back in time. Its benefit is that the recall period is equally long and equally recent for every respondent, so it is not as dependent on the date of the interview as the calendar year approach.
The calendar-year method gives an estimate comparable with police and other relevant statistics and may facilitate the comparison of results between countries. Comparability was considered an important issue, especially in dark-figure assessments. The calendar year is also thought to be easier to remember, and the approach is simpler for the interviewer, because the reference period is always the same; in the last-12-months approach, the precise reference period changes daily. The disadvantage of the calendar-year approach is that interviewing should be done as soon as possible after the turn of the year (during the first quarter). This may require the interviewer organisations responsible for data collection to concentrate their efforts during this period on this particular survey, because samples in victimisation surveys are large.
The number of incidents can in principle be estimated similarly by both methods. Many experts recommended the calendar-year approach, so this option was selected for the questionnaire. One technical detail of the calendar-year approach is that the “most recent incident” whose details are asked about in the victim form may fall outside the calendar-year estimates, but this rarely happens if the interview takes place near the beginning of the year. One advantage of the proposed procedure is that victimisation prevalence can be calculated for the last three-year period, for one calendar year and for the last 12 months; incidence counts are available for one calendar year. The most important reference period in the proposal is the last calendar year.
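All three prevalence measures can be derived from the incident dates and the interview date. A minimal sketch, in which the three-year window is read as the three calendar years preceding the interview (one plausible interpretation; the function and its names are hypothetical, not part of the questionnaire specification):

```python
from datetime import date, timedelta

def prevalence_flags(incidents, interview_date):
    """Flag victimisation for the three reference periods in the proposal.

    incidents: dates of the respondent's incidents (hypothetical input format).
    """
    last_cal_year = interview_date.year - 1            # last full calendar year
    start_12m = interview_date - timedelta(days=365)   # start of last 12 months
    return {
        "three_years": any(last_cal_year - 2 <= d.year <= last_cal_year
                           for d in incidents),
        "calendar_year": any(d.year == last_cal_year for d in incidents),
        "last_12_months": any(start_12m < d <= interview_date
                              for d in incidents),
    }

# Interview in February 2007; incidents in November 2006 and May 2004.
flags = prevalence_flags([date(2006, 11, 3), date(2004, 5, 20)],
                         date(2007, 2, 15))
print(flags)
```

Note that an incident early in the last calendar year counts for the calendar-year estimate but not for the last-12-months estimate when the interview is held late in the first quarter, which is one reason the two one-year prevalences diverge as fieldwork drifts away from the turn of the year.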
To sum up: in principle, the last-calendar-year approach and the last-12-months approach should produce similar prevalences, at least if the interviews are completed soon after the end of the last calendar year. For the practical reasons given above, we prefer the calendar-year approach.
6. The timing of the fieldwork?
The best time for the fieldwork is the first quarter of the year, if the calendar-year method is chosen. However, some safety issues, such as fear and sexual harassment on the streets, may depend on the season. Interviewing during common holiday seasons should be avoided. The first quarter of the year is suitable for interviewing in most countries because there are no long holiday periods, apart from winter vacations.
7. How many victim forms?
In the draft questionnaire, victimisation by 13 traditional crimes or incidents is asked about, in a certain order (given in question D0). If the respondent has experienced any of these victimisations, a victim form on the details of the crime/incident will be filled in. We recommend that the detailed information be asked about all victimisation incidents, at least when piloting the survey. Experience shows that it is uncommon for one respondent to be a victim of very many crime categories during a short time period. Should this nevertheless be the case, priority rules for some victim forms can be set.
8. Instructions for reading the questionnaire
First
Most crime victim survey questionnaires contain many questions; our draft, too, runs to 80 pages. This does not necessarily mean that the interview takes a very long time. Many respondents have only a few, if any, victimisation experiences, and therefore the average interviewing time will remain rather short. Possible variation in the interviewing time must, however, be taken into account by the interviewer organisation.
Reading the questions
All questions in the questionnaire end in a question mark, and the interviewer reads the question until he/she reaches the question mark. The question mark may be at the end of the actual question part or at the end of each response alternative. If it is at the end of the actual question part, the interviewer does not read the response alternatives aloud. If each response alternative ends with a question mark, the interviewer reads the alternatives one by one and marks each chosen alternative separately on the questionnaire. If the question mark is at the end of the last response alternative, the interviewer reads aloud all the response alternatives and then notes the reply of the interviewee.
Don’t know, refusal
Every question must have the response alternatives “Don’t know” and “Refuses to answer”. These are not written on the questionnaire, because most computer-assisted interviewing systems have internal codes for them. “Don’t know” and “Refusal” are real response alternatives, especially in attitude questions.
Showcards
If the response alternatives are to be seen by the respondent on a showcard, the text [SHOWCARD] is printed after the actual question. This has not yet been done in this draft version.
Multiple responses
If more than one response alternative is allowed, the text [MULTIPLE RESPONSE POSSIBLE] is printed after the actual question.
Other interviewer instructions
The possibility for probing is mentioned in some questions with multiple responses.
Overall, the draft version of the questionnaire includes few interviewer instructions and little “social talk”, which may be common in some countries. More sensitivity might be introduced into the question wording in the future.
9. Question sections
Introduction
Questions on a similar topic or complex of events are grouped under a capital letter, A-G and V. Every list of response categories shall include the alternatives:
- Refusal / does not want to say (second-to-last response category)
- Don’t know (last response category)
In general, “tested questions”, i.e. questions used in previous victimisation surveys, have been used whenever available and applicable. However, we should take into account that questions planned for one context can become problematic in another: questions planned for national purposes are sometimes too detailed or too country-specific for international comparison, and simplifications have therefore been made on many topics. On the other hand, international experience now offers a great variety of ways of asking about similar matters, and the choice of operationalisation is sometimes a matter of opinion.
Sections[1]
A. Respondent and household characteristics
The background variables are mostly based on the Final report from the task force on core social variables; these variables are often used in the social surveys of statistical institutes. Some variables, such as the enlarged marital status and occupation, are complicated. In the detailed coding of occupation (if it is included in the final questionnaire), all occupational labels are stored in a file, and the occupation code is selected by the computer program when the first letters of the occupation are typed in. The question set for income classification is taken from the ICVS, with the median class added. When finalising the questionnaire, the income quartiles in each country must be known. In question A13, education will be asked differently in different countries to produce the given classification. Many respondents do not know in which category their education should be classified.