MIDLANDS STATE UNIVERSITY

DEPARTMENT OF PSYCHOLOGY

BSc. HONOURS PSYCHOLOGY

LECTURE NOTES

BY

J. M. KASAYIRA

PSY 410: PSYCHOMETRICS

What is Psychometrics?

Psychometrics is the science of psychological assessment. This involves the process of selecting and evaluating human beings (Rust & Golombok, 1989, p. 20). The term is further defined by the American Psychological Association (APA) (1985, p. 93) as pertaining to the measurement of psychological characteristics such as abilities, aptitudes, achievements, personality traits, skills, and knowledge. Thus psychometrics is concerned with the design and analysis of research and with the measurement of human characteristics. One important component of psychometrics is psychological testing, which includes intelligence testing, personality testing and vocational testing. Below we look at psychological testing.

Psychological Testing

Psychological testing is a process of assessing a sample of behaviour through the use of objective and standardized instruments. According to Anastasi (1990), psychological testing is rarely concerned with measuring, for its own sake, the behaviour sample directly covered by the test. Thus, test items need not closely resemble the behaviour the test is to predict; it is only necessary that an empirical correspondence be demonstrated between the psychological test and the behaviour of interest. In the next section we describe some early efforts to develop psychological tests, in particular the intelligence test.
The History of Psychometrics and Assessment

Two major events that influenced the birth of psychometrics and assessment in the 19th century developed along relatively independent lines: psychophysics and mental testing. Other influences, some of which were incorporated in these two, are: experiments on reaction time, which began with the discovery of the astronomer's personal equation; the theory of evolution; the development of statistical methods; and the two world wars. Thus, Miller (1964) noted that quantitative psychology developed in a number of different places where different sponsors pioneered in new directions.

Psychophysics (the forerunner of the first genuinely experimental psychology) developed when human judgements were studied using concepts and tools that had been applied successfully in Physics, Chemistry and Astronomy. At the same time, the urgent need for methods of measuring emotional stability and intelligence in Medicine, Psychiatry, and Social Welfare Research (with the inspiration of Evolutionary Biology) led to the development of the mental-test tradition, which centred upon individual differences. The merging of the two influences, with the help of statistical methods, produced the modern methodology of testing.

Curiously, both Experimental Psychology and Differential Psychology can be traced to an early experiment on psychological functioning made by a German astronomer, Friedrich Wilhelm Bessel (1784 – 1846). Bessel had read about the dismissal of Kinnebrook, an assistant to Maskelyne, the royal astronomer at Greenwich Observatory. Kinnebrook had been fired for being persistently slower than his superior, by between half a second and one second, in noting the transits of stars used to check the accuracy of clocks.

Contributions of German psychologists

Around 1820, Bessel began experimenting upon himself and other astronomers, and found considerable variation among individuals in speed of response. This finding led to the recognition that people differ in their judgements, and that such individual differences can be accounted for scientifically. It also led to the formulation of the 'personal equation', whereby an observer's characteristic tendency to over- or underestimate observations by a certain amount is corrected.

Bessel's work on 'reaction time' also led German philosophers and the early psychologists to begin speculating about the threshold of awareness, the limen. For example, the philosopher Herbart (1776 – 1841) suggested the concept of an absolute threshold, or lower limit of sensation. Ernst Weber (1795 – 1878) carried out research on the two-point threshold and demonstrated that it varies both in different parts of the body of the same person and from one person to another for a given region of the body. He then developed the concept of the just noticeable difference, from which he derived what may be said to be the first truly quantitative law in psychology (Schultz, 1981). It states that the smallest detectable difference between two weights can be expressed as a ratio of that difference to the magnitude of the weights, and that this ratio is independent of the absolute values of the weights. This was termed Weber's law by Gustav Fechner (1801 – 1887).

Fechner established the science of the relation of mental processes to physical events (psychophysics). In 1850, Fechner postulated that the connection between mind and body could be found in a statement of the quantitative relation between mental sensation and material stimulus. He reformulated Weber's law as follows: the sensation difference remains constant when the relative stimulus difference remains constant. Mathematically he expressed it as: sensation is proportional to the logarithm of its stimulus (S = k log R).
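The logarithmic relation can be illustrated numerically. The sketch below (the constant k and the threshold value are arbitrary illustrative choices, not values from these notes) shows the key property of Fechner's law: each doubling of the stimulus adds the same constant amount of sensation.

```python
import math

def fechner_sensation(stimulus, k=1.0, threshold=1.0):
    """Sensation magnitude under Fechner's law: S = k * log(R),
    with the stimulus R expressed relative to the absolute threshold."""
    return k * math.log(stimulus / threshold)

# Doubling the stimulus (2 -> 4 -> 8 units) adds equal increments
# of sensation, even though the physical increases grow larger.
s2, s4, s8 = (fechner_sensation(r) for r in (2.0, 4.0, 8.0))
print(s4 - s2)  # increment from 2 to 4 units
print(s8 - s4)  # the same increment, from 4 to 8 units
```

This equal-increment behaviour is exactly why sensation grows with the logarithm of the stimulus rather than with the stimulus itself.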

It was largely because of Fechner's psychophysical research that Wundt conceived the plan of his Experimental Psychology. According to Tuddenham (1964), Wundt had even attempted to measure the time intervals required by the mind to perceive, to discriminate, and to associate, by noting differences in reaction time for tasks presumably involving different combinations of these complex activities. In this way he developed his mental chronometry.

Because the German psychologists were preoccupied with finding general laws, similar to those found in the natural sciences, they did not do much work on individual differences. Only a few German researchers, including Weber, Fechner, and Helmholtz, reported individual differences in their experiments. Nevertheless, Wundt and other German workers contributed to the mental-test movement and to quantitative psychology as a whole by bringing out the need for rigorous control of the conditions under which observations are made. This led to the standardization of procedures that is very important in psychological assessment today.

Galton’s Contributions

The science of individual differences may be said to have truly begun with Sir Francis Galton (1822 – 1911), whose predominant emphasis on heredity as an explanation of differences in intelligence was greatly supported by the zeitgeist up to the mid 1920s. This started with Darwin's theory of evolution. Herbert Spencer (1820 – 1903), a British philosopher, and Galton were impressed by the emphasis that Darwin put on individual variability as the key to the survival and evolution of species. Hence, Spencer advocated the application of the theory of evolution to human nature, which led to the development of Social Darwinism. Spencer put forward a theory that humans differ from one another in the amount of general intelligence they possess. However, it was Galton who fully elaborated a theory of mental ability and proposed ways of testing it. In both of his major psychological works, 'Hereditary Genius' (1869) and 'Inquiries into Human Faculty and its Development' (1883), he examined the inheritance of mental abilities with the goal of racial improvement. Thus, in 1883 Galton founded the science of eugenics, whose purpose was the betterment of the human race through the control of mating.

Liberal sociologists such as Lester Ward and Charles Cooley vigorously attacked Galton's theory. Ward and Cooley did not, however, reject Galton's theory completely: they were willing to accept hereditary determinants of behaviour as long as the possibility of environmental determinants was not ruled out. This is when the nature – nurture controversy began. Galton refused to compromise and sought to prove that nature was overwhelmingly more important.

In 'Hereditary Genius', Galton had used reputation as his measure of natural ability. Through his critics, however, he came to recognise that he was not measuring natural abilities. Hence, he devoted much of his later career to finding ways of measuring innate abilities. This later research gained him the title of 'father of mental testing'.

Following the British empiricist John Locke's (1632 – 1704) dictum that all knowledge comes through the senses, Galton developed tests that were mostly measures of simple sensory discrimination. To get extensive data on individual differences in a wide range of sensory and motor capacities, Galton maintained an anthropometric laboratory at the South Kensington Museum between 1884 and 1890. More than 9000 people were tested, at a cost of three pence per person. To summarise his data, Galton had recourse to the statistical methods of Quetelet (1796 – 1874), who had demonstrated that anthropometric measurements of unselected samples of persons typically yielded a normal curve. Galton was impressed by Quetelet's work. Applying Quetelet's law of deviation from an average and the normal curve, Galton distinguished 14 levels of human ability, ranging from 'idiocy through mediocrity to genius'. Galton also proposed that the mean and the standard deviation could be used to define and summarise such data, and he invented a number of additional statistical tools. Among them, he developed the method of correlation, and psychological scaling methods such as the order-of-merit and rating-scale methods. To determine the highest frequency of sound that could be heard, he invented a 'supersonic' whistle, with which he tested animals as well as people. In 1875, he introduced the twin-study method to assess the relative effects of inheritance and environment.

In the area of association, Galton worked on the diversity of associations of ideas and the time required to produce an association, and he invented the word-association test. Wundt adopted the technique, limited the response to a single word, and used it at Leipzig. Galton's investigation of mental imagery marks the first extensive use of the psychological questionnaire. His student Karl Pearson (1857 – 1936) polished and expanded the statistical techniques Galton had developed. Pearson derived the correlation coefficient, partial correlation, multiple correlation, and factor analysis, and laid the foundation for most of the multivariate statistics now used in psychology (Nunnally, 1959). In America, the most influential psychologist in the development of quantitative psychology was James McKeen Cattell (1860 – 1944).
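Pearson's correlation coefficient, mentioned above, can be computed directly from its textbook definition: the covariance of two score sets divided by the product of their standard deviations. The sketch below (the score values are invented purely for illustration) shows a minimal implementation.

```python
import math

def pearson_r(xs, ys):
    """Pearson's product-moment correlation coefficient:
    covariance of xs and ys divided by the product of their
    standard deviations. Returns a value between -1 and +1."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfectly linear relationship between two sets of test
# scores yields the maximum correlation of 1.0.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```

Correlations of exactly +1 or -1 rarely occur with real test data; observed correlations between mental tests typically fall somewhere between these extremes.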

Cattell was impressed by Galton's emphasis on measurement and statistics, and he was also influenced by Galton's idea of eugenics. Back in America, Cattell tried to put the science of eugenics into practice, and he carried out extensive research using Galton's ideas and methods. In 1890, he published an article entitled 'Mental Tests and Measurements'. Thus, while Galton originated mental tests, Cattell coined the term. However, both their tests dealt with elementary bodily or sensory-motor measures. These early tests were not successful measures of psychological phenomena because they were largely measures of physical qualities such as reaction time, colour recognition and hearing.

Development of Intelligence Scales

The real development of psychometrics as we know it today has been shaped by practical requirements rather than by theoretical developments. In 1904 the French government assigned Alfred Binet (1857 – 1911) the task of devising tests that would distinguish educationally bright from mentally retarded children. Together with his colleague Theodore Simon, he developed the Binet-Simon test. Lewis Terman of Stanford University translated and adapted the tests for use in the USA in 1916. The result, named the Stanford-Binet Scale, became the model for most subsequent intelligence tests. The pressure for quick and easily applied methods of testing military personnel in the First World War led the USA to develop the Army Alpha and Army Beta intelligence tests, which served as early models for other group tests of intelligence.

Binet and Simon divided the children into groups according to age and created norms that reflected the average performance in each age group. The norms were established by recording the percentage of children from a particular age group who could correctly answer each item. These norms enabled them to introduce the concept of mental age (MA). Mental age expresses a child's intellectual ability by comparing the child's performance with the average performance of children at each age.
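The norming procedure described above can be sketched in a few lines: for each age group, record what percentage of children passed a given item. The data below are invented for illustration only; real norming used large standardization samples and many items.

```python
# Whether each child in an (invented) age group passed one test item.
responses = {
    6: [True, False, False, True, False],
    7: [True, True, False, True, False],
    8: [True, True, True, True, False],
}

# Percentage of each age group passing the item: the item's "norm"
# at that age. An item passed by most children of a given age would
# be placed at that age level on the scale.
norms = {age: 100 * sum(passed) / len(passed)
         for age, passed in responses.items()}
print(norms)  # {6: 40.0, 7: 60.0, 8: 80.0}
```

A child who passes the items typical of eight-year-olds would then be assigned a mental age of eight, regardless of chronological age.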

Instead of calculating mental age, Lewis Terman adopted the concept of the intelligence quotient (IQ), invented by the German psychologist William Stern. The intelligence quotient is a score that can be derived for each child, and it makes it possible to compare the abilities of children of the same and of different ages. The formula used by Terman is presented below:

IQ = (MA / CA) x 100

Children whose mental age is the same as their chronological age will have an IQ of 100, the score of an average child. The normal distribution curve of IQ scores is used as the basis for classifying people into the descriptive categories of gifted, average or retarded.
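Terman's formula is a simple ratio, and a worked example makes its behaviour clear (the ages used here are illustrative):

```python
def ratio_iq(mental_age, chronological_age):
    """Terman's ratio IQ: IQ = (MA / CA) * 100."""
    return (mental_age / chronological_age) * 100

# A child of 8 who performs like a typical 10-year-old scores
# above average; equal mental and chronological age gives 100.
print(ratio_iq(10, 8))  # 125.0
print(ratio_iq(8, 8))   # 100.0
```

Note that the same mental-age lead produces a larger IQ for a younger child, one of the practical weaknesses that motivated the deviation IQ discussed next.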

The IQ formula discussed above cannot be used for adults, because the abilities measured by most IQ tests do not improve much after the age of 16. David Wechsler (1944) introduced the deviation IQ method that is generally used in obtaining IQ scores today. At each chronological age, the distribution of scores is obtained, and individuals who score at the mean are given an IQ of 100. The IQs of other individuals are obtained by calculating statistically how much they deviate from the mean. Currently this method is used for obtaining IQs for both children and adults. Thus an IQ score is no longer an intelligence quotient, because there is no longer a division sum with a quotient as the result (Myers, 1995).
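The deviation IQ can be sketched as a rescaled standard score. The sketch below assumes, for illustration, an IQ standard deviation of 15 (a common convention on modern tests); the raw-score mean and standard deviation would in practice come from the norms for the person's age group.

```python
def deviation_iq(score, mean, sd, iq_sd=15):
    """Deviation IQ: 100 plus the number of standard deviations a
    raw score lies from the age-group mean, scaled so that one
    standard deviation corresponds to iq_sd IQ points."""
    z = (score - mean) / sd  # standard (z) score within the age group
    return 100 + iq_sd * z

# A raw score one standard deviation above the age-group mean:
print(deviation_iq(score=130, mean=100, sd=30))  # 115.0
# A raw score exactly at the mean:
print(deviation_iq(score=100, mean=100, sd=30))  # 100.0
```

Because the score is defined relative to the person's own age group, no division of mental age by chronological age is involved, which is why the "quotient" in IQ survives in name only.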

Psychometric Theories of Intelligence

Psychometric approaches to intelligence study the statistical relationships between different measures, that is, how one set of scores is related to another set. Influential psychometric theories of intelligence include Spearman's g factor and Raymond Cattell's distinction between crystallised and fluid intelligence; both identify statistical relationships among mental abilities.

Spearman’s Two-Factor Theory

Charles Spearman introduced the two-factor theory, which proposes that intelligence can be seen as the result of two factors or types of ability: general intelligence (g) and specific intelligence (s). General intelligence is common to all types of intellectual behaviour.

According to Spearman, people with high degree of general intelligence tend to be successful in any activity they perform, whether it is in science or in languages.

Spearman also suggested that, in addition to general intelligence, there are various kinds of specific intelligence (s): specialised abilities in specific areas. Thus, a person's performance in, say, languages is determined by a combination of:

(1) the amount of general intelligence (g) they possess

(2) the amount of their specific aptitude (s) for languages

The relative contribution of the g and s factors varies, depending on the type of task being done. For instance, the g factor plays an important role in arithmetic reasoning but a less important role in mechanical tasks. A group factor is a common underlying ability shared by several specific abilities. Therefore, for Spearman, performance on any intellectual task is determined by a combination of g, s and group factors (Louw, 1997, p. 325).

Thurstone’s Theory of Primary Mental Abilities

Louis Thurstone (1938; 1953) proposed that our total intellectual ability depends on the following seven primary mental abilities:

  • Verbal comprehension: The capacity to understand ideas in the form of words
  • Verbal fluency: The capacity to express ourselves fluently in words
  • Spatial visualisation: The capacity to mentally manipulate and rotate objects in solving problems
  • Numerical ability: The capacity to work with figures (add, subtract, multiply, and so on)
  • Memory: The capacity to store and recall information
  • Reasoning: The capacity to plan and solve problems according to rules, principles and experience
  • Perceptual speed: The capacity to perceive and compare objects rapidly

Initially, Thurstone believed that these abilities were relatively independent of one another. Later, it was discovered that close relationships existed between many of these primary mental abilities (Louw 1997: 326-327).

Fluid intelligence and Crystallised Intelligence

Raymond Cattell (1971) suggested that there are two kinds of g: fluid intelligence and crystallised intelligence. Fluid intelligence refers to our ability to reason and solve problems; in other words, it is our ability to create new knowledge.

Crystallised intelligence is the ability to apply the knowledge we already have (e.g. vocabulary and multiplication tables) to solve problems. In Cattell's view, fluid intelligence is mainly determined by genetic factors, while crystallised intelligence is mainly determined by cultural and environmental factors.

The psychometric approaches, some of which have been discussed above, have been criticised on the grounds that researchers have come to widely different conclusions about the number of abilities or factors that make up intelligence. Depending on the methods used, some researchers have identified as few as 20, while others describe as many as 150.

Other Theories of Intelligence

Cognitive science approaches define intelligence more broadly, in terms of the tasks and problems it is used to deal with. This view contends that there are several components making up intelligence; in other words, there are several different kinds of intelligence to be demonstrated in human functioning. For instance, Guilford (1959) made an important distinction between convergent and divergent thinking. Divergent thinking involves working with information in such a way that a number of solutions flow from it; this type of thinking is characterised by a process of 'moving away' in various directions. Convergent thinking, by contrast, is characterised by a bringing together or synthesising of information and knowledge focused on a solution to a problem (Reber, 1995).

Sternberg’s Triarchic Theory

Sternberg's (1985; 1988) triarchic theory of intelligence is a good example of the information-processing approach. In this theory, intelligence is multi-dimensional and is made up of three different kinds of ability: componential intelligence, experiential intelligence and contextual intelligence.

Componential intelligence is similar to the traditional concept of intelligence. According to Sternberg, componential intelligence consists of three processes or components.

  • Meta components include abilities such as identifying a problem and making strategic decisions. They are also called executive processes because they control the other two components.
  • Performance components carry out the actions that the meta components have planned. They are involved in understanding incoming information, in retrieving information from memory, and in making decisions about how to respond.
  • Knowledge-acquisition components are involved in the learning and storing of new information.

Experiential intelligence refers to the ability to master new tasks and to carry out complex tasks automatically. Contextual intelligence is the ability of people to adjust to the environment around them. In Sternberg’s view, contextual intelligence makes a much more important contribution to achieving success in life than formal education.