Essex County Council SENCO Manual 2011

Essex County Council

SEN and Children with Additional Needs

Pupil Assessment Tools

Guidance for schools

Pupil Assessment Tools

In order to identify children's needs you will need to gather data, which will include some of the following:

  • statutory assessments, e.g. Foundation Stage Profile, end of Key Stage SATs, P Scales
  • formal assessments such as optional SATs
  • additional assessments such as teacher assessments, reading tests, spelling tests, literacy and numeracy assessments, CATs, QCA, NFER, non-verbal/verbal learning potential assessments etc.
  • Target Tracker data
  • County audit materials for SEN identification – School Action

In addition to the above you should consider evidence from other sources such as:

  • parents
  • pupils
  • teachers
  • reports from external agencies (e.g. EP, specialist teacher, health, social services etc.)

Guidelines for the use of standardised assessments

These guidelines cover the use of standardised (formal) assessments and are intended to provide support and to ensure a consistent approach to assessment. If you are planning to use any standardised tests, it is important to liaise with other professionals who may be undertaking the same types of tests, e.g. the SENCO, educational psychologist or speech and language therapist; they will be able to advise on the most suitable tests to use. With all standardised assessments, make sure you read the relevant sections of the manual before using the test.

PLEASE NOTE – repeating a test within a six-month period will usually invalidate the results.

What do we mean by a standardised assessment?

A standardised assessment is a published test, standardised on a sample population, which provides scores that can be interpreted against a normal distribution curve and which can usually be re-administered after six months to give consistent results about a student's performance. It should always be administered under the conditions described in the test manual, and its scores interpreted according to the manual's instructions. It is useful to be aware of the following information:

  • when was the test standardised?
  • on which age/gender group was it standardised?
  • how large was the population on which it was standardised?
  • are there cultural implications with regard to its standardisation?

Some tests have parallel forms that can be used for comparison. Usually, however, scores from different standardised tests cannot be validly compared. It can nevertheless be useful to gather comparative data on a pupil's performance by carrying out a further standardised assessment in the same curricular area, even where the school has already carried one out.

What should you look for in a good standardised assessment?

Assessments should meet the following criteria:

  • Fit for their purpose – they should relate to the pupil's development and experience, focus their attention, and not cause them undue stress
  • Valid and relevant – assessments should do what they claim to do; they should use up-to-date language
  • Reliable – the results of the assessment should be dependable, consistent and fair

  • Manageable – assessments should not take too long to administer, and should be as brief and simple as possible while still providing the information you need
  • Teacher-friendly – assessments should be easy to administer and score. They should be objective and require as little subjective judgement as possible

Useful terminology

Ability test – a test designed to show the individual’s present level of efficiency in a specific area.

Achievement test – a test designed to show to what extent the person tested has achieved the aims and objectives of a particular course.

Age norms – the average (mean) score produced on any test by samples selected according to age.

Battery (test battery) – a group of tests whose scores are standardised on the same population and which are administered together as a whole on subsequent populations.

Correlation – the degree of relationship or correspondence between two measures, scores, situations etc.; the extent to which they vary together.

Criterion – a standard against which performance is evaluated, or a judgement or selection made.

Culture bias – the effect which living in a particular environment has on an individual’s test scores.

Diagnostic test – a test used to diagnose weakness in specific areas of achievement.

General ability – often referred to simply as intelligence.

Halo effect – the tendency to rate an individual as good or bad on all criteria because he appears good or bad on one.

Histogram – a graphic representation of mark or score distribution in column form rather than as a curve.

Non-verbal test – a test whose items consist of symbols, diagrams, or pictures.

Normal curve of distribution – a curve of probability which is symmetrical about its mean (which is also the mid-point of the scale or continuum) – approximately two-thirds of the population lie between one standard deviation above the mean (+1 SD) and one standard deviation below the mean (-1 SD).
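To make these proportions concrete (using a hypothetical test standardised with a mean of 100 and an SD of 15, a scale used by many published tests), the normal curve gives:

\[ P(\mu - \sigma \le X \le \mu + \sigma) \approx 0.68 \]

so roughly 68% of pupils would score between 85 and 115, with about 16% scoring above 115 and 16% below 85.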

Objective test – a test which will give the same score for any individual, regardless of who marks it, as the marks awarded are predetermined and cannot be influenced by the preferences or prejudices of the marker.

Performance test – a test requiring motor or manual responses.

Random sample – a sample selected so that every member of the population has an equal chance of inclusion, rather than one chosen according to specific criteria.

Rating-scale – a continuum of n points (usually 3, 5 or 7) on which a person can be rated or assessed.

Raw score – the actual score obtained by an individual on a test, not adjusted by use of statistical techniques.

Reliability – the extent to which a test will give comparable results when applied to the same individual on a number of occasions.

Standard deviation (SD) – a measure of the spread of scores about the mean.
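For reference, the standard deviation of a set of \(N\) scores \(x_1, \dots, x_N\) with mean \(\bar{x}\) is calculated as:

\[ SD = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})^2} \]

The larger the SD, the more widely the scores are spread around the mean.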

Validity – the extent to which a test measures what it claims to measure.

Verbal test – a test that asks questions in words and is designed to test the individual’s comprehension of words and of problem situations.

Z score (also called standard score) – a score derived from a raw score by expressing it as the number of standard deviations it lies above or below the mean.
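As a worked example (the figures here are hypothetical), a z score is obtained from a raw score \(x\) by:

\[ z = \frac{x - \bar{x}}{SD} \]

so a raw score of 65 on a test with a mean of 50 and an SD of 10 gives \(z = (65 - 50)/10 = 1.5\), i.e. the pupil scored one and a half standard deviations above the mean.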


Ref: SM1/4.2

First issue: April 2002