Adapting the CEFR for Non-Latin script languages

EALTA Conference - Voss, Norway, June 2005, Neil Jones and Karen Ashton

Asset Languages is the brand name of the assessment system being developed by UCLES to implement the Languages Ladder. It is a joint venture between two UCLES business streams: OCR and Cambridge ESOL. Cambridge ESOL has responsibility for developing the assessments and conducting validation. UCLES was awarded the tender for Asset Languages in October 2003 by the Department for Education and Skills (hereafter DfES). The project addresses one of the objectives of the National Languages Strategy “Languages for all: Languages for Life. A Strategy for England” (2002).

For a fuller description of the project see Jones et al. (2005).

The Languages Ladder: a case study for framework construction

Functional proficiency levels as the basis of cross-language comparison

Implementing the complex (26 languages) multilingual measurement framework implied by the Languages Ladder is clearly a challenge, but we can benefit from looking to similar work going on elsewhere, particularly in relation to the Common European Framework of Reference (CEFR). Cambridge ESOL is among the assessment bodies that have undertaken to do case studies based on the draft pilot Manual Relating language examinations to the CEFR (Council of Europe 2003).

For users of the Manual the typical case will concern relating a single language to the CEFR. The Languages Ladder demands a more explicit focus on direct cross-language equating, and needs robust, replicable methods, i.e. methods which restrict the freedom to make judgemental decisions on a by-language basis (Jones 2005).

None the less, human judgement remains critical to the equating process. This judgement should be informed by evidence relating to the real-world language skills of learners. It is these which are the object of interest, rather than features of tests or tasks, which relate to the real world only indirectly. For this reason learner-centred standard setting procedures are to be preferred over the task-centred procedures which are prominent in the Manual and in recent CEFR-related studies (e.g. Alderson et al. 2004).

It seems then that cross-language equivalence is most meaningfully conceived in terms of comparable levels of functional language proficiency, as defined by the illustrative scales in the CEFR. Indeed, the usefulness of the CEFR, or the Asset assessment framework, lies precisely in the notion of functional equivalence.

But unfortunately CEFR levels are not defined purely in terms of functional language proficiency. The lower levels in particular – A1 and A2 – need to be seen as learning stages at least as much as functional proficiency levels. This becomes clear from the way the levels have developed.

In a position paper on the Breakthrough level John Trim explains:

Until relatively recently the Cambridge Local Examinations Syndicate (UCLES) held that its First Examination, as the name chosen demonstrates, represented the lowest level of foreign language proficiency that was ‘of public interest’, i.e. a serious qualification for the purposes of education or, particularly, employment. This attitude has since been replaced by a view of a suite of examinations functioning as an educational ‘ladder’, the ‘rungs’ of which, the individual examinations, should be sufficiently close to be successively attainable in the short to middle term, but far enough apart for the gain in proficiency to be significant – a qualitative leap rather than merely a quantitative increment. (Trim 2000).

The concept of levels as a support structure for learning is an important one. In relation to Waystage:

… experience soon showed that Threshold Level involved a considerable learning/teaching load, particularly if the goal were effective productive use and not merely receptive understanding. Waystage, not termed a ‘level’, was developed as an intermediate objective, suitable for the first year of the media-led English course for adult learners Follow Me! (Trim 2000).

Among the arguments against defining Breakthrough which Trim considers are:

  1. So low a level as A1 has no portable ‘trade-in’ value as a qualification and is therefore ‘of no public interest’.
  2. Learning at that level would be too disparate, incoherent and language-specific for any common European standard to be definable as a ‘level’ (Trim ibid).

While Trim rejects these as arguments against defining Breakthrough, it is clear that the rationale for A1 and A2 is as much to do with learning as with identifying a useful functional proficiency level.

It is easy to miss this when looking at the CEFR illustrative scales in a European context. This is because European languages are similar enough that learners of different languages can be imagined to progress at the same rate and in the same way across all skills. There is, indeed, an implication in the CEFR illustrative scales that the typical European language learner will have a flat profile of skills – that is, achieve a given CEFR level in all skills at the same time. This is despite the fact that Threshold Level – which has become B1 in the CEFR – was originally specified in terms of needs, aimed at “those language learners who wish to be able to operate as independent agents in a foreign environment. The model specifies primarily what the language user is required to do in the communication situations seen to be necessary for this purpose…” (Trim 2000). According to Trim (personal communication) there was no expectation that typical learners would achieve the level in all skills simultaneously.

The easy assumption that learning and functional proficiency progress evenly across skills and languages is challenged when non-European languages are brought into the comparative framework. In particular, the demands of acquiring the writing system can vary greatly. Somehow these differing demands need to be reconciled in constructing the framework.

Reading/writing assessments in non-Latin script languages

In developing specifications and assessments for non-Latin script languages (initially Panjabi, Urdu, Mandarin Chinese and Japanese), what type of comparison is possible across these languages (and with European languages)? The CEFR assumes knowledge of all of the script at A1, e.g.:

  • ‘Can write isolated phrases and sentences’ (CEFR 2001, p 61)
  • ‘I can write a short, simple postcard, for example sending holiday greetings. I can fill in forms with personal details, for example entering my name, nationality and address on a hotel registration form’ (CEFR 2001, p 26)

The languages in which we are developing assessments use different writing systems, e.g.:

Table 2: Writing systems

Logographic
  • Chinese characters are logographic, e.g. 水 (water)

Syllabographic
  • Gurmukhi script in Panjabi is syllabic
  • Hiragana script in Japanese is syllabic, e.g. か き く け こ (ka ki ku ke ko)

Additionally, each script places different demands on learners.

Table 3: Script learning demands for different languages

Tamil
  • 247 characters

Urdu
  • 35 base characters

Chinese
  • approx. 6,500 characters in use

Japanese
  • 46 base katakana
  • 46 base hiragana
  • approx. 1,945 kanji in daily use

The assessment approach taken will clearly impact teaching and learning. Cambridge ESOL is working closely with teachers in order to ensure that the impact will be positive. It is important to know what teaching models are used for script acquisition in non-Latin script languages. We are aware of two distinct models currently being used by prominent language schools in the UK.

Model 1: There is a separate pre-entry class focusing purely on script acquisition. At the end of this class, and before students are able to attend the entry level class, they must have acquired all of the script. This model applies to syllabic scripts – not to Chinese characters or Japanese kanji.

Model 2: This model uses an integrated approach where all skills are taught together. The script is taught in ‘chunks’ and script acquisition may take a few years (as there are few classroom hours in a week for language learning and time is divided over the four skills).

Asset Languages needs to find a working solution to this measurement challenge, defining an approach that fits into the framework while respecting the diversity of the scripts and writing systems of the languages we are working with. We have consulted widely with experts in each of these languages in order to develop specifications at Breakthrough, Preliminary and Intermediate levels. There was good agreement among experts in developing these specifications, enabling the following decisions to be made.

For Panjabi, Urdu and Japanese (hiragana and katakana scripts only) any characters may be used in Breakthrough reading and writing assessments. It was agreed that the entire script was needed before any kind of functional level could be reached. One expert commented ‘it was quite obvious that the model given to us is workable in all languages’.

Chinese is not syllabic in the same way that the above languages are. Words are instead learned as building blocks so it is difficult to talk about script acquisition without being overly prescriptive and impacting negatively on pedagogy. Experts have suggested specifying that candidates should know enough characters to enable them to complete the functional requirements set out at each grade and stage.

With respect to other non-Latin script languages consultation with experts and teachers needs to continue, as do studies looking at the way in which our assessments are impacting on both learners and teachers. The approach to reading and writing assessments in other non-Latin script languages will be developed in collaboration with experts and on a case-by-case basis.

The two figures below show two potential models for scale construction, based on the two teaching models discussed above. Under Model 1, A1 represents a functional level of achievement that is comparable for European and non-Latin script languages, although, because of the extra task of acquiring the script, reaching A1 is likely to take longer for non-Latin script languages. Under Model 2, by contrast, A1 is achieved with a degree of effort comparable to that for a European language, which means that, in terms of can-do achievement, learners will be able to do less at A1 than learners of European languages.

For Asset Languages, the model that we create is likely to be a compromise between these two. The system needs to meet the learning demands of particular languages. While it is desirable that the lower levels of A1 and A2 represent a substantive achievement, they also need to be realistic learning targets. The learning effort to reach these levels may be greater than for European languages, but not so great as to be unrealistic, demotivating and, at worst, unachievable. Learners who achieve these levels and wish to progress further will certainly have the dedication and purpose to work intensively on mastering the script. We could thus aim at preserving B1 as the first ‘true’ functional level, with learners of both European and non-Latin script languages demonstrating comparable proficiency.

References

Alderson, J.C., Figueras, N., Kuijper, H., Nold, G., Takala, S. and Tardieu, C. (2004). The Development of Specifications for Item Development and Classification within the Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Reading and Listening. The Final Report of the Dutch CEF Construct Project.

Council of Europe (2003). Relating language examinations to the CEFR. Manual; Preliminary Pilot Version. Retrieved from: Co-operation/education/Languages/Language Policy/Manual/default.asp

DfES (2003). The Language Ladder – steps for success. Web page, retrieved from

DfES (2004). Languages for all: From strategy to delivery. Document retrieved from

Jones, N. (2005). Raising the Languages Ladder: constructing a new framework for accrediting foreign language skills. Research Notes No. 19. Cambridge ESOL. Retrieved from:

Jones, N., Ashton, K. and Chen, A. Shi-Yi (2005). Rising to the challenge of Asset Languages. Research Notes No. 19. Cambridge ESOL. Retrieved from

Trim, J. (2000). Breakthrough. A position paper prepared under contract for the Council of Europe, November 2000.
