Here you can download the Babel Placement Test. All files are in PDF format. The information on this page is also available in the Babel test information pages.
- Babel test information
- The test itself
- The candidate answer sheet
- The marking scheme
- The score converter
The Babel English Language Placement Tests
The Babel English Language Placement Tests were closely based on the Nelson Quickcheck Placement Tests. These have generally been used for baseline language assessment of company employees in order to benchmark their language level against an established external reference [the ALTE and CEF levels]. They have been used at Liverpool City Council, A4E Language and Basic Skills Training, Kuwait Petroleum Corporation and subsidiaries, Chevron Texaco, Liverpool and Everton Football Teams, IdioMaster (Spain), and Liverpool Language Academy. The testing cycle requires no more than 70 minutes of trainee time and no specialist testers to administer it.
There are two versions of the test available: the paper version and the computer-based test [CBT]. The computer-based version requires Perception QuestionMark© testing software, which is available for either internet or intranet computer systems. The test author is able to set up the test in the software environment, but each client will need to purchase Perception from the software company.
The test items have been trialled and pretested on more than 500 testees and benchmarked against IELTS and the Cambridge testing suite using standard correlation statistics. This gives some assurance regarding level benchmarks.
In addition, the test has undergone stringent facility-value calculations, which assure a wide spread of scores from beginner to advanced levels. A large population sample (some 5,000 tests carried out) gives some assurance of the reliability of these calculations, which have been automated using standard Perception QuestionMark© testing software.
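As an illustration of the kind of calculation involved: a facility value is simply the proportion of testees who answer an item correctly. The sketch below is purely illustrative, with hypothetical response data, and is not the author's actual Perception QuestionMark scripts:

```python
# Facility value = proportion of testees answering an item correctly.
# Values near 0.5 discriminate best between stronger and weaker testees;
# a spread of facility values across items yields a spread of total scores.

def facility_value(responses):
    """responses: list of booleans, True = correct answer."""
    return sum(responses) / len(responses)

# Hypothetical response data for three items across eight testees.
item_responses = {
    "item_1": [True, True, True, True, True, True, False, True],     # easy
    "item_2": [True, False, True, False, True, False, True, False],  # medium
    "item_3": [False, False, True, False, False, False, False, False],  # hard
}

for item, responses in item_responses.items():
    print(item, round(facility_value(responses), 2))
```

An item with a facility value near 1.0 tells us little about stronger testees; one near 0.0 tells us little about weaker ones, which is why a wide spread of values is needed for a placement instrument.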
The Babel English Language Placement Tests consist of four tests of equal difficulty [designated Test A, Test B, Test C & Test D]. Each test contains four sections of 25 reading, grammatical & lexical items, with the sections arranged in ascending order of difficulty; within each section of the paper test, however, the questions are not ranked by difficulty. The CBT can use computer adaptive testing [CAT], in common with other CBT tests such as the Oxford Placement Test, if this is required.

It is important for test users to understand quite clearly that the Babel English Language Placement Tests are only indirect tests [and have, in common with most placement tests, the inherent weaknesses of such tests]. They make no statements about what the testees can do in terms of language performance. However, they do provide a robust means of establishing the most probable level of language performance [and that is all we seek for purposes of initial language assessment]. After initial placement in teaching groups, trainees may be moved up or down a level within the first week of the course in the light of observed language performance. This is standard procedure in most language training establishments.
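The computer-adaptive idea mentioned above can be shown with a toy sketch. The item bank, difficulty scale and update rule here are all hypothetical assumptions for illustration (the actual CAT engine in testing software such as Perception QuestionMark is considerably more sophisticated): after each response, the next item presented is the unused one whose difficulty is closest to the current ability estimate.

```python
# Toy computer-adaptive testing (CAT) loop: hypothetical item bank with
# difficulties on a 0..1 scale, and a simple fixed-step ability update.

def next_item(bank, used, ability):
    """Pick the unused item whose difficulty is closest to the ability estimate."""
    candidates = [item for item in bank if item not in used]
    return min(candidates, key=lambda item: abs(bank[item] - ability))

bank = {"q1": 0.2, "q2": 0.4, "q3": 0.6, "q4": 0.8}  # item: difficulty
used, ability, step = set(), 0.5, 0.1

# Simulated testee: answers the first two items correctly, the third wrongly.
for answered_correctly in [True, True, False]:
    item = next_item(bank, used, ability)
    used.add(item)
    ability += step if answered_correctly else -step

print(round(ability, 2))  # final ability estimate drifts with performance
```

The practical appeal for placement is that a CAT converges on a testee's level with fewer items than a fixed-form test, which is why it suits the rapid-placement purpose described here.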
The Babel English Language Placement Tests were originally closely based on the Nelson Quickcheck Placement Tests, which are designed for rapid placement testing. The tests were designed for ease of administration while observing stringent test design standards.
The test writer would be cautious about adding extra test components for two reasons:
● Any other subjective items used in conjunction with this test would need to be valid, reliable and benchmarked. Test items which elicit the range of language required to make a proper assessment, inter-scorer reliability and valid language descriptors for subjective tests are difficult things to come by. The addition of any subjective item(s) presupposes tester training and/or the indispensability of the tester.
● Apparent proficiency. Many test takers have some degree of proficiency in one or more of the tested skills. This creates a bias which becomes noticeable once learners are in a General English class: the inflated score lets them down [and causes them to fail] when their reading and writing skills, and other skill areas, are demanded in later teaching and/or testing environments.
We are not really interested in their speaking or writing skills for placement purposes: we are more interested in whether testees can understand the teacher and read the course book, whether they have an adequate level of lexis, and whether they can handle written sentence patterns and structures with a degree of automaticity. While the addition of add-on test items puts the reliability and validity of the test in question, it is not possible to adapt, reorder or rewrite questions and still maintain test reliability. This is because each distractor, question and section is designed to work as part of one measuring instrument. Benchmark calculations were carried out on this basis, and the test will lose reliability if changes are made.
In the first instance, the items in each of the four tests were carefully selected and adapted from the ten levels of the Nelson English Language Tests battery. The latter ranges from near-beginner level up to UCLES Certificate of Proficiency level [that is, to near native-speaker level]. Questions which showed the highest discrimination rating were chosen in each case. In order to keep up to date with current testing methods, skill-based questions were trialled, pretested, edited, benchmarked and added to the test. To date, only reading questions have been added. Some listening questions have been written and are ready for trialling and benchmarking; however, present resources prohibit further development. The tests are in multiple-choice format [to ensure rapid marking] and consist of items measuring the recognition of correct responses to reading prompts, grammatical forms and lexical choices in context. All items have been extensively pre-tested with students from a variety of first-language backgrounds.
The Babel English Language Placement Tests are accurate, although they do not provide the same degree of precision in placement as the individual tests of the larger NELT battery or other CAT batteries. For our purposes, however, they suffice to indicate accurately the general language level of testees. They do not enable statements to be made about individual skills [reading, writing, listening or speaking].
Scoring & Administration
Scoring and administration are fairly straightforward, although some points need to be borne in mind.
● Each of the four sections of the tests contains 25 questions. The sections are progressively more difficult.
● Testees must be clear what the task involves, so go through the rubric with them
before starting. Invigilators are advised to concept check testees’ understanding
of the test rubric before starting the test. In addition, testees with particularly low
levels should be carefully watched at the start of the testing session to ensure that
they fully understand the time limit and performance relationship.
● The test writer understands that in some instances, and in some institutions, testees are permitted to use dictionaries in tests. However, as automaticity is considered here to be an important aspect of language usage, the time constraints will not allow testees both to use a dictionary and to achieve a high score. Where dictionaries are used, testees habitually underachieve. Therefore, dictionary use is not encouraged.
● The test items should be negatively marked. Negative marking is no longer a common feature of test scoring, but it is used in this instance both to allow differentiation at lower levels and to reduce the scoring advantage of guessing over leaving answers blank in multiple-choice items. Benchmarking was carried out using the negative-marking score conversion. For all these reasons, negative marking is an important feature of this test.
● Blank answers are considered to be incorrect.
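To make the effect of negative marking concrete, here is a purely illustrative scoring sketch. The +1 / −0.25 figures are assumptions for the example only; the actual values are defined in the downloadable marking scheme and score converter above. Blanks count as incorrect, as noted in the last point:

```python
# Illustrative negative-marking calculation: +1 per correct answer,
# a fixed deduction per wrong or blank answer (hypothetical penalty of 0.25).

def raw_score(answers, key, penalty=0.25):
    """answers: testee's responses (None = blank); key: correct options."""
    score = 0.0
    for given, correct in zip(answers, key):
        if given == correct:
            score += 1.0
        else:                 # wrong answer or blank: negative mark
            score -= penalty
    return score

key     = ["b", "a", "d", "c"]
answers = ["b", "a", "c", None]   # two right, one wrong, one blank
print(raw_score(answers, key))    # 2 - 0.25 - 0.25 = 1.5
```

Under such a scheme, pure guessing on a four-option item has an expected value of zero, so guessed-up scores no longer inflate placement, which is the differentiation effect described above.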
Since scores on any one of the four tests [A, B, C & D] should be almost the same for any given section, security can be maintained among large groups of testees by giving different tests [alternates] to those sitting next to one another. Test administrators are encouraged to maintain the highest levels of test security within their own institution to maintain local reliability of the test.
For more information on levels, corpus and benchmarks see the “Common European Framework” and the related works:
- Waystage Level [Jan van Ek & John Trim] Cambridge University Press [0521-56707-6]
- Threshold Level [Jan van Ek & John Trim] Cambridge University Press [0521-56706-8]
- Vantage Level [Jan van Ek & John Trim] Cambridge University Press [0521-56705-X]
These texts are published by CUP in collaboration with the Council of Europe.