Having established the purpose of a test (see the post: Types of test – why are we testing?), other factors will then affect the general approach. Whereas in the past the focus was almost exclusively on the techniques of testing [how to test], the main concern in testing today is much more with what we want to test. Communicative approaches to language teaching have brought with them a focus on real-life language use. How this can be tested, and more specifically which aspects of communicative competence should go into the test, was the primary issue in testing in the 1990s [Weir, 1993; Alderson & Hughes, 1991]. This has also meant a fresh attitude to the role of error. In the past, errors were counted and grades were often assigned on the basis of the number of mistakes. The emphasis on measuring achievement rather than counting errors is welcome, but not without difficulty: how do we measure? Counting is easier, and it does separate weaker candidates from more able ones, but it is not really appropriate when we are talking about the ability to communicate in the language. A useful distinction in discussing tests is that between direct and indirect testing.
Direct and Indirect Language Testing
A test is said to be direct when the test actually requires the candidate to demonstrate ability in the skill being sampled. It is a performance test. For example, if we wanted to find out if someone could drive a vehicle, we would test this most effectively by actually asking him to drive the vehicle. In language terms, if we wanted to test whether someone could write an academic essay, we would ask him to do just that. In terms of spoken interaction, we would require candidates to participate in oral activities that replicated as closely as possible [and this is the problem] all aspects of real-life language use, including time constraints, dealing with multiple interlocutors, and ambient noise. Attempts to reproduce aspects of real life within tests have led to some interesting scenarios.
An indirect test measures the ability or knowledge that underlies the skill we are trying to sample in our test. So, for example, you might test someone on the Highway Code in order to determine whether he is a safe and law-abiding driver [as is now done as part of the UK driving test]. An example from language learning might be to test learners’ pronunciation ability by asking them to identify words that rhyme with each other.
One of these words sounds different from the others. Underline it.
door law though pore
This is essentially knowledge about the target language [or recognition of target language items] rather than actual performance in the language. Indirect testing is controversial, and views on it vary, but it is clear that many of the claims made for it in the past cannot be readily substantiated. It does not give any direct indication of the candidates’ oral proficiency, accuracy, or appropriateness of pronunciation. In many instances, an indirect approach involves the testing of enabling skills at a micro-level. Thus, in terms of spoken interaction, we might seek to test learners by asking them to write down what they would actually say in a given situation [as in this example from the New Cambridge English Course Test Booklet].
Language In Use
1. Shopping
A. Can I help you?
B. ______________
A. Here’s a lovely one.
B. ______________
A. What size?
[Swan & Walter, 1991: 6]
Again, this would not assess the candidate’s oral performance directly. In fact, it is not at all easy to see exactly what it would usefully assess.
Some further considerations
List any public examinations with which you are familiar.
In a TEFL context, you might consider the UCLES testing suite, or tests such as TOEFL, or IELTS. In a UK-based mainstream TESL context, you might prefer to look at the mainstream SATs or GCSEs.
- How do these tests fit into the direct or indirect categorization?
- List some advantages and some disadvantages of both types of testing [and consider the issues raised in the section above].