Examining Writing: Research and Practice in assessing second language writing
Written for the "researcher as teacher" or trained interviewer/examiner rather than the "teacher as researcher", this publication highlights the need for test developers to provide clear explanations of the ability constructs that underpin tests offered in the public domain. While a number of previous frameworks are mentioned, including the Common European Framework of Reference (CEFR) and Cambridge's own VRIP approach, the authors argue that "...language tests should...place the same requirements on test takers as are involved in writers' responses to communicative settings in...'real life situations'" (p. 9). Their intention is to provide a model that incorporates much of what the previous models and frameworks have suggested, while also offering a practical framework for evaluating the "real-world" assessment value of standardized tests of writing.
A well-documented, research-verified explanation is increasingly required if the validity of test score interpretation and use is to be supported both logically and with empirical evidence. This book demonstrates the application of a comprehensive test validation framework that adopts a socio-cognitive perspective. The framework embraces six core components reflecting the practical nature and quality of an actual testing event, examining Cambridge ESOL writing tasks from the following perspectives: Test Taker, Cognitive Validity, Context Validity, Scoring Validity, Criterion-related Validity and Consequential Validity. Chapter 1 explains the earlier models and frameworks, such as Weir's (p. 14), and outlines both the basis for the book's structure and the proposed test evaluation model. Chapter 2 focuses on test-taker characteristics, with particular attention to research on Cambridge's own ESOL tests. Chapter 3 addresses cognitive validity in testing writing; because L2 research in this area is scarce, L1 research is also considered. Chapter 4 deals with the limited context validity research, with particular attention to the diversity of tasks and parameters across tests intended for test takers at all levels of ability. Chapter 5 examines scoring validity, while chapters 6 and 7 consider the value of the test score in terms of criterion-related and consequential validity. The final chapter summarizes the findings from applying the proposed framework to Cambridge writing exams and suggests further research that could be of value to Cambridge ESOL as well as the wider testing community.
The authors show how understanding and analyzing the framework and its components in relation to specific writing tests can help test developers operationalize their tests more effectively, especially with respect to criterion distinctions across test levels.
While I found this volume quite theoretical and rather academic in its language and writing style, I was interested in how research conducted on the various tests influenced both the construction of the writing tasks and the way examiners are trained to evaluate the writing samples. The book is a genuine attempt to tie together research, test taking, and the challenges both test takers and examiners face in the obviously artificial "test taking" situation, so that tests become more consistent and realistic tools for measuring a student's actual language level and their likelihood of attaining the level of English proficiency they will need in today's world. For that reason alone, it should be read by anyone who is a qualified examiner for any of the recognized ESOL tests that include a writing component.