1st Edition

Local Language Testing: Design, Implementation, and Development

    228 Pages, 34 B/W Illustrations
    by Routledge

    Local Language Testing: Design, Implementation, and Development describes the language testing practice that exists in the intermediate space between large-scale standardized testing and classroom assessment, an area that is rarely addressed in the language testing and assessment literature. Covering both theory and practice, the book focuses on the advantages of local tests, fosters and encourages their use, and provides suggested ideas for their development and maintenance. The authors include examples of operational tests with well-proven track records and discuss:

    • the ability of local tests to represent local contexts and values, explicitly and purposefully embed test results within instructional practice, and provide data for program evaluation and research;
    • local testing practices grounded in the theoretical principles of language testing, drawing from experiences with local testing and providing practical examples of local language tests, illustrating how they can be designed to effectively function within and across different institutional contexts;
    • examples of how local language tests and assessments are developed for use within a specific context and how they serve a variety of purposes (e.g., entry-level proficiency testing, placement testing, international teaching assistant testing, writing assessment, and program evaluation).

    This timely book focuses on the advantages of local tests, fosters and encourages their use, and outlines their development and maintenance. It constitutes essential reading for language program directors, graduate students, and researchers involved in language program development and evaluation.

    Preface

    Acknowledgements

    Chapter 1: Introduction

    Why local tests?

    Introduction

    Our tests

    Contexts and problem solving

    Large-scale language tests and academic language testing

    Organization of the rest of this volume

    Further reading

    References

    Chapter 2: Local tests, local contexts

    Introduction

    Local contexts and test purpose: decisions about test-takers

    Linking test purpose to instructional outcomes

    Further reading

    References

    Chapter 3: Local test development

    Introduction

    Planning test development

    Test design

    Technical manuals

    Summary

    Further reading

    References

    Chapter 4: Test tasks

    Introduction

    Discrete-point lexico-grammar items

    Integrative measures of general language ability

    Skill-based performance assessments

    Summary

    Further reading

    References

    Chapter 5: Local test delivery

    Introduction

    Digital devices

    Production of traditional and digitally delivered tests: what does it take?

    Opportunities for innovative task design

    Test administration

    Selection of test delivery format: what to consider

    Summary

    Further reading

    References

    Chapter 6: Scaling

    Introduction

    Why do we need a rating scale?

    Different types of rating scales

    Approaches to scale development and validation

    Issues to consider during scale validation

    Unique challenges and opportunities for scaling in local testing contexts

    Summary

    Further reading

    References

    Chapter 7: Raters and rater training

    Introduction

    The promise and perils of rater-mediated language assessment

    Rater behavior: a catalog of rater effects

    Why do we need rater training? Does it really work?

    Best practice for rater training

    Summary

    Further reading

    References

    Chapter 8: Data collection, management, and score reporting

    Introduction

    Data flow

    Data storage

    Data management

    Data uses

    Summary

    Further reading

    References

    Chapter 9: Reflections

    Introduction

    Some reflections on the revision of the TOEPAS

    Some reflections on the EPT scale revision process

    Some reflections on the development of the OEPT

    Conclusions

    References

    Appendix A

    Index

    Biographies

    Slobodanka Dimova is an associate professor of Language Testing and the coordinator of the Test of Oral English Proficiency for Academic Staff (TOEPAS) at the University of Copenhagen, Denmark. She currently serves as book review editor for the journal Language Testing, as a member of the Executive Committee of the European Association for Language Testing and Assessment (EALTA), and as Chair of the Danish Network for Language Testing and Assessment (DASTEN). She has also been a language testing consultant for the Danish Ministry of Education regarding the foreign language tests administered at the end of compulsory education. Her current research interests include rater training and rater behavior for oral proficiency tests, the development and validation of scales for performance-based tests, the use of technology in language testing and assessment, and the policies and practices related to the implementation of English-medium instruction (EMI) programs at non-Anglophone universities.

    Xun Yan is an assistant professor of Linguistics, Second Language Acquisition and Teacher Education (SLATE) and Educational Psychology at the University of Illinois at Urbana-Champaign (UIUC). At UIUC, he also supervises the university-level English Placement Test (EPT), designed to assess international students’ writing and speaking skills. His research interests include speaking and writing assessment, scale development and validation, psycholinguistic approaches to language testing, rater behavior and cognition, and language assessment literacy. His work has been published in Language Testing, Assessing Writing, TESOL Quarterly, and Journal of Second Language Writing.

    April Ginther is a professor of English at Purdue University and teaches graduate classes in language testing and quantitative research. During her tenure at Purdue, she has advised more than 20 graduate student dissertations. She is also the director of Purdue’s two primary English language support programs: The Oral English Proficiency Program, the support program for international teaching assistants, and The Purdue Language and Cultural Exchange, the support program for incoming international undergraduate students. As director of both programs, she is responsible for the development, maintenance, and administration of the local tests used to evaluate the English language skills of 1,000 incoming students each academic year. She is a founding member of MwALT, the Midwest Association of Language Testers, and is a well-respected member of the international language testing community. She served as the co-editor of the journal Language Testing from 2012 to 2017.