Prepping for proficiency: The scope of test preparation for language learning


Language unlocks many doors in today’s globalized world. English test scores determine high-stakes decisions including university admission, graduation and immigration to English-speaking countries. But to what extent does language test preparation improve test scores and language learning? Until now, research into this topic has been conducted solely from the perspective of test users. Along with her research colleagues, Dr Liying Cheng – Professor of Language Education and Assessment at Queen’s University, Canada – aims to bridge this gap by engaging test agencies as well, to examine the outcomes of ‘prepping for proficiency’.

3AM. An empty can of energy drink. The same phrase muttered over and over. The image of a language student ‘cramming’ for a test leaves a questionable impression of their learning. And whilst such a stereotypical picture of test preparation is far from universal, the consequences of failure in many language tests compel students and their instructors to focus on training for the test as opposed to a more holistic approach to language learning. Does test preparation really improve English proficiency, or do ‘tricks to beat the test’ inflate test scores while having little effect on test-takers’ actual language ability?

With her research colleagues, Professor Liying Cheng of Queen’s University at Kingston, Canada, has studied how teachers and students view and enact test preparation for high-stakes language tests. By working with sample groups in Australia, Iran, Canada, and China, the researchers are gathering comprehensive findings on an issue with serious implications for social, economic and educational development in the English-speaking world and beyond.


English As A Powerful Language
A history of conquest and broader influence has made English the international language of today. In the 16th and 17th centuries, Britain made technological strides whose influence stretched across continents, helping to establish English as a language of science. The spread of the British Empire later led to the systematic re-education of colonies, entrenching written and spoken English around the world. The rise of the United States to the status of global superpower from the 20th century onwards has further propagated the spread of the English language through a far-reaching economic and cultural network. The proliferation of English has led to its linguistic domination of many aspects of our everyday lives globally: the economy, business, politics, popular culture, ideology, research and development, and more.

…the possibility that such tests may fail to capture English proficiency goes unnoticed or unchallenged by policymakers and participants involved in the process.

Linguistic globalization has preserved power and wealth for the Anglophone world, and this continues today. Scores on English language tests widely determine immigration and citizenship for foreign nationals entering English-speaking countries, as well as entry into higher education and progression through professional qualifications. This gatekeeping function of English language test scores provides policymakers and institutions with levers to control immigration flows, the labour supply and the language of scholarship. Meanwhile, language tests and test preparation courses are multi-million-dollar enterprises in their own right. Within classroom walls, high-stakes examinations steer teacher practice towards the demands of tests, as teachers’ personal career trajectories and those of their students hinge on exam success.


With a plethora of high-stakes language tests in operation for learners in their country of origin as well as in Anglophone nations, the activities surrounding assessment are of widespread importance. And yet, the possibility that such tests may fail to capture English proficiency goes unnoticed or unchallenged by policymakers and participants involved in the process, since their investment is often tangential to language development.

To The Test
Previous research has demonstrated that when test scores are of great consequence, teachers and students tend to tailor their instruction and learning ‘to the test’. Meanwhile, stakeholders such as university admission officers, administrators and potential employers implicitly believe that test results reflect ‘true’ English proficiency. However, little is known about whether test preparation actually enhances test performance, as shown in higher test scores, and whether it cultivates a genuine proficiency for communicating in English. Furthermore, such research has hitherto been carried out from the perspective of test users, without full input from and partnership with test designers. Seeking to answer these questions whilst also addressing this gap in the research, Professor Liying Cheng’s team has conducted fieldwork at multiple education centres running test preparation courses, interviewing and observing teachers, students and administrators to uncover their beliefs, practices and outcomes. In particular, the researchers observed preparatory courses in three distinct instructional contexts: Australia, where English is taught as a second language (ESL), and Iran and China, where English is taught as a foreign language (EFL).

Professor Cheng and colleagues explore how teachers and students view and enact test preparation for high-stakes language tests.

Findings reveal common as well as contrasting perceptions and practices among students, test centre administrators and teachers across the settings. All preparatory courses were orientated toward test demands, albeit to differing extents. Moreover, student preference for test-orientation was accompanied, especially among experienced test-takers, by an expectation that courses would also build authentic language ability. More specifically, students on the Australian ESL course perceived their lessons to have an overly narrow focus on assessment criteria, which mirrored those teachers’ prioritisation of test content and methods over improving proficiency. A potential explanation for this tendency is that teaching English as a foreign language in learners’ countries of origin allows for a more flexible approach to language learning. This helps to clarify why Chinese instructors saw the assessment as evaluating proficiency through an academic format, making it inseparable from thorough language development. Courses also diverged in their instructional aspects. In the Iranian and Chinese (EFL) settings, students and teachers preferred practising and developing speaking, listening and writing skills, whilst Australian (ESL) learners expected detailed feedback on writing tasks in particular, since they were exposed to other communicative experiences outside the classroom.

This research has important implications for understanding the nature and scope of test preparation.

Breaking Down Barriers
Inside preparatory courses in Iran, students and teachers aligned their activities with student needs and expectations, particularly by combining test strategy training with language proficiency work. The Chinese participants also valued a pedagogical blend of language development and test preparation. However, Iranian educators would sometimes deviate from interactive instruction to teach grammar through deductive rule-example-practice methods and repetitive drill exercises designed to help students learn specific language points. Chinese teachers, on the other hand, could draw on their educational and professional backgrounds, including overseas study, to undertake more dialogic engagement with students and to foster deeper communicative ability.

…stakeholders such as university admission officers, administrators and potential employers implicitly believe that exam results reflect English proficiency.

Recruiting high-quality teachers was a priority for administrators at all centres, a concern echoed in the reasons students gave for choosing their place of study, where they emphasised teacher quality and familiarity with the test. However, despite this consistency in status, teachers’ cultural identities differed in ways that shaped classroom practice. Specifically, within the Iranian and Chinese EFL contexts, teachers represented role models because of their own success in the tests their students faced and their proficiency in the target language. This shared perspective also gave those teachers empathy for student struggles with repeat testing and future studies abroad, as well as an intimate familiarity with a common native language. Meanwhile, ESL teachers in Australia had acquired English as their primary language, which validated them as ‘experts’ who could judge student attempts to communicate in English without being asked to state which English language rules and conventions had been violated. Nevertheless, those teachers’ experience of language learning and test-taking contrasted with that of their students.


Mediating Factors
The research conducted by Professor Liying Cheng and her colleagues has important implications for understanding the nature and scope of test preparation. Their findings reveal complex processes underlying test preparation in language centres across diverse contexts, and notably provide both test designers and test users with empirical evidence regarding the validity of language test scores. This research is critical because selection and access decisions based on inaccurate or improper inferences drawn from test scores, made by policy- and decision-makers in areas such as higher education admission, professional certification and immigration, would be costly, undermining knowledge transfer and threatening social, educational and economic development in English-speaking countries and beyond.

Can language proficiency be measured by a test?

Yes, but only to a certain extent. A test score is a snapshot of language proficiency at a point in time, based on a one-time test performance. How well a test score reflects actual language proficiency is far more complex than the dichotomous question posed above. To fully understand the relationship between a test score and language proficiency, we need to conduct ongoing test validation research, and we need to examine the context of test performance: the context of test-takers and their learning, the educational system, the societal infrastructure surrounding a particular test score and, most importantly, its use.

References

  • Saif, S., Ma, J., May, L. & Cheng, L. (2019). Complexity of test preparation across three contexts: Case studies from Australia, Iran and China. Assessment in Education: Principles, Policy & Practice. https://doi.org/10.1080/0969594X.2019.1700211
  • Cheng, L. (2018). Geopolitics of assessment. In J.I. Liontas, TESOL International Association & M. DelliCarpini (Eds.), The TESOL encyclopedia of English language teaching. Hoboken, NJ: John Wiley & Sons. https://doi.org/10.1002/9781118784235.eelt0814
  • Ma, J. & Cheng, L. (2017). Preparing students to take tests. In C. Coombe (Ed.), TESOL encyclopedia of English language teaching: Assessment and evaluation. Hoboken, NJ: John Wiley & Sons. https://doi.org/10.1002/9781118784235.eelt0321
  • Wang, H. & Cheng, L. (2017). Exploring the relationship between test-takers’ obtained test scores and their academic achievement. Paper presented at the 39th Language Testing Research Colloquium, Bogotá, Colombia.
  • Ma, J. & Cheng, L. (2016). Chinese students’ perceptions of the value of test preparation courses for the TOEFL iBT: Merit, worth and significance. TESL Canada Journal, 33(1), 58-79. http://www.teslcanadajournal.ca/index.php/tesl/article/view/1227
  • Cheng, L. & Doe, C. (2013). Test preparation: A double-edged sword. IATEFL-TEASIG (International Association of Teachers of English as a Foreign Language’s Testing, Evaluation and Assessment Special Interest Group) Newsletter, 54, 19-20.
DOI
10.26904/RF-134-3437

Funding
The Social Sciences and Humanities Research Council (SSHRC) of Canada

Collaborators
Shahrzad Saif (Laval University, Canada); Hong Wang (Mount Saint Vincent University, Canada); Lynette May (Queensland University of Technology, Australia)

Bio
Liying Cheng is Professor and Director of the Assessment and Evaluation Group at Queen’s University in Canada. Her seminal research on washback focuses on the global impact of large-scale testing on education policy, curriculum and pedagogy, and on the academic and professional acculturation of international and new immigrant students, workers and professionals to Canada.

Contact

Liying Cheng
B201 Faculty of Education
Queen’s University, Kingston
Ontario, K7M 5R7
Canada

E: liying.cheng@queensu.ca
T: 1-613-533-6000 ext.77431
