Apichat Khamboonruang. DEVELOPMENT AND VALIDATION OF THE ACADEMIC COLLOCATIONAL COMPETENCE TEST FOR EFL UNIVERSITY STUDENTS: AN APPLICATION OF THE ARGUMENT-BASED APPROACH. Master's degree (English as an International Language), Office of Academic Resources, Chulalongkorn University, 2013.
DEVELOPMENT AND VALIDATION OF THE ACADEMIC COLLOCATIONAL COMPETENCE TEST FOR EFL UNIVERSITY STUDENTS: AN APPLICATION OF THE ARGUMENT-BASED APPROACH
Abstract:
A well-developed and validated test of English collocational competence can yield meaningful scores that inform test users, in part, of how proficient test-takers are in English for placement or screening purposes. To make proper decisions, test users need to rely heavily on trustworthy information provided by such a test. The primary purpose of the present study was, therefore, to apply the argument-based approach (Kane, 1992, 2006, 2011, 2013) to develop and validate the Academic Collocational Competence Test (ACCT) for EFL graduate students. The argument-based approach involves two stages of argument development: the first is to develop the interpretive argument by specifying the intended interpretation and use of test scores, and the second is to build the validity argument by evaluating theoretical and empirical evidence collected to support the intended score interpretation and use specified in the interpretive argument. This study also aimed to apply the Rasch measurement approach to provide empirical evidence in support of the ACCT validity argument. A total of 193 EFL graduate students from various academic disciplines at Chulalongkorn University participated in this study. Theoretical evidence was collected during the development of the ACCT and the ACCT interpretive argument. Empirical evidence was gathered using the ACCT, the Academic Vocabulary Level Test (AVLT) developed by Schmitt, Schmitt, and Clapham (2001), and the test reflection questionnaire adopted from Voss (2012). The ACCT was developed using high-frequency verb-noun collocations drawn from various domains of academic written discourse in the British National Corpus (BNC), primarily as a norm-referenced placement test of the receptive collocational competence of EFL graduate students.
Empirical data were analysed using descriptive statistics, Rasch model analysis, correlation analysis, analysis of variance, chi-square analysis, content analysis, cut score analysis, and classification error analysis. The results revealed that the argument-based approach facilitated the development and validation of the ACCT. The interpretive argument served as a guideline both for designing and developing the ACCT and for assembling the evidence that was later appraised to construct the ACCT validity argument. The development of the ACCT and of the ACCT interpretive argument was an iterative process, with both revised until they were consistent with the intended score interpretation and use as well as with the context of the current study. The validity argument indicated the degree to which the ACCT score interpretation and use were valid, or appropriate, based on the evidence collected to support the score interpretation and use specified in the ACCT interpretive argument. The ACCT validity argument revealed a reasonable degree of validity of the ACCT score interpretation and use; that is, the ACCT scores were appropriately interpreted and used as intended. The validity argument was grounded in sound and sufficient theoretical and empirical evidence supporting assumptions in the domain description, evaluation, generalization, explanation, extrapolation, and utilisation inferences of the ACCT interpretive argument. Backing for the consequence inference was beyond the scope of this study. The Rasch measurement approach provided sound empirical evidence in support of the ACCT validity argument. Rasch-based evidence included unidimensionality, internal consistency, examinee competency dispersion and hierarchy, item difficulty dispersion and hierarchy, multiple-choice distractor functioning, differential test functioning, and uniform differential item functioning.
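For readers unfamiliar with the Rasch measurement approach mentioned above, the dichotomous Rasch model expresses the probability of a correct item response as a logistic function of the difference between examinee ability and item difficulty, both on a common logit scale. The sketch below is purely illustrative and is not drawn from the thesis; the function name and values are hypothetical.

```python
import math

def rasch_probability(theta: float, difficulty: float) -> float:
    """Probability that an examinee with ability `theta` (in logits)
    answers an item of the given `difficulty` (in logits) correctly,
    under the dichotomous Rasch model:
    P(X = 1) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# When ability equals item difficulty, the model gives a 50% chance.
print(rasch_probability(0.5, 0.5))  # 0.5

# Higher ability relative to difficulty raises the success probability.
print(rasch_probability(2.0, 0.0) > rasch_probability(0.0, 0.0))  # True
```

This one-parameter structure is what makes the Rasch-based evidence in the abstract (item difficulty hierarchy, examinee ability dispersion, differential item functioning) interpretable on a single latent dimension.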