For decades, tests used for university admission, such as the SAT, have drawn many complaints, and perhaps the most serious criticism is their limited predictive validity. To be sure, achieving a high GPA, even in a student’s freshman year, requires far more than mathematics and verbal skills. Emotional maturity, English ability (for international students), whether a student needs to work part-time to support their education, access to campus facilities, and quality of instruction are all factors that can affect academic achievement.
Hannon1 examined the impact of individual learners’ social and cognitive characteristics on college student GPA and found that SAT scores were a significant predictor of freshman GPA only when entered as the first predictor in the statistical model; otherwise, academic self-efficacy, epistemic belief in learning, and high knowledge integration were the stronger predictors. Kobrin and Patterson2, meanwhile, found that the correlation between SAT scores and first-year GPA varied widely across institutions, and that this variation was associated with institutional factors such as the size of an institution’s financial aid package and the proportion of white freshmen admitted.
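To see why entry order matters, consider a minimal sketch of a hierarchical regression. The data, variable names, and effect sizes below are entirely invented and are not taken from Hannon’s study; the point is only that a predictor that shares variance with a stronger predictor looks important when entered first and contributes little when entered last.

```python
# Illustrative sketch (synthetic data, not from the cited study): how the order
# in which a predictor is entered into a hierarchical regression changes its
# apparent contribution to explained variance in GPA.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Toy setup: SAT is correlated with academic self-efficacy, but freshman GPA
# is driven mostly by self-efficacy here.
self_efficacy = rng.normal(size=n)
sat = 0.7 * self_efficacy + rng.normal(scale=0.7, size=n)
gpa = 0.6 * self_efficacy + 0.1 * sat + rng.normal(scale=0.5, size=n)

def r_squared(y, predictors):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# SAT entered first: it soaks up variance it shares with self-efficacy.
r2_sat_first = r_squared(gpa, [sat])
# SAT entered after self-efficacy: its incremental contribution shrinks.
r2_efficacy_only = r_squared(gpa, [self_efficacy])
r2_both = r_squared(gpa, [self_efficacy, sat])

print(f"R^2, SAT alone (entered first): {r2_sat_first:.3f}")
print(f"R^2 gained by adding SAT last:  {r2_both - r2_efficacy_only:.3f}")
```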
These personal and institutional factors, however, generally cannot be measured easily on an admissions test, and even an excellent high-school transcript or a high level of English proficiency offers no guarantee of success. From a pedagogical perspective, what an SAT or TOEFL score will not tell you is whether a student has sufficient academic skills and knowledge within their specific discipline to competently perform the coursework required to complete their degree or diploma. Biber et al.3 found that scores on the TOEFL iBT writing section had only weak to moderate correlations with students’ scores on academic writing tasks across various disciplines; correlations were particularly low for scores on overall organization of ideas at both the graduate and undergraduate levels. Cho and Bridgeman’s4 study of graduate and undergraduate students across ten universities likewise found only weak correlations between TOEFL iBT scores and GPA.
At the graduate level, the GRE General Test is described as “a measure of general cognitive skills”5 and so ought not to be regarded as a reliable predictor of discipline-specific competence either. Graduate-level coursework and research are carried out within a highly specialized program and require the application of skills and knowledge at a level of expertise considerably beyond that needed for an undergraduate degree.
Young6, for example, found that the mean GRE Verbal scores of students who were accepted into, but did not complete, a doctoral program in Educational Leadership were nearly identical to the mean scores of those who successfully completed the requirements. That is, GRE Verbal scores had no predictive validity with respect to students’ ability to attain the degree. Moneta-Koehler et al.7, researching a biomedical graduate program at Vanderbilt University, found that GRE performance predicted neither passing the Ph.D. qualifying exam nor successfully completing the Ph.D. Rubio et al.8 reported mixed results from earlier studies and found in their own study that GRE section scores had only very weak correlations with graduate GPA for students at both the master’s and doctoral levels across various disciplines at a midwestern university.
General cognitive skills do not reliably indicate readiness to perform at a level of academic competence sufficient to attain an advanced degree. Even the GMAT, an exam used for admission to business schools, has been found to have variable predictive validity: the quantitative section was a poor predictor at a UK business school9, and the predictive validity of both the verbal and quantitative sections varied across programs worldwide10.
If standardized test scores and transcripts are not sufficient predictors of academic success, how can program administrators make well-informed decisions regarding the academic readiness of applicants to their programs? In small programs, at least, one possibility would be to conduct individual interviews in which administrators use specific questions to evaluate a student’s readiness for study. An additional or alternative strategy, particularly in larger programs, would be to administer tasks representative of those typically assigned by instructors in the program and evaluate applicants’ performance against specific rubrics. Finally, as a discipline-specific “test”, a cloze test based on a journal article or book chapter covering a wide range of topics in the discipline could be administered to evaluate applicants’ ability to handle content specific to that field; a simple illustration of how such a passage might be constructed is sketched below. A vocabulary test, in which applicants match field-specific terms with their definitions, is also a possibility. The advantage of a cloze test would be the opportunity it provides test-takers to demonstrate not only their topical knowledge but also their skill at applying that knowledge to read the literature of the discipline.
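One simple way to prepare such a cloze passage, offered purely as an illustration, is fixed-ratio deletion: blank every nth word of a source text and keep the deleted words as an answer key. The sample passage and the deletion interval below are placeholders; a real instrument would draw on a discipline-specific journal article or book chapter and an interval chosen through piloting.

```python
# Minimal sketch of fixed-ratio cloze construction: blank every `interval`-th
# word of a passage and keep the deleted words as the answer key.
import re

def make_cloze(passage: str, interval: int = 7, start: int = 1):
    """Return (cloze_text, answer_key) with every `interval`-th word blanked."""
    tokens = passage.split()
    answers = []
    for i in range(start, len(tokens), interval):
        word = tokens[i]
        # Keep surrounding punctuation in the text; only the word goes in the key.
        core = re.match(r"[\w'-]+", word)
        if core is None:
            continue
        answers.append(core.group())
        blank_num = len(answers)
        tokens[i] = word.replace(core.group(), f"({blank_num})______", 1)
    return " ".join(tokens), answers

# Placeholder passage; a real test would use discipline-specific source text.
sample = ("Construct validity refers to the degree to which a test measures "
          "the theoretical construct it is intended to measure, and it is "
          "typically examined through correlations with related measures.")
text, key = make_cloze(sample, interval=7)
print(text)
print(key)
```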
Whatever strategies are selected, it is clear that administrators need information beyond standardized test scores to make reliable decisions about admitting students to their programs.
1Hannon, B. (2014). Predicting College Success: The Relative Contributions of Five Social/Personality Factors, Five Cognitive/Learning Factors and SAT Scores. Journal of Education and Training Studies, 2(4), 46–58.
2Kobrin, J., & Patterson, B. (2011). Contextual Factors Associated With the Validity of SAT Scores and High School GPA for Predicting First-Year College Grades. Educational Assessment, 16(4), 207–226. https://doi.org/10.1080/10627197.2011.635956
3Biber, D., Reppen, R., & Staples, S. (2017). Exploring the Relationship Between TOEFL iBT Scores and Disciplinary Writing Performance. TESOL Quarterly, 51(4), 948–960. https://doi.org/10.1002/tesq.359
4Cho, Y., & Bridgeman, B. (2012). Relationship of TOEFL iBT® scores to academic performance: Some evidence from American universities. Language Testing, 29(3), 421–442. https://doi.org/10.1177/0265532211430368
5Liu, O. L., Klieger, D. M., Bochenek, J. L., Holtzman, S. L., & Xu, J. (2016). An Investigation of the Use and Predictive Validity of Scores from the GRE® revised General Test in a Singaporean University (GRE Board Research Report GRE-16-01; ETS Research Report RR-16-05). ETS Research Report Series.
6Young, I. P. (2008). Predictive Validity of the GRE and GPAs for a Doctoral Program Focusing on Educational Leadership. Journal of Research on Leadership Education, 3(1).
7Moneta-Koehler, L., Brown, A. M., Petrie, K. A., Evans, B. J., & Chalkley, R. (2017). The Limitations of the GRE in Predicting Success in Biomedical Graduate School. PLoS ONE, 12(1), 1–17. https://doi.org/10.1371/journal.pone.0166742
8Rubio, D. M., Rubin, R. S., & Brennan, D. G. (2003). How Well Does the GRE Work for Your University? An Empirical Institutional Case Study of the Graduate Record Examination Across Multiple Disciplines. College & University, 78(4), 11–17.
9Dobson, P., Krapljan-Barr, P., & Vielba, C. (1999). An Evaluation of the Validity and Fairness of the Graduate Management Admissions Test (GMAT) Used for MBA Selection in a UK Business School. International Journal of Selection & Assessment, 7(4), 196. https://doi.org/10.1111/1468-2389.00119
10Talento-Miller, E. (2008). Generalizability of GMAT® Validity to Programs outside the U.S. International Journal of Testing, 8(2), 127–142. https://doi.org/10.1080/15305050802001193