Assertion‐reason multiple‐choice testing as a tool for deep learning: a qualitative analysis
This paper reflects on the ongoing debate surrounding the usefulness (or otherwise) of multiple‐choice questions (MCQ) as an assessment instrument. The context is a graduate school of business in Australia, where an experiment was conducted to investigate the use of assertion‐reason questions (ARQ), a sophisticated form of MCQ that aims to encourage higher‐order thinking on the part of the student. The paper builds on the work of Connelly (2004), which produced a quantitative analysis of the use of ARQ testing in two economics course units in a flexibly delivered Master of Business Administration (MBA) program. Connelly's main findings were that ARQ tests were good substitutes for the more conventional multiple‐choice/short‐answer questions and, perhaps more significantly, that ARQ test performance was a good predictor of student performance in essays—the assessment instrument most widely favoured as an indicator of deeper learning. The main focus of this paper is the validity of the second of these findings. Analysis of questionnaire data casts some doubt on whether student performance in ARQ tests can indeed be regarded as a sound indicator of deeper learning; student reactions and opinions suggest instead that performance may have more to do with one's proficiency in the English language.