Non‐response biases in surveys of schoolchildren: the case of the English Programme for International Student Assessment (PISA) samples
Summary. We analyse response patterns to an important survey of schoolchildren, exploiting rich auxiliary information on respondents’ and non‐respondents’ cognitive ability that is correlated both with response and the learning achievement that the survey aims to measure. The survey is the Programme for International Student Assessment (PISA), which sets response thresholds in an attempt to control the quality of data. We analyse the case of England for 2000, when response rates were deemed sufficiently high by the organizers of the survey to publish the results, and 2003, when response rates were a little lower and deemed of sufficient concern for the results not to be published. We construct weights that account for the pattern of non‐response by using two methods: propensity scores and the generalized regression estimator. There is clear evidence of biases, but there is no indication that the slightly higher response rates in 2000 were associated with higher quality data. This underlines the danger of using response rate thresholds as a guide to quality of data.
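The propensity-score approach mentioned in the summary can be illustrated with a minimal sketch. The idea is to model each pupil's probability of responding from auxiliary information observed for both respondents and non-respondents (here a single simulated prior ability score stands in for the rich administrative data the paper exploits; all variable names and parameter values below are illustrative assumptions, not the paper's actual specification), then reweight respondents by the inverse of their estimated response propensity so that hard-to-reach pupils count for more.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical auxiliary variable: a prior cognitive test score known
# for both respondents and non-respondents (an illustrative stand-in
# for the administrative data used in the paper).
n = 5000
ability = rng.normal(0.0, 1.0, n)

# Simulate non-random response: higher-ability pupils respond more often.
true_logit = -0.2 + 0.8 * ability
responded = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))

# Outcome the survey aims to measure, correlated with ability.
outcome = ability + rng.normal(0.0, 1.0, n)

# Fit a logistic response-propensity model by gradient ascent on the
# mean log-likelihood (a simple fit for illustration).
X = np.column_stack([np.ones(n), ability])
beta = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (responded - p) / n

# Inverse-propensity weights: respondents who resemble non-respondents
# (low estimated response probability) receive larger weights.
p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
weights = 1.0 / p_hat[responded]

naive_mean = outcome[responded].mean()
weighted_mean = np.average(outcome[responded], weights=weights)
population_mean = outcome.mean()
# The naive respondent mean overstates achievement because response is
# selective on ability; the weighted mean moves it back towards the
# population value.
print(naive_mean, weighted_mean, population_mean)
```

The generalized regression (GREG) estimator also named in the summary pursues the same goal differently, calibrating the weights so that weighted totals of the auxiliary variables match their known population totals; it is not shown here.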