Comparison of evaluation methods using structured usability problem reports
Abstract. Recent HCI research has produced analytic evaluation techniques that claim to predict potential usability problems for an interactive system. Validation of these methods has involved matching predicted problems against usability problems found during empirical user testing. This paper shows that matching predicted and actual problems requires careful attention, and that current approaches lack rigour or generality. Requirements for more rigorous and general matching procedures are set out, and a solution to one key requirement is presented: a new report structure for usability problems, designed to improve the quality of matches made between problems found during empirical user testing and those predicted by analytic methods. The use of this report format is placed within its design research context, an ongoing project on domain-specific methods for software visualizations.