Making interactions between humans and artificial agents successful is a major goal of interaction design. The aim of this paper is to provide researchers conducting interaction studies with a new framework for evaluating robot believability. By critically examining the ordinary sense of believability, we first argue that currently available notions of it are too underspecified for rigorous application in experimental settings. We then define four concepts that capture distinct senses of believability, each of which connects directly to an empirical methodology. Finally, we show how this framework has been, and can be, used in the construction of interaction studies by applying it to our own work in human–robot interaction.