Computationally Constrained Beliefs
Abstract: People and intelligent computers, if there ever are any, will both have to believe certain things in order to be intelligent agents at all, or to be a particular sort of intelligent agent. I distinguish implicit beliefs that are inherent in the architecture of a natural or artificial agent, in the way it is 'wired', from explicit beliefs that are encoded in a way that makes them easier to learn and to erase if proven mistaken. I introduce the term IFI, which stands for irresistible framework intuition, for an implicit belief that can come into conflict with an explicit one. IFIs are a key element of any theory of consciousness that explains qualia and other aspects of phenomenology as second-order beliefs about perception. Before I can survey the IFI landscape, I review evidence that the brains of humans, and presumably of other intelligent agents, consist of many specialized modules that (a) are capable of sharing a unified workspace on urgent occasions, and (b) jointly model themselves as a single agent. I also review previous work relevant to my subject. Then I explore several IFIs, starting with, 'My future actions are free from the control of physical laws'. Most of them are universal, in the sense that any intelligent agent will share them, though the case must be argued for each IFI. When made explicit, IFIs may look dubious or counterproductive, but they really are irresistible, so we find ourselves in the odd position of oscillating between justified beliefs and conflicting but irresistible beliefs. We cannot hope that some process of argumentation will resolve the conflict.
Document Type: Research Article
Affiliations: Yale Computer Science Department, Email: firstname.lastname@example.org
Publication date: January 1, 2013