Intelligence and Understanding -- Limits of Artificial Intelligence
Much contemporary artificial intelligence (AI) research neglects to investigate the nature of its key object of study: intelligence. This paper seeks to compensate for that neglect by offering an ontology of intelligence, so as to determine whether an artificial system can truly be described as intelligent. AI research frequently operates with an 'efficiency' sense of intelligence, which identifies this property with a system's problem-solving capacity. Invoking the thesis of biological externalism, according to which our mentalistic vocabulary is essentially tied to picking out the behaviors of living creatures, I argue that ascribing mental properties to non-living systems is categorially inappropriate, since relating to a problem space in the first place necessarily has biological parameters. I explain AIs as thought-models, which need not therefore be understood as thinking models: it makes no sense to attribute intelligence to such models outside the context of our intelligent use of them for our own problem-solving ends. Finally, I maintain that thought itself is a sense modality, bound to inherently contextual forms of understanding. As yet, there is no reason to think that we can substitute any alternative for the 'lifeworld' such understanding inhabits, let alone anything digital.
Document Type: Research Article
Affiliations: Department of Philosophy, University of Bonn, Germany
Publication date: January 1, 2020