ABSTRACT

The long-term aim of the field of artificial intelligence (AI) is to endow computers and robots with intelligence, where intelligence might be defined as the ability to perform tasks and attain goals in a wide variety of environments (Legg and Hutter, 2007). In pursuit of this aim, the field has developed a number of enabling technologies, such as automated reasoning, computer vision, and machine learning, which can be applied to specialist tasks in industrial or commercial settings. But our concern here is the yet-to-be-realized prospect of human-level artificial intelligence, that is to say, AI that can match a typical human’s capacity to perform tasks and attain goals in a wide variety of environments. A key feature of human-level intelligence is its generality (McCarthy, 1987): it can adapt to an enormous variety of environments, tasks, and goals. Achieving this level of generality in AI has proven very difficult. But let us suppose it is possible. What, then, are the philosophical implications? This is the question to be addressed here, after a brief historical account of the field and a short sketch of the challenges facing developers of human-level AI.