6 Consequences for philosophy

So suppose that, fifty years hence, we have developed an understanding of cognitive systems that allows us to build artificial systems that are on par with, or, if we see fit, surpass the abilities of an average person. Suppose, that is, that we can build artificial agents that can move, react, adapt, and think much like human beings. What consequences, if any, would this have for our theoretical questions about cognition? I take these questions to largely be in the domain of philosophy of mind. In this section I consider several central issues in philosophy of mind and discuss what sorts of consequences I take building a human-like artificial agent to have for them.

Being a philosopher, I am certain that, for any contemporary problem we consider, at least some subset of those who have a committed opinion about that problem will not admit that any amount of technical advance can “solve” it. I suspect, however, that their opinions may end up carrying about as much weight as those of a modern-day vitalist. To take one easy example, let us think for a moment about contemporary dualism. Some contemporary dualists hold that even if we had a complete understanding of how the brain functions, we would be no closer to solving the “hard problem” of consciousness (Chalmers 1996). The “hard problem” is the problem of explaining how subjective experience arises from neural activity; that is, how the phenomenal experiences we know from a first-person perspective can be accounted for by third-person physicalist approaches to understanding the mind. If we have indeed constructed artificial agents that behave much like people, share a wide variety of internal states with people, are fully empirically accessible, and report experiences as people do, it is not obvious in what sense this problem will remain unsolved. Philosophers committed to the notion that no amount of empirical knowledge will solve the problem will of course dismiss such an accomplishment on the strength of their intuitions. I suspect, however, that when most people are actually confronted with such an agent—one they can interrogate to their heart’s content and one about which they can have complete knowledge of its functioning—it will seem odd indeed to suppose that we cannot explain how its subjective experience is generated. I suspect it will seem as odd as someone nowadays claiming that, despite our current understanding of biochemistry, we cannot expect to explain how life is generated. Another way to put this is that the “strong intuitions” of contemporary dualists will hold little plausibility in the face of actually existing, convincing artificial agents, and so, I suspect, such intuitions will become even more of a rarity.

I refer to this example as “easy” because the central reasons for rejecting dualism are only strengthened, not generated, by the existence of sophisticated artificial minds. That is, good arguments against the dualist view are more or less independent of the current state of agent construction (although the existence of such agents will likely sway intuitions). Other philosophical conundrums, however, like Searle’s famous Chinese room argument (Searle 1980), have responses that depend fairly explicitly on our ability to construct artificial agents. In particular, the “systems reply” suggests that a sufficiently complex system will have the same intentional states as a biological cognitive system. For those who think that this is a good rejection of Searle’s strong intentionalist views, having systems that meet all the requirements of their currently hypothetical agents would provide strong empirical evidence consistent with their position. Of course, the existence of such artificial agents is unlikely to convince those, like Searle, who believe that there is some fundamental property of biology that allows intentionality to gain a foothold. But such a position will seem that much more tenuous if every means of measuring intentionality produces similar measurements across non-biological and biological agents. In any case, realizing the scenario envisioned by the systems reply ultimately depends on our ability to construct sufficiently sophisticated artificial agents. And I am suggesting that such agents are likely to be available in the next fifty years.

More immediately, I suspect we will be able to make significant headway on several problems traditionally considered philosophical well before we reach the fifty-year mark. For example, the frame problem—i.e., the problem of knowing which representational states to update in a dynamic environment—is one that contemporary methods, like control theory and machine learning, struggle with much less than classical methods did. Because the dynamics of the environment are explicitly included in the world-model exploited by such control-theoretic and statistical methods, updating state representations naturally incorporates the kinds of expectations that caused such problems for symbolic approaches.
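To make the contrast concrete, here is a minimal sketch, in Python, of the kind of state update such methods perform. It is my own illustration rather than anything from the text: a toy Kalman-filter-style world model in which the dynamics matrix propagates every state variable at once, so what changes and what stays the same falls out of the model rather than requiring hand-written frame axioms. The state layout and numerical values are assumptions chosen purely for illustration.

```python
# Illustrative sketch (not from the text): with an explicit dynamics model,
# *all* state variables are updated by the same predict step, so expectations
# about what changes (and what does not) come from the model itself.
import numpy as np

# Hypothetical world state: [object_x, object_velocity, door_angle]
x = np.array([0.0, 1.0, 0.0])          # current state estimate
P = np.eye(3) * 0.1                    # estimate covariance

dt = 0.1
A = np.array([[1.0, dt, 0.0],          # the object moves with its velocity;
              [0.0, 1.0, 0.0],          # the door, by default, stays put
              [0.0, 0.0, 1.0]])
Q = np.eye(3) * 0.01                   # process noise

H = np.array([[1.0, 0.0, 0.0]])        # we only observe the object's position
R = np.array([[0.05]])                 # observation noise

def predict(x, P):
    """Propagate every state variable through the dynamics model."""
    return A @ x, A @ P @ A.T + Q

def update(x, P, z):
    """Correct the estimate with a new observation of object position."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

x, P = predict(x, P)
x, P = update(x, P, z=np.array([0.12]))
print(x)  # position/velocity revised; door angle left (nearly) unchanged
```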

Similarly, explicit quantitative solutions are suggested for the symbol-grounding problem through integrated models that incorporate aspects of both statistical perceptual processing and syntactic manipulation. Even in simple models, like Spaun, it is clear how the symbols for digits that are syntactically manipulated are related to inputs coming from the external world (Eliasmith 2013). And it is clear how those same symbols can play a role in driving the model’s body to express its knowledge about those representations. As a result, the tasks that Spaun can undertake demonstrate both conceptual knowledge, through the symbol-like relationships between numbers (e.g., in the counting task), and perceptual knowledge, through categorization and the ability to drive its motor system to reproduce visual properties (e.g., in the copy-drawing task).
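As a rough illustration of the kind of grounding described here (and not Spaun’s actual implementation), the following sketch shows how vectors standing in for perceptual features can double as symbol-like tokens that support both syntactic binding and comparison back to noisy perceptual input. The vector dimensionality, the binding operation (circular convolution), and all the names are assumptions made for the sake of the example.

```python
# Toy sketch of grounded, symbol-like vectors (my illustration, not Spaun).
import numpy as np

D = 64
rng = np.random.default_rng(0)

def unit_vector(d):
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution: a compositional, symbol-like binding of vectors."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# "Perceptual" vectors for digits, standing in for compressed visual features,
# plus a role vector used for syntactic structure.
TWO, THREE, NEXT = unit_vector(D), unit_vector(D), unit_vector(D)

# A structured representation such as "the item after TWO":
structure = bind(NEXT, TWO)

# The same vectors can be compared against perception-like input
# (e.g., for categorization, or to drive a motor reproduction of the digit).
noisy_percept = TWO + 0.3 * rng.standard_normal(D)
scores = {name: float(np.dot(noisy_percept, v))
          for name, v in [("TWO", TWO), ("THREE", THREE)]}
print(max(scores, key=scores.get))  # expected: "TWO"
```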

In some cases, rather than resolving philosophical debates, the advent of sophisticated artificial agents is likely to make these debates far more empirically grounded. These include debates about the nature of concepts, conceptual change, and functionalism, among others. However these debates turn out, it seems clear that having an engineered, working system that can generate behaviour as sophisticated as that which gave rise to these theoretical ideas in the first place will allow a systematic investigation of their appropriate application. After all, there are few, if any, limits on the empirical information we can garner from such constructed systems. In addition, having built the system explicitly makes it unlikely that we would be unaware of some “critical element” essential to generating the observed behaviours.

Even without such a working system, I believe there are already hints as to how these debates are likely to be resolved, given the theoretical approaches I highlighted earlier. For instance, I suspect that we will find that concepts are explained by a combination of vector space representations and a restricted class of dynamic processes defined over those spaces (Eliasmith 2013). Similarly, quantifying the adaptive nature of those representations and processes will illuminate the mechanisms of conceptual change in individuals (Thagard 2014). In addition, functionalism will probably seem too crude a hypothesis given a detailed understanding of how to build a wide variety of artificial minds. Perhaps a kind of “functionalism with error bars” will take its place, providing a useful means of talking about degrees of functional similarity and allowing functional characterizations of complex systems to be quantified. Consequently, suggestions about which functions are or are not necessary for “mindedness” can be empirically tested through explicit implementation and experimentation. This will not solve the problem of mapping experimental results onto conceptual claims (a problem we currently face when considering non-human and even some human subjects), but it will make functionalism as empirically accessible as it plausibly can be.
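One speculative way to picture “functionalism with error bars” is as a graded, statistical comparison of input-output behaviour. The sketch below, which is purely illustrative, compares two hypothetical systems over a battery of probe inputs and reports a mean functional similarity with a bootstrapped confidence interval; the systems, the probes, and the similarity metric are all placeholder assumptions.

```python
# Speculative sketch: quantify "degrees of functional similarity" empirically.
import numpy as np

rng = np.random.default_rng(1)

def system_a(x):
    return np.tanh(2.0 * x)

def system_b(x):
    # A different mechanism that approximates the same function imperfectly.
    return np.clip(1.8 * x, -1.0, 1.0)

probes = rng.uniform(-2.0, 2.0, size=500)
errors = np.abs(system_a(probes) - system_b(probes))
similarity = 1.0 - errors                     # 1 = identical output on a probe

# Bootstrap the mean similarity to obtain the "error bars".
boot = [similarity[rng.integers(0, len(similarity), len(similarity))].mean()
        for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"functional similarity: {similarity.mean():.3f} "
      f"(95% CI {lo:.3f} to {hi:.3f})")
```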

In addition to these philosophical issues that may undergo reconceptualization with the construction of artificial minds, there are others that are bound to become more vexing. For example, the application of ethical theory may, for the first time, extend to engineered devices. If, after all, we have built artificial minds capable of understanding their place in the universe, it seems likely that we will have to worry about the possibility of their suffering (Metzinger 2013). Neither understanding how such devices work nor having explicitly built them seems sufficient for dismissing them as having no moral status. And while theories of non-human ethics have already been developed, it is not clear how much, or how little, theories of non-biological ethics will be able to borrow from them.

I suspect that the complexities introduced to ethical theory will go beyond adding a new category of potential application. Because artificial minds will be designed, they may be designed precisely so that what have traditionally been morally objectionable relationships between minds seem less problematic. Consider, for instance, a robot that is designed to gain maximal self-fulfillment from providing service to people. That is, unlike any biological species of which we are aware, these robots place service to humans above all else. Is a slave-like relationship between humans and such minds still wrong in this instance? Whatever our analysis of why slavery is wrong, it seems likely that we will be able to design artificial minds that bypass that analysis. This is a unique quandary: while it is currently possible for certain individuals to claim to have such slave-aligned goals, it is always possible to argue that they are simply mistaken in their personal psychological analysis. In the case of minds whose psychology is designed in a known manner, however, such goals will at least seem much more genuinely held. This is only one among many new kinds of challenges that ethical theory will face with the development of sophisticated artificial agents (Metzinger 2013).

I do not take this admittedly brief discussion to do justice to any of these subtle philosophical issues. My main purpose here is to provide a few examples of how the technological developments discussed earlier are likely to affect our theoretical inquiry. On some occasions such developments will strengthen already common intuitions; on others they may provide deep empirical access to closely related issues; and on still others they will serve to make complex issues even more so.