Prediction is difficult, especially about the future
– Danish Proverb

1 Introduction

The prediction game is a dangerous one, but that, of course, is what makes it fun. The pitfalls are many: some technologies change exponentially but some don’t; completely new inventions, or fundamental limits, might appear at any time; and it can be difficult to say something informative without simply stating the obvious. In short, it’s easy to be wrong if you’re specific. (Although it is easy to be right if you’re Nostradamus.) Regardless, the purpose of this essay is to play this game. Consequently, I won’t be pursuing a technical discussion of the finer points of what a mind is, or how to build one, but rather attempting to paint an abstract portrait of the state of research in fields related to machine intelligence, broadly construed. I think the risks of undertaking this kind of prognostication are justified by the enormous potential impact of a new kind of technology that lies just around the corner. It is a technology we have been dreaming about—and dreading—for hundreds of years. I believe we are on the eve of artificial minds.

In 1958, Herbert Simon and Allen Newell claimed that “there are now in the world machines that think” and predicted that within ten years a computer would become world chess champion and write beautiful music (1958, p. 8). Becoming world chess champion took longer, and we still don’t have a digital Debussy. More importantly, even when a computer did become world chess champion, it was not generally seen as the success that Simon and Newell had expected. This is because the way in which Deep Blue beat Garry Kasparov did not strike many as advancing our understanding of cognition. Instead, it showed that brute-force computation, and a lot of careful tweaking by expert chess players, could surpass human performance in a specific, highly circumscribed environment.

Excitement about AI grew again in the 1980s, but was followed by funding cuts and general skepticism in the “AI winter” of the 1990s (Newquist 1994). Maybe we are just stuck in a thirty-year cycle of excitement followed by disappointment, and I am simply expressing the beginning of the next temporary uptick. However, I don’t think this is the case. Instead, I believe that there are qualitative changes in methods, computational platforms, and financial resources that place us in a historically unique position to develop artificial minds. I will discuss each of these in more detail in subsequent sections, but here is a brief overview.

Statistical and brain-like modeling methods are far more mature than they have ever been. Systems with millions (Garis et al. 2010; Eliasmith et al. 2012) and even tens of millions (Fox 2009) of simulated neurons are suddenly becoming common, and the scale of models is increasing at a rapid rate. In addition, the challenges of controlling a sophisticated, nonlinear body are being met by recent advances in robotics (Cheah et al. 2006; Schaal et al. 2007). These kinds of methodological advances represent a significant shift away from classical approaches to AI (which were largely responsible for the previously unfulfilled promises of the field) toward more neurally inspired, brain-like ones. I believe this change in focus will allow us to succeed where we haven’t before. In short, the conceptual tools and technical methods being developed for studying what I call “biological cognition” (Eliasmith 2013) will make a fundamental difference to our likelihood of success.
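To make “simulated neurons” concrete, here is a minimal sketch, in Python, of a single leaky integrate-and-fire (LIF) neuron, one of the simplest units that models of this kind wire together by the millions. The function name and all parameter values are illustrative assumptions, not taken from any of the models cited above.

```python
# A minimal, illustrative leaky integrate-and-fire (LIF) neuron.
# All parameters are illustrative assumptions, not values from any cited model.

def simulate_lif(current=1.5, tau_rc=0.02, tau_ref=0.002,
                 v_thresh=1.0, dt=0.001, t_end=0.2):
    """Drive one LIF neuron with a constant input current; return the
    membrane-voltage trace and the list of spike times."""
    v = 0.0           # membrane voltage (normalized units)
    refractory = 0.0  # time left in the post-spike refractory period
    voltages, spikes = [], []
    for step in range(int(t_end / dt)):
        t = step * dt
        if refractory > 0.0:
            refractory -= dt                  # still resetting after a spike
        else:
            v += dt * (current - v) / tau_rc  # leaky integration of input
            if v >= v_thresh:                 # threshold crossed: spike
                spikes.append(t)
                v, refractory = 0.0, tau_ref
        voltages.append(v)
    return voltages, spikes

_, spikes = simulate_lif()
print(f"{len(spikes)} spikes in 200 ms of simulated time")
```

Scaling up from here is largely a matter of connecting many such units with weighted synapses and running them in parallel, which is exactly the job the neuromorphic hardware discussed next is designed to do.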

Second, there have been closely allied and important advances in the kinds of computational platforms that can be exploited to run these models. So-called “neuromorphic” computing—hardware platforms that perform brain-style computation—has been rapidly scaling up, with several current projects expected to hit millions (Choudhary et al. 2012) and billions (Khan et al. 2008) of neurons running in real time within the next three to four years. These hardware advances are critical for performing efficient computation capable of realizing brain-like functions embedded in and controlling physical, robotic bodies.

Finally, unprecedented financial resources have been allocated by both public and private groups focusing on basic science and industrial applications. For instance, in February 2013 the European Union announced one billion euros in funding for the Human Brain Project, which focuses on developing a large-scale brain model as well as neuromorphic and robotic platforms. A month later, the Obama administration announced the BRAIN Initiative in the United States, which devotes the same level of funding to experimental, technological, and theoretical advances in neuroscience. More recently, there has been a huge amount of private investment:

Google purchased eight robotics and AI companies between December 2013 and January 2014, including industry leader Boston Dynamics (Stunt 2014).

Qualcomm has introduced the Zeroth processor, which is modeled on how the human brain works (Kumar 2013). They demonstrated a field-programmable gate array (FPGA) mock-up of the chip performing a reinforcement learning task on a robot.

Amazon has recently expressed a desire to provide the Amazon Prime Air service, which will use robotic quadcopters to deliver goods within thirty minutes of their having been ordered (Amazon 2013).

IBM has launched a product based on Watson, which famously beat the best human Jeopardy! players (http://ibm.com/innovation/us/watson/). The product provides confidence-based responses to natural language queries, and has been opened up to allow developers to use it in a wide variety of applications. IBM is also developing a neuromorphic platform (Esser et al. 2013).

In addition, there is a growing number of startups working on brain-inspired computing, including Numenta, the Brain Corporation, Vicarious, DeepMind (recently purchased by Google for $400 million), and Applied Brain Research, among many others. In short, I believe there are more dollars being directed at the problem than ever before.

It is primarily these three forces that I believe will allow us to build convincing examples of artificial minds in the next fifty years. And I believe we can do this without necessarily defining what it is that makes a “mind”—even an artificial one. As with many subtle concepts—such as “game,” to use Wittgenstein’s example, or “pornography,” to use Supreme Court Justice Potter Stewart’s example—I suspect we will avoid definitions and rely instead on our sophisticated, but poorly understood, methods of classifying the world around us. In the case of “minds,” these methods will be partly behavioural, partly theoretical, and partly based on judgments of similarity to the familiar. In any case, I do not propose to provide a definition here, but rather to point to reasons why the artifacts we continue to build will become more and more like the natural minds around us. In doing so, I survey recent technological, theoretical, and empirical developments that are important for supporting our progress on this front. I then suggest a timeline over which I expect these developments to take place. Finally, I conclude with what I expect to be the major philosophical and societal impacts of our being able to build artificial minds. As a reminder, I am adopting a somewhat high-level perspective on the behavioural sciences and related technologies in order to make clear where my broad (and likely wrong) predictions are coming from. In addition, if I am not entirely wrong, I suspect that the practical implications of such developments will prove salient to a broad audience, and so, as researchers in the area, we should consider the consequences of our research.