2 Technological developments

Because I take it that brain-based approaches provide the “difference that makes a difference” between current approaches and traditional AI, here I focus on developments in neuromorphic and robotic technology. Notably, all of the developments in neuromorphic hardware that I discuss below are inspired by some basic features of neural computation. For instance, all of the neuromorphic approaches use spiking neural networks (SNNs) to encode and process information. In addition, there is broad agreement that biological computation is orders of magnitude more power-efficient than digital computation (Hasler & Marr 2013). Consequently, a central motivation behind exploring these hardware technologies is that they might allow for sophisticated information processing using small amounts of power. This is critical for applications that require the processing to be near the data, such as in robotics and remote sensing. In what follows I begin by providing a sample of several major projects in neuromorphic computing that span the space of current work in the area. I then briefly discuss the current state of high-performance computing and robotics, to identify the roles of the most relevant technologies for developing artificial minds.
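To make the shared computational substrate concrete, the following is a minimal sketch of the leaky integrate-and-fire (LIF) neuron, the simplest spiking model of the kind these platforms implement. The parameter values and the function name are illustrative, not drawn from any of the projects discussed here.

```python
def simulate_lif(input_current, dt=0.001, tau=0.02, v_threshold=1.0, v_reset=0.0):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the indices of the time steps at which the neuron spiked.
    All parameter values are illustrative defaults.
    """
    v = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        # Leaky integration: the membrane voltage decays toward zero
        # while being driven by the input current.
        v += (dt / tau) * (current - v)
        if v >= v_threshold:
            spikes.append(t)   # emit a spike
            v = v_reset        # and reset the membrane voltage
    return spikes

# A constant suprathreshold input produces a regular spike train.
spikes = simulate_lif([2.0] * 1000)
```

Information in such a network is carried by the timing and rate of these discrete spikes rather than by continuous activation values, which is what makes event-driven, low-power hardware implementations attractive.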

To complement its cognitively focused Watson project, IBM has been developing a neuromorphic architecture, a digital model of individual neurons, and a method for programming this architecture (Esser et al. 2013). The architecture itself is called TrueNorth. They argue that the “low-precision, synthetic, simultaneous, pattern-based metaphor of TrueNorth is a fitting complement to the high-precision, analytical, sequential, logic-based metaphor of today’s von Neumann computers” (Esser et al. 2013, p. 1). TrueNorth organizes neurons into blocks of 256, in which each neuron can receive input from 256 axons. To assist with programming this hardware, IBM has introduced the notion of a “corelet”: an abstraction that encapsulates the local connectivity of a small network. Corelets act like small programs that can be composed to build up more complex functions. To date, demonstrations of the approach have focused on simple, largely feed-forward, standard applications, though across a wide range of methods, including restricted Boltzmann machines (RBMs), liquid state machines, hidden Markov models (HMMs), and so on. It should be noted that the proposed chip does not yet exist; current demonstrations run on detailed simulations of the architecture. However, because it is a digital chip, the simulations are highly accurate.
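The compositional idea behind corelets can be illustrated schematically. The sketch below is not IBM’s corelet API (which is proprietary and object-oriented in its own way); the class, names, and toy transformations are invented solely to show how small units that hide their internal connectivity can be wired together into larger functions.

```python
class Corelet:
    """Schematic stand-in for a corelet: a named unit that exposes
    inputs and outputs while hiding its internal connectivity."""

    def __init__(self, name, fn):
        self.name = name
        self.fn = fn  # the encapsulated input -> output transformation

    def __call__(self, spikes):
        return self.fn(spikes)


def compose(a, b):
    """Wire corelet a's outputs to corelet b's inputs, yielding a new corelet."""
    return Corelet(f"{a.name}->{b.name}", lambda s: b(a(s)))


# Two toy corelets composed into a larger function.
threshold = Corelet("threshold", lambda s: [1 if x > 0.5 else 0 for x in s])
invert = Corelet("invert", lambda s: [1 - x for x in s])
pipeline = compose(threshold, invert)
```

The point of the abstraction is that a programmer manipulates only the composed interface (`pipeline`), never the internal wiring of the constituent blocks.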

A direct competitor to IBM’s approach is the Zeroth neuromorphic chip from Qualcomm. Like IBM, Qualcomm believes that constructing brain-inspired hardware will provide a new paradigm for exploiting the efficiencies of neural computation, targeted at the kind of information processing at which brains excel but which is extremely challenging for von Neumann approaches. The main difference between the two approaches is that Qualcomm has committed to allowing online learning to take place on the hardware. Consequently, they announced their processor by demonstrating its application in a reinforcement learning paradigm on a real-world robot. They have released videos of the robot maneuvering in an environment and learning to visit only one kind of stimulus (white boxes: http://www.youtube.com/watch?v=8c1Noq2K96c). It should again be noted that this is an FPGA simulation of a digital chip that does not yet exist. However, the simulation, like IBM’s, is highly accurate.
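Qualcomm has not published the details of its learning algorithm, but the general shape of such a reinforcement learning demonstration can be sketched as follows. The stimulus names, reward values, and update rule here are all invented for illustration; the sketch shows only the generic delta-rule style of value learning that underlies this kind of demo.

```python
import random


def train_preference(n_trials=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Toy reinforcement learning loop: an agent learns to approach one
    stimulus ('white') and avoid another ('blue'). Stimuli, rewards, and
    parameters are illustrative assumptions, not Qualcomm's algorithm."""
    random.seed(seed)
    value = {"white": 0.0, "blue": 0.0}    # learned value of visiting each box
    reward = {"white": 1.0, "blue": -1.0}  # assumed trainer feedback
    for _ in range(n_trials):
        # Epsilon-greedy choice: mostly exploit, occasionally explore.
        if random.random() < epsilon:
            choice = random.choice(["white", "blue"])
        else:
            choice = max(value, key=value.get)
        # Delta-rule update: move the estimate toward the received reward.
        value[choice] += alpha * (reward[choice] - value[choice])
    return value


values = train_preference()
```

After training, the learned values separate the two stimuli, which is the behavioural signature visible in the video: the robot comes to approach white boxes and ignore the rest.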

In the academic sphere, the SpiNNaker project at Manchester University has not focused on designing new kinds of chips, but has instead focused on using low-power ARM processors on a massive scale to enable large-scale brain simulations (Khan et al. 2008). As a result, the focus has been on designing approaches to routing that allow for the high-bandwidth communication that underwrites much of the brain’s information processing. Simulations on the SpiNNaker hardware typically employ spiking neurons, as with IBM and Qualcomm, and occasionally allow for learning (Davies et al. 2013), as with Qualcomm’s approach. However, even with low-power conventional chips, per-neuron energy usage is projected to be higher on the SpiNNaker platform. Nevertheless, SpiNNaker boards have been used in a wider variety of larger-scale embodied and non-embodied applications, including simulating place cells, path integration, simple sensory-guided movements, and item classification.

There are also a number of neuromorphic projects that use analog instead of digital implementations of neurons. Analog approaches tend to be several orders of magnitude more power-efficient (Hasler & Marr 2013), though also noisier, less reliable, and subject to process variation (i.e., variations in the hardware due to variability in the size of components on the manufactured chip). These projects include work on the Neurogrid chip at Stanford University (Choudhary et al. 2012), and on a chip at ETH Zürich (Corradi, Eliasmith & Indiveri 2014). The Neurogrid chip has demonstrated larger numbers of simulated neurons—up to a million—while the ETH Zürich chip allows for online learning. More recently, the Neurogrid chip has been used to control a nonlinear, six-degree-of-freedom robotic arm, exhibiting perhaps the most sophisticated information processing achieved by an analog chip to date.

In addition to the above neuromorphic projects, which are focused on cortical simulation, there have been several specialized neuromorphic chips that mimic the information processing of different perceptual systems. For example, the dynamic vision sensor (DVS) artificial retina developed at ETH Zürich performs real-time vision processing that results in a stream of neuron-like spikes (Lichtsteiner et al. 2008). Similarly, an artificial cochlea called AEREAR2 has been developed that generates spikes in response to auditory signals (Li et al. 2012). The development of these and other neuronal sensors makes it possible to build fully embodied spiking neuromorphic systems (Galluppi et al. 2014).
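The output format of these sensors can be illustrated with a simple sketch. A real DVS responds asynchronously, per pixel, to changes in log intensity; the frame-based approximation below (with an invented function name and threshold) only shows the kind of event stream such sensors produce: a sparse list of (time, x, y, polarity) tuples rather than full image frames.

```python
def frames_to_events(frames, threshold=0.1):
    """Convert a sequence of 2D intensity frames into DVS-style events.

    Emits (t, x, y, polarity) whenever a pixel's intensity changes by at
    least `threshold` since its last event. Illustrative approximation only:
    a real DVS operates asynchronously on log intensity, not on frames.
    """
    events = []
    last = [row[:] for row in frames[0]]  # per-pixel reference intensity
    for t, frame in enumerate(frames[1:], start=1):
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                diff = v - last[y][x]
                if abs(diff) >= threshold:
                    events.append((t, x, y, 1 if diff > 0 else -1))
                    last[y][x] = v  # update reference after each event
    return events


# One pixel brightens between two frames, yielding a single positive event.
events = frames_to_events([[[0.0, 0.0]], [[0.5, 0.0]]])
```

Because unchanging pixels generate no events, the data rate tracks scene dynamics rather than frame rate, which is what makes these sensors a natural front end for spiking neuromorphic systems.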

There have also been developments in traditional computing platforms that are important for supporting the construction of models that run on neuromorphic hardware. Testing and debugging large-scale neural models is often much easier with traditional computational platforms such as graphics processing units (GPUs) and supercomputers. In addition, the development of neuromorphic hardware often relies on simulation of the designs before manufacture. For example, IBM has been testing its TrueNorth architecture with very large-scale simulations running up to 500 billion neurons. These kinds of simulations allow designs to be stress-tested and fine-tuned before costly production is undertaken. In short, the development of traditional hardware is also an important technological advance that supports the rapid development of more biologically-based approaches to constructing artificial cognitive systems.

A third area of rapid technological development that is critical for successfully realizing artificial minds is the field of robotics. The success of recent methods in robotics has entered public awareness with the creation of the Google car. This self-driving vehicle has successfully navigated hundreds of thousands of miles of urban and rural roadways. Many of the technologies in the car were developed out of DARPA’s Grand Challenge to build an autonomous vehicle that would be tested in both urban and rural settings. Due to the success of the first three iterations of the Grand Challenge, DARPA is now funding a challenge to build robots that can be deployed in emergency situations, such as a nuclear meltdown or other disaster.

One of the most impressive humanoid robots to be built for this challenge is Atlas, constructed by Boston Dynamics. It has twenty-eight degrees of freedom, covering two arms, two legs, a torso, and a head. The robot has been demonstrated walking bipedally, even in extremely challenging environments in which it must use its hands to help navigate and steady itself (http://www.youtube.com/watch?v=zkBnFPBV3f0). Several teams in this most recent Grand Challenge have been awarded a copy of Atlas and are now competing to design algorithms that improve its performance.

In fact, there have been a wide variety of significant advances in robotic control algorithms, enabling robots—including quadcopters, wheeled platforms, and humanoid robots—to perform tasks more accurately and more quickly than had previously been possible. This has led to the recent announcement of one of the first human-versus-robot dexterity competitions. Just as IBM pitted Watson against human Jeopardy champions, Kuka has pitted its high-speed arm against the human table tennis champion Timo Boll (http://www.youtube.com/watch?v=_mbdtupCbc4). Despite the somewhat disappointing outcome, this kind of competition would not have been thought possible a mere five years ago (Ackerman 2014).

These three areas of technological development—neuromorphics, high-performance conventional computing, and robotics—are progressing at an incredibly rapid pace. And, more importantly, their convergence will allow a new class of artificial agents to be built: agents that can begin processing information at very similar speeds, and support very similar skills, to those we observe in the animal kingdom. It is perhaps important to emphasize that my purpose here is predictive. I am not claiming that current technologies are sufficient for building a new kind of artificial mind, but rather that they lay the foundations, and are progressing at a sufficient rate to make it reasonable to expect that the sophistication, adaptability, flexibility, and robustness of artificial minds will rapidly approach those of the human mind. We might again worry that it will be difficult to measure such progress, but I would suggest that progress will be made along many dimensions simultaneously, so picking nearly any of these dimensions will reveal some measurable improvement. In general, multi-dimensional similarity judgements are likely to result in “I’ll know it when I see it” kinds of reactions to classifying complicated examples. This may be derided by some as “hand-wavy”, but it might also be a simple acknowledgement that “mindedness” is complex. I would like to be clear that my claims about approaching human mindful behaviour are to be taken as applying to the vast majority of the many measures we use for identifying minds.