11 Discussion

Consciousness and the relation of the outside world to mental representation are central to philosophy of mind and have led to many diverse views (Vision 2011). While many of those views appear plausible in themselves, especially from a non-philosopher’s perspective, there is much disagreement among philosophers. Many of the positions are based on high-level views that approach consciousness in a top-down fashion. In contrast, our approach starts from a low-level control system for a behaving agent, with the goal of developing higher-level faculties bottom-up. The neural architecture thus implements a minimal cognitive system that can serve as a hypothesis for cognitive mechanisms and higher-level functioning, and that can be tested in a real-world system, for example on a robot. This allows the derivation of testable, quantitative hypotheses for higher-level phenomena, so that a bottom-up approach can nicely complement philosophical discussions focusing mainly on higher-level aspects. In addition, such a minimal cognitive system can provide functional descriptions of higher-level properties. We briefly introduced the reaCog system in this article, following this bottom-up approach. The central concern is the emergent properties that can be identified when analyzing this system. In particular, high-level properties such as emotions, attention, intention, volition, or consciousness have been considered here and related to the system.

From our point of view, such a bottom-up approach leads to a system that can be used to test quantitative hypotheses. Even though the system was not intended to model, for example, consciousness, it can be thoroughly analyzed and its emergent properties can be related to mental phenomena. This is particularly interesting because high-level descriptions can leave a lot of room for interpretation. In contrast, connecting mental phenomena to the mechanisms of a well-defined system allows for detailed studies and clear-cut definitions on a functional level. A system can thus be examined with respect to many, even diverging, views and may help resolve ambiguities. Knowledge gained from analyzing the system can in this way inform philosophical theories and refine existing definitions by identifying sufficient aspects as well as missing criteria.

One might ask whether the higher-level phenomena considered here are not simply beyond the reach of such a simple system. One basic problem is the frequently formulated assumption that all these phenomena have to be tied to the notion of an internal perspective and that phenomenality has a function in and of itself. In contrast, we claim that focusing on the functional aspect is a sensible approach. It is possible because we believe that the phenomenal aspect is always coupled to specific, yet unknown, properties of the neuronal system that, at the same time, have functional effects and give rise to subjective experience. In other words, adopting a monist view, we assume that we can circumvent the “hard” problem, i.e., the question concerning the subjective aspect of mental phenomena, without losing information concerning the function of the underlying procedures. Of course, we are not in a position to claim which of these structures, if any, are accompanied by phenomenality. If, however, the functions of the artificial system indeed correspond well enough to those of the neuronal structures that are accompanied by phenomenality, the artificial system may have this property, too.

The control network reaCog consists of local procedural modules. We have presented two subnetworks: Walknet, which controls walking, and Navinet, which deals with navigation. Both consist of a heterarchical structure of motivation units that form a recurrent neural network. Via competition and cooperation between these units, the network settles into various attractor states that enforce action selection. Selection of one procedure or a group of procedures protects the current behavioral context against non-relevant sensory input. An internal model of the body is part of the control network and coordinates joint movements during walking. As this model is flexible and predictive, it can be used for planning ahead through internal simulation. Following the definition of McFarland & Bösser (1993), the network, since it is based on reactive procedures and is capable of planning ahead, can be termed a cognitive system, giving rise to its name: reaCog. In combination with the attention controller, the whole framework can come up with new behavioral solutions when encountering problems, i.e., behaviors that are not automatically activated by the current context. Internal simulation allows these behaviors to be tested and their consequences to be predicted, which can then guide the selection process for the real system. The attention controller cannot function independently; it is tightly connected to the reactive structures. The procedural memory of the reactive system is further accompanied by perceptual memory and Word-nets, a specific form of mixed procedural and perceptual memory. The latter memory elements allow the introduction of symbolic information. Symbol grounding is realized by specific connections between the motivation unit of a Word-net and its partner motivation unit, which represents the corresponding concept in the procedural (or the perceptual) memory.
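To make the selection mechanism concrete, the following minimal sketch (plain Python/NumPy, not code from reaCog; all unit names and weight values are invented for illustration) shows how motivation units with self-excitation, mutual inhibition between competing units, and cooperative links from a context unit to its children can settle into an attractor state that selects one behavioral context.

```python
import numpy as np

# Purely illustrative motivation-unit network (not the actual reaCog code).
# Units excite themselves, a context unit supports its children, and units
# competing for the same resource inhibit each other (winner-take-all).
UNITS = ["walk", "stand", "forward", "backward"]
IDX = {name: i for i, name in enumerate(UNITS)}

W = np.zeros((len(UNITS), len(UNITS)))
np.fill_diagonal(W, 1.1)                     # self-excitation keeps a winner active
W[IDX["walk"], IDX["stand"]] = -1.0          # walk and stand compete ...
W[IDX["stand"], IDX["walk"]] = -1.0          # ... via mutual inhibition
W[IDX["forward"], IDX["backward"]] = -1.0
W[IDX["backward"], IDX["forward"]] = -1.0
W[IDX["forward"], IDX["walk"]] = 0.5         # the walking context supports its children
W[IDX["backward"], IDX["walk"]] = 0.5

def relax(activation, sensory_input, steps=50):
    """Iterate the recurrent dynamics until an attractor (one context) wins."""
    a = activation.copy()
    for _ in range(steps):
        a = np.clip(W @ a + sensory_input, 0.0, 1.0)   # piecewise-linear units
    return a

a0 = np.array([0.0, 1.0, 0.0, 0.0])          # current attractor: standing
stim = np.array([1.2, 0.0, 0.2, 0.0])        # sensory input favouring walking forward
result = relax(a0, stim)
print({u: round(float(v), 2) for u, v in zip(UNITS, result)})
# -> {'walk': 1.0, 'stand': 0.0, 'forward': 1.0, 'backward': 0.0}
```

Running the sketch, a stimulus favouring walking switches the network from the "stand" attractor to one in which "walk" and its child "forward" are active, illustrating how sensory input and recurrent competition jointly determine action selection.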

Key characteristics of reaCog are modularity, heterarchy, redundancy, cross-modal influences (e.g., path integration and landmark navigation in Navinet), bottom-up and top-down attention control, i.e., the selection of relevant sensory inputs, as well as recruitment of internal models for planning. The complete control system constitutes a holistic system as the central selection control process—including the internal body-model—is implemented as an RNN. Overall, reaCog follows Anderson’s massive redeployment hypothesis (Anderson 2010), since large parts of the reactive control network structure are reused in higher-level tasks (as discussed in detail in section 4 for planning ahead and in section 10c for Theory of Mind).

ReaCog nicely demonstrates how complex behavior can emerge from the interaction of simple control networks and coordination on a local level, as well as through the loop through the environment. Its feasibility has been shown, first, through the implementation of the system in dynamic simulation (for Navinet on a wheeled robot platform with two DoFs; for Walknet on a hexapod robot with twenty-two DoFs). Second, these control networks are currently being applied to a real robot, called Hector (Schneider et al. 2011).

Emergent properties are properties that have to be addressed using levels of description other than those used to describe the properties of the elements. In the reactive part of the system (Walknet, Navinet) we have already found some emergent properties (the development of different “gaits”, climbing over large gaps, and finding shortcuts in navigation, characterized as cognitive-map-like behavior), as well as forms of bottom-up and top-down attention. With respect to the notion of access consciousness, several contributing properties are present in reaCog. Most notably, planning ahead through internal simulation is central to reaCog. New behavioral plans are tested in the internal simulation, thus exploiting the existing internal model and its predictive capabilities. Only afterwards are successful behaviors applied to the real agent. In this way, the agent can deal with novel contexts and is not restricted to the hard-wired structure of the reactive system.
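The plan-test-apply idea can be summarized in a short sketch (illustrative only; the toy body model, the behavior names, and the numbers are invented and are not taken from reaCog): candidate behaviors are evaluated on a predictive internal model first, and only a behavior whose simulated outcome resolves the problem would be executed on the real agent.

```python
import random

# Illustrative sketch of the plan-test-apply loop, not the actual reaCog code.
# The toy "body model", the behaviour names, and all numbers are invented.

class ToyBodyModel:
    """Stand-in predictive model: does a behaviour bridge a gap of given width?"""
    REACH = {"step": 0.05, "stretch": 0.12, "swing_high": 0.09}  # metres

    def predicts_success(self, gap_width: float, behaviour: str) -> bool:
        return self.REACH.get(behaviour, 0.0) >= gap_width

def plan_via_internal_simulation(gap_width, candidates, body_model, max_trials=50):
    """Randomly activate candidate behaviours, test each on the internal model,
    and return the first one whose simulated outcome solves the problem."""
    for _ in range(max_trials):
        behaviour = random.choice(candidates)              # trial-and-error search
        if body_model.predicts_success(gap_width, behaviour):
            return behaviour                               # only now execute it
    return None                                            # no solution found

# The reactive behaviours cannot cross a 10 cm gap, so a new solution is sought
# in internal simulation before anything is tried on the real agent.
print(plan_via_internal_simulation(0.10, ["step", "stretch", "swing_high"],
                                   ToyBodyModel()))
```

The random selection stands in for the stochastic search of the attention controller; because selection is random, the sketch may occasionally return None, which corresponds to the case in which no solution is found within the given number of trials.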

Furthermore, the system shows global availability, which means that elements of the procedural memory can be addressed even if they do not belong to the current context. A third property contributing to access consciousness concerns the ability of the system to communicate with an external supervisor by following (i.e., understanding) verbal commands and by reporting on its internal states. Therefore, except for the capacity for linguistic reasoning, which is clearly missing, the criteria characterizing access consciousness as listed by Cleeremans (2005) are fulfilled. But there are also disadvantages: (i) Reactive, automatic control is faster. As cognitive control involves internal simulation (and probably multiple simulations), the whole process takes more time; in addition, higher-level control carries an overhead that reactive control does not. (ii) While access consciousness enables the system to deal with novel situations and to come up with new behaviors, the same processes might interfere when they are active during processing at the reactive control level. This might lead to worse performance when both levels are active at the same time. Both drawbacks have been confirmed in psychological experiments. We have not dealt with the subjective aspect of consciousness. But leaving this aside, we have shown that reaCog exhibits important constituent properties of access consciousness and may provide, in this way, a scaffold for a more complex system that can manifest additional basic aspects of consciousness.

The property of having an internal body-model and the property of being able to internally simulate behavior have been explicitly implemented and can therefore not be considered emergent properties in our approach. However, when referring to a hypothetical evolutionary process that may have led to the development of these properties, the appearance of the body-model and of the cognitive expansion might well be characterized as emergent.

We based our analysis and discussion on the perspective of Cleeremans and used his concepts. One counterargument concerning the notion of access consciousness is that it is too unspecific, as it does not help to distinguish between systems and may cover “too many” systems. For instance, one may ask, following a minimalist approach, whether this notion of access consciousness might even include programs like chess-playing software. One might also ask whether there is a fundamental difference between such a system and a system like reaCog.

While both systems are able to search for the solution to a problem using internal simulation, there are indeed crucial differences. A typical chess program would not be embodied, but today this difference could obviously be overcome easily: the system could be realized in a robot equipped with a vision system and a hand that moves the chess pieces.

More importantly, however, the basic difference between such a chess player and reaCog lies in their flexibility in using internal models. A chess-playing robot always operates within the same context, which is stored in a separate memory domain, for example in a list of symbolic rules. In contrast, reaCog basically operates with a reactive system, but can also switch to the state of internal simulation when a problem occurs. It then searches for a solution by testing memory elements that do not belong to the actual context. In other words, reaCog is able to exchange information between different contextual domains. Such a switch is not available to a chess-playing program at all; such a program cannot distinguish between different contexts. In other words, there is no global accessibility in the sense described for systems showing access consciousness. As a consequence, the discussion of the drawbacks connected with access consciousness mentioned in the paragraph on emergent properties above, that is, issues (i) and (ii), is not applicable to such a chess-playing system, nor are the dynamical effects observed in the experiments of Beilock et al. (2002, section 10b).
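The contrast can be stated compactly in a purely illustrative sketch (the procedure and context names are invented and belong to neither system): a reactive controller can only activate procedures that match its current context, whereas during internal simulation the complete procedural memory may be drawn upon, i.e., the memory is globally accessible.

```python
# Illustrative contrast, not code from either system: a reactive controller only
# selects procedures whose context matches the current one, whereas during
# internal simulation the whole procedural memory is available.
# Procedure and context names are invented.

PROCEDURES = {
    "swing_leg":   "walking",
    "stance_leg":  "walking",
    "stretch_leg": "reaching",    # belongs to a different behavioral context
    "turn_left":   "navigation",
}

def reactive_selection(current_context: str) -> list[str]:
    """Only procedures of the current context can be activated."""
    return [p for p, ctx in PROCEDURES.items() if ctx == current_context]

def cognitive_selection() -> list[str]:
    """During internal simulation, memory elements of all contexts are accessible."""
    return list(PROCEDURES)

print(reactive_selection("walking"))   # ['swing_leg', 'stance_leg']
print(cognitive_selection())           # the complete procedural memory
```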

The same holds for the phenomena of the psychological refractory period, the attentional blink, and the masking experiments discussed earlier in section 7. None of these phenomena can be addressed by a classical chess-player system: first, because, due to the different architectures, no search of a domain belonging to a different context is possible. A chess player does not meet the requirements of access consciousness as listed by Cleeremans (2005) and represented by reaCog. Second, no specific dynamics can be found in such a chess-player system that could be made responsible for the dynamical effects mentioned above and that may provide the substrate for the occurrence of phenomenal experience. Therefore, both systems are qualitatively different. If anything, the chess player may correspond to a subsection of the symbolic domain of access consciousness, which has not yet been explicitly addressed in this article.

In an earlier paper (Cruse & Schilling 2013), taking a conservative position, we argued that properties of metacognition could not be found in the earlier version of reaCog. We have now provided some new arguments that permit a different position concerning this matter. Using this architecture, the agent is able to monitor internal states and use this information to control its behavior. Internal states may also be able to represent the agent itself. A first expansion allows representation of the activations of a partner by using the same procedure as is used for controlling the agent’s own behavior (application of “shared” circuits, “mirroring”). Furthermore, using an expansion proposed by Cruse & Schilling (2011), the agent is also able to exploit and represent knowledge about the internal states of others, specifically by applying ToM.
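The “shared circuits” idea can be illustrated with a minimal sketch (invented for illustration, not taken from the model): the same predictive procedure that serves the agent’s own motor control is re-used, fed with observed data about a partner, to represent that partner’s behavior.

```python
# Minimal sketch of the "shared circuits" idea (invented for illustration, not
# taken from the model): the same predictive procedure used for the agent's own
# motor control is re-used, fed with observed partner data, to represent the
# partner ("mirroring").

class ForwardModel:
    """Toy kinematic model: next position = position + velocity * dt."""
    def predict(self, position: float, velocity: float, dt: float = 0.1) -> float:
        return position + velocity * dt

model = ForwardModel()
own_next     = model.predict(position=0.0, velocity=0.5)    # controlling oneself
partner_next = model.predict(position=2.0, velocity=-0.3)   # representing the partner
print(own_next, partner_next)                               # 0.05 1.97
```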

Cruse & Schilling (2011) have further shown how this network can be expanded to represent the discrimination between subject and object (e.g., Ego push Partner) and to attribute subjective experience (e.g., pain) to the partner using a shared body-model. A further expansion that allows for mutualism—two agents cooperate to reach a common goal (“shared intention”, Tomasello 2009)—requires two body-models, corresponding to what Tomasello calls a we-model.

In the remainder of this section we briefly mention some aspects not addressed by reaCog. First, not all combinations of the elements explained for our network have been tested within the complete system. For example, Walknet and Navinet have been tested in separate software and hardware simulations. Second, we concentrated on solving motor problems alone and did not deal at all with how this system could solve problems in the symbolic domain. From an embodied point of view, this restriction is not as problematic as it might initially seem, as the solution process for many problems can be traced back to abilities that are based on solving motor tasks (Glenberg & Gallese 2011); for example, this holds true even for abstract domains such as mathematical problem-solving (Lakoff & Nunez 2000).

Finally, an important aspect not addressed here in any detail concerns how the memory elements, including the weights of the motivation unit network, could be learned. Examples of learning the position and quality of new food sources in Navinet are given by Hoinville et al. (2012), and examples of learning perceptual networks, including the heterarchical arrangement of concepts, are given by Cruse & Schilling (2010a), but the ability to learn such properties has not yet been introduced into the complete system.