9 Attention, volition and intention

In this section we turn to attention, volition, and intention. To what degree can these properties be attributed to our system? We start from the definitions of attention provided by Desimone & Duncan (1995), of volition from Goschke (2013), and of intention from Pacherie (2006) and Goschke (2013).

Attention is the ongoing selection process in perception. It can be driven bottom-up, i.e., by sensory influences, or controlled by top-down influences (Desimone & Duncan 1995). Top-down control of attention depends on the agent's internal or emotional state and may also depend on familiarity with the stimulus.

We can indeed find properties corresponding to attention in reaCog. The motivation network consists of local clusters of units that compete at this local level and thereby form coalitions of units and small subclusters. As an example, we introduced the selection of procedures at the leg level. Either a swing or a stance motivation unit can be active, each inhibiting the other. These two units compete for control of behavior, and sensory units can influence this competition. For example, an incoming ground-contact signal ends a swing movement and initiates stance activation. After the “Stance” unit has been activated, only sensory input relevant to stance can be perceived by the system, but not inputs relevant to swing. This case therefore corresponds to bottom-up control of attention.
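
To make this local competition concrete, the following minimal sketch implements two mutually inhibiting motivation units whose competition is decided by a sensory signal. All names, weights, and the strength of the sensory drive are illustrative assumptions, not parameters taken from reaCog.

```python
def clip(x):
    """Piecewise-linear activation bounded to [0, 1]."""
    return max(0.0, min(1.0, x))

def step(swing, stance, ground_contact, inhibition=2.0, drive=2.5):
    """One update: each unit feeds back on itself and inhibits its competitor;
    ground contact supports the stance unit and suppresses the swing unit."""
    sensor = drive if ground_contact else 0.0
    new_swing = clip(swing - inhibition * stance - sensor)
    new_stance = clip(stance - inhibition * swing + sensor)
    return new_swing, new_stance

swing, stance = 1.0, 0.0             # the leg is currently swinging
for t in range(5):
    ground_contact = t >= 2          # touchdown occurs at time step 2
    swing, stance = step(swing, stance, ground_contact)
    print(t, swing, stance)          # activity flips from swing to stance
```

In this toy version the sensory signal does not merely inform the controller; by deciding the competition it determines which inputs the system subsequently responds to, which is the sense in which the selection is attention-like.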

Such competition can also be found at a global level, at which different behaviors can be chosen. The activation of these higher-level elements influences the lower level: it provides a context that guides the selection process on the lower level and determines which sensory inputs might be relevant. Thereby, more global clusters control attention on the lower levels in a top-down fashion. Corresponding examples can be found in Navinet, mentioned earlier: only visual signals concerning landmarks that belong to the currently active context are considered, and switching between contexts only becomes possible after the food source has been depleted and found empty.
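
The gating implied by this context dependence can be sketched as follows. The context and landmark names are hypothetical placeholders, loosely modelled on the Navinet example rather than taken from its implementation.

```python
# Assumed contexts and the landmark channels relevant to each of them.
CONTEXTS = {
    "forage_A": {"landmark_A1", "landmark_A2"},  # on the route to food source A
    "homing": {"nest_beacon"},                   # relevant when returning to the nest
}

def gate(context, sensory_inputs):
    """Pass on only those sensory channels that belong to the active context."""
    relevant = CONTEXTS[context]
    return {name: value for name, value in sensory_inputs.items() if name in relevant}

inputs = {"landmark_A1": 0.8, "nest_beacon": 0.9, "landmark_B1": 0.4}
print(gate("forage_A", inputs))  # {'landmark_A1': 0.8}; other landmarks are ignored
```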

The cognitive expansion of reaCog represents another case of top-down influence. This system comes up with new behaviors and probes them via internal simulation. As mentioned, there is a specific winner-take-all (WTA) layer that mirrors the arrangement of the lower motor-control layer (figure 6, green units). This part of the controller can be called an “attention controller”, as the explicit function of this layer is to narrow down the search for a suitable behavior and to actively select a single one. We call this selection a cognitive decision, as the system selects a behavior that would not normally be triggered by the given context. In this way the system implements a special type of top-down attention. The focusing mechanism may correspond to what has sometimes been termed a “spot light” (Baars & Franklin 2007, p. 955). Overall, we can therefore observe three different types of attentional influence in reaCog: bottom-up control, top-down control through context, and top-down control through cognitive decisions.
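
A winner-take-all dynamics of the kind this layer relies on can be sketched as follows. The update rule, the weights, and the noise term are our own illustrative choices, not the equations used in reaCog.

```python
import random

def wta(activations, inhibition=0.5, noise=0.05, steps=50):
    """Iterate self-excitation plus global inhibition until one unit remains."""
    a = list(activations)
    for _ in range(steps):
        total = sum(a)
        # Each unit excites itself and is inhibited by all other units;
        # a small noise term breaks ties between equally active candidates.
        a = [max(0.0, 2 * x - inhibition * (total - x) + random.gauss(0.0, noise))
             for x in a]
        peak = max(a) or 1.0         # guard against an all-zero layer
        a = [x / peak for x in a]    # keep activations bounded
    return a.index(max(a))           # index of the single surviving unit

candidates = [0.5, 0.5, 0.5, 0.5]    # four equally plausible candidate behaviors
print("selected behavior:", wta(candidates))
```

Starting from equal activations, the noise alone decides which unit survives, which already hints at the stochastic element discussed under volition below.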

Volition is an umbrella term for the mechanisms that allow for voluntary actions. The latter are "actions that are not fully determined by the immediate stimulus situation but depend on mental representations of intended goals and anticipated effects" (Goschke 2013). For an outside observer, voluntary actions cannot be predicted from the stimulus situation alone. As mentioned above, it is crucial for the cognitive expansion that it can select behaviors that are not triggered by the current situation: the system has to invent new behaviors. Even though the consequences of these behaviors are predicted, the finally chosen behavior is not predictable from the outside, as the invention and selection of new behaviors is stochastic to some extent. Internal simulation only guarantees that the proposed behavior will lead to a solution; it does not determine in advance which behavior will be chosen. Indeed, the search space of possible solutions can easily become very large and has to be restricted to remain tractable. In our example, reaCog first looks for solutions in the morphological neighborhood, i.e., it tries to use the neighboring legs to find a solution for a locally given problem. Even so, many possible behaviors remain that must be tested in a somewhat random order. The system will end up with one that internal simulation has anticipated as a solution, but this solution is not selected by sensory inputs or the current context as such. Therefore, volition may be attributed to a system like reaCog.
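
The following sketch captures this search scheme under strong simplifications: candidates from the morphological neighborhood are tested first, the order within each group is random, and `simulate` stands in for the internal simulation. The `Behavior` type, the leg labels (front, middle, hind; left, right), and the success criterion are all hypothetical.

```python
import random
from collections import namedtuple

Behavior = namedtuple("Behavior", ["name", "leg"])

def find_behavior(problem_leg, neighbors, behaviors, simulate):
    """Test candidate behaviors, morphological neighbors first, in random order;
    adopt the first one whose internal simulation predicts success."""
    near = [b for b in behaviors if b.leg in neighbors[problem_leg]]
    far = [b for b in behaviors if b.leg not in neighbors[problem_leg]]
    random.shuffle(near)              # the stochastic element of the selection
    random.shuffle(far)
    for candidate in near + far:
        if simulate(candidate):       # internal simulation as a predictive test
            return candidate          # the first predicted solution is chosen
    return None                       # search space exhausted without a solution

# Toy usage: the problem sits at the middle-left leg "ML"; only behaviors of
# hind legs are (pretended to be) simulated successfully.
neighbors = {"ML": {"FL", "HL"}}
behaviors = [Behavior("shift_backwards", leg) for leg in ("FL", "HL", "HR")]
solution = find_behavior("ML", neighbors, behaviors,
                         simulate=lambda b: b.leg.startswith("H"))
print(solution)
```

Note that the outcome is reproducible in this toy case, but which successful candidate is found first depends on the shuffled order, mirroring the claim that the chosen behavior is not predictable from the outside.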

Does an agent controlled by reaCog show intentions? Intentions are present when the controlled action is goal-directed. We follow Pacherie (2006), who proposes a differentiation of three types of intentions (based on Bratman's (1987) original differentiation into two such types). Pacherie distinguishes future-directed and present-directed intentions and introduces motor intentions as a third type. Present-directed intentions are considered to be under “conscious” (or “rational”) control, whereas motor intentions are related to lower-level function (Pacherie 2006). What defines these types of intention is that each provides guidance for the function at its respective level. In reaCog, motor intentions are realized by the fact that, on the reactive-control level, behaviors can be selected based on the context. Present-directed intentions can be found on the level of cognitive decisions. Future-directed intentions are not treated by reaCog, because the current version of its architecture only deals with problems that occur in the context of ongoing walking behavior. However, an expansion of reaCog that included planning ahead, using Navinet as a substrate, would include future-directed intentions, too.

Goschke (2013) defines intentions as "causal preconditions explaining why a particular stimulus triggers a particular action (rather than a different action)" (p. 415). In other words, "intentions can be said to shape the 'attractor landscape' of an agent's behavioral state space" (Kugler et al. 1990, cited in Goschke 2013, p. 415). In reaCog, such an attractor landscape is described by the motivation-unit network. As explained in the paragraphs on attention above, the activation of a context guides, in a top-down fashion, both the selection of a suitable behavior and the choice of sensory inputs the system should attend to. The lower-level activation and the incoming sensory inputs influence, on the one hand, the adaptive execution of the behavior as such. On the other hand, the sensory input can inform the higher level in a bottom-up fashion and might indirectly trigger changes on this higher level, too. The activation on the higher level will, however, generally be more stable over time and will reflect a specific context as well as relate to specific goals. For example, in the case of Navinet, there are different possible goals, such as food sources or the nest, which are represented in the higher-level network. Selecting one of these as the current goal guides the overall function of the system: its behavior is directed towards approaching that location, while the sensory system attends only to the specific (expected) stimuli. Therefore, reaCog can be assumed to show goal-directed behavior and intentions.
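
The attractor-landscape metaphor can be illustrated with a deliberately simple sketch: selecting a goal unit places an attractor at the corresponding location, and the agent descends the resulting potential. The goal names, the coordinates, and the quadratic potential are assumptions for illustration; Navinet's actual navigation is not implemented this way.

```python
GOALS = {"food_A": (10.0, 0.0), "nest": (0.0, 0.0)}   # assumed goal locations

def step_towards(position, goal, rate=0.2):
    """One gradient step on a quadratic potential centred on the active goal."""
    gx, gy = GOALS[goal]
    x, y = position
    return (x + rate * (gx - x), y + rate * (gy - y))

pos = (3.0, 4.0)
for _ in range(30):
    # Activating a different goal unit would reshape the landscape and
    # redirect the walk towards that goal instead.
    pos = step_towards(pos, "nest")
print(round(pos[0], 3), round(pos[1], 3))  # the position converges on the nest
```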