7 Phenomenality

Before concentrating on specific phenomena, such as emotions or consciousness, we would like to address a more fundamental aspect that appears to be relevant for all higher-level phenomena, namely the occurrence of subjective experience.

An example of subjective experience is pain. Even though it might be possible for us to closely attend to all neuronal activities of a human test subject while stimulating that person’s skin with a needle, the observed data would be different from the experienced pain, which is felt only by that person. Nobody other than that person can feel the pain. This form of experience, the internal perspective, is therefore accessible to us only through self-observation. Intuitively, other systems, such as non-living things or simple machines, lack such an internal perspective. But in many cases, as with animals, it is hard to determine whether they have subjective experience or are merely reflexive machines that do not possess an internal perspective.

This problem is also visible when we consider a human brain, in the contrasting states of being awake or asleep, for example. While in (dreamless) sleep or under anesthesia the same neuronal systems as in a wakeful state may be active, subjective experience is assumed not to be present. And even in a normal wakeful state, we are not aware of all the contents of the different neuronal activities that take place in our brain. Therefore, only a specific type of neuronal activity seems to be accompanied by subjective experience.

There is only indirect evidence on the conditions required for subjective experience. Libet et al. (1964) performed an early experiment in which the cortex of a human subject was directly stimulated electrically. Only for stimuli longer than 500 ms did the subjects report a subjective experience. Bloch’s law (Bloch 1885) formulates this connection more generally: the subjectively experienced strength of a stimulus depends on the mathematical product of stimulus duration and stimulus intensity. In other words, a stimulus is only experienced subjectively when the temporally integrated stimulus intensity surpasses a given threshold.
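
Stated compactly (this rendering, including the usual restriction to brief stimuli below a critical duration t_c, is a standard textbook formulation rather than a quotation from Bloch), the condition for a stimulus of intensity I and duration t to be subjectively experienced reads:

```latex
% Bloch's law: for brief stimuli (t < t_c), detection depends only on the
% temporally integrated intensity reaching a constant threshold k
I \cdot t \ge k \qquad \text{for } t < t_c
```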

More recent experiments have studied the concurrent activation of different procedures that compete for becoming subjectively experienced. A basic experiment was performed by Fehrer & Raab (1962) and was followed by more detailed later studies (Neumann & Klotz 1994). First, participants learned to press a button whenever a square was shown on a screen, but not when two squares were shown at positions on the screen flanking the first square. After the learning period was over, the single square was presented for only a short period (about 30 ms), followed by a longer presentation of the two squares. The participants did not report having seen the single square, but reported only having seen the two squares. Nonetheless, they pressed the button. This result shows, first, that procedure A (“stimulus single square → motor response”) can be executed without being accompanied by subjective experience of its stimulus, stimA, the single square. Second, procedure B (“stimulus double squares → no motor response”) appears to influence whether the first procedure is experienced, i.e., procedure B inhibits the subjective experience of stimulus stimA. Therefore, stimulus stimA is not subjectively experienced (the “masking” effect), but nonetheless triggers the motor reaction.

This situation can be interpreted in the following way (figure 8, left). On the input side, each procedure shows temporal dynamics similar to those of a low-pass filter (LPF) followed by an integrator (IntA, IntB).[6] Stimulation of one procedure inhibits the representation of the other procedure for some limited time (figure 8, Δt). In addition, both integrators are coupled via mutual inhibition (depicted in figure 8 by separate units). In the masking experiment, the first stimulus (stimA) does not inhibit the second procedure (B), because the latter is not yet stimulated as long as stimulus stimA is active. In contrast, when the second stimulus, stimB, is given, the representation of procedure A may be suppressed. The representations of the input given by units IntA and IntB activate the corresponding motivation units (MU) of the procedures, MUA and MUB, respectively. The observed dissociation between motor response and subjective experience can be explained if we assume two different thresholds. First, the motor command of a procedure can be elicited when a small threshold (thr1, figure 8) is reached. But a second, larger threshold (thr2, figure 8) must be reached for subjective experience to occur. In our paradigm, procedure A, which was activated first, may then reach the level of thr1, which is sufficient to activate the motor output, but not thr2. Only the second procedure, B, has enough time to reach the state of subjective experience (thr2, figure 8, right), which allows the double squares (stimB) to become subjectively experienced (however this comes about). The model therefore suffices to explain the basic properties characterizing the backward-masking experiment. As has been shown by Cruse & Schilling (2014), the structure depicted in figure 8 can also deal with a forward-masking paradigm, the so-called attentional blink effect (Schneider 2013). To describe a further experiment, the so-called psychological refractory period (PRP) paradigm (e.g., Zylberberg et al. 2011), the motivation units (MUA, MUB) of procedures A and B are connected in such a way as to inhibit each other. In other words, the motivation units of these procedures form a winner-take-all (WTA) network. In addition, each procedure inhibits its own motivation unit after its action has been completed.
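
To make the logic of this two-threshold account concrete, the following toy simulation reproduces the qualitative pattern described above. It is only a sketch under simplifying assumptions: the first-order low-pass filter, the way the inhibition is wired, the stimulus timing, and all parameter values are our own illustrative choices rather than the equations or parameters of reaCog.

```python
import numpy as np

# Toy simulation of the two-threshold reading of the backward-masking result
# (cf. figure 8).  The first-order low-pass filter, the way the inhibition is
# wired (each integrator is inhibited while the OTHER procedure is being
# stimulated), the stimulus timing, and all parameter values are illustrative
# assumptions of this sketch, not the equations or parameters of reaCog.

DT = 1.0        # time step (ms)
T_TOTAL = 600   # simulated duration (ms)
TAU_LPF = 20.0  # time constant of the low-pass filter stage (assumed)
W_INH = 0.4     # strength of the inhibition between procedures (assumed)
THR1 = 10.0     # lower threshold: motor command can be elicited
THR2 = 28.0     # higher threshold: content becomes subjectively experienced


def run(mask_present):
    """Simulate procedure A (single square) and procedure B (masking squares)."""
    lpf = np.zeros(2)    # low-pass filtered inputs (LPF stage)
    integ = np.zeros(2)  # integrator states IntA, IntB
    reached_thr1 = [False, False]
    reached_thr2 = [False, False]

    for step in range(int(T_TOTAL / DT)):
        t = step * DT
        # stimA: single square for ~30 ms; stimB: longer mask shortly afterwards
        stim_a = 1.0 if 0.0 <= t < 30.0 else 0.0
        stim_b = 1.0 if (mask_present and 40.0 <= t < 340.0) else 0.0
        stim = np.array([stim_a, stim_b])

        # low-pass filter stage
        lpf += DT / TAU_LPF * (stim - lpf)

        # each integrator accumulates its own filtered input and is inhibited
        # while the other procedure is stimulated (a simplified stand-in for
        # the inhibitory coupling shown in figure 8)
        drive = lpf - W_INH * lpf[::-1]
        integ = np.clip(integ + DT * drive, 0.0, None)

        for i in range(2):
            reached_thr1[i] |= bool(integ[i] >= THR1)
            reached_thr2[i] |= bool(integ[i] >= THR2)

    return reached_thr1, reached_thr2


for mask in (False, True):
    thr1, thr2 = run(mask)
    print(f"mask present: {mask}")
    print(f"  procedure A: motor command (thr1) = {thr1[0]}, "
          f"experienced (thr2) = {thr2[0]}")
    print(f"  procedure B: experienced (thr2) = {thr2[1]}")
```

Run as written, the sketch shows the intended dissociation: without the mask, the single square crosses both thresholds; with the mask, procedure A still crosses thr1 (the button is pressed) but no longer reaches thr2, whereas procedure B does.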

From these observations we conclude that there are specific neuronal states that require time to develop. While eliciting an output signal (such as a motor command) is the basic function of the system, this can happen without accompanying subjective experience. Only some procedures may give rise to such phenomenal experience and might, in addition, trigger subsequent functions in the neural system. For example, such a procedure may be able to access more neuronal sources and perhaps allow faster storing of new information (e.g., for one-shot learning). In addition to such functional properties, the network can exhibit the (mental) property of showing subjective experience, i.e., of entering the phenomenal state.

The experimental findings mentioned above support a non-dualist, or monist, view, according to which there are no separate domains (or “substances”), such as a mental and a physical domain, that exert causal influences on one another, as postulated by substance dualism. Rather, the impression that there are two “domains”, often characterized as being separated by an explanatory gap (Levine 1983), results from using different levels of description.[7]

An explanation of the necessary and sufficient conditions under which neural networks allow for subjective experience would be extremely interesting. Even though we currently have only early insights and speculations, there has been considerable progress during the past few years (for reviews see Schier 2009; Dehaene & Changeux 2011). The continuation of these research projects will hopefully yield a more detailed understanding. Combining neurophysiological and behavioral studies may lead to a better understanding of the physiological properties and functions of this state. It is, however, generally assumed that even if we knew the physical details at some future time, we would not understand why this state, which is characterized by physical properties, is accompanied by phenomenal experience. Here we propose another view. We assume that this problem will be “solved” in the sense that the question concerning the explanatory gap will simply disappear, as happened in the case of explaining the occurrence of life. Concerning the latter, there was an intensive debate between Vitalists and Mechanists at the beginning of the last century about how non-living matter could be transformed into living matter. The Vitalists argued that a special, unknown force, termed vis vitalis, was required. After many decades of intensive research, we are now in a position where an internal model is available that represents the observation that a specific collection and arrangement of molecules is endowed with the property of being alive. This and similar cases may be generalized as the following rule: if we have enough information to develop an internal model of the phenomena under examination, and if this model is sufficiently detailed to allow the prediction of the properties of the system, we have the impression of having understood the system. In the case of life, we indeed no longer need a vis vitalis, but consider being alive an emergent property. Correspondingly, we propose that if we knew the functional details and conditions that lead to matter having subjective experience well enough that the appearance of subjective experience could be predicted, we would have the impression of having understood the problem. Therefore, we assume that the question of the explanatory gap will disappear at some point, as was the case in the example of life.

Adopting a monist view allows us to concentrate on the functional aspects when comparing systems endowed with phenomenality, i.e., human beings, with animals or artificial systems. According to this view, phenomenality is considered a property that is directly connected with specific functions of the network. This means that mental phenomena characterized by phenomenal properties, such as attention, intention, volition, emotion, and consciousness, can be examined by concentrating on the aspect of information processing (Neisser 1967).

To avoid possible misunderstandings, we want to stress that we do not mean that the phenomenal aspect has no function, in the sense that the system would work in the same way if there were no such phenomenal properties. Since, according to our view, phenomenality necessarily arises with such a system, a version of the system showing exactly the same functions but lacking the phenomenal aspect would not be possible. A change in the phenomenal properties of a system has to be accompanied by a change in its functional properties. Functional and phenomenal aspects are two sides of one coin. However, remaining on the functional side makes the discussion much easier.

Image: figure007.jpg
Figure 7: The reactive network expanded by a layer containing procedures that represent words (Word-net, upper row). The motivation unit of a Word-net (WU) is bi-directionally connected (dashed double-headed arrows) with the corresponding motivation unit of the reactive system containing procedural elements of Walknet (left, see figure 2) and of Navinet (right, see figure 4). The word stored in a Word-net is indicated as (“ ... ”). Not all of these motivation units have to be connected with a Word-net.

To summarize, the content of any memory element may be subjectively experienced (or available to conscious awareness) if (1) the (unknown) neuronal structures that allow for the neural dynamics required for the phenomenal aspect to occur are given, and (2) the strength and duration of the activation of the memory element are large enough, provided the element is not inhibited by competing elements.
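
As a compact restatement of these two conditions (our own paraphrase; the function name, its arguments, and the reuse of the illustrative threshold thr2 from the sketch above are hypothetical, not part of the model’s specification):

```python
def subjectively_experienced(has_required_dynamics: bool,
                             integrated_activation: float,
                             inhibited_by_competitors: bool,
                             thr2: float = 28.0) -> bool:
    """True if a memory element's content would be subjectively experienced:
    (1) the required neuronal dynamics are available, and
    (2) its temporally integrated activation is strong and long enough
    (here: exceeds thr2) and it is not suppressed by competing elements."""
    return (has_required_dynamics
            and not inhibited_by_competitors
            and integrated_activation >= thr2)
```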

The question of how any system can possibly have subjective experience was famously called the “hard problem” by Chalmers (1997). Adopting a monist view, we can avoid this question and leave it open, as we are interested in understanding the functional aspects of consciousness (on the ethical implications of an artificial system having subjective experience implemented in appropriate neural dynamics see Metzinger 2009, 2013). Regarding what kind of dynamics could be thought of, it has been speculated that subjective experience might occur in a recurrent neural network that is equipped with attractor properties. Following this hypothesis, subjective experience would occur if such a network approached its attractor state (Cruse 2003). This assumption would mean that any system showing an attractor might be endowed with the phenomenon of subjective experience. It may, however, not have all the other properties characterizing consciousness. On the other hand, there might be systems in which the functional aspects currently attributed to consciousness are fulfilled, but where there is no subjective experience present. This case would imply that our list representing the functions of consciousness as given in section 10 below is not yet complete.
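
To illustrate what “a recurrent network that approaches an attractor state” can mean in the simplest case, here is a toy Hopfield-style network (our own illustration; Cruse 2003 does not commit the hypothesis to this particular network type). A corrupted input pattern relaxes, through recurrent updates, into a stored attractor state.

```python
import numpy as np

# Toy Hopfield-style network: one stored pattern acts as an attractor, and a
# corrupted starting state relaxes into it.  This is our own illustration of
# "a recurrent network with attractor properties", not a model from the text.

rng = np.random.default_rng(0)
n_units = 32

# store one binary (+1/-1) pattern via the Hebbian outer-product rule
pattern = rng.choice([-1.0, 1.0], size=n_units)
weights = np.outer(pattern, pattern)
np.fill_diagonal(weights, 0.0)

# start from a corrupted version of the pattern (25% of the units flipped)
state = pattern.copy()
state[rng.choice(n_units, size=n_units // 4, replace=False)] *= -1.0

# asynchronous recurrent updates: the state descends toward the attractor
for _ in range(5):
    for i in rng.permutation(n_units):
        state[i] = 1.0 if weights[i] @ state >= 0.0 else -1.0

# overlap of 1.0 means the network has settled into the stored attractor state
print("overlap with stored pattern:", float(state @ pattern) / n_units)
```

The printed overlap of 1.0 indicates that the network has settled into the stored attractor; on the hypothesis discussed above, it is this kind of relaxation into an attractor state, not the particular Hopfield implementation, that would matter.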

In the following two sections we shall briefly treat two phenomena—emotions and consciousness—and discuss how they might be related to the minimally-cognitive system as represented by reaCog.