3 An interpretation

3.1 Intentional Systems Theory

An important part of what follows is Intentional Systems Theory (IST). What is crucial here is that according to IST, all there is to being an agent in the sense of having beliefs and desires upon which to act is to be describable via a certain strategy: the intentional stance. The intentional stance is a “theory-neutral way of capturing the cognitive competences of different organisms (or other agents) without committing the investigator to overspecific hypotheses about the internal structures that underlie the competences” (Dennett 2009, p. 344). If one predicts the behavior of an object via the intentional stance, one presupposes that it is optimally designed to achieve certain goals. If there are divergences from the optimal path, one can in many cases correct for this by introducing abstract entities or false beliefs. Since there are presumably no 100%-optimally-behaving creatures in the world, every intentional profile (a set of beliefs and desires) generated via adoption of the intentional stance contains a subset of false beliefs.[7] It seems that humans have a “generative capacity [to find the patterns revealed by taking the intentional stance] that is to some degree innate in normal people” (Dennett 2009, p. 342). I will come back to this point and its connection to PP in the next section.

Let us assume for the sake of argument that IST gives a correct explanation of what it is to be an agent (in the sense of someone who has beliefs and desires and acts according to them), and that PP allows us to see how an agent can be implemented on the “algorithmic level” (see Dennett’s discussion in Dennett 1987, p. 74, where he refers to IST as a “competence model”). Whenever I say that an agent believes, wants, desires, etc. something, I mean it in exactly the sense found in IST.

Intentional systems can be further categorized by looking at the content of their beliefs. A second-order intentional system, for example, is an intentional system that has beliefs and/or desires about beliefs and/or desires, that is, it is itself able to take the intentional stance towards objects (Dennett 1987, p. 243). A first-order intentional system has (or can be described as having) beliefs and desires; a second-order intentional system can ascribe beliefs to others and to itself. If something is a second-order intentional system it harbors beliefs such as “Peggy believes that there’s cheese in the fridge”. But taking the intentional stance towards an object is an ability that comes in degrees. I now want to describe what one might call an intentional system of 1.5th order, an intermediate between first- and second-order intentional systems. This is a system that is not able to ascribe full-fledged desires and beliefs with arbitrary contents to others or to itself. We, as intentional systems of high order, have no difficulty ascribing beliefs and desires with quite arbitrary contents, such as “She wants to ride a unicorn and believes that following Pegasus is a good way to achieve that goal”. But the content of the beliefs and desires that an intentional system of 1.5th order can ascribe should be constrained in the following ways:

  1. An intentional system of 1.5th order is able to ascribe desires only in a very particular and concrete manner, i.e., it can ascribe desires to perform actions involving particular existing objects that the system itself knows about (e.g., the desire to eat the carrot over there), but not goals directed at nonexistent objects (described by sentences like “he wants to build a house”) or at objects the ascriber itself does not know about.

  2. It is only able to ascribe to others beliefs that it holds itself. That means it can take the basic intentional stance with the default assumption that the target in question believes whatever is true (assuming that the ascriber’s beliefs are in fact all true), but it lacks the ability to correct these ascriptions when they lead to wrong predictions of the target’s behavior. A real-world example can be found in Marticorena et al. (2011): rhesus macaques in a false-belief task can correctly predict what a person will do, given that the person knows where an object is hidden and the macaques have seen the person acquire this knowledge. They can also tell when a person lacks the relevant knowledge, but they cannot use this information to predict where the person will look. (A toy sketch of these two constraints follows below.)
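
To make the two constraints more concrete, here is a minimal toy sketch in Python. It is not part of the original argument, and all class, variable, and agent names are invented for illustration. It captures constraint 1 by refusing desire ascriptions whose targets the ascriber does not itself know about, and constraint 2 by ascribing only the ascriber’s own beliefs, with no mechanism for revising the ascription when a prediction fails:

```python
# Hypothetical sketch of a 1.5th-order intentional system; all names are invented.

class Ascriber15:
    """Can model other agents, but only within the two constraints above."""

    def __init__(self, own_beliefs, known_objects):
        self.own_beliefs = dict(own_beliefs)     # e.g. {"food_location": "box_B"}
        self.known_objects = set(known_objects)  # objects the ascriber itself knows about

    def ascribe_desire(self, agent, action, target):
        # Constraint 1: only desires directed at particular, existing objects
        # that the ascriber itself knows about can be ascribed.
        if target not in self.known_objects:
            return None                          # the ascription is simply unavailable
        return (agent, action, target)           # e.g. ("Peggy", "eat", "carrot")

    def ascribe_beliefs(self, agent):
        # Constraint 2: the only beliefs it can ascribe are its own,
        # i.e., the default assumption that the other believes whatever is true.
        return dict(self.own_beliefs)

    def predict_search(self, agent):
        # Predictions are driven by the ascribed (= own) beliefs; there is no
        # way to revise the ascription when the prediction turns out to be wrong.
        return self.ascribe_beliefs(agent).get("food_location")


# A false-belief scenario: the ascriber saw the food moved to box_B, Peggy did not.
me = Ascriber15(own_beliefs={"food_location": "box_B"},
                known_objects={"carrot", "box_A", "box_B"})
print(me.predict_search("Peggy"))                    # "box_B" -- wrong, and uncorrectable
print(me.ascribe_desire("Peggy", "eat", "a house"))  # None: this goal cannot be ascribed
```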

How such an intermediate between first- and second-order intentional systems might be implemented can easily be imagined along predictive coding lines, as I will soon show. Following this, I argue that this lays the foundation for systems evolving from this position to become believers in qualia, etc.

The reason for introducing this idea is that I want to show how, given predictive processing principles and a certain selection pressure, a 1.5th-order intentional system might develop from a first-order intentional system. In a next step, I will argue that under an altered selection pressure such a system might become a full-fledged nth-order intentional system, where n is greater than or equal to two. Systems evolving in this way, as I will describe, are bound to believe in the existence of something like qualia. In some sense this is only a just-so story, but the assumed selection pressures are very plausible, and the empirically correct answer might not be too far away from this.

3.2 Our Bayesian brains[8]

To see how the pieces fit together, imagine the situation of some first-order intentional systems, agents, which are the first of their kind. They act according to their beliefs and desires. They do so because the generative models implemented in their brains generate a sufficient number of correct predictions about their environment for them to survive and procreate. They do a fairly good job of avoiding harm and finding food and mates. Since they are first-order intentional systems, the behavior of their conspecifics amounts to unexplained noise to them, because they are unable to predict the patterns in most of that behavior (which is what makes them merely first-order intentional systems), though they might well predict each other’s behavior as physical objects, e.g., where someone will land if she falls off a cliff.

When resources are scarce, this leads to competition between these agents, and it becomes an advantage to be able to predict the behavior of one’s conspecifics. This behavior is by definition rather complex (they are intentional systems), but one can get some mileage out of positing the following regularity: some objects in the world have properties that lead to predictable behavior in agents, e.g., an apple tree will lead the agents to approach it if they are sufficiently near, whereas a predator will lead them to run from it, and so on. Their model of the world is thus populated by properties of items that allow the (arguably rough) prediction of agent behavior. One might indeed say that the desires of the agents are projected[9] onto the world.[10] Those who acquire this ability are now 1.5th-order intentional systems (see above; monkeys and chimpanzees might turn out to be such, see Roskies this collection,[11] though findings in this area are controversial, see Lurz 2010), since they can predict the behavior of others, given that this behavior is indeed explainable via reference to actually existing objects, such as apples or potential sexual partners. In addition to these properties, there is a new category of objects in “their world”: beings that react to these properties in certain ways.[12]
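
To illustrate the kind of predictive leverage at stake, here is a minimal Bayesian sketch in Python. It is not taken from the paper; the function names, probabilities, and the property label are invented for illustration. A single posited property of an object both predicts conspecific behavior and is itself revised in the light of observed behavior, which is one way to picture the idea that the agents’ desires are projected onto the world:

```python
# Hypothetical sketch: a posited property of an object makes conspecific
# behavior predictable, and the property is inferred from that behavior.

def predict_approach(p_attractive, p_approach_given_attractive=0.9,
                     p_approach_given_not=0.1):
    """Probability that a nearby conspecific approaches the object."""
    return (p_attractive * p_approach_given_attractive
            + (1 - p_attractive) * p_approach_given_not)

def update_attractiveness(p_attractive, approached,
                          p_approach_given_attractive=0.9,
                          p_approach_given_not=0.1):
    """Bayes' rule: revise the posited property after observing behavior."""
    like_attr = p_approach_given_attractive if approached else 1 - p_approach_given_attractive
    like_not = p_approach_given_not if approached else 1 - p_approach_given_not
    evidence = p_attractive * like_attr + (1 - p_attractive) * like_not
    return p_attractive * like_attr / evidence

prior = 0.5                                   # "is that tree the approachable kind?"
print(predict_approach(prior))                # 0.5 -> behavior barely predictable
posterior = update_attractiveness(prior, approached=True)
print(predict_approach(posterior))            # ~0.82 -> behavior now much more predictable
```

The point is purely structural: positing a stable property of the object compresses the otherwise noisy behavior of conspecifics into something predictable.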

In a next step we might suppose that a system of communication or signaling evolves (the details are not important), turning our 1.5th-order intentional systems into communicative agents. As communicative beings they have an interest in hiding or revealing their beliefs according to the trustworthiness and motives of others (cf. Dennett 2010). That is, each of these beings needs to have access to what it itself will do next, so that it can hide or share this information, depending on what it knows about the other. One might think of hiding the information about one’s desire to steal some food, and so on.

This is a situation in which it becomes advantageous for each agent to apply to itself the predictive strategy that was formerly used only to explain the behavior of others.[13] Agents like this believe in the existence of a special kind of property, i.e., they predict their own behavior on the basis of generative models that posit such properties: they believe that they approach apples because they are sweet, cuddle babies because they are cute, and laugh about jokes because they are funny. Applying the strategy to their own behavior puts them in the same category (according to the generative model) as the others: they are unified objects that react to certain properties, not a bunch of cells trying to live among one another.[14]

The agent-models of these beings might improve by integrating the fact that it is sometimes useful to posit non-existing entities, or to omit existing ones, in order to predict the behavior of a given conspecific (think of subjects in the false-belief task looking in the wrong box). In this way the concept of (false) belief arises. One can imagine how they further evolve into full-fledged second- and higher-order intentional systems, in an arms race of predicting their fellows.[15]
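
Continuing the earlier toy sketch (again, all names are invented, and nothing here is claimed to be the mechanism the paper has in mind), the step to positing possibly false beliefs can be pictured as giving the ascriber a separate belief variable for the other agent, one that is allowed to diverge from what the ascriber itself takes to be true:

```python
# Hypothetical sketch of the step towards a second-order ascriber.

class Ascriber2:
    def __init__(self, own_beliefs):
        self.own_beliefs = dict(own_beliefs)

    def ascribe_beliefs(self, agent, witnessed_by_agent):
        # Start from the default ("they believe whatever is true"), then
        # override it with the (possibly outdated) values the other agent
        # actually witnessed -- a posited, possibly false, belief.
        ascribed = dict(self.own_beliefs)
        for fact, old_value in witnessed_by_agent.items():
            ascribed[fact] = old_value
        return ascribed

    def predict_search(self, agent, witnessed_by_agent):
        return self.ascribe_beliefs(agent, witnessed_by_agent).get("food_location")


me = Ascriber2(own_beliefs={"food_location": "box_B"})
# Peggy last saw the food in box_A and did not witness the transfer:
print(me.predict_search("Peggy", {"food_location": "box_A"}))  # "box_A" -- now correct
```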

A further step: they develop sciences, as we did, and come to have a scientific image of the world, one that contains no special simple properties of objects that cause “agents” to behave in certain ways. They come to the conclusion that the brain does its job without taking notice of properties like cuteness or redness, “instead relying” on computations, which take place in the medium of spike trains and nothing but spike trains (cf. target, section 1). Their everyday predictions of others and, most importantly, of themselves still rely on the posited properties. And some might wonder whether there isn’t something missing from the scientific image.

According to the scientific image, they, as biological organisms, react to photons, waves of air, etc., but these are not the contents of their own internal models employed in the continuous task of predicting themselves. The simplest things they react to seem to be colors and shapes, (perceived) sounds, etc. Their reaction to babies is explained via facial proportions and the like, but this is far from what their generative models “say”, namely that the reaction to babies is caused by their cuteness.

They begin to build robots that react to babies as they themselves do. They say things like: “All this robot reacts to are the patterns in the baby’s face, the proportions one can measure; although it reacts as we do, it does not do so because of the baby’s cuteness.” Of course only non-philosophers might say that science misses a property of the baby; philosophers too see that something is missing, but since cuteness is not a property of the outside world, they conclude that it must be a property of the agents themselves.

This seems to me to be the current situation. We have the zombic hunch because it seems to us that there is something missing, and it seems so because our generative models are built upon the assumption that there are properties of things out there in the world to which systems like us react in certain ways. We never consider others like us to be zombies because they are agents like us, or better: we are systems like them. We dismiss robots because we know they can only react to measurable properties, which do not seem to us to be the direct cause of our behavior.