9 Functionalism and biology

So far the free energy principle has been given a functionalist reading. It describes a functional role, which the machinery of the brain realizes. One of the defining features of functionalism is that it allows multiple realization. This is the simple idea that, at least in principle, the same function can be realized in different ways. For example, a smoke alarm is defined by its functional role but can be realized in different ways. There is on-going debate about whether something with the same causal profile as the human brain could realize a mind. Philosophers have been fond of imagining, for example, a situation in which the population of Earth is each given a mobile phone and a set of instructions about whom to call and when, which mimics the “instructions” followed by an individual neuron (Block 1976). The question then is whether this mobile phone network would be a mind. Though this is not the place to enter fully into this debate, it seems hard for the defender of the free energy principle to deny that, if these mobile phone-carrying individuals are really linked up in the hierarchical message-passing manner described by the equations of the free energy principle, if they receive input from hidden causes, and if they have appropriate active members, then they do constitute a mind.

However, a different issue here is to what extent the free energy principle allows for the kind of multiple realization that normally goes with functionalism. The mathematical formulations and key concepts of the free energy principle arose in statistical physics and machine learning, and hierarchical inference has been implemented in artificial learning systems (Hinton 2007). So there is reason to think that prediction error minimization can be realized in computer hardware as well as in brainware. There is also reason to think that, within the human brain, the same overall prediction error minimization function can be realized by different hierarchical models. Slightly different optimizations of expected precisions would determine the top-down vs. bottom-up dynamics differently, but may show a similar ability to minimize prediction error over some timeframes. Similarly, different weightings of low and high levels in the hierarchy can lead to the same ability to minimize prediction error in the short and medium term. This is similar to how a dam can be controlled either with many small plugs close to the dam wall, or with fewer connected locks operating at longer timescales further back from the wall. In some cases, such different realizations may nevertheless have implications for the organism over the long run (for example, building locks takes time and thus allows flows in the interim, whereas many small plugs prevent flows in the short run but may be impractical in the long run). Such differences may show up as individual differences in perceptual and active inference (for an example, see Palmer et al. 2013), and may also be apparent in mental illness (Hohwy 2013, Ch. 7).
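The point that different precision weightings can nevertheless yield a similar capacity to reduce prediction error can be given a toy numerical sketch. Everything below is an illustrative assumption: the function name, parameter values, and the simple gradient scheme are invented for this sketch and are not drawn from the free energy literature itself.

```python
# Toy sketch (illustrative only): two different precision settings both
# substantially reduce a precision-weighted prediction error, while
# settling on different estimates of the hidden cause.

def minimize_prediction_error(sensory, prior, pi_sensory, pi_prior,
                              steps=100, lr=0.05):
    """Gradient descent of an estimate mu on precision-weighted squared error.

    pi_sensory and pi_prior weight the bottom-up (sensory) against the
    top-down (prior) prediction error; the names are hypothetical.
    """
    mu = prior
    for _ in range(steps):
        sensory_error = sensory - mu   # bottom-up prediction error
        prior_error = prior - mu       # top-down prediction error
        # Precision-weighted update: higher pi_sensory lets input dominate.
        mu += lr * (pi_sensory * sensory_error + pi_prior * prior_error)
    error = pi_sensory * (sensory - mu) ** 2 + pi_prior * (prior - mu) ** 2
    return mu, error

# Two regimes: one weighting sensory input heavily, one weighting it less.
mu_a, err_a = minimize_prediction_error(2.0, 0.0, pi_sensory=0.9, pi_prior=0.1)
mu_b, err_b = minimize_prediction_error(2.0, 0.0, pi_sensory=0.6, pi_prior=0.4)
```

Both regimes drive the weighted error well below its starting value over the run, though they converge on different estimates of the cause, analogous to the different realizations described above.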

Functionalist accounts of the mind are widely discussed in the philosophical literature, and there are various versions of functionalism. A key question for any version is how the functional roles are defined in the first instance (for an overview see Braddon-Mitchell & Jackson 2006). Some theories—psychofunctionalism or empirical functionalism—hold that functional roles should be informed by the best empirical science (“pain is caused by nociceptor activation… etc.”). The consequence is that their domain is restricted to those creatures for whom that empirical science holds. Other theories—commonsense functionalism—begin with conceptual analysis and use it to define the functional roles (“pain is the state such that it is caused by bodily damage, gives rise to pain-avoidance behavior, and relates thus and so to internal states…”). The consequence of the commonsense approach is that such functionalisms apply widely, including to creatures science has never reached, in so far as they have something realizing the relevant functional role.

There are some nice questions here about what we should really say about creatures with very different realizations of the same functions (e.g., “Martian pain”), and creatures with very similar realizations but different functions (e.g., “mad pain”; see Lewis 1983). Setting those issues aside for the moment, one question is which kind of functionalism goes with the free energy principle. There is no straightforward answer here, but one possibility is that it is a kind of “biofunctionalism”, where the basic functional role is that of creatures who manage to maintain themselves within a subset of possible states (in a space-filling or active manner) for a length of time. Any such creature must be minimizing its free energy and hence engaging in inference and action. It is biological functionalism because it begins by asking for the biological form—the phenotype—of the candidate creature.

This is an extremely abstract type of functionalism, which allows considerable variation amongst phenotypes and hence minds. For example, it has no problem incorporating both Martians and madmen, in so far as they maintain themselves in their expected states. It will, however, specify the mental states of the organism once it becomes known in which states it maintains itself. This follows from the causal characterization of sensory input, internal states, and active output that fully specifies a prediction error minimizing mechanism. Once these states are observed, the states of the system can be known too, and the external causes are screened off (i.e., the sensory and active states form a Markov blanket; Friston 2013).
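The Markov blanket condition just mentioned can be stated compactly. The notation here is an assumption, following common presentations rather than any equation given in this text: $\eta$ for external (hidden) causes, $\mu$ for internal states, and $b$ for the blanket of sensory and active states:

```latex
p(\eta, \mu \mid b) \;=\; p(\eta \mid b)\, p(\mu \mid b)
```

That is, once the blanket states are known, the external causes carry no further information about the internal states, and vice versa.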

What drives biofunctionalism is not species-specific empirical evidence, as in psychofunctionalism. And it does not seem to be commonsense conceptual analysis either. Rather, it begins with a biological, statistical observation that is as basic as one can possibly imagine—namely that creatures manage to maintain themselves within a limited set of states. As seen at the very start of this paper, this defines a probability density for a given creature, which it must approximate to do what it does. For an unsupervised system, it seems this can only happen if the organism minimizes its free energy and thereby infers the hidden causes of its sensory input, and then acts so as to minimize its own errors. This is an empirical starting point at least in so far as one needs to know many empirical facts to specify which states a creature occupies. But it is, arguably, also a conceptual point in so far as one hasn’t understood what a biological creature is if one does not associate it at least implicitly with filling some specified subset of possible states.
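The claim that such a creature must be minimizing its free energy can be connected to the standard variational bound. The notation is again an assumption, since this text's own equations are not reproduced here: $s$ for sensory states, $\theta$ for hidden causes, $q$ for the creature's approximate (recognition) density:

```latex
F(s, q)
\;=\; \underbrace{-\ln p(s)}_{\text{surprise}}
\;+\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(\theta)\,\big\|\,p(\theta \mid s)\,\right]}_{\geq\, 0}
\;\geq\; -\ln p(s).
```

Since the Kullback-Leibler term is non-negative, minimizing $F$ with respect to $q$ tightens the bound on surprise (inference), while acting to change $s$ keeps surprise itself low (active inference); this is the sense in which staying within expected states entails both inference and action.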

The upshot is that the free energy principle sits well with a distinct kind of functionalism, which is here called biofunctionalism. It remains an open question how this would relate to some versions of functionalism and related views, such as teleosemantics (Neander 2012), which relies on ideas of proper function, and information theoretical views (Dretske 1983). The biofunctionalism of the free energy principle seems to have something in common with those other kinds of positions though it has no easy room for the notion of proper function and it doesn’t rely on, but rather entails, information theoretical (infomax) accounts.

Setting aside these theoretical issues, note that biofunctionalism has a rather extreme range, because it entails that there is Bayesian inference even in very simple biological organisms, in so far as they minimize free energy. This includes, for example, E. coli, which, with its characteristic swimming-and-tumbling behavior, maintains itself in its expected states. And it includes us, who, with our deeper hierarchical models, maintain ourselves in our expected states (in a more space-filling manner and for longer than E. coli). Of course, one might ask where, within such a wide range of creatures, we encounter systems that we are comfortable describing as minds—that is, as having thought, as engaging in decision-making and imagery, and not least as being conscious. This remains a challenge for the free energy principle, just as it is a challenge for any naturalist theory of the mind to specify where, why, and how these distinctions between creatures arise.