As outlined above, when attempting to identify the neural correlate of a particular content of conscious experience, it is important to ensure that brain representations in any candidate region fulfil certain mapping requirements. Because we have no direct way of establishing this mapping, multivariate decoding provides a rough approximation: it links perceptual contents to population brain responses in different regions and allows their properties to be explored. The data from our lab provide several constraints for a theory of NCCCs. Consistent with previous suggestions (Crick & Koch 1995), the very early stages of processing in V1 are presumably not directly involved in encoding visual experiences. Representations in this region contain more detail than enters consciousness (Haynes & Rees 2005) and might not change their information content during perceptual learning, even as the corresponding contents come to be represented in consciousness with increasing detail (Kahnt et al. 2011). Note, however, that early regions beyond V1 have to be among the NCCCs, because higher-level visual areas are invariant to low-level visual features. This has been shown not only in animals (Sáry et al. 1993) but also in humans using classification techniques (e.g., Cichy et al. 2011). This invariance means that high-level regions cannot simultaneously encode the more abstract phenomenal properties of an experience (such as whether a cloud of points resembles a dog or a cat) and its low-level phenomenal properties (such as colour or brightness sensations). Multiple regions are therefore needed to account for the full multilevel nature of our perceptual experience.

While V1 is presumably excluded from visual awareness, early extrastriate regions (such as V2) are likely to be involved, because they still encode low-level visual information. They also appear to filter out sensory information that does not enter awareness, thus again closely matching perceptual experience. For example, V2 and V3 do not encode the orientation of invisible lines, whereas V1 does (Haynes & Rees 2005). Similarly, neural object representations in the lateral occipital complex were abolished by visual masking that rendered an object stimulus invisible (Bode et al. 2012). The role of extrastriate and higher-level visual areas in visual awareness is further highlighted by the fact that they exhibit a certain convergence of different aspects of awareness. Most notably, they employ a shared code for visual perception and visual imagery (Cichy et al. 2012).
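To make the decoding logic referred to above concrete, the following sketch illustrates both within-condition classification and the cross-classification approach behind such shared-code findings: a classifier is trained on response patterns from one condition (perception) and tested on another (imagery), with above-chance transfer indicating a common underlying code. This is a minimal illustration in Python with simulated data; the variable names, simulated patterns, and the scikit-learn pipeline are our assumptions for exposition and do not reproduce the analyses of the studies cited above.

```python
# Minimal sketch of multivariate decoding and cross-decoding (simulated data).
# Requires NumPy and scikit-learn; patterns and labels are synthetic and do not
# come from any of the studies cited in the text.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200

# Two stimulus classes; each class evokes a distinct multivoxel pattern.
labels = rng.integers(0, 2, n_trials)
class_pattern = rng.normal(size=n_voxels)       # class-specific signal direction
signal = np.outer(labels - 0.5, class_pattern)  # +/- half the pattern per trial

# Simulated voxel responses during perception (signal plus noise) ...
perception = signal + rng.normal(scale=2.0, size=(n_trials, n_voxels))
# ... and during imagery, sharing the same class-specific pattern, only weaker.
imagery = 0.5 * signal + rng.normal(scale=2.0, size=(n_trials, n_voxels))

clf = SVC(kernel="linear")

# Within-condition decoding: cross-validated accuracy above 0.5 indicates that
# the simulated response patterns carry information about the stimulus class.
within = cross_val_score(clf, perception, labels, cv=5).mean()

# Cross-decoding: train on perception, test on imagery. Above-chance transfer
# is the signature of a shared code between the two conditions.
clf.fit(perception, labels)
cross = clf.score(imagery, labels)

print(f"within-condition accuracy: {within:.2f}; cross-decoding accuracy: {cross:.2f}")
```

A linear classifier is the conventional choice in this kind of analysis, because a linear readout of voxel patterns is a plausible proxy for information that downstream regions could themselves extract.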
While extrastriate and higher-level visual regions jointly encode different feature levels of visual awareness, there is evidence that a representation in these regions is not sufficient for visual awareness. For example, our experiments on perceptual learning (Kahnt et al. 2011), in which subjects are initially unable to access certain details of visual stimuli, show that improved sensory perception is not necessarily associated with an improved representation of information in these early areas. Instead, the mechanism through which perception of details improves appears to lie beyond the sensory encoding stage, in the prefrontal cortex. Importantly, however, this mechanism is not an improved sensory representation in the prefrontal cortex itself. In contrast to several experiments on animals (Pasternak & Greenlee 2005), our experiments consistently fail to show any sensory information in the frontal cortex. For example, when a stimulus survives visual masking and is consciously perceived, there is no evidence for an additional distribution of information into the prefrontal cortex (Bode et al. 2012; Bode et al. 2013; Hebart et al. 2012), as would be expected if information were indeed made globally available in the sense of a “streaming model” of a global workspace (Dehaene & Naccache 2001). Even in a more conventional experimental task based on visual working memory, we were not able to identify sensory information in the prefrontal cortex. Thus, the NCCCs, the regions directly encoding the visual contents of consciousness, appear to lie in sensory brain regions, at least as far as can be told with the resolution of non-invasive human neuroimaging techniques. On the other hand, our results suggest that the prefrontal cortex is involved in decision-making, as has been suggested before (Heekeren et al. 2004), and in learning about sensory contents (Kahnt et al. 2011). It appears to do so, however, without re-representing or encoding sensory information itself.
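Claims of this kind, that a region carries no decodable sensory information, rest on decoding accuracies that do not reliably exceed chance. A standard way to assess this is to compare the observed accuracy against a null distribution obtained by shuffling the stimulus labels. The sketch below illustrates the logic with simulated data; as before, the data, variable names, and use of scikit-learn are illustrative assumptions, not the cited studies' actual analysis pipelines.

```python
# Minimal sketch of a permutation test for chance-level decoding (simulated
# data). If the observed accuracy falls within the null distribution obtained
# by shuffling labels, there is no evidence for decodable sensory information.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 200

# Simulated "prefrontal" voxel patterns containing no stimulus information:
# pure noise, unrelated to the class labels.
patterns = rng.normal(size=(n_trials, n_voxels))
labels = rng.integers(0, 2, n_trials)

clf = SVC(kernel="linear")
observed = cross_val_score(clf, patterns, labels, cv=5).mean()

# Null distribution: cross-validated accuracies under shuffled label
# assignments, i.e. what "decoding" looks like when no information is present.
null = np.array([
    cross_val_score(clf, patterns, rng.permutation(labels), cv=5).mean()
    for _ in range(200)
])
p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)

print(f"observed accuracy: {observed:.2f}, p = {p_value:.2f}")
```

scikit-learn's permutation_test_score implements the same procedure as the explicit loop above. Note also that chance-level decoding is an absence of evidence: it may reflect the limits of the measurement rather than a true absence of information, which is why the conclusion above is explicitly qualified by the resolution of non-invasive human neuroimaging.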