1 Introduction

In Rules: The basis of morality…?, Churchland points to several problems for classical rule-based accounts of moral knowledge, which attempt to identify morally valid behavior-guiding rules and the sources of their authority. Those problems (all based on the fundamental assumption that rules in the literal sense require a language) show that we need a non-classical, non-rule-based account of moral knowledge. Hence, the author proposes an alternative account from computational neuroscience based on “the best hypothesis currently available for how the brain both represents and processes information about the world […] [and] of how the brain learns” (Churchland this collection, p. 8; emphasis omitted): parallel distributed processing (PDP). In PDP, a neural network embodies a conceptual framework that contains knowledge about the world, that is, a configuration of attractor regions, a family of prototype representations, or, rather, a hierarchy of categories (Churchland 2012, p. 33). Against this background, moral knowledge is a configuration of synaptic weights in a neural network. This insight is then used to reconceive moral competence, moral conflict, and moral reasoning. Moral competence is the personal-level competence to apply sub-personal-level knowledge to a moral situation by assimilating it to a previously learned category or prototype. A moral conflict, by contrast, is (at least partly) the consequence of a moral situation’s having been assimilated to a category or prototype of which it is not an instance. In short, the fallibility of moral cognition leads to competing interpretations of a moral situation and thereby to disagreement with others. Accordingly, moral reasoning is (at least mostly) not about rules and the sources of their authority but about the adequate assimilation of a moral situation to a category or prototype in the first place.
Finally, the author concludes: “[k]nowing how the brain works to generate and constantly improve our moral understanding will not obviate the need to keep it working toward that worthy end, though it may help us to improve our pursuit thereof” (Churchland this collection, p. 13; emphasis omitted).

Churchland’s publication has my full support: I agree both with what he says and with his general approach. What follows is a complementary (and perhaps even extending) attempt to improve our pursuit of moral understanding, only on a different level: applied metascience of neuroethics (NE), that is, the application of a metascientific approach to neuroethical research.[1]

In this commentary, I apply the (as-yet unpublished) bottom-up approach to NE[2] offered by Hildt et al. (forthcoming)[3] to Churchland’s publication. In doing so, I pursue two goals: an epistemic goal, namely to localize the publication within NE and reveal its degrees of relevance[4] to neuroethical research; and an argumentative goal, namely to demonstrate that applied metascience of NE can optimize NE itself and, hence, improve our pursuit of moral understanding.

In the following, I introduce NE and present three typical examples of (disadvantageous) contemporary top-down approaches to it. I then introduce a bottom-up approach to NE. Following this, I apply the bottom-up approach to Churchland’s publication and present my case study results, which I subsequently analyze. Finally, I conclude with some suggestions for future research.