Frank Faries (University of Cincinnati)
Mechanistic integration, of the kind described by Craver and Darden (2013), is, at first glance, one way to secure sensitivity to the norms of mechanistic explanation in integrative modeling. By extension, models in systems neuroscience will be explanatory to the extent that they demonstrate mechanistic integration of the various data and methods that construct and constitute them. Recent efforts in what Braun and colleagues have dubbed “multi-dimensional network neuroscience” (MDNN) claim to provide increasingly mechanistic accounts of brain function by moving “from a focus on mapping to a focus on mechanism” and developing “tools that make explicit predictions about how network structure and function influence human cognition and behavior” (Braun et al., 2018). MDNN appears to provide examples of simple mechanistic integration, interlevel integration (looking down, up, and around), and intertemporal integration. Moreover, these models appear increasingly to satisfy the Model-to-Mechanism Mapping (3M) requirement (Kaplan and Craver, 2011) and to allow for intervention, control, and the answering of “what-if-things-had-been-different” questions (Woodward, 2003). These efforts attempt to situate parametric correlational models “in the causal structure of the world” (Salmon, 1984). As such, they appear to be excellent exemplars of mechanistic integration in systems neuroscience.
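To make concrete what such a parametric correlational model looks like, consider a minimal sketch of a functional connectivity matrix. It assumes the common operationalization of functional connectivity as pairwise Pearson correlations between regional BOLD time series; the simulated data and variable names here are purely illustrative and are not drawn from Braun et al. (2018).

```python
import numpy as np

# Illustrative only: simulate BOLD-like time series for a few brain regions,
# partly driven by a shared signal so that the regions correlate.
rng = np.random.default_rng(0)
n_regions, n_timepoints = 5, 200
shared_signal = rng.standard_normal(n_timepoints)
timeseries = (0.6 * shared_signal
              + rng.standard_normal((n_regions, n_timepoints)))

# Functional connectivity matrix: entry (i, j) is the Pearson correlation
# between the time series of regions i and j -- a parameterized statistical
# dependence, not (by itself) a causal relation.
fc_matrix = np.corrcoef(timeseries)

print(np.round(fc_matrix, 2))
```

Note that each entry of such a matrix is a correlational quantity; whether a model built from these entries can be situated “in the causal structure of the world” is exactly what the objections below put in question.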
However, despite such good prospects for mechanistic integration, it is unclear whether those integrative efforts would yield genuine explanations on an austere mechanistic view (of which I take Craver (2016) to be emblematic). I identify three objections that can be raised by such a view—what I call the arguments from (i) concreteness, (ii) completeness, and (iii) correlation versus causation. I treat each of these in turn and show how a more sophisticated understanding of the role of idealizations in mechanistic integration implies a rejection of these objections and demands a more nuanced treatment of the explanatory power of integrated models in systems neuroscience. In contrast to austere mechanistic views, I offer a flexible mechanistic view, which expands the norms of mechanistic integration, including the 3M requirement, to better account for the positive ontic and epistemic explanatory contributions made by idealization—including the application of functional connectivity matrices—to integration in systems neuroscience. Further, I show how the flexible mechanistic view is not only compatible with mechanistic philosophy, but better facilitates mechanistic integration and explanation.