02 Nov 2018 01:30 PM - 03:30 PM (America/Los_Angeles)
Venue : Diamond (First Floor)
Session : Explanation and Understanding
PSA2018: The 26th Biennial Meeting of the Philosophy of Science Association
The Roles of Possibility and Mechanism in Narrative Explanation
Philosophy of Science, 01:30 PM - 02:00 PM (America/Los_Angeles)
Daniel Swaim (University of Pennsylvania) There is a fairly longstanding distinction between what are called the idiographic as opposed to the nomothetic sciences. The nomothetic sciences, such as physics, offer explanations in terms of the laws and regular operations of nature. The idiographic sciences, such as natural history (or, more controversially, evolutionary biology), cast explanations in terms of narratives. This paper offers an account of what is involved in giving an explanatory narrative in the historical (idiographic) sciences. I argue that narrative explanations involve two chief components: a possibility space and an explanatory causal mechanism. The presence of a possibility space is a consequence of the fact that the presently available evidence underdetermines the true historical sequence from an epistemic perspective. But the addition of an explanatory causal mechanism gives us a reason to favor one causal history over another; that is, causal mechanisms enhance our epistemic position in the face of widespread underdetermination. This contrasts with some recent work that has argued against the use of mechanisms in certain narrative contexts. Indeed, I argue that an adequate causal mechanism is always involved in narrative explanation, or else we do not have an explanation at all.
Philosophy of Science, 02:00 PM - 02:30 PM (America/Los_Angeles)
Marc Ereshefsky (University of Calgary), Derek Turner (Connecticut College) John Beatty and Eric Desjardins offer a rich account of historicity and historical narrative. Here we identify four concerns with their account. We then offer an alternative account of historical narrative that draws on the work of earlier philosophers (Gallie, Danto, and Hull). In particular, we highlight three features of narrative explanation that Beatty and Desjardins underemphasize: central subjects, historical trajectories, and the idea that historical narratives are only known retrospectively.
Mixed-Effects Modeling and Non-Reductive Explanation
Philosophy of Science, 02:30 PM - 03:00 PM (America/Los_Angeles)
Wei Fang (Tongji University) This essay considers the practice of mixed-effects modeling and its implications for the philosophical debate surrounding reductive explanation. Mixed-effects modeling is a species of multilevel modeling, in which a single model simultaneously incorporates two (or more) levels of explanatory variables to explain a phenomenon of interest. I argue that this practice makes the explanatory reductionism held by many philosophers untenable, because it violates two central tenets of that position: single-level preference and lower-level obsession.
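As an illustration of the kind of model the abstract describes (not an example drawn from the paper itself), the following is a minimal sketch of a two-level mixed-effects model, assuming Python with the statsmodels library; the dataset and variable names (reaction, hours, subject) are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: repeated measurements of "reaction" for several subjects.
rng = np.random.default_rng(0)
n_subjects, n_obs = 20, 10
subject = np.repeat(np.arange(n_subjects), n_obs)
hours = np.tile(np.arange(n_obs), n_subjects)

# Each subject contributes its own random intercept (the group-level effect),
# on top of a shared fixed effect of hours (the observation-level effect).
subject_intercept = rng.normal(0, 5, n_subjects)[subject]
reaction = 250 + 3 * hours + subject_intercept + rng.normal(0, 2, subject.size)
data = pd.DataFrame({"reaction": reaction, "hours": hours, "subject": subject})

# One model estimating both levels at once: a fixed effect of hours plus a
# random intercept grouped by subject.
model = smf.mixedlm("reaction ~ hours", data, groups=data["subject"])
result = model.fit()
print(result.summary())

The point of the sketch is only that a single model estimates a population-level (fixed) effect and group-level (random) effects together, which is the multilevel structure the abstract invokes.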
What Use Are Problem-Solving Approaches to Explaining Degrees of Understanding?
Philosophy of Science, 03:00 PM - 03:30 PM (America/Los_Angeles)
Mark Newman (Rhodes College) According to some philosophers of science, our understanding of a scientific theory is reflected in our ability to solve problems using that theory (see, for instance, de Regt and Dieks (2005), de Regt (2009), de Regt and Gijsbers (2017), and Newman (2017a, 2017b)). If this is so, then it follows that the degree of understanding someone has of a theory could in principle be measured by the maximum difficulty level of the problems they can solve. I raise five problems for this view. I then look at two models from cognitive science that claim to measure and rank physics problems by their level of difficulty. I raise four problems for these models and argue that even if they could provide an objective measure of problem difficulty, they cannot provide a measure of someone's degree of theoretical understanding. I argue that what all these problems show is that we need a more fine-grained measure: one that works at the fundamental level of individual beliefs and inferences, not at the abstract level of problems.