1. A Role for the History and Philosophy of Science in the Promotion of Scientific Literacy
Holly VandeWall (Boston College), Margaret Turnbull (Boston College), Daniel McKaughan (Boston College) In a democratic system, non-experts should have a voice in research and innovation policy, as well as in those policy issues to which scientific and technological expertise are relevant – like climate change, GMOs and emergent technologies. The inclusion of non-expert voices in the debate is both a requirement for a truly democratic process and an important counter to the kinds of jargon and group-think that can limit the perspective of more exclusively expert discussions. For non-experts to participate in a productive way does require a certain degree of scientific literacy. Yet in our present age of intensive specialization, access to understanding any one subfield or subdiscipline in the sciences requires years of study. Moreover, the relevant sort of literacy involves not simply familiarity with factual information, but some perspective on the goals, methods and practices that constitute knowledge formation in the scientific disciplines. We have spent the last decade developing a syllabus, readings, and tools for teaching science literacy through the history and philosophy of science. These include the assemblage of appropriate primary and secondary course materials, the creation of cumulative assignments, the development of technology resources to connect students to key events and figures in the history of science, and the implementation of assessment methods that focus on skill and concept development rather than fact memorization or problem sets. Our poster will showcase these tools and provide attendees with specific suggestions for similar course practices they can implement at their own institutions. In particular, we have found that coursework that familiarizes students with how practices of knowledge formation in the sciences have developed over time has helped our students to: 1. Recognize that the methods of science are themselves developed through trial and error, and change over time. 2. Understand that different disciplines of science require different approaches and techniques, and will result in different levels of predictive uncertainty and different standards for what is considered a successful hypothesis. 3. Consider examples of scientific debate and the processes through which those debates are resolved with the advantage of historical perspective. 4. Trace some of the unintended effects of the sciences on society and identify where the social and cultural values of the scientists themselves played a role in their deliberations – and whether or not that had a negative epistemic effect.
2. STEAM Teaching and Philosophy: A Math and the Arts Course Experiment
Yann Benétreau-Dupin (San Francisco State University) This poster presents the goals, method, and encouraging results of the first iteration of a course titled “The Art(s) of Quantitative Reasoning”. It is a successful example of a STEAM (i.e., STEM+Arts) teaching experiment that relied on inquiry-based pedagogical methods that philosophers are well prepared for. The course focused on a few issues in quantitative reasoning that have shaped the history of the arts; that is, it studied a few cases in the history of the arts that posed a technical (mathematical) problem and examined different ways of overcoming that problem. The main units were the problem of musical tuning and temperament, and perspective and projective geometry in visual arts. The general pedagogical approach was to focus on problem solving, in small groups in class and at home, so as to foster conceptual understanding and critical thinking rather than learning rules. The small class size (enrollment capped at 30) made this manageable. The mathematics covered did not go beyond the high school level. Even though this was not, strictly speaking, a philosophy class, an argumentation-centered teaching method that is not constrained by disciplinary boundaries makes this a teaching experience in which many philosophers can partake. To assess the ability of such a course to help students become “college ready” in math and meet their general education math requirement, a pre/post test was conducted on usual elementary notions, most of which weren’t explicitly covered during the semester. Overall, students’ elementary knowledge improved, but this was much more true of those whose initial knowledge was lower, to the point where pre- and post-test results are not correlated. Assuming that any further analysis of the data is meaningful at all (given how small the sample size is), these results depend on gender (women’s scores improved more than men’s), but not on year (e.g., no significant difference between freshmen and seniors).
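As a minimal illustration of the mathematical problem behind the tuning-and-temperament unit (a generic sketch, not drawn from the course materials), the following lines compute the Pythagorean comma, the mismatch between twelve pure fifths and seven octaves, and compare a just fifth with its equal-tempered counterpart:

```python
import math
from fractions import Fraction

# Twelve pure (3:2) fifths overshoot seven (2:1) octaves; the mismatch is the
# Pythagorean comma, the basic problem that temperament systems address.
comma = Fraction(3, 2) ** 12 / Fraction(2, 1) ** 7
print(f"Pythagorean comma: {float(comma):.5f}")                    # ~1.01364

def cents(ratio):
    """Express a frequency ratio in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

print(f"just fifth:           {cents(3 / 2):.2f} cents")           # ~701.96
print(f"equal-tempered fifth: {cents(2 ** (7 / 12)):.2f} cents")    # exactly 700.00
```

Equal temperament resolves the mismatch by spreading the comma across the twelve fifths, flattening each by about two cents.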
Aleta Quinn (University of Idaho) I teach “environmental philosophy,” “philosophy of biology” and related undergraduate courses. In this poster I reflect on what is/are the purpose(s) of teaching these courses, and in turn how I should teach. I am presenting this paper at a major scientific conference in July to collect feedback from individuals with broad backgrounds in molecular or organismal biology and wildlife management, both to improve my own class and to contribute to the pedagogical literature. At the PSA I will present the results of this interaction with biology professionals and students. Challenges include students’ belief that empirical studies will straightforwardly solve conceptual problems, colleagues’ views about the relative value of different sub-fields of biology, and administrators’ demand that pedagogy narrowly fit career objectives. Additionally, the things that interest me as a philosopher and a hobby herper differ from the things that would be of interest and value to my students. I recently argued successfully for my courses to earn credit towards biology degrees, and I expect to contribute to graduate students’ research. What issues and skills, broadly considered “conceptual,” do biologists wish that they and/or their students had an opportunity to study? My poster is an invitation to collaborate across disciplines to improve scientific literacy in the general population, but especially to help develop strong conceptual foundations for future biologists.
4. How to Teach Philosophy of Biology (To Maximal Impact)
Alexandra Bradner Given the accelerating pace of the biological sciences, there is arguably no more relevant, useful, and appealing course in the philosophical arsenal right now than the philosophy of biology. We are a scientifically illiterate nation, and philosophers of biology are poised to respond: we can present scientific problems clearly to non-specialists, place those problems in their socio-historical contexts, generate critical analyses, and imagine alternative hypotheses. But philosophy of biology is typically offered only every other year at R1 institutions (and only every 4-6 years elsewhere) as a small, specialized, upper-level undergraduate seminar or as an early graduate seminar—i.e. to minimal impact. To make matters worse, in order to succeed in philosophy of biology, students must arrive with prerequisites in math and biology, to process our contemporary readings, and with prerequisites in Aristotle and/or medieval philosophy, to grasp the significance of the Darwinian transition to populationism. Still, departments rarely require these prerequisites, first, because it can be hard enough to enroll the course without any prerequisites; and second, because requiring too many prerequisites can scare off scientists who are especially protective of their GPAs. As a result, general-ed students enroll, thinking they’re in for a “hot topics” course in bioethics, and end up behind and bored. In this poster, I will detail the syllabus of a philosophy of evolutionary biology course for a general undergraduate population that achieves three learning outcomes, without abandoning our field’s canonical texts. By the end of the course, students: 1) come to understand the shift from essentialism/natural state to populationism by reading a series of Darwin’s precursors and much of both the "Origin" and "Descent"; 2) master the populationist paradigm by exploring a collection of contemporary phil bio papers that build upon the issues encountered in the "Descent"; 3) satisfy their hunger for bioethics by studying, in the last 2-3 weeks of the course, a group of articles drawn from recent journals. I have taught this course four times at three different institutions to maximal enrollments. Pedagogically, the course employs a number of techniques and methodologies to maintain student engagement: a one-day philosophical writing bootcamp to alleviate science students’ anxiety about writing philosophy papers; a visit to the library’s rare book room to view historic scientific texts in their original editions; two classes on the "Origin" spent in jigsaws; one class spent on a team-based learning exercise; an external speaker invited to respond to students’ questions via Skype; two weeks of student-directed learning; and lots of lecture and discussion. This particular course design comes with some costs, primarily errors of omission, which I will detail. But the benefits of introducing a broader population of students to the philosophical problems of biology compensate for the losses, which can be recuperated in a second course or an independent study. Perhaps most importantly, teaching philosophy of biology in this way delivers to philosophy new students who otherwise would never have encountered the discipline, both sustaining our major and increasing enrollments in upper-level courses.
Brian Woodcock (University of Minnesota), Arthur Cunningham (St. Olaf College) Both popular culture and introductory science pedagogy abound with statements about the nature of science and the so-called “scientific method.” This means that college students stepping into a philosophy of science course often come with deep-seated (though perhaps implicit) preconceptions about science, like the idea that there is a single, universally recognized method that distinguishes science from other domains of inquiry. We believe that directly confronting such popular accounts of how science works is an important task in an introductory philosophy of science course. Philosophy of science textbooks typically present the ideas of leading philosophers of science, past and present, together with critical evaluation of those ideas. The content contained in such textbooks (for example, about inductivism, hypothetico-deductivism, falsificationism, and contexts of discovery and justification) can be applied to critically evaluate “pop” accounts of how science works, including statements of the so-called “scientific method.” If we want students to understand and appreciate those applications, we need to make it an explicit goal of our courses that students learn to relate philosophical concepts and criticisms to popular accounts of science, and we need to support that goal with examples and exercises. Our experience shows that it is all too easy for students to compartmentalize the academic debates they encounter in a philosophy of science course so that they later fall back into routine ways of describing how science works—for example, by continuing to invoke the idea of a single process called “the scientific method” even after studying debates that cast doubt on the idea that science is characterized by a single, agreed-upon method. We present a few ways to incorporate popular and introductory pedagogical statements about the nature of science and “the scientific method” in the philosophy of science classroom: lecture illustrations, classroom discussion starters, conceptual application exercises, and critical analysis and evaluation exercises. We offer specific suggestions for assignments, including techniques for having students collect “pop” accounts of science to be used in the classroom. In addition, we consider the learning objectives embodied by each kind of exercise and, based on our own experience, some pitfalls to avoid.
Cordelia Erickson-Davis (Stanford University) In the computational theory of vision, the world consists of patterns of light that reflect onto the retina and provoke neural activity that the individual must then reconstruct into an image-based percept (Marr 1979). “Seeing” turns into an optimization problem, with the goal of maximizing the amount of visual information represented per unit of neural spikes. Visual prostheses - which endeavor to translate visual information like light into electrical information that the brain can understand, and thus restore function to certain individuals who have lost their sight - are the literal construal of computational theories of perception, theories that scholars of cybernetic studies have taught us were born from data not of man but of machine (Dupuy 2000). So what happens when we implant these theories into the human body? What do subjects “see” when a visual prosthesis is turned on for the first time? That is, what is the visual phenomenology of artificial vision, and how might these reports inform our theories of perception and embodiment more generally? This poster will discuss insights gathered from ethnographic work conducted over the past two years with developers and users of an artificial retina device, and will elaborate on a method that brings together institutional ethnography and critical phenomenology as a way to elucidate the relationship between the political and the perceptual.
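One standard way to formalize the "information per spike" idea, given here only as orienting background rather than as the author's own analysis, is an efficient-coding objective in which the encoding distribution is chosen to maximize the mutual information between stimulus and neural response subject to a cost on spiking:

```latex
\max_{p(r \mid s)} \; I(S;R) \;-\; \lambda\, \mathbb{E}\!\left[N_{\mathrm{spikes}}\right],
\qquad
I(S;R) \;=\; \sum_{s,r} p(s)\, p(r \mid s)\,
\log \frac{p(r \mid s)}{\sum_{s'} p(s')\, p(r \mid s')} .
```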
7. Normative Aspects of Part-Making and Kind-Making in Synthetic Biology
Catherine Kendig (Michigan State University) The naming, coding, and tracking of parts and modules is pervasive in all fields of biology. However, these activities seem to play a particular role in synthetic biology, where discovering that something is the same part is crucial to ideas of identity as well as to successful construction. Synthetic biology is frequently defined as the application of engineering principles to the design, construction, and analysis of biological systems. For example, biological functions such as metabolism may now be genetically reengineered to produce new chemical compounds. Designing, modifying, and manufacturing new biomolecular systems and metabolic pathways draws upon analogies from engineering such as standardized parts, circuits, oscillators, and digital logic gates. These engineering techniques and computational models are then used to understand, rewire, and reengineer biological networks. But is that all there is to synthetic biology? Is this descriptive catalogue of bricolage wholly explanatory of the discipline? Do these descriptions impact scientific metaphysics? If so, how might these parts descriptions inform us of what it is to be a biological kind? Attempting to answer these questions requires investigations into the nature of these biological parts as well as what role descriptions of parts play in the identification of them as the same sort of thing as another thing of the same kind. Biological parts repositories serve as a common resource where synthetic biologists can go to obtain physical samples of DNA associated with descriptive data about those samples. Perhaps the best example of a biological parts repository is the iGEM Registry of Standard Biological Parts (igem.org). These parts have been classified into collections, some labeled with engineering terms (e.g., chassis, receiver), some labeled with biological terms (e.g., proteindomain, binding), and some labeled with vague generality (e.g., classic, direction). Descriptive catalogues appear to furnish part-specific knowledge and informational specificity that allow us to individuate the catalogued items as parts. Repositories catalogue parts. It seems straightforward enough to understand what is contained within the repository in terms of the general concept: part. But understanding what we mean by “part”, how we individuate parts, or how we attribute the property of parthood to something seems to rely on assumptions about the nature of part-whole relationships. My aim is to tease out these underlying concepts in an attempt to understand the process of what has been called “ontology engineering” (Gruber 2009). To do this, I focus on the preliminary processes of knowledge production which are prerequisite to the construction or identification of ontologies of parts. I investigate the activities of naming and tracking parts within and across repositories and highlight the ineliminable normativity of part-making and kind-making. I will then sketch some problems arising from the varied descriptions of parts contained in different repositories. Lastly, I will critically discuss some recent computational models currently in use that promise to offer practitioners a means of capturing information and meta-information relevant to answering particular questions through the construction of similarity measures for different biological ontologies.
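As a minimal sketch of what a similarity measure over part descriptions might look like (the part identifiers and annotation terms below are hypothetical, not taken from the iGEM Registry), one could compare the annotation sets attached to two catalogued parts with a Jaccard index:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two sets of annotation terms."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical repository entries: part identifiers mapped to the descriptive
# terms under which each repository happens to catalogue the part.
repo_a = {"promoter_X": {"promoter", "constitutive", "E. coli"}}
repo_b = {"part_123": {"promoter", "inducible", "E. coli", "chassis"}}

score = jaccard(repo_a["promoter_X"], repo_b["part_123"])
print(f"annotation overlap: {score:.2f}")   # 0.40 on these invented annotations
```

Even in this toy case, the score depends entirely on which descriptive terms each repository happened to attach to its entries, which is the normative dependence the poster highlights.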
8. Tool Development Drives Progress in Neurobiology and Engineering Concerns (Not Theory) Drive Tool Development: The Case of the Patch Clamp
John Bickle (Mississippi State University / University of Mississippi Medical Center) Philosophy of science remains deeply theory-centric. Even after the sea change over the past three decades, in which “foundational” questions in specific sciences have come to dominate concerns about science in general, the idea that everything of philosophical consequence in science begins and ends with theory still remains prominent. A focus on the way experiment tools develop in laboratory sciences like neurobiology, especially its cellular and molecular mainstream, is therefore illuminating. While theory progress has certainly been an outcome of the development and ingenious use of these tools, it plays almost no role in their development or justification. Engineering concerns predominate at these stages. Theory is thus tertiary in these laboratory sciences. It depends on the development of experiment tools, while the latter depend on engineering ingenuity and persistence. Previously I have developed these points via metascientific investigations of tools that revolutionized neurobiology, at least in the judgment of practicing neurobiologists. These tools include gene targeting techniques, brought into neurobiology from developmental biology a quarter-century ago, and the more recent examples of optogenetic and chemogenetic technologies. All of these tools greatly increased the precision with which neurobiologists can intervene in intra- and inter-cellular signaling pathways in specific neurons in behaving rodents to investigate directly the cellular and molecular causal mechanisms of higher, including cognitive, functions. From these cases I have developed a model of tool development experiments in neurobiology, including a tool’s motivating problem, and first- and second-stage “hook” experiments by which a new tool is confirmed, further developed, and brought to more widespread scientific (and sometimes even public) awareness. Most recently I have confirmed this model with another case, the development of the metal microelectrode, which drove the “reductionist” program in mainstream neurobiology from the late 1950s to the early 1980s. In this poster I further confirm this model of tool development experiments, and sharpen this argument against theory-centrism in the philosophy of science, by reporting the results of a metascientific investigation of the development of patch clamp technology and the initial achievement of the “gigaseal.” More than three decades ago this tool permitted experimentalists for the first time to resolve currents from single ion channels in neuron membranes. Experimental manipulations of this tool soon led to a variety of ways of physically isolating “patches” of neuron membrane, permitting the recording of single channel currents from both sides of the cell membrane. This tool sparked neurobiology’s “molecular wave,” and current theory, concerning mechanisms ranging from ion channels and active transporters to ionotropic and metabotropic receptors, was quickly achieved. This tool likewise developed through engineering ingenuity, not the application of theory. Its development likewise illustrates the independent “life” of experiment vis-à-vis theory in laboratory sciences, and opposes the theory-centric image of science that continues to pervade both philosophy of science generally and the specific fields of neuroscience—cognitive, computational, systems—that dominate philosophical attention.
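For orientation, an order-of-magnitude calculation (added here as background, not part of the abstract) shows why the gigaseal was the enabling engineering step: single-channel currents are only a few picoamperes, and the Johnson noise flowing through the seal resistance drops below that level only when the seal reaches the gigaohm range.

```latex
i = \gamma\,(V - E_{\mathrm{rev}}) \approx (20\ \mathrm{pS})(100\ \mathrm{mV}) = 2\ \mathrm{pA},
\qquad
\sigma_{\mathrm{noise}} = \sqrt{\frac{4 k_B T B}{R_{\mathrm{seal}}}}
\approx
\begin{cases}
1.3\ \mathrm{pA}, & R_{\mathrm{seal}} = 10\ \mathrm{M\Omega},\\
0.13\ \mathrm{pA}, & R_{\mathrm{seal}} = 1\ \mathrm{G\Omega},
\end{cases}
\qquad (T \approx 295\ \mathrm{K},\ B = 1\ \mathrm{kHz}).
```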
9. What Caused the Bhopal Disaster? Causal Selection in Safety and Engineering Sciences
Brian Hanley In cases where many causes together bring about an effect, it is common to select some causes as particularly important. Philosophers since Mill have been pessimistic about analyzing this reasoning due to its variability and the multifarious pragmatic details of how these selections are made. I argue that Mill was right to think these details matter, but wrong that they preclude philosophical understanding of causal selection. In fact, as I illustrate, analyzing the pragmatic details of preventing accidents can illuminate how scientists reason about the important causes of disasters in complex systems, and can shed new light on how causal selection works. I examine the case of the Bhopal disaster. Investigators found that human error and component failures caused the disaster. However, in addition to these proximate causes, many systemic factors also caused the disaster. Many safety scientists have argued that poor operating conditions, bad safety culture, and design deficiencies are the more important causes of disasters like Bhopal. I analyze this methodological disagreement about the important causes of disasters in terms of causal selection. By appealing to pragmatic details of the purposes and reasoning involved in selecting important causes, and relating these details to differences among causes in a Woodwardian framework, I demonstrate how analysis of causal selection can go beyond where most philosophers stop, and how engineering sciences can offer a new perspective on the problem of causal selection.
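To make the interventionist point concrete, here is a deliberately toy structural model, invented for illustration and not drawn from the Bhopal investigations: both an operator error and a poor safety culture count as causes of the disaster, but an intervention on the systemic variable shifts the outcome frequency markedly and stably, one pragmatic ground for selecting it as the important cause.

```python
import random

def simulate(safety_culture_good, n=100_000, seed=0):
    """Toy structural model: a disaster occurs when an operator error coincides
    with a latent design flaw; a good safety culture lowers the chance of both."""
    rng = random.Random(seed)
    disasters = 0
    for _ in range(n):
        p_error = 0.05 if safety_culture_good else 0.30
        p_flaw = 0.10 if safety_culture_good else 0.40
        disasters += (rng.random() < p_error) and (rng.random() < p_flaw)
    return disasters / n

# Intervening on the systemic variable shifts the outcome frequency stably.
print("P(disaster | do(poor safety culture)) =", simulate(False))   # about 0.12
print("P(disaster | do(good safety culture)) =", simulate(True))    # about 0.005
```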
10. Fitting Knowledge: Enabling the Epistemic Collaboration between Science and Engineering
Rick Shang (Washington University, St. Louis) I first argue that philosophers' interest in unique and distinctive forms of knowledge in engineering cannot explain the epistemic collaboration between science and engineering. I then argue that, using the early history of neuroimaging as my case study, fitting knowledge both captures the distinctive nature of engineering and enables fruitful collaboration between science and engineering.
On the one hand, philosophers of science are increasingly interested in cross-discipline, cross-industry collaboration. The general philosophical interest reflects the reality that contemporary research is often interdisciplinary and interfield. For example, the development of the Large Hadron Collider has been critical to basic physics research.
On the other hand, philosophers of engineering are interested in unique, distinctive forms of engineering knowledge that are separate from scientific knowledge. For example, Bunge, a pioneer in philosophy of engineering, talks about operative knowledge in engineering. Operative knowledge is a kind of “superficial” knowledge that is rough but sufficient for action. For example, knowledge sufficient for driving a car involves minimal knowledge of the mechanism of the car.
The challenge to philosophers of engineering, then, is how distinctive forms of engineering knowledge can learn from and inform scientific knowledge to enable science-engineering collaboration.
I suggest that philosophers should look at the early history of neuroimaging. The earliest instrument to measure positron emission came out of nuclear physics research into the nature of positron emission and annihilation in the 1950s. Medical researchers quickly adopted the instrument to study anatomy and physiology by introducing positron-emitting isotopes into animal and human bodies. The adoption initially met with a lukewarm reception because existing technologies were already able to produce similar data at one tenth the cost. After years of adjustment and experimentation, medical researchers in the 1970s decided to focus on the real-time, in vivo measurement of cerebral physiological changes, because the positron emission detection instrument could perform scans faster than all existing technologies.
This history demonstrates the development of fitting knowledge in engineering. Fitting knowledge is knowledge of what an engineered mechanism is best for. It involves mutual adjustment of the mechanism and its potential uses to find a socially and scientifically viable fit between the mechanism and its use(s).
This form of knowledge is distinctively engineering knowledge because it is primarily about the adjustment of an engineered mechanism and its uses. It does not involve extended research into natural phenomena. For example, both the rapid nature of cerebral physiological changes and the scientific importance of capturing those changes in real time were well known at the time.
Fitting knowledge, at the same time, bridges across science and engineering. First, the creation of the original mechanism often involves the input of scientific knowledge. In my case, the indispensable input was the nature of positron emission. Second, finding the best fit often involves scientific considerations and goals. In my case, the new use turned out to be measuring cerebral processes in real time. Locating the fit quickly enabled the scientific study of the physiological basis of cognition.
11. Re-Conceptualizing ‘Biomimetic Systems’: From Philosophy of Science to Engineering and Architecture
Hannah Howland (Pyatok), Vadim Keyser (California State University, Fresno) Current philosophy of science literature focuses on the relations between natural, experimental, and technological systems. Our aim is to extend philosophical analysis to engineering and architectural systems. The purpose of our discussion is to re-conceptualize what it means for an engineered system to be ‘biomimetic’. We argue that biomimicry is a process that requires establishing a heteromorphic relation between two systems: a robust natural system and a robust engineered system. We develop a visual schematic that embeds natural and biomimetic systems, and we support our argument with a visual schematic case study of the woodpecker by showing the step-by-step process of biomimicry. A recent trend in engineering and architecture is that so-called “biomimetic systems” are modeled after natural systems. Specifically, structural and functional components of the engineered system are designed to mimic system components in natural phenomena. For example, bird bone structures both in nature and in engineering effectively respond to force load. Such structures in nature are robust in that they maintain structural integrity with changing conditions. The bird bone remains resilient with increases in compressive stress; moreover, femur bones seem to maintain robustness of structure even at different scales—maintaining constant safety factors across a large size range. While such robust properties are evident in natural systems, we argue that there has been a failure to properly model the same kind of robustness in engineered systems. We argue that this failure of modeling is due to misconceptions about ‘biomimicry’ and ‘robustness’: Using the philosophical literature on representation and modeling, we show that biomimicry requires establishing a heteromorphic relation between two systems: a robust natural system and a robust design system. Additionally, we argue that in order to establish an adequate concept of ‘biomimicry’, engineering and architecture should consider a different conception of ‘robustness’. Using the philosophy of biology literature on ‘robustness’, we argue that robust systems are those that maintain responsiveness to external and internal perturbations. We present a visual schematic to show the continuum of robust systems in nature and engineering. By using visual examples from natural systems and engineered systems, we show that so-called “biomimetic systems” fail to establish such a relation. This is because most of these engineered systems focus on symbolic association and aesthetic characteristics. We categorize these focal points of failed biomimetic engineering and design in terms of ‘bio-utilization’ and ‘biophilia’. We conclude with the suggestion that these re-conceptualizations of ‘biomimicry’ and ‘robustness’ will be useful for: 1) Pushing the fields of engineering and architecture to make more precise the relations between natural and engineered systems; and 2) Developing new analytical perspectives about ‘mimetic’ systems in philosophy of science.
12. The Disunity of Major Transitions in Evolution
Alison McConwell (University of Calgary) Major transitions are events that occur at the grand evolutionary scale and mark drastic turning points in the history of life. They affect evolutionary processes and have significant downstream consequences. Historically, accounts of such large-scale macroevolutionary patterns included progressive directionality, new levels of complexity, and emerging units of selection all toward human existence (Huxley 1942, Stebbins 1969, Buss 1987). In more recent models, human-centrism is less common; however, it is not clear that all events are of the same kind (Maynard Smith and Szathmáry 1995, O’Malley 2014, Calcott and Sterelny 2011). The lack of unity is identified as a failure to “get serious about evolution at the macroscale” (McShea and Simpson 2011, 32). Disunity allegedly yields inconsistencies in our explanations, as well as an arbitrary collection of events, or “just one damn thing after another” (ibid, 22, 32). Against this, I argue for a pluralist view of major transitions, which yields a productive disunity. Epistemically, the claim that all major events share a common property might be explanatorily useful. To unify major events under a single explanatory framework is supposed to reveal something about the robustness and stability of evolutionary processes, and their capacity to produce the same types of events over time. However, this unificatory aim concerning models of transitions is not the only fruitful approach. Setting unification aims aside provides the opportunity for detailed investigations of different transition kinds. Major transitions are diverse across life’s categories and scales, and can vary according to scientific interest. I draw on work from Gould (1989, 2001), who argued for chance’s greater role in life’s history; he denied both directionality and progress in evolution and focused on the prevalence of contingent happenstances. His research on evolutionary contingency facilitated an extensive program, which has primarily focused on the shape or overall pattern of evolutionary history. That pattern includes dependency relations among events and the chance-type processes (e.g. mutation, drift, species sorting, and external disturbances) that influence them. Gould’s evolutionary contingency thesis grounds a contingent plurality of major transition kinds. Specifically, I argue that the causal mechanisms of major transitions are contingently diverse outcomes of evolution by focusing on two case studies: fig-wasp mutualisms and cellular cooperation. I also discuss how chance-based processes of contingent evolution, such as mutation, cause that diversity. And finally, I argue that this diversity can be classified into a plurality of transition kinds. Transition plurality is achieved by attention to structural details, which distinguish types of events. Overall, there is not one single property, or a single set of properties, that all and only major transitions share. On this picture, one should expect disunity, which facilitates a rich understanding of major shifts in history. Unity as an epistemic virtue need not be the default position. The lack of a common thread across transition kinds reveals something about the diversity and fragility in evolution, as well as the role of forces besides natural selection driving the evolutionary process. Overall, to accept a disunified model of major transitions does not impoverish our understanding of life’s history.
13. Representation Re-construed: Answering the Job Description Challenge with a Construal-based Notion of Natural Representation
Mikio Akagi (Texas Christian University) William Ramsey (2007) and others worry that cognitive scientists apply the concept “representation” too liberally. Ramsey argues that representations are often ascribed according to a causal theory he calls the “receptor notion,” according to which a state s represents a state of affairs p if s is regularly and reliably caused by p. Ramsey claims that the receptor notion is what justifies the ascription of representations to edge-detecting cells in V1, fly-detecting cells in frog cortex, and prey-detecting mechanisms in Venus flytraps. However, Ramsey argues that the receptor notion also justifies ascribing representational states to the firing pin in a gun: since the state of the trigger regularly and reliably causes changes in the state of the firing pin, the firing pin represents whether the trigger is depressed. The firing pin case is an absurd consequence. He concludes that the receptor notion is too liberal to be useful to scientists. I argue that something like the receptor notion can be salvaged if being a receptor is contextualized in terms of construal. Construals are judgment-like attitudes whose truth-values can licitly vary independently of the situation they describe. We can construe an ambiguous figure like the Necker cube as if it were viewed from above or below, and we can construe the duck-rabbit as if it were an image of a duck or of a rabbit. We can construe an action like skydiving as brave or foolhardy, depending on which features of skydiving we attend to. On a construal-based account of conceptual norms, a concept (e.g., “representation”) is ascribed relative to a construal of a situation. I describe a minimal sense of what it means to construe a system as an “organism,” and how ascriptions of representational content are made relative to such construals. Briefly, construing something as an organism entails construing it such that it has goals and mechanisms for achieving those goals in its natural context. For example, frogs qua organisms have goals like identifying food and ingesting it. I suggest that ascriptions of natural representations and their contents are always relative to some construal of the representing system qua organism. Furthermore, the plausibility of representation-ascriptions is constrained by the plausibility of their coordinate construal-qua-organism. So the contents we ascribe to representations in frog visual cortex are constrained by the goals we attribute to frogs. Absurd cases like Ramsey’s firing pin can be excluded (mostly) since guns are not easily construed as “organisms.” They have no goals of their own. It is not impossible to ascribe goals to artifacts, but the ascription of folk-psychological properties to tools generally follows a distinct pattern from representation-ascription in science. My construal-based proposal explains the practice of representation-ascription better than Ramsey’s receptor notion. It preserves Ramsey’s positive examples, such as the ascription of representations to visual cortex, but tends to exclude absurd cases like the firing pin. Since cognitive scientists do not actually ascribe natural representations to firearms, I submit that my account is a more charitable interpretation of existing scientific practice.
14. Adaptationism Revisited: Three Senses of Relative Importance
Mingjun Zhang (University of Pennsylvania) In the sixth edition of the Origin, Darwin wrote, “I am convinced that Natural Selection has been the most important, but not the exclusive, means of modification” (Darwin 1872, 4). The idea that natural selection is the most important, if not the only important, driving factor of evolution is further developed and crystallized in the various views under the name of adaptationism. However, it is not always clear what exactly it means to talk about relative importance in the relevant debate. In this paper, I distinguish three senses of relative importance and use this distinction to reexamine the various claims of adaptationism. I give examples of how these different senses of relative importance are applied in different adaptationist claims, and discuss some possible issues in their application. The first sense: A factor is more important than others if the proportion of phenomena in a domain explained or caused by this factor is greater than the proportion of those explained or caused by other factors. I call it relative importance based on relative frequency. The famous debate between Fisher and Wright about the role of natural selection can be understood as a debate about the relative importance of natural selection in this sense, in which they disagree about the relative frequency of genetic variation within and between populations caused by natural selection and other factors like drift. However, philosophers like Kovaka (2017) have argued that there is no necessary connection between relative frequency and relative importance. The second sense: A factor is more important than others if it can explain special phenomena in a domain and help answer the central or most important questions within it. I call it relative importance based on explanatory power. This kind of relative importance is involved in the view of explanatory adaptationism formulated by Godfrey-Smith (2001). According to this view, natural selection is the most important evolutionary factor because it can solve the problems of apparent design and/or adaptedness, which are the central problems in biology. However, some biologists may deny that there are “central questions” in biological research. Even if there are central questions in biology, apparent design and adaptedness may not be the only ones. The third sense: A factor is more important than others if it has greater causal efficacy in the production of a phenomenon than others. For example, the gravity of the Moon is a more important cause of the tides on the Earth than the gravity of the Sun because the Moon has a bigger influence on the tidal height on the Earth. I call it relative importance based on causal efficacy. Orzack and Sober (1996) understand adaptationism as the view that selection is typically the most important evolutionary force. Here they use relative importance in the third sense because their formulation involves the comparison of causal efficacy between selection and other factors in driving evolution. The main issue is how to measure the causal efficacy of different factors.
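The tides example can be made numerically explicit. The following back-of-the-envelope calculation (standard physics, added purely for illustration) compares the tide-raising accelerations of the Moon and the Sun, which scale as M/d^3:

```python
# Tidal (differential) acceleration scales as M / d^3; the constant G cancels
# when comparing the Moon and the Sun.
M_moon, d_moon = 7.35e22, 3.84e8      # mass (kg), mean distance to Earth (m)
M_sun, d_sun = 1.989e30, 1.496e11     # mass (kg), mean distance to Earth (m)

ratio = (M_moon / d_moon**3) / (M_sun / d_sun**3)
print(f"Moon/Sun tide-raising ratio: {ratio:.2f}")   # roughly 2.2
```

The Moon's tidal effect comes out at roughly 2.2 times the Sun's, even though the Sun's overall gravitational pull on the Earth is far stronger, which is what licenses calling the Moon the more important cause in the third sense.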
Mara McGuire (Mississippi State University) Muk Wong (2016) has recently developed a theory of mood and mood function that draws on Laura Sizer’s (2000) computational theory of moods. Sizer argues that moods are higher-order functional states: biases in cognitive processes such as attention allocation, memory retrieval, and the mode of information processing. Wong supplements Sizer’s account with one of mood elicitation: what mood is a response to and what function(s) mood serves. Wong claims that mood is a “mechanism” that monitors our energy levels, both mental and physical, in relation to environmental energy demands and, based on this relation, biases our functional states. Based on his account of mood elicitation, Wong next proposes a single function of mood: to maintain an “equilibrium” between our internal energy and the energy requirements of our environment. I argue that while the need for an account of mood elicitation is well taken, it cannot be understood in terms of a mechanism monitoring energy levels. A theory of mood elicitation must be able to explain the elicitation of different types of moods on different occasions (e.g., anxious, irritable, contented), that is, why different types of moods are elicited by different events or states of affairs. Understanding mood elicitation along a single dimension, such as the relation between energy level and energy demands, is incapable of doing this. Distinct mood types appear to be more complicated than just differential responses to energy levels and demands. But then Wong’s account of mood function must be rejected. I propose instead that we adopt a multi-dimensional account of mood elicitation. As a first step toward this, I draw upon a different conception of mental energy from Wong’s and argue that mental energy should be expanded to include states of ego-depletion as well as cognitive fatigue (Inzlicht & Berkman 2015). While this more robust account of mental energy increases the explanatory power of Wong’s account, his theory would still not be sufficient to account fully for the elicitation of different types of moods. I then propose that we draw on a related area of affective science, appraisal theories of emotion elicitation, and consider whether important dimensions recognized in these theories, such as “goal relevance and congruence,” “control” and “coping potential” (Moors et al. 2013), are helpful toward understanding the elicitation of moods. I suggest that drawing on these dimensions to start to construct a multi-dimensional account of mood elicitation may explain the elicitation of different types of moods and provide a better foundation for understanding mood function.
16. Armchair Chemistry and Theoretical Justification in Science
Amanda Nichols (Oklahoma Christian University), Myron Penner (Trinity Western University) In the late 19th century, Sophus Jørgensen proposed structures for cobalt complexes that utilized the more developed bonding principles of organic chemistry and the reigning understanding of valence. Similar to how organic compounds typically form hydrocarbon chains, Jørgensen created models for cobalt complexes that also had a chainlike structure. His models featured (1) a cobalt metal center with three attachments, because cobalt was understood as trivalent, and (2) one attachment consisting of a chain of atoms, like the carbon chain featured in organic chemistry. Alfred Werner proposed a different model for cobalt compounds that featured octahedral arrangements around the cobalt metal center, calling the metal complex a coordination compound. Werner’s coordination theory introduced a new type of valence allowing cobalt to have six attachments and abandoned Jørgensen’s chain theory. Experimental work confirmed Werner’s theory, making it central to inorganic chemistry. One issue in the Jørgensen-Werner debate over the structure of cobalt complexes concerns differences between the two scientists over the nature of theoretical justification: the epistemic reasons each had for resisting change (as with Jørgensen) or looking for a different model (as with Werner). In our paper, we compare and contrast the concepts of theoretical justification employed by Jørgensen and Werner. Jørgensen felt that Werner lacked justification for his experimental model. Werner, presumably, had some justification for his model, albeit a different sort of justification than Jørgensen. While Werner constructed a radically different and creative model, Jørgensen resisted revision to the established framework. Werner emphasized symmetry and geometric simplicity in his model, and the consistent patterns that emerged were viewed as truth-conducive. Jørgensen, on the other hand, criticized Werner’s model on the basis that it lacked evidence and was an “ad hoc” explanation. Jørgensen disagreed that Werner’s method of hypothetical reasoning was the best approach in theory-building. G. N. Lewis’ electronic theory of valency, along with the crystal field and molecular orbital theories of bonding that explain Werner’s coordination theory, had not yet been developed. Though Werner seemed comfortable proceeding with details not settled, Jørgensen was not. Werner’s descriptions of his model would frame him as a scientific realist, while some historical evidence suggests that Jørgensen could be classified as an anti-realist. Assuming this, we explore the contribution realism makes towards the progress of science, and how anti-realism might hinder it. We conclude by noting how the different concepts of theoretical justification embodied by Jørgensen and Werner help us understand both continuity and diversity in multiple approaches to scientific method.
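For readers unfamiliar with the chemistry, the experimental confirmation mentioned above can be illustrated with the standard textbook series of cobalt(III) ammine chlorides (a generic illustration, not a claim about the authors' analysis): Werner's formulas predict how many chlorides sit outside the coordination sphere and are therefore ionizable, and conductivity and silver chloride precipitation measurements matched those counts.

```python
# Werner's formulation of the cobalt(III) ammine chloride series: chlorides
# written outside the square brackets lie outside the coordination sphere and
# are ionizable; those inside are bound directly to the metal.
werner = {
    "CoCl3·6NH3": ("[Co(NH3)6]Cl3", 3),    # 4 ions in solution
    "CoCl3·5NH3": ("[Co(NH3)5Cl]Cl2", 2),  # 3 ions in solution
    "CoCl3·4NH3": ("[Co(NH3)4Cl2]Cl", 1),  # 2 ions in solution
}
for empirical_formula, (coordination_formula, n_ionizable) in werner.items():
    print(f"{empirical_formula}: {coordination_formula} -> {n_ionizable} ionizable Cl")
```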
Cristin Chall (University of South Carolina / Rheinische Friedrich-Wilhelms-Universität Bonn) The Standard Model (SM) is one of our most well tested and highly confirmed theories. However, physicists, perceiving flaws in the SM, have been building models describing physics that goes beyond it (BSM). Many of these models describe alternatives to the Higgs mechanism, the SM explanation for electroweak symmetry breaking (EWSB). So far, no BSM model has been empirically successful; meanwhile, the Higgs particle discovered in 2012 has exhibited exactly the properties predicted by the SM. Despite this, many BSM models have remained popular, even years after this SM-like Higgs boson has been found. This is surprising, since it appears to fly in the face of conventional understandings of scientific practice to have competing models interacting in a complex dynamics even though none of them have achieved empirical success and all of them are faced with a predictively superior alternative. The question becomes: How do we rationally explain physicists' continued work on models that, though not entirely excluded, are increasingly experimentally disfavoured? I will argue that the best framework for explaining these complex model dynamics is the notion of scientific research programmes, as described by Lakatos (1978). To apply this framework, however, I need to modify it to accommodate the collections of models which share the same core theoretical commitments, since Lakatos dismisses models to the periphery of research programmes. These collections of models, which I call ‘model-groups’, behave as full-fledged research programmes, supplementing the series of theories that originally defined research programmes. By allowing the individual models to be replaced in the face of unfavourable empirical results, the hard core of a model-group is preserved. The practical benefit of applying this framework is that it explains the model dynamics: physicists continue to formulate and test new models based on the central tenets of a model-group, which provide stability and avenues for making progress, and rationally continue giving credence to BSM models lacking the empirical support enjoyed by the SM account of EWSB. To demonstrate the model dynamics detailed by the Lakatosian framework, I will use the Composite Higgs model-group as an example. Composite Higgs models provide several benefits over the SM account, since many have a dark matter candidate, or accommodate naturalness. However, the measured properties of the Higgs boson give every indication that it is not a composite particle. I trace the changing strategies used in this model-group in order to demonstrate the explanatory power of Lakatosian research programmes applied in this new arena. Thus, I show that Lakatos, suitably modified, provides the best avenue for philosophers to describe the model dynamics in particle physics, a previously under-represented element of the philosophical literature on modelling.
18. Who Is Afraid of Model Pluralism?
Walter Veit This paper argues for the explanatory power of evolutionary game theory (EGT) models in three distinct but closely related ways. First, following Sugden and Aydinonat & Ylikoski, I argue that EGT models are created as parallel worlds, i.e., surrogate systems in which we can explore particular (evolutionary) mechanisms by isolating them from everything that could be interfering in the real world. By specifying the pool of strategies, the game, and the fitness of the strategies involved, EGT explores potential phenomena and dynamics emerging and persisting under natural selection. Given a particular phenomenon, e.g. cooperation, war of attrition, costly signalling, EGT enables the researcher to explore multiple ‘how-possibly’ explanations of how the phenomenon could have arisen and contrast them with each other, e.g. sexual selection, kin selection and group selection. Secondly, I argue that by eliminating ‘how-possibly’ explanations through eliminative induction, we can arrive at robust mechanisms explaining the stability and emergence of evolutionarily stable equilibria in the real world. In order for such an eliminative process to be successful, it requires deliberate research in multiple scientific disciplines such as genomics, ethology and ecology. This research should be guided by the assumptions made in the applications of particular EGT models, especially the range of parameters for payoffs and the strategies found in nature. Thirdly, I argue that bridging the gap between the remaining set of ‘how-possibly’ explanations and the actual explanation requires abduction, i.e. inference to the best explanation. Such inference shall proceed by considering issues of resemblance between the multiple EGT models and the target system in question, evaluating their credibility. Together these three explanatory strategies will turn out to be sufficient and necessary to turn EGT models into a genuine explanation.
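As a minimal sketch of the kind of surrogate system described above (a generic Hawk-Dove example under assumed payoffs, not one of the paper's case studies), replicator dynamics for a two-strategy game settle on the evolutionarily stable mixture:

```python
# Replicator dynamics for the Hawk-Dove game: with resource value V and fight
# cost C > V, the evolutionarily stable hawk frequency is V / C.
V, C = 2.0, 6.0

def payoffs(p_hawk):
    """Expected payoffs to Hawk and Dove in a population with hawk frequency p_hawk."""
    w_hawk = p_hawk * (V - C) / 2 + (1 - p_hawk) * V
    w_dove = (1 - p_hawk) * V / 2          # Dove gets 0 against Hawk
    return w_hawk, w_dove

p = 0.9                                    # initial hawk frequency
for _ in range(2000):
    w_hawk, w_dove = payoffs(p)
    w_bar = p * w_hawk + (1 - p) * w_dove  # mean population fitness
    p += 0.01 * p * (w_hawk - w_bar)       # discrete-time replicator update
print(f"hawk frequency after selection: {p:.3f} (analytic ESS: {V / C:.3f})")
```

The simulated population converges to the analytic value V/C, illustrating how a fully specified model world isolates one selective mechanism from everything else.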
19. The Role of Optimality Claims in Cognitive Modelling
Brendan Fleig-Goldstein (University of Pittsburgh) Why might a scientist want to establish a cognitive model as rational or “optimal” in some sense (e.g., relative to some normal environment)? In this presentation, I argue that one motivation for finding optimal cognitive models is to facilitate a particular strategy for marshalling evidence for cognitive theories. This claim stands in contrast to previous thinking about the role of optimality claims in cognitive modelling. Previous thinking has generally suggested that optimality claims either serve to help provide teleological explanations (explanatory role), heuristically aid in the search for predictively accurate models (methodological role), or are themselves hypotheses in need of testing (empirical role). The idea that optimality claims can play a role in the process of testing theories of cognition has not previously been explored. The evidential strategy proceeds as follows: a scientist proposes an optimal model, and then uses this optimal model to uncover systematic discrepancies between idealized human behavior and observed human behavior. The emergence of discrepancies with a clear signature leads to the discovery of previously unknown details about human cognition (e.g., computational resource costs) that explain the discrepancy. The incorporation of these details into models then gives rise to new idealized models that factor in these details. New discrepancies emerge, and the process repeats itself in an iterative fashion. Successful iterations of this process result in tighter agreement between theory and observation. I draw upon George E. Smith’s analysis of evidence in Newtonian gravity research (e.g., 2014) to explain how this process of iteratively uncovering “details that make a difference” to the cognitive system constitutes a specific logic of theory-testing. I discuss Thomas Icard’s work on bounded rational analysis (e.g., 2018) as an illustration of this process in action.
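A toy version of this evidential strategy, constructed here purely for illustration and not taken from Smith or Icard, might look as follows: an idealized optimal model predicts maximizing, the simulated "observed" behavior probability-matches, and folding a bounded-resource parameter into the model tightens the agreement.

```python
import math
import random

rng = random.Random(1)
p_reward = 0.7       # option A pays off with probability 0.7, option B with 0.3

# "Observed" behaviour: simulated agents probability-match rather than always
# picking the better option (a stand-in for data, not a real data set).
observed = sum(rng.random() < p_reward for _ in range(10_000)) / 10_000

# Iteration 1: the idealized optimal model predicts always choosing A (rate 1.0).
print(f"observed rate of choosing A:        {observed:.3f}")
print(f"discrepancy from the optimal model: {abs(1.0 - observed):.3f}")

# Iteration 2: fold in a resource-style detail (softmax noise with inverse
# temperature beta), fit beta to the data, and re-check the discrepancy.
def softmax_rate(beta):
    return 1.0 / (1.0 + math.exp(-beta * (p_reward - (1 - p_reward))))

beta = min((b / 10 for b in range(1, 101)), key=lambda b: abs(softmax_rate(b) - observed))
print(f"revised model (beta = {beta:.1f}) predicts {softmax_rate(beta):.3f}; "
      f"discrepancy now {abs(softmax_rate(beta) - observed):.3f}")
```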
20. Mechanistic Explanations and Mechanistic Understanding in Computer Simulations: A Case Study in Models of Earthquakes
Hernan Felipe Bobadilla Rodriguez (University of Vienna) Scientists often resort to computer simulations to explain and understand natural phenomena. Several philosophers of science claim that these epistemic goals are related: explanations provide understanding. Controversially, while some philosophers say that explanations are the only way to gain understanding, others argue that there are alternative, non-explanatory ways to gain understanding. The aim of this paper is to assess explanations and understanding gained by means of computer simulations. In particular, I focus on assessing mechanistic explanations and mechanistic understanding – in the “new mechanist” sense. Furthermore, I examine the relations between mechanistic explanations and mechanistic understanding. In order to achieve these aims, I perform a case study based on an agent-based computer simulation, known as the Olami, Feder and Christensen model (OFC, 1992). The OFC model predicts and explains aspects of a robust behaviour of earthquakes, known as the Gutenberg-Richter law. This behaviour consists in the robust power-law distribution of earthquakes according to their magnitudes across seismic regions. Roughly speaking, the OFC model simulates the power-law distribution of earthquakes by modelling the reciprocal influence between frictional forces and elastic deformation at a generic geological fault. In this case, a geological fault is represented as a cellular automaton in which cells redistribute elastic potential energy to their neighbouring cells when local thresholds of static friction are exceeded. I deliver the following results: 1) The OFC model is a mechanistic model. That is, the component elements of the OFC model can be interpreted as mechanistic elements, namely entities, activities and organization. 2) The OFC model is a mechanism, namely a computing mechanism à la Piccinini (2007), which produces phenomena, namely outputs in a computer program. 3) A description of the OFC model, qua computing mechanism, mechanistically explains the power-law distribution of model-earthquakes. 4) The mechanistic explanation of the power-law distribution of model-earthquakes in the OFC model does not hold for real earthquakes. This is due to the lack of mapping between the mechanistic elements of the OFC model and the putative mechanistic elements in a geological fault. In particular, a mapping of mechanistic entities is problematic. The mechanistic entities in the OFC model, namely the cells of the cellular automaton, are arbitrary divisions of space. They are not working parts in a geological fault. 5) However, the OFC model provides mechanistic understanding of the power-law distribution of real earthquakes. The OFC model provides us with a mechanism that can produce a power-law distribution of earthquakes, even though it is not the actual one. Information about a possible mechanism may give oblique information about the actual mechanism (Lipton, 2009). In this sense, surveying the space of possible mechanisms advances our mechanistic understanding of real earthquakes.
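For readers unfamiliar with the model, a bare-bones version of the OFC cellular automaton can be sketched as follows (parameter values are illustrative; the published model specifies the drive and boundary conditions more carefully):

```python
import random

L, alpha, threshold = 20, 0.2, 1.0   # lattice size, dissipation parameter, friction threshold
rng = random.Random(0)
z = [[rng.random() for _ in range(L)] for _ in range(L)]   # initial "stress" field
sizes = []

for _ in range(10_000):
    # Uniform drive: load every site until the most stressed one reaches threshold.
    imax, jmax = max(((i, j) for i in range(L) for j in range(L)),
                     key=lambda ij: z[ij[0]][ij[1]])
    drive = threshold - z[imax][jmax]
    for i in range(L):
        for j in range(L):
            z[i][j] += drive
    z[imax][jmax] = threshold          # guard against floating-point round-off

    # Relaxation: a toppling cell resets to zero and passes a fraction alpha of
    # its stress to each neighbour; stress crossing the open boundary is lost.
    unstable, size = [(imax, jmax)], 0
    while unstable:
        i, j = unstable.pop()
        if z[i][j] < threshold:
            continue
        size += 1
        s, z[i][j] = z[i][j], 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L:
                z[ni][nj] += alpha * s
                if z[ni][nj] >= threshold:
                    unstable.append((ni, nj))
    sizes.append(size)                 # avalanche size = number of topplings

print("events:", len(sizes), "largest avalanche:", max(sizes), "topplings")
```

Avalanche sizes, the number of topplings triggered by one drive step, play the role of earthquake magnitudes; after a long transient their distribution is approximately a power law, which is the Gutenberg-Richter-like behaviour discussed above.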
21. Concepts of Approximate Solutions and the Finite Element Method
Philosophy of Science00:21 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:21:00 UTC - 2018/11/03 06:59:00 UTC
Nicolas Fillion (Simon Fraser University) I discuss epistemologically unique difficulties associated with the solution of mathematical problems by means of the finite element method. This method, used to obtain approximate solutions to multidimensional problems within finite domains with possibly irregular boundary conditions, has received comparatively little attention in the philosophical literature, despite being the most dependable computational method used by structural engineers and other modelers handling complex real-world systems. Like most numerical methods that are part of the standard numerical analysis curriculum, this method breaks from the classical perspective on exact mathematical solutions, as it involves error-control strategies within given modeling contexts. This is why assessing the validity of such inexact solutions requires that we emphasize aspects of the relationship between solutions and mathematical structures that are not required to assess putative exact solutions. One such structural element is the sensitivity or robustness of solutions under perturbations, whose characterization leads to a deeper understanding of the mechanisms that drive the behavior of the system. The transition to an epistemological understanding of the concept of approximate solution can thus be characterized as an operative process of structure enrichment. This transition generates a scheme to assess the justification of solutions that contains more complex semantic elements whose murkier inner logic is essential to a philosophical understanding of the lessons of applied mathematics. To be sure, there is a practical acceptance of the finite element method by practitioners in their attempt to overcome the representational and inferential opacity of the models they use, mainly because it has proved to be tremendously successful. However, the finite element method differs in important respects from other numerical methods. What makes the method so advantageous in practice is its discretization scheme, which is applicable to objects of any shape and dimension. This innovative mode of discretization provides a simplified representation of the physical model by decomposing its domain into triangles, tetrahedra, or analogs of the right dimension. Officially, each element of the simplified domain is then locally associated with a piecewise low-degree polynomial that is interpolated with the polynomials of the other elements to ensure sufficient continuity between the elements. On that basis, a recursive composition of all the elements is made to obtain the solution over the whole domain. However, this presents applied mathematicians with a dilemma, since using piecewise polynomials that will be continuous enough to allow for a mathematically sound local-global “gluing” is typically computationally intractable. Perhaps surprisingly, computational expediency is typically chosen over mathematical soundness. Strang has characterized this methodological gambit as a "variational crime." I explain how committing variational crimes is a paradigmatic violation of epistemological principles that are typically used to make sense of approximation in applied mathematics. On that basis, I argue that the epistemological meaning of these innovations and difficulties in the justification of the relationship between the system and the solution lies in an additional structural enrichment of the concept of validity of a solution that is in line with recently developed methods of a posteriori error analysis.
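To make the discretization-and-gluing idea concrete, here is a minimal one-dimensional toy example (a reader's sketch, not drawn from the poster): piecewise linear elements for -u'' = f on [0, 1] with zero boundary values, where continuity at shared nodes plays the role of the local-global "gluing" described above.

```python
import numpy as np

def fem_1d_poisson(f, n_elements=16):
    """Piecewise-linear finite elements for -u''(x) = f(x) on [0, 1],
    with u(0) = u(1) = 0.

    The domain is cut into n_elements intervals (the 1D analogue of the
    triangles mentioned above); on each interval the solution is a low-degree
    (here linear) polynomial, and continuity at the shared nodes 'glues' the
    local pieces into a global approximate solution.
    """
    n_nodes = n_elements + 1
    nodes = np.linspace(0.0, 1.0, n_nodes)
    h = nodes[1] - nodes[0]

    K = np.zeros((n_nodes, n_nodes))   # global stiffness matrix
    b = np.zeros(n_nodes)              # global load vector
    for e in range(n_elements):
        i, j = e, e + 1
        # Element stiffness for linear 'hat' functions on an interval of length h.
        K[i, i] += 1.0 / h; K[j, j] += 1.0 / h
        K[i, j] -= 1.0 / h; K[j, i] -= 1.0 / h
        # Midpoint quadrature for the load term.
        xm = 0.5 * (nodes[i] + nodes[j])
        b[i] += 0.5 * h * f(xm)
        b[j] += 0.5 * h * f(xm)

    # Impose the homogeneous Dirichlet boundary conditions and solve.
    interior = slice(1, -1)
    u = np.zeros(n_nodes)
    u[interior] = np.linalg.solve(K[interior, interior], b[interior])
    return nodes, u

# Example: f(x) = pi^2 sin(pi x), whose exact solution is u(x) = sin(pi x).
nodes, u = fem_1d_poisson(lambda x: np.pi**2 * np.sin(np.pi * x))
```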
22. A Crisis of Confusion: Unpacking the Replication Crisis in the Computational Sciences
Philosophy of Science00:22 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:22:00 UTC - 2018/11/03 06:59:00 UTC
Dasha Pruss (University of Pittsburgh) A flurry of failed experimental replications in the 21st century has led to the declaration of a "replication crisis" in a number of experimental fields, including psychology and medicine. Recent articles (e.g., Hutson, 2018) have proclaimed a similar crisis in the computational sciences: researchers have had widespread difficulties in reproducing key computational results, such as reported levels of predictive accuracy of machine learning algorithms. At first, importing the experimental concept of a replication crisis to explain what is happening in the computational sciences might seem attractive - in both fields, questionable research practices have led to the publication of results that cannot be reproduced. With the help of careful conceptual analysis, however, it becomes clear that this analogy between experimental sciences and computational sciences is at best a strained one, and at worst a meaningless one. Scientific writing on experimental replication is awash with conceptual confusion; to assess the concept of replication in the computational sciences, I appeal to Machery's re-sampling account of experimental replication (Machery, Ms). On the re-sampling account, an experiment replicates an earlier experiment if and only if the new experiment consists of a sequence of events of the same type as the original experiment, while re-sampling some of its experimental components, with the aim of establishing the reliability (as opposed to the validity) of an experimental result. The difficulty of applying the concept of experimental replication to the crisis in the computational sciences stems from two important epistemic differences between computational sciences and experimental sciences: the first is that the distinction between random and fixed factors is not as clear or consistent in the computational sciences as it is in the experimental sciences (the components that stay unchanged between the two experiments are fixed components, and the components that get re-sampled are random components). The second is that, unlike in the experimental sciences, computational components often cannot be separately modified - this means that establishing the reliability of a computational result is often intimately connected to establishing the validity of the result. In light of this, I argue that there are two defensible ways to conceive of replicability in the computational sciences: weak replicability (reproducing an earlier result using identical code and data and different input or system factors), which is concerned with issues already captured by the concept of repeatability, and strong replicability (reproducing an earlier result using different code or data), which is concerned with issues already captured by robustness. Because neither concept of replicability captures anything new with regard to the challenges the computational sciences face, I argue that we should resist the fad of seeing a replication crisis at every corner and should do away with the concept of replication in the computational sciences. Instead, philosophers and computer scientists alike should focus exclusively on issues of repeatability and robustness.
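The distinction between weak and strong replicability can be illustrated with a toy machine-learning example (a reader's illustration, not the author's): re-sampling only a system-level factor such as the train/test split probes repeatability, while re-sampling the data themselves probes robustness.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def reported_accuracy(data_seed=0, split_seed=0):
    """Train a classifier and report test accuracy; both the data set and
    the train/test split are controlled by explicit seeds."""
    X, y = make_classification(n_samples=500, n_features=10, random_state=data_seed)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=split_seed)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return model.score(X_te, y_te)

original = reported_accuracy(data_seed=0, split_seed=0)

# 'Weak replicability' / repeatability: same code and data; only a
# system-level factor (the random train/test split) is re-sampled.
weak = reported_accuracy(data_seed=0, split_seed=1)

# 'Strong replicability' / robustness: the data themselves are re-sampled.
strong = reported_accuracy(data_seed=1, split_seed=0)

print(original, weak, strong)
```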
23. Deep Learning Models in Computational Neuroscience
Philosophy of Science00:23 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:23:00 UTC - 2018/11/03 06:59:00 UTC
Imran Thobani (Stanford University) The recent development of deep learning models of parts of the brain such as the visual system raises exciting philosophical questions about how these models relate to the brain. Answering these questions could help guide future research in computational neuroscience as well as provide new philosophical insights into the various ways that scientific models relate to the systems they represent or describe. By being trained to solve difficult computational tasks like image classification, some of these deep learning models have been shown to successfully predict neural response behavior without simply being fit to the neural data (Yamins 2016). This suggests that these models are more than just phenomenological models of neural response behavior. There is supposed to be a deeper similarity between the deep learning model and the neural system it is supposed to represent that goes beyond the sharing of neural response properties. But what exactly is this similarity relationship? I argue that there are three distinct similarity relationships that can hold between a deep learning model and a target system in the brain, and I explicate each relationship. The first is surface-level similarity between the activation patterns of the model neurons in response to a range of sensory inputs and the firing rates of actual neurons in response to the same (or sufficiently similar) sensory stimuli. The second kind of similarity is architectural similarity between the neural network model and the actual neural circuit in a brain. The model is similar to the brain in this second sense, to the extent that the mathematical relationships that hold between the activations of model neurons are similar to actual relationships between firing rates of neurons in the brain. The third kind of similarity is similarity between the coarse constraints that were used in the design of the model, and constraints that the target system in the brain obeys. These constraints include, amongst other things, the objective function that the model is trained to optimize, the number of neurons used in the model, and the learning rule that is used to train the model. Having distinguished these three kinds of similarity, I address the question of which kind of similarity is most relevant to the question of what counts as a good model of the brain. I argue that similarity at the level of coarse constraints is a necessary criterion for a good model of the brain. While architectural and surface-level similarity are relevant criteria for a good model of the brain, their relevance needs to be understood in terms of providing evidence for similarity at the level of coarse constraints.
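One common way to operationalize the first, surface-level kind of similarity is to fit a linear readout from model-unit activations to recorded firing rates and score its held-out predictions; the sketch below is a toy version with simulated data, not the specific pipeline of the work cited above.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def surface_similarity(model_acts, firing_rates, seed=0):
    """Toy measure of 'surface-level' similarity: how well a linear readout of
    model-unit activations predicts recorded firing rates on held-out stimuli
    (rows are stimuli, columns are model units / neurons)."""
    A_tr, A_te, R_tr, R_te = train_test_split(
        model_acts, firing_rates, test_size=0.25, random_state=seed)
    readout = Ridge(alpha=1.0).fit(A_tr, R_tr)
    pred = readout.predict(A_te)
    # Median correlation across neurons between predicted and observed rates.
    corrs = [np.corrcoef(pred[:, i], R_te[:, i])[0, 1] for i in range(R_te.shape[1])]
    return float(np.median(corrs))

# Fake data standing in for model activations and neural recordings.
rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 50))            # 200 stimuli x 50 model units
rates = acts @ rng.normal(size=(50, 20)) + rng.normal(scale=0.5, size=(200, 20))
print(surface_similarity(acts, rates))
```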
24. Empirical Support and Relevance for Models of the Evolution of Cooperation: Problems and Prospects
Philosophy of Science00:24 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:24:00 UTC - 2018/11/03 06:59:00 UTC
Archie Fields III (University of Calgary) Recently it has been argued that agent-based simulations which involve using the Prisoner’s Dilemma and other game-theoretic scenarios as a means to study the evolution of cooperation are seriously flawed because they lack empirical support and explanatory relevance to actual cooperative behavior (Arnold 2014, 2015). I respond to this challenge for simulation-based studies of the evolution of cooperation in two ways. First, I argue that it is simply false that these models lack empirical support, drawing attention to a case which highlights how empirical information has been and continues to be incorporated into agent-based, game-theoretic models used to study the evolution of cooperation. In particular, I examine the work of Bowles and Gintis and show how they draw upon ethnographic and biological evidence as well as experiments in behavioral psychology in their models of the evolution of strong reciprocity (2011). Ultimately, I take Arnold’s misdiagnosis of the empirical support and relevance of these models to result from overly stringent standards for empirical support and a failure to appreciate the role the results of these models can play in identifying and exploring constraints on the evolutionary mechanisms (e.g. kin selection, group selection, spatial selection) involved in the evolution of cooperation. Second, I propose that a modified version of Arnold’s criticism is still a threat to model-based research in the evolution of cooperation: the game-theoretic models used to study the evolution of cooperation suffer from certain limitations because of the level of abstraction involved in these models. Namely, these models in their present state cannot be used to explore what physical or cognitive capacities are required for cooperative behavior to evolve because all simulated agents come equipped with the ability to cooperate or defect. That is, present models can tell us about how cooperation can persist or fail in the face of defection or other difficulties, but cannot tell us very much about how agents come to be cooperators in the first place. However, I also suggest a solution to this problem by arguing that there are promising ways to incorporate further empirical information into these simulations via situated cognition approaches to evolutionary simulation. Drawing on the dynamics of adaptive behavior research program outlined by Beer (1997) and more recent work by Bernard et al. (2016), I conclude by arguing that accounting for the physical characteristics of agents and their environments can shed further light on the origins of cooperation.
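For readers unfamiliar with the genre, the following is a generic sketch of an agent-based, game-theoretic simulation of cooperation (a spatial Prisoner's Dilemma with imitate-the-best updating); it is not the Bowles-Gintis model, and the payoff values and update rule are illustrative.

```python
import numpy as np

# Prisoner's Dilemma payoffs: (my move, their move) -> my payoff, with T > R > P > S.
R, S, T, P = 3, 0, 5, 1
PAYOFF = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

def step(grid):
    """One generation on a toroidal lattice: every agent plays its four
    neighbours, then copies the strategy of its best-scoring neighbour
    (keeping its own strategy if no neighbour scored higher)."""
    L = grid.shape[0]
    scores = np.zeros((L, L))
    neigh = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for i in range(L):
        for j in range(L):
            for di, dj in neigh:
                other = grid[(i + di) % L, (j + dj) % L]
                scores[i, j] += PAYOFF[(grid[i, j], other)]
    new = grid.copy()
    for i in range(L):
        for j in range(L):
            best, best_score = grid[i, j], scores[i, j]
            for di, dj in neigh:
                ni, nj = (i + di) % L, (j + dj) % L
                if scores[ni, nj] > best_score:
                    best, best_score = grid[ni, nj], scores[ni, nj]
            new[i, j] = best
    return new

rng = np.random.default_rng(0)
grid = rng.choice(np.array(['C', 'D']), size=(30, 30))
for _ in range(50):
    grid = step(grid)
print("fraction cooperating:", np.mean(grid == 'C'))
```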
25. Philosophy In Science: A Participatory Approach to Philosophy of Science
Philosophy of Science00:25 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:25:00 UTC - 2018/11/03 06:59:00 UTC
Jessey Wright (Stanford University) The turn towards practice saw philosophers become more engaged with methodological and theoretical issues arising within particular scientific disciplines. The nature of this engagement ranges from close attention to published scientific research and archival materials, to structured interviews and ethnographic research (Leonelli 2012; Osbeck and Nersessian 2017), to participation in a research setting (Tuana 2013). I propose philosophy in science as an approach to inquiry that is continuous with these. It is philosophical research conducted via the integration of philosophical ways of thinking into the practices of science. In this poster I describe the aims of this approach, briefly outline a method for doing it, and identify some benefits and drawbacks. To develop this position, I examine my graduate training, which involved close contact with neuroscientists, and my current postdoctoral appointment as the resident philosopher in a neuroscience lab. My dissertation project was born out of the stark contrast I noticed between philosophical analyses of neuroscience and the activities I observed while attending lab meetings. Philosophical critiques of neuroimaging research often overlook small steps in the experimental process that are invisible in publications, but plainly visible in day-to-day activities. This work produced contributions to philosophy of science, and improved the data interpretation practices within my lab. I present this work as an example of philosophical inquiry that advances both philosophy and science. It demonstrates how philosophical theories can be directly applied to advance the scientific problems that they are descriptive of. The use of philosophy in empirical contexts allows the realities of scientific practice to ‘push back,’ revealing aspects of scientific practice that are under-appreciated by the philosophical analyses and accounts of science one is using. My position as a resident philosopher in a lab shows how the normative aims of philosophy are realized in collaboration. Projects in my lab are united by the goal of improving reproducibility and the quality of evidence in neuroimaging research. My project examines how the development of infrastructures for sharing and analyzing data influences the standards of evidence in neuroscience. In particular, recent disputes in cognitive neuroscience between database users and developers have made salient to neuroscientists that the impact tool developers intend to have, and the actual uses of their tools, may be incompatible. The process of articulating the philosophical dimensions of these disputes, and examining decisions surrounding tool development, has influenced the form, presentation, and promotion of those tools. My approach, of pursuing philosophically interesting questions whose answers will provide valuable insight for scientists, integrates philosophical skills and ways of thinking seamlessly into scientific practices. I conclude by noting advantages and pitfalls of this approach.
26. On the Death of Species: Extinction Reconsidered
Philosophy of Science00:26 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:26:00 UTC - 2018/11/03 06:59:00 UTC
Leonard Finkelman (Linfield College) Nearly all species that have ever evolved are now extinct. Despite its ubiquity, theorists have generally neglected to clarify the concept (Raup 1992). In the most extensive conceptual analysis currently available, Delord (2007) distinguishes three senses by which the term “extinct” may be predicated of a taxon. A taxon is “functionally” extinct if the taxon no longer contributes to ecosystem processes; a taxon is “demographically” extinct if the taxon has no living members; a taxon is “finally” extinct if the information necessary to propagate the taxon vanishes. Ambiguity between these senses contributes to confusions and inconsistencies in discussions of extinction (Siipi & Finkelman 2017). I offer a more general account that reconciles Delord’s three senses of the term “extinct” by treating the term as a relation rather than a single-place predicate: a taxon is extinct if and only if the probability of any observer’s encountering the species approaches zero. To treat extinction as a relation in this way follows from methods for diagnosing precise extinction dates through extrapolation from “sighting record” frequencies (Solow 1993; Bradshaw, et al. 2012). By this account, Delord’s three senses of extinction mark different levels of significance in the sighting probability’s approach to zero. This has the advantages of integrating all discussions of extinction under a single unitary concept and of maintaining consistent and unambiguous use of the term, even as technological advances alter the scope of extinction.
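The sighting-record methods cited above can be made concrete with a small example. The function below states the classic test attributed to Solow (1993) as the present editor understands it (treat the formula as a reconstruction and consult the original before relying on it): under a stationary Poisson sighting process, a long gap after the last sighting drives the probability of a further encounter toward zero.

```python
def solow_pvalue(sightings, t_end):
    """Sighting-record test in the spirit of Solow (1993).

    Under the null hypothesis that the taxon is still extant and sightings
    follow a stationary Poisson process over (0, t_end], the n sighting times
    are uniformly distributed, so the probability that the latest of them is
    no later than t_last is (t_last / t_end) ** n. A small value supports
    declaring extinction, i.e., the encounter probability has effectively
    'approached zero'. (Formula stated from memory, as an assumption.)
    """
    n = len(sightings)
    t_last = max(sightings)
    return (t_last / t_end) ** n

# Example: sightings in years 3-47 of a record, assessed in year 100.
print(solow_pvalue([3, 10, 22, 31, 40, 47], t_end=100))  # roughly 0.011
```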
27. Do Heuristics Exhaust the Methods of Discovery?
Philosophy of Science00:27 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:27:00 UTC - 2018/11/03 06:59:00 UTC
Benjamin Jantzen (Virginia Tech), Cruz Davis (University of Massachusetts, Amherst) Recently, one of us presented a paper on the history of “algorithmic discovery” at an academic conference. As we intend the term, algorithmic discovery is the production of novel and plausible empirical generalizations by means of an effective procedure, a method that is explicitly represented and executed in finitely many steps. In other words, we understand it to be discovery by computable algorithm. An anonymous reviewer for the conference saw things differently, helpfully explaining that “[a]nother, more common name for algorithmic discovery would be heuristics.” This comment prompted us to investigate further to see what differences (if any) there are between heuristics and algorithmic discovery. The aim of this paper is to compare and contrast heuristics with algorithmic discovery and to explore the consequences of these distinctions within their applications in science and other areas. To achieve the first goal the term ‘heuristic’ is treated as a family resemblance concept. So for a method or rule to be classified as a heuristic it will have to satisfy a sufficient number of the properties involved in the family resemblance. We specify eight features involved in being a heuristic. The first five correspond to the heuristic search program in artificial intelligence. The last three pick out more general characterizations of heuristics as methods that lack a guarantee, are rules of thumb, or transform one set of problems into another. We argue that there are methods of algorithmic discovery that have none of the eight features associated with heuristics. Thus, there are methods of algorithmic discovery which are distinct from heuristics. Once we’ve established that heuristic methods do not exhaust the methods of algorithmic discovery, we compare heuristic methods with non-heuristic discovery methods in their application. This is achieved by discussing two different areas of application. First, we discuss how heuristic and non-heuristic methods perform in different gaming environments such as checkers, chess, Go, and video games. We find that while heuristics perform well in some environments – like chess and checkers – non-heuristic methods perform better in others. And, interestingly, hybrid methods perform well in yet other environments. Second, heuristic and non-heuristic methods are compared in their performance in empirical discovery. We discuss how effective each type of method is in discovering chemical structure, finding diagnoses in medicine, learning causal structure, and finding natural kinds. Again, we find that heuristic and non-heuristic methods perform well in different cases. We conclude by discussing the sources of the effectiveness of heuristic and non-heuristic methods. Heuristic and non-heuristic methods are discussed in relation to how they are affected by the frame problem and the problem of induction. We argue that the recent explosion of non-heuristic methods is due to how heuristic methods tend to be afflicted by these problems while non-heuristic methods are not.
28. What Can We Learn from How a Parrot Learns to Speak like a Human?
Philosophy of Science00:28 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:28:00 UTC - 2018/11/03 06:59:00 UTC
Shereen Chang (University of Pennsylvania) What is the significance of learning conditions for inferences about cognition in general? Consider the case of Alex the grey parrot, who was trained by researcher Irene Pepperberg to use English words in their appropriate contexts. When presented with an array of different objects, Alex could vocalize in English the correct answers to questions such as “How many green blocks?” He could compare two objects and vocalize how they were similar or different (e.g., “color”). In short, Alex could communicate meaningfully using English words. Alex learned to communicate with English words via various training methods that emphasized social context and interaction. To introduce new words to the parrot, Pepperberg primarily used a Model/Rival technique in which two human trainers demonstrate the reference and functionality of target words, while providing social interaction. After Alex attempted to vocalize a new word in the presence of the referent object, trainers would repeat the word in different sentences to clarify its pronunciation, reminiscent of how human parents talk to young children. Alex also engaged in self-directed learning; he learned the word “grey” after seeing his reflection in the mirror and asking his trainers, “What color?” Thus, a parrot acquired parts of the English language through techniques similar to how humans learn to speak English. On my analysis, there are four key conditions for the acquisition of such communication skills. How do we make sense of the similarities between the ways in which a parrot and a human child learn to speak? Since a parrot was able to acquire the meaningful use of words in English, a human-based communication code, it seems that parrots can learn communication codes other than those of their own species. If parrots have a general ability to learn communication codes, then the conditions under which they learn words in English are either specific to learning human-based communication codes or more general features of learning communication codes. I present reasons to rule out the former and argue that the conditions under which Pepperberg’s parrots learned English are likely to be more general features of learning communication codes. From research in cross-species communicative behaviour, where an individual learns how to communicate using the communication code of another species, we can learn about the relevance of particular learning conditions more generally. By studying how parrots learn to communicate using a human language such as English, for example, we can shed light on more general aspects of how we learn to communicate. In this way, we can gain special insight into the nature of social cognition, the acquisition of communication skills, and our cognitive evolution in general.
29. Circuit Switching, Gain Control and the Problem of Multifunctionality
Philosophy of Science00:29 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:29:00 UTC - 2018/11/03 06:59:00 UTC
Philipp Haueis (Berlin School of Mind and Brain) Neural structures with multiple functions make it unclear when we have successfully described what a structure does when it works. Several recent accounts attempt to tackle this problem of multifunctionality in different ways. Rathkopf (2013) proposes an intrinsic function concept to describe what a structure does whenever it works, whereas Burnston (2016a) argues for context-sensitive descriptions to tackle multifunctionality. McCaffrey (2015) proposes a middle road by indexing invariant or context-sensitive descriptions to the mechanistic organization of a multifunctional structure. In this paper, I argue that these accounts underestimate the problem of multifunctionality. Because they implicitly assume that “multifunctional” means “contributing to multiple cognitive functions”, they overlook other types of multifunctionality that fall within the purview of their accounts: circuit switching in central pattern generators and gain control in cortical microcircuits. Central pattern generators are multifunctional because they can switch between rhythmic motor outputs (Briggmann and Kristan 2008). Cortical microcircuits are multifunctional because some circuit elements process sensory information, whereas others prevent damage by controlling circuit gain (Merker 2013). These circuit functions are not operative in cognitive processing but instead enable such processing to occur at all. Yet they exhibit exactly the features that philosophical accounts recruit to handle (cognitive) multifunctionality. Similar to Rathkopf’s intrinsic function account, circuit switching and gain control can be analysed without reference to the behavior of the organism. Yet, they do not replace but complement task-based functional analyses of multifunctional structures, thus questioning the plausibility of the intrinsic function account. Circuit switching and gain control also show that Burnston’s and McCaffrey’s accounts are incomplete. Because he focuses on cognitive contexts, Burnston’s contextualism fails to capture how circuit switching and gain control change with biochemical and physiological contexts, respectively. These contexts make the problem of multifunctionality harder than Burnston acknowledges, because different context types cross-classify the response of multifunctional structures. Similarly, McCaffrey’s typology of mechanistic organization to classify multifunctional structures fails to capture how circuit switching or gain control are mechanistically organized. Because central pattern generators can switch rhythmic outputs independently of sensory inputs, they are mechanistically decoupled from cognitive functions that process those inputs. In contrast, gain control is essentially coupled to cognitive functions because it is only necessary to prevent damage when a cortical microcircuit processes sensory information. My analysis shows that existing philosophical accounts have underestimated the problem of multifunctionality because they overlooked circuit functions that are not operative in, but instead enable, cognitive functions. An adequate account of multifunctionality should capture all types of multifunctionality, regardless of whether they are cognitive or not.
30. Brain-Machine Interfaces and the Extended Mind
Philosophy of Science00:30 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:30:00 UTC - 2018/11/03 06:59:00 UTC
Mahi Hardalupas (University of Pittsburgh), Alessandra Buccella (University of Pittsburgh) The Extended Mind Theory of cognition (EMT) claims that cognitive processes can be realized, or partially realized, outside of the biological body. Unsurprisingly, proponents of EMT have become increasingly interested in brain-machine interfaces (BMIs). For example, Clark argues that BMIs will soon create human-machine “wholes”, challenging any principled distinction between biological bodies and artifacts designed to enhance or replace biological functions. If this is what BMIs are capable of, then they potentially offer convincing evidence in favor of EMT. In this paper, we criticize the claim that BMIs, and especially motor BMIs (EEG-controlled robotic arms, exoskeletons, etc.), support EMT. First, Clark claims that BMIs incorporated into the so-called “body schema" will stop requiring complex representational resources mediating between neural inputs and motor outputs. If this is the case, then one has good grounds to claim that we should treat BMIs as genuinely extending cognition. However, at least for now, motor-control BMIs do necessarily require mediating representations. EMT theorists could reply that two systems can be functionally similar even if one requires representational mediation and the other doesn’t. However, it seems to us that when EMT theorists suggest functional similarity as a criterion to decide whether BMIs genuinely extend cognition, they should mean similarity at the algorithmic level, that is, where more specific descriptions of the mechanisms involved between input and output are given. But at the algorithmic level the differences regarding representational mediation mentioned above matter. Moreover, research into BMIs seems to take for granted that their success depends on their proximity to the brain and their ability to directly influence it (e.g. invasive BMIs are considered a more viable research program than non-invasive BMIs). This seems in tension with EMT's thesis that it should not make a difference how close to the brain a device contributing to cognitive processes is. Finally, EMT is a theory about the constitution of cognitive processes, that is, it claims that the mind is extended iff a device constitutes at least part of the process. However, all the evidence that we can gather regarding the relationship between BMIs and cognitive processes only confirms the existence of a causal relation. Therefore, the currently available evidence leaves EMT underdetermined. In conclusion, we claim that BMIs don't support EMT but, at most, a weaker alternative.
31. The Best System Account of Laws needs Natural Properties
Philosophy of Science00:31 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:31:00 UTC - 2018/11/03 06:59:00 UTC
Jason Kay (University of Pittsburgh) Humeans in metaphysics have two main desiderata for a theory of laws of nature. They want the laws to be a function of facts about the distribution of fundamental physical properties. They also want the laws to be epistemically accessible to science unaided by metaphysical theorizing. The most sophisticated attempt to realize this vision is the Best Systems Account (BSA), which claims that the laws are the generalizations which conjointly summarize the world as simply and exhaustively as possible. But the BSA faces the threat of so-called 'trivial systems' which, while simple and strong, intuitively are not the sort of thing which can be laws. Imagine a system that introduces an extremely informative predicate which contains all the facts about nature. Call the predicate 'F.' This gerrymandered predicate allows us to create a system containing the single sentence 'everything is F,' which describes the universe both exhaustively and extremely simply. Lewis rules out predicates like 'F' by arguing that only predicates expressing natural properties are fit to feature in the laws of nature. However, many Humeans since Lewis have rejected the existence of natural properties for their epistemic inaccessibility and ontological profligacy. In this paper I examine two recent attempts to address the trivial systems objection without natural properties and argue that they face serious difficulties. Cohen & Callender concede that trivial systems will win the competition in some cases, yet since they won't be the best systems relative to the kinds we care about, this is not a problem. In essence, we are justified in preferring non-trivial systems because they organize the world into kinds that matter to us. I argue that this response fails for two reasons. First, if laws are the generalizations which best systematize the stuff we care about, this makes the laws of nature unacceptably interest-relative. And second, doesn't the trivial system 'everything is F' also tell us about the stuff we care about? It also tells us about much, much more, but can it be faulted for this? Eddon & Meacham introduce the notion of 'salience' and claim that a system's overall quality should be determined by its salience along with its simplicity and strength. Since a system is salient to the extent that it is unified, useful, and explanatory, trivial systems score very low in this regard and thus will be judged poorly. I argue that it's not clear exactly how salience is supposed to do the work Eddon & Meacham require of it. I try to implement salience considerations in three different ways and conclude that each way fails to prevent trivial systems from being the best system under some circumstances. If I am right about this, versions of the BSA which reject natural properties continue to struggle against the trivial systems objection.
32. Alethic Modal Reasoning in Non-Fundamental Sciences
Philosophy of Science00:32 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:32:00 UTC - 2018/11/03 06:59:00 UTC
Ananya Chattoraj (University of Calgary) Modal reasoning arises from the use of expressions with modal operators like “necessary” or “possibly.” This type of reasoning arises in science through reasoning about future possibilities. Alethic modal reasoning is instantiated in science through scientific laws and single event probabilities. This means that when scientists use alethic modal reasoning, they appeal to laws and probabilities in their practices of explanation, manipulation, prediction, etc. In the philosophy of logic, alethic modality is sometimes distinguished from epistemic modality under the label of modal dualism (Kment 2014), which is instantiated in science through reasoning about future events based on past experimental results rather than an overarching law. In “An Empiricist’s Guide to Objective Modality,” Jenann Ismael presents a deflationary framework of alethic modality. This framework does not depend on possible worlds semantics and is instead couched in the way in which laws and probabilities guide scientific action. On this account, scientists do not create research programs to falsify theories that have been codified as a law – there is no research, for instance, to falsify gravity, though there are research programs to clarify the nature of the force. As such, laws, and similarly, probabilities, guide the way in which scientists perform their research. The effect of laws as guiding actions, however, has diminishing returns in non-fundamental sciences. In this poster, I present a case study of organic chemistry, where scientists use modal reasoning to classify organic molecules into functional groups. Functional group classification is based on how chemists manipulate molecules of one group by inducing reactions with molecules of a different group for results specific to their purposes. These classifications are experimentally established and provide a systematic way of classifying molecules useful for manipulation, explanation, and prediction. Since these molecules can be classified and named systematically, chemists are reasoning about how molecules will react in future reactions. However, unlike what Ismael’s framework suggests, organic chemists are not guided by fundamental laws. Applying works like Goodwin (2013) and Woodward (2014), I show how modal reasoning exists in chemical practice. I argue that alethic reasoning through fundamental laws is downplayed and non-alethic reasoning is elevated in the practice of organic chemistry. I show that while Ismael’s framework of modal reasoning has features worth preserving, including its abandonment of possible worlds semantics and the focus on action guidance, its focus on alethic modality as the main type of modal reasoning that guides actions is incorrect when considering the practices of scientists working in non-fundamental sciences. I will ultimately suggest that the current way of distinguishing alethic modality and epistemic modality in science is not helpful for understanding modal reasoning in non-fundamental sciences.
33. Is the Humongous Fungus Really the World’s Largest Organism?
Philosophy of Science00:33 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:33:00 UTC - 2018/11/03 06:59:00 UTC
Daniel Molter (University of Utah), Bryn Dentinger (University of Utah) Is the Humongous Fungus really the world’s largest organism? ‘World’s largest organism’ is often referenced in philosophy of biology, where it serves as something of a type specimen for the organism category, so it’s important to make sure the biological individual which holds this title really is one organism. The Humongous Fungus (HF), a 3.7 square mile patch of honey mushrooms (Armillaria solidipes) in Oregon’s Blue Mountains, is said to be the world’s largest organism. To determine if it really is will require both new empirical work (currently being planned) and philosophical clarification about what it means to be an organism. At question empirically is whether the HF is physiologically integrated; at question philosophically is whether physiological integration is necessary for organismality. Ferguson et al. (2003) reported that all samples collected inside a 3.7 square mile patch were genetically homogeneous and somatically compatible, indicating common descent from a single reproductive event and the potential to fuse into a single mycelium. Their results are consistent both with a single humongous mycelium and with a swarm of fragmented clones that periodically flare up and die out as they spread from tree to tree. Tests to see if the HF is all connected have not yet been done. If “organism” is defined in terms of evolutionary individuality, then the HF does not need to be connected in order to function as a discontinuous evolutionary organism, but it would not be the largest discontinuous evolutionary organism; that title instead probably* goes to Cavendish bananas (the common yellow variety), which are clones of a single genet cultivated on millions of hectares around the world. If, on the other hand, organismality is defined in terms of physiological integration, then the HF would have to be continuous for it to count as one organism. Interestingly, the distinction between fragmented and continuous might be blurred if the HF periodically breaks apart and comes back together, as mycelia sometimes do. If the HF really is physiologically integrated, then it is the world’s largest physiological organism, beating out Pando, an aspen grove in Utah, and another Humongous Fungus in Michigan (yes, they fight over the name). The first planned test for physiological integration involves sampling eDNA in soil along transects through the genet. This will tell us how far from infected trees the Armillaria extends, and it will help to locate areas of concentration that might represent physiologically isolated individuals. Further testing might include a stable isotope transplantation study to see if tracers absorbed by the mycelium in one region of the genet make their way to distal regions. Ferguson, B. A., Dreisbach, T. A., Parks, C. G., Filip, G. M., & Schmitt, C. L. (2003). Coarse-Scale Population Structure of Pathogenic Armillaria Species in a Mixed-Conifer Forest in the Blue Mountains of Northeast Oregon. Canadian Journal of Forest Research, 33(4), 612-623. * Other plants, such as dandelions, might also be contenders for the world’s largest genet.
34. Functions in Cell and Molecular Biology: ATP Synthase as a Case Study
Philosophy of Science00:34 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:34:00 UTC - 2018/11/03 06:59:00 UTC
Jeremy Wideman (Dalhousie University) There are two broad views of how to define biological functions. The selected effects (SE) view of function requires that functions be grounded in “the historical features of natural selection” (Perlman 2012), whereas the causal role (CR) view does not (Cummins and Roth 2012). SE functions are separated from mere effects by reference to events in evolutionary/selective history (e.g., Garson 2017). Therefore, SE functions are real things/processes, which thereby explain how traits originated and why they persist. CR functions are ascribed by “functional analysis” (Cummins and Roth 2012), which involves defining a containing system (which can be anything from a metabolic pathway to a medical diagnosis) and describing the role that the trait in question plays in the system of interest. CR functions are thus subjectively defined, and dependent upon the interests of the investigator. It has been suggested by CR proponents that biologists like molecular and cell biologists do not need evolution to understand the functions that they are interested in. However, molecular and cell biologists are driven to determine ‘the function’ of organismal components; secondary effects are not so interesting. What, then, is meant by ‘the function’ if not a selected effect? Furthermore, comparative evolutionary biologists make inferences about conserved functions based on functions identified by molecular and cell biologists. An analysis of biological function at this level is lacking from the philosophical literature. In order to determine if an SE view of function can accommodate actual biological usage, I have turned away from abstract examples like the heart, to a concrete case study from molecular cell biology, the multicomponent molecular machines called ATP synthases. ATP synthases are extremely well-studied protein complexes present in all domains of life (Cross and Müller 2004). As their name suggests, their generally agreed upon function is to synthesize (or hydrolyze) ATP. My analysis demonstrates that SE views of function that require positive selection for an effect (e.g., Gould and Vrba 1982) do not accommodate contemporary usage. Instead, biological usage requires that function be defined to include effects arising from solely purifying selection, constructive neutral evolution, or exaptation, in addition to positive selection. Thus, the SE view of function must be construed more broadly in order to accommodate all facets of biological usage. A consequence of an expanded view of SE function is that, while all adaptations have functions, not all functions result from adaptations. Therefore, this view is not panadaptationist.
35. Mechanistic Integration and Multi-Dimensional Network Neuroscience
Philosophy of Science00:35 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:35:00 UTC - 2018/11/03 06:59:00 UTC
Frank Faries (University of Cincinnati) Mechanistic integration, of the kind described by Craver and Darden (2013), is, at first glance, one way to secure sensitivity to the norms of mechanistic explanation in integrative modeling. By extension, models in systems neuroscience will be explanatory to the extent that they demonstrate mechanistic integration of the various data and methods which construct and constitute them. Recent efforts in what Braun and colleagues have dubbed “multi-dimensional network neuroscience” (MDNN) claim to provide increasingly mechanistic accounts of brain function by moving “from a focus on mapping to a focus on mechanism and to develop tools that make explicit predictions about how network structure and function influence human cognition and behavior” (Braun, et al., 2018). MDNN appears to provide examples of simple mechanistic integration, interlevel integration (looking down, up, and around), and intertemporal integration. Moreover, these models appear to increasingly satisfy the Model-to-Mechanism Mapping (3M) requirement (Kaplan and Craver, 2011), and allow for intervention, control, and the answering of “what-if-things-had-been-different” questions (Woodward, 2003). These efforts attempt to situate parametric correlational models “in the causal structure of the world” (Salmon, 1984). As such they appear to be excellent exemplars of mechanistic integration in systems neuroscience. However, despite such good prospects for mechanistic integration, it is unclear whether those integrative efforts would yield genuine explanations on an austere mechanistic view (of which I take Craver (2016) to be emblematic). I identify three objections that can be raised by such a view—what I call the arguments from (i) concreteness, (ii) completeness, and (iii) correlation versus causation. I treat each of these in turn and show how a more sophisticated understanding of the role of idealizations in mechanistic integration implies a rejection of these objections and demands a more nuanced treatment of the explanatory power of integrated models in systems neuroscience. In contrast to austere mechanistic views, I offer a flexible mechanistic view, which expands the norms of mechanistic integration, including the 3M requirement, to better account for the positive ontic and epistemic explanatory contributions made by idealization—including the application of functional connectivity matrices—to integration in systems neuroscience. Further, I show how the flexible mechanistic view is not only compatible with mechanistic philosophy, but better facilitates mechanistic integration and explanation.
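As a concrete anchor for the discussion of functional connectivity matrices, here is a minimal sketch (with simulated data, illustrative rather than drawn from the poster) of how such a parametric, correlational object is typically built from regional time series before any mechanistic interpretation is attempted.

```python
import numpy as np

def functional_connectivity(timeseries):
    """Build a simple functional connectivity matrix: the Pearson correlation
    between every pair of regional time series (rows are regions, columns are
    timepoints). This is the kind of correlational model that must then be
    situated 'in the causal structure of the world'."""
    return np.corrcoef(timeseries)

# Fake resting-state-style data: 90 regions, 200 timepoints.
rng = np.random.default_rng(0)
ts = rng.normal(size=(90, 200))
fc = functional_connectivity(ts)

# Threshold to an adjacency matrix for graph-theoretic (network) analysis.
adjacency = (np.abs(fc) > 0.3).astype(int)
np.fill_diagonal(adjacency, 0)
```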
Philosophy of Science00:36 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:36:00 UTC - 2018/11/03 06:59:00 UTC
Martin Zach (Charles University) It has long been argued that idealized model schemas cannot provide us with factive scientific understanding, precisely because these models employ various idealizations; hence, they are false, strictly speaking (e.g., Elgin 2017, Potochnik 2015). Others defend a middle ground (e.g., Mizrahi 2012), but only a few espouse (in one way or another) the factive understanding account (e.g., Reutlinger et al. 2017, Rice 2016). In this talk, and on the basis of the model schema of metabolic pathway inhibition, I argue for the conclusion that we do get factive understanding of a phenomenon through certain idealized and abstract model schemas. As an example, consider a mechanistic model of metabolic pathway inhibition, specifically the way in which the product of a metabolic pathway feeds back into the pathway and inhibits it by inhibiting the normal functioning of an enzyme. It can be said that such a mechanistic model abstracts away from various key details. For instance, it ignores the distinction between competitive and non-competitive inhibition. Furthermore, a simple model often disregards the role of molar concentrations. Following Love and Nathan (2015), I subscribe to the view that omitting concentrations from a model is an act of idealization. Yet, models such as these do provide us with factive understanding when they tell us something true about the phenomenon, namely the way in which it is causally organized, i.e. by way of negative feedback (see also Glennan 2017). This crucially differs from the views of those (e.g., Strevens 2017) who argue that idealizations highlight the causal irrelevance of the idealized factors. For the phenomenon to occur, it makes all the difference precisely what kind of inhibition is at play and what the molar concentrations are. Finally, I will briefly distinguish my approach to factive understanding from those of Reutlinger et al. (2017) and Rice (2016). In Reutlinger et al. (2017), factive (how-actually) understanding is achieved by theory-driven de-idealizations; as such, it importantly differs from my view, which is free of any such need. Rice (2016) suggests that optimization models provide factive understanding by providing us with true counterfactual information about what is relevant and irrelevant, which, again, is not the case in the example discussed above.
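A toy version of the model schema under discussion can be written down as a pair of rate equations in which the end product inhibits the enzymatic step that produces it. The Hill-type inhibition term and all parameter values below are illustrative choices, and the sketch deliberately ignores the competitive/non-competitive distinction and molar calibration, i.e., the very abstractions and idealizations at issue.

```python
def simulate_pathway(k_in=1.0, k_cat=0.5, k_out=0.2, K_i=1.0, n=2,
                     dt=0.01, t_max=100.0):
    """Toy negative-feedback pathway: substrate S is converted to product P
    by an enzyme whose activity is inhibited by P itself (a generic Hill-type
    term). Simple forward-Euler integration; no claim to quantitative realism.
    """
    S, P = 0.0, 0.0
    for _ in range(int(t_max / dt)):
        inhibition = 1.0 / (1.0 + (P / K_i) ** n)   # feedback from the end product
        flux = k_cat * S * inhibition               # inhibited enzymatic step
        dS = k_in - flux                            # constant supply of substrate
        dP = flux - k_out * P                       # product is used up or degraded
        S += dS * dt
        P += dP * dt
    return S, P

# P settles toward a steady state set by the strength of the feedback.
print(simulate_pathway())
```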
37. The Role of the Contextual Level in Computational Explanations
Philosophy of Science00:37 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:37:00 UTC - 2018/11/03 06:59:00 UTC
Jens Harbecke (Witten/Herdecke University), Oron Shagrir (The Hebrew University of Jerusalem) At the heart of the so-called "mechanistic view of computation" lies the idea that computational explanations are mechanistic explanations. Mechanists, however, disagree about the precise role that the environment — or the "contextual level" (Miłkowski 2013) — plays in computational (mechanistic) explanations. Some mechanists argue that contextual factors do not affect the computational identity of a computing system and, hence, that they do not play an explanatory role vis-à-vis the system’s computational aspects. If anything, contextual factors are important to specify the explanandum, not the explanation (cf. also Kaplan 2011, Miłkowski 2013, Dewhurst 2017, Mollo 2017). Other mechanists agree that the contextual level is indeed part of the computational level of a computing system, but claim that "[i]n order to know which intrinsic properties of mechanisms are functionally [computationally] relevant, it may be necessary to consider the interaction between mechanisms and their contexts." (Piccinini 2008, 220). In other words, computational explanations involve more than an explication of the relevant mechanisms intrinsic to a computational system. These further aspects specify the causal-mechanistic interaction between the system and its context. On this poster, we challenge both claims. We argue that (i) contextual factors do affect the computational identity of a computing system, but (ii) that it is not necessary to specify the causal-mechanistic interaction between the system and its context in order to offer a complete and adequate computational explanation. We then discuss the implications of our conclusions for the mechanistic view of computation. Our aim is to show that some versions of the mechanistic view of computation are consistent with claims (i) and (ii), whilst others are not. The argument proceeds in the following steps. First, we introduce the notion of an automaton and point out that complex systems typically implement a large number of inconsistent automata all at the same time. The challenge is to single out those automata of a system that correspond to its actual computations, which cannot be achieved on the basis of the intrinsic features of the system alone. We then argue that extending the basis by including the immediate or close environment of computing systems does not do the trick. This establishes an externalist view of computation. We then focus on claim (ii) and argue that various different input mechanisms can be correlated with the same computations, and that it is not always necessary to specify the environment-to-system mechanism in order to explain a system’s computations. Finally, we assess the compatibility of claims (i) and (ii) with several versions of the mechanist view of computation.
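The claim that intrinsic features underdetermine computational identity can be illustrated with the familiar dual-interpretation example in which a single voltage-processing device counts as an OR gate under one labelling of its voltages and as an AND gate under another. The sketch below is a toy illustration in that spirit, not the automata-based argument of the poster itself.

```python
from itertools import product

# A toy 'physical' gate: it outputs HIGH voltage iff at least one input is HIGH.
def physical_gate(v1, v2):
    return 'HIGH' if 'HIGH' in (v1, v2) else 'LOW'

# Two rival computational interpretations of the same voltages.
label_A = {'LOW': 0, 'HIGH': 1}   # reads the device as an OR gate
label_B = {'LOW': 1, 'HIGH': 0}   # reads the very same device as an AND gate

def computed_table(labels):
    """The Boolean function the device computes under a given labelling."""
    inverse = {v: k for k, v in labels.items()}
    table = {}
    for a, b in product((0, 1), repeat=2):
        out = physical_gate(inverse[a], inverse[b])
        table[(a, b)] = labels[out]
    return table

print(computed_table(label_A))  # {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1} -- OR
print(computed_table(label_B))  # {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1} -- AND
```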
38. In Defense of Pragmatic Processualism: Expectations in Biomedical Science
Philosophy of Science00:38 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:38:00 UTC - 2018/11/03 06:59:00 UTC
Katherine Valde (Boston University) This poster will contrast the expectations generated by using mechanistic vs. process frameworks in the biomedical sciences. A traditional mechanistic framework looks at a system in terms of entities and activities – it looks to finitely characterize the properties of entities that allow them to execute particular actions. A processual account, on the other hand, characterizes entities in terms of how they are maintained or stabilized, and in general, focuses on the generation of stability rather than facts about stability. Recent increased interest in a process framework for biology has focused on the ability of a process ontology to describe the natural world more accurately than a substance ontology. This poster examines the use of processual concepts in a practice-oriented approach, arguing for the importance of process on methodological (rather than metaphysical) grounds. Given the difficulty in settling theoretical metaphysical debates, and the grave importance of advancing biomedical research, this pragmatic approach offers a promising route forward for a process framework. This poster specifically examines two concrete cases: carcinogenesis and inflammatory bowel disease (IBD) research. Competing research programs in each of these domains can be understood as processual or mechanistic. The dominant theory for understanding carcinogenesis is somatic mutation theory (SMT). SMT holds that cancer is a cell-based disease that occurs when a single cell from some particular tissue mutates and begins growing and dividing out of control. A competing theory of carcinogenesis, Tissue Organizational Field Theory (TOFT), holds that cancer is a tissue-based disease that occurs when relational constraints are changed (Soto and Sonnenschein, 2005). TOFT provides a processual understanding of carcinogenesis, while SMT provides a mechanistic account. IBD research in humans has largely focused on genetic correlations and pathogen discovery, which have largely been unsuccessful. However, in mouse models researchers have discovered several factors, each necessary but individually insufficient to cause the overall condition (Cadwell et al., 2010). While the traditional research takes a mechanistic approach, the mouse model takes a processual approach (characterizing IBD based on how it is maintained, rather than based on essential properties). The competing approaches to these conditions are not truly incommensurable, but they do generate different expectations and guide different research. This poster will compare the development of research projects under competing theories. The ultimate aim is to highlight the benefits of a process framework for the practice of biomedical science: generating different expectations for research, and thus leading to different experimental designs and a capacity to measure different things, regardless of the answers to the ultimate metaphysical questions.
39. Flat Mechanisms: Mechanistic Explanation Without Levels
Philosophy of Science00:39 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:39:00 UTC - 2018/11/03 06:59:00 UTC
Peter Fazekas (University of Antwerp) The mechanistic framework traditionally comes bundled with a levelled view of reality, where different entities forming part-whole relations reside at lower and higher levels. Here it is argued that, contrary to the standard understanding and the claims of its own proponents, the core commitments of the mechanistic framework are incompatible with the levelled view. An alternative flat view is developed, according to which wholes do not belong to levels higher than the constituent parts of the underlying mechanisms, but rather are to be found as modules embedded in the very same complex of interacting units. Modules are structurally and functionally stable configurations of the interacting units composing them. Modules are encapsulated either in a direct physical way by a boundary that separates them from their environment, or functionally by the specific organisation of the interaction network of their units (e.g., causal feedback loops). Physical and functional encapsulation constrain internal operations, cut off some internal-external interactions, and screen off inner organisation and activities. Due to the cutting-off effect of encapsulation, the interacting units of a module are, to a certain degree, causally detached from their environment: some of the causal paths via which the units could normally (in separation) be influenced become either unavailable (due to the shielding effect of physical boundaries) or ineffective (due to the stabilising effect of feedback loops). Some units, however, still retain their causal links with the environment, providing inputs and outputs for the organised activity of the cluster of units, and hence for the module itself. Modules, thus, are not epiphenomenal. The input of a module is the input of its input units, and the output of a module is the output of its output units. Via the causal links of their input and output units, modules are causally embedded in the same level of causal interactions as their component units. Since whole modules can be influenced by and can influence their environment only via their input and output units, their inner organisation is screened off: from the ‘outside’ modules function as individual units. Therefore, alternating between a module and a unit view is only a change in perspective and does not require untangling possibly complex relations between distinct entities residing at different levels. The mechanistic programme consists in turning units into modules, i.e., ‘blowing up’ the unit under scrutiny to uncover its internal structure, and accounting for its behaviour in terms of the organisation and activities of the units found ‘inside’. The flat view, thus, claims that mechanistic characterisations of different ‘levels’ are to be understood as different descriptions providing different levels of detail with regard to a set of interacting units with complex embedded structure. It sets the mechanistic programme free of problematic metaphysical consequences, sheds new light on how entities that traditionally were seen as belonging to different levels are able to interact with each other, and clarifies how the idea of mutual manipulability — that has recently been severely criticised — could work within the mechanistic framework.
40. Path Integrals, Holism and Wave-Particle Duality
Philosophy of Science00:40 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:40:00 UTC - 2018/11/03 06:59:00 UTC
Marco Forgione In the present work I argue that the path integral formulation of quantum mechanics displays a holistic machinery that allows one to predict and explain the total amplitude of the quantum system. The machinery shows that it is not the single path that counts; rather, it is the whole ensemble that provides the total amplitude. In pursuing such an interpretation, I refer to Healey's notion of holism and I show that, when applied to path integrals, it ultimately leads to a form of structural holism. To do so, I point out: (1) what the whole is composed of, (2) the non-supervenient relation the whole holds with its parts, and (3) the mathematical object that instantiates such a relation, i.e., the phase factor. Concerning (1), I argue that while the parts correspond to the single possible paths, the whole is to be interpreted as the total ensemble posited by the theory. I show that the single possible trajectories play the role of mathematical tools, which do not represent real particle paths. They can be individuated mathematically by varying the phase factor, but they do not describe what actually happens: they remain mathematical possibilities devoid of ontological meaning. Concerning (2), I show that a strong reductionist account of the ensemble in terms of the single paths is not possible. If that is the case, then the single paths count as calculation tools, while it is the statistical representation of the whole that provides the description of the particle motion. In arguing for the irreducibility of the total ensemble to single real paths, I first take into consideration Wharton's realist account and, second, analyze the decoherent histories account of quantum mechanics. In the former case, I argue that even by parsing the total ensemble into sets of non-interfering paths and then mapping them into a space-time valued field, we cannot deny the holistic nature of the path integral formulation. In the latter case, although the decoherent histories account parses the total ensemble into coarse-grained histories (where a history is a sequence of alternatives at successive times), it ultimately fails to single out the real history the particle undergoes. Finally, concerning (3), I suggest that the phase factor is the mathematical object that instantiates the non-supervenient relation. It determines the cancellation of the destructively interfering paths and, in the classical limit, it explains the validity of the least action principle. Once all these parts are addressed, I argue that the holistic ensemble and the phase factor, which weights the contribution of each possible trajectory, form a structural holism on which the distinction between particles and waves is no longer necessary.
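For context, the standard Feynman expression for the propagator (given here in one common textbook notation, not necessarily the author's own) makes the role of the phase factor explicit:

K(x_b, t_b; x_a, t_a) = \int \mathcal{D}[x(t)]\, e^{\frac{i}{\hbar} S[x(t)]}, \qquad S[x(t)] = \int_{t_a}^{t_b} L(x, \dot{x}, t)\, dt .

Each path contributes a phase of unit modulus, so no single trajectory dominates on its own: the amplitude, and hence the probability obtained from its squared modulus, arises only from the interference of the entire ensemble, and in the classical limit stationary-phase cancellation leaves significant contributions only from paths near the least-action trajectory.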
41. Mechanisms and Principles: Two Kinds of Scientific Generalization
Philosophy of Science00:41 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:41:00 UTC - 2018/11/03 06:59:00 UTC
Yoshinari Yoshida (University of Minnesota), Alan Love (University of Minnesota) Confirmed empirical generalizations are central to the epistemology of science. Through most of the 20th century, philosophers focused on universal, exceptionless generalizations — laws of nature — and took these as essential to scientific theory structure and explanation. However, over the past two decades, many philosophers sought to characterize a broader range of generalizations, which facilitated the elucidation of a more complex space of possibilities and enabled a more fine-grained understanding of how generalizations with different combinations of properties function in scientific inquiry. Nevertheless, much work remains to characterize the diversity of generalizations within and across the sciences. Here we concentrate on one area of science — developmental biology — to comprehend the role of two different kinds of scientific generalizations: mechanisms and principles. Mechanism generalizations (MGs) in developmental biology are descriptions of constituent biomolecules organized into causal relationships that operate in specific times and places during ontogeny to produce a characteristic phenomenon that is shared across different biological entities. Principle generalizations (PGs) in developmental biology are abstract descriptions of relations or interactions that occur during ontogeny and are exemplified in a wide variety of different biological entities. In order to characterize these two kinds of generalizations, we first discuss generalizations and explanatory aims in the context of developmental biology. Developmental biologists seek generalizations that are structured in four different dimensions — across taxa, across component systems, across developmental stages, and across scales — and in terms of two primary conditions: material and conceptual. Within scientific discourse, these generalizations appear in complex combinations with different dimensions or conditions foregrounded (e.g., distributions of developmental phenomena and causal interactions that underlie them in a specific component system at a particular stage under specified material conditions to answer some subset of research questions). MGs and PGs have distinct bases for their scope of explanation. MGs explain the development of a wide range of biological entities because the described constituent biomolecules and their interactions are conserved through evolutionary history. In contrast, the wide applicability of PGs is based on abstract relationships that are instantiated by various entities (regardless of evolutionary history). Hence, MGs and PGs require different research strategies and are justified differently; specific molecular interactions must be experimentally dissected in concrete model organisms, whereas abstract logical and mathematical properties can be modeled in silico. Our analysis shows why a particular kind of generalization coincides with a specific research practice and thereby illuminates why the practices of inquiry are structured in a particular way. The distinction between MGs and PGs is applicable to other sciences, such as physiology and ecology. Furthermore, our analysis isolates issues in general philosophical discussions of the properties of generalizations, such as ambiguities in discussions of “scope” (how widely a generalization holds) and a presumption that abstraction is always correlated positively with wide scope. Scope is variable across the four dimensions, and MGs have wide scope as a consequence of their reference to concrete molecular entities that are evolutionarily conserved, not because of abstract formulations of causal principles.
42. The Autonomy Thesis and the Limits of Neurobiological Explanation
Philosophy of Science00:42 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:42:00 UTC - 2018/11/03 06:59:00 UTC
Nuhu Osman Attah (University of Pittsburgh) In this presentation I defend the “autonomy thesis” regarding the identification of psychological kinds, that is, the claim that what psychological kinds there are cannot be determined solely by neuroscientific criteria, but must depend also on psychological or phenomenological evidence (Aizawa and Gillett, 2010). I argue that there are only three ways in which psychological kinds could be individuated if we are to rely on neuroscience alone, contra the “autonomy thesis”: (i) psychological kinds could be individuated on the basis of broad, large-scale neurobiological features such as network-level connectivity, (ii) they could be individuated based on dissociations in realizing mechanisms, and (iii) psychological kinds could be picked out on the grounds of fine-grained neural details. I argue that these are the only options available to the methodological reductionist (who denies the “autonomy thesis”) because they are the only options in the empirical space of neuroscientific explanation. I then argue that, for the following respective reasons, none of these options can actually individuate psychological kinds in any useful sense: (a) particular cases of neuroscientific explanation (in particular, I have in mind the Grounded Cognition Model of concepts [Kemmerer, 2015, 274; Wilson-Mendenhall et al., 2013]) demonstrate that there are kinds employed by neuroscientists whose large-scale neurobiological instantiations differ significantly; (b) a circularity is involved in (ii), in that mechanisms presuppose a teleological individuation which already makes reference to psychological predicates; that is to say, since mechanisms are always mechanisms "for" some organismal-level phenomenon, individuating kinds based on mechanisms already involves a behavioral-level (non-neurobiological) criterion; and (c) besides a problem of too narrowly restricting what would count as kinds (even to the point of contradicting actual neuroscientific practice, as the case study from (a) will demonstrate), there is also a problem of vagueness in the individuation of fine-grained neurobiological tokens (Haueis, 2013). Since none of these three possible ways of picking out psychological kinds using neurobiology alone works, it would seem that there is some merit to the claims made by the autonomy thesis. I conclude from all of this, as philosophers arguing for the autonomy thesis have previously concluded, that while neurobiological criteria are important aids in identifying psychological kinds in some cases, they cannot strictly determine where and whether such kinds exist.
43. Du Châtelet: Why Physical Explanations Must Be Mechanical
Philosophy of Science00:43 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:43:00 UTC - 2018/11/03 06:59:00 UTC
Ashton Green (University of Notre Dame) In the early years of her research, Du Châtelet used the principle of sufficient reason (PSR) to develop an epistemological method, so that she could extrapolate from empirical data (such as the results of experiments in heating metals) in a rigorous way. Her goal was to attain knowledge of the hidden causes of the data. In this presentation, I outline her method for this extrapolation and its assumptions, and consider the implications of such a view. According to Du Châtelet's method, any metaphysical claim, such as one concerning what substances make up the fundamental physical level, must be anchored in types of evidence which Du Châtelet considers reliable. Such evidence takes two forms. First, evidence is reliable when it comes directly from empirical data, which is more rigorous than sense data because it involves repeated and well-organized experimentation. Second, Du Châtelet also considers "principles" to be reliable epistemological tools, such as the law of non-contradiction, the principle of sufficient reason, and the principle of continuity. For this reason, I call her mature position (after 1740) Principled Empiricism. In Principled Empiricism, beliefs are justified if they are based on reliable evidence of the following two kinds: empirical data and what she calls “self-evident principles”. According to this method, beliefs based on either one of these types of evidence, as well as on both in conjunction, are justified. This allows her to make metaphysical hypotheses while still adhering to her Principled Empiricism, in which all knowledge is either self-evident, empirically confirmed, or built directly from those two pieces. By using the PSR as the principle which governs contingent facts, and which is therefore appropriate to the physical world, Du Châtelet's method extrapolates beyond empirical data, hypothesizing the best “sufficient reasons” for the effects gathered in empirical study. Sufficient reasons, however, according to Du Châtelet, are restricted to the most direct cause in the mechanical order of the physical world. Two parts of this definition need to be defended. First, Du Châtelet must defend the claim that the physical world is mechanical, and define what exactly she means by mechanical. Second, she must defend the claim that the physical world consists of one mechanical system, and only one, of which all causes and effects are a part. If she is able to do this successfully and can bring explanations of all phenomena into one “machine [of] mutual connection,” her new system will justify requiring mechanical explanations for all phenomena. She considers these arguments to be based on the PSR. In addition to establishing how Du Châtelet applied the PSR to her project, I discuss the problematic aspects of her restriction of explanations to mechanical ones, based on her premise that the universe is a single machine. Finally, I consider contemporary analogs to her position, and the difference between their foundations and Du Châtelet's.
45. Modes of Experimental Interventions in Molecular Biology
Philosophy of Science00:45 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:45:00 UTC - 2018/11/03 06:59:00 UTC
Hsiao-Fan Yeh (National Chung Cheng University), Ruey-Lin Chen (National Chung Cheng University) This paper explores modes of experimental interventions in molecular biology. We argue for the following three points: (i) We distinguish between different modes of experimental intervention according to two standards: the interventional direction and the interventional effect. (ii) There are two interventional directions (vertical/inter-level and horizontal/inter-stage) and two interventional effects (excitatory/positive and inhibitory/negative). (iii) In a series of related experiments, scientists can use multiple interventional modes to test given hypotheses and to explore novel objects. Our argument begins with a brief characterization of Craver and Darden’s taxonomy of experiments, because the taxonomy they have made implies various modes of intervention (Craver and Darden 2013). We propose to extract two interventional directions and two interventional effects from their taxonomy as the basis of classification. The vertical or inter-level direction means that an intervention is performed between different levels of organization, and the horizontal or inter-stage direction means that an intervention is performed between different stages of a mechanism. Interventions may produce an excitatory or an inhibitory effect. As a consequence, we can classify modes of intervention according to different interventional directions and effects. We present a case study of the PaJaMa experiment (Pardee, Jacob, and Monod 1959) to illustrate the three points.
46. Mechanism Discovery Approach to Race in Biomedical Research
Philosophy of Science00:46 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:46:00 UTC - 2018/11/03 06:59:00 UTC
Kalewold Kalewold (University of Maryland, College Park) Race is commonly considered a risk factor in many complex diseases, including asthma, cardiovascular disease, and renal disease, among others. While viewing races as genetically meaningful categories is scientifically controversial, empirical evidence shows that some racial health disparities persist even when controlling for socioeconomic status. This poster argues that a mechanistic approach is needed to resolve the issue of race in biomedical research. The distinction between race-based studies, which hold that “differences in the risk of complex diseases among racial groups are largely due to genetic differences covarying with genetic ancestry which self-identified races are supposed to be good proxies for” (Lorusso and Bacchini 2015, 57), and race-neutral studies, which incorporate multiple factors by looking at individual-level or population-level genetic susceptibility, mirrors the “explanatory divide” Tabery (2014) highlights between statistical and mechanistic explanations in biology. In this poster I show that race-neutral studies constitute a Mechanism Discovery Approach (MDA) to investigating racial disparities. Using evidence from statistical studies, MDA seeks to build mechanism schemas that show causally relevant factors for racial disparities. This poster shows how MDA illuminates the productively active components of disease mechanisms that lead to disparate health outcomes for different self-identified races. By eschewing the “genetic hypothesis”, which favors explanations of racial disparities in terms of underlying genetic differences between races, MDA reveals the mechanisms by which social, environmental, and race-neutral genetic factors, including past and present racism, interact to produce disparities in chronic health outcomes. This poster focuses on the well-characterized disparity between the birth weights of black and white Americans highlighted in Kuzawa and Sweet (2009). Their research on racial birth weight disparities provides sufficient evidence for a plausible epigenetic mechanism that produces the phenomenon. I argue that what makes their explanation of the racial disparity in US birth weights successful is that it is mechanistic. The mechanism is neither simply hereditary nor simply environmental; it is both: it is epigenetic. The poster will provide a diagram showing the mechanism. By showing how the various parts of the mechanism interact to produce the phenomena in question, MDA both avoids the pitfalls of race-based studies and accounts for the role of social races in mechanisms producing racial disparities. This approach also enables the identification of potential sites of intervention to arrest or reverse these disparities.
47. A Conceptual Framework for Representing Disease Mechanisms
Philosophy of Science00:47 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:47:00 UTC - 2018/11/03 06:59:00 UTC
Lindley Darden (University of Maryland, College Park), Kunal Kundu (University of Maryland, College Park), Lipika Ray (University of Maryland, College Park), John Moult (University of Maryland, College Park) The "big data" revolution is leading to new insights into human genetic disease mechanisms. But the many results are scattered throughout the biomedical literature and represented in many different ways, including free text and cartoons. Thus, a standard framework is needed to represent disease mechanisms. This poster presents a conceptual framework, utilizing a newly developed analysis of disease mechanisms (Darden et al. 2018). The new mechanistic philosophy of science characterizes the components of mechanisms: entities and activities. Adapting this for genetic disease mechanisms yields the categories of "substate perturbations" plus the drivers of changes from one substate perturbation to the next, called "mechanism modules" (activities or groups of entities and activities). The framework shows the organized stages of a genetic disease mechanism from a beginning substate perturbation (e.g., a gene mutation or chromosomal aberration) to the disease phenotype. It depicts environmental influences as well. It aids in finding possible sites for therapeutic intervention. It shows a schema builder's view of well-established components as well as uncertainty, ignorance, and ambiguity, based on evidence from the biomedical literature. Its abstract scaffolding directs the schema builder to fill in the key components of the disease mechanism, while the unknown components serve to direct future experimental work to remove sketchiness and provide additional evidence for its components. The poster will show progressively less abstract and more complete diagrams that represent the framework, as sketches become schemas. When a perturbation is correlated with a disease phenotype, it suggests searching for an unknown mechanism connecting them. The entire mechanism is a black box to be filled. Most abstractly and most generally, a disease mechanism is depicted by a series of substate perturbations (SSPs, rectangles) connected by lines labeled with the mechanism modules (MMs, ovals) that produce the changes from perturbation to perturbation. Optional additions include environmental inputs (cloud-like icons) and possible sites for therapeutic intervention (blue octagons). Telescoping of sets of steps into a single mechanism module increases focus on disease-relevant steps; e.g., transcription and translation telescope into the MM labeled "protein synthesis." The default organization is linear, from a beginning genetic variant to the ending disease phenotype, but it can include branches, joins, and feedback loops, as needed. Black ovals show missing components in the series of steps. The strength of evidence is indicated by color-coding, with green showing high confidence, orange medium confidence, and red the lowest. Branches labeled "and/or" show ambiguity about the path followed after a given step. Along with the general abstract diagrams, the poster will include detailed diagrams of specific disease mechanisms, such as cystic fibrosis. In addition to providing an integrated representational framework for disease mechanisms, these visual schemas facilitate prioritization of future experiments, identification of new therapeutic targets, ease of communication between researchers, detection of epistatic interactions between multiple schemas in complex-trait diseases, and personalized therapy choice.
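As a rough illustration of how the framework's categories could be encoded, the sketch below represents substate perturbations, mechanism modules, confidence color-coding, environmental inputs, and intervention sites as a small Python data structure. This is a hypothetical reader's sketch, not software produced by the authors, and the example entries ("gene mutation", "protein synthesis", "corrector drug", etc.) are purely illustrative.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List, Optional

    class Confidence(Enum):          # the poster's color-coding for evidence strength
        HIGH = "green"
        MEDIUM = "orange"
        LOW = "red"
        UNKNOWN = "black"            # black ovals: missing components

    @dataclass
    class MechanismModule:           # MM: activity (or telescoped group of steps) driving a change
        label: str                   # e.g., "protein synthesis"
        confidence: Confidence = Confidence.UNKNOWN

    @dataclass
    class SubstatePerturbation:      # SSP: one perturbed state along the disease pathway
        description: str             # e.g., "gene mutation"
        confidence: Confidence = Confidence.UNKNOWN

    @dataclass
    class Step:                      # one stage: source SSP --MM--> target SSP
        source: SubstatePerturbation
        module: MechanismModule
        target: SubstatePerturbation
        environmental_inputs: List[str] = field(default_factory=list)
        therapy_site: Optional[str] = None   # possible site for therapeutic intervention

    # A minimal linear schema with two steps (branches or feedback loops would be extra Steps):
    mutation = SubstatePerturbation("gene mutation", Confidence.HIGH)
    misfolded = SubstatePerturbation("misfolded protein", Confidence.MEDIUM)
    phenotype = SubstatePerturbation("disease phenotype", Confidence.HIGH)
    schema = [
        Step(mutation, MechanismModule("protein synthesis", Confidence.HIGH), misfolded),
        Step(misfolded, MechanismModule("impaired transport", Confidence.LOW), phenotype,
             environmental_inputs=["diet"], therapy_site="corrector drug"),
    ]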
48. The Scope of Evolutionary Explanations as a Matter of “Ontology-Fitting” in Investigative Practices
Philosophy of Science00:48 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:48:00 UTC - 2018/11/03 06:59:00 UTC
Thomas Reydon (Leibniz Universität Hannover) Both in academic and in public contexts the notion of evolution is often used in an overly loose sense. Besides biological evolution, there is talk of the evolution of societies, cities, languages, firms, industries, economies, technical artifacts, car models, clothing fashions, science, technology, the universe, and so on. While in many of these cases (especially in the public domain) the notion of evolution is merely used in a metaphorical way, in some cases it is meant more literally as the claim that evolutionary processes similar to biological evolution occur in a particular area of investigation, such that full-fledged evolutionary explanations can be given for the phenomena under study. Such practices of “theory transfer” (as sociologist Renate Mayntz called it) from one scientific domain to others, however, raise the question of how much can actually be explained by applying an evolutionary framework to non-biological systems. Can applications of evolutionary theory outside biology, for example to explain the diversity and properties of firms in a particular branch of industry, of institutions in societies, or of technical artifacts, have an explanatory force similar to that of evolutionary theory in biology? Proponents of so-called “Generalized Darwinism” (e.g., Aldrich et al., 2008; Hodgson & Knudsen, 2010) think it can. Moreover, they think evolutionary thinking can perform a unifying role in the sciences by bringing a wide variety of phenomena under one explanatory framework. I will critically examine this view by treating it as a question about the ontology of evolutionary phenomena. My claim is that practices of applying evolutionary thinking in non-biological areas of work can be understood as what I call “ontology-fitting” practices. For an explanation of a particular phenomenon to be a genuinely evolutionary explanation, the explanandum’s ontology must match the basic ontology of evolutionary phenomena in the biological realm. This raises the question of what elements this latter ontology consists of. But there is no unequivocal answer to this question. There is ongoing discussion about what the basic elements in the ontology of biological evolutionary phenomena (such as the units of selection) are and how these are to be conceived of. Therefore, practitioners from non-biological areas of work cannot simply take a ready-for-use ontological framework from the biological sciences and fit their phenomena into it. Rather, they usually pick those elements from the biological evolutionary framework that seem to fit their phenomena, disregard other elements, and try to construct a framework that is specific to the phenomena under study. By examining cases of such “ontology fitting” we can achieve more clarity about the requirements for using evolutionary thinking to explain non-biological phenomena. I will illustrate this by looking at an unsuccessful case of “ontology fitting” in organizational sociology.
49. Lessons from Synthetic Biology: Engineering Explanatory Contexts
Philosophy of Science00:49 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:49:00 UTC - 2018/11/03 06:59:00 UTC
Petri Turunen (University of Helsinki) The poster outlines a four-year empirical investigation into a synthetic biology (BIO, EBRC, Elowitz 2010, Morange 2009) consortium. The focus of the investigation was on how scientists in a highly interdisciplinary research consortium deal with interdisciplinary hurdles. In particular, we studied how the scientists communicated with each other when they were trying to explain issues related to their field of expertise. What kinds of representational strategies were used? Which ones were successful? Synthetic biology was chosen as the target field for this investigation for two reasons. Firstly, synthetic biology is a particularly interdisciplinary field that brings together, among others, biologists, engineers, physicists, and computer scientists. Secondly, synthetic biology is still a relatively new field of study. It does not yet have a clear disciplinary identity or well-regimented methodological principles. Since synthetic biology is still largely in the process of negotiating its practices, it provides a particularly good case for studying how interdisciplinary practices get negotiated in actual practice. Our focus was on representational strategies because our empirical case was particularly suited for observing them. We followed an interdisciplinary consortium made up of three separate groups with differing backgrounds ranging from industrial biotechnology and molecular plant biology to quantum many-body systems. We were given permission to observe consortium meetings, where the three different groups came together and shared their findings. These meetings made the representational strategies used by the scientists particularly visible, since their severe time constraints and discursive format forced the scientists to think carefully about how to present their findings. We followed and taped these consortium meetings. In addition, we performed more targeted personal interviews. Based on these materials we made the following general observations: 1. Interdisciplinary distance promoted more variance in the use of differing representational means. That is, the bigger the difference in disciplinary background, the less standardized the communication. 2. Demands for concreteness varied: more biologically inclined researchers wanted connections to concrete biological systems, whereas the more engineering-oriented researchers wanted input on what sort of general biological features would be useful to model. Both aspects related to the model-target connection but imposed different demands on what was relevant for establishing that connection. 3. Interdisciplinary distance promoted the use of more schematic and general representations. Interdisciplinary distance was thus related to noticeable differences in the utilized representational strategies. All three observations also suggest that the scientists are not merely transmitting content but are instead trying to construct suitable representational contexts for that content to be transmissible. That is, scientists are performing a kind of contextual engineering work. Philosophically, the interesting question then becomes: how exactly is content related to its representational context?
51. The Narrow Counterfactual Account of Distinctively Mathematical Explanation
Philosophy of Science00:51 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:51:00 UTC - 2018/11/03 06:59:00 UTC
Mark Povich (Washington University, St. Louis) An account of distinctively mathematical explanation (DME) should satisfy three desiderata: it should account for the modal import of DMEs; it should distinguish uses of mathematics in explanation that are distinctively mathematical from those that are not (Baron 2016); and it should also account for the directionality of DMEs (Craver and Povich 2017). Baron’s (forthcoming) deductive-mathematical account, because it is modeled on the deductive-nomological account, is unlikely to satisfy these desiderata. I provide a counterfactual account of distinctively mathematical explanation, the Narrow Counterfactual Account (NCA), that can satisfy all three desiderata. NCA satisfies the three desiderata by following Lange (2013; but not Lange 2017, apparently) in taking the explananda of DMEs to be of a special, narrow sort. Baron (2016) argues that a counterfactual account cannot satisfy the second desideratum, because such an account, according to Baron, holds that an explanation is a DME when it shows a natural fact to depend counterfactually on a mathematical fact. However, this does not distinguish DMEs from non-DMEs that employ mathematical premises. NCA satisfies the second desideratum by narrowing the explanandum so that it depends counterfactually *only* on mathematical fact. Such an explanandum is subject to a DME. This narrowing maneuver also allows NCA to satisfy the first desideratum. Since the narrowed explanandum depends counterfactually only on a mathematical fact, changes in any empirical fact have no "effect" on the explanandum. Narrowing the explanandum satisfies the third desideratum, because Craver and Povich's (2017) "reversals" are not DMEs according to NCA. To see this, consider the case of Terry's Trefoil Knot (Lange 2013). The explanandum is the fact that Terry failed to untie his shoelace. The explanantia are the empirical fact that Terry's shoelace contains a trefoil knot and the mathematical fact that the trefoil knot is distinct from the unknot. Craver and Povich (2017) point out that it is also the case that the fact that Terry’s shoelace does not contain a trefoil knot follows from the empirical fact that Terry untied his shoelace and the mathematical fact that the trefoil knot is distinct from the unknot. (One can stipulate an artificial context where the empirical fact partly constitutes the explanandum.) However, if we narrow the explananda, NCA counts Terry’s Trefoil Knot as a DME and not Craver and Povich’s reversal of it. This is because the first of the following counterfactuals is arguably true, but the second is arguably false: 1) Were the trefoil knot isotopic to the unknot, Terry would have untied his shoelace that contains a trefoil knot. 2) Were the trefoil knot isotopic to the unknot, Terry would have had a trefoil knot in the shoelace that he untied. (I use Baron, Colyvan, and Ripley’s [2017] framework for evaluating counterfactuals with mathematically impossible antecedents, so that these two counterfactuals get the right truth-values.) The same is shown for all of Lange’s paradigm examples of DME and Craver and Povich's "reversals".
52. Developing a Philosophy of Narrative in Science
Philosophy of Science00:52 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:52:00 UTC - 2018/11/03 06:59:00 UTC
Mary S. Morgan (London School of Economics), Mat Paskins (London School of Economics), Kim Hajek (London School of Economics), Andrew Hopkins (London School of Economics), Dominic Berry (London School of Economics) Narratives are at work in many sciences, operating at various levels of reasoning and performing a wide variety of functions. In some areas they are habitual, as in the natural historical sciences, but they are also to be found in less likely places: for example as integral to mathematical simulations, or in giving accounts of chemical syntheses. Despite their endemic nature, philosophers of science have not yet given much credence to narrative — either as a kind of explanation, a type of observational reporting, a format of representation, or any of the other purposes to which narratives can be put. Yet — as is evident in the brief outline below — the usage of narratives carries ontological implications and prompts epistemic questions. Our poster introduces the ‘narrative science project’, which is investigating a number of scientific sites to develop a philosophical approach to scientists’ use of narratives within their communities, rather than in their pedagogical or popularising usages. Three questions exemplify the value of admitting narrative into the philosophy of science. How do candidate laws of nature interact with narrative explanation in natural historical sciences? Laws are traditionally required for explanation in the sciences, but it has been argued that in the natural historical sciences they rather ‘lurk in the background’. Initial project findings suggest that in narrative accounts in these fields, laws might rather ‘patrol’ than ‘lurk’ — to forbid certain narratives and to constrain those that are told without ever quite determining the account. This ‘patrolling’ may function differently with respect to long-term changes than with short-term upheavals — such as those found in geology or earth science. But narratives have also been found in situations of disjunctions or gaps in law-based explanations in these historical sciences, or play a bridging or unlocking function between scientists from different fields working together. How do the social, medical, and human sciences rely on co-produced “analytical narratives” in reporting their observational materials? It is quite typical of a range of scientific methods that ‘observations’ consist of individual accounts of feelings or attitudes or beliefs, so that the data provided comes directly from the ‘subjects’ involved. Often the materials come in the form of anecdotes, small contained narratives, or fragments of longer ones. Our evidence suggests we should treat these as ‘co-produced’ observations, where sometimes the analytical work goes alongside the subject to be reported polyphonically, and at other times the ‘objective analysis’ of the observing scientist is integrated into the self-witnessed, ‘subject-based’ reporting to produce something like ‘analytical observations’. We should consider narrative seriously as an available format of representation in science, worthy of the same philosophical consideration given to models, diagrams, etc. Answers to these questions will rely not just on philosophy but also on narrative theory, which helps to distinguish narrative from narrating. Such an approach raises a number of issues, for example: Is there a standard plot, or does it vary with discipline? Our poster imagines the narrative plots of chemical synthesis, developmental biology, anthropology, engineered morphology, psychological testimony, and geological time.
53. Pluralist Explanationism and the Extended Mind
Philosophy of Science00:53 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:53:00 UTC - 2018/11/03 06:59:00 UTC
David Murphy (Truman State University) Proponents of the hypothesis of extended cognition (HEC) regularly invoke its explanatory contributions, while critics assign it negative explanatory value. Mark Sprevak’s critique, inspired by Peter Lipton, casts doubt on the efficacy of the shared strategy of invoking explanation as justification. Specifically, an inference to the best explanation (IBE) concerning HEC is said to fail because there is a close rival that makes a competing truth claim, namely the hypothesis of embedded cognition (HEMC), but HEMC cannot be differentiated meaningfully from HEC in relation to explanatory virtues. I argue that even though there is merit to the critique when we accept its framing, the ascription of a narrow model of IBE to the discussants leads to a faulty generalization concerning available explanatory resources and removes promising explanationist strategies from view. When we, by contrast, set explanatory tools sympathetically (actualizing a directive set by Sprevak for his critique), the viability of arguments based on explanatory contributions returns to view. Lipton and Sprevak’s critique notwithstanding, commitment to “the core explanationist idea that explanatory considerations are a guide to inference” (Lipton, Inference to the Best Explanation, 153) comports well with endorsing explanatorily based arguments for and against HEC and HEMC. Strikingly, appropriating and developing resources presented by Lipton facilitates the deflection of much of Lipton and Sprevak’s critique. Placing these broadening moves under the umbrella of pluralist explanationism (an explanationism assisted by Lipton’s “compatibilist” variant), I demonstrate how this resets the debate, concluding that the explanationist need not agree to the stalemate regarding explanatory virtues that the critique posits. First, in agreement with Lipton, I feature background beliefs and interest relativity. Sprevak draws from Lipton to set IBE as inferring to the hypothesis that best explains scientific data, but that standard model narrows when he ignores background beliefs and interest relativity. That narrowing illicitly enables key critical moves. Second, bringing contrastive explanation (CE) to bear (featured by Lipton in relation to IBE) not only illuminates an argument made by proponents of HEC that Sprevak resists, but also draws in the “explanatory pluralism” Lipton connects to CE. Third, much of the strength of the critique depends on ascribing a model of IBE anchored in realism. When we instead explore perspectives arising from anti-realist variants of IBE, again using Lipton as a prompt, that strength diminishes. Fourth, I contend that an argument against extending HEC to consciousness stands when seen as a “potential” explanation (Lipton), akin to Peircean abduction, even though it fails when interpreted as an attempted IBE, narrowly conceived. Fifth, developing a connection between explanationism and voluntarism adumbrated by Lipton creates additional space for explanatory appeals that fail within the unnecessarily tight constraints ascribed by the critic. Discussants of HEC and HEMC need not accept the ascription of a narrow model of explanationism to themselves. Within a pluralist explanationist framework, we see that explanatory considerations provide significant backing for key positions regarding the extended mind, including retaining HEC as a live option, favoring HEC and HEMC in different contexts, and resisting extending the extended mind hypothesis to consciousness.
Philosophy of Science00:54 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:54:00 UTC - 2018/11/03 06:59:00 UTC
Josiah Skogen (Indiana University), Michael Goldsby (Washington State University), Samantha Noll (Washington State University) Wicked problems are defined as complex challenges that require multifaceted solutions involving diverse scientific fields. The technical expertise scientists provide is part of the solution. Unfortunately, there can be paralysis as various value commitments within the scientific community collide when solutions are contemplated. This can provide policy-makers with the impression that the science is incomplete and unable to provide policy advice. For example, consider city climate change plans. In an effort to reduce the impact of a changing climate on urban citizens and ecologies, a wide range of cities are developing such plans in consultation with urban ecologists and conservation biologists. It is easy to assume, then, that these two fields can contribute equally to city climate change plans, especially in light of the fact that both are given a privileged position in environmental policy discussions (Shrader-Frechette 1993). However, constructive interactions have been infrequent between urban ecologists and conservation biologists involved in the crafting of climate change mitigation strategies, and, in fact, members of these groups are commonly unaware of each other’s work (McDonnell 2015). We argue that one of the reasons for the lack of collaboration is the following: urban ecologists and conservation biologists are guided by seemingly incompatible values. While urban ecology draws from a wide range of disciplines that are focused on human and ecological interactions, conservation biology often favors ecological restoration and place-based management approaches without considering social systems (Sandler 2008). This apparent conflict results in a failure of coordination between the two fields. However, this need not be so. In the case described above, key values guiding the two fields appear to be in conflict. Yet when broader impacts goals are taken into account, the values at the heart of urban ecology and conservation biology are not only consistent but complementary. Unfortunately, scientists are rarely trained to consider the implications of their value commitments. As such, conflict can arise from talking past each other with respect to their broader impacts goals. We have recently been awarded a fellowship to help scientists explore the values guiding their research and thus better realize their broader impacts goals. Specifically, we adapted a tool for promoting interdisciplinary collaboration (The Toolbox Dialogue Initiative) to help scientists better articulate and realize the values underlying their work. Our work is focused on helping them advocate for their solutions, but it can also be used to show how two disparate fields have common goals. The poster will describe the status of our project.
55. Enhancing Our Understanding of the Relationship Between Philosophy of Science and Scientific Domains: Results from a Survey
Philosophy of Science00:55 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:55:00 UTC - 2018/11/03 06:59:00 UTC
Kathryn Plaisance (University of Waterloo), John McLevey (University of Waterloo), Alexander Graham (University of Waterloo), Janet Michaud (University of Waterloo) Discussions among philosophers of science as to the importance of doing scientifically- and socially-engaged work seem to be increasing of late. Yet we currently have little-to-no empirical data on the nature of engaged work, including how common it is, the barriers philosophers face when engaging other communities, the broader impacts of philosophers’ work, or the extent to which the discipline actually values an engaged approach. Our project seeks to address this gap in our collective knowledge. In this paper, we report the results of a survey of 299 philosophers of science about attitudes towards and experiences with engaging scientific communities, barriers to engagement, and the extent to which philosophers of science think scientifically engaged work is and should be valued by the discipline. Our findings suggest that most philosophers of science think it’s important that scientists read their work; most have tried to disseminate their work to scientific or science-related communities; and most have collaborated in a variety of ways (e.g., over half of respondents have co-authored a peer-reviewed paper with a scientist). In addition, the majority of our respondents think engaged work is undervalued by our discipline, and just over half think philosophy of science, as a discipline, has an obligation to ensure it has an impact on science and on society. Reported barriers to doing engaged work were mixed and varied substantially depending on one’s career stage. These data suggest that many philosophers of science want to engage, and are engaging, scientific and other communities, yet also believe engaged work is undervalued by others in the discipline.
56. The Epistemology of the Large Hadron Collider: An Interdisciplinary and International Research Unit
Philosophy of Science00:56 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:56:00 UTC - 2018/11/03 06:59:00 UTC
Michael Stoeltzner (University of South Carolina) The aim of this poster is to present the work of the research unit “The Epistemology of the Large Hadron Collider”, which was granted a six-year term of funding in 2016 by the German Research Foundation (DFG) together with the Austrian Science Fund (FWF). The group is composed of twelve principal investigators, six postdocs, and five doctoral students from the philosophy of science, history of science, and science studies. The research unit investigates the philosophical, historical, and sociological implications of the activities at the world’s largest research machine, the Large Hadron Collider (LHC), at the European Organization for Nuclear Research (CERN) in Geneva. Its general question is whether the quest for a simple and universal theory, which has motivated particle physicists for several decades, is still viable at a time when there are no clear indications for physics beyond the standard model and all experimental evidence is increasingly coming from a single large and complex international laboratory. Among the topics relevant to philosophers of science, and specifically philosophers of physics, are the nature of scientific evidence in a complex experimental and theoretical environment, the role of computer simulations in establishing scientific knowledge, the dynamics of the model landscape and its driving forces, the relationship between particle physics and gravitation (using the examples of dark matter searches and modified gravity), the significance of guiding principles and values for theory preference, the impressive career of and recent skepticism towards naturalness, along with its relationship to effective field theories, the natures of detectable particles and virtual particles, the role of large-scale experiments within model testing and explorative experimentation, and the understanding of novelty beyond model testing. These interactions between the change in the conceptual foundations of particle physics prompted by the LHC and the complex practices engaged in there are studied in six independent, but multiply intertwined, research projects: A1 The formation and development of the concept of virtual particles; A2 Problems of hierarchy, fine-tuning, and naturalness from a philosophical perspective; A3 The contextual relation between the LHC and gravity; B1 The impact of computer simulations on the epistemic status of LHC data; B2 Model building and dynamics; B3 The conditions of producing novelty and securing credibility from the sociology of science perspective.
57. The Novel Philosophy of Science Perspective on Applications of the Behavioural Sciences to Policy
Philosophy of Science00:57 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:57:00 UTC - 2018/11/03 06:59:00 UTC
Magdalena Malecka (University of Helsinki) The objective of this research project is to propose a novel philosophy of science perspective for analysing reliance on behavioural findings in policy contexts. The recent applications of the behavioural sciences to policymaking are based on research in cognitive psychology, behavioural economics, and decision theory. This research is supposed to provide the knowledge necessary to make policy that is effective (Shafir, ed. 2012; Oliver 2013). ‘Nudging’ is an example of a new approach to regulation, elicited by the application of the behavioural sciences to policy. Its adherents advocate using knowledge about factors influencing human behaviour in order to impact behaviour through changes in the choice architecture (Thaler, Sunstein 2008). The debate on nudging in particular, and on bringing the behavioural sciences to bear on policy in general, focuses predominantly on the moral limits to nudging and the defensibility of libertarian paternalism (Hausman, Welch 2010; White 2013). Philosophers of science consider whether, for behavioural research to provide policy-relevant insights, it should identify mechanisms underlying the phenomena under study (Gruene-Yanoff, Marchionni, Feufel 2018; Heilmann 2014; Gruene-Yanoff 2015; Nagatsu 2015). I argue that the debate overlooks three important points. First, there is a lack of understanding that behavioural research is subject to interpretation and selective reading in policy settings. Second, the debate is based on a simplistic understanding of behavioural research that fails to pay attention to how causal factors and behaviours are operationalized, and to what exactly the behavioural sciences offer knowledge of. Finally, there is a lack of a broader perspective on the relationship between the type of knowledge provided by the behavioural sciences and the type of governing that behaviourally informed policies seek to advance. My project addresses these missing points in the debate. It shows that when reflecting on reliance on scientific findings from the behavioural sciences in policy settings, it is important not only to analyse the conditions under which a policy works (is effective). It is equally consequential to understand how the explanandum is conceptualized, what kinds of causal links are studied, and what is kept in the background. My analysis builds on Helen Longino’s work on studying human behaviour (2013), which has gone virtually unnoticed in the discussion of behavioural science in policy.
Philosophy of Science00:58 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:58:00 UTC - 2018/11/03 06:59:00 UTC
Robert Meunier (University of Kassel) When philosophers of science turned their attention to practices, many aspects of science came into focus that had been neglected in earlier accounts, which were mainly interested in theories and their justification. The roles of instruments and experiments, of various forms of modelling, and of images and diagrams in discovery, reasoning, and concept formation have been discussed since then, providing rich and empirically based accounts of how scientific knowledge comes about and changes. These accounts have shifted the questions, broadened the field, and brought to the fore many specificities of different areas of science. Yet it seems worthwhile from a philosophical (probably as opposed to a historical or sociological) perspective to aim for a more general account of the nature of scientific practice and its capacity to result in new knowledge. Such a program would not aim at reestablishing the unity of science by reformulating on the level of practice the Scientific Method (in the singular and with capitals), which was previously mainly addressed as a pattern of reasoning (where debates concerned which was the right one). Instead, the idea would be to spell out the relevant kinds and features of scientific activities, the criteria for their individuation, and the appropriate level of detail of their description with regard to different epistemological questions. The result could be a unified analytical apparatus that can be used to map and explain the disunity found in the sciences. The proposed poster is meant to introduce a grant project that is set up to work toward such a program in philosophy of science. The project can build on the literature on various aspects of scientific practice to draw general lessons from it, as well as on previous philosophical accounts of the significance of activities as units of analysis in studying scientific practice (e.g., by Hasok Chang). To get a grip on the problem, the project starts by distinguishing three levels of analysis. First, the knowledge that goes into planning and initiating scientific activity, or “project knowledge” (to avoid the notion that science is an entirely preconceived activity, it should be emphasized that this involves knowledge that enables researchers to evaluate and seize opportunities in the face of unprecedented events). Second, the research activities themselves, which involve the work with the materials of interest or a model system that is taken to represent them, including data recording techniques, as well as activities that result in the necessary material and social structures. Third, the representations of results, which include the representation of the research activities and the material constellations they give rise to, as well as the resulting observational or experimental data and their conceptual interpretation. The design of the poster will represent these three levels and their interrelation, and indicate the necessary distinctions that should be made on each level to account for the fact that different forms of activities, guided by different project knowledge and resulting in different representations, give rise to different forms of knowledge in various areas.
59. Analysis of the Division of Scientific Labor Using Contributor Sections
Philosophy of Science00:59 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:59:00 UTC - 2018/11/03 06:59:00 UTC
Phillip Honenberger (University of Nevada, Las Vegas), Evelyn Brister (Rochester Institute of Technology) Bibliometrics is now an established method for investigating scientific knowledge production and its social dynamics. This project analyzes contributor sections in order to better understand the collaborative dynamics of scientific research teams. Since 2009, scientific journals such as Nature and PNAS have adopted a protocol according to which co-authored research papers explicitly specify each author’s contribution – that is, they identify which authors performed experiments, analyzed data, contributed materials, or wrote the paper. Though not unbiased, these contributor sections provide explicit representations of the division of credit, responsibility, and labor type in research teams. Here we present methods for analyzing this data source, and we use the results to suggest and test theses about scientific collaboration and interdisciplinary integration. In a pilot study, we used webscraping techniques to collect and analyze contributor sections from 333 articles that had been published in PNAS in March-May 2017. We found distinct differences in the reporting of certain forms of labor between disciplines and a greater variety of types of labor reported in some disciplines as compared to others. The current study expands the dataset and number of disciplines to include all articles published in PNAS from 2013 to the present (~10,000 articles). First, we analyze patterns of distribution of scientific labor and connect these to philosophical questions concerning differences in collaborative practice between disciplines. We identify reasons why some differences might be due to reporting bias and others to differences in collaborative norms. We also identify how collaborative dynamics are related to size of research team. This sort of study bridges the gap between descriptive social scientific studies of team science and mathematical/conceptual models of the division of cognitive labor (e.g., Muldoon 2018, Bruner and O’Connor 2018). Second, we analyze this dataset to evaluate the forms of labor contributed by authors to multidisciplinary collaborations. We identify authors’ disciplinary affiliation, enabling an analysis of cross-disciplinary contributions based on the journal’s disciplinary classification of articles. For instance, we identify the type of labor most often contributed by statisticians to papers not classified as mathematics. Additionally, we evaluate a measure of the evenness of the division of cognitive labor proposed by Larivière et al. (2016) and propose a method for using the evenness of labor distribution to test hypotheses about the processes that foster interdisciplinary integration (Wagner 2011). The project illustrates both the promises and challenges of using empirical approaches in philosophical research.
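To give a concrete sense of the kind of computation such an analysis involves, the sketch below implements a simple Shannon-based (Pielou-style) evenness score for how contributions are distributed across a paper's authors. It is a hypothetical illustration only: it is not the specific measure of Larivière et al. (2016), and the contributor data and category names are invented to mimic the style of PNAS contribution statements.

    import math

    def labor_evenness(contributions):
        """Pielou-style evenness of contribution counts across a paper's authors.

        `contributions` maps each author to the list of labor categories they are
        credited with. Returns a value in [0, 1]; 1 means contributions are spread
        perfectly evenly across authors.
        """
        counts = [len(cats) for cats in contributions.values()]
        total = sum(counts)
        k = len(counts)
        if k <= 1 or total == 0:
            return 1.0  # a single author (or no credits) is trivially "even"
        shares = [n / total for n in counts if n > 0]
        entropy = -sum(s * math.log(s) for s in shares)
        return entropy / math.log(k)

    # Hypothetical contributor section, loosely modeled on PNAS-style statements:
    paper = {
        "Author A": ["designed research", "performed research", "wrote the paper"],
        "Author B": ["performed research", "analyzed data"],
        "Author C": ["contributed reagents", "analyzed data"],
    }
    print(round(labor_evenness(paper), 3))

Scores like this, computed per paper and aggregated by discipline or team size, are one way the evenness of labor distribution could be compared across collaborations; other weightings or category schemes would of course yield different measures.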
60. Why Philosophers of Science Should Use Twitter (And What They Should Know About How to Do It Well)
Philosophy of Science01:00 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:00:00 UTC - 2018/11/03 06:59:00 UTC
Janet Stemwedel (San Jose State University) In the 12 years since its launch in 2006, the social media service Twitter has risen to prominence not least because of its use by elected officials and political candidates, by celebrities, and by journalists and media outlets. Among Twitter’s hundreds of millions of active users are a significant number of students and faculty members, including in philosophy. As of June 1, 2018, the website TrueSciPhi.org lists the Twitter accounts of 452 individual philosophers with at least 1000 Twitter followers (1), and of 136 philosophy organizations with at least 500 Twitter followers (2). At present, “tweeting” is still a new enough practice that there are not standard guidelines for how academics might best use Twitter, whether to advance their professional activities or at least to avoid getting in trouble with their employer, school, or professional community. This poster offers such guidance, tailored specifically to philosophers of science. The poster addresses particular benefits philosophers of science can get from tweeting, including finding the best audiences for one’s work, expanding the reach of conference presentations, cultivating networks and collaborations, and building synergies between research, pedagogy, and outreach. It presents practical strategies for philosophers of science who are new to Twitter and for those who are active users looking to increase their impact. This poster also considers some of the pitfalls of Twitter use (including trolls and risk-averse institutions) and offers advice on avoiding or mitigating them.
61. Integrating Philosophy of Science and ELSI Research: the Case of Animal Experimentation
Philosophy of Science01:01 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:01:00 UTC - 2018/11/03 06:59:00 UTC
Simon Lohse (Leibniz Universität Hannover), Dirk Alexander Frick (Leibniz Universität Hannover), Rebecca Knab (Leibniz Universität Hannover) Background: A polarized debate on animal experimentation has persisted for decades in both academia and the general public. Correspondingly, animal experimentation has been on the agenda of many disciplines in the humanities and the social sciences, such as ethics (e.g., "What is the moral status of animals?"), law (e.g., "How should we regulate animal experimentation?"), sociology (e.g., "How do scientists justify animal experimentation in practice?"), and philosophy of science (e.g., "What is a model organism?"). However, there has been little interaction between these disciplinary approaches, leaving a number of issues underanalyzed. Aims & Methodology: In our poster presentation, we attempt to bridge this gap by integrating a philosophy of science approach and ELSI research (i.e., research on ethical, legal and social implications). We address common but problematic assumptions about the use of non-human animals in biomedical research that are held by proponents and opponents of this research practice, respectively. Our aim is to debunk these assumptions by showing that they are grounded in (a) an overly simplified picture of scientific practice and its regulation, (b) unjustified generalizations, and/or (c) the downplaying of uncertainties. We focus on the academic discourse in Europe and the US as well as resources and platforms related to science communication (such as "Cruelty Free International" and "Understanding Animal Research"). Since we are aiming for a balanced discussion, we will describe and analyze three misconceptions each held by proponents of animal experimentation (P1-P3) and opponents (O1-O3), respectively. Misconceptions: P1) "Progress in biomedical research and medicine is unfeasible without animal experimentation." P2) "The current regulatory regime ensures that only scientifically sound and scientifically indispensable animal experiments are performed." P3) "The implementation of the 3R principle (replacement, reduction, refinement) and harm-benefit analyses ensure that animal experimentation is practiced in an ethical way." O1) "Animal experimentation in basic research does not translate into health benefits for humanity." O2) "Animal experimentation for translational reasons is misguided in principle, as 'humans are no 70-kg mice.'" O3) "Almost all animal experiments used in translational research could be replaced by non-animal methods." Conclusion: Our analysis of these assumptions will show that the integration of ELSI research and philosophy of science is useful for understanding the disparity of views within the debate on animal experimentation. Most importantly, debunking common misconceptions among prominent positions allows for a more nuanced discussion of the uncertainties and the balancing of goals and values in animal-based biomedical research.
Philosophy of Science01:02 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:02:00 UTC - 2018/11/03 06:59:00 UTC
The PhilSci-Archive (philsci-archive.pitt.edu) is an open access, electronic archive of preprints and postprints in the philosophy of science. The archive’s goal is to preserve work in philosophy of science and to foster its rapid exchange. It is a service to philosophers of science by philosophers of science, and it is curated to limit its content to material of interest to professional philosophers of science. PhilSci-Archive currently hosts over 6,000 items, and sharing preprints on the archive is encouraged by many of the major journals in philosophy of science, including Philosophy of Science and the European Journal of Philosophy of Science. Papers can be downloaded freely and without needing to register or open an account. Our poster will display the inner workings of the archive submission process, report statistics on the archive content, and introduce potential users to some of the capabilities and features of PhilSci-Archive. The archive echoes several open access journals: analytica; Lato Sensu; Philosophy, Theory, and Practice in Biology; and Theoria. The archive also hosts PhilMath-Archive, which is moderated separately by philosophers of mathematics. In the near future, the PhilSci-Archive will implement MePrints, which will allow archive users to have profile pages that collect their deposits and user information together on one page. PhilSci-Archive includes a special section for those organizing conferences or preparing volumes of papers and offers them an easy way to circulate advance copies of papers. On request, the archive will automatically generate a PDF preprint volume of the papers in a conference or volume section. A conference section has been established for PSA2018. Our poster will guide users through the process of depositing their PSA submissions in this conference section. Archive Board members will be available for discussion and demonstration of how to submit contributions.
Philosophy of Science01:03 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:03:00 UTC - 2018/11/03 06:59:00 UTC
James Ladyman (University of Bristol) These resources for school science teachers were created in a collaborative project between the University of Bristol and teachers and educators in Bristol. The Thinking Science resources come in the form of questions designed to provoke thinking and discussion, and to consolidate and extend core curriculum knowledge and understanding. There are resources for Physics, Chemistry, Biology and Working Scientifically. Each topic card has four ‘Get thinking’ questions, followed by a ‘Think big’ question. There is teacher guidance to accompany each topic card.
64. K-12 Science Teachers: Unsuspecting Philosophers of Science
Philosophy of Science01:04 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:04:00 UTC - 2018/11/03 06:59:00 UTC
Gregory Macklem (University of Notre Dame) I don’t need to argue the value of philosophy of science to the attendees of a PSA conference, and it is perhaps an obvious claim (for those same attendees, at least) that it is impossible to teach a science class without at least an implicit communication of philosophy of science. According to the National Science Teachers Association (NSTA), there are roughly 60,000 middle school and 105,000 high school science teachers in the United States, along with an additional 1.6 million elementary school teachers who are expected to teach science. This suggests that there are upwards of 2 million individuals in the U.S. who are engaging in and teaching basic philosophy of science but have had very little, if any, training. This poster is intended to describe ways that philosophers of science can engage with both pre-service and in-service science educators to help them (1) expand their understanding of philosophy of science, (2) understand the importance of philosophy of science, (3) be more reflective on their own practice as science educators, and (4) alter their instruction to more appropriately incorporate philosophy of science (explicitly or implicitly). Specific examples that can be used include: (A) Presentations at science education conferences: I gave a session at a 2017 NSTA regional conference along with another historian and a philosopher of science entitled “C'mon, Neil! Why Good Philosophy is Part of Good Science Teaching and Science.” I have also given a session at a state-level conference: “The top 5 reasons to abandon the 5-step scientific method.” (B) Discussion of professional development modules that can be used with pre-service and/or in-service science teachers. (C) Collaborating with professors in science education programs.
65. The Greatest Challenge Facing Philosophy of Science Today (According to Philosophers of Science)
Philosophy of Science01:05 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:05:00 UTC - 2018/11/03 06:59:00 UTC
Nicholas Zautra In September of 2016, a doctoral student in History and Philosophy of Science started a social engagement project in the form of an interview-based podcast featuring prominent and up-and-coming philosophers of science. The initial goals of the project were to develop an outreach platform that would connect philosophers of science with other areas of academia and with the public; to learn of the origin stories and diverse backgrounds of philosophers of science; and to gain a better understanding of the spectrum of philosophy of science methods. Two years and over forty 90-minute interviews later, the project continues to reach its initial goals, while having simultaneously evolved into a forum in which working philosophers of science freely share their meta-philosophic views on the conceptual, epistemic, and structural problems facing their discipline. The present study offers insight into a variety of such views via an analysis of the recorded and transcribed podcast interviews, with a focus on a central question asked of interviewees: “What is the greatest challenge facing philosophy of science today?” Results of the study suggest four perceived general challenges facing philosophy of science: 1) staying relevant to mainstream philosophy and to mainstream science; 2) drawing too much or too little on philosophic methods and/or empirical work; 3) over-reliance on case studies as a preferred methodology; and 4) how to contribute to the public understanding of science as a philosopher of science. Developing a general understanding of the challenge areas in philosophy of science as perceived by those in the field may prove useful in helping to direct current and future philosophy of science work, as well as efforts to enhance the Philosophy of Science Association.
66. Ordinary Citizens vs Stakeholders: Which 'Public' Should Participate in Well-Ordered Science?
Philosophy of Science01:06 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:06:00 UTC - 2018/11/03 06:59:00 UTC
Renaud Fine (Université Grenoble-Alpes) My aim with this poster is to explore ways to have a concrete influence on the shaping of the politicization of science and to push it in a more democratic direction, with a focus on institutionalized modes of public participation in the definition of the research agenda. I investigate these questions through the prism of the ‘public’ that is supposed to participate: how does the way it is conceived influence the potential applicability of normative philosophical accounts of the democratization of science? My intuition, and the thesis I want to set out and defend, is that the conception mobilized by one of the main proposals articulated along these lines, Philip Kitcher’s ideal of a well-ordered science (2001; 2011), is what ultimately prevents it from ever being successfully put into practice. I will argue that it can therefore be seen as what I want to call a counter-ideal, namely a theory which, if applied, would ineluctably backfire and lead to an aggravation of the very problems it intends to solve. My argument builds on the classical distinction, made by sociological accounts of public engagement, between the figure of the general public and that of the stakeholder, to show that adopting one or the other has straightforward consequences for the concrete design of the processes intended to implement them. The ordinary citizen, possessing neither expertise on, nor interests in, the question at hand, appears to be the key element leading to the institutionalization of the classical, objectivist and discursive forms of public deliberation, where participants are randomly chosen in order to best approximate this figure (Fishkin 2009). The random selection of participants, however, is inevitably bound to leave aside people who do not constitute a significant fraction of society but are substantially more affected by the decision to be taken. Participative politics thus conceived has more to do with engineering the public acceptance of science than with building a more active citizenship; and the institution of such processes is more often than not used as a way to work against spontaneous associative mobilization. Absorbed into disciplinary regimes of power, deliberative forums become new instruments of government that, if unchecked, can easily perpetuate the very oppression they aim to contain (Freire 1970). The concrete application of a model such as Kitcher’s would therefore very likely lead to excluding the most affected from the deliberation, reducing the participative options offered to stakeholders, and potentially aggravating the problem of unidentifiable oppression he aims to solve. I conclude by arguing that this could be addressed by implementing stake-oriented participative processes.
67. Scientific Consensus and Climate Science Skepticism
Philosophy of Science01:07 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:07:00 UTC - 2018/11/03 06:59:00 UTC
Michelle Pham (University of Washington, Seattle) Science skeptics often undermine expert scientific testimony by appealing to lack of consensus. Climate change deniers, for example, highlight lack of agreement among climate scientists regarding anthropogenic climate change (Oreskes and Conway 2010). A crucial, though implicit, assumption behind this strategy is that expert opinion is trustworthy only if it is nearly or completely unanimous. Let's call this the "unanimity criterion" for trustworthy scientific consensus, according to which one should be skeptical if there is dissent within the expert community. A piece from the Heritage Foundation, for example, states: “When the IPCC released its report in 2007, 400 climate experts disputed the findings; that number has since grown to more than 700 scientists, including several current and former IPCC scientists” (Loris 2010, p. 4). Here the skeptic appeals to dissent, supposedly by members of the same expert community, to undermine the IPCC’s consensus position. One response to climate science skeptics is to corroborate the IPCC’s consensus position. Analyzing the abstracts of 928 relevant peer-reviewed publications between 1993 and 2003, Naomi Oreskes finds that “none of the papers disagreed with the [IPCC’s] consensus position" (2004, p. 1686). This result, Oreskes argues, legitimates the IPCC's position. Such a response effectively accepts the unanimity criterion. Oreskes does not discuss, however, that many of the surveyed papers are jointly authored. The positions espoused in these papers are not typically a result of aggregating each individual author's beliefs. More often such papers display a jointly negotiated stance that I argue is amenable to Margaret Gilbert’s (1996) joint commitment model, where members of a group can agree to let a position stand as the group’s view even if they personally believe otherwise. I thus question the aggregative framework invoked by both climate change skeptics and Oreskes to assess expert scientific consensus on climate change. Instead, I offer an alternative conception of consensus based on the joint commitment model, which captures the collective nature of many jointly authored papers.
I also argue that the IPCC’s consensus position represents something much closer to a joint commitment. The reports from which the consensus position emerges are subject to multiple rounds of revision in response to expert reviewers. The report’s conclusion, rather than an aggregation of what individual participants believe about particular issues, represents an act of letting a position stand as the group’s view after a process of deliberation and negotiation about the scientific content. Understanding the IPCC’s consensus position as a joint commitment displays how unanimity is not a relevant marker for assessing trustworthiness of the group’s position. Rather, we should focus on the quality of deliberation, as well as the group’s response to criticism from the relevant experts.
68. The Epistemic and Ethical Import of Computational Simplicity Where Scientific Models Inform Risk Management
Philosophy of Science01:08 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:08:00 UTC - 2018/11/03 06:59:00 UTC
Casey Helgeson (Penn State), Nancy Tuana (Penn State) Other things being equal, simpler theories (or hypotheses, or models) are better. Theses of this form have long been discussed both in science and philosophy (see, e.g., Baker, 2016; Sober 2015). There are many variations on the idea, depending on — among other things — what you mean by “simple” and by “better.” We address the use of a large class of scientific models to inform risk management decisions, and within this context we articulate and defend a new variant of the “simple models are better” thesis. We also discuss associated trade-offs, though how best to balance those remains an open question. The measure of simplicity that we investigate is the computational cost of running a simulation model on a computer. Simple models require less computation; complex models require more time and/or computing power. We have in mind the kind of computational models — widely used in, e.g., atmospheric and earth system modeling — for which analytical solutions are unavailable and model outputs can be computed only by sequentially calculating subsequent states of the system to describe system behavior over time. The epistemically important benefit of computationally simpler models is that it is more feasible to estimate uncertainties in the projections of system behavior that they produce. Such uncertainty estimates come from repeated implementations of the model with different inputs. The larger the ensemble of model runs, the better the characterization of uncertainty. So on a fixed computing budget, (computationally) simpler models lead to more thorough exploration and characterization of uncertainties. In decision-making contexts such as coastal flood risk management, often it is extreme events that are the most concerning, and small differences in the estimated probability of extreme events can sway a decision about how to best manage the risk. Small ensembles of model runs -- a corollary of using computationally complex models -- are particularly badly suited for constraining the probability of such extremes. The result is a coupled ethical-epistemic problem (Tuana, 2017) in which capacity to supply the most decision-relevant information (probability estimates for the most concerning events) trades off with other desiderata that tend to make models more computationally complex (such as increased spatial or temporal resolution, or inclusion of more inputs and processes).
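A toy calculation can illustrate the trade-off described above. The sketch below assumes nothing about any particular climate model and simply treats a model run as a random draw; the threshold, distribution, and ensemble sizes are arbitrary choices for illustration only. It shows how the spread of an estimated extreme-event probability shrinks as the ensemble grows.
```python
# Toy illustration (my own, not from the poster): how ensemble size affects the
# spread of an estimated extreme-event probability. A "model run" is just a
# random draw here; the threshold and distribution are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
threshold = 2.326  # roughly the 99th percentile of a standard normal

def tail_prob_estimates(n_runs: int, n_ensembles: int = 1000) -> np.ndarray:
    """Estimate P(output > threshold) from many ensembles of n_runs runs each."""
    outputs = rng.normal(size=(n_ensembles, n_runs))  # stand-in for model output
    return (outputs > threshold).mean(axis=1)

for n in (20, 200, 2000):
    est = tail_prob_estimates(n)
    print(f"ensemble size {n:5d}: mean estimate {est.mean():.4f}, "
          f"spread across ensembles (std) {est.std():.4f}")
```
With 20 runs per ensemble, most ensembles contain no exceedance at all, so the estimated tail probability is poorly constrained; with 2000 runs the estimates cluster tightly around the true value, which is the epistemic benefit of computational simplicity the abstract points to.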
69. Dismantling the Holobiont Problem for Evolutionary Individuality
Philosophy of Science01:09 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:09:00 UTC - 2018/11/03 06:59:00 UTC
Lane DesAutels (Missouri Western State University), Caleb Hazelwood (Georgia State University) Individuals are things you can refer to, point to, and single out. In much of life science, individuality is indispensable for comparing, counting, and characterizing objects of study. But in the philosophy of life science, few authors seem to agree on what makes something a biological individual. In recent literature (e.g., especially Godfrey-Smith 2013 and Pradeu 2016), philosophers have pointed to two independently fruitful categorizations that are not always coextensive: biological individuality understood by appeal to physiology, and biological individuality understood in terms of evolution by natural selection. Put roughly, physiological individuals are biological entities characterized by functional or metabolic integration, and evolutionary individuals are biological entities characterized by their being “seen” by natural selection (Clarke 2011). On the latter way of thinking, biological entities are individuated if and only if they constitute a unit of selection, thereby satisfying Lewontin’s criteria of producing heritable variation and demonstrating differential fitness (Lewontin 1970). Within the evolutionary camp, however, there is significant debate over what is required to satisfy Lewontin’s criterion of heritable variation. Godfrey-Smith, for example, makes a compelling argument for identifying evolutionary individuals via clearly defined parent-offspring lineages and vertically transmitted traits, specifically through reproduction (Godfrey-Smith 2013). The trouble with Godfrey-Smith’s commitment to heredity understood in terms of clearly identifiable parent-offspring lineages and strict, obligate vertical transmission is that, in accepting this criterion, we must further accept that there are many instances where perfectly good physiological individuals are not evolutionary individuals. For example, any of the vast number of host-microbial symbiont associations, referred to as 'holobionts', would have to be dismissed on these grounds. This is because many holobiont hosts inherit their microbiome in large part through horizontal transmission, i.e., from the environment. Thus, on Godfrey-Smith’s characterization, they are not considered to possess evolutionary individuality. I call this the holobiont problem for evolutionary individuality. This poster argues that, in pursuing a proper conception of evolutionary individuality, limiting the scope of heredity to reproduction fails to account for the ubiquity of horizontal transmission of adaptations in nature that play an important evolutionary role. In doing so, I aim to contribute to the development of a pluralistic account of evolutionary individuality that recognizes the myriad extra-genetic means of inheritance. I further contend that the Extended Evolutionary Synthesis (EES) may assist in such a development. In particular, I focus on one of the central notions developed in the EES literature: ecological inheritance understood via the mechanism of niche construction (Odling-Smee, Laland, & Feldman 2003). Ecological inheritance through niche construction is the process of heredity “through which previous generations as well as current neighbors can affect organisms by altering the external environment or niche that they experience” (Lamm 2012).
I will argue that, through the horizontal transmission of microbial symbionts, holobionts partake in niche construction in a significantly heritable way, thus granting them some degree of evolutionary individuality. Establishing this claim, I conclude, is sufficient to dismantle the holobiont problem for evolutionary individuality.
70. Post-Traumatic Stress Disorder in Non-human Animals: A Response to Descartes
Philosophy of Science01:10 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:10:00 UTC - 2018/11/03 06:59:00 UTC
Kate Nicole Hoffman (University of Pennsylvania) Classically, the suffering of non-human animals, although in certain circumstances recognized as unacceptable (or at least unsavory), has been viewed as different in kind from human suffering. Descartes famously argued that animals are incapable of experiencing pain, and merely react reflexively to bodily harm. Although this extreme view is not as prevalent as it once was, it is still difficult to find evidence to prove the contrary. It seems as though any case of animal "pain" can be explained away with some reference to bodily instinct, which need not include any kind of experience of suffering. The purpose of my project is to challenge Descartes' claims by investigating animal consciousness from the perspective of mental health. In particular, I argue that animals can experience Post-Traumatic Stress Disorder — a kind of suffering which is not easily explained merely with reference to stimulus response. Using specific case studies, I examine and compare the behavior of animals who have been affected by a traumatic event with the symptoms of human PTSD detailed in the Diagnostic and Statistical Manual V. In the first study, a group of African elephants, all of whom were witness to the slaughter of their herds by poachers, killed over 100 rhinoceroses — a violent act unheard of in elephants. Ecologists, puzzled by the non-normative behavior of the elephants, sought a psychological explanation. In the second study, a chimpanzee named Jeannie was released from the New York Laboratory for Experimental Medicine and Surgery in Primates (LEMSIP) after exhibiting "serious emotional and behavioral problems", including self-injury, screaming, and what appeared to be anorexia. The symptoms of both the elephants and Jeannie have been documented by conservationists and sanctuary workers. I have found that their symptoms match up with those displayed in human PTSD, and argue that, by the DSM V's standards, these animals should be diagnosed with the disorder. I conclude that PTSD is the simplest and best explanation for the behavior of these particular animals. Although my research is so far confined to elephants and chimpanzees, I suspect that similar results can be found in many other species. Descartes' view of animals as machines is hardly the norm nowadays. However, the general consensus still prefers a clear distinction between human and animal suffering. One basis for such a distinction is the idea that, although animals can surely experience physical suffering, mental suffering is restricted to humans. It is this outlook which allows for the poor treatment of animals in factory farms, laboratories, and zoos. By arguing that at least some animals can experience PTSD, I hope to bridge the gap between human and animal suffering. Doing so will reveal certain ethical implications related to protecting not just the physical, but also the mental lives of animals.
71. How Non-Epistemic Values Can Be Epistemically Beneficial in Scientific Classification
Philosophy of Science01:11 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:11:00 UTC - 2018/11/03 06:59:00 UTC
Soohyun Ahn (University of Calgary) It is often assumed that science aims to discover real divisions of the world, what philosophers call ‘natural kinds.’ Since natural kinds are supposed to be independent of us and useful for our epistemic endeavours, it is thought that value-laden considerations are either irrelevant or harmful in identifying natural kinds. Non-epistemic values, such as social, moral, and political values, are thought to sidetrack the epistemic pursuit of scientific classification, resulting in arbitrary groupings. These assumptions about natural kinds—independence from human interests and the priority of epistemic purposes—are part and parcel of the value-free ideal in science (VFI). Indeed, some philosophers of science have raised a concern over value-driven modifications of natural kind classifications (Griffiths 2004; Khalidi 2013). The primary concern over value-driven modifications is that the epistemic aim of finding natural kinds is compromised by non-epistemic considerations. In this framework, normative dimensions are thought to divert scientific inquiry from revealing real divisions of the world. For example, Muhammad Ali Khalidi contends that the pursuit of non-epistemic purposes is a threat to social kinds being natural kinds (Khalidi 2013). Thus, he and others argued that the task of disentangling the epistemic from the non-epistemic aspects of a category is critical in scientific classification. Khalidi’s suggestion that researchers “be guided by epistemic purposes and not be deflected by non-epistemic interests” is clearly in line with the VFI. The role of values in science has been discussed among philosophers of science for the last several decades. Much of that debate has focused on the legitimate roles of non-epistemic values in theory choice. There has been little examination of the role of non-epistemic values in scientific classification. By analyzing the case of “infantile autism,” I aim to suggest a new argument against the VFI: one showing that non-epistemic considerations can contribute to the epistemic success of scientific categories. The early history of demarcating infantile autism shows that value-laden considerations can positively contribute to the production of scientific knowledge. During the mid-twentieth century, the psychogenic view that the lack of parental warmth was the main cause of the disorder was widely supported by the professional community. The situation was reversed when neurobiological hypotheses were proposed as alternatives (Rimland 1964). This reversal was initiated by a researcher’s commitment to promoting the well-being of autistic children and their families. Propelled by value-laden considerations, the search for a neurobiological basis of autism opened up a new research area and contributed to reclassifying autism as a neurodevelopmental disorder. This case study shows that, far from being epistemically detrimental, value-driven research in scientific classification can be epistemically beneficial and can facilitate the process of knowledge production.
Philosophy of Science01:12 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:12:00 UTC - 2018/11/03 06:59:00 UTC
Franklin Jacoby (University of Edinburgh) Certain kinds of pluralism suggest scientific practice is organised into discrete units with autonomy. Autonomy means one scientific unit of practice cannot be criticised, rejected, or vindicated by another unit. Versions of this view are defended by Chang (2015, 2012). If scientific practice is partitioned in this way, we might ask: how do we or should we determine where the boundaries of a particular practice lie? Chang suggests we do so by grouping scientists by their aims and the activities they perform toward achieving those aims. Longino (2006), along similar lines, suggests that the questions investigated by the scientists determine what unit of scientific practice they are members of. The notion that aims, goals, or questions should play a role here is more widespread than just pragmatism. Kusch (2018, 66) also suggests that we use goals (along with beliefs) to distinguish separate epistemic systems that are equally legitimate to the extent that they “cannot be ranked.” These various views suggest that goals play an important and structuring role in scientific practice. Some are also, consequently, committed to a strong form of scientific contingency. This discussion raises the following question: what is the role of goals in science? I argue that accounts that rely heavily on goals face several issues. I discuss those issues and present an alternative, drawing on literature from the philosophy of language and action, particularly Dummett (1993) and Bratman (1992). I argue that the responsiveness scientists exhibit toward one another’s work should be the means by which we partition scientific practices. Responsiveness suggests a scientist is influenced by and in turn influences another’s work. There are three main upshots of this account: first, it will help clarify when scientists can or should disagree and whether those disagreements are disagreements that can be resolved. Second, this approach places less emphasis on subjective aspects of science and more emphasis on the practice of science, which consequently illuminates how science is structured. Third, partitioning practices based on responsiveness instead of goals suggests that science is not as contingent as other views imply. It is worth pointing out that this poster is broadly sympathetic to the pluralist and relativist approaches discussed and also shares the general view Kusch’s relativism takes toward disagreement, i.e., that disagreements are not as deep and fundamental as some accounts suggest.
73. A Challenge to Seepage in the Global Warming 'Hiatus'
Philosophy of Science01:13 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:13:00 UTC - 2018/11/03 06:59:00 UTC
Ryan O'Loughlin (Indiana University, Bloomington) I question whether climate scientists actually adopted the ‘hiatus’ framing, as claimed by Stephan Lewandowsky et al. (2015), despite their use of the word and the large degree of attention they paid to the 1998-2012 time period. Much has been written about the apparent slowdown in warming—sometimes referred to as a ‘hiatus’—of global mean surface temperature (GMST) between 1998 and 2012. Climate denialists drew attention to this ‘hiatus’ starting around 2006 and climate scientists began publishing on the topic around 2013. Since the issue of human-caused global warming wasn’t in question for climate scientists, it is worth asking why they paid any attention to the ‘hiatus.’ Lewandowsky et al. focus on the fact that climate scientists adopted the ‘hiatus’ framing that was initially formulated by denialists, and further claim that “seepage”—that is, “infiltration and influence of…essentially non-scientific claims into scientific work” — has occurred (2015, 2). They argue that as a result of denialist discourse, “scientists came to doubt their own conclusions, and felt compelled to do more work to further strengthen them, even if this meant discarding previously accepted standards of statistical practice” (2015, 9). While it’s clear that, in some sense, seepage has occurred, there are good reasons to be skeptical that “scientists came to doubt their own conclusions.” A close look at scientific publications discussing the ‘hiatus’ reveals a host of legitimate scientific reasons to focus on this time period despite its lack of statistical significance. For example, scientists sought ways to reconcile global energy levels with GMST trends using ocean warming. In all of this work, there is no doubt regarding the reality of global climate warming; thus, the scientists did not doubt their own conclusions, I argue. The seepage analysis, however, does demonstrate that scientists notably focused on the hiatus. Generally, and especially in the Intergovernmental Panel on Climate Change (IPCC), they focused on the hiatus, partially in an attempt to present the most “objective” science possible, in response to climategate (Medhaug et al. 2017; Lloyd and Schweizer 2014). Thus, these issues related primarily to communication of science to the public, rather than actual scientific research that was underway. The seepage analysis fails to distinguish between these two activities, and since denialism impacts both the work of climate scientists and how they present it to the public, we must be careful to attend to this distinction. More generally, my analysis reveals that we—that is, philosophers of science and/or anyone interested in the relationship between values, science, the public, and politics—must be careful to distinguish between scientific research/publication (scientific “work”) and scientific communication insofar as these can be separately assessed.
74. Not All the Same – An Evolutionary Perspective on Diversity in Economic Decision-Making
Philosophy of Science01:14 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:14:00 UTC - 2018/11/03 06:59:00 UTC
Armin Schulz It is increasingly widely accepted that there are systematic differences in the ways in which humans make economic decisions. So, for example, it has been found that there are gender differences in risk aversion, as well as cultural differences in sharing norms. What is not yet clear is whether these differences are fundamental or merely evoked: are they just a product of differences in the decision situations faced by different humans, or are they a product of differences in the fundamental psychological makeup of different humans? Answering this question matters, as it has implications for how universally valid economic theories are. Importantly, while it is clear that addressing this issue requires comparative empirical studies concerning economic decision-making, this does not mean that an appeal to evolutionary biology could not be useful here as well. In particular, given the complexities involved in studying the mechanisms underlying economic choices, an appeal to evolutionary biology can provide useful further evidence concerning the fundamentality of the diversity in economic decision-making. The aim of this paper is to analyze the promises and challenges of this appeal to evolutionary biology further. The paper begins by laying out what is—and what is not—methodologically required to underwrite evidentially compelling evolutionary arguments for or against the fundamentality of human diversity in economic decision-making. It shows that these arguments need to be both internally coherent—plausible in their own right—and externally coherent—consistent with other findings in the literature. However, it also shows that these arguments do not need to provide a full account of the relevant issues—a partial account is all that is needed to make a useful contribution to the literature. The paper then applies this methodological standard to two specific arguments for or against the fundamentality of the two forms of diversity in human economic decision-making mentioned above. It first considers the argument that since males and females have different “minimal parental investment,” gender differences in risk aversion are fundamental. Here, the paper shows that this argument is not internally compelling, as it fails to analyze mating decisions as a strategic interaction involving two different time horizons—in which levels of minimal parental investment are merely an input into the analysis, and not its final arbiter. Second, the paper considers the argument that since it is adaptive for all humans to share in line with the same biological principles (such as Hamilton’s rule), cultural differences in sharing norms must be merely “evoked.” Here, the paper shows that this argument fails to be externally coherent, as there is much evidence that operationalizing the relevant biological principles is complex and requires information about the locally prevailing conditions (e.g., who is kin with whom). In turn, this makes it more plausible that sharing norms are culturally learned—and thus fundamentally different. In all, the paper shows that while the appeal to evolutionary biology can say something about the nature of human diversity in economic decision-making, this appeal needs to be handled carefully.
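As background for the principle mentioned parenthetically above, Hamilton's rule is standardly stated as below; the notation is mine and is included only because applying the rule requires exactly the kind of local information (e.g., who is kin with whom) on which the argument turns.
```latex
% Hamilton's rule (standard statement, given as background):
% an altruistic trait is favoured by selection when
\[
  r\,b \;>\; c ,
\]
% where r is the genetic relatedness between actor and recipient, b is the
% fitness benefit to the recipient, and c is the fitness cost to the actor.
```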
75. Toward a Taxonomy of Value Judgments in Health Economics Modelling
Philosophy of Science01:15 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:15:00 UTC - 2018/11/03 06:59:00 UTC
Stephanie Harvard (Simon Fraser University) The values-in-science literature has generated much debate, as well as many examples of value judgments that scientists make in the course of their work. Several examples have emerged in climate change and other types of simulation modelling, but examples are lacking from health economics, which also relies heavily on modelling. A better understanding of value judgments in the modelling process would assist health economists involved in 'patient-oriented' research initiatives, which have the goals of 1) engaging patients throughout the research process, and 2) providing patients with meaningful opportunities to influence research decision-making (Canadian Institutes of Health Research 2011). This work aims to develop a taxonomy of value judgments in health economics modelling, building on Biddle and Kukla's (2017) concept and 'geography' of epistemic risk. First, I examine Biddle and Kukla's concept of 'alethic' risk, and argue that there are additional risks that can plague beliefs "once we have them" (p.218), beyond the singular risk that the beliefs will be mistaken (here, I take the term 'belief' to be interchangeable with 'knowledge', as what is sought in research). These include the risk that the knowledge will be true but misleading (e.g., as occurs when a well-researched intervention is deemed cost-effective, but an overlooked intervention is far more cost-effective); the risk that the knowledge will be true but futile (e.g., as occurs when an intervention is deemed not cost-effective but this does not affect decision-makers' commitment to funding it or patients' commitment to seeking it); and the risk that the knowledge will be unwelcome (e.g., as occurs when the knowledge reinforces problematic concepts, including when health economics studies represent benefits of an intervention that matter to health providers and/or decision-makers and omit benefits that matter to patients). I then expand and re-work the list of 'phronetic' risks outlined by Biddle and Kukla (2017) using examples from health economics. As a means to evaluate my revision of Biddle and Kukla's (2017) list of epistemic risks, I then review four major challenges to the Value-Free Ideal (VFI) and examine whether they bring to light any additional types of epistemic risk not yet captured. I argue that the re-worked list allows for all four challenges to the VFI to be grouped according to the epistemic risks that they highlight, and, furthermore, that grouping the challenges to the VFI in this way captures their importance yet directs attention away from the task of undermining the VFI and toward the task of developing a taxonomy of value judgments in health economics modelling. Finally, I present a case that all epistemic risk carries with it an ethical risk, thus arguing in favour of representing the two types of risks as fully overlapping. I conclude by considering implications for patient-oriented health economics research.
76. The Role of Values in Measurement: The Case of Brain-Computer Interfaces and the Illiteracy Metric
Philosophy of Science01:16 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:16:00 UTC - 2018/11/03 06:59:00 UTC
Marion Boulicault (MIT) Brain-computer interfaces (BCIs) are implantable devices that allow for computer-mediated interaction between a person’s brain activity and their environment. Examples include devices aimed at controlling prosthetic limbs and devices aimed at managing epilepsy. They work by analyzing brain activity (e.g., to detect an intention to move, or the beginnings of a seizure) and then translating that activity into action (e.g., the movement of a prosthetic arm, or neurostimulation to prevent a seizure).
Given cultural connections between the brain and identity, as well as worries about privacy and ‘neurohacking’ (to name just a few examples), significant attention has been rightly paid to the ethics of BCI use. However, in this poster, I want to raise a question that I contend has yet to receive sufficient attention: what are the philosophical, ethical and political implications of the way we measure BCIs?
There exists a subset of the population who, despite training, are unable to use BCIs. This failure is usually attributed to problematic translation between brain activity and action, e.g., the BCI cannot ‘read’ the brain signals produced by the individual, usually for unknown reasons. BCI researchers call this phenomenon ‘BCI illiteracy’ and report that it affects 15–30% of BCI users (Allison and Neuper 2010; Vidaurre and Blankertz 2010; Thompson, draft). I argue that the use of ‘BCI illiteracy’ as a metric for success encodes a problematic model of human-technology interaction. In particular, it places responsibility for the ‘failure’ on the individual BCI user, as opposed to the technological system. This can have negative implications for how the BCI user perceives herself in relation to the technology, for how neuroscientists and engineers understand and engage in their work, and thus for how the technology itself develops. As such, the case of the BCI illiteracy metric illustrates how the instruments and practices of measurement serve as sites for the interaction of science, technology and values.
77. Machine Learning, Theory Choice, and Non-Epistemic Values
Philosophy of Science01:17 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:17:00 UTC - 2018/11/03 06:59:00 UTC
Ravit Dotan (University of California, Berkeley) I argue that non-epistemic values are essential to theory choice, using a theorem from machine learning theory called the No Free Lunch theorem (NFL). Much of the current discussion about the influence of non-epistemic values on empirical reasoning is concerned with illustrating how it happens in practice. Often, the examples used to illustrate the claims are drawn from politically loaded or practical areas of science, such as social science, biology, and environmental studies. This leaves advocates of the claim that non-epistemic values are essential to assessments of hypotheses vulnerable to two objections. First, if non-epistemic factors happen to influence science only in specific cases, perhaps this only shows that scientists are sometimes imperfect; it doesn’t seem to show that non-epistemic values are essential to science itself. Second, if the specific cases involve sciences with obvious practical or political implications such as social science or environmental studies, then one might object that non-epistemic values are only significant in practical or politically loaded areas and are irrelevant in more theoretical areas.
To the extent that machine learning is an attempt to formalize inductive reasoning, results from machine learning are general. They apply to all areas of science, and, beyond that, to all areas of inductive reasoning. The NFL is an impossibility theorem that applies to all learning algorithms. I argue that it supports the view that all principled ways to conduct theory choice involve non-epistemic values. If my argument holds, then it helps to defend the view that non-epistemic values are essential to inductive reasoning from the objections mentioned in the previous paragraph. That is, my argument is meant to show that the influence of non-epistemic values on assessment of hypotheses is: (a) not (solely) due to psychological inclinations of human reasoners; and (b) not special to practical or politically loaded areas of research, but rather is a general and essential characteristic of all empirical disciplines and all areas of inductive reasoning. In broad strokes, my main argument is as follows. I understand epistemic values to be heuristics for choice that are presumed to make it more likely that the chosen theory is true. Learning algorithms are ways to induce general hypotheses from a given dataset. As such, they are procedures for theory choice – they are ways to choose the one hypothesis that best fits the data. The NFL determines that all learning algorithms, i.e., all ways to conduct theory choice, have the same average performance when averaging over all possible datasets. This entails that, if we don’t restrict the possible datasets, all ways of choosing between hypotheses have the same expected performance. That is, they are equally likely to produce true hypotheses. This includes all ways to choose between hypotheses, and in particular traditional epistemic heuristics like simplicity and even random guessing. When averaging over all possible data sets, all choice procedures are equally non-epistemic. Moreover, I argue that theory choice essentially involves non-epistemic values even when we restrict the range of admissible datasets, as we do in real life.
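For readers unfamiliar with the result, a schematic statement of the No Free Lunch theorem is given below. The formulation follows the standard Wolpert-style presentation; the notation is mine and is offered only as background, not as the exact statement relied on in the poster.
```latex
% Schematic No Free Lunch statement for supervised learning (background only):
% for any two learning algorithms A_1 and A_2, summing the expected
% off-training-set error over all possible target functions f gives the same
% value,
\[
  \sum_{f} E\!\left[\mathrm{error} \mid f, m, A_1\right]
  \;=\;
  \sum_{f} E\!\left[\mathrm{error} \mid f, m, A_2\right],
\]
% where m is the size of the training sample. Under a uniform prior over
% targets, no choice procedure (including a preference for simple hypotheses)
% does better on average than random guessing.
```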
78. Contributions of Women to 20th-Century Philosophy of Science
Philosophy of Science01:18 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:18:00 UTC - 2018/11/03 06:59:00 UTC
Daniel Hicks (University of California, Davis), Evelyn Brister (Rochester Institute of Technology) A long tradition in feminist historiography of science has focused on uncovering the lost and obscured contributions of women scientists. For example, Katherine Brading's work has raised the status of Émilie du Châtelet from a mere translator of Newton to the author of an important conservation law, and the Raising Horizons project has highlighted the role of 19th century and contemporary women in building the institutions of the earth sciences (archaeology, geology, and paleontology). Such projects have shown that, in many fields of scientific research, women have always been significant contributors, establishing important institutions and making major discoveries. But, too often, their work has been credited to mentors, spouses, and siblings, passed on as unattributed “common knowledge,” or simply forgotten. Taking inspiration from these historiographical projects, we present an overview of contributions of women to 20th century and contemporary philosophy of science. Using the CrossRef publication metadatabase and webscraping techniques, we have constructed a dataset of 38,000 philosophy of science publications, including articles from twenty-eight journals and chapters from three book series (Boston Studies in Philosophy of Science, Western Ontario Studies in Philosophy of Science, and Minnesota Studies in Philosophy of Science). Using automated methods and manual corrections, we coded these for author gender, and applied text mining methods to classify articles by subspecialty (e.g., philosophy of biology; philosophy of physics). Our dataset is publicly available at [link redacted for review]. We analyze the participation of women in the philosophy of science over time and according to disciplinary specialty. This allows us to identify areas of philosophy of science with more women authors and in which women made important early contributions, and we compare our results with existing estimates of women’s participation in philosophy more generally. The project also highlights particular women philosophers of science whose influence may be underestimated. Finally, this poster will present possible future uses for this database, which we intend to release openly. The database lends itself to supplementation with other sources of data, including citation data and affiliation data. Tracking authors’ affiliations, for instance, could enable us to examine the institutional arrangement of women in 20th century philosophy of science: were women clustered in a few departments, or more isolated? Is there any correlation between productivity and clustering/isolation with other women philosophers of science? We will also discuss the strengths and limitations of these methods.
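As a rough indication of how such a dataset can be assembled, the sketch below queries the public CrossRef REST API for article metadata from a single journal. The ISSN shown, the number of records requested, and the fields extracted are placeholder choices of mine; the authors' actual pipeline, including the gender coding and text mining steps, is not reproduced here.
```python
# Hypothetical sketch (not the authors' code): pull basic article metadata for
# one journal from the public CrossRef REST API. The ISSN and record count are
# placeholders; gender coding and subspecialty classification are not shown.
import requests

def fetch_articles(issn: str, rows: int = 20):
    """Yield (year, title, author given names) for works recorded for one journal."""
    url = f"https://api.crossref.org/journals/{issn}/works"
    resp = requests.get(url, params={"rows": rows}, timeout=30)
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        titles = item.get("title", [])
        title = titles[0] if titles else ""
        year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
        given_names = [a.get("given", "") for a in item.get("author", [])]
        yield year, title, given_names

# Example: Philosophy of Science (print ISSN 0031-8248, used here illustratively)
for year, title, authors in fetch_articles("0031-8248", rows=5):
    print(year, "|", title, "|", ", ".join(authors))
```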
79. Challenges in Integrating Western Science and Indigenous Knowledge(s)
Philosophy of Science01:19 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:19:00 UTC - 2018/11/03 06:59:00 UTC
Megan Delehanty (University of Calgary) Integrating different types of evidence from various techniques and disciplinary perspectives is often a significant epistemic challenge. In general, the greater the overlap between the accepted ontologies, methods, and standards for evidence, the more easily integration can be achieved. Thus, for instance, if a difference in approach can be understood simply in terms of a focus on different levels or properties associated with the phenomenon of interest, methods for integration are likely to succeed. However, if one account of a phenomenon makes use of properties which are not recognized by the account we seek to integrate with it, it is often unclear how to proceed. In this project, I look at the challenges to integrating Western science and indigenous knowledge(s). I identify four significant challenges to this work. The first challenge is to understand what integration means in this context. Particularly in cases such as this, where there are likely to exist significant power differentials, clearly differentiating methods of integration from tools of knowledge assimilation becomes particularly important. The second challenge is ontological and derives from the presumed inseparability of the empirical and the spiritual in most indigenous epistemologies. Here, careful attention is needed to clarify the nature and role of spiritual components of the belief system. On some interpretations of the “spiritual”, such hybrid epistemologies may present a less significant difference from Western science than we might initially believe. On other interpretations, however, the claim of inseparability requires further analysis to determine the degree to which integration may or may not be impeded. The third challenge is the failure of most literature in this area to represent science in a way that is recognizable to contemporary philosophers of science. Too often, the picture presented is one that the logical positivists would have endorsed, but that ignores over half a century of more recent work, most notably on anti-reductionism and on science and values. This creates an unnecessary obstacle to integration by presenting Western science as incompatible with systems of knowledge that take a holistic approach and that connect empirical and moral principles. Finally, the fourth challenge is that some concepts that play key roles in most indigenous epistemologies – such as land and place – rely on particular sorts of lived experience and, thus, present an obstacle to those without the appropriate hermeneutical resources. Together with the first and second challenges, this amplifies the degree to which various forms of epistemic injustice are at play in integration attempts.
80. Sex Essentialism in Neuroimaging Research on Human Sex/Gender Differences
Philosophy of Science01:20 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:20:00 UTC - 2018/11/03 06:59:00 UTC
Vanessa Bentley (University of Alabama, Birmingham) Sex essentialism, a form of biological essentialism, is the view that the two sexes are essentially distinct; males and females have different biological essences that are a result of their sex. Sex essentialism as an assumption imposes methodological and theoretical limitations. The assumption is socially and ethically problematic because it naturalizes sex/gender differences and can be used to justify the oppression of women. Although the problem of sex essentialism in general is recognized by feminist critiques of science (Jordan-Young and Rumiati 2012, Fine 2013, Rippon et al. 2014), I focus on how sex essentialism affects experimental practice. I investigate two case studies in the neuroimaging of sex/gender differences and find that sex essentialism is pervasive. The first case study, comprising 45 articles, is on structural differences in the corpus callosum. The second case study, comprising 14 articles, is on functional activation differences in the mental rotation task. I find that, although many articles report differences, few articles find the same differences and most articles contradict each other. Thus, there is no evidence for consistent sex/gender differences in the size or shape of the corpus callosum or in the activation associated with mental rotation processing. However, despite the lack of consensus across studies, researchers treat sex/gender differences as empirically verified. Additionally, I find that researchers: 1) fail to consider evidence that contradicts their sex-essentialist theory; 2) fail to distinguish sex and gender, giving the impression that all differences are due to sex factors (biology, hormones, genetics, “nature”); 3) assume their results generalize across time and cultures; and 4) assume that experience doesn’t affect brain structure and function. Throughout, it is unclear if researchers explicitly avow sex essentialism or if they are ignorant of the assumption. I suggest a new framework for cognitive neuroscience that is better founded epistemologically and is more socially and morally responsible. This framework connects feminist standpoint empiricism (Intemann 2010) to the practice of cognitive neuroimaging. This includes: initiating inquiry from the perspective of women’s lives, reflecting on the differences between men’s lives and women’s lives, and incorporating the interests of women in the research.
81. The Holobiont-Self: An Ontological Heterogeneity Perspective on the Immune-Self
Philosophy of Science01:21 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:21:00 UTC - 2018/11/03 06:59:00 UTC
Tamar Schneider (University of California, Davis) In immunology, the concept of the self/non-self frames the immune system as a mechanism that discriminates harmful (i.e., pathogenic) from non-harmful (i.e., non-pathogenic) elements in the body. Consequently, the immune-self conceptualizes the role of the immune system as a physiological mechanism establishing the boundary between foreign and domestic elements (Tauber 2004). This framework falls well within the war metaphor used in immunology, which describes the immune system as a 'surveillance,' 'detection,' and 'protection' system against 'invading' microbes. However, the immune system is, by itself, an elusive and complex system with many different, sometimes conflicting, functions (e.g., chronic inflammation, allergies, autoimmune deficiency). Furthermore, the immune system does not develop independently: its behavior and functions cannot be studied and understood separately from its microbial context (Chiu & Eberl 2016). Thus, when looking at the organism, the conceptualization of the immune system is based on a clear physiological definition of the organism's self. However, when thinking about the holobiont, such a self is the result of an interchangeable assembly of many members rather than a stable one. In the poster, I examine new theories in immunology and their suggested solution to the problem of the immune-self's elusive discriminatory function (Pradeu & Vivier 2016; Chiu & Eberl 2016). In my analysis, I use the perspective of two contrasting epistemic virtues in science: simplicity in its ontological meaning and ontological heterogeneity (Longino 2008). I then suggest an alternative framework to the immune-self, moving from simplicity and a single causal direction to ontological heterogeneity and mutuality of interactions, by viewing the immune-self as the holobiont-self. Through an examination of the ontological perspective of the new theories in immunology, I show that, although their view centers on the interactions between immune cells and microbial cells, the immune system remains the only causally effective entity. I then argue that, regarding immunity and the holobiont, the ontological perspective also needs to consider the microbial community as a causally effective entity and as part of immunity. By adopting the virtue of ontological heterogeneity, the framework of the holobiont-self includes the microbes as part of the immune-self, which inevitably changes the perception of the self to a relational self shaped by its interactions. This change implies a reconsideration of the war metaphor as well.
82. Searching for Culture: Social Construction Across Species
Philosophy of Science01:22 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:22:00 UTC - 2018/11/03 06:59:00 UTC
Rebecca Ring (York University) Do any non-human animals have culture? To find out, some scientists have attempted to isolate behaviours or information that are caused and spread by means other than genetic inheritance or ecological factors. However, cultural, genetic and ecological factors are not always isolatable since there is an entangled interplay between them, as in gene-culture co-evolution. The problem is exacerbated by disagreement on what counts as cultural. For example, some define culture in terms of behaviour patterns or information shared within communities via social transmission (e.g. Whitehead and Rendell 2015). Others add cognitive requirements, such as theory-of-mind, which some argue is uniquely human (e.g. Tomasello et al. 2005; Galef 2001). Still others define culture in terms of its human expressions, such as religious rituals, ethnic markers and politics (Hill 2009), thereby making it uniquely human. In ordinary use, culture is a vague term. For example, what constitutes Canadian culture? Does it exist? I argue that the definitional problem of culture stems from its socially constructed ‘nature’. Cultures are real social kinds, which are socially constructed ideas or objects that depend on social practices for their existence. Importantly, their etiology does not make them any less real, or preclude them from causal processes. Such phenomena can be grouped together as ‘kinds’ according to their causal or constitutive properties or processes, allowing reliable predictions and explanatory power. The facts of the matter for social kinds are determined (in part) by social factors, rather than (only) physical, biological, or psychological factors. I draw on feminist and critical theory on race and gender to make my case that culture is grounded in systems of social relationship. Some feminist scholars characterize gender as the social meaning of sex (Haslanger 2012). I argue that culture is the social meaning of normative practices. If this is the case, animal culture need not be precluded. Animals need not have the concept ‘culture’ to have culture, any more than humans need the concept ‘gender’ to have gender. If researchers frame questions of animal culture with a focus on social relationality, then they will have a clearer path to recognizing it where it exists. As a case study, I will show how killer whales are cultural beings with socially constructed group-specific norms for communication, diet, foraging, social roles and interactions. Bodies of knowledge, experience and tradition are constructed, embedded and transmitted with meaning throughout these social normative cultural communities.
83. PAC Learning and Occam’s Razor: Probably Approximately Incorrect
Philosophy of Science01:23 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:23:00 UTC - 2018/11/03 06:59:00 UTC
Daniel Herrmann (University of California, Irvine) There are justifications in the computational learning theory and machine learning literature for taking Occam's Razor as an epistemic principle. I clarify and argue against one widely used justification of the epistemic value of Occam's Razor---that from probably approximately correct (PAC) learning. This result, first proved by Blumer, Ehrenfeucht, Haussler, and Warmuth in their 1987 paper (Blumer et al., 1987), remains a widely reproduced and cited result. I present the theorem given to justify Occam's Razor in the PAC learning literature. I then motivate my work by stating and proving a similar theorem that, by the same reasoning, would justify Anti-Occam's Razor: the principle that we should favour complex hypotheses over simple ones. I discuss what both theorems actually say, why they do not conflict, and suggest an interpretation different from the one standard in the literature. Rather than providing a justification for Occam's Razor, what the theorem shows is that we are able to learn when we can restrict our hypothesis space to one that grows polynomially with the input string while remaining guaranteed to output an approximately correct hypothesis. This requires that each hypothesis space in the family exhibits a certain similarity property, and it is this requirement---an unstated assumption---that is doing most of the work. Finally, I suggest some ways to move forward, highlighting the reasons that philosophers should care about PAC learning, and computational complexity in general.
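For orientation, the block below is a minimal sketch of the standard finite-class sample-complexity bound that usually underwrites the PAC argument for Occam's Razor. It is a textbook statement added for illustration, not a formula taken from the poster or from Blumer et al. (1987), whose Occam theorem is phrased in terms of hypothesis size rather than raw class cardinality.

```latex
% Illustrative finite-class PAC bound (consistent-learner case):
% if a learner returns any hypothesis h in a finite class H that is
% consistent with m i.i.d. labelled examples, then
\Pr\big[\mathrm{err}(h) > \varepsilon\big]
  \;\le\; |H|\,(1-\varepsilon)^{m} \;\le\; \delta
\quad\text{whenever}\quad
m \;\ge\; \frac{1}{\varepsilon}\Big(\ln|H| + \ln\frac{1}{\delta}\Big).
% Smaller ("simpler") hypothesis classes need fewer samples, which is the
% sense in which such bounds are usually read as favouring simplicity.
```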
84. Crowdsourcing Family Health History: Epistemic Virtues and Risks
Philosophy of Science01:24 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:24:00 UTC - 2018/11/03 06:59:00 UTC
Eleanor Gilmore-Szott (University of Utah) Even in the age of precision medicine, an accurate family health history (FHx) remains a crucial tool in research on heritable diseases and for clinicians in assessing risk and treatment options for patients. However, most individuals lack crucial details about the health history of other family members, limiting their ability to provide this valuable information (Welch et al. 2015). When prompted for your FHx, you likely know some details, but recognize you are missing others. Furthermore, you probably know which of your family members would know the details you are missing. This collective wisdom could be compiled to provide a robust FHx. Crowdsourcing is an epistemic strategy best known as the methodology employed by Wikipedia. Recently, a number of online services have begun to develop digital tools that enable family members to use crowdsourcing to compile their individual knowledge into a detailed FHx. These tools provide validation for the epistemic status of patient testimony and address the practical need for improved information. However, crowdsourcing in this context raises a number of epistemological concerns. Namely, what kind of information is being produced and how should it be evaluated as evidence in research and medical care? In answering these two questions, it is useful to explore the parallels between crowdsourcing FHx and Wikipedia, as Wikipedia is a successful example of crowdsourcing (Fallis 2008). The epistemic success of Wikipedia bodes well for the application of crowdsourcing to FHx; however, there are a number of important differences that require further consideration. First, the social circumstances that make Wikipedia a successful epistemological enterprise are not equivalent to those of family groups. Second, as patients can misunderstand their own health history, there may not be clear experts. Third, the content produced is neither pure testimony, nor is it a primary source, thereby limiting one’s ability to verify the information presented on these services. All of this calls into question our ability to trust the information produced under these circumstances (Magnus 2009). This poster will apply tools from the field of philosophy of science to assess the use of crowdsourcing for the collection of family health history, and highlight a range of epistemic implications. Despite a number of caveats, the use of crowdsourcing tools will likely put us in a better epistemic position than we would be otherwise. Accordingly, the marks against this method are outweighed by the potential for good.
Philosophy of Science01:25 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:25:00 UTC - 2018/11/03 06:59:00 UTC
Zvi Biener (University of Cincinnati) Richard Feynman referred to universal gravitation as “the greatest generalization made by the human mind.” Not surprisingly, that generalization has been of perennial interest to philosophers of science, from William Whewell to recent authors in Philosophy of Science. Their concern has been with the status of Newtonian induction, particularly with Newton’s justification of induction in Rule 3 of his Rules of Philosophizing. I argue that we have improperly read post-Humean worries into Newton’s Rule 3. Although the resulting analyses have been illuminating, we have departed significantly from the historical record and from Newton’s own view of induction. By comparing the 1st and 2nd editions of the Principia, I show that Rule 3 was not intended as a defense of induction, but as a direct response to Christiaan Huygens. I also offer a deflationary view of universality in Newton, one that puts universal induction on par with any other, perhaps severely limited, induction. Starting with the rule’s genesis is important. The rule first appeared only in the Principia’s second edition (1713), where it replaced the first edition’s (1687) Hypothesis 3, an alchemically-tinged claim about the mutual transformation of all bodies. But Rule 3 doesn’t mention transformation. Rather, it focuses on the invariable qualities of matter. The tension between transmutation and invariability has caused significant interpretive problems. Some have speculated that Newton abandoned Hypothesis 3 because he came to realize it conflicted with atomism or because he adopted Locke’s primary/secondary distinction. But the genesis of Rule 3 betrays a simpler story. It shows that Newton was not concerned with tempering transmutation or promoting Lockeanism, but with Huygens’s view of gravitation in Discours de la cause de la pesanteur (1690). The Huygensian context explains some of the rule’s most curious features, such as the discussion of hardness and indivisibility (properties that play no role in the Principia), and Newton’s odd claim (after stressing how well-founded corporeal impenetrability is) that “the argument from phenomena will be even stronger for universal gravity than for… impenetrability.” Most importantly, the Huygensian context sheds light on Newton’s concept of universality. This concept has also been the subject of debate, since Newton went out of his way (disingenuously, to some) to assert that gravity’s universality did not entail that it was a primary or essential property. I offer a historically sensitive analysis of Newton’s adjectival and adverbial forms of universus that shows “universality” was a more deflationary concept. Its proper home was within discussions of simple induction from instances, and it was meant to indicate nothing more than the applicability of some predicate to all members of a certain class, even a highly restricted one. Taken together, these considerations entail that Newton didn’t see induction as a methodological problem. Rather, he used induction’s non-problematic status to broaden, against Huygens, the range of qualities that could be employed in physical explanations. The analysis shows the benefit of detailed contextual studies in philosophy of science. It also bolsters the case made by material theorists of induction like John Norton.
86. When Glaciers Prophesy: Building a Case for Predictive Historical Science
Philosophy of Science01:26 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:26:00 UTC - 2018/11/03 06:59:00 UTC
Meghan Page (Loyola University Maryland) Models of “good science” often appeal to successful predictions and observable empirical results. This poses a problem for historical sciences, such as archaeology, evolutionary biology, and geology, that investigate historical events. It is difficult to replicate evolutionary stories in a laboratory, and the past is no longer accessible for direct observation (e.g., we can’t watch dinosaurs eat to determine their palate). These structural differences between historical science and experimental science have led to doubts about whether claims about the past, even those made by experts, can be successfully verified by science. Carol Cleland offers a powerful defense of historical science by appealing to what David Lewis describes as “the asymmetry of overdetermination.” The asymmetry of overdetermination is a causal asymmetry---an event is usually underdetermined by any particular cause (e.g., tossing a baseball towards a window is not a guarantee that the window will break) but causes are epistemically overdetermined by their effects (if the baseball does break the window, it will leave a host of traces to prove that it did). The widespread traces left by events on the world act as a breadcrumb trail---by uncovering enough of these traces, scientists navigate a path to an explanation through the search for a common cause. According to Cleland, both models of science, experimental and historical, are justified by the asymmetry of overdetermination. Because causes do not uniquely determine their effects, experimental scientists repeatedly test their hypotheses to isolate relationships between variables; scientists must verify they are tracking regularities and not accidents, and to do this they must isolate individual causal relationships from the complex web of total causes that converge at any particular event. In contrast, historical scientists trace a specific path from effect to cause. Given that any actual event leaves a great number of effects, scientists can rely on these traces to distinguish between competing causal explanations. While Cleland’s picture is compelling and accommodates many historical research programs, it fails to account for the specific role of historical science in making claims about the future. This is contrary to practice, considering, for example, that some of the best evidence we have concerning the relationship between CO2 emissions and abrupt global climate change comes from historical sciences such as glaciology and paleoclimatology. In this poster, I present a case study concerning the introduction and verification of Wallace Broecker’s hypothesis that there are alternating modes of operation in the meridional overturning circulation. Broecker’s historical work interpreting ice core data led him to hypothesize that there are differing modes of circulation in oceanic deep currents that, if switched, can lead to abrupt changes in climate. A number of predictions that follow from Broecker’s hypothesis (some historical, some not) have proven accurate, offering support for his claim. I use this case as a reductio against Cleland’s view. If Cleland is right, historical science is only justified in making claims about the past. But historical science often offers successful predictions about both regularities and future events. Therefore, Cleland’s view is problematic.
Philosophy of Science01:27 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:27:00 UTC - 2018/11/03 06:59:00 UTC
Laura Cupples (Washington State University) I examine the dynamics of measure development using two case studies: temperature, and quality of life. I argue, following Bas van Fraassen (2008) and Leah McClimans (2010) that in each case these dynamics have a hermeneutic structure. Just as the hermeneutic circle represents an attempt to overcome epistemic circularity in the interpretation of a text, so too must epistemic circularity be overcome in measure development. Namely, we must establish a mathematical relationship between observable (e.g. volume) and unobservable variables (e.g. temperature), while the value of the unobservable variable remains unknown. I show that Gadamer’s (1991) philosophical hermeneutics are an effective lens through which to examine the development of the temperature standard as described by Hasok Chang (2004). First, the normative force of tradition found in Gadamer’s hermeneutics mirrors Chang’s “principle of respect”. Second, Gadamer argues that in order to interpret the meaning of a text, it must be applied in a concrete context. Similarly, in measurement we must bridge the gap between the abstract theory and concrete practice through the operationalization of the measure. Finally, Gadamer’s emphasis on coherence between part and whole is congenial to Chang’s justificatory philosophy of progressive coherentism. Despite similar grounding in hermeneutics, I note an important difference between measure development for temperature and for quality of life. Namely, while the meaning of temperature can be standardized, the meaning of quality of life cannot (McClimans 2010). A strategy of progressive coherentism, i.e., Chang’s epistemic iteration (2004), ultimately leads to a theory of heat and temperature, as well as to determinate values for temperature. The same cannot be said for quality of life. Quality of life is imperfectly understood, according to McClimans (2010). Asking genuine questions about its meaning will aid in interpretation, but just as in Gadamer’s hermeneutics, our horizons of meaning must remain open. This is because there are always new questions we might ask about quality of life in various contexts. The standardization of meaning for the temperature concept represents a limit to the analogy with hermeneutics, as Gadamer argues that the meaning of a text should remain open to new interpretations when encountered by new persons in new historical contexts. On the other hand, I argue that the indeterminacy we find in quality of life measurement is a result not only of an analogy with the hermeneutic task, but of full-fledged participation in it. Quality of life measures are texts authored by researchers and interpreted by respondents, each of whom brings his or her own experiential background to the encounter.
88. Evidence against Default Models in Comparative Psychology
Philosophy of Science01:28 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:28:00 UTC - 2018/11/03 06:59:00 UTC
Mike Dacey Experiments in comparative psychology typically aim to test a default model against an alternative. Morgan’s Canon dictates that researchers prefer models that posit the simplest processes. This is often interpreted by analogy to null hypothesis statistical testing (NHST): the simpler model should be the default (Andrews & Huss 2014). Morgan’s Canon has faced considerable criticism lately, and the two proposed replacements set up the central tension of this paper. One replacement, contextual null choice, accepts the general default model framing while choosing nulls/defaults case by case (Mikhalevich 2015, Mikhalevich, Powell, & Logan 2017). The other, evidentialism, rejects defaults altogether in favor of a more holistic inference to the best explanation (Sober 2005, Fitzpatrick 2008). I argue for a version of evidentialism over any view that retains the default model framing (even if one wishes to retain Morgan’s Canon in a weaker form). I do so by first undermining the analogy that supports the default model framing, then demonstrating that it has problematic effects. The analogy between default models and NHST fails to respect the difference between statistical hypotheses and substantive hypotheses. Statistical hypotheses specify a distribution of a certain feature (the thing to be measured); substantive hypotheses are models that motivate the statistical hypotheses and, potentially, explain them. The inferential gap between statistical and substantive hypotheses looms large in comparative psychology, because in comparative work it’s often the case that any model can be consistent with many possible specific experimental outcomes. In such cases, the failure of any statistical hypothesis does not entail the failure of any substantive hypothesis. The analogy that supports the default model framing does not hold: statistical nulls can be (and should be) chosen without treating any model as the default. Additionally, the default model framing has problematic effects, distorting the weighting of evidence, and systematically biasing experimental practices. One option mentioned above, contextual null choice, involves choosing nulls based on the available evidence. While this is a step in the right direction, it means that a model will gain the same privileged status of “null” whether it wins by an inch or a mile. This distorts the weighting of evidence. Choosing default models biases practice by supporting the ‘associative/cognitive’ distinction that has become problematic in the field (Buckner 2011, 2017, Dacey 2016, 2017). No model should be treated as a ‘default.’ Understanding how any particular experimental finding impacts the credibility we should lend to a particular model requires a more inclusive inference to the best explanation, as described by evidentialism.
89. Epidemiology at the Interface of Environment and Health: Three Strategies for Evidential Claims on the Exposome
Philosophy of Science01:29 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:29:00 UTC - 2018/11/03 06:59:00 UTC
Stefano Canali (Leibniz Universität Hannover) Background: Philosophy and Epidemiology Most of the philosophical scholarship on epidemiology has focused on causality, by looking at causal explanations and interpretations of epidemiological results in terms of causal claims (Broadbent, 2013). In this poster, I take a different approach and present an account of contemporary epidemiology based on the notion of evidential claims. Building on philosophical analyses of data practices in biology (Leonelli, 2016) and evidential reasoning in archaeology (Chapman and Wylie, 2016), I argue that focusing on the dynamics of evidential claims makes it possible to identify distinct approaches, methods, and types of evidential reasoning at work in epidemiology. Focus and Methodology I use a philosophy-of-science-in-practice approach and take a close look at ongoing environmental epidemiology that applies the “exposome approach” and investigates the totality of exposures faced by individuals (Russo and Vineis, 2017). I ground my analysis in qualitative interviews, participatory observation and discussions with researchers in the EXPOsOMICS project (http://exposomicsproject.eu). Evidential Claims on the Exposome: Three Strategies My account based on evidential claims makes it possible to identify three main strategies employed to generate evidential claims. Each strategy encompasses a distinct approach to the phenomena under study; a distinct kind of work that researchers carry out; and a distinct type of evidential claims. These three strategies are: 1. The macro strategy, which generates scoping claims that restrict the sample and provides an initial understanding of the phenomena under study; it can be seen in the initial selection of data from cohort studies. 2. The micro strategy, which is applied at various steps of research (omics analysis, geographical information systems and experimental studies) to collect data of significantly different types and generate evidential claims on structures at the microscopic level of investigation. 3. The association strategy, which uses evidence from the macro and micro strategies to generate evidential claims at the statistical level of associations between environmental exposures and outcomes of interest. Discussion I argue that distinguishing strategies for evidential claims yields significant insights. It makes it possible to unpack the epistemic issues and challenges that concern each strategy and, in turn, influence research done at a different stage. It gives a characterisation of the context of data practices in terms of evidential claims, which shows that much epidemiological research is not necessarily about causal claims, but nor should it be dismissed as merely producing ‘raw data’. In this way, it provides a new philosophical perspective on the epistemology and practice of epidemiological research at the interface of environment and health.
90. Epistemic and Pragmatic Reliability in Economics
Philosophy of Science01:30 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:30:00 UTC - 2018/11/03 06:59:00 UTC
Isaac Davis (Carnegie Mellon University) In epistemic reliability theory, verification and refutation are treated as success criteria for methods, rather than entailment relations between hypothesis and data (Kelly 2000). Utilizing formal learning theory, we can achieve deductive guarantees that a method will converge to the truth in the limit, even if deductive verification of hypotheses is impossible. This framework provides a formal characterization of underdetermination in scientific inquiry, and a basis for designing inference methods with deductively guaranteed truth-convergence and simplicity properties. In this paper, we investigate whether epistemic reliability criteria are applicable to economics, which methods satisfy these criteria, and how this can inform economic research decisions. We argue that, due to the nature of underdetermination inherent to economic research, the standard epistemic reliability framework does not apply to much of economics. However, in conjunction with a pragmatic research goal (e.g. predicting future phenomena, optimizing policy, or allocating resources), formal learning theory allows us to extract new, informative questions that admit a pragmatic reliability analysis, in the sense that we can obtain convergence-in-the-limit guarantees for achieving pragmatic goals. This method of extracting "coarser" questions from more "fine-grained" questions can be performed in a principled way, and the underlying procedure appears in different forms and different domains, such as the Adaptively Rational Learning framework in Wellen and Danks (2016) and the Causal Feature Learning framework in Chalupka et al. (2017). We show how this criterion can be defined and applied in economics, and how it relates to the epistemic equivalent. Finally, we demonstrate that, in a special case where the pragmatic goal is prediction of future phenomena, the pragmatic and epistemic notions of reliability coincide in an important way. This equivalence provides some justification for the common view of economics as being primarily concerned with prediction of future phenomena, rather than discovering "true" underlying causal mechanisms (e.g. Friedman 1953, McCloskey 1998).
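As background, the block below is a minimal sketch of the limiting-convergence success criterion from formal learning theory that the abstract appeals to; the notation is a standard textbook rendering (roughly in Kelly's sense), not a definition quoted from the poster.

```latex
% Decision in the limit (sketch): a method M, fed ever-longer initial
% segments w|_m of a data stream w compatible with background knowledge K,
% decides hypothesis H in the limit iff
\forall w \in K \;\; \exists n \;\; \forall m \ge n:\quad
M(w|_m) =
\begin{cases}
  1 & \text{if } H \text{ is true in } w,\\
  0 & \text{otherwise.}
\end{cases}
% No bound on n is required, so success is a convergence guarantee rather
% than a verification of H at any fixed finite stage.
```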
Philosophy of Science01:31 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:31:00 UTC - 2018/11/03 06:59:00 UTC
Christian J. Feldbacher-Escamilla (DCLPS) Two peers have an epistemic disagreement regarding a proposition if their epistemic attitudes towards the proposition differ. The question of how to deal with such a disagreement is the problem of epistemic peer disagreement. Several proposals to resolve this problem have been put forward in the literature. Most of them concentrate on the question of whether, and if so to what extent, one should incorporate evidence of such a disagreement in forming an epistemic attitude towards a proposition. The classical position is the so-called "equal weight view", which suggests that one should generally incorporate such evidence by weighting the attitudes equally. At the other end of the spectrum is the so-called "steadfast view", which suggests that one should generally not incorporate such evidence. In between are views that suggest incorporating such evidence differently from case to case, such as the total evidence view. In this paper we present a new argument in favour of the equal weight view. A common argument for this view stems from a principle one might call the "principle of epistemic indifference": if the epistemic attitudes of n individuals are, regarding their rational formation, epistemically indistinguishable (i.e., the individuals are epistemic peers), then each attitude should be assigned a weight of 1/n. However, as we will show, the equal weight view also results from a more general approach of forming epistemic attitudes towards propositions in an optimal way. In this way, the argument for equal weighting can be massively strengthened, from reasoning via indifference to reasoning from optimality.
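To make the principle of epistemic indifference concrete, here is a minimal sketch of equal weighting understood as linear pooling with weights 1/n; the formula is a standard illustration added for orientation, not notation taken from the paper.

```latex
% Equal weighting among n epistemic peers (linear pooling with weights 1/n):
% if c_1(p), ..., c_n(p) are the peers' credences in proposition p, then
c^{\mathrm{revised}}(p) \;=\; \sum_{i=1}^{n} \frac{1}{n}\, c_i(p).
% The principle of epistemic indifference fixes the weights at 1/n; the
% paper's optimality argument is meant to recover this same weighting.
```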
92. What Is Probability, Or: Rudolf Carnap, Logical Bayesian?
Philosophy of Science01:32 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:32:00 UTC - 2018/11/03 06:59:00 UTC
Marta Sznajder (Munich Center for Mathematical Philosophy) What exactly are the subjective and the logical interpretations of probability? Given how often these labels are used to characterize different positions in the philosophical foundations of probability, we should have a very sharp understanding of what they mean. In my presentation, I use the example of someone whose position is taken to be fully understood and show that these two standard labels are not enough to unambiguously describe it. From the 1940s Rudolf Carnap developed systems of inductive logic based on what he considered the logical concept of probability. According to what seems to be a leading interpretation of the development of this project, both the formal features of the systems of inductive logic and their conceptual underpinning changed significantly between 1945 and 1970. As the popular opinion has it, Carnap's interpretation of probability had evolved from a logical towards a subjective, or Bayesian, conception. Statements to this effect have been put forward by, among others, Skyrms, Zabell, Galavotti, and Earman --- all of them leading workers in the field of probability interpretations. However, this view of the conceptual evolution of Carnap's inductive logic is at odds with what he himself had declared. Even in the later phases, Carnap insists that his basic philosophical view of probability did not change and throughout all his probability publications he stressed the logical character of his systems and his concept of probability. The presentation addresses this apparent clash between Carnap's self-identification and the subsequent interpretations of his work. Are the modern accounts of Carnap's evolution misguided, or was he delusional about the conceptual implications of the developments within inductive logic? Or is the real issue our lack of clarity on what our labels actually mean? Following its original intentions, I reconstruct inductive logic as a project in explication. The picture that emerges is of a highly versatile linguistic framework, whose main function is not the discovery of objective logical relations in the object language, but the stipulation of practically useful conceptual possibilities. Within this representation, I map out the changes that the project went through and consider the way in which these changes led to a modification of the underlying concept of probability. It turns out that most of the interesting movement within the project happened on the level of the characterization of the explicandum, and not on the level of the explicated theory itself. Seen from such an explication perspective, Carnap becomes quite hard to categorize as either a subjectivist or a logicist about probability. I go through some possible interpretations of these terms, showing how according to neither of them is the early Carnap a clear logicist and the late Carnap a clear subjectivist. The result is not only a better understanding of the original inductive logic project, but also a new impulse to rethink the conceptual basics of probability interpretations.
93. The Model of Evidential Reasoning in Archaeology
Philosophy of Science01:33 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:33:00 UTC - 2018/11/03 06:59:00 UTC
Kristin Kokkov (University of Tartu) Archaeology is a domain that studies material remains of past events for the purpose of understanding social structures and cultural dynamics of past people. The events and people in question do not exist anymore and cannot be observed directly. Thus, there is a gap between the subjects that are studied and the information that is preserved from the past. Archaeologists have tried to overcome this interpretational gap for many decades. In the 1970s, Lewis R. Binford introduced the method of middle-range theories as a tool of archaeological interpretation. In the 1980s, Ian Hodder laid the foundation for the post-processual movement that emphasised the importance of understanding past social context in interpreting past material culture. In recent years, the question of the interpretational gap between material remains and past events has been analysed by Alison Wylie. To explain how archaeologists interpret material remains, she (2011: 371) suggests the model of evidential reasoning. Wylie (2011: 380) describes this model by saying that it involves three functional components: 1) empirical input; 2) theory that mediates the interpretation of empirical input as evidence; and 3) the claims on which this empirical input bears as evidence. Taking this model as the basis for my study, I examine the archaeological research process in detail. I propose a specified version of the model and claim that the process of archaeological theory formation consists of at least three different stages of interpretation that proceed from the present material remains towards the past events: 1) the stage between material remains and archaeological record; 2) the stage between the description of the archaeological record and claims about the past; 3) the stage between claims about the past and general theory about the past historical-cultural context. I argue that each of these stages has the structure of the model of evidential reasoning, but has its own specific function. In the first stage, the material remains are interpreted as archaeological record. In the second stage, archaeologists make claims about the past and explain why the archaeological record is the way it appears to us. In the third stage, archaeologists try to explain why the past events that left behind the archaeological record we see today took place. My aim is to explain in detail the structure of each interpretational stage, and show schematically how the archaeological research gradually proceeds from the material remains towards the understanding of the cultural past.
94. Better than Randomisation? A Defence of Dynamic Allocation
Philosophy of Science01:34 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:34:00 UTC - 2018/11/03 06:59:00 UTC
Oliver Galgut, Elselijn Kingma (University of Southampton) Introduction: Random allocation (randomisation) is widely considered the best allocation technique for double-masked, controlled, interventional medical trials. One problem for randomisation is that — contrary to what is often claimed — it cannot perfectly control for all potential confounders. Due to chance alone, important variables can be differentially distributed at baseline. Dynamic Allocation (DA) is a potential alternative to randomisation, developed as a response to this problem of baseline imbalance. DA techniques actively allocate patients to ensure the treatment and control groups are as similar as possible. This ensures that treatment effects are unlikely to be confounded. Unfortunately, DA techniques are not widely used because of fears that they may be susceptible to bias. In this poster, we examine the ability of DA to prevent bias and achieve balance. Methods: We assess DA by comparing it to randomisation using two key justifications of randomisation: 1. Randomisation prevents bias. 2. Randomisation is the only method that guarantees that all confounding factors (known and unknown) can be balanced between treatment and control groups. These two were chosen because they are foundational to the case for randomisation. By examining DA’s performance on these, we can consider whether DA is a serious potential competitor to randomisation. Results: DA is better at balancing known confounders than randomisation. It is no worse than randomisation at balancing unknown confounders. DA is shown to prevent bias independently of its balancing ability. Additionally, we find that the ideal DA technique: 1) inputs continuous covariates, 2) uses a variable allocation probability, and 3) is opaque. Conclusions: DA can be at least as good as randomisation at preventing bias and achieving balance. Therefore, it is a competitor to randomisation. This is particularly true if the technique inputs continuous covariates, uses variable allocation probabilities, and is opaque. Such techniques should be considered on an equal footing with randomisation when designing interventional trials.
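For readers unfamiliar with Dynamic Allocation, the sketch below illustrates one well-known DA technique, Pocock-Simon-style minimisation with a variable allocation probability. It is a simplified, hypothetical illustration (two arms, categorical factors only), not the specific technique the authors assess, which on their account should also accept continuous covariates.

```python
import random


def minimisation_assign(new_patient, allocated, factors,
                        arms=("treatment", "control"), p_best=0.8):
    """Assign a patient to the arm that minimises covariate imbalance.

    new_patient: dict mapping factor name -> level, e.g. {"sex": "F"}
    allocated:   list of (arm, patient_dict) pairs already in the trial
    factors:     factor names to balance on
    p_best:      probability of choosing the imbalance-minimising arm
                 (a variable allocation probability keeps assignments
                 partly unpredictable, which helps prevent selection bias)
    """
    def imbalance_if(candidate_arm):
        # Sum, over factors, of the spread in counts of this patient's
        # levels, supposing the new patient joined `candidate_arm`.
        total = 0
        for f in factors:
            level = new_patient[f]
            counts = {a: sum(1 for arm_i, p in allocated
                             if arm_i == a and p[f] == level)
                      for a in arms}
            counts[candidate_arm] += 1
            total += max(counts.values()) - min(counts.values())
        return total

    scores = {arm: imbalance_if(arm) for arm in arms}
    best = min(scores, key=scores.get)  # ties resolved arbitrarily
    others = [a for a in arms if a != best]
    if random.random() < p_best or not others:
        return best
    return random.choice(others)


if __name__ == "__main__":
    # Toy example: allocate a small stream of patients in arrival order.
    allocated = []
    patients = [{"sex": "F", "age_band": "60+"},
                {"sex": "M", "age_band": "<60"},
                {"sex": "F", "age_band": "<60"},
                {"sex": "F", "age_band": "60+"}]
    for patient in patients:
        arm = minimisation_assign(patient, allocated,
                                  factors=["sex", "age_band"])
        allocated.append((arm, patient))
        print(arm, patient)
```

The design choice worth noting is the `p_best` parameter: with probability 1 the scheme becomes deterministic and predictable, while values below 1 trade a little balance for the opacity the abstract identifies as desirable.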
95. What's the Signal?: Philosophical Misuses of the Signal-Noise Distinction
Philosophy of Science01:35 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:35:00 UTC - 2018/11/03 06:59:00 UTC
Kathleen Creel (University of Pittsburgh), David Colaço (University of Pittsburgh) For four years after the Parkes radio telescope identified “perytons,” terrestrial short chirped radio pulses, their cause was unknown. Researchers believed that perytons were terrestrial in origin because they occurred exclusively within the working week and were wide-field detectable, suggesting that they occurred close to the site of the telescope. Papers speculated that these pulses were caused by ball lightning, meteor trails or signals from aircraft (Katz 2014). Although perytons had only been observed at one telescope, explaining their existence was important because they called into question the interstellar origin of fast radio bursts (FRBs), which have similar quadratic forms (Petroff et al. 2015, 3934). Finally, Petroff and colleagues showed that the pattern characteristic of perytons could be reproduced by opening the door of a nearby microwave oven in the facility’s break room while the magnetrons were still active. This reliably caused the pattern that had been detected as a “peryton.” Once researchers declared use of the microwave off-limits during telescope hours, the perytons disappeared. Although “perytons” turned out to be artifacts, their patterns in data, due to their wide-field detectability, are stronger and more stable than those of their extragalactic analogues, FRBs. This case illustrates the shortcomings of current philosophical use of the signal-noise distinction. Philosophers use the terms “signal” and “noise” in their analyses of the detection of phenomena from the data they collect in experiments or observations (Bogen and Woodward 1988; Woodward 1989; McAllister 1997). However, their uses often blur an intuitive understanding of the difference between signal and noise with a technical definition derived from information theory. We propose a new use of the signal-noise distinction that distinguishes the identification of phenomena and artifacts. Contra James Woodward, who takes everything in a dataset that does not correspond to the phenomenon to be idiosyncratic, we argue that data patterns that indicate the presence of artifacts are often robust and more easily detected than the patterns that indicate phenomena. To apply the signal-noise distinction in a way that accurately captures the process of detecting phenomena, it is paramount to understand the role of researcher interest in the process. Researcher interest dictates what phenomenon is investigated; interest does not itself determine what counts as a phenomenon. Researchers investigate patterns they think correspond to interesting phenomena. These patterns are chosen due to their informational character, from which researchers formulate characterizations of phenomena of interest.
96. Evidential Discord in Observational Cosmology
Philosophy of Science01:36 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:36:00 UTC - 2018/11/03 06:59:00 UTC
Michael Begun (University of Pittsburgh) One intriguing feature of recent research on some prominent questions in astrophysics and cosmology is the presence of stubborn discrepancies or tensions in empirical results. For example, the two main approaches for determining the Hubble constant—classical determinations involving distance ladders typically calibrated by Cepheids and Type Ia Supernovae, and CMB-based determinations—have tended to yield discrepant results, with the most precise recent estimates of each type producing a 3.4σ difference. More controversially, in the context of dark matter research, the research group behind the DAMA/LIBRA experiment has claimed to find a strong signal for the existence of WIMP dark matter, whereas others have claimed to rule out the existence of dark matter in the same parameter range, yet no obvious explanation for the discrepancy has emerged. While evidential discord is by no means unique to astrophysics and cosmology, cases like these provide a useful starting point for understanding evidential discord more generally, as well as highlighting some of the unique evidential challenges facing astrophysics and cosmology. In this work, I examine the discrepant results in the Hubble constant and dark matter cases and use them to try to better understand the ways in which empirical results can conflict and the epistemic implications of those conflicts. Starting from Jacob Stegenga’s account of inconsistency and incongruence, I argue that a more nuanced picture of evidential discord is required for making sense of the Hubble constant and dark matter cases. I characterize evidential “non-conformity” as a weaker form of discord than inconsistency but a stronger form than incongruence, and show that the Hubble constant and dark matter cases fit this characterization of non-conformity. One reason why the results in these cases are better characterized as non-conforming rather than inconsistent is that because the competing approaches rely on different methodologies and background assumptions, the discrepancies may ultimately be found to be compatible, perhaps through the modification of background assumptions or with the discovery of currently unknown physical features affecting the results. I also show why the evidential discord in the Hubble constant and dark matter cases should not be characterized as incongruent on Stegenga’s definition. Finally, I examine the current prospects and scientific strategies for resolving the discrepancies in the Hubble constant measurements and in the dark matter detection experiments. There is now a strong push for new, more precise measurements and experiments, reexaminations of experimental methods to uncover systematic errors, and critical inspections of physical assumptions. I suggest that whereas judgments of evidential non-conformity are likely to be experimentally fruitful, leading to improved experiments and methodologies, judgments of inconsistency are more likely to be theoretically fruitful, leading to revised models or theories. While evidential discord is often seen by philosophers of science as a serious problem, this analysis highlights the positive epistemic role that it plays, at least in contemporary astrophysics and cosmology.
97. Are Beliefs Propositional Attitudes?: A Developmental Approach
Philosophy of Science01:37 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:37:00 UTC - 2018/11/03 06:59:00 UTC
Ayca Mazman (University of Cincinnati) In developmental psychology, a specific experimental design, namely the false belief task, is used to measure children's ability to attribute beliefs and intentionality to others. There are many variations of the false belief task, each of them claiming to test different aspects of certain theory of mind abilities. The most common type of false belief task is the Sally-Anne task (also known as the change of location task). In this paper, I evaluate some philosophical theories of belief to see if any of them are able to provide an explanation for the implicit passing of the false belief task in early infancy. Most theories of what beliefs are rest on the assumption that beliefs are propositional attitudes, since philosophers tend to formulate their theories of belief based on experiences in adulthood. Given the psychological research and experimentation described above, the implicit passing of the false belief task then suggests that infants as young as 13 months are able to entertain propositional attitudes. I argue that attributing to pre-verbal infants the ability to entertain propositions seems like the wrong approach, since the gap between implicit and explicit passing suggests that children are not able to verbally confirm that they understand someone has a belief that differs from theirs before the age of 4.5 or 5 (most likely due to their inability to entertain propositional attitudes before that age). I see two options to avoid this dilemma: 1) If we agree that false belief tasks in early infancy do indeed show that infants are able to attribute false beliefs to others, then beliefs may not be propositional attitudes; OR 2) Results of false belief tasks in early infancy do not show that infants possess a ToM, but rather that they tap into a more primitive mechanism whose usage mimics an understanding of false beliefs, at least in the case of false belief tasks in early infancy. I argue that the second option is more appealing for a variety of reasons, one being that it allows beliefs to be propositional attitudes without attributing to infants as young as 13 months the ability to entertain propositional attitudes.
98. Towards a Process Ontology of Pregnancy: Links to the Individuality Debate
Philosophy of Science01:38 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:38:00 UTC - 2018/11/03 06:59:00 UTC
Hannah O'Riain (University of Calgary) Pregnancy is a neglected but useful case study for investigating biological individuality. Existing accounts of individuality in pregnancy use substance ontology to define the conceptus as a separate individual (as in Smith and Brogaard’s container model, 2003), or as part of its host (Kingma’s part-whole claim, 2018, forthcoming). Substance ontology frames the world in terms of static entities; if the biological world is ever-changing, yet composed of substances, persistent personal and organismal identities are puzzling. I argue these substance-based accounts are unsatisfactory because they must distort physiology and avoid answering important questions to provide a definitive ontology. While Kingma’s part-whole account is built on more correct physiology than Smith and Brogaard’s container model, she still struggles to address whether the foster is part of its gravida before implantation and after birth. She is tentative in proposing the part-whole account because these open questions have bearing on the production of a definitive ontology. Kingma recognizes that the metaphysical account we accept has practical consequences – in this case for the autonomy of pregnant women. She and I both argue that we should investigate our meagre sample of ontological accounts of pregnancy critically, and consider replacing them if they are biologically inaccurate and socially harmful. Nicholson and Dupré (2018) provide a way out of the persistent identity puzzle, criticizing both substance-based conceptions of organisms, and monist approaches to ontology. I apply these critiques to pregnancy. Using Nicholson and Dupré’s lens (2018), I resolve several difficulties that substance-based views of individuality encounter in the pregnancy case. Process ontologies are populated by individuals that are more like whirlpools or markets than tangible objects: usefully stable entities that are actively sustained (Dupré, 2014). In this vein, I discuss how there are no useful, clear boundaries between the conceptus and pregnant organism: pregnancy is a complex, intertwined relationship of hierarchical biological processes, including metabolic activities and life cycles. Implantation, birth, and breastfeeding are some of the biological processes that complicate our efforts to carve the world into distinct, static individuals according to any monolithic account of biological individuality. A process account of organismal and personal identity will provide better tools for biologists and philosophers investigating individuation. Dupré’s concept of nested hierarchies of processes allows us to zoom in or concentrate on stabilities that importantly form individual entities, be they framed as parts, wholes or background setting, according to our research question. In the pregnancy case, this clarifies the puzzle of how a foster could be both a part of its gravida and a meaningful individual. Future work to create a satisfactory account of individuality in the context of mammalian ovulation, gestation and lactation would bring up useful themes, empirical grounding and new approaches for understanding biological individuality and organismality in philosophy of biology more broadly; for example, in philosophical conversations about genes, development and species transitions in evolutionary biology. 
In this presentation, I conclude that individuation in pregnancy deserves careful consideration, and that our ontological investigations of pregnancy ought to include more processual understandings.
Philosophy of Science01:39 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:39:00 UTC - 2018/11/03 06:59:00 UTC
Susan Sterrett (Wichita State University) In mid-November 2018, an international body, the General Conference on Weights and Measures (CGPM), will meet to vote on a proposal to revise the International System of Units ("the SI"). The proposed revision is considered revolutionary, although the motivation for it is to achieve an ideal that has long existed yet remains unfulfilled to date: to provide a system of measurement based entirely on natural measures. There are two features of the proposed system of units (the "New SI" or "Revised SI") that are bound to arouse interest among those concerned with foundational questions in philosophy of science: (i) the proposed system of units can be defined without drawing on a distinction between base units and derived units; and (ii) the proposed system of units does not restrict (or even specify) the means by which the values of the quantities of the units are to be established. Instead, the system of units is defined by fixing the values of seven "well-recognized fundamental constants of nature." The change is akin to the approach that has already been taken for defining the unit of length (i.e., the meter) in terms of the velocity of light, a well-recognized "constant of nature." To define the entire system of units by fixing the values of seven constants of nature (one of which is the velocity of light) is a far more radical proposal. The proposal that is expected to be accepted presents two distinct alternative formulations of the definition of SI units. In the first of these, the definition includes not only the units that were previously designated as the seven base units of the SI, but also five additional SI units, and, in an unprecedented move, it draws no distinction between base units and derived units. Yet, to minimize disruptive consequences of the change, a second formulation of the definition is provided as well, in which each of the base units is given a definition and, as a matter of convenience, the terminology of base units and derived units is retained. Thus, the question of the appropriate role of base units -- and, even, of whether the concept of base unit plays an essential role in the definition of a system of units at all -- arises. In this poster, I will first briefly present and explain the proposed reform of the SI. I will then highlight what will change and what will remain the same if the proposal is accepted by the international body in mid-November as expected. Finally, I will address the question of the role of base units in light of the new SI. I formulate and clarify the question: "Do the 'base units' of the SI play any essential role anymore, if they are neither at the bottom of a hierarchy of definitions themselves, nor the only units that figure in the statements for fixing the numerical values of the 'defining constants'?" I present an answer to this question.
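For reference, the seven defining constants whose numerical values the Revised SI fixes are listed below; the values are those given in the draft resolution put before the CGPM and are added here only as background, not as part of the poster text.

```latex
% The seven defining constants of the Revised SI (fixed exact values):
\begin{align*}
\Delta\nu_{\mathrm{Cs}} &= 9\,192\,631\,770~\mathrm{Hz}              && \text{caesium hyperfine transition frequency}\\
c        &= 299\,792\,458~\mathrm{m\,s^{-1}}                         && \text{speed of light in vacuum}\\
h        &= 6.626\,070\,15 \times 10^{-34}~\mathrm{J\,s}             && \text{Planck constant}\\
e        &= 1.602\,176\,634 \times 10^{-19}~\mathrm{C}               && \text{elementary charge}\\
k        &= 1.380\,649 \times 10^{-23}~\mathrm{J\,K^{-1}}            && \text{Boltzmann constant}\\
N_{A}    &= 6.022\,140\,76 \times 10^{23}~\mathrm{mol^{-1}}          && \text{Avogadro constant}\\
K_{\mathrm{cd}} &= 683~\mathrm{lm\,W^{-1}}                           && \text{luminous efficacy of 540 THz radiation}
\end{align*}
```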
100. In Defence of Branch Counting in an Everettian Multiverse
Philosophy of Science01:40 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:40:00 UTC - 2018/11/03 06:59:00 UTC
Foad Dizadji-Bahmani (California State University, Los Angeles) The main challenge for the Everett interpretation of quantum mechanics (EQM) is the 'Probability Problem': If every possible outcome is actualized in some branch, how can EQM make sense of the probability of a single outcome as given by the Born rule? Advocates of EQM have sought to make conceptual room for epistemic probabilities in one of two ways: the decision-theoretic approach (Deutsch (1999), Greaves (2004), Wallace (2012)) and the self-location uncertainty approach (Vaidman (1998; 2011), Sebens and Carroll (2016)). Both approaches aim to show that, faced with branching, one is required to set one's credences as per the Born rule. In the first, the result is variously proved from a set of decision-theoretic axioms, which encode what it is to be rational. In the second, Sebens and Carroll prove the result from a single principle, their "Epistemic Separability Principle" (ESP). Prima facie, the right way to set one's credences in an Everettian multiverse is by "Branch Counting" (BC): the credence a rational agent ought to have in a particular quantum measurement outcome is equal to the ratio of the number of branches in which that (kind of) outcome is actualized to the total number of branches, because each branch is equally real. BC is at odds with the Born rule and thus advocates of EQM have sought to argue against it in various ways. The aim of this paper is to show that these arguments are not persuasive, and that, therefore, the probability problem in EQM has not been solved. I consider two different arguments against BC: that BC is not rational because 1) there is no such thing as the number of branches in EQM; and 2) at least in some salient cases, it conflicts with a more fundamental principle of rationality, namely the aforementioned ESP. Apropos 1: Wallace (2003, 2007, 2012) has argued that BC is irrational because there is no such thing as the number of branches. I draw a distinction between the following: that the number of branches is indeterminate (metaphysical) and that the number of branches is indeterminable (epistemological). I argue that neither claim is justifiable. Apropos 2: The Sebens and Carroll (2016) self-location uncertainty approach turns on ESP, which requires that the "credence one should assign to being any one of several observers having identical experiences is independent of the state of the environment." They proffer a thought experiment called 'Once-Or-Twice' and show that BC is inconsistent with ESP in this case, and they advocate adopting the latter. I argue contra this that A) BC is a far more intuitive principle than ESP in the given context and that B) a crucial move in their argument — taking equivalent mathematical expressions as representing identical physical situations — is B1) inconsistent in methodology (because in setting up their framework they need to assume that equivalent mathematical expressions do not necessarily represent identical physical situations) and B2) unjustified in the given thought experiment.
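To make the tension explicit, here is a minimal illustrative contrast between the two rules for a toy post-measurement state; the notation is supplied for orientation and is not drawn from the poster.

```latex
% Toy contrast: suppose the post-measurement state is
% |psi> = \sum_i \alpha_i |i>, realized across N branches, N_i of which
% display outcome i. Then the two rules assign
\text{Branch Counting:}\quad \mathrm{Cr}(i) = \frac{N_i}{N},
\qquad
\text{Born rule:}\quad \mathrm{Cr}(i) = |\alpha_i|^{2}.
% The two prescriptions agree only when the branch weights happen to be
% uniform, which is why advocates of EQM must argue against branch counting.
```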
101. Scientific Structuralism Does Not Necessitate Modal Realism
Philosophy of Science01:41 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:41:00 UTC - 2018/11/03 06:59:00 UTC
Ilmari Hirvonen (University of Helsinki), Ilkka Pättiniemi (University of Helsinki) In their book Every Thing Must Go (2007, Oxford: OUP), James Ladyman and Don Ross defend modal realism, which we argue is in conflict with their programme of naturalistic metaphysics. Ladyman and Ross criticise empirically unconstrained metaphysics that they call strong metaphysics. This variety of metaphysics is, according to them, mainly motivated by wanting to give explanations in order to make some things or phenomena seem less mysterious. Ladyman and Ross contrast strong metaphysics with weak metaphysics, which is based on Kitcherian unification of special science hypotheses with ones from fundamental physics. Ladyman and Ross maintain that strong metaphysics should not be pursued whereas weak metaphysics is the only viable sort of metaphysics that there is or can be. We argue that Ladyman and Ross’ modal realism doesn’t do the unificatory work that they themselves claim is the only acceptable form of metaphysics. Additionally, their reasons for endorsing modal realism are either lacking or in tension with their critique of strong metaphysics. Ladyman and Ross present three reasons for accepting modal realism. First, modal expressions are used in science, and at least some modal claims are considered to be theory-independently true. The second reason is a new version of the traditional no-miracles argument. Ladyman and Ross’ brand of realism focuses on the modal or nomological relations within scientific theories. They claim that, without objective modal structures, such standard features of scientific practice as successful theory conjunction and novel prediction would be entirely mysterious. Third, and lastly, modal realism justifies inductive generalisations. Concerning the first argument, it must be acknowledged that Ladyman and Ross are right when they claim that modal terminology is indeed indispensable in science. However, the usage of such language does not yet, in itself, bind us to a realist interpretation of it. Hence, it is not clear how strong an ontology the lexicon we have endorsed binds us to. At the very least, Ladyman and Ross should offer an argument for the conclusion that modal expressions necessarily force us to accept modal realism, and this is something they have not yet done. Ladyman and Ross’ second and third arguments are motivated by offering an explanation for something that would otherwise seem mysterious or miraculous, and this is precisely the kind of motivation for metaphysics that they oppose. The third argument seems to lead Ladyman and Ross on the path of strong metaphysics because induction is needed already in fundamental physics. So modal realism is required to justify induction before any unificatory work is done between fundamental physics and the special sciences. Therefore, instead of being weak unificatory metaphysics, modal realism seems to be some kind of transcendental condition for empirical science. This seems to be a clear indication of strong metaphysics. We claim that in the end Ladyman and Ross have to face the following dilemma: either they must accept that they participate in strong metaphysics, or dilute their modal realism to the point of indistinguishability from empiricist antirealism.
102. Chemical Models in Biology: In Vitro Modeling in Biochemistry and the Production of Biochemical Knowledge
Philosophy of Science01:42 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:42:00 UTC - 2018/11/03 06:59:00 UTC
Erica Dietlein (University of Nevada, Reno) In vitro systems are commonly used within the fields of molecular biology and biochemistry. However, despite the prevalent use of these systems, discussion of the nature of in vitro modeling has thus far addressed only a limited portion of what goes on within in vitro models and does not capture the diversity of in vitro modeling techniques employed by biochemists. The dialogue surrounding biochemistry and biochemical technology has thus far been largely restricted to conversation about mechanisms or their role in bioethical debates. Discussion of the epistemology of in vitro systems has largely remained limited to claims that in vitro studies are designed to mimic systems that already exist in nature (Garcia, 2015). This characterization fails with regard to a wide variety of in vitro systems, such as the ones used to characterize the CRISPR-Cas9 system. Additionally, biochemistry has its origins in chemistry, and the experimental design that goes into the creation of biochemical models still shares similarities with modern chemical experimental models. The nature of modeling in molecular biology and biochemistry is more diverse than it initially appears, and there is much within these fields that remains to be explored by philosophers. In this poster, I introduce one means by which biochemists have used in vitro studies to generate knowledge about molecular activity that goes beyond the imitation of natural systems, using characterizations of the CRISPR-Cas9 system as examples (Gasiunas, 2012; Jinek, 2012). Next, I make suggestions about where the philosophy of chemistry and the philosophy of biology might be brought together to better address questions about modeling in biochemistry. Model system design in chemistry reveals much about how biochemists produce knowledge within their own field through the use of in vitro systems. The way in which some chemists model their objects of study is highly similar to how biochemists model biochemical objects. Thus, conversations within the philosophy of chemistry (Fisher, 2017; Chamizo, 2013) can enrich and expand our understanding of the modeling behind the development of biochemical theory and the technologies that develop within the field.
Philosophy of Science01:43 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:43:00 UTC - 2018/11/03 06:59:00 UTC
Vincent Bergeron (University of Ottawa) A primary goal of cognitive neuroscience is to identify stable relationships between brain structures and cognitive functions using, for example, functional neuroimaging techniques. Aside from the many technical, theoretical, and methodological issues that accompany this kind of research, an important empirical challenge has begun to receive widespread attention. There is mounting evidence that a great many brain structures are recruited by different tasks across different cognitive domains (Anderson 2010; Poldrack 2006), which suggests that a given brain structure can typically participate in multiple different functions depending on the cognitive context. One possible reason for the failure to observe systematic mappings between brain structures and cognitive functions is that a given brain structure, or network of brain structures, might do something different (i.e. perform a different set of operations) for each, or at least some, of the different types of cognitive functions it participates in (Anderson 2014). Another possible reason is that our cognitive ontologies—i.e. our current descriptions of cognitive processes and their components—are either incorrect or too coarse (Price and Friston 2005). Thus the basic contribution of a brain structure might be the same for the different types of cognitive functions it participates in—with the possibility of systematic structure-function mappings—but an adequate description of this basic contribution might not correspond to anything in the vocabulary of our current theories about the structure of the mind. My aim in this paper is to explore this second possibility. The human brain shares many of its anatomical and functional features with the brains of other species, and we can expect that for any human cognitive function, (at least) some component(s) of it could be found in the cognitive repertoire of another species (de Waal and Ferrari 2010). What is less clear, however, is how best to exploit this evolutionary continuity in order to identify the components of the human neurocognitive architecture that we share with other species and that have remained stable across extended evolutionary periods. Here, I argue that a useful way to think about these shared components is to think of them as cognitive homologies. In contrast with the well-known concept of structural homology in biology—defined as the same structure in different animals regardless of form and function, where sameness is defined by common phylogenetic origin—the proposed notion of cognitive homology focuses on the functional properties of homologous brain structures that tend to remain stable across extended evolutionary periods. I then argue, using recent findings from the cognitive neurosciences, that cognitive homologies are good candidates for stable structure-function mappings which, in turn, can be used for the construction of new cognitive ontologies.
Philosophy of Science01:44 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:44:00 UTC - 2018/11/03 06:59:00 UTC
Paul Hoyningen-Huene (Leibniz Universität Hannover) In a series of papers from 2000 on, Robert Sugden has analyzed the epistemic role of theoretical models in economics. His view is that these models describe a counterfactual world that is separated from the real world by a gap. This gap has to be filled if the model is to have an epistemic function for our understanding of the real world. According to Sugden, this gap “can be filled only by inductive inference”. The putative inductive inference that Sugden constructs leads “from the world of a model to the real world”, based on “some significant similarity between these two worlds”. In philosophy, the “significant similarity” that Sugden correctly adduces for the legitimacy of inductive steps has been spelled out as common membership in a natural kind. However, for Sugden's inductive step to be legitimate, the union of the appropriate set of models with the appropriate set of real target systems would have to form a natural kind, which is certainly not the case. For instance, with respect to causality, model cities are utterly different from real cities, contrary to Sugden: models may at best represent the real causality. In fact, the inferential step from models to reality is abductive, as Sugden correctly notes. However, he misunderstands abduction as a sub-category of induction. Yet abduction does not lead to generalizations, as induction does, but to risky explanatory hypotheses. The abductive inference from a model to reality has the following form: (i) x has property Z (empirical finding); (ii) situations of type A have property Z (model); therefore (H) x is a situation of type A. If (H) is true, then (H) together with (ii) explains (i). However, the abductive step to (H) is risky, because it may also hold that (ii*) situations of type B have property Z, with B ≠ A. Based on (i) and (ii*), one gets by abduction the alternative explanatory hypothesis (H*): x is a situation of type B, with B ≠ A. Thus, all one gets by an abductive step is a potential explanation (sketch). The only way to obtain the actual explanation is by showing that the model situation is sufficiently similar to reality and by excluding all alternative explanations. Thus, the real explanation is not distinguished from alternative explanations by an intrinsic property of high credibility, as Sugden assumes, but by its comparative advantage over competitors. The upshot is that a theoretical model in economics (like Schelling's) never directly explains any particular empirical case (this resolves Reiss' “explanation paradox”). Instead, a model allows for the abductive generation of a sketch of a potentially (perhaps surprising) explanatory hypothesis. In order to transform this potential explanation sketch into an actual explanation, the sketch must be elaborated and its empirical adequacy shown. The latter crucially involves the exclusion of alternative potential explanations. This may be accomplished by showing that the empirical conditions necessary for plausible alternative mechanisms to work do not obtain.
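Schematically, one natural formalization of the abstract's own schema (the predicates A, B, Z and the notation are mine) makes the rivalry explicit:

```latex
% Abductive step from a model to reality, and its risky rival:
\[ \text{(i) } Zx, \quad \text{(ii) } \forall y\,(Ay \to Zy)
   \;\leadsto\; \text{(H) } Ax \]
\[ \text{(i) } Zx, \quad \text{(ii*) } \forall y\,(By \to Zy),\; B \neq A
   \;\leadsto\; \text{(H*) } Bx \]
```

Since (H) and (H*) would each, together with its model premise, explain (i), the abductive step alone delivers only a potential explanation.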
105. Selfish Genes and Selfish DNA: Is There a Difference?
Philosophy of Science01:45 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:45:00 UTC - 2018/11/03 06:59:00 UTC
W. Ford Doolittle (Dalhousie University) Although molecular biology's “central dogma” (DNA → RNA → protein; Crick 1958) emphasized the causal primacy of genes, many molecular biologists held to a naïve, organism-centered panadaptationism until the publication in 1976 of The Selfish Gene (Dawkins 1976). This widely-read book argued that a gene's phenotypic benefit to its bearers should be seen as nothing more than a mechanism by which its own spread and maintenance are ensured, a means but not an end. The 1980 “selfish DNA” papers (Doolittle and Sapienza 1980; Orgel and Crick 1980) went what their authors considered to be one step further, pointing out that regions of DNA with the ability to replicate within genomes (what were then called “jumping genes”) need confer no phenotypic benefit. They argued that both the insertion sequences and transposons then being investigated as endogenous mutagenic agents in bacteria and repetitive DNA in eukaryotes (comprising half our own genome, for example) are best understood as such “selfish DNA” elements, with no necessary individual phenotypic expression (hence not “selfish genes”), and possibly in sum detrimental. Both “selfish DNA” papers aimed to counter claims that such elements, because they might someday prove useful, were retained “for the good of the species”. Indeed, as Hickey (1982) soon pointed out, even individual elements that reduce fitness by up to 50% will spread in a sexually-reproducing lineage. Reactions to this proposal are of four sorts, each still vigorously defended by a part of the community of practicing biologists. (1) What I will call the panadaptationist response, while now more openly gene-centric, holds that most of our genome is “functional” (expressed in phenotypes under selection, and not “junk”). Many genomic researchers are in this camp. (2) Dawkins himself considers selfish DNA as but a minor tweaking of the genes-eye view promoted in The Selfish Gene. (3) Hierarchists, following Gould (1983), see in “selfish DNA” (as distinct from “selfish genes”) a compelling argument for multilevel selection theory. (4) Future-directed researchers imagine (as they did in the 1970s) that such elements are not selfish, but retained in anticipation of future use. There are some justifications for each of these positions: here I will try to come up with a balanced view, not simply pluralistic, but in line with the “spatial tool” approach of Godfrey-Smith (2009).
106. A Causal Representation of Gene Regulation in Cancer
Philosophy of Science01:46 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:46:00 UTC - 2018/11/03 06:59:00 UTC
Valerie Racine (Western New England University), Wes Anderson (Western New England University) Within recent discussions about how to conceptualize and understand the behavior of biological systems, such as the genome, philosophers have presented arguments for non-reductionism. John Dupré (2012), in particular, develops a non-reductionist framework for conceptualizing and reasoning about these sorts of entities and their related phenomena. He argues that a reductionist methodology in biology is useful for understanding the capacities of components of biological systems, but that a non-reductionist methodology is required to understand what a system actually does in virtue of what its components are actually doing. More specifically, Dupré argues that when we do understand how a system actually behaves, then we have some understanding of how the system as a whole enacts the actual behavior of its components; i.e. we have some understanding of downward causation. We present an argument against Dupré's non-reductionism. We do so by providing an explicitly causal representation of research on the role of micro-RNA in regulating certain pathways in lung cancer (Johnson et al. 2005) with the representational and inferential tools of causal modeling (Spirtes et al. 2000; Pearl 2009). We show that in such cases particular care is needed to define the appropriate variables, measured on selected units, in order to understand the system's behavior. With well-defined variables, we need not appeal to downward causation at all. Using our case study, we claim that what Dupré calls the “reductionist principle” can be consistent with research on what biological systems and their components actually do. But we provide reasons for thinking that the traditional reductionist/non-reductionist divide distracts from what is essential to understanding the behavior of biological systems in their actual settings. We argue that what is required for an understanding of the behavior of these systems is an understanding of their causal structure and the joint frequency distribution over the exogenous variables in the system. Thus, we aim to show that the tools of causal modeling can be instrumental for understanding what these systems and their components actually do. Developing these types of causal representations of the behavior of biological systems is more fruitful than focusing on intrinsic properties of the components of a system, interaction-relations of components, or downward causation, because it provides researchers with a framework for making causal inferences about their system of interest.
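As a toy illustration of the modeling idiom the authors invoke (this sketch is mine, not theirs: the variable names, coefficients, and causal graph are hypothetical stand-ins, not the Johnson et al. (2005) system), a structural causal model specifies exogenous noise terms plus structural equations, and the system's behavior, including its response to interventions, follows from those two ingredients alone:

```python
import random

# Toy structural causal model (SCM). Purely illustrative: the variable names
# (miRNA, RAS, proliferation), coefficients, and graph are hypothetical
# stand-ins, not the system analyzed by Johnson et al. (2005).

def sample(intervene_mirna=None):
    # Exogenous variables: independent noise terms with fixed distributions.
    u_mirna = random.gauss(0, 1)
    u_ras = random.gauss(0, 1)
    u_prolif = random.gauss(0, 1)

    # Structural equations: each endogenous variable is a function of its
    # causal parents plus its own exogenous term. An intervention ("do")
    # simply overrides the equation for the targeted variable.
    mirna = u_mirna if intervene_mirna is None else intervene_mirna
    ras = -1.5 * mirna + u_ras      # miRNA represses RAS expression
    prolif = 2.0 * ras + u_prolif   # RAS activity drives proliferation

    return {"miRNA": mirna, "RAS": ras, "proliferation": prolif}

def mean_proliferation(n=10_000, **kwargs):
    return sum(sample(**kwargs)["proliferation"] for _ in range(n)) / n

if __name__ == "__main__":
    # Both the observed joint distribution and the effect of an intervention
    # are fixed by the causal structure plus the exogenous distributions.
    print("observational mean proliferation:", round(mean_proliferation(), 2))
    print("do(miRNA = 2) mean proliferation:",
          round(mean_proliferation(intervene_mirna=2.0), 2))
```

Nothing in the example appeals to downward causation: once the variables are well defined, both the observed behavior and the effect of clamping miRNA are determined by the structural equations together with the distributions over the exogenous terms.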
Philosophy of Science01:47 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:47:00 UTC - 2018/11/03 06:59:00 UTC
Marian J. R. Gilton (University of California, Irvine) The property of charge is conceptually central to the various gauge theories of fundamental physics. This project develops a geometrical interpretation of charge by comparing and contrasting electric charge and color charge. Color charge is usually introduced as a property in some sense ‘like’ electric charge, playing the same role in the theory of chromodynamics that electric charge plays in the theory of electrodynamics. Knowing only this much about color charge, one might mistakenly expect that color charge will have all the same metaphysical features as electric charge. However, a closer look at these theories shows that the analogy between the two properties does not license such thorough metaphysical similarities. Since these two theories are different in certain crucial respects, we should not expect that the nature of the property that plays the role of charge in each theory is exactly the same. The main claim of this project is that the charge property at work in gauge theories is best exemplified by color charge and not by electric charge. There are complex features of color charge which are shared by electric charge in principle, but in a degenerate way. In this sense, it is electric charge that is like color charge, and not the other way around. This claim is substantiated on three accounts. First, we consider charge insofar as it is a property attributed to fundamental particles using the mathematics of irreducible representations of Lie groups. Second, this project considers charge as a conserved quantity given by Noether’s theorem as it relates to the metaphysical significance of the Lie algebra. Third, we consider the role of charge in the force laws of gauge theory, using the interpretation of color charge in the Wong force law for chromodynamics to shed light on the standard interpretation of the Lorentz force law for electrodynamics.
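For readers who want the equations behind this contrast, the following is a standard-form sketch (sign and coupling conventions vary across presentations, and the notation is mine rather than the abstract's): the Lorentz force law for a particle of electric charge q, alongside the Wong equations for a classical particle carrying a non-Abelian color charge I^a.

```latex
% Lorentz force law: the electric charge q is a fixed scalar along the
% worldline.
\[ m\,\frac{du^{\mu}}{d\tau} = q\,F^{\mu\nu}u_{\nu} \]

% Wong equations: the color charge I^a is itself dynamical, precessing
% under the gauge potential A^b_mu; f^{abc} are the structure constants
% of the gauge group's Lie algebra.
\[ m\,\frac{du^{\mu}}{d\tau} = g\,I^{a}F^{a\,\mu\nu}u_{\nu},
   \qquad
   \frac{dI^{a}}{d\tau} = -\,g\,f^{abc}\,u^{\mu}A^{b}_{\mu}\,I^{c} \]
```

In the Abelian case the structure constants vanish and the second Wong equation collapses to a constant charge, which is one concrete sense in which electric charge looks like the degenerate instance of color charge.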
108. The Role of Intentional Information Concepts in Ethology
Philosophy of Science01:48 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:48:00 UTC - 2018/11/03 06:59:00 UTC
Kelle Dhein (Arizona State University) Philosophers working on the problem of intentionality in non-linguistic contexts often invoke the historical process of natural selection as the objective grounds for attributing intentionality to living systems. Ruth Millikan's (1984) influential teleosemantic account of intentionality, for example, holds that intentionality supervenes on evolutionary history such that the intentional content of a sign is the product of the biological functions that sign has mediated in the past. However, such etiological theories of intentionality don't square with the way ethologists searching for causal explanations of animal behavior attribute intentional concepts to the systems they study. To support that claim, I analyze the norms governing ethologists' attributions of intentional information concepts to eusocial insects, like ants and honeybees, in academic animal behavior journals. Ethologists have a long-standing practice of organizing behavior types by their causal contribution to a fitness-enhancing goal type (Tinbergen 1962, 414), and within that theoretical context, I argue that ethologists hang the concept of intentionality on goal-directed function, not the deep history of natural selection. More specifically, I argue that ethologists attribute intentional information concepts to behavioral processes when those processes play a special kind of role in achieving a goal-directed function. Namely, ethologists attribute intentional information concepts to behavioral processes that robustly achieve a difficult goal. Importantly, my account is objective in that it defines key notions like “robustness”, “difficulty”, and “goal” in a way that is independent of researchers' interests. Instead, my account takes relationships between an organismal system, that system's fitness, and that system's environment to be the objective grounds for attributing intentionality to behaviors in ethology. Finally, I argue that ethologists' attributions of intentional information concepts are scientifically fruitful in that they enable researchers to abstract from the causal details of behavioral systems and make useful generalizations about how those behaviors contribute to adaptive goals. In debates over the utility of biological information concepts, Sahotra Sarkar (1996, 2000) has argued that the concept of information failed to gain a substantive role in 1960s molecular genetics because informational approaches to genetics failed and informational theories about genetics turned out to be false. I conclude by arguing that, unlike in molecular genetics, intentional information concepts play a substantive role in ethology.
Philosophy of Science01:49 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:49:00 UTC - 2018/11/03 06:59:00 UTC
TJ Perkins (University of Calgary) Bas van Fraassen's book Scientific Representation: Paradoxes of Perspective (2008) addresses what he takes to be one of the central aims of science: representation. However, instead of merely asking, 'what is representation?', van Fraassen shifts the question to, “how does this or that representation represent, and how does it succeed?” (33). Built into van Fraassen's account of representation are pragmatic elements (how and why representations are used to achieve certain aims), along with an extended treatment of how measurement procedures operate. Measurement, according to van Fraassen, is “an operation that locates an item (already classified as in the domain of a given theory) in a logical space (provided by the theory to represent a range of possible states or characteristics of such items)” (164). It would seem here that measurements are made in a logical space determined by one guiding theory. However, much of van Fraassen's exploration of scientific representation appeals to modeling and representational practices in quantum mechanics and physics, sciences which are heavily, if not exclusively, mathematical in nature. For this poster I consider some of the concepts and ideas in Scientific Representation as they apply to sciences which have not garnered the same attention as physics and other mathematized sciences, specifically paleontology and ecology. In these sciences, representations of phenomena are built from measurements of paleontological and ecological data, and sophisticated measurement and modelling techniques have been developed to address questions about exceedingly complex systems, in the case of ecology, and sparse evidential inferencing, in the case of paleontology. These epistemic situations place constraints on the ways in which theory influences the logical space referenced by van Fraassen. How does measurement work in more speculative scientific representational systems, where 'the' theory in question is not actually one theory but many, and where those theories do not, strictly speaking, locate items in a logical space but are more loosely appealed to, playing a guiding role in some inference or justification? I will provide instances from the practice of paleontologists and ecologists which utilize theory differently from the physical and quantum sciences, often drawing many theories into one logical space, as opposed to one theory creating the logical space. From these examples I hope to provide a fuller account of scientific representation which makes allowance for more than the mathematical sciences. van Fraassen has undoubtedly and ingeniously laid the groundwork for understanding scientific representation broadly; however, there is room for forgotten or ignored sciences to enable revisions to that account.
Philosophy of Science01:50 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:50:00 UTC - 2018/11/03 06:59:00 UTC
Zachary Shifrel (Virginia Tech) Little research has been done on the application of structural realism to sciences beyond physics. It is unclear, for example, whether biology should lend itself precisely to talk of structural continuity. If biology does support such considerations, it is further unclear how structural realism might fare under the pessimistic meta-induction. This paper sketches an account of structural realism in biology through the analysis of structural continuity and surveys a few lessons that the biological application teaches. Biology is rich with mathematical models, its formal structures having recently gained the attention of those working in category theory, group theory, and algebraic topology. Mathematical frameworks like category theory also happen to allow for the representation and comparison of the structure of scientific theories. Drawing from recent applications of such mathematical frameworks to philosophical problems, I contrast the position of the structural realist in physics with that in biology. Some are content with a structural realism whose domain of validity is confined to the theory space of fundamental physics, but for those who want to extend their realism beyond coalescing neutron stars and elementary particles I show that certain critical features of structural realism are made salient in the biological application. I do this not by conferring inductive support on structural realism through a particular proof of structural continuity, but by drawing attention to difficulties that arise in the course of examining whether structure has remained invariant over time.
111. Challenges to Fundamentality: Two Notions of 'Force' in Classical Mechanics
Philosophy of Science01:51 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:51:00 UTC - 2018/11/03 06:59:00 UTC
Joshua Eisenthal (University of Pittsburgh) I argue that the notion of force in classical mechanics does not have a uniform meaning. In particular, I argue that the traditional formulation of classical mechanics appeals to at least two distinct notions of force which are not obviously compatible with one another. This equivocation at the heart of the theory challenges the standard construal of classical mechanics as a candidate fundamental theory. I claim that the apparent fundamentality of mechanics stems in part from the tacit unification of a diverse range of problem-solving strategies; strategies which may not sit comfortably within a single unified framework. There are two distinct traditions in the history of mechanics: the vectorial tradition and the variational tradition. (Cf. Lanczos (1962) xvii and Sklar (2013) pp. 76-79.) The vectorial tradition is most recognisable in Newton's canonical laws of motion. Paradigm problems in this tradition involve distance forces acting between point-masses, such as gravitational forces in a simple model of the solar system. In contrast, the variational tradition is most recognisable in the 'analytic' methods of Lagrange and Hamilton. The paradigm problems in this tradition involve applications of extremal principles such as the principle of least action. The 'Newtonian' notion of force operative in the vectorial tradition is the familiar notion of a kind of push or pull, typically represented by a three-dimensional vector in ordinary space. In contrast, the 'Lagrangian' notion of force is an abstract vectorial quantity which can have as many dimensions as a system's degrees of freedom. Many of the homely properties of Newtonian forces are not applicable to Lagrangian forces. For instance, although a Newtonian force can be regarded as acting from one body on another and causing the second body to accelerate, Lagrangian forces cannot typically be interpreted in this way: a Lagrangian force is an atemporal property of a system, comparable to the system's total energy. Furthermore, although Newtonian forces often depend only on the relative distances between bodies, a Lagrangian force can often depend on a body's velocity. These conflicting demands on the notion of 'force' have important implications for standard interpretations of classical mechanics. According to the standard view, Newton's laws are intended to be universal and exceptionless, applying equally well to molecules, chairs, and galaxies. However, I argue that the apparent fundamentality of classical mechanics depends on the tacit unification of a diverse range of problem-solving strategies. (This criticism of the standard interpretation of classical mechanics takes its cue from Wilson (2013).) The contrasting conceptions of force evident in the vectorial and variational traditions make the differences between these strategies vivid, and call into question the standard construal of classical mechanics as a candidate fundamental theory.
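By way of illustration (this gloss is mine, not the author's formalism): Newton's second law relates a three-dimensional force vector to an acceleration, whereas the Lagrangian framework works with generalized forces, one component per degree of freedom, entering the Euler-Lagrange equations.

```latex
% Newtonian force: a vector in ordinary three-dimensional space.
\[ \mathbf{F} = m\,\ddot{\mathbf{r}} \]

% Generalized (Lagrangian) force: one component Q_j per generalized
% coordinate q_j of an n-degree-of-freedom system, obtained by projecting
% the applied forces onto the coordinate directions; Q_j then enters the
% Euler-Lagrange equations as the force not already absorbed into L.
\[ Q_{j} = \sum_{i} \mathbf{F}_{i}\cdot\frac{\partial \mathbf{r}_{i}}{\partial q_{j}},
   \qquad
   \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_{j}} - \frac{\partial L}{\partial q_{j}} = Q_{j} \]
```

The second pair makes vivid why a 'Lagrangian force' can have as many components as the system has degrees of freedom and can inherit a velocity dependence through the generalized coordinates.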
112. Representationalism, Phenomenal Variation, and the Prospects for Intervallic Content Proposals
Philosophy of Science01:52 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:52:00 UTC - 2018/11/03 06:59:00 UTC
Alison Springle (University of Pittsburgh), Alessandra Buccella (University of Pittsburgh) Sometimes, perception seems to present the same thing in different ways, a phenomenon we will refer to as phenomenal variation. Several philosophers have argued that some phenomenal variation is systematic on the basis of certain empirical findings. Some of these findings strongly suggest that attention impacts phenomenal character. For instance, Carrasco et al. (1998, 2004) found that subjects' perceived contrast of a Gabor patch differs systematically depending on whether or not the patch is being foveated (attended). In addition, Sperling (1960) showed that cueing attention after a grid of alphanumeric characters is presented for about 0.5 seconds influences how many characters are correctly identified and reported by subjects. But attention isn't the only phenomenology-impacting factor. Bouma (1970) demonstrated that phenomenal character can change in response to the so-called “crowding effect”: the more “crowded” a perceptual scene, the less precise or determinate are the properties subjects report, and vice versa. Ned Block (2010, 2012, 2015) argues that systematic phenomenal variation, as demonstrated in these experiments, poses a serious challenge to representational accounts of perceptual phenomenology. According to representationalism, perceptual phenomenology is fully determined by representational content (e.g. Harman 1990; Tye 1995, 2000, 2009; Dretske 1995). Block's argument goes as follows: (1) According to representationalism, phenomenal variation is directly explained (and determined) by changes in representational content. (2) Phenomenal variation can be satisfactorily explained by changes in representational contents only if such contents are conceived as “intervallic” (Block 2015, 3), that is, if they specify a range, where the actual value is within the represented range in order to preserve veridicality. (3) However, phenomenal character is more determinate than any range content (i.e. there is always a single way something looks, even if the subject cannot always report it). (4) Therefore, intervallic contents do not fully explain phenomenal variation in terms of representational content. (5) From (1), (2), and (4) it follows that, more generally, phenomenology cannot be fully determined by representational content, and thus representationalism is false. A number of representationalists have attempted to resist Block's argument by developing accounts of intervallic contents that they claim are the right kind of tool to accommodate phenomenal variation. We will consider two classes of proposals that have emerged. The first appeals to notions like “perceptual precision” or “determinable/determined properties” (Nanay 2010, Stazicker 2011). The second appeals to notions of probabilistic percepts (e.g. Morrison 2016). Our principal aim is to clarify problematic ambiguities in these proposals concerning what part of perceptual representation explains phenomenology. We 1) identify the components of perceptual representations (content, vehicle, force) depending on how representation is conceived (e.g. Russellian vs. Fregean), 2) identify for each component how it might plausibly contribute to perceptual phenomenology, and 3) interpret the intervallic proposals according to these components. In light of this clarificatory analysis, we argue that it is ultimately the phenomenal vehicle of the representation, rather than the content, that most plausibly explains phenomenal variation. Consequently, the prospects for the intervallic replies to Block's argument are poor.
113. Abstraction and Probabilities in Evolutionary Theory: Why Drift is not Purely (or Perhaps even Primarily) a Function of Population Size
Philosophy of Science01:53 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 08:53:00 UTC - 2018/11/03 06:59:00 UTC
Jessica Pfeifer (UMBC) It is commonly argued that probabilities in evolutionary theory result from abstraction (e.g., Sober 1984, Matthen and Ariew 2002, Matthen 2009). I delineate different modes of abstraction and show how these different modes affect the way we think about fitness, selection, and drift. One might abstract from factors causally relevant to some effect for epistemic reasons, or one might abstract from causal factors because they obscure some feature of the world. Adequately representing these features requires that we abstract from the “gory details.” Gory details might obscure general patterns, as Kitcher (1984) and Sober (1984) have argued, or they might obscure underlying causal structure, an argument attributable to Mill (1872) and defended by Cartwright (1989). Following Mill's and Cartwright's line of reasoning, I argue that abstracting from factors that affect evolutionary outcomes allows us to represent two different types of causal processes: selection and drift. To make sense of this difference, I draw a distinction between selective environmental factors and non-selective environmental factors. Some environmental factors make a difference to evolutionary success partly in virtue of differences between the competing entities (the selective factors); some environmental factors make a difference simply because those factors are unequally distributed among the competing types in the population of interest (the non-selective factors). If we want fitness values to reflect this difference, as causalists ought to, then we ought to abstract from non-selective factors and make our fitness values relative to selective environmental factors. This has important implications for understanding abstraction, for understanding what it means for selection to act alone, and for understanding the relation between natural selection, drift, and population size. When we abstract from non-selective environmental factors, we are not considering what would happen were those factors absent. Instead, we are representing what would happen were the non-selective environmental factors equally distributed across the competing types of interest. What matters is whether the non-selective environmental factors make a difference to the relative success of the competing entities. The non-selective factors will not make a difference so long as they are equally distributed among the competitors. This marks a significant difference from the way that Mill thought about a causal factor acting alone. In the case of natural selection, selection can act alone even when non-selective causes are present, so long as the non-selective causal factors are equally distributed. Hence, when abstracting from non-selective causal factors, we are not thereby ignoring or subtracting the non-selective causal factors, but instead controlling for those factors. Drift, then, occurs whenever (though not only when) the non-selective causal factors are unequally distributed across the competing types of interest. Hence, whether drift occurs will not be purely a function of population size, but instead will depend partly on how the non-selective causal factors are distributed across the competing entities. Moreover, whether population size makes a difference to the likelihood of drift will be partly an empirical matter, not purely a mathematical truth.
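For contrast, the textbook picture that ties drift to population size alone (my illustration, not the abstract's) comes from binomial sampling in the Wright-Fisher model, where the spread of allele-frequency change is controlled entirely by N:

```latex
% Wright-Fisher sampling: with allele frequency p in a diploid population
% of constant size N, the allele count in the next generation is binomial:
\[ 2N\,p' \sim \mathrm{Binomial}(2N,\,p), \qquad
   \mathbb{E}(p' \mid p) = p, \qquad
   \mathrm{Var}(p' \mid p) = \frac{p(1-p)}{2N} \]
```

On the abstract's view, whether drift occurs also depends on how the non-selective causal factors are distributed across the competing types, so the sampling variance above captures only part of the story.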