Metropolitan Ballroom (Third Floor) Poster Forum & Reception
02 Nov 2018 06:00 PM - 08:00 PM (America/Los_Angeles)

Sponsored by UCI Department of Logic and Philosophy of Science

PSA2018: The 26th Biennial Meeting of the Philosophy of Science Association


Poster Forum in Metropolitan Ballroom

1. A Role for the History and Philosophy of Science in the Promotion of Scientific Literacy
Holly VandeWall (Boston College), Margaret Turnbull (Boston College), Daniel McKaughan (Boston College)
In a democratic system non-experts should have a voice in research and innovation policy, as well as in those policy issues to which scientific and technological expertise are relevant – like climate change, GMOs and emergent technologies. The inclusion of non-expert voices in the debate is both a requirement for truly democratic process and an important counter to the kinds of jargon and group-think that can limit the perspective of more exclusively expert discussions. 
For non-experts to participate in a productive way does require a certain degree of scientific literacy. Yet in our present era of intensive specialization, understanding any one subfield or subdiscipline of the sciences requires years of study. Moreover, the relevant sort of literacy involves not simply familiarity with factual information, but some perspective on the goals, methods, and practices that constitute knowledge formation in the scientific disciplines. 
We have spent the last decade developing a syllabus, readings, and tools for teaching science literacy through the history and philosophy of science. These include assembling appropriate primary and secondary course materials, creating cumulative assignments, developing technology resources to connect students to key events and figures in the history of science, and implementing assessment methods that focus on skill and concept development rather than fact memorization or problem sets. Our poster will showcase these tools and provide attendees with specific suggestions for similar course practices they can implement at their own institutions. 
In particular, we have found that coursework that familiarizes students with how practices of knowledge formation in the sciences have developed over time has helped our students to:
1. Recognize that the methods of science are themselves developed through trial and error, and change over time.
2. Understand that different disciplines of science require different approaches and techniques, and will result in different levels of predictive uncertainty and different standards for what is considered a successful hypothesis.
3. Consider examples of scientific debate and processes through which those debates are resolved with the advantage of historical perspective. 
4. Trace some of the unintended effects of the sciences on society and identify where the social and cultural values of the scientists themselves played a role in their deliberations, and whether or not that had a negative epistemic effect.
Presenters: Holly VandeWall (Boston College), Daniel McKaughan (Boston College)
Co-Authors: Margaret Greta Turnbull (Boston College/Baylor University)
2. STEAM Teaching and Philosophy: A Math and the Arts Course Experiment
Yann Benétreau-Dupin (San Francisco State University)
This poster presents the goals, method, and encouraging results of the first iteration of a course titled “The Art(s) of Quantitative reasoning”. It is a successful example of a STEAM (i.e., STEM+Arts) teaching experiment that relied on inquiry-based pedagogical methods that philosophers are well prepared for.
The course focused on a few issues in quantitative reasoning that have shaped the history of the arts, that is, on a few cases in the history of the arts that posed a technical (mathematical) problem, and on the different ways of overcoming that problem. The main units were the problem of musical tuning and temperament, and perspective and projective geometry in the visual arts.
The general pedagogical approach was to focus on problem solving, in small groups in class and at home, so as to foster conceptual understanding and critical thinking rather than learning rules. The small class size (enrollment capped at 30) made this manageable.
The mathematics covered did not go beyond the high school level. Even though this was not, strictly speaking, a philosophy class, an argumentation-centered teaching method that is not constrained by disciplinary boundaries makes this a teaching experience in which many philosophers can partake.
To assess the ability of such a course to help students become “college ready” in math and meet their general education math requirement, a pre/post test was conducted on usual elementary notions, most of which weren’t explicitly covered during the semester. Overall, students’ elementary knowledge improved, but this was much more true of those whose initial knowledge was lower, to the point where pre- and post-test results were not correlated. Assuming that any further analysis of the data is meaningful at all (given how small the sample size is), these results depend on gender (women’s scores improved more than men’s), but not on year (e.g., no significant difference between freshmen and seniors).
Presenters: Yann Benétreau-Dupin (San Francisco State University)
3. Teaching Philosophy of Biology
Aleta Quinn (University of Idaho)
I teach “environmental philosophy,” “philosophy of biology,” and related undergraduate courses. In this poster I reflect on the purpose(s) of teaching these courses, and in turn on how I should teach them. I am presenting this paper at a major scientific conference in July to collect feedback from individuals with broad backgrounds in molecular or organismal biology and wildlife management, both to improve my own class and to contribute to the pedagogical literature. At the PSA I will present the results of this interaction with biology professionals and students. Challenges include students’ belief that empirical studies will straightforwardly solve conceptual problems, colleagues’ views about the relative value of different sub-fields of biology, and administrators’ demand that pedagogy narrowly fit career objectives. Additionally, the things that interest me as a philosopher and a hobby herper differ from the things that would be of interest and value to my students. I recently argued successfully for my courses to earn credit towards biology degrees, and I expect to contribute to graduate students’ research. What issues and skills, broadly considered “conceptual,” do biologists wish that they and/or their students had an opportunity to study? My poster is an invitation to collaborate across disciplines to improve scientific literacy in the general population, but especially to help develop strong conceptual foundations for future biologists.
Presenters: Aleta Quinn (University of Idaho)
4. How to Teach Philosophy of Biology (To Maximal Impact)
Alexandra Bradner
Given the accelerating pace of the biological sciences, there is arguably no more relevant, useful, and appealing course in the philosophical arsenal right now than the philosophy of biology. We are a scientifically illiterate nation, and philosophers of biology are poised to respond: we can present scientific problems clearly to non-specialists, place those problems in their socio-historical contexts, generate critical analyses, and imagine alternative hypotheses. But philosophy of biology is typically offered only every other year at R1 institutions (and only every 4-6 years elsewhere) as a small, specialized, upper-level undergraduate seminar or as an early graduate seminar—i.e. to minimal impact. 
To make matters worse, in order to succeed in philosophy of biology, students must arrive with prerequisites in math and biology, to process our contemporary readings, and with prerequisites in Aristotle and/or medieval philosophy, to grasp the significance of the Darwinian transition to populationism. Still, departments rarely require these prerequisites, first, because it can be hard enough to enroll the course without any prerequisites; and second, because requiring too many prerequisites can scare off science students, who are especially protective of their GPAs. As a result, general-ed students enroll, thinking they’re in for a “hot topics” course in bioethics, and end up behind and bored. 
In this poster, I will detail the syllabus of a philosophy of evolutionary biology course for a general undergraduate population that achieves three learning outcomes, without abandoning our field’s canonical texts. By the end of the course, students: 
1) come to understand the shift from essentialism/natural state to populationism by reading a series of Darwin’s precursors and much of both the "Origin" and "Descent;" 
2) master the populationist paradigm by exploring a collection of contemporary phil bio papers that build upon the issues encountered in the "Descent;" 
3) satisfy their hunger for bioethics by studying, in the last 2-3 weeks of the course, a group of articles drawn from recent journals.
I have taught this course four times at three different institutions to maximal enrollments. Pedagogically, the course employs a number of techniques and methodologies to maintain student engagement: a one-day philosophical writing bootcamp to alleviate science students’ anxiety about writing philosophy papers; a visit to the library’s rare book room to view historic scientific texts in their original editions; two classes on the "Origin" spent in jigsaws; one class spent on a team-based learning exercise; an external speaker invited to respond to students’ questions via Skype; two weeks of student-directed learning; and lots of lecture and discussion. 
This particular course design comes with some costs, primarily errors of omission, which I will detail. But the benefits of introducing a broader population of students to the philosophical problems of biology compensate for the losses, which can be recuperated in a second course or an independent study. Perhaps most importantly, teaching philosophy of biology in this way delivers to philosophy new students who otherwise would never have encountered the discipline, both sustaining our major and increasing enrollments in upper-level courses.
Presenters: Alexandra Bradner (Kenyon College)
5. Confronting the So-Called "Scientific Method"
Brian Woodcock (University of Minnesota), Arthur Cunningham (St. Olaf College)
Both popular culture and introductory science pedagogy abound with statements about the nature of science and the so-called “scientific method.” This means that college students stepping into a philosophy of science course often come with deep-seated (though perhaps implicit) preconceptions about science, like the idea that there is a single, universally-recognized method that distinguishes science from other domains of inquiry. We believe that directly confronting such popular accounts of how science works is an important task in an introductory philosophy of science course.
Philosophy of science textbooks typically present the ideas of leading philosophers of science, past and present, together with critical evaluation of those ideas. The content contained in such textbooks (for example, about inductivism, hypothetico-deductivism, falsificationism, and contexts of discovery and justification) can be applied to critically evaluate “pop” accounts of how science works, including statements of the so-called “scientific method.” If we want students to understand and appreciate those applications, we need to make it an explicit goal of our courses that students learn to relate philosophical concepts and criticisms to popular accounts of science, and we need to support that goal with examples and exercises. Our experience shows that it is all too easy for students to compartmentalize the academic debates they encounter in a philosophy of science course so that they later fall back into routine ways of describing how science works—for example, by continuing to invoke the idea of a single process called “the scientific method” even after studying debates that cast doubt on the idea that science is characterized by a single, agreed-upon method.
We present a few ways to incorporate popular and introductory pedagogical statements about the nature of science and “the scientific method” in the philosophy of science classroom:
• lecture illustrations
• classroom discussion starters
• conceptual application exercises
• critical analysis and evaluation exercises.
We offer specific suggestions for assignments, including techniques for having students collect “pop” accounts of science to be used in the classroom. In addition, we consider the learning objectives embodied by each kind of exercise and, based on our own experience, some pitfalls to avoid.
Presenters: Brian Woodcock (University of Minnesota, Twin Cities), Arthur Cunningham (St. Olaf College)
6. Phenomenology of Artificial Vision
Cordelia Erickson-Davis (Stanford University)
In the computational theory of vision, the world consists of patterns of light that reflect onto the retina and provoke neural activity that the individual must then reconstruct into an image-based percept (Marr 1979). “Seeing” turns into an optimization problem, with the goal of maximizing the amount of visual information represented per unit of neural spikes. Visual prostheses - which endeavor to translate visual information like light into electrical information that the brain can understand, and thus restore function to certain individuals who have lost their sight - are the literal construal of computational theories of perception. Theories that scholars of cybernetic studies have taught us were born from data not of man but of machine (Dupuy 2000). 
So what happens when we implant these theories into the human body? What do subjects “see” when a visual prosthesis is turned on for the first time? That is, what is the visual phenomenology of artificial vision, and how might these reports inform our theories of perception and embodiment more generally? This poster will discuss insights gathered from ethnographic work conducted over the past two years with developers and users of an artificial retina device, and will elaborate on a method that brings together institutional ethnography and critical phenomenology as a way to elucidate the relationship between the political and the perceptual.
Presenters: Cordelia Erickson-Davis (Stanford University)
7. Normative Aspects of Part-Making and Kind-Making in Synthetic Biology
Catherine Kendig (Michigan State University)
The naming, coding, and tracking of parts and modules is pervasive in all fields of biology. However, these activities seem to play a particular role in synthetic biology, where determining that something is the same part is crucial to ideas of identity as well as to successful construction. 
Synthetic biology is frequently defined as the application of engineering principles to the design, construction, and analysis of biological systems. For example, biological functions such as metabolism may now be genetically reengineered to produce new chemical compounds. Designing, modifying, and manufacturing new biomolecular systems and metabolic pathways draws upon analogies from engineering such as standardized parts, circuits, oscillators, and digital logic gates. These engineering techniques and computational models are then used to understand, rewire, and reengineer biological networks. But is that all there is to synthetic biology? Is this descriptive catalogue of bricolage wholly explanatory of the discipline? Do these descriptions impact scientific metaphysics? If so, how might these parts descriptions inform us of what it is to be a biological kind? Attempting to answer these questions requires investigations into the nature of these biological parts as well as what role descriptions of parts play in the identification of them as the same sort of thing as another thing of the same kind. 
Biological parts repositories serve as a common resource where synthetic biologists can go to obtain physical samples of DNA associated with descriptive data about those samples. Perhaps the best example of a biological parts repository is the iGEM Registry of Standard Biological Parts (igem.org). These parts have been classified into collections, some labeled with engineering terms (e.g., chassis, receiver), some labeled with biological terms (e.g., proteindomain, binding), and some labeled with vague generality (e.g., classic, direction). Descriptive catalogues appear to furnish part-specific knowledge and informational specificity that allow us to individuate them as parts. Repositories catalogue parts. It seems straightforward enough to understand what is contained within the repository in terms of the general concept: part. But understanding what we mean by “part”, how we individuate parts, or how we attribute the property of parthood to something seems to rely on assumptions about the nature of part-whole relationships.
My aim is to tease out these underlying concepts in an attempt to understand the process of what has been called “ontology engineering” (Gruber 2009). To do this, I focus on the preliminary processes of knowledge production which are prerequisite to the construction or identification of ontologies of parts. I investigate the activities of naming and tracking parts within and across repositories and highlight the ineliminable normativity of part-making and kind-making. I will then sketch some problems arising from the varied descriptions of parts contained in different repositories. Lastly, I will critically discuss some recent computational models currently in use that promise to offer practitioners a means of capturing information and meta-information relevant to answering particular questions through the construction of similarity measures for different biological ontologies.
Presenters: Catherine Kendig (Michigan State University)
8. Tool Development Drives Progress in Neurobiology and Engineering Concerns (Not Theory) Drive Tool Development: The Case of the Patch Clamp
John Bickle (Mississippi State University)
Philosophy of science remains deeply theory-centric. Even after the sea change of the past three decades, in which “foundational” questions in specific sciences have come to dominate concerns about science in general, the idea that everything of philosophical consequence in science begins and ends with theory remains prominent. A focus on the way experiment tools develop in laboratory sciences like neurobiology, especially its cellular and molecular mainstream, is thereby illuminating. While theory progress has certainly been an outcome of the development and ingenious use of these tools, theory plays almost no role in their development or justification; engineering concerns predominate in these stages. Theory is thus tertiary in these laboratory sciences: it depends on the development of experiment tools, while tool development depends on engineering ingenuity and persistence. 
Previously I have developed these points via metascientific investigations of tools that revolutionized neurobiology, at least in the judgment of practicing neurobiologists. These tools include gene targeting techniques, brought into neurobiology from developmental biology a quarter-century ago, and the more recent examples of optogenetic and chemogenetic technologies. All of these tools greatly increased the precision with which neurobiologists can intervene into intra- and inter-cellular signaling pathways in specific neurons in behaving rodents to investigate directly the cellular and molecular causal mechanisms of higher, including cognitive, functions. From these cases I have developed a model of tool development experiments in neurobiology, including a tool’s motivating problem, and first- and second-stage “hook” experiments by which a new tool is confirmed, further developed, and brought to more widespread scientific (and sometimes even public) awareness. Most recently I have confirmed this model with another case, the development of the metal microelectrode, which drove the “reductionist” program in mainstream neurobiology from the late 1950s to the early 1980s.
In this poster I further confirm this model of tool development experiments, and sharpen the argument against theory-centrism in the philosophy of science, by reporting the results of a metascientific investigation of the development of patch clamp technology and the initial achievement of the “gigaseal.” More than three decades ago this tool permitted experimentalists for the first time to resolve currents from single ion channels in neuron membranes. Experimental manipulations of this tool soon led to a variety of ways of physically isolating “patches” of neuron membrane, permitting the recording of single channel currents from both sides of the cell membrane. This tool sparked neurobiology’s “molecular wave,” and current theory concerning mechanisms ranging from ion channels and active transporters to ionotropic and metabotropic receptors was quickly achieved. This tool likewise developed through engineering ingenuity, not the application of theory. Its development likewise illustrates the independent “life” of experiment vis-à-vis theory in laboratory sciences, and opposes the theory-centric image of science that continues to pervade both philosophy of science generally and the specific fields of neuroscience—cognitive, computational, systems—that dominate philosophical attention.
Presenters: John Bickle (Mississippi State University/University of Mississippi Medical Center)
9. What Caused the Bhopal Disaster? Causal Selection in Safety and Engineering Sciences
Brian Hanley
In cases where many causes together bring about an effect, it is common to select some causes as particularly important. Philosophers since Mill have been pessimistic about analyzing this reasoning due to its variability and the multifarious pragmatic details of how these selections are made. I argue that Mill was right to think these details matter, but wrong that they preclude philosophical understanding of causal selection. In fact, as I illustrate, analyzing the pragmatic details of preventing accidents can illuminate how scientists reason about the important causes of disasters in complex systems, and can shed new light on how causal selection works.
I examine the case of the Bhopal disaster. Investigators found that human error and component failures caused the disaster. However, in addition to these proximate causes, many systemic factors also caused the disaster. Many safety scientists have argued that poor operating conditions, bad safety culture, and design deficiencies are the more important causes of disasters like Bhopal. 
I analyze this methodological disagreement about the important causes of disasters in terms of causal selection. By appealing to pragmatic details of the purposes and reasoning involved in selecting important causes, and relating these details to differences among causes in a Woodwardian framework, I demonstrate how analysis of causal selection can go beyond where most philosophers stop, and how engineering sciences can offer a new perspective on the problem of causal selection.
Presenters: Brian Hanley (University of Calgary)
10. Fitting Knowledge: Enabling the Epistemic Collaboration between Science and Engineering
Rick Shang (Washington University, St. Louis) 
I first argue that philosophers' interest in unique and distinctive forms of knowledge in engineering cannot explain the epistemic collaboration between science and engineering. I then argue that, using the early history of neuroimaging as my case study, fitting knowledge both captures the distinctive nature of engineering and enables fruitful collaboration between science and engineering. 

On the one hand, philosophers of science are increasingly interested in cross-discipline, cross-industry collaboration. The general philosophical interest reflects the reality that contemporary research is often interdisciplinary and interfield. For example, the development of the Large Hadron Collider is critical in basic physics research. 

On the other hand, philosophers of engineering are interested in unique, distinctive forms of engineering knowledge that are separate from scientific knowledge. For example, Bunge, a pioneer in philosophy of engineering, talks about operative knowledge in engineering. Operative knowledge is a kind of “superficial” knowledge that is rough but sufficient for action. For example, knowledge sufficient for driving a car involves minimal knowledge of the mechanism of the car. 

The challenge to philosophers of engineering, then, is how distinctive forms of engineering knowledge can learn from and inform scientific knowledge to enable science-engineering collaboration. 

I suggest that philosophers should look at the early history of neuroimaging. The earliest instrument to measure positron emission came out of nuclear physics research into the nature of positron emission and annihilation in the 1950s. Medical researchers quickly adopted the instrument to study anatomy and physiology by introducing positron-emitting isotopes into animal and human bodies. The adoption initially met with a lukewarm reception because existing technologies were already able to produce similar data at one tenth the cost. After years of adjusting and trying, medical researchers in the 1970s decided to focus on the real-time, in vivo measurement of cerebral physiological changes, because the positron emission detection instrument could perform scans faster than all existing technologies. 

The history demonstrates the development of fitting knowledge in engineering. Fitting knowledge involves knowledge of what the engineered mechanism is best for. It involves mutual adjustments of the mechanism and its potential uses to find a socially and scientifically viable fit between the mechanism and its use(s). 

This form of knowledge belongs distinctively to engineering because it is primarily about the adjustment of an engineered mechanism and its uses. It does not involve extended research into natural phenomena. For example, both the rapid nature of cerebral physiological changes and the scientific importance of capturing those changes in real time were well known at the time. 

Fitting knowledge, at the same time, bridges across science and engineering. First, the creation of the original mechanism often involves the input of scientific knowledge. In my case, the indispensable input was the nature of positron emission. Second, finding the best fit often involves scientific considerations and goals. In my case, the new use turned out to be measuring cerebral processes in real time. Locating the fit quickly enabled the scientific study of the physiological basis of cognition. 
Presenters: Rick Shang (Washington University in St. Louis)
11. Re-Conceptualizing ‘Biomimetic Systems’: From Philosophy of Science to Engineering and Architecture
Hannah Howland (Pyatok), Vadim Keyser (California State University, Fresno) 
Current philosophy of science literature focuses on the relations between natural, experimental, and technological systems. Our aim is to extend philosophical analysis to engineering and architectural systems. The purpose of our discussion is to re-conceptualize what it means for an engineered system to be ‘biomimetic’. We argue that biomimicry is a process that requires establishing a heteromorphic relation between two systems: a robust natural system and a robust engineered system. We develop a visual schematic that embeds natural and biomimetic systems, and we support our argument with a visual schematic case study of the woodpecker, showing the step-by-step process of biomimicry. 
A recent trend in engineering and architecture is that so-called “biomimetic systems” are modeled after natural systems. Specifically, structural and functional components of the engineered system are designed to mimic system components in natural phenomena. For example, bird bone structures both in nature and in engineering effectively respond to force load. Such structures in nature are robust in that they maintain structural integrity under changing conditions. The bird bone remains resilient with increases in compressive stress; moreover, femur bones seem to maintain robustness of structure even at different scales, maintaining constant safety factors across a large size range. 
While such robust properties are evident in natural systems, we argue that there has been a failure to properly model the same kind of robustness in engineered systems. 
We argue that this failure of modeling is due to misconceptions about ‘biomimicry’ and ‘robustness’: 
Using the philosophical literature on representation and modeling, we show that biomimicry requires establishing a heteromorphic relation between two systems: a robust natural system and a robust design system. 
Additionally, we argue that in order to establish an adequate concept of ‘biomimicry’, engineering and architecture should consider a different conception of ‘robustness’. Using the philosophy of biology literature on ‘robustness’, we argue that robust systems are those that maintain responsiveness to external and internal perturbations. We present a visual schematic to show the continuum of robust systems in nature and engineering. 
By using visual examples from natural systems and engineered systems, we show that so-called “biomimetic systems” fail to establish such a relation. This is because most of these engineered systems focus on symbolic association and aesthetic characteristics. We categorize these focal points of failed biomimetic engineering and design in terms of ‘bio-utilization’ and ‘biophilia’. 
We conclude with the suggestion that these re-conceptualizations of ‘biomimicry’ and ‘robustness’ will be useful for: 1) Pushing the fields of engineering and architecture to make more precise the relations between natural and engineered systems; and 2) Developing new analytical perspectives about ‘mimetic’ systems in philosophy of science. 
Presenters: Hannah Howland (Pyatok), Vadim Keyser (California State University, Fresno)
12. The Disunity of Major Transitions in Evolution
Alison McConwell (University of Calgary)
Major transitions are events that occur at the grand evolutionary scale and mark drastic turning points in the history of life. They affect evolutionary processes and have significant downstream consequences. Historically, accounts of such large-scale macroevolutionary patterns included progressive directionality, new levels of complexity, and emerging units of selection, all leading toward human existence (Huxley 1942, Stebbins 1969, Buss 1987). 
In more recent models, human-centrism is less common; however, it is not clear that all events are of the same kind (Maynard-Smith and Szathmáry 1995, O’Malley 2014, Calcott and Sterelny 2011). The lack of unity is identified as a failure to “get serious about evolution at the macroscale” (McShea and Simpson 2011, 32). Disunity allegedly yields inconsistencies in our explanations, as well as an arbitrary collection of events, or “just one damn thing after another” (ibid, 22, 32). Against this, I argue for a pluralist view of major transitions, which yields a productive disunity.
Epistemically, the claim that all major events have a common property might be explanatorily useful. To unify major events under a single explanatory framework is supposed to reveal something about the robustness and stability of evolutionary processes, and their capacity to produce the same types of events over time. However, this unificatory aim concerning models of transitions is not the only fruitful approach. Setting unification aims aside provides the opportunity for detailed investigations of different transition kinds. Major transitions are diverse across life’s categories and scales, and can vary according to scientific interest. I draw on work from Gould (1989, 2001), who argued for chance’s greater role in life’s history; he denied both directionality and progress in evolution and focused on the prevalence of contingent happenstances. His research on evolutionary contingency facilitated an extensive program, which has primarily focused on the shape or overall pattern of evolutionary history. That pattern includes dependency relations among events and the chance-type processes (e.g., mutation, drift, species sorting, and external disturbances) that influence them. Gould’s evolutionary contingency thesis grounds a contingent plurality of major transition kinds. 
Specifically, I argue that the causal mechanisms of major transitions are contingently diverse outcomes of evolution by focusing on two case studies: fig-wasp mutualisms and cellular cooperation. I also discuss how chance-based processes of contingent evolution, such as mutation, cause that diversity. And finally, I argue that this diversity can be classified into a plurality of transition kinds. Transition plurality is achieved by attention to structural details, which distinguish types of events. Overall, there is not one single property, or a single set of properties, that all and only major transitions share. On this picture, one should expect disunity, which facilitates a rich understanding of major shifts in history. Unity as an epistemic virtue need not be the default position. The lack of a common thread across transition kinds reveals something about the diversity and fragility in evolution, as well as the role of forces besides natural selection driving the evolutionary process. Overall, to accept a disunified model of major transitions does not impoverish our understanding of life’s history.
Presenters: Alison McConwell (Stanford University)
13. Representation Re-construed: Answering the Job Description Challenge with a Construal-based Notion of Natural Representation
Mikio Akagi (Texas Christian University) 
William Ramsey (2007) and others worry that cognitive scientists apply the concept “representation” too liberally. Ramsey argues that representations are often ascribed according to a causal theory he calls the “receptor notion,” according to which a state s represents a state of affairs p if s is regularly and reliably caused by p. Ramsey claims that the receptor notion is what justifies the ascription of representations to edge-detecting cells in V1, fly-detecting cells in frog cortex, and prey-detecting mechanisms in Venus flytraps. However, Ramsey argues that the receptor notion also justifies ascribing representational states to the firing pin in a gun: since the state of the trigger regularly and reliably causes changes in the state of the firing pin, the firing pin represents whether the trigger is depressed. The firing pin case is an absurd consequence. He concludes the receptor notion is too liberal to be useful to scientists. 
I argue that something like the receptor notion can be salvaged if being a receptor is contextualized in terms of construal. Construals are judgment-like attitudes whose truth-values can licitly vary independently of the situation they describe. We can construe an ambiguous figure like the Necker cube as if it were viewed from above or from below, and we can construe the duck-rabbit as if it were an image of a duck or of a rabbit. We can construe an action like skydiving as brave or foolhardy, depending on which features of skydiving we attend to. On a construal-based account of conceptual norms, a concept (e.g., “representation”) is ascribed relative to a construal of a situation. 
I describe a minimal sense of what it means to construe a system as an “organism,” and how ascriptions of representational content are made relative to such construals. Briefly, construing something as an organism entails construing it such that it has goals and mechanisms for achieving those goals in its natural context. For example, frogs qua organisms have goals like identifying food and ingesting it. I suggest that ascriptions of natural representations and their contents are always relative to some construal of the representing system qua organism. Furthermore, the plausibility of representation-ascriptions is constrained by the plausibility of their coordinate construal-qua-organism. So the contents we ascribe to representations in frog visual cortex are constrained by the goals we attribute to frogs. 
Absurd cases like Ramsey’s firing pin can be excluded (mostly) since guns are not easily construed as “organisms.” They have no goals of their own. It is not impossible to ascribe goals to artifacts, but the ascription of folk-psychological properties to tools generally follows a distinct pattern from representation-ascription in science. 
My construal-based proposal explains the practice of representation-ascription better than Ramsey’s receptor notion. It preserves Ramsey’s positive examples, such as the ascription of representations to visual cortex, but tends to exclude absurd cases like the firing pin. Since cognitive scientists do not actually ascribe natural representations to firearms, I submit that my account is a more charitable interpretation of existing scientific practice. 
Presenters: Mikio Akagi (Texas Christian University)
14. Adaptationism Revisited: Three Senses of Relative Importance
Mingjun Zhang (University of Pennsylvania)
In the sixth edition of the Origin, Darwin wrote, “I am convinced that Natural Selection has been the most important, but not the exclusive, means of modification” (Darwin 1872, 4). The idea that natural selection is the most important, if not the only important, driving factor of evolution is further developed and crystallized in the various views that go under the name of adaptationism. However, it is not always clear what exactly it means to talk about relative importance in the relevant debate. In this paper, I distinguish three senses of relative importance and use this distinction to reexamine the various claims of adaptationism. I give examples of how these different senses of relative importance are applied in different adaptationist claims, and discuss some possible issues in their application.
The first sense: A factor is more important than others if the proportion of phenomena in a domain explained or caused by this factor is greater than the proportion of those explained or caused by other factors. I call it relative importance based on relative frequency. The famous debate between Fisher and Wright about the role of natural selection can be understood as a debate about the relative importance of natural selection in this sense, in which they disagree about the relative frequency of genetic variation within and between populations caused by natural selection and other factors like drift. However, philosophers like Kovaka (2017) have argued that there is no necessary connection between relative frequency and relative importance. 
The second sense: A factor is more important than others if it can explain special phenomena in a domain and help answer the central or most important questions within it. I call it relative importance based on explanatory power. This kind of relative importance is involved in the view of explanatory adaptationism formulated by Godfrey-Smith (2001). According to this view, natural selection is the most important evolutionary factor because it can solve the problems of apparent design and/or adaptedness, which are the central problems in biology. However, some biologists may deny that there are “central questions” in biological research. Even if there are central questions in biology, apparent design and adaptedness may not be the only ones.
The third sense: A factor is more important than others if it has greater causal efficacy in the production of a phenomenon than others. For example, the gravity of the Moon is a more important cause of the tides on the Earth than the gravity of the Sun because the Moon has a bigger influence on the tidal height on the Earth. I call it relative importance based on causal efficacy. Orzack and Sober (1996) understand adaptationism as the view that selection is typically the most important evolutionary force. Here they use relative importance in the third sense because their formulation involves the comparison of causal efficacy between selection and other factors in driving evolution. The main issue is how to measure the causal efficacy of different factors.
Presenters: Mingjun Zhang (University of Pennsylvania)
15. Mood as More Than a Monitor of Energy
Mara McGuire (Mississippi State University) 
Muk Wong (2016) has recently developed a theory of mood and mood function that draws on Laura Sizer’s (2000) computational theory of moods. Sizer argues that moods are higher-order functional states: biases in cognitive processes such as attention allocation, memory retrieval, and mode of information processing. Wong supplements Sizer’s account with an account of mood elicitation: what mood is a response to and what function(s) mood serves. Wong claims that mood is a “mechanism” that monitors our energy levels, both mental and physical, in relation to environmental energy demands and, based on this relation, biases our functional states. Based on his account of mood elicitation, Wong next proposes a single function of mood: to maintain an “equilibrium” between our internal energy and the energy requirements of our environment. 
I argue that while the need for an account of mood elicitation is well taken, it cannot be understood in terms of a mechanism monitoring energy levels. A theory of mood elicitation must be able to explain the elicitation of different types of moods on different occasions (e.g. anxious, irritable, contented, etc.), that is, why different types of moods are elicited by different events or states of affairs. Understanding mood elicitation along a single dimension, such as the relation between energy level and energy demands, is incapable of doing this. Distinct mood types appear to be more complicated than just differential responses to energy levels and demands. But then Wong’s account of mood function must be rejected. I propose instead that we adopt a multi-dimensional account of mood elicitation. As a first step toward this, I draw upon a different conception of mental energy to Wong’s and argue that mental energy should be expanded to include states of ego-depletion as well as cognitive fatigue (Inzlicht & Berkman 2015). While this more robust account of mental energy increases the explanatory power of Wong’s account, his theory would still not be sufficient to account fully for the elicitation of different types of moods. I then propose that we draw on a related area of affective science, appraisal theories of emotion elicitation, and consider whether important dimensions recognized in these theories, such as “goal relevance and congruence,” “control” and “coping potential” (Moors et al. 2013) are helpful toward understanding the elicitation of moods. I suggest that drawing on these dimensions to start to construct a multi-dimensional account of mood elicitation may explain the elicitation of different types of moods and provide a better foundation for understanding mood function. 
Presenters: Mara McGuire (Mississippi State University)
16. Armchair Chemistry and Theoretical Justification in Science
Amanda Nichols (Oklahoma Christian University), Myron Penner (Trinity Western University)
In the late 19th century, Sophus Jørgensen proposed structures for cobalt complexes that utilized the more developed bonding principles of organic chemistry and the reigning understanding of valence. Similar to how organic compounds typically form hydrocarbon chains, Jørgensen created models for cobalt complexes that also had a chainlike structure. His models featured (1) a cobalt metal center with three attachments because cobalt was understood as trivalent and (2) one of those attachments was a chain of atoms, like the carbon chain featured in organic chemistry.
Alfred Werner proposed a different model for cobalt compounds that featured octahedral arrangements around the cobalt metal center, calling the metal complex a coordination compound. Werner’s coordination theory introduced a new type of valence allowing cobalt to have six attachments and abandoned Jørgensen’s chain theory. Experimental work confirmed Werner’s theory, making it central to inorganic chemistry.
One issue in the Jørgensen-Werner debate over the structure of cobalt complexes concerns differences between the two scientists over the nature of theoretical justification: the epistemic reasons each had for resisting change (as with Jørgensen) or looking for a different model (as with Werner). In our paper, we compare and contrast the concepts of theoretical justification employed by Jørgensen and Werner. Jørgensen felt that Werner lacked justification for his experimental model. Werner, presumably, had some justification for his model, albeit a different sort of justification than Jørgensen’s. 
While Werner constructed a radically different and creative model, Jørgensen resisted revision to the established framework. Werner emphasized symmetry and geometric simplicity in his model, and the consistent patterns that emerged were viewed as truth-conducive. Jørgensen, on the other hand, criticized Werner’s model on the basis that it lacked evidence and was an “ad hoc” explanation. Jørgensen disagreed that Werner’s method of hypothetical reasoning was the best approach in theory-building. G. N. Lewis’ electronic theory of valency, and later theories such as the crystal field and molecular orbital theories of bonding that explain Werner’s coordination theory, were not developed until later. Though Werner seemed comfortable proceeding with details not settled, Jørgensen was not. Werner’s descriptions of his model would frame him as a scientific realist, while some historical evidence suggests that Jørgensen could be classified as an anti-realist. Assuming this, we explore the contribution realism makes towards the progress of science, and how anti-realism might hinder it. We conclude by noting how the different concepts of theoretical justification embodied by Jørgensen and Werner help us understand both continuity and diversity in multiple approaches to scientific method.
Presenters: Amanda Nichols (Oklahoma Christian University), Myron Penner (Trinity Western University)
17. Model-Groups as Scientific Research Programmes
Cristin Chall (University of South Carolina) 
The Standard Model (SM) is one of our most well tested and highly confirmed theories. However, physicists, perceiving flaws in the SM, have been building models describing physics that goes beyond it (BSM). Many of these models describe alternatives to the Higgs mechanism, the SM explanation for electroweak symmetry breaking (EWSB). So far, no BSM model has been empirically successful; meanwhile, the Higgs particle discovered in 2012 has exhibited exactly the properties predicted by the SM. Despite this, many BSM models have remained popular, even years after this SM-like Higgs boson was found. This is surprising, since it appears to fly in the face of conventional understandings of scientific practice to have competing models interacting in a complex dynamics even though none of them has achieved empirical success and all of them are faced with a predictively superior alternative. The question becomes: How do we rationally explain physicists' continued work on models that, though not entirely excluded, are increasingly experimentally disfavoured? 
I will argue that the best framework for explaining these complex model dynamics is the notion of scientific research programmes, as described by Lakatos (1978). To apply this framework, however, I need to modify it to accommodate the collections of models which share the same core theoretical commitments, since Lakatos dismisses models to the periphery of research programmes. These collections of models, which I call ‘model-groups’, behave as full-fledged research programmes, supplementing the series of theories that originally defined research programmes. By allowing the individual models to be replaced in the face of unfavourable empirical results, the hard core of a model-group is preserved. The practical benefit of applying this framework is that it explains the model dynamics: physicists continue to formulate and test new models based on the central tenets of a model-group, which provide stability and avenues for making progress, and rationally continue giving credence to BSM models lacking the empirical support enjoyed by the SM account of EWSB. 
To demonstrate the model dynamics detailed by the Lakatosian framework, I will use the Composite Higgs model-group as an example. Composite Higgs models provide several benefits over the SM account, since many have a dark matter candidate, or accommodate naturalness. However, the measured properties of the Higgs boson give every indication that it is not a composite particle. I trace the changing strategies used in this model-group in order to demonstrate the explanatory power of Lakatosian research programmes applied in this new arena. Thus, I show that Lakatos, suitably modified, provides the best avenue for philosophers to describe the model dynamics in particle physics, a previously under-represented element of the philosophical literature on modelling. 
Presenters: Cristin Chall (University of South Carolina/Rheinische Friedrich-Wilhelms-Universität Bonn)
18. Who Is Afraid of Model Pluralism?
Walter Veit
This paper argues for the explanatory power of evolutionary game theory (EGT) models in three distinct but closely related ways. First, following Sugden and Aydinonat & Ylikoski, I argue that EGT models are constructed parallel worlds, i.e., surrogate systems in which we can explore particular (evolutionary) mechanisms by isolating everything that could be interfering in the real world. By specifying the pool of strategies, the game, and the fitness of the strategies involved, EGT explores potential phenomena and dynamics emerging and persisting under natural selection. Given a particular phenomenon, e.g., cooperation, war of attrition, or costly signalling, EGT enables the researcher to explore multiple ‘how-possibly’ explanations of how the phenomenon could have arisen and to contrast them with each other, e.g., sexual selection, kin selection, and group selection. Secondly, I argue that by eliminating ‘how-possibly’ explanations through eliminative induction, we can arrive at robust mechanisms explaining the stability and emergence of evolutionarily stable equilibria in the real world. In order for such an eliminative process to be successful, it requires deliberate research in multiple scientific disciplines such as genomics, ethology, and ecology. This research should be guided by the assumptions made in the applications of particular EGT models, especially the range of parameters for payoffs and the strategies found in nature. Thirdly, I argue that bridging the gap between the remaining set of ‘how-possibly’ explanations and the actual explanation requires abduction, i.e., inference to the best explanation. Such inference proceeds by considering issues of resemblance between the multiple EGT models and the target system in question, evaluating their credibility. Together these three explanatory strategies turn out to be jointly necessary and sufficient to turn EGT models into a genuine explanation.
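As a concrete illustration of the kind of surrogate system described above, the following minimal sketch (not from the paper; the Hawk-Dove payoff values V and C and all numerical settings are assumptions chosen for illustration) runs replicator dynamics for a simple two-strategy game in Python, the sort of 'how-possibly' model EGT provides for the stability of a mixed equilibrium.

```python
# Minimal sketch (illustrative, not from the paper): replicator dynamics for a
# Hawk-Dove game, one 'how-possibly' model of a stable evolutionary equilibrium.
import numpy as np

V, C = 2.0, 3.0  # assumed resource value and fighting cost (V < C)
# Payoff matrix: rows = focal strategy (Hawk, Dove), columns = opponent strategy.
A = np.array([[(V - C) / 2, V],
              [0.0, V / 2]])

def replicator_step(x, dt=0.01):
    """One Euler step of dx_i/dt = x_i * (f_i - f_bar)."""
    f = A @ x          # expected payoff of each strategy
    f_bar = x @ f      # mean population payoff
    return x + dt * x * (f - f_bar)

x = np.array([0.9, 0.1])   # initial frequencies of Hawk and Dove
for _ in range(5000):
    x = replicator_step(x)

# With V < C the population approaches the mixed equilibrium Hawk = V/C (about 0.67),
# i.e., pure aggression does not go to fixation in this surrogate system.
print("Equilibrium frequencies (Hawk, Dove):", x)
```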
 
Presenters: Walter Veit (University of Bristol)
19. The Role of Optimality Claims in Cognitive Modelling
Brendan Fleig-Goldstein (University of Pittsburgh)
Why might a scientist want to establish a cognitive model as rational or “optimal” in some sense (e.g., relative to some normal environment)? In this presentation, I argue that one motivation for finding optimal cognitive models is to facilitate a particular strategy for marshalling evidence for cognitive theories. This claim stands in contrast to previous thinking about the role of optimality claims in cognitive modelling. Previous thinking has generally suggested that optimality claims either: serve to help provide teleological explanations (explanatory role); heuristically aid in the search for predictively accurate models (methodological role); or are themselves hypotheses in need of testing (empirical role). The idea that optimality claims can play a role in the process of testing theories of cognition has not previously been explored.
The evidential strategy proceeds as follows: a scientist proposes an optimal model, and then uses this optimal model to uncover systematic discrepancies between idealized human behavior and observed human behavior. The emergence of discrepancies with a clear signature leads to the discovery of previously unknown details about human cognition (e.g., computational resource costs) that explain the discrepancy. The incorporation of these details into models then gives rise to new idealized models that factor in these details. New discrepancies emerge, and the process repeats itself in an iterative fashion. Successful iterations of this process result in tighter agreement between theory and observation. I draw upon George E. Smith’s analysis of evidence in Newtonian gravity research (e.g., 2014) to explain how this process of iteratively uncovering “details that make a difference” to the cognitive system constitutes a specific logic of theory-testing. I discuss Thomas Icard’s (e.g., 2018) work on bounded rational analysis as an illustration of this process in action.
Presenters: Brendan Fleig-Goldstein (University of Pittsburgh)
20. Mechanistic Explanations and Mechanistic Understanding in Computer Simulations: A Case Study in Models of Earthquakes
Hernan Felipe Bobadilla Rodriguez (University of Vienna)
Scientists often resort to computer simulations to explain and understand natural phenomena. Several philosophers of science claim that these epistemic goals are related: Explanations provide understanding. Controversially, while some philosophers say that explanations are the only way to gain understanding, others argue that there are alternative, non-explanatory ways to gain understanding. 
The aim of this paper is to assess explanations and understanding gained by means of computer simulations. In particular, I focus on assessing mechanistic explanations and mechanistic understanding – in the “new mechanist” sense. Furthermore, I examine the relations between mechanistic explanations and mechanistic understanding. 
In order to achieve these aims, I perform a case study based on an agent-based computer simulation, known as the Olami, Feder and Christensen model (OFC, 1992). The OFC model predicts and explains aspects of a robust behaviour of earthquakes, known as the Gutenberg-Richter law. This behaviour consists in the robust power-law distribution of earthquakes according to their magnitudes across seismic regions. Roughly speaking, the OFC model simulates the power-law distribution of earthquakes by modelling the reciprocal influence between frictional forces and elastic deformation at a generic geological fault. In this case, a geological fault is represented as a cellular automaton in which cells redistribute elastic potential energy to their neighbouring cells when local thresholds of static friction are exceeded. 
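For readers unfamiliar with the model, the following is a minimal sketch of an OFC-style cellular automaton (my own illustration with arbitrary parameter values, not the authors' implementation): a lattice of stress values is driven to a threshold, toppling cells pass a fraction of their stress to their neighbours, and the recorded avalanche sizes approximate the power-law behaviour discussed above.

```python
import numpy as np

# Each cell holds a "stress" value; when a cell exceeds the threshold it topples and
# passes a fraction alpha of its stress to its four neighbours. Open boundaries let
# stress leak out, and alpha < 0.25 makes the model non-conservative, as in OFC.
L, alpha, threshold = 32, 0.2, 1.0
rng = np.random.default_rng(0)
stress = rng.uniform(0, threshold, size=(L, L))
avalanche_sizes = []

for _ in range(2000):                        # driving steps
    stress += threshold - stress.max()       # uniform drive until one cell reaches threshold
    size = 0
    unstable = np.argwhere(stress >= threshold)
    while len(unstable) > 0:
        for i, j in unstable:
            s, stress[i, j] = stress[i, j], 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:
                    stress[ni, nj] += alpha * s
            size += 1
        unstable = np.argwhere(stress >= threshold)
    avalanche_sizes.append(size)

# A histogram of avalanche_sizes on log-log axes approximates a power law,
# the model analogue of the Gutenberg-Richter distribution.
```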
I deliver the following results:
1) The OFC model is a mechanistic model. That is, the component elements of the OFC model can be interpreted as mechanistic elements, namely entities, activities and organization. 
2) The OFC model is a mechanism, namely a computing mechanism à la Piccinini (2007), which produces phenomena, namely outputs in a computer program. 
3) A description of the OFC model, qua computing mechanism, mechanistically explains the power-law distribution of model-earthquakes.
4) The mechanistic explanation of the power-law distribution of model-earthquakes in the OFC models does not hold for real earthquakes. This is due to the lack of mapping between the mechanistic elements of the OFC model and the putative mechanistic elements in a geological fault. In particular, a mapping of mechanistic entities is problematic. The mechanistic entities in the OFC model, namely cells of the cellular automaton, are arbitrary divisions of space. They are not working parts in a geological fault. 
5) However, the OFC model provides mechanistic understanding of the power-law distribution of real earthquakes. The OFC model provides us with a mechanism that can produce a power-law distribution of earthquakes, even though it is not the actual one. Information about a possible mechanism may give oblique information about the actual mechanism (Lipton, 2009). In this sense, surveying the space of possible mechanisms advances our mechanistic understanding of real earthquakes.
Presenters
HB
Hernan Bobadilla
University Of Vienna
21. Concepts of Approximate Solutions and the Finite Element Method
Philosophy of Science 00:21 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:21:00 UTC - 2018/11/03 06:59:00 UTC
Nicolas Fillion (Simon Fraser University)
I discuss epistemologically unique difficulties associated with the solution of mathematical problems by means of the finite element method. This method, used to obtain approximate solutions to multidimensional problems within finite domains with possibly irregular boundary conditions, has received comparatively little attention in the philosophical literature, despite being the most dependable computational method used by structural engineers and other modelers handling complex real-world systems. Like most numerical methods in the standard numerical analysis curriculum, this method breaks from the classical perspective on exact mathematical solutions, as it involves error-control strategies within given modeling contexts. This is why assessing the validity of such inexact solutions requires that we emphasize aspects of the relationship between solutions and mathematical structures that are not required to assess putative exact solutions. One such structural element is the sensitivity or robustness of solutions under perturbations, whose characterization leads to a deeper understanding of the mechanisms that drive the behavior of the system. The transition to an epistemological understanding of the concept of approximate solution can thus be characterized as an operative process of structure enrichment. This transition generates a scheme to assess the justification of solutions that contains more complex semantic elements whose murkier inner logic is essential to a philosophical understanding of the lessons of applied mathematics.
To be sure, there is a practical acceptance of the finite element method by practitioners in their attempt to overcome the representational and inferential opacity of the models they use, mainly because it has proved to be tremendously successful. However, the finite element method differs in important respects from other numerical methods. What makes the method so advantageous in practice is its discretization scheme, which is applicable to objects of any shape and dimension. This innovative mode of discretization provides a simplified representation of the physical model by decomposing its domain into triangles, tetrahedra, or analogs of the right dimension. Officially, each element of the simplified domain is then locally associated with a low-degree piecewise polynomial that is interpolated with the polynomials of other elements to ensure sufficient continuity between the elements. On that basis, a recursive composition of all the elements is made to obtain the solution over the whole domain. However, this presents applied mathematicians with a dilemma, since using piecewise polynomials that will be continuous enough to allow for a mathematically sound local-global "gluing" is typically computationally intractable. Perhaps surprisingly, computational expediency is typically chosen over mathematical soundness. Strang has characterized this methodological gambit as a "variational crime." I explain how committing variational crimes is a paradigmatic violation of epistemological principles that are typically used to make sense of approximation in applied mathematics. On that basis, I argue that the epistemological meaning of these innovations and difficulties in the justification of the relationship between the system and the solution lies in additional structural enrichments of the concept of validity of a solution that are in line with recently developed methods of a posteriori error analysis.
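As a toy illustration of the discretization-and-assembly idea (my own sketch, not an example from the paper), the code below solves -u''(x) = f(x) on [0, 1] with piecewise-linear elements on a uniform mesh; even this simplest case already replaces the exact problem with a finite system whose solution is only an approximation to be assessed.

```python
import numpy as np

def fem_1d(f, n_elements=20):
    """Solve -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0 using piecewise-linear elements."""
    h = 1.0 / n_elements
    n_interior = n_elements - 1
    # Stiffness matrix for linear "hat" functions on a uniform mesh: tridiagonal (2, -1)/h.
    K = (2.0 * np.eye(n_interior)
         - np.eye(n_interior, k=1)
         - np.eye(n_interior, k=-1)) / h
    nodes = np.linspace(0.0, 1.0, n_elements + 1)
    # Load vector assembled element by element with the midpoint rule
    # (itself a further, deliberate approximation).
    F = np.zeros(n_interior)
    for e in range(n_elements):
        mid = 0.5 * (nodes[e] + nodes[e + 1])
        for node in (e, e + 1):
            if 1 <= node <= n_interior:          # boundary nodes are fixed to zero
                F[node - 1] += f(mid) * 0.5 * h  # each hat function equals 1/2 at the midpoint
    u_interior = np.linalg.solve(K, F)
    return nodes, np.concatenate(([0.0], u_interior, [0.0]))

# Example with a known exact solution: f(x) = pi^2 sin(pi x)  =>  u(x) = sin(pi x).
nodes, u = fem_1d(lambda x: np.pi ** 2 * np.sin(np.pi * x))
print("max nodal error:", np.max(np.abs(u - np.sin(np.pi * nodes))))
```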
Presenters
NF
Nicolas Fillion
Simon Fraser University
22. A Crisis of Confusion: Unpacking the Replication Crisis in the Computational Sciences
Philosophy of Science 00:22 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:22:00 UTC - 2018/11/03 06:59:00 UTC
Dasha Pruss (University of Pittsburgh)
A flurry of failed experimental replications in the 21st century has led to the declaration of a "replication crisis" in a number of experimental fields, including psychology and medicine. Recent articles (e.g., Hutson, 2018) have proclaimed a similar crisis in the computational sciences: researchers have had widespread difficulties in reproducing key computational results, such as reported levels of predictive accuracy of machine learning algorithms. At first, importing the experimental concept of a replication crisis to explain what is happening in the computational sciences might seem attractive - in both fields, questionable research practices have led to the publication of results that cannot be reproduced. With the help of careful conceptual analysis, however, it becomes clear that this analogy between experimental sciences and computational sciences is at best a strained one, and at worst a meaningless one.
Scientific writing on experimental replication is awash with conceptual confusion; to assess the concept of replication in the computational sciences, I appeal to Machery's re-sampling account of experimental replication (Machery, Ms). On the re-sampling account, an experiment replicates an earlier experiment if and only if the new experiment consists of a sequence of events of the same type as the original experiment, while re-sampling some of its experimental components, with the aim of establishing the reliability (as opposed to the validity) of an experimental result. The difficulty of applying the concept of experimental replication to the crisis in the computational sciences stems from two important epistemic differences between computational sciences and experimental sciences: the first is that the distinction between random and fixed factors is not as clear or consistent in the computational sciences as it is in the experimental sciences (the components that stay unchanged between the two experiments are fixed components, and the components that get re-sampled are random components). The second is that, unlike in the experimental sciences, computational components often cannot be separately modified - this means that establishing the reliability of a computational result is often intimately connected to establishing the validity of the result. In light of this, I argue that there are two defensible ways to conceive of replicability in the computational sciences: weak replicability (reproducing an earlier result using identical code and data and different input or system factors), which is concerned with issues already captured by the concept of repeatability, and strong replicability (reproducing an earlier result using different code or data), which is concerned with issues already captured by robustness. Because neither concept of replicability captures anything new with regard to the challenges the computational sciences face, I argue that we should resist the fad of seeing a replication crisis at every corner and should do away with the concept of replication in the computational sciences. Instead, philosophers and computer scientists alike should focus exclusively on issues of repeatability and robustness.
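As a toy illustration of the two notions (my own sketch, not drawn from the poster), consider a computational "result" consisting of a model's estimated accuracy: weak replicability re-runs identical code and data while re-sampling a system factor such as the random seed, whereas strong replicability applies the same analysis to different data.

```python
import numpy as np

# Fixed "original" data set for the study.
rng_data = np.random.default_rng(0)
X = rng_data.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + rng_data.normal(size=200) > 0).astype(int)

def run_analysis(X, y, seed):
    """Original pipeline: random train/test split, least-squares classifier, report accuracy."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    train, test = idx[:150], idx[150:]
    w, *_ = np.linalg.lstsq(X[train], y[train] * 2 - 1, rcond=None)
    return np.mean((X[test] @ w > 0).astype(int) == y[test])

# Weak replicability: identical code and data, re-sampled "system factors" (here, the seed).
weak = [run_analysis(X, y, seed) for seed in range(5)]

# Strong replicability: the same analysis applied to newly sampled data.
rng_new = np.random.default_rng(42)
X2 = rng_new.normal(size=(200, 3))
y2 = (X2 @ np.array([1.0, -2.0, 0.5]) + rng_new.normal(size=200) > 0).astype(int)
strong = run_analysis(X2, y2, seed=0)

print("weak:", weak, "strong:", strong)
```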
Presenters
DP
Dasha Pruss
University Of Pittsburgh HPS
23. Deep Learning Models in Computational Neuroscience
Philosophy of Science 00:23 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:23:00 UTC - 2018/11/03 06:59:00 UTC
Imran Thobani (Stanford University)
The recent development of deep learning models of parts of the brain such as the visual system raises exciting philosophical questions about how these models relate to the brain. Answering these questions could help guide future research in computational neuroscience as well as provide new philosophical insights into the various ways that scientific models relate to the systems they represent or describe. 
By being trained to solve difficult computational tasks like image classification, some of these deep learning models have been shown to successfully predict neural response behavior without simply being fit to the neural data (Yamins 2016). This suggests that these models are more than just phenomenological models of neural response behavior. The thought is that there is a deeper similarity between the deep learning model and the neural system it represents, one that goes beyond the sharing of neural response properties. But what exactly is this similarity relationship? 
I argue that there are three distinct similarity relationships that can hold between a deep learning model and a target system in the brain, and I explicate each relationship. The first is surface-level similarity between the activation patterns of the model neurons in response to a range of sensory inputs and the firing rates of actual neurons in response to the same (or sufficiently similar) sensory stimuli. The second kind of similarity is architectural similarity between the neural network model and the actual neural circuit in a brain. The model is similar to the brain in this second sense to the extent that the mathematical relationships that hold between the activations of model neurons are similar to the actual relationships between firing rates of neurons in the brain. The third kind of similarity is similarity between the coarse constraints that were used in the design of the model and the constraints that the target system in the brain obeys. These constraints include, amongst other things, the objective function that the model is trained to optimize, the number of neurons used in the model, and the learning rule that is used to train the model. 
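The first, surface-level kind of similarity is the one most directly quantified in practice; the sketch below (my own illustration with random placeholder arrays, not the author's analysis) scores how well model-unit activations linearly predict recorded firing rates across a set of stimuli.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Random arrays stand in for real model activations and neural recordings.
rng = np.random.default_rng(0)
model_acts = rng.normal(size=(200, 50))      # 200 stimuli x 50 model units
firing_rates = (model_acts @ rng.normal(size=(50, 10))
                + 0.5 * rng.normal(size=(200, 10)))  # 200 stimuli x 10 recorded neurons

# Cross-validated R^2 of a linear (ridge) mapping from model units to each neuron.
scores = []
for neuron in range(firing_rates.shape[1]):
    r2 = cross_val_score(Ridge(alpha=1.0), model_acts,
                         firing_rates[:, neuron], cv=5, scoring="r2")
    scores.append(r2.mean())

print("median cross-validated R^2 across neurons:", np.median(scores))
```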
Having distinguished these three kinds of similarity, I address the question of which kind of similarity is most relevant to the question of what counts as a good model of the brain. I argue that similarity at the level of coarse constraints is a necessary criterion for a good model of the brain. While architectural and surface-level similarity are relevant criteria for a good model of the brain, their relevance needs to be understood in terms of providing evidence for similarity at the level of coarse constraints.
Presenters
IT
Imran Thobani
Stanford University
24. Empirical Support and Relevance for Models of the Evolution of Cooperation: Problems and Prospects
Philosophy of Science 00:24 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:24:00 UTC - 2018/11/03 06:59:00 UTC
Archie Fields III (University of Calgary)
Recently it has been argued that agent-based simulations which use the Prisoner’s Dilemma and other game-theoretic scenarios to study the evolution of cooperation are seriously flawed because they lack empirical support and explanatory relevance to actual cooperative behavior (Arnold 2014, 2015). I respond to this challenge for simulation-based studies of the evolution of cooperation in two ways. First, I argue that it is simply false that these models lack empirical support, drawing attention to a case which highlights how empirical information has been and continues to be incorporated into agent-based, game-theoretic models used to study the evolution of cooperation. In particular, I examine the work of Bowles and Gintis (2011) and show how they draw upon ethnographic and biological evidence, as well as experiments in behavioral psychology, in their models of the evolution of strong reciprocity. Ultimately, I take Arnold’s misdiagnosis of the empirical support and relevance of these models to result from overly stringent standards for empirical support and a failure to appreciate the role the results of these models can play in identifying and exploring constraints on the evolutionary mechanisms (e.g. kin selection, group selection, spatial selection) involved in the evolution of cooperation. Second, I propose that a modified version of Arnold’s criticism is still a threat to model-based research on the evolution of cooperation: the game-theoretic models used to study the evolution of cooperation suffer from certain limitations because of the level of abstraction involved in these models. Namely, these models in their present state cannot be used to explore what physical or cognitive capacities are required for cooperative behavior to evolve, because all simulated agents come equipped with the ability to cooperate or defect. That is, present models can tell us how cooperation can persist or fail in the face of defection or other difficulties, but cannot tell us very much about how agents come to be cooperators in the first place. However, I also suggest a solution to this problem by arguing that there are promising ways to incorporate further empirical information into these simulations via situated cognition approaches to evolutionary simulation. Drawing on the dynamics of adaptive behavior research program outlined by Beer (1997) and more recent work by Bernard et al. (2016), I conclude by arguing that accounting for the physical characteristics of agents and their environments can shed further light on the origins of cooperation.
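For concreteness, the sketch below (an illustrative toy of my own, not Bowles and Gintis's model) shows the basic shape of such an agent-based, game-theoretic simulation: a population of hard-wired strategies plays an iterated Prisoner's Dilemma and reproduces in proportion to payoff, which also illustrates why such models presuppose, rather than explain, the capacity to cooperate or defect.

```python
import random

# Standard Prisoner's Dilemma payoffs with T > R > P > S.
R, S, T, P = 3, 0, 5, 1
STRATS = ["ALLC", "ALLD", "TFT"]   # always cooperate, always defect, tit-for-tat

def play(a, b, rounds=10):
    """Iterated PD between two strategies; returns their total scores."""
    score_a = score_b = 0
    last_a = last_b = "C"
    table = {("C", "C"): (R, R), ("C", "D"): (S, T), ("D", "C"): (T, S), ("D", "D"): (P, P)}
    for _ in range(rounds):
        move_a = "D" if a == "ALLD" else ("C" if a == "ALLC" else last_b)
        move_b = "D" if b == "ALLD" else ("C" if b == "ALLC" else last_a)
        pa, pb = table[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

random.seed(0)
population = [random.choice(STRATS) for _ in range(100)]
for generation in range(50):
    scores = {s: 1e-9 for s in STRATS}       # tiny floor avoids zero sampling weights
    for _ in range(500):                     # random pairwise encounters
        a, b = random.sample(population, 2)
        sa, sb = play(a, b)
        scores[a] += sa
        scores[b] += sb
    total = sum(scores.values())
    # Fitness-proportional reproduction of strategy types.
    population = random.choices(STRATS, weights=[scores[s] / total for s in STRATS], k=100)

print({s: population.count(s) for s in STRATS})
```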
Presenters
AF
Archie Fields III
University Of Calgary
25. Philosophy In Science: A Participatory Approach to Philosophy of Science
Philosophy of Science 00:25 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:25:00 UTC - 2018/11/03 06:59:00 UTC
Jessey Wright (Stanford University)
The turn towards practice saw philosophers become more engaged with methodological and theoretical issues arising within particular scientific disciplines. The nature of this engagement ranges from close attention to published scientific research and archival materials, to structured interviews and ethnographic research (Leonelli 2012; Osbeck and Nersessian 2017), to participation in a research setting (Tuana 2013). I propose philosophy in science as an approach to inquiry that is continuous with these. It is philosophical research conducted via the integration of philosophical ways of thinking into the practices of science. In this poster I describe the aims of this approach, briefly outline a method for pursuing it, and identify some benefits and drawbacks. To develop this position, I examine my graduate training, which involved close contact with neuroscientists, and my current postdoctoral appointment as the resident philosopher in a neuroscience lab.
My dissertation project was born out of the stark contrast I noticed between philosophical analyses of neuroscience and the activities I observed while attending lab meetings. Philosophical critiques of neuroimaging research often overlook small steps in the experimental process that are invisible in publications but plainly visible in day-to-day activities. This work produced contributions to philosophy of science and improved the data interpretation practices within my lab. I present this work as an example of philosophical inquiry that advances both philosophy and science. It demonstrates how philosophical theories can be directly applied to advance the scientific problems they describe. The use of philosophy in empirical contexts allows the realities of scientific practice to ‘push back,’ revealing aspects of scientific practice that are under-appreciated by the philosophical analyses and accounts of science being used.
My position as a resident philosopher in a lab shows how the normative aims of philosophy are realized in collaboration. Projects in my lab are united by the goal of improving reproducibility and the quality of evidence in neuroimaging research. My project examines how the development of infrastructures for sharing and analyzing data influences the standards of evidence in neuroscience. In particular, recent disputes in cognitive neuroscience between database users and developers have made salient to neuroscientists that the impact tool developers intend to have, and the actual uses of their tools, may be incompatible. The process of articulating the philosophical dimensions of these disputes, and examining decisions surrounding tool development, has influenced the form, presentation, and promotion of those tools.
My approach of pursuing philosophically interesting questions that will also provide valuable insight for scientists integrates philosophical skills and ways of thinking seamlessly into scientific practices. I conclude by noting advantages and pitfalls of this approach.
Presenters
JW
Jessey Wright
Stanford University
26. On the Death of Species: Extinction Reconsidered
Philosophy of Science 00:26 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:26:00 UTC - 2018/11/03 06:59:00 UTC
Leonard Finkelman (Linfield College)
Nearly all species that have ever evolved are now extinct. Despite the ubiquity of extinction, theorists have generally neglected to clarify the concept (Raup 1992). In the most extensive conceptual analysis currently available, Delord (2007) distinguishes three senses in which the term “extinct” may be predicated of a taxon. A taxon is “functionally” extinct if the taxon no longer contributes to ecosystem processes; a taxon is “demographically” extinct if the taxon has no living members; a taxon is “finally” extinct if the information necessary to propagate the taxon vanishes. Ambiguity between these senses contributes to confusions and inconsistencies in discussions of extinction (Siipi & Finkelman 2017). I offer a more general account that reconciles Delord’s three senses of the term “extinct” by treating the term as a relation rather than a single-place predicate: a taxon is extinct if and only if the probability of any observer’s encountering the taxon approaches zero. To treat extinction as a relation in this way follows from methods for diagnosing precise extinction dates through extrapolation from “sighting record” frequencies (Solow 1993; Bradshaw et al. 2012). By this account, Delord’s three senses of extinction mark different levels of significance in the sighting probability’s approach to zero. This has the advantages of integrating all discussions of extinction under a single unified concept and of maintaining consistent and unambiguous use of the term, even as technological advances alter the scope of extinction.
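The sighting-record logic can be illustrated with a toy calculation (my own sketch in the spirit of Solow 1993, with made-up numbers; published estimators refine this in many ways): under the hypothesis that the taxon is still extant and sightings are uniform over the observation window, a long gap since the last sighting makes the observed record increasingly improbable, and the inferred sighting probability approaches zero.

```python
def sighting_record_pvalue(sightings, t_end):
    """Probability, if the taxon were extant throughout [0, t_end] and its n sightings
    were uniformly distributed over that window, that the most recent sighting would
    fall no later than the one actually observed. A small value supports an inference
    of extinction. Illustrative only; see Solow (1993) for the canonical treatment."""
    n = len(sightings)
    t_last = max(sightings)
    return (t_last / t_end) ** n

# Hypothetical sighting record (years since observation began), 100-year window:
print(sighting_record_pvalue([3, 10, 22, 35, 41], t_end=100))   # ~0.01
```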
Presenters
Leonard Finkelman
Linfield College
27. Do Heuristics Exhaust the Methods of Discovery?
Philosophy of Science 00:27 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:27:00 UTC - 2018/11/03 06:59:00 UTC
Benjamin Jantzen (Virginia Tech), Cruz Davis (University of Massachusetts, Amherst)
Recently, one of us presented a paper on the history of “algorithmic discovery” at an academic conference. As we intend the term, algorithmic discovery is the production of novel and plausible empirical generalizations by means of an effective procedure, a method that is explicitly represented and executed in finitely many steps. In other words, we understand it to be discovery by computable algorithm. An anonymous reviewer for the conference saw things differently, helpfully explaining that “[a]nother, more common name for algorithmic discovery would be heuristics.” This comment prompted us to investigate further to see what differences (if any) there are between heuristics and algorithmic discovery.
The aim of this paper is to compare and contrast heuristics with algorithmic discovery and to explore the consequences of these distinctions within their applications in science and other areas. To achieve the first goal, the term ‘heuristic’ is treated as a family resemblance concept. So for a method or rule to be classified as a heuristic, it will have to satisfy a sufficient number of the properties involved in the family resemblance. We specify eight features associated with being a heuristic. The first five correspond to the heuristic search program in artificial intelligence. The last three pick out more general characterizations of heuristics as methods that lack a guarantee, are rules of thumb, or transform one set of problems into another. We argue that there are methods of algorithmic discovery that have none of the eight features associated with heuristics. Thus, there are methods of algorithmic discovery which are distinct from heuristics.
Once we’ve established that heuristic methods do not exhaust the methods of algorithmic discovery, we compare heuristic methods with non-heuristic discovery methods in their application. This is achieved by discussing two different areas of application. First, we discuss how heuristic and non-heuristic methods perform in different gaming environments such as checkers, chess, go, and video games. We find that while heuristics perform well in some environments – like chess and checkers – non-heuristic methods perform better in others. And, interestingly, hybrid methods perform well in yet other environments. Secondly, heuristic and non-heuristic methods are compared in their performance in empirical discovery. We discuss how effective each type of method is in discovering chemical structure, finding diagnoses in medicine, learning causal structure, and finding natural kinds. Again, we find that heuristic and non-heuristic methods perform well in different cases.
We conclude by discussing the sources of the effectiveness of heuristic and non-heuristic methods. Heuristic and non-heuristic methods are discussed in relation to how they are affected by the frame problem and the problem of induction. We argue that the recent explosion of non-heuristic methods is due to how heuristic methods tend to be afflicted by these problems while non-heuristic methods are not.
Presenters
BJ
Benjamin Jantzen
Virginia Tech
CD
Cruz Davis
UMass Amherst
28. What Can We Learn from How a Parrot Learns to Speak like a Human?
Philosophy of Science 00:28 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:28:00 UTC - 2018/11/03 06:59:00 UTC
Shereen Chang (University of Pennsylvania) 
What is the significance of learning conditions for inferences about cognition in general? Consider the case of Alex the grey parrot, who was trained by researcher Irene Pepperberg to use English words in their appropriate contexts. When presented with an array of different objects, Alex could vocalize in English the correct answers to questions such as “How many green blocks?” He could compare two objects and vocalize how they were similar or different (e.g., “color”). In short, Alex could communicate meaningfully using English words. 
Alex learned to communicate with English words via various training methods that emphasized social context and interaction. To introduce new words to the parrot, Pepperberg primarily used a Model/Rival technique in which two human trainers demonstrate the reference and functionality of target words, while providing social interaction. After Alex attempted to vocalize a new word in the presence of the referent object, trainers would repeat the word in different sentences to clarify its pronunciation, reminiscent of how human parents talk to young children. Alex also engaged in self-directed learning; he learned the word “grey” after seeing his reflection in the mirror and asking his trainers, “What color?” Thus, a parrot acquired parts of the English language through techniques similar to how humans learn to speak English. On my analysis, there are four key conditions for the acquisition of such communication skills.
How do we make sense of the similarities between the ways in which a parrot and a human child learn to speak? Since a parrot was able to acquire the meaningful use of words in English, a human-based communication code, it seems that parrots can learn communication codes other than those of their own species. If parrots have a general ability to learn communication codes, then the conditions under which they learn words in English are either specific to learning human-based communication codes or more general features of learning communication codes. I present reasons to rule out the former and argue that the conditions under which Pepperberg’s parrots learned English are likely to be more general features of learning communication codes. 
From research in cross-species communicative behaviour, where an individual learns how to communicate using the communication code of another species, we can learn about the relevance of particular learning conditions more generally. By studying how parrots learn to communicate using a human language such as English, for example, we can shed light on more general aspects of how we learn to communicate. In this way, we can garner special insight into the nature of social cognition, the acquisition of communication skills, and our cognitive evolution in general. 
Presenters
SC
Shereen Chang
University Of Pennsylvania
29. Circuit Switching, Gain Control and the Problem of Multifunctionality
Philosophy of Science 00:29 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:29:00 UTC - 2018/11/03 06:59:00 UTC
Philipp Haueis (Berlin School of Mind and Brain)
Neural structures with multiple functions make it unclear when we have successfully described what a structure does when it works. Several recent accounts tackle this problem of multifunctionality in different ways. Rathkopf (2013) proposes an intrinsic function concept to describe what a structure does whenever it works, whereas Burnston (2016a) argues for context-sensitive descriptions to tackle multifunctionality. McCaffrey (2015) proposes a middle road by indexing invariant or context-sensitive descriptions to the mechanistic organization of a multifunctional structure.
In this paper, I argue that these accounts underestimate the problem of multifunctionality. Because they implicitly assume that “multifunctional” means “contributing to multiple cognitive functions”, they overlook other types of multifunctionality that fall within the purview of their accounts: circuit switching in central pattern generators and gain control in cortical microcircuits. Central pattern generators are multifunctional because they can switch between rhythmic motor outputs (Briggmann and Kristan 2008). Cortical microcircuits are multifunctional because some circuit elements process sensory information, whereas others prevent damage by controlling circuit gain (Merker 2013). 
These circuit functions are not operative in cognitive processing but instead enable such processing to occur at all. Yet they exhibit exactly the features that philosophical accounts recruit to handle (cognitive) multifunctionality. Similar to Rathkopf’s intrinsic function account, circuit switching and gain control can be analysed without reference to the behavior of the organism. Yet they do not replace but complement task-based functional analyses of multifunctional structures, thus calling into question the plausibility of the intrinsic function account. Circuit switching and gain control also show that Burnston’s and McCaffrey’s accounts are incomplete. Because it focuses on cognitive contexts, Burnston’s contextualism fails to capture how circuit switching and gain control change with biochemical and physiological contexts, respectively. These contexts make the problem of multifunctionality harder than Burnston acknowledges, because different context types cross-classify the response of multifunctional structures. Similarly, McCaffrey’s typology of mechanistic organization to classify multifunctional structures fails to capture how circuit switching and gain control are mechanistically organized. Because central pattern generators can switch rhythmic outputs independently of sensory inputs, they are mechanistically decoupled from cognitive functions that process those inputs. In contrast, gain control is essentially coupled to cognitive functions because it is only necessary to prevent damage when a cortical microcircuit processes sensory information. 
My analysis shows that existing philosophical accounts have underestimated the problem of multifunctionality because they overlooked circuit functions that are not operative in, but instead enable cognitive functions. An adequate account of multifunctionality should capture all types of multifunctionality, regardless of whether they are cognitive or not.
Presenters
PH
Philipp Haueis
Bielefeld University
30. Brain-Machine Interfaces and the Extended Mind
Philosophy of Science 00:30 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:30:00 UTC - 2018/11/03 06:59:00 UTC
Mahi Hardalupas (University of Pittsburgh), Alessandra Buccella (University of Pittsburgh) 
The Extended Mind Theory of cognition (EMT) claims that cognitive processes can be realized, or partially realized outside of the biological body. Unsurprisingly, proponents of EMT have become increasingly interested in brain-machine interfaces (BMIs). For example, Clark argues that BMIs will soon create human-machine “wholes” challenging any principled distinction between biological bodies and artifacts designed to enhance or replace biological functions. If this is what BMIs are capable of, then they potentially offer convincing evidence in favor of EMT. 
In this paper, we criticize the claim that BMIs, and especially motor BMIs (EEG-controlled robotic arms, exoskeletons, etc.), support EMT. 
First, Clark claims that BMIs incorporated into the so-called “body schema” will stop requiring complex representational resources mediating between neural inputs and motor outputs. If this is the case, then one has good grounds to claim that we should treat BMIs as genuinely extending cognition. However, at least for now, motor-control BMIs do in fact require mediating representations. 
EMT theorists could reply that two systems can be functionally similar even if one requires representational mediation and the other doesn’t. 
However, it seems to us that when EMT theorists suggest functional similarity as a criterion for deciding whether BMIs genuinely extend cognition, they should mean similarity at the algorithmic level, that is, the level at which more specific descriptions of the mechanisms involved between input and output are given. But at the algorithmic level, the differences regarding representational mediation mentioned above matter. 
Moreover, research into BMIs seems to take for granted that their success depends on their proximity to the brain and their ability to directly influence it (e.g. invasive BMIs are considered a more viable research program than non-invasive BMIs). This seems in tension with EMT's thesis that it should not make a difference how close to the brain a device contributing to cognitive processes is. 
Finally, EMT is a theory about the constitution of cognitive processes; that is, it claims that the mind is extended iff a device constitutes at least part of the process. However, all the evidence that we can gather regarding the relationship between BMIs and cognitive processes only confirms the existence of a causal relation. Therefore, the currently available evidence leaves EMT underdetermined. 
In conclusion, we claim that BMIs don't support EMT but, at most, a weaker alternative. 
Presenters
MH
Mahi Hardalupas
University Of Pittsburgh
AB
Alessandra Buccella
University Of Pittsburgh
31. The Best System Account of Laws needs Natural Properties
Philosophy of Science 00:31 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:31:00 UTC - 2018/11/03 06:59:00 UTC
Jason Kay (University of Pittsburgh) 
Humeans in metaphysics have two main desiderata for a theory of laws of nature. They want the laws to be a function of facts about the distribution of fundamental physical properties. They also want the laws to be epistemically accessible to science unaided by metaphysical theorizing. The most sophisticated attempt to realize this vision is the Best Systems Account (BSA), which claims that the laws are the generalizations which conjointly summarize the world as simply and exhaustively as possible. But the BSA faces the threat of so-called 'trivial systems' which, while simple and strong, intuitively are not the sort of thing which can be laws. Imagine a system that introduces an extremely informative predicate which contains all the facts about nature. Call the predicate 'F.' This gerrymandered predicate allows us to create a system containing the single sentence 'everything is F,' which describes the universe both exhaustively and extremely simply. 
Lewis rules out predicates like 'F' by arguing that only predicates expressing natural properties are fit to feature in the laws of nature. However, many Humeans since Lewis have rejected the existence of natural properties for their epistemic inaccessibility and ontological profligacy. In this paper I examine two recent attempts to address the Trivial Systems objection without natural properties and argue that they face serious difficulties. Cohen & Callender concede that trivial systems will win the competition for best system in some cases, yet since they won't be the best system relative to the kinds we care about, this is not a problem. In essence, we are justified in preferring non-trivial systems because they organize the world into kinds that matter to us. I argue that this response fails for two reasons. First, if laws are the generalizations which best systematize the stuff we care about, this makes the laws of nature unacceptably interest-relative. And second, doesn't the trivial system F also tell us about the stuff we care about? It also tells us about much, much more, but can it be faulted for this? 
Eddon & Meacham introduce the notion of 'salience' and claim that a system's overall quality should be determined by its salience along with its simplicity and strength. Since a system is salient to the extent that it is unified, useful, and explanatory, trivial systems score very low in this regard and thus will be judged poorly. I argue that it's not clear exactly how salience is supposed to do the work Eddon & Meacham require of it. I try to implement salience considerations in three different ways and conclude that each way fails to prevent trivial systems from being the Best under some circumstances. If I am right about this, versions of the BSA which reject natural properties continue to struggle against the trivial systems objection. 
Presenters
JK
Jason Kay
University Of Pittsburgh
32. Alethic Modal Reasoning in Non-Fundamental Sciences
Philosophy of Science 00:32 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:32:00 UTC - 2018/11/03 06:59:00 UTC
Ananya Chattoraj (University of Calgary) 
Modal reasoning arises from the use of expressions with modal operators like “necessary” or “possibly.” This type of reasoning enters science through reasoning about future possibilities. Alethic modal reasoning is instantiated in science through scientific laws and single-event probabilities. This means that when scientists use alethic modal reasoning, they appeal to laws and probabilities in their practices of explanation, manipulation, prediction, etc. In the philosophy of logic, alethic modality is sometimes distinguished from epistemic modality under the label of modal dualism (Kment 2014); the latter is instantiated in science through reasoning about future events based on past experimental results rather than an overarching law. In “An Empiricist’s Guide to Objective Modality,” Jenann Ismael presents a deflationary framework of alethic modality. This framework does not depend on possible worlds semantics and is instead couched in the way in which laws and probabilities guide scientific action. On this account, scientists do not create research programs to falsify theories that have been codified as a law – there is no research, for instance, to falsify gravity, though there are research programs to clarify the nature of the force. As such, laws, and similarly probabilities, guide the way in which scientists perform their research. The action-guiding effect of laws, however, has diminishing returns in non-fundamental sciences. In this poster, I present a case study of organic chemistry, where scientists use modal reasoning to classify organic molecules into functional groups. Functional group classification is based on how chemists manipulate molecules of one group by inducing reactions with molecules of a different group for results specific to their purposes. These classifications are experimentally established and provide a systematic way of classifying molecules that is useful for manipulation, explanation, and prediction. Since these molecules can be classified and named systematically, chemists are reasoning about how molecules will react in future reactions. However, contrary to what Ismael’s framework suggests, organic chemists are not guided by fundamental laws. Applying works like Goodwin (2013) and Woodward (2014), I show how modal reasoning operates in chemical practice. I argue that alethic reasoning through fundamental laws is downplayed and non-alethic reasoning is elevated in the practice of organic chemistry. I show that while Ismael’s framework of modal reasoning has features worth preserving, including its abandonment of possible worlds semantics and its focus on action guidance, its focus on alethic modality as the main type of modal reasoning that guides actions is incorrect when we consider the practices of scientists working in non-fundamental sciences. I will ultimately suggest that the current way of distinguishing alethic modality from epistemic modality in science is not helpful for understanding modal reasoning in non-fundamental sciences.
Presenters
AC
Ananya Chattoraj
University Of Calgary
33. Is the Humongous Fungus Really the World’s Largest Organism?
Philosophy of Science 00:33 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:33:00 UTC - 2018/11/03 06:59:00 UTC
Daniel Molter (University of Utah), Bryn Dentinger (University of Utah) 
Is the Humongous Fungus really the world’s largest organism? 
‘World’s largest organism’ is often referenced in philosophy of biology, where it serves as something of a type specimen for the organism category, so it’s important to make sure the biological individual which holds this title really is one organism. The Humongous Fungus (HF), a 3.7 square mile patch of honey mushrooms (Armillaria solidipes) in Oregon’s Blue Mountains, is said to be the world’s largest organism. Determining whether it really is will require both new empirical work (currently being planned) and philosophical clarification about what it means to be an organism. At question empirically is whether the HF is physiologically integrated; at question philosophically is whether physiological integration is necessary for organismality. Ferguson et al. (2003) reported that all samples collected inside a 3.7 square mile patch were genetically homogeneous and somatically compatible, indicating common descent from a single reproductive event and the potential to fuse into a single mycelium. Their results are consistent both with a single humongous mycelium and with a swarm of fragmented clones that periodically flare up and die out as they spread from tree to tree. Tests to see if the HF is all connected have not yet been done. 
If “organism” is defined in terms of evolutionary individuality, then the HF does not need to be connected in order to function as a discontinuous evolutionary organism, but it would not be the largest discontinuous evolutionary organism; that title instead probably* goes to Cavendish bananas (the common yellow variety), which are clones of a single genet cultivated on millions of hectares around the world. If, on the other hand, organismality is defined in terms of physiological integration, then the HF would have to be continuous for it to count as one organism. Interestingly, the distinction between fragmented and continuous might be blurred if the HF periodically breaks apart and comes back together, as mycelia sometimes do. If the HF really is physiologically integrated, then it is the world’s largest physiological organism, beating out Pando, an aspen grove in Utah, and another Humongous Fungus in Michigan (yes, they fight over the name). 
The first planned test for physiological integration involves sampling eDNA in soil along transects through the genet. This will tell us how far from infected trees the Armillaria extends, and it will help to locate areas of concentration that might represent physiologically isolated individuals. Further testing might include a stable isotope transplantation study to see if tracers absorbed by the mycelium in one region of the genet make their way to distal regions. 
Ferguson, B. A., Dreisbach, T. A., Parks, C. G., Filip, G. M., & Schmitt, C. L. (2003). Coarse-Scale Population Structure of Pathogenic Armillaria Species in a Mixed-Conifer Forest in the Blue Mountains of Northeast Oregon. Canadian Journal of Forest Research, 33(4), 612-623. 
* Other plants, such as dandelions, might also be contenders for the world’s largest genet.
Presenters
DM
Daniel Molter
University Of Utah
Co-Authors
BD
Bryn Dentinger
University Of Utah
34. Functions in Cell and Molecular Biology: ATP Synthase as a Case Study
Philosophy of Science 00:34 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:34:00 UTC - 2018/11/03 06:59:00 UTC
Jeremy Wideman (Dalhousie University)
There are two broad views of how to define biological functions. The selected effects (SE) view of function requires that functions be grounded in “the historical features of natural selection” (Perlman 2012), whereas the causal role (CR) view does not (Cummins and Roth 2012). SE functions are separated from mere effects by reference to events in evolutionary/selective history (e.g., Garson 2017). Therefore, SE functions are real things/processes, which thereby explain how traits originated and why they persist. CR functions are ascribed by “functional analysis” (Cummins and Roth 2012) which involves defining a containing system (which can be anything from a metabolic pathway to a medical diagnosis) and describing the role that the trait in question plays in the system of interest. CR functions are thus subjectively defined, and dependent upon the interests of the investigator. 
It has been suggested by CR proponents that biologists like molecular and cell biologists do not need evolution to understand the functions they are interested in. However, molecular and cell biologists are driven to determine ‘the function’ of organismal components; mere secondary effects are not so interesting. What, then, is meant by ‘the function’ if not a selected effect? Furthermore, comparative evolutionary biologists make inferences about conserved functions based on functions identified by molecular and cell biologists. An analysis of biological function at this level is lacking from the philosophical literature. 
In order to determine whether an SE view of function can accommodate actual biological usage, I turn away from abstract examples like the heart to a concrete case study from molecular cell biology: the multicomponent molecular machines called ATP synthases. ATP synthases are extremely well-studied protein complexes present in all domains of life (Cross and Müller 2004). As their name suggests, their generally agreed upon function is to synthesize (or hydrolyze) ATP. My analysis demonstrates that SE views of function that require positive selection for an effect (e.g., Gould and Vrba 1982) do not accommodate contemporary usage. Instead, biological usage requires that function be defined to include effects arising from purifying selection alone, constructive neutral evolution, or exaptation, in addition to positive selection. Thus, the SE view of function must be construed more broadly in order to accommodate all facets of biological usage. A consequence of this expanded view of SE function is that, while all adaptations have functions, not all functions result from adaptations. Therefore, the view is not panadaptationist.
Presenters
JW
Jeremy Wideman
Dalhousie University
35. Mechanistic Integration and Multi-Dimensional Network Neuroscience
Philosophy of Science 00:35 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:35:00 UTC - 2018/11/03 06:59:00 UTC
Frank Faries (University of Cincinnati)
Mechanistic integration, of the kind described by Craver and Darden (2013), is, at first glance, one way to secure sensitivity to the norms of mechanistic explanation in integrative modeling. By extension, models in systems neuroscience will be explanatory to the extent that they demonstrate mechanistic integration of the various data and methods which construct and constitute them. Recent efforts in what Braun and colleagues have dubbed “multi-dimensional network neuroscience” (MDNN) claim to provide increasingly mechanistic accounts of brain function by moving “from a focus on mapping to a focus on mechanism and to develop tools that make explicit predictions about how network structure and function influence human cognition and behavior” (Braun, et al., 2018). MDNN appears to provide examples of simple mechanistic integration, interlevel integration (looking down, up, and around), and intertemporal integration. Moreover, these models appear to increasingly satisfy the Model-to-Mechanism Mapping (3M) requirement (Kaplan and Craver, 2011), and allow for intervention, control, and the answering of “what-if-things-had-been-different” questions (Woodward, 2003). These efforts attempt to situate parametric correlational models “in the causal structure of the world” (Salmon, 1984). As such they appear to be excellent exemplars of mechanistic integration in systems neuroscience.
However, despite such good prospects for mechanistic integration, it is unclear whether those integrative efforts would yield genuine explanations on an austere mechanistic view (of which I take Craver (2016) to be emblematic). I identify three objections that can be raised by such a view—what I call the arguments from (i) concreteness, (ii) completeness, and (iii) correlation versus causation. I treat each of these in turn and show how a more sophisticated understanding of the role of idealizations in mechanistic integration implies a rejection of these objections and demands a more nuanced treatment of the explanatory power of integrated models in systems neuroscience. In contrast to austere mechanistic views, I offer a flexible mechanistic view, which expands the norms of mechanistic integration, including the 3M requirement, to better account for the positive ontic and epistemic explanatory contributions made by idealization—including the application of functional connectivity matrices—to integration in systems neuroscience. Further, I show how the flexible mechanistic view is not only compatible with mechanistic philosophy, but better facilitates mechanistic integration and explanation.
Presenters
FF
Frank Faries
University Of Cincinnati
36. A Case for Factive Scientific Understanding
Philosophy of Science 00:36 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:36:00 UTC - 2018/11/03 06:59:00 UTC
Martin Zach (Charles University)
It has long been argued that idealized model schemas cannot provide us with factive scientific understanding, precisely because these models employ various idealizations; hence, they are false, strictly speaking (e.g., Elgin 2017, Potochnik 2015). Others defend a middle ground (e.g., Mizrahi 2012), but only a few espouse (in one way or another) the factive understanding account (e.g., Reutlinger et al. 2017, Rice 2016).
In this talk, and on the basis of the model schema of metabolic pathway inhibition, I argue for the conclusion that we do get factive understanding of a phenomenon through certain idealized and abstract model schemas.
As an example, consider a mechanistic model of metabolic pathway inhibition, specifically the way in which the product of a metabolic pathway feeds back into the pathway and inhibits it by inhibiting the normal functioning of an enzyme. It can be said that such a mechanistic model abstracts away from various key details. For instance, it ignores the distinction between competitive and non-competitive inhibition. Furthermore, a simple model often disregards the role of molar concentrations. Following Love and Nathan (2015), I subscribe to the view that omitting concentrations from a model is an act of idealization. Yet models such as these do provide us with factive understanding when they tell us something true about the phenomenon, namely the way in which it is causally organized, i.e. by way of negative feedback (see also Glennan 2017). This crucially differs from the views of those (e.g., Strevens 2017) who argue that idealizations highlight the causal irrelevance of the idealized factors. For the phenomenon to occur, it makes all the difference precisely what kind of inhibition is at play and what the molar concentrations are.
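To fix ideas, here is a minimal sketch (illustrative parameter values of my own, not the model discussed in the talk) of a pathway whose end product inhibits the first enzyme; the negative-feedback term is the causal-organizational feature the model gets right, while the competitive/non-competitive distinction and realistic concentrations are deliberately idealized away.

```python
# Two-step pathway S -> I -> P in which the product P inhibits the enzyme catalysing
# the first step. The factor 1 / (1 + (P / ki)**h) implements the negative feedback.
def simulate(ki, h=2, t_max=200.0, dt=0.01):
    S = 10.0                              # substrate held at a saturating, constant level
    I, P = 0.0, 0.0
    v_max, km, v2, deg = 1.0, 2.0, 0.5, 0.1
    for _ in range(int(t_max / dt)):      # simple Euler integration
        v1 = v_max * S / (km + S) / (1.0 + (P / ki) ** h)   # inhibited first step
        I += dt * (v1 - v2 * I)
        P += dt * (v2 * I - deg * P)
    return P

print("steady-state product with feedback:   ", round(simulate(ki=1.0), 2))
print("steady-state product without feedback:", round(simulate(ki=1e9), 2))  # ki so large the feedback is inert
```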
Finally, I will briefly distinguish my approach to factive understanding from those of Reutlinger et al. (2017) and Rice (2016). In Reutlinger et al. (2017), factive (how-actually) understanding is achieved by theory-driven de-idealizations; as such, it importantly differs from my view, which is free of any such need. Rice (2016) suggests that optimization models provide factive understanding by providing us with true counterfactual information about what is relevant and irrelevant, which, again, is not the case in the example discussed above.
Presenters
Martin Zach
Charles University
37. The Role of the Contextual Level in Computational Explanations
Philosophy of Science 00:37 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:37:00 UTC - 2018/11/03 06:59:00 UTC
Jens Harbecke (Witten/Herdecke University), Oron Shagrir (The Hebrew University of Jerusalem) 
At the heart of the so-called "mechanistic view of computation" lies the idea that computational explanations are mechanistic explanations. Mechanists, however, disagree about the precise role that the environment — or the "contextual level" (Miłkowski 2013) — plays for computational (mechanistic) explanations. 
Some mechanists argue that contextual factors do not affect the computational identity of a computing system and, hence, that they do not play an explanatory role vis-á-vis the system’s computational aspects. If anything, contextual factors are important to specify the explanandum, not the explanation (cf. also Kaplan 2011, Miłkowski 2013, Dewhurst 2017, Mollo 2017). 
Other mechanists agree that the contextual level is indeed part of the computational level of a computing system, but claim that "[i]n order to know which intrinsic properties of mechanisms are functionally [computationally] relevant, it may be necessary to consider the interaction between mechanisms and their contexts." (Piccinini 2008, 220). In other words, computational explanations involve more than an explication of the relevant mechanisms intrinsic to a computational system. These further aspects specify the causal-mechanistic interaction between the system and its context. 
On this poster, we challenge both claims. We argue that (i) contextual factors do affect the computational identity of a computing system, but (ii) that it is not necessary to specify the causal-mechanistic interaction between the system and its context in order to offer a complete and adequate computational explanation. We then discuss the implications of our conclusions for the mechanistic view of computation. Our aim is to show that some versions of the mechanistic view of computation are consistent with claims (i) and (ii), whilst others are not. 
We proceed through the following argumentative steps. First, we introduce the notion of an automaton, and we point out that complex systems typically implement a large number of inconsistent automata all at the same time. The challenge is to single out those automata of a system that correspond to its actual computations, which cannot be achieved on the basis of the intrinsic features of the system alone. We then argue that extending the basis by including the immediate or close environment of computing systems does not do the trick. This establishes an externalist view of computation. We then focus on claim (ii) and argue that various different input mechanisms can be correlated with the same computations, and that it is not always necessary to specify the environment-to-system mechanism in order to explain a system’s computations. Finally, we assess the compatibility of claims (i) and (ii) with several versions of the mechanistic view of computation. 
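The point that one physical system can be mapped onto inconsistent computational descriptions can be illustrated at the level of a single gate (a simplified stand-in of my own for the automata-based argument above): the same intrinsic input-output behaviour counts as AND under one mapping of voltages to bits and as OR under another.

```python
# The "physical" device simply outputs the lower of its two input voltage levels.
LOW, HIGH = 0.0, 5.0

def device(v1, v2):
    """Intrinsic physical behaviour: output the minimum input voltage."""
    return min(v1, v2)

# Interpretation 1: LOW stands for 0, HIGH for 1  ->  the device computes AND.
# Interpretation 2: LOW stands for 1, HIGH for 0  ->  the very same device computes OR.
to_bits_1 = {LOW: 0, HIGH: 1}
to_bits_2 = {LOW: 1, HIGH: 0}

for v1 in (LOW, HIGH):
    for v2 in (LOW, HIGH):
        out = device(v1, v2)
        print(f"({to_bits_1[v1]}, {to_bits_1[v2]}) -> {to_bits_1[out]} under interpretation 1;  "
              f"({to_bits_2[v1]}, {to_bits_2[v2]}) -> {to_bits_2[out]} under interpretation 2")
```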
Presenters
JH
Jens Harbecke
Witten/Herdecke University, Germany
Co-Authors
OS
Oron Shagrir
Hebrew University Of Jerusalem, Israel
38. In Defense of Pragmatic Processualism: Expectations in Biomedical Science
Philosophy of Science 00:38 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:38:00 UTC - 2018/11/03 06:59:00 UTC
Katherine Valde (Boston University)
This poster will contrast the expectations generated by using mechanistic vs. process frameworks in the biomedical sciences. A traditional mechanistic framework looks at a system in terms of entities and activities – it looks to finitely characterize the properties of entities that allow them to execute particular actions. A processual account, on the other hand, characterizes entities in terms of how they are maintained or stabilized and, in general, focuses on the generation of stability rather than facts about stability. Recent increased interest in a process framework for biology has focused on the ability of a process ontology to describe the natural world more accurately than a substance ontology. This poster examines the use of processual concepts in a practice-oriented approach, arguing for the importance of process on methodological (rather than metaphysical) grounds. Given the difficulty of settling theoretical metaphysical debates, and the grave importance of advancing biomedical research, this pragmatic approach offers a promising route forward for a process framework. 
This poster specifically examines two concrete cases: carcinogenesis and inflammatory bowel disease (IBD) research. Competing research programs in each of these domains can be understood as processual or mechanistic. The dominant theory for understanding carcinogenesis is somatic mutation theory (SMT). SMT holds that cancer is a cell-based disease that occurs when a single cell from some particular tissue mutates and begins growing and dividing out of control. A competing theory of carcinogenesis, Tissue Organization Field Theory (TOFT), holds that cancer is a tissue-based disease that occurs when relational constraints are changed (Soto and Sonnenschein, 2005). TOFT provides a processual understanding of carcinogenesis, while SMT provides a mechanistic account. IBD research in humans has largely focused on genetic correlations and pathogen discovery, which have largely been unsuccessful. However, in mouse models researchers have discovered several factors that are each necessary, but individually insufficient, to cause the overall condition (Cadwell et al., 2010). While the traditional research takes a mechanistic approach, the mouse model takes a processual approach (characterizing IBD based on how it is maintained, rather than on essential properties). 
The competing approaches to these conditions are not truly incommensurable, but they do generate different expectations and guide different research. This poster will compare the development of research projects under competing theories. The ultimate aim is to highlight the benefits of a process framework for the practice of biomedical science: it generates different expectations for research, thus leading to different experimental designs and a capacity to measure different things, regardless of the answers to the ultimate metaphysical questions.
Presenters
KV
Katherine Valde
Boston University
39. Flat Mechanisms: Mechanistic Explanation Without Levels
Philosophy of Science 00:39 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:39:00 UTC - 2018/11/03 06:59:00 UTC
Peter Fazekas (University of Antwerp)
The mechanistic framework traditionally comes bundled with a levelled view of reality, where different entities forming part-whole relations reside at lower and higher levels. Here it is argued that contrary to the standard understanding and the claims of its own proponents, the core commitments of the mechanistic framework are incompatible with the levelled view. An alternative flat view is developed, according to which wholes do not belong to levels higher than the constituent parts of the underlying mechanisms, but rather are to be found as modules embedded in the very same complex of interacting units.
Modules are structurally and functionally stable configurations of the interacting units composing them. Modules are encapsulated either in a direct physical way by a boundary that separates them from their environment, or functionally by the specific organisation of the interaction network of their units (e.g., causal feedback loops). Physical and functional encapsulation constrain internal operations, cut off some internal-external interactions, and screen off inner organisation and activities. Due to the cutting-off effect of encapsulation, the interacting units of a module are, to a certain degree, causally detached from their environment: some of the causal paths via which the units could normally (in separation) be influenced become either unavailable (due to the shielding effect of physical boundaries) or ineffective (due to the stabilising effect of feedback loops). Some units, however, still retain their causal links with the environment, providing inputs and outputs for the organised activity of the cluster of units, and hence for the module itself. Modules, thus, are not epiphenomenal.
The input of a module is the input of its input units, and the output of a module is the output of its output units. Via the causal links of their input and output units, modules are causally embedded in the same level of causal interactions as their component units. Since whole modules can be influenced by and can influence their environment only via their input and output units, their inner organisation is screened off: from the ‘outside’ modules function as individual units. Therefore, alternating between a module and a unit view is only a change in perspective and does not require untangling possibly complex relations between distinct entities residing at different levels.
The mechanistic programme consists in turning units into modules, i.e., ‘blowing up’ the unit under scrutiny to uncover its internal structure, and accounting for its behaviour in terms of the organisation and activities of the units found ‘inside’. The flat view, thus, claims that mechanistic characterisations of different ‘levels’ are to be understood as different descriptions providing different levels of detail with regard to a set of interacting units with complex embedded structure. It sets the mechanistic programme free of problematic metaphysical consequences, sheds new light on how entities that traditionally were seen as belonging to different levels are able to interact with each other, and clarifies how the idea of mutual manipulability — which has recently been severely criticised — could work within the mechanistic framework.
Presenters
PF
Peter Fazekas
University Of Antwerp
40. Path Integrals, Holism and Wave-Particle Duality
Philosophy of Science 00:40 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:40:00 UTC - 2018/11/03 06:59:00 UTC
Marco Forgione (University of South Carolina)
In the present work I argue that the path integral formulation of quantum mechanics displays a holistic machinery that allows one to predict and explain the total amplitude of the quantum system. 
The machinery shows that it is not the single path that counts, but rather, it is the whole ensemble that provides the total amplitude. In pursuing such an interpretation, I refer to Healey's notion of holism and I show that, when applied to path integrals, it ultimately leads to a form of structural holism. To do so, I point out: (1) what the whole is composed of, (2) the non-supervenient relation the whole holds with its parts and (3) the mathematical object that instantiates such relation, i.e., the phase factor.
Concerning (1), I argue that while the parts correspond to the single possible paths, the whole is to be interpreted as the total ensemble posited by the theory. I show that the single possible trajectories play the role of mathematical tools, which do not represent real particle paths. They can be individuated mathematically by varying the phase factor, but they do not describe what actually happens: they remain mathematical possibilities devoid of ontological meaning. 
Concerning (2), I will show that a strong reductionist account of the ensemble in terms of the single paths is not possible. If that is the case, then the single paths will count as calculation tools, while it is the statistical representation of the whole that provides the description of the particle motion. In arguing for the irreducibility of the total ensemble to single real paths, I firstly take into consideration Wharton's realist account and secondly, I analyze the decoherent histories account of quantum mechanics. In the former case, I argue that even by parsing the total ensemble in sets of non-interfering paths and then mapping them into a space-time valued field, we cannot deny the holistic nature of the path integral formulation. Furthermore, although the decoherent histories account parses the total ensemble in coarse-grained histories (where a history is a sequence of alternatives at successive times), it ultimately fails in extrapolating the real history the particle undergoes.
Ultimately (3), I suggest that the phase factor is the mathematical object that instantiates the non-supervenient relation. It determines the cancellation of the destructively interfering paths and, in the classical limit, it explains the validity of the least action principle. 
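For readers outside the field, here is a minimal sketch of the standard formalism the abstract draws on (standard textbook material, not a formulation taken from the poster itself): the total amplitude is obtained by summing over all paths, each weighted by the phase factor, and in the classical limit the stationary-phase condition recovers the least action principle.

K(x_b, t_b; x_a, t_a) = \int \mathcal{D}[x(t)] \, e^{i S[x(t)]/\hbar}, \qquad S[x(t)] = \int_{t_a}^{t_b} L(x, \dot{x}, t) \, dt

As \hbar \to 0, rapidly oscillating contributions cancel destructively and the amplitude is dominated by paths with \delta S = 0, which is the sense in which the phase factor is said above to explain the validity of the least action principle.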
Once all these parts are addressed, I will argue that the holistic ensemble and the phase factor, which weights the probabilities for each trajectory, form a structural holism for which the distinction between particles and waves is no longer necessary.
Presenters
MF
Marco Forgione
University Of South Carolina
41. Mechanisms and Principles: Two Kinds of Scientific Generalization
Philosophy of Science 00:41 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:41:00 UTC - 2018/11/03 06:59:00 UTC
Yoshinari Yoshida (University of Minnesota), Alan Love (University of Minnesota) 
Confirmed empirical generalizations are central to the epistemology of science. Through most of the 20th century, philosophers focused on universal, exceptionless generalizations — laws of nature — and took these as essential to scientific theory structure and explanation. Over the past two decades, however, many have sought to characterize a broader range of generalizations, which has facilitated the elucidation of a more complex space of possibilities and enabled a more fine-grained understanding of how generalizations with different combinations of properties function in scientific inquiry. Nevertheless, much work remains to characterize the diversity of generalizations within and across the sciences.
Here we concentrate on one area of science — developmental biology — to comprehend the role of two different kinds of scientific generalizations: mechanisms and principles. Mechanism generalizations (MGs) in developmental biology are descriptions of constituent biomolecules organized into causal relationships that operate in specific times and places during ontogeny to produce a characteristic phenomenon that is shared across different biological entities. Principle generalizations (PGs) in developmental biology are abstract descriptions of relations or interactions that occur during ontogeny and are exemplified in a wide variety of different biological entities. 
In order to characterize these two kinds of generalizations, we first discuss generalizations and explanatory aims in the context of developmental biology. Developmental biologists seek generalizations that are structured in four different dimensions — across taxa, across component systems, across developmental stages, and across scales — and in terms of two primary conditions: material and conceptual. Within scientific discourse, these generalizations appear in complex combinations with different dimensions or conditions foregrounded (e.g., distributions of developmental phenomena and causal interactions that underlie them in a specific component system at a particular stage under specified material conditions to answer some subset of research questions). 
MGs and PGs have distinct bases for their scope of explanation. MGs explain the development of a wide range of biological entities because the described constituent biomolecules and their interactions are conserved through evolutionary history. In contrast, the wide applicability of PGs is based on abstract relationships that are instantiated by various entities (regardless of evolutionary history). Hence, MGs and PGs require different research strategies and are justified differently; specific molecular interactions must be experimentally dissected in concrete model organisms, whereas abstract logical and mathematical properties can be modeled in silico. Our analysis shows why a particular kind of generalization coincides with a specific research practice and thereby illuminates why the practices of inquiry are structured in a particular way. 
The distinction between MGs and PGs is applicable to other sciences, such as physiology and ecology. Furthermore, our analysis isolates issues in general philosophical discussions of the properties of generalizations, such as ambiguities in discussions of “scope” (how widely a generalization holds) and a presumption that abstraction is always correlated positively with wide scope. Scope is variable across the four dimensions and MGs have wide scope as a consequence of their reference to concrete molecular entities that are evolutionarily conserved, not because of abstract formulations of causal principles. 
Presenters
YY
Yoshinari Yoshida
University Of Minnesota
Co-Authors
AL
Alan Love
University Of Minnesota, Twin Cities
42. The Autonomy Thesis and the Limits of Neurobiological Explanation
Philosophy of Science 00:42 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:42:00 UTC - 2018/11/03 06:59:00 UTC
Nuhu Osman Attah (University of Pittsburgh) 
In this presentation I defend the “autonomy thesis” regarding the identification of psychological kinds, that is, the claim that what psychological kinds there are cannot be determined solely by neuroscientific criteria but must depend also on psychological or phenomenological evidence (Aizawa and Gillett, 2010).
I argue that there are only three ways in which psychological kinds could be individuated if we are to rely on neuroscience alone, contra the “autonomy thesis”: (i) psychological kinds could be individuated on the basis of broad/large-scale neurobiological features such as network-level connectivity, (ii) they could be individuated based on dissociations in realizing mechanisms, and (iii) psychological kinds could be picked out on the grounds of fine-grained neural details. I argue that these are the only options available to the methodological reductionist (who denies the “autonomy thesis”) because they are the only options in the empirical space of neuroscientific explanation.
I then argue that, for the following respective reasons, none of these options can actually individuate psychological kinds in any useful sense: (a) particular cases of neuroscientific explanation (in particular, I have in mind the Grounded Cognition Model of concepts [Kemmerer, 2015, 274; Wilson-Mendenhall et al., 2013]) demonstrate that there are kinds employed by neuroscientists whose large-scale neurobiological instantiations differ significantly; (b) a circularity is involved in (ii), in that mechanisms presuppose a teleological individuation which already makes reference to psychological predicates; that is to say, since mechanisms are always mechanisms "for" some organismal-level phenomenon, individuating kinds based on mechanisms already involves a behavioral-level (non-neurobiological) criterion; and (c) besides a problem of too narrowly restricting what would count as kinds (even to the point of contradicting actual neuroscientific practice, as the case study from (a) will demonstrate), there is also a problem of vagueness in the individuation of fine-grained neurobiological tokens (Haueis, 2013).
Since none of these three possible ways of picking out psychological kinds using neurobiology alone work, it would seem to be the case that there is some merit to the claims made by the autonomy thesis. I conclude from all of this, as has been previously concluded by philosophers arguing for the autonomy thesis, that while neurobiological criteria are important aids in identifying psychological kinds in some cases, they cannot strictly determine where and whether such kinds exist. 
Presenters
NO
Nuhu Osman Attah
University Of Pittsburgh
43. Du Châtelet: Why Physical Explanations Must Be Mechanical
Philosophy of Science 00:43 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:43:00 UTC - 2018/11/03 06:59:00 UTC
Ashton Green (University of Notre Dame) 
In the early years of her research, Du Châtelet used the principle of sufficient reason (PSR) to develop an epistemological method, so that she could extrapolate from empirical data (such as the results of experiments in heating metals) in a rigorous way. Her goal was to attain knowledge of the hidden causes of the data. In this presentation, I will outline her method for this extrapolation and its assumptions, and consider the implications of such a view.
According to Du Châtelet's method, any metaphysical claim, such as one concerning what substances make up the fundamental physical level, must be anchored in types of evidence which Du Châtelet considers reliable. She considers evidence reliable when it takes one of several forms. First, evidence is reliable when it comes directly from empirical data, which is more rigorous than mere sense data because it involves repeated and well-organized experimentation. Second, in addition to empirical data, Du Châtelet considers "principles" to be reliable epistemological tools, such as the law of non-contradiction, the principle of sufficient reason, and the principle of continuity.
For this reason, I call her mature position (after 1740) Principled Empiricism. In Principled Empiricism, beliefs are justified if they are based on reliable evidence of the following two kinds: empirical data, and what she calls “self-evident principles”. According to this method, beliefs based on either of these types of evidence, or on both in conjunction, are justified. This allows her to make metaphysical hypotheses while still adhering to her Principled Empiricism, in which all knowledge is either self-evident, empirically confirmed, or built directly from these two sources.
By using the PSR as the principle which governs contingent facts, and is therefore appropriate to the physical world, Du Châtelet's method extrapolates beyond empirical data, hypothesizing the best “sufficient reasons” for the effects gathered in empirical study. Sufficient reasons, however, according to Du Châtelet, are restricted to the most direct cause in the mechanical order of the physical world. Two parts of this definition need to be defended. First, Du Châtelet must defend the claim that the physical world is mechanical, and define what exactly she means by mechanical. Second, she must defend the claim that the physical world consists of one mechanical system, and only one, of which all causes and effects are a part. If she is able to do this successfully and can bring explanations of all phenomena into one “machine [of] mutual connection,” her new system will justify requiring mechanical explanations for all phenomena. She considers these arguments to be based on the PSR.
In addition to establishing how Du Châtelet applied the PSR to her project, I discuss the problematic aspects of her restriction of explanations to mechanical ones, based on her premise that the universe is a single machine. Finally, I consider contemporary analogs to her position, and the difference between their foundations and Du Châtelet's. 
Presenters
AG
Ashton Green
University Of Notre Dame
45. Modes of Experimental Interventions in Molecular Biology
Philosophy of Science 00:45 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:45:00 UTC - 2018/11/03 06:59:00 UTC
Hsiao-Fan Yeh (National Chung Cheng University), Ruey-Lin Chen (National Chung Cheng University)
This paper explores modes of experimental interventions in molecular biology. We argue for the following three points: (i) Different modes of experimental intervention can be distinguished according to two standards: the interventional direction and the interventional effect. (ii) There are two interventional directions (vertical/inter-level and horizontal/inter-stage) and two interventional effects (excitatory/positive and inhibitory/negative). (iii) In a series of related experiments, scientists can use multiple interventional modes to test given hypotheses and to explore novel objects.
Our argument begins with a brief characterization of Craver and Darden’s taxonomy of experiments, because the taxonomy they have made implies various modes of intervention (Craver and Darden 2013). We propose to extract two interventional directions and two interventional effects from their taxonomy as the basis of classification. The vertical or inter-level direction means that an intervention is performed between different levels of organization and the horizontal or inter-stage direction means that an intervention is performed between different stages of a mechanism. Interventions may produce an excitatory or an inhibitory effect. As a consequence, we can classify modes of intervention according to different interventional directions and effects. We will do a case study of the PaJaMa experiment (Pardee, Jacob and Monod 1959) to illustrate the three points.
Presenters
HY
Hsiao-Fan Yeh
National Chung Cheng University
Co-Authors
RC
Ruey-Lin Chen
National Chung Cheng University
46. Mechanism Discovery Approach to Race in Biomedical Research
Philosophy of Science 00:46 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:46:00 UTC - 2018/11/03 06:59:00 UTC
Kalewold Kalewold (University of Maryland, College Park)
Race is commonly considered a risk factor in many complex diseases, including asthma, cardiovascular disease, and renal disease, among others. While viewing races as genetically meaningful categories is scientifically controversial, empirical evidence shows that some racial health disparities persist even when controlling for socioeconomic status. This poster argues that a mechanistic approach is needed to resolve the issue of race in biomedical research.
The distinction between race-based studies, which hold that “differences in the risk of complex diseases among racial groups are largely due to genetic differences covarying with genetic ancestry which self-identified races are supposed to be good proxies for” (Lorusso and Bacchini 2015, 57), and race-neutral studies, which incorporate multiple factors by looking at individual level or population level genetic susceptibility, mirrors the “explanatory divide” Tabery (2014) highlights between statistical and mechanistic explanations in biology. In this poster I show that race-neutral studies constitute a Mechanism Discovery Approach (MDA) to investigating racial disparities. Using evidence from statistical studies, MDA seeks to build mechanism schemas that show causally relevant factors for racial disparities. 
This poster shows how MDA illuminates the productively active components of disease mechanisms that lead to disparate health outcomes for different self-identified races. By eschewing the “genetic hypothesis”, which favors explanations of racial disparities in terms of underlying genetic differences between races, MDA reveals the mechanisms by which social, environmental, and race-neutral genetic factors, including past and present racism, interact to produce disparities in chronic health outcomes. 
This poster focuses on the well-characterized disparity between birth weights of black and white Americans highlighted in Kuzawa and Sweet (2009). Their research on racial birth weight disparities provides sufficient evidence for a plausible epigenetic mechanism that produces the phenomenon. I argue that what makes their explanation of the racial disparity in US birth weights successful is that it is mechanistic. The mechanism is neither just hereditary nor just environmental; instead, it is both: it is epigenetic. The poster will provide a diagram showing the mechanism. By showing how the various parts of the mechanism interact to produce the phenomena in question, MDA avoids the pitfalls of race-based studies while still accounting for the role of social races in mechanisms producing racial disparities. This approach also enables the identification of potential sites of intervention to arrest or reverse these disparities.
Presenters
KK
Kalewold Kalewold
University Of Maryland, College Park
47. A Conceptual Framework for Representing Disease Mechanisms
Philosophy of Science 00:47 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:47:00 UTC - 2018/11/03 06:59:00 UTC
Lindley Darden (University of Maryland, College Park), Kunal Kundu (University of Maryland, College Park), Lipika Ray (University of Maryland, College Park), John Moult (University of Maryland, College Park)
The "big data" revolution is leading to new insights into human genetic disease mechanisms. But these many results are scattered throughout the biomedical literature and represented in many different ways, including free text and cartoons. Thus, a standard framework is needed to represent disease mechanisms. This poster presents a conceptual framework, utilizing a newly developed analysis of disease mechanisms (Darden et al. 2018).
The new mechanistic philosophy of science characterizes the components of mechanisms: entities and activities. Adapting this for genetic disease mechanisms yields the categories of "substate perturbations" plus the drivers of changes from one substate perturbation to the next, called "mechanism modules" (activities or groups of entities and activities). The framework shows the organized stages of a genetic disease mechanism from a beginning substate perturbation (e.g., a gene mutation or chromosomal aberration) to the disease phenotype. It depicts environmental influences as well. It aids in finding possible sites for therapeutic intervention. It shows a schema builder's view of well-established components as well as uncertainty, ignorance, and ambiguity, based on evidence from the biomedical literature. Its abstract scaffolding directs the schema builder to fill in the key components of the disease mechanism, while the unknown components serve to direct future experimental work to remove sketchiness and provide additional evidence for its components.
The poster will show progressively less abstract and more complete diagrams that represent the framework, as sketches become schemas. When a perturbation is correlated with a disease phenotype, it suggests searching for an unknown mechanism connecting them. The entire mechanism is a black box to be filled. Most abstractly and most generally, a disease mechanism is depicted by a series of substate perturbations (SSPs, rectangles) connected by lines labeled with the mechanism modules (MMs, ovals) that produce the changes from perturbation to perturbation. Optional additions include environmental inputs (cloud-like icons) and possible sites for therapeutic intervention (blue octagons). Telescoping of sets of steps into a single mechanism module increases focus on disease-relevant steps; e.g., transcription and translation telescope into the MM labeled "protein synthesis." The default organization is linear, from a beginning genetic variant to the ending disease phenotype, but it can include branches, joins, and feedback loops, as needed. Black ovals show missing components in the series of steps. The strength of evidence is indicated by color-coding, with green showing the highest confidence, orange medium, and red the lowest. Branches labeled "and/or" show ambiguity about the path followed after a given step. Along with the general abstract diagrams, the poster will include detailed diagrams of specific disease mechanisms, such as cystic fibrosis.
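As a purely illustrative aid (not part of the authors' framework), the diagrammatic conventions described above can be mirrored in simple data structures; every name below is hypothetical, chosen only to suggest how SSPs, MMs, confidence colors, and optional annotations might be encoded.

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Confidence(Enum):
    HIGH = "green"      # well-established component
    MEDIUM = "orange"
    LOW = "red"         # weakest evidence

@dataclass
class SubstatePerturbation:     # "SSP": a rectangle in the diagrams
    description: str            # e.g., a gene mutation or chromosomal aberration

@dataclass
class MechanismModule:          # "MM": an oval connecting two SSPs
    label: str                  # may telescope several steps, e.g. "protein synthesis"
    source: SubstatePerturbation
    target: SubstatePerturbation
    confidence: Confidence
    environmental_inputs: List[str] = field(default_factory=list)  # cloud-like icons
    therapeutic_target: bool = False                                # blue octagons

@dataclass
class MechanismSchema:          # linear by default; branches, joins, loops possible
    disease: str
    steps: List[MechanismModule]

# Hypothetical usage: a single step of a cystic fibrosis schema.
mutation = SubstatePerturbation("CFTR gene mutation")
misfolded = SubstatePerturbation("misfolded CFTR protein")
schema = MechanismSchema("cystic fibrosis",
                         [MechanismModule("protein synthesis", mutation, misfolded,
                                          Confidence.HIGH)])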
In addition to providing an integrated representational framework for disease mechanisms, these visual schemas facilitate prioritization of future experiments, identification of new therapeutic targets, ease of communication between researchers, detection of epistatic interactions between multiple schemas in complex trait diseases, and personalized therapy choice.
Presenters
LD
Lindley Darden
University Of Maryland College Park
Co-Authors
KK
Kunal Kundu
University Of Maryland College Park
LR
Lipika Ray
University Of Maryland
JM
John Moult
University Of Maryland College Park
48. The Scope of Evolutionary Explanations as a Matter of “Ontology-Fitting” in Investigative Practices
Philosophy of Science 00:48 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:48:00 UTC - 2018/11/03 06:59:00 UTC
Thomas Reydon (Leibniz Universität Hannover) 
Both in academic and in public contexts the notion of evolution is often used in an overly loose sense. Besides biological evolution, there is talk of the evolution of societies, cities, languages, firms, industries, economies, technical artifacts, car models, clothing fashions, science, technology, the universe, and so on. While in many of these cases (especially in the public domain) the notion of evolution is merely used in a metaphorical way, in some cases it is meant more literally as the claim that evolutionary processes similar to biological evolution occur in a particular area of investigation, such that full-fledged evolutionary explanations can be given for the phenomena under study. 
Such practices of “theory transfer” (as sociologist Renate Mayntz called it) from one scientific domain to others, however, raise the question of how much can actually be explained by applying an evolutionary framework to non-biological systems. Can applications of evolutionary theory outside biology, for example to explain the diversity and properties of firms in a particular branch of industry, of institutions in societies, or of technical artifacts, have a similar explanatory force as evolutionary theory has in biology? Proponents of so-called “Generalized Darwinism” (e.g., Aldrich et al., 2008; Hodgson & Knudsen, 2010) think it can. Moreover, they think evolutionary thinking can perform a unifying role in the sciences by bringing a wide variety of phenomena under one explanatory framework.
I will critically examine this view by treating it as a question about the ontology of evolutionary phenomena. My claim is that practices of applying evolutionary thinking in non-biological areas of work can be understood as what I call “ontology-fitting” practices. For an explanation of a particular phenomenon to be a genuinely evolutionary explanation, the explanandum’s ontology must match the basic ontology of evolutionary phenomena in the biological realm. This raises the question of what elements this latter ontology consists of. But there is no unequivocal answer to this question. There is ongoing discussion about what the basic elements in the ontology of biological evolutionary phenomena (such as the units of selection) are and how these are to be conceived of. Therefore, practitioners from non-biological areas of work cannot simply take a ready-for-use ontological framework from the biological sciences and fit their phenomena into it. Rather, they usually pick those elements from the biological evolutionary framework that seem to fit their phenomena, disregard other elements, and try to construct a framework that is specific to the phenomena under study. By examining cases of such “ontology fitting” we can achieve more clarity about the requirements for using evolutionary thinking to explain non-biological phenomena. I will illustrate this by looking at an unsuccessful case of “ontology fitting” in organizational sociology.
Presenters
TR
Thomas Reydon
Leibniz Universität Hannover
49. Lessons from Synthetic Biology: Engineering Explanatory Contexts
Philosophy of Science 00:49 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:49:00 UTC - 2018/11/03 06:59:00 UTC
Petri Turunen (University of Helsinki) 
The poster outlines a four-year empirical investigation into a synthetic biology (BIO, EBRC, Elowitz 2010, Morange 2009) consortium. The focus of the investigation was on how scientists in a highly interdisciplinary research consortium deal with interdisciplinary hurdles. In particular, we studied how the scientists communicated with each other when they were trying to explain issues related to their field of expertise. What kinds of representational strategies were used? Which ones were successful?
Synthetic biology was chosen as the target field for this investigation for two reasons. Firstly, synthetic biology is a particularly interdisciplinary field that brings together, among others, biologists, engineers, physicists and computer scientists. Secondly, synthetic biology is still a relatively new field of study. It does not yet have a clear disciplinary identity nor well-regimented methodological principles. Since synthetic biology is still largely in the process of negotiating its practices, it provides a particularly good case for studying how interdisciplinary practices get negotiated in actual practice.
Our focus was on representational strategies, because our empirical case was particularly suited for observing them. We followed an interdisciplinary consortium made out of three separate groups with differing backgrounds ranging from industrial biotechnology and molecular plant biology to quantum many-body systems. We were given permission to observe consortium meetings, where the three different groups came together and shared their findings. These meetings made the representational strategies used by the scientists particularly visible, since their severe time constraints and discursive format forced the scientists to think carefully on how to present their findings. 
We followed and taped these consortium meetings. In addition, we performed more targeted personal interviews. Based on these materials we made the following general observations: 
1. Interdisciplinary distance promoted more variance in the use of differing representational means. That is, the bigger the difference in disciplinary background, the less standardized the communication.
2. Demands for concreteness varied: more biologically inclined researchers wanted connections to concrete biological systems, whereas the more engineering-oriented researchers wanted input on what sort of general biological features would be useful to model. Both aspects related to the model-target connection but imposed different demands on what was relevant for establishing that connection.
3. Interdisciplinary distance promoted the use of more schematic and general representations. 
Interdisciplinary distance was thus related to noticeable differences in the utilized representational strategies. All three observations also suggest that the scientists are not merely transmitting content but are instead trying to construct suitable representational contexts for that content to be transmissible. That is, scientists are performing a kind of contextual engineering work. Philosophically the interesting question then becomes: how exactly is content related to its representational context?
Presenters
PT
Petri Turunen
University Of Helsinki
51. The Narrow Counterfactual Account of Distinctively Mathematical Explanation
Philosophy of Science 00:51 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:51:00 UTC - 2018/11/03 06:59:00 UTC
Mark Povich (Washington University, St. Louis) 
An account of distinctively mathematical explanation (DME) should satisfy three desiderata: it should account for the modal import of DMEs; it should distinguish uses of mathematics in explanation that are distinctively mathematical from those that are not (Baron 2016); and it should also account for the directionality of DMEs (Craver and Povich 2017). Baron’s (forthcoming) deductive-mathematical account, because it is modeled on the deductive-nomological account, is unlikely to satisfy these desiderata. I provide a counterfactual account of distinctively mathematical explanation, the Narrow Counterfactual Account (NCA), that can satisfy all three desiderata. 
NCA satisfies the three desiderata by following Lange (2013; but not Lange 2017, apparently) in taking the explananda of DMEs to be of a special, narrow sort. Baron (2016) argues that a counterfactual account cannot satisfy the second desideratum, because such an account, according to Baron, holds that an explanation is a DME when it shows a natural fact to depend counterfactually on a mathematical fact. However, this does not distinguish DMEs from non-DMEs that employ mathematical premises. NCA satisfies the second desideratum by narrowing the explanandum so that it depends counterfactually *only* on mathematical fact. Such an explanandum is subject to a DME. This narrowing maneuver also allows NCA to satisfy the first desideratum. Since the narrowed explanandum depends counterfactually only on a mathematical fact, changes in any empirical fact have no "effect" on the explanandum. 
Narrowing the explanandum satisfies the third desideratum, because Craver and Povich's (2017) "reversals" are not DMEs according to NCA. To see this, consider the case of Terry's Trefoil Knot (Lange 2013). The explanandum is the fact that Terry failed to untie his shoelace. The explanantia are the empirical fact that Terry's shoelace contains a trefoil knot and the mathematical fact that the trefoil knot is distinct from the unknot. Craver and Povich (2017) point out that it is also the case that the fact that Terry’s shoelace does not contain a trefoil knot follows from the empirical fact that Terry untied his shoelace and the mathematical fact that the trefoil knot is distinct from the unknot. (One can stipulate an artificial context where the empirical fact partly constitutes the explanandum.) However, if we narrow the explananda, NCA counts Terry’s Trefoil Knot as a DME and not Craver and Povich’s reversal of it. This is because the first of the following counterfactuals is arguably true, but the second is arguably false: 1) Were the trefoil knot isotopic to the unknot, Terry would have untied his shoelace that contains a trefoil knot. 2) Were the trefoil knot isotopic to the unknot, Terry would have had a trefoil knot in the shoelace that he untied. (I use Baron, Colyvan, and Ripley’s [2017] framework for evaluating counterfactuals with mathematically impossible antecedents, so that these two counterfactuals get the right truth-values.) The same is shown for all of Lange’s paradigm examples of DME and Craver and Povich's "reversals".
Presenters
MP
Mark Povich
Washington University
52. Developing a Philosophy of Narrative in Science
Philosophy of Science 00:52 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:52:00 UTC - 2018/11/03 06:59:00 UTC
Mary S. Morgan (London School of Economics), Mat Paskins (London School of Economics), Kim Hajek (London School of Economics), Andrew Hopkins (London School of Economics), Dominic Berry (London School of Economics) 
Narratives are at work in many sciences, operating at various levels of reasoning and performing a wide variety of functions. In some areas they are habitual, as in the natural historical sciences, but they are also to be found in less likely places: for example as integral with mathematical simulations, or in giving accounts of chemical syntheses. Despite their endemic nature, philosophers of science have not yet given much credence to narrative — either as a kind of explanation, a type of observational reporting, a format of representation, or any of the other purposes to which narratives can be put. Yet — as is evident in the brief outline below — the use of narratives both carries ontological implications and prompts epistemic questions. Our poster introduces the ‘narrative science project’, which is investigating a number of scientific sites to develop a philosophical approach to scientists’ use of narratives within their communities, rather than in their pedagogical or popularising usages. Three questions exemplify the value of admitting narrative into the philosophy of science.
How do candidate laws of nature interact with narrative explanation in natural historical sciences? Laws are traditionally required for explanation in the sciences, but it has been argued that in the natural historical sciences they rather ‘lurk in the background’. Initial project findings suggest that in narrative accounts in these fields, laws might rather ‘patrol’ than ‘lurk’ — to forbid certain narratives and to constrain those that are told without ever quite determining the account. This ‘patrolling’ may function differently with respect to long-term changes than with short-term upheavals — such as found in geology or earth science. But narratives have also been found in situations of disjunctions or gaps in law-based explanations in these historical sciences, or play a bridging or unlocking function between scientists from different fields working together. 
How do the social, medical, and human sciences rely on co-produced “analytical narratives” in reporting their observational materials? It is quite typical of a range of scientific methods that ‘observations’ consist of individual accounts of feelings or attitudes or beliefs, so that the data provided come directly from the ‘subjects’ involved. Often the materials come in the form of anecdotes, small contained narratives, or fragments of longer ones. Our evidence suggests we should treat these as ‘co-produced’ observations, where sometimes the analytical work goes alongside the subject to be reported polyphonically, and at other times the ‘objective analysis’ of the observing scientist is integrated into the self-witnessed, ‘subject-based’, reporting to produce something like ‘analytical observations’.
We should consider narrative seriously as an available format of representation in science, worthy of the same philosophical consideration given to models, diagrams, etc. Answers to these questions will rely not just on philosophy but also on narrative theory, which helps to distinguish narrative from narrating. Such an approach raises a number of issues, for example: Is there a standard plot, or does it vary with discipline? Our poster imagines the narrative plots of chemical synthesis, developmental biology, anthropology, engineered morphology, psychological testimony, and geological time.
Presenters
MM
Mary Morgan
London School Of Economics
MP
Mat Paskins
London School Of Economics
KH
Kim Hajek
LSE
AH
Andrew Hopkins
London School Of Economics
DB
Dominic Berry
London School Of Economics And Political Science
53. Pluralist Explanationism and the Extended Mind
Philosophy of Science 00:53 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:53:00 UTC - 2018/11/03 06:59:00 UTC
David Murphy (Truman State University)
Proponents of the hypothesis of extended cognition (HEC) regularly invoke its explanatory contributions, while critics assign negative explanatory value. Mark Sprevak’s critique, inspired by Peter Lipton, casts doubt on the efficacy of the shared strategy of invoking explanation as justification. Specifically, an inference to the best explanation (IBE) concerning HEC is said to fail because there’s a close rival that makes a competing truth claim, namely the hypothesis of embedded cognition (HEMC), but HEMC cannot be differentiated meaningfully from HEC in relation to explanatory virtues. 
I argue that even though there’s merit to the critique when we accept its framing, the ascription of a narrow model of IBE to the discussants leads to a faulty generalization concerning available explanatory resources, and removes promising explanationist strategies from view. When we, by contrast, set explanatory tools sympathetically (actualizing a directive set by Sprevak for his critique), the viability of arguments based on explanatory contributions returns to view.
Lipton and Sprevak’s critique notwithstanding, commitment to “the core explanationist idea that explanatory considerations are a guide to inference” (Lipton, Inference to the Best Explanation, 153) comports well with endorsing explanatorily based arguments for and against HEC and HEMC. Strikingly, appropriating and developing resources presented by Lipton facilitates the deflection of much of Lipton and Sprevak’s critique. 
Placing broadening moves under the umbrella of pluralist explanationism (an explanationism assisted by Lipton’s “compatibilist” variant), I demonstrate how this resets the debate, concluding that the explanationist need not agree to the stalemate regarding explanatory virtues that the critique posits. First, in agreement with Lipton, I feature background beliefs and interest relativity. Sprevak draws from Lipton to set IBE as inferring to the hypothesis that best explains scientific data, but that standard model narrows when he ignores background beliefs and interest relativity. That narrowing illicitly enables key critical moves. Secondly, bringing contrastive explanation (CE) to bear (featured by Lipton in relation to IBE) not only illuminates an argument made by proponents of HEC that Sprevak resists, but also draws in the “explanatory pluralism” Lipton connects to CE. Thirdly, much of the strength of the critique depends on ascribing a model of IBE anchored in realism. When we, instead, explore perspectives arising from anti-realist variants of IBE, again using Lipton as prompt, that strength diminishes. Fourthly, I contend that an argument against extending HEC to consciousness stands when seen as a “potential” explanation (Lipton), akin to Peircean abduction, even though it fails when interpreted as an attempted IBE, narrowly conceived. Fifthly, developing a connection between explanationism and voluntarism adumbrated by Lipton creates additional space for explanatory appeals that fail within the unnecessarily tight constraints ascribed by the critic.
Discussants of HEC and HEMC need not accept the ascription of a narrow model of explanationism to themselves. Within a pluralist explanationist framework, we see that explanatory considerations provide significant backing for key positions regarding the extended mind, including retaining HEC as a live option, favoring HEC and HEMC in different contexts, and resisting extending the extended mind hypothesis to consciousness.
Presenters
DM
David Murphy
Truman State University
54. Broader Impacts Guidance System: Helping Cities Manage Ecological Impacts of Climate Change
Philosophy of Science 00:54 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:54:00 UTC - 2018/11/03 06:59:00 UTC
Josiah Skogen (Indiana University), Michael Goldsby (Washington State University), Samantha Noll (Washington State University)
Wicked problems are defined as complex challenges that require multifaceted solutions, involving diverse scientific fields. The technical expertise scientists provide is part of the solution. Unfortunately, there can be paralysis as various value commitments within the scientific community collide when solutions are contemplated. This can provide policy-makers with the impression that the science is incomplete and unable to provide policy advice. For example, consider climate change plans in the city. In an effort to reduce the impact of a changing climate on urban citizens and ecologies, a wide range of cities are developing such plans in consultation with urban ecologists and conservation biologists. It is easy to assume then that these two fields can equally contribute to city climate change plans, especially in light of the fact that both are given a privileged position in environmental policy discussions (Shrader-Frechette 1993). 
However, constructive interactions have been infrequent between urban ecologists and conservation biologists involved in the crafting of climate change mitigation strategies and in fact, members of these groups are commonly unaware of each other’s work (McDonnell 2015). We argue that one of the reasons for the lack of collaboration is the following: urban ecologists and conservation biologists are guided by seemingly incompatible values. While urban ecology draws from a wide range of disciplines that are focused on human and ecological interactions, conservation biology often favors ecological restoration and place-based management approaches without considering social systems (Sandler 2008). This apparent conflict results in a failure of coordination between the two fields. However, this need not be so. 
In the case described above, key values guiding the two fields appear to be in conflict. However, when taking broader impacts goals into account, the values at the heart of urban ecology and conservation biology are not only consistent, but complementary. Unfortunately, scientists are rarely trained to consider the implications of their value commitments. As such, conflict can arise from talking past each other with respect to their broader impact goals. 
We have recently been awarded a fellowship to help scientists explore values guiding their research and thus better realize their broader impact goals. Specifically, we adapted a tool for promoting interdisciplinary collaboration (The Toolbox Dialogue Initiative) to help scientists better articulate and realize the values underlying their work. Our work is focused on helping them advocate for their solutions, but it can also be used to show how two disparate fields have common goals. The poster will describe the status of our project.
Presenters
JS
Josiah Skogen
Indiana University-Bloomington
Samantha Noll
Washington State University
MG
Michael Goldsby
Washington State University
55. Enhancing Our Understanding of the Relationship Between Philosophy of Science and Scientific Domains: Results from a Survey
Philosophy of Science 00:55 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:55:00 UTC - 2018/11/03 06:59:00 UTC
Kathryn Plaisance (University of Waterloo), John McLevey (University of Waterloo), Alexander Graham (University of Waterloo), Janet Michaud (University of Waterloo) 
Discussions among philosophers of science as to the importance of doing scientifically and socially engaged work seem to be increasing as of late. Yet we currently have little to no empirical data on the nature of engaged work, including how common it is, the barriers philosophers face when engaging other communities, the broader impacts of philosophers’ work, or the extent to which the discipline actually values an engaged approach. Our project seeks to address this gap in our collective knowledge. In this paper, we report the results of a survey of 299 philosophers of science about attitudes towards and experiences with engaging scientific communities, barriers to engagement, and the extent to which philosophers of science think scientifically engaged work is and should be valued by the discipline. Our findings suggest that most philosophers of science think it’s important that scientists read their work; most have tried to disseminate their work to scientific or science-related communities; and most have collaborated in a variety of ways (e.g., over half of respondents have co-authored a peer-reviewed paper with a scientist). In addition, the majority of our respondents think engaged work is undervalued by our discipline, and just over half think philosophy of science, as a discipline, has an obligation to ensure it has an impact on science and on society. Reported barriers to doing engaged work were mixed and varied substantially depending on one’s career stage. These data suggest that many philosophers of science want to engage, and are engaging, scientific and other communities, yet also believe engaged work is undervalued by others in the discipline.
Presenters
KP
Kathryn Plaisance
University Of Waterloo
Co-Authors
JM
John McLevey
University Of Waterloo
AG
Alexander Graham
University Of Waterloo
JM
Janet Michaud
University Of Waterloo
56. The Epistemology of the Large Hadron Collider: An Interdisciplinary and International Research Unit
Philosophy of Science 00:56 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:56:00 UTC - 2018/11/03 06:59:00 UTC
Michael Stöltzner (University of South Carolina)
The aim of this poster is to present the work of the research unit “The Epistemology of the Large Hadron Collider”, which was funded in 2016 by the German Research Foundation (DFG) together with the Austrian Science Fund (FWF) for a six-year period. The group is composed of twelve principal investigators, six postdocs, and five doctoral students from the philosophy of science, history of science, and science studies.
The research unit investigates the philosophical, historical, and sociological implications of the activities at the world’s largest research machine, the Large Hadron Collider (LHC), at the European Organization for Nuclear Research (CERN) in Geneva. Its general question is whether the quest for a simple and universal theory, which has motivated particle physicists for several decades, is still viable at a time when there are no clear indications for physics beyond the standard model and all experimental evidence is increasingly coming from a single large and complex international laboratory. Among the topics relevant to philosophers of science, and specifically philosophers of physics, are the nature of scientific evidence in a complex experimental and theoretical environment, the role of computer simulations in establishing scientific knowledge, the dynamics of the model landscape and its driving forces, the relationship between particle physics and gravitation (using the examples of dark matter searches and modified gravity), the significance of guiding principles and values for theory preference, the impressive career of and recent skepticism towards naturalness, along with its relationship to effective field theories, the natures of detectable particles and virtual particles, the role of large-scale experiments within model testing and explorative experimentation, and the understanding of novelty beyond model testing. 
The interactions between the change in the conceptual foundations of particle physics prompted by the LHC and the complex practices engaged there are studied in six independent, but multiply intertwined, research projects: A1 The formation and development of the concept of virtual particles; A2 Problems of hierarchy, fine-tuning and naturalness from a philosophical perspective; A3 The contextual relation between the LHC and gravity; B1 The impact of computer simulations on the epistemic status of LHC data; B2 Model building and dynamics; B3 The conditions of producing novelty and securing credibility from the sociology of science perspective.
Presenters
MS
Michael Stöltzner
University Of South Carolina
57. The Novel Philosophy of Science Perspective on Applications of the Behavioural Sciences to Policy
Philosophy of Science 00:57 AM - 11:59 PM (America/Los_Angeles) 2018/11/02 07:57:00 UTC - 2018/11/03 06:59:00 UTC
Magdalena Malecka (University of Helsinki)
The objective of this research project is to propose a novel philosophy of science perspective for analysing reliance on behavioural findings in policy contexts. The recent applications of the behavioural sciences to policymaking are based on research in cognitive psychology, behavioural economics, and decision theory. This research is supposed to provide the knowledge necessary to make policy that is effective (Shafir, ed. 2012, Oliver 2013). ‘Nudging’ is an example of a new approach to regulation, elicited by the application of the behavioural sciences to policy. Its adherents advocate using knowledge about factors influencing human behaviour in order to impact behaviour by changes in the choice architecture (Thaler, Sunstein 2008).
The debate on nudging in particular, and on bringing the behavioural sciences to bear on policy in general, focuses predominantly on the moral limits to nudging, and the defensibility of libertarian paternalism (Hausman, Welch 2010; White 2013). Philosophers of science consider whether, for behavioural research to provide policy relevant insights, it should identify mechanisms underlying phenomena under study (Gruene-Yanoff, Marchionni, Feufel 2018; Heilmann 2014; Gruene-Yanoff 2015; Nagatsu 2015). 
I argue that the debate overlooks three important points. First, there is a lack of understanding that behavioural research is subject to interpretation and selective reading in policy settings. Second, the debate is based on a simplistic understanding of behavioural research that fails to pay attention to how causal factors and behaviours are operationalized, and to what exactly the behavioural sciences offer knowledge of. Finally, there is a lack of broader perspective on the relationship between the type of knowledge provided by the behavioural sciences and the type of governing that behaviourally informed policies seek to advance.
My project addresses these missing points in the debate. It shows that when reflecting on reliance on scientific findings (here, the behavioural sciences) in policy settings, it is important not only to analyse the conditions under which a policy works (is effective). It is equally consequential to understand how the explanandum is conceptualized, what kinds of causal links are studied, and what is kept in the background. My analysis builds on Helen Longino’s work on studying human behaviour (2013), which has gone virtually unnoticed in the discussion of behavioural science in policy.
Presenters
MM
Magdalena Malecka
University Of Helsinki