Nicolas Fillion (Simon Fraser University)
I discuss epistemologically unique difficulties associated with the solution of mathematical problems by means of the finite element method. This method, used to obtain approximate solutions to multidimensional problems within finite domains with possibly irregular boundary conditions, has received comparatively little attention in the philosophical literature, despite being the most dependable computational method used by structural engineers and other modelers handling complex real-world systems. Like most methods in the standard numerical analysis curriculum, it breaks from the classical perspective on exact mathematical solutions, since it involves error-control strategies within given modeling contexts. Assessing the validity of such inexact solutions therefore requires emphasizing aspects of the relationship between solutions and mathematical structures that are not required to assess putative exact solutions. One such structural element is the sensitivity or robustness of solutions under perturbations, whose characterization leads to a deeper understanding of the mechanisms that drive the behavior of the system. The transition to an epistemological understanding of the concept of approximate solution can thus be characterized as an operative process of structure enrichment. This transition generates a scheme for assessing the justification of solutions that contains more complex semantic elements, whose murkier inner logic is essential to a philosophical understanding of the lessons of applied mathematics.
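To take a standard illustration of the kind of sensitivity at issue (a textbook perturbation bound, not a result specific to the finite element method): for a linear system $Ax = b$ of the sort a discretization ultimately produces, if the data $b$ is perturbed to $b + \delta b$, the induced change $\delta x$ in the solution satisfies
\[
\frac{\lVert \delta x \rVert}{\lVert x \rVert} \;\le\; \kappa(A)\,\frac{\lVert \delta b \rVert}{\lVert b \rVert},
\qquad \kappa(A) = \lVert A \rVert\,\lVert A^{-1} \rVert,
\]
so whether an inexact solution is acceptable depends not only on the size of its residual but also on how strongly the problem itself amplifies perturbations.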
To be sure, there is a practical acceptance of the finite element method by practitioners in their attempt to overcome the representational and inferential opacity of the models they use, mainly because it has proved to be tremendously successful. However, the finite element method differs in important respects from other numerical methods. What makes the method so advantageous in practice is its discretization scheme, which is applicable to objects of any shape and dimension. This innovative mode of discretization provides a simplified representation of the physical model by decomposing its domain into triangles, tetrahedra, or their analogs of the appropriate dimension. Officially, each element is then locally associated with a low-degree piecewise polynomial that is matched with the polynomials of neighboring elements to ensure sufficient continuity between the elements. On that basis, the local pieces are assembled over all the elements to obtain a solution over the whole domain. However, this presents applied mathematicians with a dilemma, since using piecewise polynomials continuous enough to allow for a mathematically sound local-global “gluing” is typically computationally intractable. Perhaps surprisingly, computational expediency is typically chosen over mathematical soundness. Strang has characterized this methodological gambit as a “variational crime.” I explain how committing variational crimes is a paradigmatic violation of epistemological principles that are typically used to make sense of approximation in applied mathematics. On that basis, I argue that the epistemological significance of these innovations, and of the difficulties they raise for justifying the relationship between the system and the solution, lies in an additional structural enrichment of the concept of the validity of a solution, one that is in line with recently developed methods of a posteriori error analysis.
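A minimal sketch of this discretization-and-assembly scheme, in its simplest setting rather than the multidimensional problems at issue above, is the following piecewise-linear treatment of the model problem -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0, written in Python with NumPy purely for illustration:

import numpy as np

def fem_1d(f, n_elements):
    """Piecewise-linear finite elements for -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0."""
    h = 1.0 / n_elements
    nodes = np.linspace(0.0, 1.0, n_elements + 1)
    n_nodes = n_elements + 1

    # Local stiffness matrix of one linear element of length h:
    # the integrals of phi_i' * phi_j' over that element.
    k_local = (1.0 / h) * np.array([[1.0, -1.0],
                                    [-1.0, 1.0]])

    K = np.zeros((n_nodes, n_nodes))   # global stiffness matrix
    F = np.zeros(n_nodes)              # global load vector

    for e in range(n_elements):
        idx = [e, e + 1]                          # global node numbers of element e
        K[np.ix_(idx, idx)] += k_local            # assemble the element contribution
        x_mid = 0.5 * (nodes[e] + nodes[e + 1])
        F[idx] += f(x_mid) * h / 2.0              # midpoint rule for the load integral

    # Impose the homogeneous Dirichlet conditions by solving only on the interior nodes.
    interior = slice(1, n_nodes - 1)
    u = np.zeros(n_nodes)
    u[interior] = np.linalg.solve(K[interior, interior], F[interior])
    return nodes, u

# Example: f = 1, whose exact solution is u(x) = x(1 - x)/2.
nodes, u = fem_1d(lambda x: 1.0, n_elements=16)
print(np.max(np.abs(u - nodes * (1.0 - nodes) / 2.0)))   # error at the nodes

Roughly speaking, in this one-dimensional, second-order setting the mere continuity of the hat functions across nodes suffices for a sound local-global gluing; the dilemma described above bites when the problem demands smoother elements, curved boundaries, or integrals that cannot be evaluated exactly.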