
On Berger's "Understanding Science: Why Causes Aren't Enough"

Some philosophers of science and scientists believe that science provides understanding by showing how natural phenomena fit into the causal structure of the world. On this view, scientific explanations should tell us what caused an event to happen. In this article, Ruth Berger asks how the process of doing science facilitates understanding. More specifically, she asks what syntactic and semantic relationships obtain between the explanandum and the explanans, and why explanations are capable of yielding understanding in science.


Berger’s aim is to give an empirical critique of causal accounts of scientific explanation. She argues that causal relevance cannot constitute scientific explanation, since the requirement of causal relevance turns out to be both too strong and too weak, and she shows that causal accounts obscure how the process of mathematical modeling produces explanatory information. She offers three arguments for the inadequacy of causal accounts: (1) explanatorily relevant information is not always information about causes, (2) treating theoretical explanations as reductions from causal laws misdescribes the “top-down” explanations offered by dynamical models, and (3) causal/mechanical approaches are vulnerable to the irrelevance problem. She concludes that it is very difficult to maintain a causal account of scientific explanation if we take the semantic view of theories seriously.


Berger levels an argument against what she terms the “basic causal” account of scientific explanation. The “basic causal” account incorporates some version of the following two claims: (1) “information is explanatorily relevant in virtue of its being information about causes”, and (2) “science produces understanding by correctly locating the explanandum within a causal nexus” (Berger 1998, p. 310). On the “basic causal” account, causality is the crucial relationship that makes information explanatorily relevant, and a comprehensive scientific explanation is one that reveals the relevant causal mechanisms.


Some variations of the “basic causal” account restrict the scope of these two claims to cases in which the explanandum has a causal history, while others expand both claims to permit different types of mechanisms. The former variation avoids counterexamples in which the explananda may lack causal histories, e.g., in quantum mechanics, while the latter avoids counterexamples such as Kitcher’s party knot (311-312). Another variation is a hybrid of the “basic causal” account with Peter Railton’s idea of an “ideal explanatory text.” This variation amends the second claim, since it allows one to deduce the explanandum from general causal laws.


Second, the causal account fails because treating theoretical explanations as reductions from general causal laws does not accurately depict the “top-down” approach of dynamical modeling. “Basic causal” accounts are better suited to explanations of singular events. Since dynamical models accommodate explanations of patterns, an adequate causal account of dynamical explanations “must be designed to accommodate theoretical explanations” (Berger 1998, p. 321). One variation of the “basic causal” account attempts to treat such explanations as parts of ideal causal explanatory texts, conceiving of theoretical explanation as reduction, in which a natural regularity is shown to be a deductive consequence of some general causal regularity; however, reduction fails to capture the ways scientists actually use dynamical models. In McKelvey et al.’s study of Dungeness crabs, for example, the modelers place more emphasis upon the dynamics that result from several types of causal influence than upon the causal mechanisms themselves. Mathematical modeling that produces theoretical explanations differs from reduction in two important ways: first, the explanations cannot be reconstructed as deductions from general causal laws, and, second, abductive reasoning accomplishes the explanatory work. A “bottom-up” method cannot provide information about the large-scale relationships between causal and structural factors.


Finally, Berger argues that causal information is not always explanatory. Possessing more information about an explanandum’s causal history does not make for a better explanation: what matters is having the correct information, not the quantity of information accumulated, and causal accounts that try to include all of the causal history do not always capture the correct information. “When scientists construct dynamical models, they incorporate a few of the physical system’s features into the model, and they deliberately and systematically ignore all other parts of the system’s causal history” (Berger 1998, p. 328). Causal accounts, particularly Salmon’s, designate a class of relevance relations, but they do not answer which parts of the explanandum’s causal history are explanatorily relevant and why those parts are relevant. Dynamical models, in a sense, avoid the irrelevance problem by discriminating between facts that are relevant and facts that are not.


The main thrust of Berger’s argument is to eliminate causal accounts of scientific explanation, but several questions remain. First, has Berger told us what makes a scientific explanation powerful or forceful? What is it about dynamical modeling that facilitates understanding? One could say that dynamical modeling minimizes the baggage of causal history, or that it provides a better way of identifying important types of facts; but minimizing the causal history may risk a loss of precision, and specifying what is explanatorily relevant seems to reject contextual elements in scientific explanations. Is information relevant only for the task at hand, or for a wider epistemic community? Information deemed irrelevant in one context may be very relevant in another. Also, what makes abductive reasoning more powerful than reduction? Finally, I find two parts of the article very confusing. The first is the paragraph beginning “Instead of…” on p. 324. What does it mean for structural similarity or dissimilarity claims to possess different strengths? What is explanatory force? And how does this relate to strength? The second occurs on pp. 330-331. Why must an adequate account of scientific explanation necessarily accommodate modeling explanations? To my mind, “Because they are simply too central to ignore” (Berger 1998, p. 330) does not count as an adequate answer to this important question.
