Joe

Pearson's biometry and Mendelism

Updated: May 22

Karl Pearson set out to produce a mathematical treatment of heredity – biometry – which modeled the inheritance of continuously variable characters. Such a treatment of heredity was opposed to the views of William Bateson. Weldon had set out the basic tenets of biometry in a paper published in the Proceedings of the Royal Society:

It cannot be too strongly urged that the problem of animal evolution is essentially a statistical problem: that before we can properly estimate changes at present going on in a race or species we must know accurately (a) the percentage of animals which exhibit a given amount of abnormality with regard to a particular character; (b) the degree of abnormality of other organs which accompanies a given abnormality of one; (c) the difference between the death rate per cent in animals of different degrees of abnormality with respect to any organ; (d) the abnormality of offspring in terms of the abnormality of parents and vice versa. These are all questions of arithmetic; and when we know the numerical answers to these questions for a number of species, we shall know the deviation and the rate of change in these species at the present day – a knowledge which is the only legitimate basis for speculations as to their past history, and future fate. (Weldon 1893, p. 329)

Weldon never developed the methods himself, but he did attract the attention of Pearson, who created the basic tools for the statistical study of populations. (For a discussion of Weldon’s insufficient mathematical training and his inability to supply the methods of biometry, see Provine 1971.) These statistical studies of heredity rested on a number of uncontested observations. First, offspring resemble their parents more than they resemble others. Second, children of the same parents vary in characteristic ways from their parents. Third, “exceptional” parents have more “mediocre” offspring than themselves. I am not sure exactly what ‘exceptional’ and ‘mediocre’ entail, but, for the purposes of this post, I imagine the terminology refers to something like an individual’s behavior patterns or intelligence. Finally, some characteristics can skip several generations of offspring (atavism). These observations derive from Galton’s statistical studies of heredity, which circulated widely during the biometric phase (see especially Gayon 2000, pp. 73-74). Statistical analysis can explain such phenomena, and it was purely descriptive: it did not require one to hypothesize about the mechanism of hereditary transmission. Pearson rejected Mendelism because it accepted the gene as a theoretical entity, and he denied the Mendelian claim that we need only know about the parental generation to calculate the offspring’s heritable variation. Essentially, Pearson and the biometricians were committed to studying heredity and variation statistically, and they were suspicious of any approach that attempted to postulate an underlying mechanism of heredity. The statistical method should supply an accurate description of variation without resorting to the postulation of any unverifiable mechanism of heredity.

Pearson’s law of ancestral heredity avoided hypothetical entities altogether. His aim was to give a ‘metaphysics-free’ account of ancestral heredity. Pearson’s (1903) paper “The Laws of Ancestral Heredity” exemplified such an account. The laws were instruments of prediction that described the routines of experience and did not fall into the trap of trying to explain these routines. By trying to explain the routines of experience, one would be forced to postulate the existence of ‘theoretical entities’. Pearson wanted to avoid such a trap by describing the flow of appearances in scientific experiments. Pearson writes:

The law of ancestral heredity in its most general form is not a biological hypothesis at all, it is simply the statement of a fundamental theorem in the statistical theory of multiple correlation applied to a particular type of statistics. If statistics of heredity are themselves sound the results deduced from this theorem will remain true whatever biological theory of heredity be propounded. (Pearson 1903, p. 226)

To my mind, Pearson means by a biological hypothesis any assertion that posits the existence of some entity not verifiable by at least one of our sensory modalities, while a fundamental theorem may be merely a composite of propositions that describe empirically verifiable phenomena. Pearson’s laws of ancestral heredity did nothing more than summarize the observable differences between ancestors and progeny. These laws were “extended versions of the regression equation (1) which employed the results of the theory of multiple correlation (developed by Pearson) to predict x (the filial deviate) on a basis of knowledge not only of the midparental stature or intelligence (say) but also of the statures or intelligences of a whole series of ancestors” (Norton 1975, p. 540). Biometry was defined by the application of these statistical tools, designed to measure variability and correlation, and by the ways in which variability and correlation influenced various kinds of selection. Biometry described patterns of phenomena, and, in matters of heredity, it avoided metaphysics by seeking better formulae for predicting filial phenotypes on the basis of knowledge of ancestral phenotypes.

Pearson’s treatment of ancestral heredity proposed an equation of multiple regression that linked the character of an individual to the average character of each ancestral generation. The equation took the form y = b1x1 + b2x2 + … + bnxn, where y is the character of the progeny, xn is the average character in generation n, and bn is the coefficient of partial regression of y on xn for generation n (Gayon 2000, pp. 74-75). According to Pearson, such a treatment of heredity was completely descriptive. The regression coefficients could not be determined a priori; only actual measurement could fix them in particular situations. Pearson states, “All this has nothing in it peculiar to heredity, it is simply an application of the higher theory of statistics” (Pearson 1903, p. 217). The claim is that scientists need not speculate about the existence of hereditary units to build a science of heredity. All they need to do is observe certain regularities, apply statistical methods to the observations, and derive predictions from those methods. Pearson continues, “The above [statistical method] is in no sense a biological theory, it is based on no data whatever except the actual statistics; it is merely a convenient statistical method of expressing the observed facts” (Pearson 1903, p. 217). Pearson believed there is nothing speculative postulated in the statistical method: scientists identify and report observed facts as the legitimate bases for conclusions in the biometric method.
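The descriptive character of the method can be illustrated with a minimal numerical sketch. The data, the number of ancestral generations, and the "true" diminishing coefficients below are all made up for illustration (they loosely echo Galton-style halving contributions), and nothing here comes from Pearson's own datasets; the point is only that the regression coefficients bn are estimated from measurements, not fixed a priori.

```python
import numpy as np

# Hypothetical data: each row is one offspring; the three columns are the
# average trait values (say, stature in inches) of the parental,
# grandparental, and great-grandparental generations (x1, x2, x3).
rng = np.random.default_rng(0)
n = 200
X = rng.normal(loc=68.0, scale=2.0, size=(n, 3))

# Simulate offspring values with diminishing ancestral contributions.
# These coefficients are illustrative only, not Pearson's.
true_b = np.array([0.5, 0.25, 0.125])
y = X @ true_b + rng.normal(scale=1.0, size=n)

# As in Pearson's treatment, the coefficients are determined by the
# measurements themselves: here, by least squares, the computational core
# of the theory of multiple correlation.
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Prediction of a filial value from a new individual's ancestral averages.
ancestors = np.array([70.0, 69.0, 68.0])
predicted = ancestors @ b
print(b, predicted)
```

The fitted coefficients recover the simulated ones closely, and no claim about an underlying mechanism of transmission is made at any step: the formula is, in Pearson's phrase, "merely a convenient statistical method of expressing the observed facts."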

What the above seems to imply is that Pearson did not postulate the existence of certain ‘units’ of heredity. On the contrary, Pearson advocated a phenomenalist conception of scientific theory much akin to the approach of Ernst Mach (Cohen and Seeger [1970] and Blackmore [1972] offer informative discussions of Mach’s positivism). Among other things, a chief division between Pearson and Bateson was the commitment to the study of heredity and variation as a statistical analysis: Pearson and the other biometricians embraced the statistical method, while Bateson rejected it. Bateson may have rejected it partly because of his ineptitude in mathematics, but that would be pure speculation on my part. More substantively, Bateson rejected statistical analysis because he rejected continuous evolution: since evolution was discontinuous, change consisted of large jumps in variation, and there seemed to be little use for a statistical theory. Despite Pearson’s insistence that no theoretical entities exist beyond what can be observed, there does seem to be room to interpret his statistical method as postulating some theoretical entities. Pearson might not agree with such an interpretation, but that should not deter us from investigating its implications.

Pearson may not have explicitly postulated that theoretical entities are real things, but he did use a statistical method that implies some functional equivalent. The method seems to rest on the presumption that it functions in the same way as some ‘shadowy unknowable’: because the statistical method describes variation from parent to progeny, it appears to stand in for such an unknowable. The statistical method functions in the same way as the unit of heredity, and since the two seem to be doing the same work, it is tempting to conclude that they are identical. Talk of these statistical methods as describing observable relationships between parent and progeny certainly seems to support the notion that theoretical entities are fictions or conventions useful for thinking about the ways in which nature works. These statistical methods resemble the scaffolding workmen use on a building project: the scaffolding supports the building just as the statistical method supports the description of trait variation from parent to progeny. One approach uses statistical methods to describe the variation of traits over time while the other uses units of heredity to do so; they may have different means of describing the phenomena, but the two outcomes are identical. Perhaps Pearson unknowingly accepted the notion of unobservables framed in a different light: instead of postulating the existence of a substance simpliciter, he postulated the existence of a functional substance. It is difficult to understand the difference without properly unpacking Pearson’s philosophical presumptions. Functional substances differ only in that they do not require one to speculate about some unobservable entity; the functional description or explanation itself acts as the unobservable entity. Those who abide by a functional description or explanation thereby avoid theoretical entities.

What I hope to show in future posts is that the difference between Bateson and Pearson can be interpreted as resulting from their ontological commitments. Bateson did not worry about committing himself to the existence of theoretical entities while Pearson rejected the insinuation that his theory could even allow for the existence of such entities. These commitments informed and determined their theoretical beliefs about the legitimacy of biometry and Mendelism. I shall eventually return to Pearson’s implicit acceptance of functional substances. Now, however, I want to turn to Bateson and his adherence to Mendelism.

