Saturday 1 October 2011

A Material Theory of Induction


In this paper, Norton offers his solution to the problem of induction. He does this by rejecting the notion that we need a universal schema for inductions. In the classic thinking on induction, we would take some finite number of observations and generalise to an indefinite number of cases according to the schema "Some A's are B's, so all A's are B's". But, as Hume pointed out, when we try to justify applying this schema, we find ourselves using an inductive argument and so applying the schema itself - a vicious circle therefore emerges rapidly.

Norton claims that inductions need not depend on a general schema at all, but are rather local and grounded in background facts. If, in a particular domain, we know the right background facts, then certain kinds of induction are made legitimate. Take his example of "All samples of bismuth melt at 271 degrees Celsius". This claim is arrived at inductively by examining a finite number of samples of bismuth. The relevant background facts that license this inductive generalisation include things like "bismuth is an element" and "all samples of the same element have the same melting point"*...

One way of thinking about this is to imagine playing a game of justification with the induction sceptic. You say "All samples of bismuth melt at 271 degrees Celsius". She asks you to justify that claim. "Well," you say, "bismuth is an element, and all samples of the same element have the same melting point." And so the next question: "Why do you think that all samples of an element will have the same melting point?" You answer that the melting point of a substance is largely determined by its electronic structure, and that all atoms of a given element have the same electronic structure. And so the game continues. At each stage, instead of trying to justify a general statement by appealing to some particular instances, you appeal to an even more general statement.

From this thought experiment, it is apparent that there is still a kind of regress at the heart of Norton's theory of induction. He claims that this is a less dangerous regress than in the standard understanding of Hume's problem, as it avoids vicious circularity. This approach seems to fit with scientific practice of taking some facts for granted when working in a particular domain. However, the crucial issue is pointed out by Norton himself: "What remains an open question is exactly how the resulting chains (or, more likely, branching trees) will terminate and whether the terminations are troublesome." (p. 668) He claims that we simply do not know for sure, but that there is a "real possibility of benign termination". This is doubtful - presumably these chains of justification will in fact terminate in simple generalisations about basic sensory experience. But these are grist to the original Humean mill. Norton has not solved the problem of induction.

It seems that there is also a problem with time. I claim that all samples of bismuth melt at 271 degrees Celsius, and thus that the next sample of bismuth I encounter will melt at that temperature. Now the induction sceptic asks how I know this. I respond with the above-mentioned facts about elements and so on. She responds: "How do you know that these things will continue to be true in the future?" What local facts do I appeal to in order to justify my expectation that the future will be like the past? Is this not exactly the same problem that afflicts the standard account of induction? If my response is to appeal to universal generalisations (which imply the relevant future-directed claims), then I seem to be missing the point. The sceptic can, at every stage, ask "Yes, but how do you know that these things will continue to be true in the future?" Or she can ask "How do you know that these generalisations, made on the basis of past evidence, will hold in the future?" Likewise, appealing to still more general generalisations won't help answer the sceptic's worry.


* Except sulphur. Elemental sulphur comes in different allotropes, or molecular arrangements, which means that different samples may have different melting points...

Monday 15 August 2011

Inductive realism and approximate truth

Theory Status, Inductive Realism, and Approximate Truth: No Miracles, No Charades

This paper offers two interesting things: a way of looking at scientific practice, and a case study of the eradication of smallpox. The paper also promises an explication of the concept of approximate truth, though I'm sceptical about whether it really delivers on this point.

Hunt starts by outlining what he calls the "inductive realist" model of scientific practice. This is encapsulated by a diagram (p. 164) that distinguishes "theory proposals", "theory status", "theory use" and "external world". He then explains the various interconnections between these things. As a broad-strokes outline of scientific practice, this seems to fit with what I understand of the process. As a tool for clarifying the concept of approximate truth, it falls somewhat short. What Hunt's analysis of approximate truth boils down to is the claim that
accepting a theory ... as approximately true is warranted when the evidence related to the theory is sufficient to give reason to believe that something like the specific entities ... posited by the theory is likely to exist in the world external to the theory. (p. 169, italics in original)
Now, stripping this claim of its numerous caveats and disclaimers, it becomes: "A theory is approximately true when there is something like the theoretical entities in the world." Without a further explanation of what it means for a theoretical entity to be "like" a physical entity, this is no explanation at all.

Hunt also seems to conflate the metaphysical notion of "truth" with epistemic considerations about the credibility of a proposition. If this is supposed to be a contribution to the scientific realism debate, then something ontological has to be at stake. But as the passage quoted above shows, Hunt always couches his claims about truth in terms of what one has "reason to believe"...

After this somewhat unsatisfactory analysis of approximate truth, Hunt turns to his smallpox case study. It was common knowledge in Europe in the 18th century that if you'd survived smallpox, you were no longer susceptible to catching it. It was also discovered around this time that those inoculated by being exposed to a small amount of the virus had a smaller risk of death than those who caught the disease in the usual way. The death rate for inoculation was still something like 1%, however, compared to 12% for the normal form of the disease.

Edward Jenner followed up on stories that people who had contracted the milder disease "cowpox" were also immune to smallpox. He wondered whether inoculating people with cowpox would give them immunity to smallpox. His early trials were successful, and within a few years smallpox vaccines were available. By 1900 smallpox was all but eradicated in North America and Europe. (It took until 1980 before the disease was finally declared eradicated worldwide.)

Hunt tries to use his analysis of approximate truth to explain the success of the smallpox vaccine. This seems a very strange thing to do: he attributes something called "the smallpox theory" to everyone from Jenner in the 1790s to the WHO in 1980. There was no such theory that they shared. Jenner didn't have a modern understanding of virology; indeed, it's unclear that he had any "theory" about smallpox at all. All he needed was a relatively low-level inductive move from "Smallpox inoculation causes immunity" and "Cowpox is like smallpox" to "Cowpox inoculation will cause immunity". He had no idea that smallpox and cowpox were viruses, or that inoculation/vaccination work by allowing the body to generate antibodies. Hunt lists 7 statements that together explain the success of the smallpox vaccination programme. Only the first of these statements is something that Jenner would have "known" in the 1790s.

If Jenner ever made any explicit predictions about the effectiveness of his vaccine, the accuracy of those predictions cannot be explained by Jenner's having found an approximately true "smallpox theory". We now understand Jenner's success in terms of viruses and antibodies, but Jenner didn't know about these things. What the realist typically wants to do when explaining some scientific success is to show that the theory the scientists had at the time is approximately true with respect to our current understanding. In the case of the smallpox vaccine, that doesn't seem plausible: Jenner didn't really have much of a theory, so there isn't any way to demonstrate correspondence between it and current understanding. In short, Jenner wasn't successful because he had some grand theory of disease that corresponds well with the world; he was successful because he was lucky enough to stumble upon a relatively low-level inductive leap that held true: cowpox is enough like smallpox for the same inoculation procedure to work.

I'd like to finish with two aspects of the paper that I did like. First, Hunt insists that we must be fallibilist about our theories: we must not assume that science never makes mistakes. Second, despite the odd use he puts it to, the smallpox case study is an interesting one for philosophy of science: that Jenner was so successful without any sort of high-level theory of disease is worth dwelling on.

Wednesday 27 July 2011

Vickers on "Historical Magic in Old Quantum Theory"

Vickers (2011)

This paper follows closely from another paper, co-written by the author with Juha Saatsi, "Miraculous Success? Inconsistency and Truth in Kirchhoff's Diffraction Theory" (2010). In both papers, the aim is to present historical case studies that stand as counterexamples to scientific realism. The key realist commitment here is that there must be an acceptable explanation for the fact that some scientific theories are spectacularly successful in providing explanations and/or predictions of empirical phenomena. The preferred explanation, of course, is that the key assumptions underlying these theories are in fact true.

Vickers offers several examples drawn from old quantum theory, the dominant framework in atomic physics at the beginning of the twentieth century until Heisenberg's development of quantum mechanics in 1925. The basic Bohr model of the atom depicted it as something like a planetary system, with electrons in elliptical orbits around the nucleus. The "quantum" aspect of this theory was that only certain orbits, corresponding to particular energy levels, were allowed. There were two key successes associated with this model. Firstly, Bohr was able to derive theoretically the value of the Rydberg constant, which is used to give the frequencies of the emission spectrum of the hydrogen atom. This theoretical value agreed very precisely with the value measured experimentally. The second success was Bohr's prediction of the spectrum of ionised helium (He+).
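For reference, the standard Bohr-model results that these two successes rest on can be reconstructed as follows; this is the textbook presentation in modern notation, not a quotation from Vickers, so take it as a sketch of what is at issue rather than his formulation:

\[
E_n = -\frac{m_e e^4}{8\varepsilon_0^2 h^2}\,\frac{1}{n^2},
\qquad
\frac{1}{\lambda} = R_\infty\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right),
\qquad
R_\infty = \frac{m_e e^4}{8\varepsilon_0^2 h^3 c} \approx 1.097 \times 10^{7}\ \mathrm{m}^{-1}.
\]

It is the close agreement between this theoretically derived value of the Rydberg constant and the spectroscopically measured one that constitutes the first of the two successes.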

However, according to Vickers, these successes were based on the assumption that the atom is literally like a planetary system, which we now believe to be quite false. In particular, one of the key boundary conditions used in the derivation of the Rydberg constant is that at high quantum numbers the frequency of the light emitted must match that predicted by classical electrodynamics. But the classical result for a charged particle revolving around another charged object is that the frequency of emission should equal the frequency of the revolution itself. Similarly, the derivation of the He+ spectrum relies on the notion that electron and nucleus in fact orbit their joint centre of gravity (this is equivalent to working with the "reduced mass" of the system).
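To make those two ingredients concrete, here is a minimal reconstruction in standard textbook notation (mine, not Vickers's): the correspondence-type boundary condition requires the radiated frequency for transitions between adjacent high-n levels to approach the classical orbital frequency, and the finite nuclear mass M is accommodated by replacing the electron mass with the reduced mass:

\[
\nu_{n \to n-1} \xrightarrow{\;n \to \infty\;} \nu_{\mathrm{orbit}},
\qquad
\mu = \frac{m_e M}{m_e + M},
\qquad
R_M = \frac{R_\infty}{1 + m_e/M}.
\]

The second of these is what gives He+ an effective Rydberg constant slightly different from hydrogen's, and it is that small correction which allowed Bohr to match the observed ionised-helium lines so precisely.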

The other major example discussed is Sommerfeld's derivation of the "fine structure" of the hydrogen spectrum. This derivation introduced a second quantum number that further subdivided each of the Bohr energy levels. For Sommerfeld, this number was understood to characterise the eccentricity of the electron's orbit. Converting the angular momentum of each orbit into a relativistic angular momentum then gave him a formula that accurately predicts the frequencies of the various fine-structure lines. As with Bohr, however, the assumption that electrons have "real" orbits is radically false by our lights; in quantum mechanics, the fine-structure splitting is attributed to spin-orbit coupling, and relativistic effects are introduced by means of a Hamiltonian (representing energy) rather than an angular momentum.
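For the record, the formula Sommerfeld arrived at can be written in modern notation (with n the principal and k the second, azimuthal quantum number; again this is my reconstruction of the standard result, not a quotation from the paper) as

\[
E_{n,k} = m_e c^2\left[1 + \left(\frac{Z\alpha}{n - k + \sqrt{k^2 - Z^2\alpha^2}}\right)^{2}\right]^{-1/2} - m_e c^2
\;\approx\;
-\frac{Z^2\alpha^2 m_e c^2}{2n^2}\left[1 + \frac{Z^2\alpha^2}{n^2}\left(\frac{n}{k} - \frac{3}{4}\right)\right],
\]

which coincides term for term with the later Dirac-theory result once k is read as j + 1/2. That such an accurate formula fell out of assumptions we now reject is exactly the explanatory puzzle Vickers is pressing.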

In all of these cases, Vickers believes that any attempt to explain the success of the theory by reference to the truth of its underlying assumptions is blocked since, by our current lights, those assumptions are false. He argues that there are two possible responses (there is also the possibility of discarding realism altogether, but this is not considered). The first is for the realist to relax the claim that truth is the only possible explanation for success. Sometimes, as he puts it, success may be due merely to "lucky coincidence". This appears to be his favoured resolution of the dilemma, as it is in the paper co-written with Saatsi.

The other response is to opt for structural realism. The structural realist does not claim that a successful theory is true as such, but only that the equations of the theory 'latch onto' the "structure" of the world. A (somewhat) clearer example is found in Worrall's "Structural Realism: The Best of Both Worlds?", where he argues that we should believe the wave equations of light even while disbelieving claims about what is "doing the waving". In a footnote (#23), Vickers argues that this position isn't quite realist enough: while it gives an account of why the equation is successful, it doesn't explain how this success was achieved. In particular, the realist should be able to argue not only that the equation is true (or structurally adequate), but that the assumptions used in deriving it are also true. This seems like an extremely strong claim about what is required for realism.

Indeed, the structuralist would certainly not see herself as required to defend this latter claim. She would wish to apply the structural interpretation to both the successful equation and the assumptions used in deriving it, and she would explain the success of the equation by reference to the claim that those assumptions are themselves structurally sound. To take one example, it is certainly true that Sommerfeld was wrong to believe that electrons have elliptical orbits, and that their energy levels are constrained because only certain eccentricities are allowed. He was not wrong to believe, however, that the electrons are characterised by a second quantum number, and that this attribute further constrains their energy levels. He was right about the constraint, although wrong about what was "doing the constraining"!

Wednesday 20 July 2011

Batterman on the explanatory role of mathematics in empirical science

Batterman (2010)

Batterman starts from the claim that there are cases of mathematical entities doing explanatory work. The question he asks is: how does this work? He argues that accounts of this sort of phenomenon offered by Pincock and by Bueno and Colyvan (so-called "mapping accounts") cannot work, and he offers a sketch of his own approach.

The idea behind the mapping account can be seen in an example like the seven bridges of Königsberg. It is impossible to walk across each bridge exactly once and get back to where you started because the bridges instantiate a particular mathematical structure which has the property of being "non-Eulerian", and it is in virtue of this property that such a walk is impossible. Generalising the idea, a mathematical explanation works because the mathematical structure maps onto the structure (or a crucial part of the structure) of the physical situation.
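To see the structure being appealed to, here is a minimal sketch of my own (not anything in Batterman or the mapping theorists): by Euler's theorem, a connected multigraph admits a closed walk traversing every edge exactly once just in case every vertex has even degree, and the Königsberg bridge graph fails that condition.

```python
from collections import Counter

# The seven bridges of Königsberg as a multigraph: four land masses
# (the labels are mine) joined by seven bridges (edges).
bridges = [
    ("north_bank", "kneiphof"), ("north_bank", "kneiphof"),
    ("south_bank", "kneiphof"), ("south_bank", "kneiphof"),
    ("north_bank", "lomse"), ("south_bank", "lomse"),
    ("kneiphof", "lomse"),
]

def has_eulerian_circuit(edges):
    """Euler's theorem: a connected multigraph has a closed walk using
    every edge exactly once iff every vertex has even degree."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return all(d % 2 == 0 for d in degree.values())

print(has_eulerian_circuit(bridges))  # False: every land mass has odd degree
```

On the mapping account, the explanation works because the land masses and bridges instantiate exactly this graph-theoretic structure, so the impossibility of the walk is inherited from the non-Eulerian character of the graph.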

Batterman makes two criticisms of this account of mathematical explanation which, to my mind, pull in different directions. The first criticism is that these mapping accounts don't have global measures of "representativeness". This amounts to saying that there is no general way in these accounts to determine which of two models of some phenomenon is "closer to the truth". This means that one standard ("Galilean") approach for explaining the idealisations in models is not available. The Galilean approach has it that one can justify the use of an idealisation by showing how replacing this idealisation with a "more representative" component would make the model more accurate. Without a rank-ordering of models by representativeness, this sort of thing isn't an option for mapping accounts.

Batterman's second criticism is that sometimes the idealisation plays an important (essential, ineliminable) role in the explanation. He has several examples. One is the explanation of critical behaviour in fluids by reference to the thermodynamic limit, the mathematical construction whereby the number of particles in the substance is taken to approach infinity. Another is the explanation of rainbows in terms of geometrical optics. In each of these cases, there is nothing in the physical system onto which the crucial bit of mathematical apparatus can be straightforwardly mapped, hence at least a prima facie difficulty for a mapping account.
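For concreteness, the thermodynamic limit at issue is the standard statistical-mechanical definition of the bulk free energy per particle (textbook notation, not Batterman's):

\[
f(\rho, T) = \lim_{\substack{N \to \infty,\ V \to \infty \\ N/V = \rho}} \frac{F(N, V, T)}{N},
\]

and it is only in this limit that f can develop the non-analyticities (phase transitions, critical points) that the explanation of critical behaviour trades on; no finite-N system exhibits them.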

These two criticisms don't seem to work together. Suppose, as Batterman seems to think, there are two kinds of mathematical explanations: ones that involve (only) Galilean idealisations, and ones where the limit is essential. Then it seems to be a good thing that the proponents of mapping accounts don't commit themselves to dealing only with Galilean idealisations and the relatively rigid system of mapping from model to world that goes along with them. Pincock and Bueno and Colyvan presumably don't think that the distinction between Galilean and non-Galilean explanations is so important, and would rather have a looser system of mapping that lacks a general measure of representativeness.

The behaviour of a system as one approaches some mathematical limit and the behaviour at the limit itself are qualitatively different in some important cases. This seems to be an important point for Batterman. But why can't the mapping proponent argue that the (mathematical) behaviour at the limit can map onto some physical phenomenon, and thus handle Batterman's problem cases?
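A toy example of my own may make the point about limits vivid: consider the sequence of functions

\[
f_N(x) = x^N \ \text{on } [0,1],
\qquad
\lim_{N \to \infty} f_N(x) =
\begin{cases}
0, & 0 \le x < 1,\\
1, & x = 1.
\end{cases}
\]

Every member of the sequence is continuous, but the limit function is not: the qualitative feature appears only at the limit and at no finite stage. Batterman's thought is that critical phenomena stand to finite systems roughly as this discontinuity stands to the f_N, which is why he takes the idealisation to be doing essential work. The question remains whether the mapping proponent can simply map that limit behaviour onto the physical phenomenon.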

In the Galilean cases, the ways of dropping the idealisations offer insight into how the mapping from mathematical structure to world is supposed to go. In cases where an idealisation plays some essential role, this insight isn't available. But this isn't irreparably damaging for the proponents of mapping accounts.

The sketch Batterman offers of his own account of mathematical explanation is too short to judge fairly. It rests on a distinction between mathematical structures (which are "static") and mathematical operations (which are "dynamic"). I haven't fully understood what is being hinted at with this distinction, so I don't feel qualified to assess whether his sketch of an account shows promise. [Pincock, in a reply to Batterman, says that he doesn't think there is much distance between his own and Batterman's account...]