Transcripts from "Modularity of Animal Form" at FHL, 9/97


Edited Transcript of Session 7: William Wimsatt

Wimsatt: philosophers and a lot of other people, and Steve Gould is one of the worst sinners here, talk about emergence in ways that oppose it to reductionism. I'm going to argue that, properly conceived, if by reductionism you mean something like giving a lower-level, articulation-of-parts explanation of an upper-level phenomenon, then emergence is not properly opposed to reductionism. In fact, some of the best examples we have of emergent phenomena are cases where we have interactions of differentiated sub-parts in which, in some sense, the characteristics of those parts and the order in which they're strung together in the system make a property which none of them had in isolation, and whose explanation is really quite marvelous to behold; these are paradigms of good emergence.

Keller: do you want to say that's reductionist, or mechanist?

Wimsatt: let me say that I think there are different kinds of reductionism. I don't think that what most philosophers mean by "reduction" ever happens in science, so I'm willing to dismiss philosophical analyses to start with...

Keller: will you accept as a friendly amendment that you can avoid the problem by just saying it's mechanist?

Wimsatt: yes. The account I just gave is mechanist, and what I want to defend is mechanism. Whether you want to call that "reductionist" or not I'll leave open to discussion. I'll give a very quick gloss of the philosopher's account of reductionism. The best example might be the research program of Russell and Whitehead, in which they wanted to reduce the whole of mathematics to logic. The idea was, you start with a set of axioms, and you derive a bunch of things that show you can capture all the results of another discipline; essentially, that you can get everything from the axioms. Russell actually talked about the theoretical entities of the reduced sciences as "logical fictions"; the idea was, you could literally define them away, and show that they didn't exist. So you can see why so many philosophers who want to be reductionists end up being so-called eliminative reductionists, nothing-but reductionists. You have this image: if you can define "it" from below, well then, gee, you don't really need "it" up above. Now this is really at odds with how most scientists think of reductionism, unless they've been talking with too many philosophers. I think that what most scientists mean by reductionism is something like this: understanding, or explaining, all the phenomena [at some level] in terms of things [at a lower level]. But they don't think in any sense that they're eliminating [the higher-level phenomena]; you're getting another account, in some sense, of where this [higher-level] stuff comes from, but it's not eliminating them.

Keller: [would you] distinguish between atomic reductionism and level reductionism, because I think when most scientists talk about reductionism they mean a form of atomic reductionism, rather than level or theory reductionism.

Wagner: now could you say for us what you mean by those terms?

Keller: that you can get a property of a whole from the properties of the parts. The first-order terms are the intrinsic properties [of the parts], the second-order terms are the two-body interactions, and so on.

Wagner: this is atomic? And what is level?

Keller: explaining biology in terms of physics, for instance.

Wimsatt: I think what actually happens in reductionism is that they claim, look, surely we could explain the system properties in terms of the parts' properties and all their inter-relations, that is, by putting in all the terms. What happens in practice is, well, all right, let's start with the theory, and the first-order theory is, for example, bean-bag genetics, or the point-mass model that gives you the ideal gas laws with no volume terms. So the first cut includes only the first-order terms, and you get some results out of it, and then they claim that in principle we can see how the whole thing goes.

Gilbert: does predictability have anything to do with this, that is, can you predict the whole from the parts?

Wimsatt: it hasn't [had anything to do with it] for me, because I know some simple reductive systems in which you can't predict the results until you've actually generated them. So I've never found that to be an effective criterion. There are too many things, even in nice, well-behaved reductive structures, that you can't predict, like chaos in the logistic map. One parse on predictability comes out of aggregativity, because it's obvious that if a property is an aggregative property of the parts, then its values fall out very simply.
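
[Editor's note: a minimal sketch in Python of the unpredictability Wimsatt alludes to; the parameter value and the numbers are illustrative choices, not from the session. The logistic map x(n+1) = r*x(n)*(1 - x(n)) is a fully specified, reductive rule, yet at r = 4 it is chaotic: two trajectories started a billionth apart become unrelated within a few dozen steps, so the only way to "predict" the result is to generate it.]

    def logistic_trajectory(x0, r=4.0, steps=50):
        """Iterate the logistic map x -> r*x*(1-x) from x0."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    # Two starting points differing by one part in a billion.
    a = logistic_trajectory(0.2)
    b = logistic_trajectory(0.2 + 1e-9)
    for n in (0, 10, 20, 30, 40, 50):
        print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.6f}")
    # The difference roughly doubles each step; by step ~30 the two
    # trajectories are effectively uncorrelated.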

Odell: when you described what you hoped to show about emergent phenomena, it seemed to me that you actually defined emergent phenomena. I've never understood them as anything other than astonishing phenomena that arise from interactions, at a lower level, of parts that have been reductionistically described.

Wimsatt: the problem is that that's sort of like an existence proof. That's a characterization of emergent phenomena that's correct but doesn't allow you to do anything more with it. There are so many different ways that parts could be organized to produce a whole. What's nice about approaching it backwards, which is what I'm going to do, is this: instead of trying to analyze how the different parts can interact, because that appears open-ended, you back around and ask, what is it for a system property to be nothing more than an aggregate of parts' properties? It turns out you can get four conditions out of that, which can be violated independently, and by looking at the different ways they can be violated you produce a fairly rich classification of kinds of emergence.
[But before we go on to that,] I want to say one thing about theory reduction and why it doesn't work. Actually this applies [also] to level reduction, or compositional reduction. Every time you go up in levels you make approximations, and [in] all of the interesting cases of theory reduction, you're taking limits and making approximations. This immediately throws the whole philosophical program right out the window, because if you're making approximations, you're saying, "this thing is enough like that that we're going to treat them as equal." But they're not equal, so any deductive inferences involving them are not truth-preserving. Philosophers want to start from true premises and derive from them true conclusions, and so the Russell/Whitehead program stops, right there, with the first approximation.

Odell: this is bizarre to me. [Are you saying] that from the get-go philosophers are going to rule out all real mathematical physics, physics, chemistry, biology,...

Wimsatt: they would have to.

Odell: ...and all they're doing is mathematics!

Wimsatt: you see, they don't realize they have to. I think if they're going to be consistent they have to.

Burian: in defense of philosophers, that's an old-fashioned philosophy of science, which we're trying to put out of favor.

Wimsatt: so let's go on. Suppose you want an account of properties that are in some sense dependent on mode of organization of the parts. Here I'm reminded of a story Jack Cowan tells. There were two small departments at our university, theoretical biology and biophysics. Cowan was asked by our university president, who doesn't like small departments, "look, you both talk about organisms, and you both use a lot of math, and a lot of physics, you must be doing the same thing. What's the difference between you theoretical biologists and the biophysicists?" And Cowan says, "take an organism, homogenize it in a Waring blender, and the biophysicist is interested in those properties that are invariant under that transformation." So I want to take that intuition and turn that into a criterion: one important criterion ought to be that if a system property is an aggregate of parts properties, then it ought to remain invariant under reaggregation. Call the first condition the Waring blender criterion, or the intersubstitutability criterion, that is, intersubstitution of any part in the system with a part from a reference class of relevantly similar parts.

Meir: wait, when you substitute, you're going to get the same thing when you rearrange the parts?

Wimsatt: yes, but "relevantly similar" bears all the weight here. What you want to rule out are cases of emergence that creep in because you've got a special set of parts, somehow, that you can intersubstitute all right but there's no external reference class to put them in.

Meir: but isn't that everything? How many systems could you take the parts and rearrange them and get the same...

Burian: a chunk of sulfur.

Wagner: it depends on the property you're talking about. The mass of an organism remains the same if you rearrange the parts, because mass is an aggregative property; but behavior is not.

Wimsatt: let's back up. I think by the time we have all these [four conditions out], there are no interesting organic properties, and not even very many interesting inorganic properties, that satisfy them. It just happens that those that do have so much theoretical power that they are very important. They turn out to be the properties of the great conservation laws of physics; nothing else meets all of these conditions. The second condition is qualitative similarity. Take mass, which turns out to be a nice aggregative property. You want it to be the case that mass still stays mass when you add another chunk, but you also want the amount of mass to increase appropriately. [The third condition is reaggregativity:] you can cut the system up into parts and put it back together [without losing anything]. I sometimes think that decomposition and reaggregation in effect amount to intersubstitutability applied over all possible decompositions. The fourth is a linearity condition: there are no cooperative, inhibitory, or excitatory interactions among the parts.
You can argue about which of these are more important, but I don't care; I'm going to use all four of them, and I'm going to say, for any given case, which of these criteria it meets and which it fails. I want to use a physicist's example for a moment, namely critical mass. It's treated as if it were a property of the element, though it clearly is not: it has to do with the emission and absorption properties of neutrons, but it's also a property of the geometry, and the density, and, as they discovered at Los Alamos, of the neutron reflection and absorption features of the environment. So although at first it looks like you've got something that meets aggregativity, or meets some of the criteria (at least within an epsilon), it turns out that once you start [asking] what conditions have to be met, you start pulling in features of context that you hadn't thought were relevant to the problem.
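
[Editor's note: a minimal Python sketch of the aggregativity intuition; the parts, numbers, and the "coupled" property are hypothetical illustrations, not from the session. Total mass passes the intersubstitution and linearity conditions: it is invariant under every rearrangement of the parts. A property with even a simple interaction term between adjacent parts fails them: its value depends on the order in which the parts are strung together.]

    import itertools

    parts = [3.0, 1.0, 4.0, 1.5]  # hypothetical part "masses", arbitrary units

    def total_mass(arrangement):
        # Aggregative: a plain sum, with no interaction terms.
        return sum(arrangement)

    def coupled_property(arrangement):
        # Non-aggregative: adds a term for each adjacent pair, so the
        # value depends on how the parts are ordered in the system.
        pairwise = sum(x * y for x, y in zip(arrangement, arrangement[1:]))
        return sum(arrangement) + 0.1 * pairwise

    masses = {total_mass(p) for p in itertools.permutations(parts)}
    coupled = {round(coupled_property(p), 9) for p in itertools.permutations(parts)}
    print("distinct mass values over all rearrangements:   ", len(masses))   # 1
    print("distinct coupled values over all rearrangements:", len(coupled))  # several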

von Dassow: it seems to me the really important point that you made in [your paper on aggregativity] is that for a particular problem, the "best decomposition" is the one that behaves roughly aggregatively, and in which [the components behave] as natural kinds with respect to that problem.

Wimsatt: let me weaken that somewhat. In some sense, what would be taken as the best decomposition (it's not clear to me that it is the best decomposition) is the thing that meets as many of [my four] criteria as possible, as strongly as possible.

Munro: do you think it's the case that a decomposition that meets more of those criteria is in general going to lead to a more compact explanation, one that you would find more elegant and therefore easier to use? I mean, when biologists are, for some reason, trying to explain some phenomenon, the ultimate criterion is, can you construct an explanation that works, and [when it] fails you seek to put something else into it. You don't go down a list and say, hmmm, now which assumption did we violate? You actually look at the system again and try to find out something more about it.

Wimsatt: I think that's exactly right, but you need two important qualifiers. One is, it depends on your purposes. All decompositions are, as it were, interest-relative; that is, what's your problem? For example, Herbert Simon in "The Architecture of Complexity" says weaker interactions are all going to be dominated by stronger interactions; he presents it as if it's a tug of war, meaning [in his context] that all the polar and covalent bonds are going to [dominate] over hydrogen bonding. Does that mean that biological organization is dominated by polar and covalent bonding? No, and we wouldn't have any tertiary structure without hydrogen bonding. The point is you really can't give context-free solutions to interesting problems. [The other qualifier is,] one of the reasons why the aggregativity conditions are interesting is that to the extent that they are met for [a particular problem and decomposition,] the properties are more context independent.

Munro: but take trying to construct units of selection, or units of evolutionary change. That's a paradoxical thing, isn't it? You're interested in something which on the one hand has stability and identity, but on the other hand is mutable, whose properties are context-dependent.

Wagner: mutability is not the same thing as context-dependence. If you have different alleles you may have effects that are additive, which means context independent, and those are the properties natural selection is {caring for?}, that cause heritability.

Wimsatt: the point is, aggregativity can act as a powerful heuristic for, as it were, debugging theories and determining their ranges of applicability and the things they may depend upon, and may also provide a good heuristic for finding good decompositions. The point I want to make is that almost every "nothing-but-ism" starts with a few properties that come out more aggregative under one decomposition than under others, and then people forget all the contextual qualifications and over-generalize.

Odell: and the moral is, unless you're interested in just electric charge, mass, and money, everything's emergent.

Wimsatt: now let's talk about modularity. In some sense the most important criterion that's been given for modularity [is contained within] work done by Herbert Simon in the early 1960s, in which he argued for a notion of "near-decomposability." He was interested basically in sets of linear equations in the social sciences, but it has some application more broadly to non-linear systems. He was asking, under what conditions can you break a system into smaller parts? He was imagining a large interaction matrix, in effect, where you've got coefficients for how strongly one variable interacts with another. Imagine you've got a system whose variables can be ordered and broken down into subsystems in the following way: the interactions fall into two classes, strong and weak, with a significant separation between the weakest interaction in the strong class and the strongest interaction in the weak class. Then there are some interesting things that follow. You can approximate the behavior of the system in the short run [by treating] the [sub-]systems as isolated; you can solve them in terms of the strong interactions alone, thus enormously simplifying the system. Then in the long run, [any one set of variables grouped by strong interactions] will come to equilibrium, their behavior can be [indexed], and you can treat the [whole] as an interaction among [the indexed groups]. So there are separate short- and long-run analyses: in the first you're looking only at the structure of the subsystems, and in the second you're looking only at the interactions among them.
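
[Editor's note: a minimal Python sketch of Simon's near-decomposability claim; the matrices and coupling strengths are illustrative assumptions, not Simon's own example. A linear system x' = Ax has two strongly coupled pairs of variables joined by much weaker couplings; dropping the weak couplings, that is, treating the subsystems as isolated, tracks the full dynamics well in the short run, with the error growing as the weak interactions accumulate.]

    import numpy as np
    from scipy.linalg import expm  # matrix exponential: solves x' = A x exactly

    strong, weak = 1.0, 0.01
    A_full = np.array([
        [-2.0,  strong, weak,   weak],
        [strong, -2.0,  weak,   weak],
        [weak,   weak,  -2.0,   strong],
        [weak,   weak,  strong, -2.0],
    ])
    A_block = A_full.copy()
    A_block[:2, 2:] = 0.0  # sever the weak between-subsystem couplings
    A_block[2:, :2] = 0.0

    x0 = np.array([1.0, 0.0, 0.0, 1.0])
    for t in (0.1, 1.0, 10.0):
        x_full = expm(A_full * t) @ x0
        x_isolated = expm(A_block * t) @ x0
        rel_err = np.linalg.norm(x_full - x_isolated) / np.linalg.norm(x_full)
        print(f"t = {t:5.1f}: relative error of isolated-subsystem view = {rel_err:.4f}")
    # Small at short times; grows as the weak couplings make themselves felt.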

Munro: that can't possibly be true for non-linear systems.

Wimsatt: I would say in the long run it can't be true for non-linear systems. Even for non-linear systems you should be able to do [the first part], because non-linear systems take a while to diverge.

Mittenthal: I find this confusing. Are you talking about weak versus strong interactions among individual units the system is built on?

Wimsatt: Simon is actually talking about variables, but I think you can either go for parts or for variables under the right conditions.

Mittenthal: well then, you know, the interactions that hold the cell membrane together are weak; they're hydrophobic interactions [that are weak] compared to the strong electrostatic interactions of the material on either side of the membrane, yet the membrane is held together by these weak interactions. Doesn't that seem contradictory to what Simon is talking about?

Wimsatt: that's why I describe this as a criterion for modularity rather than an analysis.

Wagner: but in your example[, Jay,] the hydrophobic entity does not interact with the hydrophilic entity. So even if the hydrophobic-hydrophobic interaction is weaker than the hydrophilic-hydrophilic, there's even less interaction between those two classes, and this model still applies.

Odell: I'm pretty sure that the first criterion, that the weakest strong interaction is stronger than the strongest weak interaction, is not a necessary condition for pulling out subsystems from a big system. That condition only has to be true of a local piece of the network. If it were necessary it would rule out anything interesting and real in biology.

Munro: I'm bothered by it and I can't quite figure out why. Part of it is that it seems, in a lot of cases, that what you specify in development are end states, attracting states. So when you look at a system like this you're not necessarily trying to figure out how you can decompose it so that you can watch, in a stepwise fashion, how it evolves; on the contrary, you'd like to know how you can decompose a system so that the qualitative behavior of the individual parts makes sense somehow, so you can understand the qualitative behavior of the whole as a function of the interactions of the parts. I'm not sure that [Simon's] kind of decomposition maps onto the second kind.

Wimsatt: in fact, I think [Simon's] framework does not work very well at all for systems whose state space changes qualitatively throughout their [time course]. This is [just one of] a pastiche of [ways to approach modularity], none of which is in itself a complete analysis.

von Dassow: but anyway if you've got [some system that's amenable to Simon's kind of decomposition], you can pick out modules easily.

Wimsatt: yes, the point is that under the conditions in which it works, it [gives] interesting results, and I'm not bothered if there are situations that it doesn't handle correctly.

Wagner: there's another condition that we found in an attempt to develop a decomposability theory for population genetics. You're asking, when can I look at one character or one gene, ignore the rest, and still have a predictive theory? It turns out that even if there are strong interactions, as long as those interactions are such that their influence on the dynamics [induces only] a similarity transformation of the dynamical system, they are dynamically irrelevant even if...

Munro: so that's exactly my question. How does that relate to [Simon's framework]?

Wagner: the way I read it is that this whole theory of decomposition is not very well developed. We don't know yet what the options are.
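
[Editor's note: a minimal Python sketch of the linear-algebra point behind Wagner's condition; the matrices are random illustrations, not his population-genetic model. If an interaction only conjugates the dynamical matrix A into P A P^(-1), a similarity transformation, then the eigenvalues, and with them the growth rates and stability of the dynamics, are exactly preserved; in that sense the interaction is dynamically irrelevant.]

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(3, 3))  # a hypothetical dynamical matrix
    P = rng.normal(size=(3, 3))  # an (almost surely invertible) change of coordinates
    A_similar = P @ A @ np.linalg.inv(P)  # the "interaction" as a similarity transformation

    print(np.sort_complex(np.linalg.eigvals(A)))
    print(np.sort_complex(np.linalg.eigvals(A_similar)))
    # The two spectra agree up to floating-point error: the qualitative
    # dynamics are untouched.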