Appeared in Vol. 10 No. 2
Ever since the Scopes trial, the question of evolution has loomed large in the American Christian mind. Fundamentalists in particular have been concerned to disprove the theory of biological evolution of species because of their belief that such a theory directly contradicts the evidence of the Bible. For Catholics, the potential conflict between science and faith has been less acute, but has still raised a variety of important questions. As the debate has continued, some Creationists have asserted that evolution contradicts the laws of thermodynamics. According to Thomas Fowler in this article, that claim is not true, but in delineating how the various thermodynamic processes apply to evolution, Fowler does much to clarify the state of the larger questions as well. Since the argumentation is highly technical, using scientific formulae, this technical material has been placed in unindented smaller type, and may be omitted by the non-specialist.
Charles Darwin propounded his theory of evolution in 1859.1 It triggered a raging controversy which continues unabated to this day, more than a century later. Among the disputed questions, three are critical for religion in general and Christianity in particular: (1) Does the theory of evolution contradict the Bible? (2) Does the theory of evolution imply mechanisms which are inconsistent with the known laws of physics? and (3) Does the evolution of species represent an ongoing creation of order out of chaos in the universe? In other words, does the theory of evolution contradict the established metaphysical principle that order cannot come from disorder?2 The first question is the subject of Pope Pius XII’s encyclical Humani Generis; this article will concentrate on questions (2) and (3), with special reference to the problem of scientific truth and philosophical understanding of nature. Regrettably, volumes of misleading and incorrect information on question (2) in particular have been published, and so it will be necessary to examine the subjects of thermodynamics and system theory in some detail in order to place the entire subject in proper perspective and clear the air of numerous erroneous ideas. The question of evolution as a comprehensive theory will be addressed insofar as it is necessary to answer the questions in the article’s title. Important topics will be considered from the standpoint of global system properties and potentials which permit system behavior to be analyzed without detailed knowledge of structure or implementation. The problem of the ascent of man will not be discussed.
The discussions of thermodynamics and system theory will perforce be somewhat technical at times simply because there is no other way to accurately expound these subjects and their relationship to the critical question of evolution. However, the more technical aspects of the discussion have been set in smaller type and may be omitted by readers willing to accept the relevant conclusions without a detailed study of the theoretical background.
Before embarking upon that discussion, it will be useful to review the questions we seek to answer: (1) Given that the laws of thermodynamics regulate in a global fashion all changes which occur in the physical universe, can evolution, defined as “change from a lower level to a higher level of order”, occur without violating these laws? This question is the most directly relevant, but there are two other closely related questions which must be addressed at the same time: (2) Could life arise or have arisen spontaneously in some primordial ocean, given the laws of thermodynamics? And (3) Independently of the origin and evolution of organisms, does their day-to-day existence and functioning violate the laws of thermodynamics?
Clearly, while a negative reply to questions (1) and (2) may not bother the practicing scientist greatly, a negative reply to question (3) certainly will, since it unmistakably implies that every living organism represents an on-going miracle. That would be disturbing indeed, to scientist and non-scientist alike.
As we shall see, the answers to these three questions are closely linked and follow from the same basic principles.
SYSTEM THEORY AND THE NOTION OF STABILITY
Stability is intimately related to the behavior of living systems and to questions of possible order arising from disorder, so an understanding of it is indispensable for an understanding of evolution. In order to provide the necessary foundation for the analysis of evolution and thermodynamics, let us begin with a brief discussion of the notion of stability, followed by a more detailed mathematical treatment. Stability is a notion which is intuitively familiar from ordinary experience. Something is `stable’ if it does not fall over, e.g. a ladder; or `stable’ if it does not go out of control, e.g. an automobile or aircraft; or `stable’ if it does not gyrate or fluctuate wildly, e.g. commodity prices.
For scientific purposes, it is necessary to sharpen these ideas somewhat, though the basic notion remains the same. Consider the following examples. First, assume a book is lying on a flat table. If the book is pushed a few inches in any direction, it will stop after being pushed and remain at its new location. The book of course will not spontaneously return to its original position, but neither will it continue to move for very long after the pushing has ceased. This clearly is a type of stability, a very general type, as it happens, which can be given precise mathematical formulation.3 And the foregoing scenario will persist through innumerable movements or perturbations (as they are called) of the book, until the book reaches the edge of the table. Then of course it will fall off, and in such case we may say that the perturbation (i.e. the last push) has caused the system consisting of book and table to become unstable.
Consider next a simple arrangement comprising a large salad bowl and a ping pong ball released anywhere on the inside surface of the bowl. The ensuing course of events is quite predictable: the ball will execute a spiral or oscillatory motion, eventually settling to rest at the bottom of the bowl. And of course, if the ball is resting at the bottom and then subjected to an impulsive force in any direction, it will once again execute the spiral or oscillatory motion, finally returning to rest at the bottom as before. This is a stronger type of stability than that exhibited by the book and table above, because the ball always returns to its rest or equilibrium position at the bottom of the bowl (provided that the perturbations are not so strong as to project it out of the bowl altogether). Indeed, this type of stability has a special name: it is termed `asymptotic stability’, and is an extremely desirable property for most systems, because proper system functioning does not require perfect initial positioning of the system. For the example under consideration, if we wish to have the ball at the bottom of the bowl, it is unnecessary to place it there, but only somewhere inside; the system itself will do the rest, so to speak.
Asymptotic stability means that a system can continue to operate in the presence of the inevitable perturbations from its environment without becoming dysfunctional because of them. For most complex systems, in fact, this type of stability is essential for their construction and continued operation because of the difficulty (if not impossibility) of simultaneous correct initial positioning of all parts of the system, as well as the system’s need to withstand the perturbations from its environment when it is functioning.
Obviously, if the ping pong ball in the foregoing example receives too large an impulse, i.e. too large a perturbation, then it will leave the confines of the bowl altogther, and of course will not return to the bottom of the bowl as it did in the previous cases. This is another example of a system becoming unstable. To be precise, we should say that the system consisting of bowl and ping pong ball is stable under a certain range of perturbations (i.e. those which do not drive the ball out of the bowl) and becomes unstable when the perturbations are outside of this range.
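The bowl-and-ball example can be mimicked numerically. The short program below is an illustration added here, not drawn from the article; the damped oscillator is a one-dimensional stand-in for the ball in the bowl, and all parameter values are arbitrary choices. It shows that trajectories from quite different starting points all settle to the same equilibrium, which is the essence of asymptotic stability.

```python
# Illustrative sketch (not from the article): a damped particle in a
# parabolic "bowl", obeying x'' = -k*x - c*x', integrated by the Euler
# method.  The parameters k, c, dt are arbitrary illustrative values.

def simulate(x0, v0, k=1.0, c=0.5, dt=0.001, steps=40000):
    """Return the final (x, v) after integrating the damped oscillator."""
    x, v = x0, v0
    for _ in range(steps):
        a = -k * x - c * v      # restoring force plus friction
        x += v * dt
        v += a * dt
    return x, v

# Whatever the starting point, the trajectory settles at the bottom (x = 0):
for start in (1.0, -2.0, 0.3):
    x, v = simulate(start, 0.0)
    print(round(x, 4), round(v, 4))
```

Note that no special care was needed in choosing the initial position; the dynamics themselves carry the system to equilibrium, just as the text describes.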
Now let us develop some of the foregoing ideas a bit more quantitatively. We begin with the notion of a `system’.
For the purposes of science, a system is a group of objects which can be observed and whose behavior can be described quantitatively.4 The objects, which may be falling rocks, reacting chemical elements, the organs of an animal, etc., are assumed to be changing in some way over time, and systems composed of such changing elements are referred to as `dynamic systems.’
Dynamic systems are usually described by ordinary or partial differential equations. A standard way of writing these equations is in state-space form, where each dimension of the “space” corresponds to a degree of freedom of the system. A typical state space description for an object moving in one dimension and governed by Newton’s laws is as follows:

dx/dt = v
dv/dt = F(x, v, t)/m

where x is the object’s position, v its velocity, m its mass, and F the applied force.
This information can be displayed graphically on what is called a phase space diagram. The phase space diagram is not a picture in the usual geometric sense of what is happening to a system; rather, it contains that information and a great deal more besides, because it shows both the position of the system and its velocity or momentum, i.e. its rate of change of position. Specifically, the phase space has one coordinate for each possible type of motion (or degree of freedom, as it is usually called) that the system can exhibit. In the case of one-dimensional motion, one axis is position (x coordinate), and the other is momentum (y coordinate); thus the phase space is a plane in this case, even though the particle is always moving along a line. Clearly, however, the two dimensions are necessary to specify the particle’s motion because its momentum as well as its position must be shown. In the case of more complicated systems, the phase space can have an enormous number of dimensions; e.g. for a mole of ideal gas, where each molecule has 6 parameters to be specified (x, y, z position and x, y, z momentum), the phase space has 6 x 6.023 x 10^23, or about 3.6 x 10^24, dimensions. But the system consisting of these 6 x 10^23 particles becomes a single point moving about in this 3.6 x 10^24-dimensional space; thus the system’s history becomes the trajectory of that one point as it moves with time. Despite the fact that such a space is obviously not visualizable, there are significant advantages to this representation. Construction and analysis of these diagrams is beyond the scope of this article; interested readers should consult reference [7].
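A concrete illustration may help here (the harmonic-oscillator example and its parameter values are added by way of illustration, not taken from the article): for a frictionless mass on a spring, the phase-space point (x, p) traces a closed ellipse, because the trajectory is confined to a curve of constant energy.

```python
import math

# Illustrative example (not from the article): the phase-space trajectory
# of a frictionless mass on a spring.  Position and momentum together
# specify the state, and the trajectory is a closed ellipse because the
# energy E = p^2/(2m) + k*x^2/2 is conserved along it.

m, k = 1.0, 1.0
omega = math.sqrt(k / m)

def state(t, x0=1.0):
    """Exact solution: the phase-space point (x, p) at time t."""
    x = x0 * math.cos(omega * t)
    p = -m * omega * x0 * math.sin(omega * t)
    return x, p

def energy(x, p):
    return p * p / (2 * m) + k * x * x / 2

# The moving point stays on a single curve of constant energy:
points = [state(0.1 * n) for n in range(100)]
energies = [energy(x, p) for x, p in points]
print(min(energies), max(energies))   # equal, up to rounding
```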
The mathematical description, and in particular the state-space description, permits qualitative determination of the behavior of the system when the closed form solution5 cannot be obtained, something which, regrettably, is the case for all but the simplest systems. But qualitative behavior? Yes, because frequently we are interested not in the response of the system under specified conditions, but whether it will `blow up,’ or go wildly out of control, or settle down to some (unspecified) position, i.e. we would like to know the type of stability (or instability) the system will exhibit. This qualitative information can often be determined without explicit solution of the system equations.
For example, the range of the system’s variables over which it is stable is of great interest, because when this range is exceeded, the system typically `self-destructs’ in some way. This, indeed, is one of the usual meanings of a system being destroyed: it becomes unstable. Even in ordinary discourse, we say that unstable systems are not of much value.
Stability theory concentrates on what are called the equilibrium points of the system-those points at which the system will remain if subjected to no external force. They are determined by setting the left hand side of the equations above to zero, and solving the right hand side. Notice that this is essentially an algebra problem, and can almost always be done.
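A small sketch may make the procedure concrete (the damped-pendulum example is my own, not the article's): for the system x' = v, v' = -sin(x) - c*v, setting the right-hand sides to zero gives v = 0 and sin(x) = 0, so the equilibrium points are (n*pi, 0), exactly the algebra problem described above.

```python
import math

# Illustrative sketch (my own example): finding the equilibrium points of
# a damped pendulum,  x' = v,  v' = -sin(x) - c*v,  by setting the
# right-hand sides of the state equations to zero.  The solution is
# v = 0 and sin(x) = 0, i.e. the points (n*pi, 0).

c = 0.2   # arbitrary damping coefficient

def f(x, v):
    """Right-hand side of the state equations: returns (dx/dt, dv/dt)."""
    return v, -math.sin(x) - c * v

# Check that both derivatives vanish at each candidate equilibrium:
for n in range(-2, 3):
    dx, dv = f(n * math.pi, 0.0)
    print(n, abs(dx), abs(dv))   # both essentially zero at each point
```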
Naturally, if a system tends to return to an equilibrium point when perturbed, it is called `asymptotically stable,’ as discussed above; if, upon being perturbed, it does not return, but does not move too far away, it is simply `stable’; and if it goes `out of control,’ it is unstable.6
The notion of the phase space, and of stability generally, may seem a bit strange at first, because we are used to thinking of such things as cars or houses which, if not built correctly to begin with, will either fall apart or at least not be functional; i.e. their `range of attraction’ is extremely small; one does not see a car fixing itself, for example. But for many systems (including, as the reader may suspect, living systems), there is a definite region in state space where the system is asymptotically stable. And as pointed out above, this almost must be true for extremely complicated systems because of the impossibility of getting all the parts (i.e. subsystems) started in just the right way, and keeping them working given the inevitable perturbations which will occur over the lifetime of the system (which in the case of houses or cars is the responsibility of the owner!).
Now considerable information about system behavior can be determined just from the form of the system equations. If they are linear then the system, if asymptotically stable, can have at most one equilibrium point to which it tends in the absence of external forces.8 This is an extremely important piece of information because it tells us, without doing anything but inspecting the form of the system equations and carrying out a relatively simple mathematical operation,9 that the system they describe must tend to a single point in state space and cannot, regardless of initial conditions, shift from one stable point of equilibrium to another in response to an external stimulus, or be in a different equilibrium state depending on initial conditions. Most systems, if sufficiently close to equilibrium, can be approximated by linear equations; this fact will be important when thermodynamic systems are analyzed below.
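As an illustration of the "relatively simple mathematical operation" (this sketch and its arbitrarily chosen matrix are added here, not taken from the article), the stability of a linear system x' = Ax can be read off from the eigenvalues of A: the system is asymptotically stable exactly when every eigenvalue has a negative real part, and when A is nonsingular the origin is its unique equilibrium point.

```python
import numpy as np

# Illustrative check (assumed example): a linear system  x' = A x  is
# asymptotically stable exactly when every eigenvalue of A has negative
# real part; and since a nonsingular A gives A x = 0 only at x = 0,
# the origin is the system's unique equilibrium point.

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # a damped spring-mass system in state form

eigs = np.linalg.eigvals(A)
print(eigs)                               # eigenvalues -1 and -2
print(all(e.real < 0 for e in eigs))      # True: asymptotically stable

# A is nonsingular, so the only equilibrium solves A x = 0, i.e. x = 0:
print(np.linalg.det(A))
```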
Systems which are not linear can exhibit markedly different behavior.
They can have several different equilibrium points, and the one to which they tend will generally depend on initial conditions. They can (and usually do) have stable and unstable regions of phase space. But perhaps most surprising, they can exhibit a type of periodic motion called a “limit cycle.” This is a semistable behavior which is of paramount importance in the analysis of thermodynamic systems.
Limit cycles are perhaps best understood by example; in the phase plane, they appear as closed curves of various shapes and act somewhat like equilibrium points in so far as the trajectory of a system placed near the limit cycle will (under certain circumstances) approach it. As an example, consider the Van der Pol equation (which describes the flow of current in a triode vacuum tube):

d²x/dt² - µ(1 - x²) dx/dt + x = 0
Depending on the value of µ, this can have many solutions, of which three are shown in Fig. 1. These periodic or repeating paths are called `limit cycles’; if the system finds itself in any vicinity of the path, it will, within a certain time, begin moving along the path.
In the diagrams, the limit cycle is the heavy black closed curve, and the lines with arrows are the trajectories of points near the cycle. Note that in all these figures, whether the initial position is inside or outside the limit cycle, the trajectory approaches the limit cycle, moving closer and closer to it and eventually following it exactly. In all these cases, the point (0,0) is an unstable equilibrium point because of the limit cycle.
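The convergence to the limit cycle is easy to verify numerically. The following sketch (added here as an illustration; the integration method, step size, and starting points are my own choices) integrates the Van der Pol equation from one starting point inside the cycle and one outside, and finds that both settle onto an oscillation of the same amplitude, close to 2 for µ = 1.

```python
# Numerical sketch (parameters are illustrative choices): trajectories of
# the Van der Pol oscillator  x'' - mu*(1 - x^2)*x' + x = 0  approach the
# same limit cycle whether they start inside or outside it.

def vdp_amplitude(x0, mu=1.0, dt=0.001, steps=60000):
    """Integrate from (x0, 0) by Euler steps; return peak |x| after transients."""
    x, v = x0, 0.0
    peak = 0.0
    for n in range(steps):
        a = mu * (1 - x * x) * v - x   # Van der Pol acceleration
        x += v * dt
        v += a * dt
        if n > steps - 10000:          # sample only the settled motion
            peak = max(peak, abs(x))
    return peak

# A small start (inside the cycle) and a large one (outside) both end up
# on the same cycle:
print(vdp_amplitude(0.1), vdp_amplitude(4.0))
```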
Note that limit cycle behavior, which is highly ordered, arises spontaneously from the system itself provided only that sufficient free energy is available from the environment. In other words, once the right ingredients are put together, the ordered behavior appears spontaneously; there is no need to `design’ or otherwise create it. Note also that this type of behavior exhibits temporal regularity; spatial regularities can arise spontaneously as well.11
To summarize briefly, systems exhibit behavior which is either stable or unstable. Stable behavior may be either ordinary or asymptotic, and unstable behavior may be completely unstable, i.e. the system may `blow up,’ or it may be semistable, in the form of a limit cycle, which is an indefinite periodic motion. Every region in state space which corresponds to one of these types of behavior contains a point, called an equilibrium point, which is said to be stable or unstable depending on the behavior of the system in its vicinity. If an equilibrium point is asymptotically stable, then if the system finds itself anywhere in the region of attraction of the point in phase space, it will approach this point. Thus there is no need to `hit the nail on the head’, so to speak, with respect to getting a system set up in precisely the right way; for stable systems, it is only necessary to get sufficiently close and then the laws governing system behavior will do the rest. Similarly, for limit cycles, the system will approach the limit cycle from some region on one or both sides of it; there is no need to place it exactly on the cycle itself.
We may note in passing that there are other notions of stability which are related to that discussed here, and concern the behavior of solutions to non-linear12 and partial differential equations;13 but these notions are not important for the present discussion.
THERMODYNAMIC SYSTEMS AND THE LAWS OF
THERMODYNAMICS
Thermodynamics as an intellectual discipline may conveniently be divided into two broad categories, whose epistemological status is quite different: macroscopic thermodynamics and microscopic thermodynamics. The first refers to measurement and prediction based on observation of quantities which characterize large-scale systems: temperature, heat, energy, entropy, and so forth. The second refers to measurement and prediction based on the microscopic structure of matter, viz. atoms, electrons, molecules, and such. Macroscopic thermodynamics developed first; it was later linked to microscopic thermodynamics and the two developed in parallel after about 1870. Both have an important role to play in the description of system behavior, as we shall see, with some questions better addressed in macroscopic terms, while for others the microscopic description is best. We shall begin with a discussion of the laws of thermodynamics expressed in their macroscopic form. The laws as stated below refer to systems at or very close to equilibrium, which means that their components are characterized by state properties that are constant throughout the system;14 e.g. a cup of water having the same temperature everywhere is in equilibrium; but if a cube of ice is tossed into the cup, until the ice melts, there will be measurable temperature variations throughout the water, and the system will not be in equilibrium.
By way of general orientation, the discussion which follows will cover macroscopic and microscopic thermodynamics of systems at or near equilibrium, and will give answers to the three questions from the introduction in the light of equilibrium thermodynamics. Then non-equilibrium thermodynamics will be discussed, and the questions reviewed again. A chart summarizes the discussion. Readers who wish to press on may omit the following sections on thermodynamics and briefly study the chart on p. 125, but there are very important concepts presented in these sections.
EQUILIBRIUM THERMODYNAMICS: MACROSCOPIC
The four laws of thermodynamics are numbered 0 to 3, and deal respectively with temperature and its measurement; conservation of energy; entropy and its changes; and absolute values of entropy. The laws are explained in some detail below.
- The Zeroth Law: Temperature. This law gives an operational definition of the familiar notion of temperature. In technical terms, we say that there exists an intensive quantity, T, characterizing a system in equilibrium, such that if any two systems or bodies are characterized by the same value of T, there will be no heat flow between them when placed in contact. This quantity T is called ‘temperature’.
For the purposes of this law, the concepts of heat and system are taken as primitive. The measurement of temperature is in a sense somewhat arbitrary, though for purposes of simplicity in expressing other laws, a particular method based upon the behavior of ideal gases is usually chosen. The zeroth law is not quite as simple as it appears at first glance, but the notion of temperature and its measurement is quite familar from ordinary experience and the law is not controversial with respect to evolution.
- The First Law: Conservation of Energy. The first law simply expresses the principle of conservation of energy. In technical terms, we say that there exists an extensive quantity, U, such that, in a closed system, U is constant. In an open system, the change of U in the system must equal the net flow of U into the system. This quantity U is called `energy’.
This energy is of course the same quantity expressed in the laws of mechanics, and is defined as the capacity to perform work. The extent to which energy can be used to perform work in any thermodynamic system, i.e., its `availability’, is not answered by the First Law, which in fact places no restrictions on performance of work except that the conservation of energy itself must be maintained.
Mathematically, the first law can be expressed as

dU = dQ - dW

for a simple system exchanging heat Q with its surroundings and performing work W on its surroundings.15 For more complicated systems, the law assumes the form

∂ρu/∂t + ∇·Ju = 0

where ρu is energy density and Ju is energy flux density.16
- The Second Law: Entropy
There exists an extensive quantity, S, called `entropy’, which is such that in a closed system not involving information,

dS ≥ 0

In an open system not involving information,

dS = deS + diS

and

diS ≥ 0

where deS represents entropy flux resulting from exchanges of matter or energy with the environment, and diS is the entropy production resulting from irreversible processes within the system, such as heat conduction, diffusion, chemical reactions, and so forth.
The reader may find that this definition is not particularly enlightening; just what is entropy, anyway? The question is not easily answered; hopefully the following discussion will shed some light on it. Bear in mind, however, that the notion of entropy is rather abstract and, unlike that of temperature and perhaps energy, not familiar from ordinary experience (who has ever seen an entropy meter?). Entropy is often described as a measure of the disorder of a system; this is true, but perhaps not the best way to understand the concept, because ordinary substances such as water and steam are characterized by fixed values of entropy as a function of their temperature and pressure, even though they do not appear to be “ordered” or changing their degree of order when these parameters change. A better way to understand entropy is to think of it as a state property. A state property is one which a system possesses in virtue of being in a certain well-defined state which is specifiable with certain parameters such as temperature. Internal energy is another example of a state property, and entropy is no more mysterious or obscure than energy, even if it may be less familiar. Entropy, like energy, is an extensive property, which means that it is proportional to the quantity of substance you have. A cup of water has a certain quantity of entropy, just as it has a certain quantity of internal energy. If you pour out half of the cup, the remainder will have only half as much entropy (and energy) as it had before. As a state property, it is governed by laws, as is energy, and those laws are shown in one form above. In words, they say that there is this state property, S (entropy), which is such that in any closed system, S must be constant or increase. Constant S means that changes (if any) going on in the system are reversible (i.e. can be made to run backward without doing any work on the system); and increasing S means that the changes are not reversible.
Like energy, entropy can “flow” or be transported into or out of a system, and this entropy flow or flux governs the changes which interacting systems can undergo.
If one wishes to have a mathematical formula to calculate entropy changes, the following may be used: for any reversible path between two equilibrium states 1 and 2 of a substance,

S2 - S1 = ∫ dQ/T  (integral taken along the path from state 1 to state 2)
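A worked example may make the formula concrete (the numbers are my own illustrative choices, not the article's): for 1 kg of water heated reversibly at constant heat capacity, dQ = m·c·dT, so the integral gives ΔS = m·c·ln(T2/T1), which can be checked against a direct numerical integration of dQ/T.

```python
import math

# Worked example (illustrative numbers): entropy change of 1 kg of water
# heated reversibly from 300 K to 350 K, using dQ = m*c*dT in
# Delta S = integral of dQ/T = m*c*ln(T2/T1).

m = 1.0        # kg of water
c = 4186.0     # J/(kg K), approximate specific heat of water
T1, T2 = 300.0, 350.0

# Closed form:
dS_exact = m * c * math.log(T2 / T1)

# Direct numerical integration of dQ/T along the same path:
N = 100000
dT = (T2 - T1) / N
dS_num = sum(m * c * dT / (T1 + (i + 0.5) * dT) for i in range(N))

print(dS_exact, dS_num)   # both about 645 J/K
```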
But knowing that entropy is a state property like energy and temperature is not necessarily going to provide the hoped-for enlightenment as to its nature and purpose, at least not right away. Unfortunately, there is no quick-and-dirty explanation of entropy which will clarify it; that comes only with time and experience. But if entropy is understood as a characteristic of substances, as is temperature, which governs the extent to which they can exchange energy and perform work (just as temperature governs whether heat will flow to or from a body placed in contact with another body), then about as much is understood as can be understood without in-depth study.
Turning now to open systems, we begin by noting that various thermodynamic potentials can be derived which describe (open) systems under conditions of chemical reaction, mass flow, energy flow, etc.
For example, the Helmholtz free energy

F = E - TS

is used to describe equilibrium chemical reactions at constant temperature and volume. However, of most interest here is the fact that for an open system in a state of stable equilibrium, there is a generalized thermodynamic potential Φ(T, V, {µj}) of temperature T, volume V, and chemical potentials µj, which is such that it is a minimum, with its first and second variations

δΦ = 0 and δ²Φ > 0
This is a key result because the existence of this potential17 and these variations implies that the system will, if perturbed, return to the equilibrium state.
The physical significance of this result is that no spatial or temporal order can arise in such systems; were it to somehow appear through random fluctuations, it would be quickly damped out, and the system would return to its steady-state equilibrium condition.
The foregoing discussion of entropy reveals that we can determine the value of entropy changes from state to state, but gives us no clue as to the absolute value of the entropy. For some purposes changes are enough, but
for others, notably in chemistry, the absolute value must be known. That value is the subject of the Third Law.
- The Third Law: Absolute Value of Entropy (Nernst’s Heat Theorem). The entropy of every system at absolute zero can always be taken equal to zero.18 This law has no direct bearing on the present discussion; the reader may refer to any standard text for further details about it.
EQUILIBRIUM THERMODYNAMICS: MICROSCOPIC
The starting point of equilibrium microscopic thermodynamics is the atomic constitution of all matter, and the assumption that any system can be in one of a large (but finite) number Ω of different states, corresponding to different energies and positions of its component atoms and molecules. In this theory energy E is taken as primitive, or rather as defined by other branches of physics. Of primary interest is the probability that a system is in any particular state.
The probability that a system is in some state i of energy Ei was determined by the German physicist Ludwig Boltzmann and is given by19
Pi = C exp(-Ei/kT)
At low temperatures, only the lowest energy states are occupied, whereas at higher temperatures, higher energy states become more and more probable. This formula furnishes the basic principle governing the structure of equilibrium states, and permits us to describe a wide range of structures including those as complex and beautiful as a snowflake. Based on Boltzmann’s principle, the following relations may be determined:
and β = 1/kT, where T is absolute temperature as defined for macroscopic thermodynamics. From this the so-called partition function can be expressed as

Z = Σi exp(-βEi)
And based on the partition function, all the usual thermodynamic quantities can be determined;20 for example, the entropy is

S = k ln Ω

This is the famous Boltzmann formula relating entropy to the disorder of a system. It can be shown that for sufficiently low T, Ω → Ω0, the degeneracy of the lowest state, which may be one or a small number. Hence,

S → S0 = k ln Ω0
From his observation that entropy is simply a measure of disorder, Boltzmann concluded that the law of entropy increase, i.e. dS ≥ 0 for a closed system, is simply a law of increasing disorder.21 Indeed, if one uses the partition function to determine the probability distribution function for energy states, the resulting curve has an extremely sharp peak about the mean energy, and (assuming the behavior of the system is an ergodic process) it follows that the probability the system’s energy will be any significant distance away from the mean energy (i.e. that it could exhibit ordered behavior) is vanishingly small, of order 10^-12 for a mole of gas.
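Both points above can be checked with a short computation (added here as an illustration; the three-level system and its energies are invented for the example, and the fluctuation formula is the standard statistical-mechanics result for a monatomic ideal gas, not a formula from the article): at low temperature the Boltzmann weights concentrate in the ground state, at high temperature the states become nearly equally likely, and for N ~ 10^23 molecules the relative width of the energy distribution, sqrt(2/(3N)), is indeed of order 10^-12.

```python
import math

# Two illustrative checks (my own code, standard results).
# 1) Boltzmann's formula P_i = exp(-E_i/kT)/Z for three made-up energy
#    levels, with energies measured in units of k so that T is a number.
# 2) Sharpness of the energy distribution: for a monatomic ideal gas of
#    N molecules, sigma_E/<E> = sqrt(2/(3N)).

def probabilities(energies, T):
    """Occupation probabilities of the given levels at temperature T."""
    weights = [math.exp(-E / T) for E in energies]
    Z = sum(weights)                    # the partition function
    return [w / Z for w in weights]

levels = [0.0, 1.0, 2.0]
cold = probabilities(levels, 0.1)       # nearly all weight in the ground state
hot = probabilities(levels, 100.0)      # all three states nearly equally likely

def relative_fluctuation(N):
    """Relative width of the energy distribution for N gas molecules."""
    return math.sqrt(2.0 / (3.0 * N))

print(cold)
print(hot)
print(relative_fluctuation(6.022e23))   # about 1e-12 for a mole
```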
EQUILIBRIUM THERMODYNAMICS AND BIOLOGICAL SYSTEMS
The edifice wrought on the base of the four laws, called `classical thermodynamics’ or `equilibrium thermodynamics’, was the work of the generations of physicists and engineers from the Renaissance until well into the 20th century. Its accomplishments are legion, and in many ways responsible for modern civilization: the steam engine, the gasoline engine, the diesel engine, the jet engine, the rocket engine, the Gibbs phase rule, the law of mass action and most of the laws governing chemical transformations, just to name a few. The structures studied in equilibrium thermodynamics are such things as crystals and laminar flows. Classical thermodynamics, then, deals with systems at or near equilibrium and these systems are described by linear equations; in Prigogine’s words, we are interested in “. . . the solution corresponding to maximum entropy for isolated systems, minimum Helmholtz free energy for systems at given temperature and volume. We call this solution the `thermodynamic branch’.”22 As pointed out above, if a system in equilibrium, when displaced slightly, tends to return to equilibrium, it is stable. This statement, in fact, is known as Le Chatelier’s Principle:
If a system is in stable equilibrium, then any spontaneous change of its parameters must bring about processes which tend to restore the system to equilibrium.23
The fact that isolated systems tend to evolve toward equilibrium, which corresponds to the maximum number of available states and therefore maximum entropy, is expressed in the Boltzmann relation. The fact that closed systems, which do not exchange matter with the outside world, have their free energy tending toward a minimum leads to the idea of the universe as being driven to a `heat death’. Yet the apparently irreversible headlong run of the universe toward its extinction leaves us with some misgivings. The `heat death’
… is, however, not what we observe around us in the present state of the universe or what we can infer from its continuous diversification and evolution toward complexity.24
Some25 have sought to explain life on the basis of physical laws, assuming that living organisms are an accidental occurrence, and that the origin of life is the result of a series of extremely improbable events, though once created, organisms obey the laws of physics and chemistry. There are, however, serious problems with this point of view, which we shall discuss below.
Let us consider first the general question of maintaining order in physical systems, since this is obviously necessary for any explanation of biological systems where extremely high levels of order exist. Clearly, before we can address the problem of evolution, we must have some idea of how the day-to-day existence, growth, and reproduction of biological organisms is possible. Now, as we have seen, if a system is isolated it will evolve in such a way that its entropy S increases monotonically until a maximum is reached.
This clearly rules out spontaneous formation of any type of ordered structure. As an example, a gas in one half of a sealed chamber will quickly fill the entire chamber when the partition separating the halves is removed; but the reverse phenomenon, viz. all of the gas returning to one side of the chamber, cannot occur without violating the Second Law. Now most biological processes are known to be irreversible, and therefore characterized by dS_i > 0; yet nearly all biological systems maintain constant or even decreasing entropy (dS ≤ 0). It follows that they cannot be analyzed on the basis of the thermodynamics of closed equilibrium processes.
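The statistical basis of this claim is easy to exhibit. The sketch below is a toy model (the choice of N = 100 molecules is an arbitrary assumption for illustration): each molecule independently occupies either half of the chamber, the even split maximizes the number of microstates W and hence the entropy S = k ln W, and the probability of finding all molecules on one side is 2^-N.

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def microstates(N, n_left):
    """Number of ways n_left of N distinguishable molecules can occupy the left half."""
    return math.comb(N, n_left)

N = 100  # a very small "gas", for illustration only
# Entropy S = k ln W for each possible division of the molecules
entropies = {n: k_B * math.log(microstates(N, n)) for n in range(N + 1)}

# The even split maximizes the number of microstates (and hence the entropy) ...
most_probable = max(entropies, key=entropies.get)
# ... while the "all molecules on one side" state has exactly one microstate (S = 0)
p_one_side = 0.5 ** N  # probability of spontaneous un-mixing

print(most_probable)        # 50
print(microstates(N, 0))    # 1
print(f"{p_one_side:.1e}")  # 7.9e-31
```

Even for only 100 molecules the spontaneous un-mixing probability is about 10^-30; for a macroscopic sample (N ~ 10^23) it is zero for all practical purposes.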
Similar conclusions apply to open systems near equilibrium. To see this, consider such a system at constant temperature T. For such a system, we have the following relation:
F = E - TS
where F is the so-called ‘Helmholtz free energy’, and E is the internal energy. At equilibrium, F is a minimum since S, again, is a maximum. Microscopically, the situation is that if there are various states accessible to the system, the probability that it will be found in a given state r with energy E_r is, as we saw above,

P_r ∝ exp(-E_r/kT)
where k is Boltzmann’s constant. For T sufficiently low, only the lowest energy levels will have any probability of being occupied; but as T increases, higher energy levels become more and more probable. So, for sufficiently low T, low entropy ordered structures may be formed, provided that they correspond to low energy as well (recall that to form ice from water, a large amount of heat, i.e. energy, must be removed from the water at constant temperature T = 0° C, the so-called `latent heat of fusion’). Can such processes (i.e. extraction of heat) account for the order seen in biological structures? That is, could such processes account for the formation of, say, protein molecules, as they do the formation of such complex crystalline structures as snowflakes? Take, for example, the case of a biological macromolecule consisting of a chain of 100 amino acids. There are 20 amino acids in nature, and a molecule can only function correctly if their order in it is correct.
The energy levels of all these different orderings are essentially the same; hence, from a random distribution, the number of arrangements among which the single correct one must be found is

N ~ 20^100 ~ 10^130
Thus only 1 out of every 10^130 molecules would be usable. If a new arrangement, or `mutation’, could occur every 10^-8 seconds (which it could not), the average time required for the formation of the protein would be
t ~ 10^122 seconds
which is more than 100 orders of magnitude longer than the estimated age of the earth (10^17 seconds). In Prigogine’s words, “We realize that the spontaneous formation of this (rather small) protein must be ruled out.”26 Furthermore,
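The arithmetic behind these order-of-magnitude estimates can be checked directly; the sketch below merely reproduces the figures quoted above.

```python
import math

# Number of distinct sequences of a 100-residue chain drawn from 20 amino acids
N = 20 ** 100
print(math.log10(N))  # ≈ 130.1, i.e. N ~ 10^130

# If a new random arrangement could be tried every 10^-8 s, the mean waiting
# time for the one functional sequence is
t = N * 1e-8           # seconds
age_of_earth = 1e17    # seconds, the order of magnitude used in the text

print(math.log10(t))                 # ≈ 122.1, i.e. t ~ 10^122 s
print(math.log10(t / age_of_earth))  # ≈ 105.1: over 100 orders of magnitude longer
```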
… the maintenance of life would appear in this view to correspond to an ongoing struggle of an army of Maxwell’s demons against the laws of physics to maintain the highly improbable conditions that permit its existence.27
Thus there is no escape whether one uses a macroscopic description (based on system theory) or a microscopic description (based on probability distributions of energy states): if we are dealing with equilibrium processes, they are described by equations which do not allow for ongoing decrease of entropy or movement among different points of equilibrium. The very existence of biological systems, to say nothing of their evolution, cannot be explained on the basis of classical equilibrium thermodynamics. Thus far, at least, the creationists28 who reject evolution because of its alleged contradiction with the laws of thermodynamics are correct. But their position is not really enviable, for if that were the end of thermodynamics, the matter would be closed and not only would evolution be ruled out, but each living thing would have to be regarded as an ongoing miracle. As Prigogine expresses it:
… the apparent contradiction between biological order and the laws of physics, particularly the second law of thermodynamics, cannot be removed as long as one tries to understand living systems by the methods of equilibrium thermodynamics.29
Or in other words, analysis of biological systems on the basis of equilibrium thermodynamics yields a negative answer to all three of our earlier questions. There is more to thermodynamics, however; and as the reader may have surmised, thermodynamics must also deal with non-equilibrium situations. The important question is thus: what kind of behavior can arise in systems that are, because of continuous energy inflows, constrained to be in a condition of non-equilibrium for indefinite periods? Is it chaotic, or can there be order?
NON-EQUILIBRIUM THERMODYNAMICS
Many familiar processes are clearly not of the equilibrium sort: free expansion of a gas, for example; or an ice cube tossed into a pot of boiling water. Can these processes be understood quantitatively? The answer is yes, but the mathematics is significantly more advanced. Moreover, the early workers in the field considered non-equilibrium
… as a perturbation temporarily preventing the appearance of structure identified with the order at equilibrium. To grow a beautiful crystal we require near-equilibrium conditions, and to obtain a good yield from a thermal engine we need to minimize irreversible processes such as friction and heat loss.30
In order to study the behavior of systems which are not in equilibrium, we need expressions for entropy which go beyond the inequalities given above and yield the actual entropy production. By means of these expressions it will be possible to draw important conclusions about non-equilibrium processes vis-a-vis living systems.
Such expressions for entropy production have the general form

P = dS_i/dt = ∫ σ dV

where P designates the total entropy production, and σ is the source of entropy per unit time and volume. The second law requires that σ[S] ≥ 0.31
Such systems, clearly, must be open, since if they are to remain in a state of non-equilibrium, the entropy produced internally must be carried away by flows of energy and matter across the system boundary (dS_e < 0).
Work by L. Onsager extended classical thermodynamics to include non-equilibrium processes governed by linear differential equations. This new branch of thermodynamics, referred to as `linear non-equilibrium thermodynamics’, deals with flows and rates of irreversible processes which are linear functions of thermodynamic forces. Symbolically,
J_i = Σ_k L_ik X_k

where the J_i are generalized rates and the X_i generalized forces.32 For the case of two coupled processes, the Onsager relations take the form
J_1 = L_11 X_1 + L_12 X_2

J_2 = L_21 X_1 + L_22 X_2
Onsager proved that, in general, L_12 = L_21.33 Now in the linear range of irreversible processes, which describes processes near equilibrium (and thus most real systems intended to operate at equilibrium, which are usually not exactly at equilibrium at all times), the entropy production source term has the form

σ = Σ_kl L_kl X_k X_l ≥ 0
Utilizing the Onsager relations, it can be shown that for the total entropy production P we have

P = ∫ σ dV ≥ 0

Its time derivative has the following values:
dP/dt < 0 away from steady state
dP/dt = 0 at steady state
This is the theorem of minimum entropy production.34 Thus entropy production acts as a Lyapunov function for the overall system which, because it is linear, will be asymptotically stable and hence quickly return to its steady state if perturbed.
Thus the steady-state of a system in the linear range of non-equilibrium processes may be characterized by a level of entropy which is lower than that of an equilibrium system; in the case of thermal diffusion, for example, concentration gradients can be produced with a resulting lower entropy than for uniform mixtures. Hence, non-equilibrium may be a source of order in the sense of decreased system entropy; but such steady states are very close to uniform in space if external system constraints permit. The stability of such systems therefore implies that any spontaneous emergence of order of a type which differs in some qualitative way from equilibrium-type behavior is ruled out.35 Thus, biological systems are not explainable on the basis of linear non-equilibrium thermodynamics.
Consider now a general non-linear system. As discussed above in connection with system theory, any system in the vicinity of an equilibrium point can be linearized, and if the resulting `linearized’ system is stable, then the original non-linear system will be stable as well in some region around the equilibrium point.36 For thermodynamic systems in particular, it can be shown that while the excess entropy production term dP/dt no longer has special properties for non-linear systems,37 the second variation of the entropy δ²S and its time derivative take the form

δ²S ≤ 0

d/dt (½ δ²S) = Σ_k δJ_k δX_k ≥ 0 (near equilibrium)
Thus any such system is stable, in agreement with the general discussion, for a limited volume in phase space around the equilibrium point.38 It follows that the state of maximum entropy or disorder cannot be modified so long as deviations are the result of random fluctuations or disturbances. But now suppose some process comes about causing a systematic deviation from equilibrium. As we know from the general discussion of system behavior, if that occurs in the case of a non-linear system, the system can leave the region of state space where it was stable, and either become unstable, evolve to another point of stable equilibrium, or undergo limit cycle behavior. Prigogine has extensively investigated systems in which there are constraints preventing the attainment of equilibrium. The unexpected and rather spectacular results have been that radically new and different structures arise in far-from-equilibrium conditions.
In the case of autocatalytic chemical reactions, for instance, non-linear behavior is a natural result of the form of the reaction equations. Consider the following example:
A → X

2X + Y → 3X

B + X → Y + D

X → E
The differential equations describing this system (taking the rate constants as unity and the concentrations of A and B as held fixed) are:

dX/dt = A + X²Y - (B+1)X

dY/dt = BX - X²Y

If B > 1 + A², then this system, called the `Brusselator’, will exhibit limit cycle behavior. Any point in the state space sufficiently close to the equilibrium point approaches the same periodic trajectory.
A limit cycle, of course, represents a periodic motion, much as a pendulum, and can be used as a type of clock. Temporal organization, clearly, implies structural stability.39
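The convergence of different starting points onto one periodic orbit can be checked numerically. The sketch below integrates the Brusselator equations dX/dt = A + X²Y - (B+1)X, dY/dt = BX - X²Y with the illustrative choice A = 1, B = 3 (so B > 1 + A²); the starting points, step size, and durations are arbitrary assumptions.

```python
# RK4 integration of the Brusselator: dX/dt = A + X^2*Y - (B+1)*X,
# dY/dt = B*X - X^2*Y, with A = 1, B = 3 > 1 + A^2, so a limit cycle exists.

def step(x, y, A, B, dt):
    """One fourth-order Runge-Kutta step of the Brusselator equations."""
    def f(x, y):
        return (A + x*x*y - (B + 1)*x, B*x - x*x*y)
    k1 = f(x, y)
    k2 = f(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1])
    k3 = f(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1])
    k4 = f(x + dt*k3[0], y + dt*k3[1])
    return (x + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            y + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

def x_range(x, y, A=1.0, B=3.0, dt=0.01, steps=30000, tail=10000):
    """Min and max of X over the last `tail` steps, after transients die out."""
    xs = []
    for i in range(steps):
        x, y = step(x, y, A, B, dt)
        if i >= steps - tail:
            xs.append(x)
    return min(xs), max(xs)

# Two quite different starting points settle onto oscillations with the same
# amplitude: the same limit cycle, regardless of where the system began.
lo1, hi1 = x_range(1.1, 3.0)   # start near the unstable equilibrium (A, B/A)
lo2, hi2 = x_range(3.0, 0.5)   # start far from it
print(round(lo1 - lo2, 3), round(hi1 - hi2, 3))
```

Both trajectories oscillate over the same range of X to within numerical error, which is the sense in which the limit cycle keeps time like a clock.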
If now diffusion is added to the system, the reaction differential equations take the form

∂X/∂t = A + X²Y - (B+1)X + D_X ∂²X/∂r²

∂Y/∂t = BX - X²Y + D_Y ∂²Y/∂r²
Here, under suitable boundary conditions, spatial dissipative structures can arise, e.g. waves or much more elaborate structures (see Fig. 2).
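A minimal numerical sketch of such spatial structure formation: a one-dimensional Brusselator with diffusion, integrated by explicit finite differences on a periodic domain. The parameter values (A = 2, B = 4, D_X = 1, D_Y = 10), grid, and time step are illustrative assumptions chosen so that the uniform steady state (X, Y) = (A, B/A) is stable to uniform perturbations (B < 1 + A²) but, because D_Y >> D_X, unstable to spatial ones; a spatial pattern then grows from a small random disturbance.

```python
import random

# 1-D Brusselator with diffusion, explicit Euler / finite differences,
# periodic boundary conditions.  Parameters are illustrative only.
A, B = 2.0, 4.0
DX, DY = 1.0, 10.0
n, dx, dt = 64, 0.5, 0.002

random.seed(0)
u = [A + 0.01*(random.random() - 0.5) for _ in range(n)]     # X field
v = [B/A + 0.01*(random.random() - 0.5) for _ in range(n)]   # Y field

def variance(f):
    m = sum(f)/len(f)
    return sum((x - m)**2 for x in f)/len(f)

var0 = variance(u)  # variance of the initial, nearly uniform state
for _ in range(10000):  # integrate to t = 20
    lap_u = [(u[i-1] - 2*u[i] + u[(i+1) % n])/dx**2 for i in range(n)]
    lap_v = [(v[i-1] - 2*v[i] + v[(i+1) % n])/dx**2 for i in range(n)]
    # both fields updated from the old values
    u, v = ([u[i] + dt*(A + u[i]**2*v[i] - (B+1)*u[i] + DX*lap_u[i]) for i in range(n)],
            [v[i] + dt*(B*u[i] - u[i]**2*v[i] + DY*lap_v[i]) for i in range(n)])

# The spatial variance of X has grown by orders of magnitude: a dissipative
# structure has formed out of a nearly uniform initial state.
print(variance(u) > 100*var0)
```

Nothing but the reaction kinetics and the two diffusion constants is supplied; the spatial organization emerges from the dynamics themselves, which is the point of the discussion above.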
Thus, given

(1) an open system with sufficient free energy,
(2) subject to feedback of the correct type, and
(3) driven far from equilibrium,

it is possible for structures (both temporal and spatial) to arise spontaneously which correspond to the mathematical limit cycles discussed earlier.41 (Lest the reader become concerned that the argument from design is now out the window, the Second Law does put constraints on the order produced; philosophical implications of order-creating processes will be discussed in the last section of the article.)
Such structures obviously are of critical importance for the understanding of biological systems (as well as many other systems), and they have accordingly been given a special name:
We have introduced the term ‘dissipative structures’ to contrast such structures from the equilibrium structures [such as crystals]. Dissipative structures provide a striking example of nonequilibrium as a source of order. Moreover, the mechanics of the formation of dissipative structures has to be contrasted with that of equilibrium structures based on Boltzmann’s order principle.42
Such structures can only exist provided there is a sufficient flow of energy and matter.
As hinted in the foregoing quotation, Boltzmann’s order principle, namely that all possible complexions of a system are equally probable, and that S = k ln P, where P is the number of those complexions, breaks down in far-from-equilibrium conditions. We have already reviewed the case of the 100 amino acid protein molecule, and noted its vanishingly small probability of formation under the assumption of classical microscopic thermodynamics. In the case of dissipative structures, however, the existence of such molecules is explainable:
A new molecular order appears that corresponds basically to a macroscopic fluctuation stabilized by exchanges of energy with the outside world. This is the order characterized by the occurrence of dissipative structures. We call this order `order through fluctuations’ to contrast it with the Boltzmann order principle, which is basic for the understanding of equilibrium structures.43
The key here is that such systems are non-linear. The laws of thermodynamics permit us to deal with such systems: conservation of energy remains unchanged, though energy can no longer be regarded as a state variable (because such systems are not in `states’); rather, the sum of all energy flows plus potential energy changes must remain constant. Entropy of course is still extremely important because it bounds the reactions which can take place and, through excess entropy production, plays a key role in determining when a system will shift from one region of state space to another or go into limit cycle behavior. Naturally, the relationships
dS_i ≥ 0

dS = dS_e + dS_i
remain true for systems involving dissipative structures.
Thermodynamics is summarized in chart 1, with both of its major branches (equilibrium and non-equilibrium) shown, and examples of the type of behavior which occur in each are given. As discussed above, biological systems and evolution are not possible on the basis of equilibrium thermodynamics; non-equilibrium processes are required.
One further point should also be clarified. It is not the case that dissipative structures can only or do only arise in living systems; in fact, they are readily produced in laboratory environments, and rather striking photographs of them have been taken.44
Clearly, the foregoing argument demonstrates that evolution does not represent any prima facie contradiction of the laws of thermodynamics when they are understood in their full generality.45 Just as clearly, those laws are a constraint upon any sort of evolution; can we refer to the situation as cooperation?
Again, system considerations come into play.
THERMODYNAMICS AND EVOLUTION: COOPERATION?
Thus far, we have seen that there are no prima facie contradictions between thermodynamics and evolution when account is taken of nonequilibrium thermodynamics. Moreover, it appears that all biological processes (and all evolutionary processes) take place in accordance with the second law as it applies to systems in the most general sense:
dS_i ≥ 0

∂s/∂t + div J_s = σ

(where s is the local entropy density, J_s the entropy flux, and σ ≥ 0 the entropy source).
Thus, the second law is clearly a constraint on evolutionary processes and mechanisms. Can we speak of cooperation, in the sense that evolution necessarily or at least probably follows from the laws of thermodynamics?46
Unfortunately, the final answer to this question cannot be given on the basis of what is known at the present time. Very little work has been done to date on such topics as how complexity can evolve or increase. Nonetheless, we can at least frame the question and make some general observations with respect to it.
Let us begin by fixing more precisely the meaning of evolution. Three `levels’ or `hypotheses’ of evolution may be distinguished. No doubt, these are not the only ones; but for the purposes of our discussion, they are convenient:
(1) Historical Evolution: in times past, various differing life forms existed; at first, one-celled organisms; later, multi-celled animals, reptiles, dinosaurs, mammals, etc.,47 and the approximate time of these life forms can be determined. No connection between them is assumed other than chronological.
(2) `Weak’ Darwinian Evolution: Historical Evolution is true, and in addition, later or `higher’ forms of life arose from lower forms in the sense of being based upon them, though not necessarily determined (in a complete or exhaustive sense) by them. The nature of the mechanisms giving rise to later forms of life is not specified; they may or may not be natural processes.
(3) `Strong’ Darwinian Evolution: `Weak’ Darwinian Evolution is true, and in addition it is assumed that the laws of physics and chemistry are sufficient to explain the origin and development of all species, including man, assuming random genetic mutations as the mechanism for increasing complexity mediated by the process of natural selection.
Very briefly, we may review the evidence for these three theories or levels. First, for `historical’ evolution, there is an enormous amount of convergent data, including fossil records, radio-carbon dating, geological strata, and so forth. After a century of intensive paleontological, genetic, anatomical, geological, astrophysical, and related scientific research, historical evolution has to be regarded as exceedingly well established from a scientific standpoint.48 However, from a theological and natural philosophy standpoint, its very incompleteness presents some rather serious problems:
The natural philosopher would abhor a jumbled, disorderly concourse of unrelated natural events as totally out of keeping with natural laws. Natura non facit saltus. The theologian would abhor the thought of God specially and immediately creating, for example, distinct species of finches for each of the several Galapagos Islands at different times (multiply this miraculous intervention by the hundreds of thousands!) for it goes directly contrary to the theological axiom that God ordinarily orders all things wisely through secondary causes.49
For `Weak’ Darwinian evolution, there is the evidence of morphological structures and comparative anatomy; for example, the fact that large numbers of plants and animals share many common structural features, be they the central nervous system, similar hand and foot bone structures, circulatory and respiratory systems, etc., including the basic cellular and genetic structures in both plant and animal kingdoms.50 There is also evidence from biochemistry: the fact that the basic chemical structure of all living things is uniform throughout the plant and animal kingdoms (all employ the same 20 amino acids, for example). And there is the voice of comparative embryology: “In the ontogenetic development of plants and animals (from embryo or seed to adult), the stages through which all members of each natural group pass are very similar.”51 For this theory, then, there is a considerable body of such circumstantial evidence, though it cannot be regarded as proved beyond all doubt.
With respect to the `strong’ form of evolution theory, there is at least one global system problem which has not been adequately addressed by the theory’s proponents: What mechanism can account for the increasing complexity of organisms which is such a prominent feature of evolution? That is, how and in what way can more complex forms of life arise from less complex forms? This type of large-scale change is sometimes referred to as macro-evolution; it must be carefully distinguished from a related but fairly well understood question with which it is often confused: Are organisms programmed to make minor structural modifications in response to environmental pressures? Such change, possibly including the development of new species, is sometimes termed micro-evolution. But the evidence for micro-evolution must not be taken as evidence for macro-evolution. Or, in the terminology of optimization theory, natural selection could perhaps reach a local maximum (micro-evolution), but not a global maximum (macro-evolution).52 The minor structural changes do not necessarily indicate the beginning of an evolutionary process or trend; virtually all stable systems must be capable of minor structural variations to optimize themselves for a particular environment. Consider, as a trivial example, the modifications to an automobile to make it fit for a particular climate or terrain, e.g. different tires, different transmission, carburetor adjustments, different oil or fuel, additional air conditioning or heating, and so forth; automobiles would not be very practical if a whole new type had to be invented for each possible condition of operation! An altogether different question is that of major structural changes to permit new functions to be performed, or all existing functions to be performed at a higher level. How much change is required to convert a horse and buggy to an automobile?
Major structural innovations: invention of the transmission and drive train, fuel system, internal combustion engine, and so forth. This cannot be regarded as the same type of modification required to adapt the car to a new environment. Regrettably, the proponents of `strong’ Darwinian evolution gloss over this problem so often that one wonders if they are aware of it at all:
Evolutionists reason that if small changes can occur in a short time, large scale changes can take place during the many millions of years of earth history.53
The famous geneticist Dobzhansky comments:
Experience [!] shows, however, that there is no way toward understanding of the mechanisms of evolutionary change, which require time on geological scales, other than through understanding of microevolutionary processes observable within the span of a human lifetime, often controlled by man’s will, and sometimes reproducible in laboratory experiments.54
There is always great danger in long-range extrapolation. As Taylor has observed,
The fact of evolution is not in question. What is in question is how it occurred and whether natural selection explains more than a small part of it ….[Natural selection] accounts brilliantly for the minor adaptations which living organisms make to meet the challenges of the environment but it is by no means clear that it explains the major changes in evolution: the change from spineless jellyfish to fish with brains and backbones, for example, or the change from fish to air-breathing, four-legged land animals, to name only the most obvious examples.55
As long as a suitable theoretical explanation of the emergence of progressively higher levels of complexity is not forthcoming, the `strong’ form of evolution theory cannot be regarded as an established scientific theory. It may be an article of faith on the part of many biologists, geologists, paleontologists, and others; but they should not allow themselves to be blinded by ideological or extra-scientific considerations regarding its true status. The usual explanation of the emergence of complexity in terms of random mutations is seriously deficient when analyzed mathematically. A discussion of that subject is outside the scope of this article; the interested reader should consult reference [56].
We shall not pursue this subject further; from a rigorous scientific standpoint, the third level or `strong’ Darwinism must be regarded as quite speculative now because of the system-level problems indicated, though it has not been refuted in any sense. Its proponents should recognize it for what it is: a theory about how large-scale changes came about without much hard supporting empirical evidence, and acknowledge that one can admit the other two levels of evolution without being committed to Darwin’s full theory.57 Of course, some other purely natural mechanism or set of mechanisms could be discovered which would account for the evolutionary changes; but at present the question remains open. As a consequence, we cannot say that there is definitive evidence for cooperation between thermodynamics and evolution in the sense of thermodynamics engendering evolutionary processes; more research needs to be done.
EVOLUTION AND CHRISTIANITY
Turning now to the question of evolution and Christianity, we may begin by noting that historical evolution does not present any insuperable problems unless one insists on a particular type of literal interpretation of the Bible (as do the Fundamentalist Protestants). God could have created each species at a different time. `Weak’ Darwinian evolution does not pose any insuperable problems either; God could have intervened in the natural history of the world once or any number of times to ensure the correct development of increasingly complex organisms.58 Theological problems arise when one pushes a `strong’ Darwinian evolution as the only possible scientific hypothesis (or any evolution theory in which all changes in the natural order, including man and the totality of his faculties, come about by purely natural processes regulated by the laws of physics and chemistry). Such a theory, which in effect reduces man to another creature in a purely material universe, will of course relegate the soul to a position between imaginary and metaphorical.59 As the Dominican priest and biologist Raymond Nogar has pointed out,60 the problem with evolution arises primarily because of its automatic association with the strong form, leading inexorably to pantheism or atheism. Huxley is a typical representative of this school, referred to as evolutionism, for which science is the new religion:
I submit that the discoveries of physiology, general biology and psychology not only make possible, but necessitate, a naturalistic hypothesis (for religion), in which there is no room for the supernatural, and the spiritual forces at work in the cosmos are seen as a part of nature just as much as the material forces.61
This identification of evolution with strong Darwinian evolution is entirely gratuitous.
Provisionally, then, we may regard the relation between evolution and thermodynamics as not contradictory, certainly involving constraint, and possibly one of cooperation, where by `cooperation’ we mean that natural processes can account for some (though perhaps not all) of the panorama of life forms.
CREATIONISM AND EVOLUTION
A few remarks are in order on some of the better known critiques of evolution by creationists. The general thrust of the creationists’ arguments is fourfold:
(1) The theory of evolution contradicts the Bible (O’Reilly).
(2) Evolution would violate the laws of thermodynamics (Gish, O’Reilly, Morris62).
(3) The theory of evolution is not in accord with the fossil record (Dewar63, O’Reilly).
(4) Evolution would require changes in organisms which are not possible on the basis of the presumed mechanism of change (Dewar).

Argument (1) is outside the scope of this article. Argument (4) is the system level argument expounded earlier. There is evidence for argument (3), but a general discussion of paleontology is also outside the scope of this article. Here we shall concentrate on argument (2). The creationists who pursue this line of reasoning typically attempt to demonstrate that any creation of order out of primordial chaos is in violation of the Second Law, and therefore impossible; moreover, were it to happen, or have happened, it would be prima facie evidence for atheism:
… there is no such thing that could be legitimately called theistic evolution. By definition, evolution is a strictly mechanistic, naturalistic, and therefore atheistic process.64
We pass over the evident non-sequitur of identifying naturalistic and atheistic processes, which is a red herring in the present context, and caution the reader that he must not permit his attention to be diverted from the central question by such emotional appeals. That question is, Does thermodynamics say that `order’ can never arise out of `disorder’, i.e., can organization and structure never increase, even locally? Here is the view of Duane Gish, associate director of the Institute for Creation Research:
First, no scientist has ever detected any tendency of matter to transform itself from a disordered state to a complex, ordered state. There is no natural law in science that describes such a property of matter. There is, however, a natural law that describes exactly the opposite tendency known as the second law of thermodynamics.65
He restates the issue more specifically as follows:
Unquestionably, the second law applies to an isolated system, one into which no energy is entering from the outside. The second law says that the order and complexity of such a system can never increase, but that the disorderliness or randomness of such a system (its entropy) will steadily increase with time. Yet evolutionists believe the universe is an isolated system that transformed itself from an initial chaotic state (following the Big Bang) to its present highly complex state. This is directly contradictory to the second law.66
As we have discussed above, the second law has a very precise mathematical statement, which is
dS_i ≥ 0
But it can happen that
dS_e + dS_i < 0
in an open subsystem. The state following the Big Bang may have been `chaotic’ in a sense to an observer of it; but from a thermodynamic standpoint, that state was one very rich in available energy and correspondingly had a relatively low entropy, the things which alone determine its possible future evolution.67 `Disorder’ is too imprecise a term for the scientific purposes of thermodynamics; its connotative meaning, in particular, is quite misleading because of its application to situations in daily life. Clearly, the entropy of the universe has increased since the Big Bang; but just as clearly, thermodynamic processes have occurred which have resulted in a high degree of order in some places, albeit at the `expense’ of disorder elsewhere. The Second Law applied to the universe as a whole says only that the sum total of all the entropy changes must be greater than zero, which is almost certainly true (no evolutionist, certainly, disputes it), given the extremely low entropy `cost’ of information and organization.68 Gish’s attempt to extrapolate from the order found in biological systems on earth to what the universe as a whole manifests does not take account of the orders of magnitude involved.
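The bookkeeping can be made concrete with a familiar case of local ordering: freezing water in a refrigerator. The figures below are illustrative assumptions (latent heat of fusion roughly 334 kJ/kg; the factor 1.5 on the Carnot work is an arbitrary stand-in for a real machine’s irreversibility); the water’s entropy decreases while the total entropy of water plus room still increases, exactly as dS_i ≥ 0 requires.

```python
# Entropy bookkeeping for local ordering: freezing 1 kg of water.
# The subsystem's entropy falls (order appears locally), but the total
# entropy of subsystem + surroundings rises, as the Second Law demands.

Q = 334e3                                 # heat removed at T_cold, J (latent heat ~334 kJ/kg)
T_cold, T_room = 273.0, 293.0             # temperatures in K

dS_water = -Q / T_cold                    # entropy *decrease* of the water
W_min = Q * (T_room - T_cold) / T_cold    # reversible (Carnot) work to pump the heat
W = 1.5 * W_min                           # a real, irreversible refrigerator does more
dS_room = (Q + W) / T_room                # heat + work dumped into the room

dS_total = dS_water + dS_room
print(round(dS_water, 1))  # ≈ -1223.4 J/K: local entropy down, order up
print(round(dS_total, 1))  # positive: the Second Law holds for the whole
```

With a perfectly reversible refrigerator (W = W_min) the total change would be exactly zero; any real machine makes it strictly positive, so the local ordering never violates dS_i ≥ 0.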
Several paragraphs later, he belatedly acknowledges (or at least does not dispute) that the energy from the sun could account for the order appearing on the earth, but then argues that it could not account for the origin of life because `. . . raw, uncontrolled energy is destructive, not constructive.’69 On this point, Gish needs to reexamine the facts. The question involved is one of the behavior of dynamic systems, as discussed in detail earlier, and order can arise spontaneously in these circumstances if certain conditions are met, among them that the system be in the realm of non-equilibrium processes. And the nature of asymptotically stable systems (and systems exhibiting limit cycle behavior) is such that once they are started at any point of their region of stability, they go to the equilibrium point (or limit cycle) of themselves, by their own dynamics; there is no need of external guidance at every step. Thus it is a great oversimplification, and indeed false in some instances, to state that `. . . machines are required to build machines, and something or somebody must operate the machinery.’70 The scientific facts of the matter have been clearly stated earlier in this article; and anyone who genuinely seeks to understand natural processes and the Second Law must eschew vague generalizations about going from disorder to order, and machines making machines, and instead make the effort to understand what order is in physics and how it applies in the case of interest. The defense of the faith is not served by misapplication of scientific laws.
Gish also discusses the amino acid creation problem. His arguments are vitiated once again because he claims that the amino acids could not be created at all by natural processes since they could not arise on the basis of the chance arrangements produced by (equilibrium) thermodynamics, a fact established earlier in this article and not in dispute. One might just as well argue that space travel is impossible because every time one throws a toy rocket into the air, it falls back to earth. Obviously, processes different in kind are involved in the creation of biological macromolecules as well as the launching of rocket ships.
Similar arguments to those of Gish were advanced by Sean O’Reilly, who wrote:
The second law also directly contradicts evolutionary theory: if language has any meaning, both cannot be true. Evolutionary theory requires a universal principle of upward change; the entropy law is a universal principle of downward change. The latter has been proved to apply in all systems tested so far, the former cannot even be tested scientifically.71
He lists the four criteria which creationists generally claim are necessary for a system to be one in which order can increase:72
- It must be an open system.
- There must be available energy.
- There must be a directing program.
- There must be a conversion mechanism, to convert the available energy into the specific work needed.
Then he gives the example of the construction of a house:
All can agree that no amount of building materials left exposed to the sun for an unlimited period would ever result in a house unless there were some builders-builders, moreover, with a specific know-how. An army of children or of monkeys might manage to erect something like a shack, but it would not be a house.73
But this example bears some scrutiny. Recalling the discussion of stability theory given above, observe that a house is stable in the same sense as the book lying on the table: if displaced slightly, it remains in its new position but does not return to its original position. Similarly, a house can tolerate slight disturbances-windstorms, minor earthquakes, etc.-with possibly some slight damage such as loss of roof shingles or broken windows. But the house is not capable of restoring itself to its original condition; in other words, it is stable, but neither asymptotically stable nor exhibiting limit cycle behavior. Hence it will not spontaneously come together from its component materials. However, the chemical processes responsible for biological molecules must, as we have seen, be of such a type; otherwise the molecules would not be formed. Thus, the analogy with the house is not well taken: houses do not exhibit the requisite stability, and hence one cannot reason from their behavior. Reasoning about biological systems must be on the basis of non-linear, non-equilibrium systems, since they are the only ones which can account for observed behavior-and not just creation a la evolution, but the ordinary functioning of such systems, whose occurrence creationists do not dispute. Such systems, again, only require that the right ingredients be put together; inherent system dynamics do the rest. Thus at least O'Reilly's condition (3) is not necessary.
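The contrast drawn here between mere stability and asymptotic stability can be sketched numerically. Both toy systems below are illustrative assumptions, not from the article: the "house" obeys dx/dt = 0 after a displacement (it stays wherever the disturbance leaves it), while an asymptotically stable system obeys dx/dt = -x and returns to equilibrium of itself.

```python
# A sketch of the distinction: a merely stable system (house, book) has no
# restoring dynamics and remains displaced; an asymptotically stable system
# returns to its equilibrium at 0 by its own dynamics.

def evolve(x0, rate, dt=0.01, steps=1000):
    """Euler-integrate dx/dt = rate(x) from the displaced state x0."""
    x = x0
    for _ in range(steps):
        x += dt * rate(x)
    return x

displaced = 1.0  # both systems nudged one unit from equilibrium at 0

merely_stable = evolve(displaced, lambda x: 0.0)   # house: stays displaced
asymptotic    = evolve(displaced, lambda x: -x)    # returns toward 0

print(merely_stable)  # 1.0  (no restoring dynamics)
print(asymptotic)     # ~ 4.3e-05 (decays roughly as e^{-t} over t = 10)
```

Only the second kind of system can spontaneously reassemble itself, which is why the house is a poor analogy for biological chemistry.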
Nor is it true that `. . . the relation between irreversible thermodynamics and information theory is one of the fundamental unsolved problems in biology.’74 Once the nature of dissipative processes is known, the relation with information theory is fairly straightforward: information and negentropy are two physical quantities which can be interconverted; and they are related in other ways.75 Other creationists have enunciated arguments similar to the foregoing, and the same remarks apply to them.
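The interconversion of information and negentropy mentioned above can be illustrated with Brillouin's conversion factor of k ln 2 of entropy per bit. The message size in the example is an arbitrary illustration, not a figure from the article.

```python
# A numerical sketch of the negentropy-information relation: one bit of
# information corresponds to k*ln(2) joules per kelvin of negentropy, so
# the two quantities can be expressed in common physical units.

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def negentropy_of_bits(n_bits):
    """Entropy decrease (J/K) equivalent to specifying n_bits of information."""
    return n_bits * k_B * math.log(2)

# An illustrative genome-sized message of 10 million bits:
print(negentropy_of_bits(1e7))  # ~ 9.57e-17 J/K
```

The point is simply that the relation is quantitative and straightforward, not an unsolved mystery.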
THE POTENTIAL DANGERS OF CREATIONISM
While their arguments against evolution based on the laws of thermodynamics have no merit, the creationists have rendered a valuable service insofar as they have pointed out weaknesses and serious lacunae in the theory of evolution,76 and have likewise criticized its proponents for stonewalling evidence which does not conform to it-often, it seems, more from ideological than scientific motives. Nonetheless, the creationists as a whole are on weak ground scientifically and philosophically and, by making the same mistake as the evolutionists-viz., identifying evolution with `strong' Darwinian evolution-they close off all paths out of the creation-evolution dilemma and in effect make the truth of Christianity dependent upon the falsity of a broad range of scientific hypotheses when this is not necessary. In the author's opinion, this is the most serious problem with creationism, because it has the explosive potential to deal Christianity a blow which will dwarf that of the Galileo affair. The Fundamentalist Protestants may find themselves with no other escape because of their literal interpretation of the Bible and their virtually non-existent philosophy of nature (and consequent ignorance of the notion of secondary causes), but such is not the case with Catholics, who are heir to long and deep traditions in both of these areas. Evolution may compel a rethinking of some positions and theories, as did the heliocentric theory of the solar system; but unless one links it with certain extreme versions (usually based on extra-scientific assumptions), it does not necessarily pose a danger to Christianity understood in a mature way. Obviously, Christians must maintain that there is something unique about man, and that something other than purely material processes is required for his generation-as opposed to that of other animals and plants, for example; but this does not mean that natural processes are not required as well, and have an essential part to play.
THERMODYNAMICS AND METAPHYSICS
One question remains: does the theory of evolution contradict the established metaphysical principle that order cannot come from disorder? In particular, does the existence of dissipative structures contradict that principle? As discussed above, some are clearly of the opinion that this is so, and are determined to represent evolution as a theory which is in direct contradiction to the Bible, Christianity and philosophy in general. But before pursuing this matter further, it would be useful to examine some proposed solutions, and then critically review the underlying assumptions of the belief in question to see if it has not been turned into an unnecessarily difficult problem.
In the first place, it will not do to affirm, as does Vincent Smith, former Director of the Philosophy of Science Institute at St. John’s University, that
Evolution is a sign of form; entropy of privation; and the indifferent substratum, of primary matter …. For evolution and entropy, if they do signify form and privation, are derived and secondary contraries which must be traced back to their first principles …. Entropy means the loss factor, the privation, the exhaust of what is `burned up’ in the movement toward form.77
This merely sidesteps the issue, namely how can a universal degradation principle such as the Second Law be consistent with the fact of increasing order (in a metaphysical sense; we have already seen how it is so in a scientific sense)? Simply identifying entropy with privation and evolution with form does not constitute an answer but at best another description of the situation to be explained.
Benedict Ashley, O.P.78 has a better appreciation of the problem, and argues that there is no contradiction with the principle that `Nothing which comes to be, comes to be without a proportionate cause’79 because all creation of new entities (atoms, molecules, or organisms) is the result of historical processes and in each case the new entity
… is not a `greater emerging from the less' because the amount of information it contains in integrated form is no greater than the amount of information present in the historical evolutionary process.80
But this fails to resolve the problem on three accounts. (1) Ashley confuses two meanings of information, viz. information about a system or process and the information a system or process uses to carry out its functions. I have discussed information in connection with system theory and behavior elsewhere at length, and the reader is referred there for further discussion of this important distinction.81 (2) Unlike the energy in the universe, information is not a quantity which can be summed and statements made about its grand total as compared to how much individuals have or to what extent they partake of it. Moreover, the structure and organization of a complex system cannot be regarded as any sort of `sum’ of a sufficiently large number of unrelated other organisms (as the creation of a new energy storage device, for example, could be the `sum’ of a large number of smaller storage devices). With the emergence of a new species, something new and possibly more organized and more complex has arisen in the universe which was not `caused’ by as highly structured a predecessor. There is, in short, no law of conservation of information as there is of energy. (3) Finally, Ashley’s remark does not even touch upon the emergence of ordered behavior in dissipative structures, where the collective history of the universe plays a much less obvious role.
But his idea points in the right direction: if we retreat to a more abstract view of order and disorder, then there may not necessarily be any contradiction because there are constraints on the order produced by non-equilibrium systems; it does not arise ex nihilo. As noted above, the Second Law should not be loosely regarded as one of `increasing order’ but one dealing with a particular physical quantity, entropy, and its possible changes under various circumstances. And in one of them, namely systems far from equilibrium, the Second Law itself implies that order can arise through dissipative structures.
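The entropy bookkeeping behind this claim can be sketched as follows. The split of the entropy change into an exchange term and a production term is the standard one in non-equilibrium thermodynamics; the numbers in the example are illustrative.

```python
# A sketch of the constraint under which non-equilibrium order arises.
# For an open system the entropy change splits as dS = deS + diS, where
# diS >= 0 is the internal production required by the Second Law and deS
# is the exchange term, which may be negative. Local order can grow
# (dS < 0) only when the outflow is negative enough to cover the
# production - so the order does not arise ex nihilo.

def entropy_change(deS, diS):
    """Total entropy change of the system; diS must be non-negative."""
    assert diS >= 0, "Second Law: internal production cannot be negative"
    return deS + diS

# Far from equilibrium, strong export of entropy permits local ordering:
print(entropy_change(deS=-5.0, diS=2.0))  # -3.0: the system becomes more ordered
# The exported entropy raises the surroundings' total, so the Second Law
# continues to hold globally.
```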
Let us now turn to original sources and consider the words of St. Thomas and Aristotle. In St. Thomas we find the following comments. In connection with the preexistence of all perfection in God:
… owing to the fact that any perfection which is in an effect must be found in the efficient cause, whether it is of the same nature (as when the cause is univocal, e.g. a man engendering another man), or is of a more eminent type, (as when the agent is equivocal, e.g. in the sun there is the equivalent of what is produced by its energy). For it is manifest that an effect preexists virtually in the acting cause . . .
In connection with the action of grace on man:
… an effect cannot be superior [potior] to its cause.83

In connection with the differences among virtues:
A cause is always superior [potior] to its effect, and among the effects [of the cause], those are superior which are closest to it.84
In connection with the question of God as the sole cause of all grace:
Nothing can act beyond its species, because necessarily a cause must be superior [potior] to its effect.85
In connection with creation (ex nihilo) by creatures:
… the agent and the effect must be similar to each other [sibi esse similia] …86
And distinguishing instrumental and principal causes:
It is not necessary that something be more perfectly realized [principalius] in an instrumental cause than in an effect, but only in the principal cause.87
The acuteness of the `problem' of reconciling evolution (or any order-creating process) considered metaphysically with the principle that an effect cannot be `greater' or more perfect than its cause is linked to one's conception of the relation between science and metaphysics. There are basically two approaches to this relationship: (1) One regards science and metaphysics as covering the same ground, with science either (a) an extension of metaphysics, in the sense that the scientist is looking for causes in somewhat the same way (though perhaps with more refined tools) as the philosopher; or (b) hierarchically related to metaphysics, the two having the same material object but different formal objects, with metaphysics exercising a directing function over mathematics and empirical science. (2) In the other approach, one views science as a completely different type of knowledge about the world, proper to man's condition as a sentient being, but one which does not put him directly into contact with reality (as does his normal mode of perception). Among the representatives of the first view are Jaki88 and Maritain89; of the second, Zubiri90 and Strobl91.
The problem is most acute for representatives of the first view, since for them, every scientific law can at least potentially be in conflict with a metaphysical principle, as in this case.92 Construing the notion of cause too narrowly, along the lines of generation of living beings, or requiring direct contact of cause and effect, will clearly lead to serious problems because the creation of order in a dissipative process has no immediately identifiable efficient cause with the `order’ already in it to transmit to the effect. If, however, one’s philosophy of science permits a more abstract view of causes to be taken, then one can argue that the cause is `greater’ than its effect insofar as the measure of its quality or perfection, in this case negentropy and information (N+I), is greater than the corresponding sum of the changes in that upon which it acts. Aquinas himself, though he could not possibly have known of problems such as this, makes a remark somewhat along the suggested lines in the first quotation above when he speaks of a cause being of a more eminent type, as in the case of the sun with respect to its effects (he probably had in mind the notion of heat, from Aristotle’s Metaphysics 993b25-26, but the basic idea is the same). Indeed, Aquinas believed that natural generation often requires the combined activity of several agents:
One must suppose a variable active principle which, through its presence or absence, causes variety in the generation and corruption of lower organisms [inferiorum corporum]. And this principle is the celestial bodies. Whence it follows that everything in these lower organisms which engenders and determines the species acts as an instrument of the celestial bodies.93
The idea of a more abstract measure of perfection also appears in his remarks on the question of spontaneous generation, since he believed that animals, which are more perfect, could be generated from plants, which are less:
As the generation of one thing proceeds from the corruption of another, there is nothing repugnant in saying that more perfect things [nobiliora] are generated from ignoble things [ignobiliorum]. Hence one can indeed say that animals are generated from the corruption of plants.94
This, of course, would require the action of the sun.
Indeed, if one is going to regard science as a different type of knowledge than philosophy, then such an abstract view of causes is a necessary concomitant since, in that view, the basic thrust of science is to organize phenomenological experience by fitting mathematical structures to it, and causes appear only very indirectly.95
Curiously, Aristotle himself does not seem to think that an efficient cause possessing as high a degree of perfection as its effect is always necessary because of his (incorrect) belief in the natural motion of objects.96 Consider his remarks:
The question might be raised, why some things are produced spontaneously as well as by art, e. g. health, while others are not, e. g. a house. The reason is that in some cases the matter which governs the production in the making and producing of any work of art, and in which a part of the product is present-some matter is such as to be set in motion by itself and some is not of this nature, and of the former kind some can move itself in the particular way required, while other matter is incapable of this …97
Indeed, for Aristotle the source of natural movement is the form;98 as Ross points out, Aristotle `. . . habitually identifies nature as power of movement with nature as form.’99
For Aristotle, moreover, for things to be in their proper place is part of their form100, which thus functions as both a final and efficient cause.101
Aristotle also believes that some low forms of life, at least, are produced spontaneously from `putrefying earth or vegetable matter' acted upon by the heat of the sun.102
But there is a very real sense in which the `natural motion' of non-equilibrium dynamical systems leads to the creation of order just as it could be created by a directly acting external cause. And thus, even though Aristotle's theory is incorrect, at least he believed that such things as the creation of order could occur in the absence of an efficient cause of the same (or higher) type.
The only passage where Aristotle discusses the required degree of perfection in causes is Book α of the Metaphysics, where he says:
… a thing has a quality in a higher degree than other things if in virtue of it the same quality belongs to the other things as well (e.g. fire is the hottest of things; for it is the cause of the heat of all other things).103
This passage, however, clearly refers to the formal cause rather than the efficient cause, as Aquinas points out in his commentary on it:
When a universal predicate is applied to several things, in each case that which constitutes the reason for the predication about other things has that attribute in the fullest sense … it sometimes happens that an effect does not become like its cause, so as to have the same specific nature, because of the excellence of that cause; for example, the sun is the cause of heat in these lower bodies, but the form which these lower bodies receive cannot be of the same specific nature as that possessed by the sun … since they do not have a common matter.104
Thus, we cannot presume that Aristotle was committed to any theory of nature incompatible with the existence of order-creating non-equilibrium processes.
One could conceivably identify the `form' of non-equilibrium processes with the mathematical structures describing them, as some (e.g. Strobl105 and Weizsacker106) have proposed to do with science in general:
The mathematical form-which is in fact a type of causa formalis-subsists in physics as the last remaining content of our old ideas of causality. The concept of form is extended here through the course of time. Differential equations and variational principles indicate that a physical process temporally represents a whole, a form …107
There would, however, be a serious problem if one sought to integrate this conception of scientific laws into Aristotle's philosophy. That problem arises because Aristotle's epistemology is based on the fact that the definition of a thing is not in any sense stipulative or approximating, but actually expresses what the thing is, truly:
Health is the substance of disease (for disease is the absence of health); and health is the formula in the soul or the knowledge of it.108
… the last differentia will be the substance of the thing and its definition …109
… since one element is definition, and one is matter, contrarieties which are in the definition make a difference in species, but those which are in the thing taken as including its matter do not make one.110
That is, there is a strict parallel between definition, form, and essence, so that we know something when we know its essence; i.e., the concept of essence, the real correlate of the definition, and the real, physical moment of a substance come together:111
With regard, then, to essences and actualities, we cannot err: either we know them, or we do not. Inquiry as to what they are takes the form of asking whether they are of such-and-such a nature or not.112
This, however, is precisely the problem with respect to modern science; for if we knew the essence of anything in this radical, noumenal sense113, it is difficult to see why we should not be able to predict everything about it, without the need to investigate it scientifically-a rather slow and painstaking process, to say the least. This critical problem, indeed, is one of the starting points for Zubiri's rethinking of classical metaphysics114; but that is the subject of another article. For now, suffice it to say that Aristotle may not have completely thought through all the implications of his remarks on causality, but being the good natural scientist that he was, he probably would not have been surprised by the existence of non-equilibrium processes and dissipative structures.
CONCLUSION
On the basis of the evidence reviewed here, several important conclusions can be drawn. First, there is no reason to suppose that the theory of evolution (in any of the forms discussed) is in contradiction with any known basic physical law, or that any biological organism operates outside of the laws of physics and chemistry. Rather, these laws function as constraints on such organisms. Hence, if any form of evolution, such as the strong Darwinian form, is to be refuted, it will have to be on the basis of larger system considerations.
Second, there are at least three forms or theories of evolution, and the evidence for them is rather unequally distributed. Hence, one can believe in the first or second without being thereby committed to what is usually meant when the word `evolution' is used, namely the third or strong form. Christians should realize that evolutionists cannot form a science-or-religion dilemma on the basis of established facts. To be sure, there are versions of the theory of evolution which contradict the faith; but they are rather far-reaching theories which go considerably beyond empirical facts and, indeed, represent a philosophy of evolutionism which encompasses questions quite outside the scope of science itself, such as determinism vs. free will, creation ex nihilo, and so forth. These theories, based on extremely long-range extrapolation, must not be identified with evolution. Regrettably, there is as yet no general quantitative model which depicts complex systems and how they can change with time-something that would notably clarify many problems of evolutionary theory and go a long way toward verifying or disproving the third form of evolution as discussed here.
Third, the foregoing does not establish the creationists’ hypothesis since there are many pieces of circumstantial evidence which point to common descent with modifications: common cellular and chemical structures in all living organisms, common physiological features among organisms, and an apparent progression from less complex to more complex organisms in the fossil record. From the standpoint of natural theology, a particular creation of every species would be quite incongruous with the principle that God orders all things wisely through secondary causes. Furthermore, if the creationists want to claim scientific status for their theories, they must play by the rules of the game: what evidence would they admit to disprove scientific creationism?
Fourth, there is no immediate philosophical problem with evolution provided that one does not construe the notion of cause too narrowly, or associate physical causality too closely with metaphysical causality, such as a requirement for contiguous efficient causes.
In view of the present situation, it seems that the wisest position to assume is an agnostic one; another century may be required before the facts are in and some definite conclusion can be drawn. We may be looking at perhaps as few as ten percent of them now.
So, we can assert that there is no contradiction between thermodynamics and evolution; there certainly is constraint in the sense that all organisms obey the laws of thermodynamics; but we cannot yet judge of the question of cooperation.
In the meantime, Christians must proceed with utmost caution. This is especially important with respect to creationism because we can scarcely afford a repetition of the Galileo affair; it is quite possible that new theories may be forthcoming which explain how evolution occurred. Bad science, like bad history, makes bad apologetics. Since some version of evolution theory is almost certainly true, our attention should be directed to the task of integrating evolution and other contemporary scientific theories into a new philosophical synthesis. Given the dominance of science in our culture, there is probably little chance of regenerating Christendom in the West until that task is complete.
NOTES
1Charles Darwin, Origin of Species, London, 1859. Various theories of evolution in a broad sense ranging from eternal return to social progress had been devised prior to Darwin’s time (Empedocles’ rudimentary theory of natural selection was criticized by Aristotle, Phys. 198b10-35); but Darwin was the first to base his theory on a large body of empirical data and try to explain the data in a systematic way.
2Aristotle, Met. 993b23-25; St. Thomas, S.T. la, q.4, a.2.
3 Mathematically, a point 0 is a stable equilibrium point for a time-invariant system of the form

ẋ = f(x(t))

if, for each ε > 0, there exists a δ(ε) > 0 such that

‖x(0)‖ < δ(ε) implies ‖x(t)‖ < ε for all t ≥ 0.
4See, for example, A. Hall and R. E. Fagen, “Definition of System,” in Modern Systems Research for the Behavioral Scientist, ed. by Walter Buckley, Chicago: Aldine, 1968.
5 A `closed form' solution is one in which the dependent variable x is given as a known function of t; e.g. the equation ẋ + ax = 0 has closed-form solution x = e^(-at).
6 Sage & White, Optimum Systems Control, Second Edition, Englewood Cliffs: Prentice Hall, 1977, p. 177-78. Another type of behavior, which involves the system executing a type of periodic motion called a `limit cycle,' will be discussed later.
7M. Vidyasagar, Nonlinear Systems Analysis, Englewood Cliffs: Prentice-Hall, 1978
8C. T. Chen, Introduction to Linear System Theory, New York: Holt, Rinehart, and Winston, 1970, p. 332. A linear system is one which does not involve powers of any term higher than 1, nor products of the xi's. It does not matter if the A matrix is time-varying, i.e. involves terms which are a function of time.
9That of determining the system eigenvalues; this is easily done with a programmable calculator or small computer.
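The eigenvalue test mentioned in note 9 can be sketched for the 2x2 case. The matrices below are illustrative examples, not drawn from the text: a linear system dx/dt = Ax is asymptotically stable exactly when every eigenvalue of A has negative real part.

```python
# Eigenvalues of a 2x2 matrix via the quadratic formula applied to the
# characteristic polynomial lambda^2 - trace*lambda + det = 0, and the
# resulting stability test (all real parts negative).

import cmath

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def is_asymptotically_stable(a, b, c, d):
    return all(lam.real < 0 for lam in eigenvalues_2x2(a, b, c, d))

# Damped oscillator matrix [[0, 1], [-1, -0.5]]: eigenvalues -0.25 +/- 0.97i
print(is_asymptotically_stable(0, 1, -1, -0.5))   # True
# Undamped oscillator [[0, 1], [-1, 0]]: purely imaginary eigenvalues
print(is_asymptotically_stable(0, 1, -1, 0))      # False
```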
10W. Boyce and R. DiPrima, Elementary Differential Equations, Second Edition, New York: John Wiley, 1969, p. 418.
11Cf. G. Nicolis and I. Prigogine, Self-Organization in Non-Equilibrium Systems, New York: John Wiley, 1977, p. 324ff; S.N. Chow and J. K. Hale, Methods of Bifurcation Theory, New York: Springer-Verlag, 1982.
12Robert Sugerman and Paul Wallich, “The Limits to Simulation,” Spectrum, Vol. 20, No. 4 (April, 1983), p. 36-41.
13Paul E. Gustafson, Partial Differential Equations, New York: John Wiley & Sons, 1980, p. 11ff.
14Arnold Munster, Classical Thermodynamics, London: John Wiley & Sons, Ltd., 1971, p. 6.
15Paul A. Tipler, Physics, Second Edition, New York: Worth Publishers, 1982, p. 500.
16W. Yourgrau, A. van der Merwe, G. Rau, Treatise on Irreversible and Statistical Thermodynamics, New York: Dover Publications, 1982, p. 13.
17G. Nicolis and I. Prigogine, op. cit., p. 43-44.
18Enrico Fermi, Thermodynamics, New York: Dover Publications, 1956, p. 139. This is a generalization by Planck of Nernst’s original theorem; cf. W. Pauli, Thermodynamics and the Kinetic Theory of Gases, Cambridge, Mass: MIT Press, 1977, p. 91.
19F. Reif, Fundamentals of Statistical and Thermal Physics, New York: McGraw-Hill, 1965, p. 203.
20Ibid., p. 145.
21Nicolis and Prigogine, op. cit., p. 4. They use the term `disorganization,' which has misleading connotations.
22Nicolis and Prigogine, op. cit., p. 3.
23Reif, op. cit., p. 298.
24 Nicolis and Prigogine, op. cit., p. 2.
25E.g. J. Monod in his book Le Hasard et la Necessite.
26Nicolis and Prigogine, op. cit., p. 23.
27Ibid., p. 14.
28Sean O'Reilly, Bioethics and the Limits of Science, Front Royal: Christendom Publications, 1980, p. 55-60; D. T. Gish, "It is Either `In the Beginning God' - or `. . . Hydrogen' ", Christianity Today, XXVI, No. 16 (8 October 1982). O'Reilly's and Gish's views will be discussed in detail below.
29Nicolis and Prigogine, op. cit., p. 23.
30Ibid., p. 2-3.
31P. Glansdorff and I. Prigogine, Thermodynamic Theory of Structure, Stability, and Fluctuations, New York: Interscience, 1970, p. 4-13; Yourgrau, et al., op. cit., p. 12-13.
32I. Prigogine, Thermodynamics of Irreversible Processes, Third Edition, New York: Interscience, 1967, p. 40.
33Ibid., p. 46.
34Nicolis and Prigogine, op. cit., p. 46-47.
35Ibid., p. 46.
36Vidyasagar, op. cit., p. 21. The technique is to linearize the non-linear system by using the Jacobian matrix evaluated at the equilibrium point. The method works unless the linearized system has eigenvalues with zero real part.
37Nicolis and Prigogine, op. cit., p. 50.
38Ibid., p. 57. ds thus forms a Lyapunov function.
39Nicolis and Prigogine, op. cit., p. 87.
40 From Nicolis and Prigogine, op. cit., p. 181, 182.
41Ibid., p. 89.
42Ibid., p. 4.
43Ibid., p. 5.
There is one point which may be causing the reader some confusion. Recall the examples of the snowflake and the protein molecule discussed earlier, and observe that the case of the protein molecule is quite distinct from that of the snowflake for one primary reason: while both have highly ordered structures, the protein forms part of a complex organic system in which its detailed structure is vitally important, whereas the snowflake can have any of an almost infinite number of crystalline structures and still be a snowflake as good as any other, since it does not form part of a system. If the detailed crystalline structure of a snowflake were important for the functioning of a biological system, the processes of equilibrium thermodynamics would not be adequate to assure its correct formation. Conversely, if any arrangement of amino acids in a protein were as good as any other, formation of proteins by equilibrium methods would be quite adequate. Or, to state the matter in another way, both equilibrium and non-equilibrium thermodynamic systems are capable of creating ordered structures, where `ordered' means that the created structures have a lower entropy than the components of which they are formed. The difference is that, in the case of equilibrium thermodynamics, the particular structure created is a random one of an extremely large number, all of which have the same (or nearly the same) energy; whereas in non-equilibrium systems, a particular one of a large number of possible structures can be created, all of which again have nearly the same energy, and some of which have even lower energy and so, on the basis of equilibrium thermodynamics, will be even more probable.
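The combinatorial side of this point can be made concrete. The chain lengths below are arbitrary illustrations: equilibrium thermodynamics produces a random member of the set of roughly equal-energy arrangements, so the chance of obtaining one specific functional sequence collapses as the chain grows.

```python
# For a chain of n residues drawn from the 20 standard amino acids there
# are 20**n roughly equal-energy arrangements, so a random (equilibrium)
# process hits one particular sequence with probability 20**(-n).

def chance_of_specific_sequence(n_residues, alphabet=20):
    """Probability that a random chain equals one given target sequence."""
    return alphabet ** -float(n_residues)

for n in (10, 50, 100):
    print(n, chance_of_specific_sequence(n))
```

This is why the detailed structure required by a biological system cannot be assured by equilibrium processes, while a snowflake, whose particular structure does not matter, needs nothing more.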
44A. N. Zaikin, A. L. Kawczynski, "Spatial Effects in Active Chemical Systems," J. Non-Equilibrium Thermodynamics, Vol. 2, No. 1 (1977), p. 39-48 and Vol. 2, No. 3, p. 139-152; Glansdorff and Prigogine, op. cit., p. 263, shows a photo of the dissipative structure in the Zhabotinski reaction; J. Pantaloui, R. Bailleux, J. Salade, M. G. Velarde, "Rayleigh-Benard-Marangoni Instability: New Experimental Results", J. Non-Equilibrium Thermodynamics, Vol. 4, No. 4 (1979), p. 201-218; A. Azouni, "Survey of Thermoconvective Instabilities of Confined Fluids", J. Non-Equilibrium Thermodynamics, Vol. 4, No. 6 (1979), p. 321-348.
45The argument in the text rests upon the assumption that the universe, or at least part of it, is amply supplied with available energy-which is another way of saying that its entropy is low enough that it can readily afford to squander `large' amounts on dissipative processes. The entropy of the universe has been estimated to have stabilized at a value of 10^8 photons/nucleon as of about 10^-35 second after the Big Bang (see J. Barrow and J. Silk, "The Structure of the Early Universe", Scientific American, Vol. 242, No. 4 (April, 1980), p. 118-128). This is a fairly large number, relative to the value 1 in these units for terrestrial systems. But the earth is in a much more favorable position than the universe as a whole; it receives about 5.5 x 10^21 J/day of available energy from the sun, which corresponds to 2 x 10^19 J/degree of negentropy - an enormous number, guaranteeing the availability of free energy for dissipative (and other) processes.
46Nicolis and Prigogine, op. cit., ch. 17.
47For a brief summary of the steps, see Verne Grant, The Origin of Adaptations, New York: Columbia University Press, 1963, p. 66ff.
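The arithmetic in note 45 can be checked directly; the effective temperature used here is an assumption of this sketch, chosen to match the quoted figures.

```python
# Check of note 45: dividing the daily solar energy input by a
# representative terrestrial temperature (~275 K, an assumed value)
# reproduces the quoted negentropy figure of about 2 x 10^19 J/K per day.

daily_solar_energy = 5.5e21   # J/day, from the note
temperature = 275.0           # K, assumed effective temperature

negentropy_per_day = daily_solar_energy / temperature
print(f"{negentropy_per_day:.1e} J/K per day")  # ~ 2.0e+19
```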
48For a brief review of the evidence, see Raymond J. Nogar, “From the Fact of Evolution to the Philosophy of Evolutionism,” The Thomist, Vol. 24, (1961), p. 463-501.
49Ibid., p. 479.
50Raymond J. Nogar, The Wisdom of Evolution, New York: Mentor-Omega, 1966, p. 115.
51Ibid., p. 121.
52This point is very well made by Prof. R. V. Young in his article, "An Anatomy of Evolution," Part I, in The Wanderer, Vol. 114, No. 41 (October 1, 1981), p. 4. See also William R. Fix, The Bone Peddlers, New York: Macmillan, 1984, p. 158ff. The suggestion of applying optimization theory to evolution is due to Herbert Simon, The Sciences of the Artificial, 2nd edition, Cambridge: MIT Press, 1981, p. 54-55. This line of reasoning should be pursued further.
53Grant, op. cit., p. 36; p. 426-427.
54Theodosius Dobzhansky, Genetics and the Origin of Species, 3rd edition, New York: Columbia University Press, 1951, p. 16.
55G. R. Taylor, The Great Evolution Mystery, New York: Harper & Row, 1983, p. 11; 13-14. Taylor is definitely not a creationist; he is convinced, however, that the Neo-Darwinian orthodoxy cannot account for known facts.
56Mathematical Challenges to the Neo-Darwinian Theory of Evolution, rev. ed., ed. by P. Moorhead and M. Kaplan, Ross-Erikson, 1983.
57Given the biochemical and morphological similarities within, say, a phylum, there is some reason to postulate a `top down’ development of organisms rather than the `bottom up’ development favored by the theory of evolution. That is, one could think of organisms evolving from a master plan, rather than the master plan arising from the organisms. But this would not seem to account for the apparent progress (in the sense of “doing things better”) within a phylum, for example the sequence psilophytes to gymnosperms to angiosperms. The top-down theory was advocated by the German biologist Richard Goldschmidt. See Taylor, op. cit., p. 42-43, 162. This is the type of question which could be addressed with a comprehensive quantitative model of system change.
58This would not necessarily appear as a discontinuity or abrupt miraculous irruption in the natural order; if exact determinism is untrue, then events will, at many levels, be enveloped in a probabilistic `fog' in the sense that their precise course is not accessible to science, though it does not violate any scientific law. The subject remains for another occasion.
59Zubiri’s solution to the dilemma of man as evolved and man as created with an immortal soul is to adopt a form of the `weak’ Darwinian evolution which stops just short of the strong form. He believes that, of itself, matter could not evolve to the point where it is reality-conscious, nor can it pass this quality on from generation to generation. That last step requires direct intervention by God each time a human being is created.
60Nogar, op. cit.
61J. Huxley, Religion Without Revelation, New York: 1957, p. 187.
62Henry M. Morris, "Evolution, Thermodynamics, and Entropy," Creation: Acts, Facts, Impacts, San Diego: Creation Life Publishers, 1974, p. 123-129.
63Douglas Dewar, Difficulties of the Evolution Theory, London: Edward Arnold & Co., 1931, p. 93.
64Gish, op. cit., p. 28ff.
65Ibid., p. 29-30.
66Ibid.
67Barrow and Silk, op. cit.
68L. Brillouin, Science and Information Theory, Second Edition, New York: Academic Press, 1962; T. Fowler, "Brillouin and the Concept of Information," Int. J. General Systems, Vol. 9 (1983), p. 143-155.
69Gish, op. cit., p. 30.
70Ibid., p. 31.
71O’Reilly, op. cit., p. 56.
72Ibid., p. 57.
73Ibid., p. 58.
74Ibid.
75T. Fowler, "Computation as a Thermodynamic Process Applied to Biological Systems," Int. J. Biomedical Computing, Vol. 10, No. 6 (1979), p. 477-489; Fowler, "Brillouin and the Concept of Information," op. cit.
76G. E. Parker, Creation: The Facts of Life, San Diego: C.L.P. Publishers, 1980. Parker's book concentrates on the problems of paleontology and natural selection, and deals with the problem of order only indirectly. Also, A. W. Field, The Evolution Hoax Exposed, Rockford, Ill.: Tan Books and Publishers, 1971.
77Vincent Smith, "Evolution and Entropy," The Thomist, Vol. 24 (1961), p. 441-462.
78Benedict M. Ashley, "Causality and Evolution," The Thomist, Vol. 36, No. 2 (April, 1972), p. 199-230.
79Ibid., p. 199.
80Ibid., p. 215.
81Fowler, "Brillouin and the Concept of Information," op. cit.
82S.T. Ia, q.4, a.2.
83S.T. Ia, q.94, a.1.
84S.T. Ia, IIae, q.66, a.1.
85S.T. Ia, IIae, q.112, a.1.
86Summa Contra Gentiles, II, ch. 21.
87S.T. Ia, IIae, q.83, a.1, ad.3.
88Stanley Jaki, The Origin of Science and the Science of its Origin, South Bend: Regnery/Gateway, 1979.
89Jacques Maritain, The Degrees of Knowledge, New York: Charles Scribner’s Sons, 1959. See esp. ch. II.
90Xavier Zubiri, “Science and Reality” and “The Idea of Nature: The New Physics”, Nature, History, God, tr. by Thomas Fowler, Washington: University Press of America, 1981.
91Wolfgang Strobl, La Realidad Cientifica y su Critica Filosofica, Pamplona: University of Navarra Press, 1966.
92Cf. the problem of the (physical) principle of inertia and the (metaphysical) argument for the Prime Mover in Dennis Bonnette's article "A Variation on the First Way of St. Thomas," Faith & Reason, Vol. VIII, No. 1 (Spring, 1982), p. 34-56.
93S.T. I, 115, a. 3, ad. 2.
94 S.T. I, 72, a. 1, ad. 5.
95The present author has discussed this question at length in other articles in this journal; see T. Fowler, "Xavier Zubiri: Science, Nature, Reality," Faith & Reason, Vol. VI, No. 1 (Spring, 1980), p. 7-25; and T. Fowler, "Three Dogmas of Modern Science," Faith & Reason, Vol. VII, No. 3 (Fall, 1981), p. 188-220.
96Phys. 192b8-23, Met. 1071b34-35.
97Met. 103a8-18.
98Phys. 193b7-12.
99 W. D. Ross, Aristotle, New York: Barnes & Noble, 1966, p. 68.
100De Caelo 311a1-6.
101Ross, op. cit., p. 75.
102Historia Animalium, 539a15-25.
103Met. 993b23-25.
104Aquinas, Commentary on the Metaphysics of Aristotle, tr. by J. P. Rowan, vol I, Chicago: Regnery, 1961, p. 121. Italics added.
105Strobl, op. cit., p. 112.
106C. F. von Weizsäcker, Zum Weltbild der Physik, Fourth edition, Stuttgart, 1949, p. 108.
107Ibid.
108 Met. 1032b2-6.
109Met. 1038a18-20.
110Met. 1058a37-b3.
111Xavier Zubiri, Sobre la esencia, Madrid: Sociedad de Estudios y Publicaciones, 1962, p. 80.
112Met. 1051b31-33.
113 On this point, cf. Zubiri, “Science and Reality”, op. cit., p. 74-78.
114Zubiri criticizes Aristotle for confusing the logical and ontological order in his analysis of substance; see Sobre la esencia, op. cit., p. 82-94. Zubiri notes the following:
Aristotle forcefully rejected the Platonic conception, according to which species have a "separate" (χωριστόν) reality; and he repeats over and over that species are separate only according to the νοῦς and the λόγος. Thus, a species has physical reality in the individual, but that which is species in the individual is the unity of the concept as realized in multiple individuals. Now, this is rather questionable. Does the mere identity of concept univocally realized in many individuals suffice for them to constitute a "species"? To be sure, it is necessary; but in no wise is it sufficient. And in any case, it immediately comes to mind that even in this presumed physical characterization of essence, there is an undeniable primacy of conceptive unity over individual physical unity, to the point where this latter remains formally unclarified, and ultimately not even posed as a problem. I.e., there is an undeniable primacy of essence as something defined over essence as a physical moment. And this leads to an inadequate idea of essence, because however important the structure of the definition may be (which is a strictly logical problem), it is something which is completely secondary for the structure of things (which is a metaphysical problem). (p. 88-89)