Schneider, E.D., Kay, J.J., 1994. "Complexity and Thermodynamics: Towards a New Ecology", Futures 24(6), pp. 626-647, August 1994
Towards a New Ecology
Ecology is the science of the interactions of living organisms with each other and with the physical and chemical components of their enveloping environment. Ecology is a misunderstood, underfunded, orphan branch of science. The sciences of the macroscopic, astronomy, and the microscopic, particle physics, are the darlings of the funding agencies as we spend billions of our tax dollars on Hubble telescopes and ever bigger supercolliders. It is interesting to know of an astronomical event thousands of light years ago, or to know that one subatomic particle is comprised of eight or ten smaller particles, but these scientific facts touch only the peripheral aspects of our lives. Ecology, however, is a science on the human scale. We humans are part of our ecosystems, and the environment with which we interact is on a scale that is part of our daily lives. Those of us who study the dynamics and functioning of ecosystems are amazed that a science that seems so important to mankind receives so little money, and that so little attention is paid to understanding processes that may influence the future of mankind and the future of our biosphere.
Part of the problem is that many people confuse the science of ecology with the environmental movement. When one of us tells a new friend, "I am a theoretical ecologist," they often think that we are members of a fringe political movement or that we spend our time lying in front of bulldozers in National Forests. Even the well-known journal The Ecologist is not a journal about ecology but a journal about science, politics and socio-economic issues related to environmental management. The environmentalists' goal should be to encourage management of local through global ecosystems so as to maintain or enhance environmental quality. Ecologists, however, as scientists, should advise society on ecological interactions and the impact of human activities on natural resources. Ecosystem management is about tradeoffs, and the role of the ecologist should be to identify these tradeoffs. Which tradeoff we decide to make is a political decision, one which environmentalists can seek to influence.
Ecosystems are systems of organisms interacting with each other and their environment within some spatial and temporal boundaries. However, ecosystems are not just 'things out there in the environment'; they are comprised of processes that bind organisms together and which influence ecosystem development, structure and function. There is an ecosystem of bacteria in the rumen of a cow. Is a cow an ecosystem? Probably it is, as well as being an organism. A decade ago, James Lovelock shocked the scientific community by suggesting the Gaia hypothesis: that the entire planet earth, with its associated atmosphere, oceans and rocks, makes up one entire ecosystem. The principal atmospheric gases, nitrogen, oxygen, and carbon dioxide, all have direct links to the biosphere. Lovelock's concept has gained much popular interest, and many prominent earth scientists are now studying feedback between the biosphere, the atmosphere and the oceans. Implicit in the Lovelock model is the interconnectivity of, and feedback between, the components of ecosystems.
One might think that ecologists study ecosystems, but most scientists who call themselves ecologists are students of genetics, organisms, populations, and communities. There are physiological ecologists, ant ecologists, evolutionary ecologists, and even urban and psychological ecologists. Few ecologists study whole ecosystems. Many ecologists seem stuck in quasi-reductionist branches of science and claim to infer knowledge of complete, complex ecosystems by studying their parts. Much as the blind men studying an elephant claimed knowledge of the whole animal by feeling its trunk (a snake) or its leg (a tree), one cannot understand the operation of a watch by studying its parts in isolation. We do not wish to downplay the excellent work contributed by ecologists working at the level of the organism, species, population, or community. However, most ecologists do not embed their specialty within a larger ecosystem context. For instance, fisheries ecologists focused their research for years on predator-prey interactions, and only recently have they attempted to integrate parameters such as nutrients, water temperature, and regional oceanography into their biological models. It is a rare ecologist who studies the function and structure of whole ecosystems.
The science of ecology is a relatively new branch of biology and has largely been a descriptive endeavor, focused on organisms, populations, or biotic interactions at the community level. With the pioneering work during the 1950s of Yale University Professor G. Evelyn Hutchinson and his students Eugene and Howard Odum, ecologists started to study ecosystems as wholes of interconnected processes. Eugene Odum synthesized the phenomenological attributes of developing ecosystems, noting regular changes in species composition with ecological succession, as well as changes in system production, biomass cycling, and use of resources. His brother, Howard Odum, developed methods for measuring energy flow through and among the biotic compartments of ecosystems.
More recently, ecologists have refined this work and developed energy flow measures for ecosystems that allow one to calculate the total energy flow through an ecosystem, the cycling of material and energy through ecosystems, the average trophic level of organisms, and measures of the interconnectivity of the populations of organisms in the ecosystem. These measures require that the investigator know or measure all the important species in the ecosystem and the flows of energy among the system components. With these methods, and extensive phenomenological observations of changing ecosystems, we now have some general rules on how ecosystems behave as they evolve with time.
One of the major obstacles that ecology faces as a science is that it does not have an accepted theoretical framework that can be derived from first principles, as do physics and chemistry. Due to this lack of ecological theory, it has proven difficult to predict the future states of perturbed ecosystems beyond simple limited resource models. Because the structure and function of an ecosystem depend on a large number of variables, many of which exert indirect effects on other variables in the system, it is not possible to prove cause-and-effect linkages between changes in an ecosystem. Even the simplest ecosystems have 20 or more interacting compartments, which permit some 2.4 x 10^18 potential direct or indirect interactions. Ecosystems are also highly variable on spatial and temporal scales, and it is difficult to distinguish natural variability from variations caused by human activities. This situation is compounded by nonlinear interactions between components, the emergence of new structures and functions, and the inability to predict catastrophic events (e.g. fire, storm) that dramatically alter ecosystem structure.
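The interaction count quoted above matches 20!, the number of distinct orderings of 20 compartments. That reading of the figure's origin is our own assumption, not a derivation given in the text, but the magnitude is easy to confirm:

```python
from math import factorial

# 20 interacting compartments: the 2.4 x 10^18 figure quoted in the
# text matches 20!, the number of distinct orderings of the
# compartments (our interpretation of where the figure comes from).
n = 20
pathways = factorial(n)
print(f"{pathways:.1e} potential interaction pathways")  # prints "2.4e+18 potential interaction pathways"
```

Even for such a "simple" system, exhaustively testing every interaction pathway is hopeless, which is the point being made.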
Over the past decade we, Schneider and Kay, have collaborated on a research program in search of the underlying principles of ecology. It is obvious that different ecosystems develop similar patterns of structure and function: successional processes observed in a small laboratory microcosm are similar to those of vast forest systems, ocean plankton systems, and prairie grass systems. Such similar phenomenology suggests underlying rules and processes. We have focused our work on linking physics and the systems sciences with biology, and especially on linking the science of ecology with the laws of thermodynamics.
We are not the first researchers to hunt for stronger linkages between biology and physics. Exactly 50 years ago, Erwin Schrödinger addressed potential biology/chemistry/physics research programs in his seminal book What is Life?, in which he attempted to draw together the fundamental processes of biology and connect them with the sciences of physics and chemistry. He noted that life is comprised of two fundamental processes: order from order and order from disorder. He observed that the gene, with its then-undiscovered DNA structure, controlled a process that generated order from order within a species; that is, the progeny inherit the traits of the parent. Schrödinger recognized that this process was controlled by an aperiodic crystal with unusual stability and coding capabilities. Nearly a decade later, in 1953, Watson and Crick mapped this structure. Their work provided biologists with a research framework that has allowed some of their most important findings of the last fifty years.
However, Schrödinger's perhaps equally important and less understood observation was his order from disorder premise. This was an effort to link biology with the fundamental theorems of thermodynamics. He noted that, at first glance, living systems seem to defy the second law of thermodynamics, which insists that, within isolated systems, entropy should be maximized and disorder should reign. Living systems, however, are the antithesis of such disorder. They display marvelous levels of order created from disorder. For instance, plants are highly ordered structures, synthesized from disordered atoms and molecules found in atmospheric gases and soils.
Schrödinger solved this dilemma by turning to nonequilibrium thermodynamics, where he recognized that living systems exist in a world of energy and material fluxes. An organism stays alive in its highly organized state by taking energy from outside itself, from a larger encompassing system, and processing it to produce, within itself, a lower entropy, more organized state. Schrödinger recognized that life is a far from equilibrium system that maintains its local level of organization at the expense of the larger global entropy budget. He proposed that to study living systems from a nonequilibrium perspective would reconcile biological self-organization and thermodynamics. Furthermore he expected that such a study would yield new principles of physics.
Our research has taken on the task proposed by Schrödinger and expanded on his thermodynamic view of life. We explain that the second law of thermodynamics is not an impediment to the understanding of life but rather is necessary for a complete description of living processes. We further expand thermodynamics into the causality of the living process and assert that the second law generates constraints that are a necessary but not sufficient cause for life itself. In short, our reexamination of thermodynamics shows that the second law underlies and determines the direction of many of the processes observed in the development of living systems. This work harmonizes physics and biology at the macro level and shows that biology is not an exception to physics, but rather that we have simply misunderstood the rules of physics.
We have made some progress in this theory, especially applying the laws of thermodynamics to ecology. We deal with ecosystems as evolving complex systems that are held away from thermodynamic decay by imposed physical or chemical gradients. Terrestrial ecosystems grow and develop by degrading the energetic gradient imposed by the sun. Our work seems to explain many of Eugene Odum's phenomenological attributes of developing ecosystems. In collaboration with scientists from NASA we have been measuring the emitted temperatures from terrestrial ecosystems. Later in the paper we will discuss these data which show that poorly developed ecosystems degrade the incoming solar energy less effectively than more mature ecosystems. These and other data support our thermodynamically derived hypothesis of ecosystem development and may provide ecology with some needed theoretical underpinnings. This work may also allow remotely sensed measures of ecological integrity needed for environmental management.
What is commonly understood to be thermodynamics was developed in the nineteenth century by Carnot, Clausius, Boltzmann and Gibbs as a science describing the balance and flow of energy in nature. The common statements of the first and second laws are that energy is conserved and that entropy increases, respectively. Unfortunately, entropy is strictly defined only for equilibrium situations. These statements are thus not sufficient for discussing non-equilibrium situations, the realm of all self-organizing systems, including life.
In the mid-1960s, Hatsopoulos & Keenan and Kestin brilliantly synthesized thermodynamics with a statement that subsumes the 0th, 1st and 2nd laws:
"When an isolated system performs a process after the removal of a series of internal constraints, it will reach a unique state of equilibrium: this state of equilibrium is independent of the order in which the constraints have been removed." This is called the Law of Stable Equilibrium by Hatsopoulos & Keenan and the Unified Principle of Thermodynamics by Kestin. The importance of the statement is that, unlike the earlier formulations, which show only that all real processes are irreversible, it dictates a direction and an end state for all real processes. Moreover, it does not depend on the entropy concept and hence is applicable to equilibrium and non-equilibrium situations alike.
We have proposed an extension to this principle. In simple terms it is that systems will resist being removed from their equilibrium state. It should be noted that what drives systems away from equilibrium are externally applied gradients (e.g. the temperature and pressure differences in classical thermodynamic systems). More formally then:
The thermodynamic principle which governs the behaviour of systems is that, as they are moved away from equilibrium, they will utilize all avenues available to counter the applied gradients. As the applied gradients increase, so does the system's ability to oppose further movement from equilibrium.
Thermodynamic systems exhibiting temperature, pressure, and chemical equilibrium resist movement away from their equilibrium states. When moved away from a local equilibrium state, a system will behave in a way which opposes the applied gradients and moves it back towards its local equilibrium attractor. The stronger the applied gradient, the greater the effect of the equilibrium attractor on the system; the more a system is moved from equilibrium, the more sophisticated its mechanisms for resisting further movement. If dynamic and/or kinetic conditions permit, self-organization processes are to be expected. This behaviour is not sensible from a classical second law perspective, but it is what is expected given the restated second law. No longer is the emergence of coherent self-organizing structures a surprise; rather, it is the expected response of a system as it attempts to resist and dissipate externally applied gradients which would move it away from equilibrium.
Bénard cells (heat-driven fluid convection systems), tornadoes, autocatalytic chemical reactions, and life itself are examples of non-equilibrium self-organizing systems whose development is in accordance with this principle. As the applied gradients increase, new structures can emerge to dissipate the gradients in these systems. In carefully conducted Bénard cell experiments, when the temperature gradient increases past a critical threshold, hexagonal cell structures emerge. This transition from incoherent, molecule-to-molecule heat transfer to spontaneously emergent coherent structure involves more than 10^22 molecules acting together in an organized manner. This seemingly improbable occurrence is the direct result of the applied temperature gradient and the dynamics of the system at hand; it is the system's response to attempts to move it away from equilibrium. In a similar manner, vortices emerge in fluids as pressure differences increase, and more species become part of ecosystems as the available energy increases; all of these have the effect of dissipating more of the energy gradient.
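The critical threshold mentioned above is conventionally expressed through the dimensionless Rayleigh number. As a sketch under stated assumptions (the fluid properties are illustrative values for water near 20°C, not figures from the paper), one can estimate the temperature difference at which coherent cells emerge in a layer of given depth:

```python
# Onset of Benard convection, estimated via the Rayleigh number
# Ra = g * alpha * dT * d^3 / (nu * kappa).
# Fluid properties are illustrative values for water near 20 C
# (assumptions, not data from the paper); Ra_crit ~ 1708 applies
# to a layer bounded by two rigid plates.
G = 9.81          # gravitational acceleration, m/s^2
ALPHA = 2.1e-4    # thermal expansion coefficient, 1/K
NU = 1.0e-6       # kinematic viscosity, m^2/s
KAPPA = 1.4e-7    # thermal diffusivity, m^2/s
RA_CRIT = 1708.0  # critical Rayleigh number, rigid-rigid boundaries

def critical_delta_t(depth_m: float) -> float:
    """Temperature difference at which hexagonal cells emerge."""
    return RA_CRIT * NU * KAPPA / (G * ALPHA * depth_m**3)

for d_mm in (1, 5, 10):
    dt = critical_delta_t(d_mm / 1000.0)
    print(f"{d_mm:2d} mm layer: convection above dT ~ {dt:.2g} K")
```

The cubic dependence on depth is the notable feature: a deeper layer organizes into cells under a far gentler gradient, which is why the transition is so easy to provoke in a shallow pan yet requires careful control in thin-film experiments.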
All of these structures have one thing in common, they increase the system's ability to dissipate the applied gradient (hence the term dissipative structures). Prigogine and his colleagues have shown that dissipative structures self-organize through fluctuations, small instabilities which lead to irreversible bifurcations and new stable system states. Thus the future states of such systems are not deterministic. Dissipative structures are stable over a finite range of conditions and are sensitive to fluxes and flows from outside the system. Convection cells, hurricanes, autocatalytic chemical reactions and living systems are all examples of far-from-equilibrium dissipative structures which exhibit coherent behavior. The notion of dissipative systems as gradient dissipators holds for nonequilibrium physical and chemical systems and describes the processes of emergence and development of complex systems. Not only are the processes of these dissipative systems consistent with the restated second law, under the right circumstances it can be expected that they will emerge wherever gradients exist.
Using a scenario based on the work of Prigogine and of Wicken, it can be argued that the solution to the problem of degrading the solar energy gradient impinging on the early earth is the development of systems (chemical factories) which are joined together in a supersystem. The supersystem degrades the incoming energy by producing and then breaking down molecular structures. The chemical factories share four common behaviors: a self-construction and death cycle, reproduction, evolution, and adaptation.
To be more specific, consider a chemical soup bombarded with solar energy. Wicken's work suggests that the second law predicts the emergence of chemical factories in this soup. The factories would degrade the energy impinging on the soup. Degradation would be accomplished largely by utilizing the available molecules and energy to form new more complex molecules. The formation of new molecules could degrade the impinging available potential energy by transforming it into bond, translational, and vibrational energy, and into heat. Many different types of processes and molecular forms should emerge, as the larger their number, the more thoroughly degraded the incoming solar energy.
As time goes on, these systems (the chemical factories) should become stable. That is they would evolve mechanisms to stabilize their internal chemical processes and to maintain the functioning of the system in the face of environmental changes. The degradation of the incoming solar energy, as required by the second law, would then be assured. This expectation would be justified by the second law alone, but is reinforced by Prigogine's findings regarding the emergence of stable dissipative structures.
The above argument suggests the emergence of primary producers who would use the incoming solar energy to produce complex molecules and stored energy. These primary producers would be expected to degrade as much as possible of the incoming energy into lower quality forms. They would produce only as much stored potential energy (via, for example, photosynthesis) as is required to fuel the processes necessary for the internal stability of the system. The stored potential energy of the primary producers could be further degraded if used by other chemical factories to fuel more production of complex molecules. Chains of such systems, each system feeding on the stored potential energy of another system, would emerge in accordance with Prigogine's order through fluctuations scenario. The characteristic of these chains is that they would degrade as much of the incoming energy as possible per unit production of complex molecules. Such chains will be referred to as energy degrading chains.
If only energy degrading chains existed, they would quickly run out of material to be used as inputs for the production of complex molecules. Thus, if they are to continue functioning, the emergence of consumers who would use the complex molecules and stored energy of the energy degrading chains as inputs to processes which simplify the complex molecules, is necessary. The existence of such matter simplifying chemical factories would guarantee the supply of simple molecules to be used by the primary producers. These consumers would be expected to simplify the molecules as much as possible per unit of energy flow. These matter simplifiers would allow for the reuse of materials by the energy degrading chains.
The restriction placed on the systems to be either energy degrading or matter simplifying is artificial. There is no reason why one system (chemical factory) cannot degrade the potential energy by forming complex molecules from the available molecules, and at the same time break down some of the available molecules into their components. The two cases described, maximizing energy degradation per unit of complex molecules produced, and maximizing molecular simplification per unit of energy consumed, are extremes. Any system could fit somewhere between the two and would be made up of different processes each of which would correspond to one of the two cases. For this reason it is impossible to constrain the individual systems to be either efficient users of energy or material.
There is no reason to expect the emergence of only a few simple chains, made up of either energy degraders or matter simplifiers. Rather the systems would be expected to be interconnected in a complex web. Each individual system would operate somewhere between the two efficiency extremes. This web would offer many different paths of energy and material flow. A set of interconnected chemical factories will be called a "supersystem". Because of the constraints imposed by the principles of matter conservation and the second law of thermodynamics, the supersystem would be expected to emerge in a way which makes it an efficient, if not self-contained, user of material resources, and a very good degrader of incoming solar energy.
The supersystem proposed above on the basis of purely thermodynamic and system-theoretic arguments is manifested as the ecosystems of our biosphere. The individual chemical factories are individual living organisms. The classes of components which make up ecosystems consist of organisms which share the same pre-programming and are the highest level in the system's hierarchy which spontaneously dies. Such a class of organisms is a species. The chains of energy degraders correspond to the grazing chain, and the matter simplifiers to the detrital cycle.
A second hypothesis, which is a consequence of the first, is that ecosystems will evolve and adapt so as to increase the potential for the ecosystem and its component systems to survive. Such behavior assures the continued degradation of incoming energy. This process is subject to the constraint that any evolutionary or adaptive strategy or mechanism which enhances survival is only economical if its net effect is to increase the energy degradation ability of the ecosystem. That is, the thermodynamic cost of the strategy or mechanism (in terms of loss of exergy capture capability) must be offset by the overall gain in the energy degrading ability of the ecosystem. Also, no component system can globally maximize its own survival, because this would come at the expense of other systems. Evolution and adaptation to improve survival potential are optimization processes subject to hierarchical thermodynamic and system constraints.
These hypotheses can be tested by observing the energetics of ecosystems during the successional process, or by determining their behavior as they are stressed or as their boundary conditions change. As ecosystems develop or mature they should increase their total dissipation, and should develop more complex structures with greater diversity and more hierarchical levels that assist in energy degradation. Successful species are those that funnel energy into their own production and reproduction and contribute to autocatalytic processes, thereby increasing the total dissipation of the ecosystem. In short, ecosystems develop in a way which systematically increases their ability to degrade the incoming solar energy. Thus one would expect successional processes in ecosystems to result in systems with the attributes listed in Table 1.
As ecosystem organization increases, grows and develops, we expect the following system changes (the appropriate measures are in italics):
1. More energy capture. Inflow
2. More effective use of energy. Exergy destruction rate
3. More energy flow activity within the system. Total system throughput (TST)
4. More cycling of energy and material:
7. Higher respiration.
8. Higher transpiration in terrestrial systems.
9. Larger ecosystem biomass.
10. More types of organisms. Diversity
Borrowing from the economic input-output analyses of Leontief, ecologists have developed a number of input/output indices that allow analysis of material and energy flows through ecosystems. With these methods it is possible to detail the flow and partitioning of energy in ecosystems. Total system throughput is a measure of the size of the system in terms of all the flows in the system, including straight-through flows and all the cycled flows; it is similar to the gross national product measure in economics. Using input-output matrix algebra it is possible to calculate the proportion of flows involved in cycling, the number of cycles, and the length of the cycles. Trophic levels can be enumerated, as can the average trophic level of all species. Many of the effects in these systems arise from direct flows, but a large number of the interactions in these complex systems are indirect, as one species may affect another through intermediate species.
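A minimal sketch of this kind of analysis, using a hypothetical three-compartment web (the flow values are invented for illustration and are not taken from any data set in the paper), shows how total system throughput and the direct-plus-indirect structure fall out of the flow matrix:

```python
import numpy as np

# Hypothetical three-compartment flow web (producers, consumers,
# detritus). flows[i, j] is the energy flow from compartment i to
# compartment j, in arbitrary units; all values are invented.
flows = np.array([
    [0.0, 30.0, 50.0],   # producers -> consumers, detritus
    [0.0,  0.0, 10.0],   # consumers -> detritus
    [5.0,  2.0,  0.0],   # detritus  -> producers, consumers (recycling)
])
imports = np.array([100.0, 0.0, 0.0])   # e.g. fixed solar energy
exports = np.array([25.0, 22.0, 53.0])  # respiration and export; balances each compartment

# Total system throughput (TST): all flow activity in the system,
# analogous to gross national product.
tst = flows.sum() + imports.sum() + exports.sum()

# Leontief-style structure matrix: A[i, j] is the fraction of
# compartment j's total input supplied directly by compartment i.
total_input = flows.sum(axis=0) + imports
A = flows / total_input
# The Leontief inverse captures direct plus indirect dependencies.
L = np.linalg.inv(np.eye(3) - A)

print("TST =", tst)  # TST = 297.0
print("direct + indirect structure:\n", np.round(L, 2))
```

The useful property is that `L` accounts for paths of every length through the web, which is exactly the "indirect effects through intermediate species" that a direct flow table misses.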
If a change in environmental conditions causes disorganization in an ecosystem, our hypothesis suggests that the energy degradation potential of the ecosystem would decrease; that is, the ecosystem would change in a way opposite to that described in Table 1. We have analyzed a data set of carbon-energy flows collected from two tidal marsh ecosystems adjacent to a large power generating facility on the Crystal River in Florida. The ecosystems in question have identical environmental conditions except that one is exposed to hot water effluent from the power station, which raises the water temperature by a maximum of 6°C. The objective of the analysis was to determine the effects of this change in environmental conditions on otherwise identical ecosystems.
An input/output analysis describes various aspects of the flows through the ecosystems and provides for a comparison between them. In absolute terms, all the flows dropped in the stressed ecosystem. Overall the drop in flows was about 20%; in particular, the imported flows (that is, the resources available for consumption) dropped by 18% and the TST (the total system throughput, the total flow activity in the system) dropped by 21%. The biomass dropped by about 35%. The implication of these numbers is that the stress has caused the ecosystem to shrink in size: in its biomass, in its consumption of resources, and in its ability to degrade and dissipate incoming energy.
If the flows are scaled by the import to the ecosystem from outside, the resulting numbers indicate how well the ecosystem is making use of the resources it does capture. The most substantial change in the total scaled flow rates is that the stressed ecosystem exports 30% more from its detrital subsystem. In other words, it is losing the material it does capture more quickly than the control ecosystem; it is a leaky ecosystem. Analysis of the food web further confirms this. The number of cycles in the stressed ecosystem is only 51% of the number in the control, and these cycles are shorter in length. In the effective grazing chain the number of trophic levels did not change, but the trophic efficiencies changed dramatically, as did the flow to the top trophic levels.
This analysis reveals that the impact of the hot water effluent from the power station has decreased both the size of the "stressed" ecosystem and its consumption of resources, while at the same time lessening its ability to retain and utilize the resources it has captured. In short the impacted ecosystem is smaller, has lower trophic levels, recycles less, and leaks nutrients and energy. All of these are signs of disorganization and a step backward in development as we have defined it in Table 1. This is an example of how our thermodynamic perspective on ecosystems can lead to a quantitative analysis and evaluation of the status of ecosystem organization.
Luvall and Holbo conducted experiments in which they overflew terrestrial ecosystems and measured surface temperatures using NASA's Thermal Infrared Multispectral Scanner (TIMS). Their technique allows assessment of the energy budgets of terrestrial landscapes, integrating attributes of the overflown ecosystems, including vegetation, leaf and canopy morphology, biomass, species composition, and canopy water status. Luvall and his co-workers have documented the energy budgets of a range of ecosystems, including tropical forests, varied mid-latitude ecosystems, and semiarid ecosystems. Their data show one unmistakable trend: when other variables are held constant, the more developed the ecosystem, the colder its surface temperature and the more degraded its reradiated energy.
Table 2 portrays TIMS data from a coniferous forest in western Oregon, North America. Ecosystem surface temperature varies with ecosystem maturity and type. The warmest temperatures were found at a clearcut and over a rock quarry. The coldest site, at 299 K, some 26 K colder than the clearcut, was a 400-year-old mature Douglas fir forest with a three-tiered plant canopy. The 23-year-old naturally regrowing forest had the same temperature as the 25-year-old Douglas fir plantation. Even though the plantation was given a head start with the planting of the late successional fir species, the naturally regrowing system attained a similar energy degrading capacity in a similar time.
Luvall's data allow calculation of the percentage of the net solar radiation (K*) that is degraded into energy in the form of molecular motion (Rn). Rn is the net available energy dissipated through evaporation, sensible heat and storage. The ratio Rn/K* is the percentage of the radiative flux converted to lower-exergy thermal heat; it is a measure of the second-law effectiveness of the ecosystem. The quarry degraded 62% of the net incoming radiation, while the 400-year-old forest degraded 90%. The remaining sites fell between these extremes, with degradation increasing in the more mature or less perturbed ecosystems.
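The Rn/K* figures quoted are simple flux ratios. A sketch (the absolute flux values below are hypothetical round numbers; only the 62% quarry and 90% old-growth percentages come from the text):

```python
# Second-law effectiveness Rn/K* for several surface types.
# K* is net incoming solar radiation, Rn the portion dissipated as
# lower-exergy heat. The K* and Rn flux values are hypothetical
# round numbers chosen so that only the quarry (62%) and 400-year
# forest (90%) ratios match figures given in the text.
sites = {
    #                     K* (W/m^2)  Rn (W/m^2)
    "rock quarry":        (600.0, 372.0),
    "clearcut":           (600.0, 430.0),  # hypothetical intermediate
    "25 yr plantation":   (600.0, 505.0),  # hypothetical intermediate
    "400 yr old forest":  (600.0, 540.0),
}
for name, (k_star, rn) in sites.items():
    print(f"{name:18s} Rn/K* = {rn / k_star:.0%}")
```

The ordering of the ratios, not the absolute fluxes, is what carries the argument: more mature surfaces convert a larger share of the captured radiation into low-exergy heat.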
These data sets show that ecosystems develop structure and function that degrade imposed energy gradients more effectively. Analysis of reradiated energy fluxes collected from the air is a unique and valuable tool for measuring the energy budget and energy transformations in terrestrial ecosystems. If, as we suggest, a more developed ecosystem degrades more energy, then ecosystem surface temperature and Rn/K* are excellent indicators of ecosystem integrity.
Satellite-derived earth radiation data developed for global climate analysis show that this same phenomenon may also be apparent at the global scale. The Climate Analysis Center and the Satellite Research Laboratory of the National Oceanic and Atmospheric Administration (NOAA) produce monthly maps of outgoing longwave radiation (OLR), collected by multiwave spectral scanners aboard polar orbiting satellites. Figure 1 (not available at this time) is a global OLR map for February 1991. For the tropical rainforests of the Amazon, the Congo, and over Indonesia and Java, the OLR is less than 200 W/m2. The deserts emit a net OLR of over 280 W/m2. Interestingly enough, the tropical rain forests with their coupled cloud systems, with the sun directly overhead, present the same radiating temperature to space as Canada in winter. The low tropical rain forest OLR values are due to the cold temperatures of the convective cloud tops, which are generated by the underlying cooler forests.
As we have seen earlier, mature ecosystems can lower surface temperature by approximately 25°C. The low reradiation from the rainforest-cloud systems appears to be a global-scale signal of solar gradient degradation. Most of the energy degradation in terrestrial ecosystems is due to latent heat production via evapotranspiration. Tropical rain forests produce a prodigious amount of water vapor via this process, and convective-induced cooling produces high clouds which tend to reinforce the cooling of the rain forests. The coupled rain forest-cloud system lowers the earth-to-space gradient even more than the forest alone.
Much attention is being given to carbon loss from tropical rainforests due to deforestation and its impact on global CO2 scenarios and global warming. Little attention has been paid to the important role vegetation plays, through evapotranspiration, in regional and continental scale climate. In fact, our preliminary calculations suggest that the contribution to potential global warming from losing the role rain forests play in global and regional climate (controlling water budgets, transpiration, cloud formation, energy reradiation and surface temperatures) may be between one and two orders of magnitude greater than the warming caused by increases in CO2 due to tropical deforestation. More importantly, the loss of the rainforest is the loss of a global-scale climate control system.
However, our research and that of others suggests that the search for simple causal rules of ecosystem behavior is futile. The diversity-stability hypothesis of ecology is a classic example of the kind of simple rule that environmental managers seek. The diversity-stability hypothesis arose from a paper of MacArthur's in which he proposed that the diversity of a food web was a measure of community stability. Hutchinson mistook this paper as proof that species diversity explains community stability. Margalef elaborated a theory of ecosystem development which argued that species diversity was the cornerstone of the emergence of a stable system. This hypothesis was "codified" as dogma by the Brookhaven Symposium #22 of 1968. It rests on the simple notion that "you don't put all your eggs in one basket". In the early 1970s a number of empirical counter-examples to this hypothesis were presented. Goodman wrote a paper which systematically examined the literature and demonstrated clearly that there was no scientific basis for the diversity-stability hypothesis. Nevertheless, it is still widely believed that preserving species diversity is crucial to maintaining the health of ecosystems.
The diversity-stability hypothesis illustrates the problems of ecosystem management nicely. Examination of the terms "diversity" and "stability" quickly leads us into a quagmire of complex issues. First, what is meant by diversity? Is it the number of species? The relative abundance of species? Their richness? Which species are included: big ones that are easy to count? All the micro-organisms in the soil? There are perhaps 10 million bacteria in a gram of soil. Very quickly it turns out that there is no single way to measure diversity. In the end it is an observer-dependent phenomenon, dependent on the species one decides to include.
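The observer dependence is not only about which species are counted, but also about which index is chosen. As a sketch, two standard indices (Shannon and Gini-Simpson; both are textbook measures, not ones the paper itself uses) can rank the same two hypothetical communities in opposite order:

```python
# Two standard diversity indices ranking the same communities differently.
# The two example communities are hypothetical.
import math

def shannon(p):
    """Shannon index H = -sum p_i ln p_i (weights rare species heavily)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def simpson(p):
    """Gini-Simpson index 1 - sum p_i^2 (weights evenness of dominants)."""
    return 1.0 - sum(x * x for x in p)

a = [0.5, 0.5]                # 2 species, perfectly even
b = [0.7, 0.1, 0.1, 0.1]      # 4 species, one dominant

print(shannon(a) < shannon(b))   # True: Shannon calls b more diverse
print(simpson(a) > simpson(b))   # True: Simpson calls a more diverse
```

Species richness (2 versus 4) gives yet a third answer, so "which community is more diverse" depends entirely on the measure the observer selects.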
The notion of stability is even more slippery. Traditionally (in the classical Lyapunov sense), it focuses on some numerical function and whether that function has a constant value which the system tends towards and returns to when disturbed. But what state function should we measure: population numbers, biomass, productivity? The list is endless. Even if we choose a function to represent the ecosystem and its stability, we are now discovering that these functions are not stable. Rather, ecosystems are dynamic and constantly changing. Stability gives way to the notion of a shifting steady-state mosaic. So the diversity-stability hypothesis evaporates because the basic concepts of diversity and stability are just too simple to describe complex ecological phenomena.
And this brings us to the crux of the problem facing ecology as a science. The demand on ecology to provide simple answers is predicated on a vision of science which assumes that the only approach to knowing a system is the classical cause and effect scientific method, a technique which works well with billiard balls, simple chemical reactions, etc. In this Newtonian world view, all activities of a system can be explained by interactions of components, usually in a linear way. Because component interactions are thought sufficient to explain all, science today focuses on establishing which components are responsible for what events. The logical extreme of this form of inquiry, which attempts to explain everything through mechanistic interactions of components, is the elementary particles of physics, the selfish gene of biology and most recently the mechanical Boolean networks of Kauffman. However, for this version of the scientific method to work, an artificial situation of consistent reproducibility must be created. This requires simplification of the situation to the point that it is controllable and predictable. The very nature of this act removes the complexity that leads to the emergence of the new phenomena which make complex systems interesting, the very phenomena ecology seeks to understand.
What is needed to deal with ecology is an "ecosystem approach", an approach based on the notions of complex systems theory, the grandchild of von Bertalanffy's general systems theory. There are a number of important lessons to be learnt from the study of complex systems. First, such systems can only be understood from a hierarchical perspective. Neither a reductionist nor holistic approach is sufficient. One must look at the system as a whole and as something composed of subsystems and their components. One must also look at the system in the context of its being a subsystem of a bigger system which is in turn part of a wider environment. So to study a population in ecology without reference to the individuals that make it up, the community it belongs to, and the environment it lives in, is not sufficient. This is not to say that population ecology is not useful. It is just not sufficient to explain ecological phenomena. Self-organization of complex systems, including ecosystems, can only be understood in the context of what makes them up and the environment in which they must function.
Another property of these systems is that everything is connected (at least weakly) to everything else. But no scientist can look at everything at once. So any analyst must make decisions about what to include and what to leave out of the system to be studied. The scale, extent, and hierarchical units of study must be selected. These decisions, while made in a systematic and consistent way, are necessarily subjective, reflecting the viewpoint of the analyst about which connections are important to the study at hand, and which can be ignored. So, because of their very nature, the notion of a pristine objective scientific observer is not applicable to the study of self-organizing systems.
Complex systems exhibit emergent dynamic behaviours. Catastrophe theory describes one class of surprising dynamics of these systems. It predicts that systems will undergo dramatic, sudden changes in a discontinuous way, similar to the Bénard cell emergence described earlier. The choice of the name, catastrophe theory, is unfortunate as it denotes abnormal, nasty events. What we have come to realize is that such events are normal and necessary for the continued smooth functioning of many systems. For example, our heartbeat is a catastrophic event, as is the emptying of our bladder. They are discontinuous events which occur suddenly and are necessary for our continued survival. Furthermore, at the point where a system undergoes a "catastrophic" change, there may be several possible distinct changes which can occur. Which one will actually happen is not always predictable. The insight from catastrophe theory is that the world is not a place where change always happens in a continuous and deterministic way.
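The simplest such discontinuity is the fold catastrophe. As a minimal numerical sketch (the equation and parameter values are textbook illustrations, not drawn from this paper), consider a system whose state relaxes towards an equilibrium of dx/dt = a + x - x^3: as the control parameter a is slowly varied, the state tracks its equilibrium smoothly until that equilibrium vanishes, at which point the state jumps discontinuously to a distant one.

```python
# Fold catastrophe: sweeping the control parameter a in dx/dt = a + x - x**3
# produces a sudden, discontinuous jump in the tracked equilibrium state.
# Parameter values and step sizes are illustrative.

def sweep(a_values, x0=-1.0, dt=0.01, settle=5000):
    """Relax x towards equilibrium at each parameter value (Euler steps)."""
    x, path = x0, []
    for a in a_values:
        for _ in range(settle):
            x += dt * (a + x - x**3)
        path.append(x)
    return path

a_values = [i / 100.0 for i in range(-100, 101)]   # a swept from -1 to 1
path = sweep(a_values)
jumps = [abs(path[i + 1] - path[i]) for i in range(len(path) - 1)]
print(f"largest step-to-step change in state: {max(jumps):.2f}")
```

The state changes smoothly almost everywhere, yet one step in the sweep produces an order-one jump: a small, continuous change in the driving conditions triggers an abrupt change in the system's behaviour.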
Chaos theory takes this one step further. It notes that any real dynamic system, even one described by a set of deterministic equations, is ultimately not predictable because of the accumulation of individually small interactions between its components. This applies to balls on a billiard table and the planets in the heavens. This means that our ability to forecast and predict will always be limited regardless of how sophisticated our computers are or how much information we have. The insight gained from these two theories is that, in principle, it is not possible, a priori, to make precise, mechanistic, deterministic predictions about the future development of self-organizing systems including ecosystems. Computers cannot substitute for crystal balls, except for very limited classes of problems that occur over short spatial and temporal dimensions.
Another set of insights into self-organizing systems comes from thermodynamics and is the subject of the earlier portions of this paper. As discussed earlier, Prigogine showed that spontaneous coherent behaviour and organization (i.e. tornadoes, vortices in fluids, lasers) can occur and is completely consistent with thermodynamics. The key to understanding such phenomena is to realize that these are open systems with a flow of high quality energy. In these circumstances, coherent behaviour appears in systems almost magically. Prigogine showed that this occurs because the system reaches a catastrophe threshold and flips into a new coherent behavioral state. An example is the formation of convection cells in the Bénard instability.
In examining the energetics of open systems we have taken Prigogine's work one step further. We are interested in open systems with high quality energy pumped into them and their consequent movement away from equilibrium. Systems resist this movement away from equilibrium. If new kinetic and dynamic pathways for dissipation are available, the open system will respond with the spontaneous emergence of organized behavior that uses high quality energy to maintain its structure, and dissipates high quality energy in its movement away from equilibrium. The more high quality energy is pumped into a system, the more organization can emerge to dissipate the energy. Again, the emergence of organized behavior (and even life) in systems is now expected according to modern thermodynamics. This self-organization is characterized by abrupt changes which represent a new set of interactions and activities by components and the whole system. The form of expression this self-organization takes is not predictable, because the very process of self-organization proceeds via catastrophic change (in the catastrophe theory sense) and flips into new regimes.
Another important observation about systems that exhibit self-organization is that they exist in an energetic window where they get enough energy, but not too much. If they do not get sufficient energy of high enough quality (beyond a minimum threshold level), organized structures cannot be supported and self-organization does not occur. If too much energy is supplied, chaos ensues in the system, as the energy overwhelms the dissipative ability of the organized structures and they fall apart. So self-organizing systems exist in a middle ground of enough, but not too much.
Furthermore, these systems do not maximize or minimize their functioning. Rather their functioning represents an optimum, a trade-off among all the forces acting on them. If there is too much development of any one type of structure, the system becomes overextended and brittle. If a structure is not sufficiently developed to take full advantage of the available energy and resources, then some other more optimal (i.e. better adapted) structure will displace it. In sum, these systems represent a fine balancing act. Inevitably then, human management strategies that focus on maximizing or minimizing some aspect of these systems will always fail. Only management strategies which maintain a balance will succeed.
The role of information is a topic on which traditional science and complex systems theory diverge greatly. There is a continuum of complexity of systems, which we can divide along the following lines. First is complication, where a single optimizing function suffices, such as 'least action' in dynamical systems or goal-seeking in cybernetic devices. Then we have complexity, which we have discussed here and which involves a balanced optimization rather than simple maximizing or minimizing. This includes autocatalytic reaction systems, tornadoes, Bénard cells, and life itself. Finally there is emergent complexity, where the information includes symbols, and where conflict between goals is characteristic; this is the realm of socio-economic systems, mainly of humans. Within complex systems there is a distinction between the living systems and the nonliving dissipative systems; for the living systems exhibit a birth/renewal-growth-death cycle, making them more sophisticated than the others. We are all familiar with death and reproduction at the cellular level and with the birth, growth and death of individuals, but it is only recently that Holling has made us aware that this cycle occurs on many temporal and spatial scales. Fire outbreaks or pest outbreaks in forests are examples of agents of renewal.
Living systems must function within the context of the system and environment they are part of. If a living system does not respect the circumstances of the supersystem it is part of, it will be selected against. This process of selection functions at all levels. The supersystem imposes a set of constraints on the behavior of the system be it at the level of a cell, an individual, a population or a community. Living systems which are evolutionarily successful have learned what these constraints are and how to live within them. But this presents a problem. When a new living system is generated after the demise of an earlier one, the self-organization process would be much more efficient if it were constrained to variations which have a high probability of success. From cell to species level, genes play this role. Genes constrain the self-organization processes to those which have a high probability of success. It is not that genes direct or control the process of ecosystem development, rather they constrain it to forms which will respect the realities of the supersystem and environment. They are a record of successful self-organization. Genes are not the mechanism of development, the mechanism is self-organization. Genes bound and constrain the process of self-organization.
At higher hierarchical levels other devices constrain the self-organization process. For example, some species will kill their young under certain conditions, and many tree species need specific micro-climate conditions to trigger self-organization (for example, forest fire triggers the germination of lodgepole pine). At the larger levels of organization, the ability of an ecosystem to regenerate is a function of the species available for the regeneration process. This, of course, is related to biodiversity on the larger landscape. As perplexing and intractable as the scientific problems of species diversity seem today, our management instinct should be toward preservation of biodiversity because we are in effect preserving the library used for regeneration of landscapes. Given that living systems go through a constant cycle of birth/development/regeneration/death on many temporal and spatial scales, preserving information about what works and what doesn't is crucial for the continuance of life. This is the role of the gene and, on a larger scale, biodiversity.
This brings us to the role of information theory in ecology. Popper speaks of weighted conditional probabilities, propensities, that are inherent in a system. Propensities are not fixed properties of isolated things but context-dependent properties of relational processes. Given that what we know about complex systems is the overall direction of the process, their propensities, and the constraints imposed upon them, what predictions can we make? This is the question addressed by the work of E.T. Jaynes and the later developments of Johnson & Shore on the mathematics of information theory. Their mathematical theory is about assigning probabilities to system states given the constraints imposed upon the system. Their tool for inductive inference puts probability and statistics in a very different light, one which is consistent with complex systems theory. Their work suggests a very different methodology for scientific inquiry. However, this tool is only beginning to be applied to complex systems theory and has not been applied to ecology.
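Jaynes's method can be sketched with his classic worked example (the "Brandeis dice" problem, which is our illustration rather than one used in this paper): given only the constraint that a die's long-run mean is 4.5 instead of the fair value 3.5, maximizing entropy yields the least-biased probability assignment consistent with that constraint, of the exponential form p_i proportional to exp(λi). The Lagrange multiplier λ can be found by simple bisection:

```python
# Maximum-entropy inference in the style of Jaynes: assign probabilities
# to the six faces of a die given only that its mean is 4.5.
# The MaxEnt solution is p_i = exp(lam*i)/Z; bisection finds lam.
import math

def maxent_die(target_mean, faces=range(1, 7)):
    def mean_for(lam):
        w = [math.exp(lam * i) for i in faces]
        return sum(i * wi for i, wi in zip(faces, w)) / sum(w)
    lo, hi = -10.0, 10.0              # bracket for the multiplier
    for _ in range(200):              # bisection: mean_for is monotone in lam
        mid = (lo + hi) / 2.0
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)
print([round(x, 3) for x in p])   # probabilities tilt toward the high faces
```

The answer encodes exactly the information in the constraint and nothing more, which is what makes the method attractive for systems, like ecosystems, where only aggregate constraints and propensities are known.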
But this is not to say that ecosystem behaviour is chaotic or random and haphazard. On the contrary, ecosystem behaviour and development is like a large musical piece such as a symphony, which is also dynamic and not predictable and yet has a sense of flow, of connection between what has been played and what is still to be played, the repetition of recognizable themes and a general sense of orderly progression. In pieces such as symphonies or suites we know the stages (allegro, adagio, etc) that the piece will progress through, even though we don't know the details of the piece. The same is true of ecosystems: some behave in a very ordered way, as does a Baroque suite, and others are full of improvisation, as in modern jazz. And yet we know the difference between music and random collections of noise.
Ecosystem self-organization unfolds like a symphony. Our challenge is to understand the rules of composition and the limitations and directions they place on the organization process, as well as what makes for the ecological equivalent of a musical masterpiece which stands up to the test of time. However we should not expect to have a science of ecology which allows us to predict what the next note will be.
We must always remember that left to their own devices, living systems are self-organizing, that is, they will look after themselves. The challenge facing the practice of environmental management is to learn how to work with these self-organizing processes in a way which allows us to meet our species' needs, while still preserving the integrity of ecosystems, that is to say the integrity of the self-organizing processes. Only by acknowledging that the essence of ecosystems is self-organization, and our responsibility for maintaining these self-organizing processes, will we assure our species a sustainable niche in the biosphere.
Of paramount importance, in this respect, is that we must not destroy the information needed for the regeneration process which is continually ongoing. A damaged ecosystem, left to its own devices, has the capability to regenerate, if it has access to the information required for renewal, that is biodiversity, and if the context for the information to be used, that is the bio-physical environment, has not been so altered as to make the information meaningless.
Recognition of all of this requires us to fundamentally change our approach. We must stop managing ecosystems for some fixed state, whether it be an idealistic pristine climax forest or a corn farm. Ecosystems are not static things; they are dynamic entities made up of self-organizing processes. Management goals that involve maintaining some fixed state in an ecosystem, or maximizing some function (biomass, productivity, number of species), or minimizing one (pest outbreaks), will always lead to disaster at some point, no matter how well meaning they are. We must instead recognize that ecosystems represent a balance, an optimum point of operation, and this balance is constantly changing to suit a changing environment. And if this isn't radical enough, we must bear in mind that the demise and regeneration of living systems, from cells to communities, is required by our new understanding of the second law. It is a thermodynamic necessity.
Ecology is a young science which is reaching an exciting stage of integration which will change our view of the world in much the same way that the integration of physics by Kepler, Galileo and Newton did. The very way that we think about science is changing and this change is in response to the challenge posed by ecosystems. Our contribution to this integration has been to better reconcile biology and thermodynamics through the development of Schrödinger's order from disorder theme. We see this as a cornerstone in the emerging science of the study of complex systems, a science which has much promise for understanding self organization and ecosystems. Our real concern is whether the development and acceptance of the new ecology will be in time to deal with the problems of environmental degradation facing the planet.
Thanks to Henry Regier, George Francis, and Laura Westra for their support of James Kay's research through their Donner, NSERC, and SSHRC grants and Marie Lagimodiere for her extensive literature review on "ecosystem" and "complex system thinking".