   This is Chapter 1 of "The Phenomenon of Science" by Valentin F. Turchin
     ____________________________________________________________________________

   Contents:
     * THE BASIC LAW OF EVOLUTION
     * THE CHEMICAL ERA
     * CYBERNETICS
     * DISCRETE AND CONTINUOUS SYSTEMS
     * THE RELIABILITY OF DISCRETE SYSTEMS
     * INFORMATION
     * THE NEURON
     * THE NERVE NET
     * THE SIMPLE REFLEX (IRRITABILITY)
     * THE COMPLEX REFLEX
     ____________________________________________________________________________

                                       CHAPTER ONE.
                             THE INITIAL STAGES OF EVOLUTION

THE BASIC LAW OF EVOLUTION

   IN THE PROCESS of the evolution of life, as far as we know, the total mass of
   living matter has always been and is now increasing and growing more complex in
   its organization. To increase the complexity of the organization of biological
   forms, nature operates by trial and error. Existing forms are reproduced in many
   copies, but these are not identical to the original. Instead they differ from it
   by the presence of small random variations. These copies then serve as the
   material for natural selection. They may act as individual living beings, in
   which case selection leads to the consolidation of useful variations, or as
   elements of more complex forms, in which case selection is also directed to the structure
   of the new form (for example, with the appearance of multicellular organisms). In
   both cases selection is the result of the struggle for existence, in which more
   viable forms supplant less viable ones.

   This mechanism of the development of life, which was discovered by Charles
   Darwin, may be called the basic law of evolution. It is not among our purposes to
   substantiate or discuss this law from the point of view of those laws of nature
   which could be declared more fundamental. We shall take the basic law of
   evolution as given.

THE CHEMICAL ERA

   THE HISTORY OF LIFE before the appearance of the human being can be broken into
   two periods, which we shall call the "chemical" era and the "cybernetic" era.
   The bridge between them is the emergence of animals with distinct nervous
   systems, including sense organs, nerve fibers for transmitting information, and
   nerve centers (nodes) for converting this information. Of course, these two terms
   do not signify that the concepts and methods of cybernetics are inapplicable to
   life in the "chemical" era; it is simply that the animal of the "cybernetic"
   era is the classical object of cybernetics, the one to which its appearance and
   establishment as a scientific discipline are tied.

   We shall review the history and logic of evolution in the precybernetic period
   only in passing, making reference to the viewpoints of present-day
   biologists.[1] Three stages can be identified in this period.

   In the first stage the chemical foundations of life are laid. Macromolecules of
   nucleic acids and proteins form with the property of replication, making copies
   or "prints" where one macromolecule serves as a matrix for synthesizing a
   similar macromolecule from elementary radicals. The basic law of evolution, which
   comes into play at this stage, causes matrices which have greater reproductive
   intensity to gain an advantage over matrices with lesser reproductive intensity,
   and as a result more complex and active macromolecules and systems of
   macromolecules form. Biosynthesis demands free energy. Its primary source is
   solar radiation. The products of the partial decay of life forms that make direct
   use of solar energy (photosynthesis) also contain a certain reserve of free
   energy which may be used by the already available chemistry of the macromolecule.
   Therefore, this reserve is used by special forms for which the products of decay
   serve as a secondary source of free energy. Thus the division of life into the
   plant and animal worlds arises.

   The second stage of evolution is the appearance and development of the motor
   apparatus in animals.

   Plants and animals differ fundamentally in the way they obtain energy. With a
   given level of illumination the intensity of absorption of solar energy depends
   entirely on the amount of plant surface, not on whether it moves or remains
   stationary. Plants were refined by the creation of outlying light catchers--green
   leaves secured to a system of supports and couplings (stems, branches, and the
   like). This design works very well, ensuring a slow shift in the green surfaces
   toward the light which matches the slow change in illumination.

   The situation is entirely different with animals, in particular with the most
   primitive types such as the amoeba. The source of energy--food--fills the
   environment around it. The intake of energy is determined by the speed at which
   food molecules are diffused through the shell that separates the digestive
   apparatus from the external environment. The speed of diffusion depends less on
   the size of the surface of the digestive apparatus than on the movement of this
   surface relative to the environment; therefore it is possible for the animal to
   take in food from different sectors of the environment. Consequently, even
   simple, chaotic movement in the environment or, on the other hand, movement of
   the environment relative to the organism (as is done, for example, by sponges
   which force water through themselves by means of their cilia) is very important
   for the primitive animal and, consequently, appears in the process of evolution.
   Special forms emerge (intracellular formations in one-celled organisms and ones
   containing groups of cells in multicellular organisms) whose basic function is to
   produce movement.

   In the third stage of evolution the movements of animals become directed and the
   incipient forms of sense organs and nervous systems appear in them. This is also
   a natural consequence of the basic law. It is more advantageous for the animal to
   move in the direction where more food is concentrated, and in order for it to do
   so it must have sensors that describe the state of the external environment in
   all directions (sense organs) and information channels for communication between
   these sensors and the motor apparatus (nervous system). At first the nervous
   system is extremely primitive. Sense organs merely distinguish a few situations
   to which the animal must respond differently. The volume of information
   transmitted by the nervous system is slight and there is no special apparatus for
   processing the information. During the process of evolution the sense organs
   become more complex and deliver an increasing amount of information about the
   external environment. At the same time the motor apparatus is refined, which
   makes ever-increasing demands on the carrying capacity of the nervous system.
   Special formations appear--nerve centers which convert information received from
   the sense organs into information controlling the organs of movement. A new era
   begins: the ''cybernetic'' era.

CYBERNETICS

   TO ANALYZE evolution in the cybernetic period and to discover the laws governing
   the organization of living beings in this period (for brevity we will call them
   "cybernetic animals") we must introduce certain fundamental concepts and laws
   from cybernetics.

   The term ''cybernetics'' itself was, of course, introduced by Norbert Wiener, who
   defined it descriptively as the theory of relationships and control in the living
   organism and the machine. As is true in any scientific discipline, a more precise
   definition of cybernetics requires the introduction of its basic concepts.
   Properly speaking, to introduce the basic concepts is the same as defining a
   particular science, for all that remains to be added is that a description of the
   world by means of this system of concepts is, in fact, the particular, concrete
   science.

   Cybernetics is based above all on the concept of the system, a certain material
   object which consists of other objects which are called subsystems of the given
   system. The subsystem of a certain system may, in its turn, be viewed as a system
   consisting of other subsystems. To be precise, therefore, the meaning of the
   concept we have introduced does not lie in the term ''system'' by itself, that
   is, not in ascribing the property of ''being a system'' to a certain object (this
   is quite meaningless, for any object may be considered a system), but rather in
   the connection between the terms ''system'' and "subsystem," which reflects a
   definite relationship among objects.

   The second crucial concept of cybernetics is the concept of the state of a system
   (or subsystem). Just as the concept of the system relies directly on our spatial
   intuition, the concept of state relies directly on our intuition of time and it
   cannot be defined except by referring to experience. When we say that an object
   has changed in some respect we are saying that it has passed into a different
   state. Like the concept of system, the concept of state is a concealed
   relationship: the relationship between two moments in time. If the world were
   immobile the concept of state would not occur, and in those disciplines where the
   world is viewed statically, for example in geometry, there is no concept of
   state.

   Cybernetics studies the organization of systems in space and time, that is, it
   studies how subsystems are connected into a system and how change in the state of
   some subsystems influences the state of other subsystems. The primary emphasis,
   of course, is on organization in time which, when it is purposeful, is called
   control. Causal relations between states of a system and the characteristics of
   its behavior in time which follow from this are often called the dynamics of the
   system, borrowing a term from physics. This term is not applicable to
   cybernetics, because when we speak of the dynamics of a system we are inclined to
   view it as something whole, whereas cybernetics is concerned mainly with
   investigating the mutual influences of subsystems making up the particular
   system. Therefore, we prefer to speak of organization in time, using the term
   dynamic description only when it must be juxtaposed to the static description
   which considers nothing but spatial relationships among subsystems.

   A cybernetic description may have different levels of detail. The same system may
   be described in general outline, in which it is broken down into a few large
   subsystems or "blocks," or in greater detail, in which the structure and
   internal connections of each block are described. But there is always some final
   level beyond which the cybernetic description does not apply. The subsystems of
   this level are viewed as elementary and incapable of being broken down into
   constituent parts. The real physical nature of the elementary subsystems is of no
   interest to the cyberneticist, who is concerned only with how they are
   interconnected. The nature of two physical objects may be radically different,
   but if at some level of cybernetic description they are organized from subsystems
   in the same way (considering the dynamic aspect!), then from the point of view of
   cybernetics they can be considered, at the given level of description, identical.
   Therefore, the same cybernetic considerations can be applied to such different
   objects as a radar circuit, a computer program, or the human nervous system.

DISCRETE AND CONTINUOUS SYSTEMS

   THE STATE OF A SYSTEM is defined through the aggregate of states of all its
   subsystems, which in the last analysis means the elementary subsystems. There are
   two types of elementary subsystems: those with a finite number of possible
   states, also called subsystems with discrete states, and those with an infinite
   number, also called subsystems with continuous states. The wheel of a mechanical
   calculator or taxi meter is an example of a subsystem with discrete states. This
   wheel is normally in one of 10 positions which correspond to the 10 digits
   between 0 and 9. From time to time it turns and passes from one state into
   another. This process of turning does not interest us. The correct functioning of
   the system (of the calculator or meter) depends entirely on how the ''normal''
   positions of the wheels are interconnected, while how the change from one
   position (state) to another takes place is inconsequential. Therefore we can
   consider the calculator as a system whose elementary subsystems can only be in
   discrete states. A modern high-speed digital computer also consists of subsystems
   (trigger circuits) with discrete states. Everything that we know at the present
   time regarding the nervous systems of humans and animals indicates that the
   interaction of subsystems (neurons) with discrete states is decisive in their
   functioning.

   On the other hand, a person riding a bicycle and an analog computer are both
   examples of systems consisting of subsystems with continuous states. In the case
   of the bicycle rider these subsystems are all the parts of the bicycle and human
   body which are moving relative to one another: the wheels, pedals, handlebar,
   legs, arms, and so on. Their states are their positions in space. These positions
   are described by coordinates (numbers) which can assume continuous sets of
   values.

   If a system consists exclusively of subsystems with discrete states then the
   system as a whole must be a system with discrete states. We shall simply call
   such systems ''discrete systems,'' and we shall call systems with continuous sets
   of states ''continuous systems.'' In many respects discrete systems are simpler
   to analyze than continuous ones. Counting the number of possible states of a
   system, which plays an important part in cybernetics, requires only a knowledge
   of elementary arithmetic in the case of discrete systems. Suppose discrete system
   A consists of two subsystems a[1] and a[2]; subsystem a[1] may have n[1] possible
   states, while subsystem a[2] may have n[2]. Assuming that each state of subsystem
   a[1] can combine with each state of subsystem a[2], we find that N, the number of
   possible states of system A, is n[1]n[2]. If system A consists of m subsystems
   a[i], where i = 1, 2, . . ., m, then

                                N = n[1]n[2] . . . n[m]
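
   A minimal sketch of this counting rule, in Python (the four-wheel calculator
   below is only an illustrative instance, not taken from the text):

      from math import prod

      def number_of_states(state_counts):
          # N = n[1]n[2] ... n[m]: every state of one subsystem can combine
          # with every state of every other subsystem.
          return prod(state_counts)

      # Four decimal wheels of a mechanical calculator, 10 states each:
      print(number_of_states([10, 10, 10, 10]))   # 10000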

   From this point on we shall consider only discrete systems. In addition to the
   pragmatic consideration that they are simpler in principle than continuous
   systems, there are two other arguments for such a restriction.

   First, all continuous systems can in principle be viewed as discrete systems with
   an extremely large number of states. In light of the knowledge quantum physics
   has given us, this approach can even be considered theoretically more correct.
   The reason why continuous systems do not simply disappear from cybernetics is the
   existence of a very highly refined apparatus for consideration of such systems:
   mathematical analysis, above all, differential equations.

   Second, the most complex cybernetic systems, both those which have arisen
   naturally and those created by human hands, have invariably proved to be
   discrete. This is seen especially clearly in the example of animals. The
   relatively simple biochemical mechanisms that regulate body temperature, the
   content of various substances in the blood, and similar characteristics are
   continuous, but the nervous system is constructed according to the discrete
   principle.

THE RELIABILITY OF DISCRETE SYSTEMS

   WHY DO DISCRETE SYSTEMS prove to be preferable to continuous ones when it is
   necessary to perform complex functions? Because they have a much higher
   reliability. In a cybernetic device based on the principle of discrete states
   each elementary subsystem may be in only a small number of possible states, and
   therefore the system ordinarily ignores small deviations from the norm of various
   physical parameters of the system, reestablishing one of its permissible states
   in its ''primeval purity.'' In a continuous system, however, small disturbances
   continuously accumulate and if the system is too complex it ceases functioning
   correctly. Of course, in the discrete system too there is always the possibility
   of a breakdown, because small changes in physical parameters do lead to a finite
   probability that the system will switch to an ''incorrect'' state. Nonetheless,
   discrete systems definitely have the advantage. Let us demonstrate this with the
   following simple example.

   Suppose we must transmit a message by means of electric wire over a distance of,
   say, 100 kilometers (62 miles). Suppose also that we are able to set up an
   automatic station for every kilometer of wire and that this station will amplify
   the signal to the power it had at the previous station and, if necessary, convert
   the signal.

   [IMG.FIG1.1.GIF]

   Figure 1.1. Transmission of a signal in continuous and discrete systems (The
   shaded part shows the area of signal ambiguity.)

   We assume that the maximum signal our equipment permits us to send has a
   magnitude of one volt and that the average distortion of the signal during
   transmission from station to station (noise) is equal to 0.1 volt.

   First let us consider the continuous method of data transmission. The content of
   the message will be the amount of voltage applied to the wire at its beginning.
   Owing to noise, the voltage at the other end of the wire--the message
   received--will differ from the initial voltage. How great will this difference
   be? Considering noise in different segments of the line to be independent, we
   find that after the signal passes the 100 stations the root-mean square magnitude
   of noise will be one volt (the mean squares of noise are summed). Thus, average
   noise is equal to the maximum signal, and it is therefore plain that we shall not
   in fact receive any useful information. Only by accident can the value of the
   voltage received coincide with the value of the voltage transmitted. For example,
   if a precision of 0.1 volt satisfies us the probability of such a coincidence is
   approximately 1/10.

   Now let us look at the discrete variant. We shall define two "meaningful" states
   of the initial segment of the wire: when the voltage applied is equal to zero and
   when it is maximal (one volt). At the intermediate stations we install automatic
   devices which transmit zero voltage on if the voltage received is less than 0.5
   volt and transmit a normal one-volt signal if the voltage received is more than
   0.5 volt. In this case, therefore, for one occasion (one signal) information of
   the "yes/no" type is transmitted (in cybernetics this volume of information is
   called one "bit"). The probability of receiving incorrect information depends
   strongly on the law of probability distribution for the magnitude of noise. Noise
   ordinarily follows the so-called normal law. Assuming this law, we can find that
   the probability of error in transmission from one station to the next (which is
   equal to the probability that noise will exceed 0.5 volt) is 0.25 · 10^-6. Thus
   the probability of an error in transmission over the full length of the line is
   0.25 · 10^-4. To transmit the same message as was transmitted in the previous
   case--that is, the value, with a precision of 0.1, of a certain quantity lying
   between 0 and 1--all we have to do is send four "yes/no" signals. The probability
   that there will be an error in at least one of the signals is 10^-4. Thus, with
   the discrete method the total probability of error is 0.01 percent, as against
   90 percent for the continuous method.
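
   These figures can be checked with a short computation -- a minimal sketch
   assuming, as in the example, independent Gaussian noise of 0.1 volt per station
   and a 0.5 volt decision threshold; the tail probability it gives is close to the
   rounded values quoted above:

      from math import erfc, sqrt

      hops, sigma, threshold = 100, 0.1, 0.5

      # Continuous case: independent noise adds in quadrature over 100 stations.
      print(sigma * sqrt(hops))                          # 1.0 volt -- as large as the signal

      # Discrete case: a hop fails when the Gaussian noise exceeds the threshold.
      p_hop = 0.5 * erfc(threshold / (sigma * sqrt(2)))  # ~2.9e-7 per station
      p_line = 1 - (1 - p_hop) ** hops                   # ~2.9e-5 over the whole line
      p_message = 1 - (1 - p_line) ** 4                  # four yes/no signals: ~1.1e-4
      print(p_hop, p_line, p_message)                    # i.e. about 0.01 percent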

INFORMATION

   WHEN WE BEGAN describing a concrete cybernetic system it was impossible not to
   use the term information--a word familiar and understandable in its informal
   conversational meaning. The cybernetic concept of information, however, has an
   exact quantitative meaning.

   Let us imagine two subsystems, A and B.

   [IMG.FIG1.2.GIF]

   The two subsystems are interconnected in such a way that a change in the state of
   A leads to a change in the state of B. This can also be expressed as follows: A
   influences B. Let us consider the state of B at a certain moment in time t[1] and
   at a later moment t[2]. We shall signify the first state as S[1] and the second
   as S[2]. State S[2] depends on state S[1]. The relationship of S[2] to S[1] is
   probabilistic, however, not unique. This is because we are not considering an
   idealized theoretical system governed by a deterministic law of movement but
   rather a real system whose states S[i] are the results of experimental data. With
   such an approach we may also speak of the law of movement, understanding it in
   the probabilistic sense--that is, as the conditional probability of state S[2] at
   moment t[2] on the condition that the system was in state S[1] at moment t[1].
   Now let us momentarily ignore the law of movement. We shall use N to designate
   the total number of possible states of subsystem B and imagine that conditions
   are such that at any moment in time system B can assume any of N states with
   equal probability, regardless of its state at the preceding moment. Let us
   attempt to give a quantitative expression to the degree (or strength) of the
   cause-effect influence of system A on such an inertialess and ''lawless''
   subsystem B.

   Suppose B acted upon by A switches to a certain completely determinate state. It
   is clear that the "strength of influence" which is required from A for this
   depends on N, and will be larger as N is larger. For example, if N = 2 then B,
   even if it is completely unrelated to A, can switch, when acted upon by random
   factors, with a probability of 0.5 to the very state A "recommends." But if N =
   10^9, when we have noticed such a coincidence we shall hardly doubt the influence
   of A on B. Therefore, some monotonic increasing function of N should serve as the
   measure of the "strength of influence" of A on B. What this essentially
   means is that it serves as a measure of the intensity of the cause-effect
   relationship between two events, the state of A in the time interval from t[1] to
   t[2] and the state of B at t[2]. In cybernetics this measure is called the
   quantity of information transmitted from A to B between moments in time t[1] and
   t[2], and a logarithm serves as the monotonic increasing function. So, in our
   example, the quantity of information I passed from A to B is equal to log N.

   Selection of the logarithmic function is determined by its property according to
   which

                           log N[1]N[2] = log N[1] + log N[2]

   Suppose system A influences system B which consists of two independent subsystems
   B[1] and B[2], with numbers of possible states N[1] and N[2] respectively.

   [IMG.FIG1.3.GIF]

   Then the number of states of system B is N[1]N[2] and the quantity of information
   I that must be transmitted to system B in order for it to assume one definite
   state is, owing to the above-indicated property of the logarithm, the sum

                 I = log N[1]N[2] = log N[1] + log N[2] = I[1] + I[2]

   where I[1] and I[2] are the quantities of information required by subsystems
   B[1] and B[2]. Thanks to this property the information assumes definite
   characteristics of a substance; it spreads over the independent subsystems like a
   fluid filling a number of vessels. We are speaking of the joining and separation
   of information flows, information capacity, and information processing and
   storage.
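
   A few lines of Python illustrate how the quantity of information adds across
   independent subsystems (a minimal sketch, taking the logarithm to base 2 so that
   the unit is the bit; the state counts 8 and 32 are arbitrary):

      from math import log2

      def information(n_states):
          # I = log N: the information needed to fix one definite state out of N.
          return log2(n_states)

      n1, n2 = 8, 32                     # independent subsystems B[1] and B[2]
      print(information(n1))             # 3.0 bits
      print(information(n2))             # 5.0 bits
      print(information(n1 * n2))        # 8.0 bits = 3 + 5, i.e. I = I[1] + I[2]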

   The question of information storage is related to the question of the law of
   movement. Above we mentally set aside the law of movement in order to define the
   concept of information transmission. If we now consider the law of movement from
   this new point of view, it can be reduced to the transmission of information from
   system B at moment t[1] to the same system B at moment t[2]. If the state of the
   system does not change with the passage of time, this is information storage. If
   state S[2] is uniquely determined by S[1] at a preceding moment in time the
   system is called fully deterministic. If S[1] is uniquely determined by S[2] the
   system is called reversible; for a reversible system it is possible in principle
   to compute all preceding states on the basis of a given state because information
   loss does not occur. If the system is not reversible information is lost. The law
   of movement is essentially something which regulates the flow of information in
   time from the system and back to itself.
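
   These definitions can be made concrete with a toy system of four states (a
   minimal sketch; the two laws of movement below are arbitrary examples):

      # Two fully deterministic laws of movement on the states {0, 1, 2, 3}.
      reversible   = {0: 1, 1: 2, 2: 3, 3: 0}   # a permutation: S[1] can always be
                                                # recovered from S[2]; no information is lost
      irreversible = {0: 0, 1: 0, 2: 3, 3: 3}   # many-to-one: seeing S[2] = 0 we cannot
                                                # tell whether S[1] was 0 or 1

      def step(law, state):
          # S[2] is a single-valued function of S[1].
          return law[state]

      print(step(reversible, 2), step(irreversible, 1))   # 3 0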

   Figure 1.4 shows the chart of information transmission from system A to system C
   through system B.

   [IMG.FIG1.4.GIF]

   B is called the communication channel. The state of B can be influenced not only
   by the state of system A, but also by a certain uncontrolled factor X, which is
   called noise. The final state of system C in this case depends not only on the
   state of A, but also on factor X (information distortion). One more important
   diagram of information exchange is shown in figure 1.5.

   [IMG.FIG1.5.GIF]

   This is the so-called feedback diagram. The state of system A at t[1] influences
   the state of B at t[2], then the latter influences the state of A at t[3]. The
   circle of information movement is completed.

   With this we conclude for now our familiarization with the general concepts of
   cybernetics and turn to the evolution of life on earth.

THE NEURON

   THE EXTERNAL APPEARANCE of a nerve cell (neuron) is shown schematically in figure
   1.6.

   [IMG.FIG1.6.GIF]

   Figure 1.6. Diagram of the structure of a neuron.

   A neuron consists of a fairly large (up to 0.1 mm) cell body from which several
   processes called dendrites spread, giving rise to finer and finer processes like
   the branching of a tree. In addition to the dendrites one other process branches
   out from the body of the nerve cell. This is the axon, which resembles a long,
   thin wire. Axons can be very long, up to a meter, and they end in treelike
   branching systems as do the dendrites. At the ends of the branches coming from
   the axon one can see small plates or bulblets. The bulblets of one neuron
   approach close to different segments of the body or dendrites of another neuron,
   almost touching them.

   These contacts are called synapses and it is through them that neurons interact
   with one another. The number of bulblets approaching the dendrites of a single
   neuron may run into the dozens and even hundreds. In this way the neurons are
   closely interconnected and form a nerve net.

   When one considers certain physicochemical properties (above all the propagation
   of electrical potential over the surface of the cell), one discovers that the neuron
   can be in one of two states--the state of dormancy or the state of stimulation.
   From time to time, influenced by other neurons or outside factors, the neuron
   switches from one state to the other. This process takes a certain time, of
   course, so that an investigator who is studying the dynamics of the electrical
   state of a neuron, for example, considers it a system with continuous states. But
   the information we now have indicates that what is essential for the functioning
   of the nervous system as a whole is not the nature of switching processes but the
   very fact that the particular neurons are in one of these two states. Therefore,
   we may consider that the nerve net is a discrete system which consists of
   elementary subsystems (the neurons) with two states.

   When the neuron is stimulated, a wave of electrical potential runs along the axon
   and reaches the bulblets in its branched tips. From the bulblets the stimulation
   is passed across the synapses to the corresponding sectors of the cell surface of
   other neurons. The behavior of a neuron depends on the state of its synapses. The
   simplest model of the functioning of the nerve net begins with the assumption
   that the state of the neuron at each moment in time is a single-valued function
   of the state of its synapses. It has been established experimentally that the
   stimulation of some synapses promotes stimulation of the cell, whereas the
   stimulation of other synapses prevents stimulation of the cell. Finally, certain
   synapses are completely unable to conduct stimulation from the bulblets and
   therefore do not influence the state of the neuron. It has also been established
   that the conductivity of a synapse increases after the first passage of a
   stimulus through it. Essentially a closing of the contact occurs. This explains
   how the system of communication among neurons, and consequently the nature of
   the nerve net's functioning, can change without a change in the relative
   positions of the neurons.

   The idea of the neuron as an instantaneous processor of information received from
   the synapses is, of course, very simplified. Like any cell the neuron is a
   complex machine whose functioning has not yet been well understood. This machine
   has a large internal memory, and therefore its reactions to external stimuli may
   show great variety. To understand the general rules of the working of the nervous
   system, however, we can abstract from these complexities (and really, we have no
   other way to go!) and begin with the simple model outlined above.
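
   A minimal sketch of this simple model (the threshold rule and the particular
   synapse weights are illustrative choices in the spirit of a McCulloch-Pitts
   unit, not a claim about real neurons):

      def neuron_state(synapse_inputs, synapse_types, threshold=1):
          # synapse_inputs: 1 if stimulation arrives at the synapse, 0 if not.
          # synapse_types: +1 excitatory, -1 inhibitory, 0 non-conducting.
          total = sum(s * t for s, t in zip(synapse_inputs, synapse_types))
          return 1 if total >= threshold else 0   # 1 = stimulated, 0 = dormant

      # Two excitatory synapses fire while the inhibitory one is silent: the cell fires.
      print(neuron_state([1, 1, 0, 1], [+1, +1, -1, 0]))   # 1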

THE NERVE NET

   A GENERALIZED DIAGRAM of the nervous system of the "cybernetic animal" in its
   interaction with the environment is shown in figure 1.7.

   [IMG.FIG1.7.GIF]

   Figure 1.7. Nervous system of the "cybernetic animal"

   Those sensory nerve cells which are stimulated by the action of outside factors
   are called receptors (that is, receivers) because they are the first to receive
   information about the state of the environment. This information enters the nerve
   net and is converted by it. As a result certain nerve cells called effectors are
   stimulated. Branches of the effector cells penetrate those tissues of the
   organism which the nervous system affects directly. Stimulation of the effector
   causes a contraction of the corresponding muscle or the stimulation of the
   activity of the appropriate gland. We shall call the state of all receptors at a
   certain moment in time the situation at that moment. (It would be more
   precise--if more cumbersome--to say the ''result of the effect of the situation
   on the sense organs.'') We will call the state of all the effectors the
   "action." Therefore, the role of the nerve net is to convert a situation into
   an action.

   It is convenient to take the term ''environment'' from figure 1.7 to mean not
   just the objects which surround the animal, but also its bone and muscle system
   and generally everything that is not part of the nervous system. This makes it
   unnecessary to give separate representations in the diagram to the animal body
   and what is not the body, especially because this distinction is not important in
   principle for the activity of the nervous system. The only thing that is
   important is that stimulation of the effectors leads to certain changes in the
   "environment." With this general approach to the problem as the basis of our
   consideration, we need only classify these changes as ''useful'' or ''harmful''
   for the animal without going into further detail.

   The objective of the nervous system is to promote the survival and reproduction
   of the animal. The nervous system works well when stimulation of the effectors
   leads to changes in the state of the environment that help the animal survive or
   reproduce, and it works badly when it leads to the reverse. With its increasing
   refinement in the process of evolution, the nervous system has performed this
   task increasingly well. How does it succeed in this? What laws does this process
   of refinement follow?

   We will try to answer these questions by identifying in the evolution of the
   animal nervous system several stages that are clearly distinct from a cybernetic
   point of view and by showing that the transition from each preceding stage to
   each subsequent stage follows inevitably from the basic law of evolution. Because
   the evolution of living beings in the cybernetic era primarily concerns the
   evolution of their nervous systems, a periodization of the development of the
   nervous system yields a periodization of the development of life as a whole.

THE SIMPLE REFLEX (IRRITABILITY)

   THE SIMPLEST VARIANT of the nerve net is when there is no net at all. In this
   case the receptors are directly connected to the effectors and stimulation from
   one or several receptors is transmitted to one or several effectors. We shall
   call such a direct connection between stimulation of a receptor and an effector
   the simple reflex.

   This stage, the third in our all-inclusive enumeration of the stages of
   evolution, is the bridge between the chemical and cybernetic eras. The
   Coelenterata are animals fixed at the level of the simple reflex. As an example
   let us take the hydra, which is studied in school as a typical representative of
   the Coelenterata. The body of a hydra has the shape of an elongated sac. Its
   interior, the coelenteron, is connected to the environment through a mouth, which
   is surrounded by several tentacles. The walls of the sac consist of two layers of
   cells: the inner layer (entoderm) and the outer layer (ectoderm). Both the
   ectoderm and the entoderm have many muscle cells which contain small fibers that
   are able to contract, thus setting the body of the hydra in motion. In addition,
   there are nerve cells in the ectoderm; the cells located closest to the surface
   are receptors and the cells which are set deeper, among the muscles, are
   effectors. If a hydra is pricked with a needle it squeezes itself into a tiny
   ball. This is a simple reflex caused by transmission of the stimulation from the
   receptors to the effectors.

   [IMG.FIG1.8.GIF]

   Figure 1.8. The structure of the hydra.

   But the hydra is also capable of much more complex behavior. After it has
   captured prey, the hydra uses its tentacles to draw the prey to its mouth and
   then swallows the prey. This behavior can also be explained by the aggregate
   action of simple reflexes connecting effectors and receptors locally, within
   small segments of the body. For example, the following model of a tentacle
   explains its ability to wrap itself around captured objects.

   [IMG.FIG1.9.GIF]

   Figure 1.9. Model of a tentacle

   Let us picture a certain number of links connected by hinges (for simplicity we
   shall consider a two-dimensional picture). Points A and B, A' and B', B and C,
   and B' and C', etc. are interconnected by strands which can contract (muscles).
   All these points are sensitive and become stimulated when they touch an object
   (receptors). The stimulation of each point causes a contraction of the two
   strands connected to it (reflex).
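
   Under the assumptions of this figure the local reflex can be written out
   directly (a toy sketch; the numbering of the points and strands and the two
   named sides are only a convenience):

      points_per_side = 5                 # points A, B, C, D, E and A', B', C', D', E'
      strands = {"unprimed": [False] * (points_per_side - 1),   # strands A-B, B-C, ...
                 "primed":   [False] * (points_per_side - 1)}   # strands A'-B', B'-C', ...

      def touch(side, i):
          # Simple reflex: stimulation of point i contracts the two strands meeting at it.
          for strand in (i - 1, i):
              if 0 <= strand < points_per_side - 1:
                  strands[side][strand] = True

      touch("primed", 2)                  # an object brushes the third point of the primed side
      print(strands["primed"])            # [False, True, True, False] -- a local bend
      print(strands["unprimed"])          # [False, False, False, False]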

THE COMPLEX REFLEX

   THE SIMPLE REFLEX relationship between the stimulated cell and the muscle cell
   arises naturally, by the trial and error method, in the process of evolution. If
   the correlation between stimulation of one cell and contraction of another proves
   useful for the animal, then this correlation becomes established and reinforced.
   Where interconnected cells are mechanically copied in the process of growth and
   reproduction, nature receives a system of parallel-acting simple reflexes
   resembling the tentacle of the hydra. But when nature has available a large
   number of receptors and effectors which are interconnected by pairs or locally,
   there is a "temptation" to make the system of connections more complex by
   introducing intermediate neurons. This is advantageous because where there is a
   system of connections among all neurons, forms of behavior that are not possible
   where all connections are limited to pairs or localities now become so. This
   point can be demonstrated by a simple calculation of all the possible methods of
   converting a situation into an action with each method of interconnection. For
   example, assume that we have n receptors and effectors connected by pairs. In
   each pair the connection may be positive (stimulation causes stimulation and
   dormancy evokes dormancy) or negative (stimulation evokes dormancy and dormancy
   causes stimulation). In all, therefore, 2^n variants are possible, which means
   2^n variants of behavior. But if we assume that the system of connections can be
   of any kind, which is to say that the state of each effector (stimulation or
   dormancy) can depend in any fashion on the state of all the receptors, then a
   calculation of all possible variants of behavior yields the number (2^(2^n))^n,
   which is immeasurably larger than 2^n.
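
   The two counts can be compared directly for a small n (a quick numerical check;
   the case n = 3 is arbitrary):

      from math import log10

      n = 3
      pairwise  = 2 ** n                  # pairwise connections: 8 variants of behavior
      arbitrary = (2 ** (2 ** n)) ** n    # each effector may realize any Boolean function
                                          # of the n receptor states: 16,777,216 variants
      print(pairwise, arbitrary)

      # For n = 100 the second number cannot be printed, but its size can be gauged:
      n = 100
      print(n * 2 ** n * log10(2))        # ~3.8e31 -- the number of its decimal digits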

   Exactly the same calculation leads to the conclusion that joining into a single
   system any subsystems which serve independent groups of receptors and effectors
   always leads to an enormous increase in the number of possible variants of
   behavior. Throughout the entire course of the history of life, therefore, the
   evolution of the nervous system has progressed under the banner of increasing
   centralization.

   But "centralization" can mean different things. If all neurons are connected in
   one senselessly confused clump, then the system-- despite its extremely
   ''centralized'' nature--will hardly have a chance to survive in the struggle for
   existence. Centralization poses the following problem: how to select from all the
   conceivable ways of joining many receptors with many effectors (by means of
   intermediate neurons if necessary) that way which will correlate a correct action
   (that is, one useful for survival and reproduction) to each situation? After all,
   a large majority of the ways of interconnection do not have this characteristic.

   We know that nature takes every new step toward greater complexity in living
   structures by the trial and error method. Let us see what direct application of
   the trial and error method to our problem yields. As an example we shall consider
   a small system consisting of 100 receptors and 100 effectors. We shall assume
   that we have available as many neurons as needed to create an intermediate nerve
   net and that we are able to determine easily whether the particular method of
   connecting neurons produces a correct reaction to each situation. We shall go
   through all conceivable ways of connection until we find the one we need. Where n
   = 100 the number of functionally different nerve nets among n receptors and n
   effectors is

                                (2^(2^n))^n ≈ 10^(10^32)

   This is an inconceivably large number. We cannot sort through such a number of
   variants and neither can Mother Nature. If every atom in the entire visible
   universe were engaged in examining variants and sorting them at a speed of one
   billion items a second, even after billions of billions of years (and our earth
   has not existed for more than 10 billion years) not even one billionth of the
   total number of variants would have been examined.

   But somehow an effectively functioning nerve net does form! And higher animals
   have not hundreds or thousands but millions of receptors and effectors. The
   answer to the riddle is concealed in the hierarchical structure of the nervous
   system. Here again we must make an excursion into the area of general cybernetic
   concepts. We shall call the fourth stage of evolution the stage of the complex
   reflex, but we shall not be able to define this concept until we have
   familiarized ourselves with certain facts about hierarchically organized nerve
   nets.
   _________________________________________________________________________________

   [1] I am generally following the report by S. E. Schnoll entitled "The
   Essence of Life. Invariance in the General Direction of Biological Evolution,"
   in Materialy seminara "Dialektika i sovremennoe estestvoznanie" (Materials of the
   "Dialectics and Modern Natural Science" Seminar), Dubna, 1967.
     ____________________________________________________________________________


