This is chapter 2 of [1]"The Phenomenon of Science" by [2]Valentin F. Turchin (http://pespmc1.vub.ac.be/POS/Turchap2.html)
____________________________________________________________________________
Contents:
* [3]THE CONCEPT OF THE CONCEPT
* [4]DISCRIMINATORS AND CLASSIFIERS
* [5]HIERARCHIES OF CONCEPTS
* [6]HOW THE HIERARCHY EMERGES
* [7]SOME COMMENTS ON REAL HIERARCHIES
* [8]THE WORLD THROUGH THE EYES OF A FROG
* [9]FRAGMENTS OF A SYSTEM OF CONCEPTS
* [10]THE GOAL AND REGULATION
* [11]HOW REGULATION EMERGES
* [12]REPRESENTATIONS
* [13]MEMORY
* [14]THE HIERARCHY OF GOALS AND PLANS
* [15]STRUCTURAL AND FUNCTIONAL DIAGRAMS
* [16]THE TRANSITION TO PHENOMENOLOGICAL DESCRIPTIONS
* [17]DEFINITION OF THE COMPLEX REFLEX
____________________________________________________________________________
CHAPTER TWO. HIERARCHICAL STRUCTURES
THE CONCEPT OF THE CONCEPT
LET US LOOK at a nerve net which has many receptors at the input but just one
effector at the output. Thus, the nerve net divides the set of all situations
into two subsets: situations that cause stimulation of the effector and
situations that leave it dormant. The task being performed by the nerve net in
this case is called recognition (discrimination), recognizing that the situation
belongs to one of the two sets. In the struggle for existence the animal is
constantly solving recognition problems, for example, distinguishing a dangerous
situation from one that is not, or distinguishing edible objects from inedible
ones. These are only the clearest examples. A detailed analysis of animal
behavior leads to the conclusion that the performance of any complex action
requires that the animal resolve a large number of "small'' recognition problems
continuously.
In cybernetics a set of situations is called a concept.[18][1] To make clear how
the cybernetic understanding of the word ''concept'' is related to its ordinary
meaning let us assume that the receptors of the nerve net under consideration are
the light-sensitive nerve endings of the retina of the eye or, speaking in
general, some light-sensitive points on a screen which feed information to the
nerve net. The receptor is stimulated when the corresponding sector of the screen
is illuminated (more precisely, when its illumination is greater than a certain
threshold magnitude) and remains dormant if the sector is not illuminated. If we
imagine a light spot in place of each stimulated receptor and a dark spot in
place of each unstimulated one, we shall obtain a picture that differs from the
image striking the screen only by its discrete nature (the fact that it is broken
into separate points) and by the absence of halftones. We shall consider that
there are a large number of points (receptors) on the screen and that the images
which can appear on the screen (''pictures'') have maximum contrasts--that is,
they consist entirely of black and white. Then each situation corresponds to a
definite picture.
According to traditional Aristotelian logic, when we think or talk about a
definite picture (for example the one in the upper left corner of figure 2.1) we
are dealing with a particular concept. In addition to particular concepts there
are general or abstract concepts. For example, we can think about the spot in
general--not as a particular, concrete spot (for example, one of those
represented in the top row in figure 2.1) but about the spot as such. In the same
way we can have an abstract concept of a straight line, a contour, a rectangle, a
square, and so on.[19][2]
[IMG.FIG2.1.GIF]
Figure 2.1. Pictures representing various concepts.
But what exactly does ''possess an abstract concept'' mean? How can we test
whether someone possesses a given abstract concept--for example the concept of
''spot''? There is plainly just one way: to offer the person being tested a
series of pictures and ask him in each case whether or not it is a spot. If he
correctly identifies each and every spot (and keep in mind that this is from the
point of view of the test-maker) this means that he possesses the concept of
spot. In other words, we must test his ability to recognize the affiliation of
any picture offered with the set of pictures which we describe by the word
''spot.'' Thus the abstract concept in the ordinary sense of the word (in any
case, when we are talking about images perceived by the sense organs) coincides
with the cybernetic concept we introduced--namely, that the concept is a set of
situations. Endeavoring to make the term more concrete, we therefore call the
task of recognition the task of pattern recognition, if we have in mind
''generalized patterns" or the task of recognizing concepts, if we have in mind
the recognition of particular instances of concepts.
In traditional logic the concrete concept of the ''given picture'' corresponds to
a set consisting of one situation (picture). Relationships between sets have
their direct analogs in relationships between concepts. If capital letters are
used to signify concepts and small ones are used for the respective sets, the
complement of set a, that is, the set of all conceivable situations not included
in a, corresponds to the concept of "not A.'' The intersection of sets a and b,
that is, the set of situations which belong to both a and b, corresponds to the
concept of ''A and B simultaneously". For example, if A is the concept of
''rectangle'' and B is the concept of ''rhombus,'' then ''A and B
simultaneously'' is the concept of ''square". The union of sets a and b, that is,
the set of situations which belong to at least one of sets a and b, corresponds
to the concept ''either A or B, or both A and B.'' If set a includes set b, that is, each
element of b is included in a but the contrary is not true, then the concept B is
a particular case of the concept A. In this case it is said that the concept A is
more general (abstract) than the concept B, and the concept B is more concrete
than A. For example, the square is a particular case of the rectangle. Finally,
if sets a and b coincide then the concepts A and B are actually identical and
distinguished, possibly, by nothing but the external form of their description,
the method of recognition. Having adopted a cybernetic point of view, which is to
say having equated the concept with a set of situations, we should consider the
correspondences enumerated above not as definitions of new terms but simply as an
indication that there are several pairs of synonyms in our language.
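The correspondence between concepts and sets described above can be sketched in a few lines of code. This is a toy illustration, not from the text; the situation names are invented for the example.

```python
# Concepts modeled as Python sets of situations, so relations between
# concepts become ordinary set operations.

universe = {"square", "oblong", "rhombus", "circle"}  # all conceivable situations

rectangle = {"square", "oblong"}    # concept A: rectangles
rhombus = {"square", "rhombus"}     # concept B: rhombi

not_rectangle = universe - rectangle   # complement   -> concept "not A"
square = rectangle & rhombus           # intersection -> "A and B simultaneously"
either = rectangle | rhombus           # union        -> "either A or B, or both"
particular_case = square <= rectangle  # subset: "square" is a case of "rectangle"
```

Here the fact that the square is a particular case of the rectangle falls out as an ordinary subset test.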
DISCRIMINATORS AND CLASSIFIERS
WE SHALL CALL a nerve net that performs the task of recognition a discriminator
(recognizer), and the state of the effector at its output will simply be called
the state of the discriminator. Moving on from the concept of discriminator, we
shall introduce the somewhat more general concept of classifier. The
discriminator separates the set of all conceivable situations into two
nonintersecting subsets: A and not-A. It is possible to imagine the division of
a complete set of situations into an arbitrary number n of nonintersecting
subsets. Such subsets are ordinarily called classes. Now let us picture a certain
subsystem C which has n possible states and is connected by a nerve net
containing receptors in such a way that when a situation belongs to class i
(concept i) the subsystem C goes into state i. We shall call such a subsystem and
its nerve net a classifier for a set of n concepts (classes), and when speaking
of the state of a classifier it will be understood that we mean the state of
subsystem C (output subsystem). The discriminator is, obviously, a classifier
with number of states n = 2.
In a system such as the nervous system, which is organized on the binary
principle, the subsystem C with n states will, of course, consist of a certain
number of elementary subsystems with two states that can be considered the output
subsystems (effectors) of the discriminators. The state of the classifier will,
therefore, be described by indicating the states of a number of discriminators.
These discriminators, however, can be closely interconnected by both the
structure of the net and the function performed in the nervous system; in this
case they should be considered in the aggregate as one classifier.
If no restrictions are placed on the number of states n the concept of the
classifier really loses its meaning. In fact, every nerve net correlates one
definite output state to each input state, and therefore a set of input states
corresponds to each output state and these sets do not intersect. Thus, any
cybernetic device with an input and an output can be formally viewed as a
classifier. To give this concept a narrower meaning we shall consider that the
number of output states of a classifier is many fewer than the number of input
states so that the classifier truly ''classifies'' the input states (situations)
according to a relatively small number of large classes.
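The relation between discriminator and classifier can be put as a minimal sketch (the function names and the threshold rules are illustrative, not from the text):

```python
# A classifier maps each input situation to one of n classes;
# a discriminator is the special case n = 2.

def make_classifier(classify, n):
    """Wrap a classification function, checking its output stays in 0..n-1."""
    def classifier(situation):
        state = classify(situation)
        assert 0 <= state < n
        return state
    return classifier

# A discriminator (n = 2): are more than two receptors stimulated?
discriminator = make_classifier(lambda receptors: int(sum(receptors) > 2), 2)

# A classifier with n = 4: count stimulated receptors, capped at 3.
classifier = make_classifier(lambda receptors: min(sum(receptors), 3), 4)
```

As in the text, the number of output states is far smaller than the number of possible input situations, which is what makes the device a genuine classifier.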
HIERARCHIES OF CONCEPTS
[IMG.FIG2.2.GIF]
FIGURE 2.2 shows a diagram of a classifier organized on the hierarchical
principle. The hierarchy is, in general, that structure of a system made up of
subsystems in which each subsystem is given a definite whole number, called its
level, and the interaction of subsystems depends significantly on the difference
in their levels according to some general principle. Ordinarily this principle is
transmission of information in a definite direction, from top to bottom or bottom
to top, from a given level to the next. In our case the receptors are called the
zero level and the information is propagated from the bottom up. Each first-level
subsystem is connected to a certain number of receptors and its state is
determined by the states of the corresponding receptors. In the same way each
second-level subsystem is connected with a number of first-level subsystems and
so on. At the highest level (the fourth level in the diagram) there is one output
subsystem, which gives the final answer regarding the affiliation of the
situations with a particular class.
All subsystems at intermediate levels are also classifiers. The direct input for
a classifier at level K is the states of the classifiers on level K - 1, the
aggregate of which is the situation subject to classification on level K. In a
hierarchical system containing more than one intermediate level, it is possible
to single out hierarchical subsystems that bridge several levels. For example, it
is possible to consider the states of all first-level classifiers linked to a
third-level classifier as the input situation for the third-level classifier.
Hierarchical systems can be added onto in breadth and height just as it is
possible to put eight cubes together into a cube whose edges are twice as long as
before. One can add more cubes to this construction to make other forms.
Because there is a system of concepts linked to each classifier the hierarchy of
classifiers generates a hierarchy of concepts. Information is converted as it
moves from level to level and is expressed in terms of increasingly
''high-ranking'' concepts. At the same time the amount of information being
transmitted gradually decreases, because information that is insignificant from
the point of view of the task given to the ''supreme'' (output) classifier is
discarded.
Let us clarify this process with the example of the pictures shown in figure 2.1.
Suppose that the assigned task is to recognize ''houses.'' We shall introduce two
intermediate concept levels. We shall put the aggregate of concepts of
''segment'' on the first level and the concept of ''polygon'' on the second. The
concept of ''house'' comes on the third level.
By the concepts of ''segment'' we mean the aggregate of concepts of segments with
terminal coordinates x[1], y[1], and x[2], y[2], where the numbers x[1], y[1],
and x[2], y[2], can assume any values compatible with the organization of the
screen and the system of coordinates. To be more concrete, suppose that the
screen contains 1,000 x 1,000 light-sensitive points. Then the coordinates can be
ten-digit binary numbers (2^10 = 1,024 > 1,000), and a segment with given ends
will require four such numbers, that is to say 40 binary orders, for its
description. Therefore, there are 2^40 such concepts in all. These are what the
first-level classifiers must distinguish.
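The arithmetic behind the 2^40 figure can be checked directly:

```python
# Ten binary digits cover coordinates 0..1023, enough for a 1,000-point axis;
# a segment is described by four such coordinates (x1, y1, x2, y2).
coord_bits = 10
bits_per_segment = 4 * coord_bits            # 40 binary orders per segment
segment_concepts = 2 ** bits_per_segment     # number of "segment" concepts
```
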
One should not think that a segment with given ends is a concrete concept--a set
consisting of a single picture. When we classify this picture as a segment with
given ends we are abstracting from the slight curvature of the line, from
variations in its thickness, and the like (see figure 2.1). There are different
ways to establish the criterion for determining which deviations from the norm
should be considered insignificant. This does not interest us now.
Each first-level classifier should have at the output a subsystem of 40 binary
orders on which the coordinates of the ends of the segment are ''recorded.'' How
many classifiers are needed? This depends on what kind of pictures are expected
at the input of the system. Let us suppose that 400 segments are sufficient to
describe any picture. This means that 400 classifiers are enough. We shall divide
the entire screen into 400 squares of 50 x 50 points and link each square with a
classifier which will fix a segment which is closest to it in some sense (the
details of the division of labor among classifiers are insignificant). If there
is no segment, let the classifier assume some conventional ''meaningless'' state,
for example where all four coordinates are equal to 1,023.
If our system is offered a picture that shows a certain number of segments then
the corresponding number of first-level classifiers will indicate the coordinates
of the ends of the segments and the remaining classifiers will assume the state
''no segment.'' This is a description of the situation in terms of the concepts
of "segment.'' Let us compare the amount of information at the zero level and at
the first level. At the zero level of our system 1,000 x 1,000 = 10^6 receptors
receive 1 million bits of information. At the first level there are 400
classifiers, each of which contains 40 binary orders, that is, 40 bits of
information; the total is 16,000 bits. During the transition to the first level
the amount of information has decreased 62.5 times. The system has preserved only
the information it considers ''useful'' and discarded the rest. The relativity of
these concepts is seen from the fact that if the picture offered does not
correspond to the hierarchy of concepts of the recognition system the system's
reaction will be incorrect or simply meaningless. For example, if there are more
than 400 segments in the picture not all of them will be fixed, and if a picture
with a spot is offered the reaction to it will be the same as to any empty
picture.
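The information bookkeeping of the preceding paragraph, restated as arithmetic:

```python
# Zero level: one bit per receptor on a 1,000 x 1,000 screen.
zero_level_bits = 1000 * 1000        # 1,000,000 bits
# First level: 400 segment classifiers of 40 binary orders each.
first_level_bits = 400 * 40          # 16,000 bits
compression = zero_level_bits / first_level_bits
```

The system keeps roughly one bit in sixty-two of what the receptors deliver; everything else is discarded as "useless" for its task.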
We divide the aggregate of concepts of ''polygon," which occupies the second
level of the hierarchy, into two smaller aggregates: isosceles triangles and
parallelograms. We single out a special aggregate of rectangles from the
parallelograms. Considering that assigning the angle and length requires the same
number of bits (10) as for the coordinate, we find that 50 bits of information
are needed to assign a definite isosceles triangle, 60 bits for a parallelogram,
and 50 bits for a rectangle. The second-level classifiers should be designed
accordingly. It is easy to see that all the information they need is available at
the first level. The existence of a polygon is established where there are
several segments that stand in definite relationships to one another. There is a
further contraction of the information during the transition to the second level.
Taking one third of the total of 400 segments for each type of polygon we obtain
a system capable of fixing 44 triangles, 33 rectangles, and 33 parallelograms
(simultaneously). Its information capacity is 5,830 bits, which is almost three
times less than the capacity of the first level. On the other hand, when faced
with an irregular triangle or quadrangle, the system is nonplussed!
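The second-level capacity quoted above also checks out:

```python
# 44 isosceles triangles at 50 bits, 33 rectangles at 50 bits,
# 33 parallelograms at 60 bits (as in the text).
second_level_bits = 44 * 50 + 33 * 50 + 33 * 60   # 5,830 bits
first_level_bits = 400 * 40                        # 16,000 bits, for comparison
ratio = first_level_bits / second_level_bits       # "almost three times less"
```
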
It is easy to describe the concept of ''house'' in the language of second-level
concepts. A house consists of four polygons--one rectangle, one isosceles
triangle, and two parallelograms--which stand in definite relationships to one
another. The base of the isosceles triangle coincides with one side of the
rectangle, and so on.
To avoid misunderstanding it should be pointed out that the hierarchy of concepts
we are discussing has a much more general meaning than the hierarchy of concepts
by abstractness (generality) which is often simply called the ''hierarchy of
concepts.'' The pyramid of concepts used in classifying animals is an example of
a hierarchy by generality. The separate individual animals (the ''concrete''
concepts) are set at the zero level. At the first level are the species, at the
second the genera, then the orders, families, classes, and phyla. At the peak of
the pyramid is the concept of ''animal.'' Such a pyramid is a particular case of
the hierarchy of concepts in the general sense and is distinguished by the fact
that each concept at level k is formed by joining a certain number of concepts at
level k - 1. This is the case of very simply organized classifiers. In the general
case classifiers can be organized any way one likes. The discriminators necessary
to an animal are closer to a hierarchy based on complexity and subtlety of
concepts, not generality.
HOW THE HIERARCHY EMERGES
LET US RETURN again to the evolution of the nervous system. Can a hierarchy of
classifiers arise through evolution? It is apparent that it can, but on one
condition: if the creation of each new level of the hierarchy and its subsequent
expansion are useful to the animal in the struggle for existence. As animals with
highly organized nervous systems do exist, we may conclude that such an expansion
is useful. Moreover, studies of primitive animals show that the systems of
concepts their nervous systems are capable of recognizing are also very
primitive. Consequently, we see for ourselves the usefulness of the lowest level
of the hierarchy of classifiers.
Let us sketch the path of development of the nervous system. In the initial
stages we find that the animal has just a few receptors. The number of possible
methods of interconnecting them (combinations) is relatively small and permits
direct selection. The advantageous combination is found by the trial and error
method. That an advantageous combination can exist even for a very small number
of neurons can easily be seen in the following example. Suppose that there are
just two light-sensitive receptors. If they are set on different sides of the
body the information they yield (difference in illuminations) is sufficient for
the animal to be able to move toward or away from the light. When an advantageous
combination has been found and realized by means of, let us assume, one
intermediate neuron (such neurons are called associative), the entire group as a
whole may be reproduced. In this way there arises a system of associative neurons
which, for example, register differences between the illumination of receptors
and sum these differences, as in Figure 2.3a.
[IMG.FIG2.3.GIF]
Figure 2.3. Simplest types of connections among receptors.
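The two-receptor example can be made concrete with a short sketch. The function name and turning rule are invented for illustration; the point is only that the difference in illuminations, the "concept" registered by the associative neuron, already suffices to steer.

```python
# Two light-sensitive receptors on opposite sides of the body: the
# difference in their illuminations is enough to move toward the light.

def steer(left, right):
    """Return a turning direction from two illumination readings."""
    difference = left - right        # what the associative neuron registers
    if difference > 0:
        return "turn_left"           # more light on the left
    if difference < 0:
        return "turn_right"
    return "straight"
```
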
Any part of a system of connected neurons may be reproduced, for example one or
several receptors. In this way there arises a system of connections of the type
shown in figure 2.3b. The diagrams of both types taken together form the first
level of a hierarchy, based on the concepts of the sum and difference of
illuminations. Because it is very important that animal movement be able to
adjust to changes in illumination at a given point, we may assume that neurons
capable of being triggered by changes in illumination must have appeared in the
very early stages of evolution. They could have been either receptors or
associative neurons connected to one or several receptors. In general,
first-level classifiers can be described as registers of the sum and differences
of the stimuli of receptors in space and time.
Having proven their usefulness for the animal, first-level classifiers become an
established part of its capabilities in the struggle for existence. Then the next
trial and error series begins: a small number of first-level classifiers (to be
more precise, their output subsystems) are interconnected into one second-level
trial classifier until a useful combination is obtained. Then the reproduction of
this combination is useful. It may be assumed that on the second level of the
hierarchy (pertaining to the organs of sight) there appear such concepts as the
boundary between light and shadow, the spot, the average illumination of a spot,
and movement of the boundary between light and shadow. The successive levels of
the hierarchy will arise in the same way.
The scheme we have outlined leads one to think that any complex system which has
arisen by the method of trial and error in the process of evolution should have a
hierarchical organization. In fact, nature--unable to sort through all
conceivable combinations of a large number of elements--selects combinations from
a few elements. When it finds a useful combination, nature reproduces it and uses
it (the whole of it) as an element to be tentatively connected with a small
number of other similar elements. This is how the hierarchy arises. This concept
plays an enormous role in cybernetics. In fact, any complex system, whether it
has arisen naturally or been created by human beings, can be considered organized
only if it is based on some kind of hierarchy or interweaving of several
hierarchies. At least we do not yet know any organized systems that are arranged
differently.
SOME COMMENTS ON REAL HIERARCHIES
THUS FAR our conclusions have been purely speculative. How do they stand up
against the actual structure of the nervous systems of animals and what can be
said about the concepts of intermediate levels of a hierarchy which has actually
emerged in the process of evolution?
When comparing our schematic picture with reality the following must be
considered. The division of a system of concepts into levels is not so
unconditional as we have silently assumed. There may be cases where concepts on
level K are used directly on level K + 2, bypassing level K + 1. In figure 2.2 we
fitted such a possibility into the overall diagram by introducing classifiers
which are connected to just one classifier of the preceding level and repeat its
state; they are shown by the squares containing the x's. In reality, of course,
there are no such squares, which complicates the task of breaking the system up
into levels. To continue, the hierarchy of classifiers shown in figure 2.2 has a
clearly marked pyramidal character; at higher levels there are fewer classifiers
and at the top level there is just one. Such a situation occurs when a system is
extremely ''purposeful,'' that is, when it serves some very narrow goal, some
precisely determined method of classifying situations. In the example we have
cited this was recognition of ''houses.'' And we saw that for such a system even
irregular triangles and quadrangles proved to be ''meaningless''; they are not
included in the hierarchy of concepts. To be more universal a system must
resemble not one pyramid but many pyramids whose apexes are arranged at
approximately the same level and form a set of concepts (more precisely, a set of
systems of concepts) in whose terms control of the animal's actions takes place
and which are ordinarily discovered during investigation of the animal's
behavior. These concepts are said to form the basis of a definite ''image'' of
the external world which takes shape in the mind of the animal (or person). The
state of the classifiers at this level is direct information for the executive
part of the nerve net (that is, in the end, for the effectors). Each of these
classifiers relies on a definite hierarchy of classifiers, a pyramid in which
information moves as described above. But the pyramids may overlap in their
middle parts (and they are known to overlap in the lower part, the receptors).
Theoretically the total number of pyramid apexes may be as large as one likes,
and specifically it may be much greater than the total number of receptors. This
is the case in which the very same information delivered by the receptors is
represented by a set of pyramids in many different forms, suited to all of
life's contingencies.
Let us note one other circumstance that should be taken into account in the
search for hierarchy in a real nerve net. If we see a neuron connected by
synapses with a hundred receptors, this by itself does not mean that the neuron
fixes some simple first-level concept such as the total number of stimulated
receptors. The logical function that relates the state of the neuron to the
states of the receptors may be very complex and have its own hierarchical
structure.
THE WORLD THROUGH THE EYES OF A FROG
FOUR SCIENTISTS from the Massachusetts Institute of Technology (J. Lettvin et
al.) have written an article entitled ''What the Frog's Eye Tells the Frog's
Brain" which is extremely interesting for an investigation of the hierarchy of
classifiers and concepts in relation to visual perception in animals.[20][3] The
authors selected the frog as their test animal because its visual apparatus is
relatively simple, and therefore convenient for study. Above all, the retina of
the frog eye is homogeneous; unlike the human eye it does not have an area of
increased sensitivity to which the most important part of the image must be
projected. Therefore, the glance of the frog is immobile; it does not follow a
moving object with its eyes the way we do. On the other hand, if a frog sitting
on a water lily rocks with the motion of the plant, its eyes make the same
movements, thus compensating for the rocking, so that the image of the external
world on the retina remains immobile. Information is passed from the retina along
the visual nerve to the so-called thalamus opticus of the brain. In this respect
the frog is also simpler than the human being; the human being has two channels
for transmitting information from the retina to the brain.
Vision plays a large part in the life of the frog, enabling it to hunt and to
protect itself from enemies. Study of frog behavior shows that the frog
distinguishes its prey from its enemies by size and state of movement. Movement
plays the decisive part here. Having spotted a small moving object (the size of
an insect or worm) the frog will leap and capture it. The frog can be fooled by a
small inedible object wiggled on a thread, but it will not pay the slightest
attention to an immobile worm or insect and can starve to death in the middle of
an abundance of such food if it is not mobile. The frog considers large moving
objects to be enemies and flees from them.
The retina of the frog's eye, like that of other vertebrates, has three layers of
nerve cells. The outermost layer is formed by light-sensitive receptors, the rods
and cones. Under it is the layer of associative neurons of several types. Some of
them (the bipolar cells) yield primarily vertical axons along which stimulation
is transmitted to deeper layers. The others (the horizontal or amacrine cells)
connect neurons that are located on one level. The third, deepest layer is formed
of the ganglion cells. Their dendrites receive information from the second-layer
cells and the axons are long fibers that are interwoven to form the visual nerve,
which connects the retina with the brain. These axons branch out, entering the
thalamus opticus, and transmit information to the dendrites of the cerebral
neurons.
The eye of a frog has about 1 million receptors, about 3 million associative
second-layer neurons, and about 500,000 ganglion cells. Such a retinal structure
gives reason to assume that analysis of the image begins in the eye of the animal
and that the image is transmitted along the visual nerve in terms of some
intermediate concepts. It is as if the retina were a part of the brain moved to
the periphery. This assumption is reinforced by the fact that the arrangement of
the axons on the surface of the thalamus opticus coincides with the arrangement
of the respective ganglion cells at the output of the retina--even though the
fibers are interwoven a number of times along the course of the visual nerve and
change their position in a cross-section of the nerve! Finally, the findings of
embryology on development of the retina lead to the same conclusion.
In the experiments we are describing a thin platinum electrode was applied to the
visual nerve of a frog, making it possible to record stimulation of separate
ganglion cells. The frog was placed in the center of an aluminum hemisphere,
which was dull grey on the inside. Various dark objects such as rectangles,
discs, and the like, were placed on the inside surface of the hemisphere; they
were held in place by magnets set on the outside.
The results of the experiments can be summarized as follows.
Each ganglion cell has a definite receptive field, that is, a segment of the
retina (set of receptors) from which it collects information. The state of
receptors outside the receptive field has no effect on the state of the ganglion
cell. The dimensions of receptive fields for cells of different types, if they
are measured by the angle dimensions of the corresponding visible areas, vary
from 2 degrees to 15 degrees in diameter.
The ganglion cells are divided into four types depending on what process they
record in their receptive field.
1. Detectors of long-lasting contrast. These cells do not react to the switching
on and off of general illumination, but if the edge of an object which is darker
or lighter than the background appears in the receptive field the cell
immediately begins to generate impulses.
2. Detectors of convex edges. These cells are stimulated if a small (not more
than three degrees) convex object appears in the receptive field. Maximum
stimulation (frequency of impulses) is reached when the diameter of the object is
approximately half of the diameter of the receptive field. The cell does not
react to the straight edge of an object.
3. Detectors of moving edges. Their receptive fields are about 12 degrees in
width. The cell reacts to any distinguishable edge of an object which is darker
or lighter than the background, but only if it is moving. If a smoothly moving
object five degrees in width passes over the field there are two reactions, to
the front and rear edges.
4. Detectors of darkening of the field. They send out a series of impulses if the
total illumination of the receptive field is suddenly decreased.
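The fourth detector type lends itself to a simple sketch. This is an illustrative model only, assuming the field's illumination is sampled at two successive moments; the threshold value is invented.

```python
# Type 4, "detector of darkening of the field": fire a series of impulses
# when the total illumination of the receptive field drops sharply.

def darkening_detector(previous_field, current_field, drop_threshold=0.5):
    """Compare total illumination at two moments; True means impulses."""
    before = sum(previous_field)
    after = sum(current_field)
    return before > 0 and (before - after) / before > drop_threshold
```

A shadow falling across the field (a sharp drop in the summed illumination) triggers the detector; gradual dimming below the threshold does not.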
The arrangement of the ends of the visual fibers in the thalamus opticus is
extremely interesting. We have already said that on a plane this arrangement
coincides with the arrangement of the corresponding ganglion cells in the retina.
In addition, it turns out that the ends of each type of fiber are set at a
definite depth in the thalamus opticus, so that the frog brain has four layers of
neurons that receive visual information. Each layer receives a copy of the
retinal image--but in a certain aspect that corresponds to one of the four types
of ganglion cells. These layers are the transmitters of information for the
higher parts of the brain.
Experiments such as those we have described are quite complex and disputes
sometimes arise concerning their interpretation. The details of the described
system may change or receive a different interpretation. Nonetheless, the general
nature of the system of first-level concepts has evidently been quite firmly
established. We see a transition from point description to local description
which takes account of the continuous structure of the image. The ganglion cells
act as recognizers of such primary concepts as edge, convexity, and movement in
relation to a definite area of the visible world.
FRAGMENTS OF A SYSTEM OF CONCEPTS
THE LOWEST-LEVEL concepts related to visual perception for a human being probably
differ little from the concepts of a frog. In any case, the structure of the
retina in mammals and in human beings is the same as in amphibians.
The phenomenon of distortion of perception of an image stabilized on the retina
gives some idea of the concepts of the subsequent levels of the hierarchy. This
is a very interesting phenomenon. When a person looks at an immobile object,
''fixes'' it with his eyes, the eyeballs do not remain absolutely immobile; they
make small involuntary movements. As a result the image of the object on the
retina is constantly in motion, slowly drifting and jumping back to the point of
maximum sensitivity. The image "marks time'' in the vicinity of this point.
An image which is stabilized, that is, not in continuous motion, can be created on the
retina. To achieve this, the object must be rigidly connected to the eyeball and
move along with it.
[IMG.FIG2.4.GIF]
Figure 2.4. Device for stabilizing an image on the retina
A contact lens with a small rod secured to it is placed on the eye. The rod holds
a miniature optical projector[21][4] into which slides a few millimeters in size
can be inserted. The test subject sees the image as if removed to infinity. The
projector moves with the eye, so the image on the retina is
immobile.
When the test subject is shown a stabilized image, for the first few seconds he
perceives it as he would during normal vision, but then distortions begin. First
the image disappears and is replaced by a grey or black background, then it
reappears in parts or as a whole.
That the stabilized image is perceived incorrectly is very remarkable in itself.
Logically, there is no necessity for the image of an immobile object to move
about the retina. Such movement produces no increase in the amount of
information, and it becomes more difficult to process it. As a matter of fact,
when similar problems arise in the area of engineering--for example when an image
is transmitted by television or data are fed from a screen to a computer--special
efforts are made to stabilize the image. But the human eye has not merely adapted
to a jerking image; it simply refuses to receive an immobile one. This is
evidence that the concepts related to movement, probably like those which we
observed in the frog, are deeply rooted somewhere in the lower stages of the
hierarchy, and if the corresponding classifiers are removed from the game correct
information processing is disrupted. From the point of view of the designer of a
complex device such as the eye (plus the data processing system) such an
arrangement is strange. The designer would certainly fill all the lower stages
with static concepts and the description of object movement would be given in
terms of the concepts of a higher level. But the hierarchy of visual concepts
arose in the process of evolution. For our remote frog-like ancestors the
concepts related to movement were extremely important and they had no time to
wait for the development of complex static concepts. Therefore, primitive dynamic
concepts arose in the very earliest stages of the development of the nervous
system, and because nature uses the given units to carry out subsequent stages of
building, these concepts became firmly established at the base of the hierarchy
of concepts. For this reason, the human eyeball must constantly make Brownian
movements.
Even more interesting is the way the image breaks up into parts (fragmentation).
Simple figures, such as a lone segment, disappear and come back in toto. More
complex figures sometimes disappear in toto and sometimes break into parts which
disappear and reappear independently of one another.
[IMG.FIG2.5.GIF]
Figure 2.5. Fragmentation of a stabilized image.
Fragmentation does not occur chaotically and it is not independent of the type of
image, as is the case when a drawing on a chalkboard is erased with a rag; rather
the fragmentation corresponds to the ''true'' structure of the image. We have put
the word ''true'' in quotation marks because fragmentation actually occurs in
accordance with the structure of image perception by the eye-brain system. We do
not know exactly what the mechanics of the distortion of perception in
stabilization are; we know only that stabilization disables some component of the
perception system. But from this too we can draw certain conclusions.
Imagine that several important design elements have suddenly disappeared from an
architectural structure. The building will fall down, but probably the pieces
would be of very different sizes. In one place you may see individual bricks and
pieces of glass, while in another a part of the wall and roof may remain, and in
still another place a whole corner of the building may be intact. Perception of
the stabilized image offers approximately that kind of sight. It makes it possible to
picture the nature of the concepts of a higher level (or higher levels) but not
to evaluate their mutual relations and dependences. It should be noted that in
the human being the personal experience of life, the learning (to speak in
cybernetic language), plays a large part in shaping higher-level concepts. (This
will be the next stage in evolution of the nervous system, so we are getting
somewhat ahead of things here. For an investigation of the hierarchy of concepts,
however, it is not very important whether the hierarchy was inherited or
acquired through one's own labor.)
Let us cite a few excerpts from the work mentioned above ([22]footnote 4).
The figure of the human profile invariably fades and regenerates in meaningful
units. The front of the face, the top of the head, the eye and ear come and go as
recognizable entities, separately and in various combinations. In contrast, on
first presentation a meaningless pattern of curlicues is described as extremely
"active"; the individual elements fade and regenerate rapidly, and the subject
sees almost every configuration that can be derived from the original figure.
After prolonged viewing however, certain combinations of curlicues become
dominant and these then disappear and reappear as units. The new reformed
groupings persist for longer periods. . . .
Linear organization is emphasized by the fading of this target composed of rows
of squares. The figure usually fades to leave one whole row visible: horizontal,
diagonal, or vertical. In some cases a three-dimensional "waffle" effect is also
noted. . . .
A random collection of dots will fade to leave only those dots which lie
approximately in a line. . . . Lines act independently in stabilized vision, with
breakage in the fading figure always at an intersection of lines. Adjacent or
parallel lines may operate as units. . . In the case of figures drawn in solid
tones as distinguished from those drawn in outline . . . the corner now replaces
the line as the unit of independent action. A solid square will fade from its
center, and the fading will obliterate first one and then another corner, leaving
the remaining corners sharply outlined and isolated in space. Regeneration
correspondingly begins with the reappearance of first one and then another
corner, yielding a complete or partial figure with the corners again sharply
outlined.
THE GOAL AND REGULATION
WE HAVE DESCRIBED the first half of the action of a complex reflex, which
consists of analyzing the situation by means of a hierarchy of classifiers. There
are cases where the second half, the executive half, of the reflex is extremely
simple and involves the stimulation of some local group of effectors--for example
the effectors that activate a certain gland. These were precisely the conditions
in which I. P. Pavlov set up most of his experiments, experiments which played an
important part in the study of higher nerve activity in animals and led to his
widely known theory of unconditioned and conditioned reflexes. Elementary
observations of animal behavior under natural conditions show, however, that this
behavior cannot be reduced to a set of reflexes that are related only to the
state of the environment. Every action of any complexity whatsoever consists of a
sequence of simpler actions joined by a common goal. It often happens that
individual components in this aggregate of actions are not simply useless but
actually harmful to the animal if they are not accompanied by the other
components. For example, it is necessary to fall back on the haunches before
jumping, and in order to grasp prey better it must be let go for an instant. The
two phases of action, preliminary and executive, which we see in these examples
cannot be the result of independent reflexes because the first action is
senseless by itself and therefore could not have developed.
When describing behavior the concepts of goal and regulation must be added to the
concept of the reflex. A diagram of regulation is shown in figure 2.6.
[IMG.FIG2.6.GIF]
An action which the system is undertaking depends not only on the situation
itself but also on the goal, that is, on the situation that the system is trying
to achieve. The action of the system is determined by comparing the situation and
the goal: the action is directed toward eliminating the discrepancy between the
situation and the goal. The situation determines the action through the
comparison block. The action exerts a reverse influence on the situation through
change in the environment. This feedback loop is a typical feature of the
regulation diagram and distinguishes it from the reflex diagram where the
situation simply causes the action.
HOW REGULATION EMERGES
HOW COULD A SYSTEM organized according to the regulation diagram occur in the
process of evolution? We have seen that the appearance of hierarchically
organized classifiers can be explained as a result of the combined action of two
basic evolutionary factors: replication of biological structures and finding
useful interconnections by the trial and error method. Wouldn't the action of
these factors cause the appearance of the regulation diagram?
Being unable to rely on data concerning the actual evolutionary process that
millions of years ago gave rise to a complex nervous system, we are forced to
content ourselves with a purely hypothetical combinative structure which
demonstrates the theoretical possibility of the occurrence of the regulation
diagram. We shall make a systematic investigation of all possibilities to which
replication and selection lead. It is natural to assume that in the process of
replication relations are preserved within the subsystem being replicated, as are
the subsystem's relations with those parts not replicated. We further assume that
owing to their close proximity there is a relationship among newly evolved
subsystems, which we shall depict in our diagrams with a dotted line. This
relationship may either be reinforced or disappear. We shall begin with the
simplest case--where we see just one nerve cell that is receptor and effector at
the same time (figure 2.7 a).
[IMG.FIG2.7.GIF]
Here there is only one possibility of replication, and it leads to the appearance
of two cells (figure 2.7 b). If one of them is closer to the surface and the
other closer to the muscle cells, a division of labor between them is useful.
This is how the receptor-effector diagram emerges (figure 2.7 c).
Now two avenues of replication are possible. Replication of the receptor yields
the pattern shown in figure 2.7 d; after the disappearance of the dotted-line
relationship, this becomes figure 2.7 e. A similar process generates the patterns
in figures 2.7 f, g, and so on. In this way the zero level of the hierarchy
(receptors) expands.
The second avenue is replication of effectors (see figure 2.8).
[IMG.FIG2.8.GIF]
In figure 2.8 b, the stimulation of one receptor should be transmitted along two
channels to two effectors. But we know that the electrical resistance of the
synapses drops sharply after the first time a current passes along them.
Therefore, if the stimulation is sent along one channel this communications
channel will be reinforced while the other will be bypassed and may ''dry up''
(figure 2.8 c). Then the stimulation may make a way across the dotted-line
relationship (figure 2.8 d), which marks the birth of the first level of the
hierarchy of classifiers.
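The reinforcement of a used channel and the "drying up" of an unused one can be sketched as a toy simulation. Everything here, the first random choice, the gain and decay constants, the winner-follows-conductance rule, is an illustrative assumption, not a claim about real synapses:

```python
import random

def reinforce(trials=100, gain=0.2, decay=0.05, seed=1):
    """Toy model: two synaptic channels compete for one impulse stream."""
    rng = random.Random(seed)
    conductance = [1.0, 1.0]          # two initially equal channels
    # the very first impulse picks a channel by trial and error;
    # afterwards it follows the channel whose synapse conducts better
    first = rng.randrange(2)
    for _ in range(trials):
        if conductance[0] == conductance[1]:
            chosen = first
        else:
            chosen = 0 if conductance[0] > conductance[1] else 1
        conductance[chosen] += gain                    # used synapse is reinforced
        other = 1 - chosen
        conductance[other] = max(0.0, conductance[other] - decay)  # unused dries up
    return conductance

print(reinforce())  # one channel ends up strong, the other at zero
```

The positive feedback is the point: whichever channel carries the first impulse conducts better on the next, so the asymmetry compounds until the bypass channel vanishes.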
[IMG.FIG2.9.GIF]
Figure 2.9 shows possible variations of the development of the three-neuron
diagram shown in figure 2.7 d. The diagrams correspond to replication of
different subsystems of the initial system. The subsystem which is replicated has
been circled. Figures 2.9 a-c explain the expansion of the zero level, while
figures 2.9 d-f show the expansion of the first two levels of the hierarchy of
classifiers. In the remainder we see patterns that occur where one first-level
classifier is replicated without a receptor connected to it. The transition from
figure 2.9 h to 2.9 i is explained by that ''drying up'' of the bypass channel we
described above. Figure 2.9 j, the final development, differs substantially from
all the other figures that represent hierarchies of classifiers. In this figure,
one of the classifiers is ''hanging in the air''; it does not receive information
from the external world. Can such a diagram be useful to an animal? It certainly
can, for this is the regulation diagram!
As an example we can suggest the following embodiment of figure 2.9 j. Let us
consider a certain hypothetical animal which lives in the sea. Suppose R is a
receptor which perceives the temperature of the environment. Classifier A also
records the temperature of the water by change in the frequency of stimulation
impulses. Suppose that greater or less stimulation of effector E causes expansion
or contraction of the animal's shell, which results in a change in its volume;
the animal either rises toward the surface of the sea or descends deeper. And
suppose that there is some definite temperature, perhaps 16° C (61° F),
which is most suitable for our animal. The neuron Z (the goal fixer) should
maintain a certain frequency of impulses equal to the frequency of neuron A at a
temperature of 16°. Effector E should register the difference of stimulation
of neurons A and Z and in conformity with it, raise the animal toward the surface
where the water is warmer or immerse it to deeper, cooler water layers. Such an
adaptation would be extremely helpful to our imaginary animal.
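This thermoregulating animal is easy to simulate. The linear temperature profile, the gain constant, and the names below are assumptions made for the sketch; only the comparison-and-feedback logic comes from the text:

```python
# Toy simulation of the regulation scheme of figure 2.9 j, assuming
# (purely for illustration) that water temperature falls linearly with depth.
# R and A report the ambient temperature, Z fixes the goal of 16 degrees,
# and E moves the animal up or down to cancel the difference.

GOAL_TEMP = 16.0          # neuron Z: the fixed "goal" firing rate
SURFACE_TEMP = 24.0       # assumed surface temperature
DEGREES_PER_METER = 0.5   # assumed cooling rate with depth

def water_temp(depth):
    return SURFACE_TEMP - DEGREES_PER_METER * depth

def regulate(depth=0.0, steps=50, gain=0.4):
    for _ in range(steps):
        sensed = water_temp(depth)        # receptor R / classifier A
        error = sensed - GOAL_TEMP        # comparison of A against Z in E
        depth += gain * error             # too warm: descend; too cold: ascend
        depth = max(depth, 0.0)           # cannot rise above the surface
    return depth

final = regulate()
print(final, water_temp(final))   # settles near the 16-degree layer
```

With these assumed constants the feedback loop converges to the depth where the sensed temperature equals the goal, which is the whole content of the regulation diagram.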
REPRESENTATIONS
REPLICATION of the various subsystems of the nerve net can give rise to many
different groups of classifiers which ''hang in the air". Among them may be
copies of whole steps of the hierarchy whose states correspond exactly to the
states of those ''informed'' classifiers which receive information from the
receptors. They correspond but they do not coincide. We saw this in the example
of neurons A and Z in figure 2.9j. In complex systems the uninformed copies of
informed classifiers may store a large amount of information. We shall call the
states of these copies representations, fully aware that in this way we are
giving a definite cybernetic interpretation to this psychological concept. It is
obvious that there is a close relationship between representations and
situations, which are really nothing but the states of analogous classifiers, but
ones receiving information from the receptors. The goal is a particular case of
the representation, or more precisely, it is that case where the comparison
between a constant representation and a changing situation is used to work out an
action that brings them closer to one another. The hypothetical animal described
above loves a temperature of 16° and the "lucid image" of this wonderful
situation, which is a certain frequency of impulses of neuron A, lives in its
memory in the form of precisely that frequency of pulses of neuron Z.
This is a very primitive representation. The more highly organized the
''informed'' part of the nervous system is, the more complex its duplicates will
be (we shall call them representation fixers), and the more varied the
representations will be. Because classifiers can belong to different levels of
the hierarchy and the situation can be expressed in different systems of
concepts, representations can also differ by their "concept language'' because
they can be the states of fixers of different levels. Furthermore, the degree of
stability of the states of the representation fixers can also vary greatly.
Therefore, representations differ substantially in their concreteness and
stability. They may be exact and concrete, almost perceptible to the sensors. The
extreme case of this is the hallucination, which is perceived subjectively as
reality and to which the organism responds in the same way as it would to the
corresponding situation. On the other hand, representations may be very
approximate, as a result of both their instability and their abstraction. The
latter case is often encountered in artistic and scientific creative work where a
representation acts as the goal of activity. The human being is dimly aware of
what he needs and tries to embody it in solid, object form. For a long time
nothing comes of it because his representations do not have the necessary
concreteness. But then, at one fine moment (and this is really a fine moment!) he
suddenly achieves his goal and realizes clearly that he has done precisely what
he wanted.
MEMORY
IN PRINCIPLE, as many representation fixers as desired can be obtained by
replication. But a question arises here: how many does an animal need? How many
copies of "informed" classifiers are needed? One? Two? Ten? It follows from
general considerations that many copies are needed. After all, representation
fixers serve to organize experience and behavior in time. The goal fixer stores
the situation which, according to the idea, should be realized in the future.
Other fixers can store situations which have actually occurred in the past. The
temporal organization of experience is essential to an animal which is striving
to adapt to the environment in which it lives, for this environment reveals
certain rules, that is, correlations between past, present, and future
situations. We may predict that after a certain initial increase in the number of
receptors the further refinement of the nervous system will require the creation
of representation fixers, and a large number of them. There is no reason to
continue to increase the number of receptors and classifiers and thus improve the
"instantaneous snapshots'' of the environment if the system is not able to detect
correlations among them. But the detection of correlations among the
''instantaneous snapshots'' requires that they be stored somewhere. This is how
representation fixers, which in other words are memory, arise. The storage of the
goal in the process of regulation is the simplest case of the use of memory.
THE HIERARCHY OF GOALS AND PLANS
IN THE REGULATION DIAGRAM in figure 2.6 the goal is shown as something unified.
But we know very well that many goals are complex, and while working toward them
a system sets intermediate goals. We have already cited the examples of two-phase
movement: to jump onto a chair, a cat first settles back on its haunches and then
springs up. In more complex cases the goals form a hierarchy consisting of
numerous levels. Let us suppose that you set the goal of traveling from home to
work. This is your highest goal at the particular moment. We shall assign it the
index (level number) 0. To travel to work you must leave the building, walk to
the bus stop, ride to the necessary stop, and so on. These are goals with an
index of -1. To leave the building you must leave the apartment, take the
elevator down, and go out the entrance. These are goals with an index of -2. To
take the elevator down you must open the door, enter the elevator, and so on;
this is index -3. To open the elevator door you must reach your hand out to the
handle, grasp it, and pull it toward you; this is index -4. These goals may
perhaps be considered elementary.
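The nested goals above can be written down as a data structure; traversing it recovers each goal together with its index. The structure below is simply a transcription of the example:

```python
# A plan as a nested structure: each node is a goal plus its subordinate
# goals, and the depth of nesting corresponds to the index 0, -1, -2, ...
# Goals with no subgoals are the "elementary" ones.

plan = ("travel to work", [
    ("leave the building", [
        ("leave the apartment", []),
        ("take the elevator down", [
            ("open the elevator door", [
                ("reach for the handle", []),
                ("grasp it", []),
                ("pull it", []),
            ]),
            ("enter the elevator", []),
        ]),
        ("go out the entrance", []),
    ]),
    ("walk to the bus stop", []),
    ("ride to the necessary stop", []),
])

def walk(node, index=0):
    """Yield (index, goal) pairs, descending one level per subgoal."""
    goal, subgoals = node
    yield index, goal
    for sub in subgoals:
        yield from walk(sub, index - 1)

for idx, goal in walk(plan):
    print(idx, goal)
```

Note that the hierarchy lives in the plan, not in the goals themselves, exactly as the next paragraph of the text insists.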
The goal and a statement of how it is to be achieved--that is, a breakdown into
subordinate goals--is called a plan of action. Our example is in fact a
description of a plan for traveling to work. The goal itself, which in this case
is the representation ''me--at my work place,'' does not have any hierarchical
structure. The primary logical unit that forms the hierarchy is the plan, but the
goals form a hierarchy only to the extent that they are elements of the plan.
In their book Plans and the Structure of Behavior American psychologists G.
Miller, E. Galanter, and K. Pribram take the concept of the plan as the basis for
describing the behavior of humans and animals. They show that such an approach is
both sound and useful. Unlike the classical reflex arc (without feedback) the
logical unit of behavior description used by the authors contains a feedback
loop.
[IMG.FIG2.10.GIF]
They call this unit the Test-Operate-Test-Exit diagram (T-O-T-E--based on the
first letters of the English words ''test,'' ''operate,'' ''test,'' "exit".) The
test here means a test of correspondence between the situation and the goal. If
there is no correspondence an operation is performed, but if there is
correspondence the plan is considered performed and the system goes to ''exit".
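The T-O-T-E logic can be sketched in a few lines: test the situation against the goal; while they disagree, operate; when they agree, exit. The nail example is the authors' own, but the stand-in functions and the step guard below are illustrative assumptions:

```python
def tote(test, operate, max_steps=100):
    """Test-Operate-Test-Exit: operate until the test is satisfied."""
    steps = 0
    while not test():              # Test: does the situation match the goal?
        operate()                  # Operate: act to reduce the discrepancy
        steps += 1
        if steps >= max_steps:     # guard against a plan that never succeeds
            break
    return steps                   # Exit

# Driving a nail: each "operate" is one hammer stroke.
nail = {"height": 5}               # millimeters the nail still protrudes

flush = lambda: nail["height"] == 0
strike = lambda: nail.update(height=max(0, nail["height"] - 2))

print(tote(flush, strike))         # number of strokes taken before exit
```

The feedback loop is in the `while`: the situation is re-tested after every operation, which is exactly what distinguishes this unit from the classical reflex arc.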
[IMG.FIG2.11.GIF]
As an example, figure 2.11 shows a plan for driving a nail into a board; the plan
is represented in the form of a T-O-T-E unit. The T-O-T-E diagram in figure 2.10
shows the same phenomenon of regulation that was depicted in figure 2.6. The
difference is in the method of depiction. The diagram in figure 2.6 is structural
while in figure 2.10 it is functional. We shall explain these concepts, and at
the same time we shall define the concept of control more precisely.
STRUCTURAL AND FUNCTIONAL DIAGRAMS
A STRUCTURAL DIAGRAM of a cybernetic system shows the subsystems which make up
the particular system and often also indicates the directions of information
flows among the subsystems. Then the structural diagram becomes a graph. In
mathematics the term graph is used for a system of points (the vertices of the
graph), some of which are connected by lines (arcs). The graph is oriented if a
definite direction is indicated on each arc. A structural diagram with an
indication of information flows is a directed graph whose vertices depict the
subsystems while the arcs are the information flows.
This description of a cybernetic system is not the only possible one. Often we
are interested not so much in the structure of a system as in its functioning.
Even more often we are simply unable to say anything sensible about the
structure, but there are some things we can say about the function. In such cases
a functional diagram may be constructed. It is also a directed graph, but in it
the vertices represent different sets of states of the system and the arcs are
possible transitions between states. An arc connects two vertices in the direction
from the first to the second in the case where there is a possibility of
transition from at least one state relating to the first vertex into another
state relating to the second vertex. We shall also call the sets of states
generalized states. Therefore, the arc in a diagram shows the possibility of a
transition from one generalized state to another. Whereas a structural diagram
primarily reflects the spatial aspect, the functional diagram stresses the
temporal aspect. Formally, according to the definition given above, the
functional diagram does not reflect the spatial aspect (division of the system
into subsystems) at all. As a rule, however, the division into subsystems is
reflected in the method of defining generalized states, that is, the division of
the set of all states of the system into subsets which are ''assigned'' to
different vertices of the graph. Let us review this using the example of the
system whose structural diagram is shown in figure 2.12. This is a control
diagram.
[IMG.FIG2.12.GIF]
One of the subsystems, which is called the control device, receives information
from ''working" subsystems A[1], A[2], A[3], . . . , processes it, and sends
orders (control information) to subsystems A[1], A[2], A[3], . . ., as a result
of which these subsystems change their state. It must be noted that, strictly speaking,
any information received by the system changes its state. Information is called
control information when it changes certain distinct parameters of the system
which are identified as ''primary,'' ''external,'' ''observed,'' and the like.
Often the control unit is small in terms of information-capacity and serves only
to switch information flows, while the real processing of data and development of
orders is done by one of the subsystems, or according to information stored in
it. Then it is said that control is passed to this subsystem. That is how it is
done, specifically, in a computer where subsystems A[1], A[2], A[3], . . . are
the cells of operational memory. Some of the cells contain ''passive''
information (for example numbers), while others contain orders (instructions).
When control is in the cell which contains an instruction the control unit
performs this instruction. Then it passes control to the next cell, and so on.
The functional diagram for systems with transfer of control is constructed as
follows. To each vertex of the graph is juxtaposed one of the subsystems A[i] and
the set of all states of the system when control is in the particular subsystem.
Then the arcs (arrows) signify the transfer of control from one subsystem to
another.
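Transfer of control among memory cells can be illustrated with a toy machine. The instruction set (`ADD`, `JNZ`, `HALT`) is invented for the example and is not meant to depict any real computer:

```python
# Memory cells hold either passive data (numbers) or instructions; a small
# control unit performs the instruction in the cell that currently holds
# control, then passes control onward. The trace of visited cells is the
# functional diagram's path through the graph of generalized states.

cells = [
    ("ADD", 5, 6),    # cell 0: add contents of cells 5 and 6 into cell 7
    ("JNZ", 7, 3),    # cell 1: if cell 7 is nonzero, jump to cell 3
    ("HALT",),        # cell 2
    ("HALT",),        # cell 3
    ("HALT",),        # cell 4
    2,                # cell 5: passive data
    3,                # cell 6: passive data
    0,                # cell 7: result
]

def run(cells):
    control = 0                        # control starts in cell 0
    trace = []
    while True:
        trace.append(control)
        op = cells[control]
        if op[0] == "HALT":
            break
        if op[0] == "ADD":
            cells[7] = cells[op[1]] + cells[op[2]]
            control += 1               # control passes to the next cell
        elif op[0] == "JNZ":           # branching depends on a data cell
            control = op[2] if cells[7] != 0 else control + 1
    return trace

print(run(cells))                      # the path control takes through the cells
```

The branch in `JNZ` shows why a functional diagram can fork even in a deterministic machine: the transfer of control depends on the state of another subsystem (here, cell 7).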
[IMG.FIG2.13.GIF]
Figure 2.13. Functional diagram of transfer of control
Even where each successive state is fully determined by the preceding one there
may be branching on such a diagram because each vertex corresponds to a vast
number of states and the transfer of control can depend on the state of the
control unit or the subsystem in which control is located. Functional diagrams
are often drawn in generalized form, omitting certain inconsequential details and
steps. It may then turn out that the path by which control branches depends on
the state of several different subsystems. The condition on which this switch is
made is ordinarily written alongside the arrow. The diagram shown in figure 2.10
can be understood in precisely this sense. Then it will be assumed that the
system has two subsystems, a test block and an operation-execution block, and
control passes from one to the other in conformity with the arrows. The system
can also have other subsystems (in this case the environment), but they never
receive control and therefore are not shown in the diagram (to be more precise,
those moments when the environment changes the state of the system or changes its
own state when acted upon by the system are included in the process of action of
one of the blocks).
We can move even further from the structural diagram. Switching control to a
certain subsystem means activating it, but there can be cases where we do not
know exactly which subsystem is responsible for a particular observed action.
Then we shall equate the vertices of the graph with the actions as such and the
arcs will signify the transition from one action to another. The concept of
''action as such,'' if strictly defined, must be equated with the concept of
''generalized state'' (''set of states'') and this returns us to the first, most
abstract definition of the functional diagram. In fact, when we say that a dog
''runs," ''barks," or "wags his tail,'' a set of concrete states of the dog fits
each of these definitions. Of course one is struck by a discrepancy: ''state'' is
something static, but ''action'' is plainly something dynamic, closer to a change
of state than a state itself. If a photograph shows a dog's tail not leaving the
plane of symmetry, we still do not know whether the dog is wagging it or holding
it still. We overcome such contradictions by noting that the concept of state
includes not only quantities of the type "position,'' but also quantities such as
''velocity,'' ''acceleration,'' and the like. Specifically, a description of the
state of the dog includes an indication of the tension of its tail muscles and
the stimulation of all neurons which regulate the state of the muscles.
THE TRANSITION TO PHENOMENOLOGICAL DESCRIPTIONS
THEREFORE, in the functional diagram an action is, formally speaking, a set of
states. But to say that a particular action is some set is to say virtually
nothing. This set must be defined. And if we do not know the structure of the
system and its method of functioning it is practically impossible to do this with
precision. We must be content with an incomplete phenomenological definition
based on externally manifested consequences of internal states. It is this kind
of functional diagram, with more or less exactly defined actions at the vertices
of the graph, that is used to describe the behavior of complex systems whose
organization is unknown--such as humans and animals. The diagrams in figures 2.10
and 2.11 are, of course, such diagrams. The phenomenological approach to brain
activity can be carried out by two sciences: psychology and behavioristics (the
study of behavior). The former is based on subjective observations and the latter
on objective ones. They are closely connected and are often combined under the
general name of psychology.
Because the operational component of the T-O-T-E unit may be composite, requiring
the performance of several subordinate plans, T-O-T-E units can have hierarchical
structure. Miller, Galanter, and Pribram give the following example. If a hammer
striking a nail is represented as a two-phase action consisting of raising and
lowering the hammer, then the functional diagram in figure 2.11 which depicts a
plan for driving a nail, becomes the diagram shown in figure 2.14.
[IMG.FIG2.14.GIF]
Figure 2.14. Hierarchical plan for driving a nail
In its turn, this diagram can become an element of the operational component of a
T-O-T-E diagram on a higher level.
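The hierarchical plan of figure 2.14 can be sketched by nesting T-O-T-E units: the operate phase of the top-level unit is itself two subordinate units, raise the hammer, then lower it. The physical constants below are invented for the sketch:

```python
# Hierarchical T-O-T-E: the "operate" of the nail-driving plan is itself
# a two-phase subordinate plan (raise hammer, lower hammer).

nail = {"height": 4}       # millimeters the nail still protrudes
hammer = {"raised": False}

def tote(test, operate):
    """Operate until the test is satisfied, then exit."""
    while not test():
        operate()

def raise_hammer():
    hammer["raised"] = True

def lower_hammer():
    hammer["raised"] = False
    nail["height"] = max(0, nail["height"] - 2)   # the stroke drives the nail

def stroke():
    # subordinate plan: two T-O-T-E units executed in sequence
    tote(lambda: hammer["raised"], raise_hammer)
    tote(lambda: not hammer["raised"], lower_hammer)

tote(lambda: nail["height"] == 0, stroke)   # top-level plan: nail is flush
print(nail["height"])
```

As the text notes, this nesting can continue upward: the whole nail-driving unit could in turn be the operate phase of a still higher plan.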
We have seen that the elementary structural diagram of figure 2.6 corresponds to
the elementary functional diagram in figure 2.10. When plans make up the
hierarchy, what happens to the structural diagram? Or, to state the question
more precisely, what structural diagrams can ensure execution of a
hierarchically constructed plan?
Different variants of such diagrams may be suggested. For example, it can be
imagined that there is always one comparison block and that the same subsystem
which stores the goal is always used, but the state of this subsystem (that is,
the goal) changes under the influence of other parts of the system, ensuring an
alternation of goals that follows the plan. By contrast, it may be imagined that
the comparison block-goal pair is reproduced many times and during execution of a
hierarchical plan, control passes from one pair to the other. A combination of
these two methods may be proposed and, in general, we can think up many
differently organized cybernetic devices that carry out the same hierarchical
functional diagrams. All that is clear is that they will have a hierarchical
structure and that devices of this type can arise through evolution by the
replication of subsystems and selection of useful variants.
But what kind of structural diagrams actually appear in the process of evolution?
Unfortunately, we cannot yet be certain. That is why we had to switch to
functional diagrams. This is just the first limitation we shall be forced to
impose on our striving for a precise cybernetic description of higher nervous
activity. At the present time we know very little about the cybernetic structure
and functioning of the brains of higher animals, especially of the human being.
Properly speaking, we know virtually nothing. We have only certain facts and
assumptions. In our further analysis, therefore, we shall have to rely on
phenomenology, the findings of behavioristics and psychology, where things are
somewhat better. As for the cybernetic aspect, we shall move to the level of
extremely general concepts, where we shall find certain rules so general that
they explain the stages of development of both the nervous system and human
culture, in particular science. The relatively concrete cybernetic analysis of
the first stages of evolution of the nervous system, which is possible thanks to
the present state of knowledge, will serve as a running start for the subsequent,
more abstract analysis. Of course, our real goal is precisely this abstract
analysis, but it would be more satisfying if knowing more about the cybernetics
of the brain. we were able to make the transition from the concrete to the
abstract in a more smooth and well-substantiated manner.
DEFINITION OF THE COMPLEX REFLEX
SUMMARIZING our description of the fourth stage in the development of the nervous
system, we can define the complex reflex as the process in which stimulation of
receptors caused by interaction with the environment is passed along the nerve
net and is converted by it, thus activating a definite plan of action that
immediately begins to be executed. In this diagram of behavior all feedbacks
between the organism and the environment are realized in the process of
regulation of actions by the plan, while the overall interaction between the
environment and the organism is described by the classical stimulus-response
formula. Only now the response means the activation of a particular plan.
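This definition can be condensed into a small sketch: the nerve net converts the receptor state into the activation of one plan, and all subsequent feedback with the environment runs through regulation by that plan. The recognizer, the two plans, and the "distance" bookkeeping below are invented stand-ins, not a model of a real nerve net.

```python
# A hedged sketch of the complex reflex: stimulus -> plan activation ->
# regulation of actions until the plan's goal is reached.

def recognize(situation):
    """The nerve net's role: map the receptor state to one concept."""
    return "edible" if situation.get("smell") == "food" else "dangerous"

def flee(state):
    state["distance"] += 5           # each regulated action widens the gap

def approach_and_eat(state):
    state["distance"] = max(0, state["distance"] - 5)

# The "response" is not a single action but an activated plan:
# a (goal predicate, regulated action) pair, as in figure 2.9.
plans = {
    "dangerous": (lambda s: s["distance"] >= 20, flee),
    "edible":    (lambda s: s["distance"] == 0, approach_and_eat),
}

def complex_reflex(situation, state):
    goal_reached, act = plans[recognize(situation)]   # stimulus -> plan
    while not goal_reached(state):                    # regulation loop:
        act(state)                                    # feedback via action
    return state

print(complex_reflex({"smell": "smoke"}, {"distance": 0}))
```

Note that the stimulus-response formula describes only the first line of `complex_reflex`; everything after it is regulation by the activated plan.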
In a logic textbook (Logika [Logic], State Publishing House of Political
Literature, Moscow, 1956) we read the following: "A concept by whose properties
an object is conceived as such and as a given object is called concrete. A
concept by whose properties what is conceived is not the given object as such
but a certain property of the object or relationship among objects is called
abstract." This definition can hardly qualify as a masterpiece of clear thinking.
Still, we may conclude from it that general concepts can also be considered
abstract if they are formed not by listing particular objects included in them
but rather by declaring a number of properties to be significant and abstracting
from the other, insignificant properties. This is the only kind of general
concepts we are going to consider and so we shall call them abstract concepts
also. For example, an abstract triangle is any triangle regardless of its size,
its sides and angles, or its position on the surface of the screen; in this
sense "triangle" is an abstract concept. The term "abstract" is used this way
both in everyday life
and in mathematics. At the same time, according to the logic textbook,
"triangle,'' "square,'' and the like are concrete general concepts, but
"triangularity" and ''squareness," which are inherent in them. are abstract
concepts. What is actually happening here is that a purely grammatical difference
is being elevated to the rank of a logical difference, for, even from the point
of view of an advocate of the latter variant of terminology, the possession of an
abstract concept is equivalent to the possession of the corresponding general
concept.
_________________________________________________________________________________
Footnotes:
[1] Later we shall give a somewhat more general definition of the concept, and a
set of situations will then be called an Aristotelian concept. At present we
shall drop the adjective "Aristotelian" for brevity.
[2] According to the terminology accepted by many logicians, juxtaposing
abstract concepts to concrete concepts is not at all the same as juxtaposing
general concepts to particular ones.
[3] See the Russian translation in the collection of articles entitled
Elektronika i kibernetika v biologii i meditsine (Electronics and Cybernetics in
Biology and Medicine), Foreign Literature Publishing House, Moscow, 1963.
[Original: Lettvin et al., Proceedings of the IRE 47, no. 11 (1959): 1940-1951.]
[4] See R. Pritchard, "Images on the Retina and Visual Perception," in the
collection of articles Problemy bioniki (Problems of Bionics), Mir Publishing
House, 1965. [Original in English: "Stabilized Images on the Retina,"
Scientific American 204 (June 1961): 72-78.]