   This is chapter 13 of "The Phenomenon of Science" by Valentin F. Turchin.
     ____________________________________________________________________________

   Contents:
     * EXPERIMENTAL PHYSICS
     * THE SCIENTIFIC METHOD
     * THE ROLE OF GENERAL PRINCIPLES
     * CRITERIA FOR THE SELECTION OF THEORIES
     * THE PHYSICS OF THE MICROWORLD
     * THE UNCERTAINTY RELATION
     * GRAPHIC AND SYMBOLIC MODELS
     * THE COLLAPSE OF DETERMINISM
     * "CRAZY" THEORIES AND METASCIENCE
     ____________________________________________________________________________

                                    CHAPTER THIRTEEN.
                                 SCIENCE AND METASCIENCE

EXPERIMENTAL PHYSICS

   WHEN THE FOUNDATIONS of the new mathematics were being constructed at the turn of
   the seventeenth century, the basic principles of experimental physics were also
   developed. Galileo (1564-1642) played a leading role in this process. He not only
   made numerous discoveries and inventions which constituted an epoch in
   themselves, but also--in his books, letters, and conversations--taught his
   contemporaries a new method of acquiring knowledge. Galileo's influence on the
   minds of others was enormous. Francis Bacon (1566-1626) was also important in
   establishing experimental science. He gave a philosophical analysis of scientific
   knowledge and the inductive method.

   Unlike the ancient Greeks, the European scientists were by no means contemptuous
   of empirical knowledge and practical activity. At the same time they were full
   masters of the theoretical heritage of the Greeks and had already begun making
   their own discoveries. This combination engendered the new method. "Those who
   have treated of the sciences," Bacon writes,

   "have been either empirics or dogmatical. The former like ants only heap up and
       use their store, the latter like spiders spin out their own webs. The bee, a
       mean between both, extracts matter from the flowers of the garden and the
       field, but works and fashions it by its own efforts. The true labor of
       philosophy resembles hers, for it neither relies entirely nor principally on
       the powers of the mind, nor yet lays up in the memory the matter afforded by
       the experiments of natural history and mechanics in its raw state, but
       changes and works it in the understanding. We have good reason, therefore, to
       derive hope from a closer and purer alliance of these faculties (the
       experimental and rational) than has yet been attempted."[1]

                                  THE SCIENTIFIC METHOD

   THE CONCEPT of the experiment assumes the existence of a theory. Without a theory
   there is no experiment: there is only observation. From the cybernetic (systems)
   point of view the experiment is a controlled observation: the controlling system
   is the scientific method, which relies on theory and dictates the organization of
   the experiment. Thus, the transition from simple observation to the experiment is
   a metasystem transition in the realm of experience and it is the first aspect of
   the emergence of the scientific method. Its second aspect is awareness of the
   scientific method as something standing above the theory--in other words,
   mastering the general principle of describing reality by means of formalized
   language, which we discussed in the previous chapter. As a whole, the emergence
   of the scientific method is one metasystem transition which creates a new level
   of control, including control of observation (organization of the experiment) and
   control of language (development of theory). The new metasystem is what we mean
   by science in the modern sense of the word. Close direct and feedback ties are
   established between the experiment and the theory within this metasystem. Bacon
   describes them this way: ''Our course and method . . . are such as not to deduce
   effects from effects, nor experiments from experiments (as the empirics do), but
   in our capacity of legitimate interpreters of nature, to deduce causes and axioms
   from effects and experiments.''[2]

   We can now give a final answer to the question: what happened in Europe in the
   early seventeenth century? A very major metasystem transition took place,
   engulfing both linguistic and nonlinguistic activity. In the sphere of
   nonlinguistic activity it took shape as the experimental method. In the realm of
   linguistic activity it gave rise to the new mathematics, which has developed by
   metasystem transitions (the stairway effect) in the direction of ever-deeper
   self-awareness as a formalized language used to create models of reality. We
   described this process in the preceding chapter without going beyond mathematics.
   We can now complete this description by showing the system within which this
   process becomes possible. This system is science as a whole with the scientific
   method as its control device--that is, the aggregate of all human beings engaged
   in science who have mastered the scientific method together with all the objects
   used by them. When we were introducing the concept of the stairway effect in
   chapter 5 we pointed out that it takes place in the case where there is a
   metasystem Y which continues to be a metasystem in relation to systems of the
   series X, X', X'', . . . , where each successive system is formed by a metasystem
   transition from the preceding one and, while remaining a metasystem, at the same
   time insures the possibility of metasystem transitions of smaller scale from X
   to X', from X' to X'', and so on. Such a system Y possesses inner potential for
   development: we called it an ultrametasystem. In the development of physical
   production ultrametasystem Y is the aggregate of human beings who have the
   ability to convert means of labor into objects of labor. In the development of
   the exact sciences ultrametasystem Y is the aggregate of people who have mastered
   the scientific method--that is, who have the ability to create models of reality
   using formalized language.

   We have seen that in Descartes the scientific method, taken in its linguistic
   aspect, served as a lever for the reform of mathematics. But Descartes did not
   just reform mathematics; while developing the same aspect of the same scientific
   method he created a set of theoretical models or hypotheses to explain physical,
   cosmic, and biological phenomena. If Galileo may be called the founder of
   experimental physics and Bacon its ideologist, then Descartes was both the
   founder and ideologist of theoretical physics. It is true that Descartes' models
   were purely mechanical (there could be no other models at that time) and
   imperfect, and most of them soon became obsolete. But those imperfections are not
   so important as the fact that Descartes established the principle of constructing
   theoretical models. In the nineteenth century, when the first knowledge of
   physics was accumulated and the mathematical apparatus was refined, this
   principle demonstrated its full utility.

   It will not be possible here to give even a brief survey of the evolution of the
   ideas of physics and its achievements or the ideas and achievements of the other
   natural sciences. We shall dwell on two aspects of the scientific method which
   are universally important, namely the role of general principles in science and
   the criteria for selecting scientific theories, and then we shall consider
   certain consequences of the advances of modern physics in light of their great
   importance for the entire system of science and for our overall view of the
   world. At the conclusion of this chapter we shall discuss some prospects for the
   development of the scientific method.

THE ROLE OF GENERAL PRINCIPLES

   BACON SET FORTH a program of gradual introduction of more and more general
   statements (''causes and axioms'') beginning with unique empirical data. He
   called this process induction (that is to say, introduction) as distinguished
   from deduction of less general theoretical statements from more general
   principles. Bacon was a great opponent of general principles; he said that the
   mind does not need wings to raise it aloft, but lead to hold it on the ground.
   During the period of the ''initial accumulation'' of empirical facts and very
   simple empirical rules this conception still had some justification (it was also
   a counterbalance to Medieval Scholasticism), but it turned out later that the
   mind still needs wings more than lead. In any case, that is true in theoretical
   physics. To confirm this let us turn to Albert Einstein. In his article entitled
   ''The Principles of Theoretical Physics,'' he writes:

   To apply his method the theoretician needs a foundation of certain general
   assumptions, so-called principles, from which he can deduce consequences. His
   activity thus breaks into two stages. In the first place he must search for the
   principles, and in the second place he must develop the consequences which follow
   from these principles. School has given him good weapons to perform the second
   task. Therefore, if the first task has been accomplished for a certain area, that
   is to say a certain aggregate of interdependencies, the consequences will not be
   long in coming. The first task mentioned, establishing the principles which can
   serve as the basis for deduction, is categorically different. Here there is no
   method which can be taught and systematically applied to achieve the goal. What
   the investigator must do is more like finding in nature precisely formulated
   general principles which reflect definite general characteristics of the set of
   experimentally determined facts.[3]

   In another article entitled ''Physics and Reality,''[4] Einstein speaks very
   categorically: ''Physics is a developing logical system of thinking whose
   foundations cannot be obtained by extraction from past experience according to
   some inductive methods, but come only by free fantasy.'' The words about "free
   fantasy" do not mean, of course, that general principles do not depend on
   experience at all but rather that they are not determined uniquely by experience.
   The example Einstein often gave is that Newton's celestial mechanics and
   Einstein's general theory of relativity were constructed from the same facts of
   experience. But they began from completely different (in a certain sense even
   diametrically opposed) general principles, which is also seen in their different
   mathematical apparatuses.

   As long as the edifice of theoretical physics had just a few ''stories'' and the
   consequences of general principles could be deduced easily and unambiguously,
   people were not aware that they had a certain freedom in establishing the
   principles. The distance between the trial and the error (or the success) in the
   trial and error method was so slight that they did not notice that they were
   using this method, but rather thought that they were deducing (although it was
   called inducing, not deducing) principles directly from experience. Einstein
   writes: ''Newton, the creator of the first vast, productive system of theoretical
   physics still thought that the basic concepts and principles of his theory
   followed from experience. Apparently this is how his statement, 'Hypotheses non
   fingo' (I do not compose hypotheses) must be understood.'' With time, however,
   theoretical physics changed into a multistory construction and the deduction of
   consequences from general principles became a complex and not always unambiguous
   business, for it often proved necessary in the process of deduction to make
   additional assumptions, most frequently "unprincipled'' simplifications without
   which the reduction to numerical calculation would have been impossible. Then it
   became clear that between the general principles of the theory and the facts
   permitting direct testing in experience there is a profound difference: the
   former are free constructions of human reason, while the latter are the raw
   material reason receives from nature. True, we should not overestimate the
   profundity of this difference. If we abstract from human affairs and strivings it
   will appear that the difference between theories and facts disappears: both are
   certain reflections or models of the reality outside human beings. The difference
   lies in the level at which the models originate. The facts, if they are
   completely ''deideologized,'' are determined by the effect of the external world
   on the human nervous system which we are compelled (for the present) to consider
   a system that does not permit alteration, and therefore we relate to facts as the
   primary reality. Theories are models embodied in linguistic objects. They are
   entirely in our power and thus we can throw out one theory and replace it with
   another just as easily as we replace an obsolete tool with a more highly refined
   one.

   Growth in the abstractness (construct quality) of the general principles of
   physical theories and their remoteness from the immediate facts of experience
   leads to a situation in which it becomes increasingly more difficult using the
   trial and error method to find a trial which has a chance of success. Reason
   begins to experience an acute need for wings to soar with, as Einstein too is
   saying. On the other hand, the increase in the distance between general
   principles and verifiable consequences makes the general principles invulnerable
   to experience within certain limits, which was also frequently pointed out by the
   classics of modern physics. Upon finding a discrepancy between the consequences
   of a theory and the experiment, the investigator faces two alternatives: look for
   the causes of the discrepancy in the general principles of the theory or look for
   them somewhere between the principles and the concrete consequences. In view of
   the great value of general principles and the significant expenditures required
   to revise the theory as a whole, the second path is always tried first. If the
   deduction of consequences from the general principles can be modified so that
   they agree with the experiment, and if this is done in a sufficiently elegant
   manner, everyone is appeased and the problem is considered solved. But sometimes
   the modification very clearly appears to be a patch, and sometimes patches are
   even placed on top of patches and the theory begins to tear open at the seams:
   nonetheless, its deductions are in agreement with the data of experience and
   continue to have their predictive force. Then these questions arise: what
   attitude should be taken toward the general principles of such a theory? Should
   we try to replace them with some other principles? What point in the
   ''patchwork'' process, how much ''patching,'' justifies discarding the old
   theory?

CRITERIA FOR THE SELECTION OF THEORIES

   FIRST OF ALL let us note that a clear awareness of scientific theories as
   linguistic models of reality substantially lessens the impact of the competition
   between scientific theories and the naive point of view (related to Platonism)
   according to which the linguistic objects of a theory only express some certain
   reality, and therefore each theory is either ''really'' true if this reality
   actually exists or "really'' false if this reality is fabricated. This point of
   view is engendered by transferring the status of the language of concrete facts
   to the language of concept-constructs. When we compare two competing statements
   such as ''There is pure alcohol in this glass'' and ''There is pure water in this
   glass,'' we know that these statements permit an experimental check and that the
   one which is not confirmed loses all meaning as a model and all truth value. It
   is in fact false and only false. Things are entirely different with statements
   which express the general principles of scientific theories. Many verifiable
   consequences are deduced from them and if some of these prove false it is
   customary to say that the initial principles (or methods of deducing
   consequences) are not applicable to the given sphere of experience; it is usually
   possible to establish formal criteria of applicability. In a certain sense,
   therefore, general principles are ''always true'': to be more precise, the
   concepts of truth and falsehood are not applicable to them, but the concept of
   their greater or lesser utility for describing real facts is applicable. Like the
   axioms of mathematics, the general principles of physics are abstract forms into
   which we attempt to squeeze natural phenomena. Competing principles stand out by
   how well they permit this to be done. But what does ''well'' mean?

   If a theory is a model of reality, then obviously it is better if its sphere of
   application is broader and if it can make more predictions. Thus, the criterion
   of the generality and predictive power of a theory is the primary one for
   comparing theories. A second criterion is simplicity; because theories are models
   intended for use by people they are obviously better when they are simpler to
   use.

   If scientific theories were viewed as something stable, not subject to
   elaboration and improvement, it would perhaps be difficult to suggest any other
   criteria. But the human race is continuously elaborating and improving its
   theories, which gives rise to one more criterion, the dynamic criterion, which is
   also the decisive one. In The Philosophy of Science this criterion was well
   stated by Philipp Frank:

   If we investigate which theories have actually been preferred because of their
   simplicity, we find that the decisive reason for acceptance has been neither
   economic nor esthetic, but rather what has often been called "dynamic.'' This
   means that the theory was preferred that proved to make science more "dynamic,"
   i.e., more fit to expand into unknown territory. This can be made clear by using
   an example that we have invoked frequently in this book: the struggle between the
   Copernican and the Ptolemaic systems. In the period between Copernicus and Newton
   a great many reasons had been invoked on behalf of one or the other system.
   Eventually, however, Newton advanced his theory of motion, which accounted
   excellently for all motions of celestial bodies (e.g., comets), while Copernicus
   as well as Ptolemy had accounted for only the motions in our planetary system.
   Even in this restricted domain, they neglected the "perturbations'' that are due
   to the interactions between the planets. However, Newton's laws originated in
   generalizations of the Copernican theory, and we can hardly imagine how they
   could have been formulated if he had started with the Ptolemaic system. In this
   respect and in many others, the Copernican theory was the more ''dynamic'' one
   or, in other words, had the greater heuristic value. We can say that the
   Copernican theory was mathematically "simpler'' and also more dynamic than the
   Ptolemaic theory.[5]

   The esthetic criterion or the criterion of the beauty of a theory, which is
   mentioned by Frank, is difficult to defend as one independent of other criteria.
   But it becomes very important as an intuitive synthesis of all the
   above-mentioned criteria. To a scientist a theory seems beautiful if it is
   sufficiently general and simple and he feels that it will prove to be dynamic. Of
   course, he may be wrong in this too.

THE PHYSICS OF THE MICROWORLD

   IN BOTH PHYSICS and pure mathematics, as the abstractness of the theories
   increased the understanding of their linguistic nature became solidly rooted. The
   decisive impetus was given to this process in the early twentieth century when
   physics entered the world of atoms and elementary particles, and quantum
   mechanics and the theory of relativity were created. Quantum mechanics played a
   particularly large part. This theory cannot be understood at all unless one
   constantly recalls that it is just a linguistic model of the microworld, not a
   representation of how it would "really" look if it were possible to see it
   through a microscope with monstrous powers of magnification; there is no such
   representation nor can there be one. Therefore the notion of the theory as a
   linguistic model of reality became a constituent part of modern physics,
   essential for successful work by physicists. Consequently their attitude toward
   the nature of their work also began to change. Formerly the theoretical physicist
   felt himself to be the discoverer of something which existed before him and was
   independent of him, like a navigator discovering new lands; now he feels he is
   more a creator of something new, like a master artisan who creates new buildings,
   machines, and tools and has complete mastery of his own tools. This change has
   even appeared in our way of talking. Traditionally, Newton is said to have
   ''discovered'' [otkryl] infinitesimal calculus and celestial mechanics; when we
   speak of a scientist today we say that he has ''created'' [sozdal], "proposed"
   [predlozhil], or ''worked out'' [razrabotal] a new theory. The expression
   ''discovered'' sounds archaic. Of course, this in no way diminishes the merits of
   the theoreticians, for creation is as honorable and inspiring an occupation as
   discovery.

   But why did quantum mechanics require awareness of the "linguistic quality'' of
   theories?

   According to the initial atomistic conception, atoms were simply very small
   particles of matter, small corpuscles which had, in particular, a definite color
   and shape which determined the color and physical properties of larger
   accumulations of atoms. The atomic physics of the early twentieth century
   transferred the concept of indivisibility from the atom to elementary
   particles--the electron, the proton, and soon after the neutron. The word
   ''atom'' began to mean a construction consisting of an atomic nucleus (according
   to the initial hypothesis it had been an accumulation of protons and electrons)
   around which electrons revolved like planets around the sun. This representation
   of the structure of matter was considered hypothetical but extremely plausible.
   The hypothetical quality was understood in the sense discussed above: the
   planetary model of the atom must be either true or false. If it is true (and
   there was virtually no doubt of this) then the electrons ''really'' are small
   particles of matter which describe certain trajectories around a nucleus. Of
   course, in comparison with the atoms of the ancients, the elementary particles
   were already beginning to lose some properties which would seem to be absolutely
   essential for particles of matter. It became clear that the concept of color had
   absolutely no application to electrons and protons. It was not that we did not
   know what color they were; the question was simply meaningless, for color is the
   result of interaction with light by at least the whole atom, and more precisely
   by an accumulation of many atoms. Doubts also arose regarding the concepts of the
   shape and dimensions of electrons. But the most sacred element of the
   representation of the material particle, that the particle has a definite
   position in space at each moment, remained undoubted and taken for granted.

THE UNCERTAINTY RELATION

   QUANTUM MECHANICS destroyed this notion, through the force of new experimental
   data. It turned out that under certain conditions elementary particles behave
   like waves, not particles; in this case they are not ''blurred'' over a large
   area of space, but keep their small dimensions and discreteness. The only thing
   that is blurred is the probability of finding them at a particular point in
   space.

   As an illustration of this let us consider figure 13.1.

   Figure 13.1. Diffraction of electrons.

   The figure shows an electron gun which sends electrons at a certain velocity
   toward a diaphragm behind which stands a screen. The diaphragm is made of a
   material which is impervious to electrons, but it has two holes through which
   electrons pass to strike the screen. The screen is coated with a substance that
   fluoresces when acted upon by electrons, so that there is a flash at the place
   struck by an electron. The stream of electrons from the gun is sufficiently
   infrequent so that each electron passes through the diaphragm and is recorded on
   the screen independently of others. The distance between the holes in the
   diaphragm is many times greater than the dimensions of the electrons (according
   to any estimate of their size) but comparable with the quantity h/p where h is
   the Planck constant and p is the momentum of the electron--i.e., the product of
   its velocity and mass. These are the conditions of the experiment. The result of
   the experiment is a distribution of flashes on the screen. The first conclusion
   from analyzing the results of the experiment is the following: electrons strike
   different points of the screen and it is impossible to predict which point each
   electron will strike. The only thing that can be predicted is the probability
   that a particular electron will strike a particular point--that is, the average
   density of flashes after a very large number of electrons have struck the screen.
   But this is just half the trouble. One can imagine that different electrons pass
   through different parts of the hole in the diaphragm, experience effects of
   differing force from the edges of the holes, and therefore are deflected
   differently. The real troubles arise when we begin to investigate the average
   density of flashes on the screen and compare it with the results which are
   obtained when we close one of the holes in the diaphragm. If an electron is a
   small particle of matter, then when it reaches the region of the diaphragm it is
   either absorbed or passes through one of the holes. Because the holes in the
   diaphragm are set symmetrically relative to the electron gun, on the average half
   of the electrons pass through each hole. This means that if we close one hole and
   pass 1 million electrons through the diaphragm then close the second hole and
   open the first and pass 1 million more electrons through, we should receive the
   same average density of flashes as if we were to pass 2 million electrons through
   the diaphragm with two holes open. But it turns out that this is not the case!
   With two holes open the distribution is different; it contains maximums and
   minimums as is the case in diffraction of waves.

   The average density of flashes can be calculated by means of quantum mechanics,
   relating the electrons to the so-called wave function, which is a certain
   imaginary field whose intensity is proportional to the probability of the
   observed events.

   It would take too much space to describe all the attempts, none successful, which
   have been made to correlate the representation of the electron as a
   ''conventional" particle (such particles have come to be called classical as
   opposed to quantum particles) with the experimental data on electron behavior.
   There is a vast literature, both specialized and popular, devoted to this
   question. The following two things have become clear. In the first place, if we
   simultaneously measure the coordinate of a quantum particle (any such particle,
   not necessarily an electron) on a certain axis X and the momentum in this
   direction p, the errors of measurement, which we designate Δx and Δp
   respectively, comply with Heisenberg's uncertainty relation:

                                  Δx · Δp ≥ h

   No clever tricks can get around this relation. When we try to measure coordinate
   X more exactly the spread of magnitudes of momentum p is larger, and vice versa.
   The uncertainty relation is a universally true law of nature, but because the
   Planck constant h is very small, the relation plays no part in measurements of
   bodies of macroscopic size.
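
   A back-of-the-envelope calculation shows the scales involved. The sketch below
   (a minimal illustration, using the relation in the form Δx · Δp ≥ h stated
   above) compares the minimum velocity uncertainty of an electron confined to
   atomic dimensions with that of a one-gram ball located to within a micrometer:

```python
# Order-of-magnitude estimates from dx * dp >= h, i.e., dv >= h / (dx * m).
h = 6.626e-34                # Planck constant, J*s

m_electron = 9.109e-31       # kg
dx_atom = 1e-10              # m, roughly one atomic diameter
print(h / (dx_atom * m_electron))   # ~7e6 m/s: enormous for an electron

m_ball = 1e-3                # kg, a one-gram ball
dx_ball = 1e-6               # m, located to within a micrometer
print(h / (dx_ball * m_ball))       # ~7e-25 m/s: utterly unobservable
```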

   In the second place, the notion that quantum particles really move along certain
   completely definite trajectories--which is to say at each moment they really have
   a completely definite coordinate and velocity (and therefore also momentum) which
   we are simply unable to measure exactly--runs up against insurmountable logical
   difficulties. On the other hand, the refusal on principle to ascribe a real
   trajectory to the quantum particle and adoption of the tenet that the most
   complete description of the state of a particle is an indication of its wave
   function yields a logically flawless, mathematically simple and elegant theory
   which fits brilliantly with experimental facts; specifically, the uncertainty
   relation follows from it immediately. This is the theory of quantum mechanics.
   The work of Niels Bohr (1885-1962), the greatest scientist-philosopher of our
   time, played the major part in clarifying the physical and logical foundations
   of quantum mechanics and interpreting it philosophically.

GRAPHIC AND SYMBOLIC MODELS

   SO AN ELECTRON does not have a trajectory. The most that can be given for an
   electron is its wave function, whose square yields the probability of finding
   the electron in the proximity of a particular point in
   space. But at the same time we say that the electron is a material particle of
   definite (and very small) dimensions. Combining these two representations, as was
   demanded by observed facts, proved a very difficult matter and even today there
   are still people who reject the standard interpretation of quantum mechanics
   (which has been adopted by a large majority of physicists following the Bohr
   school) and want to give the quantum particles back their trajectories no matter
   what. Where does such persistence come from? After all, the expropriation of
   color from the electrons was completely painless and, from a logical point of
   view, recognizing that the concept of trajectory cannot apply to the electron is
   no different in principle from recognizing that the concept of color does not
   apply. The difference here is that when we reject the concept of color we are
   being a little bit hypocritical. We say that the electron has no color, but we
   ourselves picture it as a little greyish (or shiny, it is a matter of taste)
   sphere. We substitute an arbitrary color for the absence of color and this does
   not hinder us at all in using our model. But this trick does not work in relation
   to position in space. The notion of an electron which is located somewhere at
   every moment hinders understanding of quantum mechanics and comes into
   contradiction with experimental data. Here we are forced to reject completely the
   graphic geometric representation of particle movement. And this is what causes
   the painful reaction. We are so accustomed to associating the space-time picture
   with true reality, with what exists objectively and independently of us, that it
   is very difficult for us to believe in an objective reality which does not fit
   within this conception. And we ask ourselves again and again: after all, if the
   electron is not ''blurred'' in space, then it must really be somewhere, mustn't
   it?

   It requires real mental effort to recognize and feel the meaninglessness of this
   question. First we must be aware that all our knowledge and theories are
   secondary models of reality, that is, models of the primary models which are the
   data of sensory experience. These data bear the ineradicable imprint of the
   organization of our nervous system and because space-time concepts are set in the
   very lowest levels of the nervous system, none of our perceptions and
   representations, none of the products of our imagination, can go outside the
   framework of space-time pictures. But this framework can still be broadened to
   some extent. This must be done, however, not by an illusory movement
   ''downward,'' toward objective reality ''as it is, independent of our sense
   organs,'' but rather by a movement "upward," that is, by constructing secondary
   symbolic models of reality. Needless to say, the symbols of the theory preserve
   their continuous space-time existence just as the primary data of experience do.
   But in the relations between the one and the other, which is to say in the
   semantics of the theory, we can allow ourselves significant freedom if we are
   guided by the logic of new experimental facts, and not by our customary
   space-time intuition. And we can construct a sign system whose functioning is in
   no way related to graphic representations but is entirely appropriate to the
   condition of adequately describing reality. Quantum mechanics is such a system.
   In this system the quantum particle is neither a little greyish sphere nor a
   shiny one, and it is not a geometric point; it is a certain concept, a functional
   node of the system which, together with the other nodes, ensures description and
   anticipation of the real facts of experience: flashes on the screen, instrument
   readings, and the like.

   Let us return to the question of how the electron ''really'' moves. We have seen
   that, owing to the uncertainty relation, the experiment cannot in principle give
   an answer to this question. This question is therefore meaningless as an
   '"external part'' of the physical model of reality. All that we can do is to
   ascribe a purely theoretical meaning to it. But then it loses its direct linkage
   with observed phenomena and the expression "really" becomes pure deception! When
   we go outside the sphere of perception and declare that such-and-such ''really''
   takes place we are always moving upward, not downward; we are constructing a
   pyramid of linguistic objects and it is only because of the optical illusion that
   it seems to us we are going deeper into the realm which lies beneath sensory
   experience. To put it metaphorically, the plane that separates sensory experience
   from reality is absolutely impervious; and when we attempt to discern what is
   going on beneath it we see only the upside-down reflection of the pyramid of
   theories. This does not mean that true reality is unknowable and our theories are
   not correct models of it; one must remember, however, that all these models lie
   on this side of sensory experience and it is meaningless to correlate distinct
   elements of theories with the illusory ''realities'' on the other side, as was
   done by Plato for example. The representation of the electron as a little sphere
   moving along a trajectory is just as much a construction as is the interlinking
   of the symbols of quantum theory. It differs only in that it includes a
   space-time picture to which, following convention, we ascribe illusory reality by
   using the expression ''really,'' which is meaningless in this case.

   The transition to conscious construction of symbolic models of reality that do
   not rely on any graphic representations of physical objects is the great
   philosophical achievement of quantum mechanics. In fact physics has been a
   symbolic model since Newton's time and it owes its successes (numerical
   calculations) to precisely this symbolic nature; but graphic representations were
   present as an essential element. Now they are not essential and this has
   broadened the class of possible models. Those who want to bring back the graphic
   quality no matter what, although they see that the theory works better without
   it, are in fact asking that the class of models be narrowed. They will hardly be
   successful. They are like the odd fellow who hitched his horse to a steam
   locomotive for, although he could see that the train moved without a horse, it
   was beyond his powers to recognize such a situation as normal. Symbolic models
   are a steam engine which has no need to be harnessed to the horse of graphic
   representations for each and every concept.

THE COLLAPSE OF DETERMINISM

   THE SECOND IMPORTANT result of quantum mechanics, the collapse of determinism,
   was significant in general philosophy. Determinism is a philosophical concept. It
   is the name used for the view which holds that all events occurring in the world
   have definite causes and necessarily occur; that is, they cannot not occur.
   Attempts to make this definition more precise reveal the logical defects in it
   which hinder precise formulation of this viewpoint as a scientific proposition
   without introducing any additional representations about objective reality. In
   fact, what does ''events have causes'' mean? Can it really be possible to
   indicate some finite number of ''causes'' of a given event and say that there are
   no others? And what does it mean that the event "cannot not occur?'' If this
   means only that it has occurred then the statement becomes a tautology.

   Philosophical determinism can, however, obtain a more precise interpretation
   within the framework of a scientific theory which claims to be a universal
   description of reality. It actually did receive such an interpretation within the
   framework of mechanism (mechanical philosophy), the philosophical-scientific
   conception which emerged on the basis of the advances of classical mechanics in
   application to the motions of the celestial bodies. According to the mechanistic
   conception the world is three-dimensional Euclidean space filled with a multitude
   of elementary particles which move along certain trajectories. Forces operate
   among the particles depending on their arrangement relative to one another and
   the movement of particles follows the laws of Newton's mechanics. With this
   representation of the world, its exact state (that is, the coordinates and
   velocities of all particles) at a certain fixed moment in time uniquely
   determines the exact state of the world at any other moment. The famous French
   mathematician and astronomer P. Laplace (1749-1827) expressed this proposition in
   the following words:

   Given for one instant an intelligence which could comprehend all the forces by
       which nature is animated and the respective situation of the beings who
       compose it--an intelligence sufficiently vast to submit these data to
       analysis--it would embrace in the same formula the movements of the greatest
       bodies of the universe and those of the lightest atom; for it, nothing would
       be uncertain and the future, as the past, would be present to its
       eyes.[17][6]

   This conception came to be called Laplacian determinism. It is a proper and
   inevitable consequence of the mechanistic conception of the world. It is true
   that Laplace's formulation requires a certain refinement from a modern point of
   view because we cannot recognize as proper the concepts of an all-knowing reason
   or absolute precision of measurement. But it can be modernized easily, almost
   without changing its meaning. We say that if the coordinates and velocities of
   all particles in a sufficiently large volume of space are known with adequate
   precision then it is possible to calculate the behavior of any system in any
   given time interval with any given precision. The conclusion that all future
   states of the universe are predetermined can be drawn from this formulation just
   as from Laplace's initial formulation. By unrestrictedly increasing the precision
   and scope of measurements we unrestrictedly extend prediction periods. Because
   there are no restrictions in principle on the precision and range of measurements
   (that is, restrictions which follow not from the limitations of human
   capabilities but from the nature of the objects of measurement) we can picture
   the extreme case and say that really the entire future of the world is already
   absolutely and uniquely determined today. In this case the expression ''really''
   acquires a perfectly clear meaning: our intuition easily recognizes that this
   ''really'' is proper and we object to its discrediting.
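
   The Laplacian program is easy to caricature in a few lines of code. The
   following sketch (a toy one-particle "world" with an arbitrarily chosen force
   law, not a claim about real mechanics) shows the essential point: the exact
   state at one moment fixes the state at every later moment, so running the
   calculation twice from the same data gives the same future:

```python
# A toy Laplacian "world": one particle, an illustrative restoring force,
# Newton's second law marched forward in small time steps.
def force(x):
    return -x                      # arbitrary force law, F = -x

def evolve(x0, v0, steps=10_000, dt=1e-3):
    """The exact present state (x0, v0) is all the calculation needs."""
    x, v = x0, v0
    for _ in range(steps):
        v += force(x) * dt         # update velocity from the force
        x += v * dt                # update position from the velocity
    return x, v

# Identical present states yield an identical future, digit for digit:
print(evolve(1.0, 0.0))
print(evolve(1.0, 0.0))
```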

   Thus, the mechanistic conception of the world leads to the notion of the complete
   determinism of phenomena. But this contradicts our own subjective feeling of free
   choice. There are two ways out of this: to recognize the feeling of freedom of
   choice as ''illusory'' or to recognize the mechanistic conception as unsuitable
   as a universal picture of the world. It is already difficult today to say how
   thinking people of the "pre-quantum'' age were divided between these two points
   of view. If we approach the question from a modern standpoint, even knowing
   nothing of quantum mechanics, we must firmly adhere to the second point of view.
   We now understand that the mechanistic conception, like any other conception, is
   only a secondary model of the world in relation to the primary data of
   experience; therefore the immediate data of experience always have priority over
   any theory. The feeling of freedom of choice is a primary fact of experience just
   like other primary facts of spiritual and sensory experience. A theory cannot
   refute this fact; it can only correlate new facts with it, a procedure which,
   where certain conditions are met, we call explanation of the fact. To declare
   freedom of choice ''illusory'' is just as meaningless as telling a person with a
   toothache that his feeling is ''illusory.'' The tooth may be entirely healthy and
   the feeling of pain may be a result of stimulation of a certain segment of the
   brain, but this does not make it "illusory.''

   Quantum mechanics destroyed determinism. Above all the representation of
   elementary particles as little corpuscles moving along definite trajectories
   proved false, and as a consequence the entire mechanistic picture of the
   world--which was so understandable, customary, and seemingly absolutely beyond
   doubt--also collapsed. Twentieth-century physicists can no longer tell people
   what the world in which they live is really like, as nineteenth-century
   physicists could. But determinism collapsed not only as a part of the mechanistic
   conception, but also as a part of any picture of the world. In principle one
   could conceive of a complete description (picture) of the world that would
   include only really observed phenomena but would give unambiguous predictions of
   all phenomena that will ever be observed. We now know that this is impossible. We
   know situations exist in which it is impossible in principle to predict which of
   the sets of conceivable phenomena will actually occur. Moreover, according to
   quantum mechanics these situations are not the exception; they are the general
   rule. Strictly determined outcomes are the exception to the rule. The quantum
   mechanics description of reality is a fundamentally probabilistic description and
   includes unequivocal predictions only as the extreme case.

   As an example let us again consider the experiment with electron diffraction
   depicted in figure 13.1. The conditions of the experiment are completely
   determined when all geometric parameters of the device and the initial momentum
   of the electrons released by the gun are given. All the electrons propelled from
   the gun and striking the screen are operating under the same conditions and are
   described by the same wave function. However, they are absorbed (produce flashes)
   at different points of the screen, and it is impossible to predict beforehand at
   what point an electron will produce a flash. It is even impossible to predict
   whether the electron will be deflected upward or downward in our picture; all
   that can be done is to indicate the probability of striking different segments of
   the screen.
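
   A small simulation conveys what such a fundamentally probabilistic description
   does and does not provide. In the sketch below the screen density is an
   invented two-slit-like curve (illustrative only): each individual flash is a
   random draw and cannot be predicted, yet the accumulated density of many
   flashes reproduces the predicted curve:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented screen density with two-slit-like maximums and minimums.
x = np.linspace(-40.0, 40.0, 2001)
density = (1 + np.cos(2 * np.pi * x / 20)) * np.exp(-((x / 25) ** 2))
prob = density / density.sum()

# Each electron is one random draw: individually unpredictable.
flashes = rng.choice(x, size=100_000, p=prob)
print(flashes[:5])                 # five flashes, no discernible order

# The average density of many flashes matches the predicted curve.
hist, edges = np.histogram(flashes, bins=80, range=(-40, 40), density=True)
peak = (edges[hist.argmax()] + edges[hist.argmax() + 1]) / 2
print(peak)                        # near 0, the predicted central maximum
```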

   It is permissible, however, to ask the following question: why are we confident
   that if quantum mechanics cannot predict the point which an electron will strike
   no other future theory will be able to do this?

   We shall give two answers to this question. The first answer can be called
   formal. Quantum mechanics is based on the principle that description by means of
   the wave function is a maximally complete description of the state of the quantum
   particle. This principle, in the form of the uncertainty relation that follows
   from it, has been confirmed by an enormous number of experiments whose
   interpretation contains nothing but concepts of the lowest level, directly linked
   to observed quantities. The conclusions of quantum mechanics, including the more
   complex mathematical calculations, have been confirmed by an even larger number
   of experiments. And there are absolutely no signs that we should doubt this
   principle. But this is equivalent to the impossibility of predicting the exact
   outcome of an experiment. For example, to indicate what point on the screen an
   electron will strike one must have more knowledge about it than the wave function
   provides.

   The second answer requires an understanding of why we are so disinclined to agree
   that it is impossible to predict the point the electron will strike. Centuries of
   development in physics have accustomed people to the thought that the movement of
   inanimate bodies is controlled exclusively by causes external to them and that
   these causes can always be discovered by sufficiently precise investigation. This
   statement was completely justified as long as it was considered possible to watch
   a system without affecting it, which held true for experiments with macroscopic
   bodies. Imagine that figure 13.1 shows the distribution of cannonballs instead of
   electrons, and that we are studying their movement. We see that in one case the
   ball is deflected upward while in another it goes downward; we do not want to
   believe that this happens by itself, but are convinced that the difference in the
   behavior of the cannonballs can be explained by some real cause. We photograph
   the flight of the ball, do some other things, and finally find phenomena A1 and
   A2, which are linked to the flight of the cannonball in such a way that where
   A1 is present the ball is deflected upward and where A2 is present it goes
   downward. We therefore say that A1 is the cause of deflection upward while A2
   is the cause of deflection downward. Possibly our experimental area will prove
   inadequate or we shall simply get tired of investigating and not find the
   sought-for cause. We shall still remain convinced that a cause really exists, and
   that if we had looked harder we would have found phenomena A1 and A2.

   In the experiment with electrons, once again we see that the electron is
   deflected upward in some cases and downward in others and in the search for the
   cause we try to follow its movement, to peek behind it. But it turns out here
   that we cannot peek behind the electron without having a most catastrophic effect
   on its destiny. A stream of light must be directed at the electron if we are to
   ''see'' it. But the light interacts with the substance in portions, quanta, which
   obey the same uncertainty relation as do electrons and other particles. Therefore
   it is not possible to go beyond the uncertainty relation by means of light or by
   any other investigative means. In attempting to determine the coordinate of the
   electron more precisely by means of photons we either transfer such a large and
   indeterminate momentum to it that it spoils the entire experiment or we measure
   the coordinate so crudely that we do not find out anything new about it. Thus,
   phenomena A1 and A2 (the causes according to which the electron is deflected
   upward in some cases and downward in others) do not exist in reality. And the
   statement that there "really'' is some cause loses any scientific meaning.

   Thus, there are phenomena that have no causes, or more precisely, there are
   series of possibilities from which one is realized without any cause. This does
   not mean that the principle of causality should be entirely discarded: in the
   same experiment, by turning off the electron gun we cause the flashes on the
   screen to completely disappear, and turning off the gun does cause this. But this
   does mean that the principle must be narrowed considerably in comparison with the
   way it was understood in classical mechanics and the way it is still understood
   in the ordinary consciousness. Some phenomena have no causes; they must be
   accepted simply as something given. That is the kind of world we live in.

   The second answer to the question about the reasons for our confidence that
   unpredictable phenomena exist is that the uncertainty relation assists us in
   clarifying not only a mass of new facts but also the nature of the break
   regarding causality and predictability that occurs when we enter the microworld.
   We see that belief in absolute causality originated from an unstated assumption
   that there are infinitely subtle means of watching and investigating, of
   ''peeking'' behind the object. But when they came to elementary particles
   physicists found that there is a minimum quantum of action, measured by the
   Planck constant h, and this creates a vicious circle in attempts to make the
   description of one particle by means of another arbitrarily detailed. So
   absolute causality collapsed, and with it went determinism. From a general
   philosophical point of view it is entirely natural that if matter is not
   infinitely divisible then description cannot be infinitely detailed so that the
   collapse of determinism is more natural than its survival would have been.

"CRAZY" THEORIES AND METASCIENCE[18][7]

   THE ABOVEMENTIONED SUCCESSES of quantum mechanics refer primarily to the
   description of nonrelativistic particles--that is, particles moving at velocities
   much slower than the velocity of light, so that effects related to relativity
   theory (relativistic effects) can be neglected. We had nonrelativistic quantum
   mechanics in mind when we spoke of its completeness and logical harmony.
   Nonrelativistic quantum mechanics is adequate to describe phenomena at the atomic
   level, but the physics of elementary high-energy particles demands the creation
   of a theory combining the ideas of quantum mechanics with the theory of
   relativity. Only partial successes have been achieved thus far on this path; no
   single, consistent theory of elementary particles which explains the enormous
   material accumulated by experimenters exists. Attempts to construct a new theory
   by superficial modifications of the old theory do not yield significant results.
   Creation of a satisfactory theory of elementary particles runs up against the
   uniqueness of this realm of phenomena, phenomena which seem to take place in a
   completely different world and demand for their explanation completely
   unconventional concepts which differ fundamentally from our customary scheme of
   concepts.

   In the late 1950s Heisenberg proposed a new theory of elementary particles. Upon
   becoming familiar with it Bohr said that it could hardly prove true because it
   was ''not crazy enough.'' The theory was not in fact recognized, but Bohr's
   pointed remark became known to all physicists and even entered popular writing.
   The word "crazy" [Russian sumasshedshaya, literally ''gone out of the mind''] was
   naturally associated with the epithet ''strange,'' which was applied to the world
   of elementary particles. But does ''crazy'' mean just ''strange,'' ''unusual"?
   Probably if Bohr had said "not unusual enough,'' it would not have become an
   aphorism. The word ''crazy'' has a connotation of ''unreasoned,'' ''coming from
   an unknown place,'' and brilliantly characterizes the current situation of the
   theory of elementary particles, in which everyone recognizes that the theory must
   be fundamentally revised, but no one knows how to do it.

   The question arises: does the ''strangeness'' of the world of elementary
   particles--the fact that our intuition, developed in the macroworld, does not
   apply to it--doom us to wander eternally in the darkness?

   Let us look into the nature of the difficulties which have arisen. The principle
   of creating formalized linguistic models of reality did not suffer in the
   transition to study of the microworld. But if the wheels of these models, the
   physical concepts, came basically from our everyday macroscopic experience and
   were only refined by formalization, then for the new, ''strange" world we need
   new, "strange'' concepts. But we have nowhere to take them from; they will have
   to be constructed and also combined properly into a whole scheme. In the first
   stage of study of the microworld the wave function of nonrelativistic quantum
   mechanics was constructed quite easily by relying on the already existing
   mathematical apparatus used to describe macroscopic phenomena (the mechanics of
   the material point, the mechanics of continuous media, and matrix theory).
   Physicists were simply lucky. They found prototypes of what they needed in two
   (completely different) concepts of macroscopic physics and they used them to make
   a ''centaur,'' the quantum concept of the wave-particle. But we cannot count on
   luck all the time. The more deeply we go into the microworld the greater are the
   differences between the wanted concept-constructs and the ordinary concepts of
   our macroscopic experience: it thus becomes less and less probable that we shall
   be able to improvise them, without any tools, without any theory. Therefore we
   must subject the very task of constructing scientific concepts and theories to
   scientific analysis, that is, we must make the next metasystem transition. In
   order to construct a definite physical theory in a qualified manner we need a
   general theory of the construction of physical theories (a metatheory) in the
   light of which the way to solve our specific problem will become clear.

   The metaphor of the graphic models of the old physics as a horse and the abstract
   symbolic models as a steam engine can be elaborated as follows. Horses were put
   at our disposal by nature. They grow and reproduce by themselves and it is not
   necessary to know their internal organization to make use of them. But we
   ourselves must build the steam engine. To do this we must understand the
   principles of its organization and the physical laws on which they are based and
   furthermore we must have certain tools for the work. In attempting to construct a
   theory of the ''strange'' world without a metatheory of physical theories we are
   like a person who has decided to build a steam engine with his bare hands or to
   build an airplane without having any idea of the laws of aerodynamics.

   And so the time has come for the next metasystem transition. Physics needs . . .
   I want to say ''metaphysics," but, fortunately for our terminology, the
   metatheory we need is a metatheory in relation to any natural science theory
   which has a high degree of formalization and therefore it is more correct to call
   it a metascience. This term has the shortcoming of creating the impression that a
   metascience is something fundamentally outside of science whereas in fact the new
   level of the hierarchy created by this metasystem transition must, of course, be
   included in the general body of science, thereby broadening it. The situation
   here is similar to the situation with the term metamathematics: after all,
   metamathematics is also a part of mathematics. Inasmuch as the term
   ''metamathematics'' was acceptable nonetheless, the term ''metascience'' may also
   be considered acceptable. But because a very important part of metascientific
   considered acceptable. But because a very important part of metascientific
   investigation is investigation of the concepts of a theory, the term conceptology
   may also be suggested.

   The basic task of metascience can be formulated as follows. A certain aggregate
   of facts or a certain generator of facts is given. How can one construct a theory
   that describes these facts effectively and makes correct predictions?
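
   Turchin leaves this task at the level of a general formulation, but a toy
   illustration may make its shape concrete. The following sketch is my own
   illustration, not anything from the book; the candidate formulas and the
   scoring rule are invented for the example. It treats a "generator of facts" as
   a stream of observations (x, y) and a "theory" as a formula selected from a
   space of candidates by how well it predicts facts it has not yet seen:

      import math

      # A "generator of facts": produces observations (x, y). Here the hidden
      # law is y = x**2 + 1; in a real problem it is unknown.
      def generate_facts(n):
          return [(x, x**2 + 1) for x in range(n)]

      # A space of candidate "theories": formalized models predicting y from x.
      CANDIDATES = {
          "linear: y = 2x": lambda x: 2 * x,
          "quadratic: y = x^2 + 1": lambda x: x**2 + 1,
          "exponential: y = e^x": lambda x: math.exp(x),
      }

      def error(theory, facts):
          # Mean squared error of the theory's predictions on the given facts.
          return sum((theory(x) - y) ** 2 for x, y in facts) / len(facts)

      def select_theory(facts):
          # A theory must not merely describe the known facts but predict new
          # ones: fit on the early facts, rank by error on the later ones.
          train, test = facts[:len(facts) // 2], facts[len(facts) // 2:]
          viable = {n: t for n, t in CANDIDATES.items()
                    if error(t, train) < 1e-9}
          return min(viable, key=lambda n: error(viable[n], test))

      print(select_theory(generate_facts(10)))  # -> quadratic: y = x^2 + 1

   The point of the toy is only the structure of the problem: the facts are
   given, and the theory is constructed and judged by its predictions.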

   If we want metascience to go beyond general statements, it must be constructed
   as a full-fledged mathematical theory, and its object, the natural science
   theory, must be presented in a formalized (albeit simplified: such is the price
   of formalization) manner, amenable to mathematical treatment. Represented in
   this form, the scientific theory is a formalized linguistic model whose
   mechanism is a hierarchical system of concepts--a point of view we have carried
   through the entire book. From this point of view, the creation of a mathematical
   metascience is the next natural metasystem transition, and when we make this
   transition our objects of study become formalized languages as a whole--not just
   their syntax but also, and primarily, their semantics, their application to the
   description of reality. The entire course of development of physico-mathematical
   science leads us to this step.
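
   To make the syntax/semantics distinction concrete, here is a minimal sketch
   (again my own illustration, with invented names) of a formalized language in
   this sense: the syntax fixes which expressions are well formed, and the
   semantics is an interpretation mapping those expressions onto a domain.
   Studying the language "as a whole" means studying this pairing, not the syntax
   alone:

      from dataclasses import dataclass

      # Syntax: the well-formed expressions of a tiny formalized language.
      @dataclass
      class Sym:
          name: str

      @dataclass
      class App:
          op: str
          left: object
          right: object

      expr = App("+", Sym("a"), App("*", Sym("b"), Sym("b")))  # a + b*b

      # Semantics: an interpretation mapping expressions onto a domain --
      # here, symbols denote measured values and operators denote arithmetic.
      def evaluate(e, env):
          if isinstance(e, Sym):
              return env[e.name]
          ops = {"+": lambda x, y: x + y, "*": lambda x, y: x * y}
          return ops[e.op](evaluate(e.left, env), evaluate(e.right, env))

      print(evaluate(expr, {"a": 1.0, "b": 3.0}))  # -> 10.0

   The same syntax admits many different interpretations; which interpretation
   fits the facts is precisely a question about the language's application to
   reality.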

   But our reasoning thus far has been based on the needs of physics. How do
   things stand from the point of view of pure mathematics?

   Whereas theoretical physicists know what they need but can do little, "pure"
   mathematicians might rather be reproached for doing a great deal but not knowing
   what they need. There is no question that many pure mathematical works are needed
   to give cohesion and harmony to the entire edifice of mathematics, and it would
   be silly to demand immediate "practical" application from every work. All the
   same, mathematics is created to learn about reality, not for esthetic or sporting
   purposes like chess, and even the highest stages of mathematics are in the last
   analysis needed only to the extent that they promote achievement of this goal.

   Apparently, upward growth of the edifice of mathematics is always necessary and
   unquestionably valuable. But mathematics is also growing in breadth, and it is
   becoming increasingly difficult to determine what is needed and what is not,
   and, if it is needed, to what extent. Mathematical technique has now developed
   to the point where constructing a few new mathematical objects within the
   framework of the axiomatic method and investigating their characteristics has
   become almost as common a matter, although not always as easy, as computations
   with fractions were for the ancient Egyptian scribes. But who knows whether
   these objects will prove necessary? The need is emerging for a theory of the
   application of mathematics, and such a theory is actually a metascience.
   Therefore the development of metascience plays a guiding and organizing role in
   relation to the more concrete problems of mathematics.

   The creation of an effective metascience is still far distant. It is difficult
   today even to picture its general outlines; much preparatory work must be done
   to clarify them. Physicists must master "Bourbakism" and develop a "feel" for
   the play of mathematical structures which leads to the emergence of rich
   axiomatic theories suitable for the detailed description of reality. Together
   with mathematicians they must learn to break symbolic models down into their
   individual constructive elements in order to compose the necessary blocks from
   them. And, of course, the technique of performing formal computations with
   arbitrary symbolic expressions (and not just numbers) on computers must be
   developed. Just as the transition from arithmetic to algebra takes place only
   after complete assimilation of the technique of arithmetic computation, so the
   transition to a theory of creating arbitrary sign systems demands highly
   sophisticated techniques for operating on symbolic expressions and a practical
   answer to the problem of carrying out cumbersome formal computations. Whether
   the new method will help resolve the specific difficulties that now face the
   theory of elementary particles, or whether those will be resolved earlier by
   "old-time" manual methods, we do not know; in the end it is not important,
   because new difficulties will undoubtedly arise. One way or another, the
   creation of a metascience is on the agenda. Sooner or later the problem will be
   solved, and then people will receive a new weapon for conquering the strangest
   and most fantastic worlds.
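
   The kind of formal computation meant here, in which the inputs and outputs are
   symbolic expressions rather than numbers, can be shown with a small sketch (my
   own example; the representation of expressions as nested tuples is an
   arbitrary choice). Symbolic differentiation is a classic case:

      # Symbolic differentiation: the computer manipulates expressions, not
      # numbers. ("*", "x", "x") stands for the formula x*x.
      def diff(e, var):
          if e == var:                  # d(x)/dx = 1
              return 1
          if not isinstance(e, tuple):  # constants and other variables
              return 0
          op, a, b = e
          if op == "+":                 # sum rule
              return ("+", diff(a, var), diff(b, var))
          if op == "*":                 # product rule
              return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
          raise ValueError(f"unknown operator {op!r}")

      print(diff(("*", "x", "x"), "x"))
      # -> ('+', ('*', 1, 'x'), ('*', 'x', 1)), i.e. x + x before simplification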
   _________________________________________________________________________________

   [19][1] Francis Bacon, Novum Organum, Great Books of the Western World,
   Encyclopedia Britannica, 1955, Aphorism 95, p. 126.

   [20][2] Bacon, Aphorism 117, p. 131.

   [21][3] See the collection A. Einstein, Fizika i real'nost (Physics and
   Reality), Moscow: Nauka Publishing House, 1965. The quotations below are also
   taken from this Russian work. [Original article available in Mein Weltbild,
   Amsterdam, 1934 --trans.]

   [22][4] Original article in Franklin Institute Journal, vol. 221, 1936, pp.
   313-57 --trans.

   [23][5] Philipp Frank, The Philosophy of Science (Englewood Cliffs, N.J.:
   Prentice-Hall, 1957).

   [24][6] Laplace, P., Opyt filosofii teorii veroyatnostei, Moscow, 1908, p. 9.
   [Original: Essai philosophique sur les probabilités, 1814. English translation:
   A Philosophical Essay on Probabilities, New York: Dover Publications,
   1951 --trans.]

   [25][7] This section is based on the author's article published under the same
   title in the journal Voprosy filosofii (Questions of Philosophy), No. 5, 1968.
     ____________________________________________________________________________

References

   1. http://pespmc1.vub.ac.be/POS/default.html
   2. http://pespmc1.vub.ac.be/turchin.html
   3. http://pespmc1.vub.ac.be/POS/Turchap13.html#Heading2
   4. http://pespmc1.vub.ac.be/POS/Turchap13.html#Heading3
   5. http://pespmc1.vub.ac.be/POS/Turchap13.html#Heading4
   6. http://pespmc1.vub.ac.be/POS/Turchap13.html#Heading5
   7. http://pespmc1.vub.ac.be/POS/Turchap13.html#Heading6
   8. http://pespmc1.vub.ac.be/POS/Turchap13.html#Heading7
   9. http://pespmc1.vub.ac.be/POS/Turchap13.html#Heading8
  10. http://pespmc1.vub.ac.be/POS/Turchap13.html#Heading9
  11. http://pespmc1.vub.ac.be/POS/Turchap13.html#Heading10
  12. http://pespmc1.vub.ac.be/POS/Turchap13.html#fn0
  13. http://pespmc1.vub.ac.be/POS/Turchap13.html#fn1
  14. http://pespmc1.vub.ac.be/POS/Turchap13.html#fn2
  15. http://pespmc1.vub.ac.be/POS/Turchap13.html#fn3
  16. http://pespmc1.vub.ac.be/POS/Turchap13.html#fn4
  17. http://pespmc1.vub.ac.be/POS/Turchap13.html#fn5
  18. http://pespmc1.vub.ac.be/POS/Turchap13.html#fn6
  19. http://pespmc1.vub.ac.be/POS/Turchap13.html#fnB0
  20. http://pespmc1.vub.ac.be/POS/Turchap13.html#fnB1
  21. http://pespmc1.vub.ac.be/POS/Turchap13.html#fnB2
  22. http://pespmc1.vub.ac.be/POS/Turchap13.html#fnB3
  23. http://pespmc1.vub.ac.be/POS/Turchap13.html#fnB4
  24. http://pespmc1.vub.ac.be/POS/Turchap13.html#fnB5
  25. http://pespmc1.vub.ac.be/POS/Turchap13.html#fnB6

