History

An historical essay on cybernetics (Peter Asaro)


Written by Angus Jenkinson · 9 min read

An essay from the archives by Peter M. Asaro

This essay has been resurfaced because, although it represents a point of view on ground that others would see differently, it gives an interesting take, covers much history, and lets us reflect on how cybernetics is seen. How do you respond? —AJ

The word “cybernetics” was coined by MIT mathematician Norbert Wiener in the summer of 1947 to refer to the new science of command and control in animals and machines which he helped to establish and develop. The word was derived from the Greek kubernetes, meaning “steersman” or “ship pilot” (see Cybernetics, Wiener). Unknown to Wiener at the time, Plato had used the adjective kubernetiken in the Gorgias to refer to the “science of piloting,” and the French physicist André-Marie Ampère had derived the French word cybernétique directly from the Greek to refer to the science of government in his classification of sciences, the Essai sur la philosophie des sciences. Cybernetics holds that complex systems, such as living organisms, societies and the brain, are self-regulated by the feedback of information. By systematically analyzing the feedback mechanisms which regulate complex systems, cybernetics hopes to discover the means of controlling these systems technologically, and to develop the capability of synthesizing artificial systems with similar capacities.

Early work in cybernetics was primarily concerned with two problems: how best to design “man-machine” systems which depended upon the performance of both humans and machines, and how to illuminate the similarities between computers and the brain. Work on the former problem led to detailed studies of human performance and influenced the fields of ergonomics and Human-Computer Interaction. Work on the latter problem sought to identify the feedback mechanisms responsible for mental properties, and to build computers which simulated them, giving rise to the fields of cognitive science and Artificial Intelligence.

Many of the people who worked on the digital computer projects in the U.S. and England during the 1940s also participated in cybernetics conferences, and saw their work as a contribution to the movement. The mathematician and designer of the IAS Machine, John von Neumann, for example, engaged in heated debates over the mechanisms of memory in the brain at several cybernetics meetings. The connections which evolved between digital computers and theories of the brain were reflected in the popular media which, for many years, referred to computers as “electronic brains” or “giant brains.” Even today, we call the numeric storage of the computer its “memory.”

The first paper to bring together the central concepts of cybernetics was one written by the British psychiatrist W. Ross Ashby in 1940. There he outlined a theory of how a concrete physical mechanism can exhibit adaptive properties once thought to be abstract properties held only by living, thinking beings. His theory is based on the concept of “homeostasis” developed by the physiologist Walter Cannon (1932) to explain the biological mechanisms which maintain vital balances within an organism, such as the regulation of blood pressure, blood sugar, and body temperature.

Ashby’s idea is that a mechanism which can alter its internal configurations can do a random search for a configuration which achieves some desired “goal.” The objective of an organism is to maintain a vital quantity in a stable equilibrium, like body temperature, by a complex set of mechanisms such as sweating and shivering. For a machine, the goal is to keep the values of certain “essential variables” within a desired range, and when these fall outside that range, to randomly vary the non-essential variables it can control until the values of the essential variables are restored. He called this mechanism of trial and error a “functional circuit” because it responded to its own success or failure, but later recognized it to be identical to the concept of feedback.

Early work in cybernetics was primarily concerned with two problems: “man-machine” systems and the similarities between computers and the brain

The theory thus offers a way to explain learning and biological adaptation in terms of a single type of physical mechanism. In 1947, Ashby built an analog computer to demonstrate his idea. Called the Homeostat, it consisted of four interconnected units which sought to establish a pattern of electrical currents between them such that the whole ensemble would resist various external disturbances. The model of a goal-directed search which it embodied has become central in Artificial Intelligence.
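
To make Ashby’s trial-and-error scheme concrete, here is a minimal sketch in Python (the variable names, tolerance band and update rule are illustrative assumptions, not a model of the Homeostat’s actual circuitry). An “essential variable” is nudged by external disturbances, and whenever it leaves its permitted range the controllable parameters are re-drawn at random until a configuration is found that holds it there.

```python
import random

# A rough sketch of Ashby-style adaptation (illustrative only).
# One "essential variable" must stay inside [LOW, HIGH]. Whenever it drifts
# outside, the controllable parameters are re-drawn at random (trial and error)
# until a configuration is found that keeps it in range despite disturbances.
LOW, HIGH = -1.0, 1.0

def act(essential, params, disturbance):
    """One time step: an external disturbance arrives, and the current
    parameters apply a simple linear correction toward their setpoint."""
    return essential + disturbance + params["gain"] * (params["setpoint"] - essential)

def adapt(steps=200, seed=0):
    rng = random.Random(seed)
    essential = 0.0
    params = {"gain": rng.uniform(0.0, 1.0), "setpoint": rng.uniform(LOW, HIGH)}
    reconfigurations = 0
    for _ in range(steps):
        disturbance = rng.uniform(-0.5, 0.5)
        essential = act(essential, params, disturbance)
        if not (LOW <= essential <= HIGH):
            # Essential variable out of range: blindly try a new configuration.
            params = {"gain": rng.uniform(0.0, 1.0), "setpoint": rng.uniform(LOW, HIGH)}
            reconfigurations += 1
    return essential, reconfigurations

print(adapt())
```

Nothing in the loop “knows” the right configuration in advance; stability is found by blind reconfiguration, which is the property Ashby argued a purely physical mechanism could exhibit.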

However, the birth of cybernetics is often dated back to 1943, with the publication of two foundational papers in the U.S.: Rosenblueth, Wiener and Bigelow’s “Behavior, Purpose and Teleology” and Warren McCulloch and Walter Pitts’ “A Logical Calculus of the Ideas Immanent in Nervous Activity” (see McCulloch, Warren (artificial neural networks)). This group of scientists would instigate a far-reaching scientific movement out of the concepts contained in these papers. There were, of course, numerous antecedents and at least three independent discoveries of the same set of concepts, most notably by Ashby, Schmidt and Sommerhoff.

Wiener, the physiologist Arturo Rosenblueth, and the engineer Julian Bigelow had been brought together by the U.S. war effort to work on a device for controlling and targeting anti-aircraft guns. They concluded that the targeting problem was intimately related to the control problem of making sure the gun was pointing toward the target. They further argued that the solution to both problems depended on the ability of the gun to continuously correct itself based on the moment-to-moment changes in the position of the plane and the orientation of the gun. Their “AA-Predictor” never worked very well, but in designing it they had recognized the fundamental importance of feedback loops for the self-regulation of purposive mechanisms.

In particular, negative feedback is the ability of a mechanism to receive information about the result of its own action, to calculate a correction based on the distance of that result from a pre-specified goal, and to act so as to reduce that distance. Negative feedback thus creates a circular causal loop whereby an action A causes an effect B, which in turn causes a new action A’ which has been calculated to reduce the error of the next effect B’, and so on. The challenge of designing a useful machine for solving a given problem thus lies in determining how to perform the error-reducing calculation.

Negative feedback can be produced even by a very simple mechanism. One example is a thermostat, which regulates the temperature of a room by turning the heat on when the temperature falls too low and switching it off again when the temperature rises. Another is James Watt’s governor, which regulates a steam engine by closing a valve when the engine spins too fast and opening it again as it slows down. The AA-Predictor had a far more complicated method of error-correction, and utilized time-series methods of linear extrapolation using the known limitations of airplane maneuverability as parametric constraints.
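
A thermostat of this kind takes only a few lines to write down. The sketch below is purely illustrative (the setpoint, switching band and heating and cooling rates are invented numbers): the controller observes a temperature that its own past action helped to produce, compares it with the goal, and switches the heater so as to reduce the error.

```python
# A minimal negative-feedback loop in the style of a thermostat.
# All numbers are invented for illustration.
SETPOINT = 20.0     # desired room temperature (degrees C)
BAND = 0.5          # switching band to avoid rapid on/off chatter

def simulate(minutes=480, outside=5.0):
    temp, heater_on = 15.0, False
    for _ in range(minutes):
        # Feedback: the controller observes a temperature that its own past
        # actions helped to produce, and compares it with the goal...
        if temp < SETPOINT - BAND:
            heater_on = True          # too cold -> switch the heat on
        elif temp > SETPOINT + BAND:
            heater_on = False         # too warm -> switch it off
        # ...and the action in turn changes what will be observed next.
        if heater_on:
            temp += 0.05              # heating per minute
        temp -= 0.002 * (temp - outside)  # heat loss to the outside
    return temp

print(round(simulate(), 2))
```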

The concept of feedback was further developed from Nyquist’s early regeneration theory (1932) into a general theory of servomechanisms (MacColl 1945). Information was given a precise mathematical definition by the Bell Telephone Laboratories engineer Claude Shannon in 1948. According to his “A Mathematical Theory of Communication,” information is a measure of the reduction in uncertainty caused by receiving a particular message from the whole set of possible messages. What cybernetics had shown was that information feedback imparted purposive, goal-directed or teleological behavior to machines by allowing them to act in response to the world. This challenged the dominant psychological theory of behaviorism, which had ignored purpose and argued that teleology was too metaphysical to be scientific. By becoming the central concept of cybernetics, the information feedback loop instigated the shift from behaviorism to functionalism and cognitivism in psychology.
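
Shannon’s measure can be stated concretely: a message received with probability p carries -log2(p) bits of information, and the average over the whole set of possible messages is the entropy of the source. The toy probabilities in the sketch below are invented purely for illustration.

```python
import math

def self_information(p):
    """Bits of information carried by a message received with probability p."""
    return -math.log2(p)

def entropy(probs):
    """Average uncertainty removed per message, over the whole set of messages."""
    return sum(p * self_information(p) for p in probs if p > 0)

# A toy source with four possible messages of unequal probability (illustrative).
probs = [0.5, 0.25, 0.125, 0.125]
print(self_information(0.125))  # 3.0 bits: the rarer the message, the more it tells us
print(entropy(probs))           # 1.75 bits on average
```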

Wiener, Rosenblueth, Bigelow, McCulloch and Pitts came together to form the Teleological Society in January 1945. They were joined by von Neumann, the Spanish neuro-anatomist Rafael Lorente de Nó, the engineer of ENIAC and EDVAC, Herman Goldstine, and the engineer of the Harvard Mark I electro-mechanical computer, Howard Aiken. Many of these individuals also joined in the series of ten conferences on “Circular Causal and Feedback Mechanisms in Biological and Social Systems” sponsored by the Josiah Macy Foundation, which ran from March of 1946 to March of 1952. These small meetings are commonly referred to as the Macy Conferences; they also included representatives from physics, psychology, anthropology, ecology, social science and philosophy, and saw visitors from still other fields.

The Secretary of the Macy Conferences, the physicist Heinz von Foerster, went on to establish the Biological Computer Laboratory (BCL) at the University of Illinois at Urbana (1958-1974). Research at this lab included the construction of analog neural computers, biologically inspired speech, audio signal and video image processors, teaching machines, and mathematical investigations of multi-valued logics, uncertainty analysis, distributed computation and self-organizing systems. The BCL was one of the key sites for the development of cybernetic machines during its operation, and its researchers and visitors at various times included Ashby, the Chilean biologists Humberto Maturana and Francisco Varela, the German logician Gotthard Günther, the Swedish mathematician Lars Löfgren, and the English cyberneticians Stafford Beer and Gordon Pask.

A similar cybernetics movement also emerged in England. The movement was catalyzed by a group called the Ratio Club, which met monthly to present and discuss ongoing research into the possibilities of synthesizing mental capacities in computing machines. The club was founded in 1949 by the physiologist John A. V. Bates, who had worked on problems of gun targeting for the British military during the war, and had built tank-gun simulators to measure human performance in target identification. The members of this group included Ashby, the neuro-physiologist W. Grey Walter, the mathematician Alan M. Turing, the statistician I. J. Good, the physicist Donald MacKay, the psychologist Albert Uttley and others. Many of the members of the Ratio Club also attended the Macy conferences as visitors, and McCulloch was an invited guest to the first meeting of the Club. It was in these meetings that the “Turing Test” for intelligence was first presented.

Many significant technical developments have been inspired by cybernetics.

Unlike the Macy Conferences, the Ratio Club was interested in building working machines. The most famous of the devices to come out of these meetings were Walter’s machina speculatrix, or “tortoises,” built in 1948-9. The tortoise was a small autonomous robot with a light sensor and simple animal-like behaviors hard-wired into its circuitry. It would wander around until its batteries ran low, at which point it switched into a light-seeking mode and moved towards a bright light mounted over its hutch, where it would be recharged through contacts in its wheels before going off wandering again.

While the tortoises’ behavior was achieved using feedback mechanisms, the device was fairly simple and theoretically trivial. It was quite popular in the media, however, showing up at the Festival of Britain in 1951, in the pages of Scientific American and Time and Life magazines, on BBC television, and in London’s Science Museum and the Smithsonian Institution. Walter’s tortoises pioneered work in the field of Robotics.

The latest developments of cybernetics proper include the concept of “autopoiesis” in “second-order cybernetics.” Autopoiesis is a term coined by Maturana and Varela (1972) in an attempt to describe the self-referring and self-making autonomy of living systems embedded in the world. It is intimately related to the epistemic problem of objectivity in knowledge, particularly to the recognition that all knowledge presupposes an observer, and that this observer implicates an entire universe of relations when attempting to communicate an observation to a listener, including the listener and the observer themselves. Second-order cybernetics thus became preoccupied with trying to understand itself reflexively. The result is that the feedback loop of cybernetics is replaced by an infinite regress of Möbius loops which (re)produce themselves, and come into existence in the very act of verifying their existence by observation.

Many significant technical developments have been inspired by cybernetics. Among these are genetic algorithms and evolutionary programming. Genetic algorithms were first devised by John Holland (1975), and attempt to simulate the self-organizing properties of biological evolution. They do this by dividing the possible solutions to a problem into pieces called alleles, which are analogous to the pieces which make up biological genes. Various combinations of alleles are combined into hypothetical solutions which are then tested against one another in a fashion analogous to Darwinian natural selection by competition. The “fittest” solutions are then recombined into a new population with minor mutations in a process analogous to sexual reproduction. The processes of recombination and selection are repeated many times until a near-optimal solution is found. This technique is often used as a method of non-linear optimization in computer science and engineering.
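
A bare-bones version of this loop (selection, recombination, mutation) can be sketched as follows. The toy task of matching a target bit string, along with the population size, generation count and mutation rate, are arbitrary choices made for illustration rather than anything taken from Holland’s own formulation.

```python
import random

rng = random.Random(42)
TARGET = [1] * 20              # toy goal: evolve a string of twenty 1-bits
POP, GENS, MUT = 30, 60, 0.02  # population size, generations, mutation rate (illustrative)

def fitness(genome):
    """Count the positions where the genome matches the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    """Recombine two parents at a random cut point."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - g if rng.random() < MUT else g for g in genome]

population = [[rng.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for _ in range(GENS):
    # Selection: the fitter half of the population survives to reproduce.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    # Reproduction: recombination plus mutation refills the population.
    children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best), "of", len(TARGET), "bits correct")
```

Each generation, the fitter half of the candidate solutions is kept and recombined with small random mutations, so the population as a whole drifts toward higher fitness without any explicit description of the optimum.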

Cybernetics slowly dissolved as a coherent scientific field during the 1970s, though its influence is still broadly felt. Cybernetic concepts and theories continue on, reconstituted in various guises, including the fields of self-organizing systems, dynamical systems, complex/chaotic/non-linear systems, communications theory, operations research, cognitive science, Artificial Intelligence, artificial life, Robotics, Human-Computer Interaction, multi-agent systems and artificial neural networks.

by Peter M. Asaro


For Further Research

Ashby, W. Ross. Design For a Brain. London, UK: Chapman and Hall; New York, NY: John Wiley and Sons, 1952.

Ashby, W. Ross. An Introduction to Cybernetics. London, UK: Chapman and Hall; New York, NY: John Wiley and Sons, 1956.

de Latil, Pierre. La Pensée Artificielle. Paris: Librairie Gallimard, 1956. Thinking by Machine: A Study of Cybernetics. Translated by Y. M. Golla, Boston, MA: Houghton Mifflin Company, 1957.

Dyson, George B. Darwin Among the Machines: The Evolution of Global Intelligence. New York, NY: Addison Wesley, 1997.

Gardner, Howard. The Mind’s New Science: A History of the Cognitive Revolution. New York, NY: Basic Books, 1987.

Heims, Steven J. John von Neumann and Norbert Wiener: From Mathematics to the Technologies of Life and Death. Cambridge, MA: MIT Press, 1980.

Heims, Steven J. The Cybernetics Group. Cambridge, MA: MIT Press, 1991.

McCorduck, Pamela. Machines Who Think. San Francisco, CA: W. H. Freeman and Company, 1979.

Pask, Gordon, and Susan Curran. Microman: Computers and the Evolution of Consciousness. New York, NY: Macmillan Publishing Company, 1982.

References

Ashby, W. Ross. “Adaptiveness and Equilibrium.” Journal of Mental Science, Vol. 86, 1940: 478-483.

Ashby, W. Ross. Design For a Brain. London, UK: Chapman and Hall; New York, NY: John Wiley and Sons, 1952.

Cannon, Walter B. The Wisdom of the Body. New York, NY: W. W. Norton and Company, 1932.

Holland, John. Adaptation in Natural and Artificial Systems. Ann Arbor, MI: University of Michigan Press, 1975.

MacColl, LeRoy A. Fundamental Theory of Servomechanisms. New York, NY: D. Van Nostrand Company, 1945.

Maturana, Humberto R., and Francisco Varela. De Máquinas y Seres Vivos. Editorial Universitaria, S.A., 1972. Reprinted as Autopoiesis and Cognition: The Realization of the Living. Boston Studies in the Philosophy of Science, Volume 42. Dordrecht, Holland: D. Reidel Publishing Company, 1980.

McCulloch, Warren S., and Walter Pitts. “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Bulletin of Mathematical Biophysics, 5 (1943): 115-133. Reprinted in The Collected Works of Warren S. McCulloch, vol. 1, Edited by Rook McCulloch, Salinas, CA: Intersystems Publications, 1989: 343-361.

Rosenblueth, Arturo, Norbert Wiener and Julian Bigelow. “Behavior, Purpose, and Teleology.” Philosophy of Science, 10 (1943): 18-24.

Schmidt, H. Denkschrift zur Gründung eines Instituts für Regelungstechnik. 2nd edition. Quickborn bei Hamburg: Verlag Schnelle, 1961.

Shannon, Claude E. “A Mathematical Theory of Communication.” Bell System Technical Journal, vol. 27 (1948): 379-423 and 623-656. Reprinted in Shannon, Claude E., and Warren Weaver, The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press, 1949.

Sommerhoff, G. Analytical Biology. London, UK: Oxford University Press, 1950.

von Foerster, Heinz (editor). Cybernetics: Circular Causal and Feedback Mechanisms in Biological and Social Systems, Volumes 6-10. New York, NY: Macy Foundation, 1948-1953.

Walter, W. Grey. The Living Brain. London, UK: Duckworth, New York, NY: W. W. Norton, 1953.

Wiener, Norbert. Cybernetics, or Control and Communication in the Animal and the Machine. Paris: Hermann and Co., Cambridge, MA: The Technology Press, and New York, NY: John Wiley and Sons, 1948.
