Human Enhancement: Beyond the Machine Age

The continuation of the Human Enhancement piece, once again co-written with Nilo Lindgren.


Self-organizing systems of hardware; computer simulations of evolutionary logics, artificial intelligence, novel conceptions of time and language—all these could herald the design of evolutionary systems that work for human enhancement.


Our previous article proposed a very broad evolving and ecological “think,” urging designers to begin considering new types of systems, some to incorporate artificial intelligence, to work for the enhancement of human powers. Here we describe some bases of that think, and attempt to join researches that, as they evolve, could form the ingredients of evolutionary systems. In the broadest sense, we attempt to interpret aspects of today’s science and technology within the evolutionary framework, hoping to stimulate a dialogue among all kinds of system designers.


“Nothing at first can appear more difficult to believe than that the more complex organs and instincts should have been perfected, not by means superior to, though analogous with, human reason, but by the accumulation of innumerable slight variations, each good for the individual possessor.”[1]


In our first article, we stressed the need for an evolutionary technology, citing the urgency of our present environmental situation, owing to the increasing disruption of our natural ecology. We tried to bring forward the important central role of “dialogue” in using the evolutionary process, especially as it might be applied to human enhancement.[2]


In this present article, we describe certain “building blocks” now becoming available that, as they accumulate in number and variety, could be used in evolutionary systems. The computer is the herald of major change. It is the one ingredient connecting hardware and software advances that could contribute to the design of evolutionary systems.


Our society has been willing to develop weapons systems, space systems, highway systems, telephone systems, but it has manifested no coherent, coalescing drive to make devices and systems to satisfy human users in the unique ways we have described; namely, that they should be responsive and interactive in an intelligent way with their individual users. Such systems are possible, although perhaps not probable, unless we are willing to create them.


ON ARTIFICIAL INTELLIGENCE


The crucial ingredient in evolutionary systems, we believe, will be artificial intelligence, which itself may come about through computer simulation of evolutionary processes. By artificial intelligence, we mean the acquisition by machines of capacities of pattern recognition, problem solving, self-improvement through an “understanding” of their own problem-solving processes, “creative” thought, and so on—capacities normally associated with the mentality of man as his distinction.


In our previous article, we perhaps failed many readers by supposing them familiar with the vast recent literature on artificial intelligence in the technical journals.[3] That machines can think has become as manifest to us as that machines could work became to our forefathers. The differences between kinds of hardware and software, between dry and wet thinking, are beside the point in this article.


We were more remiss as to distributed intelligence, which is related to the efforts directed toward making available computer capacity (say, by time-sharing) to the many, in the same way that linked utilities supply electric power to almost any place. Project MAC (for multiaccess computer or machine-aided cognition) is the most publicized example of the on-line computer time-shared system. Naturally we jumped from such distributed computational power to the day when artificial intelligence will be a common facility, like the mail and the telephone.


By lumping together small units of computation such as tiny chips of integrated circuits with the global tool envisioned by some scientists, we undoubtedly irritated many readers and led others astray. But we sought a philosophical unity. By juxtaposing large and small systems, ordinarily held separate, we sought to focus a perspective of evolutionary kinship. Designers of toys, furniture, homes, cities, transportation and communication media, political and even symbol systems, can capitalize on the new possibilities.[4],[5]


Artificial intelligence will not come simply or all at once, but designers should feel some sense of urgency since parts of it are already here. We take M. Minsky’s point seriously when he writes: “Once we have devised programs with a genuine capacity for self-improvement a rapid evolutionary process will begin.”[6] Those who wish details on progress in artificial intelligence will want to read M. Minsky’s now classic summary[7] and R. J. Solomonoff’s follow-up review.[8]


It should be clear to designers that the melding of artificial intelligence functions (information manipulators) and physical parts (energy manipulators) will create new entities. For instance, the designing of integrated circuits gradually forced circuit designers to abandon their concepts of discrete components (resistors, capacitors, diodes, etc.). In integrated circuits, these functions are no longer discrete, but are blended and distributed throughout the chip. Thus, the circuit components do not exist in their old form, either in their physical formation or in the designer’s conception of them. That transition has taken nearly two decades. It will undoubtedly take us much longer to acclimate ourselves to the roughly comparable concept of sharing intelligence functions with our man-made environment in any large measure, even when it will be to our advantage to do so.


ON MIND AND HAND, NOISE, AND NOVELTY


Artificial intelligence is a term that usually embraces purely intellectual functions. For applications aiming at human enhancement, however, intellectual power alone will not suffice. Multichanneled, multileveled, richly coupled input-output faculties and languages will also be required.


Our point of view is that in the whole animal, intellectual and physical functions are really inseparable—that is, thinking cannot be separated from the context of the living animal any more than the animal can be separated from the context of his real environment. The goals of the living creature only make sense in terms of the specific world he inhabits. This distinction may seem trivial, but we believe that much research proceeds on the “antievolutionary” assumption that living functions can be meaningfully studied outside of context. Such context-free studies spring out of, and automatically reinforce, the view of man as a mechanistic system of being, and such studies encourage the design of machine systems that militate against man’s growth and well-being. In the design of evolutionary systems for human enhancement, it must be recognized that man’s changeability (the evolving nature of his responsiveness to changes in his environment, his variability in learning) must be taken into account from the very beginning of the design.


Even Shannon’s information theory, as great a step as it is, has not been extended to systems in which there is learning (i.e., in which the receiver’s behavior is affected by what he has received, and in which the sender is affected by the way his message is received). Shannon did not preclude such an extension, and his collaborator, Weaver, specifically states that such extensions of the theory should be attempted.


We recognize that in our discussion we make a fatal error by misusing the term “noise.” The Shannon use, though it presents an easily managed mathematical conception, does not attempt to model the real-world problem of noise. It does give an intuitively right approach by placing novelty, noise, ambiguity, and information in the same bag and forcing the discrimination between these. Because we merely highlight the need for such discriminations, and because we know no other recourse, we have used the Shannon relationships and terms out of context. In the evolutionary approach, we do not accept for our definition, as Shannon does, that there is a limited vocabulary of symbols that both sender and receiver know. Instead, we see the sender and receiver linked in a communication loop, seeking to maintain a level of communication appropriate to their purpose by changing code, channel capacity, use of the message, etc. The whole message system is evolving at once in all its parts. This is a far cry from Shannon’s description, yet it is the world that now needs to be modeled.


In the development of evolutionary hardware and software, particularly as it is integrated in man-machine evolutionary systems, we must seek those situations that are sufficiently transparent and controlled to help provide us with the quantification necessary for resolving some of these dark problems.


SELF-ORGANIZING CONTROL SYSTEM


Traditionally, engineers have been conditioned to avoid the introduction of “noise” in the design of systems. Noise is usually considered by engineers to be any unwanted, indeterminate signal; but, more generally, it is a signal that is uncorrelated with other (information-bearing) signals within the system. If noise cannot be eliminated, it is at least minimized and looked upon with ill favor. Historically, too, controls, as in airplanes and spacecraft, have been treated as being linear, and if they have not been linear, control engineers have tended to look the other way.


Now, however, hardware has been developed for use in control systems that deliberately makes use of noise injection to resolve interactive nonlinear control situations. The hardware uses the noise to generate rapid random predictions of control response to unforeseen disturbances. Such control elements, called self-organizing controllers (or SOCs), are an advanced form of an adaptive control that gathers information about the aircraft plant while the craft is in operation and uses this information to improve performance of the system. Adaptation by this self-organizing controller is so fast that it can readapt many times within the closed-loop response period of the plant. When the plant characteristics change, owing to the plane's changing environment, this SOC system compensates for such changes and keeps the plane stable. That is, the learning is done by the control system itself, thus easing the pilot's control function.


There are now many types of adaptive and learning systems, developed experimentally over the past half-dozen years,[9] and going under many names. One might distinguish two general types: self-organizing systems, which are those with short-term memory, and learning systems, which are those with long-term memory.[10] Learning systems take a long time to adjust to new conditions, and long training periods are associated with their use. However, a self-organizing system adjusts to new conditions within a small fraction of the plant time constant.


One feature of particular importance in the SOC system developed by L. Gilstrap, R. L. Barron, et al., at Adaptronics, Inc., is the use of random search techniques that allow rapid convergence in multiple-parameter spaces. For SOC applications, random search methods have proved superior in both speed and memory requirements to both systematic and gradient hill-climbing techniques. Interest in the random search methods may be traced to the original contributions of S. H. Brooks[11] in 1958–1959. The work of the Soviet scientist L. A. Rastrigin[12] and the Czechoslovakian J. Matyas[13] has stimulated wide attention to the potentialities of the random search method. In the United States, the application of random search techniques is exemplified by the PSV (probability state variable) control systems and by the development of accelerated random searches[14],[15] for application to transformational automata (pattern recognizers, adaptive computers) and to systems that infer the properties of incompletely known dynamic processes. PSV random search makes it possible for the SOC to do without explicit identifications of all plant parameters. In a sense, the SOC is engaged in a dialogue with the plant, rapidly postulating models of its performance and comparing these against its actual performance.
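

The flavor of such a search is easily conveyed. The following minimal sketch in Python (ours, not Adaptronics'; the toy cost function, step distribution, and iteration count are assumptions for illustration) perturbs a parameter vector with noise and retains a trial point only when it improves performance:

```python
import random

def random_search(cost, x0, sigma=0.5, iters=2000):
    """Fixed-step random search: perturb the current parameter vector
    with noise and keep the trial point only if it lowers the cost."""
    x, best = list(x0), cost(x0)
    for _ in range(iters):
        trial = [xi + random.gauss(0.0, sigma) for xi in x]
        c = cost(trial)
        if c < best:                     # retain only improvements
            x, best = trial, c
    return x, best

# Toy "plant identification": recover unknown parameters by search alone,
# without any explicit identification of them from equations.
true_params = [1.3, -0.7, 2.1]
cost = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, true_params))

params, err = random_search(cost, [0.0, 0.0, 0.0])
print([round(p, 2) for p in params], round(err, 4))
```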


The SOC is the outcome of a neuron analog developed theoretically in the 1950s, which has led to a whole family of PSV devices that could be used for many problems that can be reduced to a problem of search in nonlinear spaces.[12],[13],[14],[15],[16],[17]


Figure 1 shows the basic elements of a self-organizing controller. It is described as follows[17]: There is (1) a goal circuit (performance assessment logic), which is a means for evaluating current performance; (2) a conditioning logic for computing and effecting suitable changes of the controller parameters and/or output signals; and (3) a memory for storing information concerning past parameter states. The memory exhibits an “exponential forgetting,” important in control applications because experiences in the remote past usually have less pertinence to present actions than do relatively recent experiences.[18]


It should be noted that the performance criterion in the conditioning subsystem may be defined in somewhat abstract terms. Many criteria of system performance are possible because there are many uses for adaptive control systems. The important thing is that the performance criterion should provide an unequivocal indication as to whether or not the experiments of the SOC are leading to betterment or worsening of system performance. Future SOCs may employ variable performance criteria, which themselves are subject to learning processes guided by one or more supreme goals.


The principal characteristic of the operation of the functional elements in the SOC is the closed-loop nature of the system: namely, a change in internal parameters must be fed back through the (unknown) plant and environment to the performance assessment logic to permit correlation between change and overall effect.


A comparison must be made between current performance and stored information regarding recent parameter experiments.
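

To fix ideas, here is a minimal sketch in Python of that closed loop. The toy plant, the step size, the forgetting factor, and the biased-coin rule standing in for the PSV logic are all our own assumptions, not the Adaptronics design:

```python
import random

class SelfOrganizingController:
    """Sketch of the three elements just listed: a goal circuit, a
    conditioning logic, and a memory with exponential forgetting."""

    def __init__(self, forget=0.8, step=0.05):
        self.forget = forget    # exponential forgetting factor
        self.step = step        # size of each parameter experiment
        self.u = 0.0            # actuator signal
        self.last_du = 0.0      # most recent experiment
        self.memory = 0.0       # forgetting-weighted change/effect correlation
        self.last_perf = None

    def update(self, error):
        perf = -abs(error)      # goal circuit: assess current performance
        if self.last_perf is not None:
            # Correlate the last experiment with the change it produced,
            # weighting recent experience over the remote past.
            self.memory = (self.forget * self.memory
                           + (1 - self.forget)
                           * (perf - self.last_perf) * self.last_du)
        self.last_perf = perf
        # Conditioning logic: a PSV-like biased coin decides the next
        # experiment, leaning toward the direction that has paid off.
        bias = max(-0.4, min(0.4, 200 * self.memory))
        sign = 1.0 if random.random() < 0.5 + bias else -1.0
        self.last_du = sign * self.step
        self.u += self.last_du
        return self.u

# Drive a crude first-order plant toward a setpoint.
state, setpoint, soc = 0.0, 1.0, SelfOrganizingController()
for _ in range(1000):
    state += 0.1 * (soc.update(setpoint - state) - state)
print(round(state, 2))          # should drift toward the setpoint
```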


From the evolutionary point of view, one of the features of the SOC is its use of noise to create variety of action in the face of variety in the environment.[19] The noise generator, which sits in the PSV decision unit shown in Fig. 1, generates a random sequence of outputs that rapidly converge to the correct values for the actuator signals. The statistical source (random pulse generator with bistable statistics) fed by a noise generator is especially advantageous in problems of multiple plant identifications, where the correct signals for many actuators must be found rapidly and simultaneously.


The use of SOCs could be significant for solving problems in which there are many interacting response variables. Multiple-component systems, which theoretically could be built up to great size, can exploit coupling effects to achieve stable, simultaneous control of all variables. Barron and Gilstrap point out that “multiple-input, multiple-actuator SOC connectivity is an elegant expression of Ashby’s Law of Requisite Variety[19] and may also have some interesting biological parallels as in, for example, the reticular formation. . . . When one has a complex plant the crucial problem can be that of sorting out the cause-effect relationships between SOC experiments and plant responses. The PSV logic in the ‘actuation-correlation logic module’ (ACL) appears to be well-suited to the achievement of simultaneous correlations. These simultaneous correlations are enhanced by the mutual competition between multiple correlation processes. In the early days of SOC development, one or two prophets of doom forecast that multiple correlation processes would bog down if run simultaneously. In fact, while there is no doubt that the correlation processes become slower as their number multiplies, the competition between them is the sine qua non for obtaining correct identifications of multiple cause-effect relationships.”[20]


“One of the many fascinating aspects of self-organizing control,” R. L. Barron says, “is the possibility that we may some day see systems that not only adapt to the plant or process and its inputs and disturbances, but which also adjust system behavior in response to the varying needs and desires of human users. . . . It is expected that future SOC systems will adapt their responses to individual human traits. Achievement of this capability will require development of performance assessment structures that provide the SOC with pertinent value-function information. This information should be obtained without communication between the SOC and the human on a verbal level. That is, the ‘how goes it’ question should be answered by evaluation of the human’s automatic reactions.”[21]


ARTIFICIAL INTELLIGENCE THROUGH EVOLUTIONARY PROGRAMMING


We have seen how the use of noise has been turned to a very useful end in an unusual piece of hardware. Now, let us consider how noise has been made the essential ingredient in a computer program that attempts to replicate a fundamental aspect of the processes of natural evolution. Although such simulation of evolution by computer, which has been studied by L. J. Fogel, A. J. Owens, and M. J. Walsh, has thus far dealt with only relatively simple problems, the method appears to offer an extremely powerful tool.[22],[23],[24]


In his attack on the problems of artificial intelligence, Fogel stresses the role of prediction. In his argument, he views intelligent behavior as a composite of ability to predict one’s environment coupled with a translation of each prediction into a suitable response in the light of the given goal.[23]


The “organism” in the evolutionary program is not a physical device, but is rather a mathematical entity that describes a particular logic for transforming a sequence of input symbols into a sequence of output symbols. Nonetheless, as the organism evolves, it becomes a rationale for prediction, reflects the pertinent logic underlying the data base, and thus provides a first approximation for solving similar problems in the future.


Fogel describes the computer simulation of evolutionary problem solving somewhat as follows: The computer is instructed to evaluate an arbitrary logic that describes the “stimulus-response” behavior of an initial “organism” in terms of its appropriateness with respect to a given goal in the context of the observed environment. That is, the computer measures the suitability for “survival” of the organism, namely, its demonstrated ability to solve the given prediction problem. This organism is then mutated by means of the introduction of randomness or noise so that there is produced an offspring different from its parent. The ability of this offspring to solve the given prediction problem—that is, its ability to survive in the given environment—is then measured. If the offspring does better than the parent, the parent is discarded, but if the parent does better, the offspring is discarded. The survivor becomes the parent of the next generation. This process is iterated until a cost criterion has been reached.[25]
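

In outline, the procedure can be caricatured in a few lines of Python. Fogel's organisms were finite-state machines; the lookup-table “organism,” the two-symbol alphabet, and the cyclic environment below are our own simplifications:

```python
import random

ALPHABET = "AB"

def score(table, env, k=2):
    """Fitness: how well the organism predicts the next symbol of the
    observed environment from the preceding k symbols."""
    hits = 0
    for i in range(k, len(env)):
        hits += table.get(env[i - k:i], "A") == env[i]
    return hits

def mutate(table):
    """Offspring: copy the parent, then randomly change one response."""
    child = dict(table)
    ctx = "".join(random.choice(ALPHABET) for _ in range(2))
    child[ctx] = random.choice(ALPHABET)
    return child

# A cyclic environment whose logic must be discovered: ABBABBABB...
env = "ABB" * 30
parent = {}
for generation in range(300):
    child = mutate(parent)
    # Nonregressive selection: the survivor parents the next generation.
    if score(child, env) >= score(parent, env):
        parent = child

print(score(parent, env), "of", len(env) - 2, "symbols predicted")
```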


This simulation of nonregressive evolution is carried out in fast time so that many “generations” come to pass in minutes. Since this is all done by computer simulation, the organisms produced are simply descriptions of desired stimulus-response behavior in the given environment. The randomness of mutation makes the process not wholly predictable. In fact, the evolved logic may well be a surprise to the programmer.


Although we cannot go into detail here (the book[23] and papers[22],[24] are readily available), there are many interesting aspects that should be mentioned.


For instance, as in natural evolution of species, a great variety is produced and preserved, so the evolutionary program can also preserve some of the less-than-best organisms or species of logic. This allows, among other things, simulation across species, giving each species a different length of recall (memory) of the environment. Thus, a logic that was previously less than best might win a superior position. Likewise, through the operation of selection in natural evolution, certain species, out of the great variety nature produces, find themselves best adapted for differing environmental conditions. There are, as it were, ecological niches in which they fit, and the species that fall between these environmental “sets” tend to be eliminated. This Darwinian idea can be extended to species of machines.


Of great interest in Fogel’s experiments are his efforts to develop successive models of the relationship among sensed variables in a way that reflects the nature of the goal of the investigator. The evolutionary modeling of the self may well provide a foundation for achieving higher levels of artificial intelligence. At a rudimentary level, this endeavor means the random mutation of models; at higher levels, it means the random combination of selected models into new logics. In point of fact, evolutionary programming will probably be needed to operate simultaneously at various levels of abstraction (each level interacting with each other level although on different time scales) if machines are to determine the "logical" structure or behavior of unknown and changeable interactive environments as complex as a man. (Insights into how man himself actively models and predicts his environment appear in the work of K. Craik and D. M. MacKay.[26])
In any event, here in generic form, through the preservation of those aspects of randomness that appear worthwhile to the organism, is the scientific method (induction being performed through a nondeterministic manipulation of the data base comprised of previous observations of consistencies and commonalities built into relationships or models). Fogel goes on to recognize that the scientific method is itself an evolutionary process that can be simulated. Thus, high-speed computers may find new logics for addressing old problems.


AUGMENTING HUMAN INTELLECT


In any truly sophisticated man-machine system, we cannot imagine that the “two” traditional categories of human behavior (mind and hand) would be separated. Rather, they are viewed as reinforcing one another, so that what happens on the intellectual level affects what can happen on the psychomotor level, and vice versa. In the work of D. C. Engelbart, such interactions are considered to exemplify the “reverberation principle.” He says: “In improving a system, an innovation at one level often leads to reverberating waves of (1) possibilities for other innovations, and (2) needs for other innovations. Waves of possibilities tend to propagate upward with an increasingly broad effect: new gains from an innovation (or possibility) at a lower level provide a new innovation possibility (or perhaps several) at a higher level. Gains from these possible innovations are added to the original gains, to stimulate possibilities at still higher levels, etc. . . . Each new innovation arising in either the upward wave of possibilities or the downward avalanche of needs is the potential source of a new wave in the opposite direction: it reverberates.”


For nearly ten years, Engelbart has been pushing a broadly evolutionary program of research for “augmenting human intellect,” aimed at using new tools such as computers, and incorporating a whole-systems approach, to find ways in which men's capabilities for solving complex problems could be enhanced or, to use his word, augmented.


Although Engelbart's research does seem to stress the augmentation of intellectual activities, his writings make it clear that he does not regard learning and problem solving as going on at the mental level alone. For instance, he postulates a human-communication subsystem, as shown in Fig. 2, in terms of which he considers how computer-aided lower-level systems can be integrated into a higher-level system. The basic aid given this subsystem by computer processing he sees as the providing of versatile feedback through the two open-ended subsystem channels, thereby enabling more effective use of each channel through cooperation with the other channel. The augmentation research, then, looks at the chief design factors: the information characteristics of the messages, the signal forms and the information encoding at the interface, and the computer decoding process, which must be compatible with human ability to learn and to perform.


Engelbart’s first philosophical statement of his program was laid out in 1962.[27] He set forward a conceptual framework that broke down the means of extending human capabilities into four basic classes. These included: (1) artifacts, the physical objects designed for the manipulation of things and materials, for the manipulation of symbols, and for human comfort; (2) language, with which an individual models his world into concepts, and the symbols he attaches to those concepts for his thinking and reasoning about them; (3) methodology, the methods, procedures, strategies, and plans, both small and large, through which an individual organizes his goal-seeking or problem-solving activity; and (4) training, the conditioning and attainment of human skills for using effectively the artifacts, languages, and methodologies available. Any changes wrought in any one of these categories would reverberate and cause changes in the other categories. In reviewing the spectrum of research activities in the man-computer community, in terms of the “total system” comprised of these four classes, Engelbart rightly notes that the major share of interest thus far has been centered on artifacts.


In taking an engineering-like approach to all the elements of the man-computer system, rather than, for instance, focusing on the man or the computer alone, Engelbart makes it clear why a new conceptual framework is needed, and what the consequences of a new approach must be.[27],[28],[29],[30]


For purposes of identification, he relates his ideas to the “Whorfian hypothesis, which states that ‘the world view of a culture is limited by the structure of the language which that culture uses.’ But there seems to be another factor to consider in the evolution of language and human reasoning ability. We offer the following hypothesis, which is related to the Whorfian hypothesis: Both the language used by a culture, and the capability for effective intellectual activity, are directly affected during their evolution by the means by which individuals control the external manipulation of symbols.” Engelbart refers to this as the neo-Whorfian hypothesis.[27]


Individuals, he writes, who operate effectively in our culture have already been considerably augmented. For instance, an aborigine who possesses all of our basic sensory-mental-motor capabilities, but does not possess our background of indirect knowledge and procedure, cannot organize the proper direct actions necessary to drive a car through traffic, request a book from the library, call a committee meeting to discuss a tentative plan, call someone on the telephone, or compose a letter on the typewriter.


On the other hand, the aborigine can see, hear, smell, and interpret the meaning of events in the wilderness that would completely bypass our awareness.[31] He is not trained in our system of augmentation, and we are not trained in his. The restoration of our keenness in some of the aborigine’s faculties would be meaningful to consider if we could create an environment that was worth seeing, hearing, and smelling. Who wants to see only square boxes of buildings and rooms, who wants to see fluorescent lights, who wants to hear cars, trucks, and planes continually, who wants to smell poisonous air? To sense such aspects of our environment is only to awaken one’s helpless rage, and under that (at a deeper level, since we are talking of levels) one’s sorrow at the grievous things we men have inflicted on ourselves. It is difficult to value, in such terms, the price we are paying by hanging too long onto a primitive, machine-age technology whose chief virtue is mass-produced consistency, that notorious “hobgoblin of little minds.” The machine age is over; let us hasten on to the creation of an environment whose novelty, variety, and intelligence could nourish our growth instead of stunting it.


Engelbart’s major project has been with computer-aided text manipulation. This program has many facets that are well reported in the literature.[27],[28],[29],[30] The text-editing systems allow users to see their text on a scope, to compose on it, to modify text, to study it, to store it, and to retrieve it in any order they wish; thus, there is already great power, ease, and flexibility in keeping “plastic” working records. The system has both off-line and on-line text-manipulating capabilities; recent reports have been produced entirely through the use of these computer aids, and a 12-terminal time-shared system has recently been set up.


In addition, Engelbart has developed a unique chord handset that allows improved human display to the computer. He uses five-finger chords for transmitting English text to the computer, which, he says, allows the human user to achieve a more intimate sense of communication with the computer.


AUTOMATED PSYCHOMOTOR SKILL TRAINING


Another facet of Engelbart’s broad program for the augmentation of human intellect is a study for automating psychomotor skill training, an area that is potentially very important for human enhancement applications since teaching and learning go on at a nonverbal or nonsymbolic level.[32] In such applications, automatic display devices would present direct stimuli to the trainee and automatic sensing devices would monitor his behavioral and physiological responses, both in real time. By using a computer to analyze the performance of the trainee, and through the accumulated insight into how to guide him toward a desired response (this would be the evolutionary dialogue situation) through nonverbal feedback as well as through symbolic information, it should be possible to bring enhancement to many kinds of training situations, to add enriched experience to the training of the young, and so on. The range of possibilities could cover the simplest tasks, such as operating keyboards, on out to piloting high-speed vehicles, and so on. Engelbart’s proposal was to experiment with simple tactile cueing stimuli that were to intervene between the primary stimulus and the completion of the desired response. These cueing stimuli were to guide the subject through coordinated sequences of elemental actions. It was intended that through automatic monitoring, the cueing stimuli would be modified (either automatically or assisted by a human coach) to resonate with the changes in the subject’s performance. Engelbart’s notion was that by relieving the load on the subject’s “higher” faculties, by cueing signals applied cutaneously at points on his body, the subject would not need to memorize the details of the patterns through which he was learning to weave his way. The specific project was to train people to use the five-key chording device for transmitting English text mentioned earlier. Both visual (lights) and tactile (air jets) stimuli were used. The instruction to the trainees was, in effect, “Push the key with the finger on which you feel the sensation of the air jet.” The conclusion of the study was that automated sensorimotor skill training was certainly feasible, and that the potential of computer-based training systems of this sort appeared great.[33]
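

A toy Python sketch of such a cueing loop follows; the simulated trainee and the adaptation rule (fade the cue as performance improves, strengthen it after an error) are illustrative assumptions of ours, not the logic of Engelbart's experiment:

```python
import random

FINGERS = ["thumb", "index", "middle", "ring", "little"]

def simulated_trainee(cued, skill):
    """Stand-in for the human: obeys the cue with probability `skill`,
    which grows as training proceeds."""
    return cued if random.random() < skill else random.choice(FINGERS)

skill, cue_strength = 0.4, 1.0
for trial in range(200):
    target = random.choice(FINGERS)          # key the text requires
    response = simulated_trainee(target, min(0.98, skill))
    if response == target:
        skill += 0.01                        # learning from success
        cue_strength = max(0.0, cue_strength - 0.02)  # fade the air jet
    else:
        cue_strength = min(1.0, cue_strength + 0.1)   # re-cue more strongly

print(round(skill, 2), round(cue_strength, 2))
```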


These were certainly not the first experiments in this area. One of the pioneers in dynamic and adaptive automatic teaching machines was the English cybernetician, Gordon Pask,[34],[35],[36] whose investigations have ranged widely from highly theoretical to highly practical projects, that is, to the development of many varieties of machines. One of the central ideas involved in his machines that teach is that their control mechanisms be adaptive (having a degree of artificial intelligence) so that they change their characteristics as a function of the subject’s performance, thus maintaining a degree of novelty so that the subject does not become bored.


Computer-assisted instruction, on many levels of sophistication from the evolutionary dialogue point of view, is a growing field of interest, as evidenced in a recent special issue of IEEE Transactions on Human Factors in Electronics.[37] Some of Pask’s recent work is reported there, as is the work of W. Feurzeig of BBN, who has been working on machines that teach medical students approaches to diagnostics; there is also a discussion of the well-known PLATO system at the University of Illinois by D. L. Bitzer et al. J. Weizenbaum’s ELIZA system, being developed for teaching by Edwin Taylor at M.I.T.’s Educational Research Center, is also of much interest because of its efforts to mimic realistically the psychiatric situation between patient and psychiatrist.[38] Insights from such behavioral studies should certainly be relevant to evolutionary man-machine dialogue.


Also, recently, Oliver Selfridge of the M.I.T. Lincoln Laboratory has been evolving methods whereby computers can teach their own language and usage to a user who otherwise has no “outside” guidance. Such “responsiveness” by the machine to its users’ problems is certainly significant.


ENRICHED COUPLING


In 1961, W. A. Rosenblith wrote: “We may in ten or fifteen years have derived some useful generalizations from the widespread experience with man-machine (including man-computer) systems and from exposure to novel sensory environments. We shall certainly be able to telemeter converging data (electrophysiological, neurochemical, behavioral) from organisms while they interact with their environment. It should also be possible to sharpen the formulation of issues that are critical to our understanding of sensory function by incorporating the experimental subject into a closed loop—an arrangement wherein stimulus sequences are made contingent upon the physiological and psychological responses that the subject emits.”[39]


Investigations of sensory communication in such closed-loop situations have not yet been conducted on the scale Rosenblith imagined. There is, however, an enormous literature relating to the monitoring of physiological and behavioral variables. When such information grows from the closed-loop situation,[39],[40],[41],[42],[43],[44],[45] it could be utilized in richly coupled, multimodal man-machine dialogue, employing a redundancy of channels in which each channel acts as a metaphor of the other.


To indicate something of the unexpected and even bizarre findings that might turn up in monitoring human dynamic functioning, one might consider one kind of relation between stimuli and cerebral activity discovered by W. Grey Walter of the Burden Neurological Institute, England.[40] He noted that there were certain brain waves that were set up in anticipation of some external happening, waves that were extinguished when the anticipation had been satisfied. The clear determination of such measurable brain events might be used in automatic training situations, where a suitable monitor would tell the computer control that a subject had already received a message he was expecting.


In a similar vein, E. M. Dewan of the Air Force Cambridge Research Laboratory, and Belmont Farley (who has made studies in the computer simulation of neuron nets[41],[42]) have experimented with the modification of brain-wave activity. Both men determined that they could consciously alter their brain waves in such a way that, through the use of electrodes on the scalp, they could control external devices. Dewan thus activates an external switch to turn a light off or on, and he postulates that a kind of communication by such electroencephalography might have a number of useful applications.[43]


A study carried out by Hecker, Stevens, et al., at BBN, on the effects of task-induced stress on speech indicates that the range of bearable stress can be measured and serve as an indication of what different individuals will tolerate in task loading.[44] The monitoring of such stress could be another channel in a multichannel man-machine communication situation.


A proposal for a massive and coherent monitoring of physiological variables (through implanted probes, etc.) has been put forward by Dr. Charles Ray of the Johns Hopkins College of Medicine.[45] By monitoring transitions (chemical changes) rather than absolute levels of the living system, and through using appropriate sophisticated displays, Ray visualizes, for instance, that it would be possible to track the changes, and thus interrupt or control changes that would otherwise become dangerous to a patient. This kind of sensitive control of a patient’s responses to drugs, etc., is not now possible.


COMPUTER MODEL OF RETICULAR SYSTEM


There are many approaches being taken to artificial intelligence that could then evolve to higher-order, stable, intelligent systems. As we have seen, in the approaches of Gilstrap, Barron, and Fogel, there are, in a sense, “searching” ways of evolving small models, which then may grow into larger models. There are also profoundly theoretical approaches to how models may gain in problem-solving power, as in the work of Amarel, Newell, Simon, Minsky, Papert, Pask, and others, who make no special effort to relate their models to how intelligent living systems function. There are combinations of such approaches. And then there are approaches that specifically attempt to mimic or caricature, to a fairly realistic extent, the functions of the vertebrate nervous system.


In this last camp belongs some work of a deep order, like that of W. Kilmer, principally inspired through the lifelong work of W. S. McCulloch in his studies on the functioning of the brain.[46] In recent years, McCulloch has devoted his efforts to unraveling the mysteries of the operation of the core of the reticular formation, which runs through the spinal cord and the brain stem. From it all other parts of the brain have evolved; it is the “begetter” to which they all report and whose commands they obey. Figure 3 shows a diagram of the reticular formation in a cat. The “retic” has evolved so little that, if you do not know the magnification, you cannot tell whether a cross section from the retic came from a man, a mouse, an elephant, or a frog. Its crucial business is to commit the whole animal to one of a dozen or more incompatible modes of total behavior, such as eating, mating, sleeping, fleeing, fighting, hunting, or hiding.


Its million or so relatively undifferentiated neurons sample at random all sensory channels and all tracts ascending and descending in the nervous system, and talk to each other as well as to all other parts of the brain and sense organs. A single one of its neurons may respond to clicks, to a touch of the nose, and to a shaking of the left hind leg; but, if the stimulus is repeated, it responds less and less and may quit altogether unless that stimulation, say a click, is associated with a signal to which it must respond, say a strong electric shock to the left hind leg. Then the response to the click returns. When the animal sleeps, that same neuron may respond only to respiration or a bubble in the gut.
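

As a caricature only (the decay and recovery rates below are invented), the habituation just described might be sketched in a few lines of Python:

```python
def reticular_neuron():
    """Toy habituation: the response to a merely repeated stimulus
    fades, but pairing the stimulus with one the cell must answer
    (the shock) restores it. Rates are illustrative."""
    strength = 1.0
    def respond(paired_with_shock=False):
        nonlocal strength
        if paired_with_shock:
            strength = 1.0        # association revives the response
        out = strength
        strength *= 0.7           # habituation to mere repetition
        return out
    return respond

click = reticular_neuron()
print([round(click(), 2) for _ in range(4)])      # 1.0, 0.7, 0.49, 0.34
print(round(click(paired_with_shock=True), 2))    # response restored: 1.0
```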


Thanks to V. Amassian, A. and M. Scheibel, and many others, the activity of the retic components is well known. The Scheibels, D. Fortuyn, Valverde, and W. Nauta have worked out the anatomy of its internal connections. Much is known of its paths to receptors and to other parts of the brain, and much of its actions upon them. The problem confronting the neurophysiologist is its intrinsic functional organization, which must account for the ability of the reticular core to reach a working consensus of its many components in a fraction of a second so as to commit the whole animal to the proper mode of behavior.


Because neurons are nonlinear oscillators, the natural way to analyze this circuit action is in terms of the behavior of a vast number of coupled nonlinear oscillators. Unfortunately, the mathematics of such systems remained where N. Wiener and B. van der Pol had left it until the recent work of E. Caianiello, which may be of future help; it was simply inadequate to the task, and neither McCulloch nor Kilmer sees how to use Caianiello’s theory today. Next, since the reticular core is a net iterated throughout the length of the nervous system, the theory of iterated nets was investigated, but F. Hennie and W. Kilmer were able to show that the questions they thought worth asking could be proved to be recursively insoluble problems.


The model they finally attempted was based on the flow of information of the modern naval fleet, in which every important ship has its own center to which come signals from many other ships, and from all sensors, to detect friends and foes in the air, on the sea, and under the sea. The admiral has disciplined his fleet in a number of maneuvers and remains in titular command, but what the fleet does in games or in battles is actually determined by that ship having the requisite variety of information at the moment; the real command moves from one ship to another as the engagement goes on. In McCulloch’s words, “It enjoys a redundancy of potential command in which information constitutes authority.” The admiral is but its mouthpiece.


It is such a system that W. Kilmer, with the assistance of J. Blum, has been computer-simulating at the Instrumentation Laboratory of M.I.T. for L. Sutro’s group, which has been working on a sophisticated artificial visual system for unmanned exploration of Mars.[47] The simulation is designed so that it can be realized in hardware and miniaturized. Kilmer’s model consists of a dozen hybrid probabilistic computing modules coupled much as the specialists in a diagnostic team who, having each examined the patient, must come to a good working accord as to proper mode of treatment. The proper mode might be, say, one out of 16. In the present model, it is only one out of four. Figure 4 shows the system organization.


Each of the computing modules receives a random sample of the inputs and makes the best guess it can at the relative appropriateness of each mode of behavior; and each is informed of the random samples of the other modules. The coupling of the modules is fairly weak so that the computers exchange their preliminary guesses for several rounds, usually less than 15, before reaching their consensus. The dissimilarity of random inputs ensures the requisite variety of guesses. The nonlinear skewing of the estimates of probability and the high weighting of those that peak on a preferred mode prevent a “hung jury.” The model can be made to decide fast enough to command the whole system in response to a real environment. The redundancy of potential command insures that if a fraction of the modules lock in any position, oscillate wildly, or are shot out, the remainder can override them and continue to command reliably. The coupling of the modules leading to a given modal decision gives the system an inertial property, stabilizing the mode in the face of small or insignificant changes in inputs so that it is less distractable than, say, a monkey, which darts about responding to each slight stimulus. The system would be as stubborn as a pig, however, but for a second trick. Confronted with a drastic environmental change requiring a new mode, its modules decouple to an extent determined by the significance of the change and for the duration directly proportional to the system’s degree of entrenchment in a past mode.[48],[49]
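

A rough Python sketch may make such a consensus concrete. The dozen modules and the handful of modes follow the text; the coupling constant, the sharpening exponent, and the random sampling of “evidence” are our own stand-ins for the PSV actuation-correlation logic:

```python
import random

MODES, MODULES, COUPLING = 4, 12, 0.3

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

# Each module sees only a random sample of the environment's evidence.
evidence = [0.1, 0.2, 0.1, 0.6]             # environment favors mode 3
samples = [[e * random.uniform(0.5, 1.5) for e in evidence]
           for _ in range(MODULES)]
guesses = [normalize(s) for s in samples]

for rounds in range(1, 16):                 # usually < 15 rounds suffice
    mean = [sum(g[m] for g in guesses) / MODULES for m in range(MODES)]
    # Weak coupling: each module nudges toward the group estimate,
    # then nonlinearly sharpens guesses that peak on a preferred mode.
    guesses = [normalize([(1 - COUPLING) * g[m] + COUPLING * mean[m]
                          for m in range(MODES)]) for g in guesses]
    guesses = [normalize([p ** 1.5 for p in g]) for g in guesses]
    votes = [max(range(MODES), key=lambda m: g[m]) for g in guesses]
    if votes.count(votes[0]) == MODULES:    # consensus commits the animal
        break

print("committed to mode", votes[0], "after", rounds, "rounds")
```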


The monkey-pig imagery highlights the fact that there is a great variability in the inertias of entrenched modes among animals; and one can easily see such a variability among one’s friends. That is, in the human species, there are a variety of behavioral styles of organizing response, and certain deeply entrenched modes have long been the source of interesting “characters” in the tradition of the novel.


In the Kilmer system, this kind of inertial response of the overall system is balanced through the cooperative decision behavior of the computers.


The limitations of the present model are due to oversimplification purposely made to keep its programming transparent and flexible. The number of modules can be increased, yielding ever-better action. The number of modes can be increased to a dozen or more as the problems require. The time between a conditioned stimulus and the unconditioned stimulus can be extended by shift registers that are easily miniaturized. In hardware, its complexity is today limited by the technical difficulties of making the vast number of required connections.


Ordinary algorithms for designing systems to handle information require some regularity and even distribution of input, whereas this model of the reticular core has to work, like the real reticular core, when the crucial information is presented partially to few and scattered modules. This, and the necessity of adjusting its nonlinearities, made its design extremely difficult, and only the necessity of inventing a computer to command computers and controllers (it could be used as a command center for a robotic device or a community of robots) pushed the work forward.


In fact, Kilmer tells us, the system is so complex to build and to test that its designers have had to play with it, to engage in a dialogue with it, to feel out the best way of using it, to feel out the best way of even describing it.[50] Only through dialogue of the kind we have described[2] has the system been evolved to its present status. In this, Kilmer and his associates have been following a principle of McCulloch’s, namely, that to think about such systems in new terms, it was necessary to evolve a system that specifically one could think with rather than just think about. In this sense, the system incorporates noise, novelty, a sense of the importance of relative factors, the capacity to shift attention, and so on, none of which would have been possible without dialogue (Kilmer’s statement, not ours). The system thus fits clearly into the camp of evolutionary machines.


OTHER APPROPRIATE MODELS


There are many other important efforts at modeling intelligence that ought to be described at equal length. We should at least consider briefly the works of men like Papert, Minsky, Amarel, Pask, Arbib,[51] and others.


Without hesitation, we should cite the models being developed by S. Papert in his cooperative endeavor with M. Minsky to evolve a robotic system with powers of artificial intelligence. Papert[52] has described three levels of computer-based models that he has labelled tau, theta, and rho models. The first level of models, the tau’s, are really toys, small idea embodiments, with which to develop a feeling for how they might operate. From these, there arise a second series of models, the theta’s, that are theoretical constructs, or higher-level models based on the experience with the toys. On the last level come the rho’s, the real-world models, machine constructs that could operate in real time in the real world, carry out intelligent tasks, and form the basis for robotic intelligence (e.g., carrying out construction tasks on other planets).


Another man who has produced many important ideas on problems of representation and modeling, especially in realms where there is very little a priori knowledge about the structure of the problem territory, is S. Amarel.[53],[54],[55],[56],[57] He shows in his work that a change in problem representation has a profound effect on the power of a problem-solving system. The problems of mechanizing such evolutionary transitions, Amarel sees as being related to the problem of mechanizing certain creative processes.[57]


Pask, whose research on adaptive teaching machines was mentioned earlier, in his most recent paper on communications between men and machines,[5] makes certain most relevant points about the organization of the levels and languages of evolutionary machines. He says, for instance, that a stable tutorial discourse must include higher-level components, in the sense that the student can propose modifications of the educational goals and of the mode of instruction just as he would in a real-life tutorial. Pask says: The machine must be able to accept or reject these proposals according to whether or not they foster the learning process, and it must, in some sense, discuss its acceptance or rejection with the student. The mechanical language used to mediate the man-machine interaction must be rich enough to accommodate this sort of repartee.


The trouble is, says Pask, the present design of machines (including procedural machines and institutions) precludes an adequate man-machine rapport, because we design mechanical languages (and consequently design machines) in terms that are rigorous (as in logic textbooks) rather than natural to our way of thinking. And, Pask contends, the solution to the dilemma appears to rest upon a radical reappraisal of the character of human and mechanical systems. Unlike the control systems we traditionally have considered, man, he goes on, shows a propensity for seeking novel goals at all levels in the hierarchy (for posing problems as well as solving them). And he argues, very much as we have, that new machines must possess internal evolutionary processes; the goals and the concepts they generate cannot be completely described in the working language of the system at the moment of their inception, but rather become describable as selection occurs. If two systems (men or machines) are to innovate jointly, the language used for the discourse must be able to express ambiguous or “incompletely described” concepts, as natural language does, augmented perhaps by gesture and pictorial displays. He says that if there is to be a coupling between the evolutionary processes in the two systems (men or machines), then the two systems must be able to interpret ambiguous expressions. And the trouble, of course, he says, is that existing mechanical languages differ from natural languages in that they do not accommodate ambiguous expressions.


In his work, Pask proposes an organizational or cybernetic model to replace our present existing models. Perhaps the highlights to remember here are that different languages must be evolved for each level of the hierarchy in the evolutionary machine, that ambiguity must exist in each language at each level, that discourse or dialogue should go on at many levels (in parallel, or simultaneously), and that either machine or man will compensate for a heightened ambiguity (and hence novelty) by checking information inputs on one channel against information inputs on another channel.


There has been implicit in our examples and arguments up to this point a crucial shift in viewpoint on how it is that a living creature such as man “reads” his environment and maintains his stability as a whole organism within it. This shift in viewpoint, which has to do basically with man’s perceptual system,[58] is explicated in a paper of D. M. MacKay.[26] The traditional view of perception took it to be a kind of passive two-way mechanical process in which the organism is delivered a stimulus and then executes a response, as though there were two sequential processes involved. MacKay’s research suggests that perception is the activity of keeping up to date the internal organizing system that represents the external world, by an internal matching response to the current impact of the world upon the receptive system. In a sense, the internal organizing system is continually making fast-time predictions of what is going on out there and what is likely to happen out there, and it takes anticipatory action to counter the small errors that might threaten its overall stability mode.
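

A few lines of Python suggest the shift in viewpoint; the update rule is a generic predictor of our own devising, not MacKay's model:

```python
def perceiver(observations, rate=0.5):
    """Sketch: the internal model predicts the next input and is
    updated only by the mismatch (the 'news'), rather than by
    passively copying each stimulus in turn."""
    model, surprises = 0.0, []
    for obs in observations:
        error = obs - model        # mismatch between world and model
        surprises.append(round(abs(error), 2))
        model += rate * error      # anticipatory matching response
    return surprises

# Surprise is large only when the world departs from expectation.
print(perceiver([1.0] * 5 + [3.0] * 5))   # spike at the sixth input
```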


One simple way of looking at this model of perception is to perform a simple experiment that shows how you bring your own meaning to the environment through action on your own part to heighten your sense of interaction. Take a small object, close your eyes, and strike it or drag it across your open palm. Then grasp the object with your whole hand, and manipulate it so that you feel its shape. At some point, you will instinctively grasp mentally the nature or character of the object. That is, your positive action of searching finally illuminates a whole image in your mind, whereas the passive reception does not. This analogy is very rough and ready, of course, and does not convey the heightened sense of the anticipatory updating action, but it may at least suggest how it is that perception is an active process. The fact that you can see the object as well as feel it, etc., heightens and reinforces your sense of its significance to your own purposes.


Studies of speech perception exhibit similar findings, showing that we tend to anticipate what another person is going to say, so that when the other person says something surprising, something unexpected, our whole receptive system tends to become more alert. The unexpected words, sentences, tones, etc., may excite, threaten, and so on, depending on the context, the relation to our internal mapping of the world, and so on. Speech simulators that utter sentences without intonation may leave the listener with the feeling that he has heard the words without being able to determine their meaning.


A. R. Johnson has also written about this view of sensory-motor behavior acting as an active perceptual buffer between the real world and the person’s stored cognitive model of that world.[59],[60] In short, he says, our understanding is now this: our ability to be passive observers of incoming data is a learned and highly developed ability, and such learning may only ensue through active interaction with the external world. Like us, machines may learn best by active participation.


And, certainly, your “response” and interaction with a machine that actively “looks out” at you through many channels, and that actively anticipates your moves (as another person does), is going to be quantitatively and qualitatively different.


H. T. Hermann and J. C. Kotelly, in studying psychiatric situations, have been the first authors who have really tried to deal with “context” formally.[61]


Finally, to go afield, we should realize that Soviet cyberneticians are well advanced in their models of thinking and of the mind.[62],[63]


TEMPORAL COUPLING


In the design of evolutionary systems, it will be crucial to recognize that different time cycles will be manifested on different levels of such systems. In general, this will mean that different forms of dialogue will occur. Dialogue at a microbehavioral level occurs, for instance, when both man and machine have short time constants (the machine might monitor a man’s eye pupil dilation, finger movement, etc.); very large systems, such as whole societies of men and machines, are characterized by very slow change, measured in years, as for instance in the dialogue of politics. In general, a man or machine, one acting slightly slower or faster than the other, might be used to evolve a kind of “time driving” or “entrainment” that would intentionally drive systems out of their habitual timing. For instance, a fast machine can push a slower man, as in speed-reading training; on the social level, things such as telephones and television have speeded up political responsiveness and created a tighter dialogue within the body politic.


Man-machine investigation of the use of such phenomena to enhance or disrupt information exchange is yet to be carried out in any coherent way. We cannot treat such time questions at length here, but we must stake out a few signposts in this dark and difficult territory so that those who take evolutionary systems seriously will know where to begin looking for relevant research.[64],[65],[66],[67],[68],[69],[70]


A recent report by A. Iberall and W. McCulloch shows something of how a physicist looks at temporal events in biosystems, from the cell or atomistic level on up to the whole organism.[64] Figure 5, for instance, from that report, shows a kind of rough schematic of the time for complete event cycles at various levels of man’s physiological behavior.


The concept of different kinds of time, usually associated with different levels of abstraction, called “time graining,”[65] apparently has a near counterpart in cybernetic thinking in the Soviet Union. N. M. Amosov describes levels of time storage shown schematically in Fig. 6, in his book, Modeling of Thinking and the Mind.[62]


The use of transmission time in the links of self-organizing systems for information storage is opened up clearly by D. M. MacKay.[66] In effect, MacKay asks what aspects of such systems are being neglected, or not doing their share, and concludes that for information storage and handling, time itself is a medium of storage. For instance, the length of delay in a person’s response tells his interlocutor (man or machine) something he might otherwise miss; it is information that can be sensed on a nonverbal and nonvisual level. Moreover, time information, MacKay notes, may be transmitted on many levels simultaneously, and the event cycles on these different levels would, of course, be of different duration, which means, too, that interactions between the levels would be of a complicated kind.[66]
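MacKay’s point that delay itself carries information can be given a rough quantitative form. The sketch below, with illustrative delay values and bin width of our own choosing, quantizes a set of observed response delays and computes the Shannon entropy of the result, i.e., a crude estimate of the bits per response conveyed by timing alone:

```python
import math
from collections import Counter

def delay_information(delays, bin_width=0.25):
    """Rough estimate, in bits per response, of the information carried
    by response timing alone: quantize each delay (in seconds) into bins
    and compute the Shannon entropy of the bin distribution."""
    bins = [int(d / bin_width) for d in delays]
    counts = Counter(bins)
    n = len(bins)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# e.g., a mixture of prompt, hesitant, and long pauses in a dialogue:
print(delay_information([0.3, 0.4, 1.1, 0.3, 2.6, 0.4, 1.2, 0.3]))
```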


THE EVOLUTION OF NEW LANGUAGES


We noted earlier that change in any part of an evolutionary system brings in its train reverberating changes in the remainder of the system. Thus, we should recognize that much recent research on languages and methods of representation, and on new media for communications, will exercise an expansive power in the nature of our dialogue with machines. As D. C. Engelbart points out, present languages, present media, and present methods of manipulating languages have limitations that make many problems difficult if not impossible to attack, because we have no adequate means of quantifying them. And quantify here does not mean merely the traditional sense of measuring and assigning numbers; it also means finding units for measure. New languages could allow us to find new coherent and well-defined shapes or Gestalts in our relation with our environment. For instance, as every scientist and engineer knows, the discovery and invention of new forms of mathematics opened new worlds. So too, computer languages, as they steadily grow more powerful, are opening new worlds.


In the scale of languages, we are weighing not only verbal languages in the classical sense, but also languages of gesture, graphical languages of all kinds, and so forth. We shall touch on just a few points of recent relevant research to suggest the potential of new developments.


Graphical representations of two, three, and more dimensions generated with the assistance of computers are currently of keen research interest. One of the important first examples of computer graphics was I. Sutherland’s Sketchpad system, which has since been followed by many others. Sutherland, along with Coons and others, is investigating true three-dimensional display languages. Two-dimensional projections of three-dimensional figures that can be rotated at will with a hand-operated hemispheric control have already been used for some time on the “Kluge” system. R. Stotz and others at Project MAC are now working on low-cost graphic displays for computer time-sharing consoles.[71]
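The geometric core of such displays is simple enough to sketch. The fragment below is an illustration of ours, not the actual Sketchpad or “Kluge” code: it rotates a stored wireframe cube by an angle that would in practice come from the hand control, then projects each vertex onto the two-dimensional screen:

```python
import math

# Vertices of a unit wireframe cube, the computer-stored figure.
CUBE = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

def rotate_y(p, theta):
    """Rotate a point about the vertical axis by theta radians."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * z, y, -s * x + c * z)

def project(p, viewer_distance=4.0):
    """Simple perspective projection onto the screen plane."""
    x, y, z = p
    scale = viewer_distance / (viewer_distance + z)  # farther -> smaller
    return (x * scale, y * scale)

theta = 0.5  # would be read continuously from the hemispheric control
for vertex in CUBE:
    print(project(rotate_y(vertex, theta)))
```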


A delightful example of graphical language for writing computer-generated music is that of M. V. Mathews.[72] He has a procedure for drawing scores as graphical functions of time using a light pen on a CRT attached to a small computer. The graphical input is transmitted digitally to a larger computer, which synthesizes the sound and reproduces it immediately through a loudspeaker. Figure 7 shows the arrangement of the man-machine loop. Programs have been developed for producing a variety of sounds using programmed instruments that simulate real-world instruments. Composers such as Varese and Stockhausen have already experimented with graphic scores. An interesting element of the system is that it can be programmed so that the computer generates parts of the music through random and mathematical algorithms, “suggesting” new possibilities to the composer. Thus, the composer plays with musical ideas that might not occur to him on his own. The joyful part of a system like this, aside from its obvious powers, is that it could be used by children, who could learn to compose music directly through making drawings before, for instance, their hands were big enough to play a piano. We might then see an adult interest in children’s music comparable to the interest in children’s paintings during the past two decades.
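The essential transformation, from a drawn curve of pitch versus time to a stream of sound samples, can be sketched in a few lines. The breakpoints and sampling rate below are illustrative stand-ins of ours for what the light pen and the larger computer would actually supply; this is not the Mathews-Rosler program itself:

```python
import math

RATE = 8000                                          # samples per second
score = [(0.0, 220.0), (0.5, 440.0), (1.0, 330.0)]   # drawn (seconds, Hz)

def pitch_at(t):
    """Linear interpolation between the drawn breakpoints."""
    for (t0, f0), (t1, f1) in zip(score, score[1:]):
        if t0 <= t <= t1:
            return f0 + (f1 - f0) * (t - t0) / (t1 - t0)
    return score[-1][1]

samples, phase = [], 0.0
for n in range(int(RATE * score[-1][0])):
    # Accumulate phase so the pitch can glide without clicks.
    phase += 2 * math.pi * pitch_at(n / RATE) / RATE
    samples.append(math.sin(phase))  # would be sent to the loudspeaker
```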


The really significant aspect of the Mathews system is that it puts the man directly in the feedback loop with the responsive machine, and although the system is not yet operating quite in real time, it is getting there in order-of-magnitude jumps.


Another computer-based graphical system, set up by P. Denes, is designed for studying the characteristics of human speech. With a light pen and a keyboard, he can vary a dozen or so parameters of speech (e.g., pitch, amplitude, formant frequencies) so as to study their interactive roles, and hear an immediate synthetic voice output. Thus, he has the power of both visual and audio feedback.[73] The results can be delicious to hear. In interacting with this kind of system, a person can gain new insights into his own speech production. Such a machine could also have a positive effect in the education of children.
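One standard way to realize such synthesis, sketched here only as an illustration and not as Denes’s actual program, is formant synthesis: a periodic buzz source is passed through one resonator per formant, and the operator hears the vowel quality change as he moves the formant frequencies. All values below are illustrative:

```python
import math

RATE = 8000
F1, F2 = 700.0, 1200.0   # formant frequencies, Hz (the adjustable knobs)
BW = 100.0               # formant bandwidth, Hz

def resonator(signal, freq, bw, rate=RATE):
    """Two-pole digital resonator centered on freq with bandwidth bw."""
    r = math.exp(-math.pi * bw / rate)
    c1, c2 = 2 * r * math.cos(2 * math.pi * freq / rate), -r * r
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = x + c1 * y1 + c2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

# One second of 100-Hz "buzz" excitation (an impulse every 80 samples).
buzz = [1.0 if n % 80 == 0 else 0.0 for n in range(RATE)]
voice = resonator(resonator(buzz, F1, BW), F2, BW)  # a crude vowel
```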


Computer-based graphical experiments of an unusual kind have also been carried out by A. M. Noll.[74] He produces three-dimensional images by having the computer calculate separate pictures for the left and right eyes that, viewed stereoscopically, fuse and generate the illusion of true 3-D depth. He points out that with currently available systems that use two CRTs mounted on the sides of a helmet, with half-silvered mirrors, images can be presented to each eye. In this way, a person can see both the normal surroundings and the CRT images superimposed on those surroundings.[75] If a computer generates the images for this helmet device, and the movement of the head is picked up and fed into the computer, a person can move about and the computer-generated 3-D image will change accordingly, giving him the sensation that he is in that environment. Imagine the excitement of “taking a walk” with such a system in a world governed by the laws of molecular systems, or of actually making those “quantum jumps” one had to study about in textbook equations. Would this not be a new world of realization for the scientific student? A computer-driven, 3-D movie that time-drives us and allows us to feel our way into the dynamics of the “underworld” would thus make plainly and simply visible relationships that were barely evident before, and that could barely be guessed by the most carefully educated intuition. The value of this kind of system in architectural design, in which an architect could walk through and get the feel of his building before a physical element ever went up on the site, is incalculable.
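The principle of the stereo pair is easily shown. In the sketch below (our illustration, with plausible but invented dimensions), the scene is projected twice, once from each eye position; the head position, which the helmet sensor would supply, simply shifts both eyes together:

```python
EYE_SEPARATION = 0.065   # meters between the eyes (illustrative)
SCREEN_DISTANCE = 0.5    # meters from eyes to the virtual screen plane

def project_eye(point, eye_x):
    """Perspective projection of a 3-D point onto the screen plane,
    as seen from an eye at (eye_x, 0, 0) looking down the +z axis."""
    x, y, z = point
    s = SCREEN_DISTANCE / z
    return ((x - eye_x) * s + eye_x, y * s)

def stereo_pair(point, head_x=0.0):
    # head_x would come from the head-position sensor on the helmet.
    left = project_eye(point, head_x - EYE_SEPARATION / 2)
    right = project_eye(point, head_x + EYE_SEPARATION / 2)
    return left, right

# The horizontal disparity between the two images codes the depth:
print(stereo_pair((0.1, 0.05, 2.0)))
```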


Noll also envisions a system in which a mechanical manipulator similar to those used in nuclear installations would be connected to the computer. The user, he says, would put his hands into this manipulator and could then move them freely about, with the computer sensing the exact position in space of his hands and fingers. The servomechanism in the manipulator would also be connected to the computer, so that as the hands approached a computer-stored object, the servo would lock to give the person the feeling of touching the object. With such a system, the man in this loop would see the object in 3-D, feel it, and even have the sensation of picking it up and moving it about[75] (one could project this scene into the microcosm or out to the macrocosm).
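The servo-lock logic reduces to a fast loop that tests the hand position against the stored object and commands a restoring force on contact. The sketch below is a toy version of that idea, with a sphere as the stored object and hypothetical sensor and servo interfaces:

```python
CENTER, RADIUS = (0.0, 0.0, 0.5), 0.1   # the computer-stored object
STIFFNESS = 400.0                       # newtons per meter, illustrative

def servo_force(hand):
    """Force to command: zero in free space, a spring-like push back
    along the surface normal once the hand penetrates the object."""
    d = [h - c for h, c in zip(hand, CENTER)]
    dist = sum(v * v for v in d) ** 0.5
    if dist >= RADIUS or dist == 0.0:
        return (0.0, 0.0, 0.0)          # hand is in free space
    depth = RADIUS - dist               # how far inside the surface
    return tuple(STIFFNESS * depth * v / dist for v in d)

# In use, the loop would run many times a second:
#   force = servo_force(read_hand_position())   # hypothetical sensor
#   command_manipulator(force)                  # hypothetical servo
print(servo_force((0.0, 0.0, 0.45)))
```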


S. Silverstone and N. Negroponte have been developing graphic languages especially designed for representing aspects of urban growth and large city-size systems.[76],[77] S. Boutourline has also been designing systems that employ enriched coupling.[78]


Another exciting venture into new languages is cited by G. Kepes, namely, the language of artificial light, a creative medium that has thus far been used mainly by artists. Kepes notes that “in spite of the current vogue of light art, we have only begun to explore the rich potentials of the medium. The works done today . . . are no more than the individual words—if not just the letters of the alphabet—of an emerging vocabulary of light art.”[79] He also cites the acceptance of randomness—that is, of nonmechanical process—as an important aspect of kinetic design that tends to avoid the deadening effect of merely cyclical events. Anyone who has gone to a psychedelic nightclub, and simply allowed himself to “ride” the gyrations of strobe lights, knows how he can be led to a decidedly new and provocative perception of the world.


What we have been emphasizing here, largely because of their novelty, are physical applications of new languages growing out of new media. The full speaking of these languages will indeed press far out into territories lying beyond old constraints.


But these new languages, too, will evolve through definite levels of sophistication and abstraction, and each level gained will bring in its wake new powers of conceptualizing the world. Psycholinguists have recently discovered that children, as they grow, pass through definite levels of grammatical structure and linguistic capacity that in some deep way must reflect the natural evolution of man.[80]


So too, as S. Papert points out, must computers (or artificial intelligence machines) be trained, so that they will make their kind of conceptual leaps.[81]


On theoretical grounds, one must also watch for the work of G. Gunther, W. S. McCulloch, and R. Moreno-Diaz. Their “triadic logic,” although not discussed here, will in due course be seen by the artificial-intelligence community as crucially important to the future mechanization of logical processes like those in living systems: logical processes that depend on “languages of becoming” rather than on “languages of being.”[82],[83]


A way of grasping how a man’s on-line interaction with a computer can bring new meaning is to look back historically to the day that the movie was invented—a process of seeing photographs one after the other, projected at a rate faster than the visual system could accommodate as “stills” (this is, by the way, another example of time driving). Thus, it was discovered that this new medium, the movie, could arouse feelings, sensations, and insights that could not be aroused by single photos. The movie was a new language medium, and so too is the computer. Inasmuch as it is the step to artificial intelligence and to evolutionary applications of machines, we are only beginning to guess what possibilities of communication it holds in store for us and our children.

Notes

Darwin, Charles, On the Origin of Species. Facs. of 1859 ed., intro. by E. Mayr. Cambridge, Mass.: Harvard Univ. Press, 1964, p. 459.

Brodey, W. M., and Lindgren, N., “Human enhancement through evolutionary technology,” IEEE Spectrum, vol. 4, pp. 87-97, Sept. 1967.

Feigenbaum, E. A., and Feldman, J., eds., Computers and Thought. New York: McGraw-Hill, 1963.

Jones, J. C., “The designing of man-machine systems,” Conf. on the Human Operator in Complex Systems, Univ. of Aston, Birmingham, England, July 26-28, 1966.

Pask, G., “Comments on men, machines, and communication between them,” Vision 67—ICCAS, International Center for the Communication Arts and Sciences, New York Univ., Oct. 19, 1967.

Minsky, M., “Artificial intelligence,” Sci. Am., vol. 215, no. 3, pp. 246-260, Sept. 1966.

Minsky, M., “Steps toward artificial intelligence,” Proc. IRE, vol. 49, pp. 8-30, Jan. 1961.

Solomonoff, R. J., “Some recent work in artificial intelligence,” Proc. IEEE, vol. 54, pp. 1687-1697, Dec. 1966.

Sklansky, J., “Learning systems for automatic control,” IEEE Trans. Automatic Control, vol. AC-11, pp. 6-19, 1966.

Barron, R. L., “Self-organizing and learning control systems,” 1966 Bionics Symp., May 2-5, Dayton, Ohio.

Brooks, S. H., “A comparison of maximum-seeking methods,” Operations Res., vol. 7, pp. 430-457, 1959.

Rastrigin, L. A., “The convergence of the random search method in the extremal control of a many-parameter system,” trans. from Avtomatika i Telemekhan., vol. 24, pp. 1467-1473, 1963.

Matyas, J., “Random optimization,” trans. from Avtomatika i Telemekhan., vol. 26, pp. 246-253, 1965.

Gilstrap, L. O., Jr., et al., “Study of large neuromime networks,” Tech. Report AFAL-TR-67-316, Dec. 1967.

Barron, R. L., “Inference of vehicle and atmosphere parameters from free flight motions,” Paper no. 67-600, AIAA Guidance, Control and Flight Dynamics Conf., Huntsville, Ala., Aug. 14-16, 1967.

Barron, R. L., et al., “Synthesis of a spacecraft probability state variable adaptive control system,” Final Project Rept., June 30, 1967.

McKechnie, R. M., and Barron, R. L., “Design principles for self-organizing control system flight hardware,” NAECON, Dayton, Ohio, May 15-17, 1967.

Von Foerster, H., “Quantum mechanical theory of memory,” Trans. Sixth Conf. on Cybernetics, Josiah Macy, Jr., Foundation, New York, N.Y., pp. 112-145, 1950.

Ashby, W. R., Design for a Brain. New York: Wiley, 1960.

Barron, R. L., Private communication, Oct. 31, 1967.

Barron, R. L., “The future for self-organizing control,” Control Eng., to be published.

Fogel, L. J., et al., “Intelligent decision-making through a simulation of evolution,” IEEE Trans. Human Factors in Electronics, vol. HFE-6, pp. 13-23, Sept. 1965.

Fogel, L. J., et al., Artificial Intelligence Through Simulated Evolution. New York: Wiley, 1966.

Fogel, L. J., “Inanimate intellect through evolution,” Naval Res. Rev., to be published.

Fogel, L. J., Private communication.

MacKay, D. M., “Ways of looking at perception,” in Models for the Perception of Speech and Visual Form, W. Wathen-Dunn, ed. Cambridge, Mass.: The M.I.T. Press, 1967.

Engelbart, D. C., “Augmenting human intellect: a conceptual framework,” SRI AD 289565, Oct. 1962.

Engelbart, D. C., “Augmenting human intellect: experiments, concepts, and possibilities,” Summary Rept., SRI Project 3578, March 1965.

English, W. K., et al., “Computer-aided display control,” Final Rept., SRI Project 5061, July 1965.

Engelbart, D. C., et al., “Study for the development of human intellect augmentation techniques,” Interim Progress Rept., SRI Project 5890, March 1967.

Hoffer, Eric, “Man, play and creativity,” Think, vol. 33, no. 5, Sept.-Oct. 1967.

Engelbart, D. C., “Automated psychomotor skill training,” unpublished proposal.

Engelbart, D. C., and Sorensen, P. H., “Explorations in the automation of sensorimotor skill training,” Tech. Rept. NAVTRADEVCEN 1517-1, Jan. 1965.

Pask, G., “Machines that teach,” New Scientist, vol. 10, June 1961.

Pask, G., “Thresholds of learning and control,” Data and Control, Feb. 1964.

Lewis, B. N., and Pask, G., “The development of communication skills under adaptively controlled conditions,” Programmed Learning, July 1964.

IEEE Trans. Human Factors in Electronics, vol. HFE-8, June 1967.

Weizenbaum, J., “Contextual understanding by computers,” Commun. ACM, vol. 10, pp. 474-480, Aug. 1967.

Rosenblith, W. A., ed., Sensory Communication. Cambridge, Mass.: The M.I.T. Press; also New York: Wiley, 1961. See “Editor’s Comment,” p. 824.

Walter, W. G., “Specific and non-specific cerebral responses and autonomic mechanisms in human subjects during conditioning,” in Progress in Brain Research—Vol. 1. Amsterdam: Elsevier, 1963.

Farley, B. G., “Some similarities between the behavior of a neural network model and electrophysiological experiments,” in Self-Organizing Systems, M. Yovits, ed. New York: Pergamon, 1962.

Farley, B. G., and Clark, W. A., Jr., “Activity in networks of neuron-like elements,” Proc. 4th London Conf. on Information Theory, C. Cherry, ed. London: Butterworth, 1961.

Dewan, E. M., “Communication by voluntary control of the electroencephalogram,” AFCRL-66-736, Oct. 1966.

Hecker, M. H. L., “The effects of task-induced stress on speech,” AFCRL-67-0499, Aug. 25, 1967.

Ray, Charles, “The coming internal instrumentation of physiology and medicine,” to be published.

McCulloch, W. S., Embodiments of Mind. Cambridge, Mass.: The M.I.T. Press, 1965.

Sutro, L. L., Kilmer, W. L., Moreno-Diaz, R., McCulloch, W. S., et al., “Development of visual, contact and decision subsystems for a Mars rover,” M.I.T. Instrumentation Lab. Rept. R-565 (July 1966-Jan. 1967), July 1967.

Kilmer, W. L., McCulloch, W. S., and Blum, J., “An embodiment of some vertebrate command and control principles,” to be published.

Kilmer, W. L., “A reticular formation model decision computer for a robot,” unpublished report.

Kilmer, W. L., Private communication, Dec. 15, 1967.

Arbib, M. A., Brains, Machines, and Mathematics. New York: McGraw-Hill, 1964.

Papert, S., paper presented at Conf. on Intelligence and Intelligent Systems, Athens, Ga., Jan. 15-18, 1967.

Amarel, S., “Introductory comments: on representations and modeling in problem solving,” Conf. on Intelligence and Intelligent Systems, Athens, Ga., Jan. 15-18, 1968; publ. in RCA report.

Amarel, S., “On the automatic formation of a computer program which represents a theory,” in Self-Organizing Systems—1962, M. Yovits et al., eds. New York: Spartan, 1962.

Amarel, S., “On machine representations of problems of reasoning about actions—the missionaries and cannibals problem,” in Machine Intelligence 3, D. Michie, ed. Edinburgh: Univ. of Edinburgh Press, 1968.

Amarel, S., “An approach to heuristic problem solving and theorem proving in the propositional calculus,” in Systems and Computer Science. Toronto: Univ. of Toronto Press, 1967.

Amarel, S., “On the mechanization of creative processes,” IEEE Spectrum, vol. 3, pp. 112-114, April 1966.

Gibson, J. J., The Senses Considered as Perceptual Systems. Boston: Houghton Mifflin, 1966.

Johnson, A. R., “A structural, preconscious Piaget: heed without habit,” Proc. Nat. Electron. Conf., vol. XXIII, 1967.

Johnson, A. R., “A cybernetic trend,” Am. Soc. for Cybernetics Newsletter, to be published.

Hermann, H. T., and Kotelly, J. C., “An approach to formal psychiatry,” Perspectives Biol. Med., Winter 1968.

Amosov, N. M., Modeling of Thinking and the Mind. New York: Spartan, 1967.

Ford, J. J., “The development pattern of the Soviet automation program,” Am. Soc. for Cybernetics, unpublished.

Iberall, A., and McCulloch, W., “1967 behavioral model of man—his chains revealed,” NASA Rept. CR-858, July 1967.

Brodey, W. M., “Information exchange modeled in the time domain,” presented at Am. Psychiatric Assoc. meeting, Atlantic City, N.J., May 1966.

MacKay, D. M., “Self-organization in the time domain,” in Self-Organizing Systems—1962, M. Yovits et al., eds. New York: Spartan, 1962.

Gibson, J. J., “The problem of temporal order in stimulation and perception,” Artorga, Aug./Sept. 1967.

Von Foerster, H., “Time and memory,” from panel discussion, Interdisciplinary Perspectives of Time, Ann. New York Acad. Sci., vol. 138, pp. 866-873, Feb. 6, 1967.

Brodey, W. M., “The clock manifesto,” Ann. New York Acad. Sci., vol. 138, pp. 895-899, 1967.

Smith, K. U., Delayed Sensory Feedback and Behavior. Philadelphia: Saunders, 1962.

Stotz, R. H., and Cheek, T. B., “A low-cost graphic display for a computer time-sharing console,” M.I.T. ESL-TM-316 Tech. Memo., July 1967.

Mathews, M. V., and Rosler, L., “Graphical language for the scores of computer-generated sounds,” to be published.

Denes, P. B., “The elusive process of speech,” Bell Lab. Record, pp. 254-259, Sept. 1966.

Noll, A. M., “The digital computer as a creative medium,” IEEE Spectrum, vol. 4, pp. 89-95, Oct. 1967.

Noll, A. M., “Description of graphic manipulator,” Private communication, Aug. 25, 1967.

Silverstone, S. M., and Rusch, C. W., “Communication in the design process,” or “The medium is not the solution,” AIA J., vol. 48, Dec. 1967.

Negroponte, N., and Groisser, L., “Urban 5: An on-line urban design partner,” IBM Sci. Center Rept. 36.Y15, June 1967.

Boutourline, S., “The concept of environmental management,” Dot Zero IV, Sept. 1967.

Kepes, G., “Kinetic light as a creative medium,” Technol. Review, M.I.T., pp. 25-35, Dec. 1967.

Smith, F., and Miller, G. A., The Genesis of Language. Cambridge, Mass.: The M.I.T. Press, 1966.

Papert, S., “Why machines can’t think,” First Ann. Symp. of Am. Soc. for Cybernetics, Gaithersburg, Md., Oct. 26, 1967, to be published.

McCulloch, W. S., and Moreno-Diaz, R., “On a calculus for triadas,” M.I.T., R.L.E. Progress Rept., no. 84, pp. 335-346, Jan. 15, 1967.

Moreno-Diaz, R., “Description of the dialogue implicit in man-environment interaction—a triadic formal language,” Private communication, June 1967.