
Experiments in Self-Learning Closed Loop Simulation


A highly technical presentation by Brodey and Johnson, which sheds light on their view — emerging mostly from Johnson's doctoral studies — about the important role that perception and intentionality play in cognitive processes. How should this view change how we build prosthetic devices?


Progress in direct accessing of the cortex by electrical stimulation allows us seriously to consider prosthetic systems which either bypass disrupted human—environment interfacing systems, or augment those already functioning normally. This conference bears upon the former. The study of either application benefits the other. 


Direct accessing first brings the thought of blindness being undone by transducing light phenomena into electrical phenomena recognizable by the brain: an exercise in finding the “right” transducers and filters. Many simple transducing systems have been fashioned in this mode but none has received more than passing interest from the blind. Though the devices have often fulfilled their designer's requirements, the blind complain that the information received is not worth the effort. The idea of an artificial eye, like an artificial heart, is intriguing. How can such a device be rendered congenial to the user? Can the task be simplified by alloying technological sophistication with dependence upon the user's resources to provide the prosthesis with the “grasp” of active looking, rather than the simple “two-point threshold” of being shown?


The point is not a trivial one. Raw sensory information, passively received, has little meaning for the recipient unless he has the opportunity to explore it actively. As a passive observer, you might have an object pressed, moved, rotated, or otherwise touched to your hand and in the absence of any response on your part you would find it difficult to make an identification. However, allow but a fleeting moment of grasp of the object and its identification can be immediate: the process of grasping, as organized by the object itself, is the descriptor that is recognized. The behavior of the perceiver engaged in applying his perceptual apparatus to the environment forms the percept [1][2].


This paper will particularly emphasize, and perhaps exaggerate, the effective use of the effector aspects of the perceptual system. As noted above, this aspect is often given design importance only as an afterthought; complex user participation in looking for/at what he wants to see is, we believe, essential to the congeniality of the system. The user twisting hand dials is not what we mean. We are concerned primarily with interfacing systems which organize themselves to explore the environment in a somewhat autonomous way, and in whose behavior the user participates as one removed from the sense data. This is the kind of participation with which we are now beginning to become familiar.

For the purposes of argument, we will propose an embodiment which has not yet been realized in hardware but which is technically feasible. We will suggest later the steps of simulation which may be necessary for its actual design and fabrication. The focus of our interest at this conference is upon the fundamental qualities of the interfacing system rather than upon its structural details. We hope that later conferences will bring more detailed entelechies to review.

Consider a prosthetic replacement for an excised eyeball which has been attached to the periocular muscles both mechanically and by way of stimulator electrodes which can cause those muscles to contract. No other means of attachment nor of signal transduction is proposed. We wish, thus, to define an active search process which will elaborate an interface with the higher-resolution positional control loops conceivable via direct cortical stimulation.

Within the eyeball itself are: (1) an optical system with a single and variable iris, (2) a retina of a few tens of photosensitive elements covering perhaps 30° of visual angle and distributed in depth over the useful image planes, (3) a microminiature special-purpose hybrid computer with sufficient conditional programming logic to render it capable of self-organization, (4) output control channels for muscular stimulation, iris control, and perhaps a simulated retinographic potential, and (5) battery power supplies.

Consider the prosthesis to be a multiple-goal multiple-actuator, self-organizing controller [3].

Designs and applications of these controllers are available in the literature on control systems but have not heretofore been applied to prosthetic systems. In this case the prosthesis experiments with the photosensor changes produced by muscle stimulation, adjusting the muscles so as to achieve goals defined by various spatiotemporal distributions of contrast boundaries on the simple photosensor retina. The purposive behavior of this system as a whole may be seen to be that of accessing for itself functional aspects of the visual information available from the environment. Its behavior is autonomous yet simple and recognizable. This loop of the final composite prosthesis is like a seeing-eye dog (or reader), not yet aware of the fine detail of what is wanted. The dog (or reader) is autonomous. The user does not have to teach either how to perform his usual routine.
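A minimal sketch of this loop, in present-day terms (Python), may make the scheme concrete. It is our own illustrative construction rather than the controller of [3]: the callbacks read_retina and stimulate_muscles, the goal-function signature, and the simple perturb-and-retreat rule are all assumptions standing in for the actual hardware.

```python
import random

class SelfOrganizingEyeball:
    """Trial-and-error loop: perturb one muscle drive at a time and
    keep the experiment only if the weighted goal score improves."""

    def __init__(self, read_retina, stimulate_muscles, goals, n_actuators=6):
        self.read_retina = read_retina              # () -> tuple of photosensor values
        self.stimulate_muscles = stimulate_muscles  # (drive levels) -> None
        self.goals = goals                          # [(weight, fn(prev, curr) -> float)]
        self.drive = [0.0] * n_actuators            # current stimulation levels
        self.prev = read_retina()
        self.last_score = float("-inf")

    def step(self, magnitude=0.1):
        # Experiment: nudge one actuator at random.
        i = random.randrange(len(self.drive))
        delta = random.choice((-magnitude, magnitude))
        self.drive[i] += delta
        self.stimulate_muscles(self.drive)
        curr = self.read_retina()
        # Score the spatiotemporal pattern of change on the simple retina.
        score = sum(w * g(self.prev, curr) for w, g in self.goals)
        if score < self.last_score:
            self.drive[i] -= delta                  # trend turned bad: back off
            self.stimulate_muscles(self.drive)
        self.last_score = score                     # "hot and cold", interval to interval
        self.prev = curr
```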

The user is engaged in delivering his own signals to the periocular muscles in the “normal” way and also in manipulating the surrounding musculature: lids, cheek, brows, and in moving his head so as to redirect his attention or so as to experience the prosthesis as he modifies its normative behavior. He overrides or harnesses the prosthesis using the fact that he can change the state of his periocular muscles and thus change the effect of the prosthetic stimulation. The dialog which takes place between the user and his environment is by way of their mutual steering of the prosthesis. The extraocular means by which the subject normally constrains the movement of his eye in “attending” can serve as reinforcement to the prosthesis in learning the desired relative priority of the available goal structures. That is, he can train his prosthesis to grasp in a preferred manner.


Some objections have been raised to the effect that the periocular muscles are not controlled by a “gamma system” and have no muscle spindle afferents which can deliver kinesthetic information of eyeball position to the central nervous system. We acknowledge the objection in its specifics but find it somewhat empty in principle. A prosthetic eyeball may certainly be made sufficiently irregular in shape to provide unambiguous cues as to its orientation; and in any case the focus of our attention is on promoting what would normally be considered output information into useful input information, provided it has arisen from a self-organizing system which is actively dealing with one or more aspects of the environment in real time.

The burden of our argument is this: that in the replacement or enhancement of perceptual apparatus, one must provide a purposive system whose behavior is indicative of the information to be perceived. The processing in the brain—visual cortex loop now has a visual context for recognition and interpretation. Information delivered by direct cortical stimulation enters into the user’s experiments of looking/seeing. In the absence of an “effector mapping,” however, direct access from transducer to brain is unlikely to offer means whereby the user may evolve a broad spectrum of familiar identifications. He is also unlikely to incorporate the prosthesis into his body image unless its identity for him is in terms of behavioral modes in which he participates.


If a simulation of the proposed prosthesis were to be undertaken, consideration would have to be given at the outset to the replacement of each transactional channel that is lost when the prosthesis is removed from the subject’s periorbitum. Initially, perhaps, a head-mounted “eyeball” would suffice. An eye movement monitor might provide positional data, and stimulating electrodes could deliver to the skin around that eye an indication of the position of the artificial device with respect to the axes of the head. In no sense, however, should such a simulation be expected to provide the kind of synergistic learning experience which a more closely integrated and cosmetically acceptable prosthesis would make possible.
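If that channel replacement were mocked up in software, the coding might look like the following sketch (the signal forms are entirely our assumption): the artificial eyeball’s aim relative to the head axes is coded as intensities on a ring of skin electrodes around the orbit, direction picking the electrode and eccentricity picking the strength.

```python
import math

def skin_electrode_pattern(device_az, device_el, n_electrodes=8, gain=1.0):
    """Code the device's aim (azimuth/elevation in radians, relative to
    the head axes) onto a ring of skin electrodes around the orbit."""
    angle = math.atan2(device_el, device_az)        # direction of aim
    ecc = min(1.0, gain * math.hypot(device_az, device_el))
    intensities = []
    for k in range(n_electrodes):
        bearing = 2 * math.pi * k / n_electrodes    # electrode's place on the ring
        # Cosine tuning, clipped at zero: the electrodes nearest the
        # direction of aim fire hardest; aiming straight ahead gives silence.
        intensities.append(ecc * max(0.0, math.cos(angle - bearing)))
    return intensities
```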


Restoration of the interconnectedness of the visual system effectors with sensors may make scientific analysis by older methods more difficult, but it provides for real-time study of the user’s expecting, searching, and recognizing of what was searched for or of what was completely unexpected. It allows study of the user’s efforts to maintain his information level within workable limits by changing his tactics for visual experimentation, if not his strategy. The user who can learn to use his prosthesis with increasing elegance of control will, we think, actually integrate it into his body image. This is not likely to occur if the prosthesis is an elegant camera to be extolled for its pictures rather than for its congeniality with its user’s moment-to-moment changing style of and need for picture taking.


We are going to be talking about the situation where the input, the output, and the transforms between them are now connected so that the output again becomes the input.

We want to talk about the actual person who is the user being in the loop, because when you connect the output back to the input, you are really connecting through the field of activity and behavior, and we want that field to be modifying the whole system at all times. So, we want the actual use of the device, the behavior with the device, to be modifying how the device operates.


What we are saying is something that I think we all know, but perhaps have not said quite this way. The computer is now capable not only of operating in terms of the transform, but also of helping the person to use the device. All of this is happening simultaneously. Each one of us here so far has talked about a piece of it, and our effort is to think about the whole getting together.


I do not know what part of the slide is being projected on my face. You do; you can see it. I have a hand mirror here, so that perhaps I can see it, too.


How can I find out what procedures to go through to determine what is being projected on my face? I could use you as super-preprocessors just to tell me what is on my face. That does not help me in the future if I do not have you around. The next step is that you could tell me what to do. If I can find the letter, and I can, “C,” you could tell me what to do in order to keep the white color in these. All I have is two sensors here (indicating eyes). All I can see is what is coming out of the bright bore of that projector, and I can see it with either eye, so I have two sensors I can carry over the field, and you can tell me what to do to move through the shape of a “C,” let us say. Helping me through one letter may teach me a little bit of how I could go through letters and what letters are like. 


What must I be in order to discover letters, shapes, and objects to which I want to pay attention in this scene?


Let us go on to the next slide. It might be interesting to someone to know that this is a one-way street, but you do not really have to scan across here and read the letters. It would be enough if you had something such as an arrow out there, something that could behave in such a way that by my feeling its behavior I would know there was an arrow. 


The most interesting thing in this next scene is perhaps the red shoes. They really caught my eye. They are not quite as red as in real life. Suppose I had a device that was looking for or would respond to something interesting; a couple of fast-moving bright red objects might be interesting to it. It is the movement that counts. The prosthesis does not have to really know what is up and what is down, because if it is moving with respect to you, you can feel it aiming up or aiming down. One would want a prosthesis to give you a “feel” for distance, and for what it might mean to walk up to that surface and along it. I would want a prosthetic device to give me an idea that moving my hand out to it would be appropriate, but it is not something that you look at so much as you would feel.


We want to show you the business of connecting the input and the output together, i.e., closing the loop. We are now moving a television camera over toward the monitor. There it is in the monitor, and now we have a feedback situation where the camera is looking at the monitor, and within that loop there develop patterns that are quite unstable; the monitor seems to be “fooling around” with itself. We have some movies of this, and the small ambient changes in the elements of the monitor and the electronic circuit in the room will change what happens in that loop. Then I can put my hand in the loop, and when I do that it is very sensitive, but it does not do just what I expected.


The system is interacting with the hand only in a visible way. I cannot feel anything with my hand. It is a visual loop, as it were, and I am trying to interact with my hand, but if I were blind I could not see the effect I produce.


Our self-organizing eyeball has a camera with a retina of 13 cells in it. The connections to it are at random. I do not keep track of what is connected to where. The behavior of the eye, as it looks up and down and left and right, is driven by “McKibben” muscles. Each of these consists of an air balloon inside a braided tube, and when the balloon is inflated the muscle contracts; it feels very animal-like. This is all hooked up to a computer. The inputs to the computer are from the retinal cells, but again at random. I do not care where the changes of information are coming from. The program in this machine is a simulated self-organizing controller. In this particular program the eyeball is simply trying to look for the maximum light. It is trying to maximize the number of photocells that see light, rather than dark.
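In the terms of the sketch given earlier, the single goal the demonstration pursues could be written as below. The threshold value is our assumption; note that the measure never asks which wire leads to which cell, which is why the random wiring does not matter.

```python
LIGHT_THRESHOLD = 0.5   # assumed normalized photocell output

def lit_cell_goal(prev, curr):
    # Count photocells currently reading "light"; the count ignores
    # which cell is which, so random wiring is irrelevant to the goal.
    return sum(1 for v in curr if v > LIGHT_THRESHOLD)

# Hypothetical hookup, with the hardware callbacks assumed to exist:
# eye = SelfOrganizingEyeball(read_retina, stimulate_muscles,
#                             goals=[(1.0, lit_cell_goal)], n_actuators=4)
```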


One goal is to maximize the number of cells that change within any sample interval, thus representing a form of behavior toward contrast boundaries. If you say, “I want the number of cells that change from dark to light to be as equal as possible to the number of those that change from light to dark,” then you will tend to hang onto contrast boundaries once you find them.
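In the same illustrative terms (our own formulation, not the original program), those two goals might read as follows: the first rewards any light/dark flip during a sample interval, and the second rewards balance between dark-to-light and light-to-dark flips, which is what makes the eye hang onto an edge once found.

```python
def change_goal(prev, curr, thr=0.5):
    # Reward activity: how many cells flipped state during the interval.
    return sum(1 for p, c in zip(prev, curr) if (p > thr) != (c > thr))

def boundary_hold_goal(prev, curr, thr=0.5):
    # Reward balance between dark->light and light->dark flips; balance
    # means the retina is straddling an edge, so maximizing this goal
    # tends to hold a contrast boundary once it is found.
    d2l = sum(1 for p, c in zip(prev, curr) if p <= thr < c)
    l2d = sum(1 for p, c in zip(prev, curr) if c <= thr < p)
    return -abs(d2l - l2d)
```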


I have not gone any further than that, but I have some ideas. You can change the purpose depending upon what it is that seems to be useful to the person in the loop.


We are illustrating, again, a motive approach. Our technology is rather primitive. I am not steering that camera very much; it is sort of steering me at this moment with my hand on it. You can see the muscles contracting and relaxing. There will be some pictures with the device just by itself.


We are thinking of this kind of a device being in a prosthetic eyeball connected with the muscles, the muscles being, as it were, controlled by stimulation at the output from the controller. So you have this dialog going on between your input and the input from the device, in common at the muscles.


The scope pattern, which is very faint right now, is showing the goal structure, with the goals shown in an up-and-down direction. The goals that are being pursued are the horizontal lines moving up and down, and the outputs to the actuators are the vertical lines going left and right. The desire remains fixed in place. Whatever form the object happens to have at that moment will continue in form.


This would mean that the editing that takes place for the Brindley screen could take place as a function of the interaction between the person tugging on the muscles, which are activating this intraocular procedure, and the intraocular procedure itself trying to follow rules of behavior that it has developed for itself.


Just for the record, the control program that is simulated in the computer is a controller designed and devised by Adaptronics, in McLean, Virginia, which is what they call a “Probability State Variable Controller.” In this case, it is programmed to operate on up to six actuators in the pursuit of four goals, although I am only using one goal at the moment. It is programmed so that instead of playing the usual game of going through a long list, it is essentially playing the game of hot and cold; that is, as the device moves toward the goals that have been set up for it, or that it has set up for itself, it is rewarded, more or less.


The device does not make a measurement on the world and then try to do a transformation on that, deciding what to do next that will make it go better. It learns by doing. It keeps a record, not a moment-by-moment record, but a sort of history, in an up-down counter, of successful trends in a second derivative. It will reverse what it is doing if the trend turns bad, but will continue in the way it is going if the trend is favorable toward achieving the goal.
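A minimal sketch of that hot-and-cold rule, under our own reading of the description (it is not Adaptronics’ actual probability-state-variable algorithm): an up-down counter records whether the improvement trend is holding or fading, and each actuator continues in its current direction while the record is favorable, reversing when it turns bad.

```python
class TrendFollower:
    """Per-actuator hot-and-cold rule: learn by doing, keeping an
    up-down counter over the trend (a crude second derivative)."""

    def __init__(self, step=0.05):
        self.direction = 1.0      # current direction of travel
        self.step = step
        self.counter = 0          # up-down record of recent trend
        self.last_score = None
        self.last_delta = 0.0

    def next_move(self, score):
        if self.last_score is not None:
            delta = score - self.last_score
            trend = delta - self.last_delta   # improvement growing or fading?
            self.counter += 1 if trend >= 0 else -1
            if self.counter <= -2:            # sustained bad trend: reverse
                self.direction = -self.direction
                self.counter = 0
            self.last_delta = delta
        self.last_score = score
        return self.direction * self.step     # keep going the way that works
```

Nothing in the rule names a particular actuator, which is one way to read the graceful degradation noted below: a defeated actuator simply stops contributing trend, and the rest carry on.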


This system does not care if one of the actuators is defeated. It will continue to operate, not quite as well, not quite as fast, at a lower resolution. There is a “graceful” degradation, which is the current term for its kind of gradual failure. The interesting point is that you feel this—you put your hands on this object, or on the muscles, and you get the feeling of something that is so dynamic it is almost embarrassingly alive.


DISCUSSION


STOCKHAM: I would very much like to hear Dr. Brodey and Dr. Johnson say something explicit about this. Do you think you can help us out a little bit? Can you “shift gears,” so to speak, to outline a little more specifically and explicitly how this might be an aid?


BRODEY: Let me start by saying I do not think what is required is psychological sophistication. There is a different point of view that we have been trying to build throughout the meeting, which is the view of speaking in terms of input-output. If you want us to do what other people have done, namely, to take the elements of the loop and describe them separately, we can.


Part of the impact of doing it this way, even though it is hard to understand, stems from the demonstration, which was set up to describe a whole loop with a person in the loop.


STOCKHAM: I understand that.

BRODEY: If you want to talk about the use of this for blind people, we will say at this point we are not in the business of designing. We did not design this for use by blind people.

On the other hand, we are making a set of suggestions for whoever is building the “Cadillac” model; and the suggestions have to do with how you get a person into the loop where he is governing what information is available to him.

It is possible, I think, to find, and again, to look for, patterns or features in the way that the man and this sort of thing operate and interact, so you find locations where he is tugging one way while the prosthesis is going another; and he seems to want to look in a particular way. He finally takes over the prosthesis, and surely at that point he would get material from the camera, as it were.


This is a technique of trying to decide what the man wants to look at, and as he looks at it he then gets a feeling of this being his own looking rather than the passive system which compels him to take the incoming information. 


JOHNSON: Let me make a somewhat philosophical statement about the behavior of this sort of device. The central nervous system is engaged, if it is engaged in anything, in trying to discern the meaning of the objects and the events around the organism.


I know that Dr. MacKay has presented outstanding topological descriptions in the past as to what the definition of meaning is in terms of the probabilities of events that can be associated with what is happening, but let me give a more behavioral one for the moment. Meaning is embodied in the appropriate response of the organism to the event or object that is being perceived.


The devices that we are interested in are ones which are, at all times, engaged in appropriate response, so that there is an interfacing loop, as it were; there is a self-organizing loop that is interfacing you with the environment, and you are sharing a part of that loop with it. You are not interested in the sense data which that loop uses. It is operating in a fast time with a lot of data, some of which may be irrelevant; but what is relevant to you is its behavior, and from that behavior you discern the meaning of what is out there. You make the identification of objects and events.


MACKAY: I should only like to try to use a simplifying analogy, and it will be oversimple, I am sure. Suppose that you had in your hands a powerful magnet in an environment which included iron objects. Then, as you waved this magnet about with your eyes shut, you would find your hand being pulled this way and that and you would sense the location of iron objects, in a different way, a more kinesthetic way than if you merely had a beep signal reflected back when an iron object was in the vicinity. And you would do this without attributing any “goal” to the magnet in your hand.


My question is, would it be sufficient to illustrate what you are saying to think of your artificial sensor not so much as being like a seeing-eye dog, which is somewhat like a person, but as something like a magnet sensitive to a variety of different things, not only iron, but objects of a particular shape, and so forth. If so, then you could say what you have developed is a series of selective skills, rather than bringing in the rather abstruse and perhaps over-personal notion of goals.


JOHNSON: We like using the word “role” these days, rather than “goal”; that is, this behaving object takes on certain roles with respect to this particular environment.


The principal point, however, that we want to get across is that the way it interacts with you is in the same modality as your movement of it. It is a physical response to a physical movement, not an auditory or a phosphene feedback. Now, we are not saying this substitutes for implanted electrodes. We are only including this as an enhancement device for forming percepts.


MACKAY: With respect to that, the magnet would do if iron was all you wanted to locate.

BRODEY: The magnet is fine, particularly the magnet, not only in the sense you have been using it, as a sensor, but also as an actor; that is, it is searching out.


LOEB: Did I understand you to say that you felt the visual prosthesis should have a certain amount of its own searching built into it, and its own goals?


BRODEY: That is right. In this situation, let us say we have an edge-detecting purpose built in. It would be looking for edges, and following edges itself. Then you would override it if you particularly did not like that edge, or if you wanted to get into another territory where it might follow other edges that were more interesting to you.


LOEB: Suppose you had this device inserted into the eye socket, and it was attached to the extraocular muscles. Is there any reason why you would need anything like this? Is there any reason why you could not rely on the normal tracking system which is elicited by the percepts in the cortex, presuming you have a decent image in the cortex?


JOHNSON: We have not suggested any connection from there to the cortex. We have only been talking about a self-organizing eyeball connected to the muscles, with no connection to the cortex at all.


I do not think that much exploratory behavior comes back from the cortex. I think it is a much shorter loop than that, and we are trying to circumvent that loop.


LOEB: Are you saying, then, that a person who is blind does not have the tracking pathways?


JOHNSON: Yes.


MACKAY: I think we really should clear up the kind of control of the periocular muscles that is envisaged in this proposed device, because if the idea is to apply local activation rather than central innervation, then there is no direct evidence that you will get any central nervous system indication of the direction of the eyes at the conscious level.


So, it is only if Dr. Brodey is envisaging implanting the oculo-motor center, and giving rise to collateral innervation of the central nervous system from the neural control, that I could see any advantage in using the eye rather than the artificial motor.


BERING: The eye muscles in the normal person are such that you do not know where the muscles are pointing from the information from the muscle. You know where they are pointing because of the visual information you are getting back.


MACKAY: That is right.


BERING: I am not talking about theirs (Brodey’s and Johnson’s); I am talking about the ultimate system. If you are getting useful visual information as a result of the use of the oculo-motor muscles, you would then learn with experience where they were, and it would be useful, and it would be giving you information because you were directing it.


MACKAY: That is an argument in favor of proceeding with the development of a camera which could be coupled to the cortex, or something like that, but not necessarily this active, magnet-like device.


BRODEY: There are two systems we have been talking about. One is the loop system without connections to the screen. The other is with connections to the screen.


For the purposes of the blind, we are interested in the one with connections to the screen. For other purposes, we are interested in the other pattern, and the other path.


MINSKY: I do not understand why they (Brodey and Johnson) are proposing this in connection with visual prosthetics, since the magnet-hunting device is only good for one thing at a time, and therefore it misses the goal of a visual prosthetic, namely, restoring or providing a person with an ability to handle a larger number of data at once, including the relative ones.

If you have a motor action that will find only one thing, it will not be so useful as a prosthetic device. If this device is to be used, you had better know what features it is picking out. It has to have a way of telling you.


JOHNSON: The argument is one that we have had many times.


MINSKY: We have never had that argument.


JOHNSON: Let us make it very clear that we are not interested exclusively in the area of prosthesis. We are interested in ways of organizing the user’s behavior with respect to visual space.


MINSKY: It has to know something to say that, and you have to answer that criticism.


BACH-Y-RITA: May I point out you have other muscle systems. You have the neck muscles. They have very fine feedback; and if you mount the camera, or whatever it is, on the head, you can point it wherever you wish, which is much better than having it point itself.


LOEB: I appreciate the research nature of Dr. Brodey’s investigations; however, some of us are very much concerned with the very real problems of visual prosthesis, and trackers require a great deal of hardware. We do not have a great deal of room. I want to know if anybody has any evidence to indicate trackers would be necessary over and above the extraocular hookup?


BRODEY: One of the questions is what sort of editing device you are planning for this “Cadillac” model, so that you can choose what kind of picture to present.


MINSKY: Most of the papers are about experiments on what kind of preprocessing must be done.


BRODEY: It is not only preprocessing. It is sort of letting the person into the loop to get the kind of preprocessing you want.


MINSKY: This is true; you have to; but still you have to set the preprocessing.


BRODEY: We are not talking about the errors; we are talking about getting the person into the loop so they have some control over the preprocessing.

REFERENCES

1. Gibson, J. J., "The Senses Considered as Perceptual Systems." Houghton Mifflin, Boston, 1966.

2. Johnson, A. R., Organization, perception and control in living systems. Industrial Management Review (MIT) 10 (2), 1969.

3. Barron, R. L., Self-organizing control: the next generation of controllers. Control Engineering, Part I, Feb. 1968; Part II, March 1968.

4. Brodey, W. M., and Lindgren, N., Human enhancement: beyond the machine age. IEEE Spectrum 5 (2), 79, 1968.

5. Brodey, W. M., and Lindgren, N., Human enhancement through evolutionary technology. IEEE Spectrum 4 (9), 87, 1967.