
Self-Organizing Control in Prosthetics


Yet another piece on how Johnson’s valorization of active participation in perception and cognition should inform the design of prosthetic devices.

SUMMARY


Give a man a short stick and bid him rub it against a textured surface. Ask him what he feels. The description you receive will characterize the surface and its interaction with the stick; it will not tell you what the stick feels like where his hand grasps it. So long as we continue to refer to prosthetic hands as “terminal devices” we will fail to understand perception and manipulation as having essentially identical natures and as resulting from a “loop process” which participates actively in its environment. Input and output should not be considered separately but as integral with a sensorimotor whole. Man and his world engage in a dialogue, sharing an interface from which information is elicited: information which permits one to form a predictive model of the other. This paper focuses primarily on the properties of dialogue and gives some attention to the “higher level central processes” which play a role in the uses to which that dialogue is put.


INTRODUCTION


If a carpenter is to hammer a nail, he will want to use his own familiar hammer because he has adjusted himself to its particular weight and balance; but a skilled carpenter will do practically as well with any hammer he can lift. He can strike straight down or sideways, forehand, backhand, or even over his head where gravity works opposite to its usual direction. Hammering a nail is a formal procedure with informal variations[8]: it is a behavioral style, “un mode d’emploi du système” (a way of using the system). The “terminal device” is the combination of the nail and the board into which it is being driven, and the major focus of attention of the carpenter is on the physical relationships of that combination. He uses his eyes primarily for aiming the swing; the progress of the nail and the hardness of the wood come to him through a multi-channeled dialogue that includes at least both arm and ear. He is participating in a multiple-loop process and will establish, without conscious effort, a predictive model of the properties of each impact. Any variation from the expected pattern — such as one that might indicate the bending of the nail or the approach to a hard knot — will be immediately apparent to him because his style will be changed by it. He “senses” it, not because something has been done to him as a raw input from the outside, but because a change in the environment has intervened in a loop process in which he is actively engaged, and it has altered the properties of the loop.


Consider a further aspect of our parable. When we use a hammer for the first time in our lives we are clumsy with it — we miss or bend the nail or hit our other thumb — and we are uncomfortably and consciously aware of the shape of the hammer handle, the instability of the wood in the other hand, the weight and length of the hammer, and all of those other measurable features of the situation which we are accustomed to report on when we make a “scientific” evaluation of the activity. However, once we have acquired skill we simply pick up the hammer and nail and start the process going without consciously effecting control over any of the subcomponents of the experience. The hammer and its style of use have been assimilated into our body image in such a way that to hammer a nail might well be thought of as a unitary process which we commence at an appropriate time and place and which thereupon carries itself to completion without much active monitoring on our part. It requires our visual participation only marginally but conveys a lot of information to us about the properties of that particular segment of the world with which we are dealing: properties which we suddenly feel called upon to describe technically when we present a paper on the subject. None of the technical statements we could make about the process are of any use to the carpenter when he addresses himself to the prospect of nailing, nor when he is teaching an apprentice his craft. They might help a designer of hammers or of nails, but the design modifications which result are not a measure of improvement until someone has engaged in the formal process of nailing and has sensed the informal shifts he has had to make in his style. I would like to appeal strongly to the writers and presenters of papers that we develop a language descriptive of changes in formal and informal procedures which does not rely for its validity or acceptance upon our rigid, contextless habits of technical deliberation. Otherwise we will be forever condemned to the description of hammers and nails and will never talk to each other about the realms of craftsmanship.


The nature of dialogue, the subject toward which this paper is directed, is such that it is itself context dependent and therefore not amenable to the kind of timeless descriptions to which science has become habituated. There are no irrefutable statements one can make about a particular dialogue, principally because the referent event was an unrepeatable experiment. The reportable elements of the exchange do not themselves retain that quality which constituted dialogue: the changes of style that beget immediately responsive changes in the respondent, and so on recursively. We are left with the alternative either of making the kinds of “human” statements with which our listeners can identify themselves or — and this method is preferable by far — of bringing the medium of the dialogue along so that our audience may experience a “hands on” participation with us in the referent system. This paper will have to choose the former since it does not describe the behavior of any extant prosthetic device; rather, its field of background study has been a series of “toys to think with” which were intended to embody broad applications to man’s perceptual ecology.


PRIOR ART


The central focus of attention of prosthetic design, while lumped generally under the standard of gaining the acceptance of the patient, has really revolved around three central issues: cosmetic appearance, articulation limits, and energy resource constraints. Let us dismiss the first by agreeing that if we select the right materials and provide for natural movement repertoires the cosmesis will present no further difficulty. The third issue we will also set aside by assuming that unlimited energy sources are available, either because we are pulling power out of the wall or because we are far enough in the future to have energy storage and conversion systems somewhere nearly as efficient and convenient as biological ones. Problems one and three, that is, concern our ability someday to improve the “stuff” we can design or fabricate. Our primary interest will be that of articulation; it concerns the way in which we deal with the information available to us: where it comes from and how we employ it.


Every developer of a prosthetic device feels called upon at some juncture to state how many “degrees of freedom” his unit incorporates. The choice of descriptor is unfortunate because it refers to little more than the number of distinct axes of rotation the various mechanical bits share among them; it gives no indication of the elegance of style of movement that the user might build into his repertoire. Hands take on many roles: they can grasp, support, manipulate, strike, clamp, deflect, touch, lead, follow, caress, and so on. If offered a variety of prosthesis shapes at any one instant (the way that remote manipulators can change their “business end”) a subject could probably get along with little more than one degree of freedom for each. The point is that the selection of the role that a hand will play at any moment is tantamount to asking that the hand be appropriately responsive to the context of use which pertains. Appropriate response to shifts of context[10],[12] should, ideally, be automatic and continuously available. Since prosthetic designs have not heretofore incorporated the profusion of sensors and effectors, richly interconnected, which this paper is inevitably going to suggest, it has not been possible to provide for a great variety of relevant responses. Mechanical design as it has always been practiced is very clearly the making of a prior commitment to a very finite topology of context of application. Let us consider why this is necessarily so.
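To make the "one degree of freedom per role" remark concrete, here is a toy sketch that is entirely my own construction rather than anything proposed in the paper: once context has selected the role, each role exposes only a single scalar parameter. The role names and parameters are illustrative assumptions.

```python
# Toy sketch (not from the paper): with role selection handled by context,
# a single "degree of freedom" per role can carry a surprising amount of use.
# Role names and parameters below are illustrative assumptions only.

ROLES = {
    "grasp":   lambda closure: f"close the fingers to {closure:.0%} of full span",
    "support": lambda stiffness: f"lock the wrist at stiffness level {stiffness}",
    "deflect": lambda angle: f"rotate the palm outward by {angle} degrees",
}

def perform(role, parameter):
    # The hard problem the text points to is not driving the parameter but
    # choosing the role appropriately from the context of use.
    return ROLES[role](parameter)

print(perform("grasp", 0.6))    # e.g. a moderate grip
print(perform("deflect", 30))
```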


The extreme limitations placed upon the design of mechanical limbs by the desire to promote the absolute minimization of energy dissipation have been profound and have inevitably condemned these imitations of human extremities to behave quite unlike their originals. The actual use of energy resources is generally “latent”: that is, when not in motion about a joint the requirement has been for “zero standby power”. The results have been: a) fairly rigid (or alternatively flaccid) postures when at rest and b) the choice of on/off switching controls rather than proportional ones so that no inefficiency is introduced by the control operation itself. Since the acquisition of local control information requires sensors (hence energy) and circuitry (hence weight, expense, and more energy), the only performance measures fed back to the user have been those offered by his direct, visual observation. Modern man is sufficiently visual in his orientation, however, that this does not seem to bother him greatly, but the loss of behavioral style which is imposed by the absence of tactile and kinaesthetic information should not be overlooked. It is not a question of providing the user with the ability to make tactile or kinaesthetic identifications of objects, textures or positions: the ability, that is, to name or report on his own gestures or the objects around him. Rather, tactile and kinaesthetic feedback in the normal extremity allows it to transact a dialogue with the environment at the interface. The manner of grasp is modified during the act of grasping by the character of the object grasped. No designer’s prior commitment to a finite number of mechanical degrees of freedom could possibly anticipate the variety of transactions for which the normal hand can adapt itself. Only a complex control system can hope to achieve such elaboration with a finite number of parts. W. Ross Ashby’s “Law of Requisite Variety”[1] applies, and we are faced with the necessity of providing prosthetic systems with the opportunity for on-line, real-time dialogue both with the user and with the “external” environment.
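For readers who want Ashby's law in symbols, a minimal statement in the usual logarithmic (variety) measure follows. The notation is an assumption of this restatement rather than a quotation from [1]: D is the set of disturbances, R the regulator's repertoire of responses, and E the set of outcomes the regulator can still permit.

```latex
% Law of Requisite Variety, logarithmic form (notation assumed, not quoted from [1]):
% the variety of outcomes cannot be driven below the variety of disturbances
% by more than the variety of responses available to the regulator.
V(E) \;\ge\; V(D) - V(R), \qquad V(X) = \log_2 |X|
```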



PURPOSIVE SYSTEMS


It is apparent that we are seeking to provide a mechanism with a means to adjust its own behavior in a manner relevant not only to the exigencies of the environment upon which it operates but also to the intentions of the user. A simple servomechanism suggests itself wherein a signal (goal setting) is introduced and is compared with the actual position of a joint or with the torque exerted there, and the difference or error signal serves to operate the associated actuators so as to null or minimize that error. In short, a simple negative-feedback servo loop seems to be indicated. However, as we shall see, the conditions for dialogue are not met by such a system because the return of information to the user himself remains the responsibility of other perceptual systems. He must watch or listen or feel with his other hand in order to receive the kind of “behavioral change” information which his normal perceptual apparatus would have generated.[7] Single-loop, negative-feedback controls are essentially “impedance transforming” devices: they take over the responsibility of power-valving and in so doing are intended in fact not to return information for the use of the goal-setting system. Such mechanisms are purposive, but the purposes which they pursue are of so extremely limited a context (one-dimensional error minimization without the inclusion of time even as an implicit parameter) that one would not be justified in describing their behavior as a participation with the user in the exploration of his environment. They are dumb messengers sent on errands by themselves, unavailable thereafter for comment upon the conditions of performance. We shall need something more.
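As a point of reference for what is being found wanting, here is a minimal sketch of such a single-loop, proportional error-nulling servo. The gain, the toy joint model, and the variable names are assumptions of the sketch, not details from the paper; the point it illustrates is that nothing in the loop reports back to the goal-setter.

```python
# Minimal sketch of the single-loop, negative-feedback servo described above:
# a goal setting is compared with the measured joint state, and the error
# drives the actuator. Nothing in the loop reports back to the goal-setter.
# Gain and joint model are arbitrary illustrative choices.

def servo_step(setpoint, measured, gain=2.0):
    """One pass of a proportional error-nulling loop."""
    error = setpoint - measured      # goal vs. actual joint angle (radians)
    return gain * error              # actuator command, proportional to error

# Toy joint that simply integrates its drive signal over small time steps.
angle = 0.0
for _ in range(50):
    angle += 0.1 * servo_step(setpoint=1.0, measured=angle)

print(round(angle, 3))  # approaches 1.0, but the user learns nothing of how it got there
```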


DIALOGUE


A man, Mr. M, is trying to acquire for himself an idea which resides in the head (?) of his respondent, R. In what kind of behavior must he engage for the purpose? To begin with, as the dialogue progresses M must have some way of assessing when he is “doing better” in his grasp of the idea. It is not sufficient that he be able to perform as a parrot or a tape-recorder: simply capable of spouting back verbatim what R has said or done. For the idea to have meaning to M, it must be able to modify his behavioral responses to his environment in somewhat the same way it serves to modify the responses of R. In order to find out during the dialogue whether the idea is acquiring that status of behavioral modifier, it will be necessary for M to form a model for himself not only of the idea but, prior to that, a model of “R-having-the-idea.” Then — and here is the essence of dialogue — M and R must participate in a similar way in a part of the environment shared in common so that M may have the opportunity to identify with R and thereby to compare his own responses (as if he were R) with those of R himself. Only in this way can M hope to be able to know whether he is building for himself an idea which can serve him meaningfully in the same way that R’s version of the idea served R. The information exchange during dialogue is not a handing back and forth of words and phrases like letters in the mail: it is, rather, a joint participation in a common activity: a sharing tangentially of a multiplicity of self-referent loops.



In consideration of verbal dialogues it is perhaps too easy to imagine input and output as separated both in their times of occurrence and in their anatomical sites; so easy, in fact, that sequential exchanges have come to represent dialogue for us. Look carefully at eye and ear. The eye does not emit any light of its own, nor the ear any sound. They do not participate by themselves, that is, in modifying the media from which they draw their “sensations”. But consider a prosthesis: dealing as it does with touch and movement wherein that which touches also moves, there arises the opportunity to provide a user with a truly active form of dialogue with his environment — the form, it should be noted, with which mother and child first communicate and which, for each of us, began our introduction to the world of more “remote” perceptions.


Let us bring the above description of dialogue further to bear upon the problems of prosthetics. If we can free our future thoughts from worry about energy resources and about the size and weight of actuators and sensors, perhaps we can evolve a picture of a truly active, adaptive manipulator which acts in the necessarily conjoint role of perceptual interface. Let us assume that we have available, in addition to a bottomless energy supply, a variety of small, efficient actuators capable of push or pull or rotation wherever we want to put them; that we may distribute among them a profusion of sensors to respond to mechanical changes; and that there is an artificial skin covering the whole that can signal when touched or stretched or heated or jabbed. In short, we shall imagine a device capable not only of a large number of “degrees of freedom” but also capable of a complex monitoring of its own behavior as signals which can serve to modify that behavior. We shall pool the “sensory data” together in various ways[11] which can offer a “preprocessed” description of broadly defined modes of interaction with the environment (e.g., grasping causes contact on the palmar surface and extension of the dorsal). We shall let these serve as inputs to self-organizing controllers[2],[3],[4] which in turn deliver signals to the actuators. A second source of signals, to be added to the pooled and preprocessed sensory data, would arise from control sites on the user (e.g., EMG). Still other signals, generated by the output behavior of the prosthesis, would be returned to those sites. A dialogue will then ensue between the user and his environment with the prosthesis acting as interface[13]. The dialogue will be crude at first, having the appearance of repetitive, tropismatic movements whose occurrence seems not to be under the user’s control. As he develops skill, the elaboration and refinement of the transaction will develop and will allow him to evolve a style of his own. The evolution will continue as he or the environment place changing demands upon it. One may imagine a mixture of control variables, some of which might include time (or rhythm) as an explicit parameter, thus allowing patterns of movement to evolve along with the more unitary, transient gestures such as holding or lifting.
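The topology just described can be caricatured in a few lines of code. The sketch below is illustrative only: the class names, the pooling into palmar/dorsal modes, and the crude weight update are my assumptions, standing in for the self-organizing controllers of Barron [2],[3],[4] rather than reproducing them.

```python
import random

# Illustrative caricature of the proposed topology: pooled, preprocessed
# sensory data plus a user control-site signal feed an adaptive controller,
# the controller drives the actuators, and a signal derived from that output
# is returned to the control site. Every name here is an assumption.

class ToySelfOrganizingController:
    """Stand-in for the self-organizing controllers of [2],[3],[4]."""
    def __init__(self, n_inputs):
        self.weights = [random.uniform(-0.1, 0.1) for _ in range(n_inputs)]

    def act(self, inputs):
        # Actuator command as a weighted sum of the pooled signals.
        return sum(w * x for w, x in zip(self.weights, inputs))

    def adapt(self, inputs, performance):
        # Crude performance-modulated update: reinforce the weights that
        # contributed when the environment/user judged the act favorably.
        for i, x in enumerate(inputs):
            self.weights[i] += 0.01 * performance * x

def pooled_modes(skin_signals):
    # "Preprocessing": collapse many skin sensors into broad interaction modes,
    # e.g. net palmar contact versus net dorsal stretch during a grasp.
    half = len(skin_signals) // 2
    return [sum(skin_signals[:half]), sum(skin_signals[half:])]

controller = ToySelfOrganizingController(n_inputs=3)
skin = [random.random() for _ in range(8)]     # distributed artificial-skin sensors
emg = 0.8                                      # user control-site signal (arbitrary units)
inputs = pooled_modes(skin) + [emg]
command = controller.act(inputs)               # delivered to the actuators
controller.adapt(inputs, performance=0.5)      # environment/user grade the transaction
signal_back_to_user_site = command             # returned to the control site (see Principle II)
```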


RESTATEMENT OF THE PRINCIPAL SUGGESTIONS


There are two issues raised above which the author wishes to make especially clear through reformulation. He believes that all of the imaginable improvements in materials, energy sources, and watchmaker’s craft will provide only slight improvements over today’s prosthetic systems if the following two principles are not incorporated.


I. The behavior of the prosthetic device must be somewhat autonomous and self-organizing. Engaging constantly in the process of exploring both environment and the interface with the user, it will develop a style of behavior which reflects both and hence becomes relevant to both. Lest the objection be raised that one would not want an extremity writhing incessantly, let us acknowledge that one of its defined roles should be that of repose. We would not want the user to switch the unit off because that would end the dialogue by which he is learning to incorporate the prosthesis into his body image. Switching off the prosthesis is detrimental to the process of assimilation. (Besides, whom do you know whose hands remain immobile for long?)


II. The other crucial point is that of returning a signal to the user at the site where his control signals arise, and in a form similar to the control signals themselves. This notion is one which, if applied even to present-day prostheses, should greatly improve the user’s ability to control them and increasingly attenuate the involvement of other perceptual systems as feedback pathways. The brain is not what many people think it to be: a super computer with many input and output channels which may be accessed in any manner irrespective of the source or destination of the information. Brains are best suited to the supervision of dialogue where the interface is a perceptual one: that is, where at least one effector-sensor pair is closely enough related physically for their participation in the environment to be direct and for the parts of the “sequence” of sense-process-respond to be indistinguishable[5],[13]. For example, if the control site pickup is myographic, then let us return an electrical signal to the vicinity of the pickup which can be sensed both by the user and by the myograph. If the pickup is a transducer of mechanical stretch or of muscle bulge, we should provide a means to answer with a squeeze or change of pressure. We must make sure, however, that the information being returned partakes in some way of the environmental response to the prosthesis action. That is, unless the loop is closed through the changes wrought by the intended act, its information content cannot be expected to be useful. Dialogue is a tangential sharing of a part of the environment, and in this case the common ground must be the prosthesis and its interfaces with the user and with the external world.
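A minimal sketch of that rule follows. The function name, the modalities, and the scaling factors are assumptions for illustration, not a specification from the paper; the feature worth noticing is that the returned amplitude derives from the environmental response to the act, not from the command alone.

```python
# Hedged sketch of Principle II: answer the control pickup at its own site,
# in its own modality, with a signal derived from the environment's response
# to the prosthesis action. Names, modalities and scale factors are assumed.

def return_signal(pickup_kind, environmental_response):
    """Map the environment's answer to the intended act back onto the user's
    control site, in a form similar to the control signal itself."""
    if pickup_kind == "myographic":
        # EMG pickup: return a small electrical stimulus near the electrodes.
        return {"modality": "electrical", "amplitude": 0.2 * environmental_response}
    if pickup_kind == "stretch":
        # Stretch/bulge transducer: answer with a squeeze or pressure change.
        return {"modality": "pressure", "amplitude": 0.5 * environmental_response}
    raise ValueError(f"unknown pickup: {pickup_kind}")

# e.g. grip force actually developed against the object, sensed at the interface
print(return_signal("myographic", environmental_response=1.5))
```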


A FINAL NOTE OF WARNING AND ENCOURAGEMENT


The design and exercise of systems as complex as those suggested — especially where they are intended as human interfacing systems — are not amenable to prolonged study by algorithm, simulation, formula, or “Gedankenexperiment”. Adequate insight into the communication problems and variety qua perceptual experience may only be achieved on-line, in actual dialogue, and in real time with a substantial world. In short, the process of invention itself has the essence of dialogue.

REFERENCES

[1] Ashby, W. R., “Principles of the Self-Organizing System”, Principles of Self-Organization, H. von Foerster and G. W. Zopf, eds., Pergamon Press, New York, 1962.

[2] Barron, R. L., “Self-Organizing Learning and Control Systems”, (1966 Bionics Conference), Cybernetic Problems in Bionics, Mesarovic, ed., Gordon and Breach, New York, 1968.

[3] Barron, R. L., “Self-Organizing Control: The Next Generation of Controllers”, Control Engineering, Feb. and Mar. 1968.

[4] Barron, R. L. and Schalkowsky, S., “On-Line, Self-Organizing Control of Multiple-Goal, Multiple Actuator Systems”, 1967 Joint Automatic Control Conf., Univ. of Penna.

[5] Brodey, W. M. and Johnson, A. R., “A Visual Prosthesis That Looks”, 2nd Conf. on Visual Prosthesis, Assn. for Computing Machinery, Univ. of Chicago, June 1969 (in press).

[6] Brodey, W. M. and Lindgren, N., “Human Enhancement: Beyond the Machine Age”, IEEE Spectrum, February 1968.

[7] Gibson, J. J., The Senses Considered as Perceptual Systems, Houghton Mifflin, Boston, 1966.

[8] Hall, E. T., The Silent Language, Doubleday, New York, 1959.

[9] Held, R., “Plasticity in Sensory-Motor Systems”, Scientific American, Vol. 213, No. 5, November 1965.

[10] Hermann, H. and Kotelly, J. C., “An Approach to Formal Psychiatry”, Perspectives in Biology and Medicine, Winter 1967.

[11] Johnson, A. R., “A Structural, Preconscious Piaget: Heed Without Habit”, Proc. Nat’l. Electronics Conf., Vol. 23, 1967.

[12] Johnson, A. R., “Organization, Perception, and Control in Living Systems”, Industrial Management Rev., Vol. 10, No. 2, 1969.

[13] Johnson, A. R., “The Active, Self-Organizing Interface”, IEEE-GMMS, ERS, Int’l Symp. on Man-Machine Systems, Cambridge, England, September 1969 (in press).