PERSONAL CONTROL OVER AVERSIVE STIMULI AND ITS RELATIONSHIP TO STRESS

It is almost axiomatic to assume that personal control over an impending harm will
help to reduce stress reactions. However, a critical review of experimental research
indicates that this assumption is not always warranted. Specifically, three main
types of personal control may be distinguished: (a) behavioral (direct action on the
environment), (b) cognitive (the interpretation of events), and (c) decisional (having
a choice among alternative courses of action). Each type of control is related to
stress in a complex fashion, sometimes increasing it, sometimes reducing it, and
sometimes having no influence at all. As a broad generalization, it may be said that
the relationship of personal control to stress is primarily a function of the meaning
of the control response for the individual. Stated differently, the stress-inducing or
stress-reducing properties of personal control depend upon such factors as the
nature of the response and the context in which it is embedded and not just upon its
effectiveness in preventing or mitigating the impact of a potentially harmful
stimulus.
From many different quarters today, there is the demand for more personal control:
Students complain that they have no control over the political process, persons in
poverty complain that they have no control over economic resources, and old
people complain that they have little control over anything, not even how they die.
Although the first reaction to these complaints is that they contain considerable
merit, it is not clear exactly what is meant by "personal control" and whether it is
such an unmixed blessing as might at first be supposed.
Among psychologists, too, there is a tendency to assume that personal control has a
beneficent or stress-reducing effect. For example, Sells (1970) has argued that
stress occurs when two conditions are met: (a) An individual is called upon to
respond under circumstances in which he has no adequate response available, and
(b) the consequences of not responding are important to the individual. These
conditions, Sells (1970) has claimed, provide "a new principle to distinguish stress
from other phenomena of human behavior [p. 139]." In other words, the lack of
control (i.e., the non-availability of an adequate response) is a necessary if not
sufficient condition for stress. Mandler and Watson (1966) have argued for a similar type of relationship: "Any situation which interrupts, or threatens the interruption of organized response sequences, and which does not offer alternate responses to the organism, will be anxiety-producing [p. 280]." According to Mandler's view, personal control makes it possible for the individual to incorporate a potentially threatening
event into a cognitive plan, thus reducing anxiety. To take a final example, in a
recent defense of individual freedom and autonomy (as opposed to Skinner's
Beyond Freedom and Dignity), Lefcourt (1973) has reviewed a number of studies
which indicate that control over an aversive stimulus helps reduce stress reactions.
He concluded: "The perception of control would seem to be a common predictor of the response to aversive events regardless of species. . . . the sense of control, the illusion that one can exercise personal choice, has a definite and a positive role in sustaining life [p. 424]."
As indicated by the above quotations, different types of control may be (but
generally are not) distinguished: The first type is behavioral control, the availability
of a response which may directly influence or modify the objective characteristics of
a threatening event; the second type is cognitive
control, the way in which an event is interpreted, appraised, or incorporated into a
cognitive "plan"; and the third type is decisional control, the opportunity to choose
among various courses of action. The purpose of the present paper is to review the
experimental research relating each of these types of control to the experience of
stress. (For recent reviews of control expectancies as a personality dimension, see
Joe, 1971; Lefcourt, 1972.)
BEHAVIORAL CONTROL AND STRESS
In many everyday situations, a person has no alternative but to endure a potentially
noxious stimulus. He may, however, be able to control such things as who
administers the stimulus (e.g., himself or another) and how and when the stimulus
will be encountered. In other cases, the stimulus may be prevented entirely,
terminated prematurely, or otherwise modified by some form of direct action (e.g.,
avoidance, escape, attack, and so forth). This suggests a twofold subdivision of
behavioral control, namely, regulated administration and stimulus modification.
Each is considered in turn beginning with the regulated administration of a noxious
stimulus.

Regulated Administration
An experiment by Haggard (1943) was perhaps the first study to deal with regulated
administration as a mode of control. In the first phase of Haggard's study, subjects
were presented with a list of words, one of which was always followed by an electric
shock. For half of the subjects, the shock was administered by the experimenter; for
the other half, a signal light came on at the appropriate time and subjects
administered the shock to themselves. Following this conditioning procedure,
subjects were assigned to one of three analogue therapy conditions: rest,
experimental extinction, and catharsis-information. During a third and final phase of
the study, the same procedure was followed as in the first conditioning phase but
without the electric shock.
During the first (conditioning) phase of the experiment, the self-administered shocks
resulted in smaller changes in skin conductance than did the experimenter-administered shocks. This might be taken as an indication that subjects who had
control over the delivery of shock were less stressed. However, the lessened
autonomic reactivity of these subjects cannot be attributed to behavioral control per
se. Since the self-administered shocks were preceded by a warning signal, whereas
shocks in the experimenter-administered condition were not, the two conditions
differed in uncertainty or ambiguity as well as behavioral control. Other studies to
be reviewed below indicate that uncertainty itself can have a significant effect on
stress. Moreover, Haggard found that subjects who administered shock to
themselves also tended to be more aware of the experimental contingencies (i.e.,
which word was followed by shock) than were subjects to whom the shocks were
administered by the experimenter. When aware subjects were contrasted to
unaware subjects, the difference in response patterns was similar to that between
subjects in the self-administered and experimenter-administered shock conditions,
respectively. Haggard took this to mean that administering shock to the self
"facilitated structuration of the cognitive field" and that cognitive structuring, rather
than behavioral control, was responsible for the differences between self-administered and experimenter-administered shock groups.

Haggard also found that subjects who received experimenter-administered shocks showed more rapid extinction during therapy than did subjects who administered
shock to themselves. Thus, by the third phase of the study (a repetition of the
original conditions but without shock), no differences existed between
experimenter-administered and self-administered shock groups in terms of psychophysiological reactions to the stimulus word. This would indicate that the stress-reducing effects of behavioral control, if indeed any existed in this experiment, were rather ephemeral and short-lived.
Pervin (1963) attempted to disentangle the influence of behavioral control from that
due to the reduction of uncertainty. In a 3 × 2 factorial design, he varied 3 levels of uncertainty (signal, no signal, and inconsistent signal) and 2 levels of control (self-administered shock and experimenter-administered
shock). Each subject experienced all six conditions in a series of paired
comparisons. There was a significant tendency for subjects to prefer self-administered shock, but differences in anxiety and pain ratings between self- and
experimenter-administered conditions were small and statistically nonsignificant.
Still, there was a trend toward less stress when subjects had control. This trend,
however, was most evident during the early trials and in the unsignaled and
inconsistently signaled conditions, that is, when uncertainty was the greatest. This
might suggest, as did the results of the Haggard study described above, that the
factor of behavioral control is less important in determining stress reactions than
the reduction of uncertainty which generally accompanies such control. In line with
this, Pervin found a clear indication that subjects preferred signaled shock to either
unsignaled or inconsistently signaled shock and that they found the former less
anxiety producing and less painful than the latter. In other words, the reduction of
uncertainty was a much more potent variable than behavioral control.
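To make the combinatorics of Pervin's procedure concrete, the short sketch below (our illustration, not Pervin's; the condition labels are paraphrased from his report) enumerates the six cells of the 3 × 2 design and the fifteen paired comparisons that exhausting all pairs of conditions would entail.

from itertools import combinations, product

# Three levels of uncertainty crossed with two levels of control,
# paraphrasing the conditions in Pervin (1963).
uncertainty = ["signal", "no signal", "inconsistent signal"]
control = ["self-administered", "experimenter-administered"]

conditions = list(product(uncertainty, control))  # 6 cells of the 3 x 2 design
pairs = list(combinations(conditions, 2))         # all pairwise preference choices

print(len(conditions), len(pairs))  # prints: 6 15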
Studies investigating the preference for immediate versus delayed threat (e.g.,
Badia, McBane, Suter, & Lewis, 1966; Cook & Barnes, 1964; D'Amato & Gumenik,
1960; Hare, 1966; Maltzman & Wolff, 1970) are
also relevant to the issue of behavioral control. In most experiments using this
paradigm (see Maltzman & Wolff, 1970, for an exception), subjects have been
required to make some response which resulted in either immediate or delayed threat. Since in the immediate threat condition the noxious stimulus follows directly
upon the subject's response, the stimulus can be considered as self-administered. In
the delayed condition, on the other hand, the noxious stimulus is delivered by the
experimenter after some
time interval.
The results of these experiments have been rather consistent: Most subjects prefer
an immediate to a delayed noxious stimulus and find the former less stressful. The
interpretation of these results is ambiguous, however, for variables other than
behavioral control are involved. For example, delayed threat involves an element of
temporal uncertainty which immediate threat does not. Delayed threat also entails
a waiting period, which itself may be an independent source of anxiety (Folkins,
1970; Franzini, 1970). Thus, the generally found preference of subjects for
immediate as opposed to delayed threats could be due to (a) a desire for behavioral
control, (b) a preference for greater certainty, and/or (c) a wish to avoid an anxiety-inducing waiting period. Among these alternatives, behavioral control seems to be
the least important. In the first place, one study which did not involve control as a
variable
(Maltzman & Wolff, 1970) still found the immediate threat condition to be the least
stressful. In the second place, Badia et al. (1966) found that the number of subjects
preferring a delayed threat could be increased substantially (to about 50% of the
sample) if the immediate threat was made less predictable (by varying its
probability) and the delayed threat was made more predictable (by preceding it with
a warning signal). This result is consistent with the findings of Haggard and Pervin
which suggest that the self-administration of noxious stimuli is stress reducing
primarily when it is accompanied by a reduction of uncertainty.
Using an experimental design similar in many respects to the immediate-versus-delayed-shock paradigm described above, Ball and Vogler (1971) allowed subjects
to administer shocks to themselves (upon the
presentation of a signal) or to have shocks administered at random time intervals
by the experimenter. After experiencing both conditions in a series of forced trials,
subjects were allowed to choose between them on subsequent trials. Out of 39
subjects, 25 showed a preference for self-administered shock, and of these 21 said they did so in order to avoid uncertainty. Eleven subjects indicated a preference for
experimenter-administered shocks, and 3 showed no particular preference. The
reasons subjects gave for preferring experimenter-administered shocks were highly
varied and idiosyncratic and included such things as (a) negativism, doing the
opposite of what might be expected; (b) religious conviction, not struggling against
pain; and (c) excitement, guessing when a shock might come.
Staub, Tursky, and Schwartz (1971) have reported several studies involving
regulated administration which also implicate uncertainty as an important
mediating variable. Briefly, in the first of two experiments, subjects were presented
with a series of gradually increasing electric shocks. One half of the subjects were
allowed to administer the shocks to themselves; each of these had a yoked partner
who received shocks in the same temporal sequence but administered by the
experimenter. (In the self-administered shock condition the shock also was preceded
by a warning signal, whereas in the yoked-control condition it was not. However,
subjects tended to administer shocks to themselves in such rapid succession that
timing was predictable for both groups. The major difference between groups, the
authors therefore argue, is the variable of control.) Under these conditions,
behavioral control had no
discernible effect on the levels of shock judged as discomforting, painful, and
intolerable.
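The yoked-control logic used by Staub et al. (and by several studies reviewed below) can be summarized in a few lines of Python. This is a minimal sketch with invented timing and intensity values, not the authors' procedure: the active subject generates the shock schedule, and the yoked partner receives an identical copy, so the pair differs only in the presence of control.

import random

random.seed(1)  # reproducible illustration

def active_schedule(n_trials):
    # The active subject decides when each shock occurs and how intense
    # it will be; the ranges below are invented for illustration.
    schedule, t = [], 0.0
    for _ in range(n_trials):
        t += random.uniform(0.5, 2.0)   # subject chooses the moment
        level = random.randint(3, 7)    # subject sets the next intensity
        schedule.append((round(t, 2), level))
    return schedule

def yoked_schedule(schedule):
    # The yoked partner passively receives the identical sequence,
    # equating physical stimulation while removing control.
    return list(schedule)

active = active_schedule(10)
yoked = yoked_schedule(active)
assert active == yoked  # same shocks; only control differs between partners

Because stimulation is identical by construction, any difference in stress reactions within a pair is attributable either to control or to the foreknowledge that typically accompanies it, which is precisely the confound at issue in these studies.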
In a follow-up experiment, Staub et al. (1971) made subjects wait for varying time
intervals before they could administer shocks to themselves; they also allowed
subjects to determine the intensity of the next shock in the sequence. In this way,
subjects who had control also knew how intense the next shock would be and when
it would occur; their yoked partners had no such information. Under these
conditions of increased uncertainty, self-administration did have an ameliorative
effect on the perceived intensity of shock.
The experiments reviewed thus far have
employed relatively simple stimuli as sources
of stress (e.g., electric shock or loud noise). An experiment by Stotland and
Blumenthal (1964) employed a more complex threat, namely, an intelligence test.
Subjects in one group were informed that they could take the component parts of
the test in any order they desired, whereas a second group was told that they had to take the test in a prescribed order. Subjects who believed they had control over
the order of administration showed less of an increase in palmar sweating than did
subjects who had no control. These results also would seem to indicate that
regulated administration can mitigate the experience of stress provided the
situation is complex enough to involve a degree of uncertainty.
Some observations in retrospect and prospect. A brief digression is in order at this
point not only to summarize the results of the discussion thus far but also to
adumbrate briefly some observations which are of importance for subsequent
discussion.
1. The studies just reviewed provide little evidence that by itself, the regulated
administration of a noxious stimulus has an ameliorative effect on stress reactions.
In fact, the data suggest that regulated administration is stress reducing primarily
(and perhaps only) when other factors are involved, such as the reduction of
uncertainty regarding the nature and/or timing of the threatening event. The
potential importance of the reduction of uncertainty as a mediating variable is
further illustrated by the findings of Howell (1971) that subjects tend to become
overconfident about outcomes which depend on their own performance. That is,
from a subjective point of view, the reduction of uncertainty may be integrally
linked to behavioral control. However, the relationship between the reduction of
uncertainty and stress reactions is itself quite inconsistent, as will be seen below
when research dealing with cognitive control is reviewed.
2. Studies on regulated administration indicate that most persons prefer to have
control over a potentially noxious stimulus even when that control has no
instrumental value in altering the objective nature of the threat. This preference
generally has been interpreted to mean that conditions of control are less stressful;
that is, persons generally do not prefer the more stressful of several conditions.
Such an interpretation, however, is open to question in many instances. In the
studies just reviewed, for example, direct measures of stress (e.g., self-reports and
psychophysiological reactions) provide little evidence that regulated administration
per se has an ameliorative effect on stress, regardless of how much a person might
prefer it.

Actually, a certain independence of preference ratings from stress reactions makes good biological and psychological sense. An animal (especially an unspecialized
primate of the type ancestral to man) who did not seek information about, or
attempt to exert control over, potentially harmful events probably did not survive to
contribute to the evolution of the species. This would lead to a biological
predisposition for personal control, even though under certain conditions the
exercise of that control might be stress inducing rather than stress reducing. Of
course, similar reasoning could be applied to ontogenetic as well as phylogenetic
development. The growing child is taught to assume responsibility and exercise
control, even though he may at times find this rather frightening. Once socialized,
the preference for control may be manifested regardless of whether or not such
control is actually effective in reducing stress in any particular instance. In other
words, the desire for personal control may be a deep-seated motivational variable,
whether phylogenetically or ontogenetically based.
3. The above argument regarding the adaptive significance of control also makes
necessary a distinction between short-term and long-term stress reactions. Over the
long run, personal control may be stress reducing or adaptive even though in the
short run it may be stress inducing. But what is long-term adaptation? On the
phylogenetic level, long-term adaptation refers to the survival of the species,
regardless of the fate of any single individual. It is easy to see how the exercise of
personal control could, under appropriate circumstances, be stress inducing and still
adaptive in this sense. On a personal level, long-term adaptation could refer either
to a net reduction in stress over an extended period of time, or it could refer to the
"psychic cost" of maintaining equilibrium with the environment. With regard to the
latter, Glass and Singer (1972) have demonstrated that even when lack of control
does not lead to greater stress reactions, and does not retard short-term
habituation, it still may impair performance on tasks administered after the noxious
stimuli have terminated. The types of control studied by Glass and Singer belong to
categories we have not yet discussed (e.g., stimulus modification). It therefore is
not clear whether regulated administration would also reduce the psychic cost
incurred during adaptation to noxious stimuli, but this is a possibility. In any case,
even though regulated administration by itself (i.e., in the absence of uncertainty)
may have little or no effect on short-term stress reactions, it remains possible that
it does facilitate long-term adaptation to stress.

4. Even the conclusion that regulated administration has little or no effect on short-term stress reactions is subject to qualification due to confounding of various types
of control in single experiments. For example, in nearly all the studies reviewed thus
far (Badia et al., 1966; Ball & Vogler, 1971; D'Amato & Gumenik, 1960; Hare, 1966;
Pervin, 1963; Staub et al., 1971) the intensity of the noxious stimulus (electric shock
in these instances) was determined individually for each subject. That is, the level of
shock was set at that point where a subject found it unpleasant or indicated that he
would not accept anything stronger. This is an instance of a second type of
behavioral control to be discussed shortly, namely, stimulus modification. When
given the opportunity, subjects can prevent more intense shocks during an
experiment by indicating at the outset a low tolerance level. Such stimulus
modification need not involve conscious dissimulation. Nevertheless, it is a
potentially confounding factor even when all subjects in an experiment (e.g., in both
self-administered and experimenter-administered shock groups) are able to set their
own levels. Thus, Bowers (1968) found that if subjects were first allowed to select a
level of shock and were informed subsequently that the delivery of shock would be
dependent on their performance during the experiment, there were no differences
in pain and anxiety ratings as a function of control versus no-control instructions. On
the other hand, if subjects were told they would have such control before they
selected shock levels, those given the control instructions were willing to tolerate
significantly higher intensities of shock. In other words, allowing a subject to select
his own level of shock prior to (or in conjunction with) the experimental
manipulations may serve to mask other, more subtle effects.
Let us turn now to a detailed consideration of studies in which the subject explicitly
is given the opportunity to modify the objective characteristics of the impending
harm.
Stimulus Modification
Most studies of behavioral control have allowed the subject to modify (or at least to
believe that he could modify) the objective nature of the threatening event. For
example, subjects have been allowed (a) to prevent entirely or at least avoid some
instances of a noxious stimulus, say, by having punishment contingent upon the performance of some task (e.g., Averill & Rosenn, 1972; Bowers, 1968; Glass &
Singer, 1972; Houston, 1972); (b) to interpose rest periods or take time out from a
series of noxious stimuli (Hokanson, DeGood, Forrest, & Brittain, 1971); (c) to
terminate prematurely (escape) a noxious stimulus (Bandler, Madaras, & Bem,
1968; Champion, 1950; Elliot, 1969; Geer, Davison, & Gatchel, 1970; Geer & Maisel,
1972); or (d) to limit the intensity of a noxious stimulus (as when subjects select the
level of shock they will tolerate in an experiment; cf. previous discussion).
Obviously, in delineating stimulus modification as a subvariety of behavioral control
we are dealing with a complex category. The situation is complicated even further
by the fact that most studies investigating this type of control have actually
involved more than one manipulation. An experiment by Hokanson et al. (1971)
may serve to illustrate this point. The study was designed to test, among other
things, the hypothesis that the availability of an avoidance response reduces
physiological indicants (blood pressure) of stress. The avoidance response available
to one group of subjects, but not to a yoked comparison, was the possibility of
introducing one-minute rest periods during a one-half-hour work schedule. In fact,
however, there were two other types of personal control involved in this
experiment. First, subjects were allowed to determine for themselves the level of
shock they would endure during the experiment, and second, the work schedule
from which subjects could take time out was itself an avoidance task.
The Hokanson et al. experiment revealed that subjects who could interpose rest
periods had smaller increases in blood pressure than did their yoked partners who
received the same number and sequence of time-outs. Hokanson et al. (1971)
therefore concluded that "the availability of an avoidance response
reduces autonomic signs of arousal [p. 66]." This conclusion may serve as a starting
point for the present analysis, for it is a representative summary of the research on
stimulus modification cited in the introduction to this section. That is, when subjects
are given the opportunity to modify the nature of an aversive stimulus, decreased
stress reactions generally have been observed in comparison with conditions in
which no control is possible. In spite of the seeming ubiquity of this finding, its
generality is open to question. Without going into details, it may be stated that for
any particular experiment in which lower stress reactions have been observed on

the part of the subjects who have had control, the results typically are open to
alternative interpretations. In the Hokanson et al. study, for example, it is
reasonable to assume that subjects who had control took their rest periods at
psychologically propitious moments, that is, when they were especially fatigued,
tense, etc. Subjects in the yoked condition, on the other hand, had time-outs
imposed upon them regardless of their momentary state. For the latter group, then,
one would expect the rest periods to be less effective in alleviating stress.
The above criticism might seem relatively minor. After all, the same alternative
explanation cannot be applied to all experiments which have observed reduced
stress reactions as a function of control. It might therefore be argued that the more
parsimonious explanation for the totality of experimental results is in terms of
personal control. Two further points must be noted, however. First, some
investigators have observed little or no reduction in stress among subjects who
have had control (e.g., Houston, 1972). Second, even in studies in which the
majority of subjects who have had control showed reduced stress reactions, a
sizable minority (typically between 10% and 20% of the total sample) have shown
the opposite pattern of response (e.g., Averill & Rosenn, 1972). That is, for some
subjects under certain circumstances, the availability of a control response appears
to be stress inducing rather than stress reducing. The relevant question, then, is not
whether having personal control reduces stress reactions but under what conditions
it has this effect and under what conditions it has the opposite (stress-inducing)
effect. At present, studies on the human level do not allow an answer to this
question, partly because they have tended to be so complex. On the animal level,
however, a start has been made in elucidating the conditions under which the
availability of a control response will lead to increased as opposed to decreased
stress reactions.
In a well-known experiment by Brady, Porter, Conrad, and Mason (1958), it was
found that the member of a yoked pair of monkeys who could prevent the delivery
of shock (working on a Sidman avoidance schedule) was also the member who
developed ulcers and died. In a recent series of experiments, Weiss (1968, 1971a,
1971b, 1971c) has presented evidence that the reason for the "executive" monkey's
stress was not the ineffectiveness of the control response but rather a lack of feedback regarding its success. Using rats as subjects, Weiss has found that
ulceration increases proportionately with the number of coping responses emitted
by an animal but inversely with the amount of positive feedback regarding the
success of the response. The most stressful conditions are those in which many
responses are demanded but the responses result in negative or inconsistent
feedback (the traditional conflict situation); also stressful, however, is the situation (like that of the executive monkey) in which the animal must wait to learn the
outcome of his response or rely on difficult temporal discriminations. On the other
hand, if an animal receives immediate and positive feedback, then the availability of
a coping response alleviates stress in comparison to animals who receive the same
amount of noxious stimulation but who have no control.
The point which needs to be emphasized in the above analysis by Weiss is that
stress is not only a function of feedback but also of the number of coping responses.
Holding feedback constant, the animal who responds more, even if the responses
are effective, also exhibits more stress. In extreme cases of ambiguous feedback,
one might predict that the animal who does not respond at all, and hence receives
the noxious stimulus, should experience less stress than the animal who responds
frequently and receives little punishment. (This assumes, of course, that the
punishing stimulus does not itself lead to physiological damage.) It remains to be
tested whether or not this prediction is accurate, and whether the conditions
elucidated by Weiss for animals also hold for humans, with their greater capacity for
cognitive modes of control. Nevertheless, the results of Weiss represent one of the
few concrete demonstrations of the conditions under which the availability of a
control response may lead to increased rather than decreased stress.
COGNITIVE CONTROL AND STRESS
While behavioral control involves direct action on the environment, cognitive control
refers to the way a potentially harmful event is interpreted. It might seem that we
are stretching the concept of control too far by subsuming under it the
interpretation of events. But consider the oft-demonstrated finding that the
perception of pain is a function, in part, of the meaning or significance of the
aversive stimulus (Melzack & Casey, 1970). If pain can be mitigated on the basis of
a person's interpretation of events, then certainly such an interpretation deserves to

be called, in some sense at least, a mode of control. Nevertheless, the notion of cognitive control does present some conceptual difficulties. For example, the
research reviewed thus far indicates that personal control may have potentially
stress-inducing as well as stress-reducing properties. Is there any sense in which an
interpretation of an event can lead to increased stress and still legitimately be
called a mode of control? There is no doubt that certain interpretations (e.g., that
one has cancer) may increase stress reactions. But if cognitive control is equated
with any interpretation, then the concept of control is stretched beyond useful
limits. What is
needed is some definition of cognitive control which is independent of immediate
stress reactions but still more restrictive than just any type of stimulus evaluation.
The distinction previously made between immediate and long-term stress reactions
(cf. summary observations on regulated administration) is helpful in this regard.
Cognitive control may be defined as the processing of potentially threatening
information in such a manner as to reduce the net long-term stress and/or the
psychic cost of adaptation. This definition allows for the possibility that cognitive
control, just as behavioral control, may lead in the short run to increased rather
than decreased stress.
Depending upon whether the interpretation of an event is essentially reality
oriented or whether meaning is imposed upon the stimulus, two types of cognitive
control may be distinguished: information gain and appraisal. In certain respects,
this distinction is similar to that between regulated administration and stimulus
modification as subvarieties of behavioral control. That is, in the case of
information gain, the evaluation of threat is relatively objective; in the case of
appraisal, on the other hand, the threat is modified to conform to the needs and
desires of the individual.
Information Gain
The role of information in reducing stress (what Furedy and Doob, 1970, have called "informational control") recently has become a topic of considerable interest
and controversy. In examining this issue it is helpful to start with the simplest case,
namely, the effect of having a warning signal prior to an aversive stimulus.

Discussion is limited to situations in which the warning signal does not allow the
individual to exert any behavioral control over the threatening event.
A number of studies have found that a warning signal tends to increase the
stressfulness of a situation. Brady, Thornton, and Fisher (1962), for example,
observed greater weight loss and mortality in rats subjected to a regimen of
signaled versus unsignaled shock. Similar findings have been reported by Friedman
and Ader (1965) in mice and by Liddell (1950) in sheep and goats. On the other
hand, Seligman (1968) and Weiss (1970) have found that rats exhibit less stress
when electric shock is preceded by a warning signal than when it is not.
The reasons for the above discrepant findings are not clear. Weiss (1970) has noted
that in the studies by Brady et al. (1962) and by Friedman and Ader (1965), electric
shock was delivered through a floor grid to a freely moving animal. This might have
allowed the animal to exert some behavioral control over the shock, for example, by
rearing on its hind paws, jumping, etc. Such control would be relatively inefficient,
however, interrupting or perhaps mitigating the shock only for brief periods.
Inefficient control might increase the stressfulness of a situation by providing
negative feedback to the subject (cf. the previous discussion of stimulus
modification). However, experiments by Liddell (1950) indicate that this probably is
not the entire explanation. Liddell delivered shock on a fixed time schedule (e.g.,
every two or seven minutes). Although the animals (goats and sheep) learned quite
accurately when to expect the shock, this temporal conditioning was accompanied
by little stress. However, if each shock was preceded by a 10-second warning signal,
the animals showed signs of severe disturbance. According to Liddell (1950), "the
added pinprick of vigilance supplied by a ten second signal preceding the shock
appears to be the determining factor in precipitating the chronic abnormalities of
behavior which we call experimental neurosis [p. 195]." In other words, it was not
the negative feedback regarding an inefficient response which produced stress
(both the signaled and unsignaled animals knew through temporal conditioning
when to expect the shock); rather, the stress appeared to result from stimulus
overload.

Now let us consider instances in which signaled shock has led to less stress than
unsignaled shock. Seligman (1968) and Weiss (1970) presented evidence that this is
due to the fact that the subject learns not only that a warning signal predicts shock
but also that the absence of the signal predicts safety. Thus, by providing a warning
signal, one also provides a safety signal (cues associated with the absence of the
warning signal). During the safety signal, the animal can "relax," knowing that shock
will not be delivered. When there is no warning signal, on the other hand, the
animal must constantly remain vigilant and, hence, the stressfulness of the entire
regimen is increased. According to this line of reasoning, then, it is not the warning
signal per se which is stress reducing but rather the meaning it imparts to other
cues not associated with threat.
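The safety-signal argument can be restated in terms of conditional probabilities; this formalization is ours, since Seligman's and Weiss's accounts are verbal. With a warning signal $S$ under a signaled schedule,

$$ P(\text{shock imminent} \mid S) \approx 1, \qquad P(\text{shock imminent} \mid \neg S) \approx 0, $$

so vigilance is required only during the small fraction of the session occupied by warnings. Under an unsignaled schedule, $P(\text{shock imminent}) = p > 0$ at every moment, and no part of the session is safe. On this reading, the stress reduction comes from the safety periods created in the signal's absence rather than from the warning itself.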
Experiments using animal subjects thus lead to the following tentative conclusions
regarding the stressfulness of signaled versus unsignaled shock. First, signaled
shock may be accompanied by greater stress reactions than unsignaled shock
under conditions which lead to negative feedback (due, for example, to ineffective
avoidance responses) and/or when the limits of the organism to assimilate new
information are exceeded. Second, signaled shock may be accompanied by fewer
stress reactions under conditions in which viable avoidance responses are possible
and/or in which the contingencies between the warning and the shock are such that
the subject can relax during the intershock interval. The latter point is worth
emphasizing because it means that whatever stress-reducing properties a warning
signal might have are not intrinsic to the signal itself but rather are a function of
the entire context in which the warning is embedded.
A few laboratory studies have investigated the effects of warning signals with
human subjects (Averill & Rosenn, 1972; Glass & Singer, 1972; Lovibond, 1968).
Although not without exception, these experiments, like their animal counterparts,
generally have found that a warning signal by itself has little effect on the
experience of stress. It should be pointed out, however, that a warning signal
probably is not the most efficacious way of providing information regarding the
onset of a threatening stimulus. For example, if a threatening event follows too
closely upon a warning signal, waiting for the warning could be as demanding as
waiting for the threatening event itself. Perhaps a more efficacious way to provide

information regarding the onset of a noxious stimulus is to have the latter occur at
the end of a specified time interval. Thus, Glass and Singer (1972) observed that
periodic bursts of noise produce fewer undesirable consequences on performance
following termination of the stimuli than do randomly presented bursts (cf. also the
previously cited studies of Liddell on temporal conditioning in animals). On the
other hand, Monat, Averill, and Lazarus (1972) found that knowing the time of
occurrence of a noxious stimulus (indicated by a clock) led to greater anticipatory
stress reactions than did a condition of temporal uncertainty, perhaps because the
latter allowed subjects to engage in avoidance-like cognitive activities as time
progressed.
Jones, Bentler, and Petry (1966) found that subjects appear to be more motivated to
obtain information about the time of occurrence of a noxious stimulus than about its
intensity, which would seem to highlight the importance of temporal information. As
indicated before, however, it is hazardous to draw inferences from preference
ratings to stress reactions. In the just cited experiment by Monat et al. (1972), for
example, subjects expressed a preference for temporal certainty (knowing when a
shock would occur) rather than for temporal uncertainty, although anticipatory reactions
tended to be less when the
time of occurrence was unknown.
Epstein (1973) has recently reviewed a number of studies conducted in his
laboratory which deal with the effects of different kinds of information on reactions
to noxious stimuli (loud sounds and electric shocks). He found that information
regarding the time of occurrence, likelihood, and nature of the stimulus could either
reduce or enhance reactivity to the impact of the stimulus. He concluded: "The finding of greatest generality was that an accurate expectancy tended to facilitate habituation. . . . It follows that, depending upon the threat value of a stimulus, it is at times necessary to pay the price of a momentary unpleasurable increase in arousal if one is to later be able to respond at a reduced level of arousal [p. 105]."
Lanzetta and Driscoll (1966) examined the preference for information as a function
of whether the anticipated event was an electric shock (versus no shock), a
monetary reward (versus no reward), or shock versus reward. They found that subjects generally preferred to have information as opposed to no information regarding an anticipated event, but there were no significant differences in the
number of subjects seeking information among the shock-no-shock, reward-no-reward, and shock-reward conditions. Moreover, information regarding the nature of
the impending event had little influence on reactivity to its occurrence.
At still a more complex level of information, Staub and Kellett (1972) examined the
relative value of two types of knowledge concerning an impending threat (electric
shock). Subjects were given information about (a) the objective characteristics of
the shock (e.g., the nature of the delivery apparatus and its safety features, transfer
of electricity, and the like) and/or (b) about the types of sensations subjects would
experience when the shock was delivered. Subjects who received both types of
information were willing to accept more intense shocks before evaluating them as
painful than were subjects who received either type of information by itself. The
latter subjects, in fact, did not differ in pain tolerance from those who received no
information at all. The no-information subjects, however, did express more worry or
anxiety than those who had either or both kinds of knowledge about the threat and
its consequences. From the responses to questionnaire items, Staub and Kellett
have argued that the two kinds of information enhanced each other's perceived
trustworthiness, or usefulness, or both. While the apparatus information helped
reduce worry about the danger of shock, only the sensation information could be
validated by the subject's own experience. The reduction of objective worry and the
validation by experience may be necessary before information can represent an
effective control device, at least in terms of pain tolerance.
All of the research reviewed thus far has dealt with relatively simple stimuli such as
electric shock or bursts of loud noise. The results indicate that subjects generally
prefer to have information about an impending harm but that there is no consistent
relationship between such information and initial reactivity. There is some
indication, however, that predictability, even when it leads to increased reactivity
initially, may facilitate long-term adaptation. Evidence reviewed below on the
appraisal of complex emotional stimuli would seem to support this conclusion.
Appraisal

When a situation is complex or ambiguous, a person does not simply obtain information; he also actively imposes meaning on events. The imposition of
meaning on a potentially threatening event often has been considered a form of
defense (cf. psychoanalytic theory). However, since no psychopathology is implied
in the present discussion, the term "appraisal" is preferable to "defense
mechanism" as a generic name for this subvariety of cognitive control. As it relates
to personal control, the concept of appraisal resembles Kelly's (1955) notion of
personal constructs. Kelly viewed man as an incipient scientist whose ultimate aim
is to predict and control events. Such prediction and control is achieved by
abstracting from events certain features and weaving these abstractions into a
system of constructs which lends meaning to the separate events.
The body of literature related to the appraisal of threat is so extensive that no
attempt is made to review it here (see, e.g., Lazarus, 1966). Rather, we simply
illustrate with a few instances how appraisals, like other modes of control, may
under appropriate circumstances increase as well as decrease stress reactions.
Sensitizing-like defense mechanisms are perhaps the prime example of appraisals
which are stress inducing in the short run but which may facilitate long-term
adaptation (e.g., Davidson & Bobey, 1970). The person who uses this type of control
focuses attention on threatening events, searches for cues regarding potential
harm, and in general emphasizes the affective quality of experience.
Closely related to sensitization as a mode of defense is the "work of worry"
described by Janis (1958). According to Janis, worry is a form of inner preparation
that increases the level of tolerance for subsequent noxious stimuli but at some cost
in terms of immediate stress reactions (see also Breznitz, 1971). Janis found that
surgical patients who experienced too much or too little fear prior to an operation
evidenced less rapid recovery than did patients who showed moderate anticipatory
stress reactions. Presumably, the cognitive strategies employed by the latter
allowed better preparation for the surgical trauma which followed.
Egbert, Battit, Welch, and Bartlett (1964), following up the ideas of Janis, prepared a
group of patients for the stress of surgery by providing them with information
regarding the impending operation and their possible reactions and experiences during recovery. Patients who received such instruction required less medication
and were sent home earlier than were patients who received no preparation.
However, the instruction given to patients by Egbert et al. was fairly intensive and
included advice with regard to behavioral as well as cognitive control.
The influence of information per se on the stress of surgery has been the subject of
several dissertations conducted under the direction of Michael Goldstein. In these
studies, patients were first divided into three groups depending upon whether they
exhibited a preference for sensitizing or avoidant (denial-like) defenses or whether
they showed no specific preference ("nonspecific defenders"). In an initial study,
Andrew (1970) found that information regarding surgery facilitated recovery of
those patients classified as nonspecific defenders but may have been
counterproductive for patients who preferred denial-like defenses. Contrary to
expectations, sensitizers were not affected by the receipt of information, perhaps
because they already had prepared themselves sufficiently for the operation. In a
second dissertation, DeLong (1970) gave some patients very specific information
describing the nature of their illness, the reasons for surgery, what to expect
preoperatively and postoperatively, and the like. Other patients received more
general information regarding hospital facilities, rules and regulations, etc. DeLong
found that sensitizers recovered more quickly than avoiders when given specific
information but less quickly when given general information. Type of information did
not seem to influence the recovery of nonspecific defenders or avoiders; the former
recovered well under both conditions, while the latter recovered poorly. However,
complaints of discomfort were more numerous among avoiders who received
specific information (in comparison to sensitizers who received the same
information), while
complaints were less among avoiders who received only general information.
Although these data from the Andrew and DeLong studies are not entirely
consistent, they do indicate that the type of information a person receives about an
impending danger may interact in a generally predictable fashion with his
characteristic style of defense.
In her study, DeLong also assessed the anxiety of patients on the day they were
informed that surgery was to be performed and again 24 hours before the actual operation. These data help clarify some of the ambiguities in the relationship
between defensive style and the use of information. Nonspecific defenders showed
large increases in anxiety when surgery was first scheduled, but their anxiety
diminished considerably by the day before the operation. Sensitizers, on the other
hand, showed only a moderate increase in anxiety upon learning of surgery and
maintained this level during the preoperative
period. Finally, patients who were classified as avoiders showed an actual decrease
in anxiety when surgery was first scheduled, but this was followed by a rise to
preoperative levels. In short, the three groups of patients were showing different
gradients of anxiety (one descending, one remaining constant, and one ascending)
at the time they received information regarding surgery. The use of the
information as a source of control appears to have been a function of these
gradients (being unnecessary in the case of the first, facilitative in the second, and counterproductive in the third).
safety signal (cues associated with the absence of the warning signal). During the
safety signal, the animal can "relax," knowing that shock will not be delivered.
When there is no warning signal, on the other hand, the animal must constantly
remain vigilant and, hence, the stressfulness of the entire regimen is increased.
According to this line of reasoning, then, it is not the warning signal per se which is
stress reducing but rather the meaning it imparts to other cues not associated with
threat.
Experiments using animal subjects thus lead to the following tentative conclusions
regarding the stressfulness of signaled versus unsignaled shock. First, signaled
shock may be accompanied by greater stress reactions than unsignaled shock
under conditions which lead to negative feedback (due, for example, to ineffective
avoidance responses) and/or when the limits of the organism to assimilate new
information are exceeded. Second, signaled shock may be accompanied by fewer
stress reactions under conditions in which viable avoidance responses are possible
and/ or in which the contingencies between the warning and the shock are such that
the subject can relax during the intershock interval. The latter point is worth
emphasizing because it means that whatever stress-reducing properties a warning
signal might have are not intrinsic to the signal itself but rather are a function of the
entire context in which the warning is embedded.

A few laboratory studies have investigated the effects of warning signals with
human subjects (Averill & Rosenn, 1972; Glass & Singer, 1972; Lovibond, 1968).
Although not without exception, these experiments, like their animal counterparts,
generally have found that a warning signal by itself has little effect on the
experience of stress. It should be pointed out, however, that a warning signal
probably is not the most efficacious way of providing information regarding the
onset of a threatening stimulus. For example, if a threatening event follows too
closely upon a warning signal, waiting for the warning could be as demanding as
waiting for the threatening event itself. Perhaps a more efficacious way to provide
information regarding the onset of a noxious stimulus is to have the latter occur at
the end of a specified time interval. Thus, Glass and Singer (1972) observed that
periodic bursts of noise produce fewer undesirable consequences on performance
following termination of the stimuli than do randomly presented bursts (cf. also the
previously cited studies of Lid-dell on temporal conditioning in animals).
On the other hand, Monat, Averill, and Lazarus (1972) found that knowing the time
of occurrence of a noxious stimulus (indicated by a clock) led to greater anticipatory
stress reactions than did a condition of temporal uncertainty, perhaps because the
latter allowed subjects to engage in avoidance like cognitive activities as time
progressed.
Jones, Bentler, and Petry (1966) found that subjects appear to be more motivated to
obtain information about the time of occurrence of a noxious stimulus than about its
intensity, which would seem to highlight the importance of temporal information. As
indicated before, however, it is hazardous to draw inferences from preference
ratings to stress reactions. In the just cited experiment by Monat et al. (1972), for
example, subjects expressed a preference for temporal certainty (knowing when a
shock would occur) than for temporal uncertainty, although anticipatory reactions
tended to be less when the
time of occurrence was unknown.

Epstein (1973) has reviewed recently a number of studies conducted in his


laboratory which deal with the effects of different kinds of information on reactions
to noxious stimuli (loud sounds and electric shocks). He found that information

regarding the time of occurrence, likelihood, and nature of the stimulus could either
reduce or enhance reactivity to the impact of the stimulus. He concluded :
The finding of greatest generality was that an accurate expectancy tended to
facilitate habituation. .. . It follows that, depending upon the threat value of a
stimulus, it is at times necessary to pay the price of a momentary unpleasurable
increase in arousal if one is to later be able to respond at a reduced level of arousal
[p. 105].
Lanzetta and Driscoll (1966) examined the preference for information as a function
of whether the anticipated event was an electric shock (versus no shock), a
monetary reward (versus no reward), or shock versus reward. They found that
subjects generally preferred to have information as opposed to no information
regarding an anticipated event, but there were no significant differences in the
number of subjects seeking information among the shock-no-shock, reward-noreward, and shock-reward conditions. Moreover, information regarding the nature of
the impending event had little influence on reactivity to its occurrence.
At still a more complex level of information, Staub and Kellett (1972) examined the
relative value of two types of knowledge concerning an impending threat (electric
shock). Subjects were given information about (a) the objective characteristics of
the shock (e.g., the nature of the delivery apparatus and its safety features, transfer
of electricity, and the like) and/or (b) about the types of sensations subjects would
experience when the shock was delivered. Subjects who received both types of
information were willing to accept more intense shocks before evaluating them as
painful than were subjects who received either type of information by itself. The
latter subjects, in fact, did not differ in pain tolerance from those who received no
information at all. The no-information subjects, however, did express more worry or
anxiety than those who had either or both kinds of knowledge about the threat and
its consequences. From the responses to questionnaire items, Staub and Kellett
have argued that the two kinds of information enhanced each others' perceived
trust worthiness, or usefulness, or both. While the apparatus information helped
reduce worry about the danger of shock, only the sensation information could be
validated by the subject's own experience. The reduction of objective worry and the

validation by experience may be necessary before information can represent an


effective control device, at least in terms of pain tolerance.
All of the research reviewed thus far has dealt with relatively simple stimuli such as
electric shock or bursts of loud noise. The results indicate that subjects generally
prefer to have information about an impending harm but that there is no consistent
relationship between such information and initial reactivity. There is some
indication, however, that predictability, even when it leads to increased reactivity
initially, may facilitate long-term adaptation. Evidence reviewed below on the
appraisal of complex emotional stimuli would seem to support this conclusion.
Appraisal
When a situation is complex or ambiguous, a person does not simply obtain
information, he also actively imposes meaning on events. The imposition of
meaning on a potentially threatening event often has been considered a form of
defense (cf. psychoanalytic theory). However, since no psychopathology is implied
in the present discussion, the term "appraisal" is preferable to "defense
mechanism" as a generic name for this subvariety of cognitive control. As it relates
to personal control, the concept of appraisal resembles Kelly's (19SS) notion of
personal constructs. Kelly viewed man as an incipient scientist whose ultimate aim
is to predict and control events. Such prediction and control is achieved by
abstracting from events certain features and weaving these abstractions into a
system of constructs which lends meaning to the separate events.
The body of literature related to the appraisal of threat is so extensive that no
attempt is made to review it here (see, e.g., Lazarus, 1966). Rather, we simply
illustrate with a few instances how appraisals, like other modes of control, may
under appropriate circumstances increase as well as decrease stress reactions.
Sensitizing like defense mechanisms are perhaps the prime example of appraisals
which are stress inducing in the short run but which may facilitate long-term
adaptation (e.g., Davidson & Bobey, 1970). The person who uses this type of control
focuses attention on threatening events, searches for cues regarding potential
harm, and in general emphasizes the affective quality of experience.

Closely related to sensitization as a mode of defense is the "work of worry"


described by Janis (1958). According to Janis, worry is a form of inner preparation
that increases the level of tolerance for subsequent noxious stimuli but at some cost
in terms of immediate stress reactions (see also Breznitz, 1971). Janis found that
surgical patients who experienced too much or too little fear prior to an operation
evidenced less rapid recovery than did patients who showed moderate anticipatory
stress reactions. Presumably, the cognitive strategies employed by the latter
allowed better preparation for the surgical trauma which followed.
Egbert, Battit, Welch, and Bartlett (1964), following up the ideas of Janis, prepared a
group of patients for the stress of surgery by providing them with information
regarding the impending operation and their possible reactions and experiences
during recovery. Patients who received such instruction required less medication
and were sent home earlier than were patients who received no preparation.
However, the instruction given to patients by Egbert et al. was fairly intensive and
included advice with regard to behavioral as well as cognitive control.
The influence of information per se on the stress of surgery has been the subject of
several dissertations conducted under the direction of Michael Goldstein. In these
studies, patients were first divided into three groups depending upon whether they
exhibited a preference for sensitizing or avoidant (denial-like) defenses or whether
they showed no specific preference ("nonspecific defenders"). In an initial study,
Andrew (1970) found that information regarding surgery facilitated recovery of
those patients classified as nonspecific defenders but may have been
counterproductive for patients who preferred denial-like defenses. Contrary to
expectations, sensitizers were not affected by the receipt of information, perhaps
because they already had prepared themselves sufficiently for the operation. In a
second dissertation, DeLong (1970) gave some patients very specific information
describing the nature of their illness, the reasons for surgery, what to expect
preoperatively and postoperatively, and the like. Other patients received more
general information regarding hospital facilities, rules and regulations, etc. DeLong
found that sensitizers recovered more quickly than avoiders when given specific
information but less quickly when given general information. Type of information did
not seem to influence the recovery of nonspecific defenders or avoiders; the former recovered well under both conditions, while the latter recovered poorly. However,
complaints of discomfort were more numerous among avoiders who received
specific information (in comparison to sensitizers who received the same
information), while complaints were fewer among avoiders who received only general
information. Although these data from the Andrew and DeLong studies are not
entirely consistent, they do indicate that the type of information a person receives
about an impending danger may interact in a generally predictable fashion with his
characteristic style of defense.
In her study, DeLong also assessed the anxiety of patients on the day they were
informed that surgery was to be performed and again 24 hours before the actual
operation. These data help clarify some of the ambiguities in the relationship
between defensive style and the use of information. Nonspecific defenders showed
large increases in anxiety when surgery was first scheduled, but their anxiety
diminished considerably by the day before the operation. Sensitizers, on the other
hand, showed only a moderate increase in anxiety upon learning of surgery and
maintained this level during the preoperative
period. Finally, patients who were classified as avoiders showed an actual decrease
in anxiety when surgery was first scheduled, but this was followed by a rise to
preoperative levels. In short, the three groups of patients were showing different
gradients of anxiety (one descending, one remaining constant, and one ascending)
at the time they received information regarding surgery. The use of the
information as a source of control appears to have been a function of these
gradients (being unnecessary in the case of the first, facilitative in the second, and
counterproductive in the third).
In attempting to clarify further the mechanisms involved in the interaction between
information and defensive style, Goldstein and his colleagues reasoned that
different defensive styles represent different cognitive sets concerning the likelihood
of danger. Paul (1969) attempted to manipulate such sets on
a temporary basis by showing subjects either a stressful or a benign film when they
first reported to the laboratory. After the establishment of either a sensitizing (stress
film) or denial (benign film) set, subjects were informed that they would see a
second film of a threatening nature. Paul found that subjects with the sensitizing set exhibited greater stress reactions when they were shown the threatening film
shortly after being informed but lower stress reactions if a day elapsed between the
warning and the showing of the film. Subjects with a denial set, on the other hand,
exhibited lower stress reactions when they saw the second film after a short delay
but greater reactions following a day's delay. These results were confirmed by
Cooley (1971), who also demonstrated that the high stress after a day's delay on
the part of subjects with a denial set was due to the specific "tranquilizing" effect of
the benign film. Initial exposure to this film (as opposed to no particular set
induction) seems to have caused subjects to ignore the warning of future threat.
From the above laboratory and surgery studies, it appears that if a person is to make effective use of information regarding an impending harm, there must be an
initial set for the appraisal of threat based either on specific situational cues or on a
person's characteristic cognitive style.
In concluding this discussion of appraisal as a mode of control, it is perhaps worth
pointing out the obvious, namely, that any interpretation of events is an ongoing
process. What is appraised in one manner now may be reappraised in another
manner subsequently. One such form of reappraisal which
deserves brief mention is that involved in dissonance reduction. (This is what
Zimbardo, 1969a, has referred to as "cognitive control.") Most of the research on
cognitive dissonance has emphasized the reduction of stress. However, even
dissonance reduction does not go against the general conclusion that personal
control may sometimes lead to increased rather than decreased stress reactions. An
experiment by Bandler et al. (1968) illustrates this. Bandler et al. administered
electric shock under conditions in which the subjects were instructed either to
escape or to endure the stimulus. Results indicated that subjects tended to rate shocks which they escaped as more painful than shocks which they endured.
The authors interpreted these results to mean that a person's perception of pain is
determined, in part, by his original response to the painful stimulus: If he tries to
escape the stimulus, it must be more painful than if he tries to endure it. In a follow-up experiment, Corah and Boffa (1970) obtained similar results, but only if subjects believed they had a choice in escaping or enduring the shock.

Interaction among various types of control. The experiment by Corah and Boffa
(1970) is worth considering in some detail because it illustrates the manner in which
the various types of control we have distinguished (behavioral, cognitive, and
decisional) may interact to enhance or inhibit stress reactions. Corah and Boffa
arranged situations in which subjects believed that they could either terminate or
not terminate a loud noise. (The same subjects experienced both conditions in a
repeated measures design.) Termination of the noise is, of course, an example of
stimulus modification, the second type of behavioral control outlined previously. In
addition, one half of the subjects were given instructions emphasizing that it was up
to them whether or not they terminated the noise in the escape condition and
whether they endured it in the no-escape condition. Such choice is an example of
decisional control, which will be discussed in more detail below. The other half of the
subjects were not
given any choice but were simply instructed to escape or not escape in the
respective conditions.
Let us consider first the stress reactions of subjects in the no-choice group.
Comparisons of the escape with the no-escape trials yielded significant differences
in terms of both self-report and autonomic (skin conductance) indexes of stress.
When subjects had behavioral control (could escape), they experienced
less stress than when they had no control (could not escape). Now let us consider
only the no-escape condition and contrast the responses of subjects who had a
choice with the responses of those who had no choice. Again, the results were quite
clear. Both subjective reports and autonomic arousal indicated that subjects who
had some degree of choice experienced less stress than those who did not.
Taken together, the above results indicate that both types of control, behavioral and decisional, were stress reducing when considered independently. What
happened when subjects had both modes of control, that is, in the condition where
escape was possible and subjects felt free to respond or not? In terms of subjective
reports of discomfort, at least, this condition was judged as stressful as its opposite, the no-choice, no-escape condition. Moreover, it was significantly more stressful
than either the behavioral or decisional control conditions separately. (The
autonomic indexes of stress provided more equivocal results. The no-escape condition elicited significantly more arousal than the other three conditions, but the
latter did not differ among themselves.)
Interpreting the results of this experiment, Corah and Boffa (1970) suggest:
that a sense of control is a determinant of the cognitive appraisal of threat. A
procedure which gives the subject the choice of avoiding or not avoiding the
aversive consequences of a stimulus is equivalent to giving him perceived control
over the potential threat [p. 4].
We are not concerned here with the adequacy of this particular interpretation, which
seems basically sound as far as it goes. What is of interest is the complex
interaction which may be observed among different types of control. In the study of
Corah and Boffa, behavioral control and decisional control were experimentally
manipulated, while cognitive control (in the sense of reappraisal) was inferred.
DECISIONAL CONTROL AND STRESS
The study by Corah and Boffa introduces the problem of decisional control, which
may be defined as the range of choice or number of options open to an individual.
There has been a considerable amount of research on the consequences of choice
in negative situations. However, most of this research has concerned the reduction
of post-decisional conflict, for example, reappraising a negative stimulus in a
positive direction following the voluntary decision to experience that stimulus. Such
reappraisals represent the type of cognitive control discussed in the previous
section. We are concerned here with the conditions of choice per se, that is, with
pre-decisional processes.
If the question were put to most people, there is little doubt that they would prefer
to have a choice among alternative courses of action rather than to have decisions
made for them. Yet social commentators from Hobbes to Fromm have emphasized
the willingness of man to relinquish such control, to "escape from freedom,"
subjecting himself to external authority. In spite of this seeming paradox, relatively
little research has been devoted to the relationship between decisional control and
stress reactions (cf. Steiner, 1970).

Of potential relevance to decisional control and stress is the program of research by Zimbardo (1969b) and his colleagues on "deindividuation." Deindividuation
occurs, according to Zimbardo, in novel or unstructured settings in which behavior
is not constrained by situation-bound cues. The control of behavior, therefore, shifts
from the external physical and social reality to internal constraints. The result is
freedom of choice to engage in alternative behaviors, and, in particular, acts which
are emotional, impulsive, irrational, and otherwise out of character. Zimbardo
contrasts this type of internal control with the reappraisals involved in the reduction
of post-decisional conflicts. Underlying the latter are such considerations as
consistency, commitment, and responsibility, considerations which play a minor role
in deindividuated behavior.
For better or worse, we have here the emergence of a kind of freedom different
from that made possible through the use of cognitive control mechanisms we
described earlier [i.e., dissonance-reducing reappraisals]. It is the freedom to act, to
be spontaneous, to shed the straightjacket of cognition, rumination, and excessive
concern with "ought" and "should." Behavior is freed from obligations, liabilities and
the restrictions imposed by guilt, shame, and fear [Zimbardo, 1969b, p. 248].
It will be noted that the situations conducive to deindividuation, as described by
Zimbardo, are similar to situations which sociologists have described under the
heading of "anomie." The result of deindividuation, as well as anomie, is often
extreme anxiety or stress on the part of the individual. As Zimbardo notes, violence in the form of "senseless" beatings and thrill killings, as well as other antisocial acts (e.g., vandalism), is also a manifestation of deindividuation. But not all
deindividuated behaviors are negative, either subjectively or socially. A variety of
ecstatic states, such as Dionysiac celebrations and shamanistic revelries, can also be
considered examples of behavior released from most normal external constraints,
with great freedom of choice being afforded the individual. A somewhat different
view of decisional control has been advanced by Kelly (1955) and Chein (1972).
Instead of viewing freedom primarily in terms of the lack of external constraints,
these authors emphasize the agreement of the individual with whatever constraints
do exist. Both Kelly and Chein analyze behavior in terms of hierarchically organized systems of personal constructs (Kelly) or motives (Chein). A person experiences decisional control when goals are established by superordinate systems which then can be met by relevant subordinate behaviors. This is, in a sense, a variation on the
ancient Stoic ideal of accommodation to necessity.
Thus, according to Kelly (1955), a man controls his destiny to the extent that he can
develop a construction system with which he identifies himself and which is
sufficiently comprehensive to subsume the world around him. If he is unable to
identify himself with this system, he may be able to predict events determinatively,
but he can experience no personal control [p. 126].
At the risk of oversimplification, one might operationalize the above analysis of
decisional control by saying that a person will experience choice when he is acting
according to his beliefs or doing that with which he agrees. An experiment by Lewis
and Blanchard (1971) is relevant to decisional control so conceived. A situation was
arranged in which subjects were given three levels of choice as to whether they
were going to give or receive electric shocks in an ostensible learning experiment.
At the high-choice level, subjects had complete freedom to choose whether they
would be the "teacher" or "learner." All subjects in this group chose to be the
teacher, and hence to give rather than to receive shocks. In the medium-choice
condition, it was rather off-handedly suggested to one half of the subjects that they
be teachers and to the other half that they be learners. Each subject was then given
the opportunity to choose the other role if he so desired. In the no-choice condition,
subjects were told definitely to be either the teacher or the learner. After the
experiment, subjects rated how free they were to assume or reject the roles to
which they were assigned. Lewis and Blanchard (1971) found that
perceived freedom of choice varied as a function of whether or not subjects (in the medium- and no-choice conditions) were assigned to the role of teacher or learner.
This is what would be expected on the basis of the preceding analysis. That is, the
experience of choice is a function, in part, of how well a person identifies with the
roles he assumes. In the present case, the teacher role was the more desirable, as
evidenced by the fact that all subjects in the high-choice condition chose it.
Subjects assigned to the role of teacher in the medium- and no-choice conditions

probably identified with it, at least to a greater extent than those assigned to the
role of learner, and hence experienced greater decisional control.
OBJECTIVE VERSUS EXPERIENCED CONTROL
This review has emphasized the fact that personal control can sometimes lead to
increased rather than decreased stress. However, the above experiment by Lewis
and Blanchard (1971) suggests the need to further refine our analysis and, in
particular, to distinguish between objective and experienced control. We have just
seen, for example, that it is not the objective range of choice which determines
whether or not a person experiences decisional control; rather, it is the degree to
which he agrees or identifies with the choices he does have, no matter how limited.
Indeed, as the analysis of Zimbardo (1969b) has indicated, too many response
options may lead to conflict, anomie, and feelings of helplessness, as well as to
positive feelings of freedom. Similarly, with regard to behavioral and cognitive
control, a person of limited competence might still experience considerable control
provided that his goals were not set beyond his capabilities. In short, although there
undoubtedly is a rough relationship between the experience of control and actual
control possibilities, the relationship is by no means one to one.
Might there not be a more direct relationship between stress and the subjective
experience of control than between stress and control objectively defined? At first,
an affirmative answer to this question appears plausible. There are difficulties,
however, which can be illustrated by an experiment by Hamsher, Geller, and Rotter
(1968). These investigators found that persons classified as "internals" on
behavioral criteria may describe themselves as "externals" on Rotter's (1966)
internal-external control scale. Hamsher et al. refer to this phenomenon as
"defensive externality." It indicates that a lack (or denial) of experienced control, far
from inevitably leading to stress, may actually be used as a defense against anxiety
(cf. also the common defense of giving in to "fate"). Of course, it could be argued
that by shifting responsibility from oneself to the environment, certain events
become more predictable, which in turn allows some degree of objective control.
However true this may be (and it probably does occur in some instances), such an
explanation makes untenable any claim for a direct relationship between stress
reactions and personal control, whether objective or experienced. That is, if the subjective experience of control is invoked to help account for the lack of a direct
relationship between objective control and stress, then objective control cannot be
invoked to save the presumed relationship between the subjective experience of
control and stress.
CONCLUDING OBSERVATIONS
From the foregoing review, it is evident that no simple relationship exists between
personal control and stress. About the only general statement which can be made
with confidence is that the stress-inducing or stress-reducing properties of personal
control depend upon the meaning of the control response for the individual; and
what lends a response meaning is largely the context in which it is embedded. For
example, neither the self-administration of a noxious stimulus nor the mere presence of a warning signal (the simplest kinds of behavioral and cognitive control, respectively) has much
influence on stress reactions. However, as a situation becomes more complex or ambiguous, regulated administration (or a warning signal) can serve as a vehicle for meaningful information regarding the nature and significance of the impending harm. The result is often, but not always, a decrease in stress reactions.
These rather mundane observations do not require much elaboration except to
point out one fact which seems to have been overlooked in much of the research on
personal control. Many control responses have their own meaning built into them,
so to speak. Thus, an aggressive response to an impending harm means something different from an escape response, although both might be equally effective in
eliminating the source of threat. Other than the main division of control into
behavioral, cognitive, and decisional varieties, little attempt has been made in this
review to consider the many qualitatively distinct ways in which a potentially
noxious stimulus might be controlled under natural conditions. However, it is
reasonable to assume that the experience of stress is determined, in part, by the
qualitative nature and meaning of the control response. This observation is
especially important in the case of behavioral control. Most laboratory studies have
allowed subjects only arbitrary and artificial responses (e.g., pressing a button,
solving mental arithmetic problems, and the like) in order to prevent or terminate a
noxious stimulus. Although such responses may be convenient from the experimenter's point of view, they probably have little inherent significance for the
subject. This fact is bound to influence the extent to which many laboratory studies
of personal control can be generalized to real-life situations.
