5
Learning
Sniffer Dogs on Call: Putting Learning
to Work in Japan
On March 11, 2011, a magnitude 9.0 undersea earthquake rocked the eastern coast of Japan. The Tohoku earthquake
was the most powerful ever to hit the island nation, causing 133-foot tsunami waves that led to the meltdown of
the Fukushima nuclear power plant. The quake’s damage killed over 15,000 people and left at least 300,000 homeless.
In the temblor’s aftermath, humanitarian aid poured in. In addition to the throngs of people who rushed to help,
battalions of rescue dogs were dispatched from around the world. Elite teams of humans and dogs from the United
Kingdom, Australia, New Zealand, the United States, South Korea, Russia, Mexico, Switzerland, and many other nations
reported for duty.
The “sniffer” dogs in these teams rely not only on their amazing canine olfactory abilities but also on the months
and years of laborious training they receive, geared toward locating survivors trapped under rubble. Indeed, rescue
dogs (some of which are themselves “rescued dogs”—that is, adopted from shelters) are rigorously trained animals
that have passed a set of strict criteria to earn a place on the special teams. In the United States, the Federal
Emergency Management Agency (FEMA) has established strict guidelines for the training of rescue dogs (FEMA, 2003).
The dogs must demonstrate mastery of a set of difficult skills, including walking off-leash with a trainer on a crowded
city street, without getting distracted, and performing search-and-rescue tasks without immediate practice and with-
out their regular trainer. They must demonstrate their abilities without food rewards (although a toy reward placed on
rubble is allowed). Further, these hardworking canines must be recertified every two years to ensure that their skills
remain at peak level. In the invaluable work they performed in Japan, the dogs not only helped with rescue efforts
but also raised everyone’s spirits with their tirelessness and persistence.
Truly, rescue dogs are nothing less than highly skilled professionals. You might well wonder how the dogs are
trained to perform these complex acts. It’s simple—through the principles that psychologists have uncovered in
studying learning, our focus in this chapter.
kin35341_ch05_166-200.indd Page 166 8/4/12 8:48 PM user-f502 /207/MH01822/kin35341_disk1of1/0078035341/kin35341_pagefiles
1
Types of Learning
Learning anything new involves change. Once you learned the alphabet, it did
not leave you; it became part of a “new you” who had been changed through
the process of learning. Similarly, once you learn how to drive a car, you
do not have to go through the process again at a later time. If you ever try
out for the X-Games, you may break a few bones along the way, but at
some point you probably will learn a trick or two through the experience,
changing from a novice to an enthusiast who can at least stay on top of a
skateboard.
By way of experience, too, you may have learned that you have to study to
do well on a test, that there usually is an opening act at a rock concert, and that a
field goal in U.S. football adds 3 points to the score. Putting these pieces together, we
arrive at a definition of learning: a systematic, relatively permanent change in behavior
that occurs through experience.
If someone were to ask you what you learned in class today, you might mention
new ideas you heard about, lists you memorized, or concepts you mastered. However,
how would you define learning if you could not refer to unobservable mental pro-
cesses? You might follow the lead of behavioral psychologists. Behaviorism is a
theory of learning that focuses on observable behaviors. From the behaviorist perspec-
tive, understanding the causes of behavior requires looking at the environmental
factors that produce them. Behaviorists view internal states like
thinking, wishing, and hoping as behaviors that are caused by
external factors as well. Psychologists who examine learning
from a behavioral perspective define learning as relatively sta-
ble, observable changes in behavior. The behavioral approach
has emphasized general laws that guide behavior change and
make sense of some of the puzzling aspects of human life
(Miltenberger, 2012).
Behaviorism maintains that the principles of learning are the
same whether we are talking about animals or humans. Because of
the influence of behaviorism, psychologists’ understanding of learn-
ing started with studies of rats, cats, pigeons, and even raccoons. A
century of research on learning in animals and in humans suggests
that many of the principles generated initially in research on animals
also apply to humans (Domjan, 2010).
In this chapter we look at two types of learning: associative
learning and observational learning. Associative learning occurs
when we make a connection, or an association, between two
events. Conditioning is the process of learning these associations
(Klein, 2009). There are two types of conditioning: classical and
operant, both of which have been studied by behaviorists.
learning
A systematic, relatively permanent change in behavior that occurs through experience.
behaviorism
A theory of learning that focuses solely on observable behaviors, discounting the importance of such mental activity as thinking, wishing, and hoping.
associative learning
Learning that occurs when an organism makes a connection, or an association, between two events.
“I didn’t actually catch anything, but I do feel I
gained some valuable experience.”
Used by permission of CartoonStock, www.CartoonStock.com.
Learning is RELATIVELY permanent: sometimes we forget what we’ve learned. Also, learning involves EXPERIENCE. Changes in behavior that result from physical maturation would not be considered learning.
This chapter begins by defining learning and sketching out its main types—associative
learning and observational learning. We then turn to two types of associative learning—
classical conditioning and operant conditioning—followed by a close look at observational
learning. We next probe into the role of cognitive processes in learning, before finally
considering biological, cultural, and psychological constraints on learning. As you read,
ask yourself about your own beliefs concerning learning. If a dog can learn to rescue
earthquake victims, surely the human potential for learning has barely been tapped.
In classical conditioning, organisms learn the association between two stimuli.
As a result of this association, organisms learn to anticipate events. For example,
lightning is associated with thunder and regularly precedes it. Thus, when we see
lightning, we anticipate that we will hear thunder soon afterward. Fans of horror
films know the power of classical conditioning. Watching one of the Friday the
13th movies, we find the tension building whenever we hear that familiar “Ch-ch-
ch—ch-ha-ha-ha-ha” that signals Jason’s arrival.
In operant conditioning, organisms learn the association between a behavior and a
consequence, such as a reward. As a result of this association, organisms learn to
increase behaviors that are followed by rewards and to decrease behaviors that are fol-
lowed by punishment. For example, children are likely to repeat their good manners if
their parents reward them with candy after they have shown good manners. Also, if
children’s bad manners are followed by scolding words and harsh glances by parents,
the children are less likely to repeat the bad manners. Figure 5.1 compares classical and
operant conditioning.
Much of learning, however, is not a result of direct consequences but rather of expo-
sure to models performing a behavior or skill. For instance, as you watch someone shoot
baskets, you get a sense of how it is done. The learning that takes place when a person
observes and imitates another’s behavior is called observational learning . Observational
learning is a common way that people learn in educational and other settings. Observa-
tional learning is different from the associative learning described by behaviorism because
it relies on mental processes: The learner has to pay attention, remember, and reproduce
what the model did. Observational learning is especially important to human beings. In
fact, watching other people is another way in which human infants acquire skills.
Human infants differ from baby monkeys in their strong reliance on imitation
(Bandura, 2010). After watching an adult model perform a task, a baby monkey will
figure out its own way to do it, but a human infant will do exactly what the model
did. Imitation may be the human baby’s way to solve the huge problem it faces:
to learn the vast amount of cultural knowledge that is part of human life. Many
of our behaviors are rather arbitrary. Why do we clap to show approval or
wave “hello” or “bye-bye”? The human infant has a lot to learn and may be
well served to follow the old adage “When in Rome, do as the Romans do.”
Learning applies to many areas of acquiring new behaviors, skills, and
knowledge (Bjork, Dunlosky, & Kornell, 2013; Mayer, 2011). Our focus in
this chapter is on the two types of associative learning—classical conditioning
and operant conditioning—and on observational learning.
observational learning
Learning that occurs through observing and imitating another’s behavior.
This is going to sound very abstract right now. Hang on: once we get to the details, it will make sense.
Have you ever noticed that humans’ eyes differ from other animals’ eyes because the “whites” can be seen? It might be that this characteristic allows humans to model one another closely, because we can see what the model is looking at.
FIGURE 5.1 Associative Learning: Comparing Classical and Operant Conditioning (Left) In this example of classical conditioning,
a child associates a doctor’s office (stimulus 1) with getting a painful injection (stimulus 2). (Right) In this example of operant conditioning, performing well in a
swimming competition (behavior) becomes associated with getting awards (consequences).
EXPERIENCE IT!
Learning and the Brain
1. Any situation that involves learning
A. requires some relatively permanent
change to occur.
B. requires a great deal of effort.
C. involves conscious determination.
D. is relatively automatic.
2. A cat that associates the sound of a can
opener with being fed has learned
through
A. behaviorism.
B. operant conditioning.
C. classical conditioning.
D. observational learning.
3. Which one of the following statements
is true about learning?
A. Learning can be accomplished only
by higher-level species, such as
mammals.
B. Learning is not permanent.
C. Learning occurs through experience.
D. Learning processes in humans are
distinct from learning processes in
animals.
APPLY IT! 4. After seeing dogs catch-
ing Frisbees in the park, Lionel decides that
he wants to teach his dog Ivan to do it too.
He takes Ivan to the park and sits with
him, making sure that he watches the other
dogs successfully catching Frisbees. What
technique is Lionel using on Ivan, and what
are the chances for success?
A. He is using associative learning, and
his chances for success are very good,
because dogs and humans both learn
this way.
B. He is using operant conditioning, and
his chances for success are very good,
because dogs and humans both learn
this way.
C. He is using observational learning, and
his chances for success are pretty bad,
because dogs are not as likely as people
to learn in this way.
D. He is using classical conditioning, and
his chances for success are pretty bad,
because dogs are much less likely than
people to learn in this way.
Early one morning, Bob is in the shower. While he showers, his wife enters the bathroom
and flushes the toilet. Scalding hot water bursts down on Bob, causing him to yell in
pain. The next day, Bob is back for his morning shower, and once again his wife enters
the bathroom and flushes the toilet. Panicked by the sound of the toilet flushing, Bob
yelps in fear and jumps out of the shower stream. Bob’s panic at the sound of the toilet
illustrates the learning process of classical conditioning, in which a neutral stimulus (the
sound of a toilet flushing) becomes associated with a meaningful stimulus (the pain of
scalding hot water) and acquires the capacity to elicit a similar response (panic).
Pavlov’s Studies
Even before beginning this course, you might have heard about Pavlov’s dogs. The
Russian physiologist Ivan Pavlov’s work is very well known. Still, it is easy to take its
true significance for granted. Importantly, Pavlov demonstrated that neutral aspects of
the environment can attain the capacity to evoke responses through pairing with other
stimuli and that bodily processes can be influenced by environmental cues.
In the early 1900s, Pavlov was interested in the way the body digests food. In his
experiments, he routinely placed meat powder in a dog’s mouth, causing the dog to
salivate. By accident, Pavlov noticed that the meat powder was not the only stimulus
that caused the dog to salivate. The dog salivated in response to a number of stimuli
associated with the food, such as the sight of the food dish, the sight of the individual
who brought the food into the room, and the sound of the door closing when the food
arrived. Pavlov recognized that the dog’s association of these sights and sounds with the
food was an important type of learning, which came to be called classical conditioning.
Pavlov wanted to know why the dog salivated in reaction to various sights and sounds
before eating the meat powder. He observed that the dog’s behavior included both
unlearned and learned components. The unlearned part of classical conditioning is based
on the fact that some stimuli automatically produce certain responses apart from any
prior learning; in other words, they are innate (inborn). Reflexes are such automatic
stimulus–response connections. They include salivation in response to food, nausea in
response to spoiled food, shivering in response to low temperature, coughing in response
to throat congestion, pupil constriction in response to light, and withdrawal in response
classical conditioning
Learning process in which a neutral stimulus becomes associated with an innately meaningful stimulus and acquires the capacity to elicit a similar response.
2
Classical Conditioning
EXPERIENCE IT!
Classical Conditioning
to pain. An unconditioned stimulus (US) is a stimulus that produces a response without
prior learning; food was the US in Pavlov’s experiments. An unconditioned response
(UR) is an unlearned reaction that is automatically elicited by the US. Unconditioned
responses are involuntary; they happen in response to a stimulus without conscious effort.
In Pavlov’s experiment, salivating in response to food was the UR.
In classical conditioning, a conditioned stimulus (CS) is a previously neutral stimu-
lus that eventually elicits a conditioned response after being paired with the uncondi-
tioned stimulus. The conditioned response (CR) is the learned response to the conditioned
stimulus that occurs after CS–US pairing (Pavlov, 1927). Sometimes conditioned
responses are quite similar to unconditioned responses, but typically they are not as
strong.
In studying a dog’s response to various stimuli associated with meat powder, Pavlov
rang a bell before giving meat powder to the dog. Until then, ringing the bell did not
have a particular effect on the dog, except perhaps to wake the dog from a nap.
The bell was a neutral stimulus. However, the dog began to associate the sound
of the bell with the food and salivated when it heard the bell. The bell had
become a conditioned (learned) stimulus (CS), and salivation was now a condi-
tioned response (CR). In the case of Bob’s interrupted shower, the sound of the
toilet flushing was the CS, and panicking was the CR after the scalding water
(US) and the flushing sound (CS) were paired. Figure 5.2 summarizes how classical
conditioning works.
Research has shown that salivation can be used as a conditioned response not only in
dogs and humans but also in, of all things, cockroaches. In one study, researchers
paired the smell of peppermint (the CS, which was applied to the cockroaches’
antennae) with sugary water (the US) (Watanabe & Mizunami, 2007). Cock-
roaches naturally salivate (the UR) in response to sugary foods, and after repeated
pairings between the peppermint smell and sugary water, the cockroaches salivated
in response to the peppermint scent (the CR). Collecting and measuring the cockroach
saliva, the researchers found that the cockroaches had slobbered over that scent for two
minutes.
ACQUISITION Whether it is human beings, dogs, or cockroaches, the first part of
classical conditioning is called acquisition. Acquisition is the initial learning of the
unconditioned stimulus (US)
A stimulus that produces a response without prior learning.
unconditioned response (UR)
An unlearned reaction that is automatically elicited by the unconditioned stimulus.
conditioned stimulus (CS)
A previously neutral stimulus that eventually elicits a conditioned response after being paired with the unconditioned stimulus.
conditioned response (CR)
The learned response to the conditioned stimulus that occurs after conditioned stimulus–unconditioned stimulus pairing.
acquisition
The initial learning of the connection between the unconditioned stimulus and the conditioned stimulus when these two stimuli are paired.
Pavlov (the white-bearded gentleman in the center) is shown demonstrating the nature of classical
conditioning to students at the Military Medical Academy in Russia.
Note that the association between food and salivating is natural (unlearned), while the association between a bell and salivating is learned.
Awesome addition to any résumé: Cockroach Saliva Technician.
connection between the US and CS when these two stimuli are paired (as with the
peppermint scent and the sugary water). During acquisition, the CS is repeatedly pre-
sented followed by the US. Eventually, the CS will produce a response. Note that clas-
sical conditioning is a type of learning that occurs without awareness or effort, based on
the presentation of two stimuli together. For this pairing to work, however, two important
factors must be present: contiguity and contingency.
Contiguity simply means that the CS and US are presented very close together in
time—even a mere fraction of a second (Wheeler & Miller, 2008). In Pavlov’s work, if
the bell had rung 20 minutes before the presentation of the food, the dog probably would
not have associated the bell with the food. However, pairing the CS and US close together
in time is not all that is needed for conditioning to occur.
Contingency means that the CS must not only precede the US closely in time, it must
also serve as a reliable indicator that the US is on its way (Rescorla, 1966, 1988, 2009).
To get a sense of the importance of contingency, imagine that the dog in Pavlov’s exper-
iment is exposed to a ringing bell at random times all day long. Whenever the dog
receives food, the delivery of the food always immediately follows a bell ring. However,
in this situation, the dog will not associate the bell with the food, because the bell is not
a reliable signal that food is coming: It rings a lot when no food is on the way. Whereas
contiguity refers to the fact that the CS and US occur close together in time, contingency
refers to the information value of the CS relative to the US. When contingency is pres-
ent, the CS provides a systematic signal that the US is on its way.
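The difference between contiguity alone and genuine contingency can be sketched numerically. The snippet below is our illustration, not part of the text: it computes a "delta P" measure sometimes used to quantify contingency, the probability of the US given the CS minus the probability of the US given no CS. The function name and trial format are our own assumptions.

```python
# Illustrative sketch: contingency as "delta P" = P(US | CS) - P(US | no CS).
# A CS carries information about the US only when this difference is
# substantially greater than zero.

def contingency(trials):
    """trials: list of (cs_present, us_present) pairs, one per time window."""
    with_cs = [us for cs, us in trials if cs]
    without_cs = [us for cs, us in trials if not cs]
    p_us_given_cs = sum(with_cs) / len(with_cs)
    p_us_given_no_cs = sum(without_cs) / len(without_cs)
    return p_us_given_cs - p_us_given_no_cs

# The bell reliably precedes food: food always follows it, never arrives otherwise.
reliable = [(True, True)] * 10 + [(False, False)] * 10

# The bell rings at random all day; food arrives just as often without it.
random_bell = ([(True, True)] * 5 + [(True, False)] * 5 +
               [(False, True)] * 5 + [(False, False)] * 5)

print(contingency(reliable))     # 1.0 -> bell is a perfect signal; conditioning occurs
print(contingency(random_bell))  # 0.0 -> bell has no information value; no conditioning
```

In both scenarios the CS and US sometimes occur close together in time (contiguity), but only in the first does the CS reliably predict the US (contingency).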
GENERALIZATION AND DISCRIMINATION Pavlov found that the dog
salivated in response not only to the bell tone but also to other sounds, such as a whistle.
These sounds had not been paired with the unconditioned stimulus of the food. Pavlov
FIGURE 5.2 Pavlov’s Classical Conditioning In one experiment, Pavlov presented a neutral stimulus (bell) just before an unconditioned stimulus
(food). The neutral stimulus became a conditioned stimulus by being paired with the unconditioned stimulus. Subsequently, the conditioned stimulus (bell) by itself
was able to elicit the dog’s salivation.
discovered that the more similar the noise was to the original
sound of the bell, the stronger was the dog’s salivary flow.
Generalization in classical conditioning is the tendency
of a new stimulus that is similar to the original conditioned
stimulus to elicit a response that is similar to the conditioned
response (April, Bruce, & Galizio, 2011; Harris, Andrew, &
Livesey, 2012). Generalization has value in preventing learn-
ing from being tied to specific stimuli. For example, once
you learn the association between a given CS (say, flashing
police lights behind your car) and a particular US (the dread
associated with being pulled over), you do not have to learn
it all over again when a similar stimulus presents itself (a
police car with its siren moaning as it cruises directly behind
your car).
Stimulus generalization is not always beneficial. For example, the cat that general-
izes from a harmless minnow to a dangerous piranha has a major problem; therefore,
it is important to also discriminate among stimuli. Discrimination in classical con-
ditioning is the process of learning to respond to certain stimuli and not others. To
produce discrimination, Pavlov gave food to the dog only after ringing the bell and
not after other sounds. In this way, the dog learned to distinguish between the bell
and other sounds.
EXTINCTION AND SPONTANEOUS RECOVERY After conditioning the
dog to salivate at the sound of a bell, Pavlov rang the bell repeatedly in a single
session and did not give the dog any food. Eventually the dog stopped salivating. This
result is extinction, which in classical conditioning is the weakening of the condi-
tioned response when the unconditioned stimulus is absent (Joscelyne & Kehoe,
2007). Without continued association with the US, the CS loses its power to produce
the CR.
Extinction is not always the end of a
conditioned response (Urcelay, Wheeler, &
Miller, 2009). The day after Pavlov extin-
guished the conditioned salivation to the
sound of a bell, he took the dog to the labora-
tory and rang the bell but still did not give
the dog any meat powder. The dog salivated,
indicating that an extinguished response can
spontaneously recur. Spontaneous recovery
is the process in classical conditioning
by which a conditioned response can recur
after a time delay, without further condition-
ing (Gershman, Blei, & Niv, 2010). Consider
an example of spontaneous recovery you
may have experienced: You thought that
you had forgotten about (extinguished) an
ex-girlfriend or boyfriend, but then you found
yourself in a particular context (perhaps
the restaurant where you always dined
together), and you suddenly got a mental
image of your ex, accompanied by an emo-
tional reaction to him or her from the past
(spontaneous recovery).
Figure 5.3 shows the sequence of acquisi-
tion, extinction, and spontaneous recovery.
Spontaneous recovery can occur several times,
but as long as the conditioned stimulus is
generalization (classical conditioning)
The tendency of a new stimulus that is similar to the original conditioned stimulus to elicit a response that is similar to the conditioned response.
discrimination (classical conditioning)
The process of learning to respond to certain stimuli and not others.
extinction (classical conditioning)
The weakening of the conditioned response when the unconditioned stimulus is absent.
spontaneous recovery
The process in classical conditioning by which a conditioned response can recur after a time delay, without further conditioning.
Used by permission of CartoonStock, www.CartoonStock.com.
FIGURE 5.3 The Strength of a Classically Conditioned
Response During Acquisition, Extinction, and Spontaneous
Recovery During acquisition, the conditioned stimulus and unconditioned
stimulus are associated. As the graph shows, when this association occurs, the
strength of the conditioned response increases. During extinction, the conditioned
stimulus is presented alone, and, as can be seen, the result is a decrease in
the conditioned response. After a rest period, spontaneous recovery appears,
although the strength of the conditioned response is not nearly as great at this
point as it was after a number of CS–US pairings. When the CS is presented
alone again, after spontaneous recovery, the response is extinguished rapidly.
presented alone (that is, without the unconditioned stimulus), spontaneous
recovery becomes weaker and eventually ceases.
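The rise and fall of the conditioned response shown in Figure 5.3 can be approximated with a simple error-correction rule. The sketch below uses a Rescorla-Wagner-style update as an illustration; this is our assumption (the text describes the curves only qualitatively, and this simple model does not itself produce spontaneous recovery), and the function names and parameter values are hypothetical.

```python
# Sketch of acquisition and extinction via an error-correction update:
# on each trial, associative strength v moves a fraction (alpha) of the
# way toward the outcome's target value.

def run_trials(v, n_trials, us_present, alpha=0.3, lam=1.0):
    """Return the trial-by-trial associative strength of the CS."""
    history = []
    target = lam if us_present else 0.0  # US present: learn toward lam; absent: toward 0
    for _ in range(n_trials):
        v += alpha * (target - v)        # prediction-error update
        history.append(v)
    return history

acquisition = run_trials(0.0, 20, us_present=True)              # CS-US paired
extinction = run_trials(acquisition[-1], 20, us_present=False)  # CS presented alone

print(round(acquisition[-1], 3))  # near 1.0: conditioned response at full strength
print(round(extinction[-1], 3))   # near 0.0: conditioned response extinguished
```

Strength climbs steeply during early CS–US pairings and decays during CS-alone trials, matching the qualitative shape of the acquisition and extinction curves in Figure 5.3.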
Classical Conditioning in Humans
Classical conditioning has a great deal of survival value for human beings
(Powell & Honey, 2013). Here we review examples of classical condition-
ing at work in human life.
EXPLAINING FEARS Classical conditioning provides an explanation
of fears (Amano & others, 2011; Hawkins-Gilligan, Dygdon, & Conger,
2011). John B. Watson (who coined the term behaviorism) and Rosalie
Rayner (1920) demonstrated classical conditioning’s role in the develop-
ment of fears with an infant named Albert. They showed Albert a white
laboratory rat to see whether he was afraid of it. He was not (so the rat was
a neutral stimulus or CS). As Albert played with the rat, the researchers
sounded a loud noise behind his head (the noise was then the US). The
noise caused little Albert to cry (the UR). After only seven pairings of the
loud noise with the white rat, Albert began to fear the rat even when the
noise was not sounded (the CR). Albert’s fear was generalized to a rabbit,
a dog, and a sealskin coat.
Today, Watson and Rayner’s (1920) study would violate the ethical guide-
lines of the American Psychological Association (see Chapter 1). In any case,
Watson correctly concluded that we learn many of our fears through classical conditioning.
We might develop fear of the dentist because of a painful experience, fear of driving after
having been in a car crash, and fear of dogs after having been bitten by one.
If we can learn fears through classical conditioning, we also can possibly unlearn them
through that process (Tronson & others, 2012; Vetere & others, 2011). In Chapter 13, for
example, we will examine the application of classical conditioning to therapies for treat-
ing phobias.
BREAKING HABITS Psychologists have applied classical conditioning to helping
individuals unlearn certain feelings and behaviors. For example, counterconditioning is
a classical conditioning procedure for changing the relationship between a conditioned
stimulus and its conditioned response. Therapists have used counterconditioning to break
the association between certain stimuli and positive feelings (Kerkhof & others, 2011).
Aversive conditioning is a form of treatment that involves repeated pairings of a stim-
ulus with a very unpleasant stimulus. Electric shocks and nausea-inducing substances are
examples of noxious stimuli that are used in aversive conditioning (A. R. Brown & others,
2011). In a treatment to reduce drinking, for example, every time a person drinks an alco-
holic beverage, he or she also consumes a mixture that induces nausea. In classical condi-
tioning terminology, the alcoholic beverage is the conditioned stimulus and the
nausea-inducing agent is the unconditioned stimulus. Through a repeated pairing of alcohol
with the nausea-inducing agent, alcohol becomes the conditioned stimulus that elicits nau-
sea, the conditioned response. As a consequence, alcohol no longer is associated with
something pleasant but rather something highly unpleasant. Antabuse, a drug treatment for
alcoholism since the late 1940s, is based on this association (Ullman, 1952). When some-
one takes this drug, ingesting even the smallest amount of alcohol will make the person
quite ill, even if the exposure to the alcohol is through mouthwash or cologne. Antabuse
continues to be used in the treatment of alcoholism today (Baser & others, 2011).
CLASSICAL CONDITIONING AND THE PLACEBO EFFECT Chapter 1
defined the placebo effect as the effect of a substance (such as a pill taken orally) or
procedure (such as using a syringe to inject a fluid) that researchers use as a control to
counterconditioning
A classical conditioning procedure for changing the relationship between a conditioned stimulus and its conditioned response.
aversive conditioning
A form of treatment that consists of repeated pairings of a stimulus with a very unpleasant stimulus.
Watson and Rayner conditioned
11-month-old Albert to fear a white rat
by pairing the rat with a loud noise.
When little Albert was later presented
with other stimuli similar to the rat, such
as the rabbit shown here with Albert,
he was afraid of them too. This study
illustrates stimulus generalization in
classical conditioning.
identify the actual effects of a treatment. Placebo effects are observable changes (such
as a drop in pain) that cannot be explained by the effects of an actual treatment. The
principles of classical conditioning help to explain some of these effects (Hyland, 2011).
In this case, the pill or syringe serves as a CS, and the actual drug is the US. After the
experience of pain relief following the consumption of a drug, for instance, the pill or
syringe might lead to a CR of lowered pain even in the absence of actual painkiller. The
strongest evidence for the role of classical conditioning on placebo effects comes from
research on the immune system and the endocrine system.
CLASSICAL CONDITIONING AND THE IMMUNE AND ENDOCRINE
SYSTEMS Even the human body’s internal organ systems can be classically con-
ditioned. The immune system is the body’s natural defense against disease. Robert
Ader and Nicholas Cohen have conducted a number of studies that reveal that classical conditioning can produce immunosuppression, a decrease in the production of antibodies, which can lower a person's ability to fight disease (Ader, 2000; Ader & Cohen,
1975, 2000).
The initial discovery of the link between classical conditioning and immunosuppres-
sion came as a surprise. In studying classical conditioning, Ader (1974) was examining
how long a conditioned response would last in some laboratory rats. He paired a condi-
tioned stimulus (saccharin solution) with an unconditioned stimulus, a drug called
Cytoxan, which induces nausea. Afterward, while giving the rats saccharin-laced water
without the accompanying Cytoxan, Ader watched to see how long it would take the rats
to forget the association between the two.
Unexpectedly, in the second month of the study, the rats developed a disease and
began to die off. In analyzing this unforeseen result, Ader looked into the properties
of the nausea-inducing drug he had used. He discovered that one of its side effects
was immunosuppression. It turned out that the rats had been classically conditioned
to associate sweet water not only with nausea but also with the shutdown of the
immune system. The sweet water apparently had become a conditioned stimulus for
immunosuppression.
Researchers have found that conditioned immune responses also occur in humans
(Goebel & others, 2002; Olness & Ader, 1992; Schedlowski & Pacheco-Lopez, 2010).
For example, in one study, patients with multiple sclerosis were given a flavored drink
prior to receiving a drug that suppressed the immune system. After this pairing, the
flavored drink by itself lowered immune functioning, similarly to the drug (Giang &
others, 1996).
Similar results have been found for the endocrine system. Recall from Chapter 2
that the endocrine system is a loosely organized set of glands that produce and
circulate hormones. Research has shown that placebo pills can influence the
secretion of hormones if patients had previous experiences with pills contain-
ing actual drugs that affected hormone secretion (Benedetti & others, 2003).
Studies have revealed that the sympathetic nervous system (the part of the
autonomic nervous system that responds to stress) plays an important role in the
learned associations between conditioned stimuli and immune and endocrine functioning
(Saurer & others, 2008).
TASTE AVERSION LEARNING Consider this scenario. Mike goes out for sushi
with some friends and eats tekka maki (tuna roll), his favorite dish. He then proceeds
to a jazz concert. Several hours later, he becomes very ill with stomach pains and nau-
sea. A few weeks later, he tries to eat tekka maki again but cannot stand it. Importantly,
Mike does not experience an aversion to jazz, even though he attended the jazz concert
that night before getting sick. Mike's experience exemplifies taste aversion: a special
kind of classical conditioning involving the learned association between a particular
taste and nausea (Davis & Riley, 2010; Garcia & Koelling, 1966; Kwok & Boakes, 2012;
Scott, 2011).
This is pretty wild. Your body is learning things without your even noticing it.
Taste aversion is special because it typically requires
only one pairing of a neutral stimulus (a taste) with the
unconditioned response of nausea to seal that connection,
often for a very long time. As we consider later, it is
highly adaptive to learn taste aversion in only one trial.
Consider what would happen if an animal required mul-
tiple pairings of a taste with poison. It would likely not
survive the acquisition phase. It is notable, though, that
taste aversion can occur even if the taste experience had
nothing to do with getting sick—perhaps, in Mike's case,
he was simply coming down with a stomach bug. Taste
aversion can even occur when a person has been sickened
by a completely separate event, such as being spun
around in a chair (Klosterhalfen & others, 2000). Although
taste aversion is often considered an exception to the rules
of learning, Michael Domjan (2005) has suggested that
this form of learning demonstrates how classical condi-
tioning works in the natural world, where associations
matter to survival.
Taste aversion learning is especially important in the
context of the traditional treatment of some cancers. Radi-
ation and chemotherapy for cancer often produce nausea in patients, with the
result that individuals sometimes develop strong aversions to foods they
ingest prior to treatment (Holmes, 1993; Jacobsen & others, 1993). Con-
sequently, they may experience a general tendency to be turned off by
food, a situation that can lead to nutritional deficits (Mahmoud & others,
2011).
Researchers have used classical conditioning principles to combat
these taste aversions, especially in children, for whom antinausea med-
ication is often ineffective (Skolin & others, 2006) and for whom aversion
to protein-rich food is particularly problematic (Ikeda & others, 2006). Early
studies demonstrated that giving children a "scapegoat" conditioned stimulus
prior to chemotherapy would help limit the taste aversion to only one flavor
(Broberg & Bernstein, 1987). For example, children might be given a particular
flavor of Lifesaver candy or ice cream before receiving treatment.
For these children, the nausea would be more strongly associated with the
Lifesaver or the ice cream flavor than with the foods they needed to eat for
good nutrition.
DRUG HABITUATION Chapter 4 noted how, over time, a person might
develop a tolerance for a psychoactive drug and need a higher and higher dose of the
substance to get the same effect. Classical conditioning helps to explain habituation,
which refers to the decreased responsiveness to a stimulus after repeated presentations.
A mind-altering drug is an unconditioned stimulus (US): It naturally produces a response
in the person’s body. This unconditioned stimulus is often paired systematically with a
previously neutral stimulus (CS). For instance, the physical appearance of the drug in
a pill or syringe, and the room where the person takes the drugs, are conditioned stim-
uli that are paired with the US of the drug. These repeated pairings should produce a
conditioned response, and they do—but it is different from those we have considered
so far.
The conditioned response to a drug can be the body’s way of preparing for the effects
of a drug (Rachlin & Green, 2009). In this case, the body braces itself for the drug effects
with a conditioned response (CR) that is the opposite of the unconditioned response
(UR). For instance, if the drug (the US) leads to an increase in heart rate (the UR), the
CR might be a drop in heart rate. The CS—the previously neutral stimulus—serves as
a warning that the drug is coming, and the CR in this case is the body’s compensation
habituation: Decreased responsiveness to a stimulus after repeated presentations.
The U.S. Fish and Wildlife Service is trying out taste aversion
as a tool to prevent Mexican gray wolves from preying on
cattle. To instill taste aversion for beef, the agency is
deploying bait made of beef and cowhide that also
contains odorless and flavorless substances that induce
nausea (Bryan, 2012). The hope is that wolves that are
sickened by the bait will no longer prey on cattle and might
even rear their pups to enjoy alternative meals.
Remember, in taste aversion, the taste or flavor is the CS; the agent that made the person sick (it could be a rollercoaster ride or salmonella, for example) is the US; nausea or vomiting is the UR; and taste aversion is the CR.
These results show discrimination in classical conditioning: the kids developed aversions only to the specific scapegoat flavors.
for the drug’s effects (Figure 5.4). In this situation the conditioned response works to
decrease the effects of the unconditioned stimulus, making the drug experience less
intense. Some drug users try to prevent habituation by varying the physical location of
where they take the drug.
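This compensatory-response account can be captured in a toy simulation. The sketch below is illustrative only, not a model from this chapter: the learning rate, effect size, and update rule are invented for the example. Each pairing of the familiar context (CS) with the drug (US) strengthens a compensatory response that subtracts from the drug's raw effect, so the net effect shrinks with repeated doses in that context.

```python
# Toy model of a conditioned compensatory response (illustrative only;
# the learning rate and effect size are invented for this sketch).

def simulate(pairings, learning_rate=0.2, drug_effect=10.0):
    """Net drug effect on each trial, all taken in the same familiar context."""
    compensation = 0.0            # strength of the compensatory CR
    net_effects = []
    for _ in range(pairings):
        net_effects.append(drug_effect - compensation)
        # Each CS-US pairing strengthens the compensatory CR a little more.
        compensation += learning_rate * (drug_effect - compensation)
    return net_effects

effects = simulate(10)
print(round(effects[0], 2))   # first dose in the familiar context: 10.0
print(round(effects[-1], 2))  # tenth dose: only 1.34 (tolerance)
```

In a novel context there is no conditioned compensatory response, so the same dose lands at full strength, which is the logic behind the overdose findings discussed next.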
This aspect of drug use can play a role in deaths caused by drug overdoses. How might
classical conditioning be involved? A user typically takes a drug in a particular setting,
such as a bathroom, and acquires a conditioned response to this location (Siegel, 1988).
Because of classical conditioning, as soon as the drug user walks into the bathroom, his
or her body begins to prepare for and anticipate the drug dose in order to lessen the effect
of the drug. However, if the user takes the drug in a location other than the usual one, such
as at a rock concert, the drug’s effect is greater because no conditioned responses have
built up in the new setting, and therefore the body is not prepared for the drug. In cases
in which heroin causes death, researchers often have found that the individuals took the drug under unusual circumstances, at a different time, or in a different place relative to the context in which they usually took the drug (Marlow, 1999). In these cases, with no CS signal, the body is unprepared for (and tragically overwhelmed by) the drug's effects.
PSYCHOLOGY IN OUR WORLD
Marketing Between the Lines
Classical conditioning is the foundation for many of the commercials bombarding us daily.
(Appropriately, when John Watson left the field of psychology, he went on to advertising.)
Think about it: Advertising involves creating an association between a product and pleasant
feelings (buy that Caffè Misto grande and be happy). Watching TV,
you can see how advertisers cunningly apply classical condition-
ing principles to consumers by showing ads that pair something
pleasant (a US) with a product (a CS) in hopes that you, the
viewer, will experience those positive feelings toward the product
(CR). You might have seen that talking baby (US) trying to get
viewers to sign up and buy stocks through E*TRADE (CS). Adver-
tisers continue to exploit classical conditioning principles—for
instance, through the technique of product placement, or what is
known as embedded marketing.
This is how embedded marketing works. Viewing a TV show or
movie, you notice that a character is drinking a particular brand
of soft drink or eating a particular type of cereal. By placing
their products in the context of a show or movie you like, advertisers are hoping that your
positive feelings about the show, movie plot, or a character (the UR) carry over to their product
(the CS). Sure, it may seem like a long shot—but all they need to do is enhance the chances that,
say, navigating through a car dealership or a grocery store, you will feel attracted to their product.
Consider Sheldon from The Big Bang Theory freaking out after handling a snake and shrieking, "Purell!
Purell! Purell!” and the contestants on The Biggest Loser getting weighed in front of a big Subway
sandwich sign. Embedded marketing is also in evidence in Mission: Impossible—Ghost Protocol
(Apple laptops and BMWs), The Adventures of Tintin (Purina Dog Chow, Amtrak, Alouette Cheese),
and The Girl with the Dragon Tattoo (the Sweden-based Wayne's Coffee). And fans of the sitcom The
Office might recognize that Jim classically conditioned Dwight Schrute with breath mints, modeling
Pavlov’s work, as you can check out on YouTube. This pop culture moment explicitly demonstrated
classical conditioning while also using classical conditioning in product placement for those
curiously strong mints, Altoids.
FIGURE 5.4 Drug Habituation Classical conditioning is involved in drug habituation. As a result of conditioning, the drug user needs to take more of the drug to get the same effect as before the conditioning. Moreover, if the user takes the drug without the usual conditioned stimulus or stimuli (represented in the middle panel by the bathroom and the drug tablets), overdosing is likely.
US: The psychoactive drug is an unconditioned stimulus (US) because it naturally produces a response in a person's body.
CS: The appearance of the drug tablets and the room where the person takes the drug are conditioned stimuli (CS) that are paired with the drug (US).
US + CS → CR: The body prepares to receive the drug in the room. Repeated pairings of the US and CS have produced a conditioned response (CR).
1. Pavlov’s dog salivates each time it hears
a bell. Now, after several trials of sali-
vating to the bell and not receiving
any food, the dog stops salivating. The
explanation is that
A. the dog realizes that the bell is not
food.
B. extinction has occurred.
C. the contingency loop has been
disrupted.
D. spontaneous recovery has not been
triggered.
2. A young boy goes to the zoo for the first
time with his father and sister. While he
is looking at a bird display, his sister
sneaks up on him and startles him. He
becomes very frightened, and now when
he sees birds outside or on TV, he cries.
The unconditioned response is
A. fear.
B. birds.
C. being startled by his sister.
D. going to the zoo.
3. A dog has learned to associate a small
blue light coming on with being fed.
Now, however, when a small light of any
color comes on, the dog salivates. The
reason is
A. extinction.
B. discrimination.
C. counterconditioning.
D. generalization.
APPLY IT! 4. Jake, a college student,
goes out to eat with friends at a local
Mexican restaurant and orders his favorite
food, bean and cheese tamales. Jake and
his friends are all dressed in fraternity
T-shirts, and they spend the night talking
about an upcoming charity event. When
he gets home, Jake feels horribly ill and
vomits through the night. Later he finds
out that a lot of people in his frat also
were sick and that apparently everyone had
picked up a stomach bug.
Consider this as an example of classical
conditioning. Based on the description of
Jake’s experience and your knowledge of
classical conditioning, which of the follow-
ing would you predict to happen in the
future?
A. Jake will probably feel pretty sick the
next time he puts on his frat T-shirt.
B. Jake will probably feel pretty sick the
next time someone offers him tamales.
C. Jake will probably feel pretty sick at the
charity event.
D. Jake should have no trouble eating ta-
males in the future, because he learned
that a stomach bug, not the tamales,
made him sick.
Recall from early in the chapter that classical conditioning and operant conditioning are
forms of associative learning, which involves learning that two events are connected. In
classical conditioning, organisms learn the association between two stimuli (US and CS).
Classical conditioning is a form of respondent behavior, behavior that occurs in automatic
response to a stimulus such as a nausea-producing drug, and later to a conditioned
stimulus such as sweet water that was paired with the drug.
Classical conditioning explains how neutral stimuli become associated with unlearned,
involuntary responses. Classical conditioning is not as effective, however, in explaining
voluntary behaviors such as a student's studying hard for a test, a gambler's playing slot
machines in Las Vegas, or a dog's searching for and finding his owner's lost cell phone.
Operant conditioning is usually much better than classical conditioning at explaining
such voluntary behaviors.
3
Operant Conditioning
Defining Operant
Conditioning
Operant conditioning (or instrumental
conditioning) is a form of associative
learning in which the consequences of a
behavior change the probability of the
behavior’s occurrence. The American
psychologist B. F. Skinner (1938) devel-
oped the concept of operant conditioning.
Skinner chose the term operant to
describe the behavior of the organism.
According to Skinner, an operant behav-
ior occurs spontaneously, and the conse-
quences that follow such a behavior
determine whether it will be repeated.
Imagine, for example, that you spon-
taneously decide to take a different
route while driving to campus one day.
You are more likely to repeat that route
on another day if you have a pleasant
experience, for instance arriving at
school faster or finding a great new coffee place to try, than if you have a lousy
experience such as getting stuck in traffic. In either case, the consequences of your
spontaneous act influence whether that behavior happens again.
Recall that contingency is an important aspect of classical conditioning in which
the occurrence of one stimulus can be predicted from the presence of another one.
Contingency also plays a key role in operant conditioning. For example, when a rat
pushes a lever (behavior) that delivers food, the delivery of food (consequence) is
contingent on that behavior. This principle of contingency helps explain why passersby
should never praise, pet, or feed a service dog while he is working (at least
without asking first). Providing rewards during such times might interfere with the
dog’s training.
Thorndike's Law of Effect
Although Skinner emerged as the primary figure in operant conditioning, the experiments
of E. L. Thorndike (1898) established the power of consequences in determining volun-
tary behavior. At about the same time that Pavlov was conducting classical conditioning
experiments with salivating dogs, Thorndike, an American psychologist, was studying
cats in puzzle boxes. Thorndike put a hungry cat inside a box and placed a piece
of fish outside. To escape from the box and obtain the food, the cat had to learn
to open the latch inside the box. At first the cat made a number of ineffective
responses. It clawed or bit at the bars and thrust its paw through the openings.
Eventually the cat accidentally stepped on the lever that released the door bolt.
When the cat returned to the box, it went through the same random activity until
it stepped on the lever once more. On subsequent trials, the cat made fewer and
fewer random movements until finally it immediately stepped on the lever to open
the door (Figure 5.5). Thorndike’s resulting law of effect states that behaviors followed
by satisfying outcomes are strengthened and that behaviors followed by frustrating out-
comes are weakened (P. L. Brown & Jenkins, 2009).
The law of effect is important because it presents the basic idea that the consequences
of a behavior influence the likelihood of that behavior's recurrence. Quite simply, a behavior
operant conditioning (instrumental conditioning): A form of associative learning in which the consequences of a behavior change the probability of the behavior's occurrence.
law of effect: Thorndike's law stating that behaviors followed by positive outcomes are strengthened and that behaviors followed by negative outcomes are weakened.
The law of effect lays the foundation for operant conditioning. What happens after a given behavior determines whether the behavior will be repeated. In 1898? You go, Thorndike!
can be followed by something good or something bad, and the probability of a behavior's
being repeated depends on these outcomes. As we now explore, Skinner’s operant condi-
tioning model expands on this idea.
Skinner's Approach to Operant Conditioning
Skinner believed that the mechanisms of learning are
the same for all species. This conviction led him to
study animals in the hope that he could discover the
components of learning with organisms simpler than
humans, including pigeons. During World War II, Skinner
trained pigeons to pilot missiles. Although top navy
officials just could not accept pigeons piloting their
missiles in a war, Skinner congratulated himself on the
degree of control he was able to exercise over the
pigeons (Figure 5.6).
Skinner and other behaviorists made every effort to
study organisms under precisely controlled conditions so
that they could examine the connection between the
operant behavior and the specific consequences in minute
detail (Powell & Honey, 2013). One of Skinner’s cre-
ations in the 1930s to control experimental conditions
was the operant conditioning chamber, sometimes called
a Skinner box (Figure 5.7). A device in the box delivered
food pellets into a tray at random. After a rat became
accustomed to the box, Skinner installed a lever and
observed the rat’s behavior. As the hungry rat explored
the box, it occasionally pressed the lever, and a food
pellet was dispensed. Soon the rat learned that the
FIGURE 5.5 Thorndike's Puzzle Box and the Law of Effect (Left) A box typical of the puzzle boxes Thorndike used in his experiments
with cats to study the law of effect. Stepping on the treadle released the door bolt; a weight attached to the door then pulled the door open and allowed
the cat to escape. After accidentally pressing the treadle as it tried to get to the food, the cat learned to press the treadle when it wanted to escape the
box. (Right) One cat's learning curve over 24 separate trials. Notice that the cat escaped much more quickly after about five trials. It had learned the
consequences of its behavior.
[Graph: time to escape (seconds) plotted across 24 trials]
FIGURE 5.6 Skinner’s Pigeon-Guided Missile
Skinner wanted to help the military during World War II by using
pigeons’ tracking behavior. A gold electrode covered the tip of the
pigeons’ beaks. Contact with the screen on which the image of the
target was projected sent a signal informing the missile’s control
mechanism of the target’s location. A few grains of food occasionally
given to the pigeons maintained their tracking behavior.
consequences of pressing the lever were positive: It would be
fed. Skinner achieved further control by soundproofing the
box to ensure that the experimenter was the only influence
on the organism. In many of the experiments, the
responses were mechanically recorded, and the food
(the consequence) was dispensed automatically. These
precautions aimed to prevent human error.
Shaping
Imagine trying to teach even a really smart dog how to do the
laundry. The challenge might seem insurmountable, as it is
quite unlikely that a dog will spontaneously start putting the
clothes in the washing machine. You could wait a very long
time for such a feat to happen. It is possible, however, to train
a dog or another animal to perform highly complex tasks
through the process of shaping.
Shaping refers to rewarding successive approximations of
a desired behavior (Slater & Dymond, 2011). For example,
shaping can be used to train a rat to press a bar to obtain food.
When a rat is first placed in the conditioning box, it rarely
presses the bar. Thus, the experimenter may start off by giving
the rat a food pellet if it is in the same half of the cage as the
bar. Then the experimenter might reward the rat’s behavior
only when it is within 2 inches of the bar, then only when it
touches the bar, and nally only when it presses the bar.
Returning to the service dog, rather than waiting for the dog spontaneously to put the
clothes in the washing machine, we might reward the dog for carrying the clothes to the
laundry room and for bringing them closer and closer to the washing machine. Finally,
we might reward the dog only when it gets the clothes inside the washer. Indeed, train-
ers use this type of shaping technique extensively in teaching animals to perform tricks.
A dolphin that jumps through a hoop held high above the water has been trained to
perform this behavior through shaping.
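The successive-approximation procedure just described can be sketched as a small simulation. Everything in this sketch is invented for illustration (the positions, step sizes, and the "pull" that reward exerts); it simply shows a criterion that tightens in stages, with reward delivered whenever the animal's behavior falls within the current criterion, as in shaping.

```python
import random

# Toy shaping loop (illustrative; all numbers are invented for the sketch).
# The "rat" wanders along one dimension; the bar sits at position 0. Reward
# is delivered whenever the rat is within the current criterion distance of
# the bar, and each stage tightens the criterion.

random.seed(0)  # fixed seed so the run is repeatable

def shape(criteria=(25.0, 10.0, 2.0, 0.0), trials_per_stage=200):
    position = 20.0                 # the first criterion is deliberately easy
    rewards = 0
    for criterion in criteria:      # successively stricter requirements
        for _ in range(trials_per_stage):
            position = max(position + random.uniform(-5, 5), 0.0)
            if position <= criterion:
                rewards += 1
                position *= 0.8     # reward keeps the rat near the bar
    return rewards

print(shape() > 0)  # rewards keep arriving as the criterion tightens
```

The exact reward count is not meaningful; the point is that an easy initial criterion (being in the right half of the cage) earns reward immediately, and each reward nudges behavior close enough for the next, stricter stage.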
Operant conditioning relies on the notion that a behavior is likely to be repeated if
it is followed by a reward. A reasonable question is, what makes a reinforcer reward-
ing? Recent research reveals consider-
able interest in discovering the links
between brain activity and operant con-
ditioning (Bueno & Bueno, 2011; Darcq
& others, 2011).
Principles of
Reinforcement
We noted earlier that a behavior can be
followed by something good or some-
thing bad. Reinforcement refers to
those good things that follow a behavior.
Reinforcement is the process by which a
stimulus or event (a reinforcer ) following
a particular behavior increases the prob-
ability that the behavior will happen
shaping: Rewarding successive approximations of a desired behavior.
reinforcement: The process by which a stimulus or an event (a reinforcer) following a particular behavior increases the probability that the behavior will happen again.
FIGURE 5.7 Skinner’s Operant
Conditioning Chamber B. F. Skinner
conducts an operant conditioning study in his
behavioral laboratory. The rat being studied is
in an operant conditioning chamber, sometimes
referred to as a Skinner box.
These human errors might have included cheering the rat on or rewarding him just because the experimenter felt bad for the hungry little guy.
Through operant conditioning, animal trainers can coax some amazing behaviors
from their star performers.
again. Such consequences of a behavior fall into two types, called positive
reinforcement and negative reinforcement. Both types of consequences
increase the frequency of a behavior.
POSITIVE AND NEGATIVE REINFORCEMENT In positive
reinforcement, the frequency of a behavior increases because it is fol-
lowed by the presentation of something that increases the likelihood the
behavior will be repeated. For example, if someone you meet smiles at you
after you say, “Hello, how are you?” and you keep talking, the smile has
reinforced your talking. The same principle of positive reinforcement is at work
when you teach a dog to “shake hands” by giving it a piece of food when it lifts its paw.
In contrast, in negative reinforcement the frequency of a behavior increases because
it is followed by the removal of something. For example, if your father nagged you to
clean out the garage and kept nagging until you cleaned out the garage, your response
(cleaning out the garage) removed the unpleasant stimulus (your dad’s nagging). Taking
an aspirin when you have a headache works the same way: A reduction of pain reinforces
the act of taking an aspirin. Similarly, if your TV is making an irritating buzzing sound,
you might give it a good smack on the side, and if the buzzing stops, you are more likely
to smack the set again if the buzzing resumes. Ending the buzzing sound rewards the
TV-smacking.
Notice that both positive and negative reinforcement involve rewarding behavior—but
they do so in different ways. Positive reinforcement means following a behavior with the
addition of something, and negative reinforcement means following a behavior with the
removal of something. Remember that in this case, "positive" and "negative" have
nothing to do with "good" and "bad." Rather, they refer to processes in which something is
given (positive reinforcement) or removed (negative reinforcement). Whether it is posi-
tive or negative, reinforcement is about increasing a behavior. Figure 5.8 provides further
examples to illustrate the distinction between positive and negative reinforcement.
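The two distinctions in this passage (is a stimulus added or removed, and does the behavior increase?) amount to a small decision table. The sketch below is ours, not the textbook's: the function and argument names are invented, and the behavior-decreasing cases (punishment) are included only to complete the table.

```python
# Illustrative decision table for operant consequences. The function and
# its argument names are invented for this sketch, not textbook notation.

def classify_consequence(stimulus_added: bool, behavior_increases: bool) -> str:
    """Label a consequence by whether a stimulus is presented or removed
    and whether the behavior it follows becomes more or less frequent."""
    if behavior_increases:
        # Anything that increases a behavior is reinforcement; "positive"
        # and "negative" only say whether something was added or removed.
        return "positive reinforcement" if stimulus_added else "negative reinforcement"
    # Behavior-decreasing cases (punishment) round out the table.
    return "positive punishment" if stimulus_added else "negative punishment"

# Dad stops nagging once you clean the garage, and you clean more often:
print(classify_consequence(stimulus_added=False, behavior_increases=True))
# -> negative reinforcement
```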
positive reinforcement: The presentation of a stimulus following a given behavior in order to increase the frequency of that behavior.
negative reinforcement: The removal of a stimulus following a given behavior in order to increase the frequency of that behavior.
Although Thorndike talked about "satisfying" outcomes strengthening behaviors, Skinner took the need for satisfying states out of the equation. For Skinner, if a stimulus increased a behavior, it was reinforcing; no need to talk about how the animal feels.
FIGURE 5.8 Positive and Negative Reinforcement Positive reinforcers involve adding something (generally something rewarding). Negative reinforcers involve taking away something (generally something aversive).

Positive Reinforcement
Behavior: You turn in homework on time. Rewarding stimulus provided: Teacher praises your performance. Future behavior: You increasingly turn in homework on time.
Behavior: You wax your skis. Rewarding stimulus provided: The skis go faster. Future behavior: You wax your skis the next time you go skiing.
Behavior: You randomly press a button on the dashboard of a friend's car. Rewarding stimulus provided: Great music begins to play. Future behavior: You deliberately press the button again the next time you get into the car.

Negative Reinforcement
Behavior: You turn in homework on time. Stimulus removed: Teacher stops criticizing late homework. Future behavior: You increasingly turn in homework on time.
Behavior: You wax your skis. Stimulus removed: People stop zooming by you on the slope. Future behavior: You wax your skis the next time you go skiing.
Behavior: You randomly press a button on the dashboard of a friend's car. Stimulus removed: An annoying song shuts off. Future behavior: You deliberately press the button again the next time the annoying song is on.
A special kind of response to negative reinforcement is avoidance learning.
Avoidance learning occurs when the organism learns that by making a particular
response, a negative stimulus can be altogether avoided. For instance,
a student who receives one bad grade might thereafter always study hard
in order to avoid the negative outcome of bad grades in the future. Even
when the bad grade is no longer present, the pattern of behavior sticks.
Avoidance learning is very powerful in the sense that the behavior is
maintained even in the absence of any aversive stimulus. For example,
animals that have been trained to avoid a negative stimulus, such as an
electrical shock, by jumping into a safe area may always thereafter
gravitate toward the safe area, even when the risk of shock is no longer
present.
Experience with unavoidable negative stimuli can lead to a particular
deficit in avoidance learning called learned helplessness, in which the
organism, exposed to uncontrollable aversive stimuli, learns that it has no
control over negative outcomes. Learned helplessness was first identified by
Martin Seligman and his colleagues (Altenor, Volpicelli, & Seligman, 1979;
Hannum, Rosellini, & Seligman, 1976), who found that dogs that were first
exposed to inescapable shocks were later unable to learn to avoid those shocks,
even when they could avoid them (Seligman & Maier, 1967). This inability to learn
to escape was persistent: The dogs would suffer painful shocks hours, days, and
even weeks later and never attempt to escape. Exposure to unavoidable negative
circumstances may also set the stage for humans’ inability to learn avoidance,
such as with the experience of depression and despair (Pryce & others, 2011).
Learned helplessness has aided psychologists in understanding a variety of
perplexing issues, such as why some victims of domestic violence fail to flee
their terrible situation and why some students respond to failure at school by giving
up trying.
TYPES OF REINFORCERS Psychologists classify positive reinforcers as primary
or secondary based on whether the rewarding quality of the consequence is innate or
learned. A primary reinforcer is innately satisfying; that is, a primary reinforcer does
not require any learning on the organism’s part to make it pleasurable. Food, water, and
sexual satisfaction are primary reinforcers. A secondary reinforcer , on the other hand,
acquires its positive value through an organism’s experience; a secondary reinforcer is a
learned or conditioned reinforcer. We encounter hundreds of secondary reinforcers in our
lives, such as getting an A on a test and a paycheck for a job. Although we might think
of these as quite positive outcomes, they are not innately positive. We learn through
experience that A’s and paychecks are good. Secondary reinforcers can be used in
a system called a token economy. In a token economy, behaviors are rewarded with
tokens (such as poker chips or stars on a chart) that can be exchanged later for
desired rewards (such as candy or money).
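The bookkeeping behind a token economy can be sketched in a few lines of code. The following Python sketch is purely illustrative (the class name, exchange rate, and counts are invented here, not drawn from the text): behaviors earn tokens, which serve as secondary reinforcers, and the tokens are later exchanged for backup rewards.

```python
class TokenEconomy:
    """Minimal, illustrative token-economy tracker."""

    def __init__(self, exchange_rate):
        # exchange_rate: how many tokens buy one backup reward
        self.exchange_rate = exchange_rate
        self.tokens = 0

    def reinforce(self):
        """Award one token right after the target behavior occurs."""
        self.tokens += 1

    def exchange(self):
        """Trade accumulated tokens for backup rewards (candy, money, etc.)."""
        rewards = self.tokens // self.exchange_rate
        self.tokens -= rewards * self.exchange_rate
        return rewards

economy = TokenEconomy(exchange_rate=5)   # 5 tokens per backup reward
for _ in range(12):                       # the target behavior occurs 12 times
    economy.reinforce()
print(economy.exchange())                 # 12 tokens buy 2 rewards; 2 tokens remain
```

Note that the tokens themselves only work as reinforcers because of their learned link to the backup reward, which is the defining feature of a secondary reinforcer.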
GENERALIZATION, DISCRIMINATION, AND EXTINCTION Generalization,
discrimination, and extinction are important not only in classical conditioning.
They also are key principles in operant conditioning.
Generalization In operant conditioning, generalization means performing a rein-
forced behavior in a different situation. For example, in one study pigeons were rein-
forced for pecking at a disk of a particular color (Guttman & Kalish, 1956). To assess
stimulus generalization, researchers presented the pigeons with disks of varying colors.
As Figure 5.9 shows, the pigeons were most likely to peck at disks closest in color to
the original. When a student who gets excellent grades in a calculus class by studying
the course material every night starts to study psychology and history every night as
well, generalization is at work.
avoidance learning
An organism’s learning that it can altogether avoid a negative stimulus by making a particular response.
learned helplessness
Through experience with unavoidable aversive stimuli, an organism learns that it has no control over negative outcomes.
primary reinforcer
A reinforcer that is innately satisfying; one that does not take any learning on the organism’s part to make it pleasurable.
secondary reinforcer
A reinforcer that acquires its positive value through an organism’s experience; a secondary reinforcer is a learned or conditioned reinforcer.
generalization (operant conditioning)
Performing a reinforced behavior in a different situation.
Yes, dog lovers, many have questioned the ethics of this research. What do you think?
Parents who are potty-training toddlers often use token economies.
Positive reinforcement and negative
reinforcement can be difficult
concepts to grasp. The real-world
examples and accompanying practice
exercises on the following website
should help to clarify the distinction
for you: http://psych.athabascau.ca/
html/prtut/reinpair.htm
Discrimination In operant conditioning, discrimination means responding
appropriately to stimuli that signal that a behavior will
or will not be reinforced (de Wit & others,
2007). For example, you go to a restaurant
that has a “University Student Discount” sign
in the front window, and you enthusiastically
flash your student ID with the expectation of
getting the reward of a reduced-price meal. Without
the sign, showing your ID might get you only a puzzled
look, not cheap food.
The principle of discrimination helps to explain how a
service dog “knows” when he is working. Typically, the
dog wears a training harness while on duty but not at other
times. Thus, when a service dog is wearing its harness, it
is important to treat him like the professional that he is.
Similarly, an important aspect of the training of service
dogs is the need for selective disobedience. Selective dis-
obedience means that in addition to obeying commands
from his human partner, the service dog must at times
override such commands if the context provides cues that
obedience is not the appropriate response. So, if a guide
dog is standing at the corner with his visually impaired
human, and the human commands him to move forward,
the dog might refuse if he sees the “Don’t Walk” sign
flashing. Stimuli in the environment serve as cues, inform-
ing the organism if a particular reinforcement contingency
is in effect.
Extinction In operant conditioning, extinction occurs when a behavior is no longer
reinforced and decreases in frequency. If, for example, a soda machine that you
frequently use starts “eating” your coins without dispensing soda, you quickly stop
inserting more coins. Several weeks later, you might try to use the machine again,
hoping that it has been fixed. Such behavior illustrates spontaneous recovery in oper-
ant conditioning.
SCHEDULES OF REINFORCEMENT Most of the examples of reinforcement
we have considered so far involve continuous reinforcement , in which a behavior is
reinforced every time it occurs. When continuous reinforcement takes place, organisms
learn rapidly. However, when reinforcement stops, extinction takes place quickly. A
variety of conditioning procedures have been developed that are particularly resistant to
extinction. These involve partial reinforcement , in which a reinforcer follows a behav-
ior only a portion of the time. Partial reinforcement characterizes most life experiences.
For instance, a golfer does not win every tournament she enters; a chess whiz does not
win every match he plays; a student does not get a pat on the back each time she solves
a problem.
Schedules of reinforcement are specific patterns that determine when a behavior
will be reinforced (Killeen & others, 2009). There are four main schedules of partial
reinforcement: fixed ratio, variable ratio, fixed interval, and variable interval. With
respect to these, ratio schedules involve the number of behaviors that must be performed
prior to reward, and interval schedules refer to the amount of time that must pass before
a behavior is rewarded. In a fixed schedule, the number of behaviors or the amount of
time is always the same. In a variable schedule, the required number of behaviors or
the amount of time that must pass changes and is unpredictable from the perspective
of the learner. Let’s look concretely at how each of these schedules of reinforcement
influences behavior.
discrimination (operant conditioning)
Responding appropriately to stimuli that signal that a behavior will or will not be reinforced.
extinction (operant conditioning)
Decreases in the frequency of a behavior when the behavior is no longer reinforced.
schedules of reinforcement
Specific patterns that determine when a behavior will be reinforced.
[Graph: number of responses (0–350) on the y-axis; wavelengths from 470 to 630 nm on the x-axis.]
FIGURE 5.9 Stimulus Generalization
In the experiment by Norman Guttman and Harry
Kalish (1956), pigeons initially pecked a disk of a
particular color (in this graph, a color with a wavelength
of 550 nm) after they had been reinforced for this
wavelength. Subsequently, when the pigeons were
presented disks of colors with varying wavelengths,
they were likelier to peck those that were similar to
the original disk.
If you are accustomed to using your fingers to stretch out text or an image on your smartphone or iPad, you might find yourself trying to do the same thing with a computer monitor and looking foolish. That’s a lack of discrimination.
A fixed-ratio schedule reinforces a behavior after a set
number of behaviors. For example, if you are playing the
slot machines in Atlantic City and if the machines are on
a fixed-ratio schedule, you might get $5 back every 20th
time you put money in the machine. It would not take long
to figure out that if you watched someone else play the
machine 18 or 19 times, not get any money back, and then
walk away, you should step up, insert your coin, and get
back $5. The business world often uses fixed-ratio schedules
to increase production. For instance, a factory might
require a line worker to produce a certain number of items
in order to get paid a particular amount.
Of course, if the reward schedule for a slot machine were
that easy to figure out, casinos would not be so successful.
What makes gambling so tantalizing is the unpredictability
of wins (and losses). Slot machines are on a variable-ratio
schedule , a timetable in which behaviors are rewarded an
average number of times but on an unpredictable basis. For
example, a slot machine might pay off at an average of
every 20th time, but the gambler does not know when this
payoff will be. The slot machine might pay off twice in a
row and then not again until after 58 coins have been
inserted. This averages out to a reward for every 20 behav-
ioral acts, but when the reward will be given is unpredict-
able. Variable-ratio schedules produce high, steady rates of
behavior that are more resistant to extinction than the other
three schedules.
Whereas ratio schedules of reinforcement are based on the
number of behaviors that occur, interval reinforcement schedules
are determined by the time elapsed since the last behavior was rewarded. A fixed-
interval schedule reinforces the first behavior after a fixed amount of time has passed.
If you take a class that has four scheduled exams, you might procrastinate most of the
semester and cram just before each test. Fixed-interval schedules of reinforcement are
also responsible for the fact that pets seem to be able to “tell time,” eagerly sidling up
to their food dish at 5 p.m. in anticipation of dinner. On a fixed-interval schedule, the
rate of a behavior increases rapidly as the time approaches when the behavior likely will
be reinforced. For example, a government official who is running for reelection may
intensify her campaign activities as Election Day draws near.
A variable-interval schedule is a timetable in which a behavior is reinforced
after a variable amount of time has elapsed. Pop quizzes occur on a
variable-interval schedule. So does fishing—you do not know if the fish will
bite in the next minute, in a half hour, in an hour, or ever. Because it is
difficult to predict when a reward will come, behavior is slow and consistent on
a variable-interval schedule (Staddon, Chelaru, & Higa, 2002).
To sharpen your sense of the differences between fixed- and variable-interval schedules,
consider the following example. Penelope and Edith both design slot machines for
their sorority’s charity casino night. Penelope puts her slot machine on a variable-interval
schedule of reinforcement; Edith puts hers on a fixed-interval schedule of reinforcement.
On average, both machines will deliver a reward every 20 minutes. Whose slot machine
is likely to make the most money for the sorority charity? Edith’s machine is likely to
lead to long lines just before the 20-minute mark, but people will be unlikely to play on
it at other times. In contrast, Penelope’s is more likely to entice continuous play, because
the players never know when they might hit a jackpot. The magic of variable schedules
of reinforcement is that the learner can never be sure exactly when the reward is coming.
Figure 5.10 shows how the different schedules of reinforcement result in different rates
of responding.
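The contrast between fixed- and variable-ratio schedules can be made concrete with a short simulation. This Python sketch is illustrative only (the ratio of 20 and the random seed are arbitrary choices, not values from the studies the text cites): a fixed-ratio schedule pays off exactly every nth response, while a variable-ratio schedule pays off after an unpredictable count that merely averages n.

```python
import random

def fixed_ratio_rewards(n_responses, n=20):
    """Fixed ratio: reward exactly every n-th response (perfectly predictable)."""
    return [(i + 1) % n == 0 for i in range(n_responses)]

def variable_ratio_rewards(n_responses, mean_n=20, seed=0):
    """Variable ratio: reward after a count drawn uniformly from 1..2*mean_n - 1,
    so rewards average one per mean_n responses but arrive unpredictably."""
    rng = random.Random(seed)
    rewarded, count = [], 0
    needed = rng.randint(1, 2 * mean_n - 1)
    for _ in range(n_responses):
        count += 1
        if count >= needed:            # reward delivered; reset the counter
            rewarded.append(True)
            count = 0
            needed = rng.randint(1, 2 * mean_n - 1)
        else:
            rewarded.append(False)
    return rewarded

print(sum(fixed_ratio_rewards(100)))   # 5 rewards: on responses 20, 40, 60, 80, 100
```

Interval schedules would instead key the reward off the time elapsed since the last reinforced response, rather than off the response count.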
Slot machines are on a variable-ratio schedule of
reinforcement.
This is why pop quizzes lead to more consistent levels of studying compared to the cramming that might be seen with scheduled exams.
PUNISHMENT We began this section
by noting that behaviors can be followed
by something good or something bad. So far,
we have explored only the good things—
reinforcers that are meant to increase behav-
iors. Sometimes, however, the goal is to
decrease a behavior, and in such cases the
behavior might be followed by something
unpleasant. Punishment is a consequence
that decreases the likelihood that a behavior
will occur. For instance, a child plays with a
matchbox and gets burned when she lights
one of the matches; the child consequently is
less likely to play with matches in the future.
As another example, a student interrupts the
instructor, and the instructor scolds the
student. This consequence—the teacher’s
verbal reprimand—makes the student less
likely to interrupt in the future. In punish-
ment, a response decreases because of its
unpleasant consequences.
Just as the positive–negative distinction
applies to reinforcement, it can also apply to
punishment. As was the case for reinforce-
ment, “positive means adding something,
and “negative” means taking something away.
Thus, in positive punishment , a behavior
decreases when it is followed by the presen-
tation of a stimulus, whereas in negative
punishment , a behavior decreases when a
stimulus is removed. Examples of positive
punishment include spanking a misbehaving
child and scolding a spouse who forgot to call when she was running late at
the office; the coach who makes his team run wind sprints after a lackadaisical
practice is also using positive punishment. Time-out is a form of
negative punishment in which a child is removed from a positive rein-
forcer, such as his or her toys. Getting grounded is also a form of nega-
tive punishment as it involves taking a teenager away from the fun things
in his life. Figure 5.11 compares positive reinforcement, negative reinforce-
ment, positive punishment, and negative punishment.
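The four consequence types just compared form a simple 2x2: whether a stimulus is presented or removed, crossed with whether the behavior increases or decreases. As a rough illustration (the function and its argument names are invented for this sketch; only the four category labels come from the text):

```python
def classify_consequence(stimulus_change, effect_on_behavior):
    """stimulus_change: 'presented' or 'removed';
    effect_on_behavior: 'increases' or 'decreases' the future behavior."""
    sign = "positive" if stimulus_change == "presented" else "negative"
    kind = "reinforcement" if effect_on_behavior == "increases" else "punishment"
    return f"{sign} {kind}"

print(classify_consequence("removed", "increases"))    # negative reinforcement
print(classify_consequence("presented", "decreases"))  # positive punishment
```

The sketch encodes the key rule: positive/negative describes what happens to the stimulus, while reinforcement/punishment describes what happens to the behavior.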
TIMING, REINFORCEMENT, AND PUNISHMENT How does the timing
of reinforcement and punishment influence behavior? Does it matter whether the rein-
forcement is small or large? Let’s take a look.
Immediate Versus Delayed Reinforcement As in classical conditioning,
in operant conditioning learning is more efficient when the interval between a behav-
ior and its reinforcer is a few seconds rather than minutes or hours, especially in
lower animals (Church & Kirkpatrick, 2001). If a food reward is delayed for more
than 30 seconds after a rat presses a bar, the food is virtually ineffective as reinforce-
ment. Humans, however, have the ability to respond to delayed reinforcers (Holland,
1996).
Sometimes important life decisions involve whether to seek and enjoy a small, imme-
diate reinforcer or to wait for a delayed but more highly valued reinforcer (Martin &
Pear, 2011). For example, you might spend your money now on clothes, concert tickets,
and an iPad 2, or you might save your money and buy a car later. You might choose to
punishment
A consequence that decreases the likelihood that a behavior will occur.
positive punishment
The presentation of a stimulus following a given behavior in order to decrease the frequency of that behavior.
negative punishment
The removal of a stimulus following a given behavior in order to decrease the frequency of that behavior.
PSYCHOLOGICAL INQUIRY
FIGURE 5.10 Schedules of Reinforcement and Different
Patterns of Responding In this figure, each hash mark indicates the
delivery of reinforcement. Notice on the fixed-ratio schedule the dropoff in
responding after each response; on the variable-ratio schedule the high,
steady rate of responding; on the fixed-interval schedule the immediate
dropoff in responding after reinforcement, and the increase in responding just
before reinforcement (resulting in a scalloped curve); and on the variable-
interval schedule the slow, steady rate of responding. > Which schedule of
reinforcement represents the “most bang for the buck”? That is, which is
associated with the most responses for the least amount of reinforcement?
> Which schedule would be best if you have very little time for training?
> Which schedule of reinforcement is most common in your life?
[Graph: cumulative responses over time for the fixed-ratio, variable-ratio, fixed-interval, and variable-interval schedules, with hash marks showing reinforcement delivery.]
Punishment is sometimes confused with negative reinforcement. Reinforcement increases behavior. Punishment is meant to decrease it.
Positive Reinforcement
Behavior: You turn in your work project on time. → Manager praises you for turning in your project on time. → Effect on behavior: You turn in your next project on time.
Negative Reinforcement
Behavior: You take aspirin for a headache. → Your headache goes away. → Effect on behavior: You take aspirin again the next time you have a headache.
Positive Punishment
Behavior: You don't replace the tires on the family car when your parent asks you to. → Your parent is angry at you for not replacing the tires. → Effect on behavior: You stop dawdling and replace the tires to avoid your parent’s anger.
Negative Punishment
Behavior: Your younger sister comes home two hours after curfew. → Your sister is grounded for two weeks. → Effect on behavior: Your sister doesn’t come home late the next time she’s allowed to go out with friends.
FIGURE 5.11 Positive Reinforcement, Negative Reinforcement, Positive Punishment, and Negative
Punishment The fine distinctions here can sometimes be confusing. With respect to reinforcement, note that both types of
reinforcement are intended to increase behavior, either by presenting a stimulus (in positive reinforcement) or by taking away a stimulus
(in negative reinforcement). Punishment is meant to decrease a behavior either by presenting something (in positive punishment) or by
taking away something (in negative punishment). The words positive and negative mean the same things in both cases.
enjoy yourself now in return for immediate small reinforcers, or you might opt to study
hard in return for delayed stronger reinforcers such as good grades, a scholarship to
graduate school, and a better job.
Immediate Versus Delayed Punishment As with reinforce-
ment, in most instances of research with lower animals, immediate pun-
ishment is more effective than delayed punishment in decreasing the
occurrence of a behavior. However, also as with reinforcement, delayed
punishment can have an effect on human behavior. Not studying at the
beginning of a semester can lead to poor grades much later, and
humans have the capacity to notice that this early behavior contributed
to the negative outcome.
Immediate Versus Delayed Reinforcement and Punishment
Many daily behaviors revolve around rewards and punishments, both
immediate and delayed. We might put off going to the dentist to
avoid a small punisher (such as the discomfort that comes with
getting a cavity filled). However, this procrastination might contrib-
ute to greater pain later (such as the pain of having a tooth pulled).
Sometimes life is about enduring a little pain now to avoid a lot of
pain later.
How does receiving immediate small reinforcement versus
delayed strong punishment affect human behavior (Martin &
Pear, 2011)? One reason that obesity is such a major health prob-
lem is that eating is a behavior with immediate positive consequences
—food tastes great and quickly provides a pleasurable, satisfied feeling. Although the
potential delayed consequences of overeating are negative (obesity and other possible
health risks), the immediate consequences are difficult to override. When the delayed
consequences of behavior are punishing and the immediate consequences are reinforc-
ing, the immediate consequences usually win, even when the immediate consequences
are minor reinforcers and the delayed consequences are major punishers.
Smoking and drinking follow a similar pattern. The immediate consequences of smok-
ing are reinforcing for most smokers—the powerful combination of positive reinforce-
ment (enhanced attention, energy boost) and negative reinforcement (tension relief,
removal of craving). The primarily long-term effects of smoking are punishing and
include shortness of breath, a chronic sore throat and/or coughing, chronic obstructive
pulmonary disease (COPD), heart disease, and cancer. Likewise, the immediate pleasur-
able consequences of drinking override the delayed consequences of a hangover or even
alcoholism and liver disease.
Now think about the following situations. Why are some of us so reluctant to
take up a new sport, try a new dance step, run for office on campus or in local gov-
ernment, or do almost anything different? One reason is that learning new skills
often involves minor punishing consequences, such as initially looking and feeling
stupid, not knowing what to do, and having to put up with sarcastic comments from
others. In these circumstances, reinforcing consequences are often delayed. For
example, it may take a long time to become a good enough golfer or a good enough
dancer to enjoy these activities, but persevering through the rough patches just might
be worth it.
Applied Behavior Analysis
Although behavioral approaches have been criticized for ignoring mental processes and
focusing only on observable behavior, these approaches do provide an optimistic perspec-
tive for individuals interested in changing their behaviors. That is, rather than concentrat-
ing on factors such as the type of person you are, behavioral approaches imply that you
can modify even longstanding habits by changing the reward contingencies that maintain
those habits (Miltenberger, 2012).
One real-world application of operant conditioning principles to promote better
functioning is applied behavior analysis. Applied behavior analysis (also called
behavior modification) is the use of operant conditioning principles to change human
behavior. In applied behavior analysis, the rewards and punishers that exist in a par-
ticular setting are carefully analyzed and manipulated to change behaviors. Applied
behavior analysis seeks to identify the rewards that might be maintaining unwanted
behaviors and to enhance the rewards of more appropriate behaviors. From this
perspective, we can understand all human behavior as being influenced by rewards
and punishments. If we can figure out what rewards and punishers are controlling
a person’s behavior, we can change them—and eventually change the behavior
itself.
A manager who rewards staff members with a half day off if they meet
a particular work goal is employing applied behavior analysis. So are a
therapist and a client when they establish clear consequences of the
client’s behavior in order to reinforce more adaptive actions
and discourage less adaptive ones (Chance, 2009). A teacher
who notices that a troublesome student seems to enjoy the
attention he receives—even when that attention is scolding—
might use applied behavior analysis by changing her responses
to the child’s behavior, ignoring it instead. These examples show
how attending to the consequences of behavior can be used to
applied behavior analysis (behavior modification)
The use of operant conditioning principles to change human behavior.
Note that the teacher-student example involves negative punishment.
improve performance in settings such as a workplace and a classroom. Advocates of
applied behavior analysis believe that many emotional and behavioral problems stem
from inadequate or inappropriate consequences (Alberto & Troutman, 2009).
Applied behavior analysis has been effective in a wide range of situations. Practitio-
ners have used it, for example, to train autistic individuals (Frazier, 2012), children and
adolescents with psychological problems (Miltenberger, 2012), and residents of mental
health facilities (Phillips & Mudford, 2008); to instruct individuals in effective parenting
(Phaneuf & McIntyre, 2007); to enhance environmentally conscious behaviors such as
recycling and not littering (Geller, 2002); to get people to wear seatbelts (Streff & Geller,
1986); and to promote workplace safety (Geller, 2006). Applied behavior analysis can
help people improve their self-control in many aspects of mental and physical health
(Spiegler & Guevremont, 2010).
1. A mother takes away her son’s favorite
toy when he misbehaves. Her action is
an example of
A. positive reinforcement.
B. negative reinforcement.
C. positive punishment.
D. negative punishment.
2. The schedule of reinforcement that
results in the greatest increase in
behavior is
A. fixed ratio.
B. variable ratio.
C. fixed interval.
D. variable interval.
3. Kelley is scolded each time she teases
her little brother. Her mother notices
that the frequency of teasing has de-
creased. Scolding Kelley is an effective
A. negative reinforcer.
B. negative punisher.
C. conditioner.
D. positive punisher.
APPLY IT! 4. Kevin’s girlfriend is very
moody, and he never knows what to expect
from her. When she is in a good mood, he
feels as if he is in heaven, but when she is
in a bad mood, she makes him crazy. His
friends all think that he should dump her,
but Kevin finds that he just cannot break
itoff. Kevin’s girlfriend has him on a
_________ schedule of reinforcement.
A. variable
B. fixed
C. continuous
D. nonexistent
Would it make sense to teach a 15-year-old boy how to drive with either classical
conditioning or operant conditioning procedures? Driving a car is a voluntary behav-
ior, so classical conditioning would not apply. In terms of operant conditioning, we
could ask him to try to drive down the road and then reward his positive behaviors.
Not many of us would want to be on the road, though, when he makes mistakes.
Albert Bandura (2007b, 2008, 2010) believes that if we learned only in such a trial-
and-error fashion, learning would be exceedingly tedious and at times hazardous.
Instead, he says, many complex behaviors are the result of exposure to competent
models. By observing other people, we can acquire knowledge, skills, rules, strategies,
beliefs, and attitudes (Schunk, 2011).
Bandura’s observational learning, also called imitation or modeling, is learning that
occurs when a person observes and imitates behavior. The capacity to learn by observa-
tion eliminates trial-and-error learning. Often observational learning takes less time than
operant conditioning. Bandura (1986) described four main processes that are involved in
observational learning: attention, retention, motor reproduction, and reinforcement.
In observational learning, the first process that must occur is attention (which we
initially considered in Chapter 3 due to its crucial role in perception). To reproduce a
model’s actions, you must attend to what the model is saying or doing. You might not
hear what a friend says if the stereo is blaring, and you might miss your instructor’s
4 Observational Learning
Observational learning occurs when a person observes and imitates
someone else’s behavior. A famous example of observational learning is
the Bobo doll study (Bandura, Ross, & Ross, 1961), in which children
who had watched an aggressive adult model were more likely to behave
aggressively when left alone than were children who had observed a
non-aggressive model.
Having positive role models and
mentors you can observe can be a
signifi cant factor in your learning
and success. Make a list of your most
important role models and mentors.
Next to each, briefl y describe how
they have infl uenced you. What
would your ideal role model or
mentor be like?
analysis of a problem if you are admiring
someone sitting in the next row. As a further
example, imagine that you decide to take a
class to improve your drawing skills. To suc-
ceed, you need to attend to the instructor’s
words and hand movements. Characteristics of
the model can influence attention to the model.
Warm, powerful, atypical people, for example,
command more attention than do cold, weak,
typical people.
Retention is the second process required for
observational learning to occur. To reproduce a
model’s actions, you must encode the informa-
tion and keep it in memory so that you can
retrieve it. A simple verbal description, or a
vivid image of what the model did, assists
retention. (Memory is such an important cogni-
tive process that Chapter 6 is devoted exclu-
sively to it.) In the example of taking a class to
sharpen your drawing ability, you will need to
remember what the instructor said and did in
modeling good drawing skills.
Motor reproduction, a third element of
observational learning, is the process of imitating
the model’s actions. People might pay attention to a model and encode what they have
seen, but limitations in motor development might make it difficult for them to reproduce
the model’s action. Thirteen-year-olds might see a professional basketball player do a
reverse two-handed dunk but be unable to reproduce the pro’s play. Similarly, in your
drawing class, if you lack fine motor reproduction skills, you might be unable to follow
the instructor’s example.
Reinforcement is a final component of observational learning. In this case, the question
is whether the model’s behavior is followed by a consequence. Seeing a model attain a
reward for an activity increases the chances that an observer will repeat the behavior—a
process called vicarious reinforcement. On the other hand, seeing the model punished makes the observer less likely to repeat the behavior—a process called vicarious punishment. Unfortunately, vicarious reinforcement and vicarious punishment are often absent in, for example, media portrayals of violence and aggression.
Observational learning has been studied in a variety of contexts. Researchers have
explored observational learning, for example, as a means by which gorillas learn from
one another about motor skills (Byrne, Hobaiter, & Klailova, 2011). They have also
studied it as a process by which people learn whether stimuli are likely to be painful
(Helsen & others, 2011) and as a tool individuals use to make economic decisions (Feri &
others, 2011). Researchers are also interested in comparing learning from experience with
learning through observation (Nicolle, Symmonds, & Dolan, 2011).
Observational learning is an important factor in how role models inspire people and change their perceptions. Whether a model is similar to us can influence that model's effectiveness in modifying our
behavior. The shortage of role models for women and minorities in science
and engineering has often been suggested as a reason for the lack of women
and minorities in these fields. After the election of Barack Obama as president of the United States, many commentators noted that for the first time, African American children could see concretely that they might also attain the nation's highest office someday. You may have seen the photo of 5-year-
old Jacob Philadelphia feeling President Obama’s hair, to see if it was just
like his (Calmes, 2012).
Figure 5.12 summarizes Bandura’s model of observational learning.
1. Another name for observational
learning is
A. replication.
B. modeling.
C. trial-and-error learning.
D. visualization.
2. According to Bandura, _________
occurs first in observational learning.
A. motor reproduction
B. retention
C. attention
D. reinforcement
3. A friend shows you how to do a card
trick. However, you forget the second
step in the trick and are thus unable to
replicate the card trick. There has been
a failure in
A. motor reproduction.
B. retention.
C. attention.
D. reinforcement.
APPLY IT! 4. Shawna is a 15-year-old
high school girl whose mother is a highly
paid accountant. Shawna’s mom works long
hours, often complains about her workplace
and how much she hates her boss, and
seems tired most of the time. When she is
asked what she might do when she grows
up, Shawna says she does not think she
wants to pursue a career in accounting. Her
mother is shocked and cannot understand
why Shawna would not want to follow in
her footsteps. Which of the following is the
most likely explanation for this situation?
A. Shawna has not observed her mother be-
ing reinforced for her behavior. She has
only experienced vicarious punishment.
B. Shawna is not aware that her mother is
an accountant.
C. Shawna is too different from her mother
for her mother to be an effective role
model.
D. Shawna has not been paying attention
to her mother.
[Figure 5.12 diagram: Attention → Retention → Motor Reproduction → Reinforcement or Incentive Conditions → Observational Learning]
FIGURE 5.12 Bandura’s Model
of Observational Learning In terms of
Bandura’s model, if you are learning to ski, you
need to attend to the instructor's words and
demonstrations. You need to remember what
the instructor did and his or her tips for avoiding
disasters. You also need the motor abilities to
reproduce what the instructor has shown you.
Praise from the instructor after you have
completed a few moves on the slopes should
improve your motivation to continue skiing.
Cognitive Factors in Learning

In learning about learning, we have looked at cognitive processes only as they apply in observational learning. Skinner's operant conditioning and Pavlov's classical conditioning focus on the environment and observable behavior, not what is going on in the head of the learner. Many contemporary psychologists, including some behaviorists, recognize the importance of cognition and believe that learning involves more than environment–behavior connections (Bandura, 2011; Bjork, Dunlosky, & Kornell, 2013; Schunk, 2011). A good starting place for considering cognitive influences in learning is the work of E. C. Tolman.
Purposive Behavior
E. C. Tolman (1932) emphasized the purposiveness of behavior—the idea that much of behavior is goal-directed. Tolman believed that it is necessary to study entire behavioral sequences
in order to understand why people engage in particular actions. For example, high school
students whose goal is to attend a leading college or university study hard in their classes. If
we focused only on their studying, we would miss the purpose of their behavior. The students
do not always study hard because they have been reinforced for studying in the past. Rather,
studying is a means to intermediate goals (learning, high grades) that in turn improve their
likelihood of getting into the college or university of their choice (Schunk, 2011).
We can see Tolman’s legacy today in the extensive interest in the role of goal setting
in human behavior (Petri & Govern, 2013). Researchers are especially curious about how
people self-regulate and self-monitor their behavior to reach a goal (Bjork, Dunlosky, &
Kornell, 2013; Matthews & Moran, 2011).
EXPECTANCY LEARNING AND INFORMATION In studying the purposiveness of behavior, Tolman went beyond the stimuli and responses of Pavlov and Skinner to focus on cognitive mechanisms. Tolman said that when classical conditioning and operant
conditioning occur, the organism acquires certain expectations. In classical conditioning, the
young boy fears the rabbit because he expects it will hurt him. In operant conditioning, a
woman works hard all week because she expects a paycheck on Friday. Expectancies are
acquired from people's experiences with their environment. Expectancies influence a variety
of human experiences. We set the goals we do because we believe that we can reach them.
Expectancies also play a role in the placebo effect. Many painkillers have been shown
to be more effective in reducing pain when patients can see the intravenous injection
sites than when they cannot (Price, Finniss, & Benedetti, 2008). If patients can see that
they are getting a drug, they can harness their own expectations for pain reduction.
Tolman (1932) emphasized that the information value of the conditioned stimulus is
important as a signal or an expectation that an unconditioned stimulus will follow. Antic-
ipating contemporary thinking, Tolman believed that the information that the CS provides
is the key to understanding classical conditioning. One contemporary view of classical
conditioning describes an organism as an information seeker, using logical and perceptual
relations among events, along with preconceptions, to form a representation of the world
(Rescorla, 2003, 2005, 2009).
A classic experiment conducted by Leon Kamin (1968) illustrates the importance of an
organism’s history and the information provided by a conditioned stimulus in classical
conditioning. Kamin conditioned a rat by repeatedly pairing a tone (CS) and a shock (US)
until the tone alone produced fear (conditioned response). Then he continued to pair the
tone with the shock, but he turned on a light (a second CS) each time the tone sounded.
Even though he repeatedly paired the light (CS) and the shock (US), the rat showed no
conditioning to the light (the light by itself produced no CR). Conditioning to the light was
blocked, almost as if the rat had not paid attention. The rat apparently used the tone as a
signal to predict that a shock would be coming; information about the light’s pairing with
the shock was redundant with the information already learned about the tone’s pairing with
the shock. In this experiment, conditioning was governed not by the contiguity of the CS
and US but instead by the rat’s history and the information it received. Contemporary clas-
sical conditioning researchers are further exploring the role of information in an organism’s
learning (Kluge & others, 2011; Knight, Lewis, & Wood, 2011; Rescorla & Wagner, 2009).
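Tolman's information idea was later formalized in the Rescorla–Wagner learning rule, which the Kamin experiment above illustrates. As a rough sketch (not drawn from this chapter; the learning rate and trial counts are illustrative assumptions), a few lines of Python reproduce the blocking pattern: a cue only gains associative strength to the extent that the shock is surprising, so the pretrained tone leaves nothing for the light to learn.

```python
# A minimal sketch of the Rescorla-Wagner rule, assuming illustrative parameters.
# Each cue's associative strength V changes in proportion to the prediction error:
# how much the US (shock) exceeded what all present cues together predicted.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Update associative strengths across conditioning trials.

    trials: list of sets of cues present on each trial (the US is assumed present).
    alpha:  learning rate; lam: maximum strength the US supports.
    """
    V = {}
    for cues in trials:
        total = sum(V.get(c, 0.0) for c in cues)   # combined prediction of the US
        error = lam - total                        # surprise: actual US minus prediction
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error   # present cues share the update
    return V

# Kamin's design: tone alone predicts shock, then tone + light compound predicts shock.
blocking = rescorla_wagner([{"tone"}] * 20 + [{"tone", "light"}] * 20)

# Control: the compound is conditioned with no prior tone-only phase.
control = rescorla_wagner([{"tone", "light"}] * 20)

print(blocking)  # tone near 1.0, light near 0.0 -- conditioning to the light is blocked
print(control)   # tone and light each near 0.5 -- the two cues share the prediction
```

In the blocking run, the tone already predicts the shock perfectly by the time the light appears, so the prediction error is near zero and the light stays uninformative, matching what Kamin's rat showed behaviorally.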
LATENT LEARNING Experiments on latent learning provide other evidence to support the role of cognition in learning. Latent learning (or implicit learning) is unreinforced learning that is not immediately reflected in behavior. In one study, researchers put two groups of hungry rats in a maze and required them to find their way from a starting point to an end point (Tolman & Honzik, 1930). The first group found food (a reinforcer) at the end point; the second group found nothing there. In the operant conditioning view, the first group should learn the maze better than the second group, which is exactly what happened.
However, when the researchers subsequently took some of the rats from the non-reinforced
group and gave them food at the end point of the maze, they quickly began to run the maze
as effectively as the reinforced group. The non-reinforced rats apparently had learned a great
deal about the maze as they roamed around and explored it. However, their learning was
latent, stored cognitively in their memories but not yet expressed behaviorally. When these
rats were given a good reason (reinforcement with food) to run the maze speedily, they
called on their latent learning to help them reach the end of the maze more quickly.
Outside a laboratory, latent learning is evident when you walk around a new setting to get "the lay of the land." The first time you visited your college campus, you may have wandered about without a specific destination in mind. Exploring the environment made you better prepared when the time came to find that 8 a.m. class.
Insight Learning
Like E. C. Tolman, the German gestalt psychologist Wolfgang Köhler
believed that cognitive factors play a significant role in learning.
Köhler spent four months in the Canary Islands during World War I
observing the behavior of apes. There he conducted two fascinating
experiments—the stick problem and the box problem. Although these
latent learning (implicit learning) Unreinforced learning that is not immediately reflected in behavior.
two experiments are basically the same, the solutions to the problems are different. In both
situations, the ape discovers that it cannot reach an alluring piece of fruit, either because
the fruit is too high or because it is outside of the ape's cage and beyond reach. To solve
the stick problem, the ape has to insert a small stick inside a larger stick to reach the fruit.
To master the box problem, the ape must stack several boxes to reach the fruit (Figure 5.13).
According to Köhler (1925), solving these problems does not involve trial and error
or simple connections between stimuli and responses. Rather, when the ape realizes that
its customary actions are not going to help it get the fruit, it often sits for a period of
time and appears to ponder how to solve the problem. Then it quickly rises, as if it has
had a flash of insight, piles the boxes on top of one another, and gets the fruit. Insight learning is a form of problem solving in which the organism develops a sudden insight into or understanding of a problem's solution.
The idea that insight learning is essentially different from learning through trial and error
or through conditioning has always been controversial (Spence, 1938). Insight learning
appears to entail both gradual and sudden processes, and understanding how these lead to
problem solving continues to fascinate psychologists (Chu & MacGregor, 2011). In one
study, researchers observed orangutans trying to figure out a way to get a tempting peanut out of a clear plastic tube (Mendes, Hanus, & Call, 2007). The primates wandered about their enclosures, experimenting with various strategies. Typically, they paused for a moment before finally landing on a solution: Little by little they filled the tube with water that they transferred by mouth from their water dishes to the tube. Once the peanut floated to the top, the clever orangutans had their snack. More recent research shows that chimps can solve the floating peanut task through observational learning (Tennie, Call, & Tomasello, 2010).
Insight learning requires thinking “outside the box,” setting aside previous expectations
and assumptions. One way that insight learning can be enhanced in human beings is through
multicultural experiences (Leung & others, 2008). Correlational studies have shown that time
spent living abroad is associated with higher insight learning performance among MBA
students (Maddux & Galinsky, 2007). Furthermore, experimental studies have demonstrated
that exposure to other cultures can influence insight learning. In one study, U.S. college
students were randomly assigned to view one of two slide shows—one about Chinese and
U.S. culture and the other about a control topic. Those who saw the multicultural slide show
scored higher on measures of creativity and insight, and these changes persisted for a week
(Leung & others, 2008). Being exposed to other cultures and other ways of thinking can be
a key way to enhance insight and creativity, and a person does not have to travel to enjoy
the learning benefits of multicultural experience. For more on this topic, see the Intersection.
insight learning A form of problem solving in which the organism develops a sudden insight into or understanding of a problem's solution.
FIGURE 5.13 Insight Learning Sultan, one of Wolfgang Köhler’s brightest chimps, is faced with the problem of reaching a cluster of bananas overhead.
He solves the problem by stacking boxes on top of one another to reach the bananas. Köhler called this type of problem solving "insight learning."
What makes insight learning unique is that Aha! moment, but that moment often comes after some trial and error during which many of the wrong answers have been thoroughly dismissed.
INTERSECTION
Educational and Cross-Cultural Psychology: How Does Cultural Diversity Affect Learning?

One of the most dramatic changes in U.S. higher education is the shift to a more diverse student body. The table below summarizes changes in the social landscape of four-year colleges and universities from 1976 to 2011 (U.S. Department of Education, 2011), as well as the projected changes by 2019 (Chronicle of Higher Education, 2011).

How diverse is your learning environment? How do you benefit from the diversity at your school?
Groups                                Percentage of      Percentage of      Projected Percentage
                                      Students 1976      Students 2011      Increase 2011–2019
Women                                 41                 57                 +18
African American                      9.4                14                 +24
Asian American                        1.8                7                  +23
Non-Latino White/European American    83                 62                 +5
Latino                                3.5                12                 +37
Research has shown that diversity is beneficial to student learning. For instance, in a study of over 53,000 undergraduates at 124 colleges and universities, students' reported interactions with individuals from other racial and ethnic backgrounds predicted a variety of positive outcomes, including academic achievement, intellectual growth, and social competence (Hu & Kuh, 2003). Many universities recognize that as U.S. society becomes more multiculturally diverse, students must be prepared to interact in a diverse community as they enter the job market. Participation in diversity courses in college is related to cognitive development (Bowman, 2010) and civic involvement (Gurin & others, 2002), with outcomes especially positive for non-Latino White students (Hu & Kuh, 2003).

How does exposure to diversity influence the learning of ethnic minority members? In this case, the link between diversity and academic performance is more complicated. For one thing, due to societal attitudes about their ethnic group, minority students may worry about taking learning risks and offering ideas. For another, these students may be influenced by their concerns about how others perceive them and their ethnic group. These feelings can take a toll on academic efforts (Ely, Thomas, & Padavic, 2007; Guillaume, Brodbeck, & Riketta, 2012). However, diversity may have benefits for these individuals as well, especially as the university setting becomes increasingly diverse.

A recent study examined diversity and individual learning in a group context at an international business school in Great Britain (Brodbeck, Guillaume, & Lee, 2011). Students represented a variety of ethnic backgrounds. The British students included White/Anglo students and ethnically Indian and Pakistani students. In addition, some students were Black Caribbean, Black African, Chinese, and Arab, and some were from other European countries. Students were assigned to workgroups for a course that involved running a car company in a computer simulation game. In groups of typically five, the students met weekly, developed a business plan, made decisions together, and tracked their company's progress. Each student also wrote an individual essay that was part of the course grade. The groups varied in terms of ethnic diversity. The results of the study showed that ethnic minority students who were in groups with low diversity tended to perform relatively poorly, but their performance increased as group diversity did. Especially important, though, was the inclusion of one other person from the student's same ethnic group. The researchers estimated that for an ethnic minority student, being in a diverse group that included at least one other member from his or her own group was associated with the difference between a C+ and an A grade. The highest level of learning among ethnic minorities occurred in groups in which the ethnic minorities made up the majority of the group. And White/Anglo students performed very well in groups in which they were the only member of their ethnic group.

There is no question that the undergraduate student population continues to change dramatically. This development would appear to be a very good thing for learning. Diverse groups provide broader knowledge and more varied perspectives than do homogeneous groups, to the positive benefit of all group members. As university communities become more diverse, they offer students an ever-greater opportunity to share and to benefit from those differences.
1. E. C. Tolman emphasized the purposive-
ness of behavior—the idea that much of
behavior is oriented toward the achieve-
ment of
A. immortality.
B. altruism.
C. goals.
D. self-esteem.
2. When the answer to a problem just pops
into your head, you have experienced
A. latent learning.
B. insight learning.
C. implicit learning.
D. expectancy learning.
3. A type of learning that does not involve
trial and error is
A. insight learning
B. <