University of Idaho Psychology of Learning
Lesson 3: Lecture 1 Transcript
 
Transcript of Audio Lecture
In the last several sections, we have been examining how stimuli are related, and how that affects some particular type of response. In essence, that is what we call classical conditioning. In this section, we begin with a discussion of instrumental conditioning. Here, the focus is a little bit different from what we saw in classical conditioning. In addition to that, some people also call instrumental conditioning operant conditioning. That is, in fact, an incorrect term, and we will talk about the differences between the two a little bit later in the course. So let's begin by reviewing the differences between classical conditioning and instrumental conditioning. We will begin with slide two.

In classical conditioning, basically what we saw was that the relationship between two stimuli (that is, the conditioned stimulus and the unconditioned stimulus) was what was important in producing the conditioned or unconditioned response. So what we were looking at is how stimulus one (the CS) and stimulus two (the UCS) work with each other, what variables influence each, and how we are able to get some kind of a conditioned response.

In instrumental conditioning, this changes somewhat. Basically, what we look at is the relationship between some stimulus and a particular response. That is, the focus is on what a particular stimulus does and how that stimulus then influences some response. This is what we call, in psychology and learning theory, Stimulus-Response Psychology, or what is also called S-R Psychology. Instrumental conditioning begins around the same time as classical conditioning; the different theoretical approach that one takes ultimately makes the difference between the two.

So let's begin with a discussion of the two major theorists we talk about in instrumental conditioning, since they really began the field: Thorndike and Guthrie. We'll start with Thorndike, and we'll do that on slide four.

Thorndike is perhaps the greatest learning theorist of all time, and as you can see on the slide, he published an enormous amount: in total, some five hundred books, articles, and monographs. He did a lot of his work in learning, but he also worked in a wide variety of other areas; as you can see here, he worked in education and in comparative psychology. The key for Thorndike was basically that he attempted to measure everything, which is what we should do in psychology in some form or another.

Thorndike began his work with Cattell, who was primarily working with chickens. So, as a result, he begins to work with chicks. Ultimately he switches to cats and develops a methodology that was more suitable for animal learning but could also be applied to human learning as well.

Thorndike's underlying principles were related to a concept called connectionism. In this case it's very similar to what we would call associationism, à la the British associationists. He believed that associations between sensory impressions and actions were very, very important, and that these sensory impressions were basically neurological in nature. So the connection between the two was in essence some kind of neurological mechanism. To study that, he began to look at a particular type of learning, what is now called trial and error learning. For Thorndike, it was the most basic form of learning. What Thorndike would do is work with cats in puzzle boxes. Specifically, he would put a cat in a box that had some kind of way for the cat to get out. That is, it had some kind of escape mechanism that the cat had to push, pull, or otherwise operate to open up the door.

What Thorndike would do is put the cat in the box and like all cats it goes nuts trying to get out. After some five to ten minutes or so, the cat accidentally hits some kind of escape mechanism, the door opens, and the cat gets out. Then Thorndike would pick the cat up, stick it back in the box and see what happens again.

Now, Thorndike was trying to test something very simple. That is, do you get gradual learning, or do you get what we would call insight learning? That is, you start to try to figure something out, a light bulb goes off in your head (pop), and you have the solution. So what he is trying to test is something to that effect. What we should look at is shown in slide eight. That is, if the cat forms some kind of association, then the next time you stick it back in the box, it should get out faster. If learning happened by insight, we should see a very rapid drop in the learning curve. As you can see, there are two examples: insight learning on the right and a more gradual learning process on the left. So what happens? We see on slide nine that the learning curve was very gradual. What Thorndike concluded was that learning was an incremental or gradual process, and that it occurred in very small steps. It was not produced by insight (that is, you have some light bulb go off in your head and then voilà, you have the solution); instead, you learn things gradually over time.
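
As a purely illustrative aside (this is not from the lecture), the difference between the two outcomes can be sketched with some invented escape times. An insight curve stays flat and then drops suddenly once the solution is found, while the gradual curve Thorndike actually observed shortens a little on every trial. The numbers below are made up for illustration only.

```python
# Hypothetical escape times (seconds) over ten trials, invented for illustration.

# Insight learning: the cat is slow until the "aha" moment, then fast ever after.
insight_times = [300, 290, 310, 295, 15, 12, 14, 11, 13, 12]

# Gradual trial-and-error learning: each trial shaves a little time off the last,
# which is the pattern Thorndike reported for his cats.
gradual_times = [300, 260, 225, 195, 170, 150, 132, 117, 104, 93]

for trial, (insight, gradual) in enumerate(zip(insight_times, gradual_times), start=1):
    print(f"trial {trial:2d}: insight = {insight:3d}s   gradual = {gradual:3d}s")
```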

In addition, learning for Thorndike was not mediated by ideas. That is, animals didn't kind of look the situation over. For example, you stick the animal in the cage and it looks around; it looks at this, it looks at that, it processes things, and on and on, and then it makes some particular response. Instead, what you see with the cat when you stick it in the box is that it starts going all over the place. They don't think it over, says Thorndike. So he's rejecting the reasoning account that other people had proposed in favor of some kind of direct connection that is built up gradually over time.

Now he took this idea and then tried it with other organisms. And lo and behold, he found the same thing. As we see in slide 11, he did the same thing with apes, and he found the same kind of solutions going on with humans. So, we stick you in a cage and let’s say that you have a variety of different things in the cage. Then we see what happens. Generally, you don’t look around and process and on and on and on. You go over and try different stuff, and that’s exactly the way other animals learn as well.

Now, Thorndike began to develop some theories about learning, and they're broken down into two major periods: theories before 1930 and theories after 1930. During the early period, he developed several laws, which I have listed on slide 12, including the law of readiness, the law of exercise, and the law of effect. So let's talk about each of these.

We begin with the law of readiness in slide 13. It really has three parts: 1. When someone is ready to perform some act, doing so is satisfying. 2. When someone is ready to perform some act and is not allowed to do it, it is annoying. 3. When somebody is not ready to perform an act and is forced to do so, it is annoying. So in essence, when you're ready to do something and you want to do it, then it's a very satisfying thing. When you're not ready to do it or you don't want to do it, you're not very happy about it.

The law of readiness, says Thorndike, helps explain what happens when something interferes with goal-directed behavior: when this occurs, you become very frustrated. When you ask someone to do something that they're not ready to do, it's also very frustrating. The classic example is working with your kid. You want them to do lots and lots of math, they're not ready to do the math, and they don't want to do math. So what do you have? You have a great big fight on your hands. The same principle applies with other things as well.

Now, at the time, there were some important terms. One is the concept of satisfying and annoying, so let's define what satisfying and annoying mean. We will talk about that in slide 15. These terms were acceptable then, and still are for some people even today. A satisfying state of affairs was one where the animal does nothing to avoid it, and often does things, in essence, to obtain, achieve, and preserve it. Whereas a discomforting or annoying state of affairs was one where the animal avoids or abandons the situation. So this was the first law that Thorndike developed.

The second law we begin discussing is on slide 16. That is the law of exercise, and it goes something like this: connections between stimuli and responses are strengthened when they're used, and the more often they're used, the stronger they become. This part was called the Law of Use. In contrast, connections between stimuli and responses are weakened when they are not practiced; the connection between the stimulus and the response kind of goes away when you don't practice or use it. We call that the Law of Disuse.

So strengthening, as we see in slide 17, is basically an increase in the probability that a response will occur when a stimulus is presented; the bond between them is strengthened. So the next time the stimulus is presented, there is a high probability that the response will occur. On the other hand, if you weaken it, what should happen? Well, there should be a decrease in the probability that the response will occur.

Now the last law that Thorndike really develops and which has a major impact is shown on slide 18. That is what is called the Law of Effect. What the law of effect basically says is this, “When a response is followed by a satisfying state of affairs, the strength of the connection is increased and when a response is followed by an annoying state of affairs, the strength of the connection is decreased.”

You can restate this in more conventional terms. That is, when a stimulus leads to some kind of response that leads to some kind of goodie, that is reinforcement, and the stimulus-response connection is strengthened. If the stimulus leads to a response which leads to the delivery of something aversive (some kind of punishing stimulus), the stimulus-response connection is weakened. In some ways, that's very similar to some of the operant theories that we'll discuss a little bit later.
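
To make the law of effect a little more concrete, here is a minimal sketch (not from the lecture; the variable names and update rule are my own assumptions for illustration) of a single stimulus-response connection whose strength goes up after a satisfying consequence and down after an annoying one, and whose strength sets the probability that the response occurs the next time the stimulus appears.

```python
import random

# Illustrative sketch of the law of effect, not Thorndike's actual model:
# one stimulus-response "connection" whose strength changes with consequences.

strength = 0.5        # assumed starting connection strength, between 0 and 1
step = 0.1            # assumed size of each incremental (gradual) change

def update(strength, outcome):
    """Strengthen the S-R bond after a satisfier, weaken it after an annoyer."""
    if outcome == "satisfying":
        return strength + step * (1.0 - strength)   # small increase toward 1
    if outcome == "annoying":
        return strength - step * strength           # small decrease toward 0
    return strength

def response_occurs(strength):
    """The stronger the bond, the more likely the response when the stimulus appears."""
    return random.random() < strength

# A few trials in which every response that occurs is reinforced: the strength
# creeps upward, so the response becomes more probable -- incremental, not insightful.
for trial in range(1, 11):
    if response_occurs(strength):
        strength = update(strength, "satisfying")
    print(f"trial {trial:2d}: connection strength = {strength:.2f}")
```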

Now what were the implications of all of this? Well, as we see in slide 20, it went beyond traditional association theories, which basically claimed that the frequency of occurrence, or contiguity, was the determiner of association strength. So, in essence, what the traditional theorists were basically saying is that all you need to do is have things occur together to get the strength to increase. What were the implications of this? Well, as we see on slide 21, you needed to have some kind of association and/or contingency of occurrence; you had to have, in essence, both. However, for Thorndike, the consequences were also extremely important for determining association strength. If you don't have any good consequences, that behavior then goes down.

So how is the stimulus-response connection strengthened? This is shown in slide 22. Basically, he postulated that there was some kind of reaction that occurred in the nervous system and, when it was triggered, the response caused something to feel good. This was neurophysiological in nature and not conscious to the organism. As we talk about things today in physiological psychology, there are actual brain structures that support that kind of argument, within the hypothalamus, the hippocampus, and some other particular brain structures. So in essence, Thorndike had some early ideas that would impact later areas in psychology. Now, in addition to the three major laws, there were other concepts that Thorndike developed as well. These weren't really as important as the primary laws, but they are shown on slide 23: multiple response, set or attitude, and others. So let's talk about multiple response first.

Multiple Response, for Thorndike, was in essence the first step in all learning, and it basically goes something like this. You make an attempt at something. If it doesn't solve your problem, you try something else. So basically what we continue to do is try and try and try until we make some kind of response that solves the problem.

Set or Attitude. This is shown on slide 25, and it is what the organism brings with it. Every organism, whether animal or human, brings information and a different history into the particular situation. As we see, individual differences in learning can be explained by lots of other things, including your cultural differences, your genetic history, how fatigued you are, whether you have been deprived of some particular goodie, or whatever. What in essence acts as a satisfier or an annoyer depends upon the organism's background and body state at the time of learning. For example, you might love chocolate chip cookies (which, of course, is the only kind of cookie), but other people might prefer some kind of peanut butter treat instead. So, in essence, if you give that person a chocolate chip cookie, it could act as an annoyer, whereas if you gave a peanut butter wafer to me, it would be an annoyer, while a chocolate chip cookie would be a very good satisfier. However, we also need to remember that after 30 chocolate chip cookies, if I had to eat another chocolate chip cookie, that could become an annoyer too. So deprivation, how much food you've had, and so on will become extremely important as you examine the set or attitude of the organism.

The third new concept is the Prepotency of Elements (basically the partial or piecemeal activity of a situation). That is, only some elements of the situation govern behavior. So in essence, you take all the stimuli in the puzzle box, whatever they are: darkness, a string, maybe a ball. The box has all sorts of different things in it, and over a period of time the animal begins to respond to only one of those. For the cat, it was the cord; pulling on it opened up the door. So in essence, how we respond to some particular set of stimuli depends on what we attend to and what responses are attached to what we attend to. That is true today: you have lots and lots of different stimuli in your environment, and you attend only to the couple of them that are important for you.

So we've now talked about a variety of different things, which were all occurring before 1930. Well, what happened after 1930? We see this starting on slide 27. The first thing that Thorndike renounced was the law of exercise. That is, mere repetition didn't strengthen a response. You might get some minor improvement in the response, but not much else. In addition to that, he also revised the law of effect. Basically, what he said was this: the law of effect was only true when the response resulted in a satisfying state of affairs, and reinforcement increases the strength of the connection. Punishment had nothing to do with the strength of the connection. Now, later theorists would ultimately show that Thorndike was wrong about punishment. The reason he was wrong was due to the particular punisher that Thorndike used to reach his conclusions, as the theorists who followed him demonstrated; so in essence, he wasn't very accurate on that particular point.

Finally, belongingness. If the elements of an association somehow belong together, the association between them is learned and retained more readily than if the elements did not belong together. In essence, when you take elements of an association that somehow work together in some kind of system, you begin to associate them with each other more than if the elements did not go together. For example, you might have a tennis ball and a baseball and some other things, versus a tennis ball and a bottle of soda and a box. Ultimately, elements that belong together are retained better than elements that don't belong together.

So, in this section we’ve covered some of the basic concepts that were described by Thorndike. In the second section and later sections, we’re going to talk about some other instrumental conditioning theorists and their impacts.
