University of Idaho Psychology of Learning
Lesson 4.1: Lecture 3 Transcript
 
Transcript of Audio Lecture
In the last sections, we've been examining some aspects of reinforcement. In this section we begin to talk about schedules of reinforcement, or what we're going to call schedules of operant conditioning. So let's begin with a discussion of how we do that. We begin, as we see in slide two, with an operant chamber. An operant chamber is a device we use to measure behavior. It was developed by B. F. Skinner and today is also called a Skinner box. Let's take a look at what a Skinner box looks like.

In essence, the Skinner box looks like what you see here (slide 3), with some modifications. There is usually some kind of a food tray, there is some kind of a key or a bar, there may be lights in the system, and on the bottom of the operant chamber there are usually wire shock grids. These are connected to an electrical shock device that one can use. The remainder of the operant chamber is usually made of clear Plexiglas and has a lid. In general, what we do is open up the lid and drop the animal into the box. The animal then wanders around and engages in particular types of behavior.

We also have some kind of a recording device, connected to the bar or the key in the operant chamber. The recording device is basically a needle resting on a sheet of paper that is wound around a moving drum. The needle steps up as the organism responds. So what we end up seeing is a figure similar to the one in slide five. When the recorder is working, the needle moves each time the organism makes a response. This results in a little vertical blip. When the subject makes another response, it makes another little blip. Now, the paper continues to move between each of these responses, but the needle, in essence, just stays in place. So when the organism pauses, we get a flat horizontal line, whereas the vertical blips indicate behavior. As we can see in the figure on slide five, this organism has a very high rate of responding: you're getting many, many blips in a very short period of time. Then the needle gets up to the very top of the paper, drops back down, and the process is repeated.

Now, what you will see in journals are different figures like these. What we see in the left figure of slide 6 is an organism responding at some high rate, with the record going up at the same rate. The needle then stops and drops back down to the bottom of the page. The record goes up again and, as we can see, there's kind of a stair-step pattern. The record climbs up to a point and then there's a long flat stretch. This is what you observe when the organism has stopped pushing the bar or pressing the key and we get a pause in the behavior.

Now, you can see there are two rates of behavior in this figure. There is a high response rate, shown on the left side of the figure by a steep line with very small flat pauses. On the right side, we have a relatively long pause between each burst of behavior. In essence, both records reach the same height, but the response rate in the left figure is much greater than the response rate in the right figure.
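To make the recorder concrete, here is a minimal Python sketch, my own illustration rather than anything from the lecture, that turns a list of response times into the points a cumulative record would trace. The needle steps up one unit per response while the paper moves at a constant speed, so steep segments mean fast responding and flat segments mean pauses.

def cumulative_record(response_times):
    """Return (time, cumulative count) points tracing a cumulative record."""
    points = [(0.0, 0)]
    for count, t in enumerate(sorted(response_times), start=1):
        points.append((t, count - 1))  # flat segment: needle holds until the response
        points.append((t, count))      # vertical blip: needle steps up at the response
    return points

# A burst of fast responding, a long pause, then a few more responses:
times = [1, 1.5, 2, 2.2, 2.4, 9, 9.5, 10]
for t, n in cumulative_record(times):
    print(f"t={t:4.1f}  responses={n}")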

Now that we've described a little bit about how we record behavior, let's talk about some types of schedules that have been discovered within operant conditioning. Again, most of these types were discovered and demonstrated by Skinner. Let's begin with a discussion of slide seven. As you can see here, there's a variety of different types of schedules: there are continuous schedules, time and response schedules, and differentiation schedules. So let's start by talking about continuous schedules on slide eight first.

In a continuous schedule, what you do is reinforce or punish the organism after every response the organism makes. So let's say you're using a reinforcement paradigm. When the organism makes a correct response, you give it a goodie. As a result of the reinforcement, you get extremely high rates of responding. So let's say that you are doing something to make some money. You make a widget and you get five bucks. Make another widget, you get five bucks. So what's your rate of responding? Well, it's extremely high; you're working your tail off. Then we stop giving you the five bucks, and we don't tell you when we're going to do that. So you make the first widget, you make the second widget, you're getting five bucks and things are going great, then I stop giving you the money. You make a widget, you get nothing! Make another widget, you get nothing! What happens on the third widget? Well, you begin to reduce your behavior. Ultimately you go on an extinction schedule and your behavior decreases to zero.

Depending on the type of reinforcer that you're giving, you can also get rapid satiation. Let's say you're giving food to the organism, such as a rat, as the reinforcer. If you give lots and lots of food in a very short period of time, the organism will become satiated and will no longer engage in the behavior.

The only difference in a punishing paradigm is that every time the organism makes the response, you give it a zap or a time-out or something like that, and you get very rapid response suppression. On the other hand, when you stop giving the punishing stimulus, the behavior returns at a very rapid rate.

So that is a continuous schedule. Now there are other schedules as well. These are shown in slide nine. In the past we called these partial schedules; today we call them time and response schedules, and there are two types. There are ratio schedules, which are based on responding, and there are interval schedules, which are based on intervals of time. So let's begin by talking about ratio schedules first. This is shown in slide 10.

As we showed in the last slide, ratio schedules require a number of responses to occur before you give a reinforcer, and there are two types. The first type is one we've kind of described before, called a fixed ratio schedule. Here a reinforcer is given after a fixed number of responses have been emitted. So for example, when you make five widgets, shoes, or whatever, you get $10. Make another five widgets, you get another $10; a third five widgets, another $10. The schedule you're on is the number of responses that are required before you receive the reinforcer. In this example, you make five widgets per reinforcer, thus you are on what we call an FR-5, or fixed ratio five, schedule.

Now, what are some attributes of this particular schedule? Well, the first thing, as we see in slide 12, is that it gives very high rates of responding. The organism responds at a very, very rapid rate. Thus, you can get a lot of productivity out of an organism or a person by using this particular type of schedule. However, if you make the schedule too thin, say a fixed ratio 2000, so you have to make 2000 widgets before you get paid, the organism will stop responding. That is what we call ratio strain: you get very rapid extinction and low levels of responding. So what you want to do is use a moderate schedule, say an FR-5, FR-10, or FR-20. As a result, even when the organism isn't reinforced one time, it will still engage in the behavior. In general, when you're working with a fixed ratio schedule, you want to begin with a relatively low schedule, an FR-2, FR-4, FR-5. Then you gradually increase it: FR-10, FR-20, and so on.

Now there's one thing that we need to note, and this is shown in slide 13. We can also have a fixed ratio one schedule. This works exactly the same as a continuous schedule. Thus, an FR-1 schedule is simply another name for a continuous schedule.
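To make the ratio idea concrete, here is a minimal Python sketch, again my illustration and not anything from the lecture, of a fixed ratio contingency. It delivers a reinforcer after every nth response, and with n set to 1 it behaves exactly like the continuous schedule we just described.

class FixedRatio:
    """FR-n: deliver a reinforcer after every nth response."""
    def __init__(self, n):
        self.n = n
        self.responses = 0

    def respond(self):
        """Record one response; return True if it earns a reinforcer."""
        self.responses += 1
        return self.responses % self.n == 0  # every nth response is reinforced

# FR-5: make five widgets, get paid. An FR-1 is a continuous schedule.
fr5 = FixedRatio(5)
print([fr5.respond() for _ in range(10)])
# -> [False, False, False, False, True, False, False, False, False, True]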

Now, what's a final aspect of a fixed ratio schedule? Let's use an example so you get an idea of ratio strain. I'll tell you a story.

A long time ago, in a galaxy far, far away, I was doing some consulting for a company with a team of others. This company, for lack of a better term, made widgets. They had a union workforce that made the widgets. What was happening was that the company was going broke. One major reason was that lots of widgets were being made (for example, 50,000 widgets in a particular day), but on average 30 to 40% of those widgets had some kind of defect and thus had to be redone, fixed, or whatever. The company was spending lots and lots of money and was going broke because they couldn't get good productivity. So they called us in to try to get some help. We looked at the situation for a couple of days or so, and then we decided to change the behavior within the company. So we got the management together to make some kind of change. What we did was this. We made an agreement with the management and the union that when the union made (let's just say) 50,000 widgets a day with no mistakes or errors, the workers could go home for the day. Further, they would get paid for the remainder of the day. How long do you think it took the workers to make the 50,000 widgets? Well, in essence it took them about 4.5 hours. So they would start at 8 o'clock, and around noon they were all gone. This was at 100% quality, so you weren't having any problems with any of the material.

Now the management and the board of directors looked at this and said, "Hey, we need to change this; these people are working in essence only 4 hours a day and we're paying them for an 8-hour day." So we went back and renegotiated with the union to make 100,000 widgets, double the amount, again with no problems with the quality of the widgets. And how long did it take? It took them about 5.5 hours to make 100,000 widgets. So they doubled the productivity with only one more hour of work. Can you imagine what's going on?

Now the management and the board of directors got greedy and said, "Hey, this is going too well; we shouldn't still be paying them for not working. We need to renegotiate the contract and have them make 150,000 widgets a day." So they took this to the union. The union said take a hike, we'll go back to the way we were. So they all went back to the way they were, and the company went bankrupt in six months. Here is a classic example of ratio strain: when we make the fixed ratio schedule too thin, the reinforcer stops being effective.

Now there's a second type of ratio schedule, and this is shown in slide 14. It is called a variable ratio schedule. In a variable ratio schedule, a reinforcer is given after a variable number of responses have occurred; the number of responses required for reinforcement changes every time. We see that here in the example that I have. The first time, you make two widgets and you get 10 bucks. Then you have to make eight widgets to get 10 bucks, then six widgets for 10 bucks, then four widgets for 10 bucks. So the number of responses required for reinforcement changes every time.

So, how do you tell what schedule you're on? Basically, the schedule is the total number of responses divided by the number of reinforcers that are given. So if we look at the previous example on slide 14, what we see is that we made a total of 20 widgets. We divide that by the 4 reinforcers that were given, and as a result, we get a variable ratio 5 (VR-5) schedule. So on average, every fifth response that the organism makes is reinforced.
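Here is the same arithmetic as a small Python sketch, once more my own illustration. A variable ratio schedule draws a new response requirement each time, and the schedule's value is just total responses divided by total reinforcers.

import random

class VariableRatio:
    """VR: deliver a reinforcer after a varying number of responses."""
    def __init__(self, requirements):
        self.requirements = requirements      # e.g., the lecture's 2, 8, 6, 4 widgets
        self.needed = random.choice(requirements)
        self.count = 0

    def respond(self):
        self.count += 1
        if self.count >= self.needed:
            self.count = 0
            self.needed = random.choice(self.requirements)  # changes every time
            return True
        return False

# The lecture's example: 2 + 8 + 6 + 4 = 20 responses over 4 reinforcers.
print(f"VR-{20 // 4}")                         # -> VR-5
vr = VariableRatio([2, 8, 6, 4])
print(sum(vr.respond() for _ in range(1000)))  # roughly 200 reinforcers, i.e., 1 in 5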

What are some attributes of this schedule? Well, as we see in slide 16, it gives extremely high rates of responding. It is also extremely resistant to extinction; thus it doesn't suffer from ratio strain as much as fixed ratio schedules do. So again, in ratio schedules, what we're looking at is the number of responses that the organism has to make for a particular delivery of reinforcement.

So now that we've talked about ratio schedules, let's look at the second type of schedule. These are called interval schedules, and they are based on time. In essence, as we see in slide 17, two things must occur in interval schedules.

The first thing that must occur is that a certain interval of time must elapse before the organism can get a reinforcer: one minute, five minutes, twenty minutes. Number two, the organism must make one particular response during the time interval to get the reinforcer.

Thus, as we see at the bottom here, the first response the organism makes during the time interval is the one that is actually reinforced. Note that the number of responses emitted during the time block is irrelevant. Whether the organism makes one response or 50 responses during the time block doesn't matter; all the organism has to do during that time block is make that one particular response to get the reinforcement.

So, what do we look at in relation to specific types of interval schedules? The first type of interval schedule that we'll talk about is a fixed interval schedule. In a fixed interval schedule, as we see in slide 18, a reinforcer is given for the first response that occurs after a fixed period of time has elapsed. So, as we see here in the example, every five minutes a reinforcer becomes available. If the organism responds during that five-minute interval, it gets the reinforcer; no response, no reinforcer.
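A fixed interval contingency is just as easy to sketch, again a hypothetical illustration based on the lecture's description: a reinforcer becomes available after t time units, and the first response after that point collects it; earlier responses earn nothing.

class FixedInterval:
    """FI-t: the first response after t time units have elapsed is reinforced."""
    def __init__(self, t):
        self.t = t
        self.available_at = t

    def respond(self, now):
        """Return True if a response at time `now` earns the reinforcer."""
        if now >= self.available_at:
            self.available_at = now + self.t  # the clock restarts after reinforcement
            return True
        return False                          # too early: extra responses are wasted

# FI-5 minutes: responses at minutes 1 through 4 earn nothing; minute 6 pays off.
fi = FixedInterval(5)
print([fi.respond(m) for m in (1, 2, 3, 4, 6)])  # -> [False, False, False, False, True]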

So what are some attributes of this schedule? As we see in slide 19, it doesn't give the rates of responding that ratio schedules do. What we see is that the organism takes a break after it receives the reinforcement. This develops into what we call a scalloping effect on the cumulative recorder.

The second example of an interval schedule is shown in slide 20. This is what is called a variable interval schedule. Here the reinforcer is given for the first response that occurs after a variable period of time has elapsed, but again, the time period changes every time. So in this example, after the first three minutes a reinforcer is available, then after another stretch of minutes a reinforcer is available, and so on. So, how do we figure out the schedule? Basically, it's the total number of minutes divided by the number of time intervals in which a reinforcer is available. We have a total of 20 minutes in the previous example, divided by four opportunities to receive a reinforcer; thus we have a VI-5 minute schedule.
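The VI arithmetic works just like the VR arithmetic above. A short sketch, where the individual interval lengths besides the lecture's three minutes are my own hypothetical values chosen to total 20 minutes:

# Four intervals summing to 20 minutes across 4 opportunities for reinforcement.
intervals = [3, 7, 6, 4]                   # minutes; only the 3 comes from the lecture
vi_value = sum(intervals) / len(intervals)
print(f"VI-{vi_value:g} minute schedule")  # -> VI-5 minute schedule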

Well, what are some attributes of variable interval schedules? Number one, they give very low rates of responding. However, in academics, they can give extremely high rates of responding, a la studying behavior. Let's give an example of that. Say that instead of you having to take the three traditional exams we have scheduled throughout this course, I give you pop exams. That is, I can give you an exam at any time during the course. So right now I could say, "Let's have an exam," and to do well you would need to answer all the questions correctly. How would you like that? Well, in most cases, you wouldn't like that at all, and as a result my performance evaluations or student teaching evaluations would go down. However, VI schedules are very, very resistant to extinction. So although this schedule doesn't give high rates of responding, it is extremely resistant to extinction.

Let's show an example of some schedules here. As we see on slide 23, there are a couple of major ones. The first, on the upper left, shows a fixed ratio paradigm: you get rapid rates of responding, then a post-reinforcement pause, a little delay after every reinforcer. You can see the rapid responding in the steepness of the curve. That's in contrast to the next schedule, the variable ratio schedule, which has no pauses and extremely high resistance to extinction.

Now look at the lower left hand square, where we have a fixed interval schedule.

Notice here you have a long pause after the reinforcement is given. So after you take an exam and you get your A, you take a break for a while before starting to cram for the next exam. In essence, what you get is a scallop effect (called an FI scallop). Finally, the variable interval schedule in the lower right-hand corner is basically a long, low, steady response rate with minimal pausing. Note the difference between the variable ratio schedule, with its high rate of responding and high resistance to extinction, and the variable interval schedule, with its low rate of responding. We see a very similar pattern in the figure shown on slide 24.

So now we've talked about two major types of schedules: time schedules and response schedules. Let's talk now about the last type of schedule. These are called differentiation schedules, or IRT (inter-response time) schedules. Basically, differentiation or IRT schedules are used when the reinforcer depends on both the time and the number of responses the organism makes. They can be extremely effective in increasing or decreasing behavior. So let's show the first one here. This is called differential reinforcement of high rates of responding, or what we call a DRH schedule. Here you have to respond at a high rate within a certain time period. That is, you basically have a 25-page term paper due in two weeks, so you work your tail off to get it in and receive your A. It is a very effective schedule and you get high rates of responding. The problem is, as we see in slide 27, you can't set the level too high; if the organism doesn't respond enough, it'll ultimately receive less reinforcement and decrease its response rate. In essence, this schedule looks very similar to an FI schedule: you work very, very hard, you get your paper in, and you take a break; or you do the same thing with an exam.
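As a sketch, mine rather than the lecture's, a DRH contingency reduces to a simple check: did at least n responses occur within the last t time units?

def drh_reinforce(response_times, n, t, now):
    """DRH: reinforce if at least n responses occurred in the last t time units."""
    recent = [r for r in response_times if now - t <= r <= now]
    return len(recent) >= n

# Hypothetical numbers: require 10 responses within the last 60 seconds.
times = [2, 5, 9, 14, 20, 26, 33, 41, 50, 58]
print(drh_reinforce(times, n=10, t=60, now=60))  # -> True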

Let's look at another type of schedule: differential reinforcement of low rates of responding, or a DRL schedule. These are designed to keep responding relatively low during some particular time period. That is, say you don't want a child to act out in class, so basically you give the kid a reinforcer when acting-out responses are low during a particular period of time. So if Joey or Susie doesn't make any kind of acting-out behavior for a period of five minutes or so, you give them a reinforcer. Guess what happens? The acting-out behavior drops out of the behavioral pool and they don't act out anymore.
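And DRL is the mirror image, sketched under the same assumptions: reinforce only if the target behavior stayed absent for the whole period.

def drl_reinforce(response_times, t, now):
    """DRL: reinforce if no target response occurred in the last t time units."""
    return all(r < now - t or r > now for r in response_times)

# Joey's acting out happened early; none in the last 5 minutes -> reinforce.
acting_out = [0.5, 1.2]                       # minutes when acting out occurred
print(drl_reinforce(acting_out, t=5, now=7))  # -> True (quiet from minute 2 to 7)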

So, what are some aspects of differentiation schedules? Well, they work extremely well in applied settings. You can use them in schools and group homes, and you can even use them in your own home with your own family and kids. For example, if an organism doesn't make a response for, say, five or ten minutes, they're good in church, so to speak, they get a reinforcer for doing that. So in essence, for these types of situations, differentiation schedules can be very effective.

So, what do we conclude about all these different types of schedules (of which I've only listed a few; there are many others)? The key is to identify a particular schedule that you want to use and monitor it. Monitor the behavior, then systematically develop an intervention using the schedule you want while you continue monitoring the behavior. If you find the behavior is changing to a degree that you like, you continue with that particular intervention. If, as you observe the graph or whatever you're using, you find that the behavior is not changing, then you can make a different type of intervention. So in essence, schedules of responding are extremely important, and they can have major impacts on all kinds of behavior, whether it be at home, at school, or in the industry that you want to run. You can use these schedules to significantly increase worker productivity. The classic example is the productivity example that I described earlier, where we doubled the number of widgets being made and significantly increased their quality. You can use that technology and others in applied and work settings.

In the next sections we begin to talk about some variables that are related to reinforcement procedures. So until then, have a great day.

