63 Cards in this Set
- Front
- Back
Identify the three basic stages in any shaping procedure as presented at the beginning of this chapter, and describe them with an example, using either Frank’s case or an example of your own. |
a) Specify the final target behavior. Example: Frank’s final target behavior was to jog a quarter mile each day.
b) Identify a starting behavior: a behavior Frank already performed at least occasionally that resembled the final target behavior.
c) Choose the shaping steps: the successive approximations leading from the starting behavior to the final target behavior. |
|
Define shaping |
The development of a new operant behavior by the reinforcement of successive approximations of that behavior and the extinction of earlier approximations of that behavior until the new behavior occurs. |
|
What’s another name for shaping? |
The method of successive approximations. |
|
Explain how shaping involves successive applications of the principles of positive reinforcement and operant extinction. |
The process begins by positively reinforcing a behavior that occurs occasionally and at least remotely resembles the final target behavior. Once that behavior has occurred several times in succession, it is put on extinction and replaced by a closer approximation to the final target behavior, which is now reinforced. When that approximation has occurred several times in succession, it in turn is extinguished and replaced by a still closer approximation. These steps are applied successively until the final target behavior occurs, at which point the final target behavior itself is reinforced. |
|
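The reinforce-then-extinguish cycle described above can be sketched as a toy simulation. Everything here (the function name, the step size, the minutes-of-jogging scale) is an illustrative assumption, not from the chapter:

```python
# Toy model of shaping: reinforce responses meeting the current criterion,
# then extinguish them and require a closer approximation, until the final
# target behavior is reached. Units (minutes of jogging) are illustrative.

def shaping_steps(start, target, step):
    """Return the sequence of successive approximations (criteria) used."""
    criteria = []
    current = start
    while current < target:
        criteria.append(current)               # reinforce this approximation
        current = min(current + step, target)  # then require a closer one
    criteria.append(target)                    # finally reinforce the target
    return criteria

# Start at 1 minute of jogging and work up to 8 minutes in 2-minute steps
print(shaping_steps(1, 8, 2))  # [1, 3, 5, 7, 8]
```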
Why bother with shaping? Why not just learn about the use of straightforward positive reinforcement to increase a behavior? |
In some cases a desired behavior never occurs, so its frequency cannot be increased through positive reinforcement alone. Shaping establishes such a behavior by starting with a behavior that occurs at a frequency greater than zero and at least resembles the final target behavior. |
|
In terms of the three stages in a shaping procedure, describe how parents might shape their child to say a particular word. |
a) Specify the final target behavior: the child saying and properly pronouncing the desired word, e.g., “Daddy”.
b) Identify a starting behavior: a sound the child already makes that resembles the word, such as “da”.
c) Choose the shaping steps: reinforce successively closer approximations (e.g., “da”, then “da-da”, then “daddy”), extinguishing each earlier approximation. |
|
List five dimensions of behavior that can be shaped. Give two examples of each. |
a) Topography: Example 1: Learning to ice skate with longer and longer strides.
b) Frequency
c) Duration
d) Latency
e) Intensity |
|
Describe a behavior of yours that was shaped by consequences in the natural environment and state several of the initial approximations. |
At my job, I have to write thank-you cards; the more cards I write, the more I get paid. I was able to work up to 4 hours per day, so writing thank-you cards for 4 hours was the final target behavior. The first week I spent 1 hour writing thank-you cards and got paid $12. The second week I spent 2 hours and made $24. The third week I spent 3 hours and made $36. Finally, by the fourth week I had reached my final target behavior by writing thank-you cards for four hours, and made $48. |
|
What is meant by the term final target behavior in a shaping program? Give an example. |
A precise statement of the final desired behavior, including all its relevant characteristics (e.g., frequency), the conditions under which the behavior is or is not to occur, and any necessary guidelines. Example: Jessica’s final target behavior is to ride her bicycle for 20 minutes a day at 5 mph. |
|
What is meant by the term starting behavior in a shaping program? Give an example. |
A behavior that occurs often enough to be reinforced within the session time and that at least resembles the final target behavior. Example: for a child learning to say “daddy”, the starting behavior might be the sound “da”, which the child already makes occasionally. |
|
How do you know you have enough successive approximations or shaping steps of the right size? |
There are no specific guidelines. Try imagining what steps you would go through, or ask someone who can perform the final target behavior what steps they went through. Try to stick to your steps, but be flexible if the trainee is moving too quickly or too slowly through them. |
|
Why is it necessary to avoid under-reinforcement of any shaping step? |
Without sufficient reinforcement, the step won’t become well established. Trying to move to a new step before the previous approximation has been well established can result in losing the previous approximation through extinction without actually achieving the new approximation. |
|
Why is it necessary to avoid reinforcing too many times at any shaping step? |
If one approximation is reinforced for so long that it becomes extremely strong, new approximations are less likely to appear. |
|
Give an example of the unaware-misapplication pitfall in which shaping might be accidentally applied to develop an undesirable behavior. Describe some of the shaping steps in your example. |
John learns how to ride a two-wheel bicycle and receives lots of cheering from his friends (positive reinforcement). After a while they are no longer impressed by this skill and stop cheering, so he learns to ride with only one hand on the handlebars, and they cheer again. He then learns to ride with no hands and they cheer once more, but soon grow bored of this too. So John starts to ride standing on the seat, a dangerous, undesirable behavior. |
|
Give an example of the pitfall in which the failure to apply shaping might have an undesirable result. |
Failure-to-apply Pitfall: Jessica, an infant, begins babbling, but her mother is not terribly impressed, so she does not reinforce the behavior. Because the babbling is not positively reinforced, the child does not move on to the next stage. |
|
Give an example from your own experience of a final target behavior that might best be developed through a procedure other than shaping. |
When I was younger, I used to jump from the third step in my house to the ground for attention (positive reinforcement). It would be dangerous to use shaping on this behavior, because shaping involves extinction of earlier approximations, and extinction can temporarily increase the intensity of the behavior; jumping from the fourth step or higher could result in serious injury. |
|
State a rule for deciding when to move the learner to a new approximation. |
Move on to the next step when the learner performs the current step correctly in 6 of 10 trials, with 1 or 2 trials less perfect than desired and 1 or 2 trials in which the behavior is better than the current step. |
|
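The 6-of-10 move-on rule above can be expressed as a small check. The helper name and the boolean-list representation of trials are hypothetical conveniences, not from the chapter:

```python
# Hypothetical helper for the move-on rule: advance to the next shaping step
# when the learner performs the current step correctly on 6 of the last 10
# trials (True = correct response at the current step).

def ready_to_advance(trials, window=10, required=6):
    recent = trials[-window:]
    return len(recent) == window and sum(recent) >= required

print(ready_to_advance([True] * 6 + [False] * 4))  # True: 6 of 10 correct
print(ready_to_advance([True] * 5 + [False] * 5))  # False: only 5 of 10
```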
Why do we refer to positive reinforcement and operant extinction as principles, but to shaping as a procedure? |
Principles are procedures that have a consistent effect and are so simple they cannot be broken down into simpler procedures (e.g., positive reinforcement and operant extinction); they are like laws. Shaping is a procedure because it consists of successive applications of these principles. |
|
Describe how Scott and colleagues used shaping to decrease the heart rate of a man suffering from chronic anxiety. |
They connected the video portion of a TV set to a heart-rate monitor. The TV played sound continuously, but the picture appeared (as positive reinforcement) only when the man’s heart rate decreased. Across sessions, the heart rate had to decrease systematically to successively lower levels for the picture to appear. |
|
Describe how computer technology may be used to shape specific limb movements in a paralyzed person. |
It would provide more precise, rapid, and systematic feedback, as well as unlimited patience. |
|
Describe how computer technology might be used to study shaping more accurately than can be done with the usual noncomputerized shaping procedures |
Computers are fast enough to make comparisons and to apply shaping procedures consistently; they are more accurate and faster, especially in measuring the topography of responses. |
|
Describe an experiment demonstrating that maladaptive behavior can be shaped. |
Rats were reinforced with food for extending their noses over the edge of a platform. Over trials, they were required to extend their noses farther and farther over the edge before receiving reinforcement, until eventually they extended so far that they fell off. |
|
Define and give an example of intermittent reinforcement. |
An arrangement in which a behavior is positively reinforced only occasionally, rather than every time it occurs. Eg. Jan is reinforced with praise for every 2 math problems she solves correctly. |
|
Define and give an example of response rate |
The number of instances of a behavior that occur in a given period of time. Eg. Jan solves 16 math problems in an hour. |
|
Define and give an example of schedule of reinforcement |
A rule specifying which occurrences of a given behavior, if any, will be reinforced. Eg. It is decided that Jan will be reinforced only after every 4 math problems she solves correctly. |
|
Define CRF and give an example that isn’t in this chapter |
Continuous reinforcement: an arrangement in which each instance of a particular response is reinforced. Eg. Every time you turn on the tap, you are reinforced with water. |
|
Describe four advantages of intermittent reinforcement over CRF for maintaining behavior. |
a) The reinforcer remains effective longer because satiation takes place more slowly.
b) Behavior reinforced intermittently takes longer to extinguish.
c) Individuals work more consistently on certain intermittent schedules.
d) Behavior reinforced intermittently persists more readily when transferred to reinforcers in the natural environment. |
|
Explain what an FR schedule is. Illustrate with two examples of FR schedules in everyday life (at least one of which is not in this chapter) |
A fixed-ratio schedule: a reinforcer occurs each time a fixed number of responses of a particular type are emitted. Example 1: A worker is paid a fixed amount for every 20 garments sewn (FR 20). Example 2: A student allows herself a break after every 10 math problems solved (FR 10). |
|
What is a free-operant procedure? Give an example |
A schedule in which the individual is free to respond at various rates, in the sense that there are no constraints on successive responses. Eg. Jan may solve math problems at whatever pace she chooses. |
|
What is a discrete-trials procedure? Give an example. |
The individual isn’t free to respond at whatever rate they choose, because the environment places limits on the availability of response opportunities. Eg. A student can answer a flashcard question only when the teacher presents a card. |
|
What are three characteristic effects of an FR schedule? |
a) A high steady rate of responding until reinforcement.
b) A postreinforcement pause.
c) High resistance to extinction. |
|
What is ratio strain? |
Deterioration of responding caused by increasing an FR schedule too rapidly. |
|
Explain what a VR schedule is. Illustrate with two examples of VR schedules in everyday life (at least one of which isn’t in this chapter). Do your examples involve a free-operant procedure or a discrete-trials procedure? |
Variable-ratio schedule: a reinforcer occurs after a certain number of responses of a particular type, and the number of responses required changes unpredictably from one reinforcer to the next, varying around a specified mean value. Example 1: Playing a slot machine pays off after an unpredictable number of plays (a free-operant procedure). Example 2: A door-to-door canvasser makes a sale after an unpredictable number of calls (a free-operant procedure). |
|
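The fixed-versus-variable contrast between FR and VR can be sketched in code. The function names, the ratio size, and the uniform-draw model of "unpredictable" counts are all illustrative assumptions:

```python
import random

# FR n: every reinforcer requires exactly n responses.
# VR n: each reinforcer requires an unpredictable count averaging n.

def fr_requirements(ratio, reinforcers):
    """Responses required for each reinforcer under a fixed-ratio schedule."""
    return [ratio] * reinforcers

def vr_requirements(mean_ratio, reinforcers, rng):
    """Counts drawn uniformly from 1 .. 2*mean-1, so they average mean_ratio."""
    return [rng.randint(1, 2 * mean_ratio - 1) for _ in range(reinforcers)]

rng = random.Random(0)
print(fr_requirements(5, 4))       # [5, 5, 5, 5] -- fixed every time
print(vr_requirements(5, 4, rng))  # varies unpredictably around a mean of 5
```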
Describe how a VR schedule is similar procedurally to an FR schedule. Describe how it’s different procedurally. |
Similar: In both, a reinforcer is delivered after a number of responses of a particular type have been emitted, and both cause a high, steady rate of responding.
Different: In FR the required number of responses is fixed from reinforcer to reinforcer; in VR it changes unpredictably around a specified mean value. |
|
What are three characteristic effects of a VR schedule? |
a) Produces a high, consistent response rate.
b) Produces no (or only a minimal) postreinforcement pause.
c) Produces high resistance to extinction. |
|
Illustrate with two examples of how FR or VR might be applied in training programs (by training program, we refer to any situation in which someone deliberately uses behavior principles to increase and maintain someone else’s behavior, such as parents influencing a child’s behavior or a teacher influencing students’ behavior). Do your examples involve a free-operant or a discrete-trials procedure?
|
Example 1: Jennifer’s parents want her to do her chore of mowing the lawn, so they give her $10 once she has mowed the lawn 3 times. This is FR 3, and a discrete-trial procedure.
|
|
Explain what a PR schedule is and how PR has been mainly used in applied settings. |
It is like an FR schedule, but the ratio requirement increases by a specified amount after each reinforcement. At the beginning of each session the ratio requirement starts back at its original value, and after a number of sessions the requirement reaches a level, called the break point, at which the individual stops responding completely. In applied settings, PR has been used mainly to assess reinforcer potency: the higher the break point, the more powerful the reinforcer. |
|
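The growing ratio requirement of a PR schedule can be sketched as a short calculation; the starting ratio and step size here are illustrative, not from the chapter:

```python
# Toy PR schedule: the ratio requirement increases by `step` after each
# reinforcer, until responding eventually stops at the break point.

def pr_requirements(start, step, reinforcers):
    """Responses required for each successive reinforcer in one session."""
    return [start + step * i for i in range(reinforcers)]

# PR starting at 5 responses, increasing by 2 after each reinforcer
print(pr_requirements(5, 2, 4))  # [5, 7, 9, 11]
```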
What is an FI schedule? |
Fixed-interval schedule: a reinforcer is presented following the first instance of a specific response after a fixed period of time. The only requirement for a reinforcer to occur is that the individual engage in the behavior after reinforcement has become available through the passage of time. The FI size is the amount of time that must elapse before reinforcement becomes available. Eg. PVR-ing a show |
|
What are two questions to ask when judging whether a behavior is reinforced on an FI schedule? What answers to those questions would indicate that the behavior is reinforced on an FI schedule? |
a) Does reinforcement require only one response after a fixed interval of time? (For an FI schedule, the answer is yes.)
b) Does responding during the interval affect anything? (For an FI schedule, the answer is no.) |
|
Suppose that a professor gives an exam to students every Friday. The students’ studying behavior would likely resemble the characteristic pattern of an FI schedule in that studying would gradually increase as Friday approaches, and the students would show a break in studying (similar to a lengthy postreinforcement pause) after each exam. But this isn’t an example of an FI schedule for studying. Explain why. |
Because the students must make many study responses, not just one, to receive a good grade, and responding during the interval does affect the result, since studying before the exam contributes to a good grade. |
|
What is a VI schedule? |
A reinforcer is presented following the first instance of a specific response after an interval of time, and the length of the interval changes unpredictably from one reinforcer to the next. It’s a response reinforced after unpredictable intervals of time. Eg. Checking email. |
|
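The FI rule above (only the first response after the interval elapses is reinforced, and responding during the interval changes nothing) can be checked with a toy function. The response times and interval size are illustrative:

```python
# Toy FI schedule: reinforce the first response after the interval has
# elapsed; responses made during the interval have no effect.

def fi_reinforced_times(response_times, interval):
    """Return the times at which responses are reinforced under FI."""
    reinforced = []
    available_at = interval          # reinforcement first becomes available here
    for t in sorted(response_times):
        if t >= available_at:
            reinforced.append(t)             # first response after the interval
            available_at = t + interval      # the interval then restarts
    return reinforced

# Responses at t=1..4 fall inside the interval and earn nothing
print(fi_reinforced_times([1, 2, 3, 4, 11, 14, 25], interval=10))  # [11, 25]
```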
Explain why simple interval schedules aren’t often used in training programs |
a) FI procedures produce long postreinforcement pauses.
b) VI schedules generate lower response rates than ratio schedules do. |
|
Explain what an FR/LH schedule is, and illustrate with an example from everyday life that isn’t in this chapter |
A schedule with a fixed ratio (reinforcer occurs each time a fixed number of responses of a particular type are emitted) and limited hold (a deadline for meeting the response requirement of a schedule of reinforcement)
|
|
Explain what an FI/LH schedule is and illustrate with an example that isn’t in this chapter. |
A fixed interval schedule (a reinforcer is presented following the first instance of a specific response after a fixed period of time) with a limited hold (a deadline for meeting the response requirement of a schedule of reinforcement).
|
|
Describe how an FI/LH schedule is procedurally similar to a simple FI schedule. Describe how it procedurally differs. |
Similar: In both, the reinforcer becomes available only after a fixed period of time, and only one response after that time is required.
Different: In FI/LH, once the reinforcer becomes available it remains available only for a limited time (the limited hold); in a simple FI schedule it remains available until the response occurs. |
|
Explain what a VI/LH schedule is. Illustrate with two examples from everyday life, at least one not in this chapter. |
A variable-interval schedule (a reinforcer is presented following the first instance of a specific response after an interval of time, and the length of the interval changes unpredictably from one reinforcer to the next) with a limited hold (a deadline for meeting the response requirement of a schedule of reinforcement). Example 1: Waiting for a bus: buses arrive after somewhat unpredictable intervals, and each waits only briefly, so you must be at the stop during that limited hold. Example 2: Telephoning a friend whose line is sometimes busy: the line becomes free at unpredictable times and may become busy again if you don’t call promptly. |
|
Give two examples of how VI/LH might be applied in training programs |
Example 1: The timer game in classrooms. If children are working quietly when the timer goes off they get extra free time. VI 30 minutes/LH 0 seconds.
|
|
For each of the photos, identify the schedule of reinforcement that appears to be operating. |
a) After an unpredictable amount of time, one gets their luggage.
|
|
Explain what an FD schedule is. Illustrate with two examples of FD schedules that occur in everyday life (with at least one not in this chapter) |
Fixed duration schedule: Reinforcer is presented only if a behavior occurs continuously for a fixed period of time. The value is the amount of time that the behavior must be engaged in continuously before reinforcement occurs.
|
|
Suppose each time you put bread in the toaster and press the lever, 30 seconds passes before your toast is ready. Is this an example of an FD schedule? Why or why not? Would it be an FD schedule if a) the catch that keeps the lever down doesn’t work. b) The timer that releases it doesn’t work. Explain in each case.
|
No. The toast’s being ready depends only on the passage of time, not on your behaving continuously for 30 seconds, so it is not an FD schedule.
a) If the catch that holds the lever down is broken, you must manually hold the lever down continuously for 30 seconds for the toast to cook, so this would be an FD schedule.
b) If the timer that releases the toast doesn’t work, you must make a response (releasing the lever) after the fixed time has elapsed, which is an FI arrangement rather than an FD schedule. |
|
Explain why FD might not be a very good schedule for reinforcing study behavior |
The behavior must be easily measured continuously so that it can be reinforced on the basis of its duration. With studying, it is hard to measure whether the person is actually studying for the whole period rather than daydreaming, texting, or reading something else. |
|
Give two examples of how FD might be applied in training programs |
Example 1: Some children with developmental disabilities do not make eye contact with others, and when adults try to initiate it, they quickly avert their eyes. FD may be used to increase eye contact by reinforcing the child after a certain amount of time of maintained eye contact.
|
|
Explain what a VD schedule is, and illustrate with an example of one from everyday life that isn’t from this chapter. |
A variable-duration schedule: a reinforcer is presented only if a behavior occurs continuously for a period of time, and the length of that required period changes unpredictably from reinforcer to reinforcer. The mean duration is specified in the designation of the VD schedule. Eg. Rubbing two sticks together to make fire: the continuous rubbing time required varies unpredictably. |
|
What are concurrent schedules of reinforcement? Give an example |
When each of two or more behaviors is reinforced on a different schedule at the same time, the schedules of reinforcement that are in effect are called concurrent schedules of reinforcement. Eg. A student can watch TV, text friends, or do homework, with each behavior reinforced on its own schedule at the same time. |
|
If an individual has an option of engaging in two or more behaviours that are reinforced on different schedules by different reinforcers, what four factors in combination are likely to determine the response that the person will make? |
a) The types of schedules that are operating.
b) The immediacy of reinforcement.
c) The magnitude of reinforcement.
d) The response effort involved in each behavior. |
|
Describe how intermittent reinforcement works against those who are ignorant of its effects. Give an example. |
They may be unaware that a behavior may get worse before it gets better, so they give in to the behavior. This can place the undesirable behavior on a VR or VD schedule of reinforcement. |
|
Name six schedules of reinforcement commonly used to develop behavior persistence |
a) Fixed Ratio (FR)
b) Variable Ratio (VR)
c) Fixed Interval with Limited Hold (FI/LH)
d) Variable Interval with Limited Hold (VI/LH)
e) Fixed Duration (FD)
f) Variable Duration (VD) |
|
In general, which schedules tend to produce higher resistance to extinction (RTE), the fixed or variable schedules? |
Variable Schedules |
|
Who wrote the classic authoritative work on schedules of reinforcement and what is the title of that book? |
Ferster and Skinner, Schedules of Reinforcement |
|
What may account for the failures to obtain the schedule effects in basic research with humans that are typically found in basic research with animals? |
Humans have complex verbal behavior, which is emitted and responded to. Humans can verbalize rules that lead them to show behavior patterns different from those animals show when exposed to various reinforcement schedules. |
|
Describe how FR schedules may be involved in writing a novel |
Some novelists stop writing immediately after completing each chapter of a book; after a brief pause of a day or so they resume writing at a high rate, which is maintained until the next chapter is completed. Longer pauses typically occur after a draft of a manuscript is completed. One may argue that completed chapters and drafts are reinforcers for novel writing that occur according to FR schedules. |
|
Might it be better to reinforce a child for dusting the living room furniture for a fixed period of time or for a fixed number of items dusted? Explain your answer. |
A fixed number of items, because under a fixed period of time the child might dust slowly and complete fewer items. |
|
Briefly describe how schedules of reinforcement can help us understand behavior that has frequently been attributed to inner motivational states. |
A VR schedule with a low rate of reinforcement can account for highly persistent behavior, e.g., that of a dedicated student or a compulsive gambler. |