Term
What is Mentalism? |
Definition
All actions, thoughts, hopes, expectations, etc. are caused by mental processes; all of your behavior occurs because of the state of mind you were in |
|
|
Term
What are the 2 criticisms against Mentalism? |
|
Definition
1. What about consequences? 2. Are the mind states observable? |
|
|
Term
What is Edward Thorndike's viewpoint on consequences? |
|
Definition
Serve to strengthen "BONDS" or associations b/t STIMULI & RESPONSES |
|
|
Term
What was Edward Thorndike the first psychologist to do? |
|
Definition
study how the consequences of actions influence learning & voluntary behavior |
|
|
Term
What was Thorndike's method of inquiry? |
|
Definition
mazes w/ baby chickens & puzzle boxes w/ stray cats |
|
|
Term
What was Thorndike's purpose of the puzzle boxes? |
|
Definition
could the cats learn the correct response to escape confinement & retrieve food? |
|
|
Term
What were Thorndike's questions of research? (3) |
|
Definition
What processes are involved in trial & error learning, are there any signs of insight, and why is performance non-linear? |
|
|
Term
What were Thorndike's 3 observations from the trials? |
|
Definition
Trial 1: innate & learned responses to confinement Trials 2-9: "stamping out" responses not associated with escape Trials 10-23: "stamping in" responses that lead to food & escape |
|
|
Term
What was the main implication of the puzzle box experiment? |
|
Definition
Trial & error learning involves eliminating what does not work and incorporating what does work |
|
|
Term
What was the explanation to the nonlinear results? |
|
Definition
Since the cats were strays, they panicked at being confined & didn't focus on learning until later trials |
|
|
Term
How are the processes of stamping in & stamping out involved in learning? |
|
Definition
Experience new stimuli (cat being placed in box), understand the relationships b/t the stimuli (loop attached to latch by lever), form associations based upon the relationships (pulling the loop will release the latch to give the cats food), & acquire new knowledge or responses to previously insignificant stimuli/modify old responses |
|
|
Term
What is Thorndike's Law of Effect? |
|
Definition
1. Responses that lead to satisfaction will be strengthened & more firmly connected to a given event, so that when the event recurs, the responses will be more likely to occur 2. responses preceding discomfort will have their connections with the event weakened, so they are less likely to be displayed |
|
|
Term
What are satisfiers and annoyers? |
|
Definition
-Satisfying state of affairs = one which the animal does nothing to avoid, often doing things to attain/preserve it -Discomforting/annoying state of affairs = one the animal commonly avoids or abandons |
|
|
Term
What is "stamping in"? |
Definition
incorporating responses that lead to success |
|
|
Term
What is "stamping out"? |
Definition
They are stamping out responses that are ineffective to leading to the goal |
|
|
Term
What are the roles of satisfiers in "selecting" and "connecting"? |
|
Definition
The consequences (Sr) of an experience influence learning by "selecting & connecting" what? Reinforcers select responses and connect those responses to stimuli |
|
|
Term
What is Edwin Guthrie's viewpoint on consequences? |
|
Definition
Consequences play a limited role in learning; they may serve to preserve or protect STIMULUS-RESPONSE bonds that are forming |
|
|
Term
What is Guthrie's viewpoint on learning (3) and the role of contiguity (1 of 3)? |
|
Definition
1. all behavior cannot be explained by its consequences, since several behaviors are not goal directed or purposeful 2. association by contiguity is the basis for most learning 3. we do what we did the last time we were in that situation |
|
|
Term
What are the procedures in Guthrie's transparent puzzle box (3)? |
|
Definition
1. used an entry box to insert cat into larger transparent puzzle box 2. front door remained open for first 3 trials, food available after exiting via front door 3. on remaining trials, cat could only escape by displacing pole which opened door |
|
|
Term
What are the findings in Guthrie's experiment? |
|
Definition
After learning, an almost identical response was used to displace the pole on each trial |
|
|
Term
What were the conclusions in Guthrie's experiment? (2) |
|
Definition
1. stimuli acting at the time of a response tend to evoke that response 2. the last response performed is the response that remains conditioned to the stimulus |
|
|
Term
What is the role of reinforcers in learning according to Guthrie? |
|
Definition
Reinforcers end the sequence of responses leading up to them and leave the behavior as the last thing that was done in that stimulus situation: RESPONSE PRESERVATION - no new associations formed |
|
|
Term
What is the Contrafreeloading effect? |
|
Definition
1. Davidson: FR10 in Skinner Box, 8 days of free feeding, return to box w/ choice b/t free feeding & lever pressing on FR10 schedule 2. Kimble (1951): feeding wet mash 15-20 daily in feeding chamber until satiated, then remove for 1 minute & return rat to feeding chamber - will rat eat again? YES |
|
|
Term
What is the evidence which supports Guthrie's viewpoint? |
|
Definition
Experiment with the transparent puzzle box & contrafreeloading effect |
|
|
Term
What is B.F. Skinner's viewpoint on consequences? |
|
Definition
They selectively strengthen RESPONSES that precede their delivery |
|
|
Term
What was Skinner's method of producing "superstitious behavior"? |
|
Definition
1. drop food into chamber every 15 seconds 2. no specific response required for the reinforcer 3. what is the effect on behavior and the S-R-Reinforcer relationship? - the pigeons would continue to do whatever they were doing when the food was dropped |
|
|
Term
What did the "superstitious experiment" imply for the law of effect? |
|
Definition
Skinner said that the stimulus is unnecessary. Any response will keep occurring as long as it is rewarded/reinforced. |
|
|
Term
What were Skinner's modifications of the law of effect? |
|
Definition
He removed the stimulus from the "stimulus-response" learning |
|
|
Term
What were Staddon & Simmelhag's criticisms of the "superstitious experiment"? (2) |
|
Definition
1. SR or adjunctive behaviors 2. Interim & Terminal responses |
|
|
Term
Adjunctive behaviors: -what are they? -when is each more likely to be exhibited? |
|
Definition
Staddon and Simmelhag used the collective term "adjunctive" to describe the behaviors observed in Skinner's pigeons. This term covers 2 forms of behavior: A) interim - what the animal does to pass time, when the Sr is far away B) terminal - when the Sr is expected to come |
|
|
Term
What is shaping? |
Definition
selective reinforcement of successive approximations of a target behavior |
|
|
Term
What 2 methods/procedures are involved in shaping? |
|
Definition
1. selective reinforcement of components of target behaviors/responses 2. extinction: gradual withholding of reinforcement for responses that were once reinforced |
|
|
Term
What were the findings from Hutt (1954)? |
|
Definition
Animals will work harder to get a small quantity of a high quality reward than they will to get larger amounts of a poor reward |
|
|
Term
Hutt (1954): Response strength is measured by? |
|
Definition
The rate of bar pressing |
|
Term
Hutt (1954): What is the importance of quality vs. quantity in the delivery of primary reinforcers? |
|
Definition
· Quality is more important than quantity, but that only applies to primary reinforcers
o Primary reinforcers – things you need to stay alive (food, water) |
|
|
Term
Hutt (1954): What characteristic is more important in the delivery of secondary reinforcers? |
|
Definition
· For secondary/conditioned reinforcers, it’s the quantity that’s more important
o Secondary reinforcer – any stimuli that can lead to or produce a primary reinforcer (ex. Dollar bill: you care about getting more rather than how crisp it is) |
|
|
Term
What were the findings from Capaldi (1978)? |
|
Definition
o Contiguity (rats don't like being confined, so they stop running to avoid confinement) • Running speed decreases b/c there is no positive reinforcer |
|
|
Term
Capaldi (1978): Response strength is measured by? |
|
Definition
Running speed |
|
Term
Capaldi (1978): Reasons why imposing a delay between the response & the (Sr) decreases motivation to respond? |
|
Definition
The response of running is now associated with confinement in the 10 sec delay group o What happens to the association b/t the response of running & Sr? It is weakened because running becomes associated w/ being confined rather than being rewarded • Running speed decreases b/c no positive reinforcer |
|
|
Term
Capaldi (1978): How are the findings from Capaldi utilized in the business community? |
|
Definition
· How are businesses affected by it?
o Restaurants give you a pager so there's no delay
o There's a bar in the front of restaurants so customers have something rewarding to do during the wait |
|
|
Term
What were the findings from Crespi (1942)? |
|
Definition
-The previous Sr history influences the change in running speeds following the shift in Sr amount -When the amount of food shifted from small to large vs. large to small, the responses/speed increased/decreased (respectively) |
|
|
Term
Crespi (1942): Response strength is measured by? |
|
Definition
The vigor of their responses/speed of running |
|
|
Term
Crespi (1942): What is the concept of asymptote? |
|
Definition
maximum amount of responding that can occur in a given condition |
|
|
Term
Crespi (1942): What are the 2 theories to explain the increase or decrease in motivation following a shift in the quantity of Sr's? |
|
Definition
-Contrast in perception of magnitude -Emotional reactions (elation vs. frustration) |
|
|
Term
How does the contrast in perception of magnitude play a role in decreasing motivation? |
|
Definition
Contrast/difference in your perception of the magnitude of the changes in reinforcers |
|
|
Term
How do emotional reactions play a role in decreasing motivation? |
|
Definition
· Emotional reactions (elation vs. frustration) become associated with the reinforcer
o Animals that got 10 pellets and then got 1 showed dissatisfaction by urinating/defecating; running became associated with frustration, so their speeds decreased o Animals that got 1 and then got 10 jumped around = elation |
|
|
Term
What is the definition of instrumental learning? |
|
Definition
refers to conditions in which the organism's responses are necessary for, or instrumental in, bringing about some change in the environment |
|
|
Term
What is the definition of operant learning? |
|
Definition
the animal's behavior brings about changes in the environment b/c it operates upon the environment |
|
|
Term
What is the type of procedure used in instrumental learning? Example? |
|
Definition
Discrete trials: the response that the organism makes is specific for that particular trial
• Example: in the straight alley, the animal runs to the end, makes one response, & the experimenter is involved o Once it makes that response, it gets taken out o Therefore there is only one response, one trial to get it right |
|
|
Term
What is the type of procedure used in operant learning? |
|
Definition
Free operant procedures: the organism can operate on the environment and obtain as many reinforcers as possible • There are no limitations • Procedures make use of lever pressing, key pecking, etc. • The operant response can occur at any time • The operant response can occur repeatedly for as long as the subject remains in the experimental chamber |
|
|
Term
What is the DV/measure of response strength in instrumental learning? |
|
Definition
look @ latency or # of incorrect responses |
|
|
Term
What is the DV/measure of response strength in operant learning? |
|
Definition
Rate of responding (responses per unit time) |
|
Term
What is reinforcement contingency? |
|
Definition
Contingency is a rule that states that some event, B, will occur if and only if event, A, occurs |
|
|
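The "if and only if" rule above can be sketched as a tiny predicate (a hypothetical illustration; the function name is mine, not from the lecture):

```python
def contingency_satisfied(a_occurred: bool, b_occurred: bool) -> bool:
    # B occurs if and only if A occurs: both happen, or neither does
    return a_occurred == b_occurred

# Perfect contingency holds in these two cases...
assert contingency_satisfied(True, True)
assert contingency_satisfied(False, False)
# ...and is violated when B appears without A (noncontingent delivery)
assert not contingency_satisfied(False, True)
```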
Term
What are the rules for administering reinforcement? |
|
Definition
-How to deliver (Fixed & Variable) -Conditions in which a reinforcer will be given (Ratio & Interval) |
|
|
Term
What is a fixed ratio schedule? |
|
Definition
A reinforcer is delivered after a fixed (identical) # of responses |
|
|
Term
What are the patterns of responding on a fixed ratio schedule? |
|
Definition
•Post reinforcement pause- work hard and then take a break |
|
|
Term
What are the 3 different viewpoints to explain post reinforcement pause? |
|
Definition
1. fatigue 2. satiation 3. remaining responses |
|
|
Term
How do we identify a fixed ratio schedule on a graph? |
|
Definition
• Look @ the orientation of the lines; the tic marks represent when a reinforcer is given • They align w/ the y-axis (responses); if they aligned w/ the x-axis, then it would be a fixed interval (time) schedule |
|
|
Term
What is a fixed interval schedule? |
|
Definition
A reinforcer is delivered for the first response after an identical interval of time |
|
|
Term
What is the pattern of responding on a fixed interval schedule? |
|
Definition
• Scalloped: when the time out period is almost over, the animal will increase its rate of response o The animal knows that as the interval is almost over, it needs to make a response • Rate of response is steady |
|
|
Term
What is the concept of a "time out period" in FI schedule? |
|
Definition
• The time between when one reinforcer is delivered and the next one is delivered • No amount of responses will bring about a reinforcer during the time out period |
|
|
Term
What is the subject's strategy for responding on a FI schedule? |
|
Definition
• Scalloped: when the time out period is almost over, the animal will increase its rate of response • You want to make sure you don't miss it; there are only certain periods in which your response will lead to a reward |
|
|
Term
What is a variable interval schedule? |
|
Definition
The interval of time that must pass before a response is reinforced varies |
|
|
Term
What are the rules for administering reinforcers on a variable interval schedule? |
|
Definition
• A reinforcer will be given only after a certain amount of time has passed on average; therefore the amount of time that passes from reinforcer to reinforcer varies |
|
|
Term
How do you identify the VI schedule on a graph? |
|
Definition
Vertical lines mean continuous responding
o Slide example: there were 525 responses and 11 reinforcers, so on average a reinforcer came after ~48 responses |
|
|
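The slide's figures can be checked directly (the numbers 525 responses and 11 reinforcers are taken from the card above):

```python
responses = 525
reinforcers = 11
avg = responses / reinforcers  # about 47.7 responses per reinforcer
print(round(avg))              # rounds to 48, matching the slide
```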
Term
What is a variable ratio schedule? Pattern of responding? |
|
Definition
The number of responses necessary for a reinforcer varies - produces a high rate of responding |
|
|
Term
What are the rules for administering reinforcers on a variable ratio schedule? |
|
Definition
• The animal will receive a reinforcer after X # of responses on average |
|
|
Term
What are the differences between VI & VR? |
|
Definition
The pattern of response is different:
• VR = vertical (steep) response line o This animal responds based upon its ability to bring about the reinforcer o The faster you respond, the more reinforcers you get o So the animal learns: if I respond fast, I get reinforcers sooner o Example: 5 responses per second x 60 seconds = 300 responses a minute; over 6 minutes in total = 1800 responses; 10 reinforcers were given; therefore 180 responses per reinforcer • VI = more horizontal, more pausing o This animal learns that the reinforcer depends on how much time has passed o It just needs to check every so often to see if the reward is ready |
|
|
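The VR arithmetic in the card can be verified step by step (all numbers come from the card itself):

```python
responses_per_second = 5
seconds_per_minute = 60
per_minute = responses_per_second * seconds_per_minute  # 300 responses/minute
minutes = 6
total_responses = per_minute * minutes                  # 1800 responses in the session
reinforcers = 10
print(total_responses // reinforcers)                   # 180 responses per reinforcer
```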
Term
What is the need reduction theory? |
|
Definition
o All primary Sr's reduce a biological need; therefore reinforcers are any stimulus that reduces a biological need |
|
|
Term
What is the need reduction theory's shortcoming? Examples (6)? |
|
Definition
• There are things that don't address biological needs but still serve as reinforcers, i.e.: o kind words o certificates o plaques o smiles o ribbons o trophies |
|
|
Term
What is the drive reduction theory? (Definition) |
|
Definition
o This is an extension: reinforcers are any stimulus that fulfills a biological need AND also reduces drives |
|
|
Term
Does the drive reduction theory address the problems in the Need Reduction Theory? |
|
Definition
Yes. Any depletion or deprivation of drives & needs creates tension (an unpleasant state). Tension is aversive, so it motivates or energizes our action o The theory suggests reinforcers are any stimulus that reduces the "tension" by replenishing resources necessary for optimal survival |
|
|
Term
What are the shortcomings of the Drive Reduction Theory? |
|
Definition
There are stimuli that serve to increase tension or drive |
|
|
Term
What is the optimal stimulation theory? |
|
Definition
o Reinforcers are stimuli that return the organism to an "intermediate level of arousal" o Internal arousal is linked to the level of sensory stimulation from the environment o Increases or decreases in stimulation serve as reinforcers when these changes return the organism to an intermediate level of arousal |
|
|
Term
What is the basic premise of the optimal stimulation theory? |
|
Definition
o Organisms function optimally under an intermediate level of internal arousal; everyone wants to function at a certain level of arousal |
|
|
Term
What is EBS & Dopamine's role in reinforcement? |
|
Definition
Olds & Milner performed an experiment in which they placed an electrode that would stimulate the medial forebrain bundle (essential in dopamine release). They found that rats would rather bar press for EBS instead of eating/drinking/sleeping. It is an extremely strong reinforcer and will override any other reinforcers |
|
|
Term
What is the mesotelencephalic reinforcement pathway composed of? (2 components) |
|
Definition
Tegmentostriatal & nigrostriatal pathways |
|
|
Term
What is the tegmentostriatal primary function? |
|
Definition
Evaluates & registers the motivational properties of a Sr -places value on the stimulus |
|
|
Term
What is the tegmentostriatal origin? |
|
Definition
The lateral hypothalamus |
|
Term
What brain structures are involved in the tegmentostriatal pathway? |
|
Definition
lateral hypothalamus -(medial forebrain bundle)-> ventral tegmental area --> 3 places via medial forebrain bundle: septum, nucleus accumbens, prefrontal cortex |
|
|
Term
What are the neurotransmitters involved in the tegmentostriatal pathway? |
|
Definition
norepinephrine which will go to the septum/nucleus accumbens/prefrontal cortex to activate/release dopamine |
|
|
Term
What is the site of termination in the tegmentostriatal pathway? |
|
Definition
The septum, nucleus accumbens, & prefrontal cortex |
|
Term
What is the primary function of the nigrostriatal pathway? |
|
Definition
Stores the memory for Stimulus-Response-Reinforcer relationships, will encode the positive stimulus into memory |
|
|
Term
What is the origin of the nigrostriatal pathway? |
|
Definition
begins in substantia nigra |
|
|
Term
What are the brain structures involved in the nigrostriatal pathway? |
|
Definition
Substantia nigra-->caudate nucleus & putamen |
|
|
Term
What is the site of termination for the nigrostriatal pathway? |
|
Definition
The caudate nucleus & putamen |
|
Term
What is the function of the lateral hypothalamus? |
|
Definition
• One of the main areas of the brain that detects whether the reinforcer is related to the stimulus (is the stimulus rewarding or not)
-detects reinforcement-related stimuli |
|
|
Term
What is purpose of the nucleus accumbens? |
|
Definition
Hedonic sensations, emotional arousal |
|
|
Term
What is the purpose of the septum? |
|
Definition
Hedonic sensations=dopamine gives you the perception that something is pleasurable |
|
|
Term
What is the purpose of the prefrontal cortex? |
|
Definition
This area of the brain is involved in decision making • Even though people know their addiction is negative, dopamine affects response planning to reinforcing stimuli o Your responses become solely about getting more dopamine
-decision making -response planning to Sr-related stimuli |
|
|
Term
What is the function of the substantia nigra? |
|
Definition
o This also contains dopamine o It is the source of the dopamine that initiates activity in the striatum |
|
|
Term
What is the function of the caudate nucleus? |
|
Definition
o This area is responsible for the memory for S-R-Sr associations |
|
|
Term
What is the function of the putamen? |
|
Definition
o This area develops the motor programs from the caudate and sends them to the cortex o This output will then go to the motor cortex; then you'll make the response automatically w/out thinking once you see the stimulus |
|
|
Term
What is Premack's principle of Sr? |
|
Definition
o The role that any behavior plays in serving as a reinforcer depends on its relative position on the probability scale (i.e. preferred behavior); Premack looks at responses rather than stimuli |
|
|
Term
What is the difference between Premack's contingency and that of Skinner? |
|
Definition
Skinner says that the behavior of lever pressing occurs because you want to obtain food. Premack says that obtaining food is not the reinforcer; really the reinforcer involves the opportunity to engage in the behavior of eating • It's not the food itself that is rewarding, it is the opportunity to engage in the behavior of eating |
|
|
Term
What is Premack's principle? |
|
Definition
the more probable behaviors will reinforce less probable behaviors |
|
|
Term
What is the importance of the position of a behavior on the probability scale in determining whether it will serve as a reinforcer? |
|
Definition
Whether the behavior is highly preferred or least preferred determines the probability that the behavior will serve as a reinforcer. The more preferred, the more likely it can act as a reinforcing behavior.
• His whole theory: if a behavior is least preferred, the animal can be trained to increase that behavior by being allowed to do the behavior it wants to do o Increase the things you least prefer by using the things you want to engage in as reinforcers |
|
|
Term
Conditions where Premack's principle falls short of predicting behaviors that may be potential reinforces? |
|
Definition
If a schedule requires much more of the high-probability behavior than the low-probability behavior, Premack's principle may be violated |
|
|
Term
What is the basic premise for the response deprivation theory? |
|
Definition
Any schedule that lowers the baseline ratio of a behavior deprives the organism of its PREFERRED level of that behavior; this restriction is what converts the deprived behavior into a reinforcer |
|
|
Term
How does the response deprivation theory address the only weakness in Premack's principle? |
|
Definition
It addresses the fact that even if a behavior is of low probability, it can serve as a reinforcer if it is restricted enough |
|
|
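The response deprivation idea can be written as a simple inequality: a schedule deprives the organism when it demands a higher ratio of instrumental to contingent behavior than the animal's free-baseline ratio. This is a sketch of one common formalization; the function and variable names are my own, not from the lecture:

```python
def creates_deprivation(req_instrumental, req_contingent,
                        base_instrumental, base_contingent):
    # The schedule restricts the contingent behavior below its
    # preferred (baseline) level when the required ratio exceeds
    # the baseline ratio, turning the deprived behavior into a
    # reinforcer even if it is normally low-probability.
    return (req_instrumental / req_contingent
            > base_instrumental / base_contingent)

# Baseline: 10 min of running per 2 min of drinking (ratio 5).
# A schedule demanding 10 min of running per 1 min of drinking
# (ratio 10) deprives drinking, so drinking becomes a reinforcer.
print(creates_deprivation(10, 1, 10, 2))  # True
```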
Term
What is the criticism of mentalism in regard to consequences? |
|
Definition
-i.e. there are certain events where you are in a state of extreme sadness, but you don't cry (like jail) -You may have a mind state of something, but it doesn't always lead to action because the consequences of our actions can dictate our behavior -consequences restrain behavior as well as elicit behavior |
|
|
Term
What are interim & terminal responses/behaviors? |
|
Definition
-interim behaviors: the likelihood of seeing them is highest whenever the reinforcer is far away; these behaviors are what the organism does to pass time -terminal behaviors: behaviors that seldom occurred early in the interval but increased in frequency as the time of food delivery approached |
|
|
Term
What are the 3 characteristics of a reinforcer that influence "response strength" or vigor? |
|
Definition
1. quantity & quality 2. timing/delay 3. previous reinforcement history |
|
|
Term
What were Hutt's methods? What did he want to test? |
|
Definition
•Reinforcer was a liquid drink •Varied reward quantity: SMALL, MEDIUM or LARGE •Varied reward quality: NORMAL, ACIDIC or SWEET
•Which characteristic of the reward has the greatest impact on response strength (bar-press rate)? |
|
|
Term
What is the fatigue hypothesis? |
|
Definition
The animal is tired after responding; this is not the case, b/c if there were fatigue, there would be a greater pause at the very end of the trial |
|
|
Term
What is the satiation hypothesis? |
|
Definition
the animals work to get the reinforcer, then they are full |
|
|
Term
What is the remaining responses hypothesis? |
|
Definition
• You pause b/c after you receive a reinforcer, you're farthest from the next reward
o It is the size of the upcoming requirement that determines the length of the post-reinforcement pause; every time we know there is something large ahead of us, we tend to pause for a longer time |
|
|
Term
What was the procedure to understand the cause of the Post Reinforcement Pause? |
|
Definition
Multiple schedule: there's more than one schedule in effect • So you have a discriminative stimulus, e.g. red vs. blue light o The blue light says you are on a fixed ratio of 100 o As soon as the red light comes on, only 10 responses are required • Look @ the size of the pause o The pause is longer going from 10 to 100; this tells us we cannot predict the size of the pause by looking @ the previous schedule o The pause going from 100 to 10 = very small, so fatigue is not the issue |
|
|
Term
How do we identify a FI schedule on a graph? |
|
Definition
The line aligns w/ the x-axis (time) |
|
|
Term
What is the pattern of responding on a VI schedule? |
|
Definition
• Steady rates of responding b/c the interval can't be predicted o Vs. fixed intervals, where you are able to judge the passage of time • Introduces uncertainty b/c the reinforcer is unpredictable o Since it's uncertain, it causes you to check more often |
|
|
Term
What is the inter response time theory? |
|
Definition
Animals on VI schedules make fewer responses during a session b/c long periods of non-responding are eventually followed by delivery of reward; response rates on VR schedules are so high b/c few if any reinforcements are given for withholding responses; rather, the association develops between making several responses and the delivery of reinforcement |
|
|
Term
What were the two experiments which highlighted probelms in the Drive Reduction Theory? |
|
Definition
Sheffield experiment (in book) • Put a male rat in, put a female rat on the other side o Before the male can finish with the female, take the male out • They found that as the trials continue, the male goes from slow to fast o The graph shows that performance/responses increase o Why did we see an increase in the behavior even though drive was not reduced? (shortcoming) Everitt • The male was in a box; hitting the lever 10 times = access to a female rat • As before, the rat was taken out before "finishing" • Rate of bar pressing increased even though tension was being increased rather than decreased |
|
|
Term
What is the stop-action principle? |
|
Definition
The occurrence of the reinforcer serves to stop the animal's ongoing behavior & strengthen the association b/t the situation & those precise behaviors that were occurring @ the moment of reinforcement. Addresses why different behaviors were reinforced in different subjects |
|
|
Term
What are the issues with the stop-action principle? |
|
Definition
interim & terminal behaviors |
|
|
Term
What is the three-term contingency? What are the 3 components? |
|
Definition
3 components of the operant conditioning contingency: • The context/situation in which a response occurs • The response itself • The stimuli that follow the response (reinforcer) In the presence of a specific (discriminative) stimulus, the reinforcer will occur if and only if the operant response occurs |
|
|
Term
What is resurgence? |
Definition
reappearance of previously reinforced response occurs when more recently reinforced response is extinguished |
|
|
Term
What are generalized reinforcers? |
|
Definition
special class of conditioned reinforcers, those that are associated w/ various primary reinforcers |
|
|
Term
What is extinction? |
Definition
when an operant response is no longer followed by a reinforcer, so the response will weaken and eventually disappear |
|
|
Term
What are response chains? |
|
Definition
sequence of behaviors that must occur in specific order w/ the primary reinforcer being delivered only after final response of sequence |
|
|
Term
When is extinction more rapid: after a continuous reinforcement schedule (CRF) or a schedule of intermittent reinforcement? |
|
Definition
Humphrey's paradox = extinction is more rapid after CRF than after intermittent reinforcement |
|
|
Term
What are the two hypotheses which explain Humphrey's paradox? |
|
Definition
1. discrimination 2. generalization decrement |
|
|
Term
What is the generalization decrement hypothesis? |
|
Definition
The generalization decrement hypothesis, which the text believes is better, says that a decrease in responding is observed when the test stimuli are less and less similar to the training stimuli. Responding during extinction will only be weak if the stimuli presented during extinction are different; when the stimuli are similar, the response will still be strong. So, going from CRF to extinction, the animal has never experienced a situation where its responses were not reinforced, so it quickly stops responding, whereas animals under VR training have learned to continue to respond even when their initial responses are not reinforced. |
|
|
Term
What is a DRL schedule? what type of responding does it produce? |
|
Definition
DRL: differential reinforcement of low rates schedule • A response is reinforced if and only if a certain amount of time has elapsed since the last response o i.e. DRL 10-sec: a response that occurs after a pause of at least 10 seconds = reinforced o if a response occurs after 9.5 seconds, not only is there no reinforcement, but the 10-second clock resets • produces low rates of responding |
|
|
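The DRL timing rule (reinforce a response only if it follows a pause of at least t seconds, with every response restarting the clock) can be sketched as follows; the function name and example timestamps are mine:

```python
def drl_outcomes(response_times, min_pause=10.0):
    """Given response timestamps (seconds) on a DRL schedule, return
    which responses earn a reinforcer. Every response resets the clock,
    so an early response both goes unreinforced and restarts the timer."""
    outcomes = []
    clock_start = 0.0                # session start; each response resets it
    for t in response_times:
        outcomes.append(t - clock_start >= min_pause)
        clock_start = t              # the clock restarts at every response
    return outcomes

# DRL 10-s: responses at 10.5 s, 20 s (only 9.5 s later), 31 s, 35 s
print(drl_outcomes([10.5, 20.0, 31.0, 35.0]))  # [True, False, True, False]
```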
Term
What is a DRH schedule? |
Definition
DRH schedule: differential reinforcement of high rates • A certain number of responses must occur w/in a fixed amount of time |
|
|
Term
what are concurrent schedules vs. chained schedules? |
|
Definition
Concurrent schedule: subject is presented with 2+ response alternatives, each associated w/ its own reinforcement schedule Chained schedules: subject must complete the requirements for 2+ simple schedules in a fixed sequence; each schedule is signaled by a different stimulus |
|
|
Term
What is the interresponse time reinforcement theory? |
|
Definition
The IRT reinforcement theory attempts to explain the differences in response rates between subjects: long pauses between responses (longer IRTs) are more frequently reinforced on VI schedules.
Animals on a VR training schedule learn that the # of their responses leads to a reinforcer; therefore, more responses, more reinforcement. Animals on the VI schedule learn that no matter the # of responses, there is still a time out period they have to wait through before receiving a reward; therefore they make fewer responses during a session, because long periods of non-responding are eventually followed by delivery of reward |
|
|
Term
Are there any problems w/ Staddon & Simmelhag’s interpretations of Skinner’s study? |
|
Definition
• Some of the behaviors occur throughout the whole interval • The probability of some of the behaviors does not fit neatly into the interim vs. terminal categories |
|
|