Term
John B. Watson |
|
Definition
He believed fear was learned, not innate. Little Albert experiments: conditioned a fear of rats in a 9-month-old baby by pairing the rat with a loud "bang"; the baby then showed generalization to a rabbit, a dog, and a sealskin coat. |
|
|
Term
Ivan Pavlov |
|
Definition
Paired a bell with food; the strength of the salivary reflex indicated the amount of training ("an empirical way of understanding the black box of the mind"). He was a physiologist studying the difference in salivation to wet versus dry food, and discovered classical conditioning. |
|
|
Term
Edward Thorndike |
|
Definition
Trial-and-error puzzle boxes. Discovered the Law of Effect. Founded instrumental conditioning. |
|
|
Term
Edward Tolman |
|
Definition
Wanted to falsify the idea that "no learning occurs in the absence of reinforcement." He ran an experiment with three groups of rats: (1) no food, (2) reward throughout, (3) reward only starting after day 11. The learning curves showed the rats had been learning all along; they just weren't showing what they had learned because they had no reason to. |
|
|
Term
B.F. Skinner |
|
Definition
Renamed instrumental conditioning "operant conditioning," because we "operate" on our environment. Built the Skinner box. |
|
|
Term
Generalization |
|
Definition
Example: when Little Albert was trained to fear rats, he also became afraid of everything white and fuzzy. In classical conditioning there is a generalization gradient: trained to fear circles, you will be less afraid of ovals the farther their shape gets from a circle. |
|
|
Term
Discrimination |
|
Definition
The opposite of generalization. Discrimination training can make a circle scare you while an oval does not. |
|
|
Term
Extinction |
|
Definition
When the CS is presented alone enough times that the CR is lost. |
|
|
Term
Spontaneous Recovery |
|
Definition
When the CR occasionally reappears after extinction. |
|
|
Term
Higher-Order Conditioning |
|
Definition
Building on what has already been learned by adding more CSs. Example: getting a dog to stop scratching the back door. US = balloon pop noise, UR = fear; CS1 = balloon in the dog's face, CR = fear of balloons; CS2 = balloons on the door, CR = fear of the door. The dog now fears the door because it fears balloons, and it fears balloons because it fears the pop. |
|
|
Term
Counterconditioning |
|
Definition
Suppressing an unwanted response by conditioning a competing one. Two types: aversion therapy and systematic desensitization. |
|
|
Term
Aversion Therapy |
|
Definition
Positive --> negative. Replaces an unwanted response with a new one; used to get rid of a bad habit. Very difficult. Example: "puff till you puke" (the positive feeling of smoking becomes associated with puking). |
|
|
Term
Systematic Desensitization |
|
Definition
Negative --> positive. Used to get rid of phobias. Build a fear hierarchy, from pictures of a spider up to a spider on your face, and teach physical relaxation; when the body is relaxed it is hard for the mind to be stressed. Pair each feared step with relaxation. The goal is just to reduce the phobia enough to live normally. |
|
|
Term
Conditioned Inhibition |
|
Definition
When a CS predicts that no US will occur. Typically involves both a CS+ and a CS-. Example: US = shock, CS+ = tone, CS- = light. When the tone is present, a shock follows, but when the light is also on, no shock occurs. The CS+ signals the bad thing; the CS- takes away the power of the CS+. |
|
|
Term
|
Definition
When one CS overshadows another. Example: US = food, CS1 = loud tone, CS2 = light; both CSs are equally predictive of the US, but the more salient one overshadows the other. Related example: if a rat is shocked every time there is a light, it learns to fear the light. If it is then shocked every time the light appears together with a tone, it will be afraid of the light but not of the tone; this is blocking via prior knowledge. |
|
|
Term
|
Definition
Conditioning various combinations of CSs. Example: CSa & CSb together = food; CSa alone = nothing; CSb alone = shock. |
|
|
Term
Latent Inhibition |
|
Definition
(Not to be confused with conditioned inhibition.) Occurs when a CS has previously and repeatedly been presented alone, with no US. If that CS is now paired with a meaningful US, the CR will not develop as quickly, because you have already learned that the CS is irrelevant (past experience of filtering out or ignoring the CS slows new learning). |
|
|
Term
4 timing schedules: delay, trace, simultaneous, backwards |
|
Definition
1. Delay: optimal for learning; the US follows about 1 second after CS onset while the CS is still present. Stimulus onset asynchrony (SOA) = 1 sec; the longer the SOA, the slower the learning. 2. Trace: the CS ends before the US arrives; pretty good for learning, but delay is better. ISI (inter-stimulus interval) = time between CS and US. 3. Simultaneous: both presented at the same time. 4. Backwards: US before CS (example: US = scream, CS = pitter-patter of feet). (A small classification sketch follows this card.) |
|
|
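A minimal Python sketch of these four timing arrangements, assuming illustrative onset/offset times (in seconds) and a made-up function name:

```python
# Classify a CS-US pairing from onset/offset times (seconds); names and times are illustrative.
def classify_pairing(cs_onset, cs_offset, us_onset):
    soa = us_onset - cs_onset              # stimulus onset asynchrony (SOA)
    if us_onset == cs_onset:
        return "simultaneous", soa         # CS and US start together
    if us_onset < cs_onset:
        return "backwards", soa            # US comes before the CS
    if us_onset <= cs_offset:
        return "delay", soa                # US arrives while the CS is still on
    return "trace", soa                    # gap between CS offset and US onset (the ISI)

# Delay example from the card: CS at t=0, US 1 s later while the CS is still on.
print(classify_pairing(cs_onset=0.0, cs_offset=2.0, us_onset=1.0))  # ('delay', 1.0)
print(classify_pairing(cs_onset=0.0, cs_offset=1.0, us_onset=3.0))  # ('trace', 3.0)
```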
Term
Contiguity vs Contingency |
|
Definition
Contiguity ~ adjacency, proximity. Contingency ~ dependency, predictability. Is it sufficient for the CS to simply precede the US (contiguity), or must it also predict the US (contingency)? |
|
|
Term
Robert Rescorla |
|
Definition
Tested contiguity vs. contingency. US = shock, UR = "ouch," CS = tone, CR = freezing. He varied the probability that the US followed the CS: 10%, 20%, or 40%, i.e. p(US|CS) = .1, .2, .4. But the US also occurred without the CS at a constant probability of 0%, 10%, 20%, or 40%, i.e. p(US|no CS) = 0, .1, .2, .4. Conditioning depended on the difference between the two probabilities: when they were equal, little conditioning occurred (see the sketch after this card). |
|
|
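A minimal Python sketch of the contingency comparison, using the probability values listed in the card; the helper name is a made-up illustration:

```python
# Contingency as the difference between the two conditional probabilities (illustrative helper).
def delta_p(p_us_given_cs, p_us_given_no_cs):
    return p_us_given_cs - p_us_given_no_cs

for p_cs in (0.1, 0.2, 0.4):                  # p(US|CS) conditions from the card
    for p_no_cs in (0.0, 0.1, 0.2, 0.4):      # p(US|no CS) conditions from the card
        print(f"p(US|CS)={p_cs:.1f}  p(US|noCS)={p_no_cs:.1f}  "
              f"delta_p={delta_p(p_cs, p_no_cs):+.1f}")
```

delta_p is positive when the CS predicts the US and zero when the shock is equally likely with or without the tone.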
Term
S-R Theory |
|
Definition
A new neuronal path connects the CS directly to the response. Example: the sound of the bell gains new connections directly to the salivary glands. |
|
|
Term
S-S Theory |
|
Definition
A new neuronal path connects the CS with the US. The new response is whatever response is appropriate when "expecting" the US. What is learned? The connection between the CS and the US. |
|
|
Term
|
Definition
Paralyze the muscles so that the CS is never paired with a response. What happens when the paralysis wears off? S-R theory predicts the dog will not move its leg, because the CS was never paired with the response. S-S theory predicts the dog will move its leg, because the CS was paired with the US and the natural response to the US is to move the leg. S-S theory is supported. |
|
|
Term
US Devaluation |
|
Definition
After conditioning, devalue the US. Example: US = food, CS = bell; give lots of food so the food loses its value, then test whether the bell still elicits the CR. |
|
|
Term
sensory preconditioning paradigms |
|
Definition
Step 1: pair two neutral stimuli together (clowns and grandmas). Step 2: pair one of the stimuli with a US (clowns with shock). Step 3: present the other CS to see what happens (grandma alone). This supports S-S theory. |
|
|
Term
Rescorla-Wagner Model |
|
Definition
How much learning occurs on each trial: ΔV = α(λ - V), where V = strength of the CS-US association (strength of learning), α = rate of learning (0-1), and λ = maximum learning possible (0-100). (A minimal simulation follows this card.) |
|
|
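A minimal Python sketch of this update rule; the α, λ, and trial-count values below are arbitrary illustrations, not from the deck:

```python
# Rescorla-Wagner update: DeltaV = alpha * (lam - V); parameter values are illustrative.
def rescorla_wagner(alpha=0.3, lam=100.0, n_trials=10, v=0.0):
    """Associative strength V after each CS-US pairing."""
    history = []
    for _ in range(n_trials):
        v += alpha * (lam - v)     # change is biggest when surprise (lam - v) is large
        history.append(round(v, 1))
    return history

print(rescorla_wagner())  # V rises fast, then levels off toward lam: the learning curve
```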
Term
Extinction (instrumental) |
|
Definition
In instrumental conditioning: stop rewarding the behavior. |
|
|
Term
Generalization (instrumental) |
|
Definition
In instrumental conditioning: the rat will press levers in other boxes as well. |
|
|
Term
Discrimination Training: SD & SΔ |
|
Definition
Discriminative stimuli define the situation. SD indicates that a reward can be earned (example: a BOGO sign). SΔ indicates that a punisher, or no reward, will follow (example: a police car). The SD is not itself the reward. |
|
|
Term
Shaping |
|
Definition
The process of teaching very complicated behaviors in small increments by selectively reinforcing ever-closer approximations to a target response. Skinner did this around WWII when he shaped pigeons to peck at a screen. |
|
|
Term
Chaining |
|
Definition
A type of shaping: the process of training a complex series of behaviors that must occur in a particular order, typically by starting with the final task and working backward. |
|
|
Term
Positive Reinforcement |
|
Definition
Present something desired to reward behavior; anything presented that increases the probability of a behavior. Relatively easy to understand and a primary element of our capitalist society. |
|
|
Term
Negative Reinforcement |
|
Definition
Remove something dreaded to reward behavior. It usually involves the removal of fear or pain. Something is taken away to increase the probability of a behavior occurring. It leads to escape and avoidance responses. |
|
|
Term
Positive Punishment |
|
Definition
Present something dreaded to discourage behavior; something is presented to decrease a behavior (e.g., slapping and spanking). Relatively easy to understand and used heavily by ineffective parents. |
|
|
Term
Negative Punishment |
|
Definition
Remove something desired to discourage or decrease a behavior. It is also referred to as Omission Training. |
|
|
Term
Primary Reinforcer |
|
Definition
A stimulus that does not require pairing to function as a reinforcer. Examples include sleep, food, air, water, and sex. |
|
|
Term
Secondary Reinforcer |
|
Definition
A reinforcer you learn to value through pairing with an already established reinforcer (e.g., money). |
|
|
Term
Extrinsic Reinforcement |
|
Definition
Reinforcement that comes from an outside agent. These reinforcers tend to overpower intrinsic rewards. |
|
|
Term
Intrinsic Reinforcement |
|
Definition
Naturally reinforcing: just by doing the activity, you are self-rewarded. These reinforcers are not tangible. |
|
|
Term
Latent Learning |
|
Definition
Learning that occurs in the absence of reinforcement. Example: a group of rats in a maze has been learning the maze all week but not showing it because they had no reason to; once they have a reason, they show the learning. People may be learning something without performing that learning. |
|
|
Term
|
Definition
Immediate reward vs. potential later punishment. People weigh the two about the same and choose the reward, even though the punishment will have a big effect later. Example: lying out in the sun; reward: a nice golden tan; risk: potential skin cancer. |
|
|
Term
Response-consequence interval |
|
Definition
Examples: associating a headache with tequila; linking a good-luck charm with the success of winning. |
|
|
Term
|
Definition
A way to avoid future mistakes. We all have our weaknesses, but we have a responsibility to know where we are vulnerable so that we can avoid those situations. |
|
|
Term
Premack Principle |
|
Definition
Reward with actual behavior, not stimuli. Example: getting money isn't rewarding in itself, but the act of using that money to go shopping is. It is about the verb, not the noun (shopping, playing, etc.). All behaviors have value to the organism, and a more valued behavior reinforces a less valued behavior ("you can go outside and play once you eat your carrots"). Punishment occurs when an organism is forced to engage in a less valued behavior as a consequence of engaging in a more valued behavior. |
|
|
Term
Bliss Point |
|
Definition
The endless search for bliss. Anything is a reward if it brings you closer to your bliss point; anything is a punishment if it moves you away from it. Nothing is a reward forever: a reward starts becoming a punisher if you get too much of it. |
|
|
Term
|
Definition
Only useful when you are in the exact same situation multiple times. |
|
|
Term
Transposition |
|
Definition
Example: learning that the darker color means food and the lighter color means shock. The term Kohler used to indicate that the organism had transferred the relationship between one pair of stimuli to a choice between a different pair. |
|
|
Term
Reinforcement schedules: CRF, FR, VR, FI, VI |
|
Definition
Ratio schedules are usually faster to learn than interval schedules. Fixed schedules produce a variable response rate (a pause after each reinforcement); variable schedules produce a steady, fixed response rate. Extinction is very quick with continuous and fixed-ratio schedules and much slower with variable schedules. |
|
|
Term
Continuous Reinforcement (CRF) |
|
Definition
One behavior = one reinforcement; every response is reinforced. |
|
|
Term
Fixed Ratio (FR) |
|
Definition
Every xth behavior is reinforced. |
|
|
Term
Variable Ratio (VR) |
|
Definition
On average, every xth behavior is reinforced; you never know which response will be reinforced next. |
|
|
Term
Fixed Interval (FI) |
|
Definition
A behavior is rewarded only after n minutes have passed since the last reward. |
|
|
Term
Variable Interval (VI) |
|
Definition
On average, a behavior is rewarded after n minutes; the actual interval varies around n. (A sketch of all five schedules follows this card.) |
|
|
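A minimal Python sketch of the five schedule rules above, written as simple yes/no reinforcement checks. All function names and parameter values are illustrative assumptions, and the caller is assumed to track response counts and reward times:

```python
import random

# Simple yes/no reinforcement checks for each schedule (illustrative; caller tracks state).

def reinforce_fr(count, x):                 # fixed ratio: every xth response pays off
    return count % x == 0                   # CRF is just FR with x = 1

def reinforce_vr(x):                        # variable ratio: reinforce with probability
    return random.random() < 1.0 / x        # 1/x, i.e. every xth response on average

def reinforce_fi(now, last_reward, n):      # fixed interval: first response made at
    return now - last_reward >= n           # least n minutes after the last reward

def reinforce_vi(now, last_reward, required):   # variable interval: 'required' is drawn
    return now - last_reward >= required        # around n after each reward, e.g.
                                                # random.uniform(0.5 * n, 1.5 * n)

print(reinforce_fr(count=10, x=5),                    # True: 10 is a multiple of 5
      reinforce_fi(now=12.0, last_reward=5.0, n=5))   # True: 7 minutes have passed
```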