School of Communication, Information and Library Studies

EXPERIMENTAL RESEARCH

Methods of Inquiry (514) Syllabus

Gustav W. Friedrich

 


A. Purpose of Experimental Research:
1. Establish causal relationships between variables (showing that the cause is a necessary and sufficient condition for the effect).
2. Control for threats to internal (interpretability) and external (generalizability) validity.
B. Potential Threats

1. Internal validity: did the experimental treatments, in fact, make a difference in this specific experimental instance?

a. Threats due to researchers:
1) Researcher personal attribute effect: the researcher's race, gender, age, ethnic identity, prestige, anxiety, friendliness, dominance, or warmth affects subjects' responses. Example findings on experimenter sex: only 12% of experimenters smiled at male subjects, while 70% smiled at female subjects; female experimenters were friendly to female subjects in the visual channel but not the auditory channel, with the reverse pattern for male experimenters.
2) Researcher unintentional expectancy effect: researchers influence subjects' responses by inadvertently letting them know the results they desire. The classic example is Clever Hans, the horse whose apparent arithmetic ability came from reading his questioner's unwitting cues.
3) Experimenter bias (independent of a specific hypothesis)
a) observer effects: recording and computational errors. Recording errors occur on roughly 1% of observations, with over two-thirds falling in the direction of the hypothesis; about two-thirds of experimenters make computational errors, three-quarters of which fall in the direction of the hypothesis.
b) interpreter effects
c) intentional effects

b. Threats to how research is conducted:
1) Procedure validity and reliability, which takes three forms: (a) administering accurate measurement techniques in a consistent manner (instrumentation); (b) treatment validity and reliability; and (c) controlling for environmental influences.
2) History: changes in the environment external to the study that may influence people's behavior.
3) Sensitization: tendency for an initial measurement or procedure to influence a subsequent measurement or procedure.
4) Data analysis: using improper procedures to analyze data.

c. Threats due to research subjects:
1) The Hawthorne effect: changes in behavior that occur simply because people know they are being observed (named for Western Electric's Hawthorne plant in Cicero, Illinois; the classic studies were reported in 1939).
2) Selection: how people or texts are selected for a study may influence the validity of the conclusions drawn (comparisons are meaningless unless the groups are comparable).
3) Statistical regression: the tendency for subjects selected on the basis of extreme scores to regress toward the mean on a second measurement (see the simulation sketch after this list).
4) Mortality: differential dropout rates between the experimental and control groups.
5) Maturation: internal changes that occur within the people being studied over the course of the research (e.g., growing older, more tired, or more experienced).
6) Intersubject bias: when the people being studied influence one another.
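
The statistical-regression threat can be illustrated with a short simulation. The sketch below (Python with numpy; all numbers and the seed are hypothetical) selects the top 10% of subjects on a pretest and shows that, with no treatment at all, their mean drifts back toward the grand mean on the posttest, because part of their extreme pretest scores was measurement error.

    import numpy as np

    rng = np.random.default_rng(0)               # hypothetical seed for reproducibility
    n = 10_000
    true_score = rng.normal(50, 10, n)           # each subject's stable "true" level
    pretest = true_score + rng.normal(0, 5, n)   # observed score = true score + error
    posttest = true_score + rng.normal(0, 5, n)  # independent error on the retest

    extreme = pretest >= np.percentile(pretest, 90)   # selected for extreme pretest scores
    print(f"pretest mean of extreme group:  {pretest[extreme].mean():.1f}")
    print(f"posttest mean of extreme group: {posttest[extreme].mean():.1f}")
    # The posttest mean is lower even though nothing was done to the subjects.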

 

2. External validity: to what populations, settings, treatment variables, and measurement variables can this effect be generalized?
a. Sampling: to whom can the findings be generalized?
b. Ecological Validity: to what extent does the research reflect, or do justice to, real-life circumstances?
c. Replication: has the study been replicated (conducting a second or third study on a particular topic that repeats exactly the procedures used in the first study or varies them in some systematic way)?

C. Features of Experimental Research:
1. Independent and dependent variables
2. Pretests and posttests
3. Treatment and control groups
4. Randomization
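
Randomization here means assigning subjects to conditions by a chance procedure so that, on average, the groups are comparable before treatment. A minimal sketch of simple random assignment in Python (the subject labels and seed are hypothetical):

    import random

    def random_assignment(subjects, seed=None):
        """Shuffle the subject pool and split it into equal-sized treatment and control groups."""
        rng = random.Random(seed)
        pool = list(subjects)
        rng.shuffle(pool)
        half = len(pool) // 2
        return {"treatment": pool[:half], "control": pool[half:]}

    groups = random_assignment([f"S{i:02d}" for i in range(1, 21)], seed=42)
    print(groups["treatment"])
    print(groups["control"])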

D. Potential Solutions

1. True Experimental Designs
a. pretest/posttest control group design
b. posttest-only control group design (an analysis sketch follows this list of designs)
c. Solomon four-group design
2. Quasi-Experimental Designs
a. nonequivalent control group
b. time series
c. multiple time series
3. Pre-Experimental Designs
a. one-shot case study
b. one-group, pretest/posttest
c. static group comparison
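
As one illustration, data from the posttest-only control group design (D.1.b) are commonly analyzed by comparing the two groups' posttest means with an independent-samples t test. A minimal sketch using scipy; the scores are hypothetical:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)        # hypothetical posttest scores
    treatment = rng.normal(75, 10, 30)    # randomly assigned treatment group
    control = rng.normal(70, 10, 30)      # randomly assigned control group

    t_stat, p_value = stats.ttest_ind(treatment, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")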

E. Implementing Research Designs: the "MaxMinCon" principle (maximize systematic variance, minimize error variance, control extraneous variance).

1. Maximize systematic variance: design, plan, and conduct research so that the experimental conditions are as different as possible.
a) experimental realism vs. mundane realism
b) stimulus standardization vs. impact standardization
c) instructions vs. an event (e.g., a staged "accident" or a confederate, so there is no aura of an experiment)

2. Minimize error variance: (a) reduce errors of measurement through controlled conditions; (b) increase the reliability of measures (a reliability sketch follows this list).
a) remove from setting
b) assess significant behaviors
c) observe behaviors in another setting
d) imbed items
e) "whoops" procedure
f) confederate collect data
g) physiological measures
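
Reliability of a multi-item measure (point (b) above) is often summarized with an internal-consistency coefficient such as Cronbach's alpha. A minimal sketch with numpy; the response matrix is hypothetical:

    import numpy as np

    def cronbach_alpha(scores):
        """scores: subjects-by-items matrix of item responses."""
        k = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1)
        total_variance = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical responses: 6 subjects answering 4 Likert-type items
    scores = np.array([
        [4, 5, 4, 5],
        [2, 2, 3, 2],
        [5, 5, 4, 5],
        [3, 3, 3, 4],
        [1, 2, 1, 2],
        [4, 4, 5, 4],
    ])
    print(f"alpha = {cronbach_alpha(scores):.2f}")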

3. Control extraneous systematic variance: (a) randomize, (b) build the extraneous variable into the design as an independent variable, (c) eliminate the variable by holding it constant, (d) match subjects, or (e) use statistical control (analysis of covariance, ANCOVA).
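
A minimal sketch of option (e), statistical control through analysis of covariance, using the statsmodels formula interface. The data frame is hypothetical, with the pretest score serving as the covariate:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)                               # hypothetical data
    n = 60
    df = pd.DataFrame({
        "group": np.repeat(["treatment", "control"], n // 2),
        "pretest": rng.normal(50, 10, n),
    })
    df["posttest"] = (0.8 * df["pretest"]
                      + np.where(df["group"] == "treatment", 5.0, 0.0)
                      + rng.normal(0, 5, n))

    # Posttest regressed on group membership with the pretest held constant statistically
    ancova = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
    print(ancova.summary())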

4. Additional design features
a) placebo model from medicine
b) deception
c) enlist subject as experimenter
d) avoid pre-post designs
e) stay ignorant of specific treatment ("blind" techniques)
f) use taped instructions
