QUASI-EXPERIMENTAL DESIGN1
Donald T. Campbell
Northwestern University
This phrase refers to the application of an experimental mode of analysis and interpretation to bodies of data not meeting the full requirements of experimental control because experimental units are not assigned at random to at least two “treatment” conditions. The settings to which it is appropriate are those of experimentation in social settings, including planned interventions such as specific communications, persuasive efforts, changes in conditions and policies, efforts at social remediation, etc. Unplanned conditions and events may also be analyzed in this way where an exogenous variable has such discreteness and abruptness as to make appropriate its consideration as an experimental treatment applied at a specific point in time to a specific population. When properly done, when attention is given to the specific implications of the specific weaknesses of the design in question, quasi-experimental analysis can provide a valuable extension of the experimental method.
While efforts to interpret field data as experiments go back much farther, the first
prominent methodology of this kind in the social
sciences was Chapin’s Ex Post Facto Experiment
(Chapin & Queen, 1937; Chapin, 1955; Greenwood,
1945), although it should be noted that due to
the failure to control regression artifacts,
this mode of analysis is no longer regarded as
acceptable. The American Soldier volumes
(Stouffer et al., 1949) provide prominent analyses
of the effects of specific military experiences,
where it is implausible that differences
in selection explain the results. Thorndike’s efforts to demonstrate the effects of specific course work upon other intellectual achievements provide an excellent early model (e.g., Thorndike & Woodworth, 1901; Thorndike & Ruger, 1923). Extensive analysis and review
of this literature are provided elsewhere
(Campbell, 1957; 1963; Campbell & Stanley, 1963)
and serve as the basis for the present abbreviated
presentation.
The core requirement of a “true” experiment
lies in the experimenter’s ability to apply
at least two experimental treatments in complete
independence of the prior states of the materials (persons, etc.) under study.1

1 The preparation of this review was supported in part by Project C-998, Contract 3-20-001, with the Media Research Branch, Office of Education, U.S. Department of Health, Education, and Welfare, under provisions of Title VII of the National Defense Education Act. This symposium presentation is essentially the same as the current draft of my article for the International Encyclopedia of the Social Sciences.

This independence
makes resulting differences interpretable as
effects of the differences in treatment. In the
social sciences this independence of prior status
is assured by randomization in assignments to treatments. Experiments meeting these requirements,
and thus representing “true” experiments,
are much more possible in the social sciences
than is generally realized. Wherever, for example,
the treatments can be applied to individuals
or small units (such as precincts or classrooms)
without the respondents’ being aware of
experimentation or that other units are getting
different treatments, very elegant experimental
control can be achieved. An increased acceptance
by administrators of randomization as the
democratic method of allocating scarce resources
(be these new housing, therapy, or fellowships)
will make possible field experimentation in many
settings. Where innovations are to be introduced
throughout a social system, and where the introduction
cannot in any event be simultaneous, a use of randomization in the staging can provide an experimental comparison of the new and the
old, using the groups receiving the delayed introduction
as controls. Nothing in this article
should be interpreted as minimizing the importance
of increasing the use of true experimentation.
However, where true experimental design with random assignment of persons to treatments is not possible, due to ethical considerations, lack of power, or infeasibility, application of quasi-experimental analysis has much to offer.
The social sciences must do the best they can with the possibilities open to them. Inferences
must frequently be made from data lacking
complete control. Too often a scientist trained
in experimental method rejects out of hand any
research in which complete control is lacking.
Yet in practice no experiment is perfectly executed,
and the practicing scientist overlooks
those imperfections which seem to him to offer
no plausible rival explanation of the results.
In the light of modern philosophies of science, no experiment ever proves a theory; it merely probes it. Seeming proof results from that condition in which there is no available plausible rival hypothesis to explain the data. The general program of quasi-experimental analysis is to specify and examine those plausible rival explanations of the results which are provided by the uncontrolled variables. A failure of control which does not in fact provide a plausible rival interpretation is not regarded as invalidating.
It is well to remember that we do make assured causal inferences in many settings not involving randomization: the earthquake caused the brick building to crumble; the automobile crashing into it caused the telephone pole to break; the language patterns of the older models and mentors caused this child to speak English rather than Kwakiutl; etc. While these are all potentially erroneous inferences, they are of the same type as experimental inferences. We are confident that were we to intrude experimentally, we could confirm the causal laws involved. Yet they have been made assuredly by a nonexperimenting observer. This assurance is due to the effective absence of other plausible causes. Consider the inference as to the crashing auto and the telephone pole: we rule out combinations of termites and wind because the other implications of these theories (e.g., termite tunnels and debris in the wood, wind records at nearby weather stations) do not occur. Spontaneous splintering of the pole by happenstance coincident with the auto’s onset does not impress us as a rival, nor would it explain the damage to the car, etc. Analogously, in quasi-experimental analysis, tentative causal interpretation of data may be made where the interpretation in question squares with the data and where other rival interpretations have been rendered implausible.
For the evaluation of data series as quasi-experiments, a set of twelve frequent threats to validity has been developed. These may be regarded as the important classes of frequently plausible rival hypotheses which good research design seeks to rule out. All will be presented briefly even though not all are employed in the evaluation of the designs used illustratively here.
Fundamental to this listing is a distinction between internal validity and external validity. Internal validity is the basic minimum without which any experiment is uninterpretable: did in fact the experimental treatments make a difference in this specific experimental instance? External validity asks the question of generalizability: to what populations, settings, treatment variables, and measurement variables can this effect be generalized? Both types of criteria are obviously important, even though they are frequently at odds, in that features increasing one may jeopardize the other. While internal validity is the sine qua non, and while the question of external validity, like the question of inductive inference, is never completely answerable, the selection of designs strong in both types of validity is obviously our ideal.
Relevant to internal validity are eight different classes of extraneous variables which, if not controlled in the experimental design, might produce effects mistaken for the effect of the experimental treatment. These are:

1. History: the other specific events occurring between a first and second measurement in addition to the experimental variable.

2. Maturation: processes within the respondents operating as a function of the passage of time per se (not specific to the particular events), including growing older, growing hungrier, growing tireder, and the like.

3. Testing: the effects of taking a test upon the scores of a second testing.

4. Instrumentation: in which changes in the calibration of a measuring instrument or changes in the observers or scorers used may produce changes in the obtained measurements.

5. Statistical regression: operating where groups have been selected on the basis of their extreme scores.

6. Selection: biases resulting in differential recruitment of respondents for the comparison groups.

7. Experimental mortality: the differential loss of respondents from the comparison groups.

8. Selection-maturation interaction: in certain of the multiple-group quasi-experimental designs, such as the nonequivalent control group design, such interaction is confounded with, i.e., might be mistaken for, the effect of the experimental variable.
Factors jeopardizing external validity or representativeness are:

9. The reactive or interaction effect of testing, in which a pretest might increase or decrease the respondent’s sensitivity or responsiveness to the experimental variable and thus make the results obtained for a pretested population unrepresentative of the effects of the experimental variable for the unpretested universe from which the experimental respondents were selected.

10. Interaction effects between selection bias and the experimental variable.

11. Reactive effects of experimental arrangements, which would preclude generalization about the effect of the experimental variable for persons being exposed to it in nonexperimental settings.

12. Multiple-treatment interference, a problem wherever multiple treatments are applied to the same respondents, and a particular problem for one-group designs involving equivalent time-samples or equivalent materials samples.
Perhaps the simplest quasi-experimental design is the One-Group Pretest-Posttest Design, O X O (where O represents measurement or observation, and X represents the experimental treatment). This common design patently leaves uncontrolled the internal validity threats of History, Maturation, Testing, Instrumentation, and, if the group was selected as extreme on the pretest, Regression. There may be situations in which the analyst could decide that none of these represented plausible rival hypotheses in his setting: a log of other possible change-agents might provide no plausible ones, the measurement in question might be nonreactive (Campbell, 1957), the time span too short for maturation, too spaced for fatigue, etc. However, the sources of invalidity
are so numerous that a more powerful quasi-experimental design would be preferred. Several of these can be constructed by adding features to this simple one. The Interrupted Time-Series Experiment utilizes a series of measurements providing multiple pretests and posttests, e.g.: O1 O2 O3 O4 X O5 O6 O7. If in this series the rise from O4 to O5 is greater than found elsewhere, then Maturation, Testing, and Regression are no longer plausible, in that they would predict equal or greater rises for O1 to O2, O2 to O3, etc. Instrumentation may well be controlled too, although in institutional settings a change of administration policy is often accompanied by a change in record-keeping standards. Observers and participants may be focused on the occurrence of X, and may fail to take into consideration changes in rating standards, etc. History remains the major threat, although in many settings it would not offer a plausible rival interpretation. If one had available a parallel time series from a group not receiving the experimental treatment, but exposed to the same extraneous sources of influence, and if this control time series failed to show the exceptional jump from O4 to O5, then the plausibility of History as a rival interpretation would be greatly reduced. We may call this the Multiple Time-Series Design.
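The interrupted time-series logic can be sketched numerically. The series below is invented for illustration: an effect of X should make the O4-to-O5 rise exceed the rises found elsewhere, which is exactly what Maturation, Testing, and Regression cannot explain.

```python
# Sketch of the interrupted time-series logic (illustrative data, not
# from the article): an effect of X should make the O4 -> O5 rise
# exceed the rises found elsewhere in the series.
obs = [10.1, 10.4, 10.6, 10.9, 13.2, 13.4, 13.7]  # O1..O4, then X, then O5..O7

rises = [b - a for a, b in zip(obs, obs[1:])]  # successive differences
jump_at_x = rises[3]                 # the O4 -> O5 rise, spanning treatment X
other_rises = rises[:3] + rises[4:]

# Maturation, Testing, and Regression predict comparable rises everywhere;
# an exceptional jump only at X renders them implausible as rivals.
exceptional = jump_at_x > max(other_rises)
print(jump_at_x, exceptional)
```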
Another way of improving the One-Group Pretest-Posttest Design is to add a “Nonequivalent Control Group.” (Were the control group to be randomly assigned from the same population as the experimental group, we would, of course, have a true, not quasi, experimental design.) Depending on the similarities of setting and attributes, if the nonequivalent control group fails to show a gain manifest in the experimental group, then History, Maturation, Testing, and Instrumentation are controlled. In this popular design, the frequent effort to “correct” for the lack of perfect equivalence by matching on pretest scores is absolutely wrong (e.g., Thorndike, 1942; Hovland et al., 1949; Campbell & Clayton, 1961), as it introduces a regression artifact. Instead, one should live with any initial pretest differences, using analysis of covariance, or graphic presentation. Remaining uncontrolled is the Selection-Maturation Interaction, i.e., the possibility that the experimental group differed from the control group not only in initial level, but also in its autonomous maturation rate. In experiments on psychotherapy and on the effects of specific coursework this is a very serious rival. Note that it can be rendered implausible by use of a time series of pretests for both groups, thus moving again to the Multiple Time-Series Design.
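The regression artifact introduced by pretest matching can be exhibited in a small simulation (my own illustrative numbers, not from the article): two nonequivalent pools with no treatment effect at all still drift apart at posttest after matching on pretest scores, because each case regresses toward its own population mean.

```python
# Illustrative simulation (invented numbers) of the regression artifact
# introduced by matching nonequivalent groups on pretest scores.  Two
# pools with different true means and NO treatment effect: cases matched
# on the pretest still separate at posttest, each regressing toward its
# own population mean.
import random

random.seed(1)

def scores(pop_mean, n):
    # pretest and posttest = true ability + independent measurement error
    out = []
    for _ in range(n):
        true = random.gauss(pop_mean, 10)
        out.append((true + random.gauss(0, 10), true + random.gauss(0, 10)))
    return out

def posttests_matched(cases, lo=48, hi=52):
    # keep posttests only for cases whose pretest fell in the matching band
    return [post for pre, post in cases if lo < pre < hi]

def avg(xs):
    return sum(xs) / len(xs)

high_pool = scores(60, 2000)   # "experimental" pool, higher true mean
low_pool = scores(40, 2000)    # "control" pool, lower true mean

high_matched = posttests_matched(high_pool)
low_matched = posttests_matched(low_pool)

# Both groups were matched at pretest ~50, yet their posttest means
# separate by several points: a spurious "effect" with no treatment.
print(avg(high_matched), avg(low_matched))
```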
There is not space here to present adequately even these four quasi-experimental designs, but perhaps the strategy of adding specific observations and analyses to check on specific threats to validity has been illustrated. This is carried to an extreme in the Recurrent Institutional Cycle Design (Campbell & McCormack, 1957; Campbell & Stanley, 1963), in which longitudinal and cross-sectional measurements are combined with still other analyses to assess the impact of indoctrination procedures, etc., through exploiting the fact that essentially similar treatments are being given to new entrants year after year or cycle after cycle. Other quasi-experimental designs covered in Campbell & Stanley (1963) include two more single-group designs (the Equivalent Time-Samples Design and the Equivalent Materials Design), Counterbalanced or Rotational Designs, Separate-Sample Pretest-Posttest Designs, Regression-Discontinuity Analysis, the Panel Impact Design (see also Campbell & Clayton, 1961), and the Cross-Lagged Panel Correlation, which is related to Lazarsfeld’s Sixteen-Fold Table (see especially Campbell, 1963).
Related to the program of quasi-experimental analysis are those efforts to achieve causal inference from correlational data.
Note that while correlation does not prove causation,
most causal hypotheses imply specific correlations,
and thus examination of these probes,
tests, or edits the causal hypothesis. Further,
as Simon and Blalock have emphasized (e.g., Blalock, 1961), certain causal models specify uneven patterns of correlation. Thus the A → B → C model implies that rAC be smaller than rAB or rBC. However, the use of partial correlations or the use of Wright’s (1920) path analysis are rejected by the present writer as tests of the model because of the requirement that the “cause” be totally represented in the “effect.” In the social sciences it will never be plausible that the “cause” has been measured without unique error and that it also totally lacks unique systematic variance not shared with the “effect.” More appropriate would be Lawley’s (1940) test of the hypothesis of single-factoredness. Only if single-factoredness can be rejected would the causal model, as represented by its predicted uneven correlation pattern, be the preferred interpretation.
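The uneven correlation pattern implied by a causal chain can be checked numerically. The sketch below (my illustration, with arbitrary path coefficients of 0.7) simulates A → B → C and confirms that rAC falls below both rAB and rBC, landing near their product:

```python
# Numerical sketch (illustrative, not Campbell's own computation) of the
# uneven correlation pattern implied by a causal chain A -> B -> C:
# rAC should come out near rAB * rBC, hence smaller than either.
import random

random.seed(0)
n = 20000
a = [random.gauss(0, 1) for _ in range(n)]
b = [0.7 * x + random.gauss(0, 1) for x in a]   # B caused by A
c = [0.7 * y + random.gauss(0, 1) for y in b]   # C caused by B

def r(xs, ys):
    # Pearson correlation of two equal-length lists
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r_ab, r_bc, r_ac = r(a, b), r(b, c), r(a, c)
print(r_ab, r_bc, r_ac)   # rAC is the smallest of the three
```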
A word needs to be said about tests of
significance for quasi -experimental designs.
There has come from several competent social
scientists the argument that since randomization
has not been used, tests of significance assuming
randomization are not relevant. The attitude of
the present writer is on the whole in disagreement.
However, some aspects of the protest are
endorsed: Good experimental design is needed for
any comparison inferring change, whether or not
tests of significance are used, even if only
photographs, graphs, or essays are being compared.
In this sense, experimental design is
independent of tests of significance. More importantly, tests of significance have come to be taken as thoroughgoing proof. In vulgar social science usage, finding a “significant difference” is apt to be taken as proving the author’s basis for predicting the difference, forgetting the many other plausible rival hypotheses explaining a significant difference which quasi-experimental designs leave uncontrolled. Certainly the valuation of tests of significance in some quarters needs demoting. Further, the use of tests of significance designed for the evaluation of a single comparison becomes much too lenient when dozens, hundreds, or thousands of comparisons have been sifted, and this is still common usage. And in a similar manner, the author’s decision as to which of his studies is publishable, and the editor’s decision as to which of the manuscripts is acceptable, further biases the sampling basis. In all of these ways, reform is needed.
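The leniency problem with sifted comparisons is easy to exhibit (an illustrative simulation, not from the article): testing many truly null differences at the .05 level yields “significant” results at roughly a 5% rate, so a large sift produces dozens of spurious findings.

```python
# Sketch of the multiple-comparisons leniency described above: sift many
# comparisons on pure noise and some clear the p < .05 bar anyway.
# Each "comparison" here is a z-test on a sample mean that truly is 0.
import random

random.seed(2)
n_comparisons, n_per_sample = 1000, 25
false_positives = 0
for _ in range(n_comparisons):
    sample = [random.gauss(0, 1) for _ in range(n_per_sample)]
    z = (sum(sample) / n_per_sample) / (1 / n_per_sample ** 0.5)
    if abs(z) > 1.96:               # "significant" at the .05 level
        false_positives += 1

# Roughly 5% of the null comparisons come out "significant"; with 1000
# sifted comparisons, that is dozens of spurious findings.
print(false_positives)
```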
However, when a quasi-experimenter has compared the results from two intact classrooms employed in a sampling of convenience, small-sample instability, i.e., a chance difference, is certainly one of the many plausible rival hypotheses which must be considered, even if only one. If each class had but five students we would interpret the fact that 20% more in the experimental class showed increases in favorableness with much less interest than if each class
had 500 students. In this case there is available
an elaborate formal theory for the plausible
rival hypothesis of chance fluctuation. This
theory involves assumptions of randomness, which
are quite appropriately present when we reject
the null model of random association in favor of
a hypothesis of systematic difference between
the two classes. If we find a “significant
difference,” the test of significance will not,
of course, tell us whether the two classes
differed because one saw the experimental movie,
or for some selection reason associated with
class topic, time of day, etc., which might have
interacted with rate of autonomous change, pretest-instigated changes, reactions to commonly experienced events, etc. But such a test of significance will help us rule out this 13th plausible rival hypothesis, that there is no difference here at all that a model of purely chance assignment could not account for as a vagary of sampling. Note that our statement of probability level is in this light a statement of the plausibility of this rival hypothesis, which always has some plausibility, however faint. In this orientation, a practice of stating the probability in descriptive detail seems preferable to using but a single a priori decision criterion.

REFERENCES

Blalock, H.M. 1964. Causal inferences in nonexperimental research. Chapel Hill: University of North Carolina Press.

Campbell, D.T. 1957. Factors relevant to the validity of experiments in social settings. Psychological Bulletin 54:297-312.

Campbell, D.T. 1963. From description to experimentation: Interpreting trends as quasi-experiments. Pages 212-242 in Harris, C.W. (editor), Problems in measuring change. Madison, Wis.: University of Wisconsin Press.

Campbell, D.T.; and Clayton, K.N. 1961. Avoiding regression effects in panel studies of communication impact. Studies in Public Communication No. 3, 99-118.

Campbell, D.T.; and McCormack, Thelma H. 1957. Military experience and attitudes toward authority. American Journal of Sociology 62:482-490.

Campbell, D.T.; and Stanley, J.C. 1963. Experimental and quasi-experimental designs for research on teaching. Pages 171-246 in Gage, N.L. (editor), Handbook of research on teaching. Chicago: Rand McNally.

Chapin, F.S. 1955. Experimental designs in sociological research. Rev. ed. New York: Harper.

Chapin, F.S.; and Queen, S.A. 1937. Research memorandum on social work in the depression. New York: Social Science Research Council, Bulletin 39.

Greenwood, E. 1945. Experimental sociology: A study in method. New York: King’s Crown Press.

Hovland, C.I.; Lumsdaine, A.A.; and Sheffield, F.D. 1949. Experiments on mass communication. Princeton, N.J.: Princeton University Press.

Lawley, D.N. 1940. The estimation of factor loadings by the method of maximum likelihood. Proceedings of the Royal Society of Edinburgh 60:64-82.

Stouffer, S.A. (editor) 1949. The American Soldier. Vols. I and II. Princeton, N.J.: Princeton University Press.

Thorndike, E.L.; and Ruger, G.J. 1923. The effect of first-year Latin upon knowledge of English words of Latin derivation. School and Society 18:260-270, 417-418.

Thorndike, E.L.; and Woodworth, R.S. 1901. The influence of improvement in one mental function upon the efficiency of other functions. Psychological Review 8:247-261, 384-395, 553-564.

Thorndike, R.L. 1942. Regression fallacies in the matched groups experiment. Psychometrika 7:85-102.

Wright, S. 1920. Correlation and causation. Journal of Agricultural Research 20:557-585.
Research Methods in Healthcare Epidemiology and
Antimicrobial Stewardship – Quasi-Experimental Designs
Marin L. Schweizer, PhD1,2, Barbara I. Braun, PhD3, and Aaron M. Milstone, MD MHS4,5
1Department of Internal Medicine, Carver College of Medicine, University of Iowa
2Center for Comprehensive Access and Delivery Research and Evaluation, Iowa City VA Health
Care System
3Department of Health Services Research, The Joint Commission, Oakbrook Terrace IL
4Department of Pediatrics, Division of Pediatric Infectious Diseases, Johns Hopkins University
School of Medicine, Baltimore, MD
5Department of Hospital Epidemiology and Infection Control, Johns Hopkins Hospital, Baltimore,
MD
Abstract
Quasi-experimental studies evaluate the association between an intervention and an outcome using
experiments in which the intervention is not randomly assigned. Quasi-experimental studies are
often used to evaluate rapid responses to outbreaks or other patient safety problems requiring
prompt non-randomized interventions. Quasi-experimental studies can be categorized into three
major types: interrupted time series designs, designs with control groups, and designs without
control groups. This methods paper highlights key considerations for quasi-experimental studies in
healthcare epidemiology and antimicrobial stewardship including study design and analytic
approaches to avoid selection bias and other common pitfalls of quasi-experimental studies.
Introduction
The fields of healthcare epidemiology and antimicrobial stewardship (HE&AS) frequently
apply interventions at a unit-level (e.g. intensive care unit [ICU]). These are often rapid
responses to outbreaks or other patient safety problems requiring prompt non-randomized
interventions. Quasi-experimental studies evaluate the association between an intervention
and an outcome using experiments in which the intervention is not randomly assigned.1, 2
Quasi-experimental studies can be used to measure the impact of large scale interventions or
policy changes where data are reported in aggregate and multiple measures of an outcome
over time (e.g., monthly rates) are collected.
Quasi-experimental studies vary widely in methodological rigor and can be categorized into
three types: interrupted time series designs, designs with control groups, and designs without
Corresponding author: Marin L. Schweizer, PhD, Iowa City VA Health Care System (152), 601 Hwy 6 West, Iowa City, IA 52246,
marin-schweizer@uiowa.edu, 319-338-0581 x3831.
Potential conflicts of interest. None.
HHS Public Access
Author manuscript
Infect Control Hosp Epidemiol. Author manuscript; available in PMC 2017 October 01.
Published in final edited form as:
Infect Control Hosp Epidemiol. 2016 October ; 37(10): 1135–1140. doi:10.1017/ice.2016.117.
control groups. The HE&AS literature contains many uncontrolled before-and-after studies (also called pre-post studies), but advanced quasi-experimental study designs should be considered to overcome the biases inherent in uncontrolled before-and-after studies.3 In this article, we highlight methods to improve quasi-experimental study design including use of a control group that does not receive the intervention2 and use of the interrupted time series study design, in which multiple equally spaced observations are collected before and after the intervention.4
Advantages and Disadvantages (Table 1)
The greatest advantages of quasi-experimental studies are that they are less expensive and
require fewer resources compared with individual randomized controlled trials (RCTs) or
cluster randomized trials. Quasi-experimental studies are appropriate when randomization is
deemed unethical (e.g., effectiveness of hand hygiene studies).1
Quasi-experimental studies
are often performed at a population-level not an individual-level, and thus they can include
patients who are often excluded from RCTs, such as those too ill to give informed consent or
urgent surgery patients, with IRB approval as appropriate.5
Quasi-experimental studies are
also pragmatic because they evaluate the real-world effectiveness of an intervention
implemented by hospital staff, rather than efficacy of an intervention implemented by
research staff under research conditions.5
Therefore, quasi-experimental studies may also be
more generalizable and have better external validity than RCTs.
The greatest disadvantage of quasi-experimental studies is that randomization is not used,
limiting the study’s ability to conclude a causal association between an intervention and an
outcome. There is a practical challenge to quasi-experimental studies that may arise when
some patients or hospital units are encouraged to introduce an intervention, while other units
retain the standard of care and may feel excluded.2
Importantly, researchers need to be aware
of the biases that may occur in quasi-experimental studies that may lead to a loss of internal
validity, especially selection bias in which the intervention group may differ from the
baseline group.2
Types of selection bias that can occur in quasi-experimental studies include
maturation bias, regression to the mean, historical bias, instrumentation bias, and the
Hawthorne effect.2
Lastly, reporting bias is prevalent in retrospective quasi-experimental
studies, in which researchers only publish quasi-experimental studies with positive findings
and do not publish null or negative findings.
Pitfalls and Tips
Key study design and analytic approaches can help avoid common pitfalls of quasi-experimental studies. Quasi-experimental studies can be as small as an intervention in one ICU or as large as implementation of an intervention in multiple countries.6 Multisite studies generally have stronger external validity. Subtypes of quasi-experimental study designs are shown in Table 2 and the Supplemental Figure.1, 2, 7 In general, the higher numbers assigned to the designs in the table are associated with more rigorous study designs. Quasi-experimental studies meet some requirements for causality including temporality, strength of association and dose response.1, 8 The addition of concurrent control groups, time series measurements, sensitivity analyses and other advanced design elements can further support
the hypothesis that the intervention is causally associated with the outcome. These design
elements aid in limiting the number of alternative explanations that could account for the
association between the intervention and the outcome.2
Quasi-experimental studies can use observations that were collected retrospectively,
prospectively, or a combination thereof. Prospective quasi-experimental studies use baseline
measurements that are calculated prospectively for the purposes of the study, then an
intervention is implemented and more measurements are collected. It is often necessary to
use retrospective data when the intervention is outside of the researcher’s control (e.g.
natural disaster response) or when hospital epidemiologists are encouraged to intervene
quickly in response to external pressure (e.g. high central line-associated bloodstream
infection [CLABSI] rates).2
However, retrospective quasi-experimental studies have a higher
risk of bias compared with prospective quasi-experimental studies.2
The first major consideration in quasi-experimental studies is the addition of a control group
that does not receive the intervention (Table 2 subtype 6–9, 11, 15). Control groups can
assist in accounting for seasonal and historical bias. If an effect is seen among the
intervention group but not the control group, then causal inference is strengthened. Careful
selection of the control group can also strengthen causal inference. Detection bias can be
avoided by blinding those who collect and analyze the data to which group received the
intervention.2
The second major consideration is designing the study in a way to reduce bias, either by
including a non-equivalent dependent variable or by using a removed-treatment design, a
repeated treatment design or a switching replications design. Non-equivalent dependent
variables should be similar to the outcome variable except that the non-equivalent dependent
variable is not expected to be influenced by the outcome (Table 2 subtypes 3, 12). In a
removed-treatment design the intervention is implemented then taken away and observations
are made before, during and after implementation (Table 2 subtypes 4, 5, 13). This design
can only be used for interventions that do not have a lasting effect on the outcome that could
contaminate the study. For example, once staff have been educated, that knowledge cannot
be removed.2
Researchers must clearly explain before implementation that the intervention
will be removed, otherwise this can lead to frustration or demoralization by the hospital staff
implementing the intervention.2
In the repeated treatment design (Table 2 subtypes 5, 14) interventions are implemented, removed, then implemented again. Similar to the removed-treatment design, the repeated treatment design should only be used if the intervention does not have a lasting effect on the outcome. In a switching replications design, which is also
known as a cross-over design, one group implements the intervention while the other group
serves as the control. Then, the intervention is stopped in the first group and implemented in
the second group (Table 2 subtypes 9, 15). The cross-overs can occur multiple times. If the
outcomes are only impacted during intervention observations, but not in the control
observations, then there is support for causality.2
A third key consideration for quasi-experimental studies with the interrupted time series
design is to collect many evenly spaced observations in both the baseline and intervention
periods. Multiple observations are used to estimate and control for underlying trends in data,
such as seasonality and maturation.2
The frequency of the observations (e.g. weekly,
monthly, quarterly) should have clinical or seasonal meaning so that a true underlying trend
can be established. There are conflicting recommendations as to the minimum number of
observations needed for a time series design, but recommendations range from 20 observations before and
20 after intervention implementation to 100 observations overall.2–4,9
The interrupted time
series design is the most effective and powerful quasi-experimental design, particularly
when supplemented by other design elements.2
However, time series designs are still subject
to biases and threats to validity.
The final major consideration is ensuring an appropriate analysis plan. Time series study
designs collect multiple observations of the same population over time, which result in
autocorrelated observations.2
For instance, carbapenem-resistant Enterobacteriaceae (CRE)
counts collected one month apart are more similar to one another than CRE counts collected
two months apart.4
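Autocorrelation of this kind is easy to quantify. The sketch below is a hypothetical illustration (simulated counts, not data from any cited study): it computes the lag-1 autocorrelation of a smooth seasonal monthly series, and a value near 1 shows how strongly adjacent observations violate the independence assumption.

```python
import numpy as np

def lag1_autocorrelation(y: np.ndarray) -> float:
    """Pearson correlation between the series and itself shifted by one step."""
    return float(np.corrcoef(y[:-1], y[1:])[0, 1])

# Hypothetical monthly infection counts: a seasonal cycle plus a slow upward trend.
months = np.arange(48)
counts = 10 + 0.1 * months + 3 * np.sin(2 * np.pi * months / 12)

r = lag1_autocorrelation(counts)
print(f"lag-1 autocorrelation: {r:.2f}")  # close to 1 for a smooth seasonal series
```

With real surveillance counts, inspecting autocorrelation at several lags (e.g. lag 12 for annual seasonality) helps justify the time series methods discussed in this section.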
Basic statistics (e.g. chi-square test) should not be used to analyze time
series data because they cannot take into account trends over time and they rely on an
independence assumption. Time series data should be analyzed using either regression
analysis or interrupted time-series analysis (ITSA).4
Linear regression models or generalized
linear models can be used to evaluate the slopes of the observed outcomes before and during
implementation of an intervention. However, unlike regression models, ITSA relaxes the
independence assumption by combining a correlation model and a regression model to
effectively remove seasonality effects before addressing the impact of the intervention.2, 4
ITSA assesses the impact of the intervention by evaluating the changes in the intercept and
slope before and after the intervention. ITSA can also include a lag effect if the intervention
is not expected to have an immediate result, and additional sensitivity analyses can be
performed to test the robustness of the findings. We recommend statistician consultation
while designing the study in order to determine which model may be appropriate and to help
perform power calculations that account for correlation.
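To make the regression approach concrete, here is a minimal sketch of a segmented (interrupted time series) regression fit by ordinary least squares on simulated data. It estimates a baseline level and slope plus a level change and a slope change at the interruption. All numbers are hypothetical, and, as the text notes, a full ITSA would additionally model autocorrelation and seasonality, ideally with statistician input.

```python
import numpy as np

# Simulated monthly outcome: 12 baseline months, then an intervention that
# drops the level by 3 units and bends the slope downward by 0.4 units/month.
t = np.arange(24, dtype=float)
t0 = 12                                   # month the intervention starts
post = (t >= t0).astype(float)            # indicator for the intervention period
y = 10.0 + 0.5 * t - 3.0 * post - 0.4 * (t - t0) * post

# Design matrix for segmented regression:
# intercept, baseline trend, level change at t0, slope change after t0.
X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

level_change, slope_change = beta[2], beta[3]
print(f"estimated level change: {level_change:.2f}")
print(f"estimated slope change: {slope_change:.2f}")
```

Because the simulated series is noiseless, the fit recovers the built-in level change (-3.0) and slope change (-0.4); with real data, error models that account for autocorrelation would be needed, as described above.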
Key considerations for designing, analyzing and writing a quasi-experimental study can be
found in the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND)
statement and are summarized in Table 3.10
Examples of Published Quasi-Experimental Studies in HE&AS
Recent quasi-experimental studies illustrated strengths and weaknesses that require attention
when employing this study design.
A recent prospective quasi-experimental study (Table 2 subtype 10) implemented a
multicenter bundled intervention to prevent complex Staphylococcus aureus surgical site
infections.11 The study exemplified strengths of quasi-experimental design using a
pragmatic approach in a real-world setting that even enabled identification of a dose
response to bundle compliance. To optimize validity, the authors included numerous
observation points before and after the intervention and used time series analysis. This study
did not include a concurrent control group, and outcomes were collected retrospectively for
the baseline group and prospectively for the intervention group, which may have led to
ascertainment bias.
Quach and colleagues performed a quasi-experimental study (Table 2 subtype 11) to
evaluate the impact of an infection prevention and quality improvement intervention of daily
chlorhexidine gluconate (CHG) bathing to reduce CLABSI rates in the neonatal ICU.12 The
The primary strength of this study was that the authors used a non-bathed concurrent control group.
Given that baseline CLABSI rates exceeded the National Healthcare Safety
Network (NHSN) pooled mean and that the concurrent control group did not
see a reduction in rates post-intervention, the observed effect was more likely
due to the intervention than to regression to the mean, seasonal effects, or secular trends.
Yin and colleagues performed a quasi-experimental study (Table 2 subtype 14) to determine
whether universal gloving reduced HAIs in hospitalized children.13 This retrospective study
compared the winter respiratory syncytial virus (RSV) season during which healthcare
workers (HCW) were required to wear gloves for all patient contact and the non-winter, non-RSV
season when HCWs were not required to wear gloves. Because the study period
extended over many calendar years, the design enabled multiple crossovers in which the
intervention was removed, as well as the use of time series analysis. This study did not have a control group (another
hospital or unit that did not require universal gloving during RSV season) nor did it have a
non-equivalent dependent variable.
Major Points
Quasi-experimental studies are less resource intensive than RCTs, test real world
effectiveness, and can support a hypothesis that an intervention is causally associated with
an outcome. These studies are subject to biases that can be limited by carefully planning the
design and analysis. Consider key strategies to limit bias, such as including a control group,
including a non-equivalent dependent variable or a removed-treatment design, collecting adequate
observations before and during the intervention, and using appropriate analytic methods (e.g.
interrupted time series analysis).
Conclusion
Quasi-experimental studies are important for HE&AS because practitioners in those fields
often need to perform non-randomized studies of interventions at the unit level of analysis.
Quasi-experimental studies should not always be considered methodologically inferior to
RCTs because quasi-experimental studies are pragmatic and can evaluate interventions that
cannot be randomized due to ethical or logistic concerns.10 Currently, too many quasi-experimental
studies are uncontrolled before-and-after studies using suboptimal research
methods. Advanced techniques such as use of control groups and non-equivalent dependent
variables, as well as interrupted time series design and analysis should be used in future
research.
Acknowledgments
Financial support. MLS is supported through a VA Health Services Research and Development (HSR&D) Career
Development Award (CDA 11-215).
References
1. Harris AD, Bradham DD, Baumgarten M, Zuckerman IH, Fink JC, Perencevich EN. The use and
interpretation of quasi-experimental studies in infectious diseases. Clin Infect Dis. 2004; 38:1586–
91. [PubMed: 15156447]
2. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for
Generalized Causal Inference. Boston: Houghton Mifflin; 2002.
3. Grimshaw J, Campbell M, Eccles M, Steen N. Experimental and quasi-experimental designs for
evaluating guideline implementation strategies. Fam Pract. 2000; 17(Suppl 1):S11–6. [PubMed:
10735262]
4. Shardell M, Harris AD, El-Kamary SS, Furuno JP, Miller RR, Perencevich EN. Statistical analysis
and application of quasi experiments to antimicrobial resistance intervention studies. Clin Infect
Dis. 2007; 45:901–7. [PubMed: 17806059]
5. Thorpe KE, Zwarenstein M, Oxman AD, et al. A pragmatic-explanatory continuum indicator
summary (PRECIS): a tool to help trial designers. J Clin Epidemiol. 2009; 62:464–75. [PubMed:
19348971]
6. Lee AS, Cooper BS, Malhotra-Kumar S, et al. Comparison of strategies to reduce meticillin-resistant
Staphylococcus aureus rates in surgical patients: a controlled multicentre intervention trial.
BMJ Open. 2013; 3:e003126.
7. Harris AD, Lautenbach E, Perencevich E. A systematic review of quasi-experimental study designs
in the fields of infection control and antibiotic resistance. Clin Infect Dis. 2005; 41:77–82.
[PubMed: 15937766]
8. Hill AB. The Environment And Disease: Association Or Causation? Proc R Soc Med. 1965;
58:295–300. [PubMed: 14283879]
9. Crabtree BF, Ray SC, Schmidt PM, O’Connor PJ, Schmidt DD. The individual over time: time series
applications in health care research. J Clin Epidemiol. 1990; 43:241–60. [PubMed: 2313315]
10. Des Jarlais DC, Lyles C, Crepaz N. Improving the reporting quality of nonrandomized evaluations
of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004;
94:361–6. [PubMed: 14998794]
11. Schweizer ML, Chiang HY, Septimus E, et al. Association of a bundled intervention with surgical
site infections among patients undergoing cardiac, hip, or knee surgery. JAMA. 2015; 313:2162–
71. [PubMed: 26034956]
12. Quach C, Milstone AM, Perpete C, Bonenfant M, Moore DL, Perreault T. Chlorhexidine bathing in
a tertiary care neonatal intensive care unit: impact on central line-associated bloodstream
infections. Infect Control Hosp Epidemiol. 2014; 35:158–63. [PubMed: 24442078]
13. Yin J, Schweizer ML, Herwaldt LA, Pottinger JM, Perencevich EN. Benefits of universal gloving
on hospital-acquired infections in acute care pediatric units. Pediatrics. 2013; 131:e1515–20.
[PubMed: 23610206]
14. Popoola VO, Colantuoni E, Suwantarat N, et al. Active Surveillance Cultures and Decolonization
to Reduce Staphylococcus aureus Infections in the Neonatal Intensive Care Unit. Infect Control
Hosp Epidemiol. 2016; 37:381–7. [PubMed: 26725699]
15. Waters TM, Daniels MJ, Bazzoli GJ, et al. Effect of Medicare’s nonpayment for Hospital-Acquired
Conditions: lessons for future policy. JAMA Intern Med. 2015; 175:347–54. [PubMed: 25559166]
Table 1
Advantages, disadvantages, and important pitfalls in using quasi-experimental designs in healthcare
epidemiology research.
Advantages:
- Less expensive and time consuming than RCTs or cluster randomized trials: no need to randomize groups.
- Pragmatic: includes patients that are often excluded from RCTs, tests effectiveness more than efficacy, and may have good external validity.
- Can retrospectively analyze policy changes, even if policy implementation is out of your control.
- Meets some requirements of causality: quasi-experimental studies meet some requirements for causality, including temporality, strength of association, and dose response.2
- Designs can be strengthened with control groups, multiple measures over time, and cross-overs: not the gold standard for establishing causation, but can be the next level below an RCT if well designed.

Disadvantages:
- Retrospective data are often incomplete or difficult to obtain: need processes to assess availability, accuracy, and completeness during the baseline phase before implementation.
- Not randomized: nonrandomized designs tend to overestimate effect size,3 do not meet all requirements to determine causality, and lack internal validity.

Potential pitfalls:
- Selection bias: occurs when the group receiving the intervention differs from the baseline group.2
- Maturation bias: occurs when natural changes over the passage of time may influence the study outcome.1 Examples include seasonality, fatigue, aging, maturity, or boredom.2
- Hawthorne effect: could bias quasi-experimental studies in which baseline rates are collected retrospectively and intervention rates are collected prospectively, because the intervention group could be more likely to improve when aware of being observed.3
- Historical bias: a threat when other events occur during the study period that may have an effect on the outcome.2
- Regression to the mean: a statistical phenomenon in which extreme measures tend to naturally revert back to normal.2
- Instrumentation bias: occurs when a measuring instrument changes over time (e.g. improved sensitivity of laboratory tests) or when data are collected differently before and after an intervention.2
- Ascertainment bias: systematic error or deviation in the identification or measurement of outcomes.
- Reporting bias: especially prevalent in retrospective quasi-experimental studies, in which researchers only publish studies with positive findings and do not publish null or negative findings.
- Need for advanced statistical analysis with more complex designs: with time series designs, use interrupted time series analysis rather than single measurements before and after a response to an outbreak, and account for intracluster correlation in power calculations.

Note: RCT, randomized controlled trial.
Table 2
Major Quasi-experimental design types and subtypes
Type and subtype: description, followed by notation.

A. INTERRUPTED TIME-SERIES QUASI-EXPERIMENTAL DESIGNS

#15 Interrupted time series that uses switching replications and a control group:
  A1c A2c A3c X A4t A5t A6t removeX A7c A8c A9c A10c
  B1c B2c B3c B4c B5c B6c X B7t B8t B9t B10t

#14 Interrupted time series with repeated treatment design:13
  A1c A2c A3c X A4t A5t removeX A6c A7c X A8t A9t

#13 Interrupted time series removing the treatment at a known time:
  A1c A2c A3c A4c X A5t A6t A7t A8t removeX A9c A10c

#12 Interrupted time series with a nonequivalent dependent variable:14
  (A1c^v, A1c^n) (A2c^v, A2c^n) (A3c^v, A3c^n) X (A4t^v, A4t^n) (A5t^v, A5t^n)

#11 Interrupted time series with an untreated control group:12
  A1c A2c A3c A4c A5c X A6t A7t A8t A9t A10t
  B1c B2c B3c B4c B5c B6c B7c B8c B9c B10c

#10 Simple interrupted time series:11,15
  A1c A2c A3c A4c A5c X A6t A7t A8t A9t A10t

B. QUASI-EXPERIMENTAL DESIGNS THAT USE CONTROL GROUPS

#9 The control group design that uses dependent pretest and posttest samples and switching replications:
  A1c X A2t removeX A3c
  B1c B2c X B3t

#8 The untreated-control group design that uses dependent pretest and posttest samples and a double pretest:
  A1c A2c X A3t
  B1c B2c B3c

#7 The untreated-control group design that uses dependent pretest and posttest samples:
  A1c X A2t
  B1c B2c

#6 The posttest-only design that uses an untreated control group:
  X A1t
  B1c

C. QUASI-EXPERIMENTAL DESIGNS THAT DO NOT USE CONTROL GROUPS

#5 The repeated-treatment design:
  A1c X A2t removeX A3c X A4t

#4 The removed-treatment design:
  A1c X A2t A3t removeX A4c

#3 The 1-group pretest-posttest design that uses a nonequivalent dependent variable:
  (A1c^v, A1c^n) X (A2t^v, A2t^n)

#2 The 1-group pretest-posttest design that uses a double pretest:
  A1c A2c X A3t

#1 The 1-group pretest-posttest design:
  A1c X A2t

Note: Classification types adapted from prior publications.1,2 A, B = groups; 1, 2, 3, etc. = observations for a group; X = intervention; removeX = remove intervention; ^v = variable of interest; ^n = non-equivalent dependent variable; t = treatment group; c = control group. Time moves from left to right. Citations are published examples from the literature.
Table 3
Checklist of key considerations when developing a quasi-experimental study
CONSIDERATIONS FOR RETROSPECTIVE AND PROSPECTIVE QUASI-EXPERIMENTAL STUDIES
1. Determine PICO: population, intervention, control group, outcomes (specify primary vs. secondary outcomes)
2. What is the hypothesis?
3. Is it ethical or feasible to randomize patients to the intervention?
4. Will this be a retrospective or prospective study or a combination of both?
5. What are the main inclusion and exclusion criteria?
6. Will anyone (participants, study staff, research team, analyst) be blinded to the intervention assignment?
7. Consider options for control group
8. Consider options for nonequivalent dependent variable
9. How will the observations (outcomes) be measured?
10. How many observations can be measured pre and post intervention?
11. How should the observations be spaced to account for seasonality? Weekly? Monthly? Quarterly?
12. Do you hypothesize that the intervention will diffuse quickly or slowly? (e.g. are changes in the outcomes expected right away or only after
a phase-in period?)
13. Do you hypothesize that the intervention will have a lasting effect on the outcome? (If yes, do not use cross-over design)
14. What is the analysis plan? (Consult a statistician)
15. If the unit of analysis differs from the unit of assignment, what analytical method will be used to account for this (e.g. adjusting the standard
error estimates by the design effect or using multilevel analysis)?
16. What sample size is needed to be powered to see a significant difference? (Consult a statistician)
17. Will the analysis strategy be intention to treat or how will non-compliers be treated in the analysis?
ADDITIONAL CONSIDERATIONS FOR QUASI-EXPERIMENTAL STUDIES WITH PROSPECTIVE COMPONENTS
18. What will be the unit of delivery? (e.g. Individual patient or unit or hospital)
19. How will the units of delivery be allocated to the intervention?
20. Who will deliver the intervention? (e.g. study team or healthcare workers)
21. How and when will the intervention be delivered?
22. How will compliance with the intervention be measured?
23. Will there be activities to increase compliance or adherence? (e.g. incentives, coaching calls)
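Checklist item 15's suggestion to adjust standard error estimates by the design effect can be sketched with the standard formula DEFF = 1 + (m - 1) * ICC, where m is the cluster size and ICC is the intracluster correlation. The numbers below are hypothetical.

```python
def design_effect(cluster_size: float, icc: float) -> float:
    """Standard design effect for equal-sized clusters: 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1.0) * icc

# Hypothetical study: 20 hospital units, 30 patients each, intracluster correlation 0.05.
m, icc = 30, 0.05
deff = design_effect(m, icc)
effective_n = 20 * m / deff
print(f"design effect: {deff:.2f}")        # 1 + 29 * 0.05 = 2.45
print(f"effective sample size: {effective_n:.0f}")
```

Dividing the nominal sample size by the design effect shows how sharply clustering erodes effective power, which is why the checklist recommends consulting a statistician for power calculations.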

of the effects of specific military experiences,
where it is implausible that differences
in selection explain the results. Thorndike’s efforts to demonstrate the effects of
specific course work upon other intellectual
achievements provide an excellent early model
(e.g., Thorndike & Woodworth, 1901; Thorndike
& Ruger, 1923). Extensive analysis and review
of this literature are provided elsewhere
(Campbell, 1957; 1963; Campbell & Stanley, 1963)
and serve as the basis for the present abbreviated
presentation.
The core requirement of a “true” experiment
lies in the experimenter’s ability to apply
at least two experimental treatments in complete
independence of the prior states of the materials
1The preparation of this review was
supported in part by Project C-998, Contract
3-20-001, with the Media Research Branch, Office
of Education, U.S. Department of Health, Education,
and Welfare, under provisions of Title VII
of the National Defense Education Act. This symposium
presentation is essentially the same as
the current draft of my article for the International
Encyclopedia of the Social Sciences.
(persons, etc.) under study. This independence
makes resulting differences interpretable as
effects of the differences in treatment. In the
social sciences this independence of prior status
is assured by randomization in assignments to treatments. Experiments meeting these requirements,
and thus representing “true” experiments,
are much more possible in the social sciences
than is generally realized. Wherever, for example,
the treatments can be applied to individuals
or small units (such as precincts or classrooms)
without the respondents’ being aware of
experimentation or that other units are getting
different treatments, very elegant experimental
control can be achieved. An increased acceptance
by administrators of randomization as the
democratic method of allocating scarce resources
(be these new housing, therapy, or fellowships)
will make possible field experimentation in many
settings. Where innovations are to be introduced
throughout a social system, and where the introduction
cannot in any event be simultaneous, a use of randomization in the staging can provide an experimental comparison of the new and the
old, using the groups receiving the delayed introduction
as controls. Nothing in this article
should be interpreted as minimizing the importance
of increasing the use of true experimentation.
However, where true experimental design
with random assignment of persons to treatments
is not possible, due to ethical considerations,
lack of power, or infeasibility, application of
quasi-experimental analysis has much to offer.
The social sciences must do the best they can with the possibilities open to them. Inferences
must frequently be made from data lacking
complete control. Too often a scientist trained
in experimental method rejects out of hand any
research in which complete control is lacking.
Yet in practice no experiment is perfectly executed,
and the practicing scientist overlooks
those imperfections which seem to him to offer
no plausible rival explanation of the results.
In the light of modern philosophies of science,
no experiment ever proves a theory; it merely
probes it. Seeming proof results from that
condition in which there is no available plausible
rival hypothesis to explain the data. The
general program of quasi-experimental analysis
is to specify and examine those plausible rival
explanations of the results which are provided
by the uncontrolled variables. A failure of
control which does not in fact provide a plausible
rival interpretation is not regarded as invalidating.
It is well to remember that we do make
assured causal inferences in many settings not
involving randomization: (The earthquake caused
the brick building to crumble; the automobile
crashing into it caused the telephone pole to break; the language patterns of the older models
and mentors caused this child to speak English
rather than Kwakiutl; etc.) While these are all
potentially erroneous inferences, they are of the same type as experimental inferences. We are
confident that were we to intrude experimentally, we could confirm the causal laws involved. Yet
they have been made assuredly by a nonexperimenting
observer. This assurance is due to the
effective absence of other plausible causes. Consider the inference as to crashing auto and
the telephone pole: we rule out combinations of
termites and wind because the other implications
of these theories (e.g., termite tunnels and
debris in the wood, wind records at nearby
weather stations) do not occur. Spontaneous
splintering of the pole by happenstance coincident
with the auto’s onset does not impress us as a rival, nor would it explain the damage to the car, etc. Analogously, in quasi-experimental
analysis, tentative causal interpretation of data
may be made where the interpretation in question
squares with the data and where other rival interpretations
have been rendered implausible.
For the evaluation of data series as quasi-experiments, a set of twelve frequent
threats to validity has been developed. These
may be regarded as the important classes of frequently
plausible rival hypotheses which good
research design seeks to rule out. All will be
presented briefly even though not all are employed in the evaluation of the designs used
illustratively here.
Fundamental to this listing is a distinction
between internal validity and external
validity. Internal validity is the basic minimum without which any experiment is uninterpretable:
did in fact the experimental treatments make a difference in this specific experimental
instance? External validity asks the
question of generalizability: to what populations,
settings, treatment variables, and
measurement variables can this effect be generalized?
Both types of criteria are obviously
important, even though they are frequently at
odds, in that features increasing one may jeopardize
the other. While internal validity is the sine qua non, and while the question of
external validity, like the question of inductive
inference, is never completely answerable, the
selection of designs strong in both types of
validity is obviously our ideal.
Relevant to internal validity are eight
different classes of extraneous variables which,
if not controlled in the experimental design,
might produce effects mistaken for the effect of
the experimental treatment. These are:
1. History: the other specific events occurring between a first and second measurement in addition to the experimental variable.
2. Maturation: processes within the respondents operating as a function of the passage of time per se (not specific to the particular events), including growing older, growing hungrier, growing tireder, and the like.
3. Testing: the effects of taking a test upon the scores of a second testing.
4. Instrumentation: in which changes in the calibration of a measuring instrument or changes in the observers or scorers used may produce changes in the obtained measurements.
5. Statistical regression: operating where groups have been selected on the basis of their extreme scores.
6. Selection: biases resulting in differential recruitment of respondents for the comparison groups.
7. Experimental mortality: the differential loss of respondents from the comparison groups.
8. Selection-maturation interaction: in certain of the multiple-group quasi-experimental designs, such as the nonequivalent control group design, such interaction is confounded with, i.e., might be mistaken for, the effect of the experimental variable.
Factors jeopardizing external validity or representativeness are:
9. The reactive or interaction effect of testing, in which a pretest might increase or decrease the respondent’s sensitivity or responsiveness to the experimental variable and thus make the results obtained for a pretested population unrepresentative of the effects of the experimental variable for the unpretested universe from which the experimental respondents were selected.
10. Interaction effects between selection bias and the experimental variable.
11. Reactive effects of experimental arrangements, which would preclude generalization about the effect of the experimental variable for persons being exposed to it in nonexperimental settings.
12. Multiple-treatment interference, a problem wherever multiple treatments are applied to the same respondents, and a particular problem for one-group designs involving equivalent time-samples or equivalent materials samples.
Perhaps the simplest quasi-experimental
design is the One-Group Pretest-Posttest Design,
O1 X O2 (where O represents measurement or observation,
and X represents the experimental
treatment). This common design patently leaves
uncontrolled the internal validity threats of
History, Maturation, Testing, Instrumentation,
and, if the group was selected as extreme on O1, Regression.
There may be situations in which the analyst
could decide that none of these represented
plausible rival hypotheses in his setting: A
log of other possible change agents might provide
no plausible ones, the measurement in question
might be nonreactive (Campbell, 1957), the time
span too short for maturation, too spaced for
fatigue, etc. However, the sources of invalidity
are so numerous that a more powerful quasi-experimental design would be preferred. Several
of these can be constructed by adding features to this simple one. The Interrupted Time-Series
Experiment utilizes a series of measurements providing
multiple pretests and posttests, e.g.:
O1 O2 O3 O4 X O5 O6 O7 O8. If in this series
O4 to O5 shows a rise greater than found elsewhere,
then Maturation, Testing, and Regression
are no longer plausible, in that they would predict
equal or greater rises for O1 to O2, etc. Instrumentation may well be controlled too, although
in institutional settings a change of
administration policy is often accompanied by a change in record-keeping standards. Observers
and participants may be focused on the occurrence
of X, and may fail to take into consideration
changes in rating standards, etc. History remains the major threat, although in many settings
it would not offer a plausible rival interpretation.
If one had available a parallel time
series from a group not receiving the experimental
treatment, but exposed to the same extraneous sources of influence, and if this
control time series failed to show the exceptional
jump from O4 to O5, then the plausibility
of History as a rival interpretation would be
greatly reduced. We may call this the Multiple
Time -Series Design.
Another way of improving the One-Group
Pretest-Posttest Design is to add a “Nonequivalent
Control Group.” (Were the control group to be randomly assigned from the same population as
the experimental group, we would, of course, have
a true, not quasi, experimental design.) Depending
on the similarities of setting and attributes,
if the nonequivalent control group fails to show
a gain manifest in the experimental group, then
History, Maturation, Testing, and Instrumentation
are controlled. In this popular design, the
frequent effort to “correct” for the lack of
perfect equivalence by matching on pretest scores
is absolutely wrong (e.g., Thorndike, 1942;
Hovland et al., 1949; Campbell & Clayton, 1961),
as it introduces a regression artifact. Instead, one should live with any initial pretest differences,
using analysis of covariance, or graphic
presentation. Remaining uncontrolled is the
Selection-Maturation Interaction, i.e., the possibility that the experimental group differed
from the control group not only in initial level,
but also in its autonomous maturation rate. In experiments on psychotherapy and on the effects
of specific coursework this is a very serious
rival. Note that it can be rendered implausible
by use of a time series of pretests for both
groups, thus moving again to the Multiple Time – Series Design.
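The recommended covariance adjustment, as opposed to pretest matching, can be sketched as an ordinary regression of posttest on pretest plus a group indicator. The data below are simulated under assumed parameters (a 10-point pretest gap and a true effect of 4, both invented); a real analysis would still have to weigh the selection-maturation threat:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Hypothetical nonequivalent groups: the experimental group starts higher.
pre_e = rng.normal(55, 10, n)
pre_c = rng.normal(45, 10, n)
effect = 4.0
post_e = 0.8 * pre_e + effect + rng.normal(0, 3, n)
post_c = 0.8 * pre_c + rng.normal(0, 3, n)

pre = np.concatenate([pre_e, pre_c])
post = np.concatenate([post_e, post_c])
group = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = experimental

# ANCOVA as regression: post = b0 + b1*pre + b2*group.
X = np.column_stack([np.ones_like(pre), pre, group])
b0, b1, b2 = np.linalg.lstsq(X, post, rcond=None)[0]
print(f"pretest slope  : {b1:.2f}")   # near the true 0.8
print(f"adjusted effect: {b2:.2f}")   # near 4, despite the pretest gap
```

Because the pretest enters as a covariate rather than a matching variable, no subgroup is selected on extreme scores, and the regression artifact that matching produces does not arise.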
There is not space here to present adequately even these four quasi-experimental designs, but perhaps the strategy of adding specific observations and analyses to check on specific threats to validity has been illustrated. This is carried to an extreme in the Recurrent Institutional Cycle Design (Campbell & McCormack, 1957; Campbell & Stanley, 1963), in which longitudinal and cross-sectional measurements are combined with still other analyses to assess the impact of indoctrination procedures, etc., through exploiting the fact that essentially similar treatments are being given to new entrants year after year or cycle after cycle. Other quasi-experimental designs covered in Campbell & Stanley (1963) include two more single-group designs (the Equivalent Time-Samples Design and the Equivalent Materials Design), Counterbalanced or Rotational Designs, Separate-Sample Pretest-Posttest Designs, Regression-Discontinuity Analysis, the Panel Impact Design (see also Campbell & Clayton, 1961), and the Cross-Lagged Panel Correlation, which is related to Lazarsfeld's Sixteen-Fold Table (see especially Campbell, 1963).
Related to the program of quasi-experimental analysis are those efforts to achieve causal inference from correlational data. Note that while correlation does not prove causation, most causal hypotheses imply specific correlations, and thus examination of these probes, tests, or edits the causal hypothesis. Further, as Simon and Blalock have emphasized (e.g., Blalock, 1961), certain causal models specify uneven patterns of correlation. Thus the A → B → C model implies that rAC be smaller than rAB or rBC. However, the use of partial correlations or the use of Wright's (1920) path analysis are rejected by the present writer as
tests of the model because of the requirement
that the “cause” be totally represented in the
“effect.” In the social sciences it will never
be plausible that the “cause” has been measured
without unique error and that it also totally
lacks unique systematic variance not shared with
the “effect.” More appropriate would be Lawley's (1940) test of the hypothesis of single-factoredness. Only if single-factoredness can be rejected would the causal model, as represented by its predicted uneven correlation pattern, be the preferred interpretation.
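The uneven correlation pattern implied by such a chain is easy to verify by simulation. Assuming standardized variables and the path coefficients shown (my choices, purely illustrative), the A → B → C chain implies rAC ≈ rAB · rBC, hence smaller than either direct correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A -> B -> C chain with standardized variables and known path coefficients.
a = rng.normal(size=n)
b = 0.7 * a + np.sqrt(1 - 0.7**2) * rng.normal(size=n)
c = 0.6 * b + np.sqrt(1 - 0.6**2) * rng.normal(size=n)

r_ab = np.corrcoef(a, b)[0, 1]
r_bc = np.corrcoef(b, c)[0, 1]
r_ac = np.corrcoef(a, c)[0, 1]

print(f"r_AB = {r_ab:.3f}  r_BC = {r_bc:.3f}  r_AC = {r_ac:.3f}")
# The chain implies r_AC ~= r_AB * r_BC, smaller than either direct link.
```

Observing this predicted inequality is one of the "edits" of the causal hypothesis that correlational data permit, even though it does not prove the causal direction.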
A word needs to be said about tests of
significance for quasi -experimental designs.
There has come from several competent social
scientists the argument that since randomization
has not been used, tests of significance assuming
randomization are not relevant. The attitude of
the present writer is on the whole in disagreement.
However, some aspects of the protest are
endorsed: Good experimental design is needed for
any comparison inferring change, whether or not
tests of significance are used, even if only
photographs, graphs, or essays are being compared.
In this sense, experimental design is
independent of tests of significance. More importantly,
tests of significance have come to be
taken as thoroughgoing proof. In vulgar social science usage, finding a “significant difference” is apt to be taken as proving the author's basis
for predicting the difference, forgetting the
many other plausible rival hypotheses explaining
a significant difference which quasi -experimental
designs leave uncontrolled. Certainly the valuation
of tests of significance in some quarters
needs demoting. Further, the use of tests of
significance designed for the evaluation of a
single comparison becomes much too lenient when
dozens, hundreds, or thousands of comparisons
have been sifted, and this is still common
usage. And in a similar manner, the author’s
decision as to which of his studies is publishable,
and the editor’s decision as to which of
the manuscripts is acceptable, further biases
the sampling basis. In all of these ways,
reform is needed.
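The leniency that comes from sifting many comparisons can be illustrated with a small simulation (the comparison count and z-test framing are mine, for illustration): among 200 comparisons in which no true difference exists by construction, a per-comparison cutoff of .05 still yields several "significant" results, while a family-wise correction removes nearly all of them.

```python
import random

random.seed(0)
n_comparisons = 200
single_z = 1.96        # two-sided 5% cutoff for a single z statistic

# 200 comparisons in which the null hypothesis is true by construction.
z_stats = [random.gauss(0, 1) for _ in range(n_comparisons)]
false_hits = sum(abs(z) > single_z for z in z_stats)
print(f"'significant' results among {n_comparisons} true nulls: {false_hits}")
# Expected around 10 (5% of 200), all of them spurious.

bonferroni_z = 3.66    # approximate two-sided cutoff for alpha = 0.05/200
strict_hits = sum(abs(z) > bonferroni_z for z in z_stats)
print(f"after Bonferroni correction: {strict_hits}")
```

The same arithmetic applies when the sifting is done by authors and editors rather than by an explicit analysis, which is why the publication-selection bias described above inflates the apparent yield of significant findings.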
However, when a quasi-experimenter has compared the results from two intact classrooms employed in a sampling of convenience, small-sample instability (a chance difference) is certainly one of the many plausible rival hypotheses which must be considered, even if only one. If each class had but five students, we would interpret the fact that 20% more in the experimental class showed increases in favorableness with much less interest than if each class
had 500 students. In this case there is available
an elaborate formal theory for the plausible
rival hypothesis of chance fluctuation. This
theory involves assumptions of randomness, which
are quite appropriately present when we reject
the null model of random association in favor of
a hypothesis of systematic difference between
the two classes. If we find a “significant
difference,” the test of significance will not,
of course, tell us whether the two classes
differed because one saw the experimental movie,
or for some selection reason associated with
class topic, time of day, etc., which might have
interacted with rate of autonomous change, pretest-instigated changes, reactions to commonly experienced events, etc. But such a test of significance will help us rule out this 13th plausible rival hypothesis: that there is no difference here at all that a model of purely chance assignment could not account for as a vagary of sampling. Note that our statement of probability level is in this light a statement of the plausibility of this rival hypothesis, which always has some plausibility, however faint. In this orientation, a practice of stating the probability in descriptive detail seems preferable to using but a single a priori decision criterion.

REFERENCES

Blalock, H.M. 1964 Causal inferences in nonexperimental research. Chapel Hill: University of North Carolina Press.
Campbell, D.T. 1957 Factors relevant to validity of experiments in social settings. Psychological Bulletin 54:297-312.
Campbell, D.T. 1963 From description to experimentation: Interpreting trends as quasi-experiments. Pages 212-242 in Harris, C.W. (editor), Problems in measuring change. Madison, Wis.: University of Wisconsin Press.
Campbell, D.T.; and Clayton, K.N. 1961 Avoiding regression effects in panel studies of communication impact. Studies in Public Communication No. 3, 99-118.
Campbell, D.T.; and McCormack, Thelma H. 1957 Military experience and attitudes toward authority. American Journal of Sociology 62:482-490.
Campbell, D.T.; and Stanley, J.C. 1963 Experimental and quasi-experimental designs for research on teaching. Pages 171-246 in Gage, N.L. (editor), Handbook of research on teaching. Chicago: Rand McNally.
Chapin, F.S. 1955 Experimental designs in sociological research. (Rev. ed.) New York: Harper.
Chapin, F.S.; and Queen, S.A. 1937 Research memorandum on social work in the depression. New York: Social Science Research Council, Bulletin 39.
Greenwood, E. 1945 Experimental sociology: A study in method. New York: King's Crown Press.
Hovland, C.I.; Lumsdaine, A.A.; and Sheffield, F.D. 1949 Experiments on mass communication. Princeton, N.J.: Princeton University Press.
Lawley, D.N. 1940 The estimation of factor loadings by the method of maximum likelihood. Proceedings of the Royal Society of Edinburgh 60:64-82.
Stouffer, S.A. (editor) 1949 The American Soldier. Vols. I and II. Princeton, N.J.: Princeton University Press.
Thorndike, R.L. 1942 Regression fallacies in the matched groups experiment. Psychometrika 7:85-102.
Thorndike, E.L.; and Ruger, G.J. 1923 The effect of first-year Latin upon knowledge of English words of Latin derivation. School and Society 81:260-270, 417-418.
Thorndike, E.L.; and Woodworth, R.S. 1901 The influence of improvement in one mental function upon the efficiency of other functions. Psychological Review 8:247-261, 384-395, 553-564.
Wright, S. 1920 Correlation and causation. Journal of Agricultural Research 20:557-585.
Research Methods in Healthcare Epidemiology and
Antimicrobial Stewardship – Quasi-Experimental Designs
Marin L. Schweizer, PhD1,2, Barbara I. Braun, PhD3, and Aaron M. Milstone, MD MHS4,5
1Department of Internal Medicine, Carver College of Medicine, University of Iowa
2Center for Comprehensive Access and Delivery Research and Evaluation, Iowa City VA Health
Care System
3Department of Health Services Research, The Joint Commission, Oakbrook Terrace IL
4Department of Pediatrics, Division of Pediatric Infectious Diseases, Johns Hopkins University
School of Medicine, Baltimore, MD
5Department of Hospital Epidemiology and Infection Control, Johns Hopkins Hospital, Baltimore,
MD
Abstract
Quasi-experimental studies evaluate the association between an intervention and an outcome using
experiments in which the intervention is not randomly assigned. Quasi-experimental studies are
often used to evaluate rapid responses to outbreaks or other patient safety problems requiring
prompt non-randomized interventions. Quasi-experimental studies can be categorized into three
major types: interrupted time series designs, designs with control groups, and designs without
control groups. This methods paper highlights key considerations for quasi-experimental studies in
healthcare epidemiology and antimicrobial stewardship including study design and analytic
approaches to avoid selection bias and other common pitfalls of quasi-experimental studies.
Introduction
The fields of healthcare epidemiology and antimicrobial stewardship (HE&AS) frequently
apply interventions at the unit level (e.g., intensive care unit [ICU]). These are often rapid
responses to outbreaks or other patient safety problems requiring prompt non-randomized
interventions. Quasi-experimental studies evaluate the association between an intervention
and an outcome using experiments in which the intervention is not randomly assigned.1, 2
Quasi-experimental studies can be used to measure the impact of large scale interventions or
policy changes where data are reported in aggregate and multiple measures of an outcome
over time (e.g., monthly rates) are collected.
Quasi-experimental studies vary widely in methodological rigor and can be categorized into
three types: interrupted time series designs, designs with control groups, and designs without
Corresponding author: Marin L. Schweizer, PhD, Iowa City VA Health Care System (152), 601 Hwy 6 West, Iowa City, IA 52246,
marin-schweizer@uiowa.edu, 319-338-0581 x3831.
Potential conflicts of interest. None.
HHS Public Access
Author manuscript
Infect Control Hosp Epidemiol. Author manuscript; available in PMC 2017 October 01.
Published in final edited form as:
Infect Control Hosp Epidemiol. 2016 October ; 37(10): 1135–1140. doi:10.1017/ice.2016.117.
control groups. The HE&AS literature contains many uncontrolled before-and-after studies
(also called pre-post studies), but advanced quasi-experimental study designs should be
considered to overcome the biases inherent in uncontrolled before-and-after studies.3 In this article, we highlight methods to improve quasi-experimental study design, including use of a control group that does not receive the intervention2 and use of the interrupted time series study design, in which multiple equally spaced observations are collected before and after the intervention.4
Advantages and Disadvantages (Table 1)
The greatest advantages of quasi-experimental studies are that they are less expensive and
require fewer resources compared with individual randomized controlled trials (RCTs) or
cluster randomized trials. Quasi-experimental studies are appropriate when randomization is
deemed unethical (e.g., effectiveness of hand hygiene studies).1
Quasi-experimental studies are often performed at a population level, not an individual level, and thus they can include patients who are often excluded from RCTs, such as those too ill to give informed consent or urgent surgery patients, with IRB approval as appropriate.5
Quasi-experimental studies are
also pragmatic because they evaluate the real-world effectiveness of an intervention
implemented by hospital staff, rather than efficacy of an intervention implemented by
research staff under research conditions.5
Therefore, quasi-experimental studies may also be
more generalizable and have better external validity than RCTs.
The greatest disadvantage of quasi-experimental studies is that randomization is not used,
limiting the study’s ability to conclude a causal association between an intervention and an
outcome. There is a practical challenge to quasi-experimental studies that may arise when
some patients or hospital units are encouraged to introduce an intervention, while other units
retain the standard of care and may feel excluded.2
Importantly, researchers need to be aware
of the biases that may occur in quasi-experimental studies that may lead to a loss of internal
validity, especially selection bias in which the intervention group may differ from the
baseline group.2
Types of selection bias that can occur in quasi-experimental studies include
maturation bias, regression to the mean, historical bias, instrumentation bias, and the
Hawthorne effect.2
Lastly, reporting bias is prevalent in retrospective quasi-experimental
studies, in which researchers only publish quasi-experimental studies with positive findings
and do not publish null or negative findings.
Pitfalls and Tips
Key study design and analytic approaches can help avoid common pitfalls of quasi-experimental studies. Quasi-experimental studies can be as small as an intervention in one ICU or as large as implementation of an intervention in multiple countries.6 Multisite studies generally have stronger external validity. Subtypes of quasi-experimental study designs are shown in Table 2 and the Supplemental Figure.1, 2, 7 In general, the higher numbers assigned to the designs in the table are associated with more rigorous study designs. Quasi-experimental studies meet some requirements for causality, including temporality, strength of association, and dose response.1, 8
The addition of concurrent control groups, time series
measurements, sensitivity analyses and other advanced design elements can further support
the hypothesis that the intervention is causally associated with the outcome. These design
elements aid in limiting the number of alternative explanations that could account for the
association between the intervention and the outcome.2
Quasi-experimental studies can use observations that were collected retrospectively,
prospectively, or a combination thereof. Prospective quasi-experimental studies use baseline
measurements that are calculated prospectively for the purposes of the study, then an
intervention is implemented and more measurements are collected. It is often necessary to
use retrospective data when the intervention is outside of the researcher’s control (e.g.
natural disaster response) or when hospital epidemiologists are encouraged to intervene
quickly in response to external pressure (e.g. high central line-associated bloodstream
infection [CLABSI] rates).2
However, retrospective quasi-experimental studies have a higher
risk of bias compared with prospective quasi-experimental studies.2
The first major consideration in quasi-experimental studies is the addition of a control group
that does not receive the intervention (Table 2 subtype 6–9, 11, 15). Control groups can
assist in accounting for seasonal and historical bias. If an effect is seen among the
intervention group but not the control group, then causal inference is strengthened. Careful
selection of the control group can also strengthen causal inference. Detection bias can be
avoided by blinding those who collect and analyze the data to which group received the
intervention.2
The second major consideration is designing the study in a way to reduce bias, either by
including a non-equivalent dependent variable or by using a removed-treatment design, a
repeated treatment design, or a switching replications design. Non-equivalent dependent variables should be similar to the outcome variable except that the non-equivalent dependent variable is not expected to be influenced by the intervention (Table 2 subtypes 3, 12). In a
removed-treatment design the intervention is implemented then taken away and observations
are made before, during and after implementation (Table 2 subtypes 4, 5, 13). This design
can only be used for interventions that do not have a lasting effect on the outcome that could
contaminate the study. For example, once staff have been educated, that knowledge cannot
be removed.2
Researchers must clearly explain before implementation that the intervention
will be removed, otherwise this can lead to frustration or demoralization by the hospital staff
implementing the intervention.2
In the repeated treatment design (Table 2 subtypes 5, 14)
interventions are implemented, removed, then implemented again. Similar to the removed-treatment design, the repeated treatment design should only be used if the intervention does
not have a lasting effect on the outcome. In a switching replications design, which is also
known as a cross-over design, one group implements the intervention while the other group
serves as the control. Then, the intervention is stopped in the first group and implemented in
the second group (Table 2 subtypes 9, 15). The cross-overs can occur multiple times. If the
outcomes are only impacted during intervention observations, but not in the control
observations, then there is support for causality.2
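A minimal numeric sketch of the switching-replications logic, with invented infection rates for two hypothetical units, pools each unit's intervention-phase and control-phase observations across the cross-over:

```python
# Hypothetical monthly infection rates per 1000 patient-days for two units.
# Unit A receives the intervention in months 0-5, unit B in months 6-11.
unit_a = [4.1, 3.9, 3.8, 3.7, 3.6, 3.5,   # A under intervention: lower
          5.0, 5.1, 4.9, 5.2, 5.0, 5.1]   # A back to standard care
unit_b = [5.2, 5.0, 5.1, 4.9, 5.2, 5.0,   # B under standard care
          3.8, 3.6, 3.7, 3.5, 3.6, 3.7]   # B under intervention: lower

def mean(xs):
    return sum(xs) / len(xs)

# Pool intervention-phase vs control-phase observations across both units.
intervention = unit_a[:6] + unit_b[6:]
control = unit_a[6:] + unit_b[:6]
print(f"mean rate, intervention phases: {mean(intervention):.2f}")
print(f"mean rate, control phases     : {mean(control):.2f}")
# The drop tracks the intervention across the cross-over in both units,
# a pattern that history or seasonality alone would not produce.
```

Because each unit serves as both treatment and control at different times, a lower rate confined to the intervention phases of both units supports a causal interpretation.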
A third key consideration for quasi-experimental studies with the interrupted time series
design is to collect many evenly spaced observations in both the baseline and intervention
periods. Multiple observations are used to estimate and control for underlying trends in data,
Schweizer et al. Page 3
Infect Control Hosp Epidemiol. Author manuscript; available in PMC 2017 October 01.
Author Manuscript Author Manuscript Author Manuscript Author Manuscript
such as seasonality and maturation.2
The frequency of the observations (e.g. weekly,
monthly, quarterly) should have clinical or seasonal meaning so that a true underlying trend
can be established. There are conflicting recommendations as to the minimum number of
observations needed for a time series design but they range from 20 observations before and
20 after intervention implementation to 100 observations overall.2–4, 9
The interrupted time
series design is the most effective and powerful quasi-experimental design, particularly
when supplemented by other design elements.2
However, time series designs are still subject
to biases and threats to validity.
The final major consideration is ensuring an appropriate analysis plan. Time series study
designs collect multiple observations of the same population over time, which result in
autocorrelated observations.2
For instance, carbapenem-resistant Enterobacteriaceae (CRE)
counts collected one month apart are more similar to one another than CRE counts collected
two months apart.4
Basic statistics (e.g. chi-square test) should not be used to analyze time
series data because they cannot take into account trends over time and they rely on an
independence assumption. Time series data should be analyzed using either regression
analysis or interrupted time-series analysis (ITSA).4
Linear regression models or generalized
linear models can be used to evaluate the slopes of the observed outcomes before and during
implementation of an intervention. However, unlike regression models, ITSA relaxes the
independence assumption by combining a correlation model and a regression model to
effectively remove seasonality effects before addressing the impact of the intervention.2, 4
ITSA assesses the impact of the intervention by evaluating the changes in the intercept and
slope before and after the intervention. ITSA can also include a lag effect if the intervention
is not expected to have an immediate result, and additional sensitivity analyses can be
performed to test the robustness of the findings. We recommend statistician consultation
while designing the study in order to determine which model may be appropriate and to help
perform power calculations that account for correlation.
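The intercept-and-slope comparison can be sketched as a segmented regression on simulated monthly rates (all parameters invented; the model below omits the autocorrelation term that a full ITSA would add):

```python
import numpy as np

rng = np.random.default_rng(42)

# 24 baseline + 24 intervention months of a simulated infection rate.
months = np.arange(48)
t0 = 24
after = (months >= t0).astype(float)

true_level_change = -2.0  # immediate drop at the intervention
rate = 8.0 + 0.05 * months + true_level_change * after + rng.normal(0, 0.4, 48)

# Segmented regression: rate = b0 + b1*t + b2*after + b3*(t - t0)*after
X = np.column_stack([np.ones(48), months, after, (months - t0) * after])
b = np.linalg.lstsq(X, rate, rcond=None)[0]
print(f"baseline slope    : {b[1]:+.3f}")  # underlying secular trend
print(f"level change at t0: {b[2]:+.3f}")  # intervention effect, near -2
print(f"slope change at t0: {b[3]:+.3f}")  # near 0 in this simulation
```

A naive before-and-after comparison of means would confound the 0.05-per-month secular trend with the intervention; modeling the baseline slope separates the two, which is the core advantage of the interrupted time series analysis over basic statistics.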
Key considerations for designing, analyzing and writing a quasi-experimental study can be
found in the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND)
statement and are summarized in Table 3.10
Examples of Published Quasi-Experimental Studies in HE&AS
Recent quasi-experimental studies illustrated strengths and weaknesses that require attention
when employing this study design.
A recent prospective quasi-experimental study (Table 2 subtype 10) implemented a
multicenter bundled intervention to prevent complex Staphylococcus aureus surgical site
infections.11 The study exemplified strengths of quasi-experimental design using a
pragmatic approach in a real-world setting that even enabled identification of a dose
response to bundle compliance. To optimize validity, the authors included numerous
observation points before and after the intervention and used time series analysis. This study
did not include a concurrent control group, and outcomes were collected retrospectively for the baseline group and prospectively for the intervention group, which may have led to ascertainment bias.
Quach and colleagues performed a quasi-experimental study (Table 2 subtype 11) to
evaluate the impact of an infection prevention and quality improvement intervention of daily
chlorhexidine gluconate (CHG) bathing to reduce CLABSI rates in the neonatal ICU.12 The
primary strength of this study was that the authors used a non-bathed concurrent control group. Given that the baseline rates of CLABSI exceeded the National Healthcare Safety Network (NHSN) pooled mean, and that the concurrent control group did not see a reduction in rates post-intervention, the treatment effect was more likely due to the treatment than to regression to the mean, seasonal effects, or secular trends.
Yin and colleagues performed a quasi-experimental study (Table 2 subtype 14) to determine
whether universal gloving reduced HAIs in hospitalized children.13 This retrospective study
compared the winter respiratory syncytial virus (RSV) season, during which healthcare workers (HCWs) were required to wear gloves for all patient contact, and the non-winter, non-RSV season, when HCWs were not required to wear gloves. Because the study period extended over many calendar years, the design enabled multiple crossovers (removing the intervention) and the use of time series analysis. This study did not have a control group (another hospital or unit that did not require universal gloving during RSV season), nor did it have a non-equivalent dependent variable.
Major Points
Quasi-experimental studies are less resource intensive than RCTs, test real world
effectiveness, and can support a hypothesis that an intervention is causally associated with
an outcome. These studies are subject to biases that can be limited by carefully planning the
design and analysis. Consider key strategies to limit bias, such as including a control group, including a non-equivalent dependent variable or a removed-treatment design, collecting adequate observations before and during the intervention, and using appropriate analytic methods (i.e., interrupted time series analysis).
Conclusion
Quasi-experimental studies are important for HE&AS because practitioners in those fields
often need to perform non-randomized studies of interventions at the unit level of analysis.
Quasi-experimental studies should not always be considered methodologically inferior to
RCTs because quasi-experimental studies are pragmatic and can evaluate interventions that
cannot be randomized due to ethical or logistic concerns.10 Currently, too many quasi-experimental studies are uncontrolled before-and-after studies using suboptimal research
methods. Advanced techniques such as use of control groups and non-equivalent dependent
variables, as well as interrupted time series design and analysis should be used in future
research.
Acknowledgments
Financial support. MLS is supported through VA Health Services Research and Development (HSR&D) Career
Development Award (CDA 11-215)
References
1. Harris AD, Bradham DD, Baumgarten M, Zuckerman IH, Fink JC, Perencevich EN. The use and
interpretation of quasi-experimental studies in infectious diseases. Clin Infect Dis. 2004; 38:1586–
91. [PubMed: 15156447]
2. Shadish, WR.; Cook, TD.; Campbell, DT. Experimental and Quasi-Experimental Designs for
Generalized Causal Inference. Boston: Houghton Mifflin; 2002.
3. Grimshaw J, Campbell M, Eccles M, Steen N. Experimental and quasi-experimental designs for
evaluating guideline implementation strategies. Fam Pract. 2000; 17(Suppl 1):S11–6. [PubMed:
10735262]
4. Shardell M, Harris AD, El-Kamary SS, Furuno JP, Miller RR, Perencevich EN. Statistical analysis
and application of quasi experiments to antimicrobial resistance intervention studies. Clin Infect
Dis. 2007; 45:901–7. [PubMed: 17806059]
5. Thorpe KE, Zwarenstein M, Oxman AD, et al. A pragmatic-explanatory continuum indicator
summary (PRECIS): a tool to help trial designers. J Clin Epidemiol. 2009; 62:464–75. [PubMed:
19348971]
6. Lee AS, Cooper BS, Malhotra-Kumar S, et al. Comparison of strategies to reduce meticillin-resistant Staphylococcus aureus rates in surgical patients: a controlled multicentre intervention trial. BMJ Open. 2013; 3:e003126.
7. Harris AD, Lautenbach E, Perencevich E. A systematic review of quasi-experimental study designs
in the fields of infection control and antibiotic resistance. Clin Infect Dis. 2005; 41:77–82.
[PubMed: 15937766]
8. Hill AB. The Environment And Disease: Association Or Causation? Proc R Soc Med. 1965;
58:295–300. [PubMed: 14283879]
9. Crabtree BF, Ray SC, Schmidt PM, O’Connor PJ, Schmidt DD. The individual over time: time series
applications in health care research. J Clin Epidemiol. 1990; 43:241–60. [PubMed: 2313315]
10. Des Jarlais DC, Lyles C, Crepaz N. Improving the reporting quality of nonrandomized evaluations
of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004;
94:361–6. [PubMed: 14998794]
11. Schweizer ML, Chiang HY, Septimus E, et al. Association of a bundled intervention with surgical
site infections among patients undergoing cardiac, hip, or knee surgery. JAMA. 2015; 313:2162–
71. [PubMed: 26034956]
12. Quach C, Milstone AM, Perpete C, Bonenfant M, Moore DL, Perreault T. Chlorhexidine bathing in
a tertiary care neonatal intensive care unit: impact on central line-associated bloodstream
infections. Infect Control Hosp Epidemiol. 2014; 35:158–63. [PubMed: 24442078]
13. Yin J, Schweizer ML, Herwaldt LA, Pottinger JM, Perencevich EN. Benefits of universal gloving
on hospital-acquired infections in acute care pediatric units. Pediatrics. 2013; 131:e1515–20.
[PubMed: 23610206]
14. Popoola VO, Colantuoni E, Suwantarat N, et al. Active Surveillance Cultures and Decolonization
to Reduce Staphylococcus aureus Infections in the Neonatal Intensive Care Unit. Infect Control
Hosp Epidemiol. 2016; 37:381–7. [PubMed: 26725699]
15. Waters TM, Daniels MJ, Bazzoli GJ, et al. Effect of Medicare’s nonpayment for Hospital-Acquired
Conditions: lessons for future policy. JAMA Intern Med. 2015; 175:347–54. [PubMed: 25559166]
Table 1
Advantages, disadvantages, and important pitfalls in using quasi-experimental designs in healthcare
epidemiology research.
Advantages:
- Less expensive and time-consuming than RCTs or cluster randomized trials: no need to randomize groups.
- Pragmatic: include patients that are often excluded in RCTs; test effectiveness more than efficacy; may have good external validity.
- Can retrospectively analyze policy changes, even if policy implementation is out of your control.
- Meet some requirements of causality, including temporality, strength of association, and dose response.2
- Designs can be strengthened with control groups, multiple measures over time, and cross-overs: not the gold standard to establish causation, but can be the next level below an RCT if well designed.

Disadvantages:
- Retrospective data are often incomplete or difficult to obtain: need processes to assess availability, accuracy, and completeness during the baseline phase before implementation.
- Not randomized: nonrandomized designs tend to overestimate effect size,3 do not meet all requirements to determine causality, and lack internal validity.

Potential pitfalls:
- Selection bias: when the group receiving the intervention differs from the baseline group.2
- Maturation bias: can occur when natural changes over the passage of time may influence the study outcome;1 examples include seasonality, fatigue, aging, maturity, or boredom.2
- Hawthorne effect: could bias quasi-experimental studies in which baseline rates are collected retrospectively and intervention rates are collected prospectively, because the intervention group could be more likely to improve when they are aware of being observed.3
- Historical bias: a threat when other events occur during the study period that may have an effect on the outcome.2
- Regression to the mean: a statistical phenomenon in which extreme measures tend to naturally revert back to normal.2
- Instrumentation bias: occurs when a measuring instrument changes over time (e.g., improved sensitivity of laboratory tests) or when data are collected differently before and after an intervention.2
- Ascertainment bias: systematic error or deviation in the identification or measurement of outcomes.
- Reporting bias: especially prevalent in retrospective quasi-experimental studies, in which researchers only publish quasi-experimental studies with positive findings and do not publish null or negative findings.
- Need advanced statistical analysis when using more complex designs: with time series designs, use interrupted time series analysis, not just single measurements before and after a response to an outbreak; account for intracluster correlation in power calculations.

Note: RCT, randomized controlled trial.
Table 2
Major quasi-experimental design types and subtypes

A. INTERRUPTED TIME-SERIES QUASI-EXPERIMENTAL DESIGNS

#15 Interrupted time series that uses switching replications and a control group:
    A1c A2c A3c X A4t A5t A6t removeX A7c A8c A9c A10c
    B1c B2c B3c B4c B5c B6c X B7t B8t B9t B10t

#14 Interrupted time series with repeated-treatment design:13
    A1c A2c A3c X A4t A5t removeX A6c A7c X A8t A9t

#13 Interrupted time series removing the treatment at a known time:
    A1c A2c A3c A4c X A5t A6t A7t A8t removeX A9c A10c

#12 Interrupted time series with a nonequivalent dependent variable:14
    (A1cv, A1cn) (A2cv, A2cn) (A3cv, A3cn) X (A4tv, A4tn) (A5tv, A5tn)

#11 Interrupted time series with an untreated control group:12
    A1c A2c A3c A4c A5c X A6t A7t A8t A9t A10t
    B1c B2c B3c B4c B5c B6c B7c B8c B9c B10c

#10 Simple interrupted time series:11,15
    A1c A2c A3c A4c A5c X A6t A7t A8t A9t A10t

B. QUASI-EXPERIMENTAL DESIGNS THAT USE CONTROL GROUPS

#9 The control-group design that uses dependent pretest and posttest samples and switching replications:
    A1c X A2t removeX A3c
    B1c B2c X B3t

#8 The untreated-control-group design that uses dependent pretest and posttest samples and a double pretest:
    A1c A2c X A3t
    B1c B2c B3c

#7 The untreated-control-group design that uses dependent pretest and posttest samples:
    A1c X A2t
    B1c B2c

#6 The posttest-only design that uses an untreated control group:
    X A1t
    B1c

C. QUASI-EXPERIMENTAL DESIGNS THAT DO NOT USE CONTROL GROUPS

#5 The repeated-treatment design:
    A1c X A2t removeX A3c X A4t

#4 The removed-treatment design:
    A1c X A2t A3t removeX A4c

#3 The 1-group pretest-posttest design that uses a nonequivalent dependent variable:
    (A1cv, A1cn) X (A2tv, A2tn)

#2 The 1-group pretest-posttest design that uses a double pretest:
    A1c A2c X A3t

#1 The 1-group pretest-posttest design:
    A1c X A2t

Note: Classification types adapted from prior publications.1,2 A, B = groups; 1, 2, 3, etc. = observations for a group; X = intervention; removeX = intervention removed; v = variable of interest; n = nonequivalent dependent variable; t = treatment group; c = control group. Time moves from left to right. Citations are published examples from the literature.
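A simple interrupted time series (design #10 above) is typically analyzed with segmented regression rather than a single before/after comparison. The sketch below uses hypothetical monthly rates (all numbers invented for illustration): five pre-intervention observations (A1c..A5c) and five post-intervention observations (A6t..A10t), fit by ordinary least squares.

```python
# Segmented-regression sketch for a simple interrupted time series
# (design #10). Rates are hypothetical monthly values: five
# pre-intervention (A1c..A5c) and five post-intervention (A6t..A10t).
import numpy as np

rates = np.array([12.1, 11.8, 12.4, 11.9, 12.2,   # pre-intervention
                  9.8, 9.1, 8.7, 8.2, 7.9])       # post-intervention
n = len(rates)
time = np.arange(n, dtype=float)
post = (time >= 5).astype(float)                  # X applied at month 5
time_since = np.where(post == 1.0, time - 5.0, 0.0)

# Model: rate = b0 + b1*time + b2*post + b3*time_since
# b1 = pre-existing trend, b2 = immediate level change at the
# intervention, b3 = change in slope after the intervention.
X = np.column_stack([np.ones(n), time, post, time_since])
b0, b1, b2, b3 = np.linalg.lstsq(X, rates, rcond=None)[0]

print(f"level change: {b2:.2f}, slope change: {b3:.2f}")
```

Separating the level change (b2) from the slope change (b3) is what lets the analysis distinguish an intervention effect from a pre-existing downward trend; a real analysis would also address autocorrelation, which this sketch omits.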
Table 3
Checklist of key considerations when developing a quasi-experimental study
CONSIDERATIONS FOR RETROSPECTIVE AND PROSPECTIVE QUASI-EXPERIMENTAL STUDIES
1. Determine PICO: population, intervention, control group, outcomes (specify primary vs. secondary outcomes)
2. What is the hypothesis?
3. Is it ethical or feasible to randomize patients to the intervention?
4. Will this be a retrospective or prospective study or a combination of both?
5. What are the main inclusion and exclusion criteria?
6. Will anyone (participants, study staff, research team, analyst) be blinded to the intervention assignment?
7. Consider options for control group
8. Consider options for nonequivalent dependent variable
9. How will the observations (outcomes) be measured?
10. How many observations can be measured pre- and post-intervention?
11. How should the observations be spaced to account for seasonality? Weekly? Monthly? Quarterly?
12. Do you hypothesize that the intervention will diffuse quickly or slowly? (e.g. are changes in the outcomes expected right away or only after
a phase-in period?)
13. Do you hypothesize that the intervention will have a lasting effect on the outcome? (If yes, do not use cross-over design)
14. What is the analysis plan? (Consult a statistician)
15. If the unit of analysis differs from the unit of assignment, what analytical method will be used to account for this (e.g. adjusting the standard
error estimates by the design effect or using multilevel analysis)?
16. What sample size is needed to be powered to see a significant difference? (Consult a statistician)
17. Will the analysis strategy be intention-to-treat, and how will non-compliers be handled in the analysis?
ADDITIONAL CONSIDERATIONS FOR QUASI-EXPERIMENTAL STUDIES WITH PROSPECTIVE COMPONENTS
18. What will be the unit of delivery? (e.g. Individual patient or unit or hospital)
19. How will the units of delivery be allocated to the intervention?
20. Who will deliver the intervention? (e.g. study team or healthcare workers)
21. How and when will the intervention be delivered?
22. How will compliance with the intervention be measured?
23. Will there be activities to increase compliance or adherence? (e.g. incentives, coaching calls)
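Checklist items 15-16 ask how clustering will be handled when the unit of analysis (e.g., the patient) differs from the unit of assignment (e.g., the ward). One standard approach, shown here as a sketch with invented numbers and my own function names, inflates the individually randomized sample size by the design effect, DEFF = 1 + (m - 1) * ICC, where m is the cluster size and ICC is the intracluster correlation coefficient.

```python
# Sketch of the design-effect adjustment from checklist items 15-16
# (function names and all numbers are invented for illustration).
# Patients clustered within the unit of assignment are more alike than
# independent patients, so the sample size is inflated by
# DEFF = 1 + (m - 1) * ICC.
import math

def design_effect(cluster_size: int, icc: float) -> float:
    """Standard design effect for equal-sized clusters."""
    return 1.0 + (cluster_size - 1) * icc

def adjusted_n(n_individual: int, cluster_size: int, icc: float) -> int:
    """Sample size after inflating for intracluster correlation.
    Round to 8 decimals before ceiling to avoid float fuzz."""
    total = n_individual * design_effect(cluster_size, icc)
    return math.ceil(round(total, 8))

# Illustrative: 200 patients needed under individual randomization,
# 21 patients per cluster, ICC of 0.05.
deff = design_effect(21, 0.05)     # 1 + 20*0.05 = 2.0
n = adjusted_n(200, 21, 0.05)      # 200 * 2.0 = 400
print(f"design effect: {deff:.2f}, adjusted sample size: {n}")
```

Even a modest ICC doubles the required sample size here, which is why the checklist recommends consulting a statistician before fixing the study size.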