Omega-Squared.txt
Dear 6430 students,
We have discussed omega-squared as a less biased (than is
eta-squared) estimate of the proportion of variance explained by the
treatment variable in the population from which our sample data could be
considered a random sample. Earlier this semester we discussed a very
similar statistic, r-squared, and I warned you about how this statistic
can be inflated by high levels of extraneous variable control. The same
caution applies to eta-squared and omega-squared. Here is a comment I posted
to EDSTAT-L on this topic a few years back:
------------------------------------------------------------------------------
Date: Mon, 11 Oct 93 11:27:23 EDT
From: "Karl L. Wuensch"
To: Multiple recipients of list
Subject: Omega-squared (was P Value)
Josh, backon@vms.huji.ac.il, noted:
>We routinely run omega squared on our data. Omega squared is one of the most
>frequently applied methods for estimating the proportion of variance in the
>dependent variable accounted for by an independent variable, and is used to
>confirm the strength of association between variables in a population. ............
Omega-squared can also be misinterpreted. If the treatment is
evaluated in circumstances (the laboratory) where the influence of
extraneous variables (other variables that influence the dependent
variable) is eliminated, then omega-squared will be inflated relative
to the proportion of the variance in the dependent variable due to the
treatment in a (real) population where those extraneous variables are
not eliminated. Thus, a treatment that really accounts for a trivial
amount of the variance in the dependent variable out there in the real world
can produce a large omega-squared when computed from data collected in the
laboratory. To a great extent, both P and omega-squared reflect how well
the researcher has been able to eliminate "error variance" when
collecting the data (but P is also greatly influenced by sample size).
Imagine that all your subjects were clones of one another with identical
past histories. All are treated in exactly the same way, except that for half
of them you clapped your hands in their presence ten minutes before measuring
whatever the dependent variable is. Because the subjects differ only on
whether or not you clapped your hands in their presence, if such clapping has
any effect at all, no matter how small, it accounts for 100% of the variance
in your sample. If the population to which you wish to generalize your
results is not one where most extraneous variance has been eliminated, your
omega-squared may be a gross overestimate of the magnitude of the effect.
Do note that this problem is not unique to omega-squared. Were you to
measure the magnitude of the effect as the between-groups difference in
means divided by the within-groups standard deviation, the same potential
for inflation would exist.
Karl L. Wuensch, Dept. of Psychology, East Carolina Univ.
Greenville, NC 27858-4353, phone 919-757-6800, fax 919-757-6283
Bitnet Address: PSWUENSC@ECUVM1
Internet Address: PSWUENSC@ECUVM.CIS.ECU.EDU
========================================================================
Sender: edstat-l@jse.stat.ncsu.edu
From: Joe H Ward
Karl --- good comment!! My early research days were spent in an R-squared,
Omega-squared, Factor Analysis environment. My own observations say: "BEWARE
of those correlation-type indicators!!!" --- Joe
Joe Ward 167 East Arrowhead Dr.
San Antonio, TX 78228-2402 Phone: 210-433-6575 joeward@tenet.edu
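To make the inflation concrete, here is a small simulation sketch (my own
illustration, not part of the original exchange; the function and scenario
names are made up for this example). It computes omega-squared for a one-way
ANOVA, ω² = (SSb − (k−1)·MSw) / (SSt + MSw), twice for the same two-unit
treatment effect: once with extraneous variance nearly eliminated, as in the
clone thought experiment, and once with realistic within-group variance.

```python
# Sketch: the same treatment effect yields a very different omega-squared
# depending on how much extraneous (within-group) variance remains.
import random

def omega_squared(groups):
    """Omega-squared for a one-way ANOVA: (SSb - (k-1)*MSw) / (SSt + MSw)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_within = ss_within / (n_total - k)
    ss_total = ss_between + ss_within
    return (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)

random.seed(1)
effect = 2.0   # the hand-clap "treatment" shifts the group mean by 2 units
n = 50         # subjects per group

# Laboratory "clones": almost no extraneous variance (sd = 0.1)
lab = [[random.gauss(0, 0.1) for _ in range(n)],
       [random.gauss(effect, 0.1) for _ in range(n)]]

# Real world: the same effect, but sd = 10 from uncontrolled variables
world = [[random.gauss(0, 10) for _ in range(n)],
         [random.gauss(effect, 10) for _ in range(n)]]

print(f"lab omega-squared:   {omega_squared(lab):.3f}")    # near 1
print(f"world omega-squared: {omega_squared(world):.3f}")  # near 0
```

The treatment effect is identical in both scenarios; only the amount of
uncontrolled variance differs, which is exactly the point of the clone
thought experiment above.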