Good commentary, Michael.  Frankly, I am not very fond of any
proportion-of-variance effect size estimate, but the squared partial strikes
me as especially wicked, especially since most people who use them have no
idea what they are.

Cheers,
[Karl L. Wuensch]<http://core.ecu.edu/psyc/wuenschk/klw.htm>
From: Michael Palij [mailto:m...@nyu.edu]
Sent: Saturday, December 09, 2017 7:42 PM
To: Teaching in the Psychological Sciences (TIPS)
Cc: Michael Palij
Subject: Fw: [tips] interpretations of partial eta squared



On Fri, 08 Dec 2017 18:05:27 -0800, Karl Louis Wuensch wrote:
>          Unless you can justify removing from the denominator
>(total variance to be explained) that related to other effects
>in the model, you should never, ever, report partial eta-squared
>or partial r-squared.  If you must report a proportion of variance
>statistic, report semi-partial eta-squared / r- squared, known
>simply as eta-squared in the context of ANOVA.

Using Karl's own materials on correlation, let me clarify some of
the points that Karl makes above, as well as pose a question:

Assume we have three variables: AR = attitude toward animal rights,
MIS = misanthropy (a dislike of humankind), and IDEAL = idealism.
If we make AR our Y or criterion variable, and MIS = X1 and
IDEAL = X2 our predictor variables, one can represent the relationships
among the three variables in terms of a Venn diagram as follows:

[Venn diagram image: AR, MIS, and IDEAL drawn as overlapping circles, with the regions of AR labelled a, b, c, d as described below]
The criterion (Y) variable AR is subdivided into several components labelled
with lower-case letters:
d = unexplained or error variance
a = common variance (covariance) unique to AR and MIS
c = common variance (covariance) unique to AR and IDEAL
b = common variance shared by AR, MIS, and IDEAL

The squared semi-partial correlation coefficient, sr^2, identifies the
proportion of the TOTAL variance of AR that a single predictor uniquely
explains.  In terms of an equation,
sr^2(AR,MIS) = a/(a + b + c + d)
sr^2(AR,MIS) = proportion of TOTAL variance explained by the variance
unique to AR and MIS.
All of the semi-partial correlations have (a + b + c + d) in the denominator
of the equation used to calculate sr (given above).  The semi-partial
correlation is sometimes referred to as a "part correlation".  The
semi-partial eta-squared follows a similar logic, and the sum of the
semi-partial eta-squared values plus the remaining error variance should
sum to 1.00 because each sr^2 has the same denominator.
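
The arithmetic above can be sketched with toy numbers.  The component values
a, b, c, d below are made-up proportions of AR's variance chosen purely for
illustration (they are not from any real data set):

```python
# Hypothetical Venn-diagram components of AR's variance (made-up values
# that sum to 1.00; NOT from any real data set)
a = 0.20  # variance unique to AR and MIS
b = 0.10  # variance shared by AR, MIS, and IDEAL
c = 0.15  # variance unique to AR and IDEAL
d = 0.55  # unexplained (error) variance

total = a + b + c + d  # the TOTAL variance of AR

# Squared semi-partial correlations: every one has the SAME denominator
sr2_mis = a / total
sr2_ideal = c / total

# Because the denominators match, the unique pieces plus the shared and
# error pieces recover the whole pie
assert abs(sr2_mis + sr2_ideal + (b + d) / total - 1.0) < 1e-9
print(sr2_mis, sr2_ideal)
```

Because each sr^2 is a slice of the same total, the slices partition the
criterion's variance, which is exactly why they can be summed.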

Partial correlations differ from semi-partial correlations in a couple of
ways, but the most important is what they express:

Semi-partial correlations (technically, their squared values) identify the
common variance between a predictor and the TOTAL variance of the criterion
(in this case, AR), while the (full) partial correlations (again, technically
their squared values) identify the common variance between a predictor and
the UNEXPLAINED variance.  The equation for the (full) squared partial
correlation for a above is
pr^2(AR,MIS) = a / (a + d)
The variance components b and c (the variance the criterion shares with the
other predictor, IDEAL) are removed from the total variance.

The question that the (full) partial correlation answers is: "What proportion
of the remaining unexplained variance is accounted for by the relationship
between the criterion and this specific predictor, after the systematic
variance in the criterion that is associated with the other predictors is
removed from the criterion's variance?"
The squared (full) partial correlations do NOT add up to 1.00 because they
have different denominators (i.e., [specific effect variance + error
variance], and the specific effect variance differs from predictor to
predictor -- a or c in the diagram above).
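
A numeric sketch makes the different-denominators point concrete.  The
components a, b, c, d are made-up proportions of AR's variance, used only
for illustration:

```python
# Hypothetical Venn-diagram components of AR's variance (made-up values)
a, b, c, d = 0.20, 0.10, 0.15, 0.55  # a, c unique; b shared; d error

# Squared (full) partial correlations: the variance the criterion shares
# with the OTHER predictor (its unique piece plus b) is removed from the
# denominator, leaving only this effect plus error
pr2_mis = a / (a + d)      # 0.20 / 0.75
pr2_ideal = c / (c + d)    # 0.15 / 0.70

# Different denominators: each squared partial exceeds the corresponding
# squared semi-partial (a/total and c/total), and the set of squared
# partials does not partition the criterion's variance
total = a + b + c + d
assert pr2_mis > a / total and pr2_ideal > c / total
print(pr2_mis, pr2_ideal)
```

This is also why a partial eta-squared can look much more impressive than
the plain eta-squared computed from the same data.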

Partial eta-squared, following the above logic, describes how much of the
dependent variable's variance that has NOT been accounted for by the other
independent variables is accounted for by a given independent variable.

Whether one should use the semi-partial eta-squared or the (full) partial
eta-squared depends, I think, upon what question one is asking, or which of
two reference values one uses, namely:
(1) the TOTAL variance in the criterion or dependent variable, or
(2) the remaining UNEXPLAINED variance in the criterion or dependent variable.

My question to Karl is the following:
What did (full) partial correlation ever do to you to make you hold such
a potent grudge against ever using them?
;-)

>While SPSS does not provide this, it is easily computed as
>the effect sum of squares divided by the total (corrected)
>sum of squares.


Tests of Between-Subjects Effects
Dependent Variable: Rating

Source                     | Type III SS | df  | Mean Square | F        | Sig. | Partial Eta Sq.
---------------------------+-------------+-----+-------------+----------+------+----------------
Corrected Model            | 1318.281a   |   7 |     188.326 |  163.344 | .000 | .893
Intercept                  | 3314.885    |   1 |    3314.885 | 2875.150 | .000 | .955
DE_Attr                    | 1275.998    |   1 |    1275.998 | 1106.731 | .000 | .890
Gender                     |    4.068    |   1 |       4.068 |    3.529 | .062 | .025
Gender * DE_Attr           |   15.894    |   1 |      15.894 |   13.785 | .000 | .091
PL_Attr                    |     .837    |   1 |        .837 |     .726 | .396 | .005
DE_Attr * PL_Attr          |     .181    |   1 |        .181 |     .157 | .693 | .001
Gender * PL_Attr           |     .791    |   1 |        .791 |     .686 | .409 | .005
Gender * DE_Attr * PL_Attr |    4.252    |   1 |       4.252 |    3.688 | .057 | .026
Error                      |  157.953    | 137 |       1.153 |          |      |
Total                      | 4943.000    | 145 |             |          |      |
Corrected Total            | 1476.234    | 144 |             |          |      |
So, the plain eta-squared for DE_Attr = 1275.998/1476.234 = .864,
while the partial eta-squared = .890.
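
Both values can be checked directly from the table's sums of squares (the
numbers below are copied from the SPSS output above):

```python
# Sums of squares for DE_Attr, copied from the SPSS table above
ss_effect = 1275.998
ss_error = 157.953
ss_corrected_total = 1476.234

# Plain (semi-partial) eta-squared: effect SS over corrected total SS
eta2 = ss_effect / ss_corrected_total

# Partial eta-squared (what SPSS reports): the other effects' SS are
# dropped from the denominator, leaving only this effect plus error
partial_eta2 = ss_effect / (ss_effect + ss_error)

print(round(eta2, 3), round(partial_eta2, 3))  # 0.864 0.89
```

Since DE_Attr dominates this model, the two estimates happen to be close;
for the smaller effects (e.g., Gender * DE_Attr) the gap between the two
denominators is proportionally much larger.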

>SAS will give you a confidence interval for
>this estimate.

People are still doing confidence intervals?  I thought they had moved
over to credible intervals. ;-)

>Since your design is 2 x 2 x 2, the effect of interest is a one
>degree of freedom effect.  In that case, Cohen’s d is almost always a better
>effect size estimate, and is easy to calculate from the marginal means and the
>pooled standard deviation.

Actually, given the dependence of variance-based measures on aspects of
the experimental design, it would seem a good idea not to use them
unless one's design is VERY simple.  An article by Richardson (2011)
highlights some of these problems, especially the external validity of
percent-of-variance measures, as does the Olejnik & Algina (2003)
article, which will make one cry from all of the additional work (i.e.,
one eta-squared for manipulated variables, another for grouping
on subject attributes like gender -- maybe we should just use omega-squared,
but think about that after looking at Table 2 and subsequent tables). ;-)

Richardson, J. T. (2011). Eta squared and partial eta squared
as measures of effect size in educational research. Educational
Research Review, 6(2), 135-147.

Olejnik, S., & Algina, J. (2003). Generalized eta and omega squared
statistics: measures of effect size for some common research designs.
Psychological methods, 8(4), 434-447.
Available at:
https://www.researchgate.net/profile/James_Algina/publication/8968445_Generalized_Eta_and_Omega_Squared_Statistics_Measures_of_Effect_Size_for_Some_Common_Research_Designs/links/0912f51014365bef73000000/Generalized-Eta-and-Omega-Squared-Statistics-Measures-of-Effect-Size-for-Some-Common-Research-Designs.pdf

I don't know if the image will make it to the Tips mail archive
or the digest but it should get through to folks who receive TiPS
directly.

-Mike Palij
New York University
m...@nyu.edu<mailto:m...@nyu.edu>

P.S. Thanks to Karl for providing all the stuff. ;-)

