[tips] Sample Size: How to Determine it?

2013-08-27 Thread Michael Britt
I'm reading an interesting piece of research on anthropomorphism which 
essentially states that if we use the term "mother nature" when describing a 
natural disaster, people will be less willing to contribute to relief efforts 
("Humanizing nature could help the perceiver to conceive natural events as 
imbued with intentionality and significance rather than considering them merely 
random and meaningless phenomena").  They did two studies.  Here's the 
issue/question:

Study 1 was correlational and involved 96 students.  The results were 
supportive at the .001 level.
Study 2 was an experiment (no need to go into the details) involving 56 
students.  The results were, in the authors' words, "tangentially supportive," 
with p = .06.

I think the study was well conducted, so I don't mean to slight the 
researchers.  My guess is that if they had used more subjects they probably 
would have reached p < .05 - but would that have been an example of selective 
stopping?  I assume it would be.

So how exactly does a researcher determine beforehand - as we are suggesting 
they do - the number of subjects they ought to try to get for the study?  I'm 
just not familiar with the process.  Does one look at the effect sizes of 
previous related studies to determine if the effect is large or small and then 
make a decision?  But let's say the effect is assumed to be small, so do you 
use 100 subjects?  500?  How is this number determined?

Appreciate the insight on this.

Michael

Michael A. Britt, Ph.D.
mich...@thepsychfiles.com
http://www.ThePsychFiles.com
Twitter: @mbritt


---
You are currently subscribed to tips as: arch...@jab.org.
To unsubscribe click here: 
http://fsulist.frostburg.edu/u?id=13090.68da6e6e5325aa33287ff385b70df5d5n=Tl=tipso=27372
or send a blank email to 
leave-27372-13090.68da6e6e5325aa33287ff385b70df...@fsulist.frostburg.edu

Re: [tips] Sample Size: How to Determine it?

2013-08-27 Thread Paul C Bernhardt
There is software to determine this. One excellent and free app is G*Power.

http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/

I would use the correlational study to give me an estimate of effect size. As 
you describe, I would use that in the software to estimate the number of 
participants needed to attain the desired power. Practical constraints on the 
number of available participants usually limit things. I did such an estimate 
using G*Power a few weeks ago for a study we are planning. We will need to 
collect data over two semesters because the anticipated number of participants 
available from one semester's worth of students would only give us power of 
about .66, whereas two semesters' worth would bump us up over .90.

Paul


RE: [tips] Sample Size: How to Determine it?

2013-08-27 Thread Stuart McKelvie
Dear Tipsters,



There are various ways to plan sample size. When teaching this in research 
methods, I divide the issues into two parts:



1. Estimation of population values.

Here, more is better but there are diminishing returns. Think of the fact that 
we rarely see more than 1500 people in national polls and surveys. The formula 
is based on minimizing standard error. Of course, sampling is critical.



2. Conducting studies with variables: experimental, subject or correlational.

There are four interconnected concepts: effect size, alpha, power and sample 
size. When any three are known, the fourth is determined. You can decide where 
to set alpha and power. For effect size (d), you can be guided by Cohen's 
guidelines for small, medium and large (.2, .5, .8) and choose the value you 
are looking for. This may come from past research or, in its absence, from what 
you think is interesting theoretically or practically.



Cohen's book on power analysis gives tables where you can look up the sample 
size needed after specifying the values you choose. There is also this website:

http://homepage.stat.uiowa.edu/~rlenth/Power/
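Stuart's four-quantity point (fix any three, solve for the fourth) can be 
sketched with the standard normal-approximation formula for a two-sided, 
independent-samples t test: n per group is roughly 2 * ((z_alpha/2 + z_power) / d)^2. 
This is only a rough stand-in for Cohen's tables or G*Power; the exact 
noncentral-t answer runs a participant or two higher per group:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided independent-samples
    t test detecting a standardized mean difference d."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided criterion
    z_power = NormalDist().inv_cdf(power)          # desired power
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

print(n_per_group(0.5))  # medium effect: 63 per group (exact t gives 64)
print(n_per_group(0.2))  # small effect: 393 per group
```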



Sincerely,



Stuart





_
 Sent via Web Access

   Floreat Labore

  Recti cultus pectora roborant

Stuart J. McKelvie, Ph.D., Phone: 819 822 9600 x 2402
Department of Psychology, Fax: 819 822 9661
Bishop's University,
2600 rue College,
Sherbrooke,
Québec J1M 1Z7,
Canada.

E-mail: stuart.mckel...@ubishops.ca (or smcke...@ubishops.ca)

Bishop's University Psychology Department Web Page:
http://www.ubishops.ca/ccc/div/soc/psy

   Floreat Labore
___


Re: [tips] Sample Size: How to Determine it?

2013-08-27 Thread Michael Britt
Thanks, Paul.  I've downloaded G*Power.  Question: the correlational component 
of the study revealed r = -.21, p = .04 (a higher tendency to humanize nature 
was associated with a lower tendency to help victims of a natural disaster).  
The next test will be an independent samples t-test.

How does this info help me enter the values needed by G*Power: Effect Size d 
and Allocation ratio N2/N1?
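For reference, one common route from a correlational r to the d that G*Power 
asks for is Cohen's conversion d = 2r / sqrt(1 - r^2), and with equal group 
sizes the allocation ratio N2/N1 is simply 1. Treat this as one defensible 
choice for translating the correlational result, not the only one:

```python
from math import sqrt

def r_to_d(r):
    """Cohen's conversion from a correlation r to d (equal-n case)."""
    r = abs(r)  # only the magnitude matters for a power analysis
    return 2 * r / sqrt(1 - r ** 2)

print(round(r_to_d(-0.21), 2))  # r = -.21 corresponds to d of about 0.43
```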

Michael

Michael A. Britt, Ph.D.
mich...@thepsychfiles.com
http://www.ThePsychFiles.com
Twitter: @mbritt


RE: [tips] Sample Size: How to Determine it?

2013-08-27 Thread Jim Clark
Hi

A couple of observations to add to what others have said.

First, note that the reported p (.06, or .0619 precisely) is for a 
non-directional test.  If the authors predicted the direction of the 
difference, the directional p is half of this (.031) and significant.  This is 
consistent with the moderate or large value of d, depending on your preferences 
there.
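Jim's figures can be checked directly from the reported t(50) = 1.91 (a sketch 
assuming scipy is available; the directional p is just the upper-tail area, and 
the non-directional p doubles it):

```python
from scipy.stats import t

t_obs, df = 1.91, 50
p_one = t.sf(t_obs, df)  # directional (one-tailed) p
p_two = 2 * p_one        # non-directional (two-tailed) p

print(round(p_two, 4))   # about .0619, as reported
print(round(p_one, 3))   # about .031
```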

Second, I think it could be risky to use a correlational effect size to 
estimate an experimental effect size.  In principle it could go in either 
direction: the experimental effect size could be larger or smaller, depending 
on whether correlated factors in the correlational study contributed to or 
masked the observed effect.  One would also have to infer whether the 
experimental manipulation was more or less powerful than naturally occurring 
variation on the predictor/independent variable.

Third, even deciding in principle what effect size is required to be 
"important" is a challenging question.  The labels of small, medium, and large 
are pretty meaningless without knowing what generalization is being made.  In 
this case, for example, are we trying to generalize to the helping behavior of 
many millions of Americans, in which case a tiny effect size could be 
important (as in the classic aspirin study)?

In reality (versus theory), research design and statistics are messy and 
seemingly precise tools need to be used thoughtfully.

Take care
Jim

Jim Clark
Professor  Chair of Psychology
204-786-9757
4L41A

From: Michael Britt [mailto:mich...@thepsychfiles.com]

Also helpful.  So, to answer my own previous question, based on what they found 
in the correlational study and what one might guess from previous research, I'm 
going to assume that the effect size here, if it exists, is probably small.  So 
I used .3 in G*Power.  The result?  G*Power suggests that I get 242 subjects 
per group.  These researchers had 26 subjects in each group.

So: if you were the reviewer what would you conclude?  The researchers found:

"...the results revealed that participants in the anthropomorphism condition 
were tendentially less willing to help the victims of the natural disaster (M = 
4.39, SD = 1.02) than participants in the control condition (M = 4.89, SD = 
0.87), t(50) = -1.91, p = .06, d = 0.53."

Would you recommend that they get more subjects?

Michael

Michael A. Britt, Ph.D.
mich...@thepsychfiles.com
http://www.ThePsychFiles.com
Twitter: @mbritt

Re: [tips] Sample Size: How to Determine it?

2013-08-27 Thread Ken Steele

Hi Michael:

Be careful with the effect size statistic that G*Power uses; sometimes 
it is using rho, and rho = .3 would be a medium effect size.


Ken

PS - It is surprising how underpowered many of the experiments 
reported in the journals are.




Kenneth M. Steele, Ph.D.     steel...@appstate.edu
Professor
Department of Psychology http://www.psych.appstate.edu
Appalachian State University
Boone, NC 28608
USA



RE: [tips] Sample Size: How to Determine it?

2013-08-27 Thread rfro...@jbu.edu
I am assuming this was an independent samples t test where some participants 
heard the "mother nature" language and others didn't. Using the d of .53 they 
obtained as my estimate of what effect size they would be interested in 
obtaining (or that they think would be worthwhile to note), it appears that, 
with a df of 50, they had less than a 50/50 chance of finding a significant 
result of that size if one existed in the population. As others have pointed 
out, you need to determine, before the study begins, what effect size you are 
interested in obtaining. For example, you may believe that even a .05 effect 
size (1/20th of a standard deviation difference between the two means) could be 
meaningful given the question. If so, you are going to need a very large sample 
size to have a high probability of finding a significant result if such a small 
difference exists in the population. By my calculations*, if you wanted to have 
at least an 80 percent chance of detecting an effect size of at least .50 (half 
a standard deviation difference between the means) with an independent samples 
t test, you would need 128 participants in the study (64 in each group). If you 
wanted an 80% chance of detecting a .05 (5 percent) effect size in such a case, 
you would need 12560 participants (6280 in each group).

*My power calculations came from http://homepage.stat.uiowa.edu/~rlenth/Power/. 
The author has a nice discussion of power and why retrospective power analysis 
is worthless under the "Advice" section on that page.
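Rick's "less than a 50/50 chance" figure can be reproduced with a quick 
normal-approximation power calculation (a rough stand-in for the Lenth applet; 
exact noncentral-t power comes out slightly lower):

```python
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided independent-samples t test
    (normal approximation; ignores the tiny lower-tail rejection region)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5  # expected value of the test statistic
    return NormalDist().cdf(ncp - z_alpha)

# 26 per group at the observed d = .53: essentially a coin flip
print(round(approx_power(0.53, 26), 2))  # about 0.48
```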

Rick

Dr. Rick Froman, Chair
Division of Humanities and Social Sciences
Box 3519
x7295
rfro...@jbu.edu
http://bit.ly/DrFroman

Proverbs 14:15 A simple man believes anything, but a prudent man gives thought 
to his steps.

RE: [tips] Sample Size: How to Determine it?

2013-08-27 Thread Mike Palij

I was going to stay out of this discussion but I have to address
a couple of points, one of which is made by Rick at the end
of his post:

(1) The major problem with power analysis is that it requires
one to have knowledge of POPULATION PARAMETERS,
that is, the means, the standard deviations, the correlations, and
so on. NOTE: a researcher has sample data from which descriptive
and inferential statistics are calculated, and these will have
sampling error and possibly other types of error that make the
sample estimates of the mean, standard deviation, correlation, etc.,
misleading. The proper thing to do before collecting the data is
to conduct an a priori power analysis. But an a priori power
analysis assumes that one knows the relevant population means,
standard deviations, correlations, effect size, and so on.
This is a problem because far too many researchers
don't have a clue what these values are or should be.  If you don't
know what the population parameters are, step away from the
data and let a professional try to do something with it.

(2) Rick Froman refers to Russ Lenth's website, where
one can use his software for some calculations -- I suggest
one use G*Power instead -- as well as to his position that
retrospective or observed power analysis is bad, m'kay?  I suggest
that one instead read Geoff Cumming's Understanding the New
Statistics, which goes into much more detail about effect sizes,
confidence intervals, and meta-analysis -- all of which are
inter-related; see:
http://www.amazon.com/Understanding-The-New-Statistics-Meta-Analysis/dp/041587968X/ref=sr_1_1?ie=UTF8qid=1377628036sr=8-1keywords=cummings+meta-analysis
Cumming makes a stronger argument than Lenth. However,
I would also suggest that one read my review of Cumming's
book in PsycCRITIQUES, which takes issue with the
anti-retrospective or anti-observed power analysis position; see:

Palij, M. (2012). New statistical rituals for old. PsycCRITIQUES, 57(24).


(3) Pragmatically, most psychologists who do statistical analysis
rely almost solely on the sample information to reach conclusions
about the population parameters.  This is where concerns arise
about whether one's obtained statistic, like a t test, is
statistically significant, or about what to do if one has p(obt t) = .06.
The p-value doesn't really matter if you know that the two
sample means you have come from different populations, right?
Which is why one is urged to use confidence intervals instead.
But psychologists will look at the observed power level provided
by SPSS's MANOVA or GLM procedures if they have done an
ANOVA, because they did not select the power level before they
collected their data.  And it is only then that they might realize,
"Oops! I don't really have enough statistical power to reject a
false null hypothesis."

But this is an old tale that all Tipsters should be familiar with, given
our current statistical practices -- see Cumming's book if one needs
a refresher on what some consider proper statistical analysis in
contemporary psychological research.

Then again, really knowing the phenomenon you're studying and
having strong theory, such as signal detection theory in psychophysics
or recognition memory research, may go a much longer way than
wondering whether one has a statistically significant result.

-Mike Palij
New York University
m...@nyu.edu


  Original Message  
On Tue, 27 Aug 2013 10:53:07 -0700, Rick Froman wrote:
I am assuming this was an independent samples t test where some participants heard the "mother nature" language and others didn't. Using the d of .53 they obtained as my estimate of what effect size they would be interested in obtaining (or that they think would be worthwhile to note), it appears that, with a df of 50, they had less than a 50/50 chance of finding a significant result of that size if one existed in the population. As others have pointed out, you need to determine, before the study begins, what effect size you are interested in obtaining. For example, you may believe that even a .05 effect size (1/20th of a standard deviation difference between the two means) could be meaningful given the question. If so, you are going to need a very large sample size to have a high probability of finding a significant result if such a small difference exists in the population. By my calculations*, if you wanted to have at least an 80 percent chance of detecting an effect size of at least .50 (half a standard deviation difference between the means) with an independent samples t test, you would need 128 participants in the study (64 in each group). If you wanted to have an 80% chance of detecting a .05 effect size (5 percent of a standard deviation) in such a case, you would need 12,560 participants (6,280 in each group).

*My power calculations came from http://homepage.stat.uiowa.edu/~rlenth/Power/ .
The author has a nice discussion of power and why retrospective power analyses are not recommended.

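For anyone who wants to reproduce Rick's numbers without G*Power or Lenth's applet, the usual normal-approximation formula gets very close (a sketch; `n_per_group` is an illustrative name, and exact t-based routines such as G*Power's can return a participant or two more per group at small sample sizes):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, equal-n independent-samples t test,
    using the normal approximation 2 * ((z_crit + z_power) / d) ** 2."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

print(n_per_group(0.50))  # close to Rick's 64 per group (128 total)
print(n_per_group(0.05))  # -> 6280 per group (12560 total)
```

Note how the required n grows with the square of 1/d: halving the smallest effect you care about quadruples the sample you need.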
[tips] Tenure Track Opening: Adult Clinical

2013-08-27 Thread Pollak, Edward (Retired)
West Chester University invites applications for a tenure-track faculty 
position in Clinical Psychology at the assistant professor level. Applicants 
must have a Ph.D. in psychology, an active program of research with an adult 
population, and the ability to mentor undergraduate and graduate students in 
research. We are especially interested in recruiting faculty from 
under-represented groups.  Preference will be given to candidates who can begin 
their appointment in January, 2014.  Applicants must express a commitment to 
teaching courses in clinical psychology (e.g., personality, 
abnormal/psychopathology, counseling/psychotherapy, testing/assessment) at the 
undergraduate and graduate levels.  Applicants selected for on-campus 
interviews will present a colloquium to demonstrate teaching and research 
excellence.  The Department of Psychology is composed of 20 full-time faculty 
members and serves approximately 800 undergraduate majors and 100 master’s 
degree students.  Additional information is available at http://www.wcupa.edu.  
Completion of the Ph.D. from an APA-approved program is required before the 
start of the appointment, as is licensure-eligibility in the state of PA.  
Applicants should apply online at https://wcupa.peopleadmin.com. Applications 
must include a letter identifying the courses the applicant is prepared to 
teach, curriculum vitae, 3 letters of reference, no more than 3 reprints or 
preprints of published articles, a statement of teaching philosophy, and a 
statement of research interests.  Review of candidates will begin on August 15, 
2013 and continue until the position is filled. Applicants must successfully 
complete the interview process and a colloquium to be considered finalists. The 
filling of this position is contingent upon available funding. All offers of 
employment are subject to and contingent upon satisfactory completion of all 
pre-employment criminal background and consumer reporting checks. West Chester 
University is an Affirmative Action-Equal Opportunity Employer.

---
You are currently subscribed to tips as: arch...@jab.org.
To unsubscribe click here: 
http://fsulist.frostburg.edu/u?id=13090.68da6e6e5325aa33287ff385b70df5d5n=Tl=tipso=27389
or send a blank email to 
leave-27389-13090.68da6e6e5325aa33287ff385b70df...@fsulist.frostburg.edu

RE: [tips] Sample Size: How to Determine it?

2013-08-27 Thread Wuensch, Karl L
My two cents: Decide on the smallest effect that you would consider to be of importance. If you think Type I and Type II errors are equally serious, then set both alpha and beta to .05, that is, find N for 95% power. G*Power does this with ease, but you are unlikely to like the answer.

If precision of estimation of the effect size is of importance, even bigger is better (narrower confidence intervals for the effect size).

With respect to the independent samples t test, you can use the procedure that G*Power specifies for that design, with d as the effect size, or you can use the point biserial correlation procedure, with r as the effect size.

Do note that the size of the point biserial r is greatly affected by the ratio of the sample sizes, which is not true of d. See
http://core.ecu.edu/psyc/wuenschk/StatHelp/d-r.htm
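To illustrate Karl's caution about unequal group sizes, here is the standard large-sample conversion between d and the point-biserial r (a sketch; `d_to_r` is an illustrative name, and the equal-n special case reduces to the familiar r = d / sqrt(d^2 + 4)):

```python
import math

def d_to_r(d, p=0.5):
    """Approximate point-biserial r for a given d, where p is the
    proportion of the total N in one group (q = 1 - p):
    r = d / sqrt(d**2 + 1/(p*q))."""
    q = 1 - p
    return d / math.sqrt(d ** 2 + 1 / (p * q))

# The same d = 0.5 yields a noticeably smaller r under a 10/90 split
# than under a 50/50 split.
print(round(d_to_r(0.5, p=0.5), 3))
print(round(d_to_r(0.5, p=0.1), 3))
```

So the same underlying mean difference looks "weaker" in r terms as the group sizes become more lopsided, which is why d is the safer effect size to plan around here.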

When discussing issues of power and effect size, always pay attention 
to speakers from NYU.  :-)

Cheers,

Karl L. Wuensch

-Original Message-
From: Mike Palij [mailto:m...@nyu.edu] 
Sent: Tuesday, August 27, 2013 3:06 PM
To: Teaching in the Psychological Sciences (TIPS)
Cc: Michael Palij
Subject: RE: [tips] Sample Size: How to Determine it?

I was going to stay out of this discussion but I have to address a couple of 
points, one of which is made by Rick at the end of his post:

(1) The major problem with power analysis is that it requires one to have knowledge of POPULATION PARAMETERS, that is, the means, the standard deviations, the correlations, and so on. NOTE: a researcher has sample data from which descriptive statistics and inferential statistics are calculated; these will have sampling error and possibly other types of error that make the sample estimates of the mean, standard deviation, correlation, etc., misleading. The proper thing to do before collecting the data is to conduct an a priori power analysis. But an a priori power analysis assumes that one knows the relevant population means, standard deviations, correlations, effect size, and so on that are involved. This is a problem because far too many researchers don't have a clue what these values are or should be. If you don't know what the population parameters are, step away from the data and let a professional try to do something with it.

(2) Rick Froman below refers to Russ Lenth's website where one can use his software for some calculations -- I suggest one use G*Power instead -- as well as his position that retrospective or observed power analysis is bad, m'kay? I suggest that one instead read Geoff Cumming's Understanding the New Statistics, which goes into much more detail about effect sizes, confidence intervals, and meta-analysis -- all of which are inter-related; see:
http://www.amazon.com/Understanding-The-New-Statistics-Meta-Analysis/dp/041587968X/ref=sr_1_1?ie=UTF8qid=1377628036sr=8-1keywords=cummings+meta-analysis
Cumming makes a stronger argument than Lenth. However, I would also suggest that one read my review of Cumming's book in PsycCritiques, which takes issue with the anti-retrospective or anti-observed power analysis position; see:

Palij, M. (2012). New statistical rituals for old. PsycCRITIQUES, 57(24).


[tips] noninvasive brain-to-brain interface in human beings

2013-08-27 Thread Carol DeVolder
Another interesting tidbit from Science Daily:

http://www.sciencedaily.com/releases/2013/08/130827122713.htm


-- 
Carol DeVolder, Ph.D.
Professor of Psychology
St. Ambrose University
518 West Locust Street
Davenport, Iowa  52803
563-333-6482


Re: [tips] Sample Size: How to Determine it?

2013-08-27 Thread Paul C Bernhardt
It's hard to know what I would conclude. Are there other significant effects 
found in the study with the small sample size? Power is not an issue if you 
have statistical significance. I have a published paper in which one study has 
an N of 8, four in each of the two groups. It was significant with a huge 
effect size (naturally, with such a small sample). People may not believe it 
(is it robust?) but power is not the reason to doubt it.
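A quick back-of-the-envelope check of why a significant result with N = 8 must come with a huge effect size (a sketch, assuming an equal-n independent-samples t test and the tabled critical t):

```python
import math

# With n = 4 per group (df = 6), the critical t at alpha = .05
# (two-tailed) is 2.447; since t = d * sqrt(n/2) for equal n,
# the smallest d that can reach significance is:
t_crit = 2.447
n_per_group = 4
d_min = t_crit * math.sqrt(2 / n_per_group)
print(round(d_min, 2))  # -> 1.73, huge by any convention
```

With samples that small, only effects well beyond Cohen's "large" benchmark can clear the significance bar, which is exactly Paul's point.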

With a marginally significant effect (I've never heard the term "tendentially" used in this context) it is essentially the same problem. It is what it is. They don't have statistical significance. They have a small, but not absurdly small, sample size (how many studies with df = 50 are published out there?).

I would be less concerned with the stats on this and more concerned with the 
claims they make in their discussion about the finding.

Do they write their discussion with scant or no real accounting for the fact that it is marginally significant? They need to describe their finding as tentative and suggestive, and note that a future study needs greater control over the IV and possibly an increased sample size.

If they try to act like they have a big discovery, I'd be requesting a rewrite.

In fact, an important part of the interpretation is the degree of surprise in 
the finding. Is it consistent with other findings in the domain? If so, then 
they can speak more strongly (not a lot, just a little) about the meaning of 
their findings. If it is surprising and contrary to other findings in the 
literature, then I'd be prone to rejection of the article due to lack of a 
sufficient finding to change my prior view.

Paul

On Aug 27, 2013, at 9:59 AM, Michael Britt wrote:

Also helpful.  So, to answer my own previous question, based on what they found 
in the correlational study and what one might guess from previous research, I'm 
going to assume that the effect size here, if it exists, is probably small.  So 
I used .3 in G*Power.  The result?  G*Power suggests that I get 242 subjects 
per group.  These researchers had 26 subjects in each group.

So: if you were the reviewer what would you conclude?  The researchers found:

...the results revealed that participants in the anthropomorphism condition 
were tendentially less willing to help the victims of the natural disaster (M = 
4.39, SD = 1.02) than participants in the control condition (M = 4.89, SD = 
0.87), t(50) = –1.91, p = .06, d = 0.53.
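As a sanity check, the reported d and t follow directly from the cell means and SDs (a sketch assuming 26 participants per group and a pooled SD; the sign simply reflects which mean is subtracted from which):

```python
import math

n = 26                      # per group (df = 2n - 2 = 50)
m1, sd1 = 4.39, 1.02        # anthropomorphism condition
m2, sd2 = 4.89, 0.87        # control condition

sp = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)    # pooled SD (equal n)
d = (m2 - m1) / sp                           # Cohen's d
t = (m2 - m1) / (sp * math.sqrt(2 / n))      # independent-samples t

print(round(d, 2), round(t, 2))  # -> 0.53 1.9
```

Both values match the authors' report, so the numbers at least hang together internally.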

Would you recommend that they get more subjects?

Michael

Michael A. Britt, Ph.D.
mich...@thepsychfiles.com
http://www.ThePsychFiles.com
Twitter: @mbritt

On Aug 27, 2013, at 8:59 AM, Stuart McKelvie smcke...@ubishops.ca wrote:

Dear Tipsters,

There are various ways to plan sample size. When teaching this in research methods, I divide the issues into two parts:

1. Estimation of population values.
Here, more is better but there are diminishing returns. Think of the fact that we rarely see more than 1500 people in national polls and surveys. The formula is based on minimizing standard error. Of course, sampling is critical.
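Stuart's first point can be made concrete with the standard margin-of-error calculation for a proportion (a sketch; `poll_n` is an illustrative name, and p = 0.5 is the conservative worst case):

```python
import math
from statistics import NormalDist

def poll_n(margin, conf=0.95, p=0.5):
    """n needed to estimate a proportion to within +/- margin at the
    given confidence level: n = z**2 * p * (1 - p) / margin**2."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# A +/- 2.5-point margin at 95% confidence needs about 1537 people,
# which is why national polls rarely exceed roughly 1500 respondents.
print(poll_n(0.025))
```

The diminishing returns are visible directly: halving the margin of error quadruples the required n, so precision beyond a point becomes prohibitively expensive.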

2. Conducting studies with variables: experimental, subject, or correlational.
There are four interconnected concepts: effect size, alpha, power, and sample size. When any three are known, the fourth is determined. You can decide where to set alpha and power. For effect size (d), you can be guided by Cohen's guidelines for small, medium, and large (.2, .5, .8) and choose the value you are looking for. This may come from past research or, in its absence, what you think is interesting theoretically or practically.
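The interdependence Stuart describes is easy to demonstrate: fix any three of {effect size, alpha, n, power} and the fourth follows. A normal-approximation sketch for the power of an equal-n independent-samples t test (illustrative, not G*Power's exact noncentral-t routine):

```python
from math import sqrt
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, equal-n independent-samples
    t test: power is roughly P(Z > z_crit - d * sqrt(n/2))."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return 1 - nd.cdf(z_crit - d * sqrt(n_per_group / 2))

# Cohen's medium effect (d = .5) with 64 per group gives roughly 80% power.
print(round(approx_power(0.5, 64), 2))
```

Rearranging the same expression gives the required n for a chosen power, or the smallest detectable d for a fixed n, which is exactly the "know three, solve for the fourth" logic.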



Cohen's book on power analysis gives tables where you can look up the sample size needed after specifying the values you choose. There is also this website:
http://homepage.stat.uiowa.edu/~rlenth/Power/



Sincerely,



Stuart





_
 Sent via Web Access

   Floreat Labore

  Recti cultus pectora roborant

Stuart J. McKelvie, Ph.D., Phone: 819 822 9600 x 2402
Department of Psychology, Fax: 819 822 9661
Bishop's University,
2600 rue College,
Sherbrooke,
Québec J1M 1Z7,
Canada.

E-mail: stuart.mckel...@ubishops.ca (or smcke...@ubishops.ca)

Bishop's University Psychology Department Web Page:
http://www.ubishops.ca/ccc/div/soc/psy

   Floreat Labore
___


From: Paul C Bernhardt [pcbernha...@frostburg.edu]
Sent: 27 August 2013 08:41
To: Teaching in the Psychological Sciences (TIPS)
Subject: Re: [tips] Sample Size: How to Determine it?

There is software to determine this. One excellent and free app is G*Power.


[tips] Belfast not Dublin,you dummy.

2013-08-27 Thread michael sylvester
On tonight's Jeopardy game the final question dealt with the area of Ireland where there are distinctive markings indicating Protestant and Catholic neighborhoods. All three of the contestants wrote down Dublin. I wonder if they have heard of William of Orange. Btw, I hosted a college student from Northern Ireland a fortnight ago. She told me that when they play the Monopoly board game, players avoid landing on Dublin. Apparently Cork is esteemed as the real capital of the Irish Republic.
michael

Fw: [tips] Tenure Track Opening:

2013-08-27 Thread michael sylvester
Donald Trump University
Please submit $4000 with your application

michael