Re: How calculate 95%=1.96 stdv

2001-07-04 Thread dennis roberts

At 01:11 PM 7/4/01 +, s.petersson wrote:
>Hi NG,
>
>I sometimes run into a constant of 1.96 stdv that is used to calculate 95%
>statistical confidence intervals. But I can't seem to find how the 1.96 stdv
>is actually derived from a security level of 95%. In the statistical
>textbooks I've read, there is only a huge table with different stdv's at a
>given security level.

if the sampling distribution is normal ... or we can assume it to be ... 
then 95% of the sample means will vary around the mu value ... from 1.96 
z units below mu to 1.96 z units above mu

that is where the 1.96 comes from ... it is the z value that cuts off the 
middle 95% of the area in a normal distribution, symmetric around the mean
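
if you would rather pull the 1.96 out of software than out of a table, here 
is a small python sketch (assuming the scipy library is available) that 
recovers it from the 95% requirement:

from scipy.stats import norm

# a central 95% interval leaves 2.5% in each tail, so we want the z value
# with 97.5% of the area below it
z = norm.ppf(0.975)
print(z)                                   # about 1.96
print(norm.cdf(1.96) - norm.cdf(-1.96))    # about 0.95, the area between -1.96 and +1.96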






Re: levene

2001-07-02 Thread dennis roberts

At 01:10 PM 7/2/01 -0600, Stab wrote:
>whats the difference between a modified levene test, and a levene test.
>
>how do you do both of these tests in SAS
>thanks

the difference is whether you compute the deviations around the medians of the 
samples (the modified version, often called the Brown-Forsythe test) or around 
the means of the samples (the original levene test)
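
the poster also asked how to run these in SAS, which i won't show here ... but 
a small python sketch (assuming scipy is installed; the two groups are made-up 
numbers, just for illustration) makes the mean-vs-median distinction concrete:

from scipy.stats import levene

group1 = [23, 25, 31, 28, 22, 30, 27]
group2 = [19, 34, 26, 40, 21, 38, 24]

# original levene test: deviations taken around each group MEAN
stat_mean, p_mean = levene(group1, group2, center='mean')

# modified (brown-forsythe) version: deviations taken around each group MEDIAN
stat_median, p_median = levene(group1, group2, center='median')

print(stat_mean, p_mean)
print(stat_median, p_median)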






Re: Help with stats please

2001-06-24 Thread dennis roberts

At 12:20 PM 6/24/01 -0700, Melady Preece wrote:
>Hi.  I am teaching educational statistics for the first time, and although I
>can go on at length about complex statistical techniques, I find myself at a
>loss with this multiple choice question in my test bank.  I understand why
>the range of  (b) is smaller than (a) and (c), but I can't figure out how to
>prove that it is smaller than (d).
>
>If you can explain it to me, I will be humiliated, but grateful.
>
>
>1.  Which one of the following classes had
>  the smallest range in IQ scores?

of course, the item says nothing about the shape of the distribution in any 
class ... so, does it assume the scores are roughly normal? in fact, since each 
of these classes is probably on the small side ... that would be hard to assume 
... but, for the sake of the item ... pretend

in addition, it does not say to assume the population of IQ scores has mean 
= 100 and sd of about 15 ... so, whether that plays a role or not, i am not 
sure BUT ...


>  A)  Class A has a mean IQ of 106
>and a standard deviation of 11.

at least about 2 sd units of 11 = 22 on each side of 106 ... so a range of 
about 44 or more

>  B)  Class B has an IQ range from 93
>to 119.

well, range here is about 26 ... less than in A for sure

>  C)  Class C has a mean IQ of 110
>with a variance of 200.

a variance of 200 means an sd of about 14 ... so 2 sd units of 14 = 28 on each 
side of 110 ... the range must be around 56 or more ... similar to A but a bit larger

>   D)  Class D has a median IQ of 100
>with Q1 = 90 and Q3 = 110.

25th PR = 90 and 75th PR = 110 ... IF we assumed the class was normally 
distributed ... then the mean would be about 100 too ... and since 1 SD below 
the mean and 1 SD above the mean correspond to roughly the 16th PR and 84th PR 
... Q1 and Q3 are NOT that far out ... so the SD must be more than 10 (under 
normality the quartiles sit only about 0.67 SD from the mean, so the SD is 
closer to 15) ... thus, 2 units of at least 10 = 20 on either side of 100 gives 
a range of at least about 40 ... roughly comparable to A and C ... but, clearly more than B ...

B is probably the best of the lot BUT, i am NOT sure what the real purpose 
of this item is ...
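
for anyone who wants to check the arithmetic, a rough python sketch of the 
reasoning above (normality assumed throughout ... scipy is only used to get the 
quartile z value):

from scipy.stats import norm

range_a = 4 * 11                  # class A: +/- 2 sds of 11 around 106, about 44
range_b = 119 - 93                # class B: given directly, 26
range_c = 4 * 200 ** 0.5          # class C: sd = sqrt(200), about 14.1, range about 57
# class D: under normality Q1 and Q3 sit about 0.67 sd from the mean, so an
# IQR of 110 - 90 = 20 implies an sd of roughly 15 and a range of roughly 59
sd_d = (110 - 90) / (2 * norm.ppf(0.75))
range_d = 4 * sd_d
print(range_a, range_b, range_c, range_d)   # B is clearly the smallest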


>The test bank says the answer is b.
>
>Melady
>
>
>
>
>

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: trimming data

2001-06-20 Thread dennis roberts



here is some help info from minitab about trimmed means ...
===
Trimmed mean

The trimmed mean (TrMean) is like the mean, but it excludes the most 
extreme values in the data set. The highest and lowest 5% of the values 
(rounded to the nearest integer) are dropped, and the mean is calculated 
for the remaining values.
For the precipitation data, 5% of 11 observations is 0.55, which rounds to 
1. Thus, the highest value and the lowest value are dropped, and the mean 
is calculated for the remaining data:
 1  2  2  3  3  3  3  4  4  5  10

This yields a value of 3.222. Like the median, the trimmed mean is less 
sensitive to extreme values than the mean. For example, the trimmed mean of 
this data set would be 3.222 even if there were 30 days with precipitation 
in April instead of 10.

© All Rights Reserved. 2000 Minitab, Inc.
==

keep in mind that if the data set is symmetrical ... then, trimming really 
accomplishes nothing ... when it comes to the mean ... even if there are 
extreme values ...

in a seriously positively skewed distribution ... trimming (for the mean) will 
pull the mean back to the LEFT ... compared to not trimming ... and just the 
opposite for a seriously negatively skewed distribution ...

as i said earlier, trimming will necessarily DECREASE the variability ... 
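
a small python sketch of the minitab rule quoted above (standard library only 
... the "round to the nearest integer" step is written out by hand):

def trimmed_mean(values, pct=0.05):
    data = sorted(values)
    k = int(pct * len(data) + 0.5)          # 5% of 11 = 0.55, which rounds to 1
    kept = data[k:len(data) - k] if k > 0 else data
    return sum(kept) / len(kept)

precip = [1, 2, 2, 3, 3, 3, 3, 4, 4, 5, 10]
print(trimmed_mean(precip))                 # 3.222..., matching the minitab example
print(sum(precip) / len(precip))            # ordinary mean, 3.636..., pulled up by the 10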






Re: trimming data

2001-06-20 Thread dennis roberts

At 11:24 AM 6/20/01 -0500, Mike Granaas wrote:

>A colleague has approached me about locating references discussing the
>trimming of data, with primary emphasis on psychological research.  He is
>primarily interested in books/chapters/articles that emphasize the when
>and how.
>
>I am at a loss on this one and was wondering if anyone could offer a
>couple of references.

other than what some software programs do ... i don't have ready references 
... but, the notion is that for some distributions ... particularly with 
some outliers at ONE end ... if you trim, say, 5% from each end ... it will 
reduce the impact of the outliers on your descriptive stats ...

in minitab, there is a trimmed mean that you get as part of the DESCRIBE 
command which axes 5% from each end and THEN finds the mean for the middle 
90% ...
if you think about it ... you can trim different % values from the ends ... 
and, if you did a full trim of 50% from EACH end ... you are at the median!

clearly, the more you trim the data, the narrower the data set is ...

one should only consider trimming in the broader context of asking: are there 
outliers, and if there are, what (if anything) should we do about them? in 
some cases ... you do nothing since, from all accounts, the data are 
legitimate values ... but, if you find BAD data at the ends (due to 
miskeying, scoring error, etc.), then the first task is to justify WHAT 
values, if any, to eliminate ...
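
a quick illustration of the "trim more and you head toward the median" point, 
using a made-up data set with one high outlier (numpy and scipy assumed available):

import numpy as np
from scipy.stats import trim_mean

data = [2, 3, 3, 4, 4, 5, 5, 6, 7, 40]      # the 40 is the outlier

print(np.mean(data))                        # 7.9, dragged up by the 40
print(trim_mean(data, 0.10))                # 10% off EACH end: mean of the middle 8
print(trim_mean(data, 0.25))                # 25% off each end: mean of the middle half
print(np.median(data))                      # the 50% "full trim" end point, 4.5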




>Thanks,
>
>Michael
>
>***
>Michael M. Granaas
>Associate Professor[EMAIL PROTECTED]
>Department of Psychology
>University of South Dakota Phone: (605) 677-5295
>Vermillion, SD  57069  FAX:   (605) 677-6604
>***
>All views expressed are those of the author and do not necessarily
>reflect those of the University of South Dakota, or the South
>Dakota Board of Regents.
>
>
>

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: RANDOM NUMBER GENERATOR

2001-06-04 Thread dennis roberts

At 12:39 PM 6/4/01 -0700, AL.RAMOS wrote:
>I WOULD LIKE HELP ON FORMULATING DATA ON EXCEL THAT WOULD EXECUTE
>RANDOM NUMBERS WITHOUT REPEATING. I TRIED USING THE FOLLOWING FORMULA
>BUT IT REPEATS SOME OF THE NUMBERS: =RAND()*39.  ONCE I ENTER THIS
>FORMULA A NUMBER SHOWS UP ON CELL A1 AND THEN I JUST DRAG TO OTHER
>CELLS AND EXCEL AUTOMATICALLY GENERATES OTHER NUMBERS BUT REPEATS SOME
>OF THEM. I NEED HELP SO THAT IT DOES'NT REPEAT THEM . THANKS. AL.



are they really random if there are never any repeats?
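
that said, if what is wanted really is a set of random values with NO repeats, 
that is sampling without replacement ... here is a python sketch (standard 
library); in excel one common trick is to put =RAND() in a helper column next 
to the numbers 1-39 and then sort on that column:

import random

numbers = random.sample(range(1, 40), 39)   # all of 1..39 in a random order, no repeats
print(numbers)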






Re: Ninety Percent above Median

2001-06-01 Thread dennis roberts

w d allen seems not to be happy with the feedback that a few of us sent, in 
response to his (apparent) blasting of "educators" and their understanding 
of the median ...

sorry ...

but, as don pointed out ... you provided NO context whatsoever in your 
blast ... as to what was said when, and by whom ... so, i think the 
comments you got were fair and sensible

you also point to a url below ... that gives a simple way to find the 
median ... but, you also say that statisticians link this to a population 
... but, i see NO mention whatsoever in this link to populations, samples, 
etc. etc.

thus, your point about that seems irrelevant

no statistician i know links the definition of the median (or the mean or 
the mode) to THE population ... as opposed to some sample of data ... if 
that is the case, please provide some references on that point ... the url 
below helps not one bit

the link says the following:

A. > The median value is the middle value in a set of values. Half of 
all values are smaller than the median value and half are larger.

B. > When the data set contains an odd (uneven) set of numbers, the 
middle value is the median value. When the data set contains an even set of 
numbers, the middle two numbers are added and the sum is divided by two. 
That number is the median value.

A is a definition for the median ... but B is a PROCEDURE or an AGREEMENT 
on how we should locate the median ... A and B are not the same

let's say i have data 10, 8, 6, 3, 2, 2 ... and, i tell you that the median 
is 5 ... does that satisfy A ... the definition of the median? YES ... what 
about 4 or 5.3 or 3.9?? well, they all satisfy that definition too ... ANY 
value that falls between what you consider to be the upper limit of the 
lower of the two middle values and the lower limit of the larger of the two 
middle values ... satisfies that definition

WHERE DOES B COME IN THEN? stat folks have just come to an agreement that 
when we have cases where the median will fall between 2 values ... and 
there is "space" between the two values ... that we will average the two 
values and CALL it the median ... this is done by convention ... and has 
nothing to do with the definition of the median ...
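
a tiny python check of the A-versus-B distinction (the statistics module is in 
the standard library):

import statistics

data = [10, 8, 6, 3, 2, 2]          # sorted: 2 2 3 | 6 8 10

# the CONVENTION (B): average the two middle values
print(statistics.median(data))      # 4.5

# the DEFINITION (A): any value strictly between 3 and 6 splits the data in half
for m in (3.9, 4, 5, 5.3):
    below = sum(x < m for x in data)
    above = sum(x > m for x in data)
    print(m, below, above)          # 3 below and 3 above, every time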


At 11:34 AM 6/1/01 -0700, W. D. Allen Sr. wrote:
>"A couple of colleagues have already pointed out how the statement you so
>scornfully cite might in fact be true; ...".






old cars

2001-05-31 Thread dennis roberts

if anyone enjoys old cars ... really old cars ... and needs a 5 minute 
break from your daily work tedium ... have a look at

http://community.webshots.com/user/dennisroberts111

taken memorial day ...

the 1906 stanley steamer was a hoot! (it ran too!)

==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: Ninety Percent above Median

2001-05-31 Thread dennis roberts

At 05:56 PM 5/31/01 +, W. D. Allen Sr. wrote:
>Only from the education field do we hear the statement that over ninety
>percent of students ranked above the median! The statement was made on TV.

i take exception to the above ... i bet there are stupid folks in other 
disciplines that make stupid statements like that too ...

BUT ... who said it? on what TV program? should we believe everything we 
hear on TV?

in fact, this CAN be true ... depending on your frame of reference ... 
example: say we have a nationally normed test ... and, you are comparing 
YOUR school district's students to the national norms ... it is totally 
possible that 90% of YOUR students could be above the national median ...




>WDA
>
>end
>
>
>
>
>

_________
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: Coincident (i.e. overlapping) plots

2001-05-30 Thread dennis roberts

of course, in any 2 dimensional graph ... you are very limited in what you 
can do, since you are trying to distinguish between points with identical 
coordinates ... ANY sort of offset system will distort the data ... and a 
density scheme in the darkness of the plotting symbol or shade of color 
still leaves you not knowing how many points are at that spot

then you have the problem of trying to differentiate say the X=53.2 and 
Y=67.8 (of which there might be 10) from ... an adjacent value of X=53.1 
and Y=67.9 ...

good 3 dimensional plots help this a bit but, do not fully get around the 
problem ...

i guess i would ask what the purpose is for seeing the plot? if it is to 
note a pattern (if there is one) and get a feel for where different 
concentrations of data points might be ... then i have found jitter in 
minitab to be sufficient (i just wish it were the default mode) ... if you 
really want to get REALLY accurate ... then one has to sort the data on X and see 
what happens on Y ...

finally, given that so much of our data has been rounded in some fashion 
... getting overly precise with this seems to be trying to read into our 
data ... something that it does not contain
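
for what it is worth, here is a minimal python/matplotlib sketch of the jitter 
idea (made-up, heavily rounded data ... numpy and matplotlib assumed available):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# rounded data: many points share exactly the same (x, y) spot
x = rng.integers(1, 6, size=200).astype(float)
y = rng.integers(1, 6, size=200).astype(float)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.scatter(x, y)
ax1.set_title("no jitter: at most 25 visible points")

# add a small random offset to each coordinate, for display only
jx = x + rng.normal(0, 0.08, size=x.size)
jy = y + rng.normal(0, 0.08, size=y.size)
ax2.scatter(jx, jy, alpha=0.5)
ax2.set_title("jittered: the overlaps become visible")
plt.show()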



At 09:34 PM 5/29/01 -0400, Peter Nash wrote:
>Do you know of any statistical software that shows on a scatter-plot when points
>are coincident (i.e. there are numerous points that overlap in one location)?
>This is sometimes shown using jitter, sometimes different sizes for the
>points, sometimes adding leaves to the points to indicate the number of
>overlapping points, and sometimes this can be performed by changing a 2D
>graph to 3D.
>
>This feature is crucial because it IMMEDIATELY shows the importance of the
>points.  (Not Minitab, which insists on jittering ALL the plotted points)
>
>
>
>
>
>
>

_____
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: "Mean" of Standard deviations

2001-05-17 Thread dennis roberts



sounds like you want the overall sd ... as though you had ALL the data in 
ONE column and were calculating the sd on THAT one column

the formula for TWO groups would be:

pooled (weighted) variance = [ (n1 - 1)*var1 + (n2 - 1)*var2 ] / (n1 + n2 - 2)

then take the square root to get the pooled sd

if you have more than two groups ... just follow the same pattern
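
a short python sketch of that pattern for any number of groups (standard 
library only; the two groups are made-up numbers):

from statistics import variance     # the n-1 (sample) variance
from math import sqrt

def pooled_sd(*groups):
    num = sum((len(g) - 1) * variance(g) for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return sqrt(num / den)

g1 = [12, 15, 14, 10, 13]
g2 = [22, 25, 19, 24, 26, 21]
print(pooled_sd(g1, g2))

one caveat: this pooled value reflects the within-group spread only ... if the 
group means differ much, the sd of everything stacked into a single column will 
generally come out larger than this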






Re: Question

2001-05-10 Thread dennis roberts

this is not unlike having scores for students in a class ... one score for 
each student and ... the age of the teacher of THOSE students ... within a 
class, scores will vary but the teacher's age stays the same ... yet the age 
might be different in ANOTHER class with a different teacher ... in a sense, 
the age is like a mean ... just like your turnover rate ... and you want to 
know the relationship between student scores and teachers' ages

something has to give

i think you have to reduce the data points on X2 ... find the mean on X2 
within organization 1 ... then pair it with the .40 ... the second data pair 
would be the mean on X2 for organization 2 ... paired with .25 ... etc.

so, in this case ... you have 4 values on X2 and 4 values on Y ... so, what 
is the relationship between those??

look at the following:


  Row C7 C8

1   0.72   0.40
2   1.15   0.25
3   0.90   0.30
4   0.60   0.50

MTB > plot c8 c7

[character scatterplot of C8 (turnover rate, vertical axis) versus C7 (mean 
percent of market salary, horizontal axis) omitted ... the four points fall 
steadily from upper left to lower right]
Correlations: C7, C8


Pearson correlation of C7 and C8 = -0.957
P-Value = 0.043

there might be a better way to do it but ... it looks like a pretty clear case 
of: the greater the % of market the organization pays ... the lower its 
turnover rate
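
the same aggregation in a few lines of python/pandas (the column names here are 
just made up):

import pandas as pd
import numpy as np

df = pd.DataFrame({
    "org":      [1, 1, 1, 2, 2, 3, 4, 4, 4],
    "pct_mkt":  [0.70, 0.80, 0.65, 1.20, 1.10, 0.90, 0.50, 0.60, 0.70],
    "turnover": [0.40, 0.40, 0.40, 0.25, 0.25, 0.30, 0.50, 0.50, 0.50],
})

# one row per organization: mean percent-of-market pay, plus the (constant) turnover rate
by_org = df.groupby("org").agg(pct_mkt=("pct_mkt", "mean"),
                               turnover=("turnover", "first"))
print(by_org)

# correlation across the 4 organizations ... should reproduce the -0.957 shown above
print(np.corrcoef(by_org["pct_mkt"], by_org["turnover"])[0, 1])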


At 06:05 PM 5/10/01 -0400, Magill, Brett wrote:
>A colleague has a data set with a structure like the one below:
>
>ID  X1  X2    Y
>1   1   0.70  0.40
>2   1   0.80  0.40
>3   1   0.65  0.40
>4   2   1.20  0.25
>5   2   1.10  0.25
>6   3   0.90  0.30
>7   4   0.50  0.50
>8   4   0.60  0.50
>9   4   0.70  0.50
>
>Where X1 is the organization.  X2 is the percent of market salary an
>employee within the organization is paid--i.e. ID 1 makes 70% of the market
>salary for their position and the local economy.  And Y is the annual
>overall turnover rate in the organization, so it is constant across
>individuals within the organization.  There are different numbers of
>employee salaries measured within each organization. The goal is to assess
>the relationship between employee salary (as percent of market salary for
>their position and location) and overall organizational turnover rates.






Re:

2001-05-04 Thread dennis roberts

i don't want folks to think i am against research ... i am not. but, i do 
honestly think that we do too much of it ... we force too much to be done 
... we force "publishing" and, the only real criterion is ... were you able 
to get it published (in a decent outlet of course)? not only that ... most 
of the rewards in academe come from doing this ... and racking up the 
tallies for pubs ... that is ... the balance has tipped FAR too much to the 
side of perks for pubs ...

good scholarship is more than that

social science research has a big barrier to hurdle ... and that is ... 
what is the most important impact any of it can have? ... in general, the 
answer to this is very limited ... at best. so, in this context,  we don't 
need more piecemeal projects ... research tidbits so to speak ... we need:

1. better definitions of what is valuable to do ... and what is not
2. projects that go on for longer periods of time ... to look at sustained 
effects
3. groups of students/faculty/researchers working TOGETHER, even at the 
dissertation stage  ... on larger projects that have more potential for impact
4. as mundane as it may be, given #1, we need more replication studies ... 
and not think that every new study has to break new ground
5. we need much more training in methodology ... broadly speaking ... (in a 
time where there seems to be so many efforts being made to reduce such 
training)

and on and on

and in the area of journal editorial policies ... perhaps we need to think 
about a trained cadre of PROFESSIONAL reviewers ... who get paid for their 
professional efforts

and, to make that editorial job easier to carry out, i would suggest 
(unless there is some overriding issue of huge importance) ... FORBIDDING 
anyone from submitting more than one GOOD paper a year ... (not trying 
to insist that they submit and publish more)

anyone attending the aera meeting this year in seattle ... and lugging 
around the program from hotel to hotel ... will remember that it was at 
least 3/4" thick (maybe closer to 1") ... crammed to the hilt with sessions 
of research papers ... on narrow topics ...

i think it is time for our profession to take a long hard look at this 
"volume" of activity ... and see if we can't come to some agreement about 
far FEWER areas that we should do research in ... that's right ... cast off 
many that have not and will not lead us to anything important ... and 
concentrate our resources in a more comprehensive way in more limited areas 
... with more people working together in a sustained effort ... over longer 
periods of time and THEN think about putting together a monograph ... 
summarizing what one did (the team that is) ... and what one found ... and 
what the real import of all this is

some will say well, how do you KNOW that something won't be important ... 
down the road?

we know ... trust me ... we know

however, we seem not of a mind to say ... research in that and this area 
... is priority ... and these other areas ... no dice

At 09:44 AM 5/4/01 -0700, Carl Huberty wrote:
>  Why do articles appear in print when study methods, analyses, 
> results, and conclusions are somewhat faulty?  [This may be considered as 
> a follow-up to an earlier edstat interchange.]  My first, and perhaps 
> overly critical, response  is that the editorial practices are faulty.  I 
> don't find Dennis Roberts' "reasons" in his 27 Apr message too 
> satisfying.  I regularly have students write critiques of articles in 
> their respective areas of study.  And I discover many, many, ... errors 
> in reporting.  I often ask myself, WHY?  I can think of two reasons: 1) 
> journal editors can not or do not send manuscripts to reviewers with 
> statistical analysis expertise; and 2) manuscript originators do not 
> regularly seek methodologists as co-authors.  Which is more prevalent?
>  For whatever it is worth ...
>
>Carl Huberty

==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re:

2001-05-04 Thread dennis roberts

At 09:44 AM 5/4/01 -0700, Carl Huberty wrote:
>  Why do articles appear in print when study methods, analyses, 
> results, and conclusions are somewhat faulty?  [This may be considered as 
> a follow-up to an earlier edstat interchange.]  My first, and perhaps 
> overly critical, response  is that the editorial practices are faulty.  I 
> don't find Dennis Roberts' "reasons" in his 27 Apr message too satisfying.

i was not satisfied with my own list either but, these are reasons why 
screw ups do occur

>  I regularly have students write critiques of articles in their 
> respective areas of study.  And I discover many, many, ... errors in 
> reporting.  I often ask myself, WHY?  I can think of two reasons: 1) 
> journal editors can not or do not send manuscripts to reviewers with 
> statistical analysis expertise;

unfortunately ... an editor has to beg sometimes to get reviewers and, 
sometimes ... beggars can't be choosers ... this is the reality of journal 
article submission reviewing ...
in addition ... a paper about, say, topic A ... has both content and 
methods ... and, you cannot always find a person with skills in both 
... so, what are you to do? you have to get 2 or 3 people to AGREE to review a 
paper ... and, we know that they are not all in tune with the same things 
... thus, one might focus on the methods/data ... another might focus on 
the content theme ...

>and 2) manuscript originators do not regularly seek methodologists as 
>co-authors.

well, put yourself in the place of an untenured faculty member ... trying 
to get HIS/HER name as sole author on sufficient stuff ... they try to do it 
without a co-author because you get more P and T points



>  Which is more prevalent?
>  For whatever it is worth ...


let's put all of this in the proper perspective ... there is just FAR too 
much emphasis on getting papers submitted and published (especially in the 
social sciences ... we are NOT medicine, where miraculous breakthroughs DO 
happen) ... the editorial load is too great for the resources at hand (free 
... to boot!) ... so much of the stuff we do in the name of scholarship is 
really ... on the fringe of quality and usefulness ... but, we put more 
and more pressure on faculty to be "part of the game"

when will we wise up? we need LESS stuff done, but what's done should be of 
better quality over longer periods of time ... and of greater potential 
import ...

if we pick up say most of the good journals in our field ... and honestly 
read papers and ask ourselves ... does this really matter? is this really 
important?

if we are honest ... i would bet at least 50%-75% ... would be rated NO

but, it goes on your VITA ... guess that is what counts, right?

>
>Carl Huberty

==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






old fangled technology

2001-05-03 Thread dennis roberts

A friend of mine sent me the following and, I decided to scan and post. 
These relate to old interpretations of NEW technology terms like ... modem, 
mega hertz, and the like. Some of these are a HOOT!
It's best to follow the links in order ... some frames follow after others.
I HAVE THIS FEELING THAT I HAVE SEEN THIS BEFORE ... BUT, SOME QUICK 
SEARCHING FAILED TO FIND ANY SOURCE. IF ANYONE KNOWS THE SOURCE OF THESE 
FUNNIES, PLEASE LET ME KNOW SO I CAN GIVE RIGHTFUL CREDIT.


http://roberts.ed.psu.edu/users/droberts/mtbcommands/OldTech.htm

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: probability and repeats

2001-05-01 Thread dennis roberts

let's see ... there is a population of 3 different items ... boxes, people, 
colors, whatever ... right?

it appears to me that the problem you are presenting is the following:

let's say you take 3 elements, at random and WITH REPLACEMENT, from a 
population (with 3 distinct elements)

you take one ... record it ... put it back, take another ... put it back 
... take the 3rd

clearly, IF this is the case you are referring to, then ANY of the elements 
could come up on ANY of the 3 draws

what are the distinct sets of 3 under this sampling plan?

now, i would say that this means ORDER is NOT relevant ... that is, if you get 
1 then 3 then 2, that is identical, as a set, to getting 2, then 1, then 3

i think your enumeration below hits the possibilities ...

but, the usual combinations formula seems not to work since the universe 
is equal to the number in your sample ... it's not like having 10 things 
taken 3 at a time ... 
you have 3 things and want a sample of 3 ...
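
if it helps, the distinct unordered samples can be listed directly with the 
python standard library (this just enumerates the possibilities, it is not the 
usual n-choose-k formula):

from itertools import combinations_with_replacement

sets = list(combinations_with_replacement([1, 2, 3], 3))
for s in sets:
    print(s)
print(len(sets))    # 10 distinct sets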








Re: errors in journal articles

2001-05-01 Thread dennis roberts



the notion of being able to fix errors in manuscripts that have NOT yet 
been published is one thing ... but, the ability to correct glaring errors 
in manuscripts already PUBLISHED is quite a different story. i have a paper 
(that i can't find at the moment ... from either chemistry or physiology, i 
think) describing a rather famous case where a researcher desperately tried to 
get errors in a published paper corrected ... the amazing saga he went through 
trying to do it (i don't recall if he was ever successful ... i don't think so) 
... and the huge resistance put up by the journal ... which could have been a 
quality publication like science ...

sometimes, when something is cast in stone, as a published paper is (more 
or less) ... it can be nearly impossible to fix mistakes, even if they are 
important






Re: errors in journal articles

2001-04-27 Thread dennis roberts

even in the best journals, you will find crap ... or, serious mistakes ...

consider the following:

1. editors don't always have an easy time finding appropriate reviewers to 
review papers
2. reviewing papers (generally speaking) is a gratis activity ...
3. reviews are done usually in one's spare time (whatever "spare" time means)
4. different reviewers look for different things
5. reviews generally are done rather fast ... given #2 ... and things are 
missed
6. a reviewer might be good in the content of the paper but, still might 
not be a stat whiz
7. you can't expect a reviewer to recheck all the calculations, and all the 
details ... usually, when errors are found ... it is because they just happen 
to pop out at the reviewer
8. too many papers have too much data ... easy to miss something


At 03:59 PM 4/27/01 -0400, Lise DeShea wrote:
>List Members:
>
>I teach statistics and experimental design at the University of Kentucky, 
>and I give  journal articles to my students occasionally with instructions 
>to identify what kind of research was conducted, what the independent and 
>dependent variables were, etc.  For my advanced class, I ask them to 
>identify anything that the researcher did incorrectly.
>
>As an example, there was an article in a recent issue of an APA journal 
>where the researchers randomly assigned participants to one of six 
>conditions in a 2x3 factorial design.  The N wouldn't allow equal cell 
>sizes, and the reported df exceeded N.  Yet the article said the 
>researchers ran a two-way fixed-effects ANOVA.
>
>One of my students wrote on her homework, "It is especially hard to know 
>when you are doing something wrong when journals allow bad examples of 
>research to be published on a regular basis."
>
>I'd like to hear what other list members think about this problem and 
>whether there are solutions that would not alienate journal editors.  (As 
>a relative new assistant professor, I can't do that or I'll never get 
>published, I'll be denied tenure, and I'll have to go out on the street 
>corners with a sign that says, "Will Analyze Data For Food.")
>
>Cheers.
>Lise
>~~~
>Lise DeShea, Ph.D.
>Assistant Professor
>Educational and Counseling Psychology Department
>University of Kentucky
>245 Dickey Hall
>Lexington KY 40506
>Email:  [EMAIL PROTECTED]
>Phone:  (859) 257-9884
>
>
>

==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: crib sheets

2001-04-26 Thread dennis roberts

At 09:29 PM 4/26/01 -0500, Christopher Tong wrote:
>On 26 Apr 2001, dennis roberts wrote:
>
> > i would put a different spin on this ... if students use the crib sheet
> > (which i let them have too) AND have to depend on it TO remember important
> > formulas/definitions ... then this works against them since, they will 
> then
> > be spending time on "consulting" their card ... and hoping to find
> > something ... when they could be using that time to work on other problems
> > or items ... or give more time to problems that are a bit more complex
>
>That is not true.  In contrast to the open book exam, a crib sheet
>forces the student to first organize and digest the material being tested,
>and then select those items/formulas which he does not want to waste time
>memorizing.

a crib sheet does not force the student to do anything other than put some 
things down on a notecard ... WHAT they put down, WHY they selected what 
they did, ... are unknowns and, vary from student to student. the reality 
is that we don't know about this ... THAT's why i expanded a bit about an 
interest in trying to find out

some might put down what they know they will NOT remember ... whether they 
tried to learn it before or not
some might put down highly idiosyncratic things ... that would make no 
particular sense to us
most would put down formulas ... EVEN if they know them backwards and 
forwards ... they THINK they need to have them there



>If that is the case, the student treats the crib sheet
>the same way that a scientist treats the CRC Handbook of Physics and
>Chemistry, kept within arm's reach.  The Handbook gives you the details so
>you get them right, but it is up to you to understand the
>underlying concepts and the overall organization of the body of knowledge
>in question.  If a student takes this approach,


IF, IF ... they take that approach ... but do they? that is the question. 
some are very systematic about this ... actually planning what they want 
... others slap them together at the last moment ...

we are not talking about what could be ... but what is and, generally 
speaking, i stick to my guns in thinking that IF a student has to consult 
the crib sheet too often ... they are losing time and, are groping for help 
...

>The process of organizing the material and boiling it down to a card
>or a summary is, arguably, more valuable for learning than the actual
>exam itself, when done right. <<<<< WHEN DONE RIGHT




>  That is because the exam can only test
>a cross-section of material and understanding, whereas producing a good
>summary of the entire course is quite an instructive project.

a student who makes a crib sheet, if allowed, will only put down stuff 
he/she THINKS will be needed on THAT test ...





==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: crib sheets

2001-04-26 Thread dennis roberts

each time you use your word processor, you get better AT using it AND its 
features
same holds for a decent stat package
the more students practice with it, the better they get with it
and the instructor should help them in this
minitab for example, has tried in recent releases to enhance the amount of 
online help it gives ... which is good ... NOT just about how to use 
minitab but, more info about the procedures being completed ...
personally, i think (and i know many will disagree) that any instructor who does 
NOT require students in the first course to learn SOME package ... to find their 
way around it and how to use its basic features ... is NOT helping them learn 
how to do analysis the way data analysis is done in the real world








old calculators

2001-04-26 Thread dennis roberts

hope you find something that brings back a memory or two (or three) at 
these calculator sites ... some great pics ... old electronics and 
mechanical monsters ... i can still hear the clackity clack of the old 
marchants and monroes

http://www.geocities.com/SiliconValley/Park/7227/photo_tz.html

http://www.geocities.com/SiliconValley/Park/7227/links.html

you might even have one in a drawer or closet someplace!

the first one i had was a commodore ... with nixie tubes ... used D 
batteries ... had a memory key and a square root button!!! wow ... what 
luxury at $129

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: p- values Was: Re: Artifacts in stats: (Was Student's t vs. z tests)

2001-04-26 Thread dennis roberts

At 10:16 AM 4/26/01 -0500, Herman Rubin wrote:


>A p-value tells me nothing of importance.

i agree, if this means of practical importance and of benefit, say, to society

>  It is in no way
>a measure of strength of evidence.

are you saying p tells you nothing?






crib sheets

2001-04-26 Thread dennis roberts

At 09:58 AM 4/26/01 -0500, Herman Rubin wrote:

>For the important part, it is ALWAYS appropriate.  An
>argument against open book is that they spend too much
>time looking things up, but I always allow crib sheets.
>This way they know that they will get no credit for
>memorizing definitions and formulas.


i would put a different spin on this ... if students use the crib sheet 
(which i let them have too) AND have to depend on it TO remember important 
formulas/definitions ... then this works against them since, they will then 
be spending time on "consulting" their card ... and hoping to find 
something ... when they could be using that time to work on other problems 
or items ... or give more time to problems that are a bit more complex

crib sheets are like the college degree that some athletes get (so they 
say) ... it is a fall back position ...

the allowance to use or not ... and the benefit from use, if crib sheet use 
is allowed ... is an interesting area of inquiry that has essentially been 
ignored in the literature ...

i hypothesize that ... crib sheet use CAN have a + impact NOW and THEN ... 
but, it is essentially a random effect ... and, if it does help ... the 
help will be minimal for any given test

i think that more often than not, it mainly "eases" one's mind ...

but, it can have a down side too ... if one spends too much time on MAKING 
a crib sheet and not enough time on understanding the content ... then over 
reliance on the use of a card can be detrimental

in any case, it would make for some interesting data fodder to have a close 
look at such things as:

1. what is ON crib sheets ... and relate types of content on cards TO test 
performance
2. look at how OFTEN students actually access their cards
3. look at how much TIME is spent looking at their cards compared to total 
test time
4. do some comparisons (nice highly controlled experiment of course) 
between classes where crib sheet use is or is not allowed ... and how use 
changes (if any) what they do to prepare for tests ...


i know in my classes, when i just casually observe students working on 
tests and using their cards ... it is interesting ...








experience and understanding

2001-04-26 Thread dennis roberts

some things in statistics one learns to understand as they gain experience 
hands on ...

for example, one can over time ... become rather proficient in using some 
software ... so as to easily do analysis for oneself ... or for helping others

some principles can be "learned" by doing ... for example, even with the 
CLT ... one can get a rather good feel for what is going on via various 
simulations

there are some cases when a derivation can teach you something ...

getting the hang of how formulas work can be greatly facilitated by doing 
many ... and seeing what different kinds of data DO when you do calculations

BUT, there are many things ... that experience seems to have NO impact on 
whatsoever ... nor can it

for example ... just the notion of p values and what they mean ... i see no 
way that any amount of hands on experience CAN increase one's understanding 
of what these mean ... statistical significance is not a concept that one 
becomes more familiar with ... understands more deeply ... as the number of 
significance tests you do increases

here is a concept that you take on faith ... someone TELLS you what it 
means ... you READ in a book about the interpretation of it ...

now, some might say that one could simulate populations ... sampling 
distributions ... and set cut offs and given that you KNOW the null value, 
see how often you reject the null using that CV ... BUT, that still does 
not give you a feel for what p means with respect to evidence (that the p 
value is supposed to yield) AGAINST THE NULL
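
for completeness, here is roughly what such a simulation looks like in python 
(numpy assumed) ... it shows the long-run behavior of the cutoff when the null 
is true, which, as noted above, is still not the same thing as a feel for what 
a single p value means:

import numpy as np

rng = np.random.default_rng(0)
mu0, sigma, n, reps = 100, 15, 25, 10_000

rejections = 0
for _ in range(reps):
    sample = rng.normal(mu0, sigma, n)
    z = (sample.mean() - mu0) / (sigma / np.sqrt(n))
    if abs(z) > 1.96:
        rejections += 1

print(rejections / reps)    # close to 0.05, the nominal alpha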

so, while i am a firm advocate that one learns by doing ... and the more 
practice the better ... there are some concepts for which practice makes NO 
difference whatsoever ... not in learning their fundamental meaning, that is

_________
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






understanding and reading

2001-04-25 Thread dennis roberts

robert dawson has tweaked our imaginations about what might be done with 
some "group" of students (in psychology for example) who might not be 
research doer material ... but, who still would benefit from some kind of 
course or exposure that would help them READ psychology literature that has 
a research base to it ...

the implication of this notion is that somehow ... simply understanding what 
is going on IN a research-based psychology paper (being able to "grasp its 
basics") is cognitively different from ... and less complex than ... knowing 
how to do the analysis PLUS interpret it ... as would be the case if you 
happened to actually DO some study

well, i wonder about that

one of the primary problems with reading ANY paper is that ... a paper is a 
multi part message ... part of it is literature ... part of it is 
formulating a worthwhile and DOABLE problem ... part of it is design and 
data collection ... part of it is analysis ... part of it is interpretation 
... and part of it (perhaps hardest of all) is the "so what ... what can we 
make of all this?"

understanding a paper that has "research" in it is NOT just analysis ... in 
fact, in most cases ... that is the least of the problems. creating a 
poorly defined problem, using poor measures, doing a poor job of getting Ss 
into different conditions, failing to control the treatment across the time 
of the experiment,   and not knowing how to factor in all these 
difficulties when READING and interpreting the data ... are worse than 
knowing or not knowing what the t test means (for example).

thus, understanding in the context of reading some research based paper ... 
REQUIRES a multidimensional set of skills ... and many eyes in the back of 
one's head to spot problems ... and know when something is being done well 
... or royally messed up. IN MANY CASES ... NICE VERBIAGE MASKS THESE TWO 
POSSIBLE OUTCOMES!

so, what can one do IF one accepts this point of view? is it possible to 
have A course ... that revs one up to read papers BETTER ... without any 
prerequisite work? I DON'T THINK SO

learning these "reading" skills takes practice and experience OVER time ... 
experiencing what goes on IN the process of doing some study (even if 
small) ... learning what can and will go wrong ... learning how to deal 
with that ... learning how  investigations done by others fit into this 
current data collection and analysis effort ... and, gradually, building 
one's repertoire of skills and understandings. the more you do this, the 
quicker one is able to "spot" something that went awry (or went good!) in a 
paper one reads

thus, i suggest that unless students come into a course that is "designed" 
to help them read the literature better already having some skills ("some" 
basic savvy in measurement, analysis, design, etc.) ... the attempt to make 
them read more literately ... will fail ... or fall woefully short of what we 
are hoping will happen

whether students will ever want to or actually do research down the line, 
is a totally irrelevant matter  ... the failure to TRY some ... and see 
what happens ... will be our Achilles heel ...

back in undergrad school, we had a two semester sequence in psychology 
called experimental design and methodology ... that blended small projects 
(becoming increasingly complex) with analysis and write up ... that seemed 
to work VERY well ... we had to look at some relevant literature for each 
project ... think about the design we were going to use ... work out a plan 
to collect data and analyze it ... and then try to summarize all that 
activity to convince the instructor that we learned something of value

we seem NOW to be on the fast track of trying to allow students to AVOID 
this ... and seem to think we can figure out some alternative that will 
give them the same general level of "understanding" ... so they can cope 
with articles and papers ... even though they might not want to do research 
later

in my view, this is a very bad approach and fundamentally flawed

the very thing that makes for understanding is the DOING ... without the 
doing ... houston ... we have a problem











_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






critiquing research

2001-04-25 Thread dennis roberts

many moons ago, a colleague and i put together a course called "critiquing 
educational research" ... which sounds, in part, something like what robert 
has been circling around

now, the purpose of the course was to be better able to look at research 
that is in one's discipline ... and look at it with a somewhat more 
critical eye

grad students were in the course (that helped because all were going to 
have to do some research!)  and, though i don't think it is enough, there 
was a single prerequisite of having at least ONE full (not in the canadian 
sense) course in statistics ... more would be nice but, a minimum of 1


the first thing students had to do was to bring US 3 articles from THEIR 
disciplines (all published in journals that they looked at on a rather 
regular basis) ... and we then selected one for that student to use later 
... (we did this as a review so we could exert some control and not have, 
say, ten 3-group experiments ... we tried to have a mix of kinds of studies, 
and also not to have papers with such complex design/analysis methods that 
it would be impossible to discuss them in the class)

the course was divided up into 3 main blocks ... not necessarily = length

1. my colleague and i, presented some overview materials on design, notions 
of internal and external validity, reviewed a bit about measurement issues, 
and things like this

2. the second part is where my colleague and myself shared in presenting a 
critiquing model ... ie, how to go about it ... and we modelled that by 
doing two studies that WE found

3. the last part focused on student presentations ... usually about 2 per 
night ... where the student gave a small summary (and a short handout to 
give to each class member) critique of what their study was, what was done 
in it, what was found, + and - features ...

now, for each of the #3 presentations ... we had developed a rating scale 
that we used as a class ... where a scale of 1 to 10 was implemented ... 
with 10 being superb ... !!! down to 1 which meant that the journal should 
be contacted and FORCED to retroactively locate and destroy every copy of 
that paper that was published!! (we thought it was THAT bad!)

overall, we liked what happened in the course ... and we think students 
benefited

however, even with the control we exerted on the paper selection, there 
were examples where the type of analysis used in the study was way beyond 
the prerequisite statistical skill we had demanded, and we had no way to 
discuss it satisfactorily in the course

in addition, we found that in some cases, lack of some measurement skill on 
the part of students kept us from pursuing in any detail ... problems in 
some papers related to advanced measurement matters

of course, my colleague and i were NOT content experts in all the 
disciplines represented by papers used by the students ... and what might 
be good noise variables to control for in one discipline and study, may 
have NO relevance whatsoever in another area

and there were a variety of other problems within the confines of this course

while we "think" that the course helped students, the fact that there was 
not some higher level of common methodological skill across students, ON 
ENTRY INTO THE COURSE, greatly limited how far we could go and WHAT we could cover

and, this is what i see as a basic fundamental problem one has to face IF 
one would want to develop a "robert like" course where emphasis is on 
reading papers ... and understanding them ... with no prior skill development

i also find this same problem to carry over to what i call intro research 
methods courses ... that want to cover the territory in one course ... when 
there are essentially no prerequisite skills attached to entry ...

_________
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






compartments

2001-04-25 Thread dennis roberts

the difficulty in discussing new courses and other issues is that ... 
academe is a compartment system. most institutions have what is labelled as 
general education ... so that, it is assumed that it is GOOD for an 
undergraduate to have some from the science compartment, some from the 
quantitative compartment, some from the humanities compartment, so on and 
so forth. in many cases, this work is done before one declares the major. 
BUT, when we get to the major, we find more compartments ... in fact, more 
specific compartments ... in psychology for example, there is the 
personality compartment, motivation compartment, learning compartment, and 
so on

then folks who are courageous might actually move to the graduate level 
and, guess what? MORE COMPARTMENTS AND MORE SPECIFICITY within each ... we 
have educational psychology and, there is the statistics compartment, the 
measurement compartment, cognitive learning compartment, and so on.

this is how we have structured ourselves ... and this is how we act. and we 
cannot break out of  that mold.

in the area of research, the ideal approach would be to start off a cohort 
group ... and, begin real simple. say ... we design a very VERY simple 
survey ... a few demographics ... do some piloting to see that it makes 
sense to takers ... then begin to talk about how we might work with the 
data once we get some ... we write up what we did, what we found, and 
limitations to what has transpired

then, we move up a notch ... perhaps work on a scale of some sort ... like 
an attitude scale ... work on the notion of developing items to measure 
some underlying construct ... actually construct some items ... do some 
pilot work ... see what happens ... and introduce some notions of 
reliability ... what it is ... how it is assessed ... how we can improve it 
...

and perhaps bring in some notions of validation too ... how scores on this 
measure might relate to other variables of interest ... we offer up some 
hypotheses about what should be related to what ... and when we gather 
some data ... we again come back to how we might handle the data ... 
perhaps bringing in the notion of correlation ... simple regression ... 
and the like

and we write up the results ... say what we did ... how we handled the data 
... what the problems were ... and try to summarize what we found

then, we might turn to a simple experimental situation ... where we think 
of some useful independent variable to explore and manipulate ...  talk 
about how do design and implement such a study ... how we recruit and 
assign Ss to conditions ... collect data .. and then approach how we might 
handle data of this sort ... maybe anova gets some air time ... then we 
write up the results ... say what we did ... tell what problems we ran into 
... and summarize what we found

in the long run, over several semesters ... we build up a good basket of 
skills THROUGH EXPERIENCING the acts ... we learn by doing ... discussing 
... summarizing ... and then moving up the ladder of complexity

but, this approach ... is almost impossible to implement within standard 
university settings ... whether it be for general education ... for work in 
the major ... or for graduate study BECAUSE ... our instruction and methods 
have been SO COMPARTMENTALIZED ... and usually, faculty are only really 
competent to teach in one maybe two of these subdivisions ...

the only practical way to do this would be for ONE entire department ... 
that has complete control over THEIR say 200 students ... could revamp what 
they do and what their students take ...

but, this is a pipe dream ... and it is a super pipe dream if you happen to 
be a department that is expected to provide overall SERVICE COURSES ... for 
those outside of your OWN group of students

so, back to the main issue ... trying to have a survey course ... in whatever

such approaches cover the waterfront ... FAST ... with no depth ... and that 
seems to be the way programs want it nowadays ... especially when a student 
ventures outside of his or her COMPARTMENT ...

so, do i think that a book or course can be designed in a way that will 
focus on READING AND INTERPRETING articles and research reports? well, sure 
... but, if the students don't have the PREREQUISITE SKILLS in analysis, 
measurement, design, etc. ... then, it is bound to be a watered down and 
rather unsuccessful experience ... and ultimately, does NOT serve the 
student well



_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm






Re: Artifacts in stats: (Was Student's t vs. z tests)

2001-04-25 Thread dennis roberts



as for the use of t tables ... or any other ...

1. one issue is whether the student can USE the table ... that is, you specify 
some entry from the table and you want to know if they can find it

2. another issue is what the student knows about what happens in the table 
as df changes
3. another issue is whether the instructor, when wanting to pose a t problem, 
HAS to have the entire table there ... why not just put a few selected 
values ... some right, some wrong ... that should be sufficient
4. there ARE ways to have a t table large enough to be seen by a whole 
class ... sensible sized class that is
5. there is always the situation of knowing that a t of approximately 2 
will get you results that are close (see the short sketch below)
6. tell em to bring in a 3 by 5 card ... i have done it for years ... and 
tell em to put anything on it they want ... they might put a few CVs on it 
... as guidelines
7. #1 is not the same as seeing if a student can work through a t interval 
problem and/or do a t test ... yes, that does involve a t table value but, 
much more too

personally, i don't see what the big deal is in this regard
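
for what it is worth, the "approximately 2" point in item 5 is easy to see with 
a few two-tailed 5% critical values (a python sketch, scipy assumed):

from scipy.stats import t, norm

for df in (5, 10, 20, 30, 60, 120):
    print(df, round(t.ppf(0.975, df), 3))     # 2.571, 2.228, 2.086, 2.042, 2.000, 1.980
print("normal:", round(norm.ppf(0.975), 3))   # 1.96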







RE: ways of grading group participation

2001-04-24 Thread dennis roberts

At 05:35 PM 4/24/01 -0500, Simon, Steve, PhD wrote:

steve has pointed out what are, undoubtedly, useful references for us to examine ...

my main point is this: while there are many many activities that are and 
can be done in groups ... and we need to train to participate in such ... 
there are many many (if not more) activities that demand that i ... the 
individual ... take hold of my own knowledge ... expand it ... and act ...

i teach statistics ... and, in many instances down the road ... students 
will be part of some small team ... expected to contribute to some group 
goal ... whether developed by the team itself ... or, forced on them by 
"the boss" ...

but, i think the far greater activity will be when the individual reads a 
paper and has to get something out of it ... or sits down to work on some 
small analysis ... or has to explain (if he or she happens to be a prof, by 
darn!) to a student ... what the concept of a sampling distribution means 
... where you are on your own devices ... to act and accomplish

more acts in human behavior are done at the individual level

BOTH of these are important activities, however, and both deserve adequate 
training

even in groups, competition is not void ... since, in many instances, 
DELIBERATELY ... groups are pitted against one another ... or, we find a 
group member who (though silent on this) wants to be the best contributor 
... or the one to find the solution FIRST ...

sure, for individuals ... who "compete" for limited job openings ... 
college slot openings ... limited ticket availabilities for the "hot" act 
coming to town ...

for good or for bad ... competition is here to stay ... and impacts on 
group and individual actions ...

the problem we face is how to keep it in balance ... how to use it 
productively (and not cause ulcers) ...

getting rid of competition is impossible ...

now, what this has to do with how we "grade" group activities ... i am not 
sure ... 



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: ways of grading group participation

2001-04-24 Thread dennis roberts



of course, if one views group work to be important and, wants a good model 
for such behavior, it is not academe for sure! while there is lots of group 
activity that goes on, committee meetings off the chart,  ... by and large 
... when it comes to making decisions about faculty, staff, administrators 
... we gather the evidence that these people AS INDIVIDUALS HAVE PERFORMED 
... in some way ... to make decisions about them

if ever there is a competitive model ... NON sharing ... almost cut-throat 
at times to KEEP one's knowledge to one's self ... being afraid that 
someone might actually STEAL it and gain reputation ... it is in the 
academic environment

what a shame ...

the way you get ahead is NOT to be a member of a group ... to refuse (if 
you can get away with it) group assignments ... to hide away in some non 
findable place ... write and publish like time was running out ... and make 
sure YOUR VITA is long ... single authored (for the most part) ... that is, 
boost up YOUR personal stock

the ivory tower? great eh? 



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: ways of grading group participation

2001-04-24 Thread dennis roberts

At 01:53 PM 4/24/01 -0400, [EMAIL PROTECTED] wrote:


>All through school (elementary, junior and senior high as well as in some
>undergraduate college courses) we tend to discourage competition.

say WHAT??? i would say it is JUST THE OPPOSITE ... do the best you can to 
get ahead of the next fellow or gal ... is that not the mode? compete 
compete compete ... do better than others if you can SINCE, down the road 
... it is assumed that OTHERS (college admission officers for example) will 
VALUE that

your reply to college officers and employers ... "well, most of the things 
i did i did in groups" ... will NOT get you very far ... since they will 
ask: what can YOU do? 



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



inference

2001-04-24 Thread dennis roberts

well, we have been having some continuing discussions about z and t and 
binomials, etc. ...

now, our group of edstaters is a varied bunch ... but, i would like to hear 
from some about what 2/3 settings YOU think are good examples to start 
an intro class on IF the NEW topic is statistical inference ...

the assumption is that UP to this point in the course, there has been NO 
discussion of the desire to generalize from a sample to a population ... 
or, no discussion about the desire to be able to estimate some parameter 
... and the like

thus, it is ALL new and ... where do we start?

it would really be nice if we could come to some agreement (highly unlikely 
i know) about a half dozen examples ... that we would all feel are good 
places to start ...

at this point ... don't worry about WHERE these might lead ... they are only 
suggested as starting points

AND, having requested that ... also some thoughts about what would be the 
2/3 KEY notions one would be attempting to inculcate in students ... with 
these examples

thanks

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: ways of grading group participation

2001-04-24 Thread dennis roberts

the way this is usually done is to assign everyone the same grade ... and 
THERE is the rub

i am totally surprised that it has taken you 20 years to encounter this 
problem ... i would say you have been a mighty LUCKY person!

we have to distinguish between the goal of the group project ... and the 
grade given to the members of the group ... why is it that the assumption 
seems to be that all get the same grade? i don't see any necessary 
connection between one and the other

the typical pattern in a group ... when all are given the same grade ... is 
for one or more to pick up the slack of one or more who, don't feel they 
need a B or A on this component of a course ... so, we have a "pick up the 
slack" group activity ...

this assumes of course that most of the group wants to do well ... but, if 
none of the group really cares that much ... then, there will be no picking 
up the slack

i think the best compromise is to try to determine the value of each 
member's contribution ... and weight that most ... but, then give some 
overall grade to the full project ... and give that somewhat less weight 
... and keep these two separate ...

by the way, what IS the main reason for having students work in groups?

1. impossible to get all the projects done that the instructor wants by 
assigning them to individuals?
2. it is the training on a cooperative effort to get a task done that 
would be difficult to do alone?
3. it is the cooperative effort that will make the overall results 
(product) BETTER than if a person did it alone?

i have been puzzled often at what the real goals are for assigning group 
projects ... and for sure, there is WIDE variation across disciplines for 
doing this ...

At 10:18 AM 4/24/01 -0500, EAKIN MARK E wrote:

>I have been assigning group projects for about 20 years and have been lucky
>enough (until this semester) to have few student complaints about their
>fellow groups members. This semester I have many, many problems with
>groups complaining about members not carrying their fair share.



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: FW: Student's t vs. z tests

2001-04-24 Thread dennis roberts


>
>I think that reading the scientific literature would disabuse one
>about the limited application of statistical significance.  My
>students tell me that learning about statistical inference
>greatly increases their capacity to read primary
>literature.  Perhaps it is different in your discipline.

but, you assume that this is a good thing ... i don't necessarily share 
that view


it is not different in my discipline ... and, therefore the same mistake 
is made here as in most others

most empirical literature depends highly on it ... in fact, a paper does not 
get IN to the literature unless one shows one or more cases of "statistical 
significance". however, most 'honest' statisticians will admit that the 
importance of statistical significance is HIGHLY OVERRATED ... and has very 
limited applications ... if one disputes this, then follow the wave that 
has been mushrooming for years (actually decades) to include confidence 
intervals where possible and/or effect sizes ... since rejecting the 
typical null hypothesis (at the heart of significance testing) leaves one 
in a DEAD-END alley.

so, if you are saying that your students are saying that they are in a much 
better position to understand the literature that is dominated by 
hypothesis testing ... F tests, z tests, t tests, and on and on ... that is 
great. but, of course ... their increased confidence is in something that 
is far FAR less important than we teach it or how we emphasize it when we 
disseminate it

when we have had extensive discussions about what the meaning of a p value 
is ... associated with the typical significance test ... i think it is fair 
to summarize (sort of by vote, the majority opinion) that the smaller the p 
(assuming the study is done well), the less plausible is the null hypothesis

personally, i like this view BUT, what does it really mean then? since in 
the typical case, we set up things hoping like the dickens to reject the 
null ... AND when we do, what can we say? let's assume that the null 
hypothesis is that the mean SAT M score in california is 500 ... and, in a 
decent study (moore and mccabe use this one), we reject the null. conclusion???

we don't think the mean SAT M score in california is 500 ... and we keep 
pressing because surely there has to be more that this? again ... we say 
... we don't think the mean SAT M score in california is 500 ... and, with 
a p value of .003 ... we are pretty darn sure of that.

but, the real question here is NOT what it isn't ... but WHAT it (might) is 
... and the desirable result of rejecting the null helps you NOT in any way 
... to answer the question ... that is the REAL question of interest

this is true in most all of significance testing ... doing what we hope ... 
ie, reject a null, leaves you hanging

most will be quick to point out that, well, you could build a CI to go along with 
that and/or ... present an effect size ...

sure, but what this means is that without this additional information, the 
hypothesis testing exercise has yielded essentially no useful information
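
to make that concrete ... here is a small sketch (python, and the n, mean 
and sd are MADE UP for illustration ... not moore and mccabe's actual data) 
for the SAT M case ... the test says only "probably not 500" ... the 
interval and the effect size at least say where the mean plausibly is and 
how far from 500 that is

# hypothetical SAT M sample ... all numbers invented for illustration
from math import sqrt
from scipy.stats import t

n, xbar, s, mu0 = 500, 487.0, 100.0, 500.0
se = s / sqrt(n)
tstat = (xbar - mu0) / se                  # about -2.91
p = 2 * t.sf(abs(tstat), n - 1)            # about .004 ... so, "not 500"
half = t.ppf(0.975, n - 1) * se
ci = (xbar - half, xbar + half)            # roughly 478 to 496 ... where it might BE
d = (xbar - mu0) / s                       # effect size ... about -.13 sd ... is that big?
print(tstat, p, ci, d)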

again ... if we help students to learn all about logic of hypothesis 
testing, and the right way to go about it ... AS a way to make sure they 
read literature correctly ... AND/OR be able to apply the correct methods 
in their own research ... all of this is great ...

BUT, it does not change the fact that this over reliance on and dominance 
of ... significance testing in the literature is misplaced effort ... and, 
i submit, a poor practice for students to emulate





=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Student's t vs. z tests

2001-04-22 Thread dennis roberts

At 05:15 PM 4/22/01 -0400, Rich Ulrich wrote:
>On 21 Apr 2001 13:04:55 -0700, [EMAIL PROTECTED] (Will Hopkins)
>wrote:
>
>So you guys are all giving advice about teaching statistics to
>psychology majors/ graduates, who have no aspirations or
>potential for being anything more than "consumers" (readers)
>of statistics?  Or (similar intent) to biomedical researchers?
>
>Don't researchers deserve to be shown a tad more?

rich, one problem is how much TIME do we have with students? by a light 
year, far more students who take ONE course (and that is the majority case) 
will be looking at articles and papers and not doing research ... even if 
they are in psy ... so, it is a quandry what to do ...

i wish we all had the luxury to have students in a sequence ... for several 
courses ... that would give one many options ...

such is not a luxury most who teach intro stat have ...

even in grad school, the trend is clearly to require less quantitative work 
... across many disciplines ... thus, even for those who are supposed to be 
in "higher education" partly for learning about doing research ... are 
having this downplayed more and more

what are your suggestions in this atmosphere of academe?





=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Student's t vs. z tests

2001-04-20 Thread dennis roberts

At 10:58 AM 4/20/01 -0500, jim clark wrote:


>  What does a t-distribution mean to a student who does not
>know what a binomial distribution is and how to calculate the
>probabilities, and who does not know what a normal distribution
>is and how to obtain the probabilities?

good question but, NONE of us have an answer to this ... i know of NO data 
that exists about going through various different "routes" and then 
assessing one's understanding at the end

no one has evidence who has commented today about this ... nor yesterday 
about this ... nor any member of this list

to say that we know that IF we want students to learn about and understand 
something about t and its applications ... one must:

1. do binomial first ...
2. then do normal
3. then do t

is mere speculation

without some kind of an experiment where we try various combinations and 
orderings ... and see what happens to student's understandings, we know not 
of what we assert (including me)


off the top of my head, i would say that one could learn a lot about a t 
distribution studying it ... are you suggesting that one could not learn 
about calculating probabilities within a t distribution without having 
worked and learned about calculating probabilities in a normal distribution?

as far as i know, the way students learn about calculating probabilities is 
NOT by any integrative process ... rather, they are shown a nice drawing of 
the normal curve, with lines up at -3 to +3 ... with values like .02, .14, 
.34 ... etc. within certain whole number boundaries under the curve, and 
then are shown tables on how to find areas (ps) for various kinds of 
problems (areas between points, below points, above points)

if there is something real high level and particularly intuitive about 
this, let me know. you make it sound like there is some magical "learning" 
here ... some INductive principle being established ... and, i don't see it

i don't see one whit of difference between this and ... showing some t 
distributions, giving them a table about areas under these, and having them 
find areas below points, above points, and between points ...
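
for whatever it is worth, the "mechanics" really are the same three lookups 
either way ... a little sketch (python here rather than a printed table, 
purely for illustration) of area below a point, area above a point, and 
area between two points ... once for the unit normal and once for a t with 
df = 18

# same three area lookups whether the reference distribution is z or t
from scipy.stats import norm, t

for dist, label in ((norm, "unit normal"), (t(18), "t with df = 18")):
    below = dist.cdf(-1.0)                     # area below -1.0
    above = dist.sf(2.0)                       # area above +2.0
    between = dist.cdf(1.0) - dist.cdf(-1.0)   # area between -1.0 and +1.0
    print(label, round(below, 3), round(above, 3), round(between, 3))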

now, going from binomial to the normal is a bit different ... going from a 
highly gappy binomial distribution to a smooth one ...
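
one way to see that jump ... compare the exact ("gappy") binomial upper 
tail with the smooth normal approximation ... (a python sketch, with n = 20 
and p = .5 picked only as a convenient example)

# exact binomial tail probabilities vs the continuity-corrected normal approximation
from math import sqrt
from scipy.stats import binom, norm

n, p = 20, 0.5
mu, sd = n * p, sqrt(n * p * (1 - p))
for k in (10, 12, 14, 16):
    exact = binom.sf(k - 1, n, p)        # P(X >= k) ... exact, but only defined at whole k
    approx = norm.sf(k - 0.5, mu, sd)    # smooth normal curve with continuity correction
    print(k, round(exact, 4), round(approx, 4))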

but i contend that one does NOT need to have experience in finding 
probabilities WITH the normal to fully understand what probability 
statements mean using the various t distributions ...

if someone wants to do binomial ... THEN move to normal ... THEN move to t 
because they like that sequence ... fine. but, please don't say that one 
MUST follow that sequence in order to know something about either a normal 
and/or a t

again, all of these pedagogic assertions are ONLY that ... assertions ... 
but, with no evidence behind them

unless one can cite a study or two on the matter?

>  In fact, what does the
>whole idea of a distribution in general and sampling distribution
>in particular mean for students when the basics are omitted?  It
>is far more important to give solid foundations in the
>entry-level course than to "make room" for more sophisticated
>tests that students will only vaguely understand.
>
>Best wishes
>Jim
>
>
>James M. Clark  (204) 786-9757
>Department of Psychology(204) 774-4134 Fax
>University of Winnipeg  4L05D
>Winnipeg, Manitoba  R3B 2E9 [EMAIL PROTECTED]
>CANADA  http://www.uwinnipeg.ca/~clark
>
>
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=====

==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Student's t vs. z tests

2001-04-20 Thread dennis roberts

nice note mike


>Impossible?  No.  Requiring a great deal of effort on the part of some
>cluster of folks?  Definitely!

absolutely!


>There is some discussion of this very possibility in Psychology, although
>I've yet to see evidence of fruition.  A very large part of the problem,
>in my mind, is breaking out of established stereotypes of what a Stats and
>Methods sequence should look like, and then finding the materials to
>support that vision.

i think it may ONLY be possible within a large unit that requires their 
students to take their methods courses ... design, testing, statistics, 
etc. i think it will be very hard for a unit that PROVIDES SUBSTANTIAL 
cross unit service courses ... to do this

for example, in our small edpsy program at penn state, most of the courses 
in research methods, measurement, and stat ... are for OTHERS ... even 
though our own students take most of them too. if we redesigned a sequence 
that would be more integrative ... for our own students, students from 
outside would NOT enroll for sure ... because they are looking for (or 
their advisors are) THE course in stat ... or THE course in research 
methods ... etc. they are not going to sit still for say a two/3 course 
sequence

>If I could find good materials that were designed specifically to support
>the integrated sequence, I might be able to get others to go along with
>it.

i think the more serious problem would be agreeing what should be contained 
in what course ... that is, the layout of this more integrative approach

if that could be done, i don't think it would be that hard to work on 
materials that fit the bill ... by having different faculty write some 
modules ... by finding good web links ... and, gathering a book of readings

what you want is NOT necessarily a BOOK that does it this way but, a MANUAL 
you have developed over time ... that accomplishes the goals of this approach

>It can be done, but it will require someone with more energy and force of
>will than I.

i doubt i have the energy either ...


>Mike
>
>***
>Michael M. Granaas
>Associate Professor[EMAIL PROTECTED]
>Department of Psychology
>University of South Dakota Phone: (605) 677-5295
>Vermillion, SD  57069  FAX:   (605) 677-6604
>***
>All views expressed are those of the author and do not necessarily
>reflect those of the University of South Dakota, or the South
>Dakota Board of Regents.

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Student's t vs. z tests

2001-04-20 Thread dennis roberts

alan and others ...

perhaps what my overall concern is ... and others have expressed this from 
time to time in varying ways ... is that

1. we tend to teach stat in a vacuum ...
2. and this is not good

the problem this creates is a disconnect from the question development 
phase, the measure development phase, the data collection phase, and THEN 
the analysis phase, and finally the "what do we make of it" phase.

this disconnect therefore means that ... in the context of our basic stat 
course(s) ... we more or less have to ASSUME that the data ARE good ... 
because if we did not, like you say ... we would go dig ditches ... at this 
point, we are not in much of a position to question the data too much 
since, whether it be in a book we are using or, some of our own data being 
used for illustrative examples ... there is NOTHING we can do about it at 
this stage.

it is not quite the same as when a student comes in with his/her data to 
YOU and asks for advice ... in this case, we can clearly say ... your data 
stink and, there is not a method to "cleanse" it

but in a class about statistical methods, we plod on with examples ... 
always as far as i can tell making sufficient assumptions about the 
goodness of the data to allow us to move forward

bottom line: i guess the frustration i am expressing is a more general one 
about the typical way we teach stat ... and that is in isolation from other 
parts of the question development, instrument construction, and data 
collection phases ...

what i would like to see ... which is probably impossible in general (and 
has been discussed before) ... is a more integrated approach to data 
collection ... WITHIN THE SAME COURSE OR A SEQUENCE OF COURSES ... so that 
when you get to the analysis part ... that we CAN make some realistic 
assumptions about the quality of the data, quality of the data collection 
process, and make sense of the question or questions being investigated





At 02:01 PM 4/20/01 +1000, Alan McLean wrote:
>All of your observations about the deficiencies of data are perfectly
>valid. But what do you do? Just give up because your data are messy, and
>your assumptions are doubtful and all that? Go and dig ditches instead?
>You can only analyse data by making assumptions - by working with models
>of the world. The models may be shonky, but they are presumably the best
>you can do. And within those models you have to assume the data is what
>you think it is.



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Student's t vs. z tests

2001-04-19 Thread dennis roberts

At 08:46 AM 4/20/01 +1000, Alan McLean wrote:

>So the two good reasons are - that the z test is the basis for the t,
>and the understanding that knowledge has a very direct value.
>
>I hasten to add that 'knowledge' here is always understood to be
>'assumed knowledge' - as it always is in statistics.
>
>My eight cents worth.
>
>Alan

the problem with all these details is that ... the quality of data we get 
and the methods we use to get it ... PALE^2 in comparison to what such 
methods might tell us IF everything were clean

DATA ARE NOT CLEAN!

but, we prefer it seems to emphasize all this minutiae ... rather than spend 
much much more time on formulating clear questions to ask and, designing 
good ways to develop measures and collect good data

every book i have seen so casually says: assume a SRS of n=40 ... when SRS 
are nearly impossible to get

we dust off assumptions (like normality) with the flick of a cigarette ash ...

we pay NO attention to whether some measure we use provides us with 
reliable data ...

the lack of random assignment in even the simplest of experimental designs 
... seems to cause barely a whimper

we pound statistical significance into the ground when it has such LIMITED 
application

and the list goes on and on and on

but yet, we get in a tizzy (me too i guess) and fight tooth and nail over 
such silly things as should we start the discussion of hypothesis testing 
for a mean with z or t? WHO CARES? ... the difference is trivial at best
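
just to put a number on "trivial" ... a sketch (made-up n and sd) of the 
half-width of a 95% interval for a mean, once with 1.96 and once with the 
t value for df = 39

# 95% half-widths ... z (1.96) versus t with df = 39 ... invented n and sd
from math import sqrt
from scipy.stats import norm, t

n, s = 40, 10.0
se = s / sqrt(n)
z_half = norm.ppf(0.975) * se        # about 3.10
t_half = t.ppf(0.975, n - 1) * se    # about 3.20
print(z_half, t_half)                # the whole fight is over about a tenth of a point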

in the overall process of research and gathering data ... the process of 
analysis is the LEAST important aspect of it ... let's face it ... errors 
that are made in papers/articles/research projects are rarely caused by 
faulty analysis applications ... though sure, now and then screw ups do 
happen ...

the biggest (by a light year) problem is bad data ... collected in a bad 
way ... hoping to chase answers to bad questions ... or highly overrated 
and/or unimportant questions

NO analysis will salvage these problems ... and to worry and agonize over z 
or t ... and a hundred other such things is putting too much weight on the 
wrong things

AND ALL IN ONE COURSE TOO! (as some advisors are hoping is all that their 
students will EVER have to take!)






>--
>Alan McLean ([EMAIL PROTECTED])
>Department of Econometrics and Business Statistics
>Monash University, Caulfield Campus, Melbourne
>Tel:  +61 03 9903 2102Fax: +61 03 9903 2007
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=========

==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Student's t vs. z tests

2001-04-19 Thread dennis roberts

At 04:42 PM 4/19/01 +, Radford Neal wrote:
>In article <[EMAIL PROTECTED]>,
>dennis roberts <[EMAIL PROTECTED]> wrote:
>
>I don't find this persuasive.

nor the reverse ... since we have NO data on any of this ... only our own 
notions of how it MIGHT play itself out inside the heads of students

>  I think that any student who has the
>abstract reasoning ability needed to understand the concepts involved
>will not have any difficult accepting a statement that "this situation
>doesn't come up often in practice, but we'll start with it because
>it's simpler".

this in and of itself sounds strange ... "this situation doesn't come up 
often in practice ... but we will begin with it ... (forget the reason why) 
... "

when does it EVER come up in practice, really? i know there must be some 
good examples out there for when it does but ... i have yet to see one ... 
where one would KNOW the sd but not the mean too ...

for sure, it would not be based on data the investigator gathered ... 
since, to get the sd you would have to have the mean ... so, it must be 
(once again) one of those where you say "assume the sd in the population is 
... " ... and hope the students buy that ...




>I have my doubts that introducing the t distribution is "NOT hard", if
>by that you mean that it's not hard to get them to understand what's
>actually happening.  Of course, it's not very hard to get them to
>understand how to plug the numbers into the formula.

just as i have doubts about the converse ... that introducing the z approach 
is easy ... as far as i can tell (again, no data ... just conjecture) the 
only thing that could make it easier is that (if one sticks to 95% CIs or 
.05 as a p value level criterion for a hypothesis test) ... you only have 
to remember 1.96 ...

can someone elaborate on why fundamentally, using z would be easier OTHER 
than only 1 CV to remember? i don't see how it makes the basic notions of 
what CIs are and what you do to conduct hypothesis tests ... easier in some 
ideational or cognitive way

what would the train of cognitive thought BE in the z approach that would 
make this easier?


>I think one could argue that introducing the z test first is MORE
>realistic.

this seems inconsistent with your earlier suggestion that " ... this does 
not come up in practice very often ... "

>  After seeing the z test, students will
>realize how lucky one is to have such a statistic,

hmmm ... this is a real stretch

for most students, being "lucky" is finding out that he/she does NOT have 
to take a stat course and therefore can avoid all this mess!


none of this applies to really good students ... you can introduce almost 
any notion to them and they will catch on to it AND quickly ... the problem 
is with the general batch which is usually 90% or more of all these 
students you have ... especially in first level intro stat courses ...


>Radford Neal
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Comparing Software Options-NCSS, Minitab, SPSS

2001-04-19 Thread dennis roberts



one can possibly download mtb for $26 for 6 months ... at

http://www.-e-academy.com/minitab

i am not sure everyone qualifies but, it is worth a try AND is the full 
release 13

if you played your cards right ... you can download for FREE for 30 days 
and then EXTEND the "lease" after that ... so you might gain effective use 
for 7 months or so ... 



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Student's t vs. z tests

2001-04-19 Thread dennis roberts

At 11:47 AM 4/19/01 -0500, Christopher J. Mecklin wrote:
>As a reply to Dennis' comments:
>
>If we deleted the z-test and went right to t-test, I believe that 
>students' understanding of p-value would be even worse...


i don't follow the logic here ... are you saying that instead of their 
understanding being "bad" ... it will be worse? if so, i am not sure that this 
is a decrement other than trivial

what makes using a normal model ... and say zs of +/- 1.96 ... any "more 
meaningful" to understand p values ... ? is it that they only learn ONE 
critical value? and that is simpler to keep neatly arranged in their mind?

as i see it, until we talk to students about the normal distribution ... 
being some probability distribution where you can find subpart areas at 
various baseline values and out (or in between) ... there is nothing 
inherently sensible about a normal distribution either ... and certainly i 
don't see anything that makes this discussion based on a normal 
distribution more inherently understandable than using a probability 
distribution based on t ... you still have to look for subpart areas ... 
beyond some baseline values ... or between baseline values ...

since t distributions and unit normal distributions look very similar ... 
except when df is really small (and even there, they LOOK the same, it is 
just that ts are somewhat wider) ... seems like whatever applies to one ... 
for good or for bad ... applies about the same for the other ...

i would be appreciative of ANY good logical argument or empirical data that 
suggests that if we use unit normal distributions  and z values ... z 
intervals and z tests ... to INTRODUCE the notions of confidence intervals 
and/or simple hypothesis testing ... that students somehow UNDERSTAND these 
notions better ...

i contend that we have no evidence of this ... it is just something that we 
think ... and thus we do it that way



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Student's t vs. z tests

2001-04-19 Thread dennis roberts

students have enough problems with all the stuff in stat as it is ... but, 
when we start some discussion about sampling error of means ... for use in 
building a confidence interval and/or testing some hypothesis ... the first 
thing observant students will ask when you say to them ...

assume SRS of n=50 and THAT WE KNOW THAT THE POPULATION SD = 4 ... is: if 
we are trying to do some inferencing about the population mean ... how come 
we know the population sd but NOT the mean too? most find this notion 
highly illogical ... but we and books trudge on ...

and they are correct of course in the NON logic of this scenario

thus, it makes a ton more sense to me to introduce at this point a t 
distribution ... this is NOT hard to do ... then get right on with the 
reality case 

asking something about the population mean when everything we have is an 
estimate ... makes sense ... and is the way to go

in the moore and mccabe book ... the way they go is to use z first ... 
assume population is normal and we know sd ... spend a lot of time on that 
... CI and logic of hypothesis testing ... THEN get into applications of t 
in the next chapter ...

i think that the supposed benefit of using z first ... then switching to reality ... 
is a misguided order

finally, if one picks up a SRS random journal and looks at some SRS random 
article, the chance of finding a z interval or z test being done is close 
to 0 ... rather, in these situations, t intervals or t tests are almost 
always reported ...

if that is the case ... why do we waste our time on z?



At 08:52 PM 4/18/01 -0300, Robert J. MacG. Dawson wrote:
>David J Firth wrote:
> >
> > : You're running into a historical artifact: in pre-computer days, using the
> > : normal distribution rather than the t distribution reduced the size of the
> > : tables you had to work with.  Nowadays, a computer can compute a t
> > : probability just as easily as a z probability, so unless you're in the
> > : rare situation Karl mentioned, there's no reason not to use a t test.
> >
> > Yet the old ways are still actively taught, even when classroom
> > instruction assumes the use of computers.
>
> The z test and interval do have some value as a pedagogical
>scaffold with the better students who are intended to actually
>_understand_ the t test at a mathematical level by the end of the
>course.
>
> For the rest, we - like construction crews - have to be careful
>about leaving scaffolding unattended where youngsters might play on it
>in a dangerous fashion.
>
> One can also justify teaching advanced students about the Z test so
>that they can read papers that are 50 years out of date. The fact that
>some of those papers may have been written last year - or next-  is,
>however, unfortunate; and we should make it plain to *our* students that
>this is a "deprecated feature included for reverse compatibility only".
>
> -Robert Dawson
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Confidence region plots

2001-04-18 Thread dennis roberts

minitab DOES have a way (if i interpret your note correctly) to put either 
a confidence band or a prediction band ... around the simple bivariate 
regression line ... you decide which and, what level of confidence

there is a macro routine called %fitline ... and subcommands allow for 
these options ...

and a dialog box arrangement also is available
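
for anyone without minitab handy ... the bands themselves are simple to 
compute directly from the regression output ... a rough sketch (python, 
with made-up x and y) of the usual 95% confidence band (for the mean of y 
at a given x) and prediction band (for a new single y at that x)

# 95% confidence and prediction bands around a simple least squares line
import numpy as np
from scipy.stats import t

x = np.array([1., 2., 3., 4., 5., 6., 7., 8.])            # invented data
y = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1, 8.2, 8.8])

n = len(x)
b1, b0 = np.polyfit(x, y, 1)                              # slope, intercept
s = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))   # residual standard error
sxx = np.sum((x - x.mean()) ** 2)
tcrit = t.ppf(0.975, n - 2)

x0 = np.linspace(x.min(), x.max(), 50)
fit = b0 + b1 * x0
se_mean = s * np.sqrt(1 / n + (x0 - x.mean()) ** 2 / sxx)      # confidence band (narrower)
se_pred = s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / sxx)  # prediction band (wider)
print(fit - tcrit * se_mean, fit + tcrit * se_mean)
print(fit - tcrit * se_pred, fit + tcrit * se_pred)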

At 09:56 AM 4/18/01 -0400, Paige Miller wrote:
>carl lee wrote:
> >
> > Hello, there:
> >
> > I am looking for software or programs that has procedure for drawing
> > confidence region for bivariate cases, such as Youden Plot. I am not
> > aware that the commonly used software such as Minitab, SPSS or SAS has
> > procedures for this. If anyone has such a program or happens to know any
> > resource, I would appreciate for such information.
>
>I am not sure what a Youden Plot is, however, bivariate normal
>confidence ellipses are not hard to draw in SAS, particularly if you
>use the procedure outlined in Jackson, J. E. (1991) "A User's Guide To
>Principal Components", John Wiley and Sons, New York, Chapter 15.
>
>--
>Paige Miller
>Eastman Kodak Company
>[EMAIL PROTECTED]
>
>"It's nothing until I call it!" -- Bill Klem, NL Umpire
>"Those black-eyed peas tasted all right to me" -- Dixie Chicks
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: In realtion to t-tests

2001-04-05 Thread dennis roberts

At 05:29 PM 4/5/01 +, Andrew L. wrote:
>I am trying to learn what a t-test will actually tell me, in simple terms.
>Dennis Roberts and Paige Miller, have helped alot, but i still dont quite
>understand the significance.

neither does most of the world (including myself on the odd numbered days) 
... so, don't feel alone

this is a bit hard to cover in a paragraph or two AND, before doing that, 
we need to feel that you have tackled enough readings so that there is some 
background in your mind ... with just sticky points left to iron out

but, let me put before you a scenario that you might think about (not in a 
context of t tests) but, a broader issue of hypothesis testing ... which of 
course is what significance is all about

let's say that you come before me ... and, we do a coin flipping experiment ...

i pull a penny out of my pocket ...

1. first flip ... heads

have any worries about if something funny is going on?

NAH

2. second flip ... heads

any problems with getting 2 heads in a row?

NAH

3. third flip ... heads

whatcha think about it now?

well, no MAJOR qualms

4. fourth flip ... heads ...

getting a bit edgy???

5. fifth flip ... heads
6. sixth flip ... heads ...

etc.

at WHAT point might you get SO edgy that you say ... 'wait a minute ... 
something's not right here ... '

THAT is the essence of hypothesis testing ... and significance ... (the 
little binomial sketch below puts some numbers on it)

A. there IS a null hypothesis here ...
B. you might REJECT this null at some point given your sample evidence
C. ... which might lead you to what alternative conclusion?
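
to put some numbers on the edginess ... under the null of a fair coin, the 
chance of k heads in a row is just .5^k ... a tiny sketch

# probability of k heads in a row IF the coin is fair (the null hypothesis)
for k in range(1, 9):
    print(k, 0.5 ** k)   # 1: .5   2: .25   3: .125   4: .0625   5: .03125 ...

somewhere around the 5th or 6th straight head that probability has dropped 
below the usual .05 ... which is just about where most people start saying 
"wait a minute"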




>Andy L
>
>
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=

_____
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: p-value of one-tailed test

2001-04-04 Thread dennis roberts

if you are talking about a t test for means ... most software would 
automatically give a two tailed p value ... unless you specify otherwise 
(which software usually will let you do)

here is the typical example

Two-sample T for C1 vs C2

       N   Mean  StDev  SE Mean
C1    10  25.70   2.87     0.91
C2    10  27.50   3.66      1.2

Difference = mu C1 - mu C2
Estimate for difference:  -1.80
95% CI for difference: (-4.90, 1.30)
T-Test of difference = 0 (vs not =): T-Value = -1.22  P-Value = 0.238

when ns are 10 for each ... df would be 18 for the two sample t 
(approximately) ... so, here is what a t distribution with df=18 looks like




[character dotplot of a simulated t distribution with df = 18: roughly 
symmetric and centered at 0, axis marked from -3.0 to 4.5]

the p value of .238 is figured in the following way:

from 0 ... go to the negative side to -1.22 ... and also from 0 to the 
right side to +1.22 ... and find the area BELOW -1.22 and ABOVE +1.22 ... 
this is the p value of .238 that gets printed out ...

two tails ...
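
the same arithmetic, sketched in python (rather than reading it off the 
minitab output) ... just for anyone who wants to reproduce the number

# two-tailed p value for the printed t of -1.22 with df = 18
from scipy.stats import t

tstat, df = -1.22, 18
p = 2 * t.sf(abs(tstat), df)   # area below -1.22 plus area above +1.22
print(p)                       # about 0.238 ... matching the output above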


At 11:25 AM 4/4/01 -0500, auda wrote:
>Hi,
>What is the p-value of a t-statistic significant (significant level shown by
>the software is p) in the wrong direction in a one-tailed test? Should we
>modify it to (1-p)? Or is it just p?
>
>
>Erik
>
>
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=

_________
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: attachments

2001-04-03 Thread dennis roberts

At 01:55 PM 4/3/01 -0500, Drake R. Bradley wrote:

>While I agree with the sentiments expressed by others that attachments should
>not be sent to email lists, I take exception that this should apply to small
>(only a few KB or so) gif or jpeg images. Pictures *are* often worth a
>thousand words, and certainly it makes sense that the subscribers to a stat
>list would occasionally want to post a graph or figure so as to illustrate a
>particular statistical point. (David Howell posted a graph of a sampling
>distribution.) It is more than a little ironic that this would be against the
>rules for this list!

though more of a pain ... what i tend to do is to make graphs and post to 
my webspace ... then tell folks of the url ... then it is their choice to 
go and see

in eudora, we can insert an image to the text space ... and this can be 
neat ... but, some may be able to see it in the text ... or, get it as an 
attachment ... but, i have had some feedback that doing this CAN goof 
things up too ... at their end (the technicalities of why i don't know)

i can certainly just attach stuff and send ...

it is all i can do on some occasions ... NOT to add to the message or as an 
attachment ... someTHING ... perhaps a pic ...

but, i just resist sending any attachments to any list ... for a variety of 
reasons ...

i use http://www.copernic.com ... a nice desktop search tool ... and one 
nice thing about it is you can save your search results as a browser 
webfile ... and then send AS AN ATTACHMENT to a person, list, etc. ... and 
when they open the attachment ... it IS already in an opened ie or netscape 
... so you get the benefits of the short url descriptions and workable 
links ... BUT, even there, i hesitate to send this attachment to a list 
(who knows what evil might parasite itself along with it?)

i know that sometimes ... one might have something to share ... and it 
would be MUCH easier to share it once ... rather than say ... "i have a pic 
i can send to anyone who wants it ... send me a note" ... and this makes 
FAR more work for the sender (so, they opt for NOT doing anything with 
it)  AND, invariably, one then gets posts to the entire list rather than to 
the specific person who has the 'thing' to share

sure is a tangled web we live in




=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: attachments

2001-04-03 Thread dennis roberts



the pragmatic of the situation is:

DO NOT SEND ANY ATTACHMENTS TO ANY LIST

this has PARTLY to do with virus spreading potential ... partly with 
courtesy ... and partly due to the fact that when downloading your messages 
say at home ... on a modem ... you can't get to the NEXT message without 
taking time to have the attachment downloaded too ... whether you opt to 
look at it or not




=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: (no subject)

2001-04-02 Thread dennis roberts

well, this is a tricky sort of ? ... if in fact, all REAL scores that 
actually convert to a SAT value ... anything = to or > than 800 are listed 
as ... 800 ... then, the ? really can't be ... what is the p value for 
having 800 or more ... has to be what is the p value for 800

but, the question being asked is probably wanting you to assume that scores 
could go larger than 800 ... so, for all practical purposes ... it amounts 
to a ? of 800 or more ...

minitab would say:

MTB > cdf 800;
SUBC> norm 500 100.

Cumulative Distribution Function

Normal with mean = 500.000 and standard deviation = 100.000

         x    P( X <= x )
  800.0000         0.9987

MTB > let k1=1-.9987
MTB > prin k1

Data Display

K1    0.0013
MTB > let k2=100*k1
MTB > prin k2

Data Display

K2    0.13 ... as a percent ... about .13 of ONE percent ... about the 
value you have as the answer ... (and the z the book wants is just 
(800 - 500)/100 = 3 ... by the 68-95-99.7 rule, about 0.3% of scores sit 
more than 3 sd from the mean, half of that ... 0.15% ... in the upper tail)
MTB >


At 08:23 PM 4/2/01 +, Jan Sjogren wrote:
>SAT scores are approximately normal with mean 500 and a standard
>deviation 100. Scores of 800 or higher are reported as 800, so a perfect
>paper is not required to score 800 on the SAT. What percent of students
>who take the SAT score 800?
>
>The answer to this question shall be: SAT scores of 800+ correspond to
>z>3; this is 0.15%.
>
>Please help me understand this. I don't understand how I get that z>3???
>and that it is 0.15%?
>
>Thanks for help
>
>
>
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=========

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Repeated-measures t test for ratio level data

2001-04-02 Thread dennis roberts

At 06:50 PM 4/2/01 +0100, Dr Graham D Smith wrote:

>Thinking about these issues has caused me to reassess the assumptions 
>underpinning the use of the repeated measures t test (for differences). 
>For a long time, I have thought that the homogeneity of variance 
>assumption is meaningless for the RM t test. In other words there is no 
>point in comparing the variability of scores from one condition with the 
>variability of scores in the other condition prior to using the test. I 
>thought this because, once the difference scores are calculated 
>homogeneity of variance is meaningless. The t test is performed on the 
>differences not the scores themselves whose variances may differ (so 
>what?). However, I now wonder whether in fact one should look at 
>homoscedasticity of the relationship between the difference of the scores 
>in the two conditions and the sum of the scores in the two conditions; for 
>example, for my data the relationship between Incong-Cong and Incong+Cong. 
>(Actually the data from my study were not clearly heteroscedastic).


let's say that you do a pre and post study with the same Ss ... say, 
pretest score and posttest score ... AND, while there is variance at pre 
... all Ss master the material and, the variance on scores on the post more 
or less goes away (a not uncommon problem in mastery learning studies)

are you suggesting that the difference in variances at pre and post should 
be of no concern when doing a dependent t test on the means? 



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: [Q] Generate a Simple Linear Model

2001-03-29 Thread dennis roberts

not completely sure what you are requesting but

well, if you have access to a routine that will generate two variables with 
specified r ... then, you can do it ... i have one that runs in minitab ... 
it is a macro ... and i know that jon cryer has one too ...

http://roberts.ed.psu.edu/users/droberts/macro.htm ... check #1 ...

you might find something at

http://members.aol.com/johnp71/javastat.html

At 07:54 AM 3/29/01 +, Chien-Hua Wu wrote:

>Does anybody know how to generate a simple linear model?
>
>--
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=

_________
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



stan error of r

2001-03-28 Thread dennis roberts

anyone know off hand quickly ... what the formula might be for the standard 
error of r ... IF the population rho value is something OTHER than zero?
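
for the record ... two standard answers (stated here from memory, worth 
double checking against a text): (a) a common large-sample approximation is 
se(r) is roughly (1 - rho^2)/sqrt(n - 1), and (b) the usual dodge is 
fisher's z transform, z' = atanh(r), whose standard error is about 
1/sqrt(n - 3) no matter what rho is ... a quick sketch

# two standard answers for the standard error of r when rho is not zero
from math import sqrt, atanh

rho, n = 0.5, 50                      # example values, made up
se_r = (1 - rho ** 2) / sqrt(n - 1)   # large-sample approximation ... about 0.107 here
z_prime = atanh(rho)                  # fisher transform ... about 0.549 here
se_z = 1 / sqrt(n - 3)                # se on the z' scale ... about 0.146 here
print(se_r, z_prime, se_z)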

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



mood watch

2001-03-28 Thread dennis roberts

talk about artificial intelligence! nuances of language?

i use eudora 5 as my email client ... and, eudora 5 has a feature (??) 
called MOOD WATCH ... that allows you to either SET the message you are 
planning to send OR, will WARN you of ... whether the message might be 
offensive to the "average reader" ... 1 chili = might be offensive, 2 
chilies = will PROBABLY be offensive ... and 3 chilies ... well, it's HOT

unlike spell checkers and grammar checkers that show you where the problem 
(might) is ...  mood watch says nothing of where the offending spot(s) 
might be

note: you can disable this feature if you want ... and just be offensive 
anytime without regards to any reader!

now, i had sent the message below to aera-d ... and the topic was about 
Computer Adaptive Testing ... and i got back the warning dialog box ... 
DING ... is probably offensive to the average reader ...

i was perplexed ... just have a look at the message ... and see if YOU can 
figure out the culprit (i did ... through a process of elimination of lines 
and words ... )

one wonders what the algorithm is that eudora uses 

[NOTE: IF ANYONE WANTS TO MAKE THEIR GUESS TO ME PERSONALLY ... I WILL TELL 
YOU THE CULPRIT ... SO, YOU MIGHT NOT WANT TO POST ANY COMMENTS TO THE LIST 
DIRECTLY ...]

fascinating ... the probably offensive message follows
===

no test publisher can tell a user ... what it means to "fail" a test

this has to be decided by the user OF the test ... or the professional 
committee that will oversee the implementation of the test results

if the test is properly normed and properly documented ... then you should 
have some ideas (in their documentation) about the measurement error that 
might be present ... around raw scores that Ss get or, some estimates of 
ability generated for them by the process ...

then, it is up to you  to decide where the cutoff will be

personally, in high stakes situations ... i would prefer a method that does 
NOT allow different Ss to take more or less numbers of items ... but, i 
would rather standardize this more tightly ... due to the fact that 
disgruntled examinees ... could make a real big stink out of it ...

and besides, it appears to me that with the kinds of Ss you are focusing 
on, the more mysterious is the procedure you use ... to make this decision 
... the worse off you will be ... certainly, it will be much much harder to 
communicate to them ... what their score means ...




_________
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



simulated rs

2001-03-27 Thread dennis roberts

i had sent this note to bob hayden ... re: simulating a sampling 
distribution of r values ... assuming that rho = 0 in the population

i know there are ways to simulate a set of X and Y data (i have one) with 
some specified r ... but, does anyone know of a routine (in minitab would 
be nice) that would allow you to insert some rho value ... specify a paired 
n size ... and then say how many variables one wants? 10 variables, 20, 
etc. ... so as to generate many many pairs of rs all at once (10 variables 
would produce 45 unique rs, etc.) ... from which to make a dotplot (for 
example) to get a feel for what the sampling distribution would look like?

thanks for any leads
===

MTB > rand 20 c1-c20;
SUBC> inte 20 40.
MTB > corr c1-c20 m1
MTB > copy m1 c30-c49
MTB > stack c30-c49 c50
MTB > Code (1) '*' C50 c52
MTB > dotp c52

Dotplot: C52


20 Points missing or out of range
[character dotplot of C52: the simulated rs pile up symmetrically around 0, 
mostly between about -0.5 and +0.7, axis marked from -0.50 to 0.75]

MTB > desc c52

Descriptive Statistics: C52


Variable      N    N*      Mean    Median    TrMean     StDev
C52         380    20   -0.0168   -0.0172   -0.0166    0.2283

Variable   SE Mean   Minimum   Maximum        Q1        Q3
C52         0.0117   -0.5772    0.6873   -0.1809    0.1303

NOTE: there are really only 190 unique rs here ... 1/2 of the N=380
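
not minitab, but for what it is worth ... a python sketch of the kind of 
routine being asked about ... pick a rho and a paired n, draw many samples 
from a bivariate normal with that rho, and look at the pile of sample rs

# simulate the sampling distribution of r for a chosen population rho
import numpy as np

rho, n, reps = 0.30, 20, 2000      # population r, paired n, number of samples (all arbitrary)
rng = np.random.default_rng(1)
cov = [[1.0, rho], [rho, 1.0]]

rs = []
for _ in range(reps):
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    rs.append(np.corrcoef(xy[:, 0], xy[:, 1])[0, 1])

rs = np.array(rs)
print(rs.mean(), rs.std())         # centers a shade below .30 ... spread shrinks as n grows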

_________
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Random Sampling and External Validity

2001-03-26 Thread dennis roberts

At 12:44 PM 3/26/01 -0500, Neil W. Henry wrote:


>Introductory statistics classes, with their artificially created null 
>hypotheses
>and impractical data gathering designs, often ignore these complexities.

you won't get much argument from me about the above ... a null hypothesis 
is rather useless in my book but, like many things, null hypothesis testing 
is so entrenched into the system ... somehow, we need to break free ... or 
at least lessen its dominance

the upcoming paper by roger kirk ... "promoting good statistical practices: 
some suggestions" in educational and psychological measurement, V 61, #2, 
2001 ... is a good place to look for some points on this matter (and, it is 
nothing new by any means)

but, regardless of what is meant by and can be thought of as "qualitative" 
research ... what i see day after day being done in the qualitative 
research context is to define it in terms of certain kinds of data 
collection methods like ... content analysis, or in depth personal 
interviews, or ... case studies ... to many, when you use the term 
"qualitative" research, that seems to be what they mean

content analysis, in depth personal interviews, case studies ... are not 
qualitative research ... these are simply methods that are used in the 
conduct OF research

i have also seen in many instances for those claiming to do qualitative 
research, that the notion of generalization is unimportant ... that is, you 
study the situation for itself ... but, i find this really troubling if 
that is the message we are trying to pass along to students we are training 
... if there is no eye on generalizable elements of what we are doing ... 
what is the point of doing research in the first place? science suggests 
that application and extrapolation (in the broadest sense) is THE noble 
goal of giving it the old scientific college try

when i went to grad school, we never even heard of the term "qualitative" 
... what we did hear of was "research" and, we started off with some 
question of interest ... and then worked on a plan of attack that would 
yield information that would help us be able to offer some answers to the 
questions posed ...

this plan of attack was NEVER to think in terms of quantitative or 
qualitative ... but, methods that would be congruent with our goals ...

yes, i do have a rather strong bias (readily admitted) and that is ... the 
distinctions made between quantitative and qualitative have NOT been 
helpful ... in fact, in some ways ... they have retarded progress in 
thinking about, planning, and conducting useful research ...

what we tend to have now are 'camps' ... like cronbach's famous apa 
presidential address about 'the two camps of psychology' (circa 1950) ... 
where the experimental researchers and field researchers didn't speak to 
one another ... how sad

this is happening and getting more so today ... between faculty and their 
students in the areas of "quantitative" and "qualitative" ...

one time, i had  a student come in and say that he/she wanted to do a 
"qualitative" study ... that was his/her goal ... that had nothing to do 
with an issue that he or she wanted to pursue ... i tried to extinguish 
that verbal behavior right away ... and help the student focus on some 
problem of interest

i would have done the same thing (and have) if the student would have said: 
i want to do a quantitative study

this is just not the right way for students to be thinking about scholarly 
efforts that they might want to engage in ...

ISSUE OR PROBLEM FIRST ... methods that seem to fit second




>--
>   *
>  `o^o' * Neil W. Henry ([EMAIL PROTECTED])   *
>  -<:>- * Virginia Commonwealth University *
>  _/ \_ * Richmond VA 23284-2014  *
>   *  http://www.people.vcu.edu/~nhenry   *
>   *********
>
>

Re: Random Sampling and External Validity

2001-03-25 Thread dennis roberts

At 12:56 PM 3/25/01 -0500, Karl L. Wuensch wrote:
>Here is how I resolve that problem:  Define the population from the sample,
>rather than vice versa -- that is, my results can be generalized to any
>population for which my sample could be reasonably considered to be a random
>sample.  Maybe we could call this "transcendental sampling" ;-) -- it is
>somewhat like transcendental realism, defining reality from our perception of
>it, eh?


this sounds like  the method of grounded theory in the qualitative 
bailiwick ...
look at data you have and see what you can make of it

that is ... there is no particular PLAN to the investigation ... data 
gathering ... or, what you want to do with what you find after the fact

i try to tell students this is  not a very good strategy ...



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Most Common Mistake In Statistical Inference

2001-03-22 Thread dennis roberts



here is my entry for the most common mistake made in statistical inference ...

using and interpreting inference procedures under the assumption of SRS 
... simple random samples ... when they just can't be

this permeates across almost every technique ... and invades almost every 
study ever published ...

if not in an internal validity sense ... surely in an external validity sense



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: One tailed vs. Two tailed test

2001-03-16 Thread dennis roberts

At 04:14 PM 3/16/01 -0500, Rich Ulrich wrote:
>Sides?  Tails?
>
>There are hypotheses that are one- or two-sided.
>There are distributions (like the t)  that are sometimes
>folded over, in order report "two tails" worth of p-level
>for the amount of the extreme.


seems to me when you fold over (say) a t distribution ... you don't have a 
t distribution anymore ... mighten you have a chi square if before you fold 
it over you square the values?
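
(a quick numerical check may help here ... this is python/scipy rather than minitab, and only a sketch: the square of a t variate with df degrees of freedom follows an F distribution with 1 and df degrees of freedom, and that F only turns into a chi square with 1 df as df gets large)

from scipy import stats

df = 19
for p in (0.90, 0.95, 0.99):
    t_crit = stats.t.ppf(1 - (1 - p) / 2, df)    # two-tailed t cutoff
    print(p,
          round(t_crit ** 2, 3),                 # squared t cutoff
          round(stats.f.ppf(p, 1, df), 3),       # F(1, df) cutoff ... matches the squared t exactly
          round(stats.chi2.ppf(p, 1), 3))        # chi-square(1) cutoff ... only close when df is large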

unless there is something really funny about a distribution that i have 
been unable to identify in a picture ... all of them have two ends ... 
tails ... whether they stretch out alot or ... bunch up on the left like 
chisquare 1

a test STATISTIC is not a distribution ... so, we need to keep what the 
test STATISTIC does ... how it works ... APART from some distribution ... 
which it might follow

all i know is that there seems to be considerable confusion/differential 
use ... call it whatever but ... our terminology on this one is NOT clear ...

especially when we relate the test statistic ... the statistical 
distribution ... AND the null/and research hypothesis we might have in some 
particular investigation

i was hoping that our list might help reduce this confusion ... by 
advancing some more specific uses of terms




=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Was: MIT Sexism & statistical bunk

2001-03-15 Thread dennis roberts



many moons ago ... there was a post that referred to a case at MIT ... 
where women biology faculty charged sex discrimination in that they thought 
their salaries were much lower than they should be ... due to the fact that 
they were women

then, there was post after post ... arguing this point or that ... in fact 
there was so much heated debate ... the SUBJECT line even changed ... from 
what is above to inappropriate hypothesis testing

now, after all these posts ... i am asking myself: what good has come from 
all of this?

at the moment, i can see none ... nothing that jumps right out at me anyway

seeing that a major purpose of this list is to provide help to people who 
are in the business of TEACHING statistics ... and communicating to 
students beneficial uses of statistics (while hopefully cautioning them 
about (to use a phrase) "inappropriate" ones) ... i would like to reiterate 
that the original setting ... and the issue at hand there ... is important. 
so, the question is: how can statistics (if at all) be used in the context 
of a discrimination case ... in this context, over the issue of salary?

i pose the following general scenario

let's assume that at an institution, a group of people (women, hispanics, 
clerical workers, associate professors, ... you name the group) files a 
suit against the university charging  discrimination

again ... let's assume that the target variable is salary ... and this 
"group" claims that they have been hugely UNfairly treated

what can we as those charged with teaching people about statistical 
analysis ... share with them as to how statistical analysis can be useful 
in this context? NOT in the sense of "proving" that discrimination DID 
occur ... or did NOT occur ... but rather, to show them methods that would 
yield data that might be useful in helping resolve a case like this?

Suggestion 1
Suggestion 2
Suggestion 3

and so on

can we bring some closure to this PARTICULAR MIT discussion with some 
general "findings" as to what students could take away from all this prattle?

thanks

ps ... a conclusion that lots of people don't agree with one another will 
not be too helpful

  



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: On inappropriate hypothesis testing. Was: MIT Sexism & statistical bunk

2001-03-14 Thread dennis roberts



in most large institutions ... the notion of performance based pay is a 
myth ... since it is easy to document clear differences in performance for 
faculty in different Colleges .. where pay is lopsided in favor of a 
favored college (like business) even when productivity (however you define 
it) goes in favor of the faculty member in the NON favored college

there are huge college to college internal differences in salary ... having 
absolutely zip ... zero ... zilch to do with performance differences of any 
kind

in fact ... the largest single factor that "explains" variance in salaries 
... along with rank ... is college location (ie, what college you happen to 
be in)

i also think that in most large institutions ... we have two broad classes 
of faculty ... there is a small group of those we might call "stars" ... 
that nobel laureate jim mentioned ... or others who, by any imaginable 
criterion, have "shone" in the discipline ... nationally and 
internationally ... THESE FOLKS SHOULD BE MAKING TONS OF MONEY

(maybe in the MIT case ... there are 1/2 of these stars that happen to be 
males ... i don't know)

and, these are peppered throughout the institution ... across ALL the 
colleges and disciplines ... and a star in one college should earn about 
the same as a star in another college ... i can't see any real 
justification for not doing that

then you have the rest of us ... general .. hard working faculty ... sure, 
lots of variation still ... but, within a rank ... and with about the same 
years IN that rank ... i don't see much to argue for compensating these 
folks too much differently ... as long as their jobs are roughly the same 
... they teach ... they advise ... they do some research ... they serve on
university committees ... so on and so forth. their movement UP through the 
ranks ... passing over all those hurdles ... justifies in my book ... 
salaries being approximately the same for the same status of tenure, rank, 
and years in rank

to start doing regression analysis and splitting salary hairs this way 
seems so out of touch with the noise in this system ... as to be rather 
comical ...

i do NOT object to stars being paid a whole lot more than regular folks ...
i DO object to there being vastly different salaries for regular folks just 
because one works in college A ... and another one works in college B

for faculty morale ... and a sense of worth ... and for faculty to give it 
their best shot to help the institution (ie, be loyal) ... there needs to 
be some sort of approximate equity ... in compensation ...

at penn state, and this is probably true in most other large schools ... 
the administration really cares little about huge gaps in salary ACROSS 
disciplines or academic colleges ... and does essentially  NOTHING ever to 
try to make compensation more equitable for us regular folk

but, when they go to the legislature ... they opine about the need for more 
salary dollars ... to keep faculty from running away ... or to be able to 
attract faculty ... but this is really for certain disciplines ... NOT to 
try to make salaries more equitable across the board

personally, it matters not much to me if penn state is more down the list 
(in average salaries) compared to illinois ... or michigan ... though i 
know that the administration worries about that

what i do worry about is trying to compensate in a much fairer way and 
equitable way ... those faculty who actually work HERE ... (and i would say 
that about illinois ... or michigan too)  



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: On inappropriate hypothesis testing. Was: MIT Sexism & statistical bunk

2001-03-14 Thread dennis roberts

At 04:10 PM 3/14/01 -0500, Rich Ulrich wrote:

>Oh, I see.   You do the opposite.  Your own
>flabby rationalizations might be subtly valid,
>and, on close examination,
>*do*  have some relationship to the questions


could we ALL please lower a notch or two ... the darts and arrows? i can't 
keep track of who started what and who is tossing the latest flames but ... 
somehow, i think we can do a little better than this ... 



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: One tailed vs. Two tailed test

2001-03-14 Thread dennis roberts

At 03:39 PM 3/14/01 +, Jerry Dallal wrote:

>It wasn't ironically and has nothing to do with 5%.  As Marvin Zelen
>has pointed out, one-tailed tests are unethical from a human
>subjects perspective because they state that the difference can go
>in only one direction (we can argue about tests that are similar on
>the boundary, but I'm talking about how they are used in practice).
>If the investigator is *certain* that the result can go in only one
>direction, then s/he is ethically bound not to give a subject a
>treatment that is inferior to another.
>
>Consider yourself or someone near and dear with a fatal condition.
>You go to a doc who says, "I can give you A with P(cure) in your
>case of 20% or I can give you B for which P(cure) can't be less than
>20% and might be higher.  In fact, I wouldn't even consider B if
>there weren't strong reasons to suspect it might be higher. And
>let's not forget it can't be lower than 20%.  I just flipped a
>coin.  YOU CAN'T HAVE "B"!"



what can i say ... marvin zelen is wrong ...

it would only be unethical if a better alternative were available ... or 
even a possibly better alternative were available ... and the investigator 
or the one making the decision to give or not to give ... KNOWS this ... 
AND HAS the ability to give this treatment to the patient ... and does NOT 
do it

because a treatment might be known to be better, through a logical 
deductive process or experimentation ... or potentially better ... does NOT 
lead to unethical practice if this treatment is not adopted ...

implementations of treatments have consequences ... other than impact of 
treatments ... there are COSTS ASSOCIATED WITH TREATMENTS and these costs 
have to be weighed in from a cost/benefit perspective (maybe even take into 
account IF the public WANTS this to be done) ... it is irresponsible NOT to 
take other things into consideration

if the costs associated with treatments are so high compared to the (albeit 
true) benefits ... one has to consider whether it would actually be 
UNethical to go ahead and order up full implementation ... when society has 
to shell out the costs ...

one vivid example: we KNOW for  a fact that ... if we reduced the national 
speed limit to 45 ... it would save thousands of lives ... though drivers 
would be hopping mad (and road rage might cause some accidents ... the 
reduction still would save many many lives) ...

are politicians, who make these decisions, acting in an unethical way NOT 
to lower the national speed limit to 45? i don't think so

decisions to implement or not implement (regardless of evidence) in most 
cases are some compromise between what we know MIGHT happen if we go 
direction A ... but, we make a tempered decision to go in direction B ... 
because of the realities of the overall situation

hypothesis testing ... is NO different







>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=========

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



1 tail 2 tail mumbo jumbo

2001-03-14 Thread dennis roberts
ould be called a 1 tailed test ... no 
matter what your research predictions are ... when we use chi square on a 
contingency table ... it should be called a 1 tailed test ... no matter how 
you think the direction of the relationship should go

when we use a studentized range statistic ... Q ... it is a 1 tailed test 
... no matter which way your predictions say that the ordering of the means 
should go

but, when we use a t test (for means for example) ... we should call this a 
TWO TAILED test ... always ...

whether the researcher opts for ... funneling alpha all at one end ... or 
subdividing it up in 1/2 ... partly at one end and partly at the other end 
... that is entirely a different matter ... but should NOT be dubbed "1 or 
2 tailed" ...

we need to be clear on the use of terms ... and, in this area ... there 
CLEARLY is serious confusion about what 1 or 2 tailed tests MEAN ... at 
least the myriad of "opines" on the list with respect to this suggest that

can't we fix this? if not for us ... for students who have to learn this 
stuff?

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: One tailed vs. Two tailed test

2001-03-13 Thread dennis roberts

well, help me out a bit

i give a survey and ... have categorized respondents into male and females 
... and also into science major and non science majors ... and find a data 
table like:

MTB > chisquare c1 c2

Chi-Square Test: C1, C2


Expected counts are printed below observed counts


              non science    science
                   C1           C2      Total
M       1          24           43         67
                32.98        34.02

F       2          39           22         61
                30.02        30.98

Total              63           65        128

Chi-Sq =  2.444 +  2.368 +
   2.684 +  2.601 = 10.097
DF = 1, P-Value = 0.001

when we evaluate THIS test ... with the chi square test statistic we use in 
THIS case  ... in what sense would this be considered to be a TWO tailed 
test? would we still be using say ... the typical value of .05 to make a 
decision to retain or reject? would we be asking the tester to look up both 
lower and upper CVs from a chi square distribution with 1 df ... and really 
ask him/her to consider rejecting if the obtained chi squared value is 
smaller than the lower CV?

in this case ... minitab is finding the area ABOVE 10.097 in a chi square 
distribution with 1 df ... and recording it as the P value ...
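
to make that concrete, here is a little sketch in python rather than minitab (numpy/scipy assumed) that rebuilds the table above and shows the p value really is just the single UPPER tail area:

import numpy as np
from scipy import stats

obs = np.array([[24, 43],
                [39, 22]])                    # observed counts from the table above
row = obs.sum(axis=1, keepdims=True)
col = obs.sum(axis=0, keepdims=True)
exp = row * col / obs.sum()                   # expected counts ... 32.98, 34.02, 30.02, 30.98
chisq = ((obs - exp) ** 2 / exp).sum()        # about 10.10
p = stats.chi2.sf(chisq, df=1)                # area ABOVE the statistic in chi-square(1) ... about 0.001
print(chisq, p)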

of course, in a simple hypothesis test for a single population mean ... like

Test of mu = 31 vs mu not = 31

Variable     N     Mean    StDev   SE Mean
C5          20    28.10     6.71      1.50

Variable           95.0% CI              T        P
C5          (   24.96,   31.24)      -1.93    0.068

the p value that is listed is found by taking the area TO THE LEFT of -1.93 
and to the RIGHT of +1.93 in a t distribution with 19 df ... and adding 
them together
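
(again, that arithmetic is easy to verify ... a python sketch, scipy assumed:)

from scipy import stats

t, df = -1.93, 19
p_two = stats.t.cdf(t, df) + stats.t.sf(abs(t), df)   # area left of -1.93 plus area right of +1.93
print(p_two)                                          # about 0.068, matching the output above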

At 08:50 PM 3/13/01 +0100, RD wrote:
>On 13 Mar 2001 07:12:33 -0800, [EMAIL PROTECTED] (dennis roberts) wrote:
>
> >1. some test statistics are naturally (the way they work anyway) ONE sided
> >with respect to retain/reject decisions
> >
> >example: chi square test for independence ... we reject ONLY when chi
> >square is LARGER than some CV ... to put a CV at the lower end of the
> >relevant chi square distribution makes no sense
> >
>Hmm... do not want to start flame war but just can not go by such HUGE
>misconception about chi squared test.



>Now getting back to original question.



>Incidentally my opinion agrees with international harmonisation
>guidelines. Just dig FDA site to find them. There are half-page
>additional explanations why one tailed tests with 5% are unacceptable.
>The result you can not submit a drug for approval based on studies
>with one tailed 5% rate tests.

agreement with another position is not sufficient evidence to discard the 
notion that one tailed tests can be legitimate in some cases

are you suggesting that the model for drug research is always correct?


>I am a dermatologist, not a statistician, and all those questions seem
>obvious to me. I am disappointed.
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=========

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: java - statistic

2001-03-13 Thread dennis roberts

have a look at

http://members.aol.com/johnp71/javastat.html

i think the answer is yes

At 06:00 PM 3/13/01 +, Paolo Covelli wrote:
>Is JAVA suitable to develop programs of statistic or a more specific
>language exists?
>
>Paolo
>
>
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=

_________
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: On inappropriate hypothesis testing. Was: MIT Sexism & statistical bunk

2001-03-13 Thread dennis roberts



in a general case like this ... where the plaintiff has to show proof of 
discrimination ... the burden is especially difficult

there are some preliminaries of course ...

if the women make more than the males ... then we would agree it would be 
"hard" to argue sex discrimination in terms of salaries ... though i guess 
the women could "try" to argue that the difference is NOT large enough (but 
in any case, the "court" is not going to waste it's time if this cursory 
test is not confirmed ... ie, men have higher salaries than women)

it is like age discrimination ... if someone brings an age discrimination 
case to EEOC ... and the facts show that older people ARE being hired or 
retained ... when this person is being let go ... it will be essentially 
impossible to win an age discrimination case

but, in the current situation, let's say that we have identified 15 
measures that relate to work and work productivity ... 1 to 15 ... and 
let's just assume that for each ... higher values mean better ...

scenario A: on all of these, women have lower mean values than males ... 
AND male salaries are higher ... it will be very hard if not impossible to 
argue (and win)  sex discrimination ...

scenario B: on all of these, women have higher mean values than males ... 
BUT have lower salaries

if 1 to 15 are valued ... it might be rather easy to argue and win a sex 
discrimination case

the overall problem in cases like these will be that it would rarely if 
ever be a situation like scenario B ...

it seems to me that only in certain cases ... would statistical information 
really be that helpful in arguing and persuading on the side of 
discrimination ...

so, ultimately, it will not generally boil down to anything statistical 
but, rather ... some logical and rational conclusion that is made based on 
the facts of the case ... many of which are "behind the scenes" and 
unobservable through any real data source




=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



how could i forget?

2001-03-13 Thread dennis roberts

the "lode" of all lists

http://members.aol.com/johnp71/javastat.html

===

http://www.kuleuven.ac.be/ucs/java/

http://www.stat.vt.edu/~sundar/java/applets/

http://www.ruf.rice.edu/~lane/stat_sim/index.html

http://ebook.stat.ucla.edu/calculators/


_____
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



applets

2001-03-13 Thread dennis roberts

these are coming in fast and furious this morning ... perhaps a more 
summary listing in one place would be helpful ... here is what i have seen 
so far ... i am sure there are more

http://www.kuleuven.ac.be/ucs/java/

http://www.stat.vt.edu/~sundar/java/applets/

http://www.ruf.rice.edu/~lane/stat_sim/index.html

http://ebook.stat.ucla.edu/calculators/

i cannot vouch for the goodness of any of these but, there sure is alot of 
good looking "stuff" out there

_____
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: One tailed vs. Two tailed test

2001-03-13 Thread dennis roberts

we have to first separate out 2 things:

1. some test statistics are naturally (the way they work anyway) ONE sided 
with respect to retain/reject decisions

example: chi square test for independence ... we reject ONLY when chi 
square is LARGER than some CV ... to put a CV at the lower end of the 
relevant chi square distribution makes no sense

2. whether for our research hypothesis ... rejection of the null is 
something that makes sense to BE ABLE to do regardless if the evidence 
suggests that the effect is LESS than the null or MORE than the null

example: typical treatments could have positive or negative effects (even 
though obviously, we predict + effects) ... thus, when doing a typical two 
sample t test (if you are interested in differences in means) ... we make 
both an upper AND lower rejection region ... ie, two tailed TEST

but, in some cases, it might be totally unthinkable for one end of the 
statistical distribution to be "useful" in a given case ... say we have a 
weight loss regimen program ... consisting of diet and exercise ... and 
want to know if it works ... ie, people lose weight ... now, in this case 
(it could be) one might argue that it is difficult to conceptualize that 
the regimen would actually "cause" one to GAIN weight ... so, to put some 
rejection area on that end of the t distribution would seem silly ... thus, 
we might be able to make the case that it is perfectly legitimate to use a 
one tailed test in this case ... (done BEFORE hand of course ... not just 
after the fact because your 2 tailing approach failed to allow you to 
reject the null)
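
for what it is worth, here is how that alpha "funneling" plays out numerically ... a python sketch (scipy assumed; the t value and df are made up for the weight loss example):

from scipy import stats

t, df = 1.80, 24                       # hypothetical paired t for mean weight lost (made up)
p_one = stats.t.sf(t, df)              # all of alpha funneled into the predicted tail
p_two = 2 * stats.t.sf(abs(t), df)     # alpha split between the two tails
print(p_one, p_two)                    # roughly 0.042 versus 0.084 ... one rejects at .05, the other does not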



At 03:08 PM 3/13/01 +1300, Will Hopkins wrote:
>At 7:34 PM + 12/3/01, Jerry Dallal wrote:
>>Don't do one-tailed tests.
>
>If you are going to do any tests, it makes more sense to one-tailed 
>tests.  The resulting p value actually means something that folks can 
>understand:  it's the probability the true value of the effect is opposite 
>to what you have observed.



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: On inappropriate hypothesis testing. Was: MIT Sexism & statistical bunk

2001-03-12 Thread dennis roberts

At 02:25 PM 3/12/01 +, Radford Neal wrote:


>In this context, all that matters is that there is a difference.  As
>explained in many previous posts by myself and others, it is NOT
>appropriate in this context to do a significance test, and ignore the
>difference if you can't reject the null hypothesis of no difference in
>the populations from which these people were drawn (whatever one might
>think those populations are).

the problem with your argument is this ...

now, whether or not formal inferential statistical procedures are called 
for ... if there is a difference in salary ... and differences in any OTHER 
factor or factors ... one is in the realm of SPECULATION as to what may or 
may not be the "reason" or "reasons" for THAT difference

in other words ... any way you say that the difference "may be explained 
by" ... is a hypothesis you have formulated ...

so, in this general context ... it still is a statistical issue ... that 
being, what (may) causes what ... and, this calls for some model 
specification ... that links difference in salaries TO differences in other 
factors/variables

if we do not view it as some kind of a statistical model ... then we are in 
no position to really talk about this case ... not in any causal or quasi 
causal way ... and, i thought that was the main purpose of this entire 
matter ... what LED to the gap in salaries?? ... was it something based on 
merit? or something based on bias?

i don't see how else we could check up on these kinds of issues other than 
some statistical questions being asked ... then tested in SOME fashion 
(though i am not specifying exactly how)




>Radford Neal
>
>
>Radford M. Neal   [EMAIL PROTECTED]
>Dept. of Statistics and Dept. of Computer Science [EMAIL PROTECTED]
>University of Toronto http://www.cs.utoronto.ca/~radford
>
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=========

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Patenting a statistical innovation

2001-03-10 Thread dennis roberts

At 09:50 PM 3/7/01 +, Warren Sarle wrote:

>In article <[EMAIL PROTECTED]>,
>  Paige Miller <[EMAIL PROTECTED]> writes:
> >
> > If it so happens that while I am in the employ of a certain company, I
> > invent some new algorithm, then my company has a vested interest in
> > making sure that the algorithm remains its property and that no one
> > else uses it, especially a competitor.
>
>That would be perfectly reasonable. Unfortunately, patent law
>doesn't work that way. You cannot patent an algorithm per se.
>But anybody can patent applications of the algorithm that you
>invented. You could end up having to pay royalties to somebody
>else for using your own algorithm. The law is insane.


like the biotech patents of genes ... that was highlighted on 60 minutes a 
week or two ago



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



nomographs

2001-03-06 Thread dennis roberts

back around 1960 ... there appeared via ETS ... a two side nomograph that 
found (on one side) partial rs ... and (on the other side) multiple Rs ... 
where you could enter the graphs from 2 or more directions and read these 
values off ... (if your eyesight was good enough!)

the first was by ruth lees and fred lord, the second was from fred lord ...

1. J. Amer. Stat. Assoc, December 1962
2. J. Amer. Stat. Assoc, December 1955

now, back then i assume you could get these from ets ... i would be SHOCKED 
if you could "buy" or "get" this nomograph anymore ...

to show how creative some folks were to help "users of stats" figure out 
things ... i have scanned this and you can see at

http://roberts.ed.psu.edu/users/droberts/multr.jpg

http://roberts.ed.psu.edu/users/droberts/partr.jpg

if you have a look and think of the WORK it took on SOMEone's part to draw 
these back then ... my hat's off to them

NOW, IF ANYONE THINKS I SHOULD NOT POST THESE ... LET ME KNOW AND I WILL 
YANK THEM OFF THE SERVER RIGHT AWAY ...

i just thought this was interesting in light of the post i sent re: normal 
curve template
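
for anyone curious what those nomographs were actually computing, the usual textbook formulas are only a few lines of code these days ... a python sketch (the function names and the example correlations are mine, just for illustration):

import math

def partial_r(r12, r13, r23):
    # correlation between variables 1 and 2 with variable 3 partialled out
    return (r12 - r13 * r23) / math.sqrt((1 - r13 ** 2) * (1 - r23 ** 2))

def multiple_r(ry1, ry2, r12):
    # multiple correlation for predicting y from two predictors 1 and 2
    r_sq = (ry1 ** 2 + ry2 ** 2 - 2 * ry1 * ry2 * r12) / (1 - r12 ** 2)
    return math.sqrt(r_sq)

print(partial_r(0.50, 0.40, 0.30))   # made-up rs, just to show the calls
print(multiple_r(0.50, 0.40, 0.30))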

_________
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



norm curve template

2001-03-06 Thread dennis roberts

many eons ago ... 1974 to be precise ... i had this idea of making a small 
plastic normal and skewed curve template ... that would help students draw 
both types ... with information about the distributions on the template ... 
that would help them work with problems by being able to make a nice sketch 
...

if anyone is interested in a historical artifact (relic?) ... have a look at

http://roberts.ed.psu.edu/users/droberts/statmat.jpg

i still think it WAS a good idea ... just didn't have the right "marketing" 
team in place

_________
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: power,beta, etc.

2001-03-05 Thread dennis roberts

the "act" of "deciding" (using whatever rule/CV you like) to retain the 
null or reject the null  ... is just that and nothing more

however, you do NOT "act" or "decide" to make a type II error or a type I 
error ...
you don't "act" or "decide" to make an incorrect or correct choice ... the 
fact that it is correct or incorrect ... is not within YOUR "causative" 
ability ... ( i wanted to use the word "power" but ... that would totally 
mess things up)

if the null happens to be true ... AND your action was to retain ... YOU 
did not take an action to make a correct decision ... the "accident" that 
the decision could be called "correct" is of our doing as statisticians ... 
conditioned on the null being true ... this is merely a LABEL we have given 
to this resulting situation

if the null is not true and ... your ACTION was to retain ... YOU did not 
take an action to make an INcorrect choice ... this "accident" was because 
we as statisticians have given this label to that ... and is conditioned on 
the fact (that you have no awareness of) ... that the null is not true

we can say the same about rejecting the null also

yes, we as investigators do make the decision to retain ... or reject ... 
but, we don't make the decision to have the outcome of that decision be 
correct or incorrect

finally, i do think that the label we give of correct or incorrect is a 
consequence of the fact that you acted one way or the other AND conditioned 
on the state of nature with respect to the null ... and, the name we give 
to it being correct or not ... is only a consequence of the CONGRUENCE 
between YOUR act AND state of nature ... according to my dictionary ... 
consequence means result and ... calling an action you take a correct or 
incorrect decision is only the result of one overlaying the action (and 
that's the only action part of this overall case) WITH the state of nature 
... so, the name correct or incorrect ... (type I or type II error, etc.) 
is THE result ...

it is NOT the result of what YOU did ... it is the result of comparing what 
you did WITH nature ... and the resultant NAME we assign will be a or b or 
c or d ... depending on that congruence or lack of that comparison

in this sense ... there is a probability associated with that consequence 
... and the probability only makes sense as a consequence ... not as an 
action YOU take

the investigator does not make the type I error ... or the type II error 
... or either of the correct possibilities too ...

so, as long as that is clear, then ... ok by me

but, the more i examine the notion of power ... the more  i fail to see 
that this is a very good term to assign to that probability ... alpha and 
beta as terms ... have no particular "loaded" meanings (though these can be 
differentially "bad" depending on circumstances) ... but one cannot say 
that about "power" ... so, by assigning this name to that probability ... 
it suggests that this is THE good place to be striving for ... but, as i 
said before ... we should be striving for having the consequence of our 
action ... be correct ... not correct of a certain type ... though, i 
readily admit that the direction of the consequence being where we now 
label power ... is probably more often than not ... where we hope to be 
but, not always ...

the implication is as follows:

let's say that the null is true and ... you have retained it (call this A)

or, the null is not true and you have rejected it ... (call this B)

in our current layout and discussion of terms ... we try to argue by the 
name (power) of B ... that somehow it is a BETTER correct decision than A

i don't buy that

it may not be that interesting of a case but, if true ... it still is good 
that we made it



At 11:15 AM 3/5/01 -0500, Donald Burrill wrote:
>In response to Dennis's earlier statement,
>"that is ... power in many cases is a highly overrated CORRECT decision"
>
>I wrote:



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: power,beta, etc.

2001-03-05 Thread dennis roberts

At 12:09 AM 3/5/01 -0500, Donald Burrill wrote:

>Well, no.  Overrated it may be (that lies, I think, in the eye of the
>beholder);  but a _decision_ it is definitely not.  Power is the
>_probability_ of making a particular decision -- which, of course, like
>all decisions, may or may not be correct.

sorry ... we don't MAKE this decision ... the only decision we make in 
this case is to reject the null ... it is only the statisticians 
who overlay on TOP of this ... the consequence OF that reject decision ... 
saying that IF the null had been false (of which the S has no clue about) 
... THEN the consequence of that reject decision is called power

this is one reason i raised this issue ... because, we only make 2 possible 
decisions with respect to our investigation ... we retain ... we reject ... 
we DON'T determine the consequence of that decision ... so, in this sense 
... saying that there is a consequence associated with a particular act ... 
retaining or rejecting ... "power is the probability of MAKING (emphasis 
added from don's comment) ... a particular decision ... " ... sounds like 
WE did this ... when we did NOT DO this

all we did was to reject the null

i still think there would be value ... in:

1. making it clear that the S only makes decisions of the retain kind ... 
and reject kind ... that's it!

2. it would be helpful to identify both correct decisions (oops ... 
unbeknownst outcomes) ... just like we identify both incorrect decisions 
(oops .. unbeknownst outcomes) ... and then give some symbol to the 
probability associated with each of the "cells" ... which is distinct from 
the name we have given to the cell

> -- Don.
>  --
>  Donald F. Burrill[EMAIL PROTECTED]
>  348 Hyde Hall, Plymouth State College,  [EMAIL PROTECTED]
>  MSC #29, Plymouth, NH 03264 (603) 535-2597
>  Department of Mathematics, Boston University[EMAIL PROTECTED]
>  111 Cummington Street, room 261, Boston, MA 02215   (617) 353-5288
>  184 Nashua Road, Bedford, NH 03110  (603) 471-7128

==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: power,beta, etc.

2001-03-04 Thread dennis roberts

At 03:08 AM 3/4/01 -0500, Donald Burrill wrote:

>Do you have a reasoned objection to "1 - alpha"?  In other contexts we 
>routinely use, e.g., "1 - Rsq" for the proportion of variance unexplained 
>by the model being considered.  The "1 minus" construction shows the 
>logical and arithmetical connection between two quantities, which can 
>easily get lost if one uses very different-looking terms for those 
>quantities.

seems like that each cell should have a probability definition that is NOT
dependent on the probability name for another cell ... 

i know that sometimes power is "defined" as 1 - beta ... but, beta could
therefore (algebraically and logically) be defined as 1 - power ... so,
these are circular in a way 

beta AND power ... just like alpha and "that other cell" should have their
own (independent of the other cell names) probability definitions even if
there is additivity between 2 quantities

i don't think that there is anything UNnecessary about having a better
label and probability definition for the "retain null when null is true" cell ...
after all ... it is a correct decision AND, we should above all ... try to
encourage making the correct decision ... even if this particular cell is
rather UNinteresting to folks ... 

one could make the argument that in a trial ... making the decision to
acquit a person who is really innocent ... is just as important as
convicting someone of a minor piddly crime ... in fact, one could make the
case in many instances that acquittal is more important than conviction ... 

that is ... power in many cases is a highly overrated CORRECT decision


==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



power,beta, etc.

2001-03-03 Thread dennis roberts

when we discuss things like power, beta, type I error, etc. ... we often
show a 2 by 2 table ... similar to

 null truenull false

retain   correct  type II, beta

reject   type I, alpha power


i think that we need a bit of overhaul to this typical way of doing things ... 

1. each cell needs to have a name ... label ... that reflects the
consequence of the decision (retain, reject) that was made

i propose something along the lines of

            null true               null false

retain      type I correct, 1C      type II error, 2E

reject      type I error, 1E        type II correct, 2C


then, we have names or symbols for probabilities attached to each cell

            null true                       null false

retain      WHAT NAME/SYMBOL FOR THIS??     beta

reject      alpha                           power


DOES ANYONE HAVE SOME SUGGESTION AS TO HOW THE UPPER LEFT CELL MIGHT BE
REFERRED TO via A SYMBOL??? OR, SOME NAME THAT IS DIFFERENT FROM POWER BUT
... STILL GIVES THE FLAVOR THAT A CORRECT DECISION HAS BEEN MADE (better
than making an error)?

2. i think it would be helpful to first identify each cell with a
distinctive label ... describing the decision (correct, error) and ... the
type ... 1 or 2

3. i think it would be helpful to have a system where there are names for
EACH cell (why should the poor upper left be "left" out in the cold??) ...
FIRST ... then some OTHER name/symbol for the probability associated with
that cell

confusions that might be avoided would be like:

a. saying type II error is the same as beta ... 
b. saying that power is NOT a name for a decision but, rather, THE
probability of making some particular decision

we have special names for errors of the first and second kind  type I
and type II ... and we have symbols of alpha and beta to represent their
associated probabilities

we have power which is supposed to be the probability of making a certain
kind of decision ... but, no special name for THAT cell like we have given
to differentiate the two kinds of errors one can make ...

any support out there to try to right this somewhat ambiguous ship? 
======
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Census Bureau nixes sampling on 2000 count

2001-03-02 Thread dennis roberts

unfortunately, there is a constitutional MANDATED way to take the census 
... which is archaic ... and tremendously costly to boot ... but as far as 
i know ... all attempts at going to a more statistical sampling method have 
been stricken down in the courts ...

this is one place that the constitution clearly needs to be changed ... 
but, given that amendments need 3/4 of the states to agree ... this will be 
hard to pass ... since many states see reapportionment under the current 
method to be advantageous to them ... so, they would not agree to this

when the census readily admits that 10% or so are missed ... flat out NOT 
seen nor counted ... AND we know that statistical methods can greatly 
improve upon that ... we need to change

At 12:16 PM 3/2/01 +, J. Williams wrote:
>The Census Bureau urged Commerce Secretary Don Evans on Thursday not
>to use adjusted results from the 2000 population count.  Evans must
>now weigh the recommendation from the Census Bureau, and will make the
>decision next week.  If the data were adjusted statistically it  could
>be used to redistribute and remap political district lines. William
>Barron, the Bureau Director, said in a letter to Evans that he agreed
>with a Census Bureau committee recommendation "that unadjusted census
>data be released as the Census Bureau's official redistricting data."
>Some say about 3 million or so people make up a disenfranchising
>undercount.  Others disagree viewing sampling as a method to "invent"
>people who have not actually been counted.  Politically, the stakes
>are high on Evans' final decision.
>
>
>
>
>
>
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=========

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Cronbach's alpha and sample size

2001-02-28 Thread dennis roberts

i don't see a tradeoff between n for sample and k for # of items as being 
really THE or AN issue

you don't really consider n for sample (though having larger is nicer) ... 
when you are contemplating the general size of the reliability coefficient 
you are targeting to

that is ... you don't say ... well, i can only "run" 10 Ss so, i need twice 
the number of items ... or, since i can have 400 Ss i only NEED 8 items

the  real benefit that larger n might have is that it would produce 
probably a little more test score variance ... which might be helpful in 
the calculation of alpha ... making it potentially a bit larger

now, the stability of the alpha coefficient ... that is a different matter ...
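
as an aside, the coefficient itself is only a few lines to compute from a persons-by-items data matrix ... a python sketch (numpy assumed; the toy data are made up, 200 persons and 8 items):

import numpy as np

def cronbach_alpha(items):
    # items: rows are persons, columns are items
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()   # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)         # variance of the total scores
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
data = true_score + rng.normal(size=(200, 8))         # each item = common true score + noise
print(cronbach_alpha(data))                           # lands somewhere near .89 with these settings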

At 12:08 PM 2/28/01 +0100, Nicolas Sander wrote:
>How is Cronbach's alpha affected by the sample size apart from questions
>related to generalizability issues?
>
>I find it hard to trace down the mathematics related to this question
>clearly, and whether there might be a trade off between N of items and N
>of subjects (i.e. compensating for lack of subjects by high number of
>items).
>
>Any help is appreciated,
>
>Thanks, Nico
>--
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=========

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: ASA and patenting

2001-02-28 Thread dennis roberts

At 07:26 PM 2/27/01 -0800, T.S. Lim wrote:
>Consider the following excerpts from the ASA Ethical Guidelines for 
>Statistical Practice 
>(http://www.amstat.org/profession/ethicalstatistics.html). My naive 
>interpretation is that the ASA may endorse patenting statistical 
>innovations or making them proprietary. What's your interpretation?
>
>===
>Make new statistical knowledge widely available, in order to provide
>benefits to society at large beyond your own scope of
>applications. Statistical methods may be broadly applicable to many
>classes of problem or application. (Statistical innovators may well be
>entitled to monetary or other rewards for their writings, software, or
>research results.)
>
>Make new statistical knowledge widely available in order to benefit
>society at large. (Those who have funded the development of new
>statistical innovations are entitled to monetary and other rewards for
>their resulting products, software, or research results.)
>===


i don't see that the above paragraphs mean necessarily ... patents ... even 
in the case of software, is it patented or copyrighted?

of course, i don't see anything above that excludes the notion of patents 
either




>--
>T.S. Lim
>[EMAIL PROTECTED]
>www.Recursive-Partitioning.com
>
>
>
>
>Get paid to write review! http://recursive-partitioning.epinions.com
>
>
>
>
>=
>Instructions for joining and leaving this list and remarks about
>the problem of INAPPROPRIATE MESSAGES are available at
>   http://jse.stat.ncsu.edu/
>=====

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



two sample t

2001-02-26 Thread dennis roberts

when we do a 2 sample t test ... where we are estimating the population 
variances ... in the context of comparing means ... the test statistic ...

diff in means / standard error of differences ... is not exactly like a t 
distribution with n1-1 + n2-1 degrees of freedom (without using the term 
non central t)

would it be fair to tell students, as a rule of thumb ... that in the case where:

  the ns are quite different ... AND the smaller variance goes with the larger 
n (or the reverse) ... is the situation where we are LEAST comfortable saying 
that the test statistic above follows (close to) a t 
distribution with n1-1 + n2-1 degrees of freedom?

that is ... i want to set up the "red flag" condition for them ...

what are guidelines (if any) any of you have used in this situation?
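
one way to make the "red flag" vivid for students is a small simulation under a true null ... here is a python sketch (numpy/scipy assumed; the ns and sds are made up to match the bad pattern, larger n paired with the SMALLER variance):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n1, sd1 = 40, 5.0          # bigger sample, smaller sd
n2, sd2 = 10, 15.0         # smaller sample, bigger sd
reps, reject_pooled, reject_welch = 20000, 0, 0

for _ in range(reps):
    x = rng.normal(50, sd1, n1)     # both population means are 50 ... the null is true
    y = rng.normal(50, sd2, n2)
    if stats.ttest_ind(x, y, equal_var=True).pvalue < 0.05:
        reject_pooled += 1          # pooled t with n1-1 + n2-1 df
    if stats.ttest_ind(x, y, equal_var=False).pvalue < 0.05:
        reject_welch += 1           # welch / satterthwaite version

print(reject_pooled / reps, reject_welch / reps)   # the pooled rate runs well above .05 here; welch stays close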




_____
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: pizza

2001-02-26 Thread dennis roberts

the original post meant that ... there were multiple tasters ... i had just 
put 10 as an example

thus, in the binomial context ... i was assuming (rightfully or wrongfully) 
that n=10 ... that is, if we SCORE across the 10 ... we could have scores 
of 0 to 10 ... in terms of how many got the correct orderings

now, it was the p that i was most interested in ... since ... in the 
example ... we have no real idea of how many times the Ss might taste and 
retaste ... slices and, if multiple ... in what orders ...

given that for any particular S ... the way the problem was posted ... the 
correct order could have been (and only) ... SSD ... SDS ... DSS ...

in this sense, there is a 1 out of 3 chance of hitting it correctly ... 
but, is the p value in this binomial really 1/3??? is this really a true 
binomial case?

does the fact that SSS and DDD are not allowed and, the fact that tasting 
one surely has some impact on what you decide about tasting another (hence, 
some dependence in the situation) ... take it out of the binomial?

At 09:15 AM 2/26/01 -0600, Mike Granaas wrote:

>Upon rereading Dennis' original question he proposed 10 S, not 10
>trials/S.  So, my speculations about sequential trials for a given S are
>not relevant.  That will teach me to try and respond on friday afternoons.
>
>Michael



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: pizza

2001-02-23 Thread dennis roberts

a concern i have in this situation ... and why i posed the question is as
follows

since it is a taste test ... Ss will taste the pizzas ... so, the notion of
just selecting ONE and saying it is different seems not a reasonble scenario

so, what would a resonable guessing scenario be? one might be that ...
after tasting and retasting ... the S says to himself/herself  ... i just
cannot make a choice ... i really don't know the difference ... BUT, he/she
has to make a choice ... those are the rules ...

so, if that were the case ... let's say that the strategy he/she adopts is
to flip a mental coin ... if it is heads, call the first pizza SAME ...
and, if tails ... call it DIFFERENT ...

now, if the first turns up heads ... then there is another piece to do the
mental flip for ... so the second piece gets the second flip ... assume it
too is heads ... and is therefore called SAME ... 

then, there is NO random choice for the third ... it has to be DIFFERENT
... the third slice decision in this case is NOT independent of the second ...

but, what if the first slice mental flip came up TAILS ... then for it, it
is called the different one ... but automatically and out of the control of
the S are the decisions for the other two ... they are both SAMES

i claim that in this situation ... the decisions for all three are NOT
independent decisions ... therefore, it does not satisfy one of the
conditions for the binomial to be a correct model ...

if the strategy were to simply flip a three sided coin ... with sides pizza
slice 1, 2, or 3 ... whichever one the mental flip lands on ... the OTHER
two are fixed choices and out of the control of the S ... 

some of the choices DEPEND on what has already transpired
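
just to see where that mental-coin-flip scheme lands, here is a little simulation sketch (python, numpy assumed; this is my reading of the strategy described above, with the odd slice placed at random each time):

import numpy as np

rng = np.random.default_rng(2)
reps, hits = 100000, 0

for _ in range(reps):
    odd = rng.integers(3)          # which slice really is the different one
    if rng.random() < 0.5:         # first flip tails -> call slice 0 the different one
        guess = 0
    elif rng.random() < 0.5:       # first flip heads (slice 0 same), second flip tails -> slice 1 different
        guess = 1
    else:
        guess = 2                  # both flips heads: slices 0 and 1 called same, slice 2 forced to be different
    hits += (guess == odd)

print(hits / reps)                 # comes out near 1/3, even though the three slice-level calls are not independent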



At 03:00 PM 2/23/01 -0600, Mike Granaas wrote:
>On Fri, 23 Feb 2001, dennis roberts wrote:
>
> 
>> 
>> but, what is really the p for success? q for failure?
>> 
>> is this situation of n=10 ... really a true binomial case where p for 
>> success is 1/3 under the  assumption that simple guessing were the way in 
>> which tasters made their decisions?
>
>It's late on friday so I could be missing something, but it seems
>reasonable that p = 1/3 in this case.  If the taster were to simply walk
>into the room and point at the middle piece of pizza each trial they
>should be right 1 time in 3. (Unless there is some experimental
>manipulation that keeps the odd piece in one position more frequently than
>would be expected...but I think you specified counterbalancing in your
>question.)
>
>> 
>> (as an aside, what would it mean for tasters in this situation to be making 
>> their decisions purely based on chance?)
>
>I would interpret it as meaning that the tasters couldn't tell the two
>pizza brands apart.  They did no better than someone who didn't taste the
>pizza and so were unable to discriminate between to two brands.  The
>obivious explanations are that the pizza brands really are the same in all
>ways that matter for taste discrimination, or the tasters were not very
>good at the task.
>
>Michael
>
>> 
>> _
>> dennis roberts, educational psychology, penn state university
>> 208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
>> http://roberts.ed.psu.edu/users/droberts/drober~1.htm
>> 
>> 
>> 
>> =
>> Instructions for joining and leaving this list and remarks about
>> the problem of INAPPROPRIATE MESSAGES are available at
>>   http://jse.stat.ncsu.edu/
>> =
>> 
>
>***
>Michael M. Granaas
>Associate Professor[EMAIL PROTECTED]
>Department of Psychology
>University of South Dakota Phone: (605) 677-5295
>Vermillion, SD  57069  FAX:   (605) 677-6604
>*******
>All views expressed are those of the author and do not necessarily
>reflect those of the University of South Dakota, or the South
>Dakota Board of Regents.
>
>

==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



pizza

2001-02-23 Thread dennis roberts

let's say that you have 'students' (they love pizza you know!) who claim 
they can easily tell the difference between brands of pizza (pizza hut, 
dominoes, etc.) ... so, you put them up to the challenge

you select 10 students at random ... and, arrange a taste test as follows:

you have some piping hot pizzas ... from dominoes and pizza hut ... and, 
you cut slices of each (pepperoni and green peppers in all cases)  and, 
when each student comes in ... you randomly pick 2 slices from one of the 
two brands ... and 1 from the other brand ... and lay them out in front of 
the student in a random order and ask the student to taste test ... then 
tell you which two of the 3 are the same ... and which 1 of the 3 is 
different ...

of course, they have to try all 3 ... and, probably go back and forth 
retasting more than once before making their final decision ...

now, we have 10 trials in terms of students doing independent tests, one 
from the other ...

in each of these 10 cases ... if the identification of the 3 is correct ... 
you count this as a successful identification ... if there are any 
misplacements or misidentifications ... then we label this as a failure ...

say we have pizza 1, 2, and 3 ... and the only allowable options are:

12 same, 3 different
13 same, 2 different
23 same, 1 different

that is, the instructions are such that they are told ... 2 ARE the same 
... and, 1 IS different so, saying all are the same ... or all are 
different ... are not options that you allow for the taster

so, in this scenario, there are 10 independent trials ...

but, what is really the p for success? q for failure?

is this situation of n=10 ... really a true binomial case where p for 
success is 1/3 under the  assumption that simple guessing were the way in 
which tasters made their decisions?

(as an aside, what would it mean for tasters in this situation to be making 
their decisions purely based on chance?)
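
just as an illustration (my own addition, not part of the original setup) ...
IF we treat the 10 trials as binomial with p = 1/3 under pure guessing, the
exact tail probabilities are easy to get ... here in python with scipy:

from scipy.stats import binom

n, p = 10, 1/3
for k in range(n + 1):
    # P(X >= k) = chance of k or more correct identifications under guessing
    print(k, round(binom.sf(k - 1, n, p), 4))

# for example, P(X >= 7) comes out near .02 ... so 7 or more correct tasters
# would be hard to explain by guessing alone, IF the binomial assumptions hold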

_________
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



power/beta macro

2001-02-20 Thread dennis roberts

i now have a  macro that will do power/beta calculations in the 1 sample z 
test case ... that runs in MINITAB and produces nice overlapping graphs

for the moment, the power/beta macro is done ... but, subject to 
improvements later ... especially if any of you have a good suggestion that 
i can implement

currently, you enter the null mean value, alternative mean value, 
population standard deviation, and the sample size n ... and it then runs 
and produces the overlapping normal distributions with power and beta 
calculated ...

at the moment ... the default is a 2 tailed alpha of .05 ... i will change 
this to allow more options later

the nice thing about the macro is that one can run it under one set of 
conditions ... say, null = 100, alternative = 102, population sd = 16, and 
n=25 ... get the output graphs ... then, run it again using n=100 ... and 
see the impact changing sample size has on power and beta ...
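
for anyone without minitab ... a rough python equivalent of the calculation
described above (NOT the macro itself ... the function name and layout are my
own, and it defaults to the two tailed alpha of .05 mentioned above):

from math import sqrt
from scipy.stats import norm

def power_beta(mu0, mu1, sigma, n, alpha=0.05):
    # power and beta for a two tailed one sample z test
    se = sigma / sqrt(n)
    z = norm.ppf(1 - alpha / 2)                 # 1.96 when alpha = .05
    lower, upper = mu0 - z * se, mu0 + z * se   # rejection region on the mean scale
    # power = chance the sample mean lands in the rejection region when mu really is mu1
    power = norm.cdf(lower, loc=mu1, scale=se) + norm.sf(upper, loc=mu1, scale=se)
    return power, 1 - power

print(power_beta(100, 102, 16, 25))    # roughly (0.10, 0.90)
print(power_beta(100, 102, 16, 100))   # roughly (0.24, 0.76)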

the link is http://roberts.ed.psu.edu/users/droberts/powbeta.htm

at the moment, you have to cut and paste the macro ... save it on your 
system and give it whatever name you want ... i have used powbeta.MAC ... 
it is a file that you run at the prompt

MTB> %powbeta

you might want to copy to your (if you have minitab) MACROS folder in the 
minitab directory

any comments are welcome

_____
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: citations

2001-02-16 Thread dennis roberts

we all know that the setting of salary ... either initially or
incrementally over the years ... is a highly subjective business ... there
is very little that is OBjective about it

from an array of data ... that a dean might see prior to hire ... or, after
onboard ... that a local p and t committee might see ... or a department
head on an annual basis ... the department head usually forwards to a dean
... some recommendation as to increments

fundamentally, regardless of ANY of the data sources, it boils down to how
much value ... the department head ... conveyed to the dean ... PLACES in
your service

it is not just (but this plays some role) how much they like or dislike you
... but, how much they think you provide value to their unit

it could be teaching ... it could be service ... it could be research ...
it could be grants ... it could be visibility on the internet ... it could
be all kinds of things ... no faculty member i know ... if they are to be
called a faculty member ... is a unidimensional being ... nor has a
UNIdimensional role in a unit

i would hope that any program chair or department head ... worth his/her
salt ... would consider a variety of factors ... in some weighted
combination ... which could be different from faculty member to faculty
member depending on their role in the unit ... and then make what he/she
thinks is the best decision (unfortunately, in any given year ... the
discretion he/she has in this area is rather puny ... though a dean does
have rather large discretion on hire, which is where so many of these huge
salary discrepancies start from) 

what really worries me ... which this MIT case discussion highlights
(possibly) ... is our reliance on what appear to be "objective" measures
of performance ... citation rate is just one of them ... and then starting to
think in an interval measurement scale way ... that 2 units more on X
... means we should be awarding faculty member Y ... Z more units of $$$
in salary

this is a hugely bad way to operate ... 

it reminds me of some attempts to overly micromanage and define "workload" ... 

sure, we need some measures so that unjustifiable salaries (in the first
place) or salary increments don't occur ... but, our adherence to these
seemingly "exact" data sources on which to make these rather subjective
decisions ... is rather scary

if someone wants to use citation rates ... well, go ahead and do it (even
though i hate this indicator) BUT, keep in mind that it is but ONE of
dozens of factors that can and should enter into the mix ... and, one
should keep some proper perspective on the WEIGHT given to ANY of the
myriad factors or measures one can use
======
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



citations

2001-02-16 Thread dennis roberts

this was an interesting find ... looks like someone wanted others to know 
how good he/she was doing

http://www.stsci.edu/~marel/citations.html

a useful analysis here ... some good points

http://www.uibk.ac.at/sci-org/voeb/vhau9402.html

this was interesting

http://www.vsv.slu.se/johnb/java/isi/career1.htm

another

http://psy.ed.asu.edu/~horan/d-bk-apa.htm

about many of the citation index cd roms

http://www.library.nuigalway.ie/services/elec/citind.html



_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



citation rates

2001-02-16 Thread dennis roberts



obviously irving scheffe likes citation rates (or lives with them 
comfortably)  ... and dennis roberts does not

we can have these back and forth discussions till we are blue in the face i 
guess ... but, i seriously doubt that it will change your mind nor mine

ok, so you thought my looking out the window example was way off mark ... 
fine, i will accept that

BUT, i ask the following ... and i hope that you won't put this in some 
nonsensical category

you stated:
===
No, not at all. How could you possibly equate citation counts based
on the way HUNDREDS of other scientists have reacted to the work of
these individuals over TWELVE YEARS ...
===

NOTE: CITATION IS FOR A PERSON, RIGHT? CLARIFY WHAT YOU MEAN THEN ... ??
since we don't have group tenure or group promotion or group salary 
increments ... then we have to be talking about ONE person at a time


how do you know that over 12 years ... and the 12000 citation rate that was 
previously mentioned ... that it involved HUNDREDS of other scientists???

please relate to me and the rest of the edstat group ... how you deduce 
this  FROM THE 12000 CITATION NUMBER ... over the 12 years?

it would be helpful to show the output statistics from the citation rate 
site or sites or databases that you have access to ... that allows you to 
make this assertion

and more specifically i ask:

1. how many are unique and different scientists?
2. how many of #1 did NOT appear simultaneously on the SAME papers? (note: 
it is very commonplace in science writing ... to have papers with 5 or 6 or 
7 or 8 authors ... would these count in the HUNDREDS of OTHER scientists?)
3. how many different PAPERS/BOOKS  does this represent as separated from 
the HUNDREDS of other  scientists?
4. how many of these papers ... where citations are commonly carried over 
from one paper to another ... are from the same group of researchers 
working in the same institution(s)?

i know when i write papers, i quote myself ... is that not common practice? 
but, to assert that i am having impact on myself ... is rather strange ... 
so, now i have 5 papers ... where the fifth cites 3 of the others ... and 
so on and so forth ...

and students who work with me ... cite those papers too ... they HAVE to!

now, i want to make it abundantly clear that i am in NO way suggesting that 
the person or persons who was (were) given in evidence as having (on 
average) 12000 citations over 12 years ... has (have) not made important 
contributions to biology ... and that others do not recognize that ...

but, your implication that 12000 citations over 12 years has impacted 
hundreds of scientists in important ways ... is overstated ... A LOT

there just is no way to do any corroboration that will show convincingly 
... that this level of citations for THIS or any other person ... equates 
to the level of impact that you are implying

the questions i have raised about citation rates in general ... and 
specifically in this case ... are fully legitimate to make about citation 
rates ... and, if you have some good data to clearly answer the questions 
posed ... i (and most others i would suspect) would be more than delighted 
to examine these data

CITATION RATE STATISTICS ARE HIGHLY OVERRATED






=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: On inappropriate hypothesis testing. Was: MIT Sexism & statistical bunk

2001-02-16 Thread dennis roberts

At 01:58 AM 2/16/01 +, you wrote:
>Dennis,
>
>Having the salary data would be desirable. If, on the other
>hand, we are only interested in the question "Did the female
>biologists at MIT perform as well as their male colleagues,"
>your comment is incorrect.
>
>The "dinky" sample size is the entire population, and the
>answer can be ascertained. See my Gork example earlier in this thread.

doesn't matter ... what we have are far too few cases ... to know what is 
going on ... either with THESE particular people ... or, in some larger 
population sense

it is like i look out my window ... and the first 4 women i see ... i note 
their approximate walking speed ... and the first 5 men i see ... i note 
the same ... and i actually take the time to watch them go from point A to 
point B (assuming they don't bump into a tree someplace) ... and note that 
it took the men  a mean amount of time of 14 seconds ... and, the women 
took a mean amount of time of 7 seconds ...

so, by these data ... which i use as a proxy measure of quickness ... i 
make the bold judgment that these ... not women in general ... but THESE 
... women, are quicker in general ...

this is exactly what you are doing with your groups of 5 and 6

THE PROXY MEASURE IS BAD



>I think there is a conflation of issues. I definitely resonate
>to your suggestion [allow me the temporary luxury of interpretation]
>that the "utility function" relating citation counts, publication
>rates, etc. to academic value is uncertain, and there are a host
>of other factors to consider before determining whether anyone
>was discriminated against at MIT.
>
>However -- MIT's assertion that it could not release any information
>without compromising privacy is obviously untrue. For example, I'm
>sure that, had we put you in charge of the investigation, you could
>have found ways to describe the committee's methodology [assuming it
>actually had any] that would not involve releasing individual data,
>but would serve to allow the public to evaluate the process.
>In fact, you've made a start at doing that in your posts.

how can they have it both ways ... ? most institutions are public 
institutions and, these data should be part of the public record ...

we know the salaries of senators ... governors ... the president ... etc. 
 i don't see any constitutional case for keeping this information 
secret???

part of the problem in this case and others like it is ... keeping SOME 
information FROM the public ... while revealing OTHER information ... that 
appears to be cogent to the case that the reporters want to make ...

not a good idea

if these women were all that serious about this problem ... citing salary 
data would not be a problem for THEM ... but, i bet the men would not go 
along with that


>MIT went further than denying the public access to the facts,
>or any information about the facts. It specifically denied
>that the differential outcomes occurred because the women
>"were not good enough," and declared the very question out
>of bounds, i.e., "the last refuge of the bigot."

again ... allowing some tidbits to be put out in the press ... but not 
others ...

>Our data show that the MIT report authors may well have
>engaged, consciously or otherwise, in a compression fallacy.
>But of course we do not know enough to reach strong conclusions.
>MIT will not let anyone know.

which means ... they should be seriously criticized ... and rightfully so ...
while i have NO idea of the merits of these particular cases ... i bet MIT 
does not want (nor would any other big institution where salaries can be 
massively different) to really air the facts ... and the background 
particulars, the deals that were made on appointment, etc. ... it would NOT 
make them look good ... but of course, to hide many of the important pieces 
of this puzzle ... sure does not earn them any brownie points either





=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: On inappropriate hypothesis testing. Was: MIT Sexism & statistical bunk

2001-02-15 Thread dennis roberts

At 10:42 PM 2/15/01 +, Irving Scheffe wrote:
>
>
>Suppose we have
>
>
> Citations   Grant$
>
>Mary  10514 Million
>Fred 12000+  23 Million

let's think about this ... just as another view of course

if we are really considering citations as a proxy for performance ... then,
by my calculations ... mary gets $38059 PER cite in grants ... while fred
only gets $19167 PER cite in grants ... thus, in this world view ... mary
is getting for MIT much more buck for the cite

if fred is doing all that great ... then proportionately he should be
bringing in MORE per cite ... 

just another view of why cites is a very poor indicator ... of performance,
quality, etc.

and, just as an aside ... let's think about just what 12000 cites would
mean??? could there possibly be THAT many people ... THAT interested ... in
the work of fred during the year?

on average, this would mean that about 33 people a DAY are citing his work
... every day of the year ... in order to "cite" ... you have to "write"
... and, it is hard to fathom that there could possibly be that much
writing activity going on where fred is actively on the minds of the writers

not saying there is enough for mary either ... i am just reemphasizing how
uninformative these "values" are



======
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/drober~1.htm


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: On inappropriate hypothesis testing. Was: MIT Sexism & statistical bunk

2001-02-15 Thread dennis roberts


>
>Dr. Steiger's post states, "There were HUGE differences in the citation rates
>of senior men and women. The mean number of citations was, as I recall, 
>roughly
>7000 for the men and 1400 for the senior women."  The actual data were 
>7032 for
>the men and 1539 for the women (with sample sizes of 6 and 5 respectively).
>The geometric means were 4800 and 1400.  A Mann-Whitney U test indicates that
>12.6% of the permutations of these 11 data would produce differences in
>citation number as extreme or more extreme than those reported.  Do these 11
>data offer compelling or dramatic evidence for gender differences in
>productivity?  Not to my way of thinking.  Was I making inferences to a larger
>population?  I didn't intend to.  I was just trying to assess Steiger &
>Hausman's claim of HUGE gender-based differences in productivity.

of course, with these ns ... one or two extreme values for males could have 
made the difference look big ... the actual distributions would have been 
nicer to see given there are so few data ... and, for variables like these 
that tend to be rather skewed to the right ... medians might be more 
appropriate to report ... not means (if means are in fact what was reported)
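
to make the point concrete ... a quick python sketch with PURELY MADE-UP
numbers (the actual citation data were never released, so these are NOT the
MIT values) showing how a couple of large values can pull the mean of a tiny
sample well away from the median, and how an exact mann-whitney test runs on
groups of 6 and 5:

import numpy as np
from scipy.stats import mannwhitneyu, gmean

men   = np.array([1200, 2500, 3000, 4000, 9000, 22000])   # made-up, right-skewed
women = np.array([ 800, 1100, 1500, 1800, 2400])           # made-up

for label, x in (("men", men), ("women", women)):
    print(label, "mean =", x.mean(), "median =", np.median(x),
          "geometric mean =", round(gmean(x)))

# exact two-sided mann-whitney U on the 11 observations (scipy >= 1.7)
u, p = mannwhitneyu(men, women, alternative="two-sided", method="exact")
print("U =", u, "  p =", round(p, 3))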

and what about the notion of senior? it is true that males have dominated 
many of the science professions in terms of numbers, ranks, etc. so ... i 
would suspect that senior males in this case had many MORE years of 
experience ... in rank ... and just in general ... have been given more lab 
space, assistants, etc. so ... the citation rates which appear on the 
surface (though i have argued against them for various reasons) to be 
"telling" ... may not be telling at all since, there are many things that 
have not been "equated" ... even for senior males and senior females

it is indeed good advice when a report like this comes out ... if one wants 
to have a decent discussion about it ... to read it from cover to cover ... 
so that one is able to cogently talk from a position of knowing what is in 
the report and what is NOT in the report

but, i would say, as one who has not read the report ... that to make some 
strong claims about differences ... when you have ns of 6 and 5 
respectively ... seems a real stretch

especially when using criteria that are highly suspect



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: On inappropriate hypothesis testing. Was: MIT Sexism & statistical bunk

2001-02-14 Thread dennis roberts

At 07:58 PM 2/14/01 +, Irving Scheffe wrote:
>Gene,


whether gene was correct or not, it seems problematic to me to argue on the 
one hand that this is really not an inference problem ... and 
then say that it was perfectly reasonable for a statistician to sign 
his/her name to it

i really don't know the context of this particular set of data but, SURELY, 
the interest at MIT can't simply be for this particular department ... it 
has to have broader implications across the institution ...

the other inherent problem, that you mention, is the use of citation rates 
... they are really bogus and everyone knows it (or should) ... because,

1. like hits on a web page ... more hits do NOT mean (necessarily) more 
unique visitors
2. citation rates do NOT indicate whether the person citing has actually 
READ the document being cited
3. citation rates equate volume with influence and we know this is not true 
... though i might be persuaded that there is NOT a negative correlation 
between the two ... and maybe even SOME + r ... but, its size CAN'T be 
assumed to be large

the citation index is meant to be a proxy for INFLUENCE IN THE FIELD and, 
we have no good evidence that this is true ... if you really want this to 
be a proxy for influence, then you have to do more tracking to see WHAT a 
particular citing person has done with the document he/she cites ...

therefore, the fact that for males the citation rate was 7000 ... and for 
females, it was 1400 ... canNOT necessarily be taken as evidence that the 
male has had more influence in the field than the female

i am not arguing that there is not a difference between the males and 
females ... and not arguing at all that salaries should be equivalent ... 
but, many (if not all) of the performance measures are SO WEAK ... that 
their use for making the case one way or the other is highly suspect

and because of this, if i were a statistician, i would be very wary of 
signing my name to a report of this nature without ALL KINDS OF CAVEATS 
being highlighted in bold print





=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



multtest

2001-02-14 Thread dennis roberts

i found the multtest i was looking for ... posted by gerry dallal ...

http://www.tufts.edu/~gdallal/multtest.htm

just for fun ... i repeated this "test" 20 times ... with the following 
frequency distribution of the number of times, out of the 20 runs, that i was told TO PUBLISH!!!

0 = 6   p = .3
1 = 9   p = .45
2 = 4   p = .2
3 = 1   p = .05

interesting

gerry's note at the bottom of the test says that the p value for NOT 
finding a difference is .3585 ... so, i cam pretty close

_________
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



multtest

2001-02-14 Thread dennis roberts

someone posted a url recently (that i have lost obviously) to a demo about 
getting significant results when the null is true ... but doing multiple 
tests ... the file i think was

multtest.htm ...

anyone know from whence this came? thanks

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Statistics is Bunk {thanks!}

2001-02-13 Thread dennis roberts

At 10:06 AM 2/13/01 -0600, Jeff Rasmussen wrote:

>  its clear to me that most of what they learned about stats is lost to 
> forgetting.
>
>best,
>
>JR

just like driving ... if you practice over and over again ... you minimize 
forgetting ...
basic problem with stat ... like most areas that students take say ONE 
course in ... there is no opportunity for overlearning ...





>  Jeff Rasmussen, PhD
>"Welcome Home to Symynet"
> Symynet http://www.symynet.com
>  ANOVA MultiMedia
>   Quantitative Instructional Software
>
>

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



papers

2001-02-12 Thread dennis roberts

i had mentioned earlier that i was beginning to put some chapters,
documents, etc. at

http://roberts.ed.psu.edu/users/droberts/papers/papers.htm

here is what i have so far ... i will add more as i get the chance

STATISTICS RELATED
Parts from my book Descriptive and Inferential Statistical
Analysis ... can be found here ... note, there is a section on linear
correlation and regression that has a problem in the files (and some
other stuff too) ... so, I will not be able to post that here at the
moment. Perhaps later ... 

o   Org. of Data 
o   CT and Variability 
o   Lin Comb/Compos Groups and Position Measures 
o   Norm Dist. 
o   Multiple Correlation 
o   Sampling and Special Distributions 
o   Sampling Error of Means 
o   Intro to Confidence Intervals and Hypothesis Testing 
o   1 Factor Anova 
o   Two Factor ANOVA 
o   Link Between Regression and ANOVA 
o   Power


Other Stat Things

Sampling Distributions and n 
Confidence Interval and Standard Error 

MEASUREMENT AND ASSESSMENT RELATED

o   Summary of Paper about Mastery Learning 
o   Test Construction Model 
o   Reliability and Test Length (Chart) 
o   Multitrait-Multimethod Validation Chart 
o   Notes on Scaling 
o   Correction for Guessing Formula Explanation 
o   VERY SIMPLE Intro to Notion of Factor Analysis 
o   Cognitive Test Item Writing Guidelines 


_________
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:dmr@psu.edu
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=


papers

2001-02-12 Thread dennis roberts

i have begun to put papers, documents, and chapters from books ... at a 
site i set up this morning ... mainly statistical related and 
assessment/measurement related ...

http://roberts.ed.psu.edu/users/droberts/papers/papers.htm

i just started this morning and, will be converting various documents to 
pdf files and placing them here as i get time

i hope that you or some of your students will find some of this helpful



_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: MIT Sexism & statistical bunk

2001-02-09 Thread dennis roberts

At 05:19 PM 2/9/01 +, Gene Gallagher wrote:


>  The report argues that the
>gender difference in MIT salary and lab space was justified because "few
>would question the fairness of rewarding those who publish more widely,
>are more frequently cited, or raise the most in grant funds (p. 8, IWF
>report)"


this raises a related but perhaps an even more troubling matter ... (which 
many say ... ah shucks, that is just "market" forces at play ... and thus, 
don't even consider it a legitimate variable to enter into the fray) ... 
but i do

the largest % of the salary variance at most institutions, large ones 
anyway, is NOT rank but, college ... ie, variations across colleges are 
greater than within ranks ...

these differences can be massive ... (if you think the difference between 
male and females anywhere approach college differences, think again)

so, if one wants to examine (IF they do) the matter of productivity ... 
then the argument would go something like this:

if you believe that more productivity (assuming rank were constant) 
deserves more  ... then, that notion should apply ACROSS the 
institution as a whole ...

which we know does not of course ...

the productivity issue is a lame variable in the overall scheme of things 
... since, those making the most money and in the highest salaried colleges 
HAVE the most time to devote to this activity called "scholarship" ... 
because they have the smallest teaching and advising loads, in general ...

at penn state for example, according to our policy manual, salary 
increments are based on MERIT ONLY ... that is, the notion of an across the 
board increment for everyone because cost of living goes up ... has no 
legal place in our system (rather stupid i say) ... so technically, if only 
merit is to be the factor, merit would have to relate (either totally or 
darn close to it) ... to productivity ...
but, if you try to push the notion of REAL productivity ... the logic 
breaks down quickly since, differences in salary seem to have little to do 
with productivity ... but rather, WHERE you happen to be within the entire 
university system

what DOES productivity mean anyway? the # of articles? who really READS 
them? HOW much money you bring in?? how many students you teach? etc. etc.

it is really difficult, at this micromanaging level, to try to 
differentiate salary ... and salary increments ... by productivity measures 
... when it appears that so many NON productivity factors are the key 
elements in general level of salary for faculty and, the amount of 
increments given





=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: careers in statistics

2001-02-09 Thread dennis roberts

At 09:12 AM 2/9/01 -0600, Jay Warner wrote:

>>3. job satisfaction
>
>that's your responsibility, not the company''s.

i agree with all the other points that jay made but, i disagree to some 
extent with this latter one ...

how well you are satisfied with your "job" is a mix of:

1. match between your skills and what the job demands
2. the truthfulness of the employer at carefully delineating what your job 
really will be
3. how much effort YOU make
4. what primary and secondary resources the employer provides FOR your work

4 is important ... and if it is lacking to a substantial degree (which you may 
not be able to ascertain until you are on the job), satisfaction will suffer

for example ... not related to stat specifically ... but, what if you get a 
faculty appointment where, part of that job will be to teach a large intro 
section of stat ... and, the promise is made to you that there will be 
resources for you to carry out that responsibility ... such as a good 
classroom with good tech for demos, etc. ... teaching assistant(s) to 
handle the volume of office hours, etc. ...

and, while these happen for the first semester or two ... slowly but surely 
they start dwindling away ... can we really expect you to be really 
satisfied? i doubt it and, it is not all your responsibility to make it so 
either




=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Statistics is Bunk

2001-02-08 Thread dennis roberts

At 12:21 PM 2/8/01 -0600, Jeff Rasmussen wrote:

> I just thought I'd throw this out here and see if there is any 
> interest
>
> For a Graduate Level Statistics course I teach in the Psychology 
> Department, I start out with the proposition:  "Psychology is Bunk."  I 
> tie this into Ford's proclamation that History is Bunk.  I pre-poll the 
> students to see where they stand on the issue, and then assign them into 
> groups to argue the proposition Pro and Con.  Since the course is Stats, 
> their arguments focus on current methodology.
>
> Well, anyhow, I'm curious how many of you teach any alternatives 
> to the scientific method and statistical analysis in your stats and 
> methods courses.  I've lost the faith in the religion of science over the 
> years, and am curious if there are other lapsed-scientists, or only true 
> believers on this list.
>
>best,


my view is that it would be better to start off  positive ... not negative ...
sure, as you go, point out the difficulties ... "discovering" "summarizing" 
knowledge is not easy ... but, there are things we CAN do ...

stress what we can do ... with appropriate caveats of course


>JR
>
>  Jeff Rasmussen, PhD
>"Welcome Home to Symynet"
> Symynet http://www.symynet.com
>  Website Development
> Eastern Philosophies Software
>   Quantitative Instructional Software
>
>

_
dennis roberts, educational psychology, penn state university
208 cedar, AC 8148632401, mailto:[EMAIL PROTECTED]
http://roberts.ed.psu.edu/users/droberts/drober~1.htm



=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=


