At 10:23 AM 4/13/00 -0500, Michael Granaas wrote:
In addition to defining the variables, some areas do a better job of
defining and therefore testing their models. The ag example is one where
not only are the variables relatively clear, so are the models. That is,
there is one highly plausible
a professor thought that he was producing a test of 50 items at 'about the
50%' difficulty level, that is ... on average, the scores would be about
50%. now, he collected data from a random sample of n=40 of his class ...
gave them the test ... and then did a t-test using 25 as the null ... he
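as a quick sketch of that professor's test in python ... the sample mean and sd below are hypothetical, since the post doesn't give them:

```python
import math

# hypothetical summary of the professor's n = 40 sample on the 50-item test
n, xbar, s = 40, 27.1, 5.8
mu0 = 25  # null: mean score is 25, i.e. 'about 50%' difficulty

se = s / math.sqrt(n)   # estimated standard error of the mean
t = (xbar - mu0) / se   # one-sample t statistic, df = n - 1 = 39
print(round(t, 2))      # ~2.29 with these made-up numbers
```

with df = 39, a t near 2.29 would reject the null at the usual two-tailed .05 level.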
it appears to me that we are having the same kinds of discussions on this
topic as usual and we go round and round ... and where we stop depends
on when people get tired of it
is progress being made? i wonder ...
perhaps some of this time would be better spent defining more what a
==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/droberts.htm
===
This list is open to everyone. Occasionally, less thoughtful
people send inappropriate
here are two sample r values ... done in minitab ... and the associated output
Correlations: C52, C53
Pearson correlation of C52 and C53 = 0.599
P-Value = 0.000
MTB > corr c54 c55
Correlations: C54, C55
Pearson correlation of C54 and C55 = 0.586
P-Value = 0.075
now, minitab prints out a p
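the p attached to a pearson r depends on n as much as on r ... the usual test statistic is t = r * sqrt(n-2) / sqrt(1 - r^2) on n-2 df. a sketch (the ns below are hypothetical ... minitab doesn't echo them in that output):

```python
import math

def r_to_t(r, n):
    """t statistic (df = n - 2) for testing rho = 0 from a sample r."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# nearly identical rs, very different evidence ... because n differs
print(round(r_to_t(0.599, 40), 2))  # ~4.61 ... p near 0
print(round(r_to_t(0.586, 10), 2))  # ~2.05 ... p around .075 on 8 df
```

so an r of .586 with p = .075 must have come from a much smaller sample than the r of .599 with p = .000.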
i found this ... someone has made some excel demos ... using the lotus
product screencam ... which shows desktop work ...
http://www.business.utah.edu/~mgtdgw/statmov.htm
screencam can be seen at
http://www.lotus.com/home.nsf/welcome/screencam
you need a screencam player ... which is a
At 02:29 PM 4/11/00 -0300, Robert Dawson wrote:
The problem is that failure to reject means *either* that the null is
true *or* that the sample size too small *or* both;
"or" both says then ... that the null IS true AND that sample size is
TOO small ...
too small for what???
translate THAT into some working hunch worthy of testing ... rather
than defining nulls that in many cases are rather silly ... i would be happier
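one reading of "too small": too small to give decent power against any effect worth caring about. a minimal power sketch for a one-sided z test, all numbers hypothetical:

```python
import math

def phi(x):
    """standard normal cdf, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power(delta, sigma, n, z_alpha=1.645):
    """power of a one-sided z test (alpha = .05) against a true shift delta."""
    return phi(delta / (sigma / math.sqrt(n)) - z_alpha)

print(round(power(2, 10, 10), 2))   # ~0.16 ... n too small to see delta = 2
print(round(power(2, 10, 100), 2))  # ~0.64 ... same effect, more data
```

with n = 10 the test usually fails to reject even though the null is false ... which is exactly why "failure to reject" is ambiguous.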
Michael Granaas wrote:
On Fri, 7 Apr 2000, dennis roberts wrote:
At 04:00 PM 4/7/00 -0500, Michael Granaas wrote:
But whatever form hypothesis testing takes, it must first and foremost be
viewed in the context of the question being asked.
this seems to be the key to REinventing
At 01:16 PM 4/10/00 -0300, Robert Dawson wrote:
No, if you have to start with "a more sensible null would be perhaps" you
almost surely do not have a hypothesis worth testing.
now we get to the crux of the matter ... WHY do we need a null ... or any
hypothesis ... (credible and/or
At 01:16 PM 4/10/00 -0300, Robert Dawson wrote:
both leave the listener wondering "why 0.5?" If the only answer is "well,
it was a round number close enough to x bar [or "to my guesstimate before
the experiment"] not to seem silly, but far enough away that I thought I
could reject it." then the
here are a few (quickly found, i admit) urls about scientific method ... some
are quite interesting
http://dharma-haven.org/science/myth-of-scientific-method.htm#Overview
http://teacher.nsrl.rochester.edu/phy_labs/AppendixE/AppendixE.html
http://idt.net/~nelsonb/bridgman.html
the logic behind the null hypothesis method is flawed ... IF you are
looking for truth AND you keep following the logic of testing AGAINST a
null ...
first, say you reject the null of rho = 0 ...
then, LOGICALLY ... this says that since we don't know what truth is ...
just what we think it
let's say that today ... we as the statistical community decided, by
democratic vote, that the concept of 'hypothesis testing' ... which has
essentially dominated statistical work for as long as i can remember
(which, er um ... is a LOT of years!) ... is relegated to the 'we USED
to do
the discussion of comparing variances brings to mind the following ... and
is related to the post i just sent re: hyp testing
let's assume that we are interested whether there is some difference in
treatment effects ... as measured by means ... our null is the mu1 = mu2
now, we use the
new tricks is not easy, right? If such a vote were taken today with the
results suggested by Mr. Roberts, I know I have successfully misled
literally thousands of students. Would re-education be the answer?
[EMAIL PROTECTED] (dennis roberts) wrote in
[EMAIL PROTECTED]:
let's say that to
At 04:00 PM 4/7/00 -0500, Michael Granaas wrote:
But whatever form hypothesis testing takes, it must first and foremost be
viewed in the context of the question being asked.
this seems to be the key to REinventing ourselves ... make sure the focus
is on the question ... AND, to REshape the
how come when you do a pdf on a unit normal distribution and one where, say,
the mean is 100 and sd = 15 ... you get different pdf values along the Y
axis??? is it just because the length of the continuity along X is
narrower/wider?
[truncated minitab character plot of the pdf values]
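the heights differ because a density must integrate to 1 ... stretch the X axis by a factor of sd and the curve has to shrink vertically by the same factor. a quick check in python:

```python
import math

def normal_pdf(x, mu, sd):
    """density of a normal distribution at x."""
    return math.exp(-((x - mu) / sd) ** 2 / 2) / (sd * math.sqrt(2 * math.pi))

# at the mean, the density is 1 / (sd * sqrt(2 * pi))
print(round(normal_pdf(0, 0, 1), 4))       # ~0.3989 for the unit normal
print(round(normal_pdf(100, 100, 15), 4))  # ~0.0266 ... exactly 0.3989 / 15
```

so the N(100, 15) curve peaks at 1/15 the height of the unit normal ... same total area, spread over a 15-times-wider scale.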
go to http://www.sagepub.com/
search on ... factor analysis ... some nice short books here
At 03:06 PM 4/5/00 +0200, Gottfried Helms wrote:
[EMAIL PROTECTED] wrote:
What are your favorite book(s) on factor analysis?
What do you think of R. Gorsuch's book?
My favorite is Stan
the other thing i wanted to mention was that ... if you develop some strict
calculator policy ... then you spend too much time at the beginning of a
class ... checking to make sure that each student ONLY has what is allowed ...
in addition, since good calculators allow storage ... and we would
the purpose of any inferential statistical procedure is to either answer
the question: what is the parameter? ... or to test some specific
hypothesis ABOUT a parameter ...
thus, the goal of inferential statistics IS finding the parameter.
now, significance is nothing more than asking what is
At 08:19 AM 03/21/2000 -0500, Herman Rubin wrote:
The purpose of any course should be the development of
knowledge and the ability to use it. Even the use of
assignments for any other purpose does not contribute to
education. Assignments for the purpose of having the
students do assignments,
grading projects for a first assignment REreminds me that ... some students
go way above and beyond the call of duty when doing projects ... in my
case, they have to download a file ... do some analyses ... and then do
some write up of what they found.
now, some go to a lot of trouble to do
well, this is interesting indeed ... for let's say that you did adopt a .1
level for a pilot AND, you just happened to reject the null IN the pilot
... is THAT sufficient justification for committing more time and resources
TO a large main study?? the implication from this pilot result is that
i use minitab and it does not display anywhere the mode (not saying it
should) ... does anyone who uses any other software know if your software
displays mode/modes in any command or output display? (i don't mean a
frequency distribution where YOU can locate it ... but, rather ... it lists
AS
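for what it's worth, python's standard library will list the mode(s) directly ... multimode returns ALL tied values, not just the first one (data below is made up):

```python
from statistics import multimode

# small made-up data set with four values tied at two occurrences each
data = [10, 12, 12, 14, 15, 16, 16, 19, 20, 21, 22, 22, 23, 23]
print(multimode(data))  # [12, 16, 22, 23]
```

note that plain statistics.mode would give only 12 here ... multimode is the one that behaves sensibly with ties.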
in minitab ... is there a way to make a grouped frequency distribution and
store the results? for example ...
C1 Count
10 1
12 2
14 1
15 1
16 2
19 1
20 1
21 1
22 2
23 2
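outside minitab this is easy to build and STORE as an object ... e.g. in python, with the raw data reconstructed from the table above:

```python
from collections import Counter

data = [10, 12, 12, 14, 15, 16, 16, 19, 20, 21, 22, 22, 23, 23]
counts = Counter(data)            # value -> count, kept as a reusable object
for value in sorted(counts):
    print(value, counts[value])   # same two-column layout as the table
```

the Counter sticks around for later use (subsetting, cumulative counts, etc.) rather than being display-only output.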
a lot of this depends on who the target audience is. is it adults in
general? those who have an interest in computers? those who 'do' email? those
who are on email lists?
this is your first concern
then, after deciding on the above ... one has to assess IF a sampling plan
will get TO those
This post is rather long, sorry.
I have started a new listserv ... called INTROSTAT-L ... that will be
housed here at Penn State, and uses the list server lists.psu.edu. Here is
a brief explanation of what the purpose of the list is, and information
about who the list has been primarily
says (if it can say anything) that time produces test
performance ... surely the other way around would make no sense
what kind of data are you thinking about when you pose this question?
it appears that the longer i go, the more info i tend to gather for other
folks ... especially via the web. now, i have my favorite search engines
... and for sure, none is perfect. in addition to things like altavista,
infoseek, etc. ... i like ones such as google, directhit, dogpile, and
At 08:20 AM 1/7/00 -0500, Paige Miller wrote:
I read somewhere that a state government agency deliberately left three
computers unfixed for Y2K and they crashed immediately and were useless.
the problem with this is how does one know that these 3 would not have
crashed even if there were
happy new year to everyone ... hope your y2k +1 year is great!
now, the y2k scare provides us with an excellent example of confounds (more
or less) .. consider the following:
Time One: lots of hype about "potential" disasters related to y2k ... (PRETEST)
Time Two: billions of $$$ spent
over and over again ... and, in this
context a transcript merely gives us some more information about certain
states of nature ...
in this context ... transcripts are helpful ...
definitions ... then this translates into unclear
procedures for doing so in real practice ... and, college catalogs don't
help ... have a look at where grades are discussed and see if that helps
much. i doubt it
this shows how naive deming really was ...
who says learning "should" be a joy? learning is WORK ... and, work is
hard. now, some kids really relish the task and challenges ... but many
others do not ... should we blame THEM?
but, i don't really see what deming has to do with our discussion of
[EMAIL PROTECTED] wrote:
I never, as a teacher, used any curving
procedure to lower students grades!
first, why does she want to do this?
second, does the distribution as is, look like a normal distribution? if
not ... why would you want to FORCE it to look like that?
third ... usually, "curving" means lowering the cutoffs ... that were
established at the beginning of a course (maybe in the
At 02:34 PM 12/21/99 -0600, EAKIN MARK E wrote:
Dennis Roberts writes:
i said this ...
third ... usually, "curving" means lowering the cutoffs ... that were
established at the beginning of a course (maybe in the syllabus) if
that is the case ... then there is NO s
of course ... if one believes that NEITHER really gives you any useful
information about population parameters ... means ... or correlation
values, etc. ... remember, the t distribution and associated tests using
it are not JUST used for means ... THEN, maybe this distinction is trivial
...
in minitab for example ... the command ANOVA insists on equal ns in the
cells ... glm does not ... this is not a conceptual difference as don was
pointing out ... but, it is important IF you happen to be using minitab
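to be clear, the equal-n restriction is about how sums of squares get partitioned in MULTI-factor designs ... the one-way computation itself never needs balanced groups, as this plain-python sketch (made-up data) shows:

```python
def one_way_F(groups):
    """F statistic for a one-way anova; group sizes need not be equal."""
    all_x = [x for g in groups for x in g]
    grand = sum(all_x) / len(all_x)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_b = len(groups) - 1            # between-groups df
    df_w = len(all_x) - len(groups)   # within-groups df
    return (ss_between / df_b) / (ss_within / df_w)

# made-up groups of sizes 3, 4, and 5 ... unequal ns are no obstacle here
print(round(one_way_F([[1, 2, 3], [2, 4, 4, 6], [5, 7, 9, 8, 6]]), 2))  # 11.31
```

the minitab restriction on ANOVA (vs glm) is a software design choice about the harder multi-factor partitioning, not a limit of the arithmetic.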
i would highly recommend a paper by ken brewer ... titled: Behavioral
Statistics Textbooks: Source of Myths and Misconceptions, Journal of
Educational Statistics, Fall 1985, Vol. 10, No. 3, pp. 252-268 ... for an
excellent discussion of the CLT
At 12:20 PM 12/15/99 -0600, Olsen, Chris wrote:
At 11:37 PM 11/29/99 -0500, Bob Hayden wrote:
Someone found another bug in Excel's statistics routines. Someone
else came up with a clever alternative. What you have to think about
is all the bugs you have not noticed yet. Anybody can do statistics
with Minitab, but you need a Ph.D. in
i would like to comment on this ... without getting anyone mad at me. i
have heard this argument many times before but ... i think that if we
promulgate this ... what it means is that we are not doing our students any
favors ... i don't view some general stat package as a "specialist" package
...