Hi
I like to use small, artificially generated data sets with
integer parameters to introduce analyses. Often, however, I find
it difficult to avoid undesirable contingencies among the scores
(e.g., linear dependencies in within-subject designs). Is there
an algorithmic way to generate such
Hi all,
I am working on a dissertation that analyzes some international tests of
mathematics achievement. I need to use the responses (which can be
considered "correct"/"incorrect") and estimate an IRT (Item Response Theory)
model to describe the test.
In a nutshell, assume the test measures a
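For concreteness, a minimal sketch of the simplest such model (a Rasch / one-parameter logistic fit by joint maximum likelihood); the toy response matrix and the use of scipy.optimize are illustrations of my own, not anything specified in the post:

    import numpy as np
    from scipy.optimize import minimize

    def rasch_prob(theta, b):
        """P(correct) for ability theta and item difficulty b (1PL / Rasch model)."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def neg_log_likelihood(params, responses):
        """Joint negative log-likelihood for a persons-by-items 0/1 response matrix."""
        n_persons, n_items = responses.shape
        theta = params[:n_persons]                       # person abilities
        b = np.concatenate(([0.0], params[n_persons:]))  # item difficulties; first fixed at 0 to identify the scale
        p = rasch_prob(theta[:, None], b[None, :])
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    # toy correct/incorrect matrix: 4 examinees by 3 items
    responses = np.array([[1, 1, 0],
                          [1, 0, 0],
                          [0, 1, 1],
                          [0, 1, 0]])
    start = np.zeros(responses.shape[0] + responses.shape[1] - 1)
    fit = minimize(neg_log_likelihood, start, args=(responses,), method="BFGS")
    print(fit.x)  # first 4 entries: abilities; last 2: difficulties of items 2 and 3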
Hi
On 12 Mar 2001, Radford Neal wrote:
Yes indeed. And the context in this case is the question of whether
or not the difference in performance provides an alternative
explanation for why the men were paid more (one supposes, no actual
salary data has been released).
In this context, all
At 02:25 PM 3/12/01, Radford Neal wrote:
In this context, all that matters is that there is a difference. As
explained in many previous posts by myself and others, it is NOT
appropriate in this context to do a significance test, and ignore the
difference if you can't reject the null
Hi, all,
We are testing a group of subjects on their performance in two different
conditions (say, A and B), and we are testing them individually. We have an
alternative hypothesis that reaction time in condition A should be longer
than in condition B, so we perform a one-tailed t test. However,
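For reference, a minimal sketch of that one-tailed paired test with made-up reaction times (the `alternative` argument of scipy's ttest_rel needs a reasonably recent SciPy, roughly 1.6 or later):

    import numpy as np
    from scipy import stats

    # hypothetical reaction times (ms) for the same subjects under conditions A and B
    rt_A = np.array([512, 480, 535, 498, 520, 470, 505, 490])
    rt_B = np.array([495, 470, 510, 500, 505, 460, 500, 480])

    # one-tailed paired t test; H1: A is slower than B, i.e. mean(A - B) > 0
    t_stat, p_one_tailed = stats.ttest_rel(rt_A, rt_B, alternative="greater")
    print(f"t = {t_stat:.2f}, one-tailed p = {p_one_tailed:.3f}")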
It isn't actually that easy, in the sense that
most data humans make up has a low efficiency with
respect to design criteria -- the determinant of
the cross-product matrix tends to be small. The
simplest way is to use a computer program that
calculates algorithmic designs.
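As a rough sketch of what such a program optimizes (the brute-force random search below is only an illustration, not a reference to any particular package): generate small integer candidate designs and keep the one whose cross-product matrix X'X has the largest determinant, i.e. a crude D-optimality criterion.

    import numpy as np

    rng = np.random.default_rng(1)

    def d_criterion(X):
        """Determinant of the cross-product matrix X'X; larger means a more efficient design."""
        return np.linalg.det(X.T @ X)

    best_X, best_det = None, -np.inf
    for _ in range(5000):
        # candidate: 8 rows, an intercept plus two small integer predictors
        X = np.column_stack([np.ones(8, dtype=int),
                             rng.integers(-3, 4, size=8),
                             rng.integers(-3, 4, size=8)])
        det = d_criterion(X)
        if det > best_det:
            best_X, best_det = X, det

    print(best_X)
    print("det(X'X) =", best_det)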
jim clark wrote:
auda wrote:
Hi, all,
We are testing a group of subjects on their performance in two different
conditions (say, A and B), and we are testing them individually. We have an
alternative hypothesis that reaction time in condition A should be longer
than in condition B, so we perform a
Jim:
I agree with Radford Neal's comments,
and urge careful reconsideration of the
foundation behind some of the comments
made.
For example, suppose you had a department
in which the citation data were
Males     Females
12220        1298
 2297        1102
The male with 12220 is, let's
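A quick arithmetic check on those four numbers shows how heavily the single 12220 case drives the comparison (just a sketch of the point about means):

    import numpy as np

    males = np.array([12220, 2297])
    females = np.array([1298, 1102])

    print("mean, males:  ", males.mean())    # 7258.5
    print("mean, females:", females.mean())  # 1200.0
    print("male mean without the 12220 case:", males[1:].mean())  # 2297.0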
Hi
On Mon, 12 Mar 2001, Irving Scheffe wrote:
Jim:
For example, suppose you had a department
in which the citation data were
Males     Females
12220        1298
 2297        1102
When I said outlier, I had in mind hypothetical data of the
following sort (it doesn't matter to me whether
auda wrote:
Hi, all,
We are testing a group of subjects on their performance in two different
conditions (say, A and B), and we are testing them individually. We have an
alternative hypothesis that reaction time in condition A should be longer
than in condition B, so we perform a
I'm not clear on what your design is, but it seems that
the problem is in the between-subjects effect, not the within-subjects
effect. Note that you only have 4 df within and 4 dependent variables
I'm trying to reduce all stats to a few simple procedures that
students can do EASILY with available stats packages. A two-way
ANOVA or an ANCOVA is as complex as I want to go. I thought SPSS
would do the trick, but I was amazed to discover that it can't.
Here's the example. I want students
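The example itself is cut off above, but for what it's worth, here is a minimal two-way ANOVA of the sort described, done in Python with statsmodels; the data frame and the column names score, factor_a, and factor_b are placeholders of my own, not taken from the post:

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # hypothetical balanced 2x2 data set; substitute the real scores and factors
    df = pd.DataFrame({
        "score":    [12, 14, 11, 15, 18, 20, 17, 19, 10, 9, 12, 11, 16, 15, 17, 18],
        "factor_a": ["lo"] * 8 + ["hi"] * 8,
        "factor_b": (["c1"] * 4 + ["c2"] * 4) * 2,
    })

    # two-way ANOVA with interaction
    model = ols("score ~ C(factor_a) * C(factor_b)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))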
At 7:34 PM + 12/3/01, Jerry Dallal wrote:
Don't do one-tailed tests.
If you are going to do any tests, it makes more sense to do one-tailed
tests. The resulting p value actually means something that folks can
understand: it's the probability the true value of the effect is
opposite to what
On Tue, 13 Mar 2001, Will Hopkins wrote in part:
Example: you observe an effect of +5.3 units, one-tailed p = 0.04.
Therefore there is a probability of 0.04 that the true value is less
than zero.
Sorry, that's incorrect. The probability is 0.04 that you would find an
effect as large as
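To make the distinction concrete, a small simulation sketch; the standard error of about 3.0 is back-calculated from the quoted effect of +5.3 and one-tailed p = 0.04, purely for illustration. The 0.04 is the probability of observing an estimate at least this large when the true effect is exactly zero, not the probability that the true effect is negative.

    import numpy as np

    rng = np.random.default_rng(0)

    observed = 5.3
    se = 5.3 / 1.75   # standard error implied by a one-tailed p of about 0.04

    # sampling interpretation: how often does a true effect of exactly zero
    # produce an estimate at least as large as +5.3?
    draws = rng.normal(loc=0.0, scale=se, size=1_000_000)
    print("P(estimate >= +5.3 | true effect = 0):", (draws >= observed).mean())
    # prints roughly 0.04 -- that is what the one-tailed p value measures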