On Mon, 24 Jul 2000, Mike Hewitt wrote:

> Thank you all for the replies!  You have given me much to (re)think
> about:).  I have a few follow-up questions if I may...
> 
> 1.  It was mentioned that a floor or ceiling effect could affect the
> interpretation of the results.  I do believe that there could have 
> been a "floor/basement" effect during the pretest.  I understand how 
> this can have an effect on the results, but why only for the 
> interaction and not the main effects?

You report different pretest means for the different conditions;  
possibly a "floor" effect was operating more strongly in one group 
(the one with the smallest mean?) than in the others. 
        It is also imaginable that a "ceiling" effect was operating, in 
one group but not others, at posttest.

> 2.  I have plotted my results a number of different ways and have run 
> simple effects to see where differences and relationships occur as
> suggested.  (I wish I could show you the graphs), but let me give you 
> the means for
> 
> test x model (p = .003 N = 82):
> 
>                  Pretest       Posttest        Gain
> 
> Model             16.85         21.13          4.28
> No Model          17.05         19.16          2.11
> 
> and test x modeling x self-evaluation (p = .023)
> 
>                          Pretest    Posttest        Gain
> Model/Eval                16.35      21.48          5.13
> Model/No Eval             17.41      20.87          3.46
> No Model/Eval             15.59      17.78          2.19
> No Model/No Eval          16.96      20.17          3.21

You sure about these?  The "No Model" mean ought to be an average 
(weighted by sample sizes, which you haven't reported) of the two "No 
Model" means in the second table;  but  17.05  is NOT an average of 
15.59 and 16.96!  Likewise, 2.11 is not an average of 2.19 and 
3.21, as it should be.  Other values appear mildly inconsistent:  
presumably there were the same numbers of respondents at pretest and 
posttest in each of the 4 subgroups, in which case one would expect 
16.85 to be the same sort of weighted mean of 16.35 and 17.41  as 21.13  
is of 21.48 and 20.87;  but the 16.85 is closer to the "Eval" condition, 
and 21.13 is closer to the "No Eval" condition.
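
The arithmetic check above can be sketched in a couple of lines (Python 
used only for illustration; the numbers are your reported means, not raw 
data):

```python
def pooled_mean_possible(overall, sub_a, sub_b):
    # Any weighted average of two subgroup means (positive weights
    # summing to 1) must lie between the smaller and the larger mean.
    return min(sub_a, sub_b) <= overall <= max(sub_a, sub_b)

# Reported "No Model" pretest mean vs. its two subgroup means:
print(pooled_mean_possible(17.05, 15.59, 16.96))   # False: inconsistent

# Reported "Model" pretest mean vs. its two subgroup means:
print(pooled_mean_possible(16.85, 16.35, 17.41))   # True: at least possible
```

A False from this bounds check proves an inconsistency outright; a True 
only means the value is attainable for SOME choice of subgroup sizes.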

> These results are for overall performance.  I also examined other 
> areas of performance such as melodic accuracy and tone with similar 
> results for all areas.  I interpreted this to mean that listening to a 
> model may be effective during "self-evaluation", but not necessarily 
> during "no self-evaluation". 

Yes, that does seem to be what the data are trying to say.  Depending in 
part on what you mean by "self-evaluation" (or, more properly, what the 
Subjects actually DID during "self-evaluation"), this result may not be 
unreasonable at all.

> So...the effects of modeling may be more clearly understood 
                --  or at least observed --
> when you combine it with evaluation. 
> Also, there were no significant effects for Self-Evaluation. 

This is at best misleading to write, and is arguably not true.  From the 
pattern of your means, I'd not be surprised if the _main_effect_ of 
"Self-Evaluation" were not significant.  But "Self-Evaluation" clearly 
has a notable effect in the presence of a model (at least for the 
modelling you supplied in the experiment).

> Are the following conclusions correct?
> 
> 1.  The combination of listening to a model and self-evaluation is the 
> most effective method for improving performance.

This may be true.  For it to be unassailable as a conclusion, you would 
need to show that the improvement under that combination (gain = 5.13) is 
significantly greater than the gain under the next most (apparently) 
effective combination (Model/NoEval, gain = 3.46);  or significantly 
greater than the gain under the NoEval condition (gain = an average of 
3.46 and 3.21), against which you have somewhat greater power than 
against the single Model/NoEval condition.
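
Such a comparison could be run as a two-sample (Welch) t-test on the 
per-subject gain scores.  A minimal sketch, assuming the raw gains were 
available -- the data below are invented purely for illustration and are 
NOT the study's data:

```python
import math
from statistics import mean, variance

def welch_t(x, y):
    # Welch's two-sample t statistic and Welch-Satterthwaite df,
    # appropriate when the two groups may have unequal variances.
    nx, ny = len(x), len(y)
    vx, vy = variance(x), variance(y)      # sample variances (n - 1)
    se2 = vx / nx + vy / ny
    t = (mean(x) - mean(y)) / math.sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

# HYPOTHETICAL gain scores, invented so the group means land near the
# reported 5.13 and 3.46; replace with the actual per-subject gains:
model_eval   = [5.0, 6.1, 4.2, 5.5, 4.8]
model_noeval = [3.1, 4.0, 2.9, 3.8, 3.5]
t, df = welch_t(model_eval, model_noeval)
```

With the real data, a significant positive t here would license the 
"most effective combination" claim; a non-significant one would not.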

> 2.  When performing self-evaluation, listening to a model is more 
> effective for improving performance than not listening to a model.

The pattern of results implies that quite clearly, I think.  Compulsive 
reviewers might want a formal comparison between the two groups.

> 3.  Listening to a model may not be more effective than not listening 
> to a model when not performing self-evaluation.

I would be inclined to write "is not detectably more effective", and 
include a formal comparison, e.g. "(t = 0.3, p > .5)" or whatever;  and 
somewhere in the works, possibly in discussion of the results, I'd want 
to see a clear description of what you think the Subjects were actually 
doing (or, perhaps, not doing;  but this is MUCH harder to argue) in the 
"Evaluation" and "No Evaluation" conditions.

Some of the remarks above may be vitiated when you look more closely at 
the REAL subgroup means.  It is curious that at pretest the Modelling 
mean is higher than the No Modelling mean at each level of Evaluation, 
yet the overall Modelling mean is LOWER than the overall No Modelling 
mean.  A reversal of that kind can in principle arise from very unequal 
subgroup sizes (Simpson's paradox), but here it is ruled out:  as 
remarked earlier, 17.05 lies above BOTH No Modelling subgroup means, so 
it cannot be their weighted average.

 ------------------------------------------------------------------------
 Donald F. Burrill                                 [EMAIL PROTECTED]
 348 Hyde Hall, Plymouth State College,          [EMAIL PROTECTED]
 MSC #29, Plymouth, NH 03264                                 603-535-2597
 184 Nashua Road, Bedford, NH 03110                          603-471-7128  



=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
