I'll try to add some more information about my experiment - maybe that
will help clear things up.
Instead of measuring the learning curve directly (i.e. the number of
correct responses per block), I created a variable that subtracts the
number of correct answers in the first block from that of the last block.
I did the same thing for reward and punishment.
I also use the same predictors in both regression models.
So far I have just created a new variable: the learning vector for reward
minus the learning vector for punishment.
I think that measures the difference.
I just wanted to know whether there is another way to compare models with
the same predictors but a different dependent variable.
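For concreteness, a minimal R sketch of the difference-score setup described above. All column names (reward_first, reward_last, punish_first, punish_last, pred1, pred2) and the simulated data are made up for illustration - they are not the actual experimental variables:

```r
# Simulated stand-in for the experiment: per-participant correct-response
# counts in the first and last blocks, under reward and punishment,
# plus two shared predictors (all names and values are hypothetical).
set.seed(1)
dat <- data.frame(
  reward_first = rpois(30, 5), reward_last = rpois(30, 8),
  punish_first = rpois(30, 5), punish_last = rpois(30, 6),
  pred1 = rnorm(30),           pred2 = rnorm(30)
)

# Learning score per condition: last block minus first block
dat$reward_learn <- dat$reward_last - dat$reward_first
dat$punish_learn <- dat$punish_last - dat$punish_first

# The new variable: reward learning minus punishment learning
dat$learn_diff <- dat$reward_learn - dat$punish_learn

# Regress the difference on the shared predictors, and compare against
# an intercept-only model (the reduction when the coefficients are equal)
fm1 <- lm(learn_diff ~ pred1 + pred2, data = dat)
fm0 <- lm(learn_diff ~ 1, data = dat)
anova(fm0, fm1)
```

As Joris notes below, whether this comparison is meaningful depends on the two responses being on the same scale and measured on the same participants.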


On Thu, Jun 10, 2010 at 12:33 PM, Joris Meys <jorism...@gmail.com> wrote:

> This is only valid in case your X matrix is exactly the same, thus
> when you have an experiment with multiple response variables (i.e.
> paired response data). When the data for both models come from a
> different experiment, it ends here.
>
> You also assume that y1 and y2 are measured on the same scale and can
> be subtracted. If you take two models, one with response Y in meters
> and one with response Y in centimeters, all else equal, your method
> will find the models "significantly different" whereas they are
> exactly the same except for a scaling parameter. If we're talking two
> different responses, the subtraction of the responses doesn't even
> make sense.
>
> The hypothesis you test is whether there is a significant relation
> between your predictors and the difference of the "reward" response
> and the "punishment" response. If that is the hypothesis of interest,
> the difference can be interpreted in a sensible way, AND both the
> reward learning curve and the punishment learning curve are measured
> simultaneously for every participant in the study, you can
> intrinsically compare both models by modelling the difference of the
> response variable.
>
> As this is not the case (learning curves from punishment and reward
> can never be made up simultaneously), your approach is invalid.
>
> Cheers
> Joris
>
> On Thu, Jun 10, 2010 at 9:00 AM, Gabor Grothendieck
> <ggrothendi...@gmail.com> wrote:
> > We need to define what it means for these models to be the same or
> > different.  With the usual lm assumptions suppose for i=1, 2 (the two
> > models) that:
> >
> > y1 = a1 + X b1 + error1
> > y2 = a2 + X b2 + error2
> >
> > which implies the following, which also satisfies the usual lm
> > assumptions:
> >
> > y1-y2 = (a1-a2) + X(b1-b2) + error
> >
> > Here X is a matrix, a1 and a2 are scalars and all other elements are
> > vectors.  We say the models are the "same" if b1=b2 (but allow the
> > intercepts to differ even if the models are the "same").
> >
> > If y1 and y2 are as in the built in anscombe data frame and x3 and x4
> > are the x variables, i.e. columns of X, then:
> >
> >> fm1 <- lm(y1 - y2 ~ x3 + x4, anscombe)
> >> # this model reduces to the following if b1 = b2
> >> fm0 <- lm(y1 - y2 ~ 1, anscombe)
> >> anova(fm0, fm1)
> > Analysis of Variance Table
> >
> > Model 1: y1 - y2 ~ 1
> > Model 2: y1 - y2 ~ x3 + x4
> >  Res.Df    RSS Df Sum of Sq      F Pr(>F)
> > 1     10 20.637
> > 2      8 18.662  2    1.9751 0.4233 0.6687
> >
> > so we cannot reject the hypothesis that the models are the "same".
> >
> >
> > On Wed, Jun 9, 2010 at 11:19 AM, Or Duek <ord...@gmail.com> wrote:
> >> Hi,
> >> I would like to compare two regression models - each model has a
> >> different dependent variable.
> >> The first model uses a number that represents the learning curve for
> >> reward.
> >> The second model uses a number that represents the learning curve from
> >> punishment stimuli.
> >> The first model is significant and the second isn't.
> >> I want to compare those two models and show that they are significantly
> >> different.
> >> How can I do that?
> >> Thank you.
> >>
> >>
> >> ______________________________________________
> >> R-help@r-project.org mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-help
> >> PLEASE do read the posting guide
> >> http://www.R-project.org/posting-guide.html
> >> and provide commented, minimal, self-contained, reproducible code.
> >>
> >
>
>
>
> --
> Joris Meys
> Statistical consultant
>
> Ghent University
> Faculty of Bioscience Engineering
> Department of Applied mathematics, biometrics and process control
>
> tel : +32 9 264 59 87
> joris.m...@ugent.be
> -------------------------------
> Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php
>

