This may be more than you wanted to know, but ...
Let me summarize:
Model:               1      2      3
R-square:           .06    .15    .42
Change in R-sq:      --    .09    .27    (from previous model)
Predictors:
  Family SES         **     ns     ns
  Child fng 1               *      ns
  Child fng 2               *      ns
  Family support                   **
  School support                   **
From models 1 & 2 we know that SES does not contribute much when the two
measures of child functioning are present. While we don't know exactly
how much is lost (as measured by, say, change in R-square) if SES is
omitted, I'd guess the loss might be in the neighborhood of 0.01 or
less. (You could assess this exactly by running a model that contains
only the two child-functioning predictors.)
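That comparison is easy to carry out directly. As a sketch only -- the data
below are simulated, and the variable names and effect sizes are my own
hypothetical stand-ins, not the poster's data -- the exact R-square lost by
omitting SES is the difference between the two nested fits:

```python
import numpy as np

def r_squared(y, X):
    """R-square from an OLS fit of y on X (an intercept is added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

# Simulated stand-ins for the real variables (hypothetical effect sizes).
rng = np.random.default_rng(42)
n = 200
ses = rng.normal(size=n)
cf1 = 0.5 * ses + rng.normal(size=n)   # child functioning 1, correlated with SES
cf2 = 0.4 * ses + rng.normal(size=n)   # child functioning 2
y = 0.3 * cf1 + 0.3 * cf2 + rng.normal(size=n)   # "parent involvement"

r2_model2 = r_squared(y, np.column_stack([ses, cf1, cf2]))   # SES + both CF
r2_cf_only = r_squared(y, np.column_stack([cf1, cf2]))       # CF measures alone
print("R-sq lost by dropping SES:", round(r2_model2 - r2_cf_only, 4))
```

The difference printed is exactly the "change in R-square" one would report
for SES entered last, which is what my guess of 0.01 refers to.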
For that reason, we would anticipate the "ns" result for SES in model 3.
The "ns" results for the two measures of child functioning are much more
ambiguous. Possibilities:
(a) both measures of child functioning are superfluous in the presence
of the support measures;
(b) both measures of child functioning contribute significantly in the
absence of SES, but not when SES is present, because their contributions
in the presence of the support measures duplicate the contribution of
SES to some degree (which we know to be true from model 2, but we don't
know the quantitative effect in model 3);
(c) one measure of child functioning, but not the other, contributes
significantly beyond the contributions of the support measures, but its
influence is masked by the presence of SES and the other
child-functioning measure and the intercorrelations among the three.
In interpreting model 2, I would first examine an analysis in which only
the two child-functioning predictors were present, to verify that one
doesn't lose much by cutting out SES.
In interpreting model 3, I'd want to see an analysis in which only the
support measures were predictors; and if this model showed an R-square
interestingly less than 0.42, I'd investigate a model using both support
measures and one of the child-functioning measures, AND a model using
both support measures and the other child-functioning measure, and quite
possibly a model using all four predictors but omitting SES.
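The sequence of follow-up analyses just described amounts to comparing
R-square across a handful of predictor subsets. With simulated stand-ins
(the names and effects below are hypothetical, chosen only to make the
loop concrete), that comparison is a few lines:

```python
import numpy as np

def r_squared(y, X):
    """R-square from an OLS fit of y on X (an intercept is added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

# Hypothetical simulated data standing in for the five predictors.
rng = np.random.default_rng(7)
n = 200
ses = rng.normal(size=n)
cf1 = 0.5 * ses + rng.normal(size=n)       # child functioning 1
cf2 = 0.4 * ses + rng.normal(size=n)       # child functioning 2
fam_sup = 0.3 * ses + rng.normal(size=n)   # family support
sch_sup = rng.normal(size=n)               # school support
y = 0.2 * cf1 + 0.6 * fam_sup + 0.6 * sch_sup + rng.normal(size=n)

candidates = {
    "support only":      [fam_sup, sch_sup],
    "support + CF1":     [fam_sup, sch_sup, cf1],
    "support + CF2":     [fam_sup, sch_sup, cf2],
    "all four (no SES)": [fam_sup, sch_sup, cf1, cf2],
}
r2 = {}
for name, cols in candidates.items():
    r2[name] = r_squared(y, np.column_stack(cols))
    print(f"{name:20s} R-sq = {r2[name]:.3f}")
```

Reading down the printed column shows at a glance which child-functioning
measure, if either, adds interestingly to the support-only R-square.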
However, none of these approaches pays any attention whatever to the
reasons you had in the first place for conducting regressions in a
hierarchical manner. To address those reasons, I'd want to conduct an
analysis like model 2, but with child-functioning measures that had been
orthogonalized with respect to SES. This analysis would retain the
significance of SES from model 1, but the other two would be evaluated
for what they contribute IN ADDITION TO whatever they share with SES.
And it is imaginable (although the overall R-square will still be 0.15)
that both of these may have non-significant coefficients, because --
should it happen to be the case -- the additional 0.09 of variance
explained is entirely, or mostly, shared by the two variables, so that
either one is redundant given the other. In that case one would wish to
evaluate a model with SES + CF1, and a model with SES + CF2, where "CF1"
and "CF2" are orthogonalized with respect to SES. Of course, this
assumes that SES really deserves the primary place you appear to have
given it, and opinions might differ with respect to that deserving.
(For how to orthogonalize, see a good regression text like Draper &
Smith, or consult my paper on "modelling and interpreting interactions
in multiple regression analysis", at
www.minitab.com/resources/whitepapers
-- which deals with orthogonalizing interactions, but the principles
are quite general.)
Similarly, if I believed in the hierarchy described, I might well want
to conduct a model 3 analysis with SES, one or both of CF as
orthogonalized with respect to SES, and the two support measures after
they'd been orthogonalized with respect to SES and the CF measure(s)
that survived model 2.
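To orthogonalize in practice, regress the later-block variable on the
earlier-block variable(s) and keep the residual. A minimal numpy sketch of
the sequential scheme just described (simulated data; the variable names
and coefficients are hypothetical):

```python
import numpy as np

def residualize(x, Z):
    """The part of x orthogonal to the columns of Z (an intercept is added)."""
    Z1 = np.column_stack([np.ones(len(x)), Z])
    beta, *_ = np.linalg.lstsq(Z1, x, rcond=None)
    return x - Z1 @ beta

# Hypothetical simulated data.
rng = np.random.default_rng(1)
n = 200
ses = rng.normal(size=n)
cf1 = 0.5 * ses + rng.normal(size=n)                 # child functioning
fam_sup = 0.3 * ses + 0.2 * cf1 + rng.normal(size=n) # family support

# Block 2: child functioning orthogonalized with respect to SES.
cf1_perp = residualize(cf1, ses)

# Block 3: support orthogonalized with respect to SES and the surviving CF.
fam_sup_perp = residualize(fam_sup, np.column_stack([ses, cf1_perp]))

# The residualized versions are (numerically) uncorrelated with the
# earlier blocks, so each block's coefficient now reflects only what it
# adds beyond the blocks entered before it.
print(np.corrcoef(cf1_perp, ses)[0, 1])
print(np.corrcoef(fam_sup_perp, cf1_perp)[0, 1])
```

Fitting the full model on SES, cf1_perp, and fam_sup_perp then preserves
the hierarchy: SES keeps its model-1 coefficient, and the later terms are
tested for their unique increments.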
On Fri, 9 Apr 2004, Paul Benson wrote:
> Hi. I have a question regarding hierarchical regression. I am
> running hierarchical regression employing a total of 7 IVs. I am
> entering the IVs in 4 sequential blocks and assessing r-squared,
> r-square change, etc. My problem has to do with interpreting
> individual regression coefficients -- do I use the coefficient
> associated with the block where the IV was first entered or only in
> the final ("full") model with all 7 IVs included? Sorry, this may be
> a pretty dumb question but I could use some guidance here.
... and subsequently added:
Hi. I just posted a question on h. regression. Maybe it would
be useful to provide some specifics of my analysis. My DV is
a measure of parent involvement in the education of their disabled
child. I actually enter only 3 groups of IVs, not 4. My IVs were
grouped and entered in the analyses thusly:
1. a measure of family SES;
2. 2 measures of child functioning; and
3. 2 measures of social support (1 family support, 1 school support).
In equation 1 -- SES is highly significant.
In equation 2 -- SES dropped to non-significance (p = .156) and the child
functioning measures are both significant at the .05 level.
In equation 3 -- SES and the child functioning measures become
non-significant and the 2 newly [added] measures are both highly
significant at the .01 level or better. Overall the R-squared values of
these models are .06, .15, and .42, so the support measures are really
the critical predictors of parent involvement here.
So the question is should I be interpreting the reg coefficients only
from equation 3 or for each IV as it was first entered into the
analysis?
------------------------------------------------------------
Donald F. Burrill [EMAIL PROTECTED]
56 Sebbins Pond Drive, Bedford, NH 03110 (603) 626-0816