Chris Auld wrote:

> Notice your results are stronger than anything in the entire Bell Curve:
> you at least controlled for education and some important characteristics
> such as marital and fertility status.  M/H make no attempt to control
> for income, health, marital status, fertility, experience, region of
> residence, gender, nationality, and so on, and so on.  

Most of these wind up being dependent variables at some point in their
book.  In their defense, I think they are engaging the "socialization"
literature within psych, rather than the rate of return to education or
similar literatures in econ.  In this context, seeing how IQ changes the
estimated effect of SES is a big step forward.

You are of course correct that aggregating a handful of things into a
"SES index" is equivalent to putting coefficient restrictions on all of
them, which has to (weakly) reduce their explanatory power.  But the
same is true of the AFQT - they could have put in disaggregated measures
from each sub-scale, or even put in every single question as a separate
variable.  I think there are good practical reasons not to do this, and
there is a wealth of research that uses simplified indices in place of
the kitchen sink (e.g. "Democracy" indices, "Rule of Law" indices, "Bank
Failure" indices, etc.)

> Recall too that folks like Heckman have shown the results in M/H change
> dramatically when the analysis is done right.

My memory is not too good here - I read a few pieces by Heckman on this,
but nothing that I remember reaching results that were "dramatically"
different.  

I thought Bill Dickens had written the most thorough and fair criticism of
TBC, and he mostly concluded that they were qualitatively right but had
overstated their case somewhat.

> > If you're right on this, then I'd better start greeting
> > almost all results with greater skepticism - in the real world, what is
> > better measured than IQ and education?
> 
> Well, it's not so much measurement error as the other endogeneity problem
> (and recall that was modest) that caused the inconsistency in my little
> experiment.  

What you call an "endogeneity problem" doesn't fit the usual textbook
description.  Normally correlation among independent variables isn't a
problem, though it complicates the interpretation if changing one
variable almost always changes the other.  
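
Here is the textbook point as a quick simulation (invented numbers, and the
variable names are just stand-ins): two well-measured but highly correlated
regressors leave OLS centered on the true coefficients; the cost of the
correlation shows up as extra sampling variance, not bias.

    # Sketch: correlated regressors, no measurement error.
    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 500, 2000
    estimates = []
    for _ in range(reps):
        iq = rng.normal(size=n)
        educ = 0.7*iq + 0.3*rng.normal(size=n)   # strongly correlated with iq
        y = 2.0*iq + 1.0*educ + rng.normal(size=n)
        X = np.column_stack([np.ones(n), iq, educ])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        estimates.append(beta[1:])
    estimates = np.array(estimates)
    print(estimates.mean(axis=0))   # close to the true [2.0, 1.0]: no bias
    print(estimates.std(axis=0))    # larger than if iq and educ were uncorrelated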

Normally measurement error just yields attenuation bias - making all
coefficients on the poorly measured variables too small in absolute
value.  But as best I could tell you got your result with correlated
measurement error across X variables.  Is that right?
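
Here is roughly what I have in mind, as a simulation with invented numbers
(a sketch, not a reconstruction of your experiment): with independent
classical errors both coefficients just shrink toward zero, but once the
measurement errors are correlated across the regressors, a coefficient can
move in either direction, even flip sign.

    # Sketch: classical vs. correlated measurement error in two regressors.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200_000
    x1 = rng.normal(size=n)                 # "true" regressors, observed with error below
    x2 = rng.normal(size=n)
    y = 1.0*x1 + 0.2*x2 + rng.normal(size=n)

    def slopes(X, y):
        X = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]

    # (a) independent classical errors: both slopes attenuate (about halved here)
    u1, u2 = rng.normal(size=n), rng.normal(size=n)
    print(slopes(np.column_stack([x1 + u1, x2 + u2]), y))

    # (b) errors correlated across the regressors (shared component): the second
    # slope is no longer merely attenuated; here it flips sign
    common = rng.normal(size=n)
    v1 = 0.9*common + 0.45*rng.normal(size=n)
    v2 = 0.9*common + 0.45*rng.normal(size=n)
    print(slopes(np.column_stack([x1 + v1, x2 + v2]), y))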

> That said, I'm not sure education, at least, is particularly
> well measured in most datasets, as we generally ignore quality measures.

Well-measured compared to what?  The CPI, to take one example, also
neglects quality.  And how well can income be measured on a phone survey
that leaves no time to check your tax records?  

> And recall M/H "solve" the colinearity problem they have between education
> and IQ by *dropping* education.  Some solution!  

Though it may appall you, many textbooks recommend it.  In spite of all
of his reservations about TBC, Bill Dickens felt pretty comfortable with
their omission.

> Granted, they also repeat
> most of their regressions with a high school only sample, but notice the
> results generally change markedly when that sample is contrasted with the
> whole sample.  

I don't remember, but I'll take your word for it.  

> Also recall they admit that education causally increases
> performance on IQ tests (although I looked up their cite, a 1989 AER
> article, and that piece actually assumes such a relationship exists, but
> differences it out so they don't have to measure it, so I don't know what
> M/H were talking about).  That implies that a simultaneous approach is
> necessary, particularly recalling that the endogeneity problem here is
> compounded by the colinearity problem.

I'd put it differently.  Leaving education out clearly understates its
importance.  But putting in both IQ and education can also give a
misleading interpretation insofar as it's hard for one to change without
changing the other.  In other words, you're choosing between two
imperfect strategies, and it's not obvious to me that M/H chose wrongly. 
Yes, they could have done both, but they're making the same trade-offs
every applied researcher makes between completeness and reader
comprehension.
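
For what it's worth, here is a toy version of that trade-off (invented
numbers and stand-in names, nothing from their data): dropping the
correlated regressor loads its effect onto the one you keep, while
including both recovers the separate partial effects.

    # Sketch: omit a correlated regressor vs. include both.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000
    iq = rng.normal(size=n)
    educ = 0.7*iq + rng.normal(size=n)            # education partly driven by IQ
    wage = 1.0*iq + 0.5*educ + rng.normal(size=n)

    def slopes(X, y):
        X = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]

    print(slopes(iq.reshape(-1, 1), wage))            # ~1.35: IQ absorbs 0.5*0.7
    print(slopes(np.column_stack([iq, educ]), wage))  # ~[1.0, 0.5]: partial effects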

> Perhaps you could explain why you find the evidence they array compelling.
> Maybe I'm missing something.

If anything, your command of the details of their study exceeds mine. 
The main thing I'd say in their defense is to step back and take stock
of what they did.  Here's how I'd summarize it:

1.  They put forward an intuitively plausible hypothesis that virtually
no economists I knew at Berkeley or Princeton ever even mentioned.
2.  They showed that this hypothesis explained an extremely diverse and
seemingly unconnected set of facts.
3.  They directly challenged almost the entire "socialization"
literature within psych, and implicitly called much of the ROR to
education literature in econ into question.
4.  Almost all of the econometric problems you cite seem at least as
severe in almost every study I can think of.  Correlated independent
variables, measurement error, using indices, etc.  If these were enough
for summary rejection, most of the ROR to education literature would
remain unpublished.

Of course, saying "*I* learned a lot from it" might just be a sign of
how clueless I was or am.  But I'm pretty sure that even today most
labor economists are happily ignoring the issues M/H raise.  In short,
most economists would learn a lot from TBC.  How many published articles
can you say that about?  
-- 
          Prof. Bryan Caplan               [EMAIL PROTECTED] 
 
          http://www.gmu.edu/departments/economics/bcaplan 
 
  "[W]hen we attempt to prove by direct argument, what is really
   self-evident, the reasoning will always be inconclusive; for it
   will either take for granted the thing to be proved, or something
   not more evident; and so, instead of giving strength to the
   conclusion, will rather tempt those to doubt of it, who never
   did so before."  
    -- Thomas Reid, _Essays on the Active Powers of the Human Mind_
