Previously, Nancy Melucci had written:
> I am wondering if we have an empirical basis (research of some type)
> for the claim that high school GPAs are inflated.
There are a few problems with this question, though one might think that someone would be evaluating this on an ongoing basis and publishing the results in peer-reviewed journals.

The first problem is how one defines grade inflation. At the college level, one relatively straightforward way appears to be to do "transcript studies" which examine changes in college GPAs over decades. On the web, there are a few sources that focus on this:

http://en.wikipedia.org/wiki/Grade_inflation
http://www.gradeinflation.com/

As the Wikipedia entry mentions (about the 3rd paragraph in), comparable data for high schools were not available when the transcript surveys were made in the 1990s.

Another way to assess grade inflation in high school is to compare high school GPA (HS-GPA) to performance on a standardized test such as the SAT or ACT (under the assumption that the SAT is a better measure of intrinsic ability [or what some may once have called "native intelligence"] to do college-level work; this is consistent with the original claim that the SAT was a "Scholastic Aptitude Test," before researchers realized that it probably didn't purely measure aptitude). For example, imagine taking a test like the ACT, dividing the scores into 10 equal parts (deciles), and then looking at the distribution of HS-GPA across those deciles. There should be a positive correlation between HS-GPA and ACT scores, and the distribution of HS-GPA should be similar to the distribution of ACT scores (a rough simulated sketch of this kind of check appears after the caveats below).

If one searches the ERIC database, one will find one or two reports from the ACT testing corporation which report that grade inflation was found using this type of method. I haven't bothered to read these reports, but it should be obvious that there are various problems with this type of research. First, there is the apparent conflict of interest of the person conducting the research (apparently ACT sponsored the research and an ACT researcher did the analysis; remember this when reading comparable reports by either ETS or the College Board). Second, if the students who took the ACT did so in order to apply to college (in contrast to sampling students at random from high schools and administering the ACT to them), then there is sample selection bias because the students self-select into taking the test. Third, high school dropouts (attrition) are not included in this type of analysis. What effect does differential loss of students (i.e., losing more low HS-GPA students than high HS-GPA students) have on HS-GPA?
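To make the decile comparison and the attrition question concrete, here is a rough simulated sketch in Python (using NumPy). Everything in it is invented for illustration: the simple "ability" model, the score ranges, and the 15% dropout figure are assumptions of the sketch, not values taken from ACT, ETS, College Board, or NAEP data.

# A minimal, hypothetical sketch of two ideas from the discussion above:
# (1) binning a standardized test score into deciles and checking that
#     mean HS-GPA rises across deciles, and (2) seeing how differential
#     loss of low-GPA students shifts the observed mean HS-GPA.
# All numbers are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Fake "ability" scores -> fake ACT-like scores (1-36) and HS-GPAs (0-4)
ability = rng.normal(0, 1, n)
act = np.clip(np.round(21 + 5 * ability + rng.normal(0, 2, n)), 1, 36)
gpa = np.clip(3.0 + 0.5 * ability + rng.normal(0, 0.4, n), 0, 4)

# Positive correlation between HS-GPA and ACT scores
print("r(GPA, ACT) =", round(np.corrcoef(gpa, act)[0, 1], 2))

# Mean HS-GPA within each ACT decile (should increase across deciles)
deciles = np.digitize(act, np.percentile(act, np.arange(10, 100, 10)))
for d in range(10):
    print(f"ACT decile {d + 1}: mean GPA = {gpa[deciles == d].mean():.2f}")

# Differential attrition: drop the lowest 15% of GPAs ("dropouts") and
# note that the mean GPA of the remaining "graduates" is higher even
# though no individual grade has changed.
cutoff = np.percentile(gpa, 15)
print("Mean GPA, all students:   ", round(gpa.mean(), 2))
print("Mean GPA, graduates only: ", round(gpa[gpa > cutoff].mean(), 2))

In a run like this, mean HS-GPA climbs across the ACT deciles, and the "graduates only" mean comes out higher than the all-students mean even though no one's grades changed, which is the worry behind the attrition question.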
Another way of estimating inflation in HS-GPA is to do a transcript study like that done for colleges. As it turns out, the National Assessment of Educational Progress (NAEP) has conducted such a study examining 2005 high school graduates. The executive summary and complete report are available at:

http://nces.ed.gov/nationsreportcard/pubs/studies/2007467.asp

Relevant statements from the summary include the following:

|Transcripts were collected from about 640 public schools and
|80 private schools for the 2005 High School Transcript Study (HSTS).
|These transcripts constituted a nationally representative sample
|of 26,000 high school graduates, representing approximately 2.7 million
|2005 high school graduates.
|
|Graduates earn more credits and achieve higher GPAs
|
| * In 2005, graduates earned about three credits more than
|their 1990 counterparts, or about 360 additional hours of instruction
|during their high school careers.
|
| * In 2005, the overall grade point average (GPA) was approximately
|a third of a letter grade higher than in 1990. There are many possible
|reasons for this apparent increase, including "grade inflation," changes
|in grading standards and practices, and growth in student performance.
|
|Because the study is restricted to high school graduates, it contains
|no information about dropouts who may differ from graduates. Graduates
|who receive a special education diploma or certificate of completion
|are also excluded from analyses in this report unless noted otherwise.

So, is there grade inflation in the above study or not? Is "a third of a letter grade" relative to 1990 really what we mean by HS-GPA grade inflation? And how would the inclusion of dropouts and other special cases affect the overall HS-GPA?

NOTE: The above are aggregate results based on 720 high schools. There is probably significant school-to-school variation, with some schools having significant amounts of grade inflation but others having limited grade inflation. Local factors need to be taken into account, but perhaps grade inflation is not as obvious as some may think.

-Mike Palij
New York University
[EMAIL PROTECTED]

---
To make changes to your subscription contact:
Bill Southerly ([EMAIL PROTECTED])