Assessment and Cognition: Theory to Practice
August 13-14, 2001
University of Maryland, College Park, Maryland

A conference hosted by the Department of Measurement, 
Statistics and Evaluation, University of Maryland, and 
supported by the Maryland State Department of Education.  
Organized by Robert Mislevy, William Schafer, and Robert 
Lissitz, University of Maryland.  

Summary

     1985 marked the publication of an influential volume 
entitled Test Design: Developments in Psychology and 
Psychometrics, edited by Professor Susan Embretson of the 
University of Kansas.  Test Design is an intriguing foray 
into ways that developments in psychometrics and cognitive 
psychology might be brought together to improve educational 
and psychological testing, illustrated with a number of 
tantalizing small-scale examples.  Much progress has been 
made since then.  Many projects have not only pushed the 
individual contributing sciences further, but also pulled 
insights together across disciplinary boundaries and 
closed in on practical applications.  This 
conference is meant to lay down another footprint along 
that path.  The opening session describes an approach that 
brings the various developments together in an assessment 
design framework.   In the main part of the conference, 
presenters describe in depth three applications that build 
on advances in the contributing sciences, integrate the 
developments into coherent designs, and harness them for 
practical work.  Three sessions each focus on a different 
project, as members of their multidisciplinary teams 
describe the important ideas from their own perspective 
(e.g., psychology, measurement, technology, instruction, or 
content domain), and discuss how these ideas fit together 
to achieve a common purpose.  


Sessions

Welcome and Introduction (Dean Edna Szymanski, Robert 
Lissitz, University of Maryland).

Cognition and assessment: Theory to practice (Session 
organizer: Robert Mislevy, University of Maryland).  This 
session describes a framework for designing and delivering 
assessments in which the integration of psychology and test 
design envisioned in Test Design can be realized.   

Biomass (Session organizer: Linda Steinberg, Educational 
Testing Service).  Web-delivered, standards-based 
assessment of science inquiry, in the domain of secondary 
biology.  Biomass can be run in one mode for learning in 
the classroom and in another for end-of-course assessment.  One 
talk features the Bayes net measurement model.

The Berkeley Evaluation and Assessment Research (BEAR) 
system (Session organizer: Mark Wilson, University of 
California at Berkeley).  The BEAR assessment system 
demonstrates relationships among learning, open-ended 
performance tasks, and a graded-response measurement model, 
as applied in a middle school science curriculum called 
"Issues, Evidence and You."

The Cisco Learning Institute (CLI) simulation-based 
assessment prototype (Session organizer: John Behrens, 
Cisco Systems).  CLI has developed a design framework and 
delivery architecture for web-based assessment of network 
design and troubleshooting.  The goal is to extend CLI's 
current on-line instruction and assessment to the complex 
and interactive problem-solving that students need in 
practice.

How far have we come, where do we need to go? Commentary by 
Profs. Susan Embretson, University of Kansas, and William 
Schafer, University of Maryland.


Registration

For further information or registration materials, 
contact Mr. Ricardo Morales (e-mail: [EMAIL PROTECTED]; 
phone: (301) 405-3629).










=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================

------------------------------

Date: Thu, 24 May 2001 17:30:35 -0400
From: Rich Ulrich <[EMAIL PROTECTED]>
Subject: Standardized testing in schools

Standardized tests and their problems?  Here was a 
problem with equating the scores between years.

The NY Times had a long front-page article on Monday, May 21:
"When a test fails the schools, careers and reputations suffer."
It was about a minor screw-up in standardizing, in 1999.  Or, since
the company stonewalled and refused to admit any problems,
and took a long time to find the problems, it sounds like it 
became a moderately *bad*  screw-up.
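For readers outside measurement: "equating" maps scores from one year's
form onto another year's scale so results stay comparable.  A minimal
sketch of one simple method, mean-sigma linear equating, with invented
scores (not CTB's actual procedure), shows how a small calibration error
shifts every student's equated score:

```python
# Hypothetical illustration of mean-sigma linear equating between two
# administrations -- NOT CTB/McGraw-Hill's actual method; scores invented.
import statistics

year1 = [52, 60, 61, 65, 70, 74, 78, 81, 85, 90]   # made-up raw scores
year2 = [48, 55, 59, 62, 66, 69, 73, 77, 82, 88]   # made-up raw scores

def linear_equate(x, from_scores, to_scores):
    """Map raw score x from one form's scale onto another's by
    matching the means and standard deviations of the two groups."""
    mu_f, sd_f = statistics.mean(from_scores), statistics.pstdev(from_scores)
    mu_t, sd_t = statistics.mean(to_scores), statistics.pstdev(to_scores)
    return mu_t + (sd_t / sd_f) * (x - mu_f)

# A year-2 raw score of 66 expressed on the year-1 scale:
print(round(linear_equate(66, year2, year1), 1))

# A small error in the reference-group mean shifts EVERY equated score:
bad_year1 = [s + 3 for s in year1]   # mean misestimated by 3 points
print(round(linear_equate(66, year2, bad_year1), 1))
```

Here a 3-point error in one estimated mean moves every equated score by
3 points, which is how a "minor" standardization slip can misclassify
thousands of students at once.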

The article about CTB/McGraw-Hill starts on page 1, and covers
most of two pages on the inside of the first section.  It seems 
highly relevant to the 'testing' that the Bush administration 
advocates, to substitute for having an education policy.

CTB/McGraw-Hill runs the tests for a number of states, so they
are one of the major players.  And this proved to me, once again,
why nuclear power plants are too hazardous to trust: we can't
yet trust managements to spot problems, or to react to credible
problem reports in a responsible way.

In this example, there was one researcher from Tennessee who
had strong longitudinal data to back up his protest to the company;
the company arbitrarily (it sounds like) fiddled with *his*  scores, 
to satisfy that complaint, without ever facing up to the fact that 
they did have a real problem.  Other people, they just talked down.

The company did not necessarily lose much business from the 
episode because, as someone was quoted, all the companies
that sell these tests have histories of making mistakes.
(But do they have the same history of responding so badly?)

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html



------------------------------

Date: Thu, 24 May 2001 23:25:42 GMT
From: "W. D. Allen Sr." <[EMAIL PROTECTED]>
Subject: Re: Standardized testing in schools

"And this proved to me , once again,
why nuclear power plants are too hazardous to trust:..."

Maybe you'd better rush to tell the Navy how risky nuclear power plants are!
They have been operating nuclear power plants for almost half a century
with NO, I repeat NO, failures that have ever resulted in radiation
poisoning or the death of any ship's crew. In fact, the most extensive use of
Navy nuclear power plants has been under the most constrained conditions
possible, and that is aboard submarines!

Beware of our imaginary bogey bears!

You are right though. There is nothing really hazardous about the operation
of nuclear power plants. The real problem has been civilian management's
ignorance or laziness!


WDA

end

"Rich Ulrich" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Standardized tests and their problems?  Here was a
> problem with equating the scores between years.
>
> The NY Times had a long front-page article on Monday, May 21:
> "When a test fails the schools, careers and reputations suffer."
> It was about a minor screw-up in standardizing, in 1999.  Or, since
> the company stonewalled and refused to admit any problems,
> and took a long time to find the problems, it sounds like it
> became a moderately *bad*  screw-up.
>
> The article about CTB/McGraw-Hill starts on page 1, and covers
> most of two pages on the inside of the first section.  It seems
> highly relevant to the 'testing' that the Bush administration
> advocates, to substitute for having an education policy.
>
> CTB/McGraw-Hill  runs the tests for a number of states, so they
> are one of the major players.  And this proved to me , once again,
> why nuclear power plants are too hazardous to trust:  we can't
> yet Managements to spot problems, or to react to credible  problem
> reports in a responsible way.
>
> In this example, there was one researcher from Tennessee who
> had strong longitudinal data to back up his protest to the company;
> the company arbitrarily (it sounds like) fiddled with *his*  scores,
> to satisfy that complaint, without ever facing up to the fact that
> they did have a real problem.  Other people, they just talked down.
>
> The company did not necessarily lose much business from the
> episode because, as someone was quoted, all the companies
> who sell these tests   have histories of making mistakes.
> (But, do they have the same history of responding so badly?)
>
> --
> Rich Ulrich, [EMAIL PROTECTED]
> http://www.pitt.edu/~wpilib/index.html





------------------------------

Date: Thu, 24 May 2001 21:08:02 -0700
From: "David Heiser" <[EMAIL PROTECTED]>
Subject: The False Placebo Effect

Be careful on your assumptions in your models and studies!
---------------------------------------------------

Placebo Effect An Illusion, Study Says
By Gina Kolata
New York Times
(Published in the Sacramento Bee, Thursday, May 24, 2001)

In a new report that is being met with a mixture of astonishment and some
disbelief, two Danish researchers say that the placebo effect is a myth.

The investigators analyzed 114 published studies involving about 7,500
patients with 40 different conditions. They found no support for the common
notion that, in general, about one-third of patients will improve if they
are given a dummy pill and told it is real.

Instead, they theorize, patients seem to improve after taking placebos
because most diseases have uneven courses in which their severity waxes and
wanes. In studies in which treatments are compared not just to placebos but
also to no treatment at all, they said, participants given no treatment
improve at about the same rate as participants given placebos.

The paper appears today in the New England Journal of Medicine. Both
authors, Dr. Asbjorn Hrobjartsson and Dr. Peter C. Gotzsche, are with the
University of Copenhagen and the Nordic Cochrane Centre, an international
organization of medical researchers who review randomized clinical trials.

Reaction to the report covers the spectrum.

Dr. Donald Berry, a statistician at the M.D. Anderson Cancer Center in
Houston, said: "I believe it. In fact, I have long believed that the placebo
effect is nothing more than a regression effect," referring to a statistical
observation that patients who feel terrible one day will almost invariably
feel better the next day, no matter what is done for them.
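Berry's regression-effect point is easy to demonstrate: if patients
enroll when their symptoms are at their worst, their scores drift back
toward their usual level on remeasurement even with no intervention at
all.  A toy simulation (all numbers invented):

```python
# Toy simulation of a pure regression-to-the-mean "placebo response":
# stable true severity plus day-to-day noise, and no treatment effect.
import random
import statistics

random.seed(1)
TRUE_SEVERITY = 50.0    # every patient's stable underlying state
NOISE_SD = 10.0         # day-to-day fluctuation in measured symptoms

day1 = [random.gauss(TRUE_SEVERITY, NOISE_SD) for _ in range(10_000)]
day2 = [random.gauss(TRUE_SEVERITY, NOISE_SD) for _ in range(10_000)]

# Enroll only patients who "feel terrible" on day 1 (worst quartile):
cutoff = sorted(day1)[int(0.75 * len(day1))]
enrolled = [(d1, d2) for d1, d2 in zip(day1, day2) if d1 >= cutoff]

mean_day1 = statistics.mean(d1 for d1, _ in enrolled)
mean_day2 = statistics.mean(d2 for _, d2 in enrolled)
print(f"day 1: {mean_day1:.1f}  day 2: {mean_day2:.1f}")
# Day-2 scores drift back toward 50 with no intervention whatsoever.
```

The enrolled group looks markedly "improved" on day 2 simply because it
was selected on an extreme day-1 measurement.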

But others, like David Freedman, a statistician at the University of
California, Berkeley, said he was not convinced. He said that the
statistical method the researchers used (pooling data from many studies and
using a statistical tool called meta-analysis to analyze them) could give
results that were misleading.

"I just don't find this report to be incredibly persuasive," Freedman said.

The researchers said they saw a slight effect of placebos on subjective
outcomes reported by patients, like their descriptions of how much pain they
experienced. But Hrobjartsson said he questioned that effect. "It could be a
true effect, but it also could be a reporting bias," he said. "The patient
wants to please the investigator and tells the investigator, 'I feel
slightly better.'"

Placebos still are needed in clinical research, Hrobjartsson said, to
prevent researchers from knowing who is getting a real treatment.

Curiosity prompted Hrobjartsson and Gotzsche to act. Over and over, medical
journals and textbooks asserted that placebo effects were so powerful that,
on average, 35 percent of patients would improve if they were told a dummy
treatment was real.

They began asking where this assessment came from. Every paper,
Hrobjartsson said, seemed to refer back to other papers.

He began peeling back the onion, finally coming to the original paper. It
was written by a Boston doctor, Henry Beecher, who had been chief of
anesthesiology at Massachusetts General Hospital in Boston and published a
paper in the Journal of the American Medical Association in 1955 titled,
"The Powerful Placebo." In it, Beecher, who died in 1976, reviewed about a
dozen studies that compared placebos to active treatments and concluded that
placebos had medical effects.

"He came up with the magical 35 percent number that has entered placebo
mythology, Hrobjartsson said.

But, Hrobjartsson said, diseases naturally wax and wane.

"Of the many articles I looked through, no article distinguished between a
placebo effect and the natural course of a disease," Hrobjartsson said.

He and Gotzsche began looking for well-conducted studies that divided
patients into three groups, giving one a real medical treatment, one a
placebo and one nothing at all. That was the only way, they reasoned, to
decide whether placebos had any medical effect.

They found 114, published between 1946 and 1998. When they analyzed the
data, they could detect no effects of placebos on objective measurements,
like cholesterol levels or blood pressure.
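The logic of the three-arm design can be illustrated with a toy
simulation of a waxing-and-waning condition in which the placebo is
truly inert (all parameters invented):

```python
# Minimal simulation of the three-arm logic: symptoms wane naturally,
# the placebo does nothing, and only the real treatment works.
import random
import statistics

random.seed(7)

def outcome(treatment_effect):
    baseline = random.gauss(60, 8)          # symptom score at enrollment
    natural_change = random.gauss(-5, 6)    # disease waning on its own
    return baseline + natural_change - treatment_effect

n = 5000
treated = [outcome(10) for _ in range(n)]   # real treatment: -10 points
placebo = [outcome(0)  for _ in range(n)]   # inert pill
none    = [outcome(0)  for _ in range(n)]   # no treatment at all

for name, grp in [("treated", treated), ("placebo", placebo), ("none", none)]:
    print(f"{name:8s} mean = {statistics.mean(grp):.1f}")
```

The placebo and no-treatment arms land at about the same mean: both
"improve," but only through the natural course of the disease, which is
exactly the distinction the two-arm literature could not make.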

The Washington Post contributed to this report.
-----------------------------end of article---------------------------------





------------------------------

Date: Fri, 25 May 2001 05:58:50 -0500
From: jim clark <[EMAIL PROTECTED]>
Subject: Re: The False Placebo Effect

Hi

On 24 May 2001, David Heiser wrote:
> Be careful on your assumptions in your models and studies!
> ---------------------------------------------------
> Placebo Effect An Illusion, Study Says
> By Gina Kolata
> New York Times
> (Published in the Sacramento Bee, Thursday, May 24, 2001)
...
> He and Gotzsche began looking for well-conducted studies that divided
> patients into three groups, giving one a real medical treatment, one a
> placebo and one nothing at all. That was the only way, they reasoned, to
> decide whether placebos had any medical effect.
> 
> They found 114, published between 1946 and 1998. When they analyzed the
> data, they could detect no effects of placebos on objective measurements,
> like cholesterol levels or blood pressure.

Was there some reason that they did not include studies with only
2 groups: no treatment and placebo?  Only those two groups are
necessary to determine whether placebo differs from no treatment.

Best wishes
Jim

============================================================================
James M. Clark                          (204) 786-9757
Department of Psychology                (204) 774-4134 Fax
University of Winnipeg                  4L05D
Winnipeg, Manitoba  R3B 2E9             [EMAIL PROTECTED]
CANADA                                  http://www.uwinnipeg.ca/~clark
============================================================================




------------------------------

Date: Thu, 24 May 2001 20:57:52 -0300
From: "Robert J. MacG. Dawson" <[EMAIL PROTECTED]>
Subject: Re: The False Placebo Effect

jim clark wrote:

> Was there some reason that they did not include studies with only
> 2 groups: no treatment and placebo?  Only those two groups are
> necessary to determine whether placebo differs from no treatment.

        Possibly because ethics committees would not OK an experiment that
involved withholding treatment from patients and was not expected to
provide any improvement in treatment in the long run?

        -Robert Dawson



------------------------------

Date: 25 May 2001 05:49:21 -0700
From: [EMAIL PROTECTED] (Anna Nass)
Subject: SAS / STAT Documentation

Hi, 
I am desperately looking for good documentation of SAS/STAT output
(e.g., PROC DISCRIM). So far I have been working with SPSS, and it
seems to me that the output is quite different. (But it should be the
same?!)

Thanks a lot in advance.

Anna Nass



------------------------------

Date: Fri, 25 May 2001 01:27:39 -0500
From: Jay Warner <[EMAIL PROTECTED]>
Subject: Re: Standardized testing in schools

At the Three Mile Island plant, there was a strip chart temperature recorder in
the control room, with two pens, red & blue.  And a tag note on it saying,
"Remember, blue means hot."

Common sense is not so common.

Jay

"W. D. Allen Sr." wrote:

> "And this proved to me , once again,
> why nuclear power plants are too hazardous to trust:..."
>
> Maybe you better rush to tell the Navy how risky nuclear power plants are!
> They have only been operating nuclear power plants for almost half a century
> with NO, I repeat NO failures that has ever resulted in any radiation
> poisoning or the death of any ship's crew. In fact the most extensive use of
> Navy nuclear power plants has been under the most constrained possible
> conditions, and that is aboard submarines!

-- 
Jay Warner
Principal Scientist
Warner Consulting, Inc.
4444 North Green Bay Road
Racine, WI 53404-1216
USA

Ph: (262) 634-9100
FAX: (262) 681-1133
email: [EMAIL PROTECTED]
web: http://www.a2q.com

The A2Q Method (tm) -- What do you want to improve today?







------------------------------

End of edstat-digest V2000 #419
*******************************





