Re: Dependent ordinal data

2000-07-03 Thread Donald Burrill

On Mon, 3 Jul 2000 Robert Németh <[EMAIL PROTECTED]> wrote:

> Could somebody please advise me in the following problem:
> 
> I have a summary score consisting of 5 different items, which are
> definitely not independent from each other. 

By "summary score", do you mean you are using as a dependent variable 
the sums of rating scores for the 5 different items?  If so, how are 
the rating scores defined?  (And notice that by summing the scores you 
are implicitly treating them as though they were, at least to a decent 
approximation, of interval-scale quality;  insisting on using only the 
ordinal quality of the resulting score is then rather beside the point, 
not much different from locking the barn door after the horses have been 
stolen, to use an adage from my youth.) ...  If not, what DO you mean?

> Each item describes the severity of a given symptom (medical) on a 
> scale of absent, mild, moderate or severe. 

My initial inclination would be to score each item 0, 1, 2, or 3 (or, if 
you prefer, 1, 2, 3, 4) respectively, sum the 5 item scores (thus 
producing a response variable whose potential range of values is from 0 
to 15 (or from 5 to 20)), and apply a two-way ANOVA to see if anything 
interesting emerges.  (I might consider a 3-way ANOVA on the item scores, 
using items as a 5-level factor.)  If nothing emerges, I doubt whether 
methods that use only the ordinality of the item scores would show 
anything at all.
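
For concreteness, here is a rough sketch of that scoring-and-ANOVA step 
in Python (statsmodels); the scores and the factor names "treatment" 
and "center" are invented for illustration, not Robert's data:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Invented example: 5 item scores per subject, each coded 0-3.
rng = np.random.default_rng(0)
n = 40
items = rng.integers(0, 4, size=(n, 5))        # absent=0 ... severe=3
df = pd.DataFrame({
    "score": items.sum(axis=1),                # summary score, range 0 to 15
    "treatment": np.repeat(["A", "B"], n // 2),
    "center": np.tile(["c1", "c2"], n // 2),
})

# Two-way ANOVA: treatment, center, and their interaction.
fit = smf.ols("score ~ treatment * center", data=df).fit()
print(anova_lm(fit, typ=2))
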
If ANOVA produces interesting results, and if the results lend 
themselves nicely to interesting interpretation(s), one may then worry 
about whether the results are possibly attributable to having treated 
the data as though they were of (approximately) interval quality.  
One way of pursuing such worries is to apply a dual scaling analysis 
(otherwise called correspondence analysis) to see whether the "best" 
scaling of the item scores displays approximately equal successive 
intervals.  (If so, stop worrying.  If not, substitute the scale values 
arising from the scaling analysis and repeat the ANOVA(s) on the 
redefined response variable(s).)
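
If no dual-scaling software is at hand, the "best" category scores can 
be obtained from a singular value decomposition of the standardized 
residuals of a category-by-group table.  A bare-bones sketch in Python; 
the contingency table is invented for illustration:

import numpy as np

# Invented table: rows = item categories (absent ... severe),
# columns = groups (e.g., treatment arms).
N = np.array([[30., 10.],
              [25., 15.],
              [15., 25.],
              [ 5., 20.]])
P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)                  # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
U, sv, Vt = np.linalg.svd(S)

# First-dimension row scores.  If the gaps between successive category
# scores are roughly equal, the equal-interval coding was harmless.
row_scores = sv[0] * U[:, 0] / np.sqrt(r)
print(row_scores, np.diff(row_scores))
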
For advice on dual scaling, consult S. Nishisato's books on the 
topic, or consult Professor Nishisato himself:
 Shizuhiko Nishisato <[EMAIL PROTECTED]>, 
to whom I've copied this message.

> These assessments will be done two times (baseline and end value) for 
> each individual, who are assigned to two different treatment groups 
> within each center.  (Repeated assessments in a parallel-group 
> multicenter design using correlated measures.)

I take it you meant, "... assigned randomly to one of two different 
treatment groups ...". 
 An alternative form of analysis for such a design would be an analysis 
of covariance, using the "end value" as the response variable and the 
"baseline" as the covariate.  This is just a different way of looking at 
the problem, not necessarily a better way;  given the coarseness of your 
item scores, I wouldn't expect much in the way of improved sensitivity 
to possibly interesting effects, but you never can tell.  Do model 
interaction between the covariate and the treatment groups, at least for 
the initial analysis;  if your ANCOVA routine doesn't permit that, use 
either a general linear model (GLM) routine, or a multiple-regression 
routine, using a dichotomous variable to distinguish between your 
treatments. 
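
In regression form, that ANCOVA-with-interaction is one line; a hedged 
sketch in Python (statsmodels), with fabricated data and assumed column 
names:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "baseline": rng.integers(0, 16, 60),        # covariate: baseline score
    "treatment": rng.choice(["A", "B"], 60),    # dichotomous treatment
})
df["end"] = df["baseline"] + rng.normal(0, 2, 60)   # fabricated end values

# "baseline * C(treatment)" expands to the main effects plus the
# covariate-by-treatment interaction recommended above.
fit = smf.ols("end ~ baseline * C(treatment)", data=df).fit()
print(fit.summary())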

> Is there any generalisation of the Cochran-Mantel-Haenszel methods
> for this situation?

Sorry, I'm not familiar with these methods.

> One further question is whether somebody could point me to any 
> reference for a CMH method for symmetry tests (McNemar or Bowker) or 
> in general for agreement statistics?

> Many thanks in advance
> 
> Robert Németh
> 
> _
> Robert Németh
> Focus Clinical Drug Development GmbH
> 
> http://www.focus-cdd.de
> Email: [EMAIL PROTECTED]
> Tel: +49 (0) 2131 155 315    Fax: +49 (0) 2131 155 378
> _
> --
> Focus Clinical Drug Development GmbH, Neuss

 
 Donald F. Burrill [EMAIL PROTECTED]
 348 Hyde Hall, Plymouth State College,  [EMAIL PROTECTED]
 MSC #29, Plymouth, NH 03264 603-535-2597
 184 Nashua Road, Bedford, NH 03110  603-471-7128  



Re: cubic regression

2000-07-03 Thread Donald Burrill

On Fri, 30 Jun 2000, dennis roberts wrote:

> interesting but ... 3 questions:
> 
> 1. how can the r squared for the best model be 100% when the errors 
> are not all 0s?

R-sq is not exactly 100%; it is reported as 100.0%.  
Examining the SS reported shows that  R-sq = 3361.7/3361.9 = 99.994%, 
less than 100.000% but equal to 100.0% (to one decimal place).

> 2. we are talking about a model that goes from an r squared of 99.5% 
> ... to (nearly) 100% ... is this important?

In the second step, adding the quadratic term accounts for 90% (well, 
89.67%) of the residual variance from the linear model.  In the third 
step, adding the cubic term accounts for about 90% (89.47%) of the 
residual variance from the quadratic model.  Is it important to be able 
to account for 90% of the remaining variance by a single predictor?

> 3. while there is a dinky gain in r squared ... it comes in relation to 
> using a scale that is not as understandable as age ... it is the square 
> of age ... or the cube of age ... is this gain worth the transposition 
> of a scale in 1 year increments ... to something like squares or cubes 
> of age increments? 

I'm not altogether sure that I would characterize the explaining of 99% 
(98.91%, if one can believe 4 digits' precision) of the residual variance 
as "a dinky gain".
I do not follow the "not understandable as age" part.  The fitted 
function gives an explicit estimate of height (in cm) as a function of 
age (in, presumably, years).  To one decimal place, these estimates are:

yrs       2     3      4      5      6      7      8      9     10     11
 cm    86.8  95.4  103.1  110.2  116.7  122.7  128.5  134.2  139.8  145.6

delta        8.6    7.7    7.1    6.5    6.0    5.8    5.7    5.6    5.8

The annual increment in predicted height is shown in the line labelled 
"delta".  We see that the fitted RATE of growth diminishes from the 
initial value, levels off about age 9 or 10, and increases a little in 
the last year for which data are supplied.  This, one supposes, is a part 
of what the residual plots were trying to tell us.  (Something like this 
will have been visible in the raw data as well, but I neglected to copy 
that information to a place where I can easily retrieve it right now.)
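
The "delta" row, incidentally, is just the first difference of the 
fitted heights; e.g., in Python:

import numpy as np

cm = np.array([86.8, 95.4, 103.1, 110.2, 116.7,
               122.7, 128.5, 134.2, 139.8, 145.6])
print(np.diff(cm))   # 8.6, 7.7, 7.1, 6.5, 6.0, 5.8, 5.7, 5.6, 5.8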

> i would say that in this case ... using a much more complicated model ...
> does not add to the clarity of the prediction problem ... 

I wouldn't call it _much_ more complicated.  Agreed, with a linear 
approximation, the annual increments are constant (6.37 cm per year);  
and agreed, this is simpler than increments that change from year to 
year, from a value 20% higher in the first year to values about 10% 
lower in the last four years.  On the other hand, it is hard to believe 
that a constant increment per year adequately describes the true 
relationship between age and height for human female children.

Just one more example of the eternal trade-offs between "reality" and 
spurious simplicity.
-- Don.
 
 Donald F. Burrill [EMAIL PROTECTED]
 348 Hyde Hall, Plymouth State College,  [EMAIL PROTECTED]
 MSC #29, Plymouth, NH 03264 603-535-2597
 184 Nashua Road, Bedford, NH 03110  603-471-7128  





Re: I need help!!! SPSS and Panel Data

2000-07-03 Thread Bruce Weaver

On Sun, 2 Jul 2000 [EMAIL PROTECTED] wrote:

> Help!
>  I'm a Norwegian student who can't figure out how
> to work SPSS 9.0 properly for running a multiple
> regression on panel data (longitudinal data or
> cross-sectional time-series data). My data set
> consists of financial data from about 300 Norw.
> municipalities. For each municipality I have
> observations for 7 fiscal years. My problem is
> that I don't know how to "tell" SPSS that the
> cases are grouped 7 by 7, i.e that they are panel
> data.
> Can somebody please help me!
> 
> Ketil Pedersen
> 

Hi Ketil,
I'm not familiar with time series terminology, but if I followed 
you, you have a data file that looks something like this:


MUNICIP  YEAR   Y
  1   1   
  1   2 
  1   3   
etc
  1   7  
  2   1  
  2   2  
  2   3  
etc
  2   7  
  3   1
  3   2
etc
  3   7
etc


I think you may have one or more "between-groups" variables too, but
wasn't sure about this.  Anyway, if this is more or less accurate, then I
think you would find it easier to use UNIANOVA rather than REGRESSION.  In
the pulldown menus, you find it under GLM-->Univariate, I think.  Here's
an example of some syntax for the data shown above with SIZE included as a
between-municipalities variable: 

UNIANOVA
  y  BY municip year size
  /RANDOM = municip
  /METHOD = SSTYPE(3)
  /INTERCEPT = INCLUDE
  /EMMEANS = TABLES(year)
  /EMMEANS = TABLES(size)
  /EMMEANS = TABLES(year*size)
  /CRITERIA = ALPHA(.05)
  /print = etasq
  /plot = resid
  /DESIGN = size municip(size)
year year*size .


Note that municip is a random factor here (i.e., it is treated the same
way Subjects are usually treated).  And the notation "municip(size)" 
indicates that municip is nested in the size groups.  The output from this
syntax will give you an F-test for size with municip(size) as the error
term; and for the year and year*size F-tests, the error term (called
"residual") will be Year*municip(size), because that's all that is left
over. 

You can get the same F-tests using REGRESSION, but not as easily.  For 
one thing, you have to compute your own dummy variables for MUNICIP and 
YEAR; and if you have a mixed design (between- and within-municipalities 
variables), you pretty much have to do two separate analyses, as far as I 
can tell.
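
For anyone without SPSS: much the same analysis can be approximated with 
a mixed model treating municipality as a random effect, e.g. in Python's 
statsmodels.  A toy-sized sketch with invented data (6 municipalities 
instead of 300, and made-up y and size values):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "municip": np.repeat(np.arange(6), 7),     # 6 municipalities x 7 years
    "year": np.tile(np.arange(1, 8), 6),
    "size": np.repeat(["small", "large"], 21),
})
df["y"] = 0.5 * df["year"] + rng.normal(size=len(df))

# The random intercept for municipality plays the role of
# "/RANDOM = municip" in the UNIANOVA syntax above.
fit = smf.mixedlm("y ~ C(year) * size", data=df, groups="municip").fit()
print(fit.summary())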

Hope this helps.
-- 
Bruce Weaver
[EMAIL PROTECTED]
http://www.angelfire.com/wv/bwhomedir/







RE: cubic regression

2000-07-03 Thread Simon, Steve, PhD

I've enjoyed the comments about polynomial regression. There is a cute joke
that conveys the dangers of extrapolating from a model.

Two statisticians were travelling in an airplane from LA to New York. About
an hour into the flight, the pilot announced that they had lost an engine,
but don't worry, there are three left. However, instead of 5 hours it would
take 7 hours to get to New York. A little later, he announced that a second
engine failed, and they still had two left, but it would take 10 hours to
get to New York. Somewhat later, the pilot again came on the intercom and
announced that a third engine had died. Never fear, he announced, because
the plane could fly on a single engine. However, it would now take 18 hours
to get to New York. At this point, one statistician turned to the other and
said, 'Gee, I hope we don't lose that last engine, or we'll be up here
forever!'

I found this joke at the following web site:

http://www.xs4all.nl/~jcdverha/scijokes/1_2.html
(Science Jokes: Statistics and Statisticians, by Joachim Verhagen)

This story could also serve as a cautionary note about the interpretability
of the intercept term (estimated average flight time when the number of
engines=0).
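
The joke's numbers even fit a straight line tolerably well, and the 
intercept of that line is exactly the kind of nonsense Steve has in 
mind.  A throwaway illustration in Python:

import numpy as np

engines = np.array([4, 3, 2, 1])
hours = np.array([5, 7, 10, 18])

slope, intercept = np.polyfit(engines, hours, 1)
print(intercept)   # about 20.5 "hours" with zero engines -- nonsense,
                   # since the true pattern blows up as engines -> 0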

Steve Simon, [EMAIL PROTECTED], Standard Disclaimer.
STATS - Steve's Attempt to Teach Statistics: http://www.cmh.edu/stats





Re: lin. reg.

2000-07-03 Thread John Hendrickx

In article , [EMAIL PROTECTED] 
says...
> That's not right. Trendline is limited to bivariate. With Data Analysis:
> Regression, you can select a contiguous cell range with more than one X
> variable.
> 
You're right! But that's not really obvious from the wizard.
> See the file Reg.xls at http://www.wabash.edu/econometrics

Yes, very useful information there. Thanks!





Re: cubic regression

2000-07-03 Thread Rex Boggs

Bob Hayden wrote:

> Tom Moore's original request for data well fit by a cubic tacitly
> implied that it's actually pretty unusual for a cubic (or higher order
> polynomial) to be a good choice.  Perhaps Tom even doubted that it
> would EVER be a good choice, and wondered if anyone could provide a
> counterexample!-)

The Alpo dogfood dataset is a possible candidate.

Source: Primary: Alpo dog food bag  Secondary: a saved email I found while
scrounging around my hard drive.

**
The recommended serving size for dog food depends on the weight of the dog.

Alpo Data Set
Weight of Dog and amount of food it needs

Weight  Amount of Alpo
 (lbs)  (Cups)
     8  1
    19  2
    36  3
    56  4
    78  5
   103  6
   130  7
   158  8
   190  9

This is more cubic-like if you make Amount of Alpo the explanatory variable
and weight the response variable, rather than the (intended) other way
round.
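
A quick way to compare fits of increasing degree, with cups as the 
explanatory variable, is to watch the residual sum of squares from 
numpy's polyfit (sketch, using the data above):

import numpy as np

cups = np.arange(1, 10)
lbs = np.array([8, 19, 36, 56, 78, 103, 130, 158, 190])

for deg in (1, 2, 3):
    ss_resid = np.polyfit(cups, lbs, deg, full=True)[1]
    print(deg, ss_resid)   # residual SS for linear, quadratic, cubic fits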

Probably a better dataset is the Alligator dataset that was in an early
draft document about the AP Stats syllabus.

**
Many wildlife populations are monitored by taking aerial photographs.
Information about the number of animals and their whereabouts is important
to protecting certain species and to ensuring the safety of surrounding
human populations.

In addition, it is sometimes possible to monitor certain characteristics of
the animals. The length of an alligator can be estimated quite accurately
from aerial photographs or from a boat. However, the alligator's weight is
much more difficult to determine. In the example below, data on the length
(in inches) and weight (in pounds) of alligators captured in central
Florida are used to develop a model from which the weight of an alligator
can be predicted from its length.

Length  Weight  Len^3   Ln(Length)  Ln(Weight)
58  28  195112  4.0604430   3.33220451
61  44  226981  4.1108739   3.78418963
63  33  250047  4.1431347   3.49650756
68  39  314432  4.2195077   3.66356165
69  36  328509  4.2341065   3.58351894
72  38  373248  4.2766661   3.63758616
72  61  373248  4.2766661   4.11087386
74  54  405224  4.3040651   3.98898405
74  51  405224  4.3040651   3.93182563
76  42  438976  4.3307333   3.73766962
78  57  474552  4.3567088   4.04305127
82  80  551368  4.4067192   4.38202664
85  84  614125  4.4426513   4.4308168
86  83  636056  4.4543473   4.41884061
86  80  636056  4.4543473   4.38202664
86  90  636056  4.4543473   4.49980967
88  70  681472  4.4773368   4.24849524
89  84  704969  4.4886364   4.4308168
90  106 729000  4.4998097   4.66343909
90  102 729000  4.4998097   4.62497281
94  110 830584  4.5432948   4.70048037
94  130 830584  4.5432948   4.86753445
114 197 1481544 4.7361984   5.28320373
128 366 2097152 4.8520303   5.9026334
147 640 3176523 4.9904326   6.46146818
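
The extra columns hint at the intended models: weight proportional to 
length cubed, or equivalently a log-log fit with slope in the 
neighbourhood of 3.  A quick check in Python (data as above):

import numpy as np

length = np.array([58, 61, 63, 68, 69, 72, 72, 74, 74, 76, 78, 82, 85,
                   86, 86, 86, 88, 89, 90, 90, 94, 94, 114, 128, 147])
weight = np.array([28, 44, 33, 39, 36, 38, 61, 54, 51, 42, 57, 80, 84,
                   83, 80, 90, 70, 84, 106, 102, 110, 130, 197, 366, 640])

slope, intercept = np.polyfit(np.log(length), np.log(weight), 1)
print(slope, intercept)   # slope a little above 3: weight ~ length^3, roughly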

Cheers

Rex
--
Rex Boggs                 Phone: 0749 230 338
Glenmore SHS              Fax:   0749 230 350
P.O. Box 5822, R.M.C.     Email: [EMAIL PROTECTED]
Rockhampton QLD  4702
Australia
--
Secondary Mathematics Assessment and Resource Database
 http://smard.cqu.edu.au
--
 Exploring Data website
 http://exploringdata.cqu.edu.au
--







Re: lin. reg.

2000-07-03 Thread Humberto Barreto

At 10:01 AM +0200 7/3/00, John Hendrickx wrote:
>Excel 97 has an add-in for doing regression and certain other statistical
>analyses.

>It's limited to bivariate regression though.

That's not right. Trendline is limited to bivariate. With Data Analysis:
Regression, you can select a contiguous cell range with more than one X
variable.

See the file Reg.xls at http://www.wabash.edu/econometrics


***
Humberto Barreto
Department of Economics
Wabash College
Crawfordsville, IN 47933

Phone: 765-361-6315
FAX: 765-361-6277
Email: [EMAIL PROTECTED]
WWW: http://www.wabash.edu/depart/economic/barretoh/barretoh.html







Re: cubic regression

2000-07-03 Thread Robert Dawson

Paul Velleman wrote:
> > I'd rather fit log(wt) on day.
and Donald Burrill responded:
> Agreed.  (Any day!-)
>  This has the further virtue of permitting "doubling time" to be
> defined and estimated, for the range in which the exponential growth
> function appears to be an adequate description.  (Exponential growth
> always eventually comes to be dominated by some limiting factor(s)
> inherent either in the system exhibiting such growth or in the
> environment in which it takes place.  Extrapolation is no more to be
> trusted for such a model than for a polynomial.)

In which case something like  A exp{Bt}/(C+exp{Bt}) allowing  both
transitions to be modelled would seem ideal.
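
For the record, a sketch of fitting that form with scipy's curve_fit, on 
fabricated data (the actual chick weights aren't reproduced here):

import numpy as np
from scipy.optimize import curve_fit

def growth(t, A, B, C):
    # A*exp(B*t)/(C + exp(B*t)): exponential at first, levelling off at A
    return A * np.exp(B * t) / (C + np.exp(B * t))

t = np.arange(21.0)
w = growth(t, 300.0, 0.4, 50.0) + np.random.default_rng(3).normal(0, 5, t.size)

params, _ = curve_fit(growth, t, w, p0=(300.0, 0.5, 50.0))
print(params)   # recovers A, B, C (asymptote, rate, offset)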

-Robert







Re: cubic regression

2000-07-03 Thread Donald Burrill

On Sat, 1 Jul 2000, Paul Velleman wrote:

> I'm not real comfortable with a polynomial model that takes nearly 
> half the available degrees of freedom and offers no theoretical 
> motivation. 

"Comfortable" is not a word that much occurs to mind in the context of 
polynomial models.  From the point of view of teaching about modelling, 
polynomial models permit one to show a number of things, here mentioned 
in no particular order:
  +  Whatever functions may be theoretically justified as models, one 
can find a polynomial that will more or less adequately describe 
any empirical shape of function.  Even if one has no idea what 
kind(s) of function may be theoretically appropriate.
  +  Any polynomial will rather rapidly zoom off toward infinity (or 
negative infinity) if you try to extrapolate beyond the range
of the data, as Bob Hayden illustrated with one data set.
Interpolation within that range may even have some problems, 
as Paul and I have both noted in the chicks data.
  +  Although polynomial shapes are often described in stereotypical 
terms (quadratic = parabola = 1 bend in the function;  cubic 
= 2 bends;  etc.), particular polynomials may not appear to 
display the stereotypical shape (1 bend "implies" quadratic, 
2 bends "imply" cubic, etc.).  The chicks data exhibit only one 
bend, but a quadratic fit is not satisfactory, a cubic fit does 
not show two bends unless you look carefully (or extrapolate to 
the left), etc.
  +  Using orthogonal polynomial components as predictors in developing 
an empirical model has certain conveniences (well described in, 
e.g., Draper & Smith, so I won't go into detail here).

> I'd rather fit log(wt) on day. 
Agreed.  (Any day!-)  
 This has the further virtue of permitting "doubling time" to be 
defined and estimated, for the range in which the exponential growth 
function appears to be an adequate description.  (Exponential growth 
always eventually comes to be dominated by some limiting factor(s) 
inherent either in the system exhibiting such growth or in the 
environment in which it takes place.  Extrapolation is no more to be 
trusted for such a model than for a polynomial.)
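
In that model the doubling time falls straight out of the slope: with 
natural logs, log(wt) = a + b*day gives a doubling time of ln(2)/b days.  
A toy check in Python (fabricated growth curve):

import numpy as np

day = np.arange(15.0)
wt = 45.0 * np.exp(0.12 * day)          # fabricated exponential growth

b, a = np.polyfit(day, np.log(wt), 1)   # slope b of log(wt) on day
print(np.log(2) / b)                    # doubling time: about 5.8 days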

 
 Donald F. Burrill [EMAIL PROTECTED]
 348 Hyde Hall, Plymouth State College,  [EMAIL PROTECTED]
 MSC #29, Plymouth, NH 03264 603-535-2597
 184 Nashua Road, Bedford, NH 03110  603-471-7128  






Re: Ancova question

2000-07-03 Thread Donald Burrill

On Mon, 3 Jul 2000, Miguel Verdu wrote:

> In an ANCOVA where the covariate interacts with the independent variable,
> should the covariate be nested within the independent variable?  I
> would appreciate bibliographic references on this matter.

In general, interaction can be observed only if the interacting variables 
are crossed.  If they are nested, "interaction" is not defined. 
Thus for a one-factor ANCOVA the conventional sources of variation are
the factor (A, say), the covariate (X), and their interaction (A*X). 
(The existence of a significant interaction implies that the slope of the 
dependent variable Y on X is not constant across the several levels of 
A.)  
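
Operationally, the constant-slope question is a comparison of two nested 
models; a sketch in Python (statsmodels), with invented data:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
df = pd.DataFrame({"a": np.repeat(["a1", "a2", "a3"], 20),
                   "x": rng.normal(size=60)})
df["y"] = df["x"] + rng.normal(size=60)

equal_slopes = smf.ols("y ~ C(a) + x", data=df).fit()
crossed = smf.ols("y ~ C(a) * x", data=df).fit()    # adds the A*X term
print(anova_lm(equal_slopes, crossed))  # significant F => slopes differ across A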

A good basic treatment of ANCOVA, explicitly including interaction 
between factor(s) and covariate(s), can be found in Tatsuoka's book on 
multivariate analysis.  Chapter 3, as I recall.

Some standard statistical-package ANCOVA routines do not permit such a 
model to be analyzed;  in which case you will wish to use either a 
general-linear-model (GLM) routine or a multiple-regression routine, 
either of which will permit more elaborate models, as well as 
unconventional models, to be analyzed.
For an example of a three-factor ANCOVA in which all the 
interactions were modeled in a multiple regression analysis, see a 
White Paper of mine on the Minitab web site:
http://www.minitab.com

(Whether the conventional sources of variation provide the most useful 
way of reporting your results is another question entirely, answers to 
which tend to depend at least in part on the pattern of results.)
-- DFB.
 
 Donald F. Burrill [EMAIL PROTECTED]
 348 Hyde Hall, Plymouth State College,  [EMAIL PROTECTED]
 MSC #29, Plymouth, NH 03264 603-535-2597
 184 Nashua Road, Bedford, NH 03110  603-471-7128  






Re: lin. reg.

2000-07-03 Thread John Hendrickx

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
> Hi,
> 
> Does someone know where I can download an EXCEL macro for lin reg?
> Besides calculating y=bx+a, I would like the possibility to give Y and
> then have the macro calculate the x with s.
> 
Excel 97 has an add-in for doing regression and certain other statistical 
analyses. Select "Extra"->"Add-ins", and check both boxes for the 
"Analysis ToolPak" (I'm translating from a Dutch version; "Extra" may be 
"Tools" or something similar). This will add a "Data Analysis" item to 
the bottom of your "Extra" menu that allows elementary statistical 
analyses. It's limited to bivariate regression though.

Hope this helps,
John Hendrickx





Ancova question

2000-07-03 Thread Miguel Verdu

Hello,

In an ANCOVA where the covariate interacts with the independent variable,
should the covariate be nested within the independent variable?  I
would appreciate bibliographic references on this matter.

Thanks to all


Miguel Verdu

--
 Centro de Investigaciones sobre Desertificacion (CSIC/UV/GV)
 Cami de la Marjal, s/n
 Apartado Oficial
 46470 ALBAL, VALENCIA  (SPAIN)
  tel.: (+34) 961220540 ext 110
  fax.: (+34) 961270967

 [EMAIL PROTECTED]
 ---







Dependent ordinal data

2000-07-03 Thread robert . nemeth




Could somebody please advise me in the following problem:

I have a summary score consisting of 5 different items, which are
definitely not independent from each other. Each item describes
the severity of a given symptom (medical) on a scale of absent,
mild, moderate or severe. These assessments will be done two
times (baseline and end value) for each individual, who are assigned
to two different treatment groups within each center.
(Repeated assessments in a parallel-group multicenter design using
correlated measures.)
Is there any generalisation of the Cochran-Mantel-Haenszel methods
for this situation?
One further question is whether somebody could point me to any reference
for a CMH method for symmetry tests (McNemar or Bowker) or in general
for agreement statistics?
Many thanks in advance

Robert Németh

_
Robert Németh
Focus Clinical Drug Development GmbH

http://www.focus-cdd.de
Email: [EMAIL PROTECTED]
Tel: +49 (0) 2131 155 315    Fax: +49 (0) 2131 155 378
_


--
Focus Clinical Drug Development GmbH, Neuss



