[update] ECONOMICS: WorkingPapers/Publications/News

2001-05-24 Thread Agent V.M.

One of the best Web sites in economics has been renewed and fully updated.
It is now available at http://epf-se.uni-mb.si/verbic/index.htm

Mirror now available: http://mirror.at/verbic

With best regards,
Miroslav Verbic
Faculty of Economics and Business

--
If you want to reply to me, please remove the word NOSPAM from my e-mail
address; it is there only to defeat automated spam-mailing.






=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Variance in z test comparing percentages

2001-05-24 Thread Robert J. MacG. Dawson



Rich Ulrich wrote:
> 
>  - BUT, Robert,
> the equal N case is different from cases with unequal N -
>  - or did I lose track of what the topic really is... -

Possibly.

In the Z-for-proportion case the equal and unequal N 
cases do not differ at all; the null hypothesis (under which
p-values are calculated) makes the two populations identical,
so which observation falls in which sample doesn't matter.
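
For concreteness, here is a minimal sketch of the pooled test being
described (Python; the function name and the example counts are
illustrative, not from the thread):

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z(x1, n1, x2, n2):
        # Under H0 the two populations are identical, so the pooled
        # proportion estimates the common p -- which is why the equal-
        # and unequal-n cases are handled exactly the same way.
        p1, p2 = x1 / n1, x2 / n2
        p_pool = (x1 + x2) / (n1 + n2)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return z, p_value

    # e.g. 40/100 successes vs. 55/120 successes
    print(two_proportion_z(40, 100, 55, 120))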

In the t test case the equal- and unequal-N versions coincide
IF the variances are equal, but the nominal null hypothesis 
(equal means) does not imply this. It's an 'assumption' which
is sort of a copout, because (on the one hand) nobody really
believes it but (on the other hand) nobody's prepared to throw
it into the null, because that would weaken the alternative.
You could have:

Ho: the means are equal and the variances are equal
Ha: either the means or the variances differ

but the editors wouldn't like it.

A more sophisticated t test (Welch's) does NOT assume equal 
variances but uses some sort of fiddle (the Satterthwaite
approximation) to adjust the degrees of freedom. In some cases
this can be simplified for equal N.
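
A sketch of that adjustment (Welch's t with the Satterthwaite degrees of
freedom; a standard formulation, not code from the thread):

    from math import sqrt

    def welch_t(mean1, var1, n1, mean2, var2, n2):
        # var1, var2 are the sample variances of the two groups
        se2_1, se2_2 = var1 / n1, var2 / n2
        t = (mean1 - mean2) / sqrt(se2_1 + se2_2)
        # Welch-Satterthwaite approximation to the degrees of freedom
        df = (se2_1 + se2_2) ** 2 / (
            se2_1 ** 2 / (n1 - 1) + se2_2 ** 2 / (n2 - 1))
        return t, df

    # e.g. means 5.1 vs 4.7, variances 1.2 and 3.5, n = 30 and 50
    print(welch_t(5.1, 1.2, 30, 4.7, 3.5, 50))

With n1 == n2 and equal sample variances, df reduces to n1 + n2 - 2, which
is the equal-N simplification mentioned above.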

-Robert Dawson


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



sample size requirements and sampling error

2001-05-24 Thread Mike Tonkovich

Greetings,

Before I get to the issue at hand, I was hoping someone might explain the
differences between the following three newsgroups: sci.stat.edu,
sci.stat.cons, and sci.stat.math.  Now that I've found these newsgroups,
chances are good I will be taking advantage of the powerful resources that
exist out there.  However, I could use some guidance on what tends to get
posted where; some general guidelines would be helpful.

Now for my question.

We have an estimated 479,000 hunters in Ohio and we want to conduct a survey
to estimate such things as hunter success rates, participation rates, and
opinions on various issues related to deer management.  The first question,
of course, is how large a sample?  My former boss conducted a similar
survey and ended up with 3,800 usable responses (he actually mailed surveys
to 6,700 hunters; apparently he knew that the response rate would be around
45-55% and took this into consideration when calculating the necessary
sample size).  I'm now getting ready to conduct a similar survey, and the
question of sample size once again needs to be addressed.  There was little
documentation on how he arrived at 6,700, or 3,800 for that matter, so I'm
left with coming up with my own estimate and, of course, justifying it.
In all of the stats textbooks that I've been able to lay my hands on, the
discussions all deal with minimum sample sizes for estimating the mean of a
given variable or a proportion.  In each case, you're asked to specify a
confidence level (typically 95%) and also the bound or error that you are
willing to accept, for instance plus/minus 2.5 lbs in the case of the
average weight of a particular strain of eggplant.  In the survey that I
plan on running, I'm going to ask 40 questions.  Am I to do this for every
variable and take the maximum sample size needed to achieve the desired
level of confidence?  If not, is there a similar formula that one uses in
situations such as mine to come up with the sample size required for a
given level of confidence?
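
For a single proportion, the textbook calculation with a finite population
correction looks like the sketch below (Python; the worst-case p = 0.5 is
an assumption, not something specified above):

    from math import ceil

    def required_n(N, margin, z=1.96, p=0.5):
        # Minimum n so the 95% margin of error for a proportion is at
        # most `margin`; n0 is the infinite-population answer, and the
        # return line applies the finite population correction.
        n0 = (z ** 2) * p * (1 - p) / margin ** 2
        return ceil(n0 / (1 + (n0 - 1) / N))

    # e.g. a +/- 1.6% margin with N = 479,000 hunters:
    print(required_n(479000, 0.016))    # 3723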

On a related note: after I discussed this issue with a statistician in
Virginia, he sent me an Excel spreadsheet that asked for two inputs, the
size of the sample and the size of the population, and the output was the
maximum % sampling error.  The inputs and outputs are presented below.

Population Size   Sample Size   Max Sampling Error (95% CI)   Percent Error (95% CI)
479,000           3,800         0.0158345305                  1.583453047
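
Those figures are consistent with the worst-case (p = 0.5) margin of error
for a proportion with a finite population correction; the spreadsheet's
actual formula isn't shown, but the sketch below reproduces its output:

    from math import sqrt

    def max_sampling_error(N, n, z=1.96, p=0.5):
        fpc = sqrt((N - n) / (N - 1))      # finite population correction
        return z * sqrt(p * (1 - p) / n) * fpc

    e = max_sampling_error(479000, 3800)
    print(e, 100 * e)    # ~0.0158345 and ~1.58%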

I'm having a tough time grasping just exactly what the 1.58% means (in
simple terms that administrators can understand!).  Does that mean that in
repeated sampling of n = 3,800, 95 times out of 100, the sample mean plus
or minus 1.58% of the mean will contain the actual population mean?  And
how does this relate, if at all, to the standard error and the CV?  I know
that the CV is actually a percentage (the SE expressed as a percent of the
mean).  Is this 1.58% the maximum CV for all variables in the survey?  If
anyone can help me sort this out, I would greatly appreciate it.

Thanks in advance for any assistance you might be able to offer.

Mike Tonkovich












RE: sample size requirements and sampling error

2001-05-24 Thread Allen E. Bingham



Mike,

There are a number of factors to be considered over and above your
(more-or-less simple) question about how to determine sample size in a
survey of the nature you're proposing.

Probably a more important factor than setting your sample size to achieve a
given sampling error is the issue of non-response.  Do you have any reason
to believe that the 45-55% of the hunters who fail to respond to your survey
have characteristics similar to those who do?  Most research on this issue
indicates that recreational participants who are more "avid" (i.e.,
participate and "harvest" more, etc.) tend to respond more frequently to
these types of surveys than those who participate less (Brown and Wilkins
1978; Leinonen 1988; Tarrant et al. 1993; Fisher 1993).  Accordingly, you
will need to address this issue somehow (assuming you wish to make
inferences that apply to the estimated 479,000 hunters in Ohio, rather than
to the ??? hunters in Ohio who tend to respond to these types of surveys).

There are a number of ways to address non-response.  Drane et al. (1993)
described a procedure that we use in our mail survey of recreational
anglers in Alaska each year.  The details are available in the operational
plan describing our survey, which I've made available at:

  http://www.sf.adfg.state.ak.us/rts/bingham/Accsp/Alaska_statewide_sport_fish_harvest_survey.pdf
Other procedures are outlined in an easy-to-digest format in the following
publication (which may also be of some use to you in regards to setting
your sample size):

  Salant, Priscilla, and Don A. Dillman. 1994. How to conduct your own
  survey. John Wiley and Sons, New York.

This publication is available in cloth (ISBN 0-471-01267-X) or paper (ISBN
0-471-01273-4), and is inexpensive at $19.95 for the paper version.
Another publication that I have found quite useful in addressing issues of
this type (though comparatively expensive at $175.00, ISBN 0-471-61171-9):

  Groves, Robert M. 1989. Survey errors and survey costs. John Wiley and
  Sons, New York.

If this is too pricey, you should at least consider getting a copy from a
library.
One other issue that you may want to consider when setting a sample size
for a survey of this type, one not necessarily related to sampling error:
participation at sites that are utilized by only a small proportion of the
hunting population will be detectable only with very large sample sizes.
So if you're interested in estimating parameters for a location-and-animal
combination in which only 1% of the 479,000 hunters (4,790 hunters)
participate, and you send out only 6,700 surveys and get back 3,800
responses, then "on average" only 38 of those responses will be from
hunters who took part in that particular hypothetical hunt.
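
A quick back-of-the-envelope check of that arithmetic (Python; the 1%
participation rate is the hypothetical from the paragraph above):

    from math import comb

    n, p = 3800, 0.01           # usable responses; hypothetical subgroup rate
    print(n * p)                # expected respondents from the subgroup: 38.0

    # exact binomial probability of seeing at most k such respondents
    def binom_cdf(k, n, p):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

    print(binom_cdf(25, n, p))  # chance of 25 or fewer such respondents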
 
You didn't indicate in your question what agency you work for in Ohio, or
where, but there are possibly a number of resources available to help you
design a survey of this nature.  For example, the Department of Statistics
at The Ohio State University has a Statistical Consulting Service, which is
described on its web site as follows:

  "Welcome to the Statistical Consulting Service (SCS), at The Ohio State
  University.  The SCS is a team of faculty, staff and graduate students in
  the Department of Statistics, with the mission to assist in improving the
  quality of research at Ohio State and the broader scientific community."

(The web site is at http://www.stat.ohio-state.edu/~scs/.)  I imagine some
of the other universities in Ohio may have similar services.  For more
leads, you might also want to contact one of the various local chapters of
the American Statistical Association (http://www.amstat.org/):

  Cleveland: http://www.bio.ri.ccf.org/ASA/
  Columbus: http://stat.ohio-state.edu/~peruggia/asa/index.html
  Cincinnati: http://www.muohio.edu/~asacccwis/
  Dayton: http://www.wright.edu/%7Emunsup.seoh/asadaytonchapter/
Finally, you might want to check out some "leads" at the internet sites
listed by the Survey Research Methods Section of the ASA at
http://www.amstat.org/sections/SRMS/links.html.

Hope this helps.
LITERATURE CITED

  Brown, T. L., and B. T. Wilkins. 1978. Clues to reasons for nonresponse,
  and its effect upon variable estimates. Journal of Leisure Research
  10:226-231.

  Drane, J. W., D. Richter, and C. Stoskopf. 1993. Improved imputation of
  non-responses to mailback questionnaires. Statistics in Medicine
  12:283-288.

  Fisher, M. R. 1993. The relationship between nonresponse bias and angler
  specialization. Ph.D. dissertation, Texas A&M University, College Station.

  Leinonen, K. 1988. Biased catch estimates due to nonresponse in fishing
  questionnaire. Finnish Fisheries Research 7:66-74.

  Tarrant, M. A., M. J. Manfredo, P. B. Bayley, and R. Hess. 1993. E

Re: Standardized testing in schools

2001-05-24 Thread W. D. Allen Sr.

"And this proved to me , once again,
why nuclear power plants are too hazardous to trust:..."

Maybe you better rush to tell the Navy how risky nuclear power plants are!
They have only been operating nuclear power plants for almost half a century
with NO, I repeat NO failures that has ever resulted in any radiation
poisoning or the death of any ship's crew. In fact the most extensive use of
Navy nuclear power plants has been under the most constrained possible
conditions, and that is aboard submarines!

Beware of our imaginary bogey bears!!

You are right, though: there is nothing really hazardous about the
operation of nuclear power plants themselves.  The real problem has been
civilian management's ignorance or laziness!


WDA

end

"Rich Ulrich" <[EMAIL PROTECTED]> wrote in message
[EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
> Standardized tests and their problems?  Here was a
> problem with equating the scores between years.
>
> The NY Times had a long front-page article on Monday, May 21:
> "When a test fails the schools, careers and reputations suffer."
> It was about a minor screw-up in standardizing, in 1999.  Or, since
> the company stonewalled and refused to admit any problems,
> and took a long time to find the problems, it sounds like it
> became a moderately *bad*  screw-up.
>
> The article about CTB/McGraw-Hill starts on page 1, and covers
> most of two pages on the inside of the first section.  It seems
> highly relevant to the 'testing' that the Bush administration
> advocates, to substitute for having an education policy.
>
> CTB/McGraw-Hill runs the tests for a number of states, so they
> are one of the major players.  And this proved to me, once again,
> why nuclear power plants are too hazardous to trust: we can't yet
> trust managements to spot problems, or to react to credible problem
> reports in a responsible way.
>
> In this example, there was one researcher from Tennessee who
> had strong longitudinal data to back up his protest to the company;
> the company arbitrarily (it sounds like) fiddled with *his*  scores,
> to satisfy that complaint, without ever facing up to the fact that
> they did have a real problem.  Other people, they just talked down.
>
> The company did not necessarily lose much business from the
> episode because, as someone was quoted, all the companies
> who sell these tests have histories of making mistakes.
> (But, do they have the same history of responding so badly?)
>
> --
> Rich Ulrich, [EMAIL PROTECTED]
> http://www.pitt.edu/~wpilib/index.html







The False Placebo Effect

2001-05-24 Thread David Heiser


Be careful about the assumptions in your models and studies!
---

Placebo Effect An Illusion, Study Says
By Gina Kolata
New York Times
(Published in the Sacramento Bee, Thursday, May 24, 2001)

In a new report that is being met with a mixture of astonishment and some
disbelief, two Danish researchers say that the placebo effect is a myth.

The investigators analyzed 114 published studies involving about 7,500
patients with 40 different conditions. They found no support for the common
notion that, in general, about one-third of patients will improve if they
are given a dummy pill and told it is real.

Instead, they theorize, patients seem to improve after taking placebos
because most diseases have uneven courses in which their severity waxes and
wanes. In studies in which treatments are compared not just to placebos but
also to no treatment at all, they said, participants given no treatment
improve at about the same rate as participants given placebos.

The paper appears today in the New England Journal of Medicine. Both
authors, Dr. Asbjorn Hrobjartsson and Dr. Peter C. Gotzsche, are with the
University of Copenhagen and the Nordic Cochrane Center, an international
organization of medical researchers who review randomized clinical trials.

Reaction to the report covers the spectrum.

Dr. Donald Berry" a statistician at the M.D. Anderson Cancer Center in
Houston, said: "I believe it. In fact, I have long believed that the placebo
effect is nothing more than a regression effect," referring to a statistical
observation that patients who feel terrible one day will almost in- variably
feel better the next day, no matter what is done for them.
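
A quick simulation of the regression effect Berry describes (a sketch, with
all numbers invented for illustration):

    import random

    random.seed(1)

    # Each patient's underlying severity is stable; each day's score
    # adds independent noise.  Selecting the worst-feeling patients on
    # day 1 partly selects bad luck, so on day 2 their average moves
    # back toward the overall mean with no treatment at all.
    true = [random.gauss(50, 10) for _ in range(10000)]
    day1 = [t + random.gauss(0, 10) for t in true]
    day2 = [t + random.gauss(0, 10) for t in true]

    worst = sorted(range(10000), key=day1.__getitem__)[-1000:]
    avg = lambda xs: sum(xs) / len(xs)
    print(avg([day1[i] for i in worst]))   # well above the mean of 50
    print(avg([day2[i] for i in worst]))   # closer to 50, untreated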

But others were not convinced. David Freedman, a statistician at the
University of California, Berkeley, said that the statistical method the
researchers used - pooling data from many studies and analyzing them with a
tool called meta-analysis - could give results that were misleading.

"I just don't find this report to be incredibly persuasive," Freedman said.

The researchers said they saw a slight effect of placebos on subjective
outcomes reported by patients, like their descriptions of how much pain they
experienced. But Hrobjartsson said he questioned that effect. "It could be a
true effect, but it also could be a reporting bias," he said. "The patient
wants to please the investigator and tells the investigator, 'I feel
slightly better.'"

Placebos still are needed in clinical research, Hrobjartsson said, to
prevent researchers from knowing who is getting a real treatment.

Curiosity prompted Hrobjartsson and Gotzsche to act. Over and over, medical
journals and textbooks asserted that placebo effects were so powerful that,
on average, 35 percent of patients would improve if they were told a dummy
treatment was real.

They began asking where this assessment came from. Every paper,
Hrobjartsson said, seemed to refer back to other papers.

He began peeling back the onion, finally coming to the original paper. It
was written by a Boston doctor, Henry Beecher, who had been chief of
anesthesiology at Massachusetts General Hospital in Boston and published a
paper in the Journal of the American Medical Association in 1955 titled,
"The Powerful Placebo." In it, Beecher, who died in 1976, reviewed about a
dozen studies that compared placebos to active treatments and concluded that
placebos had medical effects.

"He came up with the magical 35 percent number that has entered placebo
mythology, Hrobjartsson said.

But, Hrobjartsson said, diseases naturally wax and wane.

"Of the many articles I looked through, no article distinguished between a
placebo effect and the natural course of a disease," Hrobjartsson said.

He and Gotzsche began looking for well-conducted studies that divided
patients into three groups, giving one a real medical treatment, one a
placebo and one nothing at all. That was the only way, they reasoned, to
decide whether placebos had any medical effect.

They found 114, published between 1946 and 1998. When they analyzed the
data, they could detect no effects of placebos on objective measurements,
like cholesterol levels or blood pressure.

The Washington Post contributed to this report.
-end of article-







Standardized testing in schools

2001-05-24 Thread Rich Ulrich

Standardized tests and their problems?  Here was a 
problem with equating the scores between years.

The NY Times had a long front-page article on Monday, May 21:
"When a test fails the schools, careers and reputations suffer."
It was about a minor screw-up in standardizing, in 1999.  Or, since
the company stonewalled and refused to admit any problems,
and took a long time to find the problems, it sounds like it 
became a moderately *bad*  screw-up.

The article about CTB/McGraw-Hill starts on page 1, and covers
most of two pages on the inside of the first section.  It seems 
highly relevant to the 'testing' that the Bush administration 
advocates, to substitute for having an education policy.

CTB/McGraw-Hill runs the tests for a number of states, so they
are one of the major players.  And this proved to me, once again,
why nuclear power plants are too hazardous to trust: we can't yet
trust managements to spot problems, or to react to credible problem
reports in a responsible way.

In this example, there was one researcher from Tennessee who
had strong longitudinal data to back up his protest to the company;
the company arbitrarily (it sounds like) fiddled with *his*  scores, 
to satisfy that complaint, without ever facing up to the fact that 
they did have a real problem.  Other people, they just talked down.

The company did not necessarily lose much business from the
episode because, as someone was quoted, all the companies
who sell these tests have histories of making mistakes.
(But, do they have the same history of responding so badly?)

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html

