On 5 Aug 2003 06:46:40 -0700, [EMAIL PROTECTED] (Bob Roberts)
wrote:

[ snip, a lot of detail, including sample data ]

> The lawsuits are settled now, and the first wood pile is being cleaned
> up.  It is estimated to be 3,000 tons or 12,800 pallets/crates.
> 
> It was decided to sample 12 pallets/crates for arsenic.  In selecting
> the 12 pallets/crates, an attempt was made to select different-looking
> ones.  Each selected pallet/crate separately was ground into chips,
> stirred, spread out, and a composite sample was collected.  The 12
> composites were tested by the TCLP protocol, which mimics the chemical
> leaching that occurs in a landfill.  (Assume that the grinding,
> mixing, composite sampling, and TCLP analysis was done correctly.)

Several distinct issues come to mind, not entirely statistical.

 1) Okay, even assuming that the TCLP analysis was done
correctly, how valid is that analysis?  Mimicking the
leaching of a landfill sounds like a damn-poor criterion to me.
What is the maximum As that is present?  - is that reflected
directly in the TCLP analysis?

 2) Here is a question easier than validity... 
How good is the statistical reliability of measurements?
How similar are the results when you split one of those
samples into two parts, and do two TCLP analyses?
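To make that reliability question concrete: here is a minimal sketch of the
usual QA summary for duplicate analyses, the relative percent difference
(RPD).  The numbers below are hypothetical stand-ins, not your lab's data.

```python
# Hypothetical duplicate TCLP results (mg/L) from splitting several
# composite samples in two; the real numbers would come from the lab.
split_a = [0.4, 1.2, 5.1, 0.8, 2.3]
split_b = [0.5, 1.0, 4.6, 0.9, 2.7]

# Relative percent difference for each duplicate pair:
# |a - b| / mean(a, b) * 100.  Large RPDs mean the assay itself
# is too noisy to support fine distinctions between pallets.
rpds = [abs(a - b) / ((a + b) / 2) * 100 for a, b in zip(split_a, split_b)]
print([round(r, 1) for r in rpds])
```

If the RPDs run to tens of percent near the 5.0 criterion, a single
measurement near 5 cannot cleanly classify a pallet as pass or fail.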

 3) 12 pallets were selected, out of 12 thousand, and they
were "different looking" ones.  That sounds like a noble
intention, but using grossly non-random selection might
raise more problems than it solves.  Is that all that was done,
or were the 12 considered as part of an "experiment"?
 - Was there an attempt, for instance, to rate the 12  as to 
(a) their mutual, expected similarities, or (b) their singular,
expected Arsenic levels?  

Here is a gross problem.  The 12 were not random, and
you don't know what hidden agenda might have existed
behind their selection; nor whether the *intentions* of a
hidden agenda could have, even conceivably, mattered.

 4)  *Some* amount of random sampling surely needs
to be done (as distinct from the stratified sample, so far).
Some amount of knowledge is needed (do you have it?)
about the quality of those assays.  Some knowledge is
needed about whether the "eyeballing" works at all,
in classifying pallets into sets that are apt to be more
or less hazardous -- else, there was no usefulness in
doing the selection of 12, was there?
 - I think it would be good to find ways to categorize the
pallets, and then to investigate "main effects" systematically.
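A sketch of what that could look like in practice: tag every pallet with an
eyeballed category, then draw a documented random sample within each
category.  The category names and counts here are invented for illustration.

```python
import random

# Hypothetical: pallet IDs 0..12799, each tagged with an eyeballed
# category; in practice the tags come from the field inspection.
random.seed(1)
categories = {pid: random.choice(["stained", "weathered", "clean"])
              for pid in range(12800)}

# Group pallet IDs by category.
by_cat = {}
for pid, cat in categories.items():
    by_cat.setdefault(cat, []).append(pid)

# Stratified random sample: 4 pallets drawn at random per category,
# so the selection is reproducible and free of hidden agendas.
sample = {cat: random.sample(ids, 4) for cat, ids in by_cat.items()}
for cat, ids in sample.items():
    print(cat, sorted(ids))
```

With results per stratum, a "main effect" of visual category can then be
tested directly, instead of hoping the 12 hand-picked pallets were typical.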

Will there be surprises in the sample?  Or, what is the
*size* of the surprise that might arise?  I think that you
can't make a very good guess if the process has never
been attempted before.  

From my own pieces of half-related experience, I have
some opinions based on the dozen "test results" that were
presented.  The 5 scores that were above 1.0 were high
enough to be 'worrisome' when the criterion is 5.0, the
relations seem logarithmic, and the range is already wide.
If dozens more scores were selected on the same basis as
those, some of them probably would exceed 5; maybe even
exceed 25.
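That "logarithmic" hunch can be checked crudely: fit a lognormal to the
twelve results and ask what fraction of pallets would be expected over 5.0
or 25.0.  The twelve values below are hypothetical placeholders; substitute
the real results.

```python
import math
from statistics import mean, stdev

# Hypothetical TCLP arsenic results (mg/L); substitute the real twelve.
results = [0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.1, 1.4, 1.8, 2.2, 3.0, 4.5]

# Fit a lognormal by taking logs and computing mean and (sample) SD.
logs = [math.log(x) for x in results]
mu, sigma = mean(logs), stdev(logs)

def frac_exceeding(limit):
    """P(X > limit) under the lognormal fit, via the normal tail."""
    z = (math.log(limit) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

print(f"Estimated fraction over 5.0:  {frac_exceeding(5.0):.3f}")
print(f"Estimated fraction over 25.0: {frac_exceeding(25.0):.4f}")
```

Even when no observed score reaches 5, a wide lognormal spread puts a
non-trivial fraction of 12,800 pallets over the criterion -- which is the
point about extrapolating from a dozen hand-picked samples.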

Now, some 'decision' questions:
How serious is a single 5?  A single 25?
How predictable are those high-scoring pallets, based
on visible characteristics or pallet-history?




To me as an environmentalist, it seems that you do have to 
distinguish between "good science"  and  "corporate science" 
for the basic facts.  You have to fight further if you hope to get  
extrapolation.

Good luck.
-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html
"Taxes are the price we pay for civilization."  Justice Holmes.
=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
.                  http://jse.stat.ncsu.edu/                    .
=================================================================
