On Feb 19, 2011, at 1:08 PM, Ben Ward wrote:

Hi Graham,

Thanks, that does explain a lot. I've been playing with taking logs of the
data in my models to make the relationship linear, which it does, and which
suggests to me that lm() is the right way to go. However, when I try to
predict y values beyond about 60% on the x axis (light transmission), the y
value (bacterial numbers) crosses the axis and goes negative, which on a
practical level isn't possible, as one can't have less than no bacteria in
a culture.

Once you have your estimated parameter in the transformed "analysis" scale, you need to apply the inverse transformation, in this case exp, to return the estimate to the measured scale. You also need to consider that in the process of transforming the values, performing an additive estimation, and transforming back to the "natural" scale, you will have estimated a multiplicative model, since exp(log(x) + log(y)) = x*y.
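For concreteness, a minimal sketch of that round trip (dat, x, and y are hypothetical names standing in for your data):

fit <- lm(log(y) ~ x, data = dat)              # additive fit on the log "analysis" scale
pred.log <- predict(fit, newdata = data.frame(x = 55))
exp(pred.log)                                  # inverse transform back to the measured scale

Since exp() is strictly positive, back-transformed predictions can never go negative.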

Had you used log(x) inside the lm call and then used predict, the predictions would be correct.
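A sketch with the same hypothetical names; with the transformation written inside the formula, predict() does the bookkeeping and returns values on the measured scale directly:

fit2 <- lm(y ~ log(x), data = dat)
predict(fit2, newdata = data.frame(x = 55))    # already on the measured scale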

You might want to consider glm models with family="poisson".
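A minimal sketch of that suggestion, using the column names from your post (untested, and assuming the counts are non-negative integers as the poisson family expects):

fit.glm <- glm(Approximate.Counts ~ X..Light.Transmission,
               family = "poisson", data = Standards)
predict(fit.glm, newdata = data.frame(X..Light.Transmission = 70),
        type = "response")                     # log link, so fitted counts are always >= 0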

--
David.

On a
practical level, when I include the curve in my appendix I could say
anything above around 60% is 0, note that the negative values from the
standard curve's predictions are not literal, and state that any negative
bacterial count obtained from the curve should be treated as 0.

I wouldn't. The informed observer would know you were flailing around.

I've not had to deal with such plateauing curves before. The values I
have for the curve don't go above 50%, so anything above that is
extrapolation, and my experiment probably won't produce x values above
50%, as the death of the culture proceeds slowly; but depending on the
relative amounts of culture and antimicrobial I use, the rate could be
faster or slower, so it could go above 50%. I was wondering if non-linear
regression is better for such a thing, but I'm hesitant to go into it in
more detail for now because of the danger of drastically increasing
complexity, if on a practical level what I currently have works and is
very accurate within the range I will most likely be using.
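If I do end up needing it, my understanding is that a self-starting asymptotic model is one common choice for a plateauing curve; a sketch only, with my column names, which I haven't tried:

fit.nls <- nls(Approximate.Counts ~ SSasymp(X..Light.Transmission, Asym, R0, lrc),
               data = Standards)
predict(fit.nls, newdata = data.frame(X..Light.Transmission = c(55, 65)))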

Thanks,
Ben.


On 19/02/2011 15:39, Graham Smith wrote:
Ben,

Does this help?

http://r-eco-evo.blogspot.com/2011/01/confidence-intervals-for-regression.html

Not sure if it will work with your particular model, but may be worth
a try.
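If it helps, the idea there presumably comes down to predict() with interval = "confidence"; a sketch using the quadratic-in-x model from your first post:

m <- lm(Approximate.Counts ~ X..Light.Transmission +
        I(X..Light.Transmission^2), data = Standards)
xs <- seq(min(Standards$X..Light.Transmission),
          max(Standards$X..Light.Transmission), length.out = 100)
ci <- predict(m, newdata = data.frame(X..Light.Transmission = xs),
              interval = "confidence")         # columns: fit, lwr, upr
plot(Approximate.Counts ~ X..Light.Transmission, data = Standards)
matlines(xs, ci, lty = c(1, 2, 2))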


Graham

On 18 February 2011 23:29, Ben Ward <benjamin.w...@bathspa.org> wrote:

   Hi, I wonder if anyone could advise me on this:

   I've been trying to make a standard curve in R with lm() from some
   spectrophotometer standards, so that I can express the curve as a
   formula and obtain values for my treated samples by plugging
   readings into it, instead of trying to judge things by eye from a
   curve drawn by hand.

   It is a curve and so I used the following formula:

   model <- lm(Approximate.Counts~X..Light.Transmission +
   I(Approximate.Counts^2), data=Standards)

   It gives me a pretty decent graph:
   library(lattice)  # xyplot() comes from the lattice package
   xyplot(Approximate.Counts + fitted(model) ~ X..Light.Transmission,
   data=Standards)

   I'm pretty happy with it, and looking at the model summary, to my
   inexperienced eyes it seems pretty good:

   lm(formula = Approximate.Counts ~ X..Light.Transmission +
   I(Approximate.Counts^2),
      data = Standards)

   Residuals:
     Min     1Q Median     3Q    Max
   -91.75 -51.04  27.33  37.28  49.72

   Coefficients:
                            Estimate Std. Error t value Pr(>|t|)
   (Intercept)              9.868e+02  2.614e+01   37.75 <2e-16 ***
   X..Light.Transmission   -1.539e+01  8.116e-01  -18.96 <2e-16 ***
   I(Approximate.Counts^2)  2.580e-04  6.182e-06   41.73 <2e-16 ***
   ---
   Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

   Residual standard error: 48.06 on 37 degrees of freedom
   Multiple R-squared: 0.9956,    Adjusted R-squared: 0.9954
   F-statistic:  4190 on 2 and 37 DF,  p-value: < 2.2e-16

   I tried to put some 95% confidence interval lines on a plot, as
   advised by my tutor, to see how they looked, and I used a function
   I found in "The R Book":

   se.lines <- function(model){
   ## slope plus/minus one standard error; summary(model)[[4]] is the
   ## coefficient table, and element [4] is the slope's SE only when
   ## the lm has a single predictor
   b1<-coef(model)[2]+ summary(model)[[4]][4]
   b2<-coef(model)[2]- summary(model)[[4]][4]
   ## means of the predictor and response, taken from the model frame
   ## (model[[12]] is the "model" component of an lm object)
   xm<-mean(model[[12]][2])
   ym<-mean(model[[12]][1])
   ## intercepts chosen so each line passes through (xm, ym)
   a1<-ym-b1*xm
   a2<-ym-b2*xm
   abline(a1,b1,lty=2)
   abline(a2,b2,lty=2)
   }
   se.lines(model)

   but when I do this on a plot I get an odd result:


   They look to me to lie in the same kind of area that my
   regression line did before I used polynomial regression, by
   squaring "Approximate.Counts":

   lm(formula = Approximate.Counts ~ X..Light.Transmission +
   I(Approximate.Counts^2), data = Standards)

   Is there something else I should be doing? I've seen several ways
   of dealing with non-linear relationships, from logs of certain
   variables, to quadratic regression, to using sin and other
   mathematical devices. I'm not completely sure if I'm "allowed" to
   square the y variable; the book only squared the x variable in
   quadratic regression, which I did first, and it fit quite well,
   but not as well as squaring Approximate.Counts does:

   model <- lm(Approximate.Counts~X..Light.Transmission +
   I(X..Light.Transmission^2), data=Standards)


   Any advice is greatly appreciated; it's the first time I've really
   had to deal with regression on coursework data that isn't a
   straight line.

   Thanks,
   Ben Ward.



David Winsemius, MD
West Hartford, CT
