Hi,
to weigh in on this:
@Aitor, Harrell's rules of thumb assume independent predictors without
any fancy covariance function. To model the covariance of the residuals
you are now estimating extra second-order parameters from the data, so
even more data is needed to stabilize the parameter estimates
Hi,
The best route is not to do it. The following web sites explain it
better than I can in a short email, and have example code:
http://biostat.mc.vanderbilt.edu/wiki/Main/DynamitePlots
http://emdbolker.wikidot.com/blog:dynamite
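If it helps, here is a minimal base-R sketch of the usual alternative those pages recommend: plot the raw data with the group means overlaid, instead of bars with error whiskers (the data here are made up):

```r
## Toy data (assumed) -- show raw points plus group means, not a dynamite plot
set.seed(1)
dat <- data.frame(group = factor(rep(c("A", "B", "C"), each = 10)),
                  y = rnorm(30, mean = rep(c(2, 4, 3), each = 10)))
stripchart(y ~ group, data = dat, vertical = TRUE, method = "jitter",
           pch = 16, col = "grey40")
means <- tapply(dat$y, dat$group, mean)
points(seq_along(means), means, pch = "-", cex = 3, col = "red")
```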
Nicholas
> From: ZAID M MCKIE-KRISBERG
> To: r-sig-ecology@r-proje
Hi Sacha,
A few things:
1) If counts are large, a normal error structure may be fine as an
approximation. "Large counts" means that, conditional on the covariates,
the counts in each cell are relatively large. If there are not many 0's,
a log transformation might be appropriate. This will be a lot e
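As a rough illustration of the point (simulated data, not yours): with large counts, an lm on the log scale and a Poisson GLM give similar slope estimates:

```r
## Simulated large counts: compare a log-scale lm with a Poisson GLM
set.seed(1)
x <- runif(100)
counts <- rpois(100, lambda = exp(3 + 1.5 * x))  # large counts, so no zeros expected
fit_lm  <- lm(log(counts) ~ x)                   # normal errors on the log scale
fit_glm <- glm(counts ~ x, family = poisson)
c(lm = coef(fit_lm)[[2]], glm = coef(fit_glm)[[2]])  # slopes should be close
```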
Hi Krista,
You might look at posterior predictive p-values and Bayes factors.
However, you are mixing your metaphors, so to speak. You can't really
talk about power in a Bayesian context; it really is a frequentist
concept dealing with classical error rates. You can think about how much
information
Hi Melissa,
I think the problem is confounding.
If you do not have pre and post measurements of soil moisture, you
cannot really fully model the effect of warming on growth vs the effect
of drying without making some assumptions. Though it would seem to me,
rusty though my botany is, that inc
Hi Manuel,
I am guessing the problem is that because you have categorical
predictors, you are getting empty cells in your cross-validation sets,
and hence infinite coefficients.
Unfortunately, you are now in a very tricky situation: to get at the
generalization error of your model you need to have
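One hedged workaround (a toy sketch, not necessarily appropriate for your design) is to stratify the cross-validation folds by the categorical predictor so every level appears in every training set:

```r
## Assign CV folds within each factor level so no fold loses a level entirely
set.seed(1)
f <- factor(rep(letters[1:4], times = c(40, 30, 20, 10)))  # unbalanced levels
k <- 5
fold <- integer(length(f))
for (lev in levels(f)) {
  idx <- which(f == lev)
  fold[idx] <- sample(rep(1:k, length.out = length(idx)))
}
table(f, fold)  # every level occurs in every fold
```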
Hi Tom,
You have 45 observations, and remember the bootstrap samples with
replacement.
If you have a few extreme data points, it is entirely likely that some
of your parameter estimates may have extreme cases that arise from rare
bootstrap samples where the extreme data appears more than once,
especial
Hi,
Yep, you seem to have more behaviors than observations.
"Error in anova.mlm(object) : residuals have rank 70 < 72"
is the clue.
On top of that, be aware that MANOVA makes very strong assumptions about
the covariance structure. You may want to look into a mixed-modeling
approach.
A solution is
Hi,
Just to add a note to the good advice you have gotten so far: Redundancy
Analysis (RA) is a linear method, as Jari explained. If you apply
transformations until your data conform to the assumptions of RA and
then do RA, you are no longer applying a linear method.
You will get back a configura
Hi Joseph,
I think you are making things a bit more complicated than they need to
be. You have 4 levels of instar as treatment and 2 presumably
(negatively) correlated responses, algae and zooplankton. You can assume
you know something about the spacing of the levels and try to fit a
linear or quadratic contrast
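A minimal sketch of that idea (toy data; variable names are assumptions): an ordered factor in R gets polynomial contrasts by default, so the linear and quadratic trend tests come out directly:

```r
## Ordered factor -> contr.poly, giving linear (.L) and quadratic (.Q) contrasts
set.seed(1)
instar <- ordered(rep(1:4, each = 5))
algae  <- rnorm(20, mean = rep(c(10, 8, 6, 5), each = 5))
fit <- lm(algae ~ instar)
summary(fit)  # rows instar.L and instar.Q test the linear/quadratic trends
```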
our replicates in each of those categories. The metric I used
> was the hourly % loss of a tethered set of algae. Basically, I set the
> tether out, came back an hour later, and quantified the percent loss
> in terms of mass.
> On Mar 4, 2010, at 10:32 AM, Nicholas L
Hi Nathan,
I don't use SPSS, so I can't comment on what it is doing,
but if you look at the bottom of the output from multcomp, it says:
(Adjusted p values reported -- single-step method)
What that means is that multcomp is adjusting for the fact
that you are doing six comparisons. So a quick and di
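For what it's worth, here is a minimal multcomp sketch (made-up one-way data with four groups, hence the six pairwise comparisons) showing adjusted versus unadjusted p-values:

```r
library(multcomp)  # assumed installed
set.seed(1)
d <- data.frame(g = factor(rep(LETTERS[1:4], each = 8)), y = rnorm(32))
fit <- lm(y ~ g, data = d)
mc <- glht(fit, linfct = mcp(g = "Tukey"))  # all 6 pairwise comparisons
summary(mc)                                 # adjusted p values, single-step
summary(mc, test = adjusted("none"))        # raw, unadjusted p values
```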
Hi Kristen,
Are you just interested in "clumpiness" in time, or "clumpiness"
in space and time? Are your sites far enough apart that the
environmental process you are looking at can be assumed independent? If
you have the spatial coordinates, even approximate ones, you could
calculate Moran's I for
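A hedged sketch of that computation with the ape package (coordinates and values invented):

```r
library(ape)  # assumed installed
set.seed(1)
xy <- cbind(runif(20), runif(20))  # approximate site coordinates
z  <- rnorm(20)                    # the variable of interest
w  <- 1 / as.matrix(dist(xy))      # inverse-distance weights
diag(w) <- 0
Moran.I(z, w)  # observed I, its expectation, sd, and a p-value
```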
Hi Tore,
What you're seeing in the residuals may just be due to the
"discreteness" of count data.
Gordon Smyth has a nice paper on this topic (and code in the statmod
package):
Dunn, P. K., and Smyth, G. K. (1996). Randomized quantile residuals.
Journal of Computational and Graphical Statistics 5, 23
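A minimal sketch of the idea (simulated Poisson data; qresiduals comes from statmod): for an adequate model, randomized quantile residuals should look standard normal, without the banding that ordinary residuals show for counts:

```r
library(statmod)  # assumed installed; provides qresiduals()
set.seed(1)
x <- runif(200)
y <- rpois(200, exp(1 + x))
fit <- glm(y ~ x, family = poisson)
r <- qresiduals(fit)  # randomized quantile residuals
qqnorm(r); qqline(r)  # should be close to a straight line here
```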
Hi Lisa,
The point of including time in the model is that your data are
ordered in time. If there is reason to believe that food availability
at year t has no influence on food availability in year t + 1,
then all you need is success ~ food. But alas, ecology is rarely that
simple, and I would guess t
Hi,
To add to all the good advice you have gotten, I would second Scott's
advice about using a GAM.
Two issues that come to mind are that the interpretation might be
tricky, as population (I assume a geographical unit) is your
"experimental unit" and individual nests are within the populations,
so flu
Hi Anna,
A couple of thoughts:
Did you try fitting a straight Poisson model? The quasi-Poisson model
assumes the variance is not a strict function of the mean, and that may
be interacting with your weighting function. Also, how exactly are the
weights defined? Is it (# squares counted)/(total po
Hi Alex,
Well, in the example you show, almost any distance-based
clustering method would identify the relationship; from your example:
dd <- dist(cbind(x, y))
cutree(hclust(dd), 2)
However, what I think you are after is simply thresholding. So
if x are your bird coordinates, then,
library(igraph)
x<-m
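Since the code above is cut off, here is a hedged reconstruction of the thresholding idea (made-up coordinates, threshold chosen arbitrarily): link points closer than the cutoff and take connected components as the groups:

```r
library(igraph)  # assumed installed
set.seed(1)
xy <- rbind(matrix(rnorm(20, mean = 0), ncol = 2),  # one cloud near (0, 0)
            matrix(rnorm(20, mean = 5), ncol = 2))  # another near (5, 5)
adj <- (as.matrix(dist(xy)) < 2) * 1                # 1 where points are close
g <- graph_from_adjacency_matrix(adj, mode = "undirected", diag = FALSE)
components(g)$membership                            # a group label per point
```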
Hi,
Doing that with xyplot is a bit of a PITA, but
try:
fits <- fitted(model)
xyplot(GPP ~ Discharge, groups = Site, data = nighttimeds1.dat, fits = fits,
       panel = function(x, y, fits, ...) {
         panel.xyplot(x, y, ...)
         ord <- order(x)                  # draw the fitted line in x order
         panel.lines(x[ord], fits[ord])
       }, col = c("red", "blue", "green", "yellow"
No No No No No!
The log likelihoods of the Poisson and the Gaussian are not comparable.
One is a discrete distribution and the other continuous; you can get
into all sorts of trouble there, and not just in pathological cases.
They are on totally different scales.
You need to make a decision if you wan
Hi Clement,
I am not sure exactly what you are proposing. Is there any data,
or is this all simulated? My first question is: why is years ordinal?
It would seem that richness would vary smoothly from year to year.
The second question is: why is spatial location lumped in with gradient?
Wouldn't a "bet
Hi,
It's not quite so simple: at what stage do you recombine your samples?
Recombining the covariances into an "average" covariance from the
imputed samples has different implications than recombining the
projected data. I think that in terms of PCA uncertainty the projection
directions are the issu
Hi Dan,
Since you are fitting a fully Bayesian model, the output from your MCMC
sampler is a sample from the joint posterior distribution. So given that
your chain is mixing properly and has converged (a whole nother kettle
of fish), I don't see why the correlation is a problem. You can assess
the model
> Message: 1
> Date: Thu, 18 Dec 2008 16:52:46 +0530
Hi,
If your counts are relatively high, you might start with a log-normal or
gamma distribution. What you are talking about here are species
abundance distributions, on which the
Hi,
I have yet to see a book that was a good introduction to statistics. I
have always felt that before going into anything about inference,
discussion should center on what an experimental unit is, and what an
observational unit is. Then incorporate notions of likelihood and
probability. S&R and
Hi Ophelia,
You are working in an area where R is pretty weak. You will have
to write your own code to do this; however, there are a few packages
that support image analysis:
EBImage on Bioconductor
biOps
rimage
Some of the spatial packages like RGDAL and RGrass might help as well.
You c
Hi Phil,
Can you plot the dendrogram in pieces? If you look at
example(hclust), they show an example where they plot the dendrogram
showing 10 aggregated groups from the centroids of a previous dendrogram
cut at 10 clusters. You could also do the opposite using as.dendrogram:
example(hclust)
hcd<-as.d
Hi,
Mark Taper, Subhash Lele, and I organized an ESA invited session around
this topic at the 98? ESA meeting in Baltimore, MD, which resulted in:
Mark L. Taper and Subhash R. Lele (eds): The Nature of Scientific
Evidence: Statistical, Philosophical, and Empirical Considerations,
University of Chic
Hi Fraser,
I believe the package plotrix has a polar plot
function. For circular statistics there are the packages CircStats, and
circular
on CRAN. Those should have what you need.
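For instance, a minimal plotrix sketch (invented bearings and lengths; argument details from memory, so check ?polar.plot):

```r
library(plotrix)  # assumed installed
set.seed(1)
bearing <- runif(30, 0, 360)      # directions in degrees
len     <- rgamma(30, shape = 2)  # e.g. movement distances
polar.plot(len, bearing, rp.type = "s")  # "s" plots symbols at each point
```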
Nicholas
> --
>
> Message: 3
> Date: Tue, 8 Jul 2008 14:35:16 -0400
> From: Fraser Smith
Hi,
Short answer: fit a 4-parameter logistic model using nls in base R, or
use the drc package (its fitting function is drm). The parameter names
differ, but the third parameter is the EC50.
Longer Answer: Need more information from you to give you a longer
answer.
Nicholas
PS: I changed the title of this thread. Something info
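To make the short answer concrete, here is a base-R sketch with the self-starting four-parameter logistic SSfpl (simulated dose-response data; in this parameterization xmid is the EC50 on the log-dose scale):

```r
## Simulated dose-response data fit with a 4-parameter logistic via nls
set.seed(1)
dose <- rep(2^(0:7), each = 3)
resp <- 0.1 + 0.9 / (1 + exp((log(20) - log(dose)) / 0.4)) +
        rnorm(length(dose), sd = 0.03)
fit <- nls(resp ~ SSfpl(log(dose), A, B, xmid, scal),
           data = data.frame(dose, resp))
exp(coef(fit)[["xmid"]])  # back-transform xmid to the dose scale (near 20 here)
```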
Hi Phil,
The main reason would be for accessing R functionality in
broader applications. Python does use a more efficient
memory model than R; however, when using R functions
through rpy, R will still make copies on assignment within
called R functions, so I am not clear that there is any
gain on
What about the packages COZIGAM and ZIGP and the orphan package in the
archives, zicounts?
Nicholas
>
> Message: 2
> Date: Tue, 10 Jun 2008 11:45:44 +0200
> From: "Dr. Cornelia B. Krug" <[EMAIL PROTECTED]>
> Subject: [R-sig-eco] zip models in R
> To: r-sig-ecology@r-project.org
> Message-ID: <[
Hi,
Following this thread, I have seen several misunderstandings that I
think should be cleared up. Firstly, we should be careful about what is
meant by "publication quality": one interpretation is, for a particular
journal, a good-resolution graphic in the format they require. In
general, the meaning ref
Hi,
Last month or so, Doug was talking about comparing models
using likelihood ratio tests, anova(m1, m2), and pointed out
that the way things are calculated in lmer, the ML and REML estimates
are equivalent. I am not sure if this is because the bias in the REML
estimates cancels out or if the estim
Hi Stephen,
Can you be a bit more specific: are the peaks already marked
and you want to find all pairs of peaks a distance k apart? Or is it
that the peaks are unknown and the signal/noise needs to be considered
as well? I know Tibshirani and Hastie have a package for peak
probability contrasts, but
Hi Melanie,
It has been a long time since I looked at gravity models, but if I
remember correctly, they are formulated as log-linear models with a
particular weight matrix. I would guess that that could be written as a
generalized linear mixed model, and fit with lme or lmer, though if a
paramet