On Tue, 22 Sep 2015, John Sorkin wrote:
Charles,
I am not sure the answer to my question (given a dataset, how can one
compare the fit of a model that fits the data with a mixture of two
normal distributions to the fit of a model that uses a single normal
distribution) can be based on the glm model you suggest.
John: After I sent what I wrote, I read Rolf's intelligent response. I
didn't realize that there are boundary issues, so yes, he's correct and
my approach is EL WRONGO. I feel bad that I just sent that email, given
that it's totally wrong. My apologies for the noise,
and thanks Rolf for the correction.
Hi John: For the log-likelihood in the single-normal case, you can just
calculate it directly using the normal density: the sum from i = 1 to n
of log f(x_i, muhat, sigmahat), where f(x_i, muhat, sigmahat) is the
density of the normal with that mean and variance.
So you can use dnorm with log = TRUE.
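A minimal sketch of that calculation in base R (variable names are illustrative). Note that the MLE of the variance uses the divisor n, not n-1; that is what makes the direct sum agree with what logLik() reports for an intercept-only gaussian glm:

```r
set.seed(1)
x <- rnorm(50, mean = 2, sd = 1.5)

# MLEs for a single normal: sample mean and the /n (not /(n-1)) variance
muhat    <- mean(x)
sigmahat <- sqrt(mean((x - muhat)^2))

# log-likelihood as the sum of log densities, via dnorm(log = TRUE)
ll_direct <- sum(dnorm(x, mean = muhat, sd = sigmahat, log = TRUE))

# the same number that logLik() reports for an intercept-only gaussian glm
ll_glm <- as.numeric(logLik(glm(x ~ 1)))

all.equal(ll_direct, ll_glm)  # TRUE
```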
On 23/09/15 13:39, John Sorkin wrote:
Charles, I am not sure the answer to my question (given a dataset,
how can one compare the fit of a model that fits the data with a
mixture of two normal distributions to the fit of a model that uses a
single normal distribution) can be based on the glm model you suggest.
Charles,
I am not sure the answer to my question (given a dataset, how can one compare
the fit of a model that fits the data with a mixture of two normal
distributions to the fit of a model that uses a single normal distribution) can
be based on the glm model you suggest.
I have used normalmixEM to get estimates of the parameters assuming two normals.
On Tue, 22 Sep 2015, John Sorkin wrote:
In any event, I still don't know how to fit a single normal distribution
and get a measure of fit, e.g. the log-likelihood.
Gotta love R:
y <- rnorm(10)
logLik(glm(y~1))
'log Lik.' -17.36071 (df=2)
HTH,
Chuck
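The same fitted object also hands you the information criteria, which penalize the parameter count (df = 2 here: mean and variance). A quick sketch checking the identities by hand:

```r
set.seed(123)
y   <- rnorm(10)
fit <- glm(y ~ 1)

ll <- logLik(fit)  # log-likelihood, df = 2 (mean and variance)
AIC(fit)           # -2*logLik + 2*df
BIC(fit)           # -2*logLik + log(n)*df

# verify the identities directly
all.equal(AIC(fit), -2 * as.numeric(ll) + 2 * attr(ll, "df"))            # TRUE
all.equal(BIC(fit), -2 * as.numeric(ll) + log(length(y)) * attr(ll, "df"))  # TRUE
```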
Bert,
I am surprised by your response. Statistics serves two purposes: estimation and
hypothesis testing. Sometimes we are fortunate and theory, physiology, physics,
or something else tells us what is the correct, or perhaps I should say most
adequate, model. Sometimes theory fails us and we wish
I'll be brief in my reply to you both, as this is off topic.
So what? All this statistical stuff is irrelevant baloney (and of
questionable accuracy, since it is based on asymptotics and strong
assumptions anyway). The question of interest is whether a mixture
fit better suits the context, which only
I am not sure AIC or BIC would be needed, as the two-normal model has at
least two additional parameters to estimate (mean1, var1, mean2, var2, plus a
mixing proportion), whereas the single normal has to estimate only mean1 and
var1. In any event, I don't know how to fit the single normal and get values
for the loglik
That's true, but if he uses an AIC or BIC criterion that penalizes the
number of parameters,
then he might see something else? This (comparing mixtures to
non-mixtures) is not something I deal with, so I'm just throwing it out there.
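To make that penalized comparison concrete without leaving base R, here is a sketch on simulated data. The stripped-down EM loop below is only a stand-in for what normalmixEM does, and the starting values are ad hoc; the single normal spends 2 parameters, the two-component mixture 5 (two means, two variances, one mixing proportion):

```r
set.seed(7)
x <- c(rnorm(150, 0, 1), rnorm(50, 3, 0.5))  # simulated: truly a 2-component mixture

## single normal: 2 parameters, closed-form MLEs
mu0 <- mean(x)
s0  <- sqrt(mean((x - mu0)^2))
ll1 <- sum(dnorm(x, mu0, s0, log = TRUE))

## two-component mixture: 5 parameters, fitted by a minimal EM
## (a stand-in for normalmixEM; starting values are ad hoc)
lam <- 0.5
mu  <- as.numeric(quantile(x, c(0.25, 0.75)))
sg  <- c(s0, s0)
for (it in 1:500) {
  d1 <- lam * dnorm(x, mu[1], sg[1])
  d2 <- (1 - lam) * dnorm(x, mu[2], sg[2])
  r  <- d1 / (d1 + d2)                       # E-step: responsibilities
  lam <- mean(r)                             # M-step: reweighted MLEs
  mu  <- c(sum(r * x) / sum(r), sum((1 - r) * x) / sum(1 - r))
  sg  <- sqrt(c(sum(r * (x - mu[1])^2) / sum(r),
                sum((1 - r) * (x - mu[2])^2) / sum(1 - r)))
}
ll2 <- sum(log(lam * dnorm(x, mu[1], sg[1]) +
               (1 - lam) * dnorm(x, mu[2], sg[2])))

## AIC = -2*logLik + 2*(number of parameters); lower is better
aic1 <- -2 * ll1 + 2 * 2
aic2 <- -2 * ll2 + 2 * 5
c(single = aic1, mixture = aic2)
```

On data like this, generated from a genuinely separated mixture, the mixture's log-likelihood gain easily outweighs the 3-parameter penalty; on single-normal data it typically would not. (The boundary issues Rolf raised mean the naive chi-square LR test does not apply here, which is exactly why the penalized criteria are attractive.)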
On Tue, Sep 22, 2015 at 4:30 PM, Bert Gunter wrote:
> Two normals will **always** be a better fit than one.
Bert,
Better, perhaps, but will something like the LR test be significant?
Adding an extra parameter to a linear regression almost always improves the R2,
but if one compares models, the model with the extra parameter is not always
significantly better.
John
P.S. Please forgive the appeal to "sig
Two normals will **always** be a better fit than one, as the latter
must be a subset of the former (with identical parameters for both
normals).
Cheers,
Bert
Bert Gunter
"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
-- Clifford Stoll
I have data that may be a mixture of two normal distributions (one contained
within the other) vs. a single normal.
I used normalmixEM to get estimates of the parameters assuming two normals:
library(mixtools)  # provides normalmixEM
GLUT <- scale(na.omit(data[,"FCW_glut"]))
GLUT
mixmdl <- normalmixEM(GLUT, k = 2, arbmean = TRUE)  # k = 2 components; k = 1 would be a single normal
summary(mixmdl)