Dear all,
I am writing an R code to fit a Bayesian mixed logit (BML) via MCMC / MH
algorithms following Train (2009, ch. 12).
Unfortunately, after many draws the covariance matrix of the correlated random
parameters tends to become a matrix with almost perfect correlation, so I think
there is a
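The degeneracy described can be monitored directly. Below is a minimal diagnostic sketch (my construction, not code from the thread; `Sigma.draws` and the simulated draws are illustrative assumptions): track the mean absolute off-diagonal correlation of each stored covariance draw via `cov2cor()` — values creeping toward 1 signal the near-perfect-correlation problem.

```r
# Hypothetical diagnostic: average absolute off-diagonal correlation of
# each stored K x K covariance draw from the MCMC run.
mean_abs_corr <- function(Sigma.draws) {
  apply(Sigma.draws, 3, function(S) {
    R <- cov2cor(S)                 # covariance draw -> correlation matrix
    mean(abs(R[lower.tri(R)]))      # mean |off-diagonal correlation|
  })
}

# Illustration on simulated positive-definite draws:
set.seed(1)
K <- 3; G <- 100
Sigma.draws <- array(0, dim = c(K, K, G))
for (g in 1:G) {
  A <- matrix(rnorm(K * K), K, K)
  Sigma.draws[, , g] <- crossprod(A)  # random p.d. covariance
}
trace <- mean_abs_corr(Sigma.draws)
range(trace)  # stays well below 1 for a healthy sampler
```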
vs f3, hence my question on the
>> list... but I may well be mistaken and I will double check as soon as I
>> am back. Hopefully this does not sound too unreasonable.
>>
>> Best wishes,
>>
>> Carlo
>>
>> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
>> On Behalf Of Joris Meys
>> Sent: Monday, June 21, 2010 12:09 PM
>> To: Carlo Fezzi
>> Cc: r-help@r-project.org
>> Subject: Re: [R] {Spam?} Re: mgcv, testing gamm vs lme,which degree
> It depends on the focus as well. If the focus is prediction, you
> might even want to consider testing whether the variance of the
> residuals differs significantly with a simple F-test. This indicates
> whether the predictive power differs significantly between the models.
> But these tests tend t
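The F-test on residual variances mentioned above can be sketched as follows (my illustration, not code from the thread; two `lm()` fits on simulated data stand in for the competing models):

```r
# Compare residual variances of a parametric fit and a flexible fit with
# an F-test: a significant result suggests the in-sample error variance
# differs between the models.
set.seed(42)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)
m1 <- lm(y ~ x)            # simple parametric model
m2 <- lm(y ~ poly(x, 5))   # flexible model standing in for the smooth
ft <- var.test(residuals(m1), residuals(m2))
ft$p.value                 # small here: the linear fit misses the curvature
```

Note that `var.test()` uses n - 1 degrees of freedom for each sample, ignoring the parameters each model spends, so this is a rough screen rather than an exact test.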
e of the
> randomization that allows for a general hypothesis about the added
> value of the spline, without focusing on its actual shape. Hence the
> "freedom" connected to that actual shape should not be used in the df
> used to test the general hypothesis.
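The randomization idea can be sketched as a small permutation test (again my construction, using `lm()`/`poly()` stand-ins rather than mgcv, on simulated data): the added value of the flexible term is measured by the drop in residual sum of squares, and its null distribution is built by permuting the null model's residuals.

```r
set.seed(1)
x <- runif(100)
y <- sin(2 * pi * x) + rnorm(100, sd = 0.4)
null.fit <- lm(y ~ x)   # null: nothing beyond the linear effect

# Test statistic: drop in RSS when a flexible term is added.
stat <- function(yy) {
  sum(residuals(lm(yy ~ x))^2) - sum(residuals(lm(yy ~ poly(x, 5)))^2)
}
obs <- stat(y)

# Null distribution: refit after permuting the null model's residuals.
perm <- replicate(499, stat(fitted(null.fit) + sample(residuals(null.fit))))
p.value <- mean(c(obs, perm) >= obs)
p.value   # small: the flexible term adds real explanatory power here
```

No degrees of freedom for the flexible term enter the test; the permutation supplies the reference distribution, which is exactly the point made above.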
>
> Hope this helps,
Should not a test of one model
vs the other take this into account?
Sorry if this sounds dull; many thanks for your help,
Carlo
> On Wednesday 16 June 2010 20:33, Carlo Fezzi wrote:
>> Dear all,
>>
>> I am using the "mgcv" package by Simon Wood to estimate an additiv
> tests. To be completely correct, this -apparently- only counts for
> gamms using the identity link.
>
> Cheers
> Joris
>
> On Wed, Jun 16, 2010 at 9:33 PM, Carlo Fezzi wrote:
Dear all,
I am using the "mgcv" package by Simon Wood to estimate an additive mixed
model in which I assume normal distribution for the residuals. I would
like to test this model vs a standard parametric mixed model, such as the
ones which are possible to estimate with "lme".
Since the smoothing
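For reference, the mechanics of the comparison can be set up like this (a sketch on assumed simulated data, not Carlo's actual model; `gamm()` returns an `$lme` component, which puts both fits on the lme scale):

```r
library(mgcv)   # gamm()
library(nlme)   # lme()

# Simulated stand-in data: grouped observations with a smooth effect.
set.seed(1)
dat <- data.frame(g = factor(rep(1:10, each = 20)), x = runif(200))
dat$y <- sin(2 * pi * dat$x) +
  rep(rnorm(10, sd = 0.5), each = 20) +   # random intercepts by group
  rnorm(200, sd = 0.3)

m.smooth <- gamm(y ~ s(x), random = list(g = ~1), data = dat)
m.param  <- lme(y ~ x, random = ~1 | g, data = dat)

AIC(m.smooth$lme, m.param)
# A likelihood-ratio test via anova(m.smooth$lme, m.param) is tempting,
# but, as the replies note, the reference distribution for the smooth
# term's degrees of freedom is not straightforward.
```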
>> + # GLS mean parameter estimates
>> + betam <- betav %*% t(X) %*% inv.sigma %*% Y
>> + })
>>    user  system elapsed
>>    1.14    0.51    1.76
>>>
>>> system.time({
>> + csig <- chol2inv(covar)
>> + betam2 <- ginv(csig %*% X) %*% csig %*% Y
>> + })
>>    user  system
of a bigger
code and needs to run several times).
Any suggestion would be greatly appreciated.
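One generic direction, sketched on simulated inputs (the names `X`, `Y`, `sigma` are assumed to match the thread's objects): avoid forming `inv.sigma` explicitly and work through a Cholesky factor instead, which is usually faster and more stable than multiplying by an inverted matrix.

```r
set.seed(1)
n <- 400; k <- 5
X <- matrix(rnorm(n * k), n, k)
Y <- rnorm(n)
sigma <- diag(n) + 0.5 * exp(-abs(outer(1:n, 1:n, "-")) / 5)  # assumed p.d.

R <- chol(sigma)                          # sigma = t(R) %*% R
Xs <- backsolve(R, X, transpose = TRUE)   # "whitened" design
Ys <- backsolve(R, Y, transpose = TRUE)   # "whitened" response
betam <- qr.solve(Xs, Ys)                 # GLS estimate, no explicit inverse
```

This reproduces the quoted estimate, since t(X) %*% inv.sigma %*% X equals crossprod(Xs), but it replaces an O(n^3) inversion with triangular solves.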
Carlo
***
Carlo Fezzi
Senior Research Associate
Centre for Social and Economic Research
on the Global Environment (CSERGE),
School of Environmental Sciences
ind(u1,u2)
c.s <- c(rep(0, N - 1), rep(c(low, -high), N))
#### OPTIMIZATION
a <- constrOptim(theta = (high - low)/(N + 1) * N:1, f = negdet, grad = NULL,
                 ui = u.s, ci = c.s, method = "Nelder-Mead",
                 outer.iterations = 500, control = list(maxit = 1))
####
**
hessian = TRUE)
I guess the function "constrOptim" does not allow this argument which, on
the other hand, is allowed in "optim".
I would be extremely grateful if anybody could suggest a way to obtain the
values of the Hessian matrix...
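One possible workaround, sketched with a toy objective standing in for `negdet` (so the objects here are illustrative, not the thread's): run `constrOptim()` as usual and then evaluate a numerical Hessian at the returned optimum with `stats::optimHess()`.

```r
f <- function(p) sum((p - c(1, 2))^2)   # toy stand-in for negdet
a <- constrOptim(theta = c(0.5, 0.5), f = f, grad = NULL,
                 ui = diag(2), ci = c(0, 0))  # constraints: p > 0
H <- optimHess(a$par, f)   # numerical Hessian at the constrained optimum
H                          # ~ 2 * diag(2) for this quadratic
```

If the optimum sits on a constraint boundary, the Hessian of the objective alone may not be the quantity of interest, so interpret it with care.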
Many thanks,
Carlo
****
To: Carlo Fezzi
Cc: r-help@r-project.org
Subject: Re: [R] R computing speed
I would suggest that you use Rprof to get a profile of the code to see
where time is being spent. You did not provide commented, minimal,
self-contained, reproducible code, so it is hard to tell from just
looking at the
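For reference, the basic Rprof pattern looks like this (a generic illustration with a made-up workload, not Carlo's code):

```r
Rprof("prof.out")                 # start the sampling profiler
invisible(replicate(100, {        # ... the code to be profiled ...
  X <- matrix(rnorm(300 * 300), 300, 300)
  solve(crossprod(X))
}))
Rprof(NULL)                       # stop profiling
s <- summaryRprof("prof.out")
head(s$by.self)                   # functions ranked by own time
unlink("prof.out")                # tidy up
```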
I would be really grateful if anybody could help me with this issue; I
attach my code below.
Many thanks,
Carlo
***
Carlo Fezzi
Centre for Social and Economic Research
on the Global Environment (CSERGE),
School of Environmental Sciences,
University of East Anglia