See below.

On Thu, Jun 3, 2010 at 5:35 PM, Gavin Simpson <gavin.simp...@ucl.ac.uk>wrote:

> On Thu, 2010-06-03 at 17:00 +0200, Joris Meys wrote:
> > On Thu, Jun 3, 2010 at 9:27 AM, Gavin Simpson <gavin.simp...@ucl.ac.uk
> >wrote:
> >
> > >
> > > vegan is probably not too useful here as the response is univariate;
> > > counts of ducks.
> > >
> >
> > If we assume that only one species is counted and of interest for the
> > whole study. I (probably wrongly) assumed that data for multiple species
> > was available.
> >
> > Without knowledge of the whole research setup it is difficult to say
> > which method is best, or even which methods are appropriate. VGAM is
> > indeed a powerful tool, but:
> >
> > > proportion_non_zero <- (sum(ifelse(data$duck == 0,0,1))/182)
> > means 182 observations in the dataset
> >
> > > model_NBI <- gamlss(duck ~ cs(HHCDI200, df = 3) + cs(HHCDI1000, df = 3) +
> > >   cs(HHHDI200, df = 3) + cs(HHHDI1000, df = 3) + cs(LFAP200, df = 3),
> > >   data = data, family = NBI)
> >
> > is 5 splines with 3 df each plus an intercept; that's a lot of df for
> > only 182 observations. Using VGAM isn't going to help here.
>
> How do you know?
>

I don't. I assumed it would be, because essentially the same splines are
used, and I overlooked the fact that the OP had already tried reducing the
model to a single smooth. I stand corrected.

Cheers
Joris


> > I'd reckon that the model itself should be reconsidered, rather than
> > the distribution used to fit the error terms.
>
> I was going to mention that too, but the OP did reduce this down to a
> single smooth and the problem of increasing deviance remained. Hence
> trying to fit a /similar/ model in other software might give an
> indication of whether the problems are restricted to a single package or
> are a more general issue with the data/model.
>
> At this stage the OP is stuck, not knowing what is wrong; (s)he has
> nothing on which to do model checking, etc. Trying zeroinfl() and fitting
> a parametric model, for example, might be a useful starting point; then
> move on to models with smoothers if required.
>

He (quite positive on that one :-) ) can indeed try VGAM on the model with
a single smooth and see whether that gives a sensible fit. That should give
some clarity on whether it is the optimization in pscl that goes wrong, or
whether the problem is inherent to the data.
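
Something along these lines could serve as a starting point (an untested
sketch; it assumes the OP's data frame `data` with the `duck` counts and
uses HHCDI200 from the quoted model, with a zero-inflated negative binomial
in both cases):

library(pscl)
library(VGAM)

# Parametric zero-inflated negative binomial as a baseline (pscl),
# using only the OP's variable names; untested:
fit_zinb <- zeroinfl(duck ~ HHCDI200 | HHCDI200,
                     data = data, dist = "negbin")
summary(fit_zinb)

# Roughly comparable model with a single smooth in VGAM
# (zinegbinomial is, I believe, the ZINB family there):
fit_vgam <- vgam(duck ~ s(HHCDI200, df = 3),
                 family = zinegbinomial, data = data)
summary(fit_vgam)

If both fits run into the same convergence problems, that points to the
data rather than to a pscl-specific optimization issue.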

In addition to that, I'd suggest taking a closer look at the iteration
control parameters of the gamlss function itself. Honestly, I've never
tried these before, but they might help with the convergence problems.
See ?gamlss.control
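
For example (again untested; the values are arbitrary guesses, just meant
to allow more cycles and to watch where things go wrong):

# smaller step length and more cycles; trace prints the global deviance
# at each iteration (values chosen arbitrarily, untested):
ctrl <- gamlss.control(n.cyc = 200, mu.step = 0.5, trace = TRUE)
model_NBI <- gamlss(duck ~ cs(HHCDI200, df = 3), data = data,
                    family = NBI, control = ctrl)

The trace output should at least show at which cycle the global deviance
starts increasing.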

Cheers
Joris

--
Joris Meys
Statistical Consultant

Ghent University
Faculty of Bioscience Engineering
Department of Applied mathematics, biometrics and process control

Coupure Links 653
B-9000 Gent

tel : +32 9 264 59 87
joris.m...@ugent.be
-------------------------------
Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php


______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
