Thomas, Thierry,
Thank you for your answers and my apologies for my late reply. Thomas,
from your reply it seems that dividing the weights by their average
would also make finding a suitable starting value more robust. This
indeed seems to be the case in the tests I have run.
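The rescaling point can be checked directly. A minimal sketch with simulated data (the data, weights, and variable names below are made up for illustration; they are not from the thread):

```r
# Dividing the weights by their mean leaves the coefficient estimates
# unchanged -- it only changes how much information glm() believes it
# has (and hence the standard errors), and it keeps the default
# starting values better behaved when weights are large.
set.seed(1)
n <- 100
x <- rnorm(n)
y <- rbinom(n, 1, plogis(0.5 * x))
w <- runif(n, 50, 150)                # weights in the vicinity of 100

fit_raw    <- glm(y ~ x, family = binomial, weights = w)
fit_scaled <- glm(y ~ x, family = binomial, weights = w / mean(w))

# Coefficients agree to numerical precision; only the claimed amount
# of information differs between the two fits.
all.equal(coef(fit_raw), coef(fit_scaled), tolerance = 1e-6)
```

(Both fits warn about non-integer successes, which is itself a hint that glm() is reading the weights as binomial denominators.)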
However, your comments about
Jan,
Thierry is correct in saying that you are misusing glm(), but there is also a
numerical problem.
You are misusing glm() because your model specification claims to have
Binomial(n,p) observations with w in the vicinity of 100, where there is a
single common p but the observed binomial p
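The numerical consequence of that reading can be seen in a small simulation (illustrative, not taken from the original posts): with a 0/1 response and prior weight w[i], glm() treats observation i as a proportion of successes out of w[i] binomial trials, so the fit claims sum(w) trials' worth of information.

```r
# Simulated 0/1 data with a single common p, each respondent given
# weight 100 (made-up numbers for illustration).
set.seed(2)
y <- rbinom(50, 1, 0.3)
w <- rep(100, 50)

fit_unw <- glm(y ~ 1, family = binomial)
fit_w   <- glm(y ~ 1, family = binomial, weights = w)

# The estimate of p is identical, but the weighted fit believes it
# saw 50 * 100 = 5000 Bernoulli trials, so its standard error is
# smaller by a factor of sqrt(100) = 10.
se_ratio <- coef(summary(fit_unw))[1, "Std. Error"] /
            coef(summary(fit_w))[1, "Std. Error"]
se_ratio                               # 10
```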
> Subject: Re: [R] Weights in binomial glm
>
> Thierry,
>
> Thank you for your answer.
>
> From the documentation it looks like it is valid to assume
> that the weights can be used for replicate weights.
> Continuing your example:
>
> The combination of some data and an aching desire for an answer does
> not ensure that a reasonable answer can be extracted from a given
> body of data.
> ~ John Tukey
>
>
>> -----Original message-----
>> From: r-help-boun...@r-project.org
>> [mailto:r-help-boun...@r-project.org] On behalf of Jan van der Laan
>> Sent: Friday 16 April 2010 14:11
> [mailto:r-help-boun...@r-project.org] On behalf of Jan van der Laan
> Sent: Friday 16 April 2010 14:11
> To: r-help@r-project.org
> Subject: [R] Weights in binomial glm
>
I have some questions about the use of weights in binomial glm as I am
not getting the results I would expect. In my case the weights I have
can be seen as 'replicate weights'; one respondent i in my dataset
corresponds to w[i] persons in the population. From the documentation
of the glm method, I
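One way to check the "replicate weights" reading is to compare a weighted fit against a fit on data in which respondent i is literally repeated w[i] times. A sketch with made-up data (names and numbers are illustrative; this matches point estimates only -- the standard errors of the weighted fit are not valid design-based standard errors, which is what tools such as the survey package address):

```r
# Integer 'replicate weights': respondent i stands for w[i] people.
set.seed(3)
d <- data.frame(x = rnorm(20))
d$y <- rbinom(20, 1, plogis(d$x))
d$w <- sample(1:5, 20, replace = TRUE)

# Expand the data so that row i appears w[i] times.
expanded <- d[rep(seq_len(nrow(d)), d$w), ]

fit_w   <- glm(y ~ x, family = binomial, weights = w, data = d)
fit_exp <- glm(y ~ x, family = binomial, data = expanded)

# The two fits maximise the same likelihood, so the coefficient
# estimates agree.
all.equal(coef(fit_w), coef(fit_exp), tolerance = 1e-6)
```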