Please disregard my previous email.
I have finally modified the function and it now returns the series of
residuals. Hence, I can use the following commands:

Unconstrained maximum:

mle logl = - 0.5*ln(2*pi) - ln(sigma) - 0.5*(e/sigma)^2
        series e = pstr_cres(y,X,q,gamma,c,m,Z)
        params gamma c sigma
end mle
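
As a side note, here is a minimal standalone check (just a sketch: it uses a
placeholder residual series in place of pstr_cres, so every value below is
made up) that summing the per-observation contribution used above, with sigma
set to sqrt(ssr/n), reproduces the concentrated log-L
-(n/2)*(ln(ssr/n) + 1 + ln(2*pi)) from the message quoted below:

nulldata 100
set seed 1234
series e = normal()          # placeholder residuals (stand-in for pstr_cres)
scalar n = $nobs
scalar ssr = sum(e^2)
scalar sigma = sqrt(ssr/n)   # ML estimate of sigma given the residuals
# sum of the per-observation contributions from the mle block above
scalar ll_sum = sum(-0.5*ln(2*$pi) - ln(sigma) - 0.5*(e/sigma)^2)
# concentrated log-likelihood from the earlier message
scalar ll_conc = -n/2 * (ln(ssr/n) + 1 + ln(2*$pi))
printf "sum of contributions: %g\nconcentrated log-L:   %g\n", ll_sum, ll_conc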

Constrained maximum:

matrix ac = atanh((2*c-c_max-c_min)./(c_max-c_min))
matrix lg = ln(gamma)
mle logl = - 0.5*ln(2*pi) - ln(sigma) - 0.5*(e/sigma)^2
        series e = pstr_cres(y,X,q,gamma,c,m,Z)
        matrix gamma = exp(lg)            
        matrix c = c_min + 0.5*(tanh(ac) + 1)*(c_max-c_min)
        params lg ac sigma
end mle
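
Just to make the reparametrization explicit, here is a quick scalar sketch
(the bounds and starting values are made up, and gamma and c are treated as
scalars rather than matrices): lg and ac are free parameters, and the inverse
transforms keep gamma strictly positive and c inside [c_min, c_max].

scalar c_min = 0.5            # hypothetical bounds
scalar c_max = 2.5
scalar c0 = 1.2               # a value inside the bounds
scalar gamma0 = 3.0           # a positive value
# forward transforms, as above
scalar ac = atanh((2*c0 - c_max - c_min)/(c_max - c_min))
scalar lg = ln(gamma0)
# inverse transforms, as used inside the mle block
scalar c_back = c_min + 0.5*(tanh(ac) + 1)*(c_max - c_min)
scalar gamma_back = exp(lg)
printf "c: %g -> %g, gamma: %g -> %g\n", c0, c_back, gamma0, gamma_back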

Thanks
Giuseppe


On Thu, 2011-10-27 at 09:57 +0200, Giuseppe Vittucci wrote:
> Still on the mle command.
> The argument of the mle command in gretl is actually the series of log-L
> contributions, not the total log-L.
> Clearly, since the total log-L is just the sum of the contributions, the
> same parameter values maximize both.
> So, as far as the point estimates of the parameters are concerned, it
> does not really matter which one is used.
> 
> So far I have used mle simply as a BFGS maximization method, and I was
> not looking at the covariance matrix or the other ancillary statistics.
> 
> In my case, working directly with the log-likelihood is much easier,
> because I have a rather complex function that returns the SSR and I use
> it directly in the command.
> In the case of homoscedastic, normally distributed residuals, the log-L
> is indeed just:
> 
> log-L = -(n/2) * (ln(ssr/n) + 1 + ln(2*pi))
> 
> and I can use my function directly in the formula.
> By contrast, working directly with the log-L contributions is not
> straightforward in my case.
> 
> I would like to know whether I could still use the covariance matrix and
> the other statistics generated by the program (in particular the
> information criteria) if, instead of supplying the log-L contributions,
> I simply divide the log-L by n.
> 
> As far as the covariance is concerned, I probably cannot use the matrix
> calculated from the outer product of the gradient, but can I use the
> Hessian?
>  
> Thanks a lot
> Giuseppe
