You can of course print out the values in your objective function, as that
is where you want the information.  In any case, using R's debugging
facilities (e.g. dump.frames & debugger) would have enabled you to find
the input values at which your function was failing.  Please see the
chapter on debugging in `Writing R Extensions'.
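
For example, along these lines (a sketch only; `negloglik' and `start'
stand in for your objective function and starting values):

  ## 1. print the parameters every time the objective is called
  negloglik2 <- function(par) { cat("par:", par, "\n"); negloglik(par) }

  ## 2. or dump the call frames when the error occurs and browse them
  options(error = quote(dump.frames("optim.dump")))
  optim(start, negloglik, method = "BFGS")   # stops with the error
  debugger(optim.dump)                       # pick the frame of negloglik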

One way out is to return NA if the objective cannot be evaluated: that
often works (though not with L-BFGS-B).  Another would be to work with
log(standard deviation) and atanh(correlation), so that the standard
deviations stay positive and the correlation stays in (-1, 1).
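
For a 2 x 2 covariance matrix that amounts to optimizing over
theta = (log sd1, log sd2, atanh rho) and rebuilding the matrix inside
the objective, e.g. (a sketch only):

  ## map an unconstrained theta to a positive-definite 2 x 2 Sigma
  make_Sigma <- function(theta) {
    s1  <- exp(theta[1])            # sd1 > 0
    s2  <- exp(theta[2])            # sd2 > 0
    rho <- tanh(theta[3])           # correlation strictly in (-1, 1)
    matrix(c(s1^2,      rho*s1*s2,
             rho*s1*s2, s2^2), 2, 2)
  }
  ## apply exp()/tanh() to the fitted theta to recover sd1, sd2, rho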


On Mon, 23 Apr 2007, DEEPANKAR BASU wrote:

> Ravi,
>
> Thanks a lot for your detailed reply. It clarifies many of the 
> confusions in my mind.
>
> I want to look at the parameter estimates at each iteration because the 
> full model that I am trying to estimate is not converging; a smaller 
> version of the model converges but the results are quite meaningless. 
> The problem in the estimation of the full model is the following: my 
> likelihood function contains the elements of a (bivariate normal) 
> covariance matrix as parameters. To compute the likelihood, I have to 
> draw random samples from the bivariate normal distribution. But no 
> matter what starting values I give, I cannot ensure that the covariance 
> matrix remains positive definite at each iteration of the optimization 
> exercise. Moreover, as soon as the covariance matrix fails to be 
> positive definite, I get an error message (because I can no longer draw 
> from the bivariate normal distribution) and the program stops. Faced 
> with this problem, I wanted to see exactly at which parameter estimates 
> the covariance matrix fails to remain positive definite. From that I 
> would think of devising a method to get around the problem, at least I
> would try to.
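
To see exactly where it fails without aborting the run, you can trap the
error, e.g. (a sketch only, assuming mvrnorm() from MASS is the sampler;
adapt it to whatever you actually use):

  library(MASS)                     # for mvrnorm (assumed sampler)

  ## draw from N(mu, Sigma); on failure report the parameters, give NULL
  safe_draws <- function(n, Sigma, theta) {
    tryCatch(mvrnorm(n, mu = c(0, 0), Sigma = Sigma),
             error = function(e) {
               cat("draw failed at theta =", theta, "\n")
               NULL
             })
  }
  ## in the objective: if (is.null(draws)) return(NA)
  ## NA is tolerated by BFGS and Nelder-Mead, but not by L-BFGS-B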
>
> Probably there is some other way to solve this problem. I would like 
> your opinion on the following question: is there some way I can 
> transform the three parameters of my (2 by 2) covariance matrix (the two 
> standard deviations and the correlation coefficient) to ensure that the 
> covariance matrix remains positive definite at each iteration of the 
> optimization? Is there any method other than transforming the parameters 
> to ensure this?
>
> Deepankar
>
>
>
> ----- Original Message -----
> From: Ravi Varadhan <[EMAIL PROTECTED]>
> Date: Monday, April 23, 2007 12:21 pm
> Subject: RE: [R] Estimates at each iteration of optim()?
>
>> Deepankar,
>>
>> Here is an example using BFGS:
>>
>>> fr <- function(x) {   ## Rosenbrock Banana function
>> +     x1 <- x[1]
>> +     x2 <- x[2]
>> +     100 * (x2 - x1 * x1)^2 + (1 - x1)^2
>> + }
>>> grr <- function(x) { ## Gradient of 'fr'
>> +     x1 <- x[1]
>> +     x2 <- x[2]
>> +     c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
>> +        200 *      (x2 - x1 * x1))
>> + }
>>> optim(c(-1.2,1), fr, grr, method = "BFGS", control=list(trace=TRUE))
>> initial  value 24.200000
>> iter  10 value 1.367383
>> iter  20 value 0.134560
>> iter  30 value 0.001978
>> iter  40 value 0.000000
>> final  value 0.000000
>> converged
>> $par
>> [1] 1 1
>>
>> $value
>> [1] 9.594955e-18
>>
>> $counts
>> function gradient
>>     110       43
>>
>> $convergence
>> [1] 0
>>
>> $message
>> NULL
>>
>>>
>>
>> This example shows that the objective function value is printed out
>> every 10 iterations.  However, trying different integer values of
>> trace from 2 to 10 (trace = 1 behaves the same as trace = TRUE) did
>> not change anything.  If you want the parameter estimates at every
>> iteration, look at the source code for BFGS (which is compiled C
>> code).  You may have to modify the source code and recompile it
>> yourself to get a more detailed trace for BFGS.
>>
>> However, you can get the parameter iterates at every step for
>> "L-BFGS-B" using trace=6, although this gives a lot more information
>> than just the parameter estimates.  Alternatively, you can use the
>> "CG" method with trace=TRUE or trace=1, which is generally a lot
>> slower than BFGS or L-BFGS-B.
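
For instance, reusing fr and grr from the example above (a sketch;
the verbose output is omitted):

  optim(c(-1.2, 1), fr, grr, method = "L-BFGS-B",
        control = list(trace = 6))
  optim(c(-1.2, 1), fr, grr, method = "CG",
        control = list(trace = 1))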
>>
>> Why do you want to look at parameter estimates for each step, anyway?
>>
>>
>> Ravi.
>>
>> ---------------------------------------------------------------------------
>>
>> Ravi Varadhan, Ph.D.
>>
>> Assistant Professor, The Center on Aging and Health
>>
>> Division of Geriatric Medicine and Gerontology
>>
>> Johns Hopkins University
>>
>> Ph: (410) 502-2619
>>
>> Fax: (410) 614-9625
>>
>> Email: [EMAIL PROTECTED]
>>
>> Webpage:
>> http://www.jhsph.edu/agingandhealth/People/Faculty/Varadhan.html
>>
>>
>> ---------------------------------------------------------------------------
>>
>> -----Original Message-----
>> From: [EMAIL PROTECTED]
>> [EMAIL PROTECTED] On Behalf Of DEEPANKAR BASU
>> Sent: Monday, April 23, 2007 11:34 AM
>> To: Peter Dalgaard
>> Cc: r-help@stat.math.ethz.ch
>> Subject: Re: [R] Estimates at each iteration of optim()?
>>
>> I read the description of the trace control parameter in ?optim and
>> then also looked at the examples given at the end. In one of the
>> examples I found that they had used "trace=TRUE" with the method
>> "SANN". I am using the method "BFGS" and I tried using "trace=TRUE"
>> too, but I did not get the parameter estimates at each iteration. As
>> you say, it might be method dependent. I tried reading the source
>> code for "optim" but could not find what I was looking for. Hence, I
>> was wondering if anyone could tell me what option to use with the
>> method "BFGS" to get the parameter estimates at each iteration of the
>> optimization.
>>
>> Deepankar
>>
>>
>> ----- Original Message -----
>> From: Peter Dalgaard <[EMAIL PROTECTED]>
>> Date: Monday, April 23, 2007 2:46 am
>> Subject: Re: [R] Estimates at each iteration of optim()?
>>
>>> DEEPANKAR BASU wrote:
>>>> I am trying to maximise a complicated loglikelihood function with
>>>> the "optim" command. Is there some way to get to know the estimates
>>>> at each iteration? When I put "control=list(trace=TRUE)" as an
>>>> option in "optim", I just got the initial and final values of the
>>>> loglikelihood, the number of iterations and whether the routine has
>>>> converged or not. I need to know the estimate values at each
>>>> iteration.
>>>>
>>> It might help if you actually _read_ the description of the trace
>>> control parameter (hint: it is not an on/off switch) in ?optim...
>>> And, as it says, this is method dependent, so you may have to study
>>> the source code.
>>>
>>>> Deepankar
>>>>
>

-- 
Brian D. Ripley,                  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

______________________________________________
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
