The issue is almost certainly in the objective function, i.e., diagH,
since Nelder-Mead doesn't use any matrix operations such as Cholesky.
I think you probably need to adjust the objective function to catch
singularities (non-positive definite cases). I do notice that you have
two identical
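The advice above can be sketched in code. This is a minimal, hypothetical stand-in (the names and the criterion are illustrative, not the poster's Hscv.diag code): wrap the objective so that a non-positive-definite matrix returns a huge finite value instead of throwing an error, which Nelder-Mead will simply step away from.

```r
# Guarded objective: penalize non-positive-definite cases instead of erroring.
safe_objective <- function(h) {
  H <- diag(h)  # stand-in for a bandwidth matrix built from the parameters
  ok <- tryCatch({ chol(H); TRUE }, error = function(e) FALSE)
  if (!ok) return(.Machine$double.xmax / 2)  # large finite penalty
  sum((h - 2)^2)  # placeholder criterion with minimum at h = c(2, 2)
}
fit <- optim(c(1, 1), safe_objective)  # Nelder-Mead by default
fit$par  # near c(2, 2)
```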
Hi R-Help,
I am using R to do functional outlier detection (using PCA to reduce to 2
dimensions - the functional boxplot methodology used in the Rainbow package),
and using the Hscv.diag function to calculate the bandwidth matrix where this line
of code is run:
result <- optim(diag(Hstart),
Thanks to all who responded. Will take me some time to digest it all.
-Roy
> On Aug 11, 2020, at 6:24 AM, J C Nash wrote:
>
> Thanks to Peter for noting that the numerical derivative part of code doesn't
> check bounds in optim().
> I tried to put some checks into Rvmmin and Rcgmin in
Thanks to Peter for noting that the numerical derivative part of code doesn't
check bounds in optim().
I tried to put some checks into Rvmmin and Rcgmin in the optimx package (they were
separate packages before, and are
still on CRAN), but I'm far from capturing all the places where numerical
This stuff is of course dependent on exactly which optimization problem you
have, but optimx::optimr is often a very good drop-in replacement for optim,
especially when bounds are involved (e.g., optim has an awkward habit of
attempting evaluations outside the domain when numerical derivatives
I am running a lot of optimization problems, at the moment using 'optim'
('optim' is actually called by another program). All of the problems have
variables with simple upper and lower bounds, which I can easily transform
into a form that is unconstrained and solve using 'BFGS'. But I was
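The bound-to-unconstrained transformation mentioned above can be sketched like this (toy objective; the helper names are illustrative): map a parameter p in (lo, hi) to an unconstrained z with the logit, run plain BFGS on z, and map the result back.

```r
# Logit transform between a box (lo, hi) and the whole real line.
to_z <- function(p, lo, hi) qlogis((p - lo) / (hi - lo))
to_p <- function(z, lo, hi) lo + (hi - lo) * plogis(z)

lo <- 0; hi <- 10
f <- function(p) (p - 3)^2                 # toy objective, minimum at p = 3
g <- function(z) f(to_p(z, lo, hi))        # unconstrained version
fit <- optim(to_z(5, lo, hi), g, method = "BFGS")
to_p(fit$par, lo, hi)  # close to 3, guaranteed inside (lo, hi)
```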
Subject: Re: [R] optim function
Date: Fri, 13 Jul 2018 17:06:56 -0400
From: J C Nash
To: Federico Becerra
Though I wrote the original codes for 3 of the 5 solvers in optim(), I now
suggest using more recent ones, some
of which I have packaged. optimx on R-forge (not the one on CRAN yet) has
There's a CRAN Task View on optimization. There might be something useful there.
-Don
--
Don MacQueen
Lawrence Livermore National Laboratory
7000 East Ave., L-627
Livermore, CA 94550
925-423-1062
Lab cell 925-724-7509
On 7/13/18, 11:43 AM, "R-help on behalf of Federico Becerra"
wrote:
Good afternoon,
I am a Biology researcher working on Functional Morphology and Behaviour
in mammals. Nowadays, I have a series of morphological data that I would
like to test against different models for which I would need to optimize
them -namely, "randomly manipulating" all models
On 02/10/2018 06:00 AM, r-help-requ...@r-project.org wrote:
Did you check the gradient? I don't think so. It's zero, so of course
you end up where you start.
Try
data.input= data.frame(state1 = (1:500), state2 = (201:700) )
err.th.scalar <- function(threshold, data){
state1 <- data$state1
state2 <- data$state2
op1l <- length(state1)
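The zero-gradient point is easy to verify numerically. The poster's err.th.scalar is truncated above, so this sketch uses an assumed stand-in of the same flavour: an error rate built from counts, which is piecewise constant in the threshold.

```r
# A count-based objective is flat between data points, so its gradient is
# zero almost everywhere and gradient-based optim cannot move.
data.input <- data.frame(state1 = 1:500, state2 = 201:700)
err.th <- function(threshold, data) {
  mean(data$state1 > threshold) + mean(data$state2 < threshold)
}
eps <- 1e-6
g <- (err.th(300.5 + eps, data.input) -
      err.th(300.5 - eps, data.input)) / (2 * eps)
g  # 0: a finite-difference gradient between data points vanishes
```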
Hello,
I'm trying to minimize the following problem:
You have a data frame with 2 columns.
data.input= data.frame(state1 = (1:500), state2 = (201:700) )
with data that partially overlap in terms of values.
I want to minimize the assessment error of each state by using this function:
When optim() is used with method="BFGS", the name of parameters within
the vector are transmitted (see below, first example).
When method="Brent", the name of parameter (only one parameter can be
fitted with Brent method) is not transmitted. As there is only one, of
course, we know which
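The observation above is easy to check. A small sketch (toy objective): record what names(par) the objective sees under method "BFGS".

```r
# Capture the names of the parameter vector each time fn is called.
seen <- NULL
f <- function(p) {
  seen <<- c(seen, list(names(p)))
  sum((p - 1)^2)
}
invisible(optim(c(a = 0.5), f, method = "BFGS"))
seen[[1]]  # "a": the parameter name is transmitted with BFGS
```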
count <- count+1
>>}
>>
>>count <-1
>> }
>>
>> qjk.cal.matrix # RETURN CALCULATED MATRIX TO THE ERROR FUNCTION
>>
>> }
>>
>>
>> # ERROR FUNCTION - FINDS DIFFERENCE BETWEEN CAL. MATRIX AND ORIGINAL
>> MATRIX. Mi
trix.prod))
>>>
>>> count <- 1
>>> number <- 1
>>> for(colnum in 1:ncol(my.data.matrix.prod)) # loop through all
>PROD
>>> wells columns
>>> {
>>>sum <-0
>>>for(row in 1:nrow(my.data.matrix.prod)
1,1,1,1,Inf,1,1,1,1,1,Inf,1,1,1,1,1),
lower=c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0))
sols
On 17 June 2016 at 16:55, Jeff Newmiller <jdnew...@dcn.davis.ca.us> wrote:
Your code is corrupt because you failed to send your email in plain text
format.
You also don't appear to have all data needed to reproduce the problem. Use
> the dput function to generate R code form of a sample of your data.
> --
> Sent from my phone. Please excuse my brevity.
>
> On June 17, 2016 1:07:21 PM PDT, Priyank Dwivedi <dpriyan...@gmail.com>
> wrote:
>>
>> By mistake, I sent it earlier to the wrong address.
>> Your code is corrupt because you failed to send your email in plain
>text format.
>>
>> You also don't appear to have all data needed to reproduce the
>problem. Use the dput function to generate R code form of a sample of
>your data.
>> --
>> Sent from my ph
1:07:21 PM PDT, Priyank Dwivedi <dpriyan...@gmail.com> wrote:
>By mistake, I sent it earlier to the wrong address.
>
>-- Forwarded message --
>From: Priyank Dwivedi <dpriyan...@gmail.com>
>Date: 17 June 2016 at 14:50
>Subject: Matrix Constraints
By mistake, I sent it earlier to the wrong address.
-- Forwarded message --
From: Priyank Dwivedi <dpriyan...@gmail.com>
Date: 17 June 2016 at 14:50
Subject: Matrix Constraints in R Optim
To: r-help-ow...@r-project.org
Hi,
Below is the code snippet I wrote in R:
The basi
Dear members;
I am stuck trying to find optimal parameters using the optim() function. I would be
very grateful if you could help me with this:
I have the following equation:
Rp,t+1 = rf + beta*rt+1 (1)
Rp,t+1 = the return of the portfolio, rf = the risk-free rate
Since some questioned the scaling idea, here are runs first with
scaling and then without scaling. Note how much better the solution
is in the first run (see arrows). It is also evident from the data
> head(data, 3)
y x1 x2 x3
1 0.660 20 7.0 1680
2 0.165 5 1.7 350
3 0.660 20 7.0
> On 14 Nov 2015, at 16:15, Lorenzo Isella wrote:
>
> Dear All,
> I am using optim() for a relatively simple task: a linear model where
> instead of minimizing the sum of the squared errors, I minimize the sum
> of the squared relative errors.
> However, I notice that
Dear All,
I am using optim() for a relatively simple task: a linear model where
instead of minimizing the sum of the squared errors, I minimize the sum
of the squared relative errors.
However, I notice that the default algorithm is very sensitive to the
choice of the initial fit parameters,
Typically the parameters being optimized should be the same order of
magnitude or else you can expect numerical problems. That is what the
fnscale control parameter is for.
On Sat, Nov 14, 2015 at 10:15 AM, Lorenzo Isella
wrote:
> Dear All,
> I am using optim() for a
I meant the parscale parameter.
On Sat, Nov 14, 2015 at 10:30 AM, Gabor Grothendieck
wrote:
> Typically the parameters being optimized should be the same order of
> magnitude or else you can expect numerical problems. That is what the
> fnscale control parameter is for.
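The distinction being corrected here can be shown with a toy problem (all numbers illustrative): control$parscale tells optim the typical magnitude of each parameter, while control$fnscale scales the function value.

```r
# Two parameters with very different magnitudes: one near 2, one near 1e4.
f <- function(p) (p[1] - 2)^2 + ((p[2] - 1e4) / 1e4)^2
fit <- optim(c(1, 1.5e4), f, method = "BFGS",
             control = list(parscale = c(1, 1e4)))  # typical sizes of p
fit$par  # roughly c(2, 1e4)
```

Internally optim works with p/parscale, so both rescaled parameters are of order one and the finite-difference gradients are balanced.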
> On 14 Nov 2015, at 17:02, Berend Hasselman wrote:
>
>>
>> On 14 Nov 2015, at 16:15, Lorenzo Isella wrote:
>>
>> Dear All,
>> I am using optim() for a relatively simple task: a linear model where
>> instead of minimizing the sum of the squared
Hi Sir,
How do I use optim for maximization? I don't understand the
control$fnscale option that is given on the help page. It says if
control$fnscale is negative, the function will be maximized.
Thanks a lot
Padmanand
--
Padmanand Madhavan Nambiar
Alternate e-mail id : an...@uga.edu
Padmanand Madhavan Nambiar padmanandm at gmail.com writes:
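A minimal sketch of the fnscale answer: pass control = list(fnscale = -1) and optim maximizes fn instead of minimizing it.

```r
# Concave toy function with its maximum value 5 at x = 2.
f <- function(x) -(x - 2)^2 + 5
fit <- optim(0, f, method = "BFGS", control = list(fnscale = -1))
c(fit$par, fit$value)  # near c(2, 5)
```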
Evan Cooch evan.cooch at gmail.com writes:
You could also use Rvmmin
that has bounds, or nmkb from dfoptim (though you
cannot start on bounds).
One 'negative' for dfoptim is that it doesn't automatically generate the
Hessian (as far as I can tell). Rather nice to be able to do so
Or, something to that effect. Following is an example of what I'm
working with: basic ABO blood type ML estimation from observed type
(phenotypic) frequencies. First, I generate a log-likelihood function.
mu[1] - mu[2] are allele freqs for A and B alleles, respectively. Since
freq of O allele
on bounds).
Best, JN
On 14-09-19 06:00 AM, r-help-requ...@r-project.org wrote:
Message: 27
Date: Fri, 19 Sep 2014 00:55:04 -0400
From: Evan Cooch evan.co...@gmail.com
To: r-help@r-project.org
Subject: [R] optim, L-BFGS-B | constrained bounds on parms?
Message-ID: 541bb728.6030...@gmail.com
Content
On 9/19/2014 11:32 AM, Prof J C Nash (U30A) wrote:
One choice is to add a penalty to the objective to enforce the
constraint(s) along with bounds to keep the parameters from going wild.
This generally works reasonably well. Sometimes it helps to run just a
few iterations with a big penalty
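The penalty approach described above, as a sketch (the constraint x + y = 1 and the weight are assumed for illustration): add w * violation^2 to the objective and keep simple bounds on the parameters.

```r
# Quadratic objective plus a penalty enforcing sum(p) = 1.
penalized <- function(p, w = 1e4) {
  (p[1] - 0.8)^2 + (p[2] - 0.8)^2 + w * (sum(p) - 1)^2
}
fit <- optim(c(0.2, 0.2), penalized, method = "L-BFGS-B",
             lower = c(0, 0), upper = c(1, 1))
fit$par       # both near 0.5
sum(fit$par)  # close to 1, as the penalty enforces
```

As JN notes, a large w early on herds the parameters toward feasibility; it can then be relaxed for accuracy.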
You could also use Rvmmin
that has bounds, or nmkb from dfoptim (though you cannot start on bounds).
One 'negative' for dfoptim is that it doesn't automatically generate the
Hessian (as far as I can tell). Rather nice to be able to do so for
other calculations that usually follow after the
Hello,
In the following code, I need to calculate the parameters
par_1, par_2, par_3 separately for all the rows in the cast data frame.
How may I avoid the for loop? It is taking too much time, as I
want to calculate optimized parameters (initial values 0.2, 0.25, 0.3, ...) so
that I get
On Nov 19, 2013, at 12:09 PM, sofeira taajobian wrote:
Dear R Users
Hi,
I have a very urgent problem in my programming about finding the MLE
with the optim command. I reproduced it with real data. I guess that my
function object in optim is very sensitive because it has a power
function.
This is the code that was attached. I arrived in the copy sent to my email
address:
#--
L1=function(X){
B1=X[1]
C1=X[2]
B2=X[3]
C2=X[4]
mu=X[5]
S=-(B1*(C1^x)*((C1^v)-1)/log(C1))
+(d2*log(B1*(C1^(x+v
-(B2*(C2^y)*((C2^v)-1)/log(C2))
+(d1*log(B2*(C2^(y+v
-(v*mu)
Greetings,
In obedient deference to the demands of the collective I emailed a BUG report
containing
code and data to r-b...@r-project.org but found subsequently that I am unable
to load the page
http://bugs.r-project.org/ to check on the status of this report.
Can anyone else load this page?
On 02/10/2013 9:18 AM, Michael Meyer wrote:
Greetings,
In obedient deference to the demands of the collective I emailed a BUG report
containing
code and data to r-b...@r-project.org but found subsequently that I am unable
to load the page
http://bugs.r-project.org/ to check on the status of
Thanks for all replies.
The problem occurred in the following context:
A Gaussian one dimensional mixture (number of constituents, locations,
variances all unknown)
is to be fitted to data (as starting value to or in lieu of mixtools). A
likelihood maximization is performed.
I'll try to
Thanks for all replies.
The problem occurred in the following context:
A Gaussian one dimensional mixture (number of constituents, locations,
variances all unknown)
is to be fitted to data (as starting value to or in lieu of mixtools). A
likelihood maximization is performed.
Cool. That
Slight correction:
On Thu, Sep 5, 2013 at 7:48 AM, Bert Gunter bgun...@gene.com wrote:
Michael:
Your parameter specification is probably over-determined, so that you have
an infinite set of parameter **values** that give essentially the same
solution within numerical error. I would
Michael:
Your parameter specification is probably over-determined, so that you have
an infinite set of parameters that give essentially the same solution
within numerical error. I would venture to guess that this will not be
fixable with alternative optimizers. It is up to you to provide a
It would take some effort to extract self-contained code from the mass of code
wherein this optimization is embedded. Moreover I would have to obtain
permission from my employer to do so.
This is not efficient.
However some things are evident from the trace log which I have submitted:
(a)
Hi Michael,
You do not need to create a self-contained example from the mass of
code where it is embedded, but given that optim() works in many cases,
to file a bug report, you do need to give _an_ example where it is
failing.
Here is an example where it works great:
optim(1, fn = function(x)
-boun...@r-project.org] On
Behalf
Of Michael Meyer
Sent: Wednesday, September 04, 2013 1:35 AM
To: r-help@r-project.org
Subject: [R] optim evils
It would take some effort to extract self-contained code from the mass of code
wherein
this optimization is embedded. Moreover I would have
package through other packages (bobyqa, nmkb, Rvmmin, Rcgmin)
JN
On 13-09-04 06:00 AM, r-help-requ...@r-project.org wrote:
Message: 67
Date: Wed, 4 Sep 2013 16:34:54 +0800 (SGT)
From: Michael Meyerspyqqq...@yahoo.com
To:r-help@r-project.org r-help@r-project.org
Subject: [R] optim evils
Greetings,
I am in great anguish as the routine stats::optim shows inexplicable behaviour
of various sorts.
For one it is immune to the choice of optimization method and seems to always
do the same.
The following trace log
N = 21, M = 5 machine precision = 2.22045e-16
At X0, 0 variables are
I don't think anyone can do much to help you unless you show us (a) your
objective function OF and your starting value for pars --- which I
do not
see in your posting. Examples should be ***reproducible***!!!
My personal experience with optim() has always been very good.
cheers,
Carlos,
There are likely several problems with your likelihood. You should check it
carefully first before you do any optimization. It seems to me that you have
box constraints on the parameters. The way you are enforcing them is not
correct. I would prefer to use an optimization algorithm
Dear R helpers,
I try to find the model parameters using mle2 (bbmle package). As I try to
optimize the likelihood function the following error message occurs:
Error in grad.default(objectivefunction, coef) :
function returns NA at
: [R] [optim/bbmle] function returns NA at ... distance from x
Message-ID:
CAP=bvwpxj991fbyt9ou5x1jf9nol3vtq1svtjvw82jwfjyz...@mail.gmail.com
Content-Type: text/plain
Dear R helpers,
I try to find the model parameters using mle2 (bbmle package). As I try to
optimize the likelihood function
Hello R users,
Does optimizing a function using optim with method="L-BFGS-B" and without box
constraints lead to L-BFGS optimization in R?
[[alternative HTML version deleted]]
__
R-help@r-project.org mailing list
On 02.08.2013 10:10, Anera Salucci wrote:
Hello R users,
Does optimizing a function using optim with method="L-BFGS-B" and without box
constraints lead to L-BFGS optimization in R?
Sort of, but the question is why this would be beneficial with today's
computers ...
Best,
Uwe Ligges
, and possibly some
other routines.
JN
On 13-08-02 06:00 AM, r-help-requ...@r-project.org wrote:
Message: 36
Date: Fri, 02 Aug 2013 10:38:51 +0200
From: Uwe Liggeslig...@statistik.tu-dortmund.de
To: Anera Saluccia.salu...@yahoo.com
Cc:r-help@r-project.org r-help@r-project.org
Subject: Re: [R] optim
Hello,
I'm optimizing a log-likelihood function using the built-in optim()
function with the default method. The model is essentially a Weibull
distribution where the rate parameter changes based on a single covariate,
coded 1 or 0 (that is, when the indicator is 1, the rate parameter changes
to
Hi Ilai,
after you sent this message I tried your code as well and it worked. As a
result, I reconsidered the code written by me and of course also found the
error in my function simulateEKOP. So for other people assuming errors or ill
behavior in base functions: Forget it: these functions are
Hi guys,
I have been working in R for several years now and I would say I manage things pretty
easily, but with the foreach loop I have my problems. For a simulation I call a
double foreach loop and this works fine. Inside the second loop (which I plan
to parallelize later on) I call Rs
On Mon, Jun 3, 2013 at 11:37 AM, Simon Zehnder szehn...@uni-bonn.de
... [Some not minimal, self contained, reproducible code]...
Data simulation and the creation of startpar work fine, but the parameters
in res$par are always the start parameters. If I run the same commands
directly on the
Katja,
this seems to be a bug.
I can reproduce this under 64-bit R-2.15.3 / R-prerelease for Windows.
It works with the results given by Ben Bolker under 32-bit R for Windows.
Will inspect shortly.
Best,
Uwe
On 07.03.2013 21:08, Katja Hebestreit wrote:
Hello,
optim hangs for some
Hello,
optim hangs for some reason when called within the betareg function
(from the betareg package).
In this special case, the arguments which are passed to optim cause
never ending calculations.
I uploaded the arguments passed to optim on:
Katja Hebestreit katja.hebestreit at uni-muenster.de writes:
Hello,
optim hangs for some reason when called within the betareg function
(from the betareg package).
In this special case, the arguments which are passed to optim cause
never ending calculations.
I uploaded the arguments
-help@r-project.org
Subject: Re: [R] optim .C / Crashing on run
Hi,
Thanks for your help. Invoking valgrind under R for the test script I
attached produces the following crash report;
Rscript optim_rhelp.R -d valgrind
Nelder-Mead direct search function minimizer
function
:20 AM
To: Patrick Burns
Cc: r-help@r-project.org
Subject: Re: [R] optim .C / Crashing on run
Hi,
Thanks for your help. Invoking valgrind under R for the test script I
attached produces the following crash report;
Rscript optim_rhelp.R -d valgrind
Nelder-Mead direct search
You might also want to try the Nelder-Mead algorithm, nmk(), in the dfoptim
package. It is a better algorithm than the Nelder-Mead in optim. It is all R
code, so you might be able to modify it to fit your needs.
Ravi
Ravi Varadhan, Ph.D.
Assistant Professor
The Center on Aging and Health
That is a symptom of the C/C++ code doing
something like using memory beyond the proper
range. It's entirely possible to have crashes
in some contexts but not others.
If you can run the C code under valgrind,
that would be the easiest way to find the
problem.
Pat
On 03/11/2012 18:15, Paul
It looks like my attached files didn't go through, so I'll put them in a
public Dropbox folder instead;
optim_rhelp.tar.gz: http://dl.dropbox.com/u/1113102/optim_rhelp.tar.gz
Thanks, I'll run a compiled binary of the C++ code through Valgrind see
what it reports, then perhaps I'll try an Rscript
When invoking R, you can add
-d valgrind
to run it under valgrind.
On 04/11/2012 11:35, Paul Browne wrote:
It looks like my attached files didn't go through, so I'll put them in a
public Dropbox folder instead; optim_rhelp.tar.gz
http://dl.dropbox.com/u/1113102/optim_rhelp.tar.gz
Thanks,
Hi,
Thanks for your help. Invoking valgrind under R for the test script I
attached produces the following crash report;
Rscript optim_rhelp.R -d valgrind
Nelder-Mead direct search function minimizer
function value for initial parameters = 1267.562555
Scaled convergence tolerance is
Running this valgrind command on the test optim_rhelp.R script
R -d valgrind --tool=memcheck --leak-check=full
--log-file=optim_rhelp.valgrind.log --vanilla optim_rhelp.R
yields this report:
optim_rhelp.valgrind.log: http://dl.dropbox.com/u/1113102/optim_rhelp.valgrind.log
Ignoring everything
Playing around with alternate optimizers, I've found that both nlminb and the
nls.lm Levenberg-Marquardt optimizer in minpack.lm work with my
objective function without crashing, and minimize the function as I'd expect
them to.
Using optim for amoeba sampling would be nice, but I think I'll just
Hello,
I am attempting to use optim under the default Nelder-Mead algorithm for
model fitting, minimizing a Chi^2 statistic whose value is determined by a
.C call to an external shared library compiled from C/C++ code.
My problem has been that the R session will immediately crash upon starting
, I've never seen a query with a short,
testable case
fail to get an answer very quickly.
JN
On 10/11/2012 06:00 AM, r-help-requ...@r-project.org wrote:
Message: 92
Date: Wed, 10 Oct 2012 13:16:38 -0700 (PDT)
From: nserdar snes1...@hotmail.com
To: r-help@r-project.org
Subject: [R] optim
a fortune?
On 10/11/2012 9:56 AM, John C Nash wrote:
snip
Indeed in several years on the list, I've never seen a query with a short,
testable case
fail to get an answer very quickly.
JN
--
Spencer Graves, PE, PhD
President and Chief Technology Officer
Structure Inspection and
I have already tried optimx but I got this error message. How can I solve it?
fn is Linn
Function has 10 arguments
par[ 1 ]: 0 <= 0.5 <= 1 In Bounds
par[ 2 ]: 0 <= 0.5 <= 1 In Bounds In Bounds
par[ 3 ]: 0 <= 0.5 <= 1 In Bounds In Bounds In Bounds
# optim package
estimate <- optim(init.par, Linn, hessian=TRUE, method=c("L-BFGS-B"), control =
list(trace=1,abstol=0.001),lower=c(0,0,0,0,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf),upper=c(1,1,1,1,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf))
# nlminb package
Hello,
I have some problems regarding the function optim. In my simulations I
generate negative binomial data and get estimates of the parameter using
the likelihood and the package optim. At some point there appears the
warning:
- non-finite finite-difference value []
I think it occurs, when
On Sat, Sep 29, 2012 at 8:17 AM, wasss deichki...@gmx.net wrote:
Hello,
I have some problems regarding the function optim. In my simulations I
generate negative binomial data and get estimates of the parameter using
the likelihood and the package optim. At some point there appears the
Dear R users,
I'm using optim to optimize a pretty complicated function. This function takes
the parameter vector theta and within its body I use instructions like
sigma <- theta[a:b]; computations with sigma...
out <- c()
for (i in 1:d){
a <- theta[(3*d+i):c]
out[i] <- evaluation of an expression
On 20/09/2012 09:24, Gildas Mazo wrote:
Dear R users,
I'm using optim to optimize a pretty complicated function. This function takes the
parameter vector theta and within its body I use instructions like
sigma <- theta[a:b]; computations with sigma...
out <- c()
for (i in 1:d){
a <- theta[(3*d+i):c]
Hello,
I want to estimate the exponential parameter by using optim with the following
input, where t contains 40% of the data and q contains 60% of the data within
an interval. In implementing the code command for optim I want it to contain
both the t and q data so I can obtain the correct
On 28-08-2012, at 03:12, Christopher Kelvin wrote:
Hello,
I want to estimate the exponential parameter by using optim with the
following input, where t contains 40% of the data and q contains 60% of the
data within an interval. In implementing the code command for optim I want it
to
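A hedged sketch of the basic estimation step being discussed (the poster's censoring scheme is truncated above, so this fits plain uncensored exponential data): minimize the negative log-likelihood with optim, parameterizing on log(rate) so the default method needs no bounds.

```r
# ML estimate of an exponential rate via optim on the log-rate scale.
set.seed(1)
x <- rexp(500, rate = 2)
negll <- function(logr) -sum(dexp(x, rate = exp(logr), log = TRUE))
fit <- optim(0, negll, method = "BFGS")
exp(fit$par)  # close to the true rate of 2
```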
On Wed, Aug 1, 2012 at 4:34 PM, Xu Jun junx...@gmail.com wrote:
Thanks Michael. Now I switched my approach after doing some google.
Following are my new codes:
###
library(foreign)
readin <- read.dta("ordfile.dta", convert.factors=FALSE)
Thanks Michael. Now I switched my approach after doing some google.
Following are my new codes:
###
library(foreign)
readin <- read.dta("ordfile.dta", convert.factors=FALSE)
myvars <- c("depvar", "x1", "x2", "x3")
mydta <- readin[myvars]
# remove all
Disclaimer: I have not followed this thread at all, but only wish to note:
1) Indicator variables are (almost?) never needed in R -- that you are
fooling with them suggests that there is probably a better approach.
2) Your bols is just least squares regression, no? -- If so, there are far
better ways to
I am not that proficient in R. I found some code on the web using those
indicator variables to sum up the log likelihood. I commented out bols in the
code, but I also tried using them as start values for my estimation of
the ologit. It didn't work. Thanks for your suggestion.
Jun
On Wed, Aug 1, 2012 at 5:44 PM,
Dear R listers,
I am learning the MLE utility optim() in R to program ordered logit
models just as an exercise. See below I have three independent
variables, x1, x2, and x3. Y is coded as ordinal from 1 to 4. Y is not
yet a factor variable here. The ordered logit model satisfies the
parallel
On Tue, Jul 31, 2012 at 7:57 PM, Xu Jun junx...@gmail.com wrote:
Dear R listers,
I am learning the MLE utility optim() in R to program ordered logit
models just as an exercise. See below I have three independent
variables, x1, x2, and x3. Y is coded as ordinal from 1 to 4. Y is not
yet a
and change.
Best, JN
Message: 4
Date: Tue, 8 May 2012 14:35:10 -0500
From: Wenhao Gui guiwen...@gmail.com
To: r-help@r-project.org
Subject: [R] optim question
Message-ID:
CABZdO=zKr1wsXmTOQ54UieVQfpkAx=cyt0dzip7yt1cjb6e...@mail.gmail.com
Content-Type: text/plain
Hello,
I used optim
Hello,
I used optim to find the MLE estimates of some parameters. See the code
below. It works for data1 (x), but it did not work for data2, and the error
says L-BFGS-B needs finite values of 'fn'.
data2: c(x, 32), that is, I added the number 32 at the end of data1.
The error appears
as saw
a log.
JN
On 05/01/2012 06:00 AM, r-help-requ...@r-project.org wrote:
Message: 54 Date: Mon, 30 Apr 2012 08:30:24 -0700 (PDT) From: barb
mainze...@hotmail.com
To: r-help@r-project.org Subject: [R] Optim (fct): Parameters=LowerBounds!?
Message-ID:
1335799824557-4598504.p...@n4.nabble.com
Hey,
I am trying to do the MLE for GARCH and have a problem with the optim
function.
Initially I tried optim with method="BFGS". Reading through the forum I found
out I would need bounds, so I went on with method="L-BFGS-B".
But now my parameters equal the lower bounds.
out <- optim(par=initial,
On Fri, Feb 24, 2012 at 4:03 PM, nserdar snes1...@hotmail.com wrote:
I did it like above but got an error message.
estimate <- optim(init.par, Linn, gr=NULL, method="L-BFGS-B",
hessian=FALSE, control =
list(trace=1), lower=c(0,-Inf,Inf,Inf), upper=c(1,Inf,Inf,Inf))
Your lower bound for parameters
Hi
I need a phi restriction in my code. That is 0 < phi < 1.
How can I do that ?
Linn=function(param){
phi=param[1]
sigw=param[2]
sigv=param[3]
Betam=param[4]
kf=kfilter1(n,st[,k],st[,1],0,1,phi,Betam,sigw,sigv)
return(kf$like)
}
init.par <- c(1,1,1,1)
estimate <-
On Fri, Feb 24, 2012 at 12:18 PM, nserdar snes1...@hotmail.com wrote:
Hi
I need a phi restriction in my code. That is 0 < phi < 1
How can I do that ?
init.par <- c(1,1,1,1)
estimate <- optim(init.par, Linn, gr=NULL, method="BFGS", hessian=FALSE, control
= list(trace=1))
You want method L-BFGS-B not
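The suggested fix in code form (a toy quadratic stands in for the poster's Linn, which is not fully shown): switch to method "L-BFGS-B" and bound only phi, keeping the bounds a hair inside (0, 1) so the objective stays finite there.

```r
# Toy likelihood: minimum at phi = 0.9 and the other parameters at 2.
Linn_toy <- function(param) (param[1] - 0.9)^2 + sum((param[-1] - 2)^2)
init.par <- c(0.5, 1, 1, 1)
estimate <- optim(init.par, Linn_toy, method = "L-BFGS-B",
                  lower = c(1e-6, -Inf, -Inf, -Inf),
                  upper = c(1 - 1e-6, Inf, Inf, Inf))
estimate$par[1]  # phi stays inside (0, 1)
```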
Thanks for your attention.
I searched this function but I cannot find a specific example about box
constraints.
Can you give an example for my code?
Regards,
Ser
I did it like above but got an error message.
estimate <- optim(init.par, Linn, gr=NULL, method="L-BFGS-B",
hessian=FALSE, control =
list(trace=1), lower=c(0,-Inf,Inf,Inf), upper=c(1,Inf,Inf,Inf))
Error in solve.default(sig[, , 1]) :
system is computationally singular: reciprocal condition number =
Hello,
I'm trying to maximize a likelihood function for a HMM with the optim()
function using the Nelder-Mead Method. The LLF has 20 Parameters which
are to be estimated. We found out that R changes some variables but not
all of them, especially the last 2 parameters aren't changed. I already
Dear community,
I'm trying to model growth with this function: Yi = A* exp(-k*(1/ti^m)) ; A
asymptote, k rate of decrease of the relative growth rate, m shape
parameter.
I don't have the time variable so, finally, following some papers, I try to fit
Yi+a = A*exp(-k*
, for
effective use they
require more knowledge than many of their users possess, and can be dangerous
because they
seem to work.
JN
Message: 72
Date: Fri, 16 Dec 2011 18:41:12 +1100
From: Dae-Jin Lee lee.dae...@gmail.com
To: r-help@r-project.org
Subject: [R] optim with simulated annealing SANN