Re: [R] Error in eigen(nhatend)

2015-06-06 Thread Prof J C Nash (U30A)
It looks like the matrix nhatend (for Numerical Hessian AT END) has some
NAs or Infs.

Suggest you turn off the Hessian calculation with the arguments

   hessian=FALSE   (that is the default)
   control=list(kkt=FALSE)   (the default is TRUE for small problems)
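
For the optimx() call in the question, that might look like this (a sketch;
b0, object and gred are the poster's objects):

ngmm <- optimx(par=b0, fn=object, gr=gred,
               method=c("BFGS","nlminb","nlm"), itnmax=1, hessian=FALSE,
               control=list(follow.on=TRUE, starttests=TRUE, kkt=FALSE,
                            save.failures=TRUE, trace=0))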

Then take the resulting final parameters and use the package numDeriv to
get the numerical hessian and try to see if you can then do eigen() on it.

optimx is really just a wrapper for a lot of the calculations people
want to do with optimizers, so diagnosing this kind of thing is best
done in a pedestrian fashion by trying to replicate the calculation
outside the package.

Note that if you have analytic gradients, the Jacobian of the gradient
(from numDeriv) is more accurate than the hessian() function, which needs
two levels of differencing and can give poor approximations for many
functions. This may, in fact, be the source of the reported error.
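
In code, the check might look like this (a sketch; bpar stands for the
final parameters returned by optimx, and object/gred are the poster's
objective and gradient functions):

library(numDeriv)
nh <- hessian(object, bpar)       # two levels of differencing
eigen(nh)$values                  # look for NA or Inf here
nj <- jacobian(gred, bpar)        # one level of differencing of the gradient
eigen(0.5*(nj + t(nj)))$values    # symmetrize the Jacobian before eigen()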

Note also that there are a couple of (really stupid) typos in optimx and
Rvmmin packages. They don't affect most users. I've fixed them, but am
having trouble with reverse dependency checks, which I believe are
problems outside of my packages. However, I'm hoping to resolve those
issues before I post the fixed packages on CRAN.

JN


On 15-06-06 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 26
 Date: Fri, 5 Jun 2015 10:43:23 -0700
 From: Olu Ola oluola2...@yahoo.com
 To: r-help@r-project.org
 Subject: [R] Error in eigen(nhatend)
 Message-ID:
   1433526203.5741.yahoomailba...@web161604.mail.bf1.yahoo.com
 Content-Type: text/plain; charset=us-ascii
 
 Hello,
 I am estimating a nonlinear GMM and I got the following error message. I have 
 searched online in order to understand what is going on but could not find 
 help.
 
  ngmm = optimx(par=b0, fn=object, gr=gred, method = 
  c("BFGS","nlminb","nlm"), itnmax=1, control=list(follow.on = 
  TRUE, starttests=TRUE, save.failures=TRUE, trace=0))
 Error in eigen(nhatend) : infinite or missing values in 'x'
 In addition: Warning messages:
 1: In optimx.run(par, optcfg$ufn, optcfg$ugr, optcfg$uhess, lower,  :
   Hessian is reported non-symmetric with asymmetry ratio NaN
 2: Hessian forced symmetric 
 Error in ans.ret[meth, ] <- c(ans$par, ans$value, ans$fevals, ans$gevals,  : 
   number of items to replace is not a multiple of replacement length
 In addition: Warning message:
 In optimx.run(par, optcfg$ufn, optcfg$ugr, optcfg$uhess, lower,  :
   Eigenvalue failure after method BFGS
 
 A way forward will be highly appreciated.
 
 Thank you



Re: [R] Imposing linear binding constraint while using Rcgmin

2015-06-02 Thread Prof J C Nash (U30A)
I wrote Rcgmin to do bounds constraints only. Linear constraints are
much more complicated to include.

If your constraints are equality ones, you could solve for some parameters
in terms of the others and substitute them out, but that could make it
awkward to evaluate the gradient.

For inequality constraints, especially if there are only a couple, I
think I'd try a penalty or barrier function. They tend to mess up the
scaling and slow things down, but generally give some idea of solutions.

A penalty function adds a penalty factor * violation to the objective,
i.e., it penalizes being outside the constraint. Note that the gradient may
need attention. It is no use if you end up trying log(-something) or
sqrt(-something).


A barrier function adds cost inside the constraint to keep the search away
from the constraint boundary. Generally you want the barrier to be very
small except close to the boundary, and that is controlled by a barrier
control parameter. Again, you need to watch that the gradient is provided
properly.

If possible avoid using numerical gradients for the penalty or barrier
because of the compromised scaling, though I would use them for a quick
and dirty test.
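
As a toy sketch of the penalty idea (the objective, gradient and constraint
here are invented for illustration):

library(Rcgmin)
fn0 <- function(x) sum((x - c(1, 2))^2)   # unconstrained objective
gr0 <- function(x) 2*(x - c(1, 2))        # its analytic gradient
a <- c(1, 1); b <- 1                      # linear constraint: sum(a*x) <= b
pen <- 1e4                                # penalty scale
fnp <- function(x) fn0(x) + pen*max(0, sum(a*x) - b)^2
grp <- function(x) {                      # gradient must include the penalty
   v <- max(0, sum(a*x) - b)
   gr0(x) + 2*pen*v*a
}
ans <- Rcgmin(par=c(0, 0), fn=fnp, gr=grp)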

JN



 
 Message: 18
 Date: Tue, 2 Jun 2015 01:33:55 +0530
 From: Rajarshi Bhadra bhadrarajars...@gmail.com
 To: r-help@r-project.org
 Subject: [R] Imposing linear binding constraint while using Rcgmin
 Message-ID:
   CAFOno_aNQsXjiT4Z+9Koxig=7ji83lzv9e4ivxqvxlx+xgh...@mail.gmail.com
 Content-Type: text/plain; charset=UTF-8
 
 Is there any way by which I can impose a linear binding constraint while
 using Rcgmin for a non linear optimization exercise?
 




Re: [R] nls and four parameter estimates

2015-06-02 Thread Prof J C Nash (U30A)
Package nlmrt (function nlxb) tries to use symbolic derivatives. In
fact, Duncan Murdoch and I have a very slowly developing nls14 package
to substitute for nls that should advance this even further.

nlxb also allows masked (i.e., fixed) parameters, which would let you
combine your runs, fixing the fourth parameter for initial runs. It also
allows bounds constraints.
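
For the model below, a masked run might look like this (a sketch only; ht
is the poster's data and the start values are theirs):

library(nlmrt)
mod2x <- nlxb(germ ~ 1 - exp(-((psi - (theta/time) - mu)/sigma)^lambda),
              start = c(mu = -2.7, theta = 3, sigma = 3, lambda = -1.2),
              data = ht, masked = c("lambda"))  # lambda held fixed at first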

I suspect it should work better. I'd have tried it, but the data
provided was not easily decipherable. Was it HTML format?

JN

On 15-06-02 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 27
 Date: Tue, 2 Jun 2015 02:54:03 +
 From: onoriode.co...@csiro.au
 To: r-help@r-project.org
 Subject: [R] nls and four parameter estimates
 Message-ID:
   d736f454355006429da41754a53ffdbb43d4e...@exmbx06-cdc.nexus.csiro.au
 Content-Type: text/plain; charset=UTF-8
 
 Hello all!
 
 I am trying to estimate four parameters (mu, sigma, theta and lambda) of a 
 model. Using nls in R, I can only get it to work if I limit the number 
 of parameters to be estimated to three (i.e. mu, sigma and theta) as in the 
 first model - mod1 - below. Including a fourth parameter (lambda) like in the 
 second model - mod2 - returns the following error messages:
 
 1.   Error in numericDeriv(form[[3L]], names(ind), env):
 
 2.   Missing value or an infinity produced when evaluating the model
 
 mod1 <- nls(germ~1-(exp(-1*((psi-(theta/time)-mu)/sigma))), start=c(mu=-2.7, 
 theta=3, sigma=3), data=ht)
 mod1
 
 mod2 <- nls(germ~1-(exp(-1*((psi-(theta/time)-mu)/sigma)^lambda)), start=c(mu=-2.7,
 theta=3, sigma=3, lambda=-1.2), data=ht)
 mod2
 
 Please have a look at my code and tell me how I might get it to work. A sample 
 of my data is shown below. It has five levels of psi (0, -0.4, -0.8, -1.2 and 
 -1.6).
 
 psi   time   germ
 0     1.33



[R] nls model singular gradient matrix at initial parameter estimates

2015-05-28 Thread Prof J C Nash (U30A)
I have a section (6.4.2) about singular gradient (actually singular
Jacobian to numerical analysts) in my recent book "Nonlinear parameter
optimization using R tools". nls() is prone to this, though having all
the starting values the same in many functions can be asking for trouble
of this sort, as is any function involving expm(). (If you search on
R-help archives, you'll find where there is discussion of how this can
result in huge timing differences in two similar methods of calculation.
But that is about timing rather than computational failure.)

To avoid some of the singular gradient issues, try package nlmrt. Note
that its nlxb() has a slightly different call in that more arguments
need to be explicitly specified.

JN


On 15-05-27 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 34
 Date: Wed, 27 May 2015 01:23:35 +
 From: oussama belmejdoub oussa.b...@hotmail.com
 To: r-help@r-project.org r-help@r-project.org
 Subject: [R] nls model singular gradient matrix at initial parameter
   estimates
 Message-ID: dub109-w3681a8069b5ce24d6e75a79c...@phx.gbl
 Content-Type: text/plain; charset=iso-8859-1
 
 Greetings,
 I'm trying to use the nls function in my statistics project but I'm really 
 finding a lot of difficulties.
 I have a function called apinene_modele_prediction that calculates the 
 estimations:
 library(expm)  # exp of a matrix
 apinene_modele_prediction <- function(t, theta) {
   x0 = c(100,0,0,0,0)
   A = matrix(c(-(theta[1]+theta[2]),theta[1],theta[2],0,0,
                0,0,0,0,0,
                0,0,-(theta[3]+theta[4]),theta[3],theta[4],
                0,0,0,0,0,
                0,0,theta[5],0,-theta[5]), 5, 5)
   X = x0
   for (i in t[2:length(t)]) { X = c(X, x0 %*% expm(A*i)) }
   return(X)
 }
 
 My t vector is given by: 
 t=seq(0,100,by=2) 
 And the real observations y are given to us in a txt file called 
 data.txt that I have attached to this message.
 So when I try to fit the theta in my model starting with: 
 theta=c(0.2,0.2,0.2,0.2,0.2) 
 And doing:
 theta_appr <- nls(y~apinene_modele_prediction(t,theta),
                   start=list(theta=c(0.2,0.2,0.2,0.2,0.2)))
 I always got the ERROR: singular gradient matrix at initial parameter 
 estimates
 And, when I try: 
 nls(y~apinene_modele_prediction(t,c(theta,theta,theta,theta,theta)),
     start=list(theta=0.2))
 I got the result:
 Nonlinear regression model
   model: y ~ apinene_modele_prediction(t, c(theta, theta, theta, theta, theta))
    data: parent.frame()
   theta 
 0.04403 
  residual sum-of-squares: 219002
 But I need the elements of theta to be different and not equal.
 Thanks in advance for your help.
 -- next part --
 An embedded and charset-unspecified text was scrubbed...
 Name: data.txt
 URL: 
 https://stat.ethz.ch/pipermail/r-help/attachments/20150527/37052351/attachment-0001.txt



[R] Finding Optimum of Stochastic Function

2015-05-19 Thread Prof J C Nash (U30A)
Most of the stochastic optimization methods are directed at multiple
optima. You appear to have an imprecisely determined function, e.g., the
time taken for a racing driver to get round the track, and indeed this is
a different form of stochastic optimization.

With Harry Joe of UBC I did quite a bit of work about 20 years ago on
response surface minimization, but none of this is (to my knowledge)
translated to R. David Wagstaff at Penn State just sent me a msg that
he's making some progress translating our Fortran 77 to Fortran 95 (I
think to R might actually be easier). I believe Harry had at least a
partial C version.

The reference is "Numerical optimization and surface estimation with
imprecise function evaluations", Statistics and Computing 13: 277–286, 2003.

Our approach was to generate some points and model them as a paraboloid
and then search near the minimum of that model. All the smarts are in
choosing which points to add to or remove from the set of points for
modelling the surface. Clearly there are no guarantees, but for some
applications we found this worked not too badly.
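
A one-dimensional toy of the flavour of the idea (only a sketch, not the
algorithm in the paper):

set.seed(1)
f <- function(x) (x - 2)^2 + rnorm(1, sd = 0.1)  # imprecisely evaluated objective
xs <- seq(0, 4, by = 0.5)
ys <- sapply(xs, f)
quad <- lm(ys ~ xs + I(xs^2))                    # model the points as a parabola
b <- coef(quad)
xnew <- -b["xs"]/(2*b["I(xs^2)"])                # search near the model minimum
xnew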

JN

On 15-05-19 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 11
 Date: Mon, 18 May 2015 14:47:49 -0700
 From: ivo welch ivo.we...@anderson.ucla.edu
 To: r-help@r-project.org
 Subject: [R] Finding Optimum of Stochastic Function
 Message-ID:
   CAPr7RtV99V3=3qumkamlridyecor2maq6cqxrv3929madcu...@mail.gmail.com
 Content-Type: text/plain; charset=UTF-8
 
 Could someone please point me to an optimizer for stochastic functions?
(In http://cran.r-project.org/web/views/Optimization.html, I saw methods
 that use random directions for deterministic functions, which is not the
 kind of stochastic I need.)
 
 For clarification, say I have an outcome function f(x), where x is a vector
 of, say, 3 choices.  f(x) yields a simulated result that depends on random
 draws.  That is, if I run it twice, it will give me different answers.  I
 want to find the value of x that has the highest average f(x).
 
 There are apparently well-defined algorithms, such as Robbins-Monro,
 Kiefer-Wolfowitz, and Spall, although I don't know how they work nor do I
 need to know much.  Presumably, a good algorithm knows not to draw too many
 points at a given x too early (when far away from the optimum), but to
 start more scattershot; and not to try to climb too aggressively.
 Intuitively, I probably want to start from a point, draw in a cloud around
 this point, and slowly sample-crawl into the direction where values tend to
 be higher.  Ideally, the algorithm would try to solve an updatable
 least-squares problem to determine its next sample.
 
 Pointers appreciated.
 
 regards, /iaw
 
 Ivo Welch (ivo.we...@gmail.com)
 



[R] nlminb supplying NaN parameters to objective function

2015-05-08 Thread Prof J C Nash (U30A)
Your problem is saying (on my machine) that it cannot compute the
gradient. Since it does this numerically, my guess is that the step to
evaluate the gradient violates the bounds and we get log(-something).

I also get

 Warning messages:
 1: In dnbinom(x = dummyData[, Y], mu = mu, size = params[length(params)],  :
   NaNs produced
 2: In nlminb(start = sv, objective = nLL, lower = 0, upper = Inf, control = 
 list(trace = TRUE)) :
   NA/NaN function evaluation
 3: In dnbinom(x = dummyData[, Y], mu = mu, size = params[length(params)],  :
   NaNs produced
 4: In nlminb(start = sv, objective = nLL, lower = 0, upper = Inf, control = 
 list(trace = TRUE)) :
   NA/NaN function evaluation



I put lower=0.01 and got convergence OK, but that may not be suitable,
since all but one of the parameters are at that bound.
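
That is, the run that converged was simply (sv and nLL as loaded from the
poster's .RData):

nlminb(start = sv, objective = nLL, lower = 0.01, upper = Inf,
       control = list(trace = TRUE))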

 $par
  [1] 0.0100 0.0100 0.0100 0.0100 0.0100 0.0100
  [7] 0.0100 0.0100 0.0100 0.0100 0.0100 0.0100
 [13] 0.09168027
 
 $objective
 [1] 11879.51
 
 $convergence
 [1] 0
 
 $iterations
 [1] 8
 
 $evaluations
 function gradient 
   13  119 
 
 $message
 [1] "relative convergence (4)"


As it turns out, Duncan Murdoch, Ben Bolker and I had a meeting
yesterday to discuss improvements in optimization and nonlinear least
squares and derivatives. One suggestion to be implemented is a wrapper
for objective functions to reveal when bounds are violated. It will,
however, take a little time for me to get that organized.

FYI, without the reproducible example, you would not have received this
attempt to explain things. Thanks.

JN



 A follow-up to my yesterday's email.
 
 I was able to make a reproducible example. All you will have to do is
 load the .RData file that you can download here:
 https://drive.google.com/file/d/0B0DKwRjF11x4dG1uRWhwb1pfQ2s/view?usp=sharing
 
 and run this line of code:
 
 nlminb(start=sv, objective = nLL, lower = 0, upper = Inf,
 control=list(trace=TRUE))
 
 which output the following:
 
   0: 12523.401: 0.0328502 0.0744493 0.00205298 0.0248628 0.0881807
 0.0148887 0.0244485 0.0385922 0.0714495 0.0161784 0.0617551 0.0244901
 0.0784038
   1: 12421.888: 0.0282245 0.0697934  0.0 0.0199076 0.0833634
 0.0101135 0.0189494 0.0336236 0.0712130 0.0160687 0.0616015 0.0244689
 0.0660129
   2: 12050.535: 0.00371847 0.0451786  0.0  0.0 0.0575667
 0.0  0.0 0.00697067 0.0697205 0.0156250 0.0608550 0.0243431
 0.0994355
   3: 12037.682: 0.00303460 0.0445012  0.0  0.0 0.0568530
 0.0  0.0 0.00636016 0.0696959 0.0156250 0.0608550 0.0243419
 0.0988824
   4: 12012.684: 0.00164710 0.0431313  0.0  0.0 0.0554032
 0.0  0.0 0.00515500 0.0696451 0.0156250 0.0608550 0.0243395
 0.0978328
   5: 12003.017: 0.00107848 0.0425739  0.0  0.0 0.0548073
 0.0  0.0 0.00469592 0.0696233 0.0156250 0.0608550 0.0243386
 0.0974616
   6: 11984.372:  0.0 0.0414397  0.0  0.0 0.0535899
 0.0  0.0 0.00378996 0.0695782 0.0156250 0.0608550 0.0243370
 0.0967449
   7: 11978.154:  0.0 0.0409106  0.0  0.0 0.0530158
 0.0  0.0 0.00340746 0.0695560 0.0156250 0.0608550 0.0243363
 0.0964537
   8:-0.000:  0.0  nan  0.0  0.0  nan
 0.0  0.0  nan  nan  nan  nan  nan  nan
 
 Regards,
 
 Jean
 
 2015-05-06 17:43 GMT-07:00 Jean Marchal jean.d.marc...@gmail.com:
 Dear list,

 I am doing some maximum likelihood estimation using nlminb() with
 box-constraints to ensure that all parameters are positive. However,
 nlminb() is behaving strangely and seems to supply NaN as parameters
 to my objective function (confirmed using browser()) and output the
 following:

 $par
  [1] NaN NaN NaN   0 NaN   0 NaN NaN NaN NaN NaN NaN NaN

 $objective
 [1] 0

 $convergence
 [1] 1

 $iterations
 [1] 19

 $evaluations
 function gradient
   87  542

 $message
 [1] "gr cannot be computed at initial par (65)"




Re: [R] 'Installation of package package had non-zero exit status' on R-3.2.0 (RStudio Version 0.98.1103) on Ubuntu 12.04 OS

2015-05-01 Thread Prof J C Nash (U30A)
As another post has suggested, r-sig-debian may be more help.

However, it looks like you need to install a different package from the
one you tried. See

http://ubuntuforums.org/showthread.php?t=1774516

about getting libcurl on 12.04.

FYI, Ubuntu 12.04 is now getting dated, and I've found people using it
get problems similar to the one you encountered -- i.e., packages that more
up-to-date distros use are not available, or those that are available
don't have all the features needed. Maybe time to upgrade.

JN


On 15-05-01 06:00 AM, r-help-requ...@r-project.org wrote:
 Date: Fri, 1 May 2015 00:50:38 +0530
 From: Anirudh Jayaraman aniru...@igidr.ac.in
 To: Duncan Murdoch murdoch.dun...@gmail.com
 Cc: r-help@r-project.org
 Subject: Re: [R] 'Installation of package package had non-zero exit
   status' on R-3.2.0 (RStudio Version 0.98.1103) on Ubuntu 12.04 OS
 Message-ID:
   CAD4HhLwRUW4jy2XerC4-zvYBDjA8RiPP=82t2k45++mr8jm...@mail.gmail.com
 Content-Type: text/plain; charset=UTF-8
 
 In that case, it seems that libcurl is not available for R-3.2.0 as I get
 a message for
 
 install.packages("libcurl")
 
   package 'libcurl' is not available (for R version 3.2.0)
 
 Or if on Terminal I run
 
 sudo apt-get install libcurl4-openssl-dev
 
 Package libcurl4-openssl-dev is not available, but is referred to by
 another package.
 This may mean that the package is missing, has been obsoleted, or
 is only available from another source
 Anirudh Jayaraman



[R] .Rdata files -- fortune?

2015-03-12 Thread Prof J C Nash (U30A)
Well put. I avoid them too, and go so far as to seek and destroy so they
don't get loaded unnoticed and cause unwanted consequences.

".RData files (the ones with nothing before the period) are just traps
for your future self, with no documentation. I avoid them like the plague."


JN

On 15-03-11 07:00 AM, r-help-requ...@r-project.org wrote:
 Message: 34
 Date: Tue, 10 Mar 2015 17:51:15 -0700
 From: Jeff Newmiller jdnew...@dcn.davis.ca.us
 To: Rolf Turner r.tur...@auckland.ac.nz, Erin Hodgess
   erinm.hodg...@gmail.com, R help r-h...@stat.math.ethz.ch
 Subject: Re: [R] .Rprofile vs. .First (more of an opinion question)
 Message-ID: e5a53229-b271-42d9-beab-73142b2f6...@dcn.davis.ca.us
 Content-Type: text/plain; charset=UTF-8
 
 I concur with Rolf.
 
 .RData files (the ones with nothing before the period) are just traps for 
 your future self, with no documentation. I avoid them like the plague. I 
 refer to specifically-named Something.RData files in my .R/.Rnw/.Rmd files to 
 cache results of long computations, but they are optional in my workflow 
 because I always have R code that can regenerate them.
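 
 A minimal sketch of that cache pattern (the file and object names here are
 invented):
 
 if (file.exists("bigfit.RData")) {
   load("bigfit.RData")                  # restores the cached object `bigfit`
 } else {
   bigfit <- lm(mpg ~ ., data = mtcars)  # stand-in for a long computation
   save(bigfit, file = "bigfit.RData")
 }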
 
 .Rprofile files offer consistency of behavior regardless of which working 
 directory you use, and you can comment them.



Re: [R] Help with optim() to maximize log-likelihood

2015-03-10 Thread Prof J C Nash (U30A)
1) It helps to include the require() statements for those of us who work
outside your particular box. lme4 and (as far as I can guess) fastGHQuad
are needed.

2) Most nonlinear functions have domains where they cannot be
evaluated. I'd be richer than Warren Buffett if I got $5 for
each time someone said "your optimizer doesn't work" and I
found f(start, ...) was NaN or Inf, as in this case, i.e.,

 start <- c(plogis(sum(Y/m)), log(sigma2H))
 cat("starting params:")
 print(start)
 tryf0 <- ll(start, Y, m)
 print(tryf0)


It really is worthwhile actually computing your function at the initial
parameters EVERY time. (Or turn on the trace etc.)

JN

On 15-03-10 07:00 AM, r-help-requ...@r-project.org wrote:
 Message: 12
 Date: Mon, 9 Mar 2015 16:18:06 +0200
 From: Sophia Kyriakou sophia.kyriako...@gmail.com
 To: r-help@r-project.org
 Subject: [R] Help with optim() to maximize log-likelihood
 Message-ID:
   cao4ga+qokumhozwvbu7ey3xabbo2lnqjrcwxqkzchm3u9oz...@mail.gmail.com
 Content-Type: text/plain; charset=UTF-8
 
 Hello, I am using the optim function to maximize the log likelihood of a
 generalized linear mixed model and I am trying to replicate glmer's
 estimated components. If I set both the sample and subject size to q=m=100
 I replicate glmer's results for the random intercept model with parameters
 beta=-1 and sigma^2=1. But if I change beta to 2, glmer works but optim
 gives me the error message "function cannot be evaluated at initial
 parameters".
 
 If anyone could please help?
 Thanks
 
 # likelihood function
 ll <- function(x, Y, m){
   beta <- x[1]
   psi <- x[2]
   q <- length(Y)
   p <- 20
   rule20 <- gaussHermiteData(p)
   wStar <- exp(rule20$x * rule20$x + log(rule20$w))
   # Integrate over (-Inf, +Inf) using adaptive Gauss-Hermite quadrature
   g <- function(alpha, beta, psi, y, m) {-y + m*exp(alpha + beta)/(1 +
     exp(alpha + beta)) + alpha/exp(psi)}
   DDfLik <- deriv(expression(-y + m*exp(alpha + beta)/(1 + exp(alpha + beta))
     + alpha/exp(psi)),
     namevec = "alpha", func = TRUE,
     function.arg = c("alpha", "beta", "psi", "y", "m"))
   int0 <- rep(NA, q)
   piYc_ir <- matrix(NA, q, p)
   for (i in 1:q){
     muHat <- uniroot(g, c(-10, 10), extendInt = "yes", beta = beta,
                      psi = psi, y = Y[i], m = m)$root
     jHat <- attr(DDfLik(alpha = muHat, beta, psi, Y[i], m), "gradient")
     sigmaHat <- 1/sqrt(jHat)
     z <- muHat + sqrt(2) * sigmaHat * rule20$x
     piYc_ir[i,] <-
       choose(m, Y[i])*exp(Y[i]*(z+beta))*exp(-z^2/(2*exp(psi)))/((1+exp(z+beta))^m*sqrt(2*pi*exp(psi)))
     int0[i] <- sqrt(2)*sigmaHat*sum(wStar*piYc_ir[i,])
   }
   ll <- -sum(log(int0))
   ll
 }
 
 beta <- 2
 sigma2 <- 1
 m <- 100
 q <- 100

 cl <- seq.int(q)
 tot <- rep(m, q)

 set.seed(123)
 alpha <- rnorm(q, 0, sqrt(sigma2))
 Y <- rbinom(q, m, plogis(alpha + beta))

 dat <- data.frame(y = Y, tot = tot, cl = cl)
 f1 <- glmer(cbind(y, tot - y) ~ 1 + (1 | cl), data = dat, family =
   binomial(), nAGQ = 20)
 betaH <- summary(f1)$coefficients[1]
 sigma2H <- as.numeric(summary(f1)$varcor)
 thetaglmer <- c(betaH, sigma2H)

 logL <- function(x) ll(x, Y, m)
 thetaMLb <- optim(c(plogis(sum(Y/m)), log(sigma2H)), fn = logL)$par
 Error in optim(c(plogis(sum(Y/m)), log(sigma2H)), fn = logL) : function
   cannot be evaluated at initial parameters

 thetaglmer
 [1] 2.1128529 0.8311484
 (thetaML <- c(thetaMLb[1], exp(thetaMLb[2])))
 




Re: [R] Help with nonlinear least squares regression curve fitting

2015-02-26 Thread Prof J C Nash (U30A)
Andrew's suggestion for Year is a help, but package nlmrt shows the
problem you are trying to solve is truly one where there is a Jacobian
singularity. (nlmrt produces the Jacobian singular values -- but read
the output carefully because these are placed for compact output as if
they correspond to parameters, which they do not).

Unfortunately, nlmrt tries to use analytic derivatives, and sign() is
not in the derivatives table for the double sigmoid. BTW, your function
has a typo. Do provide reproducible results. Here is what I did using

callaghan.csv:
Area,Year
104.7181283,1984
32.88026974,1985
56.07395863,1986
191.3422143,1987
233.4661392,1988
57.28317116,1989
201.1273404,1990
34.42570796,1991
165.8962342,1992
58.21905274,1993
114.6643724,1994
342.3461986,1995
184.8877994,1996
94.90509356,1997
45.2026941,1998
68.6196393,1999
575.2440229,2000
519.7557581,2001
904.157509,2002
1107.357517,2003
1682.876061,2004
40.55667824,2005
740.5032604,2006
885.7243469,2007
395.4190968,2008
1031.314519,2009
2597.544987,2010
1316.968695,2011
848.7093901,2012
5076.675075,2013
6132.975491,2014

code:
library(nlmrt)
df <- read.csv("callaghan.csv")

fitmodeliq <- nlxb(Area ~ (-a*Year)*(Year + b), data = df,
                   start = list(a = 1, b = 1, c = 1))

fitmodelsig <- nlxb(Area ~ a/(1 + exp(-(b + c*Year))), data = df,
                    start = list(a = 1, b = 1, c = 1))

fitmodelds <- nlxb(Area ~
                   a + 2*b*(1/(1 + exp(-abs(-c*Year + d))) - 1/2)*sign(-c*Year + d),
                   data = df, start = list(a = 1, b = 1, c = 1))

For information of readers, Duncan Murdoch and I have been working on
nls14 to replace/augment nls(), but we've a way to go yet before this is
ready for CRAN. Collaborators welcome.

John Nash


On 15-02-26 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 24
 Date: Thu, 26 Feb 2015 07:26:50 +1100
 From: Andrew Robinson a.robin...@ms.unimelb.edu.au
 To: Corey Callaghan ccallaghan2...@fau.edu
 Cc: R help \(r-help@r-project.org\) r-help@r-project.org
 Subject: Re: [R] Help with nonlinear least squares regression curve
   fitting
 Message-ID:
   cahygmd6rruc_aobhrhw7babxnmzrsbi4b7zjt0vn5lrwvw2...@mail.gmail.com
 Content-Type: text/plain; charset=UTF-8
 
 Finding starting values is a bit of a dark art.  That said, there are steps
 you can take, but it may take time.
 
 First, I would scale Year so that it's not in the thousands! Experiment
 with subtracting 1980 or so.  For specific advice, see inline.
 
 On Thu, Feb 26, 2015 at 3:03 AM, Corey Callaghan ccallaghan2...@fau.edu
 wrote:
 
  The curves' functions that I want to test are in the code here (hopefully
  correctly):
 
  Inverse Quadratic Curve:
  fitmodel <- nls(Area ~ (-a*Year)*(Year + b), data = df, start=list(a=??,
  b=??, c=??))
 
 I would plot the data and a smooth spline, differentiate the curve
 function, identify some parameter values somewhere stable, and estimate
 some values by eye, or even predict them from the first derivative of the
 spline - smooth.spline will do this.
 
 Sigmoidal Curve:
  fitmodel <- nls(Area~a/(1+exp(-(b+c*Year))), data=df, start=list(a=???,
  b=???, c=??))
 
 I'd use the highest value as a, fit spline as above then invert area at two
 times to get b and c.
 
 Double sigmoidal Curve:
  fitmodel <- nls(Area~a+2b(1/(1+exp(-abs(-c*Year+d)))-1/2)*sign(-c*Year+d),
  data=df, start=list(a=???, b=???, c=???)
 
  I'd use min(Area) as a, figure out b from the maximum (I guess 2b+a is the
 asymptote), and experiment with two values for year to retrieve c and d -
 uniroot might help?
 
 Cheers
 
 Andrew
 



Re: [R] multiple parameter optimization with optim()

2015-02-18 Thread Prof J C Nash (U30A)
Some observations -- no solution here though:

1) the code is not executable. I tried. Maybe that makes it reproducible!
Typos such as "stat mod", undefined Q, etc.

2) My experience is that any setup with an *apply approach that doesn't
then check that the structure of the data is correct has a high
probability of failure due to mismatch with the optimizer requirements.
It's worth being VERY pedestrian in setting up optimization functions
and checking obsessively that you get what you expect and that there are
no regions you are likely to wander into with divide by 0,
log(negative), etc.

3) optim() is a BAD choice here. I wrote the source for three of the
codes, and the one most appropriate for many parameters (CG) I have been
deprecating for about 30 years. Use Rcgmin or something else
instead.

4) If possible, analytic gradients are needed for CG like codes. You
probably need to dig out some source code for dbinom() to do this, but
your function is not particularly complicated, and doesn't have if
statements etc. However, you could test a case using the numDeriv
gradient that is an option for Rcgmin, but it will be painfully slow.
For a one-off computation, that may still be acceptable.
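
Something like this (a sketch; it assumes the poster's fn and startVal have
been repaired as noted in point 1, and "grnd" names the numDeriv-based
gradient approximation Rcgmin understands):

library(Rcgmin)
opt <- Rcgmin(par = startVal, fn = fn, gr = "grnd")  # accurate but slow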

JN

On 15-02-18 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 37
 Date: Tue, 17 Feb 2015 23:03:24 +
 From: Doran, Harold hdo...@air.org
 To: r-help@r-project.org r-help@r-project.org
 Subject: [R] multiple parameter optimization with optim()
 Message-ID: d10931e1.23c0e%hdo...@air.org
 Content-Type: text/plain; charset=UTF-8
 
 I am trying to generalize a working piece of code for a single parameter to a 
 multiple parameter problem. Reproducible code is below. The parameters to be 
 estimated are a, b, and c. The estimation problem is such that there is one 
 set of a, b, c parameters for each column of the data. Hence, in this sample 
 data with 20 columns, there are 20 a params, 20 b-params, and 20 c-params.
 
 Because I am estimating so many parameters, I am not certain that I have 
 indicated to the function properly the right number of params to estimate and 
 also if I have generated starting values in a sufficient way.
 
 Thanks for any help.
 Harold
 
 dat <- replicate(20, sample(c(0,1), 2000, replace = T))
 library(stat mod)
 qq <- gauss.quad.prob(Q, dist = 'normal', mu = 0, sigma = 1)
 nds <- qq$nodes
 wts <- qq$weights
 fn <- function(params){
 a <- params[1:ncol(dat)]
 b <- params[1:ncol(dat)]
 c <- params[1:ncol(dat)]
 L <- sapply(1:ncol(dat), function(i) dbinom(dat[,i], 1, c + ((1 - c)/(1 + 
 exp(-1.7 * a * (nds[i] - b * wts[i]))
 r1 <- prod(colSums(L * wts))
 -log(r1)
 }
 startVal <- rep(.5, ncol(dat))
 opt <- optim(startVal, fn)
 



[R] Problem Solved: optim fails when using arima

2015-02-12 Thread Prof J C Nash (U30A)
I would not say it is fully solved since in using Nelder-Mead
you did not get the Hessian.

The issue is almost certainly that there is an implicit bound due to
log() or sqrt() where a parameter gets to be near zero and the finite
difference approximation of derivatives steps over the cliff. Probably
some NM steps are doing the same, but returning a large function value
which will allow NM to work, but possibly make it inefficient.
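
A toy picture of that cliff (invented; nothing to do with the arima
internals):

f <- function(p) log(p)             # implicit bound: p must stay positive
p <- 1e-9; eps <- 1e-7
(f(p + eps) - f(p - eps))/(2*eps)   # NaN: the step lands at log(negative)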

JN

On 15-02-12 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 30
 Date: Wed, 11 Feb 2015 08:10:03 -0700
 From: Albert Shuxiang Li albert.shuxiang...@gmail.com
 To: r-help@r-project.org
 Subject: [R] Problem Solved: optim fails when using arima
 Message-ID:
   canko1dkhubongy0zdtnvgxxmvfi8c+d3z-akykva5tnkrp7...@mail.gmail.com
 Content-Type: text/plain; charset=UTF-8
 
 I am using the arima(x, order=c(p,0,q)) function for my project, which deals
 with a set of large differenced time series data; data size varies from
 8000 to 7. I checked their stationarity before applying arima.
 Occasionally, arima(x, order=c(p,0,q)) gives me an error like the following
 (which stops the script running):
 
 Error in optim(init[mask], armafn, method = optim.method, Hessian = TRUE, :
 non-finite finite-difference value [16]
 
 The last [16] would change to anything from 1 to 16. Using the argument
 method="CSS", or "ML", or the default did not help. I am using the newest R
 version 3.1.2 for Windows 7.
 
 I have done a lot of research on the internet for this error message, and
 tried a lot of suggested solutions too. But the results were negative. Then,
 finally, I used the following line, which solved my problem:
 
 arima(x, order=c(p,0,q), optim.method="Nelder-Mead")
 
 Hope this helps others with similar situations.
 
 Shuxiang Albert Li
 
   [[alternative HTML version deleted]]



Re: [R] Maximum likelihood with analytical Hessian and

2014-12-18 Thread Prof J C Nash (U30A)
Of the tools I know (and things change every day!), only package trust
uses the Hessian explicitly.

It would not be too difficult to include explicit Hessian by modifying
Rvmmin which is all in R -- I'm currently doing some cleanup on that, so
ask offline if you choose that route.

Given that some parameters are between 0 and 1, you could use the
hyperbolic transformation (section 11.2 of my book Nonlinear parameter
optimization using R tools) with trust, and I think I'd try that as a
first attempt. You probably need to adjust the Hessian for the
transformation carefully.

Generally the work in computing the Hessian ( # obs * (# parameters)^2
in size) is not worth the effort, but there are problems for which it
does make a lot of sense.
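
The shape of what trust expects is roughly this (a toy quadratic, just to
show the interface):

library(trust)
objfun <- function(x) {
   list(value = sum((x - 1)^2),        # objective
        gradient = 2*(x - 1),          # analytic gradient
        hessian = diag(2, length(x)))  # analytic Hessian
}
trust(objfun, parinit = c(0, 0), rinit = 1, rmax = 5)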

JN

On 14-12-18 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 12
 Date: Wed, 17 Dec 2014 21:46:16 +0100
 From: Xavier Robin ro...@lindinglab.org
 To: r-help@r-project.org
 Subject: [R] Maximum likelihood with analytical Hessian and
 Message-ID: 5491eb98.6090...@lindinglab.org
 Content-Type: text/plain; charset=utf-8
 
 Dear list,
 
 I have an optimization problem that I would like to solve by Maximum
 Likelihood.
 I have analytical functions for the first and second derivatives of my
 parameters.
 In addition, some parameters are constrained between 0 and 1, while some
 others can vary freely between -Inf and +Inf.
 
 I am looking for an optimization function to solve this problem.
 
 I understand that the base optim function doesn't take a Hessian
 function, it only computes it numerically.
 I found the maxLik package that takes the function as a hess parameter
 but the maxNR method (the only one that uses the Hessian function) can't
 be bounded.
 Surprisingly I couldn't find a function doing both.
 
 Any suggestions for a function doing bounded optimization with an
 analytical Hessian function?
 
 Thanks,
 Xavier
 




Re: [R] Help specifying a non-linear model in nlsLM()

2014-12-17 Thread Prof J C Nash (U30A)
nlsLM and nls share a numerical gradient approximation and pop up the 
"singular gradient" message quite often at the start. Package nlmrt and a very 
alpha nls14 (not on CRAN) try to use analytic derivatives for the 
Jacobian (most optimization folk will say singular Jacobian rather than 
singular gradient), so you might be better off with nlxb from nlmrt. 
BUT ... it works on expressions, not functions.


Or you could provide your own Jacobian and make sure it is not singular 
at the start.


Both nlsLM and nlxb/nlfb will generally be OK once they get started due 
to the Marquardt adjustment to the Jacobian to avoid singularity.


JN


On 14-12-17 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 9
Date: Tue, 16 Dec 2014 08:24:45 -0800
From: Bert Gunter gunter.ber...@gene.com
To: Chandrasekhar Rudrappa chandr...@gmail.com
Cc: r-help@r-project.org r-help@r-project.org
Subject: Re: [R] Help specifying a non-linear model in nlsLM()
Message-ID: cack-te30269-jbc4roh24whycv2mrsorqcekurn+vzhvurm...@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

Suitable help may not be possible. I suspect that either your function/code
is funky (is the function smooth, non-infinite near your starting value?) or
you are overparameterized. If the latter, the remedy may depend on the nature
of your data, which you have not shared. While you may receive some help here
(there are some pretty smart optimizers who monitor the list), I suggest you
try a statistical site like stats.stackexchange.com for help, as this appears
to be primarily a statistics issue, not an R programming one.

Cheers,
Bert

Bert Gunter
Genentech Nonclinical Biostatistics
(650) 467-7374
"Data is not information. Information is not knowledge. And knowledge is
certainly not wisdom." -- Clifford Stoll

On Tue, Dec 16, 2014 at 6:26 AM, Chandrasekhar Rudrappa
chandr...@gmail.com wrote:

Dear All,

I am trying to fit the following model in nlsLM():

fn5 <- function(x, T, t1, w_inf, Lt0){
  S <- function(x, T, t1){
    x + (1-T)/(2*pi)*sin(2*pi*(x-t1)/(1-T))
  }
  F <- function(x, T, t1){
    t2 <- t1 + (1-T)/2
    t3 <- t1 + (1+T)/2
    t.factorial <- x %% 1
    floor(x)*(1-T) + S(t.factorial, T, t1)*(0 <= t.factorial & t.factorial < t2) +
      S(t2, T, t1)*(t2 <= t.factorial & t.factorial < t3) +
      S(t.factorial - T, T, t1)*(t3 <= t.factorial & t.factorial < 1)
  }
  return(w_inf - (w_inf - Lt0)*exp(-(2/30)*(F(x,T,t1) - F(7,T,t1))))
}
fn6 <- y ~ fn5(x, T, t1, w_inf, Lt0)
startval <- c(x=x, T=0.035, t1=0.359, w_inf=135, Lt0=47)
(nlsktm1 <- nlsLM(fn6, start=startval, lower=c(x=x, T=0.0135, t1=0.259,
                  w_inf=131, Lt0=41), jac=NULL, trace=T, data=ktm,
                  control=nls.control(maxiter=10)))

When I run the above script, the following error is displayed:

Error in nlsModel(formula, mf, start, wts) :
   singular gradient matrix at initial parameter estimates
In addition: Warning message:
In nls.lm(par = start, fn = FCT, jac = jac, control = control, lower =
lower,  :
   lmdif: info = 0. Improper input parameters.

Suitable help is requested.
--
Dr. TR Chandrasekhar, Rubber Research Institute of India







[R] maximum number of dlls issue

2014-12-10 Thread Prof J C Nash (U30A)
I'm attempting to run the following script to allow me to
bring a new computer to a state where it can run a set of scripts
that is running on another.

# listcheckinstall.R
# get a list of R packages from a file,
# check if installed,
# install those not installed
# then update all packages
# Run as superuser
listname <- readline("file of packages=")
biglist <- readLines(listname)
np <- length(biglist)
# instlist <- installed.packages()  # this is alternative method
for (i in 1:np) {
## if (biglist[i] %in% instlist) {  # this is alternative method
   if (require(biglist[i], character.only=TRUE) ) {
      cat(biglist[i], "is installed\n")
      biglist[i] <- NA
   } else {
      cat("need to install ", biglist[i], "\n")
   }
   tmp <- readline("next")
}
plist <- biglist[which(!is.na(biglist))]  # '[', not '(': biglist is a vector
plist
install.packages(plist)
update.packages()

About 2/3 of the way through, I get a msg that the maximum number of
dlls has been loaded and then some consequent errors.

I've been looking at library.dynam.unload and .dynLibs as a way to
possibly unload things require() has got into the workspace. So far no
success. I suggest offline contact and I'll post solution or relevant
discussion when things are worked out.

The installed.packages() list approach works OK, so this is not
critical. I was thinking installed.packages() would be slow, but in fact
the call is only done once, and it runs faster than the require
approach. However, I would like to understand what is going on with the
DLLs and how to sort them out if needed.

 sessionInfo()
R version 3.1.2 (2014-10-31)
Platform: x86_64-pc-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_CA.UTF-8   LC_NUMERIC=C
 [3] LC_TIME=en_CA.UTF-8LC_COLLATE=en_CA.UTF-8
 [5] LC_MONETARY=en_CA.UTF-8LC_MESSAGES=en_CA.UTF-8
 [7] LC_PAPER=en_CA.UTF-8   LC_NAME=C
 [9] LC_ADDRESS=C   LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_CA.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base


OS = Linux Mint Maya (Ubuntu 12.04 derivative) 64 bit.

JN



[R] backslash quirk in paste

2014-12-06 Thread Prof J C Nash (U30A)
This is NOT critical. It arose due to a fumble fingers when developing
an R example, but slightly intriguing.

How could one build a string from substrings with a single backslash (\)
as separator? Here's the reproducible example:

fname = "John"; lname = "Smith"
paste(fname, lname)
paste(fname, lname, sep=" / ")
# BUT there's a glitch with backslash
paste(fname, lname, sep=" \ ") # because of escaping character
paste(fname, lname, sep=' \ ')
paste(fname, lname, sep=' \\ ') # because of escaping character
bslash <- "\\"
print(bslash)
paste(fname, lname, sep=bslash)

Possibly the answer is that R never allows a single backslash in its
strings, but I can imagine possible cases where I might want to output
such lines, for example, in documenting this.

Best, JN



Re: [R] backslash quirk in paste

2014-12-06 Thread Prof J C Nash (U30A)
Thanks. Now why didn't I think of that? However, it underlines that
there is an implicit call to print(), which processes the string rather
than simply dumping it to the screen. That's something to remember (and
I should have!).

Best, JN


On 14-12-06 02:30 PM, Ben Tupper wrote:
 Hi,
 
 When you call paste without assigning the value it returns to anything, it 
 runs through the print command. So, while your string may contain escapes, 
 using print will not present escapes as you are expecting them. In this case 
 you could wrap cat() around your paste command:
 
 cat(paste(fname, '\\', lname), "\n")
 John \ Smith 
 
 See FAQ 7.37 
 
 http://cran.r-project.org/doc/FAQ/R-FAQ.html#Why-does-backslash-behave-strangely-inside-strings_003f
 
 Cheers,
 Ben
 
 sessionInfo()
 R version 3.1.0 (2014-04-10)
 Platform: x86_64-apple-darwin13.1.0 (64-bit)
 
 locale:
 [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
 
 attached base packages:
 [1] stats graphics  grDevices utils datasets  methods   base 
 
 loaded via a namespace (and not attached):
 [1] tools_3.1.0
 
 
 On Dec 6, 2014, at 2:00 PM, Prof J C Nash (U30A) nas...@uottawa.ca wrote:
 
 This is NOT critical. It arose due to a fumble fingers when developing
 an R example, but slightly intriguing.

 How could one build a string from substrings with a single backslash (\)
 as separator. Here's the reproducible example:

 fname = "John"; lname = "Smith"
 paste(fname, lname)
 paste(fname, lname, sep=" / ")
 # BUT there's a glitch with backslash
 paste(fname, lname, sep=" \ ") # because of escaping character
 paste(fname, lname, sep=' \ ')
 paste(fname, lname, sep=' \\ ') # because of escaping character
 bslash <- "\\"
 print(bslash)
 paste(fname, lname, sep=bslash)

 Possibly the answer is that R never allows a single backslash in its
 strings, but I can imagine possible cases where I might want to output
 such lines, for example, in documenting this.

 Best, JN

 
 Ben Tupper
 Bigelow Laboratory for Ocean Sciences




Re: [R] non-finite initial optimization function; was help

2014-12-03 Thread Prof J C Nash (U30A)
If you want this resolved, you are going to have to provide the full 
function in a reproducible example. Nearly a half-century with this type 
of problem suggests a probability of nearly 1 that nlogL will be poorly 
set up.
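
As a first step, evaluate the function at the starting values yourself
(nlogL and the start vector are the poster's):

print(nlogL(c(-1, -1, 1, 1, 1)))  # if this is NA/NaN/Inf, BFGS must fail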


JN

On 14-12-03 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 14
Date: Tue, 2 Dec 2014 12:38:03 -0300
From: Alejandra Chovar Veraalejandra.cho...@gmail.com
To:r-help@r-project.org
Subject: [R] help
Message-ID:
cagw2uadzyafdcnvyz82qkb127fxra0rvuv9z3mdvu1bp7mu...@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

Dear R

I have a big problem in my estimation process. I try to estimate my
likelihood function with the option optim, but R gives me this message:

Error en optim(par = valores$par, nlogL, method = "BFGS", hessian = T, :
  valor inicial en 'vmmin' no es finito
[initial value in 'vmmin' is not finite]

I know this is because my initial values are outside the interval, but I
tried different initial values and the problem persists.

I don't know what can i do.


I have this code to obtain my initial values:

valores <- optim(c(-1,-1,1,1,1), nlogL, method="SANN",
                 control=list(maxit=1000))

DCp <- optim(par=valores$par, nlogL, method="BFGS", hessian=T,
             control=list(maxit=1000))



I found in this link http://es.listoso.com/r-help/2012-02/msg02395.html
something similar, but in this case there isn't an answer.


If you need more information about my code, please tell me.


Sincerely


Alejandra




[R] R website

2014-11-20 Thread Prof J C Nash (U30A)
I was looking at the R website (r-project.org).

1) The Books page does not list several books about R, including one of
my own (Nonlinear parameter optimization using R tools) nor that of Karline
Soetaert on differential equations. How is the list updated?

2) The wiki seems to be dead. Is anyone in charge of it? If not, please
contact me off-line. I am willing to help out (I run several Dokuwiki
wikis).

Best, JN



[R] Extract model from deriv3 or nls

2014-09-19 Thread Prof J C Nash (U30A)
If it is possible, I think you will need to get the expression for
Puro.fun2 and then (essentially manually) put it into nls (or perhaps
better nlmrt or minpack.lm, which have better numerics and allow bounds;
nlmrt even has masks, or temporarily fixed parameters, but I need to
write a vignette about that). That is, I believe you essentially need to
do the analytic derivatives to get what you want.
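
The manual route might look like this (a sketch; it gives up the
derivatives that deriv3 provided, so nls falls back on numerical ones):

Puro.fit2 <- nls(rate ~ (Vmax + VmaxT*(state == "treated")) * conc /
                        (K + Kt*(state == "treated") + conc),
                 data = Puromycin,
                 start = c(Vmax = 160, VmaxT = 47, K = 0.043, Kt = 0.05))
summary(Puro.fit2)$formula  # now returns the algebraic model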

There is also an experimental nls14 on r-forge in the optimizer project,
which Duncan Murdoch and I have been working on. It even includes all-R
replacements for D and deriv (I don't think Duncan implemented deriv3
though). It is, however,
experimental, and we welcome people trying it and letting us know of
bugs and glitches. Note that nlmrt and nls14 try to use analytic
derivatives for the Jacobian of the nonlinear least squares, while nls()
uses numeric approximations. When it works, nls() is generally more
efficient, but it is much more fragile -- trade-offs abound.

Best, JN



On 14-09-19 06:00 AM, r-help-requ...@r-project.org wrote:
 Date: Thu, 18 Sep 2014 13:33:24 +
 From: Riley, Steve steve.ri...@pfizer.com
 To: r-help@r-project.org r-help@r-project.org
 Subject: [R] Extract model from deriv3 or nls
 Message-ID:
   941f9c738e7abb459a4306d21d7177cbc8598...@ndhamrexde03.amer.pfizer.com
   
 Content-Type: text/plain; charset=UTF-8
 
 Hello!
 
 I am trying to figure out how to extract the model equation when using deriv3 
 with nls.
 
 Here is my example:
 #
 # Generate derivatives
 #
 Puro.fun2 <- deriv3(expr = ~(Vmax + VmaxT*state) * conc/(K + Kt*state + conc),
                     name = c("Vmax","VmaxT","K","Kt"),
                     function.arg = function(conc, state, Vmax, VmaxT, K, Kt)
                       NULL)
 #
 # Fit model using derivative function
 #
 Puro.fit1 <- nls(rate ~ Puro.fun2(conc, state == "treated", Vmax, VmaxT, K, Kt),
                  data = Puromycin,
                  start = c(Vmax = 160, VmaxT = 47, K = 0.043, Kt = 0.05))
 
 Normally I would use summary(Puro.fit1)$formula to extract the model but 
 because I am implementing deriv3, the following gets returned:
 
  summary(Puro.fit1)$formula
 rate ~ Puro.fun2(conc, state == "treated", Vmax, VmaxT, K, Kt)
 
 What I would like to do is find something that returns:
 
 rate ~ (Vmax + VmaxT*state) * conc/(K + Kt * state + conc)
 
 Is there a way to extract this? Please advise. Thanks for your time.
 
 Steve
 860-441-3435



[R] optim, L-BFGS-B | constrained bounds on parms?

2014-09-19 Thread Prof J C Nash (U30A)
One choice is to add a penalty to the objective to enforce the
constraint(s) along with bounds to keep the parameters from going wild.

This generally works reasonably well. Sometimes it helps to run just a
few iterations with a big penalty scale to force the parameters into a
feasible region, though a lot depends on the particular problem in my
experience, with some being straightforward and others needing a lot of
fiddle.
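
A rough sketch of that for this problem (the penalty scale, cutoff and
bounds are illustrative, and the hard cutoff is crude):

f_pen <- function(mu) {
   s <- mu[1] + mu[2]
   if (s >= 0.999) return(-1e10)         # keep the log() arguments positive
   f_abo(mu) - 1e4*max(0, s - 0.95)^2    # subtract: we maximize (fnscale=-1)
}
results <- optim(c(0.3, 0.3), f_pen, method = "L-BFGS-B",
                 lower = c(0.01, 0.01), upper = c(0.99, 0.99),
                 hessian = TRUE, control = list(fnscale = -1))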

I suspect a math programming approach is overkill, though R does have
some packages for that. Your mileage may vary.

Note that L-BFGS-B used by R is a version for which Nocedal et al.
reported a bug in 2011 and provided a new Fortran code. I've recently
put up an implementation of the new code on r-forge under the optimizer
project. Still testing, but I think it's OK. You could also use Rvmmin
that has bounds, or nmkb from dfoptim (though you cannot start on bounds).

Best, JN

On 14-09-19 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 27
 Date: Fri, 19 Sep 2014 00:55:04 -0400
 From: Evan Cooch evan.co...@gmail.com
 To: r-help@r-project.org
 Subject: [R] optim, L-BFGS-B | constrained bounds on parms?
 Message-ID: 541bb728.6030...@gmail.com
 Content-Type: text/plain; charset=UTF-8
 
 Or, something to that effect. Following is an example of what I'm 
 working with: basic ABO blood type ML estimation from observed type 
 (phenotypic) frequencies. First, I generate a log-likelihood function. 
 mu[1] and mu[2] are allele freqs for A and B alleles, respectively. Since 
 the freq of the O allele is redundant, I use 1-mu[1]-mu[2] for that. The 
 terms in the function are the probability expressions for the expected 
 values of each phenotype.
 
 But, that is somewhat besides the point:
 
 f_abo <- function(mu) {
   25*log(mu[1]^2 + 2*mu[1]*(1-mu[1]-mu[2])) +
   25*log(mu[2]^2 + 2*mu[2]*(1-mu[1]-mu[2])) +
   50*log(2*mu[1]*mu[2]) + 15*log((1-mu[1]-mu[2])^2)
 }
 
 
 So, I want to come up with MLEs for mu[1] and mu[2] (the allelic freqs 
 for A and B alleles, respectively). Now, given the data, I know (from 
 having maximized this likelihood outside of R) that the MLE for mu[1] is 
 0.37176, and for mu[2], the same -- mu[2]=0.371763. I confirmed this in 
 MATLAB, and Maple, and Mathematica, using various non-linear 
 solvers/optimization routines. They all yielded precisely the right answers.
 
 But, I'm stuck trying to come up with a general approach to getting the 
 'right estimates' in R that doesn't rely on strong prior knowledge of 
 the parameters. I tried the following - I used 'L-BFGS-B' because this 
 is a 'boxed' optimization: mu[1] and mu[2] are both parameters on the 
 interval [0,1].
 
   results <- optim(c(0.3,0.3), f_abo,
                    method = "L-BFGS-B", lower=c(0.1,0.1), upper=c(0.9,0.9),
                    hessian = TRUE, control=list(fnscale=-1))
 
 but that threw the following error at me:
 
 L-BFGS-B needs finite values of 'fn'
 
 OK, fine. Taking that literally, and thinking a bit, it is clear that the 
 upper bound on the parms creates the problem. So, I 
 try the crude approach of making the upper bound for each 0.5:
 
 
   results <- optim(c(0.3,0.3), f_abo,
                    method = "L-BFGS-B", lower=c(0.1,0.1), upper=c(0.5,0.5),
                    hessian = TRUE, control=list(fnscale=-1))
 
 
 No errors this time, but no estimates either. At all.
 
 OK -- so I 'cheat', and since I know that mu[1]=mu[2]=0.37176, I make 
 another change to the upper limit, using 0.4 for both parms:
 
 
 
   results <- optim(c(0.3,0.3), f_abo,
                    method = "L-BFGS-B", lower=c(0.1,0.1), upper=c(0.4,0.4),
                    hessian = TRUE, control=list(fnscale=-1))
 
 
 Works perfectly, and...right estimates too. ;-)
 
 But, I could get there from here because I had prior knowledge of the 
 parameter values. In other words, I cheated (not a thinly veiled 
 suggestion that prior information is cheating, of course ;-)
 
 What I'm trying to figure out is how to do a constrained optimization 
 with R, where mu[1] and mu[2] are estimated subject to the constraint that
 
 0 <= mu[1]+mu[2] <= 1
 
 There seems to be no obvious way to impose that -- which creates a 
 problem for optim since if I set 'vague' bounds on the parms (as per the 
 original attempt), optim tries combinations (like mu[1]=0.9, mu[2]=0.9) 
 which aren't plausible, given the constraint that 0 <= mu[1]+mu[2] <= 1. 
 Further, in this example, mu[1]=mu[2]. That might not be the case, and I 
 might need to set the upper bound on a parameter to be > 0.5. But, without 
 knowing which parameter, I'd need to set both from (say) 0.1 - 0.9.
 
 Is this possible with optim, or do I need to use a different package? If 
 I can get there from here using optim, what do I need to do, either to 
 my call to the optim routine, or the function that I pass to it?
 
 This sort of thing is quite easy in (say) Maple. I simply execute
 
 NLPSolve(f_abo, initialpoint={mu[1]=0.2,mu[2]=0.2}, {mu[1]+mu[2]<=1},
          mu[1]=0.1..0.9, mu[2]=0.1..0.9, maximize);
 
 where I'm telling the NLPSolve function that there is a constraint for 
 mu[1] and 

[R] nlxb generating no SE

2014-08-28 Thread Prof J C Nash (U30A)
You didn't give your results (but DID give a script -- hooray!). I made
a small change -- got rid of the bounds and added trace=TRUE, and got
the output

## after 5001 Jacobian and 6997 function evaluations
##  name      coeff         SE    tstat   pval     gradient    JSingval
##  p1      53.1753         NA       NA     NA      0.01591   1.158e+13
##  p2        8.296         NA       NA     NA    8.959e+11       4.549
##  p3     -7.47638         NA       NA     NA    -0.002521      0.3049
##  p4     -1.64963         NA       NA     NA    -0.003805      0.1073
##  p5      1.44299         NA       NA     NA     0.001269     0.02521
##  p6      91.1994         NA       NA     NA     -0.01548     0.01474
## 

This shows:

1) This is a pretty nasty problem that has NOT got to the convergence
point, as indicated by 5001 Jacobians. In that case I don't give the
summary(). That is a hint to provide more diagnostics when I do some
upgrade (in process -- new nls14() with Duncan Murdoch is on r-forge
now, but much work to be done).

2) The Jacobian is effectively singular.

3) The parameter scaling is awful.

Maybe time to reformulate.

Best, JN



On 14-08-28 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 23
 Date: Wed, 27 Aug 2014 12:52:59 -0700
 From: Andras Farkas motyoc...@yahoo.com
 To: r-help@r-project.org
 Subject: [R] nlxb generating no SE
 Message-ID:
   1409169179.90920.yahoomailba...@web161605.mail.bf1.yahoo.com
 Content-Type: text/plain; charset=us-ascii
 
 Dear All
 
 please provide insights to the following, if possible:
  we have
 
 E <- c(8.2638, 7.9634, 7.5636, 6.8669, 5.7599, 8.1890, 8.2960, 8.1481, 8.1371,
        8.1322, 7.9488, 7.8416, 8.0650, 8.1753, 8.0986, 8.0224, 8.0942, 8.0357,
        7.8794, 7.8691, 8.0660, 8.0753, 8.0447, 7.8647, 7.8837, 7.8416, 7.6967,
        7.4922, 7.7161, 7.6378, 7.5128, 7.4886, 7.4667, 7.3940, 7.2450, 7.1756,
        6.7253, 6.7213, 6.9897, 6.7053, 6.3637, 6.8318, 5.5420, 6.8955, 6.6074,
        7.0689, 0.0010, 1.3010, 1.3010, 0.0010, 0.0010)
 
 D1 <- c(0.00, 0.00, 0.00, 0.00, 0.00, 0.25, 0.50, 1.00, 2.00, 4.00, 8.00,
         16.00, 32.00, 0.25, 0.50, 1.00, 2.00, 4.00, 8.00, 16.00, 32.00, 0.25,
         0.50, 1.00, 2.00, 4.00, 8.00, 16.00, 32.00, 0.25, 0.50, 1.00, 2.00,
         4.00, 8.00, 16.00, 0.25, 0.50, 1.00, 2.00, 4.00, 8.00, 16.00, 0.25,
         0.50, 1.00, 4.00, 8.00, 16.00, 32.00, 32.00)
 D2 <- c(4, 8, 16, 32, 64, 0, 0, 0, 0, 0, 0, 0, 0, 4, 4, 4, 4, 4, 4, 4, 4, 8,
         8, 8, 8, 8, 8, 8, 8, 16, 16, 16, 16, 16, 16, 16, 32, 32, 32, 32, 32,
         32, 32, 64, 64, 64, 64, 64, 64, 64, 32)
 y <- rep(1, length(E))
 raw <- data.frame(D1, D2, E, y)
 
 require(nlmrt)
 start <- list(p1=60, p2=9, p3=-8.01258, p4=-1.74327, p5=-5, p6=82.8655)
 print(nlxb <- nlxb(y ~ D1/(p1*((E/(p2-E))^(1/p3))) + D2/(p6*((E/(p2-E))^(1/p4))) +
                      (p5*D1*D2)/(p1*p6*((E/(p2-E))^(0.5/p3+0.5/p4))),
                    start=start, data=raw, lower=-Inf, upper=Inf))
 
 and once you run the code you will see the "best" I was able to get out of 
 this data set using the model. "Best" here means the result that made most 
 sense from the perspective of applying it to life science. My question is 
 related to the lack of calculated SEs (standard errors, correct me if I am 
 wrong)... I would like to calculate CIs for the parameters, and as far as I 
 understand SEs would be needed to be able to do that. Any suggestions for how 
 we may establish 95% CIs for the estimated parameters?
 
 appreciate your input,
 
 thanks,
 
 Andras

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Problem with nlm function to minimize the negative log likelihood

2014-06-24 Thread Prof J C Nash (U30A)
This is almost always user error in setting up the problem, but
without a reproducible example, you won't get any real help on this list.

JN

On 14-06-24 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 11
 Date: Mon, 23 Jun 2014 09:14:23 -0700
 From: Ferra Xu ferra...@yahoo.com
 To: r-sig-...@r-project.org r-sig-...@r-project.org,
   r-help@r-project.org r-help@r-project.org
 Subject: [R] Problem with nlm function to minimize the negative log
   likelihood
 Message-ID:
   1403540063.64234.yahoomail...@web125103.mail.ne1.yahoo.com
 Content-Type: text/plain
 
 Hi all
  I have a problem in using nlm function to minimize the negative log 
 likelihood of a function in R. The problem is that it gives me the same 
 estimated values for all the parameters, except one of them, in each 
 iteration!! Does anyone have any ideas what may cause this mistake? 
 
 Thank you

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Questions in R about optim( ) and optimize( )

2014-05-22 Thread Prof J C Nash (U30A)
You need to provide reproducible examples if you want to get readers to
actually give you answers.

The two issues both come up from time to time and generally relate to
how the objective function is set up, though sometimes to options in the
call. However, "generally" really isn't good enough.

JN



 Date: Wed, 21 May 2014 20:37:10 +
 From: Gao, Vicky v...@panagora.com
 To: 'r-help@r-project.org' r-help@r-project.org
 Subject: [R] Questions in R about optim( ) and optimize( )
 Message-ID:
   e6231a6c907f1e4ebdc773df7f92327cdfe...@exmailbox1.panagora.com
 Content-Type: text/plain
 
 Hello,
 
 I would be really thankful if I could bother you with two questions.
 
 1.  I ran into some problems when I used optim() in my work. I 
 used it to optimize a function fn(x,y), where x is a two-dimensional 
 parameter.  For y I want to pass the value later when I use optim(), because 
 it needs to be inside of a loop, which changes the value of y every time.
 
 for (point in 100:500){
   optim(c(0,0.5), fn, p=point)
 }
 
 But it always threw the error: Error in fn(par, ...) : argument "p" is missing, 
 with no default.  I'm confused about this; is that caused by optim() having 
 some specific requirements towards the objective function?
 
 2.  I'm also confused about optimize() actually.  My code is like: 
 optimize(f,c(0,0.99),b=1.0,p=point). Nothing goes wrong when I run it, 
 but the result is not accurate enough. Like I got the $minimum as 0.39 from 
 the interval as above (0,0.99), however, I got 0.87 when I change it to 
 (0.2,0.99), 0.96 for (0.3, 0.99). And the only thing I changed is the 
 interval value, so I'm not sure what I should do to improve it.
 
 As above, could you help me with these two problems? I would really 
 appreciate that if you could give me some information.
 
 Thank you very much! I look forward for your reply soon.
 
 Vicky
 
 
   [[alternative HTML version deleted]]
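
For reference, a toy sketch relating to question 1 above (not from the
original thread): optim() forwards extra named arguments through ... to fn,
so fn must declare that argument; the most likely cause of the error is an
fn that does not have p in its formals. Note too that a name which partially
matches one of optim()'s own formals (e.g. "p" versus "par") can be
intercepted by argument matching before it reaches fn, so a more distinctive
name is safer:

fn <- function(x, yval) (x[1] - yval)^2 + (x[2] - 0.5)^2  # toy objective
for (point in 1:3) {
  res <- optim(c(0, 0.5), fn, yval = point)  # yval passes through ...
  print(res$par)
}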

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] nls() help please

2014-05-14 Thread Prof J C Nash (U30A)
Someone might be able to come up with a neat expression, but my own
approach would be to write a residual function to create the vector res
from the parameters, for which the core line would be

res <- D1/(p1*((E/(p2-E))^(1/p3))) + D2/(p6*((E/(p2-E))^(1/p4))) +
       (p5*D1*D2)/(p1*p6*((E/(p2-E))^(0.5/p3+0.5/p4))) - 1

and use either package nlmrt function nlfb or package minpack.lm
function nls.lm.

Note that if you can provide a Jacobian function (nls does this
internally, which is why it is nice), you will generally get much better
results and more quickly. Writing the residuals this way round helps
avoid sign errors in the derivatives.
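
A minimal sketch of that residual-function route, here with nls.lm from
minpack.lm and using the D1, D2, E vectors and start values from the message
quoted below (the 1/var(E) weighting can be folded in by multiplying the
residuals by the square roots of the weights):

library(minpack.lm)
resfn <- function(p, D1, D2, E) {
  D1/(p[1]*((E/(p[2]-E))^(1/p[3]))) +
    D2/(p[6]*((E/(p[2]-E))^(1/p[4]))) +
    (p[5]*D1*D2)/(p[1]*p[6]*((E/(p[2]-E))^(0.5/p[3]+0.5/p[4]))) - 1
}
st <- c(p1=8, p2=80, p3=-0.7, p4=-2.5, p5=0.3, p6=0.7)
fitlm <- nls.lm(par = st, fn = resfn, D1 = D1, D2 = D2, E = E)
summary(fitlm)   # gives parameter standard errors when the fit succeeds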

You can also minimize the sum of squares -- I've examples of this in my
new book "Nonlinear parameter optimization using R tools" from Wiley.
For information, I'll be giving a tutorial on this topic at UseR 2014. I
believe this year the tutorials are included in conference registration.

Let me know off-list of your progress.

JN



On 14-05-14 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 29
 Date: Tue, 13 May 2014 13:20:01 -0700 (PDT)
 From: Andras Farkas motyoc...@yahoo.com
 To: r-help@r-project.org r-help@r-project.org
 Subject: [R] nls() help please
 Message-ID:
   1400012401.98480.yahoomail...@web161605.mail.bf1.yahoo.com
 Content-Type: text/plain
 
 Dear All,
  
 please help with writing the function for the following:
  
 we have data frame raw
  
 D1 <- c(2, 2, 2, 2, 2, 5, 5, 5, 5, 5, 10, 10, 10, 10, 10, 20, 20, 20, 20, 20,
         50, 50, 50, 50, 50)
 D2 <- c(0.2, 0.5, 1, 2, 5, 0.2, 0.5, 1, 2, 5, 0.2, 0.5, 1, 2, 5, 0.2, 0.5, 1,
         2, 5, 0.2, 0.5, 1, 2, 5)
 E <- c(76.3, 48.8, 44.5, 15.5, 3.21, 56.7, 47.5, 26.8, 16.9, 3.25, 46.7, 35.6,
        21.5, 11.1, 2.94, 24.8, 21.6, 17.3, 7.78, 1.84, 13.6, 11.1, 6.43, 3.34, 0.89)
 
 raw <- data.frame(D1, D2, E)
  
 reasonable starting parameters for nls (to the best of my knowledge):
  
 start <- c(p1=8, p2=80, p3=-0.7, p4=-2.5, p5=0.3, p6=0.7)
  
 I would like to fit this model:
  
 1 = 
 D1/(p1*((E/(p2-E))^(1/p3)))+D2/(p6*((E/(p2-E))^(1/p4)))+(p5*D1*D2)/(p1*p6*((E/(p2-E))^(0.5/p3+0.5/p4)))
  
  
 to the data in raw using nls(). The weighting of the fit in this case 
 should be done using the inverse of the variance of raw$E. Having the model 
 equal to 1 makes it quite difficult for me to see or understand how this can 
 be done (versus the usual nls(y~p1*x+p2,...)) using the software.
  
 As always, your help is greatly appreciated.  
  
 Sincerely,
  
 Andras 
 
   [[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] convergence=0 in optim and nlminb is real?

2013-12-17 Thread Prof J C Nash (U30A)
If you run all methods in package optimx, you will see results all over 
the western hemisphere. I suspect a problem with some nasty 
computational issues. Possibly the replacement of the function with Inf 
when any eigenvalues < 0 or nu < 0 is one source of this.


Note that Hessian eigenvalues are not used to determine convergence in 
optimization methods. If they were, nobody under 100 would ever get 
promoted from junior lecturer if they needed to do this, because 
determining the Hessian from just the function requires two levels of 
approximate derivatives.


If you want to get this problem reliably solved, I think you will need to
1) sort out a way to avoid the Inf values -- can you constrain the 
parameters away from such areas, or at least not use Inf. This messes up 
the gradient computation and hence the optimizers and also the final 
Hessian.

2) work out an analytic gradient function.

JN






Date: Mon, 16 Dec 2013 16:09:46 +0100
From: Adelchi Azzalini azzal...@stat.unipd.it
To: r-help@r-project.org
Subject: [R] convergence=0 in optim and nlminb is real?
Message-ID: 20131216160946.91858ff279db26bd65e18...@stat.unipd.it
Content-Type: text/plain; charset=US-ASCII

It must be the case that this issue has already been raised before,
but I did not manage to find it in past postings.

In some cases, optim() and nlminb() declare a successful convergence,
but the corresponding Hessian is not positive-definite.  A simplified
version of the original problem is given in the code which for
readability is placed below this text.  The example is built making use
of package 'sn', but this is only required to set-up the example: the
question is about the outcome of the optimizers. At the end of the run,
a certain point is declared to correspond to a minimum since
'convergence=0' is reported, but the eigenvalues of the (numerically
evaluated) Hessian matrix at that point are not all positive.

Any views on the cause of the problem? (i) the point does not
correspond to a real minimum, (ii) it does give a minimum but the
Hessian matrix is wrong, (iii) the eigenvalues are not right.
...and, in that case, how to get the real solution.


Adelchi Azzalini


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] convergence=0 in optim and nlminb is real?

2013-12-17 Thread Prof J C Nash (U30A)
As indicated, if optimizers checked Hessians on every occasion, R would
enrich all the computer manufacturers. In this case it is not too large
a problem, so it is worth doing.

However, for this problem, the Hessian is being evaluated by doing
numerical approximations to second partial derivatives, so the Hessian
may be almost a fiction of the analytic Hessian. I've seen plenty of
Hessian approximations that are not positive definite, when the answers
were OK.

That Inf is allowed does not mean that it is recommended. R is very
tolerant of many things that are not generally good ideas. That can be
helpful for some computations, but still cause trouble. It seems that it
is not the problem here.

I did not look at all the results for this problem from optimx, but it
appeared that several results were lower than the optim(BFGS) one. Is
any of the optimx results acceptable? Note that optimx DOES offer to
check the KKT conditions, and defaults to doing so unless the problem is
large. That was included precisely because the optimizers generally
avoid this very expensive computation. But given the range of results
from the optimx answers using all methods, I'd still want to do a lot
of testing of the results.
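
A minimal sketch of that kind of comparison, on a toy smooth objective
(the original problem's function was not posted):

library(optimx)
fn <- function(p) sum((p - c(1, 2))^2) + 0.1*sum(p^4)   # toy objective
ans <- optimx(c(0, 0), fn, control = list(all.methods = TRUE, kkt = TRUE))
print(ans)   # compare values and the kkt1/kkt2 columns across methods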

This may be a useful case to point out that nonlinear optimization is
not a calculation that should be taken for granted. It is much less
reliable than most users think. I rarely find ANY problem for which all
the optimx methods return the same answer. You really do need to look at
the answers and make sure that they are meaningful.

JN

On 13-12-17 11:32 AM, Adelchi Azzalini wrote:
 On Tue, 17 Dec 2013 08:27:36 -0500, Prof J C Nash (U30A) wrote:
 
 PJCN If you run all methods in package optimx, you will see results
 PJCN all over the western hemisphere. I suspect a problem with some
 PJCN nasty computational issues. Possibly the replacement of the
 PJCN function with Inf when any eigenvalues < 0 or nu < 0 is one
 PJCN source of this.
 
 A value Inf is allowed, as indicated in this passage from the
 documentation of optim:
 
   Function fn can return NA or Inf if the function cannot be evaluated
   at the supplied value, but the initial value must have a computable
   finite value of fn.
 
 Incidentally, the documentation of optimx includes the same sentence. 
 
 However, this aspect is not crucial anyway, since the point selected by
 optim is within the feasible space (by a good margin), and evaluation of
 the Hessian matrix occurs at this point.
 
 PJCN 
 PJCN Note that Hessian eigenvalues are not used to determine
 PJCN convergence in optimization methods. If they were, nobody under
 PJCN 100 would ever get promoted from junior lecturer if they needed
 PJCN to do this, because determining the Hessian from just the
 PJCN function requires two levels of approximate derivatives.
 
 At the end of the optimization process, when a point is going to be
 declared a minimum point, I expect that an optimizer  checks that it
 really *is* a minimum. It may do this in other ways other than
 computing the eigenvalues, but it must be done somehow. Actually, I
 first realized the problem by attempting inversion (to get standard
 errors) under the assumption of positive definiteness, and it failed.
 For instance 
 
   mnormt:::pd.solve(opt$hessian)
 
 says "x appears to be not positive definite". This check does not
 involve a further level of approximation.
 
 PJCN 
 PJCN If you want to get this problem reliably solved, I think you will
 PJCN need to
 PJCN 1) sort out a way to avoid the Inf values -- can you constrain
 PJCN the parameters away from such areas, or at least not use Inf.
 PJCN This messes up the gradient computation and hence the optimizers
 PJCN and also the final Hessian.
 PJCN 2) work out an analytic gradient function.
 PJCN 
 
 In my earlier message, I have indicated that this is a simplified
 version of the real thing, which is function mst.mle of pkg 'sn'.
 What mst.mle does is exactly what you indicated, that is, it
 re-parameterizes the problem so that we always stay within the
 feasible region and works with analytic gradient function (of the
 transformed parameters). The final outcome is the same: we land on
 the same point.
 
 However, once the (supposed) point of minimum has been found, the
 Hessian matrix must be computed on the original parameterization,
 to get standard errors.
 
 Adelchi Azzalini
 
 PJCN 
 PJCN 
 PJCN  Date: Mon, 16 Dec 2013 16:09:46 +0100
 PJCN  From: Adelchi Azzalini azzal...@stat.unipd.it
 PJCN  To: r-help@r-project.org
 PJCN  Subject: [R] convergence=0 in optim and nlminb is real?
 PJCN  Message-ID:
 PJCN  20131216160946.91858ff279db26bd65e18...@stat.unipd.it
 PJCN  Content-Type: text/plain; charset=US-ASCII
 PJCN 
 PJCN  It must be the case that this issue has already been raised
 PJCN  before, but I did not manage to find it in past postings.
 PJCN 
 PJCN  In some cases, optim() and nlminb() declare a successful
 PJCN  convergence, but the corresponding Hessian

Re: [R] convergence=0 in optim and nlminb is real?

2013-12-17 Thread Prof J C Nash (U30A)
I actually agree with the sentiments below -- the optimizer should
support its claims. The reality is sadly otherwise, in my view largely
because of the difficulties in computing the Hessian.

This exchange has been useful, as it highlights user expectations.
Without such dialog, we won't improve our R tools.

JN


On 13-12-17 04:54 PM, Adelchi Azzalini wrote:
 It was not my suggestion that an optimizer should check the Hessian on
 every occasion (this would be both time consuming and meaningless), but
 I expected it to do so before claiming that a point is at a minimum,
 that is, only for the candidate final point.
 
 Neither I have ever thought that nonlinear optimization is a cursory
 operation, especially when the dimensionality is not small. Exactly for
 this reason I expect that an optimizer takes stringent precautions
 before claiming to have completed its job successfully.
 
 AA
 
 
 
 On 17 Dec 2013, at 18:18, Prof J C Nash (U30A) wrote:
 
 As indicated, if optimizers check Hessians on every occasion, R would
 enrich all the computer manufacturers. In this case it is not too large
 a problem, so worth doing.

 However, for this problem, the Hessian is being evaluated by doing
 numerical approximations to second partial derivatives, so the Hessian
 may be almost a fiction of the analytic Hessian. I've seen plenty of
 Hessian approximations that are not positive definite, when the answers
 were OK.

 That Inf is allowed does not mean that it is recommended. R is very
 tolerant of many things that are not generally good ideas. That can be
 helpful for some computations, but still cause trouble. It seems that it
 is not the problem here.

 I did not look at all the results for this problem from optimx, but it
 appeared that several results were lower than the optim(BFGS) one. Is
 any of the optimx results acceptable? Note that optimx DOES offer to
 check the KKT conditions, and defaults to doing so unless the problem is
 large. That was included precisely because the optimizers generally
 avoid this very expensive computation. But given the range of results
 from the optimx answers using all methods, I'd still want to do a lot
 of testing of the results.

 This may be a useful case to point out that nonlinear optimization is
 not a calculation that should be taken for granted. It is much less
 reliable than most users think. I rarely find ANY problem for which all
 the optimx methods return the same answer. You really do need to look at
 the answers and make sure that they are meaningful.

 JN

 On 13-12-17 11:32 AM, Adelchi Azzalini wrote:
 On Tue, 17 Dec 2013 08:27:36 -0500, Prof J C Nash (U30A) wrote:

 PJCN If you run all methods in package optimx, you will see results
 PJCN all over the western hemisphere. I suspect a problem with some
 PJCN nasty computational issues. Possibly the replacement of the
 PJCN function with Inf when any eigenvalues < 0 or nu < 0 is one
 PJCN source of this.

 A value Inf is allowed, as indicated in this passage from the
 documentation of optim:

  Function fn can return NA or Inf if the function cannot be evaluated
  at the supplied value, but the initial value must have a computable
  finite value of fn.

 Incidentally, the documentation of optimx includes the same sentence.

 However, this aspect is not crucial anyway, since the point selected by
 optim is within the feasible space (by a good margin), and evaluation of
 the Hessian matrix occurs at this point.

 PJCN
 PJCN Note that Hessian eigenvalues are not used to determine
 PJCN convergence in optimization methods. If they were, nobody under
 PJCN 100 would ever get promoted from junior lecturer if they needed
 PJCN to do this, because determining the Hessian from just the
 PJCN function requires two levels of approximate derivatives.

 At the end of the optimization process, when a point is going to be
 declared a minimum point, I expect that an optimizer  checks that it
 really *is* a minimum. It may do this in other ways other than
 computing the eigenvalues, but it must be done somehow. Actually, I
 first realized the problem by attempting inversion (to get standard
 errors) under the assumption of positive definiteness, and it failed.
 For instance

  mnormt:::pd.solve(opt$hessian)

 says "x appears to be not positive definite". This check does not
 involve a further level of approximation.

 PJCN
 PJCN If you want to get this problem reliably solved, I think you will
 PJCN need to
 PJCN 1) sort out a way to avoid the Inf values -- can you constrain
 PJCN the parameters away from such areas, or at least not use Inf.
 PJCN This messes up the gradient computation and hence the optimizers
 PJCN and also the final Hessian.
 PJCN 2) work out an analytic gradient function.
 PJCN

 In my earlier message, I have indicated that this is a simplified
 version of the real thing, which is function mst.mle of pkg 'sn'.
 What mst.mle does is exactly what you indicated, that is, it
 re-parameterizes the problem so

Re: [R] optimization

2013-11-16 Thread Prof J C Nash (U30A)
There are lots of errors in your code. In particular, the optimization
routines do not like functions that ignore the parameters.

And you have not provided out or out1 to the optimizer -- they are
returned as elements of func(), but not correctly.

Please try some of the examples for optim or optimx to learn how to
structure your problem.

JN


On 13-11-16 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 19
 Date: Fri, 15 Nov 2013 09:17:47 -0800 (PST)
 From: IZHAK shabsogh ishaqb...@yahoo.com
 To: r-help@r-project.org r-help@r-project.org
 Subject: [R] optimization
 Message-ID:
   1384535867.58595.yahoomail...@web142506.mail.bf1.yahoo.com
 Content-Type: text/plain
 
 x1 <- c(5.548, 4.896, 1.964, 3.586, 3.824, 3.111, 3.607, 3.557, 2.989, 18.053,
         3.773, 1.253, 2.094, 2.726, 1.758, 5.011, 2.455, 0.913, 0.890, 2.468,
         4.168, 4.810, 34.319, 1.531, 1.481, 2.239, 4.204, 3.463, 1.727)
 y <- c(2.590, 3.770, 1.270, 1.445, 3.290, 0.930, 1.600, 1.250, 3.450, 1.096,
        1.745, 1.060, 0.890, 2.755, 1.515, 4.770, 2.220, 0.590, 0.530, 1.910,
        4.010, 1.745, 1.965, 2.555, 0.770, 0.720, 1.730, 2.860, 0.760)
 x2 <- c(0.137, 2.499, 0.419, 1.699, 0.605, 0.677, 0.159, 1.699, 0.340, 2.899,
         0.082, 0.425, 0.444, 0.225, 0.241, 0.099, 0.644, 0.266, 0.351, 0.027,
         0.030, 3.400, 1.499, 0.351, 0.082, 0.518, 0.471, 0.036, 0.721)
 k <- rep(1, 29)
 x <- data.frame(k, x1, x2)
 
 freg <- function(y, x1, x2){
   reg <- rlm(y ~ x1 + x2, data=x)
   return(reg)
 }
 
 func <- function(x1, x2, b){
   fit <- freg(y, x1, x2)
   b <- c(coef(fit))
   dv <- 1 + b[2]*x2^b[3]
   dv1 <- b[2]*x2^b[3]*log(x2)
   out <- (x1/(1 + b[2]*x2^b[3]))
   out1 <- c(-x1*x2^b[3]/dv^2, -x1*dv1/dv^2)
   return(list(out, out1))
 }
 optim(par=c(b[2],b[3]), fn=out, gr=out1,
       method = c("BFGS"),
       lower = -Inf, upper = Inf,
       control = list(), hessian = T)
 can someone help me with running this code? I have tried on many occasions 
 but it proved abortive. The aim is to optimize the parameters of the model, 
 that is, to obtain the parameter estimates using optimization.
 
   [[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] codenls

2013-11-13 Thread Prof J C Nash (U30A)
The expression has b[1] and b[2] while start has b[2] and b[3].

The expression needs a different form, for example:

#   fit <- nlrob(y ~ x1 / (1 + b[1]*x2^b[2]), data = xx, start =
#               list(b[2], b[3]))
   fit <- nlrob(y ~ x1 / (1 + b1*x2^b2), data = xx, start =
               list(b1=b[2], b2=b[3]))

This works, though I have no idea if the results make sense.

JN

On 13-11-13 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 6
 Date: Tue, 12 Nov 2013 06:03:02 -0800 (PST)
 From: IZHAK shabsogh ishaqb...@yahoo.com
 To: r-help@r-project.org r-help@r-project.org
 Subject: [R] codenls
 Message-ID:
   1384264982.50617.yahoomail...@web142502.mail.bf1.yahoo.com
 Content-Type: text/plain
 
 Kindly help correct this program as given below. I run it and it gives me 
 an error.
 
 rm(list=ls())
 require(stats)
 require(robustbase)
 x1 <- as.matrix(c(5.548,4.896,1.964,3.586,3.824,3.111,3.607,3.557,2.989))
 y <- as.matrix(c(2.590,3.770,1.270,1.445,3.290,0.930,1.600,1.250,3.450))
 x2 <- as.matrix(c(0.137,2.499,0.419,1.699,0.605,0.677,0.159,1.699,0.340))
 k <- rep(1,9)
 x <- data.frame(k,x1,x2)
 xx <- data.frame(y,x1,x2)
 
 freg <- function(y,x1,x2){
   reg <- lm(y ~ x1 + x2, data=x)
   return(reg)
 }
 
 fit <- freg(y,x1,x2)
 b <- as.matrix((coef(fit)))
 
 f <- function(b,x1,x2){
   fit <- nlrob(y ~ x1 / (1 + b[1]*x2^b[2]), data = xx, start =
                list(b[2], b[3]))
   return(fit)
 }
 fit1 <- f(b,x1,x2)
 Error in nlrob(y ~ x1/(1 + b[1] * x2^b[2]), data = xx, start = list(b[2],  : 
   'start' must be fully named (list or numeric vector)
   [[alternative HTML version deleted]]
 
 
 
 --
 
 Message: 7
 Date: Tue, 12 Nov 2013 06:34:44 -0800 (PST)
 From: Alaios ala...@yahoo.com
 To: R-help@r-project.org R-help@r-project.org
 Subject: [R] Handle Gps coordinates
 Message-ID:
   1384266884.5102.yahoomail...@web125305.mail.ne1.yahoo.com
 Content-Type: text/plain
 
 Dear all,
 I would like to ask you if there are any gps libraries.
 
 
 I would like to be able to handle them,
 -like calculate distances in meters between gps locations,
 -or find which gps location is closer to a list of gps locations.
 
 Is there something like that in R?
 
 I would like to thank you in advance for your reply
 
 Regards
 Alex
   [[alternative HTML version deleted]]
 
 
 
 --
 
 Message: 8
 Date: Tue, 12 Nov 2013 06:59:53 -0800
 From: Baro babak...@gmail.com
 To: R help r-help@r-project.org
 Subject: [R] Having a relative x-axis in a plot
 Message-ID:
   CAF-JZQupPPoyGfd-1k0ztn7bHmFTxdXUjtMGHub9D3TYLh=2...@mail.gmail.com
 Content-Type: text/plain
 
 I would like to have a relative x-axis in R. I am reading a time series from
 an Excel file, I want to show it in a plot, and I also want to have a window
 which moves over the values.
 
 My aim is to see which point belongs to which time (row number in the Excel
 file), i.e. I am reading from the 401st row in the 1100th column of the Excel
 file and my window length is 256. Here is my code:
 
 # which column in excel file
 spalte <- 1100
 # start row and end row
 start <- 401
 end <- 600
 
 window <- 256
 n <- 1
 
 wb <- loadWorkbook("D:\\MA\\excel_mix_1100.xls")
 dat <- readWorksheet(wb, sheet=getSheets(wb)[1], startRow=start,
                      endRow=end, startCol=spalte, endCol=spalte, header=FALSE)
 datalist <- dat[seq(1, length(dat[,1]), by = 2),]
 datalist <- matrix(datalist)
 while (1 + window <= length(datalist)) {
   kd <- matrix(as.integer(datalist[n:(n+window-1),1]))
   Sys.sleep(0.2)
   plot(kd, type="l", col="blue", xlab="Rohdaten", ylab="values",
        xlim=c(start+n, start+n+window-1))
   n <- n + 1
 }
 
   [[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nls model definition help

2013-10-23 Thread Prof J C Nash (U30A)
Because you have y on both sides, and at different times, I think you
are going to have to bite the bullet and write down a residual function.

Suggestion: write it as

res[t+1] = (th1*x1 + R1*x2) * exp(a1*x3) + (1 - th1*x1 + R1*x2)*y[t] - y[t+1]

(cleaning up the indices -- they are surely needed for the x's too).
This way you can compute derivatives without sign issue if you decide to
do so rather than relying on numeric Jacobian.

nlmrt or minpack.lm will give more stable computations than nls(). Both
have either formula or functional versions of the NLLS minimizer. nlfb()
from nlmrt does bounds constraints and also fixed parameters (masks).
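
A rough sketch of such a residual function, set up for nls.lm from
minpack.lm (the names y, x1, x2, x3 are assumed from the posted model; no
data accompanied the original message, so the call is indicative only):

library(minpack.lm)
resfn <- function(prm, y, x1, x2, x3) {
  th1 <- prm[1]; R1 <- prm[2]; a1 <- prm[3]
  tt <- 1:(length(y) - 1)                 # residual for each transition t -> t+1
  (th1*x1[tt] + R1*x2[tt]) * exp(a1*x3[tt]) +
    (1 - th1*x1[tt] + R1*x2[tt]) * y[tt] - y[tt + 1]
}
# fit <- nls.lm(par = c(th1 = 0.1, R1 = 0.1, a1 = 0.1),
#               fn = resfn, y = y, x1 = x1, x2 = x2, x3 = x3)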

JN



On 13-10-23 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 37
 Date: Tue, 22 Oct 2013 17:52:28 +
 From: wayne.w.jo...@shell.com
 To: R-help@r-project.org
 Subject: [R]  nls model definition help
 Message-ID:
   
 823fb8ad8fd2f44a92284630a4aadf7e2f1b5...@seacmw-s-53401.europe.shell.com
   
 Content-Type: text/plain
 
 Hi fellow R users,
 
 I'm trying to fit a model using nls with the following model definition:
 
 y(t+1)=(th1*x1 + R1*x2) * exp(a1*x3) + (1-th1*x1 + R1*x2)*y(t)
 
 y is the dependent variable (note on both sides of eq) and the x's represent 
 the regressors.
 th1, R1 and a1 are parameters to be estimated. The problem is non-linear, 
 hence why I'm trying to fit it using the well-used nls function.
 
 To fit the model I would like to be able to use the formula interface rather 
 than build my own ugly function definition.
 Any ideas if this is achievable and if not any ideas on how to fit this model?
 
 
 Many thanks,
 
 Wayne

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R - How to physically Increase Speed

2013-10-22 Thread Prof J C Nash (U30A)
The advice given is sensible. For a timing study see

http://rwiki.sciviews.org/doku.php?id=tips:rqcasestudy

We found that for optimization calculations, putting the objective
function calculation or parts thereof in Fortran was helpful. But we
kept those routines pretty small -- less than a page -- and we just
called them to evaluate things, avoiding passing around information back
and forth to R.

JN




On 13-10-22 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 58
 Date: Tue, 22 Oct 2013 05:47:15 -0400
 From: Jim Holtman jholt...@gmail.com
 To: Alexandre Khelifa akhel...@logitech.com
 Cc: r-help@r-project.org r-help@r-project.org
 Subject: Re: [R] R - How to physically Increase Speed
 Message-ID: 73d989da-b6b3-421d-838c-903da3435...@gmail.com
 Content-Type: text/plain; charset=us-ascii
 
 I would start with taking a subset of the data (definitely some that would 
 run in less than 10 minutes) and use the profiler Rprof to see where time 
 is being spent.  You can use the task manager (if on Windows) to see how 
 much memory you are using; it sounds like you did not need the extra memory.
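
A minimal sketch of that workflow (the svd loop is just a stand-in for the
real slow computation):

Rprof("prof.out")                                    # start the sampling profiler
for (i in 1:500) d <- svd(matrix(rnorm(1e4), 100))   # stand-in for the real work
Rprof(NULL)                                          # stop profiling
summaryRprof("prof.out")$by.self                     # where the time went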
 
 You might see if you can partition your data so you can run multiple versions 
 of R and then merge the results.
 
 Anything that takes more than a half hour, for me, is looked into to see 
 where the problems are.  For example, data frames are expensive to access, and 
 conversion to matrices is one way to speed things up.  This is where the profiler 
 helps.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nlminb() - how do I constrain the parameter vector properly?

2013-10-21 Thread Prof J C Nash (U30A)
This is one area where more internal communication between the objective 
function (inadmissible inputs) and optimizer (e.g., Quasi-Newton) is 
needed. This is NOT done at the moment in R, nor in most software. An 
area for R&D. In Nash and Walker-Smith (1987) we did some of this in 
BASIC back in the mid-80s. Still trying to redo that for R, but it won't be 
done quickly due to the much bigger infrastructure of R.


The trick with using a large value or Inf (which sometimes causes
other errors) usually slows the optimization, whereas communicating that 
the objective is inadmissible in a line search can often be simply a 
shortening of the step size.


JN

On 13-10-21 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 34
Date: Mon, 21 Oct 2013 05:56:45 -0400
From: Mark Leedsmarklee...@gmail.com
To: Steven LeBlancores...@gmail.com
Cc:r-help@R-project.org  r-help@r-project.org
Subject: Re: [R] nlminb() - how do I constrain the parameter vector
properly?
Message-ID:
cahz+bwyetvzjiccaugvxstgcqmf6enw0n3mmb7jfa3okykb...@mail.gmail.com
Content-Type: text/plain

My mistake. Since nlminb is minimizing, it should be +Inf (so that the
objective is large), as you pointed out. Note that this approach is a
heuristic and may not work all the time.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Cleaning up workspace

2013-10-16 Thread Prof J C Nash (U30A)
In order to have a clean workspace at the start of each chapter of a 
book I'm kniting I've written a little script as follows:


# chapclean.R
# This cleans up the R workspace
ilist <- c(".GlobalEnv", "package:stats", "package:graphics",
           "package:grDevices", "package:utils", "package:datasets",
           "package:methods", "Autoloads", "package:base")
print(ilist)
xlist <- search()[which(!(search() %in% ilist))]
print(xlist)
for (ff in xlist){
   cat("Detach ", ff, " which is pos ", as.integer(which(ff == search())), "\n")
   detach(pos=as.integer(which(ff == search())), unload=TRUE) # ?? do we need unload
}
rm(list=ls())


This appears to work fine in my system -- session info is below, but I 
get 30 warnings of the type


30: In FUN(X[[2L]], ...) :
  Created a package name, ‘2013-10-16 10:56:47’, when none found

Does anyone have ideas why the warnings are being generated?  I'd like 
to avoid suppressing them. Here's the session info.


R version 3.0.1 (2013-05-16)
Platform: x86_64-pc-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_CA.UTF-8   LC_NUMERIC=C
 [3] LC_TIME=en_CA.UTF-8LC_COLLATE=en_CA.UTF-8
 [5] LC_MONETARY=en_CA.UTF-8LC_MESSAGES=en_CA.UTF-8
 [7] LC_PAPER=C LC_NAME=C
 [9] LC_ADDRESS=C   LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_CA.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

loaded via a namespace (and not attached):
[1] tools_3.0.1


John Nash

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Looking for package to solve for exponent using newton's method

2013-10-11 Thread Prof J C Nash (U30A)

And if you need some extra digits:

require(Rmpfr)
testfn <- function(x){ 2^x + 3^x - 13 }
myint <- c(mpfr(-5, precBits=1000), mpfr(5, precBits=1000))
myroot <- unirootR(testfn, myint, tol=1e-30)
myroot

John Nash

On 13-10-11 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 33
Date: Thu, 10 Oct 2013 21:03:00 +0200
From: Berend Hasselmanb...@xs4all.nl
To: Ken Takagikatak...@bu.edu
Cc:r-h...@stat.math.ethz.ch
Subject: Re: [R] Looking for package to solve for exponent using
newton's method
Message-ID:a12b1f57-a38f-470d-a6d7-479f94125...@xs4all.nl
Content-Type: text/plain; charset=us-ascii


On 10-10-2013, at 20:39, Ken Takagikatak...@bu.edu  wrote:


Hi,
I'm looking for an R function/package that will let me solve problems of the
type:

13 = 2^x + 3^x.

The answer to this example is x = 2, but I'm looking for solutions when x
isn't so easily determined. Looking around, it seems that there is no
algebraic solution for x, unless I'm mistaken.  Does anyone know a good
package to solve these types of problems? Are there built in functions to do
this?


Univariate equations can be solved with uniroot, available in base R.
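
A minimal check on the example equation:

f <- function(x) 2^x + 3^x - 13
uniroot(f, c(0, 5))$root   # 2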

You can also use package nleqslv for this but that is intended for systems of 
nonlinear equations.
It does however solve your equation.
There is also BB which is especially intended for large sparse systems.

Berend



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] rbenchmark: why is function benchmark back-quoted?

2013-10-08 Thread Prof J C Nash (U30A)
I'm wondering what the purpose of the back-quoting of the name is, since 
"benchmark" seems a valid name. The language reference does mention 
back-quoting names to make them syntactic names, but I found no 
explanation of the why.


Can someone give a concise reason?

JN

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] rbenchmark: why is function benchmark back-quoted?

2013-10-08 Thread Prof J C Nash (U30A)
The function 'benchmark' which is the only one in package 'rbenchmark' 
has a back-quoted name in its first line


`benchmark` <- function( 

I wondered whether this had specific importance. It appears not.

JN


On 13-10-08 11:15 AM, Jeff Newmiller wrote:

Your use of the English language is failing to communicate. You mention "the name" when 
"name" is not a proper noun. Are you referring to some specific example?
---
Jeff NewmillerThe .   .  Go Live...
DCN:jdnew...@dcn.davis.ca.usBasics: ##.#.   ##.#.  Live Go...
   Live:   OO#.. Dead: OO#..  Playing
Research Engineer (Solar/BatteriesO.O#.   #.O#.  with
/Software/Embedded Controllers)   .OO#.   .OO#.  rocks...1k
---
Sent from my phone. Please excuse my brevity.

Prof J C Nash (U30A) nas...@uottawa.ca wrote:

I'm wondering what the purpose of the back-quoting of the name is,
since
"benchmark" seems a valid name. The language reference does mention
back-quoting names to make them syntactic names, but I found no
explanation of the why.

Can someone give a concise reason?

JN

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] SSweibull() : problems with step factor and singular gradient

2013-10-04 Thread Prof J C Nash (U30A)

I think you have chosen a model that is ill-suited to the data.

My initial thoughts were simply that the issue was the usual nls() 
singular gradient (actually jacobian if you want to be understood in 
the optimization community) woes, but in this case the jacobian really 
is bad.


My quick and dirty tries give some insight, but do not provide a 
satisfactory answer. Note that the last two columns of the nlxb summary 
are the gradient and the Jacobian singular values, so one can see how 
bad things are.


days <- c(163,168,170,175,177,182,185,189,196,203,211,217,224)
height <- c(153,161,171,173,176,173,185,192,195,187,195,203,201)
dat <- as.data.frame(cbind(days,height))
fit <- try(nls(y ~ SSweibull(x, Asym, Drop, lrc, pwr), data = dat, 
trace=T, control=nls.control(minFactor=1/10)))

## failed

fdata <- data.frame(x=days, y=height)

require(nlmrt)
strt2 <- c(Asym=250, Drop=1, lrc=1, pwr=1)
fit2 <- nlxb(y ~ Asym - Drop * exp(-exp(lrc) * x^pwr), data=fdata, 
start=strt2, trace=TRUE)

strt3 <- c(Asym=250, Drop=.5, lrc=.1, pwr=2)
fit3 <- nlxb(y ~ Asym - Drop * exp(-exp(lrc) * x^pwr), data=fdata, 
start=strt3, trace=TRUE)

strt4 <- c(Asym=200, Drop=.5, lrc=.1, pwr=2)
fit4 <- nlxb(y ~ Asym - Drop * exp(-exp(lrc) * x^pwr), data=fdata, 
start=strt4, trace=TRUE, masked=c("Asym"))

d50 <- days - 160
fd2 <- data.frame(x=d50, y=height)
fit5 <- nlxb(y ~ Asym - Drop * exp(-exp(lrc) * x^pwr), data=fd2, 
start=strt3, trace=TRUE)

fit5

John Nash


On 13-10-04 02:19 AM, r-help-requ...@r-project.org wrote:

Message: 40
Date: Thu, 3 Oct 2013 20:49:36 +0200
From:aline.fr...@wsl.ch
To:r-help@r-project.org
Subject: [R] SSweibull() : problems with step factor and singular
gradient
Message-ID:
of669fa420.9ef643ed-onc1257bf9.00676b04-c1257bf9.00676...@wsl.ch
Content-Type: text/plain

 SSweibull(): problems with step factor and singular gradient

Hello

I am working with growth data of ~4000 tree seedlings and trying to fit
non-linear Weibull growth curves through the data of each plant. Since they
differ a lot in their shape, initial parameters cannot be set for all plants.
That's why I use the self-starting function SSweibull().
However, I often got two error messages:

1)
# Example
days <- c(163,168,170,175,177,182,185,189,196,203,211,217,224)
height <- c(153,161,171,173,176,173,185,192,195,187,195,203,201)
dat <- as.data.frame(cbind(days,height))
fit <- nls(y ~ SSweibull(x, Asym, Drop, lrc, pwr), data = dat, trace=T,
control=nls.control(minFactor=1/10))

Error in nls(y ~ cbind(1, -exp(-exp(lrc)* x^pwr)), data = xy, algorithm =
"plinear", :
step factor 0.000488281 reduced below `minFactor` of 0.000976562

I tried to avoid this error by reducing the step factor below the standard
minFactor of 1/1024 using the nls.control function (shown in the example
above). However, this didn't work, as shown in the example (minFactor still
the standard).
Thus, does nls.control() not work for self-starting functions like SSweibull()?
Or is there another explanation?

2)
In other cases, a second error message showed up:

Error in nls(y ~ cbind(1, -exp(-exp(lrc)* x^pwr)), data = xy, algorithm =
"plinear", :
singular gradient

Is there a way to avoid the problem of a singular gradient?

I'd be very glad about helpful comments. Thanks a lot.
Aline
[[alternative HTML version deleted]]


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] optim evils

2013-09-04 Thread Prof J C Nash (U30A)


Sometimes one has to really read the manual carefully.

"If non-trivial bounds are supplied, this
 method will be selected, with a warning." (re L-BFGS-B)

Several of us have noted problems occasionally with this code.

You might want to look at the box constrained codes offered in optimx 
package through other packages (bobyqa, nmkb, Rvmmin, Rcgmin)
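
A toy sketch of calling these through optimx (assuming the wrapped packages
minqa, dfoptim, Rvmmin and Rcgmin are installed):

library(optimx)
fn <- function(p) (p[1] - 2)^2 + (p[2] - 3)^2    # toy bounded problem
ans <- optimx(c(1, 1), fn, lower = c(0, 0), upper = c(5, 5),
              method = c("bobyqa", "nmkb", "Rvmmin", "Rcgmin"))
print(ans)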


JN

On 13-09-04 06:00 AM, r-help-requ...@r-project.org wrote:
 Message: 67
 Date: Wed, 4 Sep 2013 16:34:54 +0800 (SGT)
 From: Michael Meyerspyqqq...@yahoo.com
 To:r-help@r-project.org  r-help@r-project.org
 Subject: [R] optim  evils
 Message-ID:
 1378283694.77272.yahoomail...@web193402.mail.sg3.yahoo.com
 Content-Type: text/plain

 It would take some effort to extract self-contained code from the mass 
of code wherein this optimization is embedded. Moreover I would have to 
obtain permission from my employer to do so.


 This is not efficient.
 However some things are evident from the trace log which I have 
submitted:
 (a) L-BFGS-B does not identify itself even though it was called 
overriding the method

 parameter in optim.
 (b) Optim  reports as final converged minimum value a function value 
that is much larger than

 others computed during the optimization.

 I think we can agree on calling this a bug.
 [[alternative HTML version deleted]]


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to catch errors regarding the hessian in 'optim'

2013-09-02 Thread Prof J C Nash (U30A)
This may be one of the many mysteries of the internals of L-BFGS-B, 
which I have found fails from time to time. That is one of the reasons 
for Rvmmin and Rcgmin (and hopefully sooner rather than later Rtn - a 
truncated Newton method, currently working for unconstrained problems, 
but still glitchy for bounds constraints). These are all-R codes so that 
users and developers can get inside to control special situations.


If you have a test problem -- the infamous reproducible example -- there 
are several of us who can likely help to sort out your troubles.


JN


On 13-09-02 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 10
Date: Sun, 1 Sep 2013 17:09:35 +0200
From: Simon Zehnderszehn...@uni-bonn.de
To: R-help helpr-help@r-project.org
Subject: [R] How to catch errors regarding the hessian in 'optim'
Message-ID:eb37670e-8544-4c89-9172-245eb6cc5...@uni-bonn.de
Content-Type: text/plain; charset=us-ascii

Dear R-Users and R-Developers,

in a comparison between two different estimation approaches I would like to 
catch errors from optim regarding the hessian matrix.

I use optim with method = "L-BFGS-B", thereby relying on numerical 
differentiation for the hessian matrix. I do know, that the estimation approach that uses 
numerical optimization has sometimes problems with singular hessian matrices and I 
consider it as one of its disadvantages of this method. To show the frequency of such 
problems in my simulation study I have to set 'hessian = TRUE' and to collect the errors 
from optim regarding the hessian.

Now I am a little stuck on how I could specifically catch errors from the 
hessian matrix in 'optim'. I do know that such errors are thrown most certainly 
from function 'La_solve' in Lapack.c. Does anyone have an idea how I could solve 
this task (clearly with tryCatch but how to choose only errors for the hessian)?


Best

Simon
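
One generic pattern (a sketch with a toy objective, not from the original
thread) is to let optim run with hessian=TRUE and wrap only the
hessian-dependent step in tryCatch, recording the failures:

nll <- function(p) sum((p - 1:2)^2)   # toy stand-in for the real objective
fit <- optim(c(0, 0), nll, method = "L-BFGS-B", hessian = TRUE)
se <- tryCatch(sqrt(diag(solve(fit$hessian))),
               error = function(e) {
                 message("hessian failure: ", conditionMessage(e))
                 rep(NA_real_, length(fit$par))
               })
se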


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] quoted expressions in microbenchmark

2013-09-02 Thread Prof J C Nash (U30A)
I use microbenchmark to time various of my code segments and find it 
very useful. However, by accident I called it with the expression I 
wanted to time quoted. This simply measured the time to evaluate the 
quote. The following illustrates the difference. When explained, the 
issue is obvious, but I spun some wheels for a while and the example may 
help others.


 rm(list=ls()) # clear workspace
 genrose.g <- function(x, gs=100){
 # vectorized gradient for genrose.f
 n <- length(x)
 gg <- as.vector(rep(0, n))
 tn <- 2:n
 tn1 <- tn - 1
 z1 <- x[tn] - x[tn1]^2
 z2 <- 1 - x[tn]
 gg[tn] <- 2 * (gs * z1 - z2)
 gg[tn1] <- gg[tn1] - 4 * gs * x[tn1] * z1
 return(gg)
 }
 require(microbenchmark)
 trep <- 1000
 xx <- rep(pi/2, 20)
 atq <- microbenchmark("genrose.g(xx)", times=trep) # quoted: times only the string
 print(atq)
 at <- microbenchmark(genrose.g(xx), times=trep)    # unquoted: times the call
 print(at)

which gives

 source("tmbench.R")
Unit: nanoseconds
          expr min  lq median  uq   max neval
 genrose.g(xx)  70 420    489 489 12851  1000
Unit: microseconds
          expr    min     lq median     uq      max neval
 genrose.g(xx) 47.982 49.868 50.426 51.404 3523.566  1000

Thanks to Olaf Mersmann for a quick response to set me straight. He 
pointed out that sometimes one wants to measure the time to evaluate a 
character string, as in the following.


   microbenchmark(NULL, "", "asdf", "12345678901234567890", times=1000L)
  Unit: nanoseconds
 expr min lq median uq max neval
 NULL  24 25 28 29 161  1000
 24 25 28 29 121  1000
   asdf  24 25 28 29 161  1000
   12345678901234567890  24 28 28 29 542  1000


John Nash

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] X axis label as months

2013-08-21 Thread Prof J C Nash (U30A)

Thanks. The staxlab works with the plot generated by

plot(tt, data, ...)

but not with

plot(tsdata, ...)

i.e., it seems to be the tsplot that somehow changes something and the 
axis commands have no effect.


I probably should have pointed out that my main concern is that I'm not 
getting any error msg. Just no axis label. And it seems I need to make 
sure the labels vector is the same length as I need -- no recycling.


JN


On 13-08-21 06:37 PM, Jim Lemon wrote:

On 08/22/2013 07:56 AM, Prof J C Nash (U30A) wrote:

There are several items on the web about putting month names as tick
labels on x axis of time plots, but I found when I tried them they did
not work for me. After an hour or so of driving myself silly looking for
a bug in my code, I've prepared a reproducible example below, where I
did find a workaround, but would prefer to be able to plot a ts object.

Perhaps someone can spot the error in this. I'm sure it's something
silly or fumble-fingered.

JN


Hi,
Try this:

library(plotrix)
staxlab(1, at=1:28, label=months)

Jim



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] [optim/bbmle] function returns NA at

2013-08-13 Thread Prof J C Nash (U30A)
1) Why use Nelder-Mead with optimx when it is an optim() function. You 
are going from New York to Philadelphia via Beijing because of the extra 
overhead. The NM method is there for convenience in comparisons.


2) NM cannot work with NA when it wants to compute the centroid of 
points and search direction. So you've got to find a way to make sure 
your likelihood is properly defined. This seems to be the issue for 
about 90% of failures with optim(x) or other ML methods in my recent 
experience. Note that returning a large value (and make it a good deal 
smaller than the .Machine$double.xmax, say that number *1e-6 to avoid 
computation troubles) often works, but it is a quick and dirty fix.
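
A sketch of that bounded large-value guard around a toy negative
log-likelihood:

BIG <- .Machine$double.xmax * 1e-6     # large, but safely below overflow
nll <- function(par) -sum(dnorm(c(1, 2, 3), mean = par[1], sd = par[2], log = TRUE))
safe_nll <- function(par) {
  val <- nll(par)                      # NaN/Inf possible, e.g. when sd <= 0
  if (!is.finite(val)) BIG else min(val, BIG)
}
optim(c(0, 1), safe_nll)               # Nelder-Mead now never sees NA or Inf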


JN


On 13-08-13 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 36
Date: Tue, 13 Aug 2013 10:38:05 +0200
From: Carlos Nashercarlos.nas...@googlemail.com
To:r-help@r-project.org
Subject: [R] [optim/bbmle] function returns NA at ... distance from x
Message-ID:
CAP=bvwpxj991fbyt9ou5x1jf9nol3vtq1svtjvw82jwfjyz...@mail.gmail.com
Content-Type: text/plain

Dear R helpers,

I try to find the model parameters using mle2 (bbmle package). As I try to
optimize the likelihood function the following error message occurs:

Error in grad.default(objectivefunction, coef) :
   function returns NA at
1e-04 0.00101301691163989 0.000316692938871189 0.000935163594829395 distance
from x.
In addition: Warning message:
In optimx(par = c(0.5, 10, 0.7, 10), fn = function (p)  :
   Gradient not computable after method Nelder-Mead

I can't figure out what that means exactly and how to fix it. I understand
that mle2 uses optim (or in my case optimx) to optimize the likelihood
function. As I use the Nelder-Mead method it should not be a problem if the
function returns NA at any iteration (as long as the initial values don't
return NA). Can anyone help me with that?

Here a small example of my code that reproduces the problem:

library(plyr)
library(optimx)

### Sample data ###
 x <- c(1,1,4,2,3,0,1,6,0,0)
 tx <- c(30.14, 5.14, 24.43, 10.57, 25.71, 0.00, 14.14, 32.86, 0.00, 0.00)
 T <- c(32.57, 29.14, 33.57, 34.71, 27.71, 38.14, 36.57, 37.71, 35.86, 30.57)
 data <- data.frame(x=x, tx=tx, T=T)

### Likelihood function ###
 Likelihood <- function(data, r, alpha, s, beta) {
   with(data, {
     if (r<=0 | alpha<=0 | s<=0 | beta<=0) return (NaN)
     f <- function(x, tx, T)
     {
       g <- function(y)
         (y + alpha)^(-(r + x))*(y + beta)^(-(s + 1))
       integrate(g, tx, T)$value
     }
     integral <- mdply(data, f)
     L <-
 exp(lgamma(r+x)-lgamma(r)+r*(log(alpha)-log(alpha+T))-x*log(alpha+T)+s*(log(beta)-log(beta+T)))+exp(lgamma(r+x)-lgamma(r)+r*log(alpha)+log(s)+s*log(beta)+log(integral$V1))
     f <- sum(log(L))
     return (f)
   })
 }

### ML estimation function ###
 Estimate_parameters_MLE <- function(data, initValues) {
   llhd <- function(r, alpha, s, beta) {
     return (Likelihood(data, r, alpha, s, beta))
   }
   library(bbmle)
   fit <- mle2(llhd, initValues, skip.hessian=TRUE, optimizer="optimx",
               method="Nelder-Mead", control=list(maxit=1e8))
   return (fit)
 }

### Parameter estimation ###
 Likelihood(data=data, r=0.5, alpha=10, s=0.7, beta=10)  ### check initial
 ### parameters: -72.75183 -- initial parameters do return a value
 MLE_estimation <- Estimate_parameters_MLE(data=data, list(r=0.5, alpha=10,
 s=0.7, beta=10))

'Error in grad.default(objectivefunction, coef) :
   function returns NA at
1e-04 0.00101301691163989 0.000316692938871189 0.000935163594829395 distance
from x.
In addition: Warning message:
   In optimx(par = c(0.5, 10, 0.7, 10), fn = function (p)  :
   Gradient not computable after method Nelder-Mead'


Best regards,
Carlos

-
Carlos Nasher
Buchenstr. 12
22299 Hamburg

tel:+49 (0)40 67952962
mobil:+49 (0)175 9386725
mail:carlos.nas...@gmail.com

[[alternative HTML version deleted]]



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optim with(out) box constraints!

2013-08-02 Thread Prof J C Nash (U30A)
It's possibly the L in L-BFGS-B that is more interesting for some 
problems, namely for the Limited Memory, so running without bounds can 
make sense. Unfortunately, the version of L-BFGS-B in R is from the 
1990s, and Nocedal et al. released an update in 2011. Maybe someone will 
want to work on putting that into the distributed R. I'll be happy to 
kibbitz off-list to see what can be done.


The types of algorithms that compete for low-memory (i.e., when you have 
1000s of parameters) are the CG codes -- Rcgmin is better than CG by a 
long shot for most problems, and I was involved in both -- and truncated 
Newton methods, of which I only know of one version in nloptr that in 
the very limited tests I've done seems not to be as speedy as I'd 
anticipate (there's also an lbfgs in that package -- not tried). I'm 
slowly working on getting an all-R truncated Newton code up so different 
options could be tried.


If it's the B you need, then Rcgmin, Rvmmin (essentially optim:BFGS) 
work with gradients, as does nlminb, while dfoptim:nmkb or dfoptim:hjkb 
or minqa:bobyqa (also bobyqa in nloptr, again I've no more than tried 
one example) are available for no-derivative case, and possibly some 
other routines.


JN


On 13-08-02 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 36
Date: Fri, 02 Aug 2013 10:38:51 +0200
From: Uwe Liggeslig...@statistik.tu-dortmund.de
To: Anera Saluccia.salu...@yahoo.com
Cc:r-help@r-project.org  r-help@r-project.org
Subject: Re: [R] optim with(out) box constraints!
Message-ID:51fb701b.70...@statistik.tu-dortmund.de
Content-Type: text/plain; charset=ISO-8859-1; format=flowed



On 02.08.2013 10:10, Anera Salucci wrote:

Hello  R users,

Does optimizing a function using optim with method = "L-BFGS-B" and without box 
constraints lead to L-BFGS optimization in R?

Sort of, but the question is why this would be beneficial with today's
computers ...

Best,
Uwe Ligges





__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] nls power law help

2013-07-15 Thread Prof J C Nash (U30A)
With minor corrections, the original problem can be solved with nlxb 
from nlmrt package.


 coef(modeln)
  a   b
 -0.8470857 409.5190808

ssquares =  145585533

but since

 svd(modeln$jacobian)$d
[1] 5.128345e+04 6.049076e-14


I may have made nlmrt too robust.

JN



On 13-07-15 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 9
Date: Sun, 14 Jul 2013 16:11:40 +0100
From: Prof Brian Ripleyrip...@stats.ox.ac.uk
To:r-help@r-project.org
Subject: Re: [R] nls power law help
Message-ID:51e2bfac.1010...@stats.ox.ac.uk
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 14/07/2013 14:30, JenPool wrote:

Hi,

I am trying to use a power law y=bx^a as a nls model as below, however I
keep getting 'singular gradient' error. I have tried multiple different
starting values but always get an error.

That is not the model you tried to fit. b*x*exp(a) is always
over-parametrized.




[R] Optimisation does not optimise!

2013-07-13 Thread Prof J C Nash (U30A)
Considering that I devised the code initially on a computer with only 8K
bytes for program and data, and it appears that your problem has 1000s of
parameters, I'm surprised you got any output. I suspect the printout is
the BUILD phase where each weight is being adjusted in turn by the same 
shift.


Don't try to move the Titanic on a pram.

If you work out a gradient function, you can likely use Rcgmin (even
though I wrote the original CG in optim(), which is not recommended). spg
from BB may also work OK.
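
A minimal sketch of that route on a toy weight-matching problem (all
numbers invented; spg from BB with an analytic gradient):

library(BB)
set.seed(1)
A <- matrix(runif(3 * 50), nrow = 3)   # toy indicator matrix: 3 counts, 50 weights
counts <- c(120, 80, 200)              # toy zone counts
fn <- function(w) sum((A %*% w - counts)^2)
gr <- function(w) as.vector(2 * crossprod(A, A %*% w - counts))
res <- spg(par = rep(1, 50), fn = fn, gr = gr)
res$value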


This problem is near linear, so there are other approaches.

JN


On 13-07-13 06:00 AM, r-help-requ...@r-project.org wrote:

Date: Fri, 12 Jul 2013 21:22:00 +0100
From: Stephen Clark <g...@leeds.ac.uk>
To: r-help@R-project.org
Subject: [R] Optimisation does not optimise!
Message-ID:
928c4f7877280844b906d12d63a3f15b01145e5b5...@hermes8.ds.leeds.ac.uk
Content-Type: text/plain; charset=us-ascii

Hello,

I have the following code and data. I am basically trying to select individuals
in a sample (by setting some weights) to match known counts for a zone. This is
being done by matching gender and age bands. I have tested the function to be
optimised and it does behave as I would expect when the weights are changed.
However, when I run the optimisation I get the following output


optout <- optim(weights0, func_opt, control=list(REPORT=1))

[1] 27164
[1] 27163.8
[1] 27163.8
[1] 27163.8
[1] 27163.8
[1] 27163.8
[1] 27163.8
[1] 27163.8
[1] 27163.8
etc

which suggests an initial change, but thereafter the optimisation does not appear
to adapt the weights at all. Can anyone see why this is happening and how to
make the problem optimise?




Re: [R] fitting log function: errors using nls and nlxb

2013-07-10 Thread Prof J C Nash (U30A)
This reply only addresses the NaN in Jacobian matter. I believe it is a 
result of getting a perfect fit (0 sum of squares). I have amended the 
r-forge version of nlmrt package in routines nlfb and nlxb and did not 
get the error running Elizabeth's example. This only answers the 
software issue, of course, not the statistical one.
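
A tiny illustration of the zero-residual gotcha (exact data, invented here):

x <- 1:10
y <- 2 * exp(0.3 * x)   # exact, zero-residual data
try(nls(y ~ a * exp(b * x), start = list(a = 1, b = 0.2)))
# nls() typically stops with an error on such artificial data (see ?nls);
# nlxb from nlmrt fits the same model without complaint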


Use the version of nlmrt from the SCM repository on
https://r-forge.r-project.org/R/?group_id=395

or email me for a tarball of this.

JN


On 13-07-10 06:00 AM, r-help-requ...@r-project.org wrote:

On Mon, Jul 8, 2013 at 9:27 PM, Elizabeth Webb
<webb.elizabet...@gmail.com> wrote:

Hi-
I am trying to fit a log function to my data, with the ultimate goal of
finding the second derivative of the function.  However, I am stalled on
the first step of fitting a curve.

When I use the following code:
FG2.model <- nls(CO2 ~ log(a*Time) + b,
                 start = setNames(coef(lm(CO2 ~ log(Time), data=FG2)), c("a", "b")),
                 data = FG2)
I get the following error:
Error in numericDeriv(form[[3L]], names(ind), env) :
   Missing value or an infinity produced when evaluating the model
In addition: Warning messages:
1: In min(x) : no non-missing arguments to min; returning Inf
2: In max(x) : no non-missing arguments to max; returning -Inf
3: In log(a * Time) : NaNs produced
4: In log(a * Time) : NaNs produced

When I fit the curve in Plot and use the coefficients as starting values:
start <- c(a=68, b=400)
FG2.model <- nls(CO2 ~ log(a*Time) + b, start = start, data = FG2)
I get the following error:
Error in nls((CO2 ~ log(a * Time) + b), start = start, data = FG2) :
   singular gradient
In addition: Warning messages:
1: In min(x) : no non-missing arguments to min; returning Inf
2: In max(x) : no non-missing arguments to max; returning -Inf

So then when I substituted nlxb for nls in the above two models, I got this
error:
Error in nlxb((CO2 ~ log(a * Time) + b), start = start, data = FG2) :
   NaN in Jacobian





Re: [R] Non-linear modelling with several variables including a categorical variable

2013-07-05 Thread Prof J C Nash (U30A)

Off list I sent the OP a note that wrapnls() from nlmrt calls nls after
nlxb finishes. This is not, of course, super-efficient, but returns the
nls-structured answer.
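
A minimal sketch of that route (invented data; wrapnls from nlmrt):

library(nlmrt)
set.seed(1)
mydata <- data.frame(x = 1:10)
mydata$y <- 3 * exp(-0.2 * mydata$x) + rnorm(10, sd = 0.01)
fitw <- wrapnls(y ~ a * exp(-b * x), start = list(a = 1, b = 0.1), data = mydata)
class(fitw)   # an "nls" object, so summary(), predict() etc. apply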

JN

On 13-07-05 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 49
Date: Fri, 5 Jul 2013 08:30:39 +0700
From: Robbie Weterings <robbie.weteri...@gmail.com>
To: Prof J C Nash (U30A) <nas...@uottawa.ca>, r-help@r-project.org
Subject: Re: [R] Non-linear modelling with several variables including
a categorical variable
Message-ID:
CAFe5dHZdXFbFtwKmTE1_QPi1rqNGsd+=82tproyfs6mg6zm...@mail.gmail.com
Content-Type: text/plain

Dear Prof. Nash,

I tried to run nls with the nlxb function and, as you mention, it is fairly
slow in terms of running the code. However, if I had used this
function earlier I would have saved a lot of time trying to find the start
values. The output looks a little bit sloppy but I think it is very useful
in combination with nls.

Thanks
Robbie




Re: [R] Non-linear modelling with several variables including a categorical variable

2013-07-03 Thread Prof J C Nash (U30A)
If preytype is an independent variable, then models based on it should 
be OK. If preytype comes into the parameters you are trying to estimate, 
then the easiest way is often to generate all the possible combinations 
(integers -- fairly modest number of these) and run all the least 
squares minimizations. Crude but effective. nlxb from nlmrt or nlsLM 
from minpack.lm may be more robust in doing this, but less efficient if 
nls works OK.
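
One further route worth noting: nls() itself accepts indexed parameters, so
each level of a factor can get its own coefficient. A minimal sketch on
invented data (not the OP's):

set.seed(1)
preytype <- factor(rep(c("A", "B", "C"), each = 10))
prey <- rep(seq(5, 30, length.out = 10), 3)
rates <- (0.3 * prey)/(1 + 0.1 * prey) + rnorm(30, sd = 0.02)
fit <- nls(rates ~ (a[preytype] * prey)/(1 + b[preytype] * prey),
           start = list(a = rep(0.3, 3), b = rep(0.1, 3)))
summary(fit)   # six coefficients: one a and one b per prey type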


JN

On 13-07-03 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 10
Date: Tue, 2 Jul 2013 19:01:55 +0700
From: Robbie Weterings <robbie.weteri...@gmail.com>
To: r-help@r-project.org
Subject: [R] Non-linear modelling with several variables including a
categorical variable
Message-ID:
cafe5dhzrm+bpg1v77ezhun+tacv64j_9pnsfgh_xne5csz9...@mail.gmail.com
Content-Type: text/plain

Hello everyone,

I am trying to model some data regarding a predator prey interaction
experiment (n=26). Predation rate is my response variable and I have 4
explanatory variables: predator density (1, 2, 3, 4, 5), predator size, prey
density (5,10,15,20,25,30) and prey type (3 categories). I started with
several linear models (glm) and found (as expected) that prey and predator
density were non-linear related to predation rates. If I use a log
transformation on these variables I get really nice curves and an adjusted
R2 of 0.82, but it is not really the right approach for modelling
non-linear relationships. Therefore I switched to non-linear least square
regression (nls). I have several predator-prey models based on existing
ecological literature e.g.:

model1 <- nls(rates ~ (a * prey)/(1 + b * prey), start = list(a = 0.27, b =
0.13), trace = TRUE) ### Holling's type II functional response

model2 <- nls(rates ~ (a*prey)/(1 + (b * prey) + c * (pred - 1)), start =
list(a=0.22451, b=-0.18938, c=1.06941), trace=TRUE, subset=I1) ###
Beddington-DeAngelis functional response

These models work perfectly, but now I want to add prey type as well. In
the linear models prey type was the most important variable so I don't want
to leave it out. I understand that you can't add categorical variables in
nls, so I thought I try a generalized additive model (gam).

The problem with the gam models is that the smoothers (both spline and
loess) don't work on both variables because there are only a very
restricted number of values for prey density and predator density. I can
manage to get a model with a single variable smoothed using loess. But for
two variables it is simply not working. The spline function does not work
at all because I have so few values (5) for my variables (see model 4).

model3 <- gam(rates ~ lo(pred, span=0.9) + prey) ## this one is actually
working but does not include a smoother for prey.

model4 <- gam(rates ~ s(pred) + prey) ## this one gives problems:
"A term has fewer unique covariate combinations than specified maximum
degrees of freedom"

My question is: are there any other possibilities to model data with 2
non-linear related variables in which I can also include a categorical
variable. I would prefer to use nls (model2) with for example different
intercepts for each category but I'm not sure how to get this sorted, if it
is possible at all. The dataset is too small to split it up into the three
categories, moreover, one of the categories only contains 5 data points.

Any help would be really appreciated.

With kind regards,
-- Robbie Weterings, Project Manager Cat Drop Thailand
Tel: +66(0)890176087
65/13 Mooban Chakangrao, Naimuang, Muang Kamphaeng Phet 62000, Thailand
http://www.catdropfoundation.org
http://www.facebook.com/catdropfoundation
Boorn 45, 9204 AZ, Drachten, The Netherlands




Re: [R] R-help Digest, Vol 124, Issue 21

2013-06-20 Thread Prof J C Nash (U30A)

And it could be that you should try nlmrt or minpack.lm.

I don't think you were at my talk in Jena May 23 -- might have been very 
helpful to you.


JN


On 13-06-20 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 47
Date: Wed, 19 Jun 2013 13:17:29 -0500
From: Adams, Jean <jvad...@usgs.gov>
To: pakoun <pko...@bgc-jena.mpg.de>
Cc: R help <r-help@r-project.org>
Subject: Re: [R] nls singular gradient ..as always..
Message-ID:
CAN5YmCEF9Ut5J-Zqh6URYOn0bfH=M=cugmgdmtzy9owwkrw...@mail.gmail.com
Content-Type: text/plain

It's hard to say without seeing the data.  It could be the data, it could
be the starting values, it could be the model choice.

Jean




Re: [R] Assessing the fit of a nonlinear model to a new dataset

2013-04-05 Thread Prof J C Nash (U30A)
Given nls has a lot of C code (and is pretty complicated), I doubt 
you'll find much joy doing that.


nlxb from my nlmrt package is all in R, but you'll need to do quite a 
bit of work at each stage. I don't form the J' J matrix, and do a 
Marquardt approximation by adding appropriate rows to the J matrix then 
do a qr decomposition on that.
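
In outline, one such Marquardt step can be sketched as follows (plain
unit-matrix damping for illustration; the actual nlmrt code uses a more
careful scaling):

marq_step <- function(J, r, lambda) {
   p <- ncol(J)
   Jaug <- rbind(J, sqrt(lambda) * diag(p))   # append damping rows to J
   raug <- c(-r, rep(0, p))
   qr.coef(qr(Jaug), raug)   # delta minimizing ||J d + r||^2 + lambda*||d||^2
}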


In any event, the Hessian (which J' J is only a rather poor
approximation to) is what you want, and it may not be positive definite
at the iterates, so you have infinite standard errors. Well, if the
curvature was 0, they'd be infinite. Since the curvature is negative,
maybe the SEs are more than infinite, if that has any meaning.
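
Checking definiteness at a given set of parameters is easy enough with
numDeriv (toy objective invented here):

library(numDeriv)
fn <- function(p) (p[1]^2 - p[2])^2 + (p[1] - 1)^2
H <- hessian(fn, c(0, 1))
eigen(H, symmetric = TRUE, only.values = TRUE)$values   # any <= 0 flags indefiniteness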


I have one problem for which I generated 1000 starting points and 75% 
had the Hessian indefinite. That is a simple logistic nonlinear 
regression, albeit with nasty data.


JN



Message: 90
Date: Fri, 5 Apr 2013 05:06:57 +0000
From: Rebecca Lester <rebecca.les...@deakin.edu.au>
To: r-help@r-project.org
Subject: [R] Assessing the fit of a nonlinear model to a new dataset
Message-ID: <5a72faa65583bc45a816a698a960e92788612...@mbox-f-3.du.deakin.edu.au>
Content-Type: text/plain

Hi all,

I am attempting to apply a nonlinear model developed using nls to a new
dataset and assess the fit of that model. At the moment, I am using the
fitted model from my fit dataset as the starting point for an nls fit for
my test dataset (see below). I would like to be able to view the t-statistic
and p-values for each of the iterations using the trace function, but have
not yet worked out how to do this. Any other suggestions are also welcome.

Many thanks,
Rebecca

model.wa <- nls(y ~ A*(x^B), start=list(A=107614, B=-0.415)) # create nls()
power model for WA data
summary(model.wa) # model summary


Formula: y ~ A * (x^B)

Parameters:
Estimate Std. Error t value Pr(>|t|)
A  7.644e+04  1.240e+04   6.165 4.08e-06 ***
B -3.111e-01  4.618e-02  -6.736 1.15e-06 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 5605 on 21 degrees of freedom

Number of iterations to convergence: 6
Achieved convergence tolerance: 7.184e-06
  (6 observations deleted due to missingness)



model.vic <- nls(y.vic ~ A*(x.vic^B), start = list(A = 7.644e+04, B =
-3.111e-01), trace = TRUE)

3430193778 :  76440.        -0.3111
2634092902 :  48251.9235397 -0.2552481
2614516166 :  27912.1921354 -0.1772322
2521588892 :  32718.3764594 -0.1862611
2521233646 :  32476.4536126 -0.1836836
2521230904 :  32553.0767231 -0.1841362
2521230824 :  32540.063480  -0.184059
2521230822 :  32542.2970040 -0.1840721








Re: [R] DUD (Does not Use Derivatives) for nonlinear

2013-04-03 Thread Prof J C Nash (U30A)

Date: Tue, 2 Apr 2013 06:59:13 -0500
From: Paul Johnson <pauljoh...@gmail.com>
To: qi A <send2...@gmail.com>
Cc: R-help <r-help@r-project.org>
Subject: Re: [R] DUD (Does not Use Derivatives) for nonlinear
regression in R?
Message-ID:
<CAErODj_1pK8raHyAme_2Wt5zQZ_HqOhRjQ62bChhkORWbW=o...@mail.gmail.com>
Content-Type: text/plain

On Apr 1, 2013 1:10 AM, qi A <send2...@gmail.com> wrote:


Hi, All

SAS has DUD (Does not Use Derivatives)/Secant Method for nonlinear
regression, does R offer this option for nonlinear regression?

I have read the helpfile for nls() and could not find such option, any
suggestion?



nelder-mead is default algorithm in optim. It does not use derivatives. dud
is from same generation, but John Nash recommended N-M method.

Pj


Thanks,

Derek

[[alternative HTML version deleted]]



I'm not sure where Paul is saying I recommended N-M, but I do think it
is important with optimization methods to recommend a method FOR some
particular class of problems or for a problem solving situation. A
blanket "this is good" recommendation cannot be made.

I chose NM (slightly BEFORE DUD was released) as the only derivative
free method in my 1979 book as it had the best balance of reliability
and performance for an 8K machine (code and data) that I was using in
1975. It still works well as a first-try method for optimization, but
generally is less efficient than gradient based methods, in particular
because it does not have a good way to know it is finished. As a
derivative-free method, it is not too bad, particularly in the nmk
version in the dfoptim package. Indeed, I wish this version were put in
optim() as the default, since it can deal with bounds constraints,
though slightly less generally and less well than bobyqa or some other
methods, and there are a couple of minor details it handles better than
N-M in optim() that give it better performance and reliability.
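
A minimal sketch of nmkb in use (toy function, invented here):

library(dfoptim)
fr <- function(p) sum((p - c(0.3, 0.7))^2)
nmkb(par = c(0.5, 0.5), fn = fr, lower = 0, upper = 1)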

Readers should notice that there are lots of conditional statements
above. It's a matter of selecting the right tool for the job. If you
have lots of compute power and don't mind wasting it, NM will likely get
somewhere near some or other optimum of your problem. It won't do it
terribly fast, and you'd better make sure you didn't just run out of
iterations or other measure that stops the program before it has
decided it is done. Also that the answer is the one you want. Most
optimization problems have more than one answer, and the wrong ones
often seem to be easier to find.

JN



Re: [R] Is DUD available in nls()?

2013-04-02 Thread Prof J C Nash (U30A)
One of the reasons DUD is not available much any more is that methods 
have evolved:


- nls (and nlxb() from nlmrt as well as nlsLM from minpack.lm -- which
are more robust but may be less efficient) can use automatic derivative
computation, which avoids the motive for which DUD was written


- DUD came about, as Bert noted, in a period when many methods were 
being tried. I recall being at one of the first talks about DUD by Mary
Ralston. It seems to work well on the examples used to illustrate it, 
but I have never seen a good comparative test of it. I think this is 
because it was always embedded in proprietary code, so not properly 
available for scrutiny. (If I'm wrong in this, I hope to be enlightened 
as to source code. I did find a student paper, but it tests with SAS.)


- Marquardt methods, as in nlmrt and minpack.lm, generally have a better
track record. They can be used with numerical derivative approximations
(numDeriv, for example) with R functions rather than expressions (nlfb
from nlmrt, and one of the other minpack.lm routines), in which case one
gets a form of secant method.
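
A minimal sketch of nlfb with a residual function and no analytic Jacobian,
so a numerical approximation is used (data borrowed from the nls thread
later in this digest):

library(nlmrt)
x <- c(60, 80, 100, 120)
y <- c(0.8, 6.5, 20.5, 45.9)
resfn <- function(p) exp(p[1] + p[2] * x) + p[3] - y   # residual function
ans <- nlfb(start = c(a = 0, b = 0, d = 1), resfn = resfn)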


- Ben's mention of bobyqa leads to a different derivative-free minimizer 
that approximates the surface and minimizes that.


John Nash




Date: Tue, 2 Apr 2013 07:22:33 +
From: Ben Bolker <bbol...@gmail.com>
To: r-h...@stat.math.ethz.ch
Subject: Re: [R] Is DUD available in nls()?
Message-ID: <loom.20130402t091830-...@post.gmane.org>
Content-Type: text/plain; charset=us-ascii

Bert Gunter gunter.berton at gene.com writes:


 I certainly second all Jeff's comments.

 **HOWEVER** :
 http://www.tandfonline.com/doi/pdf/10.1080/00401706.1978.10489610

 IIRC, DUD's provenance is old, being originally a BMDP feature.


  If you want to do derivative-free nonlinear least-squares fitting
you could do something like:

library(bbmle)
dnorm2 <- function(x, mean, log=FALSE) {
   ssq <- sum((x - mean)^2)
   n <- length(x)
   dnorm(x, mean, sd = ssq/n, log = log)
}
mle2(y ~ dnorm2(mean = a*x^b), data = ..., method = "Nelder-Mead")

  This is not necessarily the most efficient/highly tuned possibility,
but it is one reasonably quick way to get going (you can substitute
other derivative-free optimizers, e.g.

library(optimx)
mle2(..., optimizer = "optimx", method = "bobyqa")

(I think).

  This isn't the exact algorithm you asked for, but it might serve
your purpose.

  Ben Bolker



Re: [R] question about nls

2013-03-15 Thread Prof J C Nash (U30A)
Actually, it likely won't matter where you start. The Gauss-Newton 
direction is nearly always close to 90 degrees from the gradient, as 
seen by turning trace=TRUE in the package nlmrt function nlxb(), which 
does a safeguarded Marquardt calculation. This can be used in place of 
nls(), except you need to put your data in a data frame. It finds a 
solution pretty straightforwardly, though with quite a few iterations 
and function evaluations.


Of course, one may not really want to do any statistics with 4 
observations and 3 parameters, but the problem illustrates the GN vs. 
Marquardt directions.


JN


sol <- nlxb(y ~ exp(a + b*x) + d, start = list(a=0, b=0, d=1), data = mydata,
trace = TRUE)

formula: y ~ exp(a + b * x) + d
lower:[1] -Inf -Inf -Inf
upper:[1] Inf Inf Inf
...snip...
Data variable  y :[1]  0.8  6.5 20.5 45.9
Data variable  x :[1]  60  80 100 120
Start:lamda: 1e-04  SS= 2291.15  at  a = 0  b = 0  d = 1  1 / 0
gradient projection =  -2191.093  g-delta-angle= 90.47372
Stepsize= 1
lamda: 0.001  SS= 4.408283e+55  at  a = -25.29517  b = 0.74465  d = 
-24.29517  2 / 1

gradient projection =  -2168.709  g-delta-angle= 90.48307
Stepsize= 1
lamda: 0.01  SS= 3.986892e+54  at  a = -24.55223  b = 0.7284461  d = 
-23.55223  3 / 1

gradient projection =  -1991.804  g-delta-angle= 90.58199
Stepsize= 1
lamda: 0.1  SS= 2.439544e+46  at  a = -18.71606  b = 0.6010118  d = 
-17.71606  4 / 1

gradient projection =  -1476.935  g-delta-angle= 92.79733
Stepsize= 1
lamda: 1  SS= 4.114152e+23  at  a = -2.883776  b = 0.2505892  d = 
-1.883776  5 / 1

gradient projection =  -954.5234  g-delta-angle= 91.78881
Stepsize= 1
lamda: 10  SS= 39033042903  at  a = 2.918809  b = 0.07709855  d = 
3.918809  6 / 1

gradient projection =  -264.9953  g-delta-angle= 91.41647
Stepsize= 1
lamda: 4  SS= 571.451  at  a = 1.023367  b = 0.01762421  d = 2.023367 
 7 / 1

gradient projection =  -60.46016  g-delta-angle= 90.96421
Stepsize= 1
lamda: 1.6  SS= 462.3257  at  a = 1.080764  b = 0.0184132  d = 
1.981399  8 / 2

gradient projection =  -56.91866  g-delta-angle= 90.08103
Stepsize= 1
lamda: 0.64  SS= 359.6233  at  a = 1.135265  b = 0.01942354  d = 
0.9995471  9 / 3

gradient projection =  -65.90027  g-delta-angle= 90.04527
Stepsize= 1

... snip ...

lamda: 0.2748779  SS= 0.5742948  at  a = -0.1491842  b = 0.03419761  d = 
-6.196575  31 / 20

gradient projection =  -6.998402e-25  g-delta-angle= 90.07554
Stepsize= 1
lamda: 2.748779  SS= 0.5742948  at  a = -0.1491842  b = 0.03419761  d = 
-6.196575  32 / 20

gradient projection =  -2.76834e-25  g-delta-angle= 90.16973
Stepsize= 1
lamda: 27.48779  SS= 0.5742948  at  a = -0.1491842  b = 0.03419761  d = 
-6.196575  33 / 20

gradient projection =  -4.632864e-26  g-delta-angle= 90.08759
Stepsize= 1
No parameter change



On 13-03-15 07:00 AM, r-help-requ...@r-project.org wrote:

Message: 36
Date: Thu, 14 Mar 2013 11:04:27 -0400
From: Gabor Grothendieck <ggrothendi...@gmail.com>
To: menglao <men...@163.com>
Cc: R help <r-help@r-project.org>
Subject: Re: [R] question about nls
Message-ID:
cap01urmodfn87qqvtwmatuid0fx0d7lqmfqh4chofm5b2c9...@mail.gmail.com
Content-Type: text/plain; charset=ISO-8859-1

On Thu, Mar 14, 2013 at 5:07 AM, menglao <men...@163.com> wrote:

Hi, all:
I met a problem of nls.

My data:
x  y
60 0.8
80 6.5
100 20.5
120 45.9

I want to fit exp curve of data.

My code:

nls(y ~ exp(a + b*x)+d,start=list(a=0,b=0,d=1))

Error in nlsModel(formula, mf, start, wts) :
   singular gradient matrix at initial parameter estimates

I can't find out the reason for the error.
Any suggesions are welcome.


The gradient is singular at your starting value so you will have to
use a better starting value.  If d = 0 then it's linear in log(y) so
you can compute a starting value using lm like this:

lm1 <- lm(log(y) ~ x, DF)
st <- setNames(c(coef(lm1), 0), c("a", "b", "d"))

Also note that you are trying to fit a model with 3 parameters to only
4 data points.

--
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com




Re: [R] question about nls

2013-03-15 Thread Prof J C Nash (U30A)
As Gabor indicates, using a start based on a good approximation is 
usually helpful, and nls() will generally find solutions to problems 
where there are such starts, hence the SelfStart methods. The Marquardt 
approaches are more of a pit-bull approach to the original 
specification. They grind away at the problem without much finesse, but 
generally get there eventually. If one is solving lots of problems of a 
similar type, good starts are the way to go. One-off (or being lazy), I 
like Marquardt.


It would be interesting to know what proportion of random starting
points in some reasonable bounding box get the "singular gradient"
message or other early termination with nls() vs. a Marquardt approach,
especially as this is a tiny problem. This is just one example of the 
issue R developers face in balancing performance and robustness. The GN 
method in nls() is almost always a good deal more efficient than 
Marquardt approaches when it works, but suffers from a fairly high 
failure rate.



JN


On 13-03-15 10:01 AM, Gabor Grothendieck wrote:

On Fri, Mar 15, 2013 at 9:45 AM, Prof J C Nash (U30A) <nas...@uottawa.ca> wrote:

Actually, it likely won't matter where you start. The Gauss-Newton direction
is nearly always close to 90 degrees from the gradient, as seen by turning
trace=TRUE in the package nlmrt function nlxb(), which does a safeguarded
Marquardt calculation. This can be used in place of nls(), except you need
to put your data in a data frame. It finds a solution pretty
straightforwardly, though with quite a few iterations and function
evaluations.



Interesting observation but it does converge in 5 iterations with the
improved starting value whereas it fails due to a singular gradient
with the original starting value.


Lines <- "
+ x y
+ 60 0.8
+ 80 6.5
+ 100 20.5
+ 120 45.9
+ "

DF <- read.table(text = Lines, header = TRUE)

# original starting value - singular gradient
nls(y ~ exp(a + b*x)+d,DF,start=list(a=0,b=0,d=1))

Error in nlsModel(formula, mf, start, wts) :
   singular gradient matrix at initial parameter estimates


# better starting value - converges in 5 iterations
lm1 <- lm(log(y) ~ x, DF)
st <- setNames(c(coef(lm1), 0), c("a", "b", "d"))
nls(y ~ exp(a + b*x) + d, DF, start = st)

Nonlinear regression model
   model:  y ~ exp(a + b * x) + d
data:  DF
   a   b   d
-0.1492  0.0342 -6.1966
  residual sum-of-squares: 0.5743

Number of iterations to convergence: 5
Achieved convergence tolerance: 6.458e-07






Re: [R] question about nls

2013-03-15 Thread Prof J C Nash (U30A)
I decided to follow up my own suggestion and look at the robustness of 
nls vs. nlxb. NOTE: this problem is NOT one that nls() would usually be 
applied to. The script below is very crude, but does illustrate that 
nls() is unable to find a solution in 70% of tries where nlxb (a 
Marquardt approach) succeeds.


I make no claim for elegance of this code -- very quick and dirty.

JN



debug <- FALSE
library(nlmrt)
x <- c(60, 80, 100, 120)
y <- c(0.8, 6.5, 20.5, 45.9)
mydata <- data.frame(x, y)
mydata
xmin <- c(0, 0, 0)
xmax <- c(8, 8, 8)
set.seed(123456)
nrep <- as.numeric(readline("Number of reps:"))
pnames <- c("a", "b", "d")
npar <- length(pnames)
# set up structure to record results
#  need start, failnls, parnls, ssnls, failnlxb, parnlxb, ssnlxb
tmp <- matrix(NA, nrow = nrep, ncol = 3 * npar + 4)
outcome <- as.data.frame(tmp)
rm(tmp)
colnames(outcome) <- c(paste("st-", pnames, sep = ""),
    "failnls", paste("nls-", pnames, sep = ""), "ssnls",
    "failnlxb", paste("nlxb-", pnames, sep = ""), "ssnlxb")


for (i in 1:nrep) {

cat("Try ", i, ":\n")

st <- runif(3)
names(st) <- pnames
if (debug) print(st)
rnls <- try(nls(y ~ exp(a + b*x) + d, start = st, data = mydata), silent = TRUE)
if (class(rnls) == "try-error") {
   failnls <- TRUE
   parnls <- rep(NA, length(st))
   ssnls <- NA
} else {
   failnls <- FALSE
   ssnls <- deviance(rnls)
   parnls <- coef(rnls)
}
names(parnls) <- pnames
if (debug) {
  cat("nls():")
  print(rnls)
}
rnlxb <- try(nlxb(y ~ exp(a + b*x) + d, start = st, data = mydata), silent = TRUE)
if (class(rnlxb) == "try-error") {
   failnlxb <- TRUE
   parnlxb <- rep(NA, length(st))
   ssnlxb <- NA
} else {
   failnlxb <- FALSE
   ssnlxb <- rnlxb$ssquares
   parnlxb <- rnlxb$coeffs
}
names(parnlxb) <- pnames
if (debug) {
  cat("nlxb():")
  print(rnlxb)
  tmp <- readline()
  cat("\n")
}
solrow <- c(st, failnls = failnls, parnls, ssnls = ssnls,
            failnlxb = failnlxb, parnlxb, ssnlxb = ssnlxb)
outcome[i, ] <- solrow
} # end loop

cat("Proportion of nls  runs that failed = ", sum(outcome$failnls)/nrep, "\n")
cat("Proportion of nlxb runs that failed = ", sum(outcome$failnlxb)/nrep, "\n")




[R] Nice analysis of time series data

2013-03-07 Thread Prof J C Nash (U30A)
In the recent SIAM Review, vol 54, No 3, pp 597-606, Robert Vanderbei 
does a nice analysis of daily temperature data. This uses publicly 
available data. A version of the paper is available at


http://arxiv.org/pdf/1209.0624

and there is a presentation at

http://www.princeton.edu/~rvdb/tex/talks/GERAD/LocalWarming.pdf

This would make a nice case for a vignette showing how to do such an 
analysis in R, possibly as a project for a senior undergraduate or 
perhaps even at the Master's level if some tools were developed, since 
Vanderbei presents a good argument for using Least Absolute Deviations 
regression (LAD). Generally LAD is overlooked by statisticians, maybe 
because the tools are unfamiliar.
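
For anyone wanting to try it, LAD is just median (tau = 0.5) quantile
regression, so one accessible tool is the quantreg package (toy data
invented here):

library(quantreg)
set.seed(1)
x <- 1:100
y <- 0.5 * x + rt(100, df = 2)   # heavy-tailed noise
rq(y ~ x, tau = 0.5)             # LAD fit; compare with coef(lm(y ~ x))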


I'm willing to kibbitz on a vignette if there's interest somewhere.

John Nash



Re: [R] 'gmm' package: How to pass controls to a numerical solver used in the gmm() function?

2013-02-20 Thread Prof J C Nash (U30A)
This is a quite general issue that those of us who try to prepare 
optimization tools must deal with quite often. The minqa package 
internal methods were designed to be used with customized controls to 
the algorithm, but we had to package them with some more or less OK 
compromise settings. If you use that package directly, you can change 
the values, but not usually if it is wrapped inside something like gmm.


I'd welcome -- off list -- discussions with developers of packages like 
gmm to find a consistent approach to allow the controls to be set by 
those users needing to do so. A danger is that each wrapper package uses 
either its own approach or none at all. My suggestion is that such 
packages include a control() list as an argument with sensible defaults, 
and that this list contains an item (probably also a list) of controls 
to the optimizer(s). A bit ugly, but for most users, not used.
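
The suggested pattern might look like the following sketch (all names here
are invented for illustration, not from any existing package):

mywrap <- function(par, fn, control = list()) {
   defaults <- list(maxtries = 5, optcontrol = list(maxit = 500))
   control <- modifyList(defaults, control)
   # forward the optimizer's own controls untouched
   optim(par, fn, method = "BFGS", control = control$optcontrol)
}
mywrap(c(1, 1), function(p) sum(p^2),
       control = list(optcontrol = list(maxit = 1500)))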


Best,

JN


On 13-02-20 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 2
Date: Tue, 19 Feb 2013 19:01:35 -0800
From: David Winsemius <dwinsem...@comcast.net>
To: Malikov, Emir <emali...@binghamton.edu>
Cc: r-help@r-project.org
Subject: Re: [R] 'gmm' package: How to pass controls to a numerical
solver used in the gmm() function?
Message-ID: <b54404ca-acc2-441b-8a1e-b0483695b...@comcast.net>
Content-Type: text/plain; charset=us-ascii


On Feb 19, 2013, at 5:25 PM, Malikov, Emir wrote:


Hello --

The question I have is about the gmm() function from the 'gmm' package
(v. 1.4-5).

The manual accompanying the package says that the gmm() function is
programmed to use any of four numerical solvers -- optim, optimize,
constrOptim, or nlminb -- for the minimization of the GMM objective
function.

I wonder whether there is a way to pass controls to a solver used
while calling the gmm() function?

In particular, the problem that I have been having is that the gmm()
fails to converge within the default number of iterations for the
'optim' solver that it uses. Ideally, I would wish to figure out a way
to be able to choose controls, including the number of iterations, for
the solver that I tell gmm() to use.

Currently, the way I call the function is as follows:

model.name <- gmm(g = g.fn, x = data, gradv = g.gr, t0 = c(start),
type = c("twostep"), optfct = c("optim"))

I also would want the gmm() function to know that I want it to pass
the following control -- maxit=1500 -- to the optim solver.

The argument name in the manual is `itermax`. I cannot tell from looking at the
code whether that would get passed to 'optim'.


Unfortunately, the 'gmm' manual does not tell whether this is doable.

  There is also a ... argument which is stated in the help page to be passed to 
optim. Looking at ?optim one sees that controls generally need to be in a list named 
'control'. That this is the intent is supported by the sentence on page 11 of the gmm vignette:

"We could try different starting values, increase the number of iterations in
the control option of optim or use nlminb which allows to put restrictions on
the parameter space."

-- David Winsemius Alameda, CA, USA




Re: [R] R nls results different from those of Excel ??

2013-02-19 Thread Prof J C Nash (U30A)

This thread unfortunately pushes a number of buttons:

- Excel computing a model by linearization which fits to
residual = log(data) - log(model)

rather than
wanted_residual = data - model

The COBB.RES example in my (freely available but rather dated) book
at http://macnash.telfer.uottawa.ca/nlpe/  shows an example where
comparing the results shows how extreme the differences can be (see the
sketch after these points).

- nls not doing well when the fit is near perfect. Package nlmrt is 
happy to compute such models, which have a role in approximation. The 
builders of nls() are rather (too?) insistent that nls() is a 
statistical function rather than simply nonlinear least squares. I can 
agree with their view in its context, but not for a general scientific 
computing package that R has become. It is one of the gotchas of R.


- Rolf's suggestion to inform Microsoft is, I'm sure, made with the sure
knowledge that M$ will ignore such suggestions. They did, for example,
fix one financial function temporarily (I don't know which). However,
one of Excel's maintainers told me he would disavow admitting that
"Bill called to tell them to put the bug back in" because the president
of a large American bank called to complain his 1998 profit and loss
spreadsheet had changed in the new version of Excel. Appearances are
more important than getting things right. At the same conference where
this "I won't admit I told you" conversation took place, a presentation
was made estimating that 95% of major investment decisions were made
based on Excel spreadsheets. The conference took place before the 2008
crash. One is tempted to make non-statistical inferences.
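
Going back to the first point, the log-scale vs. raw-scale difference is
easy to reproduce (invented data with multiplicative noise):

set.seed(42)
x <- 1:20
y <- 44 * x^2 * exp(rnorm(20, sd = 0.3))
lmfit <- lm(log(y) ~ log(x))   # the "Excel" fit: residuals on the log scale
c(a = exp(coef(lmfit)[[1]]), b = coef(lmfit)[[2]])
nlsfit <- nls(y ~ a * x^b, start = list(a = 40, b = 2))   # raw-scale residuals
coef(nlsfit)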



JN



On 13-02-19 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 79
Date: Mon, 18 Feb 2013 22:40:25 -0800
From: Jeff Newmiller <jdnew...@dcn.davis.ca.us>
To: Greg Snow <538...@gmail.com>, David Gwenzi <dgwe...@gmail.com>
Cc: r-help <r-help@r-project.org>
Subject: Re: [R] R nls results different from those of Excel ??
Message-ID: <50c09528-7917-4a20-ad0e-5f4ebf9d0...@email.android.com>
Content-Type: text/plain; charset=UTF-8

Excel definitely does not use nonlinear least squares fitting for power curve 
fitting. It uses linear LS fitting of the logs of x and y. There should be no 
surprise in the OP's observation.

Greg Snow <538...@gmail.com> wrote:


Have you plotted the data and the lines to see how they compare? (see
fortune(193)).

Is there error around the line in the data? The nls function is known to
not work well when there is no error around the line. Also check and make
sure that the 2 methods are fitting the same model.

You might consider taking the log of both sides of the function to turn it
into a linear function and using lm to fit the logs.


On Mon, Feb 18, 2013 at 9:49 PM, David Gwenzi <dgwe...@gmail.com> wrote:


Hi all

I have a set of data whose scatter plot shows a very nice power
relationship. My problem is when I fit a Power Trend Line in an Excel
spreadsheet, I get the model y = 44.23x^2.06 with an R square value of 0.72.

Now, if I input the same data into R and use
model <- nls(y ~ a*x^b, trace=TRUE, data = my_data, start = c(a=40, b=2))
I get a solution with a = 246.29 and b = 1.51. I have tried several
starting values and this is what I always get. I was expecting to get a
value of a close to 44 and that of b close to 2. Why are these values of a
and b so different from those Excel gave me? Also the R square value for
the nls model is as low as 0.41. What have I done wrong here? Please help.

Thanks in advance

David



Re: [R] Pasting a list of parameters into a function

2013-01-25 Thread Prof J C Nash (U30A)
Quite a long time ago, there was a thread about generalized eigenvalues, 
which ended inconclusively.

   http://tolstoy.newcastle.edu.au/R/help/05/06/6832.html

For students, a good proposal for the Google Summer of Code (gsoc-r) 
would be a nice interface to things like the QZ algorithm and similar 
extended numerical linear algebra. I'm willing to mentor that, and am 
sure there would be someone else to back me up on it. But it needs a 
GOOD proposal.
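
Meanwhile, for a nonsingular B the base tools already get part way there (a
minimal 2 x 2 sketch; for singular or ill-conditioned B a QZ-based routine,
such as geigen in the geigen package, is the robust choice):

A <- matrix(c(2, 1, 1, 3), 2, 2)
B <- matrix(c(1, 0, 0, 4), 2, 2)
eigen(solve(B, A))$values   # solves A x = lambda B x via B^{-1} A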


JN


On 13-01-25 06:00 AM, r-help-requ...@r-project.org wrote:

Message: 39
Date: Thu, 24 Jan 2013 10:37:07 -0800
From: Jeff Newmiller <jdnew...@dcn.davis.ca.us>
To: hp wan <huaping@gmail.com>, r-help@r-project.org
Subject: Re: [R] Pasting a list of parameters into a function
Message-ID: <3ad1a172-d00c-43a5-b949-20873debd...@email.android.com>
Content-Type: text/plain; charset=UTF-8

The eigenvalue problem is not unique to R. This is an R mailing list. What is 
your question about R?

hp wan <huaping@gmail.com> wrote:


Hi mailing listers,

Sorry, I made a little mistake in the previous mail. B^{1} should be
B^{-1}.

Is there a certain function in R to compute generalized
eigenvalues, that is, the problem A x = lambda B x? When I use
eigen(B^{-1} * A), an error happened. It displays there are many Inf
elements in B^{-1}.

Thanks

Huaping Wan



2013/1/25 hp wanhuaping.wan@gmail




Re: [R] Vectorizing integrate()

2012-12-07 Thread Prof J C Nash (U30A)
I found mixed (and not always easy to predict) results from the 
byte-code compiler. It seems necessary to test whether it helps. On some 
calculations, it is definitely worthwhile.
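
A quick self-contained test harness in the style of the thread below (toy
loop-heavy function, invented here):

library(compiler)
library(rbenchmark)
f1 <- function(n) { s <- 0; for (i in 1:n) s <- s + i^2; s }
f2 <- cmpfun(f1)   # byte-compiled version of f1
benchmark(f1(1e5), f2(1e5), replications = 10,
          columns = c("test", "elapsed", "relative"))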


JN


On 12-12-07 01:57 PM, Berend Hasselman wrote:


On 07-12-2012, at 19:37, Spencer Graves wrote:


On 12/7/2012 9:40 AM, Berend Hasselman wrote:


benchmark(eta1 <- f1(X, B, x, sem1), eta2 <- f2(X, B, x, sem1), eta3 <- f3(X, B, x, sem1),
+   eta4 <- f4(X, B, x, sem1), eta5 <- f5(X, B, x, sem1), eta6 <- f6(X, B, x, sem1),
+   replications=10, columns=c("test", "elapsed", "relative"))
                       test elapsed relative
1 eta1 <- f1(X, B, x, sem1)   1.873    1.207
2 eta2 <- f2(X, B, x, sem1)   1.552    1.000
3 eta3 <- f3(X, B, x, sem1)   1.807    1.164
4 eta4 <- f4(X, B, x, sem1)   1.841    1.186
5 eta5 <- f5(X, B, x, sem1)   1.852    1.193
6 eta6 <- f6(X, B, x, sem1)   1.601    1.032

As you can see, using the compiler package is beneficial speedwise.
f2 and f6, both the result of using the compiler package, are the quickest.
It's quite likely that more can be eked out of this.



  So the compiler (f2, f4, f6) provided a slight improvement over f1 and f3 
but not f2, and in any event, the improvement was not great.


I don't understand the "but not f2".
And I don't understand the conclusion for (f2, f4, f6). f4 is a compiled version
of f3 and is slower than its non-compiled version.
f2 and f6 are the quickest compiled versions.
Indeed the improvement is not earth shattering but it does demonstrate what you
can achieve by using the compiler package.

Berend


