Re: [R] Trouble with Optimization in "Alabama" Package

2011-06-20 Thread mcgrete
Hello all,

I was in direct email contact with Ravi, who was very kind to offer his
assistance.  In between correspondence with Ravi, I arrived at a solution.
In case others have the same issue, I am sharing it here.  Thanks, Ravi,
for your willingness to help.

Regards,
Tim

**
Solution: or should I say, my lack of know-how in passing additional
arguments to the functions fn, gr, heq, hin, etc. through the auglag
function in the alabama package.
**

Well, I believe I have solved the problem.  My copy of the alabama
documentation must have been an older one; in more recent documentation, an
example I found this morning led me to the solution.

Here is what I needed to do.

1.  I created a list 'var2' to hold the parameters that I wish to pass
along to the functions fn, gr, hin, heq, etc.  (These parameters are
constant and cannot change during the optimization; I am attempting to find
the best vector 'f' that optimizes my cost function.)  I then modified the
declaration of each function (e.g. fn, gr, hin, heq) to the form below.  I
am not sure a list is required, but it was nice and compact for my needs.

foo1 <- function(f, var, ...) {
    # unpack the fixed system parameters from the list
    x  <- var$x
    p  <- var$p
    wo <- var$wo
    s  <- var$s
    r  <- var$r
    # Other code
    return(return_object_name)
}  # end of function
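
(As a quick sanity check, illustrative only and assuming fo and the list
var2 as built in step 2 below, each function can be called directly with the
list before handing it to auglag:)

MVK_cost_fcn(fo, var = var2)       # should return a finite scalar
MVK_w_inequality(fo, var = var2)   # should be nonnegative at a feasible start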


2.  Then, I created a function 'foobar' from which to call auglag, passing
arguments into foobar.  I ran foobar two different ways: 1) with the
assignments of fo, x, p, wo, s, r (the variables that make up my list
'var2') commented out, so that the values supplied as arguments to
foobar(...) were used, and 2) with those assignments uncommented.  This let
me confirm which set of data auglag was actually using when called from
within the function.

foobar <- function(fo, x, p, wo, s, r) {
    # Leave the assignments below commented out to use the incoming
    # arguments; uncomment them to set the values inside the function instead.
    # I was able to demonstrate both: building var2 from the arguments
    # supplied to foobar, and building var2 from values set within foobar.
    # x  <- c(0.25, 0.25, 0.5)
    # fo <- 0.1
    # p  <- c(0.4, 0.2, 0.4)
    # wo <- 100
    # s  <- c(0.1, 0.5, 0.4) * 1e6
    # r  <- 0.18
    var2 <- list(x = x, p = p, wo = wo, s = s, r = r)
    ans2 <- auglag(par = fo, fn = MVK_cost_fcn, gr = MVK_grad_cost_fcn,
                   hin = MVK_w_inequality, var = var2)
    return(ans2)
}

Then, I formulated the call to auglag from the main environment (not from
within any function), to verify that supplying the additional argument works
in that position.  The key: I had not tried adding the additional
argument(s) in this location before.

auglag(par = fo, fn = MVK_cost_fcn, gr = MVK_grad_cost_fcn,
       hin = MVK_w_inequality, var = var2)

Then I ran

ans5 <- foobar(fo, x, p, wo, s, r)

to confirm that I can execute auglag from within a function, and use the
fixed 'system parameters' (x, p, wo, s, r) as a list in my other functions,
i.e. the cost function, gradient, etc.

I was successful, and was able to verify proper use of the different local
and global variables.

** NOTE: I realized that I had been attempting to pass 'var' without naming
it, i.e. without 'var=var'.  Calling it incorrectly as follows:

ans <- auglag(par = fo, var, fn = MVK_cost_fcn, gr = MVK_grad_cost_fcn,
              hin = MVK_w_inequality)

I obtained the error:

Error in hin(par, ...) : argument "var" is missing, with no default
Calls: auglag -> auglag2 -> hin -> hin

Silly mistake...

Using 'var=var' (or 'var=var2' inside my function, where 'var2' holds the
new list of variables) in the call to auglag, both of these work:

ans <- auglag(par = fo, fn = MVK_cost_fcn, gr = MVK_grad_cost_fcn,
              hin = MVK_w_inequality, var = var)
ans <- auglag(par = fo, var = var, fn = MVK_cost_fcn, gr = MVK_grad_cost_fcn,
              hin = MVK_w_inequality)

both work!
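
For anyone who wants a self-contained illustration of the pattern, here is a
toy problem (not my MVK functions; toy_fn, toy_hin, and the data are made up):

library(alabama)

# A named argument 'var' supplied to auglag() is forwarded through '...'
# into every user function (fn, gr, hin, heq), exactly as above.
toy_fn  <- function(f, var, ...) sum(var$w * (f - var$target)^2)
toy_hin <- function(f, var, ...) f - var$lower   # require f[i] >= lower[i]

var2 <- list(w = c(1, 2), target = c(3, 4), lower = c(0, 0))
ans  <- auglag(par = c(1, 1), fn = toy_fn, hin = toy_hin, var = var2)
ans$par   # should end up near c(3, 4)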


Regards,
Tim



Re: [R] Trouble with Optimization in "Alabama" Package

2011-06-19 Thread mcgrete
Hello Erik and Ravi,

I am curious whether Erik or Ravi has a solution to Erik's question.

I have a similar problem.  I wish to use auglag to solve a nonlinear
optimization problem.  The cost function, gradient function, and constraints
are all functions of some parameters that depend on the system I wish to
analyze.  For any given system, these parameters are fixed, e.g. x, y, z are
parameters that are problem or system dependent.  The variable f is a vector
that I wish to find such that my cost function is maximized or minimized.

I have used auglag without any issues as follows.  At first, I called auglag
without trying to provide arguments for x, y, z, but rather only the initial
parameter fo.  My functions (fn, gr, hin, heq, etc.) were all defined in the
format:

foo <- function(f) {
    # code...
}

The parameters x, y, z were obtained from what I understand to be the global
environment.  I was NOT calling 'auglag' from within a function, but simply
from the command line in RKWard.  In other words, the parameters x, y, z
were calculated outside of any function but were used inside the functions
fn, gr, hin, heq, etc.

Now I have constructed a function, foobar, that takes inputs a, b, c, which
are used to calculate the parameters x, y, z.  Within foobar, I attempt to
call 'auglag' to find an optimum solution.  For example:

foobar <- function(a, b, c) {
    # code to calculate x, y, z as functions of a, b, c
    ans <- auglag(par = fo, fn = foo1, gr = foo2, hin = foo3, heq = foo4)
    return(ans)
}

However, auglag appears to utilize x,y,z from the global environment rather
than my local variables x,y,z inside foobar.

I have attempted to construct the functions fn, gr, hin, heq, ... in several
ways, without success.  For example:

foo <- function(f, x, y, z) { code... }
ans <- auglag(par = fo, fn = foo1, gr = foo2, hin = foo3, heq = foo4), or
ans <- auglag(par = c(fo, x, y, z), fn = foo1, gr = foo2, hin = foo3, heq = foo4)

and I also tried:

foo <- function(f, ...) { code... }
ans <- auglag(par = fo, fn = foo1, gr = foo2, hin = foo3, heq = foo4), or
ans <- auglag(par = c(fo, x, y, z), fn = foo1, gr = foo2, hin = foo3, heq = foo4)

All failed.

Is it simply not feasible to use 'auglag' in this manner?  Or have I missed
how to pass arguments to auglag, which in turn would pass these arguments on
to fn, gr, hin, heq, ...?
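
One workaround I have been wondering about but have not tested: define the
functions inside foobar itself, so that they capture the local x, y, z by
lexical scoping instead of receiving them as extra arguments.  A minimal
sketch (the objective and constraint here are placeholders, not my real
functions):

library(alabama)

foobar <- function(a, b, c) {
    # placeholder calculations standing in for "x, y, z as functions of a, b, c"
    x <- a + b
    y <- rep(0, length(x))
    z <- c                 # starting values for the optimization
    # Because local_fn and local_hin are created here, they see this local
    # x and y (lexical scoping), not any x or y in the global environment.
    local_fn  <- function(f) sum((f - x)^2)   # toy objective: pull f toward x
    local_hin <- function(f) f - y            # toy constraints: f >= y
    auglag(par = z, fn = local_fn, hin = local_hin)
}

ans <- foobar(a = c(1, 2), b = c(3, 4), c = c(5, 5))   # optimum near c(4, 6)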

Any help would be greatly appreciated.

Regards,
Tim





Re: [R] Constrainted Nonlinear Optimization - lack of convergence

2011-05-20 Thread mcgrete
Hello all,

I did find a bug in the code for my cost function; I have only run a few
examples to verify that it works properly, but this appears to have resolved
most of my questions.

I did find that some of my inequality constraints are not quite met, e.g.
where z1 >= 0 and z2 >= 0 are two such constraints, the code often converges
with z1 ~ -1e-6, z2 ~ -1e-8.  For the few examples I have run, this does not
appear to have caused a problem.  I will try requiring z1 - 1e-4 >= 0, or
similar, to see if this helps.
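
Something like the following is what I have in mind (an illustrative sketch
only; hin_z, free_idx, Z, cost_fcn, and z0 are placeholder names, not my
actual code):

# Shift the nonnegativity constraints by a small slack so that tiny numerical
# violations like those above still leave the true z values positive.
slack <- 1e-4
hin_z <- function(z, Z, free_idx, slack, ...) {
    c(z[free_idx] - slack,   # require z[i] - slack >= 0 instead of z[i] >= 0
      Z - sum(z))            # the remaining budget-type constraint, unchanged
}
# then, e.g.: auglag(par = z0, fn = cost_fcn, hin = hin_z,
#                    Z = Z, free_idx = free_idx, slack = slack)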

Tim



[R] Constrainted Nonlinear Optimization - lack of convergence

2011-05-17 Thread mcgrete
Hello,

I am attempting to utilize the 'alabama' package to solve a constrained
nonlinear optimization problem.

The problem has both equality and inequality constraints (heq and hin
functions are used).  All constraints are smooth, i.e. I can differentiate
easily to produce heq.jac and hin.jac functions. 

My initial solution is feasible; I am attempting to maximize a function,
phi.  As such, I create an objective or cost function '-phi', and the
gradient of the cost function '-dphi/dz'.  

I will gladly provide the detailed code, but perhaps an overview of the
problem may be sufficient.

0.  I installed 'alabama' and was successful at solving the example problem.

1.  My constraints are:
z=0 (for several elements in the vector z)
z>=0 (for remaining elements in vector z)
Z - sum(z) >=0, where Z is a constant real number.

2.  My cost function to maximize is (equivalently, minimize -phi):

     phi == SUM[ p[i]*ln(f[i]) ], with the sum over i = 1:length(z),

     where f[i] == { (1-r)*sum(z+s) - z[i] - s[i] } * z[i]/(z[i]+s[i]) + Z - sum(z),

     where s and p are constant vectors of length(z) (all elements of p and
     s are > 0), and (1-r) is a scalar > 0.
     Note: under the constraints listed above, f[i] should always be >= 0.
 

3.  I can readily calculate the gradient of phi, where in general
dphi/dz[i] == p*f'/f, with f' denoting df/dz[i].

4.  I created functions for the inequality and equality constraints and
their Jacobians, the cost function, and the gradient of the cost function
(an illustrative sketch follows this list).
 
5.  I use the alabama package and the 'auglag' function.  As a first
attempt, I used only a single inequality constraint z > 0 (all other z
constraints are z = 0), plus the Z - sum(z) > 0 inequality constraint.  I
used default settings, except for attempts to use various methods, e.g.
BFGS, Nelder-Mead.
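
To make this concrete, here is a stripped-down sketch of the kind of
functions I am describing (illustrative only; these are not my actual
functions, the names neg_phi, hin_z, heq_z, free_idx, fixed_idx are
placeholders, and the fixed data are passed through auglag's '...' argument):

library(alabama)

# Cost function: minimize -phi, where phi = sum(p * log(f))
neg_phi <- function(z, p, s, r, Z, ...) {
    f <- ((1 - r) * sum(z + s) - z - s) * z / (z + s) + Z - sum(z)
    -sum(p * log(f))
}

# Inequality constraints (each component must be >= 0):
# z[i] >= 0 for the free elements, and Z - sum(z) >= 0
hin_z <- function(z, Z, free_idx, ...) c(z[free_idx], Z - sum(z))

# Equality constraints: z[i] = 0 for the fixed elements
heq_z <- function(z, fixed_idx, ...) z[fixed_idx]

# Data from 2a below, with the feasible starting point z = [0, 0, 200, 0, 0]
s  <- c(200, 500, 400, 300, 100)
p  <- c(0.1, 0.2, 0.4, 0.1, 0.2)
Z  <- 1000
r  <- 0.2                      # so that (1 - r) = 0.8
z0 <- c(0, 0, 200, 0, 0)

# No analytic gradient supplied here, so auglag falls back to a numerical one
ans <- auglag(par = z0, fn = neg_phi, hin = hin_z, heq = heq_z,
              p = p, s = s, r = r, Z = Z,
              free_idx = 3, fixed_idx = c(1, 2, 4, 5))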

Review of the alabama package source code leads me to believe that this code
automatically generates the Lagrangian of the cost function augmented with
Lagrange multipliers, and also generates the gradient of the augmented
Lagrangian.  Hence, I assume (perhaps incorrectly) that auglag is
automatically generating the dual problem, and attempts to find a solution
to the dual problem by calling 'optim'.

MY ISSUE:
The code often runs successfully (converges), sometimes satisfying both KKT1
and KKT2 (TRUE), sometimes only one of the two.  Sometimes it fails to
converge at all.  When it does converge, I do not obtain the same optimum
when I use different initial conditions.  When it fails to converge, I often
end up with a NaN, generated when attempting to take log(f[i]), meaning that
f[i] < 0; I interpret and observe that some or all of the elements of the
vector z are less than zero, despite my constraints.

QUESTIONS
Other than the obvious (review my code for typos, etc., which I believe have
been resolved):
1.  Can the alabama procedure take a solution path that does not satisfy the
constraints?  If not, then I must have an error in my code despite my
attempts to eliminate one, and I must review it yet again.

2.  If the path may not satisfy all of the constraints (perhaps due to steep
gradients), how do I avoid this situation?

2a.  I presume that some of the issues may stem from differences in scaling,
e.g. say s=[200,500,400,300,100], p=[0.1,0.2,0.4,0.1,0.2], Z=1000,
(1-r)=0.8, and an initial starting point of z=[0,0,200,0,0].  However, I am
not experienced at scaling these or the constraints.  Any suggestions?

2b.  I am not an expert in optimization, but I have some background in
math/engineering.  I suspect and hope that something as simple as relaxing
the z=0 constraints to z=delta, where delta is a small positive number, may
help.  Any comments?  I admit I am lazy for not trying this, as I just
thought of it while writing this post.

2c.  I am just dangerously knowledgeable that penalty functions exist, but I
am uncertain how to use them and how to select the term 'sig0'.
Suggestions?

2d.  Thinking about it more, I have not rigorously attempted to modify the
convergence tolerance, since I suspect my issue is more related to the
solution path not remaining within the constraints than to convergence
itself.  Am I incorrect in thinking so?
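
For concreteness, the kind of call I think the documentation is describing
for 2a through 2d (untested; the control names are my reading of the auglag
help page, and neg_phi, hin_z, heq_z are the placeholder functions sketched
above):

ans <- auglag(par = z0, fn = neg_phi, hin = hin_z, heq = heq_z,
              p = p, s = s, r = r, Z = Z,
              free_idx = 3, fixed_idx = c(1, 2, 4, 5),
              # 2a: rescale the parameters so optim() works on comparable magnitudes
              control.optim = list(parscale = pmax(abs(z0), 1)),
              # 2c/2d: initial penalty parameter and outer convergence tolerance
              control.outer = list(sig0 = 100, eps = 1e-8))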

I would appreciate any assistance that someone can provide.  Again, if the
code is required, I will share it, but I hope that I have defined my problem
well enough above to avoid anyone having to sort through / debug my own
code.

Much appreciated,
Tim

