Hello,
I'm hoping for some help implementing a general optimization problem in R. I'd 
like to write a program that, for a vector of non-negative input values x, 
returns a non-negative "normalized" vector y such that sum(y) == 1, y <= maxx 
elementwise (maxx is a vector of maximum values), and sum((x-y)^2) is 
minimized. Additionally, I'd like to remove the interval (0, minx) from the 
domain of each element of y, so that any y value may be zero or may satisfy 
minx <= y <= maxx, but may not fall in 0 < y < minx. Thus small, non-zero 
values are forced to zero.
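
To make the constraints concrete, here is a minimal sketch of the objective 
and a feasibility check (the names `objective` and `feasible` are mine, purely 
illustrative, not from any existing code):

```r
# sum of squared deviations from the input vector x
objective <- function(y, x) sum((x - y)^2)

# y is feasible if it is non-negative, sums to 1, respects the upper
# bounds, and each element is either (near) zero or at least minx
feasible <- function(y, minx, maxx, tol = 1e-8) {
  all(y >= -tol) &&
    abs(sum(y) - 1) <= tol &&
    all(y <= maxx + tol) &&
    all(y <= tol | y >= minx - tol)   # y_i is 0 or in [minx, maxx]
}
```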

The last criterion is that the solution must be very fast to compute (e.g., 
under 1/3 second for a vector of length 5000).

I coded something up using the L-BFGS-B method of optim, penalizing values in 
(0, minx) with a parabolic cost function. While reasonably fast and accurate, 
it occasionally fails with the message "ERROR: 
ABNORMAL_TERMINATION_IN_LNSRCH". I believe this is because the gradient is 
discontinuous at minx, so optim's line search finds the supplied gradient 
unsatisfactory near that value. Not supplying the gradient avoids the error 
(optim then falls back to a finite-difference approximation), but is 
unacceptably slow.
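
For reference, a minimal sketch of the penalized setup I described -- not my 
actual code. The penalty weight `w`, the quadratic penalty used to enforce the 
sum constraint, and the starting point are all assumptions here:

```r
solve_penalized <- function(x, minx, maxx, w = 10) {
  # parabolic penalty on the forbidden interval (0, minx), zero elsewhere;
  # its derivative jumps from -w * minx to 0 at y = minx, which is the
  # discontinuity that can trip up the line search
  pen  <- function(y) ifelse(y > 0 & y < minx, w * y * (minx - y), 0)
  dpen <- function(y) ifelse(y > 0 & y < minx, w * (minx - 2 * y), 0)

  fn <- function(y) sum((x - y)^2) + sum(pen(y)) + w * (sum(y) - 1)^2
  gr <- function(y) -2 * (x - y) + dpen(y) + 2 * w * (sum(y) - 1)

  # box constraints handled natively by L-BFGS-B
  optim(pmin(pmax(x, 0), maxx), fn, gr, method = "L-BFGS-B",
        lower = 0, upper = maxx)
}
```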

Does anyone have an idea for a cleverer way to perform what is effectively a 
simple quadratic programming problem on a discontinuous domain, 
{0} ∪ [minx, maxx]?
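
In case it helps frame the question: the continuous relaxation (dropping the 
forbidden interval entirely) does have a well-known fast solution. Projecting 
x onto {y : sum(y) = 1, 0 <= y <= maxx} gives y_i = min(max(x_i - lambda, 0), 
maxx_i) for a scalar lambda chosen so the sum is 1, which bisection finds 
quickly. A sketch (function name is illustrative):

```r
project_box_simplex <- function(x, maxx, iters = 100) {
  stopifnot(sum(maxx) >= 1)                   # otherwise infeasible
  s <- function(lambda) sum(pmin(pmax(x - lambda, 0), maxx))
  lo <- min(x - maxx)                         # s(lo) = sum(maxx) >= 1
  hi <- max(x)                                # s(hi) = 0 <= 1
  for (i in seq_len(iters)) {                 # s() is nonincreasing
    mid <- (lo + hi) / 2
    if (s(mid) > 1) lo <- mid else hi <- mid
  }
  pmin(pmax(x - (lo + hi) / 2, 0), maxx)
}
```

Each bisection step is O(n), so this easily meets the speed target; it is the 
semi-continuous {0} ∪ [minx, maxx] part that I don't know how to bolt on.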

Thanks, Robert


Robert McGehee, CFA
Geode Capital Management, LLC
One Post Office Square, 28th Floor | Boston, MA | 02109
Direct: (617)392-8396

