Hello.

On Thu, 17 May 2018 09:14:13 +0530, Manish Java wrote:
Dear Josef

My sincere thanks for your reply and your insights into using the Levenberg-Marquardt algorithm. I already have a method in place for finding the optimal "alpha" using a linear optimization algorithm. I was therefore looking to add a non-linear optimization strategy for users who would want to use it.

For univariate optimization, there is
http://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/optim/univariate/BrentOptimizer.html
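As a rough sketch of how that could be applied here (my own example, not taken from the documentation): treat the sum of squared one-step-ahead errors as a univariate function of alpha, and let "BrentOptimizer" search the interval (0, 1).

import org.apache.commons.math3.analysis.UnivariateFunction;
import org.apache.commons.math3.optim.MaxEval;
import org.apache.commons.math3.optim.nonlinear.scalar.GoalType;
import org.apache.commons.math3.optim.univariate.BrentOptimizer;
import org.apache.commons.math3.optim.univariate.SearchInterval;
import org.apache.commons.math3.optim.univariate.UnivariateObjectiveFunction;
import org.apache.commons.math3.optim.univariate.UnivariatePointValuePair;

public class BrentAlphaSketch {
    public static void main(String[] args) {
        final double[] y = {9, 8, 9, 12, 10, 12, 11, 7, 13, 9, 11, 10};

        // Sum of squared one-step-ahead errors as a function of alpha,
        // using the NIST convention S_2 = y_1.
        UnivariateFunction sse = new UnivariateFunction() {
            public double value(double alpha) {
                double s = y[0];
                double sum = 0;
                for (int t = 1; t < y.length; t++) {
                    double e = y[t] - s;               // one-step-ahead error
                    sum += e * e;
                    s = alpha * y[t] + (1 - alpha) * s; // smoothing recursion
                }
                return sum;
            }
        };

        UnivariatePointValuePair best = new BrentOptimizer(1e-10, 1e-12)
                .optimize(new MaxEval(200),
                          new UnivariateObjectiveFunction(sse),
                          GoalType.MINIMIZE,
                          new SearchInterval(0, 1));
        System.out.println("optimal alpha = " + best.getPoint());
    }
}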

I will use the linear optimization algorithm I have for the time being and
see if I can incorporate some other type of non-linear optimization
algorithm, such as some form of gradient descent.

If it is a least-squares problem, it should be possible to
use either "LevenbergMarquardtOptimizer" or "GaussNewtonOptimizer":
http://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/fitting/leastsquares/package-frame.html
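As a sketch of how that could be set up for the smoothing problem (my own example, not from the docs; following Josef's suggestion below, it fits the initial smoothed value S_2 as a second parameter alongside alpha, and fills the Jacobian by differentiating the smoothing recursion):

import org.apache.commons.math3.fitting.leastsquares.LeastSquaresBuilder;
import org.apache.commons.math3.fitting.leastsquares.LeastSquaresOptimizer.Optimum;
import org.apache.commons.math3.fitting.leastsquares.LeastSquaresProblem;
import org.apache.commons.math3.fitting.leastsquares.LevenbergMarquardtOptimizer;
import org.apache.commons.math3.fitting.leastsquares.MultivariateJacobianFunction;
import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.ArrayRealVector;
import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.linear.RealVector;
import org.apache.commons.math3.util.Pair;

public class SmoothingLmSketch {
    public static void main(String[] args) {
        final double[] y = {9, 8, 9, 12, 10, 12, 11, 7, 13, 9, 11, 10};
        final int m = y.length - 1;        // residuals for y[1] .. y[n-1]

        // Model: S[1] = s0, then S[t] = alpha*y[t-1] + (1-alpha)*S[t-1];
        // partial derivatives are propagated by the same recursion.
        MultivariateJacobianFunction model = new MultivariateJacobianFunction() {
            public Pair<RealVector, RealMatrix> value(RealVector p) {
                double alpha = p.getEntry(0);
                double s     = p.getEntry(1); // initial smoothed value S_2
                double dsDa  = 0;             // dS/dalpha
                double dsDs0 = 1;             // dS/dS_2
                double[]   value    = new double[m];
                double[][] jacobian = new double[m][2];
                for (int t = 1; t <= m; t++) {
                    value[t - 1]       = s;
                    jacobian[t - 1][0] = dsDa;
                    jacobian[t - 1][1] = dsDs0;
                    dsDa  = y[t] - s + (1 - alpha) * dsDa; // uses the old s
                    dsDs0 = (1 - alpha) * dsDs0;
                    s     = alpha * y[t] + (1 - alpha) * s;
                }
                return new Pair<RealVector, RealMatrix>(
                        new ArrayRealVector(value),
                        new Array2DRowRealMatrix(jacobian));
            }
        };

        double[] target = new double[m];
        System.arraycopy(y, 1, target, 0, m);

        LeastSquaresProblem problem = new LeastSquaresBuilder()
                .start(new double[] {0.2, y[0]}) // initial alpha and S_2
                .model(model)
                .target(target)
                .maxEvaluations(1000)
                .maxIterations(1000)
                .build();

        Optimum optimum = new LevenbergMarquardtOptimizer().optimize(problem);
        System.out.println("alpha = " + optimum.getPoint().getEntry(0));
        System.out.println("S_2   = " + optimum.getPoint().getEntry(1));
    }
}

Note that this fits plain, unweighted residuals; the alpha-dependent weights Josef discusses below are not modelled here.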

The unit tests in the code repository should put you on track.

There is also the userguide:
http://commons.apache.org/proper/commons-math/userguide/leastsquares.html

[While working your way to implementing the code for getting the solution,
you could even contribute another example to that page.]

If you can't make it work, don't hesitate to ask more.

HTH,
Gilles

Once again, many thanks for your kind reply and insights into the approach
I shared.


Kind regards
~ Manish

On Wed, May 16, 2018 at 9:54 PM, <josef.v...@uni-ulm.de> wrote:

Dear Manish,

There are a few issues with your approach:

first: as far as I understand, the LevenbergMarquardtOptimizer is designed to optimize more than one parameter. In the present case alpha is the sole parameter to be optimized. There is no Jacobian for the one-dimensional
case.

On the web page that you mentioned, they recommend using a proper starting value for the smoothed sequence as an additional unknown parameter. In your case it would be something around 9. With a second parameter to be optimized, you could formally use the LevenbergMarquardtOptimizer.

second: your alpha value should be constrained to stay in the range 0 < alpha < 1. You can achieve this by using a fit parameter x, and in the routine that calculates the predicted sequence you first convert x to alpha using, for example, the formula alpha = exp(x)/(1 + exp(x)).
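In Java this could look like the following (the helper names are mine):

  // Map an unconstrained fit parameter x to alpha in (0, 1), and back.
  static double toAlpha(double x) { return Math.exp(x) / (1 + Math.exp(x)); } // logistic
  static double toX(double alpha) { return Math.log(alpha / (1 - alpha)); }   // logit, e.g. for the initial guess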

third: your calculation of the weights does not reflect the weight formula they recommend on the web page.

fourth: the Jacobian is the derivative of the predicted sequence values with respect to the parameters, i.e. with respect to alpha and the start value mentioned above. However, in your case the 'weights', which correspond to the inverse of the variance or standard deviation of the observations (a.k.a. measurement error), also depend on alpha, and hence you do not have a straightforward least-squares problem, where the weights (or measurement errors or observational errors) are assumed to be constant.
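Concretely, differentiating the recursion S_t = alpha*y_{t-1} + (1 - alpha)*S_{t-1} term by term gives (my derivation, using the conventions of that page):

  dS_t/dalpha = y_{t-1} - S_{t-1} + (1 - alpha)*dS_{t-1}/dalpha,  with dS_2/dalpha = 0
  dS_t/dS_2   = (1 - alpha)*dS_{t-1}/dS_2,                        with dS_2/dS_2   = 1

so each row of the Jacobian of the predicted values can be filled in the same loop that computes the sequence itself.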

For your case, I would use a standard optimizer like 'Powell' or 'BOBYQA', and in the 'value' function I would calculate the residual sum of squares based on the observations and the sequence and weight formulas.
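A minimal sketch of that approach (my own example; it reuses the logistic transform from the second point to keep alpha inside (0, 1), and for brevity minimizes the plain unweighted residual sum of squares):

import org.apache.commons.math3.analysis.MultivariateFunction;
import org.apache.commons.math3.optim.InitialGuess;
import org.apache.commons.math3.optim.MaxEval;
import org.apache.commons.math3.optim.PointValuePair;
import org.apache.commons.math3.optim.nonlinear.scalar.GoalType;
import org.apache.commons.math3.optim.nonlinear.scalar.ObjectiveFunction;
import org.apache.commons.math3.optim.nonlinear.scalar.noderiv.PowellOptimizer;

public class PowellRssSketch {
    public static void main(String[] args) {
        final double[] y = {9, 8, 9, 12, 10, 12, 11, 7, 13, 9, 11, 10};

        // Residual sum of squares as a function of the unconstrained x,
        // with alpha = exp(x)/(1 + exp(x)) keeping alpha inside (0, 1).
        MultivariateFunction rss = new MultivariateFunction() {
            public double value(double[] p) {
                double alpha = Math.exp(p[0]) / (1 + Math.exp(p[0]));
                double s = y[0];   // S_2 = y_1
                double sum = 0;
                for (int t = 1; t < y.length; t++) {
                    double e = y[t] - s;
                    sum += e * e;
                    s = alpha * y[t] + (1 - alpha) * s;
                }
                return sum;
            }
        };

        PointValuePair best = new PowellOptimizer(1e-10, 1e-12)
                .optimize(new MaxEval(1000),
                          new ObjectiveFunction(rss),
                          GoalType.MINIMIZE,
                          new InitialGuess(new double[] {0.0})); // x = 0 -> alpha = 0.5
        double xOpt = best.getPoint()[0];
        System.out.println("alpha = " + Math.exp(xOpt) / (1 + Math.exp(xOpt)));
    }
}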

Good luck,  Josef


Quoting Manish Java <manish.in.j...@gmail.com>:

I am trying to write a Java program for generating a forecast using exponential smoothing as described here: https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc431.htm. As described in the linked document, exponential smoothing uses a smoothing constant "alpha". The document goes on to say that an optimal value for "alpha" can be found using the "Marquardt procedure", which I take as referring to the Levenberg-Marquardt algorithm. From the linked document, it seems that the problem of finding the optimal "alpha" is treated as a least-squares problem and fed into the optimizer, with an initial guess for "alpha".

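(For reference, the recursion on that page sets S_2 = y_1 and computes S_t = alpha*y_{t-1} + (1 - alpha)*S_{t-1} for t >= 3.)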
After an extensive web search I could not find any ready example of using the Levenberg-Marquardt algorithm to find "alpha" for this kind of problem, in any programming language. So I dug into the Javadocs and test cases for the class *LevenbergMarquardtOptimizer* to see if I could come up with a solution of my own. My program is given an array of values, say *[9, 8, 9, 12, 10, 12, 11, 7, 13, 9, 11, 10]*, and an initial guess for "alpha", say *0.2*. I have been able to determine that this information needs to be converted into a *LeastSquaresProblem*, for which I have done the following so far:


   1. Set the input array as the *target*;
   2. Set the starting point *start* to the initial value of alpha (*{ 0.2 }*);
   3. Set *weight* to *[1, 1 - alpha, (1 - alpha)^2, ...]*; and
   4. Set the optimization function of the model to return smoothed values for each of the input values.

I am now unsure how the Jacobian should be calculated. I would like to know whether I have approached the problem correctly so far, and how to calculate the Jacobian. I have not been able to find any material, on the web or in print, that describes the procedure for finding the Jacobian for a problem like this.

Any help or pointers will be greatly appreciated.