Dear Josef

My sincere thanks for your reply and your insights into using the
Levenberg-Marquardt algorithm. I already have a method in place for finding
the optimal "alpha" using a linear optimization algorithm. I was therefore
looking to add a non-linear optimization strategy for users who would want
to use one.

I will use the linear optimization algorithm I have for the time being and
see if I can incorporate some other type of non-linear optimization
algorithm, such as some form of gradient descent.
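
To make this concrete for myself, below is a rough, untested sketch of the
kind of thing I have in mind: the residual sum of squares of the smoothed
sequence (following the NIST formula, with S(2) = y(1)), minimized over
"alpha" by a crude finite-difference descent. The method names, step size
and iteration count are only placeholders and would need tuning:

    // Residual sum of squares of single exponential smoothing for a given
    // alpha, using S(2) = y(1) and S(t+1) = alpha*y(t) + (1 - alpha)*S(t).
    static double sse(double[] y, double alpha) {
        double s = y[0];              // first smoothed value, S(2)
        double sum = 0.0;
        for (int t = 1; t < y.length; t++) {
            double error = y[t] - s;  // one-step-ahead forecast error
            sum += error * error;
            s = alpha * y[t] + (1 - alpha) * s;
        }
        return sum;
    }

    // Crude descent on sse(alpha) using a central finite difference.
    static double findAlpha(double[] y, double initialGuess) {
        double alpha = initialGuess;
        final double h = 1e-6;        // finite-difference width
        final double step = 0.005;    // step size; would need tuning
        for (int i = 0; i < 5000; i++) {
            double gradient = (sse(y, alpha + h) - sse(y, alpha - h)) / (2 * h);
            alpha -= step * gradient;
            // keep alpha strictly inside (0, 1)
            alpha = Math.max(0.001, Math.min(0.999, alpha));
        }
        return alpha;
    }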

Once again, many thanks for your kind reply and insights into the approach
I shared.


Kind regards
~ Manish

On Wed, May 16, 2018 at 9:54 PM, <[email protected]> wrote:

> Dear Manish,
>
> There are a few issues with your approach:
>
> first: as far as I understand, the LevenbergMarquardtOptimizer is designed
> to optimize more than one parameter. In the present case alpha is the sole
> parameter to be optimized. There is no Jacobian for the one-dimensional
> case.
>
> On the web page that you mentioned, they recommend using a proper
> starting value for the smoothed sequence as an additional unknown
> parameter. In your case it would be something around 9. With a second
> parameter to be optimized, you could formally use the
> LevenbergMarquardtOptimizer.
>
> second: your alpha value should be constrained to stay in the range
> 0 < alpha < 1.0. You can achieve this by using a fit parameter x; in the
> routine that calculates the predicted sequence you first have to convert
> x to alpha using, for example, the formula alpha = exp(x) / (1 + exp(x)).
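
A minimal illustration of that transform in plain Java (the method names
here are just my own placeholders):

    // Map an unbounded fit parameter x to alpha in (0, 1), and back again.
    // For example, toX(0.2) is about -1.386 and toAlpha(-1.386) is about 0.2.
    static double toAlpha(double x) {
        return Math.exp(x) / (1 + Math.exp(x));  // logistic function
    }

    static double toX(double alpha) {
        return Math.log(alpha / (1 - alpha));    // its inverse (logit)
    }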
>
> third: your calculation of the weights does not reflect the weight
> formula they recommend on the web page.
>
> fourth: the Jacobian is the derivative of the predicted sequence values
> with respect to the parameters, i.e. against alpha and the start value
> mentioned above. However, in your case the 'weights', which correspond to
> the inverse of the variance or standard deviation of the observations
> (a.k.a. the measurement error), also depend on alpha, and hence you do not
> have a straightforward least-squares problem, where the weights (or
> measurement errors, or observational errors) are assumed to be constant.
>
> For your case, I would use a standard optimizer like 'Powell' or 'BOBYQA',
> and in the 'value' function I would calculate the residual sum of squares
> based on the observations and the sequence and weight formulas.
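
To make sure I understand this suggestion, here is a minimal, untested
sketch of it using the PowellOptimizer from Commons Math, with the two
parameters described above (the logit-transformed alpha and the starting
smoothed value) and a plain residual sum of squares. The class name is my
own, and the weight formula from the handbook is left out for brevity:

    import org.apache.commons.math3.analysis.MultivariateFunction;
    import org.apache.commons.math3.optim.InitialGuess;
    import org.apache.commons.math3.optim.MaxEval;
    import org.apache.commons.math3.optim.PointValuePair;
    import org.apache.commons.math3.optim.nonlinear.scalar.GoalType;
    import org.apache.commons.math3.optim.nonlinear.scalar.ObjectiveFunction;
    import org.apache.commons.math3.optim.nonlinear.scalar.noderiv.PowellOptimizer;

    public class SmoothingFit {

        public static void main(String[] args) {
            final double[] y = { 9, 8, 9, 12, 10, 12, 11, 7, 13, 9, 11, 10 };

            // p[0] is the unbounded parameter x, with alpha = exp(x)/(1+exp(x));
            // p[1] is the starting value of the smoothed sequence, S(2).
            MultivariateFunction rss = new MultivariateFunction() {
                public double value(double[] p) {
                    double alpha = Math.exp(p[0]) / (1 + Math.exp(p[0]));
                    double s = p[1];
                    double sum = 0.0;
                    for (int t = 1; t < y.length; t++) {
                        double error = y[t] - s;      // one-step-ahead error
                        sum += error * error;
                        s = alpha * y[t] + (1 - alpha) * s;
                    }
                    return sum;
                }
            };

            PowellOptimizer optimizer = new PowellOptimizer(1e-9, 1e-9);
            PointValuePair result = optimizer.optimize(
                    new MaxEval(10000),
                    new ObjectiveFunction(rss),
                    GoalType.MINIMIZE,
                    // start at alpha = 0.2 (x = ln(0.2/0.8)) and S(2) = 9
                    new InitialGuess(new double[] { Math.log(0.2 / 0.8), 9.0 }));

            double x = result.getPoint()[0];
            double alpha = Math.exp(x) / (1 + Math.exp(x));
            System.out.println("alpha = " + alpha
                    + ", start = " + result.getPoint()[1]
                    + ", RSS = " + result.getValue());
        }
    }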
>
> Good luck,  Josef
>
>
> Quoting Manish Java <[email protected]>:
>
>> I am trying to write a Java program for generating a forecast using
>> exponential smoothing as described here:
>> https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc431.htm. As
>> described in the linked document, exponential smoothing uses a dampening
>> factor "alpha". The document goes on to say that an optimal value for
>> "alpha" can be found using the "Marquardt procedure", which I take as
>> referring to the Levenberg-Marquardt algorithm. From the linked document,
>> it seems that the problem of finding the optimal "alpha" is treated as a
>> least-squares problem and fed into the optimizer, with an initial guess
>> for "alpha".
>>
>> After an extensive web search I could not find any ready example of using
>> the Levenberg-Marquardt algorithm to find "alpha" for this kind of problem
>> in any programming language. So I dug into the Javadocs and test cases for
>> the class *LevenbergMarquardtOptimizer* to see if I could come up with a
>> solution of my own. My program is given an array of values, say *[9, 8, 9,
>> 12, 10, 12, 11, 7, 13, 9, 11, 10]*, and an initial guess for "alpha", say
>> *0.2*. I have been able to determine that this information needs to be
>> converted into a *LeastSquaresProblem*, for which I have done the
>> following so far:
>>
>>
>>    1. Set the input array as the *target*;
>>    2. Set the starting point *start* to the initial value of alpha
>>    (*{ 0.2 }*);
>>    3. Set *weight* to *[1, 1 - alpha, (1 - alpha)^2, ...]*; and
>>    4. Set the optimization function of the model to return the smoothed
>>    value for each of the input values.
>>
>> I am now unsure how the Jacobian should be calculated. I would like to
>> know if I have approached the problem correctly so far, and how to
>> calculate the Jacobian. I have not been able to find any material, on the
>> web or in print, that describes the procedure for finding the Jacobian for
>> a problem like this.
>>
>> Any help or pointers will be greatly appreciated.
>>
