Hi all
I haven't used Scilab for optimization tasks in years and I
decided to dig up my old code; I haven't been following such topics on
the mailing list, so this question has probably already been addressed
(?!?!):
why does numderivative lead to different results than derivative (since
derivative will be removed in the next release)? ... the algorithms and
the step computation are probably different (both use a finite
difference method, but beyond that I don't know)
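As a first check, one can compare the two routines directly at a single point (a minimal sketch, assuming a Scilab version where derivative is still available alongside numderivative; the point x0 is arbitrary):

```scilab
// Sketch: compare the gradients returned by the deprecated derivative
// and the newer numderivative at one point of the Rosenbrock function.
function f = rosenbrock(x)
    f = (1 - x(1))^2 + 100*(x(2) - x(1)^2)^2;
endfunction

x0 = [100; 100];
g_old = derivative(rosenbrock, x0, order = 4);
g_new = numderivative(rosenbrock, x0, order = 4);
disp(g_old - g_new)   // nonzero entries suggest differing default steps
```

If the discrepancy disappears when the same explicit step h is passed to both calls, the default step selection (rather than the formulas themselves) is the likely culprit.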
To start with, I've made a (very) basic example based on the famous
Rosenbrock function to perform tests.
Thanks for any support
Paul
###############################################################################
mode(0)
clear
// objective function
function f = rosenbrock(x)
    f = (1 - x(1))^2 + 100*(x(2) - x(1)^2)^2;
endfunction
// cost function: returns the objective and its finite-difference gradient
function [f, g, ind] = cost(x, ind)
    f = rosenbrock(x);
    // g = derivative(rosenbrock, x.', order = 4);   // deprecated routine
    g = numderivative(rosenbrock, x.', order = 4);
endfunction
//initial_parameters = [10 10]
initial_parameters = [100 100]
//lower_bounds = [90 90];
lower_bounds = [0 0];
upper_bounds = [1000 1000];
//[fopt , xopt] = optim(cost,initial_parameters)
[fopt, xopt] = optim(cost, 'b', lower_bounds, upper_bounds, initial_parameters, 'ar', 100, 100)
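For reference, the Rosenbrock gradient is available in closed form, so one way to take the finite-difference question out of the loop entirely is to supply it analytically (a sketch, not from the original script; cost_exact is a name I made up):

```scilab
// Sketch: same optim call, but with the exact analytical gradient,
// removing any dependence on derivative/numderivative defaults.
// df/dx1 = -2*(1 - x1) - 400*x1*(x2 - x1^2),  df/dx2 = 200*(x2 - x1^2)
function f = rosenbrock(x)
    f = (1 - x(1))^2 + 100*(x(2) - x(1)^2)^2;
endfunction
function [f, g, ind] = cost_exact(x, ind)
    f = rosenbrock(x);
    g = [-2*(1 - x(1)) - 400*x(1)*(x(2) - x(1)^2), 200*(x(2) - x(1)^2)];
endfunction
[fopt, xopt] = optim(cost_exact, 'b', [0 0], [1000 1000], [100 100], 'ar', 100, 100)
```

If this run and the numderivative-based run converge to visibly different points, the difference is more likely in the gradient accuracy than in optim itself.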
_______________________________________________
users mailing list
users@lists.scilab.org
http://lists.scilab.org/mailman/listinfo/users