Re: [Scilab-users] optim vs Neldermead: improvement

2017-01-23 Thread Paul Bignier


Hi Paul,

As I'm sure you've read them, the optim, neldermead & numderivative help pages
provide you with all the flags you can customize.
After that, it's all in the functions that you provide to compute the 
cost & its derivative.
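
For illustration, a minimal sketch (toy objective and placeholder values, not a recommendation) of where the optim-side flags plug in; the Nelder-Mead tolerances go through neldermead_configure in the same way:

// Toy objective, just to show where each flag goes.
function y=toyobj(x)
    y = (x - 3)^2;
endfunction
// The step h is a parameter of numderivative, called inside the cost function;
// epsg / epsf are passed in the optim call itself.
function [f, g, ind]=toycost(x, ind)
    f = toyobj(x);
    g = numderivative(toyobj, x.', 1e-3, order = 4);   // step h = 1e-3
endfunction
epsg = 1e-5;  epsf = 0;
[fopt, xopt] = optim(toycost, 'b', 0, 10, 1, 'qn', 'ar', 100, 100, epsg, epsf)
// Nelder-Mead equivalents (see the scripts further down this page):
//   nm = neldermead_configure(nm, "-tolfunrelative", ...);
//   nm = neldermead_configure(nm, "-tolxrelative",   ...);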


Best regards,
Paul


On 01/23/2017 09:41 AM, paul.carr...@free.fr wrote:


Hi All

I’m using ‘optim’ and ‘NelderMead’ in conjunction with my finite 
element solver.


A “good” optimization is a balance between accuracy and CPU time … in
other words, I do not necessarily need an accuracy of 1e-11 at the cost of
many iterations when 1e-3 is enough with far fewer loops.


In my understanding:

 1. With ‘optim’, I can modify

  * The step value h in numderivative (set to 1e-3 after previous
tests on analytical functions)
  * The values of epsf (default value) and epsg (tested at 1e-3 and 1e-5)

 2. With “Neldermead”, I can change:

  * Tolfunrelative (tested at the default value for the moment)
  * Tolxrelative (tested at the default value for the moment)


 Am I right, or are there other flags I can play with?

NB: so far, Nelder-Mead requires fewer iterations than ‘optim’ with a
single variable … I’m wondering how I can improve my use of optim, which is
supposed to converge faster?


 Thanks

Paul





--
Paul BIGNIER
Development engineer
---
Scilab Enterprises
143bis rue Yves Le Coz - 78000 Versailles, France
Phone: +33.1.80.77.04.68
http://www.scilab-enterprises.com



[Scilab-users] optim vs Neldermead: improvement

2017-01-23 Thread paul . carrico
Hi All 

I'm using 'optim' and 'NelderMead' in conjunction with my finite element
solver. 

A "good" optimization is a balance between accuracy and cpu time … in
other word I do not necessary need to be accurate at 1e-11 but requiring
a lot of iterations where 1e-3 is enough with a lower amount of loops. 

In my understanding:

* With 'optim', I can modify

* The step value h in numderivative (set to 1e-3 after previous tests
on analytical functions)
* The values of epsf (default value) and epsg (tested at 1e-3 and
1e-5)

* With "Neldermead", I can change:

* Tolfunrelative (tested at the default value for the moment)
* Tolxrelative (tested at the default value for the moment)

 Am I right, or are there other flags I can play with?

 NB: so far, Nelder-Mead requires fewer iterations than 'optim' with a
single variable … I'm wondering how I can improve my use of optim, which is
supposed to converge faster?

 Thanks 

Paul


Re: [Scilab-users] Optim & NelderMead use [closed]

2017-01-16 Thread Pierre Vuillemin
Hi Paul,

You should be careful when using the square root function, as it is not
differentiable at 0 (cf. the attached file, which illustrates this point
in your case). This will lead to issues and prevent optim from "really"
(in a sound way) converging towards the solution.

To address the issue, you can

- reformulate your objective function so that it is smooth near the
solution (you will still have an issue at 0, where g is not
differentiable, though) - see the sketch after this list,

- use order-0 methods (still, some of them might fail on non-smooth
optimisation problems, as the minimum might be very "narrow"),

- dig into non-smooth optimisation (see e.g.
http://napsu.karmitsa.fi/nso/).
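
A minimal sketch of the first option (the smooth reformulation also suggested by Stéphane Mottelet elsewhere in this thread), reusing the lineaire/racine functions and the parameter values a1=30, b1=2.5, a2=1, b2=2 from Paul's script:

// Minimise the squared difference instead of its absolute value:
// (val_lin - val_rac)^2 is differentiable wherever racine is, unlike
// sqrt((val_lin - val_rac)^2) = abs(val_lin - val_rac).
function F=lineaire(X, A2, B2)
    F = A2*X + B2;
endfunction
function G=racine(X, A1, B1)
    G = sqrt(A1*X) + B1;
endfunction
function F=target_smooth(X)
    F = (lineaire(X, 1, 2) - racine(X, 30, 2.5))^2;
endfunction
disp(target_smooth(10))   // quick check that it evaluates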

Regards,

Pierre 

On 16.01.2017 10:30, Carrico, Paul wrote:

> Hi all
> 
> After performing tests (and modifying the target function as should have
> been done first), I can better understand how to use the 'optim' and
> 'Neldermead' procedures.
> 
> For my needs the main flags are:
> 
> -  Step h in numderivative --> useful reading: "EE 221 Numerical
> Computing", Scott Hudson
> 
> -  The threshold epsg in optim (%eps is the default value - such high
> accuracy is not necessary for my application - furthermore, using a value
> such as 1e-5 leads to err=1, which is correct for checking)
> 
> -  Ditto for Nelder-Mead and '-tolfunrelative' & '-tolxrelative'
> 
> Now it works fine :-)
> 
> Thanks all for the support
> 
> Paul
> 
> [Paul's full script quoted here; see his original message of 2017-01-16 below for the complete listing.]
Re: [Scilab-users] Optim & NelderMead use [closed]

2017-01-16 Thread Stéphane Mottelet

Hi Paul,

your cost function

f = sqrt((val_lin - val_rac)^2);

hasn't changed, since sqrt(x^2) = abs(x). What I meant before is replacing

f = abs(val_lin - val_rac);

by

f = (val_lin - val_rac)^2;

in order to make it differentiable. When using a non-differentiable cost
function together with numderivative, it seems logical that tweaking the
step size could artificially help convergence.
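
A small numerical check of this point (the functions and the parameter values a1=10, b1=6, a2=1, b2=2 come from the 'optim vs Nelder-Mead' script further down this page; the probe points near the crossing are illustrative):

// The abs-form cost has a kink where the two curves cross (near x ~ 17 for
// these parameters), so its numerical gradient flips sign across the crossing;
// the gradient of the squared form passes continuously through 0.
function y=f_abs(x)
    y = abs((1*x + 2) - (sqrt(10*x) + 6));
endfunction
function y=f_sq(x)
    y = ((1*x + 2) - (sqrt(10*x) + 6))^2;
endfunction
disp([numderivative(f_abs, 17.0), numderivative(f_abs, 17.2)])  // opposite signs
disp([numderivative(f_sq,  17.0), numderivative(f_sq,  17.2)])  // both small, smooth through 0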


S.


On 16/01/2017 at 10:30, Carrico, Paul wrote:


Hi all

After performing tests (and modifying the target function as should have
been done first), I can better understand how to use the 'optim' and
'Neldermead' procedures.

For my needs the main flags are:

- Step h in numderivative --> useful reading: "EE 221 Numerical
Computing", Scott Hudson

- The threshold epsg in optim (%eps is the default value - such high
accuracy is not necessary for my application - furthermore, using a value
such as 1e-5 leads to err=1, which is correct for checking)

- Ditto for Nelder-Mead and '-tolfunrelative' & '-tolxrelative'
Now it works fine :-)
Thanks all for the support
Paul

[Paul's full script quoted here; see his original message of 2017-01-16 below for the complete listing.]





[Scilab-users] Optim & NelderMead use [closed]

2017-01-16 Thread Carrico, Paul
Hi all

After performing tests (and modifying the target function as should have
been done first), I can better understand how to use the 'optim' and
'Neldermead' procedures.

For my needs the main flags are (see the sketch after this list):

-  Step h in numderivative --> useful reading: "EE 221 Numerical
Computing", Scott Hudson

-  The threshold epsg in optim (%eps is the default value - such high
accuracy is not necessary for my application - furthermore, using a value
such as 1e-5 leads to err=1, which is correct for checking)

-  Ditto for Nelder-Mead and '-tolfunrelative' & '-tolxrelative'
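
Since the script below only shows the optim side, here is a minimal sketch (toy objective, placeholder tolerances) of the corresponding Nelder-Mead settings, mirroring the configuration used in the 2017-01-15 script further down this page:

// Toy objective in the [f, index] = fun(x, index) form expected by neldermead.
function [y, index]=toyobj(x, index)
    y = (x - 3)^2;
endfunction
nm = neldermead_new();
nm = neldermead_configure(nm, "-numberofvariables", 1);
nm = neldermead_configure(nm, "-function", toyobj);
nm = neldermead_configure(nm, "-x0", 1);
nm = neldermead_configure(nm, "-maxiter", 1000);
nm = neldermead_configure(nm, "-maxfunevals", 1000);
nm = neldermead_configure(nm, "-tolfunrelative", 1e-5);   // placeholder value
nm = neldermead_configure(nm, "-tolxrelative", 1e-5);     // placeholder value
nm = neldermead_search(nm);
xopt = neldermead_get(nm, "-xopt")
fopt = neldermead_get(nm, "-fopt")
nm = neldermead_destroy(nm);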



Now it works fine :-)



Thanks all for the support



Paul

#

mode(0)

clear



global count_use;

count_use = 0;



// 

function f=lineaire(x, a2, b2)

f = a2*x+b2;

endfunction



// 

function g=racine(x, a1, b1)

g = sqrt(a1*x) + b1;

endfunction



// 

function f=target(x, a1, b1, a2, b2)

val_lin = lineaire(x,a2,b2);

val_rac = racine(x,a1,b1);

f = sqrt((val_lin - val_rac)^2);



global count_use;

count_use = count_use +1;

endfunction



// Cost function:

function [f, g, ind]=cost(x, ind, a1, b1, a2, b2)

f = target(x);

//g = numderivative(target, x.',order = 4);

g = numderivative(target, x.',1e-3, order = 4);  // 1E-3 => see EE 221 
"Numerical Computing" Scott Hudson

// Study of the influence of h on the number of target function calculation & 
the fopt accuracy:

// (epsg = %eps here)

// h = 1.e-1 => number = 220 & fopt = 2.242026e-05

// h = 1.e-2 => number = 195 & fopt = 2.267564e-07

// h = 1.e-3 => number = 170 & fopt = 2.189495e-09 ++

// h = 1.e-4 => number = 190 & fopt = 1.941203e-11

// h = 1.e-5 => number = 215 & fopt = 2.131628e-13

// h = 1.e-6 => number = 235 & fopt = 0.

// h = 1.e-7 => number = 255 & fopt = 7.105427e-15

// h = 1.e-8 => number = 275 & fopt = 0.



endfunction



// *

// optimisation with optim

initial_parameters = [10]

lower_bounds = [0];

upper_bounds = [1000];

nocf = 1000;  // max number of calls to f

niter = 1000; // max number of iterations

a1 = 30;

b1 = 2.5;

a2 = 1;

b2 = 2;



epsg = 1e-5; // gradient norm threshold (%eps by default) --> leads to err = 1 !!!

//epsg = %eps; // leads to err = 13

epsf = 0;    // threshold controlling the decrease of f (epsf = 0 by default)



costf = list (cost, a1, b1, a2, b2);

[fopt, xopt, gopt, work, iters, evals, err] = 
optim(costf,'b',lower_bounds,upper_bounds,initial_parameters,'qn','ar',nocf,niter,epsg,epsf);

printf("Optimized value : %g\n",xopt);

printf("min cost function value (should be as close as possible to 0): %e\n", fopt);

printf('Number of calculations = %d !!!\n',count_use);





// Curves definition

x = linspace(0,50,1000)';

plot_raci = racine(x,a1,b1);

plot_lin = lineaire(x,a2,b2);



scf(1);

drawlater();

xgrid(3);

f = gcf();

//f

f.figure_size = [1000, 1000];

f.background = color(255,255,255);

a = gca();

//a

a.font_size = 2;

a.x_label.text = "X axis" ;

a.x_location="bottom";

a.x_label.font_angle=0;

a.x_label.font_size = 4;

a.y_label.text = "Y axis";

a.y_location="left";

a.y_label.font_angle=-90;

a.y_label.font_size = 4;

a.title.text = "Title";

a.title.font_size = 5;

a.line_style = 1;



// Curves plot

plot(x,plot_lin);

e1 = gce();

p1 = e1.children;

p1.thickness = 1;

p1.line_style = 1;

p1.foreground = 3;



plot(x,plot_raci);

e2 = gce();

p2 = e2.children;

p2.thickness = 1;

p2.line_style = 1;

p2.foreground = 2;

drawnow();




Re: [Scilab-users] 'optim' vs 'Nelder-Mead' ... or difficulties to use 'optim'

2017-01-15 Thread paul . carrico
Hi Stephane, 

You're right, I forgot my basics :-)

Nevertheless, when I re-write the target function as follows, it fails
after a few iterations and I do not understand why (??)

Paul 

### 

function f=target(x)
val_lin = lineaire(x,1,2);
val_rac = racine(x,10,6);
//f = abs(val_lin - val_rac); 
f = sqrt((val_lin - val_rac)^2); 
printf("f = %g\n",f);
endfunction

## 
Console message: 
*** qnbd (with bound cstr) 
dimension= 1, epsq= 0.2220446049250313E-15, verbosity level: imp= 3
max number of iterations allowed: iter= 1000
max number of calls to costf allowed: nap= 1000

f = 23.6393
f = 25.1965
f = 22.0911
qnbd : iter= 1 f= 0.2363932D+02
qnbd : nbre fact 1 defact 0 total var factorisees 1
rlbd tp= 0.6440E+02 tmax= 0.1000E+11 dh0/dt=-0.6027E+00
f = 21.6468
f = 23.1924
f = 20.111
es t= 0.3318E+01 h=-0.1992E+01 dh/dt=-0.5981E+00 dfh/dt=-0.6005E+00 dt
0.3E+01
f = 4
f = 6.47214
f = 6
al_parameters,'qn','ar',nocf,niter,imp=3)
!--error 98 
La variable retournée par la fonction Scilab passée en argument n'est
pas valide. (The variable returned by the Scilab function passed as an
argument is not valid.)
at line 36 of exec file called by : 
RFACE/Rosembrock/fonctions_test.sce', -1

t= 0.6440E+02 h=-0.1992E+01 dh/dt=-0.5981E+00 dfh/dt= 0.E+00 dt
0.6E+02
qnbd : indqn= 0 
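
A hedged side note (a guess, not a verified diagnosis): near the lower bound x = 0, a finite-difference stencil can evaluate racine() at a slightly negative abscissa, where sqrt() returns a complex value, which optim rejects with error 98. A one-line check:

// racine() evaluated just below 0 (as a centred numderivative stencil could do
// near the bound) returns a complex number, which optim cannot accept.
function g=racine(x, a1, b1)
    g = sqrt(a1*x) + b1;
endfunction
disp(racine(-1e-3, 10, 6))   // complex result (an imaginary part appears)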

On 2017-01-15 12:59, Stéphane Mottelet wrote:

> Hello 
> 
> Your target function is not differentiable (because of the absolute
> value). That explains why optim has some difficulties. Using a square
> instead should give the advantage to optim over Nelder-Mead.
> 
> S.
> 
>> Le 15 janv. 2017 à 11:39, paul.carr...@free.fr a écrit :
>> 
>> Hi all
>> 
>> Well, from a "macro" point of view, gradient-based methods (order 1) should
>> be more efficient than a Nelder-Mead one (order 0), especially for a 1D
>> problem, shouldn't they?
>> 
>> In the case below, when I use 'optim' this is not the case in comparison
>> with 'NelderMead'... I don't know what I'm doing wrong, but I decided to drop
>> 'optim' in favour of Nelder-Mead - nevertheless the latter method requires
>> many more iterations :-(
>> 
>> I'll be interested in some feedback on this topic, either on the mailing
>> list (or by private message, in order not to pollute it).
>> 
>> Paul
>> [full 'optim' and Nelder-Mead test scripts quoted here; see Paul's original message of 2017-01-15 below for the complete listing]

Re: [Scilab-users] 'optim' vs 'Nelder-Mead' ... or difficulties to use 'optim'

2017-01-15 Thread Stéphane Mottelet
Hello 

Your target function is not differentiable (because of the absolute value).
That explains why optim has some difficulties. Using a square instead should
give the advantage to optim over Nelder-Mead.

S.



> On 15 Jan 2017 at 11:39, paul.carr...@free.fr wrote:
> 
> Hi all
> 
> Well, from a "macro" point of view, gradient-based methods (order 1) should
> be more efficient than a Nelder-Mead one (order 0), especially for a 1D
> problem, shouldn't they?
> 
> In the case below, when I use 'optim' this is not the case in comparison
> with 'NelderMead'... I don't know what I'm doing wrong, but I decided to drop
> 'optim' in favour of Nelder-Mead - nevertheless the latter method requires
> many more iterations :-(
> 
> I'll be interested in some feedback on this topic, either on the mailing
> list (or by private message, in order not to pollute it).
> 
> Paul
> [full 'optim' and Nelder-Mead test scripts quoted here; see Paul's original message of 2017-01-15 below for the complete listing]

[Scilab-users] 'optim' vs 'Nelder-Mead' ... or difficulties to use 'optim'

2017-01-15 Thread paul . carrico
Hi all

Well, from a "macro" point of view, gradient-based methods (order 1)
should be more efficient than a Nelder-Mead one (order 0),
especially for a 1D problem, shouldn't they?

In the case below, when I use 'optim' this is not the case in
comparison with 'NelderMead'... I don't know what I'm doing wrong, but I
decided to drop 'optim' in favour of Nelder-Mead - nevertheless the
latter method requires many more iterations :-(

I'll be interested in some feedback on this topic, either on the mailing
list (or by private message, in order not to pollute it).

Paul

###
'optim'
###

mode(0)

function f=lineaire(x, a2, b2)
f = a2*x+b2;
endfunction

function g=racine(x, a1, b1)
g = sqrt(a1*x) + b1;
endfunction

function f=target(x)
val_lin = lineaire(x,1,2);
val_rac = racine(x,10,6);
f = abs(val_lin - val_rac);
endfunction

// Cost function : 
function [f, g, ind]=cost(x, ind)
f = target(x);   
//g = numderivative(target, x.'); 
//g = numderivative(target, x.',0.1,order = 2); 
g = numderivative(target, x.',order = 2);  // better, but why? the automatic
step h works better?
//g = numderivative(target, x.',order = 4); 
//g = numderivative(target, x.',0.1, order = 4);
endfunction

// optimisation with optim
initial_parameters = [50]
lower_bounds = [0];
upper_bounds = [1000];
nocf = 1000;  // max number of calls to f
niter = 1000; // max number of iterations
[fopt, xopt, gopt, work, iters, evals, err] =
optim(cost,'b',lower_bounds,upper_bounds,initial_parameters,'qn','ar',nocf,niter,imp=3);
xopt
fopt
err

// plot the curves
x = linspace(0,50,1000)';
plot_raci = racine(x,10,6);
plot_lin = lineaire(x,1,2);

scf(1);
drawlater();
xgrid(3);
f = gcf(); 
//f  
f.figure_size = [1000, 1000];
f.background = color(255,255,255);   
a = gca();   
//a  
a.font_size = 2; 
a.x_label.text = "X axis" ;  
a.x_location="bottom";   
a.x_label.font_angle=0;  
a.x_label.font_size = 4; 
a.y_label.text = "Y axis";
a.y_location="left";
a.y_label.font_angle=-90;
a.y_label.font_size = 4;
a.title.text = "Title"; 
a.title.font_size = 5;
a.line_style = 1;

// draw the curves
plot(x,plot_lin);
e1 = gce(); 
p1 = e1.children;   
p1.thickness = 1;
p1.line_style = 1;
p1.foreground = 3;

plot(x,plot_raci);
e2 = gce(); 
p2 = e2.children;   
p2.thickness = 1;
p2.line_style = 1;
p2.foreground = 2;
drawnow();

###
Nelder Mead
###

mode(0)

function f=lineaire(x, a2, b2)
f = a2*x+b2;
endfunction

function g=racine(x, a1, b1)
g = sqrt(a1*x) + b1;
endfunction

function [f, index]=target(x, index)
val_lin = lineaire(x,1,2);
val_rac = racine(x,10,6);
f = abs(val_lin - val_rac);
endfunction

// optimisation with neldermead
initial_parameters = [50]
lower_bounds = [0];
upper_bounds = [1000];

nm = neldermead_new ();
nm = neldermead_configure(nm,"-numberofvariables",1);
nm = neldermead_configure(nm,"-function",target);
nm = neldermead_configure(nm,"-x0",initial_parameters);
nm = neldermead_configure(nm,"-maxiter",1000); 
nm = neldermead_configure(nm,"-maxfunevals",1000); 
nm = neldermead_configure(nm,"-tolfunrelative",10*%eps);
nm = neldermead_configure(nm,"-tolxrelative",10*%eps);
nm = neldermead_configure(nm,"-method","box");
nm = neldermead_configure(nm,"-boundsmin",lower_bounds);
nm = neldermead_configure(nm,"-boundsmax", upper_bounds);
nm = neldermead_search(nm);
nm = neldermead_restart(nm);
xopt = neldermead_get(nm,"-xopt")
fopt = neldermead_get(nm,"-fopt")
nm = neldermead_destroy(nm);

// plot the curves
x = linspace(0,50,1000)';
a1 = 10; b1 = 6;  // b1 > b2 here
a2 = 1; b2 = 2;

plot_raci = racine(x,a1,b1);
plot_lin = lineaire(x,a2,b2);

scf(1);
drawlater();
xgrid(3);
f = gcf(); 
//f  
f.figure_size = [1000, 1000];
f.background = color(255,255,255);   
a = gca();   
//a  
a.font_size = 2; 
a.x_label.text = "X axis" ;  
a.x_location="bottom";   
a.x_label.font_angle=0;  
a.x_label.font_size = 4; 
a.y_label.text = "Y axis";
a.y_location="left";
a.y_label.font_angle=-90;
a.y_label.font_size = 4;
a.title.text = "Title"; 
a.title.font_size = 5;
a.line_style = 1;

// draw the curves
plot(x,plot_lin);
e1 = gce(); 
p1 = e1.children;   
p1.thickness = 1;
p1.line_style = 1;
p1.foreground = 3;

plot(x,plot_raci);
e2 = gce(); 
p2 = 

Re: [Scilab-users] Optim use and 'err' flag

2017-01-14 Thread paul . carrico
Hi Paul 

I found why I had err=3: I was "playing" with the step value of
numderivative and I sent the last trial; nevertheless, if you remove it
or use a lower value such as 0.01 instead of 0.1, the err value becomes 3.

QED :-)

Paul 

On 2017-01-13 22:15, paul.carr...@free.fr wrote:

> Hi Paul 
> 
> I've been using the latest Scilab stable release on my working
> station; nevertheless you're right I get 12 when running the code on
> my laptop (same release but under Linux). 
> 
> Paul 
> 
> On 2017-01-13 17:11, Paul Bignier wrote:
> 
> Hello Paul,
> 
> Running your script gives me "err=12", which is not documented but I
> don't get how you got 3?
> 
> I see though that you reached 'evals' & 'iters', perhaps optim
> wanted
> to continue but was capped by those.
> 
> Feel free to use the format function
> (https://help.scilab.org/docs/6.0.0/en_US/format.html) to get more on-screen
> precision for your values.
> 
> I will surely commit something soon in order to fix the "12" flag.
> 
> Have a good evening,
> 
> Paul
> 
> On 01/13/2017 02:39 PM, paul.carr...@free.fr wrote:
> 
> Hi all
> 
> I'm trying to improve how to use Optim in Scilab, so I'm still
> using the basic Rosembrock function; in the example hereafter, one
> can see that Optim goes back the Error flag to 3 and I do not
> understand why?
> 
> The goal is to be able to check all the values of this flag in
> order
> to validate the result ; while the values are the optimized ones,
> the calculation indicates that the optimization fails …
> 
> I'm a bit loss … so any feedback will be appreciated
> 
> Thanks
> 
> Paul

###


>> In my understanding:
>> -err = 9 : everything went well … ok
>> 
>> -err = 3 : Optimization stops because of too small variations
>> for x
>> -err=1 : Norm of projected gradient lower than …
>> -err=2 : At last iteration f decreases by less than …
>> -err=4 : Optim stops: maximum number of calls to f is reached
>> ==> increase nocf
>> -err=5 : Optim stops: maximum number of iterations is reached.
>> ==> increase niter
>> -err=6 : Optim stops: too small variations in gradient
>> direction.
>> -err=7 : Stop during calculation of descent direction.
>> -err=8 : Stop during calculation of estimated hessian.
>> -err=10 : End of optimization (linear search fails).
>> 
>> // Rosembrock function
>> function f=rosembrock(x)
>> f = ( 1 - x(1))^2 + 100*( x(2)-x(1)^2 )^2;
>> endfunction
>> 
>> // Cost function
>> function [f, g, ind]=cost(x, ind)
>> f = rosembrock(x);
>> //g = derivative(rosembrock, x.',order = 4);
>> //g = numderivative(rosembrock, x.',order = 4);
>> g = numderivative(rosembrock, x.',0.1, order = 4);
>> endfunction
>> 
>> initial_parameters = [10 100]
>> lower_bounds = [0 0];
>> upper_bounds = [1000 1000];
>> nocf = 10;  // number of call of f
>> niter = 10;// number of iterations
>> [fopt, xopt, gopt, work, iters, evals, err] =

optim(cost,'b',lower_bounds,upper_bounds,initial_parameters,'qn','ar',nocf,niter);


>> xopt
>> fopt
>> iters
>> evals
>> err
>> ___
>> users mailing list
>> users@lists.scilab.org
>> http://lists.scilab.org/mailman/listinfo/users
> 
> --
> Paul BIGNIER
> Development engineer
> ---
> Scilab Enterprises
> 143bis rue Yves Le Coz - 78000 Versailles, France
> Phone: +33.1.80.77.04.68
> http://www.scilab-enterprises.com
> 
> Links:
> --
> [1] https://help.scilab.org/docs/6.0.0/en_US/format.html
> ___
> users mailing list
> users@lists.scilab.org
> http://lists.scilab.org/mailman/listinfo/users



Re: [Scilab-users] Optim use and 'err' flag

2017-01-13 Thread paul . carrico
Paul, 

When using imp=3, I can see that 'iters' returns in fact the value of
niter (ditto for evals and nocf): am I right ? 

[fopt, xopt, gopt, work, iters, evals, err] =
optim(cost,'b',lower_bounds,upper_bounds,initial_parameters,'qn','ar',nocf,niter,imp=3);

(thanks for the support) 

Paul 

On 2017-01-13 18:06, Paul Bignier wrote:

> To complete my previous answer, the error arose from the built-in line
> search algorithm (err>10 means different flavors of it failing). In
> your case, it is pointing to a "too small deltaT", so yes it looks
> like your algorithm is converging but it has trouble finishing. 
> 
> You can get more info by using "imp=3" in your call to optim (as
> documented). If you are using a nightly build then use "iprint=3" (the
> online doc hasn't been updated yet). 
> 
> Best regards, 
> 
> Paul 
> On 01/13/2017 05:11 PM, Paul Bignier wrote:
> 
> Hello Paul,
> 
> Running your script gives me "err=12", which is not documented but I
> don't get how you got 3?
> 
> I see though that you reached 'evals' & 'iters', perhaps optim
> wanted to continue but was capped by those.
> 
> Feel free to use the format function
> (https://help.scilab.org/docs/6.0.0/en_US/format.html) to get more on-screen
> precision for your values.
> 
> I will surely commit something soon in order to fix the "12" flag.
> 
> Have a good evening,
> 
> Paul
> 
> On 01/13/2017 02:39 PM, paul.carr...@free.fr wrote:
> 
> Hi all
> 
> I'm trying to improve how to use Optim in Scilab, so I'm still
> using the basic Rosembrock function; in the example hereafter, one
> can see that Optim goes back the Error flag to 3 and I do not
> understand why?
> 
> The goal is to be able to check all the values of this flag in
> order to validate the result ; while the values are the optimized
> ones, the calculation indicates that the optimization fails …
> 
> I'm a bit loss … so any feedback will be appreciated
> 
> Thanks
> 
> Paul

###


>> In my understanding:
>> -err = 9 : everything went well … ok
>> 
>> -err = 3 : Optimization stops because of too small variations
>> for x
>> -err=1 : Norm of projected gradient lower than …
>> -err=2 : At last iteration f decreases by less than …
>> -err=4 : Optim stops: maximum number of calls to f is reached
>> ==> increase nocf
>> -err=5 : Optim stops: maximum number of iterations is reached.
>> ==> increase niter
>> -err=6 : Optim stops: too small variations in gradient
>> direction.
>> -err=7 : Stop during calculation of descent direction.
>> -err=8 : Stop during calculation of estimated hessian.
>> -err=10 : End of optimization (linear search fails).
>> 
>> // Rosembrock function
>> function f=rosembrock(x)
>> f = ( 1 - x(1))^2 + 100*( x(2)-x(1)^2 )^2;
>> endfunction
>> 
>> // Cost function
>> function [f, g, ind]=cost(x, ind)
>> f = rosembrock(x);
>> //g = derivative(rosembrock, x.',order = 4);
>> //g = numderivative(rosembrock, x.',order = 4);
>> g = numderivative(rosembrock, x.',0.1, order = 4);
>> endfunction
>> 
>> initial_parameters = [10 100]
>> lower_bounds = [0 0];
>> upper_bounds = [1000 1000];
>> nocf = 10;  // number of call of f
>> niter = 10;// number of iterations
>> [fopt, xopt, gopt, work, iters, evals, err] =

optim(cost,'b',lower_bounds,upper_bounds,initial_parameters,'qn','ar',nocf,niter);


>> xopt
>> fopt
>> iters
>> evals
>> err
>> ___
>> users mailing list
>> users@lists.scilab.org
>> http://lists.scilab.org/mailman/listinfo/users
> 
> --
> Paul BIGNIER
> Development engineer
> ---
> Scilab Enterprises
> 143bis rue Yves Le Coz - 78000 Versailles, France
> Phone: +33.1.80.77.04.68
> http://www.scilab-enterprises.com
> 
> ___
> users mailing list
> users@lists.scilab.org
> http://lists.scilab.org/mailman/listinfo/users

-- 
Paul BIGNIER
Development engineer
---
Scilab Enterprises
143bis rue Yves Le Coz - 78000 Versailles, France
Phone: +33.1.80.77.04.68
http://www.scilab-enterprises.com



Re: [Scilab-users] Optim use and 'err' flag

2017-01-13 Thread paul . carrico
Hi Paul 

I've been using the latest Scilab stable release on my workstation;
nevertheless, you're right: I get 12 when running the code on my laptop
(same release but under Linux).

Paul 

On 2017-01-13 17:11, Paul Bignier wrote:

> Hello Paul, 
> 
> Running your script gives me "err=12", which is not documented but I
> don't get how you got 3? 
> 
> I see though that you reached 'evals' & 'iters', perhaps optim wanted
> to continue but was capped by those. 
> 
> Feel free to use the format function
> (https://help.scilab.org/docs/6.0.0/en_US/format.html) to get more on-screen
> precision for your values.
> 
> I will surely commit something soon in order to fix the "12" flag. 
> 
> Have a good evening, 
> 
> Paul
> 
> On 01/13/2017 02:39 PM, paul.carr...@free.fr wrote:
> 
>> Hi all
>> 
>> I'm trying to improve how to use Optim in Scilab, so I'm still
>> using the basic Rosembrock function; in the example hereafter, one
>> can see that Optim goes back the Error flag to 3 and I do not
>> understand why?
>> 
>> The goal is to be able to check all the values of this flag in order
>> to validate the result ; while the values are the optimized ones,
>> the calculation indicates that the optimization fails …
>> 
>> I'm a bit loss … so any feedback will be appreciated
>> 
>> Thanks
>> 
>> Paul
> ###
>  
> 
>> In my understanding:
>> -err = 9 : everything went well … ok
>> 
>> -err = 3 : Optimization stops because of too small variations
>> for x
>> -err=1 : Norm of projected gradient lower than …
>> -err=2 : At last iteration f decreases by less than …
>> -err=4 : Optim stops: maximum number of calls to f is reached
>> ==> increase nocf
>> -err=5 : Optim stops: maximum number of iterations is reached.
>> ==> increase niter
>> -err=6 : Optim stops: too small variations in gradient
>> direction.
>> -err=7 : Stop during calculation of descent direction.
>> -err=8 : Stop during calculation of estimated hessian.
>> -err=10 : End of optimization (linear search fails).
>> 
>> // Rosembrock function
>> function f=rosembrock(x)
>> f = ( 1 - x(1))^2 + 100*( x(2)-x(1)^2 )^2;
>> endfunction
>> 
>> // Cost function
>> function [f, g, ind]=cost(x, ind)
>> f = rosembrock(x);
>> //g = derivative(rosembrock, x.',order = 4);
>> //g = numderivative(rosembrock, x.',order = 4);
>> g = numderivative(rosembrock, x.',0.1, order = 4);
>> endfunction
>> 
>> initial_parameters = [10 100]
>> lower_bounds = [0 0];
>> upper_bounds = [1000 1000];
>> nocf = 10;  // number of call of f
>> niter = 10;// number of iterations
>> [fopt, xopt, gopt, work, iters, evals, err] =
> optim(cost,'b',lower_bounds,upper_bounds,initial_parameters,'qn','ar',nocf,niter);
>  
> 
>> xopt
>> fopt
>> iters
>> evals
>> err
>> ___
>> users mailing list
>> users@lists.scilab.org
>> http://lists.scilab.org/mailman/listinfo/users
> 
> -- 
> Paul BIGNIER
> Development engineer
> ---
> Scilab Enterprises
> 143bis rue Yves Le Coz - 78000 Versailles, France
> Phone: +33.1.80.77.04.68
> http://www.scilab-enterprises.com
> 
> Links:
> --
> [1] https://help.scilab.org/docs/6.0.0/en_US/format.html
> ___
> users mailing list
> users@lists.scilab.org
> http://lists.scilab.org/mailman/listinfo/users
 



Re: [Scilab-users] Optim use and 'err' flag

2017-01-13 Thread Paul Bignier


To complete my previous answer, the error arose from the built-in line 
search algorithm (err>10 means different flavors of it failing). In your 
case, it is pointing to a "too small deltaT", so yes it looks like your 
algorithm is converging but it has trouble finishing.


You can get more info by using "imp=3" in your call to optim (as 
documented). If you are using a nightly build then use "iprint=3" (the 
online doc hasn't been updated yet).
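
For illustration, a self-contained sketch of that diagnostic: it reuses the Rosenbrock example quoted below, runs optim with the verbosity flag, then checks the returned err value (the meanings follow the list quoted below):

// Rosenbrock example with verbose output (imp=3) and a check of why optim stopped.
function f=rosembrock(x)
    f = (1 - x(1))^2 + 100*(x(2) - x(1)^2)^2;
endfunction
function [f, g, ind]=cost(x, ind)
    f = rosembrock(x);
    g = numderivative(rosembrock, x.', 0.1, order = 4);
endfunction
x0 = [10 100];  lb = [0 0];  ub = [1000 1000];
[fopt, xopt, gopt, work, iters, evals, err] = ..
    optim(cost, 'b', lb, ub, x0, 'qn', 'ar', 10, 10, imp=3);
if err > 10 then
    mprintf("stopped inside the line search (err = %d)\n", err);
elseif err == 4 | err == 5 then
    mprintf("hit the call/iteration caps (err = %d): raise nocf/niter\n", err);
end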


Best regards,

Paul


On 01/13/2017 05:11 PM, Paul Bignier wrote:



Hello Paul,

Running your script gives me "err=12", which is not documented but I 
don't get how you got 3?


I see though that you reached 'evals' & 'iters', perhaps optim wanted 
to continue but was capped by those.


Feel free to use the format function
(https://help.scilab.org/docs/6.0.0/en_US/format.html) to get
more on-screen precision for your values.


I will surely commit something soon in order to fix the "12" flag.

Have a good evening,

Paul


On 01/13/2017 02:39 PM, paul.carr...@free.fr wrote:

Hi all

I’m trying to improve how to use Optim in Scilab, so I’m still using 
the basic Rosembrock function; in the example hereafter, one can see 
that Optim goes back the Error flag to 3 and I do not understand why?


The goal is to be able to check all the values of this flag in order 
to validate the result ; while the values are the optimized ones, the 
calculation indicates that the optimization fails …


I’m a bit loss … so any feedback will be appreciated

Thanks

Paul

### 


In my understanding:
-err = 9 : everything went well … ok

-err = 3 : Optimization stops because of too small variations for x
-err=1 : Norm of projected gradient lower than …
-err=2 : At last iteration f decreases by less than …
-err=4 : Optim stops: maximum number of calls to f is reached ==> 
increase nocf
-err=5 : Optim stops: maximum number of iterations is reached. 
==> increase niter

-err=6 : Optim stops: too small variations in gradient direction.
-err=7 : Stop during calculation of descent direction.
-err=8 : Stop during calculation of estimated hessian.
-err=10 : End of optimization (linear search fails).



// Rosembrock function
function f=rosembrock(x)
f = ( 1 - x(1))^2 + 100*( x(2)-x(1)^2 )^2;
endfunction

// Cost function
function [f, g, ind]=cost(x, ind)
f = rosembrock(x);
//g = derivative(rosembrock, x.',order = 4);
//g = numderivative(rosembrock, x.',order = 4);
g = numderivative(rosembrock, x.',0.1, order = 4);
endfunction

initial_parameters = [10 100]
lower_bounds = [0 0];
upper_bounds = [1000 1000];
nocf = 10;  // number of call of f
niter = 10;// number of iterations
[fopt, xopt, gopt, work, iters, evals, err] = 
optim(cost,'b',lower_bounds,upper_bounds,initial_parameters,'qn','ar',nocf,niter);

xopt
fopt
iters
evals
err
___
users mailing list
users@lists.scilab.org
http://lists.scilab.org/mailman/listinfo/users


--
Paul BIGNIER
Development engineer
---
Scilab Enterprises
143bis rue Yves Le Coz - 78000 Versailles, France
Phone: +33.1.80.77.04.68
http://www.scilab-enterprises.com




--
Paul BIGNIER
Development engineer
---
Scilab Enterprises
143bis rue Yves Le Coz - 78000 Versailles, France
Phone: +33.1.80.77.04.68
http://www.scilab-enterprises.com



Re: [Scilab-users] Optim use and 'err' flag

2017-01-13 Thread Paul Bignier


Hello Paul,

Running your script gives me "err=12", which is not documented, but I
don't see how you got 3?


I see though that you reached 'evals' & 'iters', perhaps optim wanted to 
continue but was capped by those.


Feel free to use the format function
(https://help.scilab.org/docs/6.0.0/en_US/format.html) to get
more on-screen precision for your values.
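
For example (a minimal usage sketch; the value 20 is only an illustration):

format(20);   // widen the numeric display so more digits of fopt/xopt are shown
disp(%pi)     // shows more digits of %pi than the default display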


I will surely commit something soon in order to fix the "12" flag.

Have a good evening,

Paul


On 01/13/2017 02:39 PM, paul.carr...@free.fr wrote:

Hi all

I’m trying to improve how to use Optim in Scilab, so I’m still using 
the basic Rosembrock function; in the example hereafter, one can see 
that Optim goes back the Error flag to 3 and I do not understand why?


The goal is to be able to check all the values of this flag in order 
to validate the result ; while the values are the optimized ones, the 
calculation indicates that the optimization fails …


I’m a bit loss … so any feedback will be appreciated

Thanks

Paul

### 


In my understanding:
-err = 9 : everything went well … ok

-err = 3 : Optimization stops because of too small variations for x
-err=1 : Norm of projected gradient lower than …
-err=2 : At last iteration f decreases by less than …
-err=4 : Optim stops: maximum number of calls to f is reached ==> 
increase nocf
-err=5 : Optim stops: maximum number of iterations is reached. ==> 
increase niter

-err=6 : Optim stops: too small variations in gradient direction.
-err=7 : Stop during calculation of descent direction.
-err=8 : Stop during calculation of estimated hessian.
-err=10 : End of optimization (linear search fails).



// Rosembrock function
function f=rosembrock(x)
f = ( 1 - x(1))^2 + 100*( x(2)-x(1)^2 )^2;
endfunction

// Cost function
function [f, g, ind]=cost(x, ind)
f = rosembrock(x);
//g = derivative(rosembrock, x.',order = 4);
//g = numderivative(rosembrock, x.',order = 4);
g = numderivative(rosembrock, x.',0.1, order = 4);
endfunction

initial_parameters = [10 100]
lower_bounds = [0 0];
upper_bounds = [1000 1000];
nocf = 10;  // number of call of f
niter = 10;// number of iterations
[fopt, xopt, gopt, work, iters, evals, err] = 
optim(cost,'b',lower_bounds,upper_bounds,initial_parameters,'qn','ar',nocf,niter);

xopt
fopt
iters
evals
err


--
Paul BIGNIER
Development engineer
---
Scilab Enterprises
143bis rue Yves Le Coz - 78000 Versailles, France
Phone: +33.1.80.77.04.68
http://www.scilab-enterprises.com



[Scilab-users] Optim use and 'err' flag

2017-01-13 Thread paul . carrico

Hi all

I’m trying to improve how I use optim in Scilab, so I’m still using the 
basic Rosenbrock function; in the example hereafter, one can see that 
optim returns the error flag 3 and I do not understand why.


The goal is to be able to check all the values of this flag in order to 
validate the result; although the returned values are the optimized ones, the 
calculation indicates that the optimization failed …


I’m a bit lost … so any feedback will be appreciated

Thanks

Paul

###
In my understanding:
-   err=9 : everything went well … ok
-   err=3 : optimization stops because of too small variations for x
-   err=1 : norm of projected gradient lower than …
-   err=2 : at last iteration, f decreases by less than …
-   err=4 : optim stops: maximum number of calls to f is reached ==> increase nocf
-   err=5 : optim stops: maximum number of iterations is reached ==> increase niter
-   err=6 : optim stops: too small variations in gradient direction.
-   err=7 : stop during calculation of descent direction.
-   err=8 : stop during calculation of estimated hessian.
-   err=10 : end of optimization (linear search fails).



// Rosembrock function
function f=rosembrock(x)
f = ( 1 - x(1))^2 + 100*( x(2)-x(1)^2 )^2;
endfunction

// Cost function
function [f, g, ind]=cost(x, ind)
f = rosembrock(x);
//g = derivative(rosembrock, x.',order = 4);
//g = numderivative(rosembrock, x.',order = 4);
g = numderivative(rosembrock, x.',0.1, order = 4);
endfunction

initial_parameters = [10 100]
lower_bounds = [0 0];
upper_bounds = [1000 1000];
nocf = 10;  // max number of calls to f
niter = 10; // max number of iterations
[fopt, xopt, gopt, work, iters, evals, err] = 
optim(cost,'b',lower_bounds,upper_bounds,initial_parameters,'qn','ar',nocf,niter);

xopt
fopt
iters
evals
err
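
An illustrative alternative (not from the original post, in the spirit of the advice elsewhere in these threads to supply the gradient when it is known): the Rosenbrock gradient has a closed form, so numderivative, and the choice of a step h, can be avoided altogether. Larger caps than the original 10 are used here:

function [f, g, ind]=cost_analytic(x, ind)
    f = (1 - x(1))^2 + 100*(x(2) - x(1)^2)^2;
    // exact gradient of the Rosenbrock function
    g = [-2*(1 - x(1)) - 400*x(1)*(x(2) - x(1)^2), 200*(x(2) - x(1)^2)];
endfunction
[fopt, xopt, gopt, work, iters, evals, err] = ..
    optim(cost_analytic, 'b', [0 0], [1000 1000], [10 100], 'qn', 'ar', 1000, 1000);
mprintf("xopt = (%g, %g), err = %d\n", xopt(1), xopt(2), err);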


Re: [Scilab-users] optim

2016-01-27 Thread fujimoto2005
Hi Mottelet,

Now I understand I don't need to provide the value of 'ind'.
As explained in the help page, 'the ind input argument is a message sent
from the solver to the cost function'.

Also, I understand I have to provide the gradient explicitly even if I don't
need its value.
Unless the user provides the gradient explicitly, the solver must
estimate it numerically, even if there is an analytical form of the gradient.
In general, numerical estimation is time-consuming compared with the
analytical form.
So the solver asks the user to provide the gradient explicitly to avoid
this time-consuming estimation.
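
A minimal sketch of that trade-off (toy objective): either return the analytical gradient, or let the cost function estimate it with numderivative, as done elsewhere in these threads.

function y=myf(x)
    y = x(1)^2 + x(2)^2;
endfunction

// analytical gradient: cheap and exact
function [f, g, ind]=cost_exact(x, ind)
    f = myf(x);
    g = 2*x;
endfunction

// numerical gradient: no formula needed, but costs extra evaluations of myf
function [f, g, ind]=cost_numeric(x, ind)
    f = myf(x);
    g = numderivative(myf, x);
    g = g(:);              // column, same orientation as x
endfunction

x0 = [100; 100];
[f1, x1] = optim(cost_exact,   x0)
[f2, x2] = optim(cost_numeric, x0)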
  
Best regards





Re: [Scilab-users] optim

2016-01-25 Thread fujimoto2005
Hi Mottelet,

Thanks a lot for your solution.

Actually, I don't need the gradient.
In that case, may I provide only f?
Or is g indispensable?
In the 'Description' of the 'optim' help page, there is an explanation: 'If ind=2,
costf must provide f.'
https://help.scilab.org/docs/5.3.3/en_US/optim.html
It seems to mean that when I input ind=2, optim can work without a
user-supplied gradient.
If so, how do I input ind=2?
Best regards.






[Scilab-users] optim

2016-01-25 Thread fujimoto2005
I'm using Scilab 6.0.
I want to use the 'optim' function.
But it fails to work in my code with an error message 'costfunction: Wrong
number of input argument(s): 3 expected.'
The following code is a simple demonstration of the problem.
Though para1 and para2 don't influence the value of f, I introduce them to
keep the structure of the arguments.
I can't see the cause of the error.
Please tell me the cause and how to fix it.

Best regards

**
function f=costfunction(x,ind,para1,para2)
f=x(1)^2+x(2)^2;
endfunction

x0=[100,100];
para1=[1,2];
para2=[2,3];
costf=list(costfunction,para1,para2)
[fopt,xopt]=optim(costf,x0)

**






Re: [Scilab-users] optim

2016-01-25 Thread Stéphane Mottelet

Hello,

your costfunction has to provide f and its gradient :

function  [f, g, ind]=costfunction(x, ind, para1, para2)
f=x(1)^2+x(2)^2;
g=2*x;
endfunction

x0=[100,100]';
para1=[1,2]';
para2=[2,3]';
costf=list(costfunction,para1,para2)
[fopt,xopt]=optim(costf,x0)

S.



On 25/01/2016 10:06, fujimoto2005 wrote:

I'm useing scilab 6.0.
I want to use 'optim' function .
But it fail to work in my code with an error message 'costfunction: Wrong
number of input argument(s): 3 expected.'
The following code is the simple code for the demonstration of the problem.
Though para1 and para2 don't influence on value of f, I introduce them to
keep the structure of arguments.
I can't see the cause of error.
Please teach me the cause and how to fix it.

Best regards

**
function f=costfunction(x,ind,para1,para2)
f=x(1)^2+x(2)^2;
endfunction

x0=[100,100];
para1=[1,2];
para2=[2,3];
costf=list(costfunction,para1,para2)
[fopt,xopt]=optim(costf,x0)

**




--
View this message in context: 
http://mailinglists.scilab.org/optim-tp4033329.html
Sent from the Scilab users - Mailing Lists Archives mailing list archive at 
Nabble.com.



--
Département de Génie Informatique
EA 4297 Transformations Intégrées de la Matière Renouvelable
Université de Technologie de Compiègne -  CS 60319
60203 Compiègne cedex



Re: [Scilab-users] optim

2016-01-25 Thread Stéphane Mottelet
btw the error message is incorrect as you had an incorrect number of 
*output* arguments (Scilab 6.0 bug ?)


S.

On 25/01/2016 10:06, fujimoto2005 wrote:

I'm useing scilab 6.0.
I want to use 'optim' function .
But it fail to work in my code with an error message 'costfunction: Wrong
number of input argument(s): 3 expected.'
The following code is the simple code for the demonstration of the problem.
Though para1 and para2 don't influence on value of f, I introduce them to
keep the structure of arguments.
I can't see the cause of error.
Please teach me the cause and how to fix it.

Best regards

**
function f=costfunction(x,ind,para1,para2)
f=x(1)^2+x(2)^2;
endfunction

x0=[100,100];
para1=[1,2];
para2=[2,3];
costf=list(costfunction,para1,para2)
[fopt,xopt]=optim(costf,x0)

**




--
View this message in context: 
http://mailinglists.scilab.org/optim-tp4033329.html
Sent from the Scilab users - Mailing Lists Archives mailing list archive at 
Nabble.com.



--
Département de Génie Informatique
EA 4297 Transformations Intégrées de la Matière Renouvelable
Université de Technologie de Compiègne -  CS 60319
60203 Compiègne cedex



[Scilab-users] optim (fminbnd) and alpha_star

2014-06-13 Thread saber_s
Salam, hi!
1- f = x^2 - 2*x;
How can I minimize f with respect to x using optim in Scilab?
(Is there an equivalent of fminbnd?)
Please.
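
For illustration, a minimal sketch following the cost-function pattern used in the other threads above (the minimum of x^2 - 2*x is at x = 1):

function [f, g, ind]=cost(x, ind)
    f = x^2 - 2*x;
    g = 2*x - 2;     // analytical gradient
endfunction
[fopt, xopt] = optim(cost, 0)    // expected: xopt close to 1, fopt close to -1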


