There is no added value in using NDcost; you can use numderivative
directly, like this:
function [f, g, ind] = costf(p, ind)
    f = norm(fun(p))^2;                    // cost: squared 2-norm of the residuals
    g = 2*numderivative(fun, p)'*fun(p);   // gradient 2*J'*r, with J estimated numerically
endfunction
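For example, a complete minimal sketch (the exponential-fit fun, the made-up
data and the starting point below are placeholders, not from this thread):

function r = fun(p)
    // hypothetical residuals: fit y = p(1)*exp(p(2)*x) to made-up data
    x = [0; 1; 2; 3];
    y = [1.0; 2.7; 7.4; 20.1];
    r = p(1)*exp(p(2)*x) - y;
endfunction

function [f, g, ind] = costf(p, ind)
    f = norm(fun(p))^2;
    g = 2*numderivative(fun, p)'*fun(p);
endfunction

[fopt, popt, gopt] = optim(costf, [1; 1]);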
S.
On 08/01/2020 at 16:45, David Brant wrote:
Many thanks Stéphane.
Is it also possible to do this if the gradient is not known or is impractical
to obtain, using NDcost as I had intended, with something like
[fopt,popt,gopt]=optim(list(NDcost,fun),p0') ?
Regards, Dave
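For reference, the NDcost route can work, but the function handed to it must
return the scalar cost, not the residual vector; a minimal sketch, with a
hypothetical wrapper sqcost and the same made-up data as above:

function f = sqcost(p)
    // scalar cost: NDcost differentiates this value numerically
    x = [0; 1; 2; 3];
    y = [1.0; 2.7; 7.4; 20.1];
    f = norm(p(1)*exp(p(2)*x) - y)^2;
endfunction

[fopt, popt, gopt] = optim(list(NDcost, sqcost), [1; 1]);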
Hello,
If you want to use optim for your least squares problem, you have to
consider the minimization of norm(fun(p))^2, whose gradient is
2*dfun(p)'*fun(p), where dfun is the Jacobian of fun; i.e. costf must be
written like this:
function [f, g, ind] = costf(p, ind)
    f = norm(fun(p))^2;       // cost: squared 2-norm of the residuals
    g = 2*dfun(p)'*fun(p);    // gradient from the analytic Jacobian dfun
endfunction
After
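As a concrete sketch of matching fun and dfun, to be used with the costf
above (a placeholder linear model with made-up data, not from the original
post):

function r = fun(p)
    x = [0; 1; 2; 3]; y = [0.1; 1.2; 1.9; 3.1];
    r = p(1) + p(2)*x - y;
endfunction

function J = dfun(p)
    x = [0; 1; 2; 3];
    J = [ones(x), x];   // columns: dr/dp(1), dr/dp(2)
endfunction

[fopt, popt, gopt] = optim(costf, [0; 0]);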
Hi, I am having problems with the code below.
It is a variation of an example listed in the optimization chapter of the
Modeling and Simulation in Scilab/Scicos book (pages 109-110 & 114). I can
configure the code to work for leastsq and lsqrsolve, but not optim. Any
advice on mods would be very much appreciated.
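For comparison, a minimal sketch of the residual-style calls (fun, p0 and
the data are placeholders, not the code from the original post):

function r = fun(p)
    // placeholder residuals: fit y = p(1)*x + p(2) to made-up data
    x = [1; 2; 3; 4]; y = [2.1; 3.9; 6.2; 7.8];
    r = p(1)*x + p(2) - y;
endfunction

p0 = [1; 0];

// leastsq takes the residual function directly
[fopt, popt] = leastsq(fun, p0);

// lsqrsolve takes the starting point first, a fct(p, m) residual
// function, and the number of residuals m
function r = fct(p, m)
    r = fun(p);
endfunction
[popt2, ropt] = lsqrsolve(p0, fct, 4);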