Reposted with a PDF figure instead of the too-big SVG. JPD
-------- Original Message --------
Subject: Optimization
Date: Thu, 15 Nov 2012 18:27:47 -0500
From: Jean-Pierre Dussault <[email protected]>
To: [email protected]

Hi all,

I am preparing examples for an optimization course for students in image science. I use an example from http://www.ceremade.dauphine.fr/~peyre/numerical-tour/tours/optim_1_gradient_descent/ to promote the use of better algorithms than the simple gradient descent.
I attach the convergence plot of the gradient norm for five variants of the optim command: gc unconstrained, gc with bounds [-%inf,%inf], gc with bounds [0,1], gc with bounds [0,%inf], and nd. I also include the simple gradient descent.
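For reference, here is a minimal sketch of how the optim runs might be set up; costf (a cost function following optim's calling convention), x0, and the variable names are assumptions inferred from the results below, not the actual script:

// Minimal sketch of the runs (costf, x0 and the variable names are assumptions).
// costf must follow the optim calling convention: [f, g, ind] = costf(x, ind).
n  = 65536;
x0 = zeros(n, 1);
[fGC,  xoptGC]      = optim(costf, x0, 'gc');                                        // gc, unconstrained
[fBi,  xoptGCBinf]  = optim(costf, 'b', -%inf*ones(n,1), %inf*ones(n,1), x0, 'gc'); // gc, bounds [-%inf,%inf]
[fB,   xoptGCB]     = optim(costf, 'b', zeros(n,1), ones(n,1), x0, 'gc');            // gc, bounds [0,1]
[fB0i, xoptGCB0inf] = optim(costf, 'b', zeros(n,1), %inf*ones(n,1), x0, 'gc');       // gc, bounds [0,%inf]
[fND,  xoptND]      = optim(costf, x0, 'nd');                                        // nd solver
// xoptS would come from a hand-written simple gradient descent loop.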
Except for the [0,%inf] variant, the solutions have all components strictly inside [0,1], as displayed here:
-->[max(xoptS), max(xoptGC), max(xoptGCB), max(xoptGCBinf), max(xoptGCB0inf), max(xoptND)]
 ans  =
    0.9249840    0.9211455    0.9216067    0.9213056    1.0402906    0.9212348

-->[min(xoptS), min(xoptGC), min(xoptGCB), min(xoptGCBinf), min(xoptGCB0inf), min(xoptND)]
 ans  =
    0.0671743    0.0718204    0.0678885    0.0714951    0.0772300    0.0714255
On the convergence plot, we clearly see that the gradient norm of the gc variant with bounds [0,1] stalls away from zero, while with no bounds or infinite bounds it converges to zero. This is even more severe for the variant with bounds [0,%inf], which no longer approaches the solution, making virtually no progress after some 30 function evaluations.
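In case it helps to reproduce the plot, one possible way to record the gradient norm at every evaluation is a logging wrapper around the cost function; costf and the names here are again assumptions, not the actual script:

// Hypothetical wrapper recording norm(g) at every call (costf is assumed).
global gnorms; gnorms = [];
function [f, g, ind] = costf_logged(x, ind)
    global gnorms
    [f, g, ind] = costf(x, ind);   // delegate to the real cost function
    gnorms($+1) = norm(g);         // append the current gradient norm
endfunction
// Pass costf_logged to optim, then plot on a log scale:
// plot2d(1:length(gnorms), gnorms, logflag = "nl")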
Is this a Scilab bug, or a bad example for the underlying gcbd routine? The cost function is strongly convex in dimension 65536. Has anyone experienced similar behavior?
This is unfortunate, since I wish to convince my students to use suitably constrained models instead of enforcing the constraints afterward.
Thanks for any suggestion to work around this troublesome situation. JPD
[Attachment: Comp.pdf (Adobe PDF document)]
