On Tue, Feb 25, 2020 at 12:23 PM Sajid Ali <sajidsyed2...@u.northwestern.edu>
wrote:

> Hi Hong,
>
> Thanks for the explanation!
>
> If I have a cost function consisting of an L2 norm of the difference between a
> TS solution and some reference, along with some constraints (say bounds,
> L1 sparsity, total variation, etc.), would I provide a routine for gradient
> evaluation of only the L2 norm (with TAO taking care of the
> constraints), or do I also have to take the constraints into account (since
> I'd then have to differentiate the regularizers as well)?
>

We want to provide a framework for exactly this separable case, where you
supply the objective and gradient of the smooth misfit term and the solver
handles the nonsmooth regularizers. The ADMM implementation that was recently
merged is a step in this direction; see Alp's talk from SIAM PP 2020.
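
Roughly, the split looks like the sketch below. This is only a minimal
illustration of the idea, assuming the TAOADMM interface in the current
release: FormMisfitObjGrad, AppCtx, the problem size, and the synthetic
reference vector are placeholders, in your application the objective and
gradient would come from the TS/adjoint solve, and setting the regularizer
weight and subsolver options is omitted.

#include <petsctao.h>

typedef struct {
  Vec ref;   /* reference data the TS solution is compared against (placeholder) */
} AppCtx;

/* f(x) = 1/2 ||x - ref||_2^2 and its gradient; in a real application x would
   come from a TS solve and the gradient from an adjoint solve. */
static PetscErrorCode FormMisfitObjGrad(Tao tao, Vec x, PetscReal *f, Vec g, void *ctx)
{
  AppCtx        *user = (AppCtx *)ctx;
  PetscReal      nrm;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecCopy(x, g);CHKERRQ(ierr);
  ierr = VecAXPY(g, -1.0, user->ref);CHKERRQ(ierr);   /* g = x - ref */
  ierr = VecNorm(g, NORM_2, &nrm);CHKERRQ(ierr);
  *f   = 0.5*nrm*nrm;
  PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
  Tao            tao;
  Vec            x;
  AppCtx         user;
  PetscInt       n = 100;   /* arbitrary size for the sketch */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, n, &x);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &user.ref);CHKERRQ(ierr);
  ierr = VecSet(x, 0.0);CHKERRQ(ierr);
  ierr = VecSet(user.ref, 1.0);CHKERRQ(ierr);         /* stands in for measured data */

  ierr = TaoCreate(PETSC_COMM_WORLD, &tao);CHKERRQ(ierr);
  ierr = TaoSetType(tao, TAOADMM);CHKERRQ(ierr);
  ierr = TaoSetInitialVector(tao, x);CHKERRQ(ierr);

  /* Smooth block: user supplies objective/gradient for the L2 misfit only */
  ierr = TaoADMMSetMisfitObjectiveAndGradientRoutine(tao, FormMisfitObjGrad, &user);CHKERRQ(ierr);
  /* Nonsmooth block: built-in soft thresholding handles the L1 term */
  ierr = TaoADMMSetRegularizerType(tao, TAO_ADMM_REGULARIZER_SOFT_THRESH);CHKERRQ(ierr);

  ierr = TaoSetFromOptions(tao);CHKERRQ(ierr);
  ierr = TaoSolve(tao);CHKERRQ(ierr);

  ierr = VecDestroy(&user.ref);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = TaoDestroy(&tao);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

For a user-defined regularizer (e.g. total variation) the idea is the same,
except that block is also supplied by the user instead of the built-in
soft-thresholding.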

  Thanks,

    Matt


> Thank You,
> Sajid Ali | PhD Candidate
> Applied Physics
> Northwestern University
> s-sajid-ali.github.io
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
