I don't think Tristan is looking for users just yet, but he has a
Levenberg-Marquardt (LM) implementation that he's using for bundle
adjustment (comparing with Ceres) here.  We hope to merge it once it's
better tested.

  https://bitbucket.org/tristankonolie/petsc/commits/all

"Dener, Alp via petsc-users" <petsc-users@mcs.anl.gov> writes:

> Hi Hansol,
>
> We don’t have a Levenberg-Marquardt method available, and if the PETSc/TAO 
> manual says otherwise, that may be misleading. Let me know where you saw that 
> and I can take a look and fix it.
>
> In the meantime, if you want to solve a least-squares problem, the master 
> branch of PETSc on Bitbucket has a bound-constrained regularized Gauss-Newton 
> (TAOBRGN) method available. The only available regularization right now is an 
> L2 proximal point Tikhonov regularizer. There are ongoing efforts to support 
> an L1 regularizer, and also the ability for users to define their own, but 
> these have not made it into the master branch yet. We’re working on it, and 
> it should be in the next major PETSc release in the spring.
>
> If you’d like to use that method, you need to set the Tao type to TAOBRGN and 
> then go through the TaoSetResidualRoutine() and 
> TaoSetJacobianResidualRoutine() interfaces to define your problem.
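A minimal sketch of the setup described above, against the PETSc C API as it
stood around the time of this thread (TaoSetInitialVector was later renamed).
The vectors, matrix, callbacks, and user context are application-supplied
placeholders, not PETSc symbols:

```c
#include <petsctao.h>  /* requires a PETSc build; sketch only, not compiled here */

/* Hypothetical driver: x (solution Vec), r (residual Vec), J (Jacobian Mat),
   and the EvaluateResidual/EvaluateJacobian callbacks and user context must
   all be supplied by the application. */
PetscErrorCode SolveLeastSquares(Vec x, Vec r, Mat J, void *user)
{
  Tao            tao;
  PetscErrorCode ierr;

  ierr = TaoCreate(PETSC_COMM_WORLD, &tao);CHKERRQ(ierr);
  ierr = TaoSetType(tao, TAOBRGN);CHKERRQ(ierr);  /* regularized Gauss-Newton */
  ierr = TaoSetInitialVector(tao, x);CHKERRQ(ierr);
  ierr = TaoSetResidualRoutine(tao, r, EvaluateResidual, user);CHKERRQ(ierr);
  ierr = TaoSetJacobianResidualRoutine(tao, J, J, EvaluateJacobian, user);CHKERRQ(ierr);
  ierr = TaoSetFromOptions(tao);CHKERRQ(ierr);    /* honor -tao_* runtime options */
  ierr = TaoSolve(tao);CHKERRQ(ierr);
  ierr = TaoDestroy(&tao);CHKERRQ(ierr);
  return 0;
}
```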
>
> In general, you can use other TAO algorithms (e.g.: BNLS, BQNLS, etc.) with 
> your own regularization term by embedding it into the objective, gradient and 
> Hessian (if applicable) evaluation callbacks. The caveat is that your 
> regularizer needs to be C1 continuous for first-order methods and C2 
> continuous for second order methods. This typically limits you to L2-norm 
> regularizers. There is no support yet for L1-norm regularizers, but as I 
> said, we’re working on it right now and it should be available in a couple of 
> months.
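A concrete illustration of embedding an L2 (Tikhonov) regularizer into your
own callbacks, in plain C with no PETSc types. The function name and the
in-place gradient update are illustrative, not part of the TAO API; inside a
real TAO objective/gradient routine the same update would be applied to the
Vec entries:

```c
#include <stddef.h>

/* Add the Tikhonov penalty 0.5*lambda*||x||^2 to an already-computed
   objective value f, and lambda*x to the already-computed gradient g,
   for a problem of dimension n.  Returns the regularized objective. */
double add_l2_regularizer(double f, double *g, const double *x,
                          size_t n, double lambda)
{
    double sq = 0.0;
    for (size_t i = 0; i < n; ++i) {
        sq   += x[i] * x[i];
        g[i] += lambda * x[i];    /* gradient of the penalty is lambda*x */
    }
    return f + 0.5 * lambda * sq; /* regularized objective value */
}
```

The penalty is smooth everywhere (its Hessian contribution is simply
lambda times the identity), so it satisfies the C1/C2 continuity caveat
above for both first- and second-order methods.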
>
> Hope that helps,
> ——
> Alp Dener
> Argonne National Laboratory
> https://www.anl.gov/profile/alp-dener
>
>
>
> On Jan 24, 2019, at 2:57 PM, David via petsc-users 
> <petsc-users@mcs.anl.gov> wrote:
>
> Hi. I was wondering whether there is any general consensus about the best
> currently implemented L1/L2-norm regularization for PETSc/TAO.
>
> Naively, I would shoot for Levenberg-Marquardt for some kind of random-matrix
> or even generic finite-difference stencil problem (but it seems that LM is not
> yet implemented, and appears only in the PETSc manual PDF?).
>
> Of the methods that are implemented, LMVM seems to work well, at least
> on my local machine.
>
> In any case, I would highly appreciate input and opinions on
> these matters.
>
>
> Thanks.
>
>
>
> Hansol Suh,
>
> PhD Student
>
>
> Georgia Institute of Technology
