Thanks Oleg and Sameer for looking into this.

On Mon, Jun 29, 2020 at 5:39 AM Oleg Shirokobrod <[email protected]>
wrote:

> Sameer,
>
> I have submitted the issue on
> https://github.com/ceres-solver/ceres-solver/issues.
>
> Oleg
>
>
> On Sun, Jun 28, 2020 at 3:59 PM Sameer Agarwal <[email protected]>
> wrote:
>
>> Tobias,
>>
>> I have looked into this and added some details to the issue. As I
>> understand it, these matrix algorithms require instantiating
>> std::complex<ceres::Jet<T, N>>, which leads to problems in the STL.
>>
>> Sameer
>>
>>
>>
>> On Thu, Jun 18, 2020 at 9:35 AM Sameer Agarwal <[email protected]>
>> wrote:
>>
>>> Sorry folks, I have been missing in action. I will take a look as soon as
>>> I can.
>>> Sameer
>>>
>>>
>>> On Sat, Jun 6, 2020 at 12:26 PM Wood, Tobias <[email protected]>
>>> wrote:
>>>
>>>> Hello,
>>>>
>>>>
>>>>
>>>> I have opened an issue here:
>>>> https://gitlab.com/libeigen/eigen/-/issues/1912
>>>>
>>>>
>>>>
>>>> I remembered that I previously discussed the .exp() issue with Sameer
>>>> on the Ceres mailing list and have added a link to that discussion;
>>>> however, I am now getting a slightly different error message because it
>>>> looks like the internals of the STL have changed. Also, I have changed my
>>>> algorithm slightly and now only need .pow(), but this does not work with
>>>> Jets either. I think the problem with .pow() looks easier to fix?
>>>>
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Toby
>>>>
>>>>
>>>>
>>>> *From: *Rasmus Munk Larsen <[email protected]>
>>>> *Reply to: *"[email protected]" <[email protected]>
>>>> *Date: *Thursday, 4 June 2020 at 18:46
>>>> *To: *eigen <[email protected]>, Sameer Agarwal <
>>>> [email protected]>
>>>> *Subject: *Re: [eigen] Using Eigen decompositions with ceres autodiff
>>>> Jet data type.
>>>>
>>>>
>>>>
>>>> Hi Tobias,
>>>>
>>>> Please do. Sameer, since this is Ceres-solver related, could I ask you
>>>> to help out with this issue?
>>>>
>>>> Rasmus
>>>>
>>>>
>>>>
>>>> On Thu, Jun 4, 2020 at 3:06 AM Wood, Tobias <[email protected]>
>>>> wrote:
>>>>
>>>> Hello,
>>>>
>>>>
>>>>
>>>> Apologies for bringing up a tangentially related topic - Eigen's matrix
>>>> exponential also doesn't work with Ceres Jets. There is some code inside
>>>> the matrix exponential that checks whether the scalar type is "known" to
>>>> Eigen, I assume because it requires some constants. Jet<double> is not
>>>> one of those types, so Eigen refuses to compile. When I encountered this
>>>> problem earlier this year I worked around it by using Ceres numeric
>>>> differentiation, but obviously if there's a chance to fix this and use
>>>> auto-differentiation I would be very happy (hopefully a big speed increase).
>>>> Should I create an issue on the Eigen GitLab?
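>>>>
>>>> For reference, a minimal sketch of that numeric-diff workaround (the
>>>> residual, data, and parameter sizes here are hypothetical): the matrix
>>>> exponential is evaluated on plain doubles, so Eigen never sees a Jet
>>>> scalar.
>>>>
>>>> // Hypothetical functor; only the "numeric diff keeps doubles" idea matters.
>>>> #include <ceres/ceres.h>
>>>> #include <unsupported/Eigen/MatrixFunctions>
>>>>
>>>> struct ExpModelResidual {
>>>>   explicit ExpModelResidual(double observed) : observed_(observed) {}
>>>>
>>>>   // Numeric differentiation calls this with plain doubles, so A.exp() compiles.
>>>>   bool operator()(const double* const p, double* residual) const {
>>>>     Eigen::Matrix2d A;
>>>>     A << -p[0],  p[1],
>>>>           p[1], -p[0];
>>>>     const Eigen::Matrix2d expA = A.exp();
>>>>     residual[0] = expA(0, 0) - observed_;
>>>>     return true;
>>>>   }
>>>>
>>>>   double observed_;
>>>> };
>>>>
>>>> // Usage sketch (1 residual, one parameter block of size 2):
>>>> // ceres::CostFunction* cost =
>>>> //     new ceres::NumericDiffCostFunction<ExpModelResidual, ceres::CENTRAL, 1, 2>(
>>>> //         new ExpModelResidual(measurement));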
>>>>
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Toby
>>>>
>>>>
>>>>
>>>> *From: *Oleg Shirokobrod <[email protected]>
>>>> *Reply to: *"[email protected]" <[email protected]>
>>>> *Date: *Thursday, 4 June 2020 at 06:36
>>>> *To: *"[email protected]" <[email protected]>
>>>> *Subject: *Re: [eigen] Using Eigen decompositions with ceres autodiff
>>>> Jet data type.
>>>>
>>>>
>>>>
>>>> 1. I would like to have autodiff ability, so I cannot use double for
>>>> both A and b. If I cast b, i.e. A.jacobiSvd().solve(b.cast<T>()), everything
>>>> works fine, but BinOp(T, T) is more expensive than BinOp(T, double). I would
>>>> like to keep b as a vector of doubles (see the sketch after these points).
>>>>
>>>> 2. T = Jet is the ceres-solver autodiff implementation type. There is a trait
>>>> definition for Jet binary operations for type deduction such that
>>>> type(Jet * double) = Jet and so on. It works when I do the direct
>>>> multiplication V S^-1 U^T * b. This is similar to a complex scalar matrix
>>>> with a double rhs, and the same problem exists for the complex scalar case.
>>>>
>>>> 3. I think the mixed-type deduction rule should give the same type
>>>> for V S^-1 U^T * b and for A.jacobiSvd().solve(b), where A = U S V^T, because
>>>> both use the same algorithm.
>>>>
>>>> 4. Unless there are serious reasons, the deduction rules should mirror
>>>> the scalar case: complex<double> A; double b; x = A^-1 * b;
>>>> type(x) = complex<double>.
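>>>>
>>>> A minimal sketch (the type aliases and function name are hypothetical) of
>>>> the cast workaround from point 1: promoting b to the Jet scalar makes
>>>> solve() compile, at the cost of Jet-Jet arithmetic on the rhs.
>>>>
>>>> #include <ceres/jet.h>
>>>> #include <Eigen/SVD>
>>>>
>>>> using Jet2 = ceres::Jet<double, 2>;
>>>> using MatJ = Eigen::Matrix<Jet2, Eigen::Dynamic, Eigen::Dynamic>;
>>>> using VecJ = Eigen::Matrix<Jet2, Eigen::Dynamic, 1>;
>>>>
>>>> VecJ solve_with_cast(const MatJ& A, const Eigen::VectorXd& b) {
>>>>   // Promote the double rhs to Jets so the SVD solve path only ever
>>>>   // sees a single scalar type.
>>>>   return A.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV)
>>>>           .solve(b.cast<Jet2>());
>>>> }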
>>>>
>>>>
>>>>
>>>> On Wed, Jun 3, 2020 at 11:16 PM Rasmus Munk Larsen <[email protected]>
>>>> wrote:
>>>>
>>>> Try to compile your code in debug mode with the type assertions on.
>>>>
>>>>
>>>>
>>>> On Wed, Jun 3, 2020 at 1:14 PM Rasmus Munk Larsen <[email protected]>
>>>> wrote:
>>>>
>>>> Are you saying that you compute the decomposition in one type and solve
>>>> with a RHS of a different type? Why do you say that VS^-1U^T * b should be
>>>> Matrix<T>? That makes an assumption about type coercion rules. In fact, you
>>>> cannot generally mix types in Eigen expressions without explicit casting,
>>>> and U.adjoint() * b should fail if the types are different.
>>>>
>>>>
>>>>
>>>> On Wed, Jun 3, 2020 at 11:33 AM Oleg Shirokobrod <
>>>> [email protected]> wrote:
>>>>
>>>> Rasmus, I do not quite understand this issue. A decomposition's solve
>>>> should propagate the scalar type of the matrix, not the scalar type of its
>>>> argument. Example:
>>>>
>>>> template <typename T> Matrix<T> A;
>>>>
>>>> VectorXd b;
>>>>
>>>> A.jacobiSvd().solve(b) should be of type Matrix<T>, but it is not: the type
>>>> of the result is Matrix<double>. If we form the SVD decomposition A = U S V^T
>>>> and express the result as V S^-1 U^T * b, then the result will be of type
>>>> Matrix<T>, which is correct and differs from the result of solve(), which
>>>> uses the same algorithm but a more complex result type deduction. This is the
>>>> problem.
>>>>
>>>>
>>>>
>>>> On Wed 3. Jun 2020 at 20.19, Rasmus Munk Larsen <[email protected]>
>>>> wrote:
>>>>
>>>> OK, I opened https://gitlab.com/libeigen/eigen/-/issues/1909 for this.
>>>>
>>>>
>>>>
>>>> On Tue, Jun 2, 2020 at 11:06 PM Oleg Shirokobrod <
>>>> [email protected]> wrote:
>>>>
>>>> Yes. At the time of the computation, only a 1d observation (VectorXd) is known.
>>>>
>>>>
>>>>
>>>> On Tue, Jun 2, 2020 at 9:42 PM Rasmus Munk Larsen <[email protected]>
>>>> wrote:
>>>>
>>>> OK, so b is declared as VectorXf or some other type with
>>>> ColsAtCompileTime=1?
>>>>
>>>>
>>>>
>>>> On Tue, Jun 2, 2020 at 11:27 AM Oleg Shirokobrod <
>>>> [email protected]> wrote:
>>>>
>>>>
>>>>
>>>> Yes, b is a measured spectrum, which is a 1d array. I have to get x for one
>>>> 1d array at a time. I fit a sum of peak models to the 1d rhs; the 1d array of
>>>> peak model values is one column of the matrix A.
>>>>
>>>>
>>>>
>>>> On Tue 2. Jun 2020 at 20.06, Rasmus Munk Larsen <[email protected]>
>>>> wrote:
>>>>
>>>> Why do you say that? You could be solving for multiple
>>>> right-hand sides. Is b known to have 1 column at compile time?
>>>>
>>>>
>>>>
>>>> On Tue, Jun 2, 2020 at 1:31 AM Oleg Shirokobrod <
>>>> [email protected]> wrote:
>>>>
>>>> Hi Rasmus,
>>>>
>>>>
>>>>
>>>> I have just tested the COD decomposition in the Eigen library. It raises the
>>>> same problem. This is a defect in the Eigen decomposition module's type
>>>> deduction for the result of the solve method. If
>>>>
>>>> template <typename T> Matrix<T, Dynamic, Dynamic> A; and ArrayXd b;,
>>>> then x = A.solve(b) should be of type Matrix<T, Dynamic, 1>.
>>>>
>>>>
>>>>
>>>> I like the idea of using COD as an alternative to QR or SVD, and I have added
>>>> this option to my code.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Jun 2, 2020 at 10:36 AM Oleg Shirokobrod <
>>>> [email protected]> wrote:
>>>>
>>>> Rasmus, I will have a look at COD. Brad, I did not try CppAD. I am
>>>> working in a given framework: the ceres nonlinear least squares solver + ceres
>>>> autodiff + the Eigen decomposition modules SVD or QR. The problem is not just
>>>> on the autodiff side; the problem is that the Eigen decomposition module does
>>>> not work properly with an autodiff-typed variable.
>>>>
>>>>
>>>>
>>>> Thank you everybody for advice.
>>>>
>>>>
>>>>
>>>> On Mon, Jun 1, 2020 at 8:41 PM Rasmus Munk Larsen <[email protected]>
>>>> wrote:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, Jun 1, 2020 at 10:33 AM Patrik Huber <[email protected]>
>>>> wrote:
>>>>
>>>> Hi Rasmus,
>>>>
>>>>
>>>>
>>>> This is slightly off-topic for this thread, but it would be great
>>>> if you added your COD to the list/table of decompositions in Eigen:
>>>> https://eigen.tuxfamily.org/dox/group__TopicLinearAlgebraDecompositions.html
>>>>
>>>>
>>>> First, it would make it easier for people to find, and second, it would
>>>> also help a lot to see on that page how the algorithm compares to the
>>>> others, to be able to choose it appropriately.
>>>>
>>>>
>>>> Good point. Will do.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Unrelated: @All/Maintainers: It seems like many (maybe all) of the images on
>>>> the documentation website are broken, at least for me. E.g.:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Best wishes,
>>>>
>>>> Patrik
>>>>
>>>>
>>>>
>>>> On Mon, 1 Jun 2020 at 17:59, Rasmus Munk Larsen <[email protected]>
>>>> wrote:
>>>>
>>>> Hi Oleg and Sameer,
>>>>
>>>>
>>>>
>>>> A faster option than SVD, but more robust than QR (since it also
>>>> handles the under-determined case), is the complete orthogonal decomposition
>>>> that I implemented in Eigen a few years ago.
>>>>
>>>>
>>>> https://eigen.tuxfamily.org/dox/classEigen_1_1CompleteOrthogonalDecomposition.html
>>>>
>>>> (Looks like the docstring is broken - oops!)
>>>>
>>>> It appears to also be available in the 3.3 branch:
>>>> https://gitlab.com/libeigen/eigen/-/blob/3.3/Eigen/src/QR/CompleteOrthogonalDecomposition.h
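>>>>
>>>> A minimal usage sketch (plain double scalars, hypothetical function name),
>>>> in case it helps:
>>>>
>>>> #include <Eigen/QR>
>>>>
>>>> Eigen::VectorXd solve_cod(const Eigen::MatrixXd& A, const Eigen::VectorXd& b) {
>>>>   // Rank-revealing and handles rank-deficient / under-determined systems
>>>>   // like SVD does, but considerably cheaper.
>>>>   return A.completeOrthogonalDecomposition().solve(b);
>>>> }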
>>>>
>>>> Rasmus
>>>>
>>>>
>>>>
>>>> On Mon, Jun 1, 2020 at 6:57 AM Sameer Agarwal <[email protected]>
>>>> wrote:
>>>>
>>>> Oleg,
>>>>
>>>> Two ideas:
>>>>
>>>>
>>>>
>>>> 1. You may have an easier time using QR factorization instead of SVD to
>>>> solve your least squares problem.
>>>>
>>>> 2. But you can do better: instead of trying to solve a linear least
>>>> squares problem involving a matrix of Jets, you are better off solving the
>>>> linear least squares problem on the scalars and then using the implicit
>>>> function theorem
>>>> <https://en.wikipedia.org/wiki/Implicit_function_theorem>
>>>> to compute the derivative w.r.t. the parameters and then applying the chain
>>>> rule.
>>>>
>>>>
>>>>
>>>> i.e., start with min |A x - b|
>>>>
>>>> the solution satisfies the equation
>>>>
>>>> A'A x - A'b = 0.
>>>>
>>>> Solve this equation to get the optimal value of x, then compute the
>>>> Jacobian of this equation w.r.t. A, b, and x, and apply the implicit
>>>> function theorem.
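>>>>
>>>> A minimal dense sketch of that recipe (the function and variable names are
>>>> hypothetical; it assumes A has full column rank and no regularization):
>>>> solve the normal equations with plain doubles, then recover dx/dp_k from
>>>> the implicit function theorem.
>>>>
>>>> #include <Eigen/Dense>
>>>> #include <vector>
>>>>
>>>> // dA[k] is the elementwise derivative of A with respect to parameter p_k.
>>>> std::vector<Eigen::VectorXd> solve_and_differentiate(
>>>>     const Eigen::MatrixXd& A,
>>>>     const std::vector<Eigen::MatrixXd>& dA,
>>>>     const Eigen::VectorXd& b,
>>>>     Eigen::VectorXd* x) {
>>>>   const Eigen::MatrixXd AtA = A.transpose() * A;
>>>>   const Eigen::LDLT<Eigen::MatrixXd> ldlt(AtA);
>>>>
>>>>   // Optimal x from the normal equations A'A x = A'b.
>>>>   *x = ldlt.solve(A.transpose() * b);
>>>>
>>>>   // Implicit function theorem on F(x, p) = A'A x - A'b = 0:
>>>>   //   A'A dx/dp_k = dA_k' (b - A x) - A' dA_k x.
>>>>   const Eigen::VectorXd r = b - A * (*x);
>>>>   std::vector<Eigen::VectorXd> dx(dA.size());
>>>>   for (size_t k = 0; k < dA.size(); ++k) {
>>>>     dx[k] = ldlt.solve(dA[k].transpose() * r - A.transpose() * dA[k] * (*x));
>>>>   }
>>>>   return dx;
>>>> }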
>>>>
>>>>
>>>>
>>>> Sameer
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, Jun 1, 2020 at 4:46 AM Oleg Shirokobrod <
>>>> [email protected]> wrote:
>>>>
>>>> Hi list, I am using the Eigen 3.3.7 release with ceres-solver 1.14.0 and its
>>>> autodiff Jet data type, and I have some problems. I need to solve a linear
>>>> least squares subproblem within a variable projection algorithm, namely the
>>>> LLS equation
>>>>
>>>> A(p)*x = b
>>>>
>>>> where the matrix A(p) depends on nonlinear parameters p:
>>>>
>>>> x(p) = pseudo-inverse(A(p))*b;
>>>>
>>>> x(p) will be optimized in nonlinear least squares fitting, so I need its
>>>> Jacobian. The rhs b is a measured vector of doubles, e.g. VectorXd. In order
>>>> to use ceres's autodiff, p must be of Jet type. Ceres provides the
>>>> corresponding traits for binary operations:
>>>>
>>>>
>>>>
>>>> #if EIGEN_VERSION_AT_LEAST(3, 3, 0)
>>>> // Specifying the return type of binary operations between Jets and scalar types
>>>> // allows you to perform matrix/array operations with Eigen matrices and arrays
>>>> // such as addition, subtraction, multiplication, and division where one Eigen
>>>> // matrix/array is of type Jet and the other is a scalar type. This improves
>>>> // performance by using the optimized scalar-to-Jet binary operations but
>>>> // is only available on Eigen versions >= 3.3
>>>> template <typename BinaryOp, typename T, int N>
>>>> struct ScalarBinaryOpTraits<ceres::Jet<T, N>, T, BinaryOp> {
>>>>   typedef ceres::Jet<T, N> ReturnType;
>>>> };
>>>> template <typename BinaryOp, typename T, int N>
>>>> struct ScalarBinaryOpTraits<T, ceres::Jet<T, N>, BinaryOp> {
>>>>   typedef ceres::Jet<T, N> ReturnType;
>>>> };
>>>> #endif  // EIGEN_VERSION_AT_LEAST(3, 3, 0)
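>>>>
>>>> With those traits in place, a mixed expression like the following compiles
>>>> (a small sketch with hypothetical sizes):
>>>>
>>>> #include <ceres/jet.h>
>>>> #include <Eigen/Core>
>>>>
>>>> using Jet2 = ceres::Jet<double, 2>;
>>>>
>>>> void mixed_ops_example() {
>>>>   Eigen::Matrix<Jet2, Eigen::Dynamic, Eigen::Dynamic> A =
>>>>       Eigen::MatrixXd::Random(3, 2).cast<Jet2>();
>>>>   Eigen::VectorXd v = Eigen::VectorXd::Random(2);
>>>>
>>>>   // Jet matrix times double vector: the traits deduce a Jet-valued result.
>>>>   Eigen::Matrix<Jet2, Eigen::Dynamic, 1> y = A * v;
>>>> }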
>>>>
>>>>
>>>>
>>>> There are two problems.
>>>>
>>>> 1. A small problem. In the function "RealScalar threshold() const" in
>>>> SVDBase.h I have to replace
>>>>
>>>> return m_usePrescribedThreshold ? m_prescribedThreshold
>>>>                                 : diagSize * NumTraits<Scalar>::epsilon();
>>>>
>>>> with
>>>>
>>>> return m_usePrescribedThreshold ? m_prescribedThreshold
>>>>                                 : Scalar(diagSize) * NumTraits<Scalar>::epsilon();
>>>>
>>>> This fix is similar to Gael's fix of Bug 1403
>>>> <http://eigen.tuxfamily.org/bz/show_bug.cgi?id=1403>
>>>>
>>>> 2. Less trivial. I expect that x(p) = pseudo-inverse(A(p))*b; is a
>>>> vector of Jets, and it is actually true for, e.g., the SVD decomposition
>>>>
>>>> x(p) = V S^-1 U^T * b.
>>>>
>>>> But if I use
>>>>
>>>> JacobiSVD<Matrix<Jet<double, 2>, Dynamic, Dynamic>> svd(A);
>>>>
>>>> x(p) = svd.solve(b),
>>>>
>>>> I get an error message.
>>>>
>>>> Here is code for reproducing the error:
>>>>
>>>>
>>>>
>>>> // test_svd_jet.cpp
>>>>
>>>> #include <Eigen/Dense>
>>>> #include <ceres/jet.h>
>>>>
>>>> using namespace Eigen;
>>>> using ceres::Jet;
>>>>
>>>> int test_svd_jet()
>>>> {
>>>>     typedef Matrix<Jet<double, 2>, Dynamic, Dynamic> Mat;
>>>>     typedef Matrix<Jet<double, 2>, Dynamic, 1> Vec;
>>>>
>>>>     Mat A = MatrixXd::Random(3, 2).cast<Jet<double, 2>>();
>>>>     VectorXd b = VectorXd::Random(3);
>>>>
>>>>     JacobiSVD<Mat> svd(A, ComputeThinU | ComputeThinV);
>>>>     int l_rank = svd.rank();
>>>>
>>>>     // Explicit pseudo-inverse applied to the double rhs.
>>>>     Vec c = svd.matrixV().leftCols(l_rank)
>>>>         * svd.singularValues().head(l_rank).asDiagonal().inverse()
>>>>         * svd.matrixU().leftCols(l_rank).adjoint() * b; // *
>>>>
>>>>     // solve() with the rhs cast to Jets.
>>>>     Vec c1 = svd.solve(b.cast<Jet<double, 2>>()); // **
>>>>
>>>>     // solve() with the double rhs.
>>>>     Vec c2 = svd.solve(b); // ***
>>>>
>>>>     return 0;
>>>> }
>>>>
>>>> // End test_svd_jet.cpp
>>>>
>>>>
>>>>
>>>> // * and // ** work fine and give the same results. // *** fails with a VS
>>>> 2019 error message:
>>>>
>>>> Eigen\src\Core\functors\AssignmentFunctors.h(24,1):
>>>>
>>>> error C2679: binary '=': no operator found which takes
>>>> a right-hand operand of type 'const SrcScalar'
>>>> (or there is no acceptable conversion)
>>>>
>>>> The error points to line // ***. I think the solution is of type
>>>> VectorXd instead of Vec, and there is a problem with assigning a double to a
>>>> Jet. The derivatives are lost as well. It should work similarly to the
>>>> complex case: if A is a complex matrix and b is a real vector, x must be
>>>> complex. There is something wrong with the type deduction in the SVD or QR
>>>> decomposition.
>>>>
>>>>
>>>>
>>>> Do you have any idea how to fix this?
>>>>
>>>>
>>>>
>>>> Best regards,
>>>>
>>>>
>>>>
>>>> Oleg Shirokobrod
>>>>
>>>>
>>>>
>>>>
