Oleg,

Basically my response is that A.jacobiSvd().solve(b.cast<T>()) is what
you should be doing - we made a conscious decision to disallow code that
relies on implicit casting. Just speculation: perhaps add your own
specialization of BinOp(T, double) that does the right thing and is faster?
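
For concreteness, a minimal sketch of the pattern I mean (the wrapper
solve_lls and the Jet dimension 2 are just my illustration, and it assumes
your threshold() fix from further down the thread):

#include <Eigen/Dense>
#include <ceres/jet.h>

using T = ceres::Jet<double, 2>;
using Mat = Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic>;
using Vec = Eigen::Matrix<T, Eigen::Dynamic, 1>;

// Cast the double rhs to T once, up front, so every scalar in the solve
// has the same type and no mixed-type coercion rules are needed.
Vec solve_lls(const Mat& A, const Eigen::VectorXd& b) {
  return A.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV)
          .solve(b.cast<T>());
}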

On Wed, Jun 3, 2020 at 10:35 PM Oleg Shirokobrod <[email protected]>
wrote:

> 1. I would like to have autodiff ability, so I cannot use double for both
> A and b. If I cast b, A.jacobiSvd().solve(b.cast<T>()) works fine, but
> BinOp(T,T) is more expensive than BinOp(T, double). I would like to keep
> b as a vector of doubles.
> 2. T=Jet is the ceres solver autodiff implementation type. There is a
> trait definition for Jet binary operations for type deduction, such that
> type(Jet*double) = Jet and so on. It works when I do the direct
> multiplication VS^-1U^T * b. This is analogous to a complex scalar matrix
> with a double rhs, and the complex scalar case has the same problem.
> 3. I think that the mixed-type deduction rule should give the same type
> for VS^-1U^T * b as for A.jacobiSvd().solve(b), where A = USV^T, because
> both use the same algorithm.
> 4. Unless there are serious reasons not to, deduction rules should mirror
> the scalar-type rules: complex<double> A; double b; x = A^-1 * b;
> type(x) = complex<double>. (See the sketch after this list.)
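>
> A minimal sketch of the analogy in point 4 (my own illustration; it
> relies on Eigen 3.3's built-in real/complex mixing rules):
>
> #include <Eigen/Dense>
> #include <complex>
> using namespace Eigen;
>
> MatrixXcd A = MatrixXcd::Random(3, 3);
> VectorXd  b = VectorXd::Random(3);
> // Direct products may mix complex and double; the result is complex:
> VectorXcd x = A * b;
> // but solve() with a double rhs hits the same deduction problem:
> // VectorXcd y = A.jacobiSvd(ComputeThinU | ComputeThinV).solve(b);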
>
> On Wed, Jun 3, 2020 at 11:16 PM Rasmus Munk Larsen <[email protected]>
> wrote:
>
>> Try to compile your code in debug mode with the type assertions on.
>>
>> On Wed, Jun 3, 2020 at 1:14 PM Rasmus Munk Larsen <[email protected]>
>> wrote:
>>
>>> Are you saying that you compute the decomposition in one type and solve
>>> with a RHS of a different type? Why do you say that VS^-1U^T * b should be
>>> Matrix<T>? That makes an assumption about type coercion rules. In fact, you
>>> cannot generally mix types in Eigen expressions without explicit casting,
>>> and U.adjoint() * b should fail if the types are different.
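>>>
>>> For example (a rough sketch, assuming U holds Jet scalars, b is a plain
>>> VectorXd, and no ScalarBinaryOpTraits specialization is in scope):
>>>
>>> // U.adjoint() * b;                        // mixed scalar types: should not compile
>>> // U.adjoint() * b.cast<Jet<double, 2>>(); // explicit cast: OK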
>>>
>>> On Wed, Jun 3, 2020 at 11:33 AM Oleg Shirokobrod <
>>> [email protected]> wrote:
>>>
>>>> Rasmus, I do not quite understand this issue. A decomposition's solve
>>>> should propagate the scalar type of the matrix, not the scalar type of
>>>> its argument. Example:
>>>> Matrix<T, Dynamic, Dynamic> A;  // for some scalar type T
>>>> VectorXd b;
>>>> A.jacobiSvd().solve(b) should be of type Matrix<T, Dynamic, 1>, but it
>>>> is not; the type of the result is Matrix<double, Dynamic, 1>. If we
>>>> compute the SVD decomposition A = USV^T and express the result as
>>>> VS^-1U^T * b, then the result is of type Matrix<T, Dynamic, 1>, which
>>>> is correct and differs from the result of solve, which uses the same
>>>> algorithm but a more complex result-type deduction. This is the problem.
>>>>
>>>> On Wed 3. Jun 2020 at 20.19, Rasmus Munk Larsen <[email protected]>
>>>> wrote:
>>>>
>>>>> OK, I opened https://gitlab.com/libeigen/eigen/-/issues/1909 for this.
>>>>>
>>>>> On Tue, Jun 2, 2020 at 11:06 PM Oleg Shirokobrod <
>>>>> [email protected]> wrote:
>>>>>
>>>>>> Yes. At the time of computing, only a 1-d observation (VectorXd) is known.
>>>>>>
>>>>>> On Tue, Jun 2, 2020 at 9:42 PM Rasmus Munk Larsen <
>>>>>> [email protected]> wrote:
>>>>>>
>>>>>>> OK, so b is declared as VectorXf or some other type with
>>>>>>> ColsAtCompileTime=1?
>>>>>>>
>>>>>>> On Tue, Jun 2, 2020 at 11:27 AM Oleg Shirokobrod <
>>>>>>> [email protected]> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> Yes, b is a measured spectrum, i.e. a 1-d array. I have to get x
>>>>>>>> for one 1-d array at a time. I fit a sum of peak models to the 1-d
>>>>>>>> rhs; the 1-d array of values of one peak model is one column of
>>>>>>>> matrix A.
>>>>>>>>
>>>>>>>> On Tue 2. Jun 2020 at 20.06, Rasmus Munk Larsen <
>>>>>>>> [email protected]> wrote:
>>>>>>>>
>>>>>>>>> Why do you say that? You could be solving for multiple
>>>>>>>>> right-hand sides. Is b known to have 1 column at compile time?
>>>>>>>>>
>>>>>>>>> On Tue, Jun 2, 2020 at 1:31 AM Oleg Shirokobrod <
>>>>>>>>> [email protected]> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Rasmus,
>>>>>>>>>>
>>>>>>>>>> I have just tested the COD decomposition in the Eigen library. It
>>>>>>>>>> raises the same problem. This is a defect in the Eigen
>>>>>>>>>> decomposition module's type deduction for the result of the solve
>>>>>>>>>> method. If A is a Matrix<T, Dynamic, Dynamic> and b is a VectorXd,
>>>>>>>>>> then x = cod.solve(b), for a decomposition cod of A, should be of
>>>>>>>>>> type Matrix<T, Dynamic, 1>.
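>>>>>>>>>>
>>>>>>>>>> A minimal sketch of what I mean (assuming T = Jet<double, 2> and
>>>>>>>>>> the Mat/Vec typedefs from my test program further down):
>>>>>>>>>>
>>>>>>>>>> CompleteOrthogonalDecomposition<Mat> cod(A);
>>>>>>>>>> Vec x = cod.solve(b.cast<T>()); // fine, with an explicit cast
>>>>>>>>>> // Vec y = cod.solve(b);        // double rhs: same deduction problem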
>>>>>>>>>>
>>>>>>>>>> I like the idea of using COD as an alternative to QR or SVD, and
>>>>>>>>>> I have added this option to my code.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Tue, Jun 2, 2020 at 10:36 AM Oleg Shirokobrod <
>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>
>>>>>>>>>>> Rasmus, I will have a look at COD. Brad, I did not try CppAD. I
>>>>>>>>>>> am working in a given framework: the ceres nonlinear least
>>>>>>>>>>> squares solver + ceres autodiff + the Eigen decomposition modules
>>>>>>>>>>> SVD or QR. The problem is not just on the autodiff side. The
>>>>>>>>>>> problem is that the Eigen decomposition module does not work
>>>>>>>>>>> properly with autodiff-typed variables.
>>>>>>>>>>>
>>>>>>>>>>> Thank you everybody for the advice.
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Jun 1, 2020 at 8:41 PM Rasmus Munk Larsen <
>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Jun 1, 2020 at 10:33 AM Patrik Huber <
>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Rasmus,
>>>>>>>>>>>>>
>>>>>>>>>>>>> This is slightly off-topic to this thread here, but it would
>>>>>>>>>>>>> be great if you added your COD to the list/table of 
>>>>>>>>>>>>> decompositions in
>>>>>>>>>>>>> Eigen:
>>>>>>>>>>>>> https://eigen.tuxfamily.org/dox/group__TopicLinearAlgebraDecompositions.html
>>>>>>>>>>>>>
>>>>>>>>>>>>> First, it would make it easier for people to find, and second,
>>>>>>>>>>>>> it would also help a lot to see on that page how the algorithm 
>>>>>>>>>>>>> compares to
>>>>>>>>>>>>> the others, to be able to choose it appropriately.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Good point. Will do.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Unrelated: @All/Maintainers: It seems like lots (all) of the
>>>>>>>>>>>>> images on the documentation website are broken? At least for me. 
>>>>>>>>>>>>> E.g.:
>>>>>>>>>>>>>
>>>>>>>>>>>>> [image: image.png]
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Best wishes,
>>>>>>>>>>>>> Patrik
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, 1 Jun 2020 at 17:59, Rasmus Munk Larsen <
>>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Oleg and Sameer,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> A faster option than SVD, but more robust than QR (since it
>>>>>>>>>>>>>> also handles the under-determined case) is the complete 
>>>>>>>>>>>>>> orthogonal
>>>>>>>>>>>>>> decomposition that I implemented in Eigen a few years ago.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> https://eigen.tuxfamily.org/dox/classEigen_1_1CompleteOrthogonalDecomposition.html
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> (Looks like the docstring is broken - oops!)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> It appears to also be available in the 3.3 branch:
>>>>>>>>>>>>>> https://gitlab.com/libeigen/eigen/-/blob/3.3/Eigen/src/QR/CompleteOrthogonalDecomposition.h
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Rasmus
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Jun 1, 2020 at 6:57 AM Sameer Agarwal <
>>>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Oleg,
>>>>>>>>>>>>>>> Two ideas:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 1. You may have an easier time using a QR factorization
>>>>>>>>>>>>>>> instead of SVD to solve your least squares problem.
>>>>>>>>>>>>>>> 2. But you can do better: instead of trying to solve a linear
>>>>>>>>>>>>>>> least squares problem involving a matrix of Jets, you are
>>>>>>>>>>>>>>> better off solving the linear least squares problem on the
>>>>>>>>>>>>>>> scalars, then using the implicit function theorem
>>>>>>>>>>>>>>> <https://en.wikipedia.org/wiki/Implicit_function_theorem>
>>>>>>>>>>>>>>> to compute the derivative w.r.t. the parameters, and then
>>>>>>>>>>>>>>> applying the chain rule.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> i.e., start with min |A x - b|.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The solution satisfies the normal equation
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> A'A x - A'b = 0.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Solve this equation to get the optimal value of x, then
>>>>>>>>>>>>>>> compute the jacobian of this equation w.r.t. A, b and x, and
>>>>>>>>>>>>>>> apply the implicit function theorem.
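>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Spelling out the step this relies on (my sketch, in the same
>>>>>>>>>>>>>>> notation): differentiating A'A x - A'b = 0 w.r.t. one
>>>>>>>>>>>>>>> parameter p, holding the equation at zero, gives
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> dA'/dp (A x - b) + A' dA/dp x + A'A dx/dp = 0,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> so the derivative of the solution is
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> dx/dp = (A'A)^-1 ( dA'/dp (b - A x) - A' dA/dp x ),
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> all of which needs only plain double arithmetic.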
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Sameer
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Jun 1, 2020 at 4:46 AM Oleg Shirokobrod <
>>>>>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi list, I am using the Eigen 3.3.7 release with ceres
>>>>>>>>>>>>>>>> solver 1.14.0 and its autodiff Jet data type, and I have some
>>>>>>>>>>>>>>>> problems. I need to solve a linear least squares subproblem
>>>>>>>>>>>>>>>> within a variable projection algorithm, namely the LLS
>>>>>>>>>>>>>>>> equation
>>>>>>>>>>>>>>>> A(p)*x = b
>>>>>>>>>>>>>>>> where the matrix A(p) depends on nonlinear parameters p:
>>>>>>>>>>>>>>>> x(p) = pseudo-inverse(A(p))*b;
>>>>>>>>>>>>>>>> x(p) will be optimized in nonlinear least squares fitting,
>>>>>>>>>>>>>>>> so I need the Jacobian. The rhs b is a measured vector of
>>>>>>>>>>>>>>>> doubles, e.g. VectorXd. In order to use ceres's autodiff, p
>>>>>>>>>>>>>>>> must be of Jet type. Ceres provides the corresponding traits
>>>>>>>>>>>>>>>> for binary operations:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> #if EIGEN_VERSION_AT_LEAST(3, 3, 0)
>>>>>>>>>>>>>>>> // Specifying the return type of binary operations between Jets and
>>>>>>>>>>>>>>>> // scalar types allows you to perform matrix/array operations with
>>>>>>>>>>>>>>>> // Eigen matrices and arrays such as addition, subtraction,
>>>>>>>>>>>>>>>> // multiplication, and division where one Eigen matrix/array is of
>>>>>>>>>>>>>>>> // type Jet and the other is a scalar type. This improves performance
>>>>>>>>>>>>>>>> // by using the optimized scalar-to-Jet binary operations but is
>>>>>>>>>>>>>>>> // only available on Eigen versions >= 3.3
>>>>>>>>>>>>>>>> template <typename BinaryOp, typename T, int N>
>>>>>>>>>>>>>>>> struct ScalarBinaryOpTraits<ceres::Jet<T, N>, T, BinaryOp> {
>>>>>>>>>>>>>>>>   typedef ceres::Jet<T, N> ReturnType;
>>>>>>>>>>>>>>>> };
>>>>>>>>>>>>>>>> template <typename BinaryOp, typename T, int N>
>>>>>>>>>>>>>>>> struct ScalarBinaryOpTraits<T, ceres::Jet<T, N>, BinaryOp> {
>>>>>>>>>>>>>>>>   typedef ceres::Jet<T, N> ReturnType;
>>>>>>>>>>>>>>>> };
>>>>>>>>>>>>>>>> #endif  // EIGEN_VERSION_AT_LEAST(3, 3, 0)
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> There are two problems.
>>>>>>>>>>>>>>>> 1. A small problem. In the function "RealScalar threshold()
>>>>>>>>>>>>>>>> const" in SVDBase.h I have to replace
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>   return m_usePrescribedThreshold ? m_prescribedThreshold
>>>>>>>>>>>>>>>>          : diagSize * NumTraits<Scalar>::epsilon();
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> with
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>   return m_usePrescribedThreshold ? m_prescribedThreshold
>>>>>>>>>>>>>>>>          : Scalar(diagSize) * NumTraits<Scalar>::epsilon();
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> (presumably because the integer diagSize has no binary
>>>>>>>>>>>>>>>> operator with a Jet epsilon, so it must be cast to Scalar
>>>>>>>>>>>>>>>> first). This fix is similar to Gael's fix of Bug 1403
>>>>>>>>>>>>>>>> <http://eigen.tuxfamily.org/bz/show_bug.cgi?id=1403>
>>>>>>>>>>>>>>>> 2. The second is less trivial. I expect that x(p) =
>>>>>>>>>>>>>>>> pseudo-inverse(A(p))*b; is a vector of Jet. And it is
>>>>>>>>>>>>>>>> actually true for, e.g., the SVD decomposition
>>>>>>>>>>>>>>>> x(p) = VS^-1U^T * b.
>>>>>>>>>>>>>>>> But if I use
>>>>>>>>>>>>>>>> JacobiSVD<Matrix<Jet<double, 2>, Dynamic, Dynamic>> svd(A);
>>>>>>>>>>>>>>>> x(p) = svd.solve(b),
>>>>>>>>>>>>>>>> I get an error message.
>>>>>>>>>>>>>>>> Here is code for reproducing the error:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> // test_svd_jet.cpp
>>>>>>>>>>>>>>>> #include <Eigen/Dense>
>>>>>>>>>>>>>>>> #include <ceres/jet.h>
>>>>>>>>>>>>>>>> using ceres::Jet;
>>>>>>>>>>>>>>>> using namespace Eigen;
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> int test_svd_jet()
>>>>>>>>>>>>>>>> {
>>>>>>>>>>>>>>>>     typedef Matrix<Jet<double, 2>, Dynamic, Dynamic> Mat;
>>>>>>>>>>>>>>>>     typedef Matrix<Jet<double, 2>, Dynamic, 1> Vec;
>>>>>>>>>>>>>>>>     Mat A = MatrixXd::Random(3, 2).cast<Jet<double, 2>>();
>>>>>>>>>>>>>>>>     VectorXd b = VectorXd::Random(3);
>>>>>>>>>>>>>>>>     JacobiSVD<Mat> svd(A, ComputeThinU | ComputeThinV);
>>>>>>>>>>>>>>>>     int l_rank = svd.rank();
>>>>>>>>>>>>>>>>     // Explicit pseudo-inverse applied to the double rhs:
>>>>>>>>>>>>>>>>     Vec c = svd.matrixV().leftCols(l_rank)
>>>>>>>>>>>>>>>>         * svd.singularValues().head(l_rank).asDiagonal().inverse()
>>>>>>>>>>>>>>>>         * svd.matrixU().leftCols(l_rank).adjoint() * b; // *
>>>>>>>>>>>>>>>>     Vec c1 = svd.solve(b.cast<Jet<double, 2>>()); // **
>>>>>>>>>>>>>>>>     Vec c2 = svd.solve(b); // ***
>>>>>>>>>>>>>>>>     return 0;
>>>>>>>>>>>>>>>> }
>>>>>>>>>>>>>>>> // End test_svd_jet.cpp
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> // * and // ** work fine and give the same results. // ***
>>>>>>>>>>>>>>>> fails with the VS 2019 error message
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Eigen\src\Core\functors\AssignmentFunctors.h(24,1):
>>>>>>>>>>>>>>>> error C2679: binary '=': no operator found which takes
>>>>>>>>>>>>>>>> a right-hand operand of type 'const SrcScalar'
>>>>>>>>>>>>>>>> (or there is no acceptable conversion)
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The error points to line // ***. I think that the solution is
>>>>>>>>>>>>>>>> deduced as VectorXd instead of Vec, and there is then a
>>>>>>>>>>>>>>>> problem with the assignment of double to Jet. The derivatives
>>>>>>>>>>>>>>>> are lost as well. It should work like the complex type: if A
>>>>>>>>>>>>>>>> is a complex matrix and b is a real vector, x must be
>>>>>>>>>>>>>>>> complex. There is something wrong with the type deduction in
>>>>>>>>>>>>>>>> the SVD or QR decomposition.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Do you have any idea how to fix it?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Best regards,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Oleg Shirokobrod
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
