[ https://issues.apache.org/jira/browse/MAHOUT-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13131505#comment-13131505 ]

Sean Owen commented on MAHOUT-672:
----------------------------------

Folks -- what's the status on this? It's been sitting about for a few months. 
Is this going to go in for 0.6 or should we retire it?
                
> Implementation of Conjugate Gradient for solving large linear systems
> ---------------------------------------------------------------------
>
>                 Key: MAHOUT-672
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-672
>             Project: Mahout
>          Issue Type: New Feature
>          Components: Math
>    Affects Versions: 0.5
>            Reporter: Jonathan Traupman
>            Priority: Minor
>             Fix For: 0.6
>
>         Attachments: 0001-MAHOUT-672-LSMR-iterative-linear-solver.patch, 
> 0001-MAHOUT-672-LSMR-iterative-linear-solver.patch, MAHOUT-672.patch, 
> mahout-672.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> This patch contains an implementation of conjugate gradient, an iterative 
> algorithm for solving large linear systems. In particular, it is well suited 
> for large sparse systems where a traditional QR or Cholesky decomposition is 
> infeasible. Conjugate gradient only works for matrices that are square, 
> symmetric, and positive definite (essentially the same class of matrices to 
> which Cholesky decomposition applies). Systems like these commonly occur in 
> statistics and machine learning problems, e.g. regression. (A minimal sketch 
> of the basic iteration follows the quoted description below.)
>
> Both a standard (in-memory) solver and a distributed Hadoop-based solver 
> (essentially the standard solver run against a DistributedRowMatrix, a la 
> DistributedLanczosSolver) are included.
>
> There is already a version of this algorithm in the taste package, but it 
> neither operates on standard Mahout matrix/vector objects nor offers a 
> distributed version. I believe this implementation will be more generally 
> useful to the community than the specialized one in taste.
>
> This implementation solves the following types of systems (minimal sketches 
> of each case follow the quoted description below):
>
> Ax = b, where A is square, symmetric, and positive definite.
>
> A'Ax = b, where A is arbitrary but A'A is positive definite. Solving this 
> system directly is more efficient than computing A'A explicitly and then 
> solving.
>
> (A + lambda * I)x = b and (A'A + lambda * I)x = b, for systems where A or 
> A'A is singular or otherwise rank-deficient, as commonly happens when A is 
> large and sparse. Solving a system of this form is used, for example, in 
> ridge regression.
>
> In addition to the plain conjugate gradient solver, this implementation also 
> supports preconditioning, and a sample Jacobi preconditioner is included as 
> an example (also sketched below). More work will be needed to build more 
> advanced preconditioners if desired.
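
For readers who want the gist of the algorithm, below is a minimal sketch of the 
conjugate gradient iteration for Ax = b with A symmetric positive definite. It 
uses plain Java arrays purely for illustration; the patch itself operates on 
Mahout Matrix/Vector types (and a DistributedRowMatrix for the Hadoop solver), 
and the class and method names here are invented for the sketch, not taken from 
the patch.

    // Minimal conjugate gradient sketch for Ax = b, with A symmetric positive
    // definite. Illustrative only: dense arrays, zero starting vector.
    public final class CgSketch {

      static double[] solve(double[][] a, double[] b, int maxIters, double tol) {
        int n = b.length;
        double[] x = new double[n];          // x0 = 0
        double[] r = b.clone();              // r0 = b - A*x0 = b
        double[] p = r.clone();              // first search direction
        double rsOld = dot(r, r);

        for (int k = 0; k < maxIters && Math.sqrt(rsOld) > tol; k++) {
          double[] ap = multiply(a, p);      // A*p: the only place A is used
          double alpha = rsOld / dot(p, ap); // step length along p
          for (int i = 0; i < n; i++) {
            x[i] += alpha * p[i];            // advance the iterate
            r[i] -= alpha * ap[i];           // update the residual
          }
          double rsNew = dot(r, r);
          double beta = rsNew / rsOld;       // keeps the directions A-conjugate
          for (int i = 0; i < n; i++) {
            p[i] = r[i] + beta * p[i];
          }
          rsOld = rsNew;
        }
        return x;
      }

      static double dot(double[] u, double[] v) {
        double s = 0.0;
        for (int i = 0; i < u.length; i++) {
          s += u[i] * v[i];
        }
        return s;
      }

      static double[] multiply(double[][] a, double[] v) {
        double[] out = new double[a.length];
        for (int i = 0; i < a.length; i++) {
          out[i] = dot(a[i], v);
        }
        return out;
      }
    }

The only operation that touches A is the matrix-vector product, which is why the 
same loop can run against an in-memory matrix or a DistributedRowMatrix.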
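
The A'Ax = b and ridge cases can reuse the same loop by swapping in an operator 
that applies A'(Ap) + lambda*p, so A'A is never formed. A rough sketch of such 
an operator, again with invented names and plain arrays rather than the patch's 
actual API:

    // Applies (A'A + lambda*I) to p as A'(A*p) + lambda*p, without ever
    // materializing A'A. With lambda = 0 this is the plain A'Ax = b case.
    static double[] normalEquationsTimes(double[][] a, double lambda, double[] p) {
      int rows = a.length;
      int cols = p.length;

      double[] ap = new double[rows];        // ap = A*p
      for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
          ap[i] += a[i][j] * p[j];
        }
      }

      double[] out = new double[cols];       // out = A'*(A*p) + lambda*p
      for (int j = 0; j < cols; j++) {
        for (int i = 0; i < rows; i++) {
          out[j] += a[i][j] * ap[i];
        }
        out[j] += lambda * p[j];
      }
      return out;
    }

Two passes over A per iteration are all that is needed; for a large sparse A, 
the explicit product A'A is typically much denser than A itself, which is the 
efficiency point made in the description.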
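
The Jacobi preconditioner mentioned in the description amounts to dividing by 
the diagonal of A: with M = diag(A), preconditioned CG works with z = M^-1 * r 
instead of r when forming search directions. A sketch of just the preconditioner 
application (not the patch's interface):

    // Jacobi (diagonal) preconditioner: M = diag(A), so applying M^-1 simply
    // divides each residual component by the matching diagonal entry.
    // Assumes nonzero diagonal entries, which holds when A is positive definite.
    static double[] applyJacobi(double[][] a, double[] r) {
      double[] z = new double[r.length];
      for (int i = 0; i < r.length; i++) {
        z[i] = r[i] / a[i][i];
      }
      return z;
    }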

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
