[
https://issues.apache.org/jira/browse/MAHOUT-1974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15990588#comment-15990588
]
ASF GitHub Bot commented on MAHOUT-1974:
----------------------------------------
Github user andrewpalumbo commented on the issue:
https://github.com/apache/mahout/pull/310
@nsakharnykh, @rawkintrevo, I ran out of time tonight to finish out
`dense %*% dense` and `dense %x% sparse`; I went down a rabbit hole with the
NVIDIA C API docs for cuSPARSE. I noticed that JCuda supports only a single
`dense dense` dgemm algorithm, with column-major matrices. Most Mahout
matrices are row-major, but I began considering the `dense sparse`
multiplication, and was slightly thrown off by what seems to be a required `csr`
compression; it seems that sparse matrices should instead be compressed as
`csc`. Anyway, I ended up in the LAPACK Fortran; apologies for not finishing it
up tonight, guys, I got off on a long tangent and ran out of time.
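For what it's worth, the column-major restriction doesn't force a transpose of our row-major data: a row-major buffer read column-major is exactly the transpose, so `C = A * B` in row-major falls out of a column-major gemm by swapping the operands (`C^T = B^T * A^T`). A minimal sketch of the trick (plain Java with a naive gemm standing in for the JCuda/cuBLAS call; names here are mine, not Mahout's):

```java
import java.util.Arrays;

public class RowMajorGemm {
    // Plain column-major dgemm: C(m x n) = A(m x k) * B(k x n),
    // all arrays column-major with leading dimensions m, k, m.
    static void dgemmColMajor(int m, int n, int k,
                              double[] a, double[] b, double[] c) {
        for (int j = 0; j < n; j++)
            for (int i = 0; i < m; i++) {
                double s = 0.0;
                for (int p = 0; p < k; p++)
                    s += a[p * m + i] * b[j * k + p];
                c[j * m + i] = s;
            }
    }

    // Row-major product via the column-major kernel: a row-major
    // buffer interpreted column-major is the transpose, so calling
    // gemm(n, m, k, B, A, C) yields C^T column-major, which is C
    // row-major. No data movement needed.
    static void dgemmRowMajor(int m, int n, int k,
                              double[] a, double[] b, double[] c) {
        dgemmColMajor(n, m, k, b, a, c);
    }

    public static void main(String[] args) {
        // A = [[1,2],[3,4]], B = [[5,6],[7,8]], both row-major.
        double[] a = {1, 2, 3, 4};
        double[] b = {5, 6, 7, 8};
        double[] c = new double[4];
        dgemmRowMajor(2, 2, 2, a, b, c);
        // A*B = [[19,22],[43,50]] row-major
        System.out.println(Arrays.toString(c));
    }
}
```

The same operand-swap should apply to the single column-major dgemm JCuda exposes, so the row-major layout of Mahout's dense blocks needn't cost an extra transpose kernel.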
I pushed my beginning work up to my MAHOUT-1974 branch. Nothing really
worth looking at right now, but I will make a PR against this when I get the
`dense` work together.
Regardless, I should have at least a quick-and-dirty version ready to go
soon, while I work out what we'll need for experiments and benchmarking. We
can still discuss and consider different Spark configurations tomorrow with our
`dense` cases, but I'd of course like to get this right.
As I mentioned on the last call, we allow a "Sparse" DRM's in-core
components to be both sparse and dense. Currently the threshold at which a
DRM block is converted from a sparse to a dense matrix is pretty high (an
estimated 25% non-zeros). In the future we will need to let the user set this
sparsity threshold somehow.
FYI:
https://github.com/apache/mahout/blob/master/math-scala/src/main/scala/org/apache/mahout/math/scalabindings/package.scala#L431
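The shape of that conversion decision can be sketched like so (illustrative Java, not the actual Scala in the linked `package.scala`; the sampling approach and all names here are assumptions, with the 25% cutoff made a tunable parameter):

```java
import java.util.Random;

public class DensityCheck {
    // Current hard-coded cutoff; the point is to make this user-settable.
    static final double DEFAULT_DENSITY_THRESHOLD = 0.25;

    // Estimate the non-zero fraction by sampling entries rather than
    // scanning the whole block.
    static double estimateDensity(double[][] block, int samples, Random rnd) {
        int rows = block.length, cols = block[0].length, nonZero = 0;
        for (int s = 0; s < samples; s++) {
            if (block[rnd.nextInt(rows)][rnd.nextInt(cols)] != 0.0) nonZero++;
        }
        return (double) nonZero / samples;
    }

    // True if the block's estimated density crosses the threshold and
    // it should be stored as a dense matrix.
    static boolean shouldBeDense(double[][] block, double threshold) {
        return estimateDensity(block, 1000, new Random(42)) >= threshold;
    }
}
```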
> CUDA support
> ------------
>
> Key: MAHOUT-1974
> URL: https://issues.apache.org/jira/browse/MAHOUT-1974
> Project: Mahout
> Issue Type: New Feature
> Reporter: Nikolay Sakharnykh
> Labels: features
>
> Implement native CUDA bindings using JCuda
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)