[ https://issues.apache.org/jira/browse/MAHOUT-1597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070893#comment-14070893 ]

ASF GitHub Bot commented on MAHOUT-1597:
----------------------------------------

GitHub user dlyubimov opened a pull request:

    https://github.com/apache/mahout/pull/33

    MAHOUT-1597: A + 1.0 (element-wise scalar operation) gives wrong result if rdd is missing rows, Spark side

    

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/dlyubimov/mahout intfixing

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/mahout/pull/33.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #33
    
----
commit 45642b65f3f1620a4e2187af4b2b54e26ce1c42e
Author: Dmitriy Lyubimov <dlyubi...@apache.org>
Date:   2014-07-22T01:19:37Z

    WIP

commit 746b3ddc6c0e7e8bb89ce591c32ba1b70ec688e6
Author: Dmitriy Lyubimov <dlyubi...@apache.org>
Date:   2014-07-22T18:25:57Z

    WIP

commit 1ff376b2ddd1bcbe61f896d14e27d7a413e7313c
Author: Dmitriy Lyubimov <dlyubi...@apache.org>
Date:   2014-07-22T20:23:14Z

    Code up for lazy int-keyed missing rows fix

commit c9ac3be81ed464ccc4d440b8187e15efa9a21193
Author: Dmitriy Lyubimov <dlyubi...@apache.org>
Date:   2014-07-22T21:03:25Z

    Tests passing.

----


> A + 1.0 (element-wise scalar operation) gives wrong result if rdd is missing rows, Spark side
> --------------------------------------------------------------------------------------------
>
>                 Key: MAHOUT-1597
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1597
>             Project: Mahout
>          Issue Type: Bug
>    Affects Versions: 0.9
>            Reporter: Dmitriy Lyubimov
>            Assignee: Dmitriy Lyubimov
>             Fix For: 1.0
>
>
> {code}
>     // Concoct an rdd with missing rows
>     val aRdd: DrmRdd[Int] = sc.parallelize(
>       0 -> dvec(1, 2, 3) ::
>           3 -> dvec(3, 4, 5) :: Nil
>     ).map { case (key, vec) => key -> (vec: Vector)}
>     val drmA = drmWrap(rdd = aRdd)
>     // inCoreA is the in-core control matrix equivalent to drmA,
>     // including the implied zero rows 1 and 2.
>     val controlB = inCoreA + 1.0
>     val drmB = drmA + 1.0
>     (drmB -: controlB).norm should be < 1e-10
> {code}
> The snippet above should not fail.
> It was failing because the element-wise scalar operator only evaluates rows 
> actually present in the dataset. 
> For Int-keyed row matrices, there may be implied rows that are not physically 
> present in the RDD. 
> Our goal is to detect this condition and evaluate the missing rows before 
> applying physical operators that cannot handle implied missing rows.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
