OK, hopefully I can look this over on the weekend.

Yes, if the tests do not pass, it is probably easier, for now, to remove the
base class that contains the standard tests from the declaration.
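As a rough sketch of what that would look like (the trait and class names here
are illustrative, not the actual Mahout math-scala suite names): the shared
tests come in via inheritance, so dropping the base trait from one backend's
suite declaration disables them for that backend only, without touching the
other backends.

```scala
// Hypothetical sketch: standard tests are inherited through a shared trait.
// StandardAlgorithmTests and the Flink* class names are made up for
// illustration; they are not the real Mahout classes.

trait StandardAlgorithmTests {
  // "standard" tests every backend suite inherits and runs
  def standardTests: Seq[String] = Seq("dspca", "dals", "dqrThin")
}

// Before: the backend suite mixes in the shared tests.
class FlinkSuiteWithBase extends StandardAlgorithmTests

// After: the base trait is removed from the declaration, so only the
// backend-specific tests remain; the shared suites stay enabled for
// the other backends.
class FlinkSuiteWithoutBase {
  def backendSpecificTests: Seq[String] = Seq("flinkDrmLoad")
}

object Demo {
  def main(args: Array[String]): Unit = {
    // the suite with the base trait still carries the 3 inherited tests
    println(new FlinkSuiteWithBase().standardTests.size)
    // the suite without it only runs its own tests
    println(new FlinkSuiteWithoutBase().backendSpecificTests.size)
  }
}
```

A note in the code pointing back at the removed trait would make it easy to
restore the standard tests once they pass.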

On Tue, Oct 6, 2015 at 10:53 AM, ASF GitHub Bot (JIRA) <j...@apache.org>
wrote:

>
>     [
> https://issues.apache.org/jira/browse/MAHOUT-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14945456#comment-14945456
> ]
>
> ASF GitHub Bot commented on MAHOUT-1570:
> ----------------------------------------
>
> Github user alexeygrigorev commented on the pull request:
>
>     https://github.com/apache/mahout/pull/137#issuecomment-145945199
>
>     > If they are standard algorithm tests coming from the abstract test
> suites in math-scala, they currently cannot be disabled (in any way i know
> anyway) without disabling them for other backends too.
>
>     Yes, they are the standard tests; unfortunately, I also don't know of a
> way to disable a specific test.
>
>     Probably I can just disable the suites with failing tests completely,
> with a note that these tests should be fixed (and run when developing or
> changing something). I hope that my tests give enough coverage to detect
> errors if they are introduced into the existing code.
>
>
> > Adding support for Apache Flink as a backend for the Mahout DSL
> > ---------------------------------------------------------------
> >
> >                 Key: MAHOUT-1570
> >                 URL: https://issues.apache.org/jira/browse/MAHOUT-1570
> >             Project: Mahout
> >          Issue Type: Improvement
> >            Reporter: Till Rohrmann
> >            Assignee: Alexey Grigorev
> >              Labels: DSL, flink, scala
> >             Fix For: 0.11.1
> >
> >
> > With the finalized abstraction of the Mahout DSL plans from the backend
> operations (MAHOUT-1529), it should be possible to integrate further
> backends for the Mahout DSL. Apache Flink would be a suitable candidate to
> act as a good execution backend.
> > With respect to the implementation, the biggest difference between Spark
> and Flink at the moment is probably the incremental rollout of plans, which
> is triggered by Spark's actions and which is not supported by Flink yet.
> However, the Flink community is working on this issue. For the moment, it
> should be possible to circumvent this problem by writing intermediate
> results required by an action to HDFS and reading from there.
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>
