[ https://issues.apache.org/jira/browse/MAHOUT-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14935805#comment-14935805 ]
ASF GitHub Bot commented on MAHOUT-1570:
----------------------------------------

Github user alexeygrigorev commented on the pull request:
https://github.com/apache/mahout/pull/137#issuecomment-144183839

I also remember having problems building Mahout with JDK 1.8. Have you tried 1.7? Unfortunately I cannot test it myself at the moment...

On 29 September 2015 at 22:26, Dmitriy Lyubimov <notificati...@github.com> wrote:

> Hm. Doesn't compile for me with Maven 3.3.3 and JDK 1.8.
>
> [INFO] --- maven-assembly-plugin:2.4.1:single (dependency-reduced) @ mahout-flink_2.10 ---
> [INFO] Reading assembly descriptor: src/main/assembly/dependency-reduced.xml
> [INFO] ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Mahout Build Tools ................................. SUCCESS [  1.782 s]
> [INFO] Apache Mahout ...................................... SUCCESS [  0.039 s]
> [INFO] Mahout Math ........................................ SUCCESS [  7.477 s]
> [INFO] Mahout HDFS ........................................ SUCCESS [  1.487 s]
> [INFO] Mahout Map-Reduce .................................. SUCCESS [ 12.867 s]
> [INFO] Mahout Integration ................................. SUCCESS [  2.695 s]
> [INFO] Mahout Examples .................................... SUCCESS [ 13.358 s]
> [INFO] Mahout Math Scala bindings ......................... SUCCESS [ 25.349 s]
> [INFO] Mahout H2O backend ................................. SUCCESS [ 16.332 s]
> [INFO] Mahout Spark bindings .............................. SUCCESS [ 26.458 s]
> [INFO] Mahout Spark bindings shell ........................ SUCCESS [  4.863 s]
> [INFO] Mahout Release Package ............................. SUCCESS [  0.663 s]
> [INFO] Mahout Flink bindings .............................. FAILURE [06:09 min]
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 08:03 min
> [INFO] Finished at: 2015-09-29T13:24:29-07:00
> [INFO] Final Memory: 80M/859M
> [INFO] ------------------------------------------------------------------------
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:2.4.1:single (dependency-reduced) on project mahout-flink_2.10: Error reading assemblies: Error locating assembly descriptor: src/main/assembly/dependency-reduced.xml
> [ERROR]
> [ERROR] [1] [INFO] Searching for file location: /TB/dmitriy/projects/mahout/flink/src/main/assembly/dependency-reduced.xml
> [ERROR]
> [ERROR] [2] [INFO] File: /TB/dmitriy/projects/mahout/flink/src/main/assembly/dependency-reduced.xml does not exist.
> [ERROR]
> [ERROR] [3] [INFO] File: /TB/dmitriy/projects/mahout/src/main/assembly/dependency-reduced.xml does not exist.
> [ERROR] -> [Help 1]
> [E
>
> —
> Reply to this email directly or view it on GitHub
> <https://github.com/apache/mahout/pull/137#issuecomment-144181773>.

> Adding support for Apache Flink as a backend for the Mahout DSL
> ---------------------------------------------------------------
>
>                  Key: MAHOUT-1570
>                  URL: https://issues.apache.org/jira/browse/MAHOUT-1570
>              Project: Mahout
>           Issue Type: Improvement
>             Reporter: Till Rohrmann
>             Assignee: Alexey Grigorev
>               Labels: DSL, flink, scala
>              Fix For: 0.11.1
>
> With the finalized abstraction of the Mahout DSL plans from the backend operations (MAHOUT-1529), it should be possible to integrate further backends for the Mahout DSL. Apache Flink would be a suitable candidate to act as a good execution backend.
> With respect to the implementation, the biggest difference between Spark and Flink at the moment is probably the incremental rollout of plans, which is triggered by Spark's actions and which is not yet supported by Flink. However, the Flink community is working on this issue. For the moment, it should be possible to circumvent this problem by writing intermediate results required by an action to HDFS and reading them back from there.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
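The HDFS workaround described in the issue can be sketched with Flink's Scala DataSet API. This is a minimal illustration, not the actual Mahout-Flink code: the path, the tuple type, and the object name are all hypothetical, and it assumes a Flink DataSet-API setup where `writeAsCsv`, `env.execute`, and `readCsvFile` are available.

```scala
import org.apache.flink.api.scala._

// Sketch: emulate a Spark-style "action" on Flink by materializing the
// intermediate DataSet to HDFS, executing the plan up to that point, and
// reading the result back to continue the logical pipeline.
object CheckpointSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Some intermediate result that an action would force in Spark.
    val intermediate: DataSet[(Int, Double)] =
      env.fromElements((0, 1.0), (1, 2.5))

    // "Action": write the intermediate result out and run the plan so far.
    val path = "hdfs:///tmp/mahout-flink/intermediate" // hypothetical path
    intermediate.writeAsCsv(path)
    env.execute("materialize intermediate result")

    // Resume by reading the materialized result back from HDFS.
    val resumed: DataSet[(Int, Double)] =
      env.readCsvFile[(Int, Double)](path)
    resumed.map { case (i, v) => (i, v * 2) }.print()
  }
}
```

The write/execute/read-back round trip is the cost of not having incremental plan rollout: each emulated action cuts the plan in two and pays an HDFS materialization in between.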