[ https://issues.apache.org/jira/browse/MAHOUT-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15853002#comment-15853002 ]

ASF GitHub Bot commented on MAHOUT-1894:
----------------------------------------

Github user andrewpalumbo commented on the issue:

    https://github.com/apache/mahout/pull/271
  
    Hmm... just tried to launch into `local[4]`, and it blew up:
    
    ```
    AP-RE-X16743C45L:mahout apalumbo$ MASTER=local[4] mahout spark-shell
    log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    Using Spark's repl log4j profile: org/apache/spark/log4j-defaults-repl.properties
    To adjust logging level use sc.setLogLevel("INFO")
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 1.6.2
          /_/
    
    Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_102)
    Type in expressions to have them evaluated.
    Type :help for more information.
    Spark context available as sc.
    SQL context available as sqlContext.
    Loading /Users/apalumbo/sandbox/mahout/bin/load-shell.scala...
    import org.apache.mahout.math._
    import org.apache.mahout.math.scalabindings._
    import org.apache.mahout.math.drm._
    import org.apache.mahout.math.scalabindings.RLikeOps._
    import org.apache.mahout.math.drm.RLikeDrmOps._
    import org.apache.mahout.sparkbindings._
    sdc: org.apache.mahout.sparkbindings.SparkDistributedContext = org.apache.mahout.sparkbindings.SparkDistributedContext@73e0c775
    
                    _                 _
     _ __ ___   __ _| |__   ___  _   _| |_
    | '_ ` _ \ / _` | '_ \ / _ \| | | | __|
    | | | | | | (_| | | | | (_) | |_| | |_
    |_| |_| |_|\__,_|_| |_|\___/ \__,_|\__|  version 0.13.0
    
    Exception in thread "main" java.io.FileNotFoundException: spark-shell (Is a directory)
        at java.io.FileInputStream.open0(Native Method)
        at java.io.FileInputStream.open(FileInputStream.java:195)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at scala.reflect.io.File.inputStream(File.scala:97)
        at scala.reflect.io.File.inputStream(File.scala:82)
        at scala.reflect.io.Streamable$Chars$class.reader(Streamable.scala:93)
        at scala.reflect.io.File.reader(File.scala:82)
        at scala.reflect.io.Streamable$Chars$class.bufferedReader(Streamable.scala:98)
        at scala.reflect.io.File.bufferedReader(File.scala:82)
        at scala.reflect.io.Streamable$Chars$class.bufferedReader(Streamable.scala:97)
        at scala.reflect.io.File.bufferedReader(File.scala:82)
        at scala.reflect.io.Streamable$Chars$class.applyReader(Streamable.scala:103)
        at scala.reflect.io.File.applyReader(File.scala:82)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$interpretAllFrom$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SparkILoop.scala:677)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$interpretAllFrom$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:677)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$interpretAllFrom$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:677)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$savingReplayStack(SparkILoop.scala:162)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$interpretAllFrom$1.apply$mcV$sp(SparkILoop.scala:676)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$interpretAllFrom$1.apply(SparkILoop.scala:676)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$interpretAllFrom$1.apply(SparkILoop.scala:676)
        at org.apache.spark.repl.SparkILoop.savingReader(SparkILoop.scala:167)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$interpretAllFrom(SparkILoop.scala:675)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$loadCommand$1.apply(SparkILoop.scala:740)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$loadCommand$1.apply(SparkILoop.scala:739)
        at org.apache.spark.repl.SparkILoop.withFile(SparkILoop.scala:733)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loadCommand(SparkILoop.scala:739)
    
    {...}
    
    ```
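    
    For what it's worth, the `(Is a directory)` failure mode itself is easy to reproduce outside the shell. A minimal Scala sketch (assumption: any directory path stands in for whatever `spark-shell` resolves to when the REPL's `:load` runs; the message suffix is OS-dependent, shown here as on Linux/macOS):
    
    ```scala
    import scala.reflect.io.File
    
    object IsADirectoryRepro extends App {
      // scala.reflect.io.File delegates to java.io.FileInputStream, which
      // refuses to open a directory and throws the same FileNotFoundException
      // with an "(Is a directory)" suffix seen in the trace above.
      try File(".").inputStream()   // "." is a directory, not a regular file
      catch { case e: java.io.FileNotFoundException => println(e) }
      // expected output: java.io.FileNotFoundException: . (Is a directory)
    }
    ```
    
    So the interesting question is why the REPL ends up `:load`-ing a path named `spark-shell` at all.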


> Add support for Spark 2.x backend
> ---------------------------------
>
>                 Key: MAHOUT-1894
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1894
>             Project: Mahout
>          Issue Type: Task
>          Components: spark
>    Affects Versions: 0.13.0
>            Reporter: Suneel Marthi
>            Assignee: Trevor Grant
>            Priority: Critical
>             Fix For: 1.0.0, 0.13.0, 0.14.0
>
>
> Add support for Spark 2.x as the backend execution engine.
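
For anyone picking this up, a quick way to verify a backend build is a round-trip through the DRM API, like the shell exercises by hand above. A sketch only, assuming a Mahout tree built against the target Spark version and MAHOUT_HOME set so the context helper can locate the backend jars; `mahoutSparkContext` and the DRM ops are the ones imported in the log above:

```scala
import org.apache.mahout.math.scalabindings._
import org.apache.mahout.math.drm._
import org.apache.mahout.math.drm.RLikeDrmOps._
import org.apache.mahout.sparkbindings._

object SparkBackendSmokeTest extends App {
  // Stand up a Mahout distributed context on a local master
  // (assumption: local[4]; any Spark master URL should behave the same).
  implicit val ctx = mahoutSparkContext(masterUrl = "local[4]",
                                        appName = "MAHOUT-1894-smoke")

  // Distribute a small matrix, compute A' %*% A, and collect it back in-core.
  val drmA = drmParallelize(dense((1, 2), (3, 4)))
  println((drmA.t %*% drmA).collect)

  ctx.close()
}
```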



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
