[ 
https://issues.apache.org/jira/browse/ZEPPELIN-24?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392797#comment-14392797
 ] 

Sebastian YEPES FERNANDEZ edited comment on ZEPPELIN-24 at 4/2/15 3:02 PM:
---------------------------------------------------------------------------

It's strange, as it works correctly when I run them from the spark-shell connected 
to the same cluster (spark://n1.xxx.com:7077).

When I run sc.version, the Zeppelin app appears in the Spark "Running 
Applications" list.

{code}
    "2AKYY8BY2": {
      "id": "2AKYY8BY2",
      "name": "spark",
      "group": "spark",
      "properties": {
        "spark.cores.max": "10",
        "spark.yarn.jar": "",
        "master": "spark://n1.xxx.com:7077",
        "zeppelin.spark.maxResult": "10000",
        "zeppelin.dep.localrepo": "local-repo",
        "spark.app.name": "Zeppelin",
        "spark.executor.memory": "10g",
        "zeppelin.spark.useHiveContext": "false",
        "args": "",
        "spark.home": "",
        "zeppelin.spark.concurrentSQL": "false",
        "zeppelin.pyspark.python": "python"
      },
      "interpreterGroup": [
{code}

{code}
zeppelin-env.sh
export MASTER="spark://n1.xxx.com:7077"
{code}
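
For comparison, a sketch of the spark-shell invocation that works against the same standalone master (the memory and core settings mirror the interpreter properties above; the exact flags used in my working session are assumed, not copied):

{code}
# Connect spark-shell to the same standalone master Zeppelin is configured with.
# --conf values mirror spark.executor.memory / spark.cores.max from the
# interpreter settings shown above.
spark-shell --master spark://n1.xxx.com:7077 \
  --conf spark.executor.memory=10g \
  --conf spark.cores.max=10
{code}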




was (Author: syepes):
It's strange, as it works correctly when I run them from the spark-shell connected 
to the same cluster (spark://n1.xxx.com:7077).

{code}
    "2AKYY8BY2": {
      "id": "2AKYY8BY2",
      "name": "spark",
      "group": "spark",
      "properties": {
        "spark.cores.max": "10",
        "spark.yarn.jar": "",
        "master": "spark://n1.xxx.com:7077",
        "zeppelin.spark.maxResult": "10000",
        "zeppelin.dep.localrepo": "local-repo",
        "spark.app.name": "Zeppelin",
        "spark.executor.memory": "10g",
        "zeppelin.spark.useHiveContext": "false",
        "args": "",
        "spark.home": "",
        "zeppelin.spark.concurrentSQL": "false",
        "zeppelin.pyspark.python": "python"
      },
      "interpreterGroup": [
{code}

> Exception when reading files: textFile / parquetFile
> ----------------------------------------------------
>
>                 Key: ZEPPELIN-24
>                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-24
>             Project: Zeppelin
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.5.0
>            Reporter: Sebastian YEPES FERNANDEZ
>
> Hello,
> I have just encountered the following issue when running the latest version 
> (#b6768c). Has anyone else encountered this issue? 
> Build options:
> -Phadoop-2.4 -Dhadoop.version=2.4.0 -Pspark-1.3 -Dspark.version=1.3.0
> {code:title=%Spark|borderStyle=solid}
> val bankText = sc.textFile("/data/bank-full.csv")
> bankText: org.apache.spark.rdd.RDD[String] = /data/bank-full.csv 
> MapPartitionsRDD[1] at textFile at <console>:23
> bankText.count
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
> stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 
> (TID 7, n1): ExecutorLostFailure (executor 3 lost)
> Driver stacktrace:
>       at 
> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1203)
>       at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
>       at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1191)
>       at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>       at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>       at 
> org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1191)
>       at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
>       at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
>       at scala.Option.foreach(Option.scala:236)
>       at 
> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
>       at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
>       at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
>       at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
