[jira] [Comment Edited] (SPARK-16599) java.util.NoSuchElementException: None.get at at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)

2017-07-22 Thread Ryan Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16097318#comment-16097318
 ] 

Ryan Williams edited comment on SPARK-16599 at 7/22/17 1:23 PM:


Seems like the second {{SparkContext}}'s 
{{BlockManager}}/{{NettyBlockTransferService}} starts at the same address as 
the first's, intercepts subsequent block requests, and doesn't know about 
blocks that were created with the first {{SparkContext}}/{{BlockManager}}. 
However, this doesn't explain why switching to {{DEBUG}} logging made the issue go away in my application.

If anyone else who saw this issue wants to chime in about whether they may have had 
– or definitely did not have – multiple concurrently active {{SparkContext}}s 
in the same JVM when they saw the issue, that would be useful!
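For illustration, here is a minimal sketch (my own, hypothetical – not a confirmed reproduction) of the scenario I'm describing: two concurrently active {{SparkContext}}s in one JVM, where blocks cached under the first context are still being used while the second context's {{BlockManager}} is also live. {{spark.driver.allowMultipleContexts}} is only set here so that a second context can be created at all.

{code}
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical sketch: whether this exact sequence triggers the
// releaseAllLocksForTask None.get is unverified.
object TwoContextsSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("two-contexts-sketch")
      .set("spark.driver.allowMultipleContexts", "true") // allow a 2nd context in this JVM

    val sc1 = new SparkContext(conf)
    val cached = sc1.parallelize(1 to 1000).cache()
    cached.count() // blocks are registered with sc1's BlockManager

    val sc2 = new SparkContext(conf) // second, concurrently active context
    // Requests that reach sc2's BlockManager/NettyBlockTransferService know
    // nothing about the blocks created under sc1, which is the hypothesis above.
    cached.count()

    sc2.stop()
    sc1.stop()
  }
}
{code}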




was (Author: rdub):
Seems like the second {{SparkContext}}'s 
{{BlockManager}}/{{NettyBlockTransferService}} starts at the same address as 
the first's, intercepts subsequent block requests, and doesn't know about 
blocks that were created with the first {{SparkContext}}/{{BlockManager}}.

If anyone else who saw this issue wants to chime in about whether they may have had 
– or definitely did not have – multiple concurrently active {{SparkContext}}s 
in the same JVM when they saw the issue, that would be useful!

> java.util.NoSuchElementException: None.get  at at 
> org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
> --
>
> Key: SPARK-16599
> URL: https://issues.apache.org/jira/browse/SPARK-16599
> Project: Spark
>  Issue Type: Bug
>Affects Versions: 2.0.0
> Environment: centos 6.7   spark 2.0
>Reporter: binde
>
> run a spark job with spark 2.0, error message
> Job aborted due to stage failure: Task 0 in stage 821.0 failed 4 times, most 
> recent failure: Lost task 0.3 in stage 821.0 (TID 1480, e103): 
> java.util.NoSuchElementException: None.get
>   at scala.None$.get(Option.scala:347)
>   at scala.None$.get(Option.scala:345)
>   at 
> org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
>   at 
> org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:644)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:281)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)






[jira] [Comment Edited] (SPARK-16599) java.util.NoSuchElementException: None.get at at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)

2017-04-19 Thread ilker ozsaracoglu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15975490#comment-15975490
 ] 

ilker ozsaracoglu edited comment on SPARK-16599 at 4/19/17 8:52 PM:


[~sowen], I get this error consistently. I am currently on 2.1, but I had the 
same experience on 2.0. The error points to the "foreach" step.

This is the case (with collect) in which I do NOT experience the problem, regardless 
of my job submit type (Local, YARN-client, or YARN-cluster):
{code}
DFnodeGroup.collect().foreach(r => {
  ...
})
{code}

This is the case (without collect) in which I DO experience the problem every time 
when submitting the job to YARN (client or cluster), but not in Local mode:
{code}
DFnodeGroup.foreach(r => {
  ...
})
{code}

The difference might be whether the tasks run in the same JVM as the driver or not. 
I tried workarounds, including the one suggested by [~naegelejd] above on Sep 8, 
with no success.
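To illustrate that point (my own sketch, not from the failing job – {{df}} below is just a stand-in for {{DFnodeGroup}}): with the plain {{foreach}}, the closure runs inside tasks on executor JVMs, so on YARN its output lands in executor stdout; with {{collect().foreach}}, the rows are pulled back first and the closure runs in the driver JVM.

{code}
import org.apache.spark.TaskContext
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("foreach-location-sketch").getOrCreate()
val df = spark.range(0, 10).toDF("id") // hypothetical stand-in for DFnodeGroup

// Executor-side: runs inside tasks, possibly in a different JVM than the driver.
df.foreach { r =>
  println(s"partition ${TaskContext.getPartitionId()}: $r")
}

// Driver-side: the data is collected first, then the closure runs locally.
df.collect().foreach(r => println(r))
{code}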

Thanks.

executor 1): java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at 
org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
at 
org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:670)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:289)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


was (Author: iozsaracoglu):
[~sowen], I get this error consistently. I am currently on 2.1, but I had the 
same experience on 2.0. The error points to the "foreach" step.

This is the case (with collect) in which I do NOT experience the problem, regardless 
of my job submit type (Local, YARN-client, or YARN-cluster):
{code}
DFnodeGroup.collect().foreach(r => {
  ...
})
{code}

This is the case (without collect) in which I DO experience the problem when 
submitting the job to YARN (client or cluster), but not in Local mode:
{code}
DFnodeGroup.foreach(r => {
  ...
})
{code}

The difference might be whether the tasks run in the same JVM as the driver or not. 
I tried workarounds, including the one suggested by [~naegelejd] above on Sep 8, 
with no success.

Thanks.




[jira] [Comment Edited] (SPARK-16599) java.util.NoSuchElementException: None.get at at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)

2017-04-19 Thread ilker ozsaracoglu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15975490#comment-15975490
 ] 

ilker ozsaracoglu edited comment on SPARK-16599 at 4/19/17 8:50 PM:


[~sowen], I get this error consistently. I am currently on 2.1, but I had the 
same experience on 2.0. The error points to the "foreach" step.

This is the case (with collect) in which I do NOT experience the problem, regardless 
of my job submit type (Local, YARN-client, or YARN-cluster):
{code}
DFnodeGroup.collect().foreach(r => {
  ...
})
{code}

This is the case (without collect) in which I DO experience the problem when 
submitting the job to YARN (client or cluster), but not in Local mode:
{code}
DFnodeGroup.foreach(r => {
  ...
})
{code}

The difference might be whether the tasks run in the same JVM as the driver or not. 
I tried workarounds, including the one suggested by [~naegelejd] above on Sep 8, 
with no success.

Thanks.


was (Author: iozsaracoglu):
[~sowen], I get this error consistently. I am currently on 2.1, but I had the 
same experience on 2.0. The error points to the "foreach" step.

This is the case (with collect) in which I do NOT experience the problem, regardless 
of my job submit type (Local, YARN-client, or YARN-cluster):
{code}
DFnodeGroup.collect().foreach(r => {
  ...
})
{code}

This is the case (without collect) in which I DO experience the problem when 
submitting the job to YARN (client or cluster), but not in Local mode:
{code}
DFnodeGroup.foreach(r => {
  ...
})
{code}

The difference might be whether the tasks run in the same JVM as the driver or not. 
I tried workarounds, including the one suggested by [~naegelejd] above on Sep 8.

Thanks.




[jira] [Comment Edited] (SPARK-16599) java.util.NoSuchElementException: None.get at at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)

2017-03-14 Thread Kim Yong Hwan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15924043#comment-15924043
 ] 

Kim Yong Hwan edited comment on SPARK-16599 at 3/14/17 11:45 AM:
-

I have the same problem. I installed spark-2.1.0-bin-hadoop2.7 on a Mac.

My example code is below and is very simple, but sometimes 
"java.util.NoSuchElementException: None.get" is thrown.

{code}
val nums = 1 to 30
val powerfulRdd = sc.parallelize(nums)
powerfulRdd.filter(_ % 2 == 0).collect()
{code}


The error is below:
17/03/14 20:33:06 ERROR Executor: Exception in task 3.0 in stage 4.0 (TID 35)
java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at org.apa


17/03/14 20:33:06 ERROR Executor: Exception in task 1.0 in stage 4.0 (TID 33)
java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at 
org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
at 
org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:670)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:289)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
org.apache.spark.SparkException: Job aborted due to stage failure: Task 7 in 
stage 4.0 failed 1 times, most recent failure: Lost task 7.0 in stage 4.0 (TID 
39, localhost, executor driver): java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at 
org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
at 
org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:670)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:289)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
  at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
  at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
  at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
  at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
  at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
  at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
  at scala.Option.foreach(Option.scala:257)
  at 
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
  at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
  at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
  at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
  at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:917)
  at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:915)
  at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
  at org.apache.spark.rdd.RDD.foreach(RDD.scala:915)
  ... 52 elided
Caused by: java.util.NoSuchElementException: None.get
  at scala.None$.get(Option.scala:347)
  at scala.None$.get(Option.scala:345)
  at 
org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)
  at 
org.apache.spark.storage.BlockManager.releaseAllLocksForTask(BlockManager.scala:670)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:289)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at 
java.util.co

[jira] [Comment Edited] (SPARK-16599) java.util.NoSuchElementException: None.get at at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)

2017-02-01 Thread Jakub Dubovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848138#comment-15848138
 ] 

Jakub Dubovsky edited comment on SPARK-16599 at 2/1/17 9:34 AM:


[~yetsun] Have you run this in spark-shell or via spark-submit?

I still do not have a minimal example to post here, but my code also involves a 
custom case class used in a Dataset. It works when I spark-submit it or type it 
directly into spark-shell, but it fails when run in spark-shell through 
[sparkNotebook|https://github.com/andypetrella/spark-notebook]. I do not yet know 
what difference that makes. See the [sparkNB issue 
here|https://github.com/andypetrella/spark-notebook/issues/807].
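For context, the pattern I mean is roughly the following (a hypothetical sketch, not my actual code): a case class defined alongside the job and used as the element type of a {{Dataset}}. This runs fine as a compiled application launched with spark-submit; whether an equivalent snippet trips the {{None.get}} error inside the notebook REPL is exactly what I have not pinned down yet.

{code}
import org.apache.spark.sql.SparkSession

// Hypothetical stand-in for the custom case class mentioned above.
case class Record(id: Int, name: String)

object CaseClassDatasetSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("case-class-dataset-sketch").getOrCreate()
    import spark.implicits._

    // A Dataset typed on the custom case class, with a simple typed map over it.
    val ds = Seq(Record(1, "a"), Record(2, "b")).toDS()
    ds.map(r => r.copy(name = r.name.toUpperCase)).show()

    spark.stop()
  }
}
{code}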


was (Author: dubovsky):
[~yetsun] Have you run this in spark-shell or via spark-submit?

I still do not have a minimal example to post here, but my code also involves a 
custom case class used in a Dataset. It works when I spark-submit it but fails 
when I run it in spark-shell (including the definition of the case class).




[jira] [Comment Edited] (SPARK-16599) java.util.NoSuchElementException: None.get at at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)

2017-01-31 Thread Jun Ye (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15847889#comment-15847889
 ] 

Jun Ye edited comment on SPARK-16599 at 2/1/17 2:25 AM:


I got the same exception with the following code: 

(My Spark version is 2.1.0. Scala version: 2.11.8. Hadoop version: 2.7.3)

{code}
case class MyClass(id: Int, name: String)

val myDF = sparkSession.sparkContext
.textFile("s3a://myS3Bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map(a => MyClass(a(0).toInt, a(1)))
.toDF
{code}

The exception only happened at the following line:
{code}
.map(a => MyClass(a(0).toInt, a(1)))
{code}
If I removed this line, there is no exception. 

So I changed it to the following to bypass this exception.
{code}
val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map {
  case a: Array[String] => (a(0).toInt, a(1))
}.toDF("id", "name")
{code}

It works for me.





was (Author: yetsun):
I got the same exception with the following code: 

(My Spark version is 2.1.0. Scala version: 2.11.8. Hadoop version: 2.7.3)

{code}
case class MyClass(id: Int, name: String)

val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map(a => MyClass(a(0).toInt, a(1)))
.toDF
{code}

The exception only happened at the following line:
{code}
.map(a => MyClass(a(0).toInt, a(1)))
{code}
If I removed this line, there is no exception. 

So I changed it to the following to bypass this exception.
{code}
val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map {
  case a: Array[String] => (a(0).toInt, a(1))
}.toDF("id", "name")
{code}

It works for me.







[jira] [Comment Edited] (SPARK-16599) java.util.NoSuchElementException: None.get at at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)

2017-01-31 Thread Jun Ye (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15847889#comment-15847889
 ] 

Jun Ye edited comment on SPARK-16599 at 2/1/17 2:25 AM:


I got the same exception with the following code: 

(My Spark version is 2.1.0. Scala version: 2.11.8. Hadoop version: 2.7.3)

{code}
case class MyClass(id: Int, name: String)

val myDF = sparkSession.sparkContext
.textFile("s3a://myS3Bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map(a => MyClass(a(0).toInt, a(1)))
.toDF
{code}

The exception only happened at the following line:
{code}
.map(a => MyClass(a(0).toInt, a(1)))
{code}
If I removed this line, there is no exception. 

So I changed it to the following to bypass this exception.
{code}
val myDF = sparkSession.sparkContext
.textFile("s3a://myS3Bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map {
  case a: Array[String] => (a(0).toInt, a(1))
}.toDF("id", "name")
{code}

It works for me.





was (Author: yetsun):
I got the same exception with the following code: 

(My Spark version is 2.1.0. Scala version: 2.11.8. Hadoop version: 2.7.3)

{code}
case class MyClass(id: Int, name: String)

val myDF = sparkSession.sparkContext
.textFile("s3a://myS3Bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map(a => MyClass(a(0).toInt, a(1)))
.toDF
{code}

The exception only happened at the following line:
{code}
.map(a => MyClass(a(0).toInt, a(1)))
{code}
If I removed this line, there is no exception. 

So I changed it to the following to bypass this exception.
{code}
val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map {
  case a: Array[String] => (a(0).toInt, a(1))
}.toDF("id", "name")
{code}

It works for me.







[jira] [Comment Edited] (SPARK-16599) java.util.NoSuchElementException: None.get at at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)

2017-01-31 Thread Jun Ye (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15847889#comment-15847889
 ] 

Jun Ye edited comment on SPARK-16599 at 2/1/17 2:24 AM:


I got the same exception with the following code: 

(My Spark version is 2.1.0. Scala version: 2.11.8. Hadoop version: 2.7.3)

{code}
case class MyClass(id: Int, name: String)

val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map(a => MyClass(a(0).toInt, a(1)))
.toDF
{code}

The exception only happened at the following line:
{code}
.map(a => MyClass(a(0).toInt, a(1)))
{code}
If I removed this line, there is no exception. 

So I changed it to the following to bypass this exception.
{code}
val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map {
  case a: Array[String] => (a(0).toInt, a(1))
}.toDF("id", "name")
{code}

It works for me.





was (Author: yetsun):
I got the same exception with the following code: 

(My Spark version is 2.1.0. Scala version: 2.11.8. Hadoop version: 2.7.3)

{code}
case class MyClass(id:Int, name: String)

val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map(a => MyClass(a(0).toInt, a(1)))
.toDF
{code}

The exception only happened at the following line:
{code}
.map(a => MyClass(a(0).toInt, a(1)))
{code}
If I removed this line, there is no exception. 

So I changed it to the following to bypass this exception.
{code}
val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map {
  case a: Array[String] => (a(0).toInt, a(1))
}.toDF("id", "name")
{code}

It works for me.







[jira] [Comment Edited] (SPARK-16599) java.util.NoSuchElementException: None.get at at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)

2017-01-31 Thread Jun Ye (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15847889#comment-15847889
 ] 

Jun Ye edited comment on SPARK-16599 at 2/1/17 2:24 AM:


I got the same exception with the following code: 

(My Spark version is 2.1.0. Scala version: 2.11.8. Hadoop version: 2.7.3)

{code}
case class MyClass(id:Int, name: String)

val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map(a => MyClass(a(0).toInt, a(1)))
.toDF
{code}

The exception only happened at the following line:
{code}
.map(a => MyClass(a(0).toInt, a(1)))
{code}
If I removed this line, there is no exception. 

So I changed it to the following to bypass this exception.
{code}
val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map {
  case a: Array[String] => (a(0).toInt, a(1))
}.toDF("id", "name")
{code}

It works for me.





was (Author: yetsun):
I got the same exception with the following code: 

(My Spark version is 2.1.0. Scala version: 2.11.8. Hadoop version: 2.7.3)

{code}
case class MyClass(id:Int, name: String)

val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map(a => MyClass(a(0).toInt, a(1)))
.toDF
{code}

The exception only happened at the following line 
{code}
.map(a => MyClass(a(0).toInt, a(1)))
{code}
If I removed this line, there is no exception. 

So I change to the following to bypass this exception.
{code}
val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map {
  case a: Array[String] => (a(0).toInt, a(1))
}.toDF("id", "name")
{code}

It works for me.







[jira] [Comment Edited] (SPARK-16599) java.util.NoSuchElementException: None.get at at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)

2017-01-31 Thread Jun Ye (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15847889#comment-15847889
 ] 

Jun Ye edited comment on SPARK-16599 at 2/1/17 2:22 AM:


I got the same exception with the following code: 

(My Spark version is 2.1.0. Scala version: 2.11.8. Hadoop version: 2.7.3)

{code:scala}
case class MyClass(id:Int, name: String)

val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map(a => MyClass(a(0).toInt, a(1)))
.toDF
{code}

The exception only happened at the following line 
{code:scala}
.map(a => MyClass(a(0).toInt, a(1)))
{code}
If I removed this line, there is no exception. 

So I change to the following to bypass this exception.
{code:scala}
val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map {
  case a: Array[String] => (a(0).toInt, a(1))
}.toDF("id", "name")
{code}

It works for me.





was (Author: yetsun):
I got the same exception with the following code: 

(My Spark version is 2.1.0. Scala version: 2.11.8. Hadoop version: 2.7.3)

```
case class MyClass(id:Int, name: String)

val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map(a => MyClass(a(0).toInt, a(1)))
.toDF
```

The exception only happened at the following line 
```
.map(a => MyClass(a(0).toInt, a(1)))
```
If I removed this line, there is no exception. 

So I change to the following to bypass this exception.
```
val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map {
  case a: Array[String] => (a(0).toInt, a(1))
}.toDF("id", "name")

```

It works for me.







[jira] [Comment Edited] (SPARK-16599) java.util.NoSuchElementException: None.get at at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)

2017-01-31 Thread Jun Ye (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15847889#comment-15847889
 ] 

Jun Ye edited comment on SPARK-16599 at 2/1/17 2:23 AM:


I got the same exception with the following code: 

(My Spark version is 2.1.0. Scala version: 2.11.8. Hadoop version: 2.7.3)

{code}
case class MyClass(id:Int, name: String)

val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map(a => MyClass(a(0).toInt, a(1)))
.toDF
{code}

The exception only happened at the following line 
{code}
.map(a => MyClass(a(0).toInt, a(1)))
{code}
If I removed this line, there is no exception. 

So I change to the following to bypass this exception.
{code}
val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map {
  case a: Array[String] => (a(0).toInt, a(1))
}.toDF("id", "name")
{code}

It works for me.





was (Author: yetsun):
I got the same exception with the following code: 

(My Spark version is 2.1.0. Scala version: 2.11.8. Hadoop version: 2.7.3)

{code:scala}
case class MyClass(id:Int, name: String)

val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map(a => MyClass(a(0).toInt, a(1)))
.toDF
{code}

The exception only happened at the following line 
{code:scala}
.map(a => MyClass(a(0).toInt, a(1)))
{code}
If I removed this line, there is no exception. 

So I change to the following to bypass this exception.
{code:scala}
val myDF = sparkSession.sparkContext
.textFile("s3a://mys3bucket/myTextFile.txt")
.map(_.split("\t"))
.map(_.map(_.trim))
.map {
  case a: Array[String] => (a(0).toInt, a(1))
}.toDF("id", "name")
{code}

It works for me.







[jira] [Comment Edited] (SPARK-16599) java.util.NoSuchElementException: None.get at at org.apache.spark.storage.BlockInfoManager.releaseAllLocksForTask(BlockInfoManager.scala:343)

2016-10-12 Thread Shivansh (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15570826#comment-15570826
 ] 

Shivansh edited comment on SPARK-16599 at 10/13/16 4:53 AM:


[~srowen], [~joshrosen]: Any updates on this issue? We are also facing the 
same issue here. Can you please let us know what the exact problem is? We are 
using Cassandra as the store.


was (Author: shiv4nsh):
[~srowen], [~joshrosen]: Any updates on this issue? We are also facing the 
same issue here. Can you please let us know what the exact problem is?
