[jira] [Commented] (SPARK-20252) java.lang.ClassNotFoundException: $line22.$read$$iwC$$iwC$movie_row

2017-04-10 Thread Sean Owen (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-20252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962636#comment-15962636 ]

Sean Owen commented on SPARK-20252:
---

It's likely related to the other spark-shell + case class issues, whether or not 
it's exactly the same; those are really issues with the Scala shell.
Stopping and restarting contexts isn't supported.
If you have a lead on a reliable fix, propose it; otherwise, this is why I 
closed the issue. Generally, don't reopen issues without new info.
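
For reference, a minimal, untested sketch of the kind of workaround sometimes 
tried (my assumption about the cause, not a supported fix): on Spark 1.x the 
shell's own context carries REPL settings such as spark.repl.class.uri, which 
point executors at the class server that serves REPL-defined classes like 
$line22.$read$$iwC$$iwC$movie_row. A hand-built SparkConf drops those settings, 
so reusing the shell's conf when rebuilding the context may preserve them:

// Untested sketch, assuming the failure comes from losing REPL settings
// (e.g. spark.repl.class.uri on Spark 1.x) when a fresh SparkConf is built.
import org.apache.spark.SparkContext

val replConf = sc.getConf          // copy of the shell-provided conf
sc.stop()

val conf = replConf                // keeps master, app name and REPL settings
  .set("spark.cassandra.connection.host", "redacted")
  .set("spark.cassandra.auth.username", "redacted")
  .set("spark.cassandra.auth.password", "redacted")

val sc = new SparkContext(conf)    // in the REPL this shadows the old sc

Whether DSE's shell tolerates this is untested; the point is only that a 
recreated context should carry the same REPL-related configuration as the 
original one.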

> java.lang.ClassNotFoundException: $line22.$read$$iwC$$iwC$movie_row
> ---
>
> Key: SPARK-20252
> URL: https://issues.apache.org/jira/browse/SPARK-20252
> Project: Spark
> Issue Type: Bug
> Components: Spark Shell
> Affects Versions: 1.6.3
> Environment: Datastax DSE dual node SPARK cluster
> [cqlsh 5.0.1 | Cassandra 3.0.12.1586 | DSE 5.0.7 | CQL spec 3.4.0 | Native protocol v4]
> Reporter: Peter Mead
>
> After starting a spark shell using dse -u  -p x spark:
> scala> case class movie_row (actor: String, character_name: String, video_id: 
> java.util.UUID, video_year: Int, title: String)
> defined class movie_row
> scala> val vids=sc.cassandraTable("hcl","videos_by_actor")
> vids: 
> com.datastax.spark.connector.rdd.CassandraTableScanRDD[com.datastax.spark.connector.CassandraRow]
>  = CassandraTableScanRDD[0] at RDD at CassandraRDD.scala:15
> scala> val vids=sc.cassandraTable[movie_row]("hcl","videos_by_actor")
> vids: com.datastax.spark.connector.rdd.CassandraTableScanRDD[movie_row] = 
> CassandraTableScanRDD[1] at RDD at CassandraRDD.scala:15
> scala> vids.count
> res0: Long = 114961
>  Works OK!!
> BUT if the spark context is stopped and recreated THEN:
> scala> sc.stop()
> scala> import org.apache.spark.SparkContext, org.apache.spark.SparkContext._, 
> org.apache.spark.SparkConf
> import org.apache.spark.SparkContext
> import org.apache.spark.SparkContext._
> import org.apache.spark.SparkConf
> scala> :paste
> // Entering paste mode (ctrl-D to finish)
> val conf = new SparkConf(true)
> .set("spark.cassandra.connection.host", "redacted")
> .set("spark.cassandra.auth.username", "redacted")
> .set("spark.cassandra.auth.password", "redacted")
> // Exiting paste mode, now interpreting.
> conf: org.apache.spark.SparkConf = org.apache.spark.SparkConf@e207342
> scala> val sc = new SparkContext("spark://192.168.1.84:7077", "pjm", conf)
> sc: org.apache.spark.SparkContext = org.apache.spark.SparkContext@12b8b7c8
> scala> case class movie_row (actor: String, character_name: String, video_id: 
> java.util.UUID, video_year: Int, title: String)
> defined class movie_row
> scala> val vids=sc.cassandraTable[movie_row]("hcl","videos_by_actor")
> vids: com.datastax.spark.connector.rdd.CassandraTableScanRDD[movie_row] = 
> CassandraTableScanRDD[0] at RDD at CassandraRDD.scala:15
> scala> vids.count
> [Stage 0:>  (0 + 2) / 
> 2]WARN  2017-04-07 12:52:03,277 org.apache.spark.scheduler.TaskSetManager: 
> Lost task 0.0 in stage 0.0 (TID 0, cassandra183): 
> java.lang.ClassNotFoundException: $line22.$read$$iwC$$iwC$movie_row
> FAILS!!
> I have been unable to get this to work from a remote SPARK shell!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-20252) java.lang.ClassNotFoundException: $line22.$read$$iwC$$iwC$movie_row

2017-04-07 Thread Peter Mead (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-20252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15960762#comment-15960762 ]

Peter Mead commented on SPARK-20252:
---

I'm not sure how this explains why it works the first (and every) time when the 
spark context is not changed. There must be a discrepancy between the way DSE 
creates the spark context the first time through and the way I create it after 
sc.stop().
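
One way to look for that discrepancy (an untested diagnostic sketch; 
spark.repl.class.uri is my assumption about the relevant setting on Spark 1.x, 
not something from the DSE docs) is to dump the REPL-related settings of the 
DSE-provided context before stopping it, and compare them with the hand-built 
conf:

// Untested diagnostic sketch: list the spark.repl.* settings of the
// shell-provided context; a hand-built SparkConf will typically lack them.
sc.getConf.getAll
  .filter { case (key, _) => key.startsWith("spark.repl") }
  .foreach { case (key, value) => println(s"$key = $value") }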
