I am running into the same problem described here:
https://www.mail-archive.com/user%40spark.apache.org/msg17788.html which for
some reason does not appear in the archives.

I have a standalone Scala application, built (using sbt) with the Spark
jars from Maven:

  libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-core" % "1.2.0",
    "org.apache.spark" %% "spark-sql"  % "1.2.0",
    "org.apache.spark" %% "spark-hive" % "1.2.0"
  )

The application acts as the driver and connects to a cluster in standalone
mode. I use the spark-ec2 scripts to create the cluster on AWS.
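For completeness, here is roughly how the driver connects; the app name and
master host below are placeholders, not my real values:

  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf()
    .setAppName("my-app")                        // placeholder name
    .setMaster("spark://ec2-master-host:7077")   // placeholder standalone master URL
  val sc = new SparkContext(conf)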

I see the scripts downloading and installing the following packages among
others:
http://s3.amazonaws.com/spark-related-packages/scala-2.10.3.tgz
http://s3.amazonaws.com/spark-related-packages/spark-1.2.0-bin-hadoop1.tgz
http://s3.amazonaws.com/spark-related-packages/hadoop-1.0.4.tar.gz

which looks OK, since it's the same Spark version (1.2.0), and I believe the
jars in Maven Central are built against hadoop1.
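In case the transitive hadoop-client dependency matters here, I assume I
could pin it explicitly in build.sbt like this (just a guess on my side,
version copied from what the ec2 scripts install):

  libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "1.0.4"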

But shortly after loading my data, I try to do:

  rdd.saveAsObjectFile("s3n://path/to/my/s3/bucket/")

and I get the following exception:

  java.lang.ClassCastException: scala.Tuple2 cannot be cast to
  scala.collection.Iterator

and I really don't know how to get rid of it.
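In case it helps, a minimal version of what I'm doing looks roughly like
this (paths are placeholders, and the map is just to show that the RDD
holds Tuple2 values):

  val data  = sc.textFile("s3n://my-bucket/input/")   // placeholder path
  val pairs = data.map(line => (line, 1))             // RDD[(String, Int)], i.e. Tuple2
  pairs.saveAsObjectFile("s3n://my-bucket/output/")   // this is where it blows up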

I have read https://issues.apache.org/jira/browse/SPARK-2075 with interest,
but I still don't understand what I might be doing wrong.
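Since SPARK-2075 points at binary incompatibilities between builds, here is
a sanity check I could run on the driver to see which versions the client
side thinks it's using (VersionInfo comes from hadoop-common; nothing here
is Spark-specific):

  println(s"Spark version:  ${sc.version}")
  println(s"Hadoop client:  ${org.apache.hadoop.util.VersionInfo.getVersion}")

Any pointer on what these should match against on the cluster would be much
appreciated.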



