[ https://issues.apache.org/jira/browse/SPARK-20528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16716684#comment-16716684 ]

vishal kumar yadav edited comment on SPARK-20528 at 12/11/18 9:52 AM:
----------------------------------------------------------------------

I am facing a similar kind of issue.

{code}
val sc: SparkContext = new SparkContext(conf)
val a = sc.binaryFiles("path_for/binary_file").map { x => (x._1, x._2.toArray) }

val sqlContext = new SQLContext(sc)

val binDataFrame = sqlContext.createDataFrame(a)

binDataFrame.show()
{code}

*Error*:

{code}
18/12/11 15:07:25 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0,
192.168.0.164, executor 0): java.lang.ClassCastException: cannot assign
instance of scala.collection.immutable.List$SerializationProxy to
field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type
scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
{code}
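
For reference, a minimal sketch that sidesteps the tuple-encoder path by building the DataFrame from {{Row}}s with an explicit schema. The {{path}}/{{content}} column names are my own choice, and this is not a confirmed fix for the ClassCastException above (that particular error is often a symptom of a Spark-version or classpath mismatch between driver and executors rather than of the conversion itself):

{code}
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{BinaryType, StringType, StructField, StructType}

val schema = StructType(Seq(
  StructField("path", StringType, nullable = false),
  StructField("content", BinaryType, nullable = false)))

// Materialize each PortableDataStream into bytes, then build Rows.
val rowRDD = sc.binaryFiles("path_for/binary_file")
  .map { case (path, stream) => Row(path, stream.toArray()) }

val binDataFrame = sqlContext.createDataFrame(rowRDD, schema)
binDataFrame.show()
{code}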

 

When I tried this code:

{code}
val a = sc.binaryFiles("path_of_.pdub")
val sqlContext = new SQLContext(sc)

val binDataFrame = sqlContext.createDataFrame(a)

binDataFrame.show()
{code}

*Error*:

{code}
Exception in thread "main" java.lang.UnsupportedOperationException: No Encoder
found for org.apache.spark.input.PortableDataStream
- field (class: "org.apache.spark.input.PortableDataStream", name: "_2")
- root class: "scala.Tuple2"
{code}
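
This second failure is what the error says: Spark has no {{Encoder}} for {{PortableDataStream}}, so the stream has to be materialized into bytes before calling {{createDataFrame}}. A minimal sketch, reusing the same path as above:

{code}
// Convert each PortableDataStream to Array[Byte] first;
// tuples of (String, Array[Byte]) do have a built-in encoder.
val bytesRDD = sc.binaryFiles("path_of_.pdub")
  .map { case (path, stream) => (path, stream.toArray()) }

val binDataFrame = sqlContext.createDataFrame(bytesRDD)
binDataFrame.show()
{code}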

 


> Add BinaryFileReader and Writer for DataFrames
> ----------------------------------------------
>
>                 Key: SPARK-20528
>                 URL: https://issues.apache.org/jira/browse/SPARK-20528
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Joseph K. Bradley
>            Priority: Major
>
> It would be very useful to have a binary data reader/writer for DataFrames, 
> presumably called via {{spark.read.binaryFiles}}, etc.
> Currently, going through RDDs is annoying since it requires different code 
> paths for Scala vs Python:
> Scala:
> {code}
> val binaryFilesRDD = sc.binaryFiles("mypath")
> val binaryFilesDF = spark.createDataFrame(binaryFilesRDD)
> {code}
> Python:
> {code}
> binaryFilesRDD = sc.binaryFiles("mypath")
> binaryFilesRDD_recast = binaryFilesRDD.map(lambda x: (x[0], bytearray(x[1])))
> binaryFilesDF = spark.createDataFrame(binaryFilesRDD_recast)
> {code}
> This is because Scala and Python {{sc.binaryFiles}} return different types, 
> which makes sense in RDD land but not DataFrame land.
> My motivation here is working with images in Spark.
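
For illustration, a sketch of what the requested DataFrame-level reader could look like; the {{binaryFile}} format name and the {{path}}/{{content}} column names are assumptions modeled on the proposal above, not a confirmed API:

{code}
// Hypothetical: read binary files directly into a DataFrame,
// with the same call shape from Scala and Python.
val binaryFilesDF = spark.read.format("binaryFile").load("mypath")
binaryFilesDF.select("path", "content").show()
{code}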


