[ https://issues.apache.org/jira/browse/SPARK-20528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16716949#comment-16716949 ]
vishal kumar yadav commented on SPARK-20528:
--------------------------------------------

It means that to do any transformation or processing, the data should be in a *text/json/csv* format. If it is in a binary format such as .protobuf, there is no way to load it directly as a DataFrame.

> Add BinaryFileReader and Writer for DataFrames
> ----------------------------------------------
>
>                 Key: SPARK-20528
>                 URL: https://issues.apache.org/jira/browse/SPARK-20528
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Joseph K. Bradley
>            Priority: Major
>         Attachments: part-00000-5ae00646-8400-4b45-aa6f-d6f27068972c-c000.json, stocklist.json, stocklist.pdub
>
>
> It would be very useful to have a binary data reader/writer for DataFrames, presumably called via {{spark.read.binaryFiles}}, etc.
> Currently, going through RDDs is annoying since it requires different code paths for Scala vs Python:
> Scala:
> {code}
> val binaryFilesRDD = sc.binaryFiles("mypath")
> val binaryFilesDF = spark.createDataFrame(binaryFilesRDD)
> {code}
> Python:
> {code}
> binaryFilesRDD = sc.binaryFiles("mypath")
> binaryFilesRDD_recast = binaryFilesRDD.map(lambda x: (x[0], bytearray(x[1])))
> binaryFilesDF = spark.createDataFrame(binaryFilesRDD_recast)
> {code}
> This is because Scala and Python {{sc.binaryFiles}} return different types, which makes sense in RDD land but not DataFrame land.
> My motivation here is working with images in Spark.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
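[Editor's note] For readers of this thread, below is a minimal, self-contained Scala sketch of the RDD-based workaround being discussed: pulling raw binary files in through {{sc.binaryFiles}} and decoding them by hand before building a DataFrame. It is not part of the original comment; the decoding step is a placeholder (it only records each file's size), and the object and app names are illustrative. A real job reading .protobuf payloads would swap in its own parser at that step.

{code}
import org.apache.spark.sql.SparkSession

object BinaryFilesSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("binary-files-sketch").getOrCreate()
    import spark.implicits._

    // binaryFiles yields (path, PortableDataStream) pairs; toArray() gives the raw bytes.
    val binaryFilesRDD = spark.sparkContext.binaryFiles("mypath")

    // Placeholder decoding step: a production job would replace the byte-count
    // with an actual parse of each file's payload (e.g. a protobuf message).
    val decodedDF = binaryFilesRDD
      .map { case (path, stream) => (path, stream.toArray().length) }
      .toDF("path", "numBytes")

    decodedDF.show()
    spark.stop()
  }
}
{code}

The same shape works from PySpark, but as the issue description notes, the element types returned by {{sc.binaryFiles}} differ between Scala and Python, which is exactly why a built-in {{spark.read}} binary source was requested.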