[ 
https://issues.apache.org/jira/browse/SPARK-19646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15871493#comment-15871493
 ] 

BahaaEddin AlAila commented on SPARK-19646:
-------------------------------------------

Thank you very much for the quick reply.
All I did was the following in spark-shell:
val x = sc.binaryRecords("binary_file.bin", 3073)
val t = x.take(3)
t(0)
t(1)
t(2)
// all three return the same array, even though they should differ

In PySpark, I do the same:
x = sc.binaryRecords('binary_file.bin', 3073)
t = x.take(3)
t[0]
t[1]
t[2]
# three different, correct results, verified manually as well
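
In case it helps with triage: the symptom looks consistent with the underlying Hadoop record reader reusing a single byte buffer, so every element of the RDD ends up pointing at the same array. A workaround that seems to avoid the duplication on my side (an assumption on my part, not a confirmed fix) is to copy each record before taking/collecting:

```scala
// Workaround sketch (assumption: the reader reuses its buffer, so we
// force a fresh Array[Byte] per record by cloning before collecting).
val x = sc.binaryRecords("binary_file.bin", 3073)
val copied = x.map(_.clone)   // clone() on Array[Byte] makes a new array
val t = copied.take(3)        // t(0), t(1), t(2) should now be distinct
```

This would also explain why PySpark is unaffected, since records are serialized over to Python and each one becomes an independent bytes object.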



> binaryRecords replicates records in scala API
> ---------------------------------------------
>
>                 Key: SPARK-19646
>                 URL: https://issues.apache.org/jira/browse/SPARK-19646
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.0.0, 2.1.0
>            Reporter: BahaaEddin AlAila
>            Priority: Minor
>
> The Scala sc.binaryRecords replicates one record across the entire set.
> For example, I am trying to load the CIFAR binary data, where in one big 
> binary file each 3073-byte record represents a 32x32x3-byte image plus 
> 1 byte for the label. The file resides on my local filesystem.
> .take(5) returns 5 identical records, and .collect() returns 10,000 
> identical records.
> What is puzzling is that the PySpark version works perfectly, even though 
> underneath it calls the Scala implementation.
> I have tested this on 2.1.0 and 2.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org