would be very hard to add big endian support, and instead of producing wrong results, it will fail to run.
--
View this message in context:
http://apache-spark-developers-list.1001551.n3.nabble.com/Tungsten-in-a-mixed-endian-environment-tp15975p16027.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.
How big of a deal is this use case in a heterogeneous endianness
environment? If we do want to fix it, we should do it right before
Spark shuffles data to minimize the performance penalty, i.e. turn big-endian
encoded data into little-endian encoded data before it goes on the wire.
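The conversion described above can be sketched in plain Java (a hypothetical helper, not Spark's actual shuffle code): re-encode a big-endian long payload as little-endian before it goes on the wire, using `ByteBuffer` views with explicit byte orders.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianSwap {
    // Hypothetical helper: re-encode a buffer of big-endian longs as
    // little-endian. getLong reads with big-endian order, putLong writes
    // with little-endian order, so each 8-byte word is byte-swapped.
    static byte[] toLittleEndian(byte[] bigEndianPayload) {
        ByteBuffer in = ByteBuffer.wrap(bigEndianPayload).order(ByteOrder.BIG_ENDIAN);
        ByteBuffer out = ByteBuffer.allocate(bigEndianPayload.length).order(ByteOrder.LITTLE_ENDIAN);
        while (in.remaining() >= 8) {
            out.putLong(in.getLong());
        }
        return out.array();
    }

    public static void main(String[] args) {
        byte[] be = ByteBuffer.allocate(8)
                .order(ByteOrder.BIG_ENDIAN)
                .putLong(0x0102030405060708L)
                .array();
        byte[] le = toLittleEndian(be);
        // Little-endian layout: least significant byte first.
        System.out.println(le[0] == 0x08 && le[7] == 0x01); // prints true
    }
}
```

Doing this once, at the shuffle boundary, keeps the per-node in-memory format native and pays the byte-swap cost only when data actually crosses architectures.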
I logged SPARK-12778: endian awareness in Platform.java should
help in a mixed-endian setup.
There could be other parts of the code base which are related.
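The kind of endian awareness SPARK-12778 asks for might look roughly like this (a hypothetical sketch, not Spark's actual Platform.java): detect the native byte order once, and reverse bytes on big-endian hosts so every architecture sees the same logical little-endian value.

```java
import java.nio.ByteOrder;

public class PlatformEndianSketch {
    // Detect the JVM's native byte order once at class-load time.
    static final boolean LITTLE_ENDIAN =
            ByteOrder.nativeOrder().equals(ByteOrder.LITTLE_ENDIAN);

    // rawNativeWord is a long read with native byte order; on a
    // big-endian host its bytes must be reversed so that both
    // architectures interpret the stored bytes identically.
    static long getLongLittleEndian(long rawNativeWord) {
        return LITTLE_ENDIAN ? rawNativeWord : Long.reverseBytes(rawNativeWord);
    }
}
```

On little-endian hardware this compiles down to a no-op, so the common case pays nothing; only big-endian hosts take the `Long.reverseBytes` hit.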
Cheers
On Tue, Jan 12, 2016 at 7:01 AM, Adam Roberts wrote:
Hi all, I've been experimenting with DataFrame operations in a mixed
endian environment - a big endian master with little endian workers. With
Tungsten enabled I'm encountering data corruption issues.
For example, with this simple test code:
import org.apache.spark.SparkContext
import