simpler way to solve this?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Shuffle-produces-one-huge-partition-tp23358.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

partitioner that extends HashPartitioner. It treats certain "exception"
keys differently. These keys, which are known to appear very often, are
assigned random partitions instead of using the existing partitioning
mechanism.
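The idea described above, hashing most keys normally but scattering known hot ("exception") keys across random partitions, can be sketched in plain Python. This is a minimal illustration, not the poster's actual code; the names (`make_skew_aware_partitioner`, `hot_keys`) are hypothetical, and no Spark dependency is assumed:

```python
import random

def make_skew_aware_partitioner(num_partitions, hot_keys, seed=None):
    """Return a key -> partition function that spreads known hot keys
    across random partitions instead of hashing them to one partition."""
    rng = random.Random(seed)
    hot = set(hot_keys)

    def get_partition(key):
        if key in hot:
            # Hot key: pick a random partition so one partition
            # does not accumulate all records for this key.
            return rng.randrange(num_partitions)
        # Normal key: fall back to hash partitioning
        # (Python's % keeps the result non-negative).
        return hash(key) % num_partitions

    return get_partition
```

In PySpark, such a function could be passed as the `partitionFunc` argument of `RDD.partitionBy`. One caveat: once a hot key is scattered randomly, records with that key no longer co-locate in a single partition, so any downstream per-key aggregation needs a second combine step.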
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Shuffle-produces-one-huge-partition-and-many-tiny-partitions-tp23358p23387.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.