In Spark 1.3+, PySpark also supports this kind of narrow dependency.
For example:

N = 10
a1 = a.partitionBy(N)
b1 = b.partitionBy(N)

then a1.union(b1) will have only N partitions.

So a1.join(b1) does not need a shuffle anymore.
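
For reference, a minimal self-contained sketch of the above (the
SparkContext setup and the sample (key, value) pairs are only
illustrative; a and b must be key-value RDDs for partitionBy to work):

from pyspark import SparkContext

sc = SparkContext("local[2]", "co-partition-example")

# Two sample key-value RDDs
a = sc.parallelize([(i, i * 10) for i in range(100)])
b = sc.parallelize([(i, i * 100) for i in range(100)])

N = 10
a1 = a.partitionBy(N).cache()  # hash-partition both RDDs into N partitions
b1 = b.partitionBy(N).cache()

print(a1.union(b1).getNumPartitions())  # N, not 2 * N
joined = a1.join(b1)  # co-partitioned, so no extra shuffle should be needed
print(joined.take(5))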

On Thu, Apr 9, 2015 at 11:57 AM, pop <xia...@adobe.com> wrote:
> In Scala, we can make two RDDs use the same partitioner so that they are
> co-partitioned:
>    val partitioner = new HashPartitioner(5)
>    val a1 = a.partitionBy(partitioner).cache()
>    val b1 = b.partitionBy(partitioner).cache()
>
> How can we achieve the same in python? It would be great if somebody can
> share some examples.
>
>
> Thanks,
> Xiang
>
>
>
