Hi All,

I'm developing a DataSource on Spark 3.2 to write data to our system,
using the DataSource V2 API. I want to implement the
RequiresDistributionAndOrdering interface
<https://github.com/apache/spark/blob/branch-3.2/sql/catalyst/src/main/java/org/apache/spark/sql/connector/write/RequiresDistributionAndOrdering.java>
to set the number of partitions used for the write. But I don't know how
to implement a distribution that avoids a shuffle, the way RDD.coalesce
does. Is there any example or advice?
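For context, this is roughly what I have so far -- a minimal sketch of a
Write implementing RequiresDistributionAndOrdering (class name and the
partition count are just placeholders; the other Write methods are
omitted for brevity):

```java
import org.apache.spark.sql.connector.distributions.Distribution;
import org.apache.spark.sql.connector.distributions.Distributions;
import org.apache.spark.sql.connector.expressions.SortOrder;
import org.apache.spark.sql.connector.write.RequiresDistributionAndOrdering;

// Illustrative sketch only; "MyWrite" and the partition count are placeholders.
class MyWrite implements RequiresDistributionAndOrdering {
  @Override
  public Distribution requiredDistribution() {
    // No particular clustering is needed for our sink.
    return Distributions.unspecified();
  }

  @Override
  public SortOrder[] requiredOrdering() {
    // No ordering requirement.
    return new SortOrder[0];
  }

  @Override
  public int requiredNumPartitions() {
    // Target number of output partitions; as far as I can tell,
    // Spark satisfies this with a shuffle rather than a coalesce.
    return 8;
  }
}
```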

Thank you,
Best regards
