Hey all,

I saw in some of the discussions around DataSourceV2 writes that we might
have the data source inform Spark of requirements for the input data's
ordering and partitioning. Has there been a proposed API for that yet?
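To make the question concrete, here's the rough shape of hook I have in
mind. All of these names are made up on my end, not an existing or
proposed Spark API; the idea is just that the writer declares its
requirements and Spark inserts the shuffle/sort before invoking it:

  // hypothetical sketch, not a real interface
  trait SupportsWriteRequirements {
    // columns the input should be clustered (hash-partitioned) by
    def requiredClustering: Seq[String]
    // per-partition sort order the writer expects, e.g. Seq("date", "id")
    def requiredOrdering: Seq[String]
  }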

Even one level up, it would be helpful to understand how I should think
about the division of responsibility: what belongs in the data source
writer itself, when I should be inserting a custom Catalyst rule, and how
I should validate assumptions about the target table before attempting
the write.
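For context, this is roughly how I'm wiring in validation today.
SparkSessionExtensions and injectCheckRule are the real 2.2+ hooks as I
understand them; the rule body and class name are just placeholders:

  import org.apache.spark.sql.SparkSessionExtensions
  import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

  class MyWriteChecks extends (SparkSessionExtensions => Unit) {
    override def apply(ext: SparkSessionExtensions): Unit = {
      // runs after analysis; throwing here fails the query before any write starts
      ext.injectCheckRule { session =>
        (plan: LogicalPlan) => {
          // e.g. verify the plan's output schema matches the target table;
          // throw an AnalysisException to reject the write early
        }
      }
    }
  }

  // enabled with --conf spark.sql.extensions=com.example.MyWriteChecks

It isn't obvious to me whether this kind of check belongs in an injected
rule like the above or inside the writer itself.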

Thanks!
Pat
