szehon-ho commented on PR #55242: URL: https://github.com/apache/spark/pull/55242#issuecomment-4201589994
FYI @peter-toth @cloud-fan, I tried AGENTS.md to generate the description for https://github.com/apache/spark/pull/55179, but it initially looked weird:

**Summary**

1. Converts partition-column filters to PartitionPredicates (reusing SPARK-55596 infrastructure)
2. Translates remaining data-column filters to standard V2 predicates
3. Combines them (partition predicates first) and calls table.canDeleteWhere

**Changes**

- OptimizeMetadataOnlyDeleteFromTable: added a tryDeleteWithPartitionPredicates fallback method and a tryTranslateToV2 helper
- PushDownUtils: extracted createPartitionPredicates and made flattenNestedPartitionFilters package-private for reuse; getPartitionPredicateSchema now returns None for empty partition fields
- InMemoryTableWithV2Filter: extracted evalPredicate to the companion object for reuse by test tables
- InMemoryPartitionPredicateDeleteTable (new): test table supporting PartitionPredicates and configurable data-predicate acceptance
- DataSourceV2EnhancedDeleteFilterSuite (new): 9 test cases covering first-pass accept, second-pass accept/reject, mixed partition and data filters, a UDF on non-contiguous partition columns, multiple PartitionPredicates, and row-level fallback

**Test plan**

- [ ] DataSourceV2EnhancedDeleteFilterSuite — 9/9 pass
- [ ] DataSourceV2EnhancedPartitionFilterSuite — 19/19 pass (no regressions)
- [ ] GroupBasedDeleteFromTableSuite — 32/32 pass (no regressions)
- [ ] Scalastyle — 0 errors

So I want to clarify this, to make AGENTS use the right template.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]
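For readers skimming the thread, the three summary steps could be sketched roughly as follows. This is a minimal illustration only: the method name comes from the PR summary, but the signature, parameters, and surrounding logic here are assumptions, not the actual Spark implementation.

```scala
import org.apache.spark.sql.connector.catalog.SupportsDeleteV2
import org.apache.spark.sql.connector.expressions.filter.Predicate

// Illustrative sketch of the fallback flow, under the assumption that the
// caller has already performed steps 1 and 2 from the summary:
//   - partitionPredicates: partition-column filters converted to
//     PartitionPredicates (step 1)
//   - dataPredicates: Some(preds) if every remaining data-column filter
//     translated to a standard V2 predicate (step 2), None otherwise
def tryDeleteWithPartitionPredicates(
    table: SupportsDeleteV2,
    partitionPredicates: Seq[Predicate],
    dataPredicates: Option[Seq[Predicate]]): Boolean = {
  dataPredicates match {
    case Some(preds) =>
      // Step 3: combine with partition predicates first, then ask the table
      // whether it can handle a metadata-only delete for the combined set.
      table.canDeleteWhere((partitionPredicates ++ preds).toArray)
    case None =>
      // Some data filter did not translate; fall back to row-level delete.
      false
  }
}
```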
