JingsongLi commented on a change in pull request #18394:
URL: https://github.com/apache/flink/pull/18394#discussion_r789435121



##########
File path: flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/utils/TableTestBase.scala
##########
@@ -47,21 +53,30 @@ import org.apache.flink.table.expressions.Expression
 import org.apache.flink.table.factories.{FactoryUtil, PlannerFactoryUtil, StreamTableSourceFactory}
 import org.apache.flink.table.functions._
 import org.apache.flink.table.module.ModuleManager
-import org.apache.flink.table.operations.{ModifyOperation, Operation, QueryOperation, SinkModifyOperation}
+import org.apache.flink.table.operations.ModifyOperation
+import org.apache.flink.table.operations.Operation
+import org.apache.flink.table.operations.QueryOperation
+import org.apache.flink.table.operations.SinkModifyOperation

Review comment:
       There may be something wrong with your Scala import style... can you check it against the Flink Scala style?
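For reference, the grouped-import form that this hunk replaces (and that Flink's Scala code commonly uses) keeps related classes in one braced import. A sketch of that form for the classes touched here (requires the Flink planner on the classpath, so shown as a fragment only):

```scala
import org.apache.flink.table.operations.{ModifyOperation, Operation, QueryOperation, SinkModifyOperation}
```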

##########
File path: flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/operations/SqlToOperationConverter.java
##########
@@ -572,14 +574,19 @@ private Operation convertAlterTableReset(
         return new AlterTableOptionsOperation(tableIdentifier, oldTable.copy(newOptions));
     }
 
-    private Operation convertAlterTableCompact(
+    /**
+     * Convert `ALTER TABLE ... COMPACT` operation to {@link ModifyOperation} for Flink's managed
+     * table to trigger a compaction batch job.
+     */
+    private ModifyOperation convertAlterTableCompact(
             ObjectIdentifier tableIdentifier,
-            ResolvedCatalogTable resolvedCatalogTable,
+            ContextResolvedTable contextResolvedTable,
             SqlAlterTableCompact alterTableCompact) {
         Catalog catalog = catalogManager.getCatalog(tableIdentifier.getCatalogName()).orElse(null);
+        ResolvedCatalogTable resolvedCatalogTable = contextResolvedTable.getResolvedTable();
         if (ManagedTableListener.isManagedTable(catalog, resolvedCatalogTable)) {
-            LinkedHashMap<String, String> partitionKVs = alterTableCompact.getPartitionKVs();
-            CatalogPartitionSpec partitionSpec = null;
+            Map<String, String> partitionKVs = alterTableCompact.getPartitionKVs();
+            CatalogPartitionSpec partitionSpec = new CatalogPartitionSpec(Collections.emptyMap());
             if (partitionKVs != null) {
                 List<String> orderedPartitionKeys = resolvedCatalogTable.getPartitionKeys();

Review comment:
       Minor: `partitionKeys` is enough, no need for `orderedPartitionKeys`.
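Beyond the rename, the hunk above also replaces the `null` default with an empty `CatalogPartitionSpec`, so downstream code never needs a null check. A minimal sketch of that pattern, using a hypothetical `PartitionSpecSketch` stand-in (not the real Flink class) so it compiles without a Flink dependency:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for Flink's CatalogPartitionSpec, kept self-contained.
class PartitionSpecSketch {
    private final Map<String, String> partitionSpec;

    PartitionSpecSketch(Map<String, String> partitionSpec) {
        this.partitionSpec = partitionSpec;
    }

    Map<String, String> getPartitionSpec() {
        return partitionSpec;
    }

    // Mirrors the change in the hunk: start from an empty spec rather than
    // null, then overwrite it only when partition key-values were supplied.
    static PartitionSpecSketch fromPartitionKVs(Map<String, String> partitionKVs) {
        PartitionSpecSketch spec = new PartitionSpecSketch(Collections.emptyMap());
        if (partitionKVs != null) {
            spec = new PartitionSpecSketch(new LinkedHashMap<>(partitionKVs));
        }
        return spec;
    }
}
```

With this shape, callers can read `getPartitionSpec()` unconditionally, whether or not the `ALTER TABLE ... COMPACT` statement named a partition.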




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

