Copilot commented on code in PR #1862:
URL: https://github.com/apache/polaris/pull/1862#discussion_r2177979409


##########
plugins/spark/v3.5/spark/build.gradle.kts:
##########
@@ -46,6 +46,47 @@ dependencies {
  // TODO: extract a polaris-rest module as a thin layer for
  //  clients to depend on.
   implementation(project(":polaris-core")) { isTransitive = false }
+  implementation(project(":polaris-api-iceberg-service")) {
+    // exclude the iceberg dependencies, use the ones pulled
+    // by iceberg-core
+    exclude("org.apache.iceberg", "*")
+    // exclude all cloud and quarkus specific dependencies to avoid
+    // running into problems with signature files.
+    exclude("com.azure", "*")
+    exclude("software.amazon.awssdk", "*")
+    exclude("com.google.cloud", "*")
+    exclude("io.airlift", "*")
+    exclude("io.smallrye", "*")
+    exclude("io.smallrye.common", "*")
+    exclude("io.swagger", "*")
+    exclude("org.apache.commons", "*")
+  }
+  implementation(project(":polaris-api-catalog-service")) {
+    exclude("org.apache.iceberg", "*")
+    exclude("com.azure", "*")
+    exclude("software.amazon.awssdk", "*")
+    exclude("com.google.cloud", "*")
+    exclude("io.airlift", "*")
+    exclude("io.smallrye", "*")
+    exclude("io.smallrye.common", "*")
+    exclude("io.swagger", "*")
+    exclude("org.apache.commons", "*")
+  }
+  implementation(project(":polaris-core")) {

Review Comment:
   The `polaris-core` project is added twice as an implementation dependency. 
Consider removing the duplicate block to reduce confusion and avoid redundant 
classpath entries.
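One way to address both the duplication and the repeated exclude lists — a sketch only, the helper name below is illustrative and not part of the PR:

```kotlin
// Sketch: factor the shared exclusions into a local helper so each service
// dependency applies them once, and declare polaris-core a single time.
fun ModuleDependency.excludeCloudAndQuarkusDeps() {
  // use the iceberg dependencies pulled in by iceberg-core
  exclude("org.apache.iceberg", "*")
  // cloud and quarkus specific dependencies that cause signature-file problems
  exclude("com.azure", "*")
  exclude("software.amazon.awssdk", "*")
  exclude("com.google.cloud", "*")
  exclude("io.airlift", "*")
  exclude("io.smallrye", "*")
  exclude("io.smallrye.common", "*")
  exclude("io.swagger", "*")
  exclude("org.apache.commons", "*")
}

dependencies {
  implementation(project(":polaris-api-iceberg-service")) { excludeCloudAndQuarkusDeps() }
  implementation(project(":polaris-api-catalog-service")) { excludeCloudAndQuarkusDeps() }
  // polaris-core declared exactly once
  implementation(project(":polaris-core")) { isTransitive = false }
}
```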



##########
plugins/spark/v3.5/spark/src/test/java/org/apache/polaris/spark/SparkCatalogTest.java:
##########
@@ -122,25 +130,44 @@ public void setup() throws Exception {
     catalogConfig.put("cache-enabled", "false");
    catalogConfig.put(
        DeltaHelper.DELTA_CATALOG_IMPL_KEY, "org.apache.polaris.spark.NoopDeltaCatalog");
+    catalogConfig.put(HudiHelper.HUDI_CATALOG_IMPL_KEY, "org.apache.polaris.spark.NoopHudiCatalog");
     catalog = new InMemorySparkCatalog();
     Configuration conf = new Configuration();
-    try (MockedStatic<SparkSession> mockedStaticSparkSession =
-            Mockito.mockStatic(SparkSession.class);
-        MockedStatic<SparkUtil> mockedSparkUtil = Mockito.mockStatic(SparkUtil.class)) {
-      SparkSession mockedSession = Mockito.mock(SparkSession.class);
-      mockedStaticSparkSession.when(SparkSession::active).thenReturn(mockedSession);
+
+    // Setup persistent SparkSession mock
+    mockedStaticSparkSession = Mockito.mockStatic(SparkSession.class);
+    mockedSession = Mockito.mock(SparkSession.class);
+    org.apache.spark.sql.RuntimeConfig mockedConfig =
+        Mockito.mock(org.apache.spark.sql.RuntimeConfig.class);
+    SparkContext mockedContext = Mockito.mock(SparkContext.class);
+
+    mockedStaticSparkSession.when(SparkSession::active).thenReturn(mockedSession);
+    Mockito.when(mockedSession.conf()).thenReturn(mockedConfig);
+    Mockito.when(mockedConfig.get("spark.sql.extensions", null))
+        .thenReturn(
+            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions,"
+                + "io.delta.sql.DeltaSparkSessionExtension"

Review Comment:
   The returned extension string concatenates `DeltaSparkSessionExtension` and 
`HoodieSparkSessionExtension` without a comma, producing 
`...DeltaSparkSessionExtensionorg.apache.spark...`. This will break 
`isHudiExtensionEnabled` detection. Add a separating comma and any necessary 
whitespace.
   ```suggestion
                   + "io.delta.sql.DeltaSparkSessionExtension, "
   ```
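The breakage is easy to reproduce with plain string handling. The helper below is a hypothetical stand-in for the extension detection (the real `isHudiExtensionEnabled` is not shown in this diff); it illustrates why the fused class names defeat entry-based matching in a comma-separated `spark.sql.extensions` value:

```java
// Illustrative only: entry-based matching over a comma-separated extension list.
public class ExtensionDetection {
  // Returns true if extClass appears as a trimmed entry of the comma-separated list.
  static boolean isExtensionEnabled(String extensions, String extClass) {
    if (extensions == null) return false;
    for (String entry : extensions.split(",")) {
      if (entry.trim().equals(extClass)) return true;
    }
    return false;
  }

  public static void main(String[] args) {
    String hudi = "org.apache.spark.sql.hudi.HoodieSparkSessionExtension";
    // Missing comma: the Delta and Hudi class names fuse into a single token.
    String broken = "io.delta.sql.DeltaSparkSessionExtension" + hudi;
    // With the comma, the Hudi entry is a distinct list element and is detected.
    String fixed = "io.delta.sql.DeltaSparkSessionExtension, " + hudi;
    System.out.println(isExtensionEnabled(broken, hudi)); // false
    System.out.println(isExtensionEnabled(fixed, hudi));  // true
  }
}
```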



##########
plugins/spark/v3.5/spark/src/main/java/org/apache/polaris/spark/utils/HudiCatalogUtils.java:
##########
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.polaris.spark.utils;
+
+import java.util.Map;
+import org.apache.spark.sql.SparkSession;
+import org.apache.spark.sql.connector.catalog.NamespaceChange;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Utility class for Hudi-specific catalog operations, particularly namespace synchronization
+ * between Polaris catalog and Spark session catalog for Hudi compatibility.
+ *
+ * <p>Hudi table loading requires namespace validation through the session catalog, but only the
+ * Polaris catalog contains the actual namespace metadata. This class provides methods to
+ * synchronize namespace operations to maintain consistency between catalogs.
+ */
+public class HudiCatalogUtils {
+  private static final Logger LOG = LoggerFactory.getLogger(HudiCatalogUtils.class);
+
+  /**
+   * Synchronizes namespace creation to session catalog when Hudi extension is enabled. This
+   * ensures session catalog metadata stays consistent with Polaris catalog for comprehensive
+   * Hudi compatibility.
+   *
+   * @param namespace The namespace to create
+   * @param metadata The namespace metadata properties
+   */
+  public static void createNamespace(String[] namespace, Map<String, String> metadata) {
+    if (!PolarisCatalogUtils.isHudiExtensionEnabled()) {
+      return;
+    }
+
+    // Sync namespace with filtered metadata to session catalog only when Hudi is enabled.
+    // This is needed because Hudi table loading uses the spark session catalog
+    // to validate namespace existence and access metadata properties.
+    // Reserved properties (owner, location, comment) are automatically filtered out.
+    try {
+      SparkSession spark = SparkSession.active();
+      String ns = String.join(".", namespace);

Review Comment:
   The namespace identifier `ns` is inserted directly into SQL without 
escaping, which could lead to SQL injection if namespaces contain unexpected 
characters. Consider validating or sanitizing the namespace string before 
embedding it in the SQL.
   ```suggestion
         String ns = validateAndSanitizeNamespace(namespace);
   ```
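The suggested `validateAndSanitizeNamespace` does not exist yet in the PR; one plausible implementation — a sketch under the assumption that the namespace is embedded in Spark SQL as a dotted identifier — is to backtick-quote each part and double any embedded backticks, which is how Spark SQL escapes identifiers:

```java
// Hypothetical helper (name taken from the review suggestion, not from the PR):
// quote each namespace part so it is treated as a literal identifier in Spark SQL.
public class NamespaceQuoting {
  // Wrap a single part in backticks, doubling embedded backticks.
  static String quotePart(String part) {
    return "`" + part.replace("`", "``") + "`";
  }

  static String validateAndSanitizeNamespace(String[] namespace) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < namespace.length; i++) {
      if (i > 0) sb.append('.');
      sb.append(quotePart(namespace[i]));
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // A malicious part stays inside its quoted identifier instead of ending the statement.
    String[] ns = {"db", "evil`; DROP TABLE t; --"};
    System.out.println(validateAndSanitizeNamespace(ns));
  }
}
```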



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to