wuchong commented on code in PR #2408:
URL: https://github.com/apache/fluss/pull/2408#discussion_r2751264342


##########
fluss-spark/fluss-spark-common/src/main/scala/org/apache/fluss/spark/execution/CallProcedureExec.scala:
##########
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.fluss.spark.execution
+
+import org.apache.fluss.spark.procedure.Procedure
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.expressions.{Attribute, Expression, GenericInternalRow, UnsafeProjection}
+import org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection
+import org.apache.spark.sql.execution.SparkPlan
+
+/** Physical plan node for executing a stored procedure. */
+case class CallProcedureExec(output: Seq[Attribute], procedure: Procedure, args: Seq[Expression])
+  extends SparkPlan {

Review Comment:
   It seems Paimon implements the procedure exec node by extending Spark's `LeafV2CommandExec`, which seems much simpler (it does not rely on an `RDD`). Is there any reason for us to extend `SparkPlan` directly?
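   For reference, here is a minimal sketch of what a `LeafV2CommandExec`-based node could look like. This is illustrative only: the `Procedure.call(InternalRow)` signature below mirrors Paimon's procedure interface and is an assumption, not necessarily this PR's API.
   
   ```scala
   import org.apache.fluss.spark.procedure.Procedure
   
   import org.apache.spark.sql.catalyst.InternalRow
   import org.apache.spark.sql.catalyst.expressions.{Attribute, Expression, GenericInternalRow}
   import org.apache.spark.sql.execution.datasources.v2.LeafV2CommandExec
   
   /** Runs the procedure eagerly on the driver instead of building an RDD. */
   case class CallProcedureExec(
       output: Seq[Attribute],
       procedure: Procedure,
       args: Seq[Expression])
     extends LeafV2CommandExec {
   
     override protected def run(): Seq[InternalRow] = {
       // Procedure arguments are expected to be foldable expressions,
       // so they can be evaluated without an input row.
       val input = new GenericInternalRow(args.map(_.eval()).toArray)
       // Assumed API, mirroring Paimon: call(InternalRow): Array[InternalRow].
       procedure.call(input).toSeq
     }
   }
   ```
   
   Since `V2CommandExec` computes `run()` once and serves `executeCollect` from the cached result, the command executes on the driver without going through `doExecute`.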



##########
fluss-spark/PROCEDURES.md:
##########
@@ -0,0 +1,96 @@
+# Fluss Spark Procedures

Review Comment:
   This is not the appropriate place for documentation; please move it to the `website/` directory.
   
   Specifically:
   - Create a new section titled **“Engine Spark”** under **“Engine Flink”** in the documentation sidebar.
   - Within “Engine Spark,” add a page named **“Procedures”**.
   
   Please follow the structure and style of the [Flink Procedures page](https://fluss.apache.org/docs/next/engine-flink/procedures/) as a reference. The Spark Procedures page should include, for each supported procedure:
   - **Syntax**
   - **Parameters**
   - **Return value(s)**
   - **Example usage**
   
   Additionally, ensure that all procedure names are listed in the right-side table of contents (TOC) for easy navigation.



##########
fluss-spark/fluss-spark-common/src/main/scala/org/apache/fluss/spark/procedure/CompactProcedure.scala:
##########
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.fluss.spark.procedure
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.connector.catalog.TableCatalog
+import org.apache.spark.sql.types.{DataTypes, Metadata, StructField, StructType}
+
+class CompactProcedure(tableCatalog: TableCatalog) extends BaseProcedure(tableCatalog) {

Review Comment:
   Fluss doesn't support compaction and will not support it in the future, so providing an empty `compact` procedure looks strange to users and will be backward-incompatible when we remove it.
   
   Could you remove this in this PR and introduce `xxx_cluster_configs` as the first procedures, like the Flink procedure https://fluss.apache.org/docs/next/engine-flink/procedures/#get_cluster_configs?
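   For illustration, a hypothetical sketch of such a procedure is below. `BaseProcedure`'s exact contract is defined in this PR, so the member names (`outputType`, `call`) follow the Iceberg/Paimon-style procedure interface and are assumptions; the hard-coded row stands in for a real lookup through the Fluss admin client.
   
   ```scala
   import org.apache.spark.sql.catalyst.InternalRow
   import org.apache.spark.sql.catalyst.expressions.GenericInternalRow
   import org.apache.spark.sql.connector.catalog.TableCatalog
   import org.apache.spark.sql.types.{DataTypes, Metadata, StructField, StructType}
   import org.apache.spark.unsafe.types.UTF8String
   
   class GetClusterConfigsProcedure(tableCatalog: TableCatalog)
     extends BaseProcedure(tableCatalog) {
   
     // One row per config entry: (key STRING, value STRING), matching the
     // output of Flink's sys.get_cluster_configs.
     def outputType(): StructType = new StructType(Array(
       StructField("key", DataTypes.StringType, false, Metadata.empty),
       StructField("value", DataTypes.StringType, true, Metadata.empty)))
   
     def call(args: InternalRow): Array[InternalRow] = {
       // Placeholder entry; a real implementation would fetch the cluster
       // configs from the Fluss admin client instead of hard-coding them.
       val configs = Seq("example.config.key" -> "example-value")
       configs.map { case (key, value) =>
         new GenericInternalRow(
           Array[Any](UTF8String.fromString(key), UTF8String.fromString(value)))
       }.toArray[InternalRow]
     }
   }
   ```
   
   Usage would then presumably mirror Flink's `CALL sys.get_cluster_configs()`.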



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
