Hi All,

We use Apache Calcite to transform SQL, and then we want to run the logical
plan on our Spark cluster. Currently we use RelToSqlConverter and execute the
resulting SQL with Spark SQL. I was wondering if we could instead execute
directly from a Spark logical plan (
https://github.com/apache/spark/blob/b0576fff9b72880cd81a9d22c044dec329bc67d0/sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala#L211-L213
).
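For context, here is a minimal sketch of the round trip we do today, assuming
a Calcite version where RelToSqlConverter.visitRoot is available and that
`relNode` is an already-built Calcite plan (class and method names are
illustrative, not a definitive implementation):

```java
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.rel2sql.RelToSqlConverter;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.dialect.SparkSqlDialect;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public final class CalciteToSparkSql {
  public static Dataset<Row> run(SparkSession spark, RelNode relNode) {
    // Render the Calcite plan back to SQL text in Spark's dialect.
    RelToSqlConverter converter = new RelToSqlConverter(SparkSqlDialect.DEFAULT);
    SqlNode sqlNode = converter.visitRoot(relNode).asStatement();
    String sql = sqlNode.toSqlString(SparkSqlDialect.DEFAULT).getSql();
    // Spark then re-parses and re-plans the SQL string from scratch;
    // this is the step we would like to skip by handing Spark a
    // logical plan directly.
    return spark.sql(sql);
  }
}
```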

Are there any projects that go from a Calcite logical plan to a Spark logical
plan directly?

I know there is one possibility: Calcite => Substrait => Spark
Calcite => Substrait via substrait-java
<https://github.com/substrait-io/substrait-java>
Substrait => Spark via gluten
<https://github.com/oap-project/gluten/tree/main/substrait/substrait-spark>
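If it helps the discussion, a hypothetical sketch of the first leg using
substrait-java's isthmus module might look like the following; the table
schema and the execute signature are assumptions on my part and may differ
across substrait-java versions:

```java
import java.util.List;
import io.substrait.isthmus.SqlToSubstrait;
import io.substrait.proto.Plan;

public final class CalciteToSubstrait {
  // Compile SQL through Calcite into a Substrait Plan proto, which a
  // consumer such as gluten's substrait-spark could then turn into a
  // Spark logical plan.
  public static Plan toSubstrait(String sql) throws Exception {
    // CREATE TABLE statements give isthmus the schema it needs to
    // validate the query (hypothetical example table).
    List<String> schema = List.of("CREATE TABLE t (id INT, name VARCHAR)");
    return new SqlToSubstrait().execute(sql, schema);
  }
}
```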


-- 
Guillaume Massé
[Gee-OHM]
(马赛卫)
