comphead commented on code in PR #922:
URL: https://github.com/apache/datafusion-comet/pull/922#discussion_r1762105059


##########
docs/source/contributor-guide/plugin_overview.md:
##########
@@ -17,30 +17,37 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Comet Plugin Overview
+# Comet Plugin Architecture
 
-The entry point to Comet is the `org.apache.spark.CometPlugin` class, which can be registered with Spark by adding the following setting to the Spark configuration when launching `spark-shell` or `spark-submit`:
+## Comet SQL Plugin
+
+The entry point to Comet is the `org.apache.spark.CometPlugin` class, which can be registered with Spark by adding the
+following setting to the Spark configuration when launching `spark-shell` or `spark-submit`:
 
 ```
 --conf spark.plugins=org.apache.spark.CometPlugin
 ```
 
-On initialization, this class registers two physical plan optimization rules with Spark: `CometScanRule` and `CometExecRule`. These rules run whenever a query stage is being planned.
+On initialization, this class registers two physical plan optimization rules with Spark: `CometScanRule`
+and `CometExecRule`. These rules run whenever a query stage is being planned during Adaptive Query Execution.
 
 ## CometScanRule
 
-`CometScanRule` replaces any Parquet scans with Comet Parquet scan classes.
+`CometScanRule` replaces any Parquet scans with Comet operators. There are different paths for v1 and v2 data sources.
 
-When the V1 data source API is being used, `FileSourceScanExec` is replaced with `CometScanExec`.
+When reading from Parquet v1 data sources, Comet replaces `FileSourceScanExec` with a `CometScanExec`, and for v2
+data sources, `BatchScanExec` is replaced with `CometBatchScanExec`. In both cases, Comet replaces Spark's Parquet
+reader with a custom vectorized Parquet reader. This is similar to Spark's vectorized Parquet reader used by the v2
+Parquet data source but leverages native code for decoding Parquet row groups directly into Arrow format.
 
-When the V2 data source API is being used, `BatchScanExec` is replaced with `CometBatchScanExec`.
+Comet only supports a subset of data types and will fall back to Spark's scan if unsupported types

Review Comment:
   Should we name the supported types here, or link to the code that defines which types are supported?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

