This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 697a1af5f Publish built docs triggered by 4cfceb799942a3a502ed8ff350cd3fda441c4e51
697a1af5f is described below

commit 697a1af5fae356d68a0c7f71f016bc278f9133a8
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Mon Nov 10 18:40:29 2025 +0000

    Publish built docs triggered by 4cfceb799942a3a502ed8ff350cd3fda441c4e51
---
 _sources/user-guide/latest/configs.md.txt | 10 +++---
 searchindex.js                            |  2 +-
 user-guide/latest/configs.html            | 58 +++++++++++++++----------------
 3 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/_sources/user-guide/latest/configs.md.txt b/_sources/user-guide/latest/configs.md.txt
index fd232874e..0a12abadc 100644
--- a/_sources/user-guide/latest/configs.md.txt
+++ b/_sources/user-guide/latest/configs.md.txt
@@ -27,15 +27,10 @@ Comet provides the following configuration settings.
 <!--BEGIN:CONFIG_TABLE[scan]-->
 | Config | Description | Default Value |
 |--------|-------------|---------------|
-| `spark.comet.convert.csv.enabled` | When enabled, data from Spark (non-native) CSV v1 and v2 scans will be converted to Arrow format. Note that to enable native vectorized execution, both this config and `spark.comet.exec.enabled` need to be enabled. | false |
-| `spark.comet.convert.json.enabled` | When enabled, data from Spark (non-native) JSON v1 and v2 scans will be converted to Arrow format. Note that to enable native vectorized execution, both this config and `spark.comet.exec.enabled` need to be enabled. | false |
-| `spark.comet.convert.parquet.enabled` | When enabled, data from Spark (non-native) Parquet v1 and v2 scans will be converted to Arrow format. Note that to enable native vectorized execution, both this config and `spark.comet.exec.enabled` need to be enabled. | false |
 | `spark.comet.scan.allowIncompatible` | Some Comet scan implementations are not currently fully compatible with Spark for all datatypes. Set this config to true to allow them anyway. For more information, refer to the [Comet Compatibility Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
 | `spark.comet.scan.enabled` | Whether to enable native scans. When this is turned on, Spark will use Comet to read supported data sources (currently only Parquet is supported natively). Note that to enable native vectorized execution, both this config and `spark.comet.exec.enabled` need to be enabled. | true |
 | `spark.comet.scan.preFetch.enabled` | Whether to enable pre-fetching feature of CometScan. | false |
 | `spark.comet.scan.preFetch.threadNum` | The number of threads running pre-fetching for CometScan. Effective if spark.comet.scan.preFetch.enabled is enabled. Note that more pre-fetching threads means more memory requirement to store pre-fetched row groups. | 2 |
-| `spark.comet.sparkToColumnar.enabled` | Whether to enable Spark to Arrow columnar conversion. When this is turned on, Comet will convert operators in `spark.comet.sparkToColumnar.supportedOperatorList` into Arrow columnar format before processing. | false |
-| `spark.comet.sparkToColumnar.supportedOperatorList` | A comma-separated list of operators that will be converted to Arrow columnar format when `spark.comet.sparkToColumnar.enabled` is true | Range,InMemoryTableScan,RDDScan |
 | `spark.hadoop.fs.comet.libhdfs.schemes` | Defines filesystem schemes (e.g., hdfs, webhdfs) that the native side accesses via libhdfs, separated by commas. Valid only when built with hdfs feature enabled. | |
 <!--END:CONFIG_TABLE-->
 
@@ -127,9 +122,14 @@ These settings can be used to determine which parts of the plan are accelerated
 | Config | Description | Default Value |
 |--------|-------------|---------------|
 | `spark.comet.columnar.shuffle.memory.factor` | Fraction of Comet memory to be allocated per executor process for columnar shuffle when running in on-heap mode. For more information, refer to the [Comet Tuning Guide](https://datafusion.apache.org/comet/user-guide/tuning.html). | 1.0 |
+| `spark.comet.convert.csv.enabled` | When enabled, data from Spark (non-native) CSV v1 and v2 scans will be converted to Arrow format. This is an experimental feature and has known issues with non-UTC timezones. | false |
+| `spark.comet.convert.json.enabled` | When enabled, data from Spark (non-native) JSON v1 and v2 scans will be converted to Arrow format. This is an experimental feature and has known issues with non-UTC timezones. | false |
+| `spark.comet.convert.parquet.enabled` | When enabled, data from Spark (non-native) Parquet v1 and v2 scans will be converted to Arrow format.  This is an experimental feature and has known issues with non-UTC timezones. | false |
 | `spark.comet.exec.onHeap.enabled` | Whether to allow Comet to run in on-heap mode. Required for running Spark SQL tests. Can be overridden by environment variable `ENABLE_COMET_ONHEAP`. | false |
 | `spark.comet.exec.onHeap.memoryPool` | The type of memory pool to be used for Comet native execution when running Spark in on-heap mode. Available pool types are `greedy`, `fair_spill`, `greedy_task_shared`, `fair_spill_task_shared`, `greedy_global`, `fair_spill_global`, and `unbounded`. | greedy_task_shared |
 | `spark.comet.memoryOverhead` | The amount of additional memory to be allocated per executor process for Comet, in MiB, when running Spark in on-heap mode. | 1024 MiB |
+| `spark.comet.sparkToColumnar.enabled` | Whether to enable Spark to Arrow columnar conversion. When this is turned on, Comet will convert operators in `spark.comet.sparkToColumnar.supportedOperatorList` into Arrow columnar format before processing. This is an experimental feature and has known issues with non-UTC timezones. | false |
+| `spark.comet.sparkToColumnar.supportedOperatorList` | A comma-separated list of operators that will be converted to Arrow columnar format when `spark.comet.sparkToColumnar.enabled` is true. | Range,InMemoryTableScan,RDDScan |
 | `spark.comet.testing.strict` | Experimental option to enable strict testing, which will fail tests that could be more comprehensive, such as checking for a specific fallback reason. Can be overridden by environment variable `ENABLE_COMET_STRICT_TESTING`. | false |
 <!--END:CONFIG_TABLE-->
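
For context on the scan table above: these settings are normally supplied when the Spark session is created. Below is a minimal sketch, assuming a standard Comet install; the plugin class `org.apache.spark.CometPlugin` is taken from Comet's installation guide rather than from this diff, and the thread count is illustrative.

```scala
import org.apache.spark.sql.SparkSession

// Sketch of enabling the native scan path from the first table.
// Config keys are verbatim from the table; the rest is assumed setup.
val spark = SparkSession.builder()
  .appName("comet-scan-example")
  // Load Comet as a Spark plugin (per Comet's install docs).
  .config("spark.plugins", "org.apache.spark.CometPlugin")
  // Native scans; pair with exec.enabled for native vectorized execution.
  .config("spark.comet.scan.enabled", "true")
  .config("spark.comet.exec.enabled", "true")
  // Optional: pre-fetch row groups on 4 threads (more threads use more memory).
  .config("spark.comet.scan.preFetch.enabled", "true")
  .config("spark.comet.scan.preFetch.threadNum", "4")
  .getOrCreate()
```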
 
diff --git a/searchindex.js b/searchindex.js
index ede19e0eb..b51af8efe 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Install Comet": [[18, "install-comet"]], "2. Clone Spark and Apply Diff": [[18, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[18, "run-spark-sql-tests"]], "ANSI Mode": [[21, "ansi-mode"], [34, "ansi-mode"], [74, "ansi-mode"]], "ANSI mode": [[47, "ansi-mode"], [60, "ansi-mode"]], "API Differences Between Spark Versions": [[3, "api-differences-between-spark-versions"]], "ASF Links": [[2, null], [2, null]], "Accelerating Apache Iceberg Parque [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Install Comet": [[18, "install-comet"]], "2. Clone Spark and Apply Diff": [[18, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[18, "run-spark-sql-tests"]], "ANSI Mode": [[21, "ansi-mode"], [34, "ansi-mode"], [74, "ansi-mode"]], "ANSI mode": [[47, "ansi-mode"], [60, "ansi-mode"]], "API Differences Between Spark Versions": [[3, "api-differences-between-spark-versions"]], "ASF Links": [[2, null], [2, null]], "Accelerating Apache Iceberg Parque [...]
\ No newline at end of file
diff --git a/user-guide/latest/configs.html b/user-guide/latest/configs.html
index 294ca1b85..b87397df5 100644
--- a/user-guide/latest/configs.html
+++ b/user-guide/latest/configs.html
@@ -473,43 +473,23 @@ under the License.
 </tr>
 </thead>
 <tbody>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.convert.csv.enabled</span></code></p></td>
-<td><p>When enabled, data from Spark (non-native) CSV v1 and v2 scans will be 
converted to Arrow format. Note that to enable native vectorized execution, 
both this config and <code class="docutils literal notranslate"><span 
class="pre">spark.comet.exec.enabled</span></code> need to be enabled.</p></td>
-<td><p>false</p></td>
-</tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.convert.json.enabled</span></code></p></td>
-<td><p>When enabled, data from Spark (non-native) JSON v1 and v2 scans will be 
converted to Arrow format. Note that to enable native vectorized execution, 
both this config and <code class="docutils literal notranslate"><span 
class="pre">spark.comet.exec.enabled</span></code> need to be enabled.</p></td>
-<td><p>false</p></td>
-</tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.convert.parquet.enabled</span></code></p></td>
-<td><p>When enabled, data from Spark (non-native) Parquet v1 and v2 scans will 
be converted to Arrow format. Note that to enable native vectorized execution, 
both this config and <code class="docutils literal notranslate"><span 
class="pre">spark.comet.exec.enabled</span></code> need to be enabled.</p></td>
-<td><p>false</p></td>
-</tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.allowIncompatible</span></code></p></td>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.allowIncompatible</span></code></p></td>
 <td><p>Some Comet scan implementations are not currently fully compatible with 
Spark for all datatypes. Set this config to true to allow them anyway. For more 
information, refer to the <a class="reference external" 
href="https://datafusion.apache.org/comet/user-guide/compatibility.html";>Comet 
Compatibility Guide</a>.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.enabled</span></code></p></td>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.enabled</span></code></p></td>
 <td><p>Whether to enable native scans. When this is turned on, Spark will use 
Comet to read supported data sources (currently only Parquet is supported 
natively). Note that to enable native vectorized execution, both this config 
and <code class="docutils literal notranslate"><span 
class="pre">spark.comet.exec.enabled</span></code> need to be enabled.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.preFetch.enabled</span></code></p></td>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.preFetch.enabled</span></code></p></td>
 <td><p>Whether to enable pre-fetching feature of CometScan.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.preFetch.threadNum</span></code></p></td>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.preFetch.threadNum</span></code></p></td>
 <td><p>The number of threads running pre-fetching for CometScan. Effective if 
spark.comet.scan.preFetch.enabled is enabled. Note that more pre-fetching 
threads means more memory requirement to store pre-fetched row groups.</p></td>
 <td><p>2</p></td>
 </tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.sparkToColumnar.enabled</span></code></p></td>
-<td><p>Whether to enable Spark to Arrow columnar conversion. When this is 
turned on, Comet will convert operators in <code class="docutils literal 
notranslate"><span 
class="pre">spark.comet.sparkToColumnar.supportedOperatorList</span></code> 
into Arrow columnar format before processing.</p></td>
-<td><p>false</p></td>
-</tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.sparkToColumnar.supportedOperatorList</span></code></p></td>
-<td><p>A comma-separated list of operators that will be converted to Arrow 
columnar format when <code class="docutils literal notranslate"><span 
class="pre">spark.comet.sparkToColumnar.enabled</span></code> is true</p></td>
-<td><p>Range,InMemoryTableScan,RDDScan</p></td>
-</tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.hadoop.fs.comet.libhdfs.schemes</span></code></p></td>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.hadoop.fs.comet.libhdfs.schemes</span></code></p></td>
 <td><p>Defines filesystem schemes (e.g., hdfs, webhdfs) that the native side 
accesses via libhdfs, separated by commas. Valid only when built with hdfs 
feature enabled.</p></td>
 <td><p></p></td>
 </tr>
@@ -776,19 +756,39 @@ under the License.
 <td><p>Fraction of Comet memory to be allocated per executor process for columnar shuffle when running in on-heap mode. For more information, refer to the <a class="reference external" href="https://datafusion.apache.org/comet/user-guide/tuning.html">Comet Tuning Guide</a>.</p></td>
 <td><p>1.0</p></td>
 </tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.exec.onHeap.enabled</span></code></p></td>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.convert.csv.enabled</span></code></p></td>
+<td><p>When enabled, data from Spark (non-native) CSV v1 and v2 scans will be converted to Arrow format. This is an experimental feature and has known issues with non-UTC timezones.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.convert.json.enabled</span></code></p></td>
+<td><p>When enabled, data from Spark (non-native) JSON v1 and v2 scans will be converted to Arrow format. This is an experimental feature and has known issues with non-UTC timezones.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.convert.parquet.enabled</span></code></p></td>
+<td><p>When enabled, data from Spark (non-native) Parquet v1 and v2 scans will be converted to Arrow format.  This is an experimental feature and has known issues with non-UTC timezones.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.exec.onHeap.enabled</span></code></p></td>
 <td><p>Whether to allow Comet to run in on-heap mode. Required for running Spark SQL tests. Can be overridden by environment variable <code class="docutils literal notranslate"><span class="pre">ENABLE_COMET_ONHEAP</span></code>.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.exec.onHeap.memoryPool</span></code></p></td>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.exec.onHeap.memoryPool</span></code></p></td>
 <td><p>The type of memory pool to be used for Comet native execution when running Spark in on-heap mode. Available pool types are <code class="docutils literal notranslate"><span class="pre">greedy</span></code>, <code class="docutils literal notranslate"><span class="pre">fair_spill</span></code>, <code class="docutils literal notranslate"><span class="pre">greedy_task_shared</span></code>, <code class="docutils literal notranslate"><span class="pre">fair_spill_task_shared</span></code> [...]
 <td><p>greedy_task_shared</p></td>
 </tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.memoryOverhead</span></code></p></td>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.memoryOverhead</span></code></p></td>
 <td><p>The amount of additional memory to be allocated per executor process for Comet, in MiB, when running Spark in on-heap mode.</p></td>
 <td><p>1024 MiB</p></td>
 </tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.testing.strict</span></code></p></td>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.sparkToColumnar.enabled</span></code></p></td>
+<td><p>Whether to enable Spark to Arrow columnar conversion. When this is turned on, Comet will convert operators in <code class="docutils literal notranslate"><span class="pre">spark.comet.sparkToColumnar.supportedOperatorList</span></code> into Arrow columnar format before processing. This is an experimental feature and has known issues with non-UTC timezones.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.sparkToColumnar.supportedOperatorList</span></code></p></td>
+<td><p>A comma-separated list of operators that will be converted to Arrow columnar format when <code class="docutils literal notranslate"><span class="pre">spark.comet.sparkToColumnar.enabled</span></code> is true.</p></td>
+<td><p>Range,InMemoryTableScan,RDDScan</p></td>
+</tr>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.testing.strict</span></code></p></td>
 <td><p>Experimental option to enable strict testing, which will fail tests that could be more comprehensive, such as checking for a specific fallback reason. Can be overridden by environment variable <code class="docutils literal notranslate"><span class="pre">ENABLE_COMET_STRICT_TESTING</span></code>.</p></td>
 <td><p>false</p></td>
 </tr>
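
The second table in this publish covers development and testing settings. Below is a hedged sketch of turning on the experimental conversion paths, reusing the assumed session setup and plugin class from the earlier sketch; the config keys, pool name, and default operator list are verbatim from the table.

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: the experimental convert.* and sparkToColumnar.* settings
// documented above, which have known issues with non-UTC timezones.
val spark = SparkSession.builder()
  .appName("comet-experimental-conversion")
  .config("spark.plugins", "org.apache.spark.CometPlugin")
  // On-heap mode with one of the documented native memory pool types.
  .config("spark.comet.exec.onHeap.enabled", "true")
  .config("spark.comet.exec.onHeap.memoryPool", "greedy_task_shared")
  // Experimental: convert non-native Parquet scans to Arrow format.
  .config("spark.comet.convert.parquet.enabled", "true")
  // Experimental: convert these row-based operators to Arrow columnar
  // format before processing (the documented defaults).
  .config("spark.comet.sparkToColumnar.enabled", "true")
  .config("spark.comet.sparkToColumnar.supportedOperatorList",
    "Range,InMemoryTableScan,RDDScan")
  .getOrCreate()
```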

