This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 1db45ef88 Publish built docs triggered by 62e8c2a79e4c7b6e443ed948c06fa35ce5cc8ac1
1db45ef88 is described below

commit 1db45ef885100529515cc32c558d1c80c6b6f8ab
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Wed Jan 28 16:54:20 2026 +0000

    Publish built docs triggered by 62e8c2a79e4c7b6e443ed948c06fa35ce5cc8ac1
---
 _sources/user-guide/latest/configs.md.txt | 2 +-
 user-guide/latest/configs.html            | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/_sources/user-guide/latest/configs.md.txt b/_sources/user-guide/latest/configs.md.txt
index 9828b8b7f..d41edc50f 100644
--- a/_sources/user-guide/latest/configs.md.txt
+++ b/_sources/user-guide/latest/configs.md.txt
@@ -63,7 +63,7 @@ Comet provides the following configuration settings.
 | `spark.comet.dppFallback.enabled` | Whether to fall back to Spark for queries that use DPP. | true |
 | `spark.comet.enabled` | Whether to enable Comet extension for Spark. When this is turned on, Spark will use Comet to read Parquet data source. Note that to enable native vectorized execution, both this config and `spark.comet.exec.enabled` need to be enabled. It can be overridden by the environment variable `ENABLE_COMET`. | true |
 | `spark.comet.exceptionOnDatetimeRebase` | Whether to throw exception when seeing dates/timestamps from the legacy hybrid (Julian + Gregorian) calendar. Since Spark 3, dates/timestamps were written according to the Proleptic Gregorian calendar. When this is true, Comet will throw exceptions when seeing these dates/timestamps that were written by Spark version before 3.0. If this is false, these dates/timestamps will be read as if they were written to the Proleptic Gregorian calendar and [...]
-| `spark.comet.exec.columnarToRow.native.enabled` | Whether to enable native columnar to row conversion. When enabled, Comet will use native Rust code to convert Arrow columnar data to Spark UnsafeRow format instead of the JVM implementation. This can improve performance for queries that need to convert between columnar and row formats. This is an experimental feature. | false |
+| `spark.comet.exec.columnarToRow.native.enabled` | Whether to enable native columnar to row conversion. When enabled, Comet will use native Rust code to convert Arrow columnar data to Spark UnsafeRow format instead of the JVM implementation. This can improve performance for queries that need to convert between columnar and row formats. | true |
 | `spark.comet.exec.enabled` | Whether to enable Comet native vectorized execution for Spark. This controls whether Spark should convert operators into their Comet counterparts and execute them in native space. Note: each operator is associated with a separate config in the format of `spark.comet.exec.<operator_name>.enabled` at the moment, and both the config and this need to be turned on, in order for the operator to be executed in native. | true |
 | `spark.comet.exec.replaceSortMergeJoin` | Experimental feature to force Spark to replace SortMergeJoin with ShuffledHashJoin for improved performance. This feature is not stable yet. For more information, refer to the [Comet Tuning Guide](https://datafusion.apache.org/comet/user-guide/tuning.html). | false |
 | `spark.comet.exec.strictFloatingPoint` | When enabled, fall back to Spark for floating-point operations that may differ from Spark, such as when comparing or sorting -0.0 and 0.0. For more information, refer to the [Comet Compatibility Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
diff --git a/user-guide/latest/configs.html b/user-guide/latest/configs.html
index f9253ee66..7062732f6 100644
--- a/user-guide/latest/configs.html
+++ b/user-guide/latest/configs.html
@@ -582,8 +582,8 @@ under the License.
 <td><p>false</p></td>
 </tr>
 <tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.exec.columnarToRow.native.enabled</span></code></p></td>
-<td><p>Whether to enable native columnar to row conversion. When enabled, Comet will use native Rust code to convert Arrow columnar data to Spark UnsafeRow format instead of the JVM implementation. This can improve performance for queries that need to convert between columnar and row formats. This is an experimental feature.</p></td>
-<td><p>false</p></td>
+<td><p>Whether to enable native columnar to row conversion. When enabled, Comet will use native Rust code to convert Arrow columnar data to Spark UnsafeRow format instead of the JVM implementation. This can improve performance for queries that need to convert between columnar and row formats.</p></td>
+<td><p>true</p></td>
 </tr>
 <tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.exec.enabled</span></code></p></td>
 <td><p>Whether to enable Comet native vectorized execution for Spark. This controls whether Spark should convert operators into their Comet counterparts and execute them in native space. Note: each operator is associated with a separate config in the format of <code class="docutils literal notranslate"><span class="pre">spark.comet.exec.&lt;operator_name&gt;.enabled</span></code> at the moment, and both the config and this need to be turned on, in order for the operator to be executed in [...]
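
For context, this commit flips the default of `spark.comet.exec.columnarToRow.native.enabled` from false to true, so the native Rust columnar-to-row conversion is now on by default and no longer documented as experimental. Below is a minimal sketch of setting these configs when building a Spark session in Scala; the plugin class name and local master are assumptions based on the Comet user guide and may vary by version, and the final setting is only needed to opt back in to the JVM conversion:

    import org.apache.spark.sql.SparkSession

    // Minimal sketch; "org.apache.spark.CometPlugin" and local[*] are
    // assumptions from the Comet user guide, not part of this commit.
    val spark = SparkSession.builder()
      .master("local[*]")
      .config("spark.plugins", "org.apache.spark.CometPlugin")
      // Both must be true for native vectorized execution, per the
      // `spark.comet.enabled` and `spark.comet.exec.enabled` entries above.
      .config("spark.comet.enabled", "true")
      .config("spark.comet.exec.enabled", "true")
      // Defaults to true as of this commit; set to false to fall back to
      // the JVM columnar-to-row conversion.
      .config("spark.comet.exec.columnarToRow.native.enabled", "false")
      .getOrCreate()

The same settings can be passed as `--conf` flags to spark-submit, and per the `spark.comet.enabled` description above, the extension as a whole can also be toggled via the `ENABLE_COMET` environment variable.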


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
