This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new ee8a9f908 Publish built docs triggered by 5c1f1311c78ae19e1d6ae52ea63621d257e70484
ee8a9f908 is described below

commit ee8a9f908cbd7aea9da62491275c9c47b2403214
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Wed Feb 18 15:39:48 2026 +0000

    Publish built docs triggered by 5c1f1311c78ae19e1d6ae52ea63621d257e70484
---
 _sources/user-guide/latest/compatibility.md.txt | 2 +-
 _sources/user-guide/latest/configs.md.txt       | 4 ++--
 searchindex.js                                  | 2 +-
 user-guide/latest/compatibility.html            | 2 +-
 user-guide/latest/configs.html                  | 6 +++---
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/_sources/user-guide/latest/compatibility.md.txt b/_sources/user-guide/latest/compatibility.md.txt
index 0163efc4f..6fe09df9e 100644
--- a/_sources/user-guide/latest/compatibility.md.txt
+++ b/_sources/user-guide/latest/compatibility.md.txt
@@ -62,7 +62,7 @@ the [Comet Supported Expressions Guide](expressions.md) for more information on
 
 Comet uses the Rust regexp crate for evaluating regular expressions, and this has different behavior from Java's
 regular expression engine. Comet will fall back to Spark for patterns that are known to produce different results, but
-this can be overridden by setting `spark.comet.regexp.allowIncompatible=true`.
+this can be overridden by setting `spark.comet.expression.regexp.allowIncompatible=true`.
 
 ## Window Functions
 
diff --git a/_sources/user-guide/latest/configs.md.txt b/_sources/user-guide/latest/configs.md.txt
index 7a0ed1dc0..617634cb0 100644
--- a/_sources/user-guide/latest/configs.md.txt
+++ b/_sources/user-guide/latest/configs.md.txt
@@ -71,7 +71,7 @@ Comet provides the following configuration settings.
 | `spark.comet.maxTempDirectorySize` | The maximum amount of data (in bytes) stored inside the temporary directories. | 107374182400b |
 | `spark.comet.metrics.updateInterval` | The interval in milliseconds to update metrics. If interval is negative, metrics will be updated upon task completion. | 3000 |
 | `spark.comet.nativeLoadRequired` | Whether to require Comet native library to load successfully when Comet is enabled. If not, Comet will silently fall back to Spark when it fails to load the native lib. Otherwise, an error will be thrown and the Spark job will be aborted. | false |
-| `spark.comet.regexp.allowIncompatible` | Comet is not currently fully compatible with Spark for all regular expressions. Set this config to true to allow them anyway. For more information, refer to the [Comet Compatibility Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
+| `spark.comet.operator.DataWritingCommandExec.allowIncompatible` | Whether to allow incompatibility for operator: DataWritingCommandExec. It can be overridden by the environment variable `SPARK_COMET_OPERATOR_DATAWRITINGCOMMANDEXEC_ALLOWINCOMPATIBLE`. | false |
 <!-- prettier-ignore-end -->
 <!--END:CONFIG_TABLE-->
 
@@ -142,7 +142,7 @@ These settings can be used to determine which parts of the plan are accelerated
 | `spark.comet.exec.onHeap.memoryPool` | The type of memory pool to be used for Comet native execution when running Spark in on-heap mode. Available pool types are `greedy`, `fair_spill`, `greedy_task_shared`, `fair_spill_task_shared`, `greedy_global`, `fair_spill_global`, and `unbounded`. | greedy_task_shared |
 | `spark.comet.exec.respectDataFusionConfigs` | Development and testing configuration option to allow DataFusion configs set in Spark configuration settings starting with `spark.comet.datafusion.` to be passed into native execution. | false |
 | `spark.comet.memoryOverhead` | The amount of additional memory to be allocated per executor process for Comet, in MiB, when running Spark in on-heap mode. | 1024 MiB |
-| `spark.comet.parquet.write.enabled` | Whether to enable native Parquet write through Comet. When enabled, Comet will intercept Parquet write operations and execute them natively. This feature is highly experimental and only partially implemented. It should not be used in production. | false |
+| `spark.comet.parquet.write.enabled` | Whether to enable native Parquet write through Comet. When enabled, Comet will intercept Parquet write operations and execute them natively. This feature is highly experimental and only partially implemented. It should not be used in production. It can be overridden by the environment variable `ENABLE_COMET_WRITE`. | false |
 | `spark.comet.scan.csv.v2.enabled` | Whether to use the native Comet V2 CSV reader for improved performance. Default: false (uses standard Spark CSV reader) Experimental: Performance benefits are workload-dependent. | false |
 | `spark.comet.sparkToColumnar.enabled` | Whether to enable Spark to Arrow columnar conversion. When this is turned on, Comet will convert operators in `spark.comet.sparkToColumnar.supportedOperatorList` into Arrow columnar format before processing. This is an experimental feature and has known issues with non-UTC timezones. | false |
 | `spark.comet.sparkToColumnar.supportedOperatorList` | A comma-separated list of operators that will be converted to Arrow columnar format when `spark.comet.sparkToColumnar.enabled` is true. | Range,InMemoryTableScan,RDDScan |
diff --git a/searchindex.js b/searchindex.js
index 1985b0dbe..2fb393851 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Format Your Code": [[12, "format-your-code"]], "1. Install Comet": [[22, "install-comet"]], "1. Native Operators (nativeExecs map)": [[4, "native-operators-nativeexecs-map"]], "2. Build and Verify": [[12, "build-and-verify"]], "2. Clone Spark and Apply Diff": [[22, "clone-spark-and-apply-diff"]], "2. Sink Operators (sinks map)": [[4, "sink-operators-sinks-map"]], "3. Comet JVM Operators": [[4, "comet-jvm-operators"]], "3. Run Clippy (Recommended)": [[12 [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Format Your Code": [[12, "format-your-code"]], "1. Install Comet": [[22, "install-comet"]], "1. Native Operators (nativeExecs map)": [[4, "native-operators-nativeexecs-map"]], "2. Build and Verify": [[12, "build-and-verify"]], "2. Clone Spark and Apply Diff": [[22, "clone-spark-and-apply-diff"]], "2. Sink Operators (sinks map)": [[4, "sink-operators-sinks-map"]], "3. Comet JVM Operators": [[4, "comet-jvm-operators"]], "3. Run Clippy (Recommended)": [[12 [...]
\ No newline at end of file
diff --git a/user-guide/latest/compatibility.html b/user-guide/latest/compatibility.html
index 33c007450..4d8edc0f2 100644
--- a/user-guide/latest/compatibility.html
+++ b/user-guide/latest/compatibility.html
@@ -502,7 +502,7 @@ the <a class="reference internal" href="expressions.html"><span class="std std-d
 <h2>Regular Expressions<a class="headerlink" href="#regular-expressions" title="Link to this heading">#</a></h2>
 <p>Comet uses the Rust regexp crate for evaluating regular expressions, and this has different behavior from Java’s
 regular expression engine. Comet will fall back to Spark for patterns that are known to produce different results, but
-this can be overridden by setting <code class="docutils literal notranslate"><span class="pre">spark.comet.regexp.allowIncompatible=true</span></code>.</p>
+this can be overridden by setting <code class="docutils literal notranslate"><span class="pre">spark.comet.expression.regexp.allowIncompatible=true</span></code>.</p>
 </section>
 <section id="window-functions">
 <h2>Window Functions<a class="headerlink" href="#window-functions" title="Link to this heading">#</a></h2>
diff --git a/user-guide/latest/configs.html b/user-guide/latest/configs.html
index fa21bce21..38d8822ad 100644
--- a/user-guide/latest/configs.html
+++ b/user-guide/latest/configs.html
@@ -613,8 +613,8 @@ under the License.
 <td><p>Whether to require Comet native library to load successfully when Comet is enabled. If not, Comet will silently fall back to Spark when it fails to load the native lib. Otherwise, an error will be thrown and the Spark job will be aborted.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.regexp.allowIncompatible</span></code></p></td>
-<td><p>Comet is not currently fully compatible with Spark for all regular expressions. Set this config to true to allow them anyway. For more information, refer to the <a class="reference external" href="https://datafusion.apache.org/comet/user-guide/compatibility.html">Comet Compatibility Guide</a>.</p></td>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.operator.DataWritingCommandExec.allowIncompatible</span></code></p></td>
+<td><p>Whether to allow incompatibility for operator: DataWritingCommandExec. It can be overridden by the environment variable <code class="docutils literal notranslate"><span class="pre">SPARK_COMET_OPERATOR_DATAWRITINGCOMMANDEXEC_ALLOWINCOMPATIBLE</span></code>.</p></td>
 <td><p>false</p></td>
 </tr>
 </tbody>
@@ -819,7 +819,7 @@ under the License.
 <td><p>1024 MiB</p></td>
 </tr>
 <tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.parquet.write.enabled</span></code></p></td>
-<td><p>Whether to enable native Parquet write through Comet. When enabled, Comet will intercept Parquet write operations and execute them natively. This feature is highly experimental and only partially implemented. It should not be used in production.</p></td>
+<td><p>Whether to enable native Parquet write through Comet. When enabled, Comet will intercept Parquet write operations and execute them natively. This feature is highly experimental and only partially implemented. It should not be used in production. It can be overridden by the environment variable <code class="docutils literal notranslate"><span class="pre">ENABLE_COMET_WRITE</span></code>.</p></td>
 <td><p>false</p></td>
 </tr>
 <tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.scan.csv.v2.enabled</span></code></p></td>
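
The settings touched by this diff are ordinary Spark configuration keys. As an illustrative sketch only (not part of the committed docs), the renamed regexp override and the experimental Parquet write flag could be enabled in `spark-defaults.conf` like any other Comet option:

```properties
# Renamed from spark.comet.regexp.allowIncompatible: allow regexp
# patterns whose results may differ from Spark's Java engine
spark.comet.expression.regexp.allowIncompatible  true

# Highly experimental native Parquet write; not for production.
# Per the docs above, it can also be toggled via the
# ENABLE_COMET_WRITE environment variable.
spark.comet.parquet.write.enabled  true
```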


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
