This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 7fb79480c Publish built docs triggered by 
937cacd887653eed7e75283c27cb8a1b4b07cda6
7fb79480c is described below

commit 7fb79480cff412d57c2a4abe4ccde9409a2bd606
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Thu Nov 20 18:32:00 2025 +0000

    Publish built docs triggered by 937cacd887653eed7e75283c27cb8a1b4b07cda6
---
 _sources/user-guide/latest/configs.md.txt |  1 +
 searchindex.js                            |  2 +-
 user-guide/latest/configs.html            | 10 +++++++---
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/_sources/user-guide/latest/configs.md.txt 
b/_sources/user-guide/latest/configs.md.txt
index 90eb88600..1e77032f7 100644
--- a/_sources/user-guide/latest/configs.md.txt
+++ b/_sources/user-guide/latest/configs.md.txt
@@ -30,6 +30,7 @@ Comet provides the following configuration settings.
 |--------|-------------|---------------|
 | `spark.comet.scan.allowIncompatible` | Some Comet scan implementations are 
not currently fully compatible with Spark for all datatypes. Set this config to 
true to allow them anyway. For more information, refer to the [Comet 
Compatibility 
Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html). | 
false |
 | `spark.comet.scan.enabled` | Whether to enable native scans. When this is 
turned on, Spark will use Comet to read supported data sources (currently only 
Parquet is supported natively). Note that to enable native vectorized 
execution, both this config and `spark.comet.exec.enabled` need to be enabled. 
| true |
+| `spark.comet.scan.icebergNative.enabled` | Whether to enable native Iceberg 
table scan using iceberg-rust. When enabled, Iceberg tables are read directly 
through native execution, bypassing Spark's DataSource V2 API for better 
performance. | false |
 | `spark.comet.scan.preFetch.enabled` | Whether to enable pre-fetching feature 
of CometScan. | false |
 | `spark.comet.scan.preFetch.threadNum` | The number of threads running 
pre-fetching for CometScan. Effective if spark.comet.scan.preFetch.enabled is 
enabled. Note that more pre-fetching threads means more memory requirement to 
store pre-fetched row groups. | 2 |
 | `spark.hadoop.fs.comet.libhdfs.schemes` | Defines filesystem schemes (e.g., 
hdfs, webhdfs) that the native side accesses via libhdfs, separated by commas. 
Valid only when built with hdfs feature enabled. | |
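For context, the flag added in this hunk is enabled alongside the existing scan/exec switches at submit time. A minimal sketch of such an invocation, assuming a Comet-enabled Spark deployment; the job class and JAR name are placeholders, not part of this commit:

```shell
# Hypothetical spark-submit invocation enabling Comet's native scan paths.
# spark.comet.scan.icebergNative.enabled is the new flag documented above;
# per the same table, native vectorized execution additionally requires
# both spark.comet.scan.enabled and spark.comet.exec.enabled.
# com.example.MyIcebergJob and my-iceberg-job.jar are placeholders.
spark-submit \
  --conf spark.comet.scan.enabled=true \
  --conf spark.comet.exec.enabled=true \
  --conf spark.comet.scan.icebergNative.enabled=true \
  --class com.example.MyIcebergJob \
  my-iceberg-job.jar
```

Note that `spark.comet.scan.icebergNative.enabled` defaults to `false`, so Iceberg tables continue to go through Spark's DataSource V2 API unless it is set explicitly.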
diff --git a/searchindex.js b/searchindex.js
index a1eb399b7..0cc082827 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Install Comet": [[19, "install-comet"]], 
"1. Native Operators (nativeExecs map)": [[4, 
"native-operators-nativeexecs-map"]], "2. Clone Spark and Apply Diff": [[19, 
"clone-spark-and-apply-diff"]], "2. Sink Operators (sinks map)": [[4, 
"sink-operators-sinks-map"]], "3. Comet JVM Operators": [[4, 
"comet-jvm-operators"]], "3. Run Spark SQL Tests": [[19, 
"run-spark-sql-tests"]], "ANSI Mode": [[22, "ansi-mode"], [35, "ansi-mode"], 
[75, "ansi-mode"]], "ANSI mo [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Install Comet": [[19, "install-comet"]], 
"1. Native Operators (nativeExecs map)": [[4, 
"native-operators-nativeexecs-map"]], "2. Clone Spark and Apply Diff": [[19, 
"clone-spark-and-apply-diff"]], "2. Sink Operators (sinks map)": [[4, 
"sink-operators-sinks-map"]], "3. Comet JVM Operators": [[4, 
"comet-jvm-operators"]], "3. Run Spark SQL Tests": [[19, 
"run-spark-sql-tests"]], "ANSI Mode": [[22, "ansi-mode"], [35, "ansi-mode"], 
[75, "ansi-mode"]], "ANSI mo [...]
\ No newline at end of file
diff --git a/user-guide/latest/configs.html b/user-guide/latest/configs.html
index 4b8556420..6832f1020 100644
--- a/user-guide/latest/configs.html
+++ b/user-guide/latest/configs.html
@@ -482,15 +482,19 @@ under the License.
 <td><p>Whether to enable native scans. When this is turned on, Spark will use 
Comet to read supported data sources (currently only Parquet is supported 
natively). Note that to enable native vectorized execution, both this config 
and <code class="docutils literal notranslate"><span 
class="pre">spark.comet.exec.enabled</span></code> need to be enabled.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.preFetch.enabled</span></code></p></td>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.icebergNative.enabled</span></code></p></td>
+<td><p>Whether to enable native Iceberg table scan using iceberg-rust. When 
enabled, Iceberg tables are read directly through native execution, bypassing 
Spark’s DataSource V2 API for better performance.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.preFetch.enabled</span></code></p></td>
 <td><p>Whether to enable pre-fetching feature of CometScan.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.preFetch.threadNum</span></code></p></td>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.preFetch.threadNum</span></code></p></td>
 <td><p>The number of threads running pre-fetching for CometScan. Effective if 
spark.comet.scan.preFetch.enabled is enabled. Note that more pre-fetching 
threads means more memory requirement to store pre-fetched row groups.</p></td>
 <td><p>2</p></td>
 </tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.hadoop.fs.comet.libhdfs.schemes</span></code></p></td>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.hadoop.fs.comet.libhdfs.schemes</span></code></p></td>
 <td><p>Defines filesystem schemes (e.g., hdfs, webhdfs) that the native side 
accesses via libhdfs, separated by commas. Valid only when built with hdfs 
feature enabled.</p></td>
 <td><p></p></td>
 </tr>


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
