This is an automated email from the ASF dual-hosted git repository.

gengliang pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/spark-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 0ceaaaf528 Fix the CSS of Spark 3.5.0 doc's generated tables (#492)
0ceaaaf528 is described below

commit 0ceaaaf528ec1d0201e1eab1288f37cce607268b
Author: Gengliang Wang <gengli...@apache.org>
AuthorDate: Thu Nov 30 15:06:18 2023 -0800

    Fix the CSS of Spark 3.5.0 doc's generated tables (#492)
    
    After https://github.com/apache/spark/pull/40269, the generated tables in the
    Spark docs (for example,
    [sql-ref-ansi-compliance.html](https://spark.apache.org/docs/latest/sql-ref-ansi-compliance.html))
    have no borders. Currently only the Spark 3.5.0 docs are affected.
    This PR applies the changes from
    https://github.com/apache/spark/pull/44096 to the current Spark 3.5.0 docs by:
    1. Changing `site/docs/3.5.0/css/custom.css`.
    2. Executing `sed -i '' 's/table class="table table-striped"/table/' *.html`
       in the `site/docs/3.5.0/` directory.
    
    This should be a safe change; I have verified it in my local environment.
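    
    For reference, a minimal shell sketch of the two steps above. This is
    illustrative only, not the exact commands recorded in this commit: the
    recursive `find` variant and the trailing `grep` check are added here for
    completeness, `sed -i ''` is the BSD/macOS form, and GNU sed takes `-i`
    with no argument.
    
        # Step 1: edit the 3.5.0 stylesheet to add the table border and
        # striping rules (the exact rules are in the css/custom.css hunk below).
        vi site/docs/3.5.0/css/custom.css
    
        # Step 2: strip the Bootstrap classes from the generated tables.
        cd site/docs/3.5.0
        sed -i '' 's/table class="table table-striped"/table/' *.html
    
        # To also cover HTML files in subdirectories (GNU sed shown):
        find . -name '*.html' -exec \
          sed -i 's/table class="table table-striped"/table/' {} +
    
        # Sanity check: no striped-table markup should remain.
        grep -rl 'table class="table table-striped"' . || echo "clean"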
---
 site/docs/3.5.0/building-spark.html                |  2 +-
 site/docs/3.5.0/cluster-overview.html              |  2 +-
 site/docs/3.5.0/configuration.html                 | 40 +++++++++++-----------
 site/docs/3.5.0/css/custom.css                     | 13 +++++++
 site/docs/3.5.0/ml-classification-regression.html  | 14 ++++----
 site/docs/3.5.0/ml-clustering.html                 |  8 ++---
 .../3.5.0/mllib-classification-regression.html     |  2 +-
 site/docs/3.5.0/mllib-decision-tree.html           |  2 +-
 site/docs/3.5.0/mllib-ensembles.html               |  2 +-
 site/docs/3.5.0/mllib-evaluation-metrics.html      | 10 +++---
 site/docs/3.5.0/mllib-linear-methods.html          |  4 +--
 site/docs/3.5.0/mllib-pmml-model-export.html       |  2 +-
 site/docs/3.5.0/monitoring.html                    | 10 +++---
 site/docs/3.5.0/rdd-programming-guide.html         |  8 ++---
 site/docs/3.5.0/running-on-kubernetes.html         |  8 ++---
 site/docs/3.5.0/running-on-mesos.html              |  2 +-
 site/docs/3.5.0/running-on-yarn.html               |  8 ++---
 site/docs/3.5.0/security.html                      | 26 +++++++-------
 site/docs/3.5.0/spark-standalone.html              | 12 +++----
 site/docs/3.5.0/sparkr.html                        |  6 ++--
 site/docs/3.5.0/sql-data-sources-avro.html         | 12 +++----
 site/docs/3.5.0/sql-data-sources-csv.html          |  2 +-
 site/docs/3.5.0/sql-data-sources-hive-tables.html  |  4 +--
 site/docs/3.5.0/sql-data-sources-jdbc.html         |  2 +-
 site/docs/3.5.0/sql-data-sources-json.html         |  2 +-
 .../sql-data-sources-load-save-functions.html      |  2 +-
 site/docs/3.5.0/sql-data-sources-orc.html          |  4 +--
 site/docs/3.5.0/sql-data-sources-parquet.html      |  4 +--
 site/docs/3.5.0/sql-data-sources-text.html         |  2 +-
 .../sql-distributed-sql-engine-spark-sql-cli.html  |  4 +--
 .../docs/3.5.0/sql-error-conditions-sqlstates.html | 26 +++++++-------
 site/docs/3.5.0/sql-migration-guide.html           |  4 +--
 site/docs/3.5.0/sql-performance-tuning.html        | 16 ++++-----
 site/docs/3.5.0/storage-openstack-swift.html       |  2 +-
 site/docs/3.5.0/streaming-custom-receivers.html    |  2 +-
 site/docs/3.5.0/streaming-programming-guide.html   | 10 +++---
 .../structured-streaming-kafka-integration.html    | 20 +++++------
 .../structured-streaming-programming-guide.html    | 12 +++----
 site/docs/3.5.0/submitting-applications.html       |  2 +-
 site/docs/3.5.0/web-ui.html                        |  2 +-
 40 files changed, 164 insertions(+), 151 deletions(-)

diff --git a/site/docs/3.5.0/building-spark.html 
b/site/docs/3.5.0/building-spark.html
index 0af9dd6517..672d686bc3 100644
--- a/site/docs/3.5.0/building-spark.html
+++ b/site/docs/3.5.0/building-spark.html
@@ -481,7 +481,7 @@ Change the major Scala version using (e.g. 2.13):</p>
 
 <h3 id="related-environment-variables">Related environment variables</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Variable Name</th><th>Default</th><th>Meaning</th></tr></thead>
 <tr>
   <td><code>SPARK_PROJECT_URL</code></td>
diff --git a/site/docs/3.5.0/cluster-overview.html 
b/site/docs/3.5.0/cluster-overview.html
index d6015a8686..552b24b729 100644
--- a/site/docs/3.5.0/cluster-overview.html
+++ b/site/docs/3.5.0/cluster-overview.html
@@ -216,7 +216,7 @@ The <a href="job-scheduling.html">job scheduling 
overview</a> describes this in
 
 <p>The following table summarizes terms you&#8217;ll see used to refer to 
cluster concepts:</p>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr><th style="width: 130px;">Term</th><th>Meaning</th></tr>
   </thead>
diff --git a/site/docs/3.5.0/configuration.html 
b/site/docs/3.5.0/configuration.html
index d6c9255302..3ca1684ffd 100644
--- a/site/docs/3.5.0/configuration.html
+++ b/site/docs/3.5.0/configuration.html
@@ -309,7 +309,7 @@ of the most common options to set are:</p>
 
 <h3 id="application-properties">Application Properties</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.app.name</code></td>
@@ -694,7 +694,7 @@ of the most common options to set are:</p>
 
 <h3 id="runtime-environment">Runtime Environment</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.driver.extraClassPath</code></td>
@@ -1081,7 +1081,7 @@ of the most common options to set are:</p>
 
 <h3 id="shuffle-behavior">Shuffle Behavior</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.reducer.maxSizeInFlight</code></td>
@@ -1456,7 +1456,7 @@ of the most common options to set are:</p>
 
 <h3 id="spark-ui">Spark UI</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.eventLog.logBlockUpdates.enabled</code></td>
@@ -1848,7 +1848,7 @@ of the most common options to set are:</p>
 
 <h3 id="compression-and-serialization">Compression and Serialization</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.broadcast.compress</code></td>
@@ -2046,7 +2046,7 @@ of the most common options to set are:</p>
 
 <h3 id="memory-management">Memory Management</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.memory.fraction</code></td>
@@ -2171,7 +2171,7 @@ of the most common options to set are:</p>
 
 <h3 id="execution-behavior">Execution Behavior</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.broadcast.blockSize</code></td>
@@ -2421,7 +2421,7 @@ of the most common options to set are:</p>
 
 <h3 id="executor-metrics">Executor Metrics</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.eventLog.logStageExecutorMetrics</code></td>
@@ -2489,7 +2489,7 @@ of the most common options to set are:</p>
 
 <h3 id="networking">Networking</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.rpc.message.maxSize</code></td>
@@ -2652,7 +2652,7 @@ of the most common options to set are:</p>
 
 <h3 id="scheduling">Scheduling</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.cores.max</code></td>
@@ -3136,7 +3136,7 @@ of the most common options to set are:</p>
 
 <h3 id="barrier-execution-mode">Barrier Execution Mode</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.barrier.sync.timeout</code></td>
@@ -3183,7 +3183,7 @@ of the most common options to set are:</p>
 
 <h3 id="dynamic-allocation">Dynamic Allocation</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.dynamicAllocation.enabled</code></td>
@@ -3325,7 +3325,7 @@ finer granularity starting from driver and executor. Take 
RPC module as example
 like shuffle, just replace &#8220;rpc&#8221; with &#8220;shuffle&#8221; in the 
property names except
 <code>spark.{driver|executor}.rpc.netty.dispatcher.numThreads</code>, which is 
only for RPC module.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.{driver|executor}.rpc.io.serverThreads</code></td>
@@ -4923,7 +4923,7 @@ Note that 1, 2, and 3 support wildcard. For example:
 
 <h3 id="spark-streaming">Spark Streaming</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.streaming.backpressure.enabled</code></td>
@@ -5055,7 +5055,7 @@ Note that 1, 2, and 3 support wildcard. For example:
 
 <h3 id="sparkr">SparkR</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.r.numRBackendThreads</code></td>
@@ -5111,7 +5111,7 @@ Note that 1, 2, and 3 support wildcard. For example:
 
 <h3 id="graphx">GraphX</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.graphx.pregel.checkpointInterval</code></td>
@@ -5126,7 +5126,7 @@ Note that 1, 2, and 3 support wildcard. For example:
 
 <h3 id="deploy">Deploy</h3>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
   <tr>
     <td><code>spark.deploy.recoveryMode</code></td>
@@ -5174,7 +5174,7 @@ copy <code class="language-plaintext 
highlighter-rouge">conf/spark-env.sh.templa
 
 <p>The following variables can be set in <code class="language-plaintext 
highlighter-rouge">spark-env.sh</code>:</p>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th style="width:21%">Environment 
Variable</th><th>Meaning</th></tr></thead>
   <tr>
     <td><code>JAVA_HOME</code></td>
@@ -5310,7 +5310,7 @@ This is only available for the RDD API in Scala, Java, 
and Python.  It is availa
 
 <h3 id="external-shuffle-serviceserver-side-configuration-options">External 
Shuffle service(server) side configuration options</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.shuffle.push.server.mergedShuffleFileManagerImpl</code></td>
@@ -5344,7 +5344,7 @@ This is only available for the RDD API in Scala, Java, 
and Python.  It is availa
 
 <h3 id="client-side-configuration-options">Client side configuration 
options</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.shuffle.push.enabled</code></td>
diff --git a/site/docs/3.5.0/css/custom.css b/site/docs/3.5.0/css/custom.css
index 4576f45d1a..e7416d9ded 100644
--- a/site/docs/3.5.0/css/custom.css
+++ b/site/docs/3.5.0/css/custom.css
@@ -1110,5 +1110,18 @@ img {
 table {
   width: 100%;
   overflow-wrap: normal;
+  border-collapse: collapse; /* Ensures that the borders collapse into a 
single border */
 }
 
+table th, table td {
+  border: 1px solid #cccccc; /* Adds a border to each table header and data 
cell */
+  padding: 6px 13px; /* Optional: Adds padding inside each cell for better 
readability */
+}
+
+table tr {
+  background-color: white; /* Sets a default background color for all rows */
+}
+
+table tr:nth-child(2n) {
+  background-color: #F1F4F5; /* Sets a different background color for even 
rows */
+}
diff --git a/site/docs/3.5.0/ml-classification-regression.html 
b/site/docs/3.5.0/ml-classification-regression.html
index 6cc40fbd85..83c236fad2 100644
--- a/site/docs/3.5.0/ml-classification-regression.html
+++ b/site/docs/3.5.0/ml-classification-regression.html
@@ -2705,7 +2705,7 @@ others.</p>
 
 <h3 id="available-families">Available families</h3>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr>
       <th>Family</th>
@@ -4147,7 +4147,7 @@ All output columns are optional; to exclude an output 
column, set its correspond
 
 <h3 id="input-columns">Input Columns</h3>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr>
       <th align="left">Param name</th>
@@ -4174,7 +4174,7 @@ All output columns are optional; to exclude an output 
column, set its correspond
 
 <h3 id="output-columns">Output Columns</h3>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr>
       <th align="left">Param name</th>
@@ -4250,7 +4250,7 @@ All output columns are optional; to exclude an output 
column, set its correspond
 
 <h4 id="input-columns-1">Input Columns</h4>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr>
       <th align="left">Param name</th>
@@ -4277,7 +4277,7 @@ All output columns are optional; to exclude an output 
column, set its correspond
 
 <h4 id="output-columns-predictions">Output Columns (Predictions)</h4>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr>
       <th align="left">Param name</th>
@@ -4329,7 +4329,7 @@ All output columns are optional; to exclude an output 
column, set its correspond
 
 <h4 id="input-columns-2">Input Columns</h4>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr>
       <th align="left">Param name</th>
@@ -4358,7 +4358,7 @@ All output columns are optional; to exclude an output 
column, set its correspond
 
 <h4 id="output-columns-predictions-1">Output Columns (Predictions)</h4>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr>
       <th align="left">Param name</th>
diff --git a/site/docs/3.5.0/ml-clustering.html 
b/site/docs/3.5.0/ml-clustering.html
index ccc0cb191b..f093bf505e 100644
--- a/site/docs/3.5.0/ml-clustering.html
+++ b/site/docs/3.5.0/ml-clustering.html
@@ -402,7 +402,7 @@ called <a 
href="http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf";>kmea
 
 <h3 id="input-columns">Input Columns</h3>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr>
       <th align="left">Param name</th>
@@ -423,7 +423,7 @@ called <a 
href="http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf";>kmea
 
 <h3 id="output-columns">Output Columns</h3>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr>
       <th align="left">Param name</th>
@@ -840,7 +840,7 @@ model.</p>
 
 <h3 id="input-columns-1">Input Columns</h3>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr>
       <th align="left">Param name</th>
@@ -861,7 +861,7 @@ model.</p>
 
 <h3 id="output-columns-1">Output Columns</h3>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr>
       <th align="left">Param name</th>
diff --git a/site/docs/3.5.0/mllib-classification-regression.html 
b/site/docs/3.5.0/mllib-classification-regression.html
index b4eb93ff39..f052d69fba 100644
--- a/site/docs/3.5.0/mllib-classification-regression.html
+++ b/site/docs/3.5.0/mllib-classification-regression.html
@@ -431,7 +431,7 @@ classification</a>, and
 <a href="http://en.wikipedia.org/wiki/Regression_analysis";>regression 
analysis</a>. The table below outlines
 the supported algorithms for each type of problem.</p>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr><th>Problem Type</th><th>Supported Methods</th></tr>
   </thead>
diff --git a/site/docs/3.5.0/mllib-decision-tree.html 
b/site/docs/3.5.0/mllib-decision-tree.html
index a673501145..a92c22b494 100644
--- a/site/docs/3.5.0/mllib-decision-tree.html
+++ b/site/docs/3.5.0/mllib-decision-tree.html
@@ -419,7 +419,7 @@ is the information gain when a split <code 
class="language-plaintext highlighter
 implementation provides two impurity measures for classification (Gini 
impurity and entropy) and one
 impurity measure for regression (variance).</p>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr><th>Impurity</th><th>Task</th><th>Formula</th><th>Description</th></tr>
   </thead>
diff --git a/site/docs/3.5.0/mllib-ensembles.html 
b/site/docs/3.5.0/mllib-ensembles.html
index a075e72cd5..4e606b517d 100644
--- a/site/docs/3.5.0/mllib-ensembles.html
+++ b/site/docs/3.5.0/mllib-ensembles.html
@@ -818,7 +818,7 @@ Note that each loss is applicable to one of classification 
or regression, not bo
 
 <p>Notation: $N$ = number of instances. $y_i$ = label of instance $i$.  $x_i$ 
= features of instance $i$.  $F(x_i)$ = model&#8217;s predicted label for 
instance $i$.</p>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr><th>Loss</th><th>Task</th><th>Formula</th><th>Description</th></tr>
   </thead>
diff --git a/site/docs/3.5.0/mllib-evaluation-metrics.html 
b/site/docs/3.5.0/mllib-evaluation-metrics.html
index 49f9c4734e..f0ac45379c 100644
--- a/site/docs/3.5.0/mllib-evaluation-metrics.html
+++ b/site/docs/3.5.0/mllib-evaluation-metrics.html
@@ -441,7 +441,7 @@ plots (recall, false positive rate) points.</p>
 
 <p><strong>Available metrics</strong></p>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr><th>Metric</th><th>Definition</th></tr>
   </thead>
@@ -706,7 +706,7 @@ correctly normalized by the number of times that label 
appears in the output.</p
 
 \[\hat{\delta}(x) = \begin{cases}1 &amp; \text{if $x = 0$}, \\ 0 &amp; 
\text{otherwise}.\end{cases}\]
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr><th>Metric</th><th>Definition</th></tr>
   </thead>
@@ -983,7 +983,7 @@ correspond to document $d_i$.</p>
 
 \[I_A(x) = \begin{cases}1 &amp; \text{if $x \in A$}, \\ 0 &amp; 
\text{otherwise}.\end{cases}\]
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr><th>Metric</th><th>Definition</th></tr>
   </thead>
@@ -1263,7 +1263,7 @@ documents, returns a relevance score for the recommended 
document.</p>
 
 \[rel_D(r) = \begin{cases}1 &amp; \text{if $r \in D$}, \\ 0 &amp; 
\text{otherwise}.\end{cases}\]
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr><th>Metric</th><th>Definition</th><th>Notes</th></tr>
   </thead>
@@ -1595,7 +1595,7 @@ variable from a number of independent variables.</p>
 
 <p><strong>Available metrics</strong></p>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr><th>Metric</th><th>Definition</th></tr>
   </thead>
diff --git a/site/docs/3.5.0/mllib-linear-methods.html 
b/site/docs/3.5.0/mllib-linear-methods.html
index d133fefd8f..09542778e6 100644
--- a/site/docs/3.5.0/mllib-linear-methods.html
+++ b/site/docs/3.5.0/mllib-linear-methods.html
@@ -437,7 +437,7 @@ training error) and minimizing model complexity (i.e., to 
avoid overfitting).</p
 <p>The following table summarizes the loss functions and their gradients or 
sub-gradients for the
 methods <code class="language-plaintext highlighter-rouge">spark.mllib</code> 
supports:</p>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr><th></th><th>loss function $L(\wv; \x, y)$</th><th>gradient or 
sub-gradient</th></tr>
   </thead>
@@ -470,7 +470,7 @@ multiclass labeling.</p>
 encourage simple models and avoid overfitting.  We support the following
 regularizers in <code class="language-plaintext 
highlighter-rouge">spark.mllib</code>:</p>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr><th></th><th>regularizer $R(\wv)$</th><th>gradient or 
sub-gradient</th></tr>
   </thead>
diff --git a/site/docs/3.5.0/mllib-pmml-model-export.html 
b/site/docs/3.5.0/mllib-pmml-model-export.html
index b2f44ec56b..4621af6ae9 100644
--- a/site/docs/3.5.0/mllib-pmml-model-export.html
+++ b/site/docs/3.5.0/mllib-pmml-model-export.html
@@ -379,7 +379,7 @@
 
 <p>The table below outlines the <code class="language-plaintext 
highlighter-rouge">spark.mllib</code> models that can be exported to PMML and 
their equivalent PMML model.</p>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr><th>spark.mllib model</th><th>PMML model</th></tr>
   </thead>
diff --git a/site/docs/3.5.0/monitoring.html b/site/docs/3.5.0/monitoring.html
index bfc9e8e235..fd254e8d86 100644
--- a/site/docs/3.5.0/monitoring.html
+++ b/site/docs/3.5.0/monitoring.html
@@ -226,7 +226,7 @@ spark.eventLog.dir hdfs://namenode/shared/spark-logs
 
 <h3 id="environment-variables">Environment Variables</h3>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th style="width:21%">Environment 
Variable</th><th>Meaning</th></tr></thead>
   <tr>
     <td><code>SPARK_DAEMON_MEMORY</code></td>
@@ -304,7 +304,7 @@ Use it with caution.</p>
 <p>Security options for the Spark History Server are covered more detail in the
 <a href="security.html#web-ui">Security</a> page.</p>
 
-<table class="table table-striped">
+<table>
   <thead>
   <tr>
     <th>Property Name</th>
@@ -635,7 +635,7 @@ only for applications in cluster mode, not applications in 
client mode. Applicat
 can be identified by their <code class="language-plaintext 
highlighter-rouge">[attempt-id]</code>. In the API listed below, when running 
in YARN cluster mode,
 <code class="language-plaintext highlighter-rouge">[app-id]</code> will 
actually be <code class="language-plaintext 
highlighter-rouge">[base-app-id]/[attempt-id]</code>, where <code 
class="language-plaintext highlighter-rouge">[base-app-id]</code> is the YARN 
application ID.</p>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th>Endpoint</th><th>Meaning</th></tr></thead>
   <tr>
     <td><code>/applications</code></td>
@@ -834,7 +834,7 @@ more entries by increasing these values and restarting the 
history server.</p>
 of task execution. The metrics can be used for performance troubleshooting and 
workload characterization.
 A list of the available metrics, with a short description:</p>
 
-<table class="table table-striped">
+<table>
   <thead>
     <tr>
       <th>Spark Executor Task Metric name</th>
@@ -992,7 +992,7 @@ In addition, aggregated per-stage peak values of the 
executor memory metrics are
 Executor memory metrics are also exposed via the Spark metrics system based on 
the <a href="https://metrics.dropwizard.io/4.2.0";>Dropwizard metrics 
library</a>.
 A list of the available metrics, with a short description:</p>
 
-<table class="table table-striped">
+<table>
   <thead>
       <tr><th>Executor Level Metric name</th>
       <th>Short description</th>
diff --git a/site/docs/3.5.0/rdd-programming-guide.html 
b/site/docs/3.5.0/rdd-programming-guide.html
index f6c3bbf095..3df7363e05 100644
--- a/site/docs/3.5.0/rdd-programming-guide.html
+++ b/site/docs/3.5.0/rdd-programming-guide.html
@@ -518,7 +518,7 @@ resulting Java objects using <a 
href="https://github.com/irmen/pickle/";>pickle</
 PySpark does the reverse. It unpickles Python objects into Java objects and 
then converts them to Writables. The following
 Writables are automatically converted:</p>
 
-    <table class="table table-striped">
+    <table>
 <thead><tr><th>Writable Type</th><th>Python Type</th></tr></thead>
 <tr><td>Text</td><td>str</td></tr>
 <tr><td>IntWritable</td><td>int</td></tr>
@@ -1079,7 +1079,7 @@ and pair RDD functions doc
  <a 
href="api/java/index.html?org/apache/spark/api/java/JavaPairRDD.html">Java</a>)
 for details.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th 
style="width:25%">Transformation</th><th>Meaning</th></tr></thead>
 <tr>
   <td> <b>map</b>(<i>func</i>) </td>
@@ -1194,7 +1194,7 @@ RDD API doc
  <a 
href="api/java/index.html?org/apache/spark/api/java/JavaPairRDD.html">Java</a>)
 for details.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Action</th><th>Meaning</th></tr></thead>
 <tr>
   <td> <b>reduce</b>(<i>func</i>) </td>
@@ -1340,7 +1340,7 @@ to <code class="language-plaintext 
highlighter-rouge">persist()</code>. The <cod
 which is <code class="language-plaintext 
highlighter-rouge">StorageLevel.MEMORY_ONLY</code> (store deserialized objects 
in memory). The full set of
 storage levels is:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th style="width:23%">Storage 
Level</th><th>Meaning</th></tr></thead>
 <tr>
   <td> MEMORY_ONLY </td>
diff --git a/site/docs/3.5.0/running-on-kubernetes.html 
b/site/docs/3.5.0/running-on-kubernetes.html
index fe79c79d9d..045a7db2f3 100644
--- a/site/docs/3.5.0/running-on-kubernetes.html
+++ b/site/docs/3.5.0/running-on-kubernetes.html
@@ -757,7 +757,7 @@ using <code class="language-plaintext 
highlighter-rouge">--conf</code> as means
 
 <h4 id="spark-properties">Spark Properties</h4>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.kubernetes.context</code></td>
@@ -1823,7 +1823,7 @@ using <code class="language-plaintext 
highlighter-rouge">--conf</code> as means
 
 <h3 id="pod-metadata">Pod Metadata</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Pod metadata key</th><th>Modified 
value</th><th>Description</th></tr></thead>
 <tr>
   <td>name</td>
@@ -1859,7 +1859,7 @@ using <code class="language-plaintext 
highlighter-rouge">--conf</code> as means
 
 <h3 id="pod-spec">Pod Spec</h3>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Pod spec key</th><th>Modified 
value</th><th>Description</th></tr></thead>
 <tr>
   <td>imagePullSecrets</td>
@@ -1912,7 +1912,7 @@ using <code class="language-plaintext 
highlighter-rouge">--conf</code> as means
 
 <p>The following affect the driver and executor containers. All other 
containers in the pod spec will be unaffected.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Container spec key</th><th>Modified 
value</th><th>Description</th></tr></thead>
 <tr>
   <td>env</td>
diff --git a/site/docs/3.5.0/running-on-mesos.html 
b/site/docs/3.5.0/running-on-mesos.html
index 8a3f5bb10f..08f8128af3 100644
--- a/site/docs/3.5.0/running-on-mesos.html
+++ b/site/docs/3.5.0/running-on-mesos.html
@@ -536,7 +536,7 @@ termination. To launch it, run <code 
class="language-plaintext highlighter-rouge
 
 <h4 id="spark-properties">Spark Properties</h4>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.mesos.coarse</code></td>
diff --git a/site/docs/3.5.0/running-on-yarn.html 
b/site/docs/3.5.0/running-on-yarn.html
index 6b6337da4c..0ee92f2b8f 100644
--- a/site/docs/3.5.0/running-on-yarn.html
+++ b/site/docs/3.5.0/running-on-yarn.html
@@ -295,7 +295,7 @@ to the same log file).</p>
 
 <h4 id="spark-properties">Spark Properties</h4>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.yarn.am.memory</code></td>
@@ -848,7 +848,7 @@ to the same log file).</p>
 
 <h4 id="available-patterns-for-shs-custom-executor-log-url">Available patterns 
for SHS custom executor log URL</h4>
 
-<table class="table table-striped">
+<table>
     <thead><tr><th>Pattern</th><th>Meaning</th></tr></thead>
     <tr>
       <td>&#123;&#123;HTTP_SCHEME&#125;&#125;</td>
@@ -933,7 +933,7 @@ staging directory of the Spark application.</p>
 
 <h2 id="yarn-specific-kerberos-configuration">YARN-specific Kerberos 
Configuration</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.kerberos.keytab</code></td>
@@ -1030,7 +1030,7 @@ to avoid garbage collection issues during shuffle.</li>
 
 <p>The following extra configuration options are available when the shuffle 
service is running on YARN:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr></thead>
 <tr>
   <td><code>spark.yarn.shuffle.stopOnFailure</code></td>
diff --git a/site/docs/3.5.0/security.html b/site/docs/3.5.0/security.html
index 97f7bf7581..39131daa90 100644
--- a/site/docs/3.5.0/security.html
+++ b/site/docs/3.5.0/security.html
@@ -221,7 +221,7 @@ distributing the shared secret. Each application will use a 
unique shared secret
 the case of YARN, this feature relies on YARN RPC encryption being enabled for 
the distribution of
 secrets to be secure.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.yarn.shuffle.server.recovery.disabled</code></td>
@@ -243,7 +243,7 @@ that any user that can list pods in the namespace where the 
Spark application is
 also see their authentication secret. Access control rules should be properly 
set up by the
 Kubernetes admin to ensure that Spark authentication is secure.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.authenticate</code></td>
@@ -264,7 +264,7 @@ Kubernetes admin to ensure that Spark authentication is 
secure.</p>
 <p>Alternatively, one can mount authentication secrets using files and 
Kubernetes secrets that
 the user mounts into their pods.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.authenticate.secret.file</code></td>
@@ -320,7 +320,7 @@ is still required when talking to shuffle services from 
Spark versions older tha
 
 <p>The following table describes the different options available for 
configuring this feature.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.network.crypto.enabled</code></td>
@@ -379,7 +379,7 @@ encrypting output data generated by applications with APIs 
such as <code class="
 
 <p>The following settings cover enabling encryption for data written to 
disk:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.io.encryption.enabled</code></td>
@@ -446,7 +446,7 @@ below.</p>
 
 <p>The following options control the authentication of Web UIs:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.ui.allowFramingFrom</code></td>
@@ -550,7 +550,7 @@ servlet filters.</p>
 
 <p>To enable authorization in the SHS, a few extra options are used:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.history.ui.acls.enable</code></td>
@@ -599,7 +599,7 @@ protocol-specific settings. This way the user can easily 
provide the common sett
 protocols without disabling the ability to configure each one individually. 
The following table
 describes the SSL configuration namespaces:</p>
 
-<table class="table table-striped">
+<table>
   <thead>
   <tr>
     <th>Config Namespace</th>
@@ -630,7 +630,7 @@ describes the SSL configuration namespaces:</p>
 <p>The full breakdown of available SSL options can be found below. The <code 
class="language-plaintext highlighter-rouge">${ns}</code> placeholder should be
 replaced with one of the above namespaces.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr></thead>
   <tr>
     <td><code>${ns}.enabled</code></td>
@@ -800,7 +800,7 @@ appropriate files or environment variables.</p>
 (XSS), Cross-Frame Scripting (XFS), MIME-Sniffing, and also to enforce HTTP 
Strict Transport
 Security.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.ui.xXssProtection</code></td>
@@ -855,7 +855,7 @@ configure those ports.</p>
 
 <h2 id="standalone-mode-only">Standalone mode only</h2>
 
-<table class="table table-striped">
+<table>
   <thead>
   <tr>
     <th>From</th><th>To</th><th>Default 
Port</th><th>Purpose</th><th>Configuration
@@ -906,7 +906,7 @@ configure those ports.</p>
 
 <h2 id="all-cluster-managers">All cluster managers</h2>
 
-<table class="table table-striped">
+<table>
   <thead>
   <tr>
     <th>From</th><th>To</th><th>Default 
Port</th><th>Purpose</th><th>Configuration
@@ -981,7 +981,7 @@ deployment-specific page for more information.</p>
 
 <p>The following options provides finer-grained control for this feature:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.security.credentials.${service}.enabled</code></td>
diff --git a/site/docs/3.5.0/spark-standalone.html 
b/site/docs/3.5.0/spark-standalone.html
index aafe485852..7c0a5ee94f 100644
--- a/site/docs/3.5.0/spark-standalone.html
+++ b/site/docs/3.5.0/spark-standalone.html
@@ -198,7 +198,7 @@ You should see the new node listed there, along with its 
number of CPUs and memo
 
 <p>Finally, the following configuration options can be passed to the master 
and worker:</p>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th style="width:21%">Argument</th><th>Meaning</th></tr></thead>
   <tr>
     <td><code>-h HOST</code>, <code>--host HOST</code></td>
@@ -261,7 +261,7 @@ If you do not have a password-less setup, you can set the 
environment variable S
 
 <p>You can optionally configure the cluster further by setting environment 
variables in <code class="language-plaintext 
highlighter-rouge">conf/spark-env.sh</code>. Create this file by starting with 
the <code class="language-plaintext 
highlighter-rouge">conf/spark-env.sh.template</code>, and <em>copy it to all 
your worker machines</em> for the settings to take effect. The following 
settings are available:</p>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th style="width:21%">Environment 
Variable</th><th>Meaning</th></tr></thead>
   <tr>
     <td><code>SPARK_MASTER_HOST</code></td>
@@ -333,7 +333,7 @@ If you do not have a password-less setup, you can set the 
environment variable S
 
 <p>SPARK_MASTER_OPTS supports the following system properties:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.deploy.retainedApplications</code></td>
@@ -432,7 +432,7 @@ If you do not have a password-less setup, you can set the 
environment variable S
 
 <p>SPARK_WORKER_OPTS supports the following system properties:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.worker.cleanup.enabled</code></td>
@@ -538,7 +538,7 @@ constructor</a>.</p>
 
 <p>Spark applications supports the following configuration properties specific 
to standalone mode:</p>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th style="width:21%">Property Name</th><th>Default 
Value</th><th>Meaning</th><th>Since Version</th></tr></thead>
   <tr>
   <td><code>spark.standalone.submit.waitAppCompletion</code></td>
@@ -683,7 +683,7 @@ For more information about these configurations please 
refer to the <a href="con
 
 <p>In order to enable this recovery mode, you can set SPARK_DAEMON_JAVA_OPTS 
in spark-env using this configuration:</p>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th style="width:21%">System 
property</th><th>Meaning</th><th>Since Version</th></tr></thead>
   <tr>
     <td><code>spark.deploy.recoveryMode</code></td>
diff --git a/site/docs/3.5.0/sparkr.html b/site/docs/3.5.0/sparkr.html
index 99a43b78ac..488c0e27d5 100644
--- a/site/docs/3.5.0/sparkr.html
+++ b/site/docs/3.5.0/sparkr.html
@@ -258,7 +258,7 @@ them, pass them as you would other configuration properties 
in the <code class="
 
   <p>The following Spark driver properties can be set in <code 
class="language-plaintext highlighter-rouge">sparkConfig</code> with <code 
class="language-plaintext highlighter-rouge">sparkR.session</code> from 
RStudio:</p>
 
-  <table class="table table-striped">
+  <table>
   <thead><tr><th>Property Name</th><th>Property 
group</th><th><code>spark-submit</code> equivalent</th></tr></thead>
   <tr>
     <td><code>spark.master</code></td>
@@ -782,7 +782,7 @@ SparkR supports a subset of the available R formula 
operators for model fitting,
 <div><small>Find full example code at "examples/src/main/r/ml/ml.R" in the 
Spark repo.</small></div>
 
 <h1 id="data-type-mapping-between-r-and-spark">Data type mapping between R and 
Spark</h1>
-<table class="table table-striped">
+<table>
 <thead><tr><th>R</th><th>Spark</th></tr></thead>
 <tr>
   <td>byte</td>
@@ -921,7 +921,7 @@ function is masking another function.</p>
 
 <p>The following functions are masked by the SparkR package:</p>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th>Masked function</th><th>How to Access</th></tr></thead>
   <tr>
     <td><code>cov</code> in <code>package:stats</code></td>
diff --git a/site/docs/3.5.0/sql-data-sources-avro.html 
b/site/docs/3.5.0/sql-data-sources-avro.html
index 8ce20afc5a..fe4a011f4a 100644
--- a/site/docs/3.5.0/sql-data-sources-avro.html
+++ b/site/docs/3.5.0/sql-data-sources-avro.html
@@ -585,7 +585,7 @@ Kafka key-value record will be augmented with some 
metadata, such as the ingesti
   <li>the <code class="language-plaintext highlighter-rouge">options</code> 
parameter in function <code class="language-plaintext 
highlighter-rouge">from_avro</code>.</li>
 </ul>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th><b>Property 
Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Scope</b></th><th><b>Since
 Version</b></th></tr></thead>
   <tr>
     <td><code>avroSchema</code></td>
@@ -683,7 +683,7 @@ Kafka key-value record will be augmented with some 
metadata, such as the ingesti
 
 <h2 id="configuration">Configuration</h2>
 <p>Configuration of Avro can be done using the <code class="language-plaintext 
highlighter-rouge">setConf</code> method on SparkSession or by running <code 
class="language-plaintext highlighter-rouge">SET key=value</code> commands 
using SQL.</p>
-<table class="table table-striped">
+<table>
   <thead><tr><th><b>Property 
Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Since 
Version</b></th></tr></thead>
   <tr>
     <td>spark.sql.legacy.replaceDatabricksSparkAvro.enabled</td>
@@ -770,7 +770,7 @@ Submission Guide for more details.</p>
 
 <h2 id="supported-types-for-avro---spark-sql-conversion">Supported types for 
Avro -&gt; Spark SQL conversion</h2>
 <p>Currently Spark supports reading all <a 
href="https://avro.apache.org/docs/1.11.2/specification/#primitive-types";>primitive
 types</a> and <a 
href="https://avro.apache.org/docs/1.11.2/specification/#complex-types";>complex 
types</a> under records of Avro.</p>
-<table class="table table-striped">
+<table>
   <thead><tr><th><b>Avro type</b></th><th><b>Spark SQL 
type</b></th></tr></thead>
   <tr>
     <td>boolean</td>
@@ -837,7 +837,7 @@ All other union types are considered complex. They will be 
mapped to StructType
 
 <p>It also supports reading the following Avro <a 
href="https://avro.apache.org/docs/1.11.2/specification/#logical-types";>logical 
types</a>:</p>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th><b>Avro logical type</b></th><th><b>Avro 
type</b></th><th><b>Spark SQL type</b></th></tr></thead>
   <tr>
     <td>date</td>
@@ -870,7 +870,7 @@ All other union types are considered complex. They will be 
mapped to StructType
 <h2 id="supported-types-for-spark-sql---avro-conversion">Supported types for 
Spark SQL -&gt; Avro conversion</h2>
 <p>Spark supports writing of all Spark SQL types into Avro. For most types, 
the mapping from Spark types to Avro types is straightforward (e.g. IntegerType 
gets converted to int); however, there are a few special cases which are listed 
below:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th><b>Spark SQL type</b></th><th><b>Avro type</b></th><th><b>Avro 
logical type</b></th></tr></thead>
   <tr>
     <td>ByteType</td>
@@ -906,7 +906,7 @@ All other union types are considered complex. They will be 
mapped to StructType
 
 <p>You can also specify the whole output Avro schema with the option <code 
class="language-plaintext highlighter-rouge">avroSchema</code>, so that Spark 
SQL types can be converted into other Avro types. The following conversions are 
not applied by default and require user specified Avro schema:</p>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th><b>Spark SQL type</b></th><th><b>Avro 
type</b></th><th><b>Avro logical type</b></th></tr></thead>
   <tr>
     <td>BinaryType</td>
diff --git a/site/docs/3.5.0/sql-data-sources-csv.html 
b/site/docs/3.5.0/sql-data-sources-csv.html
index f75e1d1651..71fe964d48 100644
--- a/site/docs/3.5.0/sql-data-sources-csv.html
+++ b/site/docs/3.5.0/sql-data-sources-csv.html
@@ -582,7 +582,7 @@
   <li><code class="language-plaintext highlighter-rouge">OPTIONS</code> clause 
at <a href="sql-ref-syntax-ddl-create-table-datasource.html">CREATE TABLE USING 
DATA_SOURCE</a></li>
 </ul>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th><b>Property 
Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Scope</b></th></tr></thead>
   <tr>
     <td><code>sep</code></td>
diff --git a/site/docs/3.5.0/sql-data-sources-hive-tables.html 
b/site/docs/3.5.0/sql-data-sources-hive-tables.html
index f92935182f..51729dc5b7 100644
--- a/site/docs/3.5.0/sql-data-sources-hive-tables.html
+++ b/site/docs/3.5.0/sql-data-sources-hive-tables.html
@@ -711,7 +711,7 @@ format(&#8220;serde&#8221;, &#8220;input format&#8221;, 
&#8220;output format&#82
 By default, we will read the table files as plain text. Note that, Hive 
storage handler is not supported yet when
 creating table, you can create a table using storage handler at Hive side, and 
use Spark SQL to read it.</p>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th>Property Name</th><th>Meaning</th></tr></thead>
   <tr>
     <td><code>fileFormat</code></td>
@@ -759,7 +759,7 @@ will compile against built-in Hive and use those classes 
for internal execution
 
 <p>The following options can be used to configure the version of Hive that is 
used to retrieve metadata:</p>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
   <tr>
     <td><code>spark.sql.hive.metastore.version</code></td>
diff --git a/site/docs/3.5.0/sql-data-sources-jdbc.html 
b/site/docs/3.5.0/sql-data-sources-jdbc.html
index 70bf62a979..4cbd6bad6c 100644
--- a/site/docs/3.5.0/sql-data-sources-jdbc.html
+++ b/site/docs/3.5.0/sql-data-sources-jdbc.html
@@ -404,7 +404,7 @@ following command:</p>
 <code>user</code> and <code>password</code> are normally provided as 
connection properties for
 logging into the data sources.</p>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th><b>Property 
Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Scope</b></th></tr></thead>
   <tr>
     <td><code>url</code></td>
diff --git a/site/docs/3.5.0/sql-data-sources-json.html 
b/site/docs/3.5.0/sql-data-sources-json.html
index 265a764e30..be3c033b56 100644
--- a/site/docs/3.5.0/sql-data-sources-json.html
+++ b/site/docs/3.5.0/sql-data-sources-json.html
@@ -594,7 +594,7 @@ line must contain a separate, self-contained valid JSON 
object. For more informa
   <li><code class="language-plaintext highlighter-rouge">OPTIONS</code> clause 
at <a href="sql-ref-syntax-ddl-create-table-datasource.html">CREATE TABLE USING 
DATA_SOURCE</a></li>
 </ul>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th><b>Property 
Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Scope</b></th></tr></thead>
   <tr>
     <!-- TODO(SPARK-35433): Add timeZone to Data Source Option for CSV, too. 
-->
diff --git a/site/docs/3.5.0/sql-data-sources-load-save-functions.html 
b/site/docs/3.5.0/sql-data-sources-load-save-functions.html
index c9b568f81c..5ee1ddb7de 100644
--- a/site/docs/3.5.0/sql-data-sources-load-save-functions.html
+++ b/site/docs/3.5.0/sql-data-sources-load-save-functions.html
@@ -646,7 +646,7 @@ present. It is important to realize that these save modes 
do not utilize any loc
 atomic. Additionally, when performing an <code class="language-plaintext 
highlighter-rouge">Overwrite</code>, the data will be deleted before writing 
out the
 new data.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Scala/Java</th><th>Any 
Language</th><th>Meaning</th></tr></thead>
 <tr>
   <td><code>SaveMode.ErrorIfExists</code> (default)</td>
diff --git a/site/docs/3.5.0/sql-data-sources-orc.html 
b/site/docs/3.5.0/sql-data-sources-orc.html
index 8f605a0927..a5a548a169 100644
--- a/site/docs/3.5.0/sql-data-sources-orc.html
+++ b/site/docs/3.5.0/sql-data-sources-orc.html
@@ -489,7 +489,7 @@ Please visit <a 
href="https://hadoop.apache.org/docs/current/hadoop-kms/index.ht
 
 <h3 id="configuration">Configuration</h3>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th><b>Property 
Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Since 
Version</b></th></tr></thead>
   <tr>
     <td><code>spark.sql.orc.impl</code></td>
@@ -595,7 +595,7 @@ Please visit <a 
href="https://hadoop.apache.org/docs/current/hadoop-kms/index.ht
   <li><code class="language-plaintext highlighter-rouge">OPTIONS</code> clause 
at <a href="sql-ref-syntax-ddl-create-table-datasource.html">CREATE TABLE USING 
DATA_SOURCE</a></li>
 </ul>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th><b>Property 
Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Scope</b></th></tr></thead>
   <tr>
     <td><code>mergeSchema</code></td>
diff --git a/site/docs/3.5.0/sql-data-sources-parquet.html 
b/site/docs/3.5.0/sql-data-sources-parquet.html
index f612d32bfe..2b86e4ce46 100644
--- a/site/docs/3.5.0/sql-data-sources-parquet.html
+++ b/site/docs/3.5.0/sql-data-sources-parquet.html
@@ -954,7 +954,7 @@ metadata.</p>
   <li><code class="language-plaintext highlighter-rouge">OPTIONS</code> clause 
at <a href="sql-ref-syntax-ddl-create-table-datasource.html">CREATE TABLE USING 
DATA_SOURCE</a></li>
 </ul>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th><b>Property 
Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Scope</b></th></tr></thead>
   <tr>
     <td><code>datetimeRebaseMode</code></td>
@@ -1002,7 +1002,7 @@ metadata.</p>
 <p>Configuration of Parquet can be done using the <code 
class="language-plaintext highlighter-rouge">setConf</code> method on <code 
class="language-plaintext highlighter-rouge">SparkSession</code> or by running
 <code class="language-plaintext highlighter-rouge">SET key=value</code> 
commands using SQL.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.sql.parquet.binaryAsString</code></td>
diff --git a/site/docs/3.5.0/sql-data-sources-text.html 
b/site/docs/3.5.0/sql-data-sources-text.html
index ef3f2984aa..4435b391f7 100644
--- a/site/docs/3.5.0/sql-data-sources-text.html
+++ b/site/docs/3.5.0/sql-data-sources-text.html
@@ -530,7 +530,7 @@
   <li><code class="language-plaintext highlighter-rouge">OPTIONS</code> clause 
at <a href="sql-ref-syntax-ddl-create-table-datasource.html">CREATE TABLE USING 
DATA_SOURCE</a></li>
 </ul>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th><b>Property 
Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Scope</b></th></tr></thead>
   <tr>
     <td><code>wholetext</code></td>
diff --git a/site/docs/3.5.0/sql-distributed-sql-engine-spark-sql-cli.html 
b/site/docs/3.5.0/sql-distributed-sql-engine-spark-sql-cli.html
index b1473e8ce6..ea1fd70505 100644
--- a/site/docs/3.5.0/sql-distributed-sql-engine-spark-sql-cli.html
+++ b/site/docs/3.5.0/sql-distributed-sql-engine-spark-sql-cli.html
@@ -308,7 +308,7 @@ For example: <code class="language-plaintext 
highlighter-rouge">/path/to/spark-s
 
 <h2 id="supported-comment-types">Supported comment types</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Comment</th><th>Example</th></tr></thead>
 <tr>
   <td>simple comment</td>
@@ -362,7 +362,7 @@ Use <code class="language-plaintext 
highlighter-rouge">;</code> (semicolon) to t
   </li>
 </ol>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Command</th><th>Description</th></tr></thead>
 <tr>
   <td><code>quit</code> or <code>exit</code></td>
diff --git a/site/docs/3.5.0/sql-error-conditions-sqlstates.html 
b/site/docs/3.5.0/sql-error-conditions-sqlstates.html
index fbcc1fe141..38115eb954 100644
--- a/site/docs/3.5.0/sql-error-conditions-sqlstates.html
+++ b/site/docs/3.5.0/sql-error-conditions-sqlstates.html
@@ -462,7 +462,7 @@ Each character must be a digit <code 
class="language-plaintext highlighter-rouge
 
 <h2 id="class-0a-feature-not-supported">Class <code class="language-plaintext 
highlighter-rouge">0A</code>: feature not supported</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQLSTATE</th><th>Description and issuing error 
classes</th></tr></thead>
 <tr>
   <td>0A000</td>
@@ -477,7 +477,7 @@ Each character must be a digit <code 
class="language-plaintext highlighter-rouge
 </table>
 <h2 id="class-21-cardinality-violation">Class <code class="language-plaintext 
highlighter-rouge">21</code>: cardinality violation</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQLSTATE</th><th>Description and issuing error 
classes</th></tr></thead>
 <tr>
   <td>21000</td>
@@ -492,7 +492,7 @@ Each character must be a digit <code 
class="language-plaintext highlighter-rouge
 </table>
 <h2 id="class-22-data-exception">Class <code class="language-plaintext 
highlighter-rouge">22</code>: data exception</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQLSTATE</th><th>Description and issuing error 
classes</th></tr></thead>
 <tr>
   <td>22003</td>
@@ -597,7 +597,7 @@ Each character must be a digit <code 
class="language-plaintext highlighter-rouge
 </table>
 <h2 id="class-23-integrity-constraint-violation">Class <code 
class="language-plaintext highlighter-rouge">23</code>: integrity constraint 
violation</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQLSTATE</th><th>Description and issuing error 
classes</th></tr></thead>
 <tr>
   <td>23505</td>
@@ -612,7 +612,7 @@ Each character must be a digit <code 
class="language-plaintext highlighter-rouge
 </table>
 <h2 id="class-2b-dependent-privilege-descriptors-still-exist">Class <code 
class="language-plaintext highlighter-rouge">2B</code>: dependent privilege 
descriptors still exist</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQLSTATE</th><th>Description and issuing error 
classes</th></tr></thead>
 <tr>
   <td>2BP01</td>
@@ -627,7 +627,7 @@ Each character must be a digit <code 
class="language-plaintext highlighter-rouge
 </table>
 <h2 id="class-38-external-routine-exception">Class <code 
class="language-plaintext highlighter-rouge">38</code>: external routine 
exception</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQLSTATE</th><th>Description and issuing error 
classes</th></tr></thead>
 <tr>
   <td>38000</td>
@@ -642,7 +642,7 @@ Each character must be a digit <code 
class="language-plaintext highlighter-rouge
 </table>
 <h2 id="class-39-external-routine-invocation-exception">Class <code 
class="language-plaintext highlighter-rouge">39</code>: external routine 
invocation exception</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQLSTATE</th><th>Description and issuing error 
classes</th></tr></thead>
 <tr>
   <td>39000</td>
@@ -657,7 +657,7 @@ Each character must be a digit <code 
class="language-plaintext highlighter-rouge
 </table>
 <h2 id="class-42-syntax-error-or-access-rule-violation">Class <code 
class="language-plaintext highlighter-rouge">42</code>: syntax error or access 
rule violation</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQLSTATE</th><th>Description and issuing error 
classes</th></tr></thead>
 <tr>
   <td>42000</td>
@@ -1077,7 +1077,7 @@ Each character must be a digit <code 
class="language-plaintext highlighter-rouge
 </table>
 <h2 id="class-46-java-ddl-1">Class <code class="language-plaintext 
highlighter-rouge">46</code>: java ddl 1</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQLSTATE</th><th>Description and issuing error 
classes</th></tr></thead>
 <tr>
   <td>46110</td>
@@ -1101,7 +1101,7 @@ Each character must be a digit <code 
class="language-plaintext highlighter-rouge
 </table>
 <h2 id="class-53-insufficient-resources">Class <code class="language-plaintext 
highlighter-rouge">53</code>: insufficient resources</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQLSTATE</th><th>Description and issuing error 
classes</th></tr></thead>
 <tr>
   <td>53200</td>
@@ -1116,7 +1116,7 @@ Each character must be a digit <code 
class="language-plaintext highlighter-rouge
 </table>
 <h2 id="class-54-program-limit-exceeded">Class <code class="language-plaintext 
highlighter-rouge">54</code>: program limit exceeded</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQLSTATE</th><th>Description and issuing error 
classes</th></tr></thead>
 <tr>
   <td>54000</td>
@@ -1131,7 +1131,7 @@ Each character must be a digit <code 
class="language-plaintext highlighter-rouge
 </table>
 <h2 id="class-hy-cli-specific-condition">Class <code class="language-plaintext 
highlighter-rouge">HY</code>: CLI-specific condition</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQLSTATE</th><th>Description and issuing error 
classes</th></tr></thead>
 <tr>
   <td>HY008</td>
@@ -1146,7 +1146,7 @@ Each character must be a digit <code 
class="language-plaintext highlighter-rouge
 </table>
 <h2 id="class-xx-internal-error">Class <code class="language-plaintext 
highlighter-rouge">XX</code>: internal error</h2>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQLSTATE</th><th>Description and issuing error 
classes</th></tr></thead>
 <tr>
   <td>XX000</td>
diff --git a/site/docs/3.5.0/sql-migration-guide.html 
b/site/docs/3.5.0/sql-migration-guide.html
index 22b49d677a..3b174cc37c 100644
--- a/site/docs/3.5.0/sql-migration-guide.html
+++ b/site/docs/3.5.0/sql-migration-guide.html
@@ -1026,7 +1026,7 @@ the extremely short interval that results will likely 
cause applications to fail
 
 <ul>
   <li>In Spark version 2.3 and earlier, the second parameter to array_contains 
function is implicitly promoted to the element type of first array type 
parameter. This type promotion can be lossy and may cause <code 
class="language-plaintext highlighter-rouge">array_contains</code> function to 
return wrong result. This problem has been addressed in 2.4 by employing a 
safer type promotion mechanism. This can cause some change in behavior and are 
illustrated in the table below.
-    <table class="table table-striped">
+    <table>
     <thead>
       <tr>
         <th>
@@ -1167,7 +1167,7 @@ the extremely short interval that results will likely 
cause applications to fail
   </li>
   <li>
    <p>Partition column inference previously found an incorrect common type for 
different inferred types; for example, it ended up with double type 
as the common type for double type and date type. It now finds the correct 
common type for such conflicts. The conflict resolution follows the table 
below:</p>
-    <table class="table table-striped">
+    <table>
 <thead>
   <tr>
     <th>
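
For reference on the array_contains behavior change described in the
sql-migration-guide hunks above, here is a minimal Scala sketch; the session
setup and literal values are illustrative only, and the observed results depend
on the Spark version in use:

    import org.apache.spark.sql.SparkSession

    object ArrayContainsPromotionDemo {
      def main(args: Array[String]): Unit = {
        // Local session purely for illustration.
        val spark = SparkSession.builder()
          .appName("array-contains-promotion-demo")
          .master("local[*]")
          .getOrCreate()

        // Spark 2.3 and earlier promoted the second argument to the array's
        // element type, which could silently lose precision; 2.4 and later use
        // a safer promotion, so results can differ across versions.
        spark.sql("SELECT array_contains(array(1), 1.34)").show()

        // Making the intended comparison type explicit avoids relying on
        // implicit promotion altogether.
        spark.sql("SELECT array_contains(array(1), CAST(1.34 AS DOUBLE))").show()

        spark.stop()
      }
    }
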
diff --git a/site/docs/3.5.0/sql-performance-tuning.html 
b/site/docs/3.5.0/sql-performance-tuning.html
index 5db8609654..e38b9d8473 100644
--- a/site/docs/3.5.0/sql-performance-tuning.html
+++ b/site/docs/3.5.0/sql-performance-tuning.html
@@ -316,7 +316,7 @@ memory usage and GC pressure. You can call <code 
class="language-plaintext highl
 <p>Configuration of in-memory caching can be done using the <code 
class="language-plaintext highlighter-rouge">setConf</code> method on <code 
class="language-plaintext highlighter-rouge">SparkSession</code> or by running
 <code class="language-plaintext highlighter-rouge">SET key=value</code> 
commands using SQL.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td><code>spark.sql.inMemoryColumnarStorage.compressed</code></td>
@@ -344,7 +344,7 @@ memory usage and GC pressure. You can call <code 
class="language-plaintext highl
 <p>The following options can also be used to tune the performance of query 
execution. These options may be deprecated in a future release as more 
optimizations are performed automatically.</p>
 
-<table class="table table-striped">
+<table>
   <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
   <tr>
     <td><code>spark.sql.files.maxPartitionBytes</code></td>
@@ -524,7 +524,7 @@ hint has an initial partition number, columns, or 
both/neither of them as parame
 
 <h3 id="coalescing-post-shuffle-partitions">Coalescing Post Shuffle 
Partitions</h3>
 <p>This feature coalesces the post shuffle partitions based on the map output 
statistics when both <code class="language-plaintext 
highlighter-rouge">spark.sql.adaptive.enabled</code> and <code 
class="language-plaintext 
highlighter-rouge">spark.sql.adaptive.coalescePartitions.enabled</code> 
configurations are true. This feature simplifies the tuning of shuffle 
partition number when running queries. You do not need to set a proper shuffle 
partition number to fit your dataset. Spark can pi [...]
-<table class="table table-striped">
+<table>
    <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
    <tr>
      <td><code>spark.sql.adaptive.coalescePartitions.enabled</code></td>
@@ -569,7 +569,7 @@ hint has an initial partition number, columns, or 
both/neither of them as parame
  </table>
 
 <h3 id="spliting-skewed-shuffle-partitions">Spliting skewed shuffle 
partitions</h3>
-<table class="table table-striped">
+<table>
    <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
    <tr>
      
<td><code>spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled</code></td>
@@ -591,7 +591,7 @@ hint has an initial partition number, columns, or 
both/neither of them as parame
 
 <h3 id="converting-sort-merge-join-to-broadcast-join">Converting sort-merge 
join to broadcast join</h3>
 <p>AQE converts sort-merge join to broadcast hash join when the runtime 
statistics of either join side are smaller than the adaptive broadcast hash join 
threshold. This is not as efficient as planning a broadcast hash join in the 
first place, but it&#8217;s better than continuing with the sort-merge join, as 
it saves sorting both join sides and reads shuffle files locally to 
save network traffic (if <code class="language-plaintext 
highlighter-rouge">spark.sql.adaptive.localShuffleRea [...]
-<table class="table table-striped">
+<table>
      <thead><tr><th>Property 
Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
      <tr>
        <td><code>spark.sql.adaptive.autoBroadcastJoinThreshold</code></td>
@@ -613,7 +613,7 @@ hint has an initial partition number, columns, or 
both/neither of them as parame
 
 <h3 id="converting-sort-merge-join-to-shuffled-hash-join">Converting 
sort-merge join to shuffled hash join</h3>
 <p>AQE converts sort-merge join to shuffled hash join when all post-shuffle 
partitions are smaller than a threshold; see the config 
<code class="language-plaintext 
highlighter-rouge">spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold</code> 
for the maximum threshold.</p>
-<table class="table table-striped">
+<table>
      <thead><tr><th>Property 
Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
      <tr>
        
<td><code>spark.sql.adaptive.maxShuffledHashJoinLocalMapThreshold</code></td>
@@ -627,7 +627,7 @@ hint has an initial partition number, columns, or 
both/neither of them as parame
 
 <h3 id="optimizing-skew-join">Optimizing Skew Join</h3>
 <p>Data skew can severely downgrade the performance of join queries. This 
feature dynamically handles skew in sort-merge join by splitting (and 
replicating if needed) skewed tasks into roughly evenly sized tasks. It takes 
effect when both <code class="language-plaintext 
highlighter-rouge">spark.sql.adaptive.enabled</code> and <code 
class="language-plaintext 
highlighter-rouge">spark.sql.adaptive.skewJoin.enabled</code> configurations 
are enabled.</p>
-<table class="table table-striped">
+<table>
      <thead><tr><th>Property 
Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
      <tr>
        <td><code>spark.sql.adaptive.skewJoin.enabled</code></td>
@@ -664,7 +664,7 @@ hint has an initial partition number, columns, or 
both/neither of them as parame
    </table>
 
 <h3 id="misc">Misc</h3>
-<table class="table table-striped">
+<table>
     <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
     <tr>
       <td><code>spark.sql.adaptive.optimizer.excludedRules</code></td>
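
As a quick illustration of the tuning knobs listed in the sql-performance-tuning
tables above, a hedged Scala sketch of setting them through a SparkSession; the
property names come from the tables, while the session setup and values are
examples only:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("sql-tuning-demo")
      .master("local[*]")
      .getOrCreate()

    // Programmatic equivalent of running "SET key=value" in SQL.
    spark.conf.set("spark.sql.inMemoryColumnarStorage.compressed", "true")
    spark.conf.set("spark.sql.adaptive.enabled", "true")
    spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")
    spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

    // The same properties can also be set with plain SQL statements.
    spark.sql("SET spark.sql.adaptive.optimizeSkewsInRebalancePartitions.enabled=true")
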
diff --git a/site/docs/3.5.0/storage-openstack-swift.html 
b/site/docs/3.5.0/storage-openstack-swift.html
index dd85b98802..d6c96bfa60 100644
--- a/site/docs/3.5.0/storage-openstack-swift.html
+++ b/site/docs/3.5.0/storage-openstack-swift.html
@@ -178,7 +178,7 @@ required by Keystone.</p>
 <p>The following table lists the mandatory Keystone parameters. 
<code>PROVIDER</code> can be 
any (alphanumeric) name.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Meaning</th><th>Required</th></tr></thead>
 <tr>
   <td><code>fs.swift.service.PROVIDER.auth.url</code></td>
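
A minimal sketch of supplying one of the Keystone properties from the table
above through the Hadoop configuration; PROVIDER, the endpoint URL, and the rest
of the setup are placeholders, and the hadoop-openstack module is assumed to be
on the classpath:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("swift-keystone-demo")
      .master("local[*]")
      .getOrCreate()

    // Property name taken from the table above; the value is a placeholder.
    spark.sparkContext.hadoopConfiguration
      .set("fs.swift.service.PROVIDER.auth.url",
           "https://keystone.example.com:5000/v2.0/tokens")
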
diff --git a/site/docs/3.5.0/streaming-custom-receivers.html 
b/site/docs/3.5.0/streaming-custom-receivers.html
index 71cc9325a7..fd2b4b98fc 100644
--- a/site/docs/3.5.0/streaming-custom-receivers.html
+++ b/site/docs/3.5.0/streaming-custom-receivers.html
@@ -357,7 +357,7 @@ interval in the <a 
href="streaming-programming-guide.html">Spark Streaming Progr
 
 <p>The following table summarizes the characteristics of both types of 
receivers.</p>
 
-<table class="table table-striped">
+<table>
 <thead>
 <tr>
   <th>Receiver Type</th>
diff --git a/site/docs/3.5.0/streaming-programming-guide.html 
b/site/docs/3.5.0/streaming-programming-guide.html
index d139fae8a0..247c09c297 100644
--- a/site/docs/3.5.0/streaming-programming-guide.html
+++ b/site/docs/3.5.0/streaming-programming-guide.html
@@ -541,7 +541,7 @@ Streaming core
 artifact <code class="language-plaintext 
highlighter-rouge">spark-streaming-xyz_2.12</code> to the dependencies. For 
example,
 some of the common ones are as follows.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Source</th><th>Artifact</th></tr></thead>
 <tr><td> Kafka </td><td> spark-streaming-kafka-0-10_2.12 </td></tr>
 <tr><td> Kinesis<br /></td><td>spark-streaming-kinesis-asl_2.12 [Amazon 
Software License] </td></tr>
@@ -916,7 +916,7 @@ that no data will be lost due to any kind of failure. This 
leads to two kinds of
 DStreams support many of the transformations available on normal Spark RDDs.
Some of the common ones are as follows.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th 
style="width:25%">Transformation</th><th>Meaning</th></tr></thead>
 <tr>
   <td> <b>map</b>(<i>func</i>) </td>
@@ -1179,7 +1179,7 @@ operation <code class="language-plaintext 
highlighter-rouge">reduceByKeyAndWindo
 <p>Some of the common window operations are as follows. All of these 
operations take the two parameters mentioned above, <i>windowLength</i> and 
<i>slideInterval</i>.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th 
style="width:25%">Transformation</th><th>Meaning</th></tr></thead>
 <tr>
   <td> <b>window</b>(<i>windowLength</i>, <i>slideInterval</i>) </td>
@@ -1347,7 +1347,7 @@ Since the output operations actually allow the 
transformed data to be consumed b
 they trigger the actual execution of all the DStream transformations (similar 
to actions for RDDs).
 Currently, the following output operations are defined:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th style="width:30%">Output 
Operation</th><th>Meaning</th></tr></thead>
 <tr>
   <td> <b>print</b>()</td>
@@ -2595,7 +2595,7 @@ enabled</a> and reliable receivers, there is zero data 
loss. In terms of semanti
 
 <p>The following table summarizes the semantics under failures:</p>
 
-<table class="table table-striped">
+<table>
   <thead>
   <tr>
     <th style="width:30%">Deployment Scenario</th>
diff --git a/site/docs/3.5.0/structured-streaming-kafka-integration.html 
b/site/docs/3.5.0/structured-streaming-kafka-integration.html
index 46e017361f..21f4e4c0bc 100644
--- a/site/docs/3.5.0/structured-streaming-kafka-integration.html
+++ b/site/docs/3.5.0/structured-streaming-kafka-integration.html
@@ -408,7 +408,7 @@ you can create a Dataset/DataFrame for a defined range of 
offsets.</p>
 </div>
 
 <p>Each row in the source has the following schema:</p>
-<table class="table table-striped">
+<table>
 <thead><tr><th>Column</th><th>Type</th></tr></thead>
 <tr>
   <td>key</td>
@@ -447,7 +447,7 @@ you can create a Dataset/DataFrame for a defined range of 
offsets.</p>
 <p>The following options must be set for the Kafka source
 for both batch and streaming queries.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Option</th><th>Value</th><th>Meaning</th></tr></thead>
 <tr>
   <td>assign</td>
@@ -479,7 +479,7 @@ for both batch and streaming queries.</p>
 
 <p>The following configurations are optional:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Option</th><th>Value</th><th>Default</th><th>Query 
Type</th><th>Meaning</th></tr></thead>
 <tr>
   <td>startingTimestamp</td>
@@ -724,7 +724,7 @@ Because of this, Spark pools Kafka consumers on executors, 
by leveraging Apache
 
 <p>The following properties are available to configure the consumer pool:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td>spark.kafka.consumer.cache.capacity</td>
@@ -774,7 +774,7 @@ Note that it doesn&#8217;t leverage Apache Commons Pool due 
to the difference of
 
 <p>The following properties are available to configure the fetched data 
pool:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td>spark.kafka.consumer.fetchedData.cache.timeout</td>
@@ -802,7 +802,7 @@ solution to remove duplicates when reading the written data 
could be to introduc
 that can be used to perform de-duplication when reading.</p>
 
 <p>The DataFrame being written to Kafka should have the following columns in 
its schema:</p>
-<table class="table table-striped">
+<table>
 <thead><tr><th>Column</th><th>Type</th></tr></thead>
 <tr>
   <td>key (optional)</td>
@@ -841,7 +841,7 @@ will be used.</p>
 <p>The following options must be set for the Kafka sink
 for both batch and streaming queries.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Option</th><th>Value</th><th>Meaning</th></tr></thead>
 <tr>
   <td>kafka.bootstrap.servers</td>
@@ -852,7 +852,7 @@ for both batch and streaming queries.</p>
 
 <p>The following configurations are optional:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Option</th><th>Value</th><th>Default</th><th>Query 
Type</th><th>Meaning</th></tr></thead>
 <tr>
   <td>topic</td>
@@ -1018,7 +1018,7 @@ It will use different Kafka producer when delegation 
token is renewed; Kafka pro
 
 <p>The following properties are available to configure the producer pool:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
 <tr>
   <td>spark.kafka.producer.cache.timeout</td>
@@ -1161,7 +1161,7 @@ must match with Kafka broker configuration.</p>
 
 <p>Delegation tokens can be obtained from multiple clusters and 
<code>${cluster}</code> is an arbitrary unique identifier which helps to group 
different configurations.</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since 
Version</th></tr></thead>
   <tr>
     
<td><code>spark.kafka.clusters.${cluster}.auth.bootstrap.servers</code></td>
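
A brief Scala sketch of the Kafka source options from the tables above
(subscribe plus kafka.bootstrap.servers); the broker addresses and topic name
are placeholders, and the spark-sql-kafka-0-10 package is assumed to be on the
classpath:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("kafka-source-demo")
      .master("local[*]")
      .getOrCreate()

    // Required options for the Kafka source; values are placeholders.
    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "host1:9092,host2:9092")
      .option("subscribe", "topic1")
      .load()

    // key and value arrive as binary columns in the fixed source schema.
    val kv = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    val query = kv.writeStream.format("console").start()
    query.awaitTermination()
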
diff --git a/site/docs/3.5.0/structured-streaming-programming-guide.html 
b/site/docs/3.5.0/structured-streaming-programming-guide.html
index d23eee64dd..a0dc13d805 100644
--- a/site/docs/3.5.0/structured-streaming-programming-guide.html
+++ b/site/docs/3.5.0/structured-streaming-programming-guide.html
@@ -737,7 +737,7 @@ checkpointed offsets after a failure. See the earlier 
section on
 <a href="#fault-tolerance-semantics">fault-tolerance semantics</a>.
 Here are the details of all the sources in Spark.</p>
 
-<table class="table table-striped">
+<table>
   <thead>
   <tr>
     <th>Source</th>
@@ -1953,7 +1953,7 @@ regarding watermark delays and whether data will be 
dropped or not.</p>
 
 <h5 id="support-matrix-for-joins-in-streaming-queries">Support matrix for 
joins in streaming queries</h5>
 
-<table class="table table-striped">
+<table>
 <thead>
   <tr>
     <th>Left Input</th>
@@ -2427,7 +2427,7 @@ to <code class="language-plaintext 
highlighter-rouge">org.apache.spark.sql.execu
 
 <p>Here are the configurations for the RocksDB instance of the state store 
provider:</p>
 
-<table class="table table-striped">
+<table>
   <thead>
   <tr>
     <th>Config Name</th>
@@ -2610,7 +2610,7 @@ More information to be added in future releases.</p>
 <p>Different types of streaming queries support different output modes.
 Here is the compatibility matrix.</p>
 
-<table class="table table-striped">
+<table>
   <thead>
   <tr>
     <th>Query Type</th>
@@ -2748,7 +2748,7 @@ meant for debugging purposes only. See the earlier 
section on
 <a href="#fault-tolerance-semantics">fault-tolerance semantics</a>.
 Here are the details of all the sinks in Spark.</p>
 
-<table class="table table-striped">
+<table>
   <thead>
   <tr>
     <th>Sink</th>
@@ -3334,7 +3334,7 @@ If you need deduplication on output, try out <code 
class="language-plaintext hig
the query is going to be executed as a micro-batch query with a fixed batch 
interval or as a continuous processing query.
Here are the different kinds of triggers that are supported.</p>
 
-<table class="table table-striped">
+<table>
   <thead>
   <tr>
     <th>Trigger Type</th>
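
To tie together the output-mode and trigger matrices from the
structured-streaming-programming-guide hunks above, a small Scala sketch; the
rate source, output mode, and interval are examples only:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.Trigger

    val spark = SparkSession.builder()
      .appName("trigger-demo")
      .master("local[*]")
      .getOrCreate()

    // Built-in rate source, convenient for local experiments.
    val stream = spark.readStream.format("rate").option("rowsPerSecond", "5").load()

    // Append output mode with a fixed-interval micro-batch trigger.
    val query = stream.writeStream
      .outputMode("append")
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))
      .start()

    query.awaitTermination()
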
diff --git a/site/docs/3.5.0/submitting-applications.html 
b/site/docs/3.5.0/submitting-applications.html
index 3af859f084..400c5dc3d4 100644
--- a/site/docs/3.5.0/submitting-applications.html
+++ b/site/docs/3.5.0/submitting-applications.html
@@ -277,7 +277,7 @@ run it with <code class="language-plaintext 
highlighter-rouge">--help</code>. He
 
 <p>The master URL passed to Spark can be in one of the following formats:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>Master URL</th><th>Meaning</th></tr></thead>
 <tr><td> <code>local</code> </td><td> Run Spark locally with one worker thread 
(i.e. no parallelism at all). </td></tr>
 <tr><td> <code>local[K]</code> </td><td> Run Spark locally with K worker 
threads (ideally, set this to the number of cores on your machine). </td></tr>
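
A minimal Scala sketch of passing one of the master URL forms above
programmatically; local[4] is just an example, and the same value could be
given to spark-submit via --master:

    import org.apache.spark.sql.SparkSession

    // Run locally with four worker threads, per the local[K] form above.
    val spark = SparkSession.builder()
      .appName("master-url-demo")
      .master("local[4]")
      .getOrCreate()

    println(spark.sparkContext.master)
    spark.stop()
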
diff --git a/site/docs/3.5.0/web-ui.html b/site/docs/3.5.0/web-ui.html
index 1686a1120e..e43c1104b8 100644
--- a/site/docs/3.5.0/web-ui.html
+++ b/site/docs/3.5.0/web-ui.html
@@ -494,7 +494,7 @@ operator shows the number of bytes written by a shuffle.</p>
 
 <p>Here is the list of SQL metrics:</p>
 
-<table class="table table-striped">
+<table>
 <thead><tr><th>SQL metrics</th><th>Meaning</th><th>Operators</th></tr></thead>
 <tr><td> <code>number of output rows</code> </td><td> the number of output 
rows of the operator </td><td> Aggregate operators, Join operators, Sample, 
Range, Scan operators, Filter, etc.</td></tr>
 <tr><td> <code>data size</code> </td><td> the size of 
broadcast/shuffled/collected data of the operator </td><td> BroadcastExchange, 
ShuffleExchange, Subquery </td></tr>

