This is an automated email from the ASF dual-hosted git repository.

bowenliang pushed a commit to branch https-link
in repository https://gitbox.apache.org/repos/asf/kyuubi.git

commit 2370f4bfcc405ea5ce380698e6f4abcca66f4953
Author: liangbowen <liangbo...@gf.com.cn>
AuthorDate: Thu Feb 2 22:21:22 2023 +0800

    prefer https URLs in docs
---
 docs/appendix/terminology.md               |  2 +-
 docs/client/cli/hive_beeline.rst           |  2 +-
 docs/deployment/engine_on_kubernetes.md    |  8 ++++----
 docs/deployment/engine_on_yarn.md          | 10 +++++-----
 docs/deployment/hive_metastore.md          |  8 ++++----
 docs/deployment/settings.md                | 10 +++++-----
 docs/develop_tools/building.md             |  2 +-
 docs/develop_tools/testing.md              |  4 ++--
 docs/extensions/engines/spark/lineage.md   |  2 +-
 docs/make.bat                              |  2 +-
 docs/monitor/logging.md                    |  2 +-
 docs/overview/architecture.md              |  4 ++--
 docs/overview/kyuubi_vs_hive.md            |  2 +-
 docs/overview/kyuubi_vs_thriftserver.md    |  2 +-
 docs/security/authorization/spark/build.md |  2 +-
 docs/security/kinit.md                     |  2 +-
 16 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/docs/appendix/terminology.md b/docs/appendix/terminology.md
index 21b8cb1b6..b81fa25fe 100644
--- a/docs/appendix/terminology.md
+++ b/docs/appendix/terminology.md
@@ -139,7 +139,7 @@ Kyuubi unifies DataLake & LakeHouse access in the simplest pure SQL way, meanwhi
 
 <p align=right>
 <em>
-<a href="http://iceberg.apache.org/";>http://iceberg.apache.org/</a>
+<a href="https://iceberg.apache.org/";>https://iceberg.apache.org/</a>
 </em>
 </p>
 
diff --git a/docs/client/cli/hive_beeline.rst b/docs/client/cli/hive_beeline.rst
index fda925aa1..f75e00819 100644
--- a/docs/client/cli/hive_beeline.rst
+++ b/docs/client/cli/hive_beeline.rst
@@ -17,7 +17,7 @@ Hive Beeline
 ============
 
 Kyuubi supports Apache Hive beeline that works with Kyuubi server.
-Hive beeline is a `SQLLine CLI <http://sqlline.sourceforge.net/>`_ based on the `Hive JDBC Driver <../jdbc/hive_jdbc.html>`_.
+Hive beeline is a `SQLLine CLI <https://sqlline.sourceforge.net/>`_ based on the `Hive JDBC Driver <../jdbc/hive_jdbc.html>`_.
 
 Prerequisites
 -------------
diff --git a/docs/deployment/engine_on_kubernetes.md b/docs/deployment/engine_on_kubernetes.md
index ae8edcb75..44fca1602 100644
--- a/docs/deployment/engine_on_kubernetes.md
+++ b/docs/deployment/engine_on_kubernetes.md
@@ -21,7 +21,7 @@
 
 When you want to run Kyuubi's Spark SQL engines on Kubernetes, you'd better have cognition upon the following things.
 
-* Read about [Running Spark On Kubernetes](http://spark.apache.org/docs/latest/running-on-kubernetes.html)
+* Read about [Running Spark On Kubernetes](https://spark.apache.org/docs/latest/running-on-kubernetes.html)
 * An active Kubernetes cluster
 * [Kubectl](https://kubernetes.io/docs/reference/kubectl/overview/)
 * KubeConfig of the target cluster
@@ -97,7 +97,7 @@ As it known to us all, Kubernetes can use configurations to mount volumes into d
 * persistentVolumeClaim: mounts a PersistentVolume into a pod.
 
 Note: Please
-see [the Security section of this document](http://spark.apache.org/docs/latest/running-on-kubernetes.html#security) for security issues related to volume mounts.
+see [the Security section of this document](https://spark.apache.org/docs/latest/running-on-kubernetes.html#security) for security issues related to volume mounts.
 
 ```
 spark.kubernetes.driver.volumes.<type>.<name>.options.path=<dist_path>
@@ -107,7 +107,7 @@ spark.kubernetes.executor.volumes.<type>.<name>.options.path=<dist_path>
 spark.kubernetes.executor.volumes.<type>.<name>.mount.path=<container_path>
 ```
 
-Read [Using Kubernetes Volumes](http://spark.apache.org/docs/latest/running-on-kubernetes.html#using-kubernetes-volumes) for more about volumes.
+Read [Using Kubernetes Volumes](https://spark.apache.org/docs/latest/running-on-kubernetes.html#using-kubernetes-volumes) for more about volumes.
 
 ### PodTemplateFile
 
@@ -117,4 +117,4 @@ To do so, specify the spark properties `spark.kubernetes.driver.podTemplateFile`
 
 ### Other
 
-You can read Spark's official documentation for [Running on Kubernetes](http://spark.apache.org/docs/latest/running-on-kubernetes.html) for more information.
+You can read Spark's official documentation for [Running on Kubernetes](https://spark.apache.org/docs/latest/running-on-kubernetes.html) for more information.
diff --git a/docs/deployment/engine_on_yarn.md b/docs/deployment/engine_on_yarn.md
index cb5bdd9e0..6812afa46 100644
--- a/docs/deployment/engine_on_yarn.md
+++ b/docs/deployment/engine_on_yarn.md
@@ -23,11 +23,11 @@
 
 When you want to deploy Kyuubi's Spark SQL engines on YARN, you'd better have cognition upon the following things.
 
-- Knowing the basics about [Running Spark on YARN](http://spark.apache.org/docs/latest/running-on-yarn.html)
+- Knowing the basics about [Running Spark on YARN](https://spark.apache.org/docs/latest/running-on-yarn.html)
 - A binary distribution of Spark which is built with YARN support
   - You can use the built-in Spark distribution
   - You can get it from [Spark official website](https://spark.apache.org/downloads.html) directly
-  - You can [Build Spark](http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn) with `-Pyarn` maven option
+  - You can [Build Spark](https://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn) with `-Pyarn` maven option
 - An active [Apache Hadoop YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html) cluster
 - An active Apache Hadoop HDFS cluster
 - Setup Hadoop client configurations at the machine the Kyuubi server locates
@@ -92,7 +92,7 @@ and how many cpus and memory will Spark driver, ApplicationMaster and each execu
 | spark.executor.memory         | 1g                                         | Amount of memory to use for the executor process                                                                                                                          |
 | spark.executor.memoryOverhead | executorMemory * 0.10, with minimum of 384 | Amount of additional memory to be allocated per executor process. This is memory that accounts for things like VM overheads, interned strings other native overheads, etc |
 
-It is recommended to use [Dynamic Allocation](http://spark.apache.org/docs/3.0.1/configuration.html#dynamic-allocation) with Kyuubi,
+It is recommended to use [Dynamic Allocation](https://spark.apache.org/docs/3.0.1/configuration.html#dynamic-allocation) with Kyuubi,
 since the SQL engine will be long-running for a period, execute user's queries from clients periodically,
 and the demand for computing resources is not the same for those queries.
 It is better for Spark to release some executors when either the query is lightweight, or the SQL engine is being idled.
@@ -104,11 +104,11 @@ which allows YARN to cache it on nodes so that it doesn't need to be distributed
 
 ##### Others
 
-Please refer to [Spark properties](http://spark.apache.org/docs/latest/running-on-yarn.html#spark-properties) to check other acceptable configs.
+Please refer to [Spark properties](https://spark.apache.org/docs/latest/running-on-yarn.html#spark-properties) to check other acceptable configs.
 
 ### Kerberos
 
-Kyuubi currently does not support Spark's [YARN-specific Kerberos Configuration](http://spark.apache.org/docs/3.0.1/running-on-yarn.html#kerberos),
+Kyuubi currently does not support Spark's [YARN-specific Kerberos Configuration](https://spark.apache.org/docs/3.0.1/running-on-yarn.html#kerberos),
 so `spark.kerberos.keytab` and `spark.kerberos.principal` should not use now.
 
 Instead, you can schedule a periodically `kinit` process via `crontab` task on the local machine that hosts Kyuubi server or simply use [Kyuubi Kinit](settings.html#kinit).
diff --git a/docs/deployment/hive_metastore.md b/docs/deployment/hive_metastore.md
index f3a24d897..f60465a1a 100644
--- a/docs/deployment/hive_metastore.md
+++ b/docs/deployment/hive_metastore.md
@@ -30,7 +30,7 @@ In this section, you will learn how to configure Kyuubi to interact with Hive Me
 - A Spark binary distribution built with `-Phive` support
   - Use the built-in one in the Kyuubi distribution
   - Download from [Spark official website](https://spark.apache.org/downloads.html)
-  - Build from Spark source, [Building With Hive and JDBC Support](http://spark.apache.org/docs/latest/building-spark.html#building-with-hive-and-jdbc-support)
+  - Build from Spark source, [Building With Hive and JDBC Support](https://spark.apache.org/docs/latest/building-spark.html#building-with-hive-and-jdbc-support)
 - A copy of Hive client configuration
 
 So the whole thing here is to let Spark applications use this copy of Hive configuration to start a Hive metastore client for their own to talk to the Hive metastore server.
@@ -199,13 +199,13 @@ Caused by: org.apache.thrift.TApplicationException: Invalid method name: 'get_ta
        ... 93 more
 ```
 
-To prevent this problem, we can use Spark's [Interacting with Different Versions of Hive Metastore](http://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html#interacting-with-different-versions-of-hive-metastore).
+To prevent this problem, we can use Spark's [Interacting with Different Versions of Hive Metastore](https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html#interacting-with-different-versions-of-hive-metastore).
 
 ## Further Readings
 
 - Hive Wiki
   - [Hive Metastore Administration](https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+Administration)
 - Spark Online Documentation
-  - [Custom Hadoop/Hive Configuration](http://spark.apache.org/docs/latest/configuration.html#custom-hadoophive-configuration)
-  - [Hive Tables](http://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html)
+  - [Custom Hadoop/Hive Configuration](https://spark.apache.org/docs/latest/configuration.html#custom-hadoophive-configuration)
+  - [Hive Tables](https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html)
 
diff --git a/docs/deployment/settings.md b/docs/deployment/settings.md
index f8beaa83b..880bc774a 100644
--- a/docs/deployment/settings.md
+++ b/docs/deployment/settings.md
@@ -522,7 +522,7 @@ You can configure the Kyuubi properties in `$KYUUBI_HOME/conf/kyuubi-defaults.co
 
 ### Via spark-defaults.conf
 
-Setting them in `$SPARK_HOME/conf/spark-defaults.conf` supplies with default values for SQL engine application. Available properties can be found at Spark official online documentation for [Spark Configurations](http://spark.apache.org/docs/latest/configuration.html)
+Setting them in `$SPARK_HOME/conf/spark-defaults.conf` supplies with default values for SQL engine application. Available properties can be found at Spark official online documentation for [Spark Configurations](https://spark.apache.org/docs/latest/configuration.html)
 
 ### Via kyuubi-defaults.conf
 
@@ -533,13 +533,13 @@ Setting them in `$KYUUBI_HOME/conf/kyuubi-defaults.conf` supplies with default v
 Setting them in the JDBC Connection URL supplies session-specific for each SQL engine. For example: ```jdbc:hive2://localhost:10009/default;#spark.sql.shuffle.partitions=2;spark.executor.memory=5g```
 
 - **Runtime SQL Configuration**
-  - For [Runtime SQL Configurations](http://spark.apache.org/docs/latest/configuration.html#runtime-sql-configuration), they will take affect every time
+  - For [Runtime SQL Configurations](https://spark.apache.org/docs/latest/configuration.html#runtime-sql-configuration), they will take affect every time
 - **Static SQL and Spark Core Configuration**
-  - For [Static SQL Configurations](http://spark.apache.org/docs/latest/configuration.html#static-sql-configuration) and other spark core configs, e.g. `spark.executor.memory`, they will take effect if there is no existing SQL engine application. Otherwise, they will just be ignored
+  - For [Static SQL Configurations](https://spark.apache.org/docs/latest/configuration.html#static-sql-configuration) and other spark core configs, e.g. `spark.executor.memory`, they will take effect if there is no existing SQL engine application. Otherwise, they will just be ignored
 
 ### Via SET Syntax
 
-Please refer to the Spark official online documentation for [SET Command](http://spark.apache.org/docs/latest/sql-ref-syntax-aux-conf-mgmt-set.html)
+Please refer to the Spark official online documentation for [SET Command](https://spark.apache.org/docs/latest/sql-ref-syntax-aux-conf-mgmt-set.html)
 
 ## Flink Configurations
 
@@ -641,7 +641,7 @@ Kyuubi uses [log4j](https://logging.apache.org/log4j/2.x/) for logging. You can
 
 ### Hadoop Configurations
 
-Specifying `HADOOP_CONF_DIR` to the directory containing Hadoop configuration files or treating them as Spark properties with a `spark.hadoop.` prefix. Please refer to the Spark official online documentation for [Inheriting Hadoop Cluster Configuration](http://spark.apache.org/docs/latest/configuration.html#inheriting-hadoop-cluster-configuration). Also, please refer to the [Apache Hadoop](http://hadoop.apache.org)'s online documentation for an overview on how to configure Hadoop.
+Specifying `HADOOP_CONF_DIR` to the directory containing Hadoop configuration files or treating them as Spark properties with a `spark.hadoop.` prefix. Please refer to the Spark official online documentation for [Inheriting Hadoop Cluster Configuration](https://spark.apache.org/docs/latest/configuration.html#inheriting-hadoop-cluster-configuration). Also, please refer to the [Apache Hadoop](https://hadoop.apache.org)'s online documentation for an overview on how to configure Hadoop.
 
 ### Hive Configurations
 
diff --git a/docs/develop_tools/building.md b/docs/develop_tools/building.md
index 9dfc01f42..d4582dc8d 100644
--- a/docs/develop_tools/building.md
+++ b/docs/develop_tools/building.md
@@ -19,7 +19,7 @@
 
 ## Building Kyuubi with Apache Maven
 
-**Kyuubi** is built based on [Apache Maven](http://maven.apache.org),
+**Kyuubi** is built based on [Apache Maven](https://maven.apache.org),
 
 ```bash
 ./build/mvn clean package -DskipTests
diff --git a/docs/develop_tools/testing.md b/docs/develop_tools/testing.md
index 48a2e9787..3e63aa1a2 100644
--- a/docs/develop_tools/testing.md
+++ b/docs/develop_tools/testing.md
@@ -17,8 +17,8 @@
 
 # Running Tests
 
-**Kyuubi** can be tested based on [Apache Maven](http://maven.apache.org) and the ScalaTest Maven Plugin,
-please refer to the [ScalaTest documentation](http://www.scalatest.org/user_guide/using_the_scalatest_maven_plugin),
+**Kyuubi** can be tested based on [Apache Maven](https://maven.apache.org) and the ScalaTest Maven Plugin,
+please refer to the [ScalaTest documentation](https://www.scalatest.org/user_guide/using_the_scalatest_maven_plugin),
 
 ## Running Tests Fully
 
diff --git a/docs/extensions/engines/spark/lineage.md b/docs/extensions/engines/spark/lineage.md
index 1ef28c173..8f2f76c9f 100644
--- a/docs/extensions/engines/spark/lineage.md
+++ b/docs/extensions/engines/spark/lineage.md
@@ -97,7 +97,7 @@ Currently supported column lineage for spark's `Command` and `Query` type:
 
 ### Build with Apache Maven
 
-Kyuubi Spark Lineage Listener Extension is built using [Apache Maven](http://maven.apache.org).
+Kyuubi Spark Lineage Listener Extension is built using [Apache Maven](https://maven.apache.org).
 To build it, `cd` to the root direct of kyuubi project and run:
 
 ```shell
diff --git a/docs/make.bat b/docs/make.bat
index 1f441aefc..39586a7af 100644
--- a/docs/make.bat
+++ b/docs/make.bat
@@ -38,7 +38,7 @@ if errorlevel 9009 (
        echo.may add the Sphinx directory to PATH.
        echo.
        echo.If you don't have Sphinx installed, grab it from
-       echo.http://sphinx-doc.org/
+       echo.https://www.sphinx-doc.org
        exit /b 1
 )
 
diff --git a/docs/monitor/logging.md b/docs/monitor/logging.md
index 8d373f5a9..24a5a88d6 100644
--- a/docs/monitor/logging.md
+++ b/docs/monitor/logging.md
@@ -265,5 +265,5 @@ You will both get the final results and the corresponding operation logs telling
 - [Monitoring Kyuubi - Server Metrics](metrics.md)
 - [Trouble Shooting](trouble_shooting.md)
 - Spark Online Documentation
-  - [Monitoring and Instrumentation](http://spark.apache.org/docs/latest/monitoring.html)
+  - [Monitoring and Instrumentation](https://spark.apache.org/docs/latest/monitoring.html)
 
diff --git a/docs/overview/architecture.md b/docs/overview/architecture.md
index ec4dc0d8d..4df5e24a4 100644
--- a/docs/overview/architecture.md
+++ b/docs/overview/architecture.md
@@ -107,7 +107,7 @@ and these applications can be placed in different shared domains for other conne
 Kyuubi does not occupy any resources from the Cluster Manager(e.g. Yarn) during startup and will give all resources back if there
 is not any active session interacting with a `SparkContext`.
 
-Spark also provides [Dynamic Resource Allocation](http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation) to dynamically adjust the resources your application occupies based on the workload. It means
+Spark also provides [Dynamic Resource Allocation](https://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation) to dynamically adjust the resources your application occupies based on the workload. It means
 that your application may give resources back to the cluster if they are no longer used and request them again later when
 there is demand. This feature is handy if multiple applications share resources in your Spark cluster.
 
@@ -172,5 +172,5 @@ We also create a [Submarine: Spark Security](https://mvnrepository.com/artifact/
 
 ## Conclusions
 
-Kyuubi is a unified multi-tenant JDBC interface for large-scale data processing and analytics, built on top of [Apache Spark™](http://spark.apache.org/).
+Kyuubi is a unified multi-tenant JDBC interface for large-scale data processing and analytics, built on top of [Apache Spark™](https://spark.apache.org/).
 It extends the Spark Thrift Server's scenarios in enterprise applications, the most important of which is multi-tenancy support.
diff --git a/docs/overview/kyuubi_vs_hive.md b/docs/overview/kyuubi_vs_hive.md
index f69215240..52e38b3a1 100644
--- a/docs/overview/kyuubi_vs_hive.md
+++ b/docs/overview/kyuubi_vs_hive.md
@@ -41,7 +41,7 @@ have multiple reducer stages.
 | ** Engine **                   | up to Spark 3.x                                                                 | MapReduce/[up to Spark 2.3](https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started#HiveonSpark:GettingStarted-VersionCompatibility)/Tez |
 | ** Performance **              | High                                                                            | Low                                         |
 | ** Compatibility with Spark ** | Good                                                                            | Bad(need to rebuild on a specific version)  |
-| ** Data Types **               | [Spark Data Types](http://spark.apache.org/docs/latest/sql-ref-datatypes.html) | [Hive Data Types](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types) |
+| ** Data Types **               | [Spark Data Types](https://spark.apache.org/docs/latest/sql-ref-datatypes.html) | [Hive Data Types](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types) |
 
 ## Performance
 
diff --git a/docs/overview/kyuubi_vs_thriftserver.md b/docs/overview/kyuubi_vs_thriftserver.md
index 00a03c3b2..66f900c74 100644
--- a/docs/overview/kyuubi_vs_thriftserver.md
+++ b/docs/overview/kyuubi_vs_thriftserver.md
@@ -19,7 +19,7 @@
 
 ## Introductions
 
-The Apache Spark [Thrift JDBC/ODBC Server](http://spark.apache.org/docs/latest/sql-distributed-sql-engine.html) is a Thrift service implemented by the Apache Spark community based on HiveServer2.
+The Apache Spark [Thrift JDBC/ODBC Server](https://spark.apache.org/docs/latest/sql-distributed-sql-engine.html) is a Thrift service implemented by the Apache Spark community based on HiveServer2.
 Designed to be seamlessly compatible with HiveServer2, it provides Spark SQL capabilities to end-users in a pure SQL way through a JDBC interface.
 This "out-of-the-box" model minimizes the barriers and costs for users to use Spark.
 
diff --git a/docs/security/authorization/spark/build.md b/docs/security/authorization/spark/build.md
index 2756cc356..3886f08df 100644
--- a/docs/security/authorization/spark/build.md
+++ b/docs/security/authorization/spark/build.md
@@ -19,7 +19,7 @@
 
 ## Build with Apache Maven
 
-Kyuubi Spark AuthZ Plugin is built using [Apache Maven](http://maven.apache.org).
+Kyuubi Spark AuthZ Plugin is built using [Apache Maven](https://maven.apache.org).
 To build it, `cd` to the root direct of kyuubi project and run:
 
 ```shell
diff --git a/docs/security/kinit.md b/docs/security/kinit.md
index e9dfbc491..0d613e000 100644
--- a/docs/security/kinit.md
+++ b/docs/security/kinit.md
@@ -104,5 +104,5 @@ hadoop.proxyuser.<user name in principal>.hosts *
 ## Further Readings
 
 - [Hadoop in Secure Mode](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html)
-- [Use Kerberos for authentication in Spark](http://spark.apache.org/docs/latest/security.html#kerberos)
+- [Use Kerberos for authentication in Spark](https://spark.apache.org/docs/latest/security.html#kerberos)
 
