This is an automated email from the ASF dual-hosted git repository.
casion pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git
The following commit(s) were added to refs/heads/dev by this push:
new a8ea0f1db2 [ISSUE-3791]Remove data source start service guidance (#569)
a8ea0f1db2 is described below
commit a8ea0f1db2a7c3e26206cf727c9050f21d83002f
Author: aiceflower <[email protected]>
AuthorDate: Sat Dec 3 12:43:05 2022 +0800
[ISSUE-3791]Remove data source start service guidance (#569)
* Remove data source start service guidance
* remove kerberos
* remove ps-cs and cg-engineplugin
---
docs/deployment/deploy-quick.md | 47 +++++++++------------
docs/deployment/images/eureka.png | Bin 0 -> 86788 bytes
.../current/deployment/deploy-quick.md | 44 ++++++++-----------
.../current/deployment/images/eureka.png | Bin 0 -> 86788 bytes
4 files changed, 38 insertions(+), 53 deletions(-)
diff --git a/docs/deployment/deploy-quick.md b/docs/deployment/deploy-quick.md
index 85e888e81d..0c66601ed9 100644
--- a/docs/deployment/deploy-quick.md
+++ b/docs/deployment/deploy-quick.md
@@ -182,13 +182,7 @@ export SERVER_HEAP_SIZE="512M"
##The decompression directory and the installation directory must be different
LINKIS_HOME=/appcom/Install/LinkisInstall
```
-#### Data source service is enabled (optional)
-> If you want to use the data source function, adjust the following according to your actual situation
-```shell script
-#If you want to start the metadata-related microservices, you can set export ENABLE_METADATA_MANAGE=true
-export ENABLE_METADATA_QUERY=true
-```
#### No-HDFS mode deployment (optional, supported in versions >1.1.2)
> Deploy Linkis services in an environment without HDFS to allow more lightweight learning and debugging. Deploying in no-HDFS mode does not support tasks on engines such as hive/spark/flink
@@ -203,7 +197,19 @@ RESULT_SET_ROOT_PATH=file:///tmp/linkis
export ENABLE_HDFS=false
export ENABLE_HIVE=false
export ENABLE_SPARK=false
-````
+```
+
+#### kerberos authentication (optional)
+
+> By default, kerberos authentication is disabled in Linkis. If kerberos authentication is enabled in the hive cluster, you need to set the following parameters:
+
+Modify the `linkis-env.sh` file as follows
+
+```bash
+#HADOOP
+HADOOP_KERBEROS_ENABLE=true
+HADOOP_KEYTAB_PATH=/appcom/keytab/
+```
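Not part of the patch, but as a quick self-contained sanity check of the two settings above, the edited values can be sourced and echoed back. The throwaway file path below is hypothetical; in a real deployment you would source the actual `linkis-env.sh`:

```shell
# Sketch only: write the two kerberos settings from the doc to a throwaway
# copy, source it, and confirm the values took effect
demo=/tmp/linkis-env-demo.sh
cat > "$demo" <<'EOF'
HADOOP_KERBEROS_ENABLE=true
HADOOP_KEYTAB_PATH=/appcom/keytab/
EOF
. "$demo"
echo "kerberos=${HADOOP_KERBEROS_ENABLE} keytab=${HADOOP_KEYTAB_PATH}"
```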
## 3. Install and start
@@ -246,22 +252,14 @@ cp mysql-connector-java-5.1.49.jar {LINKIS_HOME}/lib/linkis-commons/public-modul
### 3.3 Configuration Adjustment (Optional)
> The following operations are related to the dependent environment. According
> to the actual situation, determine whether the operation is required
-#### 3.3.1 kerberos authentication
-If the hive cluster in use has kerberos mode authentication enabled, modify the `${LINKIS_HOME}/conf/linkis.properties` (<=1.1.3) file
-```shell script
-#Append the following configuration
-echo "wds.linkis.keytab.enable=true" >> linkis.properties
-```
-#### 3.3.2 Yarn Authentication
+#### 3.3.1 Yarn Authentication
When executing spark tasks, the ResourceManager of yarn is required; it is controlled by the configuration item `YARN_RESTFUL_URL=http://xx.xx.xx.xx:8088`.
During installation and deployment, the `YARN_RESTFUL_URL=http://xx.xx.xx.xx:8088` information is written to the database table `linkis_cg_rm_external_resource_provider`. By default, access to yarn resources does not require permission verification.
If password authentication is enabled in yarn's ResourceManager, after installation and deployment modify the yarn data generated in the database table `linkis_cg_rm_external_resource_provider`.
For details, please refer to [Check whether the yarn address is configured correctly](#811-Check whether the yarn address is configured correctly)
-
-
-#### 3.3.3 session
+#### 3.3.2 session
If you are upgrading Linkis and deploying DSS or other projects at the same time, but the Linkis version introduced as a dependency in the other software is <1.1.1 (mainly, the linkis-module-x.x.x.jar of the dependent Linkis in the lib package is <1.1.1), you need to modify the `${LINKIS_HOME}/conf/linkis.properties` file
```shell
echo "wds.linkis.session.ticket.key=bdp-user-ticket-id" >> linkis.properties
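A hedged variant of the append above: checking first avoids duplicating the property if the command is run twice. The demo file path is hypothetical; the real target is `${LINKIS_HOME}/conf/linkis.properties`:

```shell
# Sketch: append the session key only if it is not already present
f=/tmp/linkis.properties.demo
: > "$f"                                  # demo file stands in for linkis.properties
grep -q '^wds.linkis.session.ticket.key=' "$f" || \
  echo "wds.linkis.session.ticket.key=bdp-user-ticket-id" >> "$f"
grep -q '^wds.linkis.session.ticket.key=' "$f" || \
  echo "wds.linkis.session.ticket.key=bdp-user-ticket-id" >> "$f"   # second run is a no-op
grep -c '^wds.linkis.session.ticket.key=' "$f"                      # the key appears once
```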
@@ -278,24 +276,19 @@ After the installation is complete, if you need to modify the configuration (bec
### 3.6 Check whether the service starts normally
Visit the eureka service page (http://eurekaip:20303),
-The 1.x.x version will start 8 Linkis microservices by default, and the linkis-cg-engineconn service in the figure below will be started only when tasks are run
-
+Linkis will start 6 microservices by default; the linkis-cg-engineconn service in the figure below is started only when tasks are run
+
```shell script
LINKIS-CG-ENGINECONNMANAGER Engine Management Service
-LINKIS-CG-ENGINEPLUGIN Engine Plugin Management Service
LINKIS-CG-ENTRANCE Computing Governance Entrance Service
LINKIS-CG-LINKISMANAGER Computing Governance Management Service
LINKIS-MG-EUREKA Microservice Registry Service
LINKIS-MG-GATEWAY Gateway Service
-LINKIS-PS-CS Context Service
LINKIS-PS-PUBLICSERVICE Public Service
```
-If the data source service function is enabled (not enabled by default), you will see these two services
-```shell script
-LINKIS-PS-DATA-SOURCE-MANAGER
-LINKIS-PS-METADATAMANAGER
-```
+
+Note: in Linkis 1.3.1, the LINKIS-PS-CS, LINKIS-PS-DATA-SOURCE-MANAGER and LINKIS-PS-METADATAMANAGER services have been merged into LINKIS-PS-PUBLICSERVICE, and the LINKIS-CG-ENGINECONNMANAGER service has been merged into LINKIS-CG-LINKISMANAGER.
If any services are not started, you can view detailed exception logs in the corresponding log/${service name}.log file.
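The log check above can be scripted. A self-contained sketch follows, using a throwaway directory in place of the real `log/` directory and grepping for ERROR as a rough heuristic; the file names and messages are illustrative only:

```shell
# Sketch: list service logs that contain errors (demo dir stands in for log/)
logdir=/tmp/linkis-logs-demo
mkdir -p "$logdir"
echo "INFO  gateway started" > "$logdir/linkis-mg-gateway.log"
echo "ERROR port 9105 already in use" > "$logdir/linkis-ps-publicservice.log"
grep -l "ERROR" "$logdir"/*.log    # prints only the log with an error
```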
@@ -527,7 +520,7 @@ The normal is as follows:
Check whether the material record of the engine exists (if there is an update, check whether the update time is correct).
- If it does not exist or is not updated, first try to manually refresh the material resource (for details, see [Engine Material Resource Refresh](install-engineconn#23-Engine Refresh)).
-- Check the specific reason for the material failure in the `log/linkis-cg-engineplugin.log` log. In many cases, it may be caused by missing permissions on the hdfs directory
+- Check the specific reason for the material failure in the `log/linkis-cg-linkismanager.log` log. In many cases, it may be caused by missing permissions on the hdfs directory
- Check whether the gateway address configuration is correct: the configuration item `wds.linkis.gateway.url` in `conf/linkis.properties`
The engine's material resources are uploaded by default to the hdfs directory `/apps-data/${deployUser}/bml`
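For the gateway check, a minimal sketch of reading the configuration item back out; the demo file path and URL value are hypothetical stand-ins for the real `conf/linkis.properties`:

```shell
# Sketch: confirm wds.linkis.gateway.url is set (demo file stands in for conf/linkis.properties)
f=/tmp/linkis.properties.gwdemo
echo "wds.linkis.gateway.url=http://127.0.0.1:9001" > "$f"
grep '^wds.linkis.gateway.url=' "$f"
```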
diff --git a/docs/deployment/images/eureka.png b/docs/deployment/images/eureka.png
new file mode 100644
index 0000000000..3b3f24a4b0
Binary files /dev/null and b/docs/deployment/images/eureka.png differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
index 892bc577d6..34d0732cad 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
@@ -182,13 +182,7 @@ export SERVER_HEAP_SIZE="512M"
##The decompression directory and the installation directory must be different
LINKIS_HOME=/appcom/Install/LinkisInstall
```
-#### Enable the data source service (optional)
-> If you want to use the data source function, adjust the following according to your actual situation
-```shell script
-#If you want to start the metadata-related microservices, you can set export ENABLE_METADATA_MANAGE=true
-export ENABLE_METADATA_QUERY=true
-```
#### No-HDFS mode deployment (optional, supported in versions >1.1.2)
> Deploy Linkis services in an environment without HDFS to allow more lightweight learning and debugging. Deploying in no-HDFS mode does not support tasks on engines such as hive/spark/flink
@@ -205,6 +199,17 @@ export ENABLE_HIVE=false
export ENABLE_SPARK=false
```
+#### kerberos authentication (optional)
+
+> By default, kerberos authentication is disabled in Linkis. If kerberos authentication is enabled in the hive cluster, you need to set the following parameters.
+
+Modify the `linkis-env.sh` file as follows
+```bash
+#HADOOP
+HADOOP_KERBEROS_ENABLE=true
+HADOOP_KEYTAB_PATH=/appcom/keytab/
+```
+
## 3. Install and start
### 3.1 Run the installation script:
@@ -246,22 +251,14 @@ cp mysql-connector-java-5.1.49.jar {LINKIS_HOME}/lib/linkis-commons/public-modu
### 3.3 Configuration adjustment (optional)
> The following operations depend on the environment; according to the actual situation, determine whether they are needed
-#### 3.3.1 kerberos authentication
-If the hive cluster in use has kerberos mode authentication enabled, modify the `${LINKIS_HOME}/conf/linkis.properties` (<=1.1.3) file
-```shell script
-#Append the following configuration
-echo "wds.linkis.keytab.enable=true" >> linkis.properties
-```
-#### 3.3.2 Yarn Authentication
+#### 3.3.1 Yarn Authentication
When executing spark tasks, the ResourceManager of yarn is required; it is controlled by the configuration item `YARN_RESTFUL_URL=http://xx.xx.xx.xx:8088`.
During installation and deployment, the `YARN_RESTFUL_URL=http://xx.xx.xx.xx:8088` information is written to the database table `linkis_cg_rm_external_resource_provider`. By default, access to yarn resources does not require permission verification.
If password authentication is enabled in yarn's ResourceManager, after installation and deployment modify the yarn data generated in the database table `linkis_cg_rm_external_resource_provider`.
For details, please refer to [Check whether the yarn address is configured correctly](#811-查看yarn地址是否配置正确)
-
-
-#### 3.3.3 session
+#### 3.3.2 session
If you are upgrading Linkis and deploying DSS or other projects at the same time, but the Linkis version introduced as a dependency in the other software is <1.1.1 (mainly, the linkis-module-x.x.x.jar of the dependent Linkis in the lib package is <1.1.1), you need to modify the `${LINKIS_HOME}/conf/linkis.properties` file
```shell
echo "wds.linkis.session.ticket.key=bdp-user-ticket-id" >> linkis.properties
@@ -278,24 +275,19 @@ sh sbin/linkis-start-all.sh
### 3.6 Check whether the services started normally
Visit the eureka service page (http://eurekaip:20303),
-In version 1.x.x, 8 Linkis microservices are started by default; the linkis-cg-engineconn service in the figure below is started only when tasks are run
-
+By default, 6 Linkis microservices are started; the linkis-cg-engineconn service in the figure below is started only when tasks are run
+
```shell script
LINKIS-CG-ENGINECONNMANAGER Engine Management Service
-LINKIS-CG-ENGINEPLUGIN Engine Plugin Management Service
LINKIS-CG-ENTRANCE Computing Governance Entrance Service
LINKIS-CG-LINKISMANAGER Computing Governance Management Service
LINKIS-MG-EUREKA Microservice Registry Service
LINKIS-MG-GATEWAY Gateway Service
-LINKIS-PS-CS Context Service
LINKIS-PS-PUBLICSERVICE Public Service
```
-If the data source service function is enabled (not enabled by default), you will see these two services
-```shell script
-LINKIS-PS-DATA-SOURCE-MANAGER
-LINKIS-PS-METADATAMANAGER
-```
+
+Note: in Linkis 1.3.1, the LINKIS-PS-CS, LINKIS-PS-DATA-SOURCE-MANAGER and LINKIS-PS-METADATAMANAGER services have been merged into LINKIS-PS-PUBLICSERVICE, and the LINKIS-CG-ENGINECONNMANAGER service has been merged into LINKIS-CG-LINKISMANAGER.
If any services are not started, you can view detailed exception logs in the corresponding log/${service name}.log file.
@@ -527,7 +519,7 @@ select * from linkis_cg_engine_conn_plugin_bml_resources
Check whether the material record of the engine exists (if there is an update, check whether the update time is correct).
- If it does not exist or is not updated, first try to manually refresh the material resource (for details, see [Engine material resource refresh](install-engineconn#23-引擎刷新)).
-- Check the specific reason for the material failure in the `log/linkis-cg-engineplugin.log` log; in many cases it may be caused by missing permissions on the hdfs directory
+- Check the specific reason for the material failure in the `log/linkis-cg-linkismanager.log` log; in many cases it may be caused by missing permissions on the hdfs directory
- Check whether the gateway address configuration is correct: the configuration item `wds.linkis.gateway.url` in `conf/linkis.properties`
The engine's material resources are uploaded by default to the hdfs directory `/apps-data/${deployUser}/bml`
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/images/eureka.png b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/images/eureka.png
new file mode 100644
index 0000000000..3b3f24a4b0
Binary files /dev/null and b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/images/eureka.png differ
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]