This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/linkis-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new a7d90ee72c9 Create 2024-01-26-linkis1.3.0-adaptation-Huawei MRS-share .md (#786)
a7d90ee72c9 is described below

commit a7d90ee72c9ecfcf41e56e08bd8e616b0513652b
Author: livi12138 <156271765+livi12...@users.noreply.github.com>
AuthorDate: Wed Feb 21 19:33:54 2024 +0800

    Create 2024-01-26-linkis1.3.0-adaptation-Huawei MRS-share .md (#786)
    
    * Create 2024-01-26-linkis1.3.0-adaptation-Huawei MRS-share .md
    
    Linkis experience share
    
    * Linkis 1.3.0 English version submission
    
    * Update authors
    
    * Add author
    
    * Update 2024-01-26-linkis1.3.0-adaptation-Huawei MRS-share .md
    
    * Author update
    
    * Revert author update
    
    * Update authors.yml
    
    * Update authors.yml
    
    * Update authors.yml
    
    * Update authors.yml
    
    * Update authors.yml
    
    * Rename file
    
    ---------
    
    Co-authored-by: peacewong <peacew...@apache.org>
---
 ...-01-26-linkis130-adaptation-Huawei-MRS-share.md | 228 +++++++++++++++++++++
 blog/authors.yml                                   |   8 +-
 ...-01-26-linkis130-adaptation-Huawei-MRS-share.md | 228 +++++++++++++++++++++
 .../docusaurus-plugin-content-blog/authors.yml     |  13 +-
 4 files changed, 472 insertions(+), 5 deletions(-)

diff --git a/blog/2024-01-26-linkis130-adaptation-Huawei-MRS-share.md b/blog/2024-01-26-linkis130-adaptation-Huawei-MRS-share.md
new file mode 100644
index 00000000000..4fddbbf4723
--- /dev/null
+++ b/blog/2024-01-26-linkis130-adaptation-Huawei-MRS-share.md
@@ -0,0 +1,228 @@
+---
+title: Hands-on Experience of Adapting Linkis 1.3.0 to Huawei MRS + Scriptis
+authors: [livi12138]
+tags: [blog,linkis1.3.0,hadoop3.1.1,spark3.1.1,hive3.1.0]
+---
+## Overview
+  Our team needed to analyze data on one page using both SQL and Python syntax. While investigating, we found that Linkis could meet this need. Since we use Huawei MRS, which differs from the open-source distributions,
+we also carried out secondary development and adaptation. This article shares that experience, hoping to help anyone with similar needs.
+  
+
+## Environment and versions
+- JDK-1.8.0_112, Maven-3.5.2
+- Hadoop-3.1.1, Spark-3.1.1, Hive-3.1.0, ZooKeeper-3.5.9 (Huawei MRS versions)
+- Linkis-1.3.0
+- Scriptis-Web 1.1.0
+
+## Dependency adjustment and packaging
+   First download the 1.3.0 source code from the Linkis official website, then adjust the dependency versions
+#### Adjust the pom file at the outermost layer of Linkis
+
+```xml
+<hadoop.version>3.1.1</hadoop.version>
+<zookeeper.version>3.5.9</zookeeper.version>
+<curator.version>4.2.0</curator.version>
+<guava.version>30.0-jre</guava.version>
+<json4s.version>3.7.0-M5</json4s.version>
+<scala.version>2.12.15</scala.version>
+<scala.binary.version>2.12</scala.binary.version>
+```
+#### The pom file of linkis-engineplugin-hive
+
+```xml
+<hive.version>3.1.2</hive.version>
+```
+
+#### The pom file of linkis-engineplugin-spark
+
+```xml
+<spark.version>3.1.1</spark.version>
+```
+#### The pom file of linkis-hadoop-common
+```xml
+<dependency>
+        <groupId>org.apache.hadoop</groupId>
+        <artifactId>hadoop-hdfs</artifactId>  <!-- Only this line needs to be replaced, with <artifactId>hadoop-hdfs-client</artifactId> -->
+        <version>${hadoop.version}</version>
+</dependency>
+Modify hadoop-hdfs to:
+ <dependency>
+        <groupId>org.apache.hadoop</groupId>
+        <artifactId>hadoop-hdfs-client</artifactId>
+        <version>${hadoop.version}</version>
+</dependency>
+```
+#### linkis-label-common
+org.apache.linkis.manager.label.conf.LabelCommonConfig
+Modify the default versions, to make the self-compiled scheduling components easier to use later
+```
+    public static final CommonVars<String> SPARK_ENGINE_VERSION =
+            CommonVars.apply("wds.linkis.spark.engine.version", "3.1.1");
+
+    public static final CommonVars<String> HIVE_ENGINE_VERSION =
+            CommonVars.apply("wds.linkis.hive.engine.version", "3.1.2");
+```
+
+#### linkis-computation-governance-common
+org.apache.linkis.governance.common.conf.GovernanceCommonConf
+Modify the default versions, to make the self-compiled scheduling components easier to use later
+
+```
+  val SPARK_ENGINE_VERSION = CommonVars("wds.linkis.spark.engine.version", "3.1.1")
+
+  val HIVE_ENGINE_VERSION = CommonVars("wds.linkis.hive.engine.version", "3.1.2")
+```
+
+#### Compilation
+
+After the above configuration is adjusted, you can start the full compilation by executing the following commands in order
+
+```shell
+    cd linkis-x.x.x
+    mvn -N install
+    mvn clean install -DskipTests
+```
+
+#### Compilation errors
+
+- If an error occurs during compilation, try entering the failing module and compiling it alone to see whether the error reproduces, then adjust according to the specific error
+- Since Linkis is written in Scala, it is recommended to set up a Scala environment first to make reading the source code easier
+- Jar package conflicts are the most common problem, especially after upgrading Hadoop; adjust the dependency versions patiently (a diagnostic sketch follows this list)
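+
+As a quick way to locate a conflicting jar, a Maven dependency tree filtered to the suspect artifact usually shows which module pulls it in. The artifact and module names below are illustrative, not taken from the original article:
+
+```shell
+# Show where a suspect artifact (e.g. guava) comes from, across modules
+mvn dependency:tree -Dincludes=com.google.guava:guava
+
+# Narrow the check to the single module that fails to compile
+mvn -pl linkis-hadoop-common dependency:tree -Dincludes=org.apache.hadoop
+```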
+
+#### The pom file of DataSphereStudio
+
+Since we upgraded the Scala version, an error is reported at deployment time: engineplugin fails to start, and dss-gateway-support-1.1.0 reports
+conn to bml now exit java.net.SocketException: Connection reset. The Scala version needs to be modified here and DSS recompiled.
+1. Delete the old dss-gateway-support jar package.
+2. Change the Scala version in DSS 1.1.0 to 2.12 and recompile to get the new dss-gateway-support-1.1.0.jar, then use it to replace the original jar under linkis_installhome/lib/linkis-spring-cloud-service/linkis-mg-gateway (a sketch of the swap follows the xml below).
+```xml
+<!-- keep the Scala environment consistent -->
+<scala.version>2.12.15</scala.version>
+```
+Adjusting the dependency versions as above solves most problems; if problems remain, examine the corresponding logs carefully and adjust.
+If a complete package can be compiled, the full compilation of Linkis is done and it can be deployed.
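+
+A minimal sketch of the jar swap, assuming the default install layout; the install root variable and the source path of the rebuilt jar are assumptions to adapt:
+
+```shell
+# Remove the jar built against the old Scala version (path is illustrative)
+rm ${LINKIS_INSTALL_HOME}/lib/linkis-spring-cloud-service/linkis-mg-gateway/dss-gateway-support-1.1.0.jar
+# Copy in the jar recompiled against Scala 2.12 (source path is illustrative)
+cp dss-gateway/dss-gateway-support/target/dss-gateway-support-1.1.0.jar \
+   ${LINKIS_INSTALL_HOME}/lib/linkis-spring-cloud-service/linkis-mg-gateway/
+```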
+
+## Deployment
+
+- To give the engine nodes sufficient resources to execute scripts, we adopted a multi-server deployment; the general structure is as follows
+- SLB, 1 instance: load balancing with round-robin
+- ECS-WEB, 2 instances: Nginx, static resource deployment, backend proxy forwarding
+- ECS-APP, 2 instances: microservice governance, computation governance, public enhancement and other nodes
+- ECS-APP, 4 instances: EngineConnManager nodes
+
+### linkis deployment
+
+- Although we deployed on multiple nodes, we did not strip the code apart; we kept the full package on each server and only modified the startup script so that each node starts just the services it needs (a sketch follows the link below)
+
+Refer to the single-machine deployment example on the official website: https://linkis.apache.org/zh-CN/docs/1.3.0/deployment/deploy-quick
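+
+A sketch of starting only selected services, assuming the linkis-daemon.sh launcher shipped in the install package (an assumption; service names may differ slightly by version):
+
+```shell
+# On an EngineConnManager-only node
+sh ${LINKIS_HOME}/sbin/linkis-daemon.sh start cg-engineconnmanager
+
+# On a gateway / public-service node
+sh ${LINKIS_HOME}/sbin/linkis-daemon.sh start mg-gateway
+sh ${LINKIS_HOME}/sbin/linkis-daemon.sh start ps-publicservice
+```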
+
+#### Notes on Linkis deployment
+- 1. Deployment user: the startup user of the Linkis core processes. This user is also given administrator permissions by default, and the corresponding administrator login password is generated during deployment, located in the conf/linkis-mg-gateway.properties file. Linkis supports specifying the submitting and executing user. The main Linkis process services switch to the corresponding user via sudo -u ${linkis-user} and then execute the corresponding engine startup command, so the user owning the linkis-engine process is the executor of the task
+- This user is by default the submitter and executor of tasks; if you want to change it to the login user, you need to modify the corresponding submit method of the
+org.apache.linkis.entrance.restful.EntranceRestfulApi class:
+json.put(TaskConstant.EXECUTE_USER, ModuleUserUtils.getOperationUser(req));
+json.put(TaskConstant.SUBMIT_USER, SecurityFilter.getLoginUsername(req));
+Change the submit user and execute user set above to the user logged in on the Scriptis page
+- 2. sudo -u ${linkis-user} switches to the corresponding user; if you use the login user, this command may fail, and the command here needs to be modified.
+- org.apache.linkis.ecm.server.operator.EngineConnYarnLogOperator.sudoCommands
+```scala
+private def sudoCommands(creator: String, command: String): Array[String] = {
+    Array(
+      "/bin/bash",
+      "-c",
+      "sudo su " + creator + " -c \"source ~/.bashrc 2>/dev/null; " + command + "\""
+    )
+  }
+  // change to:
+  private def sudoCommands(creator: String, command: String): Array[String] = {
+    Array(
+      "/bin/bash",
+      "-c",
+      "\"source ~/.bashrc 2>/dev/null; " + command + "\""
+    )
+  }
+```
+- 3. The MySQL driver package must be copied to /lib/linkis-commons/public-module/ and /lib/linkis-spring-cloud-services/linkis-mg-gateway/ (a sketch follows)
+
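+A sketch of the copy, assuming a ${LINKIS_HOME} layout and a driver jar name that are both illustrative:
+
+```shell
+cp mysql-connector-java-8.0.28.jar ${LINKIS_HOME}/lib/linkis-commons/public-module/
+cp mysql-connector-java-8.0.28.jar ${LINKIS_HOME}/lib/linkis-spring-cloud-services/linkis-mg-gateway/
+```
+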
+- 4. By default, a static username and password are used. The static user is the deployment user; the static password is a random string generated during deployment, stored in ${LINKIS_HOME}/conf/linkis-mg-gateway.properties (a lookup sketch follows)
+
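+To look up the generated credentials, something like the following should work; the property names are an assumption based on Linkis 1.3.0 defaults, so verify them in your file:
+
+```shell
+grep -E "wds.linkis.admin.(user|password)" ${LINKIS_HOME}/conf/linkis-mg-gateway.properties
+```
+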
+- 5. Database script execution: Linkis itself needs a database, but when we executed the data-insertion script of Linkis 1.3.0 we hit an error; at the time we simply deleted the part of the data that caused the error
+
+- 6. Yarn authentication: when executing a Spark task, the task is submitted to a queue, and the queue's resource information is fetched first to determine whether there are resources available for submission. Here you need to configure whether Kerberos-mode authentication is enabled and whether a keytab file is used
+for authentication; if file authentication is enabled, the file must be placed in the corresponding directory on the server, and the information must be updated in the linkis_cg_rm_external_resource_provider table.
+
+### Install the web front end
+- The web side uses Nginx as the static resource server; download the front-end installation package, decompress it, and place it in the directory served by Nginx
+
+### Scriptis tool installation
+- Scriptis is a pure front-end project integrated as a component in the web code of DSS. We only need to compile the Scriptis module of the DSS web project separately and upload the compiled static resources to the server where the Linkis console is located to make it accessible. Note: the Linkis single-machine deployment uses session-based verification by default, so you need to log in to the Linkis console first, and then log in to Scriptis to use it.
+
+## Nginx deployment example
+#### nginx.conf
+```
+upstream linkisServer{
+    server ip:port;
+    server ip:port;
+}
+server {
+listen       8088;# access port
+server_name  localhost;
+#charset koi8-r;
+#access_log  /var/log/nginx/host.access.log  main;
+#scriptis static resources
+location /scriptis {
+# change this to your own front-end path
+alias   /home/nginx/scriptis-web/dist; # static file directory
+#root /home/hadoop/dss/web/dss/linkis;
+index  index.html index.html;
+}
+#the default resource path points to the static resources of the console front end
+location / {
+# change this to your own front-end path
+root   /home/nginx/linkis-web/dist; # static file directory
+#root /home/hadoop/dss/web/dss/linkis;
+index  index.html index.html;
+}
+
+location /ws {
+proxy_pass http://linkisServer/api; #address of the backend Linkis
+proxy_http_version 1.1;
+proxy_set_header Upgrade $http_upgrade;
+proxy_set_header Connection upgrade;
+}
+
+location /api {
+proxy_pass http://linkisServer/api; #address of the backend Linkis
+proxy_set_header Host $host;
+proxy_set_header X-Real-IP $remote_addr;
+proxy_set_header x_real_ipP $remote_addr;
+proxy_set_header remote_addr $remote_addr;
+proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+proxy_http_version 1.1;
+proxy_connect_timeout 4s;
+proxy_read_timeout 600s;
+proxy_send_timeout 12s;
+proxy_set_header Upgrade $http_upgrade;
+proxy_set_header Connection upgrade;
+}
+
+#error_page  404              /404.html;
+# redirect server error pages to the static page /50x.html
+#
+error_page   500 502 503 504  /50x.html;
+location = /50x.html {
+root   /usr/share/nginx/html;
+}
+}
+
+```
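+
+After editing the file, validating and reloading with the standard nginx commands avoids taking a bad config live:
+
+```shell
+nginx -t          # validate the configuration syntax
+nginx -s reload   # reload without dropping connections
+```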
+## How to troubleshoot problems
+- 1. Linkis has more than 100 modules, but only 7 services are finally started: linkis-cg-engineconnmanager, linkis-cg-engineplugin, linkis-cg-entrance, linkis-cg-linkismanager,
+linkis-mg-gateway, linkis-mg-eureka and linkis-ps-publicservice. Each module has its own function. Among them, linkis-cg-engineconnmanager is responsible for managing and starting engine services: it generates the corresponding engine's startup script to pull the engine service up, so our team started linkis-cg-engineconnmanager alone on dedicated servers so that executions have sufficient resources.
+- 2. Engines such as jdbc, spark and hetu need some jar packages to run, which Linkis calls materials. When packaging, these jars are packed into the corresponding engine under linkis-cg-engineplugin, where conf and lib directories appear. When that service starts, the two directories are zipped and uploaded to the configured location, producing two zip files. We use OSS to store this material information, so it is first uploaded to OSS and then downloaded to the server where the linkis-cg-engineconnmanager service runs. If the configurations wds.linkis.engineconn.public.dir and wds.linkis.engineconn.root.dir are set, the packages are pulled down into wds.linkis.engineconn.public.dir; wds.linkis.engineconn.root.dir is the working directory, which stores log and script information plus soft links of lib and conf pointing to wds.linkis.engineconn.public.dir.
+- 3. If you want to inspect engine logs, look in the directory configured by wds.linkis.engineconn.root.dir (a lookup sketch follows). The log information is also displayed in the execution log on the Scriptis page; just paste it there to search.
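+
+A sketch for finding a recent engine's logs under the configured root directory; the example path is an assumption to replace with your wds.linkis.engineconn.root.dir value:
+
+```shell
+# Point this at your wds.linkis.engineconn.root.dir
+ENGINE_ROOT=/appcom/tmp
+# List engine log files modified in the last hour, newest first
+find "${ENGINE_ROOT}" -name "*.log" -mmin -60 -exec ls -lt {} +
+```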
+
+
+
+
+
+
diff --git a/blog/authors.yml b/blog/authors.yml
index 99a7b33207b..c29435805a8 100644
--- a/blog/authors.yml
+++ b/blog/authors.yml
@@ -34,8 +34,14 @@ ruY9527:
   url: https://github.com/ruY9527
   image_url: https://avatars.githubusercontent.com/u/43773582?v=4
 
+livi12138:
+  name: livi12138
+  title: contributors
+  url: https://github.com/livi12138
+  image_url: https://avatars.githubusercontent.com/u/156271765?v=4
+
 kevinWdong:
   name: kevinWdong
   title: contributors
   url: https://github.com/kongslove
-  image_url: https://avatars.githubusercontent.com/u/42604208?v=4
\ No newline at end of file
+  image_url: https://avatars.githubusercontent.com/u/42604208?v=4
diff --git a/i18n/zh-CN/docusaurus-plugin-content-blog/2024-01-26-linkis130-adaptation-Huawei-MRS-share.md b/i18n/zh-CN/docusaurus-plugin-content-blog/2024-01-26-linkis130-adaptation-Huawei-MRS-share.md
new file mode 100644
index 00000000000..7f29f1734e2
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-blog/2024-01-26-linkis130-adaptation-Huawei-MRS-share.md
@@ -0,0 +1,228 @@
+---
+title: Linkis1.3.0 适配 华为MRS+Scriptis 实战分享
+authors: [livi12138]
+tags: [blog,linkis1.3.0,hadoop3.1.1,spark3.1.1,hive3.1.0]
+---
+## 概述
+  团队有需求要在页面上同时使用sql和python语法对数据进行分析,在调研过程中发现linkis可以满足需要,遂将其引入内网,由于使用的是华为MRS,与开源的软件有所不同,
+又进行了二次开发适配,本文将分享使用经验,希望对有需要的同学有所帮助。
+  
+
+## 环境以及版本
+- jdk-1.8.0_112 , maven-3.5.2
+- hadoop-3.1.1,Spark-3.1.1,Hive-3.1.0,zookeeper-3.5.9 (华为MRS版本)
+- linkis-1.3.0
+- scriptis-web 1.1.0
+
+## 依赖调整以及打包
+   首先从linkis官网上下载1.3.0的源码,然后调整依赖版本
+#### linkis最外层调整pom文件
+
+```xml
+<hadoop.version>3.1.1</hadoop.version>
+<zookeeper.version>3.5.9</zookeeper.version>
+<curator.version>4.2.0</curator.version>
+<guava.version>30.0-jre</guava.version>
+<json4s.version>3.7.0-M5</json4s.version>
+<scala.version>2.12.15</scala.version>
+<scala.binary.version>2.12</scala.binary.version>
+```
+#### linkis-engineplugin-hive的pom文件
+
+```xml
+<hive.version>3.1.2</hive.version>
+```
+
+#### linkis-engineplugin-spark的pom文件
+
+```xml
+<spark.version>3.1.1</spark.version>
+```
+#### linkis-hadoop-common的pom文件
+```xml
+<dependency>
+        <groupId>org.apache.hadoop</groupId>
+        <artifactId>hadoop-hdfs</artifactId>  <!-- 只需要将该行替换即可,替换为 <artifactId>hadoop-hdfs-client</artifactId>-->
+        <version>${hadoop.version}</version>
+</dependency>
+ 将hadoop-hdfs修改为:
+ <dependency>
+        <groupId>org.apache.hadoop</groupId>
+        <artifactId>hadoop-hdfs-client</artifactId>
+        <version>${hadoop.version}</version>
+</dependency>
+```
+#### linkis-label-common
+org.apache.linkis.manager.label.conf.LabelCommonConfig
+修改默认版本,便于后续的自编译调度组件使用
+```
+    public static final CommonVars<String> SPARK_ENGINE_VERSION =
+            CommonVars.apply("wds.linkis.spark.engine.version", "3.1.1");
+
+    public static final CommonVars<String> HIVE_ENGINE_VERSION =
+            CommonVars.apply("wds.linkis.hive.engine.version", "3.1.2");
+```
+
+#### linkis-computation-governance-common
+org.apache.linkis.governance.common.conf.GovernanceCommonConf
+修改默认版本,便于后续的自编译调度组件使用
+
+```
+  val SPARK_ENGINE_VERSION = CommonVars("wds.linkis.spark.engine.version", "3.1.1")
+
+  val HIVE_ENGINE_VERSION = CommonVars("wds.linkis.hive.engine.version", "3.1.2")
+```
+
+#### 编译
+
+在以上配置都调整好之后,可以开始全量编译,依次执行以下命令
+
+```shell
+    cd linkis-x.x.x
+    mvn -N  install
+    mvn clean install -DskipTests
+```
+
+#### 编译错误
+
+- 如果你进行编译的时候,出现了错误,尝试单独进入到一个模块中进行编译,看是否有错误,根据具体的错误来进行调整
+- 由于linkis中使用了scala语言进行代码编写,建议可以先在配置scala环境,便于阅读源码
+- jar包冲突是最常见的问题,特别是升级了hadoop之后,请耐心调整依赖版本
+
+#### DataSphereStudio的pom文件
+
+由于我们升级了scala的版本,在部署时会报错,engineplugin启动失败,dss-gateway-support-1.1.0
+conn to bml now exit java.net.SocketException:Connection reset,这里需要修改scala版本,重新编译。
+1.删除掉低版本的 dss-gateway-support jar包,
+2.将DSS1.1.0中的scala版本修改为2.12,重新编译,获得新的dss-gateway-support-1.1.0.jar,替换linkis_installhome/lib/linkis-spring-cloud-service/linkis-mg-gateway中原有的jar包
+```xml
+<!-- scala 环境一致 -->
+<scala.version>2.12.15</scala.version>
+```
+按照上面的依赖版本调整,就能解决大部分问题,如果还有问题则需要对应日志仔细调整。
+如果能编译出完整的包,则代表linkis全量编译完成,可以进行部署。
+
+## 部署
+
+- 为了让引擎节点有足够的资源执行脚本,我们采用了多服务器部署,大致部署结构如下
+- SLB      1台   负载均衡为轮询
+- ECS-WEB  2台   nginx,静态资源部署,后台代理转发
+- ECS-APP  2台   微服务治理,计算治理,公共增强等节点部署
+- ECS-APP  4台   EngineConnManager节点部署
+
+### linkis部署
+
+- 虽然采用了多节点部署,但是我们并没有将代码剥离,还是把全量包放在服务器上,只是修改了启动脚本,使其只启动所需要的服务
+
+参考官网单机部署示例:https://linkis.apache.org/zh-CN/docs/1.3.0/deployment/deploy-quick
+
+#### linkis部署注意点
+- 1.部署用户: linkis核心进程的启动用户,同时此用户会默认作为管理员权限,部署过程中会生成对应的管理员登录密码,位于conf/linkis-mg-gateway.properties文件中 Linkis支持指定提交、执行的用户。linkis主要进程服务会通过sudo -u ${linkis-user} 切换到对应用户下,然后执行对应的引擎启动命令,所以引擎linkis-engine进程归属的用户是任务的执行者
+- 该用户默认为任务的提交和执行者,如果你想改为登录用户,需要修改 
+org.apache.linkis.entrance.restful.EntranceRestfulApi类下对应提交方法的代码
+json.put(TaskConstant.EXECUTE_USER, ModuleUserUtils.getOperationUser(req));
+json.put(TaskConstant.SUBMIT_USER, SecurityFilter.getLoginUsername(req));
+将以上设置提交用户和执行用户改为Scriptis页面登录用户
+- 2.sudo -u ${linkis-user}切换到对应用户下,如果使用登录用户,这个命令可能会失败,需要修改此处命令。
+- org.apache.linkis.ecm.server.operator.EngineConnYarnLogOperator.sudoCommands
+```scala
+private def sudoCommands(creator: String, command: String): Array[String] = {
+    Array(
+      "/bin/bash",
+      "-c",
+      "sudo su " + creator + " -c \"source ~/.bashrc 2>/dev/null; " + command 
+ "\""
+    )
+  } 修改为
+  private def sudoCommands(creator: String, command: String): Array[String] = {
+    Array(
+      "/bin/bash",
+      "-c",
+      "\"source ~/.bashrc 2>/dev/null; " + command + "\""
+    )
+  }
+```
+- 3.Mysql的驱动包一定要copy到/lib/linkis-commons/public-module/和/lib/linkis-spring-cloud-services/linkis-mg-gateway/
+
+- 4.默认是使用静态用户和密码,静态用户即部署用户,静态密码会在执行部署时随机生成一个密码串,存储于${LINKIS_HOME}/conf/linkis-mg-gateway.properties
+
+- 5.数据库脚本执行,linkis本身需要用到数据库,但是我们在执行linkis1.3.0版本的插入数据的脚本时,发现了报错,我们当时是直接删掉了报错部分的数据
+
+- 6.Yarn的认证,执行spark任务时会将任务提交到队列上去,会首先获取队列的资源信息,进行判断是否有资源可以提交,这里需要配置是否开启kerberos模式认证和是否使用keytab文件
+进行认证,如果开启了文件认证需要将文件放入到服务器对应目录,并且在linkis_cg_rm_external_resource_provider库表中更新信息。
+
+### 安装web前端
+- web端是使用nginx作为静态资源服务器的,直接下载前端安装包并解压,将其放在nginx服务器对应的目录即可
+
+### Scriptis工具安装
+- scriptis 是一个纯前端的项目,作为一个组件集成在DSS的web代码组件中,我们只需要将DSSweb项目进行单独的scriptis模块编译,将编译的静态资源上传至Linkis管理台所在的服务器,即可访问,注意:linkis单机部署默认使用的是session进行校验,需要先登录linkis管理台,再登录Scriptis就可以使用。
+
+## Nginx部署举例
+#### nginx.conf
+```
+upstream linkisServer{
+    server ip:port;
+    server ip:port;
+}
+server {
+listen       8088;# 访问端口
+server_name  localhost;
+#charset koi8-r;
+#access_log  /var/log/nginx/host.access.log  main;
+#scriptis静态资源
+location /scriptis {
+# 修改为自己的前端路径
+alias   /home/nginx/scriptis-web/dist; # 静态文件目录
+#root /home/hadoop/dss/web/dss/linkis;
+index  index.html index.html;
+}
+#默认资源路径指向管理台前端静态资源
+location / {
+# 修改为自己的前端路径
+root   /home/nginx/linkis-web/dist; # 静态文件目录
+#root /home/hadoop/dss/web/dss/linkis;
+index  index.html index.html;
+}
+
+location /ws {
+proxy_pass http://linkisServer/api; #后端Linkis的地址
+proxy_http_version 1.1;
+proxy_set_header Upgrade $http_upgrade;
+proxy_set_header Connection upgrade;
+}
+
+location /api {
+proxy_pass http://linkisServer/api; #后端Linkis的地址
+proxy_set_header Host $host;
+proxy_set_header X-Real-IP $remote_addr;
+proxy_set_header x_real_ipP $remote_addr;
+proxy_set_header remote_addr $remote_addr;
+proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+proxy_http_version 1.1;
+proxy_connect_timeout 4s;
+proxy_read_timeout 600s;
+proxy_send_timeout 12s;
+proxy_set_header Upgrade $http_upgrade;
+proxy_set_header Connection upgrade;
+}
+
+#error_page  404              /404.html;
+# redirect server error pages to the static page /50x.html
+#
+error_page   500 502 503 504  /50x.html;
+location = /50x.html {
+root   /usr/share/nginx/html;
+}
+}
+
+```
+## 如何排查问题
+- 1. linkis一共有100多个模块,最终启动的服务一共是7个,分别是 linkis-cg-engineconnmanager,linkis-cg-engineplugin,linkis-cg-entrance,linkis-cg-linkismanager,
+linkis-mg-gateway,linkis-mg-eureka,linkis-ps-publicservice,每一个模块都有着不同的功能,其中linkis-cg-engineconnmanager 负责管理启动引擎服务,会生成对应引擎的脚本来拉起引擎服务,所以我们团队在部署时将linkis-cg-engineconnmanager单独启动在服务器上以便于有足够的资源给用户执行。
+- 2. 像jdbc,spark,hetu之类的引擎的执行需要一些jar包的支撑,在linkis中称之为物料,打包的时候这些jar包会打到linkis-cg-engineplugin下对应的引擎中,会出现conf 和lib目录,启动这个服务时,会将两个打包上传到配置的目录,会生成两个zip文件,我们使用的是OSS来存储这些物料信息,所以首先是上传到OSS,然后再下载到linkis-cg-engineconnmanager这个服务所在服务器上,然后如果配置了以下两个配置 wds.linkis.engineconn.public.dir 和 wds.linkis.engineconn.root.dir ,那么会把包拉到wds.linkis.engineconn.public.dir这个目录下来,wds.linkis.engineconn.root.dir这个目录是工作目录,里面存放日志和脚本信息,还有一个lib和conf的软连接到 wds.linkis.engineconn.public.dir。
+- 3. 如果要排查引擎日志可以到 wds.linkis.engineconn.root.dir 配置下的目录去看,当然日志信息也会在Scriptis页面执行的日志上展示,直接粘贴去查找即可。
+
+
+
+
+
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-blog/authors.yml b/i18n/zh-CN/docusaurus-plugin-content-blog/authors.yml
index 4fe6718d8b9..ac432c41610 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-blog/authors.yml
+++ b/i18n/zh-CN/docusaurus-plugin-content-blog/authors.yml
@@ -1,18 +1,18 @@
 Casion:
   name: Casion
-  title: 微众银行开发工程师
+  title: 开源贡献者
   url: https://github.com/casionone/
   image_url: https://avatars.githubusercontent.com/u/7869972?v=4
 
 peacewong:
   name: Peacewong
-  title: 微众银行开发工程师
+  title: 开源贡献者
   url: https://github.com/peacewong/
   image_url: https://avatars.githubusercontent.com/u/11496700?v=4
 
 aiceflower:
   name: aiceflower
-  title: 微众银行开发工程师
+  title: 开源贡献者
   url: https://github.com/aiceflower/
   image_url: https://avatars.githubusercontent.com/u/22620332?s=400&v=4
 
@@ -39,4 +39,9 @@ kevinWdong:
   title: contributors
   url: https://github.com/kongslove
   image_url: https://avatars.githubusercontent.com/u/42604208?v=4
-  
\ No newline at end of file
+  
+livi12138:
+  name: livi12138
+  title: 开源贡献者
+  url: https://github.com/livi12138
+  image_url: https://avatars.githubusercontent.com/u/156271765?v=4


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@linkis.apache.org
For additional commands, e-mail: commits-h...@linkis.apache.org
