This is an automated email from the ASF dual-hosted git repository.
jianbin pushed a commit to branch docusaurus
in repository https://gitbox.apache.org/repos/asf/incubator-seata-website.git
The following commit(s) were added to refs/heads/docusaurus by this push:
new 8c4a31524d Grpc blog (#923)
8c4a31524d is described below
commit 8c4a31524d098ac18a7b67da86212a75ce08c853
Author: yiqi <[email protected]>
AuthorDate: Sat Dec 14 00:25:48 2024 +0800
Grpc blog (#923)
---
blog/seata-grpc-client.md | 1 +
.../seata-grpc-client.md | 156 ++++++++++++++
.../seata-grpc-client.md | 232 +++++++++++++++++++++
static/img/blog/2024121301.png | Bin 0 -> 365111 bytes
static/img/blog/2024121302.png | Bin 0 -> 318738 bytes
static/img/blog/2024121303.png | Bin 0 -> 419640 bytes
static/img/blog/2024121304.png | Bin 0 -> 413464 bytes
static/img/blog/2024121305.png | Bin 0 -> 358454 bytes
static/img/blog/2024121306.jpeg | Bin 0 -> 92986 bytes
9 files changed, 389 insertions(+)
diff --git a/blog/seata-grpc-client.md b/blog/seata-grpc-client.md
new file mode 100644
index 0000000000..e872b67af1
--- /dev/null
+++ b/blog/seata-grpc-client.md
@@ -0,0 +1 @@
+Placeholder. DO NOT DELETE.
\ No newline at end of file
diff --git a/i18n/en/docusaurus-plugin-content-blog/seata-grpc-client.md b/i18n/en/docusaurus-plugin-content-blog/seata-grpc-client.md
new file mode 100644
index 0000000000..a274ca4ace
--- /dev/null
+++ b/i18n/en/docusaurus-plugin-content-blog/seata-grpc-client.md
@@ -0,0 +1,156 @@
+---
+title: Go Language Client Communication with Seata Server
+author: Wang Mingjun, Seata Open Source Summer Student Participant
+description: This article takes the Go language as an example to demonstrate Seata's multi-language client communication capabilities.
+date: 2024/11/30
+keywords: [seata, distributed transaction, cloud-native, grpc, multi-language communication]
+---
+
+# Background
+With the merge of PR [https://github.com/apache/incubator-seata/pull/6754](https://github.com/apache/incubator-seata/pull/6754), Seata Server can now recognize and process gRPC requests. This means that a client in any language, simply by importing the proto files, can communicate with a Seata Server deployed on the JVM and thereby complete the full distributed transaction flow.
+
+Below, this process is demonstrated using the Go language as an example.
+
+# Environment Preparation
+GoLand 2024.2
+
+IntelliJ IDEA 2024.3
+
+JDK 1.8
+
+Go 1.23.3
+
+Seata 2.3.0-SNAPSHOT
+
+libprotoc 3.21.0
+
+# Operation Process
+## Deploy and Start Seata Server
+Run org.apache.seata.server.ServerApplication#main as shown below:
+
+
+
+## Proto File Import
+In the Go project, import the proto files required for this transaction flow, including the proto files for the various transaction requests and responses and the proto files for initiating the RPC, as shown below:
+
+
+
+## gRPC Code Generation
+In the directory containing the proto files imported in the previous step, execute the command:
+
+```shell
+protoc --go_out=. --go-grpc_out=. .\*.proto
+```
+
+After execution, the gRPC code is generated, as shown below:
+
+
+
+## gRPC Invocation
+Complete one distributed transaction flow in main.go and print the responses from Seata Server. The code is as follows:
+
+```go
+func main() {
+ conn, err := grpc.Dial(":8091", grpc.WithInsecure())
+ if err != nil {
+ log.Fatalf("did not connect: %v", err)
+ }
+ defer conn.Close()
+ client := pb.NewSeataServiceClient(conn)
+ stream, err := client.SendRequest(context.Background())
+ if err != nil {
+ log.Fatalf("could not sendRequest: %v", err)
+ }
+ defer stream.CloseSend()
+
+ sendRegisterTm(stream)
+ xid := sendGlobalBegin(stream)
+ sendBranchRegister(stream, xid)
+ sendGlobalCommit(stream, xid)
+}
+
+// ... Other functions ...
+
+```
+
+After running, the Seata Server console prints as follows:
+
+
+
+The Go client console prints as follows:
+
+
+
+# Implementation Principle
+## Proto Design
+To enable communication with multi-language gRPC clients, Seata Server defines grpcMessage.proto. It declares GrpcMessageProto, an envelope that wraps Seata's various Message objects, and sendRequest, a bidirectional streaming interface that carries Seata's communication requests. Seata Server uses grpcMessage.proto as the medium for communicating with multi-language clients.
+
+```proto
+syntax = "proto3";
+package org.apache.seata.protocol.protobuf;
+import "google/protobuf/any.proto";
+option java_multiple_files = true;
+option java_outer_classname = "GrpcMessage";
+option java_package = "org.apache.seata.core.protocol.generated";
+
+message GrpcMessageProto {
+ int32 id = 1;
+ int32 messageType = 2;
+ map<string, string> headMap = 3;
+ google.protobuf.Any body = 4;
+}
+
+service SeataService {
+  rpc sendRequest (stream GrpcMessageProto) returns (stream GrpcMessageProto);
+}
+```
+
+In addition, a GrpcSerializer is defined that plugs into Seata's serialization SPI and converts between protobuf byte streams and Seata message objects.
+
+## gRPC Protocol Recognition
+Seata Server implements ProtocolDetectHandler and ProtocolDetector. ProtocolDetectHandler, a ByteToMessageDecoder, traverses a list of ProtocolDetectors when a message arrives to find one that recognizes the current message. Each ProtocolDetector distinguishes the Seata, HTTP/1.1, and HTTP/2 protocols by their magic numbers; once a protocol is recognized, the ChannelHandlers capable of handling it are added to the current Channel's pipeline.
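To make the dispatch concrete, here is a minimal Go sketch of magic-number detection. It is a simplified model rather than Seata's actual Netty handler: the HTTP/2 client preface is fixed by the HTTP/2 specification, while the Seata magic bytes (0xda 0xda) and the label strings are assumptions for illustration.

```go
package main

import (
	"bytes"
	"fmt"
)

// Illustrative magic values: HTTP/2 connections (which carry gRPC) begin
// with the client preface; the Seata magic bytes are shown here as 0xda 0xda.
var (
	http2Preface = []byte("PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n")
	seataMagic   = []byte{0xda, 0xda}
)

// detect inspects the first bytes of a connection and returns which
// protocol's handlers should be installed on the channel pipeline.
func detect(first []byte) string {
	switch {
	case bytes.HasPrefix(first, http2Preface):
		return "http2" // hand off to gRPC/HTTP2 handlers
	case bytes.HasPrefix(first, seataMagic):
		return "seata" // hand off to the Seata codec handlers
	default:
		return "http1.1" // fall back to HTTP/1.1 handlers
	}
}

func main() {
	fmt.Println(detect([]byte("PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n")))
	fmt.Println(detect([]byte{0xda, 0xda, 0x01}))
	fmt.Println(detect([]byte("GET / HTTP/1.1\r\n")))
}
```

Once `detect` picks a protocol, the corresponding handlers are appended to the pipeline, mirroring what ProtocolDetectHandler does after a successful match.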
+
+
+
+## gRPC Request Sending and Processing
+Seata Server implements GrpcEncoder and GrpcDecoder. GrpcEncoder converts Seata's RpcMessage into a GrpcMessageProto that native gRPC clients can recognize, filling the header with status, contentType, and other protocol headers needed to talk to them. GrpcEncoder also follows the gRPC wire format, writing the compression flag, the length, and the message body into the channel in the order the gRPC protocol specifies.
+
+GrpcDecoder processes requests from native gRPC clients. Because gRPC clients batch requests in the underlying transport via a queued flush, GrpcDecoder is also responsible for splitting a batch of requests apart. Finally, GrpcDecoder converts the protobuf byte stream into one or more RpcMessages and hands them to Seata's request processors.
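The wire format the two codecs adapt to is gRPC's length-prefixed framing: each message carries a 5-byte prefix of one compression-flag byte plus a 4-byte big-endian length, followed by the body. The following is a minimal, dependency-free Go sketch of that framing and of splitting a flushed batch; it is an illustration of the format, not Seata's actual GrpcEncoder/GrpcDecoder.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeFrame writes one gRPC length-prefixed message frame:
// 1 compression-flag byte, a 4-byte big-endian length, then the body.
func encodeFrame(body []byte) []byte {
	frame := make([]byte, 5+len(body))
	frame[0] = 0 // 0 = uncompressed
	binary.BigEndian.PutUint32(frame[1:5], uint32(len(body)))
	copy(frame[5:], body)
	return frame
}

// decodeFrames splits a buffer that may hold several flushed frames
// back into individual message bodies, as a batch-aware decoder must.
func decodeFrames(buf []byte) [][]byte {
	var bodies [][]byte
	for len(buf) >= 5 {
		n := binary.BigEndian.Uint32(buf[1:5])
		if len(buf) < int(5+n) {
			break // incomplete frame; wait for more bytes
		}
		bodies = append(bodies, buf[5:5+n])
		buf = buf[5+n:]
	}
	return bodies
}

func main() {
	// Two frames flushed together, then unpacked one by one.
	batch := append(encodeFrame([]byte("msg-1")), encodeFrame([]byte("msg-2"))...)
	for _, b := range decodeFrames(batch) {
		fmt.Println(string(b))
	}
}
```

In Seata's case each recovered body would then be deserialized from a GrpcMessageProto into an RpcMessage before reaching the request processors.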
+
+## gRPC Connection Establishment and Management
+On the server side, configuring a single ProtocolDetectHandler is enough to recognize and establish every type of connection.
+
+```java
+@Override
+public void initChannel(SocketChannel ch) {
+ ProtocolDetector[] defaultProtocolDetectors = {
+ new Http2Detector(getChannelHandlers()),
+ new SeataDetector(getChannelHandlers()),
+ new HttpDetector()
+ };
+    ch.pipeline().addLast(new IdleStateHandler(nettyServerConfig.getChannelMaxReadIdleSeconds(), 0, 0))
+ .addLast(new ProtocolDetectHandler(defaultProtocolDetectors));
+}
+```
+
+On the client side, whenever a Channel is obtained and the configured communication protocol is gRPC, the NioSocketChannel serves as the parent Channel from which an Http2MultiStreamChannel is obtained, and the gRPC-related handlers are added to that Channel.
+
+```java
+if (nettyClientConfig.getProtocol().equals(Protocol.GPRC.value)) {
+    Http2StreamChannelBootstrap bootstrap = new Http2StreamChannelBootstrap(channel);
+ bootstrap.handler(new ChannelInboundHandlerAdapter() {
+ @Override
+ public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+ Channel channel = ctx.channel();
+ channel.pipeline().addLast(new GrpcDecoder());
+ channel.pipeline().addLast(new GrpcEncoder());
+ if (channelHandlers != null) {
+ addChannelPipelineLast(channel, channelHandlers);
+ }
+ }
+ });
+ channel = bootstrap.open().get();
+}
+```
+
diff --git a/i18n/zh-cn/docusaurus-plugin-content-blog/seata-grpc-client.md b/i18n/zh-cn/docusaurus-plugin-content-blog/seata-grpc-client.md
new file mode 100644
index 0000000000..e67f9e106e
--- /dev/null
+++ b/i18n/zh-cn/docusaurus-plugin-content-blog/seata-grpc-client.md
@@ -0,0 +1,232 @@
+---
+title: Go Language Client Communication with Seata Server
+author: Wang Mingjun, Seata Open Source Summer Student Participant
+description: This article takes the Go language as an example to demonstrate Seata's multi-language client communication capabilities.
+date: 2024/11/30
+keywords: [seata, distributed transaction, cloud-native, grpc, multi-language communication]
+---
+
+# Background
+With the merge of PR [https://github.com/apache/incubator-seata/pull/6754](https://github.com/apache/incubator-seata/pull/6754), Seata Server can now recognize and process gRPC requests. This means that a client in any language, simply by importing the proto files, can communicate with a Seata Server deployed on the JVM and thereby complete the full distributed transaction flow.
+
+Below, this process is demonstrated using the Go language as an example.
+
+# Environment Preparation
+GoLand 2024.2
+
+IntelliJ IDEA 2024.3
+
+JDK 1.8
+
+Go 1.23.3
+
+Seata 2.3.0-SNAPSHOT
+
+libprotoc 3.21.0
+
+# Operation Process
+## Deploy and Start Seata Server
+Run org.apache.seata.server.ServerApplication#main, as shown below:
+
+
+
+## Proto File Import
+In the Go project, import the proto files required for this transaction flow, including the proto files for the various transaction requests and responses and the proto files for initiating the RPC, as shown below:
+
+
+
+## gRPC Code Generation
+In the directory containing the proto files imported in the previous step, execute the command:
+
+```shell
+protoc --go_out=. --go-grpc_out=. .\*.proto
+```
+
+After execution, the gRPC code is generated, as shown below:
+
+
+
+## gRPC Invocation
+Complete one distributed transaction flow in main.go and print the responses from Seata Server. The code is as follows:
+
+```go
+func main() {
+ conn, err := grpc.Dial(":8091", grpc.WithInsecure())
+ if err != nil {
+ log.Fatalf("did not connect: %v", err)
+ }
+ defer conn.Close()
+ client := pb.NewSeataServiceClient(conn)
+ stream, err := client.SendRequest(context.Background())
+ if err != nil {
+ log.Fatalf("could not sendRequest: %v", err)
+ }
+ defer stream.CloseSend()
+
+ sendRegisterTm(stream)
+ xid := sendGlobalBegin(stream)
+ sendBranchRegister(stream, xid)
+ sendGlobalCommit(stream, xid)
+}
+
+func sendRegisterTm(stream grpc.BidiStreamingClient[pb.GrpcMessageProto, pb.GrpcMessageProto]) {
+ abstractIdentifyRequestProto := &pb.AbstractIdentifyRequestProto{
+ ApplicationId: "test-applicationId",
+ }
+ registerTMRequestProto := &pb.RegisterTMRequestProto{
+ AbstractIdentifyRequest: abstractIdentifyRequestProto,
+ }
+
+ registerTMResponseProto := &pb.RegisterTMResponseProto{}
+ sendMessage(stream, registerTMRequestProto, registerTMResponseProto)
+}
+
+func sendGlobalBegin(stream grpc.BidiStreamingClient[pb.GrpcMessageProto, pb.GrpcMessageProto]) string {
+ globalBeginRequestProto := &pb.GlobalBeginRequestProto{
+ TransactionName: "test-transactionName",
+ Timeout: 200,
+ }
+ globalBeginResponseProto := &pb.GlobalBeginResponseProto{}
+ sendMessage(stream, globalBeginRequestProto, globalBeginResponseProto)
+ return globalBeginResponseProto.Xid
+}
+
+func sendBranchRegister(stream grpc.BidiStreamingClient[pb.GrpcMessageProto, pb.GrpcMessageProto], xid string) {
+ branchRegisterRequestProto := &pb.BranchRegisterRequestProto{
+ Xid: xid,
+ LockKey: "1",
+ ResourceId: "test-resourceId",
+ BranchType: pb.BranchTypeProto_AT,
+ ApplicationData: "{\"mock\":\"mock\"}",
+ }
+
+ branchRegisterResponseProto := &pb.BranchRegisterResponseProto{}
+    sendMessage(stream, branchRegisterRequestProto, branchRegisterResponseProto)
+}
+
+func sendGlobalCommit(stream grpc.BidiStreamingClient[pb.GrpcMessageProto, pb.GrpcMessageProto], xid string) {
+ abstractGlobalEndRequestProto := &pb.AbstractGlobalEndRequestProto{
+ Xid: xid,
+ }
+ globalCommitRequestProto := &pb.GlobalCommitRequestProto{
+ AbstractGlobalEndRequest: abstractGlobalEndRequestProto,
+ }
+
+ globalCommitResponseProto := &pb.GlobalCommitResponseProto{}
+ sendMessage(stream, globalCommitRequestProto, globalCommitResponseProto)
+}
+
+func sendMessage(stream grpc.BidiStreamingClient[pb.GrpcMessageProto, pb.GrpcMessageProto], req proto.Message, response proto.Message) {
+ anyMsg, err := anypb.New(req)
+ if err != nil {
+ log.Fatalf("could not new any msg: %v", err)
+ }
+ marshal, err := proto.Marshal(anyMsg)
+ msg := &pb.GrpcMessageProto{
+ HeadMap: map[string]string{},
+ Body: marshal,
+ }
+ err = stream.Send(msg)
+ if err != nil {
+ log.Fatalf("could not send msg: %v", err)
+ }
+ resp, err := stream.Recv()
+ if err != nil {
+ log.Fatalf("failed to receive message: %v", err)
+ }
+
+ body := resp.Body
+ var anyMessage anypb.Any
+ err = proto.Unmarshal(body, &anyMessage)
+ if err != nil {
+ log.Fatalf("failed to unmarshal to any: %v", err)
+ }
+ err = anypb.UnmarshalTo(&anyMessage, response, proto.UnmarshalOptions{})
+ if err != nil {
+ log.Fatalf("failed to unmarshal to message: %v", err)
+ }
+
+ log.Printf("Received: %+v", response)
+}
+```
+
+After running, the Seata Server console prints the following:
+
+
+
+The Go client console prints the following:
+
+
+
+# Implementation Principle
+## Proto Design
+To enable communication with multi-language gRPC clients, Seata Server defines grpcMessage.proto. It declares GrpcMessageProto, an envelope that wraps Seata's various Message objects, and sendRequest, a bidirectional streaming interface that carries Seata's communication requests. Seata Server uses grpcMessage.proto as the medium for communicating with multi-language clients.
+
+```proto
+syntax = "proto3";
+package org.apache.seata.protocol.protobuf;
+import "google/protobuf/any.proto";
+option java_multiple_files = true;
+option java_outer_classname = "GrpcMessage";
+option java_package = "org.apache.seata.core.protocol.generated";
+
+message GrpcMessageProto {
+ int32 id = 1;
+ int32 messageType = 2;
+ map<string, string> headMap = 3;
+ google.protobuf.Any body = 4;
+}
+
+service SeataService {
+  rpc sendRequest (stream GrpcMessageProto) returns (stream GrpcMessageProto);
+}
+```
+
+In addition, a GrpcSerializer is defined that plugs into Seata's serialization SPI and converts between protobuf byte streams and Seata message objects.
+
+## gRPC Protocol Recognition
+Seata Server implements ProtocolDetectHandler and ProtocolDetector. ProtocolDetectHandler, a ByteToMessageDecoder, traverses a list of ProtocolDetectors when a message arrives to find one that recognizes the current message. Each ProtocolDetector distinguishes the Seata, HTTP/1.1, and HTTP/2 protocols by their magic numbers; once a protocol is recognized, the ChannelHandlers capable of handling it are added to the current Channel's pipeline.
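As an illustration of this dispatch, here is a small stdlib-only Go sketch. It is a simplified model, not Seata's actual Netty handler: the HTTP/2 client preface comes from the HTTP/2 specification, while the Seata magic bytes (0xda 0xda) and the labels are assumptions for demonstration.

```go
package main

import (
	"bytes"
	"fmt"
)

// Assumed magic values: the standard HTTP/2 client preface (gRPC runs
// over HTTP/2) and 0xda 0xda standing in for the Seata protocol magic.
var (
	http2Preface = []byte("PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n")
	seataMagic   = []byte{0xda, 0xda}
)

// detect classifies a connection by its first bytes so the matching
// protocol handlers can be installed on the channel pipeline.
func detect(first []byte) string {
	switch {
	case bytes.HasPrefix(first, http2Preface):
		return "http2"
	case bytes.HasPrefix(first, seataMagic):
		return "seata"
	default:
		return "http1.1"
	}
}

func main() {
	fmt.Println(detect([]byte("PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n")))
	fmt.Println(detect([]byte{0xda, 0xda, 0x02}))
	fmt.Println(detect([]byte("POST / HTTP/1.1\r\n")))
}
```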
+
+
+
+## gRPC Request Sending and Processing
+Seata Server implements GrpcEncoder and GrpcDecoder. GrpcEncoder converts Seata's RpcMessage into a GrpcMessageProto that native gRPC clients can recognize, filling the header with status, contentType, and other protocol headers needed to talk to them. GrpcEncoder also follows the gRPC wire format, writing the compression flag, the length, and the message body into the channel in the order the gRPC protocol specifies.
+
+GrpcDecoder processes requests from native gRPC clients. Because gRPC clients batch requests in the underlying transport via a queued flush, GrpcDecoder is also responsible for splitting a batch of requests apart. Finally, GrpcDecoder converts the protobuf byte stream into one or more RpcMessages and hands them to Seata's request processors.
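The framing involved here is gRPC's length-prefixed message format: a 1-byte compression flag and a 4-byte big-endian length before each body. This dependency-free Go sketch shows encoding one frame and unpacking a flushed batch; it illustrates the format only and is not Seata's actual codec.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeFrame builds one gRPC length-prefixed frame:
// compression flag (0 = uncompressed), big-endian length, body.
func encodeFrame(body []byte) []byte {
	frame := make([]byte, 5+len(body))
	binary.BigEndian.PutUint32(frame[1:5], uint32(len(body)))
	copy(frame[5:], body)
	return frame
}

// decodeFrames walks a buffer of concatenated frames and returns
// each message body, stopping at any incomplete trailing frame.
func decodeFrames(buf []byte) [][]byte {
	var bodies [][]byte
	for len(buf) >= 5 {
		n := binary.BigEndian.Uint32(buf[1:5])
		if len(buf) < int(5+n) {
			break
		}
		bodies = append(bodies, buf[5:5+n])
		buf = buf[5+n:]
	}
	return bodies
}

func main() {
	batch := append(encodeFrame([]byte("begin")), encodeFrame([]byte("commit"))...)
	for _, b := range decodeFrames(batch) {
		fmt.Println(string(b))
	}
}
```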
+
+## gRPC Connection Establishment and Management
+On the server side, configuring a single ProtocolDetectHandler is enough to recognize and establish every type of connection.
+
+```java
+@Override
+public void initChannel(SocketChannel ch) {
+ ProtocolDetector[] defaultProtocolDetectors = {
+ new Http2Detector(getChannelHandlers()),
+ new SeataDetector(getChannelHandlers()),
+ new HttpDetector()
+ };
+    ch.pipeline().addLast(new IdleStateHandler(nettyServerConfig.getChannelMaxReadIdleSeconds(), 0, 0))
+ .addLast(new ProtocolDetectHandler(defaultProtocolDetectors));
+}
+```
+
+On the client side, whenever a Channel is obtained and the configured communication protocol is gRPC, the NioSocketChannel serves as the parent Channel from which an Http2MultiStreamChannel is obtained, and the gRPC-related handlers are added to that Channel.
+
+```java
+if (nettyClientConfig.getProtocol().equals(Protocol.GPRC.value)) {
+    Http2StreamChannelBootstrap bootstrap = new Http2StreamChannelBootstrap(channel);
+ bootstrap.handler(new ChannelInboundHandlerAdapter() {
+ @Override
+ public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
+ Channel channel = ctx.channel();
+ channel.pipeline().addLast(new GrpcDecoder());
+ channel.pipeline().addLast(new GrpcEncoder());
+ if (channelHandlers != null) {
+ addChannelPipelineLast(channel, channelHandlers);
+ }
+ }
+ });
+ channel = bootstrap.open().get();
+}
+```
+
diff --git a/static/img/blog/2024121301.png b/static/img/blog/2024121301.png
new file mode 100644
index 0000000000..3856775cc0
Binary files /dev/null and b/static/img/blog/2024121301.png differ
diff --git a/static/img/blog/2024121302.png b/static/img/blog/2024121302.png
new file mode 100644
index 0000000000..1e0be016c3
Binary files /dev/null and b/static/img/blog/2024121302.png differ
diff --git a/static/img/blog/2024121303.png b/static/img/blog/2024121303.png
new file mode 100644
index 0000000000..e15065568f
Binary files /dev/null and b/static/img/blog/2024121303.png differ
diff --git a/static/img/blog/2024121304.png b/static/img/blog/2024121304.png
new file mode 100644
index 0000000000..77c070ea48
Binary files /dev/null and b/static/img/blog/2024121304.png differ
diff --git a/static/img/blog/2024121305.png b/static/img/blog/2024121305.png
new file mode 100644
index 0000000000..577930adac
Binary files /dev/null and b/static/img/blog/2024121305.png differ
diff --git a/static/img/blog/2024121306.jpeg b/static/img/blog/2024121306.jpeg
new file mode 100644
index 0000000000..809ef71426
Binary files /dev/null and b/static/img/blog/2024121306.jpeg differ
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]