Re: Problems with multiple sinks using postgres-cdc connector

2024-06-17 Thread Hongshun Wang
Hi David,
In your modified pipeline, a single source reading table1 should be sufficient:
both sink1 and process2 can consume the output of process1. However, based on
your log, it appears that two sources have been created. Do you have the
execution graph from the Flink UI available?
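
If the two sinks are attached with separate execute_insert() calls in the
PyFlink Table API (as in the snippets quoted below), each call is submitted as
its own job, and each job brings up its own postgres-cdc source instance. A
minimal sketch, reusing the names from the quoted snippets and assuming the
SELECT is obtained via sql_query so it returns a Table, of grouping both
inserts into one StatementSet so that a single job (and therefore a single
replication slot) serves both sinks:

# One StatementSet: both INSERTs are submitted as a single job that shares
# one postgres-cdc source and one replication slot.
continuous_metrics_table = table_env.sql_query(
    "SELECT f1, f2, f3 FROM joined_processed_table")

stmt_set = table_env.create_statement_set()
stmt_set.add_insert(daily_sink_property_map['flink_table_name'], joined_with_metadata)
stmt_set.add_insert(continuous_sink_property_map['flink_table_name'], continuous_metrics_table)
stmt_set.execute()  # one submission for the whole pipeline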

Best,
Hongshun

On Mon, Jun 17, 2024 at 11:40 PM David Bryson  wrote:

> These sinks share the same source.  The pipeline that works looks something
> like this:
>
> table1 -> process1 -> process2 -> sink2
>
> When I change it to this:
>
> table1 -> process1 -> process2 -> sink2
>  `--> sink1
>
> I get the errors described: it appears that a second process is created
> that attempts to use the same slot a second time.
>
> On Mon, Jun 17, 2024 at 1:58 AM Hongshun Wang 
> wrote:
>
>> Hi David,
>> > When I add this second sink, the postgres-cdc connector appears to add
>> a second reader from the replication log, but with the same slot name.
>>
>> I don't understand what you mean by adding a second sink. Do they share
>> the same source, or does each have a separate pipeline? If the former, you
>> can share the same source between the two sinks, in which case one
>> replication slot is sufficient. If the latter, and you want each sink to
>> have its own source, you can set a different slot name for each source (the
>> option name is slot.name [1]).
>>
>> [1]
>> https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/connectors/flink-sources/postgres-cdc/#connector-options
>>
>> On Sat, Jun 15, 2024 at 12:40 AM David Bryson  wrote:
>>
>>> Hi,
>>>
>>> I have a stream reading from the postgres-cdc connector, version 3.1.0. I
>>> read from two tables:
>>>
>>> flink.cleaned_migrations
>>> public.cleaned
>>>
>>> I convert the tables into a datastream, do some processing, then write
>>> it to a sink at the end of my stream:
>>>
>>> joined_table_result =
>>> joined_with_metadata.execute_insert(daily_sink_property_map['flink_table_name'])
>>>
>>> This works well; however, I recently tried to add a second table, which
>>> contains state reached in the middle of my stream:
>>>
>>> continuous_metrics_table = table_env.execute_sql("SELECT f1, f2, f3
>>> from joined_processed_table")
>>>
>>>  
>>> continuous_metrics_table.execute_insert(continuous_sink_property_map['flink_table_name'])
>>>
>>> When I add this second sink, the postgres-cdc connector appears to add a
>>> second reader from the replication log, but with the same slot name. It
>>> seems to behave this way regardless of the sink connector I use, and seems
>>> to happen in addition to the existing slot that is already allocated to the
>>> stream.  This second reader of course cannot use the same replication slot,
>>> and so the connector eventually times out.  Is this expected behavior from
>>> the connector? It seems strange the connector would attempt to use a slot
>>> twice.
>>>
>>> I am using incremental snapshots, and I am passing a unique slot per
>>> table connector.
>>>
>>> Logs below:
>>>
>>> 2024-06-14 09:23:59,600 INFO  
>>> org.apache.flink.cdc.connectors.postgres.source.utils.TableDiscoveryUtils
>>> [] - Postgres captured tables : flink.cleaned_migrations .
>>>
>>> 2024-06-14 09:23:59,603 INFO  io.debezium.jdbc.JdbcConnection
>>> [] - Connection gracefully closed
>>>
>>> 2024-06-14 09:24:00,198 INFO  
>>> org.apache.flink.cdc.connectors.postgres.source.utils.TableDiscoveryUtils
>>> [] - Postgres captured tables : public.cleaned .
>>>
>>> 2024-06-14 09:24:00,199 INFO  io.debezium.jdbc.JdbcConnection
>>> [] - Connection gracefully closed
>>>
>>> 2024-06-14 09:24:00,224 INFO  
>>> io.debezium.connector.postgresql.PostgresSnapshotChangeEventSource
>>> [] - Creating initial offset context
>>>
>>> 2024-06-14 09:24:00,417 INFO  
>>> io.debezium.connector.postgresql.PostgresSnapshotChangeEventSource
>>> [] - Read xlogStart at 'LSN{6/C9806378}' from transaction '73559679'
>>>
>>> 2024-06-14 09:24:00,712 INFO  io.debezium.jdbc.JdbcConnection
>>> [] - Connection gracefully closed
>>>
>>> 2024-06-14 09:24:00,712 INFO  
>>> org.apache.flink.cdc.connectors.base.source.reader.IncrementalSourceReader
>>> [] - Source reader 0 discovers table sche

Re: Problems with multiple sinks using postgres-cdc connector

2024-06-17 Thread Hongshun Wang
Hi David,
> When I add this second sink, the postgres-cdc connector appears to add a
second reader from the replication log, but with the same slot name.

I don't understand what you mean by adding a second sink. Do they share the
same source, or does each have a separate pipeline? If the former, you can
share the same source between the two sinks, in which case one replication
slot is sufficient. If the latter, and you want each sink to have its own
source, you can set a different slot name for each source (the option name
is slot.name [1]).

[1]
https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/connectors/flink-sources/postgres-cdc/#connector-options
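
If you do go the separate-pipeline route, a minimal sketch of a source
definition with its own slot (all table, credential, and slot names below are
illustrative placeholders, not taken from the thread, and the incremental
snapshot option is shown only as an assumption):

# Hypothetical DDL: each postgres-cdc source table gets a distinct
# 'slot.name', so independent pipelines never contend for the same slot.
table_env.execute_sql("""
    CREATE TABLE cleaned_source_a (
        id BIGINT,
        f1 STRING,
        PRIMARY KEY (id) NOT ENFORCED
    ) WITH (
        'connector' = 'postgres-cdc',
        'hostname' = 'db-host',
        'port' = '5432',
        'username' = 'flink',
        'password' = 'secret',
        'database-name' = 'mydb',
        'schema-name' = 'public',
        'table-name' = 'cleaned',
        'scan.incremental.snapshot.enabled' = 'true',
        'slot.name' = 'flink_slot_pipeline_a'
    )
""")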

On Sat, Jun 15, 2024 at 12:40 AM David Bryson  wrote:

> Hi,
>
> I have a stream reading from the postgres-cdc connector, version 3.1.0. I read
> from two tables:
>
> flink.cleaned_migrations
> public.cleaned
>
> I convert the tables into a datastream, do some processing, then write it
> to a sink at the end of my stream:
>
> joined_table_result =
> joined_with_metadata.execute_insert(daily_sink_property_map['flink_table_name'])
>
> This works well; however, I recently tried to add a second table, which
> contains state reached in the middle of my stream:
>
> continuous_metrics_table = table_env.execute_sql("SELECT f1, f2, f3
> from joined_processed_table")
>
>  
> continuous_metrics_table.execute_insert(continuous_sink_property_map['flink_table_name'])
>
> When I add this second sink, the postgres-cdc connector appears to add a
> second reader from the replication log, but with the same slot name. It
> seems to behave this way regardless of the sink connector I use, and seems
> to happen in addition to the existing slot that is already allocated to the
> stream.  This second reader of course cannot use the same replication slot,
> and so the connector eventually times out.  Is this expected behavior from
> the connector? It seems strange the connector would attempt to use a slot
> twice.
>
> I am using incremental snapshots, and I am passing a unique slot per table
> connector.
>
> Logs below:
>
> 2024-06-14 09:23:59,600 INFO  
> org.apache.flink.cdc.connectors.postgres.source.utils.TableDiscoveryUtils
> [] - Postgres captured tables : flink.cleaned_migrations .
>
> 2024-06-14 09:23:59,603 INFO  io.debezium.jdbc.JdbcConnection
>   [] - Connection gracefully closed
>
> 2024-06-14 09:24:00,198 INFO  
> org.apache.flink.cdc.connectors.postgres.source.utils.TableDiscoveryUtils
> [] - Postgres captured tables : public.cleaned .
>
> 2024-06-14 09:24:00,199 INFO  io.debezium.jdbc.JdbcConnection
>   [] - Connection gracefully closed
>
> 2024-06-14 09:24:00,224 INFO  
> io.debezium.connector.postgresql.PostgresSnapshotChangeEventSource
> [] - Creating initial offset context
>
> 2024-06-14 09:24:00,417 INFO  
> io.debezium.connector.postgresql.PostgresSnapshotChangeEventSource
> [] - Read xlogStart at 'LSN{6/C9806378}' from transaction '73559679'
>
> 2024-06-14 09:24:00,712 INFO  io.debezium.jdbc.JdbcConnection
>   [] - Connection gracefully closed
>
> 2024-06-14 09:24:00,712 INFO  
> org.apache.flink.cdc.connectors.base.source.reader.IncrementalSourceReader
> [] - Source reader 0 discovers table schema for stream split stream-split
> success
>
> 2024-06-14 09:24:00,712 INFO  
> org.apache.flink.cdc.connectors.base.source.reader.IncrementalSourceReader
> [] - Source reader 0 received the stream split :
> StreamSplit{splitId='stream-split', offset=Offset{lsn=LSN{6/C98060F8},
> txId=73559674, lastCommitTs=-9223372036854775808],
> endOffset=Offset{lsn=LSN{/}, txId=null,
> lastCommitTs=-9223372036853775810], isSuspended=false}.
>
> 2024-06-14 09:24:00,714 INFO  
> org.apache.flink.connector.base.source.reader.SourceReaderBase
> [] - Adding split(s) to reader: [StreamSplit{splitId='stream-split',
> offset=Offset{lsn=LSN{6/C98060F8}, txId=73559674,
> lastCommitTs=-9223372036854775808],
> endOffset=Offset{lsn=LSN{/}, txId=null,
> lastCommitTs=-9223372036853775810], isSuspended=false}]
>
> 2024-06-14 09:24:00,714 INFO  
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher
> [] - Starting split fetcher 0
>
> 2024-06-14 09:24:00,716 INFO  
> org.apache.flink.cdc.connectors.base.source.enumerator.IncrementalSourceEnumerator
> [] - The enumerator receives notice from subtask 0 for the stream split
> assignment.
>
> 2024-06-14 09:24:00,721 INFO  
> org.apache.flink.cdc.connectors.postgres.source.fetch.PostgresSourceFetchTaskContext
> [] - PostgresConnectorConfig is
>
> 2024-06-14 09:24:00,847 INFO  
> io.debezium.connector.postgresql.PostgresSnapshotChangeEventSource
> [] - Creating initial offset context
>
> 2024-06-14 09:24:01,000 INFO  
> io.debezium.connector.postgresql.PostgresSnapshotChangeEventSource
> [] - Read xlogStart at 'LSN{6/C9806430}' from transaction '73559682'
>
> 2024-06-14 09:24:01,270 INFO  io.debezium.jdbc.JdbcConnection
>   [] - 

Re: problem with the heartbeat interval feature

2024-05-18 Thread Hongshun Wang
Hi Thomas,

I have reviewed the code and just noticed that heartbeat.action.query is not
mandatory. Debezium generates heartbeat events at regular intervals; Flink CDC
then receives these heartbeat events and advances the offset [1]. Finally, the
source reader commits the offset during checkpointing in the streaming
phase [2].

Therefore, you may want to verify whether checkpointing is enabled and whether
the job has entered the streaming phase (indicating that it is only reading the
WAL log).
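
Because the offset is committed back only when a checkpoint completes, a job
without checkpointing will never advance confirmed_flush_lsn. A minimal sketch
of enabling checkpointing (shown with the PyFlink DataStream API for brevity;
the equivalent setting exists in the Java/Scala APIs and via
execution.checkpointing.interval; the 60 s interval is illustrative):

from pyflink.datastream import StreamExecutionEnvironment

# Checkpoint every 60 seconds so the source can commit its offset and the
# replication slot's confirmed_flush_lsn can advance.
env = StreamExecutionEnvironment.get_execution_environment()
env.enable_checkpointing(60 * 1000)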

[1]
https://github.com/apache/flink-cdc/blob/d386c7c1c2db3bb8590be6c20198286cf0567c97/flink-cdc-connect/flink-cdc-source-connectors/flink-cdc-base/src/main/java/org/apache/flink/cdc/connectors/base/source/reader/IncrementalSourceRecordEmitter.java#L119

[2]
https://github.com/apache/flink-cdc/blob/d386c7c1c2db3bb8590be6c20198286cf0567c97/flink-cdc-connect/flink-cdc-source-connectors/flink-cdc-base/src/main/java/org/apache/flink/cdc/connectors/base/source/reader/IncrementalSourceReaderWithCommit.java#L93

On Sat, May 18, 2024 at 12:34 AM Thomas Peyric 
wrote:

> thanks Hongshun for your response !
>
> On Fri, May 17, 2024 at 07:51, Hongshun Wang  wrote:
>
>> Hi Thomas,
>>
>> The Debezium docs say: For the connector to detect and process events from
>> a heartbeat table, you must add the table to the PostgreSQL publication
>> specified by the publication.name
>> <https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-property-publication-name>
>> property.
>> If this publication predates your Debezium deployment, the connector uses
>> the publication as defined. If the publication is not already configured
>> to automatically replicate changes FOR ALL TABLES in the database, you
>> must explicitly add the heartbeat table to the publication [2].
>>
>> Thus, if you want to use heartbeat in CDC:
>>
>>1. add a heartbeat table to publication: ALTER PUBLICATION
>>** ADD TABLE **;
>>2. set heartbeatInterval
>>3. add debezium.heartbeat.action.query
>>
>> <https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-property-heartbeat-action-query>
>> [3]
>>
>> However, when I use it in CDC, an exception occurs:
>>
>> Caused by: java.lang.NullPointerException
>> at 
>> io.debezium.heartbeat.HeartbeatFactory.createHeartbeat(HeartbeatFactory.java:55)
>> at io.debezium.pipeline.EventDispatcher.<init>(EventDispatcher.java:127)
>> at io.debezium.pipeline.EventDispatcher.<init>(EventDispatcher.java:94)
>>
>>
>>
>>
>> It seems Flink CDC doesn't add a HeartbeatConnectionProvider when
>> configuring the PostgresEventDispatcher:
>>
>> // org.apache.flink.cdc.connectors.postgres.source.fetch.PostgresSourceFetchTaskContext#configure
>> this.postgresDispatcher =
>>         new PostgresEventDispatcher<>(
>>                 dbzConfig,
>>                 topicSelector,
>>                 schema,
>>                 queue,
>>                 dbzConfig.getTableFilters().dataCollectionFilter(),
>>                 DataChangeEvent::new,
>>                 metadataProvider,
>>                 schemaNameAdjuster);
>>
>>
>> In Debezium, when PostgresConnectorTask starts, it does this:
>>
>> // io.debezium.connector.postgresql.PostgresConnectorTask#start
>> final PostgresEventDispatcher dispatcher = new PostgresEventDispatcher<>(
>>         connectorConfig,
>>         topicNamingStrategy,
>>         schema,
>>         queue,
>>         connectorConfig.getTableFilters().dataCollectionFilter(),
>>         DataChangeEvent::new,
>>         PostgresChangeRecordEmitter::updateSchema,
>>         metadataProvider,
>>         connectorConfig.createHeartbeat(
>>                 topicNamingStrategy,
>>                 schemaNameAdjuster,
>>                 () -> new PostgresConnection(
>>                         connectorConfig.getJdbcConfig(),
>>                         PostgresConnection.CONNECTION_GENERAL),
>>                 exception -> {
>>                     String sqlErrorId = exception.getSQLState();
>>                     switch (sqlErrorId) {
>>                         case "57P01":
>>                             // Postgres error admin_shutdown, see
>>                             // https://www.postgresql.org/docs/12/errcodes-appendix.html
>>   

Re: problem with the heartbeat interval feature

2024-05-16 Thread Hongshun Wang
Hi Thomas,

The Debezium docs say: For the connector to detect and process events from a
heartbeat table, you must add the table to the PostgreSQL publication
specified by the publication.name property.
If this publication predates your Debezium deployment, the connector uses
the publication as defined. If the publication is not already configured
to automatically replicate changes FOR ALL TABLES in the database, you must
explicitly add the heartbeat table to the publication [2].

Thus, if you want to use heartbeat in CDC (a configuration sketch follows reference [3] below):

   1. add a heartbeat table to publication: ALTER PUBLICATION
   ** ADD TABLE **;
   2. set heartbeatInterval
   3. add debezium.heartbeat.action.query
   

[3]
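
A minimal sketch of what steps 2 and 3 might look like in a postgres-cdc
source definition. All table, slot, and credential names are hypothetical, and
it assumes the connector exposes heartbeat.interval.ms plus Debezium
pass-through options via the debezium. prefix:

# Hypothetical DDL: step 2 sets the heartbeat interval, step 3 passes the
# heartbeat action query through to Debezium. The heartbeat table must
# already be part of the publication (step 1).
table_env.execute_sql("""
    CREATE TABLE orders_source (
        id BIGINT,
        PRIMARY KEY (id) NOT ENFORCED
    ) WITH (
        'connector' = 'postgres-cdc',
        'hostname' = 'db-host',
        'port' = '5432',
        'username' = 'flink',
        'password' = 'secret',
        'database-name' = 'mydb',
        'schema-name' = 'public',
        'table-name' = 'orders',
        'slot.name' = 'flink_orders_slot',
        'heartbeat.interval.ms' = '30000',
        'debezium.heartbeat.action.query' = 'INSERT INTO flink.heartbeat (ts) VALUES (now())'
    )
""")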

However, when I use it in CDC, an exception occurs:

Caused by: java.lang.NullPointerException
at 
io.debezium.heartbeat.HeartbeatFactory.createHeartbeat(HeartbeatFactory.java:55)
at io.debezium.pipeline.EventDispatcher.<init>(EventDispatcher.java:127)
at io.debezium.pipeline.EventDispatcher.<init>(EventDispatcher.java:94)




It seems Flink CDC doesn't add a HeartbeatConnectionProvider when configuring
the PostgresEventDispatcher:

// org.apache.flink.cdc.connectors.postgres.source.fetch.PostgresSourceFetchTaskContext#configure
this.postgresDispatcher =
        new PostgresEventDispatcher<>(
                dbzConfig,
                topicSelector,
                schema,
                queue,
                dbzConfig.getTableFilters().dataCollectionFilter(),
                DataChangeEvent::new,
                metadataProvider,
                schemaNameAdjuster);


In Debezium, when PostgresConnectorTask starts, it does this:

// io.debezium.connector.postgresql.PostgresConnectorTask#start
final PostgresEventDispatcher dispatcher = new PostgresEventDispatcher<>(
        connectorConfig,
        topicNamingStrategy,
        schema,
        queue,
        connectorConfig.getTableFilters().dataCollectionFilter(),
        DataChangeEvent::new,
        PostgresChangeRecordEmitter::updateSchema,
        metadataProvider,
        connectorConfig.createHeartbeat(
                topicNamingStrategy,
                schemaNameAdjuster,
                () -> new PostgresConnection(
                        connectorConfig.getJdbcConfig(),
                        PostgresConnection.CONNECTION_GENERAL),
                exception -> {
                    String sqlErrorId = exception.getSQLState();
                    switch (sqlErrorId) {
                        case "57P01":
                            // Postgres error admin_shutdown, see
                            // https://www.postgresql.org/docs/12/errcodes-appendix.html
                            throw new DebeziumException("Could not execute heartbeat action query (Error: " + sqlErrorId + ")", exception);
                        case "57P03":
                            // Postgres error cannot_connect_now, see
                            // https://www.postgresql.org/docs/12/errcodes-appendix.html
                            throw new RetriableException("Could not execute heartbeat action query (Error: " + sqlErrorId + ")", exception);
                        default:
                            break;
                    }
                }),
        schemaNameAdjuster,
        signalProcessor);

Thus, I have created a new Jira issue [4] to fix it.



 [1]
https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/connectors/legacy-flink-cdc-sources/postgres-cdc/

[2]
https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-property-heartbeat-interval-ms

[3]
https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-property-heartbeat-action-query

[4] https://issues.apache.org/jira/browse/FLINK-35387


Best

Hongshun

On Thu, May 16, 2024 at 9:03 PM Thomas Peyric 
wrote:

> Hi Flink Community !
>
> I am using :
> * Flink
> * Flink CDC posgtres Connector
> * scala + sbt
>
> versions are :
>   * orgApacheKafkaVersion = "3.2.3"
>   * flinkVersion = "1.19.0"
>   * flinkKafkaVersion = "3.0.2-1.18"
>   * flinkConnectorPostgresCdcVersion = "3.0.1"
>   * debeziumVersion = "1.9.8.Final"
>   * scalaVersion = "2.12.13"
>   * javaVersion = "11"
>
>
> the problem
> ---
>
> I have a problem with the heartbeat interval feature:
> * when I query PG with `select * from pg_replication_slots;` to check
> whether the information on each replication slot is updated at the defined
> interval
> * then the confirmed_flush_lsn values are never updated
> PS: I have other 

Re: Re:RE: RE: flink cdc dynamically adding tables does not take effect

2024-03-07 Thread Hongshun Wang
Hi, casel chan,
The community has already implemented dynamic table addition in the incremental
snapshot framework (https://github.com/apache/flink-cdc/pull/3024); it is
expected to be exposed for mongodb and postgres in 3.1, but it is currently not
exposed for Oracle and Sqlserver. You can refer to those two connectors in the
community code, enable the parameter, and then test and adapt it.
Best,
Hongshun


Re: Unsubscribe

2023-05-10 Thread Hongshun Wang
If you want to unsubscribe from the user-zh@flink.apache.org mailing list,
please send an email with any content to user-zh-unsubscr...@flink.apache.org; see [1].

[1] https://flink.apache.org/zh/community/

On Wed, May 10, 2023 at 1:38 AM Zhanshun Zou  wrote:

> Unsubscribe
>


Re: How can payment reconciliation timeout alerts be implemented with Flink SQL?

2023-05-10 Thread Hongshun Wang
Hi casel.chen,
My understanding of what you want is:
*trigger the check* 30 minutes after a record arrives in ThirdPartyPaymentStream;
if at that point the record has still not appeared in PlatformPaymentStream, the
payment has timed out and the record should be emitted downstream. That is,
rather than waiting until the ThirdPartyPaymentStream record arrives to judge
whether it timed out, because at that point, even though the timeout was
exceeded, the payment still counts as made and there is no need to raise an alert.

If this is a DataStream job, you can use a timer to trigger the check after the delay.
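
A minimal sketch of the timer idea with a KeyedCoProcessFunction (PyFlink
DataStream API; stream and field names are hypothetical: stream 1 stands for
ThirdPartyPaymentStream, stream 2 for PlatformPaymentStream, both keyed by
order id):

from pyflink.common import Types
from pyflink.datastream.functions import KeyedCoProcessFunction, RuntimeContext
from pyflink.datastream.state import ValueStateDescriptor


class PaymentTimeoutAlert(KeyedCoProcessFunction):

    def open(self, runtime_context: RuntimeContext):
        # Remembers whether the platform-side record for this key has arrived.
        self.platform_seen = runtime_context.get_state(
            ValueStateDescriptor("platform_seen", Types.BOOLEAN()))

    def process_element1(self, value, ctx):
        # Third-party payment record: start the 30-minute countdown.
        ctx.timer_service().register_processing_time_timer(
            ctx.timer_service().current_processing_time() + 30 * 60 * 1000)

    def process_element2(self, value, ctx):
        # Platform payment record: mark this order id as reconciled.
        self.platform_seen.update(True)

    def on_timer(self, timestamp, ctx):
        if not self.platform_seen.value():
            # Still unmatched after 30 minutes: emit an alert downstream.
            yield "payment timeout for order " + str(ctx.get_current_key())

The two streams would then be wired up with something like
third_party.connect(platform).key_by(key_a, key_b).process(PaymentTimeoutAlert()).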

For SQL, one rather roundabout idea of mine (for reference only, it may not be
correct) is to use a Pulsar sink (or RocketMQ or another message queue with
delayed delivery) to write the PlatformPaymentStream data into a delayed queue
(30 minutes) [1], and then consume it with that delay as PlatformPaymentStream2.
Then *left join* PlatformPaymentStream2 with ThirdPartyPaymentStream; if the
joined result is missing the ThirdPartyPaymentStream side, the payment was not
made in time.

[1]
https://nightlies.apache.org/flink/flink-docs-master/zh/docs/connectors/datastream/pulsar/#%e6%b6%88%e6%81%af%e5%bb%b6%e6%97%b6%e5%8f%91%e9%80%81

Best
Hongshun

On Wed, May 10, 2023 at 8:45 AM Shammon FY  wrote:

> Hi
>
> If you use CEP, you can union the two streams into one and then match on the
> different event types via subtype when defining the CEP Pattern, for example:
> DataStream s1 = ...;
> DataStream s2 = ...;
> DataStream s = s1.union(s2)...;
> Pattern pattern = Pattern.begin("first")
>         .subtype(E1.class)
>         .where(...)
>         .followedBy("second")
>         .subtype(E2.class)
>         .where(...)
>
> If you use Flink SQL, you can implement this directly with a two-stream join plus windows.
>
> Best,
> Shammon FY
>
>
>
>
> On Wed, May 10, 2023 at 2:24 AM casel.chen  wrote:
>
> > Requirement: the business side implements a payment feature, and we need to
> > use the third-party payment platform's transaction data with Flink SQL to do
> > real-time reconciliation, raising an alert for any third-party platform
> > transaction that has not arrived within 30 minutes.
> > How can this two-stream real-time reconciliation scenario be implemented
> > with Flink CEP SQL?
> >
> >
> The examples found online are all based on a single stream, whereas the above
> scenario involves two streams: one is PlatformPaymentStream and the other is
> ThirdPartyPaymentStream.
>


Re: Unsubscribe

2023-05-09 Thread Hongshun Wang
Please send an email with any content to user-zh-unsubscr...@flink.apache.org
if you want to unsubscribe from the user-zh@flink.apache.org mailing list; you
can refer to [1][2] for more details on managing your mailing list subscription.

Best Hongshun,

[1]
https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8
[2]https://flink.apache.org/community.html#mailing-lists


On Sun, May 7, 2023 at 10:14 PM 胡家发 <15802974...@163.com> wrote:

> Unsubscribe


Re: Unsubscribe

2023-05-06 Thread Hongshun Wang
Please send an email with any content to user-zh-unsubscr...@flink.apache.org
if you want to unsubscribe from the user-zh@flink.apache.org mailing list; you
can refer to [1][2] for more details on managing your mailing list subscription.

Best Hongshun,

[1]
https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8
[2]https://flink.apache.org/community.html#mailing-lists

On Fri, Apr 21, 2023 at 10:50 AM 杨光跃  wrote:

>
>
> Unsubscribe
> | |
> 杨光跃
> |
> |
> yangguangyuem...@163.com
> |
>
>


Re: What is the difference between streaming.api.operators and streaming.runtime.operators?

2023-05-06 Thread Hongshun Wang
Let me share my personal view: streaming.api.operators is the stream API
provided for users, which users can use and extend. streaming.runtime.operators,
by contrast, is not visible to users and is invoked automatically by Flink at
execution time. For example: users can configure a sink themselves, such as a
KafkaSink, but the state handling and transaction commit on the output side
(CommitterOperator) are unified logic that Flink generates automatically based
on the sink type; users do not need to configure or implement them.

Best
Hongshun

On Sat, May 6, 2023 at 11:57 AM yidan zhao  wrote:

> As the subject says, I would like to know what the criterion for this split is.
>


Re: I can log in to flink issues, but the flink Chinese mailing list account password is wrong; what could be the reason?

2023-05-05 Thread Hongshun Wang
>
>  I can log in to flink issues

Is this a Jira account?

flink Chinese mailing list account password

What is a flink Chinese mailing list account? Is there a link to the login page?

On Wed, Apr 19, 2023 at 11:36 AM kcz <573693...@qq.com.invalid> wrote:

> Could you please help check where the problem is on my side? My account is kcz.
> I would like to ask the experts a question about flink avro.
>
>
>
>
> kcz
> 573693...@qq.com
>
>
>
> 


Re: Unsubscribe

2023-05-05 Thread Hongshun Wang
Please send an email with any content to user-zh-unsubscr...@flink.apache.org
if you want to unsubscribe from the user-zh@flink.apache.org mailing list; you
can refer to [1][2] for more details on managing your mailing list subscription.

Best Hongshun,

[1]
https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8
[2]https://flink.apache.org/community.html#mailing-lists

On Sun, Apr 23, 2023 at 10:30 PM 朱静  wrote:

> Unsubscribe


Re: Unsubscribe

2023-05-05 Thread Hongshun Wang
Please send an email with any content to user-zh-unsubscr...@flink.apache.org
if you want to unsubscribe from the user-zh@flink.apache.org mailing list; you
can refer to [1][2] for more details on managing your mailing list subscription.

Best Hongshun,

[1]
https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8
[2]https://flink.apache.org/community.html#mailing-lists

On Tue, May 2, 2023 at 9:45 PM 胡家发 <15802974...@163.com> wrote:

> Unsubscribe


Re: Unsubscribe

2023-05-05 Thread Hongshun Wang
Please send an email with any content to user-zh-unsubscr...@flink.apache.org
if you want to unsubscribe from the user-zh@flink.apache.org mailing list; you
can refer to [1][2] for more details on managing your mailing list subscription.

On Fri, May 5, 2023 at 2:59 PM 李浩  wrote:

>
>


Re: Unsubscribe

2023-05-05 Thread Hongshun Wang
If you want to unsubscribe from the u...@flink.apache.org and d...@flink.apache.org
mailing lists, please send an email with any content to
user-unsubscr...@flink.apache.org and dev-unsubscr...@flink.apache.org; see [1].

[1] https://flink.apache.org/zh/community/

On Fri, May 5, 2023 at 3:24 PM wuzhongxiu  wrote:

> Unsubscribe
>
>
>
> | |
> go574...@163.com
> |
> |
> 邮箱:go574...@163.com
> |
>
>
>
>
>  Original message 
> | From | willluzheng |
> | Date | 2023-05-05 15:22 |
> | To | user-zh@flink.apache.org |
> | Cc | |
> | Subject | Unsubscribe |
> Unsubscribe