Re: [VOTE] PIP-359: Support custom message listener executor for specific subscription

2024-06-19 Thread xiangying meng
+1 (non-binding)

Thanks,
Xiangying

On Wed, Jun 19, 2024 at 2:14 PM Lari Hotari  wrote:

> +1 (binding)
>
> -Lari
>
> On 2024/06/18 16:44:46 Aurora Twinkle wrote:
> > Hi, Pulsar Community: I would like to start the voting thread for
> > PIP-359: Support
> > custom message listener executor for specific subscription.
> > PIP: https://github.com/apache/pulsar/pull/22902
> >
> > Please review and vote on the PIP-359, as follows: [ ] +1, Approve the
> > PIP-359 [ ] -1, Do not approve the PIP-359 (please provide specific
> > comments)
> > Thanks, Linlin Duan(AuroraTwinkle)
> >
>


RE: RE: Re: [DISCUSS] PIP-359: Support custom message listener executor for specific subscription

2024-06-13 Thread xiangying meng
Good work! Same as Lari.

Thanks,
Xiangying Meng


On 2024/06/14 03:39:01 xiangying meng wrote:
> 
> 
> On 2024/06/14 03:13:03 Yubiao Feng wrote:
> > Same as Lari
> > 
> > Thanks
> > Yubiao Feng
> > 
> > On Thu, Jun 13, 2024 at 8:05 PM Aurora Twinkle 
> > wrote:
> > 
> > > Hi, Pulsar Community.
> > > I open a new PIP for support custom message listener executor for specific
> > > subscription to avoid individual subscription listener consuming too much
> > > time leading to higher consumption delay in other subscriptions.
> > > link: https://github.com/apache/pulsar/pull/22902
> > >
> > > Thanks,
> > > Linlin Duan(AuroraTwinkle)
> > >
> > 
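To illustrate the head-of-line blocking that PIP-359 targets, here is a minimal JDK-only sketch. It does not use the actual PIP-359 interface; it only models "all subscriptions share one listener thread" versus "each subscription gets its own executor" with plain `ExecutorService`s, and shows that a blocked listener stalls an unrelated one only in the shared case.

```java
import java.util.concurrent.*;

public class ListenerExecutorDemo {
    public static void main(String[] args) throws Exception {
        // A "slow" listener that blocks until released, and a "fast" one.
        CountDownLatch release = new CountDownLatch(1);
        CountDownLatch fastDone = new CountDownLatch(1);
        Runnable slowListener = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        Runnable fastListener = fastDone::countDown;

        // Case 1: both subscriptions share one listener thread.
        ExecutorService shared = Executors.newSingleThreadExecutor();
        shared.submit(slowListener);
        shared.submit(fastListener);
        // The fast listener is stuck in the queue behind the slow one.
        boolean ranWhileBlocked = fastDone.await(200, TimeUnit.MILLISECONDS);
        System.out.println("shared executor, fast listener ran: " + ranWhileBlocked); // false
        release.countDown();
        fastDone.await(); // only now does it complete
        shared.shutdown();

        // Case 2: each subscription gets its own executor, as PIP-359 enables.
        CountDownLatch release2 = new CountDownLatch(1);
        CountDownLatch fastDone2 = new CountDownLatch(1);
        ExecutorService slowExec = Executors.newSingleThreadExecutor();
        ExecutorService fastExec = Executors.newSingleThreadExecutor();
        slowExec.submit(() -> {
            try { release2.await(); } catch (InterruptedException ignored) { }
        });
        fastExec.submit(fastDone2::countDown);
        boolean isolated = fastDone2.await(1, TimeUnit.SECONDS);
        System.out.println("dedicated executors, fast listener ran: " + isolated); // true
        release2.countDown();
        slowExec.shutdown();
        fastExec.shutdown();
    }
}
```

With dedicated executors the slow subscription can no longer add delay to the other subscription, which is exactly the isolation the PIP proposes.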

RE: Re: [DISCUSS] PIP-359: Support custom message listener executor for specific subscription

2024-06-13 Thread xiangying meng



On 2024/06/14 03:13:03 Yubiao Feng wrote:
> Same as Lari
> 
> Thanks
> Yubiao Feng
> 
> On Thu, Jun 13, 2024 at 8:05 PM Aurora Twinkle 
> wrote:
> 
> > Hi, Pulsar Community.
> > I open a new PIP for support custom message listener executor for specific
> > subscription to avoid individual subscription listener consuming too much
> > time leading to higher consumption delay in other subscriptions.
> > link: https://github.com/apache/pulsar/pull/22902
> >
> > Thanks,
> > Linlin Duan(AuroraTwinkle)
> >
> 

[DISCUSS] cherry-pick #22034 Create new ledger after the current ledger is closed

2024-04-07 Thread Xiangying Meng
Hi all,

I want to start a discussion to cherry-pick #22034 [0] to release branches.
The PR creates a new ledger as soon as the current one is full. This
is a bug fix for the issue where the last ledger could not be deleted
after expiration. Moreover, since a new ledger no longer needs to be
created lazily when the next message is sent, it can also reduce send
latency.

However, it could be a behavior change and reintroduce some flaky
tests that we have already fixed on master. So I think it is necessary
to have a discussion before cherry-picking.

The target branches:

- branch-3.0
- branch-3.1
- branch-3.2

[0] https://github.com/apache/pulsar/pull/22034

I will keep the discussion open for at least 48 hours.
If there are no objections, I will perform the cherry-pick.

BR,
Xiangying


Re: [DISCUSS] GEO-replication issues on topic level

2024-03-28 Thread Xiangying Meng
Hi Zixuan,

Thanks for your work in improving this geo-replication issue.
In my opinion, this is a mistake in the topic-level implementation of
Pulsar geo-replication.
As we know, after a user configures replication policies at the
namespace level, the topics under the namespace are created in the
remote clusters when a topic is created in the local cluster. If the
geo-replication policies are enabled at the topic level for a
non-partitioned topic, the topic is created automatically in the
remote cluster when the replicator producer is built.
However, for a partitioned topic, the topics created automatically in
the remote cluster are non-partitioned topics.

Therefore, creating the topics in the remote clusters when the admin
uploads the replication policies is an acceptable solution. It follows
the current topic-creation behavior for geo-replication enabled at the
namespace level, and for non-partitioned topics at the topic level.
It is also not a breaking change, so it should need either no proposal
or only a simple proposal to record this improvement.
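The mismatch described above can be sketched with a toy model. The two methods and the metadata maps below are illustrative only, not Pulsar internals: one path mimics the replicator producer auto-creating each partition remotely as a plain non-partitioned topic, the other mimics creating matching partitioned metadata when the admin uploads the policy.

```java
import java.util.*;

public class RemoteTopicCreationSketch {
    // Toy metadata stores for local and remote clusters:
    // topic name -> partition count (0 = non-partitioned).
    static Map<String, Integer> local = new HashMap<>();
    static Map<String, Integer> remote = new HashMap<>();

    // Buggy path: the replicator producer auto-creates the per-partition
    // topics remotely as plain non-partitioned topics.
    static void enableViaReplicatorProducer(String topic) {
        int partitions = local.get(topic);
        for (int i = 0; i < Math.max(partitions, 1); i++) {
            String name = partitions > 0 ? topic + "-partition-" + i : topic;
            remote.putIfAbsent(name, 0); // auto-created, no partitioned metadata
        }
    }

    // Proposed fix: create matching partitioned metadata when the admin
    // uploads the topic-level replication policy.
    static void enableViaAdminPolicyUpload(String topic) {
        remote.put(topic, local.get(topic));
    }

    public static void main(String[] args) {
        local.put("persistent://t/ns/p-topic", 3);

        enableViaReplicatorProducer("persistent://t/ns/p-topic");
        System.out.println("replicator path creates: " + new TreeMap<>(remote));

        remote.clear();
        enableViaAdminPolicyUpload("persistent://t/ns/p-topic");
        System.out.println("admin path creates: " + new TreeMap<>(remote));
    }
}
```

The first printout shows three stray non-partitioned `-partition-N` topics on the remote cluster; the second shows a single partitioned topic whose metadata matches the local one.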

Thanks,
Xiangying

On Fri, Mar 22, 2024 at 4:44 PM Zixuan Liu  wrote:
>
>  Hi all,
>
> The GEO-replication can be enabled on the Namespace and topic levels. When
> GEO-replication is enabled on the namespace level, it automatically creates
> the topic for the remote cluster, but the topic level misses this feature,
> which can cause unexpected problems.
>
> When two clusters use different global configuration metadatastore, the
> local cluster has a partitioned topic, and then we enable the
> GEO-replication on the topic level, I expect a partitioned topic will be
> created on the remote cluster, not a non-partitioned topic. BTW,
> `allowAutoTopicCreation` was enabled.
>
> There are two options:
>
> 1. When GEO-replication is enabled on the topic level, we can create
> the topic for the remote cluster (usually, we have superuser permissions):
> https://github.com/apache/pulsar/pull/22203
> 2. When the remote cluster has no topic, stop GEO-replication and throw an
> error.
>
> We also need to consider resource/permission issues, such as when the
> remote cluster disables topic creation or exceeds the maximum number of topics.
>
> Please let me know your thoughts.
>
> More context: https://github.com/apache/pulsar/pull/21679 (This is an
> incorrect implementation, but there is more context there.)
>
> Thanks,
> Zixuan


Re: [DISCUSS] Optimizing the Method of Estimating Message Backlog Size in Pulsar

2024-03-27 Thread Xiangying Meng
Agree. While the name might be misleading, it indeed accurately
reflects the actual disk usage situation.


BR

On Wed, Mar 27, 2024 at 3:48 PM Girish Sharma  wrote:
>
> Hi Xiangying,
>
>
> > In the current implementation, the backlog size is estimated from the
> > mark delete position to the last confirm position, whereas the backlog
> > message count is the number of messages from the mark delete position
> > to the last confirm position, minus the count of individually
> > acknowledged messages. The inconsistency between these two could
> > potentially confuse users.
> >
>
> While confusing, it is somewhat accurate. Since the messages can be part of
> the same ledger where some messages are acked, some aren't, we can't delete
> that entire ledger until all messages of the ledger are acked - so it does
> contribute towards size of the backlog from a disk perspective.
> There might be some optimization possible - in a way that we try to figure
> out all completely acked ledgers from markDeletePosition to latest offset
> and remove their size, but what's the ROI there?
>
> So I would say that in your proposal, option 2 (current) is more accurate
> (while not being the best) than option 1.
>
> Regards
> --
> Girish Sharma


Re: [DISCUSS] Optimize the Acktimeout Mechanism in Pulsar Client

2024-03-27 Thread Xiangying Meng
I have considered this issue, but it may increase the user's consumption delay.
Also, it is not appropriate to handle this in the ack timeout mechanism.
Yubiao and I have just discussed this issue, and we should handle it
in the AcknowledgementsGroupingTracker to automatically retry failed
ack requests.

Thanks,

On Wed, Mar 27, 2024 at 5:37 PM ZhangJian He  wrote:
>
> Hi, Xiangying. Have you ever considered the `isAckReceiptEnabled` param?
>
> Thanks
> ZhangJian He
>
>
> On Wed, Mar 27, 2024 at 3:33 PM Xiangying Meng  wrote:
>
> > Dear Pulsar Community,
> >
> > I would like to initiate a discussion regarding the optimization of
> > the acktimeout mechanism on the client side. As we all know, the
> > Pulsar consumer has a configuration for ack timeout that automatically
> > redelivers unacknowledged messages after a certain period. The
> > workflow is approximately as follows:
> >
> > 1. Record a message when it is received.
> > 2. Remove the message from the record when the consumer begins to
> > acknowledge it.
> > 3. A timed task checks whether the messages from the record have
> > reached the ack timeout and triggers redelivery for those messages.
> >
> > This workflow has a potential shortcoming: it does not wait for the
> > ack response before removing the message from the record. If an ack
> > request is lost in transit to the broker - for instance, due to issues
> > with proxy processing, or a buffer overflow after it reaches the
> > broker - the message will not be redelivered on ack timeout, leaving
> > an ack hole. While this situation is quite extreme and the likelihood
> > of occurrence is extremely low, it is nonetheless a possibility.
> > Another, more common scenario is when the broker fails to process or
> > persist the ack request.
> >
> > In such cases, users who are highly sensitive to ack holes may prefer
> > the message to be removed only after receiving the ack response.
> > Perhaps we could add a parameter to the acktimeout to determine
> > whether to wait for the ack response before removing the message.
> >
> > I am interested in hearing your thoughts on this issue and look
> > forward to your responses and valuable suggestions.
> >
> > Best Regards,
> > Xiangying
> >


[DISCUSS] Optimize the Acktimeout Mechanism in Pulsar Client

2024-03-27 Thread Xiangying Meng
Dear Pulsar Community,

I would like to initiate a discussion regarding the optimization of
the acktimeout mechanism on the client side. As we all know, the
Pulsar consumer has a configuration for ack timeout that automatically
redelivers unacknowledged messages after a certain period. The
workflow is approximately as follows:

1. Record a message when it is received.
2. Remove the message from the record when the consumer begins to
acknowledge it.
3. A timed task checks whether the messages from the record have
reached the ack timeout and triggers redelivery for those messages.

This workflow has a potential shortcoming: it does not wait for the
ack response before removing the message from the record. If an ack
request is lost in transit to the broker - for instance, due to issues
with proxy processing, or a buffer overflow after it reaches the
broker - the message will not be redelivered on ack timeout, leaving
an ack hole. While this situation is quite extreme and the likelihood
of occurrence is extremely low, it is nonetheless a possibility.
Another, more common scenario is when the broker fails to process or
persist the ack request.

In such cases, users who are highly sensitive to ack holes may prefer
the message to be removed only after receiving the ack response.
Perhaps we could add a parameter to the ack timeout mechanism to determine
whether to wait for the ack response before removing the message.
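The difference between the two removal points can be shown with a small JDK-only sketch. The `Tracker` class is a hypothetical simplification of the client's unacked-message tracking, not the actual Pulsar implementation; it simulates a lost ack request under both policies.

```java
import java.util.*;
import java.util.concurrent.*;

public class AckTimeoutSketch {
    // Tracks unacked message ids with their receive time.
    static class Tracker {
        final Map<Long, Long> unacked = new ConcurrentHashMap<>();
        final boolean removeOnlyOnAckResponse;
        Tracker(boolean removeOnlyOnAckResponse) {
            this.removeOnlyOnAckResponse = removeOnlyOnAckResponse;
        }
        void received(long msgId) { unacked.put(msgId, System.nanoTime()); }
        // Called when the consumer *sends* the ack request.
        void ackSent(long msgId) {
            if (!removeOnlyOnAckResponse) unacked.remove(msgId);
        }
        // Called when the broker's ack response arrives
        // (never, if the request was lost).
        void ackResponse(long msgId) { unacked.remove(msgId); }
        // The periodic timeout task: everything still tracked is redelivered.
        Set<Long> expired() { return new TreeSet<>(unacked.keySet()); }
    }

    public static void main(String[] args) {
        // Current behavior: removal happens when the ack is sent.
        Tracker current = new Tracker(false);
        current.received(1L);
        current.ackSent(1L); // ack request is then lost on the wire
        System.out.println("current behavior redelivers: " + current.expired()); // []

        // Proposed behavior: removal waits for the ack response.
        Tracker proposed = new Tracker(true);
        proposed.received(1L);
        proposed.ackSent(1L); // ack request lost; no response ever arrives
        System.out.println("proposed behavior redelivers: " + proposed.expired()); // [1]
    }
}
```

Under the current policy the lost ack leaves message 1 permanently unredelivered (an ack hole); under the proposed policy the timeout task still sees it and triggers redelivery.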

I am interested in hearing your thoughts on this issue and look
forward to your responses and valuable suggestions.

Best Regards,
Xiangying


[DISCUSS] Optimizing the Method of Estimating Message Backlog Size in Pulsar

2024-03-27 Thread Xiangying Meng
Dear Pulsar Community,

I would like to initiate a discussion regarding the optimization of
the method used for estimating the message backlog size.

In the current implementation, the backlog size is estimated from the
mark delete position to the last confirm position, whereas the backlog
message count is the number of messages from the mark delete position
to the last confirm position, minus the count of individually
acknowledged messages. The inconsistency between these two could
potentially confuse users.

For instance, let's consider there are 3,000 messages in a topic and
all messages except for message 1:0, 1:998, and 3:999 have been
acknowledged by a subscription. When users retrieve the stats of the
subscription, they will find that `msgBacklog` is 3, while
`backlogSize` is 3000 * entry size.

|1:0|...|1:998|...|3:999|

When it comes to the value of `backlogSize`, there seem to be two
different opinions:
1. The backlog size should be consistent with the message backlog, and
it should not include the messages that have been individually
acknowledged.
2. Only the messages before the mark delete position can be deleted,
so we should calculate the backlog size from the mark delete position,
and individual acknowledgments should not affect the calculation of
the backlog size.
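The divergence in the example above can be worked through numerically. This sketch assumes a uniform 1 KiB entry size purely for illustration and mirrors option 2 (the current behavior) for `backlogSize`:

```java
public class BacklogEstimateSketch {
    public static void main(String[] args) {
        // Numbers from the example: 3000 entries, all individually acked
        // except messages 1:0, 1:998 and 3:999.
        long totalEntries = 3000;
        long entrySizeBytes = 1024; // assumed uniform entry size
        long individuallyAcked = totalEntries - 3;

        // markDeletePosition stays before 1:0 because 1:0 is still unacked,
        // so [markDelete, lastConfirmed] spans all 3000 entries.
        long entriesFromMarkDelete = totalEntries;

        // msgBacklog subtracts individually acked messages ...
        long msgBacklog = entriesFromMarkDelete - individuallyAcked;
        // ... while backlogSize (current behavior) does not.
        long backlogSize = entriesFromMarkDelete * entrySizeBytes;

        System.out.println("msgBacklog = " + msgBacklog);   // 3
        System.out.println("backlogSize = " + backlogSize); // 3072000
    }
}
```

So the stats report 3 backlog messages but roughly 3 MB of backlog size, which is the inconsistency users observe.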

I'm interested in hearing how others view this issue. I look forward
to your response.

Best Regards,
Xiangying


[ANNOUNCE] Apache Pulsar 2.10.6 released

2024-03-08 Thread Xiangying Meng
The Apache Pulsar team is proud to announce Apache Pulsar version 2.10.6.

Pulsar is a highly scalable, low latency messaging platform running on
commodity hardware. It provides simple pub-sub semantics over topics,
guaranteed at-least-once delivery of messages, automatic cursor management for
subscribers, and cross-datacenter replication.

For Pulsar release details and downloads, visit:

https://pulsar.apache.org/download

Release Notes are at:
https://pulsar.apache.org/release-notes

We would like to thank the contributors that made the release possible.

Regards,

The Pulsar Team


Re: [VOTE] Pulsar Release 2.10.6 Candidate 2

2024-03-08 Thread Xiangying Meng
Closing this vote with 3 binding +1 votes:
- Penghui
- Jiwei Guo (Tboy)
- Lari

On Fri, Mar 8, 2024 at 5:04 PM PengHui Li  wrote:
>
> +1 (binding)
>
> - Checked the signatures
> - Build from the source
> - Checked the bookkeeper JNI libs
> org.apache.bookkeeper-circe-checksum-*.jar and
> org.apache.bookkeeper-cpu-affinity-*.jar
> - Verified the Cassandra connector
> - Verified the Stateful Function
> - Verified the Trino connector
> - Tested performance with 100 topics
>
> Regards,
> Penghui
>
> On Fri, Mar 8, 2024 at 4:14 PM guo jiwei  wrote:
>
> > +1 (binding)
> >
> > - Built from source
> > - Checked the signatures
> > - Run standalone
> > - Checked producer and consumer
> > - Verified the Cassandra connector
> > - Verified the Stateful function
> >
> >
> > Regards
> > Jiwei Guo (Tboy)
> >
> >
> > On Fri, Mar 8, 2024 at 3:48 PM Lari Hotari  wrote:
> >
> > > +1 (binding)
> > >
> > > - Built from source
> > > - Checked the signatures of the source and binary release artifacts
> > > - Run standalone
> > > - Checked producer and consumer
> > > - Verified the Cassandra connector
> > > - Verified the Stateful function
> > >
> > > -Lari
> > >
> > > On 2024/03/08 02:08:34 Xiangying Meng wrote:
> > > > This is the first release candidate for Apache Pulsar, version 2.10.6.
> > > >
> > > > It fixes the following issues:
> > > >
> > >
> > https://github.com/apache/pulsar/pulls?q=is:pr+label:cherry-picked/branch-2.10+label:release/2.10.6+is:closed
> > > >
> > > > *** Please download, test and vote on this release. This vote will stay
> > > open
> > > > for at least 72 hours ***
> > > >
> > > > Note that we are voting upon the source (tag), binaries are provided
> > for
> > > > convenience.
> > > >
> > > > Source and binary files:
> > > >
> > https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.10.6-candidate-2/
> > > >
> > > > SHA-512 checksums:
> > > >
> > >
> > 699a18b25828dc1274debb4d20cb22e9700935849243ea1892a15ad7b40c67d42ba46574e96356c39f4963183f080fb154fb42002559469eb1d240ceadf7a76c
> > > >  apache-pulsar-2.10.6-bin.tar.gz
> > > >
> > >
> > fe7230e6c939b4da8da7c80b41611fd05336f83caeea981d16a749988cf4cb10cce64703c2c62aaec9f7ed1a4b353b1f68f8800b5befafe3264a5504750a2b6a
> > > >  apache-pulsar-2.10.6-src.tar.gz
> > > >
> > > > Maven staging repo:
> > > >
> > https://repository.apache.org/content/repositories/orgapachepulsar-1274
> > > >
> > > > The tag to be voted upon:
> > > > v2.10.6-candidate-2 (a76ddbe5af523b4aa541a2272c58f685ef05859f)
> > > > https://github.com/apache/pulsar/releases/tag/v2.10.6-candidate-2
> > > >
> > > > Pulsar's KEYS file containing PGP keys you use to sign the release:
> > > > https://downloads.apache.org/pulsar/KEYS
> > > >
> > > > Docker images:
> > > >
> > > > 
> > > >
> > >
> > https://hub.docker.com/layers/xiangyingmeng/pulsar/2.10.6/images/sha256-743a4b5d79708b7e87d99008882c4d7321433616d7ee3b9a32063c398238decf?context=repo
> > > >
> > > > 
> > > >
> > >
> > https://hub.docker.com/layers/xiangyingmeng/pulsar-all/2.10.6/images/sha256-14013a1307c0c44ccb35526866c0775e5978fcb10beae5d5ffbdf4e73a029dc1?context=repo
> > > >
> > > > Please download the source package, and follow the README to build
> > > > and run the Pulsar standalone service.
> > > >
> > >
> >


[VOTE] Pulsar Release 2.10.6 Candidate 2

2024-03-07 Thread Xiangying Meng
This is the first release candidate for Apache Pulsar, version 2.10.6.

It fixes the following issues:
https://github.com/apache/pulsar/pulls?q=is:pr+label:cherry-picked/branch-2.10+label:release/2.10.6+is:closed

*** Please download, test and vote on this release. This vote will stay open
for at least 72 hours ***

Note that we are voting upon the source (tag), binaries are provided for
convenience.

Source and binary files:
https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.10.6-candidate-2/

SHA-512 checksums:
699a18b25828dc1274debb4d20cb22e9700935849243ea1892a15ad7b40c67d42ba46574e96356c39f4963183f080fb154fb42002559469eb1d240ceadf7a76c
 apache-pulsar-2.10.6-bin.tar.gz
fe7230e6c939b4da8da7c80b41611fd05336f83caeea981d16a749988cf4cb10cce64703c2c62aaec9f7ed1a4b353b1f68f8800b5befafe3264a5504750a2b6a
 apache-pulsar-2.10.6-src.tar.gz

Maven staging repo:
https://repository.apache.org/content/repositories/orgapachepulsar-1274

The tag to be voted upon:
v2.10.6-candidate-2 (a76ddbe5af523b4aa541a2272c58f685ef05859f)
https://github.com/apache/pulsar/releases/tag/v2.10.6-candidate-2

Pulsar's KEYS file containing PGP keys you use to sign the release:
https://downloads.apache.org/pulsar/KEYS

Docker images:


https://hub.docker.com/layers/xiangyingmeng/pulsar/2.10.6/images/sha256-743a4b5d79708b7e87d99008882c4d7321433616d7ee3b9a32063c398238decf?context=repo


https://hub.docker.com/layers/xiangyingmeng/pulsar-all/2.10.6/images/sha256-14013a1307c0c44ccb35526866c0775e5978fcb10beae5d5ffbdf4e73a029dc1?context=repo

Please download the source package, and follow the README to build
and run the Pulsar standalone service.


Re: [VOTE] Pulsar Release 2.10.6 Candidate 1

2024-03-07 Thread Xiangying Meng
Closing this candidate, as we have a new fix, #22202, that needs to be
included in the release. I will raise a new candidate soon.


Regards

On Fri, Mar 8, 2024 at 10:06 AM Xiangying Meng  wrote:
>
> That's right.
> Maybe we need some discussion to update our release policy.
>
> Thanks
> Xiangying
>
> On Wed, Mar 6, 2024 at 5:09 PM Zixuan Liu  wrote:
> >
> > It sounds like the version release was triggered due to security issues. If
> > so, I think we need to update our release policy.
> >
> > Only when a fatal security issue occurs can we trigger a release of a new
> > version, but we also need to clarify the maintenance cycle, otherwise this
> > maintenance is endless.
> >
> > Thanks,
> > Zixuan
> >
> > Xiangying Meng  于2024年3月6日周三 16:45写道:
> >
> > > Dear Zixuan,
> > >
> > > Thank you for your email and your ongoing commitment to the Pulsar 
> > > project.
> > >
> > > I wanted to clarify that this release, 2.10.6, is a special case. It
> > > was primarily focused on addressing certain security issues that were
> > > deemed critical. This decision was made following internal discussions
> > > within the PMC.
> > >
> > > I completely understand and respect the release policy defined by
> > > Pulsar [0]. Under normal circumstances, we would indeed follow the
> > > policy and consider version 2.10 as EOL, ceasing further maintenance.
> > >
> > > However, given the exceptional nature of this release and the
> > > importance of the security issues it addresses, we felt it was
> > > necessary to make an exception in this case.
> > >
> > > Thank you for your understanding and for bringing this to our
> > > attention. We appreciate your diligence in adhering to Pulsar's
> > > release policy.
> > >
> > > Best regards,
> > >
> > > Xiangying
> > >
> > > On Wed, Mar 6, 2024 at 4:22 PM Zixuan Liu  wrote:
> > > >
> > > > Thank you for releasing 2.10.6.
> > > >
> > > > According to the release policy defined [0] by Pulsar, this version is
> > > EOL and
> > > > does not require further maintenance.
> > > >
> > > > If we need to continue to maintain the 2.10, we must discuss the
> > > > maintenance lifecycle of the 2.10, and update our doc.
> > > >
> > > > - [0] https://pulsar.apache.org/contribute/release-policy/
> > > >
> > > > Thanks,
> > > > Zixuan
> > > >
> > > >
> > > > Xiangying Meng  于2024年3月6日周三 11:15写道:
> > > >
> > > > > This is the first release candidate for Apache Pulsar, version 2.10.6.
> > > > >
> > > > > It fixes the following issues:
> > > > >
> > > > >
> > > https://github.com/apache/pulsar/pulls?q=is:pr+label:cherry-picked/branch-2.10+label:release/2.10.6+is:closed
> > > > >
> > > > > *** Please download, test and vote on this release. This vote will 
> > > > > stay
> > > > > open
> > > > > for at least 72 hours ***
> > > > >
> > > > > Note that we are voting upon the source (tag), binaries are provided
> > > for
> > > > > convenience.
> > > > >
> > > > > Source and binary files:
> > > > >
> > > https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.10.6-candidate-1/
> > > > >
> > > > > SHA-512 checksums:
> > > > >
> > > > >
> > > 09f29265f8173331d4c05b470c4e77a31146172b27ef333f45d8c8a19074ef25061cb1e80872fc45c323c9ce8e2e17989c6df5d991ef84c4d245197303d9e6d7
> > > > >  apache-pulsar-2.10.6-bin.tar.gz
> > > > >
> > > > >
> > > 49c8836882818c6f38748dae26b51c598f163606c16993a3287ab1ce9f853a4aaa43c6729c1b6f6957738b4dead3818cd12026da68b328eb2d4ac0d0214957bb
> > > > >  apache-pulsar-2.10.6-src.tar.gz
> > > > >
> > > > > Maven staging repo:
> > > > >
> > > https://repository.apache.org/content/repositories/orgapachepulsar-1270
> > > > >
> > > > > The tag to be voted upon:
> > > > > v2.10.6-candidate-1 (9c29b76ff2be865429ad44df8683aec80deacfba)
> > > > > https://github.com/apache/pulsar/releases/tag/v2.10.6-candidate-1
> > > > >
> > > > > Pulsar's KEYS file containing PGP keys you use to sign the release:
> > > > > https://downloads.apache.org/pulsar/KEYS
> > > > >
> > > > > Docker images:
> > > > >
> > > > > 
> > > > >
> > > > >
> > > https://hub.docker.com/layers/xiangyingmeng/pulsar/2.10.6/images/sha256-bf8f36e49ff44ef810ab2c76742121205e51d3a04c79afdb5d288c7d8a06443f?context=repo
> > > > >
> > > > > 
> > > > >
> > > > >
> > > https://hub.docker.com/layers/xiangyingmeng/pulsar-all/2.10.6/images/sha256-1b3a10db12f6d5a0acd2d4ed73eb11864b6b598294bb905b6ede34aef1157f23?context=repo
> > > > >
> > > > > Please download the source package, and follow the README to build
> > > > > and run the Pulsar standalone service.
> > > > >
> > >


Re: [VOTE] Pulsar Release 2.10.6 Candidate 1

2024-03-07 Thread Xiangying Meng
That's right.
Maybe we need some discussion to update our release policy.

Thanks
Xiangying

On Wed, Mar 6, 2024 at 5:09 PM Zixuan Liu  wrote:
>
> It sounds like the version release was triggered due to security issues. If
> so, I think we need to update our release policy.
>
> Only when a fatal security issue occurs can we trigger a release of a new
> version, but we also need to clarify the maintenance cycle, otherwise this
> maintenance is endless.
>
> Thanks,
> Zixuan
>
> Xiangying Meng  于2024年3月6日周三 16:45写道:
>
> > Dear Zixuan,
> >
> > Thank you for your email and your ongoing commitment to the Pulsar project.
> >
> > I wanted to clarify that this release, 2.10.6, is a special case. It
> > was primarily focused on addressing certain security issues that were
> > deemed critical. This decision was made following internal discussions
> > within the PMC.
> >
> > I completely understand and respect the release policy defined by
> > Pulsar [0]. Under normal circumstances, we would indeed follow the
> > policy and consider version 2.10 as EOL, ceasing further maintenance.
> >
> > However, given the exceptional nature of this release and the
> > importance of the security issues it addresses, we felt it was
> > necessary to make an exception in this case.
> >
> > Thank you for your understanding and for bringing this to our
> > attention. We appreciate your diligence in adhering to Pulsar's
> > release policy.
> >
> > Best regards,
> >
> > Xiangying
> >
> > On Wed, Mar 6, 2024 at 4:22 PM Zixuan Liu  wrote:
> > >
> > > Thank you for releasing 2.10.6.
> > >
> > > According to the release policy defined [0] by Pulsar, this version is
> > EOL and
> > > does not require further maintenance.
> > >
> > > If we need to continue to maintain the 2.10, we must discuss the
> > > maintenance lifecycle of the 2.10, and update our doc.
> > >
> > > - [0] https://pulsar.apache.org/contribute/release-policy/
> > >
> > > Thanks,
> > > Zixuan
> > >
> > >
> > > Xiangying Meng  于2024年3月6日周三 11:15写道:
> > >
> > > > This is the first release candidate for Apache Pulsar, version 2.10.6.
> > > >
> > > > It fixes the following issues:
> > > >
> > > >
> > https://github.com/apache/pulsar/pulls?q=is:pr+label:cherry-picked/branch-2.10+label:release/2.10.6+is:closed
> > > >
> > > > *** Please download, test and vote on this release. This vote will stay
> > > > open
> > > > for at least 72 hours ***
> > > >
> > > > Note that we are voting upon the source (tag), binaries are provided
> > for
> > > > convenience.
> > > >
> > > > Source and binary files:
> > > >
> > https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.10.6-candidate-1/
> > > >
> > > > SHA-512 checksums:
> > > >
> > > >
> > 09f29265f8173331d4c05b470c4e77a31146172b27ef333f45d8c8a19074ef25061cb1e80872fc45c323c9ce8e2e17989c6df5d991ef84c4d245197303d9e6d7
> > > >  apache-pulsar-2.10.6-bin.tar.gz
> > > >
> > > >
> > 49c8836882818c6f38748dae26b51c598f163606c16993a3287ab1ce9f853a4aaa43c6729c1b6f6957738b4dead3818cd12026da68b328eb2d4ac0d0214957bb
> > > >  apache-pulsar-2.10.6-src.tar.gz
> > > >
> > > > Maven staging repo:
> > > >
> > https://repository.apache.org/content/repositories/orgapachepulsar-1270
> > > >
> > > > The tag to be voted upon:
> > > > v2.10.6-candidate-1 (9c29b76ff2be865429ad44df8683aec80deacfba)
> > > > https://github.com/apache/pulsar/releases/tag/v2.10.6-candidate-1
> > > >
> > > > Pulsar's KEYS file containing PGP keys you use to sign the release:
> > > > https://downloads.apache.org/pulsar/KEYS
> > > >
> > > > Docker images:
> > > >
> > > > 
> > > >
> > > >
> > https://hub.docker.com/layers/xiangyingmeng/pulsar/2.10.6/images/sha256-bf8f36e49ff44ef810ab2c76742121205e51d3a04c79afdb5d288c7d8a06443f?context=repo
> > > >
> > > > 
> > > >
> > > >
> > https://hub.docker.com/layers/xiangyingmeng/pulsar-all/2.10.6/images/sha256-1b3a10db12f6d5a0acd2d4ed73eb11864b6b598294bb905b6ede34aef1157f23?context=repo
> > > >
> > > > Please download the source package, and follow the README to build
> > > > and run the Pulsar standalone service.
> > > >
> >


Re: [VOTE] Pulsar Release 2.10.6 Candidate 1

2024-03-06 Thread Xiangying Meng
Dear Zixuan,

Thank you for your email and your ongoing commitment to the Pulsar project.

I wanted to clarify that this release, 2.10.6, is a special case. It
was primarily focused on addressing certain security issues that were
deemed critical. This decision was made following internal discussions
within the PMC.

I completely understand and respect the release policy defined by
Pulsar [0]. Under normal circumstances, we would indeed follow the
policy and consider version 2.10 as EOL, ceasing further maintenance.

However, given the exceptional nature of this release and the
importance of the security issues it addresses, we felt it was
necessary to make an exception in this case.

Thank you for your understanding and for bringing this to our
attention. We appreciate your diligence in adhering to Pulsar's
release policy.

Best regards,

Xiangying

On Wed, Mar 6, 2024 at 4:22 PM Zixuan Liu  wrote:
>
> Thank you for releasing 2.10.6.
>
> According to the release policy defined [0] by Pulsar, this version is EOL and
> does not require further maintenance.
>
> If we need to continue to maintain the 2.10, we must discuss the
> maintenance lifecycle of the 2.10, and update our doc.
>
> - [0] https://pulsar.apache.org/contribute/release-policy/
>
> Thanks,
> Zixuan
>
>
> Xiangying Meng  于2024年3月6日周三 11:15写道:
>
> > This is the first release candidate for Apache Pulsar, version 2.10.6.
> >
> > It fixes the following issues:
> >
> > https://github.com/apache/pulsar/pulls?q=is:pr+label:cherry-picked/branch-2.10+label:release/2.10.6+is:closed
> >
> > *** Please download, test and vote on this release. This vote will stay
> > open
> > for at least 72 hours ***
> >
> > Note that we are voting upon the source (tag), binaries are provided for
> > convenience.
> >
> > Source and binary files:
> > https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.10.6-candidate-1/
> >
> > SHA-512 checksums:
> >
> > 09f29265f8173331d4c05b470c4e77a31146172b27ef333f45d8c8a19074ef25061cb1e80872fc45c323c9ce8e2e17989c6df5d991ef84c4d245197303d9e6d7
> >  apache-pulsar-2.10.6-bin.tar.gz
> >
> > 49c8836882818c6f38748dae26b51c598f163606c16993a3287ab1ce9f853a4aaa43c6729c1b6f6957738b4dead3818cd12026da68b328eb2d4ac0d0214957bb
> >  apache-pulsar-2.10.6-src.tar.gz
> >
> > Maven staging repo:
> > https://repository.apache.org/content/repositories/orgapachepulsar-1270
> >
> > The tag to be voted upon:
> > v2.10.6-candidate-1 (9c29b76ff2be865429ad44df8683aec80deacfba)
> > https://github.com/apache/pulsar/releases/tag/v2.10.6-candidate-1
> >
> > Pulsar's KEYS file containing PGP keys you use to sign the release:
> > https://downloads.apache.org/pulsar/KEYS
> >
> > Docker images:
> >
> > 
> >
> > https://hub.docker.com/layers/xiangyingmeng/pulsar/2.10.6/images/sha256-bf8f36e49ff44ef810ab2c76742121205e51d3a04c79afdb5d288c7d8a06443f?context=repo
> >
> > 
> >
> > https://hub.docker.com/layers/xiangyingmeng/pulsar-all/2.10.6/images/sha256-1b3a10db12f6d5a0acd2d4ed73eb11864b6b598294bb905b6ede34aef1157f23?context=repo
> >
> > Please download the source package, and follow the README to build
> > and run the Pulsar standalone service.
> >


[VOTE] Pulsar Release 2.10.6 Candidate 1

2024-03-05 Thread Xiangying Meng
This is the first release candidate for Apache Pulsar, version 2.10.6.

It fixes the following issues:
https://github.com/apache/pulsar/pulls?q=is:pr+label:cherry-picked/branch-2.10+label:release/2.10.6+is:closed

*** Please download, test and vote on this release. This vote will stay open
for at least 72 hours ***

Note that we are voting upon the source (tag), binaries are provided for
convenience.

Source and binary files:
https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.10.6-candidate-1/

SHA-512 checksums:
09f29265f8173331d4c05b470c4e77a31146172b27ef333f45d8c8a19074ef25061cb1e80872fc45c323c9ce8e2e17989c6df5d991ef84c4d245197303d9e6d7
 apache-pulsar-2.10.6-bin.tar.gz
49c8836882818c6f38748dae26b51c598f163606c16993a3287ab1ce9f853a4aaa43c6729c1b6f6957738b4dead3818cd12026da68b328eb2d4ac0d0214957bb
 apache-pulsar-2.10.6-src.tar.gz

Maven staging repo:
https://repository.apache.org/content/repositories/orgapachepulsar-1270

The tag to be voted upon:
v2.10.6-candidate-1 (9c29b76ff2be865429ad44df8683aec80deacfba)
https://github.com/apache/pulsar/releases/tag/v2.10.6-candidate-1

Pulsar's KEYS file containing PGP keys you use to sign the release:
https://downloads.apache.org/pulsar/KEYS

Docker images:


https://hub.docker.com/layers/xiangyingmeng/pulsar/2.10.6/images/sha256-bf8f36e49ff44ef810ab2c76742121205e51d3a04c79afdb5d288c7d8a06443f?context=repo


https://hub.docker.com/layers/xiangyingmeng/pulsar-all/2.10.6/images/sha256-1b3a10db12f6d5a0acd2d4ed73eb11864b6b598294bb905b6ede34aef1157f23?context=repo

Please download the source package, and follow the README to build
and run the Pulsar standalone service.


Re: [DISCUSS] Deletion of Current Ledger upon Rollover

2024-02-06 Thread Xiangying Meng
>The problem is that if the ledger is deleted, the next time a client
produces a message, a new ledger needs to be opened. This is an operation
that may take some time, disrupting latency.

Hi Enrico, I think there may be some misunderstandings. If the state of the
managerLedger is `ClosedLedger`, the current ledger is closed and cannot be
written to. So when a client sends new messages, it must open a new ledger.
This will cause latency regardless of whether the current ledger is
deleted, since the current ledger is already closed.

In the discussion of the PR[1], we found that there are different behaviors
in Pulsar after ledger rollover.
1. If the rollover is triggered in the process of adding an entry, a new
ledger will be created depending on whether there are pending write
operations.
2. In other cases, a new ledger will be created directly regardless of
whether there are pending write operations.

In the latest implementation of the PR, for the first situation, a new
ledger will be created directly even if there are no pending write
operations. Of course, this will increase the overhead of the bookkeeper.
But for normal production and consumption scenarios, there should be no
situation where the new ledger is not written to for a long time. Even for
inactive topics, users can handle this situation according to the
inactive_topic_policies.

[1] - https://github.com/apache/pulsar/pull/22034

On Tue, Feb 6, 2024 at 9:55 PM Enrico Olivelli  wrote:

> Xiangying
>
> Il giorno mar 6 feb 2024 alle ore 13:01 Xiangying Meng
>  ha scritto:
> >
> > Dear Community,
> >
> > I hope this message finds you well. I am writing to discuss a
> modification
> > to the behavior of deleting the current ledger. As you may know, in
> Pulsar,
> > the current ledger cannot be deleted because it may still be written to.
> > However, there is an exception. When the current ledger is rolled over,
> but
> > no new messages are written, the current ledger does not change. In this
> > case, the current ledger will not be written to, but it is also not
> deleted.
>
> I understand the problem and I was surprised about it the first time I saw
> it.
> The problem is that if you delete the ledger the next time a client
> produces a message the broker
> must open a new ledger and this is an operation that may take some
> time, disrupting latency.
>
> It is a trade-off, I know, in production you usually don't need to
> release the disk space.
> Maybe you have a use case in which you write a lot to a topic, then
> you stop writing ?
>
> Maybe you could "unload" the topic, and that will force a ledger
> rollover (with an impact on latency)
>
> Enrico
>
> >
> > This can be confusing for users, especially when they configure
> > `managedLedgerMaxLedgerRolloverTimeMinutes` and `retentionTimeInMinutes`.
> > They expect the current ledger to roll over and then be deleted after
> > `managedLedgerMaxLedgerRolloverTimeMinutes` and `retentionTimeInMinutes`.
> > However, in reality, while the current ledger does rollover, it is not
> > deleted.
> >
> > The purpose of this discussion is to consider deleting the current ledger
> > when it is rolled over. The specific implementation can be found at
> > https://github.com/apache/pulsar/pull/22034.
> >
> > Looking forward to a productive discussion.
> >
> > Best Regards,
> >
> > xiangying
>


[DISCUSS] Deletion of Current Ledger upon Rollover

2024-02-06 Thread Xiangying Meng
Dear Community,

I hope this message finds you well. I am writing to discuss a modification
to the behavior of deleting the current ledger. As you may know, in Pulsar,
the current ledger cannot be deleted because it may still be written to.
However, there is an exception. When the current ledger is rolled over, but
no new messages are written, the current ledger does not change. In this
case, the current ledger will not be written to, but it is also not deleted.

This can be confusing for users, especially when they configure
`managedLedgerMaxLedgerRolloverTimeMinutes` and `retentionTimeInMinutes`.
They expect the current ledger to roll over and then be deleted after
`managedLedgerMaxLedgerRolloverTimeMinutes` and `retentionTimeInMinutes`.
However, in reality, while the current ledger does rollover, it is not
deleted.

The purpose of this discussion is to consider deleting the current ledger
when it is rolled over. The specific implementation can be found at
https://github.com/apache/pulsar/pull/22034.

Looking forward to a productive discussion.

Best Regards,

xiangying
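
The two settings discussed in this thread can be sketched as a broker.conf fragment (values are illustrative, not recommendations; `defaultRetentionTimeInMinutes` is the broker-level default behind the namespace-level `retentionTimeInMinutes` policy):

```properties
# broker.conf (illustrative values)
# Roll the current ledger over after at most this many minutes,
# even if it has not reached its size/entry limits.
managedLedgerMaxLedgerRolloverTimeMinutes=240

# Default retention for acknowledged messages; closed ledgers past
# this window become candidates for deletion.
defaultRetentionTimeInMinutes=60
```

With the behavior described above, a ledger that rolls over but never receives new writes stays the "current" ledger and is not deleted, even after the retention window elapses.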


Re: [DISCUSS] PIP-335: Oxia metadata support

2024-01-31 Thread Xiangying Meng
+1

Thanks,
Xiangying

On Thu, Feb 1, 2024 at 11:20 AM Yubiao Feng
 wrote:

> +1
>
> Thanks
> Yubiao Feng
>
> On Thu, Feb 1, 2024 at 7:58 AM Matteo Merli 
> wrote:
>
> > https://github.com/apache/pulsar/pull/22009
> >
> > ===
> >
> > # PIP-335: Support Oxia metadata store plugin
> >
> > # Motivation
> >
> > Oxia is a scalable metadata store and coordination system that can be
> used
> > as the core infrastructure
> > to build large scale distributed systems.
> >
> > Oxia was created with the primary goal of providing an alternative for Pulsar
> > to replace ZooKeeper as the
> > long term preferred metadata store, overcoming all the current
> limitations
> > in terms of metadata
> > access throughput and data set size.
> >
> > # Goals
> >
> > Add a Pulsar MetadataStore plugin that uses Oxia client SDK.
> >
> > Users will be able to start a Pulsar cluster using just Oxia, without any
> > ZooKeeper involved.
> >
> > ## Not in Scope
> >
> > It's not in the scope of this proposal to change any default behavior or
> > configuration of Pulsar.
> >
> > # Detailed Design
> >
> > ## Design & Implementation Details
> >
> > Oxia semantics and client SDK were already designed with Pulsar and
> > MetadataStore plugin API in mind, so
> > there is not much integration work that needs to be done here.
> >
> > Just a few notes:
> >  1. Oxia client already provides support for transparent batching of read
> > and write operations,
> > so there will be no use of the batching logic in
> > `AbstractBatchedMetadataStore`
> >  2. Oxia does not treat keys as a walkable file-system like interface,
> with
> > directories and files. Instead
> > all the keys are independent. Though Oxia sorting of keys is aware of
> > '/' and provides efficient key
> > range scanning operations to identify the first level children of a
> > given key
> >  3. Oxia, unlike ZooKeeper, doesn't require the parent path of a key to
> > exist. eg: we can create `/a/b/c` key
> > without `/a/b` and `/a` existing.
> > In the Pulsar integration for Oxia we're forcing to create all parent
> > keys when they are not there. This
> > is due to several places in BookKeeper access where it does not
> create
> > the parent keys, though it will
> > later make `getChildren()` operations on the parents.
> >
> > ## Other notes
> >
> > Unlike in the ZooKeeper implementation, the notification of events is
> > guaranteed in Oxia, because the Oxia
> > client SDK will use the transaction offset after server reconnections and
> > session restarted events. This
> > will ensure that brokers cache will always be properly invalidated. We
> will
> > then be able to remove the
> > current 5minutes automatic cache refresh which is in place to prevent the
> > ZooKeeper missed watch issue.
> >
> > # Links
> >
> > Oxia: https://github.com/streamnative/oxia
> > Oxia Java Client SDK: https://github.com/streamnative/oxia-java
> >
> >
> > --
> > Matteo Merli
> > 
> >
>
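
Once the plugin lands, pointing a broker at Oxia should only require changing the metadata store URLs; a hedged sketch (the `oxia://` scheme and the host/port are assumptions based on the plugin described above):

```properties
# broker.conf — minimal sketch, assuming the Oxia plugin registers
# the oxia:// scheme; host and port are placeholders.
metadataStoreUrl=oxia://oxia-svc:6648
configurationMetadataStoreUrl=oxia://oxia-svc:6648
```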


Re: [VOTE]PIP-321: Split the responsibilities of namespace replication-clusters

2024-01-19 Thread Xiangying Meng
Closing this vote with three binding votes:

- Jiwei Guo (Tboy)
- Penghui
- Yubiao Feng

On Fri, Jan 19, 2024 at 6:31 PM Yubiao Feng
 wrote:

> +1 (binding)
>
> Thanks
> Yubiao Feng
>
> On Tue, Jan 16, 2024 at 2:41 PM Xiangying Meng 
> wrote:
>
> > Dear Pulsar community,
> >
> > I am initiating a voting thread for "PIP-321: Split the responsibilities
> of
> > namespace replication-clusters".
> >
> > Here is the pull request for PIP-321:
> > https://github.com/apache/pulsar/pull/21648
> >
> > And the discussion thread:
> > https://lists.apache.org/thread/87qfp8ht5s0fvw2y4t3j9yzgfmdzmcnz
> >
> > Highlight:
> > This proposal introduces a new feature for topics, allowing a topic to be
> > loaded across different clusters without data replication between these
> > clusters. It also introduces more granular control over
> `allowed_clusters`
> > at the namespace level.
> >
> > Please review the PIP document and cast your vote!
> >
> > Thank you,
> >
> > Xiangying
> >
>


[VOTE]PIP-321: Split the responsibilities of namespace replication-clusters

2024-01-15 Thread Xiangying Meng
Dear Pulsar community,

I am initiating a voting thread for "PIP-321: Split the responsibilities of
namespace replication-clusters".

Here is the pull request for PIP-321:
https://github.com/apache/pulsar/pull/21648

And the discussion thread:
https://lists.apache.org/thread/87qfp8ht5s0fvw2y4t3j9yzgfmdzmcnz

Highlight:
This proposal introduces a new feature for topics, allowing a topic to be
loaded across different clusters without data replication between these
clusters. It also introduces more granular control over `allowed_clusters`
at the namespace level.

Please review the PIP document and cast your vote!

Thank you,

Xiangying


Re: [DISSCUSS] PIP-325: Add command to abort transaction

2023-12-25 Thread Xiangying Meng
Hi, Ruihong

This proposal looks good to me.

BR,
Xiangying

On Tue, Dec 19, 2023 at 8:13 PM Xiangying Meng  wrote:

> Hi, Ruihong
>
> Thanks for your proposal.
> I wonder whether we should abort all the transactions one client creates
> when the client crashes.
> For example, a process builds a Pulsar client and creates a transaction by
> this client to do some operations.
> If the process crashes, the transaction cannot be committed or aborted.
>
> In this case, aborting the transaction automatically when the client
> crashes may be more helpful than aborting them after restarting.
>
> BR,
> Xiangying
>
>
> On Mon, Dec 18, 2023 at 9:08 AM PengHui Li  wrote:
>
>> Hi, Ruihong
>>
>> The proposal looks good to me.
>> Just left a comment about the security considerations.
>> We need to have a clear permission definition for newly added admin API
>>
>> Regards,
>> Penghui
>>
>> On Sun, Dec 17, 2023 at 1:14 AM |海阔天高 <1373544...@qq.com.invalid> wrote:
>>
>> > Hi community,
>> >
>> >
>> > PIP-325 introduces a new API for aborting transactions, allowing
>> > administrators to proactively abort a transaction when it gets stuck,
>> thus
>> > preventing consumers from being blocked for an extended period.
>> >
>> >
>> > Hopes for discuss.
>> > PIP: https://github.com/apache/pulsar/pull/21731
>> > Related PR: https://github.com/apache/pulsar/pull/21630
>> >
>> >
>> > Thanks,
>> >
>> >
>> > Ruihong
>> >
>> >
>> > |海阔天高
>> > 1373544...@qq.com
>> >
>> >
>> >
>> > 
>>
>


Re: [DISSCUSS] PIP-325: Add command to abort transaction

2023-12-19 Thread Xiangying Meng
Hi, Ruihong

Thanks for your proposal.
I wonder whether we should abort all the transactions one client creates
when the client crashes.
For example, a process builds a Pulsar client and creates a transaction by
this client to do some operations.
If the process crashes, the transaction cannot be committed or aborted.

In this case, aborting the transaction automatically when the client
crashes may be more helpful than aborting them after restarting.

BR,
Xiangying


On Mon, Dec 18, 2023 at 9:08 AM PengHui Li  wrote:

> Hi, Ruihong
>
> The proposal looks good to me.
> Just left a comment about the security considerations.
> We need to have a clear permission definition for newly added admin API
>
> Regards,
> Penghui
>
> On Sun, Dec 17, 2023 at 1:14 AM |海阔天高 <1373544...@qq.com.invalid> wrote:
>
> > Hi community,
> >
> >
> > PIP-325 introduces a new API for aborting transactions, allowing
> > administrators to proactively abort a transaction when it gets stuck,
> thus
> > preventing consumers from being blocked for an extended period.
> >
> >
> > Hopes for discuss.
> > PIP: https://github.com/apache/pulsar/pull/21731
> > Related PR: https://github.com/apache/pulsar/pull/21630
> >
> >
> > Thanks,
> >
> >
> > Ruihong
> >
> >
> > |海阔天高
> > 1373544...@qq.com
> >
> >
> >
> > 
>


Re: [DISCUSS] PIP-321 Split the responsibilities of namespace replication-clusters

2023-12-17 Thread Xiangying Meng
Hi Penghui

>I'm sorry, I don't fully understand your point here. What is the "support
replication on message and topic level"?

>As I understand, are the `allowed-clusters` and `replication-clusters` more
concise options?

Pulsar supports setting replication clusters per message.
After this proposal, the `replication-clusters` at the topic/namespace
level could be the default value when the message level is not set, and
`allowed-clusters` could be the constraint when setting
`replication-cluster` for the message level.

I am struggling with whether we should introduce message level into the
title of this proposal.
Do you have any suggestions for the title of the proposal?

BR,
Xiangying


Re: [DISCUSS] PIP-321 Split the responsibilities of namespace replication-clusters

2023-12-17 Thread Xiangying Meng
Hi Jiwei

Great advice. Thanks for your suggestions and additions.

Thanks,
Xiangying

On Fri, Dec 15, 2023 at 9:41 AM guo jiwei  wrote:

> Hi Xiangying,
>I think  we can rename this PIP to:   *Introduce `allowed-clusters` and
> `topic-policy-synchronized-clusters` to fully support replication on
> message and topic level*
>Currently, we can set replication clusters on the message and topic
> level, but the replication clusters should be a subset of the namespace
> replication clusters, which means:
>If we set namespace replication clusters: cluster1, cluster2, cluster3,
> at most, these three or two clusters can be set on message or topic. If the
> user wants to set cluster4 or others, the replication
> can't work as expected.
> It's easy to reproduce with this test:
>
> @Test
> public void testEnableReplicationInTopicLevel() throws Exception {
>     // 1. Create namespace and topic
>     String namespace = BrokerTestUtil.newUniqueName(
>             "pulsar/testEnableReplicationInTopicLevel");
>     String topic1 = "persistent://" + namespace + "/topic-1";
>     admin1.namespaces().createNamespace(namespace);
>     admin1.topics().createNonPartitionedTopic(topic1);
>
>     // 2. Configure replication clusters for the topic.
>     admin1.topics().setReplicationClusters(topic1, List.of("r1", "r2"));
>
>     // 3. Check if the replicator connected successfully.
>     Awaitility.await().atMost(5, TimeUnit.MINUTES).untilAsserted(() -> {
>         List<String> keys = pulsar1.getBrokerService()
>                 .getTopic(topic1, false).get().get()
>                 .getReplicators().keys();
>         assertEquals(keys.size(), 1);
>         assertTrue(pulsar1.getBrokerService()
>                 .getTopic(topic1, false).get().get()
>                 .getReplicators().get(keys.get(0)).isConnected());
>     });
> }
>
>
>   To fully support the replication, we found an easy way to solve it.
> Introduce `allowed-clusters` on namespace policies, which Xiangying
> explains above.
>   How could this work and solve the issue? The same example :
>   If we set namespace replication clusters: cluster1, cluster2, cluster3,
> and
>set topic1 replication clusters: cluster2, cluster4.
>set topic2 replication clusters: cluster1, cluster4.
>   We must set `allowed-clusters` with cluster1, cluster2, cluster3, and
> cluster4.  The broker side will validate the topic or message replication
> clusters from the `allowed-clusters`.
>   In this way,  we can simplify more codes and logic here.
>   For *`topic-policy-synchronized-clusters`*, we also added examples in the
> PIP.
>
>   Hope the explanation could help @Rajan @Girish
>
>
>
>
> Regards
> Jiwei Guo (Tboy)
>
>
> On Thu, Dec 7, 2023 at 10:29 PM Xiangying Meng 
> wrote:
>
> > Hi Girish,
> >
> > I'm very pleased that we have reached some consensus now. Pulsar already
> > supports geo-replication at the topic level, but the existing
> > implementation of this topic level replication does not match our
> > expectations.
> >
> > At the moment, I can think of three directions to solve this problem:
> >
> > 1. Treat this issue as a bug and fix it so that Pulsar can truly support
> > replication at the topic level.
> > 2. Limit the replication topic policy, so that the replication clusters
> at
> > the topic level must be included in the replication clusters configured
> at
> > the namespace level. In this case, the topic level replication would
> serve
> > as a supplement to the namespace replication, rather than a true topic
> > level policy.
> > 3. Remove topic level replication.
> >
> > I lean towards the first option, as it would make Pulsar's replication
> > configuration more flexible and would not break the previous behavior
> > logic.
> >
> > >Yes, that's my viewpoint. In case that's not your view point, then in
> your
> > >use cases do you ever have more than one namespace inside a tenant?
> > >With every property coming at topic level, it makes no sense for the
> > >namespace hierarchy to exist anymore.
> >
> > I didn't propose this from the perspective of a user, but from the
> > perspective of a Pulsar maintainer. The replication cluster at the topic
> > level cannot function independently like other topic policies, and I
> > attempted to fix it after finding the reason.
> >
> > From the user's perspective, I c

Re: [DISCUSS] PIP-321 Split the responsibilities of namespace replication-clusters

2023-12-07 Thread Xiangying Meng
Hi Girish,

I'm very pleased that we have reached some consensus now. Pulsar already
supports geo-replication at the topic level, but the existing
implementation of this topic level replication does not match our
expectations.

At the moment, I can think of three directions to solve this problem:

1. Treat this issue as a bug and fix it so that Pulsar can truly support
replication at the topic level.
2. Limit the replication topic policy, so that the replication clusters at
the topic level must be included in the replication clusters configured at
the namespace level. In this case, the topic level replication would serve
as a supplement to the namespace replication, rather than a true topic
level policy.
3. Remove topic level replication.

I lean towards the first option, as it would make Pulsar's replication
configuration more flexible and would not break the previous behavior logic.

>Yes, that's my viewpoint. In case that's not your view point, then in your
>use cases do you ever have more than one namespace inside a tenant?
>With every property coming at topic level, it makes no sense for the
>namespace hierarchy to exist anymore.

I didn't propose this from the perspective of a user, but from the
perspective of a Pulsar maintainer. The replication cluster at the topic
level cannot function independently like other topic policies, and I
attempted to fix it after finding the reason.

From the user's perspective, I could modify my system to put topics with
the same replication strategy under the same namespace. From the
maintainer's perspective, if a feature can help users use Pulsar more
flexibly and conveniently without introducing risks, then this feature
should be implemented. Perhaps business systems do not want to maintain too
many namespaces, as they would need to configure multiple namespace
policies or it might make their business logic complex. The other
configurations for topics under this namespace might be consistent, with
only a few topics needing to enable replication. In this case, topic level
replication becomes valuable. Therefore, I lean towards the first option,
to solve this problem and make it a truly expected topic policy.

On Thu, Dec 7, 2023 at 12:45 PM Girish Sharma 
wrote:

> Hello Xiangying,
>
>
> On Thu, Dec 7, 2023 at 6:32 AM Xiangying Meng 
> wrote:
>
> > Hi Girish,
> >
> > What you are actually opposing is the implementation of true topic-level
> > geo-replication. You believe that topics should be divided into different
> > namespaces based on replication. Following this line of thought, what we
> > should do is restrict the current topic-level replication settings, not
> > allowing the replication clusters set at the topic level to exceed the
> > range of replication clusters set in the namespace.
> >
>
> Yes, that's my viewpoint. In case that's not your view point, then in your
> use cases do you ever have more than one namespace inside a tenant?
> With every property coming at topic level, it makes no sense for the
> namespace hierarchy to exist anymore.
>
>
> >
> > One point that confuses me is that we provide a setting for topic-level
> > replication clusters, but it can only be used to amend the namespace
> > settings and cannot work independently. Isn't this also a poor design for
> > Pulsar?
> >
>
> This feature was originally added in pulsar without a PIP. And the PR [0]
> also doesn't have much context around why it was needed and why it is being
> added.. So I can't comment on why this was added..
> But my understanding is that even in a situation when the topics are
> divided into proper namespaces based on use cases and suddenly there is an
> exceptional need for one of the existing topics to have lesser replication,
> then instead of following a long exercise of moving that topic to a new
> namespace, you can use this feature.
>
> [0] - https://github.com/apache/pulsar/pull/12136
>
>
>
> >
> > On Thu, Dec 7, 2023 at 2:28 AM Girish Sharma 
> > wrote:
> >
> > > Hello, replies inline.
> > >
> > > On Wed, Dec 6, 2023 at 5:28 PM Xiangying Meng 
> > > wrote:
> > >
> > > > Hi Girish,
> > > >
> > > > Thank you for your explanation. Because Joe's email referenced the
> > > current
> > > > implementation of Pulsar, I misunderstood him to be saying that this
> > > > current implementation is not good.
> > > >
> > > > A possible use case is where there is one or a small number of topics
> > in
> > > > the namespace that store important messages, which need to be
> > replicated
> > > to
> > > > other clusters. Meanwhile, other topics only need to store data in
> the
> > >

Re: [DISCUSS] PIP-321 Split the responsibilities of namespace replication-clusters

2023-12-06 Thread Xiangying Meng
Hi Girish,

What you are actually opposing is the implementation of true topic-level
geo-replication. You believe that topics should be divided into different
namespaces based on replication. Following this line of thought, what we
should do is restrict the current topic-level replication settings, not
allowing the replication clusters set at the topic level to exceed the
range of replication clusters set in the namespace.

One point that confuses me is that we provide a setting for topic-level
replication clusters, but it can only be used to amend the namespace
settings and cannot work independently. Isn't this also a poor design for
Pulsar?

On Thu, Dec 7, 2023 at 2:28 AM Girish Sharma 
wrote:

> Hello, replies inline.
>
> On Wed, Dec 6, 2023 at 5:28 PM Xiangying Meng 
> wrote:
>
> > Hi Girish,
> >
> > Thank you for your explanation. Because Joe's email referenced the
> current
> > implementation of Pulsar, I misunderstood him to be saying that this
> > current implementation is not good.
> >
> > A possible use case is where there is one or a small number of topics in
> > the namespace that store important messages, which need to be replicated
> to
> > other clusters. Meanwhile, other topics only need to store data in the
> > local cluster.
> >
>
> Is it not possible to simply have the other topics in a namespace which
> allows for that other cluster, and the local topics remain in the namespace
> with local cluster needs. Seems to me like a proper use case of two
> different namespaces as the use case is different in both cases.
>
>
>
> >
> > For example, only topic1 needs replication, while topic2 to topic100 do
> > not. According to the current implementation, we need to set replication
> > clusters at the namespace level (e.g. cluster1 and cluster2), and then
> set
> > the topic-level replication clusters (cluster1) for topic2 to topic100 to
> > exclude them. It's hard to say that this is a good design.
> >
>
> No, all you need is to put topic1 in namespace1 and topic2 to topic100 in
> namespace2 . This is exactly what me and Joe were saying is a bad design
> choice that you are clubbing all 100 topics in same namespace.
>
>
>
> >
> > Best regards.
> >
> > On Wed, Dec 6, 2023 at 12:49 PM Joe F  wrote:
> >
> > > Girish,
> > >
> > > Thank you for making my point much better than I did ..
> > >
> > > -Joe
> > >
> > > On Tue, Dec 5, 2023 at 1:45 AM Girish Sharma 
> > > wrote:
> > >
> > > > Hello Xiangying,
> > > >
> > > > I believe what Joe here is referring to as "application design" is
> not
> > > the
> > > > design of pulsar or namespace level replication but the design of
> your
> > > > application and the dependency that you've put on topic level
> > > replication.
> > > >
> > > > In general, I am aligned with Joe from an application design
> > standpoint.
> > > A
> > > > namespace is supposed to represent a single application use case,
> topic
> > > > level override of replication clusters helps in cases where there
> are a
> > > few
> > > > exceptional topics which do not need replication in all of the
> > namespace
> > > > clusters. This helps in saving network bandwidth, storage, CPU, RAM
> etc
> > > >
> > > > But the reason why you've raised this PIP is to bring down the actual
> > > > replication semantics at a topic level. Yes, namespace level still
> > exists
> > > > as per your PIP as well, but is basically left only to be a "default
> in
> > > > case topic level is missing".
> > > > This brings me to a very basic question - What's the use case that
> you
> > > are
> > > > trying to solve that needs these changes? Because, then what's
> stopping
> > > us
> > > > from bringing every construct that's at a namespace level (bundling,
> > > > hardware affinity, etc) down to a topic level?
> > > >
> > > > Regards
> > > >
> > > > On Tue, Dec 5, 2023 at 2:52 PM Xiangying Meng 
> > > > wrote:
> > > >
> > > > > Hi Joe,
> > > > >
> > > > > You're correct. The initial design of the replication policy leaves
> > > room
> > > > > for improvement. To address this, we aim to refine the cluster
> > settings
> > > > at
> > > > > the namespace level in a way that won't impact the existing system.
>

Re: [DISCUSS] PIP-321 Split the responsibilities of namespace replication-clusters

2023-12-06 Thread Xiangying Meng
Hi Girish,

>But the reason why you've raised this PIP is to bring down the actual
replication semantics at a topic level. Yes, namespace level still exists
as per your PIP as well, but is basically left only to be a "default in
case topic level is missing".

I'm afraid there's some misunderstanding here. According to the Pulsar
website, replication can actually be enabled at the topic level.

>You can enable geo-replication at namespace or topic level. [1]

So, it's not this proposal that introduces topic-level replication
semantics. Prior to this, topic-level replication was constrained by the
namespace-level replication policy. There are just some problems here. If
replication was not configured at the namespace level, then topic-level
replication would also be ineffective. Moreover, users would not be aware
of this replication failure.

> Yes, namespace level still exists as per your PIP as well, but is
basically left only to be a "default in case topic level is missing".

This behavior is consistent with the current behavior of Pulsar and is not
something introduced by this proposal.
This proposal introduces an `allowed-cluster` configuration at the
namespace level.
As the website states, you can enable replication at either the namespace
or topic level.
But if you only enable replication at the topic level, the replication
configuration would not take effect prior to this proposal.

Before this proposal: even though the topic policy can be updated
successfully, topic1 cannot be created in cluster2.
```
Namespace policy {replication clusters -> local cluster(cluster1)}, topic1
policy {replication clusters {cluster1, cluster2}}
```
After this proposal: you can set allowed clusters at the namespace level,
which specifies the clusters where topics under this namespace are allowed.
Then, the topic-level replication would also be effective, as described on
the Pulsar website.
```
Namespace policy {replication clusters -> local cluster(cluster1), allowed
clusters -> {cluster1, cluster2, cluster3}}, topic1 policy {replication
clusters {cluster1, cluster2}}
```

[1]
https://pulsar.apache.org/docs/3.1.x/administration-geo/#enable-geo-replication
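
The constraint described here — topic-level (or message-level) replication clusters must be drawn from the namespace's allowed clusters — can be illustrated with a toy check (not Pulsar code; the function name and cluster names are ours):

```shell
# Illustrative subset check, mirroring the validation PIP-321 proposes:
# every cluster requested at the topic level must appear in the
# namespace's allowed clusters.
check_topic_clusters() {
  allowed="$1"; requested="$2"
  for c in $requested; do
    case " $allowed " in
      *" $c "*) echo "ok: $c" ;;
      *) echo "rejected: $c (not in allowed clusters)"; return 1 ;;
    esac
  done
}

# Namespace allows cluster1-3; the topic asks for cluster1 and cluster4,
# so the request is rejected:
check_topic_clusters "cluster1 cluster2 cluster3" "cluster1 cluster4" \
  || echo "topic policy rejected"
```

Before this proposal the same check ran against the namespace's replication clusters, which is why a topic-only replication setting silently failed when the namespace policy was unset.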


On Wed, Dec 6, 2023 at 7:57 PM Xiangying Meng  wrote:

> Hi Girish,
>
> Thank you for your explanation. Because Joe's email referenced the current
> implementation of Pulsar, I misunderstood him to be saying that this
> current implementation is not good.
>
> A possible use case is where there is one or a small number of topics in
> the namespace that store important messages, which need to be replicated to
> other clusters. Meanwhile, other topics only need to store data in the
> local cluster.
>
> For example, only topic1 needs replication, while topic2 to topic100 do
> not. According to the current implementation, we need to set replication
> clusters at the namespace level (e.g. cluster1 and cluster2), and then set
> the topic-level replication clusters (cluster1) for topic2 to topic100 to
> exclude them. It's hard to say that this is a good design.
>
> Best regards.
>
> On Wed, Dec 6, 2023 at 12:49 PM Joe F  wrote:
>
>> Girish,
>>
>> Thank you for making my point much better than I did ..
>>
>> -Joe
>>
>> On Tue, Dec 5, 2023 at 1:45 AM Girish Sharma 
>> wrote:
>>
>> > Hello Xiangying,
>> >
>> > I believe what Joe here is referring to as "application design" is not
>> the
>> > design of pulsar or namespace level replication but the design of your
>> > application and the dependency that you've put on topic level
>> replication.
>> >
>> > In general, I am aligned with Joe from an application design
>> standpoint. A
>> > namespace is supposed to represent a single application use case, topic
>> > level override of replication clusters helps in cases where there are a
>> few
>> > exceptional topics which do not need replication in all of the namespace
>> > clusters. This helps in saving network bandwidth, storage, CPU, RAM etc
>> >
>> > But the reason why you've raised this PIP is to bring down the actual
>> > replication semantics at a topic level. Yes, namespace level still
>> exists
>> > as per your PIP as well, but is basically left only to be a "default in
>> > case topic level is missing".
>> > This brings me to a very basic question - What's the use case that you
>> are
>> > trying to solve that needs these changes? Because, then what's stopping
>> us
>> > from bringing every construct that's at a namespace level (bundling,
>> > hardware affinity, etc) down to a topic level?
>> >
>> > Regards
>> >
>> > On Tue, Dec 5, 2023 at 2:52

Re: [DISCUSS] PIP-321 Split the responsibilities of namespace replication-clusters

2023-12-06 Thread Xiangying Meng
Hi Girish,

Thank you for your explanation. Because Joe's email referenced the current
implementation of Pulsar, I misunderstood him to be saying that this
current implementation is not good.

A possible use case is where there is one or a small number of topics in
the namespace that store important messages, which need to be replicated to
other clusters. Meanwhile, other topics only need to store data in the
local cluster.

For example, only topic1 needs replication, while topic2 to topic100 do
not. According to the current implementation, we need to set replication
clusters at the namespace level (e.g. cluster1 and cluster2), and then set
the topic-level replication clusters (cluster1) for topic2 to topic100 to
exclude them. It's hard to say that this is a good design.

Best regards.

On Wed, Dec 6, 2023 at 12:49 PM Joe F  wrote:

> Girish,
>
> Thank you for making my point much better than I did ..
>
> -Joe
>
> On Tue, Dec 5, 2023 at 1:45 AM Girish Sharma 
> wrote:
>
> > Hello Xiangying,
> >
> > I believe what Joe here is referring to as "application design" is not
> the
> > design of pulsar or namespace level replication but the design of your
> > application and the dependency that you've put on topic level
> replication.
> >
> > In general, I am aligned with Joe from an application design standpoint.
> A
> > namespace is supposed to represent a single application use case, topic
> > level override of replication clusters helps in cases where there are a
> few
> > exceptional topics which do not need replication in all of the namespace
> > clusters. This helps in saving network bandwidth, storage, CPU, RAM etc
> >
> > But the reason why you've raised this PIP is to bring down the actual
> > replication semantics at a topic level. Yes, namespace level still exists
> > as per your PIP as well, but is basically left only to be a "default in
> > case topic level is missing".
> > This brings me to a very basic question - What's the use case that you
> are
> > trying to solve that needs these changes? Because, then what's stopping
> us
> > from bringing every construct that's at a namespace level (bundling,
> > hardware affinity, etc) down to a topic level?
> >
> > Regards
> >
> > On Tue, Dec 5, 2023 at 2:52 PM Xiangying Meng 
> > wrote:
> >
> > > Hi Joe,
> > >
> > > You're correct. The initial design of the replication policy leaves
> room
> > > for improvement. To address this, we aim to refine the cluster settings
> > at
> > > the namespace level in a way that won't impact the existing system. The
> > > replication clusters should solely be used to establish full mesh
> > > replication for that specific namespace, without having any other
> > > definitions or functionalities.
> > >
> > > BR,
> > > Xiangying
> > >
> > >
> > > On Mon, Dec 4, 2023 at 1:52 PM Joe F  wrote:
> > >
> > > > >if users want to change the replication policy for
> > > > topic-n and do not change the replication policy of other topics,
> they
> > > need
> > > > to change all the topic policy under this namespace.
> > > >
> > > > This PIP unfortunately  flows from  attempting to solve bad
> application
> > > > design
> > > >
> > > > A namespace is supposed to represent an application, and the
> namespace
> > > > policy is an umbrella for a similar set of policies  that applies to
> > all
> > > > topics.  The exceptions would be if a topic had a need for a deficit,
> > The
> > > > case of one topic in the namespace sticking out of the namespace
> policy
> > > > umbrella is bad  application design in my opinion
> > > >
> > > > -Joe.
> > > >
> > > >
> > > >
> > > > On Sun, Dec 3, 2023 at 6:00 PM Xiangying Meng 
> > > > wrote:
> > > >
> > > > > Hi Rajan and Girish,
> > > > > Thanks for your reply. About the question you mentioned, there is
> > some
> > > > > information I want to share with you.
> > > > > >If anyone wants to setup different replication clusters then
> either
> > > > > >those topics can be created under different namespaces or defined
> at
> > > > topic
> > > > > >level policy.
> > > > >
> > > > > >And users can anyway go and update the namespace's cluster list to
> > add
> > > > the
> > > > > >missing 

Re: [DISCUSS] PIP-321 Split the responsibilities of namespace replication-clusters

2023-12-05 Thread Xiangying Meng
Hi Joe,

You're correct. The initial design of the replication policy leaves room
for improvement. To address this, we aim to refine the cluster settings at
the namespace level in a way that won't impact the existing system. The
replication clusters should solely be used to establish full mesh
replication for that specific namespace, without having any other
definitions or functionalities.

BR,
Xiangying


On Mon, Dec 4, 2023 at 1:52 PM Joe F  wrote:

> >if users want to change the replication policy for
> topic-n and do not change the replication policy of other topics, they need
> to change all the topic policy under this namespace.
>
> This PIP unfortunately  flows from  attempting to solve bad application
> design
>
> A namespace is supposed to represent an application, and the namespace
> policy is an umbrella for a similar set of policies  that applies to all
> topics.  The exceptions would be if a topic had a need for a deficit, The
> case of one topic in the namespace sticking out of the namespace policy
> umbrella is bad  application design in my opinion
>
> -Joe.
>
>
>
> On Sun, Dec 3, 2023 at 6:00 PM Xiangying Meng 
> wrote:
>
> > Hi Rajan and Girish,
> > Thanks for your reply. About the question you mentioned, there is some
> > information I want to share with you.
> > >If anyone wants to setup different replication clusters then either
> > >those topics can be created under different namespaces or defined at
> topic
> > >level policy.
> >
> > >And users can anyway go and update the namespace's cluster list to add
> the
> > >missing cluster.
> > Because the replication clusters also determine the clusters where the
> > topic can be created or loaded, the topic-level replication clusters can
> > only be a subset of the namespace-level replication clusters.
> > Just as Girish mentioned, users can update the namespace's cluster list
> > to add the missing cluster.
> > But there is a problem: replication clusters at the namespace level create
> > full-mesh replication for that namespace across all the clusters defined
> > in replication-clusters. If users want to change the replication policy
> > for topic-n without changing the replication policy of the other topics,
> > they need to change the topic-level policy of every other topic under
> > this namespace.
> >
> > > Pulsar is being used by many legacy systems and changing its
> > >semantics for specific usecases without considering consequences are
> > >creating a lot of pain and incompatibility problems for other existing
> > >systems and let's avoid doing it as we are struggling with such changes
> > and
> > >breaking compatibility or changing semantics are just not acceptable.
> >
> > This proposal will not introduce an incompatibility problem, because the
> > default value of the namespace policy of allowed-clusters and
> > topic-policy-synchronized-clusters are the replication-clusters.
> >
> > >Allowed clusters defined at tenant level
> > >will restrict tenants to create namespaces in regions/clusters where
> they
> > >are not allowed.
> > >As Rajan also mentioned, allowed-clusters field has a different
> > meaning/purpose.
> >
> > Allowed clusters defined at the tenant level will restrict tenants from
> > creating namespaces in regions/clusters where they are not allowed.
> > Similarly, the allowed clusters defined at the namespace level will
> > restrict the namespace from creating topics in regions/clusters where
> they
> > are not allowed.
> > What's wrong with this?
> >
> > Regards,
> > Xiangying
> >
> > On Fri, Dec 1, 2023 at 2:35 PM Girish Sharma 
> > wrote:
> >
> > > Hi Xiangying,
> > >
> > > Shouldn't the solution to the issue mentioned in #21564 [0] mostly be
> > > around validating that topic level replication clusters are subset of
> > > namespace level replication clusters?
> > > It would be a completely compatible change as even today the case
> where a
> > > topic has a cluster not defined in namespaces's replication-clusters
> > > doesn't really work.
> > > And users can anyway go and update the namespace's cluster list to add
> > the
> > > missing cluster.
> > >
> > > As Rajan also mentioned, allowed-clusters field has a different
> > > meaning/purpose.
> > > Regards
> > >
> > > On Thu, Nov 30, 2023 at 10:56 AM Xiangying Meng 
> > > wrote:
> > >
> > > > Hi, Pulsar Community
> > > >
> > > > I drafted a proposal to make the configuration of clusters at the
> > > namespace
> > > > level clearer. This helps solve the problem of geo-replication not
> > > working
> > > > correctly at the topic level.
> > > >
> > > > https://github.com/apache/pulsar/pull/21648
> > > >
> > > > I'm looking forward to hearing from you.
> > > >
> > > > BR
> > > > Xiangying
> > > >
> > >
> > >
> > > --
> > > Girish Sharma
> > >
> >
>


Re: [DISCUSS] PIP-321 Split the responsibilities of namespace replication-clusters

2023-12-03 Thread Xiangying Meng
Hi Rajan and Girish,
Thanks for your reply. Regarding the questions you raised, there is some
information I would like to share with you.
>If anyone wants to setup different replication clusters then either
>those topics can be created under different namespaces or defined at topic
>level policy.

>And users can anyway go and update the namespace's cluster list to add the
>missing cluster.
Because the replication clusters also determine the clusters where the topic
can be created or loaded, the topic-level replication clusters can only be a
subset of the namespace-level replication clusters.
Just as Girish mentioned, users can update the namespace's cluster list
to add the missing cluster.
But there is a problem: replication clusters at the namespace level create
full-mesh replication for that namespace across all the clusters defined in
replication-clusters. If users want to change the replication policy for
topic-n without changing the replication policy of the other topics, they
need to change the topic-level policy of every other topic under this
namespace.

> Pulsar is being used by many legacy systems and changing its
>semantics for specific usecases without considering consequences are
>creating a lot of pain and incompatibility problems for other existing
>systems and let's avoid doing it as we are struggling with such changes and
>breaking compatibility or changing semantics are just not acceptable.

This proposal will not introduce an incompatibility problem, because the
default values of the namespace-level allowed-clusters and
topic-policy-synchronized-clusters policies are the replication-clusters.

>Allowed clusters defined at tenant level
>will restrict tenants to create namespaces in regions/clusters where they
>are not allowed.
>As Rajan also mentioned, allowed-clusters field has a different
meaning/purpose.

Allowed clusters defined at the tenant level will restrict tenants from
creating namespaces in regions/clusters where they are not allowed.
Similarly, the allowed clusters defined at the namespace level will
restrict the namespace from creating topics in regions/clusters where they
are not allowed.
What's wrong with this?

Regards,
Xiangying

On Fri, Dec 1, 2023 at 2:35 PM Girish Sharma 
wrote:

> Hi Xiangying,
>
> Shouldn't the solution to the issue mentioned in #21564 [0] mostly be
> around validating that topic level replication clusters are subset of
> namespace level replication clusters?
> It would be a completely compatible change as even today the case where a
> topic has a cluster not defined in namespaces's replication-clusters
> doesn't really work.
> And users can anyway go and update the namespace's cluster list to add the
> missing cluster.
>
> As Rajan also mentioned, allowed-clusters field has a different
> meaning/purpose.
> Regards
>
> On Thu, Nov 30, 2023 at 10:56 AM Xiangying Meng 
> wrote:
>
> > Hi, Pulsar Community
> >
> > I drafted a proposal to make the configuration of clusters at the
> namespace
> > level clearer. This helps solve the problem of geo-replication not
> working
> > correctly at the topic level.
> >
> > https://github.com/apache/pulsar/pull/21648
> >
> > I'm looking forward to hearing from you.
> >
> > BR
> > Xiangying
> >
>
>
> --
> Girish Sharma
>


[DISCUSS] PIP-321 Split the responsibilities of namespace replication-clusters

2023-11-29 Thread Xiangying Meng
Hi, Pulsar Community

I drafted a proposal to make the configuration of clusters at the namespace
level clearer. This helps solve the problem of geo-replication not working
correctly at the topic level.

https://github.com/apache/pulsar/pull/21648

I'm looking forward to hearing from you.

BR
Xiangying


Re: [ANNOUNCE] Yubiao Feng as new PMC member in Apache Pulsar

2023-11-13 Thread Xiangying Meng
Congrats! Yubiao.

Thanks,
Xiangying

On Mon, Nov 13, 2023 at 8:15 PM Kai Wang  wrote:

> Congrats!
>
> Thanks,
> Kai
>


[DISCUSS] PIP-311 Modify the signature of the `newMultiTransactionMessageAck` method

2023-10-23 Thread Xiangying Meng
Hi dev,
Currently, the public API `newMultiTransactionMessageAck` in
`Commands.java` creates ack commands without a request ID while the
caller still waits for a response.

This causes every request created by
`newMultiTransactionMessageAck` to time out.
I drafted a proposal [0] to modify it.
Looking forward to receiving your feedback and suggestions.

[0]: https://github.com/apache/pulsar/pull/21419

Sincerely,
Xiangying
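To illustrate the failure mode in isolation (a toy model, not Pulsar's actual client code): a broker response can only complete a pending future if the command carried a request ID that the client registered; without one, nothing can be correlated back and the future can only time out.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class AckRequestDemo {
    private final Map<Long, CompletableFuture<Void>> pending = new ConcurrentHashMap<>();
    private final AtomicLong requestIdGenerator = new AtomicLong();

    // Buggy path: the ack command is built without a request id, so no
    // entry is registered in `pending` and no response can ever be
    // correlated back -- the future only completes via its timeout.
    public CompletableFuture<Void> ackWithoutRequestId() {
        return new CompletableFuture<Void>().orTimeout(100, TimeUnit.MILLISECONDS);
    }

    // Fixed path: generate a request id, track the future, and let the
    // (simulated) broker response complete it.
    public CompletableFuture<Void> ackWithRequestId() {
        long requestId = requestIdGenerator.incrementAndGet();
        CompletableFuture<Void> future = new CompletableFuture<>();
        pending.put(requestId, future);
        handleResponse(requestId); // broker echoes the request id back
        return future;
    }

    private void handleResponse(long requestId) {
        CompletableFuture<Void> future = pending.remove(requestId);
        if (future != null) {
            future.complete(null);
        }
    }
}
```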


Re: [VOTE] PIP-302 Introduce refreshAsync API for TableView

2023-10-18 Thread Xiangying Meng
Closing this vote with 3 (binding) +1s:
- Penghui
- Jiwei Guo (Tboy)
- Mattison

On Sun, Oct 8, 2023 at 5:02 PM mattison chao  wrote:
>
> +1(binding)
>
> Best,
> Mattison
> On 8 Oct 2023 at 14:49 +0800, Yunze Xu , wrote:
> > Totally I'm +0 at the moment. I'm still wondering which issue you
> > really want to resolve. I left a comment
> > https://github.com/apache/pulsar/pull/21271#issuecomment-1751899833.
> > Generally you can get latest value unless the producer is far quicker
> > than the reader. However, even with the refreshAsync() method, in the
> > future's callback, the data could still be updated.
> >
> > From my perspective, this proposal only makes sense when you can
> > guarantee there is no more data written to the topic when you call
> > methods of TableView. But if so, a simple `boolean hasReachedLatest()`
> > method could do the trick.
> >
> > Thanks,
> > Yunze
> >
> > On Sun, Oct 8, 2023 at 2:12 PM 太上玄元道君  wrote:
> > >
> > > +1 (no-binding)
> > >
> > >
> > > Xiangying Meng wrote on Wednesday, September 27, 2023 at 15:05:
> > >
> > > > > Hi dev,
> > > > > This thread is to start a vote for PIP-302 Add new API
> > > > > refreshAsync for TableView.
> > > > > Discuss thread:
> > > > > https://lists.apache.org/thread/o085y2314o0fymvx0x8pojmgjwcwn59q
> > > > > PIP: https://github.com/apache/pulsar/pull/21166
> > > > >
> > > > > BR,
> > > > > Xiangying
> > > > >


Re: [VOTE] PIP-302 Add new API readAllExistingMessages for TableView

2023-09-27 Thread Xiangying Meng
Close this via https://lists.apache.org/thread/vox93tmj33mms026wt52l92h1wffctbk

On Mon, Sep 25, 2023 at 6:34 PM Xiangying Meng  wrote:
>
> Thank you for your reminder. In our discussion, there were several
> changes to the specific plan and method names, which resulted in the
> PR title not being updated promptly. This was my oversight. The email
> title for the vote was not modified to match the titles of the
> discussed emails.
>
> Regarding my proposal itself, would you happen to have any other questions?
>
> BR,
> Xiangying
>
> On Mon, Sep 25, 2023 at 6:03 PM Zike Yang  wrote:
> >
> > Hi, Xiangying
> >
> > This PIP is a little confusing to me.
> > This mail title states that we are introducing `readAllExistingMessages`.
> > But actually, we only introduced `refreshAsync` in the PIP:
> > https://github.com/apache/pulsar/pull/21166/files#diff-45c655583d6c0c73d87afd3df3fe67f77caadbf1bd691cf8f8211cc89728a1ceR34-R36
> > And the PR title doesn't seem relevant. “PIP-302 Add alwaysRefresh
> > Configuration Option for TableView to Read Latest Values”
> >
> > BR,
> > Zike Yang
> >
> > On Mon, Sep 25, 2023 at 3:25 PM Xiangying Meng  wrote:
> > >
> > > Hi dev,
> > >This thread is to start a vote for PIP-302 Add new API
> > > readAllExistingMessages for TableView.
> > > Discuss thread: 
> > > https://lists.apache.org/thread/o085y2314o0fymvx0x8pojmgjwcwn59q
> > > PIP: https://github.com/apache/pulsar/pull/21166
> > >
> > > BR,
> > > Xiangying


[VOTE] PIP-302 Introduce refreshAsync API for TableView

2023-09-27 Thread Xiangying Meng
Hi dev,
   This thread is to start a vote for PIP-302 Add new API
refreshAsync for TableView.
Discuss thread: https://lists.apache.org/thread/o085y2314o0fymvx0x8pojmgjwcwn59q
PIP: https://github.com/apache/pulsar/pull/21166

BR,
Xiangying
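For context, the intended semantics can be modeled with a toy in-memory table view (an illustrative sketch, not the Pulsar implementation): `refresh()` snapshots the last position *at call time* and reads up to it, so messages published after the call may still be unseen, as Yunze pointed out in the discussion.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TableViewRefreshDemo {
    private final List<String[]> topic = new ArrayList<>(); // [key, value] pairs
    private final Map<String, String> view = new HashMap<>();
    private int readPosition = 0;

    public void publish(String key, String value) {
        topic.add(new String[]{key, value});
    }

    // refresh(): snapshot the latest position at call time and read up to
    // it.  The guarantee is only "at least everything that existed when
    // refresh was invoked"; later writes may or may not be visible.
    public void refresh() {
        int target = topic.size();
        while (readPosition < target) {
            String[] msg = topic.get(readPosition++);
            view.put(msg[0], msg[1]);
        }
    }

    public String get(String key) {
        return view.get(key);
    }
}
```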


Re: [DISSCUSS] PIP-298: Consumer supports specifying consumption isolation level

2023-09-25 Thread Xiangying Meng
Hi Dave,

Thanks for your support.
I also think this should only be for the master branch.

Thanks,
Xiangying

On Tue, Sep 26, 2023 at 9:34 AM Dave Fisher  wrote:
>
> Hi -
>
> OK. I’ll agree, but I think the PIP ought to include documentation. There 
> should also be clear communication about this use case and how to use it.
>
> Sent from my iPhone
>
> > On Sep 25, 2023, at 6:23 PM, Xiangying Meng  wrote:
> >
> > Hi Dave,
> > The uncommitted transactions do not impact actual users' bank accounts.
> > Business Processing System E only reads committed transactional
> > messages and operates users' accounts. It needs Exactly-once semantic.
> > Real-time Monitoring System D reads uncommitted transactional
> > messages. It does not need Exactly-once semantic.
> >
> > They use different subscriptions and choose different isolation
> > levels. One needs transaction, one does not.
> > In general, multiple subscriptions of the same topic do not all
> > require transaction guarantees.
> > Some want low latency without the exact-once semantic guarantee, and
> > some must require the exactly-once guarantee.
> > We just provide a new option for different subscriptions. This should
> > not be a breaking change,right?
>
> Not a breaking change, but it does add to the API.
>
> It should be discussed if this PIP is only for master - 3.2, or if may be 
> cherry picked to current versions.
>
> >
> > Looking forward to your reply.
>
> Thank you,
> Dave
> >
> > Thanks,
> > Xiangying
> >
> >> On Tue, Sep 26, 2023 at 4:09 AM Dave Fisher  wrote:
> >>
> >>
> >>
> >>>> On Sep 20, 2023, at 12:50 AM, Xiangying Meng  
> >>>> wrote:
> >>>
> >>> Hi, all,
> >>>
> >>> Let's consider another example:
> >>>
> >>> **System**: Financial Transaction System
> >>>
> >>> **Operations**: Large volume of deposit and withdrawal operations, a
> >>> small number of transfer operations.
> >>>
> >>> **Roles**:
> >>>
> >>> - **Client A1**
> >>> - **Client A2**
> >>> - **User Account B1**
> >>> - **User Account B2**
> >>> - **Request Topic C**
> >>> - **Real-time Monitoring System D**
> >>> - **Business Processing System E**
> >>>
> >>> **Client Operations**:
> >>>
> >>> - **Withdrawal**: Client A1 decreases the deposit amount from User
> >>> Account B1 or B2.
> >>> - **Deposit**: Client A1 increases the deposit amount in User Account B1 
> >>> or B2.
> >>> - **Transfer**: Client A2 decreases the deposit amount from User
> >>> Account B1 and increases it in User Account B2. Or vice versa.
> >>>
> >>> **Real-time Monitoring System D**: Obtains the latest data from
> >>> Request Topic C as quickly as possible to monitor transaction data and
> >>> changes in bank reserves in real-time. This is necessary for the
> >>> timely detection of anomalies and real-time decision-making.
> >>>
> >>> **Business Processing System E**: Reads data from Request Topic C,
> >>> then actually operates User Accounts B1, B2.
> >>>
> >>> **User Scenario**: Client A1 sends a large number of deposit and
> >>> withdrawal requests to Request Topic C. Client A2 writes a small
> >>> number of transfer requests to Request Topic C.
> >>>
> >>> In this case, Business Processing System E needs a read-committed
> >>> isolation level to ensure operation consistency and Exactly Once
> >>> semantics. The real-time monitoring system does not care if a small
> >>> number of transfer requests are incomplete (dirty data). What it
> >>> cannot tolerate is a situation where a large number of deposit and
> >>> withdrawal requests cannot be presented in real time due to a small
> >>> number of transfer requests (the current situation is that uncommitted
> >>> transaction messages can block the reading of committed transaction
> >>> messages).
> >>
> >> So you are willing to let uncommitted transactions impact actual users 
> >> bank accounts? Are you sure that there is not another way to bypass 
> >> uncommitted records? Letting uncommitted records through is not Exactly 
> >> once.
> >>
> >> Are you ready to rewrite Pulsar’s documentation to explain how normal 
> >> users can a

Re: [DISCUSS] PIP-300: Add custom dynamic configuration for plugins

2023-09-25 Thread Xiangying Meng
Hi Zixuan,

This is really a great feature. I support it.

Regarding cherry-picking, as far as I know, we have cherry-picked some
configuration items and interfaces into branch-2.10.
But that should be raised in a separate discussion that provides
sufficient reasons for why we have to do it, and the cherry-pick is
performed after the vote passes. So I suggest discussing the feature
and the cherry-pick separately, in different email threads.

Sincerely,
Xiangying

On Mon, Sep 25, 2023 at 6:51 PM Zike Yang  wrote:
>
> Hi, Zixuan
>
> Thanks for your proposal. I'm +1 for it.
>
> >  This is a feature I need. If cherry-pick is not allowed, then it will
> only be kept in 3.2+.
>
> This is a new feature, and I also think that we couldn't cherry-pick
> it. What about cherry-picking this change to your fork repo and
> building the pulsar for your own to meet this need? Does it make sense
> to you?
>
> BR,
> Zike Yang
>
> On Mon, Sep 25, 2023 at 12:23 AM mattison chao  wrote:
> >
> > Hi, Zixuan
> >
> > I am afraid I can't support you in cherry-picking this feature for all of 
> > the active branches by the current fact. Because this is a new feature and 
> > it's not a bug fix or security issue.
> >
> > We can wait for other contributor's ideas.  WDYT?
> >
> > Thanks,
> > Mattison
> > On 19 Sep 2023 at 10:42 +0800, Zixuan Liu , wrote:
> > > > 1. When you want to cherry-pick a PR, it should be cherry-picked for 
> > > > all branches after the target branch. Isn't it?
> > >
> > > I think we should cherry-pick this PR to Pulsar 2.10, 2.11, 3.0.
> > >
> > > > 2. I've got a slight concern about cherry-picking. Do you have any 
> > > > issue or feature request in the community that can demonstrate this 
> > > > feature is required to cherry-pick?
> > >
> > > This is a feature I need. If cherry-pick is not allowed, then it will
> > > only be kept in 3.2+.
> > >
> > > Thanks,
> > > Zixuan
> > >
> > > > mattison chao  wrote on Monday, September 18, 2023 at 09:42:
> > >
> > > >
> > > > HI, Zixuan
> > > >
> > > > Thanks for your discussion. It's a great feature. But I've got several 
> > > > questions I want to discuss here.
> > > >
> > > > 1. When you want to cherry-pick a PR, it should be cherry-picked for 
> > > > all branches after the target branch. Isn't it?
> > > > 2. I've got a slight concern about cherry-picking. Do you have any 
> > > > issue or feature request in the community that can demonstrate this 
> > > > feature is required to cherry-pick?
> > > >
> > > >
> > > > Best,
> > > > Mattison
> > > > On 12 Sep 2023 at 11:25 +0800, Zixuan Liu , wrote:
> > > > > > BTW, I think we can cherry-pick this feature to the Pulsar 2.10. As
> > > > > > far as I know, the Pulsar 2.10 is a stable/main version, because 
> > > > > > many
> > > > > > users are using it.
> > > > > >
> > > > > > Thanks,
> > > > > > Zixuan
> > > > > >
> > > > > > Zixuan Liu  wrote on Tuesday, September 5, 2023 at 17:09:
> > > > > > > >
> > > > > > > > Hi Pulsar Community,
> > > > > > > >
> > > > > > > > Discuss for PIP-300: https://github.com/apache/pulsar/pull/21127
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Zixuan


Re: [DISSCUSS] PIP-298: Consumer supports specifying consumption isolation level

2023-09-25 Thread Xiangying Meng
Hi Dave,
The uncommitted transactions do not impact actual users' bank accounts.
Business Processing System E reads only committed transactional
messages and operates on users' accounts; it needs exactly-once semantics.
Real-time Monitoring System D reads uncommitted transactional
messages; it does not need exactly-once semantics.

They use different subscriptions and choose different isolation
levels: one needs transactions, the other does not.
In general, not all subscriptions of the same topic
require transaction guarantees.
Some want low latency without the exactly-once guarantee, and
some strictly require the exactly-once guarantee.
We just provide a new option for different subscriptions. This should
not be a breaking change, right?
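The two subscription behaviors can be modeled with a toy message log (an illustrative sketch, not Pulsar's implementation — the record type and method names are made up): at read-committed, an open transaction's message blocks delivery of everything behind it, which is the blocking problem described in this thread; at read-uncommitted, everything is delivered immediately at the cost of dirty reads.

```java
import java.util.ArrayList;
import java.util.List;

public class IsolationLevelDemo {
    /** A message paired with the commit state of its (optional) transaction. */
    public record Entry(String payload, boolean transactional, boolean committed) {}

    // READ_COMMITTED: delivery stops at the first uncommitted transactional
    // message, so later committed messages are not exposed out of order.
    public static List<String> readCommitted(List<Entry> log) {
        List<String> out = new ArrayList<>();
        for (Entry e : log) {
            if (e.transactional() && !e.committed()) {
                break; // open transaction blocks everything behind it
            }
            out.add(e.payload());
        }
        return out;
    }

    // READ_UNCOMMITTED: deliver everything immediately; dirty reads allowed.
    public static List<String> readUncommitted(List<Entry> log) {
        List<String> out = new ArrayList<>();
        for (Entry e : log) {
            out.add(e.payload());
        }
        return out;
    }
}
```

In the financial example above, System E would consume at read-committed and System D at read-uncommitted, each via its own subscription.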

Looking forward to your reply.

Thanks,
Xiangying

On Tue, Sep 26, 2023 at 4:09 AM Dave Fisher  wrote:
>
>
>
> > On Sep 20, 2023, at 12:50 AM, Xiangying Meng  wrote:
> >
> > Hi, all,
> >
> > Let's consider another example:
> >
> > **System**: Financial Transaction System
> >
> > **Operations**: Large volume of deposit and withdrawal operations, a
> > small number of transfer operations.
> >
> > **Roles**:
> >
> > - **Client A1**
> > - **Client A2**
> > - **User Account B1**
> > - **User Account B2**
> > - **Request Topic C**
> > - **Real-time Monitoring System D**
> > - **Business Processing System E**
> >
> > **Client Operations**:
> >
> > - **Withdrawal**: Client A1 decreases the deposit amount from User
> > Account B1 or B2.
> > - **Deposit**: Client A1 increases the deposit amount in User Account B1 or 
> > B2.
> > - **Transfer**: Client A2 decreases the deposit amount from User
> > Account B1 and increases it in User Account B2. Or vice versa.
> >
> > **Real-time Monitoring System D**: Obtains the latest data from
> > Request Topic C as quickly as possible to monitor transaction data and
> > changes in bank reserves in real-time. This is necessary for the
> > timely detection of anomalies and real-time decision-making.
> >
> > **Business Processing System E**: Reads data from Request Topic C,
> > then actually operates User Accounts B1, B2.
> >
> > **User Scenario**: Client A1 sends a large number of deposit and
> > withdrawal requests to Request Topic C. Client A2 writes a small
> > number of transfer requests to Request Topic C.
> >
> > In this case, Business Processing System E needs a read-committed
> > isolation level to ensure operation consistency and Exactly Once
> > semantics. The real-time monitoring system does not care if a small
> > number of transfer requests are incomplete (dirty data). What it
> > cannot tolerate is a situation where a large number of deposit and
> > withdrawal requests cannot be presented in real time due to a small
> > number of transfer requests (the current situation is that uncommitted
> > transaction messages can block the reading of committed transaction
> > messages).
>
> So you are willing to let uncommitted transactions impact actual users bank 
> accounts? Are you sure that there is not another way to bypass uncommitted 
> records? Letting uncommitted records through is not Exactly once.
>
> Are you ready to rewrite Pulsar’s documentation to explain how normal users 
> can avoid allowing this?
>
> Best,
> Dave
>
>
> >
> > In this case, it is necessary to set different isolation levels for
> > different consumers/subscriptions.
> >
> > Thanks,
> > Xiangying
> >
> > On Tue, Sep 19, 2023 at 11:35 PM 杨国栋  wrote:
> >>
> >> Hi Dave and Xiangying,
> >> Thanks for all your support.
> >>
> >> Let me add some background.
> >>
> >> Apache Paimon take message queue as External Log Systems and changelog of
> >> Paimon can also be consumed from message queue.
> >> By default, change-log of message queue in Paimon are visible to consumers
> >> only after a snapshot. Snapshot have a same life cycle as message queue
> >> transactions.
> >> However, users can immediately consume change-log by read uncommited
> >> message without waiting for the next snapshot.
> >> This behavior reduces the latency of changelog, but it relies on reading
> >> uncommited message in Kafka or other message queue.
> >> So we hope Pulsar can support Read Uncommitted isolation level.
> >>
> >> Put aside the application scenarios of Paimon. Let's discuss Read
> >> Uncommitted isolation level itself.
> >>
> >> Read Uncommitted isolation will bring certain security risks, but will also
> >> make the message immediately readable.
> >> Reading submitted data can ensure accuracy, and reading uncommitted data
> >> can ensure real-time performance (there may be some repeated message or
> >> dirty message).
> >> Real-time performance is what users need. How to handle dirty message
> >> should be considered by the application side.
> >>
> >> We can still get complete and accurate data from Read Committed isolation
> >> level.
> >>
> >> Sincerely yours.
>


Re: [DISCUSS] Unload Rate Limiting during Graceful Shutdown of Pulsar

2023-09-25 Thread Xiangying Meng
>I think the RateLimiter can handle it:
https://github.com/apache/pulsar/blob/a1405ea006f175b1bd0b9d28b9444d592fb4a010/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BrokerService.java#L965-L968

See here. Do you mean we should make `maxConcurrentUnloadPerSec` and
`brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute` work
together? What would be the point of that?
https://github.com/apache/pulsar/blob/a1405ea006f175b1bd0b9d28b9444d592fb4a010/pulsar-broker/src/main/java/org/apache/pulsar/broker/admin/impl/BrokersBase.java#L564-L568

>`maxConcurrentUnloadPerSec ` is for the admin and CLI usage. This
proposal is to add
`brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute ` to the
broker configuration.

Yes, we are adding a broker configuration. That's not an issue, but
adding `brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute` with a
minute-based interval and expecting it to work in conjunction with
`maxConcurrentUnloadPerSec`, which is only used in the CLI, doesn't
make sense.

I understand that what we want to achieve is to have a broker
configuration that limits the concurrency of unloads even when
`maxConcurrentUnloadPerSec` is not set. Instead of adding
`brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute` when
`maxConcurrentUnloadPerSec` is already controlling the concurrency,
we should keep any new shutdown-time limit at per-second granularity
for consistency.

Thanks,
Xiangying
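The granularity concern can be shown with a toy fixed-window limiter (illustrative only — the broker's actual RateLimiter may smooth bursts differently): 60 permits per minute admits a burst of 60 unloads inside a single second, while 1 permit per second caps that same second at 1, even though both are nominally "one per second".

```java
public class UnloadRateWindowDemo {
    /** Fixed-window counter: `limit` permits per `windowMillis`. */
    static final class WindowLimiter {
        private final int limit;
        private final long windowMillis;
        private long windowStart = 0;
        private int used = 0;

        WindowLimiter(int limit, long windowMillis) {
            this.limit = limit;
            this.windowMillis = windowMillis;
        }

        /** Try to acquire a permit at simulated time `nowMillis`. */
        boolean tryAcquire(long nowMillis) {
            if (nowMillis - windowStart >= windowMillis) {
                // advance to the window containing `nowMillis`
                windowStart = nowMillis - (nowMillis - windowStart) % windowMillis;
                used = 0;
            }
            if (used < limit) {
                used++;
                return true;
            }
            return false;
        }
    }

    /** Count how many unload requests are granted within the first second. */
    public static int burstInFirstSecond(int limit, long windowMillis) {
        WindowLimiter limiter = new WindowLimiter(limit, windowMillis);
        int granted = 0;
        for (long t = 0; t < 1000; t += 10) { // a request every 10 ms
            if (limiter.tryAcquire(t)) {
                granted++;
            }
        }
        return granted;
    }
}
```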

On Mon, Sep 25, 2023 at 6:45 PM Zike Yang  wrote:
>
> > If we want the
> maximum concurrency per second to be 1, and set the maximum
> concurrency per minute to 60, then the actual maximum concurrency per
> second could be up to 60, which is 60 times larger than our expected
> maximum concurrency.
>
> I think the RateLimiter can handle it:
> https://github.com/apache/pulsar/blob/a1405ea006f175b1bd0b9d28b9444d592fb4a010/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/BrokerService.java#L965-L968
>
> > Secondly, we already have the `maxConcurrentUnloadPerSec`
> configuration, which is provided to the user in the CLI. Adding a
> similar `brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute`
> configuration might confuse users.
>
> `maxConcurrentUnloadPerSec ` is for the admin and CLI usage. This
> proposal is to add
> `brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute ` to the
> broker configuration.
>
>
> Overall, I'm +1 for this proposal. And I agree that we need a new PIP
> for this change.
>
> BR,
> Zike Yang
>
> On Mon, Sep 25, 2023 at 3:54 PM Xiangying Meng  wrote:
> >
> > Hi Donglai, Heesung
> >
> > >brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute=60 is the same as
> > brokerShutdownMaxNumberOfGracefulBundleUnloadPerSec=1 So, the "per-min"
> > config can be more granular.
> >
> > I have some doubts about introducing the
> > `brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute`
> > configuration.
> >
> > Firstly, I also think that a minute is too long. If we want the
> > maximum concurrency per second to be 1, and set the maximum
> > concurrency per minute to 60, then the actual maximum concurrency per
> > second could be up to 60, which is 60 times larger than our expected
> > maximum concurrency. Moreover, if the unload requests are concentrated
> > in the last 10 seconds of the previous minute and the first 10 seconds
> > of the next minute, then the concurrency during this period will
> > exceed our configuration. Such fluctuations are inevitable, but the
> > larger the time span we set, the greater the distortion of the
> > configuration.
> >
> > Secondly, we already have the `maxConcurrentUnloadPerSec`
> > configuration, which is provided to the user in the CLI. Adding a
> > similar `brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute`
> > configuration might confuse users. When designing configuration
> > parameters, we should try to keep it simple and consistent, and avoid
> > introducing unnecessary complexity.
> >
> > Thanks,
> > Xiangying
> >
> > On Mon, Sep 25, 2023 at 12:14 PM Yubiao Feng
> >  wrote:
> > >
> > > Hi Donglai, Mattison
> > >
> > > I agree with @Mattison
> > >
> > > Thanks
> > > Yubiao Feng
> > >
> > > On Mon, Aug 21, 2023 at 8:50 PM  wrote:
> > >
> > > >
> > > > Hi,
> > > >
> > > > I agree with this change to improve the stability of the pulsar cluster.
> > > >
> > > > Just one concern. it's better to move this discussion to a new PIP.
> > > > because you wanna introduce a new broker configuration.
> > > > `brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute`
> > > >
> > > > FYI: https://github

Re: [VOTE] PIP-302 Add new API readAllExistingMessages for TableView

2023-09-25 Thread Xiangying Meng
Thank you for the reminder. During our discussion, the specific plan
and method names changed several times, and the PR title was not
updated promptly. That was my oversight. The vote email title was
also not updated to match the title of the discussion thread.

Regarding my proposal itself, would you happen to have any other questions?

BR,
Xiangying

On Mon, Sep 25, 2023 at 6:03 PM Zike Yang  wrote:
>
> Hi, Xiangying
>
> This PIP is a little confusing to me.
> This mail title states that we are introducing `readAllExistingMessages`.
> But actually, we only introduced `refreshAsync` in the PIP:
> https://github.com/apache/pulsar/pull/21166/files#diff-45c655583d6c0c73d87afd3df3fe67f77caadbf1bd691cf8f8211cc89728a1ceR34-R36
> And the PR title doesn't seem relevant. “PIP-302 Add alwaysRefresh
> Configuration Option for TableView to Read Latest Values”
>
> BR,
> Zike Yang
>
> On Mon, Sep 25, 2023 at 3:25 PM Xiangying Meng  wrote:
> >
> > Hi dev,
> >This thread is to start a vote for PIP-302 Add new API
> > readAllExistingMessages for TableView.
> > Discuss thread: 
> > https://lists.apache.org/thread/o085y2314o0fymvx0x8pojmgjwcwn59q
> > PIP: https://github.com/apache/pulsar/pull/21166
> >
> > BR,
> > Xiangying


Re: [DISCUSS] Unload Rate Limiting during Graceful Shutdown of Pulsar

2023-09-25 Thread Xiangying Meng
Hi Donglai, Heesung

>brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute=60 is the same as
brokerShutdownMaxNumberOfGracefulBundleUnloadPerSec=1 So, the "per-min"
config can be more granular.

I have some doubts about introducing the
`brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute`
configuration.

Firstly, I also think that a minute is too long. If we want the
maximum concurrency per second to be 1, and set the maximum
concurrency per minute to 60, then the actual maximum concurrency per
second could be up to 60, which is 60 times larger than our expected
maximum concurrency. Moreover, if the unload requests are concentrated
in the last 10 seconds of the previous minute and the first 10 seconds
of the next minute, then the concurrency during this period will
exceed our configuration. Such fluctuations are inevitable, but the
larger the time span we set, the greater the distortion of the
configuration.

Secondly, we already have the `maxConcurrentUnloadPerSec`
configuration, which is provided to the user in the CLI. Adding a
similar `brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute`
configuration might confuse users. When designing configuration
parameters, we should try to keep it simple and consistent, and avoid
introducing unnecessary complexity.

Thanks,
Xiangying

On Mon, Sep 25, 2023 at 12:14 PM Yubiao Feng
 wrote:
>
> Hi Donglai, Mattison
>
> I agree with @Mattison
>
> Thanks
> Yubiao Feng
>
> On Mon, Aug 21, 2023 at 8:50 PM  wrote:
>
> >
> > Hi,
> >
> > I agree with this change to improve the stability of the pulsar cluster.
> >
> > Just one concern. it's better to move this discussion to a new PIP.
> > because you wanna introduce a new broker configuration.
> > `brokerShutdownMaxNumberOfGracefulBundleUnloadPerMinute`
> >
> > FYI: https://github.com/apache/pulsar/blob/master/pip/README.md
> >
> > Looking forward this change and thanks for your contribution. :)
> >
> > Best,
> > Mattison
> >
> >
> >
> > On 7 Jul 2023 at 15:30 +0800, labuladong , wrote:
> > > Thanks you guys.
> > >
> > >
> > > I agree that per-minute is better than per-second, which is more
> > flexible.
> > >
> > >
> > > I open an issue here:
> > >
> > >
> > > https://github.com/apache/pulsar/issues/20753
> > >
> > >
> > > Regards,
> > > donglai
> >


[VOTE] PIP-302 Add new API readAllExistingMessages for TableView

2023-09-25 Thread Xiangying Meng
Hi dev,
   This thread is to start a vote for PIP-302 Add new API
readAllExistingMessages for TableView.
Discuss thread: https://lists.apache.org/thread/o085y2314o0fymvx0x8pojmgjwcwn59q
PIP: https://github.com/apache/pulsar/pull/21166

BR,
Xiangying


Re: [DISSCUSS] PIP-298: Consumer supports specifying consumption isolation level

2023-09-20 Thread Xiangying Meng
Hi, all,

Let's consider another example:

**System**: Financial Transaction System

**Operations**: Large volume of deposit and withdrawal operations, a
small number of transfer operations.

**Roles**:

- **Client A1**
- **Client A2**
- **User Account B1**
- **User Account B2**
- **Request Topic C**
- **Real-time Monitoring System D**
- **Business Processing System E**

**Client Operations**:

- **Withdrawal**: Client A1 decreases the deposit amount from User
Account B1 or B2.
- **Deposit**: Client A1 increases the deposit amount in User Account B1 or B2.
- **Transfer**: Client A2 decreases the deposit amount from User
Account B1 and increases it in User Account B2. Or vice versa.

**Real-time Monitoring System D**: Obtains the latest data from
Request Topic C as quickly as possible to monitor transaction data and
changes in bank reserves in real-time. This is necessary for the
timely detection of anomalies and real-time decision-making.

**Business Processing System E**: Reads data from Request Topic C,
then actually operates User Accounts B1, B2.

**User Scenario**: Client A1 sends a large number of deposit and
withdrawal requests to Request Topic C. Client A2 writes a small
number of transfer requests to Request Topic C.

In this case, Business Processing System E needs a read-committed
isolation level to ensure operation consistency and Exactly Once
semantics. The real-time monitoring system does not care if a small
number of transfer requests are incomplete (dirty data). What it
cannot tolerate is a situation where a large number of deposit and
withdrawal requests cannot be presented in real time due to a small
number of transfer requests (the current situation is that uncommitted
transaction messages can block the reading of committed transaction
messages).

In this case, it is necessary to set different isolation levels for
different consumers/subscriptions.
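The blocking behavior described above can be sketched as a conceptual
model (not broker code; names and the string level values are
illustrative only):

```python
# Conceptual model of the two isolation levels discussed in this thread.
# READ_COMMITTED stops delivery at the first uncommitted transactional
# message (preserving order), so one open transfer transaction delays all
# later deposit/withdrawal messages; READ_UNCOMMITTED delivers everything.
def readable(messages, isolation):
    out = []
    for payload, committed in messages:
        if isolation == "READ_COMMITTED" and not committed:
            break  # an open transaction blocks everything behind it
        out.append(payload)
    return out

log = [("deposit-1", True), ("transfer-1", False),
       ("deposit-2", True), ("withdraw-1", True)]

assert readable(log, "READ_COMMITTED") == ["deposit-1"]
assert readable(log, "READ_UNCOMMITTED") == ["deposit-1", "transfer-1",
                                             "deposit-2", "withdraw-1"]
```

Under this model, a monitoring subscription using the uncommitted level
keeps seeing the deposit/withdrawal stream even while a transfer
transaction remains open.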

Thanks,
Xiangying

On Tue, Sep 19, 2023 at 11:35 PM 杨国栋  wrote:
>
> Hi Dave and Xiangying,
> Thanks for all your support.
>
> Let me add some background.
>
> Apache Paimon treats message queues as external log systems, and the
> changelog of Paimon can also be consumed from a message queue.
> By default, the changelog in the message queue is visible to consumers
> only after a snapshot. Snapshots have the same life cycle as message
> queue transactions.
> However, users can consume the changelog immediately by reading
> uncommitted messages, without waiting for the next snapshot.
> This behavior reduces changelog latency, but it relies on reading
> uncommitted messages in Kafka or another message queue.
> So we hope Pulsar can support the Read Uncommitted isolation level.
>
> Put aside the application scenarios of Paimon. Let's discuss Read
> Uncommitted isolation level itself.
>
> Read Uncommitted isolation brings certain security risks, but it also
> makes messages immediately readable.
> Reading committed data ensures accuracy, while reading uncommitted data
> ensures real-time delivery (there may be some repeated or dirty
> messages).
> Real-time delivery is what users need; how to handle dirty messages
> should be decided by the application side.
>
> We can still get complete and accurate data from Read Committed isolation
> level.
>
> Sincerely yours.


Re: [DISSCUSS] PIP-298: Consumer supports specifying consumption isolation level

2023-09-18 Thread Xiangying Meng
Hi Dave,

I appreciate your perspective, but it leaves me with some questions I
would like to address. Why would the introduction of isolation levels
constitute an insecure action?

>I think if this proceeds then the scope needs to be expanded to 
>producers/admins needing to proactively allow transactions to be consumed 
>uncommitted.

We are merely presenting an option to users. Establishing isolation
levels for producers and administrators is, in my view, unnecessary,
and I am not inclined to implement it.

Sincerely,
Xiangying

On Mon, Sep 18, 2023 at 10:58 PM Dave Fisher  wrote:
>
> Thanks. So, this is to support exfiltration of uncommitted transaction data? 
> This is IMO wrong and a security risk.
>
> Pulsar already supports CDC through IO Connectors.
>
> Kafka can be wrong about these isolation levels.
>
> There is really no information in those Paimon issues. How is Paimon’s 
> ability to support Pulsar broken by this edge case?
>
> Best,
> Dave
>
> Sent from my iPhone
>
> > On Sep 18, 2023, at 7:26 AM, Xiangying Meng  wrote:
> >
> > Hi Dave,
> > This is an external request. Paimon has added support for Kafka but
> > has not yet incorporated support for Pulsar. Therefore, the Paimon
> > community desires to integrate Pulsar.
> >
> > Furthermore, when integrating Pulsar into Paimon, it is desired to
> > enable the ability to configure isolation levels, similar to Kafka, to
> > support reading uncommitted transaction logs.
> >
> > Additional context can be found in the following link:
> > https://github.com/apache/incubator-paimon/issues/765
> >
> > Sincerely,
> > Xiangying
> >
> >> On Mon, Sep 18, 2023 at 10:30 AM Dave Fisher  wrote:
> >>
> >> My concern is that this pip allows consumers to change the processing 
> >> rules for transactions in ways that a producer might find unexpected.
> >>
> >> I think if this proceeds then the scope needs to be expanded to 
> >> producers/admins needing to proactively allow transactions to be consumed 
> >> uncommitted.
> >>
> >> I am also interested in the use case that motivates this change.
> >>
> >> Best,
> >> Dave
> >>
> >> Sent from my iPhone
> >>
> >>>> On Sep 17, 2023, at 8:50 AM, hzh0425  wrote:
> >>>
> >>> Hi, all
> >>>
> >>> This PR contributed to pip 298: 
> >>> https://github.com/apache/pulsar/pull/21114
> >>>
> >>>
> >>>
> >>>
> >>> This pip is to implement Read Committed and Read Uncommitted isolation 
> >>> levels for Pulsar transactions, allow consumers to configure isolation 
> >>> levels during the building process.
> >>>
> >>> For more details, please refer to pip-298.md
> >>>
> >>>
> >>>
> >>>
> >>> I hope everyone can help review and discuss this pip and enter the 
> >>> discuss stage.
> >>>
> >>> Thanks
> >>> Zhangheng Huang
> >>
>


Re: [DISSCUSS] PIP-298: Consumer supports specifying consumption isolation level

2023-09-18 Thread Xiangying Meng
Hi Dave,
This is an external request. Paimon has added support for Kafka but
has not yet incorporated support for Pulsar. Therefore, the Paimon
community desires to integrate Pulsar.

Furthermore, when integrating Pulsar into Paimon, it is desired to
enable the ability to configure isolation levels, similar to Kafka, to
support reading uncommitted transaction logs.

Additional context can be found in the following link:
https://github.com/apache/incubator-paimon/issues/765

Sincerely,
Xiangying

On Mon, Sep 18, 2023 at 10:30 AM Dave Fisher  wrote:
>
> My concern is that this pip allows consumers to change the processing rules 
> for transactions in ways that a producer might find unexpected.
>
> I think if this proceeds then the scope needs to be expanded to 
> producers/admins needing to proactively allow transactions to be consumed 
> uncommitted.
>
> I am also interested in the use case that motivates this change.
>
> Best,
> Dave
>
> Sent from my iPhone
>
> > On Sep 17, 2023, at 8:50 AM, hzh0425  wrote:
> >
> > Hi, all
> >
> > This PR contributed to pip 298: https://github.com/apache/pulsar/pull/21114
> >
> >
> >
> >
> > This pip is to implement Read Committed and Read Uncommitted isolation 
> > levels for Pulsar transactions, allow consumers to configure isolation 
> > levels during the building process.
> >
> > For more details, please refer to pip-298.md
> >
> >
> >
> >
> > I hope everyone can help review and discuss this pip and enter the discuss 
> > stage.
> >
> > Thanks
> > Zhangheng Huang
>


[DISCUSS] PIP-302 Add new API readAllExistingMessages for TableView

2023-09-11 Thread Xiangying Meng
Hi dev,
I proposed a PIP, accessible via
https://github.com/apache/pulsar/pull/21166, to introduce an API that
allows us to wait until all data has been fully retrieved before
accessing the value corresponding to the desired key.
Please take a look and give your feedback.

Best Regards,
Xiangying


Re: [VOTE] PIP 296: Introduce the `getLastMessageIds` API to Reader

2023-08-31 Thread Xiangying Meng
Close this vote with 4 binding
- Penghui
- Mattison
- Hang
- tison

On Mon, Aug 28, 2023 at 10:41 AM Zili Chen  wrote:
>
> +1 (binding)
>
> Thanks for driving the proposal!
>
> On 2023/08/25 06:52:38 Xiangying Meng wrote:
> > Hi Pulsar Community,
> >
> > This is the vote thread for PIP 296:
> > https://github.com/apache/pulsar/pull/21052
> >
> > This PIP will help to improve the flexibility of Reader usage.
> >
> > Thanks,
> > Xiangying
> >


Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-27 Thread Xiangying Meng
Hi community,

After internal discussions and agreement with Penghui and Zike, we've
come to a consensus that a small amount of dirty data within the topic
can be tolerated. As a result, this proposal has lost its
significance. We are now closing this discussion. Thanks to all
participants for their contributions. This has been a successful
collaboration.

Best regards,
Xiangying

On Mon, Aug 28, 2023 at 10:13 AM Xiangying Meng  wrote:
>
> Hi Penghui,
>
> >From my understanding.
> >The message deduplication should only check the last chunk of the message.
>
> Yes, I agree. If we only check the first chunk, the third chunk will be 
> dropped.
>
> Thanks,
> Xiangying
>
> On Mon, Aug 28, 2023 at 10:08 AM PengHui Li  wrote:
> >
> > Hi Xiangying,
> >
> > Thanks for driving the proposal.
> > From my understanding.
> > The message deduplication should only check the last chunk of the message.
> > It doesn't need to care about whether each chunk is duplicated.
> > The client side should handle issues like duplicated chunks.
> >
> > For the example that you have discussed:
> >
> > ```
> > Producer sent:
> > 1. SequenceID: 0, ChunkID: 0
> > 2. SequenceID: 0, ChunkID: 1
> > 3. SequenceID: 0, ChunkID: 0
> > 4. SequenceID: 0, ChunkID: 1
> > 5. SequenceID: 0, ChunkID: 2
> > ```
> >
> > The consumer should give up 1 and 2. And start to build the chunk message
> > from 3 to 5.
> > Because 1 and 2 belong to an incomplete chunk message.
> >
> > For the deduplication. If the chunkId 2 is the last chunk of the message.
> > We should put it into the persistence map in the deduplication once it has
> > been persistent.
> > Any subsequent messages with the same sequence ID and producer name will be
> > treated as
> > duplicates, no matter whether the sequence ID is generated by the producer
> > or specified by users.
> >
> > Regards,
> > Penghui
> >
> > On Sat, Aug 26, 2023 at 5:55 PM Xiangying Meng  wrote:
> >
> > > Share more information: Even for versions before 3.0.0, the approach
> > > doesn't assemble chunks 3, 4, and 5 together.
> > >
> > > Please note this line of code:
> > >
> > > ```java
> > > chunkedMsgCtx = chunkedMessagesMap.computeIfAbsent(msgMetadata.getUuid(),
> > > (key) -> ChunkedMessageCtx.get(totalChunks,
> > > chunkedMsgBuffer));
> > > ```
> > >
> > > And the new solution we adopted in the PR [0] is to add a timestamp in
> > > the uuid. Thank Heesung for providing this idea again.
> > >
> > > [0]  https://github.com/apache/pulsar/pull/20948
> > >
> > >
> > > On Sat, Aug 26, 2023 at 5:20 PM Xiangying Meng 
> > > wrote:
> > > >
> > > > Hi Zike,
> > > >
> > > > PR [0] has already fixed this bug and won't introduce compatibility
> > > issues.
> > > > PR [1] is unnecessary and can be closed. However, I still greatly
> > > > appreciate the information you provided.
> > > >
> > > > [0] https://github.com/apache/pulsar/pull/20948
> > > > [1] https://github.com/apache/pulsar/pull/21070
> > > >
> > > > On Sat, Aug 26, 2023 at 4:49 PM Zike Yang  wrote:
> > > > >
> > > > > > Hi Zike
> > > > > You can see the processMessageChunk method of the ConsumerImpl.
> > > > >
> > > > > Ah. That seems like a regression bug introduced by
> > > > > https://github.com/apache/pulsar/pull/18511. I have pushed a PR to fix
> > > > > it: https://github.com/apache/pulsar/pull/21070
> > > > >
> > > > > For the behavior before Pulsar 3.0.0. The consumer should assemble the
> > > > > message using 3,4,5.
> > > > >
> > > > > Thanks for pointing this out.
> > > > >
> > > > > BR,
> > > > > Zike Yang
> > > > >
> > > > > On Sat, Aug 26, 2023 at 3:58 PM Xiangying Meng 
> > > wrote:
> > > > > >
> > > > > > >> Consumer receive:
> > > > > > >1. SequenceID: 0, ChunkID: 0
> > > > > > >2. SequenceID: 0, ChunkID: 1
> > > > > > >3. SequenceID: 0, ChunkID: 0 // chunk ID out of order. Release this
> > > > > > >chunk and recycle its `chunkedMsgCtx`.
> > > > > > >4. SequenceID: 0, ChunkID: 1  // chunkedMsgCtx == null Release it.
> > > > > 

Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-27 Thread Xiangying Meng
Hi Penghui,

>From my understanding.
>The message deduplication should only check the last chunk of the message.

Yes, I agree. If we only check the first chunk, the third chunk will be dropped.
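A minimal sketch of that last-chunk-only dedup check (illustrative
names; not the actual broker implementation):

```python
# Sketch of broker-side dedup that checks only the last chunk of a chunked
# message, as discussed in this thread. Names are illustrative, not Pulsar's.
def make_deduper():
    last_persisted = {}  # producer name -> highest persisted sequence id

    def should_persist(producer, seq_id, chunk_id, num_chunks):
        highest = last_persisted.get(producer, -1)
        if seq_id <= highest:
            return False  # the whole logical message is a duplicate
        if chunk_id == num_chunks - 1:
            # Only the final chunk marks the logical message as persisted.
            last_persisted[producer] = seq_id
        return True

    return should_persist

check = make_deduper()
# Redelivered chunks 0 and 1 are still accepted; the client side is
# responsible for discarding the incomplete first attempt.
sends = [(0, 0), (0, 1), (0, 0), (0, 1), (0, 2)]
results = [check("p", seq, cid, 3) for seq, cid in sends]
assert results == [True, True, True, True, True]
# After the last chunk persists, a full resend of sequence 0 is rejected.
assert check("p", 0, 0, 3) is False
```

This mirrors the point above: duplicated intermediate chunks pass through
the broker, and deduplication keys only on the final chunk.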

Thanks,
Xiangying

On Mon, Aug 28, 2023 at 10:08 AM PengHui Li  wrote:
>
> Hi Xiangying,
>
> Thanks for driving the proposal.
> From my understanding.
> The message deduplication should only check the last chunk of the message.
> It doesn't need to care about whether each chunk is duplicated.
> The client side should handle issues like duplicated chunks.
>
> For the example that you have discussed:
>
> ```
> Producer sent:
> 1. SequenceID: 0, ChunkID: 0
> 2. SequenceID: 0, ChunkID: 1
> 3. SequenceID: 0, ChunkID: 0
> 4. SequenceID: 0, ChunkID: 1
> 5. SequenceID: 0, ChunkID: 2
> ```
>
> The consumer should give up 1 and 2. And start to build the chunk message
> from 3 to 5.
> Because 1 and 2 belong to an incomplete chunk message.
>
> For the deduplication. If the chunkId 2 is the last chunk of the message.
> We should put it into the persistence map in the deduplication once it has
> been persistent.
> Any subsequent messages with the same sequence ID and producer name will be
> treated as
> duplicates, no matter whether the sequence ID is generated by the producer
> or specified by users.
>
> Regards,
> Penghui
>
> On Sat, Aug 26, 2023 at 5:55 PM Xiangying Meng  wrote:
>
> > Share more information: Even for versions before 3.0.0, the approach
> > doesn't assemble chunks 3, 4, and 5 together.
> >
> > Please note this line of code:
> >
> > ```java
> > chunkedMsgCtx = chunkedMessagesMap.computeIfAbsent(msgMetadata.getUuid(),
> > (key) -> ChunkedMessageCtx.get(totalChunks,
> > chunkedMsgBuffer));
> > ```
> >
> > And the new solution we adopted in the PR [0] is to add a timestamp in
> > the uuid. Thank Heesung for providing this idea again.
> >
> > [0]  https://github.com/apache/pulsar/pull/20948
> >
> >
> > On Sat, Aug 26, 2023 at 5:20 PM Xiangying Meng 
> > wrote:
> > >
> > > Hi Zike,
> > >
> > > PR [0] has already fixed this bug and won't introduce compatibility
> > issues.
> > > PR [1] is unnecessary and can be closed. However, I still greatly
> > > appreciate the information you provided.
> > >
> > > [0] https://github.com/apache/pulsar/pull/20948
> > > [1] https://github.com/apache/pulsar/pull/21070
> > >
> > > On Sat, Aug 26, 2023 at 4:49 PM Zike Yang  wrote:
> > > >
> > > > > Hi Zike
> > > > You can see the processMessageChunk method of the ConsumerImpl.
> > > >
> > > > Ah. That seems like a regression bug introduced by
> > > > https://github.com/apache/pulsar/pull/18511. I have pushed a PR to fix
> > > > it: https://github.com/apache/pulsar/pull/21070
> > > >
> > > > For the behavior before Pulsar 3.0.0. The consumer should assemble the
> > > > message using 3,4,5.
> > > >
> > > > Thanks for pointing this out.
> > > >
> > > > BR,
> > > > Zike Yang
> > > >
> > > > On Sat, Aug 26, 2023 at 3:58 PM Xiangying Meng 
> > wrote:
> > > > >
> > > > > >> Consumer receive:
> > > > > >1. SequenceID: 0, ChunkID: 0
> > > > > >2. SequenceID: 0, ChunkID: 1
> > > > > >3. SequenceID: 0, ChunkID: 0 // chunk ID out of order. Release this
> > > > > >chunk and recycle its `chunkedMsgCtx`.
> > > > > >4. SequenceID: 0, ChunkID: 1  // chunkedMsgCtx == null Release it.
> > > > > >5. SequenceID: 0, ChunkID: 2  // chunkedMsgCtx == null Release it.
> > > > > >
> > > > > >I think this case is wrong. For the current implementation, the
> > > > > >message 3,4,5 will be assembled as a original large message.
> > > > >
> > > > > Hi Zike
> > > > > You can see the processMessageChunk method of the ConsumerImpl.
> > > > >
> > > > > ```
> > > > >
> > > > > ChunkedMessageCtx chunkedMsgCtx =
> > chunkedMessagesMap.get(msgMetadata.getUuid());
> > > > >
> > > > > if (msgMetadata.getChunkId() == 0 && chunkedMsgCtx == null) {
> > > > > //assemble a chunkedMsgCtx and put into
> > > > > pendingChunkedMessageUuidQueue and chunkedMessagesMap.
> > > > > }
> > > > >
> > > > > if (c

Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-26 Thread Xiangying Meng
To share more information: even for versions before 3.0.0, this approach
does not assemble chunks 3, 4, and 5 together.

Please note this line of code:

```java
chunkedMsgCtx = chunkedMessagesMap.computeIfAbsent(msgMetadata.getUuid(),
(key) -> ChunkedMessageCtx.get(totalChunks,
chunkedMsgBuffer));
```

And the new solution we adopted in the PR [0] is to add a timestamp in
the uuid. Thank Heesung for providing this idea again.

[0]  https://github.com/apache/pulsar/pull/20948
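A sketch of the idea (the exact uuid format in the PR may differ; this
only shows the shape of a timestamp-suffixed identifier):

```python
# Sketch: make each chunked-message send attempt produce a distinct uuid so
# the consumer's reassembly map keyed by uuid cannot mix two attempts.
# The timestamp-suffix scheme is assumed from this thread, not Pulsar's
# exact wire format.
def chunk_uuid(producer_name, sequence_id, send_time_ms):
    return f"{producer_name}-{sequence_id}-{send_time_ms}"

first_attempt = chunk_uuid("producer-1", 0, 1_692_000_000_000)
resend = chunk_uuid("producer-1", 0, 1_692_000_000_042)

# Same producer and sequence id, but the uuids differ, so chunks from the
# aborted first attempt cannot be assembled with chunks from the resend.
assert first_attempt != resend
```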


On Sat, Aug 26, 2023 at 5:20 PM Xiangying Meng  wrote:
>
> Hi Zike,
>
> PR [0] has already fixed this bug and won't introduce compatibility issues.
> PR [1] is unnecessary and can be closed. However, I still greatly
> appreciate the information you provided.
>
> [0] https://github.com/apache/pulsar/pull/20948
> [1] https://github.com/apache/pulsar/pull/21070
>
> On Sat, Aug 26, 2023 at 4:49 PM Zike Yang  wrote:
> >
> > > Hi Zike
> > You can see the processMessageChunk method of the ConsumerImpl.
> >
> > Ah. That seems like a regression bug introduced by
> > https://github.com/apache/pulsar/pull/18511. I have pushed a PR to fix
> > it: https://github.com/apache/pulsar/pull/21070
> >
> > For the behavior before Pulsar 3.0.0. The consumer should assemble the
> > message using 3,4,5.
> >
> > Thanks for pointing this out.
> >
> > BR,
> > Zike Yang
> >
> > On Sat, Aug 26, 2023 at 3:58 PM Xiangying Meng  wrote:
> > >
> > > >> Consumer receive:
> > > >1. SequenceID: 0, ChunkID: 0
> > > >2. SequenceID: 0, ChunkID: 1
> > > >3. SequenceID: 0, ChunkID: 0 // chunk ID out of order. Release this
> > > >chunk and recycle its `chunkedMsgCtx`.
> > > >4. SequenceID: 0, ChunkID: 1  // chunkedMsgCtx == null Release it.
> > > >5. SequenceID: 0, ChunkID: 2  // chunkedMsgCtx == null Release it.
> > > >
> > > >I think this case is wrong. For the current implementation, the
> > > >message 3,4,5 will be assembled as a original large message.
> > >
> > > Hi Zike
> > > You can see the processMessageChunk method of the ConsumerImpl.
> > >
> > > ```
> > >
> > > ChunkedMessageCtx chunkedMsgCtx = 
> > > chunkedMessagesMap.get(msgMetadata.getUuid());
> > >
> > > if (msgMetadata.getChunkId() == 0 && chunkedMsgCtx == null) {
> > > //assemble a chunkedMsgCtx and put into
> > > pendingChunkedMessageUuidQueue and chunkedMessagesMap.
> > > }
> > >
> > > if (chunkedMsgCtx == null || chunkedMsgCtx.chunkedMsgBuffer == null
> > > || msgMetadata.getChunkId() !=
> > > (chunkedMsgCtx.lastChunkedMessageId + 1)) {
> > > if (chunkedMsgCtx != null) {
> > > if (chunkedMsgCtx.chunkedMsgBuffer != null) {
> > > 
> > > ReferenceCountUtil.safeRelease(chunkedMsgCtx.chunkedMsgBuffer);
> > > }
> > > chunkedMsgCtx.recycle();
> > > }
> > > chunkedMessagesMap.remove(msgMetadata.getUuid());
> > > compressedPayload.release();
> > > increaseAvailablePermits(cnx);
> > > }
> > >
> > > ```
> > >
> > > On Sat, Aug 26, 2023 at 3:48 PM Zike Yang  wrote:
> > > >
> > > > > Consumer receive:
> > > > 1. SequenceID: 0, ChunkID: 0
> > > > 2. SequenceID: 0, ChunkID: 1
> > > > 3. SequenceID: 0, ChunkID: 0 // chunk ID out of order. Release this
> > > > chunk and recycle its `chunkedMsgCtx`.
> > > > 4. SequenceID: 0, ChunkID: 1  // chunkedMsgCtx == null Release it.
> > > > 5. SequenceID: 0, ChunkID: 2  // chunkedMsgCtx == null Release it.
> > > >
> > > > I think this case is wrong. For the current implementation, the
> > > > message 3,4,5 will be assembled as a original large message.
> > > >
> > > > HI, Heesung
> > > >
> > > >
> > > > > I think brokers probably need to track map to 
> > > > > dedup
> > > >
> > > > I propose a simpler solution in this mail thread earlier, which
> > > > doesn't need to introduce map :
> > > >
> > > > > I think a simple better approach is to only check the deduplication
> > > > for the last chunk of the large message. The consumer only gets the
> > > > whole message after receiving the last chunk. We don't need to check
> > > > the deduplication for all previous chunks. Also by doing this we only
> > > > need bug fixes, we d

Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-26 Thread Xiangying Meng
Hi Zike,

PR [0] has already fixed this bug and won't introduce compatibility issues.
PR [1] is unnecessary and can be closed. However, I still greatly
appreciate the information you provided.

[0] https://github.com/apache/pulsar/pull/20948
[1] https://github.com/apache/pulsar/pull/21070

On Sat, Aug 26, 2023 at 4:49 PM Zike Yang  wrote:
>
> > Hi Zike
> You can see the processMessageChunk method of the ConsumerImpl.
>
> Ah. That seems like a regression bug introduced by
> https://github.com/apache/pulsar/pull/18511. I have pushed a PR to fix
> it: https://github.com/apache/pulsar/pull/21070
>
> For the behavior before Pulsar 3.0.0. The consumer should assemble the
> message using 3,4,5.
>
> Thanks for pointing this out.
>
> BR,
> Zike Yang
>
> On Sat, Aug 26, 2023 at 3:58 PM Xiangying Meng  wrote:
> >
> > >> Consumer receive:
> > >1. SequenceID: 0, ChunkID: 0
> > >2. SequenceID: 0, ChunkID: 1
> > >3. SequenceID: 0, ChunkID: 0 // chunk ID out of order. Release this
> > >chunk and recycle its `chunkedMsgCtx`.
> > >4. SequenceID: 0, ChunkID: 1  // chunkedMsgCtx == null Release it.
> > >5. SequenceID: 0, ChunkID: 2  // chunkedMsgCtx == null Release it.
> > >
> > >I think this case is wrong. For the current implementation, the
> > >message 3,4,5 will be assembled as a original large message.
> >
> > Hi Zike
> > You can see the processMessageChunk method of the ConsumerImpl.
> >
> > ```
> >
> > ChunkedMessageCtx chunkedMsgCtx = 
> > chunkedMessagesMap.get(msgMetadata.getUuid());
> >
> > if (msgMetadata.getChunkId() == 0 && chunkedMsgCtx == null) {
> > //assemble a chunkedMsgCtx and put into
> > pendingChunkedMessageUuidQueue and chunkedMessagesMap.
> > }
> >
> > if (chunkedMsgCtx == null || chunkedMsgCtx.chunkedMsgBuffer == null
> > || msgMetadata.getChunkId() !=
> > (chunkedMsgCtx.lastChunkedMessageId + 1)) {
> > if (chunkedMsgCtx != null) {
> > if (chunkedMsgCtx.chunkedMsgBuffer != null) {
> > ReferenceCountUtil.safeRelease(chunkedMsgCtx.chunkedMsgBuffer);
> > }
> > chunkedMsgCtx.recycle();
> > }
> > chunkedMessagesMap.remove(msgMetadata.getUuid());
> > compressedPayload.release();
> > increaseAvailablePermits(cnx);
> > }
> >
> > ```
> >
> > On Sat, Aug 26, 2023 at 3:48 PM Zike Yang  wrote:
> > >
> > > > Consumer receive:
> > > 1. SequenceID: 0, ChunkID: 0
> > > 2. SequenceID: 0, ChunkID: 1
> > > 3. SequenceID: 0, ChunkID: 0 // chunk ID out of order. Release this
> > > chunk and recycle its `chunkedMsgCtx`.
> > > 4. SequenceID: 0, ChunkID: 1  // chunkedMsgCtx == null Release it.
> > > 5. SequenceID: 0, ChunkID: 2  // chunkedMsgCtx == null Release it.
> > >
> > > I think this case is wrong. For the current implementation, the
> > > message 3,4,5 will be assembled as a original large message.
> > >
> > > HI, Heesung
> > >
> > >
> > > > I think brokers probably need to track map to dedup
> > >
> > > I propose a simpler solution in this mail thread earlier, which
> > > doesn't need to introduce map :
> > >
> > > > I think a simple better approach is to only check the deduplication
> > > for the last chunk of the large message. The consumer only gets the
> > > whole message after receiving the last chunk. We don't need to check
> > > the deduplication for all previous chunks. Also by doing this we only
> > > need bug fixes, we don't need to introduce a new PIP.
> > >
> > > Could you explain or show a case in what cases would lead to this
> > > simpler solution not working?
> > >
> > > Thanks,
> > > Zike Yang
> > >
> > > On Sat, Aug 26, 2023 at 1:38 PM Heesung Sohn
> > >  wrote:
> > > >
> > > > > In this case, the consumer only can receive m1.
> > > >
> > > > Regarding this comment, can you explain how the consumer only receives 
> > > > m1?
> > > > Here, m1's and m2's uuid and msgId will be different(if we suffix with a
> > > > chunkingSessionId), although the sequence id is the same.
> > > >
> > > > > If we throw an exception when users use the same sequence to send the
> > > > message.
> > > > Do You mean `If "producers" throw an exception when users use the same
> > > > sequence to send the message.`.
> > > > Again, when th

Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-26 Thread Xiangying Meng
>> Consumer receive:
>1. SequenceID: 0, ChunkID: 0
>2. SequenceID: 0, ChunkID: 1
>3. SequenceID: 0, ChunkID: 0 // chunk ID out of order. Release this
>chunk and recycle its `chunkedMsgCtx`.
>4. SequenceID: 0, ChunkID: 1  // chunkedMsgCtx == null Release it.
>5. SequenceID: 0, ChunkID: 2  // chunkedMsgCtx == null Release it.
>
>I think this case is wrong. For the current implementation, the
>message 3,4,5 will be assembled as a original large message.

Hi Zike
You can see the processMessageChunk method of the ConsumerImpl.

```

ChunkedMessageCtx chunkedMsgCtx = chunkedMessagesMap.get(msgMetadata.getUuid());

if (msgMetadata.getChunkId() == 0 && chunkedMsgCtx == null) {
//assemble a chunkedMsgCtx and put into
pendingChunkedMessageUuidQueue and chunkedMessagesMap.
}

if (chunkedMsgCtx == null || chunkedMsgCtx.chunkedMsgBuffer == null
|| msgMetadata.getChunkId() !=
(chunkedMsgCtx.lastChunkedMessageId + 1)) {
if (chunkedMsgCtx != null) {
if (chunkedMsgCtx.chunkedMsgBuffer != null) {
ReferenceCountUtil.safeRelease(chunkedMsgCtx.chunkedMsgBuffer);
}
chunkedMsgCtx.recycle();
}
chunkedMessagesMap.remove(msgMetadata.getUuid());
compressedPayload.release();
increaseAvailablePermits(cnx);
}

```
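The effect of that branch can be modeled in a simplified sketch (not the
real ConsumerImpl, which also tracks buffers and permits):

```python
# Simplified model of ConsumerImpl's chunk handling: a context is created
# only for chunk 0 when none exists, and any out-of-order chunk id discards
# the context, so every later chunk of that uuid is dropped until a fresh
# chunk 0 arrives for a uuid with no context.
def consume(chunks):
    ctx = {}          # uuid -> next expected chunk id
    delivered = []    # uuids of fully reassembled messages
    for uuid, chunk_id, total in chunks:
        if chunk_id == 0 and uuid not in ctx:
            ctx[uuid] = 0
        if ctx.get(uuid) != chunk_id:
            ctx.pop(uuid, None)   # out of order: release and recycle context
            continue
        ctx[uuid] = chunk_id + 1
        if chunk_id == total - 1:
            delivered.append(uuid)
            del ctx[uuid]
    return delivered

# The sequence discussed in this thread: a duplicate chunk 0 arrives while a
# context for the same uuid already exists, so chunks 3-5 are all dropped
# and no message is delivered.
stream = [("u", 0, 3), ("u", 1, 3), ("u", 0, 3), ("u", 1, 3), ("u", 2, 3)]
assert consume(stream) == []
```

Because the duplicate chunk 0 does not replace the existing context, the
resent attempt can never start reassembly, which is the behavior debated
above.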

On Sat, Aug 26, 2023 at 3:48 PM Zike Yang  wrote:
>
> > Consumer receive:
> 1. SequenceID: 0, ChunkID: 0
> 2. SequenceID: 0, ChunkID: 1
> 3. SequenceID: 0, ChunkID: 0 // chunk ID out of order. Release this
> chunk and recycle its `chunkedMsgCtx`.
> 4. SequenceID: 0, ChunkID: 1  // chunkedMsgCtx == null Release it.
> 5. SequenceID: 0, ChunkID: 2  // chunkedMsgCtx == null Release it.
>
> I think this case is wrong. For the current implementation, the
> message 3,4,5 will be assembled as an original large message.
>
> HI, Heesung
>
>
> > I think brokers probably need to track map to dedup
>
> I proposed a simpler solution earlier in this mail thread, which
> doesn't need to introduce the map:
>
> > I think a simple better approach is to only check the deduplication
> for the last chunk of the large message. The consumer only gets the
> whole message after receiving the last chunk. We don't need to check
> the deduplication for all previous chunks. Also by doing this we only
> need bug fixes, we don't need to introduce a new PIP.
>
> Could you explain or show a case in what cases would lead to this
> simpler solution not working?
>
> Thanks,
> Zike Yang
>
> On Sat, Aug 26, 2023 at 1:38 PM Heesung Sohn
>  wrote:
> >
> > > In this case, the consumer only can receive m1.
> >
> > Regarding this comment, can you explain how the consumer only receives m1?
> > Here, m1's and m2's uuid and msgId will be different(if we suffix with a
> > chunkingSessionId), although the sequence id is the same.
> >
> > > If we throw an exception when users use the same sequence to send the
> > message.
> > Do You mean `If "producers" throw an exception when users use the same
> > sequence to send the message.`.
> > Again, when the producers restart, they lose the last sequence id sent.
> >
> >
> > > If we do not throw an exception when users use the same sequence to
> > send the message.
> >
> > For this logic, how do we handle if the producer suddenly resends the
> > chunked message with a different chunking scheme(e.g. maxMessageSize) ?
> > uuid=1, sid=0, cid=0
> > uuid=1, sid=0, cid=1
> > uuid=2, sid=0, cid=0
> > uuid=2, sid=0, cid=1
> >
> > We could refine what to track and algo logic on the broker side more, but
> > do we agree that the broker chunk dedup logic is needed?
> >
> > I will continue to think more next week. Have a nice weekend.
> >
> >
> >
> >
> > On Fri, Aug 25, 2023 at 9:14 PM Xiangying Meng  wrote:
> >
> > > Hi Heesung,
> > >
> > > Maybe we only need to maintain the last chunk ID in a map.
> > > Map map1.
> > > And we already have a map maintaining the last sequence ID.
> > > Map map2.
> > >
> > > If we do not throw an exception when users use the same sequence to
> > > send the message.
> > >
> > > For any incoming msg, m :
> > > chunk ID = -1;
> > > If m is a chunk message:
> > > chunk ID = m.chunkID.
> > >   If m.currentSeqid < LastSeqId, dedup.
> > >   If m.currentSeqid > LastSeqId && m.chunk ID = 0, nodedup
> > > if chunk ID exists in the map.
> > >Update it and log an error. This means there is an
> > > incomplete chunk message.
> > > If chunk

Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-26 Thread Xiangying Meng
>Regarding this comment, can you explain how the consumer only receives m1?
>Here, m1's and m2's uuid and msgId will be different(if we suffix with a
>chunkingSessionId), although the sequence ID is the same.

They have the same sequence ID, so the first chunk will be dropped
because its sequence ID <= the last sequence ID. The remaining chunks
will be dropped because their chunk ID is not in the chunk ID map.
The detailed process can be found in my last email.


>> If we throw an exception when users use the same sequence to send the
>message.
>Do You mean `If "producers" throw an exception when users use the same
>sequence to send the message.`.
>Again, when the producers restart, they lose the last sequence id sent.

When producers restart, they will get the last sequence ID sent from the broker.
There are two sequence IDs for each producer: seq1 and seq2. seq1 is
updated before persisting messages, and seq2 is updated after
persisting successfully.
If the client crashes and the producer is rebuilt, the producer will get
seq1 from the broker.

So, the different messages should only have the same sequence ID if
the user mistakenly sets it.
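A minimal sketch of the two sequence ids described above (the field names `highestPushed`/`highestPersisted` are illustrative stand-ins for seq1/seq2, not the broker's actual fields):

```java
// Illustrative sketch: seq1 advances before a message is persisted, seq2 only
// after the persist succeeds; a reconnecting producer is handed seq1.
class ProducerSeqSketch {
    long highestPushed = -1;    // seq1: updated before persisting a message
    long highestPersisted = -1; // seq2: updated after persisting successfully

    void beforePersist(long seqId) { highestPushed = seqId; }
    void afterPersist(long seqId)  { highestPersisted = seqId; }

    // On reconnect after a client crash, the broker returns seq1, so the
    // rebuilt producer does not accidentally reuse an in-flight sequence id.
    long sequenceIdForReconnect() { return highestPushed; }

    public static void main(String[] args) {
        ProducerSeqSketch s = new ProducerSeqSketch();
        s.beforePersist(5);               // broker accepted seq 5, not yet persisted
        assert s.sequenceIdForReconnect() == 5;
        assert s.highestPersisted == -1;  // persist has not completed yet
        s.afterPersist(5);
        assert s.highestPersisted == 5;
    }
}
```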

>> If we do not throw an exception when users use the same sequence to
> Send the message.
>
>For this logic, how do we handle if the producer suddenly resends the
>chunked message with a different chunking scheme(e.g. maxMessageSize)?
>uuid=1, sid=0, cid=0
>uuid=1, sid=0, cid=1
>uuid=2, sid=0, cid=0
>uuid=2, sid=0, cid=1

They have different uuid, so they are different messages. But they
have the same sequence ID. The sequence ID should be recognized as
mistakenly set by users.
If we do not throw an exception, it should be deduplicated by the broker.

 > We could refine what to track and algo logic on the broker side more, but
> do we agree that the broker chunk dedup logic is needed?

I agree.
We have two issues that need to be clarified:
1. Can we allow some dirty data in the topic? This relates to whether
this proposal is necessary.
2. When the user's manually set sequence ID is greater than the
sequence ID of a previously sent message, should we throw an
exception?

On Sat, Aug 26, 2023 at 1:38 PM Heesung Sohn
 wrote:
>
> > In this case, the consumer only can receive m1.
>
> Regarding this comment, can you explain how the consumer only receives m1?
> Here, m1's and m2's uuid and msgId will be different(if we suffix with a
> chunkingSessionId), although the sequence id is the same.
>
> > If we throw an exception when users use the same sequence to send the
> message.
> Do You mean `If "producers" throw an exception when users use the same
> sequence to send the message.`.
> Again, when the producers restart, they lose the last sequence id sent.
>
>
> > If we do not throw an exception when users use the same sequence to
> send the message.
>
> For this logic, how do we handle if the producer suddenly resends the
> chunked message with a different chunking scheme(e.g. maxMessageSize) ?
> uuid=1, sid=0, cid=0
> uuid=1, sid=0, cid=1
> uuid=2, sid=0, cid=0
> uuid=2, sid=0, cid=1
>
> We could refine what to track and algo logic on the broker side more, but
> do we agree that the broker chunk dedup logic is needed?
>
> I will continue to think more next week. Have a nice weekend.
>
>
>
>
> On Fri, Aug 25, 2023 at 9:14 PM Xiangying Meng  wrote:
>
> > Hi Heesung,
> >
> > Maybe we only need to maintain the last chunk ID in a map.
> > Map map1.
> > And we already have a map maintaining the last sequence ID.
> > Map map2.
> >
> > If we do not throw an exception when users use the same sequence to
> > send the message.
> >
> > For any incoming msg m:
> >   chunk ID = -1;
> >   If m is a chunk message:
> >     chunk ID = m.chunkID.
> >   If m.currentSeqid < LastSeqId, dedup.
> >   If m.currentSeqid > LastSeqId && m.chunkID == 0, no dedup.
> >     If chunk ID exists in the map:
> >       Update it and log an error. This means there is an incomplete chunk message.
> >     If chunk ID does not exist in the map:
> >       Put it in the map.
> >   If m.currentSeqid == LastSeqId:
> >     1. If m.chunkID == -1 || m.chunkID == 0, dedup.
> >     2. If 1 <= m.chunkID <= total chunks:
> >        1. If chunk ID does not exist in the map, dedup.
> >        2. If chunk ID exists in the map, check the order of the chunkID
> >           to determine whether to dedup.
> >     3. If m.chunkID == total chunks, persist the chunk and remove the
> >        chunkID from the map.
> >
> >
> > If we throw an ex

Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-25 Thread Xiangying Meng
Hi Heesung,

Maybe we only need to maintain the last chunk ID in a map.
Map map1.
And we already have a map maintaining the last sequence ID.
Map map2.

If we do not throw an exception when users use the same sequence to
send the message.

For any incoming msg m:
  chunk ID = -1;
  If m is a chunk message:
    chunk ID = m.chunkID.
  If m.currentSeqid < LastSeqId, dedup.
  If m.currentSeqid > LastSeqId && m.chunkID == 0, no dedup.
    If chunk ID exists in the map:
      Update it and log an error. This means there is an incomplete chunk message.
    If chunk ID does not exist in the map:
      Put it in the map.
  If m.currentSeqid == LastSeqId:
    1. If m.chunkID == -1 || m.chunkID == 0, dedup.
    2. If 1 <= m.chunkID <= total chunks:
       1. If chunk ID does not exist in the map, dedup.
       2. If chunk ID exists in the map, check the order of the chunkID
          to determine whether to dedup.
    3. If m.chunkID == total chunks, persist the chunk and remove the
       chunkID from the map.
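The decision table above could be sketched roughly as follows (hypothetical names and a simplified blend of the rules; not the broker's actual deduplication code):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of broker-side chunk dedup: one map for the last sequence
// id per producer, one for the last chunk id of an in-flight chunked message.
class ChunkDedupSketch {
    private final Map<String, Long> lastSeqId = new HashMap<>();
    private final Map<String, Integer> lastChunkId = new HashMap<>();

    /** chunkId < 0 means a non-chunked message; returns true if m is dropped. */
    boolean dedup(String producer, long seqId, int chunkId, int totalChunks) {
        long last = lastSeqId.getOrDefault(producer, -1L);
        if (seqId < last) {
            return true; // stale sequence id: duplicate
        }
        if (seqId > last) { // a new message; chunk 0 starts a new session
            if (chunkId > 0) {
                return true; // chunk arrived without its chunk 0: drop
            }
            lastSeqId.put(producer, seqId);
            if (chunkId == 0) {
                lastChunkId.put(producer, 0);
            } else {
                lastChunkId.remove(producer); // non-chunked message
            }
            return false;
        }
        // seqId == last: only the in-order successor chunk of the open session passes
        Integer prev = lastChunkId.get(producer);
        if (prev == null || chunkId != prev + 1) {
            return true; // duplicate or out-of-order chunk
        }
        lastChunkId.put(producer, chunkId);
        if (chunkId == totalChunks - 1) {
            lastChunkId.remove(producer); // last chunk: session complete
        }
        return false;
    }

    public static void main(String[] args) {
        ChunkDedupSketch d = new ChunkDedupSketch();
        // m1: s1/c0, s1/c1, s1/c2 all pass
        assert !d.dedup("p", 1, 0, 3);
        assert !d.dedup("p", 1, 1, 3);
        assert !d.dedup("p", 1, 2, 3);
        // m2: a resend of the same chunks with the same sequence id is dropped
        assert d.dedup("p", 1, 0, 3);
        assert d.dedup("p", 1, 1, 3);
        assert d.dedup("p", 1, 2, 3);
    }
}
```

Under this sketch, the duplicate-resend case discussed in this thread (the same chunked message sent twice with the same sequence id) is dropped in full the second time, so the consumer only receives m1.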


If we throw an exception when users use the same sequence to send the message.

For any incoming msg m:
  chunk ID = 0;
  If m is a chunk message:
    chunk ID = m.chunkID.
  If m.currentSeqid < LastSeqId, dedup.
  If m.currentSeqid == LastSeqId:
    If chunkID > 0, check the last chunkID to determine whether to dedup;
      if chunkID == 1, put the chunkID into the map if absent.
    If chunkID == 0, dedup.

BR,
xiangying

On Sat, Aug 26, 2023 at 11:53 AM Heesung Sohn
 wrote:
>
> However, what If the producer jvm gets restarted after the broker persists
> the m1 (but before updating their sequence id in their persistence
> storage), and the producer is trying to resend the same msg(so m2) with the
> same sequence id after restarting?
>
>
>
>
>
> On Fri, Aug 25, 2023 at 8:22 PM Xiangying Meng  wrote:
>
> > Hi Heesung,
> >
> > In this case, the consumer only can receive m1.
> >
> > But it has the same content as the previous case: What should we do if
> > the user sends messages with the sequence ID that was used previously?
> >
> > I am afraid to introduce the incompatibility in this case, so I only
> > added a warning log in the PR[0] instead of throwing an exception.
> > Regarding this matter, what do you think? Should we throw an exception
> > or add error logs?
> >
> > I'm looking forward to hearing your viewpoint.
> >
> > Thanks,
> > Xiangying
> >
> > [0] https://github.com/apache/pulsar/pull/21047
> >
> > On Sat, Aug 26, 2023 at 10:58 AM Heesung Sohn
> >  wrote:
> > >
> > > Actually, can we think about this case too?
> > >
> > > What happens if the cx sends the same chunked msg with the same seq id
> > when
> > > dedup is enabled?
> > >
> > > // user send a chunked msg, m1
> > > s1, c0
> > > s1, c1
> > > s1, c2 // complete
> > >
> > > // user resend the duplicate msg, m2
> > > s1, c0
> > > s1, c1
> > > s1, c2 //complete
> > >
> > > Do consumers receive m1 and m2(no dedup)?
> > >
> > >
> > >
> > > On Fri, Aug 25, 2023 at 6:55 PM Xiangying Meng 
> > wrote:
> > >
> > > > Hi Heesung,
> > > >
> > > > >I think this means, for the PIP, the broker side's chunk
> > deduplication.
> > > > >I think brokers probably need to track map to
> > dedup
> > > >
> > > > What is the significance of doing this?
> > > > My understanding is that if the client crashes and restarts after
> > > > sending half of a chunk message and then it resends the previous chunk
> > > > message, the resent chunk message should be treated as a new message
> > > > since it calls the producer's API again.
> > > > If deduplication is enabled, users should ensure that their customized
> > > > sequence ID is not lower than the previous sequence ID.
> > > > I have considered this scenario and added a warning log in PR[0]. (I'm
> > > > not sure whether an error log should be added or an exception thrown.)
> > > > If deduplication is not enabled, on the consumer side, there should be
> > > > an incomplete chunk message received alongside another complete chunk
> > > > message, each with a different UUID, and they will not interfere with
> > > > each other.
> > > >
> > > > My main point is that every message sent using
> > > > `producer.newMessage().send()` should be treated as a new message.

Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-25 Thread Xiangying Meng
Hi Heesung,

In this case, the consumer only can receive m1.

But it has the same content as the previous case: What should we do if
the user sends messages with the sequence ID that was used previously?

I am afraid to introduce the incompatibility in this case, so I only
added a warning log in the PR[0] instead of throwing an exception.
Regarding this matter, what do you think? Should we throw an exception
or add error logs?

I'm looking forward to hearing your viewpoint.

Thanks,
Xiangying

[0] https://github.com/apache/pulsar/pull/21047

On Sat, Aug 26, 2023 at 10:58 AM Heesung Sohn
 wrote:
>
> Actually, can we think about this case too?
>
> What happens if the cx sends the same chunked msg with the same seq id when
> dedup is enabled?
>
> // user send a chunked msg, m1
> s1, c0
> s1, c1
> s1, c2 // complete
>
> // user resend the duplicate msg, m2
> s1, c0
> s1, c1
> s1, c2 //complete
>
> Do consumers receive m1 and m2(no dedup)?
>
>
>
> On Fri, Aug 25, 2023 at 6:55 PM Xiangying Meng  wrote:
>
> > Hi Heesung,
> >
> > >I think this means, for the PIP, the broker side's chunk deduplication.
> > >I think brokers probably need to track map to dedup
> >
> > What is the significance of doing this?
> > My understanding is that if the client crashes and restarts after
> > sending half of a chunk message and then it resends the previous chunk
> > message, the resent chunk message should be treated as a new message
> > since it calls the producer's API again.
> > If deduplication is enabled, users should ensure that their customized
> > sequence ID is not lower than the previous sequence ID.
> > I have considered this scenario and added a warning log in PR[0]. (I'm
> > not sure whether an error log should be added or an exception thrown.)
> > If deduplication is not enabled, on the consumer side, there should be
> > an incomplete chunk message received alongside another complete chunk
> > message, each with a different UUID, and they will not interfere with
> > each other.
> >
> > My main point is that every message sent using
> > `producer.newMessage().send()` should be treated as a new message.
> > UUID is solely used for the consumer side to identify different chunk
> > messages.
> >
> > BR
> > Xiangying
> >
> > [0] https://github.com/apache/pulsar/pull/21047
> >
> > On Sat, Aug 26, 2023 at 9:34 AM Heesung Sohn
> >  wrote:
> > >
> > > I think this means, for the PIP, the broker side's chunk deduplication.
> > > I think brokers probably need to track map to dedup
> > > chunks on the broker side.
> > >
> > >
> > >
> > >
> > > On Fri, Aug 25, 2023 at 6:16 PM Xiangying Meng 
> > wrote:
> > >
> > > > Hi Heesung
> > > >
> > > > It is a good point.
> > > > Assume the producer application jvm restarts in the middle of chunking
> > and
> > > > resends the message chunks from the beginning with the previous
> > sequence
> > > > id.
> > > >
> > > > For the previous version, it should be:
> > > >
> > > > Producer send:
> > > > 1. SequenceID: 0, ChunkID: 0
> > > > 2. SequenceID: 0, ChunkID: 1
> > > > 3. SequenceID: 0, ChunkID: 0
> > > > 4. SequenceID: 0, ChunkID: 1
> > > > 5. SequenceID: 0, ChunkID: 2
> > > >
> > > > Consumer receive:
> > > > 1. SequenceID: 0, ChunkID: 0
> > > > 2. SequenceID: 0, ChunkID: 1
> > > > 3. SequenceID: 0, ChunkID: 0 // chunk ID out of order. Release this
> > > > chunk and recycle its `chunkedMsgCtx`.
> > > > 4. SequenceID: 0, ChunkID: 1  // chunkedMsgCtx == null Release it.
> > > > 5. SequenceID: 0, ChunkID: 2  // chunkedMsgCtx == null Release it.
> > > >
> > > > Therefore, for the previous version, this chunk message can not be
> > > > received by the consumer. It is not an incompatibility issue.
> > > >
> > > > However, the solution of optimizing the `uuid` is valuable to the new
> > > > implementation.
> > > > I will modify this in the PR[0]. Thank you very much for your reminder
> > > > and the provided UUID optimization solution.
> > > >
> > > > BR,
> > > > Xiangying
> > > >
> > > > [0] https://github.com/apache/pulsar/pull/20948
> > > >
> > > > On Sat, Aug 26, 2023 at 8:48 AM Heesung Sohn
> > > >  wrote:
> > > > >
> > > > > Hi, I meant

Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-25 Thread Xiangying Meng
Hi Heesung,

>I think this means, for the PIP, the broker side's chunk deduplication.
>I think brokers probably need to track map to dedup

What is the significance of doing this?
My understanding is that if the client crashes and restarts after
sending half of a chunk message and then it resends the previous chunk
message, the resent chunk message should be treated as a new message
since it calls the producer's API again.
If deduplication is enabled, users should ensure that their customized
sequence ID is not lower than the previous sequence ID.
I have considered this scenario and added a warning log in PR[0]. (I'm
not sure whether an error log should be added or an exception thrown.)
If deduplication is not enabled, on the consumer side, there should be
an incomplete chunk message received alongside another complete chunk
message, each with a different UUID, and they will not interfere with
each other.

My main point is that every message sent using
`producer.newMessage().send()` should be treated as a new message.
UUID is solely used for the consumer side to identify different chunk messages.

BR
Xiangying

[0] https://github.com/apache/pulsar/pull/21047

On Sat, Aug 26, 2023 at 9:34 AM Heesung Sohn
 wrote:
>
> I think this means, for the PIP, the broker side's chunk deduplication.
> I think brokers probably need to track map to dedup
> chunks on the broker side.
>
>
>
>
> On Fri, Aug 25, 2023 at 6:16 PM Xiangying Meng  wrote:
>
> > Hi Heesung
> >
> > It is a good point.
> > Assume the producer application jvm restarts in the middle of chunking and
> > resends the message chunks from the beginning with the previous sequence
> > id.
> >
> > For the previous version, it should be:
> >
> > Producer send:
> > 1. SequenceID: 0, ChunkID: 0
> > 2. SequenceID: 0, ChunkID: 1
> > 3. SequenceID: 0, ChunkID: 0
> > 4. SequenceID: 0, ChunkID: 1
> > 5. SequenceID: 0, ChunkID: 2
> >
> > Consumer receive:
> > 1. SequenceID: 0, ChunkID: 0
> > 2. SequenceID: 0, ChunkID: 1
> > 3. SequenceID: 0, ChunkID: 0 // chunk ID out of order. Release this
> > chunk and recycle its `chunkedMsgCtx`.
> > 4. SequenceID: 0, ChunkID: 1  // chunkedMsgCtx == null Release it.
> > 5. SequenceID: 0, ChunkID: 2  // chunkedMsgCtx == null Release it.
> >
> > Therefore, for the previous version, this chunk message can not be
> > received by the consumer. It is not an incompatibility issue.
> >
> > However, the solution of optimizing the `uuid` is valuable to the new
> > implementation.
> > I will modify this in the PR[0]. Thank you very much for your reminder
> > and the provided UUID optimization solution.
> >
> > BR,
> > Xiangying
> >
> > [0] https://github.com/apache/pulsar/pull/20948
> >
> > On Sat, Aug 26, 2023 at 8:48 AM Heesung Sohn
> >  wrote:
> > >
> > > Hi, I meant
> > >
> > > What if the producer application jvm restarts in the middle of chunking
> > and
> > > resends the message chunks from the beginning with the previous sequence
> > id?
> > >
> > >
> > >
> > > On Fri, Aug 25, 2023 at 5:15 PM Xiangying Meng 
> > wrote:
> > >
> > > > Hi Heesung
> > > >
> > > > It is a good idea to cover this incompatibility case if the producer
> > > > splits the chunk message again when retrying.
> > > >
> > > > But in fact, the producer only resends the chunks that are assembled
> > > > to `OpSendMsg` instead of splitting the chunk message again.
> > > > So, there is no incompatibility issue of resending the chunk message
> > > > by splitting the chunk message again.
> > > >
> > > > The logic of sending chunk messages can be found here:
> > > >
> > > >
> > https://github.com/apache/pulsar/blob/e0c481e5f8d7fa5534d3327785928a234376789e/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ProducerImpl.java#L533
> > > >
> > > > The logic of resending the message can be found here:
> > > >
> > > >
> > https://github.com/apache/pulsar/blob/e0c481e5f8d7fa5534d3327785928a234376789e/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ProducerImpl.java#L1892
> > > >
> > > > BR,
> > > > Xiangying
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Sat, Aug 26, 2023 at 2:24 AM Heesung Sohn
> > > >  wrote:
> > > > >
> > > > > >> I think brokers can track the last

Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-25 Thread Xiangying Meng
Hi Heesung

It is a good point.
Assume the producer application jvm restarts in the middle of chunking and
resends the message chunks from the beginning with the previous sequence id.

For the previous version, it should be:

Producer send:
1. SequenceID: 0, ChunkID: 0
2. SequenceID: 0, ChunkID: 1
3. SequenceID: 0, ChunkID: 0
4. SequenceID: 0, ChunkID: 1
5. SequenceID: 0, ChunkID: 2

Consumer receive:
1. SequenceID: 0, ChunkID: 0
2. SequenceID: 0, ChunkID: 1
3. SequenceID: 0, ChunkID: 0 // chunk ID out of order. Release this
chunk and recycle its `chunkedMsgCtx`.
4. SequenceID: 0, ChunkID: 1  // chunkedMsgCtx == null Release it.
5. SequenceID: 0, ChunkID: 2  // chunkedMsgCtx == null Release it.

Therefore, for the previous version, this chunk message can not be
received by the consumer. It is not an incompatibility issue.

However, the solution of optimizing the `uuid` is valuable to the new
implementation.
I will modify this in the PR[0]. Thank you very much for your reminder
and the provided UUID optimization solution.

BR,
Xiangying

[0] https://github.com/apache/pulsar/pull/20948

On Sat, Aug 26, 2023 at 8:48 AM Heesung Sohn
 wrote:
>
> Hi, I meant
>
> What if the producer application jvm restarts in the middle of chunking and
> resends the message chunks from the beginning with the previous sequence id?
>
>
>
> On Fri, Aug 25, 2023 at 5:15 PM Xiangying Meng  wrote:
>
> > Hi Heesung
> >
> > It is a good idea to cover this incompatibility case if the producer
> > splits the chunk message again when retrying.
> >
> > But in fact, the producer only resends the chunks that are assembled
> > to `OpSendMsg` instead of splitting the chunk message again.
> > So, there is no incompatibility issue of resending the chunk message
> > by splitting the chunk message again.
> >
> > The logic of sending chunk messages can be found here:
> >
> > https://github.com/apache/pulsar/blob/e0c481e5f8d7fa5534d3327785928a234376789e/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ProducerImpl.java#L533
> >
> > The logic of resending the message can be found here:
> >
> > https://github.com/apache/pulsar/blob/e0c481e5f8d7fa5534d3327785928a234376789e/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ProducerImpl.java#L1892
> >
> > BR,
> > Xiangying
> >
> >
> >
> >
> >
> >
> >
> > On Sat, Aug 26, 2023 at 2:24 AM Heesung Sohn
> >  wrote:
> > >
> > > >> I think brokers can track the last chunkMaxMessageSize for each
> > producer.
> > >
> > > > Using different chunkMaxMessageSize is just one of the aspects. In
> > > PIP-132 [0], we have included the message metadata size when checking
> > > maxMessageSize.The message metadata can be changed after splitting the
> > > chunks. We are still uncertain about the way the chunked message is
> > split,
> > > even using the same ss chunkMaxMessageSize.
> > >
> > > This sounds like we need to revisit chunking uuid logic.
> > > Like I commented here,
> > > https://github.com/apache/pulsar/pull/20948/files#r1305997883
> > > Why don't we add a chunk session id suffix to identify the ongoing
> > chunking
> > > uniquely?
> > >
> > > Currently,
> > >
> > > chunking uuid = producer + sequence_id
> > >
> > > Proposal
> > > chunking  uuid = producer + sequence_id + chunkingSessionId
> > >
> > > * chunkingSessionId could be a timestamp when the chunking started.
> > >
> > >
> > >
> > > On Fri, Aug 25, 2023 at 6:02 AM Xiangying Meng 
> > wrote:
> > >
> > > > Hi Zike,
> > > >
> > > > >How would this happen to get two duplicated and consecutive ChunkID-1
> > > > >messages? The producer should guarantee to retry the whole chunked
> > > > >messages instead of some parts of the chunks.
> > > >
> > > > If the producer guarantees to retry the whole chunked messages instead
> > > > of some parts of the chunks,
> > > > Why doesn't the bug of the producer retry chunk messages in the PR [0]
> > > > appear?
> > > > And why do you need to set `chunkId` in `op.rePopulate`?
> > > > It will be reset when splitting chunk messages again if the producer
> > > > guarantees to retry the whole chunked messages.
> > > > ```
> > > > final MessageMetadata finalMsgMetadata = msgMetadata;
> > > > op.rePopulate = () -> {
> > > > if (msgMetadata.hasChunkId()) {
> > > > // The message metadata is shared between all 

Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-25 Thread Xiangying Meng
Hi Heesung

It is a good idea to cover this incompatibility case if the producer
splits the chunk message again when retrying.

But in fact, the producer only resends the chunks that are assembled
to `OpSendMsg` instead of splitting the chunk message again.
So, there is no incompatibility issue of resending the chunk message
by splitting the chunk message again.

The logic of sending chunk messages can be found here:
https://github.com/apache/pulsar/blob/e0c481e5f8d7fa5534d3327785928a234376789e/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ProducerImpl.java#L533

The logic of resending the message can be found here:
https://github.com/apache/pulsar/blob/e0c481e5f8d7fa5534d3327785928a234376789e/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ProducerImpl.java#L1892

BR,
Xiangying







On Sat, Aug 26, 2023 at 2:24 AM Heesung Sohn
 wrote:
>
> >> I think brokers can track the last chunkMaxMessageSize for each producer.
>
> > Using different chunkMaxMessageSize is just one of the aspects. In
> PIP-132 [0], we have included the message metadata size when checking
> maxMessageSize. The message metadata can be changed after splitting the
> chunks. We are still uncertain about the way the chunked message is split,
> even using the same chunkMaxMessageSize.
>
> This sounds like we need to revisit chunking uuid logic.
> Like I commented here,
> https://github.com/apache/pulsar/pull/20948/files#r1305997883
> Why don't we add a chunk session id suffix to identify the ongoing chunking
> uniquely?
>
> Currently,
>
> chunking uuid = producer + sequence_id
>
> Proposal
> chunking  uuid = producer + sequence_id + chunkingSessionId
>
> * chunkingSessionId could be a timestamp when the chunking started.
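The uuid proposal quoted above might look like the following sketch (illustrative only; the client's actual uuid format is not reproduced here):

```java
// Illustrative only: today a chunked message's uuid is derived from the
// producer name and sequence id; the proposal suffixes a per-session id
// (e.g. the timestamp when chunking started) so a resend after a producer
// restart gets a distinct uuid even with the same sequence id.
class ChunkUuidSketch {
    static String currentUuid(String producerName, long sequenceId) {
        return producerName + "-" + sequenceId;
    }

    static String proposedUuid(String producerName, long sequenceId, long chunkingSessionId) {
        return currentUuid(producerName, sequenceId) + "-" + chunkingSessionId;
    }

    public static void main(String[] args) {
        // Same producer and sequence id, two chunking sessions: uuids now differ.
        String first = ChunkUuidSketch.proposedUuid("p1", 0, 1692950400000L);
        String retry = ChunkUuidSketch.proposedUuid("p1", 0, 1692950460000L);
        assert ChunkUuidSketch.currentUuid("p1", 0)
                .equals(ChunkUuidSketch.currentUuid("p1", 0)); // old scheme collides
        assert !first.equals(retry);                           // new scheme does not
    }
}
```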
>
>
>
> On Fri, Aug 25, 2023 at 6:02 AM Xiangying Meng  wrote:
>
> > Hi Zike,
> >
> > >How would this happen to get two duplicated and consecutive ChunkID-1
> > >messages? The producer should guarantee to retry the whole chunked
> > >messages instead of some parts of the chunks.
> >
> > If the producer guarantees to retry the whole chunked messages instead
> > of some parts of the chunks,
> > Why doesn't the bug of the producer retry chunk messages in the PR [0]
> > appear?
> > And why do you need to set `chunkId` in `op.rePopulate`?
> > It will be reset when splitting chunk messages again if the producer
> > guarantees to retry the whole chunked messages.
> > ```
> > final MessageMetadata finalMsgMetadata = msgMetadata;
> > op.rePopulate = () -> {
> >     if (msgMetadata.hasChunkId()) {
> >         // The message metadata is shared between all chunks in a large message.
> >         // We need to reset the chunk id for each call of this method.
> >         // It's safe to do that because there is only 1 thread to manipulate
> >         // this message metadata.
> >         finalMsgMetadata.setChunkId(chunkId);
> >     }
> >     op.cmd = sendMessage(producerId, sequenceId, numMessages, messageId,
> >             finalMsgMetadata, encryptedPayload);
> > };
> >
> > ```
> >
> > >> But chunks 1, 2, 3, and 4 are still persisted in the topic.
> > >
> > >I think it's OK to persist them all on the topic. Is there any issue
> > >with doing that?
> >
> > This is just one scenario. Whether only check the sequence ID of the
> > first chunk (as I used in PR[1]) or check the sequence ID of the last
> > chunk (as you suggested), in reality, neither of these methods can
> > deduplicate chunks on the broker side because the broker cannot know
> > the chunk ID of the previous message.
> >
> > However, if combined with the optimization of consumer-side logic,
> > end-to-end deduplication can be completed.
> > This is also a less-than-perfect solution I mentioned in my first
> > email and implemented in PR[1].
> >
> > The reason I propose this proposal is not to solve the end-to-end
> > deduplication of chunk messages between producers and consumers. That
> > aspect has essentially been addressed in PR[1] and is still undergoing
> > review.
> >
> > This proposal aims to ensure that no corrupt data exists within the
> > topic, as our data might be offloaded and used elsewhere in scenarios
> > where consumer logic is not optimized.
> >
> > BR,
> > Xiangying
> >
> > [0] https://github.com/apache/pulsar/pull/21048
> > [1] https://github.com/apache/pulsar/pull/20948
> >
> > On Fri, Aug 25, 2023 at 5:18 PM Zike Yang  wrote:
> > >
> > > HI xiangying
> > >
> > > > The rewind operation is seen in the test log.
> > >
> > > That seems weird. Not sure if

Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-25 Thread Xiangying Meng
Hi Zike,

>How would this happen to get two duplicated and consecutive ChunkID-1
>messages? The producer should guarantee to retry the whole chunked
>messages instead of some parts of the chunks.

If the producer guarantees to retry the whole chunked message instead
of some parts of the chunks,
why would the producer chunk-retry bug fixed in PR [0] appear at all?
And why do you need to set `chunkId` in `op.rePopulate`?
It would be reset when the chunk message is split again if the producer
guaranteed to retry the whole chunked message.
```
final MessageMetadata finalMsgMetadata = msgMetadata;
op.rePopulate = () -> {
    if (msgMetadata.hasChunkId()) {
        // The message metadata is shared between all chunks in a large message.
        // We need to reset the chunk id for each call of this method.
        // It's safe to do that because there is only 1 thread to manipulate
        // this message metadata.
        finalMsgMetadata.setChunkId(chunkId);
    }
    op.cmd = sendMessage(producerId, sequenceId, numMessages, messageId,
            finalMsgMetadata, encryptedPayload);
};

```

>> But chunks 1, 2, 3, and 4 are still persisted in the topic.
>
>I think it's OK to persist them all on the topic. Is there any issue
>with doing that?

This is just one scenario. Whether we check only the sequence ID of the
first chunk (as I did in PR[1]) or the sequence ID of the last chunk
(as you suggested), in reality neither method can deduplicate chunks on
the broker side, because the broker cannot know the chunk ID of the
previous message.

However, if combined with the optimization of consumer-side logic,
end-to-end deduplication can be completed.
This is also a less-than-perfect solution I mentioned in my first
email and implemented in PR[1].

The reason I propose this proposal is not to solve the end-to-end
deduplication of chunk messages between producers and consumers. That
aspect has essentially been addressed in PR[1] and is still undergoing
review.

This proposal aims to ensure that no corrupt data exists within the
topic, as our data might be offloaded and used elsewhere in scenarios
where consumer logic is not optimized.

BR,
Xiangying

[0] https://github.com/apache/pulsar/pull/21048
[1] https://github.com/apache/pulsar/pull/20948

On Fri, Aug 25, 2023 at 5:18 PM Zike Yang  wrote:
>
> HI xiangying
>
> > The rewind operation is seen in the test log.
>
> That seems weird. Not sure if this rewind is related to the chunk consuming.
>
> > 1. SequenceID: 0, ChunkID: 0
> 2. SequenceID: 0, ChunkID: 1
> 3. SequenceID: 0, ChunkID: 1
> 4. SequenceID: 0, ChunkID: 2
> Such four chunks cannot be processed correctly by the consumer.
>
> How would this happen to get two duplicated and consecutive ChunkID-1
> messages? The producer should guarantee to retry the whole chunked
> messages instead of some parts of the chunks.
>
> > But chunks 1, 2, 3, and 4 are still persisted in the topic.
>
> I think it's OK to persist them all in the topic. Is there any issue
> with doing that?
>
> > There is another point. The resend of the chunk message has a bug that
> I shared with you, and you fixed in PR [0]. It will make this case
> happen in another way.
>
> If the user sets the sequence ID manually, the case could be reproduced.
>
> BR,
> Zike Yang
>
> On Thu, Aug 24, 2023 at 8:48 PM Xiangying Meng  wrote:
> >
> > >IIUC, this may change the existing behavior and may introduce 
> > >inconsistencies.
> > >Suppose that we have a large message with 3 chunks. But the producer
> > >crashes and resends the message after sending the chunk-1. It will
> > >send a total of 5 messages to the Pulsar topic:
> > >
> > >1. SequenceID: 0, ChunkID: 0
> > >2. SequenceID: 0, ChunkID: 1
> > >3. SequenceID: 0, ChunkID: 0   -> This message will be dropped
> > >4. SequenceID: 0, ChunkID: 1-> Will also be dropped
> > >5. SequenceID: 0, ChunkID: 2-> The last chunk of the message
> >
> > Hi Zike
> > There is another point. The resend of the chunk message has a bug that
> > I shared with you, and you fixed in PR [0]. It will make this case
> > happen in another way.
> > A brief description of the bug: because all chunks of a chunked
> > message share the same message metadata, if a chunk is not sent out
> > immediately, then on resend every chunk of that message carries the
> > chunk ID of the last chunk.
> > In this case, it happens as follows:
> > 1. SequenceID: 0, ChunkID: 0 (Put op1 into `pendingMessages` and send)
> > 2. SequenceID: 0, ChunkID: 1 (Put op2 into `pendingMessages` and send)
> > 3. SequenceID: 0, ChunkID: 2   -> (Put op3 into `pendingMessages`)
> > 4. SequenceID: 0, ChunkID: 2   -> (Resend op1)
> > 5. SequenceID: 0, ChunkID: 2   -> (

[VOTE] PIP 296: Introduce the `getLastMessageIds` API to Reader

2023-08-25 Thread Xiangying Meng
Hi Pulsar Community,

This is the vote thread for PIP 296:
https://github.com/apache/pulsar/pull/21052

This PIP will help to improve the flexibility of Reader usage.

Thanks,
Xiangying


Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-24 Thread Xiangying Meng
>IIUC, this may change the existing behavior and may introduce inconsistencies.
>Suppose that we have a large message with 3 chunks. But the producer
>crashes and resends the message after sending the chunk-1. It will
>send a total of 5 messages to the Pulsar topic:
>
>1. SequenceID: 0, ChunkID: 0
>2. SequenceID: 0, ChunkID: 1
>3. SequenceID: 0, ChunkID: 0   -> This message will be dropped
>4. SequenceID: 0, ChunkID: 1-> Will also be dropped
>5. SequenceID: 0, ChunkID: 2-> The last chunk of the message

Hi Zike
There is another point. The resend of the chunk message has a bug that
I shared with you, and you fixed in PR [0]. It will make this case
happen in another way.
A brief description of the bug: because all chunks of a chunked message
share the same message metadata, if a chunk is not sent out immediately,
then on resend every chunk of that message carries the chunk ID of the
last chunk.
In this case, it happens as follows:
1. SequenceID: 0, ChunkID: 0 (Put op1 into `pendingMessages` and send)
2. SequenceID: 0, ChunkID: 1 (Put op2 into `pendingMessages` and send)
3. SequenceID: 0, ChunkID: 2   -> (Put op3 into `pendingMessages`)
4. SequenceID: 0, ChunkID: 2   -> (Resend op1)
5. SequenceID: 0, ChunkID: 2   -> (Resend op2)
6. SequenceID: 0, ChunkID: 2   -> (Send op3)
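A minimal sketch of the mechanism behind this bug: all chunks of one message share a single mutable metadata object, so by the time queued operations are resent, every one of them reports the last chunk ID written. The class and field names below are invented for illustration and are not Pulsar's actual internals:

```java
import java.util.ArrayList;
import java.util.List;

class SharedMetadataBugSketch {
    static class MessageMetadata { long sequenceId; int chunkId; }

    static class OpSend {
        final MessageMetadata metadata; // shared across all chunks (the bug)
        OpSend(MessageMetadata m) { this.metadata = m; }
    }

    public static void main(String[] args) {
        MessageMetadata shared = new MessageMetadata();
        shared.sequenceId = 0;
        List<OpSend> pendingMessages = new ArrayList<>();
        for (int chunkId = 0; chunkId < 3; chunkId++) {
            shared.chunkId = chunkId;      // mutates the one shared object
            pendingMessages.add(new OpSend(shared));
        }
        // On resend, every queued op now reports the last chunkId:
        for (OpSend op : pendingMessages) {
            System.out.println("resend chunkId=" + op.metadata.chunkId);
        }
    }
}
```

The fix in PR [0] avoids this by re-setting the chunk ID on the metadata at the moment each send command is rebuilt, rather than relying on the value captured at enqueue time.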


BR,
Xiangying

[0] - https://github.com/apache/pulsar/pull/21048

On Thu, Aug 24, 2023 at 8:09 PM Xiangying Meng  wrote:
>
> >> This solution also cannot solve the out-of-order messages inside the
> >>chunks. For example, the above five messages will still be persisted.
> >The consumer already handles this case. The above 5 messages will all
> > >be persisted, but the consumer will skip messages 1 and 2.
> > >For messages 3, 4, and 5, the producer can guarantee these chunks are in
> > >order.
>
> The rewind operation appears in the test log: every time an incorrect
> chunk message is received, the consumer rewinds the cursor; I have not
> yet studied that code in depth.
> If it does not call rewind, then this case works. Let's look at
> another case.
> 1. SequenceID: 0, ChunkID: 0
> 2. SequenceID: 0, ChunkID: 1
> 3. SequenceID: 0, ChunkID: 1
> 4. SequenceID: 0, ChunkID: 2
> Such four chunks cannot be processed correctly by the consumer.
>
> In fact, this solution is my original idea. The PR I mentioned in the
> first email above uses a similar solution and modifies the logic on
> the consumer side.
> Also, as I mentioned in the first email, this solution can only solve
> the problem of end-to-end duplication. But chunks 1, 2, 3, and 4 are
> still persisted in the topic.
>
> On Thu, Aug 24, 2023 at 3:00 PM Zike Yang  wrote:
> >
> > Hi Heesung,
> >
> > > I believe in this PIP "similar to the existing "sequence ID map",
> > to facilitate effective filtering" actually means tracking the last
> > chunkId (not all chunk IDs) on the broker side.
> >
> > With this simple solution, I think we don't need to track the
> > (sequenceID, chunkID) on the broker side at all. The broker just needs
> > to apply the deduplication logic to the last chunk instead of all
> > previous chunks. This PIP actually could do that, but it will
> > introduce a new data format and compatibility issue.
> >
> > > This is still a behavior change (deduping chunk messages on the broker),
> > and I believe we need to discuss this addition as a PIP.
> >
> > Actually, we didn't specifically state the deduping chunk message
> > behavior before. The chunked message should be equally applicable to
> > the de-duplication logic as a regular message. Therefore, I think it
> > should be considered as a bug fix. But if this FIX is worth discussing
> > in depth. I have no objection to it being a new PIP.
> >
> > > I think brokers can track the last chunkMaxMessageSize for
> > each producer.
> >
> > Using different chunkMaxMessageSize is just one of the aspects. In
> > PIP-132 [0], we have included the message metadata size when checking
> > maxMessageSize.
> > The message metadata can be changed after splitting the chunks. We are
> > still uncertain about the way the chunked message is split, even using
> > the same chunkMaxMessageSize.
> >
> > > then the brokers can assume that the producer is resending the chunks from
> > the beginning with a different scheme(restarted with a different
> > chunkMaxMessageSize) and accept those new chunks from the beginning.
> >
> > Regarding this, it seems like we are implementing dynamic
> > configuration for the chunkMaxMessageSize. I'm afraid that this would
> > change the expected behavior and introduce more complexity to

Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-24 Thread Xiangying Meng
> > > > ... the changes brought about by this PIP
> > > > will cause the consumer to use messages 1,2,5 for
> > > > assembly. There is
> > > > no guarantee that the producer will split the message
> > > > in the same way
> > > > twice before and after. For example, the producer's
> > > > maxMessageSize may
> > > > be different. This may cause the consumer to
> > > > receive a corrupt
> > > > message.
> > >
> > > Good point.
> > >
> > >
> > > Thanks
> > > Yubiao Feng
> > >
> > > On Wed, Aug 23, 2023 at 12:34 PM Zike Yang  wrote:
> > >
> > > > Hi, xiangying,
> > > >
> > > > Thanks for your PIP.
> > > >
> > > > IIUC, this may change the existing behavior and may introduce
> > > > inconsistencies.
> > > > Suppose that we have a large message with 3 chunks. But the producer
> > > > crashes and resends the message after sending the chunk-1. It will
> > > > send a total of 5 messages to the Pulsar topic:
> > > >
> > > > 1. SequenceID: 0, ChunkID: 0
> > > > 2. SequenceID: 0, ChunkID: 1
> > > > 3. SequenceID: 0, ChunkID: 0   -> This message will be dropped
> > > > 4. SequenceID: 0, ChunkID: 1-> Will also be dropped
> > > > 5. SequenceID: 0, ChunkID: 2-> The last chunk of the message
> > > >
> > > > For the existing behavior, the consumer assembles messages 3,4,5 into
> > > > the original large message. But the changes brought about by this PIP
> > > > will cause the consumer to use messages 1,2,5 for assembly. There is
> > > > no guarantee that the producer will split the message in the same way
> > > > twice before and after. For example, the producer's maxMessageSize may
> > > > be different. This may cause the consumer to receive a corrupt
> > > > message.
> > > >
> > > > Also, this PIP increases the complexity of handling chunks on the
> > > > broker side. Brokers should, in general, treat the chunk as a normal
> > > > message.
> > > >
> > > > I think a simple better approach is to only check the deduplication
> > > > for the last chunk of the large message. The consumer only gets the
> > > > whole message after receiving the last chunk. We don't need to check
> > > > the deduplication for all previous chunks. Also by doing this we only
> > > > need bug fixes, we don't need to introduce a new PIP.
> > > >
> > > > BR,
> > > > Zike Yang
> > > >
> > > > On Fri, Aug 18, 2023 at 7:54 PM Xiangying Meng 
> > > > wrote:
> > > > >
> > > > > Dear Community,
> > > > >
> > > > > I hope this email finds you well. I'd like to address an important
> > > > > issue related to Apache Pulsar and discuss a solution I've proposed on
> > > > > GitHub. The problem pertains to the handling of Chunk Messages after
> > > > > enabling deduplication.
> > > > >
> > > > > In the current version of Apache Pulsar, all chunks of a Chunk Message
> > > > > share the same sequence ID. However, enabling the deduplication
> > > > > feature results in an inability to send Chunk Messages. To tackle this
> > > > > problem, I've proposed a solution [1] that ensures messages are not
> > > > > duplicated throughout end-to-end delivery. While this fix addresses
> > > > > the duplication issue for end-to-end messages, there remains a
> > > > > possibility of duplicate chunks within topics.
> > > > >
> > > > > To address this concern, I believe we should introduce a "Chunk ID
> > > > > map" at the Broker level, similar to the existing "sequence ID map",
> > > > > to facilitate effective filtering. However, implementing this leads
> > > > > to a challenge: a producer needs storage for two Long values
> > > > > simultaneously (sequence ID and chunk ID), while the snapshot of the
> > > > > sequence ID map is stored through the cursor's properties field
> > > > > (Map<String, Long>). To satisfy the storage of two Longs per producer,
> > > > > I've proposed an alternative proposal [2] introducing a
> > > > > markDeleteProperties (Map<String, String>) field to replace the
> > > > > current properties (Map<String, Long>) field.
> > > > >
> > > > > I'd appreciate it if you carefully review both PRs and share your
> > > > > valuable feedback and insights. Thank you immensely for your time and
> > > > > attention. I eagerly anticipate your valuable opinions and
> > > > > recommendations.
> > > > >
> > > > > Warm regards,
> > > > > Xiangying
> > > > >
> > > > > [1] https://github.com/apache/pulsar/pull/20948
> > > > > [2] https://github.com/apache/pulsar/pull/21027
> > > >


Re: [DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-23 Thread Xiangying Meng
Hi Zike
Thank you for your attention.
>For the existing behavior, the consumer assembles messages 3,4,5 into the 
>original large message. But the changes brought about by this PIP will cause 
>the consumer to use messages 1,2,5 for assembly. There is no guarantee that 
>the producer will split the message in the same way twice before and 
>after. For example, the producer's maxMessageSize may be different. This may 
>cause the consumer to receive a corrupt message.

For the previous behavior, if deduplication is not enabled, messages 1,
2, 3, 4, and 5 will all be persisted. When the consumer receives
message 3 (SequenceID: 0, ChunkID: 0), it finds that the message is out
of order and rewinds the cursor. This operation loops, and the message
is discarded after it expires, instead of messages 3, 4, and 5 being
assembled into one message.
If deduplication is enabled, chunk messages 2, 3, 4, and 5 will be
filtered, and the message will also be discarded.
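The consumer-side discard behavior described above can be sketched roughly as follows. This is a simplified illustration with invented names; Pulsar's real consumer logic also involves cursor rewinds and chunk expiry:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: the consumer tracks the next expected chunk ID per chunked
// message (keyed by its UUID) and abandons the partial message when a
// chunk arrives out of order, instead of assembling a corrupt message.
class ChunkAssemblySketch {
    private final Map<String, Integer> nextExpectedChunk = new HashMap<>();

    /** Returns true if the chunk is accepted; false if it is discarded. */
    boolean onChunk(String msgUuid, int chunkId, int totalChunks) {
        int expected = nextExpectedChunk.getOrDefault(msgUuid, 0);
        if (chunkId != expected) {
            nextExpectedChunk.remove(msgUuid); // abandon the partial message
            return false;
        }
        if (chunkId == totalChunks - 1) {
            nextExpectedChunk.remove(msgUuid); // message complete
        } else {
            nextExpectedChunk.put(msgUuid, chunkId + 1);
        }
        return true;
    }
}
```

Under this model, the sequence (ChunkID 0, 1, 1, 2) from the example below indeed never completes: the duplicated chunk 1 aborts assembly, and chunk 2 then also fails the expected-chunk check.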

>I think a simple better approach is to only check the deduplication for the 
>last chunk of the large message.

This solution still cannot handle out-of-order chunks within a message.
For example, the above five messages would still all be persisted.

On Wed, Aug 23, 2023 at 12:34 PM Zike Yang  wrote:
>
> Hi, xiangying,
>
> Thanks for your PIP.
>
> IIUC, this may change the existing behavior and may introduce inconsistencies.
> Suppose that we have a large message with 3 chunks. But the producer
> crashes and resends the message after sending the chunk-1. It will
> send a total of 5 messages to the Pulsar topic:
>
> 1. SequenceID: 0, ChunkID: 0
> 2. SequenceID: 0, ChunkID: 1
> 3. SequenceID: 0, ChunkID: 0   -> This message will be dropped
> 4. SequenceID: 0, ChunkID: 1-> Will also be dropped
> 5. SequenceID: 0, ChunkID: 2-> The last chunk of the message
>
> For the existing behavior, the consumer assembles messages 3,4,5 into
> the original large message. But the changes brought about by this PIP
> will cause the consumer to use messages 1,2,5 for assembly. There is
> no guarantee that the producer will split the message in the same way
> twice before and after. For example, the producer's maxMessageSize may
> be different. This may cause the consumer to receive a corrupt
> message.
>
> Also, this PIP increases the complexity of handling chunks on the
> broker side. Brokers should, in general, treat the chunk as a normal
> message.
>
> I think a simple better approach is to only check the deduplication
> for the last chunk of the large message. The consumer only gets the
> whole message after receiving the last chunk. We don't need to check
> the deduplication for all previous chunks. Also by doing this we only
> need bug fixes, we don't need to introduce a new PIP.
>
> BR,
> Zike Yang
>
> On Fri, Aug 18, 2023 at 7:54 PM Xiangying Meng  wrote:
> >
> > Dear Community,
> >
> > I hope this email finds you well. I'd like to address an important
> > issue related to Apache Pulsar and discuss a solution I've proposed on
> > GitHub. The problem pertains to the handling of Chunk Messages after
> > enabling deduplication.
> >
> > In the current version of Apache Pulsar, all chunks of a Chunk Message
> > share the same sequence ID. However, enabling the deduplication
> > feature results in an inability to send Chunk Messages. To tackle this
> > problem, I've proposed a solution [1] that ensures messages are not
> > duplicated throughout end-to-end delivery. While this fix addresses
> > the duplication issue for end-to-end messages, there remains a
> > possibility of duplicate chunks within topics.
> >
> > To address this concern, I believe we should introduce a "Chunk ID
> > map" at the Broker level, similar to the existing "sequence ID map",
> > to facilitate effective filtering. However, implementing this leads
> > to a challenge: a producer needs storage for two Long values
> > simultaneously (sequence ID and chunk ID), while the snapshot of the
> > sequence ID map is stored through the cursor's properties field
> > (Map<String, Long>). To satisfy the storage of two Longs per producer,
> > I've proposed an alternative proposal [2] introducing a
> > markDeleteProperties (Map<String, String>) field to replace the
> > current properties (Map<String, Long>) field.
> >
> > I'd appreciate it if you carefully review both PRs and share your
> > valuable feedback and insights. Thank you immensely for your time and
> > attention. I eagerly anticipate your valuable opinions and
> > recommendations.
> >
> > Warm regards,
> > Xiangying
> >
> > [1] https://github.com/apache/pulsar/pull/20948
> > [2] https://github.com/apache/pulsar/pull/21027


[DISCUSS]PIP-296 Introduce the `getLastMessageIds` API to Reader

2023-08-23 Thread Xiangying Meng
Hi, community,

I would like to bring attention to the current absence of the
`getLastMessageIds` method within the Reader interface, which has
caused various inconveniences. To address this issue, I have prepared
a proposal [1] to incorporate this API into the Reader interface.
Please review the proposal and provide valuable feedback.

Best regards,
Xiangying

[1] Proposal link: [https://github.com/apache/pulsar/pull/21052]


[DISCUSS]PIP-295: Fixing Chunk Message Duplication Issue

2023-08-18 Thread Xiangying Meng
Dear Community,

I hope this email finds you well. I'd like to address an important
issue related to Apache Pulsar and discuss a solution I've proposed on
GitHub. The problem pertains to the handling of Chunk Messages after
enabling deduplication.

In the current version of Apache Pulsar, all chunks of a Chunk Message
share the same sequence ID. However, enabling the deduplication
feature results in an inability to send Chunk Messages. To tackle this
problem, I've proposed a solution [1] that ensures messages are not
duplicated throughout end-to-end delivery. While this fix addresses
the duplication issue for end-to-end messages, there remains a
possibility of duplicate chunks within topics.

To address this concern, I believe we should introduce a "Chunk ID
map" at the Broker level, similar to the existing "sequence ID map",
to facilitate effective filtering. However, implementing this leads to
a challenge: a producer needs storage for two Long values
simultaneously (sequence ID and chunk ID), while the snapshot of the
sequence ID map is stored through the cursor's properties field
(Map<String, Long>). To satisfy the storage of two Longs per producer,
I've proposed an alternative proposal [2] introducing a
markDeleteProperties (Map<String, String>) field to replace the
current properties (Map<String, Long>) field.
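The storage constraint can be illustrated with a small sketch: a Map<String, String> lets the broker snapshot both values for one producer in a single entry, which a Map<String, Long> cannot. The "sequenceId:chunkId" string encoding below is purely illustrative and is not the format proposed in the PIP:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: snapshotting (sequence ID, chunk ID) pairs per producer into
// a string-valued property map, and restoring them on recovery.
class DedupSnapshotSketch {
    /** Encodes each producer's {sequenceId, chunkId} pair as "seq:chunk". */
    static Map<String, String> snapshot(Map<String, long[]> perProducer) {
        Map<String, String> props = new HashMap<>();
        perProducer.forEach((producer, ids) ->
            props.put(producer, ids[0] + ":" + ids[1]));
        return props;
    }

    /** Decodes "seq:chunk" back into a {sequenceId, chunkId} pair. */
    static long[] restore(String encoded) {
        String[] parts = encoded.split(":");
        return new long[]{Long.parseLong(parts[0]), Long.parseLong(parts[1])};
    }
}
```

With a Long-valued properties map, by contrast, each producer key can carry only one of the two values, which is the motivation for the markDeleteProperties replacement above.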

I'd appreciate it if you carefully review both PRs and share your
valuable feedback and insights. Thank you immensely for your time and
attention. I eagerly anticipate your valuable opinions and
recommendations.

Warm regards,
Xiangying

[1] https://github.com/apache/pulsar/pull/20948
[2] https://github.com/apache/pulsar/pull/21027


Re: [VOTE] Pulsar Release 2.10.5 Candidate 1

2023-08-10 Thread Xiangying Meng
 ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/SinglePartitionMessageRouter.cc.o
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/LogUtils.cc.o
> >
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/c_ConsumerConfiguration.cc.o
> >
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/c_Authentication.cc.o
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/cStringList.cc.o
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/c_Consumer.cc.o
> >
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/c_ReaderConfiguration.cc.o
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/c_Message.cc.o
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/c_Producer.cc.o
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/c_Result.cc.o
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/c_MessageId.cc.o
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/cStringMap.cc.o
> >
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/c_ClientConfiguration.cc.o
> >
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/c_MessageRouter.cc.o
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/c_Client.cc.o
> >
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/c_ProducerConfiguration.cc.o
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/c/c_Reader.cc.o
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/NamedEntity.cc.o
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/HandlerBase.cc.o
> >
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/BrokerConsumerStatsImpl.cc.o
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/MessageCrypto.cc.o
> >
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/ProducerConfiguration.cc.o
> >
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/stats/ConsumerStatsImpl.cc.o
> >
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/stats/ProducerStatsImpl.cc.o
> >
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/MemoryLimitController.cc.o
> >
> > ./pulsar-client-cpp/lib/CMakeFiles/PULSAR_OBJECT_LIB.dir/AckGroupingTracker.cc.o
> > ./pulsar-client-cpp/lib/libpulsar.a
> >
> > That should be unexpected.
> >
> >
> > Best,
> > tison.
> >
> >
> > Xiangying Meng  于2023年7月28日周五 11:14写道:
> >
> >> Close this vote with three +1 (binding)
> >>
> >> On Fri, Jul 28, 2023 at 10:30 AM Yunze Xu  wrote:
> >> >
> >> > +1 (binding)
> >> >
> >> > - Verified checksums and signatures
> >> > - Build from source with JDK 8
> >> > - Run standalone with KoP 2.10.4.6 and verified basic e2e with Pulsar
> >> > and Kafka clients
> >> >
> >> > Thanks,
> >> > Yunze
> >> >
> >> > On Thu, Jul 27, 2023 at 5:23 PM tison  wrote:
> >> > >
> >> > > +1 (binding)
> >> > >
> >> > > I checked (the src release and server bin, archlinux amd64 with kernel
> >> > > 6.4.5 since this pulsar version doesn't support macOS with M1 chip)
> >> > >
> >> > > - GPG sign and checksum matched
> > > > - LICENSE and NOTICE exist
> > > > - Source release doesn't contain unexpected binaries
> >> > > - Source release can build from source
> >> > > - Binary release can run simple pubsub examples
> >> > >
> >> > > Best,
> >> > > tison.
> >> > >
> >> > >
> >> > > guo jiwei  于2023年7月25日周二 20:53写道:
> >> > >
> >> > > > +1 (binding)
> >> > > >
> >> > > > Checked the signature
> >> > > > - Build from source
> >> > > > - Start standalone
> >> > > > - Publish and Consume
> >> > > > - Verified Cassandra connector
> >> > > > - Verified stateful function
> >> > > >
> >> > > >
> >> > > > Regards
> >> > > > Jiwei Guo (Tboy)
> >> > > >
> >> > > >
> >> > > > On Sat, Jul 22, 2023 at 10:10 PM Xiangying Meng <
> >> xiangy...@apache.org>
> >> > > > wrote:
> >> > > >
> >> > > > > This is the first release candidate for Apache Pulsar, version
> >> 2.10.5.

[ANNOUNCE] Apache Pulsar 2.10.5 released

2023-08-01 Thread Xiangying Meng
The Apache Pulsar team is proud to announce Apache Pulsar version 2.10.5.

Pulsar is a highly scalable, low latency messaging platform running on
commodity hardware. It provides simple pub-sub semantics over topics,
guaranteed at-least-once delivery of messages, automatic cursor management for
subscribers, and cross-datacenter replication.

For Pulsar release details and downloads, visit:

https://pulsar.apache.org/download

Release Notes are at:
https://pulsar.apache.org/release-notes

We would like to thank the contributors that made the release possible.

Regards,

The Pulsar Team


Re: [VOTE] Pulsar Release 2.10.5 Candidate 1

2023-07-27 Thread Xiangying Meng
Close this vote with three +1 (binding)

On Fri, Jul 28, 2023 at 10:30 AM Yunze Xu  wrote:
>
> +1 (binding)
>
> - Verified checksums and signatures
> - Build from source with JDK 8
> - Run standalone with KoP 2.10.4.6 and verified basic e2e with Pulsar
> and Kafka clients
>
> Thanks,
> Yunze
>
> On Thu, Jul 27, 2023 at 5:23 PM tison  wrote:
> >
> > +1 (binding)
> >
> > I checked (the src release and server bin, archlinux amd64 with kernel
> > 6.4.5 since this pulsar version doesn't support macOS with M1 chip)
> >
> > - GPG sign and checksum matched
> > - LICENSE and NOTICE exist
> > - Source release doesn't contain unexpected binaries
> > - Source release can build from source
> > - Binary release can run simple pubsub examples
> >
> > Best,
> > tison.
> >
> >
> > guo jiwei  于2023年7月25日周二 20:53写道:
> >
> > > +1 (binding)
> > >
> > > Checked the signature
> > > - Build from source
> > > - Start standalone
> > > - Publish and Consume
> > > - Verified Cassandra connector
> > > - Verified stateful function
> > >
> > >
> > > Regards
> > > Jiwei Guo (Tboy)
> > >
> > >
> > > On Sat, Jul 22, 2023 at 10:10 PM Xiangying Meng 
> > > wrote:
> > >
> > > > This is the first release candidate for Apache Pulsar, version 2.10.5.
> > > >
> > > > This release contains 128 commits by 48 contributors.
> > > > https://github.com/apache/pulsar/compare/v2.10.4...v2.10.5-candidate-1
> > > >
> > > > *** Please download, test, and vote on this release. This vote will stay
> > > > open
> > > > for at least 72 hours ***
> > > >
> > > > Note that we are voting upon the source (tag), binaries are provided for
> > > > convenience.
> > > >
> > > > Source and binary files:
> > > > https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.10.5-candidate-1/
> > > >
> > > > SHA-512 checksums:
> > > >
> > > >
> > > 3637e0d148b4ee5884e0aebd0411d2f2c98c697b93b399eb0fb947faa3e0d99bf54a6eeb0c12824fcbafbefc4e9ccd5b9f7d1f3486ddad913aeb128aafc3d230
> > > > apache-pulsar-2.10.5-bin.tar.gz
> > > >
> > > >
> > > bee0c9fde34833a042b192dc00e952d4c79f3ccec2aa908b2052c23f1b5793ff1331c46f461458b520747d806077c3b840f866e1ac551f6b468d5a42b56b738a
> > > > apache-pulsar-2.10.5-src.tar.gz
> > > >
> > > >
> > > > Maven staging repo:
> > > > https://repository.apache.org/content/repositories/orgapachepulsar-1239/
> > > >
> > > > The tag to be voted upon:
> > > > v2.10.5-candidate-1 (1eb5eb351636ca102e9d05d0ba391cf32fc06db6)
> > > > https://github.com/apache/pulsar/releases/tag/v2.10.5-candidate-1
> > > >
> > > > Pulsar's KEYS file containing PGP keys you use to sign the release:
> > > > https://dist.apache.org/repos/dist/dev/pulsar/KEYS
> > > >
> > > > Docker images:
> > > >
> > > >
> > > >
> > > https://hub.docker.com/layers/mattison/pulsar/2.10.5/images/sha256-5e59c366848eecc5d61486c78d3b4d5d85ea21c6f5e63db8d3232fdae27ce7e9?context=explore
> > > >
> > > >
> > > >
> > > https://hub.docker.com/layers/mattison/pulsar-all/2.10.5/images/sha256-cbc4e99e3f164f8965edb3131d3673237103900ca510633297c0bbc55d0017c6?context=explore
> > > >
> > > > Please download the source package, and follow the README to build
> > > > and run the Pulsar standalone service.
> > > >
> > >


[VOTE] Pulsar Release 2.10.5 Candidate 1

2023-07-22 Thread Xiangying Meng
This is the first release candidate for Apache Pulsar, version 2.10.5.

This release contains 128 commits by 48 contributors.
https://github.com/apache/pulsar/compare/v2.10.4...v2.10.5-candidate-1

*** Please download, test, and vote on this release. This vote will stay open
for at least 72 hours ***

Note that we are voting upon the source (tag), binaries are provided for
convenience.

Source and binary files:
https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.10.5-candidate-1/

SHA-512 checksums:
3637e0d148b4ee5884e0aebd0411d2f2c98c697b93b399eb0fb947faa3e0d99bf54a6eeb0c12824fcbafbefc4e9ccd5b9f7d1f3486ddad913aeb128aafc3d230
apache-pulsar-2.10.5-bin.tar.gz
bee0c9fde34833a042b192dc00e952d4c79f3ccec2aa908b2052c23f1b5793ff1331c46f461458b520747d806077c3b840f866e1ac551f6b468d5a42b56b738a
apache-pulsar-2.10.5-src.tar.gz


Maven staging repo:
https://repository.apache.org/content/repositories/orgapachepulsar-1239/

The tag to be voted upon:
v2.10.5-candidate-1 (1eb5eb351636ca102e9d05d0ba391cf32fc06db6)
https://github.com/apache/pulsar/releases/tag/v2.10.5-candidate-1

Pulsar's KEYS file containing PGP keys you use to sign the release:
https://dist.apache.org/repos/dist/dev/pulsar/KEYS

Docker images:

https://hub.docker.com/layers/mattison/pulsar/2.10.5/images/sha256-5e59c366848eecc5d61486c78d3b4d5d85ea21c6f5e63db8d3232fdae27ce7e9?context=explore

https://hub.docker.com/layers/mattison/pulsar-all/2.10.5/images/sha256-cbc4e99e3f164f8965edb3131d3673237103900ca510633297c0bbc55d0017c6?context=explore

Please download the source package, and follow the README to build
and run the Pulsar standalone service.
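For voters, the checksum verification step can be done with standard tools. The sketch below creates a local stand-in file so it can run anywhere; for the real vote, download apache-pulsar-2.10.5-bin.tar.gz and its published .sha512 file instead:

```shell
# Stand-in artifact (replace with the downloaded release files).
tmpdir=$(mktemp -d)
cd "$tmpdir"
echo "release artifact contents" > apache-pulsar-2.10.5-bin.tar.gz
sha512sum apache-pulsar-2.10.5-bin.tar.gz > apache-pulsar-2.10.5-bin.tar.gz.sha512

# The verification step a voter runs; it reports the file as OK on a match.
sha512sum -c apache-pulsar-2.10.5-bin.tar.gz.sha512

# The signature check against the project's KEYS file would be:
#   gpg --import KEYS
#   gpg --verify apache-pulsar-2.10.5-bin.tar.gz.asc
```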


[DISCUSS] Apache Pulsar 2.10.5 release

2023-06-26 Thread Xiangying Meng
Hello, community:
   It has been more than two months since the release of 2.10.4. During this
period, we have accumulated 64 fixes.
https://github.com/apache/pulsar/compare/v2.10.4...branch-2.10
   I suggest releasing 2.10.5. If there are no objections, I will check the
existing PRs that need to be cherry-picked to the 2.10 branch.
   If you have a PR that needs to be cherry-picked, please leave a message here.

Regards
Xiangying


Re: [VOTE] PIP-251 Enhancing Transaction Buffer Stats and Introducing TransactionBufferInternalStats API

2023-05-21 Thread Xiangying Meng
Close this vote with 3 +1 (binding):
- Nicolò
- Bo
- Penghui

Thanks,
Xiangying

On Mon, May 22, 2023 at 9:50 AM PengHui Li  wrote:
>
> +1 binding
>
> Thanks,
> Penghui
>
> On Fri, May 12, 2023 at 10:34 AM 丛搏  wrote:
>
> > +1(binding)
> >
> > Thanks,
> > Bo
> >
> > Nicolò Boschi  于2023年5月11日周四 16:38写道:
> > >
> > > +1 binding
> > >
> > > I'm happy that we're going to improve the monitoring tools for
> > > transactions,
> > > which is probably the aspect that is lacking more from a user perspective
> > >
> > >
> > > Nicolò Boschi
> > >
> > >
> > > Il giorno mer 10 mag 2023 alle ore 10:58 Xiangying Meng <
> > > xiangy...@apache.org> ha scritto:
> > >
> > > > Hello Pulsar community,
> > > >
> > > > This thread is to start a vote for PIP-251: Enhancing Transaction
> > > > Buffer Stats and Introducing TransactionBufferInternalStats API.
> > > >
> > > > Discussion thread:
> > > > https://lists.apache.org/thread/jsh2rod208xg28mojxwrod84p5zt1nrw
> > > > Issue:
> > > > https://github.com/apache/pulsar/issues/20291
> > > >
> > > > Voting will be open for at least 48 hours.
> > > >
> > > > Thanks!
> > > > Xiangying
> > > >
> >


[VOTE] PIP-251 Enhancing Transaction Buffer Stats and Introducing TransactionBufferInternalStats API

2023-05-10 Thread Xiangying Meng
Hello Pulsar community,

This thread is to start a vote for PIP-251: Enhancing Transaction
Buffer Stats and Introducing TransactionBufferInternalStats API.

Discussion thread:
https://lists.apache.org/thread/jsh2rod208xg28mojxwrod84p5zt1nrw
Issue:
https://github.com/apache/pulsar/issues/20291

Voting will be open for at least 48 hours.

Thanks!
Xiangying


[VOTE] PIP-266: Support batch deletion of tenants, namespaces, topics, and subscriptions using input files and regex in Pulsar CLI

2023-05-04 Thread Xiangying Meng
Hello Pulsar community,

This thread is to start a vote for PIP-266: Support batch deletion of
tenants, namespaces, topics, and subscriptions using input files and
regex in Pulsar CLI.

Discussion thread:
https://lists.apache.org/thread/bcw7wdbll6d85z9sry4g1mskfnn0nwrx
Issue: https://github.com/apache/pulsar/issues/20225

Voting will be open for at least 48 hours.

Thanks!
Xiangying


Re: [Discuss] Suggestion for a "clear" parameter in Pulsar-admin to simplify tenant and namespace cleanup

2023-04-25 Thread Xiangying Meng
Hi Girish and Zike,

@Girish,
Regarding your question about the scope of the regex-based deletion
feature, the initial proposal is to delete the specified topics or
namespaces only in the local cluster where the command is run. If you
need to delete them in remote clusters as well, you can simply run the
command again in the respective clusters.

@Zike,
You raised a valid point about the regex-based deletion feature not
covering the case Yubiao mentioned. To address this, we can introduce
another flag (e.g., `--from-file`) that allows users to read a list of
namespaces and topics from a file for deletion. This would cater to
situations where an arbitrary list of topics/namespaces needs to be
deleted, providing a more comprehensive solution.
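The `--from-file` idea can be sketched as a plain shell loop. Here `pulsar_admin` is a stub standing in for the real `pulsar-admin topics delete` call, so the sketch runs without a cluster:

```shell
# Stub standing in for `pulsar-admin`; a real run would invoke the CLI
# as: pulsar-admin topics delete "$topic"
pulsar_admin() {
  echo "deleted $3"
}

# A file listing the topics to delete, one per line.
cat > /tmp/topics-to-delete.txt <<'EOF'
persistent://public/default/test-1
persistent://public/default/test-2
EOF

# Delete each topic listed in the file.
while read -r topic; do
  pulsar_admin topics delete "$topic"
done < /tmp/topics-to-delete.txt
```

A regex-based variant would replace the file with `pulsar-admin topics list <namespace> | grep '<pattern>'` feeding the same loop; the built-in flags proposed in PIP-266 would fold both into single commands.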

Let me know if you have any further questions or suggestions.

Best regards,
Xiangying Meng

On Tue, Apr 25, 2023 at 5:23 PM Zike Yang  wrote:
>
> Hi, Xiangying
>
> > 1. I understand that one of your concerns is whether the proposed
> > regex-based deletion feature would provide significant advantages over
> > using a simple one-liner script to call the delete topic command for
> > each topic. As Yubiao pointed out, using scripts to delete topics one
> > by one can lead to increased network overhead and slow performance,
> > particularly when dealing with a large number of topics. Implementing
> > regex support for delete operations would provide a more efficient and
> > convenient way to manage resources in Pulsar.
>
> From my understanding, introducing regex-based deletion feature
> doesn't solve the case yubiao mentioned above. They are two different
> cases.
> Regex-based deletion can only delete topics of the same format, not an
> arbitrary list of topics.
>
>
> BR,
> Zike Yang
> On Tue, Apr 25, 2023 at 11:22 AM Xiangying Meng  wrote:
> >
> > Hi Girish,
> >
> > Thank you for raising concerns about the proposed feature. I would
> > like to address the points you mentioned in your email.
> >
> > 1. I understand that one of your concerns is whether the proposed
> > regex-based deletion feature would provide significant advantages over
> > using a simple one-liner script to call the delete topic command for
> > each topic. As Yubiao pointed out, using scripts to delete topics one
> > by one can lead to increased network overhead and slow performance,
> > particularly when dealing with a large number of topics. Implementing
> > regex support for delete operations would provide a more efficient and
> > convenient way to manage resources in Pulsar.
> >
> > 2. In addition to the benefits for testing purposes, we have
> > communicated with business users of Pulsar and found that the proposed
> > regex-based deletion feature can be helpful in production environments
> > as well. For instance, it can be used to efficiently clean up
> > subscriptions associated with deprecated services, ensuring better
> > resource management and reducing clutter in the system.
> >
> > 3. As I suggested earlier, we can introduce a new option flag (e.g.,
> > `--regex` or `--pattern`) to the existing `pulsar-admin topics delete`
> > command to prevent breaking changes for users who have already used
> > the command in their scripts. This would ensure backward compatibility
> > while providing the new functionality for those who want to use regex
> > for deletion.
> >
> > I hope this clears up any confusion and addresses your concerns.
> > Please let me know if you have any further questions or suggestions.
> >
> > Best regards,
> > Xiangying Meng
> >
> > On Mon, Apr 24, 2023 at 6:23 PM Girish Sharma  
> > wrote:
> > >
> > > Hello Yubiao,
> > > As per my understanding, this feature suggestion is intended to delete the
> > > topics from all replicated clusters under the namespace. Thus, the example
> > > you are providing may not be a good fit for this?
> > >
> > > Xiangying, please clarify if my understanding is incorrect.
> > >
> > > On Mon, Apr 24, 2023 at 3:24 PM Yubiao Feng
> > >  wrote:
> > >
> > > > Hi Girish Sharma
> > > >
> > > > > What additional advantage would one get by using that approach
> > > > > rather than simply using a one liner script to just call delete
> > > > > topic for each of those topics if the list of topics is known.
> > > >
> > > > If users enabled `Geo-Replication` on a namespace in mistake(expected
> > > > only to enable one topic),
> > > > it is possible to create many topics on the remote cluster in one 
> > > > second.

Re: [Discuss] Suggestion for a "clear" parameter in Pulsar-admin to simplify tenant and namespace cleanup

2023-04-24 Thread Xiangying Meng
Hi Girish,

Thank you for raising concerns about the proposed feature. I would
like to address the points you mentioned in your email.

1. I understand that one of your concerns is whether the proposed
regex-based deletion feature would provide significant advantages over
using a simple one-liner script to call the delete topic command for
each topic. As Yubiao pointed out, using scripts to delete topics one
by one can lead to increased network overhead and slow performance,
particularly when dealing with a large number of topics. Implementing
regex support for delete operations would provide a more efficient and
convenient way to manage resources in Pulsar.

2. In addition to the benefits for testing purposes, we have
communicated with business users of Pulsar and found that the proposed
regex-based deletion feature can be helpful in production environments
as well. For instance, it can be used to efficiently clean up
subscriptions associated with deprecated services, ensuring better
resource management and reducing clutter in the system.

3. As I suggested earlier, we can introduce a new option flag (e.g.,
`--regex` or `--pattern`) to the existing `pulsar-admin topics delete`
command to prevent breaking changes for users who have already used
the command in their scripts. This would ensure backward compatibility
while providing the new functionality for those who want to use regex
for deletion.

I hope this clears up any confusion and addresses your concerns.
Please let me know if you have any further questions or suggestions.

Best regards,
Xiangying Meng

On Mon, Apr 24, 2023 at 6:23 PM Girish Sharma  wrote:
>
> Hello Yubiao,
> As per my understanding, this feature suggestion is intended to delete the
> topics from all replicated clusters under the namespace. Thus, the example
> you are providing may not be a good fit for this?
>
> Xiangying, please clarify if my understanding is incorrect.
>
> On Mon, Apr 24, 2023 at 3:24 PM Yubiao Feng
>  wrote:
>
> > Hi Girish Sharma
> >
> > > What additional advantage would one get by using that approach
> > > rather than simply using a one liner script to just call delete
> > > topic for each of those topics if the list of topics is known.
> >
> > If users enabled `Geo-Replication` on a namespace in mistake(expected
> > only to enable one topic),
> > it is possible to create many topics on the remote cluster in one second.
> >
> > Not long ago, 10,000 topics were created per second because of this
> > mistake. It took us a long time to
> > remove these topics. We delete these topics in this way:
> > ```
> > cat topics_name_file | awk '{system("bin/pulsar-admin topics delete "$0)}'
> > ```
> > It deletes topics one by one.
> >
> > We conclude later that stress test tools such as `Jmeter` or `ab` should be
> > used to delete so many topics.
> >
> > If Pulsar could provide these APIs, it would be better.
> >
> > Thanks
> > Yubiao Feng
> >
> >
> >
> >
> > On Wed, Apr 19, 2023 at 3:29 PM Girish Sharma 
> > wrote:
> >
> > > Hello Yubiao,
> > >
> > > What additional advantage would one get by using that approach rather
> > than
> > > simply using a one liner script to just call delete topic for each of
> > those
> > > topics if the list of topics is known.
> > >
> > > Regards
> > >
> > > On Wed, Apr 19, 2023 at 12:54 PM Yubiao Feng
> > >  wrote:
> > >
> > > > In addition to these two, it is recommended to add a method to batch
> > > delete
> > > > topics, such as this:
> > > >
> > > > ```
> > > > pulsar-admin topics delete-all-topics <tenant>, <namespace>
> > > >
> > > > or
> > > >
> > > > pulsar-admin topics delete-all-topic <the topic name lists>
> > > > ```
> > > >
> > > > Thanks
> > > > Yubiao Feng
> > > >
> > > > On Sat, Apr 15, 2023 at 5:37 PM Xiangying Meng 
> > > > wrote:
> > > >
> > > > > Dear Apache Pulsar Community,
> > > > >
> > > > > I hope this email finds you well. I am writing to suggest a potential
> > > > > improvement to the Pulsar-admin tool,
> > > > >  which I believe could simplify the process of cleaning up tenants
> > and
> > > > > namespaces in Apache Pulsar.
> > > > >
> > > > > Currently, cleaning up all the namespaces and topics within a tenant
> > or
> > > > > cleaning up all the topics within a namespace requires several manual

[DISCUSS] Pulsar Transaction Buffer Stats Enhancements and New API Proposal

2023-04-22 Thread Xiangying Meng
Dear Pulsar community,

We would like to initiate a discussion on a proposal to enhance
Pulsar's Transaction Buffer Stats and introduce a new API to improve
visibility and troubleshooting capabilities. The proposal aims to
provide more detailed information about the snapshot stats and system
topic internal status.

The high-level goals are:
1. Enhance the existing TransactionBufferStats by adding information
about snapshot stats.
2. Introduce a new API for obtaining TransactionBufferInternalStats,
allowing users to access the state of the system topic used for
storing snapshots.

Here is a brief overview of the proposed changes:
- Extend the existing TransactionBufferStats class to include
additional fields related to snapshot segment stats.
- Introduce a new API to obtain TransactionBufferInternalStats, which
provides information about the state of the system topic used for
storing snapshots and indexes.

For more details on the proposal, please refer to the following PIP
document [1].

We welcome your feedback and suggestions on this proposal.

Best regards,
Xiangying

[1] 
https://docs.google.com/document/d/19vyWjpukq8XmuOmp2kXndjAbofzQ8AfQBR_v9VaZFBw/edit?usp=sharing


Re: [ANNOUNCE] Apache Pulsar 2.10.4 released

2023-04-20 Thread Xiangying Meng
Hi Girish,

Thank you for bringing this to our attention. The issue occurred during the
promotion process when I inadvertently executed an operation that requires
PMC permissions, which I do not have.
Fortunately, we have resolved the issue by asking a PMC member to
re-promote the release, and the Pulsar 2.10.4 artifacts are now available
in the Maven Central Repository. I apologize for any inconvenience this may
have caused and appreciate your understanding during this process.
Please let us know if you have any further questions or concerns.

Best regards,
Xiangying

On Thu, Apr 20, 2023 at 3:30 PM Girish Sharma 
wrote:

> Hello Xiangying,
> The parent pom - org.apache.pulsar:pulsar:2.10.4 doesn't exist on maven
> central here -
> https://repo.maven.apache.org/maven2/org/apache/pulsar/pulsar/
>
> Fetching 2.10.4 pulsar-client is failing due to this.
>
> Regards
>
> On Wed, Apr 19, 2023 at 2:09 PM Zike Yang  wrote:
>
> > Hi, Xiangying
> >
> > Thanks for the announcement.
> > I think we also need to send this email to us...@pulsar.apache.org and
> > annou...@apache.org.
> >
> > BR,
> > Zike Yang
> >
> > On Wed, Apr 19, 2023 at 12:37 PM Xiangying Meng 
> > wrote:
> > >
> > > The Apache Pulsar team is proud to announce Apache Pulsar version
> 2.10.4.
> > >
> > > Pulsar is a highly scalable, low latency messaging platform running on
> > > commodity hardware. It provides simple pub-sub semantics over topics,
> > > guaranteed at-least-once delivery of messages, automatic cursor
> > management
> > > for
> > > subscribers, and cross-datacenter replication.
> > >
> > > For Pulsar release details and downloads, visit:
> > >
> > > https://pulsar.apache.org/download
> > >
> > > Release Notes are at:
> > > https://pulsar.apache.org/release-notes
> > >
> > > We would like to thank the contributors that made the release possible.
> > >
> > > Regards,
> > >
> > > The Pulsar Team
> >
>
>
> --
> Girish Sharma
>


Re: [Discuss] Suggestion for a "clear" parameter in Pulsar-admin to simplify tenant and namespace cleanup

2023-04-20 Thread Xiangying Meng
Hi Zike,

I agree that using regex to delete topics is a good approach, and I also
support the idea of letting users confirm the topics they're about to
delete. However, there's a concern that this might introduce a breaking
change, as users may have already used `pulsar-admin topics delete` in
their scripts.

To address this concern, we can introduce a new option flag (e.g.,
`--regex` or `--pattern`) to the existing `pulsar-admin topics delete`
command, which will be specifically used for regex-based deletion. This
way, the default behavior of the command remains the same and won't affect
existing users' scripts, while the new functionality is introduced for
those who want to use regex for deletion.

An example of how the updated command could look like:

```
pulsar-admin topics delete --regex "public/default/test-topic-\d+"
```
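For illustration (this is not the proposed implementation, just a client-side sketch of the same matching semantics), the filtering such a `--regex` flag implies can be expressed as a regex full-match over fully qualified topic names:

```python
import re

def match_topics(topics, pattern):
    """Return the topics whose fully qualified name matches the regex,
    mirroring what a hypothetical --regex flag would select."""
    compiled = re.compile(pattern)
    return [t for t in topics if compiled.fullmatch(t)]

topics = [
    "public/default/test-topic-1",
    "public/default/test-topic-42",
    "public/default/orders",
]
print(match_topics(topics, r"public/default/test-topic-\d+"))
# ['public/default/test-topic-1', 'public/default/test-topic-42']
```

Using `fullmatch` rather than `match` avoids the surprise of a pattern like `test-topic` also selecting `test-topic-backup`; whatever semantics the flag adopts should be spelled out in the PIP.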

Please let me know what you think about this solution.

Best regards,
Xiangying

On Wed, Apr 19, 2023 at 4:36 PM Zike Yang  wrote:

> Is it possible to use regex to delete namespaces/topics here?
> For instance, we can use the following command to delete all topics in
> the default namespace:
> ```
> pulsar-admin topics delete public/default/*
> ```
> We can also use it to delete some specific topics:
> ```
> pulsar-admin topics delete public/default/*topic
> ```
> This way, we don't need to introduce a new command
> `delete-all-topics`. It also makes it easier for users to delete
> specific topic lists.
>
> > Part-1: Give a summary print of the namespaces, and topics to delete,
> >  and ask the user to confirm if they want to delete the resources.
> > Part-2: If users select “yes”, the deletion will be really executed.
> > Part-3: Print a summary of the results
>
> +1 for this. This will minimize the risk of accidentally deleting
> important resources.
>
> BR,
> Zike Yang
>
> On Wed, Apr 19, 2023 at 3:34 PM Yubiao Feng
>  wrote:
> >
> > > Just wondering - since it is such a dangerous
> > > command, how can we help the user not make
> > > an accidental mass deletion?
> >
> > Just a suggestion: We can make this command executed on two-part
> >
> > Part-1: Give a summary print of the namespaces, and topics to delete,
> >  and ask the user to confirm if they want to delete the resources.
> >
> > Part-2: If users select “yes”, the deletion will be really executed.
> >
> > Part-3: Print a summary of the results
> >
> >
> > Thanks
> > Yubiao Feng
> >
> > On Sun, Apr 16, 2023 at 9:45 PM Asaf Mesika 
> wrote:
> >
> > > How about "truncate" instead of "clear"?
> > >
> > > Just wondering - since it is such a dangerous command, how can we help
> the
> > > user not make an accidental mass deletion?
> > >
> > > On Sat, Apr 15, 2023 at 1:12 PM Girish Sharma  >
> > > wrote:
> > >
> > > > > However, the current goal is to keep the tenant and namespace
> intact
> > > > while
> > > > > cleaning up their contents.
> > > > Ah, I see now. Yes, in that case a clear command is better. Will this
> > > > command also take into account the value of the broker config
> > > > `forceDeleteNamespaceAllowed` in case someone is clearing the owner
> > > tenant?
> > > >
> > > > Regards
> > > >
> > > > On Sat, Apr 15, 2023 at 3:39 PM Enrico Olivelli  >
> > > > wrote:
> > > >
> > > > > The proposal sounds really useful, especially for automated
> testing.
> > > > > +1
> > > > >
> > > > > Enrico
> > > > >
> > > > > > On Sat, Apr 15, 2023 at 12:07, Xiangying Meng wrote:
> > > > > >
> > > > > > Dear Girish,
> > > > > >
> > > > > > Thank you for your response and suggestion to extend the use of
> the
> > > > > > `boolean force` flag for namespaces and tenants.
> > > > > > I understand that the `force` flag is already implemented for
> > > deleting
> > > > > > topics, namespaces, and tenants,
> > > > > > and it provides a consistent way to perform these actions.
> > > > > >
> > > > > > However, the current goal is to keep the tenant and namespace
> intact
> > > > > while
> > > > > > cleaning up their contents.
> > > > > > In other words, I want to have a way to remove all topics within
> a
> > > > > > namespace or all namespaces and top

[ANNOUNCE] Apache Pulsar 2.10.4 released

2023-04-18 Thread Xiangying Meng
The Apache Pulsar team is proud to announce Apache Pulsar version 2.10.4.

Pulsar is a highly scalable, low latency messaging platform running on
commodity hardware. It provides simple pub-sub semantics over topics,
guaranteed at-least-once delivery of messages, automatic cursor management
for
subscribers, and cross-datacenter replication.

For Pulsar release details and downloads, visit:

https://pulsar.apache.org/download

Release Notes are at:
https://pulsar.apache.org/release-notes

We would like to thank the contributors that made the release possible.

Regards,

The Pulsar Team


Re: [VOTE] Pulsar Release 2.10.4 Candidate 4

2023-04-18 Thread Xiangying Meng
Closing this vote with 4 +1 (binding) votes:

- Mattison
- Penghui
- Jiwei
- Hang

On Tue, Apr 18, 2023 at 10:16 PM Hang Chen  wrote:

> +1(binding)
>
> - Checksum and signatures
> - Built from sources on MacOS (JDK 11 and Maven 3.8.6)
> - Run rat check and check-binary-license on the source.
> - Setup Pulsar cluster with one zookeeper node, one bookie node, and
> one broker node
> - Checked the Grafana dashboard metrics
> - Run Pulsar perf produce and consume
> - Run HDFS-based tiered storage offload and consume
> - Triggered the Pulsar CI [1], and all the tests passed
>
>
> Regards,
> Hang
>
> [1] https://github.com/hangc0276/pulsar/pull/19
>
> guo jiwei wrote on Tue, Apr 18, 2023 at 11:15:
> >
> > +1 (binding)
> >
> > - Check the signature
> > - Build from the source package
> > - Start the standalone
> > - Validate Pub/Sub and Java Functions
> > - Validate Stateful Functions
> >
> > Regards
> > Jiwei Guo (Tboy)
> >
> > On Mon, Apr 17, 2023 at 8:52 PM PengHui Li  wrote:
> > >
> > > +1 (binding)
> > >
> > > - Checked the signature
> > > - Build from the source package
> > > - Start standalone
> > > - Checked cassandra connector
> > > - Checked state function
> > >
> > > Regards,
> > > Penghui
> > >
> > > On Sat, Apr 15, 2023 at 5:37 PM  wrote:
> > >
> > > > +1 (Binding)
> > > >
> > > >  • Built from the source package (maven 3.8.6 OpenJDK 11)
> > > >  • Ran binary package standalone with pub/sub
> > > >  • Ran docker image(pulsar-all) standalone with pub/sub
> > > >  • Ran License check
> > > >
> > > > Best,
> > > > Mattison
> > > > On Apr 12, 2023, 17:10 +0800, Xiangying Meng ,
> > > > wrote:
> > > > > This is the fourth release candidate for Apache Pulsar, version
> 2.10.4.
> > > > >
> > > > > This release contains 126 commits by 37 contributors.
> > > > >
> https://github.com/apache/pulsar/compare/v2.10.3...v2.10.4-candidate-4
> > > > >
> > > > > *** Please download, test and vote on this release. This vote will
> stay
> > > > open
> > > > > for at least 72 hours ***
> > > > >
> > > > > Note that we are voting upon the source (tag), binaries are
> provided for
> > > > > convenience.
> > > > >
> > > > > Source and binary files:
> > > > >
> https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.10.4-candidate-4/
> > > > >
> > > > > SHA-512 checksums:
> > > > >
> > > >
> 63343005235be32e970574c9733f06cb472adfdd6511d53b91902d66c805b21cee4039b51b69013bf0f9cbcde82f4cd944c069a7d119d1c908a40716ff82eca3
> > > > > apache-pulsar-2.10.4-bin.tar.gz
> > > > >
> > > >
> 2d3398a758917bccefa8550f3f69ec8a72a29f541bcd45963e6fddaec024cc690b33f1d49392dc2437e332e90a89e47334925a50960c5f8960e34c1ac8ed2543
> > > > > apache-pulsar-2.10.4-src.tar.gz
> > > > >
> > > > > Maven staging repo:
> > > > >
> https://repository.apache.org/content/repositories/orgapachepulsar-1226
> > > > >
> > > > > The tag to be voted upon:
> > > > > v2.10.4-candidate-4
> > > > > (1fe05d5cd3ec9f70cd7179efa4b69eac72fd88bd)
> > > > > https://github.com/apache/pulsar/releases/tag/v2.10.4-candidate-4
> > > > >
> > > > > Pulsar's KEYS file containing PGP keys you use to sign the release:
> > > > > https://downloads.apache.org/pulsar/KEYS
> > > > >
> > > > > Docker images:
> > > > >
> > > > > 
> > > > >
> > > >
> https://hub.docker.com/layers/xiangyingmeng/pulsar/2.10.4/images/sha256-8b76d49401d3fe398be3cde395fb164ad8722b64691e31c44991f32746ca8119?context=repo
> > > > >
> > > > > 
> > > > >
> > > >
> https://hub.docker.com/layers/xiangyingmeng/pulsar-all/2.10.4/images/sha256-c20a13ed215e4837f95a99cf84914d03f557204d44ed610dfc41d2e23a77a92c?context=repo
> > > > >
> > > > > Please download the source package, and follow the README to build
> > > > > and run the Pulsar standalone service.
> > > >
>


Re: [DISCUSS] Sorting out pulsar's internal thread pools

2023-04-18 Thread Xiangying Meng
Thank you for bringing up this important topic. I completely agree with
this initiative.
This would be a great starting point for revisiting and improving the
Pulsar codebase.

Thanks,
Xiangying

On Tue, Apr 18, 2023 at 2:18 PM Lin Lin  wrote:

> This is a good idea.
>
> Thanks,
> Lin Lin
>
> On 2023/04/18 02:07:55 mattisonc...@gmail.com wrote:
> >
> > Hello, folks.
> >
> > I would like to start discussing the pulsar internal thread pool sorting
> out.
> >
> > How did I get this idea?
> >
> > Recently, we met some problems with the BK operation timeout. After
> investigating, we found an issue that is we share the IO
> executor(workgroup) with the Bookkeeper client and internal client and do
> some other async task in the dispatcher or somewhere to avoid deadlock.
> >
> > But the problem over here. If we use this executor to do some kind of
> `blocking`(or spend much time computing. e.g. reply to many delayed
> messages) operation, it will block BK clients from sending requests if they
> are using the same thread.
> >
> > And then, I checked all the usage of the thread pool. We need the rule
> to constrain what thread pool we should use.
> >
> > What am I expecting?
> >
> > I want to collect all the thread pools and define a clear usage guide to
> avoid wrong use and improve the fault tolerance(the component problem
> shouldn't affect the whole broker)
> >
> >
> >
> > I need to hear your guy's opinions. Please feel free to leave any
> questions. Thanks!
> >
> >
> > Best,
> > Mattison
> >
> >
> >
>


Re: [Discuss] Suggestion for a "clear" parameter in Pulsar-admin to simplify tenant and namespace cleanup

2023-04-17 Thread Xiangying Meng
Hi Penghui,

I appreciate your feedback and completely agree with your concern about the
learning curve for Pulsar users. Introducing additional keywords could
potentially increase the complexity for users who need to understand the
new terms. Therefore, I accept your suggestion to use delete-all-namespaces
and delete-all-topics for the proposed improvement.

Thank you for sharing your insights, and I look forward to working on this
enhancement with the community's support.

Best regards,
Xiangying

On Mon, Apr 17, 2023 at 3:24 PM PengHui Li  wrote:

> The new operation will delete all the data and the metadata under
> a tenant or namespace. I would like to suggest to use
>
> `delete-all-namespaces` and `delete-all-topics`
>
> The `delete` actually acts as a fact of deleting metadata and data.
> And `truncate` is for deleting the data. IMO, we'd better not
> introduce another new keyword, either `clear` or `wipe`, because
> it will bring more knowledge to Pulsar users who must understand.
>
>
> Thanks,
> Penghui
>
> On Mon, Apr 17, 2023 at 10:56 AM Xiangying Meng 
> wrote:
>
> > Hi Enrico,
> >
> > Thank you for your feedback. While I understand that
> > "delete-all-namespaces" is more explicit,
> > I also think it's a bit lengthy for a command-line parameter.
> > I personally believe the "wipe" option, combined with a safety
> confirmation
> > step,
> >  would be more user-friendly and efficient.
> >
> > By adding a safety confirmation step, we can minimize the risk of
> > accidental mass deletion.
> > Users would be required to confirm their intention to perform the
> deletion
> > by
> > typing 'YES' or a similar confirmation word before the operation
> proceeds.
> >
> > What do you think about this approach?
> > If there's a consensus, I can work on implementing this feature with the
> > "wipe" option and the safety confirmation step.
> >
> > Best regards,
> > Xiangying
> >
> > On Sun, Apr 16, 2023 at 11:25 PM Enrico Olivelli 
> > wrote:
> >
> > > On Sun, Apr 16, 2023 at 15:45, Asaf Mesika wrote:
> > >
> > > > How about "truncate" instead of "clear"?
> > > >
> > >
> > >
> > > Truncate is better, or maybe 'wipe' (because truncate means another
> > > operation for topics currently)
> > >
> > > Another alternative, more explicit:
> > > pulsar-admin tenants delete-all-namespaces TENANT
> > >
> > > Enrico
> > >
> > > >
> > > > Just wondering - since it is such a dangerous command, how can we
> help
> > > the
> > > > user not make an accidental mass deletion?
> > > >
> > > > On Sat, Apr 15, 2023 at 1:12 PM Girish Sharma <
> scrapmachi...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > > However, the current goal is to keep the tenant and namespace
> > intact
> > > > > while
> > > > > > cleaning up their contents.
> > > > > Ah, I see now. Yes, in that case a clear command is better. Will
> this
> > > > > command also take into account the value of the broker config
> > > > > `forceDeleteNamespaceAllowed` in case someone is clearing the owner
> > > > tenant?
> > > > >
> > > > > Regards
> > > > >
> > > > > On Sat, Apr 15, 2023 at 3:39 PM Enrico Olivelli <
> eolive...@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > The proposal sounds really useful, especially for automated
> > testing.
> > > > > > +1
> > > > > >
> > > > > > Enrico
> > > > > >
> > > > > > On Sat, Apr 15, 2023 at 12:07, Xiangying Meng wrote:
> > > > > > >
> > > > > > > Dear Girish,
> > > > > > >
> > > > > > > Thank you for your response and suggestion to extend the use of
> > the
> > > > > > > `boolean force` flag for namespaces and tenants.
> > > > > > > I understand that the `force` flag is already implemented for
> > > > deleting
> > > > > > > topics, namespaces, and tenants,
> > > > > > > and it provides a consistent way to perform these actions.
> > > > > > >
> > > > > > > However, the current goal is to keep the tenant and na

Re: [Discuss] Suggestion for a "clear" parameter in Pulsar-admin to simplify tenant and namespace cleanup

2023-04-16 Thread Xiangying Meng
Hi Enrico,

Thank you for your feedback. While I understand that
"delete-all-namespaces" is more explicit,
I also think it's a bit lengthy for a command-line parameter.
I personally believe the "wipe" option, combined with a safety confirmation
step, would be more user-friendly and efficient.

By adding a safety confirmation step, we can minimize the risk of
accidental mass deletion.
Users would be required to confirm their intention to perform the deletion
by
typing 'YES' or a similar confirmation word before the operation proceeds.
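As a minimal sketch of that confirmation gate (purely illustrative; the function name and exact behavior are hypothetical, not part of the proposal text):

```python
def confirm_deletion(resources, answer):
    """Safety-confirmation step: print a summary of what would be
    deleted and proceed only if the user typed exactly 'YES'."""
    print(f"About to delete {len(resources)} resource(s):")
    for r in resources:
        print(f"  - {r}")
    # Deletion proceeds only on an exact, case-sensitive 'YES'.
    return answer.strip() == "YES"

print(confirm_deletion(["public/default/t1"], "YES"))  # True
print(confirm_deletion(["public/default/t1"], "yes"))  # False
```

Requiring the exact word (rather than a bare `y`) keeps the barrier deliberately high for a mass-deletion command, while a `--force`-style flag could still skip the prompt for automation.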

What do you think about this approach?
If there's a consensus, I can work on implementing this feature with the
"wipe" option and the safety confirmation step.

Best regards,
Xiangying

On Sun, Apr 16, 2023 at 11:25 PM Enrico Olivelli 
wrote:

> On Sun, Apr 16, 2023 at 15:45, Asaf Mesika wrote:
>
> > How about "truncate" instead of "clear"?
> >
>
>
> Truncate is better, or maybe 'wipe' (because truncate means another
> operation for topics currently)
>
> Another alternative, more explicit:
> pulsar-admin tenants delete-all-namespaces TENANT
>
> Enrico
>
> >
> > Just wondering - since it is such a dangerous command, how can we help
> the
> > user not make an accidental mass deletion?
> >
> > On Sat, Apr 15, 2023 at 1:12 PM Girish Sharma 
> > wrote:
> >
> > > > However, the current goal is to keep the tenant and namespace intact
> > > while
> > > > cleaning up their contents.
> > > Ah, I see now. Yes, in that case a clear command is better. Will this
> > > command also take into account the value of the broker config
> > > `forceDeleteNamespaceAllowed` in case someone is clearing the owner
> > tenant?
> > >
> > > Regards
> > >
> > > On Sat, Apr 15, 2023 at 3:39 PM Enrico Olivelli 
> > > wrote:
> > >
> > > > The proposal sounds really useful, especially for automated testing.
> > > > +1
> > > >
> > > > Enrico
> > > >
> > > > On Sat, Apr 15, 2023 at 12:07, Xiangying Meng wrote:
> > > > >
> > > > > Dear Girish,
> > > > >
> > > > > Thank you for your response and suggestion to extend the use of the
> > > > > `boolean force` flag for namespaces and tenants.
> > > > > I understand that the `force` flag is already implemented for
> > deleting
> > > > > topics, namespaces, and tenants,
> > > > > and it provides a consistent way to perform these actions.
> > > > >
> > > > > However, the current goal is to keep the tenant and namespace
> intact
> > > > while
> > > > > cleaning up their contents.
> > > > > In other words, I want to have a way to remove all topics within a
> > > > > namespace or all namespaces and topics
> > > > > within a tenant without actually deleting the namespace or tenant
> > > itself.
> > > > >
> > > > > To achieve this goal, I proposed adding a `clear` command for
> > > > `namespaces`
> > > > > and `tenants`.
> > > > >
> > > > > This approach would allow users to keep the tenant and namespace
> > > > structures
> > > > > in place
> > > > > while cleaning up their contents.
> > > > > I hope this clarifies my intention, and I would like to hear your
> > > > thoughts
> > > > > on this proposal.
> > > > >
> > > > > Best regards,
> > > > > Xiangying
> > > > >
> > > > > On Sat, Apr 15, 2023 at 5:49 PM Girish Sharma <
> > scrapmachi...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Hello Xiangying,
> > > > > > This indeed is a cumbersome task to delete a filled namespace or
> > > > tenant. We
> > > > > > face this challenge in our organization where we use the
> > > multi-tenancy
> > > > > > feature of pulsar heavily.
> > > > > >
> > > > > > I would like to suggest a different command to do this though..
> > > > Similar to
> > > > > > how you cannot delete a topic without deleting its
> > > > > > subscribers/producers/consumers, unless we use the `boolean
> force`
> > > > flag.
> > > > > > Why not extend this to namespace and tenant as well 

Re: [DISCUSS] Add checklist for PMC binding vote of PIP

2023-04-16 Thread Xiangying Meng
Hi, Asaf
This is a great suggestion. I believe one significant advantage is that
it can help newcomers better understand the voting process and how
decisions are made.
The checklist can serve as a reference framework,
assisting new members in becoming familiar with the project's voting
requirements and standards more quickly,
thereby improving the overall participation and transparency of the project.

Moreover, this checklist can ensure that all participants have thoroughly
reviewed the PIP,
resulting in higher-quality PIPs.
Although introducing a checklist may bring some additional burden,
in the long run, it contributes to the project's robust development and
continuous improvement.

Thanks
Xiangying


On Sun, Apr 16, 2023 at 11:23 PM Enrico Olivelli 
wrote:

> Asaf,
> I understand your intent.
>
> I think that when anyone casts a +1, especially with '(binding)' they know
> well what they are doing.
> It is not an 'I like it', but it is an important assumption of
> responsibility.
> This applies to all the VOTEs.
>
> Requiring this checklist may be good in order to help newcomers to
> understand better how we take our decisions.
>
> If you feel that currently there are people who cast binding votes without
> knowing what they do...then I believe that it is kind of a serious issue.
>
> It happened a few times recently that I saw this sort of ML thread about
> 'the PMC is not doing well', 'we want to retire people in the PMC...', 'PMC
> members vote on stuff without knowing what they do'...
>
> I wonder what is the root cause of this.
>
> Back to he original question, my position it:
> +1 to writing a clear and very brief summary of the considerations you have
> to take before casting your vote.
> -1 to requiring this checklist when we cast a vote
>
> Thanks
> Enrico
>
>
>
> On Sun, Apr 16, 2023 at 15:47, Asaf Mesika wrote:
>
> > Would love additional feedback on this suggestion.
> >
> >
> > On Fri, Mar 31, 2023 at 4:19 AM PengHui Li  wrote:
> >
> > > It looks like we can try to add a new section to
> > > https://github.com/apache/pulsar/blob/master/wiki/proposals/PIP.md
> > > like "Review the proposal" and it is not only for PMCs, all the
> reviewers
> > > can follow the checklist
> > > to cast a solemn vote.
> > >
> > > And I totally support the motivation of this discussion.
> > >
> > > Regards,
> > > Penghui
> > >
> > > On Fri, Mar 31, 2023 at 4:46 AM Asaf Mesika 
> > wrote:
> > >
> > > > Hi,
> > > >
> > > > When you read last year's PIPs, many lack background information,
> hard
> > to
> > > > read and understand even if you know pulsar in and out.
> > > >
> > > > First step to fix was to change the PIP is structured:
> > > > https://github.com/apache/pulsar/pull/19832
> > > >
> > > > In my opinion, when someone votes "+1" and it's binding, they
> basically
> > > > take the responsibility to say:
> > > >
> > > > * I read the PIP fully.
> > > > * A person having basic Pulsar user knowledge, can read the PIP and
> > fully
> > > > understand it
> > > >   Why? Since it contains all background information necessary to
> > > > understand the problem and the solution
> > > >It is written in a coherent and easy to understand way.
> > > > * I validated the solution technically and can vouch for it.
> > > >Examples:
> > > >The PIP adds schema compatibility rules for Protobuf Native.
> > > >  I learned / know protobuf well.
> > > >  I validated the rules written containing all rules
> needed
> > > and
> > > > not containing wrong rules, or missing rules.
> > > >
> > > >The PIP adds new OpenID Connect authentication.
> > > >   I learned / know Authentication in Pulsar.
> > > >I learned / know OpenID connect
> > > >I validated the solution is architecturally correct
> and
> > > > sound.
> > > >
> > > > Basically the PMC member voting +1 on it, basically acts as Tech Lead
> > of
> > > > Pulsar for this PIP.
> > > > It's a very big responsibility.
> > > > It's the only way to ensure Pulsar architecture won't go haywire over
> > the
> > > > next few years.
> > > >
> > > > Yes, it will slow the process down.
> > > > Yes, it will be harder to find people to review it like that.
> > > >
> > > > But, it will raise the bar for PIPs and for Pulsar architecture
> > overall.
> > > > IMO we need that, and it's customary.
> > > >
> > > > *My suggestion*
> > > > When PMC member replies to vote, it will look like this:
> > > >
> > > > "
> > > > +1 (binding)
> > > >
> > > > [v] PIP has all sections detailed in the PIP template (Background,
> > > > motivation, etc.)
> > > > [v] A person having basic Pulsar user knowledge, can read the PIP and
> > > fully
> > > > understand it
> > > > [v] I read PIP and validated it technically
> > > > "
> > > >
> > > > or
> > > > "
> > > > -1 (binding)
> > > >
> > > > I think this PIP needs:
> > > > ...
> > > > "
> > > >
> > > > Thanks,
> > > >
> > > > Asaf
> > > >
> > >
> >
>


Re: [Discuss] Suggestion for a "clear" parameter in Pulsar-admin to simplify tenant and namespace cleanup

2023-04-15 Thread Xiangying Meng
Dear Girish,

Thank you for your response and suggestion to extend the use of the
`boolean force` flag for namespaces and tenants.
I understand that the `force` flag is already implemented for deleting
topics, namespaces, and tenants,
and it provides a consistent way to perform these actions.

However, the current goal is to keep the tenant and namespace intact while
cleaning up their contents.
In other words, I want to have a way to remove all topics within a
namespace or all namespaces and topics
within a tenant without actually deleting the namespace or tenant itself.

To achieve this goal, I proposed adding a `clear` command for `namespaces`
and `tenants`.

This approach would allow users to keep the tenant and namespace structures
in place
while cleaning up their contents.
I hope this clarifies my intention, and I would like to hear your thoughts
on this proposal.

Best regards,
Xiangying

On Sat, Apr 15, 2023 at 5:49 PM Girish Sharma 
wrote:

> Hello Xiangying,
> This indeed is a cumbersome task to delete a filled namespace or tenant. We
> face this challenge in our organization where we use the multi-tenancy
> feature of pulsar heavily.
>
> I would like to suggest a different command to do this though.. Similar to
> how you cannot delete a topic without deleting its
> subscribers/producers/consumers, unless we use the `boolean force` flag.
> Why not extend this to namespace and tenant as well and let the force param
> do the cleanup (which your suggested `clear` command would do).
>
> As of today, using force to delete a namespace just returns 405 saying
> broker doesn't allow force delete of namespace containing topics.
>
> Any thoughts?
>
> On Sat, Apr 15, 2023 at 3:07 PM Xiangying Meng 
> wrote:
>
> > Dear Apache Pulsar Community,
> >
> > I hope this email finds you well. I am writing to suggest a potential
> > improvement to the Pulsar-admin tool,
> >  which I believe could simplify the process of cleaning up tenants and
> > namespaces in Apache Pulsar.
> >
> > Currently, cleaning up all the namespaces and topics within a tenant or
> > cleaning up all the topics within a namespace requires several manual
> > steps,
> > such as listing the namespaces, listing the topics, and then deleting
> each
> > topic individually.
> > This process can be time-consuming and error-prone for users.
> >
> > To address this issue, I propose the addition of a "clear" parameter to
> the
> > Pulsar-admin tool,
> > which would automate the cleanup process for tenants and namespaces.
> Here's
> > a conceptual implementation:
> >
> > 1. To clean up all namespaces and topics within a tenant:
> > ``` bash
> > pulsar-admin tenants clear <tenant-name>
> > ```
> > 2. To clean up all topics within a namespace:
> > ```bash
> > pulsar-admin namespaces clear <tenant>/<namespace>
> > ```
> >
> > By implementing these new parameters, users would be able to perform
> > cleanup operations more efficiently and with fewer manual steps.
> > I believe this improvement would greatly enhance the user experience when
> > working with Apache Pulsar.
> >
> > I'd like to discuss the feasibility of this suggestion and gather
> feedback
> > from the community.
> > If everyone agrees, I can work on implementing this feature and submit a
> > pull request for review.
> >
> > Looking forward to hearing your thoughts on this.
> >
> > Best regards,
> > Xiangying
> >
>
>
> --
> Girish Sharma
>


[Discuss] Suggestion for a "clear" parameter in Pulsar-admin to simplify tenant and namespace cleanup

2023-04-15 Thread Xiangying Meng
Dear Apache Pulsar Community,

I hope this email finds you well. I am writing to suggest a potential
improvement to the Pulsar-admin tool,
 which I believe could simplify the process of cleaning up tenants and
namespaces in Apache Pulsar.

Currently, cleaning up all the namespaces and topics within a tenant or
cleaning up all the topics within a namespace requires several manual
steps,
such as listing the namespaces, listing the topics, and then deleting each
topic individually.
This process can be time-consuming and error-prone for users.

To address this issue, I propose the addition of a "clear" parameter to the
Pulsar-admin tool,
which would automate the cleanup process for tenants and namespaces. Here's
a conceptual implementation:

1. To clean up all namespaces and topics within a tenant:
``` bash
pulsar-admin tenants clear <tenant-name>
```
2. To clean up all topics within a namespace:
```bash
pulsar-admin namespaces clear <tenant>/<namespace>
```
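Until such a `clear` command exists, the same cleanup can be scripted with today's CLI — a rough sketch, assuming `pulsar-admin` is on `$PATH` and already configured for the target cluster; the tenant/namespace names in the commented usage are illustrative:

```bash
# Sketch of the proposed "clear" semantics using existing commands:
# delete every topic in a namespace (or tenant) while keeping the
# namespace/tenant itself intact.
clear_namespace() {
  local ns="$1"                                  # e.g. my-tenant/my-ns
  pulsar-admin topics list "$ns" | while read -r topic; do
    [ -n "$topic" ] && pulsar-admin topics delete "$topic"
  done
  pulsar-admin topics list-partitioned-topics "$ns" | while read -r topic; do
    [ -n "$topic" ] && pulsar-admin topics delete-partitioned-topic "$topic"
  done
}

clear_tenant() {
  local tenant="$1"
  pulsar-admin namespaces list "$tenant" | while read -r ns; do
    [ -n "$ns" ] && clear_namespace "$ns"
  done
}

# Usage (commented out so the sketch is safe to source):
# clear_namespace my-tenant/my-namespace
# clear_tenant my-tenant
```

A built-in `clear` command would fold these steps into one server-side operation, avoiding the per-topic round trips this script makes.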

By implementing these new parameters, users would be able to perform
cleanup operations more efficiently and with fewer manual steps.
I believe this improvement would greatly enhance the user experience when
working with Apache Pulsar.

I'd like to discuss the feasibility of this suggestion and gather feedback
from the community.
If everyone agrees, I can work on implementing this feature and submit a
pull request for review.

Looking forward to hearing your thoughts on this.

Best regards,
Xiangying


[VOTE] Pulsar Release 2.10.4 Candidate 4

2023-04-12 Thread Xiangying Meng
This is the fourth release candidate for Apache Pulsar, version 2.10.4.

This release contains 126 commits by 37 contributors.
https://github.com/apache/pulsar/compare/v2.10.3...v2.10.4-candidate-4

*** Please download, test and vote on this release. This vote will stay open
for at least 72 hours ***

Note that we are voting upon the source (tag), binaries are provided for
convenience.

Source and binary files:
https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.10.4-candidate-4/

SHA-512 checksums:
63343005235be32e970574c9733f06cb472adfdd6511d53b91902d66c805b21cee4039b51b69013bf0f9cbcde82f4cd944c069a7d119d1c908a40716ff82eca3
 apache-pulsar-2.10.4-bin.tar.gz
2d3398a758917bccefa8550f3f69ec8a72a29f541bcd45963e6fddaec024cc690b33f1d49392dc2437e332e90a89e47334925a50960c5f8960e34c1ac8ed2543
 apache-pulsar-2.10.4-src.tar.gz

Maven staging repo:
https://repository.apache.org/content/repositories/orgapachepulsar-1226

The tag to be voted upon:
v2.10.4-candidate-4
(1fe05d5cd3ec9f70cd7179efa4b69eac72fd88bd)
https://github.com/apache/pulsar/releases/tag/v2.10.4-candidate-4

Pulsar's KEYS file containing PGP keys you use to sign the release:
https://downloads.apache.org/pulsar/KEYS

Docker images:


https://hub.docker.com/layers/xiangyingmeng/pulsar/2.10.4/images/sha256-8b76d49401d3fe398be3cde395fb164ad8722b64691e31c44991f32746ca8119?context=repo


https://hub.docker.com/layers/xiangyingmeng/pulsar-all/2.10.4/images/sha256-c20a13ed215e4837f95a99cf84914d03f557204d44ed610dfc41d2e23a77a92c?context=repo

Please download the source package, and follow the README to build
and run the Pulsar standalone service.


Re: Call for projects and mentors for OSPP 2023

2023-04-11 Thread Xiangying Meng
Hi Dianjin,

Thank you for sharing the exciting news about Apache Pulsar's participation
in OSPP 2023.
I would also like to express my interest in joining this year's event as a
mentor for the Pulsar project.

We are planning to work on implementing isolation levels for Pulsar.
I will send you the detailed proposal for this project idea soon.

Best regards,
Xiangying

On Tue, Apr 11, 2023 at 5:07 PM Yu  wrote:

> Thanks Dianjing!
>
> This is an activity worth attending based on my previous experience. As a
> mentor for OSPP 2021 and 2022, I collaborated with some students and
> @urfreespace to create fresh content experiences for Pulsar, such as
> Introducing a Bot to Improve the Efficiency of Developing Docs [1] and
> Automating Documentation Workflows.
>
> I'll continue to apply to participate as a mentor this year to infuse more
> new blood into our community with the project targeting improving content
> and website. Will send the application to you later.
>
> [1]
>
> https://docs.google.com/document/d/1bQfZkSu5nG1tNycpmXXtUFn-Z5-h-uqHv6IXsCEySQ8/edit
>
> On Tue, Apr 11, 2023 at 10:53 AM Dianjin Wang 
> wrote:
>
> > Hi all,
> >
> > Glad to share that Apache Pulsar is listed at the OSPP 2023 again. This
> > year, the Pulsar community can open 7 projects at most.
> >
> > For OSPP 2023, the project idea will be open from 4/04, 2023 to 04/28,
> > 2023(UTC+8). If you have great ideas, please reply to this email by
> > following the project template. Then I can help you to submit them.
> >
> > OSPP asks that Pulsar committers, PMC members, and contributors be the
> > mentors; a mentor can only mentor one project. Both mentors and students
> > will receive financial awards for completed projects.
> >
> > You may want to know about OSPP 2022, so refer to this email[1] for
> > details.
> >
> > --
> > [Template]
> >
> > ## Project Info
> > Project Name:
> > Project Description: (at most 1000 words)
> > Difficulty Level:
> > - [ ] Basic
> > - [ ] Advanced
> >
> > Project Output Requirements:
> > Item 1:__
> > Item 2:__
> > Item 3:__
> > …
> >
> > Project Technical Requirements:
> > Item 1:__
> > Item 2:__
> > Item 3:__
> > …
> >
> > ## Mentor Info
> > Mentor Name:
> > Mentor Email:
> > --
> >
> > [1] https://lists.apache.org/thread/7pplcd4c35qjzt2o58qxykkty8qqxvt3
> >
> > Best,
> > Dianjin Wang
> >
>


Re: [DISCUSS] PIP-255: Assign topic partitions to bundle by round robin

2023-04-11 Thread Xiangying Meng
Hi Linlin,
> This is an incompatible modification, so the entire cluster needs to be
upgraded, not just a part of the nodes

Appreciate your contribution to the new feature in PIP-255.
 I have a question regarding the load-balancing aspect of this feature.

You mentioned that this is an incompatible modification,
and the entire cluster needs to be upgraded, not just a part of the nodes.
 I was wondering why we can only have one load-balancing strategy.
Would it be possible to abstract the logic here and make it an optional
choice?
This way, we could have multiple load-balancing strategies,
such as hash-based, round-robin, etc., available for users to choose from.

I'd love to hear your thoughts on this.

Best regards,
Xiangying
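To make the contrast concrete, here is an illustrative sketch — not Pulsar's actual implementation, which hashes topic names into per-namespace bundle key ranges — of hash-based versus round-robin partition-to-bundle assignment:

```bash
# Illustrative contrast of the two mapping strategies discussed in this
# thread. cksum stands in for "some hash of the topic name".
NUM_BUNDLES=4

hash_based_bundle() {            # $1 = full partition topic name
  local h
  h="$(printf '%s' "$1" | cksum | cut -d' ' -f1)"
  echo $(( h % NUM_BUNDLES ))
}

round_robin_bundle() {           # $1 = partition index
  # Partition i goes to bundle i mod N, so one topic's partitions are
  # spread evenly across all bundles regardless of their names.
  echo $(( $1 % NUM_BUNDLES ))
}

for i in 0 1 2 3 4 5 6 7; do
  t="persistent://t/ns/orders-partition-$i"
  echo "$t -> hash: $(hash_based_bundle "$t"), round-robin: $(round_robin_bundle "$i")"
done
```

With 8 partitions and 4 bundles, round-robin places exactly two partitions per bundle, while hash-based placement can skew; that even spread of one topic's partitions is the property the PIP is after, and it is what motivates making the strategy pluggable rather than replacing the hash-based one outright.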

On Mon, Apr 10, 2023 at 8:23 PM PengHui Li  wrote:

> Hi Lin,
>
> > The load managed by each Bundle is not even. Even if the number of
> partitions managed
>by each bundle is the same, there is no guarantee that the sum of the
> loads of these partitions
>will be the same.
>
> Do we expect that the bundles should have the same loads? The bundle is the
> base unit of the
> load balancer, we can set the high watermark of the bundle, e.g., the
> maximum topics and throughput.
> But the bundle can have different real loads, and if one bundle runs out of
> the high watermark, the bundle
> will be split. Users can tune the high watermark to distribute the loads
> evenly across brokers.
>
> For example, there are 4 bundles with loads 1, 3, 2, 4, the maximum load of
> a bundle is 5 and 2 brokers.
> We can assign bundle 0 and bundle 3 to broker-0 and bundle 1 and bundle 2
> to broker-2.
>
> Of course, this is the ideal situation. If bundle 0 has been assigned to
> broker-0 and bundle 1 has been
> assigned to broker-1. Now, bundle 2 will go to broker 1, and bundle 3 will
> go to broker 1. The loads for each
> broker are 3 and 7. Dynamic programming can help to find an optimized
> solution with more bundle unloads.
>
> So, should we design the bundle to have even loads? It is difficult to
> achieve in reality. And the proposal
> said, "Let each bundle carry the same load as possible". Is it the correct
> direction for the load balancer?
>
> > Doesn't shed loads very well. The existing default policy
> ThresholdShedder has a relatively high usage
>threshold, and various traffic thresholds need to be set. Many clusters
> with high TPS and small message
>bodies may have high CPU but low traffic; And for many small-scale
> clusters, the threshold needs to be
>modified according to the actual business.
>
> Can it be resolved by introducing the entry write/read rate to the bundle
> stats?
>
> > The removed Bundle cannot be well distributed to other Brokers. The load
> information of each Broker
>will be reported at regular intervals, so the judgment of the Leader
> Broker when allocating Bundles cannot
>be guaranteed to be completely correct. Secondly, if there are a large
> number of Bundles to be redistributed,
>the Leader may make the low-load Broker a new high-load node when the
> load information is not up-to-date.
>
> Can we try to force-sync the load data of the brokers before performing the
> distribution of a large number of
> bundles?
>
> For the Goal section in the proposal. It looks like it doesn't map to the
> issues mentioned in the Motivation section.
> IMO, the proposal should clearly describe the Goal, like which problem will
> be resolved with this proposal.
> Both of the above 3 issues or part of them. And what is the high-level
> solution to resolve the issue,
> and what are the pros and cons compared with the existing solution without
> diving into the implementation section.
>
> Another consideration is the default max bundles of a namespace is 128. I
> don't think the common cases that need
> to set 128 partitions for a topic. If the partitions < the bundle's count,
> will the new solution basically be equivalent to
> the current way?
>
> If this is not a general solution for common scenarios. I support making
> the topic-bundle assigner pluggable without
> introducing the implementation to the Pulsar repo. Users can implement
> their own assigner based on the business
> requirement. Pulsar's general solution may not be good for all scenarios,
> but it is better for scalability (bundle split)
> and enough for most common scenarios. We can keep improving the general
> solution for the general requirement
> for the most common scenarios.
>
> Regards,
> Penghui
>
>
> On Wed, Mar 22, 2023 at 9:52 AM Lin Lin  wrote:
>
> >
> > > This appears to be the "round-robin topic-to-bundle mapping" option in
> > > the `fundBundle` function. Is this the only place that needs an update?
> > Can
> > > you list what change is required?
> >
> > In this PIP, we only discuss topic-to-bundle mapping
> > Change is required:
> > 1)
> > When lookup, partitions is assigned to bundle:
> > Lookup -> NamespaceService#getBrokerServiceUrlAsync ->
> > 

Re: [DISCUSS] PIP-250: Add proxyVersion to CommandConnect

2023-04-11 Thread Xiangying Meng
Hi Michael,

I appreciate your detailed explanation.
I agree with your proposal.
It seems that sharing version information between the proxy
and broker would indeed be beneficial for debugging purposes without
creating an unnecessary tight coupling between the two components.

I appreciate your willingness to discuss this feature further
and for taking the time to address the concerns raised.

Best regards,
Xiangying

On Tue, Apr 11, 2023 at 12:17 PM Michael Marshall 
wrote:

> Thanks for your feedback Mattison and Xiangying. I'll note that the
> PIP vote did close already and I have the implementation just about
> done, but I'm happy to discuss this feature a bit more here.
>
> > we should avoid coupling the concepts in the proxy and the broker
>
> Sharing version information does not tightly couple the proxy and the
> broker. It makes it easier to collect the relevant debugging
> information that can help solve problems faster. The client already
> tells the broker its version. Are you able to provide more insight
> here?
>
> > The proxy should be a separate component. Instead of continuing to
> couple the relevant proxy concepts into the broker, everyone should be a
> client to the broker.
>
> I don't think this analogy holds true. The pulsar proxy is not a layer
> 4 proxy. It sends its own pulsar protocol messages. I want to quickly
> know the proxy version when debugging issues observed in the broker,
> and this is the only way I see for the broker to get this information.
>
> > I think the intention behind this is excellent, but directly modifying
> the protocol might be a bit too heavy-handed.
>
> Do you disagree with sharing this information with the broker? I think
> we generally view the protocol as lightweight.
>
> > Wouldn't it be better to directly expose the proxyVersion and
> clientVersion information via Prometheus metrics
>
> Which server is producing these metrics? My goal is for the broker to
> get this information so it can be part of the logs. The only way to
> transport these metrics is via the protocol.
>
> If there is a serious objection, I can add an option for the proxy to
> withhold this data from the broker. However, I think we should enable
> it by default.
>
> Thanks,
> Michael
>
> On Sat, Apr 8, 2023 at 2:19 AM Xiangying Meng 
> wrote:
> >
> > Dear Michael,
> >
> > I think the intention behind this is excellent, but directly modifying
> the protocol might be a bit too heavy-handed.
> >  This approach may lead to data redundancy.
> >  In large-scale clusters, every client connection would need to transmit
> the extra proxy version information,
> > which might increase network overhead.
> > Therefore, we should consider a more lightweight solution.
> >
> > If the primary purpose of the proxy version information is for
> diagnostics or auditing,
> > we could explore alternative methods for collecting this information
> without modifying the protocol level.
> > Wouldn't it be better to directly expose the proxyVersion and
> clientVersion information via Prometheus metrics,
> > as mentioned in the "Future work(Anything else)`" section of the
> proposal?
> >
> > Please let me know what you think about this suggestion.
> >
> > Best regards,
> > Xiangying
> >
> > On Thu, Apr 6, 2023 at 3:46 PM  wrote:
> >>
> >> Sorry for the late response.
> >>
> >> Why do we need to make the broker aware of the proxy when, by normal
> software design, we should avoid coupling the concepts in the proxy and the
> broker? The previous authentication was for historical reasons, but we
> should not continue to introduce this coupling.
> >>
> >> The proxy should be a separate component. Instead of continuing to
> couple the relevant proxy concepts into the broker, everyone should be a
> client to the broker.
> >>
> >> Best,
> >> Mattison
> >> On Feb 25, 2023, 01:12 +0800, Michael Marshall ,
> wrote:
> >> > Great suggestions, Enrico.
> >> >
> >> > > It would be interesting to reject connections on the Proxy if the
> >> > > client tries to set that field.
> >> >
> >> > I support making the proxy reject invalid input. We could also have
> >> > the client reject connections where the client connect command
> >> > includes `original_principal`, `original_auth_data`, or
> >> > `original_auth_method`, since those are also only meant to be sent by
> >> > the proxy.
> >> >
> >> > > On the broker there is no way to distinguish a proxy from a client,
> that'

Re: [Discuss] Add a phase to process pending PRs before code freeze

2023-04-10 Thread Xiangying Meng
Hi Yunze,

Thank you for bringing up this critical issue regarding pending PRs before
the code freeze. I appreciate your thoughtful insights and suggestions.

I'd like to share my thoughts on this. In previous releases, we didn't have
a formal code freeze announcement; instead, we had a discussion email. The
release manager would notify everyone when the cherry-picking process was
complete and ask if there were any other PRs that needed to be included.
After waiting for some time, the release manager would then create an RC
label, and any PRs cherry-picked after this label would not be included in
the release.

In light of your suggestions, I believe we can carefully review the PRs
cherry-picked after the discussion email and before sending a code freeze
email. This can help us avoid potential issues caused by someone
overlooking the discussion email and cherry-picking PRs that should not be
included or introducing other unstable factors. By adopting these changes,
we can address the concerns you've raised and improve the overall release
process.

Thank you again for raising this concern, and I'm confident that
implementing these improvements will be beneficial for our community.

Best regards,
Xiangying

On Mon, Apr 10, 2023 at 11:11 AM Yunze Xu 
wrote:

> To be more specific, we can send a discussion thread one week before
> the code freeze. Then,
> 1. The PRs opened after the time point won't be considered to be
> included in this release
> 2. If someone has some pending PRs that are aimed to be included in
> this release, it's better to comment in the discussion thread.
>
> For example, [1] aimed to drop the streaming dispatcher in 3.0.0.
> However, there was no PR yet. If the PR was opened today and merged
> very quickly, there might not be enough time to complete the review
> process. We cannot assume a PR won't receive any request change.
>
> [1] https://lists.apache.org/thread/ky2bkzlz93njx3ntnvkpd0l77qzzgcmv
>
> Thanks,
> Yunze
>
> On Mon, Apr 10, 2023 at 10:56 AM Yunze Xu  wrote:
> >
> > Hi community,
> >
> > I see the code freeze of Pulsar 3.0.0 is coming tomorrow. But I found
> > the release process still lacks a key step that pending PRs should be
> > taken carefully of instead of simply delaying them to the next
> > release.
> >
> > The following cases were very often seen:
> > 1. A PR has opened for some days and no one reviewed it.
> > 2. A reviewer left some comments and the author disappeared for some
> time.
> > 3. The author of a PR has addressed the requested changes but the
> > reviewer has disappeared for some time.
> >
> > As we know, Apache committers are volunteers and have their own jobs.
> > So the cases above are all acceptable. However, before a release, I
> > think the release managers must take the responsibility to handle the
> > cases above.
> >
> > IMO, we must address the cases about (at least 1 and 3 are
> > controllable) one week in advance. If the PR cannot be merged due to
> > the disappearance of the author, it's okay to delay it to the next
> > release. But the release managers should address the PRs actively.
> > They can help review the PRs. Or at least,
> > - for case 1: they can ping some committers that are familiar with the
> > scope of the PR to review it.
> > - for case 2: they can ping the author
> > - for case 3: they can ping the reviewer
> >
> > I see Zike noticed the time point of the code freeze last week:
> > https://lists.apache.org/thread/tczgh4y8lcy2y85652vkctbkcrs40nq4. But
> > there is no clear process for how to process the pending PRs before
> > the code freeze.
> >
> > Thanks,
> > Yunze
>


Re: [DISCUSS] PIP-250: Add proxyVersion to CommandConnect

2023-04-08 Thread Xiangying Meng
Dear Michael,

I think the intention behind this is excellent, but directly modifying the
protocol might be a bit too heavy-handed.
 This approach may lead to data redundancy.
 In large-scale clusters, every client connection would need to transmit
the extra proxy version information,
which might increase network overhead.
Therefore, we should consider a more lightweight solution.

If the primary purpose of the proxy version information is for diagnostics
or auditing,
we could explore alternative methods for collecting this information
without modifying the protocol level.
Wouldn't it be better to directly expose the proxyVersion and clientVersion
information via Prometheus metrics,
as mentioned in the "Future work(Anything else)`" section of the proposal?

Please let me know what you think about this suggestion.

Best regards,
Xiangying

On Thu, Apr 6, 2023 at 3:46 PM  wrote:

> Sorry for the late response.
>
> Why do we need to make the broker aware of the proxy when, by normal
> software design, we should avoid coupling the concepts in the proxy and the
> broker? The previous authentication was for historical reasons, but we
> should not continue to introduce this coupling.
>
> The proxy should be a separate component. Instead of continuing to couple
> the relevant proxy concepts into the broker, everyone should be a client to
> the broker.
>
> Best,
> Mattison
> On Feb 25, 2023, 01:12 +0800, Michael Marshall ,
> wrote:
> > Great suggestions, Enrico.
> >
> > > It would be interesting to reject connections on the Proxy if the
> > > client tries to set that field.
> >
> > I support making the proxy reject invalid input. We could also have
> > the client reject connections where the client connect command
> > includes `original_principal`, `original_auth_data`, or
> > `original_auth_method`, since those are also only meant to be sent by
> > the proxy.
> >
> > > On the broker there is no way to distinguish a proxy from a client,
> that's fair.
> >
> > We can reject these connections when authentication and authorization
> > are enabled. My draft PR includes such logic.
> >
> > Thanks,
> > Michael
> >
> > On Fri, Feb 24, 2023 at 7:29 AM Enrico Olivelli 
> wrote:
> > >
> > > Makes sense.
> > >
> > > It would be interesting to reject connections on the Proxy if the
> > > client tries to set that field.
> > > On the broker there is no way to distinguish a proxy from a client,
> that's fair.
> > > But on the proxy it is not expected to see a connection from another
> proxy.
> > >
> > > +1
> > >
> > > Enrico
> > >
> > > On Fri, Feb 24, 2023 at 10:00 Zike Yang 
> wrote:
> > > > >
> > > > > Hi, Michael
> > > > >
> > > > > Thanks for initiating this PIP.
> > > > >
> > > > > +1
> > > > >
> > > > > BR,
> > > > > Zike Yang
> > > > >
> > > > >
> > > > > Zike Yang
> > > > >
> > > > > On Fri, Feb 24, 2023 at 12:16 PM Michael Marshall <
> mmarsh...@apache.org> wrote:
> > > > > > >
> > > > > > > Hi Pulsar Community,
> > > > > > >
> > > > > > > In talking with Zike Yang on
> > > > > > > https://github.com/apache/pulsar/pull/19540, we identified
> that it
> > > > > > > would be helpful for the proxy to forward its own version when
> > > > > > > connecting to the broker. Here is a related PIP to improve the
> > > > > > > connection information available to operators.
> > > > > > >
> > > > > > > Issue: https://github.com/apache/pulsar/issues/19623
> > > > > > > Implementation: https://github.com/apache/pulsar/pull/19618
> > > > > > >
> > > > > > > I look forward to your feedback!
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Michael
> > > > > > >
> > > > > > > Text of PIP copied below:
> > > > > > >
> > > > > > > ### Motivation
> > > > > > >
> > > > > > > When clients connect through the proxy, it is valuable to know
> which
> > > > > > > version of the proxy connected to the broker. That information
> isn't
> > > > > > > currently logged or reported in any easily identifiable way.
> The only
> > > > > > > way to get information about the connection is to infer which
> proxy
> > > > > > > forwarded a connection based on matching up the IP address in
> the
> > > > > > > logs.
> > > > > > >
> > > > > > > An additional change proposed in the implementation is to log
> this new
> > > > > > > information along with the `clientVersion`,
> `clientProtocolVersion`,
> > > > > > > and relevant authentication role information. This information
> will
> > > > > > > improve debug-ability and could also serve as a form of audit
> logging.
> > > > > > >
> > > > > > > ### Goal
> > > > > > >
> > > > > > > Improve the value of the broker's logs and metrics about
> connections
> > > > > > > to simplify debugging and to make it easier for Pulsar
> operators to
> > > > > > > understand how clients are connecting to their clusters.
> > > > > > >
> > > > > > > ### API Changes
> > > > > > >
> > > > > > > Add the following:
> > > > > > >
> > > > > > > ```proto
> > > > > > > message CommandConnect {
> > > > > > > // Other fields 

Re: [DISCUSS] PIP-263: Just auto-create no-partitioned DLQ And Prevent auto-create a DLQ for a DLQ

2023-04-07 Thread Xiangying Meng
Hi Yubiao,

Appreciate your effort in initiating this PIP. I believe these changes will
address the existing issues and make DLQ and Retry Topic handling more
efficient and straightforward.

The goals you outlined are clear and, upon implementation, will improve the
overall functionality of Pulsar. The proposed API changes also seem
suitable for achieving the desired outcomes.

Looking forward to the progress on this PIP.

Best regards,
Xiangying
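The guard implied by the PIP's goals can be sketched as a plain suffix check on the `{topic}-{sub}-DLQ` / `{topic}-{sub}-RETRY` naming convention — illustrative only; the property-based `topicPurpose` flag the PIP proposes would be more robust than string matching:

```bash
# Sketch of the guard the PIP describes: refuse to auto-create a
# dead-letter/retry topic when the source topic is itself one.
# Relies on the -DLQ / -RETRY naming convention, which is exactly the
# fragility the proposed topicPurpose property would remove.
should_auto_create_dlq() {       # $1 = source topic name
  case "$1" in
    *-DLQ|*-RETRY) return 1 ;;   # already a DLQ/retry topic: refuse
    *) return 0 ;;
  esac
}

for t in "persistent://t/ns/orders" "persistent://t/ns/orders-sub-DLQ"; do
  if should_auto_create_dlq "$t"; then
    echo "would auto-create DLQ for $t"
  else
    echo "refusing: $t is already a dead-letter/retry topic"
  fi
done
```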

On Fri, Apr 7, 2023 at 1:56 AM Yubiao Feng
 wrote:

> Hi community
>
> I started a PIP about "Just auto-create no-partitioned DLQ And Prevent
> auto-create a DLQ for a DLQ".
>
> PIP link: https://github.com/apache/pulsar/issues/20033
>
> ### Motivation
>
>  Just auto-create no-partitioned DLQ/Retry Topic
> If enabled the config `allowAutoTopicCreation,` Pulsar will auto-create a
> topic when the client loads it; After setting config
> `allowAutoTopicCreationType=partitioned, defaultNumPartitions=2`, Pulsar
> will auto-create a partitioned topic(which have two partitions) when the
> client loads it.
>
> After the above, if using the feature [Retry Topic](
>
> https://pulsar.apache.org/docs/2.11.x/concepts-messaging/#retry-letter-topic
> )
> and [DLQ](
> https://pulsar.apache.org/docs/2.11.x/concepts-messaging/#dead-letter-topic
> )
> enable topic auto-creation, we will get a partitioned DLQ and a partitioned
> Retry Topic like this:
> - `{primary_topic_name}-{sub_name}-DLQ`
>   - `{primary_topic_name}-{sub_name}-DLQ-partition-0`
>   - `{primary_topic_name}-{sub_name}-DLQ-partition-1`
> - `{primary_topic_name}-{sub_name}-RETRY`
>   - `{primary_topic_name}-{sub_name}-RETRY-partition-0`
>   - `{primary_topic_name}-{sub_name}-RETRY-partition-1`
>
> 
>
> I suspect almost no users rely on a multi-partitioned DLQ or
> multi-partitioned Retry Topic, because a bug makes the above behavior
> incorrect, yet we have not received any issues about it. The bug makes the
> behavior look like this: when a partitioned DLQ is auto-created for the
> topic `tp1-partition-0`, Pulsar creates partitioned topic metadata with
> two partitions but only creates a topic named
> `{primary_topic_name}-{sub_name}-DLQ`; there is no topic named
> `{primary_topic_name}-{sub_name}-DLQ-partition-x`. Please see this
> [PR](https://github.com/apache/pulsar/pull/19841) for a detailed bug
> description.
>
> So I want to change the behavior so that only non-partitioned DLQ/Retry
> Topics are auto-created.
>
> 
>
>  Prevent auto-creating a DLQ for a DLQ
> Please see this [Discussion](
> https://lists.apache.org/thread/q1m23ckyy10wvtzy65v8bwqwnh7r0gc8) for the
> details.
>
> 
>
> ### Goal
>
> - Only auto-create non-partitioned DLQ/Retry Topics (in other words,
> prevent auto-creating a partitioned DLQ)
> - A DLQ/Retry Topic should not be created for a DLQ/Retry Topic
>   - rules:
>     - a DLQ will not be auto-created for a DLQ
>     - a Retry Topic will not be auto-created for a Retry Topic
>     - a DLQ will not be auto-created for a Retry Topic
>     - a Retry Topic will not be auto-created for a DLQ
>   - client changes: clients will not create a DLQ for a DLQ
>   - broker changes: reject any request that wants to auto-create a DLQ
> for a DLQ
>
> 
>
> ### API Changes
>
>  CommandSubscribe.java
> ```java
> /**
>   * This is an enumeration value with three options: "standard", "dead
> letter", "retry letter".
>   */
> private String topicPurpose;
> ```
>
>  Properties of Topic
> ```properties
> "purposeOfAutoCreatedTopic": value with three options: "standard", "dead
> letter", "retry letter"
> ```
>
> Why not use two properties, `isAutoCreated` and `topicPurpose`?
> Because there is a scenario like this: a topic is auto-created, used as a
> DLQ after a few days, and later no longer used as a DLQ; such a topic
> should be allowed to have its own DLQ/Retry Topic. We only mark the topics
> created for DLQ/Retry purposes.
>
>
> Thanks
> Yubiao Feng
>
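The broker-side rule proposed above (reject auto-creating a DLQ/Retry Topic for a topic that was itself auto-created as a DLQ/Retry Topic) could be sketched as below. This is a hypothetical illustration, not Pulsar's actual implementation: the class name, method signature, and the `purposeOfAutoCreatedTopic` lookup are assumptions based on the PIP text.

```java
import java.util.Map;

// Hypothetical sketch of the PIP-263 broker-side guard: when a subscribe
// request asks to auto-create a topic for DLQ/retry purposes, reject it
// if the source topic was itself auto-created as a DLQ or Retry Topic.
public class AutoCreatedTopicGuard {

    static final String PURPOSE_PROP = "purposeOfAutoCreatedTopic";

    /**
     * @param sourceTopicProps properties of the topic the subscription is on
     * @param requestedPurpose "standard", "dead letter" or "retry letter"
     * @return true if auto-creation should be allowed
     */
    public static boolean allowAutoCreate(Map<String, String> sourceTopicProps,
                                          String requestedPurpose) {
        if ("standard".equals(requestedPurpose)) {
            return true; // ordinary topics are unaffected by this rule
        }
        String sourcePurpose =
                sourceTopicProps.getOrDefault(PURPOSE_PROP, "standard");
        // A DLQ/Retry Topic must never be auto-created for a DLQ/Retry Topic.
        return "standard".equals(sourcePurpose);
    }
}
```

A DLQ may still be auto-created for an ordinary topic; only topics marked with a DLQ/retry purpose are fenced off.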


Re: [DISCUSS] PIP-257: Add Open ID Connect Support to Server Components

2023-04-07 Thread Xiangying Meng
Thanks for initiating the proposal, Michael.
I strongly support this addition, which will greatly enhance Pulsar's
security and reliability.
> I am new to this security component, but how do we support token revoke?
Regarding Heesung Sohn's question about token revocation,
I believe that we can initially rely on short-lived tokens and periodic
refreshes to ensure security.
If the community has a clear demand for token revocation in the future, we
can consider implementing this feature at that time.

I'm really looking forward to the successful implementation of this
proposal.
Best regards,
Xiangying
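The point above about short-lived tokens can be made concrete: with OIDC, revocation is commonly approximated by issuing short-lived access tokens and checking the JWT `exp` claim on every authentication, so a compromised token is only usable until it expires. The sketch below is illustrative only (it does not verify the signature, and the class and method names are assumptions, not Pulsar APIs).

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch: extract the "exp" claim from a JWT payload and
// compare it with the current time. Real providers must also verify the
// token signature against the issuer's keys before trusting any claim.
public class TokenExpiryCheck {

    private static final Pattern EXP = Pattern.compile("\"exp\"\\s*:\\s*(\\d+)");

    /** Returns true if the (unverified) JWT's exp claim is in the past. */
    public static boolean isExpired(String jwt, long nowEpochSeconds) {
        String payloadB64 = jwt.split("\\.")[1];
        String payload = new String(
                Base64.getUrlDecoder().decode(payloadB64), StandardCharsets.UTF_8);
        Matcher m = EXP.matcher(payload);
        if (!m.find()) {
            return true; // treat a missing exp claim as expired
        }
        return Long.parseLong(m.group(1)) < nowEpochSeconds;
    }
}
```

Because the `exp` window is short, a periodic refresh against the identity provider effectively bounds how long a leaked or "revoked" token remains usable.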

On Fri, Apr 7, 2023 at 1:57 AM Heesung Sohn
 wrote:

> Hi,
>
> I am new to this security component, but how do we support token revoke?
>
> Thanks,
> Heesung
>
> On Mon, Mar 27, 2023 at 12:31 PM Michael Marshall 
> wrote:
>
> > Here is the K8s integration: https://github.com/apache/pulsar/pull/19888
> .
> >
> > That PR makes it possible to configure a function running in
> > kubernetes to use the Service Account Token provided by kubernetes.
> >
> > Future work suggested by Eron Wright is to add support for the
> > function worker to mount tokens for any requested audience. I think
> > that feature will be very valuable, but in hopes of completing this
> > PIP by 3.0.0, I would like to defer that feature.
> >
> > Thanks,
> > Michael
> >
> > On Mon, Mar 20, 2023 at 5:59 PM Michael Marshall 
> > wrote:
> > >
> > > Update: the PR [0] to add the OIDC authentication provider module is
> > > ready for review.
> > >
> > > I plan to start looking at the function worker integration with k8s
> > tomorrow.
> > >
> > > I hope to start the vote later this week.
> > >
> > > Thanks,
> > > Michael
> > >
> > > [0] https://github.com/apache/pulsar/pull/19849
> > >
> > > On Mon, Mar 20, 2023 at 5:56 PM Michael Marshall  >
> > wrote:
> > > >
> > > > >> 2. Implement `KubernetesFunctionAuthProvider` with
> > > > >`KubernetesSecretsAuthProvider`.
> > > >
> > > > >It looks like we add an authentication provider for the Kubernetes
> > > > >environment. Is the OIDC authentication provider?
> > > >
> > > > The current KubernetesSecretsTokenAuthProvider [0] mounts the auth
> > > > data used to create a function. Because OIDC often has short lived
> > > > tokens, it won't work to copy the token from the call used to create
> > > > the function. Instead, my initial proposal was to let a user specify
> a
> > > > pre-existing k8s secret that will have the correct authentication
> > > > data. Because anything can be in the secret, there isn't a reason to
> > > > require this secret to have the client id and client secret.
> > > >
> > > > Eron suggested in the PIP issue [1] that we make it possible to
> easily
> > > > integrate with the Kubernetes Service Account. I'll be looking into
> > > > that integration this week.
> > > >
> > > > Thanks,
> > > > Michael
> > > >
> > > > [0]
> >
> https://github.com/apache/pulsar/blob/82237d3684fe506bcb6426b3b23f413422e6e4fb/pulsar-functions/runtime/src/main/java/org/apache/pulsar/functions/auth/KubernetesSecretsTokenAuthProvider.java
> > > > [1]
> > https://github.com/apache/pulsar/issues/19771#issuecomment-1463029346
> > > >
> > > >
> > > > On Mon, Mar 13, 2023 at 3:03 AM Zixuan Liu 
> wrote:
> > > > >
> > > > > Hi Michael,
> > > > >
> > > > > +1, Thank you for your PIP! That's important for modern
> > authentication!
> > > > >
> > > > > I have a question:
> > > > >
> > > > > > 2. Implement `KubernetesFunctionAuthProvider` with
> > > > > `KubernetesSecretsAuthProvider`.
> > > > >
> > > > > It looks like we add an authentication provider for the Kubernetes
> > > > > environment. Is the OIDC authentication provider?
> > > > >
> > > > >
> > > > > Thanks,
> > > > > Zixuan
> > > > >
> > > > >
> > > > >
> > > > > Lari Hotari  于2023年3月10日周五 14:56写道:
> > > > >
> > > > > > Thanks for starting this PIP, Michael.
> > > > > > This is really important in improving Pulsar's security and
> > reducing
> > > > > > certain attack surfaces and eliminating certain attack vectors.
> > I'm looking
> > > > > > forward to having Open ID connect (OIDC) supported in Pulsar
> server
> > > > > > components so that Pulsar could be operated without the use of
> > static JWT
> > > > > > tokens such as the superuser token.
> > > > > >
> > > > > > -Lari
> > > > > >
> > > > > > On 2023/03/09 22:34:49 Michael Marshall wrote:
> > > > > > > Hi Pulsar Community,
> > > > > > >
> > > > > > > I would like to contribute Open ID Connect support to the
> server
> > > > > > > components in Pulsar. Here is a link to the PIP:
> > > > > > > https://github.com/apache/pulsar/issues/19771. I plan to start
> > working
> > > > > > > on the implementation next week. I look forward to your
> feedback.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Michael
> > > > > > >
> > > > > > > ### Motivation
> > > > > > >
> > > > > > > Apache Pulsar does not yet support a server side
> > > > > > > `AuthenticationProvider` that implements the Open ID Connect
> > 

Re: [DISCUSS] Dropping the StreamingDispatcher

2023-04-07 Thread Xiangying Meng
Hi all,

+1 for removing the StreamingDispatcher in Pulsar 3.0. Balancing
maintainability, scalability, and usability is critical for an open-source
project. In this case, the StreamingDispatcher seems to be neither widely
adopted nor actively maintained, and its code quality and unstable tests
have been causing an extra maintenance burden.


Best,
Xiangying

On Fri, Apr 7, 2023 at 6:30 PM Cong Zhao  wrote:

> +1, I support removing it if the code isn't being used or maintained.
>
> Thanks,
> Cong Zhao
>
> On 2023/04/04 06:47:24 Enrico Olivelli wrote:
> > Hello,
> > It has been a long time that we have in the Pulsar code a new
> > experimental Dispatcher implementation named StreamingDispatcher.
> >
> > https://github.com/apache/pulsar/pull/9056
> >
> > There are many flaky tests about that feature and I believe that it
> > has never been used in Production by anyone, because it happened a few
> > times that we did some changes in the regular Dispatcher and
> > introduced bugs on the StreamingDispacther (usually manifested as
> > flaky tests)
> >
> >
> > I propose to drop the StreamingDispatcher code for Pulsar 3.0.
> > I don't think we need a PIP for this, it is an experimental code that
> > was never delivered as a production ready feature.
> >
> > If anyone is aware of users please chime in.
> >
> > If anyone wants to sponsor that feature and objects in removing this
> > dead code (that we still have to maintain) please help us in
> > completing the feature.
> >
> > On paper it is a very appealing feature, and I am disappointed in
> dropping it.
> > On the other hand, this is dead code that we have to maintain with zero
> benefit
> >
> > Thoughts ?
> >
> > Enrico
> >
>


Transaction Key Mechanism for Fencing Transactions

2023-04-06 Thread Xiangying Meng
Dear Pulsar Community,

I am excited to invite you all to participate in the discussion of the
latest Pulsar proposal [1] for a transaction key mechanism. This proposal
aims to provide users with a way to ensure that only one active transaction
is associated with a given transaction key, while also aborting previous
transactions and preventing them from performing further operations. This
will help create a clean working environment for newly started transactions
and prevent issues such as duplicate message processing.

The proposal includes a detailed design that utilizes the concept of
transaction keys and epochs, and introduces a new system topic for storing
epoch information. Additionally, a new configuration option will be added
to the PulsarClient builder to set the transaction key, and a new exception
called ExpiredTransactionException will be introduced to handle cases where
an expired transaction is used for operations.

We believe that this proposal will greatly improve the user experience for
Pulsar transactions, and we encourage everyone to share their thoughts and
feedback. Your input is valuable in shaping the future of Pulsar, and we
look forward to hearing from you.

Please feel free to join the discussion by commenting on the proposal
document, or by starting a thread in the Pulsar mailing list. Let's work
together to make Pulsar better than ever before!

Best regards, Xiangying [1]
https://docs.google.com/document/d/17V1HaHxtd1DpGkfeJb_uKPBm_u97RAWsojeKmCw6J8U/edit#
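The transaction-key-and-epoch idea described above can be sketched as follows. This is a minimal illustration of the fencing concept only, not the proposed implementation: the class, method names, and in-memory map are assumptions, and the real design stores epoch information in a system topic and surfaces stale epochs as `ExpiredTransactionException`.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of epoch-based fencing: each transaction key maps
// to a monotonically increasing epoch; starting a new transaction bumps
// the epoch, and any operation carrying a stale epoch is rejected.
public class TransactionKeyFencing {

    private final Map<String, Long> epochs = new ConcurrentHashMap<>();

    /** Starts a new transaction for the key and returns its epoch,
     *  implicitly fencing any transaction with an older epoch. */
    public long beginTransaction(String txnKey) {
        return epochs.merge(txnKey, 1L, Long::sum);
    }

    /** Returns true if an operation from (txnKey, epoch) is still allowed;
     *  a false result corresponds to the expired-transaction case. */
    public boolean isCurrent(String txnKey, long epoch) {
        return epochs.getOrDefault(txnKey, 0L) == epoch;
    }
}
```

This is what guarantees at most one active transaction per key: the newest epoch wins, and all earlier holders of the key are rejected on their next operation.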


Re: [VOTE] Pulsar Release 2.10.4 Candidate 3

2023-04-03 Thread Xiangying Meng
Dear Yubiao

Thanks for the reminder. In light of the mentioned PRs, I suggest we close
this vote for now.

Thanks
Xiangying

On Mon, Apr 3, 2023 at 12:34 PM Yubiao Feng
 wrote:

> -1 (non-binding)
>
> - PR 19989[1] fixes an issue where non-superuser roles cannot access
> the tenant's API even if authorization is disabled. We should wait for
> this PR to merge.
> - PR 19971[2] fixes the forward-incompatible config
> `saslJaasServerRoleTokenSignerSecretPath`. We should wait for this PR to
> merge, too.
> Thanks
> Yubiao Feng
>
> [1] https://github.com/apache/pulsar/pull/19989
> [2] https://github.com/apache/pulsar/pull/19971
>
> On Wed, Mar 22, 2023 at 3:08 PM Xiangying Meng 
> wrote:
>
> > This is the third release candidate for Apache Pulsar, version 2.10.4.
> >
> > This release contains 111 commits by 35 contributors.
> > https://github.com/apache/pulsar/compare/v2.10.3...v2.10.4-candidate-3
> >
> > *** Please download, test and vote on this release. This vote will stay
> > open
> > for at least 72 hours ***
> >
> > Note that we are voting upon the source (tag), binaries are provided for
> > convenience.
> >
> > Source and binary files:
> > https://dist.apache.org/repos/dist/dev/pulsar/pulsar-2.10.4-candidate-3/
> >
> > SHA-512 checksums:
> >
> >
> 59f0326643cca9ef16b45b4b522ab5a1c1d8dc32ac19897704f8231f9bd4cef02af722848646332db461a807daacc9cb87993b81dcf1429b1f23e3872a32
> >  apache-pulsar-2.10.4-bin.tar.gz
> >
> >
> 5b2adbf0d371b79b1dbe141f152848049d19924151fa8827057038d81833accd70cf67429cb003aedb8d44ee705ed0609d49757e74fed377dce77b09d49062e3
> >  apache-pulsar-2.10.4-src.tar.gz
> >
> > Maven staging repo:
> > https://repository.apache.org/content/repositories/orgapachepulsar-1221/
> >
> > The tag to be voted upon:
> > v2.10.4-candidate-3
> > (e4898ac8eb37f698f29aa21e40a3abdda5489d45)
> > https://github.com/apache/pulsar/releases/tag/v2.10.4-candidate-3
> >
> > Pulsar's KEYS file containing PGP keys you use to sign the release:
> > https://downloads.apache.org/pulsar/KEYS
> >
> > Docker images:
> >
> > 
> >
> >
> https://hub.docker.com/layers/xiangyingmeng/pulsar/2.10.4/images/sha256-05bfb482c5b5aa66ac818651d8997745ac7d536ca0cb56bff8199a6de459ac45?context=repo
> >
> > 
> >
> >
> https://hub.docker.com/layers/xiangyingmeng/pulsar-all/2.10.4/images/sha256-d4f3de64a8ec4a9039ac500bbf4a0efae9a9f1d4e0a58e11cab020276dc5e6b3?context=repo
> >
> > Please download the source package, and follow the README to build
> > and run the Pulsar standalone service.
> >
>


Re: [ANNOUNCE] Qiang Zhao as new PMC member in Apache Pulsar

2023-03-28 Thread Xiangying Meng
Congrats! Qiang.

Sincerely,
Xiangying

On Wed, Mar 29, 2023 at 11:51 AM Yubiao Feng
 wrote:

> Congrats! Qiang.
>
> Thanks
> Yubiao
>
> On Wed, Mar 29, 2023 at 11:22 AM guo jiwei  wrote:
>
> > Dear Community,
> >
> > We are thrilled to announce that Qiang Zhao
> > (https://github.com/mattisonchao) has been invited and has accepted the
> > role of member of the Apache Pulsar Project Management Committee (PMC).
> >
> > Qiang has been a vital asset to our community, consistently
> > demonstrating his dedication and active participation through
> > significant contributions. In addition to his technical contributions,
> > Qiang also plays an important role in reviewing pull requests and
> > ensuring the overall quality of our project. We look forward to his
> > continued contributions.
> >
> > On behalf of the Pulsar PMC, we extend a warm welcome and
> > congratulations to Qiang Zhao.
> >
> > Best regards
> > Jiwei
> >
>

