GitHub user tisonkun added a comment to the discussion: Multi-Topic
subscription across tenant and namespace using regex in Java client throws
IllegalArgumentException
Moved to discussion forum. This seems like an interesting feature request, but it
requires volunteers to work on it and more design
GitHub user jiazhai added a comment to the discussion: Multi-Topic subscription
across tenant and namespace using regex in Java client throws
IllegalArgumentException
@rnowacoski currently regex subscription only supports topics within the same
namespace. Changing this issue into a feature request.
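The scope of that limitation can be shown with a standalone sketch (topic names here are made up, and this is an analogy for how pattern subscription is resolved, not the client's actual code): the broker lists the topics of a single namespace and then applies the pattern, so a wildcard in the tenant or namespace portion has nothing to match against.

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class RegexSubscriptionSketch {
    // Mimics how a regex consumer selects topics: the candidate set comes
    // from ONE namespace, then the pattern is applied to each topic name.
    static List<String> matchTopics(List<String> topicsInNamespace, String regex) {
        Pattern p = Pattern.compile(regex);
        return topicsInNamespace.stream()
                .filter(t -> p.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical topics, all in the same namespace public/default.
        List<String> topics = List.of(
                "persistent://public/default/orders-eu",
                "persistent://public/default/orders-us",
                "persistent://public/default/audit");

        // Works because the wildcard is confined to the local topic name.
        System.out.println(matchTopics(topics, "persistent://public/default/orders-.*"));
        // prints [persistent://public/default/orders-eu, persistent://public/default/orders-us]
    }
}
```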
GitHub user tisonkun added a comment to the discussion: Multi-Topic
subscription across tenant and namespace using regex in Java client throws
IllegalArgumentException
Closed as stale. Please create a new issue if it's still relevant to the
maintained versions.
GitHub link:
GitHub user rnowacoski created a discussion: Multi-Topic subscription across
tenant and namespace using regex in Java client throws IllegalArgumentException
**Describe the bug**
When using a regex to create a consumer and a regex pattern is present in the
tenant or namespace section of the
Hi,
I raised a PR to the master branch here,
https://github.com/apache/pulsar/pull/18807.
PTAL.
Thank you,
Heesung
On Wed, Nov 30, 2022 at 4:06 AM Enrico Olivelli wrote:
> Heesung,
> I also agree that we must preserve compatibility, if we want to pick
> this change to released versions.
>
GitHub user yebai1105 added a comment to the discussion: negativeAcknowledge()
parameter does not take effect
Unacknowledged messages will be redelivered by default. If an acked message
cannot be redelivered again, what is the role of "negativeAcknowledge"? In what
scenarios is the
GitHub user codelipenghui added a comment to the discussion:
negativeAcknowledge() parameter does not take effect
@yebai1105 You can ack the message at
```
msg = (Message) obj;
System.out.println("Message received: " + msg.getValue());
consumer.acknowledge(msg);
```
An acked message can
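The redelivery behaviour under discussion can be sketched without a broker: a negatively acknowledged message becomes visible again only after the nack delay elapses. This models the client's nack timer conceptually (the default delay is configurable via `negativeAckRedeliveryDelay`); it is not Pulsar's actual implementation.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class NackSketch {
    // A message that becomes deliverable again after its nack delay expires.
    static class NackedMessage implements Delayed {
        final String payload;
        final long redeliverAtNanos;

        NackedMessage(String payload, long delayMillis) {
            this.payload = payload;
            this.redeliverAtNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(redeliverAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<NackedMessage> redeliveryQueue = new DelayQueue<>();
        redeliveryQueue.put(new NackedMessage("msg-1", 100)); // nack with a 100 ms delay

        // Immediately after the nack, the message is NOT redelivered yet.
        System.out.println(redeliveryQueue.poll() == null); // true

        // After the delay elapses, it becomes available for redelivery.
        NackedMessage m = redeliveryQueue.poll(1, TimeUnit.SECONDS);
        System.out.println(m.payload); // msg-1
    }
}
```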
GitHub user github-actions[bot] added a comment to the discussion: we have a
use case which requires sending messages across multiple data centers. A
through B to C.
The issue had no activity for 30 days, mark with Stale label.
GitHub link:
GitHub user github-actions[bot] added a comment to the discussion:
negativeAcknowledge() parameter does not take effect
The issue had no activity for 30 days, mark with Stale label.
GitHub link:
https://github.com/apache/pulsar/discussions/18857#discussioncomment-4352726
This is an
GitHub user drriguz added a comment to the discussion: negativeAcknowledge()
parameter does not take effect
I'm also confused about this, I found that messages are not redelivered in my
application by using the following code:
```java
try {
    MetaDataChangedEvent message =
```
GitHub user codelipenghui added a comment to the discussion: we have a use case
which requires sending messages across multiple data centers. A through B to C.
@fengxiaokai In the above case you provided, you should also add cluster-3 for
the namespace in cluster-1
GitHub link:
GitHub user github-actions[bot] added a comment to the discussion: we have a
use case which requires sending messages across multiple data centers. A
through B to C.
The issue had no activity for 30 days, mark with Stale label.
GitHub link:
GitHub user github-actions[bot] added a comment to the discussion: What is the
advantage of InputStream for MultipartFile?
The issue had no activity for 30 days, mark with Stale label.
GitHub link:
https://github.com/apache/pulsar/discussions/18855#discussioncomment-4352711
GitHub user wangjialing218 added a comment to the discussion: we have a use
case which requires sending messages across multiple data centers. A through B
to C.
The messages replicated from cluster1 to cluster2 are marked as "replicated
message"; these messages will not be replicated again to
GitHub user github-actions[bot] added a comment to the discussion: What is the
advantage of InputStream for MultipartFile?
The issue had no activity for 30 days, mark with Stale label.
GitHub link:
https://github.com/apache/pulsar/discussions/18855#discussioncomment-4352710
GitHub user github-actions[bot] added a comment to the discussion: Failed
messages 1 with no more exception message
The issue had no activity for 30 days, mark with Stale label.
GitHub link:
https://github.com/apache/pulsar/discussions/18854#discussioncomment-4352701
GitHub user codelipenghui added a comment to the discussion: Failed messages 1
with no more exception message
@Aaronzk There is an internal pending queue holding 1000 messages by default
for a producer. If the server side responds slowly but the application sends
messages very fast, that
GitHub user Aaronzk added a comment to the discussion: Failed messages 1 with
no more exception message
@codelipenghui
why not set `blockIfQueueFull(true)` as default?
GitHub link:
https://github.com/apache/pulsar/discussions/18854#discussioncomment-4352700
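The semantics behind that flag can be illustrated with a plain bounded queue from `java.util.concurrent`; this is an analogy for the producer's pending queue, not Pulsar's internal code.

```java
import java.util.concurrent.ArrayBlockingQueue;

public class PendingQueueSketch {
    public static void main(String[] args) {
        // A tiny stand-in for the producer's pending-message queue
        // (the real default size is 1000, per the comment above).
        ArrayBlockingQueue<String> pending = new ArrayBlockingQueue<>(2);

        // With blockIfQueueFull(false), a full queue fails fast,
        // roughly analogous to sendAsync failing with ProducerQueueIsFullError,
        // which surfaces backpressure to the application immediately.
        System.out.println(pending.offer("m1")); // true
        System.out.println(pending.offer("m2")); // true
        System.out.println(pending.offer("m3")); // false -> would surface an error

        // blockIfQueueFull(true) would instead behave like put(),
        // silently blocking the caller until the queue drains.
    }
}
```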
GitHub user github-actions[bot] added a comment to the discussion: How to
dynamically produce and consume topic data
The issue had no activity for 30 days, mark with Stale label.
GitHub link:
https://github.com/apache/pulsar/discussions/18853#discussioncomment-4352696
GitHub user lhotari added a comment to the discussion: How to dynamically
produce and consume topic data
> When producing data, I want to dynamically send it to different topics. The
> topics may be in different namespaces and different tenants.
In the Pulsar client, you must create a separate
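A common pattern for the dynamic-producer case is to cache one producer per topic. Below is a standalone sketch of just the caching logic, with a String standing in for a real `Producer` (in a real client the factory lambda would call something like `client.newProducer().topic(topic).create()`, which is not shown here).

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ProducerCacheSketch {
    private final ConcurrentHashMap<String, String> producers = new ConcurrentHashMap<>();
    final AtomicInteger created = new AtomicInteger();

    // Returns the cached "producer" for a topic, building it on first use.
    String producerFor(String topic) {
        return producers.computeIfAbsent(topic, t -> {
            created.incrementAndGet(); // record that a new producer was built
            return "producer-for-" + t;
        });
    }

    public static void main(String[] args) {
        ProducerCacheSketch cache = new ProducerCacheSketch();
        cache.producerFor("persistent://tenant-a/ns1/topic-x");
        cache.producerFor("persistent://tenant-b/ns2/topic-y");
        cache.producerFor("persistent://tenant-a/ns1/topic-x"); // reused, not re-created
        System.out.println(cache.created.get()); // 2
    }
}
```

The cache works across tenants and namespaces because each full topic name simply becomes a distinct key.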
GitHub user beyondyinjl2 added a comment to the discussion: How to dynamically
produce and consume topic data
When producing data, I want to dynamically send it to different topics. The
topics may be in different namespaces and different tenants.
GitHub link:
GitHub user liangyuanpeng added a comment to the discussion: How to dynamically
produce and consume topic data
You can use `client.newConsumer().topicsPattern()` for multiple topics.
GitHub link:
https://github.com/apache/pulsar/discussions/18853#discussioncomment-4352694
GitHub user beyondyinjl2 added a comment to the discussion: How to dynamically
produce and consume topic data
Like MQTT, a producer can send data on any topic, and a consumer can subscribe
to topics dynamically
GitHub link:
GitHub user beyondyinjl2 added a comment to the discussion: How to dynamically
produce and consume topic data
For example, first subscribe to topic:persistent://public/default/my-topic:
```java
Consumer consumer = client.newConsumer()
        .topic("persistent://public/default/my-topic")
```
GitHub user beyondyinjl2 created a discussion: How to dynamically produce and
consume topic data
After creating a Producer, I want to send data to multiple different topics;
or after creating a Consumer, I want to consume data from multiple topics
GitHub link:
+1 (non-binding)
Followed validation process @
https://github.com/apache/pulsar-client-reactive/wiki/Release-process#release-validation
- Verified checksum, signature, sources match git tag
- Ran simple pulsar-client-reactive app using staged maven artifacts
LGTM,
Chris
On 2022/12/08 13:30:58
GitHub user eolivelli added a comment to the discussion: ManagedLedger should
streamline the read requests
@michaeljmarshall you can start a discussion with a proposal.
You can start it on dev@pulsar in order to involve more people in the discussion.
GitHub link:
GitHub user michaeljmarshall added a comment to the discussion: ManagedLedger
should streamline the read requests
@eolivelli - thanks, I'll do that.
GitHub link:
https://github.com/apache/pulsar/discussions/18852#discussioncomment-4352616
GitHub user eolivelli added a comment to the discussion: ManagedLedger should
streamline the read requests
@nicoloboschi you could be interested in working on a fix for this issue if
@MarvinCai doesn't have time to work on this topic
GitHub link:
GitHub user michaeljmarshall added a comment to the discussion: ManagedLedger
should streamline the read requests
@MarvinCai, @sijie, @eolivelli - It looks like the feature to improve read
throughput for blob storage is still outstanding. I'm very interested in
helping to contribute this
GitHub user sijie added a comment to the discussion: ManagedLedger should
streamline the read requests
@MarvinCai Are you working on this issue already?
GitHub link:
https://github.com/apache/pulsar/discussions/18852#discussioncomment-4352609
GitHub user vicaya added a comment to the discussion: ManagedLedger should
streamline the read requests
Is there any progress on this issue? Whenever people ask me why we don't use
tiered storage, I have to point them to this issue for why it's too slow for
us (readers cannot read fast
GitHub user sijie added a comment to the discussion: ManagedLedger should
streamline the read requests
@vicaya thank you for your feedback. @MarvinCai are you willing to give it a
try?
GitHub link:
https://github.com/apache/pulsar/discussions/18852#discussioncomment-4352607
GitHub user MarvinCai added a comment to the discussion: ManagedLedger should
streamline the read requests
@sijie I was new to the BK code base and was reading some LedgerHandle and
DL code to figure out what should be changed, have a simple
GitHub user sijie added a comment to the discussion: ManagedLedger should
streamline the read requests
@MarvinCai yes. I think it is worth pushing this logic to BK to provide a
`StreamingReadHandle` over the BK `ReadHandle`, so that the read-ahead logic
can be reused for both bookkeeper read
GitHub user MarvinCai added a comment to the discussion: ManagedLedger should
streamline the read requests
@sijie sorry, just saw the replies. How about I start with a doc with a problem
statement and try to propose a solution? If everything looks good, then we can
proceed from there.
GitHub link:
GitHub user eolivelli added a comment to the discussion: ManagedLedger should
streamline the read requests
@sijie
I agree that pushing that mechanism to the low level API will be useful
GitHub link:
https://github.com/apache/pulsar/discussions/18852#discussioncomment-4352605
GitHub user vicaya added a comment to the discussion: ManagedLedger should
streamline the read requests
This is also a blocking issue for practical use of tiered storage as historical
retention, since replay from tiered storage (at least S3) is too slow.
It'd be great if the number of read-ahead
GitHub user sijie added a comment to the discussion: ManagedLedger should
streamline the read requests
/cc @jiazhai @eolivelli in this thread. so they can provide some more thoughts
around this if it is worth adding this readahead logic to BK read handle.
GitHub link:
GitHub user sijie added a comment to the discussion: ManagedLedger should
streamline the read requests
I have seen in a production deployment that the consumer can never catch up if
the bookie's avg read latency is 10+ms due to the disk and other workloads
running on the same machine. The problem can
GitHub user MarvinCai added a comment to the discussion: ManagedLedger should
streamline the read requests
Should the logic be similar to the readAhead logic here in
GitHub user sijie created a discussion: ManagedLedger should streamline the
read requests
*Problems*
Currently the managed ledger reads entries in very large batch requests, 100
entries by default. This is an inefficient approach. We should streamline the
read requests like what dlog is
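The contrast between one big batch read and dlog-style streamlined reads can be sketched in a small simulation (a simplification to show the shape of the idea, not the ManagedLedger or dlog code): instead of issuing a single 100-entry request, the reader issues small fixed-size requests back to back, so entries start flowing before the whole range is fetched.

```java
import java.util.ArrayList;
import java.util.List;

public class StreamingReadSketch {
    // Simulated storage read: one request returns entries in [from, to).
    static List<Integer> readRange(List<Integer> ledger, int from, int to) {
        return new ArrayList<>(ledger.subList(from, Math.min(to, ledger.size())));
    }

    // Streamlined reader: many small requests instead of one huge batch.
    static List<Integer> streamRead(List<Integer> ledger, int chunkSize) {
        List<Integer> out = new ArrayList<>();
        for (int pos = 0; pos < ledger.size(); pos += chunkSize) {
            // In a real implementation the next request would be issued while
            // the previous chunk is still being consumed (read-ahead / pipelining).
            out.addAll(readRange(ledger, pos, pos + chunkSize));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> ledger = List.of(0, 1, 2, 3, 4, 5, 6, 7, 8, 9);
        // Same data, same order, but delivered in chunks of 3 rather than
        // a single batch covering the whole range.
        System.out.println(streamRead(ledger, 3)); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```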
GitHub user addisonj added a comment to the discussion: Loosely couple the
topic and schemaId association (allow many to many association)
@shiv4289 That document is not readable, did you not yet make it public?
GitHub link:
GitHub user shiv4289 created a discussion: Loosely couple the topic and
schemaId association (allow many to many association)
Currently, schema in Pulsar is tightly coupled with a topic (one-to-one
association). As part of this enhancement, we want to make it a many-to-many
association. This
GitHub user shiv4289 added a comment to the discussion: Loosely couple the
topic and schemaId association (allow many to many association)
> @shiv4289 That document is not readable, did you not yet make it public?
Done now. Thanks!
GitHub link:
GitHub user neapolis123 added a comment to the discussion: Questions about
Registering Schema
> ClientCnx#sendGetOrCreateSchema will send the request to broker.
> ClientCnx#handleGetOrCreateSchemaResponse will handle the response.
So what does ProducerImpl#tryRegisterSchema do?
GitHub
GitHub user neapolis123 added a comment to the discussion: Questions about
Registering Schema
I am writing my thesis about Pulsar, and a part of it is to model producers and
consumers as state machines. When reading the code I couldn't figure out what
state the producer assumes when going
GitHub user sijie added a comment to the discussion: Questions about
Registering Schema
If the schema is incompatible, creating a producer should fail. Did you see any
problems?
GitHub link:
https://github.com/apache/pulsar/discussions/18850#discussioncomment-4352509
GitHub user Technoboy- added a comment to the discussion: Questions about
Registering Schema
ClientCnx#sendGetOrCreateSchema will send the request to broker.
ClientCnx#handleGetOrCreateSchemaResponse will handle the response.
GitHub link:
GitHub user neapolis123 created a discussion: Questions about Registering Schema
Hi, I just wanted to ask what internal state the producer takes after failing
to register the schema because it's incompatible (line 592 in ProducerImpl).
From what I have understood, the state does not
GitHub user frankjkelly added a comment to the discussion: JSONSchema should
works with generic (parameterized type) POJO/DTO.
Thank you - will do!
GitHub link:
https://github.com/apache/pulsar/discussions/18849#discussioncomment-4352462
GitHub user sijie added a comment to the discussion: JSONSchema should works
with generic (parameterized type) POJO/DTO.
@frankjkelly Yeah, currently it is blocked by apache/avro#356. I would
encourage people to leave a comment in the avro issue and pull request to see
if AVRO community can
GitHub user sijie added a comment to the discussion: JSONSchema should works
with generic (parameterized type) POJO/DTO.
@frankjkelly I see. So it is related to extracting the schema information using
the AVRO library. We will take a look and circle back.
GitHub link:
GitHub user frankjkelly added a comment to the discussion: JSONSchema should
works with generic (parameterized type) POJO/DTO.
Yes - as per @jerrypeng comment above
"Parameterized/generics in Java are currently not supported for schema
generation in Avro. Though there is discussion and a PR to
GitHub user congbobo184 added a comment to the discussion: JSONSchema should
works with generic (parameterized type) POJO/DTO.
@frankjkelly I think because you use generic type T, the user can't provide the
specific type, so we can't generate an Avro schema according to the JSON schema
GitHub user jiazhai added a comment to the discussion: JSONSchema should works
with generic (parameterized type) POJO/DTO.
@gaoran10 @congbobo184 Which of you have time to take a look at this issue?
GitHub link:
https://github.com/apache/pulsar/discussions/18849#discussioncomment-4352455
GitHub user frankjkelly added a comment to the discussion: JSONSchema should
works with generic (parameterized type) POJO/DTO.
We too have been hit by this issue trying to send generics
```
public class InteractionEvent
```
GitHub link:
GitHub user frankjkelly added a comment to the discussion: JSONSchema should
works with generic (parameterized type) POJO/DTO.
@sijie I am using 2.6.1 Client with 2.6.1 Server
When I use the code as follows
```
private Producer getSignalProducer(String tenant) throws
GitHub user sijie added a comment to the discussion: JSONSchema should works
with generic (parameterized type) POJO/DTO.
@frankjkelly Which pulsar client are you using?
GitHub link:
https://github.com/apache/pulsar/discussions/18849#discussioncomment-4352456
GitHub user shan-96 added a comment to the discussion: JSONSchema should works
with generic (parameterized type) POJO/DTO.
@vamsi360
We are trying to write to Avro files using version 1.8.3-ppe-9.10.
We are writing a Java object from memory to a specified file using
`DataFileWriter`.
While
GitHub user vamsi360 added a comment to the discussion: JSONSchema should works
with generic (parameterized type) POJO/DTO.
Hi @rohts-patil @shan-96
Could you tell me how to replicate the issue you are facing with a test
project? I will debug this and replicate/fix
GitHub link:
GitHub user shan-96 added a comment to the discussion: JSONSchema should works
with generic (parameterized type) POJO/DTO.
> @vamsi360 ,
> I cloned your fork and built it to version 1.8.3-ppe-9.10.
> I still get the error.
> Caused by: org.apache.avro.AvroTypeException: Unknown type: T
>
GitHub user vamsi360 added a comment to the discussion: JSONSchema should works
with generic (parameterized type) POJO/DTO.
Hi, I worked on AVRO-2248 and we stabilised it and use it in production in our
company. We will resume the discussion and make the required changes to get it
merged to
GitHub user rohts-patil added a comment to the discussion: JSONSchema should
works with generic (parameterized type) POJO/DTO.
@vamsi360 ,
I cloned your fork and built it to version 1.8.3-ppe-9.10.
I still get the error.
```
Caused by: org.apache.avro.AvroTypeException: Unknown type: T
```
GitHub user ducquoc added a comment to the discussion: JSONSchema should works
with generic (parameterized type) POJO/DTO.
Thank you for your quick response.
I hope the Avro fix will be merged soon (and without much performance overhead).
Until then I will apply the work-around java.lang.Object
GitHub user jerrypeng added a comment to the discussion: JSONSchema should
works with generic (parameterized type) POJO/DTO.
So we use Avro underneath to generate a schema for the class.
Parameterized/generics in Java are currently not supported for schema
generation in Avro. Though there is
GitHub user ducquoc created a discussion: JSONSchema should works with generic
(parameterized type) POJO/DTO.
Expected behavior
JSONSchema should work with generic (parameterized type) POJO/DTO. No
exception at runtime, or at least update the documentation/error message on how to
GitHub user sijie created a discussion: [functions][k8s] Store jobNamespace as
part of function metadata
*Motivation*
Currently, we don't store jobNamespace as part of function metadata. If the
broker restarts with a different job namespace, it will cause the function
statefulset not
GitHub user jiazhai created a discussion: [Feature][Functions] dynamic update
configs
The configs of Pulsar Functions are provided before the Functions start. If a
user wants to update the config, the functions have to be redeployed from
scratch.
It would be great to provide a way to
GitHub user codelipenghui added a comment to the discussion: Support zero queue
consumer for partitioned topic.
@merlimat How about syncing the backlog between the broker and client
periodically, and treating topics with more backlog as high priority? Since we
don't know if a topic will there be
GitHub user leizhiyuan added a comment to the discussion: Support zero queue
consumer for partitioned topic.
what is the current status?
GitHub link:
https://github.com/apache/pulsar/discussions/18846#discussioncomment-4351886
GitHub user merlimat added a comment to the discussion: Support zero queue
consumer for partitioned topic.
It’s not enabled because we cannot know which partition the next message will
be coming from. Any suggestion on how to achieve that?
GitHub link:
GitHub user sundar-10 added a comment to the discussion: Support zero queue
consumer for partitioned topic.
Hello, I am planning to work on this issue. I plan to read how the zero queue
consumer subscribes to a non-partitioned topic and proceed from there by
getting some kind of input
GitHub user codelipenghui added a comment to the discussion: Support zero queue
consumer for partitioned topic.
@sundar-10 You can check the `ZeroQueueConsumerImpl`.
GitHub link:
https://github.com/apache/pulsar/discussions/18846#discussioncomment-4351884
GitHub user codelipenghui created a discussion: Support zero queue consumer for
partitioned topic.
**Is your feature request related to a problem? Please describe.**
Currently, the zero queue consumer can only subscribe to a non-partitioned
topic. In some cases, we need to use zero queue
Hello Pulsar community,
I recently joined this ML. I have been keenly following the RC, Voting and
PIP related email threads so far. I only have one question - is there a way
to disable the emails from GitBox about GitHub discussions? Mainly for the
following reasons:
1. The GitHub
+1 (non-binding)
On Fri, Dec 9, 2022 at 12:18 AM Chris Bo wrote:
> +1 (non-binding)
>
> Followed validation process @
>
> https://github.com/apache/pulsar-client-reactive/wiki/Release-process#release-validation
>
> - Verified checksum, signature, sources match git tag
> - Ran simple
GitHub user haphut added a comment to the discussion: Allow topic compaction to
discard messages with duplicate key
Another variant of this problem occurs when we are using an in-order pub-sub
API, e.g. an MQTT API, or any ephemeral event source to feed Pulsar.
If we are running only one
GitHub user gmethvin created a discussion: Allow topic compaction to discard
messages with duplicate key
**Is your feature request related to a problem? Please describe.**
It's often the case that a producer gets interrupted in the process of
producing a series of messages to a topic,
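The core of key-based compaction — keep only the latest value seen for each key — can be sketched in a few lines. This is a model of the semantics only, not Pulsar's compactor (message keys and values here are made up):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CompactionSketch {
    record Msg(String key, String value) {}

    // Replays a topic and keeps only the last message seen for each key,
    // which is what a compacted view exposes to a reader.
    static Map<String, String> compact(List<Msg> topic) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (Msg m : topic) {
            latest.put(m.key(), m.value()); // a later message with the same key wins
        }
        return latest;
    }

    public static void main(String[] args) {
        List<Msg> topic = List.of(
                new Msg("device-1", "v1"),
                new Msg("device-2", "v1"),
                new Msg("device-1", "v2")); // duplicate key: the earlier v1 is discarded
        System.out.println(compact(topic)); // {device-1=v2, device-2=v1}
    }
}
```

The feature request is about making the compactor (or a compaction policy) drop the superseded entries rather than retaining them alongside the latest one.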
GitHub user jaggerwang added a comment to the discussion: Requesting Pulsar to
support IoT protocols - STOMP, AMQP, MQTT, WSS
> > @davidlanouette sorry that I missed this message!
> > The AoP is available at https://github.com/streamnative/aop. MoP will be
> > also public soon.
> > @vzhikserg
GitHub user lloydchandran-zafinlabs added a comment to the discussion:
Requesting Pulsar to support IoT protocols - STOMP, AMQP, MQTT, WSS
> @davidlanouette sorry that I missed this message!
>
> The AoP is available at https://github.com/streamnative/aop. MoP will be also
> public soon.
>
>
GitHub user hpvd added a comment to the discussion: Requesting Pulsar to
support IoT protocols - STOMP, AMQP, MQTT, WSS
MQTT on Pulsar (MoP) can be found here:
https://github.com/streamnative/mop
GitHub link:
https://github.com/apache/pulsar/discussions/18841#discussioncomment-4350900
GitHub user vzhikserg added a comment to the discussion: Requesting Pulsar to
support IoT protocols - STOMP, AMQP, MQTT, WSS
This issue contains too many requests in one. I would suggest creating 4
separate issues (one for each protocol) and closing this one. In this case, the
issues can be
GitHub user sijie added a comment to the discussion: Requesting Pulsar to
support IoT protocols - STOMP, AMQP, MQTT, WSS
@davidlanouette sorry that I missed this message!
The AoP is available at https://github.com/streamnative/aop. MoP will be also
public soon.
@vzhikserg I agree with you.
GitHub user sijie added a comment to the discussion: Requesting Pulsar to
support IoT protocols - STOMP, AMQP, MQTT, WSS
@davidlanouette No. `kop` doesn't depend on the StreamNative platform. `kop` is
a protocol handler that you can install natively in your existing Pulsar
cluster. Also
GitHub user davidlanouette added a comment to the discussion: Requesting Pulsar
to support IoT protocols - STOMP, AMQP, MQTT, WSS
My team also is seriously considering Pulsar to replace ActiveMQ 5 (yes, it's
way overdue), but we have a lot of STOMP and AMQP clients.
@sijie Does the `kop`
GitHub user davidlanouette added a comment to the discussion: Requesting Pulsar
to support IoT protocols - STOMP, AMQP, MQTT, WSS
@sijie Thanks for the update!
Do you have a link to that repo? I'd be interested in contributing if I can.
GitHub link:
GitHub user sijie added a comment to the discussion: Requesting Pulsar to
support IoT protocols - STOMP, AMQP, MQTT, WSS
Hi @PrashantKS thank you for creating the issue.
We have just added kafka protocol support to pulsar via KoP
(https://github.com/streamnative/kop). It is
We are also
GitHub user PrashantKS created a discussion: Requesting Pulsar to support IoT
protocols - STOMP, AMQP, MQTT, WSS
Use case:
In IoT-related use cases, we are using RabbitMQ as the message broker to
receive/send messages from IoT devices over the protocols STOMP, AMQP, MQTT,
and WSS, and further
GitHub user sijie added a comment to the discussion: Add meta-data to DLQ
@codelipenghui @congbobo184 this is a good feature to add to DLQ
GitHub link:
https://github.com/apache/pulsar/discussions/18840#discussioncomment-4350840
GitHub user jefferyshivers-toast added a comment to the discussion: Add
meta-data to DLQ
Has there been any movement on this? This is an example of a similar feature in
Azure Service Bus:
GitHub user codelipenghui added a comment to the discussion: Add meta-data to
DLQ
Sorry for the late response, I will take a look soon.
GitHub link:
https://github.com/apache/pulsar/discussions/18840#discussioncomment-4350841
GitHub user rocketraman created a discussion: Add meta-data to DLQ
**Is your feature request related to a problem? Please describe.**
The automatic DLQ is a nice feature, but it's lacking any ability to add
meta-data to the entry in the DLQ. For example, setting a property like
GitHub user shiv4289 added a comment to the discussion: [offload] CLI/Rest
endpoint to list and describe offloaded topics of namespace
Sure @ivan970101, please go ahead. I would be happy to work as a reviewer.
GitHub link:
GitHub user ivan970101 added a comment to the discussion: [offload] CLI/Rest
endpoint to list and describe offloaded topics of namespace
I'm interested in this issue. May I have a try?
GitHub link:
https://github.com/apache/pulsar/discussions/18839#discussioncomment-4350831
GitHub user shiv4289 created a discussion: [offload] CLI/Rest endpoint to list
and describe offloaded topics of namespace
**Is your feature request related to a problem? Please describe.**
Currently, it is difficult to find which topics of a namespace are offloaded to
secondary storage. It
I submitted https://github.com/apache/pulsar/pull/18837 to fix this issue.
Thanks,
Zixuan
Zixuan Liu wrote on Fri, Dec 9, 2022 at 17:47:
> Ok, let me make a new PR to fix this.
>
> Thanks,
> Zixuan
>
> Yunze Xu wrote on Fri, Dec 9, 2022 at 17:41:
>
>> > I think when an admin doesn't have permission to create the namespace,
Ok, let me make a new PR to fix this.
Thanks,
Zixuan
Yunze Xu wrote on Fri, Dec 9, 2022 at 17:41:
> > I think when an admin doesn't have permission to create the namespace,
> > Pulsar should exit.
>
> Maybe. But it's something that requires a proposal because this change
> breaks many standalone
> I think when an admin doesn't have permission to create the namespace,
> Pulsar should exit.
Maybe. But it's something that requires a proposal because this change
breaks many standalone deployments of other Pulsar clients.
Thanks,
Yunze
On Fri, Dec 9, 2022 at 5:36 PM Zixuan Liu wrote:
>
>
+1(non-binding)
Thanks,
Jiaqi Shen
Wrote on Mon, Dec 5, 2022 at 15:23:
> +1(non-binding)
>
> Best,
> Mattison
> On Dec 5, 2022, 15:09 +0800, Zike Yang , wrote:
> > +1(non-binding)
> >
> > Best,
> > Zike Yang
> >
> > On Mon, Dec 5, 2022 at 2:41 PM Baodi Shi
> wrote:
> > >
> > > +1(non-binding)
> > >
> > >