zhangshenghang commented on issue #10101: URL: https://github.com/apache/seatunnel/issues/10101#issuecomment-3610393727
> hey [@zhangshenghang](https://github.com/zhangshenghang), can you give some advice? I'm currently trying to decide whether this should support parallelism (with user-defined splits) or not.
>
> NATS JetStream acts like Kafka, but it doesn't have partitioning. However, when we subscribe to a stream, we can manually specify the subject to subscribe to. Say we have a stream `stream1` which contains subjects `sub1`, `sub2.1`, `sub2.2`; we could subscribe to `sub1` or `sub2.*` separately.
>
> My hesitation comes from reading the Kafka connector, which separates between
>
> 1. [fetching the information regarding the split](https://github.com/apache/seatunnel/blob/dev/seatunnel-connectors-v2/connector-kafka/src/main/java/org/apache/seatunnel/connectors/seatunnel/kafka/source/KafkaPartitionSplitReader.java#L91)
> 2. [the actual fetching of the data](https://github.com/apache/seatunnel/blob/dev/seatunnel-connectors-v2/connector-kafka/src/main/java/org/apache/seatunnel/connectors/seatunnel/kafka/source/KafkaPartitionSplitReader.java#L409).
>
> With manual splits in NATS, we don't need step 1, but in step 2 [we might get nothing back after waiting for some time](https://javadoc.io/doc/io.nats/jnats/2.12.0/io/nats/client/JetStreamSubscription.html#iterate-int-java.time.Duration-) on a certain split.
>
> What do you think?

The NATS implementation should be more convenient. It supports queue subscriptions (QueueSubscribe) and does not need to be handled per partition count like Kafka. When parallelism is set, each parallel instance can consume all subjects, namely `sub1`, `sub2.1`, `sub2.2`.

Refer to: https://docs.nats.io/using-nats/developer/receiving/queues
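For illustration, a minimal sketch of what a queue-group subscription could look like with the jnats client (this is not existing connector code; the server URL, queue-group name, and subject are placeholder assumptions). Because every parallel reader joins the same queue group, each message on `sub2.*` is delivered to exactly one member, so no split assignment is needed:

```java
import io.nats.client.Connection;
import io.nats.client.Message;
import io.nats.client.Nats;
import io.nats.client.Subscription;

import java.nio.charset.StandardCharsets;
import java.time.Duration;

public class QueueSubscribeSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; each parallel reader instance would run this same code.
        try (Connection nc = Nats.connect("nats://localhost:4222")) {
            // All instances subscribe with the same queue group ("seatunnel-readers"),
            // so NATS load-balances messages across them instead of fanning out.
            Subscription sub = nc.subscribe("sub2.*", "seatunnel-readers");

            // Poll for messages; nextMessage returns null when the timeout elapses,
            // which is the "nothing back after waiting" case mentioned above and
            // can simply be treated as an empty poll.
            while (!Thread.currentThread().isInterrupted()) {
                Message msg = sub.nextMessage(Duration.ofSeconds(1));
                if (msg != null) {
                    System.out.println(new String(msg.getData(), StandardCharsets.UTF_8));
                }
            }
        }
    }
}
```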
