[ 
https://issues.apache.org/jira/browse/ROCKETMQ-80?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15869339#comment-15869339
 ] 

ASF GitHub Bot commented on ROCKETMQ-80:
----------------------------------------

Github user Jaskey commented on the issue:

    https://github.com/apache/incubator-rocketmq/pull/53
  
    @dongeforever 
    
    I have the same wish for batch send, but what drives me is that users 
may need a batch id for one batch of messages, and those messages should 
all succeed to one single queue, which is necessary when sending ordered 
messages. Say msgA, msgB and msgC should be consumed in order; they should 
be sent to one same queue, but if we use a for loop to send them, A may 
succeed while B fails to reach the same queue, since the queue number may 
change at exactly that time.
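
A minimal, self-contained Java sketch of that failure mode (plain integers stand in for the real producer, broker and messages; `selectQueue` is a hypothetical hash-style selector, not the actual RocketMQ API):

```java
public class QueueSelectionSketch {
    // Hypothetical selector: queue index = orderId % queueNum.
    static int selectQueue(int orderId, int queueNum) {
        return orderId % queueNum;
    }

    public static void main(String[] args) {
        // Loop send: the broker's queue count changes between two sends of
        // the same order, so the messages land in different queues.
        int q1 = selectQueue(6, 4);  // queue count is 4 when msgA is sent: 6 % 4 = 2
        int q2 = selectQueue(6, 8);  // count grows to 8 before msgB:       6 % 8 = 6
        System.out.println("loop send: msgA -> " + q1 + ", msgB -> " + q2);

        // Batch send: the queue is selected once for the whole batch, so
        // msgA, msgB and msgC share one queue, all-or-nothing.
        int batchQueue = selectQueue(6, 4);
        System.out.println("batch send: all messages -> " + batchQueue);
    }
}
```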
    
    Batch send could solve this problem. But we may also need to generate a 
unique batch id for this on the client, which will help us optimize the 
performance of ConsumeOrderlyService in the future. Currently, a message in 
one single queue can only be consumed once the previous one has been 
consumed successfully, which is actually too strict. We really only need 
the messages within one batch to be consumed in order, and a batch id will 
help us do this.
    
    **So in general, I suggest adding a batch id to every message's 
properties when sending a batch of messages.**
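
As a rough sketch of that suggestion (the property key `"BATCH_ID"`, the UUID, and the simplified `Message` class are all illustrative assumptions, not an agreed design or the real RocketMQ classes), the client could stamp one generated id into every message of a batch before sending:

```java
import java.util.*;

public class BatchIdSketch {
    // Simplified stand-in for the real Message class:
    // just a body plus a user-property map.
    static class Message {
        final String body;
        final Map<String, String> properties = new HashMap<>();
        Message(String body) { this.body = body; }
    }

    // Stamp one client-generated batch id into every message's properties.
    // The key "BATCH_ID" is a placeholder; the real property name would
    // have to be agreed on in the PR.
    static List<Message> stampBatchId(List<Message> batch) {
        String batchId = UUID.randomUUID().toString();
        for (Message m : batch) {
            m.properties.put("BATCH_ID", batchId);
        }
        return batch;
    }

    public static void main(String[] args) {
        List<Message> batch = stampBatchId(new ArrayList<>(Arrays.asList(
                new Message("msgA"), new Message("msgB"), new Message("msgC"))));
        // All three messages carry the same id, so an orderly consumer only
        // has to preserve order within this batch, not across the whole queue.
        System.out.println(batch.get(0).properties.get("BATCH_ID")
                .equals(batch.get(2).properties.get("BATCH_ID"))); // prints "true"
    }
}
```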
    
    PS: There looks to be too much repeated code; any ways or plans to clean it up?


> Add batch feature
> -----------------
>
>                 Key: ROCKETMQ-80
>                 URL: https://issues.apache.org/jira/browse/ROCKETMQ-80
>             Project: Apache RocketMQ
>          Issue Type: New Feature
>    Affects Versions: 4.1.0-incubating
>            Reporter: dongeforever
>            Assignee: dongeforever
>             Fix For: 4.1.0-incubating
>
>
> Tests show that Kafka's million-level TPS is mainly owed to batching. When 
> the batch size is set to 1, TPS drops by an order of magnitude. So I am 
> trying to add this feature to RocketMQ.
> To keep the effort minimal, it works as follows:
> Only add synchronous send functions to the MQProducer interface, such as 
> send(final Collection msgs).
> Use MessageBatch, which extends Message and implements Iterable<Message>.
> Use a byte buffer instead of a list of objects to avoid too much GC in the 
> Broker.
> Split the decode and encode logic out of lockForPutMessage to avoid too many 
> race conditions.
> Tests:
> On Linux with 24 cores, 48 GB RAM and an SSD, using 50 threads to send 
> messages with 50-byte bodies in batches of 50, we get about 1.5 million TPS 
> until the disk is full.
> Potential problems:
> Although messages can be accumulated in the Broker very quickly, it needs 
> time to dispatch them to the consume queue, which is much slower than 
> accepting messages. So the messages may not be consumable immediately.
> We may need to refactor the ReputMessageService to solve this problem.
> If you have any ideas, please let me know or just share them in this 
> issue.
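
A compressed sketch of the design quoted above, with a simplified `Message` standing in for the real class: `MessageBatch` extends `Message`, implements `Iterable<Message>`, and pre-encodes the child bodies into one byte buffer so the Broker handles a single blob instead of a list of objects. The length-prefixed layout here is an illustrative assumption; the real wire format carries more fields.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.*;

public class MessageBatchSketch {
    // Simplified stand-in for the real Message class.
    static class Message {
        final byte[] body;
        Message(byte[] body) { this.body = body; }
    }

    // Sketch of MessageBatch: extends Message, iterable over its children,
    // with the batch body pre-encoded into one buffer on the client.
    static class MessageBatch extends Message implements Iterable<Message> {
        private final List<Message> messages;

        MessageBatch(Collection<Message> msgs) {
            super(encode(msgs));  // batch body = one pre-encoded buffer
            this.messages = new ArrayList<>(msgs);
        }

        // Illustrative encoding: [int length][body] per message.
        static byte[] encode(Collection<Message> msgs) {
            int size = 0;
            for (Message m : msgs) size += 4 + m.body.length;
            ByteBuffer buf = ByteBuffer.allocate(size);
            for (Message m : msgs) {
                buf.putInt(m.body.length);
                buf.put(m.body);
            }
            return buf.array();
        }

        @Override
        public Iterator<Message> iterator() { return messages.iterator(); }
    }

    public static void main(String[] args) {
        MessageBatch batch = new MessageBatch(Arrays.asList(
                new Message("msgA".getBytes(StandardCharsets.UTF_8)),
                new Message("msgB".getBytes(StandardCharsets.UTF_8))));
        System.out.println(batch.body.length); // 2 * (4 + 4) = prints "16"
    }
}
```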



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
