merlimat commented on pull request #7406:
URL: https://github.com/apache/pulsar/pull/7406#issuecomment-652449856


   > I just want to say the pending buffer is sensitive to write throughput, 
especially on machines with less memory. If we split it into multiple parts, 
when there are fewer connections, the channel will enable auto-read, disable 
auto-read frequently, 
   
   How much is "less memory"? Once you have 10s of MB per thread (or per 
connection, if you have a single connection), letting it use more memory 
yields no further improvement.
   
   It's the same reason the OS network stack doesn't let you grow the 
TCP window indefinitely. There are OS limits in place because: (1) 
too much doesn't help, and (2) it starves other users.
   
   The current default setting we have is `-XX:MaxDirectMemorySize=4g`.
   That means that with 16 cores, you'd get, by default, 32 IO threads.
   With this, each thread gets 128MB of buffer size, which is far greater than 
any TCP window size you'd get from Linux.
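   The arithmetic above can be checked with a quick sketch (the numbers are the defaults quoted in this comment, not values read from a live broker):

   ```java
   public class BufferBudget {
       public static void main(String[] args) {
           long maxDirectMemory = 4L * 1024 * 1024 * 1024; // -XX:MaxDirectMemorySize=4g
           int cores = 16;
           int ioThreads = 2 * cores;                      // default: 2 IO threads per core
           long perThreadBytes = maxDirectMemory / ioThreads;
           // 4 GB split across 32 IO threads -> 128 MB per thread
           System.out.println(perThreadBytes / (1024 * 1024) + " MB per IO thread");
       }
   }
   ```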
   
   Also, 4GB for a VM with 16 cores is a very low memory-to-core ratio. Such a 
VM would typically have 64GB or more.
   
   Conversely, in container environments memory is usually capped lower, but so 
is the CPU limit. If you limit the CPUs on the broker container, the default IO 
thread count will adjust accordingly, balancing the situation.
   
   > If we split it into multiple parts, when there are fewer connections, the 
channel will enable auto-read, disable auto-read frequently,
   
   We're already doing that with the per-connection `pendingSendRequest` 
limit (default 1K) and there's no performance impact.
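   The pattern in question is counter-based backpressure with hysteresis: stop reading when pending sends cross a high-water mark, resume once they drain below a low-water mark, so the channel doesn't flap on every message. A minimal sketch of the idea follows; the class name and thresholds are illustrative, not Pulsar's actual code:

   ```java
   public class PendingSendThrottle {
       private final int maxPending;   // high-water mark, e.g. the 1K default mentioned above
       private final int resumeBelow;  // low-water mark, giving hysteresis
       private int pending = 0;
       private boolean autoRead = true; // stands in for channel.config().isAutoRead()

       public PendingSendThrottle(int maxPending, int resumeBelow) {
           this.maxPending = maxPending;
           this.resumeBelow = resumeBelow;
       }

       public void onSendQueued() {
           if (++pending >= maxPending && autoRead) {
               autoRead = false; // in Netty: channel.config().setAutoRead(false)
           }
       }

       public void onSendCompleted() {
           if (--pending <= resumeBelow && !autoRead) {
               autoRead = true;  // in Netty: channel.config().setAutoRead(true)
           }
       }

       public boolean isAutoRead() {
           return autoRead;
       }
   }
   ```

   Because reads only toggle at the two marks, a connection under steady load flips auto-read at most once per drain cycle rather than per message.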
   
   > And it is possible that some parts are available pending more messages.
   
   > The io threads count not much, will we add up to a few numbers each time 
will become a bottleneck?
   
   I'm not sure what you mean here.
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

