gortiz opened a new pull request, #17667:
URL: https://github.com/apache/pinot/pull/17667

   We have found some issues in the way SSE handles exceptions thrown/caught in 
the Netty channel handlers.
   
   A Netty handler should always either call the next handler in the chain or 
abort the execution with a response. However, there were several cases where we 
just caught the exception, logged it, and did nothing with the channel. If the 
exception occurs during a channelRead, the channel remains idle until we 
receive a new message, which may never happen. The other, even more problematic 
case was in DataTransferHandler.exceptionCaught, which just logged the 
exception without calling the next handler. As a result, the channel stays in 
an undefined state, which may leak byte buffers.
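   The difference between the two patterns can be sketched with a toy handler 
chain (a self-contained model, not Pinot's actual handlers; `HandlerChainSketch`, 
`SWALLOWING`, and `PROPAGATING` are illustrative names, and `fireExceptionCaught` 
/ `close` only mimic the corresponding Netty `ChannelHandlerContext` methods):

```java
import java.util.List;

// Toy model of a Netty-style handler chain. Each handler must either forward
// the exception or terminate the exchange; a handler that merely logs leaves
// the "channel" open with its resources still held.
public class HandlerChainSketch {

    interface Handler {
        void exceptionCaught(Context ctx, Throwable cause);
    }

    static class Context {
        final List<Handler> chain;
        int index;
        boolean closed;

        Context(List<Handler> chain) { this.chain = chain; }

        // Analogue of ctx.fireExceptionCaught(cause): forward to the next handler.
        void fireExceptionCaught(Throwable cause) {
            if (index + 1 < chain.size()) {
                index++;
                chain.get(index).exceptionCaught(this, cause);
            }
        }

        // Analogue of ctx.close(): releases buffers and unblocks the peer.
        void close() { closed = true; }
    }

    // Buggy pattern: log and swallow. The channel stays open and idle.
    static final Handler SWALLOWING =
        (ctx, cause) -> System.err.println("caught: " + cause.getMessage());

    // Fixed pattern: propagate so a later handler can abort with a response.
    static final Handler PROPAGATING = Context::fireExceptionCaught;

    // Tail handler: last resort, always closes the channel.
    static final Handler TAIL = (ctx, cause) -> ctx.close();

    static boolean channelClosedAfterError(Handler first) {
        Context ctx = new Context(List.of(first, TAIL));
        first.exceptionCaught(ctx, new RuntimeException("query failed"));
        return ctx.closed;
    }

    public static void main(String[] args) {
        System.out.println(channelClosedAfterError(SWALLOWING));  // false: channel leaked
        System.out.println(channelClosedAfterError(PROPAGATING)); // true: cleaned up
    }
}
```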
   
   I also found another issue: since we introduced 
PooledByteBufAllocatorWithLimits, brokers have been creating one allocator per 
server connection. This means the actual memory limit was `num_servers * 
max_memory_limit`, which is obviously wrong. Also, since we recorded the stats 
at allocator-creation time, we exported only the stats for the last server. 
Before the introduction of PooledByteBufAllocatorWithLimits this was not an 
issue, because even if we overwrote the old stats, the value was the same. I 
fixed that problem as well, although TBH after 
https://github.com/apache/pinot/pull/16939 we shouldn't need 
PooledByteBufAllocatorWithLimits at all, as these limits can be set globally 
with Netty JAVA_OPTS. The reason we introduced PooledByteBufAllocatorWithLimits 
is that these JAVA_OPTS were ignored and we didn't know why; as shown in 
https://github.com/apache/pinot/pull/16939, the reason is that we needed to 
change the property names to add the shading prefix we use.
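   Why one allocator per connection multiplies the cap can be shown with a 
minimal counting sketch (self-contained and simplified, not Pinot's actual 
PooledByteBufAllocatorWithLimits; `LimitedAllocator` and both `effectiveCap*` 
helpers are illustrative names):

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy sketch of why a per-connection limited allocator raises the broker-wide
// ceiling to numServers * maxBytes, while a shared allocator keeps it at maxBytes.
public class AllocatorLimitSketch {

    static class LimitedAllocator {
        final long maxBytes;
        final AtomicLong used = new AtomicLong();

        LimitedAllocator(long maxBytes) { this.maxBytes = maxBytes; }

        // Reserve `bytes`, rolling back and refusing if the limit is exceeded.
        boolean tryAllocate(long bytes) {
            if (used.addAndGet(bytes) > maxBytes) {
                used.addAndGet(-bytes);
                return false;
            }
            return true;
        }
    }

    // Buggy: each server connection gets its own allocator.
    static long effectiveCapPerConnectionAllocators(int numServers, long maxBytes) {
        long total = 0;
        for (int i = 0; i < numServers; i++) {
            LimitedAllocator perConnection = new LimitedAllocator(maxBytes);
            while (perConnection.tryAllocate(1)) total++;
        }
        return total; // numServers * maxBytes
    }

    // Fixed: all connections share one allocator.
    static long effectiveCapSharedAllocator(int numServers, long maxBytes) {
        LimitedAllocator shared = new LimitedAllocator(maxBytes);
        long total = 0;
        for (int i = 0; i < numServers; i++) {
            while (shared.tryAllocate(1)) total++;
        }
        return total; // maxBytes, regardless of numServers
    }

    public static void main(String[] args) {
        System.out.println(effectiveCapPerConnectionAllocators(4, 100)); // 400
        System.out.println(effectiveCapSharedAllocator(4, 100));         // 100
    }
}
```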


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

