RongtongJin opened a new pull request, #10198:
URL: https://github.com/apache/rocketmq/pull/10198

   
   ### Which Issue(s) This PR Fixes
   
   
   - Fixes #10197
   
   ### Brief Description
   
   Add per-requestCode traffic size (bytes) distribution to 
`RemotingCodeDistributionHandler`, in addition to the existing request count 
distribution.
   
   Key changes:
   - Replace the four `ConcurrentHashMap`s with two by introducing a composite `TrafficStats` class (`LongAdder count` + `LongAdder trafficSize`)
   - `calcCommandSize()` default path: fixed overhead (29 bytes) plus body length; O(1), with no iteration on the hot path
   - `enableDetailedTrafficSize` in `NettyServerConfig`: when enabled, also counts the byte lengths of remark and extFields. Can be toggled at runtime via `updateBrokerConfig`
   - `NettyRemotingServer.printRemotingCodeDistribution()` now logs 
inbound/outbound traffic snapshots
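
   The composite-stats idea can be sketched as follows. This is a minimal illustration, not the PR's actual code: the class and method names (`TrafficStatsSketch`, `recordInbound`, the setter) and the 29-byte constant follow the description above, while the rest is hypothetical.

   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.atomic.LongAdder;

   public class TrafficStatsSketch {
       // One entry per requestCode: count and traffic bytes kept together,
       // replacing a separate map for each metric.
       static final class TrafficStats {
           final LongAdder count = new LongAdder();
           final LongAdder trafficSize = new LongAdder();
       }

       private final Map<Integer, TrafficStats> inboundStats = new ConcurrentHashMap<>();
       private volatile boolean enableDetailedTrafficSize = false;

       // Fixed protocol overhead, assumed here to be the 29 bytes the PR mentions.
       static final int FIXED_OVERHEAD = 29;

       // Default path is O(1): fixed overhead plus body length, no iteration.
       long calcCommandSize(byte[] body, String remark, Map<String, String> extFields) {
           long size = FIXED_OVERHEAD + (body == null ? 0 : body.length);
           if (enableDetailedTrafficSize) {
               // Detailed mode additionally counts remark and extFields lengths.
               if (remark != null) {
                   size += remark.length();
               }
               if (extFields != null) {
                   for (Map.Entry<String, String> e : extFields.entrySet()) {
                       size += e.getKey().length() + e.getValue().length();
                   }
               }
           }
           return size;
       }

       void recordInbound(int requestCode, long bytes) {
           TrafficStats stats = inboundStats.computeIfAbsent(requestCode, k -> new TrafficStats());
           stats.count.increment();
           stats.trafficSize.add(bytes);
       }

       // Runtime toggle, standing in for the updateBrokerConfig path.
       void setEnableDetailedTrafficSize(boolean enable) {
           this.enableDetailedTrafficSize = enable;
       }

       long inboundCount(int requestCode) {
           TrafficStats s = inboundStats.get(requestCode);
           return s == null ? 0 : s.count.sum();
       }

       long inboundTraffic(int requestCode) {
           TrafficStats s = inboundStats.get(requestCode);
           return s == null ? 0 : s.trafficSize.sum();
       }
   }
   ```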
   
   ### How Did You Test This Change?
   
   - Added `RemotingCodeDistributionHandlerTest` with 10 unit test cases 
covering: count, traffic with/without body, detailed mode on/off, snapshot 
reset, multiple requestCodes, non-RemotingCommand passthrough, concurrent 
correctness, and runtime toggle
   - Added `RemotingCodeDistributionBenchmark` (JMH) comparing detail-off vs 
detail-on across 1/4/8 threads. `recordInbound()` is called directly 
(package-private) to isolate recording overhead from Netty pipeline cost
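
   The concurrent-correctness idea can be illustrated with a small standalone sketch (hypothetical class name, arbitrary thread and iteration counts): `LongAdder` guarantees that increments from many threads are never lost, so the sums must match the exact expected totals.

   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.atomic.LongAdder;

   public class ConcurrentRecordDemo {
       static final LongAdder count = new LongAdder();
       static final LongAdder trafficSize = new LongAdder();

       public static void main(String[] args) throws Exception {
           int threads = 8, perThread = 10_000, bytesPerCall = 129;
           ExecutorService pool = Executors.newFixedThreadPool(threads);
           for (int t = 0; t < threads; t++) {
               pool.submit(() -> {
                   for (int i = 0; i < perThread; i++) {
                       // Mirrors one recordInbound() call: bump count, add bytes.
                       count.increment();
                       trafficSize.add(bytesPerCall);
                   }
               });
           }
           pool.shutdown();
           pool.awaitTermination(10, TimeUnit.SECONDS);
           // Exact totals despite contention: threads * perThread increments.
           System.out.println(count.sum());
           System.out.println(trafficSize.sum());
       }
   }
   ```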

