RongtongJin opened a new issue, #10197:
URL: https://github.com/apache/rocketmq/issues/10197

   ### Before Creating the Enhancement Request
   
   - [x] I have confirmed that this should be classified as an enhancement 
rather than a bug/feature.
   
   
   ### Summary
   
   `RemotingCodeDistributionHandler` currently tracks the request count per 
requestCode, but does not record how many bytes each requestCode consumes. This 
makes it hard to identify which commands are responsible for high network 
traffic.
   
   
   ### Motivation
   
   In production, it is common to have many requestCodes active at the same 
time. Count alone cannot tell us which command is a traffic hotspot. Adding 
per-requestCode traffic size distribution allows operators to quickly pinpoint 
bandwidth-heavy commands without relying on external packet capture tools.
   
   ### Describe the Solution You'd Like
   
   - Introduce `TrafficStats` (a pair of `LongAdder`s tracking count and trafficSize), so the four separate `ConcurrentHashMap`s can be replaced with two.
   - Add `calcCommandSize()` that always includes a fixed protocol overhead (29 
bytes) plus body length — O(1), zero iteration cost on the hot path.
   - Add `enableDetailedTrafficSize` flag in `NettyServerConfig`. When enabled, 
remark and extFields variable-length bytes are also counted (O(n) path). The 
flag can be toggled at runtime via `updateBrokerConfig`.
   - Log inbound/outbound traffic snapshots in 
`NettyRemotingServer.printRemotingCodeDistribution()` alongside existing count 
snapshots.
   
   
   ### Describe Alternatives You've Considered
   
   Iterating over extFields on every message was considered but rejected as the 
default path due to O(n) cost on the hot path. It is kept as an opt-in mode 
behind the switch.
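   The opt-in mode might be sketched as below, with the extra variable-length accounting guarded by the flag. The method shape and names are assumptions for illustration; only `enableDetailedTrafficSize`, `remark`, and `extFields` come from the issue.

   ```java
   import java.nio.charset.StandardCharsets;
   import java.util.HashMap;
   import java.util.Map;

   // Sketch (assumed signature): when enableDetailedTrafficSize is on, the
   // variable-length remark and extFields bytes are counted too. Iterating
   // extFields is O(n) in the number of entries, which is why this path is
   // off by default and sits behind the runtime switch.
   public class DetailedSizeSketch {

       static final int FIXED_COMMAND_OVERHEAD = 29;

       static int calcCommandSize(int bodyLength, String remark,
                                  Map<String, String> extFields,
                                  boolean enableDetailedTrafficSize) {
           int size = FIXED_COMMAND_OVERHEAD + bodyLength;
           if (enableDetailedTrafficSize) {
               if (remark != null) {
                   size += remark.getBytes(StandardCharsets.UTF_8).length;
               }
               if (extFields != null) {
                   for (Map.Entry<String, String> e : extFields.entrySet()) {
                       size += e.getKey().getBytes(StandardCharsets.UTF_8).length;
                       size += e.getValue().getBytes(StandardCharsets.UTF_8).length;
                   }
               }
           }
           return size;
       }

       public static void main(String[] args) {
           Map<String, String> ext = new HashMap<>();
           ext.put("topic", "TestTopic"); // 5 + 9 = 14 bytes of ext fields
           // Default path ignores remark/extFields: 29 + 100 = 129
           System.out.println(calcCommandSize(100, "ok", ext, false));
           // Detailed path adds remark (2) and extFields (14): 145
           System.out.println(calcCommandSize(100, "ok", ext, true));
       }
   }
   ```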
   
   
   ### Additional Context
   
   _No response_

