d-c-manning commented on PR #2061:
URL: https://github.com/apache/phoenix/pull/2061#issuecomment-2667223312

   > > IMO we don't need to track at further granularity. Just track the time taken by a single `executeMutation` call. You can use nanosecond granularity for it.
   > 
   > @tkhurana Recently we saw a huge amount of time being spent in an `executeMutation` call, and the cause turned out to be an extra 1 ms coming from mutation plan creation. If we don't track and publish at this granularity, then during debugging we won't know which area to look into, i.e. mutation plan creation, execution, or something else. That is why I wanted to track at the finer granularity. WDYT?
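   If I'm reading the proposal right, the split in question is roughly the sketch below; the class, field, and method names are hypothetical and not this PR's actual code, it just shows the two `System.nanoTime()` measurements being discussed:

```java
import java.util.concurrent.TimeUnit;

// Minimal, self-contained sketch of timing plan creation and execution
// separately. Names are illustrative only, not Phoenix APIs.
public class ExecuteMutationTimingSketch {

    static long planCreationNanos;
    static long executionNanos;

    // Time the two phases separately at nanosecond resolution.
    static void executeMutation(Runnable createPlan, Runnable executePlan) {
        long t0 = System.nanoTime();
        createPlan.run();                          // mutation plan creation
        planCreationNanos = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        executePlan.run();                         // plan execution
        executionNanos = System.nanoTime() - t1;
    }

    public static void main(String[] args) {
        executeMutation(
            () -> sleepQuietly(1),                 // pretend planning took ~1 ms
            () -> sleepQuietly(5));                // pretend execution took ~5 ms
        System.out.printf("plan=%d us, execute=%d us%n",
            TimeUnit.NANOSECONDS.toMicros(planCreationNanos),
            TimeUnit.NANOSECONDS.toMicros(executionNanos));
    }

    private static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```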
   
   Was 1 ms really significant relative to that huge time? How huge was it?
   
   Did the 1 ms include any metadata RPCs, and if so, should the metric be 
captured specifically for calls to SYSTEM.CATALOG or meta or something similar? 
In this way, we need not spend cycles measuring local work that is expected to 
be fast.
   
   Is this metric/log only going to be useful in the cases where we send RPCs, 
or do we think that we really are spending a lot of time locally in planning, 
without any network calls?
   
   Any JVM pause, for GC or otherwise, could easily last longer than 0.5 milliseconds, so the log message, if that's the choice, shouldn't imply that this is some kind of error or egregious performance scenario.
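   If a log line is the chosen mechanism, a guard along these lines keeps the wording neutral; the threshold value, logger, and method names below are placeholders, not this PR's code:

```java
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SlowPlanCreationLogSketch {

    private static final Logger LOG =
        LoggerFactory.getLogger(SlowPlanCreationLogSketch.class);

    // Placeholder threshold: a GC pause alone can exceed 0.5 ms, so the
    // message below is informational and does not claim an error occurred.
    private static final long THRESHOLD_NANOS = TimeUnit.MILLISECONDS.toNanos(1);

    static void maybeLogPlanCreation(long planCreationNanos, String tableName) {
        if (planCreationNanos > THRESHOLD_NANOS && LOG.isDebugEnabled()) {
            LOG.debug("Mutation plan creation for {} took {} us (may include JVM pauses)",
                tableName, TimeUnit.NANOSECONDS.toMicros(planCreationNanos));
        }
    }
}
```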
   
   I suppose we do want metrics on how many RPCs are required to serve the request, especially when those include additional RPCs such as system RPCs, which may not always be required. But those are more difficult to instrument, so we are choosing to instrument mutation planning only because it's a top-level "span" and we don't need to plumb the instrumentation all the way down through the code?
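   To make that trade-off concrete: a top-level timer needs no plumbing, whereas an RPC count needs some per-request context threaded through every RPC call site. The sketch below is purely illustrative and does not use any of Phoenix's actual classes:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class InstrumentationTradeoffSketch {

    // Cheap: one timer around the top-level call, nothing below it changes.
    static long timeTopLevel(Runnable executeMutation) {
        long start = System.nanoTime();
        executeMutation.run();
        return System.nanoTime() - start;
    }

    // Harder: an RPC count needs a per-request context that every RPC site
    // (metadata lookups, batch mutations, ...) must update explicitly.
    static final class RequestContext {
        final AtomicInteger rpcCount = new AtomicInteger();
        final AtomicInteger systemRpcCount = new AtomicInteger();
    }

    static void onRpc(RequestContext ctx, boolean isSystemRpc) {
        ctx.rpcCount.incrementAndGet();
        if (isSystemRpc) {
            ctx.systemRpcCount.incrementAndGet();
        }
    }
}
```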

