Hello Xuebin Su, Csaba Ringhofer, Impala Public Jenkins,
I'd like you to reexamine a change. Please visit
http://gerrit.cloudera.org:8080/22062
to look at the new patch set (#6).
Change subject: IMPALA-13487: Add profile counters for memory allocation
......................................................................
IMPALA-13487: Add profile counters for memory allocation
This patch adds profile counters to help identify memory-bound queries,
i.e. queries whose duration is mostly spent in memory operations.
The following counters are added:
- Thread level page faults: TotalThreadsMinorPageFaults,
TotalThreadsMajorPageFaults.
- MemPool counters for tuple_mem_pool and aux_mem_pool of the scratch
batch in columnar scanners:
- ScratchBatchMemAllocTotalDuration
- ScratchBatchMemAllocMaxDuration
- ScratchBatchMemAllocTotalBytes
- ScratchBatchMemAllocMaxBytes
- ScratchBatchMemAllocTimes
- ScratchBatchMemFreeDuration
- ScratchBatchMemFreeTimes
- Duration of the RowBatch::Reset() calls in FragmentInstanceState::Exec:
RowBatchResetTime.
- Duration in HdfsColumnarScanner::GetCollectionMemory(), which includes
the memory allocation for collection values and the memcpy when doubling
the tuple buffer: MaterializeCollectionGetMemTime.
Here is an example of a memory-bound query:
Fragment Instance
- RowBatchResetTime: 433.001ms
- TotalStorageWaitTime: 34.000ms
- TotalThreadsInvoluntaryContextSwitches: 2 (2)
- TotalThreadsMajorPageFaults: 0 (0)
- TotalThreadsMinorPageFaults: 563.24K (563242)
- TotalThreadsTotalWallClockTime: 9s630ms
- TotalThreadsSysTime: 1s405ms
- TotalThreadsUserTime: 1s892ms
- TotalThreadsVoluntaryContextSwitches: 5.45K (5453)
- TotalTime: 9s631ms
...
HDFS_SCAN_NODE (id=0):
- DecompressionTime: 1s260ms
- MaterializeCollectionGetMemTime: 5s082ms
- MaterializeTupleTime: 6s699ms
- ScannerIoWaitTime: 34.000ms
- ScratchBatchMemAllocBytes: 1.18 GB (1262827520)
- ScratchBatchMemAllocMaxDuration: 10.000ms
- ScratchBatchMemAllocTimes: 2.54K (2543)
- ScratchBatchMemAllocTotalDuration: 5s099ms
- ScratchBatchMemFreeDuration: 0.000ns
- ScratchBatchMemFreeTimes: 0 (0)
- TotalRawHdfsOpenFileTime: 0.000ns
- TotalRawHdfsReadTime: 319.000ms
- TotalTime: 9s147ms
The fragment instance took 9s631ms to finish, of which 433.001ms was
spent resetting the final RowBatch. The majority of the time was spent
in the scan node (9s147ms), mostly in DecompressionTime +
MaterializeTupleTime + ScannerIoWaitTime + TotalRawHdfsReadTime, with
MaterializeTupleTime (6s699ms) dominating.
ScratchBatchMemAllocTotalDuration shows that invoking std::malloc()
while materializing the scratch batches took 5s099ms.
MaterializeCollectionGetMemTime shows that allocating memory for
collections and copying memory when doubling the tuple buffer took
5s082ms. So materializing the collections took most of the time.
The other observation is that TotalThreadsTotalWallClockTime is much
higher than TotalThreadsSysTime + TotalThreadsUserTime, and there is a
large number of TotalThreadsVoluntaryContextSwitches. So the threads are
waiting on resources (e.g. locks) for a long duration. In the above
case, they are waiting on locks in tcmalloc memory allocation (an
off-cpu flame graph is needed to reveal this).
Implementation of the MemPool counters
Add an optional MemPoolCounters owned by the MemPool's caller (e.g. the
scanner) so the counters can have a longer life cycle than the MemPool.
Note that some counters are updated in the destructor of MemPool, so
they need the longer life cycle. MemPoolCounters is currently an
optional parameter of the MemPool constructor. MemPool is widely used in
the code base, so callers that don't need to track MemPool counters keep
passing nullptr. Currently, only the MemPools of the scratch batch in
columnar scanners are tracked.
This patch also moves GetCollectionMemory() from HdfsScanner to
HdfsColumnarScanner since it's only used by parquet and orc scanners.
Tests
- manually tested by reproducing memory-bound queries
- added e2e tests
Change-Id: I982315d96e6de20a3616f3bd2a2b4866d1ff4710
---
M be/src/exec/hdfs-columnar-scanner.cc
M be/src/exec/hdfs-columnar-scanner.h
M be/src/exec/hdfs-scanner.cc
M be/src/exec/hdfs-scanner.h
M be/src/exec/scratch-tuple-batch.h
M be/src/runtime/fragment-instance-state.cc
M be/src/runtime/mem-pool.cc
M be/src/runtime/mem-pool.h
M be/src/util/impala-profile-tool.cc
M be/src/util/runtime-profile-counters.h
M be/src/util/runtime-profile.cc
M tests/query_test/test_observability.py
12 files changed, 214 insertions(+), 37 deletions(-)
git pull ssh://gerrit.cloudera.org:29418/Impala-ASF refs/changes/62/22062/6
--
To view, visit http://gerrit.cloudera.org:8080/22062
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings
Gerrit-Project: Impala-ASF
Gerrit-Branch: master
Gerrit-MessageType: newpatchset
Gerrit-Change-Id: I982315d96e6de20a3616f3bd2a2b4866d1ff4710
Gerrit-Change-Number: 22062
Gerrit-PatchSet: 6
Gerrit-Owner: Quanlong Huang <[email protected]>
Gerrit-Reviewer: Csaba Ringhofer <[email protected]>
Gerrit-Reviewer: Impala Public Jenkins <[email protected]>
Gerrit-Reviewer: Quanlong Huang <[email protected]>
Gerrit-Reviewer: Xuebin Su <[email protected]>