debajyoti-truefoundry opened a new issue, #16619:
URL: https://github.com/apache/datafusion/issues/16619

   ### Describe the bug
   
   Initially, I thought `time_elapsed_scanning_total` was just the CPU time spent fetching and decoding the Parquet files, as described here:
   
https://github.com/apache/datafusion/blob/ebf49b4fe585820caaad3b8c04529633190f48be/docs/source/user-guide/explain-usage.md?plain=1#L211
   
   [Then, from this discussion](https://github.com/apache/datafusion/discussions/16572#discussioncomment-13607637), I learned that `time_elapsed_scanning_total` may also include the time spent executing the parent operator:
   
https://github.com/apache/datafusion/blob/9f3cc7b85b084102916acf489354d7f65f7c9e7c/datafusion/datasource/src/file_stream.rs#L463
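   To make that concrete, here is a toy, synchronous sketch of the measurement pattern that discussion describes (this is not the actual `FileStream` code; `TimedScan` and its fields are made up): a wall-clock "scanning" timer that is left running while the caller holds a batch also absorbs the parent operator's processing time, so a slow parent inflates the number even when fetching/decoding is fast.
   
   ```rust
   use std::time::{Duration, Instant};
   
   /// Hypothetical wrapper that mimics the pattern: the "scanning" timer keeps
   /// running between the moment a batch is handed out and the next pull.
   struct TimedScan<I: Iterator> {
       inner: I,
       handed_out_at: Option<Instant>,
       scanning_total: Duration,
   }
   
   impl<I: Iterator> TimedScan<I> {
       fn new(inner: I) -> Self {
           Self { inner, handed_out_at: None, scanning_total: Duration::ZERO }
       }
   }
   
   impl<I: Iterator> Iterator for TimedScan<I> {
       type Item = I::Item;
   
       fn next(&mut self) -> Option<I::Item> {
           // The timer was left running when the previous batch was handed to
           // the caller, so everything the caller (the parent operator) did
           // since then is folded into scanning_total here.
           if let Some(t) = self.handed_out_at.take() {
               self.scanning_total += t.elapsed();
           }
           let start = Instant::now();
           let item = self.inner.next(); // the actual "fetch + decode" work
           self.scanning_total += start.elapsed();
           if item.is_some() {
               // Leave the timer running while the caller holds the batch.
               self.handed_out_at = Some(Instant::now());
           }
           item
       }
   }
   
   fn main() {
       // A fast scan consumed by a slow parent: the 5 x 10 ms of consumer-side
       // work dominates scanning_total even though the "decode" is instant.
       let mut scan = TimedScan::new(0..5);
       while let Some(_batch) = scan.next() {
           std::thread::sleep(Duration::from_millis(10)); // parent operator work
       }
       println!("scanning_total = {:?}", scan.scanning_total);
   }
   ```
   
   If that pattern applies here, it would be consistent with Query 1 having a much heavier partial `AggregateExec` above the scan (13 `first_value` aggregates, `elapsed_compute=2.38s`) than Query 2 (`elapsed_compute=345ms`).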
   
   #### Query 1
   ```
   ProjectionExec: expr=[spanid@0 as spanid, traceid@1 as traceid, spanname@2 
as spanname, scopename@3 as scopename, timestamp@4 as timestamp, servicename@5 
as servicename, parentspanid@6 as parentspanid, duration@7 as duration, tags@8 
as tags, spankind@9 as spankind, mlrepoid@10 as mlrepoid, tracingprojectid@11 
as tracingprojectid, httpstatuscode@12 as httpstatuscode, matched@13 as 
matched], metrics=[output_rows=10, elapsed_compute=7.36µs]
     SortPreservingMergeExec: [Timestamp@14 ASC NULLS LAST], fetch=10, 
metrics=[output_rows=10, elapsed_compute=59.961µs]
       SortExec: TopK(fetch=10), expr=[Timestamp@14 ASC NULLS LAST], 
preserve_partitioning=[true], metrics=[output_rows=80, 
elapsed_compute=12.02048ms, row_replacements=799]
         ProjectionExec: 
expr=[first_value(tbl_a568758dc0934c4498693bcb9754dca6.SpanId)@1 as spanid, 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.TraceId)@2 as traceid, 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.SpanName)@3 as spanname, 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.ScopeName)@4 as scopename, 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.Timestamp)@5 as timestamp, 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.ServiceName)@6 as servicename, 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.ParentSpanId)@7 as 
parentspanid, first_value(tbl_a568758dc0934c4498693bcb9754dca6.Duration)@8 as 
duration, first_value(tbl_a568758dc0934c4498693bcb9754dca6.Tags)@9 as tags, 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.SpanKind)@10 as spankind, 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.MlRepoId)@11 as mlrepoid, 
first_value(TracingProjectId)@12 as tracingprojectid, 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.StatusCode)@13 as 
httpstatuscode, ((first_value(tbl_a568758dc0934c4498693bcb9754dca6.ServiceName)@6 = user-service OR 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.ServiceName)@6 = auth-service) 
AND first_value(tbl_a568758dc0934c4498693bcb9754dca6.SpanName)@3 LIKE %login% 
AND first_value(tbl_a568758dc0934c4498693bcb9754dca6.Duration)@8 >= 1000000 AND 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.Duration)@8 <= 5000000 AND 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.StatusCode)@13 >= 200 AND 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.StatusCode)@13 <= 299) IS NOT 
DISTINCT FROM true as matched, 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.Timestamp)@5 as Timestamp], 
metrics=[output_rows=492435, elapsed_compute=1.679348ms]
           AggregateExec: mode=FinalPartitioned, gby=[SpanId@0 as SpanId], 
aggr=[first_value(tbl_a568758dc0934c4498693bcb9754dca6.SpanId), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.TraceId), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.SpanName), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.ScopeName), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.Timestamp), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.ServiceName), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.ParentSpanId), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.Duration), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.Tags), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.SpanKind), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.MlRepoId), 
first_value(TracingProjectId), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.StatusCode)], 
metrics=[output_rows=492435, elapsed_compute=3.89974745s, spill_count=0, 
spilled_bytes=0, spilled_rows=0, peak_mem_used=1391310907]
             CoalesceBatchesExec: target_batch_size=20000, 
metrics=[output_rows=492435, elapsed_compute=954.019µs]
               RepartitionExec: partitioning=Hash([SpanId@0], 8), 
input_partitions=8, metrics=[fetch_time=3.035408986s, 
repartition_time=74.415606ms, send_time=165.047905ms]
                 AggregateExec: mode=Partial, gby=[SpanId@2 as SpanId], 
aggr=[first_value(tbl_a568758dc0934c4498693bcb9754dca6.SpanId), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.TraceId), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.SpanName), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.ScopeName), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.Timestamp), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.ServiceName), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.ParentSpanId), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.Duration), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.Tags), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.SpanKind), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.MlRepoId), 
first_value(TracingProjectId), 
first_value(tbl_a568758dc0934c4498693bcb9754dca6.StatusCode)], 
metrics=[output_rows=492435, elapsed_compute=2.378294013s, spill_count=0, 
spilled_bytes=0, spilled_rows=0, skipped_aggregation_rows=270299, 
peak_mem_used=633360309]
                   CoalesceBatchesExec: target_batch_size=20000, 
metrics=[output_rows=492435, elapsed_compute=63.812µs]
                     FilterExec: TraceId@2 = fb09d0c9b49136bb161464b3e32c5083 
AND ParentSpanId@4 = 662b7388122dfb79 AND Timestamp@0 >= 1744702200000000 AND 
TsBucketStart@1 >= 1745712000, projection=[Timestamp@0, TraceId@2, SpanId@3, 
ParentSpanId@4, ServiceName@5, SpanName@6, SpanKind@7, ScopeName@8, Duration@9, 
StatusCode@10, Tags@11, MlRepoId@12, TracingProjectId@13], 
metrics=[output_rows=492435, elapsed_compute=3.821363ms]
                       DeltaScan, metrics=[files_pruned=96, files_scanned=205]
                         DataSourceExec: file_groups={8 groups: [[...]]}, 
projection=[...], file_type=parquet, predicate=TraceId@2 = 
fb09d0c9b49136bb161464b3e32c5083 AND ParentSpanId@4 = 662b7388122dfb79 AND 
Timestamp@0 >= 1744702200000000 AND TsBucketStart@1 >= 1745712000, 
pruning_predicate=TraceId_null_count@2 != row_count@3 AND TraceId_min@0 <= 
fb09d0c9b49136bb161464b3e32c5083 AND fb09d0c9b49136bb161464b3e32c5083 <= 
TraceId_max@1 AND ParentSpanId_null_count@6 != row_count@3 AND 
ParentSpanId_min@4 <= 662b7388122dfb79 AND 662b7388122dfb79 <= 
ParentSpanId_max@5 AND Timestamp_null_count@8 != row_count@3 AND 
Timestamp_max@7 >= 1744702200000000 AND TsBucketStart_null_count@10 != 
row_count@3 AND TsBucketStart_max@9 >= 1745712000, 
required_guarantees=[ParentSpanId in (662b7388122dfb79), TraceId in 
(fb09d0c9b49136bb161464b3e32c5083)], metrics=[output_rows=492435, 
elapsed_compute=8ns, bytes_scanned=369890183, file_open_errors=0, 
file_scan_errors=0, num_predicate_creation_errors=0, page_index_rows_matched=5863737, page_index_rows_pruned=0, 
predicate_evaluation_errors=0, pushdown_rows_matched=12712344, 
pushdown_rows_pruned=5371302, row_groups_matched_bloom_filter=10, 
row_groups_matched_statistics=205, row_groups_pruned_bloom_filter=195, 
row_groups_pruned_statistics=0, bloom_filter_eval_time=8.413459ms, 
metadata_load_time=73.751446ms, page_index_eval_time=297.417µs, 
row_pushdown_eval_time=28.275053ms, statistics_eval_time=4.391944ms, 
time_elapsed_opening=21.034126ms, time_elapsed_processing=593.548393ms, 
time_elapsed_scanning_total=3.42025892s, 
time_elapsed_scanning_until_data=395.516949ms]
   ```
   * `output_rows=492435`
   * `bytes_scanned=369890183`
   * **`time_elapsed_scanning_total=3.42025892s`**
   
   #### Query 2
   ```
   ProjectionExec: expr=[spanid@0 as spanid, traceid@1 as traceid, spanname@2 
as spanname, scopename@3 as scopename, timestamp@4 as timestamp, servicename@5 
as servicename, parentspanid@6 as parentspanid, duration@7 as duration, tags@8 
as tags, spankind@9 as spankind, mlrepoid@10 as mlrepoid, tracingprojectid@11 
as tracingprojectid, httpstatuscode@12 as httpstatuscode, matched@13 as 
matched], metrics=[output_rows=10, elapsed_compute=2.43µs]
     SortPreservingMergeExec: [Timestamp@14 ASC NULLS LAST], fetch=10, 
metrics=[output_rows=10, elapsed_compute=29.09µs]
       SortExec: TopK(fetch=10), expr=[Timestamp@14 ASC NULLS LAST], 
preserve_partitioning=[true], metrics=[output_rows=80, 
elapsed_compute=12.585606ms, row_replacements=576]
         ProjectionExec: expr=[SpanId@0 as spanid, TraceId@1 as traceid, 
SpanName@2 as spanname, ScopeName@3 as scopename, Timestamp@4 as timestamp, 
ServiceName@5 as servicename, ParentSpanId@6 as parentspanid, Duration@7 as 
duration, Tags@8 as tags, SpanKind@9 as spankind, MlRepoId@10 as mlrepoid, 
TracingProjectId@11 as tracingprojectid, StatusCode@12 as httpstatuscode, 
((ServiceName@5 = user-service OR ServiceName@5 = auth-service) AND SpanName@2 
LIKE %login% AND Duration@7 >= 1000000 AND Duration@7 <= 5000000 AND 
StatusCode@12 >= 200 AND StatusCode@12 <= 299) IS NOT DISTINCT FROM true as 
matched, Timestamp@4 as Timestamp], metrics=[output_rows=492435, 
elapsed_compute=1.655463ms]
           AggregateExec: mode=FinalPartitioned, gby=[SpanId@0 as SpanId, 
TraceId@1 as TraceId, SpanName@2 as SpanName, ScopeName@3 as ScopeName, 
Timestamp@4 as Timestamp, ServiceName@5 as ServiceName, ParentSpanId@6 as 
ParentSpanId, Duration@7 as Duration, Tags@8 as Tags, SpanKind@9 as SpanKind, 
MlRepoId@10 as MlRepoId, TracingProjectId@11 as TracingProjectId, StatusCode@12 
as StatusCode, TsBucketStart@13 as TsBucketStart], aggr=[], 
ordering_mode=PartiallySorted([1, 6]), metrics=[output_rows=492435, 
elapsed_compute=345.710556ms, spill_count=0, spilled_bytes=0, spilled_rows=0, 
peak_mem_used=233345876]
             CoalesceBatchesExec: target_batch_size=20000, 
metrics=[output_rows=492435, elapsed_compute=631.275µs]
               RepartitionExec: partitioning=Hash([SpanId@0, TraceId@1, 
SpanName@2, ScopeName@3, Timestamp@4, ServiceName@5, ParentSpanId@6, 
Duration@7, Tags@8, SpanKind@9, MlRepoId@10, TracingProjectId@11, 
StatusCode@12, TsBucketStart@13], 8), input_partitions=8, 
metrics=[fetch_time=954.934766ms, repartition_time=64.976695ms, 
send_time=16.871715ms]
                 AggregateExec: mode=Partial, gby=[SpanId@0 as SpanId, 
TraceId@1 as TraceId, SpanName@2 as SpanName, ScopeName@3 as ScopeName, 
Timestamp@4 as Timestamp, ServiceName@5 as ServiceName, ParentSpanId@6 as 
ParentSpanId, Duration@7 as Duration, Tags@8 as Tags, SpanKind@9 as SpanKind, 
MlRepoId@10 as MlRepoId, TracingProjectId@11 as TracingProjectId, StatusCode@12 
as StatusCode, TsBucketStart@13 as TsBucketStart], aggr=[], 
ordering_mode=PartiallySorted([1, 6]), metrics=[output_rows=492435, 
elapsed_compute=345.95653ms, spill_count=0, spilled_bytes=0, spilled_rows=0, 
peak_mem_used=204458097]
                   ProjectionExec: expr=[SpanId@3 as SpanId, TraceId@2 as 
TraceId, SpanName@6 as SpanName, ScopeName@8 as ScopeName, Timestamp@0 as 
Timestamp, ServiceName@5 as ServiceName, ParentSpanId@4 as ParentSpanId, 
Duration@9 as Duration, Tags@11 as Tags, SpanKind@7 as SpanKind, MlRepoId@12 as 
MlRepoId, CAST(TracingProjectId@13 AS Utf8) as TracingProjectId, StatusCode@10 
as StatusCode, TsBucketStart@1 as TsBucketStart], metrics=[output_rows=492435, 
elapsed_compute=4.419444ms]
                     CoalesceBatchesExec: target_batch_size=20000, 
metrics=[output_rows=492435, elapsed_compute=51.84µs]
                       FilterExec: TraceId@2 = fb09d0c9b49136bb161464b3e32c5083 
AND ParentSpanId@4 = 662b7388122dfb79 AND Timestamp@0 >= 1744702200000000 AND 
TsBucketStart@1 >= 1745712000, metrics=[output_rows=492435, 
elapsed_compute=3.624526ms]
                         DeltaScan, metrics=[files_pruned=96, files_scanned=205]
                           DataSourceExec: file_groups={8 groups: [[...], 
...]}, projection=[...], file_type=parquet, predicate=TraceId@2 = 
fb09d0c9b49136bb161464b3e32c5083 AND ParentSpanId@4 = 662b7388122dfb79 AND 
Timestamp@0 >= 1744702200000000 AND TsBucketStart@1 >= 1745712000, 
pruning_predicate=TraceId_null_count@2 != row_count@3 AND TraceId_min@0 <= 
fb09d0c9b49136bb161464b3e32c5083 AND fb09d0c9b49136bb161464b3e32c5083 <= 
TraceId_max@1 AND ParentSpanId_null_count@6 != row_count@3 AND 
ParentSpanId_min@4 <= 662b7388122dfb79 AND 662b7388122dfb79 <= 
ParentSpanId_max@5 AND Timestamp_null_count@8 != row_count@3 AND 
Timestamp_max@7 >= 1744702200000000 AND TsBucketStart_null_count@10 != 
row_count@3 AND TsBucketStart_max@9 >= 1745712000, 
required_guarantees=[ParentSpanId in (662b7388122dfb79), TraceId in 
(fb09d0c9b49136bb161464b3e32c5083)], metrics=[output_rows=492435, 
elapsed_compute=8ns, bytes_scanned=369887916, file_open_errors=0, 
file_scan_errors=0, num_predicate_creation_errors=0, page_index_rows_matched=5863737, page_index_rows_pruned=0, 
predicate_evaluation_errors=0, pushdown_rows_matched=12712344, 
pushdown_rows_pruned=5371302, row_groups_matched_bloom_filter=10, 
row_groups_matched_statistics=205, row_groups_pruned_bloom_filter=195, 
row_groups_pruned_statistics=0, bloom_filter_eval_time=8.39861ms, 
metadata_load_time=62.433084ms, page_index_eval_time=292.354µs, 
row_pushdown_eval_time=28.451684ms, statistics_eval_time=4.473339ms, 
time_elapsed_opening=8.687883ms, time_elapsed_processing=589.865242ms, 
time_elapsed_scanning_total=750.34081ms, 
time_elapsed_scanning_until_data=403.341742ms]
   ```
   * `output_rows=492435`
   * `bytes_scanned=369887916`
   * **`time_elapsed_scanning_total=750.34081ms`**
   
   As you can see, for essentially the same output rows and bytes scanned, `time_elapsed_scanning_total` reports very different values (3.42 s vs. 750 ms).
   
   Which metric should I use to measure the latency added to the query response time by Parquet I/O (fetching from the object store or disk), decompression, and deserialization?
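   For reference, here is a rough sketch of how one might pull the scan-level metrics (e.g. `time_elapsed_processing`, `metadata_load_time`, `bytes_scanned`) out of an executed plan programmatically and compare them against `time_elapsed_scanning_total`. It is written against DataFusion 47-era APIs, but the table name, file path, and query are placeholders, and exact method names may differ between versions.
   
   ```rust
   use std::sync::Arc;
   
   use datafusion::error::{DataFusionError, Result};
   use datafusion::physical_plan::{accept, collect, ExecutionPlan, ExecutionPlanVisitor};
   use datafusion::prelude::*;
   
   /// Visitor that prints the metrics of every leaf (data source) operator.
   struct ScanMetrics;
   
   impl ExecutionPlanVisitor for ScanMetrics {
       type Error = DataFusionError;
   
       fn pre_visit(&mut self, plan: &dyn ExecutionPlan) -> Result<bool, Self::Error> {
           if plan.children().is_empty() {
               if let Some(metrics) = plan.metrics() {
                   // aggregate_by_name() merges per-partition values into the
                   // same form shown in the metrics=[...] blocks above.
                   println!("{}: {}", plan.name(), metrics.aggregate_by_name());
               }
           }
           Ok(true) // keep walking the children
       }
   }
   
   #[tokio::main]
   async fn main() -> Result<()> {
       let ctx = SessionContext::new();
       // Placeholder table/file; register the real spans table instead.
       ctx.register_parquet("spans", "spans.parquet", ParquetReadOptions::default())
           .await?;
   
       let df = ctx.sql("SELECT count(*) FROM spans").await?;
       let plan: Arc<dyn ExecutionPlan> = df.create_physical_plan().await?;
   
       // Execute the plan so the operator metrics get populated.
       let _batches = collect(Arc::clone(&plan), ctx.task_ctx()).await?;
   
       // Inspect per-operator metrics after execution.
       accept(plan.as_ref(), &mut ScanMetrics)?;
       Ok(())
   }
   ```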
   
   
   ### To Reproduce
   
   _No response_
   
   ### Expected behavior
   
   _No response_
   
   ### Additional context
   
   I am using DataFusion 47.

