YuriyKrasilnikov opened a new pull request, #37516:
URL: https://github.com/apache/superset/pull/37516
### SUMMARY
Fixes #37507: Chart data API now returns per-phase timing breakdown.
**Before:** Timing computed server-side → sent to `STATS_LOGGER` → discarded
by default (`DummyStatsLogger`). No timing in API response.
**After:** `timing` object included in response with per-phase breakdown.
## Problem Analysis
### Root Cause (from issue #37507)
| Problem | Description |
|---------|-------------|
| Inconsistency | SQL Lab shows execution time, but `/api/v1/chart/data` does not |
| Computed data discarded | `@statsd_metrics` computes timing → sends to `STATS_LOGGER` → not in response |
| No user diagnostics | Slow chart → loading spinner → no way to identify the bottleneck |
| Silent metric loss | Default `DummyStatsLogger` discards all `timing()` calls |
### Key Code Location
`QueryContextProcessor.get_df_payload()` in
`superset/common/query_context_processor.py` — the main query lifecycle method
that handles validation, cache lookup, DB execution, and result processing.
## Solution: Per-Phase Timing Instrumentation
### Architecture
Wrap each phase in `get_df_payload()` with `time.perf_counter()`:
```
get_df_payload()
├── Phase: validate → timing["validate_ms"]
├── Phase: cache_lookup → timing["cache_lookup_ms"]
├── Phase: db_execution → timing["db_execution_ms"] (only on cache miss)
├── Phase: result_processing → timing["result_processing_ms"]
├── Total → timing["total_ms"]
└── Cache status → timing["is_cached"]
```
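The wrapping described above can be sketched as follows. This is an illustrative sketch only, not the actual Superset code: the phase bodies are stubbed out with no-ops, and `get_df_payload_sketch` and its `timed` helper are hypothetical names introduced here for the example.

```python
import time


def get_df_payload_sketch(cache_hit: bool = False) -> dict:
    """Sketch of per-phase timing with time.perf_counter().

    Mirrors the phase structure in the tree above; real phase bodies
    (validation, cache lookup, SQL execution, processing) are stubbed.
    """
    timing: dict = {}
    total_start = time.perf_counter()

    def timed(phase: str, fn):
        # Measure one phase and record its duration in milliseconds.
        start = time.perf_counter()
        result = fn()
        timing[f"{phase}_ms"] = round((time.perf_counter() - start) * 1000, 2)
        return result

    timed("validate", lambda: None)           # query validation
    timed("cache_lookup", lambda: None)       # cache key lookup
    if not cache_hit:
        timed("db_execution", lambda: None)   # run SQL only on cache miss
    timed("result_processing", lambda: None)  # post-process the DataFrame
    timing["total_ms"] = round((time.perf_counter() - total_start) * 1000, 2)
    timing["is_cached"] = cache_hit
    return timing
```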
### Response Format
```json
{
"result": [{
"cache_key": "...",
"data": [...],
"timing": {
"validate_ms": 2.1,
"cache_lookup_ms": 5.3,
"db_execution_ms": 850.0,
"result_processing_ms": 12.5,
"total_ms": 870.1,
"is_cached": false
}
}]
}
```
- `db_execution_ms` present only on cache miss
- All numeric values are `>= 0`, rounded to 2 decimal places
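A client consuming the response shape above might extract the timing object like this (a minimal sketch; `extract_timing` is a hypothetical helper, and the sample body is constructed here, not captured from a live server):

```python
import json


def extract_timing(response_body: str) -> dict:
    """Pull the timing object out of a /api/v1/chart/data response body.

    Returns an empty dict when the server does not include timing
    (e.g. a Superset version without this change).
    """
    payload = json.loads(response_body)
    return payload["result"][0].get("timing", {})


sample = (
    '{"result": [{"cache_key": "k", "data": [], '
    '"timing": {"total_ms": 870.1, "is_cached": false}}]}'
)
print(extract_timing(sample))  # {'total_ms': 870.1, 'is_cached': False}
```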
### Slow Query Logging
New config `CHART_DATA_SLOW_QUERY_THRESHOLD_MS` (default: `None` = disabled).
When set, queries whose total time exceeds the threshold emit a WARNING with
the full timing breakdown:
```
WARNING: Slow chart query: total=1250ms validate=2ms cache_lookup=5ms
db_execution=1200ms result_processing=43ms is_cached=False
```
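The threshold check can be sketched as below. This is a hedged illustration of the logic, not the PR's actual code; `maybe_log_slow_query` is a hypothetical name, and the log format follows the example message above:

```python
import logging
from typing import Optional

logger = logging.getLogger(__name__)


def maybe_log_slow_query(timing: dict, threshold_ms: Optional[float]) -> bool:
    """Emit a WARNING when total query time exceeds the configured
    threshold (CHART_DATA_SLOW_QUERY_THRESHOLD_MS in this PR).

    A threshold of None disables the check. Returns True if a log
    line was emitted.
    """
    if threshold_ms is None or timing.get("total_ms", 0) <= threshold_ms:
        return False
    logger.warning(
        "Slow chart query: total=%sms validate=%sms cache_lookup=%sms "
        "db_execution=%sms result_processing=%sms is_cached=%s",
        timing.get("total_ms"),
        timing.get("validate_ms"),
        timing.get("cache_lookup_ms"),
        timing.get("db_execution_ms", 0),
        timing.get("result_processing_ms"),
        timing.get("is_cached"),
    )
    return True
```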
### Per-Phase Metrics
Each phase's duration is also emitted via `STATS_LOGGER.timing()` under keys
such as `chart_data.validate_ms`, `chart_data.cache_lookup_ms`, etc.
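The metric emission can be sketched as follows. The `DummyStatsLogger` stub here mimics the no-op behavior of Superset's default logger (which is what made the metrics invisible before this PR); `emit_phase_metrics` is a hypothetical helper, and the `chart_data.*` key names follow the description above:

```python
class DummyStatsLogger:
    """Mimics Superset's default stats logger: timing() is a no-op,
    so timing metrics are silently discarded unless a real logger
    (e.g. StatsD-backed) is configured."""

    def timing(self, key: str, value: float) -> None:
        pass


def emit_phase_metrics(stats_logger, timing: dict) -> list:
    """Forward each *_ms entry to the stats logger under a
    chart_data.* prefix; returns the emitted keys."""
    emitted = []
    for key, value in timing.items():
        if key.endswith("_ms"):
            stats_logger.timing(f"chart_data.{key}", value)
            emitted.append(f"chart_data.{key}")
    return emitted
```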
## Changes Summary
| File | Change |
|------|--------|
| `superset/common/query_context_processor.py` | Wrap phases with `time.perf_counter()`, emit metrics, slow query log |
| `superset/common/query_actions.py` | Pass `timing` through the RESULTS result_type |
| `superset/config.py` | Add `CHART_DATA_SLOW_QUERY_THRESHOLD_MS` config |
| `tests/unit_tests/common/test_query_context_processor_timing.py` | 5 new tests |
### New Tests
| Test | Scenario |
|------|----------|
| `test_timing_present_in_payload` | `timing` dict exists in result |
| `test_timing_values_are_non_negative` | All ms values >= 0 |
| `test_timing_no_db_execution_on_cache_hit` | `db_execution_ms` absent on cache hit |
| `test_timing_has_db_execution_on_cache_miss` | `db_execution_ms` present on cache miss |
| `test_slow_query_logging` | WARNING emitted when threshold exceeded |
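The first four scenarios in the table reduce to a few invariants on the `timing` dict. The checker below is a standalone illustration of those invariants, not the actual tests (which live in `tests/unit_tests/common/test_query_context_processor_timing.py` and exercise `QueryContextProcessor` directly); `check_timing_payload` is a hypothetical name:

```python
def check_timing_payload(timing: dict, cache_hit: bool) -> None:
    """Assert the invariants the new unit tests verify."""
    assert isinstance(timing, dict)                 # timing present
    for key, value in timing.items():
        if key.endswith("_ms"):
            assert value >= 0                       # all ms values >= 0
    if cache_hit:
        assert "db_execution_ms" not in timing      # absent on cache hit
    else:
        assert "db_execution_ms" in timing          # present on cache miss


check_timing_payload({"total_ms": 10.0, "is_cached": True}, cache_hit=True)
check_timing_payload({"total_ms": 10.0, "db_execution_ms": 8.0},
                     cache_hit=False)
```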
### BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF
Not applicable (API-only change, no UI)
### TESTING INSTRUCTIONS
1. Start Superset
2. Open any dashboard with charts
3. Inspect network → `/api/v1/chart/data` response
4. **Before:** No `timing` field
5. **After:** `timing` object with phase breakdown
### ADDITIONAL INFORMATION
- [x] Has associated issue: Fixes #37507
- [x] Required feature flags: N/A
- [x] Changes UI: No
- [x] Includes DB Migration: No
- [x] Introduces new feature or API: Yes (timing field in response + config)
- [x] Removes existing feature or API: No
Related discussions: #18431, #13044