On Feb 16, 2019, at 6:44 PM, Tomas Vondra wrote:
>
> On 2/17/19 3:40 AM, David Fetter wrote:
>>
>> As someone not volunteering to do any of the work, I think it'd be a
>> nice thing to have. How large an effort would you guess it would be
>> to build a proof of concept?
>
> I don't quite understand what is meant by "actual cost metric" and/or
> how is that different from running EXPLAIN ANALYZE.
Here is an example:

Hash Join  (cost=3.92..18545.70 rows=34 width=32) (actual cost=3.92..18500 time=209.820..1168.831 rows=47 loops=3)

Now we have the actual cost next to the actual time. Time can have high
variance (a change in system load, or just noise), but I think the actual
cost would be less likely to change due to external factors.

On 2/17/19 3:40 AM, David Fetter wrote:
> On Sat, Feb 16, 2019 at 03:10:44PM -0800, Donald Dong wrote:
>> Hi,
>>
>> When explaining a query, I think knowing the actual rows and pages
>> in addition to the operation type (e.g. seqscan) would be enough to
>> calculate the actual cost. The actual cost metric could be useful
>> when we want to look into how far off the planner's estimate is, and
>> the correlation between time and cost. Would it be a feature worth
>> considering?
>
> As someone not volunteering to do any of the work, I think it'd be a
> nice thing to have. How large an effort would you guess it would be
> to build a proof of concept?

Intuitively it does not feel very complicated to me, but the interface we
use while planning (PlannerInfo, RelOptInfo) is different from the one we
use while explaining a query (QueryDesc). Since I'm very new, doing it
myself would probably take many iterations to get right. Still, I'm happy
to work on a proof of concept if no one else wants to.

regards,
Donald Dong
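P.S. To make the idea concrete, here is a rough sketch (Python rather than
the backend's C, and only for the seqscan case) of plugging the *observed*
pages and rows into the planner's cost formula, mirroring cost_seqscan() in
costsize.c. The constants are PostgreSQL's default seq_page_cost and
cpu_tuple_cost; per-tuple qual costs and parallelism are omitted for
simplicity:

```python
# Sketch only: recompute a seqscan's cost from actual page/row counts,
# using PostgreSQL's default planner cost constants.
SEQ_PAGE_COST = 1.0    # default seq_page_cost: one sequentially fetched page
CPU_TUPLE_COST = 0.01  # default cpu_tuple_cost: processing one tuple

def actual_seqscan_cost(actual_pages: int, actual_rows: int) -> float:
    """Total seqscan cost given observed page and row counts
    (disk cost for the pages read plus CPU cost per tuple)."""
    disk_cost = SEQ_PAGE_COST * actual_pages
    cpu_cost = CPU_TUPLE_COST * actual_rows
    return disk_cost + cpu_cost

# A scan that actually touched 1000 pages and returned 50000 rows:
print(actual_seqscan_cost(1000, 50000))  # 1000.0 + 500.0 = 1500.0
```

The same numbers computed from the estimated pages and rows give the cost
the planner already prints, so the two can be compared directly.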