Martijn van Oosterhout <[email protected]> writes:
> On Wed, Jun 07, 2006 at 09:53:32AM -0400, Tom Lane wrote:
>> If we do have to revert, I'd propose that we pursue the notion of
>> interrupt-driven sampling like gprof uses.
> How would that work? You could then estimate how much time was spent in
> each node, but you no longer have any idea about when they were entered
> or left.
Hm? It would work the same way gprof works. I'm imagining something
along the lines of
global variable:

    volatile Instrumentation *current_instr = NULL;

also add an "Instrumentation *parent_instr" field to Instrumentation

InstrStartNode does:

    instr->parent_instr = current_instr;
    current_instr = instr;

InstrStopNode restores the previous value of the global:

    current_instr = instr->parent_instr;

timer interrupt routine does this once every few milliseconds:

    total_samples++;
    for (instr = current_instr; instr; instr = instr->parent_instr)
        instr->samples++;
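
Fleshing that out a little, for concreteness only: the timer could be armed
with setitimer(ITIMER_PROF)/SIGPROF, roughly as below.  StartSampling and
SamplingSignalHandler are invented names, not existing backend functions,
and the 5 ms interval is just a placeholder.

    #include <signal.h>
    #include <sys/time.h>

    typedef struct Instrumentation
    {
        struct Instrumentation *parent_instr;  /* enclosing node, or NULL */
        long        samples;                   /* ticks charged to this node */
        /* ... existing counters: ntuples, nloops, etc ... */
    } Instrumentation;

    static volatile Instrumentation *current_instr = NULL;
    static volatile long total_samples = 0;

    /* runs every few milliseconds while the query executes */
    static void
    SamplingSignalHandler(int signo)
    {
        volatile Instrumentation *instr;

        total_samples++;
        /* charge this tick to the current node and all of its ancestors */
        for (instr = current_instr; instr; instr = instr->parent_instr)
            instr->samples++;
    }

    /* arm an interval timer that delivers SIGPROF every 5 ms */
    static void
    StartSampling(void)
    {
        struct sigaction sa;
        struct itimerval it;

        sa.sa_handler = SamplingSignalHandler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;
        sigaction(SIGPROF, &sa, NULL);

        it.it_interval.tv_sec = 0;
        it.it_interval.tv_usec = 5000;
        it.it_value = it.it_interval;
        setitimer(ITIMER_PROF, &it, NULL);
    }
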
You still count executions and tuples in InstrStartNode/InstrStopNode,
but you never call gettimeofday there. You *do* call gettimeofday at
the beginning and end of the whole query to measure the total runtime,
and then you blame a fraction samples/total_samples of that on each
node.
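
Concretely, the per-node figure printed by EXPLAIN would just be the node's
share of the measured total, something like the sketch below
(NodeElapsedTime is an invented name; the numbers in the comment are only an
example):

    /*
     * The query's wall-clock time, from the two gettimeofday() calls,
     * is split in proportion to the ticks each node accumulated.
     * For example, 150 of 400 ticks over a 2.0 s query charges
     * 2.0 * 150/400 = 0.75 s to that node.
     */
    static double
    NodeElapsedTime(long node_samples, long total_samples, double total_runtime)
    {
        if (total_samples <= 0)
            return 0.0;         /* query too short to catch any ticks */
        return total_runtime * (double) node_samples / (double) total_samples;
    }
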
The bubble-up of sample counts to parent nodes could perhaps be done
while printing the results instead of on-the-fly as sketched above, but
the above seems simpler.
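
If we did go the print-time route, the signal handler would only have to do
current_instr->samples++, and the EXPLAIN code would roll the counts up while
walking the plan tree, roughly like this (SumSubtreeSamples is an invented
name, and the structs are trimmed stand-ins for the executor's real ones):

    typedef struct Instrumentation
    {
        long        samples;
    } Instrumentation;

    typedef struct PlanState
    {
        Instrumentation *instrument;
        struct PlanState *lefttree;
        struct PlanState *righttree;
    } PlanState;

    /* roll child sample counts up into each parent before printing */
    static long
    SumSubtreeSamples(PlanState *node)
    {
        long        total = node->instrument->samples;

        if (node->lefttree)
            total += SumSubtreeSamples(node->lefttree);
        if (node->righttree)
            total += SumSubtreeSamples(node->righttree);
        /* real code would also have to visit initPlan/subPlan children */
        node->instrument->samples = total;
        return total;
    }
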
regards, tom lane