Hi,

On 07/15/2015 09:21 PM, Robert Haas wrote:
> On Tue, Jul 14, 2015 at 9:14 PM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
>> Firstly, do we really have good benchmarks and measurements? I really
>> doubt that. We do have some numbers for REINDEX, where we observed
>> 0.5-1% regression on noisy results from a Power machine (and we've
>> been unable to reproduce that on x86). I don't think that's a
>> representative benchmark, and I'd like to see more thorough
>> measurements. And I agreed to do this, once Jeff comes up with a new
>> version of the patch.
>>
>> Secondly, the question is whether the performance is impacted more by
>> the additional instructions, or by other things - say, random padding,
>> as was explained by Andrew Gierth in [1].
>>
>> I don't know whether that's happening in this patch, but if it is, it
>> seems rather strange to use this against this patch and not the others
>> (because there surely will be other patches causing similar issues).

> It matters, at least to me, an awful lot where we're adding lines of
> code. If you want to add modest amounts of additional code to CREATE
> TABLE or CHECKPOINT or something like that, I really don't care,
> because that stuff doesn't execute frequently enough to really matter
> to performance even if we add a significant chunk of overhead, and
> we're doing other expensive stuff at the same time, like fsync. The
> same can't be said about functions like LWLockAcquire and
> AllocSetAlloc that routinely show up at the top of CPU profiles.
>
> I agree that there is room to question whether the benchmarks I did
> are sufficient reason to think that the abstract concern that putting
> more code into a function might make it slower translates into a
> measurable drop in performance in practice. But I think when it comes
> to these very hot code paths, extreme conservatism is warranted. We
> can agree to disagree about that.

No, that is not what I tried to say. I certainly agree that we need to pay attention when adding stuff to hot paths - there's no disagreement about that.

The problem with random padding is that adding code in one place may shift the alignment of other pieces of code, affecting their performance. That is essentially the point of Andrew's explanation that I linked in my previous message.

And the question is - are the differences we've measured really due to code added to the hot path, or are they caused by random padding introduced by some other change (possibly in code that was not even executed during the test)?

I don't know how significant an impact this may have in this case, or how serious it is in general, but we're talking about a 0.5-1% difference on a noisy benchmark. And if such random padding really is a problem in practice, we've certainly missed plenty of other patches with the same impact.

Because effectively all Jeff's last patch did was add a single int64 counter to the MemoryContextData structure, incremented once for each allocated block (not chunk). I can't really imagine a cheaper solution, because there are very few operations faster than that. Even making it "opt-in" won't help much, because you'd have to check a flag, and you'd need two fields (flag + counter).
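In toy form, the whole mechanism is roughly this (made-up names, just a
sketch of the idea, not the actual patch):

/*
 * Toy sketch of the accounting: one int64 per context, bumped once per
 * malloc'ed block.  Chunks carved out of existing blocks never touch
 * the counter.
 */
#include <stdint.h>
#include <stdlib.h>

typedef struct ToyContext
{
    int64_t     mem_allocated; /* bytes in blocks owned by this context */
} ToyContext;

static void *
toy_alloc_block(ToyContext *context, size_t blksize)
{
    void       *block = malloc(blksize);

    if (block != NULL)
        context->mem_allocated += blksize;  /* the single extra add */

    return block;
}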

Of course, this assumes a "local counter" (i.e. not updating the counters in parent contexts), but I claim that's OK. I was previously pushing for eagerly updating all the parent contexts, so that finding out the allocated memory is fast even when there are many child contexts - a prime example was array_agg() before I fixed it. But I've changed my mind and now say "screw them". I claim that aggregates using a separate memory context for each group are a lost cause - they already suffer a significant overhead, and should be fixed just like we fixed array_agg().

That was effectively the outcome of the pgcon discussions - use the simple int64 counter, do the accounting for all contexts, and update only the local counter. For cases with many child contexts, finding out the total amount of allocated memory won't be cheap, but well - there's not much we can do about that.
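With only local counters, getting that total means walking the context
tree and summing the per-context values - something like this sketch
(the field names mimic MemoryContextData, but it's illustrative, not
the real structure):

/*
 * Sum the local counters over a context and all its children.
 */
#include <stdint.h>

typedef struct ToyContextTree
{
    int64_t     mem_allocated;          /* local counter only */
    struct ToyContextTree *firstchild;  /* head of the child list */
    struct ToyContextTree *nextchild;   /* next sibling */
} ToyContextTree;

static int64_t
toy_total_allocated(const ToyContextTree *context)
{
    int64_t     total = context->mem_allocated;
    const ToyContextTree *child;

    /* O(number of contexts) - the price for the cheap accounting */
    for (child = context->firstchild; child; child = child->nextchild)
        total += toy_total_allocated(child);

    return total;
}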

>> I know there was a proposal to force all aggregates to use regular
>> data types as aggregate state, but I can't see how that could work
>> without a significant performance penalty. For example array_agg() is
>> using 'internal' to pass ArrayBuildState - how do you turn that into a
>> regular data type without effectively serializing/deserializing the
>> whole array on every transition?

> That is a good point. Tom suggested that his new expanded-format stuff
> might be able to be adapted to the purpose, but I have no idea how
> possible that really is.

Me neither, and maybe there's a good solution for that, making my concerns unfounded.

>> What might be possible is extending the aggregate API with another
>> custom function returning the size of the aggregate state. So when
>> defining an aggregate using 'internal' for the aggregate state, you'd
>> specify transfunc, finalfunc and sizefunc. That seems doable, I guess.

> And infunc and outfunc. If we don't use the expanded-format stuff for
> aggregates with a type-internal transition state, we won't be able to
> use input and output functions to serialize and deserialize them,
> either.

Sure, that is indeed necessary for spilling the aggregates to disk. But for aggregates with a fixed-size state it's not needed (Jeff's HashAgg patch handles them fine), so I see this as a separate issue.
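FWIW, the reason I think the sizefunc could stay cheap for states like
ArrayBuildState is that it's just arithmetic over the struct, with no
serialization. A sketch, with a simplified stand-in for ArrayBuildState
and a made-up function name:

/*
 * Hypothetical sizefunc: report the memory held by an array_agg-style
 * transition state.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

typedef struct ToyArrayBuildState
{
    int64_t    *dvalues;        /* accumulated element values */
    bool       *dnulls;         /* per-element null flags */
    int         alen;           /* allocated length of both arrays */
} ToyArrayBuildState;

static size_t
toy_array_agg_sizefn(const ToyArrayBuildState *astate)
{
    /* plain arithmetic - no serialization of the accumulated array */
    return sizeof(ToyArrayBuildState)
        + (size_t) astate->alen * (sizeof(int64_t) + sizeof(bool));
}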


>> I find the memory accounting a much more elegant solution, though.

> I think we're just going to have to agree to disagree on that.

Sure, it's certainly a matter of personal taste.


--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

