Hi,

In the thread [1] dealing with the hashjoin bug introduced in 9.5, Tom voiced his dislike of dense_alloc. I kinda agree with him that introducing "local allocators" may not be the best idea, and as dense_alloc was introduced by me, I've been playing with the idea of wrapping it in a regular memory context, perhaps with some restrictions (e.g. no pfree). But I'm having trouble with that approach ...

Let me quickly explain the idea behind dense_alloc. When building the tuple hash table in hash join, we allocate a large chunk of memory using palloc (~32kB) and then store the tuples into that chunk ourselves, without a palloc call per tuple. Each tuple already has its length in the header, so we don't need a per-tuple chunk header. Also, we don't round allocations up to 2^k sizes the way aset.c does; instead we pack the tuples densely.
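A simplified sketch of the allocation path (made-up field names, no handling of over-sized tuples etc. - the actual code lives in nodeHash.c):

    typedef struct HashMemoryChunkData
    {
        struct HashMemoryChunkData *next;   /* chunks form a simple list */
        Size        maxlen;     /* size of the data[] buffer */
        Size        used;       /* bytes of data[] already used */
        char        data[FLEXIBLE_ARRAY_MEMBER];    /* tuples stored here */
    } HashMemoryChunkData;

    static void *
    dense_alloc(HashJoinTable hashtable, Size size)
    {
        HashMemoryChunkData *chunk = hashtable->chunks;
        char       *ptr;

        /* keep tuples aligned, so the sequential walk can rely on it */
        size = MAXALIGN(size);

        /* not enough space in the current chunk => start a new one */
        if (chunk == NULL || (chunk->maxlen - chunk->used) < size)
        {
            chunk = (HashMemoryChunkData *)
                MemoryContextAlloc(hashtable->batchCxt,
                                   offsetof(HashMemoryChunkData, data) +
                                   HASH_CHUNK_SIZE);
            chunk->maxlen = HASH_CHUNK_SIZE;
            chunk->used = 0;
            chunk->next = hashtable->chunks;
            hashtable->chunks = chunk;
        }

        /* carve the next piece out of the current chunk */
        ptr = chunk->data + chunk->used;
        chunk->used += size;

        return ptr;
    }

So it's a single palloc per ~32kB of tuples, with no per-tuple overhead beyond the tuple header itself.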

This means we can't do repalloc or pfree on individual tuples, but that's fine - we never did repalloc in hashjoin anyway, and pfree is only needed when increasing the number of batches. With dense_alloc we can instead simply walk through the tuples as stored in the allocated chunks (sketched below), which has the nice benefit that the access is sequential, making memory prefetching more effective than with the old code (walking through buckets in effectively random order). Also, no freelists and similar structures to maintain.
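The walk during the nbatch increase looks roughly like this (simplified - the real loop in ExecHashIncreaseNumBatches re-hashes each tuple and either copies it into a new chunk or dumps it into a batch file):

    HashMemoryChunkData *oldchunks = hashtable->chunks;

    /* surviving tuples get copied into freshly allocated chunks */
    hashtable->chunks = NULL;

    while (oldchunks != NULL)
    {
        HashMemoryChunkData *nextchunk = oldchunks->next;
        Size        idx = 0;

        /* process tuples in the order they were stored */
        while (idx < oldchunks->used)
        {
            HashJoinTuple tuple = (HashJoinTuple) (oldchunks->data + idx);
            Size        size = MAXALIGN(HJTUPLE_OVERHEAD +
                                        HJTUPLE_MINTUPLE(tuple)->t_len);

            /* ... recompute batch, keep the tuple or write it out ... */

            idx += size;    /* length from the tuple header drives the walk */
        }

        /* release the old chunk as a whole - the only pfree we need */
        pfree(oldchunks);
        oldchunks = nextchunk;
    }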

So dense_alloc has several benefits:

(a) memory reduction thanks to eliminating StandardChunkHeader (which is 16B per tuple, quite noticeable for narrow tuples)

(b) memory reduction thanks to packing the tuples densely (no free space wasted by rounding each allocation up to 2^k)

(c) improved efficiency thanks to sequential memory accesses (compared to the random accesses caused by accessing tuples through buckets)

Per the measurements in thread [2], (a) and (b) may reduce memory requirements by 50% in some cases. I also vaguely remember doing benchmarks for (c) and seeing measurable improvements, but I don't see the numbers in that thread, so either they were posted somewhere else or not at all :-/
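To illustrate the 50% with made-up numbers: a 32B tuple allocated by palloc gets a 16B StandardChunkHeader, and the 48B total is then rounded up to the next 2^k, i.e. 64B - exactly twice the 32B it occupies when packed densely.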

Anyway, I'm explaining this because I think it's important that the new reworked code achieves the same benefits. But when trying to implement it as a special memory context, I quickly ran into the requirement that each chunk must have a chunk header [3], which would negate (a).
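For reference, that header looks like this (leaving out the extra field under MEMORY_CONTEXT_CHECKING), which is where the 16B in (a) come from on 64-bit platforms:

    typedef struct StandardChunkHeader
    {
        MemoryContext context;  /* owning context */
        Size        size;       /* size of data space allocated in chunk */
    } StandardChunkHeader;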

I know it was proposed to only include the chunk header in assert-enabled builds, but I don't like the idea of reusable code that depends on that (and fails with a segfault without it).

If I have to choose between a memory context that is essentially meant to be reused but is likely to fail unexpectedly in non-assert builds, and a special local allocator isolated to a single node, I choose the latter. Perhaps I'd see this differently if there were other places that could use the new memory context, but I can't think of any.

Opinions?


[1] https://www.postgresql.org/message-id/flat/7034.1454722453%40sss.pgh.pa.us

[2] https://www.postgresql.org/message-id/flat/53B4A03F.3070409%40fuzzy.cz

[3] https://github.com/postgres/postgres/blob/master/src/include/utils/memutils.h#L49

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

