Hi Jakub,
FYI I've posted my experimental NUMA patch series here:
https://www.postgresql.org/message-id/099b9433-2855-4f1b-b421-d078a5d82017%40vondra.me
I considered posting it to this thread, but it seemed sufficiently
different to warrant a thread of its own.
regards
--
Tomas Vondra
On 7/1/25 11:04, Jakub Wartak wrote:
> On Mon, Jun 30, 2025 at 9:23 PM Tomas Vondra wrote:
>>
>> I wasn't suggesting to do "numactl --interleave=all". My argument was
>> simply that doing numa_interleave_memory() has most of the same issues,
>> because it's oblivious to what's stored in the shared memory.
On Mon, Jun 30, 2025 at 9:23 PM Tomas Vondra wrote:
>
> I wasn't suggesting to do "numactl --interleave=all". My argument was
> simply that doing numa_interleave_memory() has most of the same issues,
> because it's oblivious to what's stored in the shared memory. Sure, the
> fact that local memory
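As an aside, the placement that interleaving produces can be checked with
the same move_pages() query that pg_numa.c wraps. A minimal standalone
sketch (Linux + libnuma assumed, error handling elided; not PostgreSQL
code):

#include <numaif.h>     /* move_pages(); link with -lnuma */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Print the NUMA node each page of [base, base+len) currently lives on. */
static void
report_placement(char *base, size_t len)
{
    long    pagesz = sysconf(_SC_PAGESIZE);
    size_t  npages = len / pagesz;
    void  **pages  = malloc(npages * sizeof(void *));
    int    *status = malloc(npages * sizeof(int));

    for (size_t i = 0; i < npages; i++)
        pages[i] = base + i * pagesz;

    /*
     * nodes == NULL turns move_pages() into a query: the node of each
     * page comes back in status[] (untouched pages get a negative errno).
     */
    if (move_pages(0, npages, pages, NULL, status, 0) == 0)
        for (size_t i = 0; i < npages; i++)
            printf("page %zu: node %d\n", i, status[i]);

    free(pages);
    free(status);
}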
On 6/30/25 12:55, Jakub Wartak wrote:
> Hi Tomas!
>
> On Fri, Jun 27, 2025 at 6:41 PM Tomas Vondra wrote:
>
>> I agree we should improve the behavior on NUMA systems. But I'm not sure
>> this patch goes far enough, or perhaps the approach seems a bit too
>> blunt, ignoring some interesting stuff.
On Mon, Jun 30, 2025 at 12:55 PM Jakub Wartak
wrote:
[..]
> > FWIW while I think the patch doesn't go far enough, there's one area
> > where I think it probably goes way too far - configurability. I agree
> > it's reasonable to allow running on a subset of nodes, e.g. to split the
> > system betwe
Hi Tomas!
On Fri, Jun 27, 2025 at 6:41 PM Tomas Vondra wrote:
> I agree we should improve the behavior on NUMA systems. But I'm not sure
> this patch goes far enough, or perhaps the approach seems a bit too
> blunt, ignoring some interesting stuff.
>
> AFAICS the patch essentially does the same thing as numactl --interleave=all
Hi,
I agree we should improve the behavior on NUMA systems. But I'm not sure
this patch goes far enough, or perhaps the approach seems a bit too
blunt, ignoring some interesting stuff.
AFAICS the patch essentially does the same thing as
numactl --interleave=all
except that it only does that
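For contrast, what "numactl --interleave=all" effectively arranges is a
process-wide default policy, roughly (a sketch, Linux + libnuma assumed):

#include <numa.h>       /* link with -lnuma */

int
main(void)
{
    if (numa_available() < 0)
        return 1;

    /*
     * Make "interleave across all nodes" the process default: every
     * later allocation, shared or private alike, is spread round-robin.
     * No knowledge of the program's memory layout is needed -- or used.
     */
    numa_set_interleave_mask(numa_all_nodes_ptr);

    /* ... everything allocated from here on is interleaved ... */
    return 0;
}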
On Fri, Apr 18, 2025 at 7:48 PM Bertrand Drouvot
wrote:
>
> Hi,
>
> On Thu, Apr 17, 2025 at 01:58:44AM +1200, Thomas Munro wrote:
> > On Wed, Apr 16, 2025 at 9:14 PM Jakub Wartak
> > wrote:
> > > 2. Should we also interleave DSA/DSM for Parallel Query? (I'm not an
> > > expert on DSA/DSM at all)
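Were we to interleave DSM, one conceivable shape is below. This is a
hypothetical helper, not existing code -- dsm_segment_address() and
dsm_segment_map_length() are the real accessors, the rest is an
assumption:

#include "postgres.h"
#include "storage/dsm.h"
#include <numa.h>       /* link with -lnuma */

/* Hypothetical: interleave a DSM segment right after creation. */
static void
interleave_dsm_segment(dsm_segment *seg)
{
    if (numa_available() < 0)
        return;

    /*
     * The policy takes effect as pages are first faulted in, so this
     * would have to run before workers start filling the segment.
     */
    numa_interleave_memory(dsm_segment_address(seg),
                           dsm_segment_map_length(seg),
                           numa_all_nodes_ptr);
}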
On Fri, Apr 18, 2025 at 7:43 PM Bertrand Drouvot
wrote:
>
> Hi,
>
> On Wed, Apr 16, 2025 at 10:05:04AM -0400, Robert Haas wrote:
> > On Wed, Apr 16, 2025 at 5:14 AM Jakub Wartak
> > wrote:
> > > Normal pgbench workloads tend not to be affected, as each backend
> > > tends to touch just a small partition of shm (thanks to BAS
> > > strategies).
Hi,
On Thu, Apr 17, 2025 at 01:58:44AM +1200, Thomas Munro wrote:
> On Wed, Apr 16, 2025 at 9:14 PM Jakub Wartak
> wrote:
> > 2. Should we also interleave DSA/DSM for Parallel Query? (I'm not an
> > expert on DSA/DSM at all)
>
> I have no answers but I have speculated for years about a very
> specific case (without any idea where to begin due to lack of ... I
> guess all this sort of stuff): in ExecParallelHashJoinNewBatch(),
> workers split up and try to work on different batches
Hi,
On Wed, Apr 16, 2025 at 10:05:04AM -0400, Robert Haas wrote:
> On Wed, Apr 16, 2025 at 5:14 AM Jakub Wartak
> wrote:
> > Normal pgbench workloads tend not to be affected, as each backend
> > tends to touch just a small partition of shm (thanks to BAS
> > strategies). Some remaining questions are:
On Wed, Apr 16, 2025 at 5:14 AM Jakub Wartak
wrote:
> Normal pgbench workloads tend not to be affected, as each backend
> tends to touch just a small partition of shm (thanks to BAS
> strategies). Some remaining questions are:
> 1. How to name this GUC (numa or numa_shm_interleave)? I prefer the
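(Whatever name wins, the switch itself would presumably be a plain
boolean in guc_tables.c; a hypothetical sketch, with name, category and
default all placeholders:)

/* hypothetical entry for src/backend/utils/misc/guc_tables.c */
{
    {"numa_shm_interleave", PGC_POSTMASTER, RESOURCES_MEM,
        gettext_noop("Interleaves shared memory across all NUMA nodes."),
        NULL
    },
    &numa_shm_interleave,       /* assumed global bool */
    false,                      /* off by default */
    NULL, NULL, NULL
},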
On Thu, Apr 17, 2025 at 1:58 AM Thomas Munro wrote:
> I have no answers but I have speculated for years about a very
> specific case (without any idea where to begin due to lack of ... I
> guess all this sort of stuff): in ExecParallelHashJoinNewBatch(),
> workers split up and try to work on different batches
On Wed, Apr 16, 2025 at 9:14 PM Jakub Wartak
wrote:
> 2. Should we also interleave DSA/DSM for Parallel Query? (I'm not an
> expert on DSA/DSM at all)
I have no answers but I have speculated for years about a very
specific case (without any idea where to begin due to lack of ... I
guess all this sort of stuff): in ExecParallelHashJoinNewBatch(),
workers split up and try to work on different batches
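Purely to make the speculation concrete: a worker can at least learn its
own node cheaply, so a NUMA-aware variant could bias which batch it scans
first. A toy sketch under that assumption -- nothing like this exists in
ExecParallelHashJoinNewBatch() today:

#define _GNU_SOURCE
#include <numa.h>       /* link with -lnuma */
#include <sched.h>      /* sched_getcpu() */

/*
 * Toy heuristic: start scanning batches at one notionally "owned" by
 * the worker's NUMA node, so co-located workers spread out by node.
 */
static int
pick_start_batch(int nbatch)
{
    int my_node = numa_node_of_cpu(sched_getcpu());
    int nnodes  = numa_num_configured_nodes();

    if (my_node < 0 || nnodes <= 0 || nbatch <= 0)
        return 0;

    /* node k starts at batch k * nbatch / nnodes, wrapping as usual */
    return (int) (((long) my_node * nbatch / nnodes) % nbatch);
}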
Thanks to having pg_numa.c, we can now simply address problem #2 of
NUMA imbalance from [1] pages 11-14, by interleaving shm memory in
PG19 - patch attached. We do not need to call numa_set_localalloc() as
we only interleave shm segments, while local allocations stay the same
(well, "local" means re