On Fri, Jun 2, 2017 at 6:38 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Jun 2, 2017 at 9:01 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
>> Your reasoning sounds sensible to me.  Another way to attack this
>> problem is to maintain a local queue in each worker for use when the
>> shared memory queue becomes full.  Basically, we can extend your
>> "Faster processing at Gather node" patch [1] so that instead of a
>> fixed-size local queue, the queue grows when the shm queue becomes
>> full.  That way we can handle both problems (workers won't stall
>> when the shm queues are full, and workers can do batched writes to
>> the shm queue to avoid the per-message communication overhead) in a
>> similar way.
>
> We still have to bound the amount of memory that we use for queueing
> data in some way.
>

Yeah, probably up to work_mem (or some percentage of work_mem).  If
we want a solution that can grow beyond that, we might back the queue
with a file, though we may not need to go that far.  I think we can
run some experiments to see how much additional memory is enough to
give us the maximum benefit.
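To make the idea concrete, here is a rough sketch of what the
worker-side send path could look like.  LocalQueue and send_or_buffer
are made-up names for illustration, not existing code; shm_mq_send and
its nowait mode are the existing API.  Initialization of the local
buffer and flushing it back once the shm queue drains are omitted:

#include "postgres.h"

#include "miscadmin.h"          /* work_mem */
#include "storage/shm_mq.h"

/*
 * Illustrative only: assumes lq->data was palloc'd at worker startup.
 * Flushing the buffered data into the shm queue once it has room
 * again is not shown.
 */
typedef struct LocalQueue
{
    char       *data;           /* buffered, not-yet-sent messages */
    Size        used;           /* bytes currently buffered */
    Size        allocated;      /* size of current allocation */
} LocalQueue;

static bool
send_or_buffer(shm_mq_handle *mqh, LocalQueue *lq,
               const void *msg, Size nbytes)
{
    shm_mq_result res;

    /* Try a non-blocking send into the shared queue first. */
    res = shm_mq_send(mqh, nbytes, msg, true);
    if (res == SHM_MQ_SUCCESS)
        return true;
    if (res == SHM_MQ_DETACHED)
        return false;           /* leader has gone away */

    /* SHM_MQ_WOULD_BLOCK: queue locally, bounded by work_mem. */
    if (lq->used + nbytes > (Size) work_mem * 1024L)
    {
        /* Local budget exhausted; fall back to a blocking send. */
        return shm_mq_send(mqh, nbytes, msg, false) == SHM_MQ_SUCCESS;
    }

    /* Grow the local buffer as needed, doubling each time. */
    if (lq->used + nbytes > lq->allocated)
    {
        lq->allocated = Max(lq->allocated * 2, lq->used + nbytes);
        lq->data = repalloc(lq->data, lq->allocated);
    }
    memcpy(lq->data + lq->used, msg, nbytes);
    lq->used += nbytes;
    return true;
}

The point is just that SHM_MQ_WOULD_BLOCK becomes a signal to batch
locally rather than to wait, with work_mem as the hard cap before we
fall back to blocking (or, in the extendable variant, spill to a
file).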

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


