Hi,
> That's helpful, thanks!
>
> One thing to note is that postgres' work_mem is confusing - it applies to
> individual query execution nodes, not the whole query. Additionally, when
> you use parallel workers, each of the parallel workers can use the "full"
> work_mem, rather than work_mem being divided among them. ...
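To put numbers on that, here is a minimal sketch using real PostgreSQL
settings; the plan shape (4 hash joins, 2 workers) is an assumption for
illustration, not taken from the thread:

    -- Rough ceiling: nodes x processes x work_mem, with hash-based nodes
    -- further multiplied by hash_mem_multiplier. With the settings below,
    -- a plan with 4 hash joins run by a leader plus 2 parallel workers
    -- could allocate about 4 * 3 * 64MB * 2.0 = ~1.5GB, not 64MB.
    SET work_mem = '64MB';
    SET hash_mem_multiplier = 2.0;            -- default in PostgreSQL 15+
    SET max_parallel_workers_per_gather = 2;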
Hi,
On 2024-06-07 21:42:58 +0300, Radu Radutiu wrote:
> On Fri, Jun 7, 2024 at 7:59 PM Andres Freund wrote:
> > On 2024-06-06 15:25:25 +0300, Radu Radutiu wrote:
> > > I have a query that forces an out of memory error, where the OS will kill
> > > the postgresql process.
> >
> > FWIW, it can be useful to configure the OS with strict memory overcommit. ...
>
>
> FWIW, it can be useful to configure the OS with strict memory overcommit.
> That
> causes postgres to fail more gracefully, because the OOM killer won't be
> invoked.
>
In the current setup the database is used as an embedded db, with the
application sharing the same host as the database. ...
On Fri, Jun 7, 2024 at 7:59 PM Andres Freund wrote:
> Hi,
>
> On 2024-06-06 15:25:25 +0300, Radu Radutiu wrote:
> > I have a query that forces an out of memory error, where the OS will kill
> > the postgresql process.
>
> FWIW, it can be useful to configure the OS with strict memory overcommit.
> ...
Hi,
On 2024-06-06 13:58:24 +0100, Pantelis Theodosiou wrote:
> I am not qualified to answer on the OOM issue but why are you joining the
> same table (outputrequest) 4 times (using an identical join condition)?
The conditions aren't actually the same:
rpl_rec_tro.input_sequence = r.input_sequence ...
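For illustration, a hedged sketch of what "not actually the same" can look
like; apart from outputrequest, rpl_rec_tro, r, and input_sequence, every
name below is hypothetical rather than taken from the real query:

    -- Hypothetical reconstruction, not the original query: the same table
    -- appears multiple times, but each instance joins on a different
    -- column pairing, so the joins are not redundant.
    SELECT r.input_sequence
    FROM inputrequest r                                -- table name assumed
    LEFT JOIN outputrequest rpl_rec_tro
           ON rpl_rec_tro.input_sequence = r.input_sequence
    LEFT JOIN outputrequest rpl_snd_tro                -- alias assumed
           ON rpl_snd_tro.reply_input_sequence = r.input_sequence;  -- column assumed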
Hi,
On 2024-06-06 15:25:25 +0300, Radu Radutiu wrote:
> I have a query that forces an out of memory error, where the OS will kill
> the postgresql process.
FWIW, it can be useful to configure the OS with strict memory overcommit. That
causes postgres to fail more gracefully, because the OOM killer won't be
invoked.
>> The planner should recognize this situation and avoid use of hash
>> join in such cases, but maybe the statistics aren't reflecting the
>> problem, or maybe there's something wrong with the logic specific
>> to parallel hash join. You've not really provided enough information
>> to diagnose ...
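Tom's two hypotheses (misleading statistics vs. a problem in the parallel
hash join logic) can be probed separately. A hedged sketch of the statistics
side, using the table and column named elsewhere in the thread:

    -- Do the planner's statistics reflect the skew on the join column?
    -- n_distinct and the most-common-value frequencies drive the row
    -- estimates that size the hash table.
    ANALYZE outputrequest;
    SELECT n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'outputrequest'
      AND attname = 'input_sequence';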
Radu Radutiu writes:
> The query itself runs fine in a reasonable time with enable_parallel_hash =
> 'off'. I see two problems - one is the wrong execution plan (right after
> running analyze), the second and the most important is the huge memory
> usage (far exceeding work_mem and shared buffers)
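The workaround described here can be tested per session without touching
postgresql.conf; a minimal sketch (the problem query is not reproduced in
the thread, so a comment stands in for it):

    -- Planner settings are session-local, so this affects only the test.
    SET enable_parallel_hash = off;
    -- EXPLAIN (ANALYZE, BUFFERS) <problem query>;   -- compare both plans
    RESET enable_parallel_hash;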
>
>
> I am not qualified to answer on the OOM issue but why are you joining the
> same table (outputrequest) 4 times (using an identical join condition)?
> This essentially does a cross join; if an input_sequence value has, say,
> 1000 related rows in outputrequest, you will be getting 1000^4 rows.
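The arithmetic is easy to reproduce at a small scale; a self-contained toy
sketch (not the real schema):

    -- Toy demonstration of the multiplicative blowup: with 10 rows per
    -- key, each extra self-join on the identical condition multiplies the
    -- result by 10, giving 10^4 = 10,000 rows here (and 1000^4 = 10^12
    -- rows in the 1000-row scenario above).
    CREATE TEMP TABLE o (input_sequence int);
    INSERT INTO o SELECT 1 FROM generate_series(1, 10);
    SELECT count(*)                                    -- returns 10000
    FROM o a
    JOIN o b ON b.input_sequence = a.input_sequence
    JOIN o c ON c.input_sequence = a.input_sequence
    JOIN o d ON d.input_sequence = a.input_sequence;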
On Thu, Jun 6, 2024 at 1:25 PM Radu Radutiu wrote:
> Hello all,
>
> I have a query that forces an out of memory error, where the OS will kill
> the postgresql process.
> The query plan (run immediately after a vacuum analyze) is at
> https://explain.depesz.com/s/ITQI#html .
>
> ...
>
> Any idea ...
Hello all,
I have a query that forces an out of memory error, where the OS will kill
the postgresql process.
The query plan (run immediately after a vacuum analyze) is at
https://explain.depesz.com/s/ITQI#html .
PostgreSQL version 16.3, running on RHEL 8.9, 16 vCPU, 64 GB RAM, 32 GB swap,
shared_buffers ...