On Mon, Sep 30, 2024 at 8:46 PM Philip Semanchuk <
phi...@americanefficient.com> wrote:

>
>
> > On Sep 26, 2024, at 7:03 AM, yudhi s <learnerdatabas...@gmail.com>
> wrote:
> >
> > In a RDS postgres ...
>
> >  Is it fine to let it use "FreeLocalStorage" unless it goes till zero?
>
> Hi Yudhi,
> FreeLocalStorage and some of the other things you ask about are specific
> to AWS RDS, so you might have better luck getting answers on an
> RDS-specific mailing list. We also use RDS-hosted Postgres and so I
> completely understand how Postgres and RDS are intertwined.
>
> We have had runaway queries exhaust FreeLocalStorage. It has been quite a
> while since that happened, so my memories are hazy, but I’m pretty sure
> that when we used all of FreeLocalStorage, the result was that Postgres
> restarted. It might be equivalent to using all memory and disk space on a
> standalone system. Once there’s no storage left, behavior is unpredictable
> but we can’t be surprised if things crash. Usually our runaway queries got
> killed before FreeLocalStorage filled up, but not always.
>
> I second Veem’s suggestion to set work_mem on a per-session basis. Also
> note that the doc for work_mem says, “the total memory used could be many
> times the value of work_mem; it is necessary to keep this fact in mind when
> choosing the value."
>
>
> https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-WORK-MEM
>
>
Thank you.

When I execute the query with EXPLAIN (ANALYZE, BUFFERS), I see the section
below in the plan with "Sort Method" information in three places,
each showing ~75MB of disk spill, which combined comes to <250MB. So, does
that mean it's enough to set work_mem to ~250MB for these queries before
they start?

But somehow this query finishes in a few seconds when I execute it
using EXPLAIN (ANALYZE, BUFFERS), while if I run it without EXPLAIN it
runs for ~10 minutes or more. My expectation was that EXPLAIN ANALYZE
actually executes the query fully. Is my understanding correct here,
and are the disk-spill stats I am seeing accurate enough to go by?
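If it helps, the per-session approach Veem and Philip suggest could be sketched roughly like this (the 256MB figure is only an assumption derived from the spill sizes in the plan below, not a tested value, and the SELECT is a placeholder for the actual query):

```sql
-- Sketch: raise work_mem for this session only, run the query, then
-- put the setting back. work_mem applies per sort operation and per
-- parallel worker, so total memory use can be a multiple of this value.
SET work_mem = '256MB';

EXPLAIN (ANALYZE, BUFFERS)
SELECT ... ;  -- the query in question

RESET work_mem;
```

SET LOCAL inside a transaction would scope the change even more tightly, reverting automatically at COMMIT/ROLLBACK.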


Limit  (cost=557514.75..592517.20 rows=300000 width=1430) (actual
time=2269.939..2541.527 rows=300000 loops=1)
  Buffers: shared hit=886206, temp read=38263 written=56947
  I/O Timings: temp read=70.040 write=660.073
  ->  Gather Merge  (cost=557514.75..643393.02 rows=736048 width=1430)
(actual time=2269.938..2513.748 rows=300000 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        Buffers: shared hit=886206, temp read=38263 written=56947
        I/O Timings: temp read=70.040 write=660.073
        ->  Sort  (cost=556514.73..557434.79 rows=368024 width=1430)
(actual time=2227.392..2279.389 rows=100135 loops=3)
              Sort Key: column1, column2
              Sort Method: external merge  Disk: 77352kB
              Buffers: shared hit=886206, temp read=38263 written=56947
              I/O Timings: temp read=70.040 write=660.073
              Worker 0:  Sort Method: external merge  Disk: 75592kB
              Worker 1:  Sort Method: external merge  Disk: 74440kB
              ->  Parallel Append  (cost=0.00..64614.94 rows=368024
width=1430) (actual time=0.406..570.105 rows=299204 loops=3)
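One thing worth keeping in mind when reading this plan (my understanding, worth verifying): each worker sorts its own ~75MB share, so work_mem is compared per process, not against the ~250MB combined total, and the in-memory representation of the rows is usually larger than the on-disk run size. A rough way to check whether a given setting eliminates the spill (the 128MB trial value here is an assumption, and the SELECT is a placeholder):

```sql
-- Sketch: start somewhat above the largest per-worker Disk figure
-- (~77MB in this plan), since the in-memory sort typically needs more
-- memory than the sorted runs occupy on disk.
SET work_mem = '128MB';  -- assumed trial value
EXPLAIN (ANALYZE, BUFFERS)
SELECT ... ;  -- the query in question
-- In the output, look for "Sort Method: quicksort  Memory: ..."
-- in place of "external merge  Disk: ...".
RESET work_mem;
```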
