On Tue, Oct 1, 2024 at 2:52 AM yudhi s <learnerdatabas...@gmail.com> wrote:

> When I execute the query with explain (analyze, buffers), I see the section
> below in the plan having "sort method" information in three places,
> each showing ~75MB, which combined comes to under 250MB. So, does that
> mean it's enough to set work_mem to ~250MB for these queries before
> they start?
>

work_mem is applied per operation (each sort or hash node gets its own
allowance), so you don't usually need to combine them. However, these are
parallel workers, so you probably need to account for the case in which no
workers are available, in which case you DO want to combine the values, but
only for parallel workers that are all performing the same operation.
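As a rough sketch (the setting value and the query below are hypothetical,
not taken from your plan), you can raise work_mem for just your session
before running the query, so the change doesn't affect other connections:

```sql
-- Hypothetical sketch: work_mem is a per-operation limit, and each
-- parallel worker gets its own allowance, so total memory use can be
-- several multiples of this value for a single query.
SET work_mem = '256MB';           -- session-local; reverts on disconnect

EXPLAIN (ANALYZE, BUFFERS)
SELECT id, status                 -- placeholder query and columns
FROM my_table
ORDER BY created_at;

RESET work_mem;                   -- back to the server default
```

If the plan's sorts switch from "external merge Disk" to "quicksort
Memory", the new value was large enough for that operation.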


>  But yes, somehow this query finishes in a few seconds when I execute it
> using explain (analyze, buffers), while if I run it without explain it
> runs for ~10 minutes or more. My expectation was that explain analyze
> should actually execute the query fully. Is my understanding correct here,
> and are the disk-spill stats I am seeing accurate enough to go
> with?
>

Running explain analyze does indeed execute the actual query, but it
discards the output rather than sending it to the client. It looks like
your limit is set to 300,000 rows (why!?), which could account for some or
all of the time difference: transferring those rows and having your client
process them takes time that explain analyze skips. But it's hard to say
whether that is the whole story without more data. It might help to see the
query, but as a rule of thumb, don't use SELECT * and keep your LIMIT sane:
only pull back the columns and rows your application absolutely needs.
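For instance (the table and column names here are made up), instead of
pulling everything back:

```sql
-- Hypothetical example: fetch only the columns and rows the
-- application needs, rather than:
--   SELECT * FROM orders LIMIT 300000;
SELECT order_id, status, total
FROM orders
WHERE created_at >= now() - interval '1 day'
LIMIT 100;
```

Narrowing the column list also lets the planner use index-only scans in
some cases, on top of cutting the network transfer.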

Cheers,
Greg
