Indeed, increasing cursor_tuple_fraction to 0.2 did the trick.
Thanks for the hint, Tom!
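For anyone finding this in the archives, the whole fix is one session-level setting, issued before opening the cursor. A minimal sketch:

-- Default is 0.1: the planner assumes only 10% of a cursor's rows will
-- ever be fetched and optimizes for fast startup. Raising it makes the
-- planner optimize for total runtime, like the plain query.
SET cursor_tuple_fraction = 0.2;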
On Mon, 17 Mar 2025 at 16:22, Tom Lane wrote:
> kimaidou writes:
> > I have seen that this DECLARE has bad perf compared to a simple SQL query:
>
> > Simple SQL query:
Hi list!
I have a simple but big spatial table with approx 93,000,000 rows.
I use QGIS, the open-source GIS software, to display this data. To fetch the
polygons to draw on the QGIS map, QGIS launches a first DECLARE CURSOR query,
then fetches the data 2,000 rows at a time.
I have seen that this DECLARE has bad perf compared to a simple SQL query.
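Roughly, the statements QGIS sends look like this (the cursor name, table and
map extent below are illustrative, not the exact SQL QGIS generates; assumes
PostGIS):

BEGIN;
-- Open a cursor over the layer query for the current map extent...
DECLARE qgis_cur CURSOR FOR
SELECT id, geom
FROM big_spatial_table  -- ~93,000,000 rows
WHERE geom && ST_MakeEnvelope(2.2, 48.8, 2.4, 48.9, 4326);
-- ...then pull rows in batches of 2000 until the extent is drawn.
FETCH 2000 FROM qgis_cur;
FETCH 2000 FROM qgis_cur;
COMMIT;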
Hi!
I would like to thank you all for your detailed answers and explanations.
I will give "partitioning" a try, by creating a dedicated new partitioned
table and inserting a (big enough) extract of the source data into it, as
sketched below.
You are right, the best would be to try it in real life!
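A minimal sketch of the test setup I have in mind, assuming list partitioning
by department and made-up names (PostGIS types; the department codes come
from the earlier query):

-- Hypothetical test table, list-partitioned by department code.
CREATE TABLE parcelles_part (
    id   bigint,
    dep  text NOT NULL,
    geom geometry(MultiPolygon, 2154)
) PARTITION BY LIST (dep);

CREATE TABLE parcelles_part_54 PARTITION OF parcelles_part FOR VALUES IN ('54');
CREATE TABLE parcelles_part_57 PARTITION OF parcelles_part FOR VALUES IN ('57');

-- An index on the parent is created on each partition automatically.
CREATE INDEX ON parcelles_part USING gist (geom);

-- Insert a (big enough) extract of the source data.
INSERT INTO parcelles_part
SELECT id, dep, geom FROM parcelles WHERE dep IN ('54', '57');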
Best wishes
UNION VIEW will be able to
use the tables' indexes (geom, department) and perform as well as the big
table; see the sketch below.
Any hint appreciated!
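To make "UNION VIEW" concrete, this is the shape I mean, with purely
illustrative names: one table per department, each carrying its own indexes,
glued together by a view.

CREATE VIEW parcelles_union AS
SELECT * FROM parcelles_54
UNION ALL
SELECT * FROM parcelles_57;

-- A WHERE clause on the view is pushed down into each branch, so a
-- filter on dep or geom can use the per-table indexes instead of
-- scanning one 93M-row table.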
Regards
Kimaidou
Tom, thanks a lot for your suggestion.
Indeed, setting random_page_cost to 2 instead of 4 improves this query a lot!
See the new plan:
https://explain.dalibo.com/plan/h924389529e11244
30 seconds vs. 17 minutes before.
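For the record, the change itself is tiny; session-level here, with the
persistent variant commented out (both are standard PostgreSQL):

-- The default of 4.0 models spinning disks, where a random page read
-- costs about 4x a sequential one. On SSD the gap is far smaller, so a
-- lower value lets the planner choose index scans it was pricing out.
SET random_page_cost = 2;

-- Cluster-wide, persistent version:
-- ALTER SYSTEM SET random_page_cost = 2;
-- SELECT pg_reload_conf();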
Cheers
Michaël
On Friday, 9 February 2024, Tom Lane wrote:
> kimaidou writes:
> '10', '54', '57', '67', '68'))
> ;
>
> On Fri, 9 Feb 2024 at 17:14, kimaidou wrote:
>
>> Hi all,
>>
>> I have a performance issue with a pretty simple query on a PostgreSQL
>> 14.10 server.
>>
It seems PostgreSQL does not use the index parcelles_dep_idx on "dep" (a text
field), even though the corresponding number of rows for this WHERE clause is
a small subset of the entire data:
approx 6M against 80M in total.
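Schematically, the situation (the index name is the one above; the table
name, column list and counts are my reading of the thread, and the full
query is in the paste):

-- The btree index on the text column exists:
-- CREATE INDEX parcelles_dep_idx ON parcelles (dep);

-- ...yet the plan showed a sequential scan over the ~80M rows:
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM parcelles
WHERE dep IN ('10', '54', '57', '67', '68');  -- ~6M matching rows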
Thanks in advance for any hint regarding this cumbersome query.
Regards
Kimaidou
Thanks for your help.
On Mon, 25 Feb 2019 at 19:30, Michael Lewis wrote:
>
>
> On Mon, Feb 25, 2019 at 2:44 AM kimaidou wrote:
>
>> I have better results with this version. Basically, I run a first query
>> made only for aggregation, and then do a JOIN to get the other columns.
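Schematically, the "aggregate first, join after" shape (every name below is
made up; the real query is in the plan link):

-- Group on the narrow key alone so the sort/aggregate only handles
-- small rows, then join back once to pick up the wide columns.
WITH agg AS (
    SELECT parcelle_id, sum(surface) AS total_surface
    FROM occupations
    GROUP BY parcelle_id
)
SELECT p.id, p.dep, agg.total_surface
FROM agg
JOIN parcelles p ON p.id = agg.parcelle_id;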
2019 at 09:54, kimaidou wrote:
> Thanks for your answers. I tried with
> > set session work_mem='250MB';
> > set session geqo_threshold = 20;
> > set session join_collapse_limit = 20;
>
> It seems to have no real impact:
> https://explain.depesz.com/s/CBVd
Thanks for your answers. I tried with
> set session work_mem='250MB';
> set session geqo_threshold = 20;
> set session join_collapse_limit = 20;
It seems to have no real impact:
https://explain.depesz.com/s/CBVd
Indeed an index cannot really be used for sorting here, given the complexity
of the query.
Thanks for your answers.
I have tried via:
-- show work_mem;  -- "10485kB" (initial work_mem for my first post)
-- set session work_mem = '10kB';
-- set session geqo_threshold = 12;
-- set session join_collapse_limit = 15;
I have a small machine, with an SSD disk and 8GB RAM, so I cannot really
increase work_mem much further.
Hi all,
I need to optimize the following query:
http://paste.debian.net/hidden/ef08f864/
I use it to create a materialized view, but I think there is room for
optimization.
I tried
SET join_collapse_limit TO 15;
with no real difference.
EXPLAIN shows that the GROUP AGGREGATE and the needed sort kill the
performance.
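What I tried, sketched (the CREATE is commented out because the real query
lives in the paste; the view name is invented):

-- join_collapse_limit caps how many explicit JOINs the planner will
-- reorder; beyond that, joins are planned in the order they are written.
SHOW join_collapse_limit;  -- default: 8
SET join_collapse_limit TO 15;

-- Then build the materialized view under that setting:
-- CREATE MATERIALIZED VIEW mv_parcelles_stats AS <query from the paste>;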