Alvaro> Something like
Alvaro> INSERT INTO .. VALUES ('col1', 'col2'), ('col1', 'col2'), ('col1', 'col2')
Frits> I did not try that, to be honest.

pgjdbc does automatically rewrite insert values(); into insert ...
values(),(),(),() when reWriteBatchedInserts=true. I don't expect manual
multivalues to be
Frits,
Would you mind sharing the source code of your benchmark?
>BTW: It seems you need a recent driver for this; I'm
using postgresql-42.1.1.jar
Technically speaking, reWriteBatchedInserts was introduced in 9.4.1209
(2016-07-15)
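As an illustration of what the rewrite does at the SQL level, here is a toy Python sketch (the function is invented for the example and is not pgjdbc's actual implementation):

```python
def rewrite_batched_insert(sql: str, batch_size: int) -> str:
    """Turn a single-row INSERT into a multivalues INSERT, the way
    reWriteBatchedInserts collapses a JDBC batch into one statement."""
    head, _, values_group = sql.rpartition("VALUES")
    group = values_group.strip()
    return head + "VALUES " + ", ".join([group] * batch_size)

print(rewrite_batched_insert("INSERT INTO t (a, b) VALUES (?, ?)", 3))
# INSERT INTO t (a, b) VALUES (?, ?), (?, ?), (?, ?)
```

The real driver also handles parameter binding and batch-size edge cases; this only shows the shape of the statement the server ends up executing.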
Vladimir
>What could cause this? Note that there is no ANALYZE.
Can you capture pstack and/or perf report while explain hangs?
I think it should shed light on the activity of PostgreSQL.
Vladimir
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
An index on entity_compounddict2document(name, a.hepval) might help.
Regards,
Vladimir Sitnikov
>This leads to the WHERE clause, WHERE read_datetime = max_read, and hence
>I'm only summing the last read for each device for each patient.
Is the reads table insert-only? Do you have updates/deletes of the
historical rows?
>3. Can I modify my tables to make this query (which is the crux of my
(see [1] for a similar example).
As far as I understand, a simple create table as select * from test1 order by
slsales_date_id, slsales_prod_id should improve cache locality.
[1]:
http://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-an-unsorted-array
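The cache-locality effect can be sketched with a toy page model in Python (all numbers below are invented for the illustration):

```python
import random

ROWS_PER_PAGE = 100  # toy page size; real heap pages hold a row count that depends on row width

def pages_touched(rows, key):
    """Pages that must be read to fetch every row matching `key`."""
    return len({i // ROWS_PER_PAGE for i, r in enumerate(rows) if r == key})

random.seed(0)
unsorted_rows = [random.randrange(50) for _ in range(10_000)]  # 50 distinct keys, random order
sorted_rows = sorted(unsorted_rows)                            # "create table as ... order by"

print(pages_touched(unsorted_rows, 7))  # matches scattered over most of the 100 pages
print(pages_touched(sorted_rows, 7))    # matches clustered on a few adjacent pages
```

Fetching all rows for one key touches nearly every page when the matches are scattered, but only a handful of adjacent pages once the table is physically sorted by that key.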
--
Regards,
Vladimir
Regards,
Vladimir Sitnikov
value for
work_mem. You'll need 8*15494737 ~ 130MB for work_mem (however,
that is way too high unless you have lots of RAM and just a couple of active
database sessions).
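Back-of-envelope check of that figure, assuming 8 bytes per entry as in the estimate above:

```python
rows = 15_494_737
bytes_per_entry = 8                  # assumption carried over from the estimate above
total_bytes = rows * bytes_per_entry
print(total_bytes)                   # 123957896 bytes
print(round(total_bytes / 1024**2))  # 118 MiB, i.e. the ~130MB ballpark above
```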
Regards,
Vladimir Sitnikov
GROUP BY fk_societe_id
) AS stats_adresses_facturation ON
stats_adresses_facturation.societe_id = societes.pk_societe_id
WHERE societes.is_deleted = FALSE and il_y_avait_un_commande=1
ORDER BY LOWER(denomination_commerciale);
Best regards,
Vladimir Sitnikov
Regards,
Vladimir Sitnikov
rows, a bitmap scan will require 60'000/8 = 7'500 bytes ~
8 KB of memory to run without an additional recheck, thus I do not believe
it hurts you in this particular case.
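As a quick check of that arithmetic (one bit per candidate row, matching the estimate above; PostgreSQL only degrades to lossy, page-level bits when the bitmap would not fit in work_mem):

```python
def lossless_bitmap_bytes(nrows: int) -> float:
    """Memory for an exact (one-bit-per-row) bitmap over nrows candidate rows."""
    return nrows / 8

print(lossless_bitmap_bytes(60_000))  # 7500.0 bytes, comfortably under 8 KB
```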
Regards,
Vladimir Sitnikov
query with OR into
two separate index scans. There is no way to improve the query significantly
without rewriting it.
Note: for this case, indices on (datecol), (cr) and (db) are not very
helpful.
Regards,
Vladimir Sitnikov
--------+---+---
 btree  | t | t
 hash   | f | f
 gist   | t | t
 gin    | f | f
 bitmap | t | t
(5 rows)
Sincerely yours,
Vladimir Sitnikov
carefully: the index is only 30 pages long. Why is
PostgreSQL doing 2529 I/O? It drives me crazy.
Regards,
Vladimir Sitnikov
local_flush=0 file_read=0 file_write=0)
Filter: (i ~~ '%123%'::text)
Total runtime: 16.863 ms
Hopefully, there will be a clear distinction between filtering via index and
filtering via table access.
Regards,
Vladimir Sitnikov
You might get a great improvement for '%' cases using an index on
channel_name(field, start_time) and a little bit of pl/pgsql.
Basically, you need to implement the following algorithm:
 1) curr_field = ( select min(field) from ad_log )
 2) record_exists = ( select 1 from ad_log where field = curr_field
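The loop is truncated above; the idea is a "loose index scan": start at the smallest key, then repeatedly jump to the next key greater than the current one, so the index is probed once per distinct value instead of once per row. A Python sketch of the same idea over a sorted array (a stand-in for the index, invented for the illustration):

```python
from bisect import bisect_right

def distinct_keys(sorted_keys):
    """Loose index scan: one probe per distinct key instead of scanning every row."""
    result = []
    i = 0
    while i < len(sorted_keys):
        curr = sorted_keys[i]                   # smallest key at or after position i
        result.append(curr)
        i = bisect_right(sorted_keys, curr, i)  # skip past all rows with this key
    return result

print(distinct_keys([1, 1, 1, 2, 2, 5, 5, 5, 5, 9]))  # [1, 2, 5, 9]
```

In pl/pgsql the jump would be something like `select min(field) from ad_log where field > curr_field`, repeated until it returns NULL.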