Re: [PERFORM] Improving PostgreSQL insert performance

2017-06-11 Thread Vladimir Sitnikov
Alvaro> Something like INSERT INTO .. VALUES ('col1', 'col2'), ('col1', 'col2'), ('col1', 'col2') Frits> I did not try that, to be honest. pgjdbc does automatically rewrite insert values(); into insert ... values(),(),(),() when reWriteBatchedInserts=true. I don't expect manual multivalues to be
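
For illustration, a minimal sketch of the rewrite pgjdbc applies when reWriteBatchedInserts=true, assuming a hypothetical two-column table t (the table is not from the thread):

    -- Without reWriteBatchedInserts, a JDBC batch of three rows is executed
    -- as three separate parameterized statements:
    INSERT INTO t (col1, col2) VALUES ($1, $2);
    INSERT INTO t (col1, col2) VALUES ($1, $2);
    INSERT INTO t (col1, col2) VALUES ($1, $2);
    -- With reWriteBatchedInserts=true (pgjdbc 9.4.1209 or newer), the driver
    -- collapses the batch into one multi-valued statement:
    INSERT INTO t (col1, col2) VALUES ($1, $2), ($3, $4), ($5, $6);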

Re: [PERFORM] Improving PostgreSQL insert performance

2017-06-09 Thread Vladimir Sitnikov
Frits, Would you mind sharing the source code of your benchmark? >BTW: It seems you need a recent driver for this; I'm using postgresql-42.1.1.jar Technically speaking, reWriteBatchedInserts was introduced in 9.4.1209 (2016-07-15) Vladimir

Re: [PERFORM] EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT

2015-03-05 Thread Vladimir Sitnikov
What could cause this? Note that there is no ANALYZE. Can you capture pstack and/or perf report while explain hangs? I think it should shed light on the activity of PostgreSQL. Vladimir

Re: [PERFORM] Query taking long time

2014-03-03 Thread Vladimir Sitnikov
on entity_compounddict2document(name, a.hepval) might help. Regards, Vladimir Sitnikov
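
A minimal sketch of the suggested composite index, assuming hepval is a plain column of entity_compounddict2document (the index name below is made up; the "a." prefix in the snippet is just a query alias):

    -- Composite index on (name, hepval), as suggested above
    CREATE INDEX entity_compounddict2document_name_hepval_idx
        ON entity_compounddict2document (name, hepval);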

Re: [PERFORM] Performance of complicated query

2013-05-23 Thread Vladimir Sitnikov
This leads to the WHERE clause, WHERE read_datetime = max_read, and hence I'm only summing the last read for each device for each patient. Is reads table insert-only? Do you have updates/deletes of the historical rows? 3. Can I modify my tables to make this query (which is the crux of my
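
For illustration only (the reads table is named in the snippet, but the column names below are guesses), one common way to take the last read per device per patient without a separate max_read step is DISTINCT ON:

    -- Hypothetical sketch: latest read per (patient, device)
    SELECT DISTINCT ON (patient_id, device_id)
           patient_id, device_id, read_datetime, value
      FROM reads
     ORDER BY patient_id, device_id, read_datetime DESC;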

Re: [PERFORM] FW: performance issue with a 2.5gb joinded table

2013-01-09 Thread Vladimir Sitnikov
] for similar example). As far as I understand, simple create table as select * from test1 order by slsales_date_id, slsales_prod_id should improve cache locality. [1]: http://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-an-unsorted-array -- Regards, Vladimir
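
Spelled out, the reordering suggested above would look roughly like this (test1 and the two columns come from the snippet; the new table name is made up):

    -- Rebuild the table physically sorted on the columns the query filters and
    -- joins on, so related rows share pages and cache locality improves
    CREATE TABLE test1_sorted AS
        SELECT * FROM test1
        ORDER BY slsales_date_id, slsales_prod_id;
    -- (CLUSTER on an index over the same columns gives a similar physical layout)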

Re: [PERFORM] filesystems benchmark

2008-12-15 Thread Vladimir Sitnikov
, Vladimir Sitnikov

Re: [PERFORM] query plan with index having a btrim is different for strings of different length

2008-12-10 Thread Vladimir Sitnikov
value for work_mem. You'll need 8*15494737 ~ 130Mb == 13 for work_mem (however, that is way too high unless you have lots of RAM and just couple of active database sessions) Regards, Vladimir Sitnikov
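
For illustration, the session-level change implied by the estimate above (the ~130Mb figure comes from the snippet; size it to your RAM and the number of concurrent sessions):

    -- Raise work_mem for this session only; a value this large is risky
    -- as a global setting when many sessions are active
    SET work_mem = '130MB';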

Re: [PERFORM] Slow SQL query (14-15 seconds)

2008-11-13 Thread Vladimir Sitnikov
GROUP BY fk_societe_id ) AS stats_adresses_facturation ON stats_adresses_facturation.societe_id = societes.pk_societe_id WHERE societes.is_deleted = FALSE and il_y_avait_un_commande=1 ORDER BY LOWER(denomination_commerciale); Kind regards, Vladimir Sitnikov

Re: [PERFORM] Increasing select max(datecol) from bilkaib where datecol<=date'2008-11-01' and (cr='00' or db='00') speed

2008-11-12 Thread Vladimir Sitnikov
, Vladimir Sitnikov

Re: [PERFORM] slow full table update

2008-11-12 Thread Vladimir Sitnikov
rows bitmap scan will require 60'000/8=7'500 bytes ~ 8Kbytes of memory to run without additional recheck, thus I do not believe it hurts you in this particular case Regards, Vladimir Sitnikov

Re: [PERFORM] Increasing select max(datecol) from bilkaib where datecol<=date'2008-11-01' and (cr='00' or db='00') speed

2008-11-12 Thread Vladimir Sitnikov
query with or into two separate index scans. There is no way to improve the query significantly without rewriting it. Note: for this case indices on (datecol), (cr) and (db) are not very helpful. Regards, Vladimir Sitnikov
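
The exact query text is not in the snippet, so the following is only a sketch of the rewrite described above (two separate index scans instead of one OR), assuming composite indexes on (cr, datecol) and (db, datecol):

    -- Each subquery can walk its own composite index to find the max date;
    -- greatest() then combines the two results
    SELECT greatest(
        (SELECT max(datecol) FROM bilkaib
          WHERE cr = '00' AND datecol <= date '2008-11-01'),
        (SELECT max(datecol) FROM bilkaib
          WHERE db = '00' AND datecol <= date '2008-11-01')
    );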

Re: [PERFORM] Using index for IS NULL query

2008-11-11 Thread Vladimir Sitnikov
 btree  | t | t
 hash   | f | f
 gist   | t | t
 gin    | f | f
 bitmap | t | t
(5 rows)
Sincerely yours, Vladimir Sitnikov

Re: [PERFORM] Improve Seq scan performance

2008-11-10 Thread Vladimir Sitnikov
carefully: the index is only 30 pages long. Why is PostgreSQL doing 2529 I/O? It drives me crazy. Regards, Vladimir Sitnikov

Re: [PERFORM] Improve Seq scan performance

2008-11-09 Thread Vladimir Sitnikov
local_flush=0 file_read=0 file_write=0) Filter: (i ~~ '%123%'::text) Total runtime: 16.863 ms Hopefully, there will be a clear distinction between filtering via index and filtering via table access. Regards, Vladimir Sitnikov

Re: [PERFORM] indexing for distinct search in timestamp based table

2008-09-05 Thread Vladimir Sitnikov
You might get great improvement for '%' cases using index on channel_name(field, start_time) and a little bit of pl/pgsql Basically, you need to implement the following algorithm: 1) curr_field = ( select min(field) from ad_log ) 2) record_exists = ( select 1 from ad_log where field=cur_field
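
A minimal pl/pgsql sketch of that loop (ad_log and field come from the snippet; the function name, return type, and the assumption that field is text are illustrative):

    -- "Loose index scan": repeatedly jump to the next distinct value of field
    -- via an index whose leading column is field, instead of reading every row
    CREATE OR REPLACE FUNCTION distinct_field_values() RETURNS SETOF text AS $$
    DECLARE
        curr text;
    BEGIN
        SELECT min(field) INTO curr FROM ad_log;          -- step 1
        WHILE curr IS NOT NULL LOOP
            RETURN NEXT curr;                             -- step 2: value exists
            SELECT min(field) INTO curr
              FROM ad_log WHERE field > curr;             -- next distinct value
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;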