Vladimir Sitnikov wrote:
> Alvaro> Something like
> Alvaro> INSERT INTO .. VALUES ('col1', 'col2'), ('col1', 'col2'), ('col1', 'col2')
> Frits> I did not try that, to be honest.
>
> pgjdbc does automatically rewrite insert values(); into insert ...
> values(),(),(),() when reWriteBatchedInserts=true. I don't expect manual
> multivalues to be noticeably faster.
Alvaro> Something like
Alvaro> INSERT INTO .. VALUES ('col1', 'col2'), ('col1', 'col2'), ('col1', 'col2')
Frits> I did not try that, to be honest.

pgjdbc does automatically rewrite insert values(); into insert ...
values(),(),(),() when reWriteBatchedInserts=true. I don't expect manual
multivalues to be noticeably faster.
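For illustration, the kind of statement this rewrite produces can be sketched with a plain string builder; `buildMultiValuesInsert` below is a hypothetical helper, not pgjdbc's actual implementation:

```java
// Sketch of the rewrite pgjdbc performs when reWriteBatchedInserts=true:
// a single-row INSERT is expanded into one statement with N placeholder groups.
// This is an illustration only, not the driver's actual code.
public class MultiValuesSketch {
    static String buildMultiValuesInsert(String table, int columns, int rows) {
        StringBuilder sb = new StringBuilder("INSERT INTO ").append(table).append(" VALUES ");
        for (int r = 0; r < rows; r++) {
            if (r > 0) sb.append(',');
            sb.append('(');
            for (int c = 0; c < columns; c++) {
                if (c > 0) sb.append(',');
                sb.append('?');
            }
            sb.append(')');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Expands a 2-column, 3-row batch into one multi-values statement.
        System.out.println(buildMultiValuesInsert("t", 2, 3));
    }
}
```

Since the driver performs this expansion itself when batching parameter sets, writing the multi-values SQL by hand should not be necessary.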
Hi Alvaro,

I did not try that, to be honest. I am using a single prepared statement so
that the database needs to parse it only once. All executes then use the
batched parameters.

I will try this later on, but I wonder whether having to reparse the
statement every time, compared to one prepared statement, will not cost
more than the multi-values form gains.
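The single-prepared-statement batching described above looks roughly like this in JDBC; the table name, columns, and batch size are made-up illustrations, not the benchmark's actual code:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

public class BatchInsertSketch {
    // Rough sketch of batched inserts through one prepared statement.
    // Table name, column layout, and batch size are illustrative only.
    static void insertBatched(Connection con, int totalRows, int batchSize) throws Exception {
        con.setAutoCommit(false);
        try (PreparedStatement ps = con.prepareStatement("INSERT INTO test_table VALUES (?, ?)")) {
            for (int i = 0; i < totalRows; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row " + i);
                ps.addBatch();
                if ((i + 1) % batchSize == 0) {
                    ps.executeBatch();  // with reWriteBatchedInserts, the driver
                                        // may ship this as a multi-values insert
                    con.commit();
                }
            }
            ps.executeBatch();          // flush any remainder
            con.commit();
        }
    }

    // Pure helper: number of executeBatch() round trips the loop above performs.
    static int batchCount(int totalRows, int batchSize) {
        return (totalRows + batchSize - 1) / batchSize;
    }
}
```

Because only one statement is ever prepared, the server parses the SQL once; each executeBatch() then ships only parameter sets.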
On 06/10/2017 07:32 PM, Alvaro Herrera wrote:
> Frits Jalvingh wrote:
>
>> So, I am still very interested in getting normal inserts faster, because
>> that will gain speed for all work.. If Oracle can do it, and Postgres is
>> able to insert fast with copy- where lies the bottleneck with the insert
>> command?
Frits Jalvingh wrote:
> So, I am still very interested in getting normal inserts faster, because
> that will gain speed for all work.. If Oracle can do it, and Postgres is
> able to insert fast with copy- where lies the bottleneck with the insert
> command? There seems to be quite a performance hit.
I think binary is worse.. according to the postgres documentation:

The binary format option causes all data to be stored/read as binary format
rather than as text. It is somewhat faster than the text and CSV formats,
but a binary-format file is less portable across machine architectures and
PostgreSQL versions.
> I tried the copy command, and that indeed works quite brilliantly:
> Inserted 24000000 rows in 22004 milliseconds, 1090710.7798582076 rows per
> second
>
> That's faster than Oracle. But with a very bad interface, I have to say, for
> normal database work.. I will try to make this work in the tool.
On Sat, Jun 10, 2017 at 12:08 AM Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:

> Would you mind sharing the source code of your benchmark?

The source code for the several tests, plus the numbers collected so far,
can be found at:
https://etc.to/confluence/display/~admjal/PostgreSQL+performance+tests
Frits,

Would you mind sharing the source code of your benchmark?

> BTW: It seems you need a recent driver for this; I'm
> using postgresql-42.1.1.jar

Technically speaking, reWriteBatchedInserts was introduced in 9.4.1209
(2016-07-15)

Vladimir
On Fri, Jun 9, 2017 at 6:04 AM, Frits Jalvingh wrote:
>
> I already changed the following config parameters:
> work_mem 512MB
> synchronous_commit off

Since you are already batching up commits into large chunks, this setting
is not very useful, but does risk you losing supposedly-committed data if
the server crashes.
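For reference, the settings under discussion are plain postgresql.conf entries; the values below are the ones quoted in the thread, not recommendations:

```
# postgresql.conf fragment -- values as quoted in this thread
work_mem = '512MB'
synchronous_commit = off  # speeds up commits, but a crash may lose
                          # transactions that were reported as committed
```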
The parameter was

reWriteBatchedInserts = true

Either added in the connection properties or in the connection URL, like:
jdbc:postgresql://localhost:5432/datavault_12_tst?reWriteBatchedInserts=true
BTW: It seems you need a recent driver for this; I'm
using postgresql-42.1.1.jar
On Fri, Jun 09, 2017 at 03:22:35PM +0000, Frits Jalvingh wrote:
> Hi Babu,
>
> That was all already done, as it is common practice for JDBC. Your
> parameter was added to the code that already did all that - and worked
> brilliantly there ;)
>
Hi Frits,
What was the parameter? I did not see an E
Hi Babu,
That was all already done, as it is common practice for JDBC. Your
parameter was added to the code that already did all that - and worked
brilliantly there ;)
On Fri, Jun 9, 2017 at 9:12 AM, Frits Jalvingh wrote:
> Hi John,
>
> Yes, I was aware and amazed by that ;) It is actually the fetch size in
> combination with autocommit being on; that dies the sweet OOM death as soon
> as the table gets big.
>
> But Postgres read performance, with autocommit off
Regards
John
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Frits Jalvingh
Sent: Friday, June 09, 2017 7:55 AM
To: Sunkara, Amrutha; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Improving PostgreSQL insert performance
I am not
Hi Babu,

No, I did not, and the effect is quite great:

Inserted 1000000 rows in 2535 milliseconds, 394477.3175542406 rows per
second
Inserted 1000000 rows in 2553 milliseconds, 391696.0438699569 rows per
second

compared to (without your parameter):

Inserted 1000000 rows in 7643 milliseconds, 130838 rows per second
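The figures reported are simply rows divided by elapsed seconds; a one-line check (the class and method names here are made up):

```java
public class Throughput {
    // Rows per second from a row count and elapsed milliseconds.
    static double rowsPerSecond(long rows, long millis) {
        return rows * 1000.0 / millis;
    }

    public static void main(String[] args) {
        // One million rows in 2535 ms matches the first figure quoted above.
        System.out.println(rowsPerSecond(1_000_000, 2535));
    }
}
```

Working backwards, 394477.3 rows/s over 2.535 s corresponds to one million rows per test run.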
Hi Kenneth,

I tried unlogged before, but as long as the commit interval is long it had
no discernible effect that I could see.

I am not doing anything special, I guess. I am adding the results of the
tests and the programs I'm using to the following page:
https://etc.to/confluence/display/~admjal/PostgreSQL+performance+tests

The copy example, in Java, is at the end. All of the examples use trivial
data and the same data.
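A sketch of how COPY can be fed from JDBC: the text-format data is generated in memory and handed to pgjdbc's CopyManager. The data generator below is a made-up stand-in for the trivial test data; the CopyManager call is shown only in comments because it needs the pgjdbc driver on the classpath and a live connection:

```java
import java.io.StringReader;

public class CopySketch {
    // Build COPY text-format rows (tab-separated columns, newline-terminated).
    // Trivial made-up data, similar in spirit to the benchmark's.
    static String buildCopyData(int rows) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < rows; i++) {
            sb.append(i).append('\t').append("row ").append(i).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String data = buildCopyData(3);
        StringReader reader = new StringReader(data);
        // With a live connection and the pgjdbc driver available:
        //   CopyManager cm = con.unwrap(org.postgresql.PGConnection.class).getCopyAPI();
        //   cm.copyIn("COPY test_table FROM STDIN", reader);
        System.out.print(data);
    }
}
```

Streaming from a Reader this way avoids writing an intermediate file on the server, which is what makes COPY usable from client-side tools at all.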
On Fri, Jun 09, 2017 at 02:39:37PM +0000, Frits Jalvingh wrote:
> Hi all,
>
> Thanks a lot for the many responses!
>
> About preparing statements: this is done properly in Java, and pgsql does
> it by itself. So that cannot be done better ;)
>
> I tried the copy command, and that indeed works quite brilliantly.
Frits,
When you use the copy command, are you doing anything special to get the
run time that you are indicating?
On Fri, Jun 9, 2017 at 10:39 AM, Frits Jalvingh wrote:
> Hi all,
>
> Thanks a lot for the many responses!
>
> About preparing statements: this is done properly in Java, and pgsql does
> it by itself.
Hi all,

Thanks a lot for the many responses!

About preparing statements: this is done properly in Java, and pgsql does
it by itself. So that cannot be done better ;)

I tried the copy command, and that indeed works quite brilliantly:
Inserted 24000000 rows in 22004 milliseconds, 1090710.7798582076 rows per
second
On Fri, Jun 09, 2017 at 01:56:58PM +0000, Frits Jalvingh wrote:
> Hi Kenneth, Andreas,
>
> Thanks for your tips!
>
> I increased shared_buffers to 8GB but it has no measurable effect at all. I
> think that is logical: shared buffers are important for querying but not
> for inserting; for that the speed to write to disk seems most important.
On Fri, Jun 9, 2017 at 7:56 AM, Frits Jalvingh wrote:
> Hi Kenneth, Andreas,
>
> Thanks for your tips!
>
> I increased shared_buffers to 8GB but it has no measurable effect at all. I
> think that is logical: shared buffers are important for querying but not for
> inserting; for that the speed to write to disk seems most important.
Hi Kenneth, Andreas,

Thanks for your tips!

I increased shared_buffers to 8GB but it has no measurable effect at all. I
think that is logical: shared buffers are important for querying but not
for inserting; for that the speed to write to disk seems most important; no
big reason to cache the data.
On Fri, Jun 09, 2017 at 03:24:15PM +0200, Andreas Kretschmer wrote:
>
> On 09.06.2017 at 15:04, Frits Jalvingh wrote:
> > Hi all,
> >
> > I am trying to improve the runtime of a big data warehouse
> > application. One significant bottleneck found was insert
> > performance, so I am investigating ways of getting Postgresql to
> > insert data faster.
On 09.06.2017 at 15:04, Frits Jalvingh wrote:
> Hi all,
>
> I am trying to improve the runtime of a big data warehouse
> application. One significant bottleneck found was insert performance,
> so I am investigating ways of getting Postgresql to insert data faster.

* use COPY instead of Insert, it is much faster.
Hi all,

I am trying to improve the runtime of a big data warehouse application. One
significant bottleneck found was insert performance, so I am investigating
ways of getting Postgresql to insert data faster. I ran several tests on a
fast machine to find out what performs best, and compared the results.