On Fri, Jun 09, 2017 at 03:24:15PM +0200, Andreas Kretschmer wrote:
> 
> 
> On 09.06.2017 at 15:04, Frits Jalvingh wrote:
> >Hi all,
> >
> >I am trying to improve the runtime of a big data warehouse
> >application. One significant bottleneck found was insert
> >performance, so I am investigating ways of getting PostgreSQL to
> >insert data faster.
> 
> * use COPY instead of INSERT, it is much faster (a JDBC sketch follows below)
> * bundle all inserts into one transaction
> * use a separate disk/spindle for the transaction log
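
For what it's worth, here is a minimal sketch of driving COPY from JDBC
through the driver's CopyManager. The connection string, the fact_sales
table and its columns are invented for illustration only:

import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyLoadSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; adjust to your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/dwh", "dwh", "secret")) {
            CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();

            // Rows are streamed as CSV text; a real loader would feed a
            // Reader over its staging data instead of a string literal.
            StringReader rows = new StringReader(
                    "1,2017-06-09,19.95\n" +
                    "2,2017-06-09,5.00\n");

            long n = copy.copyIn(
                    "COPY fact_sales (id, sale_date, amount)"
                    + " FROM STDIN WITH (FORMAT csv)",
                    rows);
            System.out.println("rows copied: " + n);
        }
    }
}

With autocommit left on, the whole copyIn() call runs as a single
transaction, which also covers the second point above.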
> 
> 
> 
> >
> >I already changed the following config parameters:
> >work_mem 512MB
> >synchronous_commit off
> >shared_buffers 512MB
> >commit_delay 100000
> >autovacuum_naptime 10min
> >
> >Postgres version is 9.6.3 on Ubuntu 17.04 64 bit, on an i7-4790K
> >with 16GB memory and an Intel 750 SSD. JDBC driver is
> >postgresql-42.1.1.
> >
> 
> increase shared_buffers; with 16 GB of RAM I would suggest 8 GB

+1. Without even checking, I think Oracle is configured to use a LOT
more memory than 512MB.
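
In case it is handy, a small sketch of applying that suggestion from SQL
over JDBC instead of editing postgresql.conf by hand. The connection
details are invented, and shared_buffers only takes effect after a
server restart:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TuneSharedBuffersSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical superuser connection; adjust to your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/postgres",
                "postgres", "secret");
             Statement st = conn.createStatement()) {
            // Written to postgresql.auto.conf; the new shared_buffers
            // value is only picked up after the server is restarted.
            st.execute("ALTER SYSTEM SET shared_buffers = '8GB'");
        }
    }
}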

Regards,
Ken

