Re: [PERFORM] One long transaction or multiple short transactions?

2015-10-09 Thread Graeme B. Bell

> I don't think inserts can cause contention on the server. Inserts do not
> lock tables during the transaction. You may have contention on a sequence,
> but it won't vary with transaction size.

Perhaps there could be a trigger on the inserts that creates some lock contention?
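
For example (a purely hypothetical sketch; import_table and the trigger
are invented for illustration), a trigger that maintains a counter in a
single shared row serializes the inserts: every inserting transaction
must take the row lock on that one row and holds it until commit, so
longer transactions hold it longer:

    -- Hypothetical: one shared summary row that every insert updates.
    CREATE TABLE import_stats (id int PRIMARY KEY, row_count bigint);
    INSERT INTO import_stats VALUES (1, 0);

    CREATE FUNCTION bump_row_count() RETURNS trigger AS $$
    BEGIN
        -- Row-level lock on the single stats row, held until the
        -- inserting transaction commits or rolls back.
        UPDATE import_stats SET row_count = row_count + 1 WHERE id = 1;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER import_count
        AFTER INSERT ON import_table
        FOR EACH ROW EXECUTE PROCEDURE bump_row_count();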






Re: [PERFORM] One long transaction or multiple short transactions?

2015-10-09 Thread Laurent Martelli


On 08/10/2015 01:40, Carlo wrote:


>> How many cores do you have on that machine?
>>
>> Test whether limiting the number of simultaneous feeds, e.g. bringing
>> it down to half of your normal connections, has the same positive
>> effect.
>
> I am told 32 cores on a Linux VM. The operators have tried limiting
> the number of threads, and they feel that the number of connections is
> optimal. However, under the same conditions they noticed a sizable
> boost in performance when the same import was split into two
> successive imports with shorter transactions.
>
> I am just looking to see if there is any reason to think that lock
> contention (or anything else) over longer vs. shorter single-row-write
> transactions under the same conditions might explain this.


I don't think inserts can cause contention on the server. Inserts do not
lock tables during the transaction. You may have contention on a sequence,
but it won't vary with transaction size.
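
If you want to rule heavyweight lock waits in or out directly, one
generic check (nothing specific to your schema; works on 9.4) is to
query pg_locks while the import runs:

    -- Show lock requests that are currently blocked. If transactions
    -- were piling up behind a lock, rows would appear here.
    SELECT locktype, relation::regclass, mode, pid
    FROM pg_locks
    WHERE NOT granted;

An empty result while throughput is poor points away from lock
contention as the explanation.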


Have you checked the resource usage (CPU, memory) on the client side?

How do you insert rows? Do you use the plain Postgres API?
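
For what it's worth, if the client sends one INSERT per row, the
per-statement round trips and parsing often cost more than any locking.
A generic sketch (import_table and its columns are invented for
illustration):

    -- One statement per row: a full round trip and parse per row.
    INSERT INTO import_table (id, payload) VALUES (1, 'a');
    INSERT INTO import_table (id, payload) VALUES (2, 'b');

    -- Multi-row VALUES: the same data in a single statement.
    INSERT INTO import_table (id, payload)
    VALUES (1, 'a'), (2, 'b'), (3, 'c');

    -- COPY is usually fastest for bulk loads (the data rows follow,
    -- terminated by \.).
    COPY import_table (id, payload) FROM STDIN;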

Regards,
Laurent


Re: [PERFORM] Re: Multi processor server overloads occasionally with system process while running postgresql-9.4

2015-10-09 Thread Kevin Grittner
On Saturday, October 3, 2015 4:36 AM, ajaykbs wrote:

> Version: PostgreSQL 9.4
> Hardware: HP DL980 (8 processors, 80 cores w/o hyperthreading, 512 GB RAM)
> Operating system: Red Hat Enterprise Linux Server release 6.4 (Santiago)
> uname -a: Linux host1 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST
> 2013 x86_64 x86_64 x86_64 GNU/Linux
> Single database, with separate tablespaces for main data, pg_xlog and
> indexes.
>
> I have a database of 770 GB, expected to grow to 2 TB within the next
> year. The database was running on a 2-processor HP DL560 (16 cores), and
> as its transaction load was found to be increasing, we changed the
> hardware to a DL980 with 8 processors and 512 GB RAM.
>
> Problem: It is observed that at some times during moderate load the CPU
> usage goes up to 400% and users are not able to complete their queries
> in the expected time. But the load is contributed by system processes
> only. The number of connections is normally around 50, but when this
> happens it shoots up to max_connections.

You might find this thread interesting:

http://www.postgresql.org/message-id/flat/55783940.8080...@wi3ck.info#55783940.8080...@wi3ck.info

The short version is that in existing production versions you can
easily run into such symptoms when you get to 8 or more CPU
packages.  The problem seems to be solved in the development
versions of 9.5 (with changes not suitable for back-patching to a
stable branch).
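
For what it's worth, a quick snapshot of backend counts during an
episode can confirm the pile-up described above (a generic sketch for
9.4; note that pg_stat_activity.waiting only covers heavyweight lock
waits, while the contention discussed in that thread shows up mostly
as system CPU):

    -- Count backends and how many are waiting on a heavyweight lock;
    -- compare the total against max_connections during an episode.
    SELECT count(*) AS backends,
           count(*) FILTER (WHERE waiting) AS waiting_backends
    FROM pg_stat_activity;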

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

