Re: [PERFORM] Problem with ExclusiveLock on inserts

2014-02-25 Thread Бородин Владимир
With these settings, autovacuuming of all tables became more frequent (as 
expected), but not of the big tables with lots of inserts (and absolutely no 
updates). I've set up a cron script that runs "vacuum analyze" on all databases 
every hour, and right now I don't seem to have any performance 
problems.
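For reference, such a cron entry could look roughly like this (a sketch; the binary path and schedule are assumptions, adjust for your installation):

```
# run "vacuum analyze" on every database at the top of each hour
0 * * * *  postgres  /usr/pgsql-9.3/bin/vacuumdb --all --analyze --quiet
```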

Ilya and Emre, thank you for help.

On 13.02.2014, at 16:20, Ilya Kosmodemiansky wrote:

> Vladimir,
> 
> pgbouncer works with pl/proxy in transaction pooling mode. The widespread 
> claim that statement mode is required for plproxy does not imply any 
> limitation on transaction pooling mode, as long as you have autocommit on 
> the client. Anyway, try to reduce connections.
> 
> try to set your autovacuum a bit more aggressive:
> 
> 
>  autovacuum_analyze_scale_factor=0.05 # or like that
>  autovacuum_analyze_threshold=5 
>  autovacuum_freeze_max_age=2
>  autovacuum_max_workers=20 # that is fine for slow disks
>  autovacuum_naptime=1 
>  autovacuum_vacuum_cost_delay=5 # or at least 10
>  autovacuum_vacuum_cost_limit =-1   
>  autovacuum_vacuum_scale_factor=0.01 # this setting needs to be really 
> aggressive, otherwise you simply postpone huge vacuums and the related disk 
> io; smaller portions are better
>  autovacuum_vacuum_threshold=20   
> 
> probably you will also need some ionice for autovacuum workers
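A side note on why the insert-only tables are skipped: in PostgreSQL 9.3 the autovacuum vacuum trigger counts dead tuples (produced by updates and deletes), so a purely insert-only table is normally only vacuumed by the anti-wraparound logic. A small sketch of the trigger arithmetic, comparing the defaults with the aggressive settings suggested above (the 100M row count is illustrative):

```python
# Sketch of autovacuum's vacuum trigger:
#   vacuum runs once dead_tuples > threshold + scale_factor * reltuples

def autovacuum_trigger(reltuples, threshold=50, scale_factor=0.2):
    """Dead tuples needed before autovacuum vacuums the table."""
    return threshold + scale_factor * reltuples

# Defaults on a 100M-row table: autovacuum waits for ~20M dead tuples.
default = autovacuum_trigger(100_000_000)
# Aggressive settings (threshold=20, scale_factor=0.01): ~1M dead tuples.
aggressive = autovacuum_trigger(100_000_000, threshold=20, scale_factor=0.01)
print(default, aggressive)
```

For an insert-only table both numbers are moot, since dead tuples never accumulate; that matches Vladimir's observation and is why the hourly manual vacuum is the pragmatic fix here.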
> 
> 
> On Thu, Feb 13, 2014 at 11:26 AM, Бородин Владимир  wrote:
> 
> On 13.02.2014, at 13:29, Ilya Kosmodemiansky wrote:
> 
>> Vladimir,
>> 
>> And, any effect on your problem?
> 
> It worked without problems for longer than with the previous configuration, 
> but the problem repeated again several minutes ago :(
> 
>> 
>> On Thu, Feb 13, 2014 at 9:35 AM, Бородин Владимир  wrote:
>>> I have limited max connections to 1000, reduced shared buffers to 8G and 
>>> restarted postgres.
>> 
>> 1000 is still too much in most cases. With pgbouncer in transaction
>> pooling mode, a pool size of 8-32, max_connections = 100 (the default
>> value) and 500-1500 client connections normally look more reasonable.
> 
> Clients for this db are plproxy hosts. As far as I know, plproxy can work 
> only with statement pooling.
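For what it's worth, Ilya's sizing numbers would translate into something like this in pgbouncer.ini (a sketch; the database name and host are placeholders):

```ini
[databases]
rpopdb = host=127.0.0.1 port=5432 dbname=rpopdb

[pgbouncer]
pool_mode = transaction
default_pool_size = 16     ; within the suggested 8-32 range
max_client_conn = 1000     ; client connections, 500-1500 range
; server-side connections then stay within PostgreSQL's
; max_connections = 100 (the default)
```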
> 
>> 
>> 
>>> I have also noticed that these big tables stopped being vacuumed 
>>> automatically a couple of weeks ago. It could be the reason for the 
>>> problem; I will now try to tune the autovacuum parameters to bring it 
>>> back. But yesterday I ran "vacuum analyze" on all relations manually and 
>>> that did not help.
>> 
>> What do your autovacuum parameters look like now?
> 
> They were all default except for vacuum_defer_cleanup_age = 10. I have 
> increased autovacuum_max_workers to 20 because I have 10 databases with 
> about 10 tables each. That did not make things better (I haven't seen more 
> than two autovacuum workers running simultaneously). Then I tried setting 
> vacuum_cost_limit = 1000. It is still not vacuuming the big tables. Right 
> now the parameters look like this:
> 
> root@rpopdb01e ~ # fgrep vacuum /var/lib/pgsql/9.3/data/conf.d/postgresql.conf
> #vacuum_cost_delay = 0  # 0-100 milliseconds
> #vacuum_cost_page_hit = 1   # 0-1 credits
> #vacuum_cost_page_miss = 10 # 0-1 credits
> #vacuum_cost_page_dirty = 20# 0-1 credits
> vacuum_cost_limit = 1000# 1-1 credits
> vacuum_defer_cleanup_age = 10   # number of xacts by which cleanup is delayed
> autovacuum = on # Enable autovacuum subprocess?  'on'
> log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and
> autovacuum_max_workers = 20 # max number of autovacuum subprocesses
> #autovacuum_naptime = 1min  # time between autovacuum runs
> #autovacuum_vacuum_threshold = 50   # min number of row updates before vacuum
> #autovacuum_analyze_threshold = 50  # min number of row updates before
> #autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before

Re: [PERFORM] Planner performance extremely affected by a hanging transaction (20-30 times)?

2014-02-25 Thread Josh Berkus
On 02/25/2014 08:06 AM, Tom Lane wrote:
> Well, it's three months later, and none of the people who were complaining
> so vociferously in this thread seem to have bothered to test the proposed
> solution.

Sorry about that.  The client lost interest once they had a workaround
(fixing the hanging transaction), and I don't have direct access to
their test workload -- nor was I able to reproduce the issue on a purely
synthetic workload.

> However, over at
> http://www.postgresql.org/message-id/CAFj8pRDHyAK_2JHSVKZ5YQNGQmFGVcJKcpBXhFaS=vssch-...@mail.gmail.com
> Pavel did test it and reported that it successfully alleviates his
> real-world problem.  So I'm now inclined to commit this.  Objections?

None from me.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Planner performance extremely affected by a hanging transaction (20-30 times)?

2014-02-25 Thread Tom Lane
I wrote:
> Andres Freund  writes:
>> Also, this really isn't going to fix the issue discussed here - this was
>> just about the additional ProcArrayLock contention. I don't think it
>> would change anything dramatical in your case.

> All of these proposals are pretty scary for back-patching purposes,
> anyway.  I think what we should consider doing is just changing
> get_actual_variable_range() to use a cheaper snapshot type, as in
> the attached patch (which is for 9.3 but applies to 9.2 with slight
> offset).  On my machine, this seems to make the pathological behavior
> in BR's test case go away just fine.  I'd be interested to hear what
> it does in the real-world scenarios being complained of.

Well, it's three months later, and none of the people who were complaining
so vociferously in this thread seem to have bothered to test the proposed
solution.

However, over at
http://www.postgresql.org/message-id/CAFj8pRDHyAK_2JHSVKZ5YQNGQmFGVcJKcpBXhFaS=vssch-...@mail.gmail.com
Pavel did test it and reported that it successfully alleviates his
real-world problem.  So I'm now inclined to commit this.  Objections?

regards, tom lane
