[GENERAL] Is pgbouncer still maintained?

2017-10-13 Thread Steven Winfield
Hi all,

I hope I'm posting this in the correct place.
We use pgbouncer, as I'm sure many other people do, but we are becoming 
increasingly worried by the lack of a new release since February 2016 and a 
slowdown in development activity on the master branch.
https://github.com/pgbouncer/pgbouncer

Issues and pull requests are mounting up in the tracker, often going 
unacknowledged - including requests for a new release.

We would very much like to see SCRAM support added (for which there is a pull 
request https://github.com/pgbouncer/pgbouncer/pull/227), and have the 
unrestricted ability to set per-user connection limits 
(https://github.com/pgbouncer/pgbouncer/pull/204), as well as pick up any bug 
fixes since the last official release.
We are, however, wary about just using the head of the master branch, or 
pulling from a fork, as companies offering Postgres support to clients will not 
cover these unofficial releases.

As much as we love pgbouncer, I don't see how the current situation can 
continue. I mean no disrespect to the authors - it is a great tool, and I 
understand that priorities and time constraints vary over time. But given the 
number of outstanding pull requests, there does seem to be a willing group of 
developers who could continue pgbouncer's maintenance and development, so 
perhaps more of them should be granted commit access to the master repository. 
I don't know exactly how that could be done without risking the quality of the 
code that makes it into master in future, but I thought I'd suggest it to 
stimulate some discussion.

I certainly wouldn't like to see the beginnings of a competing fork.

Any thoughts, comments, (constructive) criticisms?

Best,
Steve.


Re: R: [GENERAL] Slow queries on very big (and partitioned) table

2017-02-20 Thread Steven Winfield
> Unfortunately, that'll require locking each table and scanning it to make 
> sure that the CHECK constraint isn't violated.

Actually, CHECK constraints can be added with the NOT VALID clause. 
New tuples will be checked immediately, while the validation of existing tuples 
can be done later using ALTER TABLE ... VALIDATE CONSTRAINT ... which takes a 
less invasive lock than if you'd omitted NOT VALID.
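
A minimal sketch of the two-step approach (the table and constraint names here are hypothetical):

```sql
-- Step 1: add the constraint without scanning existing rows. Only a brief
-- lock is needed, and new or updated rows are checked from this point on.
ALTER TABLE measurements
    ADD CONSTRAINT value_positive CHECK (value > 0) NOT VALID;

-- Step 2: validate existing rows later. This takes a SHARE UPDATE EXCLUSIVE
-- lock, which does not block normal reads and writes on the table.
ALTER TABLE measurements VALIDATE CONSTRAINT value_positive;
```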

Steve.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Build exclusion constraints USING INDEX

2017-02-10 Thread Steven Winfield
> I was wondering if there was any way to break down the creation of a new 
> exclusion constraint into stages such that table locks most likely to affect 
> performance during production hours are not taken.
>
> Something like:
>
> CREATE INDEX CONCURRENTLY new_index ON my_table USING gist (column1, column2, 
> column3);
> ALTER TABLE my_table ADD CONSTRAINT my_exclusion_constraint EXCLUDE USING 
> INDEX new_index (column1 WITH &&, column2 WITH =, column3 WITH &&) NOT VALID;
> ALTER TABLE my_table VALIDATE CONSTRAINT my_exclusion_constraint;
>
> AFAICT nothing like the second statement is currently available, but I wanted 
> to check that and see if there are any workarounds.

I'm guessing there aren't any workarounds then, and exclusion constraints will 
lock the whole table against writes while they are being created?

Steve.


[GENERAL] Build exclusion constraints USING INDEX

2017-02-07 Thread Steven Winfield
Hi,

I was wondering if there was any way to break down the creation of a new 
exclusion constraint into stages such that table locks most likely to affect 
performance during production hours are not taken.

Something like:

CREATE INDEX CONCURRENTLY new_index ON my_table USING gist (column1, column2, 
column3);
ALTER TABLE my_table ADD CONSTRAINT my_exclusion_constraint EXCLUDE USING INDEX 
new_index (column1 WITH &&, column2 WITH =, column3 WITH &&) NOT VALID;
ALTER TABLE my_table VALIDATE CONSTRAINT my_exclusion_constraint;

AFAICT nothing like the second statement is currently available, but I wanted 
to check that and see if there are any workarounds.

Thanks,
Steven.



Re: [GENERAL] Request to share approach during REINDEX operation

2016-12-21 Thread Steven Winfield
>>> In my production system, there are a lot of read/write operations performed 
>>> every hour.
>>Apologies if this sounds patronising but I just wanted to check - you know 
>>that indexes are updated automatically when write operations occur, right?
>Yes. You are correct.

OK - I think the folks here are having a hard time figuring out why you are 
reindexing so frequently, especially during times when the database is already 
under load.
What benefit does reindexing give you? Have you tried not doing it? The same 
goes for vacuuming - have you tried just letting autovacuum do its job, or 
tweaking its parameters to meet your needs?
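
As a starting point, autovacuum's recent activity can be inspected from the statistics views before resorting to manual maintenance - a sketch, with no particular tables assumed:

```sql
-- When did (auto)vacuum and (auto)analyze last run on each table,
-- and how many dead tuples are still awaiting cleanup?
SELECT relname,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze,
       n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```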

(Also, list etiquette here seems to be to reply after the previous post, rather 
than “top-post”)

Steve.



This email is confidential. If you are not the intended recipient, please 
advise us immediately and delete this message. 
The registered name of Cantab- part of GAM Systematic is Cantab Capital 
Partners LLP. 
See - http://www.gam.com/en/Legal/Email+disclosures+EU for further information 
on confidentiality, the risks of non-secure electronic communication, and 
certain disclosures which we are required to make in accordance with applicable 
legislation and regulations. 
If you cannot access this link, please notify us by reply message and we will 
send the contents to you.


Re: [GENERAL] Request to share approach during REINDEX operation

2016-12-21 Thread Steven Winfield
> In my production system, there are a lot of read/write operations performed 
> every hour.

Apologies if this sounds patronising but I just wanted to check - you know that 
indexes are updated automatically when write operations occur, right?





Re: [GENERAL] Moving pg_xlog

2016-12-02 Thread Steven Winfield
2016-12-02 17:10 GMT+13:00 Michael Paquier <michael.paqu...@gmail.com>:
On Fri, Dec 2, 2016 at 1:04 PM, Melvin Davidson 
<melvin6...@gmail.com> wrote:
>  Well, while the location of pg_xlog is not currently configurable, on Linux 
> systems the way to do it is to:
>  1. stop PostgreSQL
>  2. move the pg_xlog directory to a separate partition
>  3. create a symbolic link to point to the new partition
>  4. restart PostgreSQL

It’s also worth mentioning that the xlog directory can be specified at cluster 
creation time using ‘initdb -X <directory>’, which (AFAICT) just creates the 
symlink for you.
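
For reference, the move-and-symlink steps quoted above boil down to something like this. The paths are placeholders, and throwaway directories stand in here for a real, stopped cluster:

```shell
# Throwaway stand-ins; with a real cluster you would first stop PostgreSQL
# (e.g. `pg_ctl -D "$PGDATA" stop`) and use your actual paths.
PGDATA=$(mktemp -d)        # pretend data directory
FASTDISK=$(mktemp -d)      # pretend new partition

mkdir "$PGDATA/pg_xlog"
touch "$PGDATA/pg_xlog/000000010000000000000001"    # fake WAL segment

mv "$PGDATA/pg_xlog" "$FASTDISK/pg_xlog"            # 2. move the directory
ln -s "$FASTDISK/pg_xlog" "$PGDATA/pg_xlog"         # 3. symlink back

readlink "$PGDATA/pg_xlog"   # shows where pg_xlog now really lives
```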

Steve.


Dr. Steven Winfield
Scientist
D: +44 (0)1223 755 776





[GENERAL] Last reclustering time

2014-07-15 Thread Steven Winfield
Is there any way to find when a table was last successfully CLUSTER'd, say from 
pg_catalog?
Last analyzed and last vacuumed times are available, but I can't seem to find 
anything to do with reclustering.

Thanks,

Steve.