Re: [GENERAL] PgPool or alternatives
Hi. I had to implement something similar some time ago: a group of geographically distributed database servers (Postgres), with a group of app servers in each datacenter, each one preferring the nearest database server but allowing connections to a farther one if the nearest is down.

After going through different solutions (pgpool among them), I settled on Postgres BDR with HAProxy. Each app server connects to the local HAProxy, which forwards the connection to the nearest available database server (the preference is set directly in the HAProxy configuration). That way I get high availability, and replication happens really fast, right after the transaction is committed.

The only drawback with Postgres BDR is that it has some limitations:

- New databases are NOT replicated; but you can have any number of databases with no problem.
- Users & roles must be replicated manually, as BDR works at the database level.
- There are some DDL restrictions, mostly due to how BDR works internally. In my experience, none of them has been a real problem. Full list: http://bdr-project.org/docs/stable/ddl-replication-statements.html
- DDL replication may sometimes mean automatic cancellation of running transactions, so it must be carefully planned. However, note that TEMPORARY tables are not replicated, so DDL on them is not affected by BDR restrictions.
- Even though the BDR documentation says nothing about this, it can have trouble replicating really large transactions. In my experience, my BDR cluster stopped replicating (I had to rebuild it) when an app made an 8-million-record update in a single transaction. Since that app was corrected, nothing similar has ever happened, and I think most apps should not have problems with this.

Alvaro Aguayo
Jefe de Operaciones
Open Comb Systems E.I.R.L.
Oficina: (+51-1) 3377813 | RPM: #034252 / (+51) 995540103 | RPC: (+51) 954183248
Website: www.ocs.pe

----- Original Message -----
From: "Simon Windsor"
To: pgsql-general@postgresql.org
Sent: Saturday, 21 January, 2017 2:38:59 PM
Subject: [GENERAL] PgPool or alternatives

Hi

My employer wants to move from an in-house Oracle solution to a cloud-based Postgres system. The system will involve a number of data loaders running 24x7, feeding several Postgres databases that will be used by internal applications and external customer applications.

For the record, internal and external applications make heavy use of temporary tables, which are session-related. This requirement means I cannot consider normal replication methods.

Is PgPool the only viable option that will allow the data loaders to feed [n] databases that will be functionally identical?

Simon

--
Simon Windsor
Eml: simon.wind...@cornfield.me.uk
Tel: 01454 617689
Mob: 0755 197 9733

“There is nothing in the world that some man cannot make a little worse and sell a little cheaper, and he who considers price only is that man's lawful prey.”

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
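[Editor's note] The HAProxy preference Alvaro describes can be sketched roughly as below. This is a minimal, hypothetical fragment (addresses and names are invented): the local node is preferred, and the `backup` node is only used when all non-backup servers fail their health checks.

```
# haproxy.cfg -- minimal sketch; addresses are hypothetical
listen postgres
    bind 127.0.0.1:5432
    mode tcp
    option tcp-check
    # Prefer the local node; 'backup' servers are used only when
    # every non-backup server is down.
    server db_local  10.0.0.10:5432 check
    server db_remote 10.1.0.10:5432 check backup
```

Each app server would run its own HAProxy with `db_local` pointing at its nearest database node.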
Re: [GENERAL] Doubts regarding postgres Security
On 1/21/2017 1:37 PM, Stephen Frost wrote:
> * PAWAN SHARMA (er.pawanshr0...@gmail.com) wrote:
>> So, there is no solution for my first question; we need that if users enter the wrong password more than 5 times, their account gets locked, and then only the DBA can unlock it.
>
> I understood the question and there is an answer: use PAM. The other option is to not use password-based authentication with PG and instead use a better method such as certificate-based auth or GSSAPI.

Or just don't let 'users' connect directly to the database at all. Users run an app (or browser), which talks to an app server, which has the business logic, makes the database connection, and executes the business-logic-specific queries.

--
john r pierce, recycling bits in santa cruz
Re: [GENERAL] Doubts regarding postgres Security
* Thomas Kellerer (spam_ea...@gmx.net) wrote:
> Stephen Frost wrote on 21.01.2017 at 22:37:
>>> So, there is no solution for my first question; we need that if users enter the wrong password more than 5 times, their account gets locked, and then only the DBA can unlock it.
>>
>> I understood the question and there is an answer: use PAM.
>
> What about LDAP?

Perhaps, but not what I'd recommend. If you've got LDAP then you probably have Active Directory, and you should just be using GSSAPI. If you've actually *only* got LDAP, then perhaps, but all the LDAP methods require the user's password to be sent, in one form or another, to the PG server, which isn't ideal, and usually there's a better option in such environments.

Of course, PAM requires the user's password to be sent to the server also, which is also why I don't generally recommend using it either, but without more info ...

Thanks!

Stephen
Re: [GENERAL] Doubts regarding postgres Security
Stephen Frost wrote on 21.01.2017 at 22:37:
>> So, there is no solution for my first question; we need that if users enter the wrong password more than 5 times, their account gets locked, and then only the DBA can unlock it.
>
> I understood the question and there is an answer: use PAM.

What about LDAP?
Re: [GENERAL] PgPool or alternatives
Simon Windsor wrote on 21.01.2017 at 21:26:
> I was wondering if there is another option that will allow me to spool all ALTER|CREATE|DELETE|DROP|INSERT|UPDATE commands to all nodes, and SELECTs to any of the connected nodes. The application can actually handle separate READ|WRITE nodes, from how it was written for Oracle.

You could use logical replication: https://2ndquadrant.com/en/resources/pglogical/
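[Editor's note] For reference, a pglogical setup is driven from SQL. A minimal sketch, assuming the extension is installed on both nodes, `wal_level = 'logical'` is set, and all node names and DSNs below are hypothetical:

```sql
-- On the provider (the node where writes happen)
SELECT pglogical.create_node(
    node_name := 'provider1',
    dsn := 'host=provider.example.com port=5432 dbname=appdb'
);
SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);

-- On each subscriber (a read-only copy)
SELECT pglogical.create_node(
    node_name := 'subscriber1',
    dsn := 'host=subscriber.example.com port=5432 dbname=appdb'
);
SELECT pglogical.create_subscription(
    subscription_name := 'sub1',
    provider_dsn := 'host=provider.example.com port=5432 dbname=appdb'
);
```

Note that, unlike pgPool's statement-level replication, DDL is not replicated automatically; pglogical provides pglogical.replicate_ddl_command() for that.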
Re: [GENERAL] Doubts regarding postgres Security
Pawan,

Please be sure to include the mailing list on replies, so others can benefit from the discussion. Also, please reply in-line, as I do below, instead of top-posting.

* PAWAN SHARMA (er.pawanshr0...@gmail.com) wrote:
> So, there is no solution for my first question; we need that if users enter the wrong password more than 5 times, their account gets locked, and then only the DBA can unlock it.

I understood the question and there is an answer: use PAM. The other option is to not use password-based authentication with PG and instead use a better method such as certificate-based auth or GSSAPI.

Thanks!

Stephen
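[Editor's note] The PAM approach can be sketched roughly as below. This is a hypothetical fragment (service name and module choice are examples, and vary by distribution); note Stephen's non-root caveat, since pam_tally2's default tally file is normally only writable by root and would need adjusting for the postgres server process.

```
# pg_hba.conf -- use the 'pam' auth method (service name is an example)
host  all  all  0.0.0.0/0  pam  pamservice=postgresql

# /etc/pam.d/postgresql -- lock the account after 5 failed attempts,
# until an administrator resets the counter
auth     required   pam_tally2.so  deny=5
auth     include    system-auth
account  required   pam_tally2.so
account  include    system-auth
```

An administrator would then reset a locked account with `pam_tally2 --user someuser --reset`. Whether this works unmodified depends on the distribution and on the permissions available to the PostgreSQL server.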
Re: [GENERAL] Doubts regarding postgres Security
Pawan,

* PAWAN SHARMA (er.pawanshr0...@gmail.com) wrote:
> 1. How can we enable an account-lockout feature after a maximum number of invalid password entries?

There are ways to accomplish this, but they're unfortunately complicated. In the past, I've implemented these kinds of requirements by using the 'PAM' auth method and configuring saslauthd and PAM with various PAM modules. Unfortunately, most PAM modules are not very well suited to running as non-root or with alternate directories, so it can be a bit tricky.

> 2. How can we use SSL encryption in Postgres on a Linux environment?

This can be a bit tricky too, depending on your PKI, but generally speaking, you simply need to configure the SSL-relevant options in postgresql.conf and then set SSLMODE=verify-ca (or verify-full, depending on what you want to do) on the client. Specific documentation is available here: https://www.postgresql.org/docs/current/static/ssl-tcp.html

> 3. How can we implement transparent data encryption in Postgres?

If you mean between PG and the filesystem, you could either use full-disk encryption provided by the OS, or use an extension like pgcrypto. Documentation for pgcrypto is here: https://www.postgresql.org/docs/current/static/pgcrypto.html

Thanks!

Stephen
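[Editor's note] A minimal sketch of the server and client sides of the SSL setup described above (file names, host, and database are hypothetical; see the ssl-tcp documentation linked in the reply for details):

```
# postgresql.conf -- server side
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file  = 'server.key'
ssl_ca_file   = 'root.crt'

# pg_hba.conf -- require SSL for remote connections
hostssl  all  all  0.0.0.0/0  md5

# client side -- verify the server certificate; the CA certificate must
# be in ~/.postgresql/root.crt or pointed to by the sslrootcert option
psql "host=db.example.com dbname=appdb sslmode=verify-full"
```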
Re: [GENERAL] PgPool or alternatives
Hi

Thanks for the reply. We were not planning to use pgPool's connection-pooling mode, but its replication mode. Our tests with pgPool allowed us to install a backup DB via pgPool on each node, and overnight test loads of 10+ GB of inserts/updates/deletes all work fine, with only a slight loss of performance vs a standalone DB.

I was wondering if there is another option that will allow me to spool all ALTER|CREATE|DELETE|DROP|INSERT|UPDATE commands to all nodes, and SELECTs to any of the connected nodes. The application can actually handle separate READ|WRITE nodes, from how it was written for Oracle.

Simon

On 21/01/2017 20:09, Stephen Frost wrote:
> Simon,
>
> * Simon Windsor (simon.wind...@cornfield.me.uk) wrote:
>> My employer wants to move from an in-house Oracle solution to a cloud-based Postgres system. The system will involve a number of data loaders running 24x7, feeding several Postgres databases that will be used by internal applications and external customer applications.
>>
>> For the record, internal and external applications make heavy use of temporary tables, which are session-related. This requirement means I cannot consider normal replication methods.
>>
>> Is PgPool the only viable option that will allow the data loaders to feed [n] databases that will be functionally identical?
>
> I'm not sure what you mean by 'functionally identical', but I wouldn't generally consider that to be a property of pgpool (or pgbouncer, or any other connection pooler, really).
>
> That said, my general feeling is that pgbouncer tends to be simpler, faster, and less likely to introduce oddities that you don't expect. The 'session' mode might work for you, though it might be debatable if that really helps you all that much. 'transaction' mode is what I usually recommend, as it allows idle connections to be handled by pgbouncer (unlike 'session' mode), but there are caveats to using that mode, of course.
>
> I'm a bit curious where you're thinking of using the connection pooler, though. If you have data loaders running 24x7 feeding data constantly to PG, do you really need a connection pooler for those? Connection poolers make a lot of sense for environments where there's lots of down-time on the connection, but the less down-time, the less they make sense.
>
> Thanks!
>
> Stephen

--
Simon Windsor
Eml: simon.wind...@cornfield.me.uk
Tel: 01454 617689
Mob: 0755 197 9733

“There is nothing in the world that some man cannot make a little worse and sell a little cheaper, and he who considers price only is that man's lawful prey.”
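[Editor's note] The write-to-all-nodes / read-from-any split Simon asks about amounts to statement classification. A deliberately simplified sketch (any real router needs proper SQL parsing to handle writable CTEs, SELECT ... FOR UPDATE, COPY, and functions with side effects):

```python
import re

# Statements that must be spooled to every write node; everything else
# (effectively SELECTs) may be routed to any connected read node.
WRITE_RE = re.compile(r"^\s*(ALTER|CREATE|DELETE|DROP|INSERT|UPDATE)\b",
                      re.IGNORECASE)

def route(statement: str) -> str:
    """Classify a statement as 'write' (send to all nodes) or 'read'."""
    return "write" if WRITE_RE.match(statement) else "read"

print(route("INSERT INTO t VALUES (1)"))  # write
print(route("SELECT * FROM t"))           # read
```

Since the application already distinguishes READ and WRITE connections (as it did for Oracle), this classification could live in the application's connection layer rather than in a middleware like pgPool.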
Re: [GENERAL] PgPool or alternatives
Simon,

* Simon Windsor (simon.wind...@cornfield.me.uk) wrote:
> My employer wants to move from an in-house Oracle solution to a cloud-based Postgres system. The system will involve a number of data loaders running 24x7, feeding several Postgres databases that will be used by internal applications and external customer applications.
>
> For the record, internal and external applications make heavy use of temporary tables, which are session-related. This requirement means I cannot consider normal replication methods.
>
> Is PgPool the only viable option that will allow the data loaders to feed [n] databases that will be functionally identical?

I'm not sure what you mean by 'functionally identical', but I wouldn't generally consider that to be a property of pgpool (or pgbouncer, or any other connection pooler, really).

That said, my general feeling is that pgbouncer tends to be simpler, faster, and less likely to introduce oddities that you don't expect. The 'session' mode might work for you, though it might be debatable if that really helps you all that much. 'transaction' mode is what I usually recommend, as it allows idle connections to be handled by pgbouncer (unlike 'session' mode), but there are caveats to using that mode, of course.

I'm a bit curious where you're thinking of using the connection pooler, though. If you have data loaders running 24x7 feeding data constantly to PG, do you really need a connection pooler for those? Connection poolers make a lot of sense for environments where there's lots of down-time on the connection, but the less down-time, the less they make sense.

Thanks!

Stephen
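[Editor's note] The 'transaction' mode recommendation would look roughly like this in pgbouncer.ini (host, database, and sizes are hypothetical). One of the caveats alluded to is directly relevant to this thread: in transaction mode, session state such as temporary tables is not guaranteed to survive between transactions, since successive transactions may use different server connections.

```ini
; pgbouncer.ini -- minimal sketch; names and sizes are examples
[databases]
appdb = host=10.0.0.10 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 200
default_pool_size = 20
```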
[GENERAL] PgPool or alternatives
Hi

My employer wants to move from an in-house Oracle solution to a cloud-based Postgres system. The system will involve a number of data loaders running 24x7, feeding several Postgres databases that will be used by internal applications and external customer applications.

For the record, internal and external applications make heavy use of temporary tables, which are session-related. This requirement means I cannot consider normal replication methods.

Is PgPool the only viable option that will allow the data loaders to feed [n] databases that will be functionally identical?

Simon

--
Simon Windsor
Eml: simon.wind...@cornfield.me.uk
Tel: 01454 617689
Mob: 0755 197 9733

“There is nothing in the world that some man cannot make a little worse and sell a little cheaper, and he who considers price only is that man's lawful prey.”
Re: [GENERAL] Doubts regarding postgres Security
On Sun, Jan 22, 2017 at 12:32:21AM +0530, PAWAN SHARMA wrote:
> I have a few questions regarding Postgres security.
>
> 1. How can we enable an account-lockout feature after a maximum number of invalid password entries?
> 2. How can we use SSL encryption in Postgres on a Linux environment?
> 3. How can we implement transparent data encryption in Postgres?

The information you need to answer your questions is found here: https://www.postgresql.org/docs/devel/static/index.html

Regards,
Karsten Hilbert
--
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
[GENERAL] Doubts regarding postgres Security
Hi All,

I have a few questions regarding Postgres security.

1. How can we enable an account-lockout feature after a maximum number of invalid password entries?
2. How can we use SSL encryption in Postgres on a Linux environment?
3. How can we implement transparent data encryption in Postgres?

Thanks
Re: [GENERAL] pg_upgrade error on FreeBSD (9.1 -> 9.5)
Amitabh Kant writes:
> command: "/var/tmp/pgbin.SPOsRj4D/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/usr/local/pgsql/data91" -o "-p 50432 -b -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directory='/usr/local/pgsql'"

Note the unix_socket_directory parameter, which is indeed being applied, because we can see it again in the ps output:

> pgsql 26636 0.0 1.4 66960 14512 - Is 4:08AM 0:00.06 /var/tmp/pgbin.SPOsRj4D/bin/postgres -D /usr/local/pgsql/data91 -p 50432 -b -c listen_addresses= -c unix_socket_permissions=0700 -c unix_socket_directory=/usr/local/pgsql

However, your psql is looking for the socket in /tmp:

> $ psql -p 50432 -d template1
> psql: could not connect to server: No such file or directory
> Is the server running locally and accepting
> connections on Unix domain socket "/tmp/.s.PGSQL.50432"?

You could successfully connect to that server with "psql -p 50432 -h /usr/local/pgsql ...", I expect. The question is why pg_upgrade issued that option and then failed to cope with the consequences. I suspect it has something to do with one installation being configured with a different default socket directory than the other, but I don't have enough facts.

regards, tom lane
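[Editor's note] The mismatch Tom points out is easy to see from libpq's socket naming convention: when `host` begins with `/`, it is treated as a directory, and the socket file inside it is named `.s.PGSQL.<port>`. A small sketch of that convention:

```python
def unix_socket_path(directory: str, port: int) -> str:
    """Build the socket file path libpq looks for when `host` is a
    directory (e.g. `psql -h /usr/local/pgsql -p 50432`)."""
    return f"{directory}/.s.PGSQL.{port}"

# The default psql was using vs. the directory the old server was told to use:
print(unix_socket_path("/tmp", 50432))             # /tmp/.s.PGSQL.50432
print(unix_socket_path("/usr/local/pgsql", 50432)) # /usr/local/pgsql/.s.PGSQL.50432
```

Passing `-h /usr/local/pgsql` (or setting PGHOST) makes psql look in the directory the old server actually bound, which is why Tom expects that connection to succeed.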
Re: [GENERAL] Free seminar in Malmo: PostgreSQL at 10TB and Beyond
Just share the slides/video in this thread, friend.

On Sat, Jan 21, 2017 at 10:57 AM, Seref Arikan wrote:
> Any chance this will be recorded? The content looks great and would be of interest to many.
>
> Cheers
> Seref
>
> On Sat, Jan 21, 2017 at 8:55 AM, Chris Travers wrote:
>> Hi;
>>
>> I have been asked to deliver a seminar on the use of PostgreSQL in large-data environments. I have decided to send this to the general list as well because I think it may be of general interest here.
>>
>> We will be talking about problems and solutions for PostgreSQL when using it to manage 10-12 TB of data under heavy load.
>>
>> In particular, 4 specific problems we faced will be discussed in technical detail:
>>
>> 1. How table partitioning solved a number of our performance problems.
>> 2. How non-1NF designs solved some problems and created some new ones.
>> 3. How PL/Perl was used to address large data sets of semi-structured data.
>> 4. How use of PostgreSQL as a job-queue store raised unexpected problems, and their solutions.
>>
>> The discussion surrounds a case study which shows how well PostgreSQL scales, but offers a number of lessons for database management at smaller sizes as well.
>>
>> If you are interested and in the Southern Sweden or Copenhagen areas, please feel free to register at https://www.eventbrite.com/e/postgresql-at-10-tb-and-beyond-tickets-30841174784
>>
>> --
>> Best Wishes,
>> Chris Travers
>>
>> Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in.
>> http://www.efficito.com/learn_more
[GENERAL] pg_upgrade error on FreeBSD (9.1 -> 9.5)
Hi

I am trying to upgrade an existing PostgreSQL cluster from 9.1 to 9.5 on FreeBSD. pg_upgrade fails with the following error [verbose output: http://pastebin.com/YhR8vD03]:

==
$ /usr/local/bin/pg_upgrade -k -d /usr/local/pgsql/data91 -D /usr/local/pgsql/data --old-bindir=/var/tmp/pgbin.SPOsRj4D/bin --new-bindir=/usr/local/bin --check

*failure*
Consult the last few lines of "pg_upgrade_server.log" for the probable cause of the failure.
pg_ctl failed to start the old server, or connection failed
Failure, exiting

$ cat pg_upgrade_server.log
- pg_upgrade run on Sat Jan 21 04:08:13 2017 -
command: "/var/tmp/pgbin.SPOsRj4D/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/usr/local/pgsql/data91" -o "-p 50432 -b -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directory='/usr/local/pgsql'" start >> "pg_upgrade_server.log" 2>&1
waiting for server to start stopped waiting
pg_ctl: could not start server
Examine the log output.
==

On checking the process list, the old postgres server seems to have started, but I am unable to connect to it using psql.

==
$ ps auwwx | grep post
pgsql 26636 0.0 1.4 66960 14512 - Is 4:08AM 0:00.06 /var/tmp/pgbin.SPOsRj4D/bin/postgres -D /usr/local/pgsql/data91 -p 50432 -b -c listen_addresses= -c unix_socket_permissions=0700 -c unix_socket_directory=/usr/local/pgsql
pgsql 26638 0.0 1.4 66960 14516 - Ss 4:08AM 0:00.23 postgres: writer process (postgres)
pgsql 26639 0.0 1.4 66960 14516 - Ss 4:08AM 0:00.15 postgres: wal writer process (postgres)
pgsql 26640 0.0 0.9 26720 9568 - Ss 4:08AM 0:00.02 postgres: stats collector process (postgres)

$ psql -p 50432 -d template1
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.50432"?
==

If I run the pg_ctl command directly with just the minimum parameters, it still gives me an error, but it seems to work correctly.

==
$ /var/tmp/pgbin.SPOsRj4D/bin/pg_ctl -o "-p 50432 " -w -l "pg_upgrade_server.log" -D "/usr/local/pgsql/data91" start
waiting for server to start stopped waiting
pg_ctl: could not start server
Examine the log output.

$ cat pg_upgrade_server.log

$ psql -p 50432 -d template1
psql (9.5.5, server 9.1.20)
Type "help" for help.

template1=#
==

Where am I going wrong?

Amitabh

Detailed steps that I followed for the upgrade:

Since PostgreSQL installed on FreeBSD through its ports/packages doesn't have version-specific binaries or a version-specific data directory until PG 9.5 (9.6 does have a version-specific data directory), use of pg_upgrade requires a little bit of work:

i) Stopped the current PG 9.1 instance. Renamed the data directory to data91 (under /usr/local/pgsql).
ii) Installed the PG 9.1 binaries in a different location (/var/tmp/pgbin.SPOsRj4D/bin). Set owner and permissions for the pgsql user (the default superuser on FreeBSD).
iii) Removed the PG 9.1 installation from the server & installed PG 9.5 in its default location (under /usr/local/bin).
iv) Initialized the DB instance using the initdb command (the PG 9.5 data location is /usr/local/pgsql/data).
v) Ran the following command for the upgrade check:
/usr/local/bin/pg_upgrade -k -d /usr/local/pgsql/data91 -D /usr/local/pgsql/data --old-bindir=/var/tmp/pgbin.SPOsRj4D/bin --new-bindir=/usr/local/bin --check --verbose
Output: http://pastebin.com/YhR8vD03

OS: FreeBSD 10.3 (running as a VirtualBox guest OS; the host is macOS)
Re: [GENERAL] Free seminar in Malmo: PostgreSQL at 10TB and Beyond
Any chance this will be recorded? The content looks great and would be of interest to many.

Cheers
Seref

On Sat, Jan 21, 2017 at 8:55 AM, Chris Travers wrote:
> Hi;
>
> I have been asked to deliver a seminar on the use of PostgreSQL in large-data environments. I have decided to send this to the general list as well because I think it may be of general interest here.
>
> We will be talking about problems and solutions for PostgreSQL when using it to manage 10-12 TB of data under heavy load.
>
> In particular, 4 specific problems we faced will be discussed in technical detail:
>
> 1. How table partitioning solved a number of our performance problems.
> 2. How non-1NF designs solved some problems and created some new ones.
> 3. How PL/Perl was used to address large data sets of semi-structured data.
> 4. How use of PostgreSQL as a job-queue store raised unexpected problems, and their solutions.
>
> The discussion surrounds a case study which shows how well PostgreSQL scales, but offers a number of lessons for database management at smaller sizes as well.
>
> If you are interested and in the Southern Sweden or Copenhagen areas, please feel free to register at https://www.eventbrite.com/e/postgresql-at-10-tb-and-beyond-tickets-30841174784
>
> --
> Best Wishes,
> Chris Travers
>
> Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in.
> http://www.efficito.com/learn_more
[GENERAL] Free seminar in Malmo: PostgreSQL at 10TB and Beyond
Hi;

I have been asked to deliver a seminar on the use of PostgreSQL in large-data environments. I have decided to send this to the general list as well because I think it may be of general interest here.

We will be talking about problems and solutions for PostgreSQL when using it to manage 10-12 TB of data under heavy load.

In particular, 4 specific problems we faced will be discussed in technical detail:

1. How table partitioning solved a number of our performance problems.
2. How non-1NF designs solved some problems and created some new ones.
3. How PL/Perl was used to address large data sets of semi-structured data.
4. How use of PostgreSQL as a job-queue store raised unexpected problems, and their solutions.

The discussion surrounds a case study which shows how well PostgreSQL scales, but offers a number of lessons for database management at smaller sizes as well.

If you are interested and in the Southern Sweden or Copenhagen areas, please feel free to register at https://www.eventbrite.com/e/postgresql-at-10-tb-and-beyond-tickets-30841174784

--
Best Wishes,
Chris Travers

Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in.
http://www.efficito.com/learn_more