Re: Best opensource Postgresql monitoring tool
Thank you Bhautik, I was also searching and found one - pgwatch from Cybertec. Has anybody got views or feedback on this one? Has anybody used it for monitoring purposes? Thank you all for your valuable feedback and suggestions.

Regards

On Tue, 10 Jan 2023, 16:47 Bhautik Chudasama wrote:
> pgAdmin is one of the most popular tools. You can see all the necessary metrics,
> such as QPS and TPS, and interaction with tables. If you have a Prometheus setup,
> then you can scrape the necessary metrics.
>
> On Tue, Jan 10, 2023, 7:59 PM Vikas Sharma wrote:
>> Hi there,
>>
>> Could you please advise on the best open-source monitoring tool for
>> PostgreSQL?
>>
>> Thanks & Best Regards
>> Vikas
Best opensource Postgresql monitoring tool
Hi there,

Could you please advise on the best open-source monitoring tool for PostgreSQL?

Thanks & Best Regards
Vikas
pgbouncer: Warning xbuf_connect failed: no such file or directory
Hi there,

Could someone shed some light on this error, please? I am trying to set up pgbouncer in front of a Postgres PaaS instance.

Regards
Vikas
Re: how to get value of parameter set in session for other user
Hi Tom,

Thanks for the clarification.

Regards
Vikas

On Tue, 7 Dec 2021 at 14:45, Tom Lane wrote:
> Vikas Sharma writes:
>> Is it possible in postgres to get the value of the parameter currently set
>> in the session of the other user?
>
> No. That information only exists within the other session's process.
>
> regards, tom lane
how to get value of parameter set in session for other user
Hi there,

Is it possible in Postgres to get the value of a parameter currently set in another user's session? The other user might have set the value using SET. If the other user has set the value with ALTER ROLE, then it should appear in "select * from pg_roles".

Thank you
-Vikas
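For reference, a short sketch of what is and is not visible from another session, using the standard catalogs (only your own session's live value is queryable):

```sql
-- A session can read only its own current values:
SHOW work_mem;
SELECT current_setting('work_mem');

-- Persistent per-role defaults (ALTER ROLE ... SET) are visible to everyone:
SELECT rolname, rolconfig FROM pg_roles WHERE rolconfig IS NOT NULL;

-- Per-role, per-database overrides (ALTER ROLE ... IN DATABASE ... SET)
-- live in a separate catalog:
SELECT * FROM pg_db_role_setting;
```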
Cluster fencing tool/software for PostgreSQL cluster with streaming replication
Hi,

I am planning an enterprise-grade PostgreSQL cluster and am looking for tools/software that will do the cluster management or fencing needed to avoid split-brain. Please could you let me know of any tools that can do this and are suitable for a production environment?

The stack is PostgreSQL 12 with streaming replication + repmgr, and pgpool for connection management and load balancing.

Regards
Vikas S
Re: pgcrypto - real life examples to encrypt / decrypt
Thank you, guys. These are very helpful pointers. I will go away and see how much depth I need.

Regards
Vikas S.

On Tue, 3 Aug 2021 at 14:36, Joe Conway wrote:
> On 8/3/21 8:43 AM, Luca Ferrari wrote:
>> On Tue, Aug 3, 2021 at 1:03 PM Vikas Sharma wrote:
>>> My question is, can I use the gpg public/secret key instead of the
>>> 'Secret password' in the above PGP_Sym_encrypt/decrypt? I can create a wrapper
>>> function to read the public/secret keys to hide them from appearing as clear text.
>>
>> I think you are looking for something like:
>>
>> pgp_pub_encrypt( clear_text,
>>                  dearmor( '-----BEGIN PGP PUBLIC KEY BLOCK-----
>> ...
>> -----END PGP PUBLIC KEY BLOCK-----' ) );
>>
>>> still researching how to encrypt a column with sensitive data as a best
>>> practice to use in OLTP production with minimal impact on performance.
>>
>> Clearly, as you add more stuff to do, performance will be lower. I
>> strongly recommend you analyze whether column encryption is really what
>> you need for your purposes, because in my limited experience it is
>> often too much work compared with other approaches (e.g., disk and
>> backup encryption).
>
> Generally agreed. This topic is vast and complex and probably beyond
> what most people want to discuss by typing (at least for me) ;-)
>
> That said, you might find this extension written by Bruce Momjian useful:
>
> https://momjian.us/download/pgcryptokey/
>
> HTH,
> Joe
> --
> Crunchy Data - http://crunchydata.com
> PostgreSQL Support for Secure Enterprises
> Consulting, Training, & Open Source Development
Re: pgcrypto - real life examples to encrypt / decrypt
Thank you Luca,

For now I have seen the below:

pgp_pub_encrypt -- using the public gpg key
pgp_pub_decrypt -- using the secret gpg key

Select crypt('test', gen_salt('md5'));
Select PGP_SYM_DECRYPT(PGP_SYM_ENCRYPT('Some data','Secret password','compress-algo=1, cipher-algo=aes256'),'Secret password');

My question is, can I use the gpg public/secret key instead of the 'Secret password' in the above PGP_Sym_encrypt/decrypt? I can create a wrapper function to read the public/secret keys to hide them from appearing as clear text.

I am still researching how to encrypt a column with sensitive data as a best practice, for use in OLTP production with minimal impact on performance.

Regards
Vikas S

On Tue, 3 Aug 2021 at 11:03, Luca Ferrari wrote:
> On Mon, Aug 2, 2021 at 11:14 PM Vikas Sharma wrote:
>> Dear Experts,
>>
>> Could you please share some real life examples of using pgcrypto in production?
>>
>> I am planning to use it in our environment and wondering what could be
>> the best practice for its use.
>
> It is not clear what you are going to do and which kind of encryption
> you are going to use.
> For a symmetric encryption this could be a starting point:
>
> UPDATE secret
> SET secret_text = pgp_sym_encrypt( clear_text, 'A-Strong-Secret-Password' );
>
> I do remember there was an extension made to overtake pgcrypto, but
> currently I don't remember the name.
>
> Luca
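For the asymmetric case the question asks about, a minimal sketch of pgcrypto's public-key functions (key blocks abbreviated with `...`; `secret_table` is a hypothetical table holding the bytea ciphertext, and the extension must already be installed):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Encrypt with the public key; only the public key needs to be visible here:
SELECT pgp_pub_encrypt(
         'Some data',
         dearmor('-----BEGIN PGP PUBLIC KEY BLOCK-----
...
-----END PGP PUBLIC KEY BLOCK-----'));

-- Decrypt with the secret key, plus its passphrase if it has one:
SELECT pgp_pub_decrypt(
         ciphertext,
         dearmor('-----BEGIN PGP PRIVATE KEY BLOCK-----
...
-----END PGP PRIVATE KEY BLOCK-----'),
         'key-passphrase')
FROM secret_table;
```

A wrapper function that reads the key material from a restricted table or file, as suggested in the thread, keeps the keys out of query text and logs.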
Re: pgcrypto - real life examples to encrypt / decrypt
Thanks Adrian, I will check them out.

Regards
Vikas S

On Mon, 2 Aug 2021 at 22:22, Adrian Klaver wrote:
> On 8/2/21 2:14 PM, Vikas Sharma wrote:
>> Dear Experts,
>>
>> Could you please share some real life examples of using pgcrypto in production?
>>
>> I am planning to use it in our environment and wondering what could be
>> the best practice for its use.
>
> I would start by doing a search on 'using pgcrypto', that will return
> articles/blogs with pointers.
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
pgcrypto - real life examples to encrypt / decrypt
Dear Experts,

Could you please share some real-life examples of using pgcrypto in production?

I am planning to use it in our environment and am wondering what the best practice for its use might be.

Thank you. Regards
Vikas S
pgpool-II 3.7.5 with ssl
Greetings,

We use PostgreSQL 9.6 and pgpool 3.7.5, and we have now been asked to enable SSL for data in transit. I have set up SSL server-side on the PostgreSQL DBs (master and slave) and can see in pg_stat_ssl that the master-slave communication and the connections from the application show ssl = 't'.

I have set the parameters in pgpool.conf as well, but I am not sure pgpool is working with SSL enabled, because when I try to connect with psql through pgpool I get the below:

[postgres@pgool-server ~]$ psql 'host=localhost port=5432 dbname=postgres user=user1 sslmode=require'
psql: server does not support SSL, but SSL was required

Can you please advise on the above error? Is pgpool not supporting SSL because it was not compiled with OpenSSL? However, I can see libssl.so.10 when I run 'ldd' on the pgpool binary.

Best Regards
Vikas
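For reference, the frontend-side SSL parameters in pgpool.conf look roughly like this (the paths are placeholders, and pgpool must have been built with --with-openssl for them to take effect; a restart of pgpool is needed after changing them):

```
# pgpool.conf - SSL between clients and pgpool
ssl = on
ssl_key  = '/path/to/server.key'
ssl_cert = '/path/to/server.cert'
```

Note that pgpool terminates SSL itself: the client-to-pgpool leg uses these settings, while the pgpool-to-backend leg is negotiated separately, so pg_stat_ssl showing 't' for application connections does not by itself prove the client-facing side is encrypted.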
Partitioning large table (140GB)
Hi there,

In our production we use Postgres 9.5 with streaming replication using repmgr. There is a large table of 140GB which receives lots of inserts. Is it possible to partition this table in this version of Postgres? If so, please can someone let me know the best way to accomplish this and the best practices around it?

Thanks & Regards
Vikas
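Worth noting: declarative partitioning arrived in PostgreSQL 10, so on 9.5 partitioning means table inheritance plus an insert-routing trigger. A minimal sketch with hypothetical table and column names:

```sql
-- Parent table; children carry the data:
CREATE TABLE measurements (
    id      bigserial,
    logdate date NOT NULL,
    payload text
);

-- One child per range, with a CHECK constraint so constraint exclusion
-- can skip irrelevant children at query time:
CREATE TABLE measurements_2023 (
    CHECK (logdate >= DATE '2023-01-01' AND logdate < DATE '2024-01-01')
) INHERITS (measurements);

-- Route inserts on the parent to the right child:
CREATE OR REPLACE FUNCTION measurements_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2023-01-01' AND NEW.logdate < DATE '2024-01-01' THEN
        INSERT INTO measurements_2023 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for logdate %', NEW.logdate;
    END IF;
    RETURN NULL;  -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert_trg
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert();
```

Migrating 140GB of existing rows into children is usually done in batches to keep WAL volume and replication lag manageable.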
PGPool version 4.0.6-1
Hi All,

I have a confusion about the pgpool-II version. Can you let me know the difference between pgpool-II-pg11-4.0.6-1 and pgpool-II-96-4.0.6-1?

To me it looks like they are the same version of pgpool-II, i.e. 4.0.6, compiled against PostgreSQL 11 or PostgreSQL 9.6 respectively; otherwise they both have the same features...

Regards
Vikas
pg_receivexlog or archive_command
Hi,

I am wondering which is the best way to archive the xlogs for backup and recovery - pg_receivexlog or archive_command?

pg_receivexlog seems best suited, because the archived copy is streamed as the xlog is being written, while archive_command only copies a WAL segment once it has been fully written.

Best wishes
Vikas
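A minimal sketch of the two approaches, with placeholder paths and hostnames:

```
# postgresql.conf - segment-at-a-time archiving:
archive_mode = on
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'

# Streaming alternative, run on the backup host
# (pg_receivexlog was renamed pg_receivewal in PostgreSQL 10):
pg_receivexlog -h primary.example.com -U replicator -D /archive/wal
```

The trade-off is roughly as described above: pg_receivexlog narrows the window of un-archived WAL to near zero, at the cost of keeping a replication connection (ideally backed by a replication slot) permanently open, whereas archive_command is simpler but can lose up to one partially-filled segment.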
pg_xlog on slaves has grown to 200GB
Hello,

We are using PostgreSQL 9.5 with repmgr 3.3.2 in a streaming replication setup with 1 master and 2 slaves. I have noticed that pg_xlog on the slaves has grown to 200GB and is still growing.

Please advise why pg_xlog is growing and not pruning itself. Is there any parameter I need to set to accomplish this, or will repmgr control it itself?

I am not 100% sure, but I feel pg_xlog never used to grow this much, or used to clear itself. I can see there are WALs since May 2019 but none before that, and I want to understand why the WALs started accumulating then. We have had this setup for more than a year.

Best Regards
Vikas Sharma
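Not from the thread, but the usual suspects for WAL accumulating on a 9.5 node can be checked quickly:

```sql
-- 1. An abandoned replication slot pins WAL indefinitely:
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;

-- 2. A failing archive_command leaves segments behind; look for stuck
--    .ready files in pg_xlog/archive_status/ on the affected node.

-- 3. Oversized retention settings:
SHOW wal_keep_segments;
SHOW archive_mode;
```

The May 2019 cutoff mentioned above is the kind of signature an inactive slot or a newly-broken archive_command leaves: everything from the moment of the change onward is retained.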
Re: Which version to upgrade up to
The architects and developers have a perception that the newest release will always have undiscovered bugs, while an older one is already proven in production by others. They feel 11.2 is a better bet than 11.4.

On Wed, Jul 31, 2019, 16:24 Luca Ferrari wrote:
> On Wed, Jul 31, 2019 at 4:55 PM Vikas Sharma wrote:
>> Should I go for 10.9 or 11.2? The architects are suggesting 11.2
>
> Moving from 9.5 requires in any case a major version upgrade, therefore
> I would go for the latest one, 11.4.
> Are there any particular needs that feed your doubts about the version?
>
> Luca
Which version to upgrade up to
Hi All,

We are using Postgres 9.5.9 in streaming replication with repmgr. The project is now considering updating the PostgreSQL instances to a later version, and I am looking at which version to upgrade to. I can see the current release on the 11 branch is 11.4, and 10.9 on the 10 branch.

How should we decide which version to use? Should I go for 10.9 or 11.2? The architects are suggesting 11.2.

Please advise.

Regards
Vikas
SQL query
Hi,

I have come across a query that a developer wrote to update a few rows in a table. The query did update the two desired rows, but it also updated the rest of the table, setting the column value to 'false':

Update tableA set col1 = null and col2 in (1,2);

The query updated col1 to null for the rows where col2 was either 1 or 2; the rest of the rows had col1 updated to 'false'. The above was run without a where clause.

Could the experts throw some light on this?

Regards
Vikas Sharma
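The behaviour follows from how the statement parses: with no WHERE clause, everything after `col1 =` is one boolean expression assigned to col1 on every row, and SQL's three-valued logic produces exactly the observed split:

```sql
-- What the statement actually meant:
UPDATE tableA SET col1 = (NULL AND col2 IN (1, 2));
-- NULL AND TRUE  => NULL   (rows where col2 is 1 or 2 got NULL)
-- NULL AND FALSE => FALSE  (every other row got 'false')

-- What the developer intended, with a WHERE clause:
UPDATE tableA SET col1 = NULL WHERE col2 IN (1, 2);
```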
Re: Out of memory: Kill process nnnn (postmaster) score nn or sacrifice child
Thank you Adrian for the reply. I did check the Postgres processes running around the time the OOM killer was invoked; there were lots of high-CPU postgres processes running long-running selects. I am not sure how to interpret the memory terms appearing in the Linux dmesg or /var/log/messages, but I can see that an out-of-memory condition occurred and the OOM killer targeted the postmaster.

Regards
Vikas Sharma

On Tue, 12 Feb 2019 at 16:39, Adrian Klaver wrote:
> On 2/12/19 8:20 AM, Vikas Sharma wrote:
>> Hello All,
>>
>> I have a 4 node PostgreSQL 9.6 cluster with streaming replication. We
>> encountered today the Out of Memory error on the master, which resulted in
>> all postgres processes restarting and the cluster recovering itself. Please
>> let me know the best way to diagnose this issue.
>
> For a start look back further in the Postgres log than the below. What
> is shown below is the effects of the OOM killer. What you need to look
> for is the statement that caused Postgres memory to increase to the
> point that the OOM killer was invoked.
>
>> The error seen in the postgresql log:
>>
>> 2019-02-12 10:55:17 GMT LOG: terminating any other active server processes
>> 2019-02-12 10:55:17 GMT WARNING: terminating connection because of crash of another server process
>> 2019-02-12 10:55:17 GMT DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
>> 2019-02-12 10:55:17 GMT HINT: In a moment you should be able to reconnect to the database and repeat your command.
>> [the WARNING/DETAIL/HINT block repeats for each connection]
>>
>> Error from dmesg on linux:
>> ---
>> [4331093.885622] Out of memory: Kill process n (postmaster) score nn or sacrifice child
>> [4331093.890225] Killed process n (postmaster) total-vm:18905944kB, anon-rss:1747460kB, file-rss:4kB, shmem-rss:838220kB
>>
>> Thanks & Best Regards
>> Vikas Sharma
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
Out of memory: Kill process nnnn (postmaster) score nn or sacrifice child
Hello All,

I have a 4 node PostgreSQL 9.6 cluster with streaming replication. We encountered today the Out of Memory error on the master, which resulted in all postgres processes restarting and the cluster recovering itself. Please let me know the best way to diagnose this issue.

The error seen in the postgresql log:

2019-02-12 10:55:17 GMT LOG: terminating any other active server processes
2019-02-12 10:55:17 GMT WARNING: terminating connection because of crash of another server process
2019-02-12 10:55:17 GMT DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2019-02-12 10:55:17 GMT HINT: In a moment you should be able to reconnect to the database and repeat your command.
[the WARNING/DETAIL/HINT block repeats for each connection]

Error from dmesg on linux:
---
[4331093.885622] Out of memory: Kill process n (postmaster) score nn or sacrifice child
[4331093.890225] Killed process n (postmaster) total-vm:18905944kB, anon-rss:1747460kB, file-rss:4kB, shmem-rss:838220kB

Thanks & Best Regards
Vikas Sharma
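Not part of the thread, but the standard hardening for this failure mode (described under "Linux Memory Overcommit" in the PostgreSQL docs) is sketched below; the PGDATA path is a placeholder:

```
# Make oversized allocations fail up front (Postgres then reports an
# ordinary 'out of memory' error) instead of letting the kernel kill
# processes later. In /etc/sysctl.conf, then apply with: sysctl -p
vm.overcommit_memory = 2

# Optionally shield the postmaster itself from the OOM killer
# (its child backends can still be killed individually):
echo -1000 > /proc/$(head -1 /var/lib/pgsql/9.6/data/postmaster.pid)/oom_score_adj
```

Capping per-query memory (work_mem) relative to max_connections is the complementary Postgres-side control, since many concurrent large selects are exactly what pushed the box over the edge here.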
Vacuum and Materialized view refresh slow
Hello there,

We have Postgres 9.6 with 3 slaves replicating with synchronous commit. We recently migrated to another cloud provider, and though we have the same configuration for the VMs and Postgres, we are experiencing system-wide slowness with writes to the database.

I am most concerned about a scheduled script that vacuums and refreshes two materialized views every 5 minutes. The time to refresh the MVs and vacuum has increased considerably, and during this window a simple insert takes 10-15 seconds when it should normally take milliseconds; with the old provider the same insert took no more than a couple of seconds during refresh time. I have seen the vacuum and refresh MV process showing as waiting for a {change location}, sometimes for a fraction of a second.

Could I be enlightened about what I can do to improve the performance and make writes faster?

Best regards
Vikas
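With synchronous commit, every write waits for the synchronous standby to confirm, so network latency to the standbys is the first thing to rule out after a provider move. A quick check on the master (9.6 column names; these became *_lsn in version 10):

```sql
-- How far behind each standby is, and which one commits are waiting on:
SELECT application_name, state, sync_state,
       sent_location, flush_location, replay_location
FROM pg_stat_replication;
```

A large gap between sent_location and flush_location on the sync standby, or a sync_state that flaps between 'sync' and 'potential', would point at the new provider's network rather than at the MV refresh itself.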
Copy over large data Postgresql 9.5
Hello Experts,

I need to transfer 90GB of pgsql 9.5 data from one cloud provider to another. We have plenty of downtime to do this. I will be copying over the data directory after shutting down the pgsql services on the source.

The problem is how to confirm that the data has been copied over correctly. Is the pgsql service starting OK on the target enough to ensure that, or is there anything more to do? I will create a compressed archive of the data directory, calculate a SHA-256 checksum, and compare it after copying over to the target.

At the source, pgsql is in streaming replication using repmgr (3.5), so I think I will have to recreate the repmgr database after copying over and register the master again.

Should I vacuum (full) and reindex pgsql on the target to gather the statistics again?

Could you advise on the above please?

Regards
Vikas
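The archive-and-checksum plan described above can be sketched as below; the PGDATA and /backup paths are placeholders for your layout, and PostgreSQL must be stopped cleanly on the source first:

```shell
# On the source host, after stopping PostgreSQL:
PGDATA=/var/lib/pgsql/9.5/data
tar -czf /backup/pgdata.tar.gz -C "$(dirname "$PGDATA")" "$(basename "$PGDATA")"
( cd /backup && sha256sum pgdata.tar.gz > pgdata.tar.gz.sha256 )

# After copying both files to the target, verify before unpacking:
( cd /backup && sha256sum -c pgdata.tar.gz.sha256 )   # "pgdata.tar.gz: OK" on success
```

One note on the last question in the message: table statistics come from ANALYZE, not from VACUUM FULL or REINDEX, and a byte-identical copy of the data directory already contains the indexes and statistics as they were on the source.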
Postgresql & PGPool packages minor version different on Standby server
Hi All,

We have a PostgreSQL 9.5 cluster with streaming replication and pgpool. The version of Postgres is 9.5.5 and the pgpool-II version is 3.2.15. There is now a hardware issue with the standby machine and it won't start up, so we are building a new standby machine.

My question is about the minor versions of PostgreSQL 9.5 available now. On the PostgreSQL repository site I can't see Postgres 9.5.5 and pgpool-II 3.2.15; instead there are 9.5.13 and pgpool-II 3.5.15.

Is it safe and fine to have Postgres 9.5.5 and pgpool-II 3.2.15 on the master, and Postgres 9.5.13 and pgpool-II 3.5.15 on the standby? Please let me know. I know Postgres doesn't keep the minor version in the PG_VERSION file in the data directory, so should it be OK to use different minor versions on master and standby?

Thanks in advance.

Regards
Vikas
Re: DB size growing exponentially when materialized view refreshed concurrently (postgres 9.6)
Hi Alban,

I haven't disabled the autovacuum task; it's running fine for other objects. I was also getting "ERROR: canceling autovacuum task" on the materialized view when it was refreshed concurrently, so I decided to write a script and run vacuum (full, analyze) on the MV immediately after the concurrent refresh, but it's still not working as intended.

On Mon, Jun 25, 2018, 20:02 Alban Hertroys wrote:
>> On 25 Jun 2018, at 19:21, Vikas Sharma wrote:
>>
>> I am looking for advice on an issue where two materialized views are
>> being refreshed concurrently and the DB size has grown to 150GB from 4GB in two days.
>>
>> We use two materialized views to keep processed data for faster query
>> results for a search function. Earlier the materialized views were refreshed
>> non-concurrently and all was good on the DB.
>>
>> We changed the MV refresh to concurrent to take advantage of simultaneous
>> access when the MVs are refreshed. Now the refresh takes slightly longer, but the DB
>> size has grown exponentially.
>>
>> I ran a full vacuum on the DB and the size reduced again to 4GB from 150GB.
>
> You did not disable or tune down autovacuum perchance?
> With materialized view refreshes that often, you probably need fairly
> aggressive autovacuuming on that table - you can tune autovacuum parameters
> per table (see Storage parameters). That probably won't put you at 4GB,
> more around double that size, but it should stay a reasonable size that way.
>
> Regards,
>
> Alban Hertroys
> --
> If you can't see the forest for the trees,
> cut the trees and you'll find there is no forest.
DB size growing exponentially when materialized view refreshed concurrently (postgres 9.6)
Hi All,

I am looking for advice on an issue where two materialized views are being refreshed concurrently and the DB size has grown from 4GB to 150GB in two days.

We use two materialized views to keep processed data for faster query results for a search function. Earlier the materialized views were refreshed non-concurrently and all was good on the DB. We changed the MV refresh to concurrent to take advantage of simultaneous access during the refresh. Now the refresh takes slightly longer, and the DB size has grown exponentially. I ran a full vacuum on the DB and the size reduced again from 150GB to 4GB.

We need to refresh the MVs every 5 minutes, so I created a script to refresh each MV concurrently and then run vacuum (full, analyze) on it immediately. I hoped this would solve the issue of the DB size growing, but it hasn't, and the size is still growing.

Please advise how I can refresh the MVs concurrently without the DB size growing. Much appreciated.

Regards
Vikas
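The per-table tuning Alban suggests in his reply can be sketched as below; `search_mv` is a hypothetical name for one of the two materialized views, and the values are illustrative starting points, not recommendations:

```sql
-- REFRESH ... CONCURRENTLY works via deletes and inserts, so each 5-minute
-- cycle leaves dead rows behind; make autovacuum pick the MV up much sooner
-- than the default 20% threshold, and let it work without throttling:
ALTER MATERIALIZED VIEW search_mv
    SET (autovacuum_vacuum_scale_factor = 0.01,
         autovacuum_vacuum_cost_delay   = 0);
```

This replaces the scripted VACUUM FULL, which takes an exclusive lock and rewrites the whole relation every 5 minutes; with autovacuum keeping up, the MV settles at a modestly larger steady-state size instead of growing without bound.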
Re: Postgresql database encryption
Thanks a lot for the valuable information, and apologies, I didn't specify that the requirement is to encrypt data at rest and in transit.

Regards
Vikas

On Fri, Apr 20, 2018, 21:56 Vick Khera <vi...@khera.org> wrote:
> On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma <shavi...@gmail.com> wrote:
>> Hello Guys,
>>
>> Could someone throw light on the postgresql instance wide or database
>> wide encryption please? Is this possible in postgresql and been in use in
>> production?
>
> For anyone to offer a proper solution, you need to say what purpose your
> encryption will serve. Does the data need to be encrypted at rest? Does it
> need to be encrypted in memory? Does it need to be encrypted at the
> database level or at the application level? Do you need to be able to query
> the data? There are all sorts of scenarios and use cases, and you need to
> be more specific.
>
> For me, using whole-disk encryption solved my need, which was to ensure
> that the data on disk cannot be read once removed from the server. For
> certain fields in one table, I use application level encryption so only the
> application itself can see the original data. Anyone else querying that
> table sees the encrypted blob, and it was not searchable.
Postgresql database encryption
Hello Guys,

Could someone throw light on PostgreSQL instance-wide or database-wide encryption, please? Is this possible in PostgreSQL, and has it been used in production? This is a requirement in our production implementation.

Many Thanks
Vikas Sharma
Re: Postgresql Split Brain: Which one is latest
Thanks Adrian and Edison, I also think so. At the moment I have 2 masters: as soon as the slave is promoted to master it starts its own timeline, and the application might have added data to either of them, or both. The only way to pick the "correct" master now is the instance with the higher count of data in the tables, which could still incur data loss. Correct me if I am wrong, please?

Thanks and Regards
Vikas

On Tue, Apr 10, 2018, 17:29 Adrian Klaver <adrian.kla...@aklaver.com> wrote:
> On 04/10/2018 08:04 AM, Vikas Sharma wrote:
>> Hi Adrian,
>>
>> This can be a good example: an application server, e.g. Tomcat, having two
>> entries to connect to databases, one for the master and a second for the slave
>> (ideally used when the slave becomes master). If the application is not able to
>> connect to the first, it will try to connect to the second.
>
> So the application server had a way of seeing the new master (old slave),
> in spite of the network glitch, that the original master database did not?
>
> If so, and it was distributing data between the two masters on an unknown
> schedule, then as Edison pointed out in another post, you really have a
> split-brain issue. Each master would have its own view of the data, and
> "latest update" would really only be relevant for that master.
>
>> Regards
>> Vikas
>>
>> On 10 April 2018 at 15:26, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
>>> On 04/10/2018 06:50 AM, Vikas Sharma wrote:
>>>> Hi,
>>>>
>>>> We have postgresql 9.5 with streaming replication (master-slave)
>>>> and automatic failover. Due to a network glitch we have been in a
>>>> master-master situation for quite some time. Please could you
>>>> advise the best way to confirm which node is the latest in terms of
>>>> updates to the postgres databases.
>>>
>>> It might help to know how the two masters received data when they
>>> were operating independently.
>>>
>>>> Regards
>>>> Vikas Sharma
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
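For what it's worth, a sketch of how to see where each node's WAL history ends (PostgreSQL 9.5 function names; these were renamed to the pg_*_wal_* forms in version 10). This identifies where the timelines diverged, not which side's rows are "correct" - after a true split-brain, data written to the losing side still has to be reconciled or discarded:

```sql
-- On each node currently acting as a master:
SELECT pg_current_xlog_location();

-- On any node, what it received/applied while it was still a standby:
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();
```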
Re: Posgresql Log: lots of parse statements
Thanks David,

But why are there so many parse statement occurrences for one query? Does Postgres parse the statement every time before execution, or parse the query only the first time it is loaded into memory and reuse the same parsed plan until it ages out of memory? In the log I can see these parse statement occurrences about 400 times in a day, every time taking longer than 15 secs.

Regards
Vikas

On Mar 1, 2018 15:30, "David G. Johnston" <david.g.johns...@gmail.com> wrote:
> On Thu, Mar 1, 2018 at 8:23 AM, Vikas Sharma <shavi...@gmail.com> wrote:
>> Hi All,
>>
>> I need help to understand this please. I was looking to do performance
>> tuning on slow queries so have started logging queries taking more than 15
>> secs. In the postgresql log I can see a query which appears only as
>> "parse" while others appear as execute.
>> [...]
>> I don't see this query appear as execute, so how can I find out how much
>> time it takes to execute? Does this parse timing include the execute also?
>
> The most likely explanation is that executing the already-parsed query
> takes less than 15 seconds and so doesn't appear due to your filter.
>
> David J.
Posgresql Log: lots of parse statements
Hi All,

I need help understanding this, please. I was looking to do performance tuning on slow queries, so I have started logging queries taking more than 15 secs. In the postgresql log I can see a query which appears only as "parse" while others appear as "execute":

2018-01-21 14:01:16 GMT LOG: duration: 62952.558 ms parse : select this_.busi_type as business_type from tablea where this_.busi_id = $1 and this_.busi_date = $2
2018-01-21 14:05:00 GMT LOG: duration: 62952.558 ms parse : select this_.busi_type as business_type from tablea where this_.busi_id = $1 and this_.busi_date = $2
2018-01-21 14:17:12 GMT LOG: duration: 62952.558 ms parse : select this_.busi_type as business_type from tablea where this_.busi_id = $1 and this_.busi_date = $2
2018-01-21 14:34:08 GMT LOG: duration: 62952.558 ms parse : select this_.busi_type as business_type from tablea where this_.busi_id = $1 and this_.busi_date = $2

I don't see this query appear as "execute", so how can I find out how much time it takes to execute? Does this parse timing include the execute as well?

Best Regards
Vikas
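Background that may help read the log: drivers using the extended query protocol send parse, bind, and execute as separate steps, and log_min_duration_statement filters each step's duration independently. The server-side equivalent, using the (hypothetical) table from the log lines, looks like this:

```sql
-- "parse" corresponds to preparing the statement:
PREPARE busi_q (int, date) AS
    SELECT this_.busi_type AS business_type
    FROM tablea this_
    WHERE this_.busi_id = $1 AND this_.busi_date = $2;

-- "execute" is logged separately, with its own duration:
EXECUTE busi_q (42, '2018-01-21');

DEALLOCATE busi_q;
```

So a 62-second "parse" with no matching "execute" line means the execute step finished under the 15-second logging threshold, exactly as David's reply says.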
Re: Multiple postmasters running from same directory
Thanks Tom,

So is it normal for Postgres to fork new postmaster processes from the same data directory? I haven't seen this before. I will check where those connection requests are coming from.

Best Regards
Vikas

On Feb 13, 2018 15:50, "Tom Lane" <t...@sss.pgh.pa.us> wrote:
> Laurenz Albe <laurenz.a...@cybertec.at> writes:
>> Vikas Sharma wrote:
>>> On the master I can see multiple postmaster processes from the same data directory.
>>> ps -ef | grep -i postgres | grep postm
>>> postgres 81440     1  0 Jan31 ?  00:11:37 /usr/pgsql-9.4/bin/postmaster -D /var/lib/pgsql/9.4/data
>>> postgres 97072 81440  0 12:17 ?  00:00:00 /usr/pgsql-9.4/bin/postmaster -D /var/lib/pgsql/9.4/data
>>> postgres 97074 81440  0 12:17 ?  00:00:00 /usr/pgsql-9.4/bin/postmaster -D /var/lib/pgsql/9.4/data
>>
>> The two other processes are children of the postmaster.
>> It is strange that their process title did not get updated.
>
> Seeing that they're showing zero runtime, I bet that these are just-forked
> children that have not had time to change their process title yet.
> The thing that is strange is that you have a steady enough flow of new
> connections that there are usually some children like that.
>
>> The "incomplete startup packet" is caused by processes that connect to the
>> PostgreSQL TCP port, but don't complete a database connection.
>> Often these are monitoring or load balancing programs.
>
> Putting two and two together, you have some monitoring program that is
> hitting the postmaster with a constant stream of TCP connection requests,
> none of which get completed, resulting in a whole lot of useless fork
> activity. Dial down the monitoring.
>
> regards, tom lane
Multiple postmasters running from same directory
Hi,

We are running PostgreSQL 9.4 with streaming replication and repmgr. The operating system is RHEL 6.8. On the master I can see multiple postmaster processes from the same data directory:

ps -ef | grep -i postgres | grep postm
postgres 81440     1  0 Jan31 ?  00:11:37 /usr/pgsql-9.4/bin/postmaster -D /var/lib/pgsql/9.4/data
postgres 97072 81440  0 12:17 ?  00:00:00 /usr/pgsql-9.4/bin/postmaster -D /var/lib/pgsql/9.4/data
postgres 97074 81440  0 12:17 ?  00:00:00 /usr/pgsql-9.4/bin/postmaster -D /var/lib/pgsql/9.4/data

The streaming replication with one standby looks fine. I was expecting to see only one postmaster process instead of three, and the time shown in the ps output for the two extra processes changes to the current time with every ps command I enter. Secondly, my logfile is full of "incomplete startup packet" messages.

I need help from you experts: is this the right behaviour of Postgres? What could have gone wrong in my case?

Best Regards
Vikas
FATAL: failed to create a backend connection
Hi there,

We are using PostgreSQL 9.3 with pgpool-II-93-3.5.10: one master replicating to 3 standbys using streaming replication, with pgpool used for load balancing only. Lately we are seeing the below on the application servers:

Caused by: org.postgresql.util.PSQLException: FATAL: failed to create a backend connection
Detail: executing failover on backend

Are we hitting max_connections on the PgSql instance, or has pgpool reached its maximum connections and can't spawn more?

Please advise.

Regards
Vikas
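A quick way to test the first half of the question is to compare current backends against the limit on each node; on the pgpool side, the corresponding ceiling is num_init_children * max_pool in pgpool.conf, which must fit within each backend's max_connections:

```sql
-- On each PostgreSQL node:
SHOW max_connections;
SELECT count(*) AS backends_in_use FROM pg_stat_activity;
```

If backends_in_use sits at max_connections when the application errors appear, the backends are saturated; if it is well below, the bottleneck is more likely pgpool's own pool sizing or a genuine backend health-check failure triggering the failover mentioned in the error detail.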
Materialized views in PGSQL 9.3 with replication on.
Hi there,

We are using PostgreSQL 9.3 with streaming replication to 3 standby nodes. I have a question about materialized views: when the refresh for an MV runs on the master, how is its data replicated to the standby nodes? Does the refresh command run on the standbys as well as on the master, and when does the new data become visible on the standbys for the MV? I think it is the changed data that is replicated, i.e. the changed blocks are streamed from the master to the standbys.

Regards
Vikas Sharma