[GENERAL] Hot standby 9.2.1 PANIC: WAL contains references to invalid pages
Hi Team,

For the last 2 days we have been facing an issue with replication:

WARNING: page 21 of relation base/1193555/19384612 does not exist
CONTEXT: xlog redo insert: rel 1663/1193555/19384612; tid 21/1
PANIC: WAL contains references to invalid pages
CONTEXT: xlog redo insert: rel 1663/1193555/19384612; tid 21/1
LOG: startup process (PID 20622) was terminated by signal 6: Aborted
LOG: terminating any other active server processes

The standby server went down with this error. We are only running a warm standby, but we enabled wal_level = 'hot_standby' on the master server. I read on this mailing list that PostgreSQL 9.2.7 contains a fix for this. Until we can upgrade: if I change wal_level to 'archive', will this problem go away? Since we are only using a warm standby, shall we change wal_level to 'archive'?

Can you please reply to this mail as soon as possible?

--
Best Regards,
Vishalakshi.N
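Since only a warm standby is in use, the change being asked about would be a one-line edit to postgresql.conf on the master. A sketch (note this is the proposed change under discussion, not a confirmed fix; wal_level can only be changed with a full server restart, and the standby must be re-synced from a base backup taken after the change if replay has already failed):

```
# postgresql.conf on the master -- proposed change, restart required
wal_level = archive        # was 'hot_standby'; 'archive' is sufficient for a warm standby
```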
Re: [GENERAL] How to avoid Force Autovacuum
Hi Team,

Today also we faced the issue with autovacuum. Is there any workaround for this instead of upgrading? If yes, can you please give me tuning parameters?

>> log_autovacuum_min_duration = 0
>
> That is good for debugging. But what are you seeing in the log as the result of this?

There is nothing logged during autovacuum. This is the PID in pg_stat_activity:

postgres=# select * from pg_stat_activity where pid = 25769;
-[ RECORD 1 ]----+------------------------------
datid            | 16408
datname          | db1
pid              | 25769
usesysid         | 10
usename          | postgres
application_name |
client_addr      |
client_hostname  |
client_port      |
backend_start    | 2013-08-13 04:00:14.767093-07
xact_start       | 2013-08-13 04:00:14.765484-07
query_start      |
state_change     |
waiting          | f
state            |
query            |
(1 row)

This is the top output:

postgres 25769 30705 93 03:54 ? 00:01:45 postgres: autovacuum worker process db1
postgres 24680 30705 84 03:55 ? 00:00:33 postgres: autovacuum worker process db2
postgres 24692 30705 79 03:55 ? 00:00:26 postgres: autovacuum worker process db3

On Sat, Aug 10, 2013 at 12:23 AM, Jeff Janes jeff.ja...@gmail.com wrote:

> On Fri, Aug 9, 2013 at 9:06 AM, Kevin Grittner kgri...@ymail.com wrote:
>> Vishalakshi Navaneethakrishnan nvishalak...@sirahu.com wrote:
>>> select * from pg_database where datname = 'template0';
>>> -[ RECORD 1 ]-+------------------------------------
>>> datname       | template0
>>> datdba        | 10
>>> encoding      | 6
>>> datcollate    | en_US.UTF-8
>>> datctype      | en_US.UTF-8
>>> datistemplate | t
>>> datallowconn  | f
>>> datconnlimit  | -1
>>> datlastsysoid | 12865
>>> datfrozenxid  | 2025732249
>>> dattablespace | 1663
>>> datacl        | {=c/postgres,postgres=CTc/postgres}
>>>
>>> select * from pg_stat_database where datname = 'template0';
>>> -[ RECORD 1 ]--+------------------------------
>>> datid          | 12865
>>> datname        | template0
>>> numbackends    | 0
>>> xact_commit    | 320390
>>> xact_rollback  | 7
>>> blks_read      | 3797
>>> blks_hit       | 9458783
>>> tup_returned   | 105872028
>>> tup_fetched    | 1771782
>>> tup_inserted   | 10
>>> tup_updated    | 457
>>> tup_deleted    | 10
>>> conflicts      | 0
>>> temp_files     | 0
>>> temp_bytes     | 0
>>> deadlocks      | 0
>>> blk_read_time  | 0
>>> blk_write_time | 0
>>> stats_reset    | 2013-04-19 19:22:39.013056-07
>>
>> Well, that's why template0 is getting vacuumed. At some point someone must have set it to allow connections; otherwise you would have zero for commits, rollbacks, and all those block and tuple counts.
>
> Non-zero values are normal. There is no mechanism to prevent template0 from getting vacuumed. template0 will get vacuumed once every autovacuum_freeze_max_age even if no one has ever connected to it, and that vacuum will cause block reads and writes to happen. (But I'm not sure why it would contribute xact_rollback or tup_updated, and the tup_returned seems awfully high to be due to only anti-wraparound vacuums.)
>
> Cheers,
> Jeff

--
Best Regards,
Vishalakshi.N
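Jeff's point above can be checked directly: an anti-wraparound vacuum of a database (including template0) is forced once age(datfrozenxid) exceeds autovacuum_freeze_max_age, regardless of datallowconn. A sketch using only standard catalog columns:

```sql
-- How close each database is to triggering a forced anti-wraparound
-- autovacuum: when xid_age exceeds freeze_max_age, autovacuum will
-- vacuum that database even if datallowconn = false.
SELECT datname,
       age(datfrozenxid) AS xid_age,
       current_setting('autovacuum_freeze_max_age')::bigint AS freeze_max_age
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
```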
Re: [GENERAL] How to avoid Force Autovacuum
Hi All,

select * from pg_database where datname = 'template0';
-[ RECORD 1 ]-+------------------------------------
datname       | template0
datdba        | 10
encoding      | 6
datcollate    | en_US.UTF-8
datctype      | en_US.UTF-8
datistemplate | t
datallowconn  | f
datconnlimit  | -1
datlastsysoid | 12865
datfrozenxid  | 2025732249
dattablespace | 1663
datacl        | {=c/postgres,postgres=CTc/postgres}

select * from pg_stat_database where datname = 'template0';
-[ RECORD 1 ]--+------------------------------
datid          | 12865
datname        | template0
numbackends    | 0
xact_commit    | 320390
xact_rollback  | 7
blks_read      | 3797
blks_hit       | 9458783
tup_returned   | 105872028
tup_fetched    | 1771782
tup_inserted   | 10
tup_updated    | 457
tup_deleted    | 10
conflicts      | 0
temp_files     | 0
temp_bytes     | 0
deadlocks      | 0
blk_read_time  | 0
blk_write_time | 0
stats_reset    | 2013-04-19 19:22:39.013056-07

select name, setting from pg_settings where name ~ 'vacuum';
              name               | setting
---------------------------------+---------
 autovacuum                      | on
 autovacuum_analyze_scale_factor | 0.1
 autovacuum_analyze_threshold    | 5
 autovacuum_freeze_max_age       | 2
 autovacuum_max_workers          | 3
 autovacuum_naptime              | 60
 autovacuum_vacuum_cost_delay    | 20
 autovacuum_vacuum_cost_limit    | -1
 autovacuum_vacuum_scale_factor  | 0.2
 autovacuum_vacuum_threshold     | 5
 log_autovacuum_min_duration     | 0
 vacuum_cost_delay               | 0
 vacuum_cost_limit               | 200
 vacuum_cost_page_dirty          | 20
 vacuum_cost_page_hit            | 1
 vacuum_cost_page_miss           | 10
 vacuum_defer_cleanup_age        | 0
 vacuum_freeze_min_age           | 5000
 vacuum_freeze_table_age         | 15000
(19 rows)

Our physical RAM size is 256GB.

Please note: we are executing a standard vacuum daily (manual vacuum) -- vacuum freeze analyze -- for all databases. During the manual vacuum the load is normal (load is 1 to 2). The load increased to 200 during the autovacuum process.

[Previously I had set maintenance_work_mem to 256MB; at that time the manual vacuum increased the load to 300. Only then did I increase maintenance_work_mem to 2GB. Now the manual vacuum is fine and the load is normal during the vacuum process, so our application is also fine during vacuum.]

Now the problem is autovacuum: why was it invoked, and why did it increase the load? How can we avoid this?

On Fri, Aug 9, 2013 at 5:21 AM, Sergey Konoplev gray...@gmail.com wrote:

> On Thu, Aug 8, 2013 at 11:18 AM, Kevin Grittner kgri...@ymail.com wrote:
>> There were some fixes for autovacuum problems in 9.2.3. Some other fixes will be coming when 9.2.5 is released. Many of your problems are likely to go away by staying up-to-date on minor releases.
>>
>> By setting this so high, you are increasing the amount of work autovacuum will need to do when it does work on a table. A smaller value tends to give less bursty performance. Also, any small, frequently-updated tables may bloat quite a bit in 5 transactions.
>>
>> Each autovacuum worker will allocate this much RAM. If all of your autovacuum workers wake up at once, would losing 2GB for each one from your cache cause a significant performance hit? (Since you didn't say how much RAM the machine has, it's impossible to tell.)
>>
>> What does running this in psql show?:
>> \x on
>> select * from pg_database where datname = 'template0';
>> select * from pg_stat_database where datname = 'template0';
>
> In addition to Kevin's notes, I think it is also worth looking at the result of the query below.
>
> select name, setting from pg_settings where name ~ 'vacuum' and setting <> reset_val;
>
> --
> Kind regards, Sergey Konoplev
> PostgreSQL Consultant and DBA
> http://www.linkedin.com/in/grayhemp
> +1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
> gray...@gmail.com

--
Best Regards,
Vishalakshi.N
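One knob for the load spike described above is autovacuum's cost-based throttling, which in 9.2 can also be set per table via storage parameters. A hedged sketch (the table name and values are illustrative only, not a recommendation tested on this system):

```sql
-- Illustrative: throttle autovacuum on one hot table so it does less I/O
-- per second. Anti-wraparound vacuums still run, but respect these limits.
ALTER TABLE some_big_table SET (
    autovacuum_vacuum_cost_delay = 20,   -- ms to sleep after each cost batch
    autovacuum_vacuum_cost_limit = 200   -- cost units accumulated per batch
);
```

The trade-off is that a throttled vacuum takes longer to finish, so this smooths the load rather than reducing the total work.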
[GENERAL] How to avoid Force Autovacuum
Hi All,

We have one production database server with 6 DBs, PostgreSQL version 9.2.1. These are the vacuum settings on the production database:

#autovacuum = on                 # Enable autovacuum subprocess? 'on'
                                 # requires track_counts to also be on.
log_autovacuum_min_duration = 0
autovacuum_vacuum_threshold = 5  # min number of row updates before vacuum
autovacuum_analyze_threshold = 5
maintenance_work_mem = 2GB       # min 1MB

Daily, once, we are executing vacuum freeze analyze to prevent transaction id wraparound, using this command:

vacuumdb -F -z -h localhost -U postgres dbname

Even so, sometimes autovacuum runs on the databases and increases the load very much (above 200), and the server becomes unresponsive. I have seen the autovacuum worker processes in the top output. When I query pg_stat_activity as the postgres user, I can see the PID of the autovacuum process in the result, but the query field is empty. When I checked the pg_class table, the last_autoanalyze field is not null for one of the tables, so I am guessing this is the auto-analyze query. But why did it increase the load so much? How can I avoid the autovacuum process?

Autovacuum also ran on the template0 database, but I can't connect to that database, since it has datallowconn = f. If I update that value to true and then execute vacuum freeze analyze, will it cause any problems? Since template0 has no activity, why is age(datfrozenxid) increasing heavily and reaching the threshold value?

Do I need to disable autovacuum for particular tables to avoid a forced autovacuum? Can you please advise me on this case?

--
Best Regards,
Vishalakshi.N
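To see which tables are driving a forced (anti-wraparound) autovacuum, the per-table XID ages can be inspected. A sketch using only standard catalogs:

```sql
-- Tables whose relfrozenxid age is largest; once it passes
-- autovacuum_freeze_max_age, autovacuum will vacuum them even if
-- autovacuum is disabled for the table.
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 20;
```

Note that disabling autovacuum per table (ALTER TABLE ... SET (autovacuum_enabled = false)) does not prevent these anti-wraparound vacuums; they run regardless, to protect against XID wraparound.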
Re: [GENERAL] How to avoid Force Autovacuum
Hi,

>> vacuumdb -F -z -h localhost -U postgres dbname
>
> It is not necessary to do. Autovacuum does it itself where and when needed.

If we did not do this, then autovacuum would occur; the load was very high at that time and the server was unresponsive. To avoid this we are executing vacuum freeze analyze every day.

We are using CentOS:

cat /etc/issue
CentOS release 6.3 (Final)

cat /proc/meminfo | grep Hugepagesize
Hugepagesize: 2048 kB

On Thu, Aug 8, 2013 at 6:59 AM, Sergey Konoplev gray...@gmail.com wrote:

> On Wed, Aug 7, 2013 at 2:46 AM, Vishalakshi Navaneethakrishnan nvishalak...@sirahu.com wrote:
>> Daily, once, we are executing vacuum freeze analyze to prevent transaction id wraparound, using this command:
>> vacuumdb -F -z -h localhost -U postgres dbname
>
> It is not necessary to do. Autovacuum does it itself where and when needed.
>
>> Sometimes autovacuum runs on the databases and increases the load very much (above 200), and the server was unresponsive. I have seen the autovacuum worker process in top. When I query pg_stat_activity as the postgres user, I can see the PID of the autovacuum process in the result, but the query field is empty.
>
> Was autovacuum the only process that you saw in pg_stat_activity? What OS do you use? Do you use huge pages?
>
> --
> Kind regards, Sergey Konoplev
> PostgreSQL Consultant and DBA
> http://www.linkedin.com/in/grayhemp
> +1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
> gray...@gmail.com

--
Best Regards,
Vishalakshi.N
Re: [GENERAL] How to avoid Force Autovacuum
cat /proc/meminfo | grep -i huge
AnonHugePages:  31576064 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
[always] never

cat /sys/kernel/mm/redhat_transparent_hugepage/defrag
[always] never

On Thu, Aug 8, 2013 at 10:57 AM, Sergey Konoplev gray...@gmail.com wrote:

> On Wed, Aug 7, 2013 at 9:32 PM, Vishalakshi Navaneethakrishnan nvishalak...@sirahu.com wrote:
>> cat /etc/issue
>> CentOS release 6.3 (Final)
>> cat /proc/meminfo | grep Hugepagesize
>> Hugepagesize: 2048 kB
>
> Please show what the commands below print.
>
> cat /proc/meminfo | grep -i huge
> cat /sys/kernel/mm/transparent_hugepage/enabled
> cat /sys/kernel/mm/transparent_hugepage/defrag
>
> --
> Kind regards, Sergey Konoplev
> PostgreSQL Consultant and DBA
> http://www.linkedin.com/in/grayhemp
> +1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
> gray...@gmail.com

--
Best Regards,
Vishalakshi.N
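The output above shows transparent huge pages are active ([always]), which on RHEL/CentOS 6 kernels has been associated with stalls and load spikes under database workloads. A hedged sketch of disabling THP at runtime (the redhat_ path matches the CentOS 6 output above; this change does not survive a reboot without adding it to rc.local or the kernel boot parameters, and whether it helps here is an assumption to be tested):

```shell
# Disable transparent huge pages and THP defrag at runtime (run as root).
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag

# Verify: the active value is shown in brackets, e.g. "always [never]"
cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
```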
[GENERAL] Postgres Index
Hi all,

Can we create a composite index over one text column and one integer column? Thanks in advance.

--
Best Regards,
Vishalakshi.N
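Yes — PostgreSQL multicolumn (composite) btree indexes can mix column types freely. A minimal sketch (the table and column names here are made up for illustration):

```sql
-- Hypothetical table: a composite btree index over a text column and an
-- integer column. Column order matters: queries filtering on the leading
-- column (customer_name) benefit most from this index.
CREATE TABLE orders (customer_name text, status_code integer, amount numeric);
CREATE INDEX orders_name_status_idx ON orders (customer_name, status_code);
```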
[GENERAL] Postgresql - 8.3 Replication in windows
Hi all,

Can you please tell me how to set up replication for PostgreSQL 8.3 on Windows 7, without the Slony tool?

--
Best Regards,
Vishalakshi.N
Re: [GENERAL] Postgres Login Users Details
Hi,

My case is different. I have only 2 DB login users, say postgres and dbuser1. My colleagues log into this DB server from different app servers, say app1 and app2; their OS user names are osuser1 and osuser2. I want to know who is logging in via which server, which database they access, and what queries they run. For example: osuser1 from app1 logs into the DB server as dbuser1, accesses the dbname1 database, and executes some command — I want these details. Can you help me log this information?

On Mon, Oct 22, 2012 at 8:20 PM, Chris Angelico ros...@gmail.com wrote:

> On Mon, Oct 22, 2012 at 7:47 PM, Vishalakshi Navaneethakrishnan nvishalak...@sirahu.com wrote:
>> Hi all, I need to know who is accessing the database from different remote hosts. Example: User1@host1 logged into / accessed db dbuser@dbname on the DB server. How can I get this information?
>
> As suggested, you can configure logging to record this. For up-to-the-moment "who's currently logged in" information, look at the pg_stat_activity table.
>
> ChrisA

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general

--
Best Regards,
Vishalakshi.N
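Chris's suggestion of configuring logging can be sketched as a postgresql.conf fragment like the one below (a hedged example: the log_line_prefix escapes are standard, but the exact prefix format is a matter of taste, and log_statement = 'all' can be very verbose on a busy server):

```
# postgresql.conf -- record connections and statements with enough context
# to tell which client host, user, and database each entry belongs to
log_connections = on
log_disconnections = on
log_statement = 'all'                   # or 'ddl' / 'mod' to reduce volume
log_line_prefix = '%t [%p] %u@%d %h '   # time, pid, user@database, client host
```

Note that the server only sees the database user and the client host; it cannot see the remote OS user (osuser1/osuser2) unless the application passes that along, e.g. in application_name.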
[GENERAL] Postgres Login Users Details
Hi all,

I need to know who is accessing the database from different remote hosts. Example: User1@host1 logged into / accessed db dbuser@dbname on the DB server. How can I get this information? Thanks in advance.

--
Best Regards,
Vishalakshi.N
Re: [GENERAL] Postgres DB Migration from 8.3 to 9.1
Hi all,

While testing the upgrade facility, I installed PostgreSQL 9.2 from the source package, because it needs to be compiled with --disable-integer-datetimes. I used this command:

./configure --prefix=/opt/PostgreSQL/9.2/ --disable-integer-datetimes --without-readline

After installation I found the pg_upgrade binary missing from the bin directory. How can I get it? Thanks in advance.

On Sat, Oct 13, 2012 at 3:37 AM, Scott Marlowe scott.marl...@gmail.com wrote:

> On Thu, Oct 11, 2012 at 8:15 PM, Vishalakshi Navaneethakrishnan nvishalak...@sirahu.com wrote:
>> Hi Friends, We have our production environment database server on PostgreSQL 8.3. We have planned to upgrade to the latest version, 9.1. A dump from 8.3 and restore into 9.1 takes more than 5 hours. Is there any other quick method to upgrade from 8.3 to 9.1? We need to reduce our downtime to below 1 hour. Any possibilities?
>
> As downtime required approaches zero, the likelihood of needing Slony approaches 1.

--
Best Regards,
Vishalakshi.N
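In 9.2, pg_upgrade lives under contrib/, so a plain `make install` of the core tree does not build it. A sketch, assuming the same already-configured source tree and prefix as above:

```shell
# From the configured 9.2 source tree: build and install pg_upgrade and
# its server-side support library (both under contrib/ in 9.2).
make -C contrib/pg_upgrade install
make -C contrib/pg_upgrade_support install

# The binary should now be in the install prefix:
/opt/PostgreSQL/9.2/bin/pg_upgrade --version
```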
[GENERAL] Postgres DB Migration from 8.3 to 9.1
Hi Friends,

We have our production environment database server on PostgreSQL 8.3. We have planned to upgrade to the latest version, 9.1. A dump from 8.3 and restore into 9.1 takes more than 5 hours. Is there any other quick method to upgrade from 8.3 to 9.1? We need to reduce our downtime to below 1 hour. Any possibilities?

Thanks in advance.

--
Best Regards,
Vishalakshi.N
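One option besides Slony for cutting the downtime is pg_upgrade, which performs an in-place upgrade and supports 8.3 as a source version. A hedged sketch — the paths below are hypothetical, both clusters must be stopped first, and a full backup beforehand is essential:

```shell
# Illustrative pg_upgrade invocation (paths are made up).
# --link hard-links data files instead of copying them, which makes the
# upgrade fast but means the old cluster cannot be started afterwards.
pg_upgrade \
  --old-datadir /var/lib/pgsql/8.3/data \
  --new-datadir /var/lib/pgsql/9.1/data \
  --old-bindir  /usr/pgsql-8.3/bin \
  --new-bindir  /usr/pgsql-9.1/bin \
  --link
```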