Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-10 Thread Chris Travers
On Wed, Oct 9, 2013 at 7:04 PM, John R Pierce pie...@hogranch.com wrote: On 10/8/2013 8:35 AM, Chris Travers wrote: First, while vacuum is usually preferred to vacuum full, in this case, I usually find that vacuum full clears up enough cruft to be worth it (not always, but especially if you

Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-10 Thread Kevin Grittner
Chris Travers chris.trav...@gmail.com wrote: John R Pierce pie...@hogranch.com wrote: I often find in those cases it is a choice between vacuum full and dumpall/initdb/reload/analyze. Way back in the 8.1 days I often found CLUSTER to be my best option -- as long as I had room enough for a
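For background, CLUSTER rewrites a table in the order of one of its indexes and, like VACUUM FULL, hands the reclaimed space back to the operating system, which is why it needs roughly a table's worth of free disk. A rough sketch with hypothetical table and index names; note that the spelling Kevin would have used back in the 8.1 days differs from the current syntax:

    # current syntax (8.3 and later); table/index names are examples only
    psql -d mydb -c "CLUSTER my_table USING my_table_pkey;"

    # the older spelling from the 8.1 era
    psql -d mydb -c "CLUSTER my_table_pkey ON my_table;"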

Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-09 Thread John R Pierce
On 10/8/2013 8:35 AM, Chris Travers wrote: First, while vacuum is usually preferred to vacuum full, in this case, I usually find that vacuum full clears up enough cruft to be worth it (not always, but especially if you are also having performance issues). IIRC, vacuum full was pretty broken

[GENERAL] Urgent Help Required

2013-10-08 Thread shailesh singh
I had got this message while running vacuum full from the backend. Now my database is not starting, help please. backend vacuum full debug; WARNING: database debug must be vacuumed within 99 transactions HINT: To avoid a database shutdown, execute a full-database VACUUM in debug. ERROR:
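When the server refuses to start because of an impending transaction ID wraparound like this, the usual recovery path is to run a full-database VACUUM from the standalone (single-user) backend. A minimal sketch, assuming the data directory is /var/lib/pgsql/data and the affected database is the debug database named in the warning; on recent releases the flag is --single, while very old releases used the bare postgres binary as the standalone backend:

    # stop any running postmaster, then start a standalone backend
    # for the affected database, as the postgres OS user
    postgres --single -D /var/lib/pgsql/data debug

    # at the prompt, run the vacuum, then press Ctrl-D to exit
    backend> VACUUM FULL;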

Re: [GENERAL] Urgent Help Required

2013-10-08 Thread Adrian Klaver
On 10/08/2013 03:55 AM, shailesh singh wrote: I had got this message while running vacuum full from the backend. Now my database is not starting, help please. backend vacuum full debug; WARNING: database debug must be vacuumed within 99 transactions HINT: To avoid a database shutdown, execute

Re: [GENERAL] Urgent Help Required

2013-10-08 Thread David Johnston
Adrian Klaver wrote: On 10/08/2013 03:55 AM, shailesh singh wrote: I had got this message while running vacuum full from the backend. Now my database is not starting, help please. backend vacuum full debug; WARNING: database debug must be vacuumed within 99 transactions HINT: To avoid a

Re: [GENERAL] Urgent Help Required

2013-10-08 Thread shailesh singh
Dear all, First of all I wish to share the actual error message. Below are the queries I had executed on the terminal on my server: -bash-3.2$ touch fix.sql -bash-3.2$ echo "VACUUM FULL;" > fix.sql -bash-3.2$ postgres -D /var/lib/pgsql/data patnadbold < fix.sql WARNING: database patnadbold must be

Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread bricklen
On Tue, Oct 8, 2013 at 8:03 AM, shailesh singh shaileshj...@gmail.com wrote: HINT: To avoid a database shutdown, execute a full-database VACUUM in patnadbold. ERROR: could not access status of transaction 33011 DETAIL: could not open file pg_clog/: No such file or directory exit After
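The quote is cut off before bricklen's actual suggestion, but the workaround that usually comes up for a missing pg_clog segment is to recreate it as a file of zero bytes so the vacuum can get past the error, accepting that the transactions covered by that segment are then treated as aborted. A heavily hedged sketch with a placeholder segment name, since the real file name is truncated in the error above; take a file-level copy of the data directory first:

    # run as the postgres OS user with the server stopped;
    # "0000" is a placeholder -- use the segment named in your error message
    cd /var/lib/pgsql/data
    dd if=/dev/zero of=pg_clog/0000 bs=256k count=1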

Re: [GENERAL] Urgent Help Required

2013-10-08 Thread Adrian Klaver
On 10/08/2013 08:03 AM, shailesh singh wrote: Dear all, First of all I wish to share the actual error message. Below are the queries I had executed on the terminal on my server: -bash-3.2$ touch fix.sql -bash-3.2$ echo "VACUUM FULL;" > fix.sql -bash-3.2$ postgres -D /var/lib/pgsql/data patnadbold

Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread Martijn van Oosterhout
On Tue, Oct 08, 2013 at 08:06:50AM -0700, bricklen wrote: On Tue, Oct 8, 2013 at 8:03 AM, shailesh singh shaileshj...@gmail.com wrote: HINT: To avoid a database shutdown, execute a full-database VACUUM in patnadbold. ERROR: could not access status of transaction 33011 DETAIL: could not

Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread bricklen
On Tue, Oct 8, 2013 at 8:13 AM, shailesh singh shaileshj...@gmail.com wrote: On Tue, Oct 8, 2013 at 8:36 PM, bricklen brick...@gmail.com wrote: On Tue, Oct 8, 2013 at 8:03 AM, shailesh singh shaileshj...@gmail.com wrote: HINT: To avoid a database shutdown, execute a full-database VACUUM in

Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread shailesh singh
Yes, I am executing psql to connect to this database. su - postgres psql patnadbold On Tue, Oct 8, 2013 at 8:48 PM, bricklen brick...@gmail.com wrote: On Tue, Oct 8, 2013 at 8:13 AM, shailesh singh shaileshj...@gmail.com wrote: On Tue, Oct 8, 2013 at 8:36 PM, bricklen

Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread bricklen
On Tue, Oct 8, 2013 at 8:25 AM, shailesh singh shaileshj...@gmail.com wrote: Yes, I am executing psql to connect to this database. su - postgres psql patnadbold ... and then what? Does it immediately throw an error stating that you must issue a VACUUM?

Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread Chris Travers
First, while vacuum is usually preferred to vacuum full, in this case, I usually find that vacuum full clears up enough cruft to be worth it (not always, but especially if you are also having performance issues). Secondly I would recommend using the vacuumdb command from the shell instead of psql
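vacuumdb is just a wrapper that issues VACUUM over a regular connection, so it helps once the server is accepting connections again. A minimal sketch of the kind of shell invocation Chris is referring to, using the database name from this thread:

    # full vacuum plus analyze of the one database, run from the shell
    vacuumdb --full --analyze --dbname=patnadbold

    # or sweep every database in the cluster
    vacuumdb --full --analyze --all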

Re: [GENERAL] Urgent Help Required

2013-10-08 Thread David Johnston
postgres -D /var/lib/pgsql/data patnadbold < fix.sql What the heck is the point of feeding VACUUM FULL; into the standard input of the postgres command? postgres simply starts the server, it does not execute arbitrary SQL. Once the database is started you want to use psql - either
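Put differently, outside of single-user mode the postgres executable only runs the server; SQL has to go through a client such as psql. A minimal sketch of the route David is describing, assuming the server is up and you can connect as the postgres role:

    # issue the vacuum through psql instead of piping SQL at the server binary
    psql -U postgres -d patnadbold -c "VACUUM FULL;"

    # or feed the fix.sql file from the earlier post through psql
    psql -U postgres -d patnadbold -f fix.sql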

Re: [GENERAL] Urgent Help Required

2013-10-08 Thread Adrian Klaver
On 10/08/2013 09:03 AM, David Johnston wrote: postgres -D /var/lib/pgsql/data patnadbold < fix.sql What the heck is the point of feeding VACUUM FULL; into the standard input of the postgres command? postgres simply starts the server, it does not execute arbitrary SQL. Once the database is

[GENERAL] Urgent Help required

2009-10-16 Thread Neha Patel
Hi, We are running with postgres sql 7.3.2. We were trying to create an index on a big table. The create index command ran for nearly 5 hours at which point we decided to interrupt it. Since this was interrupted, any operations attempted on the table on which the index was being created gives
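When every statement against a table appears to hang after an interrupted CREATE INDEX, the first thing worth checking is whether an old session is still holding a lock on it. A rough sketch of that check with a hypothetical table name; pg_locks and pg_stat_activity already existed in 7.3, but their column names differ from modern releases (procpid rather than pid, for instance), so this is the modern spelling:

    # list sessions holding or waiting on locks against the table
    psql -d mydb -c "SELECT l.pid, l.mode, l.granted, a.query
                       FROM pg_locks l
                       JOIN pg_stat_activity a ON a.pid = l.pid
                      WHERE l.relation = 'my_big_table'::regclass;"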

Re: [GENERAL] Urgent Help required

2009-10-16 Thread Grzegorz Jaśkiewicz
On Fri, Oct 16, 2009 at 11:26 AM, Neha Patel npa...@nevi-soft.com wrote: Hi, We are running with postgres sql 7.3.2. We were trying to create an index on a big table. The create index command ran for nearly 5 hours at which point we decided to interrupt it. Since this was interrupted, any

Re: [GENERAL] Urgent Help required

2009-10-16 Thread David Fetter
On Fri, Oct 16, 2009 at 11:26:40AM +0100, Neha Patel wrote: Hi, We are running with postgres sql 7.3.2. Whatever you thought your most urgent priority was, it's actually getting your database off of a major version of PostgreSQL, 7.3, whose end-of-life was well over a year ago. Your second
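For anyone in the same situation, the upgrade path from a release that old is a logical dump and reload into a supported version; pg_upgrade does not reach back to 7.3. A minimal sketch with placeholder hosts and ports, using the newest pg_dumpall that still speaks to the old server:

    # dump the whole old cluster, then restore into a freshly initdb'ed new one
    pg_dumpall -h oldhost -p 5432 > cluster.sql
    psql -p 5433 -d postgres -f cluster.sql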

Re: [GENERAL] Urgent Help required

2009-10-16 Thread Neha Patel
On Fri, Oct 16, 2009 at 11:26:40AM +0100, Neha Patel wrote: Hi, We are running with postgres sql 7.3.2. Whatever you thought your most urgent priority was, it's actually getting your database off of a major version of PostgreSQL, 7.3, whose end-of-life was well over a year ago

Re: [GENERAL] Urgent Help required

2009-10-16 Thread John R Pierce
Neha Patel wrote: Hi, We are running with postgres sql 7.3.2. We were trying to... well, right off the bat, that's a -really- old version. Release notes say February 2003. 7.3 was updated to 7.3.21, and we're currently on 8.4 (while still supporting 8.3, 8.2, 8.1, and 7.4). There

Re: [GENERAL] Urgent Help required

2009-10-16 Thread Jeff Davis
On Fri, 2009-10-16 at 11:26 +0100, Neha Patel wrote: We are running with postgres sql 7.3.2. We were trying to create an index on a big table. The create index command ran for nearly 5 hours at which point we decided to interrupt it. Since this was interrupted, any operations attempted on the

Re: [GENERAL] Urgent Help required

2009-10-16 Thread Mike Christensen
Hmm, would this be a bad time to ask for Postgres 1.0 support? On Fri, Oct 16, 2009 at 1:55 PM, Jeff Davis pg...@j-davis.com wrote: On Fri, 2009-10-16 at 11:26 +0100, Neha Patel wrote: We are running with postgres sql 7.3.2. We were trying to create an index on a big table. The create index