Re: [ADMIN] Trying to pg_restore a 7.1.3 db into 7.3.4 - stalling
Hi,

> fd 0 is usually stdin, unless the program disconnects stdin. Maybe pg_restore is waiting for input, perhaps a password?

It certainly shouldn't be - the table where the problem happens is no different to any of the others, but I will try doing just that table later today and see if that makes any difference. Wouldn't it have output something though, like a prompt for input?

Regards

Shane

On 11 Nov 2003, at 19:52, Jeff wrote:
> read(0, , 4096) = 0
> read(0, , 4096) = 0
> read(0, , 4096) = 0
> read(0, , 4096) = 0
> read(0, , 4096) = 0
> read(0, , 4096) = 0
> read(0, , 4096) = 0
> read(0, , 4096) = 0
>
> fd 0 is usually stdin, unless the program disconnects stdin. Maybe pg_restore is waiting for input, perhaps a password?
>
> --
> Jeff Trout [EMAIL PROTECTED]
> http://www.jefftrout.com/
> http://www.stuarthamm.net/

---(end of broadcast)---
TIP 8: explain analyze is your friend
Re: [ADMIN] RHEL
> Hi, I'm currently on Red Hat 7.3 running Postgres. Everything is running fine. Obviously, I'm going to have to upgrade to RHEL 3 in order to receive updates, etc. Does anyone know of any problems with Postgres running on RHEL 2.1 or RHEL 3?

We run PostgreSQL on a RHEL 2.1 system without any problems here. (Hardware is a Dell 2600 with Perc4/Di.) We also run PostgreSQL on our Red Hat 9 development systems without problems ('hardware' is a VMware ESX server running on a Dell 2600 with Perc4/Di), and since RHEL 3 is based on RH9 I don't expect any problems there either.

Good luck!

Sander.
[ADMIN] pg_dump still requires password
I want to do an automatic backup for PG with a cron job. The problem is that it is requesting a password. I have tried with .pgpass but it still does not work. I created the .pgpass as per below:

echo 'your_host:5432:your_user:your_db:your_pass' > ~/.pgpass
chmod 600 ~/.pgpass

My cron job entry is as below:

/usr/local/pgsql/bin/pg_dump --no-owner -d helpdesk -U ganesh > /www/htdocs/sg/backup/helpdesk.dump

When I issue the command it asks for a password, which I cannot supply for a cron job running at midnight. Please help. How do I solve this problem? I posted this earlier and the suggestion was to use .pgpass, but that does not help. What other way can I achieve this?

Regards,
Ganesan
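For comparison, the ~/.pgpass format documented for 7.3+ puts the database *before* the user: hostname:port:database:username:password. A minimal sketch with hypothetical values (substitute your own host, database, user, and password):

```shell
# One connection per line; fields are hostname:port:database:username:password.
echo 'localhost:5432:helpdesk:ganesh:secret' > ~/.pgpass
# libpq silently ignores the file unless it is readable only by the owner.
chmod 600 ~/.pgpass
```

If the field order in the file is wrong, libpq finds no matching entry and pg_dump falls back to prompting for a password.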
Re: [ADMIN] pg_dump still requires password
> When I issue the command it is asking for a password which I cannot use for a cron job to run at midnight.

Execute the task as user postgres (crontab for postgres, not root) and modify pg_hba.conf to permit access in trust mode (without asking for a password), e.g.:

local   all   postgres   trust

You can substitute postgres for the user you need, and the database too (if the dump is not pg_dumpall).

--
Daniel Rubio Rodríguez
OASI (Organisme Autònom Per la Societat de la Informació)
c/ Assalt, 12 43003 - Tarragona
Tef.: 977.244.007 - Fax: 977.224.517
e-mail: [EMAIL PROTECTED]
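Putting the two pieces together might look like this (database name, paths, and schedule are hypothetical; 7.3-era pg_hba.conf fields are whitespace-separated):

```shell
# pg_hba.conf: local Unix-socket connections by user postgres need no password.
#   TYPE    DATABASE    USER        METHOD
#   local   all         postgres    trust
#
# crontab entry for the postgres user (edit with: crontab -e, as postgres),
# dumping the database every night at midnight:
#   0 0 * * * /usr/local/pgsql/bin/pg_dump --no-owner helpdesk > /backup/helpdesk.dump
```

Trust mode means *any* local user who can reach the socket connects unchallenged as postgres, so it trades security for convenience; ~/.pgpass avoids that trade-off.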
[ADMIN] [pg 7.1.rc2] pg_restore and large tables
Hi,

Trying to restore a table that has about 80 million records. The database was dumped and restored according to the following procedure:

1) Dump the db, data only:
   time /usr/local/pgsql/bin/pg_dump -abf ./pgsql.7.4.rc1.pgdump.Z --format=c --compress=6 -U postgres testdb
2) Create the db schema from a separate file, including table structures, constraints, indexes.
3) Edit the restore order to satisfy the constraints.
4) Restore the db:
   time /usr/local/pgsql/bin/pg_restore -d testdb -U postgres -a ./pgsql.7.4.rc1.pgdump.Z -L ./restoreOrder.txt --verbose

pg_restore has been running for 14 hours now and there does not appear to be any end in sight. Meanwhile, postmaster is slowly eating away at the memory; it now has 46% of all available memory, with about 900MB on swap. HD activity is non-stop.

In retrospect, I guess the table with 80M records could've been created without indexes (it has 3: PK and AK constraints and an FK index) to speed up the pg_restore ... but then I'm not sure whether creating the indexes afterwards would've been much faster.

Anything I'm doing wrong?

Thanks

__
Do you Yahoo!?
Protect your identity with Yahoo! Mail AddressGuard http://antispam.yahoo.com/whatsnewfree
Re: [ADMIN] [pg 7.4.rc2] pg_restore and large tables
--- ow [EMAIL PROTECTED] wrote:
> Hi, Trying to restore a table that has about 80 million records. The database was [...]

I meant pg 7.4.rc2, of course.

Thanks
Re: [ADMIN] [pg 7.1.rc2] pg_restore and large tables
Hi,

I have found, on 7.3.4, a _massive_ performance difference restoring without indices - on a 25-million-row table, from 8 hours down to 1 hour! I've found the best way is to do this (there may be a script somewhere that automates it):

- do a --schema-only restore to create the tables
- manually drop the indices using psql
- do a --data-only restore, also using --disable-triggers
- manually recreate the indices

IIRC, it also helps to turn off fsync.

Hope that helps,

Shane

On 12 Nov 2003, at 16:55, ow wrote:
> Hi, Trying to restore a table that has about 80 million records. [...]
> pg_restore has been running for 14 hours now and it does not appear that there's any end in sight. [...]
> Anything I'm doing wrong? Thanks
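The steps above can be sketched as shell commands (database, dump file, table, and index names here are hypothetical; --disable-triggers requires restoring as a superuser):

```shell
# 1) restore the schema only, creating empty tables (and their indexes)
pg_restore --schema-only -d testdb -U postgres dump.Z

# 2) drop the indexes before loading, so each row insert skips index maintenance
psql -d testdb -U postgres -c 'DROP INDEX my_big_table_idx;'

# 3) load the data with triggers (including FK checks) disabled
pg_restore --data-only --disable-triggers -d testdb -U postgres dump.Z

# 4) rebuild the indexes in one bulk pass over the loaded table
psql -d testdb -U postgres -c 'CREATE INDEX my_big_table_idx ON my_big_table (col);'
```

The win comes from step 4: building an index once over sorted bulk data is far cheaper than updating it incrementally for each of 80M inserted rows.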
Re: [ADMIN] Upgrading to Solaris 9
On Tue, Nov 11, 2003 at 07:29:42PM +0100, Gaetano Mendola wrote:
> Seems that Solaris is the worst choice for running Postgres. Am I completely wrong?

Windows is worse ;-)

A
--
Andrew Sullivan | 204-4141 Yonge Street
Afilias Canada, Toronto, Ontario, Canada
[EMAIL PROTECTED] | M2P 2A8 | +1 416 646 3304 x110
Re: [ADMIN] Upgrading to Solaris 9
On Tue, Nov 11, 2003 at 01:52:26PM -0700, scott.marlowe wrote:
> I thought .org and .info were being run on postgresql/solaris?

They are. I'd happily dump the Solaris use overboard, however, if it weren't for all the nifty hardware support it gives us. It is dog-slow at handling large numbers of processes. We get around that by throwing hardware at the problem.

A
--
Andrew Sullivan | 204-4141 Yonge Street
Afilias Canada, Toronto, Ontario, Canada
[EMAIL PROTECTED] | M2P 2A8 | +1 416 646 3304 x110
Re: [ADMIN] Upgrading to Solaris 9
Danielle Cossette wrote:
> Good morning, Could you please let me know if Postgres 7.1.3 will run on Solaris 9? If it does, are you aware of any issues?

I've run 7.1, 7.2 and 7.3 on Solaris 8 x86 and Solaris 9 x86. The 7.1 install, config, and run was nearly 100% hands-off. 7.3 takes setting LD_LIBRARY_PATH to point to /usr/local/lib. They've run fine for 3+ years ... except I had always allocated a trivial amount of shared memory to Postgres. A few months back, I decided to increase the shmmin/shmmax parameters to something a bit more usable, and the OS promptly kernel-panicked upon bootup. It took a few tries to get values that worked. But this week, Solaris just went nuts and kept kernel-panicking until I dropped shared memory to next to nothing -- which set Postgres performance back to being unusable. I blew the OS away, installed Linux, and everything runs fine. Actually better than fine: better performance due to better drivers (especially for ATA hard drives), better disk cache management, lower memory footprint, etc. I had already switched to Linux on one server due to a non-working LSI 32320-R driver, and this shared memory problem has pretty much cemented our plans to move off Solaris.
Re: [ADMIN] vacuum full problem
Stephan Szabo wrote:
> On Tue, 11 Nov 2003, pginfo wrote:
>> It is possible there is one unclosed transaction, but in that case nobody would be able to modify this table (tables) and the system would stop responding. The paradox is that the system works well without that.
>
> Not necessarily. People are going to be able to insert/update/delete from the tables (the locks are AccessShareLock) because those don't take a conflicting table lock. They're not going to be able to do things like vacuum full or alter table, however, because those do.
>
>> Can you point me to any place in the docs to read in more detail about locks and statistics? (I have an idea, but I would also like to know more if possible.)
>
> Well, for a list of the lock levels and some examples of where they're used, you might want to see: http://www.postgresql.org/docs/7.3/static/explicit-locking.html

Thanks, it is good.

Regards,
ivan.
[ADMIN] broken pipe
Hi all,

Please help. I keep receiving this error message (see below) and I don't know how to get around this problem. Can anyone help?

Regards
Denis

tar: Removing leading `/' from member names
tar (child): /dev/st0: Wrote only 0 of 10240 bytes
tar (child): Error is not recoverable: exiting now
/etc/jobs/backup: line 2: 12269 Broken pipe  tar -czf /dev/st0 /home/* /groups/*
Re: [ADMIN] [pg 7.1.rc2] pg_restore and large tables
[EMAIL PROTECTED] (Shane Wright) writes:
> I have found, on 7.3.4, a _massive_ performance difference on restoring without indices - on a 25million row table from 8 hours down to 1 hour! I've found the best way is to do this... (there may be a script somewhere that automates this)
> - do a --schema-only restore to create the tables
> - manually drop the indices using psql
> - do a --data-only restore, also using --disable-triggers
> - manually recreate the indices.
> IIRC, it also helps to turn off fsync

None of this should come as any great surprise... All but the "turn off fsync" part is described in the documentation tree, thus: http://www.postgresql.org/docs/7.2/interactive/populate.html

--
output = reverse(ofni.smrytrebil @ enworbbc)
http://dev6.int.libertyrms.com/
Christopher Browne (416) 646 3304 x124 (land)
[ADMIN] Taking database offline
Hi,

Is there a way to take the database offline to keep users out? Is there a way to take the database offline but still allow pg_restore to run against it?

Thanks
Re: [ADMIN] Taking database offline
On Wed, Nov 12, 2003 at 12:07:23 -0800, ow [EMAIL PROTECTED] wrote:
> Is there a way to take the database offline to keep users out? Is there a way to take the database offline but still allow pg_restore to run against it?

You could shut the database down, change pg_hba.conf to only let in the postgres user, and bring the system back up. Once you finish your restore you can repeat the procedure, this time changing pg_hba.conf to let everyone back in.
Re: [ADMIN] Taking database offline
On Wed, 12 Nov 2003, ow wrote:
> Is there a way to take the database offline to keep users out? Is there a way to take the database offline but still allow pg_restore to run against it?

Yes, as of 7.3 you can create entries in pg_hba.conf that control who can connect to what database. Just make a backup of the original pg_hba.conf, and set up your copy to have only the postgres (or other) superuser allowed to connect. Then swap the two hba files and restart the server.
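One way to script the swap described above (the data directory path is hypothetical; the single-line pg_hba.conf uses the trust syntax from elsewhere in this thread):

```shell
PGDATA=/usr/local/pgsql/data                        # hypothetical data directory
cp "$PGDATA/pg_hba.conf" "$PGDATA/pg_hba.conf.orig" # keep the original rules

# superuser-only access over the local socket while the restore runs
echo 'local   all   postgres   trust' > "$PGDATA/pg_hba.conf"
pg_ctl restart -D "$PGDATA"                         # restart also drops existing sessions

# ... run pg_restore as the postgres user here ...

mv "$PGDATA/pg_hba.conf.orig" "$PGDATA/pg_hba.conf" # let everyone back in
pg_ctl restart -D "$PGDATA"
```

Note that pg_hba.conf only gates *new* connections; a restart (rather than a reload) is what evicts sessions that are already established.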
Re: [ADMIN] Question On 7.4RC2
Glenn Wiorek [EMAIL PROTECTED] writes:
> I just finished compiling 7.4RC2 on Solaris 8 and ran the regression tests (make check). The last test out of 93 (stats) failed. Is this of concern?

It means the stats collector is failing, probably because it's unable to set up the self-connected UDP port that the stats code uses. Kiyoshi Sawada reported the same problem on Solaris a day or two back on pg-hackers, but we've not yet been able to identify why it's failing. (The code involved got rewritten in 7.4 for IPv6 support, so some portability problems aren't too surprising ...) Can you dig into it there?

regards, tom lane
[ADMIN] 7.4RC2: pg_crypto not working?
After having initdb'd and started the server (gmake check said it passed all 93 tests), and doing a make; make install in pgcrypto, I tried creating the pgcrypto functions with

# psql -d template1 -f pgcrypto.sql

This produced a series of errors as follows:

2003-11-12 16:56:14.287480500 [9781] LOG: statement: CREATE OR REPLACE FUNCTION digest(text, text)
2003-11-12 16:56:14.287484500 RETURNS bytea
2003-11-12 16:56:14.287485500 AS '$libdir/pgcrypto', 'pg_digest'
2003-11-12 16:56:14.287487500 LANGUAGE 'C';
2003-11-12 16:56:14.293247500 [9781] ERROR: could not load library /usr/local/pgsql-7.4/lib/pgcrypto.so: /usr/local/pgsql-7.4/lib/pgcrypto.so: undefined symbol: px_find_digest

Is this from user error or is pgcrypto not working yet in RC2?

Thanks, Murthy
Re: [ADMIN] 7.4RC2: pg_crypto not working?
Murthy Kambhampaty [EMAIL PROTECTED] writes:
> # psql -d template1 -f pgcrypto.sql
> 2003-11-12 16:56:14.293247500 [9781] ERROR: could not load library /usr/local/pgsql-7.4/lib/pgcrypto.so: /usr/local/pgsql-7.4/lib/pgcrypto.so: undefined symbol: px_find_digest

Works for me (at least, the creation script succeeds; I don't know how to test the functions). Maybe you have an ldconfig search-path problem?

regards, tom lane
Re: [ADMIN] 7.4RC2: pg_crypto not working?
On Wed, 2003-11-12 at 14:14, Murthy Kambhampaty wrote:
> Is this from user error or is pgcrypto not working yet in RC2?

It works for me (x86, Red Hat 7.3, Linux 2.4.18). I did make install in pgcrypto/, then:

CREATE FUNCTION digest (text, text)
    RETURNS bytea
    AS 'pgcrypto', 'pg_digest'
    LANGUAGE c STRICT;

Note that I don't have '$libdir/' in this function definition. I can't remember why.

[EMAIL PROTECTED] select encode( digest('AAA','md5'), 'hex' );
              encode
----------------------------------
 e1faffb3e614e6c2fba74296962386b7
(1 row)

[EMAIL PROTECTED] select version();
                           version
--------------------------------------------------------------
 PostgreSQL 7.4RC2 on i686-pc-linux-gnu, compiled by GCC 2.96

I just discovered that md5 is now built into the 7.4 backend, so I don't need pgcrypto anymore.

-Reece
--
Reece Hart, http://www.in-machina.com/~reece/, GPG:0x25EC91A0 0xD178AAF9
[ADMIN] easy way to copy all DBs and users on a server
Is there an easy way to copy all DBs and users on a server to another server? I need to get an exact duplicate. Thanks in advance!

---
Josh Trefethen
Exciteworks, Inc
http://exciteworks.com
Re: [ADMIN] pg_dump still requires password
> I have tried with pgpass but still it does not work. I created the pgpass as per below:

Version? ~/.pgpass is supported since 7.3. Two more solutions are:

1) Set the PGPASSWORD environment variable to avoid pg_dump asking for a password
2) Set the connection method to trust in pg_hba.conf

regards,
bhuvaneswaran
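The PGPASSWORD route might look like this in a cron-invoked script (database name, path, and password are hypothetical; note the value can be visible to other local users via the process environment on some systems, which is why ~/.pgpass is generally preferred):

```shell
#!/bin/sh
# Hypothetical backup script: export PGPASSWORD so pg_dump won't prompt.
# Exported variables are inherited by the pg_dump child process.
PGPASSWORD=secret
export PGPASSWORD
/usr/local/pgsql/bin/pg_dump --no-owner helpdesk > /backup/helpdesk.dump
```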
Re: [ADMIN] easy way to copy all DBs and users on a server
On Thu, 2003-11-13 at 07:47, exciteworks hosting wrote:
> Is there an easy way to copy all DBs and users on a server to another server?

pg_dumpall is a good partner. If I am correct, you can also copy the entire data directory, provided both versions are the same.

regards,
bhuvaneswaran
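The pg_dumpall route can be sketched in one pipeline (hostnames and users hypothetical; the target cluster must already be initdb'd, running, and reachable):

```shell
# Dump every database plus users/groups from the local cluster,
# and replay the SQL directly on the new server.
# template1 is used only as the initial connection database;
# the dump's own CREATE DATABASE / \connect statements do the rest.
pg_dumpall -U postgres | psql -h newhost -U postgres -d template1
```

Copying the data directory instead is only safe with the server stopped, the same PostgreSQL major version, and the same platform/architecture on both machines.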
Re: [ADMIN] Taking database offline
scott.marlowe wrote:
> On Wed, 12 Nov 2003, ow wrote:
>> Is there a way to take the database offline to keep users out? Is there a way to take the database offline but still allow pg_restore to run against it?
>
> Yes, as of 7.3 you can create entries in pg_hba.conf that control who can connect to what database. Just make a backup of the original pg_hba.conf, and set up your copy to have only the postgres (or other) superuser be allowed to connect. Then, swap the two hba files and restart the server.

Rather than restarting the server (kicking everyone out), does 'reload' help to keep new users out and let the old ones leave on their own?

--
Bruce Momjian | http://candle.pha.pa.us
[EMAIL PROTECTED] | (610) 359-1001
+ If your life is a hard drive,  | 13 Roberts Road
+ Christ can be your backup.     | Newtown Square, Pennsylvania 19073