[SQL] Block size with pg_dump?
When I make a backup of a database, I put the output file directly on
magnetic tape; i.e., my command looks like this:

    pg_dump --file=/dev/st0

This way I do not have to worry about the total backup exceeding the size
of a file system, and it saves me the trouble of copying it to the tape as
a separate step. My current tapes hold 20 GBytes raw, or 40 GBytes if I
enable hardware compression (assuming 2:1 compression happens). The
documentation says that if I use the custom format (-F c) the data is
compressed in software, so I doubt the hardware compression will do much.

I do not know what blocksize pg_dump uses, or if it insists on a
particular blocksize on input.

My tape drive will work with any blocksize, but prefers 65536-byte blocks.
I do not see any options for this in pg_dump, but I suppose I could pipe
the output of pg_dump through dd to make any blocksize I want.

On the way back, likewise, I could pipe the tape through dd before giving
it to pg_restore.

Does pg_dump care what blocksize it gets? If so, what is it?

--
Jean-David Beyer, Shrewsbury, New Jersey
Registered Linux User 85642.  PGP key: 9A2FC99A
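As an aside on the compression point: the custom format's built-in
compression can be turned off so the drive's hardware compression still
has something to work with. A minimal, untested sketch, with mydb as a
placeholder database name:

    # Custom-format dump written straight to the tape device, with
    # pg_dump's own compression disabled (-Z 0) so the drive's hardware
    # compression is not handed already-compressed data
    pg_dump -F c -Z 0 --file=/dev/st0 mydb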
[SQL] List of FKeys ?
Hi,

could I get a list of foreign keys that refer to a column?

Say I have a table_1 (t1_id integer ...) and a lot of other tables that
may refer to table_1. Is there a command that lists all tables that have
a foreign key pointing to t1_id? Preferably with the "on update/delete"
options of the relation.

Regards
Andreas
Re: [SQL] Block size with pg_dump?
Jean-David Beyer wrote:
> When I make a backup of a database, I put the output file directly on
> magnetic tape; i.e., my command looks like this:
>
>     pg_dump --file=/dev/st0
>
> This way I do not have to worry about the total backup exceeding the
> size of a file system, and it saves me the trouble of copying it to the
> tape as a separate step. My current tapes hold 20 GBytes raw, or
> 40 GBytes if I enable hardware compression (assuming 2:1 compression
> happens). The documentation says that if I use the custom format (-F c)
> the data is compressed in software, so I doubt the hardware compression
> will do much.
>
> I do not know what blocksize pg_dump uses, or if it insists on a
> particular blocksize on input.
>
> My tape drive will work with any blocksize, but prefers 65536-byte
> blocks. I do not see any options for this in pg_dump, but I suppose I
> could pipe the output of pg_dump through dd to make any blocksize I
> want.
>
> On the way back, likewise, I could pipe the tape through dd before
> giving it to pg_restore.
>
> Does pg_dump care what blocksize it gets? If so, what is it?

I assume you could pipe pg_dump into dd and specify the block size in dd.

--
Bruce Momjian <[EMAIL PROTECTED]>  http://momjian.us
EnterpriseDB                       http://www.enterprisedb.com
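A minimal sketch of that pipeline, assuming the /dev/st0 device and
65536-byte blocks from the original post, with mydb as a placeholder
database name; obs=65536 makes dd collect the piped output into full
64 KB blocks before writing them to tape:

    # pg_dump streams the archive; dd re-blocks it for the tape drive
    pg_dump -F c mydb | dd obs=65536 of=/dev/st0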
Re: [SQL] Block size with pg_dump?
Bruce Momjian wrote:
> Jean-David Beyer wrote:
>> [...]
>> My tape drive will work with any blocksize, but prefers 65536-byte
>> blocks. I do not see any options for this in pg_dump, but I suppose I
>> could pipe the output of pg_dump through dd to make any blocksize I
>> want.
>>
>> On the way back, likewise, I could pipe the tape through dd before
>> giving it to pg_restore.
>>
>> Does pg_dump care what blocksize it gets? If so, what is it?
>
> I assume you could pipe pg_dump into dd and specify the block size in
> dd.

Of course on the way out I can do that.

The main question is: if I present pg_restore with a 65536-byte blocksize
and it is expecting, e.g., 1024-byte blocks, will the rest of each block
get skipped? I.e., do I have to use dd on the way back too? And if so,
what should the blocksize be?

--
Jean-David Beyer, Shrewsbury, New Jersey
Re: [SQL] Block size with pg_dump?
On Aug 26, 2007, at 8:09 PM, Jean-David Beyer wrote:
> Bruce Momjian wrote:
>> I assume you could pipe pg_dump into dd and specify the block size in
>> dd.
>
> Of course on the way out I can do that.
>
> The main question is: if I present pg_restore with a 65536-byte
> blocksize and it is expecting, e.g., 1024-byte blocks, will the rest of
> each block get skipped? I.e., do I have to use dd on the way back too?
> And if so, what should the blocksize be?

Postgres (by default) uses 8K blocks.

Erik Jones
Software Developer | Emma®
[EMAIL PROTECTED]
http://www.myemma.com
Re: [SQL] Block size with pg_dump?
Erik Jones wrote:
>>>> On the way back, likewise, I could pipe the tape through dd before
>>>> giving it to pg_restore.
>>>>
>>>> Does pg_dump care what blocksize it gets? If so, what is it?
>>>
>>> I assume you could pipe pg_dump into dd and specify the block size in
>>> dd.
>>
>> Of course on the way out I can do that.
>>
>> The main question is: if I present pg_restore with a 65536-byte
>> blocksize and it is expecting, e.g., 1024-byte blocks, will the rest
>> of each block get skipped? I.e., do I have to use dd on the way back
>> too? And if so, what should the blocksize be?
>
> Postgres (by default) uses 8K blocks.

That is true of the internal storage, but not of pg_dump's output:
pg_dump uses libpq to pull rows and writes them out as a stream, so there
is no inherent blocking in pg_dump's output itself.

--
Bruce Momjian <[EMAIL PROTECTED]>  http://momjian.us
EnterpriseDB                       http://www.enterprisedb.com
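Since the archive is just a byte stream, the block size only has to suit
the tape drive, and dd can undo it on the way back: read the 64 KB tape
blocks and hand pg_restore an ordinary stream on stdin. Again a sketch,
with mydb as a placeholder database name:

    # dd consumes the 64 KB tape blocks; pg_restore reads the resulting
    # stream from the pipe and restores into the target database
    dd if=/dev/st0 ibs=65536 | pg_restore -d mydb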
Re: [SQL] List of FKeys ?
Andreas <[EMAIL PROTECTED]> writes:
> could I get a list of foreign keys that refer to a column?

The information_schema views constraint_column_usage and
referential_constraints might help you, or you could dive into the
underlying system catalogs.

			regards, tom lane
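For example, a query along these lines (a sketch only; table_1 and t1_id
are the hypothetical names from the question) lists the referencing
tables and columns together with their ON UPDATE / ON DELETE rules:

    SELECT rc.constraint_name,
           kcu.table_name  AS referencing_table,
           kcu.column_name AS referencing_column,
           rc.update_rule,
           rc.delete_rule
      FROM information_schema.referential_constraints rc
      JOIN information_schema.key_column_usage kcu
           ON kcu.constraint_schema = rc.constraint_schema
          AND kcu.constraint_name   = rc.constraint_name
      JOIN information_schema.constraint_column_usage ccu
           ON ccu.constraint_schema = rc.constraint_schema
          AND ccu.constraint_name   = rc.constraint_name
     WHERE ccu.table_name  = 'table_1'   -- referenced table
       AND ccu.column_name = 't1_id';    -- referenced column

Note that the information_schema views only show constraints on tables
owned by a role you belong to; the system catalog pg_constraint
(confrelid, confupdtype, confdeltype) carries the same information
without that restriction.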