Re: [HACKERS] [PATCHES] Dbsize backend integration
Bruce Momjian wrote:
> Dave Page wrote:
>> pg_relation_size(text) - Get relation size by name/schema.name
>> pg_relation_size(oid) - Get relation size by OID
>> pg_tablespace_size(name) - Get tablespace size by name
>> pg_tablespace_size(oid) - Get tablespace size by OID
>> pg_database_size(name) - Get database size by name
>> pg_database_size(oid) - Get database size by OID
>> pg_table_size(text) - Get table size (including all indexes and toast tables) by name/schema.name
>> pg_table_size(oid) - Get table size (including all indexes and toast tables) by OID
>> pg_size_pretty(int8) - Pretty print (and round) the byte size specified (eg, 123456 = 121KB)
>
> OK, so you went with relation as heap/index/toast only, and table as the total of them. I am not sure that makes sense, because we usually equate relation with table, and an index isn't a relation, really. Do we have to use pg_object_size? Is there a better name? Are indexes/toasts even objects?

Relation is not an ideal name, but I have heard people talk about heap relations and index relations. Indexes and tables (and sequences) are treated in a similar way quite often; think of ALTER TABLE example_index RENAME TO another_index. This is even less obvious. Of course in relational theory an index would not be a relation, because an index is just an implementation detail.

I don't like object_size any better, since that makes me rather think of large objects or rows as objects (object id...).

Perhaps pg_table_size should be split into pg_table_size and pg_indexes_size, where pg_indexes_size is the aggregate of all indexes on a table and pg_table_size is just table+toast+toast-index. If no one has a better idea for pg_relation_size, I would rather keep it for consistency with the contrib module, and because it's not too far off.

Best Regards,
Michael Paesold

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
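The rounding behavior described for pg_size_pretty (123456 bytes printing as roughly 121 KB) can be sketched like this. This is an illustrative approximation, not the backend's actual implementation; `size_pretty` and the 10-of-the-next-unit cutoff are my own choices:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Rough sketch of pg_size_pretty-style output: repeatedly divide by
 * 1024, rounding to nearest, once the value reaches about ten of the
 * next-larger unit.  Illustrative only -- not the backend's algorithm. */
static void
size_pretty(long long size, char *buf, size_t buflen)
{
	const char *units[] = {"bytes", "KB", "MB", "GB", "TB"};
	int			u = 0;

	while (size >= 10240 && u < 4)
	{
		size = (size + 512) / 1024;	/* divide by 1024, round to nearest */
		u++;
	}
	snprintf(buf, buflen, "%lld %s", size, units[u]);
}
```

With this sketch, an input of 123456 comes out as "121 KB", matching the example in the function list above.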
[PATCHES] Dump comments on large objects in text mode
Hi,

Attached patch enables dumping of LOB comments when in text mode. I really don't get the binary/custom format LOB stuff (and don't have time to investigate), so perhaps someone else can do that. Having it in text format is still an improvement though.

Chris

Attachment: blobcomments.txt.gz
Re: [PATCHES] plperl features
Bruce Momjian wrote:
> Sergej, are you going to repost this patch?

Sorry for the delay. Yes, I am working on it, but I am waiting for a decision about Andrew's and Abhijit's patches.
[PATCHES] libpq: fix unlikely memory leak
The attached patch fixes a theoretical memory leak in libpq: if the second malloc() fails due to OOM, the memory returned by the first (successful) malloc() will be leaked. Barring any objections I'll apply this tomorrow. Per report from EnterpriseDB's Coverity analysis.

-Neil

Index: src/interfaces/libpq/fe-auth.c
===================================================================
RCS file: /var/lib/cvs/pgsql/src/interfaces/libpq/fe-auth.c,v
retrieving revision 1.102
diff -c -r1.102 fe-auth.c
*** src/interfaces/libpq/fe-auth.c	27 Jun 2005 02:04:26 -0000	1.102
--- src/interfaces/libpq/fe-auth.c	29 Jun 2005 03:09:01 -0000
***************
*** 407,434 ****
  			{
  				char	   *crypt_pwd2;
  
! 				if (!(crypt_pwd = malloc(MD5_PASSWD_LEN + 1)) ||
! 					!(crypt_pwd2 = malloc(MD5_PASSWD_LEN + 1)))
  				{
  					fprintf(stderr, libpq_gettext("out of memory\n"));
! 					return STATUS_ERROR;
  				}
  				if (!EncryptMD5(password, conn->pguser,
  								strlen(conn->pguser), crypt_pwd2))
! 				{
! 					free(crypt_pwd);
! 					free(crypt_pwd2);
! 					return STATUS_ERROR;
! 				}
  				if (!EncryptMD5(crypt_pwd2 + strlen("md5"), conn->md5Salt,
  								sizeof(conn->md5Salt), crypt_pwd))
! 				{
! 					free(crypt_pwd);
! 					free(crypt_pwd2);
! 					return STATUS_ERROR;
! 				}
  				free(crypt_pwd2);
  				break;
  			}
  		case AUTH_REQ_CRYPT:
  			{
--- 407,438 ----
  			{
  				char	   *crypt_pwd2;
  
! 				crypt_pwd = malloc(MD5_PASSWD_LEN + 1);
! 				crypt_pwd2 = malloc(MD5_PASSWD_LEN + 1);
! 
! 				if (crypt_pwd == NULL || crypt_pwd2 == NULL)
  				{
  					fprintf(stderr, libpq_gettext("out of memory\n"));
! 					goto md5_error;
  				}
+ 
  				if (!EncryptMD5(password, conn->pguser,
  								strlen(conn->pguser), crypt_pwd2))
! 					goto md5_error;
! 
  				if (!EncryptMD5(crypt_pwd2 + strlen("md5"), conn->md5Salt,
  								sizeof(conn->md5Salt), crypt_pwd))
! 					goto md5_error;
! 
  				free(crypt_pwd2);
  				break;
+ 
+ 			md5_error:
+ 				if (crypt_pwd)
+ 					free(crypt_pwd);
+ 				if (crypt_pwd2)
+ 					free(crypt_pwd2);
+ 				return STATUS_ERROR;
  			}
  		case AUTH_REQ_CRYPT:
  			{
[PATCHES] spi_query/spi_fetchrow for pl/perl
The attached patch implements spi_query() and spi_fetchrow() functions for PL/Perl, to avoid loading the entire result set into memory as the existing spi_exec_query() function does. Here's how one might use the new functions:

$x = spi_query("select ...");
while (defined ($y = spi_fetchrow($x))) {
    ...
    return_next(...);
}

The changes do not affect the spi_exec_query() interface in any way. Comments welcome.

-- ams

--- pl/plperl/spi_internal.h~	2005-06-12 15:45:31.0 +0530
+++ pl/plperl/spi_internal.h	2005-06-12 15:45:59.0 +0530
@@ -18,3 +18,5 @@
 /* this is actually in plperl.c */
 HV		   *plperl_spi_exec(char *, int);
 void		plperl_return_next(SV *);
+SV		   *plperl_spi_query(char *);
+SV		   *plperl_spi_fetchrow(char *);
--- pl/plperl/SPI.xs~	2005-06-05 14:49:59.0 +0530
+++ pl/plperl/SPI.xs	2005-06-12 15:31:37.0 +0530
@@ -103,5 +103,21 @@
 	CODE:
 	plperl_return_next(rv);
 
+SV *
+spi_spi_query(query)
+	char *query;
+	CODE:
+	RETVAL = plperl_spi_query(query);
+	OUTPUT:
+	RETVAL
+
+SV *
+spi_spi_fetchrow(cursor)
+	char *cursor;
+	CODE:
+	RETVAL = plperl_spi_fetchrow(cursor);
+	OUTPUT:
+	RETVAL
+
 BOOT:
 	items = 0;	/* avoid 'unused variable' warning */
--- pl/plperl/plperl.c~	2005-06-29 07:40:25.132358971 +0530
+++ pl/plperl/plperl.c	2005-06-29 14:29:30.269050201 +0530
@@ -118,6 +118,7 @@
 void		plperl_init(void);
 
 HV		   *plperl_spi_exec(char *query, int limit);
+SV		   *plperl_spi_query(char *);
 
 static Datum plperl_func_handler(PG_FUNCTION_ARGS);
@@ -225,6 +226,7 @@
 	$PLContainer->permit_only(':default');
 	$PLContainer->permit(qw[:base_math !:base_io sort time]);
 	$PLContainer->share(qw[&elog &spi_exec_query &return_next
+		&spi_query &spi_fetchrow
 		&DEBUG &LOG &INFO &NOTICE &WARNING &ERROR %_SHARED ]);
@@ -1519,3 +1521,77 @@
 	heap_freetuple(tuple);
 	MemoryContextSwitchTo(cxt);
 }
+
+
+SV *
+plperl_spi_query(char *query)
+{
+	SV		   *cursor;
+
+	MemoryContext oldcontext = CurrentMemoryContext;
+	ResourceOwner oldowner = CurrentResourceOwner;
+
+	BeginInternalSubTransaction(NULL);
+	MemoryContextSwitchTo(oldcontext);
+
+	PG_TRY();
+	{
+		void	   *plan;
+		Portal		portal = NULL;
+
+		plan = SPI_prepare(query, 0, NULL);
+		if (plan)
+			portal = SPI_cursor_open(NULL, plan, NULL, NULL, false);
+		if (portal)
+			cursor = newSVpv(portal->name, 0);
+		else
+			cursor = newSV(0);
+
+		ReleaseCurrentSubTransaction();
+		MemoryContextSwitchTo(oldcontext);
+		CurrentResourceOwner = oldowner;
+		SPI_restore_connection();
+	}
+	PG_CATCH();
+	{
+		ErrorData  *edata;
+
+		MemoryContextSwitchTo(oldcontext);
+		edata = CopyErrorData();
+		FlushErrorState();
+
+		RollbackAndReleaseCurrentSubTransaction();
+		MemoryContextSwitchTo(oldcontext);
+		CurrentResourceOwner = oldowner;
+
+		SPI_restore_connection();
+		croak("%s", edata->message);
+		return NULL;
+	}
+	PG_END_TRY();
+
+	return cursor;
+}
+
+
+SV *
+plperl_spi_fetchrow(char *cursor)
+{
+	SV		   *row = newSV(0);
+	Portal		p = SPI_cursor_find(cursor);
+
+	if (!p)
+		return row;
+
+	SPI_cursor_fetch(p, true, 1);
+	if (SPI_processed == 0)
+	{
+		SPI_cursor_close(p);
+		return row;
+	}
+
+	row = plperl_hash_from_tuple(SPI_tuptable->vals[0],
+								 SPI_tuptable->tupdesc);
+	SPI_freetuptable(SPI_tuptable);
+
+	return row;
+}
--- pl/plperl/sql/plperl.sql~	2005-06-29 08:51:26.602006376 +0530
+++ pl/plperl/sql/plperl.sql	2005-06-29 15:04:25.744032754 +0530
@@ -247,3 +247,16 @@
 	return;
 $$ language plperl;
 SELECT * from perl_srf_rn() AS (f1 INTEGER, f2 TEXT, f3 TEXT);
+
+--
+-- Test spi_query/spi_fetchrow
+--
+
+CREATE OR REPLACE FUNCTION perl_spi_func() RETURNS SETOF INTEGER AS $$
+$x = spi_query("select 1 as a union select 2 as a");
+while (defined ($y = spi_fetchrow($x))) {
+    return_next($y->{a});
+}
+return;
+$$ LANGUAGE plperl;
+SELECT * from perl_spi_func();
--- pl/plperl/expected/plperl.out~	2005-06-29 15:04:37.749935678 +0530
+++ pl/plperl/expected/plperl.out	2005-06-29 15:06:08.967002756 +0530
@@ -350,3 +350,20 @@
  3 | Hello | PL/Perl
 (3 rows)
 
+--
+-- Test spi_query/spi_fetchrow
+--
+CREATE OR REPLACE FUNCTION perl_spi_func() RETURNS SETOF INTEGER AS $$
+$x = spi_query("select 1 as a union select 2 as a");
+while (defined ($y =
Re: [PATCHES] Dbsize backend integration
Dave Page wrote:
> -----Original Message-----
> From: Bruce Momjian [mailto:[EMAIL PROTECTED]]
> Sent: Wed 6/29/2005 2:16 AM
> To: Dave Page
> Cc: PostgreSQL-patches; PostgreSQL-development
> Subject: Re: [PATCHES] Dbsize backend integration
>
>> OK, so you went with relation as heap/index/toast only, and table as the total of them. I am not sure that makes sense because we usually equate relation with table, and an index isn't a relation, really.
>
> Err, yes - posted that before I got your reply!
>
>> Do we have to use pg_object_size? Is there a better name? Are indexes/toasts even objects?
>
> Yeah, I think perhaps pg_object_size is better in some ways than pg_relation_size; however, I stuck with relation because (certainly in the pgAdmin world) we tend to think of pretty much anything as an object. I could go either way on that, though; Michael doesn't seem so keen.

So, one for pg_object_size, one on the fence, and one against :-). Anyone else got a preference?

I have a new idea --- pg_storage_size(). That would do just the toast/index/heap, and pg_relation_size() gets a total of them all, and only works on heap, no index or toast. How is that?

-- Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [HACKERS] [PATCHES] Dbsize backend integration
Michael Paesold wrote:
>> Do we have to use pg_object_size? Is there a better name? Are indexes/toasts even objects?
>
> Relation is not an ideal name, but I have heard people talk about heap relations and index relations. Indexes and tables (and sequences) are treated in a similar way quite often; think of ALTER TABLE example_index RENAME TO another_index. This is even less obvious. Of course in relational theory an index would not be a relation, because an index is just an implementation detail.
>
> I don't like object_size any better, since that makes me rather think of large objects or rows as objects (object id...).
>
> Perhaps pg_table_size should be split into pg_table_size and pg_indexes_size, where pg_indexes_size is the aggregate of all indexes on a table and pg_table_size is just table+toast+toast-index. If no one has a better idea for pg_relation_size, I would rather keep it for consistency with the contrib module, and because it's not too far off.

Yea, but then we have toast and we would need another name. I suggested pg_storage_size() because it relates to a storage unit (index, toast, etc.), and not a real object or relation.

-- Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [PATCHES] plperl features
Sergej Sergeev said:
> Bruce Momjian wrote:
>> Sergej, are you going to repost this patch?
>
> Sorry for the delay. Yes, I am working on it, but I am waiting for a decision about Andrew's and Abhijit's patches.

This is the polymorphic types plus perl to pg array patch, right?

I am not working on this right now (or anything else for 8.1) - the original plperl array to pg array implementation was broken, I was not happy with the fix I first came up with, and I ran out of time to work on an acceptable solution. Neither is Abhijit, to the best of my knowledge - he has today submitted a patch for SPI fetching via a cursor, which should not affect this stuff. We recently put the two items on the TODO list, as I understood from Joshua that you guys weren't working on plperl at all in the 8.1 feature freeze time frame.

cheers

andrew
Re: [PATCHES] libpq: fix unlikely memory leak
Neil Conway [EMAIL PROTECTED] writes:
> The attached patch fixes a theoretical memory leak in libpq: if the second malloc() fails due to OOM, the memory returned by the first (successful) malloc() will be leaked.

Since both allocations are only transient within this routine, there's a simpler, more effective solution, which is to do only one malloc in the first place:

	char	   *crypt_pwd2;

	/* need enough space for 2 MD5 hashes */
	if (!(crypt_pwd = malloc(2 * (MD5_PASSWD_LEN + 1))))
	{
		fprintf(stderr, libpq_gettext("out of memory\n"));
		return STATUS_ERROR;
	}
	crypt_pwd2 = crypt_pwd + (MD5_PASSWD_LEN + 1);

and drop the free(crypt_pwd2) calls.

			regards, tom lane
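The one-malloc pattern Tom suggests can be shown standalone. This is a hypothetical sketch of the idea only; `alloc_two_md5_buffers` is an invented name, with MD5_PASSWD_LEN defined the way fe-auth.c does ("md5" plus 32 hex digits):

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

#define MD5_PASSWD_LEN 35		/* "md5" + 32 hex digits, per fe-auth.c */

/* Hypothetical standalone sketch: one malloc backing two fixed-size
 * buffers, so there is no partial-failure case to clean up and a
 * single free() of the first pointer releases both. */
static char *
alloc_two_md5_buffers(char **second)
{
	char	   *first = malloc(2 * (MD5_PASSWD_LEN + 1));

	if (first == NULL)
		return NULL;			/* caller reports out of memory */
	*second = first + MD5_PASSWD_LEN + 1;	/* carve second buffer */
	return first;
}
```

Because both buffers live in one allocation, freeing the first pointer releases both halves, which is why the revised patch below can drop the free(crypt_pwd2) calls entirely.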
[PATCHES] Change Ownership Permission Checks
Greetings,

Attached please find a patch to change how the permissions checking for alter-owner is done. With roles there can be more than one 'owner' of an object, and therefore it becomes sensible to allow specific cases of ownership change for non-superusers. The permission checks for change-owner follow the alter-rename precedent that the new owner must have permission to create the object in the schema.

The roles patch previously applied did not require the role for which a database is being created to have createdb privileges, or the role for which a schema is being created to have create privileges on the database (the role doing the creation did have to have those privileges, though, of course). For 'container' type objects this seems reasonable. 'Container' type objects are unlike others in a few ways, but one of the more notable differences for this case is that an owner may be specified as part of the create command.

To support cleaning up the various checks, I also went ahead and modified is_member_of_role() to always return true when asked if a superuser is in a given role. This seems reasonable, won't affect what's actually seen in the various tables, and allows us to eliminate explicit superuser() checks in a number of places. I have also reviewed the other superuser() calls in src/backend/commands/ and feel pretty comfortable that they're all necessary, reasonable, and don't need to be replaced with *_ownercheck or other calls.

The specific changes, by file:

aggregatecmds.c:
  alter-owner checks:
    User is owner of the to-be-changed object
    User is a member of the new owner's role
    New owner is permitted to create objects in the schema
  Superuser() requirement removed

conversioncmds.c:
  rename checks:
    Changed from superuser() or same-roleId to pg_conversion_ownercheck
  alter-owner checks:
    User is owner of the to-be-changed object
    User is a member of the new owner's role
    New owner is permitted to create objects in the schema
  Superuser() requirement removed

dbcommands.c:
  Moved superuser() check to have_createdb_privilege
  Cleaned up permissions checking in createdb and rename
  alter-owner checks:
    User is owner of the database
    User is a member of the new owner's role
    User has createdb privilege

functioncmds.c:
  alter-owner checks:
    User is owner of the function
    User is a member of the new owner's role
    New owner is permitted to create objects in the schema

opclasscmds.c:
  alter-owner checks:
    User is owner of the object
    User is a member of the new owner's role
    New owner has permission to create objects in the schema

operatorcmds.c:
  alter-owner checks:
    User is owner of the object
    User is a member of the new owner's role
    New owner has permission to create objects in the schema

schemacmds.c:
  Cleaned up create schema identity changing/setting/checking
  (This code was quite different from all the other create functions; these changes make it much more closely match createdb)
  alter-owner checks:
    User is owner of the schema
    User is a member of the new owner's role
    User has create privilege on the database

tablecmds.c:
  alter-owner checks:
    User is owner of the object
    User is a member of the new owner's role
    New owner has permission to create objects in the schema

tablespace.c:
  alter-owner checks:
    User is owner of the tablespace
    User is a member of the new owner's role
  (No create-tablespace permission to check; tablespaces must be created by superusers, so alter-owner here really only matters if the superuser changed the tablespace owner to a non-superuser and then that non-superuser wants to change the ownership to yet another user. The other option would be to continue to force superuser-only for tablespace owner changes, but I'm not sure I see the point if the superuser trusts the non-superuser enough to give them a tablespace...)

typecmds.c:
  alter-owner checks:
    User is owner of the object
    User is a member of the new owner's role
    New owner has permission to create objects in the schema

Many thanks. As always, comments, questions, concerns, please let me know.

Thanks again,

Stephen

? src/backend/commands/.typecmds.c.swp
Index: src/backend/commands/aggregatecmds.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/commands/aggregatecmds.c,v
retrieving revision 1.27
diff -c -r1.27 aggregatecmds.c
*** src/backend/commands/aggregatecmds.c	28 Jun 2005 05:08:53 -0000	1.27
--- src/backend/commands/aggregatecmds.c	29 Jun 2005 15:29:57 -0000
***************
*** 299,307
Re: [HACKERS] [PATCHES] Dbsize backend integration
Bruce Momjian wrote:
> Yea, but then we have toast and we would need another name. I suggested pg_storage_size() because it relates to a storage unit (index, toast, etc.), and not a real object or relation.

I'm not really happy that all the functions change their names (more versioning handling in pgAdmin), but pg_storage_size is certainly the most precise name.

Regards,
Andreas
Re: [HACKERS] [PATCHES] Dbsize backend integration
Andreas Pflug [EMAIL PROTECTED] writes:
> I'm not really happy that all the functions change their names (more versioning handling in pgAdmin), but pg_storage_size is certainly the most precise name.

Actually, it seems excessively imprecise to me: the name conveys nothing at all to help you remember what the definition is. "storage" could mean any of the different definitions that have been kicked around in this thread.

			regards, tom lane
[PATCHES] Autovacuum integration patch
Hackers,

(Resend, like the fifth time or so. bzip2'ing the patch for luck.)

Here is a first cut at autovacuum integration. Please have a look at it. Note that this patch automatically creates three new files:

src/backend/postmaster/autovacuum.c
src/include/catalog/pg_autovacuum.h
src/include/postmaster/autovacuum.h

Note that the daemon is not activated by default. There are several things that are painfully evident with this thing on:

- TRUNCATE does not update stats. It should send a stat message to which we can react.

- If you empty a whole table using DELETE just after an automatically-issued VACUUM takes place, the new threshold may not be enough to trigger a new VACUUM. Thus you end up with a bloated table, and it won't get vacuumed until it grows again. This may be a problem with the cost equations, but those are AFAICT identical to those of pg_autovacuum, so we may need to rethink the equations.

- The default value of "on" for reset stats on server start is going to be painful with autovacuum, because it reacts badly to losing the info.

- We should make VACUUM and ANALYZE update the pg_autovacuum relation, in order to make the autovacuum daemon behave sanely with manually issued VACUUM/ANALYZE.

- Having an autovacuum process running on a database can be surprising if you want to drop the database, or create a new one using it as a template. This happened to me several times.

- The shutdown sequence is not debugged nor very well tested. It may be all wrong.

- The startup sequence is a mixture of pgarch, normal backend and pgstat. I find it relatively clean, but I can't swear it's bug-free.

- There are no docs.

- There are no ALTER TABLE commands to change the pg_autovacuum attributes for a table (enable/disable, set thresholds and scaling factor).

- I compiled with -DEXEC_BACKEND, but I didn't look to see if it actually worked in that case.

Apart from all these issues, it is completely functional :-) It can survive several "make installcheck" runs without problem, and the regression database is vacuumed/analyzed as it runs.

Some of these issues are trivial to handle. However, I'd like to release this right now, so I can go back to shared dependencies now that role support is in. Barring any objections I think this should be integrated, so these issues can be tackled by interested parties.

-- Alvaro Herrera (alvherre[a]surnet.cl)
"World domination is proceeding according to plan" (Andrew Morton)

Attachment: autovacuum-4.patch.bz2
Re: [PATCHES] libpq: fix unlikely memory leak
Tom Lane wrote:
> Since both allocations are only transient within this routine, there's a simpler, more effective solution, which is to do only one malloc in the first place

Yeah, true. Attached is a revised patch -- committed to HEAD.

-Neil

Index: src/interfaces/libpq/fe-auth.c
===================================================================
RCS file: /var/lib/cvs/pgsql/src/interfaces/libpq/fe-auth.c,v
retrieving revision 1.102
diff -c -r1.102 fe-auth.c
*** src/interfaces/libpq/fe-auth.c	27 Jun 2005 02:04:26 -0000	1.102
--- src/interfaces/libpq/fe-auth.c	30 Jun 2005 01:55:35 -0000
***************
*** 407,433 ****
  			{
  				char	   *crypt_pwd2;
  
! 				if (!(crypt_pwd = malloc(MD5_PASSWD_LEN + 1)) ||
! 					!(crypt_pwd2 = malloc(MD5_PASSWD_LEN + 1)))
  				{
  					fprintf(stderr, libpq_gettext("out of memory\n"));
  					return STATUS_ERROR;
  				}
  				if (!EncryptMD5(password, conn->pguser,
  								strlen(conn->pguser), crypt_pwd2))
  				{
  					free(crypt_pwd);
- 					free(crypt_pwd2);
  					return STATUS_ERROR;
  				}
  				if (!EncryptMD5(crypt_pwd2 + strlen("md5"), conn->md5Salt,
  								sizeof(conn->md5Salt), crypt_pwd))
  				{
  					free(crypt_pwd);
- 					free(crypt_pwd2);
  					return STATUS_ERROR;
  				}
- 				free(crypt_pwd2);
  				break;
  			}
  		case AUTH_REQ_CRYPT:
--- 407,433 ----
  			{
  				char	   *crypt_pwd2;
  
! 				/* Allocate enough space for two MD5 hashes */
! 				crypt_pwd = malloc(2 * (MD5_PASSWD_LEN + 1));
! 				if (!crypt_pwd)
  				{
  					fprintf(stderr, libpq_gettext("out of memory\n"));
  					return STATUS_ERROR;
  				}
+ 
+ 				crypt_pwd2 = crypt_pwd + MD5_PASSWD_LEN + 1;
  				if (!EncryptMD5(password, conn->pguser,
  								strlen(conn->pguser), crypt_pwd2))
  				{
  					free(crypt_pwd);
  					return STATUS_ERROR;
  				}
  				if (!EncryptMD5(crypt_pwd2 + strlen("md5"), conn->md5Salt,
  								sizeof(conn->md5Salt), crypt_pwd))
  				{
  					free(crypt_pwd);
  					return STATUS_ERROR;
  				}
  				break;
  			}
  		case AUTH_REQ_CRYPT:
Re: [PATCHES] Dump comments on large objects in text mode
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
> Attached patch enables dumping of LOB comments when in text mode. I really don't get the binary/custom format LOB stuff (and don't have time to investigate), so perhaps someone else can do that.

That's pretty icky :-(. I think the right way is more like this.

			regards, tom lane

Attachment: blobcomments.patch.gz
Re: [PATCHES] Dump comments on large objects in text mode
On Wed, Jun 29, 2005 at 11:05:43PM -0400, Tom Lane wrote:
> Christopher Kings-Lynne [EMAIL PROTECTED] writes:
>> Attached patch enables dumping of LOB comments when in text mode. I really don't get the binary/custom format LOB stuff (and don't have time to investigate), so perhaps someone else can do that.
>
> That's pretty icky :-(. I think the right way is more like this.

Wow, that's complex. No wonder I wasn't able to come up with a patch to dump ALTER DATABASE commands some time ago.

-- Alvaro Herrera (alvherre[a]surnet.cl)
"Toughen up, but never lose tenderness" (E. Guevara)
Re: [PATCHES] Dump comments on large objects in text mode
>> Attached patch enables dumping of LOB comments when in text mode. I really don't get the binary/custom format LOB stuff (and don't have time to investigate), so perhaps someone else can do that.
>
> That's pretty icky :-(. I think the right way is more like this.

Hehe - in the world of open source, convincing someone to implement something is almost as good as doing it yourself :) Thanks for that, pg_dump is really going great guns now...

Chris
[PATCHES] tiny patch to fix unixware build
In the course of looking into Larry's buildfarm woes, I found that Unixware needs this patch on HEAD to build correctly.

cheers

andrew

Index: src/backend/utils/adt/timestamp.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/utils/adt/timestamp.c,v
retrieving revision 1.127
diff -c -r1.127 timestamp.c
*** src/backend/utils/adt/timestamp.c	29 Jun 2005 22:51:56 -0000	1.127
--- src/backend/utils/adt/timestamp.c	30 Jun 2005 03:03:42 -0000
***************
*** 19,24 ****
--- 19,25 ----
  #include <math.h>
  #include <float.h>
  #include <limits.h>
+ #include <sys/time.h>
  
  #include "access/hash.h"
  #include "access/xact.h"
Re: [PATCHES] Dump comments on large objects in text mode
Alvaro Herrera [EMAIL PROTECTED] writes:
> On Wed, Jun 29, 2005 at 11:05:43PM -0400, Tom Lane wrote:
>> That's pretty icky :-(. I think the right way is more like this.
>
> Wow, that's complex. No wonder I wasn't able to come up with a patch to dump ALTER DATABASE commands some time ago.

The trick in hacking pg_dump is to understand which layer you need to modify. The whole thing seems overly complex to me :-( ... but redesigning it is a project for another release cycle.

			regards, tom lane

(BTW: Alvaro, I've been having direct mail to you bounce lately --- let's see if this goes through ...)
Re: [PATCHES] Dump comments on large objects in text mode
> The trick in hacking pg_dump is to understand which layer you need to modify. The whole thing seems overly complex to me :-( ... but redesigning it is a project for another release cycle.

I just find the whole BLOB handling very tricky to understand :(

I vote that we combine pg_dumpall and pg_dump into a single app in the future... It should be possible to make them both work backward-compatibly, and it means we can possibly gain custom-format full cluster dumps.

Chris
Re: [PATCHES] Dump comments on large objects in text mode
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
> we can possibly gain custom-format full cluster dumps.

Do we care? The main rationale for inventing a custom format, as I recall it, was (a) plain text couldn't hack blobs, and (b) you needed the ability to rearrange the dump order to work around pg_dump's lack of understanding of dependencies. (a) is a dead letter now, and (b) is much less compelling than it once was. The custom format is still interesting in that it provides the option to do selective reload --- but if that's what you want, dumping a database at a time doesn't seem like a terrible restriction.

			regards, tom lane
Re: [PATCHES] Autovacuum integration patch
Alvaro Herrera wrote:
> There are several things that are painfully evident with this thing on:
>
> - TRUNCATE does not update stats. It should send a stat message to which we can react.

How important is this really? The stats from before the truncate might be OK, especially since they might represent how the table will look in the future. Also, there isn't any free space in a table that was just truncated, so there is no need to run vacuum to update the FSM.

> - If you empty a whole table using DELETE just after an automatically-issued VACUUM takes place, the new threshold may not be enough to trigger a new VACUUM. Thus you end up with a bloated table, and it won't get vacuumed until it grows again. This may be a problem with the cost equations, but those are AFAICT identical to those of pg_autovacuum, so we may need to rethink the equations.

I'm very open to a better equation if someone has one, but I'm not sure what the problem is. If there are 10,000 rows in a table and an autovacuum takes place, you will have a threshold of 5,000 (assuming you are using the default threshold parameters: base = 1000, scaling factor = 0.4). So now when all the rows are deleted, that will be enough activity to cross the threshold and cause another vacuum. I guess the problem is if the table is smaller, say 1,000 rows: now after a vacuum the threshold will be 1,400, and deleting all the rows will not cause a vacuum. But that is OK, because a 1,000-row table is probably not very big. The purpose of the base threshold value is so that vacuum commands don't get run continually on really small tables that are updated a lot; it's OK to have some slack space. If the default is deemed too high, we can always lower it.

> - The default value of "on" for reset stats on server start is going to be painful with autovacuum, because it reacts badly to losing the info.

I agree, this is an issue. Is there any reason not to change stats_reset_on_restart to default to true?

> - We should make VACUUM and ANALYZE update the pg_autovacuum relation, in order to make the autovacuum daemon behave sanely with manually issued VACUUM/ANALYZE.

Agree completely. This way autovacuum can work in harmony with manually issued or cron-issued vacuum commands.

> - Having an autovacuum process running on a database can be surprising if you want to drop the database, or create a new one using it as a template. This happened to me several times.

Not sure what to do about this. We could reduce the number of times autovacuum actually connects to a database by checking the stats flat file before we connect. If there hasn't been any activity since the last time we connected, then don't connect again. Better ideas, anyone?

> - The shutdown sequence is not debugged nor very well tested. It may be all wrong.

OK, I'm testing it now; I'll let you know if I see anything funny.

> - The startup sequence is a mixture of pgarch, normal backend and pgstat. I find it relatively clean, but I can't swear it's bug-free.

Same as above.

> - There are no docs.

I can help here, as long as I don't have to have the docs done before July 1.

> - There are no ALTER TABLE commands to change the pg_autovacuum attributes for a table (enable/disable, set thresholds and scaling factor).

I don't think we need this, do we? Mucking around in the autovacuum table shouldn't cause the system any serious problems; if you do mess up your values, it's easy to just reset them all to 0 and start back with the defaults.

> - I compiled with -DEXEC_BACKEND, but I didn't look to see if it actually worked in that case.
>
> Apart from all these issues, it is completely functional :-) It can survive several "make installcheck" runs without problem, and the regression database is vacuumed/analyzed as it runs.

Cool.

> Some of these issues are trivial to handle. However, I'd like to release this right now, so I can go back to shared dependencies now that role support is in. Barring any objections I think this should be integrated, so these issues can be tackled by interested parties.

Couple of other thoughts:

Do the vacuum commands respect the GUC vacuum delay settings? Should we be able to set per-table vacuum delay settings?

This patch doesn't have the maintenance window that was discussed a while ago. Can that be added after July 1?

Thanks Alvaro for doing the integration work.

Matthew O'Connor
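The threshold arithmetic discussed above can be sketched directly. The equation form (base + scaling factor * reltuples) and the defaults (1000 and 0.4) are the ones quoted in the discussion; `vacuum_threshold` is an illustrative name, not the actual daemon code:

```c
#include <assert.h>

/* Sketch of the autovacuum trigger equation discussed in this thread:
 * a table becomes a vacuum candidate once the obsolete-tuple count
 * exceeds base + scale * reltuples.  Defaults per the discussion:
 * base = 1000, scale = 0.4. */
static double
vacuum_threshold(double base, double scale, double reltuples)
{
	return base + scale * reltuples;
}
```

So a 10,000-row table gets a threshold of 1000 + 0.4 * 10000 = 5000, which a full-table DELETE crosses, while a 1,000-row table gets 1400 — more rows than the table holds, which is exactly the small-table case described above.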
Re: [PATCHES] Dump comments on large objects in text mode
On Wed, Jun 29, 2005 at 11:44:58PM -0400, Tom Lane wrote:
> (BTW: Alvaro, I've been having direct mail to you bounce lately --- let's see if this goes through ...)

Yeah, sorry about that :-( I realized I must change mail providers ASAP when I noticed it's been eating my own emails (e.g. the autovacuum patch). Still not clear what I should switch to :-(

-- Alvaro Herrera (alvherre[a]surnet.cl)
"Always assume the user will do much worse than the stupidest thing you can imagine." (Julien PUYDT)
Re: [PATCHES] Autovacuum integration patch
Alvaro Herrera wrote:
> Here is a first cut at autovacuum integration. Please have a look at it. Note that this patch automatically creates three new files:

Couple more things that I didn't think about while we were talking about this the other day.

XID wraparound: The patch as submitted doesn't handle XID wraparound issues. The old contrib autovacuum would do an XID wraparound check as its first operation upon connecting to a database. If XID wraparound looked like it was going to be a problem soon, then the whole database would be vacuumed, eliminating the need to check specific tables.

Better logging of autovacuum activity: I think we could use some more detail in the debug elog statements, for example showing exactly what autovacuum believes the threshold and current count to be.

How to deal with shared relations: As an optimization, the contrib version of autovacuum treated shared relations differently than the rest when connected to any database other than template1. That is, when connected to a DB other than template1, autovacuum would not issue vacuum commands on them; rather, it would only issue analyze commands. When autovacuum got around to connecting to template1, it would then issue the vacuum command. The hope was that this would keep a shared relation from getting vacuumed n times (where n is the number of databases in a cluster) whenever it crossed its threshold. I'm not sure if this optimization is really important, or even exactly correct.