Re: [PATCHES] Relation forks & FSM rewrite patches
Heikki Linnakangas wrote:
> Here's an updated version of the "relation forks" patch, and an incremental FSM rewrite patch on top of that. The relation forks patch is ready for review. The FSM implementation is still more of a work in progress, but I'd like to get some review on that as well, with the goal of doing more performance testing and committing it after the commit fest.
>
> The one part of the relation forks patch that I'm not totally satisfied with is the smgrcreate() function. The problem is: which piece of code decides which forks to create for a relation, and when to create them? I settled on the concept that all forks a relation will need are created at once, in one smgrcreate() call. There's no function to create additional forks later on. Likewise, there's no function to unlink individual forks; you have to unlink the whole relation.
>
> Currently, heap_create() decides which forks to create. That's fine at the moment, but will become problematic in the future, as it's indexam-specific which forks an index requires. That decision should really be made in the indexam. One possibility would be to create only the main fork in heap_create(), and let the indexam create any additional forks it needs in its ambuild function.

I've had a bit of a play with this; it looks pretty nice. One point that comes to mind is that we are increasing the number of required file descriptors by a factor of 2 (and possibly 3 if we use a fork for the visibility map). Is this a problem, do you think?

Cheers

Mark

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches
Re: [PATCHES] Removal of the patches email list
Bruce Momjian wrote:
> We have come to agreement that there is no longer a need for a separate 'patches' email list --- the size of patches isn't a significant issue anymore, and tracking threads between the patches and hackers lists is confusing.
>
> I propose we close the patches list and tell everyone to start using only the hackers list. This will require email server changes and web site updates, and some people who are only subscribed to patches will have to figure out if they want to subscribe to hackers.
>
> I have CC'ed hackers, patches, and www because this does affect all those lists.

I think this is a good idea, and was expecting it to have happened already. Is there any timeline or consensus that this is going to happen?

Regards

Russell Smith
Re: [PATCHES] Exposing keywords to clients
Nikhils <[EMAIL PROTECTED]> writes:
>>> Attached is an updated patch, giving the following output.

Applied with minor revisions.

> Here are some comments from me:
> a) Changed "localised" to "localized" to be consistent with the references elsewhere in the same file.

What I didn't like about the docs was the classification of the function as a "session information function". There's certainly nothing session-dependent about it. I ended up putting it under "System Catalog Information Functions", which isn't really quite right either, but it's closer than the other one.

> b) I wonder if we need the default case in the switch statement at all, since we are scanning the statically populated ScanKeywords array with proper category values for each entry.

I left it in for safety, but made it just return nulls --- there's no point in wasting more code space on it than necessary, and certainly no point in asking translators to translate a useless string.

> c) There was a warning during compilation since we were assigning a const pointer to a char pointer:
> values[0] = ScanKeywords[funcctx->call_cntr].name;

Yeah, I couldn't see a good way around that either --- we could pstrdup, but that seems overkill, and adjusting the API of BuildTupleFromCStrings to take const char ** doesn't look appetizing either.

> d) oid 2700 has been claimed by another function in the meanwhile. Modified it to 2701.
> DATA(insert OID = 2701 ( pg_get_keywords PGNSP PGUID 12 10 400 f f t t s 0 2249

2701 was claimed too. You need to use the unused_oids script to find unique free OIDs.

regards, tom lane
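Once applied, the function is an ordinary set-returning function, so a quick sanity check looks something like this (a sketch; the column names follow the patch discussion, and the exact rows returned depend on the server version):

```sql
-- List a couple of keywords with their category codes.
SELECT word, catcode
FROM pg_get_keywords()
WHERE word IN ('select', 'between');
```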
[PATCHES] Re: [HACKERS] Solaris ident authentication using unix domain sockets
On Thu, Jul 03, 2008 at 02:01:22PM -0400, Tom Lane wrote:
> Garick Hamlin <[EMAIL PROTECTED]> writes:
> > I have a patch that I have been using to support postgresql's notion of ident authentication when using unix domain sockets on Solaris. This patch basically just adds support for using getpeerucred() on Solaris so unix sockets and ident auth work just like they do on Linux and elsewhere.
>
> Cool.
>
> > + #if defined(HAVE_GETPEERUCRED)
> > + #include <ucred.h>
> > + #endif
>
> But this is not cool. There might be systems out there that have getpeerucred() but not <ucred.h>, and this coding would cause a compile failure (even if they actually wouldn't be trying to use getpeerucred() because they have some other way to do it). You need an explicit configure probe for the header file too, I think.

Ok, I can fix that.

> Also, what is the rationale for putting this before the HAVE_STRUCT_CMSGCRED case instead of after? Again, that seems like it could cause unexpected behavioral changes on platforms that work fine now (consider the possibility that getpeerucred is there but broken).

Good point, it should be the other way.

Thanks,

Garick
Re: [PATCHES] [HACKERS] Solaris ident authentication using unix domain sockets
Garick Hamlin <[EMAIL PROTECTED]> writes:
> I have a patch that I have been using to support postgresql's notion of ident authentication when using unix domain sockets on Solaris. This patch basically just adds support for using getpeerucred() on Solaris so unix sockets and ident auth work just like they do on Linux and elsewhere.

Cool.

> + #if defined(HAVE_GETPEERUCRED)
> + #include <ucred.h>
> + #endif

But this is not cool. There might be systems out there that have getpeerucred() but not <ucred.h>, and this coding would cause a compile failure (even if they actually wouldn't be trying to use getpeerucred() because they have some other way to do it). You need an explicit configure probe for the header file too, I think.

Also, what is the rationale for putting this before the HAVE_STRUCT_CMSGCRED case instead of after? Again, that seems like it could cause unexpected behavioral changes on platforms that work fine now (consider the possibility that getpeerucred is there but broken).

regards, tom lane
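The explicit probe Tom is asking for would look roughly like this in configure.in (a sketch using stock Autoconf macros; the exact placement, and whether to fold getpeerucred into the existing AC_CHECK_FUNCS list, is the patch author's call):

```
# Check for the function and the header separately, so a platform that has
# getpeerucred() but no <ucred.h> still configures and compiles cleanly.
AC_CHECK_FUNCS([getpeerucred])
AC_CHECK_HEADERS([ucred.h])
```

The include in hba.c would then be guarded on both generated symbols, e.g. `#if defined(HAVE_GETPEERUCRED) && defined(HAVE_UCRED_H)`.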
[PATCHES] Solaris ident authentication using unix domain sockets
Hi,

I have a patch that I have been using to support postgresql's notion of ident authentication when using unix domain sockets on Solaris. This patch basically just adds support for using getpeerucred() on Solaris so unix sockets and ident auth work just like they do on Linux and elsewhere.

This was my first attempt at wrestling with automake. I've tested that it builds properly after it is applied and autoreconf is run, on RHEL4/Linux/x86. I am using the patch currently on Solaris 10 / x86.

Garick

diff -cr postgresql_CVS/configure.in postgresql/configure.in
*** postgresql_CVS/configure.in	Tue Jun 24 15:52:30 2008
--- postgresql/configure.in	Tue Jun 24 15:57:22 2008
***************
*** 1095,1101 ****
  AC_FUNC_ACCEPT_ARGTYPES
  PGAC_FUNC_GETTIMEOFDAY_1ARG
  
! AC_CHECK_FUNCS([cbrt dlopen fcvt fdatasync getpeereid getrlimit memmove poll pstat readlink setproctitle setsid sigprocmask symlink sysconf towlower utime utimes waitpid wcstombs])
  
  AC_CHECK_DECLS(fdatasync, [], [], [#include])
  AC_CHECK_DECLS(posix_fadvise, [], [], [#include])
--- 1095,1101 ----
  AC_FUNC_ACCEPT_ARGTYPES
  PGAC_FUNC_GETTIMEOFDAY_1ARG
  
! AC_CHECK_FUNCS([getpeerucred cbrt dlopen fcvt fdatasync getpeereid getrlimit memmove poll pstat readlink setproctitle setsid sigprocmask symlink sysconf towlower utime utimes waitpid wcstombs])
  
  AC_CHECK_DECLS(fdatasync, [], [], [#include])
  AC_CHECK_DECLS(posix_fadvise, [], [], [#include])

diff -cr postgresql_CVS/src/backend/libpq/hba.c postgresql/src/backend/libpq/hba.c
*** postgresql_CVS/src/backend/libpq/hba.c	Tue Jun 24 15:52:32 2008
--- postgresql/src/backend/libpq/hba.c	Tue Jun 24 15:53:00 2008
***************
*** 25,30 ****
--- 25,33 ----
  #include
  #include
  #endif
+ #if defined(HAVE_GETPEERUCRED)
+ #include <ucred.h>
+ #endif
  #include
  #include
  #include
***************
*** 1500,1505 ****
--- 1503,1539 ----
  	strlcpy(ident_user, pass->pw_name, IDENT_USERNAME_MAX + 1);
  	return true;
+ #elif defined(HAVE_GETPEERUCRED)	/* Solaris > 10 */
+ 	uid_t		uid;
+ 	gid_t		gid;
+ 	struct passwd *pass;
+ 	int			ucred_ok = 1;
+ 	ucred_t    *ucred = NULL;
+ 
+ 	if (getpeerucred(sock, &ucred) == -1)
+ 		ucred_ok = 0;
+ 	if (ucred_ok && (uid = ucred_geteuid(ucred)) == -1)
+ 		ucred_ok = 0;
+ 	if (ucred_ok && (gid = ucred_getrgid(ucred)) == -1)
+ 		ucred_ok = 0;
+ 	if (ucred)
+ 		ucred_free(ucred);
+ 	if (!ucred_ok)
+ 	{
+ 		/* We didn't get a valid credentials struct. */
+ 		ereport(LOG,
+ 				(errmsg("could not get peer credentials: %s",
+ 						strerror(errno))));
+ 		return false;
+ 	}
+ 	pass = getpwuid(uid);
+ 	if (pass == NULL)
+ 	{
+ 		ereport(LOG,
+ 				(errmsg("local user with ID %d does not exist",
+ 						(int) uid)));
+ 		return false;
+ 	}
+ 	strlcpy(ident_user, pass->pw_name, IDENT_USERNAME_MAX + 1);
+ 	return true;
  #elif defined(HAVE_STRUCT_CMSGCRED) || defined(HAVE_STRUCT_FCRED) || (defined(HAVE_STRUCT_SOCKCRED) && defined(LOCAL_CREDS))
  	struct msghdr msg;
Re: [PATCHES] pg_dump lock timeout
daveg <[EMAIL PROTECTED]> writes:
> On Thu, Jul 03, 2008 at 11:15:10AM +0300, Marko Kreen wrote:
>> - The statement_timeout is set back with "statement_timeout = default". Maybe it would be better to do "= 0" here? Although such a decision would go outside the scope of the patch, I see no sense in having any other statement_timeout for the actual dumping.

> I'd prefer to leave whatever policy is otherwise in place alone.

The policy in place in CVS HEAD is that pg_dump explicitly sets statement_timeout to 0. Setting it to default would break that, and will certainly not be accepted.

regards, tom lane
Re: [PATCHES] page macros cleanup
Just one quick note:

Zdenek Kotala wrote:
> *** pgsql.orig.da8c485e0e2a/src/backend/access/gist/gistutil.c
> --- pgsql.orig/src/backend/access/gist/gistutil.c
> ***************
> *** 592,598 ****
>   	/*
>   	 * Additionally check that the special area looks sane.
>   	 */
> ! 	if (((PageHeader) (page))->pd_special != (BLCKSZ - MAXALIGN(sizeof(GISTPageOpaqueData))))
>   		ereport(ERROR,
>   				(errcode(ERRCODE_INDEX_CORRUPTED),
> --- 592,598 ----
>   	/*
>   	 * Additionally check that the special area looks sane.
>   	 */
> ! 	if (PageGetSpecialPointer(page) - page != (BLCKSZ - MAXALIGN(sizeof(GISTPageOpaqueData))))
>   		ereport(ERROR,
>   				(errcode(ERRCODE_INDEX_CORRUPTED),

Should probably use PageGetSpecialSize here. Much simpler, and doesn't assume that the special area is always at the end of the page (not that I see us changing that anytime soon).

-- 
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Re: [PATCHES] EXPLAIN progress info
I like this idea in general. I'm envisioning a cool explain display in pgAdmin that's updated live as the query is executed, showing how many tuples a seq scan at the bottom, and an aggregate above it, have processed. Drool.

Currently the interface is that you open a new connection and signal the backend using the same mechanism as a query cancel. That's fine for the interactive psql use case, but seems really heavy-weight for the pgAdmin UI I'm envisioning: you'd have to poll the server, opening a new connection each time. Any ideas on that? How about a GUC to send the information automatically every n seconds, for example?

Other than that, this one seems to be the most serious issue:

Tom Lane wrote:
> Gregory Stark <[EMAIL PROTECTED]> writes:
>> There are downsides:
>
> Insurmountable ones at that. This one already makes it a non-starter:
>
>> a) the overhead of counting rows and loops is there for every query execution, even if you don't do explain analyze.

I wouldn't be surprised if the overhead of the counters turns out to be a non-issue, but we'd have to see some testing of that. InstrEndLoop is the function we're worried about, right?

-- 
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Re: [PATCHES] pg_dump lock timeout
On Thu, Jul 03, 2008 at 11:15:10AM +0300, Marko Kreen wrote:
> On 5/11/08, daveg <[EMAIL PROTECTED]> wrote:
>> Attached is a patch to add a commandline option to pg_dump to limit how long pg_dump will wait for locks during startup.
>
> My quick review:
>
> - It does not seem important enough to waste a short option on. Having only a long option should be enough.

Agreed. I'll change it.

> - It would be more polite to do SET LOCAL instead of SET. (Eg. it makes it safer to use pg_dump through a pooler.)

Also agreed. Thanks.

> - The statement_timeout is set back with "statement_timeout = default". Maybe it would be better to do "= 0" here? Although such a decision would go outside the scope of the patch, I see no sense in having any other statement_timeout for the actual dumping.

I'd prefer to leave whatever policy is otherwise in place alone. I can see use cases for either having or not having a timeout for pg_dump, but it does seem outside the scope of this patch.

Thanks for your review and comments.

-dg

--
David Gould  [EMAIL PROTECTED]  510 536 1443  510 282 0869
If simplicity worked, the world would be overrun with insects.
Re: [PATCHES] page macros cleanup
On Fri, Jun 13, 2008 at 9:38 PM, Zdenek Kotala <[EMAIL PROTECTED]> wrote:
> I attached a code cleanup which is related to in-place upgrade. I replaced direct access to the PageHeader structure with already existing macros, and I also removed some unnecessary retyping.

A quick review comment: one thing I noticed is that the modified definition of HashMaxItemSize now does not account for the size of ItemId, which may not be the right thing. Please recheck that.

Apart from that, the patch looks good to me. As you said, we should probably replace the other direct usages of PageHeader with appropriate macros.

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com
Re: [PATCHES] Regression test database name
Hi,

Peter Eisentraut wrote:
> For some experiments I wanted to run the regression tests using a different database (possibly using pg_regress --dbname=), but the name "regression" is hardcoded in a few places. It's trivial to fix, see attached patch. Quick explanation: The fact that psql's \z prints the database name has always been an inconsistency, so it's good to get rid of anyway. The majority of the diff in prepare.out is whitespace differences. Objections?

Looks good from here; it passed the regression tests on my (pretty standard Debian unstable) box. However, there were some issues applying the patch, probably related to git's diff format (patch version 2.5.9):

patch: malformed patch at line 255: diff -ur ../cvs-pgsql/src/test/regress/sql/prepare.sql ./src/test/regress/sql/prepare.sql

For those who care, I've attached the same patch in context diff format and modulo whitespace. :-)

+1 for the change per se.

Regards

Markus

*** doc/src/sgml/ref/grant.sgml	old
--- doc/src/sgml/ref/grant.sgml	new
***************
*** 419,425 ****
  to obtain information about existing privileges, for example:
  => \z mytable
! Access privileges for database "lusitania"
   Schema |  Name   | Type  |  Access privileges
  --------+---------+-------+----------------------
   public | mytable | table | miriam=arwdxt/miriam
--- 419,425 ----
  to obtain information about existing privileges, for example:
  => \z mytable
! Access privileges
   Schema |  Name   | Type  |  Access privileges
  --------+---------+-------+----------------------
   public | mytable | table | miriam=arwdxt/miriam

*** src/bin/psql/describe.c	old
--- src/bin/psql/describe.c	new
***************
*** 544,551 ****
  	}
  
  	myopt.nullPrint = NULL;
! 	printfPQExpBuffer(&buf, _("Access privileges for database \"%s\""),
! 					  PQdb(pset.db));
  	myopt.title = buf.data;
  	myopt.trans_headers = true;
  	myopt.trans_columns = trans_columns;
--- 544,550 ----
  	}
  
  	myopt.nullPrint = NULL;
! 	printfPQExpBuffer(&buf, _("Access privileges"));
  	myopt.title = buf.data;
  	myopt.trans_headers = true;
  	myopt.trans_columns = trans_columns;

*** src/test/regress/expected/dependency.out	old
--- src/test/regress/expected/dependency.out	new
***************
*** 68,74 ****
  GRANT ALL ON deptest1 TO regression_user2;
  RESET SESSION AUTHORIZATION;
  \z deptest1
! Access privileges for database "regression"
   Schema |   Name   | Type  |            Access privileges
  --------+----------+-------+------------------------------------------
   public | deptest1 | table | regression_user0=arwdxt/regression_user0
--- 68,74 ----
  GRANT ALL ON deptest1 TO regression_user2;
  RESET SESSION AUTHORIZATION;
  \z deptest1
! Access privileges
   Schema |   Name   | Type  |            Access privileges
  --------+----------+-------+------------------------------------------
   public | deptest1 | table | regression_user0=arwdxt/regression_user0
***************
*** 79,85 ****
  DROP OWNED BY regression_user1;
  -- all grants revoked
  \z deptest1
! Access privileges for database "regression"
   Schema |   Name   | Type  |            Access privileges
  --------+----------+-------+------------------------------------------
   public | deptest1 | table | regression_user0=arwdxt/regression_user0
--- 79,85 ----
  DROP OWNED BY regression_user1;
  -- all grants revoked
  \z deptest1
! Access privileges
   Schema |   Name   | Type  |            Access privileges
  --------+----------+-------+------------------------------------------
   public | deptest1 | table | regression_user0=arwdxt/regression_user0

*** src/test/regress/expected/prepare.out	old
--- src/test/regress/expected/prepare.out	new
***************
*** 58,67 ****
  PREPARE q2(text) AS
  	SELECT datname, datistemplate, datallowconn
  	FROM pg_database WHERE datname = $1;
! EXECUTE q2('regression');
    datname   | datistemplate | datallowconn
! ------------+---------------+--------------
!  regression | f             | t
  (1 row)
  
  PREPARE q3(text, int, float, boolean, oid, smallint) AS
--- 58,67 ----
  PREPARE q2(text) AS
  	SELECT datname, datistemplate, datallowconn
  	FROM pg_database WHERE datname = $1;
! EXECUTE q2('postgres');
   datname  | datistemplate | datallowconn
! ----------+---------------+--------------
!  postgres | f             | t
  (1 row)
  
  PREPARE q3(text, int, float, boolean, oid, smallint) AS

*** src/test/regress/expected/privileges.out	old
--- src/test/regress/expected/privileges.out	new
***************
*** 582,588 ****
  (1 row)
  
  -- clean up
! \c regression
  DROP FUNCTION testfunc2(int);
  DROP FUNCTION testfunc4(boolean);
  DROP VIEW atestv1;
--- 582,588 ----
  (1 row)
  
  -- clean up
! \c
  DROP FUNCTION testfunc2(int);
  DROP FUNCTION testfunc4(boolean);
Re: [PATCHES] pg_dump lock timeout
On 5/11/08, daveg <[EMAIL PROTECTED]> wrote:
> Attached is a patch to add a commandline option to pg_dump to limit how long pg_dump will wait for locks during startup.

My quick review:

- It does not seem important enough to waste a short option on. Having only a long option should be enough.

- It would be more polite to do SET LOCAL instead of SET. (Eg. it makes it safer to use pg_dump through a pooler.)

- The statement_timeout is set back with "statement_timeout = default". Maybe it would be better to do "= 0" here? Although such a decision would go outside the scope of the patch, I see no sense in having any other statement_timeout for the actual dumping.

-- 
marko
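For reference, the SET vs. SET LOCAL distinction being raised, as a sketch (the table name is made up, and pg_dump's real startup sequence differs):

```sql
BEGIN;
-- SET LOCAL reverts automatically at COMMIT/ROLLBACK, so a pooled
-- connection is not left carrying a stray timeout the way plain SET would.
SET LOCAL statement_timeout = '3s';
LOCK TABLE mytable IN ACCESS SHARE MODE;   -- fails fast if the lock is held
SET LOCAL statement_timeout = 0;           -- no timeout for the dump itself
COMMIT;
```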