Re: [PATCHES] psql tab completion enhancements
Neil, thanks for your review. I accepted what you wrote for all items I don't mention in this reply.

On Sat, Jan 07, 2006 at 10:59:27PM -0500, Neil Conway wrote:
> + #define COMPLETE_WITH_MALLOCED_LIST(list) \
> + do { COMPLETE_WITH_LIST((const char **) list); free(list); list = (char **) 0; } while (0)
>
> NULL is better style than 0 in a pointer context. Also, why are the
> casts necessary?

The cast (const char **) list is necessary because completion_charpp is declared to be const. Removing the const means that you have to remove it in a lot of other places as well; do you think that would be cleaner?

> ! /* if we have only seen LOCK but not LOCK TABLE so far, offer
> !  * the TABLE keyword as well */
>
> Is this actually useful? The TABLE keyword is just noise.

Yeah, but it is valid SQL, so why not support it? I got used to using the long forms sometimes, and it confuses me when psql's completion doesn't support them, for example in:

    template1=# alter TABLE test ALTER column <TAB>
    DROP DEFAULT    SET DEFAULT     SET STATISTICS  TYPE
    DROP NOT NULL   SET NOT NULL    SET STORAGE

> + static char **
> + get_query_list(const char *text,
> +                const char *query,
> +                const char *completion_info)
> + {
> +     return _get_query_list(0, text, query, completion_info);
> + }
>
> The difference between get_query_list() and _get_query_list() is not
> reflected in the names of the methods. An _internal suffix would be
> better, for example.

There are

    static char *complete_from_query(const char *text, int state);
    static char *complete_from_schema_query(const char *text, int state);
    static char *_complete_from_query(int is_schema_query, const char *text, int state);

already, so I made analogous versions for

    static char **get_query_list(const char *text, const char *query, const char *completion_info);
    static char **get_schema_query_list(const char *text, const SchemaQuery *squery, const char *completion_info);
    static char **_get_query_list(int is_schema_query, const char *text, const char *query, const char *completion_info);

> bool for boolean variables rather than int, and consistent parameter
> declarations (char * not char*).

See above; I used the int is_schema_query because of _complete_from_query(). I could change both to bool in a new patch if you want.

> + static char **
> + list_add_item(char **list, char *item)
> + {
> +     int size = 0;
> +     while (list[size])
> +         size++;
> +     list = realloc(list, sizeof(char *) * (size + 1 + 1));
> +     list[size] = item;
> +     list[size + 1] = NULL;
> +     return list;
> + }
>
> That's a terribly inefficient implementation.

It's not terribly inefficient, it just scales terribly. However, psql limits its queries to return 1000 items at most. At this size, even a PII 300 adds 1000 items in:

    :/tmp$ time ./test
    real    0m0.007s
    user    0m0.000s
    sys     0m0.000s

Program startup overhead for this example, doing the loop without any work, already takes 0m0.004s. If the consensus is that it has to be better, I'll supply another version, but I thought that it's not necessary.

Joachim

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
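[Editor's note: the grow-by-one realloc() pattern Neil objects to is O(n²) overall; a capacity-doubling variant gives amortized O(1) appends. The sketch below is illustrative only, not the patch's actual API -- the extra nitems/cap bookkeeping parameters and the `list_add_item2` name are assumptions made for the example.]

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical variant of list_add_item() that doubles the allocation
 * instead of calling realloc() for every element.  Keeps the list
 * NULL-terminated like the original; the caller owns the nitems/cap
 * counters.
 */
static char **
list_add_item2(char **list, size_t *nitems, size_t *cap, char *item)
{
	if (*nitems + 1 >= *cap)	/* +1 for the NULL terminator */
	{
		*cap = (*cap == 0) ? 8 : *cap * 2;
		list = realloc(list, *cap * sizeof(char *));
		if (list == NULL)
			abort();			/* out of memory */
	}
	list[(*nitems)++] = item;
	list[*nitems] = NULL;		/* keep the list NULL-terminated */
	return list;
}
```

This does a logarithmic number of realloc() calls for n appends instead of n of them, which is why the quadratic cost disappears -- though as Joachim measures, at psql's 1000-item cap either version is fast enough.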
[PATCHES] Change BgWriterCommLock to spinlock
The following patch changes BgWriterCommLock to a spinlock. To conform to the spinlock coding rules (though not strictly needed, since we are in a critical section), it rewrites the requests array into a circular one, which reduces the lock hold time when the bgwriter absorbs the requests. A byproduct benefit is that we can avoid the critical sections in AbsorbFsyncRequests(), since we only remove requests once they are done. The concurrency control of the circular array relies on the single-reader fact, but I don't see any future need to change that.

Regards,
Qingqing

---
Index: backend/postmaster/bgwriter.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/postmaster/bgwriter.c,v
retrieving revision 1.22
diff -c -r1.22 bgwriter.c
*** backend/postmaster/bgwriter.c	8 Dec 2005 19:19:22 -	1.22
--- backend/postmaster/bgwriter.c	8 Jan 2006 03:36:51 -
***************
*** 56,61 ****
--- 56,62 ----
  #include "storage/ipc.h"
  #include "storage/pmsignal.h"
  #include "storage/smgr.h"
+ #include "storage/spin.h"
  #include "tcop/tcopprot.h"
  #include "utils/guc.h"
  #include "utils/memutils.h"
***************
*** 94,101 ****
   * reasonable, it should set this field TRUE just before sending the signal.
   *
   * The requests array holds fsync requests sent by backends and not yet
!  * absorbed by the bgwriter.  Unlike the checkpoint fields, the requests
!  * fields are protected by BgWriterCommLock.
   *--------
   */
  typedef struct
--- 95,101 ----
   * reasonable, it should set this field TRUE just before sending the signal.
   *
   * The requests array holds fsync requests sent by backends and not yet
!  * absorbed by the bgwriter.
   *--------
   */
  typedef struct
***************
*** 115,126 ****
  	sig_atomic_t ckpt_time_warn;	/* warn if too soon since last ckpt? */

! 	int			num_requests;	/* current # of requests */
! 	int			max_requests;	/* allocated array size */
  	BgWriterRequest requests[1];	/* VARIABLE LENGTH ARRAY */
  } BgWriterShmemStruct;

! static BgWriterShmemStruct *BgWriterShmem;

  /*
   * GUC parameters
--- 115,131 ----
  	sig_atomic_t ckpt_time_warn;	/* warn if too soon since last ckpt? */

! 	/* The following implements a circular array */
! 	slock_t		req_lock;		/* protects the array */
! 	int			req_oldest;		/* start of requests */
! 	int			req_newest;		/* end of requests */
! 	int			num_requests;	/* current number of requests */
! 	int			max_requests;	/* allocated array size */
  	BgWriterRequest requests[1];	/* VARIABLE LENGTH ARRAY */
  } BgWriterShmemStruct;

! /* use volatile pointer to prevent code rearrangement */
! static volatile BgWriterShmemStruct *BgWriterShmem;

  /*
   * GUC parameters
***************
*** 528,535 ****
--- 533,542 ----
  	if (found)
  		return;					/* already initialized */

+ 	/* Init the circular array */
  	MemSet(BgWriterShmem, 0, sizeof(BgWriterShmemStruct));
  	BgWriterShmem->max_requests = NBuffers;
+ 	SpinLockInit(&BgWriterShmem->req_lock);
  }

  /*
***************
*** 638,660 ****
  bool
  ForwardFsyncRequest(RelFileNode rnode, BlockNumber segno)
  {
! 	BgWriterRequest *request;

  	if (!IsUnderPostmaster)
  		return false;			/* probably shouldn't even get here */
  	Assert(BgWriterShmem != NULL);

! 	LWLockAcquire(BgWriterCommLock, LW_EXCLUSIVE);
  	if (BgWriterShmem->bgwriter_pid == 0 ||
  		BgWriterShmem->num_requests >= BgWriterShmem->max_requests)
  	{
! 		LWLockRelease(BgWriterCommLock);
  		return false;
  	}
! 	request = &BgWriterShmem->requests[BgWriterShmem->num_requests++];
  	request->rnode = rnode;
  	request->segno = segno;
! 	LWLockRelease(BgWriterCommLock);
  	return true;
  }
--- 645,673 ----
  bool
  ForwardFsyncRequest(RelFileNode rnode, BlockNumber segno)
  {
! 	volatile BgWriterRequest *request;

  	if (!IsUnderPostmaster)
  		return false;			/* probably shouldn't even get here */
  	Assert(BgWriterShmem != NULL);

! 	SpinLockAcquire(&BgWriterShmem->req_lock);
  	if (BgWriterShmem->bgwriter_pid == 0 ||
  		BgWriterShmem->num_requests >= BgWriterShmem->max_requests)
  	{
! 		SpinLockRelease(&BgWriterShmem->req_lock);
  		return false;
  	}
! 	request = &BgWriterShmem->requests[BgWriterShmem->req_newest++];
! 	BgWriterShmem->num_requests++;
  	request->rnode = rnode;
  	request->segno = segno;
!
! 	/* handle wrap around */
! 	if (BgWriterShmem->req_newest >= BgWriterShmem->max_requests)
!
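[Editor's note: the wrap-around arithmetic the patch introduces (req_oldest / req_newest / num_requests) can be modeled in isolation. Below is a minimal single-threaded sketch with the locking elided -- in the patch the writer side runs under SpinLockAcquire(&BgWriterShmem->req_lock), and the sole reader is the bgwriter. The fixed-size ReqQueue type and function names are assumptions made for the example.]

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_REQUESTS 4			/* stands in for max_requests (NBuffers) */

typedef struct
{
	int		req_oldest;			/* next slot the (single) reader consumes */
	int		req_newest;			/* next slot a writer fills */
	int		num_requests;		/* current queue depth */
	int		requests[MAX_REQUESTS];
} ReqQueue;

/* Writer side: mirrors ForwardFsyncRequest(), lock elided. */
static bool
queue_put(ReqQueue *q, int req)
{
	if (q->num_requests >= MAX_REQUESTS)
		return false;			/* queue full; caller falls back */
	q->requests[q->req_newest++] = req;
	q->num_requests++;
	if (q->req_newest >= MAX_REQUESTS)	/* handle wrap-around */
		q->req_newest = 0;
	return true;
}

/* Reader side: the bgwriter removes a request only after consuming it. */
static bool
queue_get(ReqQueue *q, int *req)
{
	if (q->num_requests == 0)
		return false;
	*req = q->requests[q->req_oldest++];
	q->num_requests--;
	if (q->req_oldest >= MAX_REQUESTS)	/* handle wrap-around */
		q->req_oldest = 0;
	return true;
}
```

Because there is exactly one reader, req_oldest is only ever advanced by that reader, which is the single-reader property the patch's correctness relies on.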
Re: [PATCHES] Change BgWriterCommLock to spinlock
Qingqing Zhou [EMAIL PROTECTED] writes:
> The following patch changes BgWriterCommLock to a spinlock.

Why is this a good idea?

			regards, tom lane
Re: [PATCHES] [HACKERS] Inconsistent syntax in GRANT
Tom, all,

> SELECT: currval
> UPDATE: nextval, setval
> USAGE: nextval, currval

+1.

--Josh
Re: [PATCHES] Change BgWriterCommLock to spinlock
On Sun, 8 Jan 2006, Tom Lane wrote:
> Why is this a good idea?

In the spirit of incremental improvement:

(1) The spinlock itself is lighter weight than the LWLock here, and we can reduce the lock contention a little bit in AbsorbFsyncRequests();

(2) We don't need the critical section in AbsorbFsyncRequests() any more.

Regards,
Qingqing
Re: [PATCHES] Change BgWriterCommLock to spinlock
Qingqing Zhou [EMAIL PROTECTED] writes:
> On Sun, 8 Jan 2006, Tom Lane wrote:
>> Why is this a good idea?
>
> In the spirit of incremental improvement:
> (1) The spinlock itself is lighter weight than the LWLock here, and we
> can reduce the lock contention a little bit in AbsorbFsyncRequests();

Spinlock-based coding is inherently much more fragile than LWLock-based coding. I'm against changing things in that direction unless a substantial performance improvement can be gained. You didn't offer any evidence of improvement at all.

> (2) We don't need the critical section in AbsorbFsyncRequests() any more.

Really? I think this coding still breaks, rather badly, if RememberFsyncRequest fails. Certainly the reasons for needing a critical section have nothing to do with what kind of lock is used.

			regards, tom lane
Re: [PATCHES] Change BgWriterCommLock to spinlock
On Sun, 8 Jan 2006, Tom Lane wrote:
>> (1) The spinlock itself is lighter weight than the LWLock here, and we
>> can reduce the lock contention a little bit in AbsorbFsyncRequests();
>
> Spinlock-based coding is inherently much more fragile than LWLock-based
> coding. I'm against changing things in that direction unless a
> substantial performance improvement can be gained. You didn't offer any
> evidence of improvement at all.

Yeah, there is only a theoretical, marginal performance improvement. Maybe you'd suggest we keep the LWLock but use the circular array part?

>> (2) We don't need the critical section in AbsorbFsyncRequests() any more.
>
> Really? I think this coding still breaks, rather badly, if
> RememberFsyncRequest fails. Certainly the reasons for needing a critical
> section have nothing to do with what kind of lock is used.

Yeah, it's not related to the lock. But I changed the algorithm to a circular array as well, and noticed that there is only one reader, so we can remove requests only after we are successfully done with them. In other words, if there is a problem, we never remove the unhandled requests.

Regards,
Qingqing
Re: [PATCHES] Change BgWriterCommLock to spinlock
Qingqing Zhou [EMAIL PROTECTED] writes:
> Yeah, there is only a theoretical, marginal performance improvement.
> Maybe you'd suggest we keep the LWLock but use the circular array part?

They're separable issues anyway.

> Yeah, it's not related to the lock. But I changed the algorithm to a
> circular array as well, and noticed that there is only one reader, so we
> can remove requests only after we are successfully done with them. In
> other words, if there is a problem, we never remove the unhandled
> requests.

I doubt that's more robust than before, though. If we did run out of memory in the hashtable, this coding would essentially block further operation of the request queue, because it would keep trying to re-insert all the entries in the queue, and keep failing.

If you want the bgwriter to keep working in the face of an out-of-memory condition in the hashtable, I think you'd have to change the coding so that it takes requests one at a time from the queue. Then, as individual requests get removed from the hashtable, individual new requests could get put in, even if the total queue is too large to fit all at once. However, this would result in much greater lock-acquisition overhead, so it doesn't really seem like a win. We haven't seen any evidence that the current coding has a problem under field conditions.

Another issue to keep in mind is that correct operation requires that the bgwriter not declare a checkpoint complete until it's completed every fsync request that was queued before the checkpoint started. So if the bgwriter is to try to keep going after failing to absorb all the pending requests, there would have to be some additional logic to keep track of whether it's OK to complete a checkpoint or not.

			regards, tom lane
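[Editor's note: the one-request-at-a-time alternative Tom describes can be modeled with a toy simulation. This is not the real bgwriter code -- remember() stands in for RememberFsyncRequest(), fail_at simulates an out-of-memory failure, and all locking is elided.]

```c
#include <assert.h>
#include <stdbool.h>

#define QLEN 8

/*
 * Toy model: absorb queued requests one at a time, removing each from
 * the queue only after remember() succeeds, so a failure leaves the
 * remaining requests pending instead of losing them.
 */
static int	queue[QLEN];
static int	q_head, q_count;

static int	fail_at = -1;		/* simulate OOM on the Nth call; -1 = never */
static int	ncalls;

/* Stand-in for RememberFsyncRequest(): returns false on simulated OOM. */
static bool
remember(int req)
{
	(void) req;
	return ++ncalls != fail_at;
}

static void
enqueue(int req)
{
	queue[(q_head + q_count) % QLEN] = req;
	q_count++;
}

/*
 * Returns the number of requests successfully absorbed.  In real code
 * each iteration would reacquire the queue lock, which is the extra
 * lock-acquisition overhead Tom points out.
 */
static int
absorb_one_at_a_time(void)
{
	int			absorbed = 0;

	while (q_count > 0)
	{
		if (!remember(queue[q_head]))
			break;				/* this and later requests stay queued */
		q_head = (q_head + 1) % QLEN;
		q_count--;
		absorbed++;
	}
	return absorbed;
}
```

The trade-off is visible here: a failure mid-absorption leaves the unprocessed tail intact for a retry, but as Tom notes, a checkpoint must not be declared complete while that tail still holds pre-checkpoint requests.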
Re: [PATCHES] Change BgWriterCommLock to spinlock
On Sun, 8 Jan 2006, Tom Lane wrote:
> If you want the bgwriter to keep working in the face of an out-of-memory
> condition in the hashtable, I think you'd have to change the coding so
> that it takes requests one at a time from the queue.

The patched version will issue an ERROR instead of a PANIC in this condition, so the bgwriter can still keep running. I don't quite understand what you mean by wanting the bgwriter to "keep working" -- do you mean not issuing an ERROR but retrying? An ERROR is not avoidable unless we change the out-of-memory handling logic inside the hashtable.

> Another issue to keep in mind is that correct operation requires that
> the bgwriter not declare a checkpoint complete until it's completed
> every fsync request that was queued before the checkpoint started.

As above, if the bgwriter fails to absorb, it will quit the job (and the checkpoint will not be finished). Do you think it makes sense to continue working on the patch, or should we let it be?

Regards,
Qingqing
Re: [PATCHES] Change BgWriterCommLock to spinlock
Qingqing Zhou [EMAIL PROTECTED] writes:
> Do you think it makes sense to continue working on the patch, or should
> we let it be?

I don't see any problem here that urgently needs solving. If we ever see any reports of out-of-memory failures in the bgwriter, then it'll be time to worry about this, but I think it quite unlikely that we ever will. (Even if we do, a simpler answer would be to increase the initial size of the pending-requests hashtable.)

			regards, tom lane
Re: [PATCHES] Change BgWriterCommLock to spinlock
On Sun, 8 Jan 2006, Tom Lane wrote:
> I don't see any problem here that urgently needs solving. If we ever see
> any reports of out-of-memory failures in the bgwriter, then it'll be
> time to worry about this, but I think it quite unlikely that we ever
> will. (Even if we do, a simpler answer would be to increase the initial
> size of the pending-requests hashtable.)

Agreed.

Regards,
Qingqing
[PATCHES] plpgsql: check domain constraints
Attached is a patch that makes the following changes:

(1) refactor execQual.c to expose a function for checking an instance of a domain value against a list of domain constraints

(2) adjust pl/pgsql to fetch the constraints associated with the function's return value. Because this is expensive, the constraints are fetched once per session, when the function is compiled. I then modified pl/pgsql to check any applicable constraints on the return value of a function before returning it to the backend.

(3) add some regression tests for #2

The patch does NOT add checking of domain constraints for other PLs, or for intermediate values in PL/PgSQL -- I plan to take a look at doing one or both of those soon.

I also made a few semi-related cleanups. In pl_exec.c, it seems to me that estate.eval_econtext MUST be non-NULL during the guts of both plpgsql_exec_trigger() and plpgsql_exec_function(). Therefore checking that estate.eval_econtext is non-NULL when cleaning up is unnecessary (lines 649 and 417 in current sources, respectively), so I've removed the checks. Am I missing something?

Barring any objections, I'll apply this patch tomorrow some time.

-Neil

*** src/backend/executor/execQual.c	eb1daef85341439578a3f6bb73549c4420292bc3
--- src/backend/executor/execQual.c	20fa43d2335a50a1663c8e3e1d7d9e5d091dc79f
***************
*** 2673,2679 ****
  {
  	CoerceToDomain *ctest = (CoerceToDomain *) cstate->xprstate.expr;
  	Datum		result;
- 	ListCell   *l;

  	result = ExecEvalExpr(cstate->arg, econtext, isNull, isDone);
--- 2673,2678 ----
***************
*** 2680,2686 ****
  	if (isDone && *isDone == ExprEndResult)
  		return result;			/* nothing to check */

! 	foreach(l, cstate->constraints)
  	{
  		DomainConstraintState *con = (DomainConstraintState *) lfirst(l);
--- 2679,2707 ----
  	if (isDone && *isDone == ExprEndResult)
  		return result;			/* nothing to check */

! 	ExecCheckDomainConstraints(result, *isNull, cstate->constraints,
! 							   ctest->resulttype, econtext);
!
! 	/* No constraints failed, so return the datum */
! 	return result;
! }
!
! /*
!  * Check a list of domain constraints against an instance of the
!  * domain type, 'value'.  'isnull' indicates whether the instance is
!  * null.  'constraints' is a list of DomainConstraintState nodes
!  * describing the constraints to check.  'valtype' is the OID of the
!  * domain type.  'econtext' is the ExprContext in which the constraint
!  * expressions are evaluated.
!  */
! void
! ExecCheckDomainConstraints(Datum value, bool isnull,
! 						   List *constraints, Oid valtype,
! 						   ExprContext *econtext)
! {
! 	ListCell   *l;
!
! 	foreach(l, constraints)
  	{
  		DomainConstraintState *con = (DomainConstraintState *) lfirst(l);
***************
*** 2687,2697 ****
  	switch (con->constrainttype)
  	{
  		case DOM_CONSTRAINT_NOTNULL:
! 			if (*isNull)
  				ereport(ERROR,
  						(errcode(ERRCODE_NOT_NULL_VIOLATION),
  						 errmsg("domain %s does not allow null values",
! 								format_type_be(ctest->resulttype))));
  			break;
  		case DOM_CONSTRAINT_CHECK:
  			{
--- 2708,2718 ----
  	switch (con->constrainttype)
  	{
  		case DOM_CONSTRAINT_NOTNULL:
! 			if (isnull)
  				ereport(ERROR,
  						(errcode(ERRCODE_NOT_NULL_VIOLATION),
  						 errmsg("domain %s does not allow null values",
! 								format_type_be(valtype))));
  			break;
  		case DOM_CONSTRAINT_CHECK:
  			{
***************
*** 2709,2726 ****
  				save_datum = econtext->domainValue_datum;
  				save_isNull = econtext->domainValue_isNull;

! 				econtext->domainValue_datum = result;
! 				econtext->domainValue_isNull = *isNull;

  				conResult = ExecEvalExpr(con->check_expr,
  										 econtext, &conIsNull, NULL);

! 				if (!conIsNull &&
! 					!DatumGetBool(conResult))
  					ereport(ERROR,
  							(errcode(ERRCODE_CHECK_VIOLATION),
  							 errmsg("value for domain %s violates check constraint \"%s\"",
! 									format_type_be(ctest->resulttype),
  									con->name)));

  				econtext->domainValue_datum = save_datum;
  				econtext->domainValue_isNull = save_isNull;
--- 2730,2746 ----
  				save_datum = econtext->domainValue_datum;
  				save_isNull = econtext->domainValue_isNull;

! 				econtext->domainValue_datum = value;
! 				econtext->domainValue_isNull = isnull;

  				conResult = ExecEvalExpr(con->check_expr,
  										 econtext, &conIsNull, NULL);

! 				if (!conIsNull && !DatumGetBool(conResult))
  					ereport(ERROR,
  							(errcode(ERRCODE_CHECK_VIOLATION),
  							 errmsg("value for domain %s violates check constraint \"%s\"",
! 									format_type_be(valtype),
  									con->name)));

  				econtext->domainValue_datum = save_datum;
  				econtext->domainValue_isNull = save_isNull;
***************
*** 2733,2741 ****
  			break;
  	}
  }
-
- /* If all has gone well (constraints did not fail) return the datum */
- return result;
  }

  /*
--- 2753,2758 ----
Re: [PATCHES] plpgsql: check domain constraints
Neil Conway [EMAIL PROTECTED] writes:
> (2) adjust pl/pgsql to fetch the constraints associated with the
> function's return value. Because this is expensive, the constraints are
> fetched once per session, when the function is compiled.

We have gone out of our way to make sure that domain constraint checking responds promptly (ie, within one query) to updates of the domain definition. Caching at the session level in plpgsql would be a significant regression from that, and I don't think it's acceptable. If you had a way of invalidating the cache when needed, it'd be great ... but not without that.

> I also made a few semi-related cleanups. In pl_exec.c, it seems to me
> that estate.eval_econtext MUST be non-NULL during the guts of both
> plpgsql_exec_trigger() and plpgsql_exec_function(). Therefore checking
> that estate.eval_econtext is non-NULL when cleaning up is unnecessary
> (lines 649 and 417 in current sources, respectively), so I've removed
> the checks. Am I missing something?

The code doesn't currently assume that, and it doesn't seem to me that saving one simple if-test is a reason to add the assumption ...

			regards, tom lane