[PATCHES] Foreign key type checking patch
Hello again patchers, Here is a proposed patch against 7.4.1 to check for an exact match between foreign key types and the referenced key types, and to show a warning if this is not the case. This is an attempt to prevent stupid bugs such as: CREATE TABLE foo(id INT4 NOT NULL PRIMARY KEY); CREATE TABLE bla(id INT2 REFERENCES foo); which may work at the beginning, and then fail later on. I'm not at ease with postgresql internals, however this implementation seems reasonable to me, and in the spirit of how the surrounding code works. I could not find any simple way to tell the user about what is being processed, as there is no real context information to tell 'while processing this constraint'... However this situation seems to be the normal case with postgresql messages, as far as I can tell from my use. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED] fk_type_check.diff.gz Description: Binary data ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [PATCHES] Foreign key type checking patch
I can think of several cases where it might be reasonable for the types to be different. Sure. It's all about a warning, not about an error. -- Fabien Coelho - [EMAIL PROTECTED]
[PATCHES] Notice about costly ri checks
Dear patchers, This patch adds a notice at constraint creation time if the referential integrity check is to be costly, that is, the comparison operator involves some coercion. The patch also accepts the validation of the regression tests with the added notice. The patch was generated with the difforig script against the current cvs head. Thanks in advance for considering it, and for any comment or cheering, -- Fabien Coelho - [EMAIL PROTECTED]

*** ./src/backend/commands/tablecmds.c.orig	Wed Mar 3 09:18:19 2004
--- ./src/backend/commands/tablecmds.c	Wed Mar 3 09:30:57 2004
***************
*** 3131,3141 ****
  		 * fktypoid[i] is the foreign key table's i'th element's type
  		 * pktypoid[i] is the primary key table's i'th element's type
  		 *
! 		 * We let oper() do our work for us, including ereport(ERROR) if the
! 		 * types don't compare with =
  		 */
! 		Operator	o = oper(makeList1(makeString("=")),
! 							 fktypoid[i], pktypoid[i], false);

  		ReleaseSysCache(o);
  	}
--- 3131,3154 ----
  		 * fktypoid[i] is the foreign key table's i'th element's type
  		 * pktypoid[i] is the primary key table's i'th element's type
  		 *
! 		 * First we try to find an inexpensive comparison "=",
! 		 * but there is no error if none is found.
  		 */
! 		Operator	o = compatible_oper(makeList1(makeString("=")),
! 										fktypoid[i], pktypoid[i], true);

! 		if (o == (Operator) NULL)
! 		{
! 			/* Now we let oper() do our work for us, including ereport(ERROR)
! 			 * if the types don't compare with "="
! 			 */
! 			o = oper(makeList1(makeString("=")),
! 					 fktypoid[i], pktypoid[i], false);

! 			if (o != (Operator) NULL)
! 				ereport(NOTICE, (errmsg("costly cross-type foreign key: "
! 										"coercion needed")));
! 		}

  		ReleaseSysCache(o);
  	}

*** ./src/test/regress/expected/alter_table.out.orig	Wed Mar 3 10:51:40 2004
--- ./src/test/regress/expected/alter_table.out	Wed Mar 3 10:51:52 2004
***************
*** 206,213 ****
--- 206,215 ----
  DROP TABLE FKTABLE;
  CREATE TEMP TABLE FKTABLE (ftest1 varchar);
  ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable;
+ NOTICE: costly cross-type foreign key: coercion needed
  -- As should this
  ALTER TABLE FKTABLE ADD FOREIGN KEY(ftest1) references pktable(ptest1);
+ NOTICE: costly cross-type foreign key: coercion needed
  DROP TABLE pktable cascade;
  NOTICE: drop cascades to constraint $2 on table fktable
  NOTICE: drop cascades to constraint $1 on table fktable

*** ./src/test/regress/expected/foreign_key.out.orig	Wed Mar 3 10:51:37 2004
--- ./src/test/regress/expected/foreign_key.out	Wed Mar 3 10:51:49 2004
***************
*** 738,746 ****
--- 738,748 ----
  -- This should succeed, even though they are different types
  -- because varchar=int does exist
  CREATE TABLE FKTABLE (ftest1 varchar REFERENCES pktable);
+ NOTICE: costly cross-type foreign key: coercion needed
  DROP TABLE FKTABLE;
  -- As should this
  CREATE TABLE FKTABLE (ftest1 varchar REFERENCES pktable(ptest1));
+ NOTICE: costly cross-type foreign key: coercion needed
  DROP TABLE FKTABLE;
  DROP TABLE PKTABLE;
  -- Two columns, two tables
Re: [PATCHES] ALSO keyword to CREATE RULE patch
I thought the syntax came from Berkeley. We can add ALSO if folks like it. I can't think of cases where we have keywords for both on and off behavior, and allow a default if the keyword is missing. ALTER TABLE ... DROP CONSTRAINT ... [ RESTRICT | CASCADE ] ; CREATE TABLE ... [ WITH OIDS | WITHOUT OIDS ] ... ; CREATE USER ... [ CREATEDB | NOCREATEDB ] ... ; IMHO, from the language design point of view, it seems better if all options have a name. -- Fabien Coelho - [EMAIL PROTECTED]
Re: [PATCHES] ALSO keyword to CREATE RULE patch
Shoot me the patch again and I will put it in the queue. Thanks. Please find attached (again) the patch I sent. It is against 7.4.1. If necessary, I can redo the job against current head. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]

*** ./doc/src/sgml/rules.sgml.orig	Sun Feb 29 17:35:15 2004
--- ./doc/src/sgml/rules.sgml	Sun Feb 29 17:38:45 2004
***************
*** 873,879 ****
  <ListItem>
  <Para>
! They can be <literal>INSTEAD</> or not.
  </Para>
  </ListItem>
--- 873,879 ----
  <ListItem>
  <Para>
! They can be <literal>INSTEAD</> or <literal>ALSO</> (default).
  </Para>
  </ListItem>
***************
*** 904,910 ****
  <ProgramListing>
  CREATE RULE <replaceable>rule_name</> AS ON <replaceable>event</>
      TO <replaceable>object</> [WHERE <replaceable>rule_qualification</>]
!     DO [INSTEAD] [<replaceable>action</> | (<replaceable>actions</>) | NOTHING];
  </ProgramListing>
  in mind.
--- 904,910 ----
  <ProgramListing>
  CREATE RULE <replaceable>rule_name</> AS ON <replaceable>event</>
      TO <replaceable>object</> [WHERE <replaceable>rule_qualification</>]
!     DO [ALSO|INSTEAD] [<replaceable>action</> | (<replaceable>actions</>) | NOTHING];
  </ProgramListing>
  in mind.
***************
*** 920,926 ****
  Initially the query-tree list is empty. There can be zero
  (<literal>NOTHING</> key word), one, or multiple actions.
  To simplify, we will look at a rule with one action. This rule
! can have a qualification or not and it can be <literal>INSTEAD</> or not.
  </Para>
  <Para>
--- 920,926 ----
  Initially the query-tree list is empty. There can be zero
  (<literal>NOTHING</> key word), one, or multiple actions.
  To simplify, we will look at a rule with one action. This rule
! can have a qualification or not and it can be <literal>INSTEAD</> or <literal>ALSO</> (default).
  </Para>
  <Para>
***************
*** 937,943 ****
  <variablelist>
  <varlistentry>
! <term>No qualification and not <literal>INSTEAD</></term>
  <listitem>
  <para>
  the query tree from the rule action with the original query
--- 937,943 ----
  <variablelist>
  <varlistentry>
! <term>No qualification and <literal>ALSO</></term>
  <listitem>
  <para>
  the query tree from the rule action with the original query
***************
*** 957,963 ****
  </varlistentry>
  <varlistentry>
! <term>Qualification given and not <literal>INSTEAD</></term>
  <listitem>
  <para>
  the query tree from the rule action with the rule
--- 957,963 ----
  </varlistentry>
  <varlistentry>
! <term>Qualification given and <literal>ALSO</></term>
  <listitem>
  <para>
  the query tree from the rule action with the rule
***************
*** 980,986 ****
  </varlistentry>
  </variablelist>
! Finally, if the rule is not <literal>INSTEAD</>, the unchanged
  original query tree is added to the list. Since only qualified
  <literal>INSTEAD</> rules already add the original query tree, we end
  up with either one or two output query trees for a rule with one action.
--- 980,986 ----
  </varlistentry>
  </variablelist>
! Finally, if the rule is <literal>ALSO</>, the unchanged
  original query tree is added to the list. Since only qualified
  <literal>INSTEAD</> rules already add the original query tree, we end
  up with either one or two output query trees for a rule with one action.
***************
*** ,1117 ****
  </Para>
  <Para>
! The rule is a qualified non-<literal>INSTEAD</> rule, so the rule
  system has to return two query trees: the modified rule action and
  the original query tree. In step 1, the range table of the original
  query is incorporated into the rule's action query tree. This results
  in:
--- ,1117 ----
  </Para>
  <Para>
! The rule is a qualified <literal>ALSO</> rule, so the rule
  system has to return two query trees: the modified rule action and
  the original query tree. In step 1, the range table of the original
  query is incorporated into the rule's action query tree. This results
  in:
***************
*** 1190,1196 ****
  </para>
  <para>
! That's it. Since the rule is not <literal>INSTEAD</>, we also output
  the original query tree.
  In short, the output from the rule system is a list of two query
  trees that correspond to these statements:
--- 1190,1196 ----
  </para>
  <para>
! That's it. Since the rule is <literal>ALSO</>, we also output
  the original query tree.
  In short, the output from the rule system is a list of two query
  trees that correspond to these statements:

*** ./src/backend/parser/gram.y.orig	Sun Feb 29 17:32:48 2004
--- ./src/backend/parser/gram.y	Sun Feb 29 17:33:21 2004
***************
*** 327,333 ****
  /* ordinary key words in alphabetical order */
  %token <keyword> ABORT_P ABSOLUTE_P ACCESS ACTION ADD AFTER
! 	AGGREGATE ALL ALTER ANALYSE ANALYZE AND ANY
[PATCHES] costly foreign key ri checks (4)
Dear patchers, Following the discussion about previous versions of this patch, please find attached a new patch candidate for warning about costly foreign key referential integrity checks. 1/ it generates a WARNING 2/ it DETAILs the attributes and types 3/ some regression tests are also appended to foreign_key.sql It validates for me against current head. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED] costly_ri_notice.patch.gz Description: Binary data
Re: [PATCHES] notice about costly ri checks (2)
As a side question, if there are multiple cross-type conversions in one constraint on different column pairs, what do we think the message should be? One message with multiple column mentions in detail or multiple notices? (I haven't looked at the patch to see if one or the other is easier with how it's set up) I would expect it to generate one WARNING for each mismatch; doing anything else would make life a lot more complex, both as to writing the code and as to formatting the output readably. I agree. patch v1 issued the warning once for each mismatch, in the check loop. patch v2 issued the warning once for the foreign key, outside of the loop. patch v3 is yet to come;-) I'll re-submit a patch some time later, with a WARNING and mismatch column names and types specified. -- Fabien.
Re: [PATCHES] ALSO keyword to CREATE RULE patch
I thought the syntax came from Berkeley. We can add ALSO if folks like it. I can't think of cases where we have keywords for both on and off behavior, and allow a default if the keyword is missing. ALTER TABLE ... DROP CONSTRAINT ... [ RESTRICT | CASCADE ] CREATE TABLE ... [ WITH OIDS | WITHOUT OIDS ] CREATE USER [ CREATEDB | NOCREATEDB ] ... IMHO, from the language design point of view, it seems better if all options have a name. -- Fabien.
Re: [PATCHES] [HACKERS] notice about costly ri checks (3)
Subject: Re: [HACKERS] notice about costly ri checks (3) I hope you put version (4), not version (3)! Otherwise the wording is not the new one discussed later... I just checked. You put version 3 in line (NOTICE, no precision about the offending attributes), although a later discussion seemed to decide that a WARNING was better, and that attributes and types should be shown, hence version (4). So, if you can update, it would be better! Sorry for the inconvenience, have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]
[PATCHES] client side syntax error localisation for psql (v1)
Dear patchers, Please find attached my first attempt at providing localisation information on the client side for syntax errors in psql. Basically, one function is added which takes care of the burden. I put a lot of comments in the function to try to make it clear what is going on. It validates for me against current cvs head. I've also added new regression tests in errors.sql. Although I have taken great and painful care to deal with multi-byte encodings, I'm not sure how I can validate that, as I don't have a Japanese terminal to play with. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]

*** ./src/bin/psql/common.c.orig	Thu Mar 11 15:42:17 2004
--- ./src/bin/psql/common.c	Thu Mar 11 15:59:50 2004
***************
*** 345,350 ****
--- 345,491 ----
  }

+ #define DISPLAY_SIZE (60)
+ #define MIN_RIGHT_CUT (10)
+
+ /* on errors, print localisation information if available.
+  * the query is expected to be in the client encoding.
+  */
+ static void
+ ReportLocalisation(const PGresult *result, const char *query)
+ {
+ 	int loc = 0;
+ 	char *sp = PQresultErrorField(result, PG_DIAG_STATEMENT_POSITION);
+
+ 	if (sp && sscanf(sp, "%d", &loc) != 1)
+ 	{
+ 		psql_error("INTERNAL ERROR: unexpected statement position '%s'\n", sp);
+ 		return;
+ 	}
+
+ 	/* do we have something to show? */
+ 	if (query != NULL && loc > 0)
+ 	{
+ 		int clen, slen, i, *qidx, ibeg, iend, last_nl, loc_line;
+ 		char *wquery, *cursor;
+
+ 		/* (1) let us first compute a character index for the query. */
+
+ 		/* we need a safe allocation size... */
+ 		slen = strlen(query);
+
+ 		/* the last one is needed to store last char mb length */
+ 		qidx = (int *) palloc(sizeof(int) * (slen + 1));
+
+ 		qidx[0] = 0;
+ 		for (i = 1; query[qidx[i-1]] != '\0' && i < slen + 1; i++)
+ 			qidx[i] = qidx[i-1] + PQmblen(&query[qidx[i-1]], pset.encoding);
+
+ 		clen = i - 1;
+
+ 		/* we must be at the end! */
+ 		psql_assert(query[qidx[clen]] == '\0');
+
+ 		/* our localisation index starts at 0! it must be in the query! */
+ 		loc--;
+ 		psql_assert(loc >= 0 && loc < clen);
+
+ 		/* (2) now we build a working copy of the query. */
+ 		wquery = (char *) palloc(sizeof(char) * (strlen(query) + 1));
+ 		strcpy(wquery, query);
+
+ 		/* the character number of the previous newline. */
+ 		last_nl = -1;
+ 		/* input line number of our syntax error. */
+ 		loc_line = 1;
+ 		/* first included char of extract. */
+ 		ibeg = 0;
+ 		/* last not-included char of extract. */
+ 		iend = clen;
+
+ 		/* (3) we clean wquery string from tabs, carriage returns and new lines.
+ 		 * extract error line number and begin and end indexes.
+ 		 */
+ 		for (i = 0; i < clen; i++)
+ 		{
+ 			/* how to find a '\t', a '\r' or a '\n'?
+ 			 * I assume here that all encodings must be ascii compatible...
+ 			 */
+ 			if ((qidx[i+1] - qidx[i]) == 1)
+ 			{
+ 				if (wquery[qidx[i]] == '\t')
+ 					wquery[qidx[i]] = ' ';
+ 				if (wquery[qidx[i]] == '\r' || wquery[qidx[i]] == '\n')
+ 				{
+ 					/* count lines */
+ 					if (wquery[qidx[i]] == '\n' && i < loc)
+ 						loc_line++;
+
+ 					/* set extract end. */
+ 					if (iend == clen && i > loc)
+ 						iend = i;
+
+ 					wquery[qidx[i]] = ' ';
+ 					last_nl = i;
+ 				}
+ 			}
+ 			/* set extract beginning. */
+ 			if (i == loc && last_nl != -1)
+ 				ibeg = last_nl + 1;
+ 		}
+
+ 		/* (4) if the extract is too long, we truncate it. */
+ 		if (iend - ibeg > DISPLAY_SIZE)
+ 		{
+ 			/* we first truncate right if it is enough. */
+ 			if (ibeg + DISPLAY_SIZE >= loc + MIN_RIGHT_CUT)
+ 				iend = ibeg + DISPLAY_SIZE;
+ 			else
+ 			{
+ 				/* we truncate right. */
+ 				if (loc + MIN_RIGHT_CUT < iend)
+ 					iend = loc + MIN_RIGHT_CUT;
+
+ 				/* still too long
[PATCHES] client side syntax error position (v2)
Dear patchers, Please find enclosed a new patch submission which takes into account comments by Tom Lane about version 1. (1) the function is renamed, as 'localisation' does not mean what I meant. (2) three dots '...' appear after and before the line when truncated. However, I still put the line number that I found useful. The new version validates for me against current cvs head. Same comment as with version one: I don't know how to test the multi-byte part. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]

*** ./src/bin/psql/common.c.orig	Thu Mar 11 15:42:17 2004
--- ./src/bin/psql/common.c	Thu Mar 11 18:05:22 2004
***************
*** 345,350 ****
--- 345,506 ----
  }

+ #define DISPLAY_SIZE (60)
+ #define MIN_RIGHT_CUT (10)
+
+ /* on errors, print syntax error position if available.
+  * the query is expected to be in the client encoding.
+  */
+ static void
+ ReportSyntaxErrorPosition(const PGresult *result, const char *query)
+ {
+ 	int loc = 0;
+ 	char *sp = PQresultErrorField(result, PG_DIAG_STATEMENT_POSITION);
+
+ 	if (sp && sscanf(sp, "%d", &loc) != 1)
+ 	{
+ 		psql_error("INTERNAL ERROR: unexpected statement position '%s'\n", sp);
+ 		return;
+ 	}
+
+ 	/* do we have something to show? */
+ 	if (query != NULL && loc > 0)
+ 	{
+ 		int clen, slen, i, *qidx, ibeg, iend, last_nl, loc_line;
+ 		char *wquery, *cursor;
+ 		bool beg_trunc, end_trunc;
+
+ 		/* (1) let us first compute a character index for the query. */
+
+ 		/* we need a safe allocation size... */
+ 		slen = strlen(query);
+
+ 		/* the last one is needed to store last char mb length */
+ 		qidx = (int *) palloc(sizeof(int) * (slen + 1));
+
+ 		qidx[0] = 0;
+ 		for (i = 1; query[qidx[i-1]] != '\0' && i < slen + 1; i++)
+ 			qidx[i] = qidx[i-1] + PQmblen(&query[qidx[i-1]], pset.encoding);
+
+ 		clen = i - 1;
+
+ 		/* we must be at the end! */
+ 		psql_assert(query[qidx[clen]] == '\0');
+
+ 		/* our localisation index starts at 0! it must be in the query! */
+ 		loc--;
+ 		psql_assert(loc >= 0 && loc < clen);
+
+ 		/* (2) now we build a working copy of the query. */
+ 		wquery = (char *) palloc(sizeof(char) * (strlen(query) + 1));
+ 		strcpy(wquery, query);
+
+ 		/* the character number of the last newline. */
+ 		last_nl = -1;
+ 		/* input line number of our syntax error. */
+ 		loc_line = 1;
+ 		/* first included char of extract. */
+ 		ibeg = 0;
+ 		/* last not-included char of extract. */
+ 		iend = clen;
+
+ 		/* (3) we clean wquery string from tabs, carriage returns and new lines.
+ 		 * extract error line number and begin and end indexes.
+ 		 */
+ 		for (i = 0; i < clen; i++)
+ 		{
+ 			/* how to find a '\t', a '\r' or a '\n'?
+ 			 * I assume here that all encodings must be ascii compatible...
+ 			 */
+ 			if ((qidx[i+1] - qidx[i]) == 1)
+ 			{
+ 				if (wquery[qidx[i]] == '\t')
+ 					wquery[qidx[i]] = ' ';
+ 				if (wquery[qidx[i]] == '\r' || wquery[qidx[i]] == '\n')
+ 				{
+ 					/* count lines */
+ 					if (wquery[qidx[i]] == '\n' && i < loc)
+ 						loc_line++;
+
+ 					/* set extract end. */
+ 					if (iend == clen && i > loc)
+ 						iend = i;
+
+ 					wquery[qidx[i]] = ' ';
+ 					last_nl = i;
+ 				}
+ 			}
+ 			/* set extract beginning. */
+ 			if (i == loc && last_nl != -1)
+ 				ibeg = last_nl + 1;
+ 		}
+
+ 		/* (4) if the line extracted is too long, we truncate it. */
+ 		beg_trunc = false;
+ 		end_trunc = false;
+ 		if (iend - ibeg > DISPLAY_SIZE)
+ 		{
+ 			/* we first truncate right if it is enough. */
+ 			if (ibeg + DISPLAY_SIZE >= loc + MIN_RIGHT_CUT)
+ 			{
+ 				iend = ibeg + DISPLAY_SIZE;
+ 				end_trunc = true;
+ 			}
+ 			else
+ 			{
+ 				/* we truncate right. */
+ 				if (loc + MIN_RIGHT_CUT < iend
Re: [PATCHES] [HACKERS] notice about costly ri checks (3)
Dear Bruce, # [PATCHES] notice about costly ri checks (2), Fabien COELHO This one, which appears on the web page, can also be removed; it too is replaced by version 4. Sorry again for the inconvenience, have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]
Re: [PATCHES] [HACKERS] notice about costly ri checks (3)
Dear Bruce, It is in the queue because I need that narrative for the commit message. Ok, I get the point. I should have made a fully standalone version with all comments repeated. Thanks, have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]
Re: [PATCHES] client side syntax error position (v3)
I have applied this patch, Thanks! after some considerable whacking around to make it ready to cope with multicolumn Kanji characters. It does not actually cope yet, since the necessary knowledge is not available from the character encoding logic. But replacing the two places that say scroffset += 1; /* XXX fix me when we have screen width info */ with calls to a get-the-screen-width-of-this-character subroutine should do the job. I also looked into it, and it is a little bit more complex, as the extract width must be thought of from the terminal perspective. Thus the truncation part must take the terminal widths into account. I think it would require an additional array to store the character-to-terminal-column mapping, to be used when truncating. I'll do that as soon as the needed routines for terminal widths are there, and submit a new patch. I have temporarily fixed the problem shown in Fabien's original regression tests: CREATE FUNCTION test1 (int) RETURNS int LANGUAGE sql AS 'not even SQL'; ERROR: syntax error at or near "not" at character 1 + LINE 1: CREATE FUNCTION test1 (int) RETURNS int LANGUAGE sql + ^ [...] but I don't currently see how we can account for the string literal's starting position and possible internal quotes, backslashes, etc etc. Yep. A solution would be to generate the extract from the backend, where the information is really available, but this has been ruled out by previous discussions... -- Fabien Coelho - [EMAIL PROTECTED]
[PATCHES] hint infrastructure setup (v1)
Dear patchers, Please find attached my first patch which sets up an initial hint infrastructure in the sql parser. At this time it is pretty simple, as I'm not yet convinced that I'll need a hint stack or something like that to cope with recursions, despite initial implementations I've already tried. The patch adds a current_hint variable in the parser which is to be updated by syntax rules as needed. If the variable is not NULL on a syntax error, the hint is displayed with the HINT tag already included in the error reporting functions; otherwise nothing happens. Also a new file about hints is added to the validation, but is not added to the default regression tests. The current status is that an initial hint is provided, and then no hints are available thanks to added stop-hint rules. My intent is to submit later new small incremental patches on this infrastructure so as to provide hints in various cases, typically for every sql command such as CREATE USER, DROP TABLE and so on. My motivation for not submitting a full patch is that: (1) It is a lot of work not likely to be finished quickly. As I work on that, it is pretty sure that the gram.y file will have been updated and thus there will be conflicts. (2) It would basically result in a drop-in replacement for gram.y, and that would be harder to pass through the review process. On the other hand, small patches would be much easier to argue over, check, and then accept or reject. I think this is the only safe path to include such changes. Thus I wish this patch could be applied to current cvs head. I'm ready to update it if required for some stylistic or whatever reason. However, I really need to have such a patch so as to be able to go on. It modifies a lot of lines in a very simple way, but other patches should be much more narrow after this one, that's the idea... I don't think it harms the parser code, as it validates for me.
Thanks in advance, -- Fabien Coelho - [EMAIL PROTECTED] hints_0.patch.gz Description: Binary data
[PATCHES] hint infrastructure setup (v2)
Dear patchers, As I see that my previous attempt has met a lot of reactions;-) Please find attached my SECOND patch proposition which sets up an initial hint infrastructure in the sql parser. At this time it is pretty simple, as I'm not yet convinced that I'll need a hint stack or something like that to cope with recursions, despite initial implementations I've already tried. The patch adds a current_hint variable in the parser which is to be updated by syntax rules as needed. If the variable is not NULL on a syntax error, the hint is displayed with the HINT tag already included in the error reporting functions; otherwise nothing happens. Also a new file about hints is added to the validation, but is not added to the default regression tests. It also adds a new client option parser_error_hints which is false by default, and which governs whether hints are shown on parser errors. I've added this option and this default after a mail by several DBAs on the hackers list who consider WARNING and NOTICE basically crazy things. My agenda is to help my students while learning SQL with postgreSQL, but I can understand that some people don't like being helped. So this option is an attempt at resolving the contradiction. The current status is that, if the option is set to true, an initial hint is provided, and then no hints are available thanks to added stop-hint rules. My intent is to submit later new small incremental patches on this infrastructure so as to provide hints in various cases, typically for every sql command such as CREATE USER, DROP TABLE and so on. My motivation for not submitting a full patch is that: (1) It is a lot of work not likely to be finished quickly. As I work on that, it is pretty sure that the gram.y file will have been updated and thus there will be conflicts. (2) It would basically result in a drop-in replacement for gram.y, and that would be harder to pass through the review process.
On the other hand, small patches would be much easier to argue over, check, and then accept or reject. I think this is the only safe path to include such changes. Thus I wish this patch could be applied to current cvs head. I'm ready to update it if required for some stylistic or whatever reason. However, I really need to have such a patch so as to be able to go on. It modifies a lot of lines in a very simple way, but other patches should be much more narrow after this one, that's the idea... I don't think it harms the parser code, as it validates for me. I just hope I did not forget any file. Thanks in advance, -- Fabien Coelho - [EMAIL PROTECTED] hints_0.patch.gz Description: Binary data
[PATCHES] hint infrastructure setup (v3)
Dear patchers, Well, as the discussion rages over my previous patch submissions, I've had time to improve the code;-) I finally figured out that there are two errhint functions (elog.c vs ipc_text.c), and the one I'm calling is the first one, so I need to put a format. The second appears to ignore its arguments after the first. Anyway, please consider the following patch for inclusion into current head. It validates for me. I need it to be able to go on. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED] hints_0.patch.gz Description: Binary data
[PATCHES] syntax error position CREATE FUNCTION bug fix
Dear patchers, Please find attached a patch to fix the CREATE FUNCTION syntax error position bug I reported a few days ago. As the exact query being processed is only known to the backend (it may be the initial query, it may be a subset of the initial query, it may be some generated query?), the offending string is returned with the error position, which is expressed with respect to this query (that has always been the case, by the way). In order to do that, I did the following: 1. appended a new query field to the ErrorData structure, which is set with an added errquery function. 2. modified the error reporting part of the protocol (version 3). As the protocol implementation in libpq is fuzzy enough, there is no fix on the client reception part; only the server sending part is modified with a new field for the query (Q). Thus this addition should not harm old clients. 3. fixed yyerror to return the processed query on errors. A copy of the buffer is needed, as the scanner buffer is modified and the conventions about buffer termination are not the same. 4. fixed the psql position reporting code to use this reported query instead of the one it sent. Tom's quick fix around the problem is removed. 5. updated comments where it seemed appropriate in the code. 6. finally updated the protocol documentation for the error reporting part by adding the Q field and fixing the P field. I did it like that because I don't think it is elegantly feasible to update the position to get back to the initial query, as escapes may have been processed within the string, so a simple offset would not fix the bug. It validates for me. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]

*** ./doc/src/sgml/protocol.sgml.orig	Wed Mar 10 15:56:59 2004
--- ./doc/src/sgml/protocol.sgml	Thu Mar 18 10:19:24 2004
***************
*** 3890,3901 ****
  <VarListEntry>
  <Term>
  <literal>P</>
  </Term>
  <ListItem>
  <Para>
  Position: the field value is a decimal ASCII integer, indicating
! an error cursor position as an index into the original query string.
  The first character has index 1, and positions are measured in
  characters not bytes.
  </Para>
--- 3890,3916 ----
  <VarListEntry>
  <Term>
+ <literal>Q</>
+ </Term>
+ <ListItem>
+ <Para>
+ Query: an optional string that contains the processed query string
+ on syntax errors. The position field is expressed with respect to
+ this string. In most cases this is the initial query sent by
+ the client, however in some cases such as with
+ <command>CREATE FUNCTION</> this may be a subset of the initial query.
+ </Para>
+ </ListItem>
+ </VarListEntry>
+
+ <VarListEntry>
+ <Term>
  <literal>P</>
  </Term>
  <ListItem>
  <Para>
  Position: the field value is a decimal ASCII integer, indicating
! an error cursor position as an index into the processed query string.
  The first character has index 1, and positions are measured in
  characters not bytes.
  </Para>

*** ./src/backend/parser/scan.l.orig	Tue Feb 24 22:45:18 2004
--- ./src/backend/parser/scan.l	Thu Mar 18 10:09:57 2004
***************
*** 68,73 ****
--- 68,74 ----
  /* Handles to the buffer that the lexer uses internally */
  static YY_BUFFER_STATE scanbufhandle;
  static char *scanbuf;
+ static char *initial_scanbuf; /* for syntax errors, as scanbuf is touched */

  unsigned char unescape_single_char(unsigned char c);
***************
*** 623,628 ****
--- 624,630 ----
  				(errcode(ERRCODE_SYNTAX_ERROR),
  				 /* translator: %s is typically "syntax error" */
  				 errmsg("%s at end of input", message),
+ 				 errquery(initial_scanbuf),
  				 errposition(cursorpos)));
  	}
  	else
***************
*** 631,636 ****
--- 633,639 ----
  				(errcode(ERRCODE_SYNTAX_ERROR),
  				 /* translator: first %s is typically "syntax error" */
  				 errmsg("%s at or near \"%s\"", message, loc),
+ 				 errquery(initial_scanbuf),
  				 errposition(cursorpos)));
  	}
  }
***************
*** 658,663 ****
--- 661,671 ----
  	scanbuf[slen] = scanbuf[slen + 1] = YY_END_OF_BUFFER_CHAR;
  	scanbufhandle = yy_scan_buffer(scanbuf, slen + 2);

+ 	/*
+ 	 * Make a copy in case of error, as scanbuf may be touched.
+ 	 */
+ 	initial_scanbuf = pstrdup(str);
+
  	/* initialize literal buffer to a reasonable but expansible size */
  	literalalloc = 128;
  	literalbuf = (char *) palloc(literalalloc);
***************
*** 675,680 ****
--- 683,689 ----
  	{
  		yy_delete_buffer(scanbufhandle);
  		pfree(scanbuf);
+ 		pfree(initial_scanbuf);
  	}

*** ./src/backend/utils/error/elog.c.orig	Mon Mar 15 17:57:28 2004
--- ./src/backend/utils/error
Re: [PATCHES] syntax error position CREATE FUNCTION bug fix
Dear Tom, Attached is a proposed patch that fixes the cursor-position-in-CREATE-FUNCTION issue per my earlier suggestion. The re-parsing of the original command is simplistic but will handle all normal cases. [...] That's quite a demonstration;-) However, I still stick with my bad simple idea because the simpler the better, and also because of the following example: psql CREATE OR REPLACE FUNCTION count_tup(TEXT) RETURNS INTEGER AS ' DECLARE n RECORD; BEGIN FOR n IN EXECUTE \'SELECT COUNT(*) AS c FRM \' || $1 LOOP RETURN n.c; END LOOP; RETURN NULL; END;' LANGUAGE plpgsql; psql SELECT count_tup('pg_shadow'); ERROR: syntax error at or near "FRM" at character 22 CONTEXT: PL/pgSQL function count_tup line 4 at for over execute statement As you can notice, the extract is not in the submitted query, so there is no point showing it there. Maybe real PL/pgSQL programmers will never have syntax errors with their SQL stuff. Thus I really think that the parser should return the processed query, at least in some cases. Anyway, have a nice day! -- Fabien Coelho - [EMAIL PROTECTED]
Re: [PATCHES] [NOT] (LIKE|ILIKE) (ANY|ALL) (...)
Fabien COELHO [EMAIL PROTECTED] writes: Please find attached a patch which allows LIKE/ILIKE/NOT LIKE/NOT ILIKE as operators for ANY/SOME/ALL constructs. This seems to allow a whole lot of unintended and probably uncool things as well. ORDER BY NOT LIKE, for instance. Yes. Well, it seemed to me (maybe I'm wrong here) that ORDER BY !~~ was allowed anyway by the parser, so I cannot see why it should not allow NOT LIKE as well, even if it does not make sense. I guess that it is filtered out later anyway? Or the rule factorization must be changed. It can also be done. -- Fabien Coelho - [EMAIL PROTECTED] ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [PATCHES] [NOT] (LIKE|ILIKE) (ANY|ALL) (...)
Possibly. The case that I thought was a real bad idea was actually the one in def_arg --- we don't want that doing any behind-the-scenes translation of words to other things. The ORDER BY case is just silly. Ok. So the current code is silly, but the patch must be sound;-) Or the rule factorization must be changed. It can also be done. Yes. I think we must have an all_subselect_ops or similar. I'll do that and resubmit later. -- Fabien Coelho - [EMAIL PROTECTED] ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[PATCHES] [NOT] (LIKE|ILIKE) (ANY|SOME|ALL) (subquery...)
Dear patchers, Please find my second micro-patch submission for this minor inconsistency. I put comments about why SIMILAR TO cannot be handled simply in the code rather than in the test, as it seemed more logical. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]

*** ./src/backend/parser/gram.y.orig	Thu Mar 18 09:01:11 2004
--- ./src/backend/parser/gram.y	Fri Mar 26 19:23:56 2004
***************
*** 194,200 ****
  		database_name access_method_clause access_method attr_name
  		index_name name function_name file_name
! %type <list>	func_name handler_name qual_Op qual_all_Op
  		opt_class opt_validator
  
  %type <range>	qualified_name OptConstrFromTable
--- 194,200 ----
  		database_name access_method_clause access_method attr_name
  		index_name name function_name file_name
! %type <list>	func_name handler_name qual_Op qual_all_Op subquery_Op
  		opt_class opt_validator
  
  %type <range>	qualified_name OptConstrFromTable
***************
*** 5692,5698 ****
  					/* Stick a NOT on top */
  					$$ = (Node *) makeA_Expr(AEXPR_NOT, NIL, NULL, (Node *) n);
  				}
! 			| row qual_all_Op sub_type select_with_parens %prec Op
  				{
  					SubLink *n = makeNode(SubLink);
--- 5692,5698 ----
  					/* Stick a NOT on top */
  					$$ = (Node *) makeA_Expr(AEXPR_NOT, NIL, NULL, (Node *) n);
  				}
! 			| row subquery_Op sub_type select_with_parens %prec Op
  				{
  					SubLink *n = makeNode(SubLink);
***************
*** 5702,5708 ****
  					n->subselect = $4;
  					$$ = (Node *)n;
  				}
! 			| row qual_all_Op select_with_parens %prec Op
  				{
  					SubLink *n = makeNode(SubLink);
--- 5702,5708 ----
  					n->subselect = $4;
  					$$ = (Node *)n;
  				}
! 			| row subquery_Op select_with_parens %prec Op
  				{
  					SubLink *n = makeNode(SubLink);
***************
*** 5712,5718 ****
  					n->subselect = $3;
  					$$ = (Node *)n;
  				}
! 			| row qual_all_Op row %prec Op
  				{
  					$$ = makeRowExpr($2, $1, $3);
--- 5712,5718 ----
  					n->subselect = $3;
  					$$ = (Node *)n;
  				}
! 			| row subquery_Op row %prec Op
  				{
  					$$ = makeRowExpr($2, $1, $3);
***************
*** 5807,5812 ****
--- 5807,5829 ----
  			| OPERATOR '(' any_operator ')' { $$ = $3; }
  			;
  
+ subquery_Op:
+ 			all_Op { $$ = makeList1(makeString($1)); }
+ 			| OPERATOR '(' any_operator ')' { $$ = $3; }
+ 			| LIKE { $$ = makeList1(makeString("~~")); }
+ 			| NOT LIKE { $$ = makeList1(makeString("!~~")); }
+ 			| ILIKE { $$ = makeList1(makeString("~~*")); }
+ 			| NOT ILIKE { $$ = makeList1(makeString("!~~*")); }
+ /* cannot put SIMILAR TO here, because SIMILAR TO is a hack.
+  * the regular expression is preprocessed by a function (similar_escape),
+  * and the ~ operator for posix regular expressions is used.
+  *    x SIMILAR TO y  ->  x ~ similar_escape(y)
+  * this transformation is made on the fly by the parser upwards.
+  * however the SubLink structure which handles any/some/all stuff
+  * is not ready for such a thing.
+  */
+ 			;
+ 
  /*
   * General expressions
   * This is the heart of the expression syntax.
***************
*** 6132,6138 ****
  					$$ = n;
  				}
  			}
! 		| a_expr qual_all_Op sub_type select_with_parens %prec Op
  			{
  				SubLink *n = makeNode(SubLink
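The keyword-to-operator rewriting that the new subquery_Op rule performs can be sketched outside the grammar. This is an illustrative Python stand-in for the yacc actions above (the function name is invented for the sketch); the operator names are the ones the patch itself uses:

```python
# Mapping encoded by the subquery_Op rule in the patch: each LIKE variant
# is rewritten to the underlying operator, just as "a LIKE b" is handled
# as "a ~~ b" elsewhere in the grammar.
LIKE_OPERATORS = {
    "LIKE": "~~",
    "NOT LIKE": "!~~",
    "ILIKE": "~~*",
    "NOT ILIKE": "!~~*",
}

def rewrite_subquery_op(expr, keyword, subquery):
    """Rewrite 'expr KEYWORD ANY (subquery)' into operator form."""
    op = LIKE_OPERATORS[keyword]
    return "%s %s ANY (%s)" % (expr, op, subquery)
```

So `name NOT ILIKE ANY (SELECT pat FROM pats)` becomes `name !~~* ANY (SELECT pat FROM pats)`, which the existing SubLink machinery already knows how to execute. SIMILAR TO cannot join this table because it is not a plain operator but a preprocessing step, as the comment in the patch explains.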
Re: [PATCHES] hint infrastructure setup (v3)
Dear Bruce, Why did all the tags have to be renamed: + cmdGRANT: GRANT {noH;}; That's a good question. In order to add hints, I want to attach them to the states of the parser automaton. To do that, I need to put a name on every state, so I need to refactor each prefix that leads to a state and give it a name. Otherwise, I would have: xxx: GRANT {noH;} ALL | GRANT {noH;} CREATE ; (1) I would have to put the after-GRANT hint twice, once after each GRANT occurrence. (2) This would result in shift/reduce conflicts, as the parser does not know which hint should be put. Well, it is the same code in both cases, but yacc does not look at that. Also, I need to stop hints, otherwise the last 'pushed' hint would be shown on any error later. Also, what is typical output for a hint? Can you show one? Well, the current status of the infrastructure is that there is no hint;-) The only hint with the patch is if you do not put an SQL command. It may look like: psql> creat table foo(); ERROR: syntax error at or near creat at character 1 creat table foo(); ^ HINT: sql command such as SELECT or CREATE... ? All other syntax errors result in no hint being displayed, thanks to all the added noH calls I put in. Also, as far as I remember, I put in a new option to activate these hints. Hints are deactivated by default. This is because some people do not like advice and may want to turn it on or off. -- Fabien Coelho - [EMAIL PROTECTED] ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [PATCHES] hint infrastructure setup (v3)
Dear Tom, Well, the current status of the infrastructure is that there is no hint;-) Ah, now I remember why I was waiting to review that stuff: I was expecting you to come out with a version that actually did some things :-( Well, if you wait for something from me, it is better to tell me directly. I was waiting for anything to happen to the patch (accept, discuss or reject) before submitting anything else. What you've got at the moment is a patch that significantly uglifies the grammar without actually doing anything useful, or even clearly pointing the way to what else will need to happen before it does do something useful. So it's difficult to form any reasoned judgment about whether we want to commit to following this path or not. I can see your point. The reasonable way out of the deadlock that I can suggest is the following: I can resubmit a new patch that would provide the needed infrastructure AND some hints on some commands as a proof of concept of what can be achieved. Then you can decide whether it is worth having this kind of thing on all commands, or not. If not, I won't have spent 3 week-ends in the parser instead of my garden for nothing. If you are okay, then you apply it, and I'll submit some new patches later on, one command at a time, and when I have time. Does this sound reasonable enough? -- Fabien Coelho - [EMAIL PROTECTED] ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [PATCHES] hint infrastructure setup (v3)
Dear Tom, I quite agree that you shouldn't do a complete implementation when it's not clear if we'll accept it or not. What I'd like to see is a demo, basically. How about one example of each of several different kinds of hints? We need to get a feeling for what can be accomplished and how messy the grammar would get. I've already tried some implementation but it was not showable, and because the infrastructure was not in place, inappropriate hints could be shown. It is not that messy. Prefixes are just given a name to attach a rule for hint updates. BTW, it seems like you are putting in a lot of infrastructure to recreate externally the parse-stack state that bison has internally. Yes. Maybe an interesting alternative would be to look at touching that state directly. I realize that bison doesn't consider that part of their official API, but bison is a stable tool and seems unlikely to change much in the future. We could probably get away with it... Interesting, however: I would not like to break postgresql portability with other parser generators. It would be enough to reject the patch from my point of view;-) Using some non-standard undocumented API would not help other people who would like to touch the parser. It would not help me either, because I would have to learn the undocumented API;-) I really need to give a name to the state so as to be able to attach a hint. I cannot see how I could guess the state number given after GRANT just from the source, and without generating lots of conflicts. So I'll do some more programming, maybe over the week-end, and resubmit something soon. -- Fabien. ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [PATCHES] hint infrastructure setup (v3)
at or near 123 at character 15
HINT: schema name or AUTHORIZATION expected

CREATE SCHEMA sch foo;
ERROR: syntax error at or near foo at character 19
HINT: AUTHORIZATION or ... expected

CREATE SCHEMA sch AUTHORIZATION 123;
ERROR: syntax error at or near 123 at character 33
HINT: user id such as calvin expected

CREATE SCHEMA AUTHORIZATION 123;
ERROR: syntax error at or near 123 at character 29
HINT: user id such as calvin expected

CREATE SCHEMA sch AUTHORIZATION calvin CREATE foo;
ERROR: syntax error at or near foo at character 47
HINT: TABLE, INDEX, SEQUENCE, TRIGGER or VIEW expected

CREATE SCHEMA sch AUTHORIZATION calvin CREATE TABLE +;
ERROR: syntax error at or near + at character 53
HINT: table name... expected

CREATE SCHEMA sch AUTHORIZATION calvin CREATE TABLE foo() xxx;
ERROR: syntax error at or near xxx at character 59
HINT: schema statement list such as CREATE TABLE/INDEX/SEQUENCE/TRIGGER/VIEW or GRANT or ; expected

CREATE SCHEMA sch AUTHORIZATION calvin CREATE TABLE foo() CREATE TABLE bla() x;
ERROR: syntax error at or near x at character 78
HINT: schema statement list such as CREATE TABLE/INDEX/SEQUENCE/TRIGGER/VIEW or GRANT or ; expected

-- Fabien Coelho - [EMAIL PROTECTED] ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [PATCHES] hint infrastructure setup (v3)
Dear Tom, I would not like to break postgresql portability with other parser generators. I agree with Bruce's opinion that this is a nonissue. In particular I'd point out that the current grammar is already too big for any known yacc clone besides bison (even with bison you need a very recent version), and your proposed changes are going to make the grammar a whole lot larger yet, at least in terms of states and actions. Not really in terms of states. The states should basically be the same. However yes in terms of explicit states that are given explicit names. And definitely in terms of actions, as you say. Using some non-standard undocumented API would not help other people who would like to touch the parser. It would help them a *lot* if it meant that they didn't have to be aware of this stuff in order to do anything at all with the grammar. My fundamental objection to this proposal continues to be that it will make the grammar unreadable and unmaintainable. If we could push the error message generation issues into a subroutine of yyerror() and not touch the grammar rules at all, the chances of getting this accepted would rise about a thousand percent. Hmmm. I'm so much below zero! So I think maybe I won't resubmit an infrastructure patch with some hints for some commands, as it seems just a loss of time. The sample hints you show in another message still aren't impressing me much, I don't mean them to be impressive! It is not an expert system or a seer I'm planning. I just want them to be useful to my students! but in any case they look like they only depend on being able to see what grammar symbols can follow the current point. Yes. That's basically where the parser is when it generates an error: it is expecting something and cannot find it. I could do more, but with more effort. The last time I studied this stuff (which was quite a while back) the follow set was something an LR parser generator would have to compute anyway. Yes.
I don't know whether bison's internal tables expose that set in any useful fashion, but it surely seems worth a look. Having a look is something I can do;-) I'm afraid it looks like internal states 1232, 43425 and 42523, but there may be enough support in the generated code to get something more interesting. It would require being able to get textual meta information out of the state number, which is possible if the bison/flex people did something about it. I can remember something like that, somewhere deep in my memory... PS: another reason for doing it that way is that I suspect your current approach is a dead end anyhow. You've already hit one blocker problem where you got reduce/reduce errors when you tried to insert state-recording actions, and you've only gone as far as hinting the very first symbol, which is hardly one of the more complex parts of the grammar. These states exist anyway. Once the first keyword(s) are passed, there is much less trouble as the SQL syntax is really verbose, so it is usually hard to get lost. The only real problem I had past the first symbol, for the part I played with, is with SET [SESSION|LOCAL] SESSION AUTHORIZATION, which needed a real bit of refactoring. Also, it is better to avoid some factorizations if the semantics are different. For instance, user_list is used both for groups and users, so I switched during my tests to a user_list and a group_list. There are large parts of the grammar that only manage to avoid ambiguity by the skin of their teeth --- I don't believe you'll be able to do anything of this sort in those areas without breaking it. Take a look at SelectStmt and subsidiary productions as an example ... that's a fragile bit of work that took a long time to get right. I noticed that great care has been taken to avoid any ambiguity. I have no intention to change that. -- Fabien Coelho - [EMAIL PROTECTED] ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [PATCHES] hint infrastructure setup (v3)
Dear Tom, Not really in terms of states. The states should basically be the same. However yes in terms of explicit states that are given explicit names. And definitely in terms of actions, as you say. But mid-rule actions are implemented by inventing additional internal productions Sure. That's not only more states, but more symbols, which is going to impose an O(N^2) cost on the raw tabular representation of the parsing rules. Maybe much of this will be bought back when bison compresses the tables, and maybe not. Mmh. Maybe. I don't know at this point. Have you checked how much the size of gram.o grows with the stuff you've installed so far? I have not looked at that. It was just a quick and dirty implementation, just to convince myself of what can be achieved. (I'm also slightly worried about what this will do to parsing speed, Well, more reductions are performed, and I'm not sure that the switch() implementation is really intelligent. Having a hash table could help big grammars, but that is a bison problem, and I will not rewrite that. However, I'm not sure that parsing overhead is a big issue wrt other costs in the backend, but this is not a reason to make it explode. I'm afraid it looks like internal state 1232, 43425 and [...], The string names of the grammar symbols are all embedded in gram.c Yes, I've noticed yytname[]. anyway, so if you can figure out the symbols that are expected next, their names could be printed directly. That is done with YYERROR_VERBOSE, but the result is really poor most of the time, because it does not look for all possible terminals, just the ones easily accessible. So you get something like: DROP TABL foo; ERROR: syntax error at unexpected Ident TABL, LANGUAGE expected. No comment. A better job could be done, but you need to touch gram.c a little bit for that. We could alter the symbol names to be more useful to novices, or alternatively install an additional lookup table to substitute reader-friendly phrases.
Yep, I thought about that also. Some symbols are appended _P to avoid clashes. I'm investigating the internal way. Not really optimistic because of the details, but I may find workarounds. I'll post later when I'll have a definite opinion. -- Fabien COELHO _ http://www.cri.ensmp.fr/~coelho _ [EMAIL PROTECTED] CRI-ENSMP, 35, rue Saint-Honoré, 77305 Fontainebleau cedex, France phone: (+33|0) 1 64 69 {voice: 48 52, fax: 47 09, standard: 47 08} All opinions expressed here are mine _ ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [PATCHES] hint infrastructure setup (v3)
You *really* don't want to go there. If you want to see what the parser is doing you can run bison -r all over the grammar and examine the .output file. But please, let's not examine the internal states. Talk about unmaintainability! What I was suggesting was that we might be able to extract the follow set from bison's tables, ie, the set of grammar symbols that are legal next inputs given the current parse state stack. I surely agree that we don't want code that goes like if state is N then print message X ... but the follow set should be stable These are 2 different issues:

(1) computing the set of expected following tokens. It is possible to do that, with some processing of bison's tables.

(2) giving advice based on guesses. From what I have looked at so far, it could look like: I'm here for rule user_list and the previous token was ',', OR I'm here for select_expressions_list, the last token was ',' and the current token is FROM => HINT: remove previous comma.

I don't think that (2) would be a bad idea, especially for my students. The good point is that the cost would only be paid at error time. One way of describing Fabien's existing patch is that it's essentially keeping track of the follow set by hand :-( Yes. I give names to existing states, and states lead to expected follow tokens. Also, I suspect that bison does a good bit of optimisation by way of combining states that removes some of the information you might need, but I haven't looked into it closely. That could be a showstopper if true, but it's all speculation at this point. I think the information is there. -- Fabien Coelho - [EMAIL PROTECTED] ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
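Point (1) above — deriving the legal next tokens from the parser tables — can be sketched with a toy LR action table. This is an illustrative Python stand-in, not bison's actual encoding (bison packs the same information into its yytable/yycheck arrays); the states and tokens are invented for the example:

```python
# Toy LR action table: state -> {token: action}.
# The "follow set" of a state is simply the set of tokens for which an
# action (shift or reduce) is defined; everything else is a syntax error.
ACTIONS = {
    0: {"SELECT": "shift 1", "CREATE": "shift 2", "DROP": "shift 3"},
    2: {"TABLE": "shift 4", "INDEX": "shift 5", "SCHEMA": "shift 6"},
}

def expected_tokens(state):
    """Tokens that are legal next inputs in the given state."""
    return set(ACTIONS.get(state, {}))
```

On a syntax error while in state 2 of this toy table (i.e. just after CREATE), a hint generator would report `expected_tokens(2)`, namely TABLE, INDEX or SCHEMA, without any per-state hand-written message.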
[PATCHES] sound extract century/millennium date_part
Dear patchers, Please find a small patch to fix the brain-damaged century and millennium date part implementation in postgresql, both in the code and the documentation, so that it conforms to the official definition. If you do not agree with the official definition, please send your complaint to [EMAIL PROTECTED]. I'm not responsible for it;-) With the previous version, the centuries and millennia had the wrong number and started in the wrong year. Moreover century number 0, which does not exist in reality, lasted 200 years, and millennium number 0 lasted 2000 years. If you want postgresql to have its own definition of century and millennium that does not conform to the common one, just give them another name. I would suggest pgCENTURY and pgMILLENNIUM;-) IMO, if someone uses these options, it means that postgresql is used for historical data, so it makes sense to have a historical definition. Also, if I just want to divide the year by 100 or 1000, I can do that quite easily. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]

*** ./doc/src/sgml/func.sgml.orig	Wed Mar 31 08:58:31 2004
--- ./doc/src/sgml/func.sgml	Sun Apr 4 10:23:00 2004
***************
*** 4948,4965 ****
        <term><literal>century</literal></term>
        <listitem>
         <para>
!         The year field divided by 100
         </para>
  <screen>
! SELECT EXTRACT(CENTURY FROM TIMESTAMP '2001-02-16 20:38:40');
  <lineannotation>Result: </lineannotation><computeroutput>20</computeroutput>
  </screen>
         <para>
!         Note that the result for the century field is simply the year field
!         divided by 100, and not the conventional definition which puts most
!         years in the 1900's in the twentieth century.
         </para>
        </listitem>
       </varlistentry>
--- 4948,4978 ----
        <term><literal>century</literal></term>
        <listitem>
         <para>
!         The historical definition of a century.
         </para>
  <screen>
! SELECT EXTRACT(CENTURY FROM TIMESTAMP '2000-12-16 12:21:13');
  <lineannotation>Result: </lineannotation><computeroutput>20</computeroutput>
+ SELECT EXTRACT(CENTURY FROM TIMESTAMP '2001-02-16 20:38:40');
+ <lineannotation>Result: </lineannotation><computeroutput>21</computeroutput>
  </screen>
         <para>
!         An historical century is a period of 100 years.
!         The first century starts at 0001-01-01 00:00:00 AD, although
!         they did not know at the time. This definition applies to all
!         Gregorian calendar countries. There is no number 0 century,
!         you go from -1 to 1.
! 
!         If you disagree with this, please write your complaint to:
!         Pope, Cathedral Saint-Peter of Roma, Vatican.
!        </para>
! 
!        <para>
!         Compatibility: if you want the previous postgres version of century,
!         just divide the year by 100. Note that with this definition,
!         century number 0 lasts 200 years.
         </para>
        </listitem>
       </varlistentry>
***************
*** 5083,5100 ****
        <term><literal>millennium</literal></term>
        <listitem>
         <para>
!         The year field divided by 1000
         </para>
  <screen>
  SELECT EXTRACT(MILLENNIUM FROM TIMESTAMP '2001-02-16 20:38:40');
! <lineannotation>Result: </lineannotation><computeroutput>2</computeroutput>
  </screen>
         <para>
!         Note that the result for the millennium field is simply the year field
!         divided by 1000, and not the conventional definition which puts
!         years in the 1900's in the second millennium.
         </para>
        </listitem>
       </varlistentry>
--- 5096,5112 ----
        <term><literal>millennium</literal></term>
        <listitem>
         <para>
!         The conventional historical millennium.
         </para>
  <screen>
  SELECT EXTRACT(MILLENNIUM FROM TIMESTAMP '2001-02-16 20:38:40');
! <lineannotation>Result: </lineannotation><computeroutput>3</computeroutput>
  </screen>
         <para>
!         Years in the 1900's are in the second millennium.
!         The third millennium starts January 1, 2001.
         </para>
        </listitem>
       </varlistentry>
*** ./src/backend/utils/adt/timestamp.c.orig	Wed Mar 31 08:58:40 2004
--- ./src/backend/utils/adt/timestamp.c	Sun Apr 4 10:45:59 2004
***************
*** 3273,3283 ****
  			break;
  
  		case DTK_CENTURY:
! 			result = (tm->tm_year / 100);
  			break;
  
  		case DTK_MILLENNIUM:
! 			result = (tm->tm_year / 1000);
  			break;
  
  		case DTK_JULIAN:
--- 3273,3295 ----
  			break;
  
  		case DTK_CENTURY:
! 			/* centuries AD, c > 0: year in [ (c-1)*100+1 : c*100 ]
! 			 * centuries BC, c < 0: year in [ c*100 : (c+1)*100-1 ]
! 			 * there is no number 0 century
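The convention in that comment — the first century is years 1 through 100, there is no century 0, and BC centuries are negative — can be checked with a quick sketch. This is Python rather than the patch's C, so floored integer division is used and the BC branch is written differently than C's truncating division would require:

```python
def century(year):
    """Gregorian century number: 2000 -> 20, 2001 -> 21, 1 -> 1.
    There is no year 0 and no century 0; years -1..-100 are century -1."""
    if year > 0:
        return (year - 1) // 100 + 1     # AD: c covers [(c-1)*100+1, c*100]
    return -((-year - 1) // 100 + 1)     # BC: c covers [c*100, (c+1)*100-1]

def millennium(year):
    """Same rule with 1000-year periods: 1999 -> 2, 2001 -> 3."""
    if year > 0:
        return (year - 1) // 1000 + 1
    return -((-year - 1) // 1000 + 1)
```

These reproduce the patched EXTRACT examples: the century of 2000-12-16 is 20, of 2001-02-16 is 21, and the millennium of 2001 is 3.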
Re: [PATCHES] hint infrastructure setup (v3)
it is a real option that old postgresql source would be broken against future bison releases. So, as far as I'm concerned, I don't find the internal way really convincing as the way to go. Thus I can see three options:

(a) put hints in gram.y as I already suggested
+ I can do it
+ it can be incremental once the infrastructure is in place
+ hints are simple to add or fix
= hints are not necessarily marvellous
- it would take time to develop
- it may slow down the parser, because of added production rules
- it would change gram.y a lot and could make it harder to maintain
- patchers don't like it...

(b) write a new recursive descendant parser, and drop gram.y
+ I could do it
+ hints could be more intelligent
+ I think the parser would be faster, at least not slower
= I'm not sure it would be really easier to maintain, but maybe not harder
- it cannot be incremental at all
- hints come at the price of some programming
- it would take time to develop, but basically the structure would look a lot like gram.y, and existing data structures can be kept
- I'm not sure patchers would like it, and if it is thrown down the drain, I would not be happy for the one who spent the time (say, me;-)

(c) do nothing
+ I can do that quite easily;-)
+ patchers are ok with that
+ it does not take time
= the maintainability of gram.y is not changed
- my students won't have hints.

A funny technical detail is that the refactoring which is needed for option (a) would also be needed for (b). (b) would require more time than (a), but the results could be better for the hint/parser error handling. I can have some time for this, especially over the next few months. Incremental options looked a better way to me, but (a) seems stuck, (b) is risky, and the internal way looks ugly. As a side effect of my inspection, I noticed that the automaton generated by bison could be simplified if a different tradeoff between the lexer, the parser and the post-processing were chosen. Namely, all tokens that are just identifiers should be dropped and processed differently. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED] ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [PATCHES] hint infrastructure setup (v3)
As another teaching aid idea, is there something that would auto-format queries so they are more readable? Not that I know of. But I guess it must exist in some interface, maybe with Oracle or DB2. My emacs does not seem to do that; it just puts colors on keywords. If I had to put it somewhere, I would try to enhance emacs sql-mode. But I don't like lisp programming much. -- Fabien Coelho - [EMAIL PROTECTED] ---(end of broadcast)--- TIP 8: explain analyze is your friend
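A training-wheels formatter of the kind being asked about could be no more than keyword uppercasing plus a line break before each major clause. This is a toy sketch, nothing like a production pretty-printer — it ignores string literals, comments and subqueries entirely:

```python
import re

# Clauses that should start a new line in the formatted output.
MAJOR_CLAUSES = ("FROM", "WHERE", "GROUP BY", "ORDER BY", "HAVING")
# Keywords to uppercase (deliberately tiny; a real tool would use the
# full keyword list from the parser).
KEYWORDS = ("select", "from", "where", "group by", "order by",
            "having", "and", "or", "as")

def toy_format(query):
    """Uppercase known keywords and start major clauses on a new line."""
    for kw in KEYWORDS:
        query = re.sub(r"\b%s\b" % kw, kw.upper(), query,
                       flags=re.IGNORECASE)
    for clause in MAJOR_CLAUSES:
        query = query.replace(" " + clause, "\n" + clause)
    return query
```

For example, `toy_format("select a from t where b")` yields `SELECT a`, `FROM t`, `WHERE b` on three lines. Anything smarter quickly needs a real parse, which loops back to the hint discussion.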
Re: [PATCHES] hint infrastructure setup (v3)
Dear Tom, No, just redefine yyerror as a macro that passes additional parameters. Ok! That's just the kind of hack-hook I was looking for!! The terminals are not available, they must be kept separately if you want them. This can be done in yylex(). We don't need them. Any already-shifted tokens are to the left of where the error is, no? Yes and no. I agree that you don't need them for computing the follow set. However it would help a lot to generate nicer hints. I'm looking to help the user, and the follow set is just part of the help. Think of the typical SELECT foo, bla, FROM xxx (it looks stupid on a one-line query, but not so on a very large query) which is quite easier to detect and hint about because FROM is just after a comma. The rule part is also a real issue. There is no easy way to translate states into rules, to know whether we're somewhere in a ShowStmt or AlterTableStmt... If you're after a comma in state 1232, should you just say IDENT... I'd rather say user name or table name if I can... Otherwise the hint stays at a low parser level. Good hints require knowing about the current context, and it is not easy to get that deep in the automaton. Yeah, I had come to the same conclusion --- state moves made without consuming input would need to be backed out if we wanted to report the complete follow set. I haven't yet looked to see how to do that. Well, following your previous suggestion, one can redefine the YYLEX macro to save the current state each time a token is required. As you noted, for things like SHOW 123, the follow set basically includes all keywords although you can have SHOW ALL or SHOW name. So, as you suggested, you can either say ident as a simplification, but You're ignoring the distinction between classes of keywords. I would not recommend treating reserved_keywords as a subclass of ident. Ok, I agree that it can help a little bit in this very case because ALL is reserved.
I did not make this distinction when I played with bison. (5) anything that can be done would be hardwired to one version of bison. I think this argument is completely without merit. Possible;-) However I don't like writing hacks that depend on other people's future behavior. (b) write a new recursive descendant parser, and drop gram.y Been there, done that, not impressed with the idea. There's a reason why people invented parser generators... Sure. It was just a list of options;-) As a side effect of my inspection is that the automaton generated by bison could be simplified if a different tradeoff between the lexer, the parser and the post-processing were chosen. Namely, all tokens that are just identifiers should be dropped and processed differently. We are not going to reserve the keywords that are presently unreserved. Reserving keywords would help, but I would not think it could be accepted, because it is not SQL philosophy. SQL is about 30 years old, as old as yacc ideas... but they did not care at the time;-) When you look at the syntax, it was designed with a recursive parser in mind. Part of my comment was to explain why the generated automaton is large. SQL is a small language, but it has a big automaton in postgresql, and this is IMHO the explanation. If you can think of a reasonable way to stop treating them as separate tokens inside the grammar without altering the user-visible behavior, I'm certainly interested. I was thinking about type names especially, and maybe others. I join a small proof-of-concept patch to drop some tokens out of the parser. As a result, 6 tokens, 6 rules and 9 states are removed, and the automaton table is reduced by 438 entries (1.5%). Not too bad;-) I think others could be dealt with similarly, or with more work.
Thanks for the discussion, -- Fabien.

*** ./src/backend/parser/gram.y.orig	Mon Apr 5 12:06:42 2004
--- ./src/backend/parser/gram.y	Mon Apr 5 18:21:21 2004
***************
*** 336,343 ****
  	AGGREGATE ALL ALSO ALTER ANALYSE ANALYZE AND ANY ARRAY AS ASC
  	ASSERTION ASSIGNMENT AT AUTHORIZATION
  
! 	BACKWARD BEFORE BEGIN_P BETWEEN BIGINT BINARY BIT
! 	BOOLEAN_P BOTH BY
  
  	CACHE CALLED CASCADE CASE CAST CHAIN CHAR_P
  	CHARACTER CHARACTERISTICS CHECK CHECKPOINT CLASS CLOSE
--- 336,343 ----
  	AGGREGATE ALL ALSO ALTER ANALYSE ANALYZE AND ANY ARRAY AS ASC
  	ASSERTION ASSIGNMENT AT AUTHORIZATION
  
! 	BACKWARD BEFORE BEGIN_P BETWEEN BINARY BIT
! 	BOTH BY
  
  	CACHE CALLED CASCADE CASE CAST CHAIN CHAR_P
  	CHARACTER CHARACTERISTICS CHECK CHECKPOINT CLASS CLOSE
***************
*** 362,368 ****
  	ILIKE IMMEDIATE IMMUTABLE IMPLICIT_P IN_P INCLUDING INCREMENT
  	INDEX INHERITS INITIALLY INNER_P INOUT INPUT_P
! 	INSENSITIVE INSERT INSTEAD INT_P INTEGER INTERSECT
  	INTERVAL INTO INVOKER IS ISNULL ISOLATION
  
  	JOIN
--- 362,368 ----
  	ILIKE IMMEDIATE IMMUTABLE
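The trade-off the patch demonstrates — stop lexing type names like BIGINT and BOOLEAN as dedicated tokens, return them as plain identifiers, and resolve them later where a type name is expected — can be sketched as follows. This is an illustrative Python stand-in for the lexer/keyword-table pair; the word lists and mappings are invented for the example and are nothing like the full PostgreSQL keyword list:

```python
# Reserved words must remain real tokens: the grammar's structure depends
# on them.  Unreserved type names can come back as generic IDENT tokens
# and be mapped to a type only in type-name position, which shrinks the
# parser automaton (fewer tokens, fewer rules, fewer states).
RESERVED = {"select", "from", "where", "create", "table"}
TYPE_NAMES = {"int": "int4", "integer": "int4", "bigint": "int8",
              "smallint": "int2", "boolean": "bool"}

def lex(word):
    """Classify one word: a reserved keyword token, or a generic IDENT."""
    w = word.lower()
    return ("KEYWORD", w.upper()) if w in RESERVED else ("IDENT", w)

def resolve_type(ident):
    """Later, in a type-name position, map the identifier to a type name.
    User-defined type names simply pass through unchanged."""
    return TYPE_NAMES.get(ident, ident)
```

The point of the split is that the renaming happens only where a type name can legally occur, so dropping the tokens does not change what the parser accepts — which is exactly the claim debated in the follow-up messages.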
Re: [PATCHES] hint infrastructure setup (v3)
Dear Tom, I join a small proof-of-concept patch to drop some tokens out of the parser. I believe these were treated this way *specifically* because of the keyword-is-not-an-identifier issue. SQL99 calls out most of these as being keywords: Well, I think that the reserved keywords are fine as tokens in a lexer/parser, but think that the unreserved keywords should be dropped from token status if possible. and if we don't treat them as keywords then we will have a couple of problems. One is case-conversion issues in locales where the standard downcasing is not an extension of ASCII (Turkish is the only one I know of offhand). Do you mean it should use an ASCII-only strcasecmp, not a possibly localised version? I agree, but this is just a proof-of-concept patch to show that you don't need so many tokens in the parser. Another is that depending on where you put the renaming that this patch removes without replacing :-(, I do not understand your point. It seems to me that the renaming is performed when a type name is expected? The boolean keyword (not token) is translated to the system bool type in the GenericType rule?? ??? it would be possible for the renaming transformation to get applied to user-defined types with similar names, or for user-defined types to unexpectedly shadow system definitions. I don't think that the patch changes the result of the parsing. It drops *TOKENS* out of the lexer, but they are still *KEYWORDS*, although they are not explicitly in the lexer list. keyword.c deals with tokens, the file name was ill-chosen. If you think that keywords can only be lexical tokens, then you end up with an automaton larger than necessary, IMVHO. Note that the removed tokens are still keywords as they are treated *especially* anyway. It is not a semantical transformation. Also, if you don't want these names as candidate function names, they could be filtered out at some other point in the parser. They really don't need to be special tokens.
My point is that you can have the very same *semantic* result with a smaller automaton if you choose a different trade-off within the lexer/parser/post-filtering. I don't want to change the language. The former would be surprising and the latter would violate the spec. I'm really not sure this is the case with the patch I sent. Check the archives; I'm sure this was discussed in the 7.3 development cycle and we concluded that treating these names as keywords was the only practical solution. Hmmm... I can check the archives, but I cannot see how the language is any different with the patch. Maybe there is a missing filter, or strcasecmp is not the right version, but no more than that. I think it is a small technical issue in the parser internals, and has nothing to do with great principles about whether this or that is a keyword. It's about which keywords need to be tokens. -- Fabien Coelho - [EMAIL PROTECTED]
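The case-conversion concern raised above can be sidestepped with an ASCII-only downcasing comparison that is unaffected by the current locale (the Turkish dotless-i being the classic counterexample). A minimal sketch of the idea; the function names and keyword list here are mine, not PostgreSQL's:

```c
#include <assert.h>
#include <stddef.h>

/* ASCII-only lower-casing: unlike tolower(), this is not affected by
 * the current locale (e.g. the Turkish dotless-i problem). */
static char ascii_tolower(char c)
{
    return (c >= 'A' && c <= 'Z') ? (char) (c - 'A' + 'a') : c;
}

/* Case-insensitive comparison restricted to ASCII case folding. */
int ascii_strcasecmp(const char *a, const char *b)
{
    for (;; a++, b++)
    {
        int d = (unsigned char) ascii_tolower(*a)
              - (unsigned char) ascii_tolower(*b);
        if (d != 0 || *a == '\0')
            return d;
    }
}

/* Is "word" one of the keywords handled outside the lexer?
 * (Illustrative subset only.) */
int is_unreserved_keyword(const char *word)
{
    static const char *keywords[] = { "boolean", "bigint", "smallint", "real" };
    size_t i;

    for (i = 0; i < sizeof(keywords) / sizeof(keywords[0]); i++)
        if (ascii_strcasecmp(word, keywords[i]) == 0)
            return 1;
    return 0;
}
```

With such a lookup, keyword recognition stays deterministic regardless of locale, which is the prerequisite for moving keywords out of the token list.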
Re: [PATCHES] hint infrastructure setup (v3)
(b) write a new recursive descendant parser, and drop gram.y er, that's recursive descent :-) Sorry for my French version. Well, unless you are a serious masochist, I'm not a masochist ;-) I'm arguing about where hints should/could be put. In fact, considering this thread I'm not sure any of the suggested steps will achieve Fabien's goal. ISTM that a smarter training-wheels frontend, perhaps with some modest backend improvements, is likely to have better results. (Oh, you found an error near there - now what do I suggest belongs there?) I cannot see what you're suggesting practically as a frontend. I don't think having another parser next to the first one for better error messages is a serious option. I would not like to have another parser that needs to be kept synchronized with the first one. So either it is integrated or linked with the current parser, or it is not there? Out of the parser, the only information is the offending token (embedded in the error string) and the character number in the string, which is quite a small clue from which to give a hint. -- Fabien Coelho - [EMAIL PROTECTED]
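Small as that clue is, the offending token's character position is enough for a visual hint of the kind client frontends print. A rough sketch of the idea, not the actual backend or psql code; the function name is mine:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build a two-line hint: the query text, then a caret under the
 * error position (1-based character offset, as reported in
 * PostgreSQL error messages). Returns NULL on bad input. */
char *make_caret_hint(const char *query, int position)
{
    int len = (int) strlen(query);
    char *hint;
    int i;

    if (position < 1 || position > len)
        return NULL;

    /* query + '\n' + (position-1) spaces + '^' + '\0' */
    hint = malloc((size_t) len + (size_t) position + 3);
    if (hint == NULL)
        return NULL;

    strcpy(hint, query);
    hint[len] = '\n';
    for (i = 0; i < position - 1; i++)
        hint[len + 1 + i] = ' ';
    hint[len + position] = '^';
    hint[len + position + 1] = '\0';
    return hint;
}
```

Anything smarter than pointing at the position, e.g. suggesting what token was expected, indeed needs information that only the parser itself has, which is the crux of the thread.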
Re: [PATCHES] hint infrastructure setup (v3)
Dear Tom, In particular you can't any longer tell the difference between BOOLEAN and "boolean" (with quotes), which are not the same thing --- a quoted string is never a keyword, per spec. [...] Ok, so you mean that on boolean the lexer returns a BOOLEAN_P token, but with "boolean" it returns an IDENT with boolean as the lval. Indeed, in such a case I cannot recognize boolean vs "boolean" if they are both idents that look the same. As a matter of fact, this can also be fixed with some post-filtering. Say, all quoted idents could be returned with a leading " to show that they were double-quoted, and the IDENT rules in the parser could remove it when it is no longer needed to distinguish the cases. Not beautiful, I agree, but my point is that the current number of tokens, number of states and automaton size are not inherent to SQL but to the way the lexing/parsing is performed in postgresql. The basic point here is that eliminating tokens as you propose will result in small changes in behavior, none of which are good or per spec. Making the parser automaton smaller would be nice, but not at that price. Ok. I don't want to change the spec. I still stand by the claim that it can be done, although some more tweaking is required. It was just a proof of concept, not a patch submission. Well, a proof of concept must still be a proof ;-) I attach a small patch that solves the boolean vs "boolean" issue, still as a proof of concept that it is 'doable' to preserve semantics with a different lexer/parser balance. I don't claim that it should be applied; I just claim that the automaton size could be smaller, especially by shortening the unreserved_keyword list. You have not proven that you can have the same result. Well, I passed the regression tests, but that does not indeed prove anything, because these issues are not tested at all. Maybe you could consider adding the regression part of the attached patch, which creates a user boolean type.
Anyway, my motivation is about hints and advises, and that does not help a lot to solve these issues. -- Fabien.*** ./src/backend/parser/gram.y.origTue Apr 6 18:15:39 2004 --- ./src/backend/parser/gram.y Tue Apr 6 17:56:46 2004 *** *** 95,100 --- 95,102 static Node *doNegate(Node *n); static void doNegateFloat(Value *v); + #define clean_dqname(n) (((*(n))!='')? (n): pstrdup((n)+1)) + %} *** *** 336,343 AGGREGATE ALL ALSO ALTER ANALYSE ANALYZE AND ANY ARRAY AS ASC ASSERTION ASSIGNMENT AT AUTHORIZATION ! BACKWARD BEFORE BEGIN_P BETWEEN BIGINT BINARY BIT ! BOOLEAN_P BOTH BY CACHE CALLED CASCADE CASE CAST CHAIN CHAR_P CHARACTER CHARACTERISTICS CHECK CHECKPOINT CLASS CLOSE --- 338,345 AGGREGATE ALL ALSO ALTER ANALYSE ANALYZE AND ANY ARRAY AS ASC ASSERTION ASSIGNMENT AT AUTHORIZATION ! BACKWARD BEFORE BEGIN_P BETWEEN BINARY BIT ! BOTH BY CACHE CALLED CASCADE CASE CAST CHAIN CHAR_P CHARACTER CHARACTERISTICS CHECK CHECKPOINT CLASS CLOSE *** *** 362,368 ILIKE IMMEDIATE IMMUTABLE IMPLICIT_P IN_P INCLUDING INCREMENT INDEX INHERITS INITIALLY INNER_P INOUT INPUT_P ! INSENSITIVE INSERT INSTEAD INT_P INTEGER INTERSECT INTERVAL INTO INVOKER IS ISNULL ISOLATION JOIN --- 364,370 ILIKE IMMEDIATE IMMUTABLE IMPLICIT_P IN_P INCLUDING INCREMENT INDEX INHERITS INITIALLY INNER_P INOUT INPUT_P ! INSENSITIVE INSERT INSTEAD INTERSECT INTERVAL INTO INVOKER IS ISNULL ISOLATION JOIN *** *** 386,398 PRECISION PRESERVE PREPARE PRIMARY PRIOR PRIVILEGES PROCEDURAL PROCEDURE ! READ REAL RECHECK REFERENCES REINDEX RELATIVE_P RENAME REPEATABLE REPLACE RESET RESTART RESTRICT RETURNS REVOKE RIGHT ROLLBACK ROW ROWS RULE SCHEMA SCROLL SECOND_P SECURITY SELECT SEQUENCE SERIALIZABLE SESSION SESSION_USER SET SETOF SHARE ! SHOW SIMILAR SIMPLE SMALLINT SOME STABLE START STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P SUBSTRING SYSID TABLE TEMP TEMPLATE TEMPORARY THEN TIME TIMESTAMP --- 388,400 PRECISION PRESERVE PREPARE PRIMARY PRIOR PRIVILEGES PROCEDURAL PROCEDURE ! 
READ RECHECK REFERENCES REINDEX RELATIVE_P RENAME REPEATABLE REPLACE RESET RESTART RESTRICT RETURNS REVOKE RIGHT ROLLBACK ROW ROWS RULE SCHEMA SCROLL SECOND_P SECURITY SELECT SEQUENCE SERIALIZABLE SESSION SESSION_USER SET SETOF SHARE ! SHOW SIMILAR SIMPLE SOME STABLE START STATEMENT STATISTICS STDIN STDOUT STORAGE STRICT_P SUBSTRING SYSID TABLE TEMP TEMPLATE TEMPORARY THEN TIME TIMESTAMP *** *** 959,965 } | IDENT { ! $$ = makeStringConst($1, NULL);
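The clean_dqname() macro in the patch above is the key trick: quoted identifiers carry a leading marker character that later rules strip once the quoted/unquoted distinction no longer matters. A standalone model of the scheme (my helper names; the patch uses pstrdup and a macro, and the marker character appears to have been `"` before it was lost in transit):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* The lexer side: return quoted identifiers with a leading '"' so
 * later rules can still tell boolean (keyword candidate) from
 * "boolean" (a plain user identifier). */
char *mark_dquoted(const char *name)
{
    char *marked = malloc(strlen(name) + 2);

    if (marked == NULL)
        return NULL;
    marked[0] = '"';
    strcpy(marked + 1, name);
    return marked;
}

/* The parser side, in the spirit of the patch's clean_dqname():
 * drop the marker once the distinction is no longer needed. */
const char *clean_dqname(const char *name)
{
    return (name[0] != '"') ? name : name + 1;
}
```

Both marked and unmarked names come out of clean_dqname() as the same plain identifier, so downstream code is unaffected, while the keyword-renaming rules can refuse to fire on the marked form.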
[PATCHES] pg_restore ignore error patch
Dear patchers, please find a small patch submission so that pg_restore ignores some sql errors. The implementation seems quite reasonnable to me, but pg-gods may have a different opinion. Two fields are added to the ArchiveHandler to trigger the behavior. A count summary of ignored sql errors is printed at the end of the restoration. I did not fixed the set session auth attempt as option -O can already be used to avoid it. It validates for me, but it seems that pg_dump/pg_restore is just *NOT* tested at all in the validation:-(. So at least I tested it. My tests suggest that a feature of pg_restore is that it is only expected to work with its own server. Indeed, it generates some new syntax, such as $$ quoting, which is not compatible with older servers. This fact does not seem to appear in the documentation. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]*** ./src/bin/pg_dump/pg_backup.h.orig Wed Mar 24 17:19:43 2004 --- ./src/bin/pg_dump/pg_backup.h Fri Apr 9 18:39:50 2004 *** *** 57,62 --- 57,67 int remoteVersion; int minRemoteVersion; int maxRemoteVersion; + + /* error handling */ + bool die_on_errors; /* whether to die on sql errors... */ + int n_errors; /* number of errors (if no die) */ + /* The rest is private */ } Archive; *** ./src/bin/pg_dump/pg_backup_archiver.c.orig Wed Mar 24 17:19:43 2004 --- ./src/bin/pg_dump/pg_backup_archiver.c Fri Apr 9 19:10:26 2004 *** *** 1197,1202 --- 1197,1218 va_end(ap); } + /* on some error, we may decide to go on... */ + void + warn_or_die_horribly(ArchiveHandle *AH, +const char *modulename, const char *fmt, ...) 
+ { + va_list ap; + va_start(ap, fmt); + if (AH-public.die_on_errors) + _die_horribly(AH, modulename, fmt, ap); + else + { + _write_msg(modulename, fmt, ap); + AH-public.n_errors++; + } + va_end(ap); + } static void _moveAfter(ArchiveHandle *AH, TocEntry *pos, TocEntry *te) *** *** 1651,1656 --- 1667,1676 die_horribly(AH, modulename, unrecognized file format \%d\\n, fmt); } + /* sql error handling */ + AH-public.die_on_errors = true; + AH-public.n_errors = 0; + return AH; } *** *** 2042,2049 res = PQexec(AH-connection, cmd-data); if (!res || PQresultStatus(res) != PGRES_COMMAND_OK) ! die_horribly(AH, modulename, could not set default_with_oids: %s, !PQerrorMessage(AH-connection)); PQclear(res); } --- 2062,2070 res = PQexec(AH-connection, cmd-data); if (!res || PQresultStatus(res) != PGRES_COMMAND_OK) ! warn_or_die_horribly(AH, modulename, !could not set default_with_oids: %s, ! PQerrorMessage(AH-connection)); PQclear(res); } *** *** 2181,2188 res = PQexec(AH-connection, qry-data); if (!res || PQresultStatus(res) != PGRES_COMMAND_OK) ! die_horribly(AH, modulename, could not set search_path to \%s\: %s, !schemaName, PQerrorMessage(AH-connection)); PQclear(res); } --- 2202,2210 res = PQexec(AH-connection, qry-data); if (!res || PQresultStatus(res) != PGRES_COMMAND_OK) ! warn_or_die_horribly(AH, modulename, !could not set search_path to \%s\: %s, !schemaName, PQerrorMessage(AH-connection)); PQclear(res); } *** ./src/bin/pg_dump/pg_backup_archiver.h.orig Wed Mar 24 17:19:43 2004 --- ./src/bin/pg_dump/pg_backup_archiver.h Fri Apr 9 18:55:50 2004 *** *** 281,286 --- 281,287 extern const char *progname; extern void die_horribly(ArchiveHandle *AH, const char *modulename, const char *fmt,...) __attribute__((format(printf, 3, 4))); + extern void warn_or_die_horribly(ArchiveHandle *AH, const char *modulename, const char *fmt,...) __attribute__((format(printf, 3, 4))); extern void write_msg(const char *modulename, const char *fmt,...) 
__attribute__((format(printf, 2, 3))); extern void WriteTOC(ArchiveHandle *AH); *** ./src/bin/pg_dump/pg_backup_db.c.orig Wed Mar 3 22:28:54 2004 --- ./src/bin/pg_dump/pg_backup_db.cFri Apr
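The heart of the patch is a single choice point: on an SQL error, either die as before or report, count, and continue. Shorn of the archiver plumbing, the control flow looks roughly like this (a simplified model with my own names, not the actual pg_restore code):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal stand-in for the two Archive fields the patch adds. */
typedef struct
{
    int die_on_errors;  /* whether to die on sql errors */
    int n_errors;       /* number of errors seen so far (if no die) */
} ToyArchive;

/* Like the patch's warn_or_die_horribly(): fatal by default, but
 * merely prints and counts the error when die_on_errors is off. */
void warn_or_die(ToyArchive *AH, const char *fmt, ...)
{
    va_list ap;

    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);

    if (AH->die_on_errors)
        exit(1);
    AH->n_errors++;
}
```

At the end of the run, a nonzero n_errors can drive both the "errors ignored" summary and a nonzero exit code, as Tom suggested.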
Re: [PATCHES] pg_restore ignore error patch
My tests suggest that a feature of pg_restore is that it is only expected to work with its own server. Indeed, it generates some new syntax, such as $$ quoting, which is not compatible with older servers. This fact does not seem to appear in the documentation. Wrong. It does appear in the documentation, and it can also be disabled. Good! http://developer.postgresql.org/docs/postgres/app-pgdump.html says: I looked briefly at the *pg_restore* documentation, as I was interested in pg_restore. But the logic is just mine ;-) Well, it means that you must decide at dump time whether you may have to restore to an older version. I can guess why it eased the implementation, but it does not look good. -- Fabien Coelho - [EMAIL PROTECTED]
Re: [PATCHES] pg_restore ignore error patch
please find a small patch submission so that pg_restore ignores some sql errors. Yeah, we've been talking about doing that for awhile. But please define "some errors" --- what do you ignore exactly? Connection errors are not ignored. The error is triggered directly by the execute sql function, so it is hard to know what is going on there. There are 3 instances. I've skipped the set session authorization stuff, but the 2 others are filtered. A more complete implementation would allow ignoring errors on language restoration or things like that, or initial cleanups... But that would require changing the current code structure a lot, so as to avoid direct exits... Moreover, it does not look really necessary from my point of view. I just aim at having direct pg_restore behave as pg_restore|psql.

+ if (AH->n_errors)
+     /* translator: %s stands for "error" or "errors" */
+     fprintf(stderr, _("warning: %d %s ignored on restore\n"),
+             /* translator: in sentence "warning: 123 errors ignored..." */
+             AH->n_errors, AH->n_errors > 1 ? _("errors") : _("error"));

Please read the message style guidelines: the above goes directly against the advice for writing translatable messages. Ok. Also, it might be wise to return a nonzero exit code when any errors are ignored. I'm not sure about that, but it might be best to err on the side of caution... Ok. I'll resubmit. -- Fabien Coelho - [EMAIL PROTECTED]
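The style objection is that assembling a sentence from fragments like _("errors") cannot be translated correctly: plural rules differ across languages, so the guidelines ask for whole messages with plural selection delegated to ngettext(). A sketch of the recommended shape, using a stub in place of the real gettext machinery so it stays self-contained:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for ngettext(): picks singular or plural form by count.
 * The real gettext additionally lets translators supply their own
 * language-specific plural rules, which fragment assembly cannot. */
static const char *ngettext_stub(const char *sing, const char *plur,
                                 unsigned long n)
{
    return (n == 1) ? sing : plur;
}

/* Whole-sentence messages, as the message style guidelines require. */
int format_restore_warning(char *buf, size_t size, int n_errors)
{
    return snprintf(buf, size,
                    ngettext_stub("WARNING: %d error ignored on restore\n",
                                  "WARNING: %d errors ignored on restore\n",
                                  (unsigned long) n_errors),
                    n_errors);
}
```

Each complete sentence becomes one translatable unit, so a translator never has to make "error"/"errors" agree with a sentence they cannot see.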
Re: [PATCHES] pg_restore ignore error patch
Dear Andrew, Well, it means that you must decide at dump time whether you may have to restore to an older version. I can guess why it eased the implementation, but it does not look good. You want to be able to backup from version x to version y for some y < x? We support upgrades, not downgrades. What doesn't look good about that? Ok, I understand your comment about upgrading. However, most of the time I use dump/restore as a backup/transfer facility, and seldom as an upgrade tool. I transfer some data from one server to another. On such occasions, I use my laptop to connect to server X, I download the data, then I restore it to server Y. The versions on my laptop and both servers are likely to be different. My laptop is likely to have some development version, and the servers may be on 7.3 or 7.4. This may not be the intended use of pg_dump and pg_restore, but this is what I do. Hence it would be nice if pg_restore could restore to older versions. Hence my comment that it does not look good. It is just a question of perspective. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]
[PATCHES] pg_restore ignore error patch v2
Dear patchers, please find attached a small patch so that pg_restore ignores some sql errors. This is the second submission, which integrates Tom comments about localisation and exit code. I also added some comments about one sql command which is not ignored. I tested it. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]*** ./src/bin/pg_dump/pg_backup.h.orig Wed Mar 24 17:19:43 2004 --- ./src/bin/pg_dump/pg_backup.h Sat Apr 10 13:04:21 2004 *** *** 57,62 --- 57,67 int remoteVersion; int minRemoteVersion; int maxRemoteVersion; + + /* error handling */ + bool die_on_errors; /* whether to die on sql errors... */ + int n_errors; /* number of errors (if no die) */ + /* The rest is private */ } Archive; *** ./src/bin/pg_dump/pg_backup_archiver.c.orig Wed Mar 24 17:19:43 2004 --- ./src/bin/pg_dump/pg_backup_archiver.c Sat Apr 10 13:09:55 2004 *** *** 1197,1202 --- 1197,1220 va_end(ap); } + /* on some error, we may decide to go on... */ + void + warn_or_die_horribly(ArchiveHandle *AH, +const char *modulename, const char *fmt, ...) + { + va_list ap; + va_start(ap, fmt); + if (AH-public.die_on_errors) + { + _die_horribly(AH, modulename, fmt, ap); + } + else + { + _write_msg(modulename, fmt, ap); + AH-public.n_errors++; + } + va_end(ap); + } static void _moveAfter(ArchiveHandle *AH, TocEntry *pos, TocEntry *te) *** *** 1651,1656 --- 1669,1678 die_horribly(AH, modulename, unrecognized file format \%d\\n, fmt); } + /* sql error handling */ + AH-public.die_on_errors = true; + AH-public.n_errors = 0; + return AH; } *** *** 2011,2016 --- 2033,2039 res = PQexec(AH-connection, cmd-data); if (!res || PQresultStatus(res) != PGRES_COMMAND_OK) + /* NOT warn_or_die_horribly... use -O instead to skip this. */ die_horribly(AH, modulename, could not set session user to \%s\: %s, user, PQerrorMessage(AH-connection)); *** *** 2042,2049 res = PQexec(AH-connection, cmd-data); if (!res || PQresultStatus(res) != PGRES_COMMAND_OK) ! 
die_horribly(AH, modulename, could not set default_with_oids: %s, !PQerrorMessage(AH-connection)); PQclear(res); } --- 2065,2073 res = PQexec(AH-connection, cmd-data); if (!res || PQresultStatus(res) != PGRES_COMMAND_OK) ! warn_or_die_horribly(AH, modulename, !could not set default_with_oids: %s, ! PQerrorMessage(AH-connection)); PQclear(res); } *** *** 2181,2188 res = PQexec(AH-connection, qry-data); if (!res || PQresultStatus(res) != PGRES_COMMAND_OK) ! die_horribly(AH, modulename, could not set search_path to \%s\: %s, !schemaName, PQerrorMessage(AH-connection)); PQclear(res); } --- 2205,2213 res = PQexec(AH-connection, qry-data); if (!res || PQresultStatus(res) != PGRES_COMMAND_OK) ! warn_or_die_horribly(AH, modulename, !could not set search_path to \%s\: %s, !schemaName, PQerrorMessage(AH-connection)); PQclear(res); } *** ./src/bin/pg_dump/pg_backup_archiver.h.orig Wed Mar 24 17:19:43 2004 --- ./src/bin/pg_dump/pg_backup_archiver.h Sat Apr 10 13:04:21 2004 *** *** 281,286 --- 281,287 extern const char *progname; extern void die_horribly(ArchiveHandle *AH, const char *modulename, const char *fmt,...) __attribute__((format(printf, 3, 4))); + extern void warn_or_die_horribly(ArchiveHandle *AH, const char *modulename, const char *fmt,...) __attribute__((format(printf, 3, 4))); extern void write_msg(const char *modulename, const char *fmt,...) __attribute__((format(printf, 2, 3))); extern void WriteTOC(ArchiveHandle *AH); *** ./src/bin/pg_dump/pg_backup_db.c.orig Wed Mar 3 22:28:54 2004 --- ./src/bin/pg_dump/pg_backup_db.cSat Apr 10 13:04:21 2004 *** *** 316,323 AH
Re: [PATCHES] pg_restore ignore error patch
Dear Andrew, I transfer some data from one server to the other. On such occasions, I use my laptop to connect to server X, I download the data, then I restore it to server Y. The versions on my laptop and both servers are likely to be different. My laptop is likely to have some development version, and the servers may be on 7.3 or 7.4. What problems have you encountered other than dollar quoting? If you're interested to know, there were some complaints about the new default_with_oids setting, as far as I can remember. In the most general case, ISTM you would need to teach pg_dump how to degrade gracefully, and an m:1 sources-to-targets relationship suddenly becomes m:m. I'm not convinced it is worth the trouble. I don't know what is worth the trouble. I'm just reporting a feature that makes the tool not work properly for me. I've fixed part of the problem and submitted a patch, and I just noted in the mail that there was another small problem that I did not fix. I agree that fixing this would mean redesigning the tool in depth. That's why I did not fix it in the patch I submitted. My opinion would be that pg_dump should just dump the data in some raw format, and that all syntax re-generation issues should be dealt with by pg_restore (INSERT vs COPY, quoting, and so on). But the tool was not designed this way in the beginning. -- Fabien Coelho - [EMAIL PROTECTED]
[PATCHES] aclitem accessor functions
Dear patchers, Please find a attached a small patch that adds accessor functions for aclitem so that it is not an opaque datatype. I needed these functions to browse aclitems from user land. I can load them when necessary, but it seems to me that these accessors for a backend type belong to the backend, so I submit them. I wasn't sure of what oid should be given... I attributed new numbers at the end of pg_proc.h. It validates for me against current cvs head. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]*** ./src/backend/utils/adt/acl.c.orig Sat Nov 29 20:51:57 2003 --- ./src/backend/utils/adt/acl.c Mon Apr 12 15:23:17 2004 *** *** 874,879 --- 874,916 PG_RETURN_ACLITEM_P(aclitem); } + /* give access to internal data within aclitem + */ + Datum + aclitem_grantee(PG_FUNCTION_ARGS) + { + AclItem * a = PG_GETARG_ACLITEM_P(0); + PG_RETURN_INT32(a-ai_grantee); + } + + Datum + aclitem_grantor(PG_FUNCTION_ARGS) + { + AclItem * a = PG_GETARG_ACLITEM_P(0); + PG_RETURN_INT32(a-ai_grantor); + } + + Datum + aclitem_idtype(PG_FUNCTION_ARGS) + { + AclItem * a = PG_GETARG_ACLITEM_P(0); + PG_RETURN_INT32(ACLITEM_GET_IDTYPE(*a)); + } + + Datum + aclitem_privs(PG_FUNCTION_ARGS) + { + AclItem * a = PG_GETARG_ACLITEM_P(0); + PG_RETURN_INT32(ACLITEM_GET_PRIVS(*a)); + } + + Datum + aclitem_goptions(PG_FUNCTION_ARGS) + { + AclItem * a = PG_GETARG_ACLITEM_P(0); + PG_RETURN_INT32(ACLITEM_GET_GOPTIONS(*a)); + } + static AclMode convert_priv_string(text *priv_type_text) { *** ./src/include/catalog/pg_proc.h.origMon Apr 5 12:06:43 2004 --- ./src/include/catalog/pg_proc.h Mon Apr 12 16:54:52 2004 *** *** 3526,3531 --- 3526,3543 DATA(insert OID = 1069 ( generate_series PGNSP PGUID 12 f f t t v 2 20 20 20 _null_ generate_series_int8 - _null_ )); DESCR(non-persistent series generator); + /* aclitem utils */ + DATA(insert OID = 2510 ( aclitem_grantorPGNSP PGUID 12 f f t f i 1 23 1033 _null_ aclitem_grantor - _null_ )); + DESCR(extract user id grantor from aclitem); + DATA(insert OID = 2511 
( aclitem_granteePGNSP PGUID 12 f f t f i 1 23 1033 _null_ aclitem_grantee - _null_ )); + DESCR(extract grantee (user or group id) from aclitem); + DATA(insert OID = 2512 ( aclitem_idtype PGNSP PGUID 12 f f t f i 1 23 1033 _null_ aclitem_idtype - _null_ )); + DESCR(extract id type of grantee (0 public, 1 user, 2 group) from aclitem); + DATA(insert OID = 2513 ( aclitem_privs PGNSP PGUID 12 f f t f i 1 23 1033 _null_ aclitem_privs - _null_ )); + DESCR(extract privileges from aclitem); + DATA(insert OID = 2514 ( aclitem_goptions PGNSP PGUID 12 f f t f i 1 23 1033 _null_ aclitem_goptions - _null_ )); + DESCR(extract grant options from aclitem); + /* * Symbolic values for provolatile column: these indicate whether the result *** ./src/test/regress/expected/privileges.out.orig Mon Sep 15 02:26:31 2003 --- ./src/test/regress/expected/privileges.out Mon Apr 12 16:51:00 2004 *** *** 581,586 --- 581,600 t (1 row) + -- aclitem utils small test + SELECT u1.usename AS u1, u2.usename AS u2, + aclitem_idtype(c.relacl[0]) AS idtype, + aclitem_privs(c.relacl[0]) AS privs, + aclitem_goptions(c.relacl[0]) AS goptions + FROM pg_class AS c, pg_user AS u1, pg_user AS u2 + WHERE u1.usesysid = aclitem_grantor(c.relacl[0]) + AND u2.usesysid = aclitem_grantee(c.relacl[0]) + AND c.relname LIKE 'atest4'; + u1 | u2 | idtype | privs | goptions + --+--++---+-- + regressuser1 | regressuser1 | 1 | 127 | 127 + (1 row) + -- clean up \c regression DROP FUNCTION testfunc2(int); *** ./src/test/regress/sql/privileges.sql.orig Wed May 14 05:26:03 2003 --- ./src/test/regress/sql/privileges.sql Mon Apr 12 16:49:48 2004 *** *** 316,321 --- 316,330 SELECT has_table_privilege('regressuser1', 'atest4', 'SELECT WITH GRANT OPTION'); -- true + -- aclitem utils small test + SELECT u1.usename AS u1, u2.usename AS u2, + aclitem_idtype(c.relacl[0]) AS idtype, + aclitem_privs(c.relacl[0]) AS privs, + aclitem_goptions(c.relacl[0]) AS goptions + FROM pg_class AS c, pg_user AS u1, pg_user AS u2 + WHERE u1.usesysid = 
aclitem_grantor(c.relacl[0]) + AND u2.usesysid = aclitem_grantee(c.relacl[0]) + AND c.relname LIKE 'atest4'; -- clean up ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [PATCHES] aclitem accessor functions
Dear Peter, I needed these functions to browse aclitems from user land. I can load them when necessary, but it seems to me that these accessors for a backend type belong to the backend, so I submit them. Can you explain what you want to do from the user level? Sure. Before developing that point, I want to underline that it should be a general principle to avoid opaque datatypes in pg that store structured information without any way to access the pieces directly. Either you trust the database as a general tool, and it can manipulate its own data, or you do not trust it and you want every special system thing to be programmed in C or whatever instead of queried from SQL. As for the specifics: I'm developing some pg_advisor schema that was discussed earlier, so as to check the design, performance, system... coherency of a database. This is developed in userland with simple relational queries on pg_catalog, and some help from plpgsql if I cannot do without it. Among many other things, I would like to check that granted rights are granted by grantors who have a with grant option, for all possible objects and users. We already have a bunch of functions for analyzing privileges. Sure, but these has_privilege functions answer specific questions. I could do that with these functions, but they hide queries, and I think that some joins would be enough to get all the answers directly without sub-querying for every possible object and user. In order to do so, I need to be able to read the aclitem type, so I added these accessors. The patch just adds five 2-line C functions. Have a nice day, -- Fabien.
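The accessors themselves are trivial because an AclItem is just fixed-width fields packed together, so each function is a field read or a mask-and-shift. A toy model of that structure (the 16-bit field layout here is invented purely for illustration; the real widths and macros live in the backend's acl.h):

```c
#include <assert.h>
#include <stdint.h>

/* Toy aclitem: grantee and grantor ids, plus a word holding
 * privilege bits in the low half and grant-option bits in the
 * high half.  (Illustrative layout only.) */
typedef struct
{
    int32_t  ai_grantee;
    int32_t  ai_grantor;
    uint32_t ai_privs;
} ToyAclItem;

#define TOY_GET_PRIVS(item)    ((item).ai_privs & 0xFFFFu)
#define TOY_GET_GOPTIONS(item) (((item).ai_privs >> 16) & 0xFFFFu)

/* Accessors in the spirit of the patch: expose the packed fields
 * so they can be joined against pg_user & co from plain SQL. */
int32_t  toy_grantee(ToyAclItem a)  { return a.ai_grantee; }
int32_t  toy_grantor(ToyAclItem a)  { return a.ai_grantor; }
uint32_t toy_privs(ToyAclItem a)    { return TOY_GET_PRIVS(a); }
uint32_t toy_goptions(ToyAclItem a) { return TOY_GET_GOPTIONS(a); }
```

Once such accessors are exposed as SQL functions, the grantor-has-grant-option check becomes a single join over pg_class and pg_user rather than a loop of has_table_privilege() calls.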
Re: [PATCHES] guc variables flags explicitly initialisation
Dear Tom, please find attached a large but simple patch that ensures that all guc variable flags are explicitly set, instead of relying on the default behavior that the static variable area is filled with zeros. This seems entirely pointless. Why should we add a large amount of noise to these table definitions to avoid relying on standard C behavior? Sure. As I wrote in the mail, I had a bug in another development; I suspected it could be the source of the problem, hence I tried to fix it that way. Once it was written, and after I found my stupid bug elsewhere, I felt that I could send it anyway. It is a personal stylistic taste that I don't like too many implicit things in code, such as relying on default initialisation of static variables, as the same behavior cannot be expected for heap and stack variables. Also, I don't like an unwritten 0 to have an implicit semantics everywhere. Moreover, such implicit things do not help me understand the source code when I need to. If you want to reduce what you call noise, you could drop the NULLs everywhere at the end of the structs, as they are just as 'pointless'. I wouldn't do it. Anyway, if you don't like it, you reject it and it is fine with me. BTW, maybe you could either accept/discuss/reject some other 'small' patches I submitted earlier, if you have some time. They are not cosmetic like this one. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]
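The standard-C behavior at issue: objects with static storage duration are zero-initialized even with no initializer at all, whereas an uninitialized automatic (stack) variable has indeterminate contents. A small illustration (the struct is mine, standing in for a GUC table entry):

```c
#include <assert.h>

/* Stand-in for a GUC table entry. */
struct toy_guc
{
    const char *name;
    int flags;
    int reset_val;
};

/* No initializer at all: every member is still guaranteed zero/null,
 * but only because the object has static storage duration.  A local
 * 'struct toy_guc' declared without an initializer would hold
 * indeterminate values. */
static struct toy_guc implicit;

/* Explicit, in the spirit of the patch: every field spelled out. */
static struct toy_guc explicit_entry = { "enable_foo", 0, 0 };

int implicit_is_zeroed(void)
{
    return implicit.name == 0
        && implicit.flags == 0
        && implicit.reset_val == 0;
}
```

Both styles produce identical tables here, which is why Tom calls the change noise; the patch's argument is purely about making the zeros visible to the reader.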
Re: [PATCHES] pg_restore ignore error patch
I looked over the patch and it seems to continue on pg_restore errors by default. That isn't good. By default, any error should make it exit loudly. I'm not sure of that. pg_dump is really designed and tested for the case of a text dump to a psql script, and if there is an error in the psql [...] Oh, OK, so make it behave like pg_dump's text output piped into psql. It is really easy to add an option to let the user change the 'ignore' behavior, and make pg_restore exit loudly if that is desired. Maybe it should be proposed just for backwards compatibility? -- Fabien Coelho - [EMAIL PROTECTED]
Re: [PATCHES] CSV patch applied
CSV - enable CSV mode
QUOTE - specify quote character
ESCAPE - specify escape character
FORCE - force quoting of specified columns
FORCE QUOTE: QUOTING col1,col2? QUOTED col1,col2? IN QUOTES col1,col2?
LITERAL - prevent NULL checks for specific columns
NO NULL CHECK: QUOTED (meaning 'as quoted')?
From a language design point of view, I think it may be better to stick to one-word versions? -- Fabien Coelho - [EMAIL PROTECTED]
[PATCHES] patches in the pipe?
Patch applied. Thanks. I have 3 other somewhat minor patches that have been submitted:
(1) Date: Mon, 12 Apr 2004 17:36:55 +0200 (CEST) Subject: [PATCHES] aclitem accessor functions
(2) Date: Sat, 17 Apr 2004 19:35:57 +0200 (CEST) Subject: [PATCHES] 'information_schema' considered a system schema
(3) Date: Sun, 18 Apr 2004 11:42:50 +0200 (CEST) Subject: [PATCHES] guc variables flags explicitly initialisation
Could they be accepted/discussed/rejected as well? Patch (3) was somewhat dismissed by Tom, so it may mean a final 'reject'. As for (1) and (2), I answered all the questions I received. (2) is essentially a small bug fix. (1) adds a minor set of functions to access fields in 'aclitem'. Thanks in advance, -- Fabien Coelho - [EMAIL PROTECTED]
Re: [PATCHES] patches in the pipe?
(1) Date: Mon, 12 Apr 2004 17:36:55 +0200 (CEST) Subject: [PATCHES] aclitem accessor functions I thought Peter didn't like it. He asked 'why' I needed it. I answered his question. He may or may not agree, I don't know! Would you repost it and we can review it again. Ok. (2) Date: Sat, 17 Apr 2004 19:35:57 +0200 (CEST) Subject: [PATCHES] 'information_schema' considered a system schema I don't remember that one at all. Would you repost? Ok. Basically, what happens on these patches is that if someone says there is a problem, and you reply but it isn't clear that the problem is refuted or addressed, That's what I do, but I can only argue, not refute or address issues. Whether it is refuted or addressed is in the head of the decider. -- Fabien Coelho - [EMAIL PROTECTED]
Re: [PATCHES] new aggregate functions v2
Dear Robert, (1) boolean-and and boolean-or aggregates named bool_and and bool_or. They should correspond to the standard sql every and some/any aggregates. Just a minor nibble, but could you add something to the documentation entries saying that these are the postgresql implementation of the sql any/some/every aggregate functions. Yep, that can be done. Right now we have whole pages devoted to any/some for subqueries and arrays; I'd like to give people looking for the sql spec aggregate functions at least a chance of finding these. Do you mean one or both of the following? - add a note in the aggregate doc that they correspond to the EVERY and SOME/ANY standard aggregates. - add a note somewhere else, maybe in the doc about arrays or subqueries, that there are also aggregates with a non-standard name. If so, where precisely should it be put? Thanks in advance, -- Fabien Coelho - [EMAIL PROTECTED]
[PATCHES] new aggregate functions v1
Dear patchers, please find attached a small patch for adding new aggregate functions: (1) boolean-and and boolean-or aggregates named bool_and and bool_or. they correspond to standard sql every and some/any aggregates. they do not have the right name as there is a problem with the standard and the parser for some/any. (2) bitwise integer aggregates named bit_and and bit_or for int2, int4, int8 and bit types. They are not standard, however they exist in other db (eg mysql), and I needed them for some other stuff. The patch adds: - 3 new functions for boolean aggregates in src/backed/utils/adt/bool.c, src/include/utils/builtins.h and src/include/catalog/pg_proc.h - new aggregates declared in src/include/catalog/pg_proc.h and src/include/catalog/pg_aggregate.h - some documentation and validation about these new aggregates. It also updates the catalog version. It validates for me. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]*** ./doc/src/sgml/func.sgml.orig Mon Apr 26 14:00:58 2004 --- ./doc/src/sgml/func.sgmlSat May 1 15:49:39 2004 *** *** 7544,7549 --- 7544,7617 /row row + entry +indexterm + primarybit_and/primary +/indexterm +functionbit_and(replaceable class=parameterexpression/replaceable)/function + /entry + entry +typesmallint/type, typeinteger/type, typebigint/type or +typebit/type, + /entry + entry + same as argument data type. + /entry + entrythe bitwise-and all non-null input values, or null if empty + /entry + /row + + row + entry +indexterm + primarybit_or/primary +/indexterm +functionbit_or(replaceable class=parameterexpression/replaceable)/function + /entry + entry +typesmallint/type, typeinteger/type, typebigint/type or +typebit/type, + /entry + entry + same as argument data type. 
+ /entry + entrythe bitwise-or all non-null input values, or null if empty + /entry + /row + + row + entry +indexterm + primarybool_and/primary +/indexterm +functionbool_and(replaceable class=parameterexpression/replaceable)/function + /entry + entry +typebool/type + /entry + entry +typebool/type + /entry + entrytrue if all input values are true, otherwise false/entry + /row + + row + entry +indexterm + primarybool_or/primary +/indexterm +functionbool_or(replaceable class=parameterexpression/replaceable)/function + /entry + entry +typebool/type + /entry + entry +typebool/type + /entry + entrytrue if at least one input value is true, otherwise false/entry + /row + + row entryfunctioncount(*)/function/entry entry/entry entrytypebigint/type/entry *** *** 7644,7649 --- 7712,7718 para It should be noted that except for functioncount/function, +functionbool_and/function and functionbool_or/function, these functions return a null value when no rows are selected. In particular, functionsum/function of no rows returns null, not zero as one might expect. The function functioncoalesce/function may be *** ./src/backend/utils/adt/bool.c.orig Sat Nov 29 20:51:58 2003 --- ./src/backend/utils/adt/bool.c Sat May 1 16:02:30 2004 *** *** 248,250 --- 248,325 PG_RETURN_BOOL(b); } + + /* + * boolean-and and boolean-or aggregates. + * + * this correspond to EVERY and ANY/SOME aggregate in SQL-2003. + * There is an ambiguity in the syntax shown by Tom: + * SELECT b1 = ANY((SELECT b2 FROM t2)) FROM t1; + * can be interpreted as either an aggregate or as an any sub-select. + * + * About the semantics: as per the sql standard, these aggregates + * must return either true or false, but not null, even on empty input. + */ + + /* EVERY aggregate implementation conforming to SQL 2003 standard. + * must be non strict. no special assumption about state and data. 
+ */ + PG_FUNCTION_INFO_V1(booland_statefunc); + + Datum booland_statefunc(PG_FUNCTION_ARGS) + { + bool state, data; + + /* get data item. if in doubt (NULL boolean), EVERY is false. */ + if (PG_ARGISNULL(1)) + PG_RETURN_BOOL(false); + data = PG_GETARG_BOOL(1); + + /* get internal state */ + if (PG_ARGISNULL(0)) + PG_RETURN_BOOL(data); /* initialization with supplied non-null data */ + state = PG_GETARG_BOOL(0); + + PG_RETURN_BOOL(state && data); + } + + /* SOME aggregate implementation conforming to SQL 2003 standard. + * must be non strict. no special assumption about state and data. + */ + PG_FUNCTION_INFO_V1(boolor_statefunc
Re: [PATCHES] new aggregate functions v1
Dear Alvaro, (2) bitwise integer aggregates named bit_and and bit_or for int2, int4, int8 and bit types. They are not standard, however they exist in other db (eg mysql), and I needed them for some other stuff. I'm sure people won't like to add functions just because some other DB has them. I developed them because I NEEDED them, NOT because they are available in mysql and I would want compatibility. I needed them for some queries over pg_catalog, as acls are encoded as bitfields. So the addition is about functionality, not compatibility. The argument about mysql is rather to show that other people in the world have found these functions useful too, and to justify the names I chose. You may also notice that the impact is close to nil: there are two lines added for each of these bit_* functions. I'm not going to develop an external package for 16 lines of code. -- Fabien Coelho - [EMAIL PROTECTED]
Re: [PATCHES] new aggregate functions v1
(1) boolean-and and boolean-or aggregates named bool_and and bool_or. they correspond to standard sql every and some/any aggregates. They do not, sorry. I'll resubmit. -- Fabien Coelho
Re: [PATCHES] new aggregate functions v1
Dear Peter, We are not going to accept any functions that are primarily intended to access the internals of the aclitem data. You need to think of a high-level description of your problem and implement functions for that. I already have what I judge a high-level description of the problem, thanks. It nevertheless still uses the acl bitfield for representing granted rights. I don't think the bitfield is a bad idea for representing a set of rights, as they are easy to combine (say, with a bit-and aggregate to compute effective rights, for instance). My problem is rather to deal with arrays in a relational query. It is not easy to query things when data are not normalized. So I will still need bit_* functions. I can implement them outside for sure, but I really think that it costs nothing to propose them as a default (only the declaration is needed), and I know I'm not the first one to look for them and be disappointed not to find them (there was a discussion about these very functions last month on some other pg list). Although it is very interesting to learn how to add aggregates in pg, maybe not all users around have the time and the motivation for that. -- Fabien Coelho - [EMAIL PROTECTED]
Re: [PATCHES] aclitem accessor functions
The oids you chose were fine. I updated the system catalog version number, and added prototypes for these functions. Ok, in acl.h and catversion.h. I'm considering sending a patch for the documentation, although there is no real documentation about the aclitem type in the doc tree. Would 'func.sgml', next to has_*_privileges, be an appropriate place? -- Fabien Coelho - [EMAIL PROTECTED]
Re: [PATCHES] [BUGS] BUG #1134: ALTER USER ... RENAME breaks md5
Dear Bruce, Yes, the problem is that we used the username for the salt, just like FreeBSD does for its MD5 passwords. Not that I know of on FreeBSD? shell uname -a FreeBSD palo-alto2.ensmp.fr 4.9-STABLE FreeBSD 4.9-STABLE #5: Mon Mar 1 21:31:30 CET 2004 [EMAIL PROTECTED]:/usr/src/sys/compile/IAR2M i386 shell grep coelho /var/yp/master.passwd coelho:$1$00EacB0I$4kQ/HmqFFQANZP/mxj8ZX0:210:20::0:0:COELHO, Fabien:/users/cri/coelho:/usr/local/bin/bash ^^ salt some base-64 encoding of 1002 paranoid md5 computations. Even if the salt is based on the login, the point is that it is stored separately, so the system does not rely on the login string to check the password. The only other scheme which requires the user password somehow is HTTP digest authentication, and AFAIK no one in the world uses it;-) The attached patch clears the password field on rename: By 'clearing', and after a look at the patch, I understand that access will be denied after the rename, which is the current behavior anyway;-) and adds documentation explaining this behavior. I can't think of a better solution. Yes, I'm afraid there is no 'light' fix, other than acknowledging the fact... Not a big issue. Thanks, -- Fabien Coelho - [EMAIL PROTECTED]
[PATCHES] add build utilities in default install
Dear patchers, Please find attached my first submission for installing all necessary build files (makefile, scripts...) by default. - for the old behavior, do a make light-install - make install installs headers and other necessary files such as scripts and makefiles, for postgresql extensions to be build. - these files are installed in pg_config --insbuilddir the actual directory can be modified during configure with --with-insbuilddir=DIR option. Default is LIBDIR/build - the installation documentation is updated. - I also added a install-client-only target so as to simplify the doc. What is yet to be done: - check that all necessary files are really there... esp. wrt win32/cygwin - provide an example makefile (script?) to build extensions, and install it! I'm planning to do that later. - check that the documentation is clear enough. It validates (make check does perform a make install). I'm obviously ready to update the patch if necessary (missing files, other default directories...). Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]*** ./GNUmakefile.in.orig Tue Apr 20 09:35:41 2004 --- ./GNUmakefile.inTue May 18 11:52:08 2004 *** *** 13,30 $(MAKE) -C src all @echo All of PostgreSQL successfully made. Ready to install. ! install: $(MAKE) -C doc install $(MAKE) -C src install - @echo PostgreSQL installation complete. installdirs uninstall distprep: $(MAKE) -C doc $@ $(MAKE) -C src $@ install-all-headers: $(MAKE) -C src $@ # clean, distclean, etc should apply to contrib too, even though # it's not built by default clean: --- 13,51 $(MAKE) -C src all @echo All of PostgreSQL successfully made. Ready to install. ! install: light-install install-all-headers install-config-files ! @echo PostgreSQL installation complete. ! ! 
light-install: $(MAKE) -C doc install $(MAKE) -C src install installdirs uninstall distprep: $(MAKE) -C doc $@ $(MAKE) -C src $@ + uninstall: + $(RM) $(DESTDIR)$(insbuilddir)/* $(DESTDIR)$(insbuilddir)/config/* + install-all-headers: $(MAKE) -C src $@ + install-config-files: installdirs + $(MAKE) -C src $@ + $(INSTALL_DATA) config.status $(DESTDIR)$(insbuilddir) + $(INSTALL_DATA) config/install-sh $(DESTDIR)$(insbuilddir)/config + $(INSTALL_DATA) config/mkinstalldirs $(DESTDIR)$(insbuilddir)/config + + installdirs: + $(mkinstalldirs) $(DESTDIR)$(insbuilddir) + $(mkinstalldirs) $(DESTDIR)$(insbuilddir)/config + + install-client-only: + $(MAKE) -C src/bin install + $(MAKE) -C src/include install + $(MAKE) -C src/interfaces install + $(MAKE) -C doc install + # clean, distclean, etc should apply to contrib too, even though # it's not built by default clean: *** ./configure.in.orig Mon May 17 14:00:03 2004 --- ./configure.in Tue May 18 12:02:38 2004 *** *** 128,133 --- 128,144 # + # Installation directory for build utilities + # + PGAC_ARG(with, insbuilddir, + [ --with-insbuilddir=DIR install build utilities in DIR [[LIBDIR/build]]], + [AC_MSG_ERROR([option --with-insbuilddir requires an argument])], + [AC_MSG_ERROR([option --without-insbuilddir does not apply])], + [insbuilddir=$withval], + [insbuilddir='${libdir}/build']) + AC_SUBST(insbuilddir) + + # # Add non-standard directories to the library search path # PGAC_ARG_REQ(with, libraries, [ --with-libraries=DIRS look for additional libraries in DIRS], *** ./doc/src/sgml/installation.sgml.orig Mon May 17 18:54:02 2004 --- ./doc/src/sgml/installation.sgmlTue May 18 11:42:02 2004 *** *** 589,594 --- 589,606 /varlistentry varlistentry + termoption--with-insbuilddir=replaceableDIRECTORY/replaceable/option/term +listitem + para +Useful files for building productnamePostgreSQL/productname +extensions, such as makefiles or scripts, will be installed in this +directory. 
+The default is filenamereplaceableLIBDIR/replaceable/build/filename + /para +/listitem + /varlistentry + + varlistentry termoption--with-docdir=replaceableDIRECTORY//option/term termoption--without-docdir/option/term listitem *** *** 1035,1064 /para para ! The standard installation provides only the header files needed for client ! application development. If you plan to do any server-side program ! development (such as custom functions or data types written in C), ! then you may want to install the entire productnamePostgreSQL/ ! include tree into your target include directory. To do that, enter screen ! userinputgmake install-all-headers/userinput /screen ! This adds
Re: [PATCHES] add build utilities in default install
Dear Tom, Please find attached my first submission for installing all necessary build files (makefile, scripts...) by default. Why? They won't do anyone any good outside the original build tree. Is your question rhetorical? So as to allow postgresql extensions (types, functions) to be compiled as if they had been compiled in the source tree, as contribs. Even if they should be modified to be really useful (which is something that is on my to-do list), I agree that it is not simple to make them usable as is. I'm trying to have a minimum-effort solution, but I would be interested in having a better one, esp. if I don't have to do it myself. I disagree with installing them by default. See previous arguments about installing backend internal headers by default. http://archives.postgresql.org: Can not connect to search daemon Well, my look at previous archived arguments will have to wait. Anyway, you both forbid extensions from being added within the main distribution, and decline to provide help for these extensions to be added from the outside. So instead of saying no, please help us solve the problem: how to make extensions easily installable against a postgresql installation, preferably with minimum effort from the user who would like to try that? Under apache, I just do: apxs -c -i -a mod_foo.c and it is enough. If your solution is that extensions should require the sysadmin to fetch the postgresql sources and reconfigure... I disagree. If you have something better, I'm all ears. -- Fabien COELHO _ http://www.cri.ensmp.fr/~coelho _ [EMAIL PROTECTED] CRI-ENSMP, 35, rue Saint-Honoré, 77305 Fontainebleau cedex, France phone: (+33|0) 1 64 69 {voice: 48 52, fax: 47 09, standard: 47 08} All opinions expressed here are mine _
[PATCHES] new aggregate functions v4
Dear Bruce, Agreed, these seem to be of general interest and have been requested in the past. I will clean up the docs a little. Please find attached a new version to address Neil's comments. - add every anyway, next to bool_and. - cleaner source and comments. I also added more regression tests, including the added EVERY aggregate. It DOES NOT validate for me, as errors and rules are broken in current head: undefined symbol: pg_strcasecmp However the aggregate part works fine. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]*** ./doc/src/sgml/func.sgml.orig Mon May 17 14:00:06 2004 --- ./doc/src/sgml/func.sgmlWed May 19 10:27:17 2004 *** *** 7554,7559 --- 7554,7629 /row row + entry +indexterm + primarybit_and/primary +/indexterm +functionbit_and(replaceable class=parameterexpression/replaceable)/function + /entry + entry +typesmallint/type, typeinteger/type, typebigint/type or +typebit/type, + /entry + entry + same as argument data type. + /entry + entrythe bitwise-and of all non-null input values, or null if empty + /entry + /row + + row + entry +indexterm + primarybit_or/primary +/indexterm +functionbit_or(replaceable class=parameterexpression/replaceable)/function + /entry + entry +typesmallint/type, typeinteger/type, typebigint/type or +typebit/type, + /entry + entry + same as argument data type. + /entry + entrythe bitwise-or of all non-null input values, or null if empty. + /entry + /row + + row + entry +indexterm + primarybool_and/primary +/indexterm +functionbool_and(replaceable class=parameterexpression/replaceable)/function + /entry + entry +typebool/type + /entry + entry +typebool/type + /entry + entrytrue if all input values are true, otherwise false. + Also known as functionbool_and/function. 
+ /entry + /row + + row + entry +indexterm + primarybool_or/primary +/indexterm +functionbool_or(replaceable class=parameterexpression/replaceable)/function + /entry + entry +typebool/type + /entry + entry +typebool/type + /entry + entrytrue if at least one input value is true, otherwise false/entry + /row + + row entryfunctioncount(*)/function/entry entry/entry entrytypebigint/type/entry *** *** 7571,7576 --- 7641,7664 /row row + entry +indexterm + primaryevery/primary +/indexterm +functionevery(replaceable class=parameterexpression/replaceable)/function + /entry + entry +typebool/type + /entry + entry +typebool/type + /entry + entrytrue if all input values are true, otherwise false. + Also known as functionbool_and/function. + /entry + /row + + row entryfunctionmax(replaceable class=parameterexpression/replaceable)/function/entry entryany numeric, string, or date/time type/entry entrysame as argument type/entry *** *** 7661,7666 --- 7749, /para note + indexterm + primaryANY/primary + /indexterm + indexterm + primarySOME/primary + /indexterm + para + Boolean aggregates functionbool_and/function and + functionbool_or/function correspond to standard SQL aggregates + functionevery/function and functionany/function or + functionsome/function. + As for functionany/function and functionsome/function, + it seems that there is an ambiguity built into the standard syntax: + programlisting + SELECT b1 = ANY((SELECT b2 FROM t2 ...)) FROM t1 ...; + /programlisting + Here functionANY/function can be considered both as leading + to a subquery or as an aggregate if the select expression returns 1 row. + Thus the standard name cannot be given to these aggregates. 
+ /para + /note + + note para Users accustomed to working with other SQL database management systems may be surprised by the performance characteristics of *** ./src/backend/utils/adt/bool.c.orig Mon May 17 14:00:09 2004 --- ./src/backend/utils/adt/bool.c Wed May 19 10:18:13 2004 *** *** 248,250 --- 248,270 PG_RETURN_BOOL(b); } + + /* + * boolean-and and boolean-or aggregates. + */ + + /* function for standard EVERY aggregate implementation conforming to SQL 2003. + * must be strict. It is also named bool_and for homogeneity. + */ + Datum booland_statefunc(PG_FUNCTION_ARGS) + { + PG_RETURN_BOOL(PG_GETARG_BOOL(0) && PG_GETARG_BOOL(1
Re: [PATCHES] add build utilities in default install
Dear patchers, Subject: [PATCHES] add build utilities in default install I withdraw this submission, which was not yet considered for inclusion anyway. I'll send a more complete patch, which includes this one but also provides a working infrastructure for extensions, sometime later. I've something working that needs some finalization and tests. -- Fabien.
[PATCHES] fix schema ownership for database owner on first connection
Dear patchers, Please find attached a patch to fix schema ownership on first connection, so that non-system schemas reflect the database owner. (1) It adds a new datisinit attribute to pg_database, which tells whether the database initialization was performed or not. The documentation is updated accordingly. (2) This boolean is tested in postinit.c:ReverifyMyDatabase, and InitializeDatabase is called if necessary. (3) The routine updates pg_database datisinit, as well as pg_namespace ownership and acl stuff. (4) Some validation is added. This part validates for me (although the rules and errors regression tests are broken in current cvs head, independently of this patch). Some questions/comments : - is the place for the first-connection housekeeping updates appropriate? it seems so from ReverifyMyDatabase comments, but these are only comments. - it is quite convenient to use SPI_* stuff for these very rare updates, but I'm not that confident about the issue... On the other hand, an all-C solution would result in much longer and less obvious code:-( - it is unclear to me whether it should be allowed to skip this under some conditions, e.g. when the database is accessed in debug mode from a standalone postgres? - also I'm wondering how to react (what to do and how to do it) on errors within or during this initialization. For instance, how to be sure that the CurrentUser is reset as expected? Thanks in advance for any comments. Have a nice day. -- Fabien.*** ./doc/src/sgml/catalogs.sgml.orig Mon Jun 7 09:08:11 2004 --- ./doc/src/sgml/catalogs.sgmlTue Jun 8 10:14:26 2004 *** *** 1536,1541 --- 1536,1552 /row row + entrystructfielddatisinit/structfield/entry + entrytypebool/type/entry + entry/entry + entry +False when a database is just created but housekeeping initialization +tasks are not performed yet. On the first connection, the initialization +is performed and the boolean is turned to true.
+ /entry + /row + + row entrystructfielddatlastsysoid/structfield/entry entrytypeoid/type/entry entry/entry *** ./src/backend/commands/dbcommands.c.origWed May 26 17:28:40 2004 --- ./src/backend/commands/dbcommands.c Tue Jun 8 10:14:26 2004 *** *** 424,429 --- 424,430 new_record[Anum_pg_database_encoding - 1] = Int32GetDatum(encoding); new_record[Anum_pg_database_datistemplate - 1] = BoolGetDatum(false); new_record[Anum_pg_database_datallowconn - 1] = BoolGetDatum(true); + new_record[Anum_pg_database_datisinit - 1] = BoolGetDatum(false); new_record[Anum_pg_database_datlastsysoid - 1] = ObjectIdGetDatum(src_lastsysoid); new_record[Anum_pg_database_datvacuumxid - 1] = TransactionIdGetDatum(src_vacuumxid); new_record[Anum_pg_database_datfrozenxid - 1] = TransactionIdGetDatum(src_frozenxid); *** ./src/backend/utils/adt/acl.c.orig Thu Jun 3 15:05:57 2004 --- ./src/backend/utils/adt/acl.c Tue Jun 8 10:16:16 2004 *** *** 2203,2205 --- 2203,2225 errmsg(unrecognized privilege type: \%s\, priv_type))); return ACL_NO_RIGHTS; /* keep compiler quiet */ } + + /* acl acl_switch_grantor(acl, oldgrantor, newgrantor); + * switch grantor id in aclitem array. + * used internally for fixing owner rights in new databases. + * must be STRICT. + */ + Datum acl_switch_grantor(PG_FUNCTION_ARGS) + { + Acl * acls = PG_GETARG_ACL_P_COPY(0); + int i, + old_grantor = PG_GETARG_INT32(1), + new_grantor = PG_GETARG_INT32(2); + AclItem * item; + + for (i=0, item=ACL_DAT(acls); i < ACL_NUM(acls); i++, item++) + if (item->ai_grantor == old_grantor) + item->ai_grantor = new_grantor; + + PG_RETURN_ACL_P(acls); + } *** ./src/backend/utils/init/postinit.c.origTue Jun 1 10:21:23 2004 --- ./src/backend/utils/init/postinit.c Tue Jun 8 10:23:09 2004 *** *** 50,55 --- 50,109 /*** InitPostgres support ***/ + #include executor/spi.h + + /* Do housekeeping initializations if required, on first connection.
+ */ + static void InitializeDatabase(const char * dbname) + { + /* su */ + AclId saved_user = GetUserId(); + SetUserId(1); + + /* Querying in C is nice, but SQL is nicer. +* This is only done once in a lifetime of the database, +* so paying for the parser/optimiser cost is not that bad? +* What if that fails? +*/ + SetQuerySnapshot(); + + if (SPI_connect() != SPI_OK_CONNECT) + ereport(ERROR, (errmsg(SPI_connect failed))); + + if (SPI_exec(UPDATE SystemCatalogName . DatabaseRelationName + SET datisinit=TRUE + WHERE
Re: [PATCHES] fix schema ownership for database owner on first
Dear Tom, (2) This boolean is tested in postinit.c:ReverifyMyDatabase, and InitializeDatabase is called if necessary. And what happens if multiple backends try to connect at the same time? I took care of that one! There is a lock on the update of pg_database when switching datisinit. The backend which gets the lock updates the schema ownership, and the others wait for the lock to be released, then skip the stuff. I don't think I forgot anything, but I may be wrong. Also, as I noted, I used SPI internally to do that simply with sql. I don't know if this is an issue. (4) Some validation is added. I do not think it's a good idea for the regression tests to do anything to any databases other than regression. Especially not databases with names that might match people's real databases. Oh, you mean calvin and hobbes might use postgresql? ;-) Ok, so I guess I can use regressionuser[123], regression[123] as names in the validation. Writing test cases is not fun, so I tried to put some fun into it by using these characters. -- Fabien Coelho - [EMAIL PROTECTED]
Re: [PATCHES] fix schema ownership for database owner on first
Ok, so I guess I can use regressionuser[123], regression[123] as names in the validation. Writing test cases is not fun, so I tried to put some fun into it by using these characters. I don't really think it's necessary for the regression tests to test this functionality. Hummm... an interesting view, indeed. It fits neither my experience of programming nor my experience of computer security;-) Both taught me that anything which is not tested does not work, or will not work later on because someone will break some assumption. On the other hand, I understand that heavy tests in make check are not necessary. So maybe some make big_check or the like? -- Fabien Coelho - [EMAIL PROTECTED]
[PATCHES] ALSO keyword to CREATE RULE patch
Dear patchers, Please find attached a small patch to add an optionnal ALSO keyword to the CREATE RULE syntax. The ALSO keyword can be used where INSTEAD would be used, to mean the opposite, i.e. the current default behavior of rules which adds operations to the current one. IMHO, it makes the intended behavior much clearer for the basic user (say, me;-). CREATE RULE some_table_del AS ON DELETE TO some_table DO ALSO ( DELETE FROM this_other_table WHERE id=old.id; ); Of course, the absence of the ALSO keyword preserves the previous behavior... that is it behaves the same as with the ALSO keyword. This patch was made against 7.4.1 with the difforig script provided by postgresql. It adds ALSO keyword in the parser code (two lines), fixes somehow the documentation and sql help, and modifies four of the RULE test cases to use this keyword instead of the default nothing-ness. It validated for me with a make check. Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED]*** ./doc/src/sgml/rules.sgml.orig Sun Feb 29 17:35:15 2004 --- ./doc/src/sgml/rules.sgml Sun Feb 29 17:38:45 2004 *** *** 873,879 ListItem Para ! They can be literalINSTEAD/ or not. /Para /ListItem --- 873,879 ListItem Para ! They can be literalINSTEAD/ or literalALSO/ (default). /Para /ListItem *** *** 904,910 ProgramListing CREATE RULE replaceablerule_name/ AS ON replaceableevent/ TO replaceableobject/ [WHERE replaceablerule_qualification/] ! DO [INSTEAD] [replaceableaction/ | (replaceableactions/) | NOTHING]; /ProgramListing in mind. --- 904,910 ProgramListing CREATE RULE replaceablerule_name/ AS ON replaceableevent/ TO replaceableobject/ [WHERE replaceablerule_qualification/] ! DO [ALSO|INSTEAD] [replaceableaction/ | (replaceableactions/) | NOTHING]; /ProgramListing in mind. *** *** 920,926 Initially the query-tree list is empty. There can be zero (literalNOTHING/ key word), one, or multiple actions. To simplify, we will look at a rule with one action. This rule ! 
can have a qualification or not and it can be literalINSTEAD/ or not. /Para Para --- 920,926 Initially the query-tree list is empty. There can be zero (literalNOTHING/ key word), one, or multiple actions. To simplify, we will look at a rule with one action. This rule ! can have a qualification or not and it can be literalINSTEAD/ or literalALSO/ (default). /Para Para *** *** 937,943 variablelist varlistentry ! termNo qualification and not literalINSTEAD//term listitem para the query tree from the rule action with the original query --- 937,943 variablelist varlistentry ! termNo qualification and literalALSO//term listitem para the query tree from the rule action with the original query *** *** 957,963 /varlistentry varlistentry ! termQualification given and not literalINSTEAD//term listitem para the query tree from the rule action with the rule --- 957,963 /varlistentry varlistentry ! termQualification given and literalALSO//term listitem para the query tree from the rule action with the rule *** *** 980,986 /varlistentry /variablelist ! Finally, if the rule is not literalINSTEAD/, the unchanged original query tree is added to the list. Since only qualified literalINSTEAD/ rules already add the original query tree, we end up with either one or two output query trees for a rule with one action. --- 980,986 /varlistentry /variablelist ! Finally, if the rule is literalALSO/, the unchanged original query tree is added to the list. Since only qualified literalINSTEAD/ rules already add the original query tree, we end up with either one or two output query trees for a rule with one action. *** *** ,1117 /Para Para ! The rule is a qualified non-literalINSTEAD/ rule, so the rule system has to return two query trees: the modified rule action and the original query tree. In step 1, the range table of the original query is incorporated into the rule's action query tree. This results in: --- ,1117 /Para Para ! 
The rule is a qualified literalALSO/ rule, so the rule system has to return two query trees: the modified rule action and the original query tree. In step 1, the range table of the original query is incorporated into the rule's action query tree. This results in: *** *** 1190,1196 /para para ! That's it. Since the rule
[PATCHES] fix schema ownership on first connection v2
Dear hackers, Please find attached a patch to fix schema ownership on first connection, so that non-system schemas reflect the database owner. This revised patch includes fixes after Tom's comments on the names used in the validation. However, the validation is still there. It is easy to edit it out if required, but I still think that such a test is a good thing. (1) It adds a new datisinit attribute to pg_database, which tells whether the database initialization was performed or not. The documentation is updated accordingly. (2) The datisinit boolean is tested in postinit.c:ReverifyMyDatabase, and InitializeDatabase is called if necessary. (3) The routine updates pg_database datisinit, as well as pg_namespace ownership and acl stuff. I think that the race condition when two backends connect for the first time to the same database is correctly taken care of, as only one backend will update the datisinit flag and then fix the schema ownership. (4) Some validation is added. Some questions: - is the place for the first-connection housekeeping updates appropriate? it seems so from ReverifyMyDatabase comments, but these are only comments. - it is quite convenient to use SPI_* stuff for these very rare updates, but I'm not that confident about the issue... On the other hand, an all-C solution would result in much longer and less obvious code:-( - it is unclear to me whether it should be allowed to skip this under some conditions, e.g. when the database is accessed in debug mode from a standalone postgres? - also I'm wondering how to react (what to do and how to do it) on errors within or during this initialization. For instance, how to be sure that the CurrentUser is reset as expected? Thanks in advance for any comments. Have a nice day.
-- Fabien.*** ./doc/src/sgml/catalogs.sgml.orig Mon Jun 7 09:08:11 2004 --- ./doc/src/sgml/catalogs.sgmlWed Jun 9 10:26:39 2004 *** *** 1536,1541 --- 1536,1552 /row row + entrystructfielddatisinit/structfield/entry + entrytypebool/type/entry + entry/entry + entry +False when a database is just created but housekeeping initialization +tasks are not performed yet. On the first connection, the initialization +is performed and the boolean is turned to true. + /entry + /row + + row entrystructfielddatlastsysoid/structfield/entry entrytypeoid/type/entry entry/entry *** ./src/backend/commands/dbcommands.c.origWed May 26 17:28:40 2004 --- ./src/backend/commands/dbcommands.c Wed Jun 9 10:26:39 2004 *** *** 424,429 --- 424,430 new_record[Anum_pg_database_encoding - 1] = Int32GetDatum(encoding); new_record[Anum_pg_database_datistemplate - 1] = BoolGetDatum(false); new_record[Anum_pg_database_datallowconn - 1] = BoolGetDatum(true); + new_record[Anum_pg_database_datisinit - 1] = BoolGetDatum(false); new_record[Anum_pg_database_datlastsysoid - 1] = ObjectIdGetDatum(src_lastsysoid); new_record[Anum_pg_database_datvacuumxid - 1] = TransactionIdGetDatum(src_vacuumxid); new_record[Anum_pg_database_datfrozenxid - 1] = TransactionIdGetDatum(src_frozenxid); *** ./src/backend/utils/adt/acl.c.orig Thu Jun 3 15:05:57 2004 --- ./src/backend/utils/adt/acl.c Wed Jun 9 10:26:39 2004 *** *** 2203,2205 --- 2203,2225 errmsg(unrecognized privilege type: \%s\, priv_type))); return ACL_NO_RIGHTS; /* keep compiler quiet */ } + + /* acl acl_switch_grantor(acl, oldgrantor, newgrantor); + * switch grantor id in aclitem array. + * used internally for fixing owner rights in new databases. + * must be STRICT. 
+  */
+ Datum acl_switch_grantor(PG_FUNCTION_ARGS)
+ {
+ 	Acl *acls = PG_GETARG_ACL_P_COPY(0);
+ 	int i,
+ 		old_grantor = PG_GETARG_INT32(1),
+ 		new_grantor = PG_GETARG_INT32(2);
+ 	AclItem *item;
+
+ 	for (i = 0, item = ACL_DAT(acls); i < ACL_NUM(acls); i++, item++)
+ 		if (item->ai_grantor == old_grantor)
+ 			item->ai_grantor = new_grantor;
+
+ 	PG_RETURN_ACL_P(acls);
+ }

*** ./src/backend/utils/init/postinit.c.orig	Tue Jun  1 10:21:23 2004
--- ./src/backend/utils/init/postinit.c	Wed Jun  9 11:52:02 2004
*** 50,55 ****
--- 50,110 ----
  /*** InitPostgres support ***/

+ #include "executor/spi.h"
+
+ /* Do housekeeping initializations if required, on first connection.
+  * This function is expected to be called from within a transaction block.
+  */
+ static void InitializeDatabase(const char *dbname)
+ {
+ 	/* su */
+ 	AclId saved_user = GetUserId();
+ 	SetUserId(1);
+
+ 	/* Querying in C is nice, but SQL is nicer.
+ 	 * This is only done once in a lifetime of the database,
+ 	 * so paying for the
Re: [PATCHES] Foreign key type checking patch
Dear Tom,

Thanks for your reply.

Here is a proposed patch against 7.4.1 to check exact match of foreign key types wrt the referenced keys, and to show a warning if this is not the case.

I think that this concern may be obsolete in CVS tip,

I just got the current CVS and had a quick look at it.

at least for the cases where we have indexable cross-type operators. The correct way to do this would be to look at the operator found by oper() and see whether it's indexable.

I must admit that I do not understand your point. I wish I would have a WARNING if a foreign key is not declared exactly as the key it references. I think that it is a desirable feature for stupid users, including myself! I cannot see why whether the chosen = comparison is indexable or not would lead to this information. It seems quite reasonable to look directly at the attribute types and compare them for this purpose.

I noticed the compatible_oper() function, which would return a no-coercion binary operator between types. However that does not fit my purpose. For instance, it seems to me that IsBinaryCoercible returns true for VARCHAR(12) and VARCHAR(16), as the type oid is the same, but I think a warning makes sense anyway. So it is not the same issue.

So I can't see your point. Maybe some more light would help?

-- Fabien.

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org
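The kind of declaration the proposed warning targets is the one from the original report (table and column names from that example):

```sql
CREATE TABLE foo (id INT4 NOT NULL PRIMARY KEY);

-- Accepted, because an int2 = int4 comparison operator exists,
-- but the referential integrity checks then go through a coercion,
-- which is exactly the case the patch would flag with a NOTICE.
CREATE TABLE bla (id INT2 REFERENCES foo);
```

Declaring bla.id as INT4, matching the referenced key exactly, avoids the cross-type comparison.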
Re: [PATCHES] fix schema ownership for database owner on first
Dear Bruce, On Wed, 9 Jun 2004, Bruce Momjian wrote: Would you adjust based on Tom's comments and resubmit? Thanks. Done. ! Date: Wed, 9 Jun 2004 14:31:59 +0200 (CEST) ! From: Fabien COELHO [EMAIL PROTECTED] ! To: PostgreSQL Patches [EMAIL PROTECTED] ! Subject: [PATCHES] fix schema ownership on first connection v2 Have a nice day, -- Fabien Coelho - [EMAIL PROTECTED] ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [PATCHES] pgxs: build infrastructure for extensions v1
Dear Peter,

Thanks a lot for all these comments. I'll try to update my patch in the coming week, or maybe this weekend. Some responses to some of your comments:

- Please don't invent new targets like light-install, install-local. Just install everything in the install target.

The current status is that there are two targets: install and server-install. As I think that server-install should be the default, I renamed it install, but I wanted to keep the old install still available, thus I had to find a name for it, hence light-install. It is the install a packager would likely choose, to have a separate -dev package.

- Referring to the renaming of all makefiles under contrib/: I'm not in favor of naming makefiles Makefile.foo; it should be foo.mk. This makes it easier for tools that recognize files by extension.

Ok. Actually, Makefile.pgxs is the template example makefile, so its status is different from the others that are to be included.

Thanks again,

-- Fabien Coelho - [EMAIL PROTECTED]

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html
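For context, the end-user side of the infrastructure being discussed is a small makefile that locates the installed PGXS rules via pg_config. This is a sketch along the lines of the pgxs.mk files in the later patch versions; the module name and files are hypothetical:

```makefile
# Minimal external-extension makefile built on PGXS (sketch)
MODULE_big = foo        # builds foo.so
OBJS = foo.o            # from foo.c
DATA_built = foo.sql    # installed script
DOCS = README.foo

# locate the installed PGXS makefile of the target PostgreSQL
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
```

With this, `make install` builds and installs the module against whichever PostgreSQL installation the first pg_config in the PATH belongs to.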
Re: [PATCHES] patch queue reminder
Dear Andreas,

I have two minor patches that are being submitted but which do not appear

so please be patient.

I'm patient, no problem! I'm sorry if my English writing does not convey the soft sound of my voice that would tell that I'm not in a hurry. I'm just vaguely afraid that the submission could be forgotten, hence the reminder.

I'd expect that patches stuck in pgsql-patches are still supposed to be in time for feature freeze.

Gut. Danke schön.

-- Fabien Coelho - [EMAIL PROTECTED]

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [PATCHES] pgxs: build infrastructure for extensions v2
Dear Peter,

If you cannot compile extensions before postgresql is already installed,

That assumption is wrong. The contrib modules can be built at the same time as the rest of the tree, before any installation takes place. This is essential for packaging: when a package is built, the tree is never really installed.

The model I have in mind is the one of apache, where you use apxs -c -i ... mod_foo.c to install a new module AFTER apache has been compiled and installed, and it's not a problem to anyone there. So I must admit I'm not sure I understand the package building issue. The whole point of PGXS is to enable the ability to install a contrib or external module whenever one needs it, not necessarily at initial build time. As a side effect, because it relies on pg_contrib and other files being installed, it suggests that the make in contrib can only be run after postgresql is installed, as it is with apache.

- In Makefile.global: -L$(pkglibdir) is not necessary. There are no libraries to link at build time in there.

It is done only ifdef PGXS, in which case it seems to me that this is really needed, as it is at extension-building time.

There are by definition never any build-time linkable files in $(pkglibdir).

PGXS is by definition to be used AFTER the postgresql installation. Thus the build time, when using it, is necessarily AFTER postgresql has been installed. So I think that the libraries are expected to be in $(pkglibdir)? Or am I completely misunderstanding you?

- maybe more files should be copied? if so which ones?

Actually you might copy fewer files. You don't need to copy everything under src/makefiles/. It is enough to copy Makefile.port, since you already know the platform.

Ok. I can look at that. Makefile.port is a symbolic link, so it may depend on whether install.sh copies the link as a file or as a link.

Thanks for your comments,

-- Fabien Coelho - [EMAIL PROTECTED]

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faqs/FAQ.html
[PATCHES] build infrastructure for extensions v3
Dear patchers, Please find attached version number 3 for a patch to enable extensions such as contribs or external add-ons to be installed simply with an already installed postgresql. This version addresses Peter's comments about src/makefiles/Makefile.* that do not need to be installed. It works for me, as previous versions. See other comments in previous submissions and responses. -- Fabien.*** ./GNUmakefile.in.orig Mon Jun 14 15:33:06 2004 --- ./GNUmakefile.inMon Jul 5 09:24:51 2004 *** *** 13,30 $(MAKE) -C src all @echo All of PostgreSQL successfully made. Ready to install. ! install: $(MAKE) -C doc install $(MAKE) -C src install - @echo PostgreSQL installation complete. ! installdirs uninstall distprep: $(MAKE) -C doc $@ $(MAKE) -C src $@ install-all-headers: $(MAKE) -C src $@ # clean, distclean, etc should apply to contrib too, even though # it's not built by default clean: --- 13,55 $(MAKE) -C src all @echo All of PostgreSQL successfully made. Ready to install. ! install: light-install install-all-headers install-config-files ! @echo PostgreSQL installation complete. ! ! light-install: $(MAKE) -C doc install $(MAKE) -C src install ! 
distprep: $(MAKE) -C doc $@ $(MAKE) -C src $@ + installdirs: + $(MAKE) -C doc $@ + $(MAKE) -C src $@ + $(MAKE) -C config $@ + $(mkinstalldirs) $(DESTDIR)$(pgxsdir) + + uninstall: + $(MAKE) -C doc $@ + $(MAKE) -C src $@ + $(MAKE) -C config $@ + $(DESTDIR)$(pgxsdir) + install-all-headers: $(MAKE) -C src $@ + install-config-files: installdirs + $(MAKE) -C src $@ + $(MAKE) -C config $@ + $(INSTALL_DATA) config.status $(DESTDIR)$(pgxsdir) + + install-client-only: + $(MAKE) -C src/bin install + $(MAKE) -C src/include install + $(MAKE) -C src/interfaces install + $(MAKE) -C doc install + # clean, distclean, etc should apply to contrib too, even though # it's not built by default clean: *** ./config/Makefile.orig Mon Jul 5 09:24:51 2004 --- ./config/Makefile Mon Jul 5 09:24:51 2004 *** *** 0 --- 1,19 + # $PostgreSQL$ + + subdir = config + top_builddir = .. + include $(top_builddir)/src/Makefile.global + + ins_files = install-sh mkinstalldirs + ins_dir = $(DESTDIR)$(pgxsdir)/config + + install-config-files: install + + install: installdirs + for f in $(ins_files) ; do $(INSTALL_DATA) $$f $(ins_dir) ; done; + + installdirs: + $(mkinstalldirs) $(ins_dir) + + uninstall: + for f in $(ins_files) ; do $(RM) $(ins_dir)/$$f ; done; *** ./contrib/btree_gist/pgxs.mk.orig Mon Jul 5 09:24:51 2004 --- ./contrib/btree_gist/pgxs.mk Mon Jul 5 09:24:51 2004 *** *** 0 --- 1,28 + # local configuration + MODULE_big = btree_gist + OBJS= btree_common.o btree_int2.o btree_int4.o btree_int8.o btree_float4.o btree_float8.o btree_ts.o + DATA_built = btree_gist.sql + DOCS = README.btree_gist + REGRESS = btree_gist + + EXTRA_CLEAN = btree_int2.c btree_int4.c btree_int8.c btree_float4.c btree_float8.c + + # generic settings + PGXS := $(shell pg_config --pgxs) + include $(PGXS) + + # local rules + btree_int2.c: btree_num.c.in + sed 's,__DEFINE_BTREE_TYPE_HERE__,BTREE_GIST_INT2,g;s,__BTREE_GIST_TYPE__,int16,g;s,__BTREE_GIST_TYPE2__,int2,g' $< >$@ + + btree_int4.c: btree_num.c.in + sed 
's,__DEFINE_BTREE_TYPE_HERE__,BTREE_GIST_INT4,g;s,__BTREE_GIST_TYPE__,int32,g;s,__BTREE_GIST_TYPE2__,int4,g' $< >$@ + + btree_int8.c: btree_num.c.in + sed 's,__DEFINE_BTREE_TYPE_HERE__,BTREE_GIST_INT8,g;s,__BTREE_GIST_TYPE__,int64,g;s,__BTREE_GIST_TYPE2__,int8,g' $< >$@ + + btree_float4.c: btree_num.c.in + sed 's,__DEFINE_BTREE_TYPE_HERE__,BTREE_GIST_FLOAT4,g;s,__BTREE_GIST_TYPE__,float4,g;s,__BTREE_GIST_TYPE2__,float4,g' $< >$@ + + btree_float8.c: btree_num.c.in + sed 's,__DEFINE_BTREE_TYPE_HERE__,BTREE_GIST_FLOAT8,g;s,__BTREE_GIST_TYPE__,float8,g;s,__BTREE_GIST_TYPE2__,float8,g' $< >$@ *** ./contrib/chkpass/pgxs.mk.orig Mon Jul 5 09:24:51 2004 --- ./contrib/chkpass/pgxs.mk Mon Jul 5 09:24:51 2004 *** *** 0 --- 1,12 + # local conf + MODULE_big = chkpass + OBJS = chkpass.o + SHLIB_LINK = $(filter -lcrypt, $(LIBS)) + DATA_built = chkpass.sql + DOCS = README.chkpass + + # global + PGXS := $(shell pg_config --pgxs) + include $(PGXS) + + # local rules *** ./contrib/cube/pgxs.mk.orig Mon Jul 5 09:24:51 2004 --- ./contrib/cube/pgxs.mk Mon Jul 5 09:24:51 2004 *** *** 0 --- 1,35 + # generic macros + MODULE_big = cube + OBJS= cube.o cubeparse.o + + DATA_built = cube.sql + DOCS = README.cube + REGRESS = cube + + EXTRA_CLEAN = cubeparse.c cubeparse.h cubescan.c y.tab.c y.tab.h + + # postgresql extensions + PGXS := $(shell
Re: [PATCHES] build infrastructure for extensions v3
Dear Peter,

I am still opposed to adding more targets of the form light-install,

It is a renaming of the previous 'install' target, as the new install is the previous 'server-install', so it is no different from the current status. I kept 'light-install' as a compromise wrt Tom's view that the default installation should not include header files, so that it would still be easy to do that...

client-only install, etc.

Ok, as you wish. I just felt it was easy and useful.

Please discuss this on -hackers.

It can be done as a separate patch later on if need be. Ok.

While I now understand what you are doing in contrib, I ask who is going to want to maintain two parallel sets of makefiles, one for PGXS and one for in-tree builds?

Yep, that is indeed a good point. Someone who wants to use the PGXS ones, anyway? Well, I've been disappointed in the past when I wanted to test a contrib and had to reconfigure everything to do so. So I really think it is useful to be able to add contribs as an afterthought.

I really tend to think that there could be the pgxs-only version, but that would break compile without installing, and you don't want that, as previously discussed. So there is no perfect solution :-(

What I can do is to enable the current makefiles to use pgxs if desired by the user, maybe with some switch, say make install vs make USE_PGXS=yes install... I'll lose the clean illustration part but would have one makefile only and still enable using pgxs if needed.

I realize they're good examples, but examples should be put into the documentation, so people can find them.

Humm.

Paste your documentation (pgxs.sgml) somewhere into xfunc.sgml, where it discusses writing user-defined functions. This material isn't long enough to warrant a chapter of its own.

Ok.

+ ifdef PGXS
+ LDFLAGS += -L$(pkglibdir)
+ endif
needs to disappear. There is nothing to link with in there. (If there is, that's a bug.)
I think I added it because it did not work without that, but I can recheck whether it is really needed.

libpgport should be installed in the normal libdir.

Ok.

I'll submit a new patch tomorrow to hopefully address all these issues. Thanks a lot for all these comments, have a nice day,

-- Fabien Coelho - [EMAIL PROTECTED]

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
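The single-makefile compromise proposed above (make install vs make USE_PGXS=yes install) could look roughly like this; the contrib module name is a placeholder, and the shape mirrors what the later patches actually do:

```makefile
# One makefile, two build modes (sketch)
ifdef USE_PGXS
# build against an already installed PostgreSQL, found via pg_config
PGXS := $(shell pg_config --pgxs)
include $(PGXS)
else
# classic in-tree build through the configured source tree
subdir = contrib/foo
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif
```

So `make install` keeps the in-tree behavior, while `make USE_PGXS=1 install` works as an afterthought against an installed server.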
Re: [PATCHES] pgxs: build infrastructure for extensions v4
Dear Peter,

Please find attached another new version of a patch which provides a working infrastructure for pg extensions. I hope it addresses all of Peter's comments. I'll be away for the next 3 weeks, so if minor changes are required it would be best if you could proceed without me...

This patch breaks building outside the source tree in a very elaborate and obvious way. Unfortunately, this is all tied together so I haven't figured out yet if it can be fixed easily.

I do not get your point. The aim is to be able to build outside the source tree as well?

Also, the use of the install targets is a bit strange (install-all-headers install libpgport.a). I would simply not bother and install everything all the time. However, those who advocate the install-all-headers target may want to propose a different scheme.

Part of this existed before the patch. I tried to make the best of existing targets, especially as you requested that fewer targets should be used. I do agree with you that installing libpgport.a under install-all-headers looks stupid, but the idea behind all-headers is that everything which is required for extensions is installed. What about install-dev-files? Or anything less misleading?

I updated all contrib makefiles so that they can be used either the standard way after a configure, or the new way without needing a configure but with an already installed postgreSQL. Just try them with cd contrib/foo ; make USE_PGXS=1 install *AFTER* postgresql has been configured, compiled and installed. It should be compiled and installed wrt the first pg_config which is found in the path.

This is redundant. I think by now I'm looking for a patch that does not touch contrib at all (except perhaps contrib-global.mk).

I really just touch that file in contrib. The only other exceptions are when other files were directly included, or to reorder includes wrt macro definitions, as far as I can remember.

Much of the trouble arises from being too clever around there. 
We're trying to allow external modules to build, not internal ones.

I really want to be able to install contribs as an afterthought and without reconfiguring. Anyway, sorry I cannot really help as I'm away from home.

Have a nice day,

-- Fabien Coelho - [EMAIL PROTECTED]

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
Re: [PATCHES] fix schema ownership on first connection preliminary
I have added the v2 version of this patch to the patch queue (attached).

I do apologize for not having looked at this patch sooner, but it's been at the bottom of the priority queue :-(

No need to apologize.

In general I do not like the approach of using SPI for this. [...]

Ok. Well, my actual motivation for using SPI is to write 3 lines of SQL where the intent is obvious instead of 60 lines of obscure C. I'm just lazy.

Also, since we already have AlterSchemaOwner coded at the C level, the original rationale of not wanting to add code has been rendered moot.

Well, I had not noticed this good function.

I do not like the hardwired assumption that userID 1 exists and is a superuser.

With UNIX, there is a pretty hard assumption that root is number 0, and I assumed - wrongly - that the same line of reasoning applied to pg.

The code is also broken to assume that ID 1 is the owner of the public schema in the source database (though this part is at least easy to fix, and would go away anyway if using AlterSchemaOwner).

My implicit assumption was that if it is not 1, it means that someone decided to give some template1 schemas to someone, and I don't have to change that. However, enough about implementation deficiencies. Anyway, it is good to know.

Did we really have consensus that the system should do this in the first place?

I'm convinced that the current status is not appropriate, but that does not mean that what the patch suggests is a good fix. The current status is that when you do CREATE DATABASE foo WITH OWNER calvin, user calvin, who owns the database, is prevented from manipulating the public schema... The underlying question is: what is a database owner? My feeling is that it is like a unix directory owner, that is someone who can do whatever they like at home, but cannot change the system (/bin, /lib).

I was only about halfway sold on the notion of changing public's ownership. 
I especially dislike the detail that it will alter the ownership of *all* non-builtin schemas, not only public. If someone has added special-purpose schemas to template1, it seems unlikely that this is the behavior they'd want. Mmm... my assumption here was that either it is a system schema, and it is special, or it is not, and then it is not special. But it is only an assumption, which makes sense with the unix-like owner approach. I think we should leave the behavior alone, at least for this release cycle. Ok. Thanks for your comments. -- Fabien Coelho - [EMAIL PROTECTED] ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
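The situation being discussed — the database owner not owning the public schema of their own database — can be shown with a short session; the user and database names come from the example above, the failing commands are illustrative:

```sql
-- as a superuser:
CREATE DATABASE foo WITH OWNER calvin;

-- then, connected to foo as calvin, who owns the database
-- but not its schemas (public is still owned by the superuser):
COMMENT ON SCHEMA public IS 'my stuff';     -- fails: must be owner of schema public
GRANT CREATE ON SCHEMA public TO someuser;  -- fails for the same reason
```

The patch's idea was to hand such non-system schemas over to the database owner on first connection, which is what the "unix directory owner" analogy argues for.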
[PATCHES] more massaging on pgxs postresql extension infrastructure
Dear patchers,

After Peter's massaging on pgxs, I think the infrastructure deserves some more massaging because:

(a) some files are missing (namely libpgport.? needed by pgbench for instance, and I guess possibly by others).
(b) I think it is a key feature that one should be able to compile contrib with the already installed postgresql, without having to reconfigure.

So I submit this new patch. I've tried to preserve Peter's modifications to my initial submissions, plus:

(1) add libpgport installation under the install-all-headers target;
(2) all contrib Makefiles can ALSO make use of the dynamic pgxs stuff => no direct inclusion of Makefile.global (which is generated by configure) => the USE_PGXS macro enables that (make USE_PGXS=1 install), otherwise it is just as before the patch, the statically configured infrastructure is used.

If there are still issues, which is perfectly possible, I'm really willing to fix them while preserving the ability to compile postgresql contribs with an already installed postgresql. Basically it works for me.

Thanks in advance,

-- Fabien Coelho - [EMAIL PROTECTED]

*** ./contrib/btree_gist/Makefile.orig Wed Aug 11 13:38:31 2004 --- ./contrib/btree_gist/Makefile Wed Aug 11 13:41:31 2004 *** *** 1,7 subdir = contrib/btree_gist top_builddir = ../.. - include $(top_builddir)/src/Makefile.global MODULE_big = btree_gist --- 1,6 *** *** 16,19 REGRESS = init int2 int4 int8 float4 float8 cash oid timestamp timestamptz time timetz \ date interval macaddr inet cidr text varchar char bytea bit varbit numeric ! include $(top_srcdir)/contrib/contrib-global.mk --- 15,18 REGRESS = init int2 int4 int8 float4 float8 cash oid timestamp timestamptz time timetz \ date interval macaddr inet cidr text varchar char bytea bit varbit numeric ! include $(top_builddir)/contrib/contrib-global.mk *** ./contrib/chkpass/Makefile.orig Sat Nov 29 20:51:19 2003 --- ./contrib/chkpass/Makefile Wed Aug 11 13:37:23 2004 *** *** 2,8 subdir = contrib/chkpass top_builddir = ../.. 
- include $(top_builddir)/src/Makefile.global MODULE_big = chkpass OBJS = chkpass.o --- 2,7 *** *** 10,13 DATA_built = chkpass.sql DOCS = README.chkpass ! include $(top_srcdir)/contrib/contrib-global.mk --- 9,12 DATA_built = chkpass.sql DOCS = README.chkpass ! include $(top_builddir)/contrib/contrib-global.mk *** ./contrib/contrib-global.mk.origTue Aug 10 08:29:01 2004 --- ./contrib/contrib-global.mk Wed Aug 11 14:54:28 2004 *** *** 1,4 # $PostgreSQL: pgsql-server/contrib/contrib-global.mk,v 1.8 2004/07/30 12:26:39 petere Exp $ NO_PGXS = 1 ! include $(top_srcdir)/src/makefiles/pgxs.mk --- 1,11 # $PostgreSQL: pgsql-server/contrib/contrib-global.mk,v 1.8 2004/07/30 12:26:39 petere Exp $ + ifdef USE_PGXS + # use PGXS dynamic infrastructure to compile with an installed postgresql + PGXS := $(shell pg_config --pgxs) + include $(PGXS) + else + # simple local compilation for locally configured postgresql NO_PGXS = 1 ! include $(top_builddir)/src/makefiles/pgxs.mk ! endif *** ./contrib/cube/Makefile.origWed Aug 11 13:38:31 2004 --- ./contrib/cube/Makefile Wed Aug 11 13:42:19 2004 *** *** 2,8 subdir = contrib/cube top_builddir = ../.. - include $(top_builddir)/src/Makefile.global MODULE_big = cube OBJS= cube.o cubeparse.o --- 2,7 *** *** 11,16 --- 10,18 DOCS = README.cube REGRESS = cube + EXTRA_CLEAN = cubeparse.c cubeparse.h cubescan.c y.tab.c y.tab.h + + include $(top_builddir)/contrib/contrib-global.mk # cubescan is compiled as part of cubeparse cubeparse.o: cubescan.c *** *** 32,39 else @$(missing) flex $ $@ endif - - EXTRA_CLEAN = cubeparse.c cubeparse.h cubescan.c y.tab.c y.tab.h - - - include $(top_srcdir)/contrib/contrib-global.mk --- 34,36 *** ./contrib/dbase/Makefile.orig Wed Aug 11 13:38:31 2004 --- ./contrib/dbase/MakefileWed Aug 11 13:43:22 2004 *** *** 2,8 subdir = contrib/dbase top_builddir = ../.. ! include $(top_builddir)/src/Makefile.global PROGRAM = dbf2pg OBJS = dbf.o dbf2pg.o endian.o --- 2,8 subdir = contrib/dbase top_builddir = ../.. ! 
#include $(top_builddir)/src/Makefile.global PROGRAM = dbf2pg OBJS = dbf.o dbf2pg.o endian.o *** *** 18,21 DOCS = README.dbf2pg MAN = dbf2pg.1# XXX not implemented ! include $(top_srcdir)/contrib/contrib-global.mk --- 18,21 DOCS = README.dbf2pg MAN = dbf2pg.1# XXX not implemented ! include $(top_builddir)/contrib/contrib-global.mk *** ./contrib/dblink/Makefile.orig Wed Aug 11 13:38:31 2004 --- ./contrib/dblink/Makefile Wed Aug 11 13:45:35 2004
[PATCHES] pg_dump 'die on errors' option
Dear patchers,

Please find attached a submission to add a "die on errors" option to pg_restore, as it seems that some people have scripts that rely on the previous abort-on-error default behavior when restoring data with a direct connection. It works for me. Maybe Philip could test that it works for him?

Have a nice day,

-- Fabien Coelho - [EMAIL PROTECTED]

*** ./doc/src/sgml/ref/pg_restore.sgml.orig	Thu Jul 15 10:29:00 2004
--- ./doc/src/sgml/ref/pg_restore.sgml	Thu Aug 12 10:29:09 2004
*** 130,135 ****
--- 130,147 ----
   </varlistentry>

   <varlistentry>
+   <term><option>-D</option></term>
+   <term><option>--die-on-errors</option></term>
+   <listitem>
+    <para>
+     Die if an error is encountered while sending sql commands into
+     the database. Default is to keep going and to display a count of
+     errors at the end of the restoration.
+    </para>
+   </listitem>
+  </varlistentry>
+
+  <varlistentry>
    <term><option>-f <replaceable>filename</replaceable></option></term>
    <term><option>--file=<replaceable>filename</replaceable></option></term>
    <listitem>

*** ./src/bin/pg_dump/pg_backup.h.orig	Thu Jul 15 10:29:01 2004
--- ./src/bin/pg_dump/pg_backup.h	Thu Aug 12 10:18:43 2004
*** 103,108 ****
--- 103,109 ----
   char *username;
   int ignoreVersion;
   int requirePassword;
+  int die_on_errors;
   bool *idWanted;
   bool limitToList;

*** ./src/bin/pg_dump/pg_backup_archiver.c.orig	Tue Aug 10 08:29:06 2004
--- ./src/bin/pg_dump/pg_backup_archiver.c	Thu Aug 12 14:41:35 2004
*** 461,466 ****
--- 461,467 ----
   opts->format = archUnknown;
   opts->suppressDumpWarnings = false;
+  opts->die_on_errors = false;
   return opts;
  }

*** ./src/bin/pg_dump/pg_restore.c.orig	Thu Jul 15 10:29:01 2004
--- ./src/bin/pg_dump/pg_restore.c	Thu Aug 12 14:44:13 2004
*** 90,95 ****
--- 90,96 ----
   {"create", 0, NULL, 'C'},
   {"data-only", 0, NULL, 'a'},
   {"dbname", 1, NULL, 'd'},
+  {"die-on-errors", 0, NULL, 'D'},
   {"file", 1, NULL, 'f'},
   {"format", 1, NULL, 'F'},
   {"function", 1, NULL, 'P'},
*** 141,147 ****
   }
   }
! while ((c = getopt_long(argc, argv, "acCd:f:F:h:iI:lL:Op:P:RsS:t:T:uU:vWxX:", cmdopts, NULL)) != -1)
   {
   switch (c)
--- 142,148 ----
   }
   }
! 
while ((c = getopt_long(argc, argv, "acCd:Df:F:h:iI:lL:Op:P:RsS:t:T:uU:vWxX:", cmdopts, NULL)) != -1)
   {
   switch (c)
*** 159,164 ****
--- 160,168 ----
   case 'd':
   opts->dbname = strdup(optarg);
   break;
+  case 'D':
+  opts->die_on_errors = true;
+  break;
   case 'f': /* output file name */
   opts->filename = strdup(optarg);
   break;
*** 321,330 ****
   /* Let the archiver know how noisy to be */
   AH->verbose = opts->verbose;
! /* restore keeps submitting sql commands as pg_restore ... | psql ...
!  * this behavior choice could be turned into an option.
!  */
! AH->die_on_errors = false;
   if (opts->tocFile)
   SortTocFromFile(AH, opts);
--- 325,333 ----
   /* Let the archiver know how noisy to be */
   AH->verbose = opts->verbose;
! /* whether to keep submitting sql commands as pg_restore ... | psql ... */
! AH->die_on_errors = opts->die_on_errors;
   if (opts->tocFile)
   SortTocFromFile(AH, opts);
*** 391,396 ****
--- 394,400 ----
   printf(_("  -p, --port=PORT          database server port number\n"));
   printf(_("  -U, --username=NAME      connect as specified database user\n"));
   printf(_("  -W, --password           force password prompt (should happen automatically)\n"));
+  printf(_("  -D, --die-on-errors      die on errors, default is to keep going\n"));
   printf(_("\nIf no input file name is supplied, then standard input is used.\n\n"));
   printf(_("Report bugs to [EMAIL PROTECTED].\n"));

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org
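With the patch applied, the old abort-on-first-error behavior would be requested explicitly; the option is the one this submission proposes, and the database and archive names are hypothetical:

```
pg_restore --die-on-errors -d mydb backup.dump
# or, using the short form added by the patch
pg_restore -D -d mydb backup.dump
```

Without the option, pg_restore keeps going after SQL errors and reports an error count at the end, matching the pg_restore ... | psql pipeline behavior.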
Re: [PATCHES] [BUGS] BUG #1219: pgxs does not work fully
Please find enclosed a submission to fix these problems.

The patch adds the missing libpgport.a file to the installation under install-all-headers. It is needed by some contribs. I install the library in pkglibdir, but I was wondering whether it should be libdir? I was also wondering whether it would make sense to have a libpgport.so?

It fixes various macros which are used by contrib makefiles, especially libpq_*dir and LDFLAGS when used under PGXS. It seems to me that they are needed.

It adds the ability to test and use PGXS with contribs, with make USE_PGXS=1. Without the macro, this is exactly as before, there should be no difference, esp. wrt the vpath feature that seemed broken by the previous submission. So it should not harm anybody, and it is useful at least to me.

It fixes some inconsistencies in various contrib makefiles (useless override, := instead of =).

It works for me; it validates. I'm available to fix any problem with this patch.

Have a nice day,

-- Fabien Coelho - [EMAIL PROTECTED]

*** ./contrib/btree_gist/Makefile.orig Fri May 28 15:09:43 2004 --- ./contrib/btree_gist/Makefile Tue Aug 17 11:54:08 2004 *** *** 1,8 - subdir = contrib/btree_gist - top_builddir = ../.. - include $(top_builddir)/src/Makefile.global - MODULE_big = btree_gist OBJS= btree_gist.o btree_utils_num.o btree_utils_var.o btree_int2.o btree_int4.o btree_int8.o \ --- 1,4 *** *** 16,19 --- 12,23 REGRESS = init int2 int4 int8 float4 float8 cash oid timestamp timestamptz time timetz \ date interval macaddr inet cidr text varchar char bytea bit varbit numeric + ifdef USE_PGXS + PGXS = $(shell pg_config --pgxs) + include $(PGXS) + else + subdir = contrib/btree_gist + top_builddir = ../.. 
+ include $(top_builddir)/src/Makefile.global include $(top_srcdir)/contrib/contrib-global.mk + endif *** ./contrib/chkpass/Makefile.orig Sat Nov 29 20:51:19 2003 --- ./contrib/chkpass/Makefile Tue Aug 17 11:54:08 2004 *** *** 1,13 # $PostgreSQL: pgsql-server/contrib/chkpass/Makefile,v 1.5 2003/11/29 19:51:19 pgsql Exp $ - subdir = contrib/chkpass - top_builddir = ../.. - include $(top_builddir)/src/Makefile.global - MODULE_big = chkpass OBJS = chkpass.o SHLIB_LINK = $(filter -lcrypt, $(LIBS)) DATA_built = chkpass.sql DOCS = README.chkpass include $(top_srcdir)/contrib/contrib-global.mk --- 1,17 # $PostgreSQL: pgsql-server/contrib/chkpass/Makefile,v 1.5 2003/11/29 19:51:19 pgsql Exp $ MODULE_big = chkpass OBJS = chkpass.o SHLIB_LINK = $(filter -lcrypt, $(LIBS)) DATA_built = chkpass.sql DOCS = README.chkpass + ifdef USE_PGXS + PGXS = $(shell pg_config --pgxs) + include $(PGXS) + else + subdir = contrib/chkpass + top_builddir = ../.. + include $(top_builddir)/src/Makefile.global include $(top_srcdir)/contrib/contrib-global.mk + endif *** ./contrib/cube/Makefile.origSat Nov 29 20:51:21 2003 --- ./contrib/cube/Makefile Tue Aug 17 11:54:08 2004 *** *** 1,9 # $PostgreSQL: pgsql-server/contrib/cube/Makefile,v 1.11 2003/11/29 19:51:21 pgsql Exp $ - subdir = contrib/cube - top_builddir = ../.. - include $(top_builddir)/src/Makefile.global - MODULE_big = cube OBJS= cube.o cubeparse.o --- 1,5 *** *** 11,16 --- 7,25 DOCS = README.cube REGRESS = cube + EXTRA_CLEAN = cubeparse.c cubeparse.h cubescan.c y.tab.c y.tab.h + + + ifdef USE_PGXS + PGXS = $(shell pg_config --pgxs) + include $(PGXS) + else + subdir = contrib/cube + top_builddir = ../.. 
+ include $(top_builddir)/src/Makefile.global + include $(top_srcdir)/contrib/contrib-global.mk + endif + # cubescan is compiled as part of cubeparse cubeparse.o: cubescan.c *** *** 32,39 else @$(missing) flex $ $@ endif - - EXTRA_CLEAN = cubeparse.c cubeparse.h cubescan.c y.tab.c y.tab.h - - - include $(top_srcdir)/contrib/contrib-global.mk --- 41,43 *** ./contrib/dbase/Makefile.orig Sat Nov 29 20:51:22 2003 --- ./contrib/dbase/MakefileTue Aug 17 11:54:08 2004 *** *** 1,9 # $PostgreSQL: pgsql-server/contrib/dbase/Makefile,v 1.5 2003/11/29 19:51:22 pgsql Exp $ - subdir = contrib/dbase - top_builddir = ../.. - include $(top_builddir)/src/Makefile.global - PROGRAM = dbf2pg OBJS = dbf.o dbf2pg.o endian.o PG_CPPFLAGS = -I$(libpq_srcdir) --- 1,5 *** *** 18,21 --- 14,26 DOCS = README.dbf2pg MAN = dbf2pg.1# XXX not implemented + + ifdef USE_PGXS + PGXS = $(shell pg_config --pgxs) + include $(PGXS) + else + subdir = contrib/dbase + top_builddir = ../.. + include $(top_builddir)/src/Makefile.global include $(top_srcdir)/contrib/contrib-global.mk + endif *** ./contrib/dblink/Makefile.orig Sat Nov 29 20:51:34 2003 --- ./contrib/dblink/Makefile Tue Aug 17 11:54:08 2004 *** *** 1,9 # $PostgreSQL: pgsql-server
Re: [PATCHES] [BUGS] BUG #1219: pgxs does not work fully
On Tuesday, 17 August 2004 14:26, Fabien COELHO wrote:

The patch adds the missing libpgport.a file to the installation under install-all-headers. It is needed by some contribs. I install the library in pkglibdir, but I was wondering whether it should be libdir?

Yes it should. Please change it.

Dear Peter, dear patchers,

Please find attached a small patch against current CVS head that fixes the pgport library installation so that it goes to libdir instead of pkglibdir. It works for me.

Have a nice day,

-- Fabien Coelho - [EMAIL PROTECTED]

*** ./src/Makefile.global.in.orig Mon Aug 23 09:15:09 2004 --- ./src/Makefile.global.in Tue Aug 24 15:21:17 2004 *** *** 360,366 LIBS := -lpgport $(LIBS) ifdef PGXS # where libpgport.a is installed ! LDFLAGS := -L$(pkglibdir) $(LDFLAGS) else LDFLAGS := -L$(top_builddir)/src/port $(LDFLAGS) endif --- 360,366 LIBS := -lpgport $(LIBS) ifdef PGXS # where libpgport.a is installed ! LDFLAGS := -L$(libdir) $(LDFLAGS) else LDFLAGS := -L$(top_builddir)/src/port $(LDFLAGS) endif *** ./src/port/Makefile.orig Mon Aug 23 09:15:10 2004 --- ./src/port/Makefile Tue Aug 24 15:18:08 2004 *** *** 22,31 # libpgport is needed by some contrib install-all-headers: ! $(INSTALL_STLIB) libpgport.a $(DESTDIR)$(pkglibdir) uninstall: ! $(RM) $(DESTDIR)$(pkglibdir)/libpgport.a libpgport.a: $(LIBOBJS) $(AR) $(AROPT) $@ $^ --- 22,31 # libpgport is needed by some contrib install-all-headers: ! $(INSTALL_STLIB) libpgport.a $(DESTDIR)$(libdir) uninstall: ! $(RM) $(DESTDIR)$(libdir)/libpgport.a libpgport.a: $(LIBOBJS) $(AR) $(AROPT) $@ $^ ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
[PATCHES] pgxs default installation + various fixes
Dear patchers,

This patch addresses the open item "make pgxs install by default" in Bruce's list.

(1) make install installs everything.
(2) make light-install does not install pgxs and server dev stuff; this is the previous version of make install. This target may be of interest to packagers. If you want any other name, just tell me what you want!
(3) the installation.sgml documentation reflects the new status... but only with my poor English.
(4) I noticed that some pgxs-related files were incidentally installed under light-install, so I make sure that they are not.
(5) a minor stylistic fix in the pgxs makefile, so that it does not have a double slash (/some/directory//sub/directory) in some paths.

What might be considered for future improvements, but ISTM not mandatory for 8.0:
- maybe rename the install-all-headers target, as it also installs scripts, makefiles and libraries useful for server development. I would suggest install-server-dev.
- make the contrib regressions work with pgxs. I'm not sure about what is needed for that.

I'm available to fix any problem with this patch.

Have a nice day,

-- Fabien Coelho - [EMAIL PROTECTED]

*** ./GNUmakefile.in.orig  Tue Aug 10 08:29:01 2004
--- ./GNUmakefile.in       Thu Sep 2 09:30:22 2004
***************
*** 14,23 ****
        $(MAKE) -C config all
        @echo "All of PostgreSQL successfully made. Ready to install."
! install:
        $(MAKE) -C doc install
        $(MAKE) -C src install
-       $(MAKE) -C config install
        @echo "PostgreSQL installation complete."
  installdirs uninstall distprep:
--- 14,25 ----
        $(MAKE) -C config all
        @echo "All of PostgreSQL successfully made. Ready to install."
! # default installation includes dev stuff
! install: light-install install-all-headers
!
! light-install:
        $(MAKE) -C doc install
        $(MAKE) -C src install
        @echo "PostgreSQL installation complete."
  installdirs uninstall distprep:
***************
*** 27,32 ****
--- 29,35 ----
  install-all-headers:
        $(MAKE) -C src $@
+       $(MAKE) -C config $@

  # clean, distclean, etc should apply to contrib too, even though
  # it's not built by default
*** ./config/Makefile.orig Fri Jul 30 14:26:39 2004
--- ./config/Makefile      Thu Sep 2 09:29:32 2004
***************
*** 5,11 ****
  include $(top_builddir)/src/Makefile.global
! install: all installdirs
        $(INSTALL_SCRIPT) $(srcdir)/install-sh $(DESTDIR)$(pgxsdir)/config/install-sh
        $(INSTALL_SCRIPT) $(srcdir)/mkinstalldirs $(DESTDIR)$(pgxsdir)/config/mkinstalldirs
--- 5,11 ----
  include $(top_builddir)/src/Makefile.global
! install-all-headers: all installdirs
        $(INSTALL_SCRIPT) $(srcdir)/install-sh $(DESTDIR)$(pgxsdir)/config/install-sh
        $(INSTALL_SCRIPT) $(srcdir)/mkinstalldirs $(DESTDIR)$(pgxsdir)/config/mkinstalldirs
*** ./doc/src/sgml/installation.sgml.orig  Mon Jun 21 08:36:52 2004
--- ./doc/src/sgml/installation.sgml       Thu Sep 2 09:35:50 2004
***************
*** 1042,1059 ****
  </para>
  <para>
!  The standard installation provides only the header files needed for client
!  application development. If you plan to do any server-side program
!  development (such as custom functions or data types written in C),
!  then you may want to install the entire <productname>PostgreSQL</>
!  include tree into your target include directory. To do that, enter
  <screen>
! <userinput>gmake install-all-headers</userinput>
  </screen>
!  This adds a megabyte or two to the installation footprint, and is only
!  useful if you don't plan to keep the whole source tree around for
!  reference. (If you do, you can just use the source's include
!  directory when building server-side software.)
  </para>
  <formalpara>
--- 1042,1063 ----
  </para>
  <para>
!  The standard installation provides all the header files needed for client
!  application development as well as for any server-side program
!  development (such as custom functions or data types written in C).
!  If you want a lighter installation of <productname>PostgreSQL</> enter
  <screen>
! <userinput>gmake light-install</userinput>
  </screen>
!  instead of the standard install.
!  This makes the installation footprint a megabyte or two lighter by
!  not installing server-side program development header files,
!  libraries, scripts and other support files.
!  It is only useful if you don't plan to add extension modules (such as
!  contribs) later.
!  If you do so and you change your mind later, you can still use the
!  source's include directory when building server-side software.
!  This target still installs client application header files.
  </para>
  <formalpara>
*** ./src/Makefile.orig    Mon Aug 23 09:15:09 2004
--- ./src/Makefile Thu Sep 2 09:46:42 2004
***************
*** 25,31 ****
        $(MAKE) -C makefiles $@
        $(MAKE) -C utils
Re: [PATCHES] pgxs default installation + various fixes
Dear Peter,

>> (2) make light-install does not install pgxs and server dev stuff.
>> this is the previous version of make install.
>
> If we do that, we should remove install-all-headers. It's very
> confusing otherwise.

>> This target maybe of interest of packagers.
>
> This I don't understand.

I meant: "This target may be of interest for packagers" — sorry for my poor English writing and typing skills.

This is to take into account Tom's view that distributions are often split with a separate dev package for development. For instance, under Debian there are: postgresql, postgresql-client, postgresql-dev, postgresql-doc... So this target would help the package maintainer check which files belong to which part of such a split installation. I guess it could also help administrators who would like to install a small-footprint database.

Now, if everyone agrees that having a distinct installation for server headers and other stuff is useless, then indeed we can just keep the install target and drop install-all-headers altogether; I agree with you. That would indeed simplify the patch and the various makefiles, and be less confusing. If there is a consensus, I can remove both the light-install and install-all-headers targets in another submission.

Have a nice day,

-- Fabien Coelho - [EMAIL PROTECTED]

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [PATCHES] pgxs default installation + various fixes
Dear Peter,

>> This is to take into account Tom's view that distributions are often
>> split with a separate dev package for development. For instance under
>> Debian there are: postgresql, postgresql-client, postgresql-dev,
>> postgresql-doc... So this target would help the package maintainer
>> check which files belong to which part of such a split installation.
>
> No, the way this works is that the package building script installs
> everything in one tree and then has file lists (with wildcards) that
> determine which files belong in which package.

Ok. Thus it suggests that the only use for the two targets is whether someone is interested in having a smaller-footprint version...

The other reason why I stuck to it in the submission is to be upward compatible with the previous state, which had 2 different targets. I feel it is a political decision to shift to a single install target, and I'm not the one to take such a decision... I'm only really interested in having server headers and pgxs stuff installed by default, so that extensions can be reliably added on a default installation.

So, the question remains the same: is there a consensus to drop the install-all-headers/light-install targets for a simple and full install? If so, I'll make a new submission.

-- Fabien Coelho - [EMAIL PROTECTED]

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [PATCHES] pgxs default installation + various fixes
Dear Alvaro,

>> As I'm into these files, I can say that one of the reasons for that is
>> that the shell scripts in the makefile look inefficient, with nested
>> for-loops and one-at-a-time config/install-sh forked-script copies for
>> 350 header files, out of the 971 files of a standard installation.
>
> Also the install-sh script apparently is way more complex than it needs
> to be.

Maybe. I guess the options are there because they might be useful sometimes? There's probably a lot of that complexity (and subsequent slowness) that install-all-headers doesn't need.

Maybe. I don't have a clear view of the portability issues which, I guess, justify this script.

> A lot of time goes into processing the script itself rather than doing
> useful work.

Yes.

> Is there an objection to trying to convert it to a simpler, faster
> alternative? Maybe even one that receives multiple files as arguments,
> which would reduce the number of times it is called by an order of
> magnitude.

Yes, handling several files at a time could indeed improve things. But this means changing the syntax somehow, and fixing the makefiles...

Also, most unix boxes have an install program which might be more efficient and which handles several files. I do not know whether it has all the required facilities. For instance, apache looks for a BSD install program at configuration time... and a slow but compatible shell substitute is used instead if none is available. Maybe this can be reused quite simply by postgresql, with their kind permission. As apache is quite portable, it might be good enough for pg.

This seems a reasonable todo objective, but I'm not sure it should be done for 8.0 as it changes the installation procedure significantly? Well, maybe it could be done quite quickly. It does not look too difficult to implement with the apache example at hand.

-- Fabien Coelho - [EMAIL PROTECTED]

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
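The cost being discussed — forking one child process per installed file versus handling many files in a single invocation — is easy to measure outside of make. The following is a rough sketch in plain Python (standing in for install-sh; the file names and the count of 50 are made up for illustration), not a benchmark of the real build system:

```python
import subprocess
import sys
import tempfile
import time
from pathlib import Path

src, dst1, dst2 = (Path(tempfile.mkdtemp()) for _ in range(3))
files = []
for i in range(50):  # stand-in for the ~350 header files
    f = src / f"header_{i}.h"
    f.write_text(f"/* header {i} */\n")
    files.append(f)

# One child process per file, as the nested for-loops do with install-sh.
t0 = time.time()
for f in files:
    subprocess.run([sys.executable, "-c",
                    "import shutil, sys; shutil.copy(sys.argv[1], sys.argv[2])",
                    str(f), str(dst1)], check=True)
per_file = time.time() - t0

# One process handling all files at once (the "multi-argument install" idea).
t0 = time.time()
subprocess.run([sys.executable, "-c",
                "import shutil, sys; [shutil.copy(a, sys.argv[-1]) for a in sys.argv[1:-1]]",
                *map(str, files), str(dst2)], check=True)
multi = time.time() - t0

print(len(list(dst1.iterdir())), len(list(dst2.iterdir())))
print(per_file > multi)  # spawning once is typically much faster
```

The per-file variant pays the process startup cost 50 times, which is exactly the overhead a multi-argument install script would eliminate.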
Re: [PATCHES] pgxs default installation + various fixes
Dear Tom, dear Peter,

> By my count it adds about 2 megabytes to the installed footprint, which
> perhaps is not so much now as it was at the time.

Ok. I agree. I'm investigating how to do that, and the choice will be up to the committers.

> A larger problem from my point of view is that the installation process
> seems quite inefficient. make install-all-headers takes a full minute
> on my devel machine, which is above my threshold of pain for something
> that I often do more than once a day.

As I'm into these files, I can say that one of the reasons for that is that the shell scripts in the makefile look inefficient, with nested for-loops and one-at-a-time config/install-sh forked-script copies for 350 header files, out of the 971 files of a standard installation. I guess it is a deliberate portability choice that only the standard shell and file wildcards are used. I'm not sure there is a simple and *portable* performance fix for the installation.

Some files are copied more than once: for instance, pg_config.h is copied both into include/ and include/server, and there are other examples like that. In fact, everything under include/ except include/server seems also to be copied into include/server. It amounts to about 200KB and 28 duplicated files. Not really a big deal, but not really clean either.

If there is only one installation target which includes all header files, ISTM that the include/server subdirectory does not make much sense anymore? On the other hand, it might help packagers to have a clear list of the use of all header files, so that they can figure out where each belongs.

Opinions?

-- Fabien Coelho - [EMAIL PROTECTED]

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[PATCHES] pgxs default installation + various fixes - v2
Dear patchers,

please find attached an alternate submission which addresses the open item "make pgxs install by default". It is up to the committers to choose.

(1) there is only one install target; no more install-all-headers. It simplifies/changes several makefiles.
(2) the documentation reflects the change.
(3) a minor fix on pgxs to use a nicer path without a double slash.

I'm available to fix any issue with this patch.

A possible improvement wrt pgxs, maybe not mandatory for 8.0, would be to allow regression tests, but again I'm not sure about what is needed and what the implications are. I have to investigate, and I don't have much time right now...

Have a nice day,

-- Fabien Coelho - [EMAIL PROTECTED]

*** ./GNUmakefile.in.orig  Tue Aug 10 08:29:01 2004
--- ./GNUmakefile.in       Fri Sep 3 10:01:40 2004
***************
*** 15,23 ****
        @echo "All of PostgreSQL successfully made. Ready to install."
  install:
!       $(MAKE) -C doc install
!       $(MAKE) -C src install
!       $(MAKE) -C config install
        @echo "PostgreSQL installation complete."
  installdirs uninstall distprep:
--- 15,23 ----
        @echo "All of PostgreSQL successfully made. Ready to install."
  install:
!       $(MAKE) -C doc $@
!       $(MAKE) -C src $@
!       $(MAKE) -C config $@
        @echo "PostgreSQL installation complete."
  installdirs uninstall distprep:
***************
*** 25,33 ****
        $(MAKE) -C src $@
        $(MAKE) -C config $@
- install-all-headers:
-       $(MAKE) -C src $@
  # clean, distclean, etc should apply to contrib too, even though
  # it's not built by default
  clean:
--- 25,30 ----
*** ./Makefile.orig        Sat Feb 10 03:31:25 2001
--- ./Makefile     Fri Sep 3 09:49:02 2004
***************
*** 11,17 ****
  # GNUmakefile won't exist yet, so we catch that case as well.
! all check install installdirs install-all-headers installcheck uninstall dep depend clean distclean maintainer-clean:
        @if [ ! -f GNUmakefile ] ; then \
           echo "You need to run the 'configure' program first. See the file"; \
           echo "'INSTALL' for installation instructions." ; \
--- 11,17 ----
  # GNUmakefile won't exist yet, so we catch that case as well.
! all check install installdirs installcheck uninstall dep depend clean distclean maintainer-clean:
        @if [ ! -f GNUmakefile ] ; then \
           echo "You need to run the 'configure' program first. See the file"; \
           echo "'INSTALL' for installation instructions." ; \
*** ./doc/src/sgml/installation.sgml.orig  Mon Jun 21 08:36:52 2004
--- ./doc/src/sgml/installation.sgml       Fri Sep 3 09:59:56 2004
***************
*** 1042,1059 ****
  </para>
  <para>
!  The standard installation provides only the header files needed for client
!  application development. If you plan to do any server-side program
!  development (such as custom functions or data types written in C),
!  then you may want to install the entire <productname>PostgreSQL</>
!  include tree into your target include directory. To do that, enter
! <screen>
! <userinput>gmake install-all-headers</userinput>
! </screen>
!  This adds a megabyte or two to the installation footprint, and is only
!  useful if you don't plan to keep the whole source tree around for
!  reference. (If you do, you can just use the source's include
!  directory when building server-side software.)
  </para>
  <formalpara>
--- 1042,1050 ----
  </para>
  <para>
!  The standard installation provides all the header files needed for client
!  application development as well as for any server-side program
!  development (such as custom functions or data types written in C).
  </para>
  <formalpara>
*** ./src/Makefile.orig    Mon Aug 23 09:15:09 2004
--- ./src/Makefile Fri Sep 3 09:51:37 2004
***************
*** 33,42 ****
        $(INSTALL_DATA) $(srcdir)/Makefile.shlib $(DESTDIR)$(pgxsdir)/$(subdir)/Makefile.shlib
        $(INSTALL_DATA) $(srcdir)/nls-global.mk $(DESTDIR)$(pgxsdir)/$(subdir)/nls-global.mk
- install-all-headers:
-       $(MAKE) -C include $@
-       $(MAKE) -C port $@
  installdirs: installdirs-local
  installdirs-local:
--- 33,38 ----
*** ./src/include/Makefile.orig    Sat Nov 29 20:52:08 2003
--- ./src/include/Makefile Fri Sep 3 10:51:42 2004
***************
*** 2,10 ****
  #
  # Makefile for src/include
  #
! # 'make install' installs only those headers needed for client-side
! # programming. 'make install-all-headers' installs the whole contents
! # of src/include.
  #
  # $PostgreSQL: pgsql-server/src/include/Makefile,v 1.12 2003/11/29 19:52:08 pgsql Exp $
  #
--- 2,8 ----
  #
  # Makefile for src/include
  #
! # 'make install' installs the whole contents of src/include.
  #
  # $PostgreSQL: pgsql-server/src/include/Makefile,v 1.12 2003/11/29 19:52:08 pgsql Exp $
  #
***************
*** 18,25 ****
  all: pg_config.h pg_config_os.h
! # Install only selected headers
  install: all
Re: [PATCHES] Tsearch2 update for Win32.
Dear Magnus,

>> Oh. Hmm ... wouldn't it be easier and safer to launch a psql script as
>> a subprocess?
>
> I'd say no. Executing processes from the installer environment can be a
> headache (we've had enough problems with initdb already..). And you
> have to go through all the weirdness with the commandline quoting etc
> on win32... It's a lot easier to execute a SQL script line-by-line.
> Which also lets us trap the errors easier etc.

Maybe you're suggesting that psql and other commands' functionalities should be made into some kind of dynamic library to be loaded and called from the installer, if using an external program is so difficult on win32?

I used to think that windows was posix somehow, so that it should provide some 'exec*' functions which do not rely on shell scripting and should not require much quoting. Partially replicating functionalities, and adding bugs or limitations to them, does not seem such a good idea from a software engineering perspective.

Well, just my 0.02 EUR. I think it is a good thing that pg is available on windows, so thanks for that anyway!

Have a nice day,

-- Fabien.

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
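The 'exec*' point above — passing an argv array directly to the child so no shell quoting rules apply — can be illustrated in any language with a process API of that shape. A minimal sketch (Python's subprocess standing in for the installer; the child here is just another interpreter, not psql):

```python
import subprocess
import sys

# An argument containing spaces and quote characters: with an argv list
# no shell is involved, so there are no quoting rules to get wrong.
tricky = 'a b "c" \'d\''

# Spawn a child process that echoes its first argument back unchanged.
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", tricky],
    capture_output=True, text=True, check=True,
).stdout.rstrip("\n")

print(out == tricky)  # → True: the argument survives verbatim
```

On POSIX this maps to execve(); on win32 the situation is murkier because the child receives a single command line that it must re-parse, which is the "weirdness" Magnus alludes to.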
Re: [PATCHES] pgxs default installation + various fixes
> Bruce Momjian <[EMAIL PROTECTED]> writes:
>> Where are we on this?
>
> I think we're waiting on Peter to review it.

Yes, indeed. I've made two different submissions on this issue. It is up to the reviewer/committer to choose, and I'm willing to solve any issue with these patches if needed.

-- Fabien Coelho - [EMAIL PROTECTED]

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Re: [PATCHES] pgxs default installation + various fixes
Dear Bruce,

>> So, the question remains the same: is there a consensus to drop the
>> install-all-headers/light-install targets for a simple and full
>> install?
>
> This is the 8.0 release --- let's remove those targets. Our logic has
> usually been to remove confusing options and document their removal
> rather than keep them around and continue confusing people forever.
> Would you resubmit with adjustments for comments made?

What comments do you want me to adjust? Do you mean I should add a paragraph in the documentation saying that there is now only one install target? I updated the documentation in both patch submissions, but if more is required I'll do what you ask; just be more specific.

The v2 version is the one which has a single and simple install target everywhere.

-- Fabien.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Re: [PATCHES] pg_regress --temp-keep
Dear Tom,

> What I'd rather see is some effort spent on speeding up our install
> script, maybe by allowing it to install multiple files per invocation.
> The recent changes to force install-all-headers have caused a serious
> degradation of make install performance,

Yes, I agree.

> and I think that's where the issue really needs to be solved.

Improving the performance would require fixing the install script so that it can handle more than one argument at once, and/or using a more or less standard install program when available. It seems that there was a disagreement on these issues, so nothing was done in the end.

I think we could try the same procedure as apache:
- look for a standard BSD-install program (in configure)
- fall back to a multi-argument script as a replacement if none is found

The opposition to this was because of portability concerns, which indeed require some attention.

-- Fabien.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [PATCHES] pg_regress --temp-keep
Dear Tom,

>> Improving the performance would require fixing the install script so
>> that it can handle more than one argument at once, and/or using a more
>> or less standard install program when available.
>
> We used to try to use the standard install program when available.
> After sufficient bitter experience, we concluded that "standard" and
> "install" are not words that should be used together.

Ok. I trust you on that ;-)

Thus the only performance improvement I can see is to change the install script, for instance by considering the multi-argument apache version as a candidate replacement. This might also require fixing the makefiles' install targets to take advantage of it. Also, it might require dealing with extensions of the postgres script somehow, if it has any such things.

I might look into it, as I'm somehow responsible for the install-all-headers new default, but I don't have much time right now.

Have a nice day,

-- Fabien Coelho - [EMAIL PROTECTED]

---(end of broadcast)---
TIP 8: explain analyze is your friend
Re: [PATCHES] pg_regress --temp-keep
Dear Peter,

>> Thus the only performance improvement I can see is to change the
>> install script, for instance by considering the multi-argument apache
>> version as a candidate replacement.
>
> You can always override the default to use your own install program if
> you are really pressed for performance. (make install INSTALL=install
> or similar)

That simple idea alone seems to give a 25% cpu performance gain on my laptop for a make check... I'll try to find time to test the multi-arg install for further improvements anyway.

Thanks!

-- Fabien Coelho - [EMAIL PROTECTED]

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
[PATCHES] pgxs under Win32 for PL/Java
Dear Thomas,

> I'm trying to change the Makefile system for PL/Java so that it uses
> PGXS instead of compiling using a complete PostgreSQL source tree. As
> it turns out, the directory include/port/win32 is not present in the
> PostgreSQL binary installation. Without it, it's not possible to
> compile on win32.

Please find enclosed a patch which attempts to fix your use of pgxs under win32:
- install port/* includes
- install libpostgres.a by default (it seems to require MAKE_DLL=true)
- fix the include path under the win32 portname

The specific win32 fixes are performed in the Makefile.win32 file. I have no means to test that on a win32 machine. Could you do it?

I'm wondering whether the MAKE_DLL fix should also be done under cygwin. Any opinion?

Thanks in advance,

-- Fabien Coelho - [EMAIL PROTECTED]

*** ./src/include/Makefile.orig    Wed Nov 3 10:32:29 2004
--- ./src/include/Makefile Sun Nov 7 11:14:52 2004
***************
*** 18,24 ****
  # Subdirectories containing headers for server-side dev
  SUBDIRS = access bootstrap catalog commands executor lib libpq mb \
!       nodes optimizer parser port regex rewrite storage tcop utils
  # Install all headers
  install: all installdirs remove-old-headers
--- 18,25 ----
  # Subdirectories containing headers for server-side dev
  SUBDIRS = access bootstrap catalog commands executor lib libpq mb \
!       nodes optimizer parser port regex rewrite storage tcop utils \
!       port port/win32 port/win32/arpa port/win32/netinet port/win32/sys
  # Install all headers
  install: all installdirs remove-old-headers
*** ./src/makefiles/Makefile.win32.orig    Thu Oct 28 08:24:17 2004
--- ./src/makefiles/Makefile.win32 Sun Nov 7 10:44:58 2004
***************
*** 35,37 ****
--- 35,48 ----
  ifneq (,$(findstring src/pl/plpython,$(subdir)))
  override CPPFLAGS+= -DUSE_DL_IMPORT
  endif
+
+ # special win32 headers are provided here
+ ifdef PGXS
+ override CPPFLAGS+= -I$(includedir_server)/port/win32
+ endif
+
+ # is it better to install shared libraries anyway?
+ # may be overridden with make MAKE_DLL=false install
+ ifndef MAKE_DLL
+ MAKE_DLL = true
+ endif

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
[PATCHES] pgxs patch for win32 (v2)
Dear patchers,

please find attached a very small patch which:
- installs win32 headers on make install
- installs the libpostgres.a library under win32 by default (MAKE_DLL=true)
- fixes CPPFLAGS under win32 to look for these added headers under PGXS

It was tested by Thomas Hallgren to build PL/Java with pgxs.

It may interfere a little bit with Alvaro's patch about the now useless remove-all-headers target in src/include/Makefile.

Still open questions:
- should the MAKE_DLL macro be set by default under cygwin?
- what is the rationale for this macro? is it still needed?

Have a nice day,

-- Fabien Coelho - [EMAIL PROTECTED]

*** ./src/include/Makefile.orig    Wed Nov 3 10:32:29 2004
--- ./src/include/Makefile Sun Nov 7 11:14:52 2004
***************
*** 18,24 ****
  # Subdirectories containing headers for server-side dev
  SUBDIRS = access bootstrap catalog commands executor lib libpq mb \
!       nodes optimizer parser port regex rewrite storage tcop utils
  # Install all headers
  install: all installdirs remove-old-headers
--- 18,25 ----
  # Subdirectories containing headers for server-side dev
  SUBDIRS = access bootstrap catalog commands executor lib libpq mb \
!       nodes optimizer parser port regex rewrite storage tcop utils \
!       port port/win32 port/win32/arpa port/win32/netinet port/win32/sys
  # Install all headers
  install: all installdirs remove-old-headers
*** ./src/makefiles/Makefile.win32.orig    Thu Oct 28 08:24:17 2004
--- ./src/makefiles/Makefile.win32 Sun Nov 7 10:44:58 2004
***************
*** 35,37 ****
--- 35,48 ----
  ifneq (,$(findstring src/pl/plpython,$(subdir)))
  override CPPFLAGS+= -DUSE_DL_IMPORT
  endif
+
+ # special win32 headers are provided here
+ ifdef PGXS
+ override CPPFLAGS+= -I$(includedir_server)/port/win32
+ endif
+
+ # is it better to install shared libraries anyway?
+ # may be overridden with make MAKE_DLL=false install
+ ifndef MAKE_DLL
+ MAKE_DLL = true
+ endif

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [PATCHES] Users/Groups - Roles
Dear Stephen,

> Attached please find files and patches associated with moving from the
> User/Group system currently in place to Roles, as discussed previously.
> The files are:
>
> pg_authid.h - New system table, contains role information.
>   To be placed in src/include/catalog, replacing pg_shadow.h
> pg_auth_members.h - New system table, contains role/membership
>   information. To be placed in src/include/catalog, replacing
>   pg_group.h

I've looked very quickly at the patch. ISTM that the proposed patch is a reworking of the user/group stuff, which are both unified into a new role concept where a user is a kind of role and a role can be a member of another role. Well, why not.

Some added files seem not to be provided in the patch:

sh> bzgrep pg_authid.h ./role_2005062701.ctx.patch.bz2
? src/include/catalog/pg_authid.h
! pg_namespace.h pg_conversion.h pg_database.h pg_authid.h pg_auth_members.h \
! #include "catalog/pg_authid.h"
! #include "catalog/pg_authid.h"
! #include "catalog/pg_authid.h"
! #include "catalog/pg_authid.h"
! #include "catalog/pg_authid.h"
! #include "catalog/pg_authid.h"
+ #include "catalog/pg_authid.h"
+ #include "catalog/pg_authid.h"
! #include "catalog/pg_authid.h"
! #include "catalog/pg_authid.h"
! #include "catalog/pg_authid.h"
! #include "catalog/pg_authid.h"

Or maybe I missed something, but I could not find the pg_authid.h file? The '?' line in the diff seems to suggest an unexpected file.

Anyway, from what I can see in the patch, it seems that the roles are per cluster, and not per catalog. So this is not so conceptually different from user/group as already provided in pg. What would have been much more interesting for me would be a per-catalog role, so that rights could be administered locally in each database. I'm not sure how to provide such a feature; AFAICS the current version does not give me new abilities wrt rights management.

Have a nice day,

-- Fabien.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
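The membership relation described above — a role can be a member of another role — implies walking a graph when deciding whether a user ultimately holds some role. The toy sketch below (plain Python, with made-up role names; not the patch's implementation) shows the transitive check that such nesting requires:

```python
from collections import deque

# Made-up membership edges: role -> set of roles it is a direct member of.
member_of = {
    "alice": {"dev"},
    "dev":   {"staff"},
    "staff": set(),
    "bob":   {"staff"},
}

def has_role(user, role):
    """Breadth-first walk of the membership graph, since roles may nest."""
    seen, queue = set(), deque([user])
    while queue:
        r = queue.popleft()
        if r == role:
            return True
        if r in seen:
            continue
        seen.add(r)
        queue.extend(member_of.get(r, ()))
    return False

print(has_role("alice", "staff"))  # → True (alice -> dev -> staff)
print(has_role("bob", "dev"))      # → False
```

The `seen` set guards against cycles, which a real catalog-backed implementation must also handle.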
[PATCHES] cast bytea to/from bit strings
Dear PostgreSQL developers, Please find attached a small patch to convert bytea to bit strings and vice versa. I used it in order to be able xor md5 results so as to checksum bundles of tuples together. The MD5 result is an hexa text convertible to bytea with decode, but then I was stuck... ISTM that having these types explicitely castable may be useful to others, hence this small contribution. The cast allows to work on a bytea at the bit level and to perform bitwise operations. ./src/backend/utils/adt/varbit.c - add two conversion functions ./src/include/catalog/pg_proc.h - declare the above functions in the catalog ./src/include/catalog/pg_cast.h - declare the 4 explicit casts ./src/test/regress/sql/bit.sql - test all those new casts ./src/test/regress/expected/bit.out - new regression results ./src/test/regress/expected/opr_sanity.out - pg figures out that bit and varbit are binary compatible, which is the case (well, at least I assumed it). -- Fabien.*** ./src/backend/utils/adt/varbit.c.orig Sun Mar 5 16:58:44 2006 --- ./src/backend/utils/adt/varbit.cThu May 4 15:57:34 2006 *** *** 1484,1486 --- 1484,1554 } PG_RETURN_INT32(0); } + + /* create a bit string from a byte array. + */ + Datum varbitfrombytea(PG_FUNCTION_ARGS) + { + bytea *arg = PG_GETARG_BYTEA_P(0); + int32 typmod = PG_GETARG_INT32(1); /* for ::BIT(10) syntax */ + int datalen = VARSIZE(arg) - VARHDRSZ; + int bitlen, len, resdatalen, needlen; + VarBit *result; + + /* truncate or expand if required */ + if (typmod=0) + { + bitlen = typmod; + resdatalen = (bitlen + BITS_PER_BYTE - 1) / BITS_PER_BYTE; + needlen = datalenresdatalen? 
resdatalen: datalen; + } + else + { + resdatalen = datalen; + bitlen = BITS_PER_BYTE * datalen; + needlen = datalen; + } + + len = VARBITTOTALLEN(bitlen); + result = (VarBit *) palloc(len); + VARATT_SIZEP(result) = len; + VARBITLEN(result) = bitlen; + memcpy(VARBITS(result), VARDATA(arg), needlen); + + if (resdatalen needlen) + { + char *ptr = VARBITS(result) + needlen; + while (needlenresdatalen) + { + *ptr++ = '\000'; + needlen++; + } + } + + PG_RETURN_VARBIT_P(result); + } + + /* create a byte array from a bit string. + */ + Datum varbittobytea(PG_FUNCTION_ARGS) + { + VarBit *arg = PG_GETARG_VARBIT_P(0); + boolisExplicit = PG_GETARG_BOOL(2); + int bitlen = VARBITLEN(arg); + int datalen = (bitlen + BITS_PER_BYTE - 1) / BITS_PER_BYTE; + int len = datalen + VARHDRSZ; + bytea *result; + + /* no implicit cast if data size is changed */ + if (!isExplicit (bitlen != BITS_PER_BYTE*datalen)) + ereport(ERROR, + (errcode(ERRCODE_STRING_DATA_LENGTH_MISMATCH), +errmsg(bit length %d would be round up, use explicit cast, + bitlen))); + + result = (bytea *) palloc(len); + VARATT_SIZEP(result) = len; + memcpy(VARDATA(result), VARBITS(arg), datalen); + + PG_RETURN_BYTEA_P(result); + } *** ./src/include/catalog/pg_cast.h.origSun Mar 5 16:58:54 2006 --- ./src/include/catalog/pg_cast.h Thu May 4 16:16:49 2006 *** *** 261,266 --- 261,271 DATA(insert ( 23 1560 1683 e )); DATA(insert ( 1560 20 2076 e )); DATA(insert ( 1560 23 1684 e )); + /* bytea to/from bit and varbit casts */ + DATA(insert ( 1560 17 2790 e )); + DATA(insert ( 1562 17 2791 e )); + DATA(insert ( 17 1560 2792 e )); + DATA(insert ( 17 1562 2793 e )); /* * Cross-category casts to and from TEXT *** ./src/include/catalog/pg_proc.h.origWed May 3 00:25:10 2006 --- ./src/include/catalog/pg_proc.h Thu May 4 15:57:34 2006 *** *** 3856,3861 --- 3856,3871 DATA(insert OID = 2749 ( arraycontained PGNSP PGUID 12 f f t f i 2 16 2277 2277 _null_ _null_ _null_ arraycontained - _null_ )); DESCR(anyarray contained); + /* BYTEA - 
BIT/VARBIT */ + DATA(insert OID = 2790 ( bytea PGNSP PGUID 12 f f t f i 3 17 1560 23 16 _null_ _null_ _null_ varbittobytea - _null_)); + DESCR(convert bit() to bytea); + DATA(insert OID = 2791 ( bytea PGNSP PGUID 12 f f t f i 3 17 1562 23 16 _null_ _null_ _null_ varbittobytea - _null_)); + DESCR(convert varbit() to bytea); + DATA(insert OID = 2792 ( tobit PGNSP PGUID 12 f f t f i 3 1560 17 23 16 _null_ _null_ _null_ varbitfrombytea - _null_)); + DESCR(convert bytea to bit()); + DATA(insert OID = 2793 ( varbit
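The motivating use — xor'ing MD5 results to checksum a bundle of tuples — can be sketched outside the database. This is plain Python standing in for the SQL-level casts and bit operators, with made-up tuple values:

```python
import hashlib
from functools import reduce

def md5_bytes(s: str) -> bytes:
    """MD5 digest as raw bytes, i.e. what decode(md5(x), 'hex') yields in SQL."""
    return hashlib.md5(s.encode()).digest()

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise xor: the operation the bytea/bit casts were meant to enable."""
    return bytes(x ^ y for x, y in zip(a, b))

tuples = ["(1,'foo')", "(2,'bla')", "(3,'baz')"]  # made-up rows
checksum = reduce(xor_bytes, (md5_bytes(t) for t in tuples))

# xor is commutative and associative, so the checksum is independent
# of the order in which the rows are aggregated.
reordered = reduce(xor_bytes, (md5_bytes(t) for t in reversed(tuples)))
print(checksum == reordered)  # → True
print(len(checksum))          # → 16
```

Order-independence is what makes xor attractive here: the same bundle of tuples hashes to the same 16-byte value regardless of scan order.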
Re: [PATCHES] cast bytea to/from bit strings
Dear Bruce,

> I am not sure this is of general enough usefulness to be in the
> backend.

Hmm... I think that the inability to convert nearly binary-compatible standard types one to the other is a postgresql issue. Even if it is not often useful, the point is the completeness and soundness of the types provided by the core. Moreover, I haven't found any workaround with decode/encode and other cast functions. Bytea is somehow an isolated type, which makes it not so useful from within the database.

> Can you add it as a pgfoundry project?

I could do it, but ISTM it is really overkill for two stupid 10-line functions that deal with internal core types.

-- Fabien.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [PATCHES] cast bytea to/from bit strings
Dear Tom,

>>> I think that the inability to convert nearly binary-compatible
>>> standard types one to the other is a postgresql issue. Even if it is
>>> not often useful, the point is the completeness and soundness of the
>>> types provided by the core.
>>
>> OK, can I get some feedback from others about this patch?
>
> I think Fabien is way overstating his case here.

Maybe.

> It's not immediately obvious that there should be a cast between bit(n)
> and bytea, and it's even less obvious that it should be done in a way
> that exposes the internal representation of bit(n) as this does.

Hmmm... I think people guessed it anyway ;-) Well, if there is a big/little endian issue, I agree that it is not a good idea. As I cast at the byte level, it seems to me that it should be okay. If so, I see no real harm in having an *explicit* cast allowed, which by nature may be a little destructive, as long as it is reasonable.

> There's no principled reason for one bit ordering over the other, for
> example, nor any very clean way to handle coercions where N isn't a
> multiple of 8.

It could be rejected instead.

> I think this request has more to do with a lack of adequate operators
> for one type or the other. If we're missing, say, bitwise logical
> operators for bytea, then let's add those rather than create a bogus
> equivalence between the types.

Indeed, what triggered my development of this cast was that I needed a xor on md5 results, which can only be converted to bytea with convert. I could develop a bunch of bitwise operators for bytea, but casting to varbit, where they are already available, was the quickest path.

-- Fabien.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
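For reference, the coercion question raised above — what happens when N isn't a multiple of 8 — is answered in the patch by truncating or zero-padding on the right. That byte-level intent can be mimicked in a few lines of Python (a sketch of the behavior under the MSB-first assumption, not the backend code itself, and ignoring the masking of trailing bits within the last byte):

```python
def bytea_to_bits(data, nbits=None):
    """Mimic the patch's bytea -> bit(N) cast: keep the first nbits bits,
    zero-padding on the right if the input is too short."""
    bits = "".join(f"{b:08b}" for b in data)  # MSB-first within each byte
    if nbits is None:
        return bits                # no typmod: length is 8 * len(data)
    return bits[:nbits].ljust(nbits, "0")

print(bytea_to_bits(b"\xff"))         # → 11111111
print(bytea_to_bits(b"\xff", 10))     # → 1111111100  (padded: 10 bits from 1 byte)
print(bytea_to_bits(b"\xff\x0f", 4))  # → 1111        (truncated)
```

The padding/truncation choice is exactly the part Tom calls unprincipled: nothing forces right-padding over left-padding, which is why the patch only offers it as an explicit cast.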