Re: [HACKERS] Any more must fix issues for 7.4?
Bruce Momjian wrote:
> Peter Eisentraut wrote:
>> Bruce Momjian writes:
>>> Oh, I forgot about that. This leaves datetime.h and decimal.h in /pgsql/include. I don't see how 7.4.1 can fix that because people will not be using initdb.
>> This has nothing to do with initdb.
> Right. I mean install isn't going to remove those files in /include.

If you use a packaged version (.deb/.rpm etc.) it will remove them. So no problem for most(?) users.

Christof

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Alter Table Column Datatype
Rod Taylor wrote:
> Is this syntax ok?
>   ALTER TABLE table ALTER COLUMN column TYPE new type

Shouldn't that be

  ALTER TABLE table ALTER [COLUMN] column [TYPE] new type

which I clearly remember from different database systems.

Christof
Re: [HACKERS] Extending PostgreSQL in C or C++
Tom Lane wrote:
> Sebastien Lemieux [EMAIL PROTECTED] writes:
>> Then I get: ERROR: Can't find function add_one in file /[PathToTheObject]/pgsql_bio.so
> Hmm. I can't see anything wrong with what you did, either. It's possible that the dynamic linker has printed additional messages to the backend's stderr. Make sure that you have stderr set up to go someplace useful (not /dev/null) and look there to see if you can learn more.

Most likely he has to declare the function as extern "C". An 'nm pgsql_bio.so' can tell more.

Christof
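A minimal sketch of what the C++ extension source would need (file and function names are taken from the error message above; the PostgreSQL-specific calling-convention details are omitted). Without the extern "C" declaration the compiler mangles the symbol (to something like _Z7add_onei), so the backend's lookup of "add_one" fails — which is exactly what nm on the .so would reveal:

```cpp
// Hedged sketch, not the poster's actual file: exporting a C++ function
// to a dlopen()/dlsym()-style loader under its plain, unmangled C name.
extern "C" int add_one(int arg);   // suppresses C++ name mangling

int add_one(int arg)
{
    return arg + 1;                // trivial body, as in the tutorial example
}
```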
Re: [HACKERS] Roadmap for FE/BE protocol redesign
Peter Eisentraut wrote:
> I don't get it. Say I execute SELECT a, b, c FROM foo;. In order to update that query, the application needs to create some update statement, say UPDATE foo SET a = entered_value;. So the application already knows that foo is the table and a is the column. So if the application wants to know about details on the column a, it can execute SELECT whatever FROM pg_attribute, pg_class WHERE relname = 'foo' AND attname = 'a';

Once the statement uses aliases for the result columns this can go wrong. And determining the correct table is difficult at best.

> With this proposed change, it can replace that with SELECT whatever FROM pg_attribute, pg_class WHERE oid = X AND attnum = Y; With the difference that the first version always works and the second version sometimes works, and when that sometimes is is determined by the rule that it should be obvious.

That doesn't seem right to me. I have the impression that the first solution sometimes works (and other times gets it really wrong; you can't tell without parsing the query) and the second solution works whenever possible (and gives no answer otherwise). The only cases I expect the second one to fail are: natural joins (where the table is ambiguous), expressions and function calls. Of course you can't expect a sane result in these cases. On second thought I do not want the column name (or alias) to get discarded (because otherwise you can't get the alias), but *DBC desperately needs attrelid and attnum.

Christof
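With attrelid and attnum delivered by the protocol, the catalog lookup the second version describes could look like this sketch (the column list is illustrative; X and Y stand for the protocol-supplied values):

```sql
-- Hypothetical lookup keyed on attrelid (X) and attnum (Y) from the
-- row description, independent of any alias in the original query:
SELECT a.attname, a.atttypid, a.atttypmod, a.attnotnull
  FROM pg_attribute a
 WHERE a.attrelid = X
   AND a.attnum = Y;
```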
Re: [HACKERS] Roadmap for FE/BE protocol redesign
Tom Lane wrote:
> Barry Lind [EMAIL PROTECTED] writes:
>> The describe request is generally only done once even though you may do multiple fetches (unlike today's protocol, which includes the describe information on every fetch, even if you are fetching one row at a time).
> I'm less than excited about changing that, because it breaks clients that don't want to remember past RowDescriptions (libpq being the front-line victim), and it guarantees loss-of-synchronization failures anytime the client misassociates rowdescription with query. In exchange for that, we get what exactly? Fetching one row at a time is *guaranteed* to be inefficient. The correct response if that bothers you is to fetch multiple rows at a time, not to make a less robust protocol.

I don't think that protocol support for cursors should change the behavior of executing all seven stages by default. A FETCH ... command would get processed like any other (e.g. SELECT ...) and metadata is sent back, too (which corresponds to the describe stage IIRC).

New programs have the option to use the backwards compatible high level access via PQexec(c, "FETCH FROM X"), which does all seven steps at once, or use the new low level way, e.g. PQexec_new(c, "SELECT ...", query_parameter_descriptor, what_to_do (*), lines_to_return_without_cursor_overhead), which should return at most the specified number of lines and (if needed) a cursor descriptor (most likely an int) for subsequent PQfetch and PQclose calls. I really like the idea of a PGresult as an argument (cursor descriptor) for PQfetch (instead of an int) because it may even copy the metadata to the new PGresult, or perhaps replace the values in the original PGresult (if we decide to go this way). [Proposed signature: PGresult *PQfetch(PGresult *result_of_the_select, how_many_lines, perhaps_even_offset/position)] Additionally there should be a PQclose and perhaps a PQprocess(PGresult *, things_to_do (*)) if we want to be able to separate every step.

If you know you are never interested in metadata, you can omit the describe flag altogether. [Null indication and type specification are of course always needed to access the actual data.]

Christof

*) open, parse, describe, bind, execute, fetch, close

PS: If we decide to omit the lines_to_return_without_cursor_overhead optimization, the new architecture would still be a big win for *DBC. This optimization cannot become a GUC variable instead of a protocol parameter, since this would break clients: should they specify fetch+close to enable it? If yes, there's no easy way to implement the old behavior (all seven stages, no limit on returned lines). If no, the client cannot specify to omit the fetch without changing it (limit 0).

PPS: Query parameter passing is another topic, but I tend to propose a PGresult variant for specifying them (of course each with its type).
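The seven stages and the proposed what_to_do selector could be modeled as a bitmask; this is only a sketch of the idea from the proposal above, and all names are hypothetical:

```c
/* Hedged sketch: the seven protocol stages as selectable flags. */
enum stage {
    ST_OPEN     = 1 << 0,
    ST_PARSE    = 1 << 1,
    ST_DESCRIBE = 1 << 2,
    ST_BIND     = 1 << 3,
    ST_EXECUTE  = 1 << 4,
    ST_FETCH    = 1 << 5,
    ST_CLOSE    = 1 << 6
};

/* The backwards compatible PQexec behavior: all seven stages at once. */
#define ST_ALL (ST_OPEN | ST_PARSE | ST_DESCRIBE | ST_BIND | \
                ST_EXECUTE | ST_FETCH | ST_CLOSE)

/* A client that is never interested in metadata clears the describe bit. */
static int without_describe(int what_to_do)
{
    return what_to_do & ~ST_DESCRIBE;
}
```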
Re: [HACKERS] Roadmap for FE/BE protocol redesign
Christof Petig wrote:
> If you know you are never interested in metadata, you can omit the describe flag altogether. [Null indication and type specification are of course always needed to access the actual data.]

More exactly, they are sent separately: null indication is per row ('D'/'B') and type specification is per query ('T'). If the client does not ask for metadata one might omit attrelid, attnum (*) and the field name in the 'T' packet. One might argue whether this small win per query times columns justifies implementing the feature. But then we'd need a method to query them later on (otherwise *DBC could never omit them at first).

Christof

*) they are not there, yet ;-)
Re: [HACKERS] Roadmap for FE/BE protocol redesign
Barry Lind wrote:
> 3) Protocol level support for CURSORs. It would be nice if cursor support was done at the protocol level and not as a SQL command.

I want to second this proposal. Currently I avoid using cursors in my programs since

a) they need much more logic and _string_concatenation_ to be handled transparently by a library (prepend the query with DECLARE X CURSOR FOR, then FETCH n FROM X, then CLOSE X). That's inefficient.

b) I have a really bad feeling having the backend parse FETCH FROM X every time I ask for a (single) row.

c) I hate that the backend retransmits column names etc. for every fetch I issue. This information is mostly unneeded, but the backend cannot know better.

Of course these issues can be addressed by using FETCH n (n > 10), but this kludge is only needed because the FETCH protocol is so inefficient. Think about the number of bytes transferred to select 2000 rows of integers with and without DECLARE/FETCH/CLOSE. Imagine a result set of 1 to 2 integers given back (depending on parameters) for an interactive program (e.g. browsing a customer list by initials). Prefer a cursor (much more constant overhead even for single results) or all in one (and wait longer for a first result)? I'd love to tell the backend to give back a descriptor for this query and use it efficiently to get data and/or metadata (see ODBC, JDBC, sqlda or dynamic SQL). Perhaps it's most efficient to ask for N initial results (which are instantly returned).

Christof (who implemented dynamic SQL for ecpg)

PS: Perhaps this protocol infrastructure is also well suited to return large bytea values (small values returned inline, values beyond M bytes via a descriptor). [Also proposed by Barry Lind.]

PPS: I'm perfectly fine with returning attrelid/attnum. Then the client can control how much effort is spent determining only the asked-for metadata.
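A back-of-the-envelope model of the retransmission overhead argued in (c). The per-message sizes are assumptions for illustration only, not measured protocol numbers (say a row description 'T' for one integer column costs tdesc bytes and a data row 'D' costs drow bytes):

```c
/* Single-row FETCHes: every fetch resends the row description. */
static long bytes_single_row_fetches(long rows, long tdesc, long drow)
{
    return rows * (tdesc + drow);
}

/* One plain SELECT: one row description, then only data rows. */
static long bytes_plain_select(long rows, long tdesc, long drow)
{
    return tdesc + rows * drow;
}
```

With 2000 rows and assumed sizes of 30 and 15 bytes, the single-row-fetch variant transfers roughly three times as many bytes — which is the point being made above.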
Re: [HACKERS] protocol change in 7.4
Neil Conway wrote:
> (6) Protocol-level support for prepared queries, in order to bypass the parser (and maybe be more compatible with the implementation of prepared queries in other databases).

Let me add:

(6b) Protocol level support for query parameters. This would actually make (6) more powerful and speed up non-prepared (but similar) queries via the query cache (which is already there IIRC). [I talk about: statement USING :var ...]

(n) Platform independent binary representation of parameters and results (like in CORBA). This can _really_ speed up communication with compiled programs if you take the time to implement it. This was previously planned for a future CORBA fe/be protocol, but that does not seem to be coming any time soon.

(n+1) Optional additional result qualifiers. E.g. dynamic embedded SQL has a flag to indicate that a column is a key. Previously it was impossible to set this flag to a meaningful value. The standard also has additional statistical information about the size of the column etc. If it's unclear what I'm talking about, I will look up the exact location in the standard (it's embedded SQL, dynamic SQL, GET DESCRIPTOR).

Yours Christof
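For (n), the usual approach to a platform independent binary representation (as in CORBA's CDR or the TCP/IP conventions) is a fixed byte order on the wire; here is a small sketch for an int4 value (function names are made up for illustration):

```c
#include <stdint.h>

/* Encode an int4 in network byte order (most significant byte first),
   independent of the host's endianness. */
static void encode_int4(uint8_t out[4], int32_t value)
{
    uint32_t v = (uint32_t) value;
    out[0] = (uint8_t) ((v >> 24) & 0xff);
    out[1] = (uint8_t) ((v >> 16) & 0xff);
    out[2] = (uint8_t) ((v >> 8) & 0xff);
    out[3] = (uint8_t) (v & 0xff);
}

/* Decode is the exact inverse on every platform. */
static int32_t decode_int4(const uint8_t in[4])
{
    return (int32_t) (((uint32_t) in[0] << 24) | ((uint32_t) in[1] << 16) |
                      ((uint32_t) in[2] << 8)  |  (uint32_t) in[3]);
}
```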
Re: [HACKERS] protocol change in 7.4
Christof Petig wrote:
> Neil Conway wrote:
>> (6) Protocol-level support for prepared queries, in order to bypass the parser (and maybe be more compatible with the implementation of prepared queries in other databases).
> Let me add (6b) Protocol level support for query parameters. This would actually make (6) more powerful and speed up non-prepared (but similar) queries via the query cache (which is already there IIRC). [I talk about: statement USING :var ...]
> (n) Platform independent binary representation of parameters and results (like in CORBA). This can _really_ speed up communication with compiled programs if you take the time to implement it. This was previously planned for a future CORBA fe/be protocol, but this does not seem to come any time soon.

After one night's sleep I think that perhaps a CORBA based protocol might be less work (but I have no idea about a decent authentication scheme; I'd tend to reuse the already authenticated stream). A CORBA-ized query-only interface might easily cover these issues and be less work than full CORBA backend access. JDBC (I don't know much about it) might give a reasonable interface design (perhaps combined with a libpq[++|xx] like interface if there's benefit to it).

> (n+1) Optional additional result qualifiers. E.g. dynamic embedded SQL has a flag to indicate that a column is a key. Previously it was impossible to set this flag to a meaningful value. The standard also has additional statistical information about the size of the column etc. If it's unclear what I'm talking about I will look up the exact location in the standard (it's embedded SQL, dynamic SQL, GET DESCRIPTOR).

This does not need an implementation soon. But the new protocol should allow future things like this.

All these proposals are motivated by (future) ecpg [C/C++] needs. So IMHO the ODBC, JDBC, libpqxx people might be interested in many of these issues, too. We definitely should make sure to have asked them.

Yours Christof
Re: [HACKERS] [INTERFACES] ECPG: FETCH ALL|n FROM cursor - Memory allocation?
Michael Meskes wrote:
> On Thu, Apr 25, 2002 at 12:42:00PM +0100, Lee Kindness wrote:
>> Should the input pointers be NULL initialised? How should the memory be freed?
> A simple free() will do. You also can free all automatically allocated memory from the most recent executed statement by calling ECPGfree_auto_mem(). But this is not documented and will never be.

The correct way is to free(array1) and free(array2), while libecpg will free the internal structures when the next statement is executed. Never, never mix these two! ECPGfree_auto_mem will free even memory which has already been freed by the user; perhaps we should get rid of this method (any allocated memory regions are stored in a list, and if you never call ECPGfree_auto_mem, this list grows and grows).

Christof
Re: [HACKERS] HEADS UP: Win32/OS2/BeOS native ports
Marc G. Fournier wrote:
> hr ... do you have a working Windows development environment? I'm running WinXP at home, but don't have any of the compilers or anything yet, so all my work for the first part is going to be done under Unix ... but someone that knows something about building makefiles for Windows, and compiling under it, will definitely be a major asset ;)

I think if you are familiar with make and gcc (and perhaps autoconf), MinGW and MSys are the development environment of choice on Windows. You even get /bin/sh. But the generated program does not depend on any custom library (like a Cygwin build does). It's even possible to cross compile from a Linux box (actually PowerPC in my case). Look at http://mingw.sourceforge.net (and there for MSys).

Christof
Re: [HACKERS] [INTERFACES] ECPG: FETCH ALL|n FROM cursor - Memory allocation?
Lee Kindness wrote:
> Okay, let's see if I've got this right... If I allocate the memory before the FETCH then I (naturally) free it. However if I NULL initialise the pointer then libecpg will allocate the memory and I must NOT free it - libecpg will free it automatically... Yeah?

No. I only said: never mix free() and ECPGfree_auto_mem, because ECPGfree_auto_mem will double-free it if you freed it already. And also: it might be a good idea to kill the undocumented function (and the list). And: you need to free it (by one of the two methods above).

> I think this highlights the need for some documentation on this aspect.

Yes it does.

Christof
[HACKERS] HISTORY (ecpg enhancements not yet mentioned)
Hi Bruce,

you might add that I did the following useful enhancements to ECPG:
- EXECUTE ... INTO ... implemented
- multiple row descriptor support (e.g. CARDINALITY)

I don't feel that my humble contribution of a few lines is important, but the improvement made really is important (n times the performance if you use it).

Yours Christof
Re: [HACKERS] ecpg - GRANT bug
Michael Meskes wrote:
> On Tue, Oct 16, 2001 at 10:27:42AM +0100, Lee Kindness wrote:
>> And the patch below corrects a pet peeve I have with ecpg: all errors and warnings are output with a line number one less than reality...
> I wish I knew where this comes from. I've been trying to track this bug down for years now, but have yet to find the reason. Okay, I didn't check for quite some time now, but the first time I committed a fix was March 1998. But somehow I still haven't found all problems, it seems.

I somewhat got the impression that using C++ style comments (//) worsens the problem. But I must confess I didn't dig deep enough to contribute anything substantial. Perhaps the problem is a misunderstanding between ecpg and cpp. I was confused by the blank lines following or preceding a #line statement every time I looked at it. These should not be necessary.

While talking about warnings: ecpg warns about NULLIF being not implemented yet. But actually it works (for me).

Christof
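For reference, this is the mechanism in question: ecpg emits #line directives so that the C compiler reports diagnostics against the original .pgc source. The directive renumbers the *following* line, so an off-by-one in the emitted number shifts every later diagnostic. A tiny illustration (file name and number made up):

```c
/* The line immediately after the directive is reported as line 100 of
   example.pgc; emitting 99 here instead would reproduce the
   one-less-than-reality symptom described above. */
#line 100 "example.pgc"
static int reported_line(void) { return __LINE__; }
```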
Re: [HACKERS] Abort transaction on duplicate key error
Haller Christoph wrote:
> My first message: In a C application I want to run several insert commands within a chained transaction (for faster execution). From time to time there will be an insert command causing an ERROR: Cannot insert a duplicate key into a unique index. As a result, the whole transaction is aborted and all the previous inserts are lost. Is there any way to preserve the data except working with autocommit? What I have in mind particularly is something like: do not abort on duplicate key error.

Simply select by the key you want to enter. If you get 100 (not found), an insert is ok; otherwise do an update. Oracle has a feature called 'insert or update' which follows this strategy. There was also some talk on this list about implementing this, but I don't remember the conclusion.

BTW: I strongly recommend staying away from autocommit. You cannot control/know whether/when you started a new transaction.

Christof

PS: I would love to have nested transactions, too. But no time to spare ... Perhaps somebody will do this for 7.3?
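The select-then-insert-or-update strategy above, sketched with a made-up table (in ecpg, sqlca.sqlcode 100 after the SELECT means "not found"):

```sql
-- Check whether the key already exists:
SELECT 1 FROM customers WHERE id = 42;
-- sqlcode 100 (no row found): safe to insert
INSERT INTO customers (id, name) VALUES (42, 'new name');
-- otherwise the key exists: update instead of aborting on a duplicate
UPDATE customers SET name = 'new name' WHERE id = 42;
```

Note that without stronger locking another transaction can still insert the key between the SELECT and the INSERT, which is part of why a built-in 'insert or update' is attractive.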
Re: [HACKERS] [PATCHES] Problem with setlocale (found in libecpg) [accessing a
Tom Lane wrote:
>> Well, at least on glibc-2.2 it seems that setlocale returns a pointer to malloced memory, and frees this pointer on subsequent calls to setlocale. So I would kindly ask you to take a second look at every invocation of setlocale.
> I looked around, and am worried about the behavior of PGLC_current() in src/backend/utils/adt/pg_locale.c. It doesn't change locale but does retrieve several successive setlocale() results. Does that work in glibc?

Well, actually I did not check glibc's source code. But I tried to run my program with efence and it aborted in execute.c:

  locale = setlocale(LC_NUMERIC, NULL);
  setlocale(LC_NUMERIC, "C");
  ...
  setlocale(LC_NUMERIC, locale);  /* access to already freed memory (locale) */

So my best guess is that setlocale
- uses malloced memory for the return value (which copes best with variable length strings)
- frees this on a subsequent call and allocates a new one.

Yes, I'm worried about PGLC_current(), too. IMHO we should definitely copy the result to a malloced area. Does the current solution work with static storage (old libcs)? The last call would overwrite the first result, wouldn't it?

Christof
Re: [HACKERS] [PATCHES] Problem with setlocale (found in libecpg) [accessing a
Michael Meskes wrote:
> On Thu, Sep 27, 2001 at 12:08:29AM -0400, Tom Lane wrote:
>> I looked around, and am worried about the behavior of PGLC_current() in src/backend/utils/adt/pg_locale.c. It doesn't change locale but does retrieve several successive setlocale() results. Does that work in glibc?
> I haven't experienced any problem so far, but then I wasn't able to reproduce Christof's either on my glibc2.2 system.

You have to link with efence to see it (see below). (BTW the bug is in libecpg.) Otherwise the bug is hidden (setting an illegal locale simply does not do anything if we ignore its return value (setlocale returns NULL on error)). Perhaps outputting a notice to the debug stream if setlocale fails is a good choice (I don't like to raise an SQL error).

Christof

[More detailed: if the former value is freed, the pointer still points to a valid memory region (without efence); further processing inside ecpg will reuse that region for just another string (an input variable's value in SQL notation). So setting locale '0' or 'ISO' or 'some string' silently fails.]
Re: [HACKERS] anoncvs troubles (was Re: CVS vs anoncvs)
Marc G. Fournier wrote:
> Okay, it's updated effective a few minutes ago ... and the update should work as well ...

Should ...

  ~/pgsql-cvs/pgsql/src/interfaces/ecpg/preproc> cvs status preproc.y
  cvs server: failed to create lock directory for `/projects/cvsroot/pgsql/src/interfaces/ecpg/preproc' (/projects/cvsroot/pgsql/src/interfaces/ecpg/preproc/#cvs.lock): Permission denied
  cvs server: failed to obtain dir lock in repository `/projects/cvsroot/pgsql/src/interfaces/ecpg/preproc'
  cvs [server aborted]: read lock failed - giving up

  ~/pgsql-cvs/pgsql> cvs update
  cannot create_adm_p /tmp/cvs-serv48812/ChangeLogs
  Permission denied

  ~/pgsql-cvs/pgsql> cvs update -l
  cvs server: Updating .
  cvs server: failed to create lock directory for `/projects/cvsroot/pgsql' (/projects/cvsroot/pgsql/#cvs.lock): Permission denied
  cvs server: failed to obtain dir lock in repository `/projects/cvsroot/pgsql'
  cvs [server aborted]: read lock failed - giving up

... but it does not, yet.

Christof
[HACKERS] anoncvs troubles (was Re: CVS vs anoncvs)
Marc G. Fournier wrote:
> should be four hours, but I haven't had a chance, with the newest worm/virus going around right now having killed our core router yesterday, to redirect the sync'ing with the new server ... will do that first thing tomorrow ...
> On Wed, 19 Sep 2001, Tom Lane wrote:
>> Peter Bierman [EMAIL PROTECTED] writes:
>>> If it's already been fixed (yay!), the fix isn't at anoncvs yet.
>> I think there is some lag between the master CVS and anoncvs now. Marc, is that correct? How much lag?
>> regards, tom lane

It's definitely more than 16 hours. I still can't see M. Meskes' commits (16:09 MEST, 10:09 EDT).

While you're at it, could you please fix this error:

  ~/pgsql-cvs/pgsql> cvs -z3 update -dP
  cannot create_adm_p /tmp/cvs-serv2966/ChangeLogs
  Permission denied

  for i in `find -type d ! -name CVS` ; do (cd $i ; cvs -z3 update -l -d ); done
  cvs server: Updating .
  cvs server: Updating .
  [.]

This works somehow but is really ugly and bandwidth-wasting. This even occurs with a fresh checkout:

  ~/pgsql-cvs/tmp> cvs -d :pserver:[EMAIL PROTECTED]:/projects/cvsroot co pgsql/ChangeLogs
  cvs server: Updating pgsql/ChangeLogs
  U pgsql/ChangeLogs/ChangeLog-7.1-7.1.1
  U pgsql/ChangeLogs/ChangeLog-7.1RC1-to-7.1RC2
  U pgsql/ChangeLogs/ChangeLog-7.1RC2-to-7.1RC3
  U pgsql/ChangeLogs/ChangeLog-7.1RC3-to-7.1rc4
  U pgsql/ChangeLogs/ChangeLog-7.1beta1-to-7.1beta3
  U pgsql/ChangeLogs/ChangeLog-7.1beta3-to-7.1beta4
  U pgsql/ChangeLogs/ChangeLog-7.1beta4-to-7.1beta5
  U pgsql/ChangeLogs/ChangeLog-7.1beta5-to-7.1beta6
  U pgsql/ChangeLogs/ChangeLog-7.1beta6-7.1RC1
  U pgsql/ChangeLogs/ChangeLog-7.1rc4-7.1
  ~/pgsql-cvs/tmp> cd pgsql/
  ~/pgsql-cvs/tmp/pgsql> cvs update
  cannot create_adm_p /tmp/cvs-serv4350/ChangeLogs
  Permission denied

Yours Christof
Re: [HACKERS] anoncvs troubles (was Re: CVS vs anoncvs)
Christopher Kings-Lynne wrote:
>> While you're at it, could you please fix this error:
>>   ~/pgsql-cvs/pgsql> cvs -z3 update -dP
>>   cannot create_adm_p /tmp/cvs-serv2966/ChangeLogs
>>   Permission denied
> Instead of checking out over your existing checkout, check out to a new dir and there's no problem.

Sorry, I want to update (only the differences cross the wire) or diff, not check everything out again twice a day (which works).

Christof
Re: [HACKERS] Trans-transactions cursors
Chamanya wrote:
> I am currently building a small web based app, with postgres as back end. I found that in ecpg you can declare and use a cursor without declaring a transaction. In several places I have used cursors for selects only. That's the only way I found to make ecpg fetch multiple rows. And in ecpg I have to give an explicit open cursor statement to make fetching possible.

That's simply because ecpg starts a new transaction on any SQL statement if no transaction is active. I consider this (autocommit on) one of the worst traps you can lay for yourself.

Christof
Re: [HACKERS] Query Planning time increased 3 times on 7.1 compared to 7.0.3
Tom Lane wrote:
> Christof Petig [EMAIL PROTECTED] writes:
>> We noticed that after upgrading to 7.1beta[245] the execution time for some often used queries went up by a factor of 2 or more.
> I get the desired plan after doing VACUUM ANALYZE ...
> regards, tom lane

I apologize. I must have been smoking something when I did the vacuum analyze. And my nightly script did not work. 7.1 is much faster.

Christof
[HACKERS] Query Planning time increased 3 times on 7.1 compared to 7.0.3
Hello,

We noticed that after upgrading to 7.1beta[245] the execution time for some often used queries went up by a factor of 2 or more. Considering the early beta state I was not alarmed. But since I noticed that yesterday's snapshot still has the problem, I'd really like to tell you about it.

Here is one of the queries. It takes about half a second on our computer (PII 233 with 256MB RAM) to execute and returns typically 1-4 rows via two index scans with high selectivity. So it looks to me that planning time outweighs execution time by far. 7.0 took about 0.15 seconds (which is still much). Here is the query:

  explain verbose select gaenge, s.artikelid, text
  from schaertabelle s, extartbez e
  where maschine = int2(109)
    and schaerdatum = '2001-01-13'
    and s.artikelid = e.artikelid
    and extartbezid = 1
    and bezkomptype = 0
  order by text limit 10;

And the plan for 7.0 and 7.1 (attached). The data and schema is accessible via http://home.wtal.de/petig/pg_test.sql.gz

If you omit 'int2(' the index scan collapses into a sequential scan. (Well known problem with int2 indices.)

Christof

Oh, I'll attach the schema, too. So if you just want to take a look at the table definition you don't have to download the data.
[Attachment: NOTICE: QUERY DUMP — the raw 7.1 plan tree (a LIMIT node over a SORT over a NESTLOOP, with an index scan on schaertabelle using the int2 and date index quals). The dump is truncated mid-token in the archive.]
Re: [HACKERS] Query Planning time increased 3 times on 7.1 compared to 7.0.3
Justin Clift wrote:
> Hi Christof, I'm not aware of the problem with int2 indexes collapsing. Can you give me some more info, and I'll put it on the techdocs.postgresql.org website.

Oh, I'm sorry for my strange wording. I said that the index search collapses to a sequential scan if you do not cast the number to int2, because an int2 index is not used to look up an int4. And since untyped numbers are int4 or numeric, the int2 index is never used unless explicitly specified (by a type cast).

Yes, this is a known bug in PostgreSQL 7.1 and below. Hopefully this will get addressed in 7.2? Why don't I code it? I'm busy working on ecpg (dyn. SQL) at the moment.

Christof
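To illustrate with the table from the earlier report (behavior as of 7.1 and below; the cast is what makes the int2 index usable, since the bare literal is taken as int4):

```sql
-- maschine is int2 and indexed; the untyped constant is int4,
-- so the planner falls back to a sequential scan:
SELECT * FROM schaertabelle WHERE maschine = 109;

-- casting the constant to int2 lets the index be used:
SELECT * FROM schaertabelle WHERE maschine = int2(109);
```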
[HACKERS] Re: [PATCHES] Small patch to replace 'idle' by 'trans' if transactionis still open
If you don't know what is all about read the bottom (I'll requote my posting) Bruce Momjian wrote: On Fri, 19 Jan 2001, Bruce Momjian wrote: Seems we decided against this. Sorry. Huh? from reading Tom's response, sounds like it would be something useful? I know I find having as much information about state in the ps listing helps me alot, and knowing if its 'idle' vs 'idle (in transaction)' provides at lesat more detailed information then just 'idle' Who was the 'we' in the above decision? Tom seemed in favor of it, I know I'm in favor of it .. and you are not in favor of it ... There must have been some discussion about it. I don't see it in the code, and I remember it was rejected for some reason. Check the archives. The thing which comes most close to a rejection was the 'I can't decide' mail by you (answered by Tom). The conclusion sounded like 'since we're not clear on this subject we won't touch this, yet'. And there was some unsettled discussion about the best wording to show in 'ps'. 'trans' seemed too short (and too much unlike 'idle') (as indicated by Bruce) and 'idle (open transaction)' might give difficulties on platforms which limit the length of the string (as indicated by Tom) I'll CC Hackers (where this discussion belongs) Christof - Quoting: Subject: Re: [PATCHES] Small patch to replace 'idle' by 'trans' if transaction is still open Date: Mon, 09 Oct 2000 22:46:56 -0400 From: Tom Lane [EMAIL PROTECTED] Bruce Momjian [EMAIL PROTECTED] writes: I can't decide if this is of general use. My inclination is that someone in a transaction sitting a prompt should still show as idle. The idea seemed good to me, although I didn't look at the code to see if the implementation was any good ;-). I know we've frequently had questions on the lists where it was interesting to know if any transactions were being held open --- and right now there's no easy way to tell. 
regards, tom lane
--
Subject: Small patch to replace 'idle' by 'trans' if transaction is still open
Date: Tue, 03 Oct 2000 21:28:36 +0200
From: Christof Petig [EMAIL PROTECTED]
If you are looking for programs which tend to hold long-standing transactions, this micro patch might be handy. Whether it is of general use is debatable. It will replace 'idle' by 'trans' if the backend is idle but a transaction is pending. Simply use ps to view the backend's status. Christof

--- src/backend/commands/async.c~	Sun May 14 05:18:35 2000
+++ src/backend/commands/async.c	Tue Oct  3 10:31:54 2000
@@ -818,7 +818,7 @@
 	 */
 	pq_flush();
-	PS_SET_STATUS("idle");
+	PS_SET_STATUS(IsTransactionBlock()?"trans":"idle");
 	TPRINTF(TRACE_NOTIFY, "ProcessIncomingNotify: done");
 }
--- src/backend/tcop/postgres.c~	Thu Aug 31 09:18:57 2000
+++ src/backend/tcop/postgres.c	Tue Oct  3 10:32:23 2000
@@ -1496,7 +1496,7 @@
 	for (;;)
 	{
-		PS_SET_STATUS("idle");
+		PS_SET_STATUS(IsTransactionBlock()?"trans":"idle");
 		/*
 		 * (1) tell the frontend we're ready for a new query.
Re: [HACKERS] Re: [INTERFACES] USE OF CURSORS IN ECPG
Maurizio wrote: But how can I do it? I have to call the same routine many times. I tried to prepare and declare the cursor inside the routine, but when I compile with ecpg I receive the error 'Cursor already defined'. You should drop the cursor (exec sql close name;). But there is trouble ahead (believe me - see below), Maurizio. (Oh, I misunderstood; if you get into trouble with transactions (waiting for you ;-() you might consider the following lines.) You certainly should disable autocommit: exec sql set autocommit to off; otherwise you have no control over when a transaction starts and whether this succeeded. Then at the start of your SubProgram do: bool my_transaction; exec sql begin work; my_transaction = !sqlca.sqlcode; and at the end: if (my_transaction) exec sql commit work; This should give you an idea of how to accomplish it. If not, feel free to ask me. But (sigh) you need my NOTICE-to-sqlca patch applied. It's not in 7.0, but it is in 7.1! I don't know any other way (I'll attach the patch). Christof PS: You might consider applying the patch for (update where not_found) - 100 and the cache which doubles ecpg's speed. Pgsql_Ecpg_Notice.diff.gz Pgsql_Ecpg_Notice2.diff.gz Pgsql_Ecpg_Update_100.diff.gz Pgsql_Ecpg_Type_Cache.diff.gz
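Put together, the suggested structure of the routine looks roughly like this (ecpg source, so it only compiles after running the ecpg preprocessor; the routine and cursor names are made up for illustration):

```c
/* Hypothetical sketch of the routine using the pattern above. */
void sub_program(void)
{
    bool my_transaction;

    /* Only commit at the end if we started the transaction ourselves;
       with the NOTICE patch applied, sqlca.sqlcode is non-zero if
       BEGIN failed (e.g. because a transaction is already open). */
    exec sql begin work;
    my_transaction = !sqlca.sqlcode;

    exec sql declare cur cursor for select relname from pg_class;
    exec sql open cur;
    /* ... fetch loop ... */
    exec sql close cur;          /* drop the cursor so the next call
                                    can declare it again */
    if (my_transaction)
        exec sql commit work;
}
```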
Re: AW: [HACKERS] Re: [INTERFACES] USE OF CURSORS IN ECPG
Zeugswetter Andreas SB wrote: PS: You might consider applying the patch for (update where not_found) - 100. No, this is not allowed. sqlcode is supposed to be 0 in the above case. You need to explicitly check the number of rows updated in your program if needed. Andreas According to my reading of the SQL standard this is the only compliant behaviour. Do you know better? Christof
Re: [HACKERS] Query caching
Karel Zak wrote: On Wed, 8 Nov 2000, Christof Petig wrote: Karel Zak wrote: What about parameters? Normally you can prepare a statement and execute it. We have parameters in PG, see SPI, but they are used inside the backend only; there is no statement that allows using this feature in the be-fe protocol. Sad, since ecpg would certainly benefit from this. Postponed for future improvements ... PREPARE chris_query AS SELECT * FROM pg_class WHERE relname = $1 USING text; I would prefer '?' as a parameter name, since this is in the embedded SQL standard (do you have a copy of the 94 draft? I can mail mine to you?) This does not depend on the query cache. The '$n' is the PostgreSQL query parameter keyword and is defined in the standard parser. The PREPARE statement does not parse the query; that is the job of the standard parser. I see. Also the standard says a whole lot about guessing the parameter's type. Also I vote for ?::type or type(?) or SQL's cast(...) (I don't know its syntax) instead of abusing the USING keyword. The PostgreSQL executor expects the types of the parameters as separate input (an array). I am not sure how expensive/feasible it is to derive them from the query. That would involve changing the parser. Future project. EXECUTE chris_query USING 'pg_shadow'; Great idea of yours to implement this! I was thinking about implementing a more decent scheme for ecpg but had no mind to touch the backend and be-fe protocol (yet). It would be desirable to do an 'execute immediate using', since using input parameters would take a lot of code away from ecpg. By the way, PREPARE/EXECUTE is only the interface; more interesting in this period is the query-cache kernel. SQL92 is really a little unlike my PREPARE/EXECUTE. I'm looking forward to getting first experiences with the query cache kernel. I think it's the right way to go. Christof
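Side by side, the two parameter styles under discussion (the second PREPARE is only a sketch of what a standard-leaning spelling might look like; it is not implemented):

```sql
-- Karel's proposed syntax, as quoted above:
PREPARE chris_query AS SELECT * FROM pg_class WHERE relname = $1 USING text;
EXECUTE chris_query USING 'pg_shadow';

-- Sketch of the '?'-with-explicit-cast spelling Christof argues for:
PREPARE chris_query AS SELECT * FROM pg_class WHERE relname = cast(? AS text);
```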
Re: [HACKERS] Leaking definitions to user programs
Hi Bruce, hi Michael, here is the really short patch for shutting out all postgres definitions from ecpg programs (e.g. Datum, Pointer, DEBUG, ERROR). Someone really should take a look into libpq and do the same. But I had to copy a small part of c.h (bool, true, false, TRUE, FALSE) into ecpg/include/libecpg.h. And ... there is a possible bug in c.h: you can't check for a typedef via #ifndef.

typedef char bool;
...
#ifndef bool
typedef char bool;
#endif

will fail. But I don't know any decent solution to that problem! Perhaps c.h should be broken into separate parts. Christof PS: to Jacek: you need this patch to compile libcommon++.a! Bruce Momjian wrote: Thanks. Yes, leaking into user programs is a bad practice. Is there a solution/patch for that? A solution would be a simple patch which is not available yet. But I plan on doing one (some other things still have higher priority). Christof ecpg.diff.gz
Re: [HACKERS] How to check that I am in transaction inside backend
Denis Perchine wrote: Hello, having some experience with catching errors with BLOBs, I realised that it is really hard to notice that you forgot to enclose BLOB operations in a transaction... I would like to add a check to each BLOB operation to see whether we are in a transaction, and if not, issue a notice. The question is how to correctly check that I am in a transaction. Simply use IsTransactionBlock(). Christof
Re: [HACKERS] embedded sql with indicators in other DBs
Michael Meskes wrote: What do other DBs do with their output variables if there is an embedded SQL query resulting in a NULL return value? What I mean is: exec sql select text into :txt:ind from ... If text is NULL, ind will be set, but does txt change? I was just told Informix blanks txt. Adabas D does not touch txt, so you can set txt to a reasonable default for the NULL case beforehand, or txt keeps the value from a previous SQL statement. On the other hand, if you forget to initialize txt, Informix protects you from yourself. At least the standard (sql94-bindings-3 clause 7.1) does not mention changing the value of the variable in the NULL case; looks like undefined. I'm undecided. Not touching it looks more right to me, but it might break existing applications. Christof
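For reference, the construct in question with an indicator test (ecpg source, so it needs the ecpg preprocessor; table and variable names are made up):

```c
exec sql begin declare section;
char  txt[64] = "previous value";
short ind;
exec sql end declare section;

exec sql select name into :txt:ind from tbl where id = 1;
if (ind < 0)
{
    /* NULL was returned: ind is negative, and per the sql94 draft the
       contents of txt are undefined here -- Informix blanks txt,
       Adabas D leaves it untouched. */
}
```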
Re: [HACKERS] Suggested change in include/utils/elog.h
Magnus Hagander wrote: If you do decide to prefix DEBUG, please consider prefixing at least ERROR as well. It produces a lot of warnings when compiling on Win32 (ERROR is already defined as part of the standard windows headers). It's just warnings, though. //Magnus Do other people have a comment on this? I am inclined to leave it alone. This is the first complaint I have heard, and elog(PGDEBUG) just looks funny. We don't prefix NOTICE or ERROR. In the file include/utils/elog.h there is a macro named DEBUG that conflicts with the perl 5.6 macro DEBUG. PostgreSQL would probably "play" better with other products if the DEBUG macro had a prefix, maybe PGSQLDEBUG or similar. Until there is some fix in this area, plperl will not build with a version of perl that has debugging enabled. It even got on my nerves (linux, ecpg) since I used to define a macro like #define DEBUG(x) cout << x or an empty #define DEBUG(x). DEBUG and ERROR are far too common to get defined for client programs. But perhaps it is ecpg's fault for including "elog.h". IMHO these defines should never leave the database kernel; perhaps the common

#ifdef _DBKERNEL_
...
#endif

would do the trick. Christof PS: Having Datum unconditionally leaked to ecpg programs forced me to prepend a namespace to my own class.