Re: [HACKERS] osprey buildfarm member has been failing for a long while
On 28 May 06, at 04:08, Andrew Dunstan wrote:

Tom Lane wrote: osprey hasn't been able to build HEAD since the GIN code was added. I'm not sure that GIN is really to blame though, as the error looks like an out-of-memory problem while trying to link the backend:

ccache gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wendif-labels -fno-strict-aliasing -g -L../../src/port -Wl,-R'/data/postgresql/buildfarm/workdir/HEAD/inst/lib' -Wl,-E access/SUBSYS.o bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o ../../src/timezone/SUBSYS.o ../../src/port/libpgport_srv.a -lintl -lcrypt -lm -o postgres
ld in malloc(): error: brk(2) failed [internal error]
gcc: Internal error: Abort trap (program ld)
Please submit a full bug report. See URL:http://www.netbsd.org/Misc/send-pr.html for instructions.
gmake[2]: *** [postgres] Error 1

Perhaps the swap space or ulimit setting on the box needs to be raised?

I don't think it's a swap problem. I've not seen the machine go much into swap while running the build, but I've not checked since the failure appeared. I was sort of hoping the issue would resolve itself when the size of the link changed again... What kind of ulimit were you thinking of? I'll try upping the data segment size.

Or maybe ccache is the culprit - there have been suspicions before that ccache is responsible for errors, but it's never been confirmed. Remi, can you try turning it off and see what happens? Just comment out the CC = ccache gcc line in the config file.

I'll try that.

Regards,

Rémi Zara
Re: [HACKERS] Inefficient bytea escaping?
On 5/28/06, Martijn van Oosterhout kleptog@svana.org wrote:

With -lpthread:
lock.enabled 323s
lock.disabled 50s
lock.unlocked 36s

I forgot to test with -lpthread, my bad. Indeed, by default something less expensive than full locking is going on.

The crux of the matter is, though, that if you're calling something a million times, you're better off trying to find an alternative anyway. There is a certain amount of overhead to calling shared libraries and no amount of optimisation of the library is going to save you that.

The crux of the matter was whether it's possible to use fwrite as an easy string-combining mechanism, and the answer is no, because it's not lightweight enough.

--
marko

---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[HACKERS] Error in recent pg_dump change (coverity)
Coverity picked up an error in dumpStdStrings() since last night. At line 1448 there's PQclear(res), yet res is used several times further down (lines 1452, 1454 and 1456).

I'd actually suggest zeroing out res->tuples in PQclear so this sort of problem becomes much more obvious. Coverity bug 304 for people watching at home.

--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
Re: [HACKERS] osprey buildfarm member has been failing for a long while
Rémi Zara [EMAIL PROTECTED] writes:

Tom Lane wrote: Perhaps the swap space or ulimit setting on the box needs to be raised?

What kind of ulimit were you thinking of? I'll try upping the data segment size.

Yeah, data segment size would be the most likely culprit if this is a ulimit thing.

regards, tom lane
Re: [HACKERS] Error in recent pg_dump change (coverity)
Martijn van Oosterhout wrote: Coverity picked up an error in dumpStdStrings() since last night. At line 1448 there's PQclear(res), yet res is used several times further down (lines 1452, 1454 and 1456). I'd actually suggest zeroing out res->tuples in PQclear so this sort of problem becomes much more obvious.

Is it worthwhile to zero out the res->block array as well?

--
Alvaro Herrera   http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

Index: src/bin/pg_dump/pg_dump.c
===================================================================
RCS file: /home/alvherre/cvs/pgsql/src/bin/pg_dump/pg_dump.c,v
retrieving revision 1.434
diff -c -r1.434 pg_dump.c
*** src/bin/pg_dump/pg_dump.c	26 May 2006 23:48:54 -	1.434
--- src/bin/pg_dump/pg_dump.c	28 May 2006 15:39:14 -
***************
*** 1445,1452 ****
  	check_sql_result(res, g_conn, qry->data, PGRES_TUPLES_OK);
  
- 	PQclear(res);
- 
  	resetPQExpBuffer(qry);
  
  	std_strings = (strcmp(PQgetvalue(res, 0, 0), "on") == 0);
--- 1445,1450 ----
***************
*** 1454,1460 ****
  	appendStringLiteral(qry, PQgetvalue(res, 0, 0), true, !std_strings);
  	appendPQExpBuffer(qry, ";\n");
  	puts(PQgetvalue(res, 0, 0));
! }
  
  	ArchiveEntry(AH, nilCatalogId, createDumpId(),
--- 1452,1459 ----
  	appendStringLiteral(qry, PQgetvalue(res, 0, 0), true, !std_strings);
  	appendPQExpBuffer(qry, ";\n");
  	puts(PQgetvalue(res, 0, 0));
! 
! 	PQclear(res);
  }
  
  	ArchiveEntry(AH, nilCatalogId, createDumpId(),
Index: src/interfaces/libpq/fe-exec.c
===================================================================
RCS file: /home/alvherre/cvs/pgsql/src/interfaces/libpq/fe-exec.c,v
retrieving revision 1.184
diff -c -r1.184 fe-exec.c
*** src/interfaces/libpq/fe-exec.c	23 May 2006 22:13:19 -	1.184
--- src/interfaces/libpq/fe-exec.c	28 May 2006 15:39:20 -
***************
*** 358,368 ****
--- 358,372 ----
  	{
  		res->curBlock = block->next;
  		free(block);
+ 		block = NULL;
  	}
  
  	/* Free the top-level tuple pointer array */
  	if (res->tuples)
+ 	{
  		free(res->tuples);
+ 		res->tuples = NULL;
+ 	}
  
  	/* Free the PGresult structure itself */
  	free(res);
Re: [HACKERS] Error in recent pg_dump change (coverity)
Alvaro Herrera [EMAIL PROTECTED] writes:

Martijn van Oosterhout wrote: I'd actually suggest zeroing out res->tuples in PQclear so this sort of problem becomes much more obvious.

Is it worthwhile to zero out the res->block array as well?

Your patch isn't doing that, merely zeroing a local variable that will be assigned to in a moment anyway. That loop already ensures that res->curBlock is null when it exits. So lose this:

+ 		block = NULL;

This part seems OK:

  	/* Free the top-level tuple pointer array */
  	if (res->tuples)
+ 	{
  		free(res->tuples);
+ 		res->tuples = NULL;
+ 	}

Another possibility is to just MemSet the whole PGresult struct to zeroes before free'ing it. Compared to the cost of obtaining a query result from the backend, this probably doesn't cost enough to be worth worrying about, and it would catch a few more problems of the same ilk.

regards, tom lane
Re: [HACKERS] Error in recent pg_dump change (coverity)
On Sun, May 28, 2006 at 12:00:33PM -0400, Tom Lane wrote: Another possibility is to just MemSet the whole PGresult struct to zeroes before free'ing it. Compared to the cost of obtaining a query result from the backend, this probably doesn't cost enough to be worth worrying about, and it would catch a few more problems of the same ilk.

Probably better, actually, since by setting ntups to zero also, PQgetvalue will return a warning (row number out of range) rather than segfaulting...

Have a nice day,
--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
Re: [HACKERS] Error in recent pg_dump change (coverity)
Martijn van Oosterhout kleptog@svana.org writes:

On Sun, May 28, 2006 at 12:00:33PM -0400, Tom Lane wrote: Another possibility is to just MemSet the whole PGresult struct to zeroes before free'ing it.

Probably better actually, since by setting ntups to zero also, PQgetvalue will return a warning (row number out of range) rather than segfaulting...

Hm. But I think we'd *like* it to segfault; the idea is to make the user's programming error as obvious as possible. Is it worth the trouble to just zero out the pointer members of the PGresult?

regards, tom lane
Re: [HACKERS] Error in recent pg_dump change (coverity)
Tom Lane wrote:

Martijn van Oosterhout kleptog@svana.org writes: On Sun, May 28, 2006 at 12:00:33PM -0400, Tom Lane wrote: Another possibility is to just MemSet the whole PGresult struct to zeroes before free'ing it.

Probably better actually, since by setting ntups to zero also, PQgetvalue will return a warning (row number out of range) rather than segfaulting...

Hm. But I think we'd *like* it to segfault; the idea is to make the user's programming error as obvious as possible. Is it worth the trouble to just zero out the pointer members of the PGresult?

There are only five of them; four need to be zeroed out.

void
PQclear(PGresult *res)
{
	PGresult_data *block;

	if (!res)
		return;

	/* Free all the subsidiary blocks */
	while ((block = res->curBlock) != NULL)
	{
		res->curBlock = block->next;
		free(block);
	}

	/* Free the top-level tuple pointer array */
	if (res->tuples)
		free(res->tuples);

	/* zero out the pointer fields to catch programming errors */
	res->attDescs = NULL;
	res->tuples = NULL;
	res->noticeHooks = NULL;
	res->errFields = NULL;
	/* res->curBlock was zeroed out earlier */

	/* Free the PGresult structure itself */
	free(res);
}

--
Alvaro Herrera   http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
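The effect of NULLing the pointer fields can be illustrated with a self-contained sketch. Note that MockResult and mock_clear are invented stand-ins for PGresult and PQclear, and that, unlike the real PQclear, this sketch does not free the struct itself, so the cleared fields can be inspected safely:

```c
#include <stdlib.h>

/* Simplified stand-in for PGresult: two heap-allocated members. */
typedef struct MockResult
{
	char  **tuples;		/* top-level tuple pointer array */
	char   *errFields;	/* error-field buffer */
} MockResult;

/*
 * Free the subsidiary allocations and NULL the pointer fields, so a
 * caller that keeps using the result dereferences NULL (an obvious,
 * immediate crash) instead of silently reading freed memory.  The real
 * PQclear also frees the PGresult itself; this sketch leaves the struct
 * to the caller so the cleared state can be observed.
 */
static void
mock_clear(MockResult *res)
{
	if (!res)
		return;					/* mirror PQclear's NULL-safety */
	free(res->tuples);
	res->tuples = NULL;
	free(res->errFields);
	res->errFields = NULL;
}
```

A caller holding on to the cleared result now fails fast on the first dereference rather than reading whatever malloc left behind.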
Re: [HACKERS] Error in recent pg_dump change (coverity)
Alvaro Herrera wrote: Tom Lane wrote: Alvaro Herrera [EMAIL PROTECTED] writes: Tom Lane wrote:

Hm. But I think we'd *like* it to segfault; the idea is to make the user's programming error as obvious as possible. Is it worth the trouble to just zero out the pointer members of the PGresult?

There are only five of them; four need to be zeroed out.

Works for me. Please commit, as I'm about to do some further work in those files and would rather not have to merge ...

Done. They were actually four, not five. The one I mistakenly thought was one was the notice processor hooks. The case Martijn was saying would be warned about by the memset approach, setting ntuples to 0, would actually be handled as a segfault, because functions like check_field_number actually follow the res->noticeHooks pointer! ISTM we would just segfault at that point.

Agreed. Anything to catch more errors is good.

--
Bruce Momjian   http://candle.pha.pa.us
EnterpriseDB   http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] Error in recent pg_dump change (coverity)
Alvaro Herrera wrote: Done. They were actually four, not five. The one I mistakenly thought was one was the notice processor hooks. The case Martijn was saying would be warned about by the memset approach, setting ntuples to 0, would actually be handled as a segfault, because functions like check_field_number actually follow the res->noticeHooks pointer! ISTM we would just segfault at that point.

I must be blind. The hooks->noticeRec == NULL case is handled first thing in pqInternalNotice (returns doing nothing). So we wouldn't segfault and we wouldn't emit any warning either!

--
Alvaro Herrera   http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
[HACKERS] COPY FROM view
Hi,

I've had the COPY FROM patch that was posted on pgsql-patches some time ago (I think from Hannu Krosing) sitting on my machine, with the intention of committing it for 8.2.

However there's something I'm not very sure about -- the patch creates an execution plan by passing a literal SELECT * FROM view to pg_parse_query, pg_analyze_and_rewrite, and finally planner(). I'm sure we can make this work appropriately, patching over the quoting issues that the patch doesn't deal with, but I'm unsure if this is an acceptable approach. (Actually I think it isn't.) But what is an acceptable way to do it?

--
Alvaro Herrera   http://www.PlanetPostgreSQL.org
No necesitamos banderas No reconocemos fronteras (Jorge González)
Re: [HACKERS] anoncvs still slow
anoncvs (svr4, 66.98.251.159) is still slow responding to cvs update; it's been spotty for about a week now. Tcpdump shows connections being established but then long delays for ACKs, sometimes long enough for cvs to time out. Any updates on what's going on?

Magnus apparently knows what the problem is: http://archives.postgresql.org/pgsql-hackers/2006-05/msg01002.php but I haven't seen any of the other mails he mentioned.

Right, those mails were sent in private to Marc, because they outline some fairly severe (IMHO) configuration errors on svr4 and at least one other postgresql.org mailserver, which is the main reason behind the problems. I didn't want those details to go out public and be archived.

svr4 / anoncvs needs a major upgrade ... the problem is that the only part of that vServer that I know nothing about is the bittorrent stuff, which, in itself, needs an upgrade ... I sent a note to Magnus that, whenever he's ready with the bittorrent stuff, I can do the rest of the upgrade, so it's in his court right now :)

Um, *what*? AFAICS, this is caused by the machine attempting to relay thousands and thousands of spam emails (some quick checks showed a rate of about 1 spam / 5 seconds entering the queue - and I know I deleted almost 20,000 from the queue). This is a *configuration error*. Even if we *wanted* all this spam to be relayed, it would be a performance problem. But to be efficient, the spam has to be rejected *before* it enters the system. I've suggested a couple of different things to be done to fix or at least decrease this problem. From what I can tell, none have been implemented.

Now for the other problems, I propose the following:

For bittorrent, I propose we take it out. We've suggested it before, I don't recall receiving any real requests to keep it, and IMHO it's way more pain than it's worth. Therefore, unless someone objects, I'll pull the bittorrent links from the website in a couple of days, and then we can just remove it from the server.
Also, if anoncvs is a problem that we need quickly fixed, can we move it quickly to a different server? Say Ferengi, which has both bandwidth and horsepower to spare in loads. Do we require some special version of cvs, or just plain cvs?

//Magnus
Re: [HACKERS] anoncvs still slow
Hi,

On Sun, 2006-05-28 at 21:25 +0200, Magnus Hagander wrote: For bittorrent, I propose we take it out. We've suggested it before, I don't recall receiving any real requests to keep it, and IMHO it's way more pain than it's worth. Therefore, unless someone objects, I'll pull the bittorrent links from the website in a couple of days, and then we can just remove it from the server.

Please go for it.

--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/
Re: [HACKERS] COPY FROM view
Alvaro Herrera [EMAIL PROTECTED] writes: I've been having the COPY FROM patch that was posted on pgsql-patches some time ago (I think from Hannu Krosing), sitting on my machine, with the intention to commit it for 8.2. However there's something I'm not very sure about -- the patch creates an execution plan by passing a literal SELECT * FROM view to pg_parse_query, pg_analyze_and_rewrite, and finally planner(). I'm sure we can make this work appropriately, patching over the quoting issues that the patch doesn't deal with, but I'm unsure if this is an acceptable approach. (Actually I think it isn't.) But what is an acceptable way to do it?

It seems to me that we had decided that COPY FROM VIEW is not even the conceptually right way to think about the missing feature. It forces you to create a view (at least a temporary one) in order to do what you want. Furthermore, it brings up the question of why you can't COPY TO VIEW.

The correct way to think about it is to have a way of dumping the output of any arbitrary SELECT statement in COPY-like format. There was some previous discussion of exactly how to go about that; check the archives. Offhand I think we might have liked the syntax COPY (parenthesized-SELECT-statement) TO ... but there was also some argument in favor of using a separate statement that basically sets the output mode for a subsequent SELECT. I'm not sure if anyone thought about how it would play with psql's \copy support, but that's obviously something to consider.

regards, tom lane
[HACKERS] psql: krb5_sendauth: Bad application version was sent (via sendauth) - Windows 2000, MIT Kerberos, PG v 8.1.1
I'm trying to set up Kerberos authentication with PG on Windows 2000. I have installed the MIT Kerberos Windows DLLs into the PG bin directory - replacing the krb5_32.dll and comerr32.dll from the PG install. I did this because the PG install did not have the krbcc32.dll, which is needed to find the credentials cache on Windows.

The name of the service is 'POSTGRES', as is the name of the user who starts the service. I've created the krb5.keytab, set its location in postgresql.conf, and set:

krb_srvname = 'POSTGRES'
krb_server_hostname = 'host.domain.com' (the host machine)

I've set pg_hba.conf to use krb5 for my username. When I try to connect with psql, I get the following error in the cmd window:

'krb5_sendauth: Bad application version was sent (via sendauth)'

and the following msg in the pg_log:

'authentication LOG: Kerberos recvauth returned error -1765328179'

I found a thread in the mailing list regarding a problem with the Kerberos authentication on Windows in the 8.1beta2 version of PG:

Kerberos brokenness and oops question in 8.1beta2
http://archives.postgresql.org/pgsql-hackers/2005-10/msg00376.php

Is this related to the error I'm getting? Thanks.
Re: [HACKERS] LIKE, leading percent, bind parameters and indexes
On Sat, 27 May 2006, Martijn van Oosterhout wrote:

On Sat, May 27, 2006 at 10:57:05AM -0400, Tom Lane wrote: * Up to now, the only functions directly invoked by an index AM were members of index opclasses; and since opclasses can only be defined by superusers, there was at least some basis for trusting the functions to behave sanely. But if an index AM is going to invoke arbitrary user-defined expressions then more care is needed. What's particularly bothering me is the notion of executing arbitrary functions while holding a buffer lock on an index page.

Actually, for a first pass I was considering doing it within nodeIndexScan.c/nodeBitmapScan.c and not within the AM at all. But I just remembered, the index interface has no way to return the actual values in the index, so you can't do that :(

This discussion reminds me of the idea of doing index-only scans, returning tuples directly from an index without hitting the heap at all. MVCC is the main problem there, but it would be nice if whatever you come up with here were usable if we ever implement index-only scans.

I don't know the planner internals, so this might not make any sense at all, but how about having separate index scan and fetch nodes? Index scan would return index tuples and fetch would get the corresponding heap tuples. You could then have whatever you want between them, perhaps deferring the fetch step until just before returning the rows to the client.

- Heikki
Re: [HACKERS] LIKE, leading percent, bind parameters and indexes
On Sun, May 28, 2006 at 10:43:18PM +0300, Heikki Linnakangas wrote: I don't know the planner internals, so this might not make any sense at all, but how about having separate index scan and fetch nodes? Index scan would return index tuples and fetch would get the corresponding heap tuples. You could then have whatever you want between them, perhaps deferring the fetch step until just before returning the rows to the client.

That's kinda what a bitmap scan does. Although, we never fetch tuples unless you're going to use the result in some way...

Have a nice day,
--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
Re: [HACKERS] LIKE, leading percent, bind parameters and indexes
Heikki Linnakangas [EMAIL PROTECTED] writes:

On Sat, 27 May 2006, Martijn van Oosterhout wrote: Actually, for a first pass I was considering doing it within nodeIndexScan.c/nodeBitmapScan.c and not within the AM at all. But I just remembered, the index interface has no way to return the actual values in the index, so you can't do that :(

This discussion reminds me of the idea to do index-only scans, returning tuples directly from an index without hitting the heap at all. MVCC is the main problem there, but it would be nice that whatever you come up with here would be usable if we ever implement index-only scans.

Given my worries about needing to copy the index tuples anyway, maybe the right way to approach this is to add a separate AM entry point that's like amgettuple except it hands you back whole index tuples and not just the heap TID part. This would only be implemented by those AMs that store the unmodified original index tuple (ie, not GiST/GIN). Then the filtering on auxiliary conditions can be done once in the executor code as envisioned by Martijn, and we'd also have the AM support in place to do generalized separate index and heap scans.

I recall some discussion of using something like this to implement joining before visiting the heap, in situations where all the join keys are available in an index but there are too many rows for nestloop index joining to be sufficient. You'd pull the join keys from the index, run a merge or hash join, and then visit the heap only for the candidate join rows. It hasn't got done yet but it seemed like a potentially good idea at the time.

[ pokes around... ] The original discussion was Red Hat private, apparently, but I mentioned it here: http://archives.postgresql.org/pgsql-hackers/2004-05/msg00944.php

Some of that is probably superseded now by bitmap indexscans, but not all of it.
regards, tom lane
Re: [HACKERS] anoncvs still slow
For bittorrent, I propose we take it out. We've suggested it before, I don't recall receiving any real requests to keep it, and IMHO it's way more pain than it's worth.

We received a couple of requests for the torrent on the IRC channel when the update was released. Just FYI.

Therefore, unless someone objects, I'll pull the bittorrent links from the website in a couple of days, and then we can just remove it from the server. Also, if anoncvs is a problem that we need quickly fixed, can we move it quickly to a different server? Say Ferengi, which has both bandwidth and horsepower to spare in loads. Do we require some special version of cvs, or just plain cvs?

CMD has a spare machine we can host it on as well if you like.

Sincerely,

Joshua D. Drake

//Magnus

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/
Re: [HACKERS] pg_proc probin misuse
James William Pye [EMAIL PROTECTED] writes: I guess there are two ways to go about it. Simply remove the assumption that probin is only relevant to C functions; perhaps allowing a hardwired exception for builtin languages where allowing probin to be set would be deemed unsightly (ie, the easy way ;). Or, add a column to pg_language that specifies the language's probin usage so that pg_dump and the backend have an idea of how to handle these things for the given language (the takes-a-bit-more-work way). [I imagine the former could gracefully lead into the latter as well.]

I believe the reason interpret_AS_clause() is written the way that it is is to provide some error checking, ie, not let the user specify a probin clause for languages where it's not meaningful. That check predates the invention of language validator functions, IIRC. It'd probably make sense to get rid of the centralized check and expect the validator functions to do it instead.

But we're still avoiding the central issue: does it make sense to dump a probin clause at all for plpython functions? If it's a compiled form of prosrc then it probably doesn't belong in the dump. On reflection I'm kind of inclined to think that plpython is abusing the column. If it were really expensive to derive bytecode from source text then maybe it'd make sense to do what you're doing, but surely that's not all that expensive. Everyone else manages to parse prosrc on the fly and cache the result in memory; why isn't plpython doing that?

If we think that plpython is leading the wave of the future, I'd be kind of inclined to invent a new pg_proc column in which derived text can be stored, rather than trying to use probin for the purpose. Although arguably probin itself was once meant to do that, there's too much baggage now.

regards, tom lane
[HACKERS] non-transactional pg_class
Hi,

I've been taking a look at what's needed for the non-transactional part of pg_class. If I've understood this correctly, we need a separate catalog, which I've dubbed pg_ntclass (better ideas welcome), and a new pointer in RelationData to hold this new catalog's tuple for each relation. Also a new syscache needs to be created (say, NTRELOID).

Must every relation have a tuple in this catalog? Currently it is useful only for the RELATION, INDEX and TOASTVALUE relkinds, so maybe we can get away with not requiring it for other relkinds. On the other hand, must this new catalog be bootstrapped? We could initially create RelationDescs with a NULL relation->rd_ntrel, and then get the tuple from the syscache when somebody tries to read the fields.

I'm envisioning this new catalog having only reltuples and relpages for now. (I'll add relvacuumxid and relminxid in the relminxid patch, but they won't be there on the first pass.) Obviously the idea is that we would never heap_update tuples there; only heap_inplace_update (and heap_insert when a new relation is created.)

So there would be three patches:

1. Replace all uses of relation->rd_rel->reltuples and ->relpages with macros RelationGetReltuples/Relpages.

2. Add the new catalog and syscache, and have the macros get the tuple from pg_ntclass when first requested. (Also, of course, mods to the functions that update pg_class.reltuples, etc, so that they also update pg_ntclass.)

3. The relminxid patch.

Have I gotten it right?

--
Alvaro Herrera   http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
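The lazy-fetch idea behind step 2 can be sketched roughly as below. All names here (NtStats, rd_ntstats, ntclass_lookup, relation_get_reltuples) are invented stand-ins for illustration, not the real relcache or syscache API; the point is only that the first access populates the cached pointer and later accesses reuse it:

```c
#include <stddef.h>

/* Invented stand-in for the pg_ntclass tuple's interesting fields. */
typedef struct NtStats
{
	double	reltuples;
	int		relpages;
} NtStats;

static NtStats shared_stats = {1000.0, 10};	/* plays the role of the syscache */
static int	lookups = 0;					/* counts simulated syscache lookups */

/* Invented stand-in for RelationData, with the lazily-filled pointer. */
typedef struct RelationData
{
	NtStats    *rd_ntstats;		/* NULL until first access */
} RelationData;

/* Simulated NTRELOID syscache lookup. */
static NtStats *
ntclass_lookup(void)
{
	lookups++;
	return &shared_stats;
}

/*
 * The accessor behind a RelationGetReltuples-style macro: fetch the
 * tuple on first use, then reuse the cached pointer on every later call.
 */
static double
relation_get_reltuples(RelationData *rel)
{
	if (rel->rd_ntstats == NULL)
		rel->rd_ntstats = ntclass_lookup();
	return rel->rd_ntstats->reltuples;
}
```

With this shape, relations whose stats are never consulted never pay for the lookup, which also sidesteps the bootstrapping question for the common path.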
Re: [HACKERS] pg_proc probin misuse
If it were really expensive to derive bytecode from source text then maybe it'd make sense to do what you're doing, but surely that's not all that expensive. Everyone else manages to parse prosrc on the fly and cache the result in memory; why isn't plpython doing that?

It depends on the number of imported modules in the function. If it imports a lot of modules, it can take some time to compile a python function (especially if the modules have some initialisation code which must be run on import).
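The in-memory caching Tom describes can be sketched generically. This is a hypothetical illustration, not plpython's actual code: compile_prosrc, get_compiled, and the fixed-size table are all invented names, and the "bytecode" is a fake integer; the point is that the (possibly import-heavy) compile cost is paid once per function OID rather than once per call:

```c
/* Hypothetical per-backend cache of compiled function bodies, keyed by OID. */
#define CACHE_SIZE 64

typedef unsigned int Oid;

typedef struct CompiledFn
{
	Oid	fn_oid;		/* which function this slot holds */
	int	valid;		/* has this slot been filled? */
	int	bytecode;	/* stand-in for the compiled representation */
} CompiledFn;

static CompiledFn cache[CACHE_SIZE];
static int	compile_count = 0;	/* how many real compilations happened */

/* Stand-in for the expensive "parse prosrc + run module imports" step. */
static int
compile_prosrc(Oid fn_oid)
{
	compile_count++;
	return (int) fn_oid * 2;	/* fake bytecode derived from the OID */
}

/* Return the compiled form, compiling only on a cache miss. */
static int
get_compiled(Oid fn_oid)
{
	CompiledFn *slot = &cache[fn_oid % CACHE_SIZE];

	if (!slot->valid || slot->fn_oid != fn_oid)
	{
		slot->bytecode = compile_prosrc(fn_oid);
		slot->fn_oid = fn_oid;
		slot->valid = 1;
	}
	return slot->bytecode;
}
```

Even when import-time initialisation is slow, it then runs once per backend per function, which is the same amortized cost the probin approach buys without persisting derived data in the catalog.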
Re: [HACKERS] COPY FROM view
On one fine day, Sun, 2006-05-28 at 13:53, Alvaro Herrera wrote: Hi, I've been having the COPY FROM patch that was posted on pgsql-patches some time ago (I think from Hannu Krosing),

Not by/from me :)

--
Hannu Krosing
Database Architect
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn, 12618, Estonia
Skype me: callto:hkrosing
Get Skype for free: http://www.skype.com