Re: [HACKERS] CVS pg_config --includedir-server broken
On Tue, Jun 28, 2005 at 08:12:16PM -0400, Bruce Momjian wrote: strk wrote: The value returned from pg_config --includedir-server is broken as of CVS. It points to a nonexistent directory: /home/extra/pgroot-cvs/include/server Correct value would be: /home/extra/pgroot-cvs/include/postgresql/server Well, on my system on CVS it is right: A make clean did the trick, sorry for bothering. --strk; ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] Weird panic with 7.4.7
Tom Lane [EMAIL PROTECTED] writes: In the second place, we don't treat communication failures as ERRORs, so how did step 3 happen? You probably realize this, but just in case: Broken Pipe probably means the backend received SIGPIPE, not just that some file operation syscall returned -1. -- greg
[HACKERS] Startup successful message, even on failure
A number of times when I've attempted to start the postmaster (using pg_ctl start) I've gotten Fatal error messages (usually when I don't have the shmmax/shmall settings correct) followed by a success message (something like postmaster successfully started). I don't currently have any shell output saved showing this, but I believe it's happened with PG versions as late as 8.0.3. If needed, I'll try to replicate it (I don't want to reset my shmmax/shmall and restart my laptop if I don't have to.) Has anyone else seen this? Is this something that can be fixed for 8.1? Michael Glaesemann grzm myrealbox com
Re: [HACKERS] [PATCHES] Dbsize backend integration
Bruce Momjian wrote: Dave Page wrote:
pg_relation_size(text) - Get relation size by name/schema.name
pg_relation_size(oid) - Get relation size by OID
pg_tablespace_size(name) - Get tablespace size by name
pg_tablespace_size(oid) - Get tablespace size by OID
pg_database_size(name) - Get database size by name
pg_database_size(oid) - Get database size by OID
pg_table_size(text) - Get table size (including all indexes and toast tables) by name/schema.name
pg_table_size(oid) - Get table size (including all indexes and toast tables) by OID
pg_size_pretty(int8) - Pretty print (and round) the byte size specified (eg, 123456 = 121KB)
OK, so you went with relation as heap/index/toast only, and table as the total of them. I am not sure that makes sense because we usually equate relation with table, and an index isn't a relation, really. Do we have to use pg_object_size? Is there a better name? Are indexes/toasts even objects? Relation is not an ideal name, but I have heard people talk about heap relations and index relations. Indexes and tables (and sequences) are treated in a similar way quite often. Think of ALTER TABLE example_index RENAME TO another_index. This is even less obvious. Of course, in relational theory an index would not be a relation, because an index is just an implementation detail. I don't like object_size any better, since that makes me rather think of large objects or rows as objects (object id...). Perhaps pg_table_size should be split into pg_table_size and pg_indexes_size, where pg_indexes_size is the aggregate of all indexes on a table and pg_table_size is just table+toast+toast-index. If no one has a better idea for pg_relation_size, I would rather keep it for consistency with the contrib module, and because it's not too far off. Best Regards, Michael Paesold
Re: [HACKERS] [SQL] ENUM like data type
On 6/28/05, Martín Marqués martin@bugs.unl.edu.ar wrote: On Tue, 28 Jun 2005 13:58, PFC wrote: Personally I use one table which has columns (domain, name) and which stores all enum values for all the different enums. I then have CHECK( is_in_domain( column, 'domain_name' )), which is a simple function that checks existence of the value in this domain (SELECT 1 FROM domains WHERE domain=$2 AND name=$1 LIMIT 1) for instance. You can also use integers. I personally think that the ENUM data type is for databases that are not well designed. So, if you see the need for ENUM, that means you need to re-think your data design. I seem to remember some discussion here, half a year ago perhaps, about something similar (though not exactly the same). I think someone said that DB2 (I am not sure about that one) has a feature that enables it to normalize the table behind the scenes. As I remember it, it works somewhere along the lines of: -- you create table CREATE TABLE foo ( when timestamptz, useragent some_data_type_perhaps ); ...and the RDBMS will create a lookup table for useragents for you, with a serial key, etc, etc. And in our foo table useragent will be kept as a reference to that lookup table. When you do a select, the lookup table will be consulted behind the scenes, etc, etc. All this is doable with RULEs and VIEWs (and triggers for populating). Well, what MRB had in mind was more like a special subcase of such an approach (a lookup table with read-only keys), but I think such a lookup table would be beneficial for many users, especially when dealing with large tables. Incidentally, does it qualify for the TODO list? Or maybe it's already there? Regards, Dawid
Re: [HACKERS] commit_delay, siblings
On Wed, 2005-06-22 at 11:11 -0700, Josh Berkus wrote: Hans, Tom, We have done extensive testing some time ago. We could not see any difference on any platform we have tested (AIX, Linux, Solaris). I don't think that there is one at all - at least not on common systems. Keen then. Any objections to removing the GUC? We desperately need means to cut down on GUC options. Group commit is a well-documented technique for improving performance, but the gains only show themselves on very busy systems. It is possible that in earlier testing any apparent value was actually hidden by the BufMgrLock issues we have now resolved in 8.1. We now see XLogInsert as being very nearly the highest routine on the oprofile. That tells me that it could now be time for group commit to show us some value, if any exists. DB2 and Berkeley DB use group commit, while other RDBMSs use log writer processes which effectively provide the same thing. It would surprise me if we were unable to make use of such a technique, and it would worry me too. I would ask that we hold off on their execution, at least for the complete 8.1 beta performance test cycle. We may yet see gains, albeit, as Tom points out, that benefit may be possible on only some platforms. Best Regards, Simon Riggs
Re: [HACKERS] GiST concurrency commited
And there is one more problem: approximately once per 2-4 million statements, I got traps: TRAP: FailedAssertion(!((*curpage)->offsets_used == num_tuples), File: "vacuum.c", Line: 2766) LOG: server process (PID 15847) was terminated by signal 6 Odd. Will look at it later (after feature freeze), if you don't find the cause beforehand. It's definitely a bug in the vacuum code; I got the same trap without any GiST indexes (to reproduce, just comment out the 'create index' command in my script). -- Teodor Sigaev E-mail: [EMAIL PROTECTED] WWW: http://www.sigaev.ru/
[HACKERS] bug in ALTER TABLE / TYPE
A coworker of mine reported a subtle issue in ATExecAlterColumnType() in tablecmds.c. Suppose we have the following DDL: CREATE TABLE pktable (a int primary key, b int); CREATE TABLE fktable (fk int references pktable, c int); ALTER TABLE pktable ALTER COLUMN a TYPE bigint; Circa line 4891 in current sources, we begin a system table scan to look for pg_depend rows that depend on the column we're modifying. In this case, there are two dependencies on the column: the pg_constraint row for pktable's PK constraint, and the pg_constraint row for fktable's FK constraint. The bug is that we require the systable scan to return the FK constraint before the PK constraint -- if we attempt to remove the PK constraint first, it will fail because the FK constraint still exists and depends on the PK constraint. This bug is difficult to reproduce with a normal postmaster and vanilla sources, as the systable scan is usually implemented via a B+-tree scan on (refclassid, refobjid, refobjsubid). For this particular test case these three fields have equal values for the PK and FK constraint pg_depend rows, so the order in which the two constraints are returned is undefined. It just so happens that the Postgres b+-tree implementation *usually* returns the FK constraint first (per comments in nbtinsert.c, we usually place an equal key value at the end of a chain of equal keys, but stop looking for the end of the chain with a probability of 1%). And of course there is no ordering constraint if the systable scan is implemented via a heap scan (which would happen if, say, we're ignoring indexes on system catalogs in a standalone backend). To reproduce, any of: (1) Run the above SQL in a standalone backend started with the -P flag (2) Change true to false in the third argument to systable_beginscan() at tablecmds.c:4891, which means a heap scan will be used by a normal backend. (3) I've attached a dirty kludge of a patch that shuffles the results of the systable scan with probability 50%. 
Applying the patch should repro the bug with a normal backend (approx. 1 in 2 times, naturally). I'm not too familiar with the pg_depend code, so I'm not sure of the right fix. Comments? -Neil

Index: src/backend/commands/tablecmds.c
===
RCS file: /var/lib/cvs/pgsql/src/backend/commands/tablecmds.c,v
retrieving revision 1.162
diff -c -r1.162 tablecmds.c
*** src/backend/commands/tablecmds.c 28 Jun 2005 05:08:54 - 1.162
--- src/backend/commands/tablecmds.c 29 Jun 2005 07:15:07 -
***************
*** 4801,4806 ****
--- 4801,4809 ----
  ScanKeyData key[3];
  SysScanDesc scan;
  HeapTuple depTup;
+ List *tmp_list = NIL;
+ List *tmp_list2 = NIL;
+ ListCell *lc;
  attrelation = heap_open(AttributeRelationId, RowExclusiveLock);
***************
*** 4893,4901 ****
  while (HeapTupleIsValid(depTup = systable_getnext(scan)))
  {
! Form_pg_depend foundDep = (Form_pg_depend) GETSTRUCT(depTup);
  ObjectAddress foundObject;
  /* We don't expect any PIN dependencies on columns */
  if (foundDep->deptype == DEPENDENCY_PIN)
  elog(ERROR, "cannot alter type of a pinned column");
--- 4896,4941 ----
  while (HeapTupleIsValid(depTup = systable_getnext(scan)))
  {
! tmp_list = lappend(tmp_list, heap_copytuple(depTup));
! }
!
! foreach (lc, tmp_list)
! {
! if (lnext(lc) != NULL)
! {
! Form_pg_depend foundDep = (Form_pg_depend) GETSTRUCT((HeapTuple) lfirst(lc));
! Form_pg_depend foundDepNext = (Form_pg_depend) GETSTRUCT((HeapTuple) lfirst(lnext(lc)));
!
! if (foundDep->refclassid == foundDepNext->refclassid &&
! foundDep->refobjid == foundDepNext->refobjid &&
! foundDep->refobjsubid == foundDepNext->refobjsubid)
! {
! if (random() > (MAX_RANDOM_VALUE / 2))
! {
! tmp_list2 = lappend(tmp_list2, lfirst(lnext(lc)));
! tmp_list2 = lappend(tmp_list2, lfirst(lc));
! lc = lnext(lc);
! elog(NOTICE, "choosing to shuffle");
! continue;
! }
! else
! elog(NOTICE, "choosing not to shuffle");
! }
! }
!
! tmp_list2 = lappend(tmp_list2, lfirst(lc));
! }
!
! Assert(list_length(tmp_list) == list_length(tmp_list2));
!
! foreach (lc, tmp_list2)
! {
!
Form_pg_depend foundDep;
ObjectAddress foundObject;
+ depTup = lfirst(lc);
+ foundDep = (Form_pg_depend) GETSTRUCT(depTup);
+
/* We don't expect any PIN dependencies on columns */
if (foundDep->deptype == DEPENDENCY_PIN)
elog(ERROR, "cannot alter type of a pinned column");
Re: [HACKERS] Open items
Changes --- integrated auto-vacuum (Alvaro) ICU locale patch? That would be Palle, and he's said he thinks he can have it in place in time. I'll have to update it for win32 build specifics after that, but that should be ok after the freeze, right? Please consider removing the question mark ;-) The latest version of the patch is at http://people.freebsd.org/~girgen/postgresql-icu/readme.html. It needs to be updated for 8.1. //Magnus
Re: [HACKERS] commit_delay, siblings
Simon Riggs wrote: Group commit is a well-documented technique for improving performance, but the gains only show themselves on very busy systems. It is possible that in earlier testing any apparent value was actually hidden by the BufMgrLock issues we have now resolved in 8.1. We now see XLogInsert as being very nearly the highest routine on the oprofile. That tells me that it could now be time for group commit to show us some value, if any exists. DB2 and Berkeley DB use group commit, while other RDBMSs use log writer processes which effectively provide the same thing. It would surprise me if we were unable to make use of such a technique, and it would worry me too. I would ask that we hold off on their execution, at least for the complete 8.1 beta performance test cycle. We may yet see gains, albeit, as Tom points out, that benefit may be possible on only some platforms. I don't remember the details exactly, but isn't it so that Postgres has some kind of group commit even without the commit_delay option? I.e. when several backends are waiting to commit concurrently, the one that gets to commit will actually commit WAL for all waiting transactions to disk? I remember the term ganged WAL writes or something similar. Tom, can you elaborate on this? Please tell me if I am totally off track. ;-) Best Regards, Michael Paesold
Re: [HACKERS] Implementing SQL/PSM for PG 8.2 - debugger
Hi guys, I lean with you and Tom. While running it over the same libpq protocol would be helpful in some ways, it would have a lot of drawbacks and would really change the function of libpq. I think a separate debugging protocol is in order. Just putting on my network hat for a moment... Would an approach be to come up with a generic encapsulation protocol, similar in principle to PPP, in which we could run any protocols we wanted? If we had something like a PGSQL Encapsulation Protocol (PGEP) used to transfer all data between a PostgreSQL client/server, then we can use this to tunnel libpq requests as they are at the moment through to the other side. However, it would also mean that people could add any other protocols as they see fit for debugging, statistics and all sorts of things that people have yet to think of. Obviously this would require a client/server interface change so it's not to be taken lightly, but I thought I'd mention it since people have mentioned the possibility of changes to the FE/BE protocol. Kind regards, Mark. WebBased Ltd 17 Research Way Tamar Science Park Plymouth PL6 8BT T: +44 (0)1752 797131 F: +44 (0)1752 791023 W: http://www.webbased.co.uk
[HACKERS] symbol name clash with libpq.so: md5_hash
Hi, after linking my database client application with a crypto library (as a shared library), I noticed that I couldn't connect to PostgreSQL any longer. The error message was Password authentication failed. After some time I found out that the issue was caused by both the crypto library and libpq.so defining the symbol md5_hash. After renaming the function (thanks to open source!), the error went away. Wouldn't it be a good idea to have all exported symbols in libpq.so carry a postgres-related prefix like 'PQ', 'pq', or 'pg_' (most of them already do)? This way symbol name clashes would become less probable. Symbols in question (postgresql 8.0 on Solaris) include (nm libpq.so | fgrep ' T '): EncryptMD5, SockAddr_cidr_mask, check_sigpipe_handler, freeaddrinfo_all, getaddrinfo_all, getnameinfo_all, md5_hash, promote_v4_to_v6_addr, promote_v4_to_v6_mask, rangeSockAddr, set_noblock. AFAICS these functions are used by the server, and maybe not all of them need to be exported from the shared client libpq anyway? Regards, Martin
Re: [HACKERS] Implementing SQL/PSM for PG 8.2 - debugger
On Wed, 2005-06-29 at 10:33 +0100, Mark Cave-Ayland wrote: Hi guys, I lean with you and Tom. While running it over the same libpq protocol would be helpful in some ways, it would have a lot of drawbacks and would really change the function of libpq. I think a separate debugging protocol is in order. Just putting on my network hat for a moment... Would an approach be to come up with a generic encapsulation protocol, similar in principle to PPP, in which we could run any protocols we wanted? That's what I also thought, but was too busy/lazy to write up :) If we had something like a PGSQL Encapsulation Protocol (PGEP) used to transfer all data between a PostgreSQL client/server, then we can use this to tunnel libpq requests as they are at the moment through to the other side. Also, additional channels in PGEP could be initiated in both directions, and things like NOTIFY could be put in a different channel. However, it would also mean that people could add any other protocols as they see fit for debugging, statistics and all sorts of things that people have yet to think of. One example would be a connection keepalive protocol, run over its own channel in PGEP and used in case the TCP link has a tendency to fail. This should be run from server to client during idle periods, just to see if the client is still there. Obviously this would require a client/server interface change so it's not to be taken lightly, but I thought I'd mention it since people have mentioned the possibility of changes to the FE/BE protocol. As the protocol is negotiated at startup anyway, this is a change that could be done in a backward-compatible manner. -- Hannu Krosing [EMAIL PROTECTED]
Re: [HACKERS] GiST concurrency commited
Teodor Sigaev [EMAIL PROTECTED] writes: concur.pl - generator of SQL statements Retrieving it is forbidden ... Regards, Qingqing
Re: [HACKERS] [PATCHES] Dbsize backend integration
-Original Message- From: Bruce Momjian [mailto:[EMAIL PROTECTED] Sent: Wed 6/29/2005 2:16 AM To: Dave Page Cc: PostgreSQL-patches; PostgreSQL-development Subject: Re: [PATCHES] Dbsize backend integration OK, so you went with relation as heap/index/toast only, and table as the total of them. I am not sure that makes sense because we usually equate relation with table, and an index isn't a relation, really. Err, yes - posted that before I got your reply! Do we have to use pg_object_size? Is there a better name? Are indexes/toasts even objects? Yeah, I think perhaps pg_object_size is better in some ways than pg_relation_size, however I stuck with relation because (certainly in the pgAdmin world) we tend to think of pretty much anything as an object. I could go either way on that though, however Michael doesn't seem so keen. So, one for pg_object_size, one on the fence and one against :-). Anyone else got a preference? Regards, Dave.
Re: [HACKERS] GiST concurrency commited
Sorry, fixed. Qingqing Zhou wrote: Teodor Sigaev [EMAIL PROTECTED] writes: concur.pl - generator of SQL statements Retrieving it is forbidden ... Regards, Qingqing -- Teodor Sigaev E-mail: [EMAIL PROTECTED] WWW: http://www.sigaev.ru/
Re: [HACKERS] contrib/rtree_gist into core system?
Teodor Sigaev [EMAIL PROTECTED] writes: 1. By your reasoning, btree has a bad split algorithm too. Look at _bt_compare: if the first keys on a page are unique then the later keys will not be compared ;) Please look at BUG 1614/1616. Pleeaaaeee. There are also troubles with intarray; maybe it can touch tsearch2. I don't know. But the bug exists. -- falcon mailto:[EMAIL PROTECTED]
Re: [HACKERS] contrib/rtree_gist into core system?
On Wed, 29 Jun 2005, falcon wrote: Teodor Sigaev [EMAIL PROTECTED] writes: 1. By your reasoning, btree has a bad split algorithm too. Look at _bt_compare: if the first keys on a page are unique then the later keys will not be compared ;) Please look at BUG 1614/1616. Pleeaaaeee. There are also troubles with intarray; maybe it can touch tsearch2. I don't know. But the bug exists. Yura, could you please refresh our memory on what the bug is about? Regards, Oleg _ Oleg Bartunov, sci.researcher, hostmaster of AstroNet, Sternberg Astronomical Institute, Moscow University (Russia) Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(095)939-16-83, +007(095)939-23-83
Re: [HACKERS] Open items
Satoshi Nagayasu wrote: How about enabling/disabling triggers? From TODO: Allow triggers to be disabled. http://momjian.postgresql.org/cgi-bin/pgtodo?trigger I think this is good for COPY performance improvement. Right now I have user functions to enable/disable triggers, not DDL. It modifies system tables. But I can rewrite this as a DDL command. (ALTER TABLE?) Yea, it is a TODO item, and should be pretty straightforward to code, so sure, go ahead. It has to be something that is super-user-only. -- Bruce Momjian | http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [HACKERS] contrib/rtree_gist into core system?
Yura, I found your message http://archives.postgresql.org/pgsql-bugs/2005-04/msg00213.php So, what's the problem? Could you reproduce your problem without the silly plpgsql functions? Just plain create table, inserts and selects. Also, have you tried CVS HEAD before crying too much? Oleg On Wed, 29 Jun 2005, falcon wrote: Teodor Sigaev [EMAIL PROTECTED] writes: 1. By your reasoning, btree has a bad split algorithm too. Look at _bt_compare: if the first keys on a page are unique then the later keys will not be compared ;) Please look at BUG 1614/1616. Pleeaaaeee. There are also troubles with intarray; maybe it can touch tsearch2. I don't know. But the bug exists. Regards, Oleg _ Oleg Bartunov, sci.researcher, hostmaster of AstroNet, Sternberg Astronomical Institute, Moscow University (Russia) Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(095)939-16-83, +007(095)939-23-83
Re: [HACKERS] Open items
Marc G. Fournier wrote: On Tue, 28 Jun 2005, Bruce Momjian wrote: Here are our open items. How hard are we going to be about the cutoff date? Do we give people the weekend to complete some items? Sounds reasonable to me ... Always hate doing stuff like this on a Friday myself ... Yep. This gives us a few wind-down days, so folks, keep working and send in stuff by this Monday. We would like to see an intermediate patch before Monday so we know you are working on stuff, though. And once the patches are submitted, we will work to get them integrated into CVS, but it might take a few weeks to happen because some patches might need major work (we hope not). Also, remember that the weeks after feature freeze get very busy as we push to get everything into CVS, and people start getting worried their feature will not make it. -- Bruce Momjian | http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [HACKERS] [PATCHES] Dbsize backend integration
Dave Page wrote: -Original Message- From: Bruce Momjian [mailto:[EMAIL PROTECTED] Sent: Wed 6/29/2005 2:16 AM To: Dave Page Cc: PostgreSQL-patches; PostgreSQL-development Subject: Re: [PATCHES] Dbsize backend integration OK, so you went with relation as heap/index/toast only, and table as the total of them. I am not sure that makes sense because we usually equate relation with table, and an index isn't a relation, really. Err, yes - posted that before I got your reply! Do we have to use pg_object_size? Is there a better name? Are indexes/toasts even objects? Yeah, I think perhaps pg_object_size is better in some ways than pg_relation_size, however I stuck with relation because (certainly in the pgAdmin world) we tend to think of pretty much anything as an object. I could go either way on that though, however Michael doesn't seem so keen. So, one for pg_object_size, one on the fence and one against :-). Anyone else got a preference? I have a new idea --- pg_storage_size(). That would do just the toast/index/heap, and pg_relation_size() gets a total of them all, and only works on heap, no index or toast. How is that? -- Bruce Momjian | http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [HACKERS] Open items
Magnus Hagander wrote: Changes --- integrated auto-vacuum (Alvaro) ICU locale patch? That would be Palle, and he's said he thinks he can have it in place in time. I'll have to update it for win32 build specifics after that, but that should be ok after the freeze, right? Yes, unless the Win32 adjustments are major. Please consider removing the question mark ;-) Done. The latest version of the patch is at http://people.freebsd.org/~girgen/postgresql-icu/readme.html. It needs to be updated for 8.1. OK. -- Bruce Momjian | http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [HACKERS] commit_delay, siblings
Simon Riggs wrote: On Wed, 2005-06-22 at 11:11 -0700, Josh Berkus wrote: Hans, Tom, We have done extensive testing some time ago. We could not see any difference on any platform we have tested (AIX, Linux, Solaris). I don't think that there is one at all - at least not on common systems. Keen then. Any objections to removing the GUC? We desperately need means to cut down on GUC options. Group commit is a well-documented technique for improving performance, but the gains only show themselves on very busy systems. It is possible that in earlier testing any apparent value was actually hidden by the BufMgrLock issues we have now resolved in 8.1. We now see XLogInsert as being very nearly the highest routine on the oprofile. That tells me that it could now be time for group commit to show us some value, if any exists. DB2 and Berkeley DB use group commit, while other RDBMSs use log writer processes which effectively provide the same thing. It would surprise me if we were unable to make use of such a technique, and it would worry me too. I would ask that we hold off on their execution, at least for the complete 8.1 beta performance test cycle. We may yet see gains, albeit, as Tom points out, that benefit may be possible on only some platforms. Interesting. I didn't know other databases used group commits. Your idea of keeping it for the 8.1 testing cycle has merit. -- Bruce Momjian | http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [HACKERS] Implementing SQL/PSM for PG 8.2 - debugger
This is an interesting suggestion, particularly the addition of extra connections for management. However, it would require all clients to rewrite (yet again) their connection code. My reasons for suggesting a separate port for debugging are: 1) no changes to existing clients (this probably could be done by extending the existing protocol) 2) the ability to run your existing applications, not just psql, but any application, and remotely debug without interfering with the current connection data 3) relatively easy to create (name your favorite language) debuggers 4) seems easier to connect to, and disconnect from, the process of interest. Dave On 29-Jun-05, at 6:06 AM, Hannu Krosing wrote: On Wed, 2005-06-29 at 10:33 +0100, Mark Cave-Ayland wrote: Hi guys, I lean with you and Tom. While running it over the same libpq protocol would be helpful in some ways, it would have a lot of drawbacks and would really change the function of libpq. I think a separate debugging protocol is in order. Just putting on my network hat for a moment... Would an approach be to come up with a generic encapsulation protocol, similar in principle to PPP, in which we could run any protocols we wanted? That's what I also thought, but was too busy/lazy to write up :) If we had something like a PGSQL Encapsulation Protocol (PGEP) used to transfer all data between a PostgreSQL client/server, then we can use this to tunnel libpq requests as they are at the moment through to the other side. Also, additional channels in PGEP could be initiated in both directions, and things like NOTIFY could be put in a different channel. However, it would also mean that people could add any other protocols as they see fit for debugging, statistics and all sorts of things that people have yet to think of. One example would be a connection keepalive protocol, run over its own channel in PGEP and used in case the TCP link has a tendency to fail.
This should be run from server to client during idle periods, just to see if the client is still there. Obviously this would require a client/server interface change so it's not to be taken lightly, but I thought I'd mention it since people have mentioned the possibility of changes to the FE/BE protocol. As the protocol is negotiated at startup anyway, this is a change that could be done in a backward-compatible manner. -- Hannu Krosing [EMAIL PROTECTED]
[HACKERS] Bytecode and virtual machine
Jonah, What do you see as the advantages of using a VM and bytecode? Regarding ANTLR et al., are there any that generate C code? I am more familiar with the Java parsers. If we can't generate C this is probably a non-starter. Dave On 28-Jun-05, at 5:58 PM, Jonah H. Harris wrote: Dave, I lean with you and Tom. While running it over the same libpq protocol would be helpful in some ways, it would have a lot of drawbacks and would really change the function of libpq. I think a separate debugging protocol is in order. Also, as far as bytecode comments go, let's separate them from this thread. I have a pretty sweet hand-written stack-based VM that understands PL/SQL, but it's kinda old and written using PCCTS 1.33 (a recursive descent parser). It has compilation, decompilation, and full debugging capabilities. Unfortunately, PCCTS is no longer maintained as Terence Parr (the originator) has since moved to ANTLR. ANTLR currently does not generate C code, although I have done some starting work on it (ANTLR currently generates Python, Java, or C++). I don't suggest we really reuse one of the current VMs as it would require a lot more support and coordination. Let's take the bytecode discussion off this thread and move it to another. There is certainly a good and bad side to using bytecode and I would be glad to discuss it in another thread. Dave Cramer wrote: Pavel, I am in agreement with Tom here: we should use a separate port, and a protocol specifically designed for this. My understanding is that this protocol would be synchronous, and be used for transferring state information, variables, etc. back and forth, whereas the existing protocol would still be used to transfer data back and forth. Dave On 28-Jun-05, at 10:36 AM, Pavel Stehule wrote: On Tue, 28 Jun 2005, Tom Lane wrote: Pavel Stehule [EMAIL PROTECTED] writes: What do you think you need for an enhanced protocol? What I need? 
Something like a synchronous elog(NOTICE,''), which can return some user interaction, if it's possible. I didn't find how to do it with the current set of messages, but my knowledge of the protocol is minimal. It'd probably be smarter to manage the debugging across a separate connection, so that you could carry out debugging without requiring sophisticated support for it inside the client program. If it's single-connection then it will be essentially impractical to debug except from a few specialized clients such as pgadmin; which will make it hard to investigate behaviors that are only seen under load from a client app. I don't think so. The debug process halts the query process in both variants - remote | protocol. Remote debugging has one advantage: I can monitor any living plpgsql process, but I have to connect to some special port, and that can be a problem. Protocol debugging can be supported by libpq, and all libpq clients can debug. But is it a problem if PostgreSQL supports both variants? btw: debugging has to be only for some users, GRANT DEBUG ON LANGUAGE plpgsql TO .. For me, the better variant is if I can debug plpgsql code in the psql console, without a special application. I don't say that a special application shouldn't exist (from my view, of course). Maybe: set debug_mode to true; -- if 't' then func stmt has src reset function myfce(integer, integer); -- need recompilation create breakpoint on myfce(integer, integer) line 1; select myfce(10,10); dbg \l .. list current line \c .. continue \n .. next stmt \L .. show src \s .. show stack \b .. switch breakpoint \q .. quit function select myvar+10 .. any sql expression variable .. print variable \c myfce - 10 that's all. Maybe I have a big fantasy :). Regards Pavel + small argument: if psql supports debug mode, I don't need to leave my emacs postgresql mode. I don't know exactly how to cause such a connection to get set up, especially remotely. But we should try to think of a way. 
regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] [PATCHES] Dbsize backend integration
Michael Paesold wrote: Do we have to use pg_object_size? Is there a better name? Are indexes/toasts even objects? Relation is not an ideal name, but I have heard people talk about heap relations and index relations. Indexes and tables (and sequences) are treated in a similar way quite often. Think of ALTER TABLE example_index RENAME TO another_index. This is even less obvious. Of course in relational theory, an index would not be a relation, because an index is just an implementation detail. I don't like object_size any better, since that makes me rather think of large objects or rows as objects (object id...). Perhaps pg_table_size should be split into pg_table_size and pg_indexes_size, where pg_indexes_size is the aggregate of all indexes on a table and pg_table_size is just table+toast+toast-index. If no one has a better idea for pg_relation_size, I would rather keep it for consistency with the contrib module, and because it's not too far off. Yea, but then we have toast and we would need another name. I suggested pg_storage_size() because it relates to a storage unit (index, toast, etc.), and not a real object or relation. -- Bruce Momjian| http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
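[Editor's note] The rounding behavior pg_size_pretty() is meant to provide (the thread's example: 123456 → 121 kB) can be sketched in a few lines. This is an approximation of the proposed function's behavior, not its actual C implementation:

```python
def pg_size_pretty(size: int) -> str:
    """Approximate the proposed pg_size_pretty(int8): stay in a unit
    until the value reaches 10240 of that unit, rounding to nearest."""
    if size < 10 * 1024:
        return f"{size} bytes"
    for unit in ("kB", "MB", "GB", "TB"):
        size = (size + 512) // 1024          # divide by 1024, round half up
        if size < 10 * 1024 or unit == "TB":
            return f"{size} {unit}"

print(pg_size_pretty(123456))  # the example from the thread: 121 kB
```

The 10240 threshold means a value is shown as, e.g., "9500 kB" rather than jumping to MB too early, which keeps small rounding errors invisible.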
Re: [HACKERS] Implementing SQL/PSM for PG 8.2 - debugger
On K, 2005-06-29 at 08:00 -0400, Dave Cramer wrote: This is an interesting suggestion, particularly the addition of additional connections for management. However it does require all clients rewrite (yet again) their connection code. My reasoning for suggesting a separate port for debugging is: I'm not objecting to the idea of a separate port, just suggesting one way to connect to that port using clients that are aware of the new PGEP protocol. The old ones can continue to work as they are. -- Hannu Krosing [EMAIL PROTECTED] ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] commit_delay, siblings
Simon Riggs [EMAIL PROTECTED] writes: Group commit is a well-documented technique for improving performance, The issue here is not "is group commit a good idea in the abstract?" It is "is the commit_delay implementation of the idea worth a dime?" ... and the evidence we have all points to the answer NO. We should not let theoretical arguments blind us to this. I posted an analysis some time ago showing that under heavy load, we already have the effect of ganged commits, without commit_delay: http://archives.postgresql.org/pgsql-hackers/2002-10/msg00331.php It's likely that there is more we can and should do, but that doesn't mean that commit_delay is the right answer. commit_delay doesn't do anything to encourage ganging of writes; it just inserts an arbitrary delay that's not synchronized to anything, and is probably an order of magnitude too large anyway on most platforms. I would ask that we hold off on their execution, at least for the complete 8.1 beta performance test cycle. I'm willing to wait a week while Tatsuo runs some fresh tests. I'm not willing to wait indefinitely for evidence that I'm privately certain will not be forthcoming. regards, tom lane ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
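[Editor's note] The self-ganging effect Tom describes can be seen with a toy model (my own sketch, not PostgreSQL code): commits that arrive while a WAL flush is in progress all ride on the next flush, so under heavy load the number of fsyncs per commit drops without any artificial commit_delay.

```python
def count_flushes(arrivals, flush_cost):
    """Count WAL flushes when every commit already waiting at flush
    start shares that flush (ganged commit, no inserted delay)."""
    arrivals = sorted(arrivals)
    flushes, busy_until, i = 0, 0.0, 0
    while i < len(arrivals):
        start = max(busy_until, arrivals[i])
        # Everyone queued by the time the flush starts shares this fsync.
        while i < len(arrivals) and arrivals[i] <= start:
            i += 1
        busy_until = start + flush_cost
        flushes += 1
    return flushes

# Light load: each commit pays for its own fsync.
print(count_flushes([0, 1, 2, 3, 4, 5], flush_cost=0.5))  # 6
# Heavy load relative to fsync time: commits gang up automatically.
print(count_flushes([0, 1, 2, 3, 4, 5], flush_cost=10))   # 2
```

In this model an extra fixed delay only helps if it happens to line several commits up better than the fsync latency already does, which matches the argument that commit_delay's unsynchronized sleep buys little.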
Re: [HACKERS] Startup successful message, even on failure
Michael Glaesemann [EMAIL PROTECTED] writes: A number of times when I've attempted to start the postmaster (using pg_ctl start) I've gotten Fatal error messages (usually when I don't have the shmmax/shmall settings correct) followed by a success message (something like postmaster successfully started). Unless you use -w, this isn't a bug, it's the expected behavior. regards, tom lane ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[HACKERS] Proposal: associative arrays for plpgsql (concept)
Hello The concept is from Oracle 9i, but with some changes. http://www.oracle-10g.de/oracle_10g_documentation/appdev.101/b10807/05_colls.htm#i35672 Associative arrays are arrays with an arbitrary index. They would be created as DECLARE x varchar[] INDEX BY VARCHAR = '{}'; -- some format, no idea yet Then I can use x[key] anywhere; plus two enhancements of the FOR cycle: -- iteration over all values FOR i IN VALUES OF x LOOP -- x array or associative array END LOOP; -- iteration over all keys FOR i IN INDICES OF x LOOP -- x associative array x[i] END LOOP; new functions: exists(x, key); delete(x, key); The index is accessible only from PL/pgSQL. Associative arrays can be a special PostgreSQL type or classic arrays with a hash index. Comments, notes? Regards Pavel Stehule ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
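[Editor's note] For readers more familiar with general-purpose languages, the proposed operations map one-to-one onto a hash/dict. A Python sketch of the intended semantics (the PL/pgSQL syntax above is Pavel's proposal, not something that exists):

```python
x = {}                       # DECLARE x varchar[] INDEX BY VARCHAR = '{}'
x["cmd1"] = "723:t:f:1"      # x[key] := value

exists = "cmd1" in x         # exists(x, key)
x.pop("cmd1", None)          # delete(x, key)

x.update(a="1", b="2")
values = [v for v in x.values()]  # FOR i IN VALUES OF x LOOP ... END LOOP
keys = [k for k in x]             # FOR i IN INDICES OF x LOOP ... x[i] ...
print(exists, sorted(keys), sorted(values))
```

The point of the proposal is getting this O(1)-lookup container and both iteration forms natively in PL/pgSQL rather than simulating them with tables.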
Re: [HACKERS] Implementing SQL/PSM for PG 8.2
Andrew Dunstan wrote: It could be done by putting the SPL parser in front of the SQL parser. Maybe Luss will tell us how it was done ;-) We added SPL 'CREATE [OR REPLACE] PROCEDURE' and 'CREATE [OR REPLACE] FUNCTION' syntax support to the main scanner and parser for the backend. By entering an exclusive state for scanning the body, similar to the quoted-string handling for the PostgreSQL language-agnostic function creation syntax, we achieve the desired result. We return the scanner to the INITIAL state on encountering the last END token, based upon the block depth level that we are keeping track of in the exclusive state. The rest of the handling, behind the parser, conforms to standard PostgreSQL language-agnostic function creation with all the required attributes set for PG_PROC via the CreateFunctionStmt node. Please note that I am using 'PG_PROC' and 'CreateFunctionStmt' just to maintain the standard PostgreSQL reference point; our implementation actually differs in terms of catalog and structure naming, as we went for semi-bifurcation between procedures and functions to meet our future needs. During the process, however, we have ensured full backward compatibility. This list of required attributes includes the language to be set as EDB-SPL (configured as the default PL for EnterpriseDB), in addition to other attributes such as parameter information etc. Subsequently, like for any other PL in PostgreSQL, the SPL language handler maintains the responsibility of performing the compilation and execution of the SPL proc/function body source text. -Affan Lead Database Architect, EnterpriseDB Corporation. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Bytecode and virtual machine
Hey Dave, I see a few of the advantages and disadvantages as follows: ADVANTAGES - Faster execution (a single parse/compile) - The ability for companies/people to write PL code and not directly share the source (though disassembly is always possible) - Built-in debugging support (could be added to something like PL/pgSQL, but would work better if built into the system from the ground-up) - Support for PL profiling (great for some of the newbie PL writers and would be useful for finding performance problems when packages come around) - Better/faster exception handling DISADVANTAGES - Generated bytecode would have to be platform independent - Maintenance of the VM itself (how many people would be able to pick up development/support) - Platform support for the VM (not really that big of an issue if it's done right) I have experience writing both stack and register based VMs but I believe that a stack-based VM would be a great way to implement PLs. Sure, a register-based VM sometimes runs faster than a stack-based machine, but it is also a great deal more complex. Just a couple thoughts :) -Jonah Dave Cramer wrote: Jonah, What do you see as the advantages of using a VM and bytecode? Regarding Antlr etal, are there any that generate C code. I am more familiar with the java parsers. If we can't generate C this is probably a non-starter. Dave On 28-Jun-05, at 5:58 PM, Jonah H. Harris wrote: Dave, I lean with you and Tom. While running it over the same libpq protocol would be helpful in some ways, it would have a lot of drawbacks and would really change the function of libpq. I think a separate debugging protocol is in order. Also, as far as bytecode comments go, let's separate them from this thread. I have a pretty sweet hand-written stack-based VM that understands PL/SQL, but it's kinda old and written using PCCTS 1.33 (a recursive descent parser). It has compilation, decompilation, and full debugging capabilities. 
Unfortunately, PCCTS is no longer maintained as Terence Parr (the originator) has since moved to ANTLR. ANTLR currently does not generate C code although I have done some starting work on it (ANTLR currently generates Python, Java, or C++). I don't suggest we really reuse one of the current VMs as it would require a lot more support and coordination. Let's take the bytecode discussion off this thread and move it to another. There is certainly a good and bad side to using bytecode and I would be glad to discuss it in another thread. Dave Cramer wrote: Pavel, I am in agreement with Tom here: we should use a separate port and a protocol specifically designed for this. My understanding is that this protocol would be synchronous and be used for transferring state information, variables, etc. back and forth, whereas the existing protocol would still be used to transfer data back and forth. Dave On 28-Jun-05, at 10:36 AM, Pavel Stehule wrote: On Tue, 28 Jun 2005, Tom Lane wrote: Pavel Stehule [EMAIL PROTECTED] writes: What do you think you need for enhanced protocol ? What I need? Something like a synchronous elog(NOTICE,''), which can return some user interaction, if it's possible. I didn't find how to do it with the current set of messages, but my knowledge of the protocol is minimal. It'd probably be smarter to manage the debugging across a separate connection, so that you could carry out debugging without requiring sophisticated support for it inside the client program. If it's single-connection then it will be essentially impractical to debug except from a few specialized clients such as pgadmin; which will make it hard to investigate behaviors that are only seen under load from a client app. I don't think so. The debug process halts the query process in both variants - remote | protocol. Remote debugging has one advantage: I can monitor any living plpgsql process, but I have to connect to some special port, and that can be a problem. 
Protocol debugging can be supported by libpq, and all libpq clients can debug. But is it a problem if PostgreSQL supports both variants? btw: debugging has to be only for some users, GRANT DEBUG ON LANGUAGE plpgsql TO .. For me, the better variant is if I can debug plpgsql code in the psql console, without a special application. I don't say that a special application shouldn't exist (from my view, of course). Maybe: set debug_mode to true; -- if 't' then func stmt has src reset function myfce(integer, integer); -- need recompilation create breakpoint on myfce(integer, integer) line 1; select myfce(10,10); dbg \l .. list current line \c .. continue \n .. next stmt \L .. show src \s .. show stack \b .. switch breakpoint \q .. quit function select myvar+10 .. any sql expression variable .. print variable \c myfce - 10 that's all. Maybe
Re: [HACKERS] Proposal: associative arrays for plpgsql (concept)
On Wed, Jun 29, 2005 at 05:59:26PM +0200, Pavel Stehule wrote: Hello The concept is from Oracle 9i, but with some changes. http://www.oracle-10g.de/oracle_10g_documentation/appdev.101/b10807/05_colls.htm#i35672 Associative arrays are arrays with an arbitrary index. They would be created as DECLARE x varchar[] INDEX BY VARCHAR = '{}'; -- some format, no idea yet Then I can use x[key] anywhere; plus two enhancements of the FOR cycle: -- iteration over all values FOR i IN VALUES OF x LOOP -- x array or associative array END LOOP; -- iteration over all keys FOR i IN INDICES OF x LOOP -- x associative array x[i] END LOOP; new functions: exists(x, key); delete(x, key); The index is accessible only from PL/pgSQL. Associative arrays can be a special PostgreSQL type or classic arrays with a hash index. Comments, notes? Regards Pavel Stehule I'm all in favor of having associative arrays as a 1st-class data type in PostgreSQL. How much harder would it be to make these generally available vs. tied to one particular language? Cheers, D -- David Fetter [EMAIL PROTECTED] http://fetter.org/ phone: +1 510 893 6100 mobile: +1 415 235 3778 Remember to vote! ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[HACKERS] problem with plpgsql
Hello In my code I evaluate the expr 'select array(select generate_series from generate_series(1,800))'. My code: var = (PLpgSQL_var *) (estate->datums[stmt->varno]); value = exec_eval_expr(estate, stmt->expr, &isnull, &valtype); exec_eval_cleanup(estate); and the iteration over the array: ndim = ARR_NDIM(value); dims = ARR_DIMS(value); nitems = ArrayGetNItems(ndim, dims); element_type = ARR_ELEMTYPE(value); p = ARR_DATA_PTR(value); get_typlenbyvalalign(element_type, &typlen, &typbyval, &typalign); for (i = 0; i < nitems; i++) /* so that it handles dimensions */ { Datum itemvalue; itemvalue = fetch_att(p, typbyval, typlen); exec_assign_value(estate, (PLpgSQL_datum *) var, itemvalue, element_type, &isnull); p = att_addlength(p, typlen, PointerGetDatum(p)); p = (char *) att_align(p, typalign); } This works fine, but from a random index around 300 on, the array is broken: NOTICE: 400 NOTICE: 401 NOTICE: 402 NOTICE: 403 NOTICE: 404 NOTICE: 405 NOTICE: 406 NOTICE: 407 NOTICE: 408 NOTICE: 409 NOTICE: 410 NOTICE: 411 NOTICE: 412 NOTICE: 413 NOTICE: 414 NOTICE: 415 NOTICE: 157207208 NOTICE: 16 NOTICE: 3486004 NOTICE: 419 NOTICE: 420 NOTICE: 421 NOTICE: 157207208 NOTICE: 16 Can you help me, what am I doing wrong? Thank You Pavel Stehule ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
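[Editor's note] Stripped of the backend macros, the loop above is plain pointer-walking over the array's packed data area. A standalone sketch of the int4 case (typlen 4, by-value, 4-byte alignment assumed; real code must also honor typalign for other element types):

```python
import struct

def walk_int4_items(data_area: bytes, nitems: int):
    """Mimic fetch_att/att_addlength over a packed int4[] data area."""
    items, offset = [], 0
    for _ in range(nitems):
        # fetch_att for a 4-byte by-value type: read the native int32
        (itemvalue,) = struct.unpack_from("=i", data_area, offset)
        items.append(itemvalue)
        offset += 4  # att_addlength; att_align is a no-op for int4
    return items

buf = struct.pack("=5i", 1, 2, 3, 4, 5)
print(walk_int4_items(buf, 5))  # [1, 2, 3, 4, 5]
```

If the walking arithmetic is right, values going bad from a random index on usually points at the underlying buffer being freed or reused while the loop runs (note the exec_eval_cleanup() call before the iteration) rather than at the loop itself — though that diagnosis is a guess on my part, not something stated in the thread.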
Re: [HACKERS] Proposal: associative arrays for plpgsql (concept)
Pavel, The concept is from Oracle 9i, but with some changes. http://www.oracle-10g.de/oracle_10g_documentation/appdev.101/b10807/05_coll s.htm#i35672 How does this match the SQL2003 spec? -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] Proposal: associative arrays for plpgsql (concept)
David Fetter [EMAIL PROTECTED] writes: I'm all in favor of having associative arrays as a 1st-class data type in PostgreSQL. How much harder would it be to make these generally available vs. tied to one particular language? We already have them--they're called tables with primary keys. :) What's the use-case for these things? Just imitating Oracle? -Doug ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [HACKERS] [PATCHES] Users/Groups - Roles
I notice that AddRoleMems/DelRoleMems assume that ADMIN OPTION is not inherited indirectly; that is it must be granted directly to you. This seems wrong; SQL99 has under privileges 19) B has the WITH ADMIN OPTION on a role if a role authorization descriptor identifies the role as granted to B WITH ADMIN OPTION or a role authorization descriptor identifies it as granted WITH ADMIN OPTION to another applicable role for B. and in the Access Rules for grant role statement 1) Every role identified by role granted shall be contained in the applicable roles for A and the corresponding role authorization descriptors shall specify WITH ADMIN OPTION. I can't see any support in the spec for the idea that WITH ADMIN OPTION doesn't flow through role memberships in the same way as ordinary membership; can you quote someplace that implies this? regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
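[Editor's note] The SQL99 reading Tom quotes — ADMIN OPTION flowing through role memberships just like ordinary membership — amounts to a transitive-closure check. A sketch over a toy grant table (my own data model for illustration, not the pg_auth_members layout):

```python
def applicable_roles(who, grants):
    """All roles reachable from `who` through membership grants.
    grants maps (member, role) -> granted WITH ADMIN OPTION?"""
    roles, frontier = set(), {who}
    while frontier:
        new = {role for (member, role) in grants
               if member in frontier and role not in roles}
        roles |= new
        frontier = new
    return roles

def has_admin_option(who, role, grants):
    """Rule 19 quoted above: admin option held directly, or via any
    applicable role for `who`."""
    holders = {who} | applicable_roles(who, grants)
    return any(r == role and member in holders and admin
               for (member, r), admin in grants.items())

grants = {("bob", "staff"): False,      # bob in staff, no admin option
          ("staff", "managers"): True}  # staff in managers WITH ADMIN OPTION
print(has_admin_option("bob", "managers", grants))  # True: flows through staff
print(has_admin_option("bob", "staff", grants))     # False
```

Under the direct-grant-only reading, the first call would instead return False — which is exactly the behavioral difference being debated.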
Re: [HACKERS] commit_delay, siblings
On Wed, Jun 29, 2005 at 08:14:36AM +0100, Simon Riggs wrote: Group commit is a well-documented technique for improving performance, but the gains only show themselves on very busy systems. It is possible in earlier testing any apparent value was actually hidden by the BufMgrLock issues we have now resolved in 8.1. We now see XLogInsert as being very nearly the highest routine on the oprofile. That tells me that it could now be time for group commit to show us some value, if any exists. DB2 and Berkeley-DB use group commit, while other rdbms use log writer processes which effectively provide the same thing. It would surprise me if we were unable to make use of such a technique, and worry me too. I would ask that we hold off on their execution, at least for the complete 8.1 beta performance test cycle. We may yet see gains albeit, as Tom points out, that benefit may only be possible on only some platforms. Best Regards, Simon Riggs I would like to weigh in on Simon's side on this issue. The fact that no benefit has been seen from group commit yet may be in part due to the current WAL fsync structure, where a page at a time is sync'd. I saw a patch/test just recently mentioned that showed dramatic performance improvements, up to the level of fsync = off, by writing multiple blocks with a gather algorithm. I would hope that with a similar patch, we should begin to see the benefit of the commit_delay GUC. Ken Marshall ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] Proposal: associative arrays for plpgsql (concept)
On Wed, 29 Jun 2005, Josh Berkus wrote: Pavel, The concept is from Oracle 9i, but with some changes. http://www.oracle-10g.de/oracle_10g_documentation/appdev.101/b10807/05_colls.htm#i35672 How does this match the SQL2003 spec? I don't know. From what I can read about it, it's only a PL/SQL feature, and maybe a reason for it in PL/pgSQL. I like and need a) hash arrays b) iteration over all items of an array. Both I can use well in my code. Pavel ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[HACKERS] Checkpoint cost, looks like it is WAL/CRC
Tom, All: Ok, I finally managed, through the persistent efforts of Mark Wong, to get some tests through. Here are two tests with the CRC and WAL buffer checking completely cut out of the code, as Tom suggested: 5-min checkpoint: http://khack.osdl.org/stp/302738/results/0/ http://khack.osdl.org/stp/302706/results/0/ 60-min checkpoint: http://khack.osdl.org/stp/302739/results/0/ (please note that OSDL is having technical difficulties and some links may not work) This is the performance profile I'd expect and want to see, and the frequency of checkpoints doesn't affect the overall performance at all. Contrast it with these: 5-min checkpoint: http://khack.osdl.org/stp/302671/results/0/ I don't have a 60-minute checkpoint for comparison because of failures on the STP :-( So, now that we know what the performance bottleneck is, how do we fix it? -- --Josh Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] Proposal: associative arrays for plpgsql (concept)
On Wed, 29 Jun 2005, Douglas McNaught wrote: David Fetter [EMAIL PROTECTED] writes: I'm all in favor of having associative arrays as a 1st-class data type in PostgreSQL. How much harder would it be to make these generally available vs. tied to one particular language? We already have them--they're called tables with primary keys. :) What's the use-case for these things? Just imitating Oracle? -Doug no for example DECLARE _d varchar[] INDEX BY VARCHAR = {'cmd1' => '723:t:f:1', 'cmd2' => .. BEGIN FOR r IN SELECT * FROM data LOOP check_params(_r, _d[_r.cmd]); END LOOP; or without assoc. arrays DECLARE _d varchar; BEGIN FOR r IN SELECT * FROM data LOOP SELECT INTO par _d WHERE cmd = _r.cmd; check_params(_r, _d); END LOOP; I can't speak about speed without tests, but I can expect a hash array to be much faster. This sample is easy, but I can have a procedure which operates over big arrays of numbers (prices), and I need to save these arrays somewhere if I don't want to read them again and again. And if I have identification by key in the data, I every time have to find the key and translate it into a number. Regards Pavel Stehule ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] [PATCHES] Users/Groups - Roles
* Tom Lane ([EMAIL PROTECTED]) wrote: I notice that AddRoleMems/DelRoleMems assume that ADMIN OPTION is not inherited indirectly; that is it must be granted directly to you. This seems wrong; SQL99 has under privileges 19) B has the WITH ADMIN OPTION on a role if a role authorization descriptor identifies the role as granted to B WITH ADMIN OPTION or a role authorization descriptor identifies it as granted WITH ADMIN OPTION to another applicable role for B. and in the Access Rules for grant role statement 1) Every role identified by role granted shall be contained in the applicable roles for A and the corresponding role authorization descriptors shall specify WITH ADMIN OPTION. I can't see any support in the spec for the idea that WITH ADMIN OPTION doesn't flow through role memberships in the same way as ordinary membership; can you quote someplace that implies this? Hrm, no, sorry, I just interpreted the 'Access Rules' line for grant role statement differently. That is to say: 1) Every role identified by role granted shall be contained (Alright, all the roles which you're granting, right) in the applicable roles for A and the corresponding role (A must be in all the roles which are being granted) authorization descriptors shall specify WITH ADMIN OPTION. (the grants to A for those rules specify ADMIN OPTION) This came across to me as meaning there must exist an authorization descriptor such that the granted-role equals role granted, the grantee is A and WITH ADMIN OPTION is set. That could only be true if the grant was done explicitly. Reading from 19 above (which I don't recall seeing before, or at least not reading very carefully) I think you're right. Either way is fine with me. Thanks, Stephen signature.asc Description: Digital signature
Re: [HACKERS] Proposal: associative arrays for plpgsql (concept)
Pavel Stehule wrote: On Wed, 29 Jun 2005, Josh Berkus wrote: Pavel, The concept is from Oracle 9i, but with some changes. http://www.oracle-10g.de/oracle_10g_documentation/appdev.101/b10807/05_coll s.htm#i35672 How does this match the SQL2003 spec? I don't know. What I can read about it, it's only PL/SQL feature and maybe reason for PL/pgSQL. I like and need a) hash arrays b) iteration over all items of array All I can use well in my codes. Well, plperl and pltcl will buy you these (not to mention plruby and even pljavascript when I get around to creating it) That's not to say that we should not build them into plpgsql, but to suggest that there might be reasonable alternatives. cheers andrew ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] Open items
* Tom Lane ([EMAIL PROTECTED]) wrote: Stephen Frost [EMAIL PROTECTED] writes: ... We really should also support SET ROLE. Perhaps if I have time I'll go through the SQL spec looking at the specific requirements of 'Basic Role Support' and 'Extended Role Support' and come up with what we've got, what we're missing, and then we can decide which are features, which are bugfixes, and what we can claim in the docs. Yes, that'd be a fine thing to do. Here's the results of this. I think we're pretty close to having both Basic roles and Extended roles personally. For 'Basic roles' we need SET ROLE and some information schema tables. For 'Extended roles' I think we need 'default option CURRENT_ROLE' (if this isn't already taken care of because CURRENT_ROLE is a function?), REVOKE ROLE w/ CASCADE drop behavior. There were a few other things in 'Extended roles' that I didn't entirely follow but think we probably meet or would meet with the above mentioned items... Here's the complete list. * = Already supported, ? = Might be supported, others are to-do items. Basic roles, Feature T331 * role name * CREATE ROLE * GRANT ROLE * DROP ROLE * REVOKE ROLE SET ROLE INFORMATION_SCHEMA.ADMINISTRABLE_ROLE_AUTHORIZATIONS INFORMATION_SCHEMA.APPLICABLE_ROLES INFORMATION_SCHEMA.ENABLED_ROLES INFORMATION_SCHEMA.ROLE_COLUMN_GRANTS INFORMATION_SCHEMA.ROLE_ROUTINE_GRANTS INFORMATION_SCHEMA.ROLE_TABLE_GRANTS INFORMATION_SCHEMA.ROLE_TABLE_METHOD_GRANTS INFORMATION_SCHEMA.ROLE_USAGE_GRANTS INFORMATION_SCHEMA.ROLE_UDT_GRANTS INFORMATION_SCHEMA.ADMIN_ROLE_AUTHS INFORMATION_SCHEMA.ROLE_ROUT_GRANTS Extended roles, Feature T332 (Implies Basic roles) ? default option CURRENT_ROLE * CURRENT_ROLE * CREATE ROLE w/ ADMIN OPTION * REVOKE ROLE w/ revoke option extension GRANT OPTION FOR (GRANT ADMIN FOR?) 
REVOKE ROLE w/ drop behavior CASCADE revoke statement containing privileges which contain an object name where the owner of the SQL-schema that is specified explicitly or implicitly in the object name is not the current authorization identifier (superuser()?) revoke statement with privilege descriptor PD which satisfies: (a) PD identifies the object identified by object name simply contained in privileges contained in the revoke statement (CURRENT_ROLE?) (b) PD identifies the grantee identified by any grantee simply contained in revoke statement and that grantee does not identify the owner of the SQL-schema that is specified explicitly or implicitly in the object name simply contained in privileges contained in the revoke statement (CURRENT_USER?) (c) PD identifies the action identified by the action simply contained in privileges contained in the revoke statement (drop behavior?) (d) PD indicates that the privilege is grantable (GRANT ADMIN FOR?) Thanks, Stephen signature.asc Description: Digital signature
Re: [HACKERS] Open items
Stephen Frost [EMAIL PROTECTED] writes: Here's the results of this. I think we're pretty close to having both Basic roles and Extended roles personally. For 'Basic roles' we need SET ROLE and some information schema tables. The information schema views already exist, although I suspect the view definitions may need more work. For 'Extended roles' I think we need 'default option CURRENT_ROLE' (if this isn't already taken care of because CURRENT_ROLE is a function?), Yes, it is. REVOKE ROLE w/CASCADE drop behavior. I was just about to quiz you about the lack of any use of the grantor column in pg_auth_members. I suppose that revoking a membership that was held WITH ADMIN OPTION ought to lead to searching for and destroying all memberships granted by that ID (possibly indirectly?). DROP ROLE has got the same problem. Also, I've been working on converting the CREATEROLE privilege into something usable, and am about ready to commit that. The way it works is that CREATEROLE lets you do anything that user.c formerly required superuser for, *except* that you have to be superuser to mess with superuser roles in any way. This all seems fine as far as it goes, but should revoking CREATEROLE lead to dropping grants that were made by means of that power? Not sure. We ended up with some fairly carefully crafted compromises for ACL representation of grants made by superusers, and I think we'll likely need to think hard about it for role memberships too. regards, tom lane ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] Proposal: associative arrays for plpgsql (concept)
On Wed, Jun 29, 2005 at 01:20:17PM -0400, Douglas McNaught wrote: David Fetter [EMAIL PROTECTED] writes: I'm all in favor of having associative arrays as a 1st-class data type in PostgreSQL. How much harder would it be to make these generally available vs. tied to one particular language? We already have them--they're called tables with primary keys. :) What's the use-case for these things? Just imitating Oracle? It would make named function parameters *very* easy to do. :) SELECT * FROM foo_func( a = 2, b = 5, c = current_timestamp::timestamp with time zone ); would be equivalent to SELECT * FROM foo_func( c = current_timestamp::timestamp with time zone, a = 2, b = 5 ); and both would Do The Right Thing. It also opens the door to default parameters for those who want them. Cheers, D -- David Fetter [EMAIL PROTECTED] http://fetter.org/ phone: +1 510 893 6100 mobile: +1 415 235 3778 Remember to vote! ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
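[Editor's note] The named-parameter behavior David describes is exactly keyword arguments in other languages. A Python sketch of the desired call semantics (foo_func here is a hypothetical stand-in for the SQL function):

```python
def foo_func(a, b, c="now"):
    # c's default shows how default parameters fall out of the same mechanism
    return (a, b, c)

# Argument order stops mattering once arguments are named:
print(foo_func(a=2, b=5, c="2005-06-29") ==
      foo_func(c="2005-06-29", a=2, b=5))  # True

# And a named parameter with a default can simply be omitted:
print(foo_func(a=2, b=5))
```

The open design question for SQL is resolving overloaded functions when only a subset of named arguments is supplied, which positional matching sidesteps today.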
Re: [HACKERS] commit_delay, siblings
On Wed, 2005-06-29 at 10:16 -0400, Tom Lane wrote: Simon Riggs [EMAIL PROTECTED] writes: Group commit is a well-documented technique for improving performance, The issue here is not is group commit a good idea in the abstract?. It is is the commit_delay implementation of the idea worth a dime? ... and the evidence we have all points to the answer NO. We should not let theoretical arguments blind us to this. OK, sometimes I sound too theoretical when I do my World History of RDBMS notes, :-) ... all I meant was lets hold off till we've measured it. I would ask that we hold off on their execution, at least for the complete 8.1 beta performance test cycle. I'm willing to wait a week while Tatsuo runs some fresh tests. I'm not willing to wait indefinitely for evidence that I'm privately certain will not be forthcoming. I'm inclined to agree with you, but I see no need to move quickly. The code's been there a while now. Best Regards, Simon Riggs ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Open items
* Tom Lane ([EMAIL PROTECTED]) wrote: Stephen Frost [EMAIL PROTECTED] writes: Here's the results of this. I think we're pretty close to having both Basic roles and Extended roles personally. For 'Basic roles' we need SET ROLE and some information schema tables. The information schema views already exist, although I suspect the view definitions may need more work. Ok. REVOKE ROLE w/CASCADE drop behavior. I was just about to quiz you about the lack of any use of the grantor column in pg_auth_members. I suppose that revoking a membership that was held WITH ADMIN OPTION ought to lead to searching for and destroying all memberships granted by that ID (possibly indirectly?). DROP ROLE has got the same problem. Not sure about indirectly, but I think a 'drop role' should check for existing entries where that role is the 'grantor' and fail if any exist unless 'cascade' is given. I think 'drop role' at one point (when it was still seq-scan based) dropped based on the 'grantor' field (regardless of 'cascade' or not). When I converted it to using an index apparently I missed that issue, sorry about that. Seems like that'd mean it'd have to go back to seq-scan based again. :/ Also, I've been working on converting the CREATEROLE privilege into something usable, and am about ready to commit that. The way it works is that CREATEROLE lets you do anything that user.c formerly required superuser for, *except* that you have to be superuser to mess with superuser roles in any way. This all seems fine as far as it goes, but should revoking CREATEROLE lead to dropping grants that were made by means of that power? Not sure. We ended up with some fairly carefully crafted compromises for ACL representation of grants made by superusers, and I think we'll likely need to think hard about it for role memberships too. I'd tend to think that revoking CREATEROLE wouldn't drop grants which were made using it. 
I do agree that it needs to be thought out more carefully than I believe it has been so far, though. Thanks, Stephen
Re: [HACKERS] [PATCHES] Dbsize backend integration
Bruce Momjian wrote: Yea, but then we have toast and we would need another name. I suggested pg_storage_size() because it relates to a storage unit (index, toast, etc), and not a real object or relation. I'm not really happy that all functions change their names (more versioning handling in pgadmin), but pg_storage_size is certainly the most precise name. Regards, Andreas ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[HACKERS] Catch the commit
Hello, seems like a complicated question: is it possible in a module to receive an event, get a trigger fired, get a function called or something like this when the current transaction is about to be committed? Background: In a module (tablelog) I need the latest possible timestamp before committing the data, for the case that there is more than one started transaction. If this happens and the second transaction is committed first, I have invalid data in my log for as long as the first transaction is still going on. Does anybody have an idea about this? kind regards -- Andreas 'ads' Scherbaum Failure is not an option. It comes bundled with your Microsoft product. (Ferenc Mantfeld) ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
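For what it's worth, the backend does have a hook of roughly this shape — if memory serves, `RegisterXactCallback()` in the transaction machinery lets a loaded module be notified at commit/abort — though whether its commit event fires at the right moment for tablelog's timestamp is a separate question. The registration pattern itself, as a standalone toy (every name below is hypothetical, deliberately not the real backend API):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of a transaction-callback registry. */
typedef enum { EVT_PRE_COMMIT, EVT_ABORT } XactEventToy;
typedef void (*XactCallbackToy) (XactEventToy event, void *arg);

#define MAX_CALLBACKS 8
static XactCallbackToy callbacks[MAX_CALLBACKS];
static void *callback_args[MAX_CALLBACKS];
static size_t ncallbacks;

/* A module calls this once at load time to hook transaction end. */
static void
register_xact_callback(XactCallbackToy cb, void *arg)
{
    assert(ncallbacks < MAX_CALLBACKS);
    callbacks[ncallbacks] = cb;
    callback_args[ncallbacks] = arg;
    ncallbacks++;
}

/* The (toy) transaction machinery calls this just before commit. */
static void
fire_pre_commit(void)
{
    for (size_t i = 0; i < ncallbacks; i++)
        callbacks[i](EVT_PRE_COMMIT, callback_args[i]);
}

/* Example consumer: record the latest possible pre-commit moment. */
static void
stamp_cb(XactEventToy event, void *arg)
{
    if (event == EVT_PRE_COMMIT)
        *(int *) arg = 1;       /* in real code: take the timestamp here */
}
```

The point of the pattern for the log-ordering problem: whatever the callback records is taken after all of the transaction's own work, so two overlapping transactions get stamps in (close to) commit order rather than start order.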
Re: [HACKERS] problem with plpgsql
Pavel Stehule [EMAIL PROTECTED] writes: Hello In my code I evaluate the expression "select array(select generate_series from generate_series(1,800))" my code var = (PLpgSQL_var *) (estate->datums[stmt->varno]); value = exec_eval_expr(estate, stmt->expr, &isnull, &valtype); exec_eval_cleanup(estate); and iteration over array Uh, once you've done the exec_eval_cleanup, you can't use the result of exec_eval_expr anymore. Do I guess correctly that you're trying to do backend development without having configured --enable-cassert? Bad idea. regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Checkpoint cost, looks like it is WAL/CRC
Josh Berkus josh@agliodbs.com writes: Ok, finally managed through the persistent efforts of Mark Wong to get some tests through. Here are two tests with the CRC and WAL buffer checking completely cut out of the code, as Tom suggested: Uh, what exactly did you cut out? I suggested dropping the dumping of full page images, but not removing CRCs altogether ... regards, tom lane ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] [PATCHES] Dbsize backend integration
Andreas Pflug [EMAIL PROTECTED] writes: I'm not really happy that all functions change their names (more versioning handling in pgadmin), but pg_storage_size is certainly the most precise name. Actually, it seems excessively imprecise to me: the name conveys nothing at all to help you remember what the definition is. "storage" could mean any of the different definitions that have been kicked around in this thread. regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Open items
Bruce, I have another patch for the TODO item. From TODO item: Add ability to monitor the use of temporary sort files As I mentioned before, I created a sort statistics patch. http://archives.postgresql.org/pgsql-hackers/2004-09/msg00380.php Now my patch can work with 7.4.6 and it creates a new system view, called pg_stat_sorts.

sort=# select * from pg_stat_sorts ;
  datname  | heap_all | index_all | heap_tape | index_tape | max_size
-----------+----------+-----------+-----------+------------+----------
 sort      |       11 |         0 |         3 |          0 | 11141120
 template1 |        2 |         0 |         0 |          0 |      792
 template0 |        0 |         0 |         0 |          0 |        0
(3 rows)

Is this enough for this TODO? Any comments? -- NAGAYASU Satoshi [EMAIL PROTECTED] ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Checkpoint cost, looks like it is WAL/CRC
Tom, Uh, what exactly did you cut out? I suggested dropping the dumping of full page images, but not removing CRCs altogether ... Attached is the patch I used. (it's a -Urn patch 'cause that's what STP takes) -- --Josh Josh Berkus Aglio Database Solutions San Francisco

diff -urN pgsql/src/backend/access/transam/xlog.c pgsql-new/src/backend/access/transam/xlog.c
--- pgsql/src/backend/access/transam/xlog.c	2005-06-21 16:28:37.0 -0700
+++ pgsql-new/src/backend/access/transam/xlog.c	2005-06-21 16:27:10.0 -0700
@@ -942,44 +942,6 @@
 	 */
 	*lsn = page->pd_lsn;
 
-	if (XLByteLE(page->pd_lsn, RedoRecPtr))
-	{
-		/*
-		 * The page needs to be backed up, so set up *bkpb
-		 */
-		bkpb->node = BufferGetFileNode(rdata->buffer);
-		bkpb->block = BufferGetBlockNumber(rdata->buffer);
-
-		if (rdata->buffer_std)
-		{
-			/* Assume we can omit data between pd_lower and pd_upper */
-			uint16		lower = page->pd_lower;
-			uint16		upper = page->pd_upper;
-
-			if (lower >= SizeOfPageHeaderData &&
-				upper > lower &&
-				upper <= BLCKSZ)
-			{
-				bkpb->hole_offset = lower;
-				bkpb->hole_length = upper - lower;
-			}
-			else
-			{
-				/* No hole to compress out */
-				bkpb->hole_offset = 0;
-				bkpb->hole_length = 0;
-			}
-		}
-		else
-		{
-			/* Not a standard page header, don't try to eliminate hole */
-			bkpb->hole_offset = 0;
-			bkpb->hole_length = 0;
-		}
-
-		return true;			/* buffer requires backup */
-	}
-
 	return false;				/* buffer does not need to be backed up */
 }

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Open items
Satoshi,

sort=# select * from pg_stat_sorts ;
  datname  | heap_all | index_all | heap_tape | index_tape | max_size
-----------+----------+-----------+-----------+------------+----------
 sort      |       11 |         0 |         3 |          0 | 11141120
 template1 |        2 |         0 |         0 |          0 |      792
 template0 |        0 |         0 |         0 |          0 |        0

Good for me, if you explain the column names? -- --Josh Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Open items
Josh Berkus josh@agliodbs.com writes: Satoshi, sort=# select * from pg_stat_sorts ; datname | heap_all | index_all | heap_tape | index_tape | max_size Good for me, if you explain the column names? I was wondering about that too ... temporary sort files haven't got indexes ... regards, tom lane ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] Checkpoint cost, looks like it is WAL/CRC
Josh Berkus josh@agliodbs.com writes: Uh, what exactly did you cut out? I suggested dropping the dumping of full page images, but not removing CRCs altogether ... Attached is the patch I used. OK, thanks for the clarification. So it does seem that dumping full page images is a pretty big hit these days. (In defense of the original idea, I believe it was not such a hit at the time --- but as we continue to improve performance, things that weren't originally at the top of the profile become significant.) It seems like we have two basic alternatives:

1. Offer a GUC to turn off full-page-image dumping, which you'd use only if you really trust your hardware :-(

2. Think of a better defense against partial-page writes.

I like #2, or would if I could think of a better defense. Ideas anyone? regards, tom lane ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [HACKERS] Checkpoint cost, looks like it is WAL/CRC
Josh Berkus josh@agliodbs.com writes: 1. Offer a GUC to turn off full-page-image dumping, which you'd use only if you really trust your hardware :-( Are these just WAL pages? Or database pages as well? Database pages. The current theory is that we can completely reconstruct from WAL data every page that's been modified since the last checkpoint. So the first write of any page after a checkpoint dumps a full image of the page into WAL; subsequent writes only write differences. This is nice and secure ... at least when you are using hardware that guarantees write ordering ... otherwise it's probably mostly useless overhead. Still, I'd not like to abandon the contract that if the disk does what it is supposed to do then we will do what we are supposed to. regards, tom lane ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
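The hazard full-page images defend against is a torn write: the storage stack persists only some sectors of an 8 KB page, leaving a mix of old and new contents that looks superficially valid. A small standalone sketch of why a whole-page checksum catches this (the additive checksum and page layout are deliberately simplified illustrations, far weaker than a real CRC and not PostgreSQL's page format):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 8192
#define SECTOR    512

/*
 * Toy additive checksum over a page, skipping the first 4 bytes where
 * the checksum itself is stored.  Stands in for a real page CRC.
 */
static uint32_t
page_checksum(const uint8_t *page)
{
    uint32_t    sum = 0;

    for (size_t i = 4; i < PAGE_SIZE; i++)
        sum += page[i];
    return sum;
}

/*
 * Simulate a torn write: a crash lets only the first n_sectors of the
 * new page image reach "disk"; the tail keeps its old contents.
 */
static void
torn_write(uint8_t *disk, const uint8_t *newpage, size_t n_sectors)
{
    memcpy(disk, newpage, n_sectors * SECTOR);
}
```

On replay, WAL can redo the logical change, but if the starting page is torn the redo is applied to garbage — which is why the first post-checkpoint touch of a page currently logs the whole image. A checksum only detects the tear; option #2 in Tom's list would still need some way to reconstruct the page once a tear is found.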
Re: [HACKERS] Open items
Tom Lane wrote: Josh Berkus josh@agliodbs.com writes: Satoshi,

sort=# select * from pg_stat_sorts ;
  datname  | heap_all | index_all | heap_tape | index_tape | max_size

Good for me, if you explain the column names? I was wondering about that too ... temporary sort files haven't got indexes ... Sorry. It's my misunderstanding. index_tape will be zero forever... My patch counts inittapes(), tuplesort_begin_heap() and tuplesort_begin_index(), collects them, and sums them through the stat collector. I'm ready to rewrite it if that is required. Thanks. -- NAGAYASU Satoshi [EMAIL PROTECTED] ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[HACKERS] Build errors latest CVS freebsd
gmake distclean
./configure
...
gmake install
...
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -fno-strict-aliasing -g -I../../../../src/include -c -o timestamp.o timestamp.c -MMD
timestamp.c: In function `GetCurrentTimestamp':
timestamp.c:955: storage size of `tp' isn't known
timestamp.c:957: warning: implicit declaration of function `gettimeofday'
timestamp.c:955: warning: unused variable `tp'
timestamp.c:954: warning: `result' might be used uninitialized in this function
gmake[4]: *** [timestamp.o] Error 1
gmake[4]: Leaving directory `/space/1/home/chriskl/pgsql-head/src/backend/utils/adt'
gmake[3]: *** [adt-recursive] Error 2
gmake[3]: Leaving directory `/space/1/home/chriskl/pgsql-head/src/backend/utils'
gmake[2]: *** [utils-recursive] Error 2
gmake[2]: Leaving directory `/space/1/home/chriskl/pgsql-head/src/backend'
gmake[1]: *** [install] Error 2
gmake[1]: Leaving directory `/space/1/home/chriskl/pgsql-head/src'
gmake: *** [install] Error 2
---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Checkpoint cost, looks like it is WAL/CRC
Tom, 1. Offer a GUC to turn off full-page-image dumping, which you'd use only if you really trust your hardware :-( Are these just WAL pages? Or database pages as well? -- --Josh Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [HACKERS] HEAD: Compile issues on UnixWare 7.1.4
Larry Rosenman wrote: I'll play some more, but I'm at a loss. Especially since REL8_0_STABLE fails as well :( It was a PATH problem, as Larry discovered. With the patch I posted tonight both these branches run fine on Larry's machine (see below) FYI, I notice that, on this platform, on both HEAD and REL8_0_STABLE, the contrib/intarray tests run *extremely* slowly on both branches, apparently taking huge amounts of time over the last two gist index creation statements. It ran so slowly that I thought it was hung. cheers andrew bash-2.05a$ ./run_build.pl --verbose --nosend --nostatus --keepall REL8_0_STABLE checking out source ... checking if build run needed ... copying source to pgsql.12761 ... running configure ... running make ... running make check ... running make contrib ... running make install ... setting up db cluster ... starting db ... running make installcheck ... restarting db ... running make contrib install ... running make contrib installcheck ... stopping db ... OK Branch: REL8_0_STABLE All stages succeeded bash-2.05a$ ./run_build.pl --verbose --nosend --nostatus --keepall checking out source ... checking if build run needed ... copying source to pgsql.7066 ... running configure ... running make ... running make check ... running make contrib ... running make install ... setting up db cluster ... starting db ... running make installcheck ... restarting db ... running make PL installcheck ... restarting db ... running make contrib install ... running make contrib installcheck ... stopping db ... OK Branch: HEAD All stages succeeded bash-2.05a$ ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] HEAD: Compile issues on UnixWare 7.1.4
Andrew Dunstan [EMAIL PROTECTED] writes: FYI, I notice that, on this platform, on both HEAD and REL8_0_STABLE, the contrib/intarray tests run *extremely* slowly on both branches, apparently taking huge amounts of time over the last two gist index creation statements. It ran so slowly that I thought it was hung. intarray's regression test has always seemed pretty slow to me ... are you sure there's something out of the ordinary here? regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [HACKERS] Open items
Satoshi Nagayasu [EMAIL PROTECTED] writes: My patch counts inittapes(), tuplesort_begin_heap() and tuplesort_begin_index(), and collect them, and sum them through the stat collector. Hm, that doesn't seem like quite the right level to be counting at. Shouldn't you be hacking fd.c to count operations on FD_XACT_TEMPORARY files? regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Build errors latest CVS freebsd
Christopher Kings-Lynne [EMAIL PROTECTED] writes: timestamp.c: In function `GetCurrentTimestamp': timestamp.c:955: storage size of `tp' isn't known timestamp.c:957: warning: implicit declaration of function `gettimeofday' timestamp.c:955: warning: unused variable `tp' timestamp.c:954: warning: `result' might be used uninitialized in this function Wups. Looks like Neil already fixed it though. regards, tom lane ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings