[HACKERS] prev main developer page
Hi,

A little off topic: I want to build something as informative as the map that was on the previous main developer page. Can anyone point me to what tool to use, and maybe offer some guidance?

thanks,
Jaime Casanova

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] libpq API incompatibility between 7.4 and 8.0
Hi!

Bruce Momjian [2005-02-09 18:05 -0500]:
> > However, I just stumbled across another problem: libpq3 and the new
> > libpq4 use the same translation domain "libpq", thus they cannot be
> > installed in parallel. Can you please change the domain to "libpq4" as
> > well? This should generally be done anyway, but at least we can live
> > with one breakage of this rule (for libpq3), so that we don't need to
> > mess with the old libpq3 package.
>
> Uh, what is the translation domain?

In short, that is the unique name for a set of translations belonging to a particular application or group of related applications. The key point is that the name of the file in /usr/share/locale/<lang>/LC_MESSAGES/<domain>.mo and the call

    bindtextdomain("<domain>", NULL)

must use the same <domain> so that libintl can find the .mo file. However, that means that all applications that are installed in parallel need distinct domains. Since the whole point of SONAMEs is to allow several different library API versions to be installed in parallel, every library API (i.e. SONAME) should have a unique domain, which is most easily achieved by just appending the SONAME to the translation domain.

In my test packages I used the following patch:

diff -ruN postgresql-8.0.1-old/src/interfaces/libpq/fe-misc.c postgresql-8.0.1/src/interfaces/libpq/fe-misc.c
--- postgresql-8.0.1-old/src/interfaces/libpq/fe-misc.c	2004-12-31 23:03:50.000000000 +0100
+++ postgresql-8.0.1/src/interfaces/libpq/fe-misc.c	2005-02-07 22:55:13.177419296 +0100
@@ -1133,7 +1133,7 @@
 	{
 		already_bound = 1;
 		/* No relocatable lookup here because the binary could be anywhere */
-		bindtextdomain("libpq", getenv("PGLOCALEDIR") ? getenv("PGLOCALEDIR") : LOCALEDIR);
+		bindtextdomain("libpq4", getenv("PGLOCALEDIR") ? getenv("PGLOCALEDIR") : LOCALEDIR);
 	}
 
 	return dgettext("libpq", msgid);

diff -ruN postgresql-8.0.1-old/src/interfaces/libpq/nls.mk postgresql-8.0.1/src/interfaces/libpq/nls.mk
--- postgresql-8.0.1-old/src/interfaces/libpq/nls.mk	2005-01-14 09:57:06.000000000 +0100
+++ postgresql-8.0.1/src/interfaces/libpq/nls.mk	2005-02-07 22:54:54.770217616 +0100
@@ -1,5 +1,5 @@
 # $PostgreSQL: pgsql/src/interfaces/libpq/nls.mk,v 1.20 2005/01/14 08:57:06 petere Exp $
-CATALOG_NAME	:= libpq
+CATALOG_NAME	:= libpq4
 AVAIL_LANGUAGES	:= af cs de es fr hr it ko nb pl pt_BR ru sk sl sv tr zh_CN zh_TW
 GETTEXT_FILES	:= fe-auth.c fe-connect.c fe-exec.c fe-lobj.c fe-misc.c fe-protocol2.c fe-protocol3.c fe-secure.c
 GETTEXT_TRIGGERS:= libpq_gettext pqInternalNotice:2

Compared to the SONAME, changing the translation domain is relatively uncritical, so if you don't want to change this upstream, I can maintain this patch for Debian/Ubuntu. However, I heard that some RPM folks plan an infrastructure similar to mine, and at that point they will have precisely the same problems :-)

Thanks for considering, and have a nice day!

Martin
--
Martin Pitt                  http://www.piware.de
Ubuntu Developer             http://www.ubuntulinux.org
Debian GNU/Linux Developer   http://www.debian.org
Re: [HACKERS] enforcing a plan (in brief)
On Thu, 2005-02-10 at 14:37 -0500, Bruce Momjian wrote:
> No, we feel that is of limited value. If the optimizer isn't doing
> things properly, we will fix it.

I agree that improving the optimizer is the right answer for normal usage, so I can't get excited about query-level plan hints, but I can see the capability to instruct the planner being useful in an academic context.

-Neil

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
       subscribe-nomail command to [EMAIL PROTECTED] so that your
       message can get through to the mailing list cleanly
Re: [HACKERS] enforcing a plan (in brief)
Hicham,

For your experiments (VLDB? :-) your best bet for specifically bolting on a plan (if you can't convince the optimizer to do the right thing) is to hack it into the query planner. I've done similar hacks in the past, but only in the TelegraphCQ code and not in PostgreSQL.

--
Pip-pip
Sailesh
http://www.cs.berkeley.edu/~sailesh

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
       joining column's datatypes do not match
Re: [HACKERS] enforcing a plan (in brief)
Hicham G. Elmongui wrote:
> Is there a way to bypass the optimizer and to specify a plan to be executed?
> Something like:
>
>  Limit
>    ->  Nested Loop
>          ->  Nested Loop
>                ->  Seq Scan on tab00 t00
>                ->  Index Scan using tab03_pkey on tab03 t03
>                      Index Cond: ("outer".id = t03.id)
>          ->  Index Scan using tab01_pkey on tab01 t01
>                Index Cond: ("outer".id = t01.id)

No, we feel that is of limited value. If the optimizer isn't doing things properly, we will fix it.

--
Bruce Momjian                   | http://candle.pha.pa.us
pgman@candle.pha.pa.us          | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup.    | Newtown Square, Pennsylvania 19073
[HACKERS] enforcing a plan (in brief)
Is there a way to bypass the optimizer and to specify a plan to be executed? Something like:

 Limit
   ->  Nested Loop
         ->  Nested Loop
               ->  Seq Scan on tab00 t00
               ->  Index Scan using tab03_pkey on tab03 t03
                     Index Cond: ("outer".id = t03.id)
         ->  Index Scan using tab01_pkey on tab01 t01
               Index Cond: ("outer".id = t01.id)
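[Editor's sketch] PostgreSQL has no plan-hint syntax, but the planner can be steered indirectly with its cost-based enable_* settings and with join_collapse_limit, which fixes the join order to the order written in the FROM clause. A hedged sketch, reusing the hypothetical tab00/tab01/tab03 tables from the plan above; note these settings only change cost estimates, they do not strictly force a plan:

```sql
-- Make hash and merge joins look prohibitively expensive, leaving
-- nested loops as the cheap join method.
SET enable_hashjoin = off;
SET enable_mergejoin = off;

-- With join_collapse_limit = 1 the planner keeps explicit JOINs in
-- the order they are written, so this FROM clause pins the join order
-- to (tab00 join tab03) join tab01.
SET join_collapse_limit = 1;

SELECT *
FROM tab00 t00
JOIN tab03 t03 ON t00.id = t03.id
JOIN tab01 t01 ON t00.id = t01.id
LIMIT 10;
```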
Re: [HACKERS] New form of index "persistent reference"
> Lately, I have been thinking that a cool form of index would be some sort
> of "persistent reference" index. Like the old ISAM days of yore, a fixed
> number could point you right to the row that you want. I'm not sure if the
> "persistent reference" is a specific auto numbering column type or
> separate index structure or both.

What you are talking about is a 'relative file'. It turns out that on modern ISAM file systems, the win you get over b-tree indexing is not worth losing the ability to do simple things like run-length compression on strings. Anyway, while storing a physical offset is O(1), so is computing a hash. How would a hash index not fill your need?

Merlin

---(end of broadcast)---
TIP 6: Have you searched our list archives?
       http://archives.postgresql.org
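[Editor's sketch] Merlin's hash-index suggestion can be tried directly; a minimal sketch, assuming the hypothetical foobar table from the original post (one caveat of the era: hash indexes were not WAL-logged, so btree remained the usual choice):

```sql
CREATE TABLE foobar (id integer, name varchar, value varchar);

-- A hash index resolves equality lookups in roughly constant time,
-- with no tree descent.
CREATE INDEX foobar_id_hash ON foobar USING hash (id);

SELECT * FROM foobar WHERE id = 100;
```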
Re: [HACKERS] New form of index "persistent reference"
> If that ID is the only thing you use to access that data, why not just
> store it in a flat file with fixed-length records? seek() (or your
> language's equivalent) is usually fast.

As a matter of policy, I would never manage data outside of the database.

> If you need to drive that from within PostgreSQL, you would need an
> untrusted language to read the file, but you could also generate it from a
> table using a trigger.

Very ugly.

> Or maybe use a serial column, an index on that column, and cluster the
> table on that index. It's more than one lookup, but not much with a btree
> index. (Not sure if this is better than just using a serial and an index.
> http://www.postgresql.org/docs/8.0/interactive/sql-cluster.html says it
> isn't, if I read it correctly.)

Clustering is OK, but it doesn't handle updates and additions until you recluster the data. If a static reference were all that is needed, then merely using CTID would suffice. I was thinking a little overhead for a reference table would allow it to hook into PostgreSQL and keep it up to date.

> Then anytime there is a batch of updates to the table, re-cluster it.

Yeah, like I said, there are easier ways of doing that with fairly static data.
Re: [HACKERS] New form of index "persistent reference"
If that ID is the only thing you use to access that data, why not just store it in a flat file with fixed-length records? seek() (or your language's equivalent) is usually fast.

If you need to drive that from within PostgreSQL, you would need an untrusted language to read the file, but you could also generate it from a table using a trigger.

Or maybe use a serial column, an index on that column, and cluster the table on that index. It's more than one lookup, but not much with a btree index. (Not sure if this is better than just using a serial and an index. http://www.postgresql.org/docs/8.0/interactive/sql-cluster.html says it isn't, if I read it correctly.)

Then anytime there is a batch of updates to the table, re-cluster it.
Re: [HACKERS] libpq API incompatibility between 7.4 and 8.0
Martin Pitt wrote:

-- Start of PGP signed section.
> Hi!
>
> Bruce Momjian [2005-02-09 18:05 -0500]:
> > > However, I just stumbled across another problem: libpq3 and the new
> > > libpq4 use the same translation domain "libpq", thus they cannot be
> > > installed in parallel. Can you please change the domain to "libpq4" as
> > > well? This should generally be done anyway, but at least we can live
> > > with one breakage of this rule (for libpq3), so that we don't need to
> > > mess with the old libpq3 package.
> >
> > Uh, what is the translation domain?
>
> In short, that is the unique name for a set of translations belonging to a
> particular application or group of related applications. The key point
> is that the name of the file in
> /usr/share/locale/<lang>/LC_MESSAGES/<domain>.mo and the call
>
>     bindtextdomain("<domain>", NULL)

Well, that is certainly interesting.

> must use the same <domain> so that libintl can find the .mo file.
> However, that means that all applications that are installed in
> parallel need distinct domains. Since the whole point of SONAMEs is
> to allow several different library API versions to be installed in
> parallel, every library API (i.e. SONAME) should have a unique
> domain, which is most easily achieved by just appending the SONAME to
> the translation domain.

I see. So we have to auto-populate the SONAME into all those places. Now that you mention it, it makes sense, because the language strings are fixed for every major release, so I can see the need for this when using two major versions of PostgreSQL.

---

> In my test packages I used the following patch:
>
> diff -ruN postgresql-8.0.1-old/src/interfaces/libpq/fe-misc.c postgresql-8.0.1/src/interfaces/libpq/fe-misc.c
> --- postgresql-8.0.1-old/src/interfaces/libpq/fe-misc.c	2004-12-31 23:03:50.000000000 +0100
> +++ postgresql-8.0.1/src/interfaces/libpq/fe-misc.c	2005-02-07 22:55:13.177419296 +0100
> @@ -1133,7 +1133,7 @@
>  	{
>  		already_bound = 1;
>  		/* No relocatable lookup here because the binary could be anywhere */
> -		bindtextdomain("libpq", getenv("PGLOCALEDIR") ? getenv("PGLOCALEDIR") : LOCALEDIR);
> +		bindtextdomain("libpq4", getenv("PGLOCALEDIR") ? getenv("PGLOCALEDIR") : LOCALEDIR);
>  	}
>
>  	return dgettext("libpq", msgid);
>
> diff -ruN postgresql-8.0.1-old/src/interfaces/libpq/nls.mk postgresql-8.0.1/src/interfaces/libpq/nls.mk
> --- postgresql-8.0.1-old/src/interfaces/libpq/nls.mk	2005-01-14 09:57:06.000000000 +0100
> +++ postgresql-8.0.1/src/interfaces/libpq/nls.mk	2005-02-07 22:54:54.770217616 +0100
> @@ -1,5 +1,5 @@
>  # $PostgreSQL: pgsql/src/interfaces/libpq/nls.mk,v 1.20 2005/01/14 08:57:06 petere Exp $
> -CATALOG_NAME	:= libpq
> +CATALOG_NAME	:= libpq4
>  AVAIL_LANGUAGES	:= af cs de es fr hr it ko nb pl pt_BR ru sk sl sv tr zh_CN zh_TW
>  GETTEXT_FILES	:= fe-auth.c fe-connect.c fe-exec.c fe-lobj.c fe-misc.c fe-protocol2.c fe-protocol3.c fe-secure.c
>  GETTEXT_TRIGGERS:= libpq_gettext pqInternalNotice:2
>
> Compared to the SONAME, changing the translation domain is relatively
> uncritical, so if you don't want to change this upstream, I can
> maintain this patch for Debian/Ubuntu. However, I heard that some RPM
> folks plan an infrastructure similar to mine, and at that point they
> will have precisely the same problems :-)
>
> Thanks for considering, and have a nice day!
>
> Martin
> --
> Martin Pitt                  http://www.piware.de
> Ubuntu Developer             http://www.ubuntulinux.org
> Debian GNU/Linux Developer   http://www.debian.org
-- End of PGP section, PGP failed!

--
Bruce Momjian                   | http://candle.pha.pa.us
pgman@candle.pha.pa.us          | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup.    | Newtown Square, Pennsylvania 19073
Re: [HACKERS] slow SP with temporary tables, PLPGSQL problems
> Pavel Stehule <[EMAIL PROTECTED]> writes:
> > Can you help me other possibilities?
>
> Create the temp table only once per connection (you can use ON COMMIT
> DELETE ROWS instead of ON COMMIT DROP to clean it out). Then you won't
> need to use EXECUTE.

I am not sure that is possible. I use persistent connections via PHP, and there is no trigger for a new connection. But I found the same problem as you: PL/pgSQL can't do

    FOR r IN EXPLAIN SELECT ..
    FOR r IN EXECUTE 'EXECUTE plan()'

I rewrote the SP so that I have only one SELECT instead of two, which takes 20% less time, but it's not readable code. I don't know how much work it would be, or whether it is even possible, to move compilation to PREPARE time for every execution of this command; I think that would be more natural for the PREPARE command. But there are more problems in this part of PL/pgSQL:

CREATE OR REPLACE FUNCTION foo() RETURNS SETOF varchar AS $$
DECLARE
    r RECORD;
BEGIN
    PREPARE se(date) AS SELECT * FROM queue WHERE activated = $1;
    FOR r IN EXECUTE se(CURRENT_DATE) LOOP
        RETURN NEXT r.activated;
    END LOOP;
    DEALLOCATE se;
    RETURN;
END;
$$ LANGUAGE plpgsql;

Is this code correct? I think yes, but that isn't true. I get the message:

    function se(date) does not exist
    CONTEXT: SQL statement "SELECT se(CURRENT_DATE)"

In theory one could write:

    FOR r IN EXECUTE 'EXECUTE se(CURRENT_DATE)' LOOP

but then I get the error: cannot open non-SELECT query as cursor. Prepared commands are a good idea, but I can't use them now. I have Pg 8.0.1.

Regards,
Pavel Stehule

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[HACKERS] New form of index "persistent reference"
For about 5 years now, I have been using a text search engine that I wrote and maintain.

In the beginning, I hacked up function mechanisms to return multiple value sets and columns. Then PostgreSQL added "setof" and it was cool. Then it was able to return a set of rows, which was even better.

Lately, I have been thinking that a cool form of index would be some sort of "persistent reference" index. Like the old ISAM days of yore, a fixed number could point you right to the row that you want. I'm not sure if the "persistent reference" is a specific auto-numbering column type or a separate index structure or both.

I asked the question of how you get a record without going through an index; the answer was CTID, which unfortunately changes when the row is updated.

Now, what I want to brainstorm is some sort of "persistent reference" where the value is not algorithmically stored, maybe just an offset into a table. The number of operations should be about 1 per lookup.

Imagine a dynamically growing array that has one slot per row. Every row is considered unique. When rows are updated, their CTID is updated in the reference (with vacuum?).

Imagine something like this:

    create table foobar(id reference, name varchar, value varchar);

    select * from foobar where id = 100;

The reference type has an implicit index that is basically a lookup table. For unique references where the reference value is fairly arbitrary, this would be a HUGE gain for direct lookups. There is no need for the NlogN of a tree.

On the surface level, this would be a huge win for websites that use semi-fixed tables of data.
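[Editor's sketch] The CTID behavior mentioned above can be observed directly; a sketch with a hypothetical table (the exact ctid values depend on physical layout, so they are illustrative only):

```sql
CREATE TABLE t (id integer, name varchar);
INSERT INTO t VALUES (1, 'a');

-- ctid is the row's physical address as (block, offset).
SELECT ctid, id FROM t;

-- Fetching by ctid is a single-page lookup, no index needed...
SELECT * FROM t WHERE ctid = '(0,1)';

-- ...but an UPDATE writes a new row version at a new ctid, which is
-- why a stored ctid is not a persistent reference.
UPDATE t SET name = 'b' WHERE id = 1;
SELECT ctid, id FROM t;
```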
Re: [HACKERS] slow SP with temporary tables, any idea of solution?
Pavel Stehule <[EMAIL PROTECTED]> writes:
> Can you help me other possibilities?

Create the temp table only once per connection (you can use ON COMMIT DELETE ROWS instead of ON COMMIT DROP to clean it out). Then you won't need to use EXECUTE.

regards, tom lane
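[Editor's sketch] A minimal sketch of the suggestion above; the function and table names are hypothetical, and the inner BEGIN/EXCEPTION block lets the same function work whether or not the session has already created the table:

```sql
CREATE OR REPLACE FUNCTION use_scratch(n integer) RETURNS bigint AS $$
BEGIN
    BEGIN
        -- The first call in a session creates the table; ON COMMIT
        -- DELETE ROWS empties it at the end of each transaction.
        CREATE TEMP TABLE scratch (v integer) ON COMMIT DELETE ROWS;
    EXCEPTION WHEN duplicate_table THEN
        NULL;  -- already created earlier in this connection
    END;

    INSERT INTO scratch VALUES (n);

    -- Because the table is never dropped and recreated, the cached
    -- plan for this query stays valid across calls, so no EXECUTE
    -- is needed.
    RETURN (SELECT count(*) FROM scratch);
END;
$$ LANGUAGE plpgsql;
```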
Re: [HACKERS] How can I use large object on PostgreSQL Linux
Dave Page wrote:

> -----Original Message-----
> From: [EMAIL PROTECTED] on behalf of Michael Fuhr
> Sent: Wed 2/9/2005 4:22 AM
> To: Premsun Choltanwanich
> Cc: pgsql-hackers@postgresql.org
> Subject: Re: [HACKERS] How can I use large object on PostgreSQL Linux
>
> > I wonder if admin is pgadmin-tools.
>
> Yes, it is.

We really should try to get this stuff into contrib where it belongs. These are functions to support maintenance, and not everybody likes ls/cat/kill as admin tools.

Regards,
Andreas
Re: [HACKERS] about rules
> How to write a rule to avoid queries like update, insert, delete, select
> from any database users except superuser?

REVOKE UPDATE, INSERT, DELETE, SELECT ON table FROM PUBLIC, other users;

Chris
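[Editor's sketch] This is a privileges question rather than a rules question; a sketch with hypothetical table and role names. Superusers bypass privilege checks entirely, so they keep access without any explicit GRANT:

```sql
-- Lock the table down for everyone...
REVOKE ALL ON accounts FROM PUBLIC;

-- ...then grant back only what specific roles need.
GRANT SELECT ON accounts TO reporting;
GRANT SELECT, INSERT, UPDATE, DELETE ON accounts TO app_writer;
```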
[HACKERS] about rules
How to write a rule to avoid queries like update, insert, delete, select from any database users except superuser?

thanks,
Prashant