Re: [GENERAL] query planner weirdness?
On Jun 27, 2008, at 9:53 PM, Adam Rich wrote:

>> "Bob Duffey" <[EMAIL PROTECTED]> writes:
>>> I'm seeing some query plans that I'm not expecting. The table in
>>> question is reasonably big (130,000,000 rows). The table has a primary
>>> key, indexed by one field ("ID", of type bigint). Thus, I would expect
>>> the following query to simply scan through the table using the primary
>>> key:
>>>
>>> select * from "T" order by "ID"
>>
>> This is not wrong, or at least not obviously wrong. A full-table
>> indexscan is often slower than seqscan-and-sort. If the particular
>> case is wrong for you, you need to look at adjusting the planner's
>> cost parameters to match your environment. But you didn't provide any
>> evidence that the chosen plan is actually worse than the alternative ...
>
> I think I understand what Bob's getting at when he mentions blocking.
> The seqscan-and-sort would return the last record faster, but the
> indexscan returns the first record faster. If you're iterating
> through the records via a cursor, the indexscan behavior would be
> more desirable.

If you're iterating through the records with a cursor, the plan may be
different, IIRC - weighted to provide the first row quickly, as opposed to
the query that was run, which is weighted to provide the last row quickly.

> You could get the initial rows back without waiting for all 130 million
> to be fetched and sorted.
>
> In Oracle, there is a first-rows vs. all-rows query hint for this sort
> of thing.

Cheers,

Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] freebsd + postgresql 8.3.3 = 100% of cpu usage on stats collector?
hubert depesz lubaczewski wrote:
> On Fri, Jun 27, 2008 at 06:37:45PM -0400, Bill Moran wrote:
>> Just in case you're not a FreeBSD expert, it's ktrace on FreeBSD. strace
>> is the Linux equivalent.
>
> i'm not a freebsd expert.
>
> i ran ktrace -p - it exited immediately.
> then i ran ktrace -p -f stats.ktrace.log -t\+
> and it also exited immediately.
> in cwd i have 2 binary files - not text ones as i would expect from
> strace: ktrace.out and stats.ktrace.log.
>
> are these files "ok"? i mean - i can make them available, but i'm not
> sure if they're ok given the fact that they don't look like text output
> of strace.
>
> Best regards,
> depesz

From the ktrace man page:

DESCRIPTION
    The ktrace utility enables kernel trace logging for the specified
    processes. Kernel trace data is logged to the file ktrace.out. The
    kernel operations that are traced include system calls, namei
    translations, signal processing, and I/O.

    Once tracing is enabled on a process, trace data will be logged until
    either the process exits or the trace point is cleared. A traced
    process can generate enormous amounts of log data quickly; it is
    strongly suggested that users memorize how to disable tracing before
    attempting to trace a process. The following command is sufficient to
    disable tracing on all user-owned processes, and, if executed by root,
    all processes:

        $ ktrace -C

So, read the man page, run at least ktrace -C, and look at the ktrace.out
file. The binary format is expected, by the way - unlike strace, ktrace does
not write text; you run kdump to turn ktrace.out into readable output.

Best regards
Rodrigo Gonzalez
Re: [GENERAL] Advice Wanted on Selecting Multi-row Data Requests in 10-Row Blocks
Bill Thoen wrote:
> What I'm wondering is how in PostgreSQL do you select only the first 10
> records from a selection, then the next 10, then the next, and possibly
> go back to a previous 10?

LIMIT with OFFSET has already been mentioned. There's another option if your
web app is backed by an application server or some other environment that
can retain resources across client queries: you can use a scrollable
database cursor to access the results.

This won't do you much (any?) good if your web app has to establish a
connection or get one from the pool for every request. It's only really
useful if you can store the connection in the user's session information.

Using cursors probably isn't very good for very high user counts, because
abandoned sessions will hold their database connections until the session
times out and is destroyed. For more complex apps with fewer users, though,
cursors could be a big win.

Note that OFFSET isn't free either. The database server must still execute
all of the query up to OFFSET+LIMIT results. With a high offset, that can
get very slow. A cursor will be OK here if you still start from the
beginning, but if you ever begin with a high offset you'll want to look into
using one of the methods suggested in other replies that permit you to use
an index.

--
Craig Ringer
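To make the scrollable-cursor suggestion concrete, here is a rough sketch
(table and column names are borrowed from the original question and are
illustrative; note the cursor must live inside a transaction unless it is
declared WITH HOLD):

```sql
BEGIN;

-- SCROLL allows moving backward as well as forward
DECLARE page_cur SCROLL CURSOR FOR
    SELECT * FROM mytable ORDER BY my_key;

FETCH FORWARD 10 FROM page_cur;   -- rows 1-10  (first page)
FETCH FORWARD 10 FROM page_cur;   -- rows 11-20 (next page)
MOVE BACKWARD 20 FROM page_cur;   -- rewind to before row 1
FETCH FORWARD 10 FROM page_cur;   -- rows 1-10 again (previous page)

CLOSE page_cur;
COMMIT;
```

As Craig says, this only works if the web app can keep the same database
connection (and open transaction) across page requests.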
Re: [GENERAL] query planner weirdness?
2008/6/28 Adam Rich <[EMAIL PROTECTED]>:
>> This is not wrong, or at least not obviously wrong. A full-table
>> indexscan is often slower than seqscan-and-sort. If the particular
>> case is wrong for you, you need to look at adjusting the planner's
>> cost parameters to match your environment. But you didn't provide any
>> evidence that the chosen plan is actually worse than the alternative ...
>
> I think I understand what Bob's getting at when he mentions blocking.
> The seqscan-and-sort would return the last record faster, but the
> indexscan returns the first record faster. If you're iterating
> through the records via a cursor, the indexscan behavior would be
> more desirable. You could get the initial rows back without waiting
> for all 130 million to be fetched and sorted.
>
> In Oracle, there is a first-rows vs. all-rows query hint for this sort
> of thing.

Yes, that's exactly what I mean. I've already tried your suggestion (set
enable_seqscan to off) with no luck.

Bob
Re: [GENERAL] query planner weirdness?
> "Bob Duffey" <[EMAIL PROTECTED]> writes:
>> I'm seeing some query plans that I'm not expecting. The table in
>> question is reasonably big (130,000,000 rows). The table has a primary
>> key, indexed by one field ("ID", of type bigint). Thus, I would expect
>> the following query to simply scan through the table using the primary
>> key:
>>
>> select * from "T" order by "ID"
>
> This is not wrong, or at least not obviously wrong. A full-table
> indexscan is often slower than seqscan-and-sort. If the particular
> case is wrong for you, you need to look at adjusting the planner's
> cost parameters to match your environment. But you didn't provide any
> evidence that the chosen plan is actually worse than the alternative ...

I think I understand what Bob's getting at when he mentions blocking. The
seqscan-and-sort would return the last record faster, but the indexscan
returns the first record faster. If you're iterating through the records via
a cursor, the indexscan behavior would be more desirable. You could get the
initial rows back without waiting for all 130 million to be fetched and
sorted.

In Oracle, there is a first-rows vs. all-rows query hint for this sort of
thing.
Re: [GENERAL] query planner weirdness?
2008/6/28 Tom Lane <[EMAIL PROTECTED]>:
> "Bob Duffey" <[EMAIL PROTECTED]> writes:
>> I'm seeing some query plans that I'm not expecting. The table in
>> question is reasonably big (130,000,000 rows). The table has a primary
>> key, indexed by one field ("ID", of type bigint). Thus, I would expect
>> the following query to simply scan through the table using the primary
>> key:
>>
>> select * from "T" order by "ID"
>
> This is not wrong, or at least not obviously wrong. A full-table
> indexscan is often slower than seqscan-and-sort. If the particular
> case is wrong for you, you need to look at adjusting the planner's
> cost parameters to match your environment. But you didn't provide any
> evidence that the chosen plan is actually worse than the alternative ...

Hi Tom,

Thanks for the reply. Is there some way I can provide evidence of the
alternative being slower or faster? That's my intuition, but since I can't
figure out how to get postgres to use the alternative query plan, I can't
test whether it's slower!

Bob
Re: [GENERAL] freebsd + postgresql 8.3.3 = 100% of cpu usage on stats collector?
On Fri, Jun 27, 2008 at 06:37:45PM -0400, Bill Moran wrote:
> Just in case you're not a FreeBSD expert, it's ktrace on FreeBSD. strace
> is the Linux equivalent.

i'm not a freebsd expert.

i ran ktrace -p - it exited immediately.
then i ran ktrace -p -f stats.ktrace.log -t\+
and it also exited immediately.
in cwd i have 2 binary files - not text ones as i would expect from strace:
ktrace.out and stats.ktrace.log.

are these files "ok"? i mean - i can make them available, but i'm not sure
if they're ok given the fact that they don't look like text output of
strace.

Best regards,

depesz
Re: [GENERAL] query planner weirdness?
"Bob Duffey" <[EMAIL PROTECTED]> writes:
> I'm seeing some query plans that I'm not expecting. The table in question
> is reasonably big (130,000,000 rows). The table has a primary key, indexed
> by one field ("ID", of type bigint). Thus, I would expect the following
> query to simply scan through the table using the primary key:
>
> select * from "T" order by "ID"

This is not wrong, or at least not obviously wrong. A full-table indexscan
is often slower than seqscan-and-sort. If the particular case is wrong for
you, you need to look at adjusting the planner's cost parameters to match
your environment. But you didn't provide any evidence that the chosen plan
is actually worse than the alternative ...

regards, tom lane
[GENERAL] query planner weirdness?
Hi,

I'm seeing some query plans that I'm not expecting. The table in question is
reasonably big (130,000,000 rows). The table has a primary key, indexed by
one field ("ID", of type bigint). Thus, I would expect the following query
to simply scan through the table using the primary key:

select * from "T" order by "ID"

However, here is the result of explain:

"Sort  (cost=39903495.15..40193259.03 rows=115905552 width=63)"
"  Sort Key: "ID""
"  ->  Seq Scan on "T"  (cost=0.00..2589988.52 rows=115905552 width=63)"

Interestingly, if I use limit in the query (e.g., append "limit 100" to the
end of the query), I get the plan I would expect (I think -- I'm not 100%
sure what an index scan is):

"Limit  (cost=0.00..380.12 rows=100 width=63)"
"  ->  Index Scan using "T_pkey" on "T"  (cost=0.00..440575153.49 rows=115905552 width=63)"

There does seem to be some dependence on the size of the result set. If I
use "limit 1100", I get the first query plan above instead of the second.

This is on PostgreSQL 8.3, running on Windows. I haven't made any changes to
the default server configuration.

How can I get postgres to use the second query plan when querying the entire
table? My plan is to use a server-side cursor to iterate over the result of
this query, and the second plan is non-blocking whereas the first is
blocking (due to the sort operator).

Any help appreciated.

Thanks,
Bob
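For what it's worth, the server-side cursor Bob mentions can itself change
the plan choice: when a query runs through a declared cursor, the planner
biases toward fast-start plans, so the index scan becomes much more likely
even without a LIMIT. A rough sketch against the table from the post:

```sql
BEGIN;

-- Through a cursor, the planner assumes only part of the result may be
-- fetched, which favors the fast-start index-scan plan over sort-all.
DECLARE t_cur CURSOR FOR
    SELECT * FROM "T" ORDER BY "ID";

FETCH 100 FROM t_cur;   -- first rows arrive without a full sort
FETCH 100 FROM t_cur;   -- keep iterating in batches

CLOSE t_cur;
COMMIT;
```

(In later releases this bias is tunable via the cursor_tuple_fraction
setting; checking EXPLAIN for the cursor's query on the actual server is the
way to confirm which plan it picks.)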
Re: [GENERAL] freebsd + postgresql 8.3.3 = 100% of cpu usage on stats collector?
On Fri, 27 Jun 2008 18:04:19 -0400 Tom Lane <[EMAIL PROTECTED]> wrote:
> hubert depesz lubaczewski <[EMAIL PROTECTED]> writes:
>> friend asked me to setup replication on their machines, and i noticed
>> that one of cpus (2 quad xeons) is used to 100%:
>> pgsql 58241 99.0 0.2 22456 7432 ?? Rs Thu10AM 1530:35.93 postgres:
>> stats collector process(postgres)
>
> Hmm, we had some problems with the stats collector going nuts a couple
> of versions ago; maybe you've found another way to trigger that.
> Anything unusual about this DB (lots of tables, for instance)?
> Can you try strace and/or gdb to figure out what the collector is doing?

Just in case you're not a FreeBSD expert, it's ktrace on FreeBSD. strace is
the Linux equivalent.

--
Bill Moran <[EMAIL PROTECTED]>
Re: [GENERAL] freebsd + postgresql 8.3.3 = 100% of cpu usage on stats collector?
hubert depesz lubaczewski <[EMAIL PROTECTED]> writes:
> friend asked me to setup replication on their machines, and i noticed
> that one of cpus (2 quad xeons) is used to 100%:
> pgsql 58241 99.0 0.2 22456 7432 ?? Rs Thu10AM 1530:35.93 postgres:
> stats collector process(postgres)

Hmm, we had some problems with the stats collector going nuts a couple of
versions ago; maybe you've found another way to trigger that. Anything
unusual about this DB (lots of tables, for instance)? Can you try strace
and/or gdb to figure out what the collector is doing?

regards, tom lane
Re: [GENERAL] problem getting postgres 8.3.1 with xml support to work on Redhat Enterprise Linux 4
"Aswani Kumar" <[EMAIL PROTECTED]> writes:
> The Postgres was built using ./configure
> XML2_CONFIG=/usr/local/libxml/bin/xml2-config
> --prefix=/usr/local/postgresql-8.3.1 --with-libxml. Libxml2-2.6.32 was
> built using ./configure --prefix=/usr/local/libxml and is the location
> pointed to in the Postgres configure script.

Linux isn't real friendly to putting shared libraries outside the standard
directories. If you want to keep libxml2.so in its own directory you'll need
to teach the dynamic linker to look there. See ldconfig.

regards, tom lane
Re: [GENERAL] Advice Wanted on Selecting Multi-row Data Requests in 10-Row Blocks
On Fri, Jun 27, 2008 at 2:09 PM, Bill Thoen <[EMAIL PROTECTED]> wrote:
> Thanks for tip on OFFSET. That's just what I needed. It's so easy when
> you know the command you're looking for, and so hard when you know what
> you want to do but don't know what the command is called!

I would strongly suggest taking a second look at Ragnar's suggestion. It may
be a bit more difficult to implement, but it is a clear performance winner
when you start to deal with large datasets and the OFFSET gets really big.

--
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug
Re: [GENERAL] Advice Wanted on Selecting Multi-row Data Requests in 10-Row Blocks
Thanks for the tip on OFFSET. That's just what I needed. It's so easy when
you know the command you're looking for, and so hard when you know what you
want to do but don't know what the command is called!

Thanks,

- Bill Thoen
Re: [GENERAL] Advice Wanted on Selecting Multi-row Data Requests in 10-Row Blocks
Hello

Apart from the increasing-OFFSET method, if you only need to traverse the
results sequentially, you can do a variant of this. Let us assume your
result set has a unique column pk and is ordered on column o.

Initial select:

  select * from foo order by o, pk limit 10;

Next page:

  select * from foo where (o, pk) > (?, ?) order by o, pk limit 10;

(where the ?s are the o and pk values of the last row from the previous
select)

This method will be able to make use of an index on (o, pk).

gnari

On fös, 2008-06-27 at 14:14 -0600, Bill Thoen wrote:
> What I want to do is present the results of a query in a web page, but
> only 10 rows at a time. My PostgreSQL table has millions of records and
> if I don't add a LIMIT 10 to the SQL selection, the request can take too
> long. The worst case scenario is when the user requests all records
> without adding filtering conditions (e.g. SELECT * FROM MyTable;) That
> can take 10-15 minutes, which won't work on a web application.
>
> What I'm wondering is how in PostgreSQL do you select only the first 10
> records from a selection, then the next 10, then the next, and possibly
> go back to a previous 10? Or do you do the full selection into a
> temporary table once, adding a row number to the columns and then
> performing sub-selects on that temporary table using the row id? Or do
> you run the query with Limit 10 set and then run another copy with no
> limit into a temporary table while you let the user gaze thoughtfully at
> the first ten records?
>
> I know how to get records from the database into a web page, and I know
> how to sense user actions (PageDown, PageUp, etc.) so I'm basically
> looking for techniques to extract the data quickly.
>
> Also, if this isn't the best forum to ask this sort of question, I'd
> appreciate being pointed to a more appropriate one.
>
> TIA,
>
> - Bill Thoen
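A self-contained sketch of gnari's keyset method, with the supporting index
(the literal values 42 and 7 stand in for the last row of the previous page
and are purely illustrative):

```sql
-- composite index that the row comparison can use
CREATE INDEX foo_o_pk_idx ON foo (o, pk);

-- first page
SELECT * FROM foo ORDER BY o, pk LIMIT 10;

-- next page: suppose the last row returned had o = 42, pk = 7
SELECT * FROM foo
WHERE (o, pk) > (42, 7)
ORDER BY o, pk
LIMIT 10;
```

The row comparison here is the SQL-standard row-wise kind (PostgreSQL 8.2
and later), which the planner can turn into an efficient index scan, so the
cost per page stays flat no matter how deep into the result you are -
unlike OFFSET, which rescans everything it skips.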
Re: [GENERAL] Advice Wanted on Selecting Multi-row Data Requests in 10-Row Blocks
On 6/27/08, Bill Thoen <[EMAIL PROTECTED]> wrote:
> What I want to do is present the results of a query in a web page, but
> only 10 rows at a time. My PostgreSQL table has millions of records and
> if I don't add a LIMIT 10 to the SQL selection, the request can take too
> long. The worst case scenario is when the user requests all records
> without adding filtering conditions (e.g. SELECT * FROM MyTable;) That
> can take 10-15 minutes, which won't work on a web application.

Also note that a huge OFFSET value can slow down the query as much as if
you weren't using LIMIT at all.

--
Sincerely yours,
Olexandr Melnyk <><
http://omelnyk.net/
Re: [GENERAL] Advice Wanted on Selecting Multi-row Data Requests in 10-Row Blocks
Use a dynamic select in the web page:

  $1 = 10
  $2 = 5

  select * from mytable limit $1 offset $2;

--- On Fri, 6/27/08, Bill Thoen <[EMAIL PROTECTED]> wrote:

From: Bill Thoen <[EMAIL PROTECTED]>
Subject: [GENERAL] Advice Wanted on Selecting Multi-row Data Requests in 10-Row Blocks
To: pgsql-general@postgresql.org
Date: Friday, June 27, 2008, 8:14 PM

What I want to do is present the results of a query in a web page, but only
10 rows at a time. My PostgreSQL table has millions of records and if I
don't add a LIMIT 10 to the SQL selection, the request can take too long.
The worst case scenario is when the user requests all records without adding
filtering conditions (e.g. SELECT * FROM MyTable;) That can take 10-15
minutes, which won't work on a web application.

What I'm wondering is how in PostgreSQL do you select only the first 10
records from a selection, then the next 10, then the next, and possibly go
back to a previous 10? Or do you do the full selection into a temporary
table once, adding a row number to the columns and then performing
sub-selects on that temporary table using the row id? Or do you run the
query with Limit 10 set and then run another copy with no limit into a
temporary table while you let the user gaze thoughtfully at the first ten
records?

I know how to get records from the database into a web page, and I know how
to sense user actions (PageDown, PageUp, etc.) so I'm basically looking for
techniques to extract the data quickly.

Also, if this isn't the best forum to ask this sort of question, I'd
appreciate being pointed to a more appropriate one.

TIA,

- Bill Thoen
Re: [GENERAL] Advice Wanted on Selecting Multi-row Data Requests in 10-Row Blocks
> What I want to do is present the results of a query in a web page, but
> only 10 rows at a time. My PostgreSQL table has millions of records and
> if I don't add a LIMIT 10 to the SQL selection, the request can take too
> long. The worst case scenario is when the user requests all records
> without adding filtering conditions (e.g. SELECT * FROM MyTable;) That
> can take 10-15 minutes, which won't work on a web application.
>
> What I'm wondering is how in PostgreSQL do you select only the first 10
> records from a selection, then the next 10, then the next, and possibly
> go back to a previous 10? Or do you do the full selection into a
> temporary table once, adding a row number to the columns and then
> performing sub-selects on that temporary table using the row id? Or do
> you run the query with Limit 10 set and then run another copy with no
> limit into a temporary table while you let the user gaze thoughtfully at
> the first ten records?
>
> I know how to get records from the database into a web page, and I know
> how to sense user actions (PageDown, PageUp, etc.) so I'm basically
> looking for techniques to extract the data quickly.

In addition to LIMIT, PostgreSQL has an OFFSET clause:
http://www.postgresql.org/docs/8.3/interactive/sql-select.html#SQL-LIMIT

So if you want to show the records in pages of 10, your queries would look
like this:

SELECT * FROM MyTable ORDER BY my_key LIMIT 10 OFFSET 0;
SELECT * FROM MyTable ORDER BY my_key LIMIT 10 OFFSET 10;
SELECT * FROM MyTable ORDER BY my_key LIMIT 10 OFFSET 20;

The OFFSET clause tells postgresql how many rows to skip. Note that you
always need an ORDER BY clause in there as well to get meaningful results.
[GENERAL] Advice Wanted on Selecting Multi-row Data Requests in 10-Row Blocks
What I want to do is present the results of a query in a web page, but only
10 rows at a time. My PostgreSQL table has millions of records and if I
don't add a LIMIT 10 to the SQL selection, the request can take too long.
The worst case scenario is when the user requests all records without adding
filtering conditions (e.g. SELECT * FROM MyTable;) That can take 10-15
minutes, which won't work on a web application.

What I'm wondering is how in PostgreSQL do you select only the first 10
records from a selection, then the next 10, then the next, and possibly go
back to a previous 10? Or do you do the full selection into a temporary
table once, adding a row number to the columns and then performing
sub-selects on that temporary table using the row id? Or do you run the
query with Limit 10 set and then run another copy with no limit into a
temporary table while you let the user gaze thoughtfully at the first ten
records?

I know how to get records from the database into a web page, and I know how
to sense user actions (PageDown, PageUp, etc.) so I'm basically looking for
techniques to extract the data quickly.

Also, if this isn't the best forum to ask this sort of question, I'd
appreciate being pointed to a more appropriate one.

TIA,

- Bill Thoen
Re: [GENERAL] Nice to have: reverse() function in the core
2008/6/27 wstrzalka <[EMAIL PROTECTED]>:
> Is there any possibility to have a reverse() function in the PG core in
> the future?

This function is in the orafce package:
http://www.pgsql.cz/index.php/Oracle_functionality_%28en%29

Regards
Pavel Stehule

> There are some implementations already, like this one:
> http://blog.frosties.org/post/2007/08/28/Fonction-reverse-C-avec-PostgreSQL
>
> I think the author will have nothing against using his code as he
> published it on his blog.
>
> Not everyone is able to compile a C function himself - while it's the
> fastest, and I think many users would be glad to see it ready to use.
Re: [GENERAL] Problem with FOUND
2008/6/27 A B <[EMAIL PROTECTED]>:
> Thanks for the suggestion on GET DIAGNOSTICS.
>
> But concerning EXECUTE, if I do
>
> BEGIN
>   EXECUTE QueryA
> EXCEPTION WHEN OTHERS THEN
>   QueryB
> END;
>
> will it execute QueryB if QueryA fails?

Yes, but it's not the preferable way. It creates a subtransaction, with some
(small) overhead.

Regards
Pavel
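For reference, a minimal runnable sketch of that pattern in PL/pgSQL (the
table and function names here are hypothetical, standing in for QueryA and
QueryB). Trapping a specific condition such as unique_violation is cheaper
and safer than WHEN OTHERS, which also swallows unrelated errors:

```sql
CREATE OR REPLACE FUNCTION upsert_item(p_id integer, p_val text)
RETURNS void AS $$
BEGIN
    -- "QueryA": try the insert first
    INSERT INTO items (id, val) VALUES (p_id, p_val);
EXCEPTION WHEN unique_violation THEN
    -- "QueryB": runs only if QueryA raised unique_violation; the failed
    -- insert was already rolled back with the implicit subtransaction
    UPDATE items SET val = p_val WHERE id = p_id;
END;
$$ LANGUAGE plpgsql;
```

The subtransaction Pavel mentions is set up automatically for any BEGIN
block that has an EXCEPTION clause, which is where the extra overhead comes
from.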
Re: [GENERAL] IF ROW( NEW ) <> ROW( OLD )
On Wed, Jun 25, 2008 at 8:17 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> I think IF ROW(NEW.*) <> ROW(OLD.*) will work in recent releases.
>
> Actually you'd better use IF ROW(NEW.*) IS DISTINCT FROM ROW(OLD.*) ...
> you really don't want to rely on <> as it will not give the behavior
> you want in the presence of null columns.

Thanks, that worked!

--
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug
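As an illustration, a minimal trigger built around Tom's NULL-safe test
(the table and column names are hypothetical; EXECUTE PROCEDURE is the
trigger syntax of that era):

```sql
CREATE OR REPLACE FUNCTION touch_if_changed() RETURNS trigger AS $$
BEGIN
    -- IS DISTINCT FROM treats NULL = NULL as "not distinct", so this
    -- fires only when some column genuinely changed value
    IF ROW(NEW.*) IS DISTINCT FROM ROW(OLD.*) THEN
        NEW.modified_at := now();
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_touch
    BEFORE UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE touch_if_changed();
```

With plain <>, a row where some column went from NULL to NULL would make
the whole comparison yield NULL rather than false, and the IF would simply
not fire - which is the trap Tom is warning about.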
Re: [GENERAL] compiling, performance of PostGreSQL 8.3 on 64-bit processors
On Fri, Jun 27, 2008 at 10:21 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Douglas McNaught" <[EMAIL PROTECTED]> writes:
>> On Fri, Jun 27, 2008 at 11:52 AM, Benjamin Weaver
>> <[EMAIL PROTECTED]> wrote:
>>> Not knowing about such things, I was scared by the following quote.
>>
>> Distro support for 64-bit x86 in 2004 was light-years behind where it
>> is now. A lot of stuff was hard to get working back then. Now almost
>> everything basically Just Works.
>
> Even in 2004, the guy would have had to be working on a rather old or
> broken distro to justify such a complaint. Getting 64-bit to work was
> a live issue maybe around 2001 or so...

Yeah, I thought his complaint about compiling pg 64-bit was a bit wacky. I
was building 8.0 and 8.1 64-bit on our servers back then and/or installing
x86_64 RPMs with no problems at all in 2004/5 or so. MySQL compiling, on the
other hand, has always been a frakking nightmare.
[GENERAL] Nice to have: reverse() function in the core
Is there any possibility to have a reverse() function in the PG core in the
future?

There are some implementations already, like this one:
http://blog.frosties.org/post/2007/08/28/Fonction-reverse-C-avec-PostgreSQL

I think the author will have nothing against using his code as he published
it on his blog.

Not everyone is able to compile a C function himself - while it's the
fastest, and I think many users would be glad to see it ready to use.
Re: [GENERAL] ERROR: could not open relation with OID 2836
Tom Lane wrote:
> Rodrigo Gonzalez <[EMAIL PROTECTED]> writes:
>> pg_dump is working fine now, the problem appears with the pg_buffercache
>> query... without it I don't notice anything wrong with the DB, but of
>> course there is something wrong. Can pg_buffercache be the problem?
>
> Oh ... looking again at your latest problem query, the query is buggy:
>
> db=# SELECT ... pg_relation_size(c.relname) ...
>      FROM pg_class c INNER JOIN pg_buffercache b
>        ON b.relfilenode = c.relfilenode ...
> ERROR:  relation "pg_toast_1255" does not exist
>
> The pg_toast schema isn't in your search path so you can't just do
> "pg_relation_size(c.relname)". You'd be better off using
> pg_relation_size(c.oid) anyway.
>
> I was misled by the chance coincidence that pg_proc's toast table was
> the one mentioned, otherwise I'd probably have seen this sooner.
>
> So this is not a bug, and not related to the original problem. We still
> don't know what the original problem was, but I wonder if it might have
> been of the same ilk. I don't think you ever showed us the exact query
> that led to the "could not open relation" message?

OK, adding pg_toast to the search path worked fine. I don't understand why
my laptop machine, with the same search path ("$user",public), worked
without problems.

I will see how to trigger the problem again, since this query was the one
that returned the error at OID 2836; I cannot reproduce it right now. Thanks
for your time, and if I see the error again I will write again.
Re: [GENERAL] compiling, performance of PostGreSQL 8.3 on 64-bit processors
On Fri, Jun 27, 2008 at 11:52 AM, Benjamin Weaver
<[EMAIL PROTECTED]> wrote:
> Sorry, guys, for wasting bandwidth on this! You guys gave just the
> answer I wanted to hear. Sounds like there aren't any problems.
>
> Not knowing about such things, I was scared by the following quote.
> Perhaps binaries do not need to be compiled as 64 bit binaries on a
> 64 bit machine? Or perhaps it's way out of date (2004) or simply wrong.

Distro support for 64-bit x86 in 2004 was light-years behind where it is
now. A lot of stuff was hard to get working back then. Now almost everything
basically Just Works.

-Doug
Re: [GENERAL] ERROR: could not open relation with OID 2836
Rodrigo Gonzalez <[EMAIL PROTECTED]> writes:
> pg_dump is working fine now, the problem appears with the pg_buffercache
> query... without it I don't notice anything wrong with the DB, but of
> course there is something wrong. Can pg_buffercache be the problem?

Oh ... looking again at your latest problem query, the query is buggy:

db=# SELECT ... pg_relation_size(c.relname) ...
     FROM pg_class c INNER JOIN pg_buffercache b
       ON b.relfilenode = c.relfilenode ...
ERROR:  relation "pg_toast_1255" does not exist

The pg_toast schema isn't in your search path, so you can't just do
"pg_relation_size(c.relname)". You'd be better off using
pg_relation_size(c.oid) anyway.

I was misled by the chance coincidence that pg_proc's toast table was the
one mentioned, otherwise I'd probably have seen this sooner.

So this is not a bug, and not related to the original problem. We still
don't know what the original problem was, but I wonder if it might have been
of the same ilk. I don't think you ever showed us the exact query that led
to the "could not open relation" message?

regards, tom lane
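Along the lines Tom suggests, here is a sketch of the query rewritten to use
pg_relation_size(c.oid), which sidesteps the search_path problem entirely.
The exact SELECT list of the original query wasn't shown, so the output
columns here are illustrative:

```sql
-- per-relation size and cached-buffer count, via the pg_buffercache module;
-- pg_relation_size(c.oid) works regardless of search_path, unlike the
-- relname-based form that failed on pg_toast tables
SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid)) AS on_disk_size,
       count(*) AS buffers_cached
FROM pg_class c
JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
GROUP BY c.relname, c.oid
ORDER BY buffers_cached DESC
LIMIT 20;
```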
Re: [GENERAL] compiling, performance of PostGreSQL 8.3 on 64-bit processors
"Douglas McNaught" <[EMAIL PROTECTED]> writes:
> On Fri, Jun 27, 2008 at 11:52 AM, Benjamin Weaver
> <[EMAIL PROTECTED]> wrote:
>> Not knowing about such things, I was scared by the following quote.
>
> Distro support for 64-bit x86 in 2004 was light-years behind where it
> is now. A lot of stuff was hard to get working back then. Now almost
> everything basically Just Works.

Even in 2004, the guy would have had to be working on a rather old or broken
distro to justify such a complaint. Getting 64-bit to work was a live issue
maybe around 2001 or so...

regards, tom lane
Re: [GENERAL] Problem with FOUND
Thanks for the suggestion on GET DIAGNOSTICS. But concerning EXECUTE, if I do

BEGIN
  EXECUTE QueryA;
EXCEPTION WHEN OTHERS THEN
  QueryB;
END;

will it execute QueryB if QueryA fails?

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
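[Editor's note: for what it's worth, the general shape works as sketched below — table name and statements are made up for illustration. Any error raised inside the block aborts it, rolls back the block's work, and jumps to the EXCEPTION clause, so QueryB runs only if QueryA fails.]

```sql
CREATE OR REPLACE FUNCTION try_query_a() RETURNS void AS $$
BEGIN
    -- QueryA: if this dynamic statement raises any error...
    EXECUTE 'UPDATE some_table SET val = val + 1 WHERE id = 1';
EXCEPTION WHEN OTHERS THEN
    -- ...control transfers here and QueryB runs instead.
    EXECUTE 'INSERT INTO some_table (id, val) VALUES (1, 0)';
END;
$$ LANGUAGE plpgsql;
```

Note that each BEGIN ... EXCEPTION block opens a subtransaction, which carries some overhead, so it is best not to wrap every statement this way.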
Re: [GENERAL] compiling, performance of PostGreSQL 8.3 on 64-bit processors
Sorry, guys, for wasting bandwidth on this! You guys gave just the answer I wanted to hear. Sounds like there aren't any problems. Not knowing about such things, I was scared by the following quote. Perhaps binaries do not need to be compiled as 64 bit binaries on a 64 bit machine? Or perhaps it's way out of date (2004) or simply wrong. from http://www.osnews.com/story/5768/Are_64-bit_Binaries_Really_Slower_than_32-bit_Binaries_/page3/ " ... The Compile Factor Getting applications to compile as 64-bit binaries can be tricky. The build process for some applications, such as OpenSSL, have 64-bit specifically in mind, and require nothing fancy. Others, like MySQL and especially PostgreSQL (I was originally going to include PostgreSQL benchmarks) took quite a bit of tweaking. There are compiler flags, linker flags, and you'll likely end up in a position where you need to know your way around a Makefile..." In message <[EMAIL PROTECTED]> Tom Lane <[EMAIL PROTECTED]> writes: > Greg Smith <[EMAIL PROTECTED]> writes: > > On Thu, 26 Jun 2008, Benjamin Weaver wrote: > >> I have heard of problems arising from compiling PostGreSQL (8.3) on > >> 64-bit processors. > > > From who? > > Perhaps someone who remembers PG 6.4 or thereabouts? > > Certainly any version released in the last couple of years has been > tested about as heavily on 64-bit platforms as 32-bit. > > regards, tom lane -- Benjamin Weaver Faculty Research Associate, Imaging Papyri, Greek Fragments Projects, Oxford email: [EMAIL PROTECTED] phone: (0)1865 610236 -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Partial Index Too Literal?
That example also reports that it uses the index. Only the "is true" variation insists on a seq. scan.

On Thu, Jun 26, 2008 at 4:08 PM, Lennin Caro <[EMAIL PROTECTED]> wrote:
> use this
>
> explain analyze select * from result where active = 't';
>
> --- On Thu, 6/26/08, Phillip Mills <[EMAIL PROTECTED]> wrote:
>
> From: Phillip Mills <[EMAIL PROTECTED]>
> Subject: [GENERAL] Partial Index Too Literal?
> To: pgsql-general@postgresql.org
> Date: Thursday, June 26, 2008, 7:24 PM
>
> Under somewhat unusual circumstances, rows in one of our tables have an
> 'active' flag with a true value. We check for these relatively often since
> they represent cases that need special handling. We've found through
> testing that having a partial index on that field works well. What seems
> odd to me, however, is that the index gets used only if the query is a
> textual match for how the index was specified.
>
> That is, using an index defined as 'where active = true':
>
> dev=# explain analyze select * from result where active = true;
>                                  QUERY PLAN
> ------------------------------------------------------------------------
>  Bitmap Heap Scan on result  (cost=5.31..472.34 rows=4206 width=1126) (actual time=7.868..7.868 rows=0 loops=1)
>    Filter: active
>    ->  Bitmap Index Scan on result_active_idx  (cost=0.00..4.26 rows=2103 width=0) (actual time=4.138..4.138 rows=16625 loops=1)
>          Index Cond: (active = true)
>  Total runtime: 7.918 ms
> (5 rows)
>
> dev=# explain analyze select * from result where active is true;
>                                  QUERY PLAN
> ------------------------------------------------------------------------
>  Seq Scan on result  (cost=0.00..537.26 rows=4263 width=1126) (actual time=55.631..55.631 rows=0 loops=1)
>    Filter: (active IS TRUE)
>  Total runtime: 55.668 ms
> (3 rows)
>
> This is version 8.2.6. Is there something I'm missing that could make
> these queries ever produce different results?
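[Editor's note: the two predicates are not interchangeable, which is why the planner treats them differently — `NULL = true` yields NULL (row filtered out), while `NULL IS TRUE` yields false; they agree only when `active` is non-null, and the 8.2 planner does not prove that implication. A sketch, with table and index names taken from the thread:]

```sql
-- Partial index declared with "= true":
CREATE INDEX result_active_idx ON result (active) WHERE active = true;

-- Textually matching predicate: the planner can use the partial index.
EXPLAIN SELECT * FROM result WHERE active = true;

-- "IS TRUE" is a different operator with different NULL semantics, so
-- (on 8.2) the planner cannot prove it implies the index predicate and
-- falls back to a seqscan.
EXPLAIN SELECT * FROM result WHERE active IS TRUE;
```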
Re: [GENERAL] what are rules for?
On Fri, Jun 27, 2008 at 3:06 AM, Dean Rasheed <[EMAIL PROTECTED]> wrote: >> Someone pointed out in an earlier thread that a >> way to fix this, for updates on a multi-table view (where most of the >> complication lies), is to write a "trigger" function that updates all >> the constituent tables except for one, and then write a rule that >> calls that function and then updates the one remaining table itself. >> This seems to work okay although I have not tested it with many >> clients. > > Yes that would seem to work. For UPDATE anyway. Although if it were purely > DML that you were doing, you would probably be better off just having multiple > UPDATE statements in the rule body. Then they would stand a better chance > of being rewritten and executed more efficiently. As Richard Broersma pointed out in the earlier thread, this approach has a tendency to result in "partial updates" if the WHERE clause in the UPDATE statement issued on the view includes more than just the primary key. http://archives.postgresql.org/pgsql-general/2008-06/msg00479.php http://archives.postgresql.org/pgsql-general/2006-12/msg01048.php This is probably another one of the "subtle pitfalls" you mentioned, but to me it means that using multiple UPDATE statements in the rule body is unacceptable. Mike -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
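[Editor's note: the trick described above — a "trigger" function that updates all constituent tables except one, plus a rule that calls it and then updates the remaining table — can be sketched as follows. All names here are hypothetical: a view `v` joining parent `p` and child `c` on `p.id = c.pid`.]

```sql
-- The function updates every constituent table except one...
CREATE FUNCTION update_v_child(p_id int, new_cval text) RETURNS void AS $$
BEGIN
    UPDATE c SET cval = new_cval WHERE c.pid = p_id;
END;
$$ LANGUAGE plpgsql;

-- ...and the rule calls it, then updates the one remaining table itself,
-- so the client still sees a real UPDATE row count from the top-level query.
CREATE RULE v_upd AS ON UPDATE TO v DO INSTEAD (
    SELECT update_v_child(OLD.id, NEW.cval);
    UPDATE p SET pval = NEW.pval WHERE p.id = OLD.id
);
```

As the thread notes, this avoids the "partial update" hazard of putting two UPDATE statements in the rule body, since only one statement in the rule depends on the view's WHERE clause matching.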
Re: [GENERAL] ERROR: concurrent insert in progress
Use REINDEX:

REINDEX INDEX ip_dst_idx;

or reindex the whole table:

REINDEX TABLE table_name;

http://www.postgresql.org/docs/8.3/static/sql-reindex.html

Tell us the result. Thanks.

--- On Fri, 6/27/08, Ganbold <[EMAIL PROTECTED]> wrote:
From: Ganbold <[EMAIL PROTECTED]>
Subject: [GENERAL] ERROR: concurrent insert in progress
To: pgsql-general@postgresql.org
Date: Friday, June 27, 2008, 3:04 AM

Hi,

I have a problem with my DB:

snort=# vacuum full;
WARNING: index "ip_src_idx" contains 1921678 row versions, but table contains 1921693 row versions
HINT: Rebuild the index with REINDEX.
WARNING: index "ip_dst_idx" contains 1921668 row versions, but table contains 1921693 row versions
HINT: Rebuild the index with REINDEX.
ERROR: could not read block 988 of relation 1663/16384/16472: Input/output error

snort=# reindex index ip_src_idx;
ERROR: concurrent insert in progress

How do I solve this problem? Is there any other method of fixing the index?

thanks in advance,

Ganbold

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Windows Crash
On Fri, Jun 27, 2008 at 3:37 PM, Raymond O'Donnell <[EMAIL PROTECTED]> wrote: > On 26/06/2008 17:43, Ludwig Kniprath wrote: >> >> As far as I know pgadmin uses gtk, > > Correct me if I'm wrong, but I don't think PgAdmin uses gtk on Windows. No, it most certainly doesn't. -- Dave Page EnterpriseDB UK: http://www.enterprisedb.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Windows Crash
On 26/06/2008 17:43, Ludwig Kniprath wrote: As far as I know pgadmin uses gtk, Correct me if I'm wrong, but I don't think PgAdmin uses gtk on Windows. Ray. -- Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland [EMAIL PROTECTED] Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals -- -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] ERROR: could not open relation with OID 2836
Rodrigo Gonzalez <[EMAIL PROTECTED]> writes: > Tom Lane wrote: >> When you say "restored from backup", are you talking about a pg_dump >> backup, or what? > yes, a pg_dump backup. There must be something mighty odd in that backup. Would you be willing to send it to me off-list, so I can try to reproduce the problem here? regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] ERROR: could not open relation with OID 2836
Alban Hertroys wrote:
> On Jun 26, 2008, at 5:41 AM, Rodrigo Gonzalez wrote:
>
>> Tom Lane wrote:
>>> Rodrigo Gonzalez <[EMAIL PROTECTED]> writes: Craig Ringer wrote: > What platform are you using?
>>> It's running under CentOS 4.4 using ext3, no RAID or LVM. Server is quad xeon 64 bits 3 GHz
>>>
>>> Ugh, I'd have liked to think RHEL4/Centos4 would be more reliable than
>>> that :-(. Still, you might have an issue with trying to use hardware
>>> that's not supported by RHEL4, especially if it's not a very recent
>>> version of RHEL4. Did you check compatibility charts before purchasing?
>>> https://hardware.redhat.com/
>>>
>>> regards, tom lane
>>
>> It had been working with pgsql 8.1 and 8.2 for 2 years without problems.
>> The suspicious thing is that the problems started the day after I upgraded to 8.3.
>>
>> I've tried reinstalling 8.3 from scratch and again, next morning, oid
>> 2836 is missing...
>
> Ok, throwing a few "random" questions in your direction:
>
> What procedure did you use to do those upgrades? Maybe something went
> wrong there? I'm assuming you upgraded using dump/restore, or postgres
> would have complained about the version of the data files at startup,
> but maybe you did something unusual.
>
> Are you sure there's only one version of postgres running?

Yes, just 8.3.3 is running right now.

> Are all your libraries up to date, no old versions hanging around where
> they should have been replaced?

I have postgresql-libs for 8.3.3 and the compat rpm installed because of other software that requires it.

> Do you have any stored procedures in C? If so, do you perhaps use
> malloc/free instead of the ones Postgres provides (reasoning you may be
> freeing a reference to the toast table somehow)?

No stored procedures in C; just SQL and PL/pgSQL stored procedures are used. Well, pg_buffercache of course is in C as far as I know... maybe I should check taking it out and see what happens...
> > Is that data-file on a mirror where one part of the mirror may be > mirroring a bad sector over the good one on the other drive(s)? > No, this is a small, simple server with just one disk for OS and one for data. > > I may be talking nonsense, I'm no Tom Lane, but I know a fair share > about postgres ;) Thank you for your help. > Regards, > > Alban Hertroys > > -- > If you can't see the forest for the trees, > cut the trees and you'll see there is no forest. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] ERROR: could not open relation with OID 2836
Tom Lane wrote:
> Rodrigo Gonzalez <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> No, it's clear that things are already broken before pg_dump started.
>>> You need to show us how to get to this state from a fresh database.
>
>> Interesting... a new problem maybe, or maybe the same one
>> ...
>> ERROR: relation "pg_toast_1255" does not exist
>
> Looks like almost the same problem: pg_proc's TOAST table is missing.
> But this behavior implies that the pg_class row for it is missing,
> whereas the other error suggested that the system catalog entries were
> fine but the on-disk file was missing. Odd.
>
> When you say "restored from backup", are you talking about a pg_dump
> backup, or what?
>
> regards, tom lane

yes, a pg_dump backup.

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Unreferenced temp tables disables vacuum to update xid
Hi all, I just want to report that we had here almost exactly the same problem as reported here: http://archives.postgresql.org/pgsql-hackers/2008-01/msg00134.php

The whole scenario played out the same way: the production DB refused to work, we restarted in single-user mode, ran vacuum (a few hours), postgres kept complaining and refused to start; after more web searching we found leaked temporary tables (this time they were a few proper temporary tables created by our application, no toast table), dropped them, problem solved. Net effect: ~5 hours of downtime affecting a few hundred of our customers...

Can this scenario be included in the doc page regarding routine vacuuming: http://www.postgresql.org/docs/8.3/static/routine-vacuuming.html ? If it had been there, it would have saved us all this downtime; I could have simply dropped the leaked temp tables in the first place...

In general, I found very little information in the docs about the way temporary tables work in postgres. There are a few gotchas about temporary tables; a special page discussing temp tables would be nice to have, linked from all other places which currently discuss different aspects of this topic. One thing I still don't understand is what vacuuming a temporary table from a different session actually does: it worked for me in the sense that it did not throw any error, but it did nothing to the temp table as far as I can tell... is there a way to vacuum/analyze temporary tables from another session? The docs definitely don't say anything about this topic... I would think it can't work if the table in fact lives in private memory of its session, but it would be nice if the docs stated these things clearly... In fact I could attempt to write that page, but I'd need guidance.

Cheers, Csaba.

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
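[Editor's note: a catalog query along these lines can spot leaked temp tables before they block anti-wraparound vacuuming — a hedged sketch assuming 8.3-era catalogs; thresholds and ordering are illustrative.]

```sql
-- Temp tables live in per-backend pg_temp_N schemas. Ones with a very
-- old relfrozenxid whose owning backend is gone are candidates for DROP,
-- since autovacuum cannot advance their xid horizon.
SELECT n.nspname, c.relname, age(c.relfrozenxid) AS xid_age
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname LIKE 'pg_temp%'
  AND c.relkind = 'r'
ORDER BY xid_age DESC;
```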
[GENERAL] freebsd + postgresql 8.3.3 = 100% of cpu usage on stats collector?
hi

first of all - i know next to nothing about bsd. friend asked me to setup replication on their machines, and i noticed that one of the cpus (2 quad xeons) is used at 100%:

pgsql 58241 99.0 0.2 22456 7432 ?? Rs Thu10AM 1530:35.93 postgres: stats collector process (postgres)

what might be the reason? here are the settings:

              name               | setting
---------------------------------+---------------------------------------
 add_missing_from                | off
 allow_system_table_mods         | off
 archive_command                 | (disabled)
 archive_mode                    | off
 archive_timeout                 | 0
 array_nulls                     | on
 authentication_timeout          | 60
 autovacuum                      | on
 autovacuum_analyze_scale_factor | 0.1
 autovacuum_analyze_threshold    | 50
 autovacuum_freeze_max_age       | 2
 autovacuum_max_workers          | 3
 autovacuum_naptime              | 60
 autovacuum_vacuum_cost_delay    | 20
 autovacuum_vacuum_cost_limit    | -1
 autovacuum_vacuum_scale_factor  | 0.2
 autovacuum_vacuum_threshold     | 50
 backslash_quote                 | safe_encoding
 bgwriter_delay                  | 200
 bgwriter_lru_maxpages           | 100
 bgwriter_lru_multiplier         | 2
 block_size                      | 8192
 bonjour_name                    |
 check_function_bodies           | on
 checkpoint_completion_target    | 0.5
 checkpoint_segments             | 3
 checkpoint_timeout              | 300
 checkpoint_warning              | 30
 client_encoding                 | UTF8
 client_min_messages             | notice
 commit_delay                    | 0
 commit_siblings                 | 5
 config_file                     | /usr/local/pgsql/data/postgresql.conf
 constraint_exclusion            | off
 cpu_index_tuple_cost            | 0.005
 cpu_operator_cost               | 0.0025
 cpu_tuple_cost                  | 0.01
 custom_variable_classes         |
 data_directory                  | /usr/local/pgsql/data
 DateStyle                       | ISO, YMD
 db_user_namespace               | off
 deadlock_timeout                | 1000
 debug_assertions                | off
 debug_pretty_print              | off
 debug_print_parse               | off
 debug_print_plan                | off
 debug_print_rewritten           | off
 default_statistics_target       | 10
 default_tablespace              |
 default_text_search_config      | pg_catalog.simple
 default_transaction_isolation   | read committed
 default_transaction_read_only   | off
 default_with_oids               | off
 dynamic_library_path            | $libdir
 effective_cache_size            | 16384
 enable_bitmapscan               | on
 enable_hashagg                  | on
 enable_hashjoin                 | on
 enable_indexscan                | on
 enable_mergejoin                | on
 enable_nestloop                 | on
 enable_seqscan                  | on
 enable_sort                     | on
 enable_tidscan                  | on
 escape_string_warning           | on
 explain_pretty_print            | on
 external_pid_file               |
 extra_float_digits              | 0
 from_collapse_limit             | 8
 fsync                           | on
 full_page_writes                | on
 geqo                            | on
 geqo_effort                     | 5
 geqo_generations                | 0
 geqo_pool_size                  | 0
 geqo_selection_bias             | 2
 geqo_threshold                  | 12
 gin_fuzzy_search_limit          | 0
 hba_file                        | /usr/local/pgsql/data/pg_hba.conf
 ident_file                      | /usr/local/pgsql/data/pg_ident.conf
 ignore_system_indexes           | off
 integer_datetimes               | off
 join_collapse_limit             | 8
 krb_caseins_users               | off
 krb_realm                       |
 krb_server_hostname             |
 krb_server_keyfile              |
 krb_srvname                     | postgres
 lc_collate                      | C
 lc_ctype                        | pl_PL.UTF-8
 lc_messages                     | pl_PL.UTF-8
 lc_monetary                     | pl_PL.UTF-8
 lc_numeric                      | pl_PL.UTF-8
 lc_time                         | pl_PL.UTF-8
 listen_addresses                | 192.168.0.160
 local_preload_libraries         |
 log_autovacuum_min_duration     | -1
 log_checkpoints                 | off
 log_connections                 | off
 log_destination                 | syslog
 log_directory                   | pg_log
 log_disconnections              | off
 log_duration                    | off
 log_error_verbosity             | default
 log_executor_stats              | off
 log_filename                    | postgresql-%Y-%m-%d_%H%M%S.log
 log_hostname                    | off
 log_line_prefix                 |
 log_lock_waits                  | off
 log_min_duration_statement      | -1
 log_min_error_statement         | error
 log_min_messages
Re: [GENERAL] Problem with FOUND
2008/6/27 A B <[EMAIL PROTECTED]>:
>> I think you'd be well advised to rethink your table layout so you don't
>> need so much dynamic SQL. The above is going to suck on both
>> performance and readability grounds, and it doesn't look like it's
>> accomplishing anything you couldn't do by combining all the Rating
>> tables into one table with an extra key column.
>
> Yes, it sucks, but I have to live with it right now (I've also removed
> a lot of code from the function to make it more readable for you).
> There are a lot of other parameters and execute commands :-(
> Since I don't run >=8.2 I can't use FOR-EXECUTE-UPDATE-RETURNING.
> So I will have to find another way.
>
> But if UPDATE sets FOUND, what is the reason for EXECUTE not to set
> FOUND if the query executed is an UPDATE?
> Is it because it is impossible to tell in advance what kind of query
> an EXECUTE statement will actually execute?

Compatibility with Oracle's PL/SQL; internally there isn't a reason for it :( Try GET DIAGNOSTICS:

postgres=# create table foox(a integer);
CREATE TABLE
postgres=# insert into foox values(10);
INSERT 0 1
postgres=# create function gg(v integer) returns void as $$
declare r integer;
begin
  execute 'update foox set a = ' || v || ' where a = ' || v;
  get diagnostics r = row_count;
  raise notice '%', r;
end;
$$ language plpgsql;
CREATE FUNCTION
postgres=# select gg(11);
NOTICE:  0
 gg
(1 row)

postgres=# select gg(10);
NOTICE:  1
 gg

Regards
Pavel Stehule

> -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Problem with FOUND
> I think you'd be well advised to rethink your table layout so you don't
> need so much dynamic SQL. The above is going to suck on both
> performance and readability grounds, and it doesn't look like it's
> accomplishing anything you couldn't do by combining all the Rating
> tables into one table with an extra key column.

Yes, it sucks, but I have to live with it right now (I've also removed a lot of code from the function to make it more readable for you). There are a lot of other parameters and execute commands :-( Since I don't run >=8.2 I can't use FOR-EXECUTE-UPDATE-RETURNING. So I will have to find another way.

But if UPDATE sets FOUND, what is the reason for EXECUTE not to set FOUND if the query executed is an UPDATE? Is it because it is impossible to tell in advance what kind of query an EXECUTE statement will actually execute?

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] what are rules for?
> Date: Thu, 26 Jun 2008 12:47:04 -0500 > From: [EMAIL PROTECTED] > To: [EMAIL PROTECTED] > Subject: Re: what are rules for? > CC: pgsql-general@postgresql.org; [EMAIL PROTECTED]; [EMAIL PROTECTED]; > [EMAIL PROTECTED] > > On Thu, Jun 26, 2008 at 12:11 PM, Dean Rasheed wrote: >> This can almost be implemented in PostgreSQL right now, using a rule of >> the form "... do instead select trigger_fn()" - except, as you point out, the >> caller won't know how many rows were actually updated. As far as the >> top-level query knows, it didn't update anything, which will break some >> (most?) clients. Apart from that, this does actually work! > > Yeah, I actually thought of that. But as you point out, many clients > would get confused. Someone pointed out in an earlier thread that a > way to fix this, for updates on a multi-table view (where most of the > complication lies), is to write a "trigger" function that updates all > the constituent tables except for one, and then write a rule that > calls that function and then updates the one remaining table itself. > This seems to work okay although I have not tested it with many > clients. > > Mike Yes that would seem to work. For UPDATE anyway. Although if it were purely DML that you were doing, you would probably be better off just having multiple UPDATE statements in the rule body. Then they would stand a better chance of being rewritten and executed more efficiently. The problem is that the rule system has a lot of subtle pitfalls waiting to trip you up. Suppose for example that your view did an inner join on the PK of 2 tables, and you tried to use that trick to implement a DELETE "trigger" to delete from both. After the first deletion, no rows in the view would match and the second delete wouldn't happen. OK, so there's an easy fix to this, but it is easy to overlook. In my case, I wanted to invoke a function after the delete, which did some complex logic relying on the tables being in their final state. 
So I really needed an "after delete" trigger, and this didn't seem possible with the rule system. As the documentation points out, there are some things that can't be done with rules (and also with triggers). Each has its own pros and cons in different situations. So I for one would love to see both available for views. I've used Oracle's "instead of" triggers, and they work really well*, but maybe there is some even better way of implementing triggers on views. Dean. * Better in fact than their before and after triggers on tables, which in Oracle are much more prone to mutating table errors. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general