[SQL] Please, HELP! Why is the query plan so wrong???
Hi, everybody!

Here is the problem:

test=# create table fb (a int, b int, c datetime);
CREATE
test=# create table fbr (a int, c datetime, d int);
CREATE
test=# create unique index fb_idx on fb(b);
CREATE
test=# create index fbr_idx on fbr(a,c) where d is null;
CREATE
test=# set enable_seqscan=off;
SET VARIABLE
rapidb=# explain select * from fb, fbr where fb.b=0 and fb.a=fbr.a and fb.c=fbr.c and fbr.d is null;
NOTICE:  QUERY PLAN:

Hash Join  (cost=10005.82..11015.87 rows=1 width=32)
  ->  Seq Scan on fbr  (cost=1.00..11010.00 rows=5 width=16)
  ->  Hash  (cost=5.81..5.81 rows=1 width=16)
        ->  Index Scan using fb_idx on fb  (cost=0.00..5.81 rows=1 width=16)

Could someone PLEASE explain to me why it doesn't want to use the index on fbr?

If I get rid of the join, then it works:

test=# explain select * from fbr where a=1 and c=now() and d is null;
NOTICE:  QUERY PLAN:

Index Scan using fbr_idx on fbr  (cost=0.00..5.82 rows=1 width=16)

What's the catch???

Any help would be greatly appreciated!

Thanks!

Dima
Re: [SQL] Please, HELP! Why is the query plan so wrong???
Jie Liang wrote:
> I believe that SQL will use the index of the join 'key' when you join the tables,
> if you have any; in your query (a,c) is the join key but d is not.
>
> Jie Liang

Not really... I tried this:

explain select * from fb join fbr on (fb.a=fbr.a and fb.c=fbr.c and fbr.d is null) where fb.b=0;

It results in the same query plan (seq scan on fbr).

Dima
It is a bug in pred_test()! (Was: [SQL] Please, HELP! Why is the query plan so wrong???)
Jie Liang wrote:
> please copy and paste the whole msg and your query!
> Note: what I mean by 'join key' is the fields that link the two tables.

The message (query plan) is exactly the same (you can see it in my original message earlier in this thread).

> I don't think fb.b=0 is a join key!

Of course not. But it IS using the index on fb. It is fbr that is the problem (and the join key is explicitly specified in the join ... on (...) clause).

Actually, it looks like the problem is caused by the predicate on the index: if instead of

create index fbr_idx on fbr(a,c) where d is null;

I do just:

create index fbr_idx on fbr(a,c,d);

then this index is used by the query...

It now looks like a bug in the query planner to me - it seems that it just doesn't consider indices with predicates for join plans... I was looking at the source code, and it looks like pred_test() is responsible for that. After debugging it a little, I found out that, if I list the tables in the query in the opposite order:

explain select * from fbr, fb where fb.b=0 and fb.a=fbr.a and fb.c=fbr.c and fbr.d is null;

then it works (both indexes are used!).

Here is what is causing it: create_index_paths() calls pred_test() to check whether it is OK to use a partial index or not. pred_test() evaluates the index's predicate list against the restriction clauses of the query. In my case, the predicate list is "d is NULL", so the goal is to find the equivalent clause in the query. pred_test() does that by iterating through the query's clauses and comparing them to the predicate with the equal() function. equal() calls _equalNullTest(), which in turn calls _equalVar(), which looks at the varno parameter in the NullTest's argument. Now, the value of varno in the predicate is (of course) 1 (I believe it always is, because we don't have multi-table indexes), but the varno in the clause is 2 (it is the index of the table in the join list) if fbr is listed second - and 1 if it is listed first - so in the former case it does not work, and in the latter it does.

Knowing all this doesn't help much, unfortunately, because if you needed to join two (or more) tables that have indexes with predicates, then whatever order you put them in would not help (pred_test() will only succeed for the first table in the join list) :-(

Perhaps somebody familiar with this code could come up with a patch for this problem? That would be really great!

Dima
Re: [SQL] Table Copy.
What about

CREATE TABLE one (
    fileda INTEGER,
    filedb INTEGER,
    filedc INTEGER );

CREATE VIEW two AS SELECT * FROM one;

?

Dima

PostgreSQL Server wrote:
> HI!
>
> I'm new to postgres. I need to have a table as a copy of another one.
>
> Example:
>
> CREATE TABLE one (
>     fileda INTEGER,
>     filedb INTEGER,
>     filedc INTEGER );
>
> CREATE TABLE two (
>     fileda INTEGER,
>     filedb INTEGER,
>     filedc INTEGER );
>
> On an insert into table one I should get the same insert on table two.
> On a delete from table one I should get the same delete on table two.
> On an update to table one I should get the same update on table two.
>
> Can someone provide examples I can study?
>
> Thanks in advance.
>
> Alex
Re: [SQL] database abstraction -> functions
Jeroen Olthof wrote:
> Hi,
>
> When developing applications it is a good thing to create abstraction between the
> different layers. The one concerning the database would be the persistence layer.
> To create such abstraction I want all my database activity run through functions.
> But how can I return a set of rows instead of a single datatype (I looked into
> SETOF but never found a clear, simple example)?

A very simple answer - there is no way to do what you want :-(
They promise to have it in 7.3...

I am using a workaround, where my functions glue all the columns together into a pipe-separated line, that I then parse back into columns in my application...

Also, note that, unless your functions are in C (anything like plpgsql, or just sql), they are slow like hell... In that case, you are much better off implementing your abstraction layer through views...

Dima
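For what it's worth, a minimal sketch of the pipe-separated workaround described above. The table and column names (people, name, email, dept_id) are made up for illustration - the real function would of course use your own schema:

-- glue the columns of each matching row into one pipe-separated text value
CREATE FUNCTION get_people(int) RETURNS SETOF text AS '
    SELECT name || ''|'' || coalesce(email, '''') || ''|'' || dept_id::text
    FROM people
    WHERE dept_id = $1;
' LANGUAGE 'sql';

-- usage:  SELECT get_people(1);
-- the application then splits each returned line on '|' back into columns.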
Re: [SQL] Is there a better way than this to get the start and end of a month?
Well, month_end could be more straightforward as something like

select month_start ($1) + interval '1 day' - interval '1 month';

Dima

David Stanaway wrote:
> Here are the 2 functions I have at the moment. I was wondering if
> someone had a better way?
>
> CREATE OR REPLACE FUNCTION month_start (date)
> RETURNS date
> AS '
>    DECLARE
>       day ALIAS FOR $1;
>    BEGIN
>       RETURN day - (extract(''day'' FROM day)||'' days'')::interval +
>              ''1 day''::interval;
>    END;
> '
> LANGUAGE 'plpgsql';
>
> CREATE OR REPLACE FUNCTION month_end (date)
> RETURNS date
> AS '
>    DECLARE
>       day ALIAS FOR $1;
>       month int;
>       year int;
>    BEGIN
>       month := extract(''month'' FROM day);
>       year := extract(''year'' FROM day);
>       IF month = 12 THEN
>          month := 1;
>          year := year + 1;
>       ELSE
>          month := month + 1;
>       END IF;
>       RETURN (''01-''||month||''-''||year)::date -
>              ''1 day''::interval;
>    END;
> '
> LANGUAGE 'plpgsql';
Re: [SQL] Is there a better way than this to get the start and end of a month?
Sorry, the previous message was wrong... This is better:

create function month_start (date) returns date as
    'select date_trunc (''month'', $1)::date;'
language 'sql';

create function month_end (date) returns date as
    'select (month_start ($1) + interval ''1 month'' - interval ''1 day'')::date;'
language 'sql';

I hope, it helps...

Dima
Re: [SQL] start and end of the week
Does anyone know what the reason is not to put this logic into the date_trunc() function? It seems to work with pretty much *any* unit imaginable, *except* for 'week'...

Dima

Bruno Wolff III wrote:
> On Thu, Sep 26, 2002 at 11:55:48 -0400,
>   Jean-Luc Lachance <[EMAIL PROTECTED]> wrote:
>
>> How about:
>>
>> select now() - date_part( 'DOW', now()) as starts_on,
>>        now() - date_part( 'DOW', now()) + 6 as ends_on;
>
> That won't work in 7.3.
>
> The following works in both 7.2 and 7.3:
>
> area=> select current_date - extract(dow from current_date)::int as start_date,
> area->        current_date - extract(dow from current_date)::int + 6 as end_date;
>  start_date |  end_date
> ------------+------------
>  2002-09-22 | 2002-09-28
> (1 row)
>
> Extract returns double precision and so needs a cast to int to work.
Re: [SQL] Help tuning query
First of all, try replacing the username/foldername indexes on operator_messages with a single combined index on, say, (username, foldername)...

It is still not clear to me why it decides not to use one of those indexes you have (it would be less efficient than a combined index, but still better than a seq scan) - let's see if having a combined index helps... If it doesn't, we'll need to look deeper into what exactly it is that makes it choose a seqscan over an index...

I hope, it helps...

Dima

Kevin Traub wrote:
> All;
>
> Can anyone please help with the tuning of this query?
> With 77000 rows in the operator_messages table the query is taking almost
> 15 seconds to return. Preference would be under 5 seconds if possible.
> System load on a dual processor P3 with 1.5GB of memory remains under .4
> during the query.
> The query and explain are noted below, as well as a description of the tables;
> note both ANALYZE and VACUUM have been run numerous times.
> Any help would be appreciated.  -Kev
>
> virgin=# explain SELECT opr.msg_id, opr.status, opc.std_time
> virgin-# FROM op_msg_folder opc, operator_messages opr
> virgin-# WHERE opr.username = 'khp'
> virgin-# AND opr.foldername = 'inbox'
> virgin-# and opr.msg_id = opc.msg_id;
> NOTICE:  QUERY PLAN:
>
> Merge Join  (cost=25037.29..27675.47 rows=47958 width=54)
>   ->  Index Scan using opmf_i on op_msg_folder opc  (cost=0.00..1797.37 rows=48579 width=32)
>   ->  Sort  (cost=25037.29..25037.29 rows=47958 width=22)
>         ->  Seq Scan on operator_messages opr  (cost=0.00..20722.26 rows=47958 width=22)
>
> virgin=# \d operator_messages
>     Table "operator_messages"
>    Column   |     Type     | Modifiers
> ------------+--------------+-----------
>  msg_id     | numeric      |
>  username   | text         |
>  foldername | text         |
>  status     | character(1) |
> Indexes: op_msgs_i,
>          opr_msgs_foldername_i,
>          opr_msgs_username_i
>
> virgin=# \d op_msgs_i
>  Index "op_msgs_i"
>  Column |  Type
> --------+---------
>  msg_id | numeric
> btree
>
> virgin=# \d opr_msgs_foldername_i
> Index "opr_msgs_foldername_i"
>    Column   | Type
> ------------+------
>  foldername | text
> btree
>
> virgin=# \d opr_msgs_username_i
> Index "opr_msgs_username_i"
>   Column  | Type
> ----------+------
>  username | text
> btree
>
> virgin=# \d op_msg_folder
>       Table "op_msg_folder"
>      Column     |     Type     | Modifiers
> ----------------+--------------+-----------
>  msg_id         | numeric      |
>  status         | character(1) |
>  std_time       | text         |
>  julian_time    | text         |
>  smi            | character(3) |
>  description    | text         |
>  type           | text         |
>  flight         | text         |
>  tail           | text         |
>  dep_station    | text         |
>  dest_station   | text         |
>  op_description | text         |
> Unique keys: opmf_i
>
> virgin=# \d opmf_i
>  Index "opmf_i"
>  Column |  Type
> --------+---------
>  msg_id | numeric
> unique btree
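A sketch of the combined index suggested in the reply above (the index name is just an example):

create index opr_msgs_user_folder_i on operator_messages (username, foldername);

-- then re-run the EXPLAIN to see whether the planner picks it up
-- (an ANALYZE afterwards doesn't hurt either)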
Re: [SQL] Preventing DELETEs
I think this should work:

alter table mytable owner to postgres;
grant all on mytable to public;
revoke delete on mytable from public;

I hope, it helps...

Dima

Rajesh Kumar Mallah wrote:
> Hi,
>
> I have created a database and a table in it.
>
> I want to prevent DELETEs on the table in this database by everyone except
> the superuser postgres - even by me (the creator of this database and table).
>
> I have tried many ways (ALTER TABLE ... REVOKE DELETE ... etc. etc.)
> but I always end up still having the permission.
>
> Can anyone tell me how this can be accomplished?
>
> thanks in advance.
>
> regds
> mallah.
Re: [SQL] null foreign key column
Vernon Wu wrote:
> 12/02/2003 2:24:49 PM, Dmitry Tkach <[EMAIL PROTECTED]> wrote:
>
>> You don't want it to be serial - just make it 'person_id int'
>
> Any reasons?

Yeah... Two of them:

- It does not make sense for a serial column to reference other tables - the only purpose of serial is to generate unique keys, and in your case you do not want them generated, but rather copied from the entries in the referenced table.

- The other reason is that 'serial' implies 'not null' - that is why your insert statement fails.

Dima
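To illustrate, the table from the original example with person_id as a plain int (which can then be left null):

create table event (
   event_id          serial       not null,
   event_description char(255),
   person_id         int,
   primary key (event_id),
   foreign key (person_id) references person (person_id)
);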
Re: [SQL] Debugging postmaster to fix possible bug in Postgres? Followup
SQL92 says:

<cursor specification> ::= <query expression> [ <order by clause> ]
[...]
3) Let T be the table specified by the <query expression>.
4) If ORDER BY is specified, then each <sort specification> in the <order by clause> shall identify a column of T.

Then it looks like postgres behaviour is still not compliant, if I read it correctly, because

select x from mytable order by y;

should be invalid according to this, but works just fine in postgres.

Dima.

P.S. I think this is a great feature actually (I missed it a lot in Informix), so I hope you guys won't start discussing how to "fix" it :-)
Re: [SQL] null foreign key column
You don't want it to be serial - just make it 'person_id int'.

I hope, it helps...

Dima

Arunachalam Jaisankar wrote:
> Hi all,
>
> I would like to have a foreign key column in my table which allows null values.
> But the create table command below doesn't accept a null value for person_id.
> How can I do this in postgres?
>
> create table event (
>    event_id          serial    not null,
>    event_description char(255),
>    person_id         serial,
>    primary key (event_id),
>    foreign key (person_id) references person (person_id)
> );
>
> regards
> Jai
Re: [SQL] join/case
I think something like this should work:

select o.id, o.num_purch, o.program
from orders o
left join lists l on (l.order_id=o.id)
where (l.status is null or l.status != 'processing')
  and o.uid=1 and o.status != 'closed';

(l.status is null should take care of the case when there is no matching row for the join)...

I hope, it helps...

Dima

jtx wrote:
> Hi everyone, I'm trying to do a left join on two tables, mainly because data
> from table 'b' (lists) may or may not exist, and if it doesn't I want results.
> However, if data from table lists DOES exist, I want to run a conditional on
> it, and then return data based on whether the conditional is true or false.
> Basically, I have something like this:
>
> Select o.id,o.num_purch,o.program from orders o left join lists l on
> l.order_id=o.id where o.uid=1 and o.status!='closed'
>
> This query would return something like:
>
>  id | num_purch | program
> ----+-----------+---------
>   1 |       100 |       1
>   2 |       150 |       2
>
> However, I want to throw an extra conditional in there that says if
> l.status='processing', then don't return anything. So, I tried:
>
> Select o.id,o.num_purch,o.program from orders o left join lists l on
> l.order_id=o.id and l.status!='processing' where o.uid=1 and o.status!='closed'
>
> Well, that doesn't work, it returns all the data anyway. I'm guessing it's
> because l.status!='processing' is part of the left join. The trick here is
> that, like I said, there may be NO data from the lists table, and if not,
> return everything. If there is data in lists that has the order id in it,
> check that l.status!='processing'. If it matches, don't return it; if it
> doesn't, return it.
>
> Thanks for your help, and sorry if I don't make much sense - I tend to ramble :)
Re: [SQL] SQL problem: bank account
I am afraid this looks even uglier than your second solution, but it should work, and be quicker...

-- You need this to avoid having to rescan the whole table for each customer_id
-- every time and re-sort the results:
create index customer_txstamp_idx on bank_account (customer_id, ts);

select ba.* from bank_account ba
where transaction_id = (select transaction_id from bank_account
                        where customer_id = ba.customer_id
                        order by customer_id desc, ts desc limit 1);

Now, note that, if you have lots of different customers in that table, it will still take a while to fetch them all (although it should still be a lot quicker than half an hour) - in that case, you may consider either getting them one-by-one (by adding ... and customer_id=? to the above query) or using cursors...

I hope, it helps...

Dima

Erik G. Burrows wrote:
> It seems to me this is a simple problem, but the solution eludes me.
>
> I have a table:
>
> bank_account (
>   transaction_id int not null serial,
>   customer_id int not null references customer(id),
>   ts timestamp not null default now(),
>   amount float not null,
>   balance float not null,
>   primary key(transaction_id)
> )
>
> I need to get the most recent transaction for each customer. I need only
> the transaction ID, but the entire row would be best.
>
> I have two solutions, both of which are too slow for use in my interactive
> web-based interface:
>
> Solution 1: Outer left self join:
>
> SELECT ba1.* FROM bank_account ba1
>   LEFT OUTER JOIN bank_account ba2
>     ON ba1.customer_id = ba2.customer_id AND ba1.ts < ba2.ts
> WHERE ba2.ts IS NULL;
>
> This query works great on tables of less than a few thousand rows. For my
> 300k row table, it takes several hours.
>
> Solution 2: max-concat trick
>
> SELECT split_part(max(
>     extract(EPOCH from ts)::VARCHAR || '' || transaction_id::VARCHAR),
>     '', 2)::INT
> FROM bank_account
> GROUP BY customer_id
>
> This is an ugly and obviously inefficient solution, but it does the job in
> about 1/2 hour. Still too long though.
>
> I've been working on this problem for days, and consulting friends. No
> elegant, fast solution is presenting itself. As I said, I feel I'm not
> seeing the obvious solution in front of my face.
>
> In the mean-time I can use this query to do the job on a per-customer basis:
>
> select * from bank_account
>   where id = ... and ts = (select max(ts) from bank_account ba2
>                            where ba2.customer_id = bank_account.customer_id);
>
> However, doing this for all 40,000 customers is not workable as a manual
> process. My last resort is to do it this way to pre-generate the report,
> but I'd far rather do it real-time.
>
> Help! My brain hurts!
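To illustrate the "one-by-one" variant mentioned in the reply above - a sketch, with a made-up customer id (42) standing in for the parameter:

select ba.* from bank_account ba
where ba.customer_id = 42
  and transaction_id = (select transaction_id from bank_account
                        where customer_id = ba.customer_id
                        order by customer_id desc, ts desc limit 1);

-- with the (customer_id, ts) index in place, this only touches one customer's rows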
Re: [SQL] Using & - operator
Rado Petrik wrote:
> Hi,
>
> I have a table users:
>
>  id  name  bin
>  -------------
>  1   xx    9
>
> dec(1) & dec(9) = dec(1)
> bin(0001) & bin(1001) = bin(0001)
>
> This query is bad:
>
> SELECT name FROM users WHERE id_user=1 AND (bin & 1)
>
> This query returns "Warning: PostgreSQL query failed: ERROR: right-hand side
> of AND is type 'integer', not 'boolean' in"

I think you want:

... AND (bin & 1) = 1

bin & 1 by itself returns an *integer*, and you need a *boolean* expression for your criteria.

I hope, it helps...

Dima

> How do I write this query correctly?
>
> Thanks.
Re: [SQL] Changing owner of function -- best method?
Josh Berkus wrote:
> Folks,
>
> I have several functions in a 7.2.4 database that I accidentally REPLACEd
> while logged in as the superuser, instead of the db owner. As a result, the
> db owner can no longer modify those functions -- they belong to the superuser.
>
> As this is a production database, I can't drop the functions and re-create
> them as a different user ... the interruption in service would not be
> acceptable.
>
> I was wondering whether there were any problems with either of the following
> methods:
> A) Updating the pg_proc table to change the function owner;
> B) Making the db_owner a superuser, replacing the functions, and then making
>    the db_owner a non-superuser again.

I believe either of the two should work. I have done it many times in the past...

Dima
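For the record, a sketch of option (A). The function name ('my_function') and user name ('db_owner') are placeholders, it has to be run as a superuser, and if the function is overloaded you would also need to match on the argument types:

update pg_proc
   set proowner = (select usesysid from pg_user where usename = 'db_owner')
 where proname = 'my_function';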
Re: [SQL] trigger error
You must have dropped and recreated the function after your trigger was created... You need to recreate the trigger now, so that it picks up the new function id.

In the future, use CREATE OR REPLACE to modify a function instead of DROP and CREATE - this will make sure the modified function keeps its id.

I hope, it helps...

Dima

Yudie wrote:
> Hi,
>
> I'm trying to create a trigger with a plpgsql trigger function, and I get this
> error message when the trigger is executed:
>
> Number: -2147467259
> Error while executing the query;
> ERROR: fmgr_info: function 1546856080: cache lookup failed
>
> Here is the function code:
>
> CREATE FUNCTION TRIGGER_UPDATE_AXPRDT_STATUS() RETURNS OPAQUE AS '
> BEGIN
>    IF NEW.ONHAND = 0 THEN
>       NEW.STATUS = ''D'';
>    END IF;
>    RETURN NEW;
> END;' LANGUAGE 'plpgsql';
>
> CREATE TRIGGER TRIGGER_AXPRDT_FUTURESTATUS BEFORE UPDATE ON AXPRDT
>    FOR EACH ROW EXECUTE PROCEDURE TRIGGER_UPDATE_AXPRDT_STATUS();
>
> Please Help..
> Yudie
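Concretely, something along these lines should do it (a sketch based on the names in the quoted message):

-- re-create the trigger so it binds to the current function id
DROP TRIGGER trigger_axprdt_futurestatus ON axprdt;
CREATE TRIGGER trigger_axprdt_futurestatus BEFORE UPDATE ON axprdt
    FOR EACH ROW EXECUTE PROCEDURE trigger_update_axprdt_status();

-- and next time, modify the function in place instead of dropping it:
CREATE OR REPLACE FUNCTION trigger_update_axprdt_status() RETURNS opaque AS '
BEGIN
    IF NEW.onhand = 0 THEN
        NEW.status = ''D'';
    END IF;
    RETURN NEW;
END;' LANGUAGE 'plpgsql';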
Re: [SQL] Datatype conversion help
What about lpad?

select lpad (7, 2, 0) || '-' || lpad (9, 2, '0') || '-2003';
  ?column?
------------
 07-09-2003
(1 row)

I hope, it helps...

Dima

Yasir Malik wrote:
> Thank you so much! But my problem is that when I do
>
>   to_char(mn, '00') || '-' || to_char(dy, '00') || '-' || to_char(yr, '')
>
> where mn, dy, and yr are ints, the output has a space after the dash.
> For example, I get
>   07- 25- 1994
> instead of what I want:
>   07-25-1994
>
> Thanks,
> Yasir
>
> On Tue, 8 Jul 2003, Richard Rowell wrote:
>> On Tue, 2003-07-08 at 15:07, Yasir Malik wrote:
>>> I've tried to_char(in_val, '99'), and that returns a string that is two
>> select to_char(9,'00');
Re: [SQL] Datatype conversion help
Yasir Malik wrote:
> I used trim, and here's what I came up with:
>
>   to_date(trim(to_char(yr, '') || trim(to_char(mn, '00')) || trim(to_char(dy, '00'))), 'MMDD')
>
> Apparently to_char adds a space to the character you are casting.

I know :-)  And lpad doesn't - that's why I suggested it :-)

Dima
[SQL] Count dates distinct within an interval
Hi, everybody!

I was trying to formulate a sql query that I now think is impossible :-( I just thought I'd run it by you before I give up - perhaps you guys will have some ideas...

Suppose I have a table like this:

create table test (
    stuff int,
    stamp timestamp
);

Now, I want to count the occurrences of each value of stuff in the table, but so that entries within 24 hours of each other count as one... The closest I could think of is:

select stuff, count (distinct date_trunc ('day', stamp)) from test group by stuff;

This doesn't do exactly what I need though - for example, if one entry is one minute before midnight, and the other one is two minutes later, they'd get counted as 2, and what I am looking for is a way to get them collapsed into one as long as they are less than 24 hours apart...

Now, I am pretty sure it is simply impossible to do what I want with count (distinct ...), because my 'equality' is not transitive - for example, three entries like

A = 2001-01-01 20:20:00
B = 2001-01-02 20:19:00
C = 2001-01-02 20:21:00

should be counted as *two* (A === B, and B === C, but *not* A === C)...

Also, I could certainly write a simple function that would get all the entries in order and scan through them, counting according to my rules... But I was hoping to find some way to do this in plain sql...

Any ideas?

Thanks!

Dima
Re: [SQL] Cannot insert dup id in pk
You must have your sequence out of date with the content of the table (so that the next value in the sequence has already been inserted).

One way to get into a situation like that is loading the table data with COPY (the input contains the pks, and the COPY command does not update the sequence - you have to do that manually after the copy is done). Another way is simply inserting a row with an explicitly specified pkey:

insert into fdata (fid, ...) values (100, ...);

Now, assuming that your current sequence value is less than 100, and that the statement above succeeds (i.e., there is no fid=100 in the table yet), you'll get your sequence out of date. You'll still be able to use it, and insert rows into the table, *until* the current value reaches 100 - once that happens, an attempt to insert with the default fid will cause an error, because the sequence will generate a key that already exists.

To fix this, you need to do something like:

select setval ('fdata_fid_seq', (select max(fid) from fdata));

This will make sure that the next value your sequence generates is greater than any key that already exists in the table.

I hope, it helps...

Dima

Scott Cain wrote:
> Hello,
>
> I sent this question yesterday morning, but it was not allowed because I
> wasn't subscribed to the list. If it did make it through, I apologize for
> the dup.
>
> I am having strange behavior when I try to do an insert. Postgres tells me
> that it cannot insert a duplicate key into a primary key index, when I am
> not trying to insert into it. Can anyone shed light on why this is
> happening, or point out the error of my ways?
>
> Here are the details:
>
> wormbase=> \d fdata
>                            Table "public.fdata"
>     Column     |          Type          |                 Modifiers
> ---------------+------------------------+--------------------------------------------------------
>  fid           | integer                | not null default nextval('public.fdata_fid_seq'::text)
>  fref          | character varying(100) | not null default ''
>  fstart        | integer                | not null default '0'
>  fstop         | integer                | not null default '0'
>  fbin          | double precision       | not null default '0.00'
>  ftypeid       | integer                | not null default '0'
>  fscore        | double precision       |
>  fstrand       | character varying(3)   |
>  fphase        | character varying(3)   |
>  gid           | integer                | not null default '0'
>  ftarget_start | integer                |
>  ftarget_stop  | integer                |
> Indexes: pk_fdata primary key btree (fid),
>          fdata_fref_idx btree (fref, fbin, fstart, fstop, ftypeid, gid),
>          fdata_ftypeid_idx btree (ftypeid),
>          fdata_gid_idx btree (gid)
> Check constraints: "chk_fdata_fstrand" ((fstrand = '+'::character varying) OR (fstrand = '-'::character varying))
>                    "chk_fdata_fphase" (((fphase = '0'::character varying) OR (fphase = '1'::character varying)) OR (fphase = '2'::character varying))
>
> Now a chunk from my query log:
>
> Jul 14 12:48:47 localhost postgres[2998]: [107-1] LOG:  query: INSERT INTO fdata (fref,fstart,fstop,fbin,ftypeid,fscore,fstrand,fphase,gid,ftarget_start,ftarget_stop)
> Jul 14 12:48:47 localhost postgres[2998]: [107-2] VALUES('XX',7327656,7327658,'1000.007327','41',NULL,'+',NULL,'12358',NULL,NULL)
> Jul 14 12:48:47 localhost postgres[2998]: [108] ERROR:  Cannot insert a duplicate key into unique index pk_fdata
>
> Note that I do not try to insert anything into fid, the primary key on this
> table. Why does Postgres think I am?
>
> Thanks much,
> Scott
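A quick way to confirm that this is what happened (assuming the default sequence name from the table definition above, fdata_fid_seq):

select last_value from fdata_fid_seq;
select max(fid) from fdata;
-- if max(fid) is ahead of last_value, nextval() will eventually hand out a key
-- that already exists in the table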
Re: [SQL] Count dates distinct within an interval
> You need to elaborate on your logic some more, and state exactly what you
> would want in the A,B,C case above. Does B get lumped with A or with C? It
> is within 24 hours of both, after all. Does C not get lumped in with B
> simply because B has already been lumped in with A?

Yes. The first (earliest) entry is counted, then all the later ones are skipped as long as they are within 24 hours of the last one that was counted. In this case, A is counted, B is skipped because it is within 24 hours of A, then C is counted because it is more than 24 hours from A.

Thanks!

Dima
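For the archives, a rough sketch of the "simple function" approach mentioned in the original message - it counts, for one given value of stuff, only the entries that are more than 24 hours after the last counted one. This is just an illustration (plpgsql, untested), and the function name is made up:

create function count_collapsed_24h (int) returns int as '
declare
    cnt        int := 0;
    last_cnted timestamp := null;
    rec        record;
begin
    for rec in select stamp from test where stuff = $1 order by stamp loop
        -- count this entry only if it is more than 24 hours after
        -- the last entry that was counted
        if last_cnted is null or rec.stamp > last_cnted + interval ''24 hours'' then
            cnt := cnt + 1;
            last_cnted := rec.stamp;
        end if;
    end loop;
    return cnt;
end;
' language 'plpgsql';

-- usage:  select stuff, count_collapsed_24h(stuff) from test group by stuff;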
Re: [SQL] Non-Blocking Locks (i.e. Oracle NOWAIT)
There is no such thing, as far as I know :-(

Here is the poor man's solution I used to emulate this 'nowait' behaviour:

create table master_lock (
    projectid text primary key,
    locker int
);

Now, the application first acquires an exclusive lock on the table; then, while the table is locked, it looks at the projectid row and tests the 'locker' column. If it is null, the app sets it to its connection id (the pid of the backend) and COMMITs (to release the table lock), then starts another transaction and goes about its business; after it is done, it updates master_lock and sets the locker back to null.

If the locker column is not null, I use pg_stat_activity to test whether the connection with this pid still exists (in case the app that locked this project crashed before releasing the lock)... If the pid isn't there, it considers the project unlocked, and does what's described above. Otherwise, it releases the lock on the table and generates an error message, reporting that the project is locked.

Also see the earlier reply to your message, about STATEMENT_TIMEOUT as an alternative... I am using 7.2, so this wasn't an option for me when I came up with this workaround. I doubt I'd use that timeout thing if I was on 7.3 anyway, though - one problem is, you'd have to parse the actual error message to figure out whether the cause of the error is really a timeout, or just something bad happening in the database... Another problem is that the timeout number is arbitrary - if it is too small, you risk a situation where the row is not locked, but the query still gets canceled because the database is slow at the moment; if it is too large, you'll have to wait for a long time before getting the response (and even then, you can't be 100% sure it really happened because of the lock).

I hope, it helps...

Dima

Jan Bernhardt wrote:
> Hi there,
>
> though this question has been asked several times before (but never really
> answered), I have to give it another try.
>
> I have a multi-user application which synchronizes access to certain datasets
> via the database itself. If a user has a project open in that application, no
> other user should be able to work on it too. When developing the application
> I considered the database to be the place to handle the synchronization, since
> transactions are (normally) an integral part of a database system. When a user
> opens a project, the application first locks a corresponding row. But since I
> don't want the application to block if that row is already locked, I use the
> Oracle "NOWAIT" feature (SELECT ... FOR UPDATE NOWAIT), which tries to acquire
> the lock on the specified row and, if that row is already locked, returns with
> an error without blocking, so I can tell the user that the project is already
> in use.
>
> Now that the application is to be ported to PG I need similar functionality.
> Is there such an animal? If not, what would you recommend?
>
> TIA,
> - Jan.
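A sketch of the locking protocol described in the reply above, spelled out as SQL. The project name and the pid are placeholders; pg_stat_activity is the view mentioned in the text:

begin;
lock table master_lock in exclusive mode;
select locker from master_lock where projectid = 'myproject';
-- if locker is null, or its pid no longer shows up in pg_stat_activity,
-- claim the project with our own backend pid and release the table lock:
update master_lock set locker = 12345 where projectid = 'myproject';
commit;

-- ... work on the project in ordinary transactions ...

update master_lock set locker = null where projectid = 'myproject';

-- if locker was non-null and that backend is still alive, just roll back
-- and report "project is locked" to the user instead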
Re: [SQL] numerical sort on mixed alpha/numeric data
Gary Stainburn wrote:
> Hi folks,
>
> I've got a table holding loco numbers and an id which references the locos
> table. How can I sort this table so that numeric values appear first in
> numerical order, followed by alpha values in alpha order?

What about

select lnid, lnumber, lncurrent
from (select *,
             case when lnumber ~ '^[0-9]+$' then lnumber::int else null end as number
      from lnumbers) as t
order by number, lnumber;

I hope, it helps...

Dima

> nymr=# \d lnumbers
>          Table "lnumbers"
>   Column   |         Type          | Modifiers
> -----------+-----------------------+-----------
>  lnid      | integer               | not null
>  lnumber   | character varying(10) | not null
>  lncurrent | boolean               |
> Primary key: lnumbers_pkey
> Triggers: RI_ConstraintTrigger_7121182
>
> nymr=# select * from lnumbers order by lnumber;
>  lnid | lnumber | lncurrent
> ------+---------+-----------
>    26 | 08556   | t
>    13 | 08850   | f
>     2 | 2392    | f
>    15 | 24 061  | t
>    12 | 25 278  | f
>     1 | 29      | t
>     5 | 30926   | t
>     3 | 4277    | t
>     7 | 44767   | t
>    21 | 45157   | t
>    13 | 4518    | t
>     6 | 45212   | t
>    16 | 45337   | t
>    23 | 4771    | f
>    19 | 5       | t
>    24 | 55019   | t
>    27 | 59      | f
>    11 | 60007   | t
>     8 | 60532   | t
>    23 | 60800   | t
>    14 | 62005   | t
>    14 | 62012   | f
>    18 | 64360   | f
>     2 | 65894   | t
>    17 | 6619    | t
>    27 | 69023   | t
>     9 | 75014   | t
>    10 | 75029   | t
>    22 | 76079   | t
>     4 | 80135   | t
>    20 | 825     | t
>    18 | 901     | t
>     5 | 926     | f
>    26 | D3723   | f
>    15 | D5061   | t
>    12 | D7628   | t
>    25 | D9009   | t
>    24 | D9019   | f
> (38 rows)
Re: [SQL] parse error for function def
Terence Kearns wrote:
> CREATE FUNCTION base.fn_fkey_check(text, text, int4) RETURNS bool
> AS 'DECLARE
> BEGIN
>   RETURN (SELECT count($1) FROM $2 where $1 = $3)::bool;
> END;'
> LANGUAGE 'sql';
>
> produces this error
>
> ERROR:  parser: parse error at or near "RETURN" at character 20
>
> I'm trying to create a function to use on a trigger to check references to
> views, since pg does not support foreign keys referencing views.

First, you are using the wrong language. In 'sql' you do not need BEGIN, END or RETURN.

Second, you cannot have trigger functions in sql anyway, so you'd need to change your language to 'plpgsql' - it may then even compile, but I am not sure, because I never used that language.

Third, trigger functions are special in that they can only take constant strings as arguments, so your $1 = $3 is most probably not going to work. They also must return 'opaque' (in 7.2) or 'trigger' (in 7.3) - you can't return bool, because there is nobody who'd be able to look at the result after the function is called. Instead, you should check your condition, and if it is not satisfied, raise an error to abort the transaction.

Fourth, select count(...) may not be very efficient if you just need to check whether the key exists - you may be better off with select true ... limit 1.

And finally, you can (relatively easily) write a function that will check whether the key exists in the view whenever you insert/update the table... But what about the other way around - what if somebody deletes a key from the view's underlying table while there are still referencing entries in the other table? You can't have a trigger on a view, so there would be no way to check that...

Why not avoid all that by just creating an FK between the actual table(s) used by the view and the 'child' table you care about?

Dima
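To make the third point concrete, here is roughly what such a trigger function could look like. This is only a sketch: the view, table and column names (base.my_view, base.my_table, id, ref_id) are invented for illustration, the referenced names have to be hard-coded rather than passed as arguments, and on 7.2 you would write RETURNS opaque instead of RETURNS trigger:

CREATE FUNCTION fn_fkey_check() RETURNS trigger AS '
BEGIN
    -- make sure the referenced key exists in the view
    PERFORM true FROM base.my_view WHERE id = NEW.ref_id LIMIT 1;
    IF NOT FOUND THEN
        RAISE EXCEPTION ''ref_id % not found in base.my_view'', NEW.ref_id;
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER my_table_fkey_check BEFORE INSERT OR UPDATE ON base.my_table
    FOR EACH ROW EXECUTE PROCEDURE fn_fkey_check();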
Re: [SQL] unique value - trigger?
Gary Stainburn wrote:
> Hi folks,
>
> I'm back with my lnumbers table again.
>
> nymr=# \d lnumbers
>          Table "lnumbers"
>   Column   |         Type          | Modifiers
> -----------+-----------------------+-----------
>  lnid      | integer               | not null
>  lnumber   | character varying(10) | not null
>  lncurrent | boolean               |
> Primary key: lnumbers_pkey
> Triggers: RI_ConstraintTrigger_7575462
>
> While each loco can have a number of different numbers, only one can be
> current at any one time. I want to make it so that if I set lncurrent to true
> for one row, any existing true rows are set to false.
>
> I'm guessing that I need to create a trigger to be actioned after an insert
> or update which would "update set lncurrent=false where lnid not = <the new lnid>"

Why "not"? I thought you wanted just the opposite - to update the ones that *do* have the same lnid?

I'd also recommend adding 'and lncurrent' to the query - otherwise every insert would be updating *every* row with the same lnid (it doesn't check whether the row is actually current) before updating, and that may be expensive.

You may also want to create a partial index on lnumbers (lnid) where lncurrent, to speed up your trigger (see the sketch below).

> but I can't seem to sus it out.

What is the problem?

Dima
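The partial index mentioned above would look something like this (the index name is just an example):

create index lnumbers_current_idx on lnumbers (lnid) where lncurrent;

-- the trigger's UPDATE ... WHERE lnid = ... AND lncurrent can then use this index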
Re: [SQL] NOT and AND problem
I can't help you explain what is going on with this query - like you, I am puzzled by the fact that it actually works, and have no idea how it is interpreted and what it is doing...

The right way to do what you want, I think, would be something like:

delete from myTable
where not exists (select 1 from item where item_id = myTable.item_id)
   or not exists (select 1 from ep where group_id = myTable.group_id);

I replaced your AND with OR, because that's what you seem to be saying in the description of your problem...

I hope, it helps...

Dima

Richard Jones wrote:
> Dear All,
>
> I am having some confusion over a query which is supposed to achieve the
> following: to remove a record from a table if one or both of the columns
> containing references to other tables no longer point to table rows which
> still exist. There are good reasons why I cannot use foreign keys to maintain
> referential integrity, but I will not go into them; they give rise to the
> need to "clean up" my database table from time to time.
>
> The query that I have had most success with looks like this:
>
> DELETE FROM myTable
>   WHERE (NOT myTable.item_id = item.item_id)
>     AND (NOT myTable.group_id = ep.group_id);
>
> Which is odd, because logically it shouldn't work. What I find with the above
> queries is as follows:
>
> let myTable.item_id = item.item_id be A
> let myTable.group_id = ep.group_id be B
>
> The derived and actual truth tables for the results of the where clause follow:
>
> Derived:
>  A | B | Result
>  1 | 1 | 0
>  1 | 0 | 0
>  0 | 1 | 0
>  0 | 0 | 1
>
> Actual:
>  A | B | Result
>  1 | 1 | 0
>  1 | 0 | 0
>  0 | 1 | 1
>  0 | 0 | 1
>
> This makes no sense to me, as effectively rows 2 and 3 of the Actual results
> truth table are the same (unless there's some subtle difference with regards
> to the order of the statements, otherwise just substitute A for B and vice
> versa).
>
> The result that I actually want from the operation is this:
>
>  A | B | Result
>  1 | 1 | 0
>  1 | 0 | 1
>  0 | 1 | 1
>  0 | 0 | 1
>
> which would suggest a query like:
>
> DELETE FROM myTable
>   WHERE NOT (myTable.item_id = item.item_id AND myTable.group_id = ep.group_id);
>
> which ought to provide the above output. Instead, using this query, the output
> I get is as follows:
>
>  A | B | Result
>  1 | 1 | 1
>  1 | 0 | 1
>  0 | 1 | 1
>  0 | 0 | 1
>
> I can only conclude that Postgres is doing something with regards to the other
> two tables which I am unaware of. Can anyone help me understand what is going
> on? Any suggestions gratefully received.
>
> Cheers
> Richard
>
> Richard Jones
> Systems Developer
> Theses Alive! - www.thesesalive.ac.uk
> Edinburgh University Library
Re: [SQL] unique value - trigger?
> The problem is I don't know how to convert the following pseudo code to
> valid SQL:
>
> create trigger unique_current on insert/update to lnumbers
>    if new.lncurrent = true
>       update lnumbers set all other records for this loco to false

I see... The bad news is you can't do it directly... You can only create functions to run as triggers, not plain sql statements, for some reason :-(

The correct syntax is

create trigger unique_current before insert or update on lnumbers
    for each row execute procedure reset_current_lnum();

where reset_current_lnum() is a function that you have to write either in "C" or in 'plpgsql'.

I could give you some sample code in "C" to do that, but it's rather complicated if you are not used to writing postgres stored procs in C... plpgsql would be much easier, but I can't help you much there, because I don't know the syntax well... something like this, I guess, but I doubt this will compile exactly as it is:

create function reset_current_lnum () returns trigger as '
begin
    if new.lncurrent = true then
        update lnumbers set lncurrent = false where lnid = new.lnid and lncurrent;
    end if;
    return new;
end;' language 'plpgsql';

Dima
Re: [SQL] Table Partitioning and Rules
But what makes you think that it is quicker to scan 10 tables with 25 million rows each than it would be to scan one table with 250 million rows? It isn't... In fact, it will take *longer*. If you have a way to narrow down the number of tables to scan based on the condition, you can have that logic implemented with partial indices, as was suggested earlier in this thread...

Dima

Girish Bajaj wrote:
> The problem is that I'm worried about sequential scans. This particular table
> can have up to 150 cols and 250 million records. Now we have a reporting
> requirement that someone could select on ANY col and filter on any col as
> well. Meaning someone could do a SELECT on col number 1, 2, 310, 1000 from
> contact where col num 75='X' and col num 139='Y'.
>
> I can't possibly index all the cols in the table. So I thought I'd best manage
> the data by splitting up the table into multiple partitions and eventually,
> depending on application logic, only scan those tables that are necessary to
> scan sequentially, instead of the whole big table.
>
> I'm getting a little confused here, cause eventually I would want to join in
> this 250 million row gigantic table as well.. and that would be a real big
> problem causing loads of sequential scans, wouldn't it?
>
> Thanks,
> Girish
>
> -----Original Message-----
> From: Josh Berkus
>
> Girish,
>
>> Essentially I'm trying to store a person's information in a table in the
>> database. Since we could have millions of people, with duplicates! I've
>> decided we need to partition the table into segments where all people with
>> the LastName starting from A to G will be in one table. H-N will be in
>> another table and O-Z in the third. I've created a VIEW that does a UNION
>> on all the tables.
>
> This sounds hideously inefficient and a management headache besides. I think
> PostgreSQL will accept up to 2 billion rows in any one table, and splitting
> stuff into 3 tables will not improve your performance ... quite the opposite.
> Change your database design.
Re: [SQL] (trigger function) -> ERROR: NEW used in non-rule query
Josh Berkus wrote:
>> Well I suppose I could try TCL. The problem is that there is little to no
>> documentation on postgres stored procedures in TCL and I've never even seen
>> the language before. Nonetheless, I'll look into it. It's almost worth it.
>> If that fails, I may even try perl.
>
> And what's wrong with Perl? Other than the inability to write triggers with it?

That's *exactly* "what's wrong" with it :-)

If I understand what he is talking about correctly, his whole problem is that he is writing a trigger :-)

Dima
Re: [SQL] how to copy table to another database?
Yudie wrote:
> Hi,
> Does anyone know the procedure or commands to copy a table to another
> database? Or to query from another database, if that's possible?
>
> thank you
> yudie

Something like this, perhaps?

psql -d first_database -c '\copy mytable to stdout' | psql -d second_database -c '\copy mytable from stdin'

I hope, it helps...

Dima
Re: [SQL] rule causes nextval() to be invoked twice
I think your example would work if you replaced the new.id in the rule with currval('main_id_seq')...

... but see Tom's earlier reply - this is still not a very good thing to do... For example, it won't work if you try to insert into main anything with an explicitly specified id (not generated by the sequence), or if you insert multiple rows with a single statement (like insert ... select), or if you do COPY (besides the fact that it doesn't touch the sequence, it also doesn't invoke rules at all).

To do what you are trying to do, an after trigger seems to be the only thing that will work completely (see the sketch below).

Dima

paul cannon wrote:
> On Tue, Jul 22, 2003 at 07:47:00PM -0600, paul cannon wrote:
>
>> Until then, I'll have to make a function to do nextval('main_id_seq') with
>> every insert, and have the primary key be INTEGER.
>
> Nevermind - that doesn't work either! Here's the new sample code:
>
> -- Begin demo SQL
> CREATE SEQUENCE main_id_seq;
>
> CREATE TABLE main (
>   id INTEGER PRIMARY KEY,
>   contents VARCHAR
> );
>
> CREATE TABLE othertable (
>   main_id INTEGER REFERENCES main(id)
> );
>
> CREATE RULE main_insert AS ON INSERT TO main
>   DO INSERT INTO othertable VALUES (new.id);
>
> INSERT INTO main(id, contents)
>   VALUES (nextval('main_id_seq'), 'Fails here');
> -- End demo SQL
>
> The same thing happens. The rule tries to put 2 into othertable.
>
> Surely this is a bug?
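For completeness, a sketch of the AFTER-trigger alternative mentioned above, using the names from the quoted example (plpgsql; on 7.2 the function would have to return opaque instead of trigger):

CREATE FUNCTION main_insert_trig() RETURNS trigger AS '
BEGIN
    -- NEW.id here is the value actually stored in main, however it was produced
    INSERT INTO othertable VALUES (NEW.id);
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER main_insert_after AFTER INSERT ON main
    FOR EACH ROW EXECUTE PROCEDURE main_insert_trig();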
Re: [SQL] Problem using Subselect results
[EMAIL PROTECTED] wrote:
> SELECT my_c.b, my_c.c FROM
>   (SELECT table2.b, table2.c FROM table2,
>     (SELECT table1.a, table1.b FROM table1 WHERE (table1.b = 1)) my_ab
>    WHERE (table2.a = my_ab.a)) my_c;
>
> You were doing what I wanted to avoid - you are using a "SUBSUBSELECT".

What about:

CREATE VIEW my_view AS
SELECT b, c
FROM (SELECT a, b FROM table1 WHERE b=1) as my_ab,
     (SELECT a, c FROM table2) as my_ac
WHERE my_ac.a = my_ab.a;

This looks like what you are trying to do, and doesn't use that 'subsubselect' you were trying to avoid...

BTW, what is special about the second-level subselect, compared to the first-level one? Why are you trying to avoid one, but not the other? I mean, I could understand if you (like me) just hated subselects altogether (then you would have converted your query into a join), but it looks like you don't...

Dima

> But (now) I believe it's not possible to refer to a subselect's resultset on
> the same level of hierarchy - which sounds rather meaningful - because you
> couldn't tell which of them was being processed first. So I'll have to get my
> SELECT statement into some kind of hierarchy, which makes things a bit more
> complicated (with twentysomething SELECT statements).
>
> Thanks, Oliver
>
> Quoting Christoph Haller <[EMAIL PROTECTED]>:
>
>> Does this match your intentions:
>>
>> CREATE VIEW my_view AS SELECT b,c FROM
>>   (SELECT b,c FROM table2,
>>     (SELECT a, b FROM table1 WHERE b=1) my_ab
>>    WHERE table3.a=my_ab.a) my_c;
>>
>> I assume the reference table3.a is a typo.
>>
>> Regards, Christoph
>>
>>> I want to use the result of a subselect as a condition in another one.
>>>
>>> table1: a,b
>>> table2: a,c
>>>
>>> CREATE VIEW my_view AS SELECT b,c
>>>   (SELECT a, b FROM table1 WHERE b=1) my_ab,
>>>   (SELECT c FROM table2, my_ab WHERE table3.a=my_ab.a) my_c;
>>>
>>> this is just an example - i know i could cross join this one, but i need to
>>> refer to the results of several subselects in several others.
>>>
>>> It returns "relation my_ab unknown". It is not just a problem of execution
>>> order - if i turn it the other way round it's still the same.
>>>
>>> Am I just trying to do something really stupid? And what is the (necessary)
>>> AS statement for subselects for, if it's not possible to access their
>>> results by that name? And as I need the result of a subselect in several
>>> other subselects, it's not possible to transform them into a cascade of
>>> sub, subsub, subsubsub... selects.
>>>
>>> Any ideas?
Re: [SQL] Problem using Subselect results
[EMAIL PROTECTED] wrote:

>Quoting Dmitry Tkach <[EMAIL PROTECTED]>:
>
>>What about:
>>
>>CREATE VIEW my_view AS
>>SELECT b, c
>>  FROM (SELECT a, b FROM table1 WHERE b=1) as my_ab,
>>       (SELECT a, c FROM table2) as my_ac
>> WHERE my_ac.a=my_ab.a
>>
>>This looks like what you are trying to do, and doesn't use that
>>'subsubselect' you were trying to avoid...
>>
>I assume that with this statement postgresql will compute both subselects,
>do a cross join on both results and then reduce them to those that match
>the condition my_ac.a=my_ab.a, right?

I don't think so... Not totally sure, but I believe that, at least in this
case, the query plan will be equivalent to a join...

>What I was trying to do is reduce the results to a minimum before joining
>them. It's not only two or three tables, and some of them will grow big, so
>joining them first and reducing them later may not be such a good idea.

I am not sure I understand what you mean by 'reducing'. It seems to me that
you could make your query a lot simpler by converting it into a join, and I
don't see anything you are buying with those subselects.

Also, you may want to get rid of max(), and replace those things with
'select column from table order by column desc limit 1'. This should be a
lot quicker (provided that you have an index on that column).

Dima

>My first try (which does not work, because I'm trying to access results of
>subselects on the same hierarchy level):
>
>CREATE VIEW public.aufwaende_test AS
>SELECT aufwaende.aufwandsid, aufwaende.auftragsid, aufwaende.taetigkeitsid,
>       my_taetigkeiten.taetigkeit, aufwaende.mitarbeiterid,
>       my_mitarbeiter.vorname, my_mitarbeiter.nachname,
>       my_mitarbeiter.pknummer, aufwaende.dauer, aufwaende.datum,
>       my_auftraege.aufgabenid, my_aufgaben.aufgabe, my_auftraege.systemid,
>       my_systeme.system, my_systeme.kundenid, my_kunden.kunde,
>       aufwaende.status
>FROM aufwaende,
>     (SELECT auftraege_complete.systemid, auftraege_complete.aufgabenid,
>             auftraege_complete.updatenr
>        FROM auftraege_complete
>       WHERE (auftraege_complete.updatenr =
>              (SELECT max(auftraege_complete.updatenr) AS max
>                 FROM auftraege_complete
>                WHERE ((auftraege_complete.auftragsid = aufwaende.auftragsid)
>     my_auftraege,
>     (SELECT aufgaben_complete.bezeichnung AS aufgabe
>        FROM aufgaben_complete
>       WHERE (aufgaben_complete.updatenr =
>              (SELECT max(aufgaben_complete.updatenr) AS max
>                 FROM aufgaben_complete
>                WHERE ((aufgaben_complete.aufgabenid = auftraege_complete.aufgabenid)
>                  AND (aufgaben_complete.updatenr < my_auftraege.updatenr)
>     my_aufgaben,
>     (SELECT taetigkeiten_complete.bezeichnung AS taetigkeit
>        FROM taetigkeiten_complete
>       WHERE (taetigkeiten_complete.updatenr =
>              (SELECT max(taetigkeiten_complete.updatenr) AS max
>                 FROM taetigkeiten_complete
>                WHERE ((taetigkeiten_complete.taetigkeitsid = aufwaende.taetigkeitsid)
>                  AND (taetigkeiten_complete.updatenr < my_auftraege.updatenr)
>     my_taetigkeiten,
>     (SELECT systeme_complete.name AS system, systeme_complete.kundenid,
>             systeme_complete.systemid AS sys_sysid
>        FROM systeme_complete
>       WHERE (systeme_complete.updatenr =
>              (SELECT max(systeme_complete.updatenr) AS max
>                 FROM systeme_complete
>                WHERE ((systeme_complete.systemid = auftraege_complete.systemid)
>                  AND (systeme_complete.updatenr < my_auftraege.updatenr)
>     my_systeme,
>     (SELECT kunden_complete.name AS kunde
>        FROM kunden_complete
>       WHERE (kunden_complete.updatenr =
>              (SELECT max(kunden_complete.updatenr) AS max
>                 FROM kunden_complete
>                WHERE ((kunden_complete.kundenid = my_systeme.kundenid)
>                  AND (kunden_complete.updatenr < aufwaende.updatenr)
>     my_kunden,
>     (SELECT mitarbeiter_complete.nachname, mitarbeiter_complete.vorname,
>             mitarbeiter_complete.pknummer
>        FROM mitarbeiter_complete
>       WHERE (mitarbeiter_complete.updatenr =
>              (SELECT max(mitarbeiter_complete.updatenr) AS max
>                 FROM mitarbeiter_complete
>                WHERE ((mitarbeiter_complete.mitarbeiterid = aufwaende.mitarbeiterid)
>                  AND (mitarbeiter_complete.updatenr < my_auftraege.updatenr)
>     my_mitarbeiter;
>
>As you can see, most of them use my_auftraege.updatenr as one condition,
>and the subselect on kunden_complete uses results from the my_systeme
>subselect (my_systeme.kundenid).
>
>Now I see two possibilities:
>- join them early and reduce them later
>- create a hierarchy, so that (sub...)selects which rely on the result of
>  another select include that select statement as a (sub...)subselect.
>
>Any better ideas?
>
>TIA, Oliver

BTW, what is special about the second-level subselect, compared to the
first-level one? Why are you trying to avoid one, but not the other? I mean,
I could understand if you (like me) just hated subselects altogether (then
you would have converted your query into a join), but it looks like you
don't...

Dima

>But (now) I believe it's not possible to refer to a subselect's result set
>on the same level of the hierarchy - which sounds rather reasonable -
>because you couldn't tell which of them was being processed first.
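To illustrate the max() suggestion above, here is a minimal sketch using one
of the tables from the view (the index name and the literal 42 are made up
for the example, and whether the planner actually uses the index depends on
the data and the PostgreSQL version):

-- hypothetical index covering the lookup column and the ordering column
CREATE INDEX aufgaben_complete_upd_idx
    ON aufgaben_complete (aufgabenid, updatenr);

-- instead of a correlated max() subselect ...
SELECT max(updatenr) FROM aufgaben_complete WHERE aufgabenid = 42;

-- ... fetch the top row directly; with the index above this can stop after
-- reading a single index entry instead of visiting every matching row
SELECT updatenr
  FROM aufgaben_complete
 WHERE aufgabenid = 42
 ORDER BY aufgabenid DESC, updatenr DESC
 LIMIT 1;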
Re: [SQL] [GENERAL] Query analyse
The first query is able to use the index on nr_proponente because the
condition involves that column directly; the second query is not, because
the index only contains the values of nr_proponente, not the results of
trunc(...).

Try replacing that condition with something like

pa.nr_proponente BETWEEN op.nr_proponente AND op.nr_proponente + 9

I hope it helps...

Dima

Elielson Fontanezi wrote:

>Good morning!
>
>First of all, my environment is:
>  Linux netlab142.prodam 2.4.8-26mdk #1 Sun Sep 23 17:06:39 CEST 2001 i686 unknown
>  pg_ctl (PostgreSQL) 7.2.1
>
>I would like some suggestions on how to speed up a query. Both of the
>queries below are identical, except that one of them uses the *trunc*
>function. You can see that the trunc function raises the query response
>time dramatically in the second query. That shouldn't happen. Is it only
>because of the trunc function? What can I do in that case? What is
>happening?
>
>Sure, there are indexes:
>
>CREATE INDEX idx_proposta_2 ON proposta USING btree (in_situacao_proposta);
>CREATE INDEX idx_proposta_4 ON proposta USING btree (nr_proponente);
>
>And pa.nr_proponente is a fk and op.nr_proponente is a pk.
>
>These are the queries:
>
>1st: this one is OK.
>
>DEBUG: query: select pa.nr_projeto, pa.dc_denom_projeto, pa.nr_proponente,
>pa.dc_coordenador, op.dc_proponente from proposta pa inner join
>orgao_proponente op on (pa.nr_proponente = op.nr_proponente) where
>pa.in_situacao_proposta <> 'E' ORDER BY 1 DESC;
>DEBUG: QUERY STATISTICS
>! system usage stats:
>! 0.015904 elapsed 0.00 user 0.02 system sec
>! [0.01 user 0.02 sys total]
>! 0/0 [0/0] filesystem blocks in/out
>! 143/42 [353/172] page faults/reclaims, 0 [0] swaps
>! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
>! 0/0 [0/0] voluntary/involuntary context switches
>! postgres usage stats:
>! Shared blocks: 88 read, 0 written, buffer hit rate = 89.19 %
>! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%
>! Direct blocks: 0 read, 0 written
>
>2nd: but I need to use the trunc function:
>
>DEBUG: query: select pa.nr_projeto, pa.dc_denom_projeto, pa.nr_proponente,
>pa.dc_coordenador, op.dc_proponente from proposta pa inner join
>orgao_proponente op on (trunc(pa.nr_proponente/10,0)*10 = op.nr_proponente)
>where pa.in_situacao_proposta <> 'E' ORDER BY 1 DESC;
>DEBUG: QUERY STATISTICS
>! system usage stats:
>! 104.665005 elapsed 10.09 user 0.42 system sec
>! [10.10 user 0.42 sys total]
>! 0/0 [0/0] filesystem blocks in/out
>! 141/50 [352/180] page faults/reclaims, 0 [0] swaps
>! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
>! 0/0 [0/0] voluntary/involuntary context switches
>! postgres usage stats:
>! Shared blocks: 7408 read, 0 written, buffer hit rate = 13.23 %
>! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%
>! Direct blocks: 0 read, 0 written

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your message can get
through to the mailing list cleanly
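A sketch of that rewrite in the context of the full query: keep the original
trunc() equality, and add a redundant range condition expressed on the bare
pa.nr_proponente column, so the planner has something it can match against
idx_proposta_4 (the bounds below assume nr_proponente is a non-negative
integer column):

select pa.nr_projeto, pa.dc_denom_projeto, pa.nr_proponente,
       pa.dc_coordenador, op.dc_proponente
  from proposta pa
 inner join orgao_proponente op
    on trunc(pa.nr_proponente/10,0)*10 = op.nr_proponente  -- original condition
   and pa.nr_proponente >= op.nr_proponente                -- redundant, but stated
   and pa.nr_proponente <  op.nr_proponente + 10           -- on the indexed column
 where pa.in_situacao_proposta <> 'E'
 order by 1 desc;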
Re: [SQL] Very strange 'now' behaviour in nested triggers.
Tom Lane wrote:

>I put up a proposal in pgsql-hackers to change this behavior:
>http://archives.postgresql.org/pgsql-hackers/2003-07/msg00818.php
>
>If we made that change then the "wrong" way of defining the default would
>fail in an obvious fashion --- the 'now' would get reduced to a particular
>time immediately at CREATE TABLE. Doubtless this would annoy some people,
>but the "right" way of defining the default isn't really any harder, and it
>would save folks from getting burnt in corner cases, like you were.
>
>Any comments?

Why not get rid of 'now' altogether? Are there any cases when it is actually
useful, as opposed to now()?

Dima

---(end of broadcast)---
TIP 8: explain analyze is your friend
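For reference, the two ways of defining such a default that the proposal is
talking about look roughly like this (a minimal sketch; the table and column
names are made up, and the exact moment the literal gets frozen varies
between versions):

-- the "wrong" way: a string literal; depending on when it gets coerced to
-- timestamp, it can end up frozen to a single point in time instead of
-- being re-evaluated for every row
CREATE TABLE t_wrong (id int, ts timestamp DEFAULT 'now');

-- the "right" way: an actual function call, evaluated at every INSERT
CREATE TABLE t_right (id int, ts timestamp DEFAULT now());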
Re: [SQL] Very strange 'now' behaviour in nested triggers.
Tom Lane wrote:

>Dmitry Tkach <[EMAIL PROTECTED]> writes:
>
>>Does it mean that the *application* (not the database) user would then
>>have to know the exact specific way to represent the current time in his
>>data entry form? Such an application looks like (how do I say it politely?)
>>not a very user-friendly one to me :-)
>
>So? "now()" is certainly not more user-friendly than "now".

Nope... it isn't. My point was that, if the app wanted to be user-friendly,
it would not attempt to take the input directly from the user and stuff it
into the SQL - it would probably have some checkbox or drop-down list in the
GUI form to indicate that the user wants the current timestamp, and use the
proper internal representation in the generated SQL... In that case, having
to execute a function (now()) would not make it vulnerable to SQL
injection...

>My point is that wherever you are making the decision that you want to
>input the current time, there may be layers between you and the database
>that will only want to pass data-value strings and not function
>invocations.

Yeah... I've actually found one after I sent that last message :-) - it does
seem to come in handy in COPY - although, in that case, I'd say it would be
much more useful to make COPY understand the defaults defined on the table,
just the way INSERT does...

Dima

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send
"unregister YourEmailAddressHere" to [EMAIL PROTECTED])
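A small illustration of that COPY case (a sketch; the events table is made
up): there is no place in COPY input to call now(), but the timestamp input
routine accepts the special string 'now', so current-time values can still
be loaded without one:

CREATE TABLE events (id int, created timestamp);

-- each data row has two fields, separated here by '|'
COPY events FROM stdin WITH DELIMITER '|';
1|now
2|now
\.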
Re: [SQL] Very strange 'now' behaviour in nested triggers.
>Data entry. You don't necessarily have the option to invoke a function, as
>opposed to just sending a string for the datetime input parser. (It would
>be rather difficult for an application to allow this one case without
>permitting SQL-injection attacks, I'd think.)

Does it mean that the *application* (not the database) user would then have
to know the exact specific way to represent the current time in his data
entry form? Such an application looks like (how do I say it politely?) not a
very user-friendly one to me :-)

Dima

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match
Re: [SQL] One to many query question
Dave Dribin wrote:

>Hi, I'm having trouble with what I think should be an easy query. For
>simplicity, I will use a CD database as an example. Each CD may have
>multiple genres. Here's some sample data:
>
>Artist              Title               Genres
>------------------  ------------------  ----------------
>Miles Davis         Some Kind of Blue   Jazz
>Metallica           Ride the Lightning  Rock
>Chemical Brothers   Surrender           Electronic
>Radiohead           OK Computer         Rock, Electronic
>
>For simplicity's sake, let's ignore normalization on artist and genre, and
>say the tables look like:
>
>CREATE TABLE cd (
>    id integer unique,
>    artist varchar(25),
>    title varchar(25)
>);
>
>CREATE TABLE cd_genres (
>    cd_id integer,
>    genre varchar(25)
>);
>
>How do I write a query to find all CDs that are NOT Rock?

What about

select * from cd
 where not exists (select 1 from cd_genres
                    where cd_id = cd.id and genre = 'Rock');

?

Dima

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
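In case NOT EXISTS turns out to be slow on a particular version, the same
anti-join can also be spelled with an outer join (a sketch using the tables
from the post):

-- keep only the CDs for which the left join finds no 'Rock' genre row
select cd.*
  from cd
  left join cd_genres rock
         on rock.cd_id = cd.id
        and rock.genre = 'Rock'
 where rock.cd_id is null;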
[SQL] Trouble with explicit joins
Hi, everybody!

I am writing a fairly long query that joins several (like 10-15) tables. It
is dynamically generated, and the tables in the join may be different,
depending on the criteria etc...

The problem is that I need to outer (left) join some of those tables. The
only way I know to do that is to use explicit join syntax ("select blah from
foo left join bar using (x)"). BUT when I try to write my query with that
syntax, postgres comes up with a totally wrong query plan (e.g., a seq scan
on a table with 100 million rows), even when no outer joins are used at all
(as I said, the query is dynamic - sometimes it needs a left join, sometimes
it does not, but it is the same code that generates it). The exact same
query with implicit join syntax ("select blah from foo, bar where
foo.x=bar.x") works fine.

I suppose this is because the planner takes the order in which the tables
appear in the explicit joins as some kind of a hint about how I want that
query to be executed, and, if I changed the order of those joins, I believe
I would be able to get the same query plan as without explicit joins.
Unfortunately, this is not an option: as I said, the query is dynamically
generated, the set of tables being joined is different every time, the
criteria vary too - there are just too many possibilities.

So, my only hope is that, perhaps, I am missing something simple here, and
somebody would be able to tell me either how to do an outer join with
implicit syntax, or how to make the planner behave the same way when it
reads explicit syntax.

I'd appreciate any ideas.

Thanks a lot!

Dima

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your message can get
through to the mailing list cleanly
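To make the two forms concrete, here is a minimal sketch with made-up
two-column tables (foo, bar and x are the placeholder names from the message
above):

create table foo (x int, a int);
create table bar (x int, b int);

-- implicit join syntax: the planner is free to pick the join order
select foo.a, bar.b from foo, bar where foo.x = bar.x;

-- explicit join syntax: the only way to spell the outer join, but it also
-- pins down the order in which the tables are joined (see the reply below)
select foo.a, bar.b from foo left join bar using (x);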
Re: [SQL] Trouble with explicit joins
Tom Lane wrote:

>Dmitry Tkach <[EMAIL PROTECTED]> writes:
>
>>I suppose this is because the planner takes the order in which the tables
>>appear in the explicit joins as some kind of a hint about how I want that
>>query to be executed,
>
>It's not a "hint", it's a requirement. In general, changing the order in
>which outer joins are executed changes the results. There are some cases in
>which it is safe to rearrange the order, but determining this takes close
>analysis of the join conditions, and we don't (yet) have any code to do
>that. So the planner must be conservative and take your join order as
>gospel.
>
>			regards, tom lane

Yeah... that's what I figured. Are you saying there is no way around it at
all? Isn't there some supported syntax for writing a left join with implicit
joins?

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
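To see why the order is a requirement, here is a tiny made-up example in
which the two join orders give different answers, so the planner is not
allowed to switch between them on its own:

create table a (id int);
create table b (id int);
create table c (id int);
insert into a values (1);
insert into c values (1);
-- b stays empty

-- (a LEFT JOIN b) JOIN c: the row from a survives the left join and then
-- matches c, so this returns one row
select * from a left join b using (id) join c using (id);

-- a JOIN (b LEFT JOIN c): b is empty, so the inner join against it
-- eliminates everything, and this returns no rows
select * from a join (b left join c using (id)) using (id);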