Re: [SQL] the best way to get the topest 3 record in every group
Dima,

My question is that I want to produce ALL the latest 3 records for EACH itemNo and supplier.

Jack

----- Original Message -----
From: "dima" <[EMAIL PROTECTED]>
To: "jack" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, September 09, 2002 4:34 PM
Subject: Re: [SQL] the best way to get the topest 3 record in every group

> > There is a table like:
> > <<
> > itemNo
> > supplier
> > purchaseDate
> > Price
> > Qty
> > <<
> > Please provide an idea if I want to get the latest 3 purchase records for
> > each item and supplier. Thank you in advance.
>
> select * from table_name where supplier=value order by purchaseDate desc
> limit 3
> ???
Re: [SQL] the best way to get the topest 3 record in every group
jack:

select *
  from purchase as p
 where purchase_date >= ( select min(ppp.purchase_date)
                            from ( select pp.purchase_date
                                     from purchase as pp
                                    where p.item_no = pp.item_no
                                      and p.supplier = pp.supplier
                                    order by 1 desc
                                    limit 3 ) as ppp );

But this query has a leak if there are more than three purchases on the same day; to avoid that leak you need a unique row identifier. The attached file contains test data and working queries.

regards.

>Dima
>My question is that I want to produce ALL the latest 3 records for EACH
>itemNo and supplier.
>
>Jack
>----- Original Message -----
>From: "dima" <[EMAIL PROTECTED]>
>To: "jack" <[EMAIL PROTECTED]>
>Cc: <[EMAIL PROTECTED]>
>Sent: Monday, September 09, 2002 4:34 PM
>Subject: Re: [SQL] the best way to get the topest 3 record in every group
>
>>>There is a table like:
>>><<
>>>itemNo
>>>supplier
>>>purchaseDate
>>>Price
>>>Qty
>>><<
>>>Please provide an idea if I want to get the latest 3 purchase records for
>>>each item and supplier. Thank you in advance.
>>
>>select * from table_name where supplier=value order by purchaseDate desc
>>limit 3
>>???

sqls.zip  Description: Zip compressed data
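For readers without the attachment, the unique-identifier fix can be sketched roughly as follows. The column name purchase_id is only a placeholder for whatever unique identifier the table actually has; the idea is to count, for each row, how many rows of the same item/supplier group sort ahead of it and keep those with fewer than three.

-- Sketch only: purchase_id stands in for a real unique column.
select *
  from purchase as p
 where ( select count(*)
           from purchase as pp
          where pp.item_no = p.item_no
            and pp.supplier = p.supplier
            and ( pp.purchase_date > p.purchase_date
                  or ( pp.purchase_date = p.purchase_date
                       and pp.purchase_id > p.purchase_id ) ) ) < 3;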
[SQL] new calculated column
Hi;

I've got a table with two fields and about 3000 rows. The second field is a character field which can have about twenty different values; of course these values are repeated many times in the table. I need to create a new column of type integer whose value depends on the character field. The actual values of the new column are not important; the important thing is how I can create this column and assign a different integer to each different character value in the other column.

Thanks
--
Javier
[SQL] Character translation?
Hi,

I have a view like this:

create view myview as
  select recid, title, firstname || chr(32) || lastname as expert, rank
    from mytable;

When I use this view via ODBC on a Windows 2000 system using Visual FoxPro, the expert field shows up as a memo field. This is analogous to the text field in PSQL.

What I'd like to do is have the expert column come through as a varchar type so that it shows up as a text field in VFP. Any suggestions?

I was looking at the functions and didn't see anything that would do the trick for me, but I could have just missed something. I also saw a reference to the cast() function but couldn't find any usage info on it in the online docs.

Any help is greatly appreciated.
Thanks in advance,
Linn

--
Please remove the number two from domain name for email.
[SQL] Efficiency Question
Hi, all.

I'm putting together a small query that should return the most recent entry from a table by date (I can't use an ID field as new entries may be added with older dates). It's not crucial that this run at 100% efficiency, but I'm interested in the results and/or discussion, as this will probably relate to other larger queries later.

Two methods occur to me, and I don't have a large enough data set to get any kind of idea of the actual response, and I'm not sure I understand the explain plan.

Method 1 is like this:

select *
  from motm
 where creation_date = (select max(creation_date) from motm);

Which requires a subselect and a max operator on a date field.

Method 2 is:

select *
  from motm
 order by creation_date desc
 limit 1;

So in the first case I select the record that has the largest date. In the second case, I order all the records, and then return only one. Here's the explain for both:

-- Method 1:
NOTICE:  QUERY PLAN:

Merge Join  (cost=23.77..23.96 rows=1 width=60)
  InitPlan
    ->  Aggregate  (cost=22.50..22.50 rows=1 width=8)
          ->  Seq Scan on motm  (cost=0.00..20.00 rows=1000 width=8)
  ->  Sort  (cost=22.67..22.67 rows=10 width=20)
        ->  Seq Scan on motm m  (cost=0.00..22.50 rows=10 width=20)
  ->  Sort  (cost=1.11..1.11 rows=5 width=40)
        ->  Seq Scan on people p  (cost=0.00..1.05 rows=5 width=40)

EXPLAIN

-- Method 2:
NOTICE:  QUERY PLAN:

Limit  (cost=84.91..84.91 rows=1 width=68)
  ->  Sort  (cost=84.91..84.91 rows=50 width=68)
        ->  Merge Join  (cost=70.94..83.50 rows=50 width=68)
              ->  Sort  (cost=69.83..69.83 rows=1000 width=28)
                    ->  Seq Scan on motm m  (cost=0.00..20.00 rows=1000 width=28)
              ->  Sort  (cost=1.11..1.11 rows=5 width=40)
                    ->  Seq Scan on people p  (cost=0.00..1.05 rows=5 width=40)

EXPLAIN

According to the cost score, it seems that method 1 is faster, almost 4x! Is that actually the case?

Opinions welcome. :)

Colin
[SQL] Slow Multi-joins performance
Has the performance for queries with lots of joins (more than 5) been improved in v7.2?

I'm using 7.0.3 on Redhat 7.1 on a K6-2 500 CPU, and it is about 3 times slower than MySQL, or Access on a windoze platform :-(

I tried different command-line optimisations, and got the best results (on other data) with "-fm -fh -fs", but still nothing like the expected fraction of a second to return the data. Changing the sort buffer options, etc., had little effect.

To prove the point (albeit a trivial example), here are some test tables that take over 3 seconds to retrieve one row of data from tables containing only one row of data each. The SQL statement to test this is "SELECT * FROM test".

==
CREATE TABLE "a" ("id" int4 NOT NULL, "name" text, PRIMARY KEY ("id"));
REVOKE ALL on "a" from PUBLIC;
GRANT ALL on "a" to PUBLIC;

CREATE TABLE "b" ("id" int4 NOT NULL, "name" text, PRIMARY KEY ("id"));
REVOKE ALL on "b" from PUBLIC;
GRANT ALL on "b" to PUBLIC;

CREATE TABLE "c" ("id" int4 NOT NULL, "name" text, PRIMARY KEY ("id"));
REVOKE ALL on "c" from PUBLIC;
GRANT ALL on "c" to PUBLIC;

CREATE TABLE "d" ("id" int4 NOT NULL, "name" text, PRIMARY KEY ("id"));
REVOKE ALL on "d" from PUBLIC;
GRANT ALL on "d" to PUBLIC;

CREATE TABLE "e" ("id" int4 NOT NULL, "name" text, PRIMARY KEY ("id"));
REVOKE ALL on "e" from PUBLIC;
GRANT ALL on "e" to PUBLIC;

CREATE TABLE "f" ("id" int4 NOT NULL, "name" text, PRIMARY KEY ("id"));
REVOKE ALL on "f" from PUBLIC;
GRANT ALL on "f" to PUBLIC;

CREATE TABLE "g" ("id" int4 NOT NULL, "name" text, PRIMARY KEY ("id","name"));
REVOKE ALL on "g" from PUBLIC;
GRANT ALL on "g" to PUBLIC;

CREATE TABLE "h" ("id" int4 NOT NULL, "name" text, PRIMARY KEY ("id","name"));
REVOKE ALL on "h" from PUBLIC;
GRANT ALL on "h" to PUBLIC;

CREATE TABLE "i" ("id" int4 NOT NULL, "name" text, PRIMARY KEY ("id","name"));
REVOKE ALL on "i" from PUBLIC;
GRANT ALL on "i" to PUBLIC;

CREATE TABLE "j" ("id" int4 NOT NULL, "name" text, PRIMARY KEY ("id","name"));
REVOKE ALL on "j" from PUBLIC;
GRANT ALL on "j" to PUBLIC;

CREATE TABLE "k" ("id" int4 NOT NULL, "name" text, PRIMARY KEY ("id","name"));
REVOKE ALL on "k" from PUBLIC;
GRANT ALL on "k" to PUBLIC;

CREATE TABLE "l" ("id" int4 NOT NULL, "name" text, PRIMARY KEY ("id","name"));
REVOKE ALL on "l" from PUBLIC;
GRANT ALL on "l" to PUBLIC;

CREATE TABLE "t" (
  "id" int4 NOT NULL,
  "ta" int4, "tb" int4, "tc" int4, "td" int4, "te" int4, "tf" int4,
  "tg" int4, "th" int4, "ti" int4, "tj" int4, "tk" int4, "tl" int4,
  PRIMARY KEY ("id")
);
REVOKE ALL on "t" from PUBLIC;
GRANT ALL on "t" to PUBLIC;

CREATE TABLE "test" (
  "id" int4,
  "ta" text, "tb" text, "tc" text, "td" text, "te" text, "tf" text,
  "tg" text, "th" text, "ti" text, "tj" text, "tk" text, "tl" text
);

COPY "a" FROM stdin;
1	a
\.
COPY "b" FROM stdin;
1	b
\.
COPY "c" FROM stdin;
1	c
\.
COPY "d" FROM stdin;
1	d
\.
COPY "e" FROM stdin;
1	e
\.
COPY "f" FROM stdin;
1	f
\.
COPY "g" FROM stdin;
1	g
\.
COPY "h" FROM stdin;
1	h
\.
COPY "i" FROM stdin;
1	i
\.
COPY "j" FROM stdin;
1	j
\.
COPY "k" FROM stdin;
1	k
\.
COPY "l" FROM stdin;
1	l
\.
COPY "t" FROM stdin;
1	1	1	1	1	1	1	1	1	1	1	1	1
\.
CREATE RULE "_RETtest" AS ON SELECT TO test DO INSTEAD SELECT t.id, a.name AS ta, b.name AS tb, c.name AS tc, d.name AS td, e.name AS te, f.name AS tf, g.name AS tg, h.name AS th, i.name AS ti, j.name AS tj, k.name AS tk, l.name AS tl FROM t, a, b, c, d, e, f, g, h, i, j, k, l WHERE t.ta = a.id) AND (t.tb = b.id)) AND (t.tc = c.id)) AND (t.td = d.id)) AND (t.te = e.id)) AND (t.tf = f.id)) AND (t.tg = g.id)) AND (t.th = h.id)) AND (t.ti = i.id)) AND (t.tj = j.id)) AND (t.tk = k.id)) AND (t.tl = l.id)); 6BððA = The debug info is as follows: 020906.19:53:23.041 [7893] StartTransactionCommand 020906.19:53:23.041 [7893] query: select getdatabaseencoding() 020906.19:53:23.137 [7893] ProcessQuery ! system usage stats: ! 0.096804 elapsed 0.01 user 0.01 system sec ! [0.03 user 0.05 sys total] ! 0/0 [0/0] filesystem blocks in/out ! 78/6 [203/110] page faults/reclaims, 0 [0] swaps ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent ! 0/0 [0/0] voluntary/involuntary context switches ! postgres usage stats: ! Shared blocks:
[SQL] Database joins
Hi,

I have created two databases, db1 and db2, in my PostgreSQL database system, and both databases contain a few tables. Could you please help me to write a query which retrieves data from both databases (database joins as well as table joins)?

Expecting your earliest reply.

Regards
Jay
[SQL] Cross tables Like%
I have 2 tables, one is products and one is manufactors.

The products table has a column for Product_description and a manufactor_id column that is a foreign key to the manufactors table.

The manufactors table has a manufactor_description column for each unique Manufactor_id.

I want to search (using LIKE) on both tables (columns: Products.Product_description and Manufactors.Manufactor_description).

Can anyone suggest a cross-table %LIKE% query for it?

Thanks!
Re: [SQL] Efficiency Question
On Mon, Sep 09, 2002 at 00:13:04, Colin Fox <[EMAIL PROTECTED]> wrote:
>
> select
>    *
> from
>    motm
> order by
>    creation_date desc
> limit 1;
>
> So in the first case I select the record that has the largest date. In the
> second case, I order all the records, and then return only one.

If you have an index on creation_date, an index scan can be used, which will make the second form run a lot faster if there are lots of records.
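For illustration, the index in question would be created along these lines (the index name is arbitrary):

-- With this index in place, the ORDER BY ... DESC LIMIT 1 form can be
-- answered by an index scan instead of sorting the whole table.
CREATE INDEX motm_creation_date_idx ON motm (creation_date);

SELECT * FROM motm ORDER BY creation_date DESC LIMIT 1;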
Re: [SQL] new calculated column
For implicit modification you can use a trigger on the table. If you don't want to store this data, you can use a view.

andres javier garcia garcia wrote:

>Hi;
>
>I've got a table with two fields and about 3000 rows, the second one is a
>character field, which can have about twenty different values; of course these
>values are repeated a lot of times in the table. I need to create a new
>column of type integer, whose value depends on the character field. The
>values of the new column are not important, the important thing is how can I
>create this column and assign a different integer to a different char value
>in the other column.
>Thanks
>--
>Javier
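For the stored-column variant, one possible sketch (the names mytable, charcol, and intcol are only illustrative) is to build a small mapping table from the distinct character values and then update from it:

ALTER TABLE mytable ADD COLUMN intcol integer;

CREATE TEMP SEQUENCE charcode_seq;

-- One row per distinct character value, each assigned the next number.
CREATE TEMP TABLE charcode AS
  SELECT d.charcol, nextval('charcode_seq') AS code
    FROM (SELECT DISTINCT charcol FROM mytable) AS d;

-- Copy the assigned numbers back into the new column.
UPDATE mytable
   SET intcol = charcode.code
  FROM charcode
 WHERE charcode.charcol = mytable.charcol;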
Re: [SQL] Cross tables Like%
select products.*, manufactors.*
  from products, manufactors
 where products.manufactor_id = manufactors.manufactor_id
   and ( products.product_description like 'param%'
         or manufactors.manufactor_description like 'param%' );

regards

Jason Davis wrote:

>I have 2 tables, one is products and one is manufactors.
>
>The products table has a col. for Product_description and a manufactor_id
>column that is a foreign key to the manufactors table.
>
>The manufactors table has a manufactor_description column for each unique
>Manufactor_id.
>
>I want to search (using like) on both tables (columns:
>Products.Product_description and Manufactors.Manufactor_description).
>
>Can anyone suggest a cross-table %Like% Query for it?
>
>Thanks!
Re: [SQL] Character translation?
On Thursday 05 Sep 2002 11:15 pm, Linn Kubler wrote:
> Hi,
>
> I have a view like this:
> create view myview as
> select recid, title, firstname || chr(32) || lastname as expert, rank
> from mytable;
>
> When I use this view via odbc on a Windows 2000 system using Visual FoxPro
> the expert field shows up as a memo field. This is analogous to the text
> field in PSQL.
>
> What I'd like to do is have the expert column come through as a varchar
> type so that it shows up as a text field in VFP. Any suggestions?
>
> I was looking at the functions and didn't see anything that would do the
> trick for me but I could have just missed something. I also saw a
> reference to the cast() function but couldn't find any usage info on it in
> the online docs.

I think you're on the right track; try something like:

  select (firstname || ' ' || lastname)::varchar as expert ...

If it works, could you let the list know in case anyone else needs this in future. If not, there is an ODBC list too (see the postgresql.org website for details).

- Richard Huxton
Re: [SQL] Character translation?
Would this work?

select recid, title, firstname || chr(32) || lastname::char(50) as expert, rank
  from mytable
 where length(lastname) <= 50;

Troy

> Hi,
>
> I have a view like this:
> create view myview as
> select recid, title, firstname || chr(32) || lastname as expert, rank
> from mytable;
>
> When I use this view via odbc on a Windows 2000 system using Visual FoxPro
> the expert field shows up as a memo field. This is analogous to the text
> field in PSQL.
>
> What I'd like to do is have the expert column come through as a varchar type
> so that it shows up as a text field in VFP. Any suggestions?
>
> I was looking at the functions and didn't see anything that would do the
> trick for me but I could have just missed something. I also saw a reference
> to the cast() function but couldn't find any usage info on it in the online
> docs.
>
> Any help is greatly appreciated.
> Thanks in advance,
> Linn
>
> --
> Please remove the number two from domain name for email.
Re: [SQL] Slow Multi-joins performance [DEVELOPERS attn please]
On Friday 06 Sep 2002 11:59 am, [EMAIL PROTECTED] wrote:
> Has the performance for queries with lots of joins (more than 5) been
> improved in v7.2 ?
>
> I'm using 7.0.3 on Redhat 7.1 on a K6-2 500 CPU, and it is about 3 times
> slower than MySQL, or Access on a windoze platform :-(
>
> I tried different command-line optimisations, and got the best results (on
> other data) with "-fm -fh -fs", but still not to the expected results of a
> fraction of a second to return the data.
> Changing the sort buffer options, etc, had little effect.
>
> To prove the point (albeit a trivial example), here are some test tables
> that take over 3 seconds to retrieve one row of data from tables containing
> only one row of data each.

(Tom - sorry to cc: you on this, but I'm not sure if I'm talking rubbish here)

Interesting - I get something similar here. If I rewrite the view with explicit joins as below:

  SELECT t.id, a.name AS ta, b.name AS tb ... FROM t JOIN a ON t.ta=a.id JOIN b ON t.tb=b.id ...

it returns instantly. Running an EXPLAIN ANALYSE, both have similar query plans with twelve nested joins and 13 seq scans (as you'd expect for tables with 1 row each). The only apparent difference is the order of the seq scans. The best bit is the

  Total runtime: 4.32 msec (original)
  Total runtime: 5.32 msec (explicit JOINs)

Which says to me that your form is fine. Testing says otherwise, so there must be some element of the query that is not being accounted for in EXPLAIN ANALYSE. Your log shows the genetic algorithm (geqo_main line) kicking in because it sees a complex query, and it could be that this is the problem - PostgreSQL takes a look at the 13-way join and thinks it's going to be very expensive. If you had a genuinely complex query, the time to analyse options would be a benefit, but here I'm guessing it's not. Perhaps try it with increasing amounts of data and more restrictions and see if performance stays constant.

- Richard Huxton
Re: [SQL] Database joins
On Mon, 9 Sep 2002, Jay wrote:

> Hi,
> I have created two databases, db1 and db2 in my POSTGRESQL database system.
> And both the databases contains few tables also.
> Could you please help me to write a query which should retrieve data from both the
> databases (Database joins and table joins)

You currently cannot do cross-database joins directly in SQL alone. You might want to see if contrib/dblink will let you do what you want (IIRC it's gotten better in 7.3 with the functions-returning-result-set stuff).
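For illustration only, a 7.3-style contrib/dblink call looks roughly like this; the connection string, remote table, and column list are assumptions about your setup:

-- Run from db1, querying a table that lives in db2 (requires contrib/dblink installed).
SELECT *
  FROM dblink('dbname=db2', 'SELECT id, name FROM remote_table')
       AS remote(id integer, name text);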
Re: [SQL] Character translation?
On Thu, 5 Sep 2002, Linn Kubler wrote:

> Hi,
>
> I have a view like this:
> create view myview as
> select recid, title, firstname || chr(32) || lastname as expert, rank
> from mytable;
>
> When I use this view via odbc on a Windows 2000 system using Visual FoxPro
> the expert field shows up as a memo field. This is analogous to the text
> field in PSQL.
>
> What I'd like to do is have the expert column come through as a varchar type
> so that it shows up as a text field in VFP. Any suggestions?
>
> I was looking at the functions and didn't see anything that would do the
> trick for me but I could have just missed something. I also saw a reference
> to the cast() function but couldn't find any usage info on it in the online
> docs.

cast() isn't a function, precisely speaking. It works as CAST(expr AS datatype).
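Applied to the view in question, a sketch (the varchar length here is just an example):

CREATE VIEW myview AS
  SELECT recid, title,
         CAST(firstname || chr(32) || lastname AS varchar(80)) AS expert,
         rank
    FROM mytable;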
Re: [SQL] the best way to get the topest 3 record in every group
On Mon, 9 Sep 2002 18:08:21 +1000
"jack" <[EMAIL PROTECTED]> wrote:

> Dima
> My question is that I want to produce ALL the latest 3 records for EACH
> itemNo and supplier.
>
> Jack

I often use the following query for ranking, which is equivalent to Oracle's PARTITION BY. It isn't influenced by what kind of data your table has; it only depends on the result of the sorting.

CREATE TEMP SEQUENCE seq_purchase;

SELECT t5.item_no, t5.supplier, t5.purchase_date, t5.price, t5.qty,
       t5.i - t3.n + 1 AS rank
  FROM (SELECT t2.item_no, t2.supplier, min(t2.i) AS n
          FROM (SELECT t1.*, nextval('seq_purchase') - 1 AS i
                  FROM (SELECT (SELECT setval('seq_purchase', 1)), *
                          FROM purchase
                         ORDER BY item_no, supplier, purchase_date DESC
                       ) AS t1
                 LIMIT ALL
               ) AS t2
         GROUP BY t2.item_no, t2.supplier
       ) AS t3,
       (SELECT t4.*, nextval('seq_purchase') - 1 AS i
          FROM (SELECT (SELECT setval('seq_purchase', 1)), *
                  FROM purchase
                 ORDER BY item_no, supplier, purchase_date DESC
               ) AS t4
         LIMIT ALL
       ) AS t5
 WHERE t3.item_no = t5.item_no
   AND t3.supplier = t5.supplier
   AND t5.i - t3.n + 1 <= 3;

Regards,
Masaru Sugawara
Re: [SQL] Slow Multi-joins performance [DEVELOPERS attn please]
Richard -

Your analysis of this looks right on, to me. With current code, if you put in explicit JOINs, the tables get joined in that order, no questions. By specifying an all-JOIN version, you've made the optimizer's job very easy: only one plan to consider.

Your point about realistic data and complexity of queries is a good one. There has been some recent work on caching query plans, so if the same query gets run a lot, you only pay the planning cost a few times. Not sure how much of that code (if any) made it into 7.3.

As an aside, the EXPLAIN text shows row estimates of 10, when we _know_ the tables have 1 row each, so VACUUM ANALYZE needs to be run. Doing so (on a 7.1.2 database, BTW) cuts the measured execution time in half (though not to instantaneous, since planning still occurs). Letting the planner/optimizer know as much as possible is almost always a good thing.

Ross

On Mon, Sep 09, 2002 at 04:24:08PM +0100, Richard Huxton wrote:
> On Friday 06 Sep 2002 11:59 am, [EMAIL PROTECTED] wrote:
> Interesting - I get something similar here. If I rewrite the view with
> explicit joins as below:
>
> SELECT t.id, a.name AS ta, b.name AS tb ... FROM t JOIN a ON t.ta=a.id JOIN b
> ON t.tb=b.id ...
>
> it returns instantly. Running an EXPLAIN ANALYSE, both have similar query
> plans with twelve nested joins and 13 seq scans (as you'd expect for tables
> with 1 row each). The only apparent difference is the order of the seq scans.
> The best bit is the
>
> Total runtime: 4.32 msec (original)
> Total runtime: 5.32 msec (explicit JOINs)
>
> Which says to me that your form is fine. Testing says otherwise, so there must
> be some element of the query that is not being accounted for in EXPLAIN
> ANALYSE. Your log shows the genetic algorithm (geqo_main line) kicking in
> because it sees a complex query and it could be that this is the problem -
> PostgreSQL takes a look at the 13-way join and thinks it's going to be very
> expensive. If you had a genuinely complex query, the time to analyse options
> would be a benefit, but here I'm guessing it's not. Perhaps try it with
> increasing amounts of data and more restrictions and see if performance stays
> constant.
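Spelled out, the statistics suggestion is simply the following; as a rough sketch, disabling the genetic optimizer is also one way to test whether GEQO planning is the culprit (the exact SET syntax for GEQO varies between 7.0 and 7.2, so check your version):

-- Refresh planner statistics so row estimates match reality.
VACUUM ANALYZE;

-- For testing only: force the exhaustive planner instead of GEQO.
SET geqo TO off;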
[SQL] Changing Column Type
All,

Is there a way to easily change the type of a column, or do I have to drop and create it again?

From: assignment_notes | character varying(255)
To:   assignment_notes | text

Thanks,
-p
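(There was no ALTER COLUMN TYPE in PostgreSQL of that era, so the common workaround was to rebuild the table. A sketch follows, with a hypothetical table name and key column; note that indexes, constraints, and triggers on the old table have to be recreated afterwards.)

-- Copy the data into a new table, casting the column on the way through.
CREATE TABLE assignments_new AS
  SELECT assignment_id, assignment_notes::text AS assignment_notes
    FROM assignments;

DROP TABLE assignments;
ALTER TABLE assignments_new RENAME TO assignments;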
[SQL] Displaying current queries on the database?
I was wondering if there is a way to display all of the current queries that are being submitted to the database. Thanks in advance.

-Sloan Bowman
Re: [SQL] Displaying current queries on the database?
It's possible: select * from pg_stat_activity, after connecting as postgres. Though you need to configure postgresql.conf accordingly.

regds
mallaH

> I was wondering if there is a way to display all of the current queries that are
> being submitted to the database. Thanks in advance.
>
> -Sloan Bowman
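Roughly, the pieces involved look like this (the setting names below are the 7.2/7.3 statistics-collector options; double-check your version's documentation):

-- In postgresql.conf, then restart or reload the server:
--   stats_start_collector = true
--   stats_command_string  = true
-- Afterwards, connected as a superuser:
SELECT * FROM pg_stat_activity;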
[SQL] stored procedures: sybase -> postgreSQL ?
I am trying to port a Sybase table-create script to one usable for PostgreSQL (note: I am not a DBA). In particular, I am not well versed in how to use/recode stored procedures such as those in the example below.

ALTER TABLE DnaFragment ADD PRIMARY KEY (dna_fragment_id)
go

exec sp_primarykey DnaFragment, dna_fragment_id
go

exec sp_bindrule DnaFragment_type_rule, 'DnaFragment.type'
exec sp_bindefault Set_To_Current_Date, 'DnaFragment.date_last_modified'
exec sp_bindefault Set_to_False, 'DnaFragment.is_obsolete'
go

regards,
Charles
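A rough PostgreSQL sketch of what those calls usually map to, assuming the rule and defaults mean what their names suggest (the CHECK values are placeholders for whatever DnaFragment_type_rule actually enforces, and is_obsolete is assumed to be boolean):

ALTER TABLE DnaFragment ADD PRIMARY KEY (dna_fragment_id);

-- sp_bindrule on "type" roughly corresponds to a CHECK constraint.
ALTER TABLE DnaFragment
  ADD CONSTRAINT dnafragment_type_check
  CHECK (type IN ('placeholder_type_1', 'placeholder_type_2'));

-- sp_bindefault calls roughly correspond to column DEFAULTs.
ALTER TABLE DnaFragment ALTER COLUMN date_last_modified SET DEFAULT current_date;
ALTER TABLE DnaFragment ALTER COLUMN is_obsolete SET DEFAULT false;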
Re: [SQL] Slow Multi-joins performance [DEVELOPERS attn please]
Richard Huxton <[EMAIL PROTECTED]> writes:
> Which says to me that your form is fine. Testing says otherwise, so there must
> be some element of the query that is not being accounted for in EXPLAIN
> ANALYSE.

To wit, planning time. EXPLAIN ANALYZE only counts execution time, and planning time on a 13-way join is going to be nontrivial --- especially compared to execution against trivial-size tables.

You can turn on some query stats logging (I forget the SET-variable names) to get a feeling for the relative costs of planning and execution; but usually planning drops into the noise once you start looking at production-sized cases.

regards, tom lane
[SQL] How the R-Tree index works?.
Hi,

Can someone explain to me how the R-Tree index works? I just bought a Postgres developer book, but it only mentions it.

Thanks.

Andrés Sommerhoff
[SQL] Transaction Newbie
Hi,

I've been using Postgres for a while, almost exclusively through the Perl DBI (although I do plenty of work on the command line). I have realized, belatedly, that I need transactions for this thing I want to accomplish, but I've not done transactions before, so I need a bit of help. And I'm not sure whether it's a transaction I need, or a lock.

I have (many) tables with an automatically assigned serial value as primary key, set by a sequence. I need to insert a row, and then get the value of the row I just entered. I thought first of doing two SQL statements in a row: if the primary key is table_id, with default value "nextval('table_seq')" - then these two statements:

insert into table (field1,field2,field3) values (value1,value2,value3)
select currval('table_seq')

work to get me the value I need. Except, of course, if someone else has inserted a row in between these two statements.

I tried a transaction test, and this is what I got:

pew=# begin work;
BEGIN
pew=# insert into categories values ('23423423','test','testing','3','today','today','mpm','test category');
INSERT 83910 1
pew=# select currval('category_id');
NOTICE:  current transaction is aborted, queries ignored until end of transaction block
*ABORT STATE*
pew=# commit work
pew-# ;
COMMIT
pew=# select * from categories;

And the insert didn't happen.

Am I thinking about this right? Is there a better way to get the value of a newly inserted record?

Thanks!

PS: I'm subscribed to sql, odbc and general, and have not been getting general mail for quite some time. I've sent emails to the address that's supposed to be read by humans, but gotten no response. If anyone is in a position to help me out - much appreciated!

--
.Michelle
--
Michelle Murrain, Technology Consulting
[EMAIL PROTECTED]
http://www.murrain.net
413-253-2874 ph  413-222-6350 cell  413-825-0288 fax
AIM:pearlbear0 Y!:pearlbear9 ICQ:129250575
"A vocation is where the world's hunger & your great gladness meet."
Re: [SQL] Transaction Newbie
> if the primary key is table_id, with default value
> "nextval('table_seq') - then these two statements:
>
> insert into table (field1,field2,field3) values (value1,value2,value3)
> select currval('table_seq')
>
> work to get me the value I need. Except, of course if someone else
> has inserted a row inbetween these two statements.

Hmmm - I'm not sure currval has that problem - have you actually tried it with two psql windows?

> I tried a transaction test, and this is what I got:
>
> pew=# begin work;

You can just go 'begin;'

> BEGIN
> pew=# insert into categories values
> ('23423423','test','testing','3','today','today','mpm','test
> category');
> INSERT 83910 1
> pew=# select currval('category_id');
> NOTICE: current transaction is aborted, queries ignored until end of
> transaction block
> *ABORT STATE*

As soon as you see this, it means you have made a syntax error or something in your SQL, which causes an automatic abort.

> pew=# commit work
> pew-# ;

You can't commit once the transaction is aborted; you need to ROLLBACK;

> COMMIT
> pew=# select * from categories;
>
> And the insert didn't happen.

It didn't happen because something caused the whole transaction to be aborted.

> Am I thinking about this right? Is there a better way to get the
> value of a newly inserted record?

Chris
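For what it's worth, currval() is local to your session: it returns the value that nextval() most recently handed out in your own connection, so another session inserting in between cannot change it. A sketch using the placeholder names from the original post (the table is renamed here because "table" itself is a reserved word):

BEGIN;
INSERT INTO mytab (field1, field2, field3) VALUES ('value1', 'value2', 'value3');
-- Safe even with concurrent inserts: currval() reports this session's value.
SELECT currval('table_seq');
COMMIT;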
Re: [SQL] How the R-Tree index works?.
Hi Andres,

I asked myself the same question just yesterday. Here's something I found on Google (and there's much more there, as usual!):

http://icg.harvard.edu/~cs265/lectures/readings/guttman-1984.pdf

Cheers,
Markus Gieppner

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Andres Sommerhoff
Sent: Monday, September 09, 2002 5:52 PM
To: [EMAIL PROTECTED]
Subject: [SQL] How the R-Tree index works?.

Hi,

Can someone explain to me how the R-Tree index works? I just bought a Postgres developer book, but it only mentions it.

Thanks.

Andrés Sommerhoff
[SQL] pgsql-performance mailing list / newsgroup created
Morning all ...

Josh Berkus the other day shook my cage a bit and reminded me to create the -performance list that had been discussed awhile back ... so I did :)

[EMAIL PROTECTED] or comp.databases.postgresql.performance

The archives won't show it yet, as I still have to reconfigure all of that stuff, but the list is there and ready to go ... or should be. If there are any problems, please let me know ...