[SQL] PQexec and SPI_exec
Hello everyone.

I'm experiencing some doubts regarding a procedure I have (a .c file compiled as a .so) running as an 'after insert for each row' trigger. This trigger is supposed to do a simple query, something like:

    SELECT * FROM table WHERE processed = 0 ORDER BY id LIMIT 1

It's not the perfect way to get the values of the insert itself, but the result is a set of 45 columns, and the operations of the trigger are somewhat complex, so C is really a necessity on this one; as long as this select actually returns the proper values, I can deal with the rest later.

But my problem is not a structural one. My problem is that using SPI_exec, then

    SPI_getvalue(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, columnX)

and

    DatumGetInt32(DirectFunctionCall1(int4in,
        CStringGetDatum(SPI_getvalue(SPI_tuptable->vals[0],
                                     SPI_tuptable->tupdesc, columnX))))

and so on, it all works fine: the SPI code running from the triggered .so sees the correct values from the insert that triggered it.

I would prefer to use the friendlier PQexec and the simpler PQgetvalue(res, 0, X), but this last approach does not return the values of the insert that triggered it. It returns the values from "the last insert before this one". What is the proper way to make this method work?

I'm sorry if this is a basic question, but I have tried so many things that I'm probably too puzzled right now to make any sense. Any help is deeply appreciated.

Thanks,
\\Pedro

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [SQL] PQexec and SPI_exec
On 8/25/2004 10:21 AM, Pedro B. wrote:
> Hello everyone.
> I'm experiencing some doubts regarding a procedure i have (.c compiled
> as .so) running as an 'after insert for each row' trigger.
> This trigger is supposed to do a simple query, something like
>     SELECT * FROM table order by id where processed=0 limit 1
> It's not the perfect way to get the vars of the insert itself, but the
> result is a set of 45 columns, and the operations of the trigger are
> somehow complex, so .c is really a necessity on this one, and as long
> as this select actually returns the proper values, i can deal with it
> later. But my problem is not one of a structure nature: my problem is
> the fact that using SPI_exec, and the
> SPI_getvalue(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, columnX),
> and DatumGetInt32(DirectFunctionCall1(int4in,
> CStringGetDatum(SPI_getvalue(...)))), etc, it all works fine.
> the SPI-running-from-the-triggered-.so can detect the correct values
> - from the insert that triggered it

If the trigger is supposed to use values from the tuple that triggered its call, then it should not select them from the table but use the provided tg_trigtuple in the TriggerData structure. See

    http://www.postgresql.org/docs/current/static/trigger-interface.html

for details.

> I would prefer to use the more friendly PQexec and the simpler
> PQgetvalue(res,0,X), but this last approach does not return the values
> of the insert that triggered it. It returns the values from "the last
> insert before this one".
> What is the proper way to make this method work?

There is no proper way to make this work at all. What you are doing in this case is to have the database server process that is handling your query open another client connection to the server, starting another database server process. This cannot work.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== [EMAIL PROTECTED] #
[SQL] Complicated "group by" question
I have a table of people ("reviewers"), a table of review assignments ("assign"), and a table of review acceptances ("accept"). I would like to be able to write a query to return the latest (e.g., max(assign_date)) assignment for each reviewer, plus the acc_id field from "accept". I think I should be able to do this with a GROUP BY clause, but am having no luck.

Table structure:

    reviewers        assign           accept
    -----------      -----------      -----------
    reviewer_id      assign_id        accept_id
    ...              reviewer_id      assign_id
                     assign_date      ...
                     ...

Thanks for any guidance.

Andy

--
Andrew J Perrin - http://www.unc.edu/~aperrin
Assistant Professor of Sociology, U of North Carolina, Chapel Hill
[EMAIL PROTECTED] * andrew_perrin (at) unc.edu
Re: [SQL] Complicated "group by" question
Andrew,

> I have a table of people ("reviewers"), a table of review assignments
> ("assign"), and a table of review acceptances ("accept"). I would like to
> be able to write a query to return the latest (e.g., max(assign_date))
> assignment for each reviewer, plus the acc_id field from "accept". I
> think I should be able to do this with a GROUP BY clause, but am having no
> luck.

Some vagueness: you didn't say whether you wanted to see two assignments if they have the same, latest date. Nor did you specify whether you wanted to see assignments that had not been accepted (the below assumes yes to both).

Hmmm ... one way, SQL-standard:

    SELECT reviewers.name, assign_date, acc_id
    FROM reviewers JOIN assign ON reviewers.id = assign.reviewer_id
        LEFT OUTER JOIN accept ON assign.id = accept.assign_id
    WHERE assign_date IN (SELECT max(ass2.assign_date) FROM assign ass2
        WHERE ass2.reviewer_id = reviewers.id)

or for a bit faster execution on PG you can replace that WHERE clause with:

    WHERE assign_date IN (SELECT ass2.assign_date FROM assign ass2
        WHERE ass2.reviewer_id = reviewers.id
        ORDER BY ass2.assign_date DESC LIMIT 1)

--
Josh Berkus
Aglio Database Solutions
San Francisco
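Since the thread is PostgreSQL-specific anyway, a DISTINCT ON variant is worth sketching. This is a non-standard PostgreSQL extension, and the table/column names (reviewers.id, reviewers.name, etc.) are assumed to match the layout used above; it returns exactly one row per reviewer, breaking same-date ties arbitrarily:

```sql
-- One latest assignment per reviewer. DISTINCT ON keeps the first row of
-- each reviewers.id group under the given ORDER BY, so assign_date DESC
-- makes that first row the latest assignment. Ties on assign_date are
-- broken arbitrarily; add further ORDER BY columns to make it deterministic.
SELECT DISTINCT ON (reviewers.id)
       reviewers.name, assign.assign_date, accept.accept_id
FROM reviewers
JOIN assign ON reviewers.id = assign.reviewer_id
LEFT OUTER JOIN accept ON assign.id = accept.assign_id
ORDER BY reviewers.id, assign.assign_date DESC;
```

Unlike the IN-subselect form, this never returns two rows for a reviewer whose two latest assignments share a date, which may or may not be what you want.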
Re: [SQL] Complicated "group by" question
Andrew,

If assign is not a many-to-many relation, why did you not fold accept_id into assign? Anyway, here is the query you need:

    SELECT assign.reviewer_id, ss.max_assign_date,
           accept.assign_id, accept.accept_id
    FROM (SELECT reviewer_id, max(assign_date) AS max_assign_date
          FROM assign
          GROUP BY reviewer_id) AS ss,
         assign, accept
    WHERE ss.reviewer_id = assign.reviewer_id
      AND ss.max_assign_date = assign.assign_date
      AND assign.assign_id = accept.assign_id;

Andrew Perrin wrote:
> I have a table of people ("reviewers"), a table of review assignments
> ("assign"), and a table of review acceptances ("accept"). I would like
> to be able to write a query to return the latest (e.g., max(assign_date))
> assignment for each reviewer, plus the acc_id field from "accept". I
> think I should be able to do this with a GROUP BY clause, but am having
> no luck.
> [snip]
Re: [SQL] Complicated "group by" question
Excellent - thanks, Josh!

--
Andrew J Perrin - http://www.unc.edu/~aperrin
Assistant Professor of Sociology, U of North Carolina, Chapel Hill
[EMAIL PROTECTED] * andrew_perrin (at) unc.edu

On Wed, 25 Aug 2004, Josh Berkus wrote:
> Hmmm ... one way, SQL-standard:
> [snip]
[SQL] problem with RULE
I have two tables: one is test_main, and the second is named result. I also have a view which sums the values from the test_main table and groups them by the id column:

    CREATE TABLE test_main(id varchar(4), value int4);
    CREATE TABLE result(id varchar(4), value int4);

    CREATE VIEW summing AS
        SELECT test_main.id, sum(test_main.value) AS suma
        FROM test_main
        GROUP BY test_main.id;

And here is my problem: I want to make a rule which will insert a new row into table result containing the id and the SUM value for that id.

    CREATE RULE tester AS ON INSERT TO test_main
        DO INSERT INTO "result" (id, value)
        VALUES (new.id, summing.suma);

So, when this rule is executed for these INSERT commands:

    insert into test_main values('0003', 100);
    insert into test_main values('0004', 100);

I get in table result:

    0003, 100
    0004, 100
    0004, 100

and I need:

    0003, 100
    0004, 100

Can anybody help me in doing this? Thanks in advance
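The duplicate rows come from the uncorrelated reference to summing: the rule's INSERT implicitly joins against the whole view, so it emits one row per existing group. One sketch of a fix, using the tables above, is to correlate the lookup to the row being inserted:

```sql
-- Insert exactly one summary row per INSERT on test_main, for the id
-- that was just inserted. The WHERE clause restricts the implicit join
-- to the one matching group in the summing view.
CREATE RULE tester AS ON INSERT TO test_main
    DO INSERT INTO result (id, value)
       SELECT new.id, suma
       FROM summing
       WHERE summing.id = new.id;
```

Rules are macro-expanded into the original query, so whether the rule's SELECT already sees the just-inserted row depends on command-counter behavior in your server version; if the sums come out one insert behind, an AFTER INSERT trigger is the more predictable tool for this kind of bookkeeping.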
Re: [SQL] olympics ranking query
That 'running aggregate' notion comes up quite regularly, and it has a number of nonintuitive holes, to do with what happens when your ordering allows for ties. ASTRID had it, RedBrick SQL had it, the latest MSSQL has it ... not necessarily a recommendation.

Tom Lane wrote:
> David Garamond <[EMAIL PROTECTED]> writes:
>> This is not quite the same. The ranks are sequential, but they skip,
>> so as to match the number of participating countries.
>
> Oh, I missed that bit. What you really want here is a "running sum"
> function, that is
>
>     SELECT running_sum(numranker) as rank, *
>     FROM (same subselect as before) ss;
>
> There is no such thing in standard SQL, because it's fundamentally
> dependent on the assumption of the input data coming in a particular
> order, which is Evil Incarnate according to the relational worldview.
> But it's not too hard to do in PLs that allow persistent state. I
> recall Elein having exhibited one in plpython(?) not too long ago ---
> you might find it on techdocs or varlena.com. You could brute-force it
> with a subselect (essentially "SUM(numranker) over all rows that
> should precede this one") but that would involve recomputing the whole
> subselect for each output row, which doesn't seem very attractive.
>
> regards, tom lane
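Tom's brute-force variant can be written out directly. The names here are placeholders, not from the original thread: assume the earlier subselect has been materialized as a table ranked(score, numranker), where numranker is the number of countries sharing a score and larger score sorts first:

```sql
-- Olympic-style ranking: a row's rank is 1 plus the number of countries
-- that strictly precede it, i.e. the running sum of numranker over all
-- better-scoring groups. As Tom notes, the correlated subselect is
-- re-evaluated for every output row, so this is O(n^2).
SELECT (SELECT coalesce(sum(r2.numranker), 0) + 1
        FROM ranked r2
        WHERE r2.score > r.score) AS rank,
       r.*
FROM ranked r
ORDER BY rank;
```

With this formulation, two groups tied on score get the same rank and the next group skips ahead by the size of the tie, which is the "skipping" behavior David described.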
[SQL] help with schema changes to live system.
Hi Gurus,

Please forgive this naive question. Say I have a table (containerId, itemId) where for each containerId there are several rows (with different itemId values). Now I want to give the rows with the same containerId a sequence, so I add one column to make it (containerId, itemId, sequence). How do I make such changes through SQL? I know how to alter the table to add that new column, but how do I populate the existing rows in that new column? Note the sequence does not need to conform to any sorting of the other columns; as long as there is a sequence, it is OK.

For example, if I already have:

    containerId | itemId
    ------------+-------
              1 |   1001
              1 |   1002
              1 |   1003
              2 |   2001
              2 |   2002

then either

    containerId | itemId | sequence
    ------------+--------+---------
              1 |   1001 |        0
              1 |   1002 |        1
              1 |   1003 |        2
              2 |   2001 |        1
              2 |   2002 |        0

or

    containerId | itemId | sequence
    ------------+--------+---------
              1 |   1001 |        2
              1 |   1002 |        1
              1 |   1003 |        0
              2 |   2001 |        0
              2 |   2002 |        1

are all OK.

Thanks,
Gnale
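A sketch of the whole change, assuming the table is named container_items (the poster never gives a name). The correlated count assigns each row its 0-based position within its container; the ordering it produces is arbitrary but stable, which the poster says is acceptable, and it is gap-free as long as itemId is unique per container:

```sql
-- Step 1: add the new column (NULL for all existing rows).
ALTER TABLE container_items ADD COLUMN sequence integer;

-- Step 2: number each row within its container by counting how many
-- rows in the same container have a smaller itemId.
UPDATE container_items
SET sequence = (SELECT count(*)
                FROM container_items ci2
                WHERE ci2.containerId = container_items.containerId
                  AND ci2.itemId < container_items.itemId);
```

On a live system the UPDATE rewrites every row and holds row locks for the duration, so on a large table it may be worth batching it by containerId ranges.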
[SQL] view triggers/procedures
Hi,

I have a table in which a lot of triggers were created, as shown below. How do I view/access triggers and procedures from PostgreSQL? I am using PostgreSQL 7.4.1 on Solaris.

regds
-raju

Process=# \d reviews
                                   Table "public.reviews"
      Column      |         Type          |                         Modifiers
------------------+-----------------------+-----------------------------------------------------------
 review_id        | integer               | not null default nextval('"reviews_review_id_seq"'::text)
 project          | character varying(32) | not null
 phase            | character varying(32) | not null default 'Unknown'::character varying
 artifact_type    | character varying(32) | not null
 description      | text                  |
 author           | character varying(32) |
 moderator        | character varying(32) |
 meeting_end_date | date                  |
 meeting_date     | date                  |
 formal           | boolean               | default true
 status           | character varying(16) |
 disposition      | character varying(16) | default 'To be decided'::character varying
 comments         | text                  |
 num_attendees    | smallint              |
 review_size      | smallint              |
 kickoff_time     | smallint              |
 meeting_time     | smallint              |
 review_name      | character varying(64) |
 rework_effort    | integer               | default 0
Indexes:
    "reviews_pkey" primary key, btree (review_id)
Triggers:
    "RI_ConstraintTrigger_414655" AFTER DELETE ON reviews FROM documents NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_noaction_del"('', 'documents', 'reviews', 'UNSPECIFIED', 'review_id', 'review_id')
    "RI_ConstraintTrigger_414656" AFTER UPDATE ON reviews FROM documents NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_noaction_upd"('', 'documents', 'reviews', 'UNSPECIFIED', 'review_id', 'review_id')
    "RI_ConstraintTrigger_414658" AFTER DELETE ON reviews FROM inspectors NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_noaction_del"('', 'inspectors', 'reviews', 'UNSPECIFIED', 'review_id', 'review_id')
    "RI_ConstraintTrigger_414659" AFTER UPDATE ON reviews FROM inspectors NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_noaction_upd"('', 'inspectors', 'reviews', 'UNSPECIFIED', 'review_id', 'review_id')
    "RI_ConstraintTrigger_414661" AFTER DELETE ON reviews FROM defects NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_noaction_del"('', 'defects', 'reviews', 'UNSPECIFIED', 'review_id', 'review_id')
    "RI_ConstraintTrigger_414662" AFTER UPDATE ON reviews FROM defects NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE "RI_FKey_noaction_upd"('', 'defects', 'reviews', 'UNSPECIFIED', 'review_id', 'review_id')
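Besides \d, the trigger and function definitions can be read straight out of the system catalogs. A sketch for 7.4, listing each trigger on a given table together with the function it calls (the table name 'reviews' is from the \d output above; substitute your own):

```sql
-- pg_trigger holds one row per trigger; tgrelid points at the table's
-- pg_class row and tgfoid at the trigger function's pg_proc row.
SELECT t.tgname  AS trigger_name,
       p.proname AS function_name,
       c.relname AS table_name
FROM pg_trigger t
JOIN pg_class c ON c.oid = t.tgrelid
JOIN pg_proc  p ON p.oid = t.tgfoid
WHERE c.relname = 'reviews';
```

For the body of a user-defined procedure, pg_proc.prosrc holds the source (for C-language functions like the RI_FKey_* ones above it only names the entry point); SELECT prosrc FROM pg_proc WHERE proname = 'your_function' shows it.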
[SQL] ERROR: Invalid regular expression: parentheses ( ) not balanced
Below is a copy of my SQL statement, including the error I am getting. What does the error mean? What have I done wrong?

    kathrirs=# insert into faglaerer
    kathrirs-# values ('f-001', '13056802876', 'Petter Lær',
    lo_import('/home/studenter/it03/kathrirs/img/img01.jpg'),
    kathrirs(# '2001-12-25', 100, 'Professor', 'A', 1,
    '{"linux","programmering","matematikk","neutrale nettverk"}',
    kathrirs(# 12, '[EMAIL PROTECTED]', '8212');
    ERROR:  Invalid regular expression: parentheses ( ) not balanced
[SQL] CREATE TYPE VARCHAR2
Hi,

I am trying to emulate an Oracle database with PostgreSQL. My problem is that PostgreSQL does not support the Oracle-specific types: PostgreSQL provides TEXT, while Oracle uses the CLOB or VARCHAR2 type. I would like to use the CREATE TYPE statement to tell PostgreSQL about the Oracle types, but the documentation does not describe how to define a simple alias type. I would like to define a new type VARCHAR2 which should behave exactly like VARCHAR. Is this possible with PostgreSQL? That way it would be possible to run the Oracle create-table scripts without modification.

cu Sascha
--
secunet Security Networks AG, Im Teelbruch 116, 45219 Essen
Tel: +49-2054-123-408  Fax: +49-2054-123-123
PGP: FBE2 A49B 6526 C1B4 7F10 24E4 5004 7C27 6E9A 9698
Visit us at DMS-Expo, 07-09 September 2004 in Essen,
Halle 2, Stand 2435 c/d (Consulting Corner).
[SQL] refer to computed columns
Hi,

Think of the following query:

    1  select
    2      country,
    3      (select sum(salary) from employees) as totalSalary,
    4      sum(salary) as countrySalary,
    5      countrySalary / totalSalary as countryPct
    6  from employees
    7  group by country
    8  order by country;

I know it does not work, but is there a way to refer to the "computed" columns in line 5, so that I do not have to repeat the calculations again?

BTW, are the calculations done more than once if I specify:

    1  select
    2      country,
    3      (select sum(salary) from employees) as totalSalary,
    4      sum(salary) as countrySalary,
    5      sum(salary) / (select sum(salary) from employees) as countryPct
    6  from employees
    7  group by country
    8  order by country;

Thanks for the help
Bruno
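SQL does not let a select-list item refer to another item's alias at the same level, but pushing the calculations one level down into a subselect in FROM makes the labels visible one level up. A sketch of the same query rewritten that way:

```sql
-- Compute the sums once in the inner query, then refer to them by name
-- in the outer one. If salary is an integer type, cast one operand
-- (e.g. countrySalary::float8) before dividing, or countryPct will be
-- computed with integer division and come out as 0.
SELECT country,
       totalSalary,
       countrySalary,
       countrySalary / totalSalary AS countryPct
FROM (SELECT country,
             (SELECT sum(salary) FROM employees) AS totalSalary,
             sum(salary) AS countrySalary
      FROM employees
      GROUP BY country) AS sub
ORDER BY country;
```

Whether the uncorrelated (select sum(salary) from employees) subselect is evaluated once or per group is up to the planner; EXPLAIN on your own version will show an InitPlan when it is evaluated only once.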
[SQL] Optimizer Selecting Incorrect Index
I have 2 servers, both with the exact same data, the same O.S., the same version of Postgres (7.4.5) and the exact same db schemas (one production server, one development server). One server is using the wrong index for SQL queries, resulting in extremely slow performance; the other server is properly selecting the index to use and performance is many times better. I have tried vacuum, but that did not work. I finally resorted to dumping the data, removing the database completely, creating a new database and importing the data, only to have the problem resurface.

The table has 5,000,000+ rows on both systems. When I run 'analyze verbose' on the correctly working system, the following is displayed:

    {INDEXSCAN :startup_cost 0.00 :total_cost 465.10 :plan_rows 44 :plan_width 118
     :targetlist (
       {TARGETENTRY
        :resdom {RESDOM :resno 1 :restype 23 :restypmod -1 :resname trn_integer
                 :ressortgroupref 0 :resorigtbl 789839 :resorigcol 1 :resjunk false }
        :expr {VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varlevelsup 0
               :varnoold 1 :varoattno 1 } }
       {TARGETENTRY
        :resdom {RESDOM :resno 2 :restype 23 :restypmod -1 :resname trn_patno
                 :ressortgroupref 0 :resorigtbl 789839 :resorigcol 2 :resjunk false }
        :expr {VAR :varno 1 :varattno 2 :vartype 23 :vartypmod -1 :varlevelsup 0
               :varnoold 1 :varoattno 2 } }
       {TARGETENTRY
        :resdom {RESDOM :resno 3 :restype 1042 :restypmod 5 :resname trn_bill_inc
                 :ressortgroupref 0 :resorigtbl 789839 :resorigcol 3 :resjunk false }
        :expr {VAR :varno 1 :varattno 3 :vartype 1042 :vartypmod 5 :varlevelsup 0
               :varnoold 1 :varoattno 3 } }
       {TARGETENTRY
        :resdom {RESDOM :resno 4 :restype 1043 :restypmod 13 :resname trn_userid
                 :ressortgroupref 0 :resorigtbl 789839 :resorigcol 4 :resjunk false }
        :expr {VAR :varno 1 :varattno 4 :vartype 1043 :vartypmod 13 :varlevelsup 0
               :varnoold 1 :varoattno 4 } }
       {TARGETENTRY
        :resdom {RESDOM :resno 5 :restype 23 :restypmod -1 :resname trn_location
                 :ressortgroupref 0 :resorigtbl 789839 :resorigcol 5 :resjunk false }
        :expr {VAR :varno 1 :varattno 5 :vartype 23 :vartypmod -1 :varlevelsup 0
               :varnoold 1 :varoattno 5 } }
       {TARGETENTRY
        :resdom {RESDOM :resno 6 :restype 1082 :restypmod -1 :resname trn_date
                 :ressortgroupref 0 :resorigtbl 789839 :resorigcol 6 :resjunk false }
        :expr {VAR :varno 1 :varattno 6 :vartype 1082 :vartypmod -1 :varlevelsup 0
               :varnoold 1 :varoattno 6 } }
       {TARGETENTRY
        :resdom {RESDOM :resno 7 :restype 23 :restypmod -1 :resname trn_sercode
                 :ressortgroupref 0 :resorigtbl 789839 :resorigcol 7 :resjunk false }
        :expr {VAR :varno 1 :varattno 7 :vartype 23 :vartypmod -1 :varlevelsup 0
               :varnoold 1 :varoattno 7 } }
       {TARGETENTRY
        :resdom {RESDOM :resno 8 :restype 1043 :restypmod 28 :resname trn_descr
                 :ressortgroupref 0 :resorigtbl 789839 :resorigcol 8 :resjunk false }
        :expr {VAR :varno 1 :varattno 8 :vartype 1043 :vartypmod 28 :varlevelsup 0
               :varnoold 1 :varoattno 8 } }
       {TARGETENTRY
        :resdom {RESDOM :resno 9 :restype 23 :restypmod -1 :resname trn_employr
                 :ressortgroupref 0 :resorigtbl 789839 :resorigcol 9 :resjunk false }
        :expr {VAR :varno 1 :varattno 9 :vart
Re: [SQL] CREATE TYPE VARCHAR2
Sascha Ziemann <[EMAIL PROTECTED]> writes:
> I would like to define a new type VARCHAR2 which should behave exactly
> like VARCHAR.

You could get about halfway there with

    CREATE DOMAIN varchar2 AS varchar;

But it's only halfway because the domain will not accept length decorations; that is, "varchar2(100)" will not work.

regards, tom lane
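Spelled out, the halfway solution Tom describes looks like this (table names t and t2 are illustrative only):

```sql
-- An alias type for varchar: values, operators and casts all behave
-- exactly as for varchar itself.
CREATE DOMAIN varchar2 AS varchar;

-- Works: unadorned use of the domain.
CREATE TABLE t (name varchar2);
INSERT INTO t VALUES ('hello');

-- Fails: domains do not accept a length decoration, so Oracle DDL
-- such as varchar2(100) still needs rewriting.
-- CREATE TABLE t2 (name varchar2(100));  -- ERROR
```

So unmodified Oracle scripts only load if they happen to use VARCHAR2 without lengths, which in practice is rare; a search-and-replace of varchar2(n) with varchar(n) remains the usual workaround.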
Re: [SQL] ERROR: Invalid regular expression: parentheses ( ) not balanced
"Kathrine S" <[EMAIL PROTECTED]> writes:
> Below is a copy of my sql sentence including the error I am getting.
> What does the error mean? What have I done wrong?

> kathrirs=# insert into faglaerer
> kathrirs-# values ('f-001', '13056802876', 'Petter Lær',
> lo_import('/home/studenter/it03/kathrirs/img/img01.jpg'),
> kathrirs(# '2001-12-25', 100, 'Professor', 'A', 1,
> '{"linux","programmering","matematikk","neutrale nettverk"}',
> kathrirs(# 12, '[EMAIL PROTECTED]', '8212');
> ERROR: Invalid regular expression: parentheses ( ) not balanced

There's no regular expression in what you've shown us. Maybe you have rules or triggers that are fired by this INSERT? If so, you need to look at what they are doing.

regards, tom lane
Re: [SQL] ERROR: Invalid regular expression: parentheses ( ) not balanced
On Wednesday 25 August 2004 07:21 pm, Tom Lane wrote:
> "Kathrine S" <[EMAIL PROTECTED]> writes:
> > Below is a copy of my sql sentence including the error I am getting.
> > What does the error mean? What have I done wrong?
> >
> > kathrirs=# insert into faglaerer
> > kathrirs-# values ('f-001', '13056802876', 'Petter Lær',
> > lo_import('/home/studenter/it03/kathrirs/img/img01.jpg'),   <--1
> > kathrirs(# '2001-12-25', 100, 'Professor', 'A', 1,          <--2
> > '{"linux","programmering","matematikk","neutrale nettverk"}',
> > kathrirs(# 12, '[EMAIL PROTECTED]', '8212');
> > ERROR: Invalid regular expression: parentheses ( ) not balanced
>
> There's no regular expression in what you've shown us. Maybe you have
> rules or triggers that are fired by this INSERT? If so, you need to
> look at what they are doing.
>
> regards, tom lane

I don't know what to make of it, but the problem seems to start at the lines I have marked 1 & 2 and involves the lo_import function. I have not used lo_import and so do not know how to call it. What I do see is that it is called at 1, and a '(' shows up to the left of the prompt at 2 and stays there, indicating to me at least that the parser is not happy.

--
Adrian Klaver
[EMAIL PROTECTED]
Re: [SQL] ERROR: Invalid regular expression: parentheses ( ) not balanced
Adrian Klaver <[EMAIL PROTECTED]> writes:
> I don't know what to make of it but the problem seems to start at the
> lines I have marked 1 & 2 and involves the lo_import function. I have
> not used lo_import and so do not know how to call it. What I do see is
> that it is called at 1 and a '(' shows up to the left of the prompt at
> 2 and stays there indicating to me at least the parser is not happy.

No, that's just because psql has noticed that the left paren following VALUES is unmatched. I suppose that what Kathrine is showing us is line-wrapped text and that the lo_import function call was actually typed on the same line as "values(...", since there's not another psql prompt visible there.

But in any case, the given error message could only have come from a regex match operator (~ or ~*) that does not like the pattern operand it was given. Since there's no ~ operator in the given query, I have to suppose that the problem is in something that's being invoked behind-the-scenes, like a trigger or rule.

regards, tom lane
[SQL] Aggregate query for multiple records
Hello,

I am new to the list; my apologies if this question is beyond the scope or charter of this list.

My question is: what is the best method to perform an aggregate query that calculates sum() values for each distinct wid, as in the example below, but for all wids (not just WHERE wid='01/1-6-1-30w1/0')? Also, performance-wise, would it be better to build a function for this query? The table has 9 million records and these aggregate queries take hours.

    SELECT SUM(oil) AS sumoil, SUM(hours) AS sumhours
    FROM (SELECT oil, hours FROM prd_data
          WHERE wid='01/1-6-1-30w1/0'
          ORDER BY date LIMIT 6) subtable;

Table description:

         Table "prd_data"
     Column | Type                  | Modifiers
    --------+-----------------------+-----------
     date   | integer               |
     hours  | real                  |
     oil    | real                  |
     gas    | real                  |
     water  | real                  |
     pwid   | integer               |
     wid    | character varying(20) |
     year   | smallint              |
    Indexes: wid_index6

Actual table (prd_data), 9 million records:

      date  | hours |  oil  | gas  | water | pwid  |       wid        | year
    --------+-------+-------+------+-------+-------+------------------+------
     196507 |   360 | 159.4 | 11.3 |  40.9 |   413 | 01/1-1-1-31w1/0  | 1965
     196508 |   744 |   280 |   20 |  27.2 |   413 | 01/1-1-1-31w1/0  | 1965
     196509 |   360 | 171.1 | 11.4 |  50.4 |   413 | 01/1-1-1-31w1/0  | 1965
     196510 |   744 | 202.1 |   25 |  89.8 |   413 | 01/1-1-1-31w1/0  | 1965
     196512 |   744 | 201.3 | 23.8 |  71.9 |   413 | 01/1-1-1-31w1/0  | 1965
     196511 |   720 |   184 | 17.6 |  78.9 |   413 | 01/1-1-1-31w1/0  | 1965
     196610 |   744 |  99.8 | 15.4 |  53.7 |   413 | 01/1-1-1-31w1/0  | 1966
     196612 |   744 |    86 | 12.8 |  36.1 |   413 | 01/1-1-1-31w1/0  | 1966
     196611 |   720 |    86 | 12.6 |  41.7 |   413 | 01/1-1-1-31w1/0  | 1966
     196601 |   744 | 191.6 | 22.6 |  50.7 |   413 | 01/1-1-1-31w1/0  | 1966
     200301 |   461 |  68.8 |    0 | 186.3 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200310 |   740 | 446.3 |    0 | 563.1 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200306 |   667 |  92.1 |    0 | 968.8 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200304 |     0 |     0 |    0 |     0 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200308 |   457 | 100.7 |    0 |  82.8 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200307 |   574 |    78 |    0 |   752 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200312 |   582 | 360.9 |    0 |   569 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200311 |   681 | 260.8 |    0 | 563.9 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200305 |   452 |     0 |    0 |     0 | 47899 | 9G/6-1-50-24w3/0 | 2003
     200309 |   637 | 244.6 |    0 | 193.8 | 47899 | 9G/6-1-50-24w3/0 | 2003
    (20 rows)

Thanks,
--
Scott A. Gerhardt, P.Geo.
Gerhardt Information Technologies
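For the all-wids case, dropping the WHERE and grouping by wid computes every well's totals in a single sequential scan, which on a 9-million-row table should beat running the per-wid query in a loop by a wide margin:

```sql
-- Totals for every wid in one pass over prd_data.
SELECT wid,
       sum(oil)   AS sumoil,
       sum(hours) AS sumhours
FROM prd_data
GROUP BY wid;
```

Note this sums all rows per wid, not just the first six by date as the single-well example did; replicating that "first N per group" restriction for every group has no clean declarative form before window functions, so if the LIMIT 6 really matters, a correlated subquery on date or a procedural (e.g. PL/pgSQL) loop over the distinct wids is needed.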