[GENERAL] Connection Oracle database from Postgres function
I want to access an Oracle database from Postgres. One way is to use DBI-Link together with a plperl function in Postgres to make the connection to Oracle. While trying that, I am getting the following error.

Environment: Solaris 9 SPARC, PostgreSQL 8.0, DBI-Link, Perl 5.8.5

My function:

CREATE FUNCTION perl_max (integer, integer) RETURNS integer AS $$
    if ($_[0] > $_[1]) { return $_[0]; }
    return $_[1];
$$ LANGUAGE plperl;

On running this function from the sql prompt, I get this error and the connection with the database is lost:

test=# select perl_max(1,2);
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
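For what it's worth, when the DBI connection is made directly from PL/Perl it has to be the untrusted variant, plperlu, since trusted plperl is not allowed to load external modules such as DBI. A minimal sketch of a direct connection (the DSN, user, and password below are placeholders, not values from the original post):

```sql
CREATE FUNCTION oracle_ping() RETURNS text AS $$
    use DBI;
    # DSN and credentials are placeholders -- substitute your own.
    my $dbh = DBI->connect('dbi:Oracle:host=orahost;sid=ORCL',
                           'scott', 'tiger', { RaiseError => 1 });
    my ($banner) = $dbh->selectrow_array(
        q{SELECT banner FROM v$version WHERE rownum = 1});
    $dbh->disconnect;
    return $banner;
$$ LANGUAGE plperlu;
```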
[GENERAL] odbc and psql 8.1.0
Hi, has anybody experienced any problems with the ODBC driver that comes with Postgres 8.1.0? I was testing my application against beta 4 and everything was working OK. Now I installed 8.1.0 plus the ODBC driver, and my application is returning wrong results. The application is the same; not one modification was done. If I run my query from within the interactive SQL editor of pgAdmin, the query returns what it is supposed to return; the same query from my application doesn't work (I tested the query in PowerBuilder 8 and VA Smalltalk 7, both with the same wrong result). I would appreciate any hint on this. Thanks in advance, Hugo

Below is the query. It should return rows with D's and P's, but it returns only rows with D's. If I count the rows I realize that it returns all the rows it should (642), but all of them with D's; if I run the same query from pgAdmin it returns all the rows (642) with the correct values, 412 P's and 230 D's.

SELECT DISTINCT 'D' AS estado, zona, ano, campana, contrato
  FROM comisiones.avon AS avon
 WHERE zona = :zona AND ano = :ano AND campana = :campana
   AND tipo IN ('50') AND subtipo IN ('02', '03')
 GROUP BY zona, ano, campana, contrato
UNION
SELECT DISTINCT 'D' AS estado, zona, ano, campana, contrato
  FROM comisiones.viru AS viru
 WHERE zona = :zona AND ano = :ano AND campana = :campana
   AND tipo IN ('50') AND subtipo IN ('02', '03')
 GROUP BY zona, ano, campana, contrato
UNION
SELECT DISTINCT 'P' AS estado, zona, ano, campana, contrato
  FROM comisiones.avon AS avon
 WHERE zona = :zona AND ano = :ano AND campana = :campana
   AND tipo IN ('10', '20') AND subtipo IN ('01')
   AND contrato NOT IN (
       SELECT DISTINCT contrato FROM comisiones.viru AS viru
        WHERE zona = :zona AND ano = :ano AND campana = :campana
          AND tipo IN ('10', '20') AND subtipo IN ('01'))
 GROUP BY zona, ano, campana, contrato
UNION
SELECT DISTINCT 'P' AS estado, zona, ano, campana, contrato
  FROM comisiones.viru AS viru
 WHERE zona = :zona AND ano = :ano AND campana = :campana
   AND tipo IN ('10', '20') AND subtipo IN ('01')
   AND contrato NOT IN (
       SELECT DISTINCT contrato FROM comisiones.avon AS avon
        WHERE zona = :zona AND ano = :ano AND campana = :campana
          AND tipo IN ('10', '20') AND subtipo IN ('01'))
 GROUP BY zona, ano, campana, contrato
UNION
SELECT DISTINCT 'P' AS estado, avon.zona AS zona, avon.ano AS ano,
       avon.campana AS campana, avon.contrato
  FROM comisiones.avon AS avon, comisiones.viru AS viru
 WHERE avon.zona = viru.zona AND avon.contrato = viru.contrato
   AND avon.ano = viru.ano AND avon.campana = viru.campana
   AND avon.zona = :zona AND avon.ano = :ano AND avon.campana = :campana
   AND avon.tipo IN ('10', '20') AND avon.subtipo IN ('01')
   AND viru.tipo IN ('10', '20') AND viru.subtipo IN ('01')
[GENERAL] Outer join with where conditions
Hello, I wonder if I could move an additional join condition from the ON part of a query to the WHERE part. For example, instead of:

SELECT *
  FROM booking_load AS bload
  LEFT OUTER JOIN dict_load_type_tsl AS load_tsl
    ON (load_tsl.dict_load_type_id = bload.dict_load_type_id
        AND load_tsl.dict_language_id = 'EN')

could I write:

SELECT *
  FROM booking_load AS bload
  LEFT OUTER JOIN dict_load_type_tsl AS load_tsl USING (dict_load_type_id)
 WHERE load_tsl.dict_language_id = 'EN'

I thought that the second query could be more 'optimizable', but it gave no results. Where is the problem?

Best regards, Michal

---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Outer join with where conditions
On Mon, 14 Nov 2005, Michał Otroszczenko wrote:

> I wonder if I could move an additional join condition from the ON part of a
> query to the WHERE part. For example, instead of:
>
> SELECT *
>   FROM booking_load AS bload
>   LEFT OUTER JOIN dict_load_type_tsl AS load_tsl
>     ON (load_tsl.dict_load_type_id = bload.dict_load_type_id
>         AND load_tsl.dict_language_id = 'EN')
>
> could I write:
>
> SELECT *
>   FROM booking_load AS bload
>   LEFT OUTER JOIN dict_load_type_tsl AS load_tsl USING (dict_load_type_id)
>  WHERE load_tsl.dict_language_id = 'EN'
>
> I thought that the second query could be more 'optimizable', but it gave no
> results. Where is the problem?

This is probably due to the difference in semantics between the two queries as we interpret them. We treat a condition in ON as part of the join itself, therefore the first query is basically "join rows of booking_load with rows of dict_load_type_tsl that have a dict_language_id equal to 'EN' and the same dict_load_type_id, and if no such rows in dict_load_type_tsl are found, extend with NULLs". Conditions in WHERE are conditions logically applied after the join, so the second query is "join rows of booking_load with rows of dict_load_type_tsl that have the same dict_load_type_id, and if no such rows in dict_load_type_tsl are found, extend with NULLs; then throw out any rows for which dict_language_id is not equal to 'EN'". If, for example, there wasn't a matching dict_load_type_tsl row, in the first you'd get a NULL-extended row, but in the second the row generated by the join (NULL-extended) would fail the WHERE condition and not be returned.
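The difference described above is easy to demonstrate with a pair of toy tables (table and column names here are invented for illustration, not taken from the original schema):

```sql
CREATE TABLE t (id int);
CREATE TABLE d (id int, lang text);
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
INSERT INTO d VALUES (1, 'EN');

-- Condition in ON: row 2 of t has no qualifying match in d,
-- so it is NULL-extended and kept.
SELECT * FROM t LEFT JOIN d ON d.id = t.id AND d.lang = 'EN';
--  id | id | lang
--   1 |  1 | EN
--   2 |    |

-- Condition in WHERE: the NULL-extended row for id 2 fails
-- lang = 'EN' and is thrown away, so only one row comes back.
SELECT * FROM t LEFT JOIN d ON d.id = t.id WHERE d.lang = 'EN';
--  id | id | lang
--   1 |  1 | EN
```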
[GENERAL] Fuzzy text search
We've heard that PostgreSQL can do fuzzy search, but haven't had much luck. I'm brand new to PostgreSQL, so this might be completely obvious for an experienced user. Are there any how-tos on fuzzy text searching? Someone said to try using tsearch2, but it seems that it does full-text searching, but not fuzzy search. Ideally, we'd like to use that w/ a fuzzy search option. In particular, if someone searches for imat we want to return results including immaterial and imaterial (so misspellings plus partial match). Thanks! Jen
Re: [GENERAL] Fuzzy text search
On Mon, Nov 14, 2005 at 07:46:51AM -0800, jennyw wrote:

> We've heard that PostgreSQL can do fuzzy search, but haven't had much luck.
> [...] In particular, if someone searches for imat we want to return results
> including immaterial and imaterial (so misspellings plus partial match).

In the contrib directory, there is a directory fuzzystrmatch which includes code for various forms of fuzzy matching. Good luck.

--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them.
Re: [GENERAL] Fuzzy text search
jennyw wrote:

> We've heard that PostgreSQL can do fuzzy search, but haven't had much luck.
> [...] In particular, if someone searches for imat we want to return results
> including immaterial and imaterial (so misspellings plus partial match).

Look at contrib/pg_trgm. If you want full-text search with misspelling correction, then read "Tsearch2 Integration" in contrib/pg_trgm/README.

--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]   WWW: http://www.sigaev.ru/
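To give a quick feel for what the two contrib modules offer once fuzzystrmatch and pg_trgm are installed (the words table below is a hypothetical list of candidate words, not part of either module):

```sql
-- fuzzystrmatch: edit distance catches close misspellings
SELECT levenshtein('imaterial', 'immaterial');   -- 1 (one inserted letter)
SELECT soundex('imaterial') = soundex('immaterial');

-- pg_trgm: trigram similarity handles partial matches like 'imat'
SELECT similarity('imat', 'immaterial');
SELECT word FROM words
 WHERE word % 'imaterial'          -- '%' is pg_trgm's similarity operator
 ORDER BY similarity(word, 'imaterial') DESC;
```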
Re: [GENERAL] Fuzzy text search
On 14.11.2005 16:46, jennyw wrote:

> We've heard that PostgreSQL can do fuzzy search, but haven't had much luck.
> [...]

tsearch2 + pg_trgm might be what you are looking for; both can be found in the contrib directory.

--
Regards, Hannes Dorbath
Re: [GENERAL] Outer join with where conditions
On Mon, Nov 14, 2005 at 14:45:22 +0100, Michał Otroszczenko [EMAIL PROTECTED] wrote:

> Hello, I wonder if I could move an additional join condition from the ON part
> of a query to the WHERE part.

Yes, but the semantics are different for outer joins.

> For example, instead of:
>
> SELECT *
>   FROM booking_load AS bload
>   LEFT OUTER JOIN dict_load_type_tsl AS load_tsl
>     ON (load_tsl.dict_load_type_id = bload.dict_load_type_id
>         AND load_tsl.dict_language_id = 'EN')
>
> could I write:
>
> SELECT *
>   FROM booking_load AS bload
>   LEFT OUTER JOIN dict_load_type_tsl AS load_tsl USING (dict_load_type_id)
>  WHERE load_tsl.dict_language_id = 'EN'
>
> I thought that the second query could be more 'optimizable', but it gave no
> results. Where is the problem?
>
> Best regards, Michal
Re: [GENERAL] Outer join with where conditions
> Conditions in WHERE are conditions logically applied after the join, so the
> second query is "join rows of booking_load with rows of dict_load_type_tsl
> that have the same dict_load_type_id, and if no such rows in
> dict_load_type_tsl are found, extend with NULLs; then throw out any rows for
> which dict_language_id is not equal to 'EN'".

Thank you for the extensive explanation. I supposed that it is like that, but I couldn't find confirmation in the docs.

Regards, Michal
Re: [GENERAL] Choosing PostgreSQL as the database for our next project
Scott Ribe wrote:

>> From what I understand this will be possible but I may need to roll my own
>> replication code to handle data conflict issues? Especially since center 1/2
>> may be down at the same time and then might change the same data. How do the
>> tools for this compare against the ones from Oracle?
>
> [snip some useful inputs]
>
> There is no simple solution, and the out-of-the-box solutions require a whole
> lotta configuration work.

That is what I have been suspecting. Luckily the databases aren't that complex, so we may be able to fix them so that there will be no conflicts (in my dreams perhaps...) that need manual intervention. Anyhow, are there any (big) companies that will do support for a PostgreSQL installation?

-- Johnny Ljunggren, Vestlia 6, 3080 HOLMESTRAND, 918 50 411
[GENERAL] Bug with index-usage?
Hello, I get unpredictable results selecting from a view depending on index usage. Please see the attached script for details. Is it a bug or some weird feature? Any help appreciated to get predictable results.

Sebastian

CREATE TABLE test1 (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);

INSERT INTO test1 (name) VALUES ('test1_1');
INSERT INTO test1 (name) VALUES ('test1_2');
INSERT INTO test1 (name) VALUES ('test1_3');

CREATE TABLE test2 (
    id SERIAL PRIMARY KEY,
    type TEXT NOT NULL CHECK (type IN ('a','b','c')),
    test1_id INTEGER REFERENCES test1
);

INSERT INTO test2 (type,test1_id) VALUES ('a',1);
INSERT INTO test2 (type,test1_id) VALUES ('a',2);
INSERT INTO test2 (type,test1_id) VALUES ('a',3);
INSERT INTO test2 (type,test1_id) VALUES ('b',1);
INSERT INTO test2 (type,test1_id) VALUES ('b',2);
INSERT INTO test2 (type,test1_id) VALUES ('b',3);
INSERT INTO test2 (type,test1_id) VALUES ('c',1);
INSERT INTO test2 (type,test1_id) VALUES ('c',2);
INSERT INTO test2 (type,test1_id) VALUES ('c',3);

CREATE OR REPLACE VIEW test AS
    SELECT test2.*
      FROM test2
      LEFT JOIN test2 AS t2
        ON test2.type IN ('c','b') AND t2.type = 'a';

SELECT * from test WHERE type = 'a';

CREATE INDEX index_a ON test2 (id) WHERE type = 'a';
CREATE INDEX index_b ON test2 (id) WHERE type = 'b';
CREATE INDEX index_c ON test2 (id) WHERE type = 'c';

SET enable_seqscan TO OFF;

SELECT * from test WHERE type = 'a';
Re: [GENERAL] Bug with index-usage?
On Mon, 2005-11-14 at 11:07, Sebastian Böck wrote:

> Hello, I get unpredictable results selecting from a view depending on
> index usage.

PostgreSQL uses a cost-based planner, so it tends not to use the plan you might expect, especially in toy test cases with small data sets. I.e. why use an index to look up 10 values when they all fit on the same page? Just seq scan the data from the table. Fill up your table with REAL data (or a close substitute) and test again. Also, read up on the admin section, specifically the part on the postgresql.conf file and what the settings in there mean, then read http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
Re: [GENERAL] Bug with index-usage?
Scott Marlowe wrote:

> PostgreSQL uses a cost-based planner, so it tends not to use the plan you
> might expect, especially in toy test cases with small data sets. [...]

I think you didn't test my small script or don't see the same results. I'm not talking about index usage per se, I'm talking about the results.

Without indexes I get:

SELECT * from test WHERE type = 'a';
 id | type | test1_id
----+------+----------
  1 | a    |        1
  2 | a    |        2
  3 | a    |        3
(3 rows)

But with the indexes defined I get:

SELECT * from test WHERE type = 'a';
 id | type | test1_id
----+------+----------
(0 rows)

By the way, this is 8.1 (forgot to mention it in my first mail).

Sebastian
Re: [GENERAL] Bug with index-usage?
On Mon, 2005-11-14 at 11:07, Sebastian Böck wrote:

> Hello, I get unpredictable results selecting from a view depending on
> index usage.

Also read up on vacuum, analyze, and explain analyze.
Re: [GENERAL] Bug with index-usage?
On Mon, 2005-11-14 at 11:25, Sebastian Böck wrote:

> I think you didn't test my small script or don't see the same results. I'm
> not talking about index usage per se, I'm talking about the results.
>
> Without indexes I get:
>
> SELECT * from test WHERE type = 'a';
>  id | type | test1_id
> ----+------+----------
>   1 | a    |        1
>   2 | a    |        2
>   3 | a    |        3
> (3 rows)
>
> But with the indexes defined I get:
>
> SELECT * from test WHERE type = 'a';
>  id | type | test1_id
> ----+------+----------
> (0 rows)
>
> By the way, this is 8.1 (forgot to mention it in my first mail).

I don't get this problem in 7.4. I'll try 8.1 and get back to you.
Re: [GENERAL] Bug with index-usage?
The OP was complaining about the results of the above script, which I could readily reproduce on an 8.1.0 installation on Debian (see below). The same select which returned 3 rows will return nothing after creating the partial indexes, which looks like a bug to me... I can't tell anything about why it happens, just confirm that I can reproduce it too...

Cheers,
Csaba.

cnagy=> CREATE TABLE test1 (
cnagy(>     id SERIAL PRIMARY KEY,
cnagy(>     name TEXT NOT NULL
cnagy(> );
NOTICE:  CREATE TABLE will create implicit sequence "test1_id_seq" for serial column "test1.id"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "test1_pkey" for table "test1"
CREATE TABLE
cnagy=> INSERT INTO test1 (name) VALUES ('test1_1');
INSERT 0 1
cnagy=> INSERT INTO test1 (name) VALUES ('test1_2');
INSERT 0 1
cnagy=> INSERT INTO test1 (name) VALUES ('test1_3');
INSERT 0 1
cnagy=> CREATE TABLE test2 (
cnagy(>     id SERIAL PRIMARY KEY,
cnagy(>     type TEXT NOT NULL CHECK (type IN ('a','b','c')),
cnagy(>     test1_id INTEGER REFERENCES test1
cnagy(> );
NOTICE:  CREATE TABLE will create implicit sequence "test2_id_seq" for serial column "test2.id"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "test2_pkey" for table "test2"
CREATE TABLE
cnagy=> INSERT INTO test2 (type,test1_id) VALUES ('a',1);
INSERT 0 1
cnagy=> INSERT INTO test2 (type,test1_id) VALUES ('a',2);
INSERT 0 1
cnagy=> INSERT INTO test2 (type,test1_id) VALUES ('a',3);
INSERT 0 1
cnagy=> INSERT INTO test2 (type,test1_id) VALUES ('b',1);
INSERT 0 1
cnagy=> INSERT INTO test2 (type,test1_id) VALUES ('b',2);
INSERT 0 1
cnagy=> INSERT INTO test2 (type,test1_id) VALUES ('b',3);
INSERT 0 1
cnagy=> INSERT INTO test2 (type,test1_id) VALUES ('c',1);
INSERT 0 1
cnagy=> INSERT INTO test2 (type,test1_id) VALUES ('c',2);
INSERT 0 1
cnagy=> INSERT INTO test2 (type,test1_id) VALUES ('c',3);
INSERT 0 1
cnagy=> CREATE OR REPLACE VIEW test AS
cnagy->     SELECT test2.*
cnagy->       FROM test2
cnagy->       LEFT JOIN test2 AS t2 ON
cnagy->           test2.type IN ('c','b') AND
cnagy->           t2.type = 'a';
CREATE VIEW
cnagy=> SELECT * from test WHERE type = 'a';
 id | type | test1_id
----+------+----------
  1 | a    |        1
  2 | a    |        2
  3 | a    |        3
(3 rows)

cnagy=> CREATE INDEX index_a ON test2 (id) WHERE type = 'a';
CREATE INDEX
cnagy=> CREATE INDEX index_b ON test2 (id) WHERE type = 'b';
CREATE INDEX
cnagy=> CREATE INDEX index_c ON test2 (id) WHERE type = 'c';
CREATE INDEX
cnagy=> SET enable_seqscan TO OFF;
SET
cnagy=> SELECT * from test WHERE type = 'a';
 id | type | test1_id
----+------+----------
(0 rows)

On Mon, 2005-11-14 at 18:17, Scott Marlowe wrote:

> PostgreSQL uses a cost-based planner, so it tends not to use the plan you
> might expect, especially in toy test cases with small data sets. [...]
Re: [GENERAL] Bug with index-usage?
Sebastian Böck [EMAIL PROTECTED] wrote:

> Hello, I get unpredictable results selecting from a view depending on
> index usage.
> [ snip ]
> SELECT * from test WHERE type = 'a';

Unfortunately, no result. What do you expect?

Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely unintentional side effect. (Linus Torvalds)
Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°
Re: [GENERAL] Bug with index-usage?
On 11/14/05, Sebastian Böck [EMAIL PROTECTED] wrote:

> Hello, I get unpredictable results selecting from a view depending on index
> usage. Please see the attached script for details. Is it a bug or some weird
> feature? Any help appreciated to get predictable results.
>
> [snip: the test script]

I don't have my machine at hand, but I don't think that even the select is right: you have a join but without joining clauses, so you will get a cartesian product... What do you believe is the right answer? Just for my probe later...

--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)
Re: [GENERAL] Bug with index-usage?
On Mon, 2005-11-14 at 11:30, Csaba Nagy wrote:

> The OP was complaining about the results of the above script, which I could
> readily reproduce on an 8.1.0 installation on Debian (see below). The same
> select which returned 3 rows will return nothing after creating the partial
> indexes, which looks like a bug to me... I can't tell anything about why it
> happens, just confirm that I can reproduce it too...

Yep, I just reproduced it too. In the future, I'd recommend they include the bad output, as I simply thought "unpredictable" was referring to performance, not the actual data.
Re: [GENERAL] Bug with index-usage?
Jaime Casanova wrote:

> On 11/14/05, Sebastian Böck [EMAIL PROTECTED] wrote:
>> Hello, I get unpredictable results selecting from a view depending on index
>> usage. Please see the attached script for details. Is it a bug or some weird
>> feature? Any help appreciated to get predictable results.
>>
>> [snip: the test script]
>
> I don't have my machine at hand, but I don't think that even the select is
> right: you have a join but without joining clauses, so you will get a
> cartesian product... What do you believe is the right answer? Just for my
> probe later...
I think it should be:

 id | type | test1_id
----+------+----------
  1 | a    |        1
  2 | a    |        2
  3 | a    |        3

because an EXPLAIN SELECT * from test WHERE type = 'a'; shows some weird assumptions:

 Index Scan using index_a on test2  (cost=0.00..4.69 rows=1 width=40)
   Filter: ((type = 'c'::text) OR (type = 'b'::text))

note that index_a is defined as:

CREATE INDEX index_a ON test2 (id) WHERE type = 'a';

Sebastian
Re: [GENERAL] Choosing PostgreSQL as the database for our next project
William Yu wrote:

>> Johnny Ljunggren wrote:
>>> 1. Replication - multimaster
>> I'll try to explain the setup to the best of my ability:
>> Three centers:
>>   Main center - database with a backup database
>>   Center 1 - database with a backup database
>>   Center 2 - database with a backup database (same as center 1)
> Hmm, okay.

We're now looking at another, simpler, way to do it: the same setup, but the clients on Center 1/2 will connect directly to the Main center (2Mb leased line). The databases on Center 1/2 will then just be a replica of the Main databases. The biggest issue, though, is what happens when the lines go down:

1. connect to local database server
2. line comes up => changes to local database sent to main database
3. connect to main database server

From what I understand this will be possible, but I may need to roll my own replication code to handle data conflict issues? Especially since center 1/2 may be down at the same time and then might change the same data. How do the tools for this compare against the ones from Oracle?

-- Johnny Ljunggren, Vestlia 6, 3080 HOLMESTRAND, 918 50 411
Re: [GENERAL] Bug with index-usage?
Sebastian Böck [EMAIL PROTECTED] writes:

> I get unpredictable results selecting from a view depending on index usage.

It's not actually *using* the indexes, although presence of the indexes does seem to be needed to trigger the bug:

regression=# explain SELECT * from test WHERE type = 'a';
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=0.00..2.29 rows=1 width=40)
   Join Filter: (("outer".type = 'c'::text) OR ("outer".type = 'b'::text))
   ->  Seq Scan on test2  (cost=0.00..1.16 rows=1 width=40)
         Filter: ((type = 'a'::text) AND ((type = 'c'::text) OR (type = 'b'::text)))
   ->  Seq Scan on test2 t2  (cost=0.00..1.11 rows=1 width=0)
         Filter: (type = 'a'::text)
(6 rows)

regression=# drop index index_b;
DROP INDEX
regression=# explain SELECT * from test WHERE type = 'a';
                                QUERY PLAN
---------------------------------------------------------------------------
 Nested Loop Left Join  (cost=0.00..2.24 rows=1 width=40)
   Join Filter: (("outer".type = 'c'::text) OR ("outer".type = 'b'::text))
   ->  Seq Scan on test2  (cost=0.00..1.11 rows=1 width=40)
         Filter: (type = 'a'::text)
   ->  Seq Scan on test2 t2  (cost=0.00..1.11 rows=1 width=0)
         Filter: (type = 'a'::text)
(6 rows)

It looks like the problem is that the new 8.1 OR-index-qual code is confused about when it can apply outer-join conditions. It shouldn't be propagating the outer-join condition into the scan condition on test2, but it is. Will fix.

regards, tom lane
[GENERAL] Queries causing highest I/O load since pg_stat_reset?
Hi, is it possible to retrieve a list of queries that have caused the highest I/O load? Something like:

(1) select pg_stat_reset();
(2) run some queries for a while
(3) list the queries that caused the highest I/O since step (1)

/Mikael
Re: [GENERAL] Queries causing highest I/O load since pg_stat_reset?
On 11/14/05, Mikael Carneholm [EMAIL PROTECTED] wrote:

> Hi, is it possible to retrieve a list of queries that have caused the highest
> I/O load? [...]

select * from pg_stat_activity;

--
Atentamente,
Jaime Casanova
(DBA: DataBase Aniquilator ;)
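Per-query I/O accounting isn't available, but the statistics collector does keep cumulative block counts per table and index since the last pg_stat_reset(), which gets close to step (3) of the question; a sketch (requires stats_block_level = on in postgresql.conf):

```sql
-- Tables ranked by blocks actually read from disk since the last reset
SELECT relname, heap_blks_read, heap_blks_hit,
       idx_blks_read, idx_blks_hit
  FROM pg_statio_user_tables
 ORDER BY heap_blks_read DESC
 LIMIT 10;
```

This tells you which tables caused the I/O, though not which individual queries; for the latter you would have to correlate with statement logging (log_statement and friends).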
Re: [GENERAL] Updated: partitioning functions
On Sat, 2005-11-12 at 18:37 +0100, Mikael Carneholm wrote:

> don't know if someone noticed my previous post, but here's now an updated
> version of the script (see attachment). Comments welcome.

This looks like a good set of examples of how to set up custom partition management. Not sure whether a generic approach is that useful though, but maybe it could be a tech docs paper?

Best Regards, Simon Riggs
[GENERAL] 8.1 beta3 initdb required?
Hi, is an initdb required to go from 8.1beta3 to 8.1.0? Thanks, Tony
Re: [GENERAL] [8.1] drop table in plpgsql function
Sergey,

On Nov 14, 2005, at 2:03 AM, Sergey Karin wrote:

> I have found a strange error.

Perhaps your f_is_table_exist function is not working correctly. Using variables for table references is not supported in plpgsql (any version). You must use execute. Here is my test on 8.0.4:

create table test_table(a text, b integer);

create or replace function del_table(varchar) returns boolean as '
declare
    tname alias for $1;
begin
    drop table tname;
    return true;
end;
' language plpgsql;

select del_table('test_table');

CREATE TABLE
CREATE FUNCTION
psql:16: ERROR:  syntax error at or near "$1" at character 13
QUERY:  drop table  $1
CONTEXT:  PL/pgSQL function "del_table" line 4 at SQL statement
psql:16: LINE 1: drop table  $1

It works OK if you change the drop line to:

execute ''drop table '' || tname;

Also, the example that you indicate is working with execute is not quoted correctly.

Best,
John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL
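Put together, a working version of the del_table example with EXECUTE looks like this (quote_ident is added here as a precaution for mixed-case or odd table names; it is not part of the original post):

```sql
create or replace function del_table(varchar) returns boolean as '
declare
    tname alias for $1;
begin
    -- build the statement as a string and EXECUTE it;
    -- plpgsql cannot use a variable directly as a table name
    execute ''DROP TABLE '' || quote_ident(tname);
    return true;
end;
' language plpgsql;

-- select del_table('test_table');
```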
[GENERAL] IF EXISTS
Hi, I would like to know if IF EXISTS exists under PostgreSQL, because I did not find it. Before creating users or databases, I would like to be sure that they do not exist already. So how can I test for it and do something like:

IF EXISTS database test
    DROP database test;

thanks a lot, Maileen
Re: [GENERAL] IF EXISTS
On Mon, Nov 14, 2005 at 13:20:59 -0800, P.M [EMAIL PROTECTED] wrote:

> Hi, I would like to know if IF EXISTS exists under PostgreSQL, because I did
> not find it. Before creating users or databases, I would like to be sure that
> they do not exist already.

I don't believe there is a feature like that for users (roles) or databases. The normal thing to do would be to have the application try to delete the object and ignore the error if it fails. If this is happening in a larger transaction that you don't want rolled back, then you can use savepoints.
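If the goal is only to test for existence before issuing the CREATE or DROP, the system catalogs can be queried directly; a sketch (pg_roles is the 8.1 name; older releases expose pg_user / pg_shadow instead, and 'test' / 'myuser' are placeholder names):

```sql
-- Does a database named 'test' exist?
SELECT 1 FROM pg_database WHERE datname = 'test';

-- Does a role named 'myuser' exist?
SELECT 1 FROM pg_roles WHERE rolname = 'myuser';
```

A client script can then branch on whether these queries return a row.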
[GENERAL] 3 x PostgreSQL in cluster/redunant
Hello *, I have three Sun servers, on each of which I have reserved a RAID-5 array of 1 TByte for my PostgreSQL. The first PostgreSQL is up and running with a database of 150 GByte. Now I would like to make the three Sun servers redundant, but I cannot find any useful HOWTOs or manuals on how to set up PostgreSQL redundantly. Can anyone point me to the right documentation please? How can I redirect requests to one of the other PostgreSQL servers if one has too much load? My servers have only 32 CPUs of 650 MHz and 64 GByte of memory running Debian GNU/Linux 3.1 (sparc), and they must support around 17,000 users currently. The Internet connection is currently an E1 for testing (with backup), but I am looking for a fiber-optic provider (E1/STM1). Greetings Michelle

-- Michelle Konzack, 50, rue de Soultz, 67100 Strasbourg/France
Re: [GENERAL] IF EXISTS
On 11/14/05, P.M [EMAIL PROTECTED] wrote:
 before to create users or database, i would like to be sure that they do not exist already. so how can i test it and do something like : IF EXISTS database test DROP database test;

Is something in shell fine?

-SCRIPT BEGIN HERE
#!/bin/sh
psql template1 -A -t -c "SELECT datname FROM pg_database WHERE datname = 'test'" | while read D ; do
    dropdb test
    # or you can do
    # psql template1 -c "DROP DATABASE test"
    # whatever you feel comfortable with
done
createdb test
-SCRIPT END HERE

-- regards, Jaime Casanova (DBA: DataBase Aniquilator ;)
Re: [GENERAL] 3 x PostgreSQL in cluster/redunant
On Mon, 2005-11-14 at 12:36, Michelle Konzack wrote:
 Now I like to make the three Sun Servers redundant but I do not find any useful HOWTOs or manuals how to setup PostgreSQL to work redundant. Can anyone point me to the right documentation please?

You want to look at a couple of different options:

slony: http://gborg.postgresql.org/project/slony1/projdisplay.php
pgpool: http://pgpool.projects.postgresql.org/
mammoth replicator: http://www.commandprompt.com/products/mammothreplicator
pgcluster: http://pgfoundry.org/projects/pgcluster/
Re: [GENERAL] Choosing PostgreSQL as the database for our next project
On Mon, Nov 14, 2005 at 06:02:03PM +0100, Johnny Ljunggren wrote:
 Scott Ribe wrote: There is no simple solution, and the out-of-the-box solutions require a whole lotta configuration work.
 That is what I have been suspecting. Luckily the databases aren't that complex, so we may be able to fix them so that there will be no conflicts (in my dreams perhaps...) that need manual intervention. Anyhow, are there any (big) companies that will do support for a PostgreSQL installation?

Yes, there are a number of companies offering commercial support. Pervasive (who I work for) is one; there's also Command Prompt and others. http://www.postgresql.org/support/professional_support has a good list of options.

-- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED]
Pervasive Software http://pervasive.com  work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf  cell: 512-569-9461
Re: [GENERAL] IF EXISTS
How about: if exists (select .. from ..)?

On 11/14/05, Jaime Casanova [EMAIL PROTECTED] wrote:
 something in shell is fine? [shell script checking pg_database; quoted in full upthread]
Re: [GENERAL] Queries causing highest I/O load since pg_stat_reset?
 select * from pg_stat_activity;

Nope, doesn't cut it. First, it only gives you current_query (and current_query is just that: the current query for *active* connections, not historical sessions). Second, it doesn't provide any info on blocks read/written. And pg_stat_foo_tables gives you neither blocks read/written nor query strings, while pg_stat_database and pg_statio_foo_tables only give you the *total* number of blocks read (not connected to query strings). I'd like to be able to do something like this:

-- list the 10 highest i/o stressing queries since last pg_stat_reset()
select s.query_string, so.blks_read
from sometable s, someothertable so
where s.foo = so.foo
order by so.blks_read desc
limit 10;

/Mikael

-Original Message-
From: Jaime Casanova [mailto:[EMAIL PROTECTED] Sent: den 14 november 2005 21:08

On 11/14/05, Mikael Carneholm [EMAIL PROTECTED] wrote:
 is it possible to retrieve a list of queries that has caused the highest i/o load? Something like (1) select pg_stat_reset(); (2) run some queries for a while (3) list the queries that caused the highest i/o since step (1)?

select * from pg_stat_activity;

-- Atentamente, Jaime Casanova (DBA: DataBase Aniquilator ;)
Re: [GENERAL] Duplicate rows
So what's the problem, exactly? What's holding you back from adding the primary key over fluid_id? In the trigger, you could use an IF EXISTS check to see whether the row is already there, and I guess there is no need for a loop: you can do the same per row.

On 11/12/05, Bob Pawley [EMAIL PROTECTED] wrote:
 I have the following expression working, in that process.fluid_id is transferred to pipe.fluid_id when the column process.contain has a value of 'ip'. There is no transfer when the contain column holds other values. Success - so far. How do I keep the table pipe from being populated with duplicate rows? Among other reasons not to have duplicate rows, I want to make pipe.fluid_id a primary key. Bob

CREATE TABLE pipe (fluid_id int4 NOT NULL);

CREATE TABLE process (fluid_id int4 NOT NULL, process varchar, contain varchar);

create or replace function base() returns trigger as $$
DECLARE
    myrow RECORD;
BEGIN
    for myrow in select * from process where contain = 'ip' loop
        insert into pipe(fluid_id) values (myrow.fluid_id);
        if not found then do nothing ; end if;
    end loop;
    return NULL;
END;
$$ language plpgsql;

create trigger trig1 after insert on process for each row execute procedure base();

insert into process (fluid_id, process, contain) values ('1', 'water3', 'ip');
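To make Jaime's suggestion concrete: the trigger can act only on the row being inserted (via the NEW record) and skip the insert when pipe already holds that fluid_id. This is a sketch of the idea, not tested against Bob's schema; note it is not race-safe under concurrent inserts, so the eventual primary key on pipe.fluid_id remains the real guarantee.

```sql
create or replace function base() returns trigger as $$
begin
    -- act only on the row just inserted, and only when it is contained
    if NEW.contain = 'ip' then
        -- skip the insert when pipe already has this fluid_id,
        -- so duplicates never appear
        if not exists (select 1 from pipe where fluid_id = NEW.fluid_id) then
            insert into pipe(fluid_id) values (NEW.fluid_id);
        end if;
    end if;
    return NULL;
end;
$$ language plpgsql;

create trigger trig1 after insert on process
    for each row execute procedure base();
```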
Re: [GENERAL] 3 x PostgreSQL in cluster/redunant
On Mon, Nov 14, 2005 at 07:36:44PM +0100, Michelle Konzack wrote:
 I have three Sun Server where I have reserved on each Server a Raid-5 of 1 TByte for my PostgreSQL. The first PostgreSQL is up and running with a database of 150 GByte.

Keep in mind that databases and RAID5 generally don't mix very well.

-- Jim C. Nasby, Pervasive Software
Re: [GENERAL] Queries causing highest I/O load since pg_stat_reset?
I don't think there's any way to do this currently. Maybe if you wrote an external program that polled pg_stat_activity and then correlated procpid to the output of top, etc.

On Mon, Nov 14, 2005 at 08:59:19PM +0100, Mikael Carneholm wrote:
 is it possible to retrieve a list of queries that has caused the highest i/o load? Something like (1) select pg_stat_reset(); (2) run some queries for a while (3) list the queries that caused the highest i/o since step (1)?

-- Jim C. Nasby, Pervasive Software
Re: [GENERAL] Updated: partitioning functions
 This looks like a good set of examples of how to set up custom partition management. Not sure whether a generic approach is that useful though, but maybe it could be a tech docs paper?

I was mainly into creating a single function that could set up insert/update rules/functions for a parent table, but had so much fun doing it that I ended up with those other functions as well. The version on my disk also has range_partition_cluster_all, range_partition_recluster_all, and range_partition_analyze_all functions, and I'm currently fooling around with redefine functions (init, begin, finish) to further ease partitioning.

I'll tell you what: I created partitions to hold data for years 2001-2006, complete with update/insert rules (72 child tables), in what was it, 15 seconds? and am now in the middle of populating it with 30 million rows of real data for some interesting performance tests. As soon as the data is inserted, I'll index, cluster and analyze all 72 tables using three lines of code. I'll tell you, partitioning is NOT that easy in the other DBs I've worked with... :)

/Mikael
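For readers who haven't seen this style of partitioning: what such a function generates for each child is an inherited table with a CHECK constraint on the range plus a rule redirecting inserts. This is a generic sketch of the pattern (table and rule names here are illustrative, not Mikael's actual range_partition_* output):

```sql
-- parent table
create table measurement (logdate date not null, value int);

-- one generated child per month, constrained to its date range
create table measurement_2005_11 (
    check (logdate >= date '2005-11-01' and logdate < date '2005-12-01')
) inherits (measurement);

-- rule redirecting inserts on the parent into the right child
create rule measurement_insert_2005_11 as
    on insert to measurement
    where NEW.logdate >= date '2005-11-01'
      and NEW.logdate <  date '2005-12-01'
    do instead
    insert into measurement_2005_11 values (NEW.*);
```

With constraint_exclusion enabled (new in 8.1), queries that filter on logdate can then skip children whose CHECK constraints rule them out.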
Re: [GENERAL] 3 x PostgreSQL in cluster/redunant
Jim C. Nasby wrote:
 Keep in mind that databases and RAID5 generally don't mix very well.

Oh, how come? What is the ideal setup of a database server when it comes to storage?

regards
-- Johnny Ljunggren, Vestlia 6, 3080 HOLMESTRAND, 918 50 411
Re: [GENERAL] Choosing PostgreSQL as the database for our next project
Johnny Ljunggren wrote:
 Hmm, okay. We're now looking at another, simpler way to do it. The same setup, but the clients on Center 1/2 will connect directly to the Main center (2Mb leased line). The databases on Center 1/2 will then just be a replica of the Main databases. The biggest issue though is what happens when the lines go down: 1. connect to local database server; 2. line comes up = changes to local database sent to main database; 3. connect to main database server. From what I understand this will be possible, but I may need to roll my own replication code to handle data conflict issues? Especially since center 1/2 may be down at the same time and might then change the same data. How do the tools for this compare against the ones from Oracle?

It sounds like what you have now is a master / multi-slave setup. If the master goes down, people use a slave instead, with the master re-replicating after it comes back up. This config should be much easier to configure out of the box -- you shouldn't need to write any custom app code. What I'm not sure about, though, is what would happen with the two slaves if the main center goes down AND the connection between centers 1 and 2 is also lost. Others with more experience with the various master/slave replication solutions might want to chime in now on how that would work.
Re: [GENERAL] 3 x PostgreSQL in cluster/redunant
On Tue, Nov 15, 2005 at 12:09:40AM +0100, Johnny Ljunggren wrote:
 oh, how come? What is the ideal setup of a database server when it comes to storage?

RAID5 has horrible write performance, especially for random writes (which is what databases tend to do). If you're running an essentially read-only database then RAID5 is OK. Otherwise you'll be much better off with RAID10. You also want to put the WAL on a separate set of drives from the main database (though again that doesn't buy much if your database is read-only).

-- Jim C. Nasby, Pervasive Software
Re: [GENERAL] Question about 8.1 release news
Marko Kreen wrote:
 On Sun, Nov 06, 2005 at 04:35:24PM -0400, Marc G. Fournier wrote: Once released, the more visibility, the better :) Release is scheduled right now for Tuesday morning ...
 Just a nitpick - should the version be 8.1.0 or 8.1? 'configure.in' says ATM '8.1.0' but the usual would be '8.1'...

Uh, it is the 8.1 series, meaning 8.1.0, 8.1.1, etc.; at least that is how I think of it. I usually use 8.1, and talk about 8.1.0 only when needing to distinguish it from 8.1.1.

-- Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us | (610) 359-1001
13 Roberts Road, Newtown Square, Pennsylvania 19073
[GENERAL] Performance of a view
Hello all, I have a fairly complex query whose performance problem I have isolated to a fairly small subset. The pertinent parts of the table structure are as follows:

//- tbl_claim
claim_id     integer SERIAL PRIMARY KEY;
claimnum     varchar(32);

//- tbl_invoice
invoice_id   integer SERIAL PRIMARY KEY;
claim_id     integer;
invoicedate  timestamp;
active       integer;

There is an index on claimnum, and the claim_id in tbl_invoice is a foreign key which references tbl_claim. I have a view (vw_claiminvoicecount) which is defined as follows:

SELECT tbl_claim.claim_id, count(tbl_invoice.invoice_id) AS count,
       min(tbl_invoice.invoicedate) AS invoicedate
FROM tbl_claim
LEFT JOIN tbl_invoice ON tbl_claim.claim_id = tbl_invoice.claim_id
                     AND tbl_invoice.active = 1
GROUP BY tbl_claim.claim_id;

If I run the following:

EXPLAIN ANALYZE SELECT tbl_claim.claim_id FROM tbl_claim WHERE claimnum = 'L1J8823';

I get:

Index Scan using idx_claim_claimnum on tbl_claim (cost=0.00..10.01 rows=2 width=4) (actual time=0.079..0.088 rows=2 loops=1)
  Index Cond: ((claimnum)::text = 'L1J8823'::text)
Total runtime: 0.123 ms

If I run:

EXPLAIN ANALYZE SELECT tbl_claim.claim_id FROM tbl_claim
INNER JOIN vw_claiminvoicecount ON tbl_claim.claim_id = vw_claiminvoicecount.claim_id
WHERE tbl_claim.claim_id = 217778;

I get:

Nested Loop (cost=17.21..25.50 rows=4 width=4) (actual time=0.069..0.076 rows=1 loops=1)
  -> Index Scan using tbl_claim_pkey on tbl_claim (cost=0.00..8.21 rows=2 width=4) (actual time=0.020..0.021 rows=1 loops=1)
       Index Cond: (claim_id = 217778)
  -> Materialize (cost=17.21..17.23 rows=2 width=4) (actual time=0.044..0.047 rows=1 loops=1)
       -> Subquery Scan vw_claiminvoicecount (cost=0.00..17.21 rows=2 width=4) (actual time=0.041..0.043 rows=1 loops=1)
            -> GroupAggregate (cost=0.00..17.19 rows=2 width=16) (actual time=0.039..0.040 rows=1 loops=1)
                 -> Nested Loop Left Join (cost=0.00..17.17 rows=2 width=16) (actual time=0.024..0.030 rows=1 loops=1)
                      -> Index Scan using tbl_claim_pkey on tbl_claim (cost=0.00..8.21 rows=2 width=4) (actual time=0.005..0.007 rows=1 loops=1)
                           Index Cond: (217778 = claim_id)
                      -> Index Scan using idx_tbl_invoice_claim_id on tbl_invoice (cost=0.00..4.39 rows=7 width=16) (actual time=0.014..0.018 rows=1 loops=1)
                           Index Cond: ("outer".claim_id = tbl_invoice.claim_id)
                           Filter: (active = 1)
Total runtime: 0.232 ms

However, if I run:

EXPLAIN ANALYZE SELECT tbl_claim.claim_id FROM tbl_claim
INNER JOIN vw_claiminvoicecount ON tbl_claim.claim_id = vw_claiminvoicecount.claim_id
WHERE tbl_claim.claimnum = 'L1J8823';

I get:

Merge Join (cost=60015.93..69488.39 rows=3 width=4) (actual time=4605.711..4605.762 rows=2 loops=1)
  Merge Cond: ("outer".claim_id = "inner".claim_id)
  -> Subquery Scan vw_claiminvoicecount (cost=60005.91..68940.54 rows=215119 width=4) (actual time=3074.520..4491.423 rows=157215 loops=1)
       -> GroupAggregate (cost=60005.91..66789.35 rows=215119 width=16) (actual time=3074.515..4265.315 rows=157215 loops=1)
            -> Merge Left Join (cost=60005.91..64100.37 rows=215119 width=16) (actual time=3074.493..3845.516 rows=162280 loops=1)
                 Merge Cond: ("outer".claim_id = "inner".claim_id)
                 -> Sort (cost=29403.35..29941.15 rows=215119 width=4) (actual time=1253.372..1392.089 rows=157216 loops=1)
                      Sort Key: tbl_claim.claim_id
                      -> Seq Scan on tbl_claim (cost=0.00..7775.19 rows=215119 width=4) (actual time=0.031..336.606 rows=215119 loops=1)
                 -> Sort (cost=30602.56..31146.52 rows=217582 width=16) (actual time=1821.075..1967.639 rows=151988 loops=1)
                      Sort Key: tbl_invoice.claim_id
                      -> Seq Scan on tbl_invoice (cost=0.00..6967.61 rows=217582 width=16) (actual time=0.066..507.189 rows=219530 loops=1)
                           Filter: (active = 1)
  -> Sort (cost=10.02..10.03 rows=2 width=4) (actual time=0.144..0.145 rows=2 loops=1)
       Sort Key: tbl_claim.claim_id
       -> Index Scan using idx_claim_claimnum on tbl_claim (cost=0.00..10.01 rows=2 width=4) (actual time=0.120..0.127 rows=2 loops=1)
            Index Cond: ((claimnum)::text = 'L1J8823'::text)
Total runtime: 4620.653 ms

I roughly understand what is happening... in the first query, the dataset is being knocked down to one row, then somehow the view is being constructed using only that subset of the claim table. In the second query, the view is being constructed from the entire dataset, which is hundreds of thousands of rows, and thus is much slower. My question is how would I go about
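The message is cut off here, but the two fast plans suggest one common workaround (an assumption on my part, since the actual question is truncated): resolve the claim_id first, so the planner sees a constant claim_id restriction it can push into the view's aggregate, as it did in the second query. A sketch:

```sql
-- look up the claim_id(s) for the claim number first, then join the
-- view on claim_id; whether the planner can exploit this depends on
-- the PostgreSQL version and the plan it chooses
SELECT c.claim_id
FROM tbl_claim c
JOIN vw_claiminvoicecount v ON v.claim_id = c.claim_id
WHERE c.claim_id IN (SELECT claim_id
                     FROM tbl_claim
                     WHERE claimnum = 'L1J8823');
```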