Re: [GENERAL] JOIN issues (Left vs Right for sorting), and "Nested Loop" problem
Alban Hertroys <[EMAIL PROTECTED]> writes:
> I am kind of surprised that the planner doesn't understand that a
> foreign key with a unique constraint (which a primary key is) means
> there is a 0..1 to 1 relationship with the target table.

Hm? It correctly estimated that it'd get one row out for each index probe:

   ->  Index Scan using tradecount_pkey on tradecount  (cost=0.00..8.45 rows=1 width=16) (actual time=0.006..0.008 rows=1 loops=386)
         Index Cond: (trades.id = tradecount.id)

I don't think there's anything wrong with this plan at all, at least for
queries that select only a few hundred rows in trades. It would switch to
a different plan if a large fraction of the tables had to be joined, but
for joining a small fraction it's hard to beat nested indexscans.
(Of course, I'm assuming that this is a small fraction --- we don't
actually know the total sizes of these tables...)

			regards, tom lane
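[A sketch of how this point can be double-checked against one's own data:
disable nested loops for the current session only and compare the EXPLAIN
ANALYZE output of the two plans. enable_nestloop is a standard planner
setting, and the query below is the one from the original post.]

    -- Session-level experiment only; this does not change the server config.
    SET enable_nestloop = off;
    EXPLAIN ANALYZE
    SELECT trades.id, trades.url, trades.alias,
           tradecount.t_count, tradecount.u_count
    FROM trades
    LEFT JOIN tradecount ON trades.id = tradecount.id
    WHERE trades.user_id = 'jondoe' AND trades.status = 'Y'
    ORDER BY tradecount.u_count DESC
    OFFSET 20 LIMIT 10;
    RESET enable_nestloop;   -- restore the default for this session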
Re: [GENERAL] JOIN issues (Left vs Right for sorting), and "Nested Loop" problem

"Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> On 01/09/07, Alban Hertroys <[EMAIL PROTECTED]> wrote:
>> Is 10 ms problematic for this query?

> I think you got 10ms from the query plan? These queries are very fast
> after they have been executed once. But the first time is huge.
> Sometimes I have to wait as much as 10 seconds (10,000ms?)

It's to be expected that repeating the same query would be faster, since
all the data will have been pulled from disk and be sitting in cache.
In this query you're fetching about 700 rows from random locations on
the disk, so if none of them are in memory already there's likely to be
700 seeks done. Seek times in the range of 10ms are not unusual for
cheap disks ... you do the math.

Solutions include buying faster disks, or buying more RAM so more of
your data can stay in cache. If your queries are very stylized (like
always using the same index) then you might get somewhere by CLUSTERing
on that index to reduce the number of seeks needed, but this is seldom a
solution that fixes everything.

>>> So my question: how can I do a left
>>> join, which is the logic that I wish to accomplish, but get the
>>> sorting to work from the second table and if a column is null then
>>> just return as 0 instead of nothing at all? (The LEFT JOIN used to
>>> work in Mysql).

>> You could use ORDER BY COALESCE(tradecount.u_count, 0) desc if you
>> want it to behave like you say mysql sorted it.

I got curious about this assertion and went to check it. AFAICT mysql
doesn't have any weird automatic coalesce involved in sorting. The
difference is that they sort nulls first, rather than last as we do:

mysql> select * from t1 left join t2 using(f1) order by t2.f2 ;
+------+------+
| f1   | f2   |
+------+------+
|    3 | NULL |
|    1 |   11 |
|    2 |   22 |
+------+------+
3 rows in set (0.00 sec)

mysql> select * from t1 left join t2 using(f1) order by t2.f2 desc;
+------+------+
| f1   | f2   |
+------+------+
|    2 |   22 |
|    1 |   11 |
|    3 | NULL |
+------+------+
3 rows in set (0.00 sec)

Same data in PG yields:

regression=# select * from t1 left join t2 using(f1) order by t2.f2 ;
 f1 | f2
----+----
  1 | 11
  2 | 22
  3 |
(3 rows)

regression=# select * from t1 left join t2 using(f1) order by t2.f2 desc;
 f1 | f2
----+----
  3 |
  2 | 22
  1 | 11
(3 rows)

Both behaviors are legal per spec (it's "implementation defined" which is
the ordering, according to the SQL standard). As of PG 8.3 there will be
NULLS FIRST and NULLS LAST options so that you can get either ordering,
but no released version has these:

regression=# select * from t1 left join t2 using(f1) order by t2.f2 nulls first;
 f1 | f2
----+----
  3 |
  1 | 11
  2 | 22
(3 rows)

			regards, tom lane
Re: [GENERAL] JOIN issues (Left vs Right for sorting), and "Nested Loop" problem
On Sep 1, 2007, at 14:48, Phoenix Kiula wrote:

> On 01/09/07, Alban Hertroys <[EMAIL PROTECTED]> wrote:
>> On Sep 1, 2007, at 11:46, Phoenix Kiula wrote:
>> ...snip
>>> However, there's a nested loop in there as the EXPLAIN ANALYZE shows
>>> below. What is causing this nested loop?
>>
>> It looks like it's used to match trades to tradecounts. I think that
>> makes sense, as the number of matching records from both tables isn't
>> necessarily equal. The query is looping over trades until each
>> tradecount has all its trades (for user 'jondoe' with status 'Y')
>> associated.
>
> So are you suggesting that it would help performance if the number of
> rows in each table were to be exactly the same? It can be done I
> suppose, but according to our business logic at the moment, the counts
> table gets a corresponding row when there is at least one count.
> Otherwise, there is nothing for an "id" in the tradecount table, so
> "u_count" comes back to us as null.

No, it wouldn't help I think. The query planner still would have no way
of being sure of that, it doesn't know about your business logic. I'm
not entirely sure that's the problem even...

Is that combination of user_id with a specific status something you'll
be querying a lot? In that case it may help to create an index over
that combination, or a partial index on user_id where status = 'Y'
holds true.

I am kind of surprised that the planner doesn't understand that a
foreign key with a unique constraint (which a primary key is) means
there is a 0..1 to 1 relationship with the target table.

>> It is kind of confusing that you're using the id column in
>> tradecounts for both primary key and foreign key, and I'm not sure
>> what that implies to the query planner. It suggests that there can be
>> only (up to) one tradecounts record for each trade count, but it
>> appears that either the planner doesn't realise that...
>
> If I drop the primary key and leave only the foreign key, will this
> column still be indexed (sorry if this is a stupid question). I can
> drop primary if that is true, but I do want to leave the foreign key
> intact because of the "ON DELETE CASCADE" feature to maintain data
> integrity.

The index wouldn't drop with the dropping of the constraint. It also has
no relevance to the ON DELETE CASCADE; that's part of the foreign key
constraint and unrelated to other indices on that table.

Having an index on that column would help though, and if it's required
to be unique I'd probably opt for a unique constraint on it (which
creates a unique index for you). PostgreSQL doesn't automatically create
indices on foreign keys, btw.

In fact there's nothing wrong with your combined primary/foreign key,
except that I think it _might_ confuse the planner. I am not
knowledgeable enough to say for sure.

>> Is 10 ms problematic for this query?
>
> I think you got 10ms from the query plan? These queries are very fast
> after they have been executed once. But the first time is huge.
> Sometimes I have to wait as much as 10 seconds (10,000ms?)

10s for a join of what... 2 times 386 rows? That can't be right.
Sequential scans would be faster than that (by much). Are you running
out of memory for that query maybe? Or are you looking at a DNS time
out? Something is wrong there.

>> You could use ORDER BY COALESCE(tradecount.u_count, 0) desc if you
>> want it to behave like you say mysql sorted it.
>
> Yes, this does it! I didn't think about the NULL stuff, and yes MySQL
> returns NULLs in integer columns as a 0, so those queries work. I

It does? Oh dear...
Then how do they expect you to see that there was an actual 0 in that
column instead of a NULL?

> guess I could use the IFNULL or something, but thanks for the COALESCE
> idea, this is great. It works. I just hope sorting by a function does
> not cause a major hit to query performance, so I'll be watching over
> the next few days.

Not much, AFAIK. But in the worst case you could create a functional
index on that column. That'd move the calculation into the creation of
the index and would only add a small penalty on inserting and updating.

--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //
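[A sketch of the indexing suggestions above; the index and constraint
names are made up, and nothing here has been tested against the original
schema.]

    -- Partial index matching the WHERE user_id = ... AND status = 'Y' pattern:
    CREATE INDEX idx_trades_userid_active ON trades (user_id) WHERE status = 'Y';

    -- If the primary key on tradecount.id were dropped but the column still
    -- had to stay unique, a unique constraint creates a unique index and
    -- leaves the foreign key (and its ON DELETE CASCADE) untouched:
    ALTER TABLE tradecount ADD CONSTRAINT tradecount_id_key UNIQUE (id);

    -- The functional (expression) index mentioned above would look like this,
    -- though note that u_count is declared NOT NULL, so the NULLs being sorted
    -- come from the outer join rather than from this column:
    CREATE INDEX idx_tradecount_ucount0 ON tradecount ((COALESCE(u_count, 0)));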
Re: [GENERAL] JOIN issues (Left vs Right for sorting), and "Nested Loop" problem
On 01/09/07, Alban Hertroys <[EMAIL PROTECTED]> wrote:
>
> On Sep 1, 2007, at 11:46, Phoenix Kiula wrote:
> ...snip
> > However, there's a nested loop in there as the EXPLAIN ANALYZE shows
> > below. What is causing this nested loop?
>
> It looks like it's used to match trades to tradecounts. I think that
> makes sense, as the number of matching records from both tables isn't
> necessarily equal. The query is looping over trades until each
> tradecount has all its trades (for user 'jondoe' with status 'Y')
> associated.

So are you suggesting that it would help performance if the number of
rows in each table were to be exactly the same? It can be done I
suppose, but according to our business logic at the moment, the counts
table gets a corresponding row when there is at least one count.
Otherwise, there is nothing for an "id" in the tradecount table, so
"u_count" comes back to us as null.

> It is kind of confusing that you're using the id column in
> tradecounts for both primary key and foreign key, and I'm not sure
> what that implies to the query planner. It suggests that there can be
> only (up to) one tradecounts record for each trade count, but it
> appears that either the planner doesn't realise that...

If I drop the primary key and leave only the foreign key, will this
column still be indexed (sorry if this is a stupid question). I can
drop primary if that is true, but I do want to leave the foreign key
intact because of the "ON DELETE CASCADE" feature to maintain data
integrity.

> Is 10 ms problematic for this query?

I think you got 10ms from the query plan? These queries are very fast
after they have been executed once. But the first time is huge.
Sometimes I have to wait as much as 10 seconds (10,000ms?)

> >                                 QUERY PLAN
> > ----------------------------------------------------------------------------------------------------------------------
> >  Limit  (cost=4829.70..4829.73 rows=10 width=125) (actual time=9.784..9.835 rows=10 loops=1)
> >    ->  Sort  (cost=4829.65..4830.61 rows=385 width=125) (actual time=9.703..9.757 rows=30 loops=1)
> >          Sort Key: tradecount.u_count
> >          ->  Nested Loop Left Join  (cost=0.00..4813.12 rows=385 width=125) (actual time=0.075..8.662 rows=386 loops=1)
> >                ->  Index Scan using idx_trades_userid on trades  (cost=0.00..1556.08 rows=385 width=117) (actual time=0.050..1.225 rows=386 loops=1)
> >                      Index Cond: ((user_id)::text = 'jondoe'::text)
> >                      Filter: (status = 'Y'::bpchar)
> >                ->  Index Scan using tradecount_pkey on tradecount  (cost=0.00..8.45 rows=1 width=16) (actual time=0.006..0.008 rows=1 loops=386)
> >                      Index Cond: (trades.id = tradecount.id)
> >  Total runtime: 9.963 ms
> > (10 rows)
> >
> > 2. Secondly, if I want to sort the join by a column on the second
> > table, then the rows returned are not really sorted unless I do a
> > RIGHT JOIN (my sql above shows a LEFT JOIN). Getting results from a
> > right join is fine as long as the column is not null in the second
> > table, but if it is null, then nothing is returned. This is why I do a
> > LEFT join in the first place! So my question: how can I do a left
> > join, which is the logic that I wish to accomplish, but get the
> > sorting to work from the second table and if a column is null then
> > just return as 0 instead of nothing at all? (The LEFT JOIN used to
> > work in Mysql).
>
> You could use ORDER BY COALESCE(tradecount.u_count, 0) desc if you
> want it to behave like you say mysql sorted it.
>

Yes, this does it! I didn't think about the NULL stuff, and yes MySQL
returns NULLs in integer columns as a 0, so those queries work. I
guess I could use the IFNULL or something, but thanks for the COALESCE
idea, this is great. It works. I just hope sorting by a function does
not cause a major hit to query performance, so I'll be watching over
the next few days.

TIA!
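[Putting the COALESCE suggestion into the original query, the result
would look roughly like this; applying COALESCE in the select list as
well, so the NULLs come back as 0 in the output, is optional and only
shown for completeness.]

    SELECT trades.id, trades.url, trades.alias,
           COALESCE(tradecount.t_count, 0) AS t_count,
           COALESCE(tradecount.u_count, 0) AS u_count
    FROM trades
    LEFT JOIN tradecount ON trades.id = tradecount.id
    WHERE trades.user_id = 'jondoe' AND trades.status = 'Y'
    ORDER BY COALESCE(tradecount.u_count, 0) DESC
    OFFSET 20 LIMIT 10;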
Re: [GENERAL] JOIN issues (Left vs Right for sorting), and "Nested Loop" problem
On Sep 1, 2007, at 11:46, Phoenix Kiula wrote:

> Hello, I have a simple query as follows. It joins two very
> straightforward tables.
>
> SELECT
>     trades.id, trades.url, trades.alias,
>     tradecount.t_count, tradecount.u_count
> FROM trades
> LEFT JOIN tradecount ON trades.id = tradecount.id
> WHERE trades.user_id = 'jondoe' and trades.status = 'Y'
> ORDER BY tradecount.u_count desc
> OFFSET 20 LIMIT 10
>
> Both the tables have a bigint "id" field that connects them. The table
> definitions are included below:
>
>                     Table "public.trades"
>          Column         |            Type             |          Modifiers
> ------------------------+-----------------------------+-------------------------------
>  id                     | bigint                      | not null
>  user_id                | character varying(45)       | not null
>  url                    | text                        | not null
>  alias                  | character varying(20)       | not null
>  title                  | character varying(500)      |
>  private                | character(1)                |
>  status                 | character(1)                | default 'Y'::bpchar
>  modify_date            | timestamp without time zone |
>  disable_in_statistics  | character(1)                | not null default 'N'::bpchar
> Indexes:
>     "trades_pkey" PRIMARY KEY, btree (id)
>     "trades_unique_alias" UNIQUE, btree (alias)
>     "idx_trades_mdate" btree (modify_date)
>     "idx_trades_userid" btree (user_id)
> Check constraints:
>     "trades_alias_valid" CHECK (alias::text ~ '[-A-Za-z0-9_]'::text)
>     "trades_id_check" CHECK (id > 0)
>     "trades_url_check" CHECK (url <> ''::text)
>     "trades_user_id_check" CHECK (user_id::text <> ''::text)
>
>                  Table "public.tradecount"
>    Column     |            Type             |      Modifiers
> -------------+-----------------------------+---------------------
>  id          | bigint                      | not null
>  t_count     | integer                     | not null default 0
>  u_count     | integer                     | not null default 0
>  modify_date | timestamp without time zone | default now()
> Indexes:
>     "tradecount_pkey" PRIMARY KEY, btree (id)
>     "i_tradecount_uc" btree (u_count)
>     "i_tradecount_vc" btree (t_count)
> Foreign-key constraints:
>     "fk_tradecount_trades_id" FOREIGN KEY (id) REFERENCES trades(id) ON DELETE CASCADE
> Rules:
>     replace_tradecount_on_duplicate_insert AS
>     ON INSERT TO tradecount
>     WHERE (EXISTS ( SELECT 1 FROM tradecount WHERE tradecount.id = new.id))
>     DO INSTEAD UPDATE tradecount
>         SET t_count = tradecount.t_count, u_count = tradecount.u_count
>         WHERE tradecount.id = new.id
>
> Now I have two problems:
>
> 1. The above query takes more time to fire up that an index should
> really take. I have bitmap heap scan off in conf file, and indexscan
> on, otherwise this was going into a bitmap heap thing. As you will see
> from the SQL above, the trades.user_id index should be limiting the
> number of rows to a few hundred (or thousand at max) and then we are
> trying to get only 10 tuples based on the OFFSET and LIMIT clauses.
>
> However, there's a nested loop in there as the EXPLAIN ANALYZE shows
> below. What is causing this nested loop?

It looks like it's used to match trades to tradecounts. I think that
makes sense, as the number of matching records from both tables isn't
necessarily equal. The query is looping over trades until each
tradecount has all its trades (for user 'jondoe' with status 'Y')
associated.

It is kind of confusing that you're using the id column in tradecounts
for both primary key and foreign key, and I'm not sure what that implies
to the query planner. It suggests that there can be only (up to) one
tradecounts record for each trade count, but it appears that either the
planner doesn't realise that...

Is 10 ms problematic for this query?

>                                 QUERY PLAN
> ----------------------------------------------------------------------------------------------------------------------
>  Limit  (cost=4829.70..4829.73 rows=10 width=125) (actual time=9.784..9.835 rows=10 loops=1)
>    ->  Sort  (cost=4829.65..4830.61 rows=385 width=125) (actual time=9.703..9.757 rows=30 loops=1)
>          Sort Key: tradecount.u_count
>          ->  Nested Loop Left Join  (cost=0.00..4813.12 rows=385 width=125) (actual time=0.075..8.662 rows=386 loops=1)
>                ->  Index Scan using idx_trades_userid on trades  (cost=0.00..1556.08 rows=385 width=117) (actual time=0.050..1.225 rows=386 loops=1)
>                      Index Cond: ((user_id)::text = 'jondoe'::text)
>                      Filter: (status = 'Y'::bpchar)
>                ->  Index Scan using tradecount_pkey on tradecount  (cost=0.00..8.45 rows=1 width=16) (actual time=0.006..0.008 rows=1 loops=386)
>                      Index Cond: (trades.id = tradecount.id)
>  Total runtime: 9.963 ms
> (10 rows)
>
> 2. Secondly, if I want to sort the join by a column on the second
> table, then the rows returned are not really sorted unless I do a
> RIGHT JOIN (my sql above shows a LEFT JOIN). Getting results from a
> right join is fine as long as the column is not null in the second
> table, but if it is null, then nothing is returned. This is why I do a
> LEFT join in the first place! So my question: how can I do a left
> join, which is the logic that I wish to accomplish, but get the
> sorting to work from the second table and if a column is null then
> just return as 0 instead of nothing at all? (The LEFT JOIN used to
> work in Mysql).

You could use ORDER BY COALESCE(tradecount.u_count, 0) desc if you want
it to behave like you say mysql sorted it.
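[A small sketch illustrating the 0..1-to-1 relationship discussed above:
counting the trades that have no tradecount row at all, i.e. the rows
whose u_count comes back as NULL from the LEFT JOIN. This is only an
illustrative check, not something suggested in the thread itself.]

    SELECT count(*)
    FROM trades t
    LEFT JOIN tradecount tc ON tc.id = t.id
    WHERE tc.id IS NULL;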
[GENERAL] JOIN issues (Left vs Right for sorting), and "Nested Loop" problem
Hello, I have a simple query as follows. It joins two very
straightforward tables.

SELECT
    trades.id, trades.url, trades.alias,
    tradecount.t_count, tradecount.u_count
FROM trades
LEFT JOIN tradecount ON trades.id = tradecount.id
WHERE trades.user_id = 'jondoe' and trades.status = 'Y'
ORDER BY tradecount.u_count desc
OFFSET 20 LIMIT 10

Both the tables have a bigint "id" field that connects them. The table
definitions are included below:

                    Table "public.trades"
         Column         |            Type             |          Modifiers
------------------------+-----------------------------+-------------------------------
 id                     | bigint                      | not null
 user_id                | character varying(45)       | not null
 url                    | text                        | not null
 alias                  | character varying(20)       | not null
 title                  | character varying(500)      |
 private                | character(1)                |
 status                 | character(1)                | default 'Y'::bpchar
 modify_date            | timestamp without time zone |
 disable_in_statistics  | character(1)                | not null default 'N'::bpchar
Indexes:
    "trades_pkey" PRIMARY KEY, btree (id)
    "trades_unique_alias" UNIQUE, btree (alias)
    "idx_trades_mdate" btree (modify_date)
    "idx_trades_userid" btree (user_id)
Check constraints:
    "trades_alias_valid" CHECK (alias::text ~ '[-A-Za-z0-9_]'::text)
    "trades_id_check" CHECK (id > 0)
    "trades_url_check" CHECK (url <> ''::text)
    "trades_user_id_check" CHECK (user_id::text <> ''::text)

                 Table "public.tradecount"
   Column    |            Type             |      Modifiers
-------------+-----------------------------+---------------------
 id          | bigint                      | not null
 t_count     | integer                     | not null default 0
 u_count     | integer                     | not null default 0
 modify_date | timestamp without time zone | default now()
Indexes:
    "tradecount_pkey" PRIMARY KEY, btree (id)
    "i_tradecount_uc" btree (u_count)
    "i_tradecount_vc" btree (t_count)
Foreign-key constraints:
    "fk_tradecount_trades_id" FOREIGN KEY (id) REFERENCES trades(id) ON DELETE CASCADE
Rules:
    replace_tradecount_on_duplicate_insert AS
    ON INSERT TO tradecount
    WHERE (EXISTS ( SELECT 1 FROM tradecount WHERE tradecount.id = new.id))
    DO INSTEAD UPDATE tradecount
        SET t_count = tradecount.t_count, u_count = tradecount.u_count
        WHERE tradecount.id = new.id

Now I have two problems:

1. The above query takes more time to fire up that an index should
really take. I have bitmap heap scan off in conf file, and indexscan on,
otherwise this was going into a bitmap heap thing. As you will see from
the SQL above, the trades.user_id index should be limiting the number of
rows to a few hundred (or thousand at max) and then we are trying to get
only 10 tuples based on the OFFSET and LIMIT clauses.

However, there's a nested loop in there as the EXPLAIN ANALYZE shows
below. What is causing this nested loop?

                                QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Limit  (cost=4829.70..4829.73 rows=10 width=125) (actual time=9.784..9.835 rows=10 loops=1)
   ->  Sort  (cost=4829.65..4830.61 rows=385 width=125) (actual time=9.703..9.757 rows=30 loops=1)
         Sort Key: tradecount.u_count
         ->  Nested Loop Left Join  (cost=0.00..4813.12 rows=385 width=125) (actual time=0.075..8.662 rows=386 loops=1)
               ->  Index Scan using idx_trades_userid on trades  (cost=0.00..1556.08 rows=385 width=117) (actual time=0.050..1.225 rows=386 loops=1)
                     Index Cond: ((user_id)::text = 'jondoe'::text)
                     Filter: (status = 'Y'::bpchar)
               ->  Index Scan using tradecount_pkey on tradecount  (cost=0.00..8.45 rows=1 width=16) (actual time=0.006..0.008 rows=1 loops=386)
                     Index Cond: (trades.id = tradecount.id)
 Total runtime: 9.963 ms
(10 rows)

2. Secondly, if I want to sort the join by a column on the second
table, then the rows returned are not really sorted unless I do a RIGHT
JOIN (my sql above shows a LEFT JOIN). Getting results from a right join
is fine as long as the column is not null in the second table, but if it
is null, then nothing is returned. This is why I do a LEFT join in the
first place! So my question: how can I do a left join, which is the
logic that I wish to accomplish, but get the sorting to work from the
second table and if a column is null then just return as 0 instead of
nothing at all? (The LEFT JOIN used to work in Mysql).

TIA for any thoughts!
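[A sketch of a per-session alternative to the conf-file change mentioned
in this post: the enable_* planner settings can be flipped with SET while
experimenting and reset afterwards, which avoids changing plans for every
other query on the server. The settings and the query are the ones named
above.]

    SET enable_bitmapscan = off;   -- session-local experiment only
    EXPLAIN ANALYZE
    SELECT trades.id, trades.url, trades.alias,
           tradecount.t_count, tradecount.u_count
    FROM trades
    LEFT JOIN tradecount ON trades.id = tradecount.id
    WHERE trades.user_id = 'jondoe' AND trades.status = 'Y'
    ORDER BY tradecount.u_count DESC
    OFFSET 20 LIMIT 10;
    RESET enable_bitmapscan;       -- back to the configured default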