On Fri, Nov 16, 2012 at 3:40 AM, David Popiashvili <dato0...@hotmail.com> wrote:

> I have a database with a few hundred million rows. I'm running the
> following query:
>
> select * from "Payments" as p
> inner join "PaymentOrders" as po on po."Id" = p."PaymentOrderId"
> inner join "Users" as u on u."Id" = po."UserId"
> inner join "Roles" as r on u."RoleId" = r."Id"
> where r."Name" = 'Moses'
> LIMIT 1000
>
> When the where clause finds a match in the database, I get the result in a
> few milliseconds, but if I modify the query and specify a non-existent
> r."Name" in the where clause, it takes a very long time to complete. I
> guess that PostgreSQL is doing a sequential scan on the Payments table
> (which contains the most rows), comparing each row one by one.
>
> Isn't PostgreSQL smart enough to check first whether the Roles table
> contains any row with Name 'Moses'?
>
> The Roles table contains only 15 rows, while Payments contains ~350 million.
>
You probably checked this already, but just in case you didn't ... did you
do an "analyze" on the small table? I've been hit by this before ... it's
natural to think that Postgres would always check a very small table first
no matter what the statistics are, but it's not true. If you analyze the
small table, even if it only has one or two rows in it, it will often
radically change the plan that Postgres chooses.
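
For example (a minimal sketch, reusing the table and role names from your
query):

    -- Refresh the planner's statistics for the small table
    ANALYZE "Roles";

    -- Then check which plan the optimizer picks now; with fresh
    -- statistics it should probe "Roles" first and bail out early
    -- when no matching role exists.
    EXPLAIN
    SELECT *
    FROM "Payments" AS p
    INNER JOIN "PaymentOrders" AS po ON po."Id" = p."PaymentOrderId"
    INNER JOIN "Users" AS u ON u."Id" = po."UserId"
    INNER JOIN "Roles" AS r ON u."RoleId" = r."Id"
    WHERE r."Name" = 'Moses'
    LIMIT 1000;

(EXPLAIN ANALYZE would also show actual run times, but it executes the
query, so plain EXPLAIN is the safer first step against a 350-million-row
table.)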

Craig James
