|-----Original Message-----
|From: Thomas Kellerer   Sent: Wednesday, April 13, 2016 11:37 PM
|
|Alban Hertroys wrote on 13.04.2016 at 16:39:
|>>> So my question is: why is comparing a timestamp to a date so much slower?
|>
|> The reason that the other way around is so much more expensive is that
|> the database needs to do that conversion twice for every row in the
|> table. When down-converting now(), the DB only needs to do that once
|> for all rows.
|
|Why does it do that for each row? The value of now() won't change while the
|statement is running, so this conversion could be done once at the start of the
|statement.

The general rule in the SQL Server world is that wrapping a column in a function 
in a Where clause or join will eliminate usage of an index that would have been 
leveraged if the function weren't there.  The reason is that the optimizer can't 
map the function's output back onto the index order (and for non-deterministic 
functions it can't even evaluate the result in advance), so the predicate isn't 
sargable and the optimizer takes the safest route to completion: a scan.  I'm 
betting that the same logic holds in PG (I just haven't tested it enough to be 
absolutely sure).
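
To illustrate (a minimal sketch; the orders table and its created_at 
timestamp column, with an index on created_at, are hypothetical):

    -- Non-sargable: the function wraps the indexed column, so the
    -- predicate must be evaluated row by row and the index can't be seeked.
    SELECT * FROM orders WHERE CAST(created_at AS date) = '2016-04-13';

    -- Sargable: the column is left bare and the constant bounds are
    -- computed once, so an index range seek is possible.
    SELECT * FROM orders
    WHERE created_at >= '2016-04-13' AND created_at < '2016-04-14';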

In the case of now() in the Where clause, to avoid the per-row conversion and 
the loss of index usage, I always capture the output of now() (which should be 
a static value for the duration of the statement anyway) into a local variable 
and then use that variable in the Where clause...and get my index back.
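
Sketching that pattern in PL/pgSQL (names are hypothetical; assume an orders 
table with an indexed date column order_date):

    CREATE FUNCTION todays_orders() RETURNS SETOF orders AS $$
    DECLARE
        cutoff date := now()::date;   -- evaluate now() once, up front
    BEGIN
        RETURN QUERY
        SELECT * FROM orders o
        WHERE o.order_date = cutoff;  -- bare column vs. constant: index-friendly
    END;
    $$ LANGUAGE plpgsql;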

This is just a coding style (no functions in Where clauses/joins), but one that 
doesn't seem prevalent in PG...instead I see people nesting functions within 
functions within functions, and the cascading impact of that becomes very hard 
to unravel.

Mike Sofen


