It was just a minimal example. The real query looks like this:
select *
from commons.financial_documents fd
where fd.creation_time < '2011-11-07 10:39:07.285022+08'
or (fd.creation_time = '2011-11-07 10:39:07.285022+08' and
fd.financial_document_id < 100)
order by fd.creation_time desc
limit
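A common rewrite for this kind of keyset-pagination predicate (a general technique, not something proposed in this thread) is a row-value comparison, which a composite index on (creation_time, financial_document_id) can serve with a single range scan instead of the OR of two conditions. A runnable sketch using SQLite through Python — the table contents here are invented, and Postgres accepts the same row-value syntax:

```python
import sqlite3

# Stand-in for commons.financial_documents; the schema is an assumption
# based on the two columns used in the thread's query.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE financial_documents (financial_document_id INTEGER, creation_time TEXT)"
)
conn.executemany(
    "INSERT INTO financial_documents VALUES (?, ?)",
    [(i, f"2011-11-07 10:39:{i:02d}") for i in range(1, 6)],
)

cursor_time, cursor_id = "2011-11-07 10:39:03", 3

# The OR form from the thread.
or_form = conn.execute(
    """SELECT financial_document_id FROM financial_documents
       WHERE creation_time < ? OR (creation_time = ? AND financial_document_id < ?)
       ORDER BY creation_time DESC LIMIT 10""",
    (cursor_time, cursor_time, cursor_id),
).fetchall()

# The equivalent row-value form, which matches a composite index directly
# (requires SQLite >= 3.15; Postgres has supported this syntax for much longer).
row_value_form = conn.execute(
    """SELECT financial_document_id FROM financial_documents
       WHERE (creation_time, financial_document_id) < (?, ?)
       ORDER BY creation_time DESC LIMIT 10""",
    (cursor_time, cursor_id),
).fetchall()

assert or_form == row_value_form  # same rows, but one index-friendly range scan
```

For a fully correct descending keyset scan you would also add financial_document_id to the ORDER BY, so ties on creation_time are broken deterministically.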
On Fri, Nov 7, 2014 at 5:16 PM, arhipov wrote:
> Hello,
>
> I have just come across an interesting Postgres behaviour with OR-conditions.
> Is there any chance that the optimizer will handle this situation in the
> future?
>
> select *
> from commons.financial_documents fd
> where fd.creation_time
Hello,
I have just come across an interesting Postgres behaviour with
OR-conditions. Is there any chance that the optimizer will handle this
situation in the future?
select *
from commons.financial_documents fd
where fd.creation_time <= '2011-11-07 10:39:07.285022+08'
order by fd.creation_time
On 10/29/2014 11:49 PM, Tory M Blue wrote:
> I looked at pgtune again today and the numbers it's spitting out took me
> aback; they are huge. From all historical conversations and attempts, a few
> of these larger numbers netted reduced performance vs better performance
> (but that was on older versi
Artūras Lapinskas writes:
> After some more investigation, my wild guess would be that when nulls are
> involved in a query PostgreSQL wants to double-check whether they are
> really nulls in the actual relation (maybe because of dead tuples).
No, it's much simpler than that: IS NULL
After some more investigation, my wild guess would be that when nulls are
involved in a query PostgreSQL wants to double-check whether they are
really nulls in the actual relation (maybe because of dead tuples). To do
that it has to go and fetch pages from disk, and the best way to do that
is to use b
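Tom Lane's reply is cut off above, so the thread's actual diagnosis isn't visible here. Independent of that, a common general-purpose mitigation when a query filters on `col IS NULL` and NULL rows are rare is a partial index covering only those rows — a standard Postgres technique, not something this truncated reply prescribes. A runnable sketch via SQLite in Python (table and index names are invented; Postgres accepts the same DDL):

```python
import sqlite3

# Hypothetical table: many processed rows, a few with a NULL timestamp.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, processed_at TEXT)")
conn.executemany(
    "INSERT INTO events (processed_at) VALUES (?)",
    [("2014-11-07",)] * 1000 + [(None,)] * 3,
)

# A partial index covering only the NULL rows: it stays tiny, and the planner
# can use it for any query whose WHERE clause implies "processed_at IS NULL".
conn.execute(
    "CREATE INDEX events_unprocessed ON events(id) WHERE processed_at IS NULL"
)

pending = conn.execute(
    "SELECT id FROM events WHERE processed_at IS NULL ORDER BY id"
).fetchall()
print(len(pending))  # only the 3 unprocessed rows
```

The same trade-off applies in Postgres: the partial index is cheap to maintain because inserts of non-NULL rows never touch it.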