Thanks for the tips. I'll make some adjustments.
On Tue, Jan 27, 2015 at 8:38 PM, Sameer Kumar sameer.ku...@ashnik.com
wrote:
On Tue, Jan 27, 2015 at 6:59 AM, Tim Uckun timuc...@gmail.com wrote:
The query seems to first use the timestamp column, which results in a huge
number of records, and then filters them out using the integer and macaddr
indices. If it were to use the integer index first, it would start with a
tiny number of records instead.
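If the integer and macaddr columns really are that much more selective, one option is a composite index that puts the selective columns first, so all three predicates can be satisfied by a single index scan. This is only a sketch; the table and column names below are assumptions, since the actual schema hasn't been posted:

```sql
-- Assumed names; the real table has ~15 columns, three of which are filtered on.
CREATE INDEX CONCURRENTLY readings_device_mac_ts_idx
    ON readings (device_id, mac, ts);
-- Equality matches on device_id and mac plus a range condition on ts
-- can then be answered by one index scan instead of starting from the
-- wide timestamp range.
```

Equality columns go first and the range (timestamp) column last, which is the usual ordering for this kind of multicolumn index.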
The effective_cache_size is one gig. The others are not set, so they are at their defaults.
On Sun, Jan 25, 2015 at 6:08 AM, Sameer Kumar sameer.ku...@ashnik.com
wrote:
On Fri, Jan 23, 2015 at 3:04 PM, Tim Uckun timuc...@gmail.com wrote:
Take a look at this explain
Sorry I forgot about the table description.
The table is pretty simple. There are about 15 fields and about 75 million
records. This query is supposed to use three fields to narrow down the
records. One is a timestamp column, the other is a macaddr type, the
third is an integer. All three are indexed.
Adding some info on the query and table structure (and indexes) would be
helpful here.
On 01/22/2015 11:04 PM, Tim Uckun wrote:
Take a look at this explain
http://explain.depesz.com/s/TTRN
I may be missing it, but I do not see the actual query.
The final number of records is very small but PG is starting out with a
massive number of records and then filtering most of them out.
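Since the plan at the depesz link is hard to reason about without the query itself, a buffers-aware plan would show exactly where the row explosion happens. A sketch of how to capture that (the table and column names are placeholders, not from the thread):

```sql
-- Placeholder names; substitute the real table and columns from the query.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM readings
WHERE device_id = 42                           -- the integer column
  AND mac = '08:00:2b:01:02:03'::macaddr       -- the macaddr column
  AND ts >= now() - interval '1 day';          -- the timestamp column
```

The BUFFERS option reports how many pages each node actually touched, which makes it obvious when the timestamp index is pulling in far more rows than the other two predicates would.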
I don't really want to force pg to always use the same index, because in
some cases this strategy would be the wrong choice.
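Agreed that globally forcing an index is risky. For diagnosing which plan is actually faster, one contained experiment is to hide the timestamp index inside a transaction and see what the planner picks without it. This is a sketch for testing only (the index name is an assumption, and DROP INDEX takes an ACCESS EXCLUSIVE lock on the table for the duration, so don't do this on a busy production system):

```sql
BEGIN;
DROP INDEX readings_ts_idx;              -- assumed name of the timestamp index
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM readings                   -- placeholder query; use the real one
WHERE device_id = 42
  AND mac = '08:00:2b:01:02:03'::macaddr
  AND ts >= now() - interval '1 day';
ROLLBACK;                                -- the index is restored untouched
```

Comparing the two timings tells you whether the integer-first plan is genuinely cheaper before changing any indexes for real.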