On Tue, Jan 27, 2015 at 6:59 AM, Tim Uckun <timuc...@gmail.com> wrote:

> The query seems to first use the timestamp column which results in a huge
> number of records and then filters out using the integer and the macaddr
> indices.  If it was to use the integer index first it would start with a
> tiny number of records.
>

Maybe the distribution of values across the column's quantiles is skewed.
Have you tried setting a more granular statistics target for your int
column?
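As a quick check, you can look at what the planner currently knows about
that column's distribution; something like the below (the table and column
names are only placeholders):

-- placeholder names; shows the planner's current picture of the column
select null_frac, n_distinct, most_common_vals, most_common_freqs
from pg_stats
where tablename = 'your_table'
  and attname = 'your_int_column';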

> The effective_cache_size is one gig. The others are not set so therefore
> the default.


Ideally, effective_cache_size can be set to as much as 50-60% of your
available memory. You also need to tune random_page_cost to match the
behavior of your disks.

https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
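For example, on a 9.4 server with (say) 8GB of RAM and fast disks, a
starting point might look like the below. The values are only illustrative
and depend on your hardware; on versions before 9.4, edit postgresql.conf
and reload instead of using ALTER SYSTEM:

-- illustrative values for ~8GB RAM and fast storage; adjust to your setup
alter system set effective_cache_size = '4GB';
alter system set random_page_cost = 1.5;
select pg_reload_conf();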

If these two changes do not help, then you may want to collect more
granular statistics for the specific column:

alter table <table_name> alter column <column_name> set statistics 1000;
analyze <table_name>;
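Then re-run the problem query under explain (analyze, buffers) to confirm
the planner now starts from the integer index (the query below is just a
placeholder):

-- placeholder query; check which index the planner picks first
explain (analyze, buffers)
select * from <table_name> where <column_name> = 42;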



Best Regards,

Sameer Kumar | Database Consultant

ASHNIK PTE. LTD.

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: +65 8110 0350  T: +65 6438 3504 | www.ashnik.com
