On Thu, May 7, 2009 at 11:19 AM, Matthew Wakeling wrote:
> Certainly random access like this index scan can be extremely slow. 2-4 MB/s
> is quite reasonable if you're fetching one 8kB block per disc seek - no more
> than 200 per second.

We have read ahead set pretty aggressively high as the SAN seems to
'like' this,
>
> Nested Loop Left Join  (cost=0.00..6462463.96 rows=1894 width=110)
>   ->  Append  (cost=0.00..6453365.66 rows=1894 width=118)
>         ->  Seq Scan on datatable sum  (cost=0.00..10.75 rows=1 width=118)
>               Filter: ((datapointdate >= '2009-04-01 00:00:00'::timestamp without time zone)
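The 2-4 MB/s figure above is just seek arithmetic: one 8kB heap block fetched per disc seek, at no more than roughly 200 seeks per second for a single disc. A quick sanity check of that bound (the numbers are the ones quoted above, not measurements):

```sql
-- Worst-case random-read bandwidth: ~200 seeks/s * 8 kB per seek.
SELECT 200 * 8192 / (1024.0 * 1024.0) AS worst_case_mb_per_sec;  -- ~1.6 MB/s
```

Throughput in the 2-4 MB/s range therefore suggests the scan is almost entirely seek-bound rather than bandwidth-bound, which is why aggressive read-ahead on the SAN buys little for this access pattern.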
Hi,

Some answers in-line:

> Has there been a performance *change*, or are you just concerned about a
> query which doesn't seem to use "enough" disc bandwidth?

Performance has degraded noticeably over the past few days.

> Certainly random access like this index scan can be extremely slow. 2-4 MB/s
> is quite reasonable if you're fetching one 8kB block per disc seek - no more
> than 200 per second.
On Thu, 7 May 2009, David Brain wrote:
This has been working reasonably well, however in the last few days
I've been seeing extremely slow performance on what are essentially
fairly simple 'index hitting' selects on this data. From the host
side I see that the postgres query process is mostly in
Hi,
Interesting, for one index on one partition:
idx_scan: 329
idx_tup_fetch: 8905730
So maybe a reindex would help?
David.
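The counters quoted above come from the statistics collector's per-index view. A sketch of how to inspect them for one partition and rebuild a suspect index — the partition and index names here are hypothetical, not taken from the actual schema:

```sql
-- Per-index usage counters for one partition (names are hypothetical).
SELECT indexrelname, idx_scan, idx_tup_fetch
  FROM pg_stat_user_indexes
 WHERE relname = 'datatable_200904';

-- Rebuild the index if it looks bloated; REINDEX blocks writes to the
-- table for the duration, so schedule it off-peak.
REINDEX INDEX datatable_200904_datapointdate_idx;
```

A high idx_tup_fetch relative to idx_scan (as above, ~27,000 tuples per scan) mostly reflects wide range queries, but if the index has bloated, a REINDEX can shrink it and reduce the number of pages each scan has to touch.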
On Thu, May 7, 2009 at 10:26 AM, Scott Mead wrote:
> On Thu, May 7, 2009 at 10:14 AM, David Brain wrote:
Hi,

Some context, we have a _lot_ of data, > 1TB, mostly in 1 'table' -
the 'datatable' in the example below although in order to improve
performance this table is partitioned (by date range) into a number of
partition tables. Each partition contains up to 20GB of data (tens of
millions of rows),
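For context, date-range partitioning in PostgreSQL of this vintage (8.x) is built from table inheritance plus CHECK constraints. A minimal sketch, with column and partition names assumed from the plan above rather than taken from the actual schema:

```sql
-- Parent table that queries are written against.
CREATE TABLE datatable (
    datapointdate timestamp without time zone NOT NULL
    -- ... remaining columns elided
);

-- One child table per date range; the CHECK constraint defines the range.
CREATE TABLE datatable_200904 (
    CHECK (datapointdate >= '2009-04-01' AND datapointdate < '2009-05-01')
) INHERITS (datatable);

CREATE INDEX datatable_200904_datapointdate_idx
    ON datatable_200904 (datapointdate);

-- Lets the planner skip partitions whose CHECK contradicts the WHERE clause.
SET constraint_exclusion = on;
```

With constraint_exclusion enabled, the Append node in a plan like the one shown earlier should only list the partitions whose date ranges overlap the query's filter, rather than every child table.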