"Benjamin Krajmalnik" writes:
> A little background - I have various multi-column indexes whenever I
> have queries which restrict the output based on the values of the 2
> fields (for example, a client code and the date of a transaction).
> Is there a performance gain using this approach as opposed to using 2
> separate indexes, one on
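Not from the thread, but the trade-off being asked about can be sketched quickly: a composite index on (client_code, date) can satisfy both predicates in a single index scan, whereas two single-column indexes force the planner to combine them (in PostgreSQL, typically a bitmap AND). The sketch below uses SQLite's in-memory engine as a stand-in for PostgreSQL; the table and index names are invented for illustration:

```python
import sqlite3

# Illustration only: SQLite stands in for PostgreSQL here, and the
# tx / tx_client_date names are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tx (client_code TEXT, tx_date TEXT, amount REAL)")
conn.execute("CREATE INDEX tx_client_date ON tx (client_code, tx_date)")

# Both columns restricted: the composite index serves the whole predicate.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM tx WHERE client_code = 'C42' AND tx_date >= '2010-08-01'"
).fetchall()
print(plan[0][-1])  # e.g. SEARCH tx USING INDEX tx_client_date (...)
```

With two separate indexes instead, the equality and range conditions would each hit a different index and the results would have to be intersected, which is usually more work for this query shape.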
Thanks Mark,
Yeah, I apologize, I forgot to mention a couple of things.
m.id is the primary key, but the biggest problem is that the query loops 626410
times because at one time people were allowed to delete member.id rows, which
now will break the application if a.memberid comes out and it
This is weird - is there a particular combination of memberid/answered in
answerselectindex that has a very high rowcount?
The first change I would suggest looking into is changing the sub-query logic
to check existence, limiting the result set of the sub-query to a single
row
Select dist
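The suggestion above - checking existence rather than materializing the whole sub-query - looks roughly like this. The member/answer schema is guessed from the column names mentioned in the thread (member.id, a.memberid, answered), and SQLite again stands in for PostgreSQL:

```python
import sqlite3

# Toy schema guessed from the thread; values are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE member (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE answer (memberid INTEGER, answered INTEGER);
    INSERT INTO member VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO answer VALUES (1, 1), (1, 1), (2, 0);
""")

# Original style: IN over a sub-query that may return many duplicate rows.
in_rows = conn.execute(
    "SELECT m.id FROM member m "
    "WHERE m.id IN (SELECT a.memberid FROM answer a WHERE a.answered = 1)"
).fetchall()

# Suggested style: EXISTS can stop at the first matching row per member.
exists_rows = conn.execute(
    "SELECT m.id FROM member m "
    "WHERE EXISTS (SELECT 1 FROM answer a "
    "              WHERE a.memberid = m.id AND a.answered = 1)"
).fetchall()

print(in_rows, exists_rows)  # same result either way: [(1,)]
```

The two forms return identical rows; the point of the rewrite is that the existence check gives the planner a chance to stop scanning the answer rows for a member as soon as one match is found.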
Hi,
I'm hoping someone can offer some help here. The query and explain analyze and
table layout are below and attached in a text file if the formatting is bad.
The query is part of a bigger query that our front end runs. This is the part
that takes forever (84 minutes in this case) to finish
On Mon, Aug 16, 2010 at 04:54:19PM -0400, Greg Smith wrote:
> Andres Freund wrote:
> >A new checkpointing logic + a new syncing logic
> >(prepare_fsync() earlier and then fsync() later) would be a nice
> >thing. Do you plan to work on that?
> The background writer already caches fsync calls into a queue, so the
> prepare step you're thinking needs to be there is a
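prepare_fsync() does not exist in PostgreSQL; it is the name being proposed in the thread. The "absorb fsync requests into a queue, then sync each file once later" idea under discussion can be sketched like this (class and method names invented):

```python
import os
import tempfile

class FsyncQueue:
    """Sketch of the queue idea discussed above: remember which files
    were dirtied, de-duplicate, then fsync each one once at sync time.
    prepare_fsync() is a hypothetical name, not a real PostgreSQL call."""

    def __init__(self):
        self._pending = set()

    def prepare_fsync(self, fd):
        # Cheap bookkeeping during the write phase: repeated requests
        # for the same file collapse into one queue entry.
        self._pending.add(fd)

    def sync_all(self):
        # Expensive part, done once per file at checkpoint time.
        for fd in self._pending:
            os.fsync(fd)
        self._pending.clear()

q = FsyncQueue()
fd = os.open(os.path.join(tempfile.mkdtemp(), "relfile"),
             os.O_WRONLY | os.O_CREAT)
os.write(fd, b"dirty page")
q.prepare_fsync(fd)
q.prepare_fsync(fd)      # duplicate request is absorbed
print(len(q._pending))   # 1
q.sync_all()
os.close(fd)
```

This is the shape of what the background writer's existing fsync absorption queue already does; the open question in the thread is how to spread the sync phase out in time.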
On Mon, Aug 16, 2010 at 04:13:22PM -0400, Greg Smith wrote:
> Andres Freund wrote:
> >Or use -o sync. Or configure a ridiculously low dirty_memory amount
> >(which has a problem on large systems because 1% can still be too
> >much. Argh.)...
> -o sync completely trashes performance, and trying to set the
> dirty_ratio values to even 1% doesn't really work due to th
On Mon, Aug 16, 2010 at 01:46:21PM -0400, Greg Smith wrote:
> Scott Carey wrote:
> >This is because an fsync on ext3 flushes _all dirty pages in the file
> >system_ to disk, not just those for the file being fsync'd.
> >One partition for WAL, one for data. If using ext3 this is
> >essentially a performance requirement no matter how your array is set
> >up underneath.
> >Don't ever have WAL and data on the same OS volume as ext3.
> >If data=writeback, performance will be fine, data integrity will be ok
> >for WAL, but data integrity will not be sufficient for the data
> >partition. If data=ordered, performance will be very bad, but data
> >integrity will be OK.
> >This is beca
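For readers following along: the durability-critical operation here is the WAL-style write-then-fsync cycle. A minimal sketch of that cycle is below (the path is invented). On ext3 with data=ordered, the fsync at the durability point can force out every dirty page on the filesystem, not just this file's, which is the reason for the separate-volume advice above:

```python
import os
import tempfile

# Minimal WAL-style durable append (illustrative only).
wal_path = os.path.join(tempfile.mkdtemp(), "wal.log")
fd = os.open(wal_path, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
try:
    os.write(fd, b"commit record\n")
    # Durability point: the commit is not safe until fsync returns.
    # On ext3 data=ordered this can flush unrelated dirty data too,
    # so frequent WAL fsyncs on a shared volume stall behind the
    # data partition's writeback.
    os.fsync(fd)
finally:
    os.close(fd)

with open(wal_path, "rb") as f:
    print(f.read())  # b'commit record\n'
```

Putting WAL on its own filesystem means each such fsync only has this small append to flush, which is why the split is described as essentially a performance requirement on ext3.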
13 matches