There are two papers recently published at Duke that I just found, both of
which use PostgreSQL as part of their research:
Automated SQL Tuning through Trial and (Sometimes) Error:
http://www.cs.duke.edu/~shivnath/papers/dbtest09z.pdf
Tuning Database Configuration Parameters with iTuned:
http:
On Nov 10, 2009, at 10:53 AM, Laurent Laborde wrote:
On Tue, Nov 10, 2009 at 4:48 PM, Kevin Grittner wrote:
Laurent Laborde wrote:
BTW, if you have any idea to improve IO performance, I'll happily
read it. We're 100% IO bound.
At the risk of stating the obvious, you want to make sure you have
high quality RAID adapters with large battery backed cache configured
to write-back.
On Tue, Nov 10, 2009 at 10:48 AM, Greg Smith wrote:
> Scott Marlowe wrote:
>>
>> On some busy systems with lots of small transactions large
>> shared_buffer can cause it to run slower rather than faster due to
>> background writer overhead.
>>
>
> This is only really true in 8.2 and earlier, where background writer
> computations are done as a percentage of shared_buffers.
Scott Marlowe wrote:
On some busy systems with lots of small transactions large
shared_buffer can cause it to run slower rather than faster due to
background writer overhead.
This is only really true in 8.2 and earlier, where background writer
computations are done as a percentage of shared_buffers.
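To make the version difference concrete, here is a hedged sketch of the
settings involved (parameter names as in the 8.2 and 8.3 documentation; the
values shown are just the shipped defaults as far as I recall):

# 8.2 and earlier: each bgwriter round scans a *percentage* of
# shared_buffers, so a larger cache directly means more scanning work
bgwriter_delay = 200ms
bgwriter_lru_percent = 1.0
bgwriter_lru_maxpages = 5
bgwriter_all_percent = 0.333
bgwriter_all_maxpages = 5

# 8.3 and later: the all-buffers scan is gone and the LRU writer is
# sized from recent buffer allocations rather than from shared_buffers
bgwriter_delay = 200ms
bgwriter_lru_maxpages = 100
bgwriter_lru_multiplier = 2.0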
Laurent Laborde wrote:
On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith wrote:
I have 1 spare dedicated to hot standby, doing nothing but waiting for
the master to fail,
+ 2 spare candidates for cluster mastering.
In theory, I could even disable fsync and all "safety" features on the master.
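For what it's worth, "disabling the safety features" on the master would
amount to something like the following postgresql.conf sketch (8.3-era
parameter names; this illustrates the idea, it is not a recommendation, and
it assumes a failed master would simply be abandoned for a spare):

fsync = off                 # never force WAL to disk
synchronous_commit = off    # commits return before WAL is flushed
full_page_writes = off      # no torn-page protection after checkpoints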
Craig James wrote:
On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith wrote:
Given the current quality of Linux code, I hesitate to use anything but ext3
because I consider that just barely reliable enough even as the most popular
filesystem by far. JFS and XFS have some benefits to them, but none
On Tue, Nov 10, 2009 at 10:07 AM, Craig James wrote:
> On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith wrote:
>>
>> Given the current quality of Linux code, I hesitate to use anything but ext3
>> because I consider that just barely reliable enough even as the most popular
>> filesystem by far.
On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith wrote:
Given the current quality of Linux code, I hesitate to use anything but ext3
because I consider that just barely reliable enough even as the most popular
filesystem by far. JFS and XFS have some benefits to them, but none so
compelling to make
On Tue, Nov 10, 2009 at 9:52 AM, Laurent Laborde wrote:
> On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith wrote:
>> disks (RAID1) are the two WAL setups that work well, and if I have a bunch
>> of drives I personally always prefer a dedicated drive mainly because it
> makes it easy to monitor exactly
On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith wrote:
> Laurent Laborde wrote:
>>
>> It is on a separate array which does everything but tablespace (on a
>> separate array) and indexspace (another separate array).
>>
>
> On Linux, the types of writes done to the WAL volume (where writes are
> constantly being flushed) require the WAL volume not be shared with
> anything
Laurent Laborde wrote:
It is on a separate array which does everything but tablespace (on a
separate array) and indexspace (another separate array).
On Linux, the types of writes done to the WAL volume (where writes are
constantly being flushed) require the WAL volume not be shared with
anything
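As a concrete illustration of the dedicated-WAL layout being discussed, the
usual 8.x-era approach is to move pg_xlog onto its own array and leave a
symlink behind (paths here are placeholders, not the poster's layout):

pg_ctl -D /var/lib/pgsql/data stop
mv /var/lib/pgsql/data/pg_xlog /mnt/wal/pg_xlog
ln -s /mnt/wal/pg_xlog /var/lib/pgsql/data/pg_xlog
pg_ctl -D /var/lib/pgsql/data start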
On Tue, Nov 10, 2009 at 4:48 PM, Kevin Grittner wrote:
> Laurent Laborde wrote:
>
>> BTW, if you have any idea to improve IO performance, I'll happily
>> read it. We're 100% IO bound.
>
> At the risk of stating the obvious, you want to make sure you have
> high quality RAID adapters with large battery backed cache configured
> to write-back.
Laurent Laborde wrote:
> BTW, if you have any idea to improve IO performance, I'll happily
> read it. We're 100% IO bound.
At the risk of stating the obvious, you want to make sure you have
high quality RAID adapters with large battery backed cache configured
to write-back.
If you haven't a
On Tue, Nov 10, 2009 at 8:00 AM, Laurent Laborde wrote:
>
> A desktop drive can easily do 60 MB/s in *sequential* read/write.
> We use a high-performance array of 15,000 rpm SAS disks on an octo-core
> machine with 32 GB of RAM, and IO is always a problem.
How many drives in the array? Controller? RAID level?
> I explain the
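As an aside, the sequential figure quoted above is easy to sanity-check with
GNU dd; a rough sketch (file name and size are arbitrary, and
oflag=direct/iflag=direct bypass the OS page cache):

dd if=/dev/zero of=/tmp/seqtest bs=8k count=250000 oflag=direct   # ~2 GB write
dd if=/tmp/seqtest of=/dev/null bs=8k iflag=direct                # read it back
rm /tmp/seqtest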
Checkpoint log:
checkpoint starting: time
checkpoint complete: wrote 1972 buffers (0.8%); 0 transaction log
file(s) added, 0 removed, 13 recycled;
write=179.123 s, sync=26.284 s, total=205.451 s
with a 10-minute timeout.
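For context, log lines in that format come from checkpoint logging; a
minimal sketch of the settings involved (the 10-minute timeout is from the
message above, the completion target is just the 8.3 default):

log_checkpoints = on                # emit "checkpoint starting/complete" lines
checkpoint_timeout = 10min          # time-based checkpoint every 10 minutes
checkpoint_completion_target = 0.5  # spread the write phase across the interval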
--
ker2x
On Tue, Nov 10, 2009 at 4:11 PM, Ivan Voras wrote:
> Laurent Laborde wrote:
>
> Ok, this explains it. It also means you are probably not getting much
> runtime performance benefit from the logging and should think about moving
> the logs to different drive(s), among other things because...
It is
Laurent Laborde wrote:
On Tue, Nov 10, 2009 at 3:05 PM, Ivan Voras wrote:
Laurent Laborde wrote:
Hi!
We recently had a problem with WAL archiving badly impacting the
performance of our PostgreSQL master.
Hmmm, do you want to say that copying 16 MB files over the network (and
presumably you are not doing it absolutely continually - there are
pauses between log shipping)
On Tue, Nov 10, 2009 at 3:05 PM, Ivan Voras wrote:
> Laurent Laborde wrote:
>>
>> Hi!
>> We recently had a problem with WAL archiving badly impacting the
>> performance of our PostgreSQL master.
>
> Hmmm, do you want to say that copying 16 MB files over the network (and
> presumably you are not doing it absolutely continually - there are
> pauses between log shipping)
Laurent Laborde wrote:
Hi!
We recently had a problem with WAL archiving badly impacting the
performance of our PostgreSQL master.
Hmmm, do you want to say that copying 16 MB files over the network (and
presumably you are not doing it absolutely continually - there are
pauses between log shipping)
On Tue, Nov 10, 2009 at 12:55:42PM +0100, Laurent Laborde wrote:
> Hi!
> We recently had a problem with WAL archiving badly impacting the
> performance of our PostgreSQL master.
> And I discovered "cstream", which can limit the bandwidth of a pipe stream.
>
> Here is our new archive command, FYI, that limits the IO bandwidth to 500 KB/s:
Hi!
We recently had a problem with WAL archiving badly impacting the
performance of our PostgreSQL master.
And I discovered "cstream", which can limit the bandwidth of a pipe stream.
Here is our new archive command, FYI, that limits the IO bandwidth to 500 KB/s:
archive_command = '/bin/cat %p | cst
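The command above is cut off in the archive, so as a hedged reconstruction of
the idea (not the poster's exact command - the destination path is a
placeholder), cstream's -t option caps throughput in bytes per second:

archive_command = '/bin/cat %p | cstream -t 500000 > /mnt/wal_archive/%f'

Here %p is the path of the WAL segment and %f its file name, both expanded by
PostgreSQL; -t 500000 is roughly 500 KB/s. When the segments are shipped over
the network, rsync --bwlimit=500 achieves a similar throttling effect.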