On 26 Mar, Bruce Momjian wrote:
[EMAIL PROTECTED] wrote:
On 26 Mar, Manfred Spraul wrote:
[EMAIL PROTECTED] wrote:
Compare file sync methods with one 8k write:
(o_dsync unavailable)
open o_sync, write 6.270724
write, fdatasync 13.275225
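[For anyone who wants to reproduce numbers like these, here is a minimal
sketch of the kind of loop being timed. This is not the actual
src/tools/fsync program; the filename, loop count, and output format are
made up. Both loops rewrite the same 8k block in place, so the only
difference measured is how the data is forced to disk.]

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define LOOPS 1000

    static double elapsed(struct timeval *a, struct timeval *b)
    {
        return (b->tv_sec - a->tv_sec) + (b->tv_usec - a->tv_usec) / 1e6;
    }

    int main(void)
    {
        char buf[8192];
        struct timeval start, stop;
        int fd, i;

        memset(buf, 'a', sizeof(buf));

        /* method 1: descriptor opened with O_SYNC, write() waits for disk */
        fd = open("/var/tmp/test.out", O_RDWR | O_CREAT | O_SYNC, 0600);
        if (fd < 0) { perror("open"); exit(1); }
        gettimeofday(&start, NULL);
        for (i = 0; i < LOOPS; i++)
        {
            write(fd, buf, sizeof(buf));
            lseek(fd, 0, SEEK_SET);       /* rewrite the same block */
        }
        gettimeofday(&stop, NULL);
        printf("open o_sync, write  %f\n", elapsed(&start, &stop));
        close(fd);

        /* method 2: plain write(), then an explicit fdatasync() */
        fd = open("/var/tmp/test.out", O_RDWR, 0600);
        if (fd < 0) { perror("open"); exit(1); }
        gettimeofday(&start, NULL);
        for (i = 0; i < LOOPS; i++)
        {
            write(fd, buf, sizeof(buf));
            fdatasync(fd);
            lseek(fd, 0, SEEK_SET);
        }
        gettimeofday(&stop, NULL);
        printf("write, fdatasync    %f\n", elapsed(&start, &stop));
        close(fd);

        unlink("/var/tmp/test.out");
        return 0;
    }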
[EMAIL PROTECTED] wrote:
On 26 Mar, Manfred Spraul wrote:
[EMAIL PROTECTED] wrote:
Compare file sync methods with one 8k write:
(o_dsync unavailable)
open o_sync, write 6.270724
write, fdatasync 13.275225
write, fsync, 13.359847
On 26 Mar, Manfred Spraul wrote:
[EMAIL PROTECTED] wrote:
Compare file sync methods with one 8k write:
(o_dsync unavailable)
open o_sync, write 6.270724
write, fdatasync 13.275225
write, fsync, 13.359847
Odd. Which filesystem, which kernel?
On Fri, Mar 26, 2004 at 07:25:53AM +0100, Manfred Spraul wrote:
Compare file sync methods with one 8k write:
(o_dsync unavailable)
open o_sync, write 6.270724
write, fdatasync 13.275225
write, fsync, 13.359847
Odd. Which filesystem, which kernel?
On 25 Mar, Manfred Spraul wrote:
Tom Lane wrote:
[EMAIL PROTECTED] writes:
I could certainly do some testing if you want to see how DBT-2 does.
Just tell me what to do. ;)
Just do some runs that are identical except for the wal_sync_method
setting. Note that this should not have any
[EMAIL PROTECTED] wrote:
I've made a test run that compares fsync and fdatasync: The performance
was identical:
- with fdatasync:
http://khack.osdl.org/stp/290607/
- with fsync:
http://khack.osdl.org/stp/290483/
I don't understand why. Mark - is there a battery backed write
Bruce,
We don't actually extend the WAL file during writes (preallocated), and
the access/modification timestamp is only in seconds, so I wonder if the
OS only updates the inode once a second. What else would change in the
inode more frequently than once a second?
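[To make the inode point concrete, here is a rough sketch, with an assumed
filename, of why a preallocated file favors fdatasync(): overwriting in
place changes no block layout, so the only inode change per write is the
modification time, which classic Unix filesystems store at one-second
granularity. fsync() must flush the inode anyway; fdatasync() may skip it.
This is not PostgreSQL code.]

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[8192];
        struct stat st;
        int i;
        int fd = open("/var/tmp/prealloc.out", O_RDWR | O_CREAT, 0600);

        memset(buf, 0, sizeof(buf));
        write(fd, buf, sizeof(buf));      /* "preallocate" one block */
        fsync(fd);

        for (i = 0; i < 5; i++)
        {
            lseek(fd, 0, SEEK_SET);
            write(fd, buf, sizeof(buf));  /* overwrite in place */
            fdatasync(fd);                /* data only; inode flush may be skipped */
            fstat(fd, &st);
            /* st_mtime has one-second resolution, so rapid overwrites
               all report the same timestamp */
            printf("st_mtime = %ld\n", (long) st.st_mtime);
        }
        close(fd);
        unlink("/var/tmp/prealloc.out");
        return 0;
    }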
What about really big
On 22 Mar, Tom Lane wrote:
[EMAIL PROTECTED] writes:
I could certainly do some testing if you want to see how DBT-2 does.
Just tell me what to do. ;)
Just do some runs that are identical except for the wal_sync_method
setting. Note that this should not have any impact on SELECT
[EMAIL PROTECTED] wrote:
Compare file sync methods with one 8k write:
(o_dsync unavailable)
open o_sync, write 6.270724
write, fdatasync 13.275225
write, fsync, 13.359847
Odd. Which filesystem, which kernel? It seems fdatasync is broken and
Tom Lane wrote:
[EMAIL PROTECTED] writes:
I could certainly do some testing if you want to see how DBT-2 does.
Just tell me what to do. ;)
Just do some runs that are identical except for the wal_sync_method
setting. Note that this should not have any impact on SELECT
performance, only
On 18 Mar, Tom Lane wrote:
Josh Berkus [EMAIL PROTECTED] writes:
1) This is an OSS project. Why not just recruit a bunch of people on
PERFORMANCE and GENERAL to test the 4 different synch methods using real
databases? No test like reality, I say
I agree --- that is likely to
[EMAIL PROTECTED] writes:
I could certainly do some testing if you want to see how DBT-2 does.
Just tell me what to do. ;)
Just do some runs that are identical except for the wal_sync_method
setting. Note that this should not have any impact on SELECT
performance, only insert/update/delete
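[For reference, the four methods under discussion map onto postgresql.conf
settings like this; which values are accepted depends on what the platform
provides. The idea is one otherwise-identical benchmark run per value:]

    # postgresql.conf -- change only this between runs
    wal_sync_method = fsync           # write(), then fsync()
    #wal_sync_method = fdatasync      # write(), then fdatasync()
    #wal_sync_method = open_sync      # WAL files opened with O_SYNC
    #wal_sync_method = open_datasync  # WAL files opened with O_DSYNC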
[EMAIL PROTECTED] wrote:
On 18 Mar, Tom Lane wrote:
Josh Berkus [EMAIL PROTECTED] writes:
1) This is an OSS project. Why not just recruit a bunch of people on
PERFORMANCE and GENERAL to test the 4 different synch methods using real
databases? No test like reality, I say
I
I wrote:
Note, too, that the preferred method isn't likely to depend just on the
operating system, it's likely to depend also on the filesystem type
being used.
Linux provides quite a few of them: ext2, ext3, jfs, xfs, and reiserfs,
and that's just off the top of my head. I imagine the
Bruce Momjian wrote:
I have been poking around with our fsync default options to see if I can
improve them. One issue is that we never default to O_SYNC, but default
to O_DSYNC if it exists, which seems strange.
What I did was to beef up my test program and get it into CVS for folks
to run.
Andrew Dunstan wrote:
Bruce Momjian wrote:
I have been poking around with our fsync default options to see if I can
improve them. One issue is that we never default to O_SYNC, but default
to O_DSYNC if it exists, which seems strange.
What I did was to beef up my test program and get
I have updated my program with your suggested changes and put in
src/tools/fsync. Please see how you like it.
---
Zeugswetter Andreas SB SD wrote:
Running the attached test program shows on BSD/OS 4.3:
write
I have been poking around with our fsync default options to see if I can
improve them. One issue is that we never default to O_SYNC, but default
to O_DSYNC if it exists, which seems strange.
What I did was to beef up my test program and get it into CVS for folks
to run. What I found was that
Bruce Momjian [EMAIL PROTECTED] writes:
I have been poking around with our fsync default options to see if I can
improve them. One issue is that we never default to O_SYNC, but default
to O_DSYNC if it exists, which seems strange.
As I recall, that was based on testing on some different
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
I have been poking around with our fsync default options to see if I can
improve them. One issue is that we never default to O_SYNC, but default
to O_DSYNC if it exists, which seems strange.
As I recall, that was based on testing
On Thu, Mar 18, 2004 at 01:50:32PM -0500, Bruce Momjian wrote:
I'm not sure I believe these numbers at all... my experience is that
getting trustworthy disk I/O numbers is *not* easy.
These numbers were reproducible on all the platforms I tested.
It's not because they are reproducible that
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
As I recall, that was based on testing on some different platforms.
But why prefer O_DSYNC over fdatasync if you don't prefer O_SYNC over
fsync?
It's what tested out as the best bet. I think we were using pgbench
as the test platform,
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
As I recall, that was based on testing on some different platforms.
But why prefer O_DSYNC over fdatasync if you don't prefer O_SYNC over
fsync?
It's what tested out as the best bet. I think we were using
On Thu, Mar 18, 2004 at 02:22:10PM -0500, Bruce Momjian wrote:
OK, what better test do you suggest? Right now, there has been no
testing of these.
I suggest you start by at least preallocating a 16 MB file
and doing the tests on that, to at least be somewhat similar to what
WAL does.
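[A sketch of that suggestion, with the segment and block sizes chosen to
match WAL's 16 MB / 8k and everything else assumed: preallocate the file
first, sync it, then time only the in-place overwrites.]

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define SEGSIZE (16 * 1024 * 1024)   /* one WAL segment */
    #define BLCKSZ  8192

    int main(void)
    {
        char block[BLCKSZ];
        char *zeros = calloc(1, SEGSIZE);
        off_t off;
        int fd = open("/var/tmp/walseg.out", O_RDWR | O_CREAT, 0600);

        if (fd < 0 || zeros == NULL) { perror("setup"); exit(1); }
        memset(block, 'x', sizeof(block));

        /* preallocate the whole segment up front, as PostgreSQL does */
        write(fd, zeros, SEGSIZE);
        fsync(fd);
        lseek(fd, 0, SEEK_SET);

        /* the part worth timing: sequential overwrites, one sync each */
        for (off = 0; off + BLCKSZ <= SEGSIZE; off += BLCKSZ)
        {
            write(fd, block, BLCKSZ);
            fdatasync(fd);
        }
        close(fd);
        free(zeros);
        unlink("/var/tmp/walseg.out");
        return 0;
    }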
I
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
It's what tested out as the best bet. I think we were using pgbench
as the test platform, which as you know I have doubts about, but at
least it is testing one actual write/sync pattern Postgres can generate.
I assume pgbench has so
Kurt Roeckx wrote:
On Thu, Mar 18, 2004 at 02:22:10PM -0500, Bruce Momjian wrote:
OK, what better test do you suggest? Right now, there has been no
testing of these.
I suggest you start by at least preallocating a 16 MB file
and doing the tests on that, to at least be somewhat
Kurt Roeckx [EMAIL PROTECTED] writes:
I have no idea what the access pattern is for normal WAL
operations or how many times it gets synched. Does it only do
f(data)sync() at commit time, or for every block it writes?
If we are using fsync/fdatasync, we issue those at commit time or when
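[A toy model of that answer, with invented function and file names rather
than backend code: WAL blocks are written with plain write() as they fill,
and the sync is issued once per commit, so the f(data)sync() cost is paid
per transaction rather than per block. Varying nblocks below is exactly the
"fsync spacing" / average-transaction-size question raised further down.]

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    #define BLCKSZ 8192

    /* write a transaction's WAL blocks, then sync once at commit */
    static void commit_transaction(int walfd, int nblocks)
    {
        char block[BLCKSZ];
        int i;

        memset(block, 'x', sizeof(block));
        for (i = 0; i < nblocks; i++)
            write(walfd, block, sizeof(block));  /* buffered, cheap */
        fdatasync(walfd);                        /* one sync per commit */
    }

    int main(void)
    {
        int xact;
        int fd = open("/var/tmp/wal.out", O_RDWR | O_CREAT, 0600);

        for (xact = 0; xact < 100; xact++)
            commit_transaction(fd, 4);   /* say, 4 blocks per transaction */
        close(fd);
        unlink("/var/tmp/wal.out");
        return 0;
    }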
Here are my results on Linux 2.6.1 using cvs version 1.7.
With those times around 20 seconds, you can really hear the disk go crazy.
And I have the feeling something must be wrong. Those results
are reproducible.
Kurt
Simple write timing:
write 0.139558
Compare fsync times
Kurt Roeckx wrote:
Here are my results on Linux 2.6.1 using cvs version 1.7.
With those times around 20 seconds, you can really hear the disk go crazy.
And I have the feeling something must be wrong. Those results
are reproducible.
Wow, your O_SYNC times are great. Where can I buy some? :-)
Tom, Bruce,
My previous point about checking different fsync spacings corresponds to
different assumptions about average transaction size. I think a useful
tool for determining wal_sync_method has got to be able to reflect that
range of possibilities.
Questions:
1) This is an OSS project.
Josh Berkus [EMAIL PROTECTED] writes:
1) This is an OSS project. Why not just recruit a bunch of people on
PERFORMANCE and GENERAL to test the 4 different synch methods using real
databases? No test like reality, I say
I agree --- that is likely to yield *far* more useful results
Bruce Momjian [EMAIL PROTECTED] writes:
Well, I wrote the program to allow testing. I don't see a complex test
as being that much better than a simple one. We don't need accurate
numbers. We just need to know if fsync or O_SYNC is faster.
Faster than what? The thing everyone is trying to
On Thu, Mar 18, 2004 at 03:34:21PM -0500, Bruce Momjian wrote:
Kurt Roeckx wrote:
Here are my results on Linux 2.6.1 using cvs version 1.7.
With those times around 20 seconds, you can really hear the disk go crazy.
And I have the feeling something must be wrong. Those results
are
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Well, I wrote the program to allow testing. I don't see a complex test
as being that much better than a simple one. We don't need accurate
numbers. We just need to know if fsync or O_SYNC is faster.
Faster than what? The thing
Tom Lane wrote:
It really just shows whether the fsync after the close has similar
timing to the one before the close. That was the best way I could think
to test it.
Sure, but where's the separate process part? What this seems to test
is whether a single process can sync its own
Ideally that path isn't taken very often. But I'm currently having a
discussion off-list with a CMU student who seems to be seeing a case
where it happens a lot. (She reports that both WALWriteLock and
WALInsertLock are causes of a lot of process blockages, which seems to
mean that a lot
Running the attached test program shows on BSD/OS 4.3:
write 0.000360
write fsync 0.001391
I think the write fsync pays for the previous write test (same filename).
write, close fsync 0.001308
open o_fsync, write 0.000924
I have
Bruce Momjian wrote:
write 0.000360
write fsync 0.001391
write, close fsync 0.001308
open o_fsync, write 0.000924
That's 1 millisecond vs. 1.3 milliseconds. Neither value is realistic -
I guess the hw cache is on and the OS doesn't issue cache flush
Manfred Spraul [EMAIL PROTECTED] writes:
One advantage of a separate write and fsync call is better performance
for the writes that are triggered within AdvanceXLInsertBuffer: I'm not
sure how often that's necessary, but it's a write while holding both the
WALWriteLock and WALInsertLock. If
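[A schematic of that tradeoff, with pthread mutexes standing in for the two
LWLocks; the names and file are invented, and this is not how the backend is
actually structured. With separate write()/fsync(), only the cheap buffered
write happens while the locks are held and the expensive wait for the disk
comes after they are released; with an O_SYNC descriptor the same write()
would block on the platter inside the critical section, stalling every
backend queued behind the locks.]

    #include <fcntl.h>
    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t insert_lock = PTHREAD_MUTEX_INITIALIZER; /* ~WALInsertLock */
    static pthread_mutex_t write_lock  = PTHREAD_MUTEX_INITIALIZER; /* ~WALWriteLock  */

    /* push one full buffer page out while advancing the insert pointer */
    static void advance_buffer(int walfd, const char *page, int len)
    {
        pthread_mutex_lock(&insert_lock);
        pthread_mutex_lock(&write_lock);
        write(walfd, page, len);     /* cheap: lands in the OS cache only */
        pthread_mutex_unlock(&write_lock);
        pthread_mutex_unlock(&insert_lock);

        /* the expensive disk wait happens with the locks released;
           under wal_sync_method = open_sync it would happen inside them */
        fdatasync(walfd);
    }

    int main(void)
    {
        char page[8192] = {0};
        int fd = open("/var/tmp/wal.out", O_RDWR | O_CREAT, 0600);

        advance_buffer(fd, page, sizeof(page));
        close(fd);
        unlink("/var/tmp/wal.out");
        return 0;
    }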
Mark Kirkwood wrote:
This is a well-worn thread title - apologies, but these results seemed
interesting, and hopefully useful in the quest to get better performance
on Solaris:
I was curious to see if the rather uninspiring pgbench performance
obtained from a Sun 280R (see General: ATA