Added to TODO:
* Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options
* Allow multiple blocks to be written to WAL with one write()
Bruce Momjian [EMAIL PROTECTED] writes:
It is hard for me to imagine O_* being slower than fsync(),
Not hard at all --- if
It's great as long as you never block, but it sucks for making things
wait, because the wait interval will be some multiple of 10 msec rather
than just the time till the lock comes free.
On the AIX platform usleep (3) is able to really sleep microseconds without
busying the cpu when
* William K. Volkman [EMAIL PROTECTED] [010318 11:56] wrote:
The Hermit Hacker wrote:
But, with shared libraries, are you really pulling in a "whole
thread-support library"? My understanding of shared libraries (altho it
may be totally off) was that instead of pulling in a whole
The Hermit Hacker wrote:
But, with shared libraries, are you really pulling in a "whole
thread-support library"? My understanding of shared libraries (altho it
may be totally off) was that instead of pulling in a whole library, you
pulled in the bits that you needed, pretty much as you
Alfred Perlstein [EMAIL PROTECTED] writes:
Just by making a thread call libc changes personality to use thread
safe routines (I.E. add mutex locking). Use one thread feature, get
the whole set...which may not be that bad.
Actually it can be pretty bad. Locked bus cycles needed for mutex
* Tom Lane [EMAIL PROTECTED] [010318 14:55]:
Alfred Perlstein [EMAIL PROTECTED] writes:
Just by making a thread call libc changes personality to use thread
safe routines (I.E. add mutex locking). Use one thread feature, get
the whole set...which may not be that bad.
Actually it can be
* Larry Rosenman [EMAIL PROTECTED] [010318 14:17] wrote:
* Tom Lane [EMAIL PROTECTED] [010318 14:55]:
Alfred Perlstein [EMAIL PROTECTED] writes:
Just by making a thread call libc changes personality to use thread
safe routines (I.E. add mutex locking). Use one thread feature, get
the
Larry Rosenman [EMAIL PROTECTED] writes:
I can get the code compiled, but don't have the skills to generate
a test case worthy of anything
contrib/pgbench would do as a first cut.
regards, tom lane
Hello Alfred,
Friday, March 16, 2001, 3:21:09 PM, you wrote:
AP * Xu Yifeng [EMAIL PROTECTED] [010315 22:25] wrote:
Could anyone consider fork a syncer process to sync data to disk ?
build a shared sync queue, when a daemon process want to do sync after
write() is called, just put a sync
* Xu Yifeng [EMAIL PROTECTED] [010316 01:15] wrote:
Hello Alfred,
Friday, March 16, 2001, 3:21:09 PM, you wrote:
AP * Xu Yifeng [EMAIL PROTECTED] [010315 22:25] wrote:
Could anyone consider fork a syncer process to sync data to disk ?
build a shared sync queue, when a daemon process
Okay ... we can fall back to O_FSYNC if we don't see either of the
others. No problem. Any other weird cases out there? I think Andreas
might've muttered something about AIX but I'm not sure now.
You can safely use O_DSYNC on AIX, the only special on AIX is,
that it does not make a speed
Could anyone consider fork a syncer process to sync data to disk ?
build a shared sync queue, when a daemon process want to do sync after
write() is called, just put a sync request to the queue. this can release
process from blocked on writing as soon as possible. multiple sync
request
* Bruce Momjian [EMAIL PROTECTED] [010316 07:11] wrote:
Could anyone consider fork a syncer process to sync data to disk ?
build a shared sync queue, when a daemon process want to do sync after
write() is called, just put a sync request to the queue. this can release
process from
From: "Bruce Momjian" [EMAIL PROTECTED]
Could anyone consider fork a syncer process to sync data to disk ?
build a shared sync queue, when a daemon process want to do sync after
write() is called, just put a sync request to the queue. this can
release
process from blocked on writing
Alfred Perlstein [EMAIL PROTECTED] writes:
couldn't the syncer process cache opened files? is there any problem I
didn't consider ?
1) IPC latency, the amount of time it takes to call fsync will
increase by at least two context switches.
2) a working set (number of files needed to be
* Tom Lane [EMAIL PROTECTED] [010316 08:16] wrote:
Alfred Perlstein [EMAIL PROTECTED] writes:
couldn't the syncer process cache opened files? is there any problem I
didn't consider ?
1) IPC latency, the amount of time it takes to call fsync will
increase by at least two context
Alfred Perlstein [EMAIL PROTECTED] writes:
definitely need before considering this is to replace the existing
spinlock mechanism with something more efficient.
What sort of problems are you seeing with the spinlock code?
It's great as long as you never block, but it sucks for making things
I was wondering if the multiple writes performed to the
XLOG could be grouped into one write().
That would require fairly major restructuring of xlog.c, which I don't
Restructuring? Why? It's only XLogWrite() that makes writes.
want to undertake at this point in the cycle (we're trying to
On Fri, 16 Mar 2001, Tom Lane wrote:
Alfred Perlstein [EMAIL PROTECTED] writes:
definitely need before considering this is to replace the existing
spinlock mechanism with something more efficient.
What sort of problems are you seeing with the spinlock code?
It's great as long as you
"Mikheev, Vadim" [EMAIL PROTECTED] writes:
I was wondering if the multiple writes performed to the
XLOG could be grouped into one write().
That would require fairly major restructuring of xlog.c, which I don't
Restructuring? Why? It's only XLogWrite() that makes writes.
I was thinking of
We've speculated about using Posix semaphores instead, on platforms
For spinlocks we should use pthread mutex-es.
where those are available. I think Bruce was concerned about the
And mutex-es are more portable than semaphores.
Vadim
Larry Rosenman [EMAIL PROTECTED] writes:
But, with shared libraries, are you really pulling in a "whole
thread-support library"?
Yes, you are. On UnixWare, you need to add -Kthread, which CHANGES a LOT
of primitives to go through threads wrappers and scheduling.
Right, it's not so much
definitely need before considering this is to replace the existing
spinlock mechanism with something more efficient.
What sort of problems are you seeing with the spinlock code?
It's great as long as you never block, but it sucks for making things
I like optimistic approaches :-)
Tom Lane [EMAIL PROTECTED] writes:
Alfred Perlstein [EMAIL PROTECTED] writes:
definitely need before considering this is to replace the existing
spinlock mechanism with something more efficient.
What sort of problems are you seeing with the spinlock code?
It's great as long as you
For a log file on a busy system, this could improve throughput a lot--batch
commit. You end up with fewer than one fsync() per transaction.
This is not the issue, since that is already implemented.
The current bunching method might have room for improvement, but
there are currently fewer
Yes, you are. On UnixWare, you need to add -Kthread, which CHANGES a LOT
of primitives to go through threads wrappers and scheduling.
This was my concern; the change that happens on startup and lib calls
when thread support comes in through
Zeugswetter Andreas SB [EMAIL PROTECTED] writes:
It's great as long as you never block, but it sucks for making things
wait, because the wait interval will be some multiple of 10 msec rather
than just the time till the lock comes free.
On the AIX platform usleep (3) is able to really sleep
* Tom Lane [EMAIL PROTECTED] [010315 09:35] wrote:
BTW, are there any platforms where O_DSYNC exists but has a different
spelling?
Yes, FreeBSD only has: O_FSYNC
it doesn't have O_SYNC nor O_DSYNC.
--
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
Alfred Perlstein [EMAIL PROTECTED] writes:
* Tom Lane [EMAIL PROTECTED] [010315 09:35] wrote:
BTW, are there any platforms where O_DSYNC exists but has a different
spelling?
Yes, FreeBSD only has: O_FSYNC
it doesn't have O_SYNC nor O_DSYNC.
Okay ... we can fall back to O_FSYNC if we don't
Tom Lane writes:
I think we need to make both O_SYNC and fsync() choices available in
7.1. Two important questions need to be settled:
1. Is a compile-time flag (in config.h.in) good enough, or do we need
to make it configurable via a GUC variable? (A variable would have to
be
Peter Eisentraut [EMAIL PROTECTED] writes:
As a general rule, if something can be a run time option, as opposed to a
compile time option, then it should be. At the very least you keep the
installation simple and allow for easier experimenting.
I've been mentally working through the code, and
Based on the tests we did last week, it seems clear that on many
platforms it's a win to sync the WAL log by writing it with open()
option O_SYNC (or O_DSYNC where available) rather than
issuing explicit fsync() (resp. fdatasync()) calls.
I don't remember big difference in using fsync or
* Tom Lane [EMAIL PROTECTED] [010315 11:07] wrote:
"Mikheev, Vadim" [EMAIL PROTECTED] writes:
... I would either
use fsync as default or don't deal with O_SYNC at all.
But if O_DSYNC is defined and O_DSYNC != O_SYNC then we should
use O_DSYNC by default.
Hm. We could do that
Alfred Perlstein writes:
Sorry, what's a GUC? :)
Grand Unified Configuration system
It's basically a cute name for the achievement that there's now a single
name space and interface for (almost) all postmaster run time
configuration variables,
--
Peter Eisentraut [EMAIL PROTECTED]
Alfred Perlstein wrote:
* Tom Lane [EMAIL PROTECTED] [010315 11:07] wrote:
Peter, what do you think about configuration-dependent defaults for
GUC variables?
Sorry, what's a GUC? :)
Grand Unified Configuration, Peter E.'s baby.
See the thread starting at
* Tom Lane [EMAIL PROTECTED] [010315 11:45] wrote:
Alfred Perlstein [EMAIL PROTECTED] writes:
And since we're sorta on the topic of IO, I noticed that it looks
like (at least in 7.0.3) that vacuum and certain other routines
read files in reverse order.
Vacuum does that because it's
Based on the tests we did last week, it seems clear that on many
platforms it's a win to sync the WAL log by writing it with open()
option O_SYNC (or O_DSYNC where available) rather than issuing explicit
fsync() (resp. fdatasync()) calls. In theory fsync ought to be faster,
but it seems
Bruce Momjian [EMAIL PROTECTED] writes:
Can someone explain why configure/platform-specific flags are allowed to
be added at this stage in the release, but my pgmonitor patch was
rejected?
Possibly just because Marc hasn't stomped on me quite yet ;-)
However, I can actually make a case for
I've been mentally working through the code, and see only one reason why
it might be necessary to go with a compile-time choice: suppose we see
that none of O_DSYNC, O_SYNC, O_FSYNC, [others] are defined? With the
compile-time choice it's easy: #define USE_FSYNC_FOR_WAL, and sail on.
Based on the tests we did last week, it seems clear that on many
platforms it's a win to sync the WAL log by writing it with open()
option O_SYNC (or O_DSYNC where available) rather than
issuing explicit fsync() (resp. fdatasync()) calls.
Bruce Momjian [EMAIL PROTECTED] writes:
I later read Vadim's comment that fsync() of two blocks may be faster
than two O_* writes, so I am now confused about the proper solution.
However, I think we need to pick one and make it invisible to the user.
Perhaps a compiler/config.h flag for
Original Message
On 3/15/01, 2:46:20 PM, Bruce Momjian [EMAIL PROTECTED] wrote
regarding Re: [HACKERS] Allowing WAL fsync to be done
Tom Lane writes:
"Mikheev, Vadim" [EMAIL PROTECTED] writes:
... I would either
use fsync as default or don't deal with O_SYNC at all.
But if O_DSYNC is defined and O_DSYNC != O_SYNC then we should
use O_DSYNC by default.
Hm. We could do that reasonably painlessly as a compile-time
Tom Lane writes:
However, I can actually make a case for this: we are flushing out
performance bugs in a new feature, ie WAL.
I haven't followed the jungle of numbers too closely.
Is it not the case that WAL + fsync is still faster than 7.0 + fsync and
WAL/no fsync is still faster than
Peter Eisentraut [EMAIL PROTECTED] writes:
Peter, what do you think about configuration-dependent defaults for
GUC variables?
We have plenty of those already, but we should avoid a variable whose
specification is:
"The default is 'on' if your system defines one of the macros O_SYNC,
I believe that we don't know enough yet to nail down a hard-wired
decision. Vadim's idea of preferring O_DSYNC if it appears to be
different from O_SYNC is a good first cut, but I think we'd
better make it possible to override that, at least for testing purposes.
So let's leave fsync as
Tom Lane writes:
I've been mentally working through the code, and see only one reason why
it might be necessary to go with a compile-time choice: suppose we see
that none of O_DSYNC, O_SYNC, O_FSYNC, [others] are defined?
We postulate that one of those has to exist. Alternatively, you make
"The default is 'on' if your system defines one of the macros O_SYNC,
O_DSYNC, O_FSYNC, and if O_SYNC and O_DSYNC are distinct, otherwise the
default is 'off'."
The net result of this would be that the average user would have
absolutely no clue what the default on his machine is.
Peter Eisentraut [EMAIL PROTECTED] writes:
We postulate that one of those has to exist. Alternatively, you make the
option read
wal_sync_method = fsync | open_sync
In the "parse_hook" for the parameter you if #ifdef out 'open_sync' as a
valid option if none of those exist, so a user will
Tom Lane writes:
wal_sync_method = fsync | fdatasync | open_sync | open_datasync
A small problem is that I don't want to be doing multiple strcasecmp's
to figure out what to do in xlog.c.
This should be efficient:
switch(lower(string[0]) + lower(string[5]))
{
case 'f': /*
* Mikheev, Vadim [EMAIL PROTECTED] [010315 13:52] wrote:
I believe that we don't know enough yet to nail down a hard-wired
decision. Vadim's idea of preferring O_DSYNC if it appears to be
different from O_SYNC is a good first cut, but I think we'd
better make it possible to override
Alfred Perlstein [EMAIL PROTECTED] writes:
How many files need to be fsync'd?
Only one.
If it's more than one, what might work is using mmap() to map the
files in adjacent areas, then calling msync() on the entire range,
this would allow you to batch fsync the data.
Interesting thought,
Peter Eisentraut [EMAIL PROTECTED] writes:
switch(lower(string[0]) + lower(string[5]))
{
case 'f': /* fsync */
case 'f' + 's': /* fdatasync */
case 'o' + 's': /* open_sync */
case 'o' + 'd': /* open_datasync */
}
Although ugly, it should serve as a readable
* Tom Lane [EMAIL PROTECTED] [010315 14:54] wrote:
Alfred Perlstein [EMAIL PROTECTED] writes:
How many files need to be fsync'd?
Only one.
If it's more than one, what might work is using mmap() to map the
files in adjacent areas, then calling msync() on the entire range,
this would
Bruce Momjian wrote:
snip
No one will ever do the proper timing tests to know which is better except us.
Hi Bruce,
I believe in the future that anyone doing serious benchmark tests before
large-scale implementation will indeed be testing things like this.
There will also be people/companies
Bruce Momjian [EMAIL PROTECTED] writes:
For example, Tom had a nice fsync test program. Why can't we run that
on various platforms and collect the results, then make a decision on
the best default.
Mainly because (a) there's not enough time before release, and (b) that
test program was far
I was wondering if the multiple writes performed to the XLOG could be
grouped into one write(). Seems everyone agrees:
fdatasync/O_DSYNC is better than plain fsync/O_SYNC
and the O_* flags are better than fsync() if we are doing only one write
before every fsync. It seems the
Bruce Momjian [EMAIL PROTECTED] writes:
I was wondering if the multiple writes performed to the XLOG could be
grouped into one write().
That would require fairly major restructuring of xlog.c, which I don't
want to undertake at this point in the cycle (we're trying to push out
a release
Bruce Momjian [EMAIL PROTECTED] writes:
OK, but the point of adding all those configuration options was to allow
us to figure out which was faster. If you can do the code so we no
longer need to know the answer of which is best, why bother adding the
config options.
How in the world
Bruce Momjian [EMAIL PROTECTED] writes:
My question was what are we needing to test? If we can do only single writes
to the log, don't we prefer O_* to fsync, and the O_D* options over
plain O_*? Am I confused?
I don't think we have enough data to conclude that with any certainty.
Bruce Momjian [EMAIL PROTECTED] writes:
It is hard for me to imagine O_* being slower than fsync(),
Not hard at all --- if we're writing multiple xlog blocks per
transaction, then O_* constrains the sequence of operations more
than we really want. Changing xlog.c to combine writes as much
as
Hello Tom,
Friday, March 16, 2001, 6:54:22 AM, you wrote:
TL Alfred Perlstein [EMAIL PROTECTED] writes:
How many files need to be fsync'd?
TL Only one.
If it's more than one, what might work is using mmap() to map the
files in adjacent areas, then calling msync() on the entire range,
this
* Xu Yifeng [EMAIL PROTECTED] [010315 22:25] wrote:
Hello Tom,
Friday, March 16, 2001, 6:54:22 AM, you wrote:
TL Alfred Perlstein [EMAIL PROTECTED] writes:
How many files need to be fsync'd?
TL Only one.
If it's more than one, what might work is using mmap() to map the
files in