Added to TODO:
* Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options
* Allow multiple blocks to be written to WAL with one write()
Zeugswetter Andreas SB <[EMAIL PROTECTED]> writes:
>> HPUX has usleep, but the man page says
>>
>> The usleep() function is included for its historical usage. The
>> setitimer() function is preferred over this function.
> I doubt that setitimer has microsecond precision on HPUX.
Well, if you ...
> >> It's great as long as you never block, but it sucks for making things
> >> wait, because the wait interval will be some multiple of 10 msec rather
> >> than just the time till the lock comes free.
>
> > On the AIX platform usleep (3) is able to really sleep microseconds without
> > busying ...
Larry Rosenman <[EMAIL PROTECTED]> writes:
> I can get the code compiled, but don't have the skills to generate
> a test case worthy of anything
contrib/pgbench would do as a first cut.
regards, tom lane
---(end of broadcast)--
Alfred Perlstein <[EMAIL PROTECTED]> writes:
>> Just by making a thread call libc changes personality to use thread
>> safe routines (I.E. add mutex locking). Use one thread feature, get
>> the whole set...which may not be that bad.
> Actually it can be pretty bad. Locked bus cycles needed for ...
The Hermit Hacker wrote:
>>
> But, with shared libraries, are you really pulling in a "whole
> thread-support library"? My understanding of shared libraries (altho it
> may be totally off) was that instead of pulling in a whole library, you
> pulled in the bits that you needed, pretty much as you ...
> On 3/16/01, 11:10:34 AM, The Hermit Hacker <[EMAIL PROTECTED]> wrote
> regarding Re: Re[4]: [HACKERS] Allowing WAL fsync to be done via O_SYNC :
> Yes, you are. On UnixWare, you need to add -Kthread, which CHANGES a LOT
> of primitives to go through threads wrappers and scheduling.
This was my concern; the change that happens on startup and lib calls
when thread support comes in through ...
> For a log file on a busy system, this could improve throughput a lot--batch
> commit. You end up with fewer than one fsync() per transaction.
This is not the issue, since that is already implemented.
The current bunching method might have room for improvement, but
there are currently fewer fsyncs ...
I like the optimistic approach.
Larry Rosenman <[EMAIL PROTECTED]> writes:
>> But, with shared libraries, are you really pulling in a "whole
>> thread-support library"?
> Yes, you are. On UnixWare, you need to add -Kthread, which CHANGES a LOT
> of primitives to go through threads wrappers and scheduling.
Right, it's not so ...
> We've speculated about using Posix semaphores instead, on platforms
For spinlocks we should use pthread mutexes.
> where those are available. I think Bruce was concerned about the
And mutexes are more portable than semaphores.
Vadim
> > I was wondering if the multiple writes performed to the
> > XLOG could be grouped into one write().
>
> That would require fairly major restructuring of xlog.c, which I don't
Restructuring? Why? It's only XLogWrite() that makes writes.
> want to undertake at this point in the cycle (we're trying to push out ...
Alfred Perlstein <[EMAIL PROTECTED]> writes:
>> definitely need before considering this is to replace the existing
>> spinlock mechanism with something more efficient.
> What sort of problems are you seeing with the spinlock code?
It's great as long as you never block, but it sucks for making things
wait, because the wait interval will be some multiple of 10 msec rather
than just the time till the lock comes free.
Alfred Perlstein <[EMAIL PROTECTED]> writes:
>> couldn't the syncer process cache opened files? is there any problem I
>> didn't consider ?
> 1) IPC latency, the amount of time it takes to call fsync will
>increase by at least two context switches.
> 2) a working set (number of files needed ...
> > Could anyone consider forking a syncer process to sync data to disk?
> > Build a shared sync queue; when a daemon process wants to sync after
> > write() is called, it just puts a sync request on the queue. This can release
> > the process from being blocked on writing as soon as possible. Multiple sync ...
> Okay ... we can fall back to O_FSYNC if we don't see either of the
> others. No problem. Any other weird cases out there? I think Andreas
> might've muttered something about AIX but I'm not sure now.
You can safely use O_DSYNC on AIX; the only special thing on AIX is
that it does not make a speed ...
Bruce Momjian <[EMAIL PROTECTED]> writes:
> It is hard for me to imagine O_* being slower than fsync(),
Not hard at all --- if we're writing multiple xlog blocks per
transaction, then O_* constrains the sequence of operations more
than we really want. Changing xlog.c to combine writes as much
as possible ...
Bruce Momjian <[EMAIL PROTECTED]> writes:
> My question was what are we needing to test? If we can do only single writes
> to the log, don't we prefer O_* to fsync, and the O_D* options over
> plain O_*? Am I confused?
I don't think we have enough data to conclude that with any certainty.
Bruce Momjian <[EMAIL PROTECTED]> writes:
> OK, but the point of adding all those configuration options was to allow
> us to figure out which was faster. If you can do the code so we no
> longer need to know the answer of which is best, why bother adding the
> config options.
How in the world did ...
Bruce Momjian <[EMAIL PROTECTED]> writes:
> I was wondering if the multiple writes performed to the XLOG could be
> grouped into one write().
That would require fairly major restructuring of xlog.c, which I don't
want to undertake at this point in the cycle (we're trying to push out
a release candidate ...
I was wondering if the multiple writes performed to the XLOG could be
grouped into one write(). Seems everyone agrees:
fdatasync/O_DSYNC is better than plain fsync/O_SYNC
and the O_* flags are better than fsync() if we are doing only one write
before every fsync. It seems the ...
Is someone able to put together a testing-type script or sequence so
people can run this on the various platforms and then report the
results?
For example, I can setup benchmarking, (or automated testing) on various
Solaris platforms to run overnight and report the results in the
morning. I suspect ...
Bruce Momjian <[EMAIL PROTECTED]> writes:
> For example, Tom had a nice fsync test program. Why can't we run that
> on various platforms and collect the results, then make a decision on
> the best default.
Mainly because (a) there's not enough time before release, and (b) that
test program was ...
Bruce Momjian wrote:
>
> No one will ever do the proper timing tests to know which is better except us.
Hi Bruce,
I believe in the future that anyone doing serious benchmark tests before
large-scale implementation will indeed be testing things like this.
There will also be people/companies out there ...
Alfred Perlstein <[EMAIL PROTECTED]> writes:
> How many files need to be fsync'd?
Only one.
> If it's more than one, what might work is using mmap() to map the
> files in adjacent areas, then calling msync() on the entire range,
> this would allow you to batch fsync the data.
Interesting thought ...
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> switch(lower(string[0]) + lower(string[5]))
> {
> case 'f': /* fsync */
> case 'f' + 's': /* fdatasync */
> case 'o' + 's': /* open_sync */
> case 'o' + 'd': /* open_datasync */
> }
> Although ugly, it should serve as a ...
Tom Lane writes:
> wal_sync_method = fsync | fdatasync | open_sync | open_datasync
> A small problem is that I don't want to be doing multiple strcasecmp's
> to figure out what to do in xlog.c.
This should be efficient:
switch(lower(string[0]) + lower(string[5]))
{
    case 'f':       /* fsync */
    case 'f' + 's': /* fdatasync */
    case 'o' + 's': /* open_sync */
    case 'o' + 'd': /* open_datasync */
}
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> We postulate that one of those has to exist. Alternatively, you make the
> option read
> wal_sync_method = fsync | open_sync
> In the "parse_hook" for the parameter you if #ifdef out 'open_sync' as a
> valid option if none of those exist, so a user ...
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> I haven't followed the jungle of numbers too closely.
> Is it not the case that WAL + fsync is still faster than 7.0 + fsync and
> WAL/no fsync is still faster than 7.0/no fsync?
I believe the first is true in most cases. I wouldn't swear to the
second ...
> "The default is 'on' if your system defines one of the macros O_SYNC,
> O_DSYNC, O_FSYNC, and if O_SYNC and O_DSYNC are distinct, otherwise the
> default is 'off'."
>
> The net result of this would be that the average user would have
> absolutely no clue what the default on his machine is.
> I believe that we don't know enough yet to nail down a hard-wired
> decision. Vadim's idea of preferring O_DSYNC if it appears to be
> different from O_SYNC is a good first cut, but I think we'd
> better make it possible to override that, at least for testing purposes.
So let's leave fsync as ...
Peter Eisentraut <[EMAIL PROTECTED]> writes:
>> Peter, what do you think about configuration-dependent defaults for
>> GUC variables?
> We have plenty of those already, but we should avoid a variable whose
> specification is:
> "The default is 'on' if your system defines one of the macros O_SYNC,
> O_DSYNC, O_FSYNC, and if O_SYNC and O_DSYNC are distinct, otherwise the
> default is 'off'."
Tom Lane writes:
> However, I can actually make a case for this: we are flushing out
> performance bugs in a new feature, ie WAL.
I haven't followed the jungle of numbers too closely.
Is it not the case that WAL + fsync is still faster than 7.0 + fsync and
WAL/no fsync is still faster than 7.0/no fsync?
On 3/15/01, 2:46:20 PM, Bruce Momjian <[EMAIL PROTECTED]> wrote
regarding Re: [HACKERS] Allowing WAL fsync to be done via O_SYNC:
Bruce Momjian <[EMAIL PROTECTED]> writes:
> I later read Vadim's comment that fsync() of two blocks may be faster
> than two O_* writes, so I am now confused about the proper solution.
> However, I think we need to pick one and make it invisible to the user.
> Perhaps a compiler/config.h flag for ...
> > I've been mentally working through the code, and see only one reason why
> > it might be necessary to go with a compile-time choice: suppose we see
> > that none of O_DSYNC, O_SYNC, O_FSYNC, [others] are defined? With the
> > compile-time choice it's easy: #define USE_FSYNC_FOR_WAL, and sail ...
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Can someone explain why configure/platform-specific flags are allowed to
> be added at this stage in the release, but my pgmonitor patch was
> rejected?
Possibly just because Marc hasn't stomped on me quite yet ;-)
However, I can actually make a case for this: we are flushing out
performance bugs in a new feature, ie WAL.
Alfred Perlstein <[EMAIL PROTECTED]> writes:
> And since we're sorta on the topic of IO, I noticed that it looks
> like (at least in 7.0.3) that vacuum and certain other routines
> read files in reverse order.
Vacuum does that because it's trying to push tuples down from the end
into free space ...
Alfred Perlstein wrote:
> * Tom Lane <[EMAIL PROTECTED]> [010315 11:07] wrote:
> > Peter, what do you think about configuration-dependent defaults for
> > GUC variables?
> Sorry, what's a GUC? :)
Grand Unified Configuration, Peter E.'s baby.
See the thread starting at
http://www.postgresql.org
Alfred Perlstein writes:
> Sorry, what's a GUC? :)
Grand Unified Configuration system
It's basically a cute name for the achievement that there's now a single
name space and interface for (almost) all postmaster run time
configuration variables ...
--
Peter Eisentraut [EMAIL PROTECTED]
"Mikheev, Vadim" <[EMAIL PROTECTED]> writes:
> ... I would either
> use fsync as default or don't deal with O_SYNC at all.
> But if O_DSYNC is defined and O_DSYNC != O_SYNC then we should
> use O_DSYNC by default.
Hm. We could do that reasonably painlessly as a compile-time test in
xlog.c, but I ...
> Based on the tests we did last week, it seems clear than on many
> platforms it's a win to sync the WAL log by writing it with open()
> option O_SYNC (or O_DSYNC where available) rather than
> issuing explicit fsync() (resp. fdatasync()) calls.
I don't remember a big difference between using fsync or ...
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> As a general rule, if something can be a run time option, as opposed to a
> compile time option, then it should be. At the very least you keep the
> installation simple and allow for easier experimenting.
I've been mentally working through the code, and see only one reason why
it might be necessary to go with a compile-time choice ...
Tom Lane writes:
> I think we need to make both O_SYNC and fsync() choices available in
> 7.1. Two important questions need to be settled:
>
> 1. Is a compile-time flag (in config.h.in) good enough, or do we need
> to make it configurable via a GUC variable? (A variable would have to
> be postmaster ...
Alfred Perlstein <[EMAIL PROTECTED]> writes:
> * Tom Lane <[EMAIL PROTECTED]> [010315 09:35] wrote:
>> BTW, are there any platforms where O_DSYNC exists but has a different
>> spelling?
> Yes, FreeBSD only has: O_FSYNC
> it doesn't have O_SYNC nor O_DSYNC.
Okay ... we can fall back to O_FSYNC if we don't see either of the
others. No problem. Any other weird cases out there? I think Andreas
might've muttered something about AIX but I'm not sure now.
* Tom Lane <[EMAIL PROTECTED]> [010315 09:35] wrote:
>
> BTW, are there any platforms where O_DSYNC exists but has a different
> spelling?
Yes, FreeBSD only has: O_FSYNC
it doesn't have O_SYNC nor O_DSYNC.
--
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
Based on the tests we did last week, it seems clear than on many
platforms it's a win to sync the WAL log by writing it with open()
option O_SYNC (or O_DSYNC where available) rather than issuing explicit
fsync() (resp. fdatasync()) calls. In theory fsync ought to be faster,
but it seems that too ...